Storage operations optimization
Severity: medium
Several operations write a value to storage (using the `SSTORE` opcode) without actually changing it.
In the `getAndUpdateValue` function of `DelegationController` and `TokenLaunchLocker`:
```
for (uint i = sequence.firstUnprocessedMonth; i <= month; ++i) {
    sequence.value = sequence.value.add(sequence.addDiff[i]).sub(sequence.subtractDiff[i]);
    delete sequence.addDiff[i];
    delete sequence.subtractDiff[i];
}
```

In the `handleSlash` function of the `Punisher` contract, `amount` will be zero in most cases:
```
function handleSlash(address holder, uint amount) external allow("DelegationController") {
    _locked[holder] = _locked[holder].add(amount);
}
```
Resolution
Mitigated in skalenetwork/skale-manager#179
Check whether the value is unchanged and skip the storage write in that case.
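A minimal sketch of the guarded-write pattern, applied to the `getAndUpdateValue` loop quoted above (assuming the same `sequence` structure); each write and `delete` is skipped when the slot already holds the target value:
```
for (uint i = sequence.firstUnprocessedMonth; i <= month; ++i) {
    uint newValue = sequence.value.add(sequence.addDiff[i]).sub(sequence.subtractDiff[i]);
    // Only pay for an SSTORE when the value actually changes
    if (newValue != sequence.value) {
        sequence.value = newValue;
    }
    // `delete` on an already-zero slot also issues a redundant SSTORE
    if (sequence.addDiff[i] != 0) {
        delete sequence.addDiff[i];
    }
    if (sequence.subtractDiff[i] != 0) {
        delete sequence.subtractDiff[i];
    }
}
```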
Function overloading
Severity: low
Several functions in the codebase are overloaded, which makes the code less readable and increases the probability of overlooking bugs.
For example, `DelegationController` contains many `reduce` implementations:
```
function reduce(PartialDifferencesValue storage sequence, uint amount, uint month) internal returns (Fraction memory) {
    require(month.add(1) >= sequence.firstUnprocessedMonth, "Can't reduce value in the past");
    if (sequence.firstUnprocessedMonth == 0) {
        return createFraction(0);
    }
    uint value = getAndUpdateValue(sequence, month);
    if (value == 0) {
        return createFraction(0);
    }

    uint _amount = amount;
    if (value < amount) {
        _amount = value;
    }

    Fraction memory reducingCoefficient = createFraction(value.sub(_amount), value);
    reduce(sequence, reducingCoefficient, month);
    return reducingCoefficient;
}

function reduce(PartialDifferencesValue storage sequence, Fraction memory reducingCoefficient, uint month) internal {
    reduce(
        sequence,
        sequence,
        reducingCoefficient,
        month,
        false);
}

function reduce(
    PartialDifferencesValue storage sequence,
    PartialDifferencesValue storage sumSequence,
    Fraction memory reducingCoefficient,
    uint month) internal
{
    reduce(
        sequence,
        sumSequence,
        reducingCoefficient,
        month,
        true);
}

function reduce(
    PartialDifferencesValue storage sequence,
    PartialDifferencesValue storage sumSequence,
    Fraction memory reducingCoefficient,
    uint month,
    bool hasSumSequence) internal
{
    require(month.add(1) >= sequence.firstUnprocessedMonth, "Can't reduce value in the past");
    if (hasSumSequence) {
        require(month.add(1) >= sumSequence.firstUnprocessedMonth, "Can't reduce value in the past");
    }
    require(reducingCoefficient.numerator <= reducingCoefficient.denominator, "Increasing of values is not implemented");
    if (sequence.firstUnprocessedMonth == 0) {
        return;
    }
    uint value = getAndUpdateValue(sequence, month);
    if (value == 0) {
        return;
    }

    uint newValue = sequence.value.mul(reducingCoefficient.numerator).div(reducingCoefficient.denominator);
    if (hasSumSequence) {
        subtract(sumSequence, sequence.value.sub(newValue), month);
    }
    sequence.value = newValue;

    for (uint i = month.add(1); i <= sequence.lastChangedMonth; ++i) {
        uint newDiff = sequence.subtractDiff[i].mul(reducingCoefficient.numerator).div(reducingCoefficient.denominator);
        if (hasSumSequence) {
            sumSequence.subtractDiff[i] = sumSequence.subtractDiff[i].sub(sequence.subtractDiff[i].sub(newDiff));
        }
        sequence.subtractDiff[i] = newDiff;
    }
}

function reduce(
    PartialDifferences storage sequence,
    Fraction memory reducingCoefficient,
    uint month) internal
{
    require(month.add(1) >= sequence.firstUnprocessedMonth, "Can't reduce value in the past");
    require(reducingCoefficient.numerator <= reducingCoefficient.denominator, "Increasing of values is not implemented");
    if (sequence.firstUnprocessedMonth == 0) {
        return;
    }
    uint value = getAndUpdateValue(sequence, month);
    if (value == 0) {
        return;
    }

    sequence.value[month] = sequence.value[month].mul(reducingCoefficient.numerator).div(reducingCoefficient.denominator);

    for (uint i = month.add(1); i <= sequence.lastChangedMonth; ++i) {
        sequence.subtractDiff[i] = sequence.subtractDiff[i].mul(reducingCoefficient.numerator).div(reducingCoefficient.denominator);
    }
}
```
Resolution
Fixed in skalenetwork/skale-manager#181
Avoid function overloading as a general guideline; a sketch of descriptive per-variant names follows.
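A simplified sketch of the guideline, with hypothetical names and deliberately reduced bodies (the real variants operate on the storage sequences shown above); the point is that each former `reduce` overload gets a name stating what drives the reduction:
```
pragma solidity ^0.5.3;

library SequenceReducer {
    struct Fraction {
        uint numerator;
        uint denominator;
    }

    // Formerly `reduce(sequence, amount, month)`: derives a coefficient from an amount
    function computeReducingCoefficient(uint value, uint amount) internal pure returns (Fraction memory) {
        uint capped = amount > value ? value : amount;
        return Fraction(value - capped, value);
    }

    // Formerly `reduce(sequence, coefficient, month)`: applies a coefficient to a value
    function applyReducingCoefficient(uint value, Fraction memory coefficient) internal pure returns (uint) {
        return value * coefficient.numerator / coefficient.denominator;
    }
}
```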
ERC20Lockable - inconsistent locking status
Severity: low
`Vega_Token.is_tradable()` will incorrectly return `false` if the token is never manually unlocked by the owner but `unlock_time` has passed, which will automatically unlock trading.\\n```\\n/\\*\\*\\n \\* @dev locked status, only applicable before unlock\\_date\\n \\*/\\nbool public \\_is\\_locked = true;\\n\\n/\\*\\*\\n \\* @dev Modifier that only allows function to run if either token is unlocked or time has expired.\\n \\* Throws if called while token is locked.\\n \\*/\\nmodifier onlyUnlocked() {\\n require(!\\_is\\_locked || now > unlock\\_date);\\n \\_;\\n}\\n\\n/\\*\\*\\n \\* @dev Internal function that unlocks token. Can only be ran before expiration (give that it's irrelevant after)\\n \\*/\\nfunction \\_unlock() internal {\\n require(now <= unlock\\_date);\\n \\_is\\_locked = false;\\n```\\n
declare `_is_locked` as `private` instead of `public`\\ncreate a getter method that correctly returns the locking status\\n```\\nfunction \\_isLocked() internal view {\\n return !\\_is\\_locked || now > unlock\\_date;\\n}\\n```\\n\\nmake `modifier onlyUnlocked()` use the newly created getter (_isLocked())\\nmake `Vega_Token.is_tradeable()` use the newly created getter (_isLocked())\\n`_unlock()` should raise an errorcondition when called on an already unlocked contract\\nit could make sense to emit a “contract hast been unlocked” event for auditing purposes
Merkle.checkMembership allows existence proofs for the same leaf in multiple locations in the tree
Severity: high
`checkMembership` is used by several contracts to prove that transactions exist in the child chain. The function uses a `leaf`, an `index`, and a `proof` to construct a hypothetical root hash. This constructed hash is compared to the passed-in `rootHash` parameter. If the two are equivalent, the `proof` is considered valid.
The proof is performed iteratively and uses a pseudo-index (j) to determine whether the next proof element represents a “left branch” or “right branch”:
```
uint256 j = index;
// Note: We're skipping the first 32 bytes of `proof`, which holds the size of the dynamically sized `bytes`
for (uint256 i = 32; i <= proof.length; i += 32) {
    // solhint-disable-next-line no-inline-assembly
    assembly {
        proofElement := mload(add(proof, i))
    }
    if (j % 2 == 0) {
        computedHash = keccak256(abi.encodePacked(NODE_SALT, computedHash, proofElement));
    } else {
        computedHash = keccak256(abi.encodePacked(NODE_SALT, proofElement, computedHash));
    }
    j = j / 2;
}
```

If `j` is even, the computed hash is placed before the next proof element. If `j` is odd, the computed hash is placed after the next proof element. After each iteration, `j` is halved: `j = j / 2`.
Because `checkMembership` makes no requirements on the height of the tree or the size of the proof relative to the provided `index`, it is possible to pass in invalid values for `index` that prove a leaf's existence in multiple locations in the tree.
By modifying existing tests, we showed that for a tree with 3 leaves, leaf 2 can be proven to exist at indices 2, 6, and 10 using the same proof each time. The modified test can be found here: https://gist.github.com/wadeAlexC/01b60099282a026f8dc1ac85d83489fd#file-merkle-test-js-L40-L67
```
it('should accidentally allow different indices to use the same proof', async () => {
    const rootHash = this.merkleTree.root;
    const proof = this.merkleTree.getInclusionProof(leaves[2]);

    const result = await this.merkleContract.checkMembership(
        leaves[2],
        2,
        rootHash,
        proof,
    );
    expect(result).to.be.true;

    const nextResult = await this.merkleContract.checkMembership(
        leaves[2],
        6,
        rootHash,
        proof,
    );
    expect(nextResult).to.be.true;

    const nextNextResult = await this.merkleContract.checkMembership(
        leaves[2],
        10,
        rootHash,
        proof,
    );
    expect(nextNextResult).to.be.true;
});
```

Conclusion
Exit processing is meant to skip exits that have already been processed. This is implemented using an “output id” system, where each exited output should correspond to a unique id that gets flagged in the `ExitGameController` contract as it's exited. Before an exit is processed, its output id is calculated and checked against `ExitGameController`. If the output has already been exited, the exit being processed is deleted and skipped. Crucially, the output id is calculated differently for standard transactions and deposit transactions: deposit output ids factor in the transaction index.
By using the behavior described in this issue in conjunction with methods discussed in issue 5.8 and https://github.com/ConsenSys/omisego-morevp-audit-2019-10/issues/20, we showed that deposit transactions can be exited twice, using indices `0` and `2**16`. Because of the distinct output id calculation, these exits have different output ids and can be processed twice, allowing users to exit double their deposited amount.
A modified `StandardExit.load.test.js` shows that exits are successfully enqueued with a transaction index of 65536: https://gist.github.com/wadeAlexC/4ad459b7510e512bc9556e7c919e0965#file-standardexit-load-test-js-L55
Use the length of the proof to determine the maximum allowed index. The passed-in index should satisfy the following criterion: `index < 2**(proof.length/32)`. Additionally, ensure range checks on transaction position decoding are sufficiently restrictive (see https://github.com/ConsenSys/omisego-morevp-audit-2019-10/issues/20).
Corresponding issue in plasma-contracts repo: https://github.com/omisego/plasma-contracts/issues/546
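A minimal sketch of the recommended bound at the top of `checkMembership`, alongside the existing multiple-of-32 check:
```
require(proof.length % 32 == 0, "Length of Merkle proof must be a multiple of 32");

// A proof with k 32-byte elements commits to a tree with at most 2**k leaves,
// so any index at or beyond that bound cannot name a real leaf position
uint256 treeHeight = proof.length / 32;
require(index < 2 ** treeHeight, "Index does not fit in a tree of this height");
```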
Improper initialization of spending condition abstraction allows “v2 transactions” to exit using PaymentExitGame
Severity: high
`PaymentOutputToPaymentTxCondition` is an abstraction around the transaction signature check needed for many components of the exit games. Its only function, `verify`, returns `true` if one transaction (`inputTxBytes`) is spent by another transaction (`spendingTxBytes`):
```
function verify(
    bytes calldata inputTxBytes,
    uint16 outputIndex,
    uint256 inputTxPos,
    bytes calldata spendingTxBytes,
    uint16 inputIndex,
    bytes calldata signature,
    bytes calldata /*optionalArgs*/
)
    external
    view
    returns (bool)
{
    PaymentTransactionModel.Transaction memory inputTx = PaymentTransactionModel.decode(inputTxBytes);
    require(inputTx.txType == supportInputTxType, "Input tx is an unsupported payment tx type");

    PaymentTransactionModel.Transaction memory spendingTx = PaymentTransactionModel.decode(spendingTxBytes);
    require(spendingTx.txType == supportSpendingTxType, "The spending tx is an unsupported payment tx type");

    UtxoPosLib.UtxoPos memory utxoPos = UtxoPosLib.build(TxPosLib.TxPos(inputTxPos), outputIndex);
    require(
        spendingTx.inputs[inputIndex] == bytes32(utxoPos.value),
        "Spending tx points to the incorrect output UTXO position"
    );

    address payable owner = inputTx.outputs[outputIndex].owner();
    require(owner == ECDSA.recover(eip712.hashTx(spendingTx), signature), "Tx in not signed correctly");

    return true;
}
```

Verification process
The verification process is relatively straightforward. The contract performs some basic input validation, checking that the input transaction's `txType` matches `supportInputTxType`, and that the spending transaction's `txType` matches `supportSpendingTxType`. These values are set during construction.
Next, `verify` checks that the spending transaction contains an input that matches the position of one of the input transaction's outputs.
Finally, `verify` performs an EIP-712 hash on the spending transaction and ensures it is signed by the owner of the output in question.
Implications of the abstraction
The abstraction used requires several files to be visited to fully understand the function of each line of code: `ISpendingCondition`, `PaymentEIP712Lib`, `UtxoPosLib`, `TxPosLib`, `PaymentTransactionModel`, `PaymentOutputModel`, `RLPReader`, `ECDSA`, and `SpendingConditionRegistry`. Additionally, the abstraction obfuscates the underlying spending condition verification primitive where used.
Finally, understanding the abstraction requires an understanding of how `SpendingConditionRegistry` is initialized, as well as the nature of its relationship with `PlasmaFramework` and `ExitGameRegistry`. The aforementioned `txType` values, `supportInputTxType` and `supportSpendingTxType`, are set during construction. Their use in `ExitGameRegistry` seems to suggest they are intended to represent different versions of transaction types, and that separate exit game contracts are meant to handle different transaction types:
```
/**
 * @notice Registers an exit game within the PlasmaFramework. Only the maintainer can call the function.
 * @dev Emits ExitGameRegistered event to notify clients
 * @param _txType The tx type where the exit game wants to register
 * @param _contract Address of the exit game contract
 * @param _protocol The transaction protocol, either 1 for MVP or 2 for MoreVP
 */
function registerExitGame(uint256 _txType, address _contract, uint8 _protocol) public onlyFrom(getMaintainer()) {
    require(_txType != 0, "Should not register with tx type 0");
    require(_contract != address(0), "Should not register with an empty exit game address");
    require(_exitGames[_txType] == address(0), "The tx type is already registered");
    require(_exitGameToTxType[_contract] == 0, "The exit game contract is already registered");
    require(Protocol.isValidProtocol(_protocol), "Invalid protocol value");

    _exitGames[_txType] = _contract;
    _exitGameToTxType[_contract] = _txType;
    _protocols[_txType] = _protocol;
    _exitGameQuarantine.quarantine(_contract);

    emit ExitGameRegistered(_txType, _contract, _protocol);
}
```

Migration and initialization
The migration script seems to corroborate this interpretation:
code/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js:L109-L124
```
// handle spending condition
await deployer.deploy(
    PaymentOutputToPaymentTxCondition,
    plasmaFramework.address,
    PAYMENT_OUTPUT_TYPE,
    PAYMENT_TX_TYPE,
);
const paymentToPaymentCondition = await PaymentOutputToPaymentTxCondition.deployed();

await deployer.deploy(
    PaymentOutputToPaymentTxCondition,
    plasmaFramework.address,
    PAYMENT_OUTPUT_TYPE,
    PAYMENT_V2_TX_TYPE,
);
const paymentToPaymentV2Condition = await PaymentOutputToPaymentTxCondition.deployed();
```

The migration script shown above deploys two different versions of `PaymentOutputToPaymentTxCondition`. The first sets `supportInputTxType` and `supportSpendingTxType` to `PAYMENT_OUTPUT_TYPE` and `PAYMENT_TX_TYPE`, respectively. The second sets those same variables to `PAYMENT_OUTPUT_TYPE` and `PAYMENT_V2_TX_TYPE`, respectively.
The migration script then registers both of these contracts in `SpendingConditionRegistry` and calls `renounceOwnership`, permanently freezing the registered spending conditions:
code/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js:L126-L135
```
console.log(`Registering paymentToPaymentCondition (${paymentToPaymentCondition.address}) to spendingConditionRegistry`);
await spendingConditionRegistry.registerSpendingCondition(
    PAYMENT_OUTPUT_TYPE, PAYMENT_TX_TYPE, paymentToPaymentCondition.address,
);

console.log(`Registering paymentToPaymentV2Condition (${paymentToPaymentV2Condition.address}) to spendingConditionRegistry`);
await spendingConditionRegistry.registerSpendingCondition(
    PAYMENT_OUTPUT_TYPE, PAYMENT_V2_TX_TYPE, paymentToPaymentV2Condition.address,
);
await spendingConditionRegistry.renounceOwnership();
```

Finally, the migration script registers a single exit game contract in PlasmaFramework:
code/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js:L137-L143
```
// register the exit game to framework
await plasmaFramework.registerExitGame(
    PAYMENT_TX_TYPE,
    paymentExitGame.address,
    config.frameworks.protocols.moreVp,
    { from: maintainerAddress },
);
```

Note that the `_txType` is permanently associated with the deployed exit game contract (see the `registerExitGame` listing above).
Conclusion
Crucially, this association is never used. It is heavily implied that transactions with some `txType` must use a certain registered exit game contract. In fact, this is not true. When using `PaymentExitGame`, its routers, and their associated controllers, the `txType` is invariably inferred from the encoded transaction, not from the mappings in `ExitGameRegistry`. If initialized as-is, both `PAYMENT_TX_TYPE` and `PAYMENT_V2_TX_TYPE` transactions may be exited using `PaymentExitGame`, provided they exist in the plasma chain.
- Remove `PaymentOutputToPaymentTxCondition` and `SpendingConditionRegistry`.
- Implement checks for specific spending conditions directly in exit game controllers. Emphasize clarity of function: ensure it is clear when called from the top level that a signature verification check and a spending condition check are being performed.
- If the inferred relationship between `txType` and `PaymentExitGame` is correct, ensure that each `PaymentExitGame` router checks for its supported `txType`. Alternatively, the check could be made in `PaymentExitGame` itself.

Corresponding issue in plasma-contracts repo: https://github.com/omisego/plasma-contracts/issues/472
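A hedged sketch of the router-level guard, assuming a `supportedTxType` value fixed at deployment (the names here are assumptions); it reuses `PaymentTransactionModel.decode` from the listing above:
```
uint256 public supportedTxType;

function verifySupportedTxType(bytes memory txBytes) internal view {
    PaymentTransactionModel.Transaction memory decodedTx = PaymentTransactionModel.decode(txBytes);
    // Reject transactions whose declared type this exit game was not registered for
    require(decodedTx.txType == supportedTxType, "Transaction type is not supported by this exit game");
}
```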
RLPReader - Leading zeroes allow multiple valid encodings and exit / output ids for the same transaction
Severity: high
The current implementation of RLP decoding can take 2 different `txBytes` and decode them to the same structure. Specifically, the `RLPReader.toUint` method can decode 2 different types of bytes to the same number. For example:
- `0x821234` is decoded to `uint(0x1234)`
- `0x83001234` is decoded to `uint(0x1234)`
- `0xc101` can decode to `uint(1)`, even though the tag specifies a short list
- `0x01` can decode to `uint(1)`, even though the tag specifies a single byte

To explain the encoding, `0x821234` is broken down into 2 parts:
- `0x82` represents `0x80` (the string tag) + `0x02` bytes encoded
- `0x1234` are the encoded bytes

The same applies for `0x83001234`:
- `0x83` represents `0x80` (the string tag) + `0x03` bytes encoded
- `0x001234` are the encoded bytes

The current implementation casts the encoded bytes into a uint256, so these different encodings are interpreted by the contracts as the same number: `uint(0x1234) = uint(0x001234)`.
```
result := mload(memPtr)
```

Having different valid encodings for the same data is a problem because the encodings are used to create hashes that serve as unique ids. This means that multiple ids can be created for the same data, while the data should only ever have one possible id.
The encoding is used to create ids in these parts of the code:
```
return keccak256(abi.encodePacked(_txBytes, _outputIndex, _utxoPosValue));
```

```
return keccak256(abi.encodePacked(_txBytes, _outputIndex));
```

```
bytes32 hashData = keccak256(abi.encodePacked(_txBytes, _utxoPos.value));
```

```
return uint160((uint256(keccak256(_txBytes)) >> 105).setBit(151));
```

```
bytes32 leafData = keccak256(data.txBytes);
```

Other methods are affected as well, because they rely on the return values of these methods.
Enforce strict-length decoding for `txBytes`, and specify that `uint` is decoded from a 32-byte short string.
Enforcing a 32-byte length for `uint` means that `0x1234` should always be encoded as `0xa00000000000000000000000000000000000000000000000000000000000001234`:
- `0xa0` represents the tag + the length: `0x80 + 32`
- `0000000000000000000000000000000000000000000000000000000000001234` is the number, 32 bytes long with leading zeroes

Unfortunately, using leading zeroes is against the RLP spec (https://github.com/ethereum/wiki/wiki/RLP):
"positive RLP integers must be represented in big endian binary form with no leading zeroes"
This means that libraries interacting with OMG contracts that correctly and fully implement the spec will generate "incorrect" encodings for uints: encodings that will not be recognized by the OMG contracts. The fully spec-correct encoding is `0x821234`; the encoding proposed in this solution is `0xa00000000000000000000000000000000000000000000000000000000000001234`.
Similarly, enforce restrictions wherever they can be added; this is possible because of the strict structure format that needs to be encoded.
Some other potential solutions are included below. Note that these solutions are not recommended, for the reasons given.
Normalize the encoding that gets passed to methods that hash the transaction for use as an id. This can be implemented in the methods that call `keccak256` on `txBytes`, which should decode and re-encode the passed `txBytes` in order to normalize it:
- a `txBytes` is passed
- the `txBytes` is decoded into a structure: `tmpDecodedStruct = decode(txBytes)`
- the `tmpDecodedStruct` is re-encoded in order to normalize it: `normalizedTxBytes = encode(tmpDecodedStruct)`

This method is not recommended because it needs a Solidity encoder to be implemented, and a lot of gas will be used to decode and re-encode the initial `txBytes`.
Correctly and fully implement RLP decoding. This would mean enforcing all of the spec's restrictions when decoding and rejecting any encoding that doesn't fully follow the spec; for example, uints with leading zeroes should not be accepted. This is not recommended because it requires a lot of code that is not easy to write in Solidity (or EVM) and is prone to errors.
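A sketch of the strict `toUint` rule under the proposed 32-byte convention; the item fields (`len`, `memPtr`) follow common `RLPReader` implementations and are assumptions here:
```
function toStrictUint256(RLPReader.RLPItem memory item) internal pure returns (uint256 result) {
    // Expect exactly one 0xa0 tag byte (0x80 + 32) followed by a 32-byte payload
    require(item.len == 33, "Uint must be encoded as a 32-byte RLP short string");
    uint256 memPtr = item.memPtr;
    uint256 tag;
    // solhint-disable-next-line no-inline-assembly
    assembly {
        tag := byte(0, mload(memPtr))
        result := mload(add(memPtr, 1))
    }
    require(tag == 0xa0, "Uint must carry the 0xa0 length tag");
}
```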
Remove TxFinalizationModel and TxFinalizationVerifier. Implement stronger checks in Merkle
Severity: medium
`TxFinalizationVerifier` is an abstraction around the block inclusion check needed for many of the features of plasma exit games. It uses a struct defined in `TxFinalizationModel` as input to its two functions: `isStandardFinalized` and `isProtocolFinalized`.
`isStandardFinalized` returns the result of an inclusion proof. Although there are several branches, only the first is used:
```
/**
* @notice Checks whether a transaction is "standard finalized"
* @dev MVP: requires that both inclusion proof and confirm signature is checked
* @dev MoreVp: checks inclusion proof only
*/
function isStandardFinalized(Model.Data memory data) public view returns (bool) {
    if (data.protocol == Protocol.MORE_VP()) {
        return checkInclusionProof(data);
    } else if (data.protocol == Protocol.MVP()) {
        revert("MVP is not yet supported");
    } else {
        revert("Invalid protocol value");
    }
}
```

`isProtocolFinalized` is unused:
```
/**
* @notice Checks whether a transaction is "protocol finalized"
* @dev MVP: must be standard finalized
* @dev MoreVp: allows in-flight tx, so only checks for the existence of the transaction
*/
function isProtocolFinalized(Model.Data memory data) public view returns (bool) {
    if (data.protocol == Protocol.MORE_VP()) {
        return data.txBytes.length > 0;
    } else if (data.protocol == Protocol.MVP()) {
        revert("MVP is not yet supported");
    } else {
        revert("Invalid protocol value");
    }
}
```

The abstraction used introduces branching logic and requires several files to be visited to fully understand the function of each line of code: `ITxFinalizationVerifier`, `TxFinalizationModel`, `TxPosLib`, `Protocol`, `BlockController`, and `Merkle`. Additionally, the abstraction obfuscates the underlying inclusion proof primitive when used in the exit game contracts. `isStandardFinalized` is not clearly an inclusion proof, and `isProtocolFinalized` simply adds confusion.
```
function checkInclusionProof(Model.Data memory data) private view returns (bool) {
    if (data.inclusionProof.length == 0) {
        return false;
    }

    (bytes32 root,) = data.framework.blocks(data.txPos.blockNum());
    bytes32 leafData = keccak256(data.txBytes);
    return Merkle.checkMembership(
        leafData, data.txPos.txIndex(), root, data.inclusionProof
    );
}
```

By introducing the abstraction of `TxFinalizationVerifier`, the input validation performed by `Merkle` is split across multiple files, and the reasonable-seeming decision of calling `Merkle.checkMembership` directly becomes unsafe. In fact, this occurs in one location in the contracts:
```
function verifyAndDeterminePositionOfTransactionIncludedInBlock(
    bytes memory txbytes,
    UtxoPosLib.UtxoPos memory utxoPos,
    bytes32 root,
    bytes memory inclusionProof
)
    private
    pure
    returns(uint256)
{
    bytes32 leaf = keccak256(txbytes);
    require(
        Merkle.checkMembership(leaf, utxoPos.txIndex(), root, inclusionProof),
        "Transaction is not included in block of Plasma chain"
    );

    return utxoPos.value;
}
```
The following call sites rely on the abstraction today and would need to perform the checks directly:
PaymentChallengeIFEOutputSpent.verifyInFlightTransactionStandardFinalized:
```
require(controller.txFinalizationVerifier.isStandardFinalized(finalizationData), "In-flight transaction not finalized");
```

PaymentChallengeIFENotCanonical.verifyCompetingTxFinalized:
```
require(self.txFinalizationVerifier.isStandardFinalized(finalizationData), "Failed to verify the position of competing tx");
```

PaymentStartInFlightExit.verifyInputTransactionIsStandardFinalized:
```
require(exitData.controller.txFinalizationVerifier.isStandardFinalized(finalizationData),
    "Input transaction is not standard finalized");
```

If none of the above recommendations are implemented, ensure that `PaymentChallengeIFENotCanonical` uses the abstraction `TxFinalizationVerifier` so that a length check is performed on the inclusion proof.
Corresponding issue in plasma-contracts repo: https://github.com/omisego/plasma-contracts/issues/471
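A sketch of what a direct, self-contained inclusion check could look like at a call site, assuming the same using-for directives as the original library and moving the empty-proof guard into the check itself:
```
function verifyStandardFinalized(
    PlasmaFramework framework,
    bytes memory txBytes,
    TxPosLib.TxPos memory txPos,
    bytes memory inclusionProof
)
    private
    view
    returns (bool)
{
    // Guard formerly hidden inside TxFinalizationVerifier.checkInclusionProof
    require(inclusionProof.length > 0, "Inclusion proof must not be empty");

    (bytes32 root, ) = framework.blocks(txPos.blockNum());
    return Merkle.checkMembership(keccak256(txBytes), txPos.txIndex(), root, inclusionProof);
}
```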
Merkle - The implementation does not enforce inclusion of leaf nodes.
Severity: medium
An observation with the current Merkle tree implementation is that it may be possible to validate nodes other than leaves. This is done by providing `checkMembership` with a reference to a hash within the tree, rather than a leaf.
```
/**
 * @notice Checks that a leaf hash is contained in a root hash
 * @param leaf Leaf hash to verify
 * @param index Position of the leaf hash in the Merkle tree
 * @param rootHash Root of the Merkle tree
 * @param proof A Merkle proof demonstrating membership of the leaf hash
 * @return True, if the leaf hash is in the Merkle tree; otherwise, False
*/
function checkMembership(bytes32 leaf, uint256 index, bytes32 rootHash, bytes memory proof)
    internal
    pure
    returns (bool)
{
    require(proof.length % 32 == 0, "Length of Merkle proof must be a multiple of 32");

    bytes32 proofElement;
    bytes32 computedHash = leaf;
    uint256 j = index;
    // Note: We're skipping the first 32 bytes of `proof`, which holds the size of the dynamically sized `bytes`
    for (uint256 i = 32; i <= proof.length; i += 32) {
        // solhint-disable-next-line no-inline-assembly
        assembly {
            proofElement := mload(add(proof, i))
        }
        if (j % 2 == 0) {
            computedHash = keccak256(abi.encodePacked(computedHash, proofElement));
        } else {
            computedHash = keccak256(abi.encodePacked(proofElement, computedHash));
        }
        j = j / 2;
    }

    return computedHash == rootHash;
}
```

The current implementation will validate the provided "leaf" and return `true`. This is a known second-preimage problem with Merkle trees: https://en.wikipedia.org/wiki/Merkle_tree#Second_preimage_attack.
To reproduce: provide a hash from within the Merkle tree as the `leaf` argument. The index has to match the index of that node with regard to its current level in the tree, and the `rootHash` has to be the correct Merkle tree `rootHash`. The proof has to skip the necessary number of levels, because the nodes "underneath" the provided "leaf" will not be processed.
A remediation needs a fixed Merkle tree size, as well as a distinct byte prepended to each node in the tree so that leaf hashes and interior-node hashes live in separate domains. Another way would be to create a structure for Merkle nodes that marks whether a node is a leaf.
Corresponding issue in plasma-contracts repo: https://github.com/omisego/plasma-contracts/issues/425
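A sketch of the prepended-byte (domain separation) remediation, mirroring the salted variant quoted in the earlier `checkMembership` finding; the salt values are assumptions, and the leaf parameter becomes the raw leaf data so it can be hashed under its own prefix:
```
bytes1 internal constant LEAF_SALT = 0x00;
bytes1 internal constant NODE_SALT = 0x01;

function checkMembership(bytes memory leafData, uint256 index, bytes32 rootHash, bytes memory proof)
    internal
    pure
    returns (bool)
{
    require(proof.length % 32 == 0, "Length of Merkle proof must be a multiple of 32");

    // Hashing the raw leaf under a dedicated prefix means an interior-node
    // hash can never be replayed as a leaf (second-preimage protection)
    bytes32 computedHash = keccak256(abi.encodePacked(LEAF_SALT, leafData));
    bytes32 proofElement;
    uint256 j = index;

    for (uint256 i = 32; i <= proof.length; i += 32) {
        // solhint-disable-next-line no-inline-assembly
        assembly {
            proofElement := mload(add(proof, i))
        }
        if (j % 2 == 0) {
            computedHash = keccak256(abi.encodePacked(NODE_SALT, computedHash, proofElement));
        } else {
            computedHash = keccak256(abi.encodePacked(NODE_SALT, proofElement, computedHash));
        }
        j = j / 2;
    }

    return computedHash == rootHash;
}
```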
Maintainer can bypass exit game quarantine by registering not-yet-deployed contracts
Severity: medium
The plasma framework uses an `ExitGameRegistry` to allow the maintainer to add new exit games after deployment. An exit game is any arbitrary contract. In order to prevent the maintainer from adding malicious exit games that steal user funds, the framework uses a "quarantine" system whereby newly registered exit games have restricted permissions until their quarantine period has expired. The quarantine period is by default `3 * minExitPeriod` and is intended to facilitate auditing of the new exit game's functionality by the plasma users.
However, by registering an exit game at a contract which has not yet been deployed, the maintainer can prevent plasma users from auditing the game until the quarantine period has expired. After the quarantine period has expired, the maintainer can deploy the malicious exit game and immediately steal funds.
Explanation
Exit games are registered in the following function, callable only by the plasma contract maintainer:
```
/**
 * @notice Registers an exit game within the PlasmaFramework. Only the maintainer can call the function.
 * @dev Emits ExitGameRegistered event to notify clients
 * @param _txType The tx type where the exit game wants to register
 * @param _contract Address of the exit game contract
 * @param _protocol The transaction protocol, either 1 for MVP or 2 for MoreVP
 */
function registerExitGame(uint256 _txType, address _contract, uint8 _protocol) public onlyFrom(getMaintainer()) {
    require(_txType != 0, "Should not register with tx type 0");
    require(_contract != address(0), "Should not register with an empty exit game address");
    require(_exitGames[_txType] == address(0), "The tx type is already registered");
    require(_exitGameToTxType[_contract] == 0, "The exit game contract is already registered");
    require(Protocol.isValidProtocol(_protocol), "Invalid protocol value");

    _exitGames[_txType] = _contract;
    _exitGameToTxType[_contract] = _txType;
    _protocols[_txType] = _protocol;
    _exitGameQuarantine.quarantine(_contract);

    emit ExitGameRegistered(_txType, _contract, _protocol);
}
```

Notably, the function does not check the `extcodesize` of the submitted contract. As such, the maintainer can submit the address of a contract which does not yet exist and is not auditable.
After at least `3 * minExitPeriod` seconds pass, the submitted contract has full permissions as a registered exit game and passes all checks using the `onlyFromNonQuarantinedExitGame` modifier:
```
/**
 * @notice A modifier to verify that the call is from a non-quarantined exit game
 */
modifier onlyFromNonQuarantinedExitGame() {
    require(_exitGameToTxType[msg.sender] != 0, "The call is not from a registered exit game contract");
    require(!_exitGameQuarantine.isQuarantined(msg.sender), "ExitGame is quarantined");
    _;
}
```

Additionally, the submitted contract passes checks made by external contracts using the `isExitGameSafeToUse` function:
```
/**
 * @notice Checks whether the contract is safe to use and is not under quarantine
 * @dev Exposes information about exit games quarantine
 * @param _contract Address of the exit game contract
 * @return boolean Whether the contract is safe to use and is not under quarantine
 */
function isExitGameSafeToUse(address _contract) public view returns (bool) {
    return _exitGameToTxType[_contract] != 0 && !_exitGameQuarantine.isQuarantined(_contract);
}
```

These permissions allow a registered exit game to:
Withdraw any user's tokens from ERC20Vault:
```
function withdraw(address payable receiver, address token, uint256 amount) external onlyFromNonQuarantinedExitGame {
    IERC20(token).safeTransfer(receiver, amount);
    emit Erc20Withdrawn(receiver, token, amount);
}
```

Withdraw any user's ETH from EthVault:
```
function withdraw(address payable receiver, uint256 amount) external onlyFromNonQuarantinedExitGame {
    // we do not want to block exit queue if transfer is unucessful
    // solhint-disable-next-line avoid-call-value
    (bool success, ) = receiver.call.value(amount)("");
    if (success) {
        emit EthWithdrawn(receiver, amount);
    } else {
        emit WithdrawFailed(receiver, amount);
    }
}
```

Activate and deactivate the `ExitGameController` reentrancy mutex:
```
function activateNonReentrant() external onlyFromNonQuarantinedExitGame() {
    require(!mutex, "Reentrant call");
    mutex = true;
}
```

```
function deactivateNonReentrant() external onlyFromNonQuarantinedExitGame() {
    require(mutex, "Not locked");
    mutex = false;
}
```

`enqueue` arbitrary exits:
```
function enqueue(
    uint256 vaultId,
    address token,
    uint64 exitableAt,
    TxPosLib.TxPos calldata txPos,
    uint160 exitId,
    IExitProcessor exitProcessor
)
    external
    onlyFromNonQuarantinedExitGame
    returns (uint256)
{
    bytes32 key = exitQueueKey(vaultId, token);
    require(hasExitQueue(key), "The queue for the (vaultId, token) pair is not yet added to the Plasma framework");
    PriorityQueue queue = exitsQueues[key];

    uint256 priority = ExitPriority.computePriority(exitableAt, txPos, exitId);

    queue.insert(priority);
    delegations[priority] = exitProcessor;

    emit ExitQueued(exitId, priority);
    return priority;
}
```

Flag outputs as "spent":
```
function flagOutputSpent(bytes32 _outputId) external onlyFromNonQuarantinedExitGame {
    require(_outputId != bytes32(""), "Should not flag with empty outputId");
    isOutputSpent[_outputId] = true;
}
```
`registerExitGame` should check that the `extcodesize` of the submitted contract is non-zero.
Corresponding issue in plasma-contracts repo: https://github.com/omisego/plasma-contracts/issues/410
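A minimal sketch of the check; `hasCode` is a hypothetical helper that `registerExitGame` would call alongside its existing guards, e.g. `require(hasCode(_contract), "Exit game contract must already be deployed")`:
```
function hasCode(address _contract) private view returns (bool) {
    uint256 size;
    // solhint-disable-next-line no-inline-assembly
    assembly { size := extcodesize(_contract) }
    return size > 0;
}
```
Note that `extcodesize` only proves code exists at registration time; it does not prevent registering a contract that later self-destructs, so the quarantine period remains necessary.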
EthVault - Unused state variable
Severity: low
The state variable `withdrawEntryCounter` is not used in the code.
```
uint256 private withdrawEntryCounter = 0;
```
Remove it from the contract.
ECDSA error value is not handled
Severity: low
Resolution
This was addressed in commit 32288ccff5b867a7477b4eaf3beb0587a4684d7a by adding a check that the returned value is nonzero.
The OpenZeppelin `ECDSA` library returns `address(0x00)` for many cases of malformed signatures:
```
if (uint256(s) > 0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF5D576E7357A4501DDFE92F46681B20A0) {
    return address(0);
}

if (v != 27 && v != 28) {
    return address(0);
}
```

The `PaymentOutputToPaymentTxCondition` contract does not explicitly handle this case:
```
address payable owner = inputTx.outputs[outputIndex].owner();
require(owner == ECDSA.recover(eip712.hashTx(spendingTx), signature), "Tx in not signed correctly");

return true;
```
Adding a check to handle this case will make it easier to reason about the code.
Corresponding issue in plasma-contracts repo: https://github.com/omisego/plasma-contracts/issues/454
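A sketch of the explicit check inside `verify`, splitting the recovery out of the `require`:
```
address payable owner = inputTx.outputs[outputIndex].owner();
address recovered = ECDSA.recover(eip712.hashTx(spendingTx), signature);
// ECDSA.recover returns address(0) for malformed signatures; reject that case explicitly
require(recovered != address(0), "Failed to recover a signer from the signature");
require(owner == recovered, "Tx is not signed correctly");
```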
No existence checks on framework block and timestamp reads
Severity: low
The exit game libraries make several queries to the main `PlasmaFramework` contract, where plasma block hashes and timestamps are stored. In multiple locations, the return values of these queries are not checked for existence.
PaymentStartStandardExit.setupStartStandardExitData:
```
(, uint256 blockTimestamp) = controller.framework.blocks(utxoPos.blockNum());
```

PaymentChallengeIFENotCanonical.respond:
```
(bytes32 root, ) = self.framework.blocks(utxoPos.blockNum());
```

PaymentPiggybackInFlightExit.enqueue:
```
(, uint256 blockTimestamp) = controller.framework.blocks(utxoPos.blockNum());
```

TxFinalizationVerifier.checkInclusionProof:
```
(bytes32 root,) = data.framework.blocks(data.txPos.blockNum());
```
Although none of these examples seem exploitable, adding existence checks makes it easier to reason about the code. Each query to `PlasmaFramework.blocks` should be followed by a check that the returned value is nonzero.
Corresponding issue in plasma-contracts repo: https://github.com/omisego/plasma-contracts/issues/463
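A sketch of the existence check for the root-reading case; for timestamp reads the analogous guard would test `blockTimestamp != 0` (both assumed to be zero for block heights that were never submitted):
```
(bytes32 root, ) = self.framework.blocks(utxoPos.blockNum());
// An all-zero root means no block was ever submitted at this height
require(root != bytes32(0), "There is no block at this position");
```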
BondSize - effectiveUpdateTime should be uint64
Severity: low
In `BondSize`, the mechanism to update the size of the bond has a grace period after which the new bond size becomes active.
When updating the bond size, the time is cast to a `uint64` but saved in a `uint128` variable.
```
uint128 effectiveUpdateTime;
```

```
uint64 constant public WAITING_PERIOD = 2 days;
```

```
self.effectiveUpdateTime = uint64(now) + WAITING_PERIOD;
```

There is no need to use a `uint128` to store the time if it will never occupy that much space.
Change the type of `effectiveUpdateTime` to `uint64`:
```
- uint128 effectiveUpdateTime;
+ uint64 effectiveUpdateTime;
```
PaymentExitGame contains several redundant plasmaFramework declarations
Severity: low
`PaymentExitGame` inherits from both `PaymentInFlightExitRouter` and `PaymentStandardExitRouter`. All three contracts declare and initialize their own `PlasmaFramework` variable. This pattern can be misleading and may lead to subtle issues in future versions of the code.
`PaymentExitGame` declaration:
```
PlasmaFramework private plasmaFramework;
```

`PaymentInFlightExitRouter` declaration:
```
PlasmaFramework private framework;
```

`PaymentStandardExitRouter` declaration:
```
PlasmaFramework private framework;
```

Each variable is initialized in the corresponding file's constructor.
Introduce an inherited contract common to `PaymentStandardExitRouter` and `PaymentInFlightExitRouter` with the `PlasmaFramework` variable. Make the variable internal so it is visible to inheriting contracts.
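A sketch of the shared base (the contract name is an assumption); both routers, and thus `PaymentExitGame`, would inherit it and drop their own declarations:
```
contract PlasmaFrameworkHolder {
    // internal: visible to PaymentStandardExitRouter, PaymentInFlightExitRouter,
    // and PaymentExitGame without a redundant re-declaration in each contract
    PlasmaFramework internal framework;

    constructor(PlasmaFramework _framework) public {
        framework = _framework;
    }
}
```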
Creating a proposal is not trustless in Pull Pattern
Severity: high
Usually, if someone submits a proposal and transfers some amount of tribute tokens, these tokens are transferred back if the proposal is rejected. But if the proposal is not processed before emergency processing kicks in, these tokens will not be transferred back to the proposer. This might happen if tribute token or deposit token transfers are blocked.
```
if (!emergencyProcessing) {
    require(
        proposal.tributeToken.transfer(proposal.proposer, proposal.tributeOffered),
        "failing vote token transfer failed"
    );
```

The tokens are not completely lost in that case; they now belong to the LAO shareholders, who might try to return the money. But that requires a lot of coordination and time, and everyone who ragequits during that time will take a part of those tokens with them.
Resolution
This issue no longer exists in the Pull Pattern update, because emergency processing and in-function ERC20 transfers have been removed.
A pull pattern for token transfers would solve the issue; a sketch follows.
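A minimal sketch of the pull pattern, loosely following Moloch naming (`userTokenBalances`, with SafeMath attached to `uint256`); the function names here are assumptions:
```
// Instead of pushing tokens back inside processProposal, credit an internal balance:
function addToBalance(address user, address token, uint256 amount) internal {
    userTokenBalances[user][token] = userTokenBalances[user][token].add(amount);
}

// The proposer later pulls the refund, so a blocked transfer only affects
// that proposer and can never stall proposal processing for everyone else:
function withdrawBalance(address token, uint256 amount) public {
    require(userTokenBalances[msg.sender][token] >= amount, "insufficient balance");
    userTokenBalances[msg.sender][token] = userTokenBalances[msg.sender][token].sub(amount);
    require(IERC20(token).transfer(msg.sender, amount), "token transfer failed");
}
```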
Emergency processing can be blocked in Pull Pattern
Severity: high
The main reason for the emergency processing mechanism is that there is a chance some token transfers might be blocked; for example, a sender or a receiver is on the USDC blacklist. Emergency processing avoids this problem by not transferring the tribute token back to the user (if there is any) and rejecting the proposal.
```
if (!emergencyProcessing) {
    require(
        proposal.tributeToken.transfer(proposal.proposer, proposal.tributeOffered),
        "failing vote token transfer failed"
    );
```

The problem is that there is still a deposit transfer back to the sponsor, and it could potentially be blocked too. If that happens, the proposal can't be processed and the LAO is blocked.
Implementing a pull pattern for all token withdrawals (as sketched in the previous finding) would solve the problem. The alternative solution would be to also keep the deposit tokens in the LAO, but that makes sponsoring a proposal riskier for the sponsor.
Token Overflow might result in system halt or loss of funds
Severity: high
If a token overflows, some functionality such as `processProposal` and `cancelProposal` will break due to SafeMath reverts. The overflow could happen because the supply of the token was artificially inflated to oblivion.
This issue was pointed out by Heiko Fisch in the Telegram chat.
Any function using `internalTransfer()` can result in an overflow:
```
function max(uint256 x, uint256 y) internal pure returns (uint256) {
    return x >= y ? x : y;
}
```
We recommend allowing overflow for broken or malicious tokens, to prevent a system halt or loss of funds. It should be noted that in case an overflow occurs, the balance of the token will be incorrect for all token holders in the system.
`rageKick` and `rageQuit` were fixed by not using SafeMath within the function code; however, this fix is risky and not recommended, as there are other overflows in other functions that might still result in a system halt or loss of funds.
One suggestion is having a function named `unsafeInternalTransfer()` which does not use SafeMath for the cases where overflow should be allowed; this mainly makes the intent readable in the code. It is still a risky fix, and a better solution should be planned.
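A sketch of the suggested helpers, following Moloch's `userTokenBalances` layout (the `TOTAL` constant and function names are assumptions):
```
// Deliberately avoids SafeMath so a maliciously inflated token cannot
// revert these updates and halt proposal processing or ragequits
function unsafeAddToBalance(address user, address token, uint256 amount) internal {
    userTokenBalances[user][token] += amount;
    userTokenBalances[TOTAL][token] += amount;
}

function unsafeSubtractFromBalance(address user, address token, uint256 amount) internal {
    userTokenBalances[user][token] -= amount;
    userTokenBalances[TOTAL][token] -= amount;
}

function unsafeInternalTransfer(address from, address to, address token, uint256 amount) internal {
    unsafeSubtractFromBalance(from, token, amount);
    unsafeAddToBalance(to, token, amount);
}
```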
Whitelisted tokens limit
Severity: high
The `_ragequit` function iterates over all whitelisted tokens:
```
for (uint256 i = 0; i < tokens.length; i++) {
    uint256 amountToRagequit = fairShare(userTokenBalances[GUILD][tokens[i]], sharesAndLootToBurn, initialTotalSharesAndLoot);
    // deliberately not using safemath here to keep overflows from preventing the function execution (which would break ragekicks)
    // if a token overflows, it is because the supply was artificially inflated to oblivion, so we probably don't care about it anyways
    userTokenBalances[GUILD][tokens[i]] -= amountToRagequit;
    userTokenBalances[memberAddress][tokens[i]] += amountToRagequit;
}
```

If the number of tokens is too big, the transaction can run out of gas and all funds will be blocked forever. A ballpark estimate of this number is around 300 tokens, based on current opcode gas costs and the block gas limit.
A simple solution would be to limit the number of whitelisted tokens; a sketch follows this paragraph.
If the intention is to invest in many new tokens over time and limiting the number of whitelisted tokens is not an option, it is possible to add a function that removes tokens from the whitelist. For example, a new type of proposal could be used to vote on removing a token whose balance is zero; before voting for it, shareholders should sell off the entire balance of that token.
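A sketch of the cap, following Moloch's `approvedTokens`/`tokenWhitelist` naming; the constant and its value are assumptions, chosen safely below the ~300 estimate above:
```
uint256 public constant MAX_TOKEN_WHITELIST_COUNT = 200; // assumed bound

function _whitelistToken(address token) internal {
    require(!tokenWhitelist[token], "token is already whitelisted");
    require(approvedTokens.length < MAX_TOKEN_WHITELIST_COUNT, "token whitelist is full");
    tokenWhitelist[token] = true;
    approvedTokens.push(IERC20(token));
}
```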
Whitelist proposal duplicate (Won't Fix)
Severity: low
Every time a whitelist proposal is sponsored, the contract checks that there is no other sponsored whitelist proposal with the same token. This is done to avoid proposal duplicates.
```
// whitelist proposal
if (proposal.flags[4]) {
    require(!tokenWhitelist[address(proposal.tributeToken)], "cannot already have whitelisted the token");
    require(!proposedToWhitelist[address(proposal.tributeToken)], 'already proposed to whitelist');
    proposedToWhitelist[address(proposal.tributeToken)] = true;
```

The issue is that even though a duplicate proposal cannot be sponsored, a new proposal with the same token can still be submitted.
On proposal submission, check that there is currently no sponsored proposal with the same token; a sketch follows.
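A minimal sketch of the submission-time guard, reusing the `proposedToWhitelist` flag that sponsorship already maintains (`submitWhitelistProposal` follows Moloch naming):
```
function submitWhitelistProposal(address tokenToWhitelist, string memory details) public {
    require(tokenToWhitelist != address(0), "must provide token address");
    require(!tokenWhitelist[tokenToWhitelist], "cannot already have whitelisted the token");
    // New guard: reject submission while a sponsored whitelist proposal
    // for this token is still pending
    require(!proposedToWhitelist[tokenToWhitelist], "already proposed to whitelist");
    // ... construct and store the whitelist proposal as before ...
}
```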
Moloch - bool[6] flags can be changed to a dedicated structure (Won't Fix)
Severity: low
The Moloch contract uses a structure that includes an array of bools to store a few flags about the proposal:
```
bool[6] flags; // [sponsored, processed, didPass, cancelled, whitelist, guildkick]
```

This makes reasoning about the correctness of the code complicated, because one needs to remember what each item in the flag list stands for. To make the reader's life simpler, a dedicated structure can be created that incorporates all of the required flags.
```
    bool[6] memory flags; // [sponsored, processed, didPass, cancelled, whitelist, guildkick]
```
Change the `bool[6] flags` array to a form where each flag is referred to by name rather than by a bare index.\\nFlags as bool array with enum (proposed)\\nThe contract below requires the fewest changes to the existing code: it keeps the bool array but uses an enum to handle the index, which makes clear which flag is accessed because flags are referred to by name, not by number.\\n```\\npragma solidity 0.5.15;\\n\\ncontract FlagsEnum {\\n struct Proposal {\\n address applicant;\\n uint value;\\n bool[3] flags; // [sponsored, processed, kicked]\\n }\\n \\n enum ProposalFlags {\\n SPONSORED,\\n PROCESSED,\\n KICKED\\n }\\n \\n uint proposalCount;\\n \\n mapping(uint256 => Proposal) public proposals;\\n \\n function addProposal(uint \\_value, bool \\_sponsored, bool \\_processed, bool \\_kicked) public returns (uint) {\\n Proposal memory proposal = Proposal({\\n applicant: msg.sender,\\n value: \\_value,\\n flags: [\\_sponsored, \\_processed, \\_kicked]\\n });\\n \\n proposals[proposalCount] = proposal;\\n proposalCount += 1;\\n \\n return (proposalCount);\\n }\\n \\n function getProposal(uint \\_proposalId) public view returns (address, uint, bool, bool, bool) {\\n return (\\n proposals[\\_proposalId].applicant,\\n proposals[\\_proposalId].value,\\n proposals[\\_proposalId].flags[uint(ProposalFlags.SPONSORED)],\\n proposals[\\_proposalId].flags[uint(ProposalFlags.PROCESSED)],\\n proposals[\\_proposalId].flags[uint(ProposalFlags.KICKED)]\\n );\\n }\\n}\\n```\\n
null
```\\nbool[6] flags; // [sponsored, processed, didPass, cancelled, whitelist, guildkick]\\n```\\n
Passing duplicate tokens to Redemptions and TokenRequest may have unintended consequences
medium
Both `Redemptions` and `TokenRequest` are initialized with a list of acceptable tokens to use with each app. For `Redemptions`, the list of tokens corresponds to an organization's treasury assets. For `TokenRequest`, the list of tokens corresponds to tokens accepted for payment to join an organization. Neither contract makes a uniqueness check on input tokens during initialization, which can lead to unintended behavior.\\nIn `Redemptions`, each of an organization's assets is redeemed according to the sender's proportional ownership in the org. The redemption process iterates over the `redeemableTokens` list, paying out the sender their proportion of each token listed:\\n```\\nfor (uint256 i = 0; i < redeemableTokens.length; i++) {\\n vaultTokenBalance = vault.balance(redeemableTokens[i]);\\n\\n redemptionAmount = \\_burnableAmount.mul(vaultTokenBalance).div(burnableTokenTotalSupply);\\n totalRedemptionAmount = totalRedemptionAmount.add(redemptionAmount);\\n\\n if (redemptionAmount > 0) {\\n vault.transfer(redeemableTokens[i], msg.sender, redemptionAmount);\\n }\\n}\\n```\\n\\nIf a token address is included more than once, the sender will be paid out more than once, potentially earning many times more than their proportional share of the token.\\nIn `TokenRequest`, duplicate tokens do not enable any significant deviation from expected behavior; the issue is noted there because its initialization process is similar to that of `Redemptions`.
Resolution\\nThis was addressed in Redemptions commit 2b0034206a5b9cdf239da7a51900e89d9931554f by checking `redeemableTokenAdded[token] == false` for each subsequent token added during initialization. Note that ordering is not enforced.\\nAdditionally, the issue in `TokenRequest` was addressed in commit eb4181961093439f142f2e74eb706b7f501eb5c0 by requiring that each subsequent token added during initialization has a value strictly greater than the previous token added.\\nDuring initialization in both apps, check that input token addresses are unique. One simple method is to require that token addresses are submitted in ascending order, and that each subsequent address added is greater than the one before.
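A sketch of the ascending-order check during initialization (the `_redeemableTokens` parameter name and the `ERROR_DUPLICATE_TOKEN` constant are illustrative):\\n```\\nfor (uint256 i = 0; i < \\_redeemableTokens.length; i++) {\\n    // requiring strictly ascending addresses implies uniqueness\\n    require(i == 0 || \\_redeemableTokens[i] > \\_redeemableTokens[i - 1], ERROR\\_DUPLICATE\\_TOKEN);\\n    redeemableTokens.push(\\_redeemableTokens[i]);\\n}\\n```\\n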
null
```\\nfor (uint256 i = 0; i < redeemableTokens.length; i++) {\\n vaultTokenBalance = vault.balance(redeemableTokens[i]);\\n\\n redemptionAmount = \\_burnableAmount.mul(vaultTokenBalance).div(burnableTokenTotalSupply);\\n totalRedemptionAmount = totalRedemptionAmount.add(redemptionAmount);\\n\\n if (redemptionAmount > 0) {\\n vault.transfer(redeemableTokens[i], msg.sender, redemptionAmount);\\n }\\n}\\n```\\n
The Delay app allows scripts to be paused even after execution time has elapsed
medium
The `Delay` app is used to configure a delay between when an evm script is created and when it is executed. The entry point for this process is `Delay.delayExecution`, which stores the input script with a future execution date:\\n```\\nfunction \\_delayExecution(bytes \\_evmCallScript) internal returns (uint256) {\\n uint256 delayedScriptIndex = delayedScriptsNewIndex;\\n delayedScriptsNewIndex++;\\n\\n delayedScripts[delayedScriptIndex] = DelayedScript(getTimestamp64().add(executionDelay), 0, \\_evmCallScript);\\n\\n emit DelayedScriptStored(delayedScriptIndex);\\n\\n return delayedScriptIndex;\\n}\\n```\\n\\nAn auxiliary capability of the `Delay` app is the ability to “pause” the delayed script, which sets the script's `pausedAt` value to the current block timestamp:\\n```\\nfunction pauseExecution(uint256 \\_delayedScriptId) external auth(PAUSE\\_EXECUTION\\_ROLE) {\\n require(!\\_isExecutionPaused(\\_delayedScriptId), ERROR\\_CAN\\_NOT\\_PAUSE);\\n delayedScripts[\\_delayedScriptId].pausedAt = getTimestamp64();\\n\\n emit ExecutionPaused(\\_delayedScriptId);\\n}\\n```\\n\\nA paused script cannot be executed until `resumeExecution` is called, which extends the script's `executionTime` by the amount of time paused. Essentially, the delay itself is paused:\\n```\\nfunction resumeExecution(uint256 \\_delayedScriptId) external auth(RESUME\\_EXECUTION\\_ROLE) {\\n require(\\_isExecutionPaused(\\_delayedScriptId), ERROR\\_CAN\\_NOT\\_RESUME);\\n DelayedScript storage delayedScript = delayedScripts[\\_delayedScriptId];\\n\\n uint64 timePaused = getTimestamp64().sub(delayedScript.pausedAt);\\n delayedScript.executionTime = delayedScript.executionTime.add(timePaused);\\n delayedScript.pausedAt = 0;\\n\\n emit ExecutionResumed(\\_delayedScriptId);\\n}\\n```\\n\\nA delayed script whose execution time has passed and is not currently paused should be able to be executed via the `execute` function. However, the `pauseExecution` function still allows the aforementioned script to be paused, halting execution.
Add a check to `pauseExecution` to ensure that execution is not paused if the script's execution delay has already transpired.
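A sketch of the additional check, reusing the existing error constant for brevity:\\n```\\nfunction pauseExecution(uint256 \\_delayedScriptId) external auth(PAUSE\\_EXECUTION\\_ROLE) {\\n    require(!\\_isExecutionPaused(\\_delayedScriptId), ERROR\\_CAN\\_NOT\\_PAUSE);\\n    // a script whose delay has already transpired can no longer be paused\\n    require(getTimestamp64() < delayedScripts[\\_delayedScriptId].executionTime, ERROR\\_CAN\\_NOT\\_PAUSE);\\n    delayedScripts[\\_delayedScriptId].pausedAt = getTimestamp64();\\n\\n    emit ExecutionPaused(\\_delayedScriptId);\\n}\\n```\\n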
null
```\\nfunction \\_delayExecution(bytes \\_evmCallScript) internal returns (uint256) {\\n uint256 delayedScriptIndex = delayedScriptsNewIndex;\\n delayedScriptsNewIndex++;\\n\\n delayedScripts[delayedScriptIndex] = DelayedScript(getTimestamp64().add(executionDelay), 0, \\_evmCallScript);\\n\\n emit DelayedScriptStored(delayedScriptIndex);\\n\\n return delayedScriptIndex;\\n}\\n```\\n
Misleading intentional misconfiguration possible through misuse of newToken and newBaseInstance
medium
The instantiation process for a Dandelion organization requires two separate external calls to `DandelionOrg`. There are two primary functions: `installDandelionApps`, and `newTokenAndBaseInstance`.\\n`installDandelionApps` relies on cached results from prior calls to `newTokenAndBaseInstance` and completes the initialization step for a Dandelion org.\\n`newTokenAndBaseInstance` is a wrapper around two publicly accessible functions: `newToken` and `newBaseInstance`. Called together, the functions:\\nDeploy a new `MiniMeToken` used to represent shares in an organization, and cache the address of the created token:\\n```\\n/\\*\\*\\n\\* @dev Create a new MiniMe token and save it for the user\\n\\* @param \\_name String with the name for the token used by share holders in the organization\\n\\* @param \\_symbol String with the symbol for the token used by share holders in the organization\\n\\*/\\nfunction newToken(string memory \\_name, string memory \\_symbol) public returns (MiniMeToken) {\\n MiniMeToken token = \\_createToken(\\_name, \\_symbol, TOKEN\\_DECIMALS);\\n \\_saveToken(token);\\n return token;\\n}\\n```\\n\\nCreate a new dao instance using Aragon's `BaseTemplate` contract:\\n```\\n/\\*\\*\\n\\* @dev Deploy a Dandelion Org DAO using a previously saved MiniMe token\\n\\* @param \\_id String with the name for org, will assign `[id].aragonid.eth`\\n\\* @param \\_holders Array of token holder addresses\\n\\* @param \\_stakes Array of token stakes for holders (token has 18 decimals, multiply token amount `\\* 10^18`)\\n\\* @param \\_useAgentAsVault Boolean to tell whether to use an Agent app as a more advanced form of Vault app\\n\\*/\\nfunction newBaseInstance(\\n string memory \\_id,\\n address[] memory \\_holders,\\n uint256[] memory \\_stakes,\\n uint64 \\_financePeriod,\\n bool \\_useAgentAsVault\\n)\\n public\\n{\\n \\_validateId(\\_id);\\n \\_ensureBaseSettings(\\_holders, \\_stakes);\\n\\n (Kernel dao, ACL acl) = \\_createDAO();\\n \\_setupBaseApps(dao, acl, \\_holders, \\_stakes, \\_financePeriod, \\_useAgentAsVault);\\n}\\n```\\n\\nSet up prepackaged Aragon apps, like `Vault`, `TokenManager`, and Finance:\\n```\\nfunction \\_setupBaseApps(\\n Kernel \\_dao,\\n ACL \\_acl,\\n address[] memory \\_holders,\\n uint256[] memory \\_stakes,\\n uint64 \\_financePeriod,\\n bool \\_useAgentAsVault\\n)\\n internal\\n{\\n MiniMeToken token = \\_getToken();\\n Vault agentOrVault = \\_useAgentAsVault ? \\_installDefaultAgentApp(\\_dao) : \\_installVaultApp(\\_dao);\\n TokenManager tokenManager = \\_installTokenManagerApp(\\_dao, token, TOKEN\\_TRANSFERABLE, TOKEN\\_MAX\\_PER\\_ACCOUNT);\\n Finance finance = \\_installFinanceApp(\\_dao, agentOrVault, \\_financePeriod == 0 ? DEFAULT\\_FINANCE\\_PERIOD : \\_financePeriod);\\n\\n \\_mintTokens(\\_acl, tokenManager, \\_holders, \\_stakes);\\n \\_saveBaseApps(\\_dao, finance, tokenManager, agentOrVault);\\n \\_saveAgentAsVault(\\_dao, \\_useAgentAsVault);\\n\\n}\\n```\\n\\nNote that `newToken` and `newBaseInstance` can be called separately. 
The token created in `newToken` is cached in `_saveToken`, which overwrites any previously-cached value:\\n```\\nfunction \\_saveToken(MiniMeToken \\_token) internal {\\n DeployedContracts storage senderDeployedContracts = deployedContracts[msg.sender];\\n\\n senderDeployedContracts.token = address(\\_token);\\n}\\n```\\n\\nCached tokens are retrieved in _getToken:\\n```\\nfunction \\_getToken() internal returns (MiniMeToken) {\\n DeployedContracts storage senderDeployedContracts = deployedContracts[msg.sender];\\n require(senderDeployedContracts.token != address(0), ERROR\\_MISSING\\_TOKEN\\_CONTRACT);\\n\\n MiniMeToken token = MiniMeToken(senderDeployedContracts.token);\\n return token;\\n}\\n```\\n\\nBy exploiting the overwriteable caching mechanism, it is possible to intentionally misconfigure Dandelion orgs.\\n`installDandelionApps` uses `_getToken` to associate a token with the `DandelionVoting` app. The value returned from `_getToken` depends on the sender's previous call to `newToken`, which overwrites any previously-cached value. The steps for intentional misconfiguration are as follows:\\nSender calls `newTokenAndBaseInstance`, creating token `m0` and DAO `A`.\\nThe `TokenManager` app in `A` is automatically configured to be the controller of `m0`.\\n`m0` is cached using `_saveToken`.\\nDAO `A` apps are cached for future use using `_saveBaseApps` and `_saveAgentAsVault`.\\nSender calls `newToken`, creating token `m1`, and overwriting the cache of `m0`.\\nFuture calls to `_getToken` will retrieve `m1`.\\nThe `DandelionOrg` contract is the controller of `m1`.\\nSender calls `installDandelionApps`, which installs Dandelion apps in DAO `A`\\nThe `DandelionVoting` app is configured to use the current cached token, `m1`, rather than the token associated with `A.TokenManager`, `m0`\\nFurther calls to `newBaseInstance` and `installDandelionApps` create DAO `B`, populate it with Dandelion apps, and assign `B.TokenManager` as the controller of the earlier `DandelionVoting` app token, `m0`.\\nMany different misconfigurations are possible, and some may be underhandedly abusable.
Make `newToken` and `newBaseInstance` internal so they are only callable via `newTokenAndBaseInstance`.
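A minimal sketch of the visibility change for `newToken` (the body stays as quoted above; `newBaseInstance` would change the same way):\\n```\\n// previously public; now reachable only through newTokenAndBaseInstance,\\n// so the token cache cannot be overwritten between the two deployment steps\\nfunction newToken(string memory \\_name, string memory \\_symbol) internal returns (MiniMeToken) {\\n    MiniMeToken token = \\_createToken(\\_name, \\_symbol, TOKEN\\_DECIMALS);\\n    \\_saveToken(token);\\n    return token;\\n}\\n```\\n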
null
```\\n/\\*\\*\\n\\* @dev Create a new MiniMe token and save it for the user\\n\\* @param \\_name String with the name for the token used by share holders in the organization\\n\\* @param \\_symbol String with the symbol for the token used by share holders in the organization\\n\\*/\\nfunction newToken(string memory \\_name, string memory \\_symbol) public returns (MiniMeToken) {\\n MiniMeToken token = \\_createToken(\\_name, \\_symbol, TOKEN\\_DECIMALS);\\n \\_saveToken(token);\\n return token;\\n}\\n```\\n
Delay.execute can re-enter and re-execute the same script twice
low
`Delay.execute` does not follow the “checks-effects-interactions” pattern, and deletes a delayed script only after the script is run. Because the script being run executes arbitrary external calls, a script can be created that re-enters `Delay` and executes itself multiple times before being deleted:\\n```\\n/\\*\\*\\n\\* @notice Execute the script with ID `\\_delayedScriptId`\\n\\* @param \\_delayedScriptId The ID of the script to execute\\n\\*/\\nfunction execute(uint256 \\_delayedScriptId) external {\\n require(canExecute(\\_delayedScriptId), ERROR\\_CAN\\_NOT\\_EXECUTE);\\n runScript(delayedScripts[\\_delayedScriptId].evmCallScript, new bytes(0), new address[](0));\\n\\n delete delayedScripts[\\_delayedScriptId];\\n\\n emit ExecutedScript(\\_delayedScriptId);\\n}\\n```\\n
Add the `Delay` contract address to the `runScript` blacklist, or delete the delayed script from storage before it is run.
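A sketch of the reordered function following the checks-effects-interactions pattern; the script is copied to memory and cleared from storage before any external call runs:\\n```\\nfunction execute(uint256 \\_delayedScriptId) external {\\n    require(canExecute(\\_delayedScriptId), ERROR\\_CAN\\_NOT\\_EXECUTE);\\n\\n    // effects before interactions: a re-entrant call now fails the canExecute check\\n    bytes memory evmCallScript = delayedScripts[\\_delayedScriptId].evmCallScript;\\n    delete delayedScripts[\\_delayedScriptId];\\n\\n    runScript(evmCallScript, new bytes(0), new address[](0));\\n\\n    emit ExecutedScript(\\_delayedScriptId);\\n}\\n```\\n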
null
```\\n/\\*\\*\\n\\* @notice Execute the script with ID `\\_delayedScriptId`\\n\\* @param \\_delayedScriptId The ID of the script to execute\\n\\*/\\nfunction execute(uint256 \\_delayedScriptId) external {\\n require(canExecute(\\_delayedScriptId), ERROR\\_CAN\\_NOT\\_EXECUTE);\\n runScript(delayedScripts[\\_delayedScriptId].evmCallScript, new bytes(0), new address[](0));\\n\\n delete delayedScripts[\\_delayedScriptId];\\n\\n emit ExecutedScript(\\_delayedScriptId);\\n}\\n```\\n
Delay.cancelExecution should revert on a non-existent script id
low
`cancelExecution` makes no existence check on the passed-in script ID, clearing its storage slot and emitting an event:\\n```\\n/\\*\\*\\n\\* @notice Cancel script execution with ID `\\_delayedScriptId`\\n\\* @param \\_delayedScriptId The ID of the script execution to cancel\\n\\*/\\nfunction cancelExecution(uint256 \\_delayedScriptId) external auth(CANCEL\\_EXECUTION\\_ROLE) {\\n delete delayedScripts[\\_delayedScriptId];\\n\\n emit ExecutionCancelled(\\_delayedScriptId);\\n}\\n```\\n
Add a check that the passed-in script exists.
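A sketch of the existence check, assuming that a stored script always has a non-zero `executionTime` (the `ERROR_NO_SCRIPT` constant is illustrative):\\n```\\nfunction cancelExecution(uint256 \\_delayedScriptId) external auth(CANCEL\\_EXECUTION\\_ROLE) {\\n    require(delayedScripts[\\_delayedScriptId].executionTime != 0, ERROR\\_NO\\_SCRIPT);\\n\\n    delete delayedScripts[\\_delayedScriptId];\\n\\n    emit ExecutionCancelled(\\_delayedScriptId);\\n}\\n```\\n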
null
```\\n/\\*\\*\\n\\* @notice Cancel script execution with ID `\\_delayedScriptId`\\n\\* @param \\_delayedScriptId The ID of the script execution to cancel\\n\\*/\\nfunction cancelExecution(uint256 \\_delayedScriptId) external auth(CANCEL\\_EXECUTION\\_ROLE) {\\n delete delayedScripts[\\_delayedScriptId];\\n\\n emit ExecutionCancelled(\\_delayedScriptId);\\n}\\n```\\n
ID validation check missing for installDandelionApps
low
`DandelionOrg` allows users to kickstart an Aragon organization by using a dao template. There are two primary functions to instantiate an org: `newTokenAndBaseInstance`, and `installDandelionApps`. Both functions accept a parameter, `string _id`, meant to represent an ENS subdomain that will be assigned to the new org during the instantiation process. The two functions are called independently, but depend on each other.\\nIn `newTokenAndBaseInstance`, a sanity check is performed on the `_id` parameter, which ensures the `_id` length is nonzero:\\n```\\n\\_validateId(\\_id);\\n```\\n\\nNote that the value of `_id` is otherwise unused in `newTokenAndBaseInstance`.\\nIn `installDandelionApps`, this check is missing. The check is only important in this function, since it is in `installDandelionApps` that the ENS subdomain registration is actually performed.
Use `_validateId` in `installDandelionApps` rather than `newTokenAndBaseInstance`. Since the `_id` parameter is otherwise unused in `newTokenAndBaseInstance`, it can be removed.\\nAlternatively, the value of the submitted `_id` could be cached between calls and validated in `newTokenAndBaseInstance`, similarly to `newToken`.
null
```\\n\\_validateId(\\_id);\\n```\\n
EOPBCTemplate - permission documentation inconsistencies
high
Undocumented\\nThe template documentation provides an overview of the permissions set with the template. The following permissions are set by the template contract but are not documented in the accompanying `fundraising/templates/externally_owned_presale_bonding_curve/README.md`.\\nTokenManager\\n```\\n\\_createPermissions(\\_acl, grantees, \\_fundraisingApps.bondedTokenManager, \\_fundraisingApps.bondedTokenManager.MINT\\_ROLE(), \\_owner);\\n\\_acl.createPermission(\\_fundraisingApps.marketMaker, \\_fundraisingApps.bondedTokenManager, \\_fundraisingApps.bondedTokenManager.BURN\\_ROLE(), \\_owner);\\n```\\n\\ncode/fundraising/templates/externally_owned_presale_bonding_curve/eopbc.yaml:L33-L44\\n```\\n- app: anj-token-manager\\n role: MINT\\_ROLE\\n grantee: market-maker\\n manager: owner\\n- app: anj-token-manager\\n role: MINT\\_ROLE\\n grantee: presale\\n manager: owner\\n- app: anj-token-manager\\n role: BURN\\_ROLE\\n grantee: market-maker\\n manager: owner\\n```\\n\\nInconsistent\\nThe following permissions are set by the template but are inconsistent with the outline in the documentation:\\nController\\n`owner` has the following permissions even though they are documented as not being set (see https://github.com/ConsenSys/aragonone-presale-audit-2019-11/blob/9ddae8c7fde9dea3af3982b965a441239d81f370/code/fundraising/templates/externally_owned_presale_bonding_curve/README.md#controller).\\n```\\n| App | Permission | Grantee | Manager |\\n| ---------- | ------------------------------------- | ------- | ------- |\\n| Controller | UPDATE_BENEFICIARY | NULL | NULL |\\n| Controller | UPDATE_FEES | NULL | NULL |\\n| Controller | ADD_COLLATERAL_TOKEN | Owner | Owner |\\n| Controller | REMOVE_COLLATERAL_TOKEN | Owner | Owner |\\n| Controller | UPDATE_COLLATERAL_TOKEN | Owner | Owner |\\n| Controller | UPDATE_MAXIMUM_TAP_RATE_INCREASE_PCT | NULL | NULL |\\n| Controller | UPDATE_MAXIMUM_TAP_FLOOR_DECREASE_PCT | NULL | NULL |\\n| Controller | ADD_TOKEN_TAP | NULL | NULL |\\n| Controller | UPDATE_TOKEN_TAP | NULL | NULL |\\n| Controller | OPEN_PRESALE | Owner | Owner |\\n| Controller | OPEN_TRADING | Presale | Owner |\\n| Controller | CONTRIBUTE | Any | Owner |\\n| Controller | OPEN_BUY_ORDER | Any | Owner |\\n| Controller | OPEN_SELL_ORDER | Any | Owner |\\n| Controller | WITHDRAW | NULL | NULL |\\n```\\n\\n```\\n\\_acl.createPermission(\\_owner, \\_fundraisingApps.controller, \\_fundraisingApps.controller.UPDATE\\_BENEFICIARY\\_ROLE(), \\_owner);\\n\\_acl.createPermission(\\_owner, \\_fundraisingApps.controller, \\_fundraisingApps.controller.UPDATE\\_FEES\\_ROLE(), \\_owner);\\n```\\n
Resolution\\nFixed with aragonone/[email protected]bafe100 by adding the undocumented and deviating permissions to the documentation.\\nFor transparency, all permissions set up by the template must be documented.
null
```\\n\\_createPermissions(\\_acl, grantees, \\_fundraisingApps.bondedTokenManager, \\_fundraisingApps.bondedTokenManager.MINT\\_ROLE(), \\_owner);\\n\\_acl.createPermission(\\_fundraisingApps.marketMaker, \\_fundraisingApps.bondedTokenManager, \\_fundraisingApps.bondedTokenManager.BURN\\_ROLE(), \\_owner);\\n```\\n
EOPBCTemplate - AppId of BalanceRedirectPresale should be different from AragonBlack/Presale namehash to avoid collisions
high
The template references the new presale contract with `apmNamehash` `0x5de9bbdeaf6584c220c7b7f1922383bcd8bbcd4b48832080afd9d5ebf9a04df5`. However, this namehash is already used by the aragonBlack/Presale contract. To avoid confusion and collision, a unique `apmNamehash` should be used for this variant of the contract.\\nNote that the contract referenced from an `apmNamehash` is controlled by the `ENS` resolver that is configured when deploying the template contract. Using the same namehash for both variants of the contract does not allow a single registry to simultaneously provide both variants, and might lead to confusion as to which application is actually deployed. This also raises the issue that the `ENS` registry must be verified before actually using the contract, as a malicious registry could force the template to deploy potentially malicious applications.\\naragonOne/Fundraising:\\n```\\nbytes32 private constant PRESALE\\_ID = 0x5de9bbdeaf6584c220c7b7f1922383bcd8bbcd4b48832080afd9d5ebf9a04df5;\\n```\\n\\naragonBlack/Fundraising:\\n```\\nbytes32 private constant PRESALE\\_ID = 0x5de9bbdeaf6584c220c7b7f1922383bcd8bbcd4b48832080afd9d5ebf9a04df5;\\n```\\n
Create a new `apmNamehash` for `BalanceRedirectPresale`.
null
```\\nbytes32 private constant PRESALE\\_ID = 0x5de9bbdeaf6584c220c7b7f1922383bcd8bbcd4b48832080afd9d5ebf9a04df5;\\n```\\n
BalanceRedirectPresale - Presale can be extended indefinitely Won't Fix
high
The holder of `OPEN_ROLE` can indefinitely extend the presale, even after users have contributed funds to it, by adjusting the presale period. The period can also be manipulated to prevent token trading in the MarketMaker from ever opening.\\n```\\nfunction setPeriod(uint64 \\_period) external auth(OPEN\\_ROLE) {\\n \\_setPeriod(\\_period);\\n}\\n```\\n\\n```\\nfunction \\_setPeriod(uint64 \\_period) internal {\\n require(\\_period > 0, ERROR\\_TIME\\_PERIOD\\_ZERO);\\n require(openDate == 0 || openDate + \\_period > getTimestamp64(), ERROR\\_INVALID\\_TIME\\_PERIOD);\\n period = \\_period;\\n}\\n```\\n
Do not allow to extend the presale after funds have been contributed to it or only allow period adjustments in `State.PENDING`.
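A sketch of restricting period adjustments to the pending state, before any funds can have been contributed:\\n```\\nfunction \\_setPeriod(uint64 \\_period) internal {\\n    require(\\_period > 0, ERROR\\_TIME\\_PERIOD\\_ZERO);\\n    // once the presale is funding (or later), the period is frozen\\n    require(state() == State.Pending, ERROR\\_INVALID\\_STATE);\\n    period = \\_period;\\n}\\n```\\n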
null
```\\nfunction setPeriod(uint64 \\_period) external auth(OPEN\\_ROLE) {\\n \\_setPeriod(\\_period);\\n}\\n```\\n
BalanceRedirectPresale - setPeriod uint64 overflow in validation check
medium
`setPeriod()` allows an entity with the `OPEN_ROLE` permission to set an arbitrary presale period. Providing a large enough value for `uint64 _period` can overflow the addition in the second input validation check. The result is unwanted behaviour: for relatively large values of `_period` the require may fail because the overflowed `openDate + _period` wraps around to a value at or below the current timestamp (getTimestamp64()), while for even larger values it may still succeed because the wrapped sum happens to exceed the current timestamp again. The overflow has no effect on the presale end, which is calculated from `_timeSinceOpen`.\\n```\\nfunction \\_setPeriod(uint64 \\_period) internal {\\n require(\\_period > 0, ERROR\\_TIME\\_PERIOD\\_ZERO);\\n require(openDate == 0 || openDate + \\_period > getTimestamp64(), ERROR\\_INVALID\\_TIME\\_PERIOD);\\n period = \\_period;\\n}\\n```\\n
Resolution\\nFixed with aragonone/[email protected]bafe100 by performing the addition using `SafeMath`.\\nUse `SafeMath` which is already imported to protect from overflow scenarios.
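A sketch of the checked addition, assuming a SafeMath library for uint64 (such as Aragon's SafeMath64) is attached to the type:\\n```\\nfunction \\_setPeriod(uint64 \\_period) internal {\\n    require(\\_period > 0, ERROR\\_TIME\\_PERIOD\\_ZERO);\\n    // .add() reverts on overflow instead of silently wrapping around\\n    require(openDate == 0 || openDate.add(\\_period) > getTimestamp64(), ERROR\\_INVALID\\_TIME\\_PERIOD);\\n    period = \\_period;\\n}\\n```\\n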
null
```\\nfunction \\_setPeriod(uint64 \\_period) internal {\\n require(\\_period > 0, ERROR\\_TIME\\_PERIOD\\_ZERO);\\n require(openDate == 0 || openDate + \\_period > getTimestamp64(), ERROR\\_INVALID\\_TIME\\_PERIOD);\\n period = \\_period;\\n}\\n```\\n
EOPBCTemplate - misleading method names _cacheFundraisingApps and _cacheFundraisingParams
low
The methods `_cacheFundraisingApps` and `_cacheFundraisingParams` suggest that parameters are cached as state variables in the contract similar to the multi-step deployment contract used for AragonBlack/Fundraising. However, the methods are just returning memory structs.\\n```\\nfunction \\_cacheFundraisingApps(\\n Agent \\_reserve,\\n Presale \\_presale,\\n MarketMaker \\_marketMaker,\\n Tap \\_tap,\\n Controller \\_controller,\\n TokenManager \\_tokenManager\\n)\\n internal\\n returns (FundraisingApps memory fundraisingApps)\\n{\\n fundraisingApps.reserve = \\_reserve;\\n fundraisingApps.presale = \\_presale;\\n fundraisingApps.marketMaker = \\_marketMaker;\\n fundraisingApps.tap = \\_tap;\\n fundraisingApps.controller = \\_controller;\\n fundraisingApps.bondedTokenManager = \\_tokenManager;\\n}\\n\\nfunction \\_cacheFundraisingParams(\\n address \\_owner,\\n string \\_id,\\n ERC20 \\_collateralToken,\\n MiniMeToken \\_bondedToken,\\n uint64 \\_period,\\n uint256 \\_exchangeRate,\\n uint64 \\_openDate,\\n uint256 \\_reserveRatio,\\n uint256 \\_batchBlocks,\\n uint256 \\_slippage\\n)\\n internal\\n returns (FundraisingParams fundraisingParams)\\n{\\n fundraisingParams = FundraisingParams({\\n owner: \\_owner,\\n id: \\_id,\\n collateralToken: \\_collateralToken,\\n bondedToken: \\_bondedToken,\\n period: \\_period,\\n exchangeRate: \\_exchangeRate,\\n openDate: \\_openDate,\\n reserveRatio: \\_reserveRatio,\\n batchBlocks: \\_batchBlocks,\\n slippage: \\_slippage\\n });\\n}\\n```\\n
The functions are only called once throughout the deployment process, so the structs can be created directly in the main method. Otherwise, rename the functions to properly reflect their purpose.
null
```\\nfunction \\_cacheFundraisingApps(\\n Agent \\_reserve,\\n Presale \\_presale,\\n MarketMaker \\_marketMaker,\\n Tap \\_tap,\\n Controller \\_controller,\\n TokenManager \\_tokenManager\\n)\\n internal\\n returns (FundraisingApps memory fundraisingApps)\\n{\\n fundraisingApps.reserve = \\_reserve;\\n fundraisingApps.presale = \\_presale;\\n fundraisingApps.marketMaker = \\_marketMaker;\\n fundraisingApps.tap = \\_tap;\\n fundraisingApps.controller = \\_controller;\\n fundraisingApps.bondedTokenManager = \\_tokenManager;\\n}\\n\\nfunction \\_cacheFundraisingParams(\\n address \\_owner,\\n string \\_id,\\n ERC20 \\_collateralToken,\\n MiniMeToken \\_bondedToken,\\n uint64 \\_period,\\n uint256 \\_exchangeRate,\\n uint64 \\_openDate,\\n uint256 \\_reserveRatio,\\n uint256 \\_batchBlocks,\\n uint256 \\_slippage\\n)\\n internal\\n returns (FundraisingParams fundraisingParams)\\n{\\n fundraisingParams = FundraisingParams({\\n owner: \\_owner,\\n id: \\_id,\\n collateralToken: \\_collateralToken,\\n bondedToken: \\_bondedToken,\\n period: \\_period,\\n exchangeRate: \\_exchangeRate,\\n openDate: \\_openDate,\\n reserveRatio: \\_reserveRatio,\\n batchBlocks: \\_batchBlocks,\\n slippage: \\_slippage\\n });\\n}\\n```\\n
EOPBCTemplate - inconsistent storage location declaration
low
`_cacheFundraisingParams()` does not explicitly declare the return value's memory location.\\n```\\nfunction \\_cacheFundraisingParams(\\n address \\_owner,\\n string \\_id,\\n ERC20 \\_collateralToken,\\n MiniMeToken \\_bondedToken,\\n uint64 \\_period,\\n uint256 \\_exchangeRate,\\n uint64 \\_openDate,\\n uint256 \\_reserveRatio,\\n uint256 \\_batchBlocks,\\n uint256 \\_slippage\\n)\\n internal\\n returns (FundraisingParams fundraisingParams)\\n```\\n\\n`_cacheFundraisingApps()`, by contrast, explicitly declares its return value as `memory`.\\n```\\nfunction \\_cacheFundraisingApps(\\n Agent \\_reserve,\\n Presale \\_presale,\\n MarketMaker \\_marketMaker,\\n Tap \\_tap,\\n Controller \\_controller,\\n TokenManager \\_tokenManager\\n)\\n internal\\n returns (FundraisingApps memory fundraisingApps)\\n{\\n fundraisingApps.reserve = \\_reserve;\\n fundraisingApps.presale = \\_presale;\\n fundraisingApps.marketMaker = \\_marketMaker;\\n fundraisingApps.tap = \\_tap;\\n fundraisingApps.controller = \\_controller;\\n fundraisingApps.bondedTokenManager = \\_tokenManager;\\n}\\n```\\n
Resolution\\nFixed with aragonone/[email protected]bafe100 by adding the missing storage location declaration.\\nStorage declarations should be consistent.
null
```\\nfunction \\_cacheFundraisingParams(\\n address \\_owner,\\n string \\_id,\\n ERC20 \\_collateralToken,\\n MiniMeToken \\_bondedToken,\\n uint64 \\_period,\\n uint256 \\_exchangeRate,\\n uint64 \\_openDate,\\n uint256 \\_reserveRatio,\\n uint256 \\_batchBlocks,\\n uint256 \\_slippage\\n)\\n internal\\n returns (FundraisingParams fundraisingParams)\\n```\\n
EOPBCTemplate - EtherTokenConstant is never used
low
The constant value `EtherTokenConstant.ETH` is never used.\\n```\\ncontract EOPBCTemplate is EtherTokenConstant, BaseTemplate {\\n```\\n
Resolution\\nFixed with aragonone/[email protected]bafe100 by removing the `EtherTokenConstant` dependency.\\nRemove all references to `EtherTokenConstant`.
null
```\\ncontract EOPBCTemplate is EtherTokenConstant, BaseTemplate {\\n```\\n
Staking node can be inappropriately removed from the tree
high
The following code in `OrchidDirectory.pull()` is responsible for reattaching a child from a removed tree node:\\n```\\nif (name(stake.left\\_) == key) {\\n current.right\\_ = stake.right\\_;\\n current.after\\_ = stake.after\\_;\\n} else {\\n current.left\\_ = stake.left\\_;\\n current.before\\_ = stake.before\\_;\\n}\\n```\\n\\nThe condition `name(stake.left_) == key` can never hold because `key` is the key of `stake` itself.\\nThe result of this bug is somewhat catastrophic. The child is not reattached, but it still has a link to the rest of the tree via its `parent_` pointer. This means reducing the stake of that node can underflow the ancestors' before/after amounts, leading to improper random selection or failing altogether.\\nThe node replacing the removed node also ends up with itself as a child, which violates the basic tree structure and is again likely to produce integer underflows and other failures.
As a simple fix, use `if (name(stake.left_) == name(last))`, as already suggested by the development team when this bug was first shared.\\nTwo suggestions for better long-term fixes:\\nUse a strict interface for tree operations. It should be impossible to update a node's parent without simultaneously updating that parent's child pointer.\\nAs suggested in https://github.com/ConsenSys/orchid-audit-2019-10/issues/7, simplify `pull()` to avoid this branch altogether.
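A sketch of the corrected branch, assuming `last` refers to the node being promoted into the removed slot, as in the surrounding `pull()` code:\\n```\\n// compare against the promoted node, not against the removed node's own key\\nif (name(stake.left\\_) == name(last)) {\\n    current.right\\_ = stake.right\\_;\\n    current.after\\_ = stake.after\\_;\\n} else {\\n    current.left\\_ = stake.left\\_;\\n    current.before\\_ = stake.before\\_;\\n}\\n```\\n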
null
```\\nif (name(stake.left\\_) == key) {\\n current.right\\_ = stake.right\\_;\\n current.after\\_ = stake.after\\_;\\n} else {\\n current.left\\_ = stake.left\\_;\\n current.before\\_ = stake.before\\_;\\n}\\n```\\n
Verifiers need to be pure, but it's very difficult to validate pureness
medium
After the initial audit, a “verifier” was introduced to the `OrchidLottery` code. Each `Pot` can have an associated `OrchidVerifier`. This is a contract with a `good()` function that accepts three parameters:\\n```\\nfunction good(bytes calldata shared, address target, bytes calldata receipt) external pure returns (bool);\\n```\\n\\nThe verifier returns a boolean indicating whether a given micropayment should be allowed or not. An example use case is a verifier that only allows certain `target` addresses to be paid. In this case, `shared` (a single value for a given Pot) is a merkle root, `target` is (as always) the address being paid, and `receipt` (specified by the payment recipient) is a merkle proof that the `target` address is within the merkle tree with the given root.\\nA server providing bandwidth needs to know whether to accept a certain receipt. To do that, it needs to know that at some time in the future, a call to the verifier's `good()` function with a particular set of parameters will return `true`. The proposed scheme for determining that is for the server to run the contract's code locally and ensure that it returns `true` and that it doesn't execute any EVM opcodes that would read state. This prevents, for example, a contract from returning `true` until a certain timestamp and then start returning `false`. If a contract could do that, the server would be tricked into providing bandwidth without then receiving payment.\\nUnfortunately, this simple scheme is insufficient. As a simple example, a verifier contract could be created with the `CREATE2` opcode. It could be demonstrated that it reads no state when `good()` is called. Then the contract could be destroyed by calling a function that performs a `SELFDESTRUCT`, and it could be replaced via another `CREATE2` call with different code.\\nThis could be mitigated by rejecting any verifier contract that contains the `SELFDESTRUCT` opcode, but this would also catch harmless occurrences of that particular byte. https://gist.github.com/Arachnid/e8f0638dc9f5687ff8170a95c47eac1e attempts to find `SELFDESTRUCT` opcodes but fails to account for tricks where the `SELFDESTRUCT` appears to be data but can actually be executed. (See Recmo's comment.) In general, this approach is difficult to get right and probably requires full data flow analysis to be correct.\\nAnother possible mitigation is to use a factory contract to deploy the verifiers, guaranteeing that they're not created with `CREATE2`. This should render `SELFDESTRUCT` harmless, but there's no guarantee that future forks won't introduce new vectors here.\\nFinally, requiring servers to implement potentially complex contract validation opens up potential for denial-of-service attacks. A server will have to implement mitigations to prevent repeatedly checking the same verifier or spending inordinate resources checking a maliciously crafted contract (e.g. one with high branching factors).
The verifiers add quite a bit of complexity and risk. We recommend looking for an alternative approach, such as including a small number of vetted verifiers (e.g. a merkle proof verifier) or having servers use their own “allow list” for verifiers that they trust.
null
```\\nfunction good(bytes calldata shared, address target, bytes calldata receipt) external pure returns (bool);\\n```\\n
Use consistent staker, stakee ordering in OrchidDirectory
low
```\\nfunction lift(bytes32 key, Stake storage stake, uint128 amount, address stakee, address staker) private {\\n```\\n\\n`OrchidDirectory.lift()` has a parameter `stakee` that precedes `staker`, while the rest of the code always places `staker` first. Because Solidity doesn't have named parameters, it's a good idea to use a consistent ordering to avoid mistakes.
Resolution\\nThis is fixed in OrchidProtocol/[email protected]1cfef88.\\nSwitch `lift()` to follow the “staker then stakee” ordering convention of the rest of the contract.
null
```\\nfunction lift(bytes32 key, Stake storage stake, uint128 amount, address stakee, address staker) private {\\n```\\n
In OrchidDirectory.step() and OrchidDirectory.lift(), use a signed amount Won't Fix
low
`step()` and `lift()` both accept a `uint128` parameter called `amount`. This `amount` is added to various struct fields, which are also of type `uint128`.\\nThe contract intentionally underflows this `amount` to represent negative numbers. This is roughly equivalent to using a signed integer, except that:\\nUnsigned integers aren't sign extended when they're cast to a larger integer type, so care must be taken to avoid this.\\nTools that look for integer overflow/underflow will detect this possibility as a bug. It's then hard to determine which overflows are intentional and which are not.\\n```\\nlift(key, stake, -amount, stakee, staker);\\n```\\n\\n```\\nstep(key, stake, -current.amount\\_, current.parent\\_);\\n```\\n
Resolution\\nThe variables in question are now uint256s. The amount of type casts that would be needed in case the recommended change was implemented would defeat the purpose of simplification.\\nUse `int128` instead, and ensure that amounts can never exceed the maximum `int128` value. (This is trivially achieved by limiting the total number of tokens that can exist.)
null
```\\nlift(key, stake, -amount, stakee, staker);\\n```\\n
Document that math in OrchidDirectory assumes a maximum number of tokens
low
`OrchidDirectory` relies on mathematical operations being unable to overflow, which holds only because the particular ERC20 token in use is capped at less than `2**128` tokens.\\nThe following code in `step()` assumes that no before/after amount can reach 2**128:\\n```\\nif (name(stake.left\\_) == key)\\n stake.before\\_ += amount;\\nelse\\n stake.after\\_ += amount;\\n```\\n\\nThe following code in `lift()` assumes that no staked amount (or total amount for a given stakee) can reach 2**128:\\n```\\nuint128 local = stake.amount\\_;\\nlocal += amount;\\nstake.amount\\_ = local;\\nemit Update(staker, stakee, local);\\n\\nuint128 global = stakees\\_[stakee].amount\\_;\\nglobal += amount;\\nstakees\\_[stakee].amount\\_ = global;\\n```\\n\\nThe following code in `have()` assumes that the total amount staked cannot reach 2**128:\\n```\\nreturn stake.before\\_ + stake.after\\_ + stake.amount\\_;\\n```\\n
Document this assumption in the form of code comments where potential overflows exist.\\nConsider also asserting the ERC20 token's total supply in the constructor to attempt to block using a token that violates this constraint and/or checking in `push()` that the total amount staked will remain less than `2**128`. This recommendation is in line with the mitigation proposed for issue 6.7.
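A sketch of the constructor assertion (the token type and variable names are illustrative); since a token could still mint later, the `push()` check remains useful as well:\\n```\\nconstructor(IERC20 token) public {\\n    // the tree math assumes all staked amounts fit comfortably in a uint128\\n    require(token.totalSupply() < 2 \\*\\* 128, "token supply too large");\\n    token\\_ = token;\\n}\\n```\\n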
null
```\\nif (name(stake.left\\_) == key)\\n stake.before\\_ += amount;\\nelse\\n stake.after\\_ += amount;\\n```\\n
Fees can be changed during the batch
high
Shareholders can vote to change the fees. For buy orders, fees are withdrawn immediately when an order is submitted, and the only risk is frontrunning by the shareholders' voting contract.\\nFor sell orders, fees are withdrawn when a trader claims an order and withdraws funds in the `_claimSellOrder` function:\\n```\\nif (fee > 0) {\\n reserve.transfer(\\_collateral, beneficiary, fee);\\n}\\n```\\n\\nFees can be changed between opening an order and claiming it, which makes the fees unpredictable.
Resolution\\nFixed with AragonBlack/[email protected]0941f53 by storing current fee in meta batch.\\nFees for an order should not be updated during its lifetime.
null
```\\nif (fee > 0) {\\n reserve.transfer(\\_collateral, beneficiary, fee);\\n}\\n```\\n
Bancor formula should not be updated during the batch
high
Shareholders can vote to change the Bancor formula contract. That can make the price in the current batch unpredictable.\\n```\\nfunction updateFormula(IBancorFormula \\_formula) external auth(UPDATE\\_FORMULA\\_ROLE) {\\n require(isContract(\\_formula), ERROR\\_CONTRACT\\_IS\\_EOA);\\n\\n \\_updateFormula(\\_formula);\\n}\\n```\\n
Bancor formula update should be executed in the next batch or with a timelock that is greater than batch duration.
null
```\\nfunction updateFormula(IBancorFormula \\_formula) external auth(UPDATE\\_FORMULA\\_ROLE) {\\n require(isContract(\\_formula), ERROR\\_CONTRACT\\_IS\\_EOA);\\n\\n \\_updateFormula(\\_formula);\\n}\\n```\\n
Maximum slippage shouldn't be updated for the current batch
high
When anyone submits a new order, the batch price is updated and it is checked whether the price slippage is acceptable. The problem is that the maximum slippage can be updated during the batch, so traders cannot be sure that the price is limited as they initially expected.\\n```\\nfunction \\_slippageIsValid(Batch storage \\_batch, address \\_collateral) internal view returns (bool) {\\n uint256 staticPricePPM = \\_staticPricePPM(\\_batch.supply, \\_batch.balance, \\_batch.reserveRatio);\\n uint256 maximumSlippage = collaterals[\\_collateral].slippage;\\n```\\n\\nAdditionally, if the maximum slippage is updated to a lower value, some orders that would lower the current slippage will also revert.
Save a slippage value on batch initialization and use it during the current batch.
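A sketch of snapshotting the slippage, assuming a `slippage` field is added to the `Batch` struct and written during batch initialization:\\n```\\n// during batch initialization:\\nbatch.slippage = collaterals[\\_collateral].slippage;\\n\\n// validation then reads the snapshot instead of the live setting:\\nfunction \\_slippageIsValid(Batch storage \\_batch, address \\_collateral) internal view returns (bool) {\\n    uint256 staticPricePPM = \\_staticPricePPM(\\_batch.supply, \\_batch.balance, \\_batch.reserveRatio);\\n    uint256 maximumSlippage = \\_batch.slippage;\\n    // ... rest unchanged ...\\n}\\n```\\n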
null
```\\nfunction \\_slippageIsValid(Batch storage \\_batch, address \\_collateral) internal view returns (bool) {\\n uint256 staticPricePPM = \\_staticPricePPM(\\_batch.supply, \\_batch.balance, \\_batch.reserveRatio);\\n uint256 maximumSlippage = collaterals[\\_collateral].slippage;\\n```\\n
AragonFundraisingController - an untapped address in toReset can block attempts of opening Trading after presale
high
AragonFundraisingController can be initialized with a list of token addresses `_toReset` that are to be reset when trading opens after the presale. These addresses are supposed to be the addresses of tapped tokens. However, the list needs to be known when initializing the contract, while the tapped tokens are only added after initialization by calling `addCollateralToken` (and tapped with `_rate > 0`). This can lead to an inconsistency that blocks `openTrading`.\\n```\\nfor (uint256 i = 0; i < \\_toReset.length; i++) {\\n require(\\_tokenIsContractOrETH(\\_toReset[i]), ERROR\\_INVALID\\_TOKENS);\\n toReset.push(\\_toReset[i]);\\n}\\n```\\n\\nIf a token address that is not tapped makes it into the list of `toReset` tokens, it will be impossible to call `openTrading`, as `tap.resetTappedToken(toReset[i]);` throws for untapped tokens. According to the permission setup in `FundraisingMultisigTemplate`, only the Controller can call `Marketmaker.open`.\\n```\\nfunction openTrading() external auth(OPEN\\_TRADING\\_ROLE) {\\n for (uint256 i = 0; i < toReset.length; i++) {\\n tap.resetTappedToken(toReset[i]);\\n }\\n\\n marketMaker.open();\\n}\\n```\\n
Instead of initializing the Controller with a list of tapped tokens to be reset when trading opens, add a flag to `addCollateralToken` to indicate that the token should be reset when calling `openTrading`, making sure only tapped tokens are added to this list. This also allows adding tapped tokens that are to be reset at a later point in time.
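A sketch of the flag-based approach; the parameter list is abbreviated and the `_resetOnOpenTrading` flag name is illustrative:\\n```\\nfunction addCollateralToken(\\n    address \\_collateral,\\n    uint256 \\_rate, // tap rate; 0 means the token is not tapped\\n    bool \\_resetOnOpenTrading // replaces the \\_toReset constructor list\\n    // ... remaining collateral parameters omitted ...\\n) external auth(ADD\\_COLLATERAL\\_TOKEN\\_ROLE) {\\n    // ... existing add/tap logic ...\\n    if (\\_resetOnOpenTrading) {\\n        require(\\_rate > 0, ERROR\\_INVALID\\_TOKENS); // only tapped tokens may be reset\\n        toReset.push(\\_collateral);\\n    }\\n}\\n```\\n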
null
```\\nfor (uint256 i = 0; i < \\_toReset.length; i++) {\\n require(\\_tokenIsContractOrETH(\\_toReset[i]), ERROR\\_INVALID\\_TOKENS);\\n toReset.push(\\_toReset[i]);\\n}\\n```\\n
[New] Tapped collaterals can be bought by traders Won't Fix
medium
When a trader submits a sell order, the `_openSellOrder()` function checks that there are enough tokens in `reserve` by calling the `_poolBalanceIsSufficient` function:\\n```\\nfunction \\_poolBalanceIsSufficient(address \\_collateral) internal view returns (bool) {\\n return controller.balanceOf(address(reserve), \\_collateral) >= collateralsToBeClaimed[\\_collateral];\\n}\\n```\\n\\nThe problem is that, because `collateralsToBeClaimed[_collateral]` has increased, `controller.balanceOf(address(reserve), _collateral)` could also increase. This happens because the `controller.balanceOf()` function subtracts the tapped amount from the reserve's balance.\\n```\\nfunction balanceOf(address \\_who, address \\_token) public view isInitialized returns (uint256) {\\n uint256 balance = \\_token == ETH ? \\_who.balance : ERC20(\\_token).staticBalanceOf(\\_who);\\n\\n if (\\_who == address(reserve)) {\\n return balance.sub(tap.getMaximumWithdrawal(\\_token));\\n } else {\\n return balance;\\n }\\n}\\n```\\n\\nAnd `tap.getMaximumWithdrawal(_token)` could decrease because it depends on `collateralsToBeClaimed[_collateral]`:\\n```\\nfunction \\_tappedAmount(address \\_token) internal view returns (uint256) {\\n uint256 toBeKept = controller.collateralsToBeClaimed(\\_token).add(floors[\\_token]);\\n uint256 balance = \\_token == ETH ? address(reserve).balance : ERC20(\\_token).staticBalanceOf(reserve);\\n uint256 flow = (\\_currentBatchId().sub(lastTappedAmountUpdates[\\_token])).mul(rates[\\_token]);\\n uint256 tappedAmount = tappedAmounts[\\_token].add(flow);\\n /\\*\\*\\n \\* whatever happens enough collateral should be\\n \\* kept in the reserve pool to guarantee that\\n \\* its balance is kept above the floor once\\n \\* all pending sell orders are claimed\\n \\*/\\n\\n /\\*\\*\\n \\* the reserve's balance is already below the balance to be kept\\n \\* the tapped amount should be reset to zero\\n \\*/\\n if (balance <= toBeKept) {\\n return 0;\\n }\\n\\n /\\*\\*\\n \\* the reserve's balance minus the upcoming tap flow would be below the balance to be kept\\n \\* the flow should be reduced to balance - toBeKept\\n \\*/\\n if (balance <= toBeKept.add(tappedAmount)) {\\n return balance.sub(toBeKept);\\n }\\n\\n /\\*\\*\\n \\* the reserve's balance minus the upcoming flow is above the balance to be kept\\n \\* the flow can be added to the tapped amount\\n \\*/\\n return tappedAmount;\\n}\\n```\\n\\nThat means the amount that the beneficiary can withdraw has just decreased, which should not be possible.
Ensure that `tappedAmount` cannot be decreased once updated.
null
```\\nfunction \\_poolBalanceIsSufficient(address \\_collateral) internal view returns (bool) {\\n return controller.balanceOf(address(reserve), \\_collateral) >= collateralsToBeClaimed[\\_collateral];\\n}\\n```\\n
Presale - contributionToken double cast and invalid comparison
medium
The Presale can be configured to accept `ETH` or a valid `ERC20` token. This token is stored as an `ERC20` contract type in the state variable `contributionToken`. It is then directly compared to the constant `ETH`, which is `address(0x0)`, in various locations. Additionally, the `_transfer` function double casts the token to `ERC20` if the `contributionToken` is passed as an argument.\\n`contribute` - invalid comparison of a contract type against `address(0x00)`. Even though this is accepted in Solidity `<0.5.0`, it will raise a compiler error with newer versions (>=0.5.0).\\n```\\nfunction contribute(address \\_contributor, uint256 \\_value) external payable nonReentrant auth(CONTRIBUTE\\_ROLE) {\\n require(state() == State.Funding, ERROR\\_INVALID\\_STATE);\\n\\n if (contributionToken == ETH) {\\n require(msg.value == \\_value, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n } else {\\n require(msg.value == 0, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n }\\n```\\n\\n`_transfer` - double casts the token to `ERC20` if it is the contribution token.\\n```\\nrequire(ERC20(\\_token).safeTransfer(\\_to, \\_amount), ERROR\\_TOKEN\\_TRANSFER\\_REVERTED);\\n```\\n
`contributionToken` can either be `ETH` or a valid `ERC20` contract address. It is therefore recommended to store the token as an address type instead of the more precise contract type to resolve the double cast and the invalid contract type to address comparison or cast the `ERC20` type to `address()` before comparison.
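A sketch of the address-based approach:\\n```\\naddress public contributionToken; // either the ETH constant (address(0)) or an ERC20 contract address\\n\\n// the comparison is now address-to-address, which also compiles on 0.5.x\\nif (contributionToken == ETH) {\\n    require(msg.value == \\_value, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n} else {\\n    require(msg.value == 0, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n}\\n\\n// the cast to ERC20 happens only at the call site, so no double cast is needed\\nrequire(ERC20(\\_token).safeTransfer(\\_to, \\_amount), ERROR\\_TOKEN\\_TRANSFER\\_REVERTED);\\n```\\n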
null
```\\nfunction contribute(address \\_contributor, uint256 \\_value) external payable nonReentrant auth(CONTRIBUTE\\_ROLE) {\\n require(state() == State.Funding, ERROR\\_INVALID\\_STATE);\\n\\n if (contributionToken == ETH) {\\n require(msg.value == \\_value, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n } else {\\n require(msg.value == 0, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n }\\n```\\n
Fees are not returned for buy orders if a batch is canceled Won't Fix
medium
Every trader pays a fee on each buy order, which is transferred directly to the `beneficiary`.\\n```\\nuint256 fee = \\_value.mul(buyFeePct).div(PCT\\_BASE);\\nuint256 value = \\_value.sub(fee);\\n\\n// collect fee and collateral\\nif (fee > 0) {\\n \\_transfer(\\_buyer, beneficiary, \\_collateral, fee);\\n}\\n\\_transfer(\\_buyer, address(reserve), \\_collateral, value);\\n```\\n\\nIf the batch is canceled, these fees are not returned to the traders because the contract has no access to the beneficiary account.\\nAdditionally, fees are returned to traders for all sell orders if the batch is canceled.
Consider transferring fees to a beneficiary only after the batch is over.
null
```\\nuint256 fee = \\_value.mul(buyFeePct).div(PCT\\_BASE);\\nuint256 value = \\_value.sub(fee);\\n\\n// collect fee and collateral\\nif (fee > 0) {\\n \\_transfer(\\_buyer, beneficiary, \\_collateral, fee);\\n}\\n\\_transfer(\\_buyer, address(reserve), \\_collateral, value);\\n```\\n
Tap - Controller should not be updateable
medium
Similar to issue 6.11, `Tap` allows updating the `Controller` contract it uses. The permission is currently not assigned in the `FundraisingMultisigTemplate` but might be used in custom deployments.\\n```\\n/\\*\\*\\n \\* @notice Update controller to `\\_controller`\\n \\* @param \\_controller The address of the new controller contract\\n\\*/\\nfunction updateController(IAragonFundraisingController \\_controller) external auth(UPDATE\\_CONTROLLER\\_ROLE) {\\n require(isContract(\\_controller), ERROR\\_CONTRACT\\_IS\\_EOA);\\n\\n \\_updateController(\\_controller);\\n}\\n```\\n
To avoid inconsistencies, we suggest to remove this functionality and provide a guideline on how to safely upgrade components of the system.
null
```\\n/\\*\\*\\n \\* @notice Update controller to `\\_controller`\\n \\* @param \\_controller The address of the new controller contract\\n\\*/\\nfunction updateController(IAragonFundraisingController \\_controller) external auth(UPDATE\\_CONTROLLER\\_ROLE) {\\n require(isContract(\\_controller), ERROR\\_CONTRACT\\_IS\\_EOA);\\n\\n \\_updateController(\\_controller);\\n}\\n```\\n
Tap - reserve can be updated in Tap but not in MarketMaker or Controller
medium
The address of the pool/reserve contract can be updated in `Tap` if someone owns the `UPDATE_RESERVE_ROLE` permission. The permission is currently not assigned in the template.\\nThe reserve is being referenced by multiple Contracts. `Tap` interacts with it to transfer funds to the beneficiary, `Controller` adds new protected tokens, and `MarketMaker` transfers funds when someone sells their Shareholder token.\\nUpdating reserve only in `Tap` is inconsistent with the system as the other contracts are still referencing the old reserve unless they are updated via the Aragon Application update mechanisms.\\n```\\n/\\*\\*\\n \\* @notice Update reserve to `\\_reserve`\\n \\* @param \\_reserve The address of the new reserve [pool] contract\\n\\*/\\nfunction updateReserve(Vault \\_reserve) external auth(UPDATE\\_RESERVE\\_ROLE) {\\n require(isContract(\\_reserve), ERROR\\_CONTRACT\\_IS\\_EOA);\\n\\n \\_updateReserve(\\_reserve);\\n}\\n```\\n
Remove the possibility to update reserve in `Tap` to keep the system consistent. Provide information about update mechanisms in case the reserve needs to be updated for all components.
null
```\\n/\\*\\*\\n \\* @notice Update reserve to `\\_reserve`\\n \\* @param \\_reserve The address of the new reserve [pool] contract\\n\\*/\\nfunction updateReserve(Vault \\_reserve) external auth(UPDATE\\_RESERVE\\_ROLE) {\\n require(isContract(\\_reserve), ERROR\\_CONTRACT\\_IS\\_EOA);\\n\\n \\_updateReserve(\\_reserve);\\n}\\n```\\n
Presale can be opened earlier than initially assigned date
medium
There are two ways the presale opening date can be assigned: either it is defined on initialization, or the presale starts when the `open()` function is executed.\\n```\\nif (\\_openDate != 0) {\\n \\_setOpenDate(\\_openDate);\\n}\\n```\\n\\nThe problem is that even if `openDate` is assigned a non-zero date, the presale can still be opened earlier by calling the `open()` function.\\n```\\nfunction open() external auth(OPEN\\_ROLE) {\\n require(state() == State.Pending, ERROR\\_INVALID\\_STATE);\\n\\n \\_open();\\n}\\n```\\n
Require that `openDate` is not set (0) when someone manually calls the `open()` function.
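A sketch of the guard, reusing the existing error constant for brevity:\\n```\\nfunction open() external auth(OPEN\\_ROLE) {\\n    require(state() == State.Pending, ERROR\\_INVALID\\_STATE);\\n    // a presale with a preconfigured open date cannot be opened manually\\n    require(openDate == 0, ERROR\\_INVALID\\_STATE);\\n\\n    \\_open();\\n}\\n```\\n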
null
```\\nif (\\_openDate != 0) {\\n \\_setOpenDate(\\_openDate);\\n}\\n```\\n
Presale - should not allow zero value contributions
low
The Presale accepts zero-value contributions, emitting a contribution event if none of the Aragon components (TokenManager, MiniMeToken) raises an exception.\\n```\\nfunction contribute(address \\_contributor, uint256 \\_value) external payable nonReentrant auth(CONTRIBUTE\\_ROLE) {\\n require(state() == State.Funding, ERROR\\_INVALID\\_STATE);\\n\\n if (contributionToken == ETH) {\\n require(msg.value == \\_value, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n } else {\\n require(msg.value == 0, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n }\\n\\n \\_contribute(\\_contributor, \\_value);\\n}\\n```\\n
Reject zero value `ETH` or `ERC20` contributions.
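A one-line sketch of the check at the top of `contribute`:\\n```\\nrequire(\\_value > 0, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE); // reject zero-value contributions outright\\n```\\n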
null
```\\nfunction contribute(address \\_contributor, uint256 \\_value) external payable nonReentrant auth(CONTRIBUTE\\_ROLE) {\\n require(state() == State.Funding, ERROR\\_INVALID\\_STATE);\\n\\n if (contributionToken == ETH) {\\n require(msg.value == \\_value, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n } else {\\n require(msg.value == 0, ERROR\\_INVALID\\_CONTRIBUTE\\_VALUE);\\n }\\n\\n \\_contribute(\\_contributor, \\_value);\\n}\\n```\\n
FundraisingMultisigTemplate - should use BaseTemplate._createPermissionForTemplate() to assign permissions to itself
low
The template temporarily assigns permissions to itself to be able to configure parts of the system. This can either be done by calling `acl.createPermission(address(this), app, role, manager)` or by using the distinct method provided with the DAO-Templates BaseTemplate, `_createPermissionForTemplate`.\\nWe suggest using the method provided with the default Aragon DAO-Templates: it makes clear that permissions are assigned to the template, and it makes it easier to audit that those permissions are either revoked or transferred before the DAO is handed over to the new user.\\nUse `createPermission` if permissions are assigned to an entity other than the template contract.\\nUse `_createPermissionForTemplate` when creating permissions for the template contract.\\n```\\n// create and grant ADD\\_PROTECTED\\_TOKEN\\_ROLE to this template\\nacl.createPermission(this, controller, controller.ADD\\_COLLATERAL\\_TOKEN\\_ROLE(), this);\\n```\\n\\nSidenote: pass `address(this)` instead of the contract instance to `createPermission`.
Resolution\\nFixed with AragonBlack/[email protected]dd153e0.\\nUse `BaseTemplate._createPermissionForTemplate` to assign permissions to the template.
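A sketch of the replacement call, assuming the `(ACL, app, role)` helper signature used across the standard DAO-Templates:\\n```\\n// before: acl.createPermission(this, controller, controller.ADD\\_COLLATERAL\\_TOKEN\\_ROLE(), this);\\n// after: the helper makes the temporary self-assignment explicit and easy to audit\\n\\_createPermissionForTemplate(acl, address(controller), controller.ADD\\_COLLATERAL\\_TOKEN\\_ROLE());\\n```\\n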
null
```\\n// create and grant ADD\\_PROTECTED\\_TOKEN\\_ROLE to this template\\nacl.createPermission(this, controller, controller.ADD\\_COLLATERAL\\_TOKEN\\_ROLE(), this);\\n```\\n
FundraisingMultisigTemplate - misleading comments
low
The comment mentions `ADD_PROTECTED_TOKEN_ROLE` but permissions for `ADD_COLLATERAL_TOKEN_ROLE` are created.\\n```\\n// create and grant ADD\\_PROTECTED\\_TOKEN\\_ROLE to this template\\nacl.createPermission(this, controller, controller.ADD\\_COLLATERAL\\_TOKEN\\_ROLE(), this);\\n```\\n\\n```\\n// transfer ADD\\_PROTECTED\\_TOKEN\\_ROLE\\n\\_transferPermissionFromTemplate(acl, controller, shareVoting, controller.ADD\\_COLLATERAL\\_TOKEN\\_ROLE(), shareVoting);\\n```\\n
`ADD_PROTECTED_TOKEN_ROLE` in the comment should be `ADD_COLLATERAL_TOKEN_ROLE`.
null
```\\n// create and grant ADD\\_PROTECTED\\_TOKEN\\_ROLE to this template\\nacl.createPermission(this, controller, controller.ADD\\_COLLATERAL\\_TOKEN\\_ROLE(), this);\\n```\\n
FundraisingMultisigTemplate - unnecessary cast to address
low
The addresses of DAI (argument `address _dai`) and ANT (argument `address _ant`) are unnecessarily cast to `address`.\\n```\\nconstructor(\\n DAOFactory \\_daoFactory,\\n ENS \\_ens,\\n MiniMeTokenFactory \\_miniMeFactory,\\n IFIFSResolvingRegistrar \\_aragonID,\\n address \\_dai,\\n address \\_ant\\n)\\n BaseTemplate(\\_daoFactory, \\_ens, \\_miniMeFactory, \\_aragonID)\\n public\\n{\\n \\_ensureAragonIdIsValid(\\_aragonID);\\n \\_ensureMiniMeFactoryIsValid(\\_miniMeFactory);\\n \\_ensureTokenIsContractOrETH(\\_dai);\\n \\_ensureTokenIsContractOrETH(\\_ant);\\n\\n collaterals.push(address(\\_dai));\\n collaterals.push(address(\\_ant));\\n}\\n```\\n
Both arguments are already of type `address`, therefore remove the explicit cast to `address()` when pushing to the `collaterals` array.
null
```\\nconstructor(\\n DAOFactory \\_daoFactory,\\n ENS \\_ens,\\n MiniMeTokenFactory \\_miniMeFactory,\\n IFIFSResolvingRegistrar \\_aragonID,\\n address \\_dai,\\n address \\_ant\\n)\\n BaseTemplate(\\_daoFactory, \\_ens, \\_miniMeFactory, \\_aragonID)\\n public\\n{\\n \\_ensureAragonIdIsValid(\\_aragonID);\\n \\_ensureMiniMeFactoryIsValid(\\_miniMeFactory);\\n \\_ensureTokenIsContractOrETH(\\_dai);\\n \\_ensureTokenIsContractOrETH(\\_ant);\\n\\n collaterals.push(address(\\_dai));\\n collaterals.push(address(\\_ant));\\n}\\n```\\n
FundraisingMultisigTemplate - DAI/ANT token address cannot be zero
low
The fundraising template is configured with the `DAI` and `ANT` token addresses upon deployment and checks if the provided addresses are valid. The check performed is `_ensureTokenIsContractOrETH()`, which allows `address(0)` (the constant for ETH) for the token contracts. However, `address(0)` is not a valid option for either `DAI` or `ANT`: the contract expects a valid token address, and otherwise the deployment of a new DAO will have unexpected results (the collateral `ETH` is added instead of an ERC20 token) or fail (DAI == `ANT` == 0x0).\\n```\\n\\_ensureTokenIsContractOrETH(\\_dai);\\n\\_ensureTokenIsContractOrETH(\\_ant);\\n```\\n\\n```\\n function \\_ensureTokenIsContractOrETH(address \\_token) internal view returns (bool) {\\n require(isContract(\\_token) || \\_token == ETH, ERROR\\_BAD\\_SETTINGS);\\n}\\n```\\n
Resolution\\nFixed with AragonBlack/fundraising@da561ce.\\nUse `isContract()` instead of `_ensureTokenIsContractOrETH()` and optionally require that `collateral[0] != collateral[1]` as an additional check to prevent the fundraising template from being deployed with an invalid configuration.
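To illustrate, a minimal sketch of the stricter checks under these assumptions: `isContract` mirrors the extcodesize-style helper the templates already use, and the error string is illustrative.

```
pragma solidity 0.4.24;

contract FundraisingTemplateTokenChecks {
    string private constant ERROR_BAD_SETTINGS = "FM_BAD_SETTINGS"; // illustrative

    address[] internal collaterals;

    // extcodesize-based check, as used by common isContract helpers
    function isContract(address _target) internal view returns (bool) {
        uint256 size;
        assembly { size := extcodesize(_target) }
        return size > 0;
    }

    function _setCollaterals(address _dai, address _ant) internal {
        // require deployed token contracts; ETH (address(0)) is not acceptable here
        require(isContract(_dai) && isContract(_ant), ERROR_BAD_SETTINGS);
        // prevent configuring the same token twice
        require(_dai != _ant, ERROR_BAD_SETTINGS);

        collaterals.push(_dai);
        collaterals.push(_ant);
    }
}
```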
null
```\\n\\_ensureTokenIsContractOrETH(\\_dai);\\n\\_ensureTokenIsContractOrETH(\\_ant);\\n```\\n
Anyone can remove a maker's pending pool join status
high
Using behavior described in https://github.com/ConsenSys/0x-v3-staking-audit-2019-10/issues/11, it is possible to delete the pending join status of any maker in any pool by passing in `NIL_POOL_ID` to `removeMakerFromStakingPool`. Note that the attacker in the following example must not be a confirmed member of any pool:\\nThe attacker calls `addMakerToStakingPool(NIL_POOL_ID, makerAddress)`. In this case, `makerAddress` can be almost any address, as long as it has not called `joinStakingPoolAsMaker` (an easy example is address(0)). The key goal of this call is to increment the number of makers in pool 0:\\n```\\n\\_poolById[poolId].numberOfMakers = uint256(pool.numberOfMakers).safeAdd(1).downcastToUint32();\\n```\\n\\nThe attacker calls `removeMakerFromStakingPool(NIL_POOL_ID, targetAddress)`. This function queries `getStakingPoolIdOfMaker(targetAddress)` and compares it to the passed-in pool id. Because the target is an unconfirmed maker, their staking pool id is NIL_POOL_ID:\\n```\\nbytes32 makerPoolId = getStakingPoolIdOfMaker(makerAddress);\\nif (makerPoolId != poolId) {\\n LibRichErrors.rrevert(LibStakingRichErrors.MakerPoolAssignmentError(\\n LibStakingRichErrors.MakerPoolAssignmentErrorCodes.MakerAddressNotRegistered,\\n makerAddress,\\n makerPoolId\\n ));\\n}\\n```\\n\\nThe check passes, and the target's `_poolJoinedByMakerAddress` struct is deleted. Additionally, the number of makers in pool 0 is decreased:\\n```\\ndelete \\_poolJoinedByMakerAddress[makerAddress];\\n\\_poolById[poolId].numberOfMakers = uint256(\\_poolById[poolId].numberOfMakers).safeSub(1).downcastToUint32();\\n```\\n\\nThis can be used to prevent any makers from being confirmed into a pool.
See `issue 5.6`.
null
```\\n\\_poolById[poolId].numberOfMakers = uint256(pool.numberOfMakers).safeAdd(1).downcastToUint32();\\n```\\n
MixinParams.setParams bypasses safety checks made by standard StakingProxy upgrade path.
medium
The staking contracts use a set of configurable parameters to determine the behavior of various parts of the system. The parameters dictate the duration of epochs, the ratio of delegated stake weight vs operator stake, the minimum pool stake, and the Cobb-Douglas numerator and denominator. These parameters can be configured in two ways. The first is the standard proxy upgrade path, where storage is initialized via `init()` and the resulting values are validated:\\n```\\n// Call `init()` on the staking contract to initialize storage.\\n(bool didInitSucceed, bytes memory initReturnData) = stakingContract.delegatecall(\\n abi.encodeWithSelector(IStorageInit(0).init.selector)\\n);\\nif (!didInitSucceed) {\\n assembly {\\n revert(add(initReturnData, 0x20), mload(initReturnData))\\n }\\n}\\n \\n// Assert initialized storage values are valid\\n\\_assertValidStorageParams();\\n```\\n\\nThe second: an authorized address can call `MixinParams.setParams` at any time and set the contract's parameters to arbitrary values.\\nThe latter method introduces the possibility of setting unsafe or nonsensical values for the contract parameters: `epochDurationInSeconds` can be set to 0, `cobbDouglasAlphaNumerator` can be larger than `cobbDouglasAlphaDenominator`, `rewardDelegatedStakeWeight` can be set to a value over 100% of the staking reward, and more.\\nNote, too, that by using `MixinParams.setParams` to set all parameters to 0, the `Staking` contract can be re-initialized by way of `Staking.init`. Additionally, it can be re-attached by way of `StakingProxy.attachStakingContract`, as the delegatecall to `Staking.init` will succeed.
Resolution\\nThis is fixed in 0xProject/0x-monorepo#2279. Now the parameter validity is asserted in `setParams()`.\\nEnsure that calls to `setParams` check that the provided values are within the same range currently enforced by the proxy.
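A sketch of `setParams` with the validation applied after the writes; the bounds shown are illustrative and should mirror the exact ranges of `_assertValidStorageParams`, and the `onlyAuthorized` modifier is a stand-in for the real authorization check:

```
pragma solidity ^0.5.9;

contract MixinParamsSketch {
    uint256 internal epochDurationInSeconds;
    uint32 internal rewardDelegatedStakeWeight;
    uint256 internal minimumPoolStake;
    uint32 internal cobbDouglasAlphaNumerator;
    uint32 internal cobbDouglasAlphaDenominator;

    uint32 constant internal PPM_DENOMINATOR = 1000000; // 100% in parts-per-million

    modifier onlyAuthorized() { _; } // stand-in for the real authorization check

    function setParams(
        uint256 _epochDurationInSeconds,
        uint32 _rewardDelegatedStakeWeight,
        uint256 _minimumPoolStake,
        uint32 _cobbDouglasAlphaNumerator,
        uint32 _cobbDouglasAlphaDenominator
    )
        external
        onlyAuthorized
    {
        epochDurationInSeconds = _epochDurationInSeconds;
        rewardDelegatedStakeWeight = _rewardDelegatedStakeWeight;
        minimumPoolStake = _minimumPoolStake;
        cobbDouglasAlphaNumerator = _cobbDouglasAlphaNumerator;
        cobbDouglasAlphaDenominator = _cobbDouglasAlphaDenominator;

        // apply the same sanity checks the proxy runs on attach (illustrative bounds)
        require(
            epochDurationInSeconds >= 5 days && epochDurationInSeconds <= 30 days,
            "INVALID_EPOCH_DURATION"
        );
        require(rewardDelegatedStakeWeight <= PPM_DENOMINATOR, "INVALID_STAKE_WEIGHT");
        require(minimumPoolStake >= 2, "INVALID_MINIMUM_POOL_STAKE");
        require(
            cobbDouglasAlphaDenominator != 0 &&
            cobbDouglasAlphaNumerator <= cobbDouglasAlphaDenominator,
            "INVALID_COBB_DOUGLAS_ALPHA"
        );
    }
}
```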
null
```\\n// Call `init()` on the staking contract to initialize storage.\\n(bool didInitSucceed, bytes memory initReturnData) = stakingContract.delegatecall(\\n abi.encodeWithSelector(IStorageInit(0).init.selector)\\n);\\nif (!didInitSucceed) {\\n assembly {\\n revert(add(initReturnData, 0x20), mload(initReturnData))\\n }\\n}\\n \\n// Assert initialized storage values are valid\\n\\_assertValidStorageParams();\\n```\\n
Authorized addresses can indefinitely stall ZrxVaultBackstop catastrophic failure mode
medium
The `ZrxVaultBackstop` contract was added to allow anyone to activate the staking system's “catastrophic failure” mode if the `StakingProxy` is in “read-only” mode for at least 40 days. To enable this behavior, the `StakingProxy` contract was modified to track the last timestamp at which “read-only” mode was activated. This is done by way of StakingProxy.setReadOnlyMode:\\n```\\n/// @dev Set read-only mode (state cannot be changed).\\nfunction setReadOnlyMode(bool shouldSetReadOnlyMode)\\n external\\n onlyAuthorized\\n{\\n // solhint-disable-next-line not-rely-on-time\\n uint96 timestamp = block.timestamp.downcastToUint96();\\n if (shouldSetReadOnlyMode) {\\n stakingContract = readOnlyProxy;\\n readOnlyState = IStructs.ReadOnlyState({\\n isReadOnlyModeSet: true,\\n lastSetTimestamp: timestamp\\n });\\n```\\n\\nBecause the timestamp is updated even if “read-only” mode is already active, any authorized address can prevent `ZrxVaultBackstop` from activating catastrophic failure mode by repeatedly calling `setReadOnlyMode`.
If “read-only” mode is already active, `setReadOnlyMode(true)` should result in a no-op.
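A self-contained sketch of the guard; the struct and authorization are simplified stand-ins for the real `IStructs.ReadOnlyState` and `onlyAuthorized`:

```
pragma solidity ^0.5.9;

contract StakingProxySketch {
    struct ReadOnlyState {
        bool isReadOnlyModeSet;
        uint96 lastSetTimestamp;
    }

    address public stakingContract;
    address public readOnlyProxy;
    ReadOnlyState public readOnlyState;

    modifier onlyAuthorized() { _; } // stand-in for the real check

    function setReadOnlyMode(bool shouldSetReadOnlyMode)
        external
        onlyAuthorized
    {
        if (shouldSetReadOnlyMode) {
            // no-op if already active so repeated calls cannot refresh
            // lastSetTimestamp and stall the ZrxVaultBackstop's 40-day clock
            if (readOnlyState.isReadOnlyModeSet) {
                return;
            }
            stakingContract = readOnlyProxy;
            readOnlyState = ReadOnlyState({
                isReadOnlyModeSet: true,
                lastSetTimestamp: uint96(block.timestamp)
            });
        } else {
            // disabling path unchanged (simplified in this sketch)
            readOnlyState.isReadOnlyModeSet = false;
        }
    }
}
```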
null
```\\n/// @dev Set read-only mode (state cannot be changed).\\nfunction setReadOnlyMode(bool shouldSetReadOnlyMode)\\n external\\n onlyAuthorized\\n{\\n // solhint-disable-next-line not-rely-on-time\\n uint96 timestamp = block.timestamp.downcastToUint96();\\n if (shouldSetReadOnlyMode) {\\n stakingContract = readOnlyProxy;\\n readOnlyState = IStructs.ReadOnlyState({\\n isReadOnlyModeSet: true,\\n lastSetTimestamp: timestamp\\n });\\n```\\n
Pool 0 can be used to temporarily prevent makers from joining another pool
medium
`removeMakerFromStakingPool` reverts if the number of makers currently in the pool is 0, due to `safeSub` catching an underflow:\\n```\\n\\_poolById[poolId].numberOfMakers = uint256(\\_poolById[poolId].numberOfMakers).safeSub(1).downcastToUint32();\\n```\\n\\nBecause of this, edge behavior described in issue 5.6 can allow an attacker to temporarily prevent makers from joining a pool:\\nThe attacker calls `addMakerToStakingPool(NIL_POOL_ID, victimAddress)`. This sets the victim's `MakerPoolJoinStatus.confirmed` field to `true` and increases the number of makers in pool 0 to 1:\\n```\\npoolJoinStatus = IStructs.MakerPoolJoinStatus({\\n poolId: poolId,\\n confirmed: true\\n});\\n\\_poolJoinedByMakerAddress[makerAddress] = poolJoinStatus;\\n\\_poolById[poolId].numberOfMakers = uint256(pool.numberOfMakers).safeAdd(1).downcastToUint32();\\n```\\n\\nThe attacker calls `removeMakerFromStakingPool(NIL_POOL_ID, randomAddress)`. The net effect of this call simply decreases the number of makers in pool 0 by 1, back to 0:\\n```\\ndelete \\_poolJoinedByMakerAddress[makerAddress];\\n\\_poolById[poolId].numberOfMakers = uint256(\\_poolById[poolId].numberOfMakers).safeSub(1).downcastToUint32();\\n```\\n\\nTypically, the victim should be able to remove themselves from pool 0 by calling `removeMakerFromStakingPool(NIL_POOL_ID, victimAddress)`, but because the attacker can set the pool's number of makers to 0, the aforementioned underflow causes this call to fail. The victim must first understand what is happening in `MixinStakingPool` before they are able to remedy the situation:\\nThe victim must call `addMakerToStakingPool(NIL_POOL_ID, randomAddress2)` to increase pool 0's number of makers back to 1.\\nThe victim can now call `removeMakerFromStakingPool(NIL_POOL_ID, victimAddress)`, and remove their confirmed status.\\nAdditionally, if the victim in question currently has a pending join, the attacker can use issue 5.1 to first remove their pending status before locking them in pool 0.
See issue 5.1.
null
```\\n\\_poolById[poolId].numberOfMakers = uint256(\\_poolById[poolId].numberOfMakers).safeSub(1).downcastToUint32();\\n```\\n
Recommendation: Fix weak assertions in MixinStakingPool stemming from use of NIL_POOL_ID
medium
The modifier `onlyStakingPoolOperatorOrMaker(poolId)` is used to authorize actions taken on a given pool. The sender must be either the operator or a confirmed maker of the pool in question. However, the modifier queries `getStakingPoolIdOfMaker(maker)`, which returns `NIL_POOL_ID` if the maker's `MakerPoolJoinStatus` struct is not confirmed. This implicitly makes anyone a maker of the nonexistent “pool 0”:\\n```\\nfunction getStakingPoolIdOfMaker(address makerAddress)\\n public\\n view\\n returns (bytes32)\\n{\\n IStructs.MakerPoolJoinStatus memory poolJoinStatus = \\_poolJoinedByMakerAddress[makerAddress];\\n if (poolJoinStatus.confirmed) {\\n return poolJoinStatus.poolId;\\n } else {\\n return NIL\\_POOL\\_ID;\\n }\\n}\\n```\\n\\n`joinStakingPoolAsMaker(poolId)` makes no existence checks on the provided pool id, and allows makers to become pending makers in nonexistent pools.\\n`addMakerToStakingPool(poolId, maker)` makes no existence checks on the provided pool id, allowing makers to be added to nonexistent pools (as long as the sender is an operator or maker in the pool).
Avoid use of `0x00...00` for `NIL_POOL_ID`. Instead, use `2**256 - 1`.\\nImplement stronger checks for pool existence. Each time a pool id is supplied, it should be checked that the pool id is between 0 and `nextPoolId`.\\n`onlyStakingPoolOperatorOrMaker` should revert if `poolId` == `NIL_POOL_ID` or if `poolId` is not in the valid range: (0, nextPoolId).
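A sketch combining these checks; the pool and maker mappings are simplified stand-ins for the contract's actual storage:

```
pragma solidity ^0.5.9;

contract PoolIdChecksSketch {
    // use the max value instead of 0x00..00 for the nil id
    bytes32 constant internal NIL_POOL_ID = bytes32(uint256(-1));
    bytes32 constant internal INITIAL_POOL_ID =
        0x0000000000000000000000000000000100000000000000000000000000000000;

    bytes32 internal nextPoolId = INITIAL_POOL_ID;
    mapping (bytes32 => address) internal operatorByPoolId;        // simplified pool storage
    mapping (address => bytes32) internal confirmedPoolIdByMaker;  // simplified join status

    function _assertStakingPoolExists(bytes32 poolId) internal view {
        // valid ids lie in [INITIAL_POOL_ID, nextPoolId); this rejects
        // NIL_POOL_ID and the zero id alike
        require(
            uint256(poolId) >= uint256(INITIAL_POOL_ID) &&
            uint256(poolId) < uint256(nextPoolId),
            "POOL_DOES_NOT_EXIST"
        );
    }

    modifier onlyStakingPoolOperatorOrMaker(bytes32 poolId) {
        _assertStakingPoolExists(poolId);
        require(
            msg.sender == operatorByPoolId[poolId] ||
            confirmedPoolIdByMaker[msg.sender] == poolId,
            "ONLY_POOL_OPERATOR_OR_MAKER"
        );
        _;
    }
}
```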
null
```\\nfunction getStakingPoolIdOfMaker(address makerAddress)\\n public\\n view\\n returns (bytes32)\\n{\\n IStructs.MakerPoolJoinStatus memory poolJoinStatus = \\_poolJoinedByMakerAddress[makerAddress];\\n if (poolJoinStatus.confirmed) {\\n return poolJoinStatus.poolId;\\n } else {\\n return NIL\\_POOL\\_ID;\\n }\\n}\\n```\\n
LibFixedMath functions fail to catch a number of overflows
medium
The `_add()`, `_mul()`, and `_div()` functions perform arithmetic on 256-bit signed integers, and they all miss some specific overflows.\\nAddition Overflows\\n```\\n/// @dev Adds two numbers, reverting on overflow.\\nfunction \\_add(int256 a, int256 b) private pure returns (int256 c) {\\n c = a + b;\\n if (c > 0 && a < 0 && b < 0) {\\n LibRichErrors.rrevert(LibFixedMathRichErrors.BinOpError(\\n LibFixedMathRichErrors.BinOpErrorCodes.SUBTRACTION\\_OVERFLOW,\\n a,\\n b\\n ));\\n }\\n if (c < 0 && a > 0 && b > 0) {\\n LibRichErrors.rrevert(LibFixedMathRichErrors.BinOpError(\\n LibFixedMathRichErrors.BinOpErrorCodes.ADDITION\\_OVERFLOW,\\n a,\\n b\\n ));\\n }\\n}\\n```\\n\\nThe two overflow conditions it tests for are:\\nAdding two positive numbers shouldn't result in a negative number.\\nAdding two negative numbers shouldn't result in a positive number.\\n`_add(-2**255, -2**255)` returns `0` without reverting because the overflow didn't match either of the above conditions.\\nMultiplication Overflows\\n```\\n/// @dev Returns the multiplication two numbers, reverting on overflow.\\nfunction \\_mul(int256 a, int256 b) private pure returns (int256 c) {\\n if (a == 0) {\\n return 0;\\n }\\n c = a \\* b;\\n if (c / a != b) {\\n LibRichErrors.rrevert(LibFixedMathRichErrors.BinOpError(\\n LibFixedMathRichErrors.BinOpErrorCodes.MULTIPLICATION\\_OVERFLOW,\\n a,\\n b\\n ));\\n }\\n}\\n```\\n\\nThe function checks via division for most types of overflows, but it fails to catch one particular case. `_mul(-2**255, -1)` returns `-2**255` without error.\\nDivision Overflows\\n```\\n/// @dev Returns the division of two numbers, reverting on division by zero.\\nfunction \\_div(int256 a, int256 b) private pure returns (int256 c) {\\n if (b == 0) {\\n LibRichErrors.rrevert(LibFixedMathRichErrors.BinOpError(\\n LibFixedMathRichErrors.BinOpErrorCodes.DIVISION\\_BY\\_ZERO,\\n a,\\n b\\n ));\\n }\\n c = a / b;\\n}\\n```\\n\\nIt does not check for overflow. Due to this, `_div(-2**255, -1)` erroneously returns `-2**255`.
For addition, the specific case of `_add(-2**255, -2**255)` can be detected by using a `>= 0` check instead of `> 0`, but the below seems like a clearer check for all cases:\\n```\\n// if b is negative, then the result should be less than a\\nif (b < 0 && c >= a) { /\\* subtraction overflow \\*/ }\\n\\n// if b is positive, then the result should be greater than a\\nif (b > 0 && c <= a) { /\\* addition overflow \\*/ }\\n```\\n\\nFor multiplication and division, the specific values of `-2**255` and `-1` are the only missing cases, so they can be explicitly checked in the `_mul()` and `_div()` functions.
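A sketch of the explicit checks for `_mul()` and `_div()`, using plain revert strings in place of the library's rich errors:

```
pragma solidity ^0.5.9;

library LibFixedMathPatch {
    int256 private constant MIN_INT256 = -2**255;

    function _mul(int256 a, int256 b) internal pure returns (int256 c) {
        if (a == 0 || b == 0) {
            return 0;
        }
        // the division-based check below cannot catch exactly this case
        if ((a == MIN_INT256 && b == -1) || (b == MIN_INT256 && a == -1)) {
            revert("MULTIPLICATION_OVERFLOW");
        }
        c = a * b;
        if (c / a != b) {
            revert("MULTIPLICATION_OVERFLOW");
        }
    }

    function _div(int256 a, int256 b) internal pure returns (int256 c) {
        if (b == 0) {
            revert("DIVISION_BY_ZERO");
        }
        // -2**255 / -1 does not fit into int256
        if (a == MIN_INT256 && b == -1) {
            revert("DIVISION_OVERFLOW");
        }
        c = a / b;
    }
}
```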
null
```\\n/// @dev Adds two numbers, reverting on overflow.\\nfunction \\_add(int256 a, int256 b) private pure returns (int256 c) {\\n c = a + b;\\n if (c > 0 && a < 0 && b < 0) {\\n LibRichErrors.rrevert(LibFixedMathRichErrors.BinOpError(\\n LibFixedMathRichErrors.BinOpErrorCodes.SUBTRACTION\\_OVERFLOW,\\n a,\\n b\\n ));\\n }\\n if (c < 0 && a > 0 && b > 0) {\\n LibRichErrors.rrevert(LibFixedMathRichErrors.BinOpError(\\n LibFixedMathRichErrors.BinOpErrorCodes.ADDITION\\_OVERFLOW,\\n a,\\n b\\n ));\\n }\\n}\\n```\\n
Misleading MoveStake event when moving stake from UNDELEGATED to UNDELEGATED
low
Although moving stake between the same status (UNDELEGATED <=> UNDELEGATED) should be a no-op, calls to `moveStake` succeed even for invalid `amount` and nonsensical `poolId`. The resulting `MoveStake` event can log garbage, potentially confusing those observing events.\\nWhen moving between `UNDELEGATED` and `UNDELEGATED`, each check and function call results in a no-op, save the final event:\\nNeither `from` nor `to` are `StakeStatus.DELEGATED`, so these checks are passed:\\n```\\nif (from.status == IStructs.StakeStatus.DELEGATED) {\\n \\_undelegateStake(\\n from.poolId,\\n staker,\\n amount\\n );\\n}\\n \\nif (to.status == IStructs.StakeStatus.DELEGATED) {\\n \\_delegateStake(\\n to.poolId,\\n staker,\\n amount\\n );\\n}\\n```\\n\\nThe primary state changing function, `_moveStake`, immediately returns because the `from` and `to` balance pointers are equivalent:\\n```\\nif (\\_arePointersEqual(fromPtr, toPtr)) {\\n return;\\n}\\n```\\n\\nFinally, the `MoveStake` event is invoked, which can log completely invalid values for `amount`, `from.poolId`, and to.poolId:\\n```\\nemit MoveStake(\\n staker,\\n amount,\\n uint8(from.status),\\n from.poolId,\\n uint8(to.status),\\n to.poolId\\n);\\n```\\n
If `amount` is 0 or if moving between `UNDELEGATED` and `UNDELEGATED`, this function should no-op or revert. An explicit check for this case should be made near the start of the function.
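A sketch of such a guard at the top of `moveStake` (a fragment, reusing the names from the quoted snippets):

```
// at the top of moveStake(), before any state changes or events:
require(amount != 0, "MOVE_STAKE_AMOUNT_ZERO");
require(
    !(from.status == IStructs.StakeStatus.UNDELEGATED &&
      to.status == IStructs.StakeStatus.UNDELEGATED),
    "MOVE_STAKE_NO_OP"
);
```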
null
```\\nif (from.status == IStructs.StakeStatus.DELEGATED) {\\n \\_undelegateStake(\\n from.poolId,\\n staker,\\n amount\\n );\\n}\\n \\nif (to.status == IStructs.StakeStatus.DELEGATED) {\\n \\_delegateStake(\\n to.poolId,\\n staker,\\n amount\\n );\\n}\\n```\\n
Remove unneeded fields from StoredBalance and Pool structs
low
Resolution\\nThis is fixed in 0xProject/0x-monorepo#2248. As part of a larger refactor, these fields were removed.\\nBoth structs have fields that are only written to, and never read:\\nStoredBalance.isInitialized:\\n```\\nbool isInitialized;\\n```\\n\\nPool.initialized:\\n```\\nbool initialized;\\n```\\n
The unused fields should be removed.
null
```\\nbool isInitialized;\\n```\\n
Pool IDs can just be incrementing integers
low
Pool IDs are currently `bytes32` values that increment by `2**128`. After discussion with the development team, it seems that this was in preparation for a feature that was ultimately not used. Pool IDs should instead just be incrementing integers.\\n```\\n// The upper 16 bytes represent the pool id, so this would be pool id 1. See MixinStakinPool for more information.\\nbytes32 constant internal INITIAL\\_POOL\\_ID = 0x0000000000000000000000000000000100000000000000000000000000000000;\\n\\n// The upper 16 bytes represent the pool id, so this would be an increment of 1. See MixinStakinPool for more information.\\nuint256 constant internal POOL\\_ID\\_INCREMENT\\_AMOUNT = 0x0000000000000000000000000000000100000000000000000000000000000000;\\n```\\n\\n```\\n/// @dev Computes the unique id that comes after the input pool id.\\n/// @param poolId Unique id of pool.\\n/// @return Next pool id after input pool.\\nfunction \\_computeNextStakingPoolId(bytes32 poolId)\\n internal\\n pure\\n returns (bytes32)\\n{\\n return bytes32(uint256(poolId).safeAdd(POOL\\_ID\\_INCREMENT\\_AMOUNT));\\n}\\n```\\n
Resolution\\nThis is fixed in 0xProject/0x-monorepo#2250. Pool IDs now start at 1 and increment by 1 each time.\\nMake pool IDs `uint256` values and simply add 1 to generate the next ID.
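A minimal sketch of the simplified scheme:

```
pragma solidity ^0.5.9;

contract PoolIdCounterSketch {
    uint256 constant internal INITIAL_POOL_ID = 1;
    uint256 internal nextPoolId = INITIAL_POOL_ID;

    /// @dev Computes the unique id that comes after the input pool id.
    function _computeNextStakingPoolId(uint256 poolId)
        internal
        pure
        returns (uint256)
    {
        // a plain increment replaces the 2**128 stride; a uint256 counter
        // cannot realistically overflow from sequential pool creation
        return poolId + 1;
    }
}
```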
null
```\\n// The upper 16 bytes represent the pool id, so this would be pool id 1. See MixinStakinPool for more information.\\nbytes32 constant internal INITIAL\\_POOL\\_ID = 0x0000000000000000000000000000000100000000000000000000000000000000;\\n\\n// The upper 16 bytes represent the pool id, so this would be an increment of 1. See MixinStakinPool for more information.\\nuint256 constant internal POOL\\_ID\\_INCREMENT\\_AMOUNT = 0x0000000000000000000000000000000100000000000000000000000000000000;\\n```\\n
LibProxy.proxyCall() may overwrite important memory
low
`LibProxy.proxyCall()` copies from call data to memory, starting at address 0:\\n```\\nassembly {\\n // store selector of destination function\\n let freeMemPtr := 0\\n if gt(customEgressSelector, 0) {\\n mstore(0x0, customEgressSelector)\\n freeMemPtr := add(freeMemPtr, 4)\\n }\\n\\n // adjust the calldata offset, if we should ignore the selector\\n let calldataOffset := 0\\n if gt(ignoreIngressSelector, 0) {\\n calldataOffset := 4\\n }\\n\\n // copy calldata to memory\\n calldatacopy(\\n freeMemPtr,\\n calldataOffset,\\n calldatasize()\\n )\\n```\\n\\nThe first 64 bytes of memory are treated as “scratch space” by the Solidity compiler. Writing beyond that point is dangerous, as it will overwrite the free memory pointer and the “zero slot” which is where length-0 arrays point.\\nAlthough the current callers of `proxyCall()` don't appear to use any memory after calling `proxyCall()`, future changes to the code may introduce very serious and subtle bugs due to this unsafe handling of memory.
Use the actual free memory pointer to determine where it's safe to write to memory.
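A sketch of the copy phase based on the free memory pointer (a fragment; the selector and offset handling mirrors the quoted assembly):

```
assembly {
    // read the actual free memory pointer instead of writing at 0x0
    let base := mload(0x40)
    let writePtr := base

    if gt(customEgressSelector, 0) {
        // the selector is left-aligned in the 32-byte word; the copy
        // below overwrites everything after the first 4 bytes
        mstore(writePtr, customEgressSelector)
        writePtr := add(writePtr, 4)
    }

    // adjust the calldata offset, if we should ignore the selector
    let calldataOffset := 0
    if gt(ignoreIngressSelector, 0) {
        calldataOffset := 4
    }

    // copy calldata into freshly allocated memory
    calldatacopy(writePtr, calldataOffset, calldatasize())

    // reserve the region so Solidity code running after the call
    // cannot clobber it
    mstore(0x40, add(writePtr, calldatasize()))
}
```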
null
```\\nassembly {\\n // store selector of destination function\\n let freeMemPtr := 0\\n if gt(customEgressSelector, 0) {\\n mstore(0x0, customEgressSelector)\\n freeMemPtr := add(freeMemPtr, 4)\\n }\\n\\n // adjust the calldata offset, if we should ignore the selector\\n let calldataOffset := 0\\n if gt(ignoreIngressSelector, 0) {\\n calldataOffset := 4\\n }\\n\\n // copy calldata to memory\\n calldatacopy(\\n freeMemPtr,\\n calldataOffset,\\n calldatasize()\\n )\\n```\\n
NodeRegistry - URL can be arbitrary dns resolvable names, IP's and even localhost or private subnets
high
As outlined in issue 6.9 the `NodeRegistry` allows anyone to register nodes with arbitrary URLs. The `url` is then used by `in3-server` or clients to connect to other nodes in the system. Signers can only be convicted if they sign wrong blockhashes. However, if they never provide any signatures they can stay in the registry for as long as they want and sabotage the network. The Registry implements an admin functionality that is available for the first year to remove misbehaving nodes (or spam entries) from the Registry. However, this is insufficient as an attacker might just re-register nodes after the minimum timeout they specify or spend some more finneys on registering more nodes. Depending on the eth-price this will be more or less profitable.\\nFrom an attacker's perspective the `NodeRegistry` is a good source of information for reconnaissance, allows them to de-anonymize and profile nodes based on dns entries or netblocks or responses to `in3_stats` (https://github.com/ConsenSys/slockit-in3-audit-2019-09/issues/49), makes a good list of targets for DoS attacks on the system or makes it easy to exploit nodes for certain yet unknown security vulnerabilities.\\nSince nodes and potentially clients (not in scope) do not validate the rpc URL received from the `NodeRegistry`, they will try to connect to whatever is stored in a node's `url` entry.\\ncode/in3-server/src/chains/signatures.ts:L58-L75\\n```\\nconst config = nodes.nodes.find(\\_ => \\_.address.toLowerCase() === adr.toLowerCase())\\nif (!config) // TODO do we need to throw here or is it ok to simply not deliver the signature?\\n throw new Error('The ' + adr + ' does not exist within the current registered active nodeList!')\\n\\n// get cache signatures and remaining blocks that have no signatures\\nconst cachedSignatures: Signature[] = []\\nconst blocksToRequest = blocks.filter(b => {\\n const s = signatureCaches.get(b.hash) && false\\n return s ? cachedSignatures.push(s) \\* 0 : true\\n})\\n\\n// send the sign-request\\nlet response: RPCResponse\\ntry {\\n response = (blocksToRequest.length\\n ? await handler.transport.handle(config.url, { id: handler.counter++ || 1, jsonrpc: '2.0', method: 'in3\\_sign', params: blocksToRequest })\\n : { result: [] }) as RPCResponse\\n if (response.error) {\\n```\\n\\nThis allows for a wide range of attacks, including but not limited to:\\nAn attacker might register a node with an empty or invalid URL. The `in3-server` does not validate the URL and therefore will attempt to connect to the invalid URL, spending resources (cpu, file-descriptors, ..) to find out that it is invalid.\\nAn attacker might register a node with a URL that is pointing to another node's rpc endpoint and specify weights that suggest that it is capable of servicing a lot of requests, to draw more traffic towards that node in an attempt to cause a DoS situation.\\nAn attacker might register a node for a http/https website at any port in an extortion attempt directed to website owners.
The incubed network nodes will have to learn themselves that the URL is invalid and they will at least attempt to connect to the website once.\\nAn attacker might update the node information in the `NodeRegistry` for a specific node every block, providing a new `url` (or slightly different URLs, see issue 6.9) to avoid client/node URL blacklists.\\nAn attacker might provide IP addresses instead of DNS resolvable names with the `url` in an attempt to draw traffic to targets, avoiding canonicalization and blacklisting features.\\nAn attacker might provide a URL that points to private IP netblocks for IPv4 or IPv6 in various formats. Combined with the ability to ask another node to connect to an attacker-defined `url` (via blockproof, signatures[] -> signer_address -> signer.url) this might allow an attacker to enumerate services in the LAN of node operators.\\nAn attacker might provide the loopback IPv4, IPv6 or resolvable name as the URL in an attempt to make the node connect to local loopback services (service discovery, bypassing authentication for some local running services - however this is very limited to the requests nodes may execute).\\nURLs may be provided in various formats: resolvable dns names, IPv4, IPv6 and depending on the http handler implementation even in Decimal, Hex or Octal form (e.g. http://2130706433/)\\nA valid DNS resolvable name might point to a localhost or private IP netblock.\\nSince none of the rpc endpoints provide signatures, they cannot be convicted or removed (unless the `unregisterKey` does it within the first year. However, that will not solve the problem that someone can re-register the same URLs over and over again)
Resolution\\nThis issue has been addressed with the following commits:\\nIt is a design decision to base the Node registry on URLs (DNS resolvable names). This has the implications outlined in this issue and they cannot easily be mitigated. Adding a delay until nodes can be used after registration only delays the problem. Assuming that an entity curates the registry or a whitelist is in place centralizes the system. Adding DNS record verification still allows an owner of a DNS entry to point its name to any IP address they would like it to point to. It certainly makes it harder to add RPC URLs with DNS names that are not in control of the attacker but it also adds a whole lot more complexity to the system (including manual steps performed by the node operator). In the end, the system allows IP based URLs in the registry which cannot be used for DNS validation.\\nIt is a fundamental design decision of the system architecture to allow rpc urls in the Node Registry, therefore this issue can only be partially mitigated unless the system design is reworked. It is therefore suggested to add checks to both the registry contract (coarse validation to avoid adding invalid urls) and node implementations (rigorous validation of URL's and resolved IP addresses) and filter out any potentially harmful destinations.
null
```\\nconst config = nodes.nodes.find(\\_ => \\_.address.toLowerCase() === adr.toLowerCase())\\nif (!config) // TODO do we need to throw here or is it ok to simply not deliver the signature?\\n throw new Error('The ' + adr + ' does not exist within the current registered active nodeList!')\\n\\n// get cache signatures and remaining blocks that have no signatures\\nconst cachedSignatures: Signature[] = []\\nconst blocksToRequest = blocks.filter(b => {\\n const s = signatureCaches.get(b.hash) && false\\n return s ? cachedSignatures.push(s) \\* 0 : true\\n})\\n\\n// send the sign-request\\nlet response: RPCResponse\\ntry {\\n response = (blocksToRequest.length\\n ? await handler.transport.handle(config.url, { id: handler.counter++ || 1, jsonrpc: '2.0', method: 'in3\\_sign', params: blocksToRequest })\\n : { result: [] }) as RPCResponse\\n if (response.error) {\\n```\\n
in3-server - key management Pending
high
Secure and efficient key management is a challenge for any cryptographic system. Incubed nodes for example require an account on the ethereum blockchain to actively participate in the incubed network. The account and therefore a private-key is used to sign transactions on the ethereum blockchain and to provide signed proofs to other in3-nodes.\\nThis means that an attacker that is able to discover the keys used by an `in3-server` by any mechanism may be able to impersonate that node, steal the node's funds or sign wrong data on behalf of the node, which might also lead to a loss of funds.\\nThe private key for the `in3-server` can be specified in a configuration file called `config.json` residing in the program working dir. Settings from the `config.json` can be overridden via command-line options. The application keeps configuration parameters available internally in an `IN3RPCConfig` object and passes this object as an initialization parameter to other objects.\\nThe key can either be provided in plaintext as a hex-string starting with `0x` or within an ethereum keystore format compatible protected keystore file. Either way it is provided, it will be held in plaintext in the object.\\nThe application accepts plaintext private keys and the keys are stored unprotected in the application's memory in JavaScript objects. The `in3-server` might even re-use the node's private key, which may weaken the security provided by the node. The repository leaks a series of presumably 'test private keys' and the default config file already comes with a private key set that might be shared across unwary users that fail to override it.\\ncode/in3-server/config.json:L1-L4\\n```\\n{\\n "privateKey": "0xc858a0f49ce12df65031ba0eb0b353abc74f93f8ccd43df9682fd2e2293a4db3",\\n "rpcUrl": "http://rpc-kovan.slock.it"\\n}\\n```\\n\\ncode/in3-server/package.json:L20-L31\\nThe private key is also passed as an argument to other functions. In error cases this may leak the private key to log interfaces or remote log aggregation instances (sentry). See `txargs.privateKey` in the example below:\\ncode/in3-server/src/util/tx.ts:L100-L100\\n```\\nconst key = toBuffer(txargs.privateKey)\\n```\\n\\ncode/in3-server/src/util/tx.ts:L134-L140\\n```\\nconst txHash = await transport.handle(url, {\\n jsonrpc: '2.0',\\n id: idCount++,\\n method: 'eth\\_sendRawTransaction',\\n params: [toHex(tx.serialize())]\\n}).then((\\_: RPCResponse) => \\_.error ? Promise.reject(new SentryError('Error sending tx', 'tx\\_error', 'Error sending the tx ' + JSON.stringify(txargs) + ':' + JSON.stringify(\\_.error))) as any : \\_.result + '')\\n```\\n
Resolution\\nThe breakdown of the fixes addressed with git.slock.it/PR/13 is as follows:\\nKeys should never be stored or accepted in plaintext format Keys should only be accepted in an encrypted and protected format\\nThe private key in `code/in3-server/config.json` has been removed. The repository still contains private keys at least in the following locations:\\n`package.json`\\n`vscode/launch.json`\\n`example_docker-compose.yml`\\nNote that private keys indexed by a git repository can be restored from the repository history.\\nThe following statement has been provided to address this issue:\\nWe have removed all examples and usage of plain private keys and replaced them with json-keystore files. Also in the documentation we added warnings on how to deal with keys, especially with hints to the bash history or enviroment\\n\\nA single key should be used for only one purpose. Keys should not be shared.\\nThe following statement has been provided to address this issue:\\nThis is why we seperated the owner and signer-key. This way you can use a multisig to securly protect the owner-key. The signer-key is used to sign blocks (and convict) and is not able to do anything else (not even changing its own url)\\n\\nThe application should support developers in understanding where cryptographic keys are stored within the application as well as in which memory regions they might be accessible for other applications\\nAddressed by wrapping the private key in an object that stores the key in encrypted form and only decrypts it when signing. The key is cleared after usage. The IN3-server still allows raw private keys to be configured. A warning is printed if that is the case. The loaded raw private key is temporarily assigned to a local variable and not explicitly cleared by the method.\\nWhile we used to keep the unlocked key as part of the config, we have now removed the key from the config and store them in a special signer-function.\\nhttps://git.slock.it/in3/ts/in3-server/merge_requests/113\\n\\nKeys should be protected in memory and only decrypted for the duration of time they are actively used. Keys should not be stored with the applications source-code repository\\nsee previous remediation note.\\nAfter unlocking the signer key, we encrypt it again and keep it encrypted only decrypting it when signing. This way the raw private key only exist for a very short time in memory and will be filled with 0 right after. ( https://git.slock.it/in3/ts/in3-server/merge_requests/113/diffs#653b04fa41e35b55181776b9f14620b661cff64c_54_73 )\\n\\nUse standard libraries for cryptographic operations\\nThe following statement has been provided to address this issue\\nWe are using ethereumjs-libs.\\n\\nUse the system keystore and API to sign and avoid to store key material at all\\nThe following statement has been provided to address this issue\\nWe are looking into using different signer-apis, even supporting hardware-modules like HSMs. But this may happen in future releases.\\n\\nThe application should store the keys eth-address (util.getAddress()) instead of re-calculating it multiple times from the private key.\\nFixed by generating the address for a private key once and storing it in a private key wrapper object.\\n\\nDo not leak credentials and key material in debug-mode, to local log-output or external log aggregators.\\n`txArgs` still contains a field `privateKey` as outlined in the issue description.
However, this `privateKey` now represents the wrapper object noted in a previous comment which only provides access to the ETH address generated from the raw private key.\\nThe following statement has been provided to address this issue:\\nsince the private key and the passphrase are actually deleted from the config, logoutputs or even debug will not be able to leak this information.\\nKeys should never be stored or accepted in plaintext format.\\nKeys should not be stored in plaintext on the file-system as they might easily be exposed to other users. Credentials on the file-system must be tightly restricted by access control.\\nKeys should not be provided as plaintext via environment variables as this might make them available to other processes sharing the same environment (child-processes, e.g. same shell session)\\nKeys should not be provided as plaintext via command-line arguments as they might persist in the shell's command history or might be available to privileged system accounts that can query other processes' startup parameters.\\nKeys should only be accepted in an encrypted and protected format.\\nA single key should be used for only one purpose. Keys should not be shared.\\nThe use of the same key for two different cryptographic processes may weaken the security provided by one or both of the processes.\\nThe use of the same key for two different applications may weaken the security provided by one or both of the applications.\\nLimiting the use of a key limits the damage that could be done if the key is compromised.\\nNode owners' keys should not be re-used as signer keys.\\nThe application should support developers in understanding where cryptographic keys are stored within the application as well as in which memory regions they might be accessible for other applications.\\nKeys should be protected in memory and only decrypted for the duration of time they are actively used.\\nKeys should not be stored with the application's source-code repository.\\nUse standard libraries for cryptographic operations.\\nUse the system keystore and API to sign and avoid storing key material at all.\\nThe application should store the key's eth-address (util.getAddress()) instead of re-calculating it multiple times from the private key.\\nDo not leak credentials and key material in debug-mode, to local log-output or external log aggregators.
null
```\\n{\\n "privateKey": "0xc858a0f49ce12df65031ba0eb0b353abc74f93f8ccd43df9682fd2e2293a4db3",\\n "rpcUrl": "http://rpc-kovan.slock.it"\\n}\\n```\\n
NodeRegistry - Multiple nodes can share slightly different RPC URL
high
One of the requirements for Node registration is to have a unique URL which is not already used by a different owner. The uniqueness check is done by hashing the provided `_url` and checking if someone already registered with that hash of `_url`.\\nHowever, byte-equality checks (via hashing in this case) to enforce uniqueness will not work for URLs. For example, while the following URLs are not equal and will result in different `urlHashes`, they can logically be the same end-point:\\n`https://some-server.com/in3-rpc`\\n`https://some-server.com:443/in3-rpc`\\n`https://some-server.com/in3-rpc/`\\n`https://some-server.com/in3-rpc///`\\n`https://some-server.com/in3-rpc?something`\\n`https://some-server.com/in3-rpc?something&something`\\n`https://www.some-server.com/in3-rpc?something` (if www resolves to the same ip)\\n```\\nbytes32 urlHash = keccak256(bytes(\\_url));\\n\\n// make sure this url and also this owner was not registered before.\\n// solium-disable-next-line\\nrequire(!urlIndex[urlHash].used && signerIndex[\\_signer].stage == Stages.NotInUse,\\n "a node with the same url or signer is already registered");\\n```\\n\\nThis leads to the following attack vectors:\\nA user signs up multiple nodes that resolve to the same end-point (URL). A minimum deposit of `0.01 ether` is required for each registration. Registering multiple nodes for the same end-point might allow an attacker to increase their chance of being picked to provide proofs. Registering multiple nodes requires unique `signer` addresses per node.\\nAlso, one node can have multiple accounts, hence one node can register slightly different URLs with different accounts as the signers.\\nDoS - A user might register nodes for URLs that do not serve in3-clients in an attempt to DDoS them, e.g. to extort web-site operators. This is kind of a reflection attack where nodes will request other nodes from the contract and try to contact them over RPC. Since it is http-rpc, it will consume resources on the receiving end.\\nDoS - A user might register Nodes with RPC URLs of other nodes, manipulating weights to cause more traffic than the node can actually handle. Nodes will try to communicate with that node. If no proof is requested, the node will not even know that someone else signed up other nodes with their RPC URL to cause problems. If they request proof, the original `signer` will return a signed proof and the node will fail due to a signature mismatch. However, the node cannot be convicted and therefore forced to lose the deposit, as conviction is bound to the `signer` and the block was not signed by the rogue node entry. There will be no way to remove the node from the registry other than the admin functionality.
Canonicalize URLs, but that will not completely prevent someone from registering nodes for other end-points or websites. Nodes can be removed by an admin in the first year but not after that. Rogue owners cannot be prevented from registering random nodes with high weights and minimum deposit. They cannot be convicted as they do not serve proofs. Rogue owners can still unregister to receive their deposit after messing with the system.
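Coarse on-chain checks can at least reject the most obvious aliases during registration; a sketch under that assumption (the exact rules are illustrative, and full canonicalization still has to happen off-chain in the node implementations):

```
pragma solidity ^0.5.10;

library UrlChecks {
    /// @dev Coarse URL sanity checks to reject trivial aliases such as
    /// trailing slashes or query strings. Not a substitute for off-chain
    /// canonicalization and IP/DNS filtering.
    function isPlausibleUrl(string memory _url) internal pure returns (bool) {
        bytes memory b = bytes(_url);
        bytes memory prefix = "https://";

        if (b.length < prefix.length + 3 || b.length > 256) {
            return false;
        }
        // require an https:// prefix
        for (uint i = 0; i < prefix.length; i++) {
            if (b[i] != prefix[i]) {
                return false;
            }
        }
        // reject a trailing slash ('/'), so "https://x/in3-rpc" and
        // "https://x/in3-rpc/" cannot both be registered
        if (b[b.length - 1] == 0x2F) {
            return false;
        }
        // reject query strings ('?') and fragments ('#'), which allow aliasing
        for (uint i = prefix.length; i < b.length; i++) {
            if (b[i] == 0x3F || b[i] == 0x23) {
                return false;
            }
        }
        return true;
    }
}
```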
null
```\\nbytes32 urlHash = keccak256(bytes(\\_url));\\n\\n// make sure this url and also this owner was not registered before.\\n// solium-disable-next-line\\nrequire(!urlIndex[urlHash].used && signerIndex[\\_signer].stage == Stages.NotInUse,\\n "a node with the same url or signer is already registered");\\n```\\n
Impossible to remove malicious nodes after the initial period
medium
The system has a centralized power structure for the first year after deployment. An `unregisterKey` (the creator of the contract) is allowed to remove nodes that are in state `Stages.Active` from the registry, but only in the first year.\\nHowever, there is no possibility to remove malicious nodes from the registry after that.\\n```\\n/// @dev only callable in the 1st year after deployment\\nfunction removeNodeFromRegistry(address \\_signer)\\n external\\n onlyActiveState(\\_signer)\\n{\\n\\n // solium-disable-next-line security/no-block-members\\n require(block.timestamp < (blockTimeStampDeployment + YEAR\\_DEFINITION), "only in 1st year");// solhint-disable-line not-rely-on-time\\n require(msg.sender == unregisterKey, "only unregisterKey is allowed to remove nodes");\\n\\n SignerInformation storage si = signerIndex[\\_signer];\\n In3Node memory n = nodes[si.index];\\n\\n unregisterNodeInternal(si, n);\\n\\n}\\n```\\n
Resolution\\nThis issue has been addressed with a large change-set that splits the NodeRegistry into two contracts, which results in a code flow that mitigates this issue by making the logic contract upgradable (after 47 days of notice). The resolution adds more complexity to the system, and this complexity is not covered by the original audit. Splitting up the contracts has the side-effect of events being emitted by two different contracts, requiring nodes to subscribe to both contracts' events.\\nThe need for removing malicious nodes from the registry arises from the design decision to allow anyone to register any URL. These URLs might not actually belong to the registrar of the URL and might not be IN3 nodes. This is partially mitigated by a centralization feature introduced in the mitigation phase that implements whitelist functionality for adding nodes.\\nWe generally advocate against adding complexity, centralization and upgrading mechanisms that can allow one party to misuse functionalities of the contract system for their benefit (e.g. `adminSetNodeDeposit` is only used to reset the deposit but allows the Logic contract to set any deposit; the logic contract is set by the owner and there is a 47 day timelock).\\nWe believe the solution to this issue should not have been this complex. The trust model of the system is changed with this solution; now the logic contract can allow the admin a wide range of control over the system state and data.\\nThe following statement has been provided with the change-set:\\nDuring the 1st year, we will keep the current mechanic even though it's a centralized approach. However, we changed the structure of the smart contracts and separated the NodeRegistry into two different smart contracts: NodeRegistryLogic and NodeRegistryData. After a successful deployment only the NodeRegistryLogic-contract is able to write data into the NodeRegistryData-contract. This way, we can keep the stored data (e.g. the nodeList) in the NodeRegistryData-contract while changing the way the data gets added/updated/removed is handled in the NodeRegistryLogic-contract. We also provided a function to update the NodeRegistryLogic-contract, so that we are able to change to a better solution for removing nodes in an updated contract.\\nProvide a solution for the network to remove fraudulent node entries. This could be done by a voting mechanism (with staking, etc.).
null
```\\n/// @dev only callable in the 1st year after deployment\\nfunction removeNodeFromRegistry(address \\_signer)\\n external\\n onlyActiveState(\\_signer)\\n{\\n\\n // solium-disable-next-line security/no-block-members\\n require(block.timestamp < (blockTimeStampDeployment + YEAR\\_DEFINITION), "only in 1st year");// solhint-disable-line not-rely-on-time\\n require(msg.sender == unregisterKey, "only unregisterKey is allowed to remove nodes");\\n\\n SignerInformation storage si = signerIndex[\\_signer];\\n In3Node memory n = nodes[si.index];\\n\\n unregisterNodeInternal(si, n);\\n\\n}\\n```\\n
NodeRegistry.registerNodeFor() no replay protection and expiration Won't Fix
medium
An owner can register a node with the signer not being the owner by calling `registerNodeFor`. The owner submits a message, signed by the signer, that includes the properties of the node, including the URL.\\nThe signed data does not include the `registryID` nor the NodeRegistry's address and can therefore be used by the owner to submit the same node to multiple registries or chains without the signer's consent.\\nThe signed data does not expire and can be re-used by the owner indefinitely to submit the same node again to future contracts or the same contract after the node has been removed.\\nArguments are not validated in the external function (also see issue 6.17)\\n```\\nbytes32 tempHash = keccak256(\\n abi.encodePacked(\\n \\_url,\\n \\_props,\\n \\_timeout,\\n \\_weight,\\n msg.sender\\n )\\n);\\n```\\n
Include `registryID` and an expiration timestamp that is checked in the contract with the signed data. Validate function arguments.
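A sketch of the hardened signed message; `registryId` is assumed to be the registry id already stored in the contract, and `_expirationTime` is a hypothetical new function parameter:

```
// inside registerNodeFor(), before registering the node (fragment):
// solhint-disable-next-line not-rely-on-time
require(block.timestamp < _expirationTime, "signature expired"); // _expirationTime is hypothetical
require(_v == 27 || _v == 28, "invalid signature version");

bytes32 tempHash = keccak256(
    abi.encodePacked(
        _url,
        _props,
        _timeout,
        _weight,
        msg.sender,
        registryId,      // bind the signature to this registry and chain
        _expirationTime  // limit how long the signed message stays valid
    )
);
```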
null
```\\nbytes32 tempHash = keccak256(\\n abi.encodePacked(\\n \\_url,\\n \\_props,\\n \\_timeout,\\n \\_weight,\\n msg.sender\\n )\\n);\\n```\\n
BlockhashRegistry - Structure of provided blockheaders should be validated
medium
`getParentAndBlockhash` takes an rlp-encoded blockheader blob, extracts the parent hash and returns both the parent hash and the calculated blockhash of the provided data. The method is used to add blockhashes to the registry that are older than 256 blocks, as they are not available to the evm directly. This is done by establishing a trust-chain from a blockhash that is already in the registry up to an older block.\\nThe method assumes that valid rlp encoded data is provided but the structure is not verified (rlp decodes completely; block number is correct; timestamps are consistent with previous blocks, …), giving a wide range of freedom to an attacker with enough hashing power (or exploiting potential future issues with keccak) to forge blocks that would never be accepted by clients, but may be accepted by this smart contract. (threat: a mining pool forging arbitrary non-conformant blocks to exploit the BlockhashRegistry)\\nIt is not checked that input was actually provided. However, accessing an array at an invalid index will raise an exception in the EVM. Providing a single byte > `0xf7` will yield a result and succeed even though it would have never been accepted by a real node.\\nIt is assumed that the first byte is the rlp encoded length byte and an offset into the provided `_blockheader` bytes-array is calculated. Memory is subsequently accessed via a low-level `mload` at this calculated offset. However, it is never validated that the offset actually lies within the provided range of bytes `_blockheader`, leading to an out-of-bounds memory read access.\\nThe rlp encoded data is only partially decoded. For the first rlp list the number of length bytes is extracted. For the rlp encoded long string a length byte of 1 is assumed. The inline comment appears to be inaccurate or might be misleading: `// we also have to add "2" = 1 byte to it to skip the length-information`\\nInvalid intermediary blocks (e.g. with parent hash 0x00) will be accepted, potentially allowing an attacker to optimize the effort needed to forge invalid blocks, skipping to the desired blocknumber and overwriting a certain blockhash (see issue 6.18)\\nWith one collision (very unlikely) an attacker can add arbitrary or even random values to the BlockhashRegistry. The parent-hash of the starting blockheader cannot be verified by the contract ([target_block_random]<--parent_hash--[rnd]<--parent_hash--[rnd]<--parent_hash--...<--parent_hash--[collision]<--parent_hash_collision--[anchor_block]). While nodes can verify the block structure, bail on invalid structure, check the first block's hash and make sure the chain is intact, the contract can't.
Therefore one cannot assume the same trust in the BlockhashRegistry when recreating blocks compared to running a full node.\\n```\\nfunction getParentAndBlockhash(bytes memory \\_blockheader) public pure returns (bytes32 parentHash, bytes32 bhash) {\\n\\n /// we need the 1st byte of the blockheader to calculate the position of the parentHash\\n uint8 first = uint8(\\_blockheader[0]);\\n\\n /// calculates the offset\\n /// by using the 1st byte (usually f9) and substracting f7 to get the start point of the parentHash information\\n /// we also have to add "2" = 1 byte to it to skip the length-information\\n require(first > 0xf7, "invalid offset");\\n uint8 offset = first - 0xf7 + 2;\\n\\n /// we are using assembly because it's the most efficent way to access the parent blockhash within the rlp-encoded blockheader\\n // solium-disable-next-line security/no-inline-assembly\\n assembly { // solhint-disable-line no-inline-assembly\\n // mstore to get the memory pointer of the blockheader to 0x20\\n mstore(0x20, \\_blockheader)\\n\\n // we load the pointer we just stored\\n // then we add 0x20 (32 bytes) to get to the start of the blockheader\\n // then we add the offset we calculated\\n // and load it to the parentHash variable\\n parentHash :=mload(\\n add(\\n add(\\n mload(0x20), 0x20\\n ), offset)\\n )\\n }\\n bhash = keccak256(\\_blockheader);\\n```\\n
Validate that the provided data is within a sane range of bytes that is expected (min/max blockheader sizes).\\nValidate that the provided data is actually an rlp encoded blockheader.\\nValidate that the offset for the parent Hash is within the provided data.\\nValidate that the parent Hash is non zero.\\nValidate that blockhashes do not repeat.
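A sketch combining these checks in a simplified `getParentAndBlockhash` (the size bounds are illustrative):

```
pragma solidity ^0.5.10;

contract BlockhashValidationSketch {
    function getParentAndBlockhash(bytes memory _blockheader)
        public
        pure
        returns (bytes32 parentHash, bytes32 bhash)
    {
        // plausible size bounds for a serialized Ethereum block header (illustrative)
        require(_blockheader.length > 56 && _blockheader.length < 2048, "invalid header size");

        uint8 first = uint8(_blockheader[0]);
        require(first > 0xf7, "invalid offset");
        uint8 offset = first - 0xf7 + 2;

        // the 32-byte parent hash must lie within the provided buffer
        require(uint256(offset) + 32 <= _blockheader.length, "offset out of bounds");

        // solium-disable-next-line security/no-inline-assembly
        assembly { // solhint-disable-line no-inline-assembly
            parentHash := mload(add(add(_blockheader, 0x20), offset))
        }
        require(parentHash != 0x0, "zero parent hash");
        bhash = keccak256(_blockheader);
    }
}
```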
null
```\\nfunction getParentAndBlockhash(bytes memory \\_blockheader) public pure returns (bytes32 parentHash, bytes32 bhash) {\\n\\n /// we need the 1st byte of the blockheader to calculate the position of the parentHash\\n uint8 first = uint8(\\_blockheader[0]);\\n\\n /// calculates the offset\\n /// by using the 1st byte (usually f9) and substracting f7 to get the start point of the parentHash information\\n /// we also have to add "2" = 1 byte to it to skip the length-information\\n require(first > 0xf7, "invalid offset");\\n uint8 offset = first - 0xf7 + 2;\\n\\n /// we are using assembly because it's the most efficent way to access the parent blockhash within the rlp-encoded blockheader\\n // solium-disable-next-line security/no-inline-assembly\\n assembly { // solhint-disable-line no-inline-assembly\\n // mstore to get the memory pointer of the blockheader to 0x20\\n mstore(0x20, \\_blockheader)\\n\\n // we load the pointer we just stored\\n // then we add 0x20 (32 bytes) to get to the start of the blockheader\\n // then we add the offset we calculated\\n // and load it to the parentHash variable\\n parentHash :=mload(\\n add(\\n add(\\n mload(0x20), 0x20\\n ), offset)\\n )\\n }\\n bhash = keccak256(\\_blockheader);\\n```\\n
Registries - Incomplete input validation and inconsistent order of validations Pending
medium
Methods and Functions usually live in one of two worlds:\\n`public` API - methods declared with visibility `public` or `external` exposed for interaction by other parties\\n`internal` API - methods declared with visibility `internal`, `private` that are not exposed for interaction by other parties\\nWhile it is good practice to visually distinguish internal from public API by following a commonly accepted naming convention, e.g. by prefixing internal functions with an underscore (`_doSomething` vs. `doSomething`) or adding the keyword `unsafe` to functions that are not performing checks and may have a dramatic effect on the system (`_unsafePayout` vs. `RequestPayout`), it is important to properly verify that inputs to methods are within expected ranges for the implementation.\\nInput validation checks should be explicit and well documented as part of the code's documentation. This is to make sure that smart-contracts are robust against erroneous inputs and reduce the potential attack surface for exploitation.\\nIt is good practice to verify the method's input as early as possible and only perform further actions if the validation succeeds. Methods can be split into an external or public API that performs initial checks and subsequently calls an internal method that performs the action.\\nThe following lists some public API methods that are not properly checking the provided data:\\n`BlockhashRegistry.reCalculateBlockheaders` - `bhash` can be zero; blockheaders can be empty\\n`BlockhashRegistry.getParentAndBlockhash` - the blockheader structure can be random as long as the parent hash can be extracted\\n`BlockhashRegistry.recreateBlockheaders` - blockheaders can be empty; Arguments should be validated before calculating values that depend on them:\\n```\\nassert(\\_blockNumber > \\_blockheaders.length);\\n```\\n\\n`BlockhashRegistry.searchForAvailableBlock` - `_startNumber + _numBlocks` can be > `block.number`; `_startNumber + _numBlocks` can overflow.\\n`NodeRegistry.removeNode` - should check `require(_nodeIndex < nodes.length)` first before any other action.\\n```\\nfunction removeNode(uint \\_nodeIndex) internal {\\n // trigger event\\n emit LogNodeRemoved(nodes[\\_nodeIndex].url, nodes[\\_nodeIndex].signer);\\n // deleting the old entry\\n delete urlIndex[keccak256(bytes(nodes[\\_nodeIndex].url))];\\n uint length = nodes.length;\\n\\n assert(length > 0);\\n```\\n\\n`NodeRegistry.registerNodeFor` - Signature version `v` should be checked to be either `27` or `28` before verifying it.\\n```\\nfunction registerNodeFor(\\n string calldata \\_url,\\n uint64 \\_props,\\n uint64 \\_timeout,\\n address \\_signer,\\n uint64 \\_weight,\\n uint8 \\_v,\\n bytes32 \\_r,\\n bytes32 \\_s\\n)\\n external\\n payable\\n{\\n```\\n\\n`NodeRegistry.revealConvict` - unchecked `signer`\\n```\\nSignerInformation storage si = signerIndex[\\_signer];\\n```\\n\\n`NodeRegistry.revealConvict` - signer status can be checked earlier.\\n```\\nrequire(si.stage != Stages.Convicted, "node already convicted");\\n```\\n\\n`NodeRegistry.updateNode` - the check whether the `newURL` is already registered can be done earlier\\n```\\nrequire(!urlIndex[newURl].used, "url is already in use");\\n```\\n
Use Checks-Effects-Interactions pattern for all functions.
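For example, `removeNode` could validate its argument before emitting the event or touching storage (a fragment reusing the contract's names):

```
function removeNode(uint _nodeIndex) internal {
    // checks first: validate the argument before any event or state change
    require(_nodeIndex < nodes.length, "invalid node index");

    // effects afterwards
    emit LogNodeRemoved(nodes[_nodeIndex].url, nodes[_nodeIndex].signer);
    delete urlIndex[keccak256(bytes(nodes[_nodeIndex].url))];

    // remainder of the removal logic unchanged
}
```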
null
```\\nassert(\\_blockNumber > \\_blockheaders.length);\\n```\\n
BlockhashRegistry - recreateBlockheaders allows invalid parent hashes for intermediary blocks
medium
It is assumed that a blockhash of `0x00` is invalid, but the method accepts intermediary parent hashes extracted from blockheaders that are zero when establishing the trust chain.\\n`recreateBlockheaders` relies on `reCalculateBlockheaders` to correctly establish a chain of trust from the provided list of `_blockheaders` to a valid blockhash stored in the contract. However, `reCalculateBlockheaders` fails to raise an exception in case `getParentAndBlockhash` returns a parent hash of `0x00`. Subsequently, it will accept the invalid parent hash and continue to establish the trust chain without raising an error.\\nThis may allow an attacker with enough hashing power to store a blockheader hash that is actually invalid on the real chain but accepted within this smart contract. This may even only be done temporarily to overwrite an existing hash for a short period of time (see https://github.com/ConsenSys/slockit-in3-audit-2019-09/issues/24).\\n```\\nfor (uint i = 0; i < \\_blockheaders.length; i++) {\\n (calcParent, calcBlockhash) = getParentAndBlockhash(\\_blockheaders[i]);\\n if (calcBlockhash != currentBlockhash) {\\n return 0x0;\\n }\\n currentBlockhash = calcParent;\\n}\\n```\\n
Stop processing the array of `_blockheaders` immediately if a blockheader is invalid.
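A sketch of the loop failing hard instead of silently continuing (a fragment, names as in the quoted snippet):

```
for (uint i = 0; i < _blockheaders.length; i++) {
    (calcParent, calcBlockhash) = getParentAndBlockhash(_blockheaders[i]);
    // abort immediately on a broken link or a zero parent hash instead
    // of returning 0x0 or silently extending the chain
    require(calcBlockhash == currentBlockhash, "invalid chain link");
    require(calcParent != 0x0, "zero parent hash");
    currentBlockhash = calcParent;
}
```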
null
```\\nfor (uint i = 0; i < \\_blockheaders.length; i++) {\\n (calcParent, calcBlockhash) = getParentAndBlockhash(\\_blockheaders[i]);\\n if (calcBlockhash != currentBlockhash) {\\n return 0x0;\\n }\\n currentBlockhash = calcParent;\\n}\\n```\\n
BlockhashRegistry - recreateBlockheaders succeeds and emits an event even though no blockheaders have been provided
medium
The method is used to re-create blockhashes from a list of rlp-encoded `_blockheaders`. However, the method never checks if `_blockheaders` actually contains items. The result is that the method will unnecessarily store the same value that is already in the `blockhashMapping` at the same location and wrongly log `LogBlockhashAdded` even though nothing has been added nor changed.\\nAssume `_blockheaders` is empty and the registry already knows the blockhash of `_blockNumber`:\\n```\\nfunction recreateBlockheaders(uint \\_blockNumber, bytes[] memory \\_blockheaders) public {\\n\\n bytes32 currentBlockhash = blockhashMapping[\\_blockNumber];\\n require(currentBlockhash != 0x0, "parentBlock is not available");\\n\\n bytes32 calculatedHash = reCalculateBlockheaders(\\_blockheaders, currentBlockhash);\\n require(calculatedHash != 0x0, "invalid headers");\\n```\\n\\nAn attempt is made to re-calculate the hash of an empty `_blockheaders` array (also passing the `currentBlockhash` from the registry):\\n```\\nbytes32 calculatedHash = reCalculateBlockheaders(\\_blockheaders, currentBlockhash);\\n```\\n\\nThe following loop in `reCalculateBlockheaders` is skipped and the `currentBlockhash` is returned.\\n```\\nfunction reCalculateBlockheaders(bytes[] memory \\_blockheaders, bytes32 \\_bHash) public pure returns (bytes32 bhash) {\\n\\n bytes32 currentBlockhash = \\_bHash;\\n bytes32 calcParent = 0x0;\\n bytes32 calcBlockhash = 0x0;\\n\\n /// save to use for up to 200 blocks, exponential increase of gas-usage afterwards\\n for (uint i = 0; i < \\_blockheaders.length; i++) {\\n (calcParent, calcBlockhash) = getParentAndBlockhash(\\_blockheaders[i]);\\n if (calcBlockhash != currentBlockhash) {\\n return 0x0;\\n }\\n currentBlockhash = calcParent;\\n }\\n\\n return currentBlockhash;\\n```\\n\\nThe assertion does not fire, and the `bnr` used to store the `calculatedHash` is the same `_blockNumber` initially provided to the method as an argument. Nothing has changed, but an event is emitted.\\n```\\n /// we should never fail this assert, as this would mean that we were able to recreate a invalid blockchain\\n assert(\\_blockNumber > \\_blockheaders.length);\\n uint bnr = \\_blockNumber - \\_blockheaders.length;\\n blockhashMapping[bnr] = calculatedHash;\\n emit LogBlockhashAdded(bnr, calculatedHash);\\n}\\n```\\n
The method is crucial for the system to work correctly and must be tightly controlled by input validation. It should not be allowed to overwrite an existing value in the contract (issue 6.29) or emit an event even though nothing has happened. Therefore, validate that user-provided input is within safe bounds. In this case, that at least one `_blockheader` has been provided. Validate that `_blockNumber` is less than `block.number`, and do not expect that parts of the code will throw and save the contract from exploitation.
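A sketch of `recreateBlockheaders` with the validation up front (a fragment, reusing the contract's storage and events):

```
function recreateBlockheaders(uint _blockNumber, bytes[] memory _blockheaders) public {
    // explicit input validation instead of relying on a later assert
    require(_blockheaders.length > 0, "no blockheaders provided");
    require(_blockNumber <= block.number, "blockNumber too high");
    require(_blockNumber > _blockheaders.length, "too many blockheaders");

    bytes32 currentBlockhash = blockhashMapping[_blockNumber];
    require(currentBlockhash != 0x0, "parentBlock is not available");

    bytes32 calculatedHash = reCalculateBlockheaders(_blockheaders, currentBlockhash);
    require(calculatedHash != 0x0, "invalid headers");

    uint bnr = _blockNumber - _blockheaders.length;
    blockhashMapping[bnr] = calculatedHash;
    emit LogBlockhashAdded(bnr, calculatedHash);
}
```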
null
```\\nfunction recreateBlockheaders(uint \\_blockNumber, bytes[] memory \\_blockheaders) public {\\n\\n bytes32 currentBlockhash = blockhashMapping[\\_blockNumber];\\n require(currentBlockhash != 0x0, "parentBlock is not available");\\n\\n bytes32 calculatedHash = reCalculateBlockheaders(\\_blockheaders, currentBlockhash);\\n require(calculatedHash != 0x0, "invalid headers");\\n```\\n
NodeRegistry.updateNode replaces signer with owner and emits inconsistent events
medium
When the `owner` calls the `updateNode()` function providing a new `url` for the node, the `signer` of the `url` is replaced by `msg.sender`, which in this case is the `owner` of the node. Note that the new URL can resolve to the same URL as before (see https://github.com/ConsenSys/slockit-in3-audit-2019-09/issues/36).\\n```\\nif (newURl != keccak256(bytes(node.url))) {\\n\\n // deleting the old entry\\n delete urlIndex[keccak256(bytes(node.url))];\\n\\n // make sure the new url is not already in use\\n require(!urlIndex[newURl].used, "url is already in use");\\n\\n UrlInformation memory ui;\\n ui.used = true;\\n ui.signer = msg.sender;\\n urlIndex[newURl] = ui;\\n node.url = \\_url;\\n}\\n```\\n\\nFurthermore, the method emits a `LogNodeRegistered` event when the node structure is updated. However, the event will always emit `msg.sender` as the signer even though that might not be true. For example, if the `url` does not change, the signer can still be another account that was previously registered with `registerNodeFor` and is not necessarily the `owner`.\\n```\\nemit LogNodeRegistered(\\n node.url,\\n \\_props,\\n msg.sender,\\n node.deposit\\n);\\n```\\n\\n```\\nevent LogNodeRegistered(string url, uint props, address signer, uint deposit);\\n```\\n
The `updateNode()` function gets the `signer` as an input used to reference the node structure, and this `signer` should be set for the `UrlInformation`.\\n```\\nfunction updateNode(\\n address \\_signer,\\n string calldata \\_url,\\n uint64 \\_props,\\n uint64 \\_timeout,\\n uint64 \\_weight\\n )\\n```\\n\\nThe method should actually only allow changing node properties when `owner == signer`; otherwise `updateNode` bypasses the strict requirements enforced with `registerNodeFor`, where e.g. the `url` needs to be signed by the signer in order to register it.\\nThe emitted event should always emit `node.signer` instead of `msg.sender`, which can be wrong.\\nThe method should emit its own distinct event `LogNodeUpdated` for audit purposes and to be able to distinguish new node registrations from node structure updates. This might also require software changes to client/node implementations to listen for node updates.
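A sketch of a distinct update event that reports `node.signer` rather than `msg.sender`; the event name `LogNodeUpdated` is a suggestion, not existing code:\\n```\\nevent LogNodeUpdated(string url, uint props, address signer, uint deposit);\\n\\nemit LogNodeUpdated(\\n node.url,\\n \\_props,\\n node.signer,\\n node.deposit\\n);\\n```\\n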
null
```\\nif (newURl != keccak256(bytes(node.url))) {\\n\\n // deleting the old entry\\n delete urlIndex[keccak256(bytes(node.url))];\\n\\n // make sure the new url is not already in use\\n require(!urlIndex[newURl].used, "url is already in use");\\n\\n UrlInformation memory ui;\\n ui.used = true;\\n ui.signer = msg.sender;\\n urlIndex[newURl] = ui;\\n node.url = \\_url;\\n}\\n```\\n
NodeRegistry - In3Node memory n is never used
low
In `NodeRegistry`, the local variable `In3Node memory n` is assigned but never used inside the modifier `onlyActiveState`; the subsequent assertion reads the node from storage again.\\n```\\nmodifier onlyActiveState(address \\_signer) {\\n\\n SignerInformation memory si = signerIndex[\\_signer];\\n require(si.stage == Stages.Active, "address is not an in3-signer");\\n\\n In3Node memory n = nodes[si.index];\\n assert(nodes[si.index].signer == \\_signer);\\n \\_;\\n}\\n```\\n
Use `n` in the assertion to access the node signer `assert(n.signer == _signer);` or directly access it from storage and avoid copying the struct.
null
```\\nmodifier onlyActiveState(address \\_signer) {\\n\\n SignerInformation memory si = signerIndex[\\_signer];\\n require(si.stage == Stages.Active, "address is not an in3-signer");\\n\\n In3Node memory n = nodes[si.index];\\n assert(nodes[si.index].signer == \\_signer);\\n \\_;\\n}\\n```\\n
NodeRegistry - removeNode unnecessarily casts the nodeIndex to uint64 potentially truncating its value
low
`removeNode` removes a node from the Nodes array. This is done by copying the last node of the array to the `_nodeIndex` of the node that is to be removed. Finally the node array size is decreased.\\nA Node's index is also referenced in the `SignerInformation` struct. This index needs to be adjusted when removing a node from the array as the last node is copied to the index of the node that is to be removed.\\nWhen adjusting the Node's index in the `SignerInformation` struct `removeNode` casts the index to `uint64`. This is both unnecessary as the struct defines the index as `uint` and theoretically dangerous if a node at an index greater than `uint64_max` is removed. The resulting `SignerInformation` index will be truncated to `uint64` leading to an inconsistency in the contract.\\n```\\nstruct SignerInformation {\\n uint64 lockedTime; /// timestamp until the deposit of an in3-node can not be withdrawn after the node was removed\\n address owner; /// the owner of the node\\n\\n Stages stage; /// state of the address\\n\\n uint depositAmount; /// amount of deposit to be locked, used only after a node had been removed\\n\\n uint index; /// current index-position of the node in the node-array\\n}\\n```\\n\\n```\\n// move the last entry to the removed one.\\nIn3Node memory m = nodes[length - 1];\\nnodes[\\_nodeIndex] = m;\\n\\nSignerInformation storage si = signerIndex[m.signer];\\nsi.index = uint64(\\_nodeIndex);\\nnodes.length--;\\n```\\n
Resolution\\nFixed as per recommendation https://git.slock.it/in3/in3-contracts/commit/6c35dd422e27eec1b1d2f70e328268014cadb515.\\nDo not cast and therefore truncate the index.
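A minimal sketch of the uncast assignment in `removeNode`:\\n```\\nSignerInformation storage si = signerIndex[m.signer];\\nsi.index = \\_nodeIndex; // no uint64 cast, no truncation\\n```\\n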
null
```\\nstruct SignerInformation {\\n uint64 lockedTime; /// timestamp until the deposit of an in3-node can not be withdrawn after the node was removed\\n address owner; /// the owner of the node\\n\\n Stages stage; /// state of the address\\n\\n uint depositAmount; /// amount of deposit to be locked, used only after a node had been removed\\n\\n uint index; /// current index-position of the node in the node-array\\n}\\n```\\n
BlockhashRegistry - assembly code can be optimized
low
The following code can be optimized by removing the `mload` and `mstore`:\\n```\\nrequire(first > 0xf7, "invalid offset");\\nuint8 offset = first - 0xf7 + 2;\\n\\n/// we are using assembly because it's the most efficent way to access the parent blockhash within the rlp-encoded blockheader\\n// solium-disable-next-line security/no-inline-assembly\\nassembly { // solhint-disable-line no-inline-assembly\\n // mstore to get the memory pointer of the blockheader to 0x20\\n mstore(0x20, \\_blockheader)\\n\\n // we load the pointer we just stored\\n // then we add 0x20 (32 bytes) to get to the start of the blockheader\\n // then we add the offset we calculated\\n // and load it to the parentHash variable\\n parentHash :=mload(\\n add(\\n add(\\n mload(0x20), 0x20\\n ), offset)\\n )\\n}\\n```\\n
The `mstore`/`mload` round-trip can be dropped and the `_blockheader` memory pointer used directly:\\n```\\nassembly { // solhint-disable-line no-inline-assembly\\n // mstore to get the memory pointer of the blockheader to 0x20\\n //mstore(0x20, \\_blockheader) //@audit should assign 0x20ptr to variable first and use it.\\n\\n // we load the pointer we just stored\\n // then we add 0x20 (32 bytes) to get to the start of the blockheader\\n // then we add the offset we calculated\\n // and load it to the parentHash variable\\n parentHash :=mload(\\n add(\\n add(\\n \\_blockheader, 0x20\\n ), offset)\\n )\\n }\\n```\\n
null
```\\nrequire(first > 0xf7, "invalid offset");\\nuint8 offset = first - 0xf7 + 2;\\n\\n/// we are using assembly because it's the most efficent way to access the parent blockhash within the rlp-encoded blockheader\\n// solium-disable-next-line security/no-inline-assembly\\nassembly { // solhint-disable-line no-inline-assembly\\n // mstore to get the memory pointer of the blockheader to 0x20\\n mstore(0x20, \\_blockheader)\\n\\n // we load the pointer we just stored\\n // then we add 0x20 (32 bytes) to get to the start of the blockheader\\n // then we add the offset we calculated\\n // and load it to the parentHash variable\\n parentHash :=mload(\\n add(\\n add(\\n mload(0x20), 0x20\\n ), offset)\\n )\\n}\\n```\\n
BlockhashRegistry - Existing blockhashes can be overwritten
low
The last 256 blocks that are available in the EVM environment are stored in `BlockhashRegistry` by calling the `snapshot()` or `saveBlockNumber(uint _blockNumber)` functions. Older blocks are recreated by calling `recreateBlockheaders`.\\nThe methods will overwrite existing blockhashes.\\n```\\nfunction saveBlockNumber(uint \\_blockNumber) public {\\n\\n bytes32 bHash = blockhash(\\_blockNumber);\\n\\n require(bHash != 0x0, "block not available");\\n\\n blockhashMapping[\\_blockNumber] = bHash;\\n emit LogBlockhashAdded(\\_blockNumber, bHash);\\n}\\n```\\n\\n```\\nblockhashMapping[bnr] = calculatedHash;\\n```\\n
Resolution\\nAddressed with 80bb6ecf and 17d450cf by checking if the blockhash exists and changing the `assert` to `require`.\\nIf a blockhash is already saved in the smart contract, checking for it first prevents a redundant SSTORE and saves gas. Require that the hash for the given block number is not already stored.\\n```\\nrequire(blockhashMapping[\\_blockNumber] == 0x0, "block already saved");\\n```\\n
null
```\\nfunction saveBlockNumber(uint \\_blockNumber) public {\\n\\n bytes32 bHash = blockhash(\\_blockNumber);\\n\\n require(bHash != 0x0, "block not available");\\n\\n blockhashMapping[\\_blockNumber] = bHash;\\n emit LogBlockhashAdded(\\_blockNumber, bHash);\\n}\\n```\\n
An account that confirms a transaction via AssetProxyOwner can indefinitely block that transaction
high
When a transaction reaches the required number of confirmations in `confirmTransaction()`, its confirmation time is recorded:\\n```\\n/// @dev Allows an owner to confirm a transaction.\\n/// @param transactionId Transaction ID.\\nfunction confirmTransaction(uint256 transactionId)\\n public\\n ownerExists(msg.sender)\\n transactionExists(transactionId)\\n notConfirmed(transactionId, msg.sender)\\n notFullyConfirmed(transactionId)\\n{\\n confirmations[transactionId][msg.sender] = true;\\n emit Confirmation(msg.sender, transactionId);\\n if (isConfirmed(transactionId)) {\\n \\_setConfirmationTime(transactionId, block.timestamp);\\n }\\n}\\n```\\n\\nBefore the time lock has elapsed and the transaction is executed, any of the owners that originally confirmed the transaction can revoke their confirmation via `revokeConfirmation()`:\\n```\\n/// @dev Allows an owner to revoke a confirmation for a transaction.\\n/// @param transactionId Transaction ID.\\nfunction revokeConfirmation(uint256 transactionId)\\n public\\n ownerExists(msg.sender)\\n confirmed(transactionId, msg.sender)\\n notExecuted(transactionId)\\n{\\n confirmations[transactionId][msg.sender] = false;\\n emit Revocation(msg.sender, transactionId);\\n}\\n```\\n\\nImmediately after, that owner can call `confirmTransaction()` again, which will reset the confirmation time and thus the time lock.\\nThis is especially troubling in the case of a single compromised key, but it's also an issue for disagreement among owners, where any m of the n owners should be able to execute transactions but could be blocked.\\nMitigations\\nOnly an owner can do this, and that owner has to be part of the group that originally confirmed the transaction. This means the malicious owner may have to front run the others to make sure they're in that initial confirmation set.\\nEven once a malicious owner is in position to execute this perpetual delay, they need to call `revokeConfirmation()` and `confirmTransaction()` again each time. Another owner can attempt to front run the attacker and execute their own `confirmTransaction()` immediately after the `revokeConfirmation()` to regain control.
There are several ways to address this, but to best preserve the original `MultiSigWallet` semantics, once a transaction has reached the required number of confirmations, it should be impossible to revoke confirmations. In the original implementation, this is enforced by immediately executing the transaction when the final confirmation is received.
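One way to enforce this while keeping the time lock, assuming the existing `notFullyConfirmed` modifier: add it to `revokeConfirmation` so confirmations become immutable once the required count is reached:\\n```\\nfunction revokeConfirmation(uint256 transactionId)\\n public\\n ownerExists(msg.sender)\\n confirmed(transactionId, msg.sender)\\n notExecuted(transactionId)\\n notFullyConfirmed(transactionId) // added: no revocation after full confirmation\\n{\\n confirmations[transactionId][msg.sender] = false;\\n emit Revocation(msg.sender, transactionId);\\n}\\n```\\n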
null
```\\n/// @dev Allows an owner to confirm a transaction.\\n/// @param transactionId Transaction ID.\\nfunction confirmTransaction(uint256 transactionId)\\n public\\n ownerExists(msg.sender)\\n transactionExists(transactionId)\\n notConfirmed(transactionId, msg.sender)\\n notFullyConfirmed(transactionId)\\n{\\n confirmations[transactionId][msg.sender] = true;\\n emit Confirmation(msg.sender, transactionId);\\n if (isConfirmed(transactionId)) {\\n \\_setConfirmationTime(transactionId, block.timestamp);\\n }\\n}\\n```\\n
Orders with signatures that require regular validation can have their validation bypassed if the order is partially filled
high
The signature types `Wallet`, `Validator`, and `EIP1271Wallet` require explicit validation to authorize each action performed on a given order. This means that if an order was signed using one of these methods, the `Exchange` must perform a validation step on the signature each time the order is submitted for a partial fill. In contrast, the other canonical signature types (EIP712, `EthSign`, and PreSigned) are only required to be validated by the `Exchange` on the order's first fill; subsequent fills take the order's existing fill amount as implicit validation that the order has a valid, published signature.\\nThis re-validation step for `Wallet`, `Validator`, and `EIP1271Wallet` signatures is intended to facilitate their use with contracts whose validation depends on some state that may change over time. For example, a validating contract may call into a price feed and determine that some order is invalid if its price deviates from some expected range. In this case, the repeated validation allows 0x users to make orders with custom fill conditions which are evaluated at run-time.\\nWe found that if the sender provides the contract with an invalid signature after the order in question has already been partially filled, the regular validation check required for `Wallet`, `Validator`, and `EIP1271Wallet` signatures can be bypassed entirely.\\nSignature validation takes place in `MixinExchangeCore._assertFillableOrder`. A signature is only validated if it passes the following criteria:\\n```\\n// Validate either on the first fill or if the signature type requires\\n// regular validation.\\naddress makerAddress = order.makerAddress;\\nif (orderInfo.orderTakerAssetFilledAmount == 0 ||\\n \\_doesSignatureRequireRegularValidation(\\n orderInfo.orderHash,\\n makerAddress,\\n signature\\n )\\n) {\\n```\\n\\nIn effect, signature validation only occurs if:\\n`orderInfo.orderTakerAssetFilledAmount == 0` OR\\n`_doesSignatureRequireRegularValidation(orderHash, makerAddress, signature)`\\nIf an order is partially filled, the first condition will evaluate to false. Then, that order's signature will only be validated if `_doesSignatureRequireRegularValidation` evaluates to true:\\n```\\nfunction \\_doesSignatureRequireRegularValidation(\\n bytes32 hash,\\n address signerAddress,\\n bytes memory signature\\n)\\n internal\\n pure\\n returns (bool needsRegularValidation)\\n{\\n // Read the signatureType from the signature\\n SignatureType signatureType = \\_readSignatureType(\\n hash,\\n signerAddress,\\n signature\\n );\\n\\n // Any signature type that makes an external call needs to be revalidated\\n // with every partial fill\\n needsRegularValidation =\\n signatureType == SignatureType.Wallet ||\\n signatureType == SignatureType.Validator ||\\n signatureType == SignatureType.EIP1271Wallet;\\n return needsRegularValidation;\\n}\\n```\\n\\nThe `SignatureType` returned from `_readSignatureType` is directly cast from the final byte of the passed-in signature. Any value that does not cast to `Wallet`, `Validator`, and `EIP1271Wallet` will cause `_doesSignatureRequireRegularValidation` to return false, skipping validation.\\nThe result is that an order whose signature requires regular validation can be forced to skip validation if it has been partially filled, by passing in an invalid signature.
There are a few options for remediation:\\nHave the `Exchange` validate the provided signature every time an order is filled.\\nRecord the first seen signature type or signature hash for each order, and check that subsequent actions are submitted with a matching signature.\\nThe first option requires the fewest changes, and does not require storing additional state. While this does mean some additional cost validating subsequent signatures, we feel the increase in flexibility is well worth it, as a maker could choose to create multiple valid signatures for use across different order books.
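A sketch of the first option: drop the first-fill shortcut so the signature is validated on every fill. The validation helper name here is hypothetical:\\n```\\n// always validate; no special case for partially filled orders\\naddress makerAddress = order.makerAddress;\\nrequire(\\n isValidOrderSignature(order, makerAddress, signature), // hypothetical helper\\n "INVALID\\_ORDER\\_SIGNATURE"\\n);\\n```\\n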
null
```\\n// Validate either on the first fill or if the signature type requires\\n// regular validation.\\naddress makerAddress = order.makerAddress;\\nif (orderInfo.orderTakerAssetFilledAmount == 0 ||\\n \\_doesSignatureRequireRegularValidation(\\n orderInfo.orderHash,\\n makerAddress,\\n signature\\n )\\n) {\\n```\\n
Changing the owners or required confirmations in the AssetProxyOwner can unconfirm a previously confirmed transaction
medium
Once a transaction has been confirmed in the `AssetProxyOwner`, it cannot be executed until a lock period has passed. During that time, any change to the number of required confirmations will cause this transaction to no longer be executable.\\nIf the number of required confirmations was decreased, then one or more owners will have to revoke their confirmation before the transaction can be executed.\\nIf the number of required confirmations was increased, then additional owners will have to confirm the transaction, and when the new required number of confirmations is reached, a new confirmation time will be recorded, and thus the time lock will restart.\\nSimilarly, if an owner that had previously confirmed the transaction is replaced, the number of confirmations will drop for existing transactions, and they will need to be confirmed again.\\nThis is not disastrous, but it's almost certainly unintended behavior and may make it difficult to make changes to the multisig owners and parameters.\\n`executeTransaction()` requires that at the time of execution, the transaction is confirmed:\\n```\\nfunction executeTransaction(uint256 transactionId)\\n public\\n notExecuted(transactionId)\\n fullyConfirmed(transactionId)\\n```\\n\\n`isConfirmed()` checks for exact equality with the number of required confirmations. Having too many confirmations is just as bad as too few:\\n```\\n/// @dev Returns the confirmation status of a transaction.\\n/// @param transactionId Transaction ID.\\n/// @return Confirmation status.\\nfunction isConfirmed(uint256 transactionId)\\n public\\n view\\n returns (bool)\\n{\\n uint256 count = 0;\\n for (uint256 i = 0; i < owners.length; i++) {\\n if (confirmations[transactionId][owners[i]]) {\\n count += 1;\\n }\\n if (count == required) {\\n return true;\\n }\\n }\\n}\\n```\\n\\nIf additional confirmations are required to reconfirm a transaction, that resets the time lock:\\n```\\n/// @dev Allows an owner to confirm a transaction.\\n/// @param transactionId Transaction ID.\\nfunction confirmTransaction(uint256 transactionId)\\n public\\n ownerExists(msg.sender)\\n transactionExists(transactionId)\\n notConfirmed(transactionId, msg.sender)\\n notFullyConfirmed(transactionId)\\n{\\n confirmations[transactionId][msg.sender] = true;\\n emit Confirmation(msg.sender, transactionId);\\n if (isConfirmed(transactionId)) {\\n \\_setConfirmationTime(transactionId, block.timestamp);\\n }\\n}\\n```\\n
As in https://github.com/ConsenSys/0x-v3-audit-2019-09/issues/39, the semantics of the original `MultiSigWallet` were that once a transaction is fully confirmed, it's immediately executed. The time lock means this is no longer possible, but it is possible to record that the transaction is confirmed and never allow this to change. In fact, the confirmation time already records this. Once the confirmation time is non-zero, a transaction should always be considered confirmed.
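A sketch of that change, assuming the confirmation times live in a `confirmationTimes` mapping (the name is inferred from `_setConfirmationTime`):\\n```\\nfunction isConfirmed(uint256 transactionId)\\n public\\n view\\n returns (bool)\\n{\\n // a recorded confirmation time counts as permanent confirmation\\n if (confirmationTimes[transactionId] != 0) {\\n return true;\\n }\\n uint256 count = 0;\\n for (uint256 i = 0; i < owners.length; i++) {\\n if (confirmations[transactionId][owners[i]]) {\\n count += 1;\\n }\\n if (count == required) {\\n return true;\\n }\\n }\\n}\\n```\\n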
null
```\\nfunction executeTransaction(uint256 transactionId)\\n public\\n notExecuted(transactionId)\\n fullyConfirmed(transactionId)\\n```\\n
Reentrancy in executeTransaction() Won't Fix
medium
In `MixinTransactions`, `executeTransaction()` and `batchExecuteTransactions()` do not have the `nonReentrant` modifier. Because of that, it is possible to execute nested transactions or call these functions during other reentrancy attacks on the exchange. The reason behind that decision is to be able to call functions with the `nonReentrant` modifier as delegated transactions.\\nNested transactions are partially prevented with a separate check that does not allow transaction execution if the exchange is currently in somebody else's context:\\n```\\n// Prevent `executeTransaction` from being called when context is already set\\naddress currentContextAddress\\_ = currentContextAddress;\\nif (currentContextAddress\\_ != address(0)) {\\n LibRichErrors.rrevert(LibExchangeRichErrors.TransactionInvalidContextError(\\n transactionHash,\\n currentContextAddress\\_\\n ));\\n}\\n```\\n\\nThis check still leaves some possibility of reentrancy. Allowing that behavior is dangerous and may create new attack vectors in the future.
Add a new modifier to `executeTransaction()` and `batchExecuteTransactions()` which is similar to `nonReentrant` but uses different storage slot.
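A sketch of such a modifier with its own mutex slot (names hypothetical):\\n```\\nuint256 private executeMutex;\\n\\nmodifier nonReentrantExecute() {\\n require(executeMutex == 0, "REENTRANCY\\_ILLEGAL");\\n executeMutex = 1;\\n \\_;\\n executeMutex = 0;\\n}\\n```\\n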
null
```\\n// Prevent `executeTransaction` from being called when context is already set\\naddress currentContextAddress\\_ = currentContextAddress;\\nif (currentContextAddress\\_ != address(0)) {\\n LibRichErrors.rrevert(LibExchangeRichErrors.TransactionInvalidContextError(\\n transactionHash,\\n currentContextAddress\\_\\n ));\\n}\\n```\\n
“Poison” order that consumes gas can block market trades Won't Fix
medium
The market buy/sell functions gather a list of orders together for the same asset and try to fill them in order until a target amount has been traded.\\nThese functions use `MixinWrapperFunctions._fillOrderNoThrow()` to attempt to fill each order but ignore failures. This way, if one order is unfillable for some reason, the overall market order can still succeed by filling other orders.\\nOrders can still force `_fillOrderNoThrow()` to revert by using an external contract for signature validation and having that contract consume all available gas.\\nThis makes it possible to advertise a “poison” order for a low price that will block all market orders from succeeding. It's reasonable to assume that off-chain order books will automatically include the best prices when constructing market orders, so this attack would likely be quite effective. Note that such an attack costs the attacker nothing because all they need is an on-chain contract that consumes all available gas (maybe via an assert). This makes it a very appealing attack vector for, e.g., an order book that wants to temporarily disable a competitor.\\nDetails\\n`_fillOrderNoThrow()` forwards all available gas when filling the order:\\n```\\n// ABI encode calldata for `fillOrder`\\nbytes memory fillOrderCalldata = abi.encodeWithSelector(\\n IExchangeCore(address(0)).fillOrder.selector,\\n order,\\n takerAssetFillAmount,\\n signature\\n);\\n\\n(bool didSucceed, bytes memory returnData) = address(this).delegatecall(fillOrderCalldata);\\n```\\n\\nSimilarly, when the `Exchange` attempts to fill an order that requires external signature validation (`Wallet`, `Validator`, or `EIP1271Wallet` signature types), it forwards all available gas:\\n```\\n(bool didSucceed, bytes memory returnData) = verifyingContractAddress.staticcall(callData);\\n```\\n\\nIf the verifying contract consumes all available gas, it can force the overall transaction to revert.\\nPedantic Note\\nTechnically, it's impossible to consume all remaining gas when called by another contract because the EVM holds back a small amount, but even at the block gas limit, the amount held back would be insufficient to complete the transaction.
Constrain the gas that is forwarded during signature validation. This can be constrained either as a part of the signature or as a parameter provided by the taker.
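A sketch of a gas-capped validation call, where `validationGasLimit` is a hypothetical bound supplied by the taker or encoded in the signature (pre-0.6 call syntax):\\n```\\n(bool didSucceed, bytes memory returnData) = verifyingContractAddress.staticcall.gas(validationGasLimit)(callData);\\n```\\n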
null
```\\n// ABI encode calldata for `fillOrder`\\nbytes memory fillOrderCalldata = abi.encodeWithSelector(\\n IExchangeCore(address(0)).fillOrder.selector,\\n order,\\n takerAssetFillAmount,\\n signature\\n);\\n\\n(bool didSucceed, bytes memory returnData) = address(this).delegatecall(fillOrderCalldata);\\n```\\n
Front running in matchOrders() Won't Fix
medium
Calls to `matchOrders()` are made to extract profit from the price difference between two opposite orders: left and right.\\n```\\nfunction matchOrders(\\n LibOrder.Order memory leftOrder,\\n LibOrder.Order memory rightOrder,\\n bytes memory leftSignature,\\n bytes memory rightSignature\\n)\\n```\\n\\nThe caller only pays protocol and transaction fees, so it's almost always profitable to front run every call to `matchOrders()`. That would lead to gas auctions and would make `matchOrders()` difficult to use.
Consider adding a commit-reveal scheme to `matchOrders()` to stop front running altogether.
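A rough commit-reveal sketch (all names hypothetical): the matcher first commits to a hash of the intended match, then performs it in a later block, which makes blind front running unprofitable:\\n```\\nmapping(bytes32 => uint256) public commitBlocks;\\n\\nfunction commitMatch(bytes32 commitment) external {\\n commitBlocks[commitment] = block.number;\\n}\\n\\n// inside matchOrders, before any fill logic:\\n// bytes32 commitment = keccak256(abi.encode(msg.sender, leftOrder, rightOrder));\\n// require(commitBlocks[commitment] != 0 && commitBlocks[commitment] < block.number, "NOT\\_COMMITTED");\\n```\\n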
null
```\\nfunction matchOrders(\\n LibOrder.Order memory leftOrder,\\n LibOrder.Order memory rightOrder,\\n bytes memory leftSignature,\\n bytes memory rightSignature\\n)\\n```\\n
The Exchange owner should not be able to call executeTransaction or batchExecuteTransaction Won't Fix
medium
If the owner calls either of these functions, the resulting `delegatecall` can pass `onlyOwner` modifiers even if the transaction signer is not the owner. This is because, regardless of the `contextAddress` set through `_executeTransaction`, the `onlyOwner` modifier checks `msg.sender`.\\n`_executeTransaction` sets the context address to the signer address, which is not `msg.sender` in this case:\\n```\\n// Set the current transaction signer\\naddress signerAddress = transaction.signerAddress;\\n\\_setCurrentContextAddressIfRequired(signerAddress, signerAddress);\\n```\\n\\nThe resulting `delegatecall` could target an admin function like this one:\\n```\\n/// @dev Registers an asset proxy to its asset proxy id.\\n/// Once an asset proxy is registered, it cannot be unregistered.\\n/// @param assetProxy Address of new asset proxy to register.\\nfunction registerAssetProxy(address assetProxy)\\n external\\n onlyOwner\\n{\\n // Ensure that no asset proxy exists with current id.\\n bytes4 assetProxyId = IAssetProxy(assetProxy).getProxyId();\\n address currentAssetProxy = \\_assetProxies[assetProxyId];\\n if (currentAssetProxy != address(0)) {\\n LibRichErrors.rrevert(LibExchangeRichErrors.AssetProxyExistsError(\\n assetProxyId,\\n currentAssetProxy\\n ));\\n }\\n \\n // Add asset proxy and log registration.\\n \\_assetProxies[assetProxyId] = assetProxy;\\n emit AssetProxyRegistered(\\n assetProxyId,\\n assetProxy\\n );\\n}\\n```\\n\\nThe `onlyOwner` modifier does not check the context address, but checks msg.sender:\\n```\\nfunction \\_assertSenderIsOwner()\\n internal\\n view\\n{\\n if (msg.sender != owner) {\\n LibRichErrors.rrevert(LibOwnableRichErrors.OnlyOwnerError(\\n msg.sender,\\n owner\\n ));\\n }\\n}\\n```\\n
Add a check to `_executeTransaction` that prevents the owner from calling this function.
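A minimal sketch of such a guard at the top of `_executeTransaction`:\\n```\\nrequire(msg.sender != owner, "OWNER\\_CANNOT\\_EXECUTE\\_TRANSACTION");\\n```\\n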
null
```\\n// Set the current transaction signer\\naddress signerAddress = transaction.signerAddress;\\n\\_setCurrentContextAddressIfRequired(signerAddress, signerAddress);\\n```\\n
By manipulating the gas limit, relayers can affect the outcome of ZeroExTransactions Won't Fix
low
ZeroExTransactions are meta transactions supported by the `Exchange`. They do not require that they are executed with a specific amount of gas, so the transaction relayer can choose how much gas to provide. By choosing a low gas limit, a relayer can affect the outcome of the transaction.\\nA `ZeroExTransaction` specifies a signer, an expiration, and call data for the transaction:\\n```\\nstruct ZeroExTransaction {\\n uint256 salt; // Arbitrary number to ensure uniqueness of transaction hash.\\n uint256 expirationTimeSeconds; // Timestamp in seconds at which transaction expires.\\n uint256 gasPrice; // gasPrice that transaction is required to be executed with.\\n address signerAddress; // Address of transaction signer.\\n bytes data; // AbiV2 encoded calldata.\\n}\\n```\\n\\nIn `MixinTransactions._executeTransaction()`, all available gas is forwarded in the delegate call, and the transaction is marked as executed:\\n```\\ntransactionsExecuted[transactionHash] = true;\\n(bool didSucceed, bytes memory returnData) = address(this).delegatecall(transaction.data);\\n```\\n\\nA likely attack vector for this is front running a `ZeroExTransaction` that ultimately invokes `_fillNoThrow()`. In this scenario, an attacker sees the call to `executeTransaction()` and makes their own call with a lower gas limit, causing the order being filled to run out of gas but allowing the transaction as a whole to succeed.\\nIf such an attack is successful, the `ZeroExTransaction` cannot be replayed, so the signer must produce a new signature and try again, ad infinitum.
Resolution\\nFrom the development team:\\nWhile this is an annoyance when used in combination with `marketBuyOrdersNoThrow` and `marketSellOrdersNoThrow`, it does not seem worth it to add a `gasLimit` to 0x transactions for this reason alone. Instead, this quirk should be documented along with a recommendation to use the `fillOrKill` variants of each market fill function when used in combination with 0x transactions.\\nAdd a `gasLimit` field to `ZeroExTransaction` and forward exactly that much gas via `delegatecall`. (Note that you must explicitly check that sufficient gas is available because the EVM allows you to supply a gas parameter that exceeds the actual remaining gas.)
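A sketch of the suggested `gasLimit` field and its use; pre-0.6 call syntax, and the explicit `gasleft()` check guards against the EVM silently capping the forwarded gas:\\n```\\nstruct ZeroExTransaction {\\n uint256 salt;\\n uint256 expirationTimeSeconds;\\n uint256 gasPrice;\\n uint256 gasLimit; // new: exact gas to forward\\n address signerAddress;\\n bytes data;\\n}\\n\\n// in \\_executeTransaction:\\nrequire(gasleft() >= transaction.gasLimit, "INSUFFICIENT\\_GAS");\\n(bool didSucceed, bytes memory returnData) = address(this).delegatecall.gas(transaction.gasLimit)(transaction.data);\\n```\\n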
null
```\\nstruct ZeroExTransaction {\\n uint256 salt; // Arbitrary number to ensure uniqueness of transaction hash.\\n uint256 expirationTimeSeconds; // Timestamp in seconds at which transaction expires.\\n uint256 gasPrice; // gasPrice that transaction is required to be executed with.\\n address signerAddress; // Address of transaction signer.\\n bytes data; // AbiV2 encoded calldata.\\n}\\n```\\n
Modifier ordering plays a significant role in modifier efficacy
low
The `nonReentrant` and `refundFinalBalance` modifiers always appear together across the 0x monorepo. When used, they invariably appear with `nonReentrant` listed first, followed by `refundFinalBalance`. This specific order appears inconsequential at first glance but is actually important. The order of execution is as follows:\\nThe `nonReentrant` modifier runs (`_lockMutexOrThrowIfAlreadyLocked`).\\nIf `refundFinalBalance` had any code before its `_;` placeholder, it would run now.\\nThe function itself runs.\\nThe `refundFinalBalance` modifier runs (`_refundNonZeroBalanceIfEnabled`).\\nThe `nonReentrant` modifier runs (`_unlockMutex`).\\nThe fact that the `refundFinalBalance` modifier runs before the mutex is unlocked is of particular importance because it potentially invokes an external call, which may reenter. If the order of the two modifiers were flipped, the mutex would unlock before the external call, defeating the purpose of the reentrancy guard.\\n```\\nnonReentrant\\nrefundFinalBalance\\n```\\n
Resolution\\nThis is fixed in 0xProject/0x-monorepo#2228 by introducing a new modifier that combines the two: `refundFinalBalance`.\\nAlthough the order of the modifiers is correct as-is, this pattern introduces cognitive overhead when making or reviewing changes to the 0x codebase. Because the two modifiers always appear together, it may make sense to combine the two into a single modifier where the order of operations is explicit.
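A sketch of a combined modifier built from the internal helpers named above; the modifier name is illustrative:\\n```\\nmodifier refundFinalBalanceNoReentry() {\\n \\_lockMutexOrThrowIfAlreadyLocked();\\n \\_;\\n \\_refundNonZeroBalanceIfEnabled();\\n \\_unlockMutex();\\n}\\n```\\n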
null
```\\nnonReentrant\\nrefundFinalBalance\\n```\\n
Several overflows in LibBytes
low
Several functions in `LibBytes` have integer overflows.\\n`LibBytes.readBytesWithLength` returns a pointer to a `bytes` array within an existing `bytes` array at some given `index`. The length of the nested array is added to the given `index` and checked against the parent array to ensure the data in the nested array is within the bounds of the parent. However, because the addition can overflow, the bounds check can be bypassed to return an array that points to data out of bounds of the parent array.\\n```\\nif (b.length < index + nestedBytesLength) {\\n LibRichErrors.rrevert(LibBytesRichErrors.InvalidByteOperationError(\\n LibBytesRichErrors\\n .InvalidByteOperationErrorCodes.LengthGreaterThanOrEqualsNestedBytesLengthRequired,\\n b.length,\\n index + nestedBytesLength\\n ));\\n}\\n```\\n\\nThe following functions have similar issues:\\n`readAddress`\\n`writeAddress`\\n`readBytes32`\\n`writeBytes32`\\n`readBytes4`
An overflow check should be added to the function. Alternatively, because `readBytesWithLength` does not appear to be used anywhere in the 0x project, the function should be removed from `LibBytes`. Additionally, the following functions in `LibBytes` are also not used and should be considered for removal:\\n`popLast20Bytes`\\n`writeAddress`\\n`writeBytes32`\\n`writeUint256`\\n`writeBytesWithLength`\\n`deepCopyBytes`
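A sketch of an overflow-safe bounds check for `readBytesWithLength`:\\n```\\nuint256 endIndex = index + nestedBytesLength;\\nrequire(endIndex >= index, "ADDITION\\_OVERFLOW");\\nrequire(b.length >= endIndex, "OUT\\_OF\\_BOUNDS");\\n```\\n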
null
```\\nif (b.length < index + nestedBytesLength) {\\n LibRichErrors.rrevert(LibBytesRichErrors.InvalidByteOperationError(\\n LibBytesRichErrors\\n .InvalidByteOperationErrorCodes.LengthGreaterThanOrEqualsNestedBytesLengthRequired,\\n b.length,\\n index + nestedBytesLength\\n ));\\n}\\n```\\n
NSignatureTypes enum value bypasses Solidity safety checks Won't Fix
low
The `ISignatureValidator` contract defines an enum `SignatureType` to represent the different types of signatures recognized within the exchange. The final enum value, `NSignatureTypes`, is not a valid signature type. Instead, it is used by `MixinSignatureValidator` to check that the value read from the signature is a valid enum value. However, Solidity now includes its own check for enum casting, and casting a value over the maximum enum size to an enum is no longer possible.\\nBecause of the added `NSignatureTypes` value, Solidity's check now recognizes `0x08` as a valid `SignatureType` value.\\nThe check is made here:\\n```\\n// Ensure signature is supported\\nif (uint8(signatureType) >= uint8(SignatureType.NSignatureTypes)) {\\n LibRichErrors.rrevert(LibExchangeRichErrors.SignatureError(\\n LibExchangeRichErrors.SignatureErrorCodes.UNSUPPORTED,\\n hash,\\n signerAddress,\\n signature\\n ));\\n}\\n```\\n
The check should be removed, as should the `SignatureTypes.NSignatureTypes` value.
null
```\\n// Ensure signature is supported\\nif (uint8(signatureType) >= uint8(SignatureType.NSignatureTypes)) {\\n LibRichErrors.rrevert(LibExchangeRichErrors.SignatureError(\\n LibExchangeRichErrors.SignatureErrorCodes.UNSUPPORTED,\\n hash,\\n signerAddress,\\n signature\\n ));\\n}\\n```\\n
Intentional secret reuse can block borrower and lender from accepting liquidation payment
high
For Dave (the liquidator) to claim the collateral he's purchasing, he must reveal secret D. Once that secret is revealed, Alice and Bob (the borrower and lender) can claim the payment.\\nSecrets must be provided via the `Sales.provideSecret()` function:\\n```\\n function provideSecret(bytes32 sale, bytes32 secret\\_) external {\\n require(sales[sale].set);\\n if (sha256(abi.encodePacked(secret\\_)) == secretHashes[sale].secretHashA) { secretHashes[sale].secretA = secret\\_; }\\n else if (sha256(abi.encodePacked(secret\\_)) == secretHashes[sale].secretHashB) { secretHashes[sale].secretB = secret\\_; }\\n else if (sha256(abi.encodePacked(secret\\_)) == secretHashes[sale].secretHashC) { secretHashes[sale].secretC = secret\\_; }\\n else if (sha256(abi.encodePacked(secret\\_)) == secretHashes[sale].secretHashD) { secretHashes[sale].secretD = secret\\_; }\\n else { revert(); }\\n }\\n```\\n\\nNote that if Dave chooses the same secret hash as either Alice, Bob, or Charlie (arbiter), there is no way to set `secretHashes[sale].secretD` because one of the earlier conditionals will execute.\\nFor Alice and Bob to later receive payment, they must be able to provide Dave's secret:\\n```\\n function accept(bytes32 sale) external {\\n require(!accepted(sale));\\n require(!off(sale));\\n require(hasSecrets(sale));\\n require(sha256(abi.encodePacked(secretHashes[sale].secretD)) == secretHashes[sale].secretHashD);\\n```\\n\\nDave can exploit this to obtain the collateral for free:\\nDave looks at Alice's secret hashes to see which will be used in the sale.\\nDave begins the liquidation process, using the same secret hash.\\nAlice and Bob reveal their secrets A and B through the process of moving the collateral.\\nDave now knows the preimage for the secret hash he provided. It was revealed by Alice already.\\nDave uses that secret to obtain the collateral.\\nAlice and Bob now want to receive payment, but they're unable to provide Dave's secret to the `Sales` smart contract due to the order of conditionals in `provideSecret()`.\\nAfter an expiration, Dave can claim a refund.\\nMitigating factors\\nAlice and Bob could notice that Dave chose a duplicate secret hash and refuse to proceed with the sale. This is not something they are likely to do.
Either change the way `provideSecret()` works to allow for duplicate secret hashes or reject duplicate hashes in `create()`.
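If rejecting duplicates in `create()` is preferred, a sketch of the check (parameter names hypothetical):\\n```\\nrequire(\\n secretHashD\\_ != secretHashA\\_ &&\\n secretHashD\\_ != secretHashB\\_ &&\\n secretHashD\\_ != secretHashC\\_,\\n "duplicate secret hash"\\n);\\n```\\n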
null
```\\n function provideSecret(bytes32 sale, bytes32 secret\\_) external {\\n require(sales[sale].set);\\n if (sha256(abi.encodePacked(secret\\_)) == secretHashes[sale].secretHashA) { secretHashes[sale].secretA = secret\\_; }\\n else if (sha256(abi.encodePacked(secret\\_)) == secretHashes[sale].secretHashB) { secretHashes[sale].secretB = secret\\_; }\\n else if (sha256(abi.encodePacked(secret\\_)) == secretHashes[sale].secretHashC) { secretHashes[sale].secretC = secret\\_; }\\n else if (sha256(abi.encodePacked(secret\\_)) == secretHashes[sale].secretHashD) { secretHashes[sale].secretD = secret\\_; }\\n else { revert(); }\\n }\\n```\\n
There is no way to convert between custom and non-custom funds Won't Fix
medium
Each fund is created using either `Funds.create()` or `Funds.createCustom()`. Both enforce a limitation that there can only be one fund per account:\\n```\\nfunction create(\\n uint256 maxLoanDur\\_,\\n uint256 maxFundDur\\_,\\n address arbiter\\_,\\n bool compoundEnabled\\_,\\n uint256 amount\\_\\n) external returns (bytes32 fund) {\\n require(fundOwner[msg.sender].lender != msg.sender || msg.sender == deployer); // Only allow one loan fund per address\\n```\\n\\n```\\nfunction createCustom(\\n uint256 minLoanAmt\\_,\\n uint256 maxLoanAmt\\_,\\n uint256 minLoanDur\\_,\\n uint256 maxLoanDur\\_,\\n uint256 maxFundDur\\_,\\n uint256 liquidationRatio\\_,\\n uint256 interest\\_,\\n uint256 penalty\\_,\\n uint256 fee\\_,\\n address arbiter\\_,\\n bool compoundEnabled\\_,\\n uint256 amount\\_\\n) external returns (bytes32 fund) {\\n require(fundOwner[msg.sender].lender != msg.sender || msg.sender == deployer); // Only allow one loan fund per address\\n```\\n\\nThese functions are the only place where `bools[fund].custom` is set, and there's no way to delete a fund once it exists. This means there's no way for a given account to switch between a custom and non-custom fund.\\nThis could be a problem if, for example, the default parameters change in a way that a user finds unappealing. They may want to switch to using a custom fund but find themselves unable to do so without moving to a new Ethereum account.
Either allow funds to be deleted or allow funds to be switched between custom and non-custom.
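One possible shape of a switch (names hypothetical); real code would also need to guard against toggling while loans are outstanding:\\n```\\nfunction setCustom(bytes32 fund, bool custom\\_) external {\\n require(msg.sender == lender(fund));\\n bools[fund].custom = custom\\_;\\n}\\n```\\n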
null
```\\nfunction create(\\n uint256 maxLoanDur\\_,\\n uint256 maxFundDur\\_,\\n address arbiter\\_,\\n bool compoundEnabled\\_,\\n uint256 amount\\_\\n) external returns (bytes32 fund) {\\n require(fundOwner[msg.sender].lender != msg.sender || msg.sender == deployer); // Only allow one loan fund per address\\n```\\n
Funds.maxFundDur has no effect if maxLoanDur is set
medium
`Funds.maxFundDur` specifies the maximum amount of time a fund should be active. It's checked in `request()` to ensure the duration of the loan won't exceed that time, but the check is skipped if `maxLoanDur` is set:\\n```\\nif (maxLoanDur(fund) > 0) {\\n require(loanDur\\_ <= maxLoanDur(fund));\\n} else {\\n require(now + loanDur\\_ <= maxFundDur(fund));\\n}\\n```\\n\\nIf a user sets `maxLoanDur` (the maximum loan duration) to 1 week and sets the `maxFundDur` (timestamp when all loans should be complete) to December 1st, then there can actually be a loan that ends on December 7th.
Check against `maxFundDur` even when `maxLoanDur` is set.
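A sketch of the adjusted check in `request()`, reusing the existing getters and treating a `maxFundDur` of zero as unset:\\n```\\nif (maxLoanDur(fund) > 0) {\\n require(loanDur\\_ <= maxLoanDur(fund));\\n}\\nif (maxFundDur(fund) > 0) {\\n require(now + loanDur\\_ <= maxFundDur(fund));\\n}\\n```\\n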
null
```\\nif (maxLoanDur(fund) > 0) {\\n require(loanDur\\_ <= maxLoanDur(fund));\\n} else {\\n require(now + loanDur\\_ <= maxFundDur(fund));\\n}\\n```\\n
Funds.update() lets users update fields that may not have any effect
low
`Funds.update()` allows users to update the following fields which are only used if `bools[fund].custom` is set:\\n`minLoanAmt`\\n`maxLoanAmt`\\n`minLoanDur`\\n`interest`\\n`penalty`\\n`fee`\\n`liquidationRatio`\\nIf `bools[fund].custom` is not set, then these changes have no effect. This may be misleading to users.\\n```\\nfunction update(\\n bytes32 fund,\\n uint256 minLoanAmt\\_,\\n uint256 maxLoanAmt\\_,\\n uint256 minLoanDur\\_,\\n uint256 maxLoanDur\\_,\\n uint256 maxFundDur\\_,\\n uint256 interest\\_,\\n uint256 penalty\\_,\\n uint256 fee\\_,\\n uint256 liquidationRatio\\_,\\n address arbiter\\_\\n) external {\\n require(msg.sender == lender(fund));\\n funds[fund].minLoanAmt = minLoanAmt\\_;\\n funds[fund].maxLoanAmt = maxLoanAmt\\_;\\n funds[fund].minLoanDur = minLoanDur\\_;\\n funds[fund].maxLoanDur = maxLoanDur\\_;\\n funds[fund].maxFundDur = maxFundDur\\_;\\n funds[fund].interest = interest\\_;\\n funds[fund].penalty = penalty\\_;\\n funds[fund].fee = fee\\_;\\n funds[fund].liquidationRatio = liquidationRatio\\_;\\n funds[fund].arbiter = arbiter\\_;\\n}\\n```\\n
Resolution\\nThis is fixed in AtomicLoans/atomicloans-eth-contracts#67.\\nThis could be addressed by creating two update functions: one for custom funds and one for non-custom funds. Only the update for custom funds would allow setting these values.
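A minimal sketch of the custom-only variant; it shows a subset of the fields and additionally requires the fund to be custom:\\n```\\nfunction updateCustom(bytes32 fund, uint256 interest\\_, uint256 penalty\\_, uint256 fee\\_) external {\\n require(msg.sender == lender(fund));\\n require(bools[fund].custom, "fund is not custom");\\n funds[fund].interest = interest\\_;\\n funds[fund].penalty = penalty\\_;\\n funds[fund].fee = fee\\_;\\n}\\n```\\n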
null
```\\nfunction update(\\n bytes32 fund,\\n uint256 minLoanAmt\\_,\\n uint256 maxLoanAmt\\_,\\n uint256 minLoanDur\\_,\\n uint256 maxLoanDur\\_,\\n uint256 maxFundDur\\_,\\n uint256 interest\\_,\\n uint256 penalty\\_,\\n uint256 fee\\_,\\n uint256 liquidationRatio\\_,\\n address arbiter\\_\\n) external {\\n require(msg.sender == lender(fund));\\n funds[fund].minLoanAmt = minLoanAmt\\_;\\n funds[fund].maxLoanAmt = maxLoanAmt\\_;\\n funds[fund].minLoanDur = minLoanDur\\_;\\n funds[fund].maxLoanDur = maxLoanDur\\_;\\n funds[fund].maxFundDur = maxFundDur\\_;\\n funds[fund].interest = interest\\_;\\n funds[fund].penalty = penalty\\_;\\n funds[fund].fee = fee\\_;\\n funds[fund].liquidationRatio = liquidationRatio\\_;\\n funds[fund].arbiter = arbiter\\_;\\n}\\n```\\n
Ingress.setContractAddress() can cause duplicate entries in contractKeys
medium
`setContractAddress()` checks `ContractDetails` existence by inspecting `contractAddress`. A `contractAddress` of `0` means that the contract does not already exist, and its name must be added to `contractKeys`:\\n```\\nfunction setContractAddress(bytes32 name, address addr) public returns (bool) {\\n require(name > 0x0000000000000000000000000000000000000000000000000000000000000000, "Contract name must not be empty.");\\n require(isAuthorized(msg.sender), "Not authorized to update contract registry.");\\n\\n ContractDetails memory info = registry[name];\\n // create info if it doesn't exist in the registry\\n if (info.contractAddress == address(0)) {\\n info = ContractDetails({\\n owner: msg.sender,\\n contractAddress: addr\\n });\\n\\n // Update registry indexing\\n contractKeys.push(name);\\n } else {\\n info.contractAddress = addr;\\n }\\n // update record in the registry\\n registry[name] = info;\\n\\n emit RegistryUpdated(addr,name);\\n\\n return true;\\n}\\n```\\n\\nIf, however, a contract is actually added with the address `0`, which is currently allowed in the code, then the contract does already exist, and adding the name to `contractKeys` again will result in a duplicate.\\nMitigation\\nAn admin can call `removeContract` repeatedly with the same name to remove multiple duplicate entries.
Resolution\\nThis is fixed in PegaSysEng/[email protected]faff726.\\nEither disallow a contract address of `0` or check for existence via the `owner` field instead (which can never be 0).
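Sketches of both options; the second tests existence via `owner`, which is never zero for stored entries:\\n```\\n// option 1: reject the zero address on write\\nrequire(addr != address(0), "Contract address must not be zero.");\\n\\n// option 2: test existence via the owner field\\nif (info.owner == address(0)) {\\n // entry does not exist yet; create it and push the key\\n}\\n```\\n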
null
```\\nfunction setContractAddress(bytes32 name, address addr) public returns (bool) {\\n require(name > 0x0000000000000000000000000000000000000000000000000000000000000000, "Contract name must not be empty.");\\n require(isAuthorized(msg.sender), "Not authorized to update contract registry.");\\n\\n ContractDetails memory info = registry[name];\\n // create info if it doesn't exist in the registry\\n if (info.contractAddress == address(0)) {\\n info = ContractDetails({\\n owner: msg.sender,\\n contractAddress: addr\\n });\\n\\n // Update registry indexing\\n contractKeys.push(name);\\n } else {\\n info.contractAddress = addr;\\n }\\n // update record in the registry\\n registry[name] = info;\\n\\n emit RegistryUpdated(addr,name);\\n\\n return true;\\n}\\n```\\n
Use specific contract types instead of address where possible
low
For clarity and to get more out of the Solidity type checker, it's generally preferred to use a specific contract type for variables rather than the generic `address`.\\n`AccountRules.ingressContractAddress` could instead be `AccountRules.ingressContract` and use the type `AccountIngress`:\\n```\\naddress private ingressContractAddress;\\n```\\n\\n```\\nAccountIngress ingressContract = AccountIngress(ingressContractAddress);\\n```\\n\\n```\\nconstructor (address ingressAddress) public {\\n```\\n\\nThis same pattern is found in `NodeRules`:\\n```\\naddress private nodeIngressContractAddress;\\n```\\n
Where possible, use a specific contract type rather than `address`.
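A sketch of the typed variant for `AccountRules`:\\n```\\nAccountIngress private ingressContract;\\n\\nconstructor (AccountIngress ingress) public {\\n ingressContract = ingress;\\n}\\n```\\n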
null
```\\naddress private ingressContractAddress;\\n```\\n
Ingress should use a set
low
The `AdminList`, `AccountRulesList`, and `NodeRulesList` contracts have been recently rewritten to use a set. `Ingress` has the semantics of a set but has not been written the same way.\\nThis leads to some inefficiencies. In particular, `Ingress.removeContract` is an O(n) operation:\\n```\\nfor (uint i = 0; i < contractKeys.length; i++) {\\n // Delete the key from the array + mapping if it is present\\n if (contractKeys[i] == name) {\\n delete registry[contractKeys[i]];\\n contractKeys[i] = contractKeys[contractKeys.length - 1];\\n delete contractKeys[contractKeys.length - 1];\\n contractKeys.length--;\\n```\\n
Use the same set implementation for Ingress: an array of `ContractDetails` and a mapping of names to indexes in that array.
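One way to get O(1) removal while keeping the existing `registry` mapping (storage layout hypothetical); `keyIndex` stores index + 1 so that 0 means absent:\\n```\\nbytes32[] private contractKeys;\\nmapping(bytes32 => uint256) private keyIndex; // name => index + 1\\n\\nfunction removeContract(bytes32 name) public returns (bool) {\\n uint256 idx = keyIndex[name];\\n if (idx == 0) {\\n return false;\\n }\\n bytes32 lastKey = contractKeys[contractKeys.length - 1];\\n contractKeys[idx - 1] = lastKey;\\n keyIndex[lastKey] = idx;\\n contractKeys.length--;\\n delete keyIndex[name];\\n delete registry[name];\\n return true;\\n}\\n```\\n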
null
```\\nfor (uint i = 0; i < contractKeys.length; i++) {\\n // Delete the key from the array + mapping if it is present\\n if (contractKeys[i] == name) {\\n delete registry[contractKeys[i]];\\n contractKeys[i] = contractKeys[contractKeys.length - 1];\\n delete contractKeys[contractKeys.length - 1];\\n contractKeys.length--;\\n```\\n
ContractDetails.owner is never read
low
The `ContractDetails` struct used by `Ingress` contracts has an `owner` field that is written to, but it is never read.\\n```\\nstruct ContractDetails {\\n address owner;\\n address contractAddress;\\n}\\n\\nmapping(bytes32 => ContractDetails) registry;\\n```\\n
Resolution\\nThis is fixed in PegaSysEng/[email protected]d3f505e.\\nIf `owner` is not (yet) needed, the `ContractDetails` struct should be removed altogether and the type of `Ingress.registry` should change to `mapping(bytes32 => address)`
null
```\\nstruct ContractDetails {\\n address owner;\\n address contractAddress;\\n}\\n\\nmapping(bytes32 => ContractDetails) registry;\\n```\\n
[M-2] Failure in Maintaining Gauge Points
medium
The `defaultGaugePointFunction` in the smart contract does not explicitly handle the scenario where the percentage of the Bean Denominated Value (BDV) equals the optimal percentage (`optimalPercentDepositedBdv`), resulting in an unintended reduction of gauge points to 0 instead of maintaining their current value.\\nThe `testnew_GaugePointAdjustment()` test demonstrated this flaw by providing inputs where `currentGaugePoints = 1189`, `optimalPercentDepositedBdv = 64`, and `percentOfDepositedBdv = 64`, expecting `newGaugePoints` to equal `currentGaugePoints`. However, the outcome was `newGaugePoints = 0`, an unexpected reduction to zero.\\n```\\nfunction testnew_GaugePointAdjustment() public {\\n uint256 currentGaugePoints = 1189; \\n uint256 optimalPercentDepositedBdv = 64; \\n uint256 percentOfDepositedBdv = 64; \\n\\n uint256 newGaugePoints = gaugePointFacet.defaultGaugePointFunction(\\n currentGaugePoints,\\n optimalPercentDepositedBdv,\\n percentOfDepositedBdv\\n );\\n\\n assertTrue(newGaugePoints <= MAX_GAUGE_POINTS, "New gauge points exceed the maximum allowed");\\n assertEq(newGaugePoints, currentGaugePoints, "Gauge points adjustment does not match expected outcome");\\n}\\n```\\n
Implement Explicit Returns: Ensure the `defaultGaugePointFunction` has an explicit return for the case where gauge points should not be adjusted. This can be achieved by adding a final return statement that simply returns `currentGaugePoints` if neither condition for incrementing nor decrementing is met, as shown below:\\n```\\nelse {\\n return currentGaugePoints; \\n}\\n```\\n
This behavior can lead to an undesired decrease in incentives for contract participants, potentially affecting participation and reward accumulation within the contract's ecosystem. Users may lose gauge points and, consequently, rewards due to a technical flaw rather than their actions.
```\\nfunction testnew_GaugePointAdjustment() public {\\n uint256 currentGaugePoints = 1189; \\n uint256 optimalPercentDepositedBdv = 64; \\n uint256 percentOfDepositedBdv = 64; \\n\\n uint256 newGaugePoints = gaugePointFacet.defaultGaugePointFunction(\\n currentGaugePoints,\\n optimalPercentDepositedBdv,\\n percentOfDepositedBdv\\n );\\n\\n assertTrue(newGaugePoints <= MAX_GAUGE_POINTS, "New gauge points exceed the maximum allowed");\\n assertEq(newGaugePoints, currentGaugePoints, "Gauge points adjustment does not match expected outcome");\\n}\\n```\\n
Silo is not compatible with Fee-on-Transfer or rebasing tokens
medium
According to the documentation there are certain conditions that need to be met for a token to be whitelisted:\\n```\\nAdditional tokens may be added to the Deposit Whitelist via Beanstalk governance. In order for a token to be added to the Deposit Whitelist, Beanstalk requires:\\n1. The token address;\\n2. A function to calculate the Bean Denominated Value (BDV) of the token (see Section 14.2 of the whitepaper for complete formulas); and\\n3. The number of Stalk and Seeds per BDV received upon Deposit.\\n```\\n\\nThus, if the community proposes any kind of Fee-on-Transfer or rebasing token (like PAXG or stETH) and the Beanstalk governance approves it, then the protocol needs to integrate it into the system. But as it stands, the system is not compatible with such tokens.\\n`deposit`, `depositWithBDV`, `addDepositToAccount`, `removeDepositFromAccount` and any other `silo` accounting related functions perform operations using inputted/recorded amounts. They don't query the existing balance of tokens before or after receiving/sending in order to properly account for tokens that shift balance when received (FoT) or shift balance over time (rebasing).
Clearly state in the docs that such non-standard tokens won't be whitelisted via a governance vote, or adjust the code to check `token.balanceOf()` before and after any operation related to the `silo`.
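A sketch of balance-based accounting in the deposit path; exact placement and the SafeERC20/SafeMath helpers are assumed:\\n```\\nuint256 balanceBefore = IERC20(token).balanceOf(address(this));\\nIERC20(token).safeTransferFrom(msg.sender, address(this), amount);\\nuint256 received = IERC20(token).balanceOf(address(this)).sub(balanceBefore);\\n// credit `received`, not `amount`, to silo accounting\\n```\\n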
Likelihood: low/medium. At the moment of writing, Lido has over 31% of all staked ETH, which makes `stETH` a very popular token. There's a strong chance that stakeholders would want to have `stETH` inside the silo.\\nOverall severity is medium.
```\\nAdditional tokens may be added to the Deposit Whitelist via Beanstalk governance. In order for a token to be added to the Deposit Whitelist, Beanstalk requires:\\n1. The token address;\\n2. A function to calculate the Bean Denominated Value (BDV) of the token (see Section 14.2 of the whitepaper for complete formulas); and\\n3. The number of Stalk and Seeds per BDV received upon Deposit.\\n```\\n
`removeWhitelistStatus` function ignores updating the `milestoneSeason` variable
medium
The issue in the `LibWhitelistedTokens:removeWhitelistStatus` function is that it removes the Whitelist status of a token without considering the impact on other related variables, such as the `milestoneSeason` variable.\\n`milestoneSeason` is used in many functions to check whether a token is whitelisted, e.g.:\\n```\\n require(s.ss[token].milestoneSeason == 0, "Whitelist: Token already whitelisted");\\n```\\n\\nIf the `milestoneSeason` variable is not updated or cleared when removing the Whitelist status, it may lead to incorrect behavior in subsequent checks or operations that rely on this variable.
To address this issue, ensure that related variables, such as `milestoneSeason`, are appropriately updated or cleared when removing the Whitelist status of a token. If the `milestoneSeason` variable is no longer relevant after removing the Whitelist status, it should be updated or cleared to maintain data integrity.
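A minimal sketch of the cleanup inside `removeWhitelistStatus`:\\n```\\n// clear the whitelist marker so stale checks cannot pass\\ndelete s.ss[token].milestoneSeason;\\n```\\n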
Removing the Whitelist status of a token without updating related variables can lead to inconsistencies in the data stored in the contract. The `milestoneSeason` variable, used for checking whitelist status in many functions, may still hold outdated or incorrect information after removing the status, potentially leading to unexpected behavior or vulnerabilities.
```\\n require(s.ss[token].milestoneSeason == 0, "Whitelist: Token already whitelisted");\\n```\\n
No validation of total supply of Unripe Beans & LP in `percentBeansRecapped` & `percentLPRecapped`
low
`LibUnripe:percentBeansRecapped` & `LibUnripe:percentLPRecapped` functions calculate the percentage of Unripe Beans and Unripe LPs that have been recapitalized, respectively. These percentages are calculated based on the underlying balance of the Unripe Tokens and their total supply. There is no check that the `totalSupply`, which is used as the divisor in the calculation, is non-zero.\\nSee the following code for both functions:\\n```\\n /**\\n * @notice Returns the percentage that Unripe Beans have been recapitalized.\\n */\\n function percentBeansRecapped() internal view returns (uint256 percent) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n return s.u[C.UNRIPE_BEAN].balanceOfUnderlying.mul(DECIMALS).div(C.unripeBean().totalSupply());\\n }\\n\\n /**\\n * @notice Returns the percentage that Unripe LP have been recapitalized.\\n */\\n function percentLPRecapped() internal view returns (uint256 percent) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n return C.unripeLPPerDollar().mul(s.recapitalized).div(C.unripeLP().totalSupply());\\n }\\n```\\n
To handle this scenario, appropriate checks should be added to ensure that the `totalSupply` of Unripe Beans or LP tokens is non-zero before performing the division operation.
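A sketch for `percentBeansRecapped`; the LP variant is analogous:\\n```\\nfunction percentBeansRecapped() internal view returns (uint256 percent) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n uint256 supply = C.unripeBean().totalSupply();\\n require(supply > 0, "Unripe: zero total supply");\\n return s.u[C.UNRIPE_BEAN].balanceOfUnderlying.mul(DECIMALS).div(supply);\\n}\\n```\\n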
If the `totalSupply` in these two functions becomes zero, the calculation of the percentage of recapitalized Unripe Beans or LP tokens would result in a division-by-zero error, because the total supply is the denominator in the calculation. Division by zero is not defined in Solidity, and the contract would revert.\\nThese functions are used widely across the different contracts in crucial places, so they will affect a lot of functionality.
```\\n /**\\n * @notice Returns the percentage that Unripe Beans have been recapitalized.\\n */\\n function percentBeansRecapped() internal view returns (uint256 percent) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n return s.u[C.UNRIPE_BEAN].balanceOfUnderlying.mul(DECIMALS).div(C.unripeBean().totalSupply());\\n }\\n\\n /**\\n * @notice Returns the percentage that Unripe LP have been recapitalized.\\n */\\n function percentLPRecapped() internal view returns (uint256 percent) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n return C.unripeLPPerDollar().mul(s.recapitalized).div(C.unripeLP().totalSupply());\\n }\\n```\\n