| issue_index | location | title | repository | audited_commit | reported_remediated_commit | reported_impact | reported_likelihood | cwe_classification | vulnerability_class_audit | vulnerability_class_scout | description | description_summary | reported_status | is_substrate_finding | audited_project_id | project_name | auditor | audit_link |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | null | Vulnerable dependencies in the Substrate parachain | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Medium | Difficulty-High | null | Patching | Dependency | 1. Vulnerable dependencies in the Substrate parachain
Severity: Medium
Difficulty: High
Type: Patching
Finding ID: TOB-PLF-1
Target: parallel repository
Description
The Parallel Finance parachain node uses the following dependencies with known vulnerabilities. (All of the dependencies listed are inherited from the Substrate framework.)
| Dependency | Version | ID | Description |
|---|---|---|---|
| chrono | 0.4.19 | RUSTSEC-2020-0159 | Potential segfault in localtime_r invocations |
| lru | 0.6.6 | RUSTSEC-2021-0130 | Use after free in lru crate |
| time | 0.1.44 | RUSTSEC-2020-0071 | Potential segfault in the time crate |
| net2 | 0.2.37 | RUSTSEC-2020-0016 | net2 crate has been deprecated; use socket2 instead |
Other than chrono, all the dependencies can simply be updated to their newest versions to fix the vulnerabilities. The chrono crate issue has not been mitigated and remains problematic. A specific sequence of calls must occur to trigger the vulnerability, which is discussed in this GitHub thread in the chrono repository.
Exploit Scenario
An attacker exploits a known vulnerability in the Parallel Finance node and performs a denial-of-service attack on the network by taking down all nodes in the network.
Recommendations
Short term, update all dependencies to their newest versions. Monitor the referenced GitHub thread regarding the chrono crate segfault issue.
Long term, run cargo-audit as part of the CI/CD pipeline and ensure that the team is alerted to any vulnerable dependencies that are detected. | The Substrate parachain uses dependencies with known vulnerabilities, including chrono, lru, time, and net2, which may lead to potential segfaults or memory issues. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
1 | pallets/loans/src/lib.rs:1057-1087,1106-1121 | Users can avoid accruing interest by repaying a zero amount | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Medium | Difficulty-Low | null | Data Validation | Business Logic | 2. Users can avoid accruing interest by repaying a zero amount
Severity: Medium
Difficulty: Low
Type: Data Validation
Finding ID: TOB-PLF-2
Target: pallets/loans/src/lib.rs
Description
To repay borrowed funds, users call the repay_borrow extrinsic. The extrinsic implementation calls the Pallet::repay_borrow_internal method to recompute the loan balance. Pallet::repay_borrow_internal updates the loan balance for the account and resets the borrow index as part of the calculation.
fn repay_borrow_internal(
borrower: &T::AccountId,
asset_id: AssetIdOf<T>,
account_borrows: BalanceOf<T>,
repay_amount: BalanceOf<T>,
) -> DispatchResult {
// ... <redacted>
AccountBorrows::<T>::insert(
asset_id,
borrower,
BorrowSnapshot {
principal: account_borrows_new,
borrow_index: Self::borrow_index(asset_id),
},
);
TotalBorrows::<T>::insert(asset_id, total_borrows_new);
Ok(())
}
Figure 2.1: pallets/loans/src/lib.rs:1057-1087
The borrow index is used in the calculation of the accumulated interest for the loan in Pallet::current_balance_from_snapshot. Specifically, the outstanding balance, snapshot.principal, is multiplied by the quotient of borrow_index divided by snapshot.borrow_index.
pub fn current_balance_from_snapshot(
asset_id: AssetIdOf<T>,
snapshot: BorrowSnapshot<BalanceOf<T>>,
) -> Result<BalanceOf<T>, DispatchError> {
if snapshot.principal.is_zero() || snapshot.borrow_index.is_zero() {
return Ok(Zero::zero());
}
// Calculate new borrow balance using the interest index:
// recent_borrow_balance = snapshot.principal * borrow_index /
// snapshot.borrow_index
let recent_borrow_balance = Self::borrow_index(asset_id)
.checked_div(&snapshot.borrow_index)
.and_then(|r| r.checked_mul_int(snapshot.principal))
.ok_or(ArithmeticError::Overflow)?;
Ok(recent_borrow_balance)
}
Figure 2.2: pallets/loans/src/lib.rs:1106-1121
Therefore, if the snapshot borrow index is updated to Self::borrow_index(asset_id), the resulting recent_borrow_balance in Pallet::current_balance_from_snapshot will always be equal to snapshot.principal. That is, no interest will be applied to the loan. It follows that the accrued interest is lost whenever part of the loan is repaid. In an extreme case, if the repaid amount passed to repay_borrow is 0, users could reset the borrow index without repaying anything.
The same issue is present in the implementations of the liquidated_transfer and borrow extrinsics as well.
Exploit Scenario
A malicious user borrows assets from Parallel Finance and calls repay_borrow with a repay_amount of zero. This allows her to avoid paying interest on the loan.
Recommendations
Short term, modify the code so that the accrued interest is added to the snapshot principal when the snapshot is updated.
Long term, add unit tests for edge cases (like repaying a zero amount) to increase the chances of discovering unexpected system behavior. | Users can avoid paying interest on loans by repaying a zero amount, resetting the borrow index without repaying anything. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
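The short-term fix, folding accrued interest into the principal before resetting the snapshot index, can be sketched in plain Rust. This is a simplified model, not the pallet's code: balances are u128, the borrow index is fixed-point scaled by 1e9 (chosen so the demo stays within u128), and all names are illustrative.

```rust
const SCALE: u128 = 1_000_000_000; // 1.0 in fixed point (1e9 scale, illustrative)

#[derive(Clone, Copy)]
struct BorrowSnapshot {
    principal: u128,
    borrow_index: u128,
}

/// recent_borrow_balance = principal * borrow_index / snapshot.borrow_index
fn current_balance(snapshot: BorrowSnapshot, borrow_index: u128) -> u128 {
    if snapshot.principal == 0 || snapshot.borrow_index == 0 {
        return 0;
    }
    snapshot.principal * borrow_index / snapshot.borrow_index
}

/// Buggy update: resets the index without accounting for accrued interest.
fn repay_buggy(snapshot: BorrowSnapshot, borrow_index: u128, repay: u128) -> BorrowSnapshot {
    BorrowSnapshot { principal: snapshot.principal - repay, borrow_index }
}

/// Fixed update: fold accrued interest into the principal before resetting the index.
fn repay_fixed(snapshot: BorrowSnapshot, borrow_index: u128, repay: u128) -> BorrowSnapshot {
    let owed = current_balance(snapshot, borrow_index);
    BorrowSnapshot { principal: owed - repay, borrow_index }
}

fn main() {
    // 100 tokens borrowed at index 1.0; the global index has since grown to 1.1.
    let snap = BorrowSnapshot { principal: 100 * SCALE, borrow_index: SCALE };
    let index_now = 11 * SCALE / 10;

    // Repaying zero with the buggy logic silently discards 10 tokens of interest.
    let buggy = repay_buggy(snap, index_now, 0);
    assert_eq!(current_balance(buggy, index_now), 100 * SCALE);

    // The fixed logic preserves the accrued interest.
    let fixed = repay_fixed(snap, index_now, 0);
    assert_eq!(current_balance(fixed, index_now), 110 * SCALE);
}
```

Running the sketch shows the zero-amount repayment resetting the owed balance from 110 back to 100 under the buggy logic, which is exactly the interest loss the finding describes.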
2 | pallets/loans/src/lib.rs:539-556 | Missing validation in Pallet::force_update_market | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Informational | Difficulty-High | null | Data Validation | Error Handling and Validation | 3. Missing validation in Pallet::force_update_market
Severity: Informational
Difficulty: High
Type: Data Validation
Finding ID: TOB-PLF-3
Target: pallets/loans/src/lib.rs
Description
The Pallet::force_update_market method can be used to replace the stored market instance for a given asset. Other methods used to update market parameters perform extensive validation of the market parameters, but force_update_market checks only the rate model.
pub fn force_update_market(
origin: OriginFor<T>,
asset_id: AssetIdOf<T>,
market: Market<BalanceOf<T>>,
) -> DispatchResultWithPostInfo {
T::UpdateOrigin::ensure_origin(origin)?;
ensure!(
market.rate_model.check_model(),
Error::<T>::InvalidRateModelParam
);
let updated_market = Self::mutate_market(asset_id, |stored_market| {
*stored_market = market;
stored_market.clone()
})?;
Self::deposit_event(Event::<T>::UpdatedMarket(updated_market));
Ok(().into())
}
Figure 3.1: pallets/loans/src/lib.rs:539-556
This means that the caller (either the root account or half of the general council) could inadvertently change immutable market parameters such as ptoken_id.
Exploit Scenario
The root account calls force_update_market to update a set of market parameters. By mistake, the ptoken_id market parameter is updated, which means that Pallet::ptoken_id and Pallet::underlying_id are no longer inverses.
Recommendations
Short term, consider adding more input validation to the force_update_market extrinsic. In particular, it may make sense to ensure that the ptoken_id market parameter has not changed. Alternatively, add validation to check whether the ptoken_id market parameter is updated and to update the UnderlyingAssetId map to ensure that the value matches the Markets storage map. | The force_update_market method lacks sufficient validation, potentially allowing unintended changes to immutable market parameters. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
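A minimal sketch of the first recommendation, guarding the overwrite by checking that the immutable ptoken_id is unchanged. The Market type and error string here are simplified stand-ins, not the pallet's definitions.

```rust
struct Market {
    ptoken_id: u32,
    collateral_factor: u32, // a mutable parameter, in basis points (illustrative)
}

fn force_update_market(stored: &mut Market, new: Market) -> Result<(), &'static str> {
    // Reject updates that would silently change the immutable ptoken_id.
    if stored.ptoken_id != new.ptoken_id {
        return Err("InvalidPtokenId");
    }
    *stored = new;
    Ok(())
}

fn main() {
    let mut stored = Market { ptoken_id: 7, collateral_factor: 5_000 };
    // Updating a mutable parameter succeeds.
    assert!(force_update_market(&mut stored, Market { ptoken_id: 7, collateral_factor: 6_000 }).is_ok());
    assert_eq!(stored.collateral_factor, 6_000);
    // Changing ptoken_id is rejected and the stored market is untouched.
    assert!(force_update_market(&mut stored, Market { ptoken_id: 8, collateral_factor: 0 }).is_err());
    assert_eq!(stored.ptoken_id, 7);
}
```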
3 | pallets/liquid-staking/src/types.rs:199-219,223-226,230-253 | Missing validation in multiple StakingLedger methods | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Undetermined | Difficulty-High | null | Data Validation | Error Handling and Validation | 4. Missing validation in multiple StakingLedger methods
Severity: Undetermined
Difficulty: High
Type: Data Validation
Finding ID: TOB-PLF-4
Target: pallets/liquid-staking/src/types.rs
Description
The staking ledger is used to keep track of the total amount of staked funds in the system. It is updated in response to cross-consensus messaging (XCM) requests to the parent chain (either Polkadot or Kusama). A number of the StakingLedger methods lack sufficient input validation before they update the staking ledger’s internal state. Even though the input is validated as part of the original XCM call, there could still be issues due to implementation errors or overlooked corner cases.
First, the StakingLedger::rebond method does not use checked arithmetic to update the active balance. The method should also check that the computed unlocking_balance is equal to the input value at the end of the loop to ensure that the system remains consistent.
pub fn rebond(&mut self, value: Balance) {
let mut unlocking_balance: Balance = Zero::zero();
while let Some(last) = self.unlocking.last_mut() {
if unlocking_balance + last.value <= value {
unlocking_balance += last.value;
self.active += last.value;
self.unlocking.pop();
} else {
let diff = value - unlocking_balance;
unlocking_balance += diff;
self.active += diff;
last.value -= diff;
}
if unlocking_balance >= value {
break;
}
}
}
Figure 4.1: pallets/liquid-staking/src/types.rs:199-219
Second, the StakingLedger::bond_extra method does not use checked arithmetic to update the total and active balances.
pub fn bond_extra(&mut self, value: Balance) {
self.total += value;
self.active += value;
}
Figure 4.2: pallets/liquid-staking/src/types.rs:223-226
Finally, the StakingLedger::unbond method does not use checked arithmetic when updating the active balance.
pub fn unbond(&mut self, value: Balance, target_era: EraIndex) {
if let Some(mut chunk) = self
.unlocking
.last_mut()
.filter(|chunk| chunk.era == target_era)
{
chunk.value = chunk.value.saturating_add(value);
} else {
self.unlocking.push(UnlockChunk {
value,
era: target_era,
});
};
self.active -= value;
}
Figure 4.3: pallets/liquid-staking/src/types.rs:230-253
Since the staking ledger is updated by a number of the XCM response handlers, and XCM responses may return out of order, it is important to ensure that input to the staking ledger methods is validated to prevent issues due to race conditions and corner cases.
We could not find a way to exploit this issue, but we cannot rule out the risk that it could be used to cause a denial-of-service condition in the system.
Exploit Scenario
The staking ledger's state is updated as part of a WithdrawUnbonded request, leaving the unlocking vector in the staking ledger empty. Later, when the response to a previous call to rebond is handled, the ledger is updated again, which leaves it in an inconsistent state.
Recommendations
Short term, ensure that the balance represented by the staking ledger’s unlocking vector is enough to cover the input balance passed to StakingLedger::rebond. Use checked arithmetic in all staking ledger methods that update the ledger’s internal state to ensure that issues due to data races are detected and handled correctly. | StakingLedger methods lack sufficient input validation and unchecked arithmetic, potentially leading to inconsistent states or race conditions. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
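The two recommendations can be sketched together: checked arithmetic for the balance updates, plus the end-of-loop consistency check in rebond. This models the ledger with plain u128 fields and is not the pallet's code; a production version would also validate before mutating (or roll back on error).

```rust
#[derive(Debug)]
struct Ledger {
    total: u128,
    active: u128,
    unlocking: Vec<u128>, // chunk values, newest last
}

impl Ledger {
    fn bond_extra(&mut self, value: u128) -> Result<(), &'static str> {
        self.total = self.total.checked_add(value).ok_or("overflow")?;
        self.active = self.active.checked_add(value).ok_or("overflow")?;
        Ok(())
    }

    fn rebond(&mut self, value: u128) -> Result<(), &'static str> {
        let mut unlocked = 0u128;
        while let Some(last) = self.unlocking.last_mut() {
            if unlocked + *last <= value {
                unlocked += *last;
                self.active = self.active.checked_add(*last).ok_or("overflow")?;
                self.unlocking.pop();
            } else {
                let diff = value - unlocked;
                unlocked += diff;
                self.active = self.active.checked_add(diff).ok_or("overflow")?;
                *last -= diff;
            }
            if unlocked >= value {
                break;
            }
        }
        // Consistency check recommended by the finding: the unlocking chunks
        // must actually cover the requested amount.
        if unlocked != value {
            return Err("insufficient unlocking balance");
        }
        Ok(())
    }
}

fn main() {
    let mut ledger = Ledger { total: 100, active: 60, unlocking: vec![30, 10] };
    ledger.bond_extra(5).unwrap();
    assert_eq!((ledger.total, ledger.active), (105, 65));
    ledger.rebond(35).unwrap();
    assert_eq!(ledger.active, 100);
    // Asking for more than is unlocking now fails instead of silently under-rebonding.
    assert!(ledger.rebond(100).is_err());
}
```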
4 | pallets/liquid-staking/src/lib.rs:1071-1159 | Failed XCM requests left in storage | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Low | Difficulty-High | null | Data Validation | Error Handling and Validation | 5. Failed XCM requests left in storage
Severity: Low
Difficulty: High
Type: Data Validation
Finding ID: TOB-PLF-5
Target: pallets/liquid-staking/src/lib.rs
Description
When the liquid-staking pallet generates an XCM request for the parent chain, the corresponding XCM response triggers a call to Pallet::notification_received. If the response is of the Response::ExecutionResult type, this method calls Pallet::do_notification_received to handle the result.
The Pallet::do_notification_received method checks whether the request was successful and then updates the local state according to the corresponding XCM request, which is obtained from the XcmRequests storage map.
fn do_notification_received(
query_id: QueryId,
request: XcmRequest<T>,
res: Option<(u32, XcmError)>,
) -> DispatchResult {
use ArithmeticKind::*;
use XcmRequest::*;
let executed = res.is_none();
if !executed {
return Ok(());
}
match request {
Bond {
index: derivative_index,
amount,
} => {
ensure!(
!StakingLedgers::<T>::contains_key(&derivative_index),
Error::<T>::AlreadyBonded
);
let staking_ledger =
<StakingLedger<T::AccountId, BalanceOf<T>>>::new(
Self::derivative_sovereign_account_id(derivative_index),
amount,
);
StakingLedgers::<T>::insert(derivative_index, staking_ledger);
MatchingPool::<T>::try_mutate(|p| -> DispatchResult {
p.update_total_stake_amount(amount, Subtraction)
})?;
T::Assets::burn_from(
Self::staking_currency()?,
&Self::account_id(),
amount,
)?;
}
// ... <redacted>
}
XcmRequests::<T>::remove(&query_id);
Ok(())
}
Figure 5.1: pallets/liquid-staking/src/lib.rs:1071-1159
If the method completes without errors, the XCM request is removed from storage via a call to XcmRequests::<T>::remove(&query_id). However, if any of the following conditions are true, the corresponding XCM request is left in storage indefinitely:
1. The request fails and Pallet::do_notification_received exits early.
2. Pallet::do_notification_received fails.
3. The response type is not Response::ExecutionResult.
These three cases are currently unhandled by the codebase. The same issue is present in the crowdloans pallet implementation of Pallet::do_notification_received.
Recommendations
Short term, ensure that failed XCM requests are handled correctly by the crowdloans and liquid-staking pallets. | Failed XCM requests are left in storage indefinitely if not handled properly by the notification_received method. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
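One hedged remediation pattern for the three cases above is to consume the pending request from storage before dispatching on the response, so a failure path cannot leave an entry behind. Below, a plain HashMap stands in for the XcmRequests storage map, and all names are illustrative; the failed branch could equally re-queue the request instead of dropping it.

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum Outcome {
    Applied,
    FailedButCleaned,
}

fn on_response(
    pending: &mut HashMap<u64, &'static str>,
    query_id: u64,
    success: bool,
) -> Option<Outcome> {
    // Take the request out of storage up front; unknown query_ids yield None.
    let _request = pending.remove(&query_id)?;
    if success {
        // ... apply the state changes for _request here ...
        Some(Outcome::Applied)
    } else {
        // The request failed on the relay chain: state is not changed, but the
        // entry no longer lingers in storage (it could also be re-queued here).
        Some(Outcome::FailedButCleaned)
    }
}

fn main() {
    let mut pending = HashMap::from([(1u64, "Bond"), (2u64, "Unbond")]);
    assert_eq!(on_response(&mut pending, 1, true), Some(Outcome::Applied));
    assert_eq!(on_response(&mut pending, 2, false), Some(Outcome::FailedButCleaned));
    assert!(pending.is_empty());
}
```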
5 | pallets/loans/src/lib.rs:1430-1441 | Risk of using stale oracle prices in loans pallet | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Low | Difficulty-High | null | Data Validation | Business Logic | 6. Risk of using stale oracle prices in loans pallet
Severity: Low
Difficulty: High
Type: Data Validation
Finding ID: TOB-PLF-6
Target: pallets/loans/src/lib.rs
Description
The loans pallet uses the get_price function to derive the USD value of assets from oracle prices. Internally, get_price calls T::PriceFeeder::get_price, which returns both a timestamp and a price; however, the returned timestamp is ignored.
pub fn get_price(asset_id: AssetIdOf<T>) -> Result<Price, DispatchError> {
let (price, _) = T::PriceFeeder::get_price(&asset_id).ok_or(Error::<T>::PriceOracleNotReady)?;
if price.is_zero() {
return Err(Error::<T>::PriceIsZero.into());
}
log::trace!(target: "loans::get_price", "price: {:?}", price.into_inner());
Ok(price)
}
Figure 6.1: pallets/loans/src/lib.rs:1430-1441
Exploit Scenario
The price feeding oracles fail to deliver prices for an extended period of time. The get_price function returns stale prices, causing the get_asset_value function to return a non-market asset value.
Recommendations
Short term, modify the code so that it compares the returned timestamp from the T::PriceFeeder::get_price function with the current timestamp, returns an error if the price is too old, and handles the emergency price, which currently has a timestamp of zero. This will stop the market if stale prices are returned and allow the governance process to intervene with an emergency price. | The loans pallet may use stale oracle prices, leading to incorrect asset valuations. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
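The recommended staleness check can be sketched as follows. The 60-second window, the names, and the plain integer types are assumptions for illustration; the zero-timestamp emergency price bypasses the check, as the recommendation describes.

```rust
const STALE_AFTER_MS: u64 = 60_000; // assumed staleness window, illustrative
const EMERGENCY_TIMESTAMP: u64 = 0; // emergency prices carry a zero timestamp

fn get_price(price: u128, fed_at_ms: u64, now_ms: u64) -> Result<u128, &'static str> {
    if price == 0 {
        return Err("PriceIsZero");
    }
    // Emergency prices are set by governance and bypass the staleness check.
    if fed_at_ms != EMERGENCY_TIMESTAMP && now_ms.saturating_sub(fed_at_ms) > STALE_AFTER_MS {
        return Err("PriceIsStale");
    }
    Ok(price)
}

fn main() {
    assert_eq!(get_price(42, 1_000_000, 1_030_000), Ok(42)); // 30s old: still fresh
    assert_eq!(get_price(42, 1_000_000, 1_100_000), Err("PriceIsStale")); // 100s old
    assert_eq!(get_price(42, EMERGENCY_TIMESTAMP, 1_100_000), Ok(42)); // emergency price
    assert_eq!(get_price(0, 1_000_000, 1_000_000), Err("PriceIsZero"));
}
```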
6 | pallets/crowdloans/src/lib.rs:718-765 | Missing calculations in crowdloans extrinsics | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Undetermined | Difficulty-High | null | Undefined Behavior | Business Logic | 7. Missing calculations in crowdloans extrinsics
Severity: Undetermined
Difficulty: High
Type: Undefined Behavior
Finding ID: TOB-PLF-7
Target: pallets/crowdloans/src/lib.rs
Description
The claim extrinsic in the crowdloans pallet is missing code to subtract the claimed amount from vault.contributed to update the total contribution amount. A similar bug exists in the refund extrinsic: there is no subtraction from vault.contributed after the Self::contribution_kill call.
pub fn claim(
origin: OriginFor<T>,
crowdloan: ParaId,
lease_start: LeasePeriod,
lease_end: LeasePeriod,
) -> DispatchResult {
// ... <redacted>
Self::contribution_kill(
vault.trie_index,
&who,
ChildStorageKind::Contributed
);
Self::deposit_event(Event::<T>::VaultClaimed(
crowdloan,
(lease_start, lease_end),
ctoken,
who,
amount,
VaultPhase::Succeeded,
));
Ok(())
}
Figure 7.1: pallets/crowdloans/src/lib.rs:718-765
Exploit Scenario
The claim extrinsic is called, but the total amount in vault.contributed is not updated, leading to incorrect calculations in other places.
Recommendations
Short term, update the claim and refund extrinsics so that they subtract the amount from vault.contributed.
Long term, add a test suite to ensure that the vault state stays consistent after the claim and refund extrinsics are called. | The claim and refund extrinsics do not update the vault's total contribution, leading to incorrect calculations. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
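The missing subtraction can be sketched with checked arithmetic so that claiming more than was contributed surfaces as an error rather than wrapping. The Vault type and error string are illustrative stand-ins.

```rust
struct Vault {
    contributed: u128,
}

fn on_claim(vault: &mut Vault, amount: u128) -> Result<(), &'static str> {
    // The fix recommended above: keep the running total consistent on claim/refund.
    vault.contributed = vault
        .contributed
        .checked_sub(amount)
        .ok_or("claimed more than was contributed")?;
    Ok(())
}

fn main() {
    let mut vault = Vault { contributed: 1_000 };
    on_claim(&mut vault, 400).unwrap();
    assert_eq!(vault.contributed, 600);
    // Underflow is reported instead of wrapping.
    assert!(on_claim(&mut vault, 601).is_err());
}
```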
7 | pallets/crowdloans/src/lib.rs:424-472,599-616 | Event emitted when update_vault and set_vrf calls do not make updates | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Informational | Difficulty-High | null | Auditing and Logging | Code Quality | 8. Event emitted when update_vault and set_vrf calls do not make updates
Severity: Informational
Difficulty: High
Type: Auditing and Logging
Finding ID: TOB-PLF-8
Target: pallets/crowdloans/src/lib.rs
Description
The update_vault extrinsic in the crowdloans pallet is responsible for updating three vault values: the cap, the end block, and the contribution strategy. It is possible to call update_vault in such a way that no update is performed, but the function emits an event regardless of whether an update occurred. The same situation occurs in the set_vrfs extrinsic.
pub fn update_vault(
origin: OriginFor<T>,
crowdloan: ParaId,
cap: Option<BalanceOf<T>>,
end_block: Option<BlockNumberFor<T>>,
contribution_strategy: Option<ContributionStrategy>,
) -> DispatchResult {
T::UpdateVaultOrigin::ensure_origin(origin)?;
let mut vault = Self::current_vault(crowdloan).ok_or(Error::<T>::VaultDoesNotExist)?;
if let Some(cap) = cap {
// ... <redacted>
}
if let Some(end_block) = end_block {
// ... <redacted>
}
if let Some(contribution_strategy) = contribution_strategy {
// ... <redacted>
}
Self::deposit_event(Event::<T>::VaultUpdated(crowdloan, (lease_start, lease_end), contribution_strategy, cap, end_block));
Ok(())
}
Figure 8.1: pallets/crowdloans/src/lib.rs:424-472
pub fn set_vrfs(origin: OriginFor<T>, vrfs: Vec<ParaId>) -> DispatchResult {
T::VrfOrigin::ensure_origin(origin)?;
log::trace!(target: "crowdloans::set_vrfs", "pre-toggle. vrfs: {:?}",vrfs);
Vrfs::<T>::try_mutate(|b| -> Result<(), DispatchError> {
*b = vrfs.try_into().map_err(|_| Error::<T>::MaxVrfsExceeded)?;
Ok(())
})?;
Self::deposit_event(Event::<T>::VrfsUpdated(Self::vrfs()));
Ok(())
}
Figure 8.2: pallets/crowdloans/src/lib.rs:599-616
Exploit Scenario
An off-chain system observes that the VaultUpdated event was emitted even though the vault state did not actually change. Based on this observation, it performs logic that should be executed only when the state has been updated.
Recommendations
Short term, modify the VaultUpdated event so that it is emitted only when the update_vault extrinsic makes an actual update. Optionally, have the update_vault extrinsic return an error to the caller when calling it results in no updates. | The update_vault and set_vrfs extrinsics emit events even when no actual updates are made. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
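One way to implement the recommendation is to track whether any field actually changed and emit the event (or return an error) accordingly. The types, fields, and error string below are simplified stand-ins for illustration.

```rust
struct Vault {
    cap: u128,
    end_block: u32,
}

/// Returns Ok(true) when an update occurred (the caller would emit VaultUpdated),
/// or an error when the call was a no-op.
fn update_vault(
    vault: &mut Vault,
    cap: Option<u128>,
    end_block: Option<u32>,
) -> Result<bool, &'static str> {
    let mut updated = false;
    if let Some(cap) = cap {
        if vault.cap != cap {
            vault.cap = cap;
            updated = true;
        }
    }
    if let Some(end_block) = end_block {
        if vault.end_block != end_block {
            vault.end_block = end_block;
            updated = true;
        }
    }
    if !updated {
        return Err("NothingToUpdate");
    }
    Ok(true)
}

fn main() {
    let mut vault = Vault { cap: 100, end_block: 10 };
    assert!(update_vault(&mut vault, Some(200), None).is_ok());
    assert!(update_vault(&mut vault, None, None).is_err()); // no parameters given
    assert!(update_vault(&mut vault, Some(200), None).is_err()); // same value again
}
```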
8 | pallets/crowdloans/src/lib.rs:502-594 | The referral code is a sequence of arbitrary bytes | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Informational | Difficulty-High | null | Data Validation | Error Handling and Validation | 9. The referral code is a sequence of arbitrary bytes
Severity: Informational
Difficulty: High
Type: Data Validation
Finding ID: TOB-PLF-9
Target: pallets/crowdloans/src/lib.rs
Description
The referral code is used in a number of extrinsic calls in the crowdloans pallet. Because the referral code is never validated, it can be a sequence of arbitrary bytes. The referral code is logged by a number of extrinsics. However, it is currently impossible to perform log injection because the referral code is printed as a hexadecimal string rather than raw bytes (using the debug representation).
pub fn contribute(
origin: OriginFor<T>,
crowdloan: ParaId,
#[pallet::compact] amount: BalanceOf<T>,
referral_code: Vec<u8>,
) -> DispatchResultWithPostInfo {
// ... <redacted>
log::trace!(
target: "crowdloans::contribute",
"who: {:?}, para_id: {:?}, amount: {:?}, referral_code: {:?}",
&who,
&crowdloan,
&amount,
&referral_code
);
Ok(().into())
}
Figure 9.1: pallets/crowdloans/src/lib.rs:502-594
Exploit Scenario
The referral code is rendered as raw bytes in a vulnerable environment, introducing an opportunity to perform a log injection attack.
Recommendations
Short term, choose and implement a data type that models the referral code semantics as closely as possible. | The referral code in the crowdloans pallet is arbitrary and unvalidated, potentially allowing log injection attacks. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
9 | pallets/crowdloans/src/lib.rs:1429-1464 | Missing validation of referral code size | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Low | Difficulty-Low | null | Data Validation | Error Handling and Validation | 10. Missing validation of referral code size
Severity: Low
Difficulty: Low
Type: Data Validation
Finding ID: TOB-PLF-10
Target: pallets/crowdloans/src/lib.rs
Description
The length of the referral code is not validated by the contribute extrinsic defined by the crowdloans pallet. Since the referral code is stored by the node, a malicious user could call contribute multiple times with a very large referral code. This would increase the memory pressure on the node, potentially leading to memory exhaustion.
fn do_contribute(
who: &AccountIdOf<T>,
crowdloan: ParaId,
vault_id: VaultId,
amount: BalanceOf<T>,
referral_code: Vec<u8>,
) -> Result<(), DispatchError> {
XcmRequests::<T>::insert(
query_id,
XcmRequest::Contribute {
crowdloan,
vault_id,
who: who.clone(),
amount,
referral_code: referral_code.clone(),
},
);
Ok(())
}
Figure 10.1: pallets/crowdloans/src/lib.rs:1429-1464
Exploit Scenario
A malicious user calls the contribute extrinsic multiple times with a very large referral code. This increases the memory pressure on the validator nodes and eventually causes all parachain nodes to run out of memory and crash.
Recommendations
Short term, add validation that limits the size of the referral code argument to the contribute extrinsic. | The contribute extrinsic lacks validation on the referral code size, potentially leading to memory exhaustion. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
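A minimal sketch of the size validation, enforcing a maximum length at construction time (in practice Substrate's BoundedVec serves this purpose). The 64-byte limit and the error string are assumed example values.

```rust
const MAX_REFERRAL_CODE_LEN: usize = 64; // assumed limit, illustrative

struct ReferralCode(Vec<u8>);

impl ReferralCode {
    /// Construction is the only entry point, so oversized codes can never be stored.
    fn new(bytes: Vec<u8>) -> Result<Self, &'static str> {
        if bytes.len() > MAX_REFERRAL_CODE_LEN {
            return Err("ReferralCodeTooLong");
        }
        Ok(ReferralCode(bytes))
    }
}

fn main() {
    assert!(ReferralCode::new(vec![b'a'; 64]).is_ok());
    assert!(ReferralCode::new(vec![b'a'; 65]).is_err());
}
```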
10 | pallets/crowdloans/src/lib.rs | Code duplication in crowdloans pallet | https://github.com/parallel-finance/parallel | 5ca8e13b7b4312855ae2ef1d39f14b38088dfdbd | null | Informational | Difficulty-High | null | Patching | Code Quality | 11. Code duplication in crowdloans pallet
Severity: Informational
Difficulty: High
Type: Patching
Finding ID: TOB-PLF-11
Target: pallets/crowdloans/src/lib.rs
Description
A number of extrinsics in the crowdloans pallet have duplicate code. The close, reopen, and auction_succeeded extrinsics have virtually identical logic. The migrate_pending and refund extrinsics are also fairly similar.
Exploit Scenario
A vulnerability is found in the duplicate code, but it is patched in only one place.
Recommendations
Short term, refactor the close, reopen, and auction_succeeded extrinsics into one function, to be called with values specific to the extrinsics. Refactor common pieces of logic in the migrate_pending and refund extrinsics.
Long term, avoid code duplication, as it makes the system harder to review and update. Perform regular code reviews and track any logic that is duplicated. | Code duplication in the crowdloans pallet increases the risk of inconsistent patches and complicates maintenance. | null | true | 1 | Parallel | Trail of Bits | https://github.com/parallel-finance/auditing-report/blob/main/Trail%20of%20Bits_Parallel%20Finance_Final%20Report.pdf |
0 | null | Need to upgrade the module | https://github.com/parallel-finance/parallel | a223cd7910af3540048b58f958fea5b784876468 | null | low-risk | null | null | null | Dependency | 5.1 Need to upgrade the module [low-risk]
ID: RUSTSEC-2021-0067
Crate: cranelift-codegen
Version: 0.71.0
Date: 2021-05-21
URL: https://rustsec.org/advisories/RUSTSEC-2021-0067
Title: Memory access due to code generation flaw in Cranelift module
Solution: upgrade to >= 0.73.1 OR >= 0.74
Dependency tree: cranelift-codegen 0.71.0
Fixed in: https://github.com/parallel-finance/parallel/pull/210 | Memory access vulnerability in Cranelift module due to code generation flaw, fixed by upgrading to >= 0.73.1 or >= 0.74. | null | true | 2 | Parallel | Slow Mist | https://github.com/parallel-finance/auditing-report/blob/main/Slow%20Mist%20-%20Parallel%20Security%20Audit%20Report.pdf |
1 | parallel/pallets/loans/src/lib.rs | Numeric overflow | https://github.com/parallel-finance/parallel | a223cd7910af3540048b58f958fea5b784876468 | null | enhancement | null | null | null | Arithmetic | 5.2 Numeric overflow[enhancement]
parallel/pallets/loans/src/lib.rs
let total_reserves_new = total_reserves - reduce_amount;
//...snip code...//
let total_reserves_new = total_reserves + add_amount;
It is recommended to use `checked_add/checked_sub` to prevent numerical overflow.
Fixed in: https://github.com/parallel-finance/parallel/pull/241 | Numeric overflow vulnerability due to lack of checked_add/checked_sub, fixed by adding these methods. | null | true | 2 | Parallel | Slow Mist | https://github.com/parallel-finance/auditing-report/blob/main/Slow%20Mist%20-%20Parallel%20Security%20Audit%20Report.pdf |
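As a hedged illustration of that recommendation (simplified u128 types and hypothetical error strings, not the code from the referenced pull request):

```rust
fn reduce_reserves(total_reserves: u128, reduce_amount: u128) -> Result<u128, &'static str> {
    // checked_sub returns None on underflow instead of wrapping.
    total_reserves.checked_sub(reduce_amount).ok_or("InsufficientReserves")
}

fn add_reserves(total_reserves: u128, add_amount: u128) -> Result<u128, &'static str> {
    // checked_add returns None on overflow instead of wrapping.
    total_reserves.checked_add(add_amount).ok_or("ArithmeticOverflow")
}

fn main() {
    assert_eq!(reduce_reserves(1_000, 300), Ok(700));
    assert!(reduce_reserves(100, 300).is_err()); // would underflow
    assert!(add_reserves(u128::MAX, 1).is_err()); // would overflow
}
```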
2 | parallel/pallets/loans/src/lib.rs | Lack of bounds checking | https://github.com/parallel-finance/parallel | a223cd7910af3540048b58f958fea5b784876468 | null | weakness | null | null | null | Error Handling and Validation | 5.3 Lack of bounds checking[weakness]
`mint_amount` has no boundary limit; it is recommended to add an upper-bound check.
Fixed in: https://github.com/parallel-finance/parallel/pull/258 | Lack of bounds checking for mint_amount, recommended to enhance. | null | true | 2 | Parallel | Slow Mist | https://github.com/parallel-finance/auditing-report/blob/main/Slow%20Mist%20-%20Parallel%20Security%20Audit%20Report.pdf |
3 | parallel/pallets/prices/src/lib.rs | Oracle price feed risk | https://github.com/parallel-finance/parallel | a223cd7910af3540048b58f958fea5b784876468 | null | weakness | null | null | null | Business Logic | 5.4 Oracle price feed risk[weakness]
Due to the lack of a time parameter, if the price is not fed in time, the price may be inaccurate.
parallel/pallets/prices/src/lib.rs
pub enum Event<T: Config> {
/// Set emergency price. [currency_id, price_detail]
SetPrice(CurrencyId, PriceWithDecimal),
/// Reset emergency price. [currency_id]
ResetPrice(CurrencyId),
}
Feedback: At present, the price-feed source is controlled by a trusted authority. The scope of that trust covers both the accuracy and the timeliness of prices; that is, outdated prices will not be sent to the chain. Price-feed transactions are sent as operational transactions on the chain and are guaranteed at the transaction level to be packaged in real time, i.e., in the current block. | Oracle price feed may be inaccurate due to lack of a time parameter. | null | true | 2 | Parallel | Slow Mist | https://github.com/parallel-finance/auditing-report/blob/main/Slow%20Mist%20-%20Parallel%20Security%20Audit%20Report.pdf |
0 | pallets/automation-time/src/lib.rs | Calculate inaccurate risk | https://github.com/OAK-Foundation/OAK-blockchain | 643342e936bbc821d2fb91be69872e4fcecd2273 | null | Suggestion | null | null | Integer Overflow | Arithmetic | [N1] [Suggestion] Calculate inaccurate risk
Category: Integer Overflow Audit
Content
pallets/automation-time/src/lib.rs
There are some risks of value overflow with saturating_mul, saturating_sub, saturating_add, and the plain operators (+ - * /, +=, -=).
The saturating_* operations clamp at the numeric bounds instead of signaling overflow, so the returned result can be inaccurate.
Solution
Use checked_add/checked_sub/checked_mul/checked_div instead of
saturating_add/saturating_sub/saturating_mul/saturating_div and +-*/, +=, -= .
Status
Fixed | Inaccurate calculation risk due to potential integer overflow; replace saturating operations with checked operations. | null | true | 3 | AvaProtocol | Slow Mist | https://avaprotocol.org/docs/papers/SlowMist.Audit.Report.-.Turing.Network.-.June.2022.pdf |
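The difference the finding describes can be seen directly with Rust's primitive integer methods, in a minimal self-contained illustration:

```rust
fn main() {
    let near_max = u64::MAX - 5;

    // Saturating: clamps to u64::MAX, which is not the mathematically correct sum.
    assert_eq!(near_max.saturating_add(10), u64::MAX);

    // Checked: returns None, letting the caller detect and reject the operation.
    assert_eq!(near_max.checked_add(10), None);
    assert_eq!(near_max.checked_add(5), Some(u64::MAX));
}
```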
1 | pallets/automation-time/src/lib.rs | User balance is not checked before transfer | https://github.com/OAK-Foundation/OAK-blockchain | 643342e936bbc821d2fb91be69872e4fcecd2273 | null | Suggestion | null | null | Others | Error Handling and Validation | [N2] [Suggestion] User balance is not checked
Category: Others
Content
pallets/automation-time/src/lib.rs
The amount transferred by the user is not compared with the user's balance here, and the user may not have enough NativeToken.
pub fn schedule_native_transfer_task(
origin: OriginFor<T>,
provided_id: Vec<u8>,
execution_times: Vec<UnixTime>,
recipient_id: T::AccountId,
#[pallet::compact] amount: BalanceOf<T>,
) -> DispatchResult {
let who = ensure_signed(origin)?;
// check for greater than existential deposit
if amount < T::NativeTokenExchange::minimum_balance() {
Err(<Error<T>>::InvalidAmount)?
}
// check not sent to self
if who == recipient_id {
Err(<Error<T>>::TransferToSelf)?
}
let action =
Action::NativeTransfer { sender: who.clone(), recipient: recipient_id, amount };
Self::validate_and_schedule_task(action, who, provided_id, execution_times)?;
Ok(().into())
}
Solution
Compare the user's transfer amount with their available balance before scheduling the task.
Status
Ignored; This is expected behaviour. | User balance is not checked before transfer, allowing potential insufficient funds. | null | true | 3 | AvaProtocol | Slow Mist | https://avaprotocol.org/docs/papers/SlowMist.Audit.Report.-.Turing.Network.-.June.2022.pdf |
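The check the auditor suggested (and the project declined as expected behaviour) can be sketched in plain Rust; `schedule_native_transfer`, `ScheduleError`, and the balances map are illustrative stand-ins, not the pallet's real API.

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum ScheduleError {
    InsufficientBalance,
    TransferToSelf,
}

fn schedule_native_transfer(
    balances: &HashMap<&'static str, u64>,
    who: &'static str,
    recipient: &'static str,
    amount: u64,
) -> Result<(), ScheduleError> {
    if who == recipient {
        return Err(ScheduleError::TransferToSelf);
    }
    // The missing check from the finding: does the sender hold enough now?
    let free = balances.get(who).copied().unwrap_or(0);
    if free < amount {
        return Err(ScheduleError::InsufficientBalance);
    }
    Ok(()) // a real pallet would now call validate_and_schedule_task
}

fn main() {
    let balances = HashMap::from([("alice", 100u64)]);
    assert_eq!(schedule_native_transfer(&balances, "alice", "bob", 50), Ok(()));
    assert_eq!(
        schedule_native_transfer(&balances, "alice", "bob", 500),
        Err(ScheduleError::InsufficientBalance)
    );
}
```

Note that even with this pre-check, the balance can change before the scheduled execution time, which is presumably why the project treats the execution-time failure as expected behaviour.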
2 | pallets/parachain-staking/src/lib.rs:962,1004,1390
pallets/parachain-staking/src/types.rs:463
pallets/parachain-staking/src/delegation_requests.rs:279,648-651
pallets/parachain-staking/src/migrations.rs:404 | Return value not checked | https://github.com/OAK-Foundation/OAK-blockchain | 643342e936bbc821d2fb91be69872e4fcecd2273 | null | Low | null | null | Others | Error Handling and Validation | [N3] [Low] Return Value Not Checked
Category: Others
Content
pallets/parachain-staking/src/lib.rs
//#L962
T::Currency::unreserve(&bond.owner, bond.amount);
//#L1004
T::Currency::unreserve(&candidate, state.bond);
//#L1390
T::Currency::unreserve(&delegator, amount);
pallets/parachain-staking/src/types.rs
//#L463
T::Currency::unreserve(&who, request.amount.into());
//#L648:651
T::Currency::unreserve(&lowest_bottom_to_be_kicked.owner, lowest_bottom_to_be_kicked.amount);
pallets/parachain-staking/src/delegation_requests.rs
//#L279
T::Currency::unreserve(&delegator, amount);
pallets/parachain-staking/src/migrations.rs
//#L404
T::Currency::unreserve(&owner, *amount);
The return value of unreserve needs to be checked.
Solution
Check the return value.
Status
Ignored; this only applies if the account has less than that amount locked up. Not only is this unlikely to happen, there's nothing for parachain-staking to do if it occurs. | Return value of unreserve function is not checked, risking unhandled errors. | null | true | 3 | AvaProtocol | Slow Mist | https://avaprotocol.org/docs/papers/SlowMist.Audit.Report.-.Turing.Network.-.June.2022.pdf |
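In Substrate, `Currency::unreserve` returns the portion of the requested amount that could not be unreserved (zero on full success). A plain-Rust analog of the check the finding asks for:

```rust
// Toy analog of Currency::unreserve: returns the remainder that could
// not be unreserved because less than `amount` was reserved.
fn unreserve(reserved: &mut u64, amount: u64) -> u64 {
    let actual = amount.min(*reserved);
    *reserved -= actual;
    amount - actual
}

fn main() {
    let mut reserved = 40u64;
    // Full success: remainder is zero, nothing to handle.
    assert_eq!(unreserve(&mut reserved, 30), 0);
    // Partial: only 10 still reserved, 30 requested -> 20 not unreserved.
    let remainder = unreserve(&mut reserved, 30);
    assert_eq!(remainder, 20);
    if remainder > 0 {
        // The finding: this branch should at least be logged or handled
        // instead of the return value being silently discarded.
        eprintln!("could not unreserve {remainder} units");
    }
}
```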
3 | pallets/parachain-staking/src/types.rs
pallets/parachain-staking/src/lib.rs
pallets/parachain-staking/src/delegation_requests.rs | Calculate inaccurate risk | https://github.com/OAK-Foundation/OAK-blockchain | 643342e936bbc821d2fb91be69872e4fcecd2273 | null | Suggestion | null | null | Integer Overflow | Arithmetic | [N4] [Suggestion] Calculate inaccurate risk
Category: Integer Overflow Audit
Content
pallets/parachain-staking/src/types.rs
pallets/parachain-staking/src/lib.rs
pallets/parachain-staking/src/delegation_requests.rs
There is a risk of value overflow producing inaccurate results.
The code uses saturating_mul, saturating_sub, saturating_add, and the plain operators (+ - * /, +=, -=).
Saturating operations clamp at the numeric bounds instead of reporting overflow, so the returned result can be silently inaccurate.
Solution
Use checked_add/checked_sub/checked_mul/checked_div instead of
saturating_add/saturating_sub/saturating_mul/saturating_div and +-*/, +=, -= .
Status
Ignored; These functions are performed on the Balance type. Since the same type is used for total_issuance I don't think we need to be worried about a portion of the issuance overflowing the data type. | Potential inaccurate calculation due to integer overflow; replace saturating operations with checked operations. | null | true | 3 | AvaProtocol | Slow Mist | https://avaprotocol.org/docs/papers/SlowMist.Audit.Report.-.Turing.Network.-.June.2022.pdf |
4 | pallets/parachain-staking/src/lib.rs | Missing logic | https://github.com/OAK-Foundation/OAK-blockchain | 643342e936bbc821d2fb91be69872e4fcecd2273 | null | Low | null | null | Others | Business Logic | [N5] [Low] Missing logic
Category: Others
Content
pallets/parachain-staking/src/lib.rs
The failure case of deposit_into_existing must be handled: if the deposit fails, the entire transaction should be rolled back.
fn prepare_staking_payouts(now: RoundIndex) {
// payout is now - delay rounds ago => now - delay > 0 else return early
let delay = T::RewardPaymentDelay::get();
if now <= delay {
return;
}
let round_to_payout = now.saturating_sub(delay);
let total_points = <Points<T>>::get(round_to_payout);
if total_points.is_zero() {
return;
}
let total_staked = <Staked<T>>::take(round_to_payout);
let total_issuance = Self::compute_issuance(total_staked);
let mut left_issuance = total_issuance;
// reserve portion of issuance for parachain bond account
let bond_config = <ParachainBondInfo<T>>::get();
let parachain_bond_reserve = bond_config.percent * total_issuance;
if let Ok(imb) = T::Currency::deposit_into_existing(&bond_config.account, parachain_bond_reserve) {
// update round issuance iff transfer succeeds
left_issuance = left_issuance.saturating_sub(imb.peek());
Self::deposit_event(Event::ReservedForParachainBond {
account: bond_config.account,
value: imb.peek(),
});
}
let payout = DelayedPayout {
round_issuance: total_issuance,
total_staking_reward: left_issuance,
collator_commission: <CollatorCommission<T>>::get(),
};
<DelayedPayouts<T>>::insert(round_to_payout, payout);
}
pub(crate) fn pay_one_collator_reward(paid_for_round: RoundIndex, payout_info: DelayedPayout<BalanceOf<T>>, ) -> (Option<(T::AccountId, BalanceOf<T>)>, Weight) {
// TODO: it would probably be optimal to roll Points into the DelayedPayouts storage
// item so that we do fewer reads each block
let total_points = <Points<T>>::get(paid_for_round);
if total_points.is_zero() {
// TODO: this case is obnoxious... it's a value query, so it could mean one
// of two different logic errors:
// 1. we removed it before we should have
// 2. we called pay_one_collator_reward when we were actually done with deferred payouts
log::warn!("pay_one_collator_reward called with no <Points<T>> for the round!");
return (None, 0u64.into());
}
let mint = |amt: BalanceOf<T>, to: T::AccountId| {
if let Ok(amount_transferred) = T::Currency::deposit_into_existing(&to, amt) {
Self::deposit_event(Event::Rewarded {
account: to.clone(),
rewards: amount_transferred.peek(),
});
}
};
let collator_fee = payout_info.collator_commission;
let collator_issuance = collator_fee * payout_info.round_issuance;
if let Some((collator, pts)) = <AwardedPts<T>>::iter_prefix(paid_for_round).drain().next() {
let mut extra_weight = 0;
let pct_due = Perbill::from_rational(pts, total_points);
let total_paid = pct_due * payout_info.total_staking_reward;
let mut amt_due = total_paid;
// Take the snapshot of block author and delegations
let state = <AtStake<T>>::take(paid_for_round, &collator);
let num_delegators = state.delegations.len();
if state.delegations.is_empty() {
// solo collator with no delegators
mint(amt_due, collator.clone());
extra_weight += T::OnCollatorPayout::on_collator_payout(paid_for_round, collator.clone(), amt_due, );
} else {
// pay collator first; commission + due_portion
let collator_pct = Perbill::from_rational(state.bond, state.total);
let commission = pct_due * collator_issuance;
amt_due = amt_due.saturating_sub(commission);
let collator_reward = (collator_pct * amt_due).saturating_add(commission);
mint(collator_reward, collator.clone());
extra_weight += T::OnCollatorPayout::on_collator_payout(paid_for_round, collator.clone(), collator_reward, );
// pay delegators due portion
for Bond { owner, amount } in state.delegations {
let percent = Perbill::from_rational(amount, state.total);
let due = percent * amt_due;
if !due.is_zero() {
mint(due, owner.clone());
}
}
}
(Some((collator, total_paid)), T::WeightInfo::pay_one_collator_reward(num_delegators as u32) + extra_weight, )
} else {
// Note that we don't clean up storage here; it is cleaned up in handle_delayed_payouts()
(None, 0u64.into())
}
}
Solution
If the transfer fails, the transaction should be rolled back.
Status
Ignored; The project party considers that it will be updated in subsequent versions. | Missing rollback logic for failed transfers in staking payouts. | null | true | 3 | AvaProtocol | Slow Mist | https://avaprotocol.org/docs/papers/SlowMist.Audit.Report.-.Turing.Network.-.June.2022.pdf |
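The auditor's recommendation amounts to propagating the deposit failure instead of the `if let Ok(..)` pattern that silently skips the payout. A hedged plain-Rust sketch, with `prepare_payout`, `deposit_into_existing`, and `PayoutError` as illustrative stand-ins for the pallet's API:

```rust
#[derive(Debug, PartialEq)]
enum PayoutError {
    DepositFailed,
}

// Toy stand-in: the real deposit_into_existing fails when the target
// account does not exist.
fn deposit_into_existing(account_exists: bool, amount: u64) -> Result<u64, PayoutError> {
    if account_exists { Ok(amount) } else { Err(PayoutError::DepositFailed) }
}

fn prepare_payout(account_exists: bool, reserve: u64) -> Result<u64, PayoutError> {
    // `?` propagates the failure; in FRAME, returning Err from a
    // #[transactional] dispatchable reverts the storage changes made so far.
    let deposited = deposit_into_existing(account_exists, reserve)?;
    Ok(deposited)
}

fn main() {
    assert_eq!(prepare_payout(true, 100), Ok(100));
    assert_eq!(prepare_payout(false, 100), Err(PayoutError::DepositFailed));
}
```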
5 | pallets/parachain-staking/src/types.rs | Missing error message | https://github.com/OAK-Foundation/OAK-blockchain | 643342e936bbc821d2fb91be69872e4fcecd2273 | null | Low | null | null | Malicious Event Log | Error Handling and Validation | [N6] [Suggestion] Missing error message
Category: Malicious Event Log Audit
Content
pallets/parachain-staking/src/types.rs
The statement let new_bottom_delegation = top_delegations.delegations.pop().expect(""); is missing an error message.
pub fn add_top_delegation<T: Config>(&mut self, candidate: &T::AccountId, delegation: Bond<T::AccountId, BalanceOf<T>>) -> Option<Balance> where BalanceOf<T>: Into<Balance> + From<Balance>,
{
let mut less_total_staked = None;
let mut top_delegations = <TopDelegations<T>>::get(candidate).expect("CandidateInfo existence => TopDelegations existence");
let max_top_delegations_per_candidate = T::MaxTopDelegationsPerCandidate::get();
if top_delegations.delegations.len() as u32 == max_top_delegations_per_candidate {
// pop lowest top delegation
let new_bottom_delegation = top_delegations.delegations.pop().expect("");
top_delegations.total = top_delegations.total.saturating_sub(new_bottom_delegation.amount);
if matches!(self.bottom_capacity, CapacityStatus::Full) {
less_total_staked = Some(self.lowest_bottom_delegation_amount);
}
self.add_bottom_delegation::<T>(true, candidate, new_bottom_delegation);
}
// insert into top
top_delegations.insert_sorted_greatest_to_least(delegation);
// update candidate info
self.reset_top_data::<T>(candidate.clone(), &top_delegations);
if less_total_staked.is_none() {
// only increment delegation count if we are not kicking a bottom delegation
self.delegation_count = self.delegation_count.saturating_add(1u32);
}
<TopDelegations<T>>::insert(&candidate, top_delegations);
less_total_staked
}
Solution
Record the corresponding error message.
Status
Ignored; The project party considers that it will be updated in subsequent versions. | Missing error message in delegation handling, leaving potential issues unlogged. | null | true | 3 | AvaProtocol | Slow Mist | https://avaprotocol.org/docs/papers/SlowMist.Audit.Report.-.Turing.Network.-.June.2022.pdf |
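The fix for this finding is one line: give `expect` a message documenting the invariant it relies on, so a violation produces an actionable panic message instead of an empty one.

```rust
fn main() {
    let mut delegations = vec![10u32, 5];
    // Before: ...pop().expect("")  -- panics with no context on failure.
    // After: the message records why pop() must succeed here (the length
    // was just compared against MaxTopDelegationsPerCandidate, which is > 0).
    let new_bottom = delegations
        .pop()
        .expect("len equals MaxTopDelegationsPerCandidate > 0; pop must succeed");
    assert_eq!(new_bottom, 5);
}
```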
0 | runtime/foucoco/src/lib.rs | ChainExtension Implementation Lacks Weight Charging | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | Medium | null | null | Denial-of-Service (DoS) | Weight Management | ChainExtension Implementation Lacks Weight Charging
The current implementation of the ChainExtension trait fails to charge weight when allowing smart contracts to call into the runtime.
IDPDM-006
Scope: Chain Extension
Severity: MEDIUM
Vulnerability Type: Denial-of-Service (DoS)
Status: Fixed (5b922a210b2a6705d3ea6fefbf67b317698f7b80)
Description:
The call method in the ChainExtension trait defines how smart contracts can interact with the runtime. When a contract makes state changes by calling into the runtime, the corresponding weight should be charged. However, the current implementation of ChainExtension lacks any weight charging mechanism. It allows smart contracts to make calls such as transfer, approve_transfer, and transfer_approved, which are not queries and result in state changes.
By not charging weight for these operations, the system becomes vulnerable to potential security issues, particularly Denial-of-Service (DoS) attacks. Malicious contracts can exploit this lack of weight charging by flooding the system with a high volume of calls, overloading the network and disrupting its normal operation.
Recommendation:
To mitigate this vulnerability, it is crucial to incorporate weight calculations and invoke the charge_weight function before accessing contract memory. Here is an example of how to integrate this in the code:
let charged_weight = env.charge_weight(sufficient_weight)?;
trace!(target: "runtime", "[ChainExtension]|call|func_id / charge_weight:{:?}", charged_weight);
let input = env.read(256)?;
By implementing weight charging for contract calls, the system becomes resilient to resource-exhaustion and DoS attacks of this kind. | The ChainExtension implementation does not charge weight for runtime calls, making the system vulnerable to Denial-of-Service (DoS) attacks. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
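The charge-before-work pattern the recommendation describes can be modeled in plain Rust. `WeightMeter` below is a toy stand-in for pallet-contracts' `Environment::charge_weight`; the names and the flat 1,000-weight cost are assumptions for illustration.

```rust
#[derive(Debug, PartialEq)]
enum ExtError {
    OutOfWeight,
}

struct WeightMeter {
    remaining: u64,
}

impl WeightMeter {
    // Charge weight up front; fail if the meter is exhausted.
    fn charge_weight(&mut self, w: u64) -> Result<u64, ExtError> {
        if self.remaining < w {
            return Err(ExtError::OutOfWeight);
        }
        self.remaining -= w;
        Ok(w)
    }
}

fn ext_transfer(meter: &mut WeightMeter) -> Result<(), ExtError> {
    // Charge first, only then read contract memory / mutate state.
    let _charged = meter.charge_weight(1_000)?;
    Ok(())
}

fn main() {
    let mut meter = WeightMeter { remaining: 1_500 };
    assert_eq!(ext_transfer(&mut meter), Ok(())); // 1,000 charged, 500 left
    assert_eq!(ext_transfer(&mut meter), Err(ExtError::OutOfWeight));
}
```

Because every call consumes weight before doing work, a flood of calls from a malicious contract exhausts its own weight budget rather than the node's resources.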
1 | pallets/orml-currencies-allowance-extension/src/lib.rs:134 | Vector of unlimited size in the pallet | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | Low | null | null | Memory exhaustion / DoS | Denial of Service (DoS) and Spamming | Vector of unlimited size in the pallet
The orml-currencies-allowance-extension pallet employs the usage of the Vec data structure without incorporating any size checks.
ID: PDM-007
Scope: orml-currencies-allowance-extension pallet
Severity: LOW
Vulnerability Type: Memory exhaustion / DoS
Status: Fixed (c1a20acd965cc024ac756effbff8a12522dac87a and 05607a1a9cd2ad3cebeff1294b2e4c34fa3e4721)
Description:
Within the orml-currencies-allowance-extension pallet, the Vec structure is utilized in the enum Event and struct GenesisConfig. Additionally, the extrinsic functions add_allowed_currencies and remove_allowed_currencies accept a vector as a parameter. Here's an example:
pallets/orml-currencies-allowance-extension/src/lib.rs:134:
#[pallet::call_index(0)]
#[pallet::weight(<T as Config>::WeightInfo::add_allowed_currencies())]
#[transactional]
pub fn add_allowed_currencies(
origin: OriginFor<T>,
currencies: Vec<CurrencyOf<T>>,
) -> DispatchResult {
ensure_root(origin)?;
for i in currencies.clone() {
AllowedCurrencies::<T>::insert(i, ());
}
Self::deposit_event(Event::AllowedCurrenciesAdded { currencies });
Ok(())
}
If an excessively large vector is provided as input to these functions, it can result in node overload due to the necessity of iterating through the entire vector. This poses a significant risk, leading to potential memory exhaustion and creating a vulnerability for denial-of-service (DoS) attacks. Although the function is restricted to root access, it's important to note that if the root account is compromised or controlled by a malicious actor, they can exploit this vulnerability by deliberately supplying large vectors. This would cause the node to consume excessive resources, potentially disrupting the normal operation of the system.
Recommendation:
To mitigate the risks associated with the use of unlimited-sized vectors, it is strongly recommended to revise the implementation in the orml-currencies-allowance-extension pallet. We advise against the use of the Vec data structure within any part of the pallet. Instead, consider utilizing frame_support::BoundedVec, which offers a bounded-size variant of a vector. Alternatively, it's important to impose a cap on the length of the vector during each of its instances. By ensuring a maximum limit, the vector's length stays within manageable bounds, preventing excessively long processing times or resource consumption during iterations over this data structure.
By proactively addressing this issue, you can enhance the stability and security of the orml-currencies-allowance-extension pallet, ensuring the robustness and reliability of the overall system. | Unrestricted vector size can lead to memory exhaustion and DoS attacks. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
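The BoundedVec idea recommended above can be sketched in plain Rust: pushes beyond a fixed capacity fail instead of letting the collection grow without limit. frame_support's `BoundedVec` behaves analogously via `try_push`/`try_from`; the `Bounded` type below is an illustrative reimplementation, not the real one.

```rust
struct Bounded<T, const MAX: usize> {
    inner: Vec<T>,
}

impl<T, const MAX: usize> Bounded<T, MAX> {
    fn new() -> Self {
        Bounded { inner: Vec::new() }
    }

    // Reject pushes past the bound so oversized input fails early,
    // before any unbounded iteration can happen.
    fn try_push(&mut self, item: T) -> Result<(), &'static str> {
        if self.inner.len() >= MAX {
            return Err("bound exceeded");
        }
        self.inner.push(item);
        Ok(())
    }
}

fn main() {
    let mut currencies: Bounded<u32, 2> = Bounded::new();
    assert!(currencies.try_push(1).is_ok());
    assert!(currencies.try_push(2).is_ok());
    assert!(currencies.try_push(3).is_err()); // third push rejected
}
```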
2 | runtime/foucoco/src/lib.rs:1509
node/src/chain_spec.rs:229 | Employment of Sudo Pallet | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Centralization | Business Logic | Employment of Sudo Pallet
The current implementation of the sudo FRAME pallet in the runtime employs it as an alternative to the governance mechanism.
ID: PDM-010
Scope: Decentralization
Status: Acknowledged
Description:
The sudo pallet is integrated into the runtime configuration:
runtime/foucoco/src/lib.rs:1509:
construct_runtime!(
pub enum Runtime where
Block = Block,
NodeBlock = opaque::Block,
UncheckedExtrinsic = UncheckedExtrinsic,
{
/* ... */
// Governance
Sudo: pallet_sudo::{Pallet, Call, Storage, Config<T>, Event<T>} = 12,
/* ... */
}
);
The root account, initially set at genesis, is defined as follows:
node/src/chain_spec.rs:229:
let sudo_account =
pallet_multisig::Pallet::<foucoco_runtime::Runtime>::multi_account_id(&signatories[..], 3);
This configuration allows the use of the ensure_root method for operations such as add_allowed_currencies and remove_allowed_currencies. Although there is no immediate security risk associated with this setup, it raises concerns regarding potential centralization.
Recommendation:
Utilizing a root account for governance purposes is considered a less favorable design choice due to its potential for centralization and the security risks associated with compromised private keys.
To address this issue, we recommend the following actions:
1. Comprehensive Documentation: It is crucial to thoroughly document the intended use of the sudo pallet and the root account within the project. This documentation should provide clear explanations of the potential risks and limitations associated with their usage. Both the development team and end-users should be adequately informed to ensure transparency and informed decision-making. If there are plans to deactivate the sudo functionality after the network launch, a detailed process similar to the sudo removal outlined by Polkadot should be documented.
2. Auditing and Monitoring: Prior to the removal of the sudo pallet, it is essential to establish regular auditing and monitoring processes. These measures will help ensure that the sudo pallet and the root account are not misused or compromising the system's security. By conducting thorough audits and continuous monitoring, any potential vulnerabilities or misuse can be identified and addressed promptly.
By following these recommendations, you will mitigate potential centralization concerns and enhance the overall governance and security aspects of the project. | The use of the sudo pallet risks centralization and security vulnerabilities due to potential misuse of root access. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
3 | runtime/foucoco/src/lib.rs:None | Error Handling in Chain Extension | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Code Quality | Error Handling and Validation | Error Handling in Chain Extension
Implementation of ChainExtension often utilizes DispatchError::Other("Explanatory string") error, which makes error handling difficult.
ID: PDM-009
Scope: Code Quality
Status: Acknowledged
Description:
The current implementation of ChainExtension frequently relies on the generic DispatchError::Other() error. This approach makes error handling challenging, particularly for developers working with smart contracts. The indiscriminate use of this error type makes it difficult to monitor and diagnose specific errors, impeding efficient troubleshooting and code improvement.
Recommendation:
To improve error handling in the ChainExtension implementation, we recommend implementing a custom enum that covers all the required error types. This custom enum will provide a more structured approach to error handling by encapsulating specific errors and enabling precise identification and resolution of issues.
By implementing the custom enum and incorporating it into the RetVal result, developers will have access to more detailed error information, allowing them to effectively identify and handle different error scenarios. This approach enhances the overall robustness and maintainability of the codebase, as well as facilitates troubleshooting and future improvements. | Use of generic DispatchError::Other hampers error handling and troubleshooting. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
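A minimal sketch of the structured errors recommended above: a dedicated enum instead of `DispatchError::Other("...")`, mapped to stable numeric codes a contract can branch on. The variant names and codes are illustrative assumptions, not the project's actual error set.

```rust
#[derive(Debug, PartialEq)]
enum ExtensionError {
    InsufficientBalance,
    Unauthorized,
    UnimplementedFuncId(u16),
}

// In a runtime this mapping would feed RetVal::Converging (or an
// impl From<ExtensionError> for DispatchError); here we just show
// the variant-to-code translation.
fn to_ret_val(e: &ExtensionError) -> u32 {
    match e {
        ExtensionError::InsufficientBalance => 1,
        ExtensionError::Unauthorized => 2,
        ExtensionError::UnimplementedFuncId(_) => 3,
    }
}

fn main() {
    let err = ExtensionError::UnimplementedFuncId(9999);
    assert_eq!(to_ret_val(&err), 3); // contracts can match on the code
    assert_ne!(err, ExtensionError::Unauthorized);
}
```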
4 | runtime/foucoco/src/lib.rs:964 | Hardcoded Constants in match | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Code Quality | Code Quality | Hardcoded Constants in match
The presence of hardcoded constants in the match statement within the implementation of ChainExtension hampers code readability.
ID: PDM-008
Scope: Code Quality
Status: Acknowledged
Description:
The following code segment has drawn our attention:
runtime/foucoco/src/lib.rs:964:
match func_id {
// transfer
1105 => { /* ... */ },
// balance
1106 => { /* ... */ },
// total_supply
1107 => { /* ... */ },
// approve_transfer
1108 => { /* ... */ },
// transfer_approved
1109 => { /* ... */ },
// allowance
1110 => { /* ... */ },
// dia price feed
7777 => { /* ... */ },
_ => { /* ... */ },
}
While the code comments provide some understanding of the functionality associated with each func_id, it is recommended to enhance the code logic in this section to improve comprehensibility, rather than solely relying on comments.
Recommendation:
We recommend creating a separate enum to handle all the supported func_id options, using descriptive names that correspond to each function. This approach enhances code readability and maintainability:
enum FuncId {
Transfer,
Balance,
TotalSupply,
ApproveTransfer,
TransferApproved,
Allowance,
DiaPriceFeed,
}
impl TryFrom<u16> for FuncId {
type Error = DispatchError;
fn try_from(func_id: u16) -> Result<Self, Self::Error> {
let id = match func_id {
1105 => Self::Transfer,
1106 => Self::Balance,
1107 => Self::TotalSupply,
1108 => Self::ApproveTransfer,
1109 => Self::TransferApproved,
1110 => Self::Allowance,
7777 => Self::DiaPriceFeed,
_ => {
error!("Called an unregistered `func_id`: {:?}", func_id);
return Err(DispatchError::Other("Unimplemented func_id"));
}
};
Ok(id)
}
}
This enum allows for easier handling of the logic within the match statement in the call implementation.
Furthermore, it is good practice to implement the logic for each function in a separate method instead of encapsulating all the code within the match arms. This promotes code modularity and readability.
Here's an example of how the code would look with the recommended changes:
let func_id = FuncId::try_from(env.func_id())?;
match func_id {
FuncId::Transfer => /* Call method that performs transfer */,
FuncId::Balance => /* Call method that returns balance */,
FuncId::TotalSupply => /* ... */ ,
FuncId::ApproveTransfer => /* ... */,
FuncId::TransferApproved => /* ... */,
FuncId::Allowance => /* ... */,
FuncId::DiaPriceFeed => /* ... */,
}
With these improvements, the code becomes more comprehensible, maintainable, and follows best practices. | Hardcoded constants in match statements reduce code readability and maintainability. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
5 | pallets/orml-currencies-allowance-extension/src/lib.rs:245,246 | Linter Warnings | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Linters | Error Handling and Validation | Linter Warnings
The codebase has generated several warnings when analyzed using cargo clippy, indicating areas where improvements can be made to enhance the code quality.
ID: PDM-002
Scope: Linters
Status: Acknowledged
Description:
During the static analysis process using cargo clippy, the following warnings within the scope of the audit were identified:
- needless_return (https://rust-lang.github.io/rust-clippy/master/index.html#needless_return)
- needless_borrow (https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow)
Location:
pallets/orml-currencies-allowance-extension/src/lib.rs:245-246:
&owner,
&destination,
Recommendation:
To maintain a high-quality and easily maintainable codebase, it is advised to address the warnings generated by cargo clippy. Resolving these linter warnings will improve the overall code quality, facilitate troubleshooting, and potentially enhance performance and security within the project. | Codebase has linter warnings that impact maintainability and should be resolved to enhance quality and security. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
6 | runtime/foucoco/src/lib.rs | Logging in Runtime | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Code Quality | Code Quality | Logging in Runtime
The implementation of ChainExtension contains a lot of unnecessary warn!() macros.
ID: PDM-011
Scope: Logging
Status: Acknowledged
Description:
Upon reviewing the runtime code, it was observed that numerous instances of the warn!() macro are present. While the warn macro can be useful during testing or debugging phases, it is unnecessary in the final version of the code. These superfluous warn macros clutter the codebase and can impede code readability and maintenance.
Unneeded logging statements add noise to the code and distract developers from critical information. In the absence of proper justification, excessive logging can obscure important log messages and make it more challenging to identify and address genuine issues.
Recommendation:
To enhance the code quality and maintainability of the runtime, we recommend refining the use of warn macros. By limiting the use of these macros to critical and important situations, the codebase will become more concise and easier to comprehend. This targeted approach will enable developers to focus on essential log messages that offer meaningful insights into the system's behavior, rather than being overwhelmed by extraneous warnings. Therefore, unnecessary warn macros should be reviewed and removed, with a priority on retaining only those that provide critical and important warnings. | Excessive warn!() macros in the implementation of ChainExtension clutter code and reduce maintainability. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
7 | Cargo.toml | Pendulum build | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Build Process | Code Quality | Pendulum build
The Pendulum chain demonstrates a smooth and error-free build process.
ID: PDM-001
Scope: Build Process
Status: Fixed
Description:
The build process for the Pendulum chain is efficient and error-free. When executing the cargo build --release command, the output confirms a successful build with only one minor compiler warning related to an unused import. The build process adheres to sound Rust coding practices and follows idiomatic conventions.
During the assessment, it was observed that the orml-currencies-allowance-extension pallet is not included as a member of the workspace, although it should be.
Recommendation:
To address this issue, we recommend adding pallets/orml-currencies-allowance-extension to the members list in the main Cargo.toml file of the workspace.
Please note that this issue does not encompass the results of linting tools, such as Clippy, which may provide additional warnings and recommendations for improving code quality. These will be addressed separately in PDM-002. | Missing a member in the workspace configuration. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
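The recommended fix is a one-line addition to the workspace manifest. A hedged sketch, where the surrounding member entries are illustrative and only the added pallet path comes from the finding:

```toml
# Top-level Cargo.toml of the workspace (other entries are assumptions)
[workspace]
members = [
    "node",
    "runtime/foucoco",
    "pallets/orml-currencies-allowance-extension",
]
```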
8 | pallets/orml-currencies-allowance-extension/src/lib.rs:99 | Superfluous Implementation of Hooks Trait | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Code Quality | Code Quality | Superfluous Implementation of Hooks Trait
The Hooks trait has been declared in the orml-currencies-allowance-extension pallet, but no custom implementations have been provided.
ID: PDM-005
Scope: Code Quality
Status: Acknowledged
Description:
The following code snippet raises concerns:
pallets/orml-currencies-allowance-extension/src/lib.rs:99:
#[pallet::hooks]
impl<T: Config> Hooks<T::BlockNumber> for Pallet<T> {}
The Hooks trait is typically used to perform logic on every block initialization, finalization, and other specific actions. However, in the case of the orml-currencies-allowance-extension pallet, no methods from the Hooks trait are implemented. While this does not cause any immediate detrimental effects, it can reduce code readability and increase complexity.
Recommendation:
To improve code clarity and readability, it is recommended to eliminate the use of the Hooks trait in the orml-currencies-allowance-extension pallet. Since no custom implementations are provided, removing the Hooks trait declaration will simplify the code and remove unnecessary complexity. | The Hooks trait has been declared in the orml-currencies-allowance-extension pallet, but no custom implementations have been provided. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
9 | pallets/orml-currencies-allowance-extension/src/lib.rs
runtime/foucoco/src/lib.rs | Test Coverage | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Code Quality / Testing | Code Quality | Test Coverage
The current test coverage of the orml-currencies-allowance-extension pallet is insufficient, with only 22.22% coverage. Moreover, the implementation of ChainExtension in the runtime lacks any test coverage.
ID: PDM-003
Scope: Code Quality / Testing
Status: Acknowledged
Description:
To evaluate the test coverage, we recommend using the cargo tarpaulin command:
cargo tarpaulin --out Html --output-dir ./tarpaulin-report
Running this command generates an HTML file that provides detailed coverage information for all packages. Specifically, it highlights the orml-currencies-allowance-extension pallet, which currently has a coverage of only 22.22%. Additionally, it reveals the absence of tests for the methods utilized in the runtime to implement ChainExtension, including is_allowed_currency, allowance, do_approve_transfer, and do_transfer_approved, as well as the lack of testing for the overall ChainExtension implementation in the foucoco runtime.
Recommendation:
To address the low test coverage in the orml-currencies-allowance-extension pallet and the lack of test coverage in the runtime implementation of ChainExtension, it is essential to develop a comprehensive test suite. This suite should include thorough testing to ensure the security, stability, and maintainability of the project.
Implementing continuous integration (CI) systems to automate the execution of the test suite is highly recommended. This practice enables the team to identify areas with insufficient test coverage, detect regressions, and uncover areas that require improvement. By incorporating CI into the development workflow, valuable feedback is obtained, ultimately enhancing the overall quality of the codebase. | Insufficient test coverage in key areas affects security, stability, and maintainability. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
10 | pallets/orml-currencies-allowance-extension/Cargo.toml | Vulnerable and Unmaintained Dependencies | https://github.com/pendulum-chain/pendulum | d01528d17b96bf3de72c36deb3800c2ed0cf2afb | null | null | null | null | Dependency Management | Dependency | Vulnerable and Unmaintained Dependencies
The orml-currencies-allowance-extension pallet has dependencies that include one vulnerable crate plus three unmaintained crates.
ID: PDM-004
Scope: Dependencies
Status: Acknowledged
Description:
The orml-currencies-allowance-extension pallet has several dependencies that raise concerns regarding their security and maintenance. The following table provides details about these dependencies:
Dependency | Version | Id | Type | Remediation
------------|---------|---------------|-----------------|--------------------------------------------------
time | 0.1.45 | RUSTSEC-2020-0071 | Vulnerability | Upgrade to >=0.2.23
ansi_term | 0.12.1 | RUSTSEC-2021-0139 | Unmaintained | Use alternative crates: anstyle, console, nu-ansi-term, owo-colors, stylish, yansi
mach | 0.3.2 | RUSTSEC-2020-0168 | Unmaintained | Switch to mach2
parity-wasm | 0.45.0 | RUSTSEC-2022-0061 | Unmaintained | Switch to wasm-tools
Although these dependencies may not have an immediate impact on the security aspect and are part of the Substrate project, it is essential to regularly review dependencies and monitor for updates to ensure the overall security and maintenance of the project.
Recommendation:
To address these concerns and maintain a secure codebase, it is recommended to take the following actions:
1. Use cargo audit to check for any new vulnerabilities or outdated packages in your project's dependencies. Regularly perform these checks to stay updated on potential security issues.
2. Monitor for new releases of Substrate and update your project accordingly. Keeping your project up to date with the latest Substrate releases ensures that you benefit from bug fixes, security patches, and improvements.
Additionally, it is worth noting that the latest available Substrate release at the time of writing is v0.9.43. Stay informed about new releases and evaluate the feasibility of updating your project to benefit from the latest features and improvements. | Dependency includes one vulnerable and three unmaintained crates requiring updates or replacements. | null | true | 4 | Pendulum | Hacken | https://audits.hacken.io/pendulum/l1-pendulum-pendulum-chain-jun2023/ |
0 | null | TCR voting design should be improved | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 3 | 3 | null | null | Business Logic | 3.1 (HAL-01) HAL-01 TCR VOTING DESIGN SHOULD BE IMPROVED - MEDIUM
Description:
It was observed that it is possible to:
• Vote with 0 amount
• Challenge yourself
• Counter yourself
• Vote for yourself
By combining these properties, the following scenarios become possible:
• A whale can influence any challenge/counter decision by voting for itself.
• A whale can also farm additional tokens upon success by countering any application and then voting for itself.
• By countering your own application and voting with 0 amount, it is possible to fill up the storage, since the values are pushed into a vector.
• An account can remove itself from the members in the root of trust.
Risk Level:
Likelihood - 3
Impact - 4
Recommendation:
Consider improving the design by not letting the same account:
• Vote for itself
• Counter itself
• Challenge itself
• Vote with 0 deposit
Remediation Plan:
NOT APPLICABLE: The issue is marked as not applicable by the Nodle team as the TCR and root of trust pallets will be removed. | TCR voting design allows self-voting, zero-amount votes, and potential manipulation by large token holders. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
1 | pallets/staking/src/lib.rs | Denomination logic should be improved | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 2 | 4 | null | null | Business Logic | 3.2 (HAL-02) HAL-02 DENOMINATION LOGIC SHOULD BE IMPROVED - MEDIUM
Description:
It was observed that if a nominator has a single validator, it is not possible to remove that validator through nominator_denominate, since the extrinsic checks against <StakingMinNominatorTotalBond<T>>. In that case, nominator_denominate_all has to be used, which bypasses the check; this bypass is not intentional.
Code Location:
Listing 1: pallets/staking/src/lib.rs
1 if !do_force {
    ensure!(
        remaining >= <StakingMinNominatorTotalBond<T>>::get(),
        <Error<T>>::NominatorBondBelowMin
    );
}
Risk Level:
Likelihood - 4
Impact - 2
Recommendation:
Consider having a conditional statement in nominator_denominate that allows the forced removal of a validator if the nominator has only one validator.
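The recommended condition can be modeled in plain Rust as follows; `may_denominate`, its parameters, and the single-validator rule for forcing are illustrative stand-ins for the pallet's actual types and storage, not Nodle's API:

```rust
// Hypothetical model of the recommended fix: allow a forced removal when
// the nominator has only one validator left; otherwise keep enforcing the
// minimum-total-bond invariant.
fn may_denominate(remaining: u128, min_total_bond: u128, nomination_count: usize) -> bool {
    let do_force = nomination_count == 1; // single validator: force the removal
    do_force || remaining >= min_total_bond
}

fn main() {
    // Two nominations left: the minimum-bond check still applies.
    assert!(!may_denominate(5, 10, 2));
    assert!(may_denominate(15, 10, 2));
    // Single nomination: removal is allowed even below the minimum.
    assert!(may_denominate(0, 10, 1));
}
```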
Remediation Plan:
SOLVED: The issue was solved by the Nodle team.
• Fix Commit | Denomination logic bypasses minimum bond check when only one validator is nominated. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
2 | pallets | Emergency shutdown not used in critical functions | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 4 | 3 | null | null | Denial of Service (DoS) and Spamming | 3.3 (HAL-03) HAL-03 EMERGENCY SHUTDOWN NOT USED IN MANY CRITICAL FUNCTIONS - MEDIUM
Description:
It was observed that the emergency shutdown pallet is used only in the allocate function of the allocations pallet. However, there are more public functions across different pallets that might be problematic if a bug (security-related or not) is discovered in them at any point in time. There should be functionality to shut them down before fixes are pushed.
Code Location:
These functions should have a shutdown functionality:
Grants pallet
• add_vesting_schedule
Staking pallet
• validator_join_pool
• validator_exit_pool
• validator_bond_more
• validator_bond_less
• nominator_nominate
• nominator_denominate
• nominator_bond_more
• nominator_bond_less
• nominator_move_nomination
• unbond_frozen
• withdraw_unbonded
• withdraw_staking_rewards
Risk Level:
Likelihood - 3
Impact - 4
Recommendation:
Consider enabling shutdown functionality in critical public functions.
Example Code:
Listing 2
1 ensure!(
    !pallet_emergency_shutdown::Pallet::<T>::shutdown(),
    Error::<T>::UnderShutdown
);
Remediation Plan:
PENDING: In a future release, the Nodle team will modify the emergency shutdown pallet to better generalize. | Emergency shutdown functionality is missing in multiple critical public functions across different pallets. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
3 | pallets/staking/src/lib.rs:201-208 | Missing sanity checks | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 2 | 3 | null | null | Error Handling and Validation | 3.4 (HAL-04) HAL-04 MISSING SANITY CHECKS - LOW
Description:
It was observed that the set_staking_limits privileged function is missing sanity checks on provided values. Even though it is a protected function, it is still advised to have some sanity checks to avoid any human error.
Code Location:
Listing 3: pallets/staking/src/lib.rs
201 pub fn set_staking_limits (
origin: OriginFor<T>,
max_stake_validators: u32,
min_stake_session_selection: BalanceOf<T>,
min_validator_bond: BalanceOf<T>,
min_nominator_total_bond: BalanceOf<T>,
min_nominator_chill_threshold: BalanceOf<T>
) -> DispatchResultWithPostInfo {
Risk Level:
Likelihood - 3
Impact - 2
Recommendation:
It is recommended to add sanity checks to ensure:
• max_stake_validators != 0
• min_stake_session_selection != 0
• min_validator_bond != 0
• min_nominator_total_bond != 0
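As a runnable sketch, the checks above can be collected into a single validation helper; the function and its error strings are hypothetical, not the pallet's actual Error variants:

```rust
// Sketch of the recommended sanity checks for set_staking_limits,
// modeled as a free function over the extrinsic's parameters.
fn validate_staking_limits(
    max_stake_validators: u32,
    min_stake_session_selection: u128,
    min_validator_bond: u128,
    min_nominator_total_bond: u128,
) -> Result<(), &'static str> {
    if max_stake_validators == 0 {
        return Err("max_stake_validators must be non-zero");
    }
    if min_stake_session_selection == 0 {
        return Err("min_stake_session_selection must be non-zero");
    }
    if min_validator_bond == 0 {
        return Err("min_validator_bond must be non-zero");
    }
    if min_nominator_total_bond == 0 {
        return Err("min_nominator_total_bond must be non-zero");
    }
    Ok(())
}

fn main() {
    assert!(validate_staking_limits(16, 100, 1_000, 10).is_ok());
    assert!(validate_staking_limits(0, 100, 1_000, 10).is_err());
    assert!(validate_staking_limits(16, 100, 0, 10).is_err());
}
```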
Remediation Plan:
SOLVED: The issue was solved by the Nodle team.
• Fix Commit | Missing sanity checks in set_staking_limits function may allow invalid parameter values. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
4 | pallets/grants/src/lib.rs:157-168 | Vesting to yourself is allowed | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 2 | 3 | null | null | Business Logic | 3.5 (HAL-05) HAL-05 VESTING TO YOURSELF IS ALLOWED - LOW
Description:
It was observed that you can create a vesting schedule to yourself.
Code Location:
Listing 4: pallets/grants/src/lib.rs
157 pub fn add_vesting_schedule (
origin: OriginFor<T>,
dest: <T::Lookup as StaticLookup>::Source,
schedule: VestingScheduleOf<T>
) -> DispatchResultWithPostInfo {
let from = ensure_signed(origin)?;
let to = T::Lookup::lookup(dest)?;
Self::do_add_vesting_schedule(&from, &to, schedule.clone())?;
Self::deposit_event(Event::VestingScheduleAdded(from, to, schedule));
Ok(().into())
}
Risk Level:
Likelihood - 3
Impact - 2
Recommendation:
Please add a check that ensures that from != to in fn add_vesting_schedule.
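A minimal model of this guard, with account ids represented as plain integers and a hypothetical error string (the pallet would use its own Error variant):

```rust
// Sketch of the recommended self-vesting guard in add_vesting_schedule.
fn ensure_not_self(from: u64, to: u64) -> Result<(), &'static str> {
    if from == to {
        return Err("cannot add a vesting schedule to yourself");
    }
    Ok(())
}

fn main() {
    assert!(ensure_not_self(1, 2).is_ok());
    assert!(ensure_not_self(3, 3).is_err());
}
```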
Remediation Plan:
SOLVED: The issue was solved by the Nodle team.
• Fix Commit | Vesting schedule allows setting up a vesting to oneself without restrictions. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
5 | pallets/grants/src/lib.rs:88 | Missing zero value check | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 2 | 2 | null | null | Error Handling and Validation | 3.6 (HAL-06) HAL-06 MISSING ZERO VALUE CHECK - LOW
Description:
It was observed that the allocate function should have a zero value check on the amount argument.
Code Location:
Listing 5: pallets/grants/src/lib.rs (Line 88)
85 pub fn allocate (
origin: OriginFor<T>,
to: T::AccountId,
amount: BalanceOf<T>,
proof: Vec<u8>
) -> DispatchResultWithPostInfo {
Self::ensure_oracle(origin)?;
...
Risk Level:
Likelihood - 2
Impact - 2
Recommendation:
Consider adding zero value checks to these functions to avoid performing redundant operations if a zero value is received.
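A minimal sketch of such a guard; the helper name and error string are illustrative, not the pallet's actual API:

```rust
// Sketch of the recommended zero-value guard for allocate.
fn ensure_non_zero(amount: u128) -> Result<(), &'static str> {
    if amount == 0 {
        return Err("amount must be greater than zero");
    }
    Ok(())
}

fn main() {
    assert!(ensure_non_zero(1).is_ok());
    assert!(ensure_non_zero(0).is_err());
}
```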
Remediation Plan:
SOLVED: The issue was solved by the Nodle team.
• Fix Commit | Allocate function lacks a zero-value check for the amount argument. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
6 | null | Vesting schedules less than a current block can be created | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 1 | 1 | null | null | Business Logic | 3.7 (HAL-07) HAL-07 VESTING SCHEDULES LESS THAN A CURRENT BLOCK CAN BE CREATED - INFORMATIONAL
Description:
It was observed that the pallet allows the creation of vesting schedules that end before the current block number. Such vesting schedules are no more than regular transfers with extra steps and are therefore redundant.
Example:
Listing 6
1 Current Block: 100
2 Vesting Schedule Start: Block 1
3 Period: 10 Blocks
4 Period Count: 2
5 Per Period: 1 knodl
6 Vesting Duration: 10 * 2 + 1 = 21 Blocks
7 Initial Transfer Sent: 2 knodl
8 Next Claims: 0, since Vesting Duration < Current Block
Risk Level:
Likelihood - 1
Impact - 1
Recommendation:
Consider adding a check that ensures that:
(period * period_count) + start > current_block_number
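This check can be modeled directly, using the numbers from the example above (the function name is illustrative):

```rust
// Model of the recommended check: a schedule is only meaningful if it
// ends after the current block.
fn schedule_ends_in_future(start: u64, period: u64, period_count: u64, current_block: u64) -> bool {
    (period * period_count) + start > current_block
}

fn main() {
    // Example schedule: start 1, period 10, two periods, at block 100.
    // It ends at block 21, so it should be rejected.
    assert!(!schedule_ends_in_future(1, 10, 2, 100));
    // A schedule starting at block 90 still has claims ahead of it.
    assert!(schedule_ends_in_future(90, 10, 2, 100));
}
```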
Remediation Plan:
NOT APPLICABLE: The issue was marked as not applicable by the Nodle team, stating:
This can be useful to keep as it is. In fact, we may have to create retroactive awards that may have been partially vested. | Vesting schedules can be created with start blocks earlier than the current block, making them redundant. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
7 | pallets/grants/src/lib.rs:253 | Redundant check | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 1 | 1 | null | null | Code Quality | 3.8 (HAL-08) HAL-08 REDUNDANT CHECK - INFORMATIONAL
Description:
It was observed that the grants pallet contains a redundant check.
Code Location:
There is no need for the second new_lock.is_zero() check, since it was already performed. Removal of the VestingSchedule can be done within the first check.
Listing 7: pallets/grants/src/lib.rs (Line 253)
247 if new_lock.is_zero() {
248     T::Currency::remove_lock(VESTING_LOCK_ID, &target);
249 }
250 else {
251     T::Currency::set_lock(VESTING_LOCK_ID, &target, new_lock, WithdrawReasons::all());
252 }
253 if new_lock.is_zero() {
254     // No more claimable, clear
255     VestingSchedules::<T>::remove(target.clone());
256 }
257 else {
258     T::Currency::set_lock(VESTING_LOCK_ID, &target, new_lock, WithdrawReasons::all());
259 }
Risk Level:
Likelihood - 1
Impact - 1
Recommendation:
Please remove the second new_lock.is_zero() check and remove the vestingSchedule within the first check.
Listing 8: pallets/grants/src/lib.rs
247 if new_lock.is_zero() {
248     T::Currency::remove_lock(VESTING_LOCK_ID, &target);
249     VestingSchedules::<T>::remove(target.clone());
250 }
251 else {
252     T::Currency::set_lock(VESTING_LOCK_ID, &target, new_lock, WithdrawReasons::all());
Remediation Plan:
SOLVED: The issue was solved by the Nodle team.
• Fix Commit | Redundant check of new_lock.is_zero() found in grants pallet. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
8 | pallets/tcr/src/lib.rs:138-152,478 | Redundant variable | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 1 | 1 | null | null | Code Quality | 3.9 (HAL-09) HAL-09 REDUNDANT VARIABLE - INFORMATIONAL
Description:
It was observed that the old_1 variable in the on_finalize function of the tcr pallet is redundant. The tuple returned from commit_applications is Ok((new_members, Vec::new())), so old_1 is always an empty vector, and extending it with old_2 makes no difference. In this scenario, only new_1 matters.
Code Location:
Listing 9: pallets/tcr/src/lib.rs
138 fn on_finalize(block: T::BlockNumber) {
139     let (mut new_1, mut old_1) =
140         Self::commit_applications(block).unwrap_or((Vec::new(), Vec::new()));
141     let (new_2, old_2) =
142         Self::resolve_challenges(block).unwrap_or((Vec::new(), Vec::new()));
143     // Should never be the same, so should not need some uniq checks
145     new_1.extend(new_2);
146     old_1.extend(old_2);
148     new_1.sort();
149     old_1.sort();
151     Self::notify_members_change(new_1, old_1);
Listing 10: pallets/tcr/src/lib.rs (Line 478)
460 fn commit_applications(block: T::BlockNumber) -> FinalizeHelperResultFrom<T> {
461     let new_members = <Applications<T, I>>::iter()
462         .filter(|(_account_id, application)| {
463             block
464                 .checked_sub(&application.clone().created_block)
465                 .expect("created_block should always be smaller than block; qed")
466                 >= T::FinalizeApplicationPeriod::get()
467         })
468         .map(|(account_id, application)| {
469             <Applications<T, I>>::remove(account_id.clone());
470             <Members<T, I>>::insert(account_id.clone(), application.clone());
471             Self::unreserve_for(account_id.clone(), application.candidate_deposit);
472             Self::deposit_event(Event::ApplicationPassed(account_id.clone()));
474             account_id
475         })
476         .collect::<Vec<T::AccountId>>();
478     Ok((new_members, Vec::new())) // === HERE ===
479 }
Risk Level:
Likelihood - 1
Impact - 1
Recommendation:
Consider omitting old_1 and removing all actions performed on it.
Listing 11: pallets/tcr/src/lib.rs
247 fn on_finalize(block: T::BlockNumber) {
248     let (mut new_1, _) =
249         Self::commit_applications(block).unwrap_or((Vec::new(), Vec::new()));
250     let (new_2, mut old) =
251         Self::resolve_challenges(block).unwrap_or((Vec::new(), Vec::new()));
253     // Should never be the same, so should not need some uniq checks
254     new_1.extend(new_2);
257     new_1.sort();
258     old.sort();
260     Self::notify_members_change(new_1, old);
Remediation Plan:
SOLVED: The issue was solved by the Nodle team.
• Fix Commit | Redundant variable old1 in on_finalize function. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
9 | null | Usage of vulnerable crates | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 1 | 2 | null | null | Dependency | 3.10 (HAL-10) HAL-10 USAGE OF VULNERABLE CRATES - INFORMATIONAL
Description:
It was observed that the project uses crates with known vulnerabilities.
Code Location:
ID | Package | Short Description
------------------|-------------------|--------------------------------------------------
RUSTSEC-2020-0159 | chrono | Potential segfault in 'localtime_r' invocations
RUSTSEC-2020-0071 | time | Potential segfault in the time crate
RUSTSEC-2021-0130 | lru | Use after free in lru crate
RUSTSEC-2021-0067 | cranelift-codegen | Memory access due to code generation flaw in Cranelift module
RUSTSEC-2021-009 | crossbeam-deque | Data race in crossbeam-deque
RUSTSEC-2021-0079 | hyper | Integer overflow in hyper's parsing of the Transfer-Encoding header leads to data loss
RUSTSEC-2021-0078 | hyper | Lenient hyper header parsing of Content-Length could allow request smuggling
RUSTSEC-2021-0076 | libsecp256k1 | libsecp256k1 allows overflowing signatures
RUSTSEC-2021-0070 | nalgebra | VecStorage Deserialize Allows Violation of Length Invariant
RUSTSEC-2021-0073 | prost-types | Conversion from prost_types::Timestamp to SystemTime can cause an overflow and panic
RUSTSEC-2021-0013 | raw-cpuid | Soundness issues in raw-cpuid
RUSTSEC-2021-0089 | raw-cpuid | Optional Deserialize implementations lacking validation
RUSTSEC-2021-0124 | tokio | Data race when sending and receiving after closing a oneshot channel
RUSTSEC-2021-0110 | wasmtime | Multiple Vulnerabilities in Wasmtime
RUSTSEC-2021-0115 | zeroize-derive | #[zeroize(drop)] doesn't implement Drop for enums
Risk Level:
Likelihood - 2
Impact - 1
Recommendation:
Even if those vulnerable crates cannot impact the underlying application, it is advised to be aware of them and attempt to update them to non-vulnerable versions. Furthermore, it is necessary to set up dependency monitoring to always be alerted when a new vulnerability is disclosed in one of the project’s crates.
Remediation Plan:
ACKNOWLEDGED: The issue was acknowledged by the Nodle team and will be fixed later. | Usage of crates with known vulnerabilities identified. | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
10 | Cargo.toml | Outdated Rust edition | https://github.com/NodleCode/chain | de356170bfe2eb9f537e3c4861d6752dd099f43e | null | 1 | 1 | null | null | Code Quality | 3.11 (HAL-11) HAL-11 OUTDATED RUST EDITION - INFORMATIONAL
Description:
It was observed that the project is using an outdated Rust edition (2018). The 2021 Rust edition has since been released; it includes many stability improvements and new features that may make the code more readable.
Code Location:
• Cargo.toml
Risk Level:
Likelihood - 1
Impact - 1
Recommendation:
Consider updating the project to the latest Rust edition to benefit from the latest features and stability improvements.
Reference:
Rust 2021 Edition Guide
Remediation Plan:
SOLVED: The issue was solved by the Nodle team.
• Fix Commit | Project is using an outdated Rust edition (2018). | null | true | 5 | Nodle | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Nodle_Nodl_Substrate_Pallet_Security_Audit_Report_Halborn_Final.pdf |
0 | modules/evm-accounts/src/lib.rs:182,313,314
modules/evm-bridge/src/lib.rs:183,186,191 | Integer overflow | https://github.com/reef-defi/reef-chain | 393d0c0821cc25ea5c6912d9cac8f61a9232c9a3 | null | 3 | 3 | null | null | Arithmetic | 3.1 (HAL-01) INTEGER OVERFLOW - MEDIUM
Description:
An integer overflow occurs when an arithmetic operation attempts to create a numeric value that is outside of the range that can be represented with a given number of bits, either larger than the maximum or smaller than the minimum representable value. For instance, the ethereum_signable_message() method sums several lengths into a single integer, which may overflow.
Code Location:
Listing 1: modules/evm-accounts/src/lib.rs (Lines 182)
180 pub fn ethereum_signable_message(what: &[u8], extra: &[u8]) -> Vec<u8> {
181 let prefix = b"reef evm:";
182 let mut l = prefix.len() + what.len() + extra.len();
183 let mut rev = Vec::new();
Listing 2: modules/evm-accounts/src/lib.rs (Lines 313,314)
312 pub fn to_ascii_hex(data: &[u8]) -> Vec<u8> {
313 let mut r = Vec::with_capacity(data.len() * 2);
314 let mut push_nibble = |n| r.push(if n < 10 { b'0' + n } else { b'a' - 10 + n });
Listing 3: modules/evm-bridge/src/lib.rs (Lines 183,186,191)
182 let offset = U256::from_big_endian(&output[0..32]);
183 let length = U256::from_big_endian(&output[offset.as_usize()..offset.as_usize() + 32]);
184 ensure!(
// output is 32-byte aligned. ensure total_length >= offset + string length + string data length.
186 output.len() >= offset.as_usize() + 32 + length.as_usize(),
187 Error::<T>::InvalidReturnValue
188 );
189 let mut data = Vec::new();
191 data.extend_from_slice(&output[offset.as_usize() + 32..offset.as_usize() + 32 + length.as_usize()]);
Risk Level:
Likelihood - 3
Impact - 3
Recommendations:
It is recommended to use vetted safe math libraries for arithmetic operations consistently throughout the smart contract system. Consider replacing the addition and multiplication operators with Rust’s checked_add and checked_mul methods.
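A sketch of the recommended fix for the length computation in ethereum_signable_message, using checked_add instead of bare `+`; the helper function is illustrative, not the module's actual API:

```rust
// Returns None instead of wrapping or panicking when the summed
// lengths would overflow usize.
fn signable_message_len(prefix: &[u8], what: &[u8], extra: &[u8]) -> Option<usize> {
    prefix
        .len()
        .checked_add(what.len())?
        .checked_add(extra.len())
}

fn main() {
    // "reef evm:" is 9 bytes, "payload" is 7 bytes.
    assert_eq!(signable_message_len(b"reef evm:", b"payload", b""), Some(16));
    // checked_mul covers the capacity computation in to_ascii_hex.
    assert_eq!(7usize.checked_mul(2), Some(14));
    assert_eq!(usize::MAX.checked_mul(2), None);
}
```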
Remediation:
SOLVED: Reef fixed the issue in commit 6e4153498a28d03b8600739709cb200065c88781. | Integer overflow vulnerability due to unchecked arithmetic operations in ethereum_signable_message and to_ascii_hex functions. | null | true | 6 | ReefChain | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Reef_Chain_Substrate_Security_Audit_Report_Halborn_v1_1.pdf |
1 | modules/currencies/src/lib.rs:168 | Total issuance not updated on mint | https://github.com/reef-defi/reef-chain | 393d0c0821cc25ea5c6912d9cac8f61a9232c9a3 | null | 3 | 3 | null | null | Business Logic | 3.2 (HAL-02) TOTAL ISSUANCE NOT UPDATED ON MINT - MEDIUM
Description:
The update_balance dispatchable defined in modules/currencies/src/lib.rs does not update the total issuance of the currency (identified by user-supplied ID) which is minted to the target address. This may lead to discrepancies in token data.
Code Location:
Listing 4: modules/currencies/src/lib.rs (Lines 168)
159 #[pallet::weight(T::WeightInfo::update_balance_non_native_currency())]
160 pub fn update_balance(
161 origin: OriginFor<T>,
162 who: <T::Lookup as StaticLookup>::Source,
163 currency_id: CurrencyIdOf<T>,
164 amount: AmountOf<T>,
165 ) -> DispatchResultWithPostInfo {
166 ensure_root(origin)?;
167 let dest = T::Lookup::lookup(who)?;
168 <Self as MultiCurrencyExtended<T::AccountId>>::update_balance(currency_id, &dest, amount)?;
169 Ok(().into())
170 }
Risk Level:
Likelihood - 3
Impact - 3
Recommendations:
Total issuance should be updated every time tokens are minted or burned.
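A minimal model of this invariant; the struct and field names are illustrative, not the orml MultiCurrency API:

```rust
// Minting or burning through update_balance must adjust total issuance
// by the same delta as the account balance.
struct Token {
    total_issuance: i128,
    balance: i128,
}

fn update_balance(t: &mut Token, by_amount: i128) {
    t.balance += by_amount;
    t.total_issuance += by_amount; // the step missing in the finding
}

fn main() {
    let mut t = Token { total_issuance: 1_000, balance: 100 };
    update_balance(&mut t, 50);
    assert_eq!(t.balance, 150);
    assert_eq!(t.total_issuance, 1_050);
}
```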
Remediation Plan:
ACKNOWLEDGED: Reef states that the affected function is sudo only and will be deprecated in a future release. | Total issuance is not updated on mint, causing potential discrepancies in token data. | null | true | 6 | ReefChain | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Reef_Chain_Substrate_Security_Audit_Report_Halborn_v1_1.pdf |
2 | modules/evm-bridge/src/lib.rs:183,186,191 | Casting overflow | https://github.com/reef-defi/reef-chain | 26ed9e88e773f5d628c01d558945cd38cd5a7d5a | null | 3 | 2 | null | null | Arithmetic | 3.3 (HAL-03) CASTING OVERFLOW - LOW
Description:
When converting or casting between types, an overflow or wrapping may occur, resulting in logic bugs that lead to a thread panic. The decode_string utility method defined in modules/evm-bridge/src/lib.rs does not validate whether the values of the offset and length variables can be safely cast to the usize type. Although the method is not exported and not available externally, it is still vulnerable, and the risk could increase in the future if the method is used before it is patched.
Code Location:
Listing 5: modules/evm-bridge/src/lib.rs (Lines 183,186,191)
182 let offset = U256::from_big_endian(&output[0..32]);
183 let length = U256::from_big_endian(&output[offset.as_usize()..offset.as_usize() + 32]);
184 ensure!(
// output is 32-byte aligned. ensure total_length >= offset + string length + string data length.
186 output.len() >= offset.as_usize() + 32 + length.as_usize(),
187 Error::<T>::InvalidReturnValue
188 );
189 let mut data = Vec::new();
191 data.extend_from_slice(&output[offset.as_usize() + 32..offset.as_usize() + 32 + length.as_usize()]);
Risk Level:
Likelihood - 2
Impact - 3
Recommendations:
Check the value against maximum type value before casting.
Listing 6:
1 if x <= U256::from(usize::MAX) {
2     // safe to call x.as_usize()
3 }
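A runnable illustration of the bounds check before a narrowing cast, using u128 in place of the bridge's U256 type (the helper name is hypothetical):

```rust
// Returns None when the value would not fit in usize, so the caller
// never performs a lossy narrowing cast.
fn to_usize_checked(x: u128) -> Option<usize> {
    if x <= usize::MAX as u128 {
        Some(x as usize) // safe: the value fits
    } else {
        None
    }
}

fn main() {
    assert_eq!(to_usize_checked(42), Some(42));
    assert_eq!(to_usize_checked(u128::MAX), None);
}
```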
Remediation:
SOLVED: Reef fixed the issue in commit 313439bb7940afa0f0d5060fbcbbe26d5a3e5298. | Casting overflow vulnerability due to unchecked type conversion in decode_string method. | null | true | 6 | ReefChain | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Reef_Chain_Substrate_Security_Audit_Report_Halborn_v1_1.pdf |
3 | modules/currencies/src/lib.rs:396 | Slash amount validation missing | https://github.com/reef-defi/reef-chain | 393d0c0821cc25ea5c6912d9cac8f61a9232c9a3 | null | 2 | 2 | null | null | Error Handling and Validation | 3.4 (HAL-04) SLASH AMOUNT VALIDATION MISSING - LOW
Description:
The slash_reserved method defined in modules/currencies/src/lib.rs does not validate if the value of the user-supplied value parameter exceeds the actual balance of the account owned by the address that is to have its ERC20 tokens slashed.
Code Location:
Listing 7: modules/currencies/src/lib.rs (Lines 396)
394 fn slash_reserved(currency_id: Self::CurrencyId, who: &T::AccountId, value: Self::Balance) -> Self::Balance {
395 match currency_id {
396 CurrencyId::ERC20(_) => value,
397 CurrencyId::Token(TokenSymbol::REEF) => T::NativeCurrency::slash_reserved(who, value),
398 _ => T::MultiCurrency::slash_reserved(currency_id, who, value),
399 }
400 }
Risk Level:
Likelihood - 2
Impact - 2
Recommendations:
The slashed amount should always be lesser or equal to the account balance that is to be slashed.
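A minimal sketch of this clamp; function and parameter names are illustrative:

```rust
// Never slash more than the reserved balance actually held.
fn slash_amount(reserved: u128, requested: u128) -> u128 {
    requested.min(reserved)
}

fn main() {
    assert_eq!(slash_amount(100, 250), 100); // clamped to the balance
    assert_eq!(slash_amount(100, 40), 40);
}
```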
Remediation:
SOLVED: Reef fixed the issue in commit bd43bec58890be763b32bfdfd18ba85a8c0ef9e5. | Missing validation in slash_reserved method allows slashing beyond account balance. | null | true | 6 | ReefChain | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Reef_Chain_Substrate_Security_Audit_Report_Halborn_v1_1.pdf |
4 | modules/currencies/src/lib.rs:125,178,186,199,217,235,290,303,316,324,336,376,386,394,402,423,445 | Currency ID validation missing | https://github.com/reef-defi/reef-chain | 393d0c0821cc25ea5c6912d9cac8f61a9232c9a3 | null | 2 | 2 | null | null | Error Handling and Validation | 3.5 (HAL-05) CURRENCY ID VALIDATION MISSING - LOW
Description:
Many dispatchables and helper methods defined in modules/currencies/src/lib.rs do not check if the user-supplied currency ID matches any of the existing ones before calling the possibly resource-intensive underlying utility functions.
Code Location:
Listing 8: modules/currencies/src/lib.rs (Lines 125)
121 #[pallet::weight(T::WeightInfo::transfer_non_native_currency())]
122 pub fn transfer(
123 origin: OriginFor<T>,
124 dest: <T::Lookup as StaticLookup>::Source,
125 currency_id: CurrencyIdOf<T>,
126 #[pallet::compact] amount: BalanceOf<T>,
127 ) -> DispatchResultWithPostInfo {
128 let from = ensure_signed(origin)?;
129 let to = T::Lookup::lookup(dest)?;
130 <Self as MultiCurrency<T::AccountId>>::transfer(currency_id, &from, &to, amount)?;
131 Ok(().into())
132 }
List of all the functions that fail to validate the currency ID:
Listing 9: (Lines 2,3)
1 auditor@halborn:~/projects/reef/reef-chain/modules/currencies$ \
2 > grep -ne 'fn.*CurrencyId' src/lib.rs \
3 > | cut -d '-' -f 1
4 178: fn minimum_balance(currency_id: Self::CurrencyId)
5 186: fn total_issuance(currency_id: Self::CurrencyId)
6 199: fn total_balance(currency_id: Self::CurrencyId, who: &T::AccountId)
7 217: fn free_balance(currency_id: Self::CurrencyId, who: &T::AccountId)
8 235: fn ensure_can_withdraw(currency_id: Self::CurrencyId, who: &T::AccountId, amount: Self::Balance)
9 290: fn deposit(currency_id: Self::CurrencyId, who: &T::AccountId, amount: Self::Balance)
10 303: fn withdraw(currency_id: Self::CurrencyId, who: &T::AccountId, amount: Self::Balance)
11 316: fn can_slash(currency_id: Self::CurrencyId, who: &T::AccountId, amount: Self::Balance)
12 324: fn slash(currency_id: Self::CurrencyId, who: &T::AccountId, amount: Self::Balance)
13 336: fn update_balance(currency_id: Self::CurrencyId, who: &T::AccountId, by_amount: Self::Amount)
14 376: fn remove_lock(lock_id: LockIdentifier, currency_id: Self::CurrencyId, who: &T::AccountId)
15 386: fn can_reserve(currency_id: Self::CurrencyId, who: &T::AccountId, value: Self::Balance)
16 394: fn slash_reserved(currency_id: Self::CurrencyId, who: &T::AccountId, value: Self::Balance)
17 402: fn reserved_balance(currency_id: Self::CurrencyId, who: &T::AccountId)
18 423: fn reserve(currency_id: Self::CurrencyId, who: &T::AccountId, value: Self::Balance)
19 445: fn unreserve(currency_id: Self::CurrencyId, who: &T::AccountId, value: Self::Balance)
Risk Level:
Likelihood - 2
Impact - 2
Recommendations:
It is recommended to validate all user-supplied input in order to avoid executing unnecessary operations and mitigate the risk of resource exhaustion.
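An illustrative upfront validation, rejecting unknown currency ids before any expensive work is done; the id space and helper are hypothetical, not Reef's CurrencyId type:

```rust
// Reject unknown currency ids early instead of dispatching to the
// underlying utility functions.
fn ensure_known_currency(id: u32, known: &[u32]) -> Result<(), &'static str> {
    if known.contains(&id) {
        Ok(())
    } else {
        Err("unknown currency id")
    }
}

fn main() {
    let known = [0u32, 7];
    assert!(ensure_known_currency(7, &known).is_ok());
    assert!(ensure_known_currency(42, &known).is_err());
}
```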
Remediation Plan:
ACKNOWLEDGED: Reef states that there is only 1 currency id in use, and there likely won’t be more going forward. | Missing validation for currency ID in multiple methods could lead to resource exhaustion. | null | true | 6 | ReefChain | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Reef_Chain_Substrate_Security_Audit_Report_Halborn_v1_1.pdf |
5 | modules/evm-accounts/src/lib.rs:313 | Vector capacity validation missing | https://github.com/reef-defi/reef-chain | 393d0c0821cc25ea5c6912d9cac8f61a9232c9a3 | null | 2 | 1 | null | null | Error Handling and Validation | 3.6 (HAL-06) VECTOR CAPACITY VALIDATION MISSING - INFORMATIONAL
Description:
When creating a new Vec<u8> from the user-supplied data slice with Vec::with_capacity, the to_ascii_hex utility function defined in modules/evm-accounts/src/lib.rs does not validate whether the capacity of the new vector exceeds the maximum allowed capacity.
Code Location:
Listing 10: modules/evm-accounts/src/lib.rs (Lines 313)
312 pub fn to_ascii_hex(data: &[u8]) -> Vec<u8> {
313 let mut r = Vec::with_capacity(data.len() * 2);
314 let mut push_nibble = |n| r.push(if n < 10 { b'0' + n } else { b'a' - 10 + n });
315 for &b in data.iter() {
316 push_nibble(b / 16);
317 push_nibble(b % 16);
318 }
319 r
320 }
Risk Level:
Likelihood - 1
Impact - 2
Recommendations:
Validate if the new capacity (data.len() * 2) does not exceed isize::MAX bytes.
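A sketch of this capacity check: the doubled length must neither overflow usize nor exceed isize::MAX bytes, Rust's allocation limit. The helper name is illustrative:

```rust
// Returns None when doubling the input length would overflow or exceed
// the maximum allocation size.
fn checked_hex_capacity(data_len: usize) -> Option<usize> {
    let cap = data_len.checked_mul(2)?;
    if cap <= isize::MAX as usize {
        Some(cap)
    } else {
        None
    }
}

fn main() {
    assert_eq!(checked_hex_capacity(4), Some(8));
    assert_eq!(checked_hex_capacity(usize::MAX), None); // would overflow
}
```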
Remediation:
SOLVED: Reef fixed the issue in commit 6b826f7ca16d1a30f3fa55f0606d0b94b69b2b3a. | Missing validation in to_ascii_hex function allows vector capacity to exceed maximum limit. | null | true | 6 | ReefChain | Halborn | https://github.com/HalbornSecurity/PublicReports/blob/master/Substrate%20Audits/Reef_Chain_Substrate_Security_Audit_Report_Halborn_v1_1.pdf |
0 | parachain-staking/lib.rs | Static fee charged despite dynamic storage accesses | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Medium | null | null | Bad Extrinsic Weight | Weight Management | 4.1.1 V-MANC-VUL-001: Static fee charged despite dynamic storage accesses
Severity: Medium
Type: Bad Extrinsic Weight
Commit: 45ba60e1d
Status: Acknowledged
File(s): parachain-staking/lib.rs
Location(s): go_online, go_offline, candidate_bond_more
Blockchain computations must have appropriate fees to prevent network congestion. For substrate extrinsics, these fees are set by computing an associated weight for the operation where the weight is intended to capture the maximum computational cost. As reads from and writes to storage are expensive, these weights should consider the number of these operations that are performed. The following extrinsics, however, have a fixed weight despite requiring a dynamic number of reads or writes due to insert or remove operations being performed on CandidatePool.
- go_online
- go_offline
- candidate_bond_more
- execute_candidate_bond_less
- delegate
- execute_leave_delegators
- delegator_bond_more
- execute_delegation_request
- schedule_leave_delegators
- schedule_delegator_bond_less
- cancel_leave_delegators
Also note that similar functions in the same pallet, such as schedule_leave_candidates, charge users dynamic fees. An example can be seen in Snippet 4.1.
Impact: As the size of the CandidatePool grows, the cost of insert and remove will increase linearly since vector inserts in Rust are linear in the size of the vector. This allows malicious actors to add many candidates to the pool for a fixed monetary cost despite an increasing computational cost. If the size of the pool becomes too large, this could effectively create a DoS.
Recommendation: Similar to schedule_leave_candidates, calculate the weights dynamically rather than charging a fixed cost.
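One way to express the recommendation is a weight helper that scales with the size of the candidate pool; the constants and names below are illustrative placeholders, not Manta's benchmarked values:

```rust
/// Illustrative dynamic weight: a fixed base plus a per-candidate component,
/// mirroring the O(n) cost of insert/remove on a sorted candidate vector.
const BASE_WEIGHT: u64 = 10_000_000;
const PER_CANDIDATE_WEIGHT: u64 = 100_000;

fn go_offline_weight(candidate_count: u64) -> u64 {
    BASE_WEIGHT.saturating_add(PER_CANDIDATE_WEIGHT.saturating_mul(candidate_count))
}
```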
Developer Response: The developers have acknowledged the issue and are determining how to address it.
Snippet 4.1: go_offline calls remove on the CandidatePool but charges users a fixed weight in WeightInfo::go_offline
#[pallet::call_index(12)]
#[pallet::weight(<T as Config>::WeightInfo::go_offline())]
/// Temporarily leave the set of collator candidates without unbonding
pub fn go_offline(origin: OriginFor<T>) -> DispatchResultWithPostInfo {
let collator = ensure_signed(origin)?;
let mut state = <CandidateInfo<T>>::get(&collator).ok_or(Error::<T>::CandidateDNE)?;
ensure!(state.is_active(), Error::<T>::AlreadyOffline);
state.go_offline();
let mut candidates = <CandidatePool<T>>::get();
if candidates.remove(&Bond::from_owner(collator.clone())) {
<CandidatePool<T>>::put(candidates);
}
<CandidateInfo<T>>::insert(&collator, state);
Self::deposit_event(Event::CandidateWentOffline { candidate: collator });
Ok(().into())
} | Fixed fees are charged for dynamic storage operations, potentially leading to DoS as CandidatePool size grows. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
1 | pallets/manta-pay/src/lib.rs | Users can use any previously seen Merkle root | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Medium | null | null | Hash Collision | Dependency | 4.1.2 V-MANC-VUL-002: Users can use any previously seen Merkle root
Severity: Medium
Type: Hash Collision
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/manta-pay/src/lib.rs
Location(s): has_matching_utxo_accumulator_output
The MantaPay protocol maintains a Merkle tree on the ledger where the leaves of the ledger are the hashes of the UTXOs generated during the protocol’s lifetime. In order to spend a UTXO, users must supply a ZK proof that the UTXO belongs to the Merkle tree on the ledger. The membership proof takes as input the root of the Merkle tree (public input), the inner node hashes (private inputs), and proves that the root can be derived from the inner node hashes and leaf.
Ideally, the ledger would check that the root provided is equal to the latest root on-chain. However, this isn’t done in practice, as the transaction could easily be front-run since every transaction changes the root. Instead, the ledger maintains a set of all previously generated roots and just checks that the root provided belongs to that set.
However, by allowing the root provided by the user to be any previously generated root, an attacker simply needs to find a hash collision with any previously generated root to steal assets. The likelihood of finding a collision grows quadratically with the number of previously seen hashes. In particular, given an output size of b bits and n previously generated hashes, the likelihood of finding a collision with any of the n hashes is approximately n^2/2^(b+1).
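The quoted bound can be checked numerically; this helper (not from the audited code) works in log2 space to avoid overflowing any integer type:

```rust
/// Birthday-bound estimate from the finding: log2 of n^2 / 2^(b+1) for
/// n stored roots and b-bit digests, i.e. 2*log2(n) - (b + 1).
fn log2_collision_probability(n: f64, b: f64) -> f64 {
    2.0 * n.log2() - (b + 1.0)
}
```

With b = 255 and n = 2^30 (roughly a billion roots), the estimate is about 2^-196, negligible as long as the hash behaves like a random function.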
The current version of the Protocol uses the Poseidon hash function, which produces 255-bit hashes and, in theory, should be safe even with billions of previously seen roots. However, this is contingent on the safety of the Poseidon hash. While there has been a significant amount of research and analysis conducted on the function, including various attacks and optimizations, there is no formal proof of its security and correctness, let alone any proofs about concrete implementations.
Impact: Storing all previously seen roots significantly increases the likelihood of collisions. If any attack or weakness is found in the Poseidon hash, this can be an additional means of attacking the protocol.
Recommendation: There are a few ways to mitigate this. Protocols like Semaphore maintain a timeout period TIMEOUT and associate each root with a timestamp indicating when it was created. Any root created before now() - TIMEOUT is rejected. Another option is to store only the N most recently generated roots and accept a root only if it belongs to that set. The latter option has the additional benefit of not needing to store every root on-chain.
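The second option amounts to a bounded buffer of recent roots; a sketch under the assumption of 32-byte roots, with hypothetical type and method names:

```rust
use std::collections::VecDeque;

/// Keeps only the N most recent Merkle roots; older roots are evicted
/// and can no longer anchor a membership proof.
struct RecentRoots {
    max: usize,
    roots: VecDeque<[u8; 32]>,
}

impl RecentRoots {
    fn new(max: usize) -> Self {
        Self { max, roots: VecDeque::with_capacity(max) }
    }

    fn record(&mut self, root: [u8; 32]) {
        if self.roots.len() == self.max {
            self.roots.pop_front(); // evict the oldest root
        }
        self.roots.push_back(root);
    }

    fn is_known(&self, root: &[u8; 32]) -> bool {
        self.roots.contains(root)
    }
}
```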
Developer Response: The developers acknowledged the issue and will either use a timestamp or only maintain the N previously generated roots. | Allowing any previously seen Merkle root increases the risk of hash collisions, potentially allowing asset theft. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
2 | pallets/manta-pay/src/lib.rs | MantaPay weights calculated with a small database | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Medium | null | null | Bad Extrinsic Weight | Weight Management | 4.1.3 V-MANC-VUL-003: MantaPay weights calculated with a small database
Severity: Medium
Type: Bad Extrinsic Weight
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/manta-pay/src/lib.rs
Location(s): to_private, to_public, private_transfer
Transactions to_public, to_private, and private_transfer take as input nullifiers and membership proofs and generate UTXOs. These UTXOs are then added to a Merkle tree on the ledger.
MantaPay shards this Merkle tree into 256 buckets where each bucket has its own Merkle tree. Instead of storing the entire tree at each bucket, the Ledger just stores the last path added to the tree. When adding a UTXO, the Ledger first computes its corresponding bucket, then computes the new path pointing to that UTXO, and finally adds that path to the bucket.
Computing the new path should take time proportional to log(n) where n is the size of the Merkle Tree. The current benchmarking scheme only covers cases where the previous path is small, i.e., at most size 1. However, if the number of transactions gets large, on the order of hundreds of millions or billions, then the size of the path can get to 24-28 (taking shards into account). If the tree grows to this size, this means each execution of the extrinsic will perform 24-28 hashes, multiplied by the number of UTXOs to be added.
The benchmarking scheme should take into account the size of the tree to ensure that the existing weights are sufficient to offset the computation of the new Merkle tree path.
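The path lengths cited above follow from simple arithmetic; a rough estimator (assuming uniform sharding and ceil(log2) tree depth, names not from the audited code):

```rust
/// Estimated Merkle path depth: n total UTXOs spread uniformly over the
/// shards gives per-shard trees of ~n/shards leaves and paths of
/// ~ceil(log2(n/shards)) hashes.
fn est_path_depth(total_utxos: u64, shards: u64) -> u32 {
    let per_shard = (total_utxos / shards).max(1);
    per_shard.next_power_of_two().trailing_zeros()
}
```

Four billion UTXOs over 256 shards yield paths of about 24 hashes, consistent with the 24-28 range above.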
Impact: Setting the weight too low can allow users to perform a large number of transactions with little cost, potentially allowing malicious users to launch a DoS attack.
Recommendation: There are several ways to address this. One strategy is to take an additional parameter corresponding to the logarithm of the Merkle Tree's size on the ledger. The weight charged can be proportional to this value. In the implementation, this value (technically 2^value) can be compared against the actual size, and the transaction will only proceed if it is larger than or equal to the actual size.
Another strategy is to benchmark the pallets by taking into account the tree's size. If MantaPay is expected not to exceed more than a billion transactions, then the pallet could be benchmarked assuming the current path length is around 24-28.
Developer Response: The developers acknowledged that the weights should be recalculated with a more saturated database. | Inadequate benchmarking of weights for MantaPay's Merkle tree could lead to DoS attacks as the tree grows. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
3 | pallets/asset-manager/src/lib.rs | Total supply of native assets can exceed the set limit | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Medium | null | null | Logic Error | Business Logic | 4.1.4 V-MANC-VUL-004: Total supply of native assets can exceed the set limit
Severity: Medium
Type: Logic Error
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/asset-manager/src/lib.rs
Location(s): mint_asset
One invariant underlying the correctness of MantaPay is that the total supply of an asset cannot exceed the maximum amount that can be held in a particular account. This is because MantaPay uses a dedicated account A to store the value of all the private assets. As such, A should, in principle, be able to hold all the supply in the case where all of that asset is privatized.
In more detail, when privatizing a user’s public assets (via to_private), MantaPay constructs opaque UTXOs to encode the amount privatized, and then transfers those public assets into A. This transfer is expected not to fail because of the invariant described above. However, we found a case where the transfer can fail.
Manta enforces this invariant for NonNative assets because every time an asset is minted into an account, the total supply is increased. If the total supply would exceed the maximum that can be held in an account, the mint fails with the error Overflow. However, there is no such check for Native assets. Thus, if the total supply of Native assets exceeds the maximum that can be held in an account, u128::MAX, then to_private calls that should succeed can fail if the amount held in A is close to the maximum allowed. This is demonstrated in Snippet 4.2.
Impact: By not constraining the amount of Native assets to be less than the maximum amount that can be held in an account, to_private transactions that should succeed will fail.
Recommendation: We recommend a similar check be done for Native assets as is done for NonNative assets to enforce that the total supply cannot exceed the maximum that can be held in a given account.
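A sketch of the recommended supply check for Native assets, with hypothetical names (the NonNative path already fails with Overflow in this situation):

```rust
/// Refuse mints that would push the total supply past what a single
/// account can hold (u128::MAX), mirroring the NonNative-asset check.
fn checked_native_mint(total_supply: u128, amount: u128) -> Result<u128, &'static str> {
    total_supply
        .checked_add(amount)
        .ok_or("Overflow: total supply would exceed u128::MAX")
}
```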
Developer Response: The developers have acknowledged the issue and are determining how to address it.
Snippet 4.2: Failed to_private due to count of native assets exceeding u128::MAX
#[test]
fn public_account_issue() {
let mut rng = OsRng;
new_test_ext().execute_with(|| {
let asset_id = NATIVE_ASSET_ID;
let value = 1000u128;
let id = NATIVE_ASSET_ID;
let metadata = AssetRegistryMetadata {
metadata: AssetStorageMetadata {
name: b"Calamari".to_vec(),
symbol: b"KMA".to_vec(),
decimals: 12,
is_frozen: false,
},
min_balance: TEST_DEFAULT_ASSET_ED2,
is_sufficient: true,
};
assert_ok!(MantaAssetRegistry::create_asset(
id, metadata.into(), TEST_DEFAULT_ASSET_ED2,
true
));
assert_ok!(FungibleLedger::<Test>::deposit_minting(id, &ALICE, 2*value));
assert_ok!(FungibleLedger::<Test>::deposit_minting(id, &MantaPay::account_id(), u128::MAX));
let mut utxo_accumulator = UtxoAccumulator::new(UTXO_ACCUMULATOR_MODEL.clone());
let spending_key = rng.gen();
let address = PARAMETERS.address_from_spending_key(&spending_key);
let mut authorization =
Authorization::from_spending_key(&PARAMETERS, &spending_key, &mut rng);
let asset_0 = Asset::new(Fp::from(asset_id), value);
let (to_private_0, pre_sender_0) = ToPrivate::internal_pair(
&PARAMETERS, &mut authorization.context,
address, asset_0,
Default::default(), &mut rng,
);
let to_private_0 = to_private_0
.into_post(
FullParametersRef::new(&PARAMETERS, utxo_accumulator.model()),
&PROVING_CONTEXT.to_private,
None, Vec::new(), &mut rng,
)
.expect("Unable to build TO_PRIVATE proof.")
.expect("Did not match transfer shape.");
assert_ok!(MantaPay::to_private(
MockOrigin::signed(ALICE),
PalletTransferPost::try_from(to_private_0).unwrap()
));
})
}
| Native asset supply can exceed the maximum limit for an account, causing to_private transactions to fail. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
4 | pallets/asset-manager/src/lib.rs | Missing updates in update_asset_metadata | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Medium | null | null | Logic Error | Business Logic | 4.1.5 V-MANC-VUL-005: Missing updates in update_asset_metadata
Severity: Medium
Type: Logic Error
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/asset-manager/src/lib.rs
Location(s): update_asset_metadata
Manta Chain has an asset-manager pallet responsible for registering and minting assets. Each asset has a unique id and is associated with various metadata like a name, symbol, decimal places, etc. One important metadata is called min_balance. To store an account with some quantity of assets on the ledger, it must have more than min_balance quantity. This metadata is also used when validating transfers.
In particular, many asset transfers take an “existential parameter” as input, called KeepAlive, which decides what to do if the transfer would take the account’s balance (with respect to the asset) below min_balance. If KeepAlive is set, the transfer will fail if the amount goes below the min_balance. If it is not set, other configurations come into play, and the account may be removed and the remaining balance burned.
The asset-manager pallet exposes an extrinsic called update_asset_metadata, which takes as input the new metadata for that asset and updates the ledger to associate the asset with that metadata. While the implementation took a new min_balance as input, it did not update the ledger to associate the asset with this metadata.
Additionally, this API also took as input a new value for the metadata is_sufficient, but similarly did not update the ledger to associate the asset with this metadata.
Impact: While it is rare for the min_balance to be changed, it is sometimes necessary if it was originally set too high, for example. The current API made it appear that min_balance could be changed, so users might think the min_balance was changed when, in fact, it wasn’t.
Recommendation: The main issue with this extrinsic is that its interface makes it appear as though the metadata min_balance and is_sufficient can be changed when in fact they are not. Either the API should be changed to accept only the metadata that is actually updated, or it should appropriately update min_balance and is_sufficient.
Developer Response: The developers acknowledged this issue and are discussing two possible fixes. The first is to update both parameters in the asset pallet, and the second is to change the interface to prevent updating the min_balance or is_sufficient parameters. | update_asset_metadata does not apply updates to min_balance and is_sufficient metadata as expected. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
5 | parachain-staking | No slashing mechanism for collators | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Medium | null | null | Consensus | Business Logic | 4.1.6 V-MANC-VUL-006: No slashing mechanism for collators
Severity: Medium
Type: Consensus
Commit: 45ba60e1d
Status: Acknowledged
File(s): parachain-staking
Location(s): N/A
Proof of Stake blockchains often have a slashing mechanism to detect poorly performing stakers and punish them. Typically, a significant portion of the staker’s stake is taken by the chain as punishment for poor performance.
Currently, Manta Chain does not have any slashing mechanism. Instead, it uses a combination of social pressure and manual slashing to incentivize good behavior. Specifically, when the owners detect a poorly performing collator, they contact the collator over Discord and warn them of the poor performance. If their performance does not improve, the owners manually slash the collator’s funds.
While this approach may work when the blockchain is small, it will be challenging to enforce as the chain grows. Therefore, we recommend that Manta Chain implement a slashing mechanism.
Impact: Manta Chain’s current method of using social pressure will only work with a small set of trusted collators. However, as the chain expands, this mechanism is unlikely to sufficiently incentivize collators to perform well.
Recommendation: We recommend that Manta Chain establish a slashing mechanism to implement if/when the current process becomes inadequate.
Developer Response: The developers acknowledged the lack of a slashing mechanism and plan to include one in the future if/when the current process stops working. | No automated slashing mechanism exists to penalize poorly performing collators, relying on social pressure and manual intervention. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
6 | pallets/parachain-staking/src/lib.rs | Collators given full rewards regardless of quality | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Medium | null | null | Consensus | Business Logic | 4.1.7 V-MANC-VUL-007: Collators given full rewards regardless of quality
Severity: Medium
Type: Consensus
Commit: 45ba60e1d
Status: Open
File(s): pallets/parachain-staking/src/lib.rs
Location(s): pay_one_collator_reward
Manta Chain rewards collators by first allocating a fixed number of points (20) for every block they author and then giving the collator a fixed percentage of those allocated points as rewards. However, there is no check on the quality of the blocks authored by the collator: an empty block will result in just as many rewards as a full block.
Currently, Manta relies on the owners to monitor the blocks on-chain and manually punish collators who perform poorly. However, as the chain grows, this misbehavior may not be easy to detect.
One relatively simple way to address this issue is to adjust the reward system to incentivize high quality blocks.
Impact: Collators can effectively steal funds from Manta by authoring low-quality blocks (i.e., empty or partial blocks) and reaping full rewards.
Recommendation: We recommend the developers adjust the rewards system to either reward the collators for high-quality blocks or punish them for authoring poor ones.
Developer Response: TBD | Collators receive full rewards regardless of block quality, allowing low-quality blocks to earn the same as high-quality ones. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
7 | pallets/manta-pay/src/lib.rs:593 | Missing validation in pull_ledger_diff | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Low | null | null | Data Validation | Error Handling and Validation | 4.1.8 V-MANC-VUL-008: Missing validation in pull_ledger_diff
Severity: Low
Type: Data Validation
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/manta-pay/src/lib.rs
Location(s): Line 593
pull_ledger_diff takes as input a Checkpoint, which is a struct with two fields: receiver_index and sender_index. It pulls sender and receiver data from the ledger, starting at sender_index (resp. receiver_index) up to at most sender_index + PULL_MAX_SENDER_UPDATE_SIZE (resp. receiver_index + PULL_MAX_RECEIVER_UPDATE_SIZE). However, there is no check to ensure that this sum cannot overflow for both the sender and receiver index in pull_senders, pull_receivers, pull_senders_for_shard, and pull_receivers_for_shard.
Impact: If the code is compiled without the --release flag, a malicious user could crash the node by passing in bad values. If it is built with --release, the call will be reported as successful and no senders or receivers will be returned. However, if a benign end user calls the API with incorrect indexes, it might be preferable to return an Error informing them that the index is invalid.
Recommendation: We recommend adding bounds checks for safety and to return an Error.
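A sketch of the recommended bounds check, using a hypothetical helper: the pull window end is computed with checked arithmetic so an invalid checkpoint surfaces an error instead of overflowing or silently returning nothing.

```rust
/// Compute the [start, end) window for a ledger pull, rejecting
/// checkpoint indexes whose window end would overflow u64.
fn pull_window(start_index: u64, max_update_size: u64) -> Result<(u64, u64), &'static str> {
    let end = start_index
        .checked_add(max_update_size)
        .ok_or("invalid checkpoint index")?;
    Ok((start_index, end))
}
```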
Developer Response: The developers are aware and agree that it would be better to check and return an error. | Missing bounds checks in pull_ledger_diff could lead to overflow, potentially crashing the node. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
8 | pallets/asset-manager/src/lib.rs:590 | increase_count_of_associated_assets can overflow | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Low | null | null | Logic Error | Arithmetic | 4.1.9 V-MANC-VUL-009: increase_count_of_associated_assets can overflow
Severity: Low
Type: Logic Error
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/asset-manager/src/lib.rs
Location(s): Line 590
The asset_manager pallet maintains a mapping of paraids to a count of assets associated with that paraid. Each paraid can be associated with at most u32::MAX assets. When registering an asset or moving its location, the pallet calls increase_count_of_associated_assets, which takes as input a paraid and increments the number of assets associated with that paraid. However, this function does not check whether increasing the number of assets will result in an overflow.
Impact: If the runtime is compiled using --debug, this can crash the node. However, if built under --release, the asset count will go to zero.
Recommendation: Make this function check if the addition will result in an overflow, i.e., check if the current count is u32::MAX and return an error.
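The recommended guard can be sketched as follows (the function name is hypothetical):

```rust
/// Increment the per-paraid asset count only if it is below u32::MAX,
/// returning an error instead of overflowing.
fn increase_count_checked(count: u32) -> Result<u32, &'static str> {
    count
        .checked_add(1)
        .ok_or("paraid already has u32::MAX associated assets")
}
```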
Developer Response: Acknowledged | increase_count_of_associated_assets lacks overflow check, risking node crash or reset to zero in certain cases. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
9 | pallets/manta-pay/src/lib.rs | Account checks are incorrect | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Low | null | null | Logic Error | Business Logic | 4.1.10 V-MANC-VUL-010: Account checks are incorrect.
Severity: Low
Type: Logic Error
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/manta-pay/src/lib.rs
Location(s): check_sink_accounts, check_source_accounts
When validating a transaction, the source and sink accounts are checked by check_sink_accounts and check_source_accounts. These functions iterate over pairs (account, value) and check that value can be safely deposited (withdrawn) from account. The logic is correct only if every account appears in at most one pair. While this is fine for the current APIs, if the APIs change to allow multiple sink or multiple source accounts, then this code needs to be refactored or the uniqueness needs to be enforced elsewhere.
Impact: Currently there is no impact since the current APIs only allow one account for the source and sink accounts.
Recommendation: To be safe, we recommend adding an additional check in the validation step to ensure the accounts are distinct for both sources and sinks.
Developer Response: The developers acknowledged the issue and plan to add a check during validation to ensure that the accounts are distinct. | Account checks in transaction validation assume unique accounts, which may lead to issues if APIs change to allow multiple accounts. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
10 | parachain-staking/lib.rs | Unstaked user may be selected as collator | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Low | null | null | Logic Error | Business Logic | 4.1.11 V-MANC-VUL-011: Unstaked user may be selected as collator
Severity: Low
Type: Logic Error
Commit: 45ba60e1d
Status: Acknowledged
File(s): parachain-staking/lib.rs
Location(s): select_top_candidates
Parachains use collators to combine transactions into blocks that are then checked by Validators on the relay chain. Notably, this allows collators to remain relatively untrusted as validators ensure blocks are created correctly. On Manta’s chain, collators are selected from a group of staked users who receive rewards for creating blocks. Requiring that collators be staked provides additional security guarantees as, if a collator misbehaves (e.g., submits no blocks for validation, submits multiple conflicting blocks), governance can step in and slash the user’s staked funds. As such, unstaked collators have less incentive to maintain parachain stability and should be avoided.
However, in the collator selection process, if no sufficiently staked collator can be found, collators from the previous round are selected. As there is no validation of the current state of the previous collators’ stake, this could select unstaked collators who lack the incentive to ensure network stability.
Here is a simple test case demonstrating this:
#[test]
fn test_failed_candidate_selection() {
ExtBuilder::default()
.with_balances(vec![(10, 10)])
.with_candidates(vec![(10, 10)])
.build()
.execute_with(|| {
roll_to(2);
assert_ok!(ParachainStaking::schedule_leave_candidates(Origin::signed(10), 6u32));
roll_to(5);
let candidate: Vec<u64> = ParachainStaking::selected_candidates();
assert_ne!(candidate[0], 10u64);
});
}
Impact: Collators will not be incentivized to ensure network stability. Another set of partially staked or “trusted” collators could provide better stability.
Recommendation: Consider maintaining a set of “trusted” collators to fall back on if no staked collators can be found.
Developer Response: Acknowledged. | Unstaked users may be selected as collators if no sufficiently staked collators are available, reducing network stability. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
11 | runtime/calamari/src/weights/xcm/mod.rs, runtime/dolphin/src/weights/xcm/mod.rs | XCM instructions can charge 0 weight | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Low | null | null | Bad Extrinsic Weight | Weight Management | 4.1.12 V-MANC-VUL-012: XCM instructions can charge 0 weight
Severity: Low
Type: Bad Extrinsic Weight
Commit: 45ba60e1d
Status: Acknowledged
File(s): runtime/(calamari, dolphin)/src/weights/xcm/mod.rs
Location(s): Every use of weigh_multi_assets
The Polkadot ecosystem uses the XCM messaging standard to enable parachains and the relay chain to communicate with each other. For example, if a parachain P1 wants to deposit an asset onto another parachain P2, they can construct an XCM message stating they wish to deposit an asset into an account associated with P1 and send it to P2.
Each XCM message consists of a sequence of low-level XCM instructions that get executed by the XCM executor on the destination parachain. To offset the cost of executing these instructions, parachains set weights for each instruction, so the sender of the XCM message is charged fees for the destination parachain executing their message.
Manta Chain configured the weights of multiple instructions so that senders could generate messages with a total weight of 0. For example, in the code snippet below, the function deposit_asset sets the weight for the XCM instruction deposit_asset based on a parameter called assets. When assets is an empty vector, the weight_multi_assets function returns 0, resulting in a total weight of 0 for the instruction:
fn deposit_asset(
assets: &MultiAssetFilter,
_max_assets: &u32,
_dest: &MultiLocation,
) -> Weight {
let hardcoded_weight: u64 = 1_000_000_000;
let weight = assets.weigh_multi_assets(XcmFungibleWeight::<Runtime>::deposit_asset());
cmp::min(hardcoded_weight, weight)
}
This setup allows malicious or incompetent senders to spam Manta with messages costing 0, even though the instruction is successfully executed by the XCM executor. While a denial of service may be unlikely due to the fast execution of 0-length vectors, spam prevention is recommended. Adding a minimal base fee for instructions that can be executed with 0 weight would help prevent spam.
Impact: Malicious users may spam Manta with XCM messages of weight 0, potentially slowing down blockchain performance and risking denial of service.
Recommendation: We recommend always charging a base fee to prevent spam.
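A base-fee floor for the multi-asset weigher could look like the sketch below; the constant and names are placeholders, not benchmarked values:

```rust
/// A successfully executed instruction never weighs less than
/// BASE_XCM_WEIGHT, even for an empty asset vector.
const BASE_XCM_WEIGHT: u64 = 1_000_000;

fn weigh_multi_assets_with_floor(asset_count: u64, per_asset_weight: u64) -> u64 {
    asset_count
        .saturating_mul(per_asset_weight)
        .max(BASE_XCM_WEIGHT)
}
```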
Developer Response: The developers acknowledged the issue and plan to change weigh_multi_assets to charge the benchmarked weight of a single asset execution for any successfully executed multiasset XCM message. | XCM instructions can be executed with zero weight, allowing potential spam through 0-cost messages. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
12 | pallets/asset-manager/src/lib.rs | Missing validation in set_units_per_second | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Low | null | null | Data Validation | Error Handling and Validation | 4.1.13 V-MANC-VUL-013: Missing validation in set_units_per_second
Severity: Low
Type: Data Validation
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/asset-manager/src/lib.rs
Location(s): set_units_per_second
The asset-manager pallet manages a hashmap called UnitsPerSecond which maps assetIds to a u128 value units_per_second, used to determine the price for an XCM transfer. It provides a function, set_units_per_second, to set the units_per_second for a given asset. This value determines the cost (in the corresponding asset) to purchase a certain weight for a transaction. The following code snippet calculates the amount:
let units_per_second = M::units_per_second(&asset_id).ok_or({
log::debug!(
target: "FirstAssetTrader::buy_weight",
"units_per_second missing for asset with id: {:?}",
id,
);
XcmError::TooExpensive
})?;
let amount = units_per_second * (weight as u128) / (WEIGHT_PER_SECOND as u128);
if amount.is_zero() {
return Ok(payment);
}
This calculation of amount uses multiplication, which can overflow. Currently, both units_per_second and amount are of type u128. If units_per_second exceeds u128::MAX / u64::MAX, a large weight (u64::MAX) can be bought for a small asset amount, allowing a malicious parachain to potentially perform a DoS attack.
There is currently no validation in set_units_per_second to ensure units_per_second is small enough. Although only the root can call set_units_per_second, there is a risk if the root user mistakenly sets an excessive value or is tricked into doing so.
Impact: If units_per_second is set above u128::MAX / u64::MAX, large amounts of weight can be purchased at low cost, potentially enabling a DoS attack on the chain.
Recommendation: Change the type of units_per_second to map assetId to a u64 value or validate that the amount is sufficiently small.
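The hardened calculation can be sketched as follows. This is a minimal standalone model, not the pallet's actual code: the constant value and function name are assumptions chosen for illustration. Saturating multiplication clamps at u128::MAX instead of wrapping, so an oversized units_per_second yields a prohibitively expensive amount rather than a near-zero one.

```rust
// Hypothetical sketch of the fee calculation with saturating arithmetic.
// WEIGHT_PER_SECOND's value here is an assumption for illustration.
const WEIGHT_PER_SECOND: u128 = 1_000_000_000_000;

fn buy_weight_amount(units_per_second: u128, weight: u64) -> u128 {
    // saturating_mul clamps at u128::MAX instead of wrapping around, so a
    // huge units_per_second produces a huge (expensive) amount, never a tiny one.
    units_per_second.saturating_mul(weight as u128) / WEIGHT_PER_SECOND
}

fn main() {
    // Normal case: 5 units/second for half a second of weight costs 2.
    assert_eq!(buy_weight_amount(5, (WEIGHT_PER_SECOND / 2) as u64), 2);
    // Overflow case: wrapping would have produced a small amount; saturation
    // produces the maximum representable price instead.
    assert_eq!(
        buy_weight_amount(u128::MAX / 2, u64::MAX),
        u128::MAX / WEIGHT_PER_SECOND
    );
}
```

With wrapping arithmetic the second call could have priced u64::MAX weight at a tiny amount; with saturation the attack becomes maximally expensive instead.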
Developer Response: The developers acknowledged the issue and plan to fix the weight calculation to use saturating arithmetic, which should address this vulnerability. | Missing validation in set_units_per_second allows potential overflow, enabling weight to be purchased at low cost, risking DoS. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
13 | null | Collator is a single point of failure for a round | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Low | null | null | Consensus | Code Quality | 4.1.14 V-MANC-VUL-014: Collator is a single point of failure for a round
Severity: Low
Type: Consensus
Commit: 45ba60e1d
Status: Acknowledged
The Manta parachain uses the Aura consensus mechanism to select collators to author blocks. Aura selects a primary collator for a round, and only that collator is allowed to produce blocks in that round. However, if that collator goes down, then no blocks will be produced, making that collator a single point of failure.
Other parachains, like Moonbeam, address this by selecting multiple collators for a given round.
Impact: If a collator goes down, no blocks will be produced for that round, thereby impacting the transaction throughput of Manta.
Recommendation: We recommend that Manta adopt a consensus mechanism that selects multiple collators, ideally geographically separated, so if one collator fails, the likelihood of others failing remains low.
Developer Response: The developers are aware of this issue and have plans to move away from the Aura consensus mechanism. | Single collator selection creates a point of failure, risking block production halt if the collator goes down. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
14 | pallets/manta-pay/src/lib.rs | Unchecked index calculation in spend_all | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Warning | null | null | Logic Error | Arithmetic | 4.1.15 V-MANC-VUL-015: Unchecked index calculation in spend_all
Severity: Warning
Type: Logic Error
Commit: 45ba60e1d
Status: Acknowledged
File(s): pallets/manta-pay/src/lib.rs
Location(s): spend_all
The spend_all function in the Manta-Pay pallet does the following:
1. Adds the nullifier commitments in the TransactionPost to the NullifierCommitmentSet.
2. Inserts each (nullifier, outgoingNote) pair into the NullifierSetInsertionOrder structure.
3. Updates a global variable, NullifierSetSize, which stores the size of the nullifier commitment set.
The index where the pair gets inserted, along with the new nullifier set size, is calculated as index + i, where i is the index of the corresponding SenderPost and index is the current size of the set. However, this arithmetic is unchecked and could overflow.
Impact: When the size of the commitment set reaches u64::MAX, the index calculation may overflow, causing the pair to be inserted at the beginning of the list and setting the nullifier set size to 1. However, reaching this value through normal execution is extremely unlikely.
Recommendation: Add an overflow check and return an error.
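The recommended check can be sketched as a small standalone helper (the function name is hypothetical, not the pallet's API): compute the insertion index with checked arithmetic and surface an error instead of wrapping.

```rust
// Hypothetical sketch: `index` models the current nullifier set size and
// `i` the offset of the corresponding SenderPost.
fn insertion_index(index: u64, i: u64) -> Result<u64, &'static str> {
    // checked_add returns None on overflow, which we turn into an error
    // instead of silently wrapping to the start of the list.
    index.checked_add(i).ok_or("nullifier set size overflow")
}

fn main() {
    assert_eq!(insertion_index(10, 3), Ok(13));
    // At u64::MAX the unchecked version would wrap; the checked version errors.
    assert!(insertion_index(u64::MAX, 1).is_err());
}
```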
Developer Response: TBD. | Unchecked index calculation in spend_all may cause overflow, leading to incorrect data insertion. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
15 | parachain-staking/lib.rs | Excess fees not refunded | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Warning | null | null | Bad Extrinsic Weight | Weight Management | 4.1.16 V-MANC-VUL-016: Excess fees not refunded
Severity: Warning
Type: Bad Extrinsic Weight
File(s): parachain-staking/lib.rs
Location(s): (cancel_leave, execute_leave, schedule_leave, join)_candidates
Commit: 45ba60e1d
Status: Intended Behavior
When a Substrate extrinsic is created, its weight must be chosen carefully, as the weight directly determines the fees charged to the user. The weight should capture the maximum amount of computational resources the extrinsic can consume, since any excess fees can be returned afterward. In several functions, however, the weights are computed from the value of a user-supplied argument, which might not reflect the true cost of the computation.
For example, consider the following:
#[pallet::call_index(11)]
#[pallet::weight(<T as Config>::WeightInfo::cancel_leave_candidates(*candidate_count))]
/// Cancel open request to leave candidates
/// - only callable by collator account
/// - result upon successful call is the candidate is active in the candidate pool
pub fn cancel_leave_candidates(
    origin: OriginFor<T>,
    #[pallet::compact] candidate_count: u32,
) -> DispatchResultWithPostInfo {
    ...
    let mut candidates = <CandidatePool<T>>::get();
    ensure!(
        candidates.0.len() as u32 <= candidate_count,
        Error::<T>::TooLowCandidateCountWeightHintCancelLeaveCandidates
    );
    ...
    Ok(().into())
}
In this function, the weight is computed using the candidate_count argument, and in order for the function to execute successfully, candidate_count must be greater than or equal to the current size of the candidate pool. A user might need to call this function with a candidate_count that is larger than the size of the pool to prevent a front-running attack where a malicious user would add candidates to prevent the transaction from executing successfully. In such a case, the weight would be larger than necessary, but no fees are returned to the user.
Impact: Such functions can charge unnecessary fees to the user.
Recommendation: Refund the user additional fees that are not consumed.
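The refund idea can be modeled with a small standalone sketch. All names and the flat fee constant here are hypothetical, not Substrate APIs: the caller pre-pays based on its candidate_count hint, and the difference between the hinted and the actually processed count is refunded. (In Substrate itself this is typically achieved by returning the actual consumed weight in the extrinsic's PostDispatchInfo.)

```rust
// Illustrative model only: fee charged per hinted candidate is an assumption.
const FEE_PER_CANDIDATE: u64 = 10;

/// Returns (fee_kept, fee_refunded) given the user's weight hint and the
/// actual number of candidates iterated during execution.
fn settle_fee(hinted: u64, actual: u64) -> (u64, u64) {
    let charged = hinted * FEE_PER_CANDIDATE;
    let consumed = actual * FEE_PER_CANDIDATE;
    // saturating_sub guards against actual > hinted.
    (consumed, charged.saturating_sub(consumed))
}

fn main() {
    // Hint over-provisioned to resist front-running: 100 hinted, 60 in the
    // pool, so 400 units of fee are refunded.
    assert_eq!(settle_fee(100, 60), (600, 400));
    // Exact hint: nothing to refund.
    assert_eq!(settle_fee(60, 60), (600, 0));
}
```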
Developer Response: Since these extrinsics are likely to be executed sparingly and since the additional fees are likely to be small, we feel that the additional computational cost of determining the excess does not justify a refund. | Extrinsics may charge unnecessary fees without refunding excess. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
16 | pallets/asset-manager/src/lib.rs | Assets can be registered at unsupported locations | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Warning | null | null | Data Validation | Error Handling and Validation | 4.1.17 V-MANC-VUL-017: Assets can be registered at unsupported locations
Severity: Warning
Type: Data Validation
File(s): pallets/asset-manager/src/lib.rs
Location(s): register_asset
Commit: 45ba60e1d
Status: Acknowledged
The asset-manager pallet allows assets to be registered, managed, and minted. In particular, register_asset takes as input an asset, location, and corresponding asset_metadata and registers the asset. Every asset must be associated with a location; however, Manta only supports assets from specific locations. The current implementation of asset-manager does not perform any validation on the locations passed into register_asset, potentially allowing assets to be registered from untested locations. The pallet also exposes a method called update_asset_location, which is supposed to update the location of an asset. It similarly does not perform any validation on the new location of the asset.
Impact: The current implementation allows assets to be registered from untested locations.
Recommendation: The asset-manager pallet already implements the Contains trait, which exposes a method contains that takes as input a location and returns true if and only if the location is supported. Currently, that method is unused and can be used to validate the locations passed in.
Developer Response: The developers acknowledged the issue and plan to add a check in register_asset. | Assets can be registered without validating their location. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
17 | parachain-staking/lib.rs | Minimum delegator funds is not MinDelegatorStk | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Warning | null | null | Logic Error | Business Logic | 4.1.18 V-MANC-VUL-018: Minimum delegator funds is not MinDelegatorStk
Severity: Warning
Type: Logic Error
File(s): parachain-staking/lib.rs
Location(s): N/A
Commit: 45ba60e1d
Status: Acknowledged
In the case where MinDelegation < MinDelegatorStk, it is possible for the delegator’s staked funds to be less than MinDelegatorStk. This can occur through the following sequence of calls:
1. delegate amount N from delegator D to candidate C1 where N >= MinDelegatorStk
2. delegate amount M from delegator D to candidate C2 where M < MinDelegatorStk and M >= MinDelegation
3. schedule_leave_candidates and execute_leave_candidates for C1
This results in D having M funds staked, where M < MinDelegatorStk.
Impact: If MinDelegation is less than MinDelegatorStk, a delegator may end up with less than MinDelegatorStk funds actually staked.
Note: This is not currently exploitable because MinDelegation == MinDelegatorStk in all production runtimes. However, if these values are adjusted in the future, this bug may become exploitable.
Recommendation: There are two options:
1. When starting a runtime, ensure that MinDelegation >= MinDelegatorStk
2. Whenever a delegation is removed (such as in execute_leave_candidates), ensure that the remaining locked funds for the delegator are at least MinDelegatorStk.
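Option 2 can be sketched as a standalone invariant check. The helper below is hypothetical (not the pallet's code), with the minimum value chosen arbitrarily: after removing one delegation, the delegator's remaining total must either be zero (a full exit) or still meet the minimum.

```rust
// Assumed minimum stake for illustration; the real value is a runtime constant.
const MIN_DELEGATOR_STK: u128 = 500;

/// Hypothetical check run when a delegation at position `removed` is dropped:
/// returns the remaining total stake, or an error if it falls below the minimum.
fn check_remaining_stake(delegations: &[u128], removed: usize) -> Result<u128, &'static str> {
    let remaining: u128 = delegations
        .iter()
        .enumerate()
        .filter(|(i, _)| *i != removed)
        .map(|(_, amt)| amt)
        .sum();
    if remaining == 0 || remaining >= MIN_DELEGATOR_STK {
        Ok(remaining) // fully exited, or still above the minimum
    } else {
        Err("remaining stake below MinDelegatorStk")
    }
}

fn main() {
    // D delegated 500 to C1 and 300 to C2; removing the C1 delegation would
    // leave 300 < 500 staked, which the check rejects.
    assert!(check_remaining_stake(&[500, 300], 0).is_err());
    // Removing the C2 delegation leaves 500, which is acceptable.
    assert_eq!(check_remaining_stake(&[500, 300], 1), Ok(500));
}
```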
Developer Response: The developers acknowledged the issue and are considering removing MinDelegation as there is no apparent reason for it being different from MinDelegatorStk. | Delegators can have staked funds below the required MinDelegatorStk. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
18 | pallets/manta-pay/src/lib.rs | Unintended test crashes | https://github.com/Manta-Network/Manta | 45ba60e1d940dbf3491ce0f1223e44c84d5b7218 | null | Info | null | null | Maintainability | Code Quality | 4.1.19 V-MANC-VUL-019: Unintended test crashes
Severity: Info
Type: Maintainability
File(s): pallets/manta-pay/src/lib.rs
Location(s): to_private_should_work
Commit: 45ba60e1d
Status: Open
Many of the manta-pay tests randomly generate an asset id, total supply, and an amount to make private. To ensure the total supply of the asset is greater than the minimum balance, the minimum balance is always added to the randomly generated total supply, as seen in this test:
fn to_private_should_work() {
let mut rng = OsRng;
for _ in 0..RANDOMIZED_TESTS_ITERATIONS {
new_test_ext().execute_with(|| {
let asset_id = rng.gen();
let total_free_supply = rng.gen();
initialize_test(asset_id, total_free_supply + TEST_DEFAULT_ASSET_ED);
mint_private_tokens(
asset_id,
&value_distribution(5, total_free_supply, &mut rng),
&mut rng,
);
});
}
}
If the random number generator produces a total_free_supply greater than u128::MAX - TEST_DEFAULT_ASSET_ED, the addition total_free_supply + TEST_DEFAULT_ASSET_ED overflows and the test fails even though it is expected to succeed.
Impact: May cause tests to fail when they are expected to succeed.
Recommendation: Change the test to generate a value for total_free_supply between [0, u128::MAX - TEST_DEFAULT_ASSET_ED).
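One simple way to apply the recommended bound without an extra dependency is a modulo reduction into the safe half-open range. This is a sketch: the helper name and the TEST_DEFAULT_ASSET_ED value are stand-ins, not the actual test constants.

```rust
// Stand-in for the real existential-deposit constant.
const TEST_DEFAULT_ASSET_ED: u128 = 1;

/// Reduce an unbounded random value into [0, u128::MAX - TEST_DEFAULT_ASSET_ED)
/// so the later `total_free_supply + TEST_DEFAULT_ASSET_ED` cannot overflow.
fn bounded_supply(raw: u128) -> u128 {
    raw % (u128::MAX - TEST_DEFAULT_ASSET_ED)
}

fn main() {
    // Even the worst-case raw value leaves room for the existential deposit.
    let supply = bounded_supply(u128::MAX);
    assert!(supply.checked_add(TEST_DEFAULT_ASSET_ED).is_some());
}
```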
Developer Response: TBD | Randomized test may fail due to unintended overflow in total supply generation. | null | true | 7 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-Chain.pdf |
0 | pallets/manta-sbt/src/lib.rs:376 | Loss of Reserved SBT IDs | https://github.com/Manta-Network/Manta | ceb9e46cd53b77eb914ba6c17452fc238bc3a28f | null | Low | null | null | null | Business Logic | 4.1 (HAL-01) LOSS OF RESERVED SBT IDS - LOW (2.5)
Description:
The reserve_sbt function calculates a range of IDs and stores this range in the ReservedIds storage map, using the caller's address as the key. It was identified that users lose their reserved SBT IDs when they call the reserve_sbt function without first minting their previously reserved SBT IDs. This occurs because the previous reserved range is overwritten.
Code Location:
Body of the reserve_sbt function:
Listing 1: pallets/manta-sbt/src/lib.rs (Line 376)
368 let asset_id_range: Vec<StandardAssetId> = (0..T::MintsPerReserve::get())
369     .map(|_| Self::next_sbt_id_and_increment())
370     .collect::<Result<Vec<StandardAssetId>, _>>()?;

372 // The range of `AssetIds` that are reserved as SBTs
373 let start_id: StandardAssetId = *asset_id_range.first().ok_or(Error::<T>::ZeroMints)?;
374 let stop_id: StandardAssetId = *asset_id_range.last().ok_or(Error::<T>::ZeroMints)?;
375 ReservedIds::<T>::insert(&who, (start_id, stop_id));
BVSS:
AO:A/AC:L/AX:L/C:N/I:N/A:N/D:L/Y:N/R:N/S:U (2.5)
Proof Of Concept:
This test reserves IDs two times and mints one zkSBT. The first zkSBT token will not have the id 1; it will have the id 6 instead.
Listing 2: pallets/manta-sbt/src/tests.rs
#[test]
fn hal01() {
    let mut rng = OsRng;
    new_test_ext().execute_with(|| {
        assert_ok!(Balances::set_balance(
            MockOrigin::root(),
            ALICE,
            1_000_000_000_000_000,
            0
        ));
        // Reserve IDs from 1 to 5
        assert_ok!(MantaSBTPallet::reserve_sbt(MockOrigin::signed(ALICE)));
        // Reserve IDs from 6 to 10
        assert_ok!(MantaSBTPallet::reserve_sbt(MockOrigin::signed(ALICE)));

        let value = 1;
        let id = field_from_id(ReservedIds::<Test>::get(ALICE).unwrap().0);
        let post = sample_to_private(id, value, &mut rng);
        assert_ok!(MantaSBTPallet::to_private(
            MockOrigin::signed(ALICE),
            Box::new(post),
            bvec![0]
        ));

        // The first zkSBT minted has the id 6.
        assert_eq!(
            SbtMetadata::<Test>::get(6).unwrap().extra,
            Some(bvec![0])
        );
    });
}
Recommendation:
To resolve this issue, it is recommended to restrict users from reserving additional SBT IDs if they have not minted their previously reserved IDs. | Users lose previously reserved SBT IDs if they reserve new ones without minting the old. | null | true | 8 | MantaNetwork | Halborn | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Halborn-zkSBT.pdf |
1 | pallets/manta-sbt/src/lib.rs:369,883 | Last SBT IDs Cannot Be Reserved | https://github.com/Manta-Network/Manta | ceb9e46cd53b77eb914ba6c17452fc238bc3a28f | null | Low | null | null | null | Error Handling and Validation | 4.2 (HAL-02) LAST SBT IDS CANNOT BE RESERVED - LOW (2.5)
Description:
When users invoke the reserve_sbt function, it reserves a specific number of IDs, quantified by MintsPerReserve. The reserve_sbt function achieves this by repeatedly calling the next_sbt_id_and_increment function, as many times as the MintsPerReserve value. The next_sbt_id_and_increment function returns the next available ID and concurrently increments the NextSbtId storage value by 1.
A potential problem arises if the incrementing process results in an overflow, causing the next_sbt_id_and_increment function to throw an overflow exception, which in turn fails the ongoing transaction. In this scenario, previously identified IDs that did not contribute to the overflow remain unreserved. This is a concern as it could lead to resource allocation inefficiencies and transaction failures.
Code Location:
Body of the reserve_sbt function, where the next zkSBT id is incremented.
Listing 3: pallets/manta-sbt/src/lib.rs (Line 369)
356 pub fn reserve_sbt(origin: OriginFor<T>) -> DispatchResult {
    let who = ensure_signed(origin)?;
    // Charges fee to reserve AssetIds
    <T as pallet::Config>::Currency::transfer(
        &who,
        &Self::account_id(),
        T::ReservePrice::get(),
        ExistenceRequirement::KeepAlive,
    )?;
    // Reserves unique AssetIds to be used later to mint SBTs
368 let asset_id_range: Vec<StandardAssetId> = (0..T::MintsPerReserve::get())
369     .map(|_| Self::next_sbt_id_and_increment())
370     .collect::<Result<Vec<StandardAssetId>, _>>()?;
next_sbt_id_and_increment will overflow if the maximum value for u128 is surpassed:
Listing 4: pallets/manta-sbt/src/lib.rs (Line 883)
875 fn next_sbt_id_and_increment() -> Result<StandardAssetId, DispatchError> {
876     NextSbtId::<T>::try_mutate(|maybe_val| {
877         match maybe_val {
878             Some(current) => {
879                 let id = *current;
880                 *maybe_val = Some(
881                     current
882                         .checked_add(One::one())
883                         .ok_or(ArithmeticError::Overflow)?,
884                 );
885                 Ok(id)
886             }
887             // If storage is empty, starts at value of one (Field cannot be zero)
888             None => {
889                 *maybe_val = Some(2);
890                 Ok(One::one())
891             }
892         }
893     })
894 }
BVSS:
AO:A/AC:L/AX:L/C:N/I:N/A:L/D:N/Y:N/R:N/S:U (2.5)
Proof Of Concept:
Note: For this Proof of Concept (PoC), the codebase was modified such that the zkSBT IDs are u8 instead of u128. This alteration reduces the time needed to demonstrate that the function fails in this edge-case scenario.
Listing 5: pallets/manta-sbt/src/tests.rs
#[test]
fn hal02() {
    new_test_ext().execute_with(|| {
        assert_ok!(Balances::set_balance(
            MockOrigin::root(),
            ALICE,
            1_000_000_000_000_000,
            0
        ));
        for _ in 1..51 {
            assert_ok!(MantaSBTPallet::reserve_sbt_bis(MockOrigin::signed(ALICE)));
            println!(
                "First id: {} - Last id: {}",
                ReservedIdsBis::<Test>::get(ALICE).unwrap().0,
                ReservedIdsBis::<Test>::get(ALICE).unwrap().1
            );
        }
        assert_noop!(
            MantaSBTPallet::reserve_sbt_bis(MockOrigin::signed(ALICE)),
            ArithmeticError::Overflow
        );
    });
}
In this test, we reserve all available IDs, excluding the last five. Attempting to reserve the last IDs causes the StandardAssetId value to overflow, resulting in a failure.
Recommendation:
To address this issue, it is recommended to implement a check that the StandardAssetId value has not reached the maximum value for u128 before incrementing, which prevents the overflow. This measure will stop the occurrence of an exception. | Overflow in next_sbt_id_and_increment causes unreserved IDs and transaction failure. | null | true | 8 | MantaNetwork | Halborn | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Halborn-zkSBT.pdf |
2 | pallets/manta-sbt/src/lib.rs:783
pallets/manta-support/src/manta_pay.rs:874,1095,1096
runtime/calamari/src/migrations/staking.rs:70 | Downcasting of 64-Bit Integer | https://github.com/Manta-Network/Manta | ceb9e46cd53b77eb914ba6c17452fc238bc3a28f | null | Low | null | null | null | Arithmetic | 4.3 (HAL-03) DOWNCASTING OF 64-BIT INTEGER - LOW (2.5)
Description:
It was observed that in certain circumstances, usize values are cast to types such as u8 and u32. The usize data type in the Rust programming language represents a pointer-sized unsigned integer. The actual size of usize depends on the platform: it is 32 bits on a 32-bit platform and 64 bits on a 64-bit platform. Consequently, depending on the system, there could be a cast from a u64 to a u32. This implies that an attempt could be made to store a value larger than the maximum value that can be held in a u32, leading to unexpected consequences.
Code Location:
usize is cast to u8:
Listing 6: pallets/manta-sbt/src/lib.rs (Line 783)
768 fn pull_receivers(
769     receiver_indices: [usize; MerkleTreeConfiguration::FOREST_WIDTH],
770     max_update_request: u64,
771 ) -> (bool, ReceiverChunk) {
772     let mut more_receivers = false;
773     let mut receivers = Vec::new();
774     let mut receivers_pulled: u64 = 0;
775     let max_update = if max_update_request > Self::PULL_MAX_RECEIVER_UPDATE_SIZE {
776         Self::PULL_MAX_RECEIVER_UPDATE_SIZE
777     } else {
778         max_update_request
779     };
781     for (shard_index, utxo_index) in receiver_indices.into_iter().enumerate() {
782         more_receivers |= Self::pull_receivers_for_shard(
783             shard_index as u8,
784             utxo_index,
785             max_update,
786             &mut receivers,
787             &mut receivers_pulled,
788         );
790         if receivers_pulled == max_update && more_receivers {
791             break;
792         }
793     }
794     (more_receivers, receivers)
795 }
usize is cast to u32:
Listing 7: pallets/manta-support/src/manta_pay.rs (Line 860)
867 impl TryFrom<merkle_tree::CurrentPath<MerkleTreeConfiguration>> for CurrentPath {
868     type Error = Error;
871     fn try_from(path: merkle_tree::CurrentPath<MerkleTreeConfiguration>) -> Result<Self, Error> {
872         Ok(Self {
873             sibling_digest: fp_encode(path.sibling_digest)?,
874             leaf_index: path.inner_path.leaf_index.0 as u32,
875             inner_path: path.inner_path.path.into_iter().map(fp_encode).collect::<Result<_, _>>()?,
881         })
883     }
}
Listing 8: pallets/manta-support/src/manta_pay.rs (Lines 1095, 1096)
1091 impl From<RawCheckpoint> for Checkpoint {
1094     Self::new(
         checkpoint.receiver_index.map(|i| i as usize).into(),
         checkpoint.sender_index as usize,
     )
}
Listing 9: runtime/calamari/src/migrations/staking.rs (Line 70)
70 let n_of_candidates = manta_collator_selection::Pallet::<T>::candidates().len() as u32;
Recommendation:
To address this issue, it is recommended to check the value against the maximum value of the target type before casting. | Casting usize to smaller types may cause data loss and unexpected behavior. | null | true | 8 | MantaNetwork | Halborn | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Halborn-zkSBT.pdf |
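The recommended bound check is what Rust's fallible conversions provide out of the box. The sketch below (the helper name is hypothetical) contrasts a plain `as` cast, which silently truncates, with `u8::try_from`, which surfaces an error:

```rust
// Fallible conversion in place of a lossy `as` cast: an out-of-range usize
// produces an error instead of silently wrapping.
fn shard_index_u8(shard_index: usize) -> Result<u8, &'static str> {
    u8::try_from(shard_index).map_err(|_| "shard index out of range")
}

fn main() {
    assert_eq!(shard_index_u8(255), Ok(255));
    // A plain `as u8` cast would wrap 256 to 0; try_from rejects it.
    assert!(shard_index_u8(256).is_err());
    assert_eq!(256usize as u8, 0);
}
```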
3 | runtime/calamari/src/fee.rs:76
primitives/manta/src/xcm.rs:183,251,252
runtime/manta/src/fee.rs:47,52
runtime/common/src/lib.rs:115
primitives/manta/src/constants.rs:110 | Unchecked Math Could Impact Weight Calculation | https://github.com/Manta-Network/Manta | ceb9e46cd53b77eb914ba6c17452fc238bc3a28f | null | Low | null | null | null | Weight Management | 4.4 (HAL-04) UNCHECKED MATH COULD IMPACT WEIGHT CALCULATION - LOW (2.5)
Description:
It was identified that several areas in the buy_weight and refund_weight functions could benefit from enhanced computational checks. Currently, despite numerous arithmetic calculations, the functions have no mechanism to handle situations where underflow or overflow might occur.
While these states have not been identified as exploitable risks, implementing additional safeguards to account for them will be beneficial.
Another point of consideration pertains to the WEIGHT_PER_SECOND value. This value serves as a divisor in computing the number of tokens required for payment or refund during the weight purchasing procedure. While it is predetermined as a constant at compile time, it currently lacks a constraint to ensure that it never equals zero. This is a significant potential risk, as a zero value would cause a division-by-zero error and a system panic. Moreover, as the WEIGHT_PER_SECOND value is also used in calculations elsewhere in the system, this issue could potentially affect other sections of the codebase as well.
Code Location:
Unsafe multiplication in the test multiplier_growth_simulator_and_congestion_budget_test:
Listing 10: runtime/calamari/src/fee.rs (Line 76)
69 #[test]
70 #[ignore] // This test should not fail CI
71 fn multiplier_growth_simulator_and_congestion_budget_test() {
72     let target_daily_congestion_cost_usd = 100_000;
73     let kma_price = fetch_kma_price().unwrap();
74     println!("KMA/USD price as read from CoinGecko = {kma_price}");
75     let target_daily_congestion_cost_kma =
76         (target_daily_congestion_cost_usd as f32 / kma_price * KMA as f32) as u128;
Unsafe multiplication in the buy_weight function:
Listing 11: primitives/manta/src/xcm.rs (Line 183)
146 fn buy_weight(&mut self, weight: Weight, payment: Assets) -> Result<Assets> {
153     let first_asset = payment.fungible_assets_iter().next().ok_or({
160         XcmError::TooExpensive
     })?;
183     let amount = units_per_second * (weight as u128) / (WEIGHT_PER_SECOND as u128);
Unsafe subtraction in the refund_weight function:
Listing 12: primitives/manta/src/xcm.rs (Line 251)
248 fn refund_weight(&mut self, weight: Weight) -> Option<MultiAsset> {
251     self.weight -= weight;
252     let amount = *units_per_second * (weight as u128) / (WEIGHT_PER_SECOND as u128);
Places where WEIGHT_PER_SECOND is used as a divisor:
• Function refund_weight:
Listing 13: primitives/manta/src/xcm.rs (Line 252)
• Function buy_weight:
Listing 14: primitives/manta/src/xcm.rs (Line 183)
The following snippets show how the q divisor is calculated and how it is equal to zero if WEIGHT_PER_SECOND is zero too.
Listing 15: runtime/manta/src/fee.rs (Lines 47, 52)
Recommendation:
We recommend a review of these identified areas to ensure that adequate arithmetic checks are in place and that a safety constraint is set for WEIGHT_PER_SECOND to prevent it from reaching zero. These improvements will further fortify the system, ensuring stability, reliability, and secure operation.
• It is recommended to add a constraint to ensure that WEIGHT_PER_SECOND is never 0.
• In "release" mode, Rust does not panic! on overflow, and overflowed values simply "wrap" without any explicit feedback to the user. It is recommended to use vetted safe math consistently throughout the codebase. Consider replacing the multiplication operator with Rust's checked_mul method, the subtraction operator with Rust's checked_sub method, and so on. | Unchecked math in weight calculations may lead to overflow, underflow, or division by zero errors. | null | true | 8 | MantaNetwork | Halborn | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Halborn-zkSBT.pdf |
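The two recommendations combine into a single guarded calculation. The sketch below is a simplified standalone function, not the pallet's actual signature: it adds an explicit zero-divisor guard for WEIGHT_PER_SECOND and uses checked_mul so overflow is reported rather than wrapped.

```rust
// Hypothetical hardened version of the buy_weight arithmetic: reject a zero
// divisor explicitly and report multiplication overflow as an error.
fn xcm_amount(
    units_per_second: u128,
    weight: u64,
    weight_per_second: u64,
) -> Result<u128, &'static str> {
    if weight_per_second == 0 {
        return Err("WEIGHT_PER_SECOND must be non-zero");
    }
    units_per_second
        .checked_mul(weight as u128)
        .ok_or("overflow in weight price calculation")
        .map(|v| v / weight_per_second as u128)
}

fn main() {
    assert_eq!(xcm_amount(4, 500, 1_000), Ok(2));
    // Overflow is reported instead of wrapping to a tiny amount.
    assert!(xcm_amount(u128::MAX, 2, 1_000).is_err());
    // Division by zero is caught by the explicit guard, not a panic.
    assert!(xcm_amount(4, 500, 0).is_err());
}
```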
0 | pallets/manta-sbt/src/lib.rs | SBT reservations can be overwritten | https://github.com/Manta-Network/Manta | ceb9e46cd53b77eb914ba6c17452fc238bc3a28f | null | Medium | null | null | Logic Error | Business Logic | 4.1 Detailed Description of Issues
V-MSBT-VUL-001: SBT reservations can be overwritten
Severity: Medium
Type: Logic Error
Commit: ceb9e46
Status: Fixed
File(s): pallets/manta-sbt/src/lib.rs
Location(s): reserve_sbt
The manta-sbt pallet exposes a method reserve_sbt whereby callers pay MANTA tokens to reserve the right to mint N SBTs. This reservation is enforced by allocating reservation ids to the caller of the method so that when users mint an SBT, Manta Chain uses one of the reserved ids to track the number reserved. In particular, Manta Chain maintains a map called ReservedIds which maps users to an interval of reservation ids such that the length of the interval indicates the number of SBTs they can mint.
However, reserve_sbt does not check whether a user has already reserved SBTs to mint when calling the method and simply updates the mapping to a new interval of length N. This can be seen in the method implementation below:
/// Reserves AssetIds to be used subsequently in 'to_private' above.
/// Increments AssetManager’s AssetId counter.
#[pallet::call_index(1)]
#[pallet::weight(<T as pallet::Config>::WeightInfo::reserve_sbt())]
#[transactional]
pub fn reserve_sbt(origin: OriginFor<T>) -> DispatchResult {
let who = ensure_signed(origin)?;
// Charges fee to reserve AssetIds
<T as pallet::Config>::Currency::transfer(
&who,
&Self::account_id(),
T::ReservePrice::get(),
ExistenceRequirement::KeepAlive,
)?;
// Reserves unique AssetIds to be used later to mint SBTs
let asset_id_range: Vec<StandardAssetId> = (0..T::MintsPerReserve::get())
.map(|_| Self::next_sbt_id_and_increment())
.collect::<Result<Vec<StandardAssetId>, _>>()?;
// The range of 'AssetIds' that are reserved as SBTs
let start_id: StandardAssetId = *asset_id_range.first().ok_or(Error::<T>::ZeroMints)?;
let stop_id: StandardAssetId = *asset_id_range.last().ok_or(Error::<T>::ZeroMints)?;
ReservedIds::<T>::insert(&who, (start_id, stop_id));
Self::deposit_event(Event::<T>::SBTReserved { who, start_id, stop_id });
Ok(())
}
Thus, if a user calls the method M times to reserve M*N SBTs, then they will only be able to mint N.
Impact: Users can lose money because they may think they have reserved M*N SBTs when they in fact can only mint N. Furthermore, if a user calls reserve_sbt M times in a row, then (M-1)*N SBTs can no longer be minted. This is due to the fact that reserved ids are always incremented.
For example, suppose someone sets up a relayer account which users could use to purchase zkSBTs for themselves without being linked to the transaction. That relayer account would only receive SBTs on its last call reserve_sbt. Implementations in which the relayer reserves and sends the SBTs separately may be error-prone.
Recommendation: If at most N SBTs can be reserved at a time for a user, then reserve_sbt should check whether a user has already reserved SBTs. If the protocol permits more than N SBTs to be reserved at a time for a user, then the mapping should be changed. One option might be to map each user to a set of intervals corresponding to ids reserved for them.
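The first option can be modeled with a standalone sketch in which a std HashMap stands in for the ReservedIds storage map (the function and key types here are illustrative, not the pallet's): a second reservation for the same account is rejected while an unminted interval is still pending.

```rust
use std::collections::HashMap;

type AccountId = &'static str;

/// Model of the guarded reservation: refuse to overwrite a pending interval.
fn reserve_sbt(
    reserved: &mut HashMap<AccountId, (u128, u128)>,
    who: AccountId,
    interval: (u128, u128),
) -> Result<(), &'static str> {
    if reserved.contains_key(who) {
        return Err("account already has reserved SBT ids pending");
    }
    reserved.insert(who, interval);
    Ok(())
}

fn main() {
    let mut reserved = HashMap::new();
    assert!(reserve_sbt(&mut reserved, "alice", (1, 5)).is_ok());
    // Without the guard this call would silently overwrite (1, 5) with (6, 10).
    assert!(reserve_sbt(&mut reserved, "alice", (6, 10)).is_err());
    // The original reservation is intact.
    assert_eq!(reserved["alice"], (1, 5));
}
```

This matches the behavior of the proposed fix, which reverts reserve_sbt when the caller already holds reserved ids.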
Developer Response: The developers have acknowledged the issue and a fix has been proposed in this pull request. The fix changes the function to revert if the user has SBT ids already reserved. | SBT reservations can be overwritten, causing users to lose reserved tokens. | null | true | 9 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-zkSBT.pdf |
1 | pallets/manta-sbt/src/lib.rs | Extrinsics charge static fees that do not account for Merkle tree updates | https://github.com/Manta-Network/Manta | ceb9e46cd53b77eb914ba6c17452fc238bc3a28f | null | Low | null | null | Bad Extrinsic Weight | Weight Management | 4.1.2 V-MSBT-VUL-002: Extrinsics charge static fees that do not account for Merkle tree updates
Severity: Low
Type: Bad Extrinsic Weight
Commit: ceb9e46
Status: Fixed
File(s): pallets/manta-sbt/src/lib.rs
Location(s): to_private, mint_sbt_eth
Transactions to_private and mint_sbt_eth take membership proofs and store UTXOs to a Merkle tree on the ledger. manta-sbt shards this Merkle tree into 256 buckets where each bucket has its own Merkle tree. Instead of storing the entire tree at each bucket, the Ledger just stores the last path added to the tree. When adding a UTXO, the Ledger first computes its corresponding bucket, then computes the new path pointing to that UTXO, and finally adds that path to the bucket.
Computing the new path should take time proportional to log(n), where n is the size of the Merkle tree. The current benchmarking scheme only covers cases where the previous path is small, i.e., at most size 1. However, if the number of transactions gets large, i.e., on the order of hundreds of millions or billions, then the size of the path can reach 24-28 (taking shards into account). If the tree grows to this size, each execution of the extrinsic will perform 24-28 hashes, multiplied by the number of UTXOs to be added.
The benchmarking scheme should take into account the size of the tree to make sure that the existing weights are enough to offset the computation of the new Merkle tree path.
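A back-of-the-envelope sketch of why tree size matters for the weight: the number of hashes needed to recompute a Merkle path grows with log2 of the per-shard leaf count. The function and the shard/transaction counts below are illustrative assumptions, not measurements from the chain.

```rust
/// Approximate Merkle path length: ceil(log2(n)) hashes per inserted UTXO,
/// with a floor of 1 for a non-empty tree.
fn path_length(leaves_in_shard: u64) -> u32 {
    if leaves_in_shard <= 1 {
        1
    } else {
        // bit length of (n - 1) equals ceil(log2(n)) for n > 1
        64 - (leaves_in_shard - 1).leading_zeros()
    }
}

fn main() {
    // The benchmarked regime: tiny trees, a single hash per insertion.
    assert_eq!(path_length(2), 1);
    // A billion UTXOs spread over 256 shards is roughly 4M leaves per shard,
    // i.e. about 22 hashes per insertion, over 20x the benchmarked cost.
    assert_eq!(path_length(1_000_000_000 / 256), 22);
}
```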
Impact: In general, it is important to set the weights to account for both computation and storage; setting the weight too low can allow users to perform a large number of transactions with little cost. In particular, malicious users may take advantage of the low fee to launch a DOS attack.
Recommendation: We recommend that the weights be computed with a larger database that reflects the state of the chain after a year’s worth of use.
Developer Response: A fix is in progress; an open PR for manta-pay will be extended to the SBT pallet. The manta-pay PR can be found here. | Static fees for extrinsics do not account for Merkle tree updates, risking underpriced transactions. | null | true | 9 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-zkSBT.pdf |
2 | pallets/manta-sbt/src/lib.rs | Missing validation in pull_ledger_diff | https://github.com/Manta-Network/Manta | ceb9e46cd53b77eb914ba6c17452fc238bc3a28f | null | Warning | null | null | Data Validation | Error Handling and Validation | 4.1.3 V-MSBT-VUL-003: Missing validation in pull_ledger_diff
Severity: Warning
Type: Data Validation
Commit: ceb9e46
Status: Acknowledged
File(s): pallets/manta-sbt/src/lib.rs
Location(s): pull_ledger_diff
pull_ledger_diff takes as input a Checkpoint, a struct of two fields, receiver_index and sender_index, and pulls receiver data from the ledger starting at receiver_index up to at most receiver_index + PULL_MAX_RECEIVER_UPDATE_SIZE. However, neither pull_receivers nor pull_receivers_for_shard checks that this sum cannot overflow, for either the sender or the receiver index.
Impact: If the code is compiled without the --release flag, a malicious user could crash the node by passing in bad values. If it is built with --release, the call will be reported as successful but no senders or receivers will be returned. However, if a benign end user calls the API with incorrect indexes, it would be better to return an error informing them that the index is invalid.
Recommendation: We recommend adding bounds checks to be safe and to return an Error.
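The recommended bounds check might look like the following sketch, which uses `checked_add` to reject a checkpoint whose index would overflow instead of wrapping (debug builds) or proceeding silently (release builds). The constant value, error type, and function name are illustrative assumptions, not the pallet's real signatures.

```rust
// Illustrative stand-in for the pallet's pull window size.
const PULL_MAX_RECEIVER_UPDATE_SIZE: u64 = 32_768;

#[derive(Debug, PartialEq)]
enum PullError {
    IndexOutOfBounds,
}

/// Compute the exclusive end of the pull window, returning an error on
/// overflow rather than panicking or silently returning an empty result.
fn pull_window_end(receiver_index: u64) -> Result<u64, PullError> {
    receiver_index
        .checked_add(PULL_MAX_RECEIVER_UPDATE_SIZE)
        .ok_or(PullError::IndexOutOfBounds)
}

fn main() {
    // Normal checkpoints pass through unchanged.
    assert_eq!(pull_window_end(0), Ok(PULL_MAX_RECEIVER_UPDATE_SIZE));
    // A malicious checkpoint near u64::MAX is rejected with an error
    // the caller can surface to the end user.
    assert_eq!(pull_window_end(u64::MAX), Err(PullError::IndexOutOfBounds));
}
```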
Developer Response: The developers acknowledged the issue and will fix this prior to release. | Missing bounds check in pull_ledger_diff could allow overflow and node crash. | null | true | 9 | MantaNetwork | Veridise | https://github.com/Manta-Network/Atlantic-Audits/blob/main/Atlantic-Veridise-zkSBT.pdf |
This dataset was generated using the data in the https://github.com/CoinFabrik/scout-substrate-dataset repository.
The code referenced in this dataset can be downloaded either from its original repository or from https://github.com/CoinFabrik/scout-substrate-dataset.