Parathreads Feature Branch (#6969)
* First baby steps

* Split scheduler into several modules

* Towards a more modular approach for scheduling

* move free_cores; IntoIterator -> BTreeMap

* Move clear()

* Move more functions out of scheduler

* Change weight composition

* More abstraction

* Further refactor

* clippy

* fmt

* fix test-runtime

* Add parathreads pallet to construct_runtime!

* Make all runtimes use (Parachains, Parathreads) scheduling

* Delete commented out code

* Remove parathreads scheduler from westend, rococo, and kusama

* fix rococo, westend, and kusama config

* Revert "fix rococo, westend, and kusama config"

This reverts commit 3ef951d.

* Revert "Remove parathreads scheduler from westend, rococo, and kusama"

This reverts commit 664bafa.

* Remove CoreIndex from free_cores

* Remove unnecessary struct for parathreads

* parathreads provider take 1

* Comment out parathread tests

* Pop into lookahead

* fmt

* Fill lookahead with two entries for parachains

* fmt

* Current stage

* Towards AB parathreads

* no AB use

* Make tests typecheck

* quick hack to set scheduling lookahead to 1

* Fix scheduler tests

* fix paras_inherent tests

* misc

* Update more of a test

* cfg(test)

* some cleanup

* Undo paras_inherent changes

* Adjust paras inherent tests

* Undo changes to v2 primitives

* Undo v2 mod changes to tests

* minor

* Remove parathreads assigner and pallet

* minor

* minor

* more cleanup

* fmt

* minor

* minor

* minor

* Remove on_new_session from assignment provider

* Make adder collator integration test pass

* disable failing unit tests

* minor

* minor

* re-enable one unit test

* minor

* handle retries, add concluded para to pop interface

* comment out unused code

* Remove core_para from interface

* Remove first claimqueue element on clear if None instead of removing all Nones

* Move claimqueue get out of loop

* Use VecDeque instead of Vec in ClaimQueue

* Make occupied() AB ready(?)

* handle freed disputed in clear_and_fill_claimqueue

* clear_and_fill_claimqueue returns scheduled Vec

* Rename and minor refactor

* return position of assignment taken from claimqueue

* minor

* Fix session boundary parachains number change + extended test

* Fix runtimes

* Fix polkadot runtime

* Remove polkadot pallet from benchmarks

* fix test runtime

* Add storage migration

* Minor refactor

* Minor

* migration typechecks

* Add migration to runtimes

* Towards modular scheduling II (#6568)

* Add post migration check

* pebkac

* Disable migrations but mine

* Revert "Disable migrations but mine"

This reverts commit 4fa5c5a.

* Move scheduler migration

* Revert "Move scheduler migration"

This reverts commit a16b165.

* Fix migration

* cleanup

* Don't lose retries value anymore

* comment out test function

* Remove retries value from Assignment again

* minor

* Make collator for parathreads optional

* data type refactor

* update scheduler tests

* Change test function cfg

* comment out test function

* Try cfg(test) only

* fix cfg flags

* Add get_max_retries function to provider interface (#7047)

* Fix merge commit

* pebkac

* fix merge

* update cargo.lock

* fix merge

* fix merge

* Use BTreeMap instead of Vec, fix scheduler calls.

* Use imported `ScheduledCore`

* Remove unused import in inclusion tests

* Use keys() instead of mapping over a BTreeMap

* Fix migrations for parachains scheduler

* Use BlockNumberFor<T> everywhere in scheduler

* Add on demand assignment provider pallet (#7110)

* Address some PR comments

* minor

* more cleanup

* find_map and timeout availability fixes

* Change default scheduling_lookahead to 1

* Add on demand assignment provider pallet

* Move test-runtime to new assignment provider

* Run cargo format on scheduler tests

* minor

* Mutate cores in single loop

* timeout predicate simplification

* claimqueue desired size fix

* Replace expect by ok_or

* More improvements

* Fix push back order and next_up_on_timeout

* minor

* session change docs

* Add pre_new_session call to handle pre-session updates

* Remove sc_network dependency and PeerId from unnecessary data structures

* Remove unnecessary peer_ids

* Add OnDemandOrdering proxy (#7156)

* Add OnDemandBidding proxy

* Fix names

* OnDemandAssigner for rococo only

* Check PeerId in collator protocol before fetching collation

* On occupied, remove non-occupied cores from the claimqueue front and refill

* Add missing docs

* Comment out unused field

* fix ScheduledCore in tests

* Fix the fix

* pebkac

* fmt

* Fix occupied dropping

* Remove double import

* ScheduledCore fixes

* Re-add sc-network dep

* pebkac

* OpaquePeerId -> PeerId in can_collate interface

* Cargo.lock update for interface change

* Remove checks not needed anymore?

* Drop occupied core on session change if it would time out after the new session

* Add on demand assignment provider pallet

* Move test-runtime to new assignment provider

* Run cargo format on scheduler tests

* Add OnDemandOrdering proxy (#7156)

* Add OnDemandBidding proxy

* Fix names

* OnDemandAssigner for rococo only

* Remove unneeded config values

* Update comments

* Use and_then for queue position

* Return the max size of the spot queue on error

* Add comments to add_parathread_entry

* Add module comments

* Add log for when can_collate fails

* Change assigner queue type to `Assignment`

* Update assignment provider tests

* More logs

* Remove unused keyring import

* disable can_collate

* comment out can_collate

* can_collate first checks whether the set is empty

* Move can_collate call to collation advertisement

* Fix backing test

* map to loop

* Remove obsolete check

* Move invalid collation test from backing to collator-protocol

* fix unused imports

* fix test

* fix Debug derivation

* Increase time limit on zombienet predicates

* Increase zombienet timeout

* Minor

* Address some PR comments

* Address PR comments

* Comment out failing assert due to on-demand assigner missing

* remove collator_restrictions info from backing

* Move can_collate to ActiveParas

* minor

* minor

* Update weight information for on demand config

* Add ttl to ParasEntry

* Fix tests missing ParasEntry ttl

* Adjust scheduler tests to use ttl default values

* Use match instead of if let for ttl drop

* Use RuntimeDebug trait for `ParasEntry` fields

* Add comments to on demand assignment pallet

* Fix spot traffic calculation

* Revert runtimedebug changes to primitives

* Remove runtimedebug derivation from `ParasEntry`

* Mention affinity in pallet level docs

* Use RuntimeDebug trait for ParasEntry child types

* Remove collator restrictions

* Fix primitive versioning and other merge issues

* Fix tests post merge

* Fix node side tests

* Edit parascheduler migration for clarity

* Move parascheduler migration up to next release

* Remove vestiges from merge

* Fix tests

* Refactor ttl handling

* Remove unused things from scheduler tests

* Move on demand assigner to own directory

* Update documentation

* Remove unused sc-network dependency in primitives

Was used for collator restrictions

* Remove unused import

* Re-enable scheduler test

* Remove unused storage value

* Enable timeout predicate test and fix fn

It turns out that the compiler issue is fixed and we can now use
`impl Trait` in the manner used here.
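
For reference, a minimal sketch of the `impl Trait` return pattern in
question; the predicate logic is simplified and not the scheduler's actual
rule:

```rust
// Build a timeout predicate once from configuration and return it as
// `impl Fn`, the pattern the compiler previously rejected here.
fn availability_timeout_predicate(period: u32) -> impl Fn(u32, u32) -> bool {
    // `now` and `pending_since` are block numbers (illustrative signature).
    move |now, pending_since| now.saturating_sub(pending_since) >= period
}

fn main() {
    let timed_out = availability_timeout_predicate(4);
    assert!(timed_out(10, 5)); // pending for 5 blocks, period is 4
    assert!(!timed_out(10, 8)); // pending for only 2 blocks
}
```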

* Remove unused imports

* Add benchmarking entry for perbill in config

* Correct typo

* Address review comments

* Log out errors when calculating spot traffic.

* Change parascheduler's log target name

* Update scheduler_common documentation

* Use mutate for affinity fns, add tests

* Add another on demand affinity test

* Unify parathreads and parachains in HostConfig (take 2) (#7452)

* Unify parathreads and parachains in HostConfig

* Fix missed occurrences

* Remove commented out lines

* `HostConfiguration v7`

* Fix version check

* Add `MigrateToV7` to `Unreleased`

* fmt

* fmt

* Fix compilation errors after the rebase

* Update runtime/parachains/src/scheduler/tests.rs

Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]>

* Update runtime/parachains/src/scheduler/tests.rs

Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]>

* fmt

* Fix migration test

* Fix tests

* Remove unneeded assert from tests

* parathread_cores -> on_demand_cores; parathread_retries -> on_demand_retries

* Fix a compilation error in tests

* Remove unused `use`

* update colander image version

---------

Co-authored-by: alexgparity <[email protected]>
Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]>
Co-authored-by: Javier Viola <[email protected]>

* Fix branch after merge with master

* Refactor out duplicate checks into a helper fn

* Fix tests post merge

* Rename add_parathread_assignment, add test

* Update docs

* Remove unused on_finalize function

* Add weight info to on demand pallet

* Update runtime/parachains/src/configuration.rs

Co-authored-by: Tsvetomir Dimitrov <[email protected]>

* Update runtime/parachains/src/scheduler_common/mod.rs

Co-authored-by: Tsvetomir Dimitrov <[email protected]>

* Update runtime/parachains/src/assigner_on_demand/mod.rs

Co-authored-by: Tsvetomir Dimitrov <[email protected]>

* Add benchmarking to on demand pallet

* Make place_order test check for success

* Add on demand benchmarks

* Add local test weights to rococo runtime

* Modify TTL drop behaviour to not skip claims

The previous behaviour would let a new claim from the assignment provider
jump ahead in the claimqueue when the lookahead was larger than 1.
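
A minimal sketch of the intended ordering, assuming a simplified claim type
(`Claim` and `prune_expired` are illustrative names, not the pallet's API):

```rust
use std::collections::VecDeque;

// A queued claim on a core.
struct Claim {
    para_id: u32,
    ttl: u32, // block number after which the claim is dropped
}

// Drop TTL-expired claims in place, preserving the relative order of the
// remaining ones; a fresh assignment goes to the *back* of the queue, so it
// cannot jump ahead of claims that were queued before it.
fn prune_expired(queue: &mut VecDeque<Claim>, now: u32, fresh: Option<Claim>) {
    queue.retain(|c| c.ttl >= now);
    if let Some(claim) = fresh {
        queue.push_back(claim);
    }
}
```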

* Refactor ttl test to test claimqueue order

* Disable place_order ext. when no on_demand cores

* Use default genesis config for benchmark tests

* Refactor config builder param

* Move lifecycle test from scheduler to on demand

* Remove unneeded lifecycle test

The paras module, via the parachain assignment provider, doesn't provide
new assignments if a parachain loses its lease. The on-demand assignment
provider doesn't provide an assignment that is not a parathread.
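
A sketch of the provider shape this relies on; names and signatures are
simplified, not the exact pallet trait:

```rust
type ParaId = u32;

struct Assignment {
    para_id: ParaId,
}

trait AssignmentProvider {
    // Next assignment for the core, or `None` if there is nothing to
    // schedule, e.g. a parachain whose lease has ended, or an on-demand
    // queue with no order for a live parathread.
    fn pop_assignment_for_core(&mut self, core_idx: u32) -> Option<Assignment>;
}
```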

* Re-enable validator shuffle test

* More realistic weights for place_order

* Remove redundant import

* Fix backwards compatibility (hopefully)

* ".git/.scripts/commands/bench/bench.sh" --subcommand=runtime --runtime=rococo --target_dir=polkadot --pallet=runtime_parachains::assigner_on_demand

* Fix tests.

* Fix off-by-one.

* Re-enable claimqueue fills test

* Re-enable schedule_rotates_groups test

* Fix fill_claimqueue_fills test

* Re-enable next_up_on_timeout test, move fn

* Do not pop from assignment provider when retrying

* Fix tests missing collator in ScheduledCore

* Add comment about timeout predicate.

* Rename ParasEntry retries to availability timeouts

* Re-enable schedule_schedules... test

* Refactor prune retried test to new scheduler

* Have all scheduler tests use genesis_cfg fn

* Update docs

* Update copyright notices on new files

* Rename is_parachain_core to is_bulk_core

* Remove erroneous TODO

* Simplify import

* ".git/.scripts/commands/bench/bench.sh" --subcommand=runtime --runtime=rococo --target_dir=polkadot --pallet=runtime_parachains::configuration

* Revert AdvertiseCollation order shuffle

* Refactor place_order into keepalive and allowdeath
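
A sketch of the split, reusing the standard keep-alive/allow-death
distinction for balance withdrawals; names and signatures are illustrative,
not the pallet's exact code:

```rust
// Whether paying for the order may reap the ordering account.
enum ExistenceRequirement {
    KeepAlive,
    AllowDeath,
}

// Shared implementation: withdraw up to `max_amount` honouring `existence`,
// then enqueue the on-demand order (elided).
fn place_order_inner(max_amount: u128, existence: ExistenceRequirement) -> Result<(), &'static str> {
    let _ = (max_amount, existence);
    Ok(())
}

fn place_order_keep_alive(max_amount: u128) -> Result<(), &'static str> {
    place_order_inner(max_amount, ExistenceRequirement::KeepAlive)
}

fn place_order_allow_death(max_amount: u128) -> Result<(), &'static str> {
    place_order_inner(max_amount, ExistenceRequirement::AllowDeath)
}
```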

* Revert rename of hrmp max inbound channels

"Parachain" encompasses both on-demand and slot auction / bulk parachains.

* Restore availability_timeout_predicate function

* Clean up leftover comments

* Update runtime/parachains/src/scheduler/tests.rs

Co-authored-by: Tsvetomir Dimitrov <[email protected]>

* ".git/.scripts/commands/bench/bench.sh" --subcommand=runtime --runtime=westend --target_dir=polkadot --pallet=runtime_parachains::configuration

---------

Co-authored-by: alexgparity <[email protected]>
Co-authored-by: alexgparity <[email protected]>
Co-authored-by: Tsvetomir Dimitrov <[email protected]>
Co-authored-by: Javier Viola <[email protected]>
Co-authored-by: eskimor <[email protected]>
Co-authored-by: command-bot <>

* On Demand - update weights and small nits (#7605)

* Remove collator restriction test in inclusion

On demand parachains won't have collator restrictions implemented in
this way but will instead use a preferred collator registered to a
`ParaId` in `paras_registrar`.
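
A hypothetical sketch of that direction: a preferred collator recorded per
`ParaId` and checked before fetching a collation. `Registrar` and this
`can_collate` signature are illustrative, not the actual `paras_registrar`
API:

```rust
use std::collections::HashMap;

type ParaId = u32;
type CollatorId = [u8; 32];

#[derive(Default)]
struct Registrar {
    preferred_collator: HashMap<ParaId, CollatorId>,
}

impl Registrar {
    // A para with no registered preference accepts any collator.
    fn can_collate(&self, para: ParaId, collator: &CollatorId) -> bool {
        self.preferred_collator.get(&para).map_or(true, |c| c == collator)
    }
}
```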

* Remove redundant config guard for test fns

* Update weights

* Update WeightInfo for on_demand assigner

* Unify assignment provider parameters into one call (#7606)

* Combine AssignmentProvider params into one fn call

* Move scheduler_common to a module under scheduler

* Fix ttl handling in benchmark builder

* Run cargo format

* Remove obsolete test.

* Small improvement.

* Use same migration pattern as config module

* Remove old TODO

* Change log target name for assigner on demand

* Fix migration

* Fix clippy warnings

* Add HostConfiguration storage migration to V8

* Add `MigrateToV8` to unreleased migrations for all runtimes

* Fix storage version check for config v8

* Set `StorageVersion` to 8 in `MigrateToV8`

* Remove dups.

* Update primitives/src/v5/mod.rs

Co-authored-by: Bastian Köcher <[email protected]>

---------

Co-authored-by: alexgparity <[email protected]>
Co-authored-by: alexgparity <[email protected]>
Co-authored-by: antonva <[email protected]>
Co-authored-by: Tsvetomir Dimitrov <[email protected]>
Co-authored-by: Anton Vilhelm Ásgeirsson <[email protected]>
Co-authored-by: Javier Viola <[email protected]>
Co-authored-by: eskimor <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
9 people authored Aug 17, 2023
1 parent 046c43b commit 56d45fe
Showing 53 changed files with 4,196 additions and 1,873 deletions.
42 changes: 6 additions & 36 deletions node/core/backing/src/lib.rs
@@ -48,7 +48,7 @@ use polkadot_node_subsystem_util::{
request_validators, Validator,
};
use polkadot_primitives::{
-BackedCandidate, CandidateCommitments, CandidateHash, CandidateReceipt, CollatorId,
+BackedCandidate, CandidateCommitments, CandidateHash, CandidateReceipt,
CommittedCandidateReceipt, CoreIndex, CoreState, Hash, Id as ParaId, PvfExecTimeoutKind,
SigningContext, ValidatorId, ValidatorIndex, ValidatorSignature, ValidityAttestation,
};
@@ -354,7 +354,7 @@ async fn handle_active_leaves_update<Context>(
let group_index = group_rotation_info.group_for_core(core_index, n_cores);
if let Some(g) = validator_groups.get(group_index.0 as usize) {
if validator.as_ref().map_or(false, |v| g.contains(&v.index())) {
-assignment = Some((scheduled.para_id, scheduled.collator));
+assignment = Some(scheduled.para_id);
}
groups.insert(scheduled.para_id, g.clone());
}
@@ -363,15 +363,15 @@ async fn handle_active_leaves_update<Context>(

let table_context = TableContext { groups, validators, validator };

-let (assignment, required_collator) = match assignment {
+let assignment = match assignment {
None => {
assignments_span.add_string_tag("assigned", "false");
-(None, None)
+None
},
-Some((assignment, required_collator)) => {
+Some(assignment) => {
assignments_span.add_string_tag("assigned", "true");
assignments_span.add_para_id(assignment);
-(Some(assignment), required_collator)
+Some(assignment)
},
};

@@ -381,7 +381,6 @@ async fn handle_active_leaves_update<Context>(
let job = CandidateBackingJob {
parent,
assignment,
-required_collator,
issued_statements: HashSet::new(),
awaiting_validation: HashSet::new(),
fallbacks: HashMap::new(),
@@ -412,8 +411,6 @@ struct CandidateBackingJob<Context> {
parent: Hash,
/// The `ParaId` assigned to this validator
assignment: Option<ParaId>,
-/// The collator required to author the candidate, if any.
-required_collator: Option<CollatorId>,
/// Spans for all candidates that are not yet backable.
unbacked_candidates: HashMap<CandidateHash, jaeger::Span>,
/// We issued `Seconded`, `Valid` or `Invalid` statements on about these candidates.
@@ -913,21 +910,6 @@ impl<Context> CandidateBackingJob<Context> {
candidate: &CandidateReceipt,
pov: Arc<PoV>,
) -> Result<(), Error> {
-// Check that candidate is collated by the right collator.
-if self
-.required_collator
-.as_ref()
-.map_or(false, |c| c != &candidate.descriptor().collator)
-{
-// Break cycle - bounded as there is only one candidate to
-// second per block.
-ctx.send_unbounded_message(CollatorProtocolMessage::Invalid(
-self.parent,
-candidate.clone(),
-));
-return Ok(())
-}
-
let candidate_hash = candidate.hash();
let mut span = self.get_unbacked_validation_child(
root_span,
@@ -1171,25 +1153,13 @@ impl<Context> CandidateBackingJob<Context> {
return Ok(())
}

-let descriptor = attesting.candidate.descriptor().clone();
-
gum::debug!(
target: LOG_TARGET,
candidate_hash = ?candidate_hash,
candidate_receipt = ?attesting.candidate,
"Kicking off validation",
);

-// Check that candidate is collated by the right collator.
-if self.required_collator.as_ref().map_or(false, |c| c != &descriptor.collator) {
-// If not, we've got the statement in the table but we will
-// not issue validation work for it.
-//
-// Act as though we've issued a statement.
-self.issued_statements.insert(candidate_hash);
-return Ok(())
-}
-
let bg_sender = ctx.sender().clone();
let pov = PoVData::FetchFromValidator {
from_validator: attesting.from_validator,
120 changes: 3 additions & 117 deletions node/core/backing/src/tests.rs
@@ -31,8 +31,8 @@ use polkadot_node_subsystem::{
};
use polkadot_node_subsystem_test_helpers as test_helpers;
use polkadot_primitives::{
-CandidateDescriptor, CollatorId, GroupRotationInfo, HeadData, PersistedValidationData,
-PvfExecTimeoutKind, ScheduledCore,
+CandidateDescriptor, GroupRotationInfo, HeadData, PersistedValidationData, PvfExecTimeoutKind,
+ScheduledCore,
};
use sp_application_crypto::AppCrypto;
use sp_keyring::Sr25519Keyring;
@@ -98,14 +98,10 @@ impl Default for TestState {
let group_rotation_info =
GroupRotationInfo { session_start_block: 0, group_rotation_frequency: 100, now: 1 };

-let thread_collator: CollatorId = Sr25519Keyring::Two.public().into();
let availability_cores = vec![
CoreState::Scheduled(ScheduledCore { para_id: chain_a, collator: None }),
CoreState::Scheduled(ScheduledCore { para_id: chain_b, collator: None }),
-CoreState::Scheduled(ScheduledCore {
-para_id: thread_a,
-collator: Some(thread_collator.clone()),
-}),
+CoreState::Scheduled(ScheduledCore { para_id: thread_a, collator: None }),
];

let mut head_data = HashMap::new();
Expand Down Expand Up @@ -1186,116 +1182,6 @@ fn backing_works_after_failed_validation() {
});
}

-// Test that a `CandidateBackingMessage::Second` issues validation work
-// and in case validation is successful issues a `StatementDistributionMessage`.
-#[test]
-fn backing_doesnt_second_wrong_collator() {
-let mut test_state = TestState::default();
-test_state.availability_cores[0] = CoreState::Scheduled(ScheduledCore {
-para_id: ParaId::from(1),
-collator: Some(Sr25519Keyring::Bob.public().into()),
-});
-
-test_harness(test_state.keystore.clone(), |mut virtual_overseer| async move {
-test_startup(&mut virtual_overseer, &test_state).await;
-
-let pov = PoV { block_data: BlockData(vec![42, 43, 44]) };
-
-let expected_head_data = test_state.head_data.get(&test_state.chain_ids[0]).unwrap();
-
-let pov_hash = pov.hash();
-let candidate = TestCandidateBuilder {
-para_id: test_state.chain_ids[0],
-relay_parent: test_state.relay_parent,
-pov_hash,
-head_data: expected_head_data.clone(),
-erasure_root: make_erasure_root(&test_state, pov.clone()),
-}
-.build();
-
-let second = CandidateBackingMessage::Second(
-test_state.relay_parent,
-candidate.to_plain(),
-pov.clone(),
-);
-
-virtual_overseer.send(FromOrchestra::Communication { msg: second }).await;
-
-assert_matches!(
-virtual_overseer.recv().await,
-AllMessages::CollatorProtocol(
-CollatorProtocolMessage::Invalid(parent, c)
-) if parent == test_state.relay_parent && c == candidate.to_plain() => {
-}
-);
-
-virtual_overseer
-.send(FromOrchestra::Signal(OverseerSignal::ActiveLeaves(
-ActiveLeavesUpdate::stop_work(test_state.relay_parent),
-)))
-.await;
-virtual_overseer
-});
-}
-
-#[test]
-fn validation_work_ignores_wrong_collator() {
-let mut test_state = TestState::default();
-test_state.availability_cores[0] = CoreState::Scheduled(ScheduledCore {
-para_id: ParaId::from(1),
-collator: Some(Sr25519Keyring::Bob.public().into()),
-});
-
-test_harness(test_state.keystore.clone(), |mut virtual_overseer| async move {
-test_startup(&mut virtual_overseer, &test_state).await;
-
-let pov = PoV { block_data: BlockData(vec![1, 2, 3]) };
-
-let pov_hash = pov.hash();
-
-let expected_head_data = test_state.head_data.get(&test_state.chain_ids[0]).unwrap();
-
-let candidate_a = TestCandidateBuilder {
-para_id: test_state.chain_ids[0],
-relay_parent: test_state.relay_parent,
-pov_hash,
-head_data: expected_head_data.clone(),
-erasure_root: make_erasure_root(&test_state, pov.clone()),
-}
-.build();
-
-let public2 = Keystore::sr25519_generate_new(
-&*test_state.keystore,
-ValidatorId::ID,
-Some(&test_state.validators[2].to_seed()),
-)
-.expect("Insert key into keystore");
-let seconding = SignedFullStatement::sign(
-&test_state.keystore,
-Statement::Seconded(candidate_a.clone()),
-&test_state.signing_context,
-ValidatorIndex(2),
-&public2.into(),
-)
-.ok()
-.flatten()
-.expect("should be signed");
-
-let statement =
-CandidateBackingMessage::Statement(test_state.relay_parent, seconding.clone());
-
-virtual_overseer.send(FromOrchestra::Communication { msg: statement }).await;
-
-// The statement will be ignored because it has the wrong collator.
-virtual_overseer
-.send(FromOrchestra::Signal(OverseerSignal::ActiveLeaves(
-ActiveLeavesUpdate::stop_work(test_state.relay_parent),
-)))
-.await;
-virtual_overseer
-});
-}
-
#[test]
fn candidate_backing_reorders_votes() {
use sp_core::Encode;
1 change: 1 addition & 0 deletions node/network/collator-protocol/src/validator_side/mod.rs
@@ -921,6 +921,7 @@ async fn process_incoming_peer_message<Context>(
.span_per_relay_parent
.get(&relay_parent)
.map(|s| s.child("advertise-collation"));

if !state.view.contains(&relay_parent) {
gum::debug!(
target: LOG_TARGET,
5 changes: 1 addition & 4 deletions node/service/src/chain_spec.rs
@@ -211,8 +211,7 @@ fn default_parachains_host_configuration(
max_pov_size: MAX_POV_SIZE,
max_head_data_size: 32 * 1024,
group_rotation_frequency: 20,
-chain_availability_period: 4,
-thread_availability_period: 4,
+paras_availability_period: 4,
max_upward_queue_count: 8,
max_upward_queue_size: 1024 * 1024,
max_downward_message_size: 1024 * 1024,
@@ -223,10 +222,8 @@
hrmp_channel_max_capacity: 8,
hrmp_channel_max_total_size: 8 * 1024,
hrmp_max_parachain_inbound_channels: 4,
-hrmp_max_parathread_inbound_channels: 4,
hrmp_channel_max_message_size: 1024 * 1024,
hrmp_max_parachain_outbound_channels: 4,
-hrmp_max_parathread_outbound_channels: 4,
hrmp_max_message_num_per_candidate: 5,
dispute_period: 6,
no_show_slots: 2,
3 changes: 1 addition & 2 deletions node/test/service/src/chain_spec.rs
@@ -175,8 +175,7 @@ fn polkadot_testnet_genesis(
max_pov_size: MAX_POV_SIZE,
max_head_data_size: 32 * 1024,
group_rotation_frequency: 20,
-chain_availability_period: 4,
-thread_availability_period: 4,
+paras_availability_period: 4,
no_show_slots: 10,
minimum_validation_upgrade_delay: 5,
..Default::default()
3 changes: 2 additions & 1 deletion primitives/src/lib.rs
@@ -56,7 +56,8 @@ pub use v5::{
UpgradeRestriction, UpwardMessage, ValidDisputeStatementKind, ValidationCode,
ValidationCodeHash, ValidatorId, ValidatorIndex, ValidatorSignature, ValidityAttestation,
ValidityError, ASSIGNMENT_KEY_TYPE_ID, LOWEST_PUBLIC_ID, MAX_CODE_SIZE, MAX_HEAD_DATA_SIZE,
-MAX_POV_SIZE, PARACHAINS_INHERENT_IDENTIFIER, PARACHAIN_KEY_TYPE_ID,
+MAX_POV_SIZE, ON_DEMAND_DEFAULT_QUEUE_MAX_SIZE, PARACHAINS_INHERENT_IDENTIFIER,
+PARACHAIN_KEY_TYPE_ID,
};

#[cfg(feature = "std")]