Node and Validator Tasks
Deploy and operate a validator node and provide a node as a public RPC endpoint. Keep your node infrastructure online: liveness is continuously measured, and points are earned in proportion to performance. There are two tasks in this category: Onboard validator, with up to 120 winners, and Open the door, with up to 50 winners!
Synopsis
Node and validator infrastructure tasks continue from Round #3 to maintain validation and public node infrastructure.
For each task, points are awarded to participants based on their performance over the course of Round #4.
For each task, points are a function of empirical measurements, using network telemetry to monitor validator participation in consensus and the liveness of public endpoints. See the scoring rules on each task description page for details.
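To give a feel for "points proportional to performance", here is a minimal sketch of a liveness score computed from periodic probe results. This is an illustration only and an assumption on our part: the actual telemetry pipeline, probe frequency, and point weighting are those defined on the task description pages.

```python
# Illustrative sketch only: award points in proportion to measured liveness.
# The real scoring criteria and weights are defined on the task pages.

def liveness_score(probe_results: list[bool], max_points: float = 100.0) -> float:
    """Return points proportional to the fraction of successful liveness probes."""
    if not probe_results:
        return 0.0
    uptime_ratio = sum(probe_results) / len(probe_results)
    return max_points * uptime_ratio

# Example: 1,440 probes over a day, 1,425 of them successful.
probes = [True] * 1425 + [False] * 15
print(f"Points earned: {liveness_score(probes):.2f}")  # ~98.96
```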
The task scoreboard can be viewed on the Leaderboards page.
Validator metrics can be viewed on a validator dashboard.
How to enter
How to take part in Node and Validator Tasks
Ready to take part? Complete the Games Registration Form if you haven’t done so already.
Details of how to enter and how scoring works for each on-chain task are given on the individual task pages.
The network re-genesis for Round #4 has brought in many new protocol features. One of these is an oracle network bringing price data to the chain as an L1 platform feature. Sourcing and submitting that price data for aggregation on-chain is a validator responsibility, so validators now need to run an oracle server as well as a full node.
See the docs for the concept Oracle network and the guide Running an Oracle Server.
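Conceptually, the oracle server's job is to fetch quotes from external data sources, aggregate them locally, and submit the result for on-chain aggregation. The sketch below illustrates only that data flow; every name in it (SOURCES, fetch_price, submit_price_report) is a hypothetical placeholder, not the actual oracle server API. Follow the Running an Oracle Server guide for real setup instructions.

```python
# Conceptual sketch of a validator's oracle reporting loop. All identifiers
# below are hypothetical placeholders, not the real oracle server API.
import statistics
import time

SOURCES = ["https://data-source-a.example", "https://data-source-b.example"]

def fetch_price(source_url: str, pair: str) -> float:
    """Placeholder: query one external data source for a currency-pair quote."""
    raise NotImplementedError

def submit_price_report(pair: str, price: float) -> None:
    """Placeholder: sign the locally aggregated quote and submit it on-chain."""
    raise NotImplementedError

def report_loop(pair: str, interval_seconds: int = 30) -> None:
    """Periodically source quotes, aggregate them locally, and submit."""
    while True:
        quotes = [fetch_price(url, pair) for url in SOURCES]
        submit_price_report(pair, statistics.median(quotes))
        time.sleep(interval_seconds)
```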
Awards
Round #4 scores each Node and Validator Task individually:
- Onboard validator has an allocation of 20,000 Award Tributes and up to 120 winners.
- Open the door has an allocation of 10,000 Award Tributes and up to 50 winners.
Each award pool is distributed according to the number of participants that took part in the task and adjusted by each participant's individual score for the task: the higher the score, the higher the share of the award pool.
Scoring rule
Round #4 scores each Node and Validator Task individually using a methodology based on:
- an award pool of a fixed amount for a task
- a floor and ceiling for participation and winner numbers to calculate winner award allocations:
  - a significance threshold \(R\) of 95%. The top ranked users that accumulate 95% or more of the total score for the task are eligible. This puts a minimum score floor on the task.
  - a minimum number of participants below which a partial distribution of the reward allocation takes place - \(N_{fb}\)
  - a maximum number of winners for the task - \(N_{max}\)
- points are scored for task completion according to the stated scoring criteria
- winners are ranked by scoreboard position. If participation is higher than \(N_{max}\), the top-scoring participants up to \(N_{max}\) are chosen as winners.
- task participation must be significant. Only significantly scoring participants are counted, i.e. the top ranked users that together accumulate \(R\) (95%) or more of the total score.
- awards from the pool are distributed to the winners. Each winner’s award amount is calculated according to their score and the total number of winners.
The significance threshold puts a minimum score floor on the task: to be eligible for an award, a participant's score must fall within the top-ranked group that accumulates \(R\) = 95% of the total task score.
The \(N_{fb}\) floor guards against a low participation scenario in which the entire award pool would go to a few participants rather than the wider community. If participation falls below \(N_{fb}\), only a partial distribution of the reward allocation takes place and the remainder is carried forward for future incentives.
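Putting these rules together, here is a minimal sketch of how a per-task distribution could be computed. The proportional-share formula and the partial-distribution fraction below are assumptions for illustration only; \(R\), \(N_{fb}\), \(N_{max}\), the pool sizes, and the exact award calculation are those stated on the individual task pages.

```python
# Sketch of a per-task award distribution under the stated methodology.
# The proportional-share formula and the partial-distribution fraction are
# illustrative assumptions; R, N_fb, N_max and pool sizes are per-task values.

def distribute_awards(scores: dict[str, float],
                      pool: float,
                      r_threshold: float = 0.95,   # significance threshold R
                      n_fb: int = 10,              # placeholder floor value
                      n_max: int = 120) -> dict[str, float]:
    # Rank participants by score (scoreboard position).
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(score for _, score in ranked)

    # Keep only "significant" participants: the top-ranked users whose
    # cumulative score first reaches R (95%) of the total task score.
    winners, cumulative = [], 0.0
    for user, score in ranked:
        winners.append((user, score))
        cumulative += score
        if cumulative >= r_threshold * total:
            break

    # Cap the winner list at N_max.
    winners = winners[:n_max]

    # Below the N_fb floor, only a partial distribution takes place; the
    # remainder is carried forward (the fraction here is an assumption).
    payable_pool = pool if len(winners) >= n_fb else pool * len(winners) / n_fb

    # Each winner's award is proportional to their share of the winners' total
    # score (assumed formula -- the task pages define the exact calculation).
    winners_total = sum(score for _, score in winners)
    return {user: payable_pool * score / winners_total for user, score in winners}

# Example: the 20,000 Award Tribute pool for "Onboard validator".
example_scores = {f"validator-{i}": 100.0 - i for i in range(30)}
awards = distribute_awards(example_scores, pool=20_000.0)
print(len(awards), sum(awards.values()))  # number of winners, total paid out
```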