Incentivising open source model development

FLock incentivises training participants (training nodes, validators, delegators, FL clients, etc.) to behave honestly and genuinely contribute to the system. Their rewards are based on the cumulative staking amounts in training tasks and each participant's relative performance. The weight given to staking amounts is decided by Decentralised Autonomous Organisation (DAO) voting, while each individual's relative performance is determined by FLock consensus.
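The sketch below illustrates this general principle: a participant's reward combines a stake share and a performance share, weighted by a DAO-voted parameter. The linear combination and the `stake_weight` parameter are illustrative assumptions, not FLock's actual formula.

```python
def participant_reward(total_emission: float,
                       stake: float,
                       total_stake: float,
                       performance: float,
                       total_performance: float,
                       stake_weight: float) -> float:
    """Minimal sketch: combine a participant's stake share and performance share.

    stake_weight in [0, 1] stands in for the DAO-voted weight placed on staking;
    the remainder is placed on relative performance (as scored by FLock consensus).
    """
    stake_share = stake / total_stake
    performance_share = performance / total_performance
    return total_emission * (stake_weight * stake_share
                             + (1 - stake_weight) * performance_share)
```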

FLock incentivises training in AI Arena and FL Alliance through a daily token emission. The proportion allocated to each is adjusted monthly and can be refined by the community through DAO voting.
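A minimal sketch of the split is shown below. The 0.7 ratio is a placeholder for illustration only; the actual proportion is dynamic per month and set by DAO voting.

```python
def split_daily_emission(daily_emission: float,
                         arena_ratio: float) -> tuple[float, float]:
    """Return (AI Arena share, FL Alliance share) of the daily emission."""
    assert 0.0 <= arena_ratio <= 1.0
    return daily_emission * arena_ratio, daily_emission * (1 - arena_ratio)

# Example with assumed values: 70% of a 100,000-token daily emission to AI Arena.
arena_pool, alliance_pool = split_daily_emission(daily_emission=100_000,
                                                 arena_ratio=0.7)
```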

In AI Arena, rewards for training nodes depend on the relative staking amount of the task against all tasks, the stake of the given training node in that task, and the quality of the training node's submission, measured by its rank amongst its peers. For validators, rewards depend on three factors: (i) the quality of the validation, determined by how the validator votes compared to the majority consensus; (ii) how much the validator stakes; and (iii) the number of submissions the validator has successfully validated. For delegators, rewards depend on the quality of the validations produced by the delegated validator, as well as how much the delegator stakes.
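The sketch below shows one way these factors could combine for each role. The specific functional forms (proportional shares, a simple rank-based quality score, multiplicative combination) are assumptions for illustration; FLock's actual formulas are defined by the protocol and DAO parameters.

```python
def task_emission(arena_pool: float, task_stake: float, all_tasks_stake: float) -> float:
    """Share of the AI Arena pool allocated to one task, by its relative stake."""
    return arena_pool * task_stake / all_tasks_stake


def training_node_reward(task_pool: float, node_stake: float, task_stake: float,
                         rank: int, num_nodes: int) -> float:
    """Combine the node's stake share with a rank-based quality score (rank 1 = best)."""
    stake_share = node_stake / task_stake
    quality = (num_nodes - rank + 1) / num_nodes
    return task_pool * stake_share * quality


def validator_reward(task_pool: float, agreement_rate: float,
                     validator_stake: float, task_stake: float,
                     validated_count: int, total_submissions: int) -> float:
    """agreement_rate: fraction of the validator's votes matching the majority consensus."""
    stake_share = validator_stake / task_stake
    coverage = validated_count / total_submissions
    return task_pool * agreement_rate * stake_share * coverage


def delegator_reward(task_pool: float, validator_agreement_rate: float,
                     delegator_stake: float, task_stake: float) -> float:
    """Delegators inherit the quality of their delegated validator."""
    return task_pool * validator_agreement_rate * delegator_stake / task_stake
```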

In FL Alliance, participants are randomly assigned the roles of voter and proposer at each round. Their rewards at each round depend on the reward pool for the given task (initialised when the task is first created), the remaining rounds in the task, the participant's stake, and the total stake of all good participants (see here for how "good" is determined).
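The per-round payout might look like the sketch below. Spreading the remaining pool evenly over the remaining rounds and splitting it pro rata by stake among good participants is an assumption, not FLock's exact schedule.

```python
def round_reward(reward_pool_remaining: float,
                 remaining_rounds: int,
                 participant_stake: float,
                 total_good_stake: float,
                 is_good: bool) -> float:
    """Reward for one participant in the current FL Alliance round."""
    if not is_good or total_good_stake == 0:
        return 0.0
    round_budget = reward_pool_remaining / remaining_rounds
    return round_budget * participant_stake / total_good_stake
```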
