1. Staking and Role Assignment
FLock adopts a distributed voting mechanism and a reward-and-slash mechanism to construct secure FL Alliance systems. To participate in the training process, each participant is required to stake a specified quantity of $FML.
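As a rough illustration, the stake requirement can be thought of as a gate on joining a task. The sketch below is only illustrative: the names REQUIRED_STAKE_FML, stakes and can_join are hypothetical, and the actual check is enforced by the FL smart contract rather than off-chain code.

```python
# Minimal sketch of a stake-gated join check (illustrative only; the real
# logic lives in the FL smart contract). All names here are hypothetical.
REQUIRED_STAKE_FML = 1_000  # assumed minimum stake, not a protocol constant

stakes = {"0xParticipantA": 1_500, "0xParticipantB": 400}

def can_join(participant: str) -> bool:
    """Return True if the participant has staked at least the required $FML."""
    return stakes.get(participant, 0) >= REQUIRED_STAKE_FML

assert can_join("0xParticipantA")
assert not can_join("0xParticipantB")
```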
For each FL Alliance task within the ecosystem, rewards are first secured by transferring them to the task's FL smart contract, provided the task is still in progress and has not exceeded its maximum allotted lifecycle. This step earmarks and protects the rewards for participants actively engaged in the task. Once these conditions are met, the FL smart contract autonomously distributes the rewards to the participants according to their contributions.
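The distribution step can be pictured as a pro-rata split of the escrowed pool. The sketch below assumes per-participant contribution scores are already known; the function and variable names are illustrative and do not reflect the contract's actual interface.

```python
# Minimal sketch of contribution-proportional reward distribution, assuming the
# escrowed reward pool and contribution scores are given. In practice this
# logic is executed by the FL smart contract, not off-chain Python.
def distribute_rewards(reward_pool: float, contributions: dict[str, float]) -> dict[str, float]:
    """Split the escrowed reward pool pro rata to each participant's contribution."""
    total = sum(contributions.values())
    if total == 0:
        return {p: 0.0 for p in contributions}
    return {p: reward_pool * c / total for p, c in contributions.items()}

payouts = distribute_rewards(10_000.0, {"0xA": 3.0, "0xB": 1.0})
# -> {"0xA": 7500.0, "0xB": 2500.0}
```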
An FL Alliance task is derived from a finished AI Arena task. Initiating an FL task automatically triggers the creation of a new FL contract, and the reward for the FL task is determined by the total reward allocated to the corresponding AI Arena version.
Task-specific training logic is uploaded to an off-chain storage platform and referenced by its hash. This hash is used when an instance of the FL task is instantiated. Here, training logic refers to the specific requirements of the given FL task, such as model parameters, the number of training epochs, and so on.
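The snippet below is a minimal sketch of how such a hash might be derived from a training-logic description. The config keys, the choice of SHA-256, and the placeholder upload and task-creation steps are assumptions for illustration, not FLock's actual format or API.

```python
# Illustrative sketch: the task creator serializes the training logic, uploads
# it to off-chain storage, and passes only its hash when instantiating the FL
# task. Storage and instantiation calls are shown as placeholder comments.
import hashlib
import json

training_logic = {
    "model": "resnet18",      # assumed example architecture
    "epochs": 5,              # epochs of training required by the task
    "batch_size": 32,
    "learning_rate": 1e-3,
}

payload = json.dumps(training_logic, sort_keys=True).encode()
logic_hash = hashlib.sha256(payload).hexdigest()

# upload(payload)                      -> off-chain storage (placeholder step)
# create_fl_task(logic_hash=logic_hash) -> on-chain instantiation (placeholder step)
print(logic_hash)
```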
Upon formally joining the training task, each participant randomly partitions their local dataset into a training set and a test set, neither of which is shared with other participants at any time.
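A minimal sketch of this local partitioning is shown below, assuming an 80/20 split; the ratio and the use of NumPy are illustrative choices, not specified by the protocol.

```python
# Minimal sketch of the local train/test split each participant performs.
# The data never leaves the participant's machine; the split ratio is assumed.
import numpy as np

def split_local_dataset(data: np.ndarray, labels: np.ndarray,
                        test_fraction: float = 0.2, seed: int = 0):
    """Randomly partition a local dataset into a training set and a held-out test set."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(data))
    cut = int(len(data) * (1 - test_fraction))
    train_idx, test_idx = indices[:cut], indices[cut:]
    return (data[train_idx], labels[train_idx]), (data[test_idx], labels[test_idx])

(train_X, train_y), (test_X, test_y) = split_local_dataset(
    np.random.rand(100, 8), np.random.randint(0, 2, 100)
)
```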
At the beginning of each round in the FL Alliance task, participants are randomly assigned roles as either proposers or voters; this random role selection is conducted on-chain. A model initialised by one of the proposers is randomly chosen to serve temporarily as the initial global model. The selected model's weights or gradients are then distributed to all participants, ensuring a unified starting point for local models.
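As a rough picture of the per-round selection, the sketch below assigns roles with a seeded shuffle. In the actual system the randomness and the selection happen on-chain; the seed source, proposer count, and helper names here are assumptions.

```python
# Illustrative sketch of random role assignment for one round. On-chain, the
# seed would be derived from the chain itself; a plain seeded PRNG stands in.
import random

def assign_roles(participants: list[str], num_proposers: int, seed: int) -> dict[str, str]:
    """Randomly label each participant as a 'proposer' or a 'voter' for this round."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {p: ("proposer" if i < num_proposers else "voter")
            for i, p in enumerate(shuffled)}

roles = assign_roles(["0xA", "0xB", "0xC", "0xD"], num_proposers=2, seed=42)
```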
Proposers are responsible for training their local models on their own data and then sharing the updated model weights or gradients with all participants. Voters, in turn, aggregate these updates from proposers and validate the aggregated model, producing a validation score.
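A voter's aggregation and validation step can be sketched as a plain federated average followed by an evaluation on the voter's own test split. The equal-weight averaging, the toy linear model, and the accuracy metric below are illustrative assumptions rather than the protocol's prescribed aggregation rule.

```python
# Minimal sketch of a voter's work: average the proposers' weight updates
# (FedAvg-style, equal weights) and score the aggregated model on the voter's
# local test set. Model structure ("w", "b") is a toy assumption.
import numpy as np

def aggregate(updates: list[dict[str, np.ndarray]]) -> dict[str, np.ndarray]:
    """Element-wise average of the proposers' model weights."""
    keys = updates[0].keys()
    return {k: np.mean([u[k] for u in updates], axis=0) for k in keys}

def validation_score(weights: dict[str, np.ndarray],
                     test_X: np.ndarray, test_y: np.ndarray) -> float:
    """Accuracy of a toy linear model on the voter's held-out test set."""
    logits = test_X @ weights["w"] + weights["b"]
    preds = (logits > 0).astype(int)
    return float((preds == test_y).mean())
```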