Running results


This demonstration is based on the following settings:

  1. Pre-trained model: Vicuna 7B in 8-bit

  2. Finetuning adapter: LoRA

  3. Participants: 4 (2 Proposers, 2 Voters)

  4. Total communication rounds: 4

  5. Participants' local epoch: 1

  6. Stake requirement: 10

  7. Total reward pool: 1000
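The settings above can be captured in a small configuration sketch. This is illustrative only: the field names are assumptions for this sketch, not FLock's actual configuration schema.

```python
# Illustrative configuration mirroring the demo settings above.
# All field names are assumptions, not FLock's real config schema.
demo_config = {
    "base_model": "Vicuna-7B",     # pre-trained model, loaded in 8-bit
    "load_in_8bit": True,
    "adapter": "LoRA",             # parameter-efficient finetuning adapter
    "participants": {
        "proposers": 2,
        "voters": 2,
    },
    "communication_rounds": 4,     # total federated rounds
    "local_epochs": 1,             # epochs each participant trains per round
    "stake_requirement": 10,
    "reward_pool": 1000,
}

# Total participants is the sum of proposers and voters.
total_participants = (
    demo_config["participants"]["proposers"]
    + demo_config["participants"]["voters"]
)
print(total_participants)  # 4
```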

Training logs

Here are the training logs from the proposers' and voters' local PCs.

Finetuned model demo

We compared the fine-tuned Vicuna model with the original pre-trained Vicuna model. The results are shown below:

Note: the left screen shows FLockLLM (finetuned); the right screen shows the original Vicuna.
