Training Node Guide
How to stake and submit a Training Task
This guide provides step-by-step instructions for the entire Training Node workflow. By the end you will have successfully staked as a Training Node, completed a training task, and claimed your staking rewards.
Be sure you have completed all prerequisite tasks.
For each task, you can act as either a Training Node or a Validator, not both.
Follow the steps here to connect your wallet.
Once you have FLOCK tokens, navigate to the Training Node tab on the Stake page.
Select a training task
Stake FLOCK
Once you’ve confirmed and approved the transaction in your web3 wallet, you will see a box on the Training Node tab with your stake details.
Note that you have the option to accept delegators. To do so, go to the "Accept Delegator" tab, select your profit-sharing ratio, then click "Create Delegation Contract":
Once the transaction is complete, you can add a profile and a profile picture, as well as modify your profit-sharing ratio here:
You can only modify your profit-sharing ratio once per month.
Your API key is required for all remaining Training Node steps. You can get it from the web app.
Select the dropdown in the upper-right corner of the web app
Select API
On the API page, copy your API key
Once you have your API key, you can proceed to the next step.
NOTE: If you have issues generating an API key, try removing any ad-blocker extensions and/or clearing your cookies.
For Windows users, we suggest installing WSL. Follow the guidance: WSL installation
You can install Anaconda from HERE
The quickstart repo contains everything you need to run our full_automation.py script. To clone it, run:
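A minimal sketch of the clone step; `<quickstart-repo-url>` is a placeholder for the repository link given on this page, and the directory name is illustrative:

```shell
# Replace <quickstart-repo-url> with the quickstart repository URL linked above
git clone <quickstart-repo-url> quickstart
cd quickstart
```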
To set up all packages within the project directory:
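As an illustrative example, assuming you use conda and the repo ships a requirements.txt (the environment name and Python version here are assumptions; follow the repo's README if it specifies otherwise):

```shell
# Hypothetical setup: create an isolated environment, then install dependencies
conda create -n training-node python=3.10 -y
conda activate training-node
pip install -r requirements.txt
```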
File Structure
dataset.py
- Contains the logic to process the raw data from demo_data.jsonl.
demo_data.jsonl
- Follows the shareGPT format. The training data you receive from the fed-ledger is in exactly the same format.
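For reference, a shareGPT-style record is a JSON object holding a list of conversation turns, one object per line of the .jsonl file. A minimal illustrative example (the field values are made up):

```json
{"conversations": [
  {"from": "human", "value": "What is staking?"},
  {"from": "gpt", "value": "Staking means locking tokens to help secure a network and earn rewards."}
]}
```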
merge.py
- Contains the utility function for merging LoRA weights. If you are training with LoRA, please ensure you merge the adapter before uploading to your Hugging Face repository.
demo.py
- A training script that implements LoRA fine-tuning for a Gemma-2B model.
The script described in the following step automates the following:
Gets a task
Downloads the training data
Finetunes Gemma-2B on training data
Merges weights
Uploads to your Hugging Face model repo
Submits the task to fed-ledger
To run it, you will need:
TASK_ID
- the ID of the task you are staking on as a Training Node
FLOCK_API_KEY
- available at https://train.flock.io/flock_api
HF_TOKEN
- your Hugging Face Access Token (Hugging Face > Profile > Settings > Access Tokens)
HG_USERNAME
- your Hugging Face username
NOTE: If you haven't already done so, please acknowledge the license to access Gemma on Hugging Face:
Once you have all the information listed in the previous step, paste the script below into your terminal, update the respective values, and run the command.
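As a sketch of what that command looks like, assuming full_automation.py reads the four values from environment variables (the angle-bracket placeholders are yours to fill in):

```shell
# Placeholders in <> must be replaced with your own values before running
TASK_ID=<task-id> \
FLOCK_API_KEY=<your-flock-api-key> \
HF_TOKEN=<your-hf-token> \
HG_USERNAME=<your-hf-username> \
python full_automation.py
```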
NOTE: Tasks can be completed with any supported base LLM. The only restriction is max_param.
You can use any technique to train or fine-tune a model; you just need to ensure that the resulting model can be successfully run by our validation script.
Reward distribution is triggered every 24 hours at midnight UTC. The final round of reward distribution is triggered once the task training period is complete.
You can claim your rewards via train.flock.io.