The Validator is responsible for generating challenges for the Miner to solve. It evaluates solutions submitted by Miners and rewards them based on the quality and correctness of their answers. Additionally, it incorporates penalties for late responses.
Protocol: LogicSynapse
- **Validator Prepares:**
  - `raw_logic_question`: A math problem generated using MathGenerator.
  - `logic_question`: A personalized challenge created by refining `raw_logic_question` with an LLM.
- **Miner Receives:**
  - `logic_question`: The challenge to solve.
- **Miner Submits:**
  - `logic_reasoning`: Step-by-step reasoning to solve the challenge.
  - `logic_answer`: The final answer to the challenge, expressed as a short sentence.
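The protocol fields above can be sketched as a plain data structure. This is a hypothetical illustration only; the actual `LogicSynapse` in the repository is a bittensor Synapse subclass with additional base fields:

```python
from dataclasses import dataclass

@dataclass
class LogicSynapse:
    # Prepared by the Validator
    raw_logic_question: str = ""  # math problem from MathGenerator
    logic_question: str = ""      # LLM-refined challenge sent to the Miner

    # Filled in by the Miner
    logic_reasoning: str = ""     # step-by-step reasoning
    logic_answer: str = ""        # final answer as a short sentence

# A Miner receives the question and fills in its solution fields:
syn = LogicSynapse(logic_question="What is 2 + 2?")
syn.logic_reasoning = "Adding 2 and 2 gives 4."
syn.logic_answer = "The answer is 4."
```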
- **Correctness** (`bool`): Checks if `logic_answer` matches the ground truth.
- **Similarity** (`float`): Measures cosine similarity between `logic_reasoning` and the Validator's reasoning.
- **Time Penalty** (`float`): Applies a penalty for delayed responses based on the formula: `time_penalty = (process_time / timeout) * MAX_PENALTY`
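As a rough illustration of how these components could combine into a score, here is a minimal sketch. The `MAX_PENALTY` value and the way correctness gates similarity are assumptions for illustration, not the subnet's actual constants or weighting:

```python
MAX_PENALTY = 0.5  # assumed cap; check the repository for the real constant

def time_penalty(process_time: float, timeout: float,
                 max_penalty: float = MAX_PENALTY) -> float:
    """Linear penalty: the closer the response time is to the timeout,
    the larger the deduction (matches the formula in the text)."""
    return (process_time / timeout) * max_penalty

def reward(correct: bool, similarity: float,
           process_time: float, timeout: float) -> float:
    """Hypothetical combination: correctness gates the score, similarity
    scales it, and the time penalty is subtracted, floored at zero."""
    base = similarity if correct else 0.0
    return max(0.0, base - time_penalty(process_time, timeout))

# A correct answer with similarity 0.9, returned in 12s of a 64s timeout:
score = reward(True, 0.9, 12.0, 64.0)
```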
Follow the steps below to configure and run the Validator.
This setup allows you to run the Validator locally by hosting a vLLM server. While it requires significant resources, it offers full control over the environment.
- GPU: 1x GPU with 24GB VRAM (e.g., RTX 4090, A100, A6000)
- Storage: 100GB
- Python: 3.10
- **Set Up vLLM Environment**

  ```bash
  python -m venv vllm
  . vllm/bin/activate
  pip install vllm
  ```
- **Install PM2 for Process Management**

  ```bash
  sudo apt update && sudo apt install jq npm -y
  sudo npm install pm2 -g
  pm2 update
  ```
- **Select a Model**

  Supported models are listed here.
- **Start the vLLM Server**

  ```bash
  . vllm/bin/activate
  pm2 start "vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000 --host 0.0.0.0" --name "sn35-vllm"
  ```

  Adjust the model, port, and host as needed. For example, add these flags if the model fails to start:

  ```bash
  --max-model-len 16384 --gpu-memory-utilization 0.95
  ```
Using Together AI and OpenAI simplifies setup and reduces local resource requirements. At least one of these platforms must be configured.
- Account on Together.AI: Sign up here.
- Account on Hugging Face: Sign up here.
- API Key: Obtain from the Together.AI dashboard.
- Python 3.10
- PM2 Process Manager (optional): For running and managing the Validator process.
- **Clone the Repository**

  ```bash
  git clone https://github.com/LogicNet-Subnet/LogicNet logicnet
  cd logicnet
  ```
- **Install the Requirements**

  ```bash
  python -m venv main
  . main/bin/activate
  bash install.sh
  ```

  Alternatively, install manually:

  ```bash
  pip install -e .
  pip uninstall uvloop -y
  pip install git+https://github.com/lukew3/mathgenerator.git
  ```
- **Set Up the `.env` File**

  ```bash
  echo "OPENAI_API_KEY=your_openai_api_key" >> .env
  echo "HF_TOKEN=your_hugging_face_token" >> .env  # needed for some datasets
  echo "WANDB_API_KEY=your_wandb_api_key" >> .env
  echo "USE_TORCH=1" >> .env
  ```
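After writing the file and sourcing it, a quick sanity check that the keys actually reached the environment can look like this. The variable names match the `echo` commands above; this helper is an optional aside, not part of the repository:

```python
import os

REQUIRED = ["OPENAI_API_KEY", "WANDB_API_KEY"]
OPTIONAL = ["HF_TOKEN"]  # only needed for some datasets

def check_env(env: dict) -> list:
    """Return the names of required keys that are missing or empty."""
    return [name for name in REQUIRED if not env.get(name)]

missing = check_env(dict(os.environ))
print("missing required keys:", missing if missing else "none")
for name in OPTIONAL:
    if not os.environ.get(name):
        print(f"note: {name} is not set; some datasets may be unavailable")
```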
- **Activate the Virtual Environment**

  ```bash
  . main/bin/activate
  ```
- **Source the `.env` File**

  ```bash
  source .env
  ```
- **Start the Validator**

  You must run at least two models, in any combination of the three supported options.

  ```bash
  pm2 start python --name "sn35-validator" -- neurons/validator/validator.py \
      --netuid 35 \
      --wallet.name "your-wallet-name" \
      --wallet.hotkey "your-hotkey-name" \
      --subtensor.network finney \
      --neuron_type validator \
      --logging.debug
  ```
- **Enable Public Access (Optional)**

  Add this flag to enable the proxy:

  ```bash
  --axon.port "your-public-open-port"
  ```
Configure Wandb to track and analyze Validator performance.
- Add your Wandb API key to `.env`:

  ```bash
  echo "WANDB_API_KEY=your_wandb_api_key" >> .env
  ```

- Wandb is already configured for mainnet by default.
- To run the Validator with Wandb on testnet, add:

  ```bash
  --wandb.project_name logicnet-testnet \
  --wandb.entity ait-ai
  ```
- **Logs:**
  - View Validator logs with:

    ```bash
    pm2 logs sn35-validator
    ```

  - Wandb runs for mainnet: https://wandb.ai/ait-ai/logicnet-mainnet/runs
  - Wandb runs for testnet: https://wandb.ai/ait-ai/logicnet-testnet/runs
- **Common Issues:**
- Missing API keys.
- Incorrect model IDs.
- Connectivity problems.
- **Contact Support:** Reach out to the LogicNet team for assistance.
Happy Validating!