# prime-lab-wordle

LoRA adapter weights from a Prime RL run on `primeintellect/wordle`.
- Run ID: `p5cd4iqxptn7o4s2zphhb1yx`
- Adapter ID: `w8conyurm3xgjplatw7oax2n`
- Base model: `PrimeIntellect/Qwen3-0.6B-Reverse-Text-SFT`
## Files

- `adapter_model.safetensors` - LoRA adapter weights
- `adapter_config.json` - PEFT adapter configuration
- `run_wordle_adapter.py` - simple local inference script
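If you want these files on disk (for example, to pass a local path as `--adapter-path` to the script), one option is `huggingface_hub`. A minimal sketch, assuming the repo ID shown on this card; the target directory name is just an example:

```python
# Sketch: download the adapter files locally with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="burtenshaw/prime-lab-wordle",
    local_dir="prime-lab-wordle",  # hypothetical target directory
)
print(local_dir)  # pass this path as --adapter-path to run_wordle_adapter.py
```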
## Quickstart

```bash
pip install torch transformers peft accelerate safetensors

python run_wordle_adapter.py \
  --base-model PrimeIntellect/Qwen3-0.6B-Reverse-Text-SFT \
  --adapter-path . \
  --prompt "You are playing Wordle. Guess the next 5-letter word: _ A _ E _"
```
## Transformers Snippet

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "PrimeIntellect/Qwen3-0.6B-Reverse-Text-SFT"
adapter_repo = "burtenshaw/prime-lab-wordle"

# Load the base model, then attach the LoRA adapter on top of it.
tok = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_repo).eval()

# Build a chat-formatted prompt and generate.
messages = [{"role": "user", "content": "Guess a Wordle word from _ A _ E _"}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```
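Generation behavior above comes from the base model's `generation_config`. If you want to control sampling explicitly, the standard `transformers` arguments apply (nothing here is specific to this adapter; the values are illustrative):

```python
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9)
```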
## Notes

- This repo contains an adapter, not a merged full-model checkpoint.
- You must load the adapter on top of the base model listed above.
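If you do need a standalone checkpoint, PEFT can merge the LoRA weights into the base model. A minimal sketch using `merge_and_unload`; the output directory name is hypothetical:

```python
# Sketch: merge the LoRA adapter into the base weights and save a full checkpoint.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "PrimeIntellect/Qwen3-0.6B-Reverse-Text-SFT"
base = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, "burtenshaw/prime-lab-wordle").merge_and_unload()

merged.save_pretrained("qwen3-0.6b-wordle-merged")  # hypothetical output dir
AutoTokenizer.from_pretrained(base_model_id).save_pretrained("qwen3-0.6b-wordle-merged")
```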