Configurable Preference Tuning ⚙️📝
CPT (Configurable Preference Tuning) uses rubric-guided synthetic data and DPO to let LLMs dynamically adjust their behavior (e.g., writing style) at inference time via system prompts.
This is a LoRA adapter for unsloth/mistral-nemo-instruct-2407-bnb-4bit, trained using the code and dataset described in the paper *Configurable Preference Tuning with Rubric-Guided Synthetic Data*.
The code is available at https://github.com/vicgalle/configurable-preference-tuning.
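To make the idea concrete, here is a minimal sketch of how a rubric-derived style directive could be injected as a system prompt at inference, and how a DPO preference record conditioned on that prompt might be laid out. This is an illustrative assumption, not the repository's actual data schema or API: the function names (`build_messages`, `dpo_pair`), the directive wording, and the record fields are hypothetical.

```python
# Hypothetical sketch: rubric-conditioned system prompts and a DPO pair.
# None of these names come from the CPT repo; they only illustrate the idea.

def build_messages(style_directive: str, user_prompt: str) -> list[dict]:
    """Prepend a rubric-derived style directive as the system prompt."""
    return [
        {"role": "system",
         "content": f"Respond in the following style: {style_directive}"},
        {"role": "user", "content": user_prompt},
    ]

def dpo_pair(style_directive: str, user_prompt: str,
             on_style: str, off_style: str) -> dict:
    """One DPO record: 'chosen' follows the directive, 'rejected' ignores it."""
    return {
        "prompt": build_messages(style_directive, user_prompt),
        "chosen": on_style,
        "rejected": off_style,
    }

pair = dpo_pair(
    "terse and ironic",
    "Describe a rainy morning.",
    "Rain again. Lovely.",                      # matches the directive
    "The morning began with a gentle rain...",  # ignores the directive
)
print(pair["prompt"][0]["role"])  # system
```

At inference time, only the system prompt changes: swapping the style directive is what lets the tuned adapter shift its writing style without retraining.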