GLM-4.6-NVFP4
Model Overview
- Model Architecture: zai-org/GLM-4.6
- Input: Text
- Output: Text
- Model Optimizations:
  - Weight quantization: FP4
  - Activation quantization: FP4
- Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- Version: 1.0
- Model Developers: RedHatAI
This model is a quantized version of zai-org/GLM-4.6. It was evaluated on several tasks to assess its quality in comparison to the unquantized model.
Model Optimizations
This model was obtained by quantizing the weights and activations of zai-org/GLM-4.6 to the FP4 data type, ready for inference with vLLM >= 0.11.0. This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.
Only the weights and activations of the linear operators within transformers blocks are quantized using LLM Compressor.
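As a rough back-of-the-envelope sketch of where the ~75% figure comes from (the parameter count below is a placeholder rather than the exact size of GLM-4.6, and block-scale overhead is ignored):
# Illustrative estimate of the memory reduction from 16-bit to 4-bit parameters.
# NUM_PARAMS is a hypothetical placeholder; substitute the actual parameter count of GLM-4.6.
NUM_PARAMS = 355e9

bytes_bf16 = NUM_PARAMS * 2    # 16 bits = 2 bytes per parameter
bytes_fp4 = NUM_PARAMS * 0.5   # 4 bits = 0.5 bytes per parameter (block scales add a small overhead)

print(f"BF16: ~{bytes_bf16 / 1e9:.0f} GB, FP4: ~{bytes_fp4 / 1e9:.0f} GB")
print(f"Reduction: {1 - bytes_fp4 / bytes_bf16:.0%}")  # ~75%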
Deployment
Use with vLLM
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/GLM-4.6-NVFP4"
number_gpus = 4
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
vLLM also supports OpenAI-compatible serving. See the documentation for more details.
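For example, the model can be served with vLLM and queried through any OpenAI-compatible client; the launch flags, port, and sampling settings below are illustrative assumptions, not prescribed values:
# Start an OpenAI-compatible server first, for example:
#   vllm serve RedHatAI/GLM-4.6-NVFP4 --tensor-parallel-size 4
# The port and sampling parameters below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="RedHatAI/GLM-4.6-NVFP4",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=256,
)
print(response.choices[0].message.content)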
Creation
This model was created by applying LLM Compressor with calibration samples from UltraChat, as presented in the code snippet below.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.utils import dispatch_for_generation
MODEL_ID = "zai-org/GLM-4.6"
# Load model.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"
# Select the number of calibration samples.
# Increasing the number of samples can improve accuracy.
NUM_CALIBRATION_SAMPLES = 256
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=f"{DATASET_SPLIT}[:{NUM_CALIBRATION_SAMPLES}]")
ds = ds.shuffle(seed=42)
def preprocess(example):
    return {
        "text": tokenizer.apply_chat_template(
            example["messages"],
            tokenize=False,
        )
    }
ds = ds.map(preprocess)
# Tokenize inputs.
def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )
ds = ds.map(tokenize, remove_columns=ds.column_names)
# Configure the quantization algorithm and scheme.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=[
        "lm_head",
        "re:.*mlp.gate$",
    ],
)
# Apply quantization.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    pipeline="sequential",
    sequential_targets=["Glm4MoeDecoderLayer"],
    trust_remote_code_model=True,
)
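# Optional sanity check (not part of the original snippet): dispatch_for_generation is
# imported above but otherwise unused; LLM Compressor's example scripts typically use it
# to run a quick generation with the freshly quantized model before saving.
print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0]))
print("===========================================")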
SAVE_DIR = "./" + MODEL_ID.rstrip("/").split("/")[-1] + "-NVFP4"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
Evaluation
This model was evaluated on well-known text benchmarks using lm-evaluation-harness. The reasoning evaluations were done using lighteval.
Accuracy
| Category | Metric | zai-org/GLM-4.6-FP8 | RedHatAI/GLM-4.6-NVFP4 (this model) | Recovery |
|---|---|---|---|---|
| Leaderboard | MMLU Pro | 50.65 | 55.09 | 108.77% |
| Leaderboard | IFEVAL | 91.97 | 92.69 | 100.78% |
| Reasoning | AIME25 | 96.67% | 93.33% | 96.54% |
| Reasoning | Math-500 (0-shot) | 88.80% | 88.00% | 99.10% |
| Reasoning | GPQA (Diamond, 0-shot) | 81.82% | 80.30% | 98.14% |
Reproduction
The results were obtained using the following commands, with the model served locally behind an OpenAI-compatible endpoint:
Leaderboard
lm_eval --model local-chat-completions \
--tasks mmlu_pro \
--model_args "model=RedHatAI/GLM-4.6-NVFP4,max_length=90000,base_url=http://0.0.0.0:3758/v1/chat/completions,num_concurrent=128,max_retries=3,tokenized_requests=False,tokenizer_backend=None,timeout=1200" \
--num_fewshot 5 \
--apply_chat_template \
--fewshot_as_multiturn \
--output_path ./ \
--seed 42 \
--gen_kwargs "do_sample=True,temperature=1.0,top_p=0.95,max_gen_toks=64000"
lm_eval --model local-chat-completions \
--tasks leaderboard_ifeval \
--model_args "model=RedHatAI/GLM-4.6-NVFP4,max_length=90000,base_url=http://0.0.0.0:3758/v1/chat/completions,num_concurrent=128,max_retries=3,tokenized_requests=False,tokenizer_backend=None,timeout=1200" \
--num_fewshot 5 \
--apply_chat_template \
--fewshot_as_multiturn \
--output_path ./ \
--seed 42 \
--gen_kwargs "do_sample=True,temperature=1.0,top_p=0.95,max_gen_toks=64000"
Reasoning
litellm_config.yaml:
model_parameters:
  provider: "hosted_vllm"
  model_name: "hosted_vllm/redhatai-glm-4.6-nvfp4"
  base_url: "http://0.0.0.0:3759/v1"
  api_key: ""
  timeout: 3600
  concurrent_requests: 128
  generation_parameters:
    temperature: 1.0
    max_new_tokens: 131072
    top_p: 0.95
    seed: 0
lighteval endpoint litellm litellm_config.yaml \
"aime25|0,math_500|0,gpqa:diamond|0" \
--output-dir ./ \
--save-details