Fix: add .model after language_model in quantization ignore/exclude_modules
#4
by zhiyucheng - opened
This PR fixes the module path prefix in the quantization config files.
In both config.json (quantization_config.ignore) and hf_quant_config.json (quantization.exclude_modules), every entry starting with the prefix language_model. has been updated to language_model.model. so that it correctly references the submodule path.
For example:
language_model.lm_head → language_model.model.lm_head
language_model.layers.*.self_attn* → language_model.model.layers.*.self_attn*
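The rewrite described above can be sketched as a small helper. This is a minimal illustration, not code from the PR: the function name and the sample patterns are hypothetical, and the patterns mirror the examples listed here.

```python
def fix_prefix(entries):
    """Rewrite 'language_model.' prefixes to 'language_model.model.'.

    Entries already carrying the correct prefix are left unchanged.
    Hypothetical helper illustrating the fix described in this PR.
    """
    fixed = []
    for entry in entries:
        if entry.startswith("language_model.") and not entry.startswith(
            "language_model.model."
        ):
            # Replace only the first occurrence, preserving the rest of the path.
            entry = entry.replace("language_model.", "language_model.model.", 1)
        fixed.append(entry)
    return fixed

# Entries like those in quantization_config.ignore / quantization.exclude_modules:
patterns = ["language_model.lm_head", "language_model.layers.*.self_attn*"]
print(fix_prefix(patterns))
```

Applied to the two example entries, this yields language_model.model.lm_head and language_model.model.layers.*.self_attn*, matching the updated config files.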
zhiyucheng changed pull request status to closed