LFM2-Research

A small causal language model pre-trained on arXiv AI/ML research papers.

Overview

LFM2-Research is a compact language model built on the LFM2 (Liquid Foundation Model 2) architecture (the model code comes from src/transformers/models/lfm2 in the Hugging Face Transformers library) and pre-trained exclusively on AI and machine learning research papers from arXiv. It is designed as a lightweight research assistant capable of engaging with technical literature in the AI/ML domain.

⚠️ This model has not undergone RLHF, instruction tuning, or any alignment procedure. It is a raw pre-trained model and is best suited for experimentation and research purposes.


Model Architecture

Parameter            Value
-------------------  ----------------------------
Model Type           Causal Language Model
Architecture         LFM2
Hidden Size          512
Layers               8
Attention Heads      8
KV Heads             4 (Grouped-Query Attention)
Max Sequence Length  2048
Vocabulary Size      50,257
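The head configuration above (8 query heads sharing 4 KV heads) is grouped-query attention. A minimal NumPy sketch of a single GQA step with these dimensions, assuming the usual derived head size (512 / 8 = 64) and a causal mask; this illustrates the mechanism only and is not the model's actual implementation:

```python
import numpy as np

# Dimensions from the table above; head_dim and the repeat factor are derived.
hidden_size = 512
n_heads = 8                          # query heads
n_kv_heads = 4                       # key/value heads
head_dim = hidden_size // n_heads    # 64
group = n_heads // n_kv_heads        # each KV head serves 2 query heads

seq_len = 6
rng = np.random.default_rng(0)
q = rng.standard_normal((n_heads, seq_len, head_dim))
k = rng.standard_normal((n_kv_heads, seq_len, head_dim))
v = rng.standard_normal((n_kv_heads, seq_len, head_dim))

# Expand K/V so each query head attends to its group's shared key/value head.
k_exp = np.repeat(k, group, axis=0)  # (8, seq_len, head_dim)
v_exp = np.repeat(v, group, axis=0)

scores = q @ k_exp.transpose(0, 2, 1) / np.sqrt(head_dim)
# Causal mask: each position attends only to itself and earlier tokens.
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[:, mask] = -np.inf
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v_exp                # (8, seq_len, head_dim)
print(out.shape)                     # (8, 6, 64)
```

The KV cache only needs to store 4 heads instead of 8, which is the main motivation for GQA in small models like this one.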

Training Details

Parameter         Value
----------------  ---------------------
Dataset           FlameF0X/arXiv-AI-ML
Training Samples  2,500
Batch Size        4
Learning Rate     3e-4
Epochs            20
Final Loss        0.3772

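A quick sanity check on the schedule implied by the table, assuming one optimizer step per batch (i.e. no gradient accumulation, which the card does not specify):

```python
import math

# Values from the training table above.
samples, batch_size, epochs = 2500, 4, 20

steps_per_epoch = math.ceil(samples / batch_size)  # 625
total_steps = steps_per_epoch * epochs             # 12,500
print(steps_per_epoch, total_steps)                # 625 12500
```

Seeing each of the 2,500 samples 20 times is a lot for a dataset this small, which is why the Limitations section below flags overfitting as a concern.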
Intended Use

  • Exploring AI/ML concepts in a research context
  • Prototyping lightweight domain-specific language model pipelines
  • Studying the effect of narrow-domain pre-training on small models

Out-of-Scope Use

This model is not intended for production use, general-purpose chat, or any application requiring safe, aligned, or factually reliable outputs. It has a very small training set (2,500 samples) and may produce repetitive, incoherent, or factually incorrect text.


Limitations

  • Small training set: Only 2,500 samples were used for pre-training, which significantly limits generalization, although research papers are comparatively information-dense.
  • No alignment: The model has not been fine-tuned with human feedback or instruction tuning of any kind.
  • Potential overfitting: Given the high number of epochs (20) relative to the dataset size, the model may have overfit to training examples.
  • Narrow domain: The model has only been exposed to AI/ML research text and will likely perform poorly on out-of-domain inputs.

Safety

Because the training data consists solely of academic research papers, the risk of harmful content generation is low. However, the lack of any alignment procedure means outputs are unpredictable and should not be treated as authoritative or safe for end-user-facing applications.


Citation

If you use this model in your work, please cite it as:

@misc{lfm2-research,
  author = {FlameF0X},
  title  = {LFM2-Research: A Small Language Model Pre-trained on arXiv AI/ML Papers},
  year   = {2025},
  url    = {https://huggingface.co/FlameF0X/LFM2-Research}
}

Or don't. It's up to you.
