We are a research group founded by members of LAMDA as its large-model branch (LAMDA-LM). We focus on technologies related to large language models and other foundation models. Our current key interests include:
- Model selection from a large zoo without inference
- General training and dynamic compatibility
- Adaptive training data engine
- Efficient domain reuse (adaptation)
- Comprehensive evaluation framework
- Parameter alignment and merging