Model distillation transfers knowledge from large language models to smaller ones for efficiency. However, excessive distillation can lead to model homogenization and reduced capability in handling ...
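The standard distillation objective this snippet alludes to can be sketched as follows; the temperature and mixing weight are illustrative assumptions, not values taken from the abstract:

```python
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Minimal sketch: the student fits ground-truth labels and the teacher's softened outputs."""
    hard = F.cross_entropy(student_logits, labels)  # supervised term on true labels
    soft = F.kl_div(                                # imitate the teacher's output distribution
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                            # rescale gradients for the softened targets
    return alpha * hard + (1.0 - alpha) * soft
```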
Abstract: In the field of knowledge graph completion, traditional methods mainly rely on Knowledge Graph Embedding (KGE), which maps entities and relations into a low-dimensional vector space to ...
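As a rough illustration of the KGE idea described above, a TransE-style scorer embeds entities and relations in the same low-dimensional space and treats a triple as plausible when head + relation lands near tail. This is a minimal sketch of one common KGE method, not the approach of the cited paper; the dimensions and margin are assumptions:

```python
import torch
import torch.nn as nn


class TransE(nn.Module):
    def __init__(self, num_entities: int, num_relations: int, dim: int = 64):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # entity embeddings
        self.rel = nn.Embedding(num_relations, dim)  # relation embeddings
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, head, relation, tail):
        # Lower distance between (head + relation) and tail means a more plausible triple.
        h, r, t = self.ent(head), self.rel(relation), self.ent(tail)
        return torch.norm(h + r - t, p=2, dim=-1)


def margin_loss(model, pos_triple, neg_triple, margin: float = 1.0):
    # Rank true triples above corrupted (negative) ones by at least the margin.
    return torch.relu(margin + model.score(*pos_triple) - model.score(*neg_triple)).mean()
```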
Abstract: This study proposes a knowledge distillation algorithm based on large language models and feature alignment, aiming to effectively transfer the knowledge of large pre-trained models into ...
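The feature-alignment component mentioned in this abstract is commonly realized by projecting the student's intermediate representations into the teacher's hidden space and penalizing their distance; a hedged sketch under assumed hidden sizes (the paper's exact alignment scheme is not given here):

```python
import torch.nn as nn
import torch.nn.functional as F

# Assumed dimensions: project the smaller student hidden size (512) up to the
# teacher's hidden size (1024) so intermediate features can be compared directly.
proj = nn.Linear(512, 1024)


def feature_alignment_loss(student_hidden, teacher_hidden):
    # MSE between projected student features and detached teacher features;
    # typically added to the soft-label distillation term with a weighting coefficient.
    return F.mse_loss(proj(student_hidden), teacher_hidden.detach())
```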