Evaluation of medium-sized language models (June 2023)

Large language models (LLMs) have garnered significant attention, but the definition of «large» lacks clarity. This dataset focuses on medium-sized language models (MLMs), defined as having at least six billion but fewer than 100 billion parameters. The corresponding study (https://doi.org/10.48550/arXiv.2305.11991) evaluates MLMs on zero-shot generative question answering, which requires models to provide elaborate answers without external document retrieval. The paper introduces a new test dataset and presents results from a human evaluation. Combining the best answers from different MLMs yielded an overall correct answer rate of 82.7%, which exceeds ChatGPT's 60.9%. The best single MLM achieved 46.4% and has 7B parameters, which highlights the importance of appropriate training data for fine-tuning over sheer parameter count. The study suggests that more fine-grained feedback should be used to further improve answer quality.
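
The combined rate reads like an oracle-style union: a question counts as correctly answered if at least one of the evaluated models produced an answer judged correct. A minimal sketch of that computation, assuming per-model binary correctness labels from the human evaluation (the data layout and names here are hypothetical, not taken from the dataset files):

    # Hypothetical layout: model -> {question_id: judged_correct}
    correct = {
        "mlm_a": {"q1": True,  "q2": False, "q3": False},
        "mlm_b": {"q1": False, "q2": True,  "q3": False},
    }

    # All question ids seen across models
    questions = sorted({q for labels in correct.values() for q in labels})

    def single_model_rate(model):
        """Fraction of questions a single model answered correctly."""
        labels = correct[model]
        return sum(labels.get(q, False) for q in questions) / len(questions)

    def combined_rate():
        """A question is solved if at least one model answered it correctly."""
        solved = sum(
            any(labels.get(q, False) for labels in correct.values())
            for q in questions
        )
        return solved / len(questions)

    for name in correct:
        print(f"{name}: {single_model_rate(name):.1%}")
    print(f"combined: {combined_rate():.1%}")  # 2 of 3 solved -> 66.7%

With this toy input, each model alone solves one of three questions (33.3%), while the combination solves two (66.7%), illustrating how the combined rate can exceed every individual model's rate.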

The section below contains the initial data and the results of the study, both available for download.