Article
Text Clustering with Large Language Model Embeddings
International Journal of Cognitive Computing in Engineering
2025 — Elsevier
Key information
Publication date
12/01/2025
Abstract
Text clustering is an important method for organising the increasing volume of digital content, aiding in the structuring and discovery of hidden patterns in uncategorised data. The effectiveness of text clustering largely depends on the selection of textual embeddings and clustering algorithms. This study argues that recent advancements in large language models (LLMs) have the potential to enhance this task. The research investigates how different textual embeddings, particularly those utilised in LLMs, and various clustering algorithms influence the clustering of text datasets. A series of experiments were conducted to evaluate the impact of embeddings on clustering results, the role of dimensionality reduction through summarisation, and the adjustment of model size. The findings indicate that LLM embeddings are superior at capturing subtleties in structured language. OpenAI’s GPT-3.5 Turbo model yields better results in three out of five clustering metrics across most tested datasets. Most LLM embeddings show improvements in cluster purity and provide a more informative silhouette score, reflecting a refined structural understanding of text data compared to traditional methods. Among the more lightweight models, BERT demonstrates leading performance. Additionally, it was observed that increasing model dimensionality and employing summarisation techniques do not consistently enhance clustering efficiency, suggesting that these strategies require careful consideration for practical application. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by integrating embeddings from LLMs, offering improved methodologies and suggesting new avenues for future research in various types of textual analysis.
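As a rough illustration of the pipeline the abstract describes (encode texts into embeddings, cluster the vectors, score the clusters), the sketch below uses the sentence-transformers and scikit-learn packages. It is not the authors' code: the model name, example texts, number of clusters, and choice of k-means are illustrative assumptions only.

```python
# Minimal embedding-plus-clustering sketch (illustrative, not the paper's setup).
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

texts = [
    "The central bank raised interest rates again this quarter.",
    "Quarterly inflation figures surprised most economists.",
    "The midfielder scored twice in the championship final.",
    "Fans celebrated the team's first league title in a decade.",
]

# 1. Encode each document into a dense embedding vector
#    (a lightweight BERT-style encoder, chosen here as an example).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(texts)

# 2. Cluster the embedding vectors; k is fixed to 2 for this toy example.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

# 3. Evaluate cluster cohesion and separation with the silhouette score,
#    one of the metric families mentioned in the abstract.
print("Cluster labels:", labels)
print("Silhouette score:", silhouette_score(embeddings, labels))
```

Swapping the encoder for an LLM embedding model or changing the clustering algorithm only touches steps 1 and 2, which is the kind of comparison the study carries out.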
Publication details
Authors in the community:
Nuno Fachada (ist145239)
Publication version
VoR - Version of Record
Publisher
Elsevier
Link to the publisher's version
https://www.sciencedirect.com/science/article/pii/S2666307424000482
Title of the publication container
International Journal of Cognitive Computing in Engineering
First page or article number
100
Last page
108
Volume
6
ISSN
2666-3074
Fields of Science and Technology (FOS)
computer-and-information-sciences - Computer and information sciences
Keywords
- Text clustering
- Large language models
- LLMs
- Text summarisation
Publication language (ISO code)
eng - English
Rights type
Open access
Creative Commons license
CC-BY - Creative Commons Attribution