Article

Text Clustering with Large Language Model Embeddings

International Journal of Cognitive Computing in Engineering

Petukhova, Alina; Matos-Carvalho, João P.; Fachada, Nuno · 2025 · Elsevier

Key information

Authors:

Petukhova, Alina; Matos-Carvalho, João P.; Fachada, Nuno

Published on

01/12/2025

Abstract

Text clustering is an important method for organising the increasing volume of digital content, aiding in the structuring and discovery of hidden patterns in uncategorised data. The effectiveness of text clustering largely depends on the selection of textual embeddings and clustering algorithms. This study argues that recent advancements in large language models (LLMs) have the potential to enhance this task. The research investigates how different textual embeddings, particularly those utilised in LLMs, and various clustering algorithms influence the clustering of text datasets. A series of experiments were conducted to evaluate the impact of embeddings on clustering results, the role of dimensionality reduction through summarisation, and the adjustment of model size. The findings indicate that LLM embeddings are superior at capturing subtleties in structured language. OpenAI’s GPT-3.5 Turbo model yields better results in three out of five clustering metrics across most tested datasets. Most LLM embeddings show improvements in cluster purity and provide a more informative silhouette score, reflecting a refined structural understanding of text data compared to traditional methods. Among the more lightweight models, BERT demonstrates leading performance. Additionally, it was observed that increasing model dimensionality and employing summarisation techniques do not consistently enhance clustering efficiency, suggesting that these strategies require careful consideration for practical application. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by integrating embeddings from LLMs, offering improved methodologies and suggesting new avenues for future research in various types of textual analysis.
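The pipeline the abstract describes can be illustrated with a minimal sketch: embed the texts, cluster the vectors, and score the result with silhouette and purity. TF-IDF vectors stand in for LLM embeddings here, and the toy dataset and `purity` helper are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy corpus with two hypothetical topics (pets vs. finance).
texts = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell sharply",
    "investors sold shares today",
]
labels = [0, 0, 1, 1]  # illustrative ground-truth topics

# Stand-in for LLM embeddings: any text-to-vector mapping fits this slot.
X = TfidfVectorizer().fit_transform(texts).toarray()

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def purity(y_true, y_pred):
    """Fraction of documents matching the majority true class of their cluster."""
    total = 0
    for c in set(y_pred):
        members = [y_true[i] for i, p in enumerate(y_pred) if p == c]
        total += max(members.count(v) for v in set(members))
    return total / len(y_true)

sil = silhouette_score(X, km.labels_)  # in [-1, 1]; higher is better
print(f"purity={purity(labels, km.labels_):.2f} silhouette={sil:.2f}")
```

Swapping the vectoriser for an LLM embedding model changes only the `X` step; the clustering and both evaluation metrics stay the same.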

Publication details

Community authors:

Publication version

VoR - Version of Record

Publisher

Elsevier

Link to the publisher's version

https://www.sciencedirect.com/science/article/pii/S2666307424000482

Container title (journal)

International Journal of Cognitive Computing in Engineering

First page or article number

100

Last page

108

Volume

6

ISSN

2666-3074

Scientific field (FOS)

computer-and-information-sciences - Computer and Information Sciences

Keywords

  • Text clustering
  • Large language models
  • LLMs
  • Text summarisation

Publication language (ISO code)

eng - English

Publication access:

Open Access

Creative Commons licence

CC-BY