arxiv:2508.10265

Why Cannot Large Language Models Ever Make True Correct Reasoning?

Published on Aug 14

Abstract

AI-generated summary: The paper argues that large language models lack true reasoning ability due to inherent limitations in their working principles.

Recently, with the application progress of AIGC tools based on large language models (LLMs), led by ChatGPT, many AI experts and even more non-professionals have been trumpeting the "reasoning ability" of LLMs. The present author considers the so-called "reasoning ability" of LLMs to be an illusion held by people with vague concepts. In fact, LLMs can never have true reasoning ability. This paper intends to explain that, because of the essential limitations of their working principles, LLMs can never have the ability of true correct reasoning.
