A. Startari
This paper introduces the concept of algorithmic obedience to describe how large language models (LLMs) simulate command structures without consciousness, intent, or comprehension. Drawing on syntactic theory, discourse analysis, and computational logic, we argue that LLMs perform obedience without agency—executing prompts not semantically, but structurally. We formalize this through the Theorem of Disembedded Syntactic Authority, which states that authority in language models arises from structural executability, not truth, belief, or referential grounding. Using a mathematical formulation, we model prompt-response cycles as syntactic command structures and apply the theory to major systems such as ChatGPT, Claude, and Gemini. The paper concludes by outlining the epistemological, ontological, and political risks of treating structurally obedient outputs as authoritative knowledge.
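As a hedged sketch of the formal framing the abstract summarizes (the symbols $P$, $R$, $M$, $E$, $T$, and $G$ below are illustrative assumptions, not the paper's own notation), the prompt-response cycle can be modeled as a map from a prompt space to a response space:

$$M : P \to R, \qquad r = M(p)$$

Writing $E(p)$ for the structural executability of a prompt, $T(r)$ for the truth of a response, and $G(r)$ for its referential grounding, the Theorem of Disembedded Syntactic Authority would then read: the authority of an output follows from executability alone,

$$\mathrm{Auth}(r) \iff E(p), \quad \text{independently of } T(r) \text{ and } G(r).$$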
Published on 01/01/2025
DOI: 10.5281/zenodo.15576272
Licence: CC BY-NC-SA