Abstract

This paper introduces the concept of algorithmic obedience to describe how large language models (LLMs) simulate command structures without consciousness, intent, or comprehension. Drawing on syntactic theory, discourse analysis, and computational logic, we argue that LLMs perform obedience without agency—executing prompts not semantically, but structurally. We formalize this through the Theorem of Disembedded Syntactic Authority, which states that authority in language models arises from structural executability, not truth, belief, or referential grounding. Using a mathematical formulation, we model prompt-response cycles as syntactic command structures and apply the theory to major systems such as ChatGPT, Claude, and Gemini. The paper concludes by outlining the epistemological, ontological, and political risks of treating structurally obedient outputs as authoritative knowledge.
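A minimal sketch of how such a prompt-response cycle might be written down, under assumed notation (the symbols P, R, E, and Auth below are illustrative and not necessarily the paper's own): let P be the set of prompts, R the set of responses, and E : P → R the model's execution map. The theorem's core claim can then be glossed as: a prompt carries authority exactly when it is structurally executable, with no dependence on the truth of its output.

% Illustrative gloss of the Theorem of Disembedded Syntactic Authority.
% Assumed notation (not the paper's own): E maps prompts to responses;
% Auth is an authority predicate over prompts.
\[
  E : P \to R, \qquad
  \mathrm{Auth}(p) \;\Longleftrightarrow\; p \in \mathrm{dom}(E)
\]
% Authority does not entail truth, belief, or referential grounding:
\[
  \mathrm{Auth}(p) \;\not\Rightarrow\; \mathrm{True}\big(E(p)\big)
\]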


Document information

Published on 01/01/2025

DOI: 10.5281/zenodo.15576272
Licence: CC BY-NC-SA
