Abstract

Generative language models increasingly produce texts that simulate authority without a verifiable author or institutional grounding. This paper introduces synthetic ethos: the appearance of credibility constructed by algorithms trained to replicate human-like discourse without any connection to expertise, accountability, or source traceability. Such simulations raise critical risks in high-stakes domains including healthcare, law, and education. We analyze 1,500 AI-generated texts produced by large-scale models such as GPT-4, collected from public datasets and benchmark repositories. Using discourse analysis and pattern-based structural classification, we identify recurring linguistic features, such as depersonalized tone, adaptive register, and unreferenced assertions, that collectively produce the illusion of a credible voice. In healthcare, for instance, generative models produce diagnostic language without citing medical sources, risking misleading patients. In legal contexts, generated recommendations mimic normative authority while lacking any basis in legislation or case law. In education, synthetic essays simulate scholarly argumentation without verifiable references. Our findings demonstrate that synthetic ethos is not an accidental artifact but an engineered outcome of training objectives aligned with persuasive fluency. We argue that detecting such algorithmic credibility is essential for ethical and epistemically responsible AI deployment. To this end, we propose technical standards for evaluating source traceability and discourse consistency in generative outputs. These metrics can inform regulatory frameworks in AI governance, enabling oversight mechanisms that protect users from misleading forms of simulated authority and mitigate the long-term erosion of public trust in institutional knowledge.
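To make the proposed source-traceability evaluation concrete, the sketch below illustrates one way a pattern-based classifier might flag unreferenced assertions, one of the linguistic features the abstract names. This is a minimal sketch under our own assumptions: the marker lists, citation regexes, and the `unreferenced_assertion_rate` function are illustrative placeholders, not the paper's actual feature set or implementation.

```python
import re

# Heuristic markers of assertive, authority-simulating language.
# Illustrative assumptions only; the paper's feature set may differ.
ASSERTIVE_MARKERS = [
    r"\bstudies show\b", r"\bresearch indicates\b", r"\bit is well known\b",
    r"\bexperts agree\b", r"\bevidence suggests\b", r"\bclearly\b",
]

# Patterns treated here as evidence of a traceable source reference.
CITATION_PATTERNS = [
    r"\[\d+\]",                        # numeric citation, e.g. [12]
    r"\(\w+(?: et al\.)?,? \d{4}\)",   # author-year, e.g. (Smith et al., 2021)
    r"doi\.org/\S+",                   # DOI link
    r"https?://\S+",                   # plain URL
]

def unreferenced_assertion_rate(text: str) -> float:
    """Fraction of assertive sentences that carry no citation-like pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    assertive = [
        s for s in sentences
        if any(re.search(p, s, re.IGNORECASE) for p in ASSERTIVE_MARKERS)
    ]
    if not assertive:
        return 0.0
    unreferenced = [
        s for s in assertive
        if not any(re.search(p, s) for p in CITATION_PATTERNS)
    ]
    return len(unreferenced) / len(assertive)

if __name__ == "__main__":
    sample = (
        "Studies show that this treatment is effective. "
        "Evidence suggests improved outcomes (Lee et al., 2020)."
    )
    # First sentence is assertive and uncited, second is cited: rate = 0.50.
    print(f"Unreferenced assertion rate: {unreferenced_assertion_rate(sample):.2f}")
```

Surface heuristics like these would, in practice, need to be combined with the discourse-level analysis the paper describes; the score above is only one candidate signal for a traceability metric.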


Document information

Published on 01/01/2025

DOI: 10.2139/ssrn.5313317
Licence: CC BY-NC-SA license
