A. Startari
Simulated neutrality in generative models produces tangible harms (ranging from erroneous treatments in clinical reports to rulings with no legal basis) by projecting impartiality without evidence. This study explains how Large Language Models (LLMs) and logic-based systems achieve simulated neutrality through form, not meaning: passive voice, abstract nouns, and suppressed agents mask responsibility while asserting authority.
A balanced corpus of 1 000 model outputs was analysed: 600 medical texts from PubMed (2019-2024) and 400 legal summaries from Westlaw (2020-2024). Standard syntactic parsing tools identified structures linked to authority simulation. Examples: a 2022 oncology note states “Treatment is advised” with no cited trial; a 2021 immigration decision reads “It was determined” without citing precedent.
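To illustrate the kind of surface pattern such parsing targets, the sketch below (not the study's actual pipeline) flags agentless passive clauses with spaCy; the model choice and the function name are assumptions for illustration only.

```python
# Minimal sketch (not the study's pipeline): flag passive clauses with no
# "by"-agent, one surface pattern linked to authority simulation.
# Assumes spaCy and the en_core_web_sm model are installed.
import spacy

nlp = spacy.load("en_core_web_sm")

def agentless_passives(text: str) -> list[str]:
    """Return sentences containing a passive verb with no expressed agent."""
    flagged = []
    for sent in nlp(text).sents:
        for token in sent:
            # 'nsubjpass' marks a passive subject; an 'agent' child of the verb
            # would introduce the demoted agent ("by the oncologist").
            if token.dep_ == "nsubjpass":
                if not any(child.dep_ == "agent" for child in token.head.children):
                    flagged.append(sent.text.strip())
                break
    return flagged

print(agentless_passives("Treatment is advised. The oncologist advised treatment."))
# -> ['Treatment is advised.']
```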
Two audit metrics are introduced: agency score (share of clauses naming an agent) and reference score (proportion of authoritative claims with verifiable sources). Outputs scoring below 0.30 on either metric are labelled high-risk; 64 % of medical and 57 % of legal texts met this condition. The framework runs in <0.1 s per 500-token output on a standard CPU, enabling real-time deployment.
Quantifying this lack of syntactic clarity offers a practical layer of oversight for safety-critical applications.
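Both metrics reduce to simple ratios once clauses and claims have been extracted. The sketch below assumes pre-computed counts (a hypothetical interface, not the paper's implementation) and applies the 0.30 high-risk threshold reported above.

```python
# Hedged sketch of the two audit metrics; clause segmentation and citation
# detection are assumed to happen upstream. Only the threshold (0.30) and the
# metric definitions come from the abstract.
from dataclasses import dataclass

HIGH_RISK_THRESHOLD = 0.30

@dataclass
class AuditResult:
    agency_score: float      # share of clauses that name an agent
    reference_score: float   # share of authoritative claims with a verifiable source
    high_risk: bool

def audit(clauses_with_agent: int, total_clauses: int,
          cited_claims: int, authoritative_claims: int) -> AuditResult:
    """Compute both scores from pre-extracted counts (hypothetical interface)."""
    agency = clauses_with_agent / total_clauses if total_clauses else 0.0
    reference = cited_claims / authoritative_claims if authoritative_claims else 0.0
    return AuditResult(
        agency_score=agency,
        reference_score=reference,
        high_risk=agency < HIGH_RISK_THRESHOLD or reference < HIGH_RISK_THRESHOLD,
    )

# Example: 2 of 10 clauses name an agent, 1 of 6 authoritative claims is cited.
print(audit(2, 10, 1, 6))
# AuditResult(agency_score=0.2, reference_score=0.1666..., high_risk=True)
```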
This work is also published on Figshare (DOI: https://doi.org/10.6084/m9.figshare.29390885) and on SSRN (in process).
Keywords:
Published on 01/01/2025
Licence: CC BY-NC-SA