<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Startari_2025e</id>
		<title>Startari 2025e - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Startari_2025e"/>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Startari_2025e&amp;action=history"/>
		<updated>2026-04-16T13:49:37Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.27.0-wmf.10</generator>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Startari_2025e&amp;diff=322615&amp;oldid=prev</id>
		<title>Agustinvstartari: Agustinvstartari moved page Draft Startari 558257447 to Startari 2025e</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Startari_2025e&amp;diff=322615&amp;oldid=prev"/>
				<updated>2025-07-29T12:15:49Z</updated>
		
		<summary type="html">&lt;p&gt;Agustinvstartari moved page &lt;a href=&quot;/public/Draft_Startari_558257447&quot; class=&quot;mw-redirect&quot; title=&quot;Draft Startari 558257447&quot;&gt;Draft Startari 558257447&lt;/a&gt; to &lt;a href=&quot;/public/Startari_2025e&quot; title=&quot;Startari 2025e&quot;&gt;Startari 2025e&lt;/a&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 12:15, 29 July 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan='2' style='text-align: center;' lang='en'&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Agustinvstartari</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Startari_2025e&amp;diff=322614&amp;oldid=prev</id>
		<title>Agustinvstartari: Created page with &quot; == Abstract ==  &lt;p&gt;Generative language models increasingly produce texts that simulate authority without a verifiable author or institutional grounding. This paper introduces...&quot;</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Startari_2025e&amp;diff=322614&amp;oldid=prev"/>
				<updated>2025-07-29T12:15:45Z</updated>
		
		<summary type="html">&lt;p&gt;Created page with &amp;quot; == Abstract ==  &amp;lt;p&amp;gt;Generative language models increasingly produce texts that simulate authority without a verifiable author or institutional grounding. This paper introduces...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Generative language models increasingly produce texts that simulate authority without a verifiable author or institutional grounding. This paper introduces synthetic ethos: the appearance of credibility constructed by algorithms trained to replicate human-like discourse without any connection to expertise, accountability, or source traceability. Such simulations raise critical risks in high-stakes domains including healthcare, law, and education. We analyze 1,500 AI-generated texts produced by large-scale models such as GPT-4, collected from public datasets and benchmark repositories. Using discourse analysis and pattern-based structural classification, we identify recurring linguistic features, such as depersonalized tone, adaptive register, and unreferenced assertions, that collectively produce the illusion of a credible voice. In healthcare, for instance, generative models produce diagnostic language without citing medical sources, putting patients at risk of being misled. In legal contexts, generated recommendations mimic normative authority while lacking any basis in legislation or case law. In education, synthetic essays simulate scholarly argumentation without verifiable references. Our findings demonstrate that synthetic ethos is not an accidental artifact but an engineered outcome of training objectives aligned with persuasive fluency. We argue that detecting such algorithmic credibility is essential for ethical and epistemically responsible AI deployment. To this end, we propose technical standards for evaluating source traceability and discourse consistency in generative outputs. These metrics can inform regulatory frameworks in AI governance, enabling oversight mechanisms that protect users from misleading forms of simulated authority and mitigate the long-term erosion of public trust in institutional knowledge.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Document ==&lt;br /&gt;
&amp;lt;pdf&amp;gt;Media:Draft_Startari_558257447-1465-document.pdf&amp;lt;/pdf&amp;gt;&lt;/div&gt;</summary>
		<author><name>Agustinvstartari</name></author>	</entry>

	</feed>