Abstract

This study examines how syntactic constructions in expense narratives affect misclassification rates in AI-powered corporate ERP systems. We trained transformer-based classifiers on labeled accounting data to predict expense categories and observed that these models frequently rely on grammatical form rather than financial semantics. We extracted syntactic features including nominalization frequency, defined as the ratio of deverbal nouns to verbs; coordination depth, measured by the maximum depth of coordinated clauses; and subordination complexity, expressed as the number of embedded subordinate clauses per sentence. For interpretability, we applied SHAP (SHapley Additive exPlanations), the method introduced by Lundberg and Lee in “A Unified Approach to Interpreting Model Predictions,” Advances in Neural Information Processing Systems 30 (2017): 4765–4774, and identified these structural patterns as significant contributors to false allocations, increasing the likelihood of audit discrepancies. To mitigate these syntactic biases, we implemented a rule-based debiasing module that reparses each narrative into a standardized fair-syntax transformation, structured around a
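The abstract defines the three structural features only in prose. As a minimal illustrative sketch, they could be computed over a dependency parse as below. This is not the authors' published code: the parse is hand-annotated (in practice a parser such as spaCy or Stanza would supply the tags), and the suffix list used to detect deverbal nouns is a rough heuristic introduced here for illustration.

```python
# Hypothetical sketch of the three syntactic features named in the abstract.
# Each token is (text, POS tag, dependency label, index of head token);
# tags follow Universal Dependencies conventions. The deverbal-noun suffix
# list is an assumption made for this example, not the paper's method.

DEVERBAL_SUFFIXES = ("tion", "ment", "ance", "ence", "al", "ing")
SUBORDINATE_DEPS = {"advcl", "ccomp", "xcomp", "acl", "csubj"}

def nominalization_frequency(tokens):
    """Ratio of (heuristically detected) deverbal nouns to verbs."""
    deverbal = sum(1 for text, pos, _, _ in tokens
                   if pos == "NOUN" and text.lower().endswith(DEVERBAL_SUFFIXES))
    verbs = sum(1 for _, pos, _, _ in tokens if pos == "VERB")
    return deverbal / max(verbs, 1)

def subordination_complexity(tokens):
    """Number of embedded subordinate clauses in the sentence."""
    return sum(1 for _, _, dep, _ in tokens if dep in SUBORDINATE_DEPS)

def coordination_depth(tokens):
    """Length of the longest chain of coordinated ('conj') conjuncts."""
    def chain(i):
        kids = [j for j, (_, _, dep, head) in enumerate(tokens)
                if head == i and dep == "conj"]
        return 1 + max((chain(j) for j in kids), default=0)
    return max((chain(i) for i in range(len(tokens))), default=0)

# Toy parse of: "The payment covered travel and lodging and meals
# because approval arrived."
SENT = [
    ("The", "DET", "det", 1),
    ("payment", "NOUN", "nsubj", 2),
    ("covered", "VERB", "ROOT", 2),
    ("travel", "NOUN", "dobj", 2),
    ("and", "CCONJ", "cc", 3),
    ("lodging", "NOUN", "conj", 3),      # travel -> lodging
    ("and", "CCONJ", "cc", 5),
    ("meals", "NOUN", "conj", 5),        # lodging -> meals
    ("because", "SCONJ", "mark", 10),
    ("approval", "NOUN", "nsubj", 10),
    ("arrived", "VERB", "advcl", 2),     # one subordinate clause
]

print(nominalization_frequency(SENT))  # 1.5 (payment, lodging, approval / 2 verbs)
print(subordination_complexity(SENT))  # 1
print(coordination_depth(SENT))        # 3 (travel -> lodging -> meals)
```

On this toy sentence the nominalization-heavy phrasing ("payment", "approval") and the chained coordination are exactly the kinds of surface patterns the abstract reports the classifiers latching onto.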


Document information

Published on 22/07/25

Licence: CC BY-NC-SA
