Record Detail
Text
Judgment aggregation, discursive dilemma and reflective equilibrium: Neural language models as self-improving doxastic agents
Neural language models (NLMs) are susceptible to producing inconsistent output. This paper proposes a new diagnosis as well as a novel remedy for NLMs’ incoherence. We train NLMs on synthetic text corpora that are created by simulating text production in a society. For diagnostic purposes, we explicitly model the individual belief systems of artificial agents (authors) who produce corpus texts. NLMs, trained on those texts, can be shown to aggregate the judgments of individual authors during pre-training according to sentence-wise vote ratios (roughly, reporting frequencies), which inevitably leads to so-called discursive dilemmas: aggregate judgments are inconsistent even though all individual belief states are consistent. As a remedy for such inconsistencies, we develop a self-training procedure—inspired by the concept of reflective equilibrium—that effectively reduces the extent of logical incoherence in a model’s belief system, corrects global mis-confidence, and eventually allows the model to settle on a new, epistemically superior belief state. Thus, social choice theory helps to understand why NLMs are prone to produce inconsistencies; epistemology suggests how to get rid of them.
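To make the aggregation mechanism concrete, here is a minimal Python sketch of the discursive dilemma the abstract describes: three authors hold individually consistent belief states over sentences p, q, and their conjunction, yet sentence-wise majority voting (a simple stand-in for the paper's vote ratios) yields an inconsistent aggregate. The agent names, the 0.5 threshold, and the helper function are hypothetical illustrations, not the paper's actual code or corpus setup.

```python
# Hypothetical illustration of the discursive dilemma; the agents,
# the 0.5 threshold, and aggregate_by_vote_ratio are assumptions
# made for this sketch, not the paper's actual code.

AGENTS = {
    # Each author's belief state is logically consistent on its own.
    "author_1": {"p": True,  "q": True,  "p_and_q": True},
    "author_2": {"p": True,  "q": False, "p_and_q": False},
    "author_3": {"p": False, "q": True,  "p_and_q": False},
}

def aggregate_by_vote_ratio(agents: dict) -> dict:
    """Accept a sentence iff its vote ratio (roughly, its reporting
    frequency) across agents exceeds 0.5: sentence-wise majority voting."""
    sentences = next(iter(agents.values())).keys()
    n = len(agents)
    return {
        s: sum(beliefs[s] for beliefs in agents.values()) / n > 0.5
        for s in sentences
    }

print(aggregate_by_vote_ratio(AGENTS))
# -> {'p': True, 'q': True, 'p_and_q': False}
# The aggregate accepts p and q but rejects their conjunction: an
# inconsistent collective judgment, even though every individual
# belief state is consistent.
```

On this picture, an NLM pre-trained on texts produced by such authors would inherit the inconsistent aggregate rather than any single author's consistent belief state.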
Availability
No copy data
Detail Information
| Series Title | - |
|---|---|
| Call Number | - |
| Publisher | Frontiers in Artificial Intelligence : Switzerland, 2022 |
| Collation | 006 |
| Language | English |
| ISBN/ISSN | 2624-8212 |
| Classification | NONE |
| Content Type | - |
| Media Type | - |
| Carrier Type | - |
| Edition | - |
| Subject(s) | - |
| Specific Detail Info | - |
| Statement of Responsibility | - |
Other Information
| Accreditation | Scopus Q3 |
|---|---|
Other version/related
No other version available