Bridging Philosophical Foundations and Computational Realities: Semantic Under-Specification from Frege to Large Language Models
https://doi.org/10.55966/assaj.2025.4.1.0117
Abstract
Semantic underspecification occurs when linguistic expressions carry only partial meaning and require context for full interpretation. It poses key challenges across philosophy, cognitive science, and NLP. This review identifies five developmental stages: (1) Classical theories by Frege, Russell, and Davidson established truth-conditional frameworks but encountered difficulties with indexical and belief contexts because they assumed fully specified meanings; (2) Formal models such as QLF, MRS, and Hole Semantics introduced computational underspecification to handle structural ambiguities such as quantifier scope; (3) Cognitive studies show that humans use underspecification strategically for efficiency, relying on pragmatic inference and semantic memory; (4) Hybrid neuro-symbolic models such as UMR and Glue Semantics combined structural ambiguity resolution with neural inference but lacked explicit uncertainty modeling; (5) Modern NLP research highlights remaining gaps: LLMs can detect underspecification but often overcommit to deterministic interpretations, and multimodal systems do not effectively exploit context. Cross-linguistic entropy models suggest that grammatical underspecification serves as a strategy for cognitive efficiency. To bridge human semantic flexibility (enabled by incremental processing and pragmatic co-construction) with computational systems, we propose integrated neuro-symbolic architectures incorporating explicit uncertainty modeling, multimodal grounding, and entropy-aware design. This approach paves the way for AI to achieve human-like language understanding.
Keywords: Philosophical Foundations, Computational Realities, Semantic, Frege, Large Language Models