
Verificationism

According to verificationism, methods to measure temperature (such as this thermometer) constitute the very meaning of ascriptions of temperature.

Verificationism, also known as the verification principle or the verifiability criterion of meaning, is a doctrine in philosophy and the philosophy of language which holds that a declarative sentence is cognitively meaningful only if it is either analytic or tautological (true or false in virtue of its logical form and definitions) or at least in principle verifiable by experience.[1][2] On this view, many traditional statements of metaphysics, theology, and some of ethics and aesthetics are said to lack truth value or factual content, even though they may still function as expressions of emotions or attitudes rather than as genuine assertions.[1][3] Verificationism was typically formulated as an empiricist criterion of cognitive significance: a proposed test for distinguishing meaningful, truth-apt sentences from "nonsense".[2][4]

As a self-conscious movement, verificationism was a central thesis of logical positivism (or logical empiricism), developed in the 1920s and 1930s by members of the Vienna Circle and their allies in early analytic philosophy.[4] Drawing on earlier empiricism and positivism (especially David Hume, Auguste Comte and Ernst Mach), on pragmatism (notably C. S. Peirce and William James), and on the logical and semantic innovations of Gottlob Frege and the early Wittgenstein, these philosophers sought a "scientific" conception of philosophy in which meaningful discourse would either consist in empirical claims ultimately testable by observation or in analytic truths of logic and mathematics.[3][4] The verification principle was intended to explain why many traditional metaphysical disputes seemed irresolvable, to demarcate science from pseudo-science and speculative metaphysics, and to vindicate the special status of the natural sciences by taking empirical testability as the paradigm of serious inquiry.[2][4]

From the outset, however, attempts to state a precise verificationist criterion of significance faced technical and conceptual difficulties. Early "strong" versions, which required conclusive derivability from a finite set of observation sentences, excluded universal generalizations such as scientific laws, while purely falsificationist variants failed to accommodate existential and mixed-quantifier claims.[2] In response, logical empiricists developed a succession of more liberal proposals: distinctions between "strong" and "weak" or between practical and in-principle verifiability (especially in A. J. Ayer's Language, Truth and Logic), confirmation-based and translatability-based criteria for Carnap's "empiricist language", and increasingly sophisticated treatments of observation sentences and protocol sentences in the debate over the empirical basis of knowledge.[2][4][3] Work by Otto Neurath and others also pushed verificationism towards an explicitly physicalist and fallibilist picture of science, in which even basic observation reports are theory-laden and revisable rather than infallible data.

By the 1950s and 1960s, critics argued that no non-trivial, once-and-for-all verification criterion could be formulated that both salvaged accepted scientific practice and ruled out the kinds of sentences the positivists meant to exclude. Carl Gustav Hempel traced a series of proposed empiricist criteria and concluded that each was either too restrictive, excluding central parts of science, or too permissive, allowing paradigmatically "nonsensical" expressions to qualify as meaningful.[2] Willard Van Orman Quine's "Two Dogmas of Empiricism" challenged the analytic–synthetic distinction on which verificationism relied, while Karl Popper argued that the principle is itself unverifiable and that universal scientific hypotheses are never conclusively confirmable but are instead characterized by their falsifiability.[5][6][7] Further worries about the theory-ladenness of observation and about large-scale shifts in scientific "paradigms", especially in the work of Norwood Russell Hanson and Thomas Kuhn, undermined hopes for a stable, observation-based foundation for meaning and knowledge.[5]

As a result, classical verificationism is now widely regarded as untenable as a strict criterion of meaning, and its abandonment is often cited as a major factor in the decline of logical positivism as a distinct movement.[8][3] Nevertheless, the verificationist impulse—to connect what a sentence means with the kinds of evidence that would count for or against it—continues to influence later post-positivist philosophy of science, various forms of semantic anti-realism and empiricist theories of truth and meaning, and the work of philosophers such as Bas van Fraassen, Michael Dummett and Crispin Wright who have developed modified verification- or assertion-based constraints on meaningful discourse.[3][4]

Introduction


Verificationism, or the verification principle, is the name given to a family of views that tie the meaning of a sentence closely to the experiences that would count for or against it. In its classic logical positivist form, a declarative sentence was said to be cognitively meaningful only if it was either analytic (true or false in virtue of its logical form and definitions) or at least in principle testable by observation; traditional metaphysical, theological and many ethical or aesthetic sentences were dismissed as lacking factual content, even if they might still express attitudes or emotions.[2][4][3][9]

For members of the Vienna Circle and allied logical empiricists, this proposal was attractive for several reasons. It seemed to offer a precise way of separating genuine questions from pseudo-questions, to explain why many long-standing disputes in metaphysics appeared irresolvable, and to vindicate the special status of the rapidly advancing natural sciences by taking scientific testability as the model for all serious inquiry.[4][3] The verification principle thus functioned as a kind of intellectual hygiene: sentences that could not, even in principle, be checked against experience were to be diagnosed as cognitively empty rather than as mysteriously profound.

Martin Heidegger, whose metaphysical sentences were targeted by the logical positivists as insignificant.

A much-discussed illustration of this attitude is Rudolf Carnap's critique of Martin Heidegger. In his essay "Überwindung der Metaphysik durch logische Analyse der Sprache" ("Overcoming Metaphysics through the Logical Analysis of Language") Carnap singled out Heidegger's claim that "Nothingness nothings" (German: das Nichts nichtet) from the 1929 lecture Was ist Metaphysik? (What Is Metaphysics?) as a paradigm of a metaphysical pseudo-sentence.[10][11] Although grammatically well-formed, Carnap argued, such a sentence yields no testable consequences and cannot, even in principle, be confirmed or disconfirmed by experience; on a verificationist view it therefore fails to state any fact at all and belongs, at best, to poetry or the expression of mood rather than to cognitive discourse.

For its supporters, verificationism was therefore part of a liberating project. By asking, for any disputed sentence, what observations would count for or against it, verificationists hoped to dissolve many traditional philosophical problems as products of linguistic confusion, while preserving, and clarifying, the empirical content of scientific theories.[2][3][4] The programme appeared to combine respect for the successes of modern science, the new tools of formal logic inspired by Frege and Wittgenstein, and an appealingly deflationary attitude towards grand metaphysical systems.

Subsequent work showed that no simple, once-and-for-all verification criterion could be formulated without excluding important parts of ordinary and scientific discourse, and by the 1960s classical verificationism had largely been abandoned as a general test for meaningfulness.[2][3][4] Nevertheless, the verificationist impulse—to connect what a sentence means with the kinds of evidence that would speak for or against it—continues to influence contemporary debates in the philosophy of language and science;[10][11] see § Legacy.

History


Origins


Nineteenth- and early twentieth-century empiricism already contained many of the ingredients of verificationism. Pragmatists such as C. S. Peirce and William James linked the meaning of a concept to its practical and experiential consequences, while the conventionalist Pierre Duhem treated physical theories as instruments for organizing observations rather than as literal descriptions of unobservable reality.[3][12] Later historians have therefore tended to treat verificationism as a sophisticated heir to this broader tradition of empiricist and pragmatist thought.[3] According to Gilbert Ryle, James's pragmatism was "one minor source of the Principle of Verifiability".[13]

At the same time, classical empiricism, especially the work of David Hume, provided exemplars for the idea that meaningful discourse must be tied to possible experience, even if Hume himself did not draw the later positivists' radical conclusions about metaphysics.[14] The positivism of Auguste Comte and Ernst Mach reinforced this orientation by insisting that science should confine itself to describing regularities among observable phenomena, a stance that influenced the early logical empiricists' suspicion of unobservable entities and their admiration for the empirical success of theories such as Einstein's general theory of relativity.[15]

Ludwig Wittgenstein, whose Tractatus Logico-Philosophicus was influential on the logical positivists and on the verificationist doctrine.

The more explicitly semantic side of verificationism drew on developments in analytic philosophy. Ludwig Wittgenstein's Tractatus (1921) was read in the 1920s as offering a picture theory of meaning: a proposition is meaningful only insofar as it can represent a possible state of affairs in the world.[6] Members of what would become the Vienna Circle took over this idea in an explicitly empiricist form, treating the "state of affairs" relevant to meaning as something that must in principle be checked in experience.[16] Building on earlier work by Gottlob Frege and the emerging analytic–synthetic distinction, they reconceived logical and mathematical truths as true in virtue of linguistic or inferential rules alone, so that their apparent exception to verificationism could be explained by classifying them as tautologies rather than empirical claims.[17]

By the mid-1920s these strands converged in the programme of logical positivism. Around Moritz Schlick in Vienna, philosophers and scientists such as Rudolf Carnap, Hans Hahn, Philipp Frank and Otto Neurath sought to develop a "scientific philosophy" in which philosophical statements would be as clear, testable and intersubjective as those of the empirical sciences.[16] The "verifiability principle" emerged in this context as a proposed criterion of cognitive significance, intended to underwrite the movement's anti-metaphysical stance and its aspiration to unify the special sciences within a single, naturalistic framework of knowledge.[1][3][4]

Revisions


From early on, members of the Vienna Circle realised that a simple requirement of conclusive verification was too restrictive. Universal generalisations such as scientific laws cannot be derived from any finite set of observation reports, so a strict reading of the principle would render central parts of empirical science meaningless.[18] This difficulty, together with worries about how to treat dispositional and theoretical terms, set off a long series of attempts to refine the criterion of significance; the main stages of that story are reconstructed in § Criterion of significance.

Moritz Schlick, the central figure of the Vienna Circle, the group whose logical positivism was the context in which the verificationist criterion was first systematized.

Within the Circle, these debates were often framed as a contrast between "conservative" and "liberal" wings. Moritz Schlick and Friedrich Waismann defended a comparatively strict verificationism, exploring ways to reinterpret universal statements as rule-like or tautological so that they would not conflict with the original principle.[19] By contrast, Carnap, Neurath, Hahn and Frank advocated what they themselves called a "liberalization of empiricism", arguing that the link between theoretical sentences and observation could be looser and more probabilistic.[18] Neurath in particular proposed a resolutely physicalist and coherentist picture of scientific language, on which even basic "protocol sentences" are revisable parts of a holistic web of beliefs rather than an infallible experiential foundation; discussions of this topic are reviewed in § Choice of basis.[16][20]

Carnap's work in the 1930s and 1940s supplied many of the most influential revisions. In Logical Syntax of Language (1934) he developed a formal notion of analyticity intended to secure mathematics and logic as consequences of linguistic rules even in the face of Gödel's incompleteness theorems, thereby preserving their status as non-empirical truths compatible with verificationism.[21] In his later papers Testability and Meaning (1936–37) and subsequent work on theoretical terms, Carnap abandoned strict verification in favour of various confirmation-based and translatability-based criteria: a sentence would be cognitively significant if it could be connected, by chains of definition, reduction sentences or inductive support, to an agreed "observation language".[22][23] These proposals aimed to relax the principle enough to accommodate universal laws and theoretical entities while retaining an empiricist conception of meaning; the technical details and later criticisms are surveyed in § Criterion of significance.

Outside the German-speaking world, verificationism reached a wider audience above all through A. J. Ayer's Language, Truth and Logic (1936). Drawing on a period of study in Vienna, Ayer presented the verification principle as the central thesis of logical positivism and formulated influential distinctions between "strong" and "weak" verification and between practical and in-principle verifiability.[9][24] His book effectively became a manifesto for the movement in the English-speaking world, even though its specific formulation of the criterion soon came under pressure from critics and from later work by Carnap and others.[3][4][25]

By the 1950s, attempts to state a precise verificationist or confirmational criterion of significance were increasingly seen as problematic. Carl Gustav Hempel traced the succession of proposed tests—ranging from practical and in-principle verifiability through falsifiability to more complex requirements involving empirical import or translatability—and argued that each either excluded large swathes of accepted science or failed to rule out sentences the positivists regarded as "nonsense".[26][2][27] Hempel's conclusion that empirical significance comes in degrees, and depends on the theoretical role of a sentence as well as its logical relations to observation, is often taken, together with subsequent critiques by Willard Van Orman Quine, Karl Popper and Thomas Kuhn, to mark the collapse of the original verificationist project; more on this in § Criticisms and § Legacy.[2][3][4][5]

Theory


Criterion of significance


Logical positivists typically understood the verification principle as an empiricist criterion of cognitive significance: a declarative sentence was to count as factually meaningful if and only if it was either analytic (or contradictory) or at least in principle testable by experience.[2][4][3] Within this programme, the central problem became to state a precise criterion that would demarcate cognitively significant sentences from nonsense while still accommodating the needs of empirical science.

Early formulations equated empirical significance with the possibility of strong or conclusive verification. Roughly, a non-analytic sentence was said to be meaningful only if it could be logically deduced from a finite set of observation sentences reporting the presence or absence of observable properties of concrete objects.[26][2] Hempel reconstructs a sequence of increasingly liberalized criteria, beginning with practical verifiability "within one's lifetime", moving to verifiability in principle, then to conclusive falsifiability, and finally to a disjunctive requirement that a sentence be either conclusively verifiable or conclusively falsifiable by a finite set of observation sentences.[26][2] Each of these criteria was quickly found to be either too strong or too weak. Strong verification excludes universal generalizations over infinite domains—such as most scientific laws—since no finite set of confirming observations entails them, while a purely falsificationist criterion excludes existential claims and many mixed-quantifier statements (for example, "for every substance there is a solvent") from counting as meaningful.[2][27] Moreover, such criteria are not in general closed under negation, and they allow "nonsensical" expressions to be made significant by embedding them in larger sentences that satisfy the formal test (for instance, by disjoining them with an observationally testable claim).[26][2][4]
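The logical points behind these objections can be displayed schematically. The following is an illustrative reconstruction rather than a formula from the primary texts: S stands for a candidate sentence, O and O1, …, On for observation sentences, and N for a paradigmatically "nonsensical" sentence.

```latex
% Strong verifiability: S is significant iff it is entailed by some finite
% set of observation sentences.
\exists\, O_1, \dots, O_n :\; O_1 \wedge \dots \wedge O_n \vdash S

% Too strong: a universal law is never entailed by finitely many instances.
Pa_1 \wedge Qa_1 \wedge \dots \wedge Pa_n \wedge Qa_n \;\nvdash\; \forall x\,(Px \rightarrow Qx)

% Not closed under negation: \exists x\, Px is conclusively verifiable by a
% single instance, yet its negation \forall x\, \neg Px is not.

% The embedding loophole: since O \vdash N \vee O, the disjunction N \vee O
% passes the test no matter what N says.
```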

A. J. Ayer, one of the principal contributors to the development of the criterion of significance.

A. J. Ayer's Language, Truth and Logic (1936; 2nd ed. 1946) responded to these difficulties by weakening the requirement on empirical sentences. Ayer distinguished between strong and weak verification, and between practical and in-principle verifiability, allowing a statement to be significant so long as experience could in some way count for or against it, even if not conclusively.[9] He later reformulated the criterion in terms of empirical import: a non-analytic sentence has cognitive significance only if, together with some set of auxiliary premises, it entails an "experiential proposition" (observation sentence) that could not be derived from those auxiliaries alone, and he correspondingly distinguished statements that are directly and indirectly verifiable.[9][2] Critics quickly pointed out that, unless the class of admissible auxiliary hypotheses is restricted, this empirical-import test is trivial: for any sentence whatsoever, including paradigmatically metaphysical ones, one can construct a conditional auxiliary premise that makes it yield some observational consequence.[2][3] Ayer's second-edition attempt to impose a recursive restriction—allowing only analytic, directly verifiable, or already indirectly verifiable statements as auxiliaries—avoided outright triviality, but Church, Hempel and others argued that it still renders almost any sentence or its negation significant and continues to allow complex sentences that "smuggle in" meaningless components.[2][28]
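The triviality objection to the unrestricted empirical-import test has a simple logical form. The schematic rendering below is an illustration, not a quotation of Ayer's own formulation, with S a candidate sentence, A an auxiliary premise and O an observation sentence.

```latex
% Ayer's test: a non-analytic S is significant iff, for some auxiliary A,
% S and A together entail an observation sentence not entailed by A alone.
S \wedge A \vdash O \quad\text{and}\quad A \nvdash O

% Triviality: for any sentence N whatever, choose the auxiliary A := N \rightarrow O.
N \wedge (N \rightarrow O) \vdash O \quad\text{and}\quad (N \rightarrow O) \nvdash O
% So N, however "metaphysical", satisfies the unrestricted test.
```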

A different line of development, associated especially with Rudolf Carnap, replaced deductive connection to observation by a translatability requirement. In his work of the 1930s Carnap proposed that a sentence is cognitively meaningful iff it can be translated into a suitably regimented "empiricist language" whose non-logical vocabulary is restricted to observation predicates and to expressions definable from them by purely logical means.[22][4] Because many scientific terms are dispositional or theoretical (for example, "soluble", "magnetic", "gravitational field"), Carnap later introduced reduction sentences that relate such terms to observational conditions and responses, treating them as only partially defined outside their "test conditions".[22][23] Hempel and other critics objected that this strategy either leaves the relevant vocabulary undefined in many ordinary cases (when test conditions do not obtain) or, if multiple reduction sentences are used, makes substantive empirical generalizations follow analytically from the rules introducing the new terms.[2][29] More generally, the choice of a particular "empiricist language" as the reference language for translatability has been criticized as ad hoc unless it can be shown, on independent grounds, to capture exactly the verifiable sentences; without such a justification, the criterion threatens simply to build the positivists' anti-metaphysical verdict into the choice of language itself.[4][30][3]
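Both of Hempel's objections can be made concrete with the standard textbook example of "soluble". In the sketch below, which is illustrative rather than Carnap's own notation, Wx is "x is placed in water", Dx is "x dissolves" and Sx is "x is soluble".

```latex
% A reduction sentence: the test condition W fixes the meaning of S only
% conditionally.
\forall x\,\big(Wx \rightarrow (Sx \leftrightarrow Dx)\big)

% First objection: for an object never placed in water (\neg Wx), the
% reduction sentence leaves Sx undetermined.

% Second objection: add a second reduction sentence for the same term, with
% a different test W' and response D':
\forall x\,\big(W'x \rightarrow (Sx \leftrightarrow D'x)\big)
% Jointly the two "meaning postulates" entail the substantive empirical claim
\forall x\,\big((Wx \wedge Dx \wedge W'x) \rightarrow D'x\big)
% so factual content would follow from stipulations about meaning alone.
```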

By the mid-1960s Hempel concluded that none of the purely formal criteria then available—whether framed in terms of deducibility from observation sentences, falsifiability, empirical import or translatability into an observation language—could serve as a satisfactory, once-and-for-all test of cognitive significance.[2][27] He suggested instead that empirical significance comes in degrees and depends not only on logical relations to observation but also on the role a statement plays within a broader theoretical network.[2][4] Historians of verificationism generally take Hempel's diagnosis, together with subsequent critiques by Quine, Popper and Kuhn, to mark the collapse of the original positivist project of giving a precise empiricist criterion of significance, even though modified verificationist ideas continue to influence later work in the philosophy of language and science.[3][4][5]

Choice of basis


Verificationist accounts of meaning presuppose some class of basic sentences that provide the experiential input for testing more complex claims. Within logical empiricism these are often called observation sentences, understood (following Carl Gustav Hempel) as sentences that ascribe or deny an observable characteristic to one or more specifically named macroscopic objects; such sentences were taken to form the empirical basis for criteria of cognitive significance.[26][31]

Debate within the Vienna Circle quickly revealed that the notion of an empirical basis was itself contentious. In the so-called protocol sentence (Protokollsatz) debate, members disagreed over whether basic statements should be formulated in a phenomenalist or a physicalist idiom.[16] Phenomenalist proposals, associated with early Rudolf Carnap and especially Moritz Schlick, treated the basis as first-person, present-tense reports of immediate experience – Schlick's Konstatierungen (affirmations), such as "Here now red", which were supposed to be incorrigible and theory-free data of consciousness.[32][33] Such a basis promised epistemic certainty but sat uneasily with the verificationists' scientific ambitions, since private experiences are not straightforwardly shareable or checkable among different observers.

Otto Neurath, who led the turn from earlier Machian phenomenalist bases toward physicalist ones.

By contrast, Otto Neurath argued for a thoroughly physicalist basis. His protocol sentences describe publicly observable events in a third-person, physical language, typically including explicit reference to an observer, time and place (for example, "Otto's protocol at 3:17…").[34][35] Neurath rejected any class of sentences as absolutely certain: even protocol sentences are embedded in a holistic network of beliefs and remain revisable in light of further experience, a view he illustrated with the image of sailors rebuilding their ship at sea. The protocol-sentence debate thus pushed many logical empiricists towards an explicitly fallibilist conception of the empirical basis, abandoning the idea of an infallible foundation for verification.

Carnap's later work on Testability and Meaning made the conventional and pragmatic dimension of this "choice of basis" explicit. He argued that the rules of an empiricist language are not fixed by reality but chosen, within broad constraints, for their usefulness; different choices of basic vocabulary yield different, but equally legitimate, "frameworks".[22][36] Nevertheless, Carnap recommended that the primitive predicates of the "observation language" be drawn from intersubjectively observable thing-predicates ("red cube", "meter reading 3", and so on), since these offer a shared physicalist basis for testing and confirming hypotheses.[37] On this liberalized view, verificationist criteria of significance are always relative to a chosen empirical basis, typically a fallible but intersubjective class of observation sentences rather than an infallible foundation of private experience.[38]

Explication


Friends and critics of verificationism have long noted that a general "criterion of cognitive significance" is itself neither analytic nor empirically verifiable, and so appears to fall foul of its own requirement. Logical empiricists typically replied that the verification principle was not meant as a factual thesis about language, but as part of an explication of a vague pre-theoretic notion such as "cognitively meaningful sentence" or "intelligible assertion". Hempel, for example, describes the empiricist criterion as "a clarification and explication of the idea of a sentence which makes an intelligible assertion" and stresses that it is "a linguistic proposal" for which adequacy rather than truth or falsity is at issue.[26][2][27] In a similar spirit, A. J. Ayer later wrote that the verification principle in Language, Truth and Logic "is to be regarded, not as an empirical hypothesis, but as a definition", and Hans Reichenbach characterised the verifiability requirement as a stipulation governing the use of "meaning".[9][25][39]

Rudolf Carnap, whose notion of explication was used to dissolve the common allegation that the verificationist principle is self-refuting.

Rudolf Carnap systematised this stance in his more general methodology of explication. In Logical Foundations of Probability he defines explication as the process of replacing "an inexact prescientific concept" (the explicandum) by a new, exact concept (the explicatum) which must, among other things, be sufficiently similar to the explicandum, more precise, fruitful for the formulation of systematic theories, and as simple as possible.[40][41] Explications and the linguistic frameworks in which they are embedded are not themselves true or false; instead they are to be judged by these "conditions of adequacy". Within this framework, a verificationist criterion of significance becomes an explicatum for ordinary, somewhat indeterminate notions like "factual content" or "genuine assertion".

On such an explicative reading, verificationism forms part of a broader, revisionary approach to philosophical concepts rather than a metaphysical doctrine. Competing criteria of cognitive significance are proposals for regimenting scientific and everyday discourse so that logical relations to observation and to other sentences are made more explicit; they are evaluated by how well they capture intuitive judgements about meaningfulness, how precisely they can be stated, and how useful they are for organising scientific theories.[2][42][4] Historians and sympathetic "post-positivist" authors have therefore tended to interpret verificationism as a paradigm case of Carnapian explication or conceptual engineering, rather than as an attempt to discover a uniquely correct, antecedently fixed boundary between meaningful and meaningless sentences.[42][43]

Interpreting the verification principle as an explication also reframes certain traditional objections. If the principle is not itself a factual statement, it cannot straightforwardly be criticised as "self-refuting" on the grounds that it fails its own test of verifiability. The central questions then concern whether a given criterion of significance satisfies the Carnapian requirements of similarity, precision, fruitfulness and simplicity, and whether some rival explication might better serve the aims of empirical inquiry and philosophical clarification.[26][40][41]

Degrees of significance


Classical formulations of the verification principle typically treated cognitive significance as an all-or-nothing affair: a sentence either met the verifiability requirement or else was to be dismissed as nonsensical. At the same time, some verificationists distinguished different notions of "possibility" and acknowledged that, in an empirical sense, hypotheses might be more or less easily testable. In "Meaning and Verification", Moritz Schlick contrasts "logical possibility" with "empirical possibility"—understood as compatibility with the laws of nature—and observes that in the latter sense "we may be permitted to speak of degrees of possibility", though he denies that this gradation is relevant to the categorical question of meaning.[44][2]

In his later reassessment of empiricist criteria of significance, Hempel concluded that no simple, once-and-for-all formal test could sharply separate meaningful from meaningless sentences. Taking account of theoretical vocabulary, auxiliary assumptions and the holistic character of empirical testing, he argued that cognitive significance "is a matter of degree" depending not only on a sentence's logical relations to observation reports but also on its role within a broader theoretical network.[2][27] On this picture, observation statements and simple existential claims occupy one end of a continuum of empirical meaningfulness, highly theoretical hypotheses lie further away from direct observation, and some expressions—such as those of traditional metaphysics—may fail to achieve empirical significance at all.

Karl Popper's falsificationist alternative to verificationism introduces a related gradational idea. Popper rejected verifiability as a criterion of meaning, but insisted that there are "degrees of testability": some hypotheses expose themselves to potential refutation more boldly than others, so that "well-testable theories, hardly testable theories, and non-testable theories" can be distinguished.[6] Although Popper used this hierarchy as part of a demarcation criterion for science rather than for meaning, later commentators have noted that it parallels verificationist concerns with how tightly a sentence is tied to possible experience.[7][3]

Hannes Leitgeb, who proposes a verificationist criterion in which a sentence A is meaningful if, and only if, there is evidence B such that P(B∣A)≠P(B).

Subsequent discussions of empirical and cognitive significance often combine Hempel's and Popper's insights. Instead of seeking a single, absolute threshold of meaningfulness, philosophers have developed graded or comparative notions of significance that measure, for example, how much a sentence contributes to the observational consequences of a theory or how severely it can be subjected to empirical tests.[2][4] Recent work by Hannes Leitgeb proposes a probabilistic reconstruction of a "verifiability criterion" that classifies sentences as meaningful when they are confirmable or disconfirmable relative to appropriate probability measures, and allows different choices of linguistic and probabilistic parameters to yield different verdicts on meaningfulness.[45] Such approaches retain the verificationist impulse to connect meaning with possible evidence, while abandoning the original positivist ambition of a simple, binary criterion of cognitive significance.
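The relevance test mentioned in the caption above can be illustrated with a toy probability model. The sketch below is a minimal illustration of the idea that a sentence A is empirically significant when some evidence B satisfies P(B∣A) ≠ P(B); it is not Leitgeb's actual construction, and the atomic sentences, the uniform measure and all helper names are invented for the example.

```python
from itertools import product
from fractions import Fraction

# Toy probability space: a world assigns a truth value to each atom.
# Everything here is illustrative; Leitgeb's framework is far more general.
ATOMS = ["raven_observed", "black", "needle_at_3"]
WORLDS = [dict(zip(ATOMS, values))
          for values in product([True, False], repeat=len(ATOMS))]
P_WORLD = Fraction(1, len(WORLDS))  # uniform measure over the 8 worlds

def prob(sentence):
    """P(sentence), where `sentence` is a predicate on worlds."""
    return sum((P_WORLD for w in WORLDS if sentence(w)), Fraction(0))

def conditional(b, a):
    """P(b | a), assuming P(a) > 0."""
    return prob(lambda w: a(w) and b(w)) / prob(a)

def empirically_significant(a, evidence):
    """A counts as significant iff some evidence sentence B is
    probabilistically relevant to it: P(B | A) != P(B)."""
    if prob(a) == 0:
        return False
    return any(conditional(b, a) != prob(b) for b in evidence)

hypothesis = lambda w: w["raven_observed"] and w["black"]
evidence = [lambda w: w["raven_observed"], lambda w: w["needle_at_3"]]
print(empirically_significant(hypothesis, evidence))  # True: evidence shifts it

tautology = lambda w: True  # true in every world
print(empirically_significant(tautology, evidence))   # False: nothing shifts it
```

Under such a reconstruction, different choices of language and probability measure can yield different verdicts, so empirical significance becomes parameter-relative rather than absolute.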

Criticisms


Verificationism was subjected to sustained criticism from both friends and opponents of logical positivism, and by the late 1960s few philosophers regarded it as a tenable, exceptionless criterion of meaning.[2][3][4] Objections focussed on the alleged self-refuting character of the principle (on which see § Explication), its apparent mismatch with scientific practice, its dependence on controversial distinctions such as the analytic–synthetic divide, and worries about the holism and theory-ladenness of empirical testing.

Popper and falsificationism


Philosopher Karl Popper, a contemporary critic working in Vienna but not a member of the Vienna Circle, argued that the verifiability principle suffers from several fundamental defects.[46][6][7] First, if meaningful empirical sentences must be conclusively verifiable, then universal generalizations of the sort employed in scientific laws (for example, "all metals expand when heated") would be meaningless, since no finite set of observations can logically entail such universals. Second, purely existential claims such as "there is at least one unicorn" qualify as empirically meaningful under the verification principle, even though in practice it may be impossible to show them false. Third, the verification principle itself appears neither analytic nor empirically verifiable; taken as a factual claim, it therefore seems to count as meaningless by its own standard, rendering the doctrine self-defeating.[46][6]

On the basis of these concerns, Popper rejected verifiability as a criterion of meaning and proposed falsifiability instead as a criterion for the demarcation of scientific from non-scientific statements.[46][47][7] On his view, scientific theories are characteristically universal, risk-bearing conjectures that can never be verified but may be refuted by experience; what marks a hypothesis as scientific is that it rules out certain possible observations. Verificationism, Popper argued, misconstrues the logic of scientific method by tying meaningfulness to possibilities of confirmation rather than to the capacity for severe tests and potential refutation.

Quine, Duhem and the Duhem–Quine thesis


A different line of criticism targets the verificationist picture of individual sentences whose meanings are fixed by specific verification or falsification conditions. The French physicist–philosopher Pierre Duhem had already insisted that experiments in physics never test a single hypothesis in isolation, but only a whole "theoretical scaffolding" of assumptions, including auxiliary hypotheses about instruments, background theories and ceteris paribus clauses.[48][49] Because a recalcitrant observation can always be accommodated by adjusting some part of this network, Duhem concluded that there are no strictly "crucial experiments" in physics which decisively verify one hypothesis and falsify its rival.

W. V. O. Quine, whose "Two Dogmas of Empiricism" pointed out serious problems with verificationist criteria that relied on the notion of analyticity.

Willard Van Orman Quine generalised Duhem's insight from physics to all of our knowledge. In his essay "Two Dogmas of Empiricism" he challenged both the analytic–synthetic distinction and the idea that each meaningful synthetic sentence admits an individual "reduction" to immediate experience.[5] Instead, Quine portrayed our statements about the world as forming a "web of belief" that faces the "tribunal of experience" only as a corporate body; when predictions fail, any part of the web—including logical principles, mathematical assumptions or supposedly observational statements—can in principle be revised.[5] The resulting Duhem–Quine thesis or confirmation holism holds that empirical tests always involve a bundle of hypotheses rather than isolated sentences.[50] Critics argued that this holistic picture undermines the verificationist project of assigning meanings to sentences by correlating each one with a determinate set of verification or falsification conditions.
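The holist point admits a compact schematic statement; the rendering below is an illustrative reconstruction, with H the hypothesis under test, A1, …, An auxiliary assumptions, and O a predicted observation.

```latex
% A prediction follows from the hypothesis only together with auxiliaries:
H \wedge A_1 \wedge \dots \wedge A_n \vdash O

% A failed prediction refutes only the conjunction:
\neg O \vdash \neg\,(H \wedge A_1 \wedge \dots \wedge A_n)
% At least one conjunct is false, but logic alone does not single out H.
```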

Hempel and the fate of a strict criterion


Internal criticisms by logical empiricists themselves also contributed to the abandonment of classical verificationism. Carl Gustav Hempel, a leading figure in the movement, examined successive attempts to refine the empiricist criterion of cognitive significance and argued that none could do the required work. In "Problems and Changes in the Empiricist Criterion of Meaning" (1950) and "Empiricist Criteria of Cognitive Significance: Problems and Changes" (1965) he reconstructed a series of proposals—strict verifiability, practical vs. in-principle testability, conclusive falsifiability, A. J. Ayer's requirement of "empirical import" and various translatability conditions linking theoretical to observational vocabulary—and showed that each has unacceptable consequences.[26][2][51]

On the one hand, criteria based on conclusive verification are too strict: they would declare universal laws and many dispositional or theoretical statements meaningless, since such sentences cannot be deduced from any finite set of observation reports.[26][2] On the other hand, more liberal criteria threaten to be too permissive. Hempel and others argued that, unless carefully constrained, Ayer-style tests of "empirical import" trivialise the distinction between meaningful and meaningless sentences by ensuring that practically any sentence, or its negation, can be connected with some observational consequences if one is free to introduce suitable auxiliary assumptions.[2][52] Hempel concluded that no non-trivial, purely formal and once-and-for-all criterion of cognitive significance could be found: empirical significance comes in degrees and depends on the theoretical role of a statement as well as its logical connections with observation.[2][51]

Analyticity, theory-ladenness and paradigms


Verificationism also presupposed a sharp distinction between analytic and synthetic truths and a relatively stable, theory-neutral observation language. Both assumptions were challenged in mid-twentieth-century philosophy. In addition to his holism about confirmation, Quine's "Two Dogmas of Empiricism" argued that attempts to define analyticity—in terms of meaning, synonymy, logical truth or explicit definition—end up presupposing the very notion they are supposed to explain.[5] If there is no non-question-begging account of analyticity, the verificationists' strategy of rescuing logic and mathematics as meaningful analytic truths, while requiring empirical sentences to be verifiable, loses much of its motivation.

From a different direction, philosophers of science emphasised the theory-ladenness of observation. Norwood Russell Hanson argued that what scientists "see" in an experimental situation is shaped by the conceptual frameworks they bring to it: the same visual stimulus may be described as a "flare in the cloud chamber" or as "the track of an alpha particle" depending on one's theoretical commitments.[53][54] Thomas Kuhn's The Structure of Scientific Revolutions (1962) further suggested that periods of "normal science" are governed by shared paradigms that structure problems, standards of evidence and even the classification of phenomena; during scientific revolutions, these paradigms may be replaced by ones that are partially incommensurable.[55][56] If observation itself is permeated by theory and subject to historical change, the verificationist idea of fixing meanings by reference to a timeless, privileged observation language becomes highly problematic.[53][57]

Metaphysics, ethics and other domains


Finally, many philosophers resisted verificationism's negative verdict on metaphysics, ethics and other non-scientific forms of discourse. Popper, for instance, maintained that metaphysical ideas, though not empirically testable, can be meaningful and may perform a productive heuristic role in the development of scientific theories.[6][46] Other critics argued that the verificationists' criterion would incorrectly exclude large parts of mathematics, modality, moral philosophy and ordinary discourse from the realm of cognitively significant talk, and that it begs the question against views which treat such statements as truth-apt.[58][59] Taken together with the technical and methodological objections above, these considerations led most philosophers to abandon classical verificationism as a strict criterion of meaning, even when they continued to endorse more modest links between meaning, justification and possible experience.[8][3][4]

Legacy


Death of logical positivism


In 1967, John Passmore, a leading historian of twentieth-century philosophy, famously remarked that "logical positivism is dead, or as dead as a philosophical movement ever becomes".[8] This verdict is often taken to mark the end of logical positivism as a self-conscious school, and with it the abandonment of classical verificationism as a strict criterion of meaning.[3][7] In many standard narratives, the decline of verificationism is intertwined with the rise of various forms of postpositivism in which Karl Popper's falsificationism, historically oriented accounts of scientific change and more pluralist views of scientific method displace the earlier search for a single verificationist test of significance.[47][7]

Even some of verificationism's most prominent advocates later distanced themselves from its more uncompromising claims. In a 1976 television interview, A. J. Ayer—whose Language, Truth and Logic had helped to popularise logical positivism in the English-speaking world—commented that "nearly all of it was false", while insisting that he continued to endorse "the same general approach" of empiricism and reductionism, according to which mental phenomena are to be understood in physical terms and philosophical questions are resolved by attention to language and logical analysis.[8][25]

"The verification principle is seldom mentioned and when it is mentioned it is usually scorned; it continues, however, to be put to work. The attitude of many philosophers reminds me of the relationship between Pip and Magwitch in Dickens's Great Expectations. They have lived on the money, but are ashamed to acknowledge its source."[3]

Falsificationism

Karl R. Popper, whose falsificationism was based upon a critique of verificationism.

In The Logic of Scientific Discovery (1959; first published in German as Logik der Forschung, 1934), Popper proposed falsifiability, or falsificationism. Though formulated in the context of what he perceived to be intractable problems with both verifiability and confirmability, Popper intended falsifiability not as a criterion of meaning like verificationism (a common misunderstanding),[47] but as a criterion for demarcating scientific statements from non-scientific ones.[6]

Notably, the falsifiability criterion allows scientific hypotheses (expressed as universal generalizations) to be held as provisionally true until falsified by observation, whereas under a strict verificationist criterion they would be dismissed from the outset as meaningless.[6]

In formulating his criterion, Popper was informed by the contrasting methodologies of Albert Einstein and Sigmund Freud. Einstein's general theory of relativity, with its predicted gravitational bending of light, carried in Popper's eyes a far greater predictive risk of being falsified by observation than Freud's theories did. Though Freud found ample confirmation of his theories in observation, Popper noted that this method of justification was vulnerable to confirmation bias, leading in some cases to contradictory outcomes. He therefore concluded that predictive risk, or falsifiability, should serve as the criterion demarcating the boundaries of science.[60]

Though falsificationism has been criticized extensively by philosophers for methodological shortcomings in its intended demarcation of science,[46] it was enthusiastically adopted by scientists.[7] Even the logical positivists embraced the criterion as their movement ran its course, and Popper, initially a contentious misfit, came to be regarded as having carried the richest philosophy out of interwar Vienna.[47]

Verificationist revivals


Although the logical positivists' attempt to state a precise, once-and-for-all verifiability criterion of meaning is now generally regarded as untenable, a number of later philosophers have developed weaker, "post-positivist" forms of verificationism that retain a tight connection between meaning, truth and warranted assertion. Cheryl Misak's historical study Verificationism: Its History and Prospects traces both the rise and fall of classical verificationism and its re-emergence in more flexible guises, arguing that suitably liberalised verificationist ideas remain philosophically fruitful.[3][42]

Michael Dummett, whose semantic anti-realism was influenced by verificationist ideas.

In the philosophy of language and logic, Michael Dummett developed an influential form of semantic anti-realism that begins from the thought that understanding a statement involves grasping what would count as its correct verification or refutation.[61] On this "justificationist" view, the meaning of a sentence is tied to the conditions under which speakers are in a position to recognise it as warranted, and Dummett uses this to motivate anti-realist treatments of mathematical discourse and of some statements about the past, together with revisions of classical logic.[61] Crispin Wright, drawing extensively on Dummett, has explored epistemically constrained conceptions of truth and proposed the notion of superassertibility—roughly, a status a statement would possess if it could be justified by some body of information that is in principle extendable without undermining that justification—as a candidate truth predicate for certain discourses.[62][63] Both Dummett and Wright thus preserve a verificationist link between meaning and warranted use while giving up the positivists' sharp dichotomy between meaningful science and meaningless metaphysics.
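Wright's notion admits a schematic gloss, offered here as an informal reconstruction rather than Wright's own formalism: write W(I, S) for "the state of information I warrants the statement S".

```latex
% S is superassertible iff some attainable state of information warrants S,
% and S remains warranted however that state is enlarged:
\mathrm{Sup}(S) \;\iff\; \exists I\,\big( W(I, S) \wedge \forall I' \supseteq I\;\, W(I', S) \big)
```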

In the philosophy of science, Bas van Fraassen's constructive empiricism has often been described as verificationist in spirit, even though it abandons any explicit verifiability criterion of meaning.[64] Van Fraassen distinguishes belief in the literal truth of a theory from acceptance of it as empirically adequate, requiring only that accepted theories get the observable phenomena right while remaining agnostic about their claims concerning unobservables.[65] Misak and others have suggested that this emphasis on observable consequences and on the role of empirical data in theory choice continues the verificationist impulse in a more modest, methodological form.[42]

Christopher Peacocke, whose theory of possession conditions for concepts was influenced by verificationism.

Other late twentieth-century writers have proposed explicitly "post-verificationist" approaches that reject the positivists' austere criterion of significance but retain a close tie between meaning, justification and experiential or inferential capacities. Christopher Peacocke has argued that many concepts are to be understood via possession conditions which specify what discriminations, recognitional capacities or patterns of inference a thinker must be able to deploy in order to count as grasping the concept, a project he presents as a successor to earlier verificationist accounts of meaning.[66][67] David Wiggins has defended a form of "conceptual realism" and has argued that truth is appropriately connected with what would be accepted under conditions of ideal reflection and convergence in judgement, a stance that many commentators, including Misak, interpret as containing important verificationist elements.[68][69][70]

Misak herself not only provides a historical reconstruction of verificationism but also defends a neo-pragmatist version inspired by Charles Sanders Peirce. In Truth and the End of Inquiry she develops a Peircean conception on which truth is the ideal limit of inquiry—what would be agreed upon at the hypothetical end of investigation by suitably situated and responsive inquirers—so that truth is tightly bound to what could in principle be justified by experience and argument.[71] On this view, verificationism survives not as a rigid test for meaningfulness but as a normative constraint linking the content of our statements to the kinds of evidence and justificatory practices that would speak for or against them, a constraint that Misak also finds echoed in parts of contemporary feminist philosophy, the later work of Richard Rorty and other strands of post-positivist thought.[42][71]


References

  1. ^ a b c "Verifiability principle". Encyclopædia Britannica. 2024. Retrieved 8 October 2024.
  2. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af Hempel, Carl G. (1976) [1965]. "Empiricist Criteria of Cognitive Significance: Problems and Changes". In Harding, Sandra G. (ed.). Can Theories Be Refuted?. Synthese Library. Vol. 81. Dordrecht: Reidel. pp. 65–85.
  3. ^ a b c d e f g h i j k l m n o p q r s t u v w x Misak, C.J. (1995). "The Logical Positivists and the Verifiability Principle". Verificationism: Its History and Prospects. New York: Routledge.
  4. ^ a b c d e f g h i j k l m n o p q r s t u v w Uebel, Thomas (2020). "Vienna Circle". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  5. ^ a b c d e f g Rocknak, Stefanie. "Willard Van Orman Quine: The Analytic/Synthetic Distinction". Internet Encyclopedia of Philosophy. Retrieved 14 July 2024.
  6. ^ a b c d e f g h Popper, Karl (2011). "Science: Conjectures and refutations". In Andrew Bailey (ed.). First Philosophy: Fundamental Problems and Readings in Philosophy (2nd ed.). Peterborough, Ontario: Broadview Press. pp. 338–42.
  7. ^ a b c d e f g Godfrey-Smith, Peter (2005). Theory and Reality: An Introduction to the Philosophy of Science. Chicago: University of Chicago Press. pp. 57–59.
  8. ^ a b c d Hanfling, Oswald (1996). "Logical positivism". In Stuart G Shanker (ed.). Philosophy of Science, Logic and Mathematics in the Twentieth Century. Routledge. pp. 193–94.
  9. ^ a b c d e Ayer, A. J. (1936). Language, Truth and Logic. London: Victor Gollancz.
  10. ^ a b Carnap, Rudolf (1932). "Überwindung der Metaphysik durch logische Analyse der Sprache" [Overcoming Metaphysics through the Logical Analysis of Language]. Erkenntnis (in German). 2: 219–241.
  11. ^ a b Nelson, Eric S. (2013). "Heidegger and Carnap: Disagreeing about Nothing?". In Raffoul, François; Nelson, Eric S. (eds.). The Bloomsbury Companion to Heidegger. London: Bloomsbury. pp. 151–155. doi:10.5040/9781472548313.ch-017.
  12. ^ Epstein, Miran (2012). "Introduction to philosophy of science". In Clive Seale (ed.). Researching Society and Culture 3rd Ed. London: Sage Publications. pp. 18–19.
  13. ^ Ryle, Gilbert. "Introduction". In Ayer, A. J. (ed.). The Revolution in Philosophy. p. 9.
  14. ^ Flew, Antony G. (1984). A Dictionary of Philosophy. St Martin's Press. p. 156.
  15. ^ Uebel 2024, Section 3.
  16. ^ a b c d Uebel, Thomas (2024). "Vienna Circle". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  17. ^ Katz, Jerrold J. (2000). "The epistemic challenge to antirealism". Realistic Rationalism. Cambridge, MA: MIT Press. p. 69.
  18. ^ a b Sahotra Sarkar; Jessica Pfeifer, eds. (2006). "Rudolf Carnap". The Philosophy of Science: An Encyclopedia, Volume 1: A-M. New York: Routledge. p. 83.
  19. ^ Uebel 2024, Section 3.1.
  20. ^ Flew 1984, p. 245.
  21. ^ Creath, Richard. "Logical Empiricism". Stanford Encyclopedia of Philosophy. Retrieved 9 March 2025.
  22. ^ a b c d Carnap, Rudolf (1936). "Testability and Meaning". Philosophy of Science. 3 (4): 419–471.; Carnap, Rudolf (1937). "Testability and Meaning—Continued". Philosophy of Science. 4 (1): 1–40.
  23. ^ a b Murzi, Mauro (2001). "Rudolf Carnap (1891–1970)". Internet Encyclopedia of Philosophy.
  24. ^ Ayer, A. J. (29 November 2007). "Ayer on the criterion of verifiability" (PDF). Retrieved 9 July 2023.
  25. ^ a b c Macdonald, Graham (2005). "A. J. Ayer". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  26. ^ a b c d e f g h i Hempel, Carl G. (1950). "Problems and Changes in the Empiricist Criterion of Meaning". Revue Internationale de Philosophie. 4: 41–63.
  27. ^ a b c d e Fetzer, James H. (2013). "Carl Hempel". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  28. ^ Church, Alonzo (1949). "Review: Alfred Jules Ayer, Language, Truth and Logic". Journal of Symbolic Logic. 14 (1): 52–53. doi:10.2307/2268980.
  29. ^ Hempel, Carl G. (1952). "Fundamentals of Concept Formation in Empirical Science". International Encyclopedia of Unified Science. 2 (7).
  30. ^ Lutz, Sebastian (2017). "Carnap on Empirical Significance". Synthese. 194 (1): 217–252. doi:10.1007/s11229-014-0561-8.
  31. ^ Hempel, Carl G. (1951). "The Concept of Cognitive Significance: A Reconsideration". Proceedings of the American Academy of Arts and Sciences. 80 (1): 61–77. doi:10.2307/20023635.
  32. ^ Oberdan, Thomas (2013). "Moritz Schlick". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  33. ^ Schlick, Moritz (1934). "Über das Fundament der Erkenntnis". Erkenntnis (in German). 4: 79–124.
  34. ^ Neurath, Otto (1932). "Protokollsätze" [Protocol Sentences]. Erkenntnis (in German). 3: 204–214. Translated by A. J. Ayer.
  35. ^ Cat, Jordi (2019). "Otto Neurath". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  36. ^ Carnap, Rudolf (1937). "Testability and Meaning—Continued". Philosophy of Science. 4 (1): 1–40. doi:10.1086/286443.
  37. ^ Carnap, Rudolf (1956). "Testability and Meaning". Meaning and Necessity: A Study in Semantics and Modal Logic (2nd ed.). Chicago: University of Chicago Press. pp. 420–424.
  38. ^ Creath, Richard (2020). "Logical Empiricism". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  39. ^ Reichenbach, Hans (1951). "The Verifiability Theory of Meaning". Proceedings of the American Academy of Arts and Sciences. 80 (1): 46–60. doi:10.2307/20023636.
  40. ^ a b Carnap, Rudolf (1950). "1". Logical Foundations of Probability. Chicago: University of Chicago Press.
  41. ^ a b Cordes, Moritz; Siegwart, Geo (2018). "Explication". In Fieser, James; Dowden, Bradley (eds.). Internet Encyclopedia of Philosophy. Retrieved 28 November 2025.
  42. ^ a b c d e Misak, Cheryl J. (1995). Verificationism: Its History and Prospects. London: Routledge.
  43. ^ Dutilh Novaes, Catarina; Reck, Erich H. (2017). "Carnapian Explication, Formalisms as Cognitive Tools, and the Paradox of Adequate Formalization". Synthese. 194 (1): 195–215. doi:10.1007/s11229-014-0565-4.
  44. ^ Schlick, Moritz (1936). "Meaning and Verification". The Philosophical Review. 45 (4): 339–369. doi:10.2307/2180487.
  45. ^ Leitgeb, Hannes (2024). "Vindicating the Verifiability Criterion". Philosophical Studies. 181 (1): 223–245. doi:10.1007/s11098-023-02071-w.
  46. ^ a b c d e Shea, Brendan. "Karl Popper: Philosophy of Science". Internet Encyclopedia of Philosophy. Retrieved 12 May 2019.
  47. ^ a b c d Hacohen, Malachi Haim (2000). Karl Popper: The Formative Years, 1902–1945: Politics and Philosophy in Interwar Vienna. Cambridge: Cambridge University Press. pp. 212–13.
  48. ^ Duhem, Pierre (1954) [1906]. The Aim and Structure of Physical Theory. Princeton: Princeton University Press.
  49. ^ Ariew, Roger (2014). "Pierre Duhem". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  50. ^ Stanford, Kyle (2017). "Underdetermination of Scientific Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  51. ^ a b Fetzer, James H. (2013). "Carl Hempel". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  52. ^ Church, Alonzo (1949). "Review: Alfred Jules Ayer, Language, Truth and Logic". Journal of Symbolic Logic. 14 (1): 52–53. doi:10.2307/2268980.
  53. ^ a b Hanson, Norwood Russell (1958). Patterns of Discovery. Cambridge: Cambridge University Press.
  54. ^ Caldwell, Bruce (1994). Beyond Positivism: Economic Methodology in the 20th Century. London: Routledge. pp. 47–48.
  55. ^ Kuhn, Thomas S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
  56. ^ Okasha, Samir (2002). "Scientific Change and Scientific Revolutions". Philosophy of Science: A Very Short Introduction. Oxford: Oxford University Press.
  57. ^ Uebel 2024, Section 3.3.
  58. ^ Katz, Jerrold J. (2000). "The epistemic challenge to antirealism". Realistic Rationalism. Cambridge, MA: MIT Press. p. 69.
  59. ^ Strawson, P. F. (1952). Introduction to Logical Theory. London: Methuen.
  60. ^ Popper, Karl (1962). Conjectures and Refutations: The Growth of Scientific Knowledge (2nd ed.). Routledge. pp. 34–37.
  61. ^ a b Murphy, Benjamin (2013). "Michael Dummett (1925–2011)". Internet Encyclopedia of Philosophy. Retrieved 28 November 2025.
  62. ^ Wright, Crispin (1992). Truth and Objectivity. Cambridge, MA: Harvard University Press.
  63. ^ Edwards, Jim (1996). "Anti-Realist Truth and Concepts of Superassertibility". Synthese. 107 (3): 383–419. doi:10.1007/BF00413824.
  64. ^ Monton, Bradley (2007). "Constructive Empiricism". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Retrieved 28 November 2025.
  65. ^ van Fraassen, Bas C. (1980). The Scientific Image. Oxford: Clarendon Press.
  66. ^ Peacocke, Christopher (1992). A Study of Concepts. Cambridge, MA: MIT Press.
  67. ^ Peacocke, Christopher (1988). "The Limits of Intelligibility: A Post-Verificationist Proposal". Philosophical Review. 97 (4).
  68. ^ Wiggins, David (1997). "Meaning and Truth-Conditions: From Frege's Grand Design to Davidson's". In Hale, Bob; Wright, Crispin (eds.). A Companion to the Philosophy of Language. Oxford: Blackwell. pp. 3–28.
  69. ^ Wiggins, David (1987). Needs, Values, Truth: Essays in the Philosophy of Value. Oxford: Basil Blackwell.
  70. ^ Hookway, Christopher (1996). "Review: Verificationism: Its History and Prospects by C. J. Misak". Mind. 105 (420): 709–710. JSTOR 2254597.
  71. ^ a b Misak, Cheryl J. (1991). Truth and the End of Inquiry: A Peircean Account of Truth. Oxford: Clarendon Press.