

Does the success of AI (Large Language Models) support Wittgenstein’s position that “meaning is use”? - Philosophy Stack Exchange #

By ‘success’ we mean current AI/LLMs’ capacity to produce text that human readers regard as coherent, informative, even convincing [see for instance Spitale et al. and Salvi et al.].

Wittgenstein’s position is:

For a large class of cases of the employment of the word “meaning” - though not for all - this word can be explained in this way: the meaning of a word is its use in the language.

in his “Philosophical Investigations” (Section 43).

Notice that the question is not “Does the machine understand? Can the machine think? Does it have a mind/consciousness?” or anything of the sort (the man himself says “But surely a machine cannot think!”), but only about language, as in: given that the machine produces text based on statistical analysis, and that the texts seem to us to be ‘meaningful’, is ‘meaning’ really just use?

Asked by [ac15](https://philosophy.stackexchange.com/users/73257/ac15)

Yes, indeed: According to the post-Tractatus Wittgenstein, words are “meaning families”; the specific “meaning” of a word is determined by (or perhaps is) its use in context. Speaker and listener must share part of that context, or else they could not communicate (see the “private language” discussion in the Investigations, e.g. of a sensation peculiar to a single person).

In this sense, surely the words and sentences produced by LLMs have meaning. The LLMs have been trained on a large body of text in which the words appear in various contexts; the models have “absorbed” and internalized these contexts. When they construct texts, they use the words based on these stored contexts. In a way, they are an intermediary between a collective of original human speakers and a listener. They use the words the same way the original authors use them; that is why the words in the produced texts have their intelligible meaning, even though the model is not conscious and “doesn’t know what it’s talking about”; in fact, the word “know” is categorically inapplicable, because there is no facility for knowing in the machine beyond word contexts.

And yet it produces meaning. Wittgenstein, indeed.
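As a toy illustration of the distributional picture this answer sketches (an editorial aside, not part of the thread; the corpus and words are invented, and real LLMs are vastly more elaborate), here is a minimal Python sketch in which a word’s representation is built entirely from the company it keeps:

```python
from collections import Counter
import math

# Toy corpus: the only "experience" we have of each word is its use.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Represent each word by the words immediately adjacent to it.
cooc = {}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in (i - 1, i + 1):
            if 0 <= j < len(words):
                cooc.setdefault(w, Counter())[words[j]] += 1

def cosine(a, b):
    # Cosine similarity between two sparse context-count vectors.
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# "cat" and "dog" occur in near-identical contexts, so their vectors
# nearly coincide; "cat" and "sat" share no contexts at all.
print(cosine(cooc["cat"], cooc["dog"]))  # ~0.94
print(cosine(cooc["cat"], cooc["sat"]))  # 0.0
```

No definition of “cat” or “dog” is ever supplied; the similarity of the two words falls out of their use alone, which is the sense in which the model is “an intermediary between a collective of original human speakers and a listener”.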

Answered by [Peter - Reinstate Monica](https://philosophy.stackexchange.com/users/18617/peter-reinstate-monica)

No. Wittgenstein would probably be the first to argue that the bare existence of a functioning Large Language Model does not by itself have any philosophical importance. The construction of an LLM is a mathematical or an engineering problem. Curb your enthusiasm, also, by considering that the recent proliferation of LLMs has largely been the result of overcoming purely practical limitations of hardware, scale, and profitability; the theoretical infrastructure has been in place for a very long time. It would be silly to think that the large-scale investment of capital into these technologies signals philosophical evidence of anything (does a functioning quantum computer prove the many-worlds interpretation?).

Further, the LLM has no obvious bearing on the question of “meaning as use”, because the LLM is not “using a word” in Wittgenstein’s sense of ‘use’. “Using a word” is a complex set of activities and social institutions that goes well beyond the mechanical process of slotting a word into the “right” spot to generate a coherent syntactical string, responsive to an artificial context elaborated from the outside by determinate input parameters.

Answered by [transitionsynthesis](https://philosophy.stackexchange.com/users/14499/transitionsynthesis)

From SEP Wittgenstein 3.3 Meaning as Use:

https://plato.stanford.edu/entries/wittgenstein/#MeanUse

Ascertainment of the use (of a word, of a proposition), however, is not given to any sort of constructive theory building, as in the Tractatus. Rather, when investigating meaning, the philosopher must “look and see” the variety of uses to which the word is put. An analogy with tools sheds light on the nature of words. When we think of tools in a toolbox, we do not fail to see their variety; but the “functions of words are as diverse as the functions of these objects” (PI 11).

Words can have different meanings depending on context. Using the tool analogy, a screwdriver has one meaning in the context of driving screws and a different meaning when prying open a can of paint.

For fun, I asked ChatGPT:

Is a screwdriver equivalent to a prybar?

The answer:

While both screwdrivers and pry bars can be used for leverage, they serve different primary purposes. Screwdrivers are primarily used for driving screws into or out of materials, while pry bars are specifically designed for prying or lifting objects apart. While a screwdriver can sometimes be used as a makeshift pry bar in certain situations, it’s not its intended purpose, and using it as such can damage the screwdriver and possibly the object you’re working on. It’s generally safer and more effective to use tools for their intended purposes.

Based on this answer, the LLM used by ChatGPT both supports and incorporates Wittgenstein’s position.

Answered by [Idiosyncratic Soul](https://philosophy.stackexchange.com/users/67687/idiosyncratic-soul)

No. As @transitionsynthesis says, LLMs do not come in contact with Wittgenstein and so can’t support or refute him. Meaning (and Wittgenstein’s “use”) requires intention, which LLMs do not have. Strictly speaking they are only pattern-matching systems and don’t even have anything to do with language – language only happens to be the medium in which they match patterns. If their responses to our prompts seem meaningful to us that’s a coincidence – a hallucination on our part – and the meaning we find is our use of the words, which involves the intention of our reception (“How can I apply this string of terms to my question about good ways to melt an egg?”). You might take that as support of Wittgenstein’s proposition, but it has nothing to do with the LLMs – it’s the same use we make of all utterances we receive.

So, frame challenge: I think OP’s consideration of LLMs in the question is an error. The last clause, “Is ‘meaning’ really just use?” is a good question. But since the speaker (or anyway, word-emitter) in the situation is an intentionless automaton the only “use” involved is that of the receiver. That being the case, the fact that this producer is an LLM is irrelevant – the source might be ChatGPT or words drawn from a hat or a Magic 8-Ball.

So the success of LLMs at producing word-strings to which interpretations can be applied neither supports nor refutes the idea that “meaning is use”. With respect to that proposition we’re left in the same position we’ve always been in when considering “meaning” from only the recipient’s point of view.

Answered by [uhClem](https://philosophy.stackexchange.com/users/63707/uhclem)

No

Aside from the philosophy perspective, which other answers have touched on: from a machine learning perspective, the meaning of words in a transformer is only partially defined by their use.

The attention mechanism does modify the meaning of a word based on the other words around it, but it does not define that meaning from scratch. Each word starts with a default representation (its embedding), which the attention mechanism then modifies. Thus words in a transformer do have a default meaning; the context, or use, only modifies that default.
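As a minimal sketch of that point (an editorial illustration, assuming PyTorch; the sizes and token ids are arbitrary, and real transformers add positional encodings, multiple heads, and many stacked layers):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab_size, d_model = 100, 16  # toy sizes, purely illustrative

# Every token id maps to one fixed row of the embedding table: its
# context-independent "default meaning".
embedding = torch.nn.Embedding(vocab_size, d_model)
tokens = torch.tensor([5, 42, 7])  # a three-token toy "sentence"
x = embedding(tokens)              # shape (3, 16): default meanings

# One head of self-attention: the mechanism that adjusts each default
# meaning according to the other tokens that happen to be present.
Wq = torch.nn.Linear(d_model, d_model, bias=False)
Wk = torch.nn.Linear(d_model, d_model, bias=False)
Wv = torch.nn.Linear(d_model, d_model, bias=False)

scores = Wq(x) @ Wk(x).T / d_model ** 0.5  # pairwise relevance of tokens
weights = F.softmax(scores, dim=-1)        # attention paid by each token
contextual = x + weights @ Wv(x)           # default meaning, nudged by context

# Token 5 always starts from the same embedding row wherever it appears;
# its row of `contextual` changes whenever its neighbours change.
```

The two stages mirror the answer’s distinction exactly: `x` is the default meaning, and the attention-weighted update is the use-dependent modification.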

Answered by [LivesayEngineer](https://philosophy.stackexchange.com/users/73760/livesayengineer)

When I read LLM output, I am using language, and often (if not always) it is meaningful to me. Though this may be his position, I wouldn’t say it supports Wittgenstein’s claim, as that claim seems too general to be refuted or confirmed by our use of LLMs, unless we take reading LLM output as paradigmatic of all language.

In the Philosophical Investigations (156-178) and in the Brown Book (78-87), Wittgenstein gives a “deconstruction” of “reading”.

You could look there to find out if he thinks it is a language game.

Answered by [user66697](https://philosophy.stackexchange.com/users/71399/user66697)

The quote is from “Ludwig Wittgenstein: Philosophical Investigations, Section 43”.

  • I simply take Wittgenstein’s statement as exactly what it says: it gives an operational definition of the term “meaning of a word”. The definition applies to a large class of cases. I do not search for further philosophical profundity.
  • Similarly, AI tools like ChatGPT learn how to use words from the contexts of the textual examples in their training base. The tool has no possibility of learning meaning from acting in the world.

In this respect, ChatGPT is a good example of how the meaning of a word is fixed by its use in different contexts.

Answered by [Jo Wehler](https://philosophy.stackexchange.com/users/9174/jo-wehler)
