Chapter 10

The IEML Metalanguage

Having described the main formal characteristics of the system of semantic coordinates of the mind in the preceding chapter, I will now discuss the strictly linguistic dimension of this system. Each USL, i.e. each “point” of the semantic sphere, is an IEML text that can be translated automatically into a network of concepts in natural languages. Figure 10.1 shows the place of this metalinguistic dimension in my general model of reflexive collective intelligence.

10.1. The problem of encoding concepts

In the 17th century, the philosopher, mathematician and scientist G.W. Leibniz (1646–1716) examined the problem of the calculability of concepts understood as distinct but interdependent semantic qualities1. Leibniz called the system of encoding that would allow concepts to be manipulated by automata the universal characteristic. The system he imagined identified primitive concepts with prime numbers, and composite concepts with products of those prime numbers. Since numbers are calculable, the encoding of concepts by numbers was intended to make concepts calculable. Despite this, Leibniz’s universal characteristic was very unwieldy, and his system had no lasting success or direct successors in its original form. We can see from the work of my illustrious predecessor that sheer calculability is not enough: concepts must also be encoded so that their manipulation can be usefully automated.

Figure 10.1. Position of Chapter 10 on the conceptual map


It goes without saying, first of all, that a system of notation for categories – or signifieds – will only solve the problem of calculability of concepts if its grammar is completely regular, unlike natural languages, which are full of irregularities. This is why IEML is a regular language in Chomsky’s sense2. In addition, IEML is an ideographic system of notation, since its goal is to encode and manipulate meaning, unlike phonetic notations whose purpose is to encode sound. I note in passing that contemporary notation systems for numbers and mathematical concepts are ideographic, since they are read differently in different languages (12 is an ideogram that is read as “twelve” in English, “douze” in French, etc.). IEML also has certain features common to natural languages, in particular those that make it possible to articulate categories and utterances as freely and with as much complexity as we might wish.

I would like to elaborate on this last condition. It is a necessary condition for calculability that the IEML metalanguage for encoding concepts be a regular ideographic language. This necessary condition is not sufficient in itself. Also required is a type of encoding or notation of concepts that is compact enough, and above all isomorphic enough with the structure of natural languages, to be automatically interpretable in them. We cannot, as Leibniz did, use natural numbers (or a finite subset of natural numbers) for encoding concepts, because the structure of numbers is too different from that of languages, which are the natural tools for manipulating concepts. We know with certainty that we can calculate, not only with numbers, but with symbols in general, provided that these symbols are arranged in a regular language. To make meaning calculable, we must indeed have a regular language. Above all, however, we need a regular language designed to reflect not only the basic structure of numbers, geometric figures or logical reasoning, but also the structure of the concepts that are manipulated by natural languages and that give meaning to the numbers, figures and reasoning. We want, hypothetically, an encoding of concepts that would allow us to automate a semantic calculation and not only an arithmetic or logical calculation. In short, arithmetic and logical calculability is a necessary condition for semantic calculability, but it is not sufficient.

To clarify the nature of the problem, I will use an analogy with cartography. The advantage of having a system of geometric coordinates for maps is well known: it makes it possible to calculate distances and perspectives, to see at a glance the relationships between different points, etc. The advantage of having the same system of coordinates (meridians and parallels) for all maps and GPS systems is that maps (whether they focus on one aspect or another of the territory) are then superimposable and interconnectable using simple changes of scale. The calculability and universality of their system of coordinates is not the only reason for the usefulness of maps. If mountains, rivers and roads can be projected so usefully on a map with geometric coordinates, it is because there is an isomorphy between the structure of the geographic objects and that of the Euclidean geometry that is the basis of the system of geographic coordinates. Similarly, as we will see, there is an isomorphy between textual objects and semantic topology in IEML that informs the scientific cartography of those objects. Cartography (and modeling in general) implies a correspondence between the object and the map – between the phenomenon and its model – that preserves as much as possible of the relevant features of the object, in particular its transformations and relationships with objects of the same kind. That is why a calculable system of coordinates that would permit us to usefully map concepts – i.e. ultimately, the semantics of expressions in natural languages – must be able to lend itself structurally to the cognitive manipulations we carry out on texts and their meanings. To construct the semantic topology of IEML3, I therefore adopted the following strategy: first identify the general structure of cognitive operations on linguistic objects, and then integrate this structure into a mechanism operating on a regular language. 
How do we reciprocally transform linguistic symbols into the general categories signified by these symbols? And on what universal features of the structure of languages is this transformation mechanism based? The IEML metalanguage integrates the universals of the structure of languages, which I will review in this chapter. It is precisely because IEML meets this strict linguistic requirement that the IEML semantic machine (which I will discuss in the next chapter) can automate the cognitive operations that reciprocally transform an IEML text (a USL) into its meaning.

I will first review the organization of text units in natural languages in layers, classes and roles. I will then present the two major types of semantic circuits among text units, which make it possible to express the meaning of texts: paradigmatic circuits and syntagmatic circuits. Finally, I will discuss some types of cognitive mechanisms for describing symmetric transformations between meaning (networks of categories) and text (sequences of symbols).

10.2. Text units

Linguistic objects are first presented in the form of texts4: meaningful sequences of symbols. Any reading or understanding of these texts implies at least three cognitive operations: first, analysis of the texts into units of different nested layers of complexity; second, categorization of these units in different classes; and, third, identification of the roles played by these units in the text.

The grammar of a language is a set of rules that defines explicitly what the units of the language are, distinguishes among different classes of units and describes how to correctly assemble these units by assigning semantic roles to them. It should be noted that the very concept of grammar is already the result of scientific modeling, which could only be developed on the basis of written representations of languages. The need to construct grammars was first felt by scholars in order to read and study ancient texts written in dead languages or texts that were not in the mother tongue of the readers. The grammatization of living languages has mostly developed since the widespread availability of print, mainly for political and religious reasons5.

10.2.1. The layers of text units

The organization of text units in layers, or levels of composition, is common to all languages6.

The units at the first level are phonemes, the basic sounds of languages. These may vary widely from language to language: there are tonal languages, such as Chinese, and languages with non-pulmonic consonants obtained by clicking the tongue or lips, such as the Khoisan languages of southern Africa. Generally, phonemes are divided into consonants and vowels and have no meaning in themselves.

The units at the second level are morphemes (root words and markers of case, gender, number, etc.). Morphemes are made up of phonemes. Unlike phonemes, morphemes have meaning. They are the first meaningful units of languages. Take, for example, the morpheme flor, which is the basis of words such as flower, florist, flourish.

Words, which are made up of morphemes, may be considered the third-level unit. For example floret is made up of the root flor and the suffix -et, which marks a diminutive. Words can only be perceived in writing. For a culture without writing, the distinction between word and morpheme – or between words and sentences – would have less meaning than it does for a culture where written words are separated by blank spaces.

The units at the fourth level are sentences, which are composed of words. In the hierarchy of levels, sentences are the first units to have a reference as well as a meaning. In terms of logic, sentences represent propositions. The word flower cannot be true or false; it can only indicate a general category. Only sentences have the capacity to be true or false, e.g. “The flower is pink”, or the power to give rise to action in a context, e.g. “Go plant some flowers”.

The fifth level of articulation is dialog, verse, paragraph or some other text unit made up of sentences in semantic relationships. We can continue in this way through scenes, acts and plays, or even chapters, books, and so on and so forth.

The cognitive process of interpreting texts in natural languages is based largely on the capacity to identify recursively nested text units. So we should remember that even if our regular, calculable system for encoding concepts does not include exactly the same layers as natural languages, it should at least have analogous layers. This is the case with IEML. As we will see in detail in Volume 2, in IEML the determination of the level of a text unit is based on a grammatical structure in seven layers.

10.2.2. Classes of text units

Another universal feature of the cognitive manipulation of texts is the capacity to distribute units of the same level among different classes. In particular, speakers of all languages are capable of distinguishing (implicitly or explicitly) between nouns and verbs at the level of morphemes or words, as well as at the level of phrases, which can be verbal or nominal. I have sometimes met extremist postmodernists who claim there are languages without verbs or without nouns, but they have never been able to cite a single example. Although the concept word is sometimes problematic, the fact remains that the verbal and nominal functions are universal. In all languages, verbs indicate actions (“He gives”), events (“It rained”), processes (“He is growing”), states or relationships between a subject and a predicate (“It is blue”). Nouns designate people, things, more or less abstract entities (justice) or qualities (blue). It seems that the difference between verbs and nouns has deep roots in human cognitive psychology, distinguishing between processes and entities7. There are, of course, other grammatical classes, such as adjectives, adverbs, pronouns, prepositions, etc. Units belonging to these other classes generally modify the meaning of nouns and verbs or else specify their relationships.

In IEML, in accordance with the universal structure of languages, there are three grammatical classes: verbs (for which the initials are U or A), nouns (for which the initials are S, B, or T) and auxiliaries (for which the initial is E). These three classes of units are distinguished only by their initial symbol and therefore can easily be recognized automatically.
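Since the class of an IEML unit is determined entirely by its initial symbol, the recognition described above can be expressed in a few lines. The following sketch is illustrative only (it is not the official IEML toolchain, and the function name is my own); the symbol-to-class mapping is the one stated in the text.

```python
# Illustrative sketch: recognizing the grammatical class of an IEML unit
# from its initial symbol, as described in the text. Verbs begin with U or A,
# nouns with S, B or T, auxiliaries with E. The function name is hypothetical.

VERB_INITIALS = {"U", "A"}
NOUN_INITIALS = {"S", "B", "T"}
AUXILIARY_INITIALS = {"E"}

def grammatical_class(unit: str) -> str:
    """Return the grammatical class of an IEML unit from its first symbol."""
    initial = unit[0]
    if initial in VERB_INITIALS:
        return "verb"
    if initial in NOUN_INITIALS:
        return "noun"
    if initial in AUXILIARY_INITIALS:
        return "auxiliary"
    raise ValueError(f"unknown initial symbol: {initial!r}")
```

The point of the design is visible in the code: because classification depends on a single character, no parsing or dictionary lookup is required to recognize a unit's class.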

10.2.3. The roles of text units

The same unit can play different roles. For example, in “The girl gives the boy an apple”, “the girl” plays the role of subject, “an apple” plays the role of object and “the boy” plays the role of indirect object or beneficiary (dative). On the other hand, in “The boy gives the girl an apple”, “the boy” is the subject and “the girl” the beneficiary. Languages use various methods to specify the grammatical roles of units, whether these units are morphemes, words or phrases. The recognition of the grammatical roles played by text units is essential for understanding the meaning of a text.

Some languages use standard syntactic positions to specify the roles of their units; for example, the subject may always come before the verb, and the object after the verb. This order is purely conventional and depends on the language. Rather than use syntactic order to indicate grammatical roles, some languages use prepositions or modify words according to their role. Many languages, such as Latin, use cases, which are markers of grammatical roles (rosa, the nominative case, plays the role of subject, while rosam, the accusative, plays the role of direct object). Finally, some languages combine the two strategies for indicating the grammatical role of units: both syntactic position and inflection. This mixed solution was chosen for IEML. Three syntactic positions – substance, attribute and mode, corresponding to the triplication operation that produces sequences of symbols – together with auxiliaries placed in the role of mode, make it possible to determine the grammatical roles of the units automatically.
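The triplication idea above can be sketched as a recursive data structure: a unit at a given layer is a triple of (substance, attribute, mode) units drawn from the layer below, down to primitive symbols at the bottom. The representation and names here are a minimal illustration of this shape, not the official IEML implementation.

```python
# A minimal sketch of triplication: each non-primitive unit is a triple of
# (substance, attribute, mode), so layers nest recursively. Data structure
# and names are illustrative, not the actual IEML machinery.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Triple:
    substance: "Unit"
    attribute: "Unit"
    mode: "Unit"

Unit = Union[str, Triple]  # a primitive symbol, or a nested triple

def layer(unit: Unit) -> int:
    """Layer 0 for a primitive symbol; each triplication adds one layer."""
    if isinstance(unit, str):
        return 0
    return 1 + max(layer(unit.substance), layer(unit.attribute), layer(unit.mode))

# Example: a layer-1 unit built from three primitives.
u = Triple(substance="S", attribute="B", mode="E")
```

Because the three positions are fixed, the role of every sub-unit (substance, attribute or mode) can be read directly off the structure, which is exactly what makes automatic role assignment possible.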

In short, in all texts in natural languages, units are characterized by their layers, their classes and their roles. Texts can only be interpreted by recognizing the units and their characteristics (classes and roles) in order to assemble them into a semantic circuit that specifies their relationships. This is exactly the same in IEML, except that the layers, classes and roles of the grammatical units can be identified automatically and their semantic circuits can be assembled just as automatically.

10.3. Circuits of meaning

10.3.1. Langue and parole

Since Ferdinand de Saussure, linguistics has distinguished between langue (language as a system) and parole (speech)8. Consideration of this now classic opposition will allow me to define two types of semantic circuits among the text units of languages: paradigmatic circuits and syntagmatic circuits.

Through its grammar, langue provides speakers with textual structures and markers that make it possible to break down the units, classify them and attribute roles to them. Through its lexicon, it organizes a priori semantic circuits among words. Parole, on the other hand, concerns the actualization of the textual potentialities of langue by speakers in concrete situations. These speakers produce utterances that are dated and situated.

The grammar and vocabulary of a language are theoretically independent of specific utterances, but in reality natural languages (the precise limits of which are difficult to establish) are living, shifting, syncretic, partly chaotic systems that emerge from the acts of enunciation of their speakers. Langue and parole are in a relationship of evolving, circular co-dependence. The true creative matrix of a language is the collective intelligence of a community of speakers. Symmetrically, a language ties together and coordinates in a more or less constraining way the acts of enunciation of the community of its speakers.

Although a language imposes its constraints on a linguistic community, it is also obvious that individual speakers do not strictly obey the rules grammarians try to establish. The role of dictionary writers and lexicographers is mainly to record usage. Langue as an abstract structure that is fixed and clearly defined is therefore primarily an ideal type, an object constructed by the linguist’s intellect or the speaker’s passion.

10.3.2. Paradigmatic circuits

The analysis of a language involves the way it divides up the continuum of experience in its dictionary. Graphs of relationships, or structures, specific to a language are called paradigms9. Paradigms organize relationships of distinction, derivation, opposition and substitution among potential text units. Relationships of distinction can be phonological (in the case of phonemes) or semantic.

For example, the words script, scripts, describe, description, descriptions, describing belong to the set of words that have the same root, scrib- or scrip- (from the Latin scribere, “to write”). All these words are thus part of the same etymological circuit (etymology is the genealogy of words, since it implies the concepts of origin and descent). But script and description are nouns, to describe is a verb in the infinitive and describing is a present participle. The differential relationships among words derived from the same root inform the etymological circuits. The etymological circuits are paradigmatic.

A second example: “I describe, you describe, he/she/it describes, we describe, you describe, they describe” is the conjugation of the verb to describe in the present indicative. The conjugation itself is a paradigm: the circuit of the different forms a verb can take. In this example, each position on the circuit represents a different person of the verb in a given tense.

A third example: the words red, green and blue belong to the same class of color adjectives. “Red”, “green” and “blue” are part of a paradigmatic circuit of colors, which includes relationships of opposition (white–black), belonging (red–scarlet), mixture and transition (green = blue + yellow), etc. – a circuit in which we stop at a particular position when determining the color of something.

As a general rule, paradigms are sets of text units characterized by variations on a common semantic theme and connected by links of difference, opposition, belonging, derivation, etc. A competent speaker of a given language is capable of selecting from among those variations to compose a specific utterance. The basic idea is that the meaning of a word – independently of its enunciation – is determined by its position in a complex paradigmatic circuit made up of many types of links, i.e. by all the relationships it has with other words in the same language.

10.3.3. Syntagmatic circuits

In contrast with langue, parole actualizes the paradigmatic structures of a language in a given utterance that can be dated and situated and that usually has an individual author (or a collective author, but one that is addressable: a specific team, group, etc.). While langue concerns competence, parole concerns performance.

The analysis of parole identifies the way a particular sentence is constructed and connected in order to explicate the grammatical relationships among the words of the sentence. The relationships among the words of an utterance concern only that specific utterance. For example, the meaning that emerges from the semantic circuit between the words thought, blue and color in the utterance “The thought of blue has no color” belongs only to that utterance.

While langue is analyzed in paradigms, parole is analyzed in syntagms. In its temporal sequentiality, as in the linearity of writing, parole is a series of text units. The syntagmatic chain is constructed by the speaker or writer through choices, which are necessarily successive, in the paradigmatic structures of the language: one word rather than another, one verb tense rather than another, etc. Understanding or analyzing parole, however, requires us to break down the syntagmatic chain and construct a circuit among the text units of the utterance, a circuit that explicates the “deep structure” of the syntagm. For example, to understand the sentence “The girl gives the boy an apple”, “The girl” must be assigned the role of subject, “the apple” the role of direct object, etc. This distribution of roles in a syntagmatic circuit is not sufficient; each of the units must be placed in a paradigmatic circuit, in which girl and boy represent the feminine and masculine poles of gender, apple belongs to the paradigm of edible fruits, gives is the third person singular of the present indicative of the verb to give, give is the opposite of receive, etc.

In short, to borrow Jakobson’s useful simplification10, langue can be compared to a code and parole to a message (i.e. to a text). To understand a text, we have to analyze it into units and connect those units in two distinct circuits: (i) the syntagmatic circuit, which indicates the semantic relationships internal to the text; and (ii) the paradigmatic circuits that, through various semantic relationships, link each actual unit of the text to the virtual units that could be substituted for it. I call this operation of text interpretation semantic inference. As we will see in the next chapter, what distinguishes IEML is that it automates semantic inference. As it includes (a) rules for the construction of paradigmatic and syntagmatic circuits and (b) a set of circuits with predefined meanings and relationships (the dictionary), the semantic machine can transform any IEML text into a semantic circuit translated into natural language (see Figure 11.3). When it is given a text in IEML (a USL), the semantic machine breaks the text down into units, constructs the syntagmatic circuit that explicates the internal relationships of the text and traces the paradigmatic circuits linking each unit to the other units of the metalanguage, while explicating the meaning of the units and their links in natural languages. The semantic circuit corresponding to an IEML text may thus be seen as a fractaloid syntagmatic rhizome (from layer to layer), each node of which explodes in paradigmatic stars. All this will be analyzed in detail in Volume 2.
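The semantic inference described above – decompose the text into units, build the syntagmatic circuit of internal relationships, then attach paradigmatic links drawn from a predefined dictionary – can be given a schematic shape in code. Everything here is hypothetical (the toy dictionary, the role assignments, the function name); the sketch shows only the form of the computation, not the real IEML semantic machine.

```python
# Schematic sketch of "semantic inference": given the units of an utterance
# and their syntagmatic roles, produce (i) role-labelled syntagmatic edges
# and (ii) paradigmatic edges from a predefined dictionary. All data here
# is an invented toy example.

from typing import Dict, List, Tuple

# Toy "dictionary": for each unit, the units it is paradigmatically related to.
PARADIGMS: Dict[str, List[str]] = {
    "girl": ["boy"],           # gender opposition
    "apple": ["pear", "fig"],  # same paradigm of edible fruits
    "gives": ["receives"],     # converse relationship
}

def semantic_inference(
    units: List[str], roles: Dict[str, str]
) -> Tuple[List[Tuple[str, str, str]], List[Tuple[str, str]]]:
    """Return (syntagmatic edges labelled by role, paradigmatic edges)."""
    syntagmatic = [(unit, "has-role", roles[unit]) for unit in units]
    paradigmatic = [(unit, other)
                    for unit in units
                    for other in PARADIGMS.get(unit, [])]
    return syntagmatic, paradigmatic

syn, par = semantic_inference(
    ["girl", "gives", "apple"],
    {"girl": "subject", "gives": "verb", "apple": "object"},
)
```

The two returned edge lists correspond to the two circuits distinguished in the text: the syntagmatic edges are internal to this one utterance, while the paradigmatic edges link each actual unit to the virtual units that could be substituted for it.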

10.4. Between text and circuits

10.4.1. What is meaning?

According to Igor Mel’cuk, a natural language can be summed up as a set of correspondences between the meanings and the texts of the language. A language is thus a set of rules that create correspondences between a text and all possible meanings, and between a meaning and all possible texts11. Note that in natural languages, a text can have many meanings and the same meaning can be expressed by many texts.

In order to understand Mel’cuk’s idea, we have to answer the question “What is meaning?” as clearly as possible.

To explain the meaning of a text in language A (i.e. the concept corresponding to the text), we can provide a paraphrase of the text, i.e. another text in language A, or a well-structured circuit of text units of language A. The result is a definition of meaning that is rather circular, since we can always ask: “But what is the meaning of the texts that explain the meaning?” It seems that circularity is consubstantial with meaning in its explicit dimension. We cannot communicate or think discursively about a subjective experience of meaning without symbolizing it in one way or another.

If we translate the text in language A into a text in language B and say, “The two texts (in languages A and B) have the same meaning”, we have still not isolated the meaning as a manifest entity, but have only shown that the two texts (the two signifying chains) represent the same meaning (the same circuit of signifieds, the same concept). Once again, the problem arises from the fact that the signifieds themselves (the concepts) can never be manifested directly, but only through signifiers.

The question can also be answered by saying that the meaning associated with a text is apprehended by a living human being, that it is embodied in the form of a psycho-corporal resonance, a personal vibration that is largely determined by the memory, learning and emotional and cognitive reflexes of the person who understands or perceives this meaning. But does this vibration represent the totality of the meaning or only its implicit, subjective part, the way one person embodies the meaning?

Let us recall that in the preceding chapter I postulated a universe of concepts, a coherent world of purely intellectual identities that is based only on an abstract machine for symbolic manipulation. I formulated this postulate to scientifically explain the rational faculty characteristic of the human species. We have seen that this hypothesis also allows us to avoid the insurmountable aporias that result from attempts to base a universal semantics on empirical data12. That is why the IEML model formalizes a “virtual” face of meaning, which is transparent to calculation, explicit and theoretical, as the counterpart to its “actual” face, which is opaque, implicit or empirical. On this virtual face, the identity of a concept is a unique node of relationships among concepts – a network13. Let us therefore adopt a working convention whereby the meaning of a text x of language A is the semantic circuit y (the combination of the syntagmatic and paradigmatic circuits) that the structure of language A permits us to infer from text x14, a circuit that could then be translated into languages B, C, D, etc.

In the case of IEML, the meaning of the USL-text will be the syntagmatic rhizome studded with the constellation of paradigmatic stars that automatically correspond to it. If a text in language A is translated (whatever the means of translation) into IEML, we would thus automatically obtain the meaning of this text in the form of a semantic circuit that is readable in all languages, a circuit transformable by calculable functions of all kinds.

10.4.2. Correspondences between chains of signifiers and circuits of signifieds: the natural semantic machine

A text is an arrangement of signifiers. In the case of natural languages, the arrangement is generally that of a chain (a linear sequence) of sounds or characters, but the arrangements can be more complex for other symbolic systems such as architecture, music and choreography. In the case of IEML, the basic text (the arrangement of signifiers) is the USL. A meaning, or a concept, is a circuit of signifieds that explicates the paradigmatic and syntagmatic relationships of these signifieds. The signifieds are necessarily encoded in some kind of symbolism and are therefore represented in turn by arrangements of signifiers15. These definitions can be generalized to most symbolic systems. Thus a symbolic system in general is a set of rules that establish a correspondence between signifying arrangements and graphs of signifieds, between texts and semantic circuits. In more teleological terms, a symbolic system is a tool for representing and manipulating semantic circuits (of meaning or concepts) by manipulating and representing chains of signifiers (text). It is possible that the basic “textuality” of symbolic cognition assumed by authors such as Derrida refers ultimately to the innate human capacity to “decode” texts, i.e. to transform them into semantic circuits16.

From a theoretical point of view, this implies that the basis of symbolic cognition can be represented by an abstract semantic machine, as shown in Figure 10.2, which links three machines (that are equally abstract). The first machine, which I have called the textual machine, produces and manipulates signifiers. The second, the conceptual machine, produces and transforms signifieds, or related concepts. The third, the linguistic engine, interprets the products of one in terms of the other. I would like to point out that the interpretative work of this third machine (in the middle in Figure 10.2) includes both the processes of reading (from text to meaning) and those of writing (from meaning to text).

Figure 10.2. Natural semantic computation


Let us keep in mind that a symbol is a social convention that connects a signifier and a signified. The foundation of symbolic cognition does not so much concern the local relationship between a particular signifier and a particular signified as it does the system of relationships between textual machine (which manipulates signifiers) and conceptual machine (which manipulates signifieds), a system of relationships that is controlled by languages (see Figure 10.2). As we will see in the next chapter, the IEML model is capable of modeling symbolic cognition computationally because it activates a linguistic engine that automatically connects a textual machine and a conceptual machine. Symbolic cognition cannot be reduced to the IEML semantic machine that reciprocally transforms symbols into conceptual categories and manipulates both. It also includes the hermeneutic functions that produce and interconnect ideas. The IEML semantic machine manipulates the concepts used to classify ideas, however, so it is the necessary condition for the hermeneutic functions and for symbolic cognition as a whole.

10.4.3. The independence of the textual and conceptual machines

If in the sentence, “the sky is blue”, I can quite naturally replace “blue” with “gray”, it is because the signifieds “gray” and “blue” are both colors, and are also sky colors. If in the same sentence I replace “blue” with “unconstitutional”, however, the result of the substitution seems less natural, probably because the signifieds of the words “unconstitutional” and “blue” do not belong to the same domain of variations of colors of the sky. Signifieds are organized in systems of differences, or paradigms: colors, virtues, sciences, plants, prohibitions and obligations, etc. These domains of variation constitute classes, which are themselves linked through relationships (between objects and colors, between sciences and virtues, etc.) and form domains of variations of classes. Classes of signifieds are structured in complex hierarchies combined in sets and subsets. The universe of possible signifieds, or concepts, extends without predetermined limits, and the relationships that organize this semantic universe can be as subtle and interlinked as we wish. Through its capacity to structure signifieds, the conceptual machine generates a potentially infinite variety of ways of organizing the practical world and thought.

The textual machine organizes signifiers, i.e. it structures the reflexive representation of signifieds in the phenomenal world according to the rules of a certain symbolic system. In the functioning of a given cognitive system, whenever one of the two machines is activated, the symbolically complementary operation of the other is triggered. One cannot work without the other. But in most processes of natural symbolic cognition, the structures that organize the two machines are autonomous.

Thinking of symbolic encoding–decoding as a “clutch” or interface between two distinct machines is not without consequences. It expresses the inherent autonomy of conceptual operations (which organize signifieds) and textual operations (which organize signifiers). This thesis is obviously in keeping with the widely accepted idea that most systems of symbols are arbitrary, or conventional. To illustrate the autonomy of the order of the signified (or signifier, depending on the starting point chosen), let us return to the example of colors from the beginning of this section. In many natural languages, nothing in the signifier of a color category indicates a color signified. In English, color adjectives are not distinguished by any specific structure or phoneme. Color can be indicated by a noun (“I like red”), an adjective (“the red house”) or a verb (“to redden”). So there is a class of signifieds that has no correspondence with any one phonetic or grammatical category, i.e. a class that cannot be distinguished by signifying (textual) criteria. The category of color terms is determined only by conceptual criteria, although it has a symbolic projection toward a set of signifying terms, or else it would be impossible to distinguish it.

The fact that the two machines are autonomous does not merely mean that another sound could be used to designate the same meaning, because in this case we are speaking only of the relationship between a signifier unit and a signified unit. As I pointed out above, it is not only the individual forms that have no natural or automatic relationship on the other side of the symbolic fold between conceptual operations and signifying phenomena, but also the classes of forms and the mechanisms of manipulation of forms17. This independence in principle between the determinations of a conceptual machine and a textual machine connected by an engine of linguistic inference has important practical implications.

First, it provides the basis for the possibility of translation. If complex programs of manipulation and organization of concepts could not be projected – using various functions of linguistic inference – onto different mechanisms for the organization of signifiers, communication would be limited to people sharing exactly the same systems of signifiers (the same textual machines). However, notwithstanding certain extremist postmodern currents, translations, adaptations and cultural transpositions of all kinds have been carried out all over the world for millennia18.

Second, this autonomy explains the variability of interpretations and concepts that are based on the same system of signifiers. It is well known that different, and even opposing, philosophical or political points of view can be formulated in the same culture and the same language19. We also know that a text can be interpreted in different ways by speakers of the same language.

Third, the respective autonomy of the conceptual and textual machines opens a space for aesthetic and poetic creativity, in which no form is necessarily required to represent a specific meaning in an arrangement of signifiers.

10.4.4. The interdependence of textual and conceptual machines

After declaring the de jure independence between the determinations of the two machines, it must be added that they are almost never, de facto, absolutely independent. It is because the two machines are in principle independent that translation is possible, but it is because their structures weigh – sometimes heavily – on one another that translations are difficult, problematic or provisional. The textual and conceptual machines influence each other and can even have a relationship of iconicity – in the sense that the structuring of signifiers can imitate the intellectual operations it symbolizes. The textual machine is capable of providing an analogy with the conceptual circuits it is intended to represent20. This is why many grammatical categories also correspond to semantic categories21. An obvious example, already cited above, is that verbs generally represent processes and nouns generally represent entities, their representations being produced by different cognitive mechanisms. Moreover, discourse functions like little plays in which each sentence iconizes the “scene” it is intended to represent. To express the same thing, we can choose words (use of passive or active verbs, use of nouns to designate processes) and arrangements of words that produce different mental models22. Distinct intellectual perspectives on the same fact can be expressed by different texts.

Many linguistic and anthropological studies have shown that the grammar and vocabulary of natural languages correspond to unique ways of dividing up and organizing the world. In one famous study, Émile Benveniste highlighted the structural homology between the (supposedly universal) categories of Aristotle and the grammatical categories of Ancient Greek23. In this case, the textual machine of Greek is said to inform Aristotle’s conceptual machine. Conversely, one of the great contributions of critical thought, of “deconstruction” and contemporary cultural studies has been to show how conceptual machines are reified or naturalized in textual machines.

Finally (and this is the closest interdependence between the two machines), there are deliberately constructed symbolic systems such as those of the divinatory arts, games, scientific and musical notations, economic transactions, etc., where the structuring of the signifiers is used explicitly to serve a certain organization of the signifieds. The textual forms are thus aligned as far as possible with the conceptual operations. It is probably in the development of systems of mathematical notation that this effort to align the textual machine with the conceptual machine (and vice versa) is the most striking. It is also here that it is easiest to understand how much symbolic cognition can benefit from the support provided by an effective signifying organization (a textual machine). Textual and conceptual machines never cooperate so closely as in systematic ideographies, whether they are logical, mathematical, chemical, cartographic, musical or other. These ideographies organize a deterministic one-to-one correspondence between textual structures and conceptual structures. IEML is precisely in the lineage of those ideographies deliberately constructed to organize the correspondence of a textual machine with a conceptual machine. In the case of IEML, this correspondence is both deterministic and open to play, since it is programmable.

We will see in the next chapter that the functioning of the IEML semantic machine is analogous to that of the natural semantic machine. The text units it manipulates are organized in layers, classes and roles. It has mechanisms for transformations between concepts and texts, which are implemented by a linguistic engine that includes a dictionary (controlling paradigmatic networks) and rules for the grammatical interpretation of texts (controlling syntagmatic networks). Once the meaning of the IEML texts is represented in the form of paradigmatic and syntagmatic circuits, it can be transformed mechanically. The main difference between IEML and natural languages is that the transformations of its texts and its semantic circuits are automatable. As this metalanguage has the same structure as natural languages, at least one calculable metalanguage exists that is capable of encoding the universe of concepts so as to produce a model that is usable in scientific practice. At least theoretically there is now nothing preventing the realization of Leibniz’s dream.
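The division of labor described above – a dictionary controlling paradigmatic networks and grammatical rules controlling syntagmatic networks – can be sketched as a minimal program. This is not the actual IEML engine; the dictionary entries, class names and the `interpret` function are all hypothetical, and serve only to show how a deterministic mapping from a text to a small semantic circuit might look.

```python
# A toy "linguistic engine" in the spirit described above: a dictionary
# assigns each term to a paradigm class (paradigmatic network), and a
# simple grammatical rule links consecutive terms (syntagmatic network).
# All names and entries are illustrative assumptions, not IEML itself.

DICTIONARY = {
    "sky": "natural-entity",
    "blue": "color",
    "gray": "color",
}

def interpret(text: list[str]) -> dict:
    """Deterministically map a sequence of terms to a small semantic
    'circuit': paradigmatic edges (term -> class) and syntagmatic
    edges (term -> following term)."""
    paradigmatic = {term: DICTIONARY[term] for term in text}
    syntagmatic = list(zip(text, text[1:]))
    return {"paradigmatic": paradigmatic, "syntagmatic": syntagmatic}

circuit = interpret(["sky", "blue"])
# The mapping is a function: the same text always yields the same circuit,
# which is what makes further transformations automatable.
assert circuit == interpret(["sky", "blue"])
```

The contrast with natural language is visible even in this toy: footnote 14 notes that a natural-language text, being polysemous, could yield many circuits, whereas here the text determines its circuit uniquely.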


1 See the discussion of Leibniz’s universal characteristic by one of the contemporary masters of knowledge representation, John F. Sowa, Knowledge Representation: Logical, Philosophical, and Computational Foundations [SOW 2000], pp. 6–7. See also Louis Couturat, La Logique de Leibniz d’après des Documents Inédits [COU 1901].

2 See his Syntactic Structures and the article already cited [CHO 1957, CHO 1963].

3 See the last chapter of Volume 2 of this book and, meanwhile, [LÉV 2010b].

4 I am using the word text in its most general sense: a text may be spoken, signed with gestures, etc., as well as written.

5 See Sylvain Auroux’s book on the grammatization of languages, cited above, and the remarkable Histoire des Idées Linguistiques, which he edited [AUR 1994, AUR 1995].

6 I am generalizing here from André Martinet’s double articulation theory; see his Elements of General Linguistics [MAR 1964].

7 On the cognitive foundations of grammar, see Ronald Langacker Foundations of Cognitive Grammar [LAN 1987].

8 Course in General Linguistics [SAU 1959].

9 The word paradigm also means “worldview” in general or “thought pattern or model in a scientific discipline” in epistemology and the history of science. The latter meaning was popularized by Thomas Kuhn in his The Structure of Scientific Revolutions [KUH 1962], but I am using it here in a strictly linguistic sense, as it was used by Saussure [SAU 1959] and his successors such as Louis Hjelmslev [HJE 1953, HJE 1959].

10 See his Essais de Linguistique Générale [JAK 1981].

11 See Igor Mel’cuk’s Vers une Linguistique Sens-Texte. Leçon Inaugurale au Collège de France [MEL 1997]. This lesson concludes as follows: “We have penetrated the atom and the depths of space; we have learned important things about the origins of our universe and the structure of our genes. But we have not made comparable progress in the field of information processing by the human brain. We know too little about the functioning of our reason, and yet the ‘reinforcement’ of this organ, that is, the creation of powerful tools capable of supplementing certain essential functions of reason, is in my opinion the most urgent task of modern science. Confronting the most crucial problem of the 21st Century – the lack of natural resources on Earth for a population that is growing – we have an acute need for a superbrain, i.e. machines capable of thinking on a scale that humanity alone could never achieve. We need models, and good models, of human thought” [translation]. It seems to me that the Hypercortex theorized in this book corresponds to this superbrain.

12 See section 9.5.

13 See section 9.2.2.

14 Actually, the same text in natural language could, because of its polysemy, lead to the construction of many semantic circuits. This definition of meaning is entirely theoretical.

15 On this point, see the analysis of the concept in section 9.2.

16 On this point, see section 3.3.

17 The philosopher Gilles Deleuze put particular emphasis on the disagreements of cognitive faculties in Difference and Repetition [DEL 1994].

18 For an inventory of the borrowing and circulation of concepts among disciplines, see Isabelle Stengers (ed.), Les Concepts Nomades [STE 1987]. On the problems of translation in philosophy, see “De l’intraduisible en philosophie”, Rue Descartes, no. 14 (1995), and more recently Barbara Cassin (ed.), Vocabulaire Européen des Philosophies [CAS 2004]. Like the famous paradox of Zeno of Elea on the impossibility of motion, the notion of the impossibility of translation is obviously paradoxical, since it denies in theory what is done in practice every day. It would be better to talk about the problems or risks of translation.

19 This simple observation obviously runs counter to the Sapir-Whorf hypothesis (which, in its most simplistic form, perhaps belongs more to certain commentators on those authors than to Sapir and Whorf themselves) that natural languages absolutely determine the categories and thought processes of their speakers. See Edward Sapir, Language: An Introduction to the Study of Speech [SAP 1921] and Benjamin Lee Whorf, Language, Thought, and Reality [WHO 1956].

20 An extensive exploration of the theme of iconization of meaning can be found in my book L’Idéographie Dynamique, vers une Imagination Artificielle [LÉV 1991].

21 This has been clearly shown by studies in cognitive grammar and by psychologists who have studied the relationships between cognition and categorization. See George Lakoff, Women, Fire and Dangerous Things: What Categories Reveal About the Mind [LAK 1987], Ronald Langacker, Foundations of Cognitive Grammar [LAN 1987] and Lakoff and Johnson, Metaphors We Live By [LAK 1980].

22 This point was made by Langacker [LAN 1987]. On the concept of mental model, I refer to the classic work by Philip Johnson-Laird, Mental Models [JOH 1983].

23 See “Categories of thought and language” [BEN 1958], reprinted in Problems in General Linguistics, Vol. 1, pp. 55–64 [BEN 1971]. See also the Sapir-Whorf hypothesis, mentioned above.
