Chapter 8

The Computer Science Perspective: Toward a Reflexive Intelligence

8.1. Augmented collective intelligence

On the conceptual map of the subjects discussed in Part 2, Figure 8.1 examines the technical envelope of the Hypercortex (the digital medium, the Internet). From my point of view, the question that needs to be answered is the following: how can computers optimally contribute to the reflexivity of collective intelligence?

This chapter poses the question of the best possible use of the automated manipulation of symbols, in particular for computer engineers. I will point out the limitations of the models of cognition provided by classical artificial intelligence (AI) and contrast them with the research program in augmented collective intelligence, the cutting edge of which is the construction of the Hypercortex.

With respect to the means for automating cognitive operations, I do not question the usefulness or effectiveness of exploring decision trees, automated reasoning or statistical and probability calculation. I believe that the potential of these techniques could be increased, however, if they were used within the framework of a system of semantic coordinates, such as the IEML semantic sphere, which makes it possible to represent an unlimited variety of cognitive processes by calculable functions within a single transformation group. The use of IEML for encoding meaning would permit the automated manipulation of semantic qualities and relationships in a much more refined fashion than the automated reasoning techniques in use today allow.

Figure 8.1. Position of Chapter 8 on the conceptual map

With respect to the purpose of automatically manipulating symbols, I feel, like Douglas Engelbart, that augmenting human intelligence is more important than replacing it. I also feel, like Seymour Papert, that helping individuals and communities to increase their knowledge of their own mental operations is still the best way to enhance their cognitive potential, and thus that the priority should be to augment the reflexive dimension of intelligence. Finally — unlike certain extremist currents in AI involving the “global brain” and “singularity”1 — I do not believe that a conscious reflection of collective intelligence could exist outside the actual consciousnesses of living human individuals. The Hypercortex will not function as an autonomous subject conscious of itself, but as a mirror of processes of collective intelligence whose images will only be perceived by people.

8.1.1. A new field of research

One of the main hypotheses of the research program presented in this book is that the level of human development of a community and the cognitive power of the creative conversations that drive it are interdependent2. Since digital technologies offer us increasingly effective means for augmenting our individual and collective cognitive processes, it has become essential for us to understand precisely through which technical and cultural factors this augmentation could occur. The augmentation of collective intelligence through digital networks is clearly a new area of scientific research3, as shown by the abundant literature on knowledge management (KM)4 and the interest in social computing and the social media seen in many sectors of the economy and society5.

After remaining in the shadows until the early 1980s, the perspective of augmented collective intelligence has proven its value since the appearance of personal computing and the Internet. Its main pioneers are Paul Otlet (in the 1930s)6, Vannevar Bush (beginning in the 1940s)7, Joseph Licklider8 and Ted Nelson (in the 1960s)9, who had, each in his own way, foreseen and theorized the availability of all information online in the form of hypertext and hypermedia networks.

Douglas Engelbart may be considered the main founder of this new area of research around augmented cognition. He was one of the first to understand the importance computers would have in increasing the creative capacities of individuals and groups10. In the 1960s, digital calculators were still huge, extremely costly machines stored in refrigerated rooms, with scientists in white coats feeding them data on piles of punch cards. Almost no one imagined that computers would become communication tools. At that time, however, Douglas Engelbart was working to develop collaborative devices using digital technology and the interfaces (mouse, windows, icons, hypertext) that would become popular in the mid-1980s and practically universal in the early 21st Century. At a conference on philosophy and computing where this pioneer was a special guest, I had the privilege of discussing augmented collective intelligence with him. He confirmed that, to him, collective intelligence was a program of scientific and technical research, but added that this did not necessarily imply wholesale approval of all views of collective intelligence. If collective intelligence is understood in this way as a program of research, its opposite is not collective stupidity, but actually AI.

Historically, the aim of AI from the second half of the 20th Century was to simulate, or even surpass, individual cognitive performance by means of an information-processing automaton. In contrast, the research program on augmented collective intelligence initiated by Douglas Engelbart and a few others aimed to increase the cognitive performance of individuals and groups by means of a communication environment filled with information-processing automata. The research on AI did indeed lead to interesting theoretical advances in the cognitive sciences and numerous useful technical innovations. In fact, what is now called AI covers most of the technical advances in computer science, such as pattern recognition, automated problem-solving, automated reasoning — including probabilistic reasoning — machine learning and natural-language processing11.

The technical, cultural and social evolution of the past 30 years — personal computing for everyone, the Internet, the Web, social media and “augmented reality” through wireless devices and mobile access to the digital medium — has massively confirmed the relevance of the program on augmented collective intelligence. Although AI technologies function perfectly and are used almost everywhere, we do not primarily call on computers to think for us or imitate our intelligence, but rather to augment our capacities for communication, collaboration, multimedia creation and navigation in fictional worlds.

8.1.2. A direction for cultural evolution in the long term

The visions and laboratory work of the pioneers from the 1930s to the 1960s only began to become a social reality with the invention of the personal computer at the end of the 1970s and the success of intuitive “look and feel” interfaces in the mid-1980s (Apple's Macintosh dates from 1984). In this way computers became tools of communication and multimedia creation for everyone, whereas until the 1970s they were just arithmetic and logical calculators reserved for scientists, statisticians and managers of big companies.

Thanks to the invention of URLs12, HTTP13 and HTML language14 (based on SGML15), Tim Berners-Lee brought the communication possibilities opened up by the interconnection of computers to the general public16. In standardizing addresses, the exchange of hypertext links and the description of Web pages, these “linguistic” inventions led to the explosion of social use of the Internet starting in the mid-1990s.

The invention and development of the Web should be seen as part of a long-term techno-cultural trend, and there is no indication that this trend will not continue and even accelerate in the centuries to come. I will cite only three important authors. Henry Jenkins, one of the best analysts of contemporary popular culture, showed in Convergence Culture (2006)17 that collective intelligence and participatory culture were the main directions in which contemporary digital communication was evolving. Tim O'Reilly, publisher, conference organizer, great agitator of the high-tech world in the United States and inventor of the term Web 2.0, explicitly relates the whole issue of innovation in digital communication to the concepts of collective intelligence and collective mind. Finally, the influential Clay Shirky has clearly shown in his last two books, Here Comes Everybody and Cognitive Surplus, that the decrease in transaction and communication costs brought about by the Internet is enhancing our capacity for collaborative creation18. As the title Cognitive Surplus suggests, we need to think about the digital medium in terms of cognitive augmentation.

If I had to define the direction of research on augmented collective intelligence in a few words, I would characterize it as the development of new universal symbolic instruments designed to exploit the calculating power and dynamic, interconnected nature of the new writing media. Icons, hypertext links, windows, spatial/visual tracking devices, standards for document communication and description are some of these new symbolic instruments. This research program is an extension, in the new digital communication environment, of the process of increasing the power of human language that began with the invention of writing (3000 BCE) and continued with the creation of the alphabet (1000 BCE), the widespread use of printing (1450) and the electrical media (19th and 20th centuries). The semiotic tools of today — languages, intermedia symbolic systems and software — are increasingly closely intertwined with individual and collective cognitive mechanisms, multiplying and transforming the human capacity to create meaning.

The purpose of this chapter is to trace a clear, reasoned direction for research on augmented collective intelligence at the beginning of the 21st Century. The research program proposed here is based on achievements that are already available (interactive multimedia and augmented reality environments, web of data, AI and ubiquitous computing19) and points unequivocally to the new symbolic territories to be conquered: the Hypercortex, containing an information economy coordinated by the IEML semantic sphere. I will often point to certain limitations of AI. I have no criticism of AI as a body of knowledge, techniques and methods. On the contrary, I feel we will have increasing need for the resources provided by this leading discipline of computer sciences, and to me it seems impossible to create a Hypercortex reflecting collective human intelligence without relying massively on the resources of AI. However, I question the philosophy of AI with respect to the ultimate purpose of the automated manipulation of symbols (to create intelligent, even conscious, machines) or with respect to the exclusivity of certain technical means of modeling cognition (exploration of graphs, automated reasoning, statistical and probability calculation).

8.2. The purpose of automatic manipulation of symbols: cognitive modeling and self-knowledge

8.2.1. Substitution or augmentation?

Popular fantasies and science fiction films often feature machines that have gained their autonomy and attempt to dominate humans. Similarly, in the 20th Century, journalists loved to report chess battles between grandmasters and computers — especially when the machine won. This type of story struck a chord with the public: “computers have become more powerful than man”. While it is true that the human species is increasingly dependent on the machines it manufactures and uses, it is absurd, however, to seriously imagine any kind of independence of machines with respect to humans.

Do we say that “man has been surpassed by machines” when a car, a train or an airplane travels faster than a human on foot? No, because it is clear to everyone that the human is being transported by the machine rather than surpassed by it. The same is true at another level for the automated manipulation of symbols. In mechanizing certain cognitive operations, calculating automata “transport” human intelligence in a faster and more powerful system for managing information, communication and thought.

Even if the program that beats a grandmaster at chess uses some of the heuristic shortcuts of human players, it mostly owes its effectiveness to its brute calculating power. It explicitly simulates the consequences of millions of possible moves, one by one, which, admittedly, no human player can do, but which does not really correspond to our intuitive concept of intelligence.
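The brute-force search described above can be caricatured in a few lines of code. The sketch below is a plain minimax over an invented toy game (states are numbers, a move adds 1 or 2, and the score is the final value); it is in no way a chess engine, but it shows how an automaton exhaustively evaluates every line of play rather than "intuiting" a move.

```python
# Minimal minimax sketch of brute-force game-tree search.
# The game is a deliberately trivial stand-in: states are integers,
# each move adds 1 or 2, and the "score" of a final position is the
# state itself. Real chess programs add pruning and heuristics, but
# the exhaustive shape of the search is the same.

def minimax(state, depth, maximizing, moves, score):
    """Exhaustively evaluate every line of play `depth` moves ahead."""
    if depth == 0:
        return score(state)
    children = [move(state) for move in moves]
    values = [minimax(c, depth - 1, not maximizing, moves, score)
              for c in children]
    return max(values) if maximizing else min(values)

# Toy game: two possible moves, score = final state value.
moves = [lambda s: s + 1, lambda s: s + 2]
best = minimax(0, 3, True, moves, lambda s: s)  # examines all 8 lines of play
```

Even at depth 3 with two moves, the automaton visits every one of the 2^3 terminal positions; at chess-like branching factors this is the "millions of possible moves" the text refers to, a feat of enumeration rather than of insight.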

Starting from a detailed analysis of several examples, I showed in my 1992 book, De la Programmation Considérée comme un des Beaux-arts20, that expert systems — or knowledge-based systems — function more as media for distributing expertise, modifying the cognitive ecology of the environments in which they are implemented, than as AIs purely and simply replacing experts. In one of the four examples analyzed, I myself played the role of cognitive engineer, helping some experts to formalize their empirical knowledge in the form of machine-executable rules. I observed that the process of knowledge engineering I was carrying out with the experts allowed them for the first time to explicitly envisage their own decision-making process and finally to perfect their methods. For the users, the system functioned as a checklist, a support for practical learning and a decision-making aid in complex cases. All this had nothing to do with some omniscient machine replacing the human. In short, although in the late 1980s people were still talking about AI to designate knowledge-based systems, actual practice was tending instead toward augmented intelligence. My approach is confirmed by the fact that knowledge-based systems are today generally regarded more as tools for KM or decision-making aids than as AI programs.

In short, for the research program on augmented intelligence, the main purpose of the automation of symbolic processing is not to obtain machines that “think for us”, but rather machines that increase our individual and social power in information processing, communication and reflection. The IEML-based Hypercortex we are discussing with regard to the future development of augmented intelligence is thus not a rival of the Cortex. On the contrary, cortical intelligence is augmented along the autopoietic loop21 that reflects it in the Hypercortex. As for the Hypercortex, it has absolutely no autonomy and no meaning outside this Cortex-Hypercortex loop controlled by creative conversations (see Figures 7.4, 7.6 and 7.7).

8.2.2. Modeling of separate or connected intelligences?

The two research programs — augmented intelligence and AI — claim to model human cognitive processes. Since its beginnings in the mid-1950s, AI has proposed to simulate separate individual intelligences. The augmented intelligence of the early 21st Century, on the other hand, because it is willing to be informed by the traditions of the arts, humanities and social sciences, knows that there is no point in trying to model human symbolic cognition without including the conventional and collective dimension of symbolic systems.

As I demonstrated in Chapter 3, human symbolic cognition is essentially — and not just accidentally — cultural. Symbolic systems only exist at the social level, so any modeling of human intelligence that aims for a minimum of completeness and coherence must tackle collective intelligence. To clarify: individuals are obviously intellectual actors and it would be absurd to absolutely disallow modeling of their cognitive processes. Individual intelligence cannot, however, draw its coherence from itself22. It deals with signs that belong to conventional symbolic systems and thus only exist fully at the collective level: languages, disciplines, rituals, etc. It is ultimately meaningful only in social interaction and on a cultural horizon. From the point of view of its modeling activity, augmented intelligence therefore does not consider individuals as autonomous, separate intellectual centers, but rather as agents who are coordinated within one or more collective intelligences. This obviously does not prevent augmented intelligence from working for the benefit of its individual users, for example, to perfect their personal KM and to augment the reflexivity of their intelligence.

We must not simply contrast individual intelligence and collective intelligence. Instead the modeling of a centralized, separate individual intelligence must be contrasted with the consideration of individual intelligences that are very real but whose activity only becomes meaningful in interdependence with the thinking societies and shared symbolic systems that must be the main targets of the modeling. It is precisely the role of the IEML semantic sphere to serve as this background of interdependence against which any process of symbolic cognition stands out.

The classic AI program is a closed system. It can be represented by a database to which rules of inference are applied. It should be remembered that this approach was stabilized from the 1960s to 1980s, a time when the Web did not yet exist and social computing was only envisaged by a few pioneers. Updated in the context of the Web by the ontologies of the web of data, the traditional method of AI consists of organizing automated reasoning (controlled by logical rules) using fact bases. The existing fact or data bases are, however, fragmented in their conceptual organization and the many available ontologies (sets of logical rules describing the conceptual structure of a field) are often incompatible. It is also somewhat worrying that the most advanced features of the web of data23 in 2011 (ontologies formulated in OWL) are ultimately only adaptations of rule-based systems from the 1980s. It should also be noted that the web of data project — which is the heir of classical AI in this respect — does not explicitly aim to provide a reflexive scientific model of collective human intelligence. Is it not because of its philosophical roots in classical AI that the web of data is stymied by the multitude of incompatible ontologies and has been unable to achieve the same success as the “Web of pages”?

Thus, although our ultimate goal is certainly the same (to augment collective human intelligence), my vision differs from that of Tim Berners-Lee (the contemporary leader of the web of data project) on the fundamental point of addressing in the digital medium. Tim Berners-Lee feels that URLs are the ultimate addressing system of the digital medium (RDF being the standard for constructing graphs from URLs) and that URLs must be semantically opaque because of the way they are constructed. He does not believe it is possible to construct a system of semantic coordinates of the mind, addressing concepts transparently. I feel, on the contrary, that while URLs are indispensable for addressing data, we need a system for addressing metadata, a system that is transparent to semantic calculation — USLs — to create a real reflexive model of collective intelligence24.

I am here proposing a research and development strategy distinct from that of AI. It involves, first, modeling the processes of symbolic cognition using a universal system of semantic coordinates25. Second, drawing on the tradition of augmented intelligence, the Hypercortex coordinated by the IEML semantic sphere is certainly connected to the web of data, but its approach is radically oriented toward the reflexive modeling of collective interpretation games in dialog in the open, social, conversational digital environment.

8.2.3. Conscious machines or machines that mirror collective cognition?

A certain extreme view of AI26 proposes to build machines that not only behave intelligently but are actually conscious. Similarly, certain futurists feel that the global brain represented by the Internet could soon become conscious. Contrary to these trends, I am proposing a research program on augmented intelligence that aims to make real human individuals more conscious of their own individual and collective cognitive processes. This program involves using the Hypercortex to create a scientific observatory of the cognitive processes of creative conversations and a tool for dialog among these conversations. Like Nova Spivack27, I feel that the collective intelligence of the human species could one day become conscious. Like him, I think this will only come about through suitable reflection of the functioning of the Hypercortex28 in the consciousnesses of biologically embodied human beings, rather than through some supposed machine consciousness.

8.2.3.1. Embodiment

It may be useful here to recall two classic criticisms of the research program to create conscious machines put forward by Hubert Dreyfus and Joseph Weizenbaum.

The philosopher Hubert Dreyfus29 starts from a phenomenological analysis of human consciousness. The knowledge we have of our psychological state is situated in a physical environment (at least in the background) polarized by desires, expectations, intentions, fears, etc. This structure of human consciousness is thus rooted in corporeal animal experience, and symbolic discursivity is never absolutely separate from this primordial experience. In short, computers cannot be conscious because they have no bodies. According to Dreyfus, the fact that computer programs can be executed independently of their material implementation confirms the disembodied nature of AI and therefore the impossibility of it ever achieving consciousness.

The criticism of Joseph Weizenbaum30, who is himself a famous practitioner of AI, takes a completely different tack. Weizenbaum starts from Turing’s definition, according to which we will have achieved true AI when a human is unable to determine whether he or she is conversing (in writing) with a machine or another human31. In fact, in 1966, Weizenbaum had produced an AI program (ELIZA) that generally gave users the impression of conversing with a human psychotherapist. Since this illusion was created by means of a relatively simple program, whose author acknowledged that it consisted of only a few pages of code, it became obvious to Weizenbaum that attributing consciousness or even intelligence to a machine was nothing but an anthropocentric projection. Even much more sophisticated AI software differs from ELIZA only in its degree of complexity, not its nature; attributing conscious intelligence to them would still be a form of projection.
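The kind of shallow pattern matching that made ELIZA convincing can be suggested in a few lines. The rewrite rules below are invented for illustration and are not Weizenbaum's original DOCTOR script; the point is simply that reflecting a captured fragment of the user's sentence back as a question requires no understanding whatsoever.

```python
import re

# ELIZA-style rewrite rules (invented examples, not Weizenbaum's
# original script). Each pattern captures part of the user's input
# and echoes it back inside a canned question; the final catch-all
# rule fires when nothing else matches.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def respond(utterance):
    """Return the first matching rule's template, filled with captures."""
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*match.groups())
```

Typing "I am unhappy" yields "Why do you say you are unhappy?": the program has merely moved characters around, yet users readily projected a listening therapist onto it, which is exactly Weizenbaum's point.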

To come back to the augmented intelligence program, the ultimate basis of the intelligence of the Hypercortex is the intelligence of the biological Cortex and, with this cortical intelligence, the activity and sensitivity of the living bodies of human beings immersed in the environment of the biosphere on which they are dependent. The only real media of reflexive consciousness are living human bodies: this is the philosophical premise of the research program on augmented intelligence. With this clearly stated thesis, this program opens up a research direction that is more useful for sustainable human development than that of the conscious machine, but also, in scientific terms, bolder and more productive.

8.2.3.2. Know thyself

Rather than working to create conscious machines, augmented intelligence works to equip human intelligence with a better knowledge of its own cognitive processes. Ultimately, its aim is to increase the reflexivity of human intelligence. This approach is consistent with that of Seymour Papert, a major player in AI and one of the founders of the MIT Media Lab. In 1980, in Mindstorms: Children, Computers, and Powerful Ideas32, Papert showed that controlling or programming symbol-manipulating automata could have remarkable cognitive benefits. By giving us back an explicit image of our own way of thinking in the form of the execution of programs we have designed, computers provide us with the means to improve our thinking. Papert’s findings are clearly in line with the augmented intelligence research program, for which the best way to develop human cognition is to help it to know itself. In entering a reflexive, or self-referential, loop, intelligence embarks on the path to open learning. The automation of symbol manipulation is thus used to enhance individuals’ autonomy and cognitive power and their creative conversations. This approach is obviously in line with one of the oldest and most universal precepts of philosophy. Is there any need for a lengthy justification of the Socratic adage to “know thyself”? This imperative is the foundation of most of the great wisdom traditions, as well as of Greek philosophy. The main difference here is that it is addressed to collective human intelligence in the new digital medium of its development.

8.2.3.3. Reflexive consciousness and computation of meaning

If our aim is not to create conscious machines, how should we understand the claim that the Hypercortex coordinated by the IEML semantic sphere computes meaning?33 Once again, the goal is not to construct a machine capable of consciously understanding the meaning of linguistic utterances. The semantic sphere will coordinate automata that will increase our capacities for exchanging and manipulating linguistic utterances — utterances that we, embodied and conscious living beings, will understand.

In the project of the Hypercortex based on IEML, the very mechanical “understanding” required of the machines is thus limited to three main processes. First, automata process texts encoded in IEML. Second, they establish the correspondence, in both directions, between IEML texts and semantic circuits that are readable in natural languages by creative conversations. Third, these automata transform, traverse and measure the circuits of the semantic sphere. It should be kept in mind that the set of nodes (IEML texts) and the set of circuits (connecting the IEML texts) of the semantic sphere are two transformation groups in functional correspondence. Therefore three types of manipulation can be automated in a coordinated fashion: (i) manipulation of the nodes; (ii) manipulation of the semantic circuits between the nodes; and (iii) manipulation of the automated correspondence between nodes and circuits.

By automating these calculations, the IEML Hypercortex will help creative conversations conceive relevant semantic circuits for structuring data and the appropriate collective interpretation games that will use these circuits. The Hypercortex will also assist them in navigating the flows of data channeled by the circuits and evaluated by the games. Ultimately, the Hypercortex will involve them in a process of constant improvement of their semantic circuits and their collective interpretation games.

This is therefore not a matter of giving the semantic automata an actual consciousness of the meaning of the natural languages, although their sophisticated behavior might lend itself to such an anthropocentric projection. Living individuals have already ensured that natural languages are rooted in concrete human experience. Since the meaning of utterances in natural languages is already actualized in any human consciousness, there is no need to artificially reproduce this actualization using our semantic machinery. The computation of meaning therefore designates mainly (and this is already considerable!):

– the group structure of IEML texts and semantic circuits;

– the various possibilities for calculating semantic distance based on this structure;

– the reciprocal translation between IEML and natural languages34.
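As a caricature of the second point above, here is one naive way a "semantic distance" becomes calculable once concepts are placed in a shared structure: shortest-path length in a concept graph. The graph and its labels are invented for illustration only; IEML's actual distances would be derived from its algebraic group structure, which this sketch does not model.

```python
from collections import deque

# Invented toy concept graph (adjacency sets); NOT an IEML structure.
CONCEPTS = {
    "animal": {"dog", "plant"},
    "dog": {"animal"},
    "plant": {"animal", "tree"},
    "tree": {"plant"},
}

def semantic_distance(graph, a, b):
    """Breadth-first search: number of edges on a shortest path, or None."""
    frontier, seen = deque([(a, 0)]), {a}
    while frontier:
        node, dist = frontier.popleft()
        if node == b:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # b is unreachable from a
```

Here `semantic_distance(CONCEPTS, "dog", "tree")` is 3 (dog, animal, plant, tree). The value of a genuine coordinate system is that such measures would follow from the structure itself rather than from an ad hoc hand-built graph like this one.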

8.2.3.4. The Hypercortex: serving reflexive intelligence

I will now summarize the goals of contemporary augmented intelligence. At a time of quantum computing, photonics, nanorobotics, societies of agents and augmented reality, a distributed environment rich in robots and interconnected software agents is becoming the most suitable medium for distributed human cognition. In this ubiquitous computing environment, the augmented intelligence program does not aim to simulate individual intelligence, but rather to develop the reflexive powers of symbolic cognition both individually and collectively. The project of building the Hypercortex is therefore not about giving the omnipresent media environment a centralizing “AI”, but rather using the new massively distributed software ecology to create and share meaning peer-to-peer in order to improve individual and collective capacities to produce, manage and appropriate knowledge. The techno-cultural basis of this plan for omnidirectional cognitive growth is the new symbolic system, IEML, designed from the outset to use the calculating and communication power of the digital medium to increase the reflexivity of collective intelligence.

8.3. The means of automatic manipulation of symbols: beyond probabilities and logic

Having discussed the goals of augmented intelligence, I now come to the technical means. The basis of my argument is as follows: the traditional arsenal of AI — exploration of graphs, automated reasoning and statistical and probability calculation — is necessary for augmented intelligence, but it is not sufficient.

8.3.1. Exploration of graphs

Graph theory is one of the foundations of computer science, as it is of many areas of engineering concerned with building and maintaining networks. It is also beginning to be recognized as fundamental to many other areas of research, including the human sciences, from linguistics to sociology35. The IEML semantic topology36 provides a new framework that can be briefly summed up in the following four points:

– the nodes and links of the graphs of the semantic sphere have meanings, labeled in STAR-IEML (Semantic Tool for Augmented Reasoning) by USLs37;

– there is a transformation group38 on the USLs;

– the transformation group on the USLs leads to the existence of a transformation group on the semantic graphs labeled by the USLs;

– there is an automatable correspondence between the USLs and the semantic graphs. Not only are the nodes of the IEML semantic graphs labeled by USLs, but each distinct USL label itself corresponds to a distinct semantic graph. This means that each node and each link of the semantic sphere projects an image of its meaning in the graphs of the semantic sphere. The topology of the semantic sphere is self-reflexive.

It is due to this self-reflexive property that the IEML semantic sphere can be used as a system of coordinates for symbolic cognition. Within this system of coordinates, automated traversal of graphs and automated reasoning (which attributes truth values to the nodes or determines the energy of semantic currents), as well as statistical and probability calculations, gain power and take on new meanings. My proposal is not intended to denigrate networks but, on the contrary, to increase the power of models from graph theory, using a semantic transformation group.
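The self-reflexive correspondence claimed in the four points above can be caricatured in a toy data structure: every label that appears as a node is itself the key of a graph, so each label "projects" its own subgraph. The labels here are invented stand-ins, not actual USLs, and the dictionaries stand in for what would be a computed algebraic correspondence.

```python
# Toy illustration (invented labels; NOT actual IEML/USLs).
# Each graph is a dict mapping node labels to sets of neighbour labels,
# and each label is in turn the key of its own graph: the structure
# "mirrors" itself, label -> graph -> labels -> graphs ...
GRAPHS = {
    "A": {"B": {"C"}, "C": set()},
    "B": {"C": {"A"}, "A": set()},
    "C": {"A": {"B"}, "B": set()},
}

def graph_of(label):
    """The graph that a label itself denotes."""
    return GRAPHS[label]

def self_reflexive(graphs):
    """Every label occurring anywhere must itself denote a graph."""
    node_labels = {n for g in graphs.values() for n in g}
    edge_labels = {m for g in graphs.values() for ns in g.values() for m in ns}
    return (node_labels | edge_labels) <= graphs.keys()
```

In this sketch `self_reflexive(GRAPHS)` holds: no label points outside the system, which is the minimal condition for the structure to serve as its own coordinate space.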

8.3.2. Limitations of statistics

In the exact sciences, Claude Shannon was the first researcher to suggest a precise, i.e. calculable, definition of information. As we will recall39, he says that the quantity of information carried by a message depends on the improbability of the message. This definition, while it is precise and true, only concerns the quantity of information. Although it is perfectly valid from an engineering perspective, we all know that the relevance of information depends much more on its meaning and its value in a human context than on the improbability of its symbolic structure calculated according to purely statistical criteria. This is the first limitation of statistics. It is precisely the relevance of information that the IEML collective interpretation games are intended to explicate and make calculable40.
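Shannon's quantitative definition can be stated directly: the information content of a message m with probability p(m) is I(m) = -log2 p(m) bits. The sketch below computes this "surprisal" and makes the limitation discussed above concrete: a rare message carries more bits than a common one, regardless of what either message means.

```python
import math

def surprisal_bits(p):
    """Shannon's information content of a message with probability p."""
    return -math.log2(p)

# An even-odds message carries 1 bit; a 1-in-1024 message carries 10 bits.
# Nothing in the calculation looks at the message's meaning or relevance.
common = surprisal_bits(0.5)       # 1.0
rare = surprisal_bits(1 / 1024)    # 10.0
```

The number of bits is blind to content: "your house is on fire" and an equally improbable string of gibberish carry exactly the same Shannon information, which is why relevance calls for something beyond statistics.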

The second limitation: a system of semantic coordinates cannot be based on statistics. As I will show in the next chapter on the formal properties of IEML, the system of coordinates must be an algebraic transformation group with strong internal coherence. This in no way invalidates the value of statistical calculations. On the contrary, the availability of a system of semantic coordinates will make it possible to generate statistics that are even more useful and meaningful. I am thinking, for example, of the statistics on current flows in the semantic circuits and the links between these flows and data.

8.3.3. Limitations of logic

Logic formalizes reasoning on propositions about which it does not necessarily have anything to say. The meaning and effectiveness of knowledge in human experience is not its concern. When Wittgenstein, at the end of the Tractatus41, declared that “what we cannot speak about we must pass over in silence”, we should hear this as an admission of science's inability to express what makes human existence meaningful. But can science in its creative development be reduced to a series of logical propositions corresponding to “objective facts” and the application of valid reasoning to these propositions? I do not think so. It is true that logic allows only the stringing together of “tautologies”, since its function of faithfully transmitting truth values (“Garbage in, garbage out”, as computer programmers say) does not permit any true creation. Perhaps this is one of the reasons Wittgenstein began to feel, toward the end of his career, that logic might not be all there was to language. Human symbolic cognition generates an unlimited number of language games, of which logical reasoning, though important, is only one specific case.

Austin42 showed clearly that many of the practical functions of natural language follow rules other than those of logic: orders, promises, verdicts, etc. He assigned to pragmatics the study of these non-logical uses of language, uses in context, whose purposes are other than the faithful transmission of truth values from one proposition to another. Searle43 pointed out that speech acts are indissociable from a meaning, and thus an intention of meaning and action that is present in all linguistic expression. Intentionality and pragmatic force go beyond the logical or even narrowly semantic dimension44 of the use of speech; they concern the enunciation considered as an event that changes the context. This is why the collective interpretation games structured by the IEML semantic topology deal with meaning and the relevance in context of formal ideas considered as enunciations45. Thus the IEML Hypercortex will be able to model — beyond logic — the pragmatic acts and language games that give rise to the richness of human symbolic cognition.

Freud46 and Jung47 showed that the ordinary human mind carries out operations of projection, inversion, displacement, metaphorization, analogical transformation and other transmutations on a daily basis. These richly diverse non-logical cognitive operations have been cultivated and theorized for centuries by poets and shamans, long before psychologists became aware of their importance.

Augmented intelligence makes a clear choice in favor of modeling the entire range of mental operations made possible by symbolic cognition, including the metamorphosis of mental functions themselves. In doing so, it is in keeping with Marshall McLuhan’s defense of a tradition of the humanities that would not be reduced to dialectics, i.e. to the refinements of logical reasoning, but that would also include the complexities of grammar and rhetoric. Grammar should be understood here as the literary tradition in all its diversity, and rhetoric should not be limited to figures of speech or the art of persuasion but should include reflection on the proper use of language in society48. In short, intelligence as reflected and augmented by the Hypercortex will be able to model many mental operations other than those of logical reasoning.

8.3.4. Symbolic cognition cannot be modeled without full recognition of the interdependence in which it originates

AI was basically correct in wanting to formalize human cognition using the resources now available for the automated manipulation of symbols. In its haste, however, it neglected too many factors and ignored the legitimate rights of other disciplines.

It is likely correct that the dynamics of the circulation and processing of information can be abstracted from their material implementation and studied in themselves. This was the great discovery of cybernetics49. Yes, it is possible to have machines carry out certain functions formerly believed to be exclusive to plants and animals. Despite this, it seems rather presumptuous to imagine that with a few logical rules in a database we can recreate the sensory-motor, dreaming and fantasizing consciousness that arises in us from the opaque flow of physiological processes. Through mortal bodies immersed in biospheric interdependence, from layer to layer of encoding50, reflexive consciousness is deeply rooted in the totality of nature. In terms of hardware, AI has neglected the rights of the life sciences by refusing to take into account the physical/biological embodiment of human experience.

In terms of software, AI imagined that it would directly model symbolic cognition using automated reasoning, exploration of decision trees and statistical and probability calculation. In doing so, it neglected the cultural and social dimensions of meaning and seemed to ignore51 the irreplaceable contribution of the hermeneutic tradition in the human sciences. There is no thought without memory, and many layers of encoding and interpretation are needed to construct a memory worthy of the name52!

In recognizing the rights of physical/biological nature and of the cultural, linguistic, hermeneutic and symbolic traditions, intelligence augmented by the IEML Hypercortex acquires the means to produce computational models of symbolic cognition that are both more complex and more powerful than those of traditional AI.


1 See [KUR 2006].

2 On the theme of human development, see section 5.1.

3 See Brigitte Juanals and Jean-Max Noyer (eds.), Technologies de l'Information et Intelligences Collectives [JUA 2010]; Epaminonda Kapetanios, “On the notion of collective intelligence: opportunity or challenge?” [KAP 2009]; Nguyen Ngoc Thanh et al., Computational Collective Intelligence, Semantic Web, Social Networks and Multi-Agent Systems: First International Conference [NGU 2009] (the latter is concerned more with the collective intelligence of software).

4 See works already cited by Nonaka, Wenger, Dalkir and Morey et al.: [DAL 2005, MOR 2000, NON 1995, WEN 1998].

5 I am speaking here of an intrinsic interest in the subject, since it presents a great many economic, social and cultural opportunities (e.g. Rheingold, Weinberger, Tapscott, Pascu, Shirky and Li [LI 2008, PAS 2008, RHE 2002, SHI 2008, SHI 2010, TAP 2007, WEI 2007]). There is also a “reactive” interest that comes from the threat that the new forms of communication pose for the economic models and institutional structures suited to the old forms of communication.

6 See section 4.3.2 and works already cited [OTL 1934, OTL 1936].

7 See the famous article “As we may think” [BUS 1945].

8 Joseph Licklider was one of the first to foresee the development of electronic mail and virtual communities. See “Man-computer symbiosis” [LIC 1960].

9 See Literary Machines [NEL 1980].

10 See his pioneering work Augmenting Human Intellect [ENG 1962] and the historical book by Thierry Bardini on Engelbart's work, Bootstrapping, Coevolution, and the Origins of Personal Computing [BAR 2000].

11 See Stuart Russell and Peter Norvig, Artificial Intelligence, A Modern Approach [RUS 2010]. Peter Norvig was director of research at Google in 2010.

12 Uniform Resource Locator.

13 HyperText Transfer Protocol.

14 HyperText Markup Language.

15 Standard Generalized Markup Language, the main inventor of which was Charles Goldfarb.

16 See Tim Berners-Lee, Weaving the Web [BER 1999].

17 See Convergence Culture: Where Old and New Media Collide [JEN 2006].

18 See [SHI 2008, SHI 2010].

19 Also known as pervasive computing.

20 See [LÉV 1992b].

21 Autopoietic means “self-producing”.

22 Let us recall in passing that for the great Russian psychologist Lev Vygotsky (see his major work Thought and Language [VYG 1986]), thought that develops within the individual is, from a genetic point of view, an internalization of dialog. I already alluded to it in section 3.6. Clinical psychology and the various schools of psychoanalysis take the internalization of social relationships even further.

23 What is known as the web of data, linked data or the semantic Web covers a set of standards (RDF (Resource Description Framework) and OWL (Ontology Web Language) in particular) and methods, which I will not go into here. To learn more, see [BER 1999, FEI 2007, HEN 2008].

24 I wish to thank Harry Halpin, PhD, who helped me define as clearly as possible the difference between the web of data research program and that of the Hypercortex.

25 In my article “The IEML research program: from social computing to reflexive collective intelligence” [LÉV 2010a], there is a more extensive discussion of the relationship between the IEML research program and the web of data project sponsored by the WWW Consortium. I will simply point out here that, in practice, the two approaches are complementary: the ontologies can be expressed in IEML, and URLs could encode USLs. It is thus possible to develop the Hypercortex by capitalizing on all the efforts that have been made as part of the work on the web of data. I would simply like to point out here the theoretical differences between the two approaches.

26 Represented, for example, by Ray Kurzweil; see [KUR 2006].

27 http://www.novaspivack.com/uncategorized/will-the-web-become-conscious.

28 Nova Spivack talks about a Metacortex and not a Hypercortex, but the basic idea seems to be the same.

29 Hubert Dreyfus, What Computers Still Can't Do: A Critique of Artificial Reason [DRE 1992].

30 Joseph Weizenbaum, Computer Power and Human Reason: From Judgment To Calculation [WEI 1976].

31 See the famous article by Turing “Computing machinery and intelligence” [TUR 1950].

32 [PAP 1980].

33 I will discuss the question of the automated calculation of meaning in the conclusion of this book, where I will answer the classic objection that meaning is not calculable because it depends on context. It is true that meaning depends on context. What marks the cognitive model of the Hypercortex is precisely that it formalizes this context at the four levels of language, utterance, enunciation and narrative.

34 I am speaking here only of meaning at the level of language and utterances. For meaning at the level of enunciation and narrative in context, see the collective interpretation games described in Chapter 13 and the general conclusion in Volume 2.

35 I dealt with this point and indicated the main authors on the subject at the beginning of section 9.4.1.

36 See Volume 2 and [LÉV 2010b].

37 Remember that the USLs are valid IEML texts or expressions and that IEML is a regular language in Chomsky’s meaning of the term.

38 The concept of a transformation group will be developed philosophically in section 9.4 and mathematically in Volume 2. See the article in Wikipedia for an introduction: http://en.wikipedia.org/wiki/Group_(mathematics).

39 See the last paragraph in section 2.2.2.

40 In the collective interpretation games, ideas are represented by the triad (URL, C, USL). The URL represents the address of the data on the Web. The USL represents the address of the metadata in the IEML semantic sphere. C represents the polarized intensive value (positive or negative) of the semantic current. See Chapter 6 for a general philosophical approach, and section 7.4.7 for an overview of the CI Games. See sections 13.7 and 13.4 for a detailed technical approach.

41 See [WIT 1921] and the penultimate note in section 3.1.3.

42 Austin was already mentioned in the last note of section 3.1.3; see his How to Do Things With Words [AUS 1962].

43 Searle was also mentioned in the last note of section 3.1.3; see [SEA 1969, SEA 1983].

44 If we limit semantics to the content of strictly locutionary acts (independently of their illocutionary or perlocutionary force), i.e. to the grammatical meaning of linguistic utterances.

45 On this point, see section 13.5.2.2.

46 See Freud's The Interpretation of Dreams [FRE 1933].

47 See Carl Gustav Jung, Psychology and Alchemy [JUN 1968].

48 On this subject, I strongly recommend Marshall McLuhan’s doctoral thesis, The Classical Trivium: The Place of Thomas Nashe in the Learning of His Time [MAC 1943].

49 On this point, see section 2.2.2.

50 On this point, see section 2.3.

51 With a few notable exceptions; see, for example, Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design [WIN 1987].

52 The theme of memory will be discussed in detail in Chapter 13.
