Chapter 20

Analogy, Mind, and Life

Vitor Manuel Dinis Pereira, LanCog (Language, Mind and Cognition Research Group), Philosophy Centre, University of Lisbon, Lisbon, Portugal

Abstract

In this chapter, I will show that the kind of analogy between life and information that seems central to the claim that the artificial mind may represent an expected advance in the evolution of life in the universe is like the design argument: if the design argument is unfounded and invalid, so is this argument. However, if we are prepared to admit this method of reasoning as valid (though we should not), the discussion will show that the analogy between life and information seems to suggest some type of reductionism of life to information. Biology, chemistry, and physics, however, are not reductionist, contrary to what the analogy between life and information seems to suggest.

Keywords

Phenomenal consciousness

total Turing test

artificial intelligence

androids

analogy

pattern recognition

reductionism

life

information

Acknowledgements

I wish to acknowledge my mother, Maria Dulce.

1 Introduction

The analogy between life and information (for example, pattern recognition, with hierarchical structure and suitable weightings for constituent features; Kurzweil, 2012) seems to be central to the claim that the artificial mind may represent an expected advance in the evolution of life in the universe, since information (namely, pattern recognition) is supposed to be the essence of mind and to be implemented by the same basic neural mechanisms. And since we can replicate these mechanisms in a machine, there is nothing to prevent us from setting up an artificial mind. We just need to install1 the right pattern recognizers. A minimal sketch of such a recognizer follows.
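To make this picture concrete, here is a minimal sketch in Python of a hierarchical, weighted pattern recognizer. It is illustrative only: the feature recognizers, weights, and threshold are invented for this example and do not reproduce Kurzweil's actual architecture.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PatternRecognizer:
    """Fires when weighted evidence from lower-level recognizers crosses a threshold."""
    name: str
    constituents: List[Callable[[str], float]]  # lower-level feature recognizers
    weights: List[float]                        # suitable weightings per feature
    threshold: float = 0.75

    def score(self, stimulus: str) -> float:
        evidence = sum(w * c(stimulus) for w, c in zip(self.weights, self.constituents))
        return evidence / sum(self.weights)

    def recognizes(self, stimulus: str) -> bool:
        return self.score(stimulus) >= self.threshold


# Low-level recognizers for letter features (hypothetical and trivially simple).
has_a = lambda s: 1.0 if "a" in s else 0.0
has_p = lambda s: 1.0 if "p" in s else 0.0
has_e = lambda s: 1.0 if "e" in s else 0.0

# A higher-level recognizer for the word "apple", weighting its constituent letters.
apple = PatternRecognizer("apple", [has_a, has_p, has_e], [1.0, 2.0, 1.0])

print(apple.recognizes("apple"))   # True: all weighted features present
print(apple.recognizes("orange"))  # False: score 0.5 falls below the threshold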

2 The artificial mind and cognitive science

The area of artificial mind research can be described as including the following: machine learning, reasoning, knowledge representation, constraint satisfaction, search, planning and scheduling, agents, robotics, philosophical foundations, natural language processing, perception and vision, cognitive modeling, knowledge engineering, and applications. The main core consists of the first three: machine learning, reasoning, and knowledge representation. Now let's look at the area of cognitive science research.

Consider the following items: perception and action; memory; attention and consciousness; so-called core knowledge; classification; lexicon and ontology; learning; language and representation; choice, rationality, and decision; culture; and social awareness. The area of study that includes these concepts is cognitive science research, of which artificial mind research is a major part. In a way, cybernetics, computer science, language sciences, neurosciences, brain sciences, psychology, biology, philosophy, mathematics, physics, and the engineering sciences all contribute to the study of human cognition.

Artificial mind research is a way of discovering, describing, and modeling some of the main features of consciousness, specifically the cognitive ones. Artificial mind researchers assist cognitive science researchers in explaining how consciousness emerges or could emerge from (i.e., be caused by) nonconscious entities and processes (an explanatory question), and whether it makes any difference to the performance or operation of the systems in which it is present, and if so, why and how (a functional question).

A central notion in artificial mind research is that of the agent. An agent is defined as an ongoing and autonomously operating entity in an environment in which there are other processes and agents. We are interested in knowing how a mind agent is designed. The usual questions are the following: How does it perceive, reason, decide, and learn? How does it perform independently in a shared environment of problems (specific agents for certain intervention domains)? In this discussion, we are interested in multiplying those agents and in asking how an enormous variety of them can be articulated coherently in a multiagent system (interaction and organization). The combination of these questions (and their answers) can be designated by the term distributed artificial intelligence. A minimal sketch of such an agent loop follows.
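Here is a minimal sketch of that agent abstraction, assuming Python; the Environment and Agent classes and the toy "resources" world are invented for illustration and belong to no particular agent framework.


class Environment:
    """A shared world in which several agents operate concurrently (here, in turns)."""
    def __init__(self, agents):
        self.agents = agents
        self.state = {"resources": 10}

    def step(self):
        for agent in self.agents:
            percept = dict(self.state)          # perception: a view of the world
            action = agent.decide(percept)      # decision, possibly after learning
            if action == "consume" and self.state["resources"] > 0:
                self.state["resources"] -= 1    # the action changes the shared world


class Agent:
    def __init__(self, name):
        self.name = name

    def decide(self, percept):
        # A trivial policy; a real agent would reason, plan, and learn here.
        return "consume" if percept["resources"] > 0 else "wait"


env = Environment([Agent("a1"), Agent("a2")])
for _ in range(3):
    env.step()
print(env.state)  # {'resources': 4}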

3 Consciousness

Consciousness can be classified in the following ways (Block, 2002):

Access consciousness: We have access consciousness of something if we have a representation of it that can be transmitted to other parts of the brain and, in this way, used in our reasoning and in the rational control of our actions. It is likely that this is the type of consciousness that can be implemented in a machine. But then we face the problem of debating whether the machine actually "experiences" something (and in this case, "actually" is not clearly defined).

Phenomenal consciousness: x is in a phenomenally conscious state if x experiences something that characterizes that state. The criterion widely used to talk about phenomenal consciousness is that of "there is something it is like to be in that state." For example, if we are phenomenally conscious of a bright blue sky, it is because we are experiencing something that makes that mental state a phenomenally conscious state. This experience is the key concept of phenomenal consciousness.

Block identifies the following three differences between access consciousness and phenomenal consciousness:

Access consciousness is completely defined by a representation (such as a logical agent clause that represents a concept or a fact). Phenomenal consciousness can also have a representational component, but what identifies it is that the experience of x (an agent) is such that, were x not in this phenomenally conscious state, it would not have the experience that it in fact has.

Access consciousness characterizes a mental state as a conscious state because of its relations with other modules (in other words, access consciousness uses a functional way of classifying mental states as conscious states). To be conscious in this sense is to be capable of reasoning and acting, and of being stimulated and responding to those stimuli.

 Phenomenal consciousness identifies types of conscious states. For example, all the sensations of pain are phenomenal conscious states of the same type (i.e., pain). But if we consider pain from the perspective of access consciousness, each pain is a different conscious state because it causes different reactions, memories, and inferences.

To better illustrate the difference between access and phenomenal consciousness, Block describes cases of access without phenomenal (a) and phenomenal without access (b):

(a) An individual whose visual cortex is damaged (i.e., who has suffered an injury to area V1) cannot see certain things in his field of vision, which fall in so-called blind spots. Even so, he responds with elevated accuracy to questions concerning the properties of those visual stimuli. This pathology, called blindsight (Holt, 2003), exemplifies access consciousness without phenomenal consciousness. Phenomenally, he is not aware of anything, but this does not preclude him from representing those stimuli; it is these representations that enable him to respond to them.
However, we can give another example: the Belief-Desire-Intention agent, which does not have experiences; it is presumably "aware" of everything in front of it but does not experience any of it (a minimal sketch of such an agent appears after example (b) below). A related discussion is Searle's (1980) Chinese Room thought experiment. The agent's alleged "consciousness" is presumably access, not phenomenal, consciousness.

(b) One case of having phenomenal consciousness without access consciousness is a situation in which we experience a normally disruptive sound but, because we are so used to living with it, do not represent it. Perhaps a friend of yours, used to the silence of the countryside, finds it strange that you are able to live amid the noise of the city. The reason, though, is that you do not have access consciousness of these noises even though you are phenomenally conscious of them.
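As a minimal sketch of the Belief-Desire-Intention agent mentioned in example (a), assuming Python (the class and the door scenario are invented for illustration and come from no real BDI framework), consider how such an agent holds representations and acts on them while nothing in the loop experiences anything.


class BDIAgent:
    def __init__(self):
        self.beliefs = set()      # representations of the world
        self.desires = set()      # goals the agent would like to achieve
        self.intentions = []      # desires the agent has committed to

    def perceive(self, percepts):
        self.beliefs |= percepts  # update representations; no experience involved

    def deliberate(self):
        # Commit to any desire whose precondition is believed to hold.
        for condition, action in self.desires:
            if condition in self.beliefs:
                self.intentions.append(action)

    def act(self):
        return self.intentions.pop(0) if self.intentions else None


agent = BDIAgent()
agent.desires.add(("door_open", "walk_through_door"))
agent.perceive({"door_open"})   # the agent now "knows" the door is open...
agent.deliberate()
print(agent.act())              # ...and acts, without experiencing anything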

Self-awareness is the state of something that has an internal representation of itself. For example, a chimpanzee or a two-year-old child is capable of recognizing itself in the mirror, but a dog is not. It is likely that when a dog looks at its reflected image in the mirror, it is phenomenally conscious of the image but interprets the representation to which it has conscious access as another dog.

Coelho (2008) asserts the need for a theory of subjectivity and a theory of the body. The difficulty of a theory of subjectivity can be illustrated in the following way: we are not capable of having the sensations of a bat (Nagel, 1974) because we are not bats. And the difficulty of a theory of the body is that robotic "organs" are not organs shaped by natural selection, whereas our brain is (Edelman, 2006).

The main difficulty with phenomenal consciousness is the so-called hard problem of consciousness (Chalmers, 1995). There is nothing we know more intimately than conscious experience, but there is nothing more difficult to explain. However, this problem is far from exclusive to artificial mind research. For example, in neuroscience, the best one can do is find the neural correlates of access consciousness.

In other words, access consciousness refers to the possibility of a mental state's being available to the rest of the cognitive system (available, for example, to our language production system, as when we try to describe the sting of a pinprick, the taste of chocolate, or the vibrant red of a fire truck). The access is representational in a way that the phenomenology is not: the contrast is between feeling that sting, savoring that chocolate, or seeing that red, on the one hand, and the associated representations, on the other, which we may fail to access (not being in possession of the relevant concepts). But if we have an experience, we have the experience that we in fact have (for example, seeing the red of the truck, in contrast with seeing that the truck is red).

In artificial mind research, the alleged "consciousness" that one gets is also presumably access consciousness, and agents have "representations of" their "own internal states," the so-called self-awareness. Examples of such agents are Homer, implemented by Vere and Bickmore (1990), and the Conscious Machine (COMA) project of Schubert et al. (1993).

Architectures such as Soar (originally an acronym for State, Operator And Result), IDA (Intelligent Distribution Agent), and ACT-R (Adaptive Control of Thought—Rational) are computational models of human cognition (modeling, for example, real processing times). However, researchers working in those areas do not explicitly attempt to build an agent with access consciousness.

Other research projects involve the construction of androids. These have provided an experimental device for various debates: the relationship between mind and body (unifying the psychological and the biological), the relationship of social interaction to internal mechanisms (unifying the social sciences and cognitive psychology), reductionism in the neurosciences (the so-called theories of the creation of artificial intelligence), connectionism versus modularity in cognitive science (the architectures that produce responses similar to human ones), and nature versus nurture (the relative importance of innateness and learning in social interaction). The construction of androids could very well provide experimental data for the study of subjectivity.

Here, we must note the following: to lack a theory of subjectivity is not to lack information; the information may be available (presumably provided by researchers in artificial mind) while a theory of subjectivity is still lacking. For example, consider what happens when you look at a Necker (1832) cube: suddenly it flips, and although the retinal image and the visible two-dimensional (2D) structure are unchanged, the three-dimensional (3D) interpretation is different. Lines (or, rather, cube edges) that once sloped down away from the viewer now slope up (but still away from the viewer), and the vertical square cube face that was previously farther away is now nearer.

The Necker flip in what it is like to see the pattern of lines as a cube is likely to occur in visually sophisticated robots under appropriate conditions; there is no reason why that variation could not take place in such robots. A minimal sketch of the structure of such a flip follows.
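Here is a minimal sketch in Python of the structure of such a flip, in which the 2D input stays fixed while the 3D interpretation alternates. The representation is purely illustrative; no claim is made about how a real vision system would encode the cube.


class NeckerCubePercept:
    INTERPRETATIONS = ("face A nearer", "face B nearer")

    def __init__(self):
        self.image_2d = "twelve edges, two overlapping squares"  # never changes
        self.state = 0

    def flip(self):
        # The 2D input stays fixed; only the 3D interpretation changes.
        self.state = 1 - self.state

    def what_i_see(self):
        return f"{self.image_2d}, seen as a cube with {self.INTERPRETATIONS[self.state]}"


percept = NeckerCubePercept()
print(percept.what_i_see())  # ...face A nearer
percept.flip()
print(percept.what_i_see())  # same 2D image, but now ...face B nearer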

However, whether we are well informed about this or not is an epistemological problem, not an ontological one. It is in this sense, of information not posing an ontological problem, that Sloman (1996) says that in this area (subjectivity) there is no philosophical problem.

We have information about, but not a theory of, subjectivity because here we are confusing two things: epistemology and ontology. The first is the science of how we know things; the second concerns what things are.

Artificial mind research contributes to the study of human cognition, as well as to the study of subjectivity: it contributes to, but does not exhaust, the study of subjectivity.

The so-called Turing (1950) test assumed, in its evaluation of intelligence, that the mental does not have to be embodied. However, Alan Turing was wrong regarding the nature of the mental. The so-called total Turing test (TTT) preserves the idea that the mental has to be embodied (Harnad, 1991). The candidate for the TTT has to be capable of doing, in the world of objects and persons, everything that people can do, and of doing it in a way that is indistinguishable (to people) from the way they do it. This is the environment and the set design about which Coelho (2008) writes.

So, arguably, we have the experimental grounds to build androids. By implementing neurocognitive mechanisms in androids and evaluating their interactions with human beings, researchers can hope to build a bridge, for example, between neuroscience and the behavioral sciences: with androids, we have an experimental apparatus for tests of subjectivity, our subjectivity being the same as theirs (even if they do not have subjectivity, we put our subjectivity into them experimentally).

We need a working hypothesis about the study of the human mind, a theory of subjectivity, and a theory of the body. Human beings have the mind they have because they have the body they have, and there are no disembodied minds outside the environment (as instantiated by humans). The mind, even if the mind is a distinct substance from the body, gets most of its stimulation from the body. Furthermore, the mind acts through the body. Given that so much of mental activity arises from bodily stimulation and so much of it is designed to contribute to bodily movement, the human mind is radically unlike, say, the mind of a pure intellect like God (if indeed God exists). Taking this seriously, it seems that the human mind could not exist without a body.

The consciousness of human beings is both access and phenomenal. Our problem is that there is no place for a necessary connection with physiology in the space of possible development defined by our present concept of mind. Such a conceptual expansion does not imply any contradiction with the essential nature of subjective experience, and nothing precludes an expanded concept of mind from preserving the features of the former concept while allowing the discovery of this connection (Nagel, 1998, 2002).

Homer and COMA (to cite the examples given previously), as working hypotheses about the study of human cognition, presumably have access consciousness (representations) but not phenomenal consciousness (subjectivity). Presumably, the "access consciousness" of agents such as Homer and COMA cannot be separated from the body (as in the total Turing test). However, this body cannot be just any aggregate of matter; rather, it must be indistinguishable, to humans, from a human body: human beings looking at these bodies confusedly process them as other humans.

The phenomenological properties of the bodies of these agents, that is, the way they appear to us, are indistinguishable from the phenomenological properties of human bodies. Our brain processes androids as human for about 2 s [note that the sophisticated robots discussed in Sloman (1996) have bodies very different from ours]; studies such as Ishiguro (2005) show that this is the case for 70% of participants. It is for this reason that we need a theory of subjectivity and a theory of the body as working hypotheses about the study of the human mind.

Notwithstanding, the kind of analogy between life and information argued for by authors such as Davies (2000), Walker and Davies (2013), Dyson (1979), Gleick (2011), Kurzweil (2005, 2012), and Ward (2009), which seems to be central to the claim that the artificial mind may represent an expected advance in the evolution of life in the universe, is like the design argument. If the design argument is unfounded and invalid, the argument that the artificial mind may represent an expected advance in the evolution of life in the universe is also unfounded and invalid.

4 The classic watchmaker analogy

The design argument, presented and criticized, for example, by Hume (1779) in his Dialogues Concerning Natural Religion, can be formulated as the classic watchmaker analogy as follows.

1. The clock, given its complexity and the way it is ordered, is a machine that has to have an intelligent author and builder, with capacities proportional to his work: a human watchmaker.

2. The world, given its complexity and the way it is ordered, is like a clock.

3. Therefore, the world also has to have an intelligent author and builder, with capacities proportional to his work: the divine watchmaker (i.e., God).

Basically, this argument holds that given the similarities between a watch and the world, just as we can assume that an intelligent entity built a clock in a specific way and for a specific purpose, we can do the same for the world. While in the first case, the most plausible hypothesis for the builder of the clock would be a human watchmaker, in the second, the most plausible hypothesis for the builder of the world would be a “divine watchmaker” because only such a being could be capable of this work.

This argument is an analogy, but, as we shall see next, it raises several problems. It is obvious that the world is complex and has an order, and that natural events have a regularity; but the analogy with the watch is fragile, remote, and reductive.

5 The classic watchmaker analogy is fragile, remote and reductive

The first issue with the watchmaker analogy is that it is fragile. While a clock is a perfect machine, the world is a "machine" full of imperfections and irregularities that go beyond its usual order or regularity. Second, the analogy is remote, because any similarities between the watch and the world can only be regarded as very distant and as holding only in some respects. That is, one cannot say with certainty that the order of the world is similar to the order of the clock: while we are sure, from our experience, that the clock and its order were created according to an end, we have no certainty (having had no experience of this) that the world and its order were even created, much less that this occurred in accordance with an end (which would be divine) rather than by natural accident (the latter being, moreover, the scientific explanation). Third, the analogy is reductive, because while the clock is a machine whose complexity is limited by its small dimensions, the world is a machine whose dimensions are not comparable to those of the watch, so its complexity cannot be compared with that of the clock.

Now, an analogy must be established from an example that is similar in a relevant respect; in the case of the watchmaker analogy, the example is the clock and the relevant respect is a complexity comparable to the complexity of the world. We have seen that the watchmaker analogy does not fulfill these conditions, so we conclude that the analogy is neither well founded nor valid. Therefore, the argument is unfounded and invalid and should not be considered good proof of the existence of God.

The analogy between mental life and information is the same kind of analogy as the one involved in the argument from design. From the fact that there are mental operations such as thought and intention in some parts of nature, particularly in humans and other animals, it does not follow that this is the rule for the whole (that is, for nature).

The analogy between life and information takes a part (information) for the whole (life). The idea that a natural biological function of the brain is processing information has not been established empirically by cognitive neuroscience; it is a metaphor. The concepts of processing and information are concepts of folk psychology that seem scientifically rigorous but are not. Concepts like pattern recognition do not exhaust all mental activity: whatever mental activity falls under the concept of pattern recognition is only part of the activity of the mind.

In what way does thinking co-occur with a presented stimulus and the categorizing of it? When I am thinking about Waltham (Massachusetts) while in Lisbon (Portugal), I am not recognizing any presented stimulus as Waltham, since I am not perceiving it with my senses. There is no perceptual recognition going on at all in thinking about an absent object. So a concept such as pattern recognition, although part of what there is to say about the nature of thought (as when I am perceiving Waltham with my senses), is far from all there is to say about the nature of thought.

Reaching for an explanation of the whole [nature, as in Hume's discussion of the argument from design; life, as in the discussion of the analogy between life and information by authors such as Davies (2000), Walker and Davies (2013), Dyson (1979), Gleick (2011), Kurzweil (2005, 2012), and Ward (2009)] starting from just one part (humans and other animals, in the discussion of the argument from design; information, in the discussion of the analogy between life and information), without more, makes these arguments very weak: the argument for the existence of God (criticized by Hume), and the argument from the analogy between life and information (advanced by the authors just cited).

At the same time, as Hume says, if we are prepared to admit this method of reasoning as valid (though we should not), why then choose the part of nature that says most about us, and not another? Or, as I would put it, why then choose specifically some of the cognitive features of consciousness, and not subjectivity? In other words, why choose the part of mental life that says most about perceptual cases, and not emotion, imagination, reasoning, willing, intending, calculating, silently talking to oneself, feeling pain and pleasure, itches, and moods, that is, the full life of the mind? Certainly, these are nothing like the perceptual cases on which the analogy between life and information rests.

According to science, a succession of chances (without any special or divine plan, although in accordance with the "laws of nature") led to the creation of the world and its existence as we know it. Thus, even before anyone could dream of Darwinian theories and of how they would revolutionize scientific knowledge, Hume, through his character Philo, already raised against the argument from design, namely the watchmaker analogy, an objection whose scientific basis, of the most devastating effect against that argument, he could not yet imagine.

Indeed, Hume's hypothesis of a succession of chances, besides being more logical and plausible than the theistic hypothesis, is the one that most closely matches the Darwinian theories of evolution by natural selection that would arise a century later (namely, in the 19th century), as well as all subsequent scientific discoveries, not only in biology but also in chemistry and physics, regarding the possible certainties that we can have about the creation of the universe.

6 The analogy between life and information seems to suggest some type of reductionism

The analogy between life and information, if we are prepared to accept this method of reasoning as valid (supposing you do not agree that the analogy between life and information is like the design argument), though we should not, seems to suggest some type of reductionism of life to information. However, biology, chemistry, and physics are not reductionist, contrary to what the analogy between life and information seems to suggest.

On the biological level, for example, molecular genetics cannot provide a derivation base for evolutionary biology (Lewontin, 1983; Levins, 1968) or even for classical genetics (Kitcher, 1984). In particular, Kitcher (1984, p. 350) writes: "the molecular derivation forfeits something important. […] The molecular account objectively fails to explain because it cannot bring out that feature of the situation which is highlighted in the [biological] cytological story."

Richard Lewontin (quoted in Callebaut, 1993, p. 261), in his turn, claims: "Any textbook or popular lecture on genetics will say: 'The gene is a self-reproducing unit that determines a particular trait in an organism.' That description of genes as self-reproducing units which determine the organism contains two fundamental biological untruths: The gene is not self-replicating and it does not determine anything. I heard an eminent biologist at an important meeting of evolutionists say that if he had a large enough computer and could put the DNA sequence of an organism into the computer, the computer could 'compute' the organism. Now that simply is not true. Organisms don't even compute themselves from their own DNA. The organism is the consequence of the unique interaction between what it has inherited and the environment in which it is developing (cf. Changeux, 1985; Edelman, 1988a, 1988b), which is even more complex because the environment is itself changed in consequence of the development of the organism."

So, as exemplified by these two quotes from people working in the field, biology is not reductionist. Neither chemistry nor physics is reductionist, either. On the chemical level, for example, the reduction of chemistry to quantum mechanics (Cartwright, 1997; Primas, 1983) is a case of failed or incomplete reduction.

And reductionism in physics is no more successful than in biology or chemistry. On the physical level, for example, it is not always possible to combine models of gravitation and of electromagnetic forces in a coherent way: they generate inconsistent or incoherent results when applied to dense matter. This is the main problem currently driving the search for a unified field theory.

7 Conclusion

Things in the world are not representational; it is intentional mental states about them that are representational. But the phenomenological, physical, and functional characteristics of mental states (a certain type of nerve cell activation co-occurring with our view of the world) are also not representational; rather, they are sensations and experiences.

Cognitive mental states represent, but sensations do not represent anything: if certain things out there stimulate nerve cells, this stimulation of nerve cells is not a representation.

Semantics is out there: things out there stimulate nerve cells. But to claim that the configuration of nerve cells co-occurring with that stimulation is representational, informational, or a code is just a misuse and overuse of terms like representation. Neurons, their synapses, neurotransmitters, molecular receptors, and so on are cellular structures, and no appeal to information or representation explains what in fact we felt and experienced.

The idea that neurons (their chemistry and physics) "encode" or represent "information" is wrong (cf. Burock, 2010). To say that neurons encode or represent is to take for granted precisely what is to be shown: there is no difference between saying that a certain BOLD2 (fMRI) or electroencephalogram (EEG) signal correlates with certain information and saying that a certain BOLD (fMRI) or EEG signal correlates with certain conscious mental states (phenomenal or access). What we have here is question-begging: a fallacy, because researchers assume "information" in order to study "consciousness," as if someone had already shown what neurons encode or represent.

The metaphor of information or representation involves the same kind of fallacy. Neurons neither encode nor represent anything: what is the human voice encoding or representing? Certain sound waves, nothing more.

Expressions such as "neural code" are not neurons; they are us talking about neurons. Neurons are things out there; they are represented by us, but they themselves are not representations. Expressions like "information" and "representation" can be eliminated, and what the relevant disciplines say about neurons (and related structures) remains informative. And if information is a certain kind of frequency, the frequency is enough! We telephone someone, and the listener understands us; but we do not say that the signal between the two devices represents, encodes, or is information.

A book about oceans is not an ocean: we can bathe in parts of the ocean without having any concept of ocean or of part, and we can see red things without seeing that they are red (i.e., without having the concept of red).

Having information about living organisms does not make this information living organisms; they can be "automata" (as Descartes said in the second of his Meditations on First Philosophy, 1641). By definition, for example, an artificial plant (information about the way plants look) is not a living organism: it is not a plant. In the same vein, the artificial mind is not a mind and cannot represent an expected advance in the evolution of life in the universe in the way suggested by the analogy between life and information. But as a tool, pattern recognition can help us gain more information about humans and other animals in perceptual cases.

For example, consider methodological concerns about animal experiments: disparate animal species and strains, with a variety of metabolic pathways and drug metabolites, lead to variations in efficacy and toxicity; and the length of follow-up before determination of disease outcome varies and may not correspond to disease latency in humans (Pound et al., 2004). Given the third of the four Rs (reduction, refinement, replacement, and responsibility), namely replacement, the use of nonliving systems and computer simulation (Schechtman, 2002; Hendriksen, 2009; Arora et al., 2011), pattern recognition can substitute for animals in research (for example, in drug and vaccine research). A minimal sketch of this idea follows.
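Here is a minimal sketch of this replacement idea, assuming Python with NumPy and scikit-learn; the molecular descriptors and toxicity labels are synthetic, invented for illustration, and stand in for a real, curated chemical dataset.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "compounds": each row is a vector of molecular descriptors
# (e.g., weight, logP, polar surface area); labels mark known toxicity.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a made-up toxicity rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Screening a new, untested compound without any animal experiment:
print(model.predict([[0.2, -1.0, 0.3]]))  # 0 = predicted nontoxic, 1 = toxic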

References

Arora T, Mehta AK, Joshi V, Mehta KD, Rathor N, Mediratta PK, Sharma KK. Substitute of animals in drug research: an approach towards fulfillment of 4R’s. Indian J. Pharmaceut. Sci. 2011;73(1):1–6.

Block N. Some concepts of consciousness. In: Chalmers DJ, ed. Philosophy of Mind: Classical and Contemporary Readings. Oxford University Press; 2002:206–218.

Burock M. Evidence for Information Processing in the Brain. 2010. [Preprint] http://philsci-archive.pitt.edu/id/eprint/8845 [1 March 2015].

Callebaut W. Taking the Naturalistic Turn, or, How Real Philosophy of Science Is Done. Chicago: University of Chicago Press; 1993.

Cartwright N. Why Physics? In: Penrose R, Shimony A, Cartwright N, Hawking S, eds. The Large, the Small and the Human Mind. Cambridge: Cambridge University Press; 1997.

Chalmers DJ. Facing up to the problem of consciousness. J. Conscious. Stud. 1995;2(3):200–219.

Changeux J-P. Neuronal Man: The Biology of Mind. Oxford: Oxford University Press; 1985.

Coelho H. Teoria da Agência, Arquitectura e Cenografia [Theory of Agency, Architecture and Set Design]. 2008.

Davies P. The Fifth Miracle: The Search for the Origin and Meaning of Life. Simon & Schuster; 2000.

Descartes, R., 1641. Meditations on First Philosophy. http://www.wright.edu/~charles.taylor/descartes/meditation2.html [12 June 2015].

Dyson FJ. Time without end: physics and biology in an open universe. Rev. Mod. Phys. 1979;51:447–460.

Edelman GM. The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books; 1988a.

Edelman GM. Topobiology: An Introduction to Molecular Embryology. New York: Basic Books; 1988b.

Edelman GM. Second Nature: Brain Science and Human Knowledge. Yale University Press; 2006.

Gleick J. The Information: A History, A Theory, A Flood. Vintage; 2011.

Harnad S. Other bodies, other minds: a machine incarnation of an old philosophical problem. Mind. Mach. 1991;1:43–54.

Hendriksen CF. Replacement, reduction and refinement alternatives to animal use in vaccine potency measurement. Expert Rev. Vaccines. 2009;8:313–322.

Holt J. Blindsight and the Nature of Consciousness. Ontario: Broadview Press; 2003.

Hume, D., 1779. Dialogues Concerning Natural Religion. http://www.davidhume.org/texts/dnr.html [12 June 2015].

Ishiguro H. Android science: toward a new cross-disciplinary framework. In: CogSci 2005 Workshop: Toward Social Mechanisms of Android Science; 2005:1–6.

Kitcher P. 1953 and all that: a tale of two sciences. Phil. Rev. 1984;93:335–373.

Kurzweil R. The Singularity is Near: When Humans Transcend Biology. Penguin Books; 2005.

Kurzweil R. How to Create a Mind: The Secret of Human Thought Revealed. Viking Adult; 2012.

Levins R. Evolution in Changing Environments. Princeton, NJ: Princeton University Press; 1968.

Lewontin RC. Biological Determinism. Tanner Lectures on Human Values. Salt Lake City: University of Utah Press; 1983.

Nagel T. What is it like to be a bat? Phil. Rev. 1974;83(4):435–450.

Nagel T. Conceiving the impossible and the mind-body problem. Philosophy. 1998;73(285):337–352.

Nagel T. Concealment and Exposure and Other Essays. New York: Oxford University Press; 2002.

Necker LA. Observations on some remarkable optical phaenomena seen in Switzerland; and on an optical phaenomenon which occurs on viewing a figure of a crystal or geometrical solid. London and Edinburgh Philosophical Magazine and Journal of Science. 1832;1(5):329–337.

Ogawa S, Tank DW, Menon R, Ellermannn JM, Kim S-G, Merkle H, Ugurbil K. Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. Proc. Natl. Acad. Sci. U. S. A. 1992;89:5951–5955.

Pound P, Ebrahim S, Sandercock P, Bracken MB, Roberts I; Reviewing Animal Trials Systematically (RATS) Group. Where is the evidence that animal research benefits humans? Br. Med. J. 2004;328(7438):514–517.

Primas H. Chemistry, Quantum Mechanics, and Reductionism. Berlin: Springer-Verlag; 1983.

Schechtman LM. Implementation of the 3Rs (refinement, reduction, and replacement): Validation and regulatory acceptance considerations for alternative toxicological test methods. J. Infect. Dis. 2002;43(Suppl.):S85–S94.

Schubert LK, Schaeffer S, Hwang CH, de Haan J. EPILOG: The Computational System for Episodic Logic. User Guide. 1993.

Searle J. Minds, brains, and programs. Behav. Brain Sci. 1980;3:417–424.

Sloman A. What is it like to be a rock? 1996. http://www.cs.bham.ac.uk/research/projects/cogaff/misc/like_to_be_a_rock/rock.html [1 March 2015].

Turing AM. Computing machinery and intelligence. Mind. 1950;59:433–460.

Vere S, Bickmore T. A basic agent. Comput. Intell. 1990;6(1):41–60.

Walker S, Davies P. The algorithmic origins of life. J. R. Soc. Interface. 2013;10(79):1–9.

Ward P. The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive? Princeton University Press; 2009.


1 To create a mind, as argued by Kurzweil (2012), we need to create a machine that recognizes patterns, such as letters and words. Consider the task of translating a paper: despite our best efforts to develop artificial universal translators, they are still very far from being able to express what we write in another language.

2 Blood oxygenation level dependent (for example, Ogawa et al., 1992).
