September 16, 2010

 Linguistics is the scientific study of language. It encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner’s first language and about the language being acquired.
 Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.
 Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyze Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyzes it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words such as push; root words, such as berry in blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).
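 As a rough illustration of the minimal-pair test just described, the following Python sketch searches a tiny word list for pairs that differ in a single sound; the word list and transcriptions are invented for illustration and are not from the text.
    # A toy minimal-pair finder: two words that differ in exactly one segment
    # (e.g. 'push' vs 'bush') are evidence that the differing sounds are distinct phonemes.
    words = {
        "push": ("p", "U", "S"),   # rough, invented transcriptions
        "bush": ("b", "U", "S"),
        "bash": ("b", "&", "S"),
    }

    def minimal_pairs(lexicon):
        """Yield (word1, word2, sound1, sound2) for every pair differing in exactly one segment."""
        items = list(lexicon.items())
        for i, (w1, t1) in enumerate(items):
            for w2, t2 in items[i + 1:]:
                if len(t1) == len(t2):
                    diffs = [(a, b) for a, b in zip(t1, t2) if a != b]
                    if len(diffs) == 1:
                        yield w1, w2, diffs[0][0], diffs[0][1]

    for w1, w2, s1, s2 in minimal_pairs(words):
        print(f"{w1} / {w2}: /{s1}/ and /{s2}/ contrast, so they are distinct phonemes")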
 The linguist's next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of each morpheme and the grammatical rules of the sentence. In the sentence 'She pushed the bush', the morpheme she, a pronoun, is the subject; pushed, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provide descriptive linguists with a way to write down grammars of languages never before written down or analysed. In this way they can begin to study and understand these languages.
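 A minimal sketch of that last step, assigning grammatical roles in 'She pushed the bush'; the part-of-speech lexicon and the single sentence pattern are illustrative assumptions, not a description of how descriptive linguists actually work.
    # Assign each morpheme a part of speech, then match the sequence against
    # one simple pattern: pronoun subject + transitive verb + determiner + noun object.
    lexicon = {"she": "pronoun", "pushed": "verb", "the": "determiner", "bush": "noun"}

    def describe(sentence):
        tags = [lexicon[word] for word in sentence.lower().split()]
        if tags == ["pronoun", "verb", "determiner", "noun"]:
            words = sentence.split()
            return {"subject": words[0], "verb": words[1],
                    "determiner": words[2], "object": words[3]}
        return None

    print(describe("She pushed the bush"))
    # {'subject': 'She', 'verb': 'pushed', 'determiner': 'the', 'object': 'bush'}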
 Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to one another and had descended from a common source. Jones based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for 'brother' resembles the Latin word frater, the Greek word phrater, and the English word brother.
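 Jones worked from observed resemblances rather than from any algorithm, but a hedged, modern illustration of how surface similarity between candidate cognates might be quantified is a simple edit-distance comparison over the word set given above; this is an illustration only, not the comparative method itself.
    def edit_distance(a, b):
        """Minimum number of single-character insertions, deletions, or substitutions turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = curr
        return prev[-1]

    brother_words = {"Sanskrit": "bhratar", "Latin": "frater", "Greek": "phrater", "English": "brother"}
    names = list(brother_words)
    for i, lang1 in enumerate(names):
        for lang2 in names[i + 1:]:
            d = edit_distance(brother_words[lang1], brother_words[lang2])
            print(f"{lang1} {brother_words[lang1]!r} vs {lang2} {brother_words[lang2]!r}: distance {d}")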
 Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.
 Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb go changes to went and gone to express a past action. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in 'go store tomorrow'). In Swahili, prefixes, suffixes, and infixes (additions within the body of the word) combine with a root word to change its meaning. For example, a single word can express when something was done, by whom, to whom, and in what manner.
 Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of people.
 Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula to figure out when the languages separated from one another.
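 The formula mentioned here is the standard glottochronological estimate; the following is a hedged sketch in which the retention rate of 0.86 per millennium and the 70 percent shared-cognate figure are commonly cited textbook values, not figures from this text.
    from math import log

    def separation_time(shared_fraction, retention_rate=0.86):
        """Estimated millennia since two languages separated, given the fraction of
        culture-free (basic) vocabulary they still share and the per-millennium
        retention rate of such words (0.86 is a commonly cited value for a 100-word list)."""
        return log(shared_fraction) / (2 * log(retention_rate))

    # If 70% of the basic word list is still shared, the estimate is roughly 1.2 millennia.
    print(round(separation_time(0.70), 2))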
 By the 1960s comparativists were no longer satisfied with focussing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.
 The field of linguistics both borrows from and lends its own theories and methods to other disciplines, and the many subfields of linguistics have expanded our understanding of languages. Linguistic theories and methods are also used in other fields of study. These overlapping interests have led to the creation of several cross-disciplinary fields.
 Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as 'fourth floor' can indicate the person's social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing /r/. Sometimes they even overcorrect their speech, pronouncing /r/ where those whom they wish to copy may not.
 Some sociolinguists believe that analyzing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or some other quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. The goal of sociolinguistics is to understand communicative competence: what people need to know to use the appropriate language for a given social setting.
 Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children's language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to read written language).
 According to E.O. Wilson, the 'human mind evolved to believe in the gods' and people 'need a sacred narrative' to have a sense of higher purpose. Yet it is also clear that the 'gods' in his view are merely human constructs, and therefore there is no basis for dialogue between the world-view of science and that of religion. 'Science for its part,' said Wilson, 'will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiment.' The eventual result of the competition between the two, he suggests, will be the secularization of the human epic and of religion itself.
 Man has come to the threshold of a state of consciousness regarding his nature and his relationship to the Cosmos, in terms that reflect 'reality'. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing 'reality' as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which naturally differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide 'comprehensible' guides to living. In this way, man's imaginative intellect plays a vital role in the collective scheme of humanity's survival and, of course, in the governing principles of evolution.
 Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of 'logical positivist' approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the explanans (that which does the explaining) and the explanandum (that which is to be explained). The approach culminated in the covering law model of explanation, the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler's (1571-1630) laws of planetary motion were shown to be deducible from Newton's laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether citing a covering law is necessary to explanation (we explain many everyday events without overtly citing laws); whether it is sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship can capture the requirements we make of explanations. These may include, for instance, that we have a 'feel' for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
 The argument to the best explanation is the view that once we can select the best of any competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
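 The coin example can be put numerically. The sketch below compares the two hypotheses mentioned in the text; the prior probability of 0.99 assigned to fairness is an illustrative assumption introduced only to show how antecedent improbability can outweigh better fit.
    from math import comb

    def binomial_likelihood(heads, tosses, p):
        """Probability of observing exactly `heads` heads in `tosses` tosses if P(heads) = p."""
        return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

    heads, tosses = 530, 1000
    lik_fair = binomial_likelihood(heads, tosses, 0.5)     # coin is fair
    lik_biased = binomial_likelihood(heads, tosses, 0.53)  # coin biased toward heads at 0.53

    # The biased hypothesis explains the data better...
    print(lik_biased > lik_fair)  # True

    # ...but an antecedent probability (prior) favouring fairness can still make
    # the fair hypothesis the more sensible overall verdict (illustrative prior).
    prior_fair, prior_biased = 0.99, 0.01
    print(prior_fair * lik_fair > prior_biased * lik_biased)  # True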
 The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It mingles too with the metaphysics of truth and the relationship between sign and object. Much philosophy, especially in the 20th century, has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, the basis of the division between syntax and semantics, and the understanding of the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect philosophies of both pragmatics and semantics.
 On this conception, to understand a sentence is to know its truth-conditions. The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it. The claim that the meaning of a sentence consists in its truth-conditions need not and should not be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.
 The meaning of a complex expression is a function of the meaning of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meaning of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms (proper names, indexicals, and certain pronouns) this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
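 A minimal sketch of this compositionality in Python; the tiny model, the city names, and the predicate extension are invented for illustration and are not drawn from the text.
    # A toy model in the truth-conditional style: singular terms denote objects,
    # predicates denote conditions on objects, and a sentence-forming operator
    # maps the truth-values of its parts to a truth-value.
    model_objects = {"Paris": "paris", "London": "london"}   # reference of singular terms
    beautiful = {"paris", "london"}                           # extension of 'is beautiful'

    def denotes(term):
        """Reference of a singular term."""
        return model_objects[term]

    def is_beautiful(obj):
        """Condition under which the predicate 'is beautiful' is true of an object."""
        return obj in beautiful

    def atomic(term):
        """Truth-condition of an atomic sentence 'X is beautiful'."""
        return is_beautiful(denotes(term))

    def negation(p):
        """A sentence-forming operator, given as a function of the part's truth-value."""
        return not p

    print(atomic("Paris"))             # 'Paris is beautiful'
    print(negation(atomic("London")))  # 'It is not the case that London is beautiful'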
 The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Consider the axiom: 'London' refers to the city in which there was a huge fire in 1666. This is a true statement about the reference of 'London'. It is a consequence of a theory which substitutes this axiom for the simpler axiom of our truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name 'London' without knowing that last-mentioned truth condition, the replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraint in a way which does not presuppose any prior, non-truth-conditional conception of meaning. Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person's language to be truly describable by a semantic theory containing a given semantic axiom. Since the content of the claim that the sentence 'Paris is beautiful' is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than a grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth conditional account of meaning. If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher Alfred Jules Ayer, the later Wittgenstein, Quine, Strawson, Horwich and, confusingly and inconsistently if this article is correct, Frege himself. But is the minimal theory correct?
 The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence, but in fact it seems that each instance of the equivalence principle can itself be explained. An instance such as: 'London is beautiful' is true if and only if London is beautiful, can be derived from facts about the reference of the name 'London' and about the condition under which the predicate 'is beautiful' applies. This would be a pseudo-explanation if the fact that 'London' refers to London consists in part in the fact that 'London is beautiful' has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'.
 The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates a conditional 'if p were the case, q would be' as true or false according to whether 'q' is true in the 'most similar' possible worlds to ours in which 'p' is true. The similarity-ranking this approach needs has proved controversial, particularly since it may have to presuppose some prior notion of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and the category of counterfactuals may be of restricted use. The term 'conditional' is applied to any proposition of the form: if 'p' then 'q'. The condition hypothesized, 'p', is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is the material implication, which merely tells us that either 'not-p' or 'q'; stronger conditionals include elements of modality, corresponding to the thought that if 'p' is true then 'q' must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether this flexibility should be handled semantically, yielding different kinds of conditional with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.
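 A minimal sketch of the Lewis-style evaluation in Python; the tiny set of worlds and the hand-stipulated similarity scores are invented for illustration (and the fact that the ranking has to be stipulated mirrors the controversy just noted).
    # Each world is a pair: (facts, similarity to the actual world; higher = more similar).
    worlds = [
        ({"struck": True,  "lit": True},  2),   # the match is struck and lights
        ({"struck": True,  "lit": False}, 1),   # e.g. a world where the match is also soaking wet
        ({"struck": False, "lit": False}, 3),   # a world just like the actual one
    ]

    def counterfactual(antecedent, consequent):
        """'If p were the case, q would be the case' is true iff q holds in the
        most similar worlds (to the actual world) in which p holds."""
        p_worlds = [(w, s) for (w, s) in worlds if antecedent(w)]
        if not p_worlds:
            return True  # vacuously true if p holds in no world under consideration
        best = max(s for (_, s) in p_worlds)
        return all(consequent(w) for (w, s) in p_worlds if s == best)

    # 'If the match had been struck, it would have lit.'
    print(counterfactual(lambda w: w["struck"], lambda w: w["lit"]))  # True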
 We now turn to a philosophy of meaning and truth especially associated with the American philosopher of science and of language Charles Sanders Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as only a corresponding practical maxim (telling us what to do in some circumstance). In James the position issued in a theory of truth, notoriously allowing that beliefs, including, for example, belief in God, are true if they work satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparent subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F.C.S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an 'automatic sweetheart' or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others, as if this implication made it true that other persons have minds, which is the disturbing part.
 Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some writings, the philosopher Hilary Putnam (1925-) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitudes, emotions, and needs. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be traced to Kant's doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.
 Functionalism in the philosophy of mind is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by the relations they bear to what causes them, to other mental states, and to behavior. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behavior, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or 'realization' of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states both in ourselves and in others, which is via their effects on behavior and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal structure may differ greatly from our own. It may then seem as though beliefs and desires can be 'variably realized' in causal structures, just as much as they can be in different neurophysiological states.
 The philosophical movement of pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it reflects an American faith in know-how and practicality, and an equally American distrust of abstract theories and ideologies.
 The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.
 The Association for International Conciliation first published William James's pacifist statement, 'The Moral Equivalent of War', in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism, a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution. Spelling and grammar in the statement represent the standards of the time.
 Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
 Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
 The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism's refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists' denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
 Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the work of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
 The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning, in particular the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.
 James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce's doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life, morality and religious belief, for example, are leaps of faith. As such, they depend upon what he called 'the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist, someone who believes the world to be far too complex for any one philosophy to explain everything.
 Dewey's philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and societies are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and value depends upon a historical context and is thus tentative rather than absolute.
 Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey's writings, although he aspired to synthesize the two realms.
 The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest in the classic pragmatists, Peirce, James, and Dewey, has been renewed as an alternative to Rorty's interpretation of the tradition.
 The Aristotelians, whose natural science dominated Western thought for two thousand years, believed that man could arrive at an understanding of ultimate reality by reasoning from self-evident principles. It was, for example, taken as self-evident that everything seeks its natural place, from which it could be deduced that objects fall to the ground because that is where they belong and fire rises because that is where it belongs; the goal of Aristotelian science was to explain why things happen. Modern science began when Galileo started trying to explain how things happen and thus originated the method of controlled experiment which now forms the basis of scientific investigation.
 Classical scepticism springs from the observation that the best methods in some given area seem to fall short of giving us contact with truth (e.g., there is a gulf between appearances and reality), and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus, so that scepticism stood as a system of argument opposed to dogmatism, and particularly to the philosophical system-building of the Stoics.
 The Stoic school was founded in Athens around the end of the fourth century BC by Zeno of Citium (335-263 BC). Epistemological issues were a concern of logic, which studied logos, reason and speech, in all of its aspects, not, as we might expect, only the principles of valid reasoning; these were the concern of another division of logic, dialectic. The epistemological part, which was concerned with canons and criteria, belongs to logic conceived in this broader sense because it aims to explain how our cognitive capacities make possible the full realization of reason in the form of wisdom, which the Stoics, in agreement with Socrates, equated with virtue and made the sole sufficient condition for human happiness.
 Reason is fully realized as knowledge, which the Stoics defined as secure and firm cognition, unshakable by argument. According to them, no one but the wise man can lay claim to this condition. He is armed by his mastery of dialectic against fallacious reasoning which might lead him to draw a false conclusion from sound evidence, and thus possibly force him to relinquish the assent he has already properly conferred on a true impression. Hence, as long as he does not assent to any false ground-level impressions, he will be secure against error, and his cognition will have the security and firmness required of knowledge. Everything depends, then, on his ability to avoid error in his ground-level perceptual judgements. To be sure, the Stoics do not claim that the wise man can distinguish true from false perceptual impressions: that is beyond even his powers. But they do maintain that there is a kind of true perceptual impression, the so-called cognitive impression, by confining his assent to which the wise man can avoid giving error a foothold.
 An impression is cognitive when it is (1) from what is (the case), (2) stamped and impressed in accordance with what is, and (3) such that it could not arise from what is not. Because all of our knowledge depends directly or indirectly on it, the Stoics make the cognitive impression the criterion of truth. It makes possible a secure grasp of the truth, not only by guaranteeing the truth of its own propositional content, but also by supporting the conclusions that can be drawn from it. Even before we become capable of rational impressions, nature must have arranged for us to discriminate in favor of cognitive impressions so that the common notions we end up with will be sound. And it is by means of these concepts that we are able to extend our grasp of the truth through inference beyond what is immediately given. For this reason the Stoics also speak of two criteria: cognitive impressions and common notions (the trustworthy common basis of knowledge).
 A custom or habit of action may exist without any specific basis in reason, yet it can itself form the basis for rational action if the custom gives rise to norms of action. (A different distinction, Plato's, separates the real world of the forms, accessible only to the intellect, from the deceptive world of perception; its interpretation is confused by the question of whether universals exist separately, as Plato held.) Conventionalism is a theory that magnifies the role of decisions, or free selection from amongst equally possible alternatives, in order to show that what appears to be objective or fixed by nature is in fact an artefact of human convention, similar to conventions of etiquette, or grammar, or law. Thus one might suppose that moral rules owe more to social convention than to anything inexorable, or that necessities are in fact the shadow of our linguistic conventions. In the philosophy of science, conventionalism is the doctrine, often traced to the French mathematician and philosopher Jules Henri Poincaré, that apparently deep disputes, such as that between describing space in terms of a Euclidean or a non-Euclidean geometry, in fact register the acceptance of different systems of conventions for describing space. Poincaré did not hold that all scientific theory is conventional; he left space for genuinely experimental laws, and his conventionalism is in practice modified by the recognition that one choice of description may be more convenient than another. The disadvantage of conventionalism is that it must show that alternative, equally workable conventions could have been adopted, and it is often not easy to believe that. For example, if we hold that some ethical norm such as respect for promises or property is conventional, we ought to be able to show that human needs would have been equally well satisfied by a system involving a different norm, and this may be hard to establish.
 Poincaré made important original contributions to differential equations, topology, probability, and the theory of functions. He is particularly noted for his development of the so-called Fuchsian functions and his contribution to analytical mechanics. His studies included research into the electromagnetic theory of light and into electricity, fluid mechanics, heat transfer, and thermodynamics. He also anticipated chaos theory. Poincaré wrote more than 30 books, among them Science and Hypothesis (1903; translated 1905), The Value of Science (1905; translated 1907), Science and Method (1908; translated 1914), and The Foundations of Science (1902-8; translated 1913). In 1887 Poincaré became a member of the French Academy of Sciences and served as its president in 1906. He also was elected to membership in the French Academy in 1908. Poincaré's main philosophical interest lay in the formal and logical character of theories in the physical sciences. He is especially remembered for his discussion of the scientific status of geometry in La Science et l'hypothèse (1902; translated as Science and Hypothesis, 1905): the axioms of geometry are not analytic, nor do they state fundamental empirical properties of space; rather, they are conventions governing the description of space, whose adoption is governed by their utility in furthering the purposes of description. Poincaré's conventionalism about geometry proceeded, however, against the background of a general insistence that there could be good reason for adopting one set of conventions rather than another, as in his late Dernières Pensées (1912; translated as Mathematics and Science: Last Essays, 1963).
 A completed unified field theory touches on the 'grand aim of all science,' which Einstein once defined as 'to cover the greatest number of empirical facts by logical deduction from the smallest possible number of hypotheses or axioms.' But the irony of man's quest for reality is that as nature is stripped of its disguises, as order emerges from chaos and unity from diversity, as concepts merge and fundamental laws assume an increasingly simpler form, the evolving picture becomes less and less recognizable, like the bone structure behind a familiar face. In laying bare the fundamental structure of the diverse, science has had to transcend the 'rabble of the senses.' But its highest refinements, as Einstein pointed out, have been 'purchased at the price of empirical content.' A theoretical concept is emptied of content to the very degree that it is divorced from sensory experience, for the only world man can truly know is the world created for him by his senses. So, paradoxically, what the scientist and the philosopher call the world of appearances, the world of light and colour, of blue skies and green leaves, of sighing winds and murmuring waters, the world designed by the physiology of human sense organs, is the world in which finite man is incarcerated by his essential nature; and what the scientist and the philosopher call the world of reality, the colorless, soundless, impalpable cosmos which lies like an iceberg beneath the plane of man's perceptions, is a skeleton structure of symbols, and symbols change.
 For all the promise of future revelation, it is possible that certain terminal boundaries have already been reached in man's struggle to understand the manifold of nature in which he finds himself. In his descent into the microcosm he has encountered indeterminacy, duality, and paradox, barriers that seem to warn him that he cannot pry too inquisitively into the heart of things without vitiating the processes he seeks to observe. Man's inescapable impasse is that he himself is part of the world he seeks to explore; his body and brain are mosaics of the same elemental particles that compose the dark, drifting clouds of interstellar space; he is, in the final analysis, merely an ephemeral conformation of the primordial space-time field. Standing midway between macrocosm and microcosm, he finds barriers on every side and can perhaps only marvel, as St. Paul did more than nineteen hundred years ago, that 'the world was created by the word of God, so that what is seen was made out of things which do not appear.'
 Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter (e.g., ethics) or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth (e.g., there is a widening gulf between appearances and reality), and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus, so that the scepticism of Pyrrho and the new Academy stood as a system of argument opposed to dogmatism, and particularly to the philosophical system-building of the Stoics.
 Mitigated scepticism accepts everyday or commonsensical beliefs not as the deliverance of reason but as due more to custom and habit, while remaining doubtful about the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Although the phrase 'Cartesian scepticism' is sometimes used, Descartes himself was not a sceptic; in the method of doubt he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes trusted a category of 'clear and distinct' ideas, not far removed from the cognitive impressions of the Stoics.
 Sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not achievable. Consider, for instance, the principle that every effect is a consequence of an antecedent cause or causes; for causality to hold it is not necessary for an effect to be predictable, since the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. Nevertheless, in order to avoid scepticism, philosophers have generally held that knowledge does not require certainty. Except for alleged cases of things that are self-evident, it has often been thought that anything known must satisfy certain criteria as well as being true: there will be criteria, arrived at by deduction or induction, specifying when a belief is warranted. Apart from the alleged cases of self-evident truths, there will be general principles specifying the sort of consideration that makes accepting a belief warranted to some degree.
 Besides, there is another view, the absolute global view that we do not have any knowledge whatsoever. It is doubtful, however, that any philosopher seriously entertains such absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about the evident; the non-evident is any belief that requires evidence for its support, the kind of support to which foundationalism is pledged.
 René Descartes (1596-1650), in his sceptical guise, uses the 'method of doubt' and its sceptical scenario to begin the process of finding a secure mark of knowledge. Descartes himself trusted a category of 'clear and distinct' ideas, not far removed from the phantasia kataleptike of the Stoics; he never doubted the content of his own ideas, but challenged, in his logic, whether they corresponded to anything beyond ideas.
 Scepticism should not be confused with relativism, which is a doctrine about the nature of truth and may itself be motivated by the attempt to avoid scepticism. The relativist does not say that we cannot know the truth; rather, the relativist holds that there is no truth to be known except what can be framed in the terms we use.
 All the same, both Pyrrhonist and Cartesian forms of virtually global scepticism have been held and opposed. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic's mill. The Pyrrhonist will suggest that no non-evident, empirical belief is sufficiently warranted, whereas the Cartesian sceptic will agree that no empirical beliefs about anything other than one's own mind and its contents are sufficiently warranted, because there are always legitimate grounds for doubting them. The essential difference between the two views concerns the stringency of the requirements for a belief's being sufficiently warranted to count as knowledge.
 The Cartesian requires something approaching certainty; the Pyrrhonist merely requires that a belief be more warranted than its denial.
 Cartesian scepticism is motivated by the way in which Descartes argues for scepticism rather than by his reply to it: we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly, is that there is a legitimate doubt about all such propositions, because there is no way justifiably to deny that our senses are being stimulated by some cause radically different from the objects which we normally think affect our senses. Thus, if the Pyrrhonist is the agnostic, the Cartesian sceptic is the atheist.
 Because the Pyrrhonist requires much less of a belief in order for it to count as knowledge than does the Cartesian, the arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that there is no better reason for believing any proposition than for denying it, whereas the Cartesian need only show that the stricter demand for certainty goes unmet.
 Although pragmatism does not hand down a single theory of knowledge, it is possible to identify a set of shared doctrines and to discern two broad styles of pragmatism; both hold that the Cartesian approach is fundamentally flawed, but they respond to it very differently.
 Even so, the coherence theory of truth holds that the truth of a proposition consists in its being a member of some suitably defined body of coherent propositions, possibly endowed with other virtues, provided these are not defined in terms of truth. The theory, at first sight, has two strengths: (1) we test beliefs for truth in the light of other beliefs, including perceptual beliefs, and (2) we cannot step outside our own best system of belief to see how well it is doing in terms of correspondence with the world. To many thinkers the weak point of pure coherence theories is that they fail to include a proper sense of the way in which actual systems of belief are sustained by persons with perceptual experience, impinged upon by their environment. For a pure coherence theory, experience is only relevant as the source of perceptual beliefs, which take their place as part of the coherent or incoherent set. This seems not to do justice to our sense that experience plays a special role in controlling our system of beliefs, but coherentists have contested the claim in various ways.
 A correspondence theory, by contrast, is not simply the view that truth consists in correspondence with the 'facts'; that much is a platitude. A correspondence theory is distinctive in holding that the notions of correspondence and fact can be sufficiently developed to turn the platitude into an interesting theory of truth. The difficulty is that we cannot look over our own shoulders to compare our beliefs with a reality apprehended by means other than those beliefs, or perhaps further beliefs. So we have no independent purchase on the 'facts' as structures to which our beliefs may or may not correspond.
 Now and again we take up confirmation theory, the theory of the measure to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The principal developments were due to the German logical positivist Rudolf Carnap (1891-1970), culminating in his Logical Foundations of Probability (1950). Carnap's idea was that the measure required would be the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared with the number in which the evidence itself holds. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement: it demands that we can put a measure on the 'range' of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming varieties of evidence from less compelling repetitions of the same experiment. Confirmation also proved susceptible to acute paradoxes: Hempel's paradox, in which the principle that induction by enumeration allows a suitable generalization to be confirmed by its instances leads to unwelcome results, and Goodman's paradox, in which the classical problem of induction is phrased in terms of finding some reason to expect that nature is uniform.
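 Carnap's idea can be put schematically in standard notation (added here for illustration; the symbols are not the document's own): where m is a measure over the logically possible state-descriptions, the degree of confirmation of a hypothesis h by evidence e is

    c(h, e) = \frac{m(h \wedge e)}{m(e)}

 so that confirmation is the proportion of the measure assigned to the evidence that also falls under the hypothesis.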
 Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what looks plausible that is characteristic of a scientific culture at any given time.
 The philosophy of language, it has been said, is the general attempt to understand the components of a working language, the relationship that an understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. It mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language, and with the metaphysics of truth and the relationship between sign and object. The subject, especially in the 20th century, has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive way in which we give shape to metaphysical beliefs. Its problems include the problem of logical form, the basis of the division between syntax and semantics, and the problem of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.
 A formal system is a theory whose sentences are well-formed formulae of a logical calculus, and in which axioms or rules governing particular terms correspond to the principles of the theory being formalized. The theory is intended to be framed in the language of a calculus, e.g., first-order predicate calculus. Set theory, mathematics, mechanics, and many other bodies of knowledge may be developed formally and axiomatically, thereby making possible the logical analysis of such matters as the independence of various axioms and the relations between one theory and another.
 The terms of a logical calculus are also called a formal language, and a logical system is a system in which explicit rules are provided for determining (1) which expressions belong to the system, (2) which sequences of expressions count as well formed (the well-formed formulae), and (3) which sequences count as proofs. A system may also specify axioms, from which terminating proofs can be constructed; the propositional calculus and the predicate calculus are the standard examples.
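 As a small illustration of clause (2), the following sketch decides which strings count as well-formed formulae of a toy propositional language. The grammar (letters 'p', 'q', 'r', negation '~', and a parenthesized conditional '>') is an assumption chosen for this example, not a claim about any particular calculus.

    # A minimal well-formedness checker for a toy propositional language.
    # Assumed grammar: a wff is 'p', 'q' or 'r'; a negation '~A'; or a
    # conditional '(A>B)' built from two wffs A and B.
    def is_wff(s: str) -> bool:
        if s in ("p", "q", "r"):                 # atomic formulae
            return True
        if s.startswith("~"):                    # negation of a wff
            return is_wff(s[1:])
        if s.startswith("(") and s.endswith(")"):
            depth = 0
            for i, ch in enumerate(s):
                if ch == "(":
                    depth += 1
                elif ch == ")":
                    depth -= 1
                elif ch == ">" and depth == 1:   # the main connective
                    return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
        return False

    print(is_wff("(p>~q)"))   # True: a well-formed conditional
    print(is_wff("p>q"))      # False in this grammar: outer parentheses required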
 The most immediate issues surrounding certainty are those connected with scepticism. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., that there is a gulf between appearances and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus. The scepticism of Pyrrho and the new Academy was thus a system of argument opposed to dogmatism, and particularly to the philosophical system-building of the Stoics.
 As it has come down to us, particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoic conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptic counsels epochē, or the suspension of belief, and goes on to celebrate a way of life whose object is ataraxia, the tranquillity resulting from suspension of belief.
 Mitigated scepticism, by contrast, accepts everyday or commonsense beliefs, not as deliverances of reason but as due more to custom and habit, while denying that reason can give us much more. Mitigated scepticism is thus closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Although the phrase 'Cartesian scepticism' is sometimes used, Descartes himself was not a sceptic: in the method of doubt he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes trusts in categories of clear and distinct ideas, not far removed from the phantasiá kataleptikê of the Stoics.
 Many sceptics have traditionally held that knowledge requires certainty, and they assert strongly that such certainty is not possible. (Part of what is at issue is the principle that every effect is a consequence of an antecedent cause or causes; for causality to hold it is not necessary that an effect be predictable, since the antecedent causes may be too numerous, too complicated, or too interrelated for analysis.) In order to avoid scepticism, other philosophers have generally held that knowledge does not require certainty, but only that alleged instances of self-evident truth, or of what is directly evident, satisfy certain criteria for being accepted as true. It is often held that anything known must satisfy certain standards, and that by deduction or induction there will be criteria specifying when those standards are met. The form of an argument determines whether it is a valid deduction. Generally speaking, some valid arguments display the form: all 'P's are 'Q's; 't' is a 'P'; therefore 't' is a 'Q'. Others display the form: if 'A' then 'B'; it is not the case that 'B'; therefore it is not the case that 'A'. The following example is of this second form:
  If there is life on Pluto, then Pluto has an atmosphere.
  It is not the case that Pluto has an atmosphere.
  Therefore, it is not the case that there is life on Pluto.
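 A quick mechanical check (illustrative only, not part of the original argument) confirms that this form, modus tollens, is deductively valid: on every assignment of truth values under which both premises are true, the conclusion is true as well.

    from itertools import product

    def implies(a: bool, b: bool) -> bool:
        # the material conditional "if a then b"
        return (not a) or b

    # Premise 1: if A then B.   Premise 2: not B.   Conclusion: not A.
    valid = all(
        (not a)                                   # the conclusion holds ...
        for a, b in product([True, False], repeat=2)
        if implies(a, b) and (not b)              # ... whenever both premises hold
    )
    print(valid)   # True: the form is valid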
The study of different forms of valid argument is the fundamental subject of deductive logic. These forms of argument are used in any discipline to establish conclusions on the basis of claims. In mathematics, propositions are established by a process of deductive reasoning, while in the empirical sciences, such as physics or chemistry, propositions are established by deduction as well as induction.
 The first person to discuss deduction was the ancient Greek philosopher Aristotle, who proposed a number of argument forms called syllogisms, the form of argument used in our first example. Soon after Aristotle, members of a school of philosophy known as Stoicism continued to develop deductive techniques of reasoning. Aristotle was interested in determining the deductive relations between general and particular assertions, for example, assertions containing the expression all (as in our first example) and those containing the expression some. He was also interested in the negations of these assertions. The Stoics focused on the relations among complete sentences that hold by virtue of particles such as if . . . then, it is not the case that, or and, and so forth. Thus the Stoics are the originators of sentential logic (so called because its basic units are whole sentences), whereas Aristotle can be considered the originator of predicate logic (so called because in predicate logic it is possible to distinguish between the subject and the predicate of a sentence).
 In the late 19th and early 20th centuries the German logicians Gottlob Frege and David Hilbert argued independently that deductively valid argument forms should not be couched in a natural language, the language we speak and write in, because natural languages are full of ambiguities and redundancies. For instance, consider the English sentence 'Every event has a cause.' It can mean either that there is a single cause which brings about every event, so that some one thing 'A' causes 'B', 'C', 'D', and so on, or that individual events each have their own, possibly different, cause: 'X' causes 'Y', 'Z' causes 'W', and so on. The problem is that the structure of the English sentence does not tell us which of the two readings is the correct one. This has important logical consequences. If the first reading is what is intended by the sentence, it follows that there is something akin to what some philosophers have called the primary cause, but if the second reading is what is intended, then there might be no primary cause.
 To avoid these problems, Frege and Hilbert proposed that the study of logic be carried out using formalized, artificial languages. These artificial languages are specifically designed so that their assertions reveal precisely the properties that are logically relevant, that is, those properties that determine the deductive validity of an argument. Written in a formalized language, two unambiguous sentences remove the ambiguity of the sentence 'Every event has a cause.' The first possibility is represented by a sentence that can be read: there is a thing 'x' such that, for every thing 'y', 'x' causes 'y'. This corresponds to the first interpretation mentioned above. The second possible meaning is represented by a sentence that can be read: for every thing 'y', there is a thing 'x' such that 'x' causes 'y'. This corresponds to the second interpretation mentioned above. Following Frege and Hilbert, contemporary deductive logic is conceived as the study of formalized languages and formal systems of deduction.
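 In a standard first-order notation (added here for illustration), writing C(x, y) for 'x causes y', the two readings come apart as

    \exists x\, \forall y\; C(x, y)   (there is one thing that causes every event)
    \forall y\, \exists x\; C(x, y)   (every event has some cause or other)

 and only the first reading licenses the inference to a single primary cause.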
 Although the process of deductive reasoning can be extremely complex, conclusions are obtained by a step-by-step process in which each step establishes a new assertion that is the result of an application of one of the valid argument forms either to the premises or to previously established assertions. Thus the different valid argument forms can be conceived as rules of derivation that permit the construction of complex deductive arguments. No matter how long or complex the argument, if every step is the result of the application of a rule, the argument is deductively valid: if the premises are true, the conclusion has to be true as well.
 In addition, an absolutely global scepticism about knowledge may itself be considered doubtful, in that very few philosophers would seriously entertain absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to the evident, the non-evident being any belief that requires evidence in order to be warranted.
 Descartes claimed that we could derive a scientific understanding of these ideas with the aid of precise deduction, and that we could lay the contours of physical reality out in three-dimensional coordinates. Following the publication of Isaac Newton's Principia Mathematica in 1687, reductionism and mathematical modeling became the most powerful tools of modern science. The dream that we could know and master the entire physical world through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
 The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern about its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes' stark division between mind and matter became the most central feature of Western intellectual life.
 Philosophers like John Locke, Thomas Hobbes, and David Hume all tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes' compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that liberty, equality, and fraternity are the guiding principles of this consciousness. Rousseau also invented the idea of the general will of the people to achieve these goals and declared that those who do not conform to this will are social deviants.
 The Enlightenment idea of deism, which imaged the universe as a clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation, while also implying that the creative forces of the universe were exhausted at its origin, and that the physical substrates of mind were subject to the same natural laws as matter. Reason alone was left to mediate between mind and matter. Traditional Judeo-Christian theism, which had previously been based on both reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that we can know the truths of spiritual reality only through divine revelation. This engendered a conflict between reason and revelation that persists to this day, and it laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating the relation between mind and matter and the manner in which they should ultimately define the special character of each.
 The nineteenth-century Romantics in Germany, England, and the United States revived Jean-Jacques Rousseau's (1712-78) attempt to posit a ground for human consciousness by reifying nature in a different form. Johann Wolfgang von Goethe (1749-1832) and Friedrich Wilhelm Joseph von Schelling (1775-1854) proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an inseparable spiritual Oneness) and sought the reconciliation of mind and matter through appeals to sentiment, mystical awareness, and quasi-scientific argument. For Goethe, nature became a mindful agency that loves illusion, shrouds man in mist, presses him to her heart, and punishes those who fail to see the light. Schelling, the principal philosopher of German Romanticism, asserted a version of cosmic unity and argued that scientific facts were at best partial truths, and that the mindful creative spirit that unites mind and matter is progressively moving toward self-realization and undivided wholeness.
 The British version of Romanticism, articulated by figures like William Wordsworth and Samuel Taylor Coleridge (1772-1834), placed more emphasis on the primacy of the imagination and the importance of rebellion and heroic vision as the grounds for freedom. As Wordsworth put it, communion with the incommunicable powers of the immortal sea empowers the mind to release itself from all the material constraints of the laws of nature. The founders of American transcendentalism, Ralph Waldo Emerson and Henry David Thoreau, articulated a version of Romanticism commensurate with the ideals of American democracy.
 The Americans envisioned a unified spiritual reality that manifested itself as a personal ethos which sanctioned radical individualism and bred aversion to the emergent materialism of the Jacksonian era. They were also more inclined than their European counterparts, as the examples of Thoreau and Whitman attest, to embrace scientific descriptions of nature. However, the Americans also dissolved the distinction between mind and matter with an appeal to ontological monism, and alleged that mind could free itself from the limitations of matter through some form of mystical awareness.
 Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality and knew virtually nothing about the physical substrates of human consciousness, the business of examining the dynamic functions and structural foundations of mind became the province of social scientists and humanists. Adolphe Quételet proposed a 'social physics' that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.
 More formal European philosophers, such as Immanuel Kant (1724-1804), sought to reconcile representations of external reality in mind with the motions of matter on the basis of the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles S. Peirce, William James, and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each remained obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.
 The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the 'death of God' theologian Friedrich Nietzsche (1844-1900). After declaring that God and divine will do not exist, Nietzsche reified the existence of consciousness in the domain of subjectivity as the ground for individual will and summarily dismissed all previous philosophical attempts to articulate the 'will to truth.' The problem, claimed Nietzsche, is that earlier versions of the will to truth disguised the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual will.
 In Nietzsche's view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no real or necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in a 'prison house of language.' The prison as he conceived it, however, was also a space where the philosopher can examine the innermost desires of his nature and articulate a new message of individual existence founded on will.
 Those failing to enact their existence in this space, said Nietzsche, are enticed into sacrificing their individuality on the non-existent altars of religious beliefs and democratic or socialist ideals, and they become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favours reductionistic examination of phenomena at the expense of mind; it also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.
 What is not widely known, however, is that Nietzsche and other seminal figures in the history of philosophical postmodernism were very much aware of an epistemological crisis in scientific thought that arose much earlier than the one occasioned by wave-particle dualism in quantum physics. That crisis resulted from attempts during the last three decades of the nineteenth century to develop a logically self-consistent definition of number and arithmetic that would serve to reinforce the classical view of correspondence between mathematical theory and physical reality.
 Nietzsche appealed to this crisis in an effort to reinforce his assumption that, in the absence of ontology, all knowledge (including scientific knowledge) was grounded only in human consciousness. As the crisis continued, a philosopher trained in higher mathematics and physics, Edmund Husserl, attempted to preserve the classical view of correspondence between mathematical theory and physical reality by deriving the foundations of logic and number from consciousness in ways that would preserve self-consistency and rigour. This effort to ground mathematical physics in human consciousness, or in human subjective reality, was no trivial matter. It represented a direct link between these early challenges to the efficacy of classical epistemology and the tradition in philosophical thought that culminated in philosophical postmodernism.
 What lies beyond our settled equilibria, and what first sparks thought into wakefulness, is the puzzle of being human itself: desires and drives that lie below the conscious struggle and that shape the simulations and dissimulations through which we arrange our world. On this view we are joined to reality not through any correspondence with a separate external realm, but through human subjectivity itself, so that we remain, in Nietzsche's phrase, locked in a prison house of language, a prison that is nevertheless also the space in which the philosopher examines the innermost desires of his nature and articulates a new message of individual existence founded on will.
 Nietzsche's emotionally charged defense of intellectual freedom, and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe, proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.
 Descartes, the foundational architect of modern philosophy, was quick to see the trouble: there is nothing in this view of nature that could re-establish a reconciliation between mind and matter. A quite different line of thought runs from Plotinus to the English mathematician and philosopher A. N. Whitehead. On Whitehead's view, whatever is apprehended as having actuality, a distinct and demonstrable existence, occurs as an event occupying a particular point in space and time; even what comes to mind, crosses one's mind, or flashes across one's mind occupies its appointed place in space and time. Whitehead further distinguished the 'primordial nature of God,' which is eternal, from the 'consequent nature of God,' which is in flux as differentiation occurs among the things that exist in space and time, an ordering of individual occurrences that some have sought to bring into accord with quantum theory.
 Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind, or in human subjectivity, was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, in accordance with the mathematical ideas that our minds are capable of uncovering in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally 'revealed' truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what we may call the 'hidden ontology of classical epistemology.'
 While classical epistemology would serve the progress of science very well, it also presented us with a terrible dilemma about the relationship between mind and world. If the only real or necessary correspondence between subjective reality and external physical reality lies in mathematical ideas, how do we know that the world in which we live, breathe, love, and die actually exists? Descartes' resolution of the dilemma took the form of an exercise. He asked us to direct our attention inward and to divest our consciousness of all awareness of external physical reality. If we do so, he concluded, the real existence of human subjective reality could be confirmed.
 As it turned out, this resolution was considerably more problematic and oppressive than Descartes could have imagined. 'I think, therefore I am' may be a marginally persuasive way of confirming the real existence of the thinking self. Nevertheless, the understanding of physical reality that had obliged Descartes and others to doubt the senses also implied that the separation between the subjective world and the world of life, on the one hand, and the real world of physical objectivity, on the other, was absolute.
 Unhappily, this habit of treating the subjective world as wholly cut off from physical reality has been described as 'the disease of the Western mind.' The background for understanding that charge is the relationship between parts and wholes in physics, together with relationships that have emerged in the so-called 'new biology' and in recent studies of evolution. These suggest that a scientific understanding of nature cannot be confined to what exists only in the mind, to ideational or conceptual representations treated as if they lacked nothing that properly belongs to their 'content.'
 What is brought forward for consideration, and claimed to be undoubted, is the notion that being is exactly as it appears: the representation of an actualized entity is supposed to be a self-realization that blends into harmonious processes of self-creation.
 In the 20th century the validity of metaphysical thinking has been disputed by the logical positivists and by the so-called dialectical materialism of the Marxists. The basic principle maintained by the logical positivists is the verifiability theory of meaning. According to this theory a sentence has factual meaning only if it meets the test of observation. Logical positivists argue that metaphysical expressions such as nothing exists except material particles and everything is part of one all-encompassing spirit cannot be tested empirically. Therefore, according to the verifiability theory of meaning, these expressions have no factual cognitive meaning, although they can have an emotive meaning relevant to human hopes and feelings.
 The dialectical materialists assert that the mind is conditioned by and reflects material reality. Therefore, speculations that conceive of constructs of the mind as having any other than material reality are themselves unreal and can result only in delusion. To these assertions metaphysicians reply by denying the adequacy of the verifiability theory of meaning and of material perception as the standard of reality. Both logical positivism and dialectical materialism, they argue, conceal metaphysical assumptions, for example, that everything is observable, or at least connected with something observable, and that the mind has no distinctive life of its own. In the philosophical movement known as existentialism, thinkers have contended that the questions of the nature of being and of the individual's relationship to it are extremely important and meaningful in terms of human life. The investigation of these questions is therefore considered valid whether or not its results can be verified objectively.
 Since the 1950s the problems of systematic analytical metaphysics have been studied in Britain by Stuart Newton Hampshire and Peter Frederick Strawson, the former concerned, in the manner of Spinoza, with the relationship between thought and action, and the latter, in the manner of Kant, with describing the major categories of experience as they are embedded in language. Metaphysics has also been pursued much in the spirit of positivism by Wilfrid Stalker Sellars and Willard Van Orman Quine. Sellars has sought to express metaphysical questions in linguistic terms, and Quine has attempted to determine whether the structure of language commits the philosopher to asserting the existence of any entities whatever and, if so, what kind. In these new formulations the issues of metaphysics and ontology remain vital.
 In the 17th century, the French philosopher René Descartes proposed that only two substances ultimately exist: mind and body. Yet, if the two are entirely distinct, as Descartes believed, how can one substance interact with the other? How, for example, is the intention of a human mind able to cause movement in the person's limbs? The issue of the interaction between mind and body is known in philosophy as the mind-body problem.
 Many fields other than philosophy share an interest in the nature of mind. In religion, the nature of mind is connected with various conceptions of the soul and the possibility of life after death. In many abstract theories of mind there is considerable overlap between philosophy and the science of psychology. Once part of philosophy, psychology split off and formed a separate branch of knowledge in the 19th century. While psychology uses scientific experiments to study mental states and events, philosophy uses reasoned arguments and thought experiments in seeking to understand the concepts that underlie mental phenomena. (Psychoanalysis offers a partial parallel: through the phenomenon of transference, interpretation and explanation become an interaction between analyst and analysand, reorienting the patient's grasp of what is 'real' and what is 'unreal'.) Also influenced by the philosophy of mind is the field of artificial intelligence, which endeavours to develop computers that can mimic what the human mind can do. Cognitive science attempts to integrate the understanding of mind provided by philosophy, psychology, AI, and other disciplines. Finally, all of these fields benefit from the detailed understanding of the brain that has emerged through neuroscience in the late 20th century.
 Philosophers use the characteristics of inward accessibility, subjectivity, intentionality, goal-directedness, creativity and freedom, and consciousness to distinguish mental phenomena from physical phenomena.
 Perhaps the most important characteristic of mental phenomena is that they are inwardly accessible, or available to us through introspection. We each know our own minds-our sensations, thoughts, memories, desires, and fantasies-in a direct sense, by internal reflection. We also know our mental states and mental events in a way that no one else can. In other words, we have privileged access to our own mental states.
 Certain mental phenomena, those we generally call experiences, have a subjective nature, that is, they have certain characteristics that we become aware of when we reflect: there is something it is like to feel pain, or to have an itch, or to see something red. These characteristics are subjective in that they are accessible to the subject of the experience, the person who has the experience, but not to others.
 Other mental phenomena, which we broadly refer to as thoughts, have a characteristic philosophers call intentionality. Intentional thoughts are about other things or objects, which are represented as having certain properties or as being related to one another in certain ways. The belief that London is west of Toronto, for example, is about London and Toronto and represents the former as west of the latter. Although we have privileged access to our intentional states, many of them do not seem to have a subjective nature, at least not in the way that experiences do.
 The contrast between the subjective and the objective is made in both the epistemic and the ontological domains. In the epistemic domain it is often identified with the distinction between matters whose resolution depends on the psychology of the person in question and matters that are independent of it, or, sometimes, with the distinction between the biased and the impartial. Thus an objective question might be one answerable by a method usable by any competent investigator, while a subjective question would be answerable only from the questioner's point of view. In the ontological domain, the subjective-objective contrast is often between what is and what is not mind-dependent: secondary qualities, e.g., colours, have been thought subjective because they vary with observation conditions. The truth of a proposition, for instance, apart from certain propositions about oneself, would be objective if it is independent of the perspective, and especially the beliefs, of those judging it. Truth would be subjective if it lacks such independence, say because it is a construct from justified beliefs, e.g., those well confirmed by observation.
 One notion of objectivity can be treated as basic and the other derived from it. If the epistemic notion is basic, then the criteria for objectivity in the ontological sense derive from considerations of justification: an objective question is one answerable by a procedure that yields (adequate) justification, and an objective truth is one ascertainable by such a method. If, on the other hand, the ontological notion is basic, the criteria for an interpersonal method and its objective use are a matter of its mind-independence and its tendency to lead to objective truth, perhaps by applying to external objects and yielding predictive success. Since the use of these criteria requires employing the methods which, on the epistemic conception, define objectivity, most notably scientific methods, while no similar dependence obtains in the other direction, the epistemic notion is often taken as basic.
 A different account of truth, the epistemic theory, is motivated by the desire to avoid the negative features of the correspondence theory. (A comparison is sometimes drawn with the cosmological argument for the existence of God, whose premises are that all natural things are dependent for their existence on something else, and that the totality of dependent beings must itself depend upon a non-dependent, or necessarily existent, being, which is God. The God that ends the regress must exist necessarily; it must not be an entity about which the same kind of question can be raised. The problem with this argument is that it affords no reason for attributing concern and care to the deity, nor for connecting the necessarily existent being it derives with human values and aspirations.)
 On the epistemic theory, truth is not something that confounds our best theory of reality; rather, truth is a function of our best thinking about the world. An obvious problem with this is the fact of revision: theories are constantly refined and corrected. To deal with this objection, truth is identified with what would be asserted at the ideal end of enquiry. We never in fact reach that end, but it serves as a regulative ideal, an asymptotic goal of enquiry. The epistemic theory of truth is therefore not antipathetic to ontological relativity, since it has no commitment to the ultimate furniture of the world, and it is also open to the possibility of some kinds of epistemological relativism.
 In epistemology, the subjective-objective contrast arises above all for the concept of justification and its relatives. Externalism, particularly reliabilism, construes justification objectivistically, since for reliabilism truth-conduciveness (non-subjectively conceived) is central to justified belief. Internalism may or may not construe justification subjectivistically, depending on whether the proposed epistemic standards are interpersonally grounded. There are also various kinds of subjectivity: justification may, for example, be grounded in one's considered standards or simply in what one believes to be sound. On the former view, justified beliefs are those that accord with one's considered standards, whether or not one explicitly thinks them justified.
 Any conception of objectivity may treat one domain as fundamental and the others as derivative. Thus, objectivity for methods (including sensory observation) might be thought basic. Say that an objective method is one that is (1) interpersonally usable and tends to yield justification regarding the questions to which it applies (an epistemic conception), or (2) tends to yield truth when properly applied (an ontological conception), or (3) both. Then an objective person is one who appropriately uses objective methods, an objective statement is one appraisable by an objective method, an objective discipline is one whose methods are objective, and so on. Typically, those who conceive objectivity epistemically tend to take methods as fundamental, and those who conceive it ontologically tend to take statements as basic.
 A number of mental phenomena appear to be connected to one another as elements in an intelligent, goal-directed system. The system works as follows: First, our sense organs are stimulated by events in our environment; next, by virtue of these stimulations, we perceive things about the external world; finally, we use this information, as well as information we have remembered or inferred, to guide our actions in ways that further our goals. Goal-directedness seems to accompany only mental phenomena.
 Another important characteristic of mind, especially of human minds, is the capacity for choice and imagination. Rather than automatically converting past influences into future actions, individual minds are capable of exhibiting creativity and freedom. For instance, we can imagine things we have not experienced and can act in ways that no one expects or could predict.
 Mental phenomena are conscious, and consciousness may be the closest term we have for describing what is special about mental phenomena. Minds are sometimes referred to as consciousness, yet it is difficult to describe exactly what consciousness is. Although consciousness is closely related to inward accessibility and subjectivity, these very characteristics seem to hinder us in reaching an objective scientific understanding of it.
 Although philosophers have written about mental phenomena since ancient times, the philosophy of mind did not garner much attention until the work of the French philosopher René Descartes in the 17th century. Descartes' work represented a turning point in thinking about mind by making a strong distinction between bodies and minds, or the physical and the mental. This duality between mind and body, known as Cartesian dualism, has posed significant problems for philosophy ever since.
 Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things-bodies and minds-are completely different from each other: Bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.
 For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human mind may cause that person's limbs to move. In this way, the mind can affect the body. Conversely, when light, pressure, or sound from external sources stimulates the sense organs, the brain is affected, and with it the person's mental states. Thus the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, and is known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connexion between mind and body more closely resembles two substances that have been thoroughly mixed together.
 In response to the mind-body problem arising from Descartes' theory of substance dualism, a number of philosophers have advocated various forms of substance monism, the doctrine that there is ultimately just one kind of thing in reality. In the 18th century, the Irish philosopher George Berkeley claimed there were no material objects in the world, only minds and their ideas. Berkeley thought that talk about physical objects was simply a way of organizing the flow of experience. Near the turn of the 20th century, the American psychologist and philosopher William James proposed another form of substance monism. James claimed that experience is the basic stuff from which both bodies and minds are constructed.
 Most philosophers of mind today are substance monists of a third type: they are materialists who believe that everything in the world is basically material, or a physical object. Among materialists, there is still considerable disagreement about the status of mental properties, which are conceived as properties of bodies or brains. Materialists who are property dualists believe that mental properties are an additional kind of property or attribute, not reducible to physical properties. Property dualists have the problem of explaining how such properties can fit into the world envisaged by modern physical science, according to which there are physical explanations for all things.
 In the theory of probability, the Cambridge mathematician and philosopher Frank Ramsey (1903-30) was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a redundancy theory of truth, which he combined with radical views of the function of many kinds of propositions. Neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts; rather, each has a different, specific function in our intellectual economy.
 A Ramsey sentence is generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., 'quark', replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of terms, the sentence gives the topic-neutral structure of the theory, while removing any implication that we know what the terms so treated denote; the theoretical item is whatever best fits the description provided. Nonetheless, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of the theory, then, by the Löwenheim-Skolem theorem, the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
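 Schematically (a standard rendering of the idea, added here for illustration), if the theory's assertions involving its theoretical term are gathered into a single open sentence T, the move is from the theory to its Ramsey sentence:

    T(\mathrm{quark}) \;\longrightarrow\; \exists X\, T(X)

 which says only that something plays the role the theory assigns to 'quark'.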
 Formally, probability is a non-negative, additive set function whose maximum value is unity. What is harder to understand is the application of the formal notion to the actual world. One point of application is statistical: when kinds of events or trials (such as the tossing of a coin) can be described, and the frequency of occurrence of particular outcomes (such as the coin falling heads) is measurable, we can begin to think of the probability of that kind of outcome in that kind of trial. One account of probability is therefore the frequency theory, associated with Venn and Richard von Mises (1883-1953), which identifies the probability of an event with such a frequency of occurrence. A second point of application is the description of a hypothesis as probable when the evidence bears a favourable relation to it. This relation is conceived of as purely logical in nature, as in the works of Keynes and Carnap; probability statements are not empirical measures of frequency, but represent something like partial entailments or measures of the possibilities left open by the evidence and by the hypothesis.
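 The opening characterization can be written out as the familiar axioms (standard notation, added here for reference): for events A and B in a sample space \Omega,

    P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P(A \cup B) = P(A) + P(B) \quad \text{whenever } A \cap B = \varnothing.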
 Formal confirmation theories and range theories of probability are developments of this logical conception. A third point of application is the use probability judgements have in regulating the confidence with which we hold various expectations. This approach, sometimes called subjectivism or personalism but more commonly known as Bayesianism, is associated with de Finetti and Ramsey, both of whom see probability judgements as expressions of a subjective measure of confidence in an event or kind of event, and who attempt to describe constraints on how we should distribute our degrees of confidence across different judgements, constraints that explain those judgements having the mathematical form of judgements of probability. For Bayesianism, probability or chance is not an objective or real factor in the world, but rather a reflection of our own states of mind. However, these states of mind need to be governed by empirical frequencies, so this is not an invitation to licentious thinking.
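 As a small illustration of this picture (the coin example and the numbers are assumptions chosen for the sketch, not content from the text), a subjective degree of confidence is revised in the light of evidence by Bayes' theorem:

    def bayes_update(prior: float, likelihood: float, likelihood_if_not: float) -> float:
        """Posterior P(H | E) from P(H), P(E | H), and P(E | not-H)."""
        evidence = likelihood * prior + likelihood_if_not * (1.0 - prior)
        return likelihood * prior / evidence

    # Initial confidence of 0.5 that a coin is biased toward heads (P(heads) = 0.8
    # if biased, 0.5 if fair); observing a head raises that confidence.
    posterior = bayes_update(prior=0.5, likelihood=0.8, likelihood_if_not=0.5)
    print(round(posterior, 3))   # 0.615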
 Sampling, and the accompanying application of the laws of probability, finds extensive use in polls: public opinion polls, polls to determine what radio or television programs are being watched and listened to, polls to determine housewives' reactions to a new product, political polls, and the like. In most cases the sampling is carefully planned and a margin of error is often stated. Polls cannot, however, altogether eliminate the fact that certain people dislike being questioned and may deliberately conceal or give false information. In spite of this and other objections, the method of sampling often makes results available in situations where the cost of complete enumeration would be prohibitive both in time and in money.
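 For illustration, the margin of error a poll states is commonly computed with the normal approximation for a sample proportion; the sample size and proportion below are assumptions chosen for the example.

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """Half-width of an approximate 95% confidence interval for a proportion."""
        return z * math.sqrt(p * (1.0 - p) / n)

    # 1,000 respondents, 52% favouring a proposal:
    print(round(margin_of_error(0.52, 1000) * 100, 1), "percentage points")   # about 3.1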
 Thus we can see that probability and statistics are used in insurance, physics, genetics, biology, and business, as well as in games of chance, and we are inclined to agree with P. S. Laplace, who said: We see . . . that the theory of probabilities is at bottom only common sense reduced to calculation; it makes us appreciate with exactitude what reasonable minds feel by a sort of instinct, often without being able to account for it . . . it is remarkable that [this] science, which originated in the consideration of games of chance, should have become the most important object of human knowledge.
 Perhaps the best known of the paradoxes in the foundations of set theory was discovered by Russell in 1901. Some classes have themselves as members: the class of all abstract objects, for example, is itself an abstract object; others do not: the class of donkeys is not itself a donkey. Now consider the class of all classes that are not members of themselves. Is this class a member of itself? If it is, then it is not, and if it is not, then it is.
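 In standard set-builder notation (added here for illustration), the paradoxical class is

    R = \{\, x \mid x \notin x \,\}, \qquad R \in R \iff R \notin R,

 which states Russell's contradiction in a single line.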
 The paradox is structurally similar to easier examples, such as the paradox of the barber. Suppose there is a village with just one barber in it, who shaves all and only the people who do not shave themselves. Who shaves the barber? If he shaves himself, then he does not, but if he does not shave himself, then he does. This paradox is actually just a proof that there is no such barber, or in other words, that the conditions are inconsistent. All the same, it is not so easy to say why there is no such class as the one Russell defines. It seems that there must be some restriction on the kinds of definition that are allowed to define classes, and the difficulty is that of finding a well-motivated principle behind any such restriction.
The French mathematician and philosopher Jules Henri Poincaré (1854-1912) believed that paradoxes like those of Russell and the barber were due to impredicative definitions, and therefore proposed banning them, though the ban is not easy to state precisely. The proposal, as forwarded by Poincaré and Russell, was that in order to solve the logical and semantic paradoxes one must ban any collection (set) containing members that can only be defined by means of the collection taken as a whole; a definition is acceptable only if it involves no such vicious circle. There is frequently room for dispute about whether regresses are benign or vicious, since the issue will hinge on whether it is necessary to reapply the procedure. The cosmological argument, for example, is an attempt to find a stopping point for what is otherwise seen as an infinite regress.
The investigation of questions that arise from reflection upon science and scientific inquiry is called the philosophy of science. Such questions include: what is distinctive about the methods of science? Is there a clear demarcation between science and other disciplines, and where do we place such inquiries as history, economics or sociology? Are scientific theories probable, or more in the nature of provisional conjectures? Can they be verified or falsified? What distinguishes 'good' from 'bad', or 'better' from 'worse', explanations? Might there be one unified science, embracing all the special sciences? For much of the 20th century these questions were pursued in a highly abstract and logical framework, it being supposed that a general logic of scientific discovery or justification might be found. However, many now take an interest in a more historical, contextual and sometimes sociological approach, in which the methods and successes of a science at a particular time are regarded less in terms of universal logical principles and procedures, and more in terms of locally available methods and paradigms, as well as the social context.
In addition to general questions of methodology, there are specific problems within particular sciences, giving rise to philosophies of such subjects as biology, mathematics and physics.
Intuition is immediate awareness, either of the truth of some proposition or of an object of apprehension, such as a concept. Such awareness has held an important place in philosophical accounts of the sources of our knowledge, covering both the sensible apprehension of things and the pure intuition that structures sensation into the experience of things arrayed in space and time.
Natural law is a view of the status of law and morality especially associated with St. Thomas Aquinas and the subsequent scholastic tradition. More widely, it is any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writers, is arguably derivative from a Platonic view of ethics, and is implicit in ancient Stoicism. Law stands above and apart from the activities of human lawmakers; it constitutes an objective set of principles that can be seen to be true by natural light or reason, and (in religious versions of the theory) that express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God's will. The Dutch philosopher Hugo Grotius (1583-1645), for instance, takes the view that the content of natural law is independent of any will, including that of God, while the German theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view, thereby facing one horn of the Euthyphro dilemma: whatever the source of authority is supposed to be, do we care about the good because it is good, or do we merely call good the things that we care about? The theory may take a strong form, in which it is claimed that various facts entail values, or a weaker form, which confines itself to holding that reason by itself is capable of discerning moral requirements that are binding on all human beings regardless of their desires. Although 'morality' and 'ethics' often amount to the same thing, there is a usage that restricts morality to systems such as that of the German philosopher Immanuel Kant (1724-1804), based on notions such as duty, obligation, and principles of conduct, reserving 'ethics' for the more Aristotelian approach to practical reasoning based on the notion of a virtue, and generally avoiding the separation of moral considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests. Some theorists see the subject in terms of a number of laws (as in the Ten Commandments), whose status may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable deductively. Other approaches to ethics (e.g., eudaimonism, situational ethics, virtue ethics) eschew general principles as much as possible, frequently disguising the great complexity of practical reasoning. For Kant the moral law is the binding requirement of the categorical imperative; it remains an open question whether these approaches are equivalent at some deep level.
Kant's own applications of the notion are not always convincing. One cause of confusion in relating Kant's ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of a sentiment, but must derive from something unconditional or necessary, such as the voice of reason.
Duty is that which one must do, or that which can be required of one. The term carries implications of that which is owed (due) to other people, or perhaps to oneself. Universal duties would be owed to persons (or sentient beings) as such, whereas special duties arise in virtue of specific relations, such as being the child of someone, or having made someone a promise. Duty or obligation is the primary concept of deontological approaches to ethics, but is constructed in other systems out of other notions. In the system of Kant, a perfect duty is one that must be performed whatever the circumstances; imperfect duties may have to give way to the more stringent ones. In another usage, perfect duties are those that are correlative with rights in others; imperfect duties are not. Problems with the concept include the ways in which what is due needs to be specified (a frequent criticism of Kant is that his notion of duty is too abstract), and whether duties are best thought of as having independent reality or as existing only in the mind. The concept also suggests a regimented view of ethical life in which we are all forced conscripts in a kind of moral army, and may encourage an individualistic and antagonistic view of social relations.
The most generally accepted account of the externalist/internalist distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.
 The externalist/internalist distinction has been mainly applied to theories of epistemic justification: It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought contents.
The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focusing his attention appropriately, but without the need for any change of position, new information, etc. Though the phrase 'cognitively accessible' suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would seem to require the strong interpretation.
Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which the belief to be justified is required to cohere and the coherence relations themselves are reflectively accessible.
It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).
The most prominent recent externalist views have been versions of reliabilism, whose requirement for justification is roughly that the belief be produced in a way or via a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
Once again, the main objection to externalism rests on the intuitive conviction that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions by citing beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very nonstandard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.
Perhaps the most striking reply to this sort of counter-example, on behalf of reliabilism, is the suggestion that the reliability of a cognitive process is to be assessed in 'normal' possible worlds, i.e., in possible worlds that are the way our world is commonsensically believed to be, rather than in the world which actually contains the belief being judged. Since the cognitive processes employed in the Cartesian demon cases are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious question is whether there is an adequate rationale for this construal of reliabilism, so that the reply is not merely ad hoc.
The other way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Considering the point in application, once again, to reliabilism, the claim is that a person who possesses a reliable clairvoyant power, but has no reason to think that he has such a power, and perhaps even good reasons to the contrary, is not rational or responsible and therefore not epistemically justified in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.
One sort of response to this latter sort of objection is to bite the bullet and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples, while stopping well short of a full internalism. But, while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, the question remains whether there are further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to reject.
A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, it must be objectively true that beliefs for which such a factor is available are likely to be true; this further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other can be entirely external. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true, and so the belief is not held in the rational, responsible way that justification intuitively seems to require.
An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., being the result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction even exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have; in particular, whether it has any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be concerned primarily with justification rather than with knowledge.
A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; while according to an externalist view, content is significantly affected by such external factors, and a view that appeals to both internal and external elements is standardly classified as an externalist view.
As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, etc. that motivate the views that have come to be known as direct reference theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc., and not just on what is going on internally in his mind or brain.
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts from the inside, simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors, which will not in general be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
Foundationalism is the view in epistemology that knowledge must be regarded as a structure raised upon secure, certain foundations. These are found in some combination of experience and reason, with different schools (empiricism, rationalism) emphasizing the role of one over that of the other. Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes, who discovered his foundations in the clear and distinct ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty.
Truth, along with coherence, is a central concept here; its study in philosophy treats both the meaning of the word 'true' and the criteria by which we judge the truth or falsity of spoken and written statements. Philosophers have attempted to answer the question 'What is truth?' for thousands of years. The four main theories they have proposed to answer this question are the correspondence, pragmatic, coherence, and deflationary theories of truth.
There are various ways of distinguishing types of foundationalist epistemology by means of the variations that have been enumerated. Plantinga has put forward an influential conception of 'classical foundationalism', specified in terms of limitations on the foundations. He construes this as a disjunction of ancient and medieval foundationalism, which takes the foundations to comprise what is self-evident and what is evident to the senses, and modern foundationalism, which replaces 'evident to the senses' with 'incorrigible', a notion which in practice was taken to apply only to beliefs about one's present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what is variously called 'strong' or 'extreme' foundationalism and 'moderate', 'modest' or 'minimal' foundationalism, with the distinction depending on whether epistemic immunities (certainty, infallibility, incorrigibility) are required of foundations, or whether it is required of a foundation only that it be immediately justified. It can be suggested that the plausibility of the strong requirement stems from a level confusion between beliefs on different levels.
Emerging sceptical tendencies come forth in the 14th-century writings of Nicholas of Autrecourt. His criticisms of any certainty beyond the immediate deliverance of the senses and basic logic, and in particular of any knowledge of either intellectual or material substances, anticipate the later scepticism of Bayle and Hume. The latter distinguishes between Pyrrhonistic or excessive scepticism, which he regarded as unlivable, and the more mitigated scepticism that accepts everyday or commonsense beliefs (not as the delivery of reason, but as due more to custom and habit), but is duly wary of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by ancient scepticism from Pyrrho through to Sextus Empiricus. Although the phrase 'Cartesian scepticism' is sometimes used, Descartes himself was not a sceptic, but in the method of doubt uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes himself trusts a category of clear and distinct ideas, not far removed from the phantasia kataleptiké of the Stoics.
Scepticism should not be confused with relativism, which is a doctrine about the nature of truth, and may be motivated by trying to avoid scepticism. Nor is it identical with eliminativism, which counsels abandoning an area of thought altogether, not because we cannot know the truth, but because there are no truths capable of being framed in the terms we use.
Descartes's theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis alone of which progress is possible. This is eventually found in the celebrated Cogito ergo sum: I think, therefore I am. By locating the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses invokes a clear and distinct perception of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, 'to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit'.
In his own time Descartes's conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems of the nature of the causal connexion between the two substances. It also gives rise to the problem, insoluble in its own terms, of other minds. Descartes's notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes thought, as reflected in Leibniz, that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension there can be no empty space or void; since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).
Although the structure of Descartes's epistemology, theories of mind, and theories of matter have been rejected many times, their relentless engagement with the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.
The self, conceived as Descartes presents it in the first two Meditations, is aware only of its own thoughts, and capable of disembodied existence, neither situated in a space nor surrounded by others. This is the pure self, or 'I', that we are tempted to imagine as a simple unique thing that makes up our essential identity. Descartes's view that he could keep hold of this nugget while doubting everything else is criticized by Lichtenberg and Kant, and by most subsequent philosophers of mind.
 Descartes holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions because there is no way to deny justifiably that our senses are being stimulated by some cause (an evil spirit, for example) which is radically different from the objects that we normally think affect our senses.
He also points out that the senses (sight, hearing, touch, etc.) are often unreliable, and 'it is prudent never to trust entirely those who have deceived us even once'; he cites such instances as the straight stick that looks bent in water, and the square tower that looks round from a distance. This argument from illusion has not, on the whole, impressed commentators, and some of Descartes's contemporaries pointed out that since such errors become known as a result of further sensory information, it cannot be right to cast wholesale doubt on the evidence of the senses. But Descartes regarded the argument from illusion as only the first stage in a tenderizing process which would lead the mind away from the senses. He admits that there are some cases of sense-based belief about which doubt would be insane, e.g., the belief that I am sitting here by the fire, wearing a winter dressing gown.
Descartes was to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for what we experience directly as distinctively human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Newton's Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
The theory of knowledge has as its central questions the origin of knowledge, the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning.
Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who discovered his foundations in the clear and distinct ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation in favour of a coherence theory of truth. It is widely accepted that trying to make the connexion between thought and experience through basic sentences depends on an untenable 'myth of the given'.
Still, in spite of these concerns, there remains the problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts, a project that began with Plato's view in the Theaetetus that knowledge is true belief plus some logos. What has, however, increasingly come into its own is naturalized epistemology, the enterprise of studying the actual formation of knowledge by human beings (Homo sapiens), without aspiring to certify those processes as rational, or as proof against scepticism, or even as apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for external or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.
The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too much confidence in the possibility of a purely a priori 'first philosophy', or viewpoint beyond that of the working practitioners, from which their best efforts can be measured as good or bad. Such standpoints now seem to many philosophers to be fanciful; the more modest task actually adopted at various historical stages of investigation into different areas has been not so much to criticize as to systematize the presuppositions of a particular field at a particular time. There is still a role for local methodological disputes within the community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific, but logic and philosophy will not, on the modern view, provide any independent arsenal of weapons for such battles, which often come to seem more like factional quarrels within the ascendancy of a discipline.
This is an approach to the theory of knowledge that sees an important connexion between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At least once, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.
Chance can influence the outcome at each stage: first, in the creation of genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual's actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, were the process run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.
We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean 'Does natural selection always take the best path for the long-term welfare of a species?', the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean 'Does natural selection create every adaptation that would be valuable?', the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not guarantee that it will evolve.
The three major components of the model of natural selection are variation, selection and retention. According to Darwin's theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected, while those that do not are not. In the modern theory of evolution, genetic mutations provide the blind variations: blind in the sense that variations are not influenced by the effects they would have; the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism. The environment provides the filter of selection, and reproduction provides the retention. Fitness is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features that are better adapted. Evolutionary epistemology applies this blind-variation-and-selective-retention model to the growth of scientific knowledge and to human thought processes overall.
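 A toy sketch of the blind-variation-and-selective-retention scheme just described may make the structure plainer; everything here (the numeric 'genome', the fitness function, the population size) is invented for illustration and is not specific to either the biological or the epistemic reading:

    import random

    def mutate(x, rate=0.1):
        # Blind variation: the change is not biased toward what would be beneficial.
        return x + random.gauss(0.0, rate)

    def fitness(x, target=1.0):
        # The "environment": closeness to an arbitrary target value counts as adapted.
        return -abs(x - target)

    population = [random.uniform(-1, 1) for _ in range(20)]
    for generation in range(50):
        # Variation: each individual produces a blindly varied offspring.
        offspring = [mutate(x) for x in population]
        # Selection: the environment filters; the better-adapted half is retained.
        pool = sorted(population + offspring, key=fitness, reverse=True)
        # Retention: the survivors are reproduced into the next generation.
        population = pool[:20]

    print(round(max(population, key=fitness), 3))  # drifts toward the target over generations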
The parallel between biological evolution and conceptual or epistemic evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology views biological evolution as the main cause of the growth of knowledge. On this view, called the 'evolution of cognitive mechanisms' program by Bradie (1986) and the Darwinian approach to epistemology by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms that guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of the literal view that he links to sociobiology. On the analogical version of evolutionary epistemology, called the 'evolution of theories' program by Bradie (1986) and the Spencerian approach (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the (partial) fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.
Different conceptions of human nature also play out ethically; for example, the conception of 'nature red in tooth and claw' often provides a justification for aggressive personal and political relations, or the idea that it is women's nature to be one thing or another is taken to be a justification for differential social expectations. The term then functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the 'masculine' self-image, itself a social variable and potentially distorting of the picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In the latter area particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, or to the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. However, to more radical feminists such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such symmetrical powers and rights.
Biological determinism is the view that biology not only influences but constrains and makes inevitable our development as persons with a variety of traits. At its silliest, the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social, and political determinants of the way we are.
The philosophy of social science is more heavily intertwined with actual social science than in the case of other subjects such as physics or mathematics, since its question is centrally whether there can be such a thing as sociology. The idea of a 'science of man', devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by people's own ideas of what should happen, and, like fashions, those ideas change in unpredictable ways as self-consciousness is susceptible to change by any number of external events: unlike the solar system of celestial mechanics, a society is not at all a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.
The sociobiological approach to human behaviour is based on the premise that all social behaviour has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and for assessing the various genetic stories that might provide such explanations.
Among the features that are proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people's characteristics, e.g., at the limit of silliness, by postulating a 'gene for poverty'. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to the environment: for instance, it may be a propensity to develop some feature in some range of environments (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanation from speculative 'just so' stories which may or may not identify real selective mechanisms.
Subsequently, in the 19th century attempts were made to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which promoted an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating the natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T. H. Huxley said that Spencer's definition of a tragedy was a deduction killed by a fact. The writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum, and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the 'hurdy-gurdy' monotony of him and of his system.
The premise is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify such struggles, usually by enhancing competitive and aggressive relations between people in society or between societies themselves. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.
Evolutionary psychology is the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoires, our moral reactions, including the disposition to detect and punish those who cheat on an agreement or who free-ride on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.
For all that, an essential part of the thought of the British absolute idealist F. H. Bradley (1846-1924) was that the self is not self-sufficient, but is individualized only through community, and contributes to social and other ideals. However, truth as formulated in language is always partial, and dependent upon categories that are inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley's general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher G. W. F. Hegel (1770-1831).
Something of Bradley's case echoes a preference, voiced much earlier by the German philosopher, mathematician and polymath Gottfried Leibniz (1646-1716), for categorical, monadic properties over relations. He was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. For Friedrich Schelling (1775-1854), nature itself is the expression of a basic underlying spiritual reality. Romanticism drew on the same intellectual and emotional resources as German idealism, which was increasingly culminating in the philosophy of Hegel (1770-1831) and of absolute idealism.
Naturalism is, most generally, a sympathy with the view that ultimately nothing resists explanation by the methods characteristic of the natural sciences. A naturalist will be opposed, for example, to mind-body dualism, since it leaves the mental side of things outside the explanatory grasp of biology or physics; opposed to the acceptance of numbers or concepts as real but non-physical denizens of the world; and opposed to accepting 'real' moral duties and rights as absolute and self-standing facets of the natural order. Human nature has been a major topic of philosophical inquiry, especially in Aristotle, and again since the 17th and 18th centuries, when the 'science of man' began to probe into human motivation and emotion. For writers such as the French moralistes, or for Francis Hutcheson (1694-1746), David Hume (1711-76), Adam Smith (1723-90) and Immanuel Kant (1724-1804), a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies, such as empathy, sympathy or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves. In a like spirit, moral realism is realism applied to the judgements of ethics, and to the values, obligations, rights, etc. that are referred to in ethical theory. The leading idea is to see moral truth as grounded in the nature of things rather than in subjective and variable human reactions to things. Like realism in other areas, this is capable of many different formulations. Generally speaking, moral realism aspires to protect the objectivity of ethical judgement (opposing relativism and subjectivism); it may assimilate moral truths to those of mathematics, hope that they have some divine sanction, or see them as guaranteed by human nature.
Nature is an indefinitely mutable term, changing as our scientific conception of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species and to the natural world as a whole. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle's philosophy of nature. Nature in general can, however, function as a foil to any ideal as much as a source of ideals; in this sense fallen nature is contrasted with a supposed celestial realization of the 'forms'. Nature becomes an equally potent emblem of irregularity, wildness and fertile diversity, but is also associated with progress and transformation. Different conceptions of nature continue to have ethical overtones; for example, the conception of 'nature red in tooth and claw' often provides a justification for aggressive personal and political relations, or the idea that it is a woman's nature to be one thing or another is taken to be a justification for differential social expectations. Here the term functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing.
The central problem for naturalism is to define what counts as a satisfactory accommodation between the preferred sciences and the elements that on the face of it have no place in them. Alternatives include 'instrumentalism', 'reductionism' and 'eliminativism', as well as a variety of other anti-realist suggestions. The standard opposition is between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs; any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. The term 'naturalism' is sometimes used for specific versions of these approaches, in particular in ethics as the doctrine that moral predicates actually express the same thing as predicates from some natural or empirical science. This suggestion is probably untenable, but as other accommodations between ethics and the view of human beings as just parts of nature recommend themselves, they then gain the title of naturalistic approaches to ethics.
What is contrasted with nature may include (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar; (2) the supernatural, or the world of gods and invisible agencies; (3) the world of rationality and intelligence, conceived as distinct from the biological and physical order; (4) that which is manufactured and artifactual, or the product of human invention; and (5), related to it, the world of convention and artifice.
 Most ethics is concerned with problems of human desires and needs: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to place on such things as the preservation of species or the protection of the wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim a non-utilitarian, absolute value for the existence of wild things and wild places: it is in their existence that their value consists. They put us in our proper place, and failure to appreciate this value is not only an aesthetic failure but a failure of due humility and reverence, a moral disability. The problem is one of expressing this value, and of mobilizing it against utilitarian arguments for developing natural areas and exterminating species, more or less at will.
 Many concerns and disputes cluster around the idea associated with the term 'substance'. The substance of a thing may be considered as: (1) its essence, or that which makes it what it is; this will ensure that the substance of a thing is that which remains through changes in its properties; in Aristotle, this essence becomes more than just the matter, but a unity of matter and form; (2) that which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) that which bears properties; a substance is then the subject of predication, that about which things are said as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relation, etc. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tended to disappear in empiricist thought, with the notion of that in which qualities inhere giving way to an empirical notion of their regular concurrence. However, this in turn is problematic, since it only makes sense to talk of the concurrence of instances of qualities, not of qualities themselves, and the problem remains of what it is for a quality to be instanced.
 Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.
 The sublime is a concept deeply embedded in 18th-century aesthetics, though it had already been treated in the 1st-century rhetorical treatise On the Sublime, attributed to Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759, 'When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness and strikes it with deep silent wonder, and admiration: it finds such a difficulty in spreading itself to the dimensions of its object, as enlivens and invigorates it; and having overcome the opposition which this occasions, it sometimes imagines itself present in every part of the scene which it contemplates; and from the sense of this immensity, feels a noble pride, and entertains a lofty conception of its own capacity.'
 In Kant's aesthetic theory the sublime 'raises the soul above the height of vulgar complacency'. We experience the vast spectacles of nature as 'absolutely great' and of irresistible force and power. This perception is fearful, but by conquering this fear, and by regarding as small 'those things of which we are wont to be solicitous', we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of our true, inward moral freedom as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.
 Nevertheless, the doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack by the British philosophers George Edward Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of 'essentialism', stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would be apt to describe as my not wearing the hat now, we would strictly not be imagining me and the hat, but only some different individual.
 The doctrine bears some resemblance to the metaphysically based view of the German philosopher and mathematician Gottfried Leibniz (1646-1716) that if a person had any attributes other than the ones he has, he would not have been the same person. Leibniz thought that, when asked what would have happened if Peter had not denied Christ, we are really asking what would have happened if Peter had not been Peter, since denying Christ is contained in the complete notion of Peter. But he allowed that by the name 'Peter' might be understood 'what is involved in those attributes [of Peter] from which the denial does not follow', so that we are permitted external relations, these being relations which individuals could have or lack depending upon contingent circumstances. The contrast of 'relations of ideas' with 'matters of fact' is used by the Scottish philosopher David Hume (1711-76) in the first Enquiry: all the objects of human reason or enquiry may naturally be divided into two kinds, relations of ideas and matters of fact (An Enquiry Concerning Human Understanding). The terms reflect the belief that anything that can be known independently of experience must be internal to the mind, and hence transparent to us.
 In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called 'Hume's Fork', is a version of the distinction between what can be known by pure reasoning and what requires experience, and it reflects the 17th- and early 18th-century view that demonstrative knowledge is founded on chains of intuitively certain comparisons of ideas. It is extremely important that in the period between Descartes and J.S. Mill a 'demonstration' is not merely a formally valid derivation, but a chain of 'intuitive' comparisons of ideas whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration; Hume denies that they are, and also denies that scientific enquiry proceeds by demonstrating its results.
 A mathematical proof is an argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.
 The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the 5th-century BC Greek mathematician and philosopher Pythagoras, which states that in a right-angled triangle the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers, but an unknown Greek mathematician proved that this was not true by showing that the length of the diagonal of a square with an area of one is the irrational number √2.
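 The classical argument behind that last claim can be set out briefly; the rendering below in modern notation is only a compact restatement of the ancient reasoning, not a quotation of any particular source.

\[
\text{Suppose } \sqrt{2} = \tfrac{p}{q}, \text{ with } p, q \text{ whole numbers and } \tfrac{p}{q} \text{ in lowest terms.}
\]
\[
\text{Then } p^2 = 2q^2, \text{ so } p^2 \text{ is even, hence } p \text{ is even; write } p = 2k.
\]
\[
\text{Then } 4k^2 = 2q^2, \text{ so } q^2 = 2k^2, \text{ and } q \text{ is even too, contradicting lowest terms.}
\]
\[
\text{Hence no such fraction exists, and } \sqrt{2} \text{ is irrational.}
\]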
 The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.
 In the 20th century, proofs were written that are so complex that no one person can understand every argument used in them. In 1976, a computer was used to complete the proof of the four-colour theorem. This theorem states that four colours are sufficient to colour any map in such a way that regions with a common boundary line have different colours. The use of a computer in this proof inspired considerable debate in the mathematical community. At issue was whether a theorem can be considered proven if human beings have not actually checked every detail of the proof.
 Proof theory is the study of the relations of deducibility among sentences in a logical calculus, where deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel's second incompleteness theorem.
 Deducibility is a relation between formulae of a system, but once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to sentences that are false under it? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence (a formula B is a semantic consequence of a set of formulae, written {A1 . . . An} ⊨ B, if it is true in all interpretations in which they are true). The central questions for a calculus are then whether all and only its theorems are valid, and whether {A1 . . . An} ⊢ B if and only if {A1 . . . An} ⊨ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only 'tautologies'. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that every formula of the first-order predicate calculus that is true under every interpretation is a theorem of the calculus.
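 On the semantic side, these definitions are mechanical enough in the propositional case to be checked by brute force. The sketch below is only an illustration, under my own choice of representation and function names; it enumerates every interpretation of the variables and tests validity (being a tautology) and semantic consequence. The proof-theoretic side would require a separate notion of derivation; soundness and completeness relate the two.

    from itertools import product

    # A formula is a nested tuple: ('var', 'p'), ('not', f), ('and', f, g),
    # ('or', f, g) or ('implies', f, g).

    def variables(f):
        """Collect the propositional variables occurring in a formula."""
        if f[0] == 'var':
            return {f[1]}
        return set().union(*(variables(sub) for sub in f[1:]))

    def evaluate(f, valuation):
        """Truth value of f under an assignment of True/False to its variables."""
        op = f[0]
        if op == 'var':
            return valuation[f[1]]
        if op == 'not':
            return not evaluate(f[1], valuation)
        if op == 'and':
            return evaluate(f[1], valuation) and evaluate(f[2], valuation)
        if op == 'or':
            return evaluate(f[1], valuation) or evaluate(f[2], valuation)
        if op == 'implies':
            return (not evaluate(f[1], valuation)) or evaluate(f[2], valuation)
        raise ValueError(op)

    def interpretations(vs):
        """All assignments of truth values to a set of variables."""
        vs = sorted(vs)
        for values in product([True, False], repeat=len(vs)):
            yield dict(zip(vs, values))

    def is_valid(f):
        """A formula is valid (a tautology) if true in every interpretation."""
        return all(evaluate(f, v) for v in interpretations(variables(f)))

    def semantic_consequence(premises, conclusion):
        """{A1 ... An} |= B: B is true in every interpretation making all Ai true."""
        vs = set().union(variables(conclusion), *(variables(a) for a in premises))
        return all(evaluate(conclusion, v)
                   for v in interpretations(vs)
                   if all(evaluate(a, v) for a in premises))

    p, q = ('var', 'p'), ('var', 'q')
    print(is_valid(('or', p, ('not', p))))                  # True: p or not-p
    print(semantic_consequence([p, ('implies', p, q)], q))  # True: modus ponens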
 Euclidean geometry is the greatest example of the pure 'axiomatic method', and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth axiom of the system (the parallel postulate, glossed in rough terms as the claim that 'two parallel lines never meet') could be denied without inconsistency, leading to Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made geometrical applications possible for some major abstractions of tensor analysis, providing the pattern and concepts that Albert Einstein later used in developing the general theory of relativity. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth book of Euclid's Elements is attributed to the mathematician Eudoxus, and contains a precise development of the real number, work which remained unappreciated until rediscovered in the 19th century.
 An axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: 'No sentence can be true and false at the same time' (the principle of contradiction); 'If equals are added to equals, the sums are equal'; 'The whole is greater than any of its parts'. Logic and pure mathematics begin with such unproved assumptions, from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regress in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent, in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.
 The terms 'axiom' and 'postulate' are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.
 The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behaviour. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.
 In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analysed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.
 Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through 'battles', where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complicated factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given 'game'.
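 To make the zero-sum idea concrete, here is a minimal sketch of a two-player zero-sum game in pure strategies; the payoff matrix and the function names are invented purely for illustration and are not drawn from the text.

    # Payoffs are to the row player; the column player receives the negative.
    # The numbers below are invented purely for illustration.
    payoffs = [
        [3, -1],   # row strategy 0 against column strategies 0, 1
        [0,  2],   # row strategy 1 against column strategies 0, 1
    ]

    def maximin_row(matrix):
        """Row player's security level: the best of the worst-case payoffs."""
        worst_cases = [min(row) for row in matrix]
        value = max(worst_cases)
        return worst_cases.index(value), value

    def minimax_column(matrix):
        """Column player's security level: the smallest of the column maxima."""
        columns = list(zip(*matrix))
        best_cases = [max(col) for col in columns]
        value = min(best_cases)
        return best_cases.index(value), value

    row_choice, row_value = maximin_row(payoffs)
    col_choice, col_value = minimax_column(payoffs)
    print(row_choice, row_value)   # row strategy 1 guarantees at least 0
    print(col_choice, col_value)   # column strategy 1 concedes at most 2
    # The two security levels differ (0 versus 2), so this game has no saddle
    # point in pure strategies; an equilibrium exists only in mixed strategies.

 In the strictly zero-sum case the column player's gain is exactly the row player's loss, which is why a single matrix of numbers suffices; the military and economic examples above typically fail this condition.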
 In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting for that term a more specific one (a term denoting only some of the things denoted by the original). For example, in 'all dogs bark' the term 'dogs' is distributed, since the proposition entails 'all terriers bark', which is obtained from it by such a substitution. In 'not all dogs bark', the same term is not distributed, since that proposition may be true while 'not all terriers bark' is false.
 A model is a representation of one system by another, usually more familiar, system whose workings are supposed to be analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful 'heuristic' role in science, there has been intense debate over whether a good model is required for scientific explanation, or whether an organized structure of laws from which the phenomena can be deduced suffices. The debate was inaugurated by the French physicist Pierre Maurice Marie Duhem (1861-1916) in The Aim and Structure of Physical Theory (English translation 1954). Duhem's conception of science is that it is simply a device for calculation: science provides a deductive system that is systematic, economical and predictive, but does not represent the deep underlying nature of reality. His holistic thesis is that no single hypothesis can be tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, if we are prepared to make modifications elsewhere in our system, although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.
 Primary and secondary qualities: the division is associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately concerns itself with. The latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary properties are less tied to the deliverance of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material: a minimal list includes size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are the powers to excite particular sensory modifications in observers. Locke himself thought in terms of identifying these powers with the texture of objects that, according to the corpuscularian science of the time, was the basis of an object's causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses. But in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are. In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96). Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.
 The 'modality' of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things happen to be: necessary as opposed to contingent propositions. Other qualifiers sometimes called 'modal' include the tense indicators, 'it will be the case that p' or 'it was the case that p', and there are affinities between the 'deontic' indicators, 'it should be the case that p' or 'it is permissible that p', and the notions of necessity and possibility.
 The aim of logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of the answer is that if we do not we contradict ourselves, or, strictly speaking, we stand ready to contradict ourselves. Someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something. However, he or she is not defended against adding the contradictory conclusion to his or her set of beliefs. There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find reasoning such that anyone failing to conform to it will have improbable beliefs. Traditional logic dominated the subject until the 19th century, and continues to receive some recognition. Contemporary philosophy of mind, following cognitive science, uses the term 'representation' to mean just about anything that can be semantically evaluated. Thus, representations may be said to be true, to be about something, to be accurate, and so on. Representations come in many varieties. The most familiar are pictures, three-dimensional models (e.g., statues, scale models), linguistic text, including mathematical formulas, and various hybrids of these such as diagrams, maps, graphs and tables. It is an open question in cognitive science whether mental representation falls within any of these familiar sorts.
 It is uncontroversial in contemporary cognitive science that cognitive processes are processes that manipulate representations. This idea seems nearly inevitable. What makes the difference between processes that are cognitive - solving a problem, for example - and those that are not - a patellar reflex, say - is just that cognitive processes are epistemically assessable: a solution procedure can be justified or correct; a reflex cannot. Since only things with content can be epistemically assessed, processes appear to count as cognitive only in so far as they implicate representations.
 It is tempting to think that thoughts are the mind's representations: are thoughts not just those mental states that have semantic content? This is, no doubt, harmless enough, provided we keep in mind that cognitive science - the scientific study of processes of awareness, thought and mental organization, often by means of computer modelling or artificial intelligence research - may use the notion more liberally than common sense does. The cognitive aspect of the meaning of a sentence may be thought of as its content, or what is strictly said, abstracted away from the tone or emotive meaning, or other implicatures generated, for example, by the choice of words. The cognitive aspect is what has to be understood to know what would make the sentence true or false: it is frequently identified with the 'truth condition' of the sentence. The truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of 'snow is white' is that snow is white; the truth condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
 The view that the role of sentences in inference gives a more important key to their meaning than their 'external' relations to things in the world holds that the meaning of a sentence becomes its place in a network of inferences that it legitimates. It is also known as functional role semantics, procedural semantics, or conceptual role semantics. The view bears some relation to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.
 Moreover, internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke a historical theory of functions, take content to be determined by 'external' factors, crossing the atomist-holistic distinction with the internalist-externalist distinction.
 Externalist theories, sometimes called non-individualistic theories, have the consequence that molecule for molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is equivalent in internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from context, i.e., from whatever the external factors are to wide contents.
 The epistemological tradition has mostly been internalist, with externalism emerging as a genuine option only in the twentieth century. The best way to clarify the distinction is by considering another one: that between knowledge and justification. Knowledge has traditionally been defined as justified true belief. However, certain counter-examples forced the definition to be refined: there are possible situations in which a belief is both true and justified, but in which, intuitively, we would not call it knowledge. The extra element of undefeatedness attempts to rule out the counter-examples. The relevant point here is that, on all accounts, knowledge entails truth: one cannot know something false. Justification, on the other hand, concerns the reasons one has for a belief. One may be justified in holding a false belief, since justification is understood from the subject's point of view, and does not entail truth.
 Internalism is the position that the reason one has for a belief, its justification, must be in some sense available to the knowing subject. If one has a belief, and the reason why it is acceptable to hold that belief is not knowable to the person in question, then there is no justification. Externalism holds that it is possible for a person to have a justified belief without having access to the reason for it. The internalist view seems too stringent to the externalist, who can explain such cases by, for example, appeal to the use of a process that reliably produces truths: one can use perception to acquire beliefs, and the very use of such a reliable method is what confers justification on the belief. Some externalists have produced accounts of knowledge with relativistic aspects to them; Alvin Goldman offers such an account in Epistemology and Cognition (1986). Such accounts use the notion of a system of rules for the justification of belief - these rules provide a framework within which it can be established whether a belief is justified or not. The rules are not to be understood as consciously guiding the believer's thought processes, but rather can be applied from without to give an objective judgement as to whether the beliefs are justified or not. The framework establishes what counts as justification, and a criterion establishes the framework. Genuinely epistemic terms like 'justification' occur within the framework, while the criterion attempts to set up the framework without using epistemic terms, using purely factual or descriptive terms.
 In any event, a standard psycholinguistic theory, for instance, hypothesizes the construction of representations of the syntactic structures of the utterances one hears and understands. Yet we are not aware of, and non-specialists do not even understand, the structures represented. Thus, cognitive science may attribute thoughts where common sense would not. Second, cognitive science may find it useful to individuate thoughts in ways foreign to common sense.
 The representational theory of cognition gives rise to a natural theory of intentional states, such as believing, desiring and intending. According to this theory, an intentional state has two aspects: a 'functional' aspect that distinguishes believing from desiring and so on, and a 'content' aspect that distinguishes beliefs from each other, desires from each other, and so on. A belief that p might be realized as a representation with the content that p and the function of serving as a premise in inference, while a desire that p might be realized as a representation with the content that p and the function of initiating processing designed to bring it about that p, and of terminating such processing when a belief that p is formed.
 A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be a representation (have content), and what it is for something to have some particular content rather than some other. There appear to be only four types of theory that have been proposed: theories that ground representation in (1) similarity, (2) covariance, (3) functional role, and (4) teleology.
 Similarity theories hold that r represents x in virtue of being similar to x. This has seemed hopeless to most as a theory of mental representation, because it appears to require that things in the brain must share properties with the things they represent: to represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalistic and does not involve property sharing can be worked out, but it is not obvious how.
 Covariance theories hold that r's representing x is grounded in the fact that r's occurrence covaries with that of x. This is most compelling when one thinks about detection systems: a firing neural structure in the visual system is said to represent vertical orientations if its firing covaries with the occurrence of vertical lines in the visual field. Dretske (1981) and Fodor (1987) have, in different ways, attempted to promote this idea into a general theory of content.
 'Content' has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content; a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. 'Content' is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation's content is just whatever it is that underwrites its semantic evaluation.
 Likewise, functional role theories hold that r's representing 'x' is grounded in the functional role 'r' has in the representing system, i.e., on the relations imposed by specified cognitive processes between 'r' and other representations in the system's repertoire. Functional role theories take their cue from such common sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.
 What is more, theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. The most generally accepted account of the latter distinction, as applied to justification, is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.
 Atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What Fodor (1987) calls the crude causal theory, for example, takes a representation to be a |cow| - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraint on how |cow|’s must or might relate to other representations.
 The syllogism, or categorical syllogism, is the inference of one proposition from two premises. An example is: 'All horses have tails; all things with tails are four-legged; so all horses are four-legged.' Each premise has one term in common with the conclusion, and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So the first premise of the example is the minor premise, the second the major premise, and 'having a tail' is the middle term. This gives one classification of syllogisms, according to the form of the premises and the conclusion. The other classification is by figure, or the way in which the middle term is placed in the premises.
 Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a relatively small number of valid forms of argument. There have been subsequent rearguard actions on its behalf, but in general it has been eclipsed by the modern theory of quantification. The predicate calculus is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes '=' as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that x = y if (∀F)(Fx ↔ Fy), which gives greater expressive power for less complexity.
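 As an illustration, the syllogism above can be written out in first-order notation, and the higher-order definition of identity just mentioned can be displayed alongside it; the predicate letters are mine, chosen only for readability.

\[
\forall x\,(Hx \rightarrow Tx),\quad \forall x\,(Tx \rightarrow Fx)\ \vdash\ \forall x\,(Hx \rightarrow Fx)
\]
\[
x = y \;\leftrightarrow\; \forall F\,(Fx \leftrightarrow Fy)
\]

 Here Hx, Tx and Fx abbreviate 'x is a horse', 'x has a tail' and 'x is four-legged', and the second line is the second-order definition of identity (Leibniz's law).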
 Modal logic was of great importance historically, particularly in the light of doctrines concerning the necessary properties of the deity, but it was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher C. I. Lewis (1883-1964); although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic and as the founding father of modal logic. His independent proof that from a contradiction anything follows is resisted by later logics that use a notion of entailment stronger than that of strict implication.
 Various doctrines concerning necessity and possibility are represented by adding to a propositional or predicate calculus two operators, □ and ◇ (sometimes written 'N' and 'M'), meaning necessarily and possibly, respectively. The usual principles include □p → p and p → ◇p, while more controversial additions include □p → □□p (if a proposition is necessary, it is necessarily necessary: characteristic of the system known as S4) and ◇p → □◇p (if a proposition is possible, it is necessarily possible: characteristic of the system known as S5). In classical modal realism, the doctrine advocated by David Lewis (1941-2002), different possible worlds are to be thought of as existing exactly as this one does; thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and, from the standpoint of the universe, it should make no difference which world is actual. Critics also charge that the view fails to fit either with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denied that any other way of interpreting modal statements is tenable.
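 For ease of reference, the principles just mentioned can be displayed together; the axiom labels are the conventional ones, not drawn from the text.

\[
\textbf{T:}\ \Box p \rightarrow p \qquad
p \rightarrow \Diamond p \qquad
\textbf{S4:}\ \Box p \rightarrow \Box\Box p \qquad
\textbf{S5:}\ \Diamond p \rightarrow \Box\Diamond p
\]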
 Saul Kripke (1940- ), the American logician and philosopher, contributed the classical modern treatment of the topic of reference, clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.
 Semantics is one of the three branches into which 'semiotics' is usually divided: the study of the meaning of words, and of the relation of signs to the things to which they apply. In formal studies, a semantics is provided for a formal language when an interpretation or 'model' is specified. However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . . ) and their meanings. An influential proposal is to attempt to provide a truth definition for the language, which will involve showing what bearing terms and structures of different kinds have on the truth conditions of sentences containing them.
 Holding that the basic case of reference is the relation between a name and the person or object which it names, the philosophical problems include trying to elucidate that relation, and trying to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between myself and the word 'I', are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke's Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects, and how to conduct the debate about each issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that determines the term's contribution to the truth condition of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth-conditions to sentences. Other approaches search for a more substantive relation between words and things, grounded in causality or in psychological or social constituents.
 However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the Liar family, from the purely logical paradoxes in which no such notions are involved, such as Russell's paradox, or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is tempting to feel that this element is responsible for the contradictions, although self-reference itself is often benign (for instance, the sentence 'All English sentences should have a verb' includes itself happily in the domain of sentences it is talking about), so the difficulty lies in framing a condition that excludes only the pathological cases of self-reference. Paradoxes of the second kind then need a different treatment. While the distinction is convenient, in allowing set theory to proceed by circumventing the latter paradoxes by technical means even where there is no agreed solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There remains the possibility that, while there is no agreed solution to the semantic paradoxes, our understanding of Russell's paradox may be imperfect as well.
 Truth and falsity are the two classical truth-values that a statement, proposition or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains, the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. So may presupposition: a suppressed premise or background framework of thought necessary to make an argument valid or a position tenable, or, more narrowly, a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus if 'p' presupposes 'q', 'q' must be true for 'p' to be either true or false. In the theory of knowledge, the English philosopher and historian R. G. Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of 'absolute presuppositions' which are not themselves capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means that either a further truth value must be found, 'intermediate' between truth and falsity, or classical logic is preserved, but it becomes impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth and falsity without knowing more than the formation rules of the language. Each suggestion has its proponents, but there is some consensus that, at least where definite descriptions are involved, the cases are best handled by regarding the overall sentence as false when the existence claim fails, and by explaining the data that the English philosopher P. F. Strawson (1919-2006) relied upon as the effects of 'implicatures'.
 Views about the meaning of terms will often depend on classifying the implicatures of sayings involving those terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may, as a matter of convention, carry an implicature. Thus one of the relations between 'he is poor and honest' and 'he is poor but honest' is that they have the same content (are true in just the same conditions), but the second has implicatures (that the combination is surprising or significant) that the first lacks.
 In classical logic a proposition may be true or false; if the former, it is said to take the truth-value true, and if the latter, the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of some other variable. Logics with intermediate values are called 'many-valued logics'.
 A definition of the predicate '. . . is true' for a language must satisfy convention T, the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83). His method of 'recursive' definition enables us to say, for each sentence, what it is that its truth consists in, while giving no verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a 'metalanguage'; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth-predicate. While this makes it easier to avoid the contradictions of the paradoxes, it conflicts with the idea that a language should be able to say everything that there is to say, and other approaches have become increasingly important.
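 A toy version of the recursive idea can be sketched in code: the definitions below live in the metalanguage (here, Python) and assign truth to sentences of a tiny invented object language clause by clause, without ever defining a standalone notion of truth. The miniature language and all names are my own illustration, not Tarski's formalism.

    # A tiny object language: atomic sentences such as 'snow is white' and
    # 'grass is red', plus 'not X' and 'X and Y'. The interpretation maps
    # atomic sentences to facts.
    facts = {
        'snow is white': True,
        'grass is red': False,
    }

    def is_true(sentence):
        """Recursive truth definition, stated in the metalanguage.

        Each clause mirrors a T-biconditional: e.g. 'snow is white' is true
        if and only if snow is white (here, if the corresponding fact holds).
        """
        sentence = sentence.strip()
        if sentence.startswith('not '):
            return not is_true(sentence[len('not '):])
        if ' and ' in sentence:
            left, right = sentence.split(' and ', 1)
            return is_true(left) and is_true(right)
        return facts[sentence]          # base clause for atomic sentences

    print(is_true('snow is white'))                   # True
    print(is_true('not grass is red'))                # True
    print(is_true('snow is white and grass is red'))  # False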
 A theory of semantic truth is the view that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself, or about truth as shared across different languages. The view is similar to the disquotational theory.
 The redundancy theory, also known as the 'deflationary' view of truth, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who also showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell's paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. Ramsey's name is also attached to a way of treating theoretical terms: by taking all the sentences affirmed in a scientific theory that use some term, e.g., 'quark', and replacing the term by a variable, instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for a whole group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then, by the Löwenheim-Skolem theorem, the result will be interpretable, and the content of the theory may reasonably be felt to have been lost.
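 Schematically, and using my own notation rather than the text's: if a theory is written as T(τ1, . . ., τn), with τ1, . . ., τn its theoretical terms, its Ramsey sentence is obtained by replacing those terms with variables and quantifying over them,

\[
\exists x_1 \ldots \exists x_n\, T(x_1, \ldots, x_n),
\]

 which asserts only that there exist items playing the roles the theory assigns to its theoretical terms.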
 Both Frege and Ramsey agreed that the essential claim is that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence, redundancy); and (2) that in less direct contexts, such as 'everything he said was true', or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from a true proposition. For example, the second may translate as (∀p, q)((p & (p → q)) → q), where there is no use of a notion of truth.
 There are technical problems in interpreting all uses of the notion of truth in such ways; nevertheless, they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth', or 'truth is a norm governing discourse'. Post-modern writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth, perhaps, we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: Science wants it to be so that whatever science holds that 'p', then 'p'. Discourse is to be regulated by the principle that it is wrong to assert 'p', when 'not-p'.
 The simplest formulation of the disquotational theory is the claim that expressions of the form 'S is true' mean the same as expressions of the form 'S'. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ''Dogs bark' is true', or whether they say 'dogs bark'. In the former the sentence 'Dogs bark' is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it, someone might know that 'Dogs bark' is true without knowing what it means (for instance, if he finds it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the 'redundancy theory of truth'.
 Entailment is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Several philosophers identify this with its being logically impossible that the premises should all be true, yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, which would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for such a stronger notion is the field of relevance logic.
 From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is a purely empirical enterprise.
 But this point of view by no means embraces the whole of the actual process, for it overlooks the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a 'theory'. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the 'truth' of the theory lies.
 Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was more successful in marshalling the evidence for evolution than in providing a convincing mechanism for genetic change; Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as 'neo-Darwinism' became the orthodox theory of evolution in the life sciences.
 In the 19th century there was an attempt to base ethical reasoning on the presumed facts about evolution; the movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). Its premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasises the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competition and aggressive relations between people in society. More recently the relationship between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.
 Once again, evolutionary psychology attempts to establish by objective means, with evidence drawn from evolutionary principles, that a variety of higher mental functions may be adaptations, forged in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who 'free-ride' on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself and William James, as well as by the sociobiology of E.O. Wilson. The term is applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.
 Another assumption frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin's view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, however, cooperation appears to exist in complementary relation to competition. From such complementary relationships emerge self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.
 According to E.O. Wilson, the 'human mind evolved to believe in the gods' and people 'need a sacred narrative' to have a sense of higher purpose. Yet it is also clear that the 'gods' in his view are merely human constructs and, therefore, there is no basis for dialogue between the world-views of science and religion. 'Science for its part', said Wilson, 'will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiment. The eventual result of the competition between the two will be the secularization of the human epic and of religion itself.'
 Man has come to the threshold of a state of consciousness regarding his nature and his relationship to the Cosmos in terms that reflect 'reality'. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing 'reality' as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide 'comprehensible' guides to living. Man's imagination and intellect play vital roles in his survival and evolution.
 Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of 'logical positivist' approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the 'explanans' (that which does the explaining) and the 'explanandum' (that which is to be explained). The approach culminated in the covering law model of explanation, the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler's (1571-1630) laws of planetary motion were deducible from Newton's laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adequate to capture the requirements we make of explanations. These may include, for instance, that we have a 'feel' for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
 The argument to the best explanation is the view that once we can select the best of any competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
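 To make the arithmetic behind the coin example concrete, the following sketch (my own illustration; the 99:1 prior odds are hypothetical, chosen only to show how antecedent improbability can outweigh a better fit) compares the likelihood of 530 heads in 1,000 tosses under the fair hypothesis and under the 0.53-bias hypothesis, and then weighs both by the priors:

    # Illustrative sketch: 530 heads in 1,000 tosses, fair vs. biased coin.
    from math import comb

    def binomial_likelihood(heads, tosses, p):
        # Probability of exactly `heads` heads in `tosses` independent tosses.
        return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

    heads, tosses = 530, 1000
    like_fair = binomial_likelihood(heads, tosses, 0.50)
    like_biased = binomial_likelihood(heads, tosses, 0.53)
    print(like_biased / like_fair)      # roughly 6: the bias hypothesis fits better

    # Hypothetical prior odds of 99:1 in favour of fairness still win out.
    prior_fair, prior_biased = 0.99, 0.01
    posterior_odds_fair = (prior_fair * like_fair) / (prior_biased * like_biased)
    print(posterior_odds_fair)          # roughly 16: the fair coin remains more credible

 The biased hypothesis explains the data better by a factor of about six, yet a modest prior against bias leaves the fair-coin hypothesis the more sensible conclusion, which is exactly the qualification the principle needs.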
 The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotic into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy in the 20th century has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, which is the basis of the division between syntax and semantics, as well as problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.
 On this conception, to understand a sentence is to know its truth-conditions. The conception has remained central in a distinctive way: those who offer opposing theories characteristically define their position by reference to it. The conception of meaning as truth-conditions need not, and ought not, to be advanced as in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts contextually performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.
 The meaning of a complex expression is a function of the meanings of its constituents. This is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
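 As a rough illustration of this compositional picture (a sketch of my own; the two names, the single predicate, and the assignments of reference and extension are invented for the example), the truth-value of a complex sentence can be computed from the semantic values assigned to its constituents:

    # Minimal sketch of compositional, truth-conditional evaluation.
    reference = {"London": "london", "Paris": "paris"}     # singular terms
    extension = {"is beautiful": {"paris"}}                # predicates

    def evaluate(sentence):
        # Sentences are tuples: ("atom", name, pred), ("not", s), ("and", s1, s2).
        kind = sentence[0]
        if kind == "atom":
            _, name, predicate = sentence
            return reference[name] in extension[predicate]
        if kind == "not":
            return not evaluate(sentence[1])
        if kind == "and":
            return evaluate(sentence[1]) and evaluate(sentence[2])
        raise ValueError("unknown construction: " + kind)

    # 'Paris is beautiful and London is not beautiful'
    s = ("and", ("atom", "Paris", "is beautiful"),
                ("not", ("atom", "London", "is beautiful")))
    print(evaluate(s))   # True

 Each clause of the evaluator mirrors the prose: names contribute a referent, predicates contribute a condition on objects, and each sentence-forming operator contributes a function from the truth-values of its parts to the truth-value of the whole.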
 The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom ''London' refers to the city in which there was a huge fire in 1666' is a true statement about the reference of 'London', and it is a consequence of a theory which substitutes this axiom for the simpler axiom of our truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a psychological subject can understand the name 'London' without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraint in a way which does not presuppose any previous, non-truth-conditional conception of meaning.
 Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person's language to be truly describable by a semantic theory containing a given semantic axiom.
 Since the content of the claim that the sentence 'Paris is beautiful' is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than the grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its conception is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A. J. Ayer, the later Wittgenstein, Quine, Strawson and Horwich and - confusingly and inconsistently, if this article is correct - Frege himself. But is the minimal theory correct?
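 Stated as a schema (my notation; the corner quotes form the name of the sentence expressing the proposition), the equivalence principle that the minimal theory takes to exhaust the concept of truth is:

    \forall p\; \bigl( \mathrm{True}(\ulcorner p \urcorner) \leftrightarrow p \bigr)

 The minimal theory's distinctive claim is not the schema itself, which most theories of truth accept, but the further claim that nothing more than its instances is needed to characterize truth.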
 The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence, but in fact it seems that each instance of the equivalence principle can itself be explained. An instance such as ''London is beautiful' is true if and only if London is beautiful' can be explained from facts about the reference of its constituent expressions. This would be a pseudo-explanation if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth-condition it does; but that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'.
 The counterfactual conditional, sometimes known as the subjunctive conditional, is a conditional of the form 'if p were to happen, q would', or 'if p were to have happened, q would have happened', where the supposition of 'p' is contrary to the known fact that 'not-p'. Such assertions are nevertheless useful: 'if you had broken the bone, the X-ray would have looked different', or 'if the reactor were to fail, this mechanism would click in' are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals ('if the metal were to be heated, it would expand'), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever 'p' is false, so there would be no division between true and false counterfactuals.
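 The point about material implication can be read off its truth table (standard propositional-calculus material, reproduced here only for illustration): 'p -> q' is false in just one case, when 'p' is true and 'q' is false, so it is automatically true whenever the antecedent is false, which is precisely the situation of a counterfactual.

    p   q   p -> q
    T   T     T
    T   F     F
    F   T     T
    F   F     T

 On this reading, 'if the bone were broken, the X-ray would look different' and 'if the bone were broken, the X-ray would look exactly the same' would both come out true, since the antecedent is false in both.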
 Although the subjunctive form indicates the counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: 'If you run out of water, you will be in trouble' seems equivalent to 'if you were to run out of water, you would be in trouble'. In other contexts there is a big difference: 'If Oswald did not kill Kennedy, someone else did' is clearly true, whereas 'if Oswald had not killed Kennedy, someone else would have' is most probably false.
 The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether 'q' is true in the 'most similar' possible worlds to ours in which 'p' is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and that categorizing them as counterfactual or not is of limited use.
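 Lewis's truth condition can be stated compactly (a simplified formulation of my own, which assumes there are closest 'p'-worlds and ignores the vacuous case in which 'p' is impossible); writing 'p □-> q' for 'if p were the case, q would be':

    p \;\Box\!\!\rightarrow\; q \ \text{ is true at a world } w \iff q \text{ is true at every } p\text{-world most similar to } w

 The controversy mentioned above concerns the similarity ordering on worlds that this clause presupposes.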
 In any conditional, a proposition of the form 'if p then q', the condition hypothesized, 'p', is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, merely telling us that either 'not-p' or 'q'; stronger conditionals include elements of modality, corresponding to the thought that if 'p' is true then 'q' must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether this flexibility is semantic, yielding different kinds of conditionals with different meanings, or pragmatic, in which case there should be one basic meaning, with surface differences arising from other implicatures.
 There are many forms of reliabilism, just as there are many forms of foundationalism and coherentism. How is reliabilism related to these other two theories of justification? We usually regard it as a rival, and this is apt insofar as foundationalism and coherentism traditionally focussed on purely evidential relations rather than psychological processes; but we might also offer reliabilism as a deeper-level theory, subsuming some precepts of either foundationalism or coherentism. Foundationalism says that there are 'basic' beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that the basic beliefs are formed by reliable non-inferential processes. Coherentism stresses the primacy of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. Consequently, reliabilism could complement foundationalism and coherentism rather than compete with them.
 These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman's claim about local reliability and knowledge, it will not be simple. The interesting thesis that counts as a causal theory of justification (in the sense of 'causal theory' intended here) is that a belief is justified in case it was produced by a type of process that is 'globally' reliable, that is, its propensity to produce true beliefs - which can be defined, to an acceptable approximation, as the proportion of the beliefs it produces (or would produce were it used as much as opportunity allows) that are true - is sufficiently great. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in the work of F. P. Ramsey (1903-30). In the theory of probability Ramsey was the first to show how a 'personalist' theory could be developed, based on a precise behavioural notion of preference and expectation; in the philosophy of mathematics, much of his work was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace of Brouwer and Weyl'. In the philosophy of language, Ramsey was one of the first thinkers to accept a 'redundancy theory of truth', which he combined with radical views of the function of many kinds of propositions: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey was also one of the earliest commentators on the early work of Wittgenstein, and his continuing friendship with Wittgenstein led to Wittgenstein's return to Cambridge and to philosophy in 1929.
 A Ramsey sentence is the sentence generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., 'quark', replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If we repeat the process for all of a group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of a nomic, counterfactual or similar 'external' relation between belief and truth, and is closely allied to the nomic sufficiency account of knowledge. The core of this approach is that X's belief that 'p' qualifies as knowledge just in case X believes 'p' because of reasons that would not obtain unless 'p' were true, or because of a process or method that would not yield belief in 'p' if 'p' were not true. For example, X would not have its current reasons for believing there is a telephone before it, or would not have come to believe this in the way it does, unless there were a telephone before it; thus, there is a counterfactually reliable guarantor of the belief's being true. A related 'relevant alternatives' formulation says that X knows that 'p' only if there is no relevant alternative situation in which 'p' is false but X would still believe that 'p'; one's justification or evidence for 'p' must be sufficient to eliminate all the relevant alternatives to 'p', where an alternative to a proposition 'p' is a proposition incompatible with 'p'. That is, one's justification or evidence for 'p' must be sufficient for one to know that every relevant alternative to 'p' is false. Sceptical arguments have exploited this element of our thinking about knowledge, calling our attention to alternatives that our evidence cannot eliminate. The sceptic asks how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this kind that we cannot eliminate, as well as others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that the requirement is seldom, if ever, satisfied.
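 Schematically (a toy example of my own; the two predicates are invented and are not meant to state real physics), if a theory says of the term 'quark' only that quarks are charged and bind into hadrons, we may write the theory as T(quark) and form its Ramsey sentence by replacing the theoretical term with a bound variable:

    T(\text{quark}) \;=\; \mathrm{Charged}(\text{quark}) \wedge \mathrm{BindsIntoHadrons}(\text{quark})
    \quad\leadsto\quad
    \exists x\, \bigl( \mathrm{Charged}(x) \wedge \mathrm{BindsIntoHadrons}(x) \bigr)

 The Ramsey sentence preserves the structural claims of the theory while staying neutral about what, if anything, the term 'quark' denotes.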
 The distinction between the 'in itself' and the 'for itself' originated in the Kantian logical and epistemological distinction between a thing as it is in itself and that thing as an appearance, or as it is for us. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing in so far as it stands in relation to our cognitive faculties and other objects. 'Now a thing in itself cannot be known through mere relations: and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself.' Kant applies this same distinction to the subject's cognition of itself. Since the subject can know itself only in so far as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus only as it is related to itself, it represents itself 'as it appears to itself, not as it is'. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant in so far as the distinction between what an object is in itself and what it is for a knower is applied to the subject's own knowledge of itself.
 Hegel (1770-1831) begins the transition of the epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what is, as it is in itself, necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of the potentiality of that thing to enter into specific explicit relations with itself. And just as for consciousness to be explicitly itself is for it to be for itself by being in relation to itself, i.e., to be explicitly self-conscious, the for-itself of any entity is that entity in so far as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed of a plant is that plant in itself or implicitly, while the mature plant, which involves actual relations among the plant's various organs, is the plant 'for itself'. In Hegel, then, the in-itself/for-itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being-in-itself of the plant, or the plant as potential adult, is ontologically distinct from the being-for-itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction. To know a thing, it is necessary to know both the actual, explicit self-relations which mark the thing (the being-for-itself of the thing), and the inherent simple principle of these relations, or the being-in-itself of the thing. Real knowledge, for Hegel, thus consists in knowledge of the thing as it is in and for itself.
 Sartre's distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent being which is intended by consciousness, i.e., being in itself. What it is for consciousness to be, being for itself, is marked by self-relation. Sartre posits a 'pre-reflective cogito', such that every consciousness of '?' necessarily involves a 'non-positional' consciousness of the consciousness of '?'. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself in so far as it is related to itself by appearing to itself, and in Hegel every entity can be considered as both 'in itself' and 'for itself', in Sartre, to be self-related or for itself is the distinctive ontological mark of consciousness, while to lack relations or to be in itself is the distinctive ontological mark of non-conscious entities.
 This conclusion conflicts with another strand in our thinking about knowledge, the strand according to which we know many things. Thus, there is a tension in our ordinary thinking about knowledge: we believe that knowledge is, in the sense indicated, an absolute concept and yet we also believe that there are many instances of that concept.
 If one finds absoluteness to be too central a component of our concept of knowledge to be relinquished, one could argue from the absolute character of knowledge to a sceptical conclusion (Unger, 1975). Most philosophers, however, have taken the other course, choosing to respond to the conflict by giving up, perhaps reluctantly, the absolute criterion. This latter response holds as sacrosanct our commonsense belief that we know many things (Pollock, 1979 and Chisholm, 1977). Each approach is subject to the criticism that it preserves one aspect of our ordinary thinking about knowledge at the expense of denying another. We can view the theory of relevant alternatives as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.
 Evolutionary epistemology is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural-selection process, the best example of which is Darwin's theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. Once upon a time, at least, a mutation occurred in a human population in tropical Africa that changed the hemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.
 When proximate and evolutionary explanations are carefully distinguished, many questions in biology make more sense. A proximate explanation describes a trait - its anatomy, physiology, and biochemistry, as well as its development from the genetic instructions provided by a bit of DNA in the fertilized egg to the adult individual. An evolutionary explanation is about why the DNA specifies that trait in the first place and why we have DNA that encodes for one kind of structure and not some other. Proximate and evolutionary explanations are not alternatives; both are needed to understand every trait. A proximate explanation of the external ear would describe its arteries and nerves, and how it develops from the embryo to the adult form. Even if we know this, however, we still need an evolutionary explanation of how its structure gives creatures with ears an advantage over those that lack it, and how the structure was shaped by selection to give the ear its current form. To take another example, a proximate explanation of taste buds describes their structure and chemistry, how they detect salt, sweet, sour, and bitter, and how they transform this information into impulses that travel via neurons to the brain. An evolutionary explanation of taste buds shows why they detect saltiness, acidity, sweetness and bitterness instead of other chemical characteristics, and how the capacities to detect these characteristics help an organism cope with life.
 Chance can influence the outcome at each stage: first, in the creation of a genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual's actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As Harvard biologist Stephen Jay Gould has so vividly expressed it, if the process were run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.
 We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analyzed carefully. Whether evolution achieves perfection depends on exactly what you mean. If you mean 'Does natural selection always take the best path for the long-term welfare of a species?', the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean 'Does natural selection create every adaptation that would be valuable?', the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not mean that it will evolve.
 This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural-selection process, the best example of which is Darwin's theory of biological natural selection. The three major components of the model of natural selection are variation, selection and retention. According to Darwin's theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that perform useful functions are selected, while those that do not are not selected; nevertheless, such selection is responsible for the appearance that variations are intentionally designed to perform specific functions. In the modern theory of evolution, genetic mutations provide the blind variations (blind in the sense that variations are not influenced by the effects they would have - the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism), the environment provides the filter of selection, and reproduction provides the retention. Retention is achieved because organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features which are better adapted. Evolutionary epistemology applies this blind-variation-and-selective-retention model to the growth of scientific knowledge and to human thought processes in general.
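 The three components can be made concrete with a toy simulation (a sketch of my own; the bit-string 'organisms', the fixed target 'environment', and all parameter values are invented for illustration). Variation is blind because each mutation is made without regard to its effect on fitness; the environment filters by fitness; reproduction retains what was selected:

    # Toy model of blind variation, selection, and retention.
    import random

    random.seed(0)
    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]        # the 'environment'

    def fitness(organism):
        # Number of positions at which the organism matches the environment.
        return sum(1 for a, b in zip(organism, TARGET) if a == b)

    def mutate(organism, rate=0.05):
        # Blind variation: each bit may flip regardless of its effect on fitness.
        return [1 - bit if random.random() < rate else bit for bit in organism]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:15]                               # selection
        offspring = [mutate(random.choice(survivors)) for _ in range(15)]
        population = survivors + offspring                        # retention

    print(max(fitness(o) for o in population))   # typically at or near 10

 Nothing in the mutation step 'knows' which flips would help; the appearance of design emerges solely from selection and retention over generations.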
 The parallel between biological evolution and conceptual or 'epistemic' evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology sees biological evolution as the main cause of the growth of knowledge. On this view, called the 'evolution of cognitive mechanisms program' by Bradie (1986) and the 'Darwinian approach to epistemology' by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms which guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology.
 In assessing the doctrine of innate ideas it helps to consider how these have been variously defined by philosophers: either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at a particular time, e.g., as babies (the dispositional sense). Understood in either way, they were invoked to account for our recognition of certain truths, such as those of mathematics, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include 'murder is wrong' or 'God exists'.
 One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate, and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our idea of God, for example, is taken as a source for the meaning of the word God. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. This latter thesis clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky's influential account of the mind's linguistic capacities.
 The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize that some propositions are certainly true, where that recognition cannot be justified solely on the basis of an appeal to sense experience. Thus Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection of knowledge, possibly obtained in a previous state of existence. The topic is most famously broached in the dialogue Meno, and the doctrine is one attempt to account for the 'innate', unlearned character of knowledge of first principles. Since there was no plausible post-natal source, the recollection must refer to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there were importantly innate truths in human beings and that it was sense experience which hindered their proper apprehension.
 The implications of the doctrine were important in Christian philosophy throughout the Middle Ages and in scholastic teaching until its displacement by Locke's philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, Descartes held, is logically independent of sense experience. In England the Cambridge Platonists such as Henry More and Ralph Cudworth lent the doctrine considerable support.
 Locke's rejection of innate ideas and his alternative empiricist account was powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.
 The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant's refinement of the classification of propositions with the fourfold analytic/synthetic and a priori/a posteriori distinction did nothing to encourage a return to the doctrine of innate ideas, which slipped from view. The doctrine may fruitfully be understood as resting on a confusion between explaining the genesis of ideas or concepts and providing a basis for regarding some propositions as necessarily true.
 Chomsky's revival of the term in connection with his account of language acquisition has once more made the issue topical. He claims that the principles of language and 'natural logic' are known unconsciously and are a precondition for language acquisition. But for his purposes innate ideas must be taken in a strong dispositional sense - so strong that it is far from clear that Chomsky's claims are in as direct a conflict with empiricist accounts of language learning as some (including Chomsky) have supposed. Willard Van Orman Quine (1908-2000), for example, sees no conflict with his own version of empiricist behaviourism.
 Locke's account of analytic propositions was everything that a succinct account of analyticity should be (Locke, 1924). He distinguishes two kinds of analytic propositions: identity propositions, in which 'we affirm the said term of itself', e.g., 'Roses are roses', and predicative propositions, in which 'a part of the complex idea is predicated of the name of the whole', e.g., 'Roses are flowers'. Locke calls such sentences 'trifling' because a speaker who uses them is 'trifling with words'. A synthetic sentence, in contrast, such as a mathematical theorem, states a real truth and conveys instructive real knowledge. Correspondingly, Locke distinguishes two kinds of 'necessary consequences': analytic entailments, where validity depends on the literal containment of the conclusion in the premise, and synthetic entailments, where it does not. John Locke (1632-1704) did not originate this concept-containment notion of analyticity; it is discussed by Arnauld and Nicole, and it is safe to say that it has been around for a very long time.
 The analogical version of evolutionary epistemology, called the 'evolution of theories program' by Bradie (1986) and the 'Spencerian approach' (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), holds that the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) and Karl Popper, sees the [partial] fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.
 Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version of evolutionary epistemology begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. By contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if creationism were the correct theory of the origin of species.
 Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come, at least implicitly, from psychology and cognitive science rather than from evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that 'if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom', i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one's knowledge beyond what one knows, one must proceed to something that is not already known; but, more interestingly, it also makes the synthetic claim that when expanding one's knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because we can empirically falsify it. The central claim of evolutionary epistemology is thus synthetic, not analytic. Campbell is right that evolutionary epistemology has the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature.
 Two further issues run through the literature: questions about 'realism' (what metaphysical commitment does an evolutionary epistemologist have to make?) and questions about progress (according to evolutionary epistemology, does knowledge develop toward a goal?). With respect to realism, many evolutionary epistemologists endorse what is called 'hypothetical realism', a view that combines a version of epistemological 'scepticism' with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge is. Campbell (1974) worries about the potential disanalogy here, but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Some have argued that evolutionary epistemologists must give up the 'truth-tropic' sense of progress because a natural-selection model is non-teleological in essence; alternatively, following Kuhn (1970), a non-teleological conception of progress can be embraced along with evolutionary epistemology.
 Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind. Stein and Lipton argue, however, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves, for the most part, the products of variation and selective retention. Further, Stein and Lipton argue that such heuristics are analogous to biological pre-adaptations, evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The constraints on epistemic variation are, on this view, not the source of disanalogy, but the source of a more articulated account of the analogy.
 Many evolutionary epistemologists try to combine the literal and the analogical versions, saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those which are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate or if our non-innate beliefs were not the result of blind variation. An appeal to innateness is therefore not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind.
 Although it is a relatively new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is to be used for understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.
 What makes a belief justified and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades many epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.
 For example, Armstrong (1973) proposed that a belief of the form 'This [perceived] object is F' is [non-inferential] knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject and perceived object 'y', if the subject has those properties and believes that 'y' is F, then 'y' is F. A rather similar account has been offered in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.
 This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, for it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. Reliabilism is the view that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth; variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in the work of F. P. Ramsey (1903-30). Much of Ramsey's work was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace of Brouwer and Weyl'. In the theory of probability he was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a 'redundancy theory of truth', which he combined with radical views of the function of many kinds of propositions: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey said that a belief was knowledge if it was true, certain and obtained by a reliable process. P. Unger (1968) suggested that 'S' knows that 'p' just in case it is not at all accidental that 'S' is right about its being the case that 'p'. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth: he said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via laws of nature.
 Reliabilism is standardly classified as an 'externalist' theory because it invokes some truth-linked factor, and truth is 'external' to the believer. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, etc., that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem, at least, to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group - not just on what is going on internally in his mind or brain (Putnam, 1975, and Burge, 1979). Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by means of a nomic, counterfactual or other such 'external' relation between belief and truth.
 The most influential counterexamples to reliabilism are the demon-world and clairvoyance examples. The demon-world example challenges the necessity of the reliability requirement: in a possible world in which an evil demon creates deceptive visual experiences, the process of vision is not reliable, yet the visually formed beliefs in this world are intuitively justified. The clairvoyance example challenges the sufficiency of reliability. Suppose a cognitive agent possesses a reliable clairvoyance power, but has no evidence for or against his possessing such a power. Intuitively, his clairvoyantly formed beliefs are unjustified, but reliabilism declares them justified.
 Another form of reliabilism, 'normal worlds' reliabilism, treats the demon-world problem differently. A 'normal world' is one that is consistent with our general beliefs about the actual world. Normal-worlds reliabilism says that a belief, in any possible world, is justified just in case its generating processes have high truth ratios in normal worlds. This resolves the demon-world problem because the relevant truth ratio of the visual process is not its truth ratio in the demon world itself, but its ratio in normal worlds. Since this ratio is presumably high, visually formed beliefs in the demon world turn out to be justified.
 Yet another version of reliabilism attempts to meet the demon-world and clairvoyance problems without recourse to the questionable notion of 'normal worlds'. Consider Sosa's (1992) suggestion that justified belief is belief acquired through 'intellectual virtues', and not through intellectual 'vices', where virtues are reliable cognitive faculties or processes. The task is to explain how epistemic evaluators use the notion of intellectual virtues and vices to arrive at their judgements, especially in the problematic cases. Goldman (1992) proposes a two-stage reconstruction of an evaluator's activity. The first stage is a reliability-based acquisition of a 'list' of virtues and vices. The second stage is the application of this list to queried cases: the evaluator determines whether the processes in the queried cases resemble virtues or vices. Visual beliefs in the demon world are classified as justified because visual belief-formation is one of the virtues; clairvoyantly formed beliefs are classified as unjustified because clairvoyance resembles scientifically suspect processes that the evaluator represents as vices, e.g., mental telepathy, ESP, and so forth.
 Pragmatism is a philosophy of meaning and truth especially associated with the American philosopher of science and of language Charles Sanders Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as no more than a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including for example belief in God, are true if they work satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, provided it works, but what it is for a belief to work is not a simple matter for James. The apparent subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remained inspired by science, and those who took a more idealistic route, especially the English writer F.C.S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an 'automatic sweetheart', or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others; the disturbing implication is that satisfying this craving is part of what would make it true that other persons have minds.
 Modern pragmatists such as the American philosopher and critic Richard Rorty (1931- ) and, in some of his writings, the philosopher Hilary Putnam (1926- ) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant's doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.
 Functionalism, the modern successor to behaviourism in the philosophy of mind, had as its early advocates Putnam (1926- ) and Sellars (1912-89). Its guiding principle is that we can define mental states by the triplet of relations they stand in: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion; it would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to descriptions of a machine in terms of software, which remain silent about the underlying hardware or 'realization' of the program the machine is running. The principal advantages of functionalism include its fit with the way we know of mental states both in ourselves and in others, which is via their effects on behaviour and on other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal structure is very different from our own. It may then seem as though beliefs and desires can be 'variably realized' in different causal architectures, just as much as they can be realized in different neurophysiological states.
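 The software analogy and the point about 'variable realization' can be made concrete with a small sketch. This is only an illustration of the analogy, not part of the philosophical literature; the interface and the two implementations below are invented for the example.

    # Illustrative only: one functional role, two different 'realizations'.
    # The role is fixed by what causes the state and what it causes,
    # not by the stuff that implements it.

    from abc import ABC, abstractmethod

    class PainRole(ABC):
        """Anything that plays the pain role: caused by damage, causes avoidance."""
        @abstractmethod
        def register_damage(self, severity: float) -> None: ...
        @abstractmethod
        def drives_avoidance(self) -> bool: ...

    class CarbonRealization(PainRole):
        def __init__(self) -> None:
            self._level = 0.0
        def register_damage(self, severity: float) -> None:
            self._level += severity          # one kind of internal bookkeeping
        def drives_avoidance(self) -> bool:
            return self._level > 0.5

    class SiliconRealization(PainRole):
        def __init__(self) -> None:
            self._events = []
        def register_damage(self, severity: float) -> None:
            self._events.append(severity)    # a completely different bookkeeping
        def drives_avoidance(self) -> bool:
            return sum(self._events) > 0.5

    for creature in (CarbonRealization(), SiliconRealization()):
        creature.register_damage(0.7)
        print(type(creature).__name__, creature.drives_avoidance())  # both True

 On the functionalist reading, both objects are in 'the same' state because they satisfy the same causal description even though their internal constitutions differ, and this is also exactly the feature that the 'too generous' objection exploits.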
 The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
 The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Some Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing in it.
 Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
 Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
 The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism's refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists' denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
 Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
 The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of earlier positivism that personal experience is the basis of true knowledge.
 James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce's doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called 'the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.
 Dewey's philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and societies are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
 Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey's writings, although he aspired to synthesize the two realms.
 The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest in the classic pragmatists - Peirce, James, and Dewey - has been renewed as an alternative to Rorty's interpretation of the tradition.
 One of the earliest versions of a correspondence theory was put forward in the 4th century BC by the Greek philosopher Plato, who sought to understand the meaning of knowledge and how it is acquired. Plato wished to distinguish between true belief and false belief. He proposed a theory based on intuitive recognition that true statements correspond to the facts - that is, agree with reality - while false statements do not. In Plato's example, the sentence "Theaetetus flies" can be true only if the world contains the fact that Theaetetus flies. However, Plato - and, much later, 20th-century British philosopher Bertrand Russell - recognized this theory as unsatisfactory because it did not allow for false belief. Both Plato and Russell reasoned that if a belief were false because there is no fact to which it corresponds, it would then be a belief about nothing and so not a belief at all. Each then speculated that the grammar of a sentence could offer a way around this problem. A sentence can be about something (the person Theaetetus), yet false (flying is not true of Theaetetus). But how, they asked, are the parts of a sentence related to reality?
One suggestion, proposed by 20th-century philosopher Ludwig Wittgenstein, is that the parts of a sentence relate to the objects they describe in much the same way that the parts of a picture relate to the objects pictured. Once again, however, false sentences pose a problem: If a false sentence pictures nothing, there can be no meaning in the sentence.
 In the late 19th century, the American philosopher Charles S. Peirce offered another answer to the question "What is truth?" He asserted that truth is that which experts will agree upon when their investigations are final. Many pragmatists such as Peirce claim that the truth of our ideas must be tested through practice. Some pragmatists have gone so far as to question the usefulness of the idea of truth, arguing that in evaluating our beliefs we should rather pay attention to the consequences that our beliefs may have. However, critics of the pragmatic theory are concerned that we would have no knowledge because we do not know which set of beliefs will ultimately be agreed upon; nor are there sets of beliefs that are useful in every context.
 A third theory of truth, the coherence theory, also concerns the meaning of knowledge. Coherence theorists have claimed that a set of beliefs is true if the beliefs are comprehensive - that is, they cover everything - and do not contradict each other.
 Other philosophers dismiss the question "What is truth?" with the observation that attaching the claim 'it is true that' to a sentence adds no meaning. However, these theorists, who have proposed what are known as deflationary theories of truth, do not dismiss such talk about truth as useless. They agree that there are contexts in which a sentence such as 'it is true that the book is blue' can have a different impact than the shorter statement 'the book is blue'. What is more important, use of the word true is essential when making a general claim about everything, nothing, or something, as in the statement 'most of what he says is true'.
 Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato's expression of ideas in the form of dialogues - the dialectical method, used most famously by his teacher Socrates - has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.
 Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.
 For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as 'time is unreal', analyses that then aided in determining the truth of such assertions.
 Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitutes what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements 'John is good' and 'John is tall' have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property 'goodness' as if it were a characteristic of John in the same way that the property 'tallness' is a characteristic of John. Such failure results in philosophical confusion.
 Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.
 Russell's work on mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; translated 1922), in which he first presented his theory of language, Wittgenstein argued that 'all philosophy is a critique of language' and that 'philosophy aims at the logical clarification of thoughts'. The results of Wittgenstein's analysis resembled Russell's logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts - the propositions of science - are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
 Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle opened one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
 The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition 'two plus two equals four'. The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually meaningless. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer's Language, Truth and Logic in 1936.
 The positivists' verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953, translated 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
 This recognition led to Wittgenstein's influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
 Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate 'systematically misleading expressions' in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
 Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.
 Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, are needed in addition to logic in analysing ordinary language.
 Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
 The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyse ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.
 Existentialism is a loose title for various philosophies that emphasize certain common themes: the individual, the experience of choice, and the absence of rational understanding of the universe, with a consequent sense of dread or of the absurdity of human life. More broadly, existentialism is a philosophical movement or tendency, emphasizing individual existence, freedom, and choice, that influenced many diverse writers in the 19th and 20th centuries.
 Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many pre-modern philosophers and writers.
 The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: The human self, which combines mind and body, is itself a paradox and contradiction.
 Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a 'leap of faith' into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.
 Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as Hegel and focused instead on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; translated 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.
 One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche's theory of the Übermensch, a term translated as "Superman" or "Overman." The Superman was an individual who overcame what Nietzsche termed the 'slave morality' of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that 'God is dead', or that traditional morality was no longer relevant in people's lives. In this passage, the sage Zarathustra comes down from the mountain where he has spent the last ten years alone to preach to the people.
 Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the "death of God" and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.
 The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of being (Heidegger's term for that which underlies all existence).
 Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis - in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology as well as on language.
 Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre's work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that 'man is condemned to be free', Sartre reminds us of the responsibility that accompanies human decisions.
 Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one and thus human life is a 'futile passion'. Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.
 Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially that a personal sense of authenticity and commitment is essential to religious faith.
 Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels which probed the motivations and moral justifications for his characters' actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky's best work, interlaces religious exploration with the story of a family's violent quarrels over a woman and a disputed inheritance.
 A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), "We must love life more than the meaning of it."
 The opening lines of the Russian novelist Fyodor Dostoyevsky's Notes from Underground (1864) - 'I am a sick man . . . I am a spiteful man' - are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground marks Dostoyevsky's rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader's sense of morality as well as the foundations of rational thinking. In this excerpt from the beginning of the novel, the narrator describes himself, derisively referring to himself as an 'overly conscious' intellectual.
 In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; translated 1937) and The Castle (1926; translated 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer and John Barth.
 The problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato's view in the Theaetetus that knowledge is true belief plus some logos. Epistemology, the theory of knowledge, is the branch of philosophy that addresses the philosophical problems surrounding this foundation. It is concerned with the definition of knowledge and related concepts, the sources and criteria of knowledge, the kinds of knowledge possible and the degree to which each is certain, and the exact relation between the one who knows and the object known.
 Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as the Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy. In the excerpt that follows, the author Anthony Kenny examines the complexities of Aquinas's concepts of substance and accident.
 In the 5th century BC, the Greek Sophists questioned the possibility of reliable and objective knowledge. Thus, a leading Sophist, Gorgias, argued that nothing really exists, that if anything did exist it could not be known, and that if knowledge were possible, it could not be communicated. Another prominent Sophist, Protagoras, maintained that no person's opinions can be said to be more correct than another's, because each is the sole judge of his or her own experience. Plato, following his illustrious teacher Socrates, tried to answer the Sophists by postulating the existence of a world of unchanging and invisible forms, or ideas, about which it is possible to have exact and certain knowledge. The things one sees and touches, they maintained, are imperfect copies of the pure forms studied in mathematics and philosophy. Accordingly, only the abstract reasoning of these disciplines yields genuine knowledge, whereas reliance on sense perception produces vague and inconsistent opinions. They concluded that philosophical contemplation of the unseen world of forms is the highest goal of human life.
 Aristotle followed Plato in regarding abstract knowledge as superior to any other, but disagreed with him as to the proper method of achieving it. Aristotle maintained that almost all knowledge is derived from experience. Knowledge is gained either directly, by abstracting the defining traits of a species, or indirectly, by deducing new facts from those already known, in accordance with the rules of logic. Careful observation and strict adherence to the rules of logic, which were first set down in systematic form by Aristotle, would help guard against the pitfalls the Sophists had exposed. The Stoic and Epicurean schools agreed with Aristotle that knowledge originates in sense perception, but against both Aristotle and Plato they maintained that philosophy is to be valued as a practical guide to life, rather than as an end in itself.
 After many centuries of declining interest in rational and scientific knowledge, the Scholastic philosopher Saint Thomas Aquinas and other philosophers of the Middle Ages helped to restore confidence in reason and experience, blending rational methods with faith into a unified system of beliefs. Aquinas followed Aristotle in regarding perception as the starting point and logic as the intellectual procedure for arriving at reliable knowledge of nature, but he considered faith in scriptural authority as the main source of religious belief.
 From the 17th to the late 19th century, the main issue in epistemology was reasoning versus sense perception in acquiring knowledge. For the rationalists, of whom the French philosopher René Descartes, the Dutch philosopher Baruch Spinoza, and the German philosopher Gottfried Wilhelm Leibniz were the leaders, the main source and final test of knowledge was deductive reasoning based on self-evident principles, or axioms. For the empiricists, beginning with the English philosophers Francis Bacon and John Locke, the main source and final test of knowledge was sense perception.
 Bacon inaugurated the new era of modern science by criticizing the medieval reliance on tradition and authority and also by setting down new rules of scientific method, including the first set of rules of inductive logic ever formulated. Locke attacked the rationalist belief that the principles of knowledge are intuitively self-evident, arguing that all knowledge is derived from experience, either from experience of the external world, which stamps sensations on the mind, or from internal experience, in which the mind reflects on its own activities. Human knowledge of external physical objects, he claimed, is always subject to the errors of the senses, and he concluded that one cannot have absolutely certain knowledge of the physical world.
 Irish-born philosopher and clergyman George Berkeley (1685-1753) argued that everything a human being conceives of exists as an idea in a mind, a philosophical position known as idealism. Berkeley reasoned that because one cannot control one's thoughts, they must come directly from a larger mind: that of God. In this excerpt from his Treatise Concerning the Principles of Human Knowledge, written in 1710, Berkeley explained why he believed that it is 'impossible . . . that there should be any such thing as an outward object'.
 The Irish philosopher George Berkeley acknowledged, along with Locke, that knowledge comes through ideas, but he denied Locke's belief that a distinction can be made between ideas and objects. The British philosopher David Hume continued the empiricist tradition, but he did not accept Berkeley's conclusion that knowledge consists of ideas only. He divided all knowledge into two kinds: knowledge of relations of ideas - that is, the knowledge found in mathematics and logic, which is exact and certain but provides no information about the world - and knowledge of matters of fact - that is, the knowledge derived from sense perception. Hume argued that most knowledge of matters of fact depends upon cause and effect, and since no logical connection exists between any given cause and its effect, one cannot hope to know any future matter of fact with certainty. Thus, the most reliable laws of science might not remain true - a conclusion that had a revolutionary impact on philosophy.
 The German philosopher Immanuel Kant tried to solve the crisis precipitated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside of thought. He distinguished three kinds of knowledge: analytic a priori, which is exact and certain but uninformative, because it makes clear only what is contained in definitions; synthetic a posteriori, which conveys information about the world learned from experience, but is subject to the errors of the senses; and synthetic a priori, which is discovered by pure intuition and is both exact and certain, for it expresses the necessary conditions that the mind imposes on all objects of experience. Mathematics and philosophy, according to Kant, provide this last. Since the time of Kant, one of the most frequently argued questions in philosophy has been whether or not such a thing as synthetic a priori knowledge really exists.
 During the 19th century, the German philosopher Georg Wilhelm Friedrich Hegel revived the rationalist claim that absolutely certain knowledge of reality can be obtained by equating the processes of thought, of nature, and of history. Hegel inspired an interest in history and a historical approach to knowledge that was further emphasized by Herbert Spencer in Britain and by the German school of historicism. Spencer and the French philosopher Auguste Comte brought attention to the importance of sociology as a branch of knowledge, and both extended the principles of empiricism to the study of society.
 The American school of pragmatism, founded by the philosophers Charles Sanders Peirce, William James, and John Dewey at the turn of the 20th century, carried empiricism further by maintaining that knowledge is an instrument of action and that all beliefs should be judged by their usefulness as rules for predicting experiences.
 In the early 20th century, epistemological problems were discussed thoroughly, and subtle shades of difference grew into rival schools of thought. Special attention was given to the relation between the act of perceiving something, the object directly perceived, and the thing that can be said to be known as a result of the perception. The phenomenalists contended that the objects of knowledge are the same as the objects perceived. The neorealists argued that one has direct perceptions of physical objects or parts of physical objects, rather than of one's own mental states. The critical realists took a middle position, holding that although one perceives only sensory data such as colours and sounds, these stand for physical objects and provide knowledge thereof.
 A method for dealing with the problem of clarifying the relation between the act of knowing and the object known was developed by the German philosopher Edmund Husserl. He outlined an elaborate procedure that he called phenomenology, by which one is said to be able to distinguish the way things appear to be from the way one thinks they really are, thus gaining a more precise understanding of the conceptual foundations of knowledge.
 During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge: scientific knowledge; that any valid knowledge claim must be verifiable in experience; and hence that much that had passed for philosophy was neither true nor false but literally meaningless. Finally, following Hume and Kant, they held that a clear distinction must be maintained between analytic and synthetic statements. The so-called verifiability criterion of meaning has undergone changes as a result of discussions among the logical empiricists themselves, as well as their critics, but it has not been discarded. More recently, the sharp distinction between the analytic and the synthetic has been attacked by a number of philosophers, chiefly by the American philosopher W. V. O. Quine, whose overall approach is in the pragmatic tradition.
 The second of these schools of thought, generally referred to as linguistic analysis, or ordinary language philosophy, seems to break with traditional epistemology. The linguistic analysts undertake to examine the actual way key epistemological terms are used - terms such as knowledge, perception, and probability - and to formulate definitive rules for their use in order to avoid verbal confusion. The British philosopher John Langshaw Austin argued, for example, that to say a statement is true adds nothing to the statement except a promise by the speaker or writer; Austin does not consider truth a quality or property attaching to statements or utterances. The ruling thought, however, is that it is only through a correct appreciation of the role and point of this language that we can come to a better conceptual understanding of what the language is about and avoid the oversimplifications and distortions we are apt to bring to its subject matter.
 Linguistics is the scientific study of language. It encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner's first language and about the language being acquired.
 Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.
 Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyse Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyses it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words such as push; root words, such as the berry in a blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).
 The linguist's next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of the morpheme and the grammatical rules of the sentence. In the sentence "She pushed the bush," the morpheme she, a pronoun, is the subject; pushed, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provide descriptive linguists with a way to write down grammars of languages never before written down or analysed. In this way they can begin to study and understand these languages.
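 The minimal-pair test that establishes phonemes such as /p/ and /b/ lends itself to a brief illustration. The sketch below is only a toy: it works on ordinary spellings rather than true phonetic transcriptions, and the word list is invented for the example.

    # Toy minimal-pair finder: two words that differ in exactly one segment
    # (push/bush, pat/bat) are evidence that the differing sounds are separate
    # phonemes, because swapping them changes the meaning of the word.

    def differs_in_one_segment(a, b):
        if len(a) != len(b):
            return False
        return sum(1 for x, y in zip(a, b) if x != y) == 1

    def minimal_pairs(words):
        words = sorted(words)
        return [(a, b) for i, a in enumerate(words)
                for b in words[i + 1:] if differs_in_one_segment(a, b)]

    sample = ["push", "bush", "pat", "bat", "bit"]
    print(minimal_pairs(sample))
    # [('bat', 'bit'), ('bat', 'pat'), ('bush', 'push')]

 Each pair reported by the sketch is a candidate minimal pair; a field linguist would of course work from phonetic transcriptions and check that the two forms really differ in meaning.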
 Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to each other and had descended from a common source. He based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for "brother" resembles the Latin word frater, the Greek word phrater, and the English word brother.
 Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.
 Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb 'go' changes to 'went' and 'gone' to express a past action. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in "go store tomorrow"). In Swahili, prefixes, suffixes, and infixes (additions in the body of the word) combine with a root word to change its meaning. For example, a single word might express when something was done, by whom, to whom, and in what manner.
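 A small sketch can illustrate the kind of agglutination just described for Swahili. The morpheme inventory below is deliberately tiny and simplified (textbook markers such as ni- 'I', a- 'he/she', na- present, li- past, ku- 'you', and the root -pend- 'love'); it illustrates the idea of packing subject, tense, object, and root into one word, and is not a description of Swahili grammar.

    # Toy gloss of a Swahili-style verb: subject prefix + tense marker
    # + object marker + root + final vowel, all packed into one word.

    SUBJECT = {"ni": "I", "u": "you", "a": "he/she"}
    TENSE   = {"na": "PRESENT", "li": "PAST", "ta": "FUTURE"}
    OBJECT  = {"ku": "you", "ni": "me", "m": "him/her"}
    ROOTS   = {"pend": "love", "on": "see"}

    def gloss(word):
        # Greedy left-to-right segmentation over the toy inventory.
        for s in SUBJECT:
            if not word.startswith(s):
                continue
            rest = word[len(s):]
            for t in TENSE:
                if not rest.startswith(t):
                    continue
                rest2 = rest[len(t):]
                for o in OBJECT:
                    if rest2.startswith(o) and rest2[len(o):-1] in ROOTS:
                        root = rest2[len(o):-1]
                        return (f"{s}-{t}-{o}-{root}-a : "
                                f"{SUBJECT[s]} {TENSE[t]} {ROOTS[root]} {OBJECT[o]}")
        return "no parse"

    print(gloss("ninakupenda"))  # ni-na-ku-pend-a : I PRESENT love you
    print(gloss("alinipenda"))   # a-li-ni-pend-a : he/she PAST love me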
 Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of people.
 Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula to figure out when the languages separated from one another.
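 The formula alluded to here comes from lexicostatistics and glottochronology, usually associated with Morris Swadesh. A minimal sketch follows, assuming the commonly quoted retention constant of roughly 86 percent of a 100-item core-vocabulary list per millennium; the particular figures are illustrative, not measurements.

    import math

    def millennia_since_split(shared_cognates, retention_rate=0.86):
        # Standard glottochronological estimate: t = ln(c) / (2 * ln(r)),
        # where c is the proportion of core vocabulary still cognate between
        # the two languages and r is the assumed retention rate per 1,000 years.
        return math.log(shared_cognates) / (2 * math.log(retention_rate))

    # Two related languages sharing 74% of their culture-free core words
    # would, on these assumptions, have separated roughly 1,000 years ago.
    print(round(millennia_since_split(0.74), 2))  # ~1.0

 The result is only as good as its assumptions, which is one reason the method has been controversial; the retention rate is an average, and borrowing between languages can inflate the apparent share of cognates.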
 By the 1960s comparativists were no longer satisfied with focussing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.
 The field of linguistics both borrows from and lends its theories and methods to other disciplines, and the many subfields of linguistics have expanded our understanding of languages. These overlapping interests have led to the creation of several cross-disciplinary fields.
 Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as "fourth floor" can indicate the person's social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing /r/. Sometimes they even overcorrect their speech, pronouncing /r/ where those whom they wish to copy may not.
 Some sociolinguists believe that analysing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or another quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. The goal of sociolinguistics, in either case, is to understand communicative competence - what people need to know to use the appropriate language for a given social setting.
 Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children's language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to read written language).
 Computational linguistics involves the use of computers to compile linguistic data, analyse languages, translate from one language to another, and develop and test models of language processing. Linguists use computers and large samples of actual language to analyse the relatedness and the structure of languages and to look for patterns and similarities. Computers also assist in stylistic studies, information retrieval, various forms of textual analysis, and the construction of dictionaries and concordances. Applying computers to language studies has resulted in machine translation systems and in machines that recognize and produce speech and text. Such machines facilitate communication with humans, including those who are perceptually or linguistically impaired.
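 As a minimal sketch of one such task, the following Python fragment builds a tiny keyword-in-context concordance; the function, sample sentence, and window size are hypothetical choices for illustration rather than features of any particular system.

    # Minimal keyword-in-context (KWIC) concordance sketch.
    # Illustrative only: real concordancers handle tokenization,
    # lemmatization, and far larger corpora.
    def concordance(text, keyword, window=4):
        """Return each occurrence of `keyword` with `window` words of context."""
        words = text.lower().split()
        results = []
        for i, w in enumerate(words):
            if w.strip('.,;:"!?') == keyword:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                results.append(f"{left} [{w}] {right}")
        return results

    sample = "The corpus grew as the linguist added texts, and the corpus was indexed."
    for line in concordance(sample, "corpus"):
        print(line)

 Even a toy like this shows why large samples of actual language matter: interesting patterns emerge only once many contexts for the same word can be compared.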
 Applied linguistics employs linguistic theory and methods in teaching and in research on learning a second language. Linguists look at the errors people make as they learn another language and at their strategies for communicating in the new language at different degrees of competence. In seeking to understand what happens in the mind of the learner, applied linguists recognize that motivation, attitude, learning style, and personality affect how well a person learns another language.
 Anthropological linguistics, also known as linguistic anthropology, uses linguistic approaches to analyse culture. Anthropological linguists examine the relationship between a culture and its language, the way cultures and languages have changed over time, and how different cultures and languages are related to each other. For example, the present English usage of family and given names arose in the late 13th and early 14th centuries, when the laws concerning registration, tenure, and inheritance of property were changed.
 Once linguists began to study language as a set of abstract rules that somehow account for speech, other scholars began to take an interest in the field. They drew analogies between language and other forms of human behaviour, based on the belief that a shared structure underlies many aspects of a culture. Anthropologists, for example, became interested in a structuralist approach to the interpretation of kinship systems and analysis of myth and religion. American linguist Leonard Bloomfield promoted structuralism in the United States.
 Saussure's ideas also influenced European linguistics, most notably in France and Czechoslovakia (now the Czech Republic). In 1926 Czech linguist Vilem Mathesius founded the Linguistic Circle of Prague, a group that expanded the focus of the field to include the context of language use. The Prague circle developed the field of phonology, or the study of sounds, and demonstrated that universal features of sounds in the languages of the world interrelate in a systematic way. Linguistic analysis, they said, should focus on the distinctiveness of sounds rather than on the ways they combine. Where descriptivists tried to locate and describe individual phonemes, such as /b/ and /p/, the Prague linguists stressed the features of these phonemes and their interrelationships in different languages. In English, for example, voicing distinguishes between the similar sounds of /b/ and /p/, but these are not distinct phonemes in a number of other languages. An Arabic speaker might pronounce the cities Pompeii and Bombay the same way.
 As linguistics developed in the 20th century, the notion became prevalent that language is more than speech - specifically, that it is an abstract system of interrelationships shared by members of a speech community. Structural linguistics led linguists to look at the rules and the patterns of behaviour shared by such communities. Whereas structural linguists saw the basis of language in the social structure, other linguists looked at language as a mental process.
 The 1957 publication of ‘Syntactic Structures’ by American linguist Noam Chomsky initiated what many view as a scientific revolution in linguistics. Chomsky sought a theory that would account for both linguistic structure and the creativity of language - the fact that we can create entirely original sentences and understand sentences never before uttered. He proposed that all people have an innate ability to acquire language. The task of the linguist, he claimed, is to describe this universal human ability, known as language competence, with a grammar from which the grammars of all languages could be derived. The linguist would develop this grammar by looking at the rules children use in hearing and speaking their first language. He termed the resulting model, or grammar, a transformational-generative grammar, referring to the transformations (or rules) that create (or account for) language. Certain rules, Chomsky asserted, are shared by all languages and form part of a universal grammar, while others are language-specific and associated with particular speech communities. Since the 1960s much of the development in the field of linguistics has been a reaction to or against Chomsky's theories.
 At the end of the 20th century, linguists used the term grammar primarily to refer to a subconscious linguistic system that enables people to produce and comprehend an unlimited number of utterances. Grammar thus accounts for our linguistic competence. Observations about the actual language we use, or language performance, are used to theorize about this invisible mechanism known as grammar.
 The scientific study of language led by Chomsky has had an impact on nongenerative linguists as well. Comparative and historically oriented linguists are looking for the various ways linguistic universals show up in individual languages. Psycholinguists, interested in language acquisition, are investigating the notion that an ideal speaker-hearer is the origin of the acquisition process. Sociolinguists are examining the rules that underlie the choice of language variants, or codes, and allow for switching from one code to another. Some linguists are studying language performance - the way people use language - to see how it reveals a cognitive ability shared by all human beings. Others seek to understand animal communication within such a framework. What mental processes enable chimpanzees to make signs and communicate with one another and how do these processes differ from those of humans?
 From these initial concerns came some of the great themes of twentieth-century philosophy. How exactly does language relate to thought? Are there irredeemable problems about putatively private thought? These issues are captured under the general label of the ‘linguistic turn’. The subsequent development of those early twentieth-century positions has led to a bewildering heterogeneity in philosophy in the early twenty-first century. The very nature of philosophy is itself radically disputed: analytic, continental, postmodern, critical theory, feminist, and non-Western are all prefixes that give a different meaning when joined to ‘philosophy’. The variety of thriving schools, the number of professional philosophers, the proliferation of publications, and the development of technology as an aid to research all manifest a radically different situation from that of one hundred years ago.
 As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.
 An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.
 The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content, and the status of that content as justifying further beliefs, will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist might insist that there are no justification relations of these sorts - that only internally accessible content can be justified or can justify anything else: but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
 Except for alleged cases of self-evident truths, it is often thought that anything that is known must satisfy certain criteria or standards as well as being true. These criteria are general principles that will make a proposition evident or at least make accepting it warranted to some degree. Traditional suggestions include: (1) if a proposition ‘p’, e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then ‘p’ is evident; or, more simply, (2) if we cannot conceive ‘p’ to be false, then ‘p’ is evident; or (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident; or (4) if ‘p’ coheres with the bulk of one’s beliefs, then ‘p’ is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives ‘p’, ‘transmit’ the status as evident they already have for one without criteria to other propositions like ‘p’. Alternatively, they might be criteria whereby epistemic status, e.g., p’s being evident, is originally created by purely non-epistemic considerations, e.g., facts about logical connections or about how ‘p’ is conceived, considerations that are neither self-evident nor already criterially evident. If that status can in turn be ‘transmitted’ to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.
 The upshot is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences, and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.
 Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. Transmission criteria might not simply ‘pass’ evidence on linearly from a foundation of highly evident ‘premisses’ to ‘conclusions’ that are never more evident.
 An argument is a group of statements, some of which purportedly provide support for another. The statements that purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case it is improbable that its conclusion is false given that its premisses are true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
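 To make the contrast concrete, here is one familiar valid deductive form, modus ponens, offered as a stock illustration: no argument of this shape can have true premisses and a false conclusion.

    \[
    \frac{p \to q \qquad p}{\therefore\ q}
    \]

 By contrast, an inductive argument such as 'every raven observed so far has been black, so all ravens are black' can at best be strong: true premisses make the conclusion probable but do not guarantee it.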
 Finally, a proof is a collection of considerations and reasons that instill and sustain conviction that some proposed theorem - the theorem proved - is not only true, but could not possibly be false. A perceptual observation may instill the conviction that water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything but 5.
 Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized, i.e., that all mental facts have explanations in the terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.
 In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.
 Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as 'folk psychology') are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do.  We have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)
 Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)
 Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the 'intentional stance' toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational -, i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.
 Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a 'moderate' realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.
 (Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic 'structural' or 'syntactic' properties. The semantic properties of a mental state, however, are determined by its extrinsic properties -, e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.
 It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ('what-it's-like') features ('Qualia'), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states such as seeing that something is blue are sometimes thought of as hybrid states, consisting of, for example, a Non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)
 Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that Non-conceptual representations - percepts ('impressions'), images ('ideas') and the like - are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frége 1918/1997, Geach 1957) or mathematical (Frége 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only Non-conceptual representations construed in this way.
 Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as Qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.
 The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether Qualia are intrinsically representational (Loar) or not (Block, Peacocke).)
 Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal - though not in the same way.)
 The main argument for representationalism appeals to the transparency of experience. The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to 'see through it' to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.
 In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of 'symbol-filled arrays.' (The account of mental images in Tye 1991.)
 Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences - Qualia themselves - that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual 'scenario' (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is 'correct' (a semantic property) if in the corresponding 'scene' (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.
 Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the 'phenomenal concept' -, a conceptual/phenomenal hybrid consisting of a phenomenological 'sample' (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, 'you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.' One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about P, without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Jackson 1982, 1986 and Nagel 1974.)
 Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.
 Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties - i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained exclusively in terms of discursive, or propositional, representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)
 The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation 'pictorial'; though of course there may be imagery in other modalities - auditory, olfactory, etc. - as well.)
 The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not), would be digital.
 It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is 'quasi-pictorial' when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially - for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)
 Tye (1991) proposes a view of images on which they are hybrid representations, consisting of both pictorial and discursive elements. On Tye's account, images are '(labelled) interpreted symbol-filled arrays.' The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each 'cell' in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).
 The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.
 Causal-informational theories hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.
 The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories and Teleological Theories. The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.
 According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.
 Functional theories hold that the content of a mental representation is grounded in its causal, computational, or inferential relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential or computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).
 (Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (Non-conceptual) content of experiential states. They thus tend to be externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.
 Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are Externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists).
 This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both 'narrow' content (determined by intrinsic factors) and 'wide' or 'broad' content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)
 Narrow content has been variously construed. Putnam (1975), Fodor (1982), and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Frégean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow content may be characterized as a function from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role or its phenomenology.
 Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that there may be no need for narrow content in naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frége cases, are nomologically either impossible or dismissible as exceptions to non-strict psychological laws.
 The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind, claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. Like the representational theory of mind, the computational theory of mind attempts to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information-processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be the mental relata of commonsense psychological states, some - so-called 'subpersonal' or 'sub-doxastic' representations - are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of the prescientific representational theory of mind.
 According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental. That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content are central tenets of mainstream cognitive science.
 Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the 'mental models' of Johnson-Laird 1983, the 'retinal arrays,' 'primal sketches' and '2½-D sketches' of Marr 1982, the 'frames' of Minsky 1974, the 'sub-symbolic' structures of Smolensky 1989, the 'quasi-pictures' of Kosslyn 1980, and the 'interpreted symbol-filled arrays' of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief.
 The classicists hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists hold that mental representations are realized by patterns of activation in a network of simple processors ('nodes') and that mental processes consist of the spreading activation of such patterns. The nodes themselves are typically not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism - 'localist' versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program.
 Classicists are motivated (in part) by properties thought seems to share with language. Jerry Alan Fodor's (1935- ) Language of Thought Hypothesis (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
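 A toy sketch may make the idea of compositionality concrete. The little evaluator below is a deliberately simplified, hypothetical illustration in Python, not Fodor's own formalism: it assigns a truth value to a complex expression purely on the basis of the values of its constituents and their structural configuration.

    # Toy compositional semantics: the value of a complex expression is a
    # function of the values of its parts and of how they are combined.
    # Purely illustrative; not a model of the language of thought itself.
    def evaluate(expr, assignment):
        """Evaluate a nested tuple expression over Boolean atoms."""
        if isinstance(expr, str):              # atomic constituent
            return assignment[expr]
        op, *args = expr                       # structural configuration
        values = [evaluate(a, assignment) for a in args]
        if op == "not":
            return not values[0]
        if op == "and":
            return all(values)
        if op == "or":
            return any(values)
        raise ValueError(f"unknown operator: {op}")

    # The value of ('and', 'p', ('not', 'q')) is fixed by 'p', 'q', and the structure.
    print(evaluate(("and", "p", ("not", "q")), {"p": True, "q": False}))   # True

 The same finite stock of atoms and rules generates indefinitely many complex expressions, which is the point of the productivity and systematicity claims above.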
 Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
 Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of an evolving distribution of 'weight' (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
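 The following sketch, a bare perceptron-style rule offered only as a hypothetical miniature of this kind of learning (real connectionist models are far richer), shows how repeated exposure to labelled examples nudges connection weights without any explicit hypothesis being formulated or tested.

    # Minimal illustration of connectionist-style learning: repeated exposure
    # adjusts connection strengths ('weights'); no hypotheses are stated or tested.
    def train(examples, epochs=20, rate=0.1):
        weights = [0.0, 0.0]
        bias = 0.0
        for _ in range(epochs):                          # repeated exposure
            for inputs, target in examples:
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                output = 1 if activation > 0 else 0
                error = target - output
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                bias += rate * error                     # adjust connection strengths
        return weights, bias

    # Learn a simple two-feature distinction (an AND-like category) from exposure alone.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train(data))

 After training, the learned category is carried entirely by the final weights and bias, a distributed store of connection strengths rather than an explicitly formulated rule.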
 Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively 'brittle' or 'fragile.'
 Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (Anthologies collecting the central contemporary papers in the classicist/connectionist debate also provide useful introductory material.)
 Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that the computational theory of mind provides the correct account of mental states and processes.
 Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the system's components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.
 Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. Computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.
 To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that you have taken to sniffing snuff. I am thinking about you, and if what I think of you (that you take snuff) is true of you, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that you take snuff is to token in some way a mental representation whose content is that you take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.
 Linguistic acts seem to share such properties with mental states. Suppose I say that you take snuff. I am talking about you, and if what I say of you (that you take snuff) is true of you, then my utterance is true. Now, to say that you take snuff is (in part) to utter a sentence that means that you take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express. On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.
 It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frége 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
Theories of representational content may be classified according to whether they are atomistic or holistic and according to whether they are externalist or internalist. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can only be understood in terms of its relation to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, a mental state similarly may be identified only in terms of its relations with others. Moderate holism may allow that other things besides these relationships also count; extreme holism would hold that a network of relationships is all that we have. A holistic view of science holds that experience only confirms or disconfirms large bodies of doctrine, impinging at the edges and leaving some leeway over the adjustments it requires.
 Externalism, again in the philosophy of mind and language, is the view that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since that holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that you can know without being justified in believing that you know.
 Atomistic theories, by contrast, take a representation's content to be something that can be specified independently of that representation's relations to other representations. What the American philosopher of mind Jerry Alan Fodor (1935-) calls the crude causal theory, for example, takes a representation to be a |cow| - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how |cow|s must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a |cow| if it behaves as a |cow| should behave in inference.
 Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke a historical theory of functions, take content to be determined by 'external' factors. Crossing the atomistic-holistic distinction with the internalist-externalist distinction thus yields a fourfold classification of theories of content.
 Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such proposal is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
 Scientists used Newton's theory of gravitation successfully for many years. Several problems began to arise, however, involving motion that did not follow the law of gravitation or Newtonian mechanics. One problem was the observed and unexplainable deviations in the orbit of Mercury (which could not be caused by the gravitational pull of another orbiting body).
 Another problem with Newton's theory involved reference frames, that is, the conditions under which an observer measures the motion of an object. According to Newtonian mechanics, two observers making measurements of the speed of an object will measure different speeds if the observers are moving relative to each other. A person on the ground observing a ball that is on a train passing by will measure the speed of the ball as the same as the speed of the train. A person on the train observing the ball, however, will measure the ball's speed as zero. According to the traditional ideas about space and time, then, there could not be a constant, fundamental speed in the physical world because all speed is relative. However, near the end of the 19th century the Scottish physicist James Clerk Maxwell proposed a complete theory of electric and magnetic forces that contained just such a constant, which he called c. This constant speed was 300,000 km/sec (186,000 mi/sec) and was the speed of electromagnetic waves, including light waves. This feature of Maxwell's theory caused a crisis in physics because it indicated that speed was not always relative.
 Albert Einstein resolved this crisis in 1905 with his special theory of relativity. An important feature of Einstein's new theory was that no particle, and not even information, could travel faster than the fundamental speed c. In Newton's gravitation theory, however, information about gravitation moved at infinite speed. If a star exploded into two parts, for example, the change in gravitational pull would be felt immediately by a planet in a distant orbit around the exploded star. According to Einstein's theory, such forces were not possible.
 Though Newton's theory contained several flaws, it is still very practical for use in everyday life. Even today, it is sufficiently accurate for dealing with earth-based gravitational effects such as in geology (the study of the formation of the earth and the processes acting on it), and for most scientific work in astronomy. Only when examining exotic phenomena such as black holes (points in space with a gravitational force so strong that not even light can escape them) or in explaining the big bang (the origin of the universe) is Newton's theory inaccurate or inapplicable.
 The gravitational attraction of objects for one another is the easiest fundamental force to observe and was the first fundamental force to be described with a complete mathematical theory by the English physicist and mathematician Sir Isaac Newton. A more accurate theory called general relativity was formulated early in the 20th century by the German-born American physicist Albert Einstein. Scientists recognize that even this theory is not correct for describing how gravitation works in certain circumstances, and they continue to search for an improved theory.
 Gravitation plays a crucial role in most processes on the earth. The ocean tides are caused by the gravitational attraction of the moon and the sun on the earth and its oceans. Gravitation drives weather patterns by making cold air sink and displace less dense warm air, forcing the warm air to rise. The gravitational pull of the earth on all objects holds the objects to the surface of the earth. Without it, the spin of the earth would send them floating off into space.
 The gravitational attraction of every bit of matter in the earth for every other bit of matter amounts to an inward pull that holds the earth together against the pressure forces tending to push it outward. Similarly, the inward pull of gravitation holds stars together. When a star's fuel nears depletion, the processes producing the outward pressure weaken and the inward pull of gravitation eventually compresses the star to a very compact size.
 If the pull of gravity is the only force acting on an object, then all objects, regardless of their weight, size, or shape, will accelerate in the same manner. At the same place on the earth, a 16 lb (71 N) bowling ball and a 500 lb (2200 N) boulder will fall with the same rate of acceleration. As each second passes, each object will increase its downward speed by about 9.8 m/sec (32 ft/sec), resulting in an acceleration of 9.8 m/sec/sec (32 ft/sec/sec). In principle, a rock and a feather both would fall with this acceleration if there were no other forces acting. In practice, however, air friction exerts a greater upward force on the falling feather than on the rock and makes the feather fall more slowly than the rock.
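 As a check on the arithmetic above, here is a minimal Python sketch of fall speed under constant acceleration; it assumes only the 9.8 m/sec figure quoted in the text and ignores air friction.

    # Speed of a freely falling object, ignoring air friction.
    g = 9.8  # acceleration due to gravity, metres per second per second

    for t in range(1, 4):      # after 1, 2 and 3 seconds
        speed = g * t          # the speed grows by about 9.8 m/sec each second
        print(f"after {t} s: {speed:.1f} m/sec")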
 The mass of an object does not change as it is moved from place to place, but the acceleration due to gravity, and therefore the object's weight, will change because the strength of the earth's gravitational pull is not the same everywhere. The earth's pull and the acceleration due to gravity decrease as an object moves farther away from the centre of the earth. At an altitude of 4000 miles (6400 km) above the earth's surface, for instance, the bowling ball that weighed 16 lb (71 N) at the surface would weigh only about 4 lb (18 N). Because of the reduced weight force, the rate of acceleration of the bowling ball at that altitude would be only one quarter of the acceleration rate at the surface of the earth. The pull of gravity on an object also changes slightly with latitude. Because the earth is not perfectly spherical, but bulges at the equator, the pull of gravity is about 0.5 percent stronger at the earth's poles than at the equator.
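 The quarter-weight figure follows from the inverse-square law; a minimal Python sketch, assuming an Earth radius of roughly 6,400 km (at the altitude quoted above, the distance from Earth's centre then doubles):

    EARTH_RADIUS_KM = 6400.0  # approximate value, not given in the text

    def weight_at_altitude(surface_weight_newtons, altitude_km):
        # Weight scales as the inverse square of the distance from Earth's centre.
        distance = EARTH_RADIUS_KM + altitude_km
        return surface_weight_newtons * (EARTH_RADIUS_KM / distance) ** 2

    print(weight_at_altitude(71.0, 6400.0))  # about 18 N, a quarter of the surface weight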
 The ancient Greek philosophers developed several theories about the force that caused objects to fall toward the earth. In the 4th century BC, the Greek philosopher Aristotle proposed that all things were made from some combination of the four elements, earth, air, fire, and water. Objects that were similar in nature attracted one another, and as a result, objects with more earth in them were attracted to the earth. Fire, by contrast, was dissimilar and therefore tended to rise from the earth. Aristotle also developed a cosmology, that is, a theory describing the universe, that was geocentric, or earth-centred, with the moon, sun, planets, and stars moving around the earth on spheres. The Greek philosophers, however, did not propose a connection between the force behind planetary motion and the force that made objects fall toward the earth.
 At the beginning of the 17th century, the Italian physicist and astronomer Galileo discovered that all objects fall toward the earth with the same acceleration, regardless of their weight, size, or shape, when gravity is the only force acting on them. Galileo also had a theory about the universe, which he based on the ideas of the Polish astronomer Nicolaus Copernicus. In the mid-16th century, Copernicus had proposed a heliocentric, or sun-centred, system, in which the planets moved in circles around the sun, and Galileo agreed with this cosmology. However, Galileo believed that the planets moved in circles because this motion was the natural path of a body with no forces acting on it. Like the Greek philosophers, he saw no connection between the force behind planetary motion and gravitation on earth.
 In the late 16th and early 17th centuries the heliocentric model of the universe gained support from observations by the Danish astronomer Tycho Brahe, and his student, the German astronomer Johannes Kepler. These observations, made without telescopes, were accurate enough to determine that the planets did not move in circles, as Copernicus had suggested. Kepler calculated that the orbits had to be ellipses (slightly elongated circles). The invention of the telescope made even more precise observations possible, and Galileo was one of the first to use a telescope to study astronomy. In 1609 Galileo observed that moons orbited the planet Jupiter, a fact that could not easily be fitted into an earth-centred model of the heavens.
 The new heliocentric theory changed scientists' views about the earth's place in the universe and opened the way for new ideas about the forces behind planetary motion. However, it was not until the late 17th century that Isaac Newton developed a theory of gravitation that encompassed both the attraction of objects on the earth and planetary motion.
 Gravitational Forces: Because the Moon has significantly less mass than Earth, the weight of an object on the Moon’s surface is only one-sixth the object’s weight on Earth’s surface. This graph shows how much an object that weighs ‘w’ on Earth would weigh at different points between the Earth and Moon. Since the Earth and Moon pull in opposite directions, there is a point, about 346,000 km (215,000 mi) from Earth, where the opposite gravitational forces would cancel, and the object's weight would be zero.
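 The 346,000 km figure can be reproduced with a short Python sketch; the Earth-Moon distance and the Moon-to-Earth mass ratio used below are standard approximate values, not figures given in the text.

    import math

    # Point between Earth and the Moon where the two gravitational pulls balance.
    EARTH_MOON_DISTANCE_KM = 384_400      # assumed average distance
    MOON_TO_EARTH_MASS_RATIO = 0.0123     # assumed mass ratio

    # Setting G*M_earth/d**2 equal to G*M_moon/(D - d)**2 and solving for d gives:
    d = EARTH_MOON_DISTANCE_KM / (1 + math.sqrt(MOON_TO_EARTH_MASS_RATIO))
    print(round(d))  # about 346,000 km, matching the figure quoted above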
 To develop his theory of gravitation, Newton first had to develop the science of forces and motion called mechanics. Newton proposed that the natural motion of an object is motion at a constant speed in a straight line, and that it takes a force to slow an object down, speed it up, or change its path. Newton also invented calculus, a new branch of mathematics that became an important tool in the calculations of his theory of gravitation.
 Newton proposed his law of gravitation in 1687 and stated that every particle in the universe attracts every other particle in the universe with a force that depends on the product of the two particles' masses divided by the square of the distance between them. The gravitational force between two objects can be expressed by the following equation: F = GMm/d², where F is the gravitational force, G is a constant known as the universal constant of gravitation, M and m are the masses of each object, and d is the distance between them. Newton considered a particle to be an object with a mass that was concentrated in a small point. If the mass of one or both particles increases, then the attraction between the two particles increases. For instance, if the mass of one particle is doubled, the force of attraction between the two particles is doubled. If the distance between the particles increases, then the attraction decreases as the square of the distance between them. Doubling the distance between two particles, for instance, will make the force of attraction one quarter as great as it was.
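 A minimal Python sketch of the formula, showing the doubling behaviour described above; the value of G used is the currently accepted figure quoted later in the text.

    G = 6.670e-11  # universal constant of gravitation, N·m²/kg²

    def gravitational_force(M, m, d):
        # Attraction (in newtons) between point masses M and m (kg) separated by d metres.
        return G * M * m / d ** 2

    base = gravitational_force(1000.0, 1000.0, 1.0)
    print(base)                                       # force between the two masses
    print(gravitational_force(2000.0, 1000.0, 1.0))   # doubling one mass doubles the force
    print(gravitational_force(1000.0, 1000.0, 2.0))   # doubling the distance quarters it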
 According to Newton, the force acts along a line between the two particles. In the case of two spheres, it acts similarly between their centres. The attraction between objects with irregular shapes is more complicated. Every bit of matter in the irregular object attracts every bit of matter in the other object. A simpler description is possible near the surface of the earth where the pull of gravity is approximately uniform in strength and direction. In this case there is a point in an object (even an irregular object) called the centre of gravity, at which all the force of gravity can be considered to be acting.
 Newton's law affects all objects in the universe, from raindrops in the sky to the planets in the solar system. It is therefore known as the universal law of gravitation. In order to know the strength of gravitational forces overall, however, it became necessary to find the value of G, the universal constant of gravitation. Scientists needed to measure G experimentally, but gravitational forces between objects of ordinary laboratory size are very weak and thus hard to observe. In 1798 the English chemist and physicist Henry Cavendish finally measured G with a very sensitive experiment in which he nearly eliminated the effects of friction and other forces. The value he found was 6.754 × 10⁻¹¹ N·m²/kg², close to the currently accepted value of 6.670 × 10⁻¹¹ N·m²/kg² (a decimal point followed by ten zeros and then the digits 6670). This value is so small that the force of gravitation between two objects with a mass of 1 metric ton each, 1 metre from each other, is about sixty-seven millionths of a newton, or about fifteen millionths of a pound.
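 The last sentence can be checked directly; the newton-to-pound conversion factor below is a standard one, not given in the text.

    G = 6.670e-11        # N·m²/kg², the accepted value quoted above
    N_TO_LBF = 0.2248    # newtons to pounds-force (standard conversion)

    force_newtons = G * 1000.0 * 1000.0 / 1.0 ** 2   # two 1-metric-ton masses, 1 metre apart
    print(force_newtons)                             # about 6.7e-05 N: sixty-seven millionths of a newton
    print(force_newtons * N_TO_LBF)                  # about 1.5e-05 lb: fifteen millionths of a pound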
 Gravitation may also be described in a completely different way. A massive object, such as the earth, may be thought of as producing a condition in space around it called a gravitational field. This field causes objects in space to experience a force. The gravitational field around the earth, for instance, produces a downward force on objects near the earth's surface. The field viewpoint is an alternative to the viewpoint that objects can affect each other across distance. This way of thinking about interactions has proved to be very important in the development of modern physics.
 Newton's law of gravitation was the first theory to describe the motion of objects on the earth accurately as well as the planetary motion that astronomers had long observed. According to Newton's theory, the gravitational attraction between the planets and the sun holds the planets in elliptical orbits around the sun. The earth's moon and moons of other planets are held in orbit by the attraction between the moons and the planets. Newton's law led to many new discoveries, the most important of which was the discovery of the planet Neptune. Scientists had noted unexplainable variations in the motion of the planet Uranus for many years. Using Newton's law of gravitation, the French astronomer Urbain Leverrier and the British astronomer John Couch Adams each independently predicted the existence of a more distant planet that was perturbing the orbit of Uranus. Neptune was discovered in 1846, in an orbit close to its predicted position.
 Einstein's general relativity theory predicts special gravitational conditions. The Big Bang theory, which describes the origin and early expansion of the universe, is one conclusion based on Einstein's theory that has been verified in several independent ways.
 Another conclusion suggested by general relativity, as well as other relativistic theories of gravitation, is that gravitational effects move in waves. Astronomers have observed a loss of energy in a pair of neutron stars (stars composed of densely packed neutrons) that are orbiting each other. The astronomers theorize that energy-carrying gravitational waves are radiating from the pair, depleting the stars of their energy. Very violent astrophysical events, such as the explosion of stars or the collision of neutron stars, can produce gravitational waves strong enough that they may eventually be directly detectable with extremely precise instruments. Astrophysicists are designing such instruments with the hope that they will be able to detect gravitational waves by the beginning of the 21st century.
 Another gravitational effect predicted by general relativity is the existence of black holes. The idea of a star with a gravitational force so strong that light cannot escape from its surface can be traced to Newtonian theory. Einstein modified this idea in his general theory of relativity. Because light cannot escape from a black hole, any object - a particle, spacecraft, or wave - would have to move faster than light in order to escape. But light moves outward at the speed c, and according to relativity c is the highest attainable speed, so nothing can pass it. The black holes that Einstein envisioned, then, allow no escape whatsoever. An extension of this argument shows that when gravitation is this strong, nothing can even stay in the same place, but must move inward. Even the surface of a star must move inward, and must continue the collapse that created the strong gravitational force. What remains then is not a star, but a region of space from which emerges a tremendous gravitational force.
 Einstein's theory of gravitation revolutionized 20th-century physics. Another important revolution that took place was quantum theory. Quantum theory states that physical interactions, or the exchange of energy, cannot be made arbitrarily small. There is a minimal interaction that comes in a packet called the quantum of an interaction. For electromagnetism the quantum is called the photon. Like the other interactions, gravitation also must be quantized. Physicists call a quantum of gravitational energy a graviton. In principle, gravitational waves arriving at the earth would consist of gravitons. In practice, gravitational waves would consist of apparently continuous streams of gravitons, and individual gravitons could not be detected.
 Einstein's theory did not include quantum effects. For most of the 20th century, theoretical physicists have been unsuccessful in their attempts to formulate a theory that resembles Einstein's theory but also includes gravitons. Despite the lack of a complete quantum theory, making some partial predictions about quantized gravitation is possible. In the 1970s, British physicist Stephen Hawking showed that quantum mechanical processes in the strong gravitational pull just outside of black holes would create particles and quanta that move away from the black hole, thereby robbing it of energy.
 Astronomy is the study of the universe and the celestial bodies, gas, and dust within it. Astronomy includes observations and theories about the solar system, the stars, the galaxies, and the general structure of space. Astronomy also includes cosmology, the study of the universe and its past and future. People who study astronomy are called astronomers, and they use a wide variety of methods to carry out their research. These methods usually involve ideas of physics, so most astronomers are also astrophysicists, and the terms astronomer and astrophysicist are essentially interchangeable. Some areas of astronomy also use techniques of chemistry, geology, and biology.
 Astronomy is the oldest science, dating back thousands of years to when primitive people noticed objects in the sky overhead and watched the way the objects moved. In ancient Egypt, the first appearance of certain stars each year marked the onset of the seasonal flood, an important event for agriculture. In 17th-century England, astronomy provided methods of keeping track of time that were especially useful for accurate navigation. Astronomy has a long tradition of practical results, such as our current understanding of the stars, day and night, the seasons, and the phases of the Moon. Much of today's research in astronomy does not address immediate practical problems. Instead, it involves basic research to satisfy our curiosity about the universe and the objects in it. One day such knowledge may be of practical use to humans.
 Astronomers use tools such as telescopes, cameras, spectrographs, and computers to analyse the light that astronomical objects emit. Amateur astronomers observe the sky as a hobby, while professional astronomers are paid for their research and usually work for large institutions such as colleges, universities, observatories, and government research institutes. Amateur astronomers make valuable observations, but are often limited by lack of access to the powerful and expensive equipment of professional astronomers.
 A wide range of astronomical objects is accessible to amateur astronomers. Many solar system objects-such as planets, moons, and comets-are bright enough to be visible through binoculars and small telescopes. Small telescopes are also sufficient to reveal some of the beautiful detail in nebulas-clouds of gas and dust in our galaxy. Many amateur astronomers observe and photograph these objects. The increasing availability of sophisticated electronic instruments and computers over the past few decades has made powerful equipment more affordable and allowed amateur astronomers to expand their observations to much fainter objects. Amateur astronomers sometimes share their observations by posting their photographs on the World Wide Web, a network of information based on connections between computers.
 Amateurs often undertake projects that require numerous observations over days, weeks, months, or even years. By searching the sky over a long period of time, amateur astronomers may observe things in the sky that represent sudden change, such as new comets or novas (stars that brighten suddenly). This type of consistent observation is also useful for studying objects that change slowly over time, such as variable stars and double stars. Amateur astronomers observe meteor showers, sunspots, and groupings of planets and the Moon in the sky. They also participate in expeditions to places in which special astronomical events-such as solar eclipses and meteor showers-are most visible. Several organizations, such as the Astronomical League and the American Association of Variable Star Observers, provide meetings and publications through which amateur astronomers can communicate and share their observations.
 Professional astronomers usually have access to powerful telescopes, detectors, and computers. Most work in astronomy includes three parts, or phases. Astronomers first observe astronomical objects by guiding telescopes and instruments to collect the appropriate information. Astronomers then analyse the images and data. After the analysis, they compare their results with existing theories to determine whether their observations match with what theories predict, or whether the theories can be improved. Some astronomers work solely on observation and analysis, and some work solely on developing new theories.
 Astronomy is such a broad topic that astronomers specialize in one or more parts of the field. For example, the study of the solar system is a different area of specialization than the study of stars. Astronomers who study our galaxy, the Milky Way, often use techniques different from those used by astronomers who study distant galaxies. Many planetary astronomers, such as scientists who study Mars, may have geology backgrounds and not consider themselves astronomers at all. Solar astronomers use different telescopes than nighttime astronomers use, because the Sun is so bright. Theoretical astronomers may never use telescopes at all. Instead, these astronomers use existing data or sometimes only previous theoretical results to develop and test theories. A growing field of astronomy is computational astronomy, in which astronomers use computers to simulate astronomical events. Examples of events for which simulations are useful include the formation of the earliest galaxies of the universe or the explosion of a star to make a supernova.
 Astronomers learn about astronomical objects by observing the energy they emit. These objects emit energy in the form of electromagnetic radiation. This radiation travels throughout the universe in the form of waves and can range from gamma rays, which have extremely short wavelengths, to visible light, to radio waves, which are very long. The entire range of these different wavelengths makes up the electromagnetic spectrum.
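 The different parts of the spectrum are related by the wave equation: the speed of light equals wavelength times frequency. A short Python sketch with illustrative wavelengths (the example values are not from the text):

    c = 3.0e8  # speed of light, metres per second

    def frequency_hz(wavelength_m):
        # c = wavelength * frequency, so frequency = c / wavelength
        return c / wavelength_m

    print(frequency_hz(500e-9))  # visible (green) light, about 6e14 Hz
    print(frequency_hz(1.0))     # a 1-metre radio wave, about 3e8 Hz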
 Astronomers gather different wavelengths of electromagnetic radiation depending on the objects that are being studied. The techniques of astronomy are often very different for studying different wavelengths. Conventional telescopes work only for visible light and the parts of the spectrum near visible light, such as the shortest infrared wavelengths and the longest ultraviolet wavelengths. Earth’s atmosphere complicates studies by absorbing many wavelengths of the electromagnetic spectrum. Gamma-ray astronomy, X-ray astronomy, infrared astronomy, ultraviolet astronomy, radio astronomy, visible-light astronomy, cosmic-ray astronomy, gravitational-wave astronomy, and neutrino astronomy all use different instruments and techniques.
 Observational astronomers use telescopes or other instruments to observe the heavens. The astronomers who do the most observing, however, probably spend more time using computers than they do using telescopes. A few nights of observing with a telescope often provide enough data to keep astronomers busy for months analysing the data.
 Until the 20th century, all observational astronomers studied the visible light that astronomical objects emit. Such astronomers are called optical astronomers, because they observe the same part of the electromagnetic spectrum that the human eye sees. Optical astronomers use telescopes and imaging equipment to study light from objects. Professional astronomers today hardly ever look through telescopes. Instead, a telescope sends an object’s light to a photographic plate or to an electronic light-sensitive computer chip called a charge-coupled device, or CCD. CCDs are about fifty times more sensitive than film, so today's astronomers can record in a minute an image that would have taken about an hour to record on film.
 Telescopes may use either lenses or mirrors to gather visible light, permitting direct observation or photographic recording of distant objects. Those that use lenses are called refracting telescopes, since they use the property of refraction, or bending, of light. The largest refracting telescope is the 40-in (1-m) telescope at the Yerkes Observatory in Williams Bay, Wisconsin, founded in the late 19th century. Lenses bend different colours of light by different amounts, so different colours focus differently. Images produced by large lenses can be tinged with colour, often limiting the observations to those made through filters. Filters limit the image to one colour of light, so the lens bends all of the light in the image the same amount and makes the image more accurate than an image that includes all colours of light. Also, because light must pass through lenses, lenses can only be supported at the very edges. Large, heavy lenses are so thick that all the large telescopes in current use are made with other techniques.
 Reflecting telescopes, which use mirrors, are easier to make than refracting telescopes and reflect all colours of light equally. All the largest telescopes today are reflecting telescopes. The largest single telescopes are the Keck telescopes at Mauna Kea Observatory in Hawaii. The Keck telescope mirrors are 394 in (10.0 m) in diameter. Mauna Kea Observatory, at an altitude of 4,205 m (13,796 ft), is especially high. The air at the observatory is clear, so many major telescope projects are located there.
 The Hubble Space Telescope (HST), a reflecting telescope that orbits Earth, has returned the clearest images of any optical telescope. The main mirror of the HST is only ninety-four in. (2.4 m.) across, far smaller than that of the largest ground-based reflecting telescopes. Turbulence in the atmosphere makes it impossible for ground-based telescopes to observe objects as clearly as the HST can. HST images of visible light are about five times finer than any produced by ground-based telescopes. Giant telescopes on Earth, however, collect much more light than the HST can. Examples of such giant telescopes include the twin 32-ft (10-m) Keck telescopes in Hawaii and the four 26-ft (8-m) telescopes in the Very Large Telescope array in the Atacama Desert in northern Chile (the nearest city is Antofagasta, Chile). Often astronomers use space and ground-based telescopes in conjunction.
 Astronomers usually share telescopes. Many institutions with large telescopes accept applications from any astronomer who wishes to use the instruments, though others have limited sets of eligible applicants. The institution then divides the available time between successful applicants and assigns each astronomer an observing period. Astronomers can collect data from telescopes remotely. Data from Earth-based telescopes can be sent electronically over computer networks. Data from space-based telescopes reach Earth through radio waves collected by antennas on the ground.
 Gamma rays have the shortest wavelengths. Special telescopes in orbit around Earth, such as the National Aeronautics and Space Administration’s (NASA’s) Compton Gamma-Ray Observatory, gather gamma rays before Earth’s atmosphere absorbs them. X rays, the next shortest wavelengths, also must be observed from space. NASA’s Chandra X-ray Observatory (CXO) is a school-bus-sized spacecraft scheduled to begin studying X rays from orbit in 1999. It is designed to make high-resolution images.
 Ultraviolet light has wavelengths longer than X rays, but shorter than visible light. Ultraviolet telescopes are similar to visible-light telescopes in the way they gather light, but the atmosphere blocks most ultraviolet radiation. Most ultraviolet observations, therefore, must also take place in space. Most of the instruments on the Hubble Space Telescope (HST) are sensitive to ultraviolet radiation. Humans cannot see ultraviolet radiation, but astronomers can create visual images from ultraviolet light by assigning particular colours or shades to different intensities of radiation.
 Infrared astronomers study parts of the infrared spectrum, which consists of electromagnetic waves with wavelengths ranging from just longer than visible light to 1,000 times longer than visible light. Earth’s atmosphere absorbs infrared radiation, so astronomers must collect infrared radiation from places where the atmosphere is very thin, or from above the atmosphere. Observatories for these wavelengths are located on certain high mountaintops or in space. Most infrared wavelengths can be observed only from space. Every warm object emits some infrared radiation. Infrared astronomy is useful because objects that are not hot enough to emit visible or ultraviolet radiation may still emit infrared radiation. Infrared radiation also passes through interstellar and intergalactic gas and dusts more easily than radiation with shorter wavelengths. Further, the brightest part of the spectrum from the farthest galaxies in the universe is shifted into the infrared. The Next Generation Space Telescope, which NASA plans to launch in 2006, will operate especially in the infrared.
 Radio waves have the longest wavelengths. Radio astronomers use giant dish antennas to collect and focus signals in the radio part of the spectrum. These celestial radio signals, often from hot bodies in space or from objects with strong magnetic fields, come through Earth's atmosphere to the ground. Radio waves penetrate dust clouds, allowing astronomers to see into the centre of our galaxy and into the cocoons of dust that surround forming stars.
 Sometimes astronomers study emissions from space that are not electromagnetic radiation. Some of the particles of interest to astronomers are neutrinos, cosmic rays, and gravitational waves. Neutrinos are tiny particles with no electric charge and very little or no mass. The Sun and supernovas emit neutrinos. Most neutrino telescopes consist of huge underground tanks of liquid. These tanks capture a few of the many neutrinos that strike them, while the vast majority of neutrinos pass right through the tanks.
 Cosmic rays are electrically charged particles that come to Earth from outer space at almost the speed of light. They are made up of negatively charged particles called electrons and positively charged nuclei of atoms. Astronomers do not know where most cosmic rays come from, but they use cosmic-ray detectors to study the particles. Cosmic-ray detectors are usually grids of wires that produce an electrical signal when a cosmic ray passes close to them.
 Gravitational waves are a predicted consequence of the general theory of relativity developed by German-born American physicist Albert Einstein. Since the 1960s astronomers have been building detectors for gravitational waves. Older gravitational-wave detectors were huge instruments that surrounded a carefully measured and positioned massive object suspended from the top of the instrument. Lasers trained on the object were designed to measure the object’s movement, which theoretically would occur when a gravitational wave hit the object. At the end of the 20th century, these instruments had picked up no gravitational waves. Gravitational waves should be very weak, and the instruments were probably not yet sensitive enough to register them. In the 1970s and 1980s American physicists Joseph Taylor and Russell Hulse observed indirect evidence of gravitational waves by studying systems of double pulsars. A new generation of gravitational-wave detectors, developed in the 1990s, used interferometers to measure distortions of space that would be caused by passing gravitational waves.
 Some objects emit radiation more strongly in one wavelength than in another, but a set of data across the entire spectrum of electromagnetic radiation is much more useful than observations in any one wavelength. For example, the supernova remnant known as the Crab Nebula has been observed in every part of the spectrum, and astronomers have used all the discoveries together to make a complete picture of how the Crab Nebula is evolving.
 Whether astronomers take data from a ground-based telescope or have data radioed to them from space, they must then analyse the data. Usually the data are handled with the aid of a computer, which can carry out various manipulations the astronomer requests. For example, some of the individual picture elements, or pixels, of a CCD may be more sensitive than others. Consequently, astronomers sometimes take images of blank sky to measure which pixels appear brighter. They can then take these variations into account when interpreting the actual celestial images. Astronomers may write their own computer programs to analyse data or, as is increasingly the case, use certain standard computer programs developed at national observatories or elsewhere.
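 A minimal sketch of that pixel-sensitivity correction, assuming numpy and using made-up arrays in place of real telescope data:

    import numpy as np

    # Divide a raw CCD image by a normalized image of blank sky so that unusually
    # sensitive or insensitive pixels no longer distort the celestial image.
    rng = np.random.default_rng(0)
    raw_image = rng.uniform(900, 1100, size=(4, 4))   # raw counts from a celestial exposure
    blank_sky = rng.uniform(0.9, 1.1, size=(4, 4))    # exposure of blank sky (pixel response)

    flat = blank_sky / blank_sky.mean()               # normalize the sky frame
    corrected = raw_image / flat                      # sensitivity-corrected image
    print(corrected.round(1))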
 Often an astronomer uses observations to test a specific theory. Sometimes, a new experimental capability allows astronomers to study a new part of the electromagnetic spectrum or to see objects in greater detail or through special filters. If the observations do not verify the predictions of a theory, the theory must be discarded or, if possible, modified.
 Up to about 3,000 stars are visible at a time from Earth with the unaided eye, far away from city lights, on a clear night. A view at night may also show several planets and perhaps a comet or a meteor shower. Increasingly, human-made light pollution is making the sky less dark, limiting the number of visible astronomical objects. During the daytime the Sun shines brightly. The Moon and bright planets are sometimes visible early or late in the day but are rarely seen at midday.
 Earth moves in two basic ways: It turns in place, and it revolves around the Sun. Earth turns around its axis, an imaginary line that runs down its centre through its North and South poles. The Moon also revolves around Earth. All of these motions produce day and night, the seasons, the phases of the Moon, and solar and lunar eclipses.
 Earth is about 12,000 km. (about 7,000 mi.) in diameter. As it revolves, or moves in a circle, around the Sun, Earth spins on its axis. This spinning movement is called rotation. Earth’s axis is tilted 23.5° with respect to the plane of its orbit. Each time Earth rotates on its axis, it passes through one day, a cycle of light and dark. Humans artificially divide the day into 24 hours and then divide the hours into 60 minutes and the minutes into 60 seconds.
 Earth revolves around the Sun once every year, or 365.25 days (most people use a 365-day calendar and take care of the extra 0.25 day by adding a day to the calendar every four years, creating a leap year). The orbit of Earth is almost, but not quite, a circle, so Earth is sometimes a little closer to the Sun than at other times. If Earth were upright as it revolved around the Sun, each point on Earth would have exactly twelve hours of light and twelve hours of dark each day. Because Earth is tilted, however, the northern hemisphere sometimes points toward the Sun and sometimes points away from the Sun. This tilt is responsible for the seasons. When the northern hemisphere points toward the Sun, the northernmost regions of Earth see the Sun 24 hours a day. The whole northern hemisphere gets more sunlight and gets it at a more direct angle than the southern hemisphere does during this period, which lasts for half of the year. The second half of this period, when the northern hemisphere points most directly at the Sun, is the northern hemisphere's summer, which corresponds to winter in the southern hemisphere. During the other half of the year, the southern hemisphere points more directly toward the Sun, so it is spring and summer in the southern hemisphere and fall and winter in the northern hemisphere.
 One revolution of the Moon around Earth takes a little more than twenty-seven days seven hours. The Moon rotates on its axis in this same period of time, so the same face of the Moon is always presented to Earth. Over a period a little longer than twenty-nine days twelve hours, the Moon goes through a series of phases, in which the amount of the lighted half of the Moon we see from Earth changes. These phases are caused by the changing angle of sunlight hitting the Moon. (The period of phases is longer than the period of revolution of the Moon, because the motion of Earth around the Sun changes the angle at which the Sun’s light hits the Moon from night to night.)
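 The relationship between these two periods can be made concrete with a short Python sketch; the 27.32-day and 365.25-day figures are standard rounded values consistent with the text.

    # While the Moon completes one orbit, Earth moves along its own orbit, so the Moon
    # needs extra time to return to the same alignment with the Sun (the same phase).
    sidereal_month_days = 27.32   # one revolution of the Moon around Earth
    year_days = 365.25

    synodic_month_days = 1 / (1 / sidereal_month_days - 1 / year_days)
    print(round(synodic_month_days, 2))  # about 29.53 days, the period of the phases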
 The Moon’s orbit around Earth is tilted 5° from the plane of Earth’s orbit. Because of this tilt, when the Moon is at the point in its orbit when it is between Earth and the Sun, the Moon is usually a little above or below the Sun. At that time, the Sun lights the side of the Moon facing away from Earth, and the side of the Moon facing toward Earth is dark. This point in the Moon’s orbit corresponds to a phase of the Moon called the new moon. A quarter moon occurs when the Moon is at right angles to the line formed by the Sun and Earth. The Sun lights the side of the Moon closest to it, and half of that side is visible from Earth, forming a bright half-circle. When the Moon is on the opposite side of Earth from the Sun, the face of the Moon visible from Earth is lit, showing the full moon in the sky.
 Because of the tilt of the Moon's orbit, the Moon usually passes above or below the Sun at new moon and above or below Earth's shadow at full moon. Sometimes, though, the full moon or new moon crosses the plane of Earth's orbit. By a coincidence of nature, even though the Moon is about 400 times smaller than the Sun, it is also about 400 times closer to Earth than the Sun is, so the Moon and Sun look almost the same size from Earth. If the Moon lines up with the Sun and Earth at new moon (when the Moon is between Earth and the Sun), it blocks the Sun’s light from Earth, creating a solar eclipse. If the Moon lines up with Earth and the Sun at the full moon (when Earth is between the Moon and the Sun), Earth’s shadow covers the Moon, making a lunar eclipse.
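 The coincidence of apparent sizes can be checked with a short Python sketch; the diameters and distances below are standard approximate values, not figures from the text.

    import math

    # Apparent (angular) size of the Sun and Moon, using the small-angle approximation.
    sun_diameter_km, sun_distance_km = 1_392_000, 149_600_000
    moon_diameter_km, moon_distance_km = 3_476, 384_400

    for name, diameter, distance in [("Sun", sun_diameter_km, sun_distance_km),
                                     ("Moon", moon_diameter_km, moon_distance_km)]:
        angle_deg = math.degrees(diameter / distance)
        print(name, round(angle_deg, 2), "degrees")   # both are close to half a degree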
 A total solar eclipse is visible from only a small region of Earth. During a solar eclipse, the complete shadow of the Moon that falls on Earth is only about 160 km. (about 100 mi.) wide. As Earth, the Sun, and the Moon move, however, the Moon’s shadow sweeps out a path up to 16,000 km. (10,000 mi.) long. The total eclipse can only be seen from within this path. A total solar eclipse occurs about every eighteen months. Off to the sides of the path of a total eclipse, a partial eclipse, in which the Sun is only partly covered, is visible. Partial eclipses are much less dramatic than total eclipses. The Moon’s orbit around Earth is elliptical, or egg-shaped. The distance between Earth and the Moon varies slightly as the Moon orbits Earth. When the Moon is farther from Earth than usual, it appears smaller and may not cover the entire Sun during an eclipse. In that case a ring, or annulus, of sunlight remains visible, making an annular eclipse. An annular solar eclipse also occurs about every eighteen months. Additional partial solar eclipses are also visible from Earth in between.
 At a lunar eclipse, the Moon passes into Earth's shadow. When the Moon is completely in the shadow, the total lunar eclipse is visible from everywhere on the half of Earth from which the Moon is visible at that time. As a result, more people see total lunar eclipses than see total solar eclipses.
 In an open place on a clear dark night, streaks of light may appear in a random part of the sky about once every ten minutes. These streaks are meteors - bits of rock burning up in Earth's atmosphere. The bits of rock are called meteoroids, and when these bits survive Earth’s atmosphere intact and land on Earth, they are known as meteorites.
 Every month or so, Earth passes through the orbit of a comet. Dust from the comet remains in the comet's orbit. When Earth passes through the band of dust, the dust and bits of rock burn up in the atmosphere, creating a meteor shower. Many more meteors are visible during a meteor shower than on an ordinary night. The most observed meteor shower is the Perseid shower, which occurs each year on August 11th or 12th.
 Humans have picked out landmarks in the sky and mapped the heavens for thousands of years. Maps of the sky helped sailors navigate by the fixed stars. Now astronomers methodically map the sky to produce a universal format for the addresses of stars, galaxies, and other objects of interest.
 Some of the stars in the sky are brighter and more noticeable than others are, and some of these bright stars appear to the eye to be grouped together. Ancient civilizations imagined that groups of stars represented figures in the sky. The oldest known representations of these groups of stars, called constellations, are from ancient Sumer (now Iraq) from about 4000 BC. The constellations recorded by ancient Greeks and Chinese resemble the Sumerian constellations. The northern hemisphere constellations that astronomers recognize today are based on the Greek constellations. Explorers and astronomers developed and recorded the official constellations of the southern hemisphere in the 16th and 17th centuries. The International Astronomical Union (IAU) officially recognizes eighty-eight constellations. The IAU defined the boundaries of each constellation, so the eighty-eight constellations divide the sky without overlapping.
 A familiar group of stars in the northern hemisphere is called the Big Dipper. The Big Dipper is part of an official constellation-Ursa Major, or the Great Bear. Groups of stars that are not official constellations, such as the Big Dipper, are called asterisms. While the stars in the Big Dipper appear in approximately the same part of the sky, they vary greatly in their distance from Earth. This is true for the stars in all constellations or asterisms: the stars making up the group do not really occur close to each other in space; they merely appear together as seen from Earth. The patterns of the constellations are figments of humans’ imagination, and different artists may connect the stars of a constellation in different ways, even when illustrating the same myth.
 Astronomers use coordinate systems to label the positions of objects in the sky, just as geographers use longitude and latitude to label the positions of objects on Earth. Astronomers use several different coordinate systems. The two most widely used are the altazimuth system and the equatorial system. The altazimuth system gives an object’s coordinates with respect to the sky visible above the observer. The equatorial coordinate system designates an object’s location with respect to Earth’s entire night sky, or the celestial sphere.
 One of the ways astronomers give the position of a celestial object is by specifying its altitude and its azimuth. This coordinate system is called the altazimuth system. The altitude of an object is equal to its angle, in degrees, above the horizon. An object at the horizon would have an altitude of 0°, and an object directly overhead would have an altitude of 90°. The azimuth of an object is equal to its angle in the horizontal direction, with north at 0°, east at 90°, south at 180°, and west at 270°. For example, if an astronomer were looking for an object at 23° altitude and 87° azimuth, the astronomer would know to look low in the sky and almost directly east.
 As Earth rotates, astronomical objects appear to rise and set, so their altitudes and azimuths are constantly changing. An object’s altitude and azimuth also vary according to an observer’s location on Earth. Therefore, astronomers almost never use altazimuth coordinates to record an object’s position. Instead, astronomers with altazimuth telescopes translate coordinates from equatorial coordinates to find an object. Telescopes that use an altazimuth mounting system may be simple to set up, but they require many calculated movements to keep them pointed at an object as it moves across the sky. These telescopes fell out of use with the development of the equatorial coordinate and mounting system in the early 1800s. However, computers have made the return to popularity possible for altazimuth systems. Altazimuth mounting systems are simple and inexpensive, and-with computers to do the required calculations and control the motor that moves the telescope-they are practical.
 The equatorial coordinate system is a coordinate system fixed on the sky. In this system, a star keeps the same coordinates no matter what the time is or where the observer is located. The equatorial coordinate system is based on the celestial sphere. The celestial sphere is a giant imaginary globe surrounding Earth. This sphere has north and south celestial poles directly above Earth’s North and South poles. It has a celestial equator, directly above Earth’s equator. Another important part of the celestial sphere is the line that marks the movement of the Sun with respect to the stars throughout the year. This path is called the ecliptic. Because Earth is tilted with respect to its orbit around the Sun, the ecliptic is not the same as the celestial equator. The ecliptic is tilted 23.5° to the celestial equator and crosses the celestial equator at two points on opposite sides of the celestial sphere. The crossing points are called the vernal (or spring) equinox and the autumnal equinox. The vernal equinox and autumnal equinox mark the beginning of spring and fall, respectively. The points at which the ecliptic and celestial equator are farthest apart are called the summer solstice and the winter solstice, which mark the beginning of summer and winter, respectively.
 As Earth rotates on its axis each day, the stars and other distant astronomical objects appear to rise in the eastern part of the sky and set in the west. They seem to travel in circles around Earth’s North or South poles. In the equatorial coordinate system, the celestial sphere turns with the stars (but this movement is really caused by the rotation of Earth). The celestial sphere makes one complete rotation every twenty-three hours fifty-six minutes, which is four minutes shorter than a day measured by the movement of the Sun. A complete rotation of the celestial sphere is called a sidereal day. Because the sidereal day is shorter than a solar day, the stars that an observer sees from any location on Earth change slightly from night to night. The difference between a sidereal day and a solar day occurs because of Earth’s motion around the Sun.
 The equivalent of longitude on the celestial sphere is called right ascension and the equivalent of latitude is declination. Specifying the right ascension of a star is the equivalent, on the sky, of measuring the east-west distance of a place on Earth from the prime meridian, the line that runs through Greenwich, England. Right ascension starts at the vernal equinox. Longitude on Earth is given in degrees, but right ascension is given in units of time-hours, minutes, and seconds. This is because the celestial equator is divided into 24 equal parts, each called an hour of right ascension and corresponding to fifteen degrees, rather than into 360 individual degrees. Each hour is made up of 60 minutes, each of which is divided into 60 seconds. Measuring right ascension in units of time makes it easier for astronomers to determine the best time for observing an object. A particular line of right ascension will be at its highest point in the sky above a particular place on Earth four minutes earlier each day, so keeping track of the movement of the celestial sphere with an ordinary clock would be complicated. Astronomers have special clocks that keep sidereal time (24 sidereal hours are equal to twenty-three hours fifty-six minutes of familiar solar time). Astronomers compare the current sidereal time with the right ascension of the object they wish to view. The object will be highest in the sky when the sidereal time equals the right ascension of the object.
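 That rule of thumb translates into a one-line calculation; a minimal Python sketch with a made-up sidereal time (the right ascension of Sirius, about 6h 45m, is quoted in the next paragraph):

    # An object is highest when the local sidereal time equals its right ascension,
    # so the wait, in sidereal hours, is the difference wrapped into the range 0-24.
    def hours_until_highest(right_ascension_h, sidereal_time_h):
        return (right_ascension_h - sidereal_time_h) % 24

    print(hours_until_highest(6.75, 22.0))  # Sirius, if the sidereal time is 22h: 8.75 hours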
 The direction perpendicular to right ascension - and the equivalent to latitude on Earth - is declination. Declination is measured in degrees. These degrees are divided into arcminutes and arcseconds. One arcminute is equal to 1/60 of a degree, and one arcsecond is equal to 1/60 of an arcminute, or 1/3600 of a degree. The celestial equator is at declination 0°, the north celestial pole is at declination +90°, and the south celestial pole has a declination of -90°. Each star has a right ascension and a declination that mark its position in the sky. The brightest star, Sirius, for example, has right ascension six hours forty-five minutes (abbreviated as 6h 45m) and declination -16° 43′.
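 A short Python sketch of the degree-arcminute-arcsecond conversion just described, using the declination of Sirius as an example:

    # 1 arcminute = 1/60 degree; 1 arcsecond = 1/3600 degree.
    def to_dms(decimal_degrees):
        sign = "-" if decimal_degrees < 0 else "+"
        total_arcsec = abs(decimal_degrees) * 3600
        degrees, remainder = divmod(total_arcsec, 3600)
        arcminutes, arcseconds = divmod(remainder, 60)
        return f"{sign}{int(degrees)}° {int(arcminutes)}′ {arcseconds:.0f}″"

    print(to_dms(-16.7167))  # about -16° 43′, the declination of Sirius quoted above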
 Stars are so far away from Earth that the main star motion we see results from Earth’s rotation. Stars do move in space, however, and these proper motions slightly change the coordinates of the nearest stars over time. The effects of the Sun and the Moon on Earth also cause slight changes in Earth’s axis of rotation. These changes, called precession, cause a slow drift in right ascension and declination. To account for precession, astronomers redefine the celestial coordinates every fifty years or so.
 Solar systems, both our own and those located around other stars, are a major area of research for astronomers. A solar system consists of a central star orbited by planets or smaller rocky bodies. The gravitational force of the star holds the system together. In our solar system, the central star is the Sun. It holds all the planets, including Earth, in their orbits and provides light and energy necessary for life. Our solar system is just one of many. Astronomers are just beginning to be able to study other solar systems.
 Our solar system contains the Sun, nine planets (of which Earth is third from the Sun), and the planets’ satellites. It also contains asteroids, comets, and interplanetary dust and gases.
 Until the end of the 18th century, humans knew of five planets-Mercury, Venus, Mars, Jupiter, and Saturn-in addition to Earth. When viewed without a telescope, planets appear to be dots of light in the sky. They shine steadily, while stars seem to twinkle. Twinkling results from turbulence in Earth's atmosphere. Stars are so far away that they appear as tiny points of light. A moment of turbulence can change that light for a fraction of a second. Even though they look the same size as stars to unaided human eyes, planets are close enough that they take up more space in the sky than stars do. The disks of planets are big enough to average out variations in light caused by turbulence and therefore do not twinkle.
 Between 1781 and 1930, astronomers found three more planets-Uranus, Neptune, and Pluto. This brought the total number of planets in our solar system to nine. In order of increasing distance from the Sun, the planets in our solar system are Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto.
 Astronomers call the inner planets-Mercury, Venus, Earth, and Mars-the terrestrial planets. Terrestrial (from the Latin word terra, meaning ‘Earth’) planets are Earthlike in that they have solid, rocky surfaces. The next group of planets-Jupiter, Saturn, Uranus, and Neptune-is called the Jovian planets, or the giant planets. The word Jovian has the same Latin root as the word Jupiter. Astronomers call these planets the Jovian planets because they resemble Jupiter in that they are giant, massive planets made almost entirely of gas. The mass of Jupiter, for example, is 318 times the mass of Earth. The Jovian planets have no solid surfaces, although they probably have rocky cores several times more massive than Earth. Rings of chunks of ice and rock surround each of the Jovian planets. The rings around Saturn are the most familiar.
 Pluto, the outermost planet, is tiny, with a mass about one five-hundredth the mass of Earth. Pluto seems out of place, with its tiny, solid body out beyond the giant planets. Many astronomers believe that Pluto is really just the largest, or one of the largest, of a group of icy objects in the outer solar system. These objects orbit in a part of the solar system called the Kuiper Belt. Even if astronomers decide that Pluto belongs to the Kuiper Belt objects, it will probably still be called a planet for historical reasons.
 Most of the planets have moons, or satellites. Earth's Moon has a diameter about one-fourth the diameter of Earth. Mars has two tiny chunks of rock, Phobos and Deimos, each only about 10 km (about 6 mi) across. Jupiter has at least seventeen satellites. The largest four, known as the Galilean satellites, are Io, Europa, Ganymede, and Callisto. Ganymede is even larger than the planet Mercury. Saturn has at least eighteen satellites. Saturn's largest moon, Titan, is also larger than the planet Mercury and is enshrouded by a thick, opaque, smoggy atmosphere. Uranus has at least seventeen moons, and Neptune has at least eight moons. Pluto has one moon, called Charon. Charon is more than half as big as Pluto.
 Comets and asteroids are rocky and icy bodies that are smaller than planets. The distinction between comets, asteroids, and other small bodies in the solar system is a little fuzzy, but generally a comet is icier than an asteroid and has a more elongated orbit. The orbit of a comet takes it close to the Sun, then back into the outer solar system. When comets near the Sun, some of their ice turns from solid material into gas, releasing some of their dust. Comets have long tails of glowing gas and dust when they are near the Sun. Asteroids are rockier bodies and usually have orbits that keep them always at about the same distance from the Sun.
 Both comets and asteroids have their origins in the early solar system. While the solar system was forming, many small, rocky objects called planetesimals condensed from the gas and dust of the early solar system. Millions of planetesimals remain in orbit around the Sun. A large spherical cloud of such objects out beyond Pluto forms the Oort cloud. The objects in the Oort cloud are considered comets. When our solar system passes close to another star or drifts closer than usual to the centre of our galaxy, the change in gravitational pull may disturb the orbit of one of the icy comets in the Oort cloud. As this comet falls toward the Sun, the ice turns into vapour, freeing dust from the object. The gas and dust form the tail or tails of the comet. The gravitational pull of large planets such as Jupiter or Saturn may swerve the comet into an orbit closer to the Sun. The time needed for a comet to make a complete orbit around the Sun is called the comet’s period. Astronomers believe that comets with periods longer than about 200 years come from the Oort Cloud. Short-period comets, those with periods less than about 200 years, probably come from the Kuiper Belt, a ring of planetesimals beyond Neptune. The material in comets is probably from the very early solar system, so astronomers study comets to find out more about our solar system’s formation.
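 One way to see what the 200-year boundary implies is Kepler's third law, which is not stated in the text but is standard: for a body orbiting the Sun, the square of the period in years equals the cube of the semi-major axis in astronomical units. Below is a minimal Python sketch, with the classification rule and example values chosen purely for illustration.

def semi_major_axis_au(period_years):
    # Kepler's third law for orbits around the Sun: P^2 = a^3
    # (P in years, a in astronomical units), solved for a.
    return period_years ** (2.0 / 3.0)

def likely_source(period_years):
    # Illustrative rule of thumb based on the roughly 200-year boundary quoted above.
    return "Oort cloud (long-period)" if period_years > 200 else "Kuiper Belt (short-period)"

print(semi_major_axis_au(200.0))   # about 34 AU: a 200-year comet reaches well past Neptune
print(likely_source(76.0))         # Halley's Comet, with its roughly 76-year period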
 When the solar system was forming, some of the planetesimals came together more toward the centre of the solar system. Gravitational forces from the giant planet Jupiter prevented these planetesimals from forming full-fledged planets. Instead, the planetesimals broke up to create thousands of minor planets, or asteroids, that orbit the Sun. Most of them are in the asteroid belt, between the orbits of Mars and Jupiter, but thousands are in orbits that come closer to Earth or even cross Earth's orbit. Scientists are increasingly aware of potential catastrophes if any of the largest of these asteroids hits Earth. Perhaps 2,000 asteroids larger than 1 km. (0.6 mi.) in diameter are potential hazards.
 The Sun is the nearest star to Earth and is the centre of the solar system. It is only eight light-minutes away from Earth, meaning light takes only eight minutes to travel from the Sun to Earth. The next nearest star is four light-years away, so light from this star, Proxima Centauri (part of the triple star Alpha Centauri), takes four years to reach Earth. The Sun's closeness means that the light and other energy we get from the Sun dominate Earth’s environment and life. The Sun also provides a way for astronomers to study stars. They can see details and layers of the Sun that are impossible to see on more distant stars. In addition, the Sun provides a laboratory for studying hot gases held in place by magnetic fields. Scientists would like to create similar conditions (hot gases contained by magnetic fields) on Earth. Creating such environments could be useful for studying basic physics.
 The Sun produces its energy by fusing hydrogen into helium in a process called nuclear fusion. In nuclear fusion, two atomic nuclei merge to form a heavier nucleus and release energy. The Sun and stars of similar mass start off with enough hydrogen to shine for about ten billion years. The Sun is less than halfway through its lifetime.
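 The ten-billion-year figure can be reproduced with a rough back-of-the-envelope estimate. The Python sketch below uses standard textbook assumptions that are not stated in the text: only about 10 percent of the Sun's hydrogen is ever fused in the core, and fusion converts about 0.7 percent of that mass into energy.

# Rough, order-of-magnitude estimate of the Sun's fusion lifetime.
M_SUN = 1.989e30        # mass of the Sun in kilograms
L_SUN = 3.8e26          # luminosity of the Sun in watts
C = 3.0e8               # speed of light in metres per second

core_fraction = 0.10    # assumed: only about 10% of the hydrogen is ever fused
efficiency = 0.007      # hydrogen-to-helium fusion converts about 0.7% of mass to energy

energy_available = core_fraction * efficiency * M_SUN * C ** 2   # joules
lifetime_seconds = energy_available / L_SUN
print(lifetime_seconds / 3.15e7)   # roughly 1e10, i.e. about ten billion years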
 Although most telescopes are used mainly to collect the light of faint objects so that they can be studied, telescopes for planetary and other solar system studies are also used to magnify images. Astronomers use some of the observing time of several important telescopes for planetary studies. Overall, planetary astronomers must apply and compete for observing time on telescopes with astronomers seeking to study other objects. Some planetary objects can be studied as they pass in front of, or occult, distant stars. The atmosphere of Neptune's moon Triton and the shapes of asteroids can be investigated in this way, for example. The fields of radio and infrared astronomy are useful for measuring the temperatures of planets and satellites. Ultraviolet astronomy can help astronomers study the magnetic fields of planets.
 During the space age, scientists have developed telescopes and other devices, such as instruments to measure magnetic fields or space dust, that can leave Earth's surface and travel close to other objects in the solar system. Robotic spacecraft have visited all of the planets in the solar system except Pluto. Some missions have targeted specific planets and spent much time studying a single planet, and some spacecraft have flown past a number of planets.
 Astronomers use different telescopes to study the Sun than they use for nighttime studies because of the extreme brightness of the Sun. Telescopes in space, such as the Solar and Heliospheric Observatory (SOHO) and the Transition Region and Coronal Explorer (TRACE), are able to study the Sun in regions of the spectrum other than visible light. X-rays, ultraviolet, and radio waves from the Sun are especially interesting to astronomers. Studies in various parts of the spectrum give insight into giant flows of gas in the Sun, into how the Sun's energy leaves the Sun to travel to Earth, and into what the interior of the Sun is like. Astronomers also study solar-terrestrial relations: the relation of activity on the Sun with magnetic storms and other effects on Earth. Some of these storms and effects can affect radio reception, cause electrical blackouts, or damage satellites in orbit.
 Our solar system began forming about five billion years ago, when a cloud of gas and dust between the stars in our Milky Way Galaxy began contracting. A nearby supernova (an exploding star) may have started the contraction, but most astronomers believe a random change in density in the cloud caused the contraction. Once the cloud, known as the solar nebula, began to contract, the contraction occurred faster and faster. The gravitational energy released by this contraction heated the solar nebula. As the cloud became smaller, it began to spin faster, much as a spinning skater will spin faster by pulling in his or her arms. This spin kept the nebula from forming a sphere; instead, it settled into a disk of gas and dust.
 In this disk, small regions of gas and dust began to draw closer and stick together. The objects that resulted, which were usually less than 500 km (300 mi) across, are the planetesimals. Eventually, some planetesimals stuck together and grew to form the planets. Scientists have made computer models of how they believe the early solar system behaved. The models show that it is usual for a solar system to produce one or two huge planets like Jupiter and several other, much smaller planets.
 The largest region of gas and dust wound up in the centre of the nebula and formed the protosun (proto is Greek for ‘before’ and is used to distinguish between an object and its forerunner). The increasing temperature and pressure in the middle of the protosun vaporized the dust and eventually allowed nuclear fusion to begin, marking the formation of the Sun. The young Sun gave off a strong solar wind that drove off most of the lighter elements, such as hydrogen and helium, from the inner planets. The inner planets then solidified and formed rocky surfaces. The solar wind lost strength. Jupiter’s gravitational pull was strong enough to keep its shroud of hydrogen and helium gas. Saturn, Uranus, and Neptune also kept their layers of light gases.
 The theory of solar system formation described above accounts for the appearance of the solar system as we know it. Examples of this appearance include the fact that the planets all orbit the Sun in the same direction and that almost all the planets rotate on their axes in the same direction. The recent discoveries of distant solar systems with different properties could lead to modifications in the theory, however.
 Studies in the visible, the infrared, and the shortest radio wavelengths have revealed disks around several young stars in our galaxy. One such object, Beta Pictoris (about sixty-two light-years from Earth), has revealed a warp in the disk that could be a sign of planets in orbit. Astronomers are hopeful that, in the cases of these young stars, they are studying the early stages of solar system formation.
 Although astronomers have long assumed that many other stars have planets, they were unable to detect these other solar systems until recently. Planets orbiting stars other than the Sun are called extrasolar planets. Planets are small and dim compared with stars, so they are lost in the glare of their parent stars and are invisible to direct observation with telescopes.
 Astronomers have tried to detect other solar systems by searching for the way a planet affects the movement of its parent star. The gravitational attraction between a planet and its star pulls the star slightly toward the planet, so the star wobbles slightly as the planet orbits it. Throughout the mid- and late 1900s, several observatories tried to detect wobbles in the nearest stars by watching the stars' movement across the sky. Wobbles were reported in several stars, but later observations showed that the results were false.
 In the early 1990s, studies of a pulsar revealed at least two planets orbiting it. Pulsars are compact stars that give off pulses of radio waves at very regular intervals. The pulsar, designated PSR 1257+12, is about 1,000 light-years from Earth. This pulsar's pulses sometimes came a little early and sometimes a little late in a periodic pattern, revealing that an unseen object was pulling the pulsar toward and away from Earth. The environment of a pulsar, which emits X rays and other strong radiation that would be harmful to life on Earth, is so extreme that these objects would have little resemblance to planets in our solar system.
 The wobbling of a star changes the star's light that reaches Earth. When the star moves away from Earth, even slightly, each wave of light must travel farther to Earth than the wave before it. This increases the distance between waves (called the wavelength) as the waves reach Earth. When a star's planet pulls the star closer to Earth, each successive wavefront has less distance to travel to reach Earth. This shortens the wavelength of the light that reaches Earth. This effect is called the Doppler effect. No star moves fast enough for the change in wavelength to result in a noticeable change in colour, which depends on wavelength, but the changes in wavelength can be measured with precise instruments. Because the planet's effect on the star is very small, astronomers must analyse the starlight carefully to detect a shift in wavelength. They do this by first using a technique called spectroscopy to separate the white starlight into its component colours, as water vapour does to sunlight in a rainbow. Stars emit light over a continuous range of wavelengths. The range of wavelengths a star emits is called the star's spectrum. This spectrum has dark lines, called absorption lines, at wavelengths at which atoms in the outermost layers of the star absorb light.
 Astronomers know what the exact wavelength of each absorption line is for a star that is not moving. By seeing how far the movement of a star shifts the absorption lines in its spectrum, astronomers can calculate how fast the star is moving. If the motion fits the model of the effect of a planet, astronomers can calculate the mass of the planet and how close it is to the star. These calculations can provide only a lower limit on the planet's mass, because astronomers cannot tell at what angle the planet orbits the star. Astronomers need to know the angle at which the planet orbits the star to calculate the planet's mass accurately. Because of this uncertainty, some of the giant extrasolar planets may actually be failed stars called brown dwarfs rather than true planets. Most astronomers believe, however, that many of the suspected planets are true planets.
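 The wavelength shift described above translates into a speed through the non-relativistic Doppler relation v = c Δλ / λ. The sketch below is only an illustration; the line wavelength and shift are made-up numbers of roughly the size planet hunters deal with.

C = 3.0e8   # speed of light, metres per second

def radial_velocity(rest_wavelength_nm, observed_wavelength_nm):
    # Non-relativistic Doppler effect: a positive result means the star is receding.
    shift = observed_wavelength_nm - rest_wavelength_nm
    return C * shift / rest_wavelength_nm

# Hypothetical example: an absorption line with rest wavelength 500 nm,
# observed shifted by 0.0001 nm, corresponds to a wobble of about 60 m/s.
print(radial_velocity(500.0, 500.0001))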
 Between 1995 and 1999 astronomers discovered more than a dozen extrasolar planets. Astronomers now know of far more planets outside our solar system than inside it. Most of these planets, surprisingly, are more massive than Jupiter and orbit so close to their parent stars that some of them have 'years' (the time it takes to orbit the parent star once) as short as only a few days on Earth. These solar systems are so different from our solar system that astronomers are still trying to reconcile them with the current theory of solar system formation. Some astronomers suggest that the giant extrasolar planets formed much farther away from their stars and were later thrown into the inner solar systems by some gravitational interaction.
 Stars are an important topic of astronomical research. Stars are balls of gas that shine or used to shine because of nuclear fusion in their cores. The most familiar star is the Sun. The nuclear fusion in stars produces a force that pushes the material in a star outward. However, the gravitational attraction of the star’s material for itself pulls the material inward. A star can remain stable as long as the outward pressure and gravitational force balance. The properties of a star depend on its mass, its temperature, and its stage in evolution.
 Astronomers study stars by measuring their brightness or, with more difficulty, their distances from Earth. They measure the 'colour' of a star (the differences in the star's brightness from one part of the spectrum to another) to determine its temperature. They also study the spectrum of a star's light to determine not only the temperature, but also the chemical makeup of the star's outer layers.
 Many different types of stars exist. Some types of stars are really just different stages of a star’s evolution. Some types are different because the stars formed with much more or much less mass than other stars, or because they formed close to other stars. The Sun is a type of star known as a main-sequence star. Eventually, main-sequence stars such as the Sun swell into giant stars and then evolve into tiny, dense, white dwarf stars. Main-sequence stars and giants have a role in the behaviour of most variable stars and novas. A star much more massive than the Sun will become a supergiant star, then explode as a supernova. A supernova may leave behind a neutron star or a black hole.
 In about 1910 Danish astronomer Ejnar Hertzsprung and American astronomer Henry Norris Russell independently worked out a way to graph basic properties of stars. On the horizontal axis of their graphs, they plotted the temperatures of stars. On the vertical axis, they plotted the brightness of stars in a way that allowed the stars to be compared. (One plotted the absolute brightness, or absolute magnitude, of a star, a measurement of brightness that takes into account the distance of the star from Earth. The other plotted stars in a nearby galaxy, all about the same distance from Earth.)
 On an H-R diagram, the brightest stars are at the top and the hottest stars are at the left. Hertzsprung and Russell found that most stars fell on a diagonal line across the H-R diagram from upper left to lower right. This line is called the main sequence. The diagonal line of main-sequence stars indicates that temperature and brightness of these stars are directly related. The hotter a main-sequence star is, the brighter it is. The Sun is a main-sequence star, located in about the middle of the graph. More faint, cool stars exist than hot, bright ones, so the Sun is brighter and hotter than most of the stars in the universe.
 At the upper right of the H-R diagram, above the main sequence, stars are brighter than main-sequence stars of the same colour. The only way stars of a certain colour can be brighter than other stars of the same colour is if the brighter stars are also bigger. Bigger stars are not necessarily more massive, but they do have larger diameters. Stars that fall in the upper right of the H-R diagram are known as giant stars or, for even brighter stars, supergiant stars. Supergiant stars have both larger diameters and larger masses than giant stars.
 Giant and supergiant stars represent stages in the lives of stars after they have burned most of their internal hydrogen fuel. Stars swell as they move off the main sequence, becoming giants and, for more massive stars, supergiants.
 A few stars fall in the lower left portion of the H-R diagram, below the main sequence. Just as giant stars are larger and brighter than main-sequence stars, these stars are smaller and dimmer. These smaller, dimmer stars are hot enough to be white or blue-white in colour and are known as white dwarfs.
 White dwarf stars are only about the size of Earth. They represent stars with about the mass of the Sun that have burned as much hydrogen as they can. The gravitational force of a white dwarf’s mass is pulling the star inward, but electrons in the star resist being pushed together. The gravitational force is able to pull the star into a much denser form than it was in when the star was burning hydrogen. The final stage of life for all stars like the Sun is the white dwarf stage.
 Many stars vary in brightness over time. These variable stars come in a variety of types. One important type is called a Cepheid variable, named after the star Delta Cephei, which is a prime example of the type. These stars vary in brightness as they swell and contract over a period of weeks or months. Their average brightness depends on how long the period of variation takes. Thus astronomers can determine how intrinsically bright the star is merely by measuring the length of the period. By comparing how intrinsically bright these variable stars are with how bright they look from Earth, astronomers can calculate how far away these stars are from Earth. Since they are giant stars and are very bright, Cepheid variables in other galaxies are visible from Earth. Studies of Cepheid variables tell astronomers how far away these galaxies are and are very useful for determining the distance scale of the universe. The Hubble Space Telescope (HST) can determine the periods of Cepheid stars in galaxies farther away than ground-based telescopes can see. Astronomers are developing a more accurate idea of the distance scale of the universe with HST data.
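 The logic of turning a Cepheid's period into a distance can be sketched in a few lines. The period-luminosity calibration used below (absolute magnitude roughly -2.8 times the base-10 logarithm of the period in days, minus 1.4) is an assumed illustrative relation, not a figure from the text; the distance then follows from the standard distance-modulus formula.

import math

def cepheid_absolute_magnitude(period_days):
    # Assumed, illustrative period-luminosity relation (coefficients approximate).
    return -2.8 * math.log10(period_days) - 1.4

def distance_parsecs(apparent_magnitude, absolute_magnitude):
    # Distance modulus: m - M = 5 log10(d) - 5, solved for d in parsecs.
    return 10 ** ((apparent_magnitude - absolute_magnitude + 5.0) / 5.0)

# Hypothetical Cepheid: 10-day period, apparent magnitude 24
M = cepheid_absolute_magnitude(10.0)   # about -4.2
print(distance_parsecs(24.0, M))       # about 4 million parsecs, i.e. a distant galaxy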
 Cepheid variables are only one type of variable star. Stars called long-period variables vary in brightness as they contract and expand, but these stars are not as regular as Cepheid variables. Mira, a star in the constellation Cetus (the whale), is a prime example of a long-period variable star. Variable stars called eclipsing binary stars are really pairs of stars. Their brightness varies because one member of the pair appears to pass in front of the other, as seen from Earth. Variable stars called R Coronae Borealis stars vary because they occasionally give off clouds of carbon dust that dim the stars.
 Sometimes stars brighten drastically, becoming as much as 100 times brighter than they were. These stars are called novas (Latin for ‘new stars’). They are not really new, just much brighter than they were earlier. A nova is a binary, or double, star in which one member is a white dwarf and the other is a giant or supergiant. Matter from the large star falls onto the small star. After a thick layer of the large star’s atmosphere has collected on the white dwarf, the layer burns off in a nuclear fusion reaction. The fusion produces a huge amount of energy, which, from Earth, appears as the brightening of the nova. The nova gradually returns to its original state, and material from the large star again begins to collect on the white dwarf.
 Sometimes stars brighten many times more drastically than novas do. A star that had been too dim to see can become one of the brightest stars in the sky. These stars are called supernovas. Sometimes supernovas that occur in other galaxies are so bright that, from Earth, they appear as bright as their host galaxy.
 There are two types of supernovas. One type is an extreme case of a nova, in which matter falls from a giant or supergiant companion onto a white dwarf. In the case of a supernova, the white dwarf gains so much fuel from its companion that the star increases in mass until strong gravitational forces cause it to become unstable. The star collapses and the core explodes, vaporizing much of the white dwarf and producing an immense amount of light. Only bits of the white dwarf remain after this type of supernova occurs.
 The other type of supernova occurs when a supergiant star uses up all its nuclear fuel in nuclear fusion reactions. The star uses up its hydrogen fuel, but the core is hot enough that it provides the initial energy necessary for the star to begin 'burning' helium, then carbon, and then heavier elements through nuclear fusion. The process stops when the core is mostly iron, which is too heavy for the star to 'burn' in a way that gives off energy. With no such fuel left, the inward gravitational attraction of the star's material for itself has no outward balancing force, and the core collapses. As it collapses, the core releases a shock wave that tears apart the star's atmosphere. The core continues collapsing until it forms either a neutron star or a black hole, depending on its mass.
 Only a handful of supernovas are known in our galaxy. The last Milky Way supernova seen from Earth was observed in 1604. In 1987 astronomers observed a supernova in the Large Magellanic Cloud, one of the Milky Way's satellite galaxies. This supernova became bright enough to be visible to the unaided eye and is still under careful study from telescopes on Earth and from the Hubble Space Telescope. A supernova in the process of exploding emits X-ray, ultraviolet, and radio radiation, and studies in these parts of the spectrum are especially useful for astronomers studying supernova remnants.
 Neutron stars are the collapsed cores sometimes left behind by supernova explosions. Pulsars are a special type of neutron star. Pulsars and neutron stars form when the remnant of a star left after a supernova explosion collapses until it is about 10 km (about 6 mi) in radius. At that point, the neutrons (electrically neutral atomic particles) of the star resist being pressed together further. When the force produced by the neutrons balances the gravitational force, the core stops collapsing. At that point, the star is so dense that a teaspoonful has the mass of a billion metric tons.
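 The teaspoon figure can be checked with a quick calculation. The sketch below assumes a neutron star of 1.4 solar masses, a typical value that the text does not state, packed into the 10 km radius quoted above.

import math

M_SUN = 1.989e30                      # kilograms
mass = 1.4 * M_SUN                    # assumed typical neutron-star mass
radius = 10_000.0                     # metres (about 10 km, as above)

volume = (4.0 / 3.0) * math.pi * radius ** 3
density = mass / volume               # roughly 7e17 kilograms per cubic metre

teaspoon = 5e-6                       # a teaspoon is about 5 millilitres = 5e-6 cubic metres
print(density * teaspoon / 1000.0)    # mass of a teaspoonful in metric tons: billions of tons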
 Neutron stars become pulsars when the magnetic field of a neutron star directs a beam of radio waves out into space. The star is so small that it rotates from one to a few hundred times per second. As the star rotates, the beam of radio waves sweeps out a path in space. If Earth is in the path of the beam, radio astronomers see the rotating beam as periodic pulses of radio waves. This pulsing is the reason these stars are called pulsars.
 Some neutron stars are in binary systems with an ordinary star neighbour. The gravitational pull of a neutron star pulls material off its neighbour. The infall of this material onto the neutron star heats it, causing it to emit X-rays. The neutron star's magnetic field directs the X-rays into a beam that sweeps into space and may be detected from Earth. Astronomers call these stars X-ray pulsars.
 Gamma-ray spacecraft detect bursts of gamma rays about once a day. The bursts come from sources in distant galaxies, so they must be extremely powerful for us to be able to detect them. A leading model used to explain the bursts is the merger of two neutron stars in a distant galaxy, producing a hot fireball. A few such explosions have been seen and studied with the Hubble and Keck telescopes.
 Black holes are objects that are so massive and dense that their immense gravitational pull does not even let light escape. If the core left over after a supernova explosion has a mass of more than about five times that of the Sun, the force holding up the neutrons in the core is not large enough to balance the inward gravitational force. No outward force is large enough to resist the gravitational force. The core of the star continues to collapse. When the core's mass is sufficiently concentrated, the gravitational force of the core is so strong that nothing, not even light, can escape it. The gravitational force is so strong that classical physics no longer applies, and astronomers use Einstein's general theory of relativity to explain the behaviour of light and matter under such strong gravitational forces. According to general relativity, space around the core becomes so warped that nothing can escape, creating a black hole. A star with a mass ten times the mass of the Sun would become a black hole if it were compressed to 90 km (60 mi) or less in diameter.
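 The size scale involved comes from the Schwarzschild radius, r = 2GM/c², the radius below which not even light escapes a collapsed mass. The sketch below simply evaluates that formula for a ten-solar-mass core; it gives a radius of roughly 30 km, the same order of magnitude as the figure quoted above.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 3.0e8            # speed of light, metres per second
M_SUN = 1.989e30     # kilograms

def schwarzschild_radius_km(mass_kg):
    # Radius below which nothing, not even light, can escape a non-rotating mass.
    return 2.0 * G * mass_kg / C ** 2 / 1000.0

print(schwarzschild_radius_km(10 * M_SUN))   # roughly 30 km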
 Astronomers have various ways of detecting black holes. When a black hole is in a binary system, matter from the companion star spirals into the black hole, forming a disk of gas around it. The disk becomes so hot that it gives off X rays that astronomers can detect from Earth. Astronomers use X-ray telescopes in space to find X-ray sources, and then they look for signs that an unseen object of more than about five times the mass of the Sun is causing gravitational tugs on a visible object. By 1999 astronomers had found about a dozen potential black holes.
 The basic method that astronomers use to find the distance of a star from Earth uses parallax. Parallax is the change in apparent position of a distant object when viewed from different places. For example, imagine a tree standing in the centre of a field, with a row of buildings at the edge of the field behind the tree. If two observers stand at the two front corners of the field, the tree will appear in front of a different building for each observer. Similarly, a nearby star's position appears different when seen from different angles.
 Parallax also allows human eyes to judge distance. Each eye sees an object from a different angle. The brain compares the two pictures to judge the distance to the object. Astronomers use the same idea to calculate the distance to a star. Stars are very far away, so astronomers must look at a star from two locations as far apart as possible to get a measurement. The movement of Earth around the Sun makes this possible. By taking measurements six months apart from the same place on Earth, astronomers take measurements from locations separated by the diameter of Earth's orbit. That is a separation of about 300 million km (186 million mi). The nearest stars will appear to shift slightly with respect to the background of more distant stars. Even so, the greatest stellar parallax is only about 0.77 seconds of arc, an amount about 4,600 times smaller than a single degree. Astronomers calculate a star's distance in parsecs by dividing one by the parallax in arcseconds. Distances of stars are usually measured in parsecs. A parsec is 3.26 light-years, and a light-year is the distance that light travels in a year, or about 9.5 trillion km (5.9 trillion mi). Proxima Centauri, the Sun's nearest neighbour, has a parallax of 0.77 seconds of arc. This measurement indicates that Proxima Centauri's distance from Earth is about 1.3 parsecs, or 4.2 light-years. Because Proxima Centauri is the Sun's nearest neighbour, it has a larger parallax than any other star.
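 The division described above, one divided by the parallax, can be written as a one-line calculation; the sketch below simply reproduces the Proxima Centauri figures quoted in the text.

def distance_from_parallax(parallax_arcseconds):
    # A star with a parallax of one arcsecond is, by definition, one parsec away.
    return 1.0 / parallax_arcseconds      # distance in parsecs

LIGHT_YEARS_PER_PARSEC = 3.26

d_parsecs = distance_from_parallax(0.77)          # Proxima Centauri
print(d_parsecs)                                  # about 1.3 parsecs
print(d_parsecs * LIGHT_YEARS_PER_PARSEC)         # about 4.2 light-years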
 Astronomers can measure stellar parallaxes for stars up to about 500 light-years away, which is only about 2 percent of the distance to the centre of our galaxy. Beyond that distance, the parallax angle is too small to measure.
 A European Space Agency spacecraft named Hipparcos (an acronym for High Precision Parallax Collecting Satellite), launched in 1989, gave a set of accurate parallaxes across the sky that was released in 1997. This set of measurements has provided a uniform database of stellar distances for more than 100,000 stars and a somewhat less accurate database for more than one million stars. These parallax measurements provide the basis for measurements of the distance scale of the universe. Hipparcos data are leading to more accurate age calculations for the universe and for objects in it, especially globular clusters of stars.
 Astronomers use a star's light to determine the star's temperature, composition, and motion. Astronomers analyse a star's light by looking at its intensity at different wavelengths. Blue light has the shortest visible wavelengths, at about 400 nanometres. (A nanometre, abbreviated 'nm', is one billionth of a metre, or about one twenty-five-millionth of an inch.) Red light has the longest visible wavelengths, at about 650 nm. A law of radiation known as Wien's displacement law (developed by German physicist Wilhelm Wien) links the wavelength at which an object gives out the most energy to the object's temperature. A star like the Sun, whose surface temperature is about 6000 K (about 5730°C or about 10,350°F), gives off the most radiation in yellow-green wavelengths, with decreasing amounts in shorter and longer wavelengths. Astronomers put filters of different standard colours on telescopes to allow only light of a particular colour from a star to pass. In this way, astronomers determine the brightness of a star at particular wavelengths. From this information, astronomers can use Wien's law to determine the star's surface temperature.
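 Wien's displacement law can be written as peak wavelength = b / T, where b is about 2.9 x 10^-3 metre-kelvins, a standard physical constant not given in the text. A minimal sketch:

WIEN_CONSTANT = 2.898e-3     # metre-kelvins

def peak_wavelength_nm(temperature_kelvin):
    # Wien's displacement law: hotter objects peak at shorter wavelengths.
    return WIEN_CONSTANT / temperature_kelvin * 1e9   # result in nanometres

print(peak_wavelength_nm(6000))    # about 480 nm, near the middle of the visible range
print(peak_wavelength_nm(3000))    # about 970 nm, in the infrared (a cool M star)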
 Astronomers can see the different wavelengths of light of a star in more detail by looking at its spectrum. The continuous rainbow of colour of a star's spectrum is crossed by dark lines, or spectral lines. In the early 19th century, German physicist Josef Fraunhofer identified such lines in the Sun's spectrum, and they are still known as Fraunhofer lines. American astronomer Annie Jump Cannon divided stars into several categories by the appearance of their spectra. She labelled them with capital letters according to how dark their hydrogen spectral lines were. Later astronomers reordered these categories according to decreasing temperature. The categories are O, B, A, F, G, K, and M, where O stars are the hottest and M stars are the coolest. The Sun is a G star. An additional spectral type, L stars, was suggested in 1998 to accommodate some cool stars studied using new infrared observational capabilities. Detailed study of spectral lines shows the physical conditions in the atmospheres of stars. Careful study of spectral lines shows that some stars have broader lines than others of the same spectral type. The broad lines indicate that the outer layers of these stars are more diffuse, meaning that these layers are larger, but spread more thinly, than the outer layers of other stars. Stars with large diffuse atmospheres are called giants. Giant stars are not necessarily more massive than other stars-the outer layers of giant stars are just more spread out.
 Many stars have thousands of spectral lines from iron and other elements near iron in the periodic table. Other stars of the same temperature have very few spectral lines from such elements. Astronomers interpret these findings to mean that two different populations of stars exist. Some formed long ago, before supernovas produced the heavy elements, and others formed more recently and incorporated some heavy elements. The Sun is one of the more recent stars.
 Spectral lines can also be studied to see if they change in wavelength or are different in wavelength from sources of the same lines on Earth. These studies tell us, according to the Doppler effect, how much the star is moving toward or away from us. Such studies of starlight can tell us about the orbits of stars in binary systems or about the pulsations of variable stars, for example.
 Astronomers study galaxies to learn about the structure of the universe. Galaxies are huge collections of billions of stars. Our Sun is part of the Milky Way Galaxy. Galaxies also contain dark strips of dust and may contain huge black holes at their centres. Galaxies exist in different shapes and sizes. Some galaxies are spirals, some are oval, or elliptical, and some are irregular. The Milky Way is a spiral galaxy. Galaxies tend to group together in clusters.
 Our Sun is only one of about 400 billion stars in our home galaxy, the Milky Way. On a dark night, far from outdoor lighting, a faint, hazy, whitish band spans the sky. This band is the Milky Way Galaxy as it appears from Earth. The Milky Way looks splotchy, with darker regions interspersed with lighter ones.
 The Milky Way Galaxy is a pinwheel-shaped flattened disk about 75,000 light-years in diameter. The Sun is located on a spiral arm about two-thirds of the way out from the centre. The galaxy spins, but the centre spins faster than the arms. At Earth’s position, the galaxy makes a complete rotation about every 200 million years.
 When observers on Earth look toward the brightest part of the Milky Way, which is in the constellation Sagittarius, they look through the galaxy’s disk toward its centre. This disk is composed of the stars, gas, and dust between Earth and the galactic centre. When observers look in the sky in other directions, they do not see as much of the galaxy’s gas and dust, and so can see objects beyond the galaxy more clearly.
 The Milky Way Galaxy has a core surrounded by its spiral arms. A spherical cloud containing about 100 examples of a type of star cluster known as a globular cluster surrounds the galaxy. Still farther out is a galactic corona. Astronomers are not sure what types of particles or objects occupy the corona, but these objects do exert a measurable gravitational force on the rest of the galaxy. Galaxies contain billions of stars, but the space between stars is not empty. Astronomers believe that almost every galaxy probably has a huge black hole at its centre.
 The space between stars in a galaxy consists of low-density gas and dust. The dust is largely carbon given off by red-giant stars. The gas is largely hydrogen, which accounts for 90 percent of the atoms in the universe. Hydrogen exists in two main forms in the universe. Astronomers give complete hydrogen atoms, with a nucleus and an electron, a designation of the Roman numeral I, or HI. Ionized hydrogen, made up of atoms missing their electrons, is given the designation II, or HII. Clouds, or regions, of both types of hydrogen exist between the stars. HI regions are too cold to produce visible radiation, but they do emit radio waves that are useful in measuring the movement of gas in our own galaxy and in distant galaxies. The HII regions form around hot stars. These regions emit diffuse radiation in the visual range, as well as in the radio, infrared, and ultraviolet ranges. The cloudy light from such regions forms beautiful nebulas such as the Great Orion Nebula.
 Astronomers have located more than 100 types of molecules in interstellar space. These molecules occur only in trace amounts among the hydrogen. Still, astronomers can use these molecules to map galaxies. By measuring the density of the molecules throughout a galaxy, astronomers can get an idea of the galaxy's structure. Interstellar dust sometimes gathers to form dark nebulae, which appear from Earth in silhouette against background gas or stars. The Horsehead Nebula, for example, is the silhouette of interstellar dust against a background HI region.
 The first known black holes were the collapsed cores of supernova stars, but astronomers have since discovered signs of much larger black holes at the centres of galaxies. These galactic black holes contain millions of times as much mass as the Sun. Astronomers believe that huge black holes such as these provide the energy of mysterious objects called quasars. Quasars are very distant objects that are moving away from Earth at high speed. The first ones discovered were very powerful radio sources, but scientists have since discovered quasars that don’t strongly emit radio waves. Astronomers believe that almost every galaxy, whether spiral or elliptical, has a huge black hole at its centre.
 Astronomers look for galactic black holes by studying the movement of galaxies. By studying the spectrum of a galaxy, astronomers can tell if gas near the centre of the galaxy is rotating rapidly. By measuring the speed of rotation and the distance from various points in the galaxy to the centre of the galaxy, astronomers can determine the amount of mass in the centre of the galaxy. Measurements of many galaxies show that gas near the centre is moving so quickly that only a black hole could be dense enough to concentrate so much mass in such a small space. Astronomers suspect that a significant black hole occupies even the centre of the Milky Way. The clear images from the Hubble Space Telescope have allowed measurements of motions closer to the centres of galaxies than previously possible, and have led to the confirmation in several cases that giant black holes are present.
 Galaxies are classified by shape. The three types are spiral, elliptical, and irregular. Spiral galaxies consist of a central mass with one, two, or three arms that spiral around the centre. An elliptical galaxy is oval, with a bright centre that gradually, evenly dims toward the edges. Irregular galaxies are not symmetrical and do not look like spiral or elliptical galaxies. Irregular galaxies vary widely in appearance. A galaxy that has a regular spiral or elliptical shape but has some special oddity is known as a peculiar galaxy. For example, some peculiar galaxies are stretched and distorted by the gravitational pull of a nearby galaxy.
 Spiral galaxies are flattened pinwheels in shape. They can have from one to three spiral arms coming from a central core. The Great Andromeda Spiral Galaxy is a good example of a spiral galaxy. The shape of the Milky Way is not visible from Earth, but astronomers have determined that the Milky Way is also a spiral galaxy. American astronomer Edwin Hubble further classified spiral galaxies by the tightness of their spirals. In order of increasingly open arms, Hubble's types are Sa, Sb, and Sc. Some galaxies have a straight, bright, bar-shaped feature across their centre, with the spiral arms coming off the bar or off a ring around the bar. With a capital B for the bar, the Hubble types of these galaxies are SBa, SBb, and SBc.
 Many clusters of galaxies have giant elliptical galaxies at their centres. Smaller elliptical galaxies, called dwarf elliptical galaxies, are much more common than giant ones. Most of the two dozen galaxies in the Milky Way’s Local Group of galaxies are dwarf elliptical galaxies.
 Astronomers classify elliptical galaxies by how oval they look, ranging from E0 for very round to E3 for intermediately oval to E7 for extremely elongated. Beyond E7 in Hubble's sequence is the class S0, also known as a lenticular galaxy, a shape with an elongated disk but no spiral arms. Because astronomers can see other galaxies only from the perspective of Earth, the shape astronomers see is not necessarily the exact shape of a galaxy. For instance, they may be viewing it from an end, and not from above or below.
 Some galaxies have no structure, while others have some trace of structure but do not fit the spiral or elliptical classes. All of these galaxies are called irregular galaxies. The two small galaxies that are satellites to the Milky Way Galaxy are both irregular. They are known as the Magellanic Clouds. The Large Magellanic Cloud shows signs of having a bar in its centre. The Small Magellanic Cloud is more formless. Studies of stars in the Large and Small Magellanic Clouds have been fundamental for astronomers’ understanding of the universe. Each of these galaxies provides groups of stars that are all at the same distance from Earth, allowing astronomers to compare the absolute brightness of these stars.
 In the late 1920s American astronomer Edwin Hubble discovered that all but the nearest galaxies to us are receding, or moving away from us. Further, he found that the farther away from Earth a galaxy is, the faster it is receding. He made his discovery by taking spectra of galaxies and measuring the amount by which the wavelengths of spectral lines were shifted. He measured distance in a separate way, usually from studies of Cepheid variable stars. Hubble discovered that essentially all the spectra of all the galaxies were shifted toward the red, or had red-shifts. The red-shifts of galaxies increased with increasing distance from Earth. After Hubble’s work, other astronomers made the connection between red-shift and velocity, showing that the farther a galaxy is from Earth, the faster it moves away from Earth. This idea is called Hubble’s law and is the basis for the belief that the universe is uniformly expanding. Other uniformly expanding three-dimensional objects, such as a rising cake with raisins in the batter, also demonstrate the consequence that the more distant objects (such as the other raisins with respect to any given raisin) appear to recede more rapidly than nearer ones. This consequence is the result of the increased amount of material expanding between these more distant objects.
 Hubble's law states that there is a straight-line, or linear, relationship between the speed at which an object is moving away from Earth and the distance between the object and Earth. The speed at which an object is moving away from Earth is called the object's velocity of recession. Hubble's law indicates that as velocity of recession increases, distance increases by the same proportion. Using this law, astronomers can calculate the distance to the most-distant galaxies, given only measurements of their velocities calculated by observing how much their light is shifted. Astronomers can accurately measure the red-shifts of objects so distant that the distance between Earth and the objects cannot be measured by other means.
 The constant of proportionality that relates velocity to distance in Hubble's law is called Hubble's constant, or H. Hubble's law is often written v = Hd, or velocity equals Hubble's constant multiplied by distance. Thus determining Hubble's constant will give the speed of the universe's expansion. The inverse of Hubble's constant, or 1/H, theoretically provides an estimate of the age of the universe. Astronomers now believe that Hubble's constant has changed over the lifetime of the universe, however, so estimates of expansion and age must be adjusted accordingly.
 The value of Hubble's constant probably falls between 64 and 78 kilometres per second per megaparsec (between 40 and 48 miles per second per megaparsec). A megaparsec is one million parsecs, and a parsec is 3.26 light-years. The Hubble Space Telescope studied Cepheid variables in distant galaxies to measure their distances from Earth accurately and so refine the value of Hubble's constant. The value astronomers found is 72 kilometres per second per megaparsec (45 miles per second per megaparsec), with an uncertainty of only 10 percent.
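 With a value for Hubble's constant in hand, the law v = Hd can be used in both directions: to estimate a galaxy's distance from its recession velocity, and to form the rough age estimate 1/H mentioned above. The sketch below uses the 72 kilometres per second per megaparsec figure quoted in the text; the example galaxy's velocity is made up for illustration.

H0 = 72.0                         # Hubble's constant, km/s per megaparsec (value quoted above)
KM_PER_MEGAPARSEC = 3.086e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.15e7

def distance_megaparsecs(recession_velocity_km_s):
    # Hubble's law, v = H * d, solved for distance.
    return recession_velocity_km_s / H0

# A hypothetical galaxy receding at 7,200 km/s lies about 100 megaparsecs away.
print(distance_megaparsecs(7200.0))

# Rough age of the universe: 1/H, converted from megaparsec-seconds per kilometre into years.
hubble_time_years = KM_PER_MEGAPARSEC / H0 / SECONDS_PER_YEAR
print(hubble_time_years)          # about 1.4e10, i.e. roughly fourteen billion years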
 The actual age of the universe depends not only on Hubble's constant but also on how much the gravitational pull of the mass in the universe slows the universe's expansion. Some data from studies that use the brightness of distant supernovas to assess distance indicate that the universe's expansion is speeding up instead of slowing. Astronomers invented the term 'dark energy' for the unknown cause of this accelerating expansion and are actively investigating these topics. The ultimate goal of astronomers is to understand the structure, behaviour, and evolution of all of the matter and energy that exist. Astronomers call the set of all matter and energy the universe. The universe is infinite in space, but astronomers believe it does have a finite age. Astronomers accept the theory that about fourteen billion years ago the universe began as an explosive event resulting in a hot, dense, expanding sea of matter and energy. This event is known as the big bang. Astronomers cannot observe that far back in time. Many astronomers believe, however, the theory that within the first fraction of a second after the big bang, the universe went through a tremendous inflation, expanding many times in size, before it resumed a slower expansion.
 As the universe expanded and cooled, various forms of elementary particles of matter formed. By the time the universe was one second old, protons had formed. For approximately the next 1,000 seconds, in the era of nucleosynthesis, all the nuclei of deuterium (hydrogen with both a proton and neutron in the nucleus) that are present in the universe today formed. During this brief period, some nuclei of lithium, beryllium, and helium formed as well.
 When the universe was about one million years old, it had cooled to about 3000 K (about 2730°C or about 4940°F). At that temperature, the protons and heavier nuclei formed during nucleosynthesis could combine with electrons to form atoms. Before electrons combined with nuclei, radiation had great difficulty travelling through space. Radiation in the form of photons (packets of light energy) could not travel very far without colliding with electrons. Once protons and electrons combined to form hydrogen, photons became able to travel through space. The radiation carried by the photons had the characteristic spectrum of a hot gas. Since the time this radiation was first released, it has cooled and is now at 3 K (-270°C or -450°F). It is called the primeval background radiation and has been definitively detected and studied, first by radio telescopes and then by the Cosmic Background Explorer (COBE) and Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft. COBE, WMAP, and ground-based radio telescopes detected tiny deviations from uniformity in the primeval background radiation; these deviations may be the seeds from which clusters of galaxies grew.
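 The same Wien's-law reasoning used earlier for starlight shows why this 3 K radiation turns up in radio and microwave instruments; the two-line sketch below is an illustration, not a figure from the text.

WIEN_CONSTANT = 2.898e-3     # metre-kelvins

# Peak wavelength of 3 K blackbody radiation, in millimetres.
print(WIEN_CONSTANT / 3.0 * 1000.0)    # about 1 mm, in the microwave region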
 The gravitational force from invisible matter, known as dark matter, may have helped speed the formation of structure in the universe. Observations from the Hubble Space Telescope have revealed older galaxies than astronomers expected, reducing the interval between the big bang and the formation of galaxies or clusters of galaxies.
 From about two billion years after the big bang for another two billion years, quasars formed as active giant black holes in the cores of galaxies. These quasars gave off radiation as they consumed matter from nearby galaxies. Few quasars appear close to Earth, so quasars must be a feature of the earlier universe.
 A population of stars formed out of the interstellar gas and dust that contracted to form galaxies. This first population, known as Population II, was made up almost entirely of hydrogen and helium. The stars that formed evolved and gave out heavier elements that were made through fusion in the stars' cores or that were formed as the stars exploded as supernovas. The later generation of stars, to which the Sun belongs, is known as Population I and contains heavy elements formed by the earlier population. The Sun formed about five billion years ago and is almost halfway through its 11-billion-year lifetime.
 About 4.6 billion years ago, our solar system formed. The oldest fossils of living organisms date from about 3.5 billion years ago and represent cyanobacteria. Life evolved, and sixty-five million years ago the dinosaurs and many other species became extinct, probably because of a catastrophic meteor impact. Modern humans evolved no earlier than a few hundred thousand years ago, a blink of an eye on the cosmic timescale.
 Will the universe expand forever or eventually stop expanding and collapse in on itself? Jay M. Pasachoff, professor of astronomy at Williams College in Williamstown, Massachusetts, confronts this question in this discussion of cosmology. Whether the universe will go on expanding forever depends on whether its density is great enough to halt or reverse the expansion, and the answer to that question may, in turn, depend on the existence of something the German-born American physicist Albert Einstein once labelled the cosmological constant.
 New technology allows astronomers to peer further into the universe than ever before. The science of cosmology, the study of the universe as a whole, has become an observational science. Scientists may now verify, modify, or disprove theories that were partially based on guesswork.
 In the 1920s, the early days of modern cosmology, it took an astronomer all night at a telescope to observe a single galaxy. Current surveys of the sky will likely compile data for a million different galaxies within a few years. Building upon advances in cosmology over the past century, our understanding of the universe should continue to accelerate.
 Modern cosmology began with the studies of Edwin Hubble, who measured the speeds at which galaxies move toward or away from us in the mid-1920s. By observing red-shift (the change in wavelength of the light that galaxies give off as they move away from us), Hubble realized that though the nearest galaxies are approaching us, all distant galaxies are receding. The most-distant galaxies are receding most rapidly. This observation is consistent with the characteristics of an expanding universe. Since 1929 an expanding universe has been the first and most basic pillar of cosmology.
 In 1990 the National Aeronautics and Space Administration (NASA) launched the Hubble Space Telescope (HST), named to honour the pioneer of cosmology. Appropriately, determining the rate at which the universe expands was one of the telescope’s major tasks.
 One of the HST's key projects was to study Cepheid variables (stars that vary greatly in brightness) and to measure distances in space. Another set of Hubble's observations focuses on supernovae, exploding stars that can be seen at very great distances because they are so bright. Studies of supernovae in other galaxies reveal the distances to those galaxies.
 The term big bang refers to the idea that the expanding universe can be traced back in time to an initial explosion. In the mid-1960s, physicists found important evidence of the big bang when they detected faint microwave radiation coming from every part of the sky. Astronomers think this radiation originated about 300,000 years after the big bang, when the universe thinned enough to become transparent. The existence of cosmic microwave background radiation, and its interpretation, is the second pillar of modern cosmology.
 Also in the 1960s, astronomers realized that the lightest of the elements, including hydrogen, helium, lithium, and boron, were formed mainly at the time of the big bang. Most important, deuterium (the form of hydrogen with an extra neutron added to normal hydrogen's single proton) was formed only in the era of nucleosynthesis. This era started about one second after the universe was formed and made up the first three minutes or so after the big bang. No sources of deuterium are known since that early epoch. The current ratio of deuterium to regular hydrogen depends on how dense the universe was at that early time, so studies of the deuterium that can now be detected indicate how much matter the universe contains. These studies of the origin of the light elements are the third pillar of modern cosmology.
 Until recently many astronomers disagreed on whether the universe was expected to expand forever or eventually stop expanding and collapse in on itself in a ‘big crunch.’
 At the General Assembly of the International Astronomical Union (IAU) held in August 2000, a consistent picture of cosmology emerged. This picture depends on the current measured value for the expansion rate of the universe and on the density of the universe as calculated from the abundances of the light elements. The most recent studies of distant supernovae seem to show that the universe's expansion is accelerating, not slowing. Astronomers have recently proposed a theoretical type of negative energy, which would provide a force that opposes the attraction of gravity, to explain the accelerating universe.
 For decades scientists have debated the rate at which the universe is expanding. We know that the farther away a galaxy is, the faster it moves away from us. The question is: How fast are galaxies receding for each unit of distance they are away from us? The current value, as announced at the IAU meeting, is 75 km/s/Mpc; that is, for each megaparsec of distance from us (where a megaparsec is 3.26 million light-years), the speed of expansion increases by 75 kilometres per second.
 What’s out there, exactly?
 In the picture of expansion held until recently, astronomers thought the universe contained just enough matter and energy so that it would expand forever but expand at a slower and slower rate as time went on. The density of matter and energy necessary for this to happen is known as the critical density.
 Astronomers now think that only 5 percent or so of the critical density of the universe is made of ordinary matter. Another 25 percent or so of the critical density is made of dark matter, a type of matter that has gravity but that has not been otherwise detected. The accelerating universe, further, shows that the remaining 70 percent of the critical density is made of a strange kind of energy, perhaps that known as the cosmological constant, an idea tentatively invoked and then abandoned by Albert Einstein in equations for his general theory of relativity.
 Some may be puzzled: Didn't we learn all about the foundations of physics when we were still at school? The answer is ‘yes’ or ‘no’, depending on the interpretation. We have become acquainted with concepts and general relations that enable us to comprehend an immense range of experiences and make them accessible to mathematical treatment. In a certain sense these concepts and relations are probably even final. This is true, for example, of the laws of light refraction, of the relations of classical thermodynamics as far as it is based on the concepts of pressure, volume, temperature, heat and work, and of the hypothesis of the nonexistence of a perpetual motion machine.
 What, then, impels us to devise theory after theory? Why do we devise theories at all? The answer to the latter question is simple: Because we enjoy ‘comprehending’, i.e., reducing phenomena by the process of logic to something already known or (apparently) evident. New theories are first of all necessary when we encounter new facts that cannot be ‘explained’ by existing theories. Nevertheless, this motivation for setting up new theories is, so to speak, trivial, imposed from without. There is another, more subtle motive of no less importance. This is the striving toward unification and simplification of the premises of the theory as a whole (i.e., Mach's principle of economy, interpreted as a logical principle).
 There exists a passion for comprehension, just as there exists a passion for music. That passion is altogether common in children, but gets lost in most people later on. Without this passion, there would be neither mathematics nor natural science. Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally, by pure thought, without any empirical foundations-in short, by metaphysics. I believe that every true theorist is a kind of tamed metaphysicist, no matter how pure a
‘positivist’ he may fancy himself. The metaphysicist believes that the logically simple is also the real. The tamed metaphysicist believes that not all that is logically simple is embodied in experienced reality, but that the totality of all sensory experience can be ‘comprehended’ on the basis of a conceptual system built on premises of great simplicity. The skeptic will say that this is a ‘miracle creed’. Admittedly so, but it is a miracle creed that has been borne out to an amazing extent by the development of science.
 The rise of atomism is a good example. How may Leucippus have conceived this bold idea? When water freezes and becomes ice, apparently something entirely different from water, why is it that the thawing of the ice forms something that seems indistinguishable from the original water? Leucippus is puzzled and looks for an ‘explanation’. He is driven to the conclusion that in these transitions the ‘essence’ of the thing has not changed at all. Maybe the thing consists of immutable particles and the change is only a change in their spatial arrangement. Could it not be that the same is true of all material objects that emerge again and again with nearly identical qualities?
 This idea is not entirely lost during the long hibernation of occidental thought. Two thousand years after Leucippus, Bernoulli wonders why gas exerts pressure on the walls of a container. Should this be ‘explained’ by mutual repulsion of the parts of the gas, in the sense of Newtonian mechanics? This hypothesis appears absurd, for the gas pressure depends on the temperature, all other things being equal. To assume that the Newtonian forces of interaction depend on temperature is contrary to the spirit of Newtonian mechanics. Since Bernoulli is aware of the concept of atomism, he is bound to conclude that the atoms (or molecules) collide with the walls of the container and in doing so exert pressure. After all, one has to assume that atoms are in motion; how else can one account for the varying temperature of gases?
 A simple mechanical consideration shows that this pressure depends only on the kinetic energy of the particles and on their density in space. This should have led the physicists of that age to the conclusion that heat consists in random motion of the atoms. Had they taken this consideration as seriously as it deserved to be taken, the development of the theory of heat, in particular the discovery of the equivalence of heat and mechanical energy, would have been considerably facilitated.
 This example is meant to illustrate two things. The theoretical idea (atomism in this case) does not arise apart from and independent of experience; nor can it be derived from experience by a purely logical procedure. It is produced by a creative act. Once a theoretical idea has been acquired, one does well to hold fast to it until it leads to an untenable conclusion.
 In Newtonian physics the elementary theoretical concept on which the theoretical description of material bodies is based is the material point, or particle. Thus, matter is considered theoretically to be discontinuous. This makes it necessary to consider the action of material points on one another as ‘action at a distance’. Since the latter concept seems quite contrary to everyday experience, it is only natural that the contemporaries of Newton, and in fact Newton himself, found it difficult to accept. Owing to the almost miraculous success of the Newtonian system, however, the succeeding generations of physicists became used to the idea of action at a distance. Any doubt was buried for a long time to come.
 All the same, when, in the second half of the 19th century, the laws of electrodynamics became known, it turned out that these laws could not be satisfactorily incorporated into the Newtonian system. It is fascinating to muse: Would Faraday have discovered the law of electromagnetic induction if he had received a regular college education? Unencumbered by the traditional way of thinking, he felt that the introduction of the ‘field’ as an independent element of reality helped him to coordinate the experimental facts. It was Maxwell who fully comprehended the significance of the field concept; he made the fundamental discovery that the laws of electrodynamics found their natural expression in the differential equations for the electric and magnetic fields. These equations implied the existence of waves, whose properties corresponded to those of light as far as they were known at that time.
 This incorporation of optics into the theory of electromagnetism represents one of the greatest triumphs in the striving toward unification of the foundations of physics; Maxwell achieved this unification by purely theoretical arguments, long before it was corroborated by Hertz' experimental work. The new insight made it possible to dispense with the hypothesis of action at a distance, at least in the realm of electromagnetic phenomena; the intermediary field now appeared as the only carrier of electromagnetic interaction between bodies, and the field's behaviour was completely determined by contiguous processes, expressed by differential equations.
 Now a question arose: Since the field exists even in a vacuum, should one conceive of the field as a state of a ‘carrier’, or should it be endowed with an independent existence not reducible to anything else? In other words, is there an ‘ether’ which carries the field; the ether being considered in the undulatory state, for example, when it carries light waves?
 The question has a natural answer: Since one cannot dispense with the field concept, it is preferable not to introduce in addition a carrier with hypothetical properties. However, the pathfinders who first recognized the indispensability of the field concept were still too strongly imbued with the mechanistic tradition of thought to accept unhesitatingly this simple point of view. Nevertheless, in the course of the following decades this view imperceptibly took hold.
 The introduction of the field as an elementary concept gave rise to an inconsistency of the theory as a whole. Maxwell's theory, although adequately describing the behaviour of electrically charged particles in their interaction with one another, does not explain the behaviour of electric densities, i.e., it does not provide a theory of the particles themselves. They must therefore be treated as mass points on the basis of the old theory. The combination of the idea of a continuous field with that of material points discontinuous in space appears inconsistent. A consistent field theory requires continuity of all elements of the theory, not only in time but also in space, and in all points of space. Hence the material particle has no place as a fundamental concept in a field theory. Thus, even apart from the fact that gravitation is not included, Maxwell’s electrodynamics cannot be considered a complete theory.
 Maxwell's equations for empty space remain unchanged if the spatial coordinates and the time are subjected to a particular kind of linear transformation, the Lorentz transformations (‘covariance’ with respect to Lorentz transformations). Covariance also holds, of course, for a transformation that is composed of two or more such transformations; this is called the ‘group’ property of Lorentz transformations.
 Maxwell's equations imply the ‘Lorentz group’, but the Lorentz group does not imply Maxwell's equations. The Lorentz group may effectively be defined independently of Maxwell's equations as a group of linear transformations that leave a particular value of the velocity, the velocity of light, invariant. These transformations hold for the transition from one ‘inertial system’ to another that is in uniform motion relative to the first. The most conspicuous novel property of this transformation group is that it does away with the absolute character of the concept of simultaneity of events distant from each other in space. On this account it is to be expected that all equations of physics are covariant with respect to Lorentz transformations (special theory of relativity). Thus it came about that Maxwell's equations led to a heuristic principle valid far beyond the range of the applicability or even validity of the equations themselves.
 Special relativity has this in common with Newtonian mechanics: The laws of both theories are supposed to hold only with respect to certain coordinate systems: those known as ‘inertial systems’. An inertial system is a system in a state of motion such that ‘force-free’ material points within it are not accelerated with respect to the coordinate system. However, this definition is empty if there is no independent means for recognizing the absence of forces. But such a means of recognition does not exist if gravitation is considered as a ‘field’.
 Let ‘A’ be a system uniformly accelerated with respect to an ‘inertial system’ I. Material points, not accelerated with respect to I, are accelerated with respect to ‘A’, the acceleration of all the points being equal in magnitude and direction. They behave as if a gravitational field exists with respect to ‘A’, for it is a characteristic property of the gravitational field that the acceleration is independent of the particular nature of the body. There is no reason to exclude the possibility of interpreting this behaviour as the effect of a ‘true’ gravitational field (principle of equivalence). This interpretation implies that ‘A’ is an ‘inertial system,’ even though it is accelerated with respect to another inertial system. (It is essential for this argument that the introduction of independent gravitational fields is considered justified even though no masses generating the field are defined. Therefore, to Newton such an argument would not have appeared convincing.) Thus the concepts of inertial system, the law of inertia and the law of motion are deprived of their concrete meaning, not only in classical mechanics but also in special relativity. Moreover, following up this train of thought, it turns out that with respect to A time cannot be measured by identical clocks; indeed, even the immediate physical significance of coordinate differences is generally lost. In view of all these difficulties, should one not try, after all, to hold on to the concept of the inertial system, relinquishing the attempt to explain the fundamental character of the gravitational phenomena that manifest themselves in the Newtonian system as the equivalence of inert and gravitational mass? Those who trust in the comprehensibility of nature must answer: No.
 This is the gist of the principle of equivalence: In order to account for the equality of inert and gravitational mass within the theory, it is necessary to admit nonlinear transformations of the four coordinates. That is, the group of Lorentz transformations and hence the set of ‘permissible’ coordinate systems has to be extended.
 What group of coordinate transformations can then be substituted for the group of Lorentz transformations? Mathematics suggests an answer that is based on the fundamental investigations of Gauss and Riemann: namely, that the appropriate substitute is the group of all continuous (analytical) transformations of the coordinates. Under these transformations the only thing that remains invariant is the fact that neighbouring points have nearly the same coordinates; the coordinate system expresses only the topological order of the points in space (including its four-dimensional character). The equations expressing the laws of nature must be covariant with respect to all continuous transformations of the coordinates. This is the principle of general relativity.
 The procedure just described overcomes a deficiency in the foundations of mechanics that had already been noticed by Newton and was criticized by Leibnitz and, two centuries later, by Mach: Inertia resists acceleration, but acceleration relative to what? Within the frame of classical mechanics the only answer is: Inertia resists acceleration relative to space. This is a physical property of space: space acts on objects, but objects do not act on space. Such is probably the deeper meaning of Newton's assertion spatium est absolutum (space is absolute). Nevertheless, the idea disturbed some, in particular Leibnitz, who did not ascribe an independent existence to space but considered it merely a property of ‘things’ (contiguity of physical objects). Had his justified doubts won out at that time, it hardly would have been a boon to physics, for the empirical and theoretical foundations necessary to follow up his idea were not available in the 17th century.
 According to general relativity, the concept of space detached from any physical content does not exist. The physical reality of space is represented by a field whose components are continuous functions of four independent variables—the coordinates of space and time. It is just this particular kind of dependence that expresses the spatial character of physical reality.
 Since the theory of general relativity implies the representation of physical reality by a continuous field, the concept of particles or material points cannot . . . play a fundamental part, nor can the concept of motion. The particle can only appear as a limited region in space in which the field strength or the energy density is particularly high.
 A relativistic theory has to answer two questions: (1) What is the mathematical character of the field? (2) What equations hold for this field?
 Concerning the first question: From the mathematical point of view the field is essentially characterized by the way its components transform if a coordinate transformation is applied. Concerning the second question: The equations must determine the field to a sufficient extent while satisfying the postulates of general relativity. Whether or not this requirement can be satisfied depends on the choice of the field type.
 The attempts to comprehend the correlations among the empirical data on the basis of such a highly abstract program may at first appear almost hopeless. The procedure amounts, in fact, to putting the question: What most simple property can be required from what most simple object (field) while preserving the principle of general relativity? Viewed in terms of formal logic, the dual character of the question appears calamitous, quite apart from the vagueness of the concept ‘simple’. Moreover, from the standpoint of physics there is nothing to warrant the assumption that a theory that is ‘logically simple’ should also be ‘true’.
 Yet every theory is speculative. When the basic concepts of a theory are comparatively ‘close to experience’ (e.g., the concepts of force, pressure, mass), its speculative character is not so easily discernible. If, however, a theory is such as to require the application of complicated logical processes in order to reach conclusions from the premises that can be confronted with observation, everybody becomes conscious of the speculative nature of the theory. In such a case an almost irresistible feeling of aversion arises in people who are inexperienced in epistemological analysis and who are unaware of the precarious nature of theoretical thinking in those fields with which they are familiar.
 On the other hand, it must be conceded that a theory has an important advantage if its basic concepts and fundamental hypotheses are ‘close to experience’, and greater confidence in such a theory is justifiable. There is less danger of going completely astray, particularly since it takes so much less time and effort to disprove such theories by experience. Yet more and more, as the depth of our knowledge increases, we must give up this advantage in our quest for logical simplicity and uniformity in the foundations of physical theory. It has to be admitted that general relativity has gone further than previous physical theories in relinquishing ‘closeness to experience’ of fundamental concepts in order to attain logical simplicity. This holds already for the theory of gravitation, and it is even more true of the new generalization, which is an attempt to comprise the properties of the total field. In the generalized theory the procedure of deriving from the premises of the theory conclusions that can be confronted with empirical data is so difficult that so far no such result has been obtained. In favour of this theory are, at this point, its logical simplicity and its ‘rigidity’. Rigidity means here that the theory is either true or false, but not modifiable.
 The greatest inner difficulty impeding the development of the theory of relativity is the dual nature of the problem, indicated by the two questions we have asked. This duality is the reason the development of the theory has taken place in two steps so widely separated in time. The first of these steps, the theory of gravitation, is based on the principle of equivalence discussed above and rests on the following consideration: According to the theory of special relativity, light has a constant velocity of propagation. If a light ray in a vacuum starts from a point, designated by the coordinates x1, x2 and x3 in a three-dimensional coordinate system, at the time x4, it spreads as a spherical wave and reaches a neighbouring point (x1 + dx1, x2 + dx2, x3 + dx3) at the time x4 + dx4. Introducing the velocity of light, c, we write the expression:
dx1² + dx2² + dx3² - c²dx4² = 0
 This expression represents an objective relation between neighbouring space-time points in four dimensions, and it holds for all inertial systems, provided the coordinate transformations are restricted to those of special relativity. The relation loses this form, however, if arbitrary continuous transformations of the coordinates are admitted in accordance with the principle of general relativity. The relation then assumes the more general form:
Σik gik dxi dxk=0
The gik are certain functions of the coordinates that transform in a definite way if a continuous coordinate transformation is applied. According to the principle of equivalence, these gik functions describe a particular kind of gravitational field: a field that can be obtained by transformation of ‘field-free’ space. The gik satisfy a particular law of transformation. Mathematically speaking, they are the components of a ‘tensor’ with a property of symmetry that is preserved in all transformations; the symmetry property is expressed as follows:
gik=gki
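 As a connecting illustration (added here, not part of the original argument): in the special-relativity case written out above, the gik reduce to the constant values

g11 = g22 = g33 = 1,  g44 = -c²,  gik = 0 for i ≠ k,

so that Σik gik dxi dxk becomes dx1² + dx2² + dx3² - c²dx4², recovering the earlier expression; a genuine gravitational field corresponds to gik that vary from point to point.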
The idea suggests itself: May we not ascribe objective meaning to such a symmetrical tensor, even though the field cannot be obtained from the empty space of special relativity by a mere coordinate transformation? Although we cannot expect that such a symmetrical tensor will describe the most general field, it may describe the particular case of the ‘pure gravitational field’. Thus it is evident what kind of field, at least for a special case, general relativity has to postulate: a symmetrical tensor field.
 Hence only the second question is left: What kind of general covariant field law can be postulated for a symmetrical tensor field?
 This question has not been difficult to answer in our time, since the necessary mathematical conceptions were already at hand in the form of the metric theory of surfaces, created a century ago by Gauss and extended by Riemann to manifolds of an arbitrary number of dimensions. The result of this purely formal investigation has been amazing in many respects. The differential equations that can be postulated as field law for gik cannot be of lower than second order, i.e., they must at least contain the second derivatives of the gik with respect to the coordinates. Assuming that no higher than second derivatives appear in the field law, it is mathematically determined by the principle of general relativity. The system of equations can be written in the form: Rik = 0. The Rik transform in the same manner as the gik, i.e., they too form a symmetrical tensor.
 These differential equations completely replace the Newtonian theory of the motion of celestial bodies provided the masses are represented as singularities of the field. In other words, they contain the law of force as well as the law of motion while eliminating ‘inertial systems’.
 The fact that the masses appear as singularities indicates that these masses themselves cannot be explained by symmetrical gik fields, or ‘gravitational fields’. Not even the fact that only positive gravitating masses exist can be deduced from this theory. Evidently a complete relativistic field theory must be based on a field of more complex nature, that is, a generalization of the symmetrical tensor field.
 The first observation is that the principle of general relativity imposes exceedingly strong restrictions on the theoretical possibilities. Without this restrictive principle it would be practically impossible for anybody to hit on the gravitational equations, not even by using the principle of special relativity, even though one knows that the field has to be described by a symmetrical tensor. No amount of collection of facts could lead to these equations unless the principle of general relativity were used. This is the reason that all attempts to obtain a deeper knowledge of the foundations of physics seem doomed to me unless the basic concepts are in accordance with general relativity from the beginning. This situation makes it difficult to use our empirical knowledge, however comprehensive, in looking for the fundamental concepts and relations of physics, and it forces us to apply free speculation to a much greater extent than is presently assumed by most physicists. I do not see any reason to assume that the heuristic significance of the principle of general relativity is restricted to gravitation and that the rest of physics can be dealt with separately on the basis of special relativity, with the hope that later on the whole may be fitted consistently into a general relativistic scheme. I do not think that such an attitude, although historically understandable, can be objectively justified. The comparative smallness of what we know today as gravitational effects is not a conclusive reason for ignoring the principle of general relativity in theoretical investigations of a fundamental character. In other words, I do not believe it is justifiable to ask: What would physics look like without gravitation?
 The second point we must note is that the equations of gravitation are ten differential equations for the ten components of the symmetrical tensor gik. In the case of a non-generalized relativity theory, a system is ordinarily not overdetermined if the number of equations is equal to the number of unknown functions. The manifold of solutions is such that within the general solution a certain number of functions of three variables can be chosen arbitrarily. For a general relativistic theory this cannot be expected as a matter of course. Free choice with respect to the coordinate system implies that out of the ten functions of a solution, or components of the field, four can be made to assume prescribed values by a suitable choice of the coordinate system. In other words, the principle of general relativity implies that the number of functions to be determined by differential equations is not ten but 10-4=6. For these six functions only six independent differential equations may be postulated. Only six out of the ten differential equations of the gravitational field ought to be independent of each other, while the remaining four must be connected to those six by means of four relations (identities). Indeed, there exist among the left-hand sides, Rik, of the ten gravitational equations four identities, ‘Bianchi's identities’, which assure their ‘compatibility’.
 In a case like this, when the number of field variables is equal to the number of differential equations, compatibility is always assured if the equations can be obtained from a variational principle. This is unquestionably the case for the gravitational equations.
 However, the ten differential equations cannot be entirely replaced by six. The system of equations is indeed ‘overdetermined’, but due to the existence of the identities it is overdetermined in such a way that its compatibility is not lost, i.e., the manifold of solutions is not critically restricted. The fact that the equations of gravitation imply the law of motion for the masses is intimately connected with this (permissible) overdetermination.
 After this preparation it is now easy to understand the nature of the present investigation without entering into the details of its mathematics. The problem is to set up a relativistic theory for the total field. The most important clue to its solution is that there exists already the solution for the special case of the pure gravitational field. The theory we are looking for must therefore be a generalization of the theory of the gravitational field. The first question is: What is the natural generalization of the symmetrical tensor field?
 This question cannot be answered by itself, but only in connection with the other question: What generalization of the field is going to provide the most natural theoretical system? The answer on which the theory under discussion is based is that the symmetrical tensor field must be replaced by a non-symmetrical one. This means that the condition gik = gki for the field components must be dropped. In that case the field has sixteen instead of ten independent components.
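 A brief bookkeeping remark (added for clarity; the symbols sik and aik are introduced only for this note): any such field can be split into a symmetrical and an anti-symmetrical part,

sik = (gik + gki)/2,  aik = (gik - gki)/2,

and for four coordinates the symmetrical part has ten independent components while the anti-symmetrical part has six, which accounts for the sixteen independent components mentioned above.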
 There remains the task of setting up the relativistic differential equations for a non-symmetrical tensor field. In the attempt to solve this problem one meets with a difficulty that does not arise in the case of the symmetrical field. The principle of general relativity does not suffice to determine completely the field equations, mainly because the transformation law of the symmetrical part of the field alone does not involve the components of the anti-symmetrical part or vice versa. Probably this is the reason that this kind of generalization of the field has hardly ever been tried before. The combination of the two parts of the field can only be shown to be a natural procedure if in the formalism of the theory only the total field plays a role, and not the symmetrical and anti-symmetrical parts separately.
 It turned out that this requirement can indeed be satisfied in a natural way. Nonetheless, even this requirement, together with the principle of general relativity, is still not sufficient to determine uniquely the field equations. Let us remember that the system of equations must satisfy a further condition: the equations must be compatible. It has been mentioned above that this condition is satisfied if the equations can be derived from a variational principle.
 This has indeed been achieved, although not in so natural a way as in the case of the symmetrical field. It has been disturbing to find that it can be achieved in two different ways. These variational principles furnished two systems of equations (let us denote them by E1 and E2), which were different from each other (although only slightly so), each of them exhibiting specific imperfections. Consequently even the condition of compatibility was insufficient to determine the system of equations uniquely.
 It was, in fact, the formal defects of the systems E1 and E2 that indicated a possible way. There exists a third system of equations, E3, which is free of the formal defects of the systems E1 and E2 and represents a combination of them in the sense that every solution of E3 is a solution of E1 as well as of E2. This suggests that E3 may be the system for which we have been looking. Why not postulate E3, then, as the system of equations? Such a procedure is not justified without further analysis, since the compatibility of E1 and that of E2 does not imply compatibility of the stronger system E3, where the number of equations exceeds the number of field components by four.
 An independent consideration shows that irrespective of the question of compatibility the stronger system, E3, is the only really natural generalization of the equations of gravitation.
 It seems, nonetheless, that E3 is not a compatible system in the same sense as are the systems E1 and E2, whose compatibility is assured by a sufficient number of identities, which means that every field that satisfies the equations for a definite value of the time has a continuous extension representing a solution in four-dimensional space. The system E3, however, is not extensible in the same way. Using the language of classical mechanics, we might say: In the case of the system E3 the ‘initial condition’ cannot be freely chosen. What really matters is the answer to the question: Is the manifold of solutions for the system E3 as extensive as must be required for a physical theory? This purely mathematical problem is as yet unsolved.
 The skeptic will say: "It may be true that this system of equations is reasonable from a logical standpoint. However, this does not prove that it corresponds to nature." You are right, dear skeptic. Experience alone can decide on truth. Yet we have achieved something if we have succeeded in formulating a meaningful and precise question. Affirmation or refutation will not be easy, in spite of an abundance of known empirical facts. The derivation, from the equations, of conclusions that can be confronted with experience will require painstaking efforts and probably new mathematical methods.
 Schrödinger's mathematical description of electron waves found immediate acceptance. The mathematical description matched what scientists had learned about electrons by observing them and their effects. In 1925, a year before Schrödinger published his results, German-British physicist Max Born and German physicist Werner Heisenberg developed a mathematical system called matrix mechanics. Matrix mechanics also succeeded in describing the structure of the atom, but it was totally theoretical. It gave no picture of the atom that physicists could verify observationally. Schrödinger's vindication of de Broglie's idea of electron waves immediately overturned matrix mechanics, though later physicists showed that wave mechanics is equivalent to matrix mechanics.
 To solve these problems, mathematicians use calculus, which deals with continuously changing quantities, such as the position of a point on a curve. Its simultaneous development in the 17th century by English mathematician and physicist Isaac Newton and German philosopher and mathematician Gottfried Wilhelm Leibniz enabled the solution of many problems that had been insoluble by the methods of arithmetic, algebra, and geometry. Among the advances that calculus helped make possible were the formulation of Newton’s laws of motion and the theory of electromagnetism.
 The physical sciences investigate the nature and behaviour of matter and energy on a vast range of size and scale. In physics itself, scientists study the relationships between matter, energy, force, and time in an attempt to explain how these factors shape the physical behaviour of the universe. Physics can be divided into many branches. Scientists study the motion of objects, a huge branch of physics known as mechanics that involves two overlapping sets of scientific laws. The laws of classical mechanics govern the behaviour of objects in the macroscopic world, which includes everything from billiard balls to stars, while the laws of quantum mechanics govern the behaviour of the particles that make up individual atoms.
 The new math is new only in that the material is introduced at a much lower level than heretofore. Thus geometry, which was and is commonly taught in the second year of high school, is now frequently introduced, in an elementary fashion, in the fourth grade; in fact, naming and recognition of the common geometric figures, the circle and the square, occurs in kindergarten. At an early stage, numbers are identified with points on a line, and the identification is used to introduce, much earlier than in the traditional curriculum, negative numbers and the arithmetic processes involving them.
 The elements of set theory constitute the most basic and perhaps the most important topic of the new math. Even a kindergarten child can understand, without formal definition, the meaning of a set of red blocks, the set of fingers on the left hand, and the set of the child’s ears and eyes. The technical word set is merely a synonym for many common words that designate an aggregate of elements. The child can understand that the set of fingers on the left hand and the set on the right hand match, that is, the elements, fingers, can be put into a one-to-one correspondence. The set of fingers on the left hand and the set of the child’s ears and eyes do not match. Some concepts that are developed by this method are counting, equality of number, more than, and less than. The ideas of union and intersection of sets and the complement of a set can be similarly developed without formal definition in the early grades. The principles and formalism of set theory are extended as the child advances; upon graduation from high school, the student’s knowledge is quite comprehensive.
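 To make the same ideas concrete, here is a brief sketch using Python's built-in sets (an illustration added to this post; the particular sets and names are invented for the example) showing matching by one-to-one correspondence, union, intersection, and complement.

# Illustrative sketch of the set ideas described above, using Python's built-in sets.
left_hand = {"thumb", "index", "middle", "ring", "little"}
right_hand = {"Thumb", "Index", "Middle", "Ring", "Little"}
ears_and_eyes = {"left ear", "right ear", "left eye", "right eye"}

# Two finite sets "match" when their elements can be paired one to one,
# which simply means they contain the same number of elements.
print(len(left_hand) == len(right_hand))     # True: the two hands match
print(len(left_hand) == len(ears_and_eyes))  # False: five fingers, four ears and eyes

# Union, intersection, and complement, the operations named above.
vowels = {"a", "e", "i", "o", "u"}
first_five_letters = {"a", "b", "c", "d", "e"}
print(vowels | first_five_letters)  # union of the two sets
print(vowels & first_five_letters)  # intersection: {'a', 'e'}
print(first_five_letters - vowels)  # complement of the vowels within the first five letters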
 The amount of new math and the particular topics taught vary from school to school. In addition to set theory and intuitive geometry, the material is usually chosen from the following topics: a development of the number systems, including methods of numeration, binary and other bases of notation, and modular arithmetic; measurement, with attention to accuracy and precision, and error study; studies of algebraic systems, including linear algebra, modern algebra, vectors, and matrices, with an axiomatic approach; logic, including truth tables, the nature of proof, Venn or Euler diagrams, relations, functions, and general axiomatics; probability and statistics; linear programming; computer programming and language; and analytic geometry and calculus. Some schools present differential equations, topology, and real and complex analysis.
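 Two of the topics listed above, binary notation and modular arithmetic, lend themselves to an equally short illustration (again a sketch added to this post, not drawn from any particular curriculum).

# Base-two (binary) numeration and modular ("clock") arithmetic in Python.
n = 13
print(bin(n))          # '0b1101': thirteen written in base two
print(int("1101", 2))  # 13: converting the base-two numeral back to base ten

# Modular arithmetic: hours on a 12-hour clock.
print((9 + 5) % 12)    # 2: five hours after nine o'clock is two o'clock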
 Cosmology is the study of the general nature of the universe in space and in time: what it is now, what it was in the past and what it is likely to be in the future. Since the only forces at work between the galaxies that make up the material universe are the forces of gravity, the cosmological problem is closely connected with the theory of gravitation, in particular with its modern version as embodied in Albert Einstein's general theory of relativity. In the frame of this theory the properties of space, time and gravitation are merged into one harmonious and elegant picture.
 The basic cosmological notion of general relativity grew out of the work of great mathematicians of the 19th century. In the middle of the last century two inquisitive mathematical minds, a Russian named Nikolai Lobachevski and a Hungarian named János Bolyai, discovered that the classical geometry of Euclid was not the only possible geometry: in fact, they succeeded in constructing a geometry that was fully as logical and self-consistent as the Euclidean. They began by overthrowing Euclid's axiom about parallel lines: namely, that only one parallel to a given straight line can be drawn through a point not on that line. Lobachevski and Bolyai both conceived a system of geometry in which a great number of lines parallel to a given line could be drawn through a point outside the line.
 To illustrate the differences between Euclidean geometry and their non-Euclidean system, it is simplest to consider just two dimensions, that is, the geometry of surfaces. In our schoolbooks this is known as ‘plane geometry’, because the Euclidean surface is a flat surface. Suppose, now, we examine the properties of a two-dimensional geometry constructed not on a plane surface but on a curved surface. For the system of Lobachevski and Bolyai we must take the curvature of the surface to be ‘negative’, which means that the curvature is not like that of the surface of a sphere but like that of a saddle. Now if we are to draw parallel lines or any figure (e.g., a triangle) on this surface, we must decide first of all how we will define a ‘straight line’, equivalent to the straight line of plane geometry. The most reasonable definition of a straight line in Euclidean geometry is that it is the path of the shortest distance between two points. On a curved surface the line, so defined, becomes a curved line known as a ‘geodesic’.
 Considering a surface curved like a saddle, we find that, given a ‘straight’ line or geodesic, we can draw through a point outside that line a great many geodesics that will never intersect the given line, no matter how far they are extended. They are therefore parallel to it, by the definition of parallel. The possible parallels to the line fall within certain limits, indicated by the intersecting lines.
 As a consequence of the overthrow of Euclid's axiom on parallel lines, many of his theorems are demolished in the new geometry. For example, the Euclidean theorem that the sum of the three angles of a triangle is 180 degrees no longer holds on a curved surface. On the saddle-shaped surface the angles of a triangle formed by three geodesics always add up to less than 180 degrees, the actual sum depending on the size of the triangle. Further, a circle on the saddle surface does not have the same properties as a circle in plane geometry. On a flat surface the circumference of a circle increases in proportion to the increase in diameter, and the area of a circle increases in proportion to the square of the increase in diameter. On a saddle surface, however, both the circumference and the area of a circle increase at faster rates than on a flat surface with increasing diameter.
 After Lobachevski and Bolyai, the German mathematician Bernhard Riemann constructed another non-Euclidean geometry whose two-dimensional model is a surface of positive, rather than negative, curvature, that is, the surface of a sphere. In this case a geodesic line is simply a great circle around the sphere or a segment of such a circle, and since any two great circles must intersect at two points (the poles), there are no parallel lines at all in this geometry. Again the sum of the three angles of a triangle is not 180 degrees: in this case it is always more than 180. The circumference of a circle now increases at a rate slower than in proportion to its increase in diameter, and its area increases more slowly than the square of the diameter.
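 A concrete case may help (an illustration added here, not part of the original article): on a sphere, the triangle whose vertices are the north pole and two points on the equator a quarter of the way around the globe from each other has three right angles, so its angles sum to 270 degrees. The short Python sketch below encodes the general rule that the excess over 180 degrees is proportional to the triangle's area (the 'spherical excess').

# Illustrative sketch: on a sphere of radius R, the angle sum of a geodesic
# triangle exceeds 180 degrees by its area divided by R squared (in radians).
import math

def angle_sum_degrees(area, radius):
    excess = area / radius**2          # spherical excess, in radians
    return 180.0 + math.degrees(excess)

R = 1.0
octant_area = 4 * math.pi * R**2 / 8   # the pole-and-equator triangle covers one octant
print(angle_sum_degrees(octant_area, R))  # 270.0 degrees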
 Now all this is not merely an exercise in abstract reasoning but bears directly on the geometry of the universe in which we live. Is the space of our universe ‘flat’, as Euclid assumed, or is it curved negatively (per Lobachevski and Bolyai) or curved positively (Riemann)? If we were two-dimensional creatures living in a two-dimensional universe, we could tell whether we were living on a flat or a curved surface by studying the properties of triangles and circles drawn on that surface. Similarly, as three-dimensional beings living in three-dimensional space, we should be able, by studying the geometrical properties of that space, to decide what the curvature of our space is. Riemann in fact developed mathematical formulas describing the properties of various kinds of curved space in three and more dimensions. In the early years of this century Einstein conceived the idea of the universe as a curved system in four dimensions, embodying time as the fourth dimension, and he proceeded to apply Riemann's formulas to test his idea.
 Einstein showed that time can be considered a fourth coordinate supplementing the three coordinates of space. He connected space and time, thus establishing a ‘space-time continuum’, by means of the speed of light as a link between time and space dimensions. However, recognizing that space and time are physically different entities, he employed the imaginary number √-1, or i, to express the unit of time mathematically and make the time coordinate formally equivalent to the three coordinates of space.
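 In symbols (a restatement added for clarity, not a quotation): if the fourth coordinate is taken to be ict, where i is the imaginary unit, the separation of two neighbouring events can be written

ds² = dx² + dy² + dz² + (ic dt)² = dx² + dy² + dz² - c²dt²,

so that all four coordinates enter the formula on a formally equal, Euclidean-looking footing even though time remains physically distinct.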
 In his special theory of relativity Einstein made the geometry of the time-space continuum strictly Euclidean, that is, flat. The great idea that he introduced later in his general theory was that gravitation, whose effects had been neglected in the special theory, must make it curved. He saw that the gravitational effect of the masses distributed in space and moving in time was equivalent to curvature of the four-dimensional space-time continuum. In place of the classical Newtonian statement that ‘the sun produces a field of forces that impel the earth to deviate from straight-line motion and to move in a circle around the sun’, Einstein substituted a statement to the effect that ‘the presence of the sun causes a curvature of the space-time continuum in its neighbourhood’.
 The motion of an object in the space-time continuum can be represented by a curve called the object's ‘world line’. Einstein declared, in effect: ‘The world line of the earth is a geodesic trajectory in the curved four-dimensional space around the sun’. In other words, the . . . earth’s ‘world line’ . . . corresponds to the shortest four-dimensional distance between the position of the earth in January . . . and its position in October . . .
 Einstein's idea of the gravitational curvature of space-time was, of course, triumphantly affirmed by the discovery of perturbations in the motion of Mercury at its closest approach to the sun and of the deflection of light rays by the sun's gravitational field. Einstein next attempted to apply the idea to the universe as a whole. Does it have a general curvature, similar to the local curvature in the sun's gravitational field? He now had to consider not a single centre of gravitational force but countless focal points in a universe full of matter concentrated in galaxies whose distribution fluctuates considerably from region to region in space. However, in the large-scale view the galaxies are spread uniformly throughout space as far out as our biggest telescopes can see, and we can justifiably ‘smooth out’ the matter of the universe to a general average (which comes to about one hydrogen atom per cubic metre). On this assumption the universe as a whole has a smooth general curvature.
 Nevertheless, if the space of the universe is curved, what is the sign of this curvature? Is it positive, as in our two-dimensional analogy of the surface of a sphere, or is it negative, as in the case of a saddle surface? Since we cannot consider space alone, how is this space curvature related to time?
 Analysing the pertinent mathematical equations, Einstein came to the conclusion that the curvature of space must be independent of time, i.e., that the universe as a whole must be unchanging (though it changes internally). However, he found to his surprise that there was no solution of the equations that would permit a static cosmos. To repair the situation, Einstein was forced to introduce an additional hypothesis that amounted to the assumption that a new kind of force was acting among the galaxies. This hypothetical force had to be independent of mass (being the same for an apple, the moon and the sun) and to gain in strength with increasing distance between the interacting objects (as no other forces ever do in physics).
 Einstein's new force, called ‘cosmic repulsion’, allowed two mathematical models of a static universe. One solution, which was worked out by Einstein himself and became known as Einstein's spherical universe, gave the space of the cosmos a positive curvature. Like a sphere, this universe was closed and thus had a finite volume. The space coordinates in Einstein's spherical universe were curved in the same way as the latitude or longitude coordinates on the surface of the earth. However, the time axis of the space-time continuum ran quite straight, as in the good old classical physics. This means that no cosmic event would ever recur. The two-dimensional analogy of Einstein's space-time continuum is the surface of a cylinder, with the time axis running parallel to the axis of the cylinder and the space axis perpendicular to it.
 The other static solution based on the mysterious repulsion forces was discovered by the Dutch mathematician Willem de Sitter. In his model of the universe both space and time were curved. Its geometry was similar to that of a globe, with longitude serving as the space coordinate and latitude as time. Unhappily, astronomical observations contradicted both Einstein's and de Sitter's static models of the universe, and they were soon abandoned.
 In the year 1922 a major turning point came in the cosmological problem. A Russian mathematician, Alexander A. Friedman (from whom the author of this article learned his relativity), discovered an error in Einstein's proof for a static universe. In carrying out his proof Einstein had divided both sides of an equation by a quantity that, Friedman found, could become zero under certain circumstances. Since division by zero is not permitted in algebraic computations, the possibility of a nonstatic universe could not be excluded under the circumstances in question. Friedman showed that two nonstatic models were possible. One pictured the universe as expanding with time; the other, contracting.
 Einstein quickly recognized the importance of this discovery. In the last edition of his book The Meaning of Relativity he wrote: "The mathematician Friedman found a way out of this dilemma. He showed that it is possible, according to the field equations, to have a finite density in the whole (three-dimensional) space, without enlarging these field equations." Einstein remarked to me many years ago that the cosmic repulsion idea was the biggest blunder he ever made in his entire life.
 Almost at the very moment that Friedman was discovering the possibility of an expanding universe by mathematical reasoning, Edwin P. Hubble at the Mount Wilson Observatory on the other side of the world found the first evidence of actual physical expansion through his telescope. He made a compilation of the distances of a number of far galaxies, whose light was shifted toward the red end of the spectrum, and it was soon found that the extent of the shift was in direct proportion to a galaxy's distance from us, as estimated by its faintness. Hubble and others interpreted the red-shift as the Doppler effect, the well-known phenomenon of lengthening of wavelengths from any radiating source that is moving rapidly away (a train whistle, a source of light or whatever). To date there has been no other reasonable explanation of the galaxies' red-shift. If the explanation is correct, it means that the galaxies are all moving away from one another with increasing velocity as they move farther apart. Thus, Friedman and Hubble laid the foundation for the theory of the expanding universe. The theory was soon developed further by a Belgian theoretical astronomer, Georges Lemaître. He proposed that our universe started from a highly compressed and extremely hot state that he called the ‘primeval atom’. (Modern physicists would prefer the term ‘primeval nucleus’.) As this matter expanded, it gradually thinned out, cooled down and reaggregated in stars and galaxies, giving rise to the highly complex structure of the universe as we now know it to be.
 Until a few years ago the theory of the expanding universe lay under the cloud of a very serious contradiction. The measurements of the speed of flight of the galaxies and their distances from us indicated that the expansion had started about 1.8 billion years ago. On the other hand, measurements of the age of ancient rocks in the earth by the clock of radioactivity (i.e., the decay of uranium to lead) showed that some of the rocks were at least three billion years old; more recent estimates based on other radioactive elements raise the age of the earth's crust to almost five billion years. Clearly a universe 1.8 billion years old could not contain five-billion-year-old rocks! Happily the contradiction has now been disposed of by Walter Baade's recent discovery that the distance yardstick (based on the periods of variable stars) was faulty and that the distances between galaxies are more than twice as great as they were thought to be. This change in distances raises the age of the universe to five billion years or more.
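 The arithmetic behind these ages is simple to sketch (an illustration added to this post; the early expansion rate used below, roughly 500 km/s per megaparsec, is the approximate value Hubble originally obtained and is assumed here only for the example). The inferred age is essentially the reciprocal of the expansion rate, so doubling all the distance estimates, as Baade's correction did, roughly doubles the age.

# Illustrative sketch: the age of the expansion is roughly 1 / (expansion rate),
# so halving the rate (by doubling the distance scale) doubles the inferred age.
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

def expansion_age_years(rate_km_s_per_mpc):
    return KM_PER_MPC / rate_km_s_per_mpc / SECONDS_PER_YEAR

print(expansion_age_years(500.0) / 1e9)  # about 2 billion years, the early estimate
print(expansion_age_years(250.0) / 1e9)  # about 4 billion years once distances are doubled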
 Friedman's solution of Einstein's cosmological equation permits two kinds of universe. We can call one the ‘pulsating’ universe. This model says that when the universe has reached a certain maximum permissible expansion, it will begin to contract; that it will shrink until its matter has been compressed to a certain maximum density, possibly that of atomic nuclear material, which is a hundred million times denser than water; that it will then begin to expand again, and so on through the cycle ad infinitum. The other model is a ‘hyperbolic’ one: it suggests that from an infinitely thin state an eternity ago the universe contracted until it reached the maximum density, from which it rebounded to an unlimited expansion that will go on indefinitely in the future.
 The question whether our universe is ‘pulsating’ or ‘hyperbolic’ should be decidable from the present rate of its expansion. The situation is analogous to the case of a rocket shot from the surface of the earth. If the velocity of the rocket is less than seven miles per second, the ‘escape velocity’, the rocket will climb only to a certain height and then fall back to the earth. (If it were completely elastic, it would bounce up again, . . . and so on.) On the other hand, a rocket shot with a velocity of more than seven miles per second will escape from the earth's gravitational field and disappear into space. The case of the receding system of galaxies is very similar to that of an escape rocket, except that instead of just two interacting bodies, the rocket and the earth, we have an unlimited number of them escaping from one another. We find that the galaxies are fleeing from one another at seven times the velocity necessary for mutual escape.
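 The seven-miles-per-second figure quoted here can be recovered from the familiar escape-velocity formula v = sqrt(2GM/R); the short sketch below (added for illustration, using standard values for the earth's mass and radius) does the arithmetic.

# Illustrative sketch: escape velocity from the earth's surface, v = sqrt(2*G*M/R).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # mean radius of the earth, m

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)  # metres per second
print(v_escape / 1000.0)    # about 11.2 kilometres per second
print(v_escape / 1609.344)  # about 7 miles per second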
 Thus we may conclude that our universe corresponds to the ‘hyperbolic’ model, so that its present expansion will never stop. We must make one reservation. The estimate of the necessary escape velocity is based on the assumption that practically all the mass of the universe is concentrated in galaxies. If intergalactic space contained matter whose total mass was more than seven times that in the galaxies, we would have to reverse our conclusion and decide that the universe is pulsating. There has been no indication so far, however, that any matter exists in intergalactic space. It could have escaped detection only if it were in the form of pure hydrogen gas, without other gases or dust.
 Is the universe finite or infinite? This resolves itself into the question: Is the curvature of space positive or negative, closed like that of a sphere or open like that of a saddle? We can look for the answer by studying the geometrical properties of its three-dimensional space, just as we examined the properties of figures on two-dimensional surfaces. The most convenient property to investigate astronomically is the relation between the volume of a sphere and its radius.
 We saw that, in the two-dimensional case, the area of a circle increases with increasing radius at a faster rate on a negatively curved surface than on a Euclidean or flat surface; and that on a positively curved surface the relative rate of increase is slower. Similarly the increase of volume is faster in negatively curved space, slower in positively curved space. In Euclidean space the volume of a sphere would increase in proportion to the cube, or third power, of the increase in radius. In negatively curved space the volume would increase faster than this, in positively curved space, slower. Thus if we look into space and find that the volume of successively larger spheres, as measured by a count of the galaxies within them, increases faster than the cube of the distance to the limit of the sphere (the radius), we can conclude that the space of our universe has negative curvature, and therefore is open and infinite. Similarly, if the number of galaxies increases at a rate slower than the cube of the distance, we live in a universe of positive curvature, closed and finite.
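 The two-dimensional comparison invoked here can be made concrete with the textbook formulas for a circle of geodesic radius r drawn on a surface of curvature radius R (a sketch added for illustration; the spherical and saddle-shaped cases bracket the flat one).

# Illustrative sketch: area of a circle of geodesic radius r on flat, positively
# curved (spherical) and negatively curved (saddle-like) surfaces of curvature radius R.
import math

R = 1.0

def area_flat(r):
    return math.pi * r**2

def area_sphere(r):   # positive curvature: area grows more slowly than on a flat surface
    return 2 * math.pi * R**2 * (1 - math.cos(r / R))

def area_saddle(r):   # negative curvature: area grows faster than on a flat surface
    return 2 * math.pi * R**2 * (math.cosh(r / R) - 1)

for r in (0.5, 1.0, 1.5):
    print(r, area_sphere(r), area_flat(r), area_saddle(r))
# At every radius the spherical value is the smallest and the saddle value the largest,
# the two-dimensional analogue of the galaxy-counting test described in the text.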
 Following this idea, Hubble undertook to study the increase in number of galaxies with distance. He estimated the distances of the remote galaxies by their relative faintness: galaxies vary considerably in intrinsic brightness, but over a very large number of galaxies these variations are expected to average out. Hubble's calculations produced the conclusion that the universe is a closed system, a small universe only a few billion light-years in radius.
 We know now that the scale he was using was wrong: with the new yardstick the universe would be more than twice as large as he calculated. Nevertheless, there is a more fundamental doubt about his result. The whole method is based on the assumption that the intrinsic brightness of a galaxy remains constant. What if it changes with time? We are seeing the light of the distant galaxies as it was emitted at widely different times in the past-500 million, a billion, two billion years ago. If the stars in the galaxies are burning out, the galaxies must dim as they grow older. A galaxy two billion light-years away cannot be put on the same distance scale with a galaxy 500 million light-years away unless we take into account the fact that we are seeing the nearer galaxy at an older, and less bright, age. The remote galaxy is farther away than a mere comparison of the luminosity of the two would suggest.
 When a correction is made for the assumed decline in brightness with age, the more distant galaxies are spread out to farther distances than Hubble assumed. In fact, the calculations of volume are changed so drastically that we may have to reverse the conclusion about the curvature of space. We are not sure, because we do not yet know enough about the evolution of galaxies. Even so, if we find that galaxies wane in intrinsic brightness by only a few per cent in a billion years, we will have to conclude that space is curved negatively and the universe is infinite.
 In fact, there is another line of reasoning which supports the side of infinity. Our universe seems to be hyperbolic and ever-expanding. Mathematical solutions of fundamental cosmological equations indicate that such a universe is open and infinite.
