We have reviewed the questions that dominated the thinking of cosmologists during the first half of this century: the conception of a four-dimensional space-time continuum, of curved space, of an expanding universe and of a cosmos that is either finite or infinite. Now we must consider the major present issue in cosmology: Is the universe in truth evolving, or is it in a steady state of equilibrium that has always existed and will go on through eternity? Most cosmologists take the evolutionary view. All the same, in 1951 a group at the University of Cambridge, whose chief spokesman has been Fred Hoyle, advanced the steady-state idea. Essentially their theory is that the universe is infinite in space and time, that it has neither a beginning nor an end, that the density of its matter remains constant, that new matter is steadily being created in space at a rate that exactly compensates for the thinning of matter by expansion, that as a consequence new galaxies are continually being born, and that the galaxies of the universe therefore range in age from mere youngsters to veterans of 5, 10, 20 and more billions of years. In my opinion this theory must be considered very questionable because of the simple fact (apart from other reasons) that the galaxies in our neighbourhood all seem to be of the same age as our own Milky Way. However, the issue is many-sided and fundamental, and can be settled only by extended study of the universe as far as we can observe it . . . Let me now summarize the evolutionary theory.
We assume that the universe started from a very dense state of matter. In the early stages of its expansion, radiant energy was dominant over the mass of matter. We can measure energy and matter on a common scale by means of the well-known equation E = mc^2, which says that the energy equivalent of matter is the mass of the matter multiplied by the square of the velocity of light. Energy can be translated into mass, conversely, by dividing the energy quantity by c^2. Thus, we can speak of the ‘mass density’ of energy. Now at the beginning the mass density of the radiant energy was incomparably greater than the density of the matter in the universe. Yet in an expanding system the density of radiant energy decreases faster than does the density of matter. The former thins out as the fourth power of the distance of expansion: as the radius of the system doubles, the density of radiant energy drops to one sixteenth. The density of matter declines as the third power; a doubling of the radius means an eightfold increase in volume, and hence an eightfold decrease in density.
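In symbols (the notation here is supplied, not quoted from the passage), with R standing for the radius of the expanding system, these two scaling laws read:

```latex
E = mc^2, \qquad m = \frac{E}{c^2}
\rho_{\text{radiation}} \propto R^{-4}, \qquad \rho_{\text{matter}} \propto R^{-3}
% doubling R: \rho_{\text{radiation}} \to \rho_{\text{radiation}}/16,
%             \rho_{\text{matter}}   \to \rho_{\text{matter}}/8
```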
 Assuming that the universe at the beginning was under absolute rule by radiant energy, we can calculate that the temperature of the universe was 250 million degrees when it was one hour old, dropped to 6,000 degrees (the present temperature of our sun's surface) when it was 200,000 years old and had fallen to about 100 degrees below the freezing point of water when the universe reached its 250-millionth birthday.
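All three of these figures are consistent with a single radiation-era cooling law of the form T proportional to 1 over the square root of time. A minimal sketch, assuming the constant 1.5 × 10^10 (kelvins times the square root of seconds), which is a reconstruction chosen to match the numbers in the text rather than a value quoted in it:

```python
# Radiation-era cooling law T = C / sqrt(t). The constant C is an assumption
# chosen to reproduce the passage's three figures, not a quoted value.
from math import sqrt

C = 1.5e10      # kelvin * sqrt(seconds), assumed
YEAR = 3.156e7  # seconds in a year

for label, t in [("1 hour", 3600.0),
                 ("200,000 years", 2e5 * YEAR),
                 ("250 million years", 2.5e8 * YEAR)]:
    print(f"{label:>18}: T = {C / sqrt(t):,.0f} K")
# -> about 250,000,000 K; about 6,000 K; and about 170 K, i.e. roughly
#    100 degrees below water's freezing point (273 K), matching the text.
```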
This particular birthday was a crucial one in the life of the universe. It was the point at which the density of ordinary matter became greater than the mass density of radiant energy, because of the more rapid fall of the latter. The switch from the reign of radiation to the reign of matter profoundly changed matter's behaviour. During the eons of its subjugation to the will of radiant energy (i.e., light), it must have been spread uniformly through space in the form of thin gas. But as soon as matter became gravitationally more important than the radiant energy, it began to acquire a more interesting character. James Jeans, in his classic studies of the physics of such a situation, proved half a century ago that a gravitating gas filling a very large volume is bound to break up into individual ‘gas balls’, the size of which is determined by the density and the temperature of the gas. Thus in the year 250,000,000 A.B.E. (after the beginning of expansion), when matter was freed from the dictatorship of radiant energy, the gas broke up into giant gas clouds, slowly drifting apart as the universe continued to expand. Applying Jeans's mathematical formula for the process to the gas filling the universe at that time, we find that these primordial balls of gas would have had just about the mass that the galaxies of stars possess today. They were then only ‘protogalaxies’: cold, dark and chaotic. However, their gas soon condensed into stars and formed the galaxies as we see them now.
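The passage does not spell out Jeans's formula. In its modern textbook form, the critical mass above which a cloud of temperature T and density ρ collapses under its own gravity is M_J = (5kT/Gμm_H)^(3/2) (3/4πρ)^(1/2). The sketch below evaluates it for illustrative inputs only: the temperature is the one suggested by the previous paragraph, and the density is hypothetical.

```python
# Jeans mass in its modern textbook form; the input values are illustrative,
# not numbers taken from the passage.
from math import pi, sqrt

k_B = 1.381e-23    # Boltzmann constant, J/K
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.673e-27    # mass of a hydrogen atom, kg

def jeans_mass(T, rho, mu=1.0):
    """Critical mass above which a gas cloud collapses under its own gravity."""
    return (5 * k_B * T / (G * mu * m_H)) ** 1.5 * sqrt(3 / (4 * pi * rho))

M_sun = 1.989e30
M = jeans_mass(T=170.0, rho=1e-27)   # ~170 K from the cooling law; rho assumed
print(f"Jeans mass ~ {M / M_sun:.1e} solar masses")
# The result is of protogalactic order, but it varies strongly with the
# assumed density and temperature.
```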
A central question in this picture of the evolutionary universe is the problem of accounting for the formation of the varied kinds of matter composing it, i.e., the chemical elements . . . My belief is that at the start matter was composed simply of protons, neutrons and electrons. After five minutes the universe must have cooled enough to permit the aggregation of protons and neutrons into larger units, from deuterons (one neutron and one proton) up to the heaviest elements. This process must have ended after about thirty minutes, for by that time the temperature of the expanding universe must have dropped below the threshold of thermonuclear reactions among light elements, and the neutrons must have been used up in element-building or been converted to protons.
 To many, the statement that the present chemical constitution of our universe was decided in half an hour five billion years ago will sound nonsensical. However, consider a spot of ground on the atomic proving ground in Nevada where an atomic bomb was exploded three years ago. Within one microsecond the nuclear reactions generated by the bomb produced a variety of fission products. Today, 100 million-million microseconds later, the site is still ‘hot’ with the surviving fission products. The ratio of one microsecond to three years is the same as the ratio of half an hour to five billion years! If we can accept a time ratio of this order in the one case, why not in the other?
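The comparison is easy to verify; the following is a direct check of the two ratios, using only unit conversions and the figures given in the text:

```python
# Check the two time ratios compared in the text.
YEAR_S = 3.156e7                          # seconds per year

r_bomb     = 1e-6 / (3 * YEAR_S)          # one microsecond : three years
r_universe = 1800.0 / (5e9 * YEAR_S)      # half an hour : five billion years

print(f"{r_bomb:.2e}  vs  {r_universe:.2e}")   # ~1.1e-14 in both cases
```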
The late Enrico Fermi and Anthony L. Turkevich at the Institute for Nuclear Studies of the University of Chicago undertook a detailed study of thermonuclear reactions such as must have taken place during the first half hour of the universe's expansion. They concluded that the reactions would have produced about equal amounts of hydrogen and helium, making up 99 per cent of the total material, and about 1 per cent of deuterium. We know that hydrogen and helium do in fact make up about 99 per cent of the matter of the universe. This leaves us with the problem of building the heavier elements. I hold to the opinion that some of them were built by capture of neutrons. However, the absence of any stable nucleus of atomic weight five makes it improbable that the heavier elements could have been produced in the first half hour in the abundances now observed, and I agree that the lion's share of the heavy elements may well have been formed later in the hot interiors of stars.
All these theories of the origin, age, extent, composition and nature of the universe are becoming more subject to test by new instruments and new techniques . . . Nevertheless, we must not forget that the estimate of distances of the galaxies is still founded on the debatable assumption that the brightness of galaxies does not change with time. If galaxies diminish in brightness as they age, the calculations cannot be depended upon. Thus the question whether evolution is or is not taking place in the galaxies is of crucial importance at the present stage of our outlook on the universe.
In addition, certain branches of physical science focus on energy and its large-scale effects. Thermodynamics is the study of heat and the effects of converting heat into other kinds of energy. This branch of physics has a host of highly practical applications because heat is often used to power machines. Physicists also investigate electrical energy and the energy carried in electromagnetic waves. These include radio waves, light rays, and X-rays, forms of energy that are closely related and that all obey the same set of rules. Chemistry is the study of the composition of matter and the way different substances interact, subjects that involve physics on an atomic scale. In physical chemistry, chemists study the way physical laws govern chemical change, while in other branches of chemistry the focus is on particular chemicals themselves. For example, inorganic chemistry investigates substances found in the nonliving world and organic chemistry investigates carbon-based substances. Until the 19th century, these two areas of chemistry were thought to be separate and distinct, but today chemists routinely produce organic chemicals from inorganic raw materials. Organic chemists have learned how to synthesize many substances that are found in nature, together with hundreds of thousands that are not, such as plastics and pesticides. Many organic compounds, such as reserpine, a drug used to treat hypertension, cost less to produce by synthesis from inorganic raw materials than to isolate from natural sources. Many synthetic medicinal compounds can be modified to make them more effective than their natural counterparts, with fewer harmful side effects.
The branch of chemistry known as biochemistry deals solely with substances found in living things. It investigates the chemical reactions that organisms use to obtain energy and the reactions that they use to build themselves. Increasingly, this field of chemistry has become concerned not simply with chemical reactions themselves but also with how the shape of molecules influences the way they work. The result is the new field of molecular biology, one of the fastest-growing sciences today.
Physical scientists also study matter elsewhere in the universe, including the planets and stars. Astronomy is the science of the heavens, while astrophysics is a branch of astronomy that investigates the physical and chemical nature of stars and other objects. Astronomy deals largely with the universe as it appears today, but a related science called cosmology looks back in time to answer the greatest scientific questions of all: how the universe began and how it came to be as it is today.
The life sciences include all those areas of study that deal with living things. Biology is the general study of the origin, development, structure, function, evolution, and distribution of living things. Biology may be divided into botany, the study of plants; zoology, the study of animals; and microbiology, the study of microscopic organisms, such as bacteria, viruses, and fungi. Many single-celled organisms play important roles in life processes and thus are important to more complex forms of life, including plants and animals.
Genetics is the branch of biology that studies the way in which characteristics are transmitted from an organism to its offspring. In the latter half of the 20th century, new advances made it easier to study and manipulate genes at the molecular level, enabling scientists to catalogue all the genes found in each cell of the human body. Exobiology, a new and still speculative field, is the study of possible extraterrestrial life. Although Earth remains the only place known to support life, many believe that it is only a matter of time before scientists discover life elsewhere in the universe.
 While exobiology is one of the newest life sciences, anatomy is one of the oldest. It is the study of plant and animal structures, carried out by dissection or by using powerful imaging techniques. Gross anatomy deals with structures that are large enough to see, while microscopic anatomy deals with much smaller structures, down to the level of individual cells.
Physiology explores how living things work. Physiologists study processes such as cellular respiration and muscle contraction, as well as the systems that keep these processes under control. Their work helps to answer questions about one of the key characteristics of life, the fact that most living things maintain a steady internal state even when the environment around them constantly changes.
 Together, anatomy and physiology form two of the most important disciplines in medicine, the science of treating injury and human disease. General medical practitioners have to be familiar with human biology as a whole, but medical science also includes a host of clinical specialties. They include sciences such as cardiology, urology, and oncology, which investigate particular organs and disorders, and pathology, the general study of disease and the changes that it causes in the human body.
 As well as working with individual organisms, life scientists also investigate the way living things interact. The study of these interactions, known as ecology, has become a key area of study in the life sciences as scientists become increasingly concerned about the disrupting effects of human activities on the environment.
The social sciences explore human society past and present, and the way human beings behave. They include sociology, which investigates the way society is structured and how it functions, as well as psychology, which is the study of individual behaviour and the mind. Social psychology draws on research in both these fields. It examines the way society influences people's behaviour and attitudes.
Another social science, anthropology, looks at humans as a species and examines all the characteristics that make us what we are. These include not only how people relate to each other but also how they interact with the world around them, both now and in the past. As part of this work, anthropologists often carry out long-term studies of particular groups of people in different parts of the world. This kind of research helps to identify characteristics that all human beings share, and those that are the products of a particular culture, learned from others and handed down from generation to generation.
The social sciences also include political science, law, and economics, which are products of human society. Although far removed from the world of the physical sciences, all these fields can be studied in a scientific way. Political science and law are uniquely human concepts, but economics has some surprisingly close parallels with ecology. This is because the laws that govern resource use, productivity, and efficiency do not operate only in the human world, with its stock markets and global corporations, but in the nonhuman world as well. In technology, scientific knowledge is put to practical ends. This knowledge comes chiefly from mathematics and the physical sciences, and it is used in designing machinery, materials, and industrial processes. Overall, this work is known as engineering, a word dating back to the early days of the Industrial Revolution, when an ‘engine’ was any kind of machine.
 Engineering has many branches, calling for a wide variety of different skills. For example, aeronautical engineers need expertise in the science of fluid flow, because aeroplanes fly through air, which is a fluid. Using wind tunnels and computer models, aeronautical engineers strive to minimize the air resistance generated by an aeroplane, while at the same time maintaining a sufficient amount of lift. Marine engineers also need detailed knowledge of how fluids behave, particularly when designing submarines that have to withstand extra stresses when they dive deep below the water’s surface. In civil engineering, stress calculations ensure that structures such as dams and office towers will not collapse, particularly if they are in earthquake zones. In computing, engineering takes two forms: hardware design and software design. Hardware design refers to the physical design of computer equipment (hardware). Software design is carried out by programmers who analyse complex operations, reducing them to a series of small steps written in a language recognized by computers.
In recent years, a completely new field of technology has developed from advances in the life sciences. Known as biotechnology, it involves such varied activities as genetic engineering, the manipulation of the genetic material of cells or organisms, and cloning, the formation of genetically uniform cells, plants, or animals. Although the field is still in its infancy, many scientists believe that biotechnology will play a major role in many fields, including food production, waste disposal, and medicine. Science exists because humans have a natural curiosity and an ability to organize and record things. Curiosity is a characteristic shown by many other animals, but organizing and recording knowledge is a skill demonstrated by humans alone.
During prehistoric times, humans recorded information in a rudimentary way. They made paintings on the walls of caves, and they also carved numerical records on bones or stones. They may also have used other ways of recording numerical figures, such as making knots in leather cords, but because these records were perishable, no traces of them remain. Then, with the invention of writing about 6,000 years ago, a new and much more flexible system of recording knowledge appeared.
 The earliest writers were the people of Mesopotamia, who lived in a part of present-day Iraq. Initially they used a pictographic script, inscribing tallies and lifelike symbols on tablets of clay. With the passage of time, these symbols gradually developed into cuneiform, a much more stylized script composed of wedge-shaped marks.
Because clay is durable, many of these ancient tablets still survive. They show that, when writing first appeared, the Mesopotamians already had a basic knowledge of mathematics, astronomy, and chemistry, and that they used symptoms to identify common diseases. During the following 2,000 years, as Mesopotamian culture became increasingly sophisticated, mathematics in particular became a flourishing science. Knowledge accumulated rapidly, and by 1000 BC the earliest private libraries had appeared.
Southwest of Mesopotamia, in the Nile Valley of northeastern Africa, the ancient Egyptians developed their own form of pictographic script, writing on papyrus or inscribing text in stone. Written records from 1500 BC show that, like the Mesopotamians, the Egyptians had a detailed knowledge of diseases. They were also keen astronomers and skilled mathematicians, a fact demonstrated by the almost perfect symmetry of the pyramids and by other remarkable structures they built.
 For the peoples of Mesopotamia and ancient Egypt, knowledge was recorded mainly for practical needs. For example, astronomical observations enabled the development of early calendars, which helped in organizing the farming year. Yet in ancient Greece, often recognized as the birthplace of Western science, a new scientific enquiry began. Here, philosophers sought knowledge largely for its own sake.
Thales of Miletus was one of the first Greek philosophers to seek natural causes for natural phenomena. He travelled widely throughout Egypt and the Middle East and became famous for predicting a solar eclipse that occurred in 585 BC. At a time when people regarded eclipses as ominous, inexplicable, and frightening events, his prediction marked the start of rationalism, a belief that the universe can be explained by reason alone. Rationalism remains the hallmark of science to this day.
Thales and his successors speculated about the nature of matter and of Earth itself. Thales himself believed that Earth was a flat disk floating on water, but the followers of Pythagoras, one of ancient Greece's most celebrated mathematicians, believed that Earth was spherical. These followers also thought that Earth moved in a circular orbit, not around the Sun but around a central fire. Although flawed and widely disputed, this bold suggestion marked an important development in scientific thought: the idea that Earth might not be, after all, the centre of the universe. At the other end of the spectrum of scientific thought, the Greek philosopher Leucippus and his student Democritus of Abdera proposed that all matter is made up of indivisible atoms, more than 2,000 years before the idea became a part of modern science.
As well as investigating natural phenomena, ancient Greek philosophers also studied the nature of reasoning. At the two great schools of Greek philosophy in Athens, the Academy, founded by Plato, and the Lyceum, founded by Plato's pupil Aristotle, students learned how to reason in a structured way using logic. The methods taught at these schools included induction, which involves taking particular cases and using them to draw general conclusions, and deduction, the process of correctly inferring new facts from something already known.
In the two centuries that followed Aristotle's death in 322 BC, Greek philosophers made remarkable progress in a number of fields. By comparing the Sun's height above the horizon in two different places, the mathematician, astronomer, and geographer Eratosthenes calculated Earth's circumference, producing a figure accurate to within about one per cent. Another celebrated Greek mathematician, Archimedes, laid the foundations of mechanics. He also pioneered the science of hydrostatics, the study of the behaviour of fluids at rest. In the life sciences, Theophrastus founded the science of botany, providing detailed and vivid descriptions of a wide variety of plant species as well as investigating the germination process in seeds.
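Eratosthenes' method can be reconstructed in a few lines. The traditional figures, a 7.2 degree difference in the Sun's altitude between Syene and Alexandria and a separation of about 5,000 stadia, are not given in the passage, so they are supplied here as the usual textbook values:

```python
# Eratosthenes' circumference estimate, using the traditional textbook figures
# (not values quoted in the passage).
angle_deg  = 7.2     # difference in the Sun's altitude between the two sites
separation = 5000    # distance between the sites, in stadia (assumed value)

circumference = 360 / angle_deg * separation
print(f"{circumference:,.0f} stadia")   # 250,000 stadia; with a ~157.5 m
                                        # stadion, close to the modern 40,000 km
```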
By the 1st century BC, Roman power was growing and Greek influence had begun to wane. During the Roman period, the Alexandrian geographer and astronomer Ptolemy charted the known planets and stars, putting Earth firmly at the centre of the universe, and Galen, a physician of Greek origin, wrote important works on anatomy and physiology. Although skilled soldiers, lawyers, engineers, and administrators, the Romans had little interest in basic science. As a result, science advanced little in the days of the Roman Empire. In Athens, the Lyceum and Academy were closed down in AD 529, bringing the first flowering of rationalism to an end.
For more than nine centuries, from about AD 500 to 1400, Western Europe made only a minor contribution to scientific thought. European philosophers became preoccupied with alchemy, a secretive and mystical pseudoscience that held out the illusory promise of turning inferior metals into gold. Alchemy did lead to some discoveries, such as sulfuric acid, which was first described in the early 1300s, but elsewhere, particularly in China and the Arab world, much more significant progress in the sciences was made.
Chinese science developed in isolation from Europe, and followed a different pattern. Unlike the Greeks, who prized knowledge as an end in itself, the Chinese excelled at turning scientific discoveries to practical ends. The list of their technological achievements is dazzling: it includes the compass, invented in about AD 270; wood-block printing, developed around 700; and gunpowder and movable type, both invented around the year 1000. The Chinese were also capable mathematicians and excellent astronomers. In mathematics, they calculated the value of π (pi) to within seven decimal places by the year 600, while in astronomy, one of their most celebrated observations was that of the supernova, or stellar explosion, that took place in the Crab Nebula in 1054. China was also the source of the world's oldest portable star map, dating from about AD 940.
The Islamic world, which in medieval times extended as far west as Spain, also produced many scientific breakthroughs. The Arab mathematician Muhammad al-Khwarizmi introduced Hindu-Arabic numerals to Europe many centuries after they had been devised in southern Asia. Unlike the numerals used by the Romans, Hindu-Arabic numerals include zero, a mathematical device unknown in Europe at the time. The value of Hindu-Arabic numerals depends on their place: in the number 300, for example, the numeral three is worth ten times as much as in thirty. Al-Khwarizmi also wrote on algebra (a word derived from the Arabic al-jabr), and his name survives in the word algorithm, a concept of great importance in modern computing.
 In astronomy, Arab observers charted the heavens, giving many of the brightest stars the names we use today, such as Aldebaran, Altair, and Deneb. Arab scientists also explored chemistry, developing methods to manufacture metallic alloys and test the quality and purity of metals. As in mathematics and astronomy, Arab chemists left their mark in some of the names they used-alkali and alchemy, for example, are both words of Arabic origin. Arab scientists also played a part in developing physics. One of the most famous Egyptian physicists, Alhazen, published a book that dealt with the principles of lenses, mirrors, and other devices used in optics. In this work, he rejected the then-popular idea that eyes give out light rays. Instead, he correctly deduced that eyes work when light rays enter the eye from outside.
 In Europe, historians often attribute the rebirth of science to a political event-the capture of Constantinople (now Istanbul) by the Turks in 1453. At the time, Constantinople was the capital of the Byzantine Empire and a major seat of learning. Its downfall led to an exodus of Greek scholars to the West. In the period that followed, many scientific works, including those originally from the Arab world, were translated into European languages. Through the invention of the movable type printing press by Johannes Gutenberg around 1450, copies of these texts became widely available.
The Black Death, a recurring outbreak of bubonic plague that began in 1347, disrupted the progress of science in Europe for more than two centuries. However, in 1543 two books were published that had a profound impact on scientific progress. One was De Corporis Humani Fabrica (On the Structure of the Human Body, seven volumes, 1543), by the Belgian anatomist Andreas Vesalius. Vesalius studied anatomy in Italy, and his masterpiece, which was illustrated by superb woodcuts, corrected errors and misunderstandings about the body that had persisted since the time of Galen, more than 1,300 years before. Unlike Islamic physicians, whose religion prohibited them from dissecting human cadavers, Vesalius investigated the human body in minute detail. As a result, he set new standards in anatomical science, creating a reference work of unique and lasting value.
The other book of great significance published in 1543 was De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres), written by the Polish astronomer Nicolaus Copernicus. In it, Copernicus rejected the idea that Earth was the centre of the universe, as elaborated by Ptolemy in the 2nd century AD. Instead, he set out to prove that Earth, together with the other planets, follows an orbit around the Sun. Other astronomers opposed Copernicus's ideas, and more ominously, so did the Roman Catholic Church. In the early 1600s, the church placed the book on a list of forbidden works, where it remained for more than two centuries. Despite this ban and despite the book's inaccuracies (for instance, Copernicus believed that Earth's orbit was circular rather than elliptical), De Revolutionibus remained a momentous achievement. It also marked the start of a conflict between science and religion that has dogged Western thought ever since.
 In the first decade of the 17th century, the invention of the telescope provided independent evidence to support Copernicus's views. Italian physicist and astronomer Galileo Galilei used the new device to remarkable effect. He became the first person to observe satellites circling Jupiter, the first to make detailed drawings of the surface of the Moon, and the first to see how Venus waxes and wanes as it circles the Sun.
These observations of Venus helped to convince Galileo that Copernicus's Sun-centred view of the universe had been correct, but he fully understood the danger of supporting such heretical ideas. His Dialogue on the Two Chief World Systems, Ptolemaic and Copernican, published in 1632, was carefully crafted to avoid controversy. Even so, he was summoned before the Inquisition (a tribunal established by the pope for judging heretics) the following year and, under threat of torture, forced to recant.
Nicolaus Copernicus (1473-1543) developed the first heliocentric theory of the universe in the modern era, presented in De Revolutionibus Orbium Coelestium, published in the year of his death. The system is entirely mathematical, in the sense of predicting the observed positions of celestial bodies on the basis of an underlying geometry, without exploring the mechanics of celestial motion. Its mathematical and scientific superiority over the Ptolemaic system was not as direct as popular history suggests: Copernicus's system adhered to circular planetary motion and let the planets run on forty-eight epicycles and eccentrics. It was not until the work of Kepler and Galileo that the system became markedly simpler than Ptolemaic astronomy.
 The publication of Nicolaus Copernicus's De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres) in 1543 is traditionally considered the inauguration of the scientific revolution. Ironically, Copernicus had no intention of introducing radical ideas into cosmology. His aim was only to restore the purity of ancient Greek astronomy by eliminating novelties introduced by Ptolemy. With such an aim in mind he modelled his own book, which would turn astronomy upside down, on Ptolemy's Almagest. At the core of the Copernican system, as with that of Aristarchus before him, is the concept of the stationary Sun at the centre of the universe, and the revolution of the planets, Earth included, around the Sun. The Earth was ascribed, in addition to an annual revolution around the Sun, a daily rotation around its axis.
Copernicus's greatest achievement is his legacy. By introducing mathematical reasoning into cosmology, he dealt a severe blow to Aristotelian commonsense physics. His concept of an Earth in motion launched the notion of the Earth as a planet. His explanation that he had been unable to detect stellar parallax because of the enormous distance of the sphere of the fixed stars opened the way for future speculation about an infinite universe. Nevertheless, Copernicus still clung to many traditional features of Aristotelian cosmology. He continued to advocate the entrenched view of the universe as a closed world and to see the motion of the planets as uniform and circular. Thus, in evaluating Copernicus's legacy, it should be noted that he set the stage for far more daring speculations than he himself could make. The heavy metaphysical underpinning of Kepler's laws, combined with an obscure style and a demanding mathematics, caused most contemporaries to ignore his discoveries. Even his Italian contemporary Galileo Galilei, who corresponded with Kepler and possessed his books, never referred to the three laws. Instead, Galileo provided the two important elements missing from Kepler's work: a new science of dynamics that could be employed in an explanation of planetary motion, and a staggering new body of astronomical observations. The observations were made possible by the invention of the telescope in Holland c. 1608 and by Galileo's ability to improve on this instrument without having ever seen the original. Thus equipped, he turned his telescope skyward, and saw some spectacular sights.
The results of his discoveries were immediately published in the Sidereus nuncius (The Starry Messenger) of 1610. Galileo observed that the Moon was very similar to the Earth, with mountains, valleys, and oceans, and not at all the perfect, smooth spherical body it was claimed to be. He also discovered four moons orbiting Jupiter. As for the Milky Way, instead of being a stream of light, it was in fact a large aggregate of stars. Later observations resulted in the discovery of sunspots, the phases of Venus, and that strange phenomenon that would later be designated as the rings of Saturn.
Having announced these sensational astronomical discoveries, which reinforced his conviction of the reality of the heliocentric theory, Galileo resumed his earlier studies of motion. He now attempted to construct a comprehensive new science of mechanics necessary in a Copernican world, and the results of his labours were published in Italian in two epoch-making books: Dialogue Concerning the Two Chief World Systems (1632) and Discourses and Mathematical Demonstrations Concerning the Two New Sciences (1638). His studies of projectiles and free-falling bodies brought him very close to the full formulation of the laws of inertia and acceleration (the first two laws of Isaac Newton). Galileo's legacy includes both the modern notion of ‘laws of nature’ and the idea of mathematics as nature's true language. He contributed to the mathematization of nature and the geometrization of space, as well as to the mechanical philosophy that would dominate the 17th and 18th centuries. Perhaps most important, it is largely due to Galileo that experiments and observations serve as the cornerstone of scientific reasoning.
Today, Galileo is remembered equally well because of his conflict with the Roman Catholic church. His uncompromising advocacy of Copernicanism after 1610 was responsible, in part, for the placement of Copernicus's De Revolutionibus on the Index of Forbidden Books in 1616. At the same time, Galileo was warned not to teach or defend Copernicanism in public. The election of Galileo's friend Maffeo Barberini as Pope Urban VIII in 1623 filled Galileo with the hope that such a verdict could be revoked. With perhaps some unwarranted optimism, Galileo set to work to complete his Dialogue (1632). However, Galileo underestimated the power of the enemies he had made during the previous two decades, particularly some Jesuits who had been the target of his acerbic tongue. The outcome was that Galileo was summoned to Rome and there forced to abjure, on his knees, the views he had expressed in his book. Ever since, Galileo has been portrayed as a victim of a repressive church and a martyr in the cause of freedom of thought; as such, he has become a powerful symbol.
Despite his passionate advocacy of Copernicanism and his fundamental work in mechanics, Galileo continued to accept the age-old views that planetary orbits were circular and the cosmos an enclosed world. These beliefs, as well as a reluctance to apply mathematics as rigorously to astronomy as he had applied it to terrestrial mechanics, prevented him from arriving at the correct law of inertia. Thus, it remained for Isaac Newton to unite heaven and Earth in his immense intellectual achievement, the Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), which was published in 1687. The first book of the Principia contained Newton's three laws of motion. The first expounds the law of inertia: every body persists in a state of rest or uniform motion in a straight line unless compelled to change such a state by an impressed force. The second is the law of acceleration, according to which the change of motion of a body is proportional to the force acting upon it and takes place in the direction of the straight line along which that force is impressed. The third, and most original, law ascribes to every action an opposite and equal reaction. These laws governing terrestrial motion were extended to include celestial motion in book three of the Principia, where Newton formulated his most famous law, the law of gravitation: every body in the universe attracts every other body with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them.
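In modern algebraic notation (Newton's own formulation was geometrical, so the symbols below are a present-day paraphrase, not his), the four laws read:

```latex
\mathbf{F} = 0 \;\Rightarrow\; \mathbf{v} = \text{constant}   % first law: inertia
\mathbf{F} = m\mathbf{a}                                       % second law: acceleration
\mathbf{F}_{AB} = -\mathbf{F}_{BA}                             % third law: action and reaction
F = G\,\frac{m_1 m_2}{r^2}                                     % law of universal gravitation
```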
The Principia is deservedly considered one of the greatest scientific masterpieces of all time. In 1704, Newton published his second great work, the Opticks, in which he formulated his corpuscular theory of light and his theory of colours. In later editions Newton appended a series of ‘queries’ concerning various related topics in natural philosophy. These speculative, and sometimes metaphysical, statements on such issues as light, heat, ether, and matter proved most productive during the 18th century, when the book and the experimental method it propagated became immensely popular.
The 17th-century French scientist and mathematician René Descartes was also one of the most influential thinkers in Western philosophy. Descartes stressed the importance of skepticism in thought and proposed the idea that existence had a dual nature: one physical, the other mental. The latter concept, known as Cartesian dualism, continues to engage philosophers today. This passage from Discourse on Method (first published in his Philosophical Essays in 1637) contains a summary of his thesis, which includes the celebrated phrase “I think, therefore I am.”
Then examining attentively what I was, and seeing that I could pretend that I had no body and that there was no world or place that I [was] in, but that I could not, for all that, pretend that I did not exist, and that, on the contrary, from the very fact that I thought of doubting the truth of other things, it followed very evidently and very certainly that I existed; while, on the other hand, if I had only ceased to think, although all the rest of what I had ever imagined had been true, I would have had no reason to believe that I existed; I thereby concluded that I was a substance, of which the whole essence or nature consists in thinking, and which, in order to exist, needs no place and depends on no material thing; so that this ‘I’, which is to say, the mind, by which I am what I am, is entirely distinct from the body, and is even easier to know than the body, and moreover that even if the body were not, the mind would not cease to be all that it is.
After this, I considered in general what is needed for a proposition to be true and certain; for, since I had just found one that I knew to be so, I thought I ought also to know what this certainty consists in. Having marked and noted that there is nothing at all in this, I think, therefore I am, which assures me that I am speaking the truth, except that I see very clearly that in order to think one must exist, I judged that I could take it to be a general rule that the things we conceive very clearly and very distinctly are all true, but that there is nevertheless some difficulty in being able to recognize for certain which are the things we conceive distinctly.
Following this, reflecting on the fact that I had doubts, and that consequently my being was not perfect, for I saw clearly that it was a greater perfection to know than to doubt, I decided to inquire from what place I had learned to think of something more perfect than myself; and I clearly recognized that this must have been from some nature that was in fact more perfect. As for the notions I had of several other things outside myself, such as the sky, the earth, light, heat and a thousand others, I had not the same concern to know their source, because, seeing nothing in them that seemed to make them superior to me, I could believe that, if they were true, they were dependencies of my nature, in so far as it had some perfection; and, if they were not, that I held them from nothing, that is to say that they were in me because of an imperfection in my nature. Nevertheless, I could not make the same judgement concerning the idea of a being more perfect than myself; for to hold it from nothing was something manifestly impossible; and because it is no less contradictory that the more perfect should proceed from and depend on the less perfect, than it is that something should emerge out of nothing, I could not hold it from myself; with the result that it remained that it must have been put into me by a being whose nature was truly more perfect than mine and which even had in it all the perfections of which I could have any idea, which is to say, in a word, which was God. To which I added that, since I knew some perfections that I did not have, I was not the only being that existed (I will freely use here, with your permission, the terms of the School), but that there must be another more perfect being, upon whom I depended, and from whom I had acquired all I had; for, if I had been alone and independent of all others, so as to have had from myself this small portion of perfection that I had by participation in the perfection of God, I could have given myself, by the same reason, all the remainder of perfection that I knew myself to lack, and thus to be myself infinite, eternal, immutable, omniscient, all-powerful, and finally to have all the perfections that I could observe to be in God. For, following the reasoning by which I had proved the existence of God, in order to understand the nature of God as far as my own nature was capable of doing, I had only to consider, concerning all the things of which I found in myself some idea, whether it was a perfection or not to have them: and I was assured that none of those that indicated some imperfection was in him, but that all the others were. So I saw that doubt, inconstancy, sadness and similar things could not be in him, seeing that I myself would have been very pleased to be free from them. Then, further, I had ideas of many sensible and bodily things; for even supposing that I was dreaming, and that everything I saw or imagined was false, I could not, nevertheless, deny that the ideas were really in my thoughts.
However, because I had already recognized in myself very clearly that intelligent nature is distinct from the corporeal, considering that all composition is evidence of dependency, and that dependency is manifestly a defect, I thence judged that it could not be a perfection in God to be composed of these two natures, and that, consequently, he was not so composed, but that, if there were any bodies in the world or any intelligence or other natures that were not wholly perfect, their existence must depend on his power, in such a way that they could not subsist without him for a single instant.
I set out after that to seek other truths; and turning to the object of the geometers [geometry], which I conceived as a continuous body, or a space extended indefinitely in length, width and height or depth, divisible into various parts, which could have various figures and sizes and be moved or transposed in all sorts of ways, for the geometers take all that to be in the object of their study, I went through some of their simplest proofs. Having observed that the great certainty that everyone attributes to them is based only on the fact that they are clearly conceived according to the rule I spoke of earlier, I noticed also that they had nothing at all in them that might assure me of the existence of their object. Thus, for example, I very well perceived that, supposing a triangle to be given, its three angles must be equal to two right-angles, but I saw nothing, for all that, which assured me that any such triangle existed in the world; whereas, reverting to the examination of the idea I had of a perfect Being, I found that existence was comprised in that idea in the same way that the equality of the three angles of a triangle to two right angles is comprised in the idea of a triangle or, as in the idea of a sphere, the fact that all its parts are equidistant from its centre, or even more obviously so; and that consequently it is at least as certain that God, who is this perfect Being, is, or exists, as any geometric demonstration can be.
The impact of the Newtonian accomplishment was enormous. Newton's two great books resulted in the establishment of two traditions that, though often mutually exclusive, nevertheless permeated every area of science. The first was the mathematical and reductionist tradition of the Principia, which, like René Descartes's mechanical philosophy, propagated a rational, well-regulated image of the universe. The second was the experimental tradition of the Opticks, in a measure less demanding than the mathematical tradition and, owing to the speculative and suggestive queries appended to the Opticks, highly applicable to chemistry, biology, and the other new scientific disciplines that began to flourish in the 18th century. This is not to imply that everyone in the scientific establishment was, or would be, a Newtonian. Newtonianism had its share of detractors. Instead, the Newtonian achievement was so great, and its applicability to other disciplines so strong, that although Newtonian science could be argued against, it could not be ignored. In fact, in the physical sciences an initial reaction against universal gravitation occurred. For many, the concept of action at a distance seemed to hark back to those occult qualities with which the mechanical philosophy of the 17th century had done away. By the second half of the 18th century, however, universal gravitation would be proved correct, thanks to the work of Leonhard Euler, A. C. Clairaut, and Pierre-Simon de Laplace, the last of whom announced the stability of the solar system in his masterpiece Celestial Mechanics (1799-1825).
Newton's influence was not confined to the domain of the natural sciences. The philosophes of the 18th-century Enlightenment sought to apply scientific methods to the study of human society. To them, the empiricist philosopher John Locke was the first person to attempt this. They believed that in his Essay Concerning Human Understanding (1690) Locke did for the human mind what Newton had done for the physical world. Although Locke's psychology and epistemology were to come under increasing attack as the 18th century advanced, other thinkers such as Adam Smith, David Hume, and Abbé de Condillac would aspire to become the Newtons of the mind or the moral realm. These confident, optimistic men of the Enlightenment argued that there must exist universal human laws that transcend differences of human behaviour and the variety of social and cultural institutions. Labouring under such an assumption, they sought to uncover these laws and apply them to the new society that they hoped to bring about.
As the 18th century progressed, the optimism of the philosophes waned and a reaction began to set in. Its first manifestation occurred in the religious realm. The mechanistic interpretation of the world, shared by Newton and Descartes, had, in the hands of the philosophes, led to materialism and atheism. Thus, by mid-century the stage was set for a revivalist movement, which took the form of Methodism in England and Pietism in Germany. By the end of the century the romantic reaction had begun. Fuelled in part by religious revivalism, the romantics attacked the extreme rationalism of the Enlightenment, the impersonalization of the mechanistic universe, and the contemptuous attitude of "mathematicians" toward imagination, emotions, and religion.
The romantic reaction, however, was not anti-scientific; its adherents rejected a specific type of mathematical science, not the entire enterprise. In fact, the romantic reaction, particularly in Germany, would give rise to a creative movement, the Naturphilosophie, that in turn would be crucial for the development of the biological and life sciences in the 19th century, and would nourish the metaphysical foundation necessary for the emergence of the concepts of energy, forces, and conservation.
Thus, in classical physics, external reality consisted of inert and inanimate matter moving in accordance with wholly deterministic natural laws, and collections of discrete atomized parts constituted wholes. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. Nevertheless, in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.
 Even though instruction at Cambridge was still dominated by the philosophy of Aristotle, some freedom of study was permitted in the student's third year. Newton immersed himself in the new mechanical philosophy of Descartes, Gassendi, and Boyle; in the new algebra and analytical geometry of Vieta, Descartes, and Wallis; and in the mechanics and Copernican astronomy of Galileo. At this stage Newton showed no great talent. His scientific genius emerged suddenly when the plague closed the University in the summer of 1665 and he had to return to Lincolnshire. There, within eighteen months he began revolutionary advances in mathematics, optics, physics, and astronomy.
During the plague years Newton laid the foundation for elementary differential and integral calculus, several years before its independent discovery by the German philosopher and mathematician Leibniz. The ‘method of fluxions’, as he termed it, was based on his crucial insight that the integration of a function (or finding the area under its curve) is merely the inverse procedure to differentiating it (or finding the slope of the curve at any point). Taking differentiation as the basic operation, Newton produced simple analytical methods that unified a host of disparate techniques previously developed on a piecemeal basis to deal with such problems as finding areas, tangents, the lengths of curves, and their maxima and minima. Even though Newton could not fully justify his methods (rigorous logical foundations for the calculus were not developed until the 19th century), he receives the credit for developing a powerful tool of problem solving and analysis in pure mathematics and physics. Isaac Barrow, a Fellow of Trinity College and Lucasian Professor of Mathematics in the University, was so impressed by Newton's achievement that when he resigned his chair in 1669 to devote himself to theology, he recommended that the 27-year-old Newton take his place.
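That inverse relationship can be illustrated numerically in a few lines, stated here in modern terms rather than Newton's fluxional notation: integrate a test function cumulatively, differentiate the result, and recover the original function.

```python
# Numerical check that differentiation undoes integration (the insight behind
# the 'method of fluxions'), in modern terms rather than Newton's own notation.
import numpy as np

f = lambda t: t**2                       # any smooth test function
x = np.linspace(0.0, 2.0, 20001)
dx = x[1] - x[0]

# cumulative trapezoid rule: F[i] approximates the integral of f from 0 to x[i]
F = np.concatenate(([0.0], np.cumsum((f(x[1:]) + f(x[:-1])) * dx / 2)))

dFdx = np.gradient(F, x, edge_order=2)   # numerical derivative of the integral
print(np.max(np.abs(dFdx - f(x))))       # tiny (~1e-8): d/dx of the integral is f
```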
Newton's initial lectures as Lucasian Professor dealt with optics, including his remarkable discoveries made during the plague years. He had reached the revolutionary conclusion that white light is not a simple, homogeneous entity, as natural philosophers since Aristotle had believed. When he passed a thin beam of sunlight through a glass prism, he noted the oblong spectrum of colours (red, yellow, green, blue, violet) that formed on the wall opposite. Newton showed that the spectrum was too long to be explained by the accepted theory of the bending (or refraction) of light by dense media. The old theory said that all rays of white light striking the prism at the same angle would be equally refracted. Newton argued that white light is really a mixture of many different types of rays, that the different types of rays are refracted at different angles, and that each different type of ray is responsible for producing a given spectral colour. A so-called crucial experiment confirmed the theory. Newton selected out of the spectrum a narrow band of light of one colour. He sent it through a second prism and observed that no further elongation occurred. All the selected rays of one colour were refracted at the same angle.
 These discoveries led Newton to the logical, but erroneous, conclusion that telescopes using refracting lenses could never overcome the distortions of chromatic dispersion. He therefore proposed and constructed a reflecting telescope, the first of its kind, and the prototype of the largest modern optical telescopes. In 1671 he donated an improved version to the Royal Society of London, the foremost scientific society of the day. As a consequence, he was elected a fellow of the society in 1672. Later that year Newton published his first scientific paper in the Philosophical Transactions of the society. It dealt with the new theory of light and colour and is one of the earliest examples of the short research paper.
Newton's paper was well received, but two leading natural philosophers, Robert Hooke and Christiaan Huygens, rejected Newton's naive claim that his theory was simply derived with certainty from experiments. In particular they objected to what they took to be Newton's attempt to prove by experiment alone that light consists in the motion of small particles, or corpuscles, rather than in the transmission of waves or pulses, as they both believed. Although Newton's subsequent denial of the use of hypotheses was not convincing, his ideas about scientific method won universal assent, along with his corpuscular theory, which reigned until the wave theory was revived in the early 19th century.
The debate soured Newton's relations with Hooke. Newton withdrew from public scientific discussion for about a decade after 1675, devoting himself to chemical and alchemical researches. He delayed the publication of a full account of his optical researches until after the death of Hooke in 1703. Newton's Opticks appeared the following year. It dealt with the theory of light and colour and with Newton's investigations of the colours of thin films, of ‘Newton's rings’, and of the phenomenon of diffraction of light. To explain some of his observations he had to graft elements of a wave theory of light onto his basically corpuscular theory.
Newton's greatest achievement was his work in physics and celestial mechanics, which culminated in the theory of universal gravitation. Even though Newton also began this research in the plague years, the story that he discovered universal gravitation in 1666 while watching an apple fall from a tree in his garden is a myth. By 1666, Newton had formulated early versions of his three laws of motion. He had also discovered the law stating the centrifugal force (or force away from the centre) of a body moving uniformly in a circular path. However, he still believed that the earth's gravity and the motions of the planets might be caused by the action of whirlpools, or vortices, of small corpuscles, as Descartes had claimed. Moreover, although he knew the law of centrifugal force, he did not have a correct understanding of the mechanics of circular motion. He thought of circular motion as the result of a balance between two forces, one centrifugal, the other centripetal (toward the centre), rather than as the result of one force, a centripetal force, which constantly deflects the body away from its inertial path in a straight line.
 Newton's great insight of 1666 was to imagine that the Earth's gravity extended to the Moon, counterbalancing its centrifugal force. From his law of centrifugal force and Kepler's third law of planetary motion, Newton deduced that the centrifugal (and hence centripetal) forces of the Moon or of any planet must decrease as the inverse square of its distance from the centre of its motion. For example, if the distance is doubled, the force becomes one-fourth as much; if distance is trebled, the force becomes one-ninth as much. This theory agreed with Newton's data to within about 11%.
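The deduction sketched here can be written out in a line or two of modern notation (a reconstruction, not Newton's own working). For uniform circular motion of radius r and period T:

```latex
F = \frac{m v^2}{r}, \qquad v = \frac{2\pi r}{T}
\;\Rightarrow\; F = \frac{4\pi^2 m r}{T^2};
\quad \text{with Kepler's third law } T^2 = k r^3, \quad
F = \frac{4\pi^2 m}{k}\,\frac{1}{r^2}.
```

Doubling r thus quarters the force, and trebling it reduces the force ninefold, as stated above.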
 In 1679, Newton returned to his study of celestial mechanics when his adversary Hooke drew him into a discussion of the problem of orbital motion. Hooke is credited with suggesting to Newton that circular motion arises from the centripetal deflection of inertially moving bodies. Hooke further conjectured that since the planets move in ellipses with the Sun at one focus (Kepler's first law), the centripetal force drawing them to the Sun should vary as the inverse square of their distances from it. Hooke could not prove this theory mathematically, although he boasted that he could. Not to be shown up by his rival, Newton applied his mathematical talents to proving Hooke's conjecture. He showed that if a body obeys Kepler's second law (which states that the line joining a planet to the sun sweeps out equal areas in equal times), then the body is being acted upon by a centripetal force. This discovery revealed for the first time the physical significance of Kepler's second law. Given this discovery, Newton succeeded in showing that a body moving in an elliptical path and attracted to one focus must truly be drawn by a force that varies as the inverse square of the distance. Later even these results were set aside by Newton.
In 1684 the young astronomer Edmond Halley, tired of Hooke's fruitless boasting, asked Newton whether he could prove Hooke's conjecture and to his surprise was told that Newton had solved the problem a full five years before but had mislaid the paper. At Halley's constant urging Newton reproduced the proofs and expanded them into a paper on the laws of motion and problems of orbital mechanics. Finally Halley persuaded Newton to compose a full-length treatment of his new physics and its application to astronomy. After eighteen months of sustained effort, Newton published (1687) the Philosophiae Naturalis Principia Mathematica (The Mathematical Principles of Natural Philosophy), or Principia, as it is universally known.
 By common consent the Principia is the greatest scientific book ever written. Within the framework of an infinite, homogeneous, three-dimensional, empty space and a uniformly and eternally flowing ‘absolute’ time, Newton fully analysed the motion of bodies in resisting and nonresisting media under the action of centripetal forces. The results were applied to orbiting bodies, projectiles, pendula, and free-fall near the Earth. He further demonstrated that the planets were attracted toward the Sun by a force varying as the inverse square of the distance and generalized that all heavenly bodies mutually attract one another. By further generalization, he reached his law of universal gravitation: every piece of matter attracts every other piece with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.
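 Stated symbolically, the law reads F = Gm1m2/r². A minimal sketch of its content follows; the modern value of the constant G, which Newton himself never measured, is an assumption of the illustration:

G = 6.674e-11   # gravitational constant, N*m^2/kg^2 (modern value)

def gravitational_force(m1, m2, r):
    """Attraction, in newtons, between masses m1 and m2 (kg) at distance r (m)."""
    return G * m1 * m2 / r ** 2

f = gravitational_force(5.972e24, 7.348e22, 3.844e8)   # Earth and Moon
print(f"Earth-Moon attraction ~ {f:.2e} N")             # ~2.0e20 N
# Doubling one mass doubles the force; doubling the distance quarters it:
assert abs(gravitational_force(2, 1, 1) - 2 * gravitational_force(1, 1, 1)) < 1e-12
assert abs(gravitational_force(1, 1, 2) - gravitational_force(1, 1, 1) / 4) < 1e-12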
 Given the law of gravitation and the laws of motion, Newton could explain a wide range of hitherto disparate phenomena such as the eccentric orbits of comets, the causes of the tides and their major variations, the precession of the Earth's axis, and the perturbation of the motion of the Moon by the gravity of the Sun. Newton's one general law of nature and one system of mechanics reduced to order most of the known problems of astronomy and terrestrial physics. The work of Galileo, Copernicus, and Kepler was united and transformed into one coherent scientific theory. The new Copernican world-picture finally had a firm physical basis.
Because Newton repeatedly used the term 'attraction' in the Principia, mechanical philosophers attacked him for reintroducing into science the idea that mere matter could act at a distance upon other matter. Newton replied that he had only intended to show the existence of gravitational attraction and to discover its mathematical law, not to inquire into its cause. No more than his critics did he believe that brute matter could act at a distance. Having rejected the Cartesian vortices, he reverted in the early 1700s to the idea that some material medium, or ether, caused gravity. However, Newton's ether was no longer a Cartesian-type ether acting solely by impacts among particles. The ether had to be extremely rare, so that it would not obstruct the motions of the planets, and yet very elastic or springy, so that it could push large masses toward one another. Newton postulated that the new ether consisted of particles endowed with very powerful short-range repulsive forces. His unreconciled ideas on forces and ether deeply influenced later natural philosophers in the 18th century when they turned to the phenomena of chemistry, electricity and magnetism, and physiology.
With the publication of the Principia, Newton was recognized as the leading natural philosopher of the age, but his creative career was effectively over. After suffering a nervous breakdown in 1693, he retired from research to seek a government position in London. In 1696 he became Warden of the Royal Mint and in 1699 its Master, an extremely lucrative position. He oversaw the great English recoinage of the 1690s and pursued counterfeiters with ferocity. In 1703 he was elected president of the Royal Society and was reelected each year until his death. He was knighted (1705) by Queen Anne, the first scientist to be so honoured for his work.
As any overt appeal to metaphysics became unfashionable, the science of mechanics was increasingly regarded, says Ivor Leclerc, as 'an autonomous science', and any alleged role of God as a 'deus ex machina'. At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other great French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by its own epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, unnecessary.
Pierre-Simon Laplace (1749-1827) is recognized for eliminating not only the theological component of classical physics but the 'entire metaphysical component' as well. The epistemology of science requires, he held, that we proceed by inductive generalisations from observed facts to hypotheses that are 'tested by observed conformity of the phenomena'. What was unique about Laplace's view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace's view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities themselves.
The seventeenth-century view of physics as a philosophy of nature, or natural philosophy, was displaced by the view of physics as an autonomous science that was 'the science of nature'. This view, which was premised on the doctrine of positivism, promised to subsume all of nature under a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical descriptions. Since the doctrine of positivism assumed that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallowed the prospect that the vision of physical reality revealed in physical theory could have any other meaning. The irony in the history of science is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.
This conviction was motivated by the sense that such discoveries have more potential to transform our conception of the 'way things are' than any previous discovery in the history of science; their implications extend well beyond the domain of the physical sciences, and the best efforts of large numbers of thoughtful people will be required to understand them.
In less contentious areas, European scientists made rapid progress on many fronts in the 17th century. Galileo himself investigated the laws governing falling objects, and discovered that the duration of a pendulum's swing is constant for any given length. He explored the possibility of using this to control a clock, an idea that his son put into practice in 1641. Two years later another Italian, the mathematician and physicist Evangelista Torricelli, made the first barometer. In doing so he discovered atmospheric pressure and produced the first artificial vacuum known to science. In 1650 German physicist Otto von Guericke invented the air pump. He is best remembered for carrying out a demonstration of the effects of atmospheric pressure. Von Guericke joined two large, hollow bronze hemispheres, and then pumped out the air within them to form a vacuum. To illustrate the strength of the vacuum, von Guericke showed how two teams of eight horses pulling in opposite directions could not separate the hemispheres. Yet the hemispheres fell apart as soon as air was let in.
 Throughout the 17th century major advances occurred in the life sciences, including the discovery of the circulatory system by the English physician William Harvey and the discovery of microorganisms by the Dutch microscope maker Antoni van Leeuwenhoek. In England, Robert Boyle established modern chemistry as a full-fledged science, while in France, philosopher and scientist René Descartes made numerous discoveries in mathematics, as well as advancing the case for rationalism in scientific research.
 However, the century's greatest achievements came in 1665, when the English physicist and mathematician Isaac Newton fled from Cambridge to his rural birthplace in Woolsthorpe to escape an epidemic of the plague. There, in the course of a single year, he made a series of extraordinary breakthroughs, including new theories about the nature of light and gravitation and the development of calculus. Newton is perhaps best known for his proof that the force of gravity extends throughout the universe and that all objects attract each other with a precisely defined and predictable force. Gravity holds the Moon in its orbit around the Earth and is the principal cause of the Earth’s tides. These discoveries revolutionized how people viewed the universe and they marked the birth of modern science.
Newton’s work demonstrated that nature was governed by basic rules that could be identified using the scientific method. This new approach to nature and discovery liberated 18th-century scientists from passively accepting the wisdom of ancient writings or religious authorities that had never been tested by experiment. In what became known as the Age of Reason, or the Age of Enlightenment, scientists in the 18th century began actively to apply rational thought, careful observation, and experimentation to solve a variety of problems.
 Advances in the life sciences saw the gradual erosion of the theory of spontaneous generation, a long-held notion that life could spring from nonliving matter. It also brought the beginning of scientific classification, pioneered by the Swedish naturalist Carolus Linnaeus, who classified close to 12,000 living plants and animals into a systematic arrangement.
 By 1700 the first steam engine had been built. Improvements in the telescope enabled German-born British astronomer Sir William Herschel to discover the planet Uranus in 1781. Throughout the 18th century science began to play an increasing role in everyday life. New manufacturing processes revolutionized the way that products were made, heralding the Industrial Revolution. In An Inquiry Into the Nature and Causes of the Wealth of Nations, published in 1776, British economist Adam Smith stressed the advantages of division of labour and advocated the use of machinery to increase production. He urged governments to allow individuals to compete within a free market in order to produce fair prices and maximum social benefit. Smith’s work for the first time gave economics the stature of an independent subject of study and his theories greatly influenced the course of economic thought for more than a century.
With knowledge in all branches of science accumulating rapidly, scientists began to specialize in particular fields. Specialization did not necessarily mean that discoveries themselves became narrower: from the 19th century onward, research began to uncover principles that unite the universe as a whole.
In chemistry, one of these discoveries was a conceptual one: that all matter is made of atoms. Originally debated in ancient Greece, atomic theory was revived in a modern form by the English chemist John Dalton in 1803. Dalton provided clear and convincing chemical proof that such particles exist. He discovered that each atom has a characteristic mass and that atoms remain unchanged when they combine with other atoms to form compound substances. Dalton used atomic theory to explain why substances always combine in fixed proportions, a field of study known as quantitative chemistry. In 1869 Russian chemist Dmitry Mendeleyev used Dalton’s discoveries about atoms and their behaviour to draw up his periodic table of the elements.
 Other 19th-century discoveries in chemistry included the world's first synthetic fertilizer, manufactured in England in 1842. In 1846 German chemist Christian Schoenbein accidentally developed the powerful and unstable explosive nitrocellulose. The discovery occurred after he had spilled a mixture of nitric and sulfuric acids and then mopped it up with a cotton apron. After the apron had been hung up to dry, it exploded. He later learned that the cellulose in the cotton apron combined with the acids to form a highly flammable explosive.
In 1828 the German chemist Friedrich Wöhler showed that it was possible to make carbon-containing organic compounds from inorganic ingredients, a breakthrough that opened an entirely new field of research. By the end of the 19th century, hundreds of organic compounds had been synthesized, including mauve, magenta, and other synthetic dyes, as well as aspirin, still one of the world's most useful drugs.
In physics, the 19th century is remembered chiefly for research into electricity and magnetism, pioneered by physicists such as Michael Faraday and James Clerk Maxwell of Great Britain. In 1831 Faraday demonstrated that a moving magnet could set an electric current flowing in a conductor. This experiment, and others that he carried out, led to the development of electric motors and generators. While Faraday’s genius lay in discovery by experiment, Maxwell produced theoretical breakthroughs of even greater note. Maxwell's development of the electromagnetic theory of light took many years. It began with the paper ‘On Faraday's Lines of Force’ (1855–1856), in which Maxwell built on Faraday's ideas. Faraday explained that electric and magnetic effects result from lines of force that surround conductors and magnets. Maxwell drew an analogy between the behaviour of the lines of force and the flow of a liquid, deriving equations that represented electric and magnetic effects. The next step toward the electromagnetic theory was the paper ‘On Physical Lines of Force’ (1861-1862). Here Maxwell developed a model for the medium that could carry electric and magnetic effects. He devised a hypothetical medium that consisted of a fluid in which magnetic effects created whirlpool-like structures. These whirlpools were separated by cells created by electric effects, so the combination of magnetic and electric effects formed a honeycomb pattern.
Maxwell could explain all known effects of electromagnetism by considering how the motion of the whirlpools, or vortices, and cells could produce magnetic and electric effects. He showed that the lines of force behave like the structures in the hypothetical fluid. Maxwell went further, considering what would happen if the fluid could change density, or be elastic. The movement of a charge would set up a disturbance in an elastic medium, forming waves that would move through the medium. The speed of these waves would be equal to the ratio of the value for an electric current measured in electrostatic units to the value of the same current measured in electromagnetic units. German physicists Rudolph Kohlrausch and Wilhelm Weber had calculated this ratio and found it the same as the speed of light. Maxwell inferred that light consists of waves in the same medium that causes electric and magnetic phenomena.
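 In modern SI terms the ratio Maxwell seized on appears as the combination 1/√(μ0ε0), and a reader can verify the coincidence numerically; the constants below are present-day values, not Weber and Kohlrausch's data:

import math

mu0 = 4e-7 * math.pi       # magnetic constant, H/m (pre-2019 defined value)
eps0 = 8.8541878128e-12    # electric constant, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"1/sqrt(mu0*eps0) = {c:,.0f} m/s")   # ~299,792,458 m/s, the speed of light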
Maxwell found supporting evidence for this inference in work he did on defining basic electrical and magnetic quantities in terms of mass, length, and time. In the paper ‘On the Elementary Relations of Electrical Quantities’ (1863), he wrote that the ratio of the two definitions of any quantity based on electric and magnetic forces is always equal to the velocity of light. He considered that light must consist of electromagnetic waves, but first needed to prove this by abandoning the vortex analogy and developing a mathematical system. He achieved this in ‘A Dynamical Theory of the Electromagnetic Field’ (1864), in which he developed the fundamental equations that describe the electromagnetic field. These equations showed that light is propagated in two waves, one magnetic and the other electric, which vibrate perpendicular to each other and perpendicular to the direction in which they are moving (like a wave travelling along a string). Maxwell first published this solution in ‘Note on the Electromagnetic Theory of Light’ (1868) and summed up all of his work on electricity and magnetism in the ‘Treatise on Electricity and Magnetism’ of 1873.
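 In the compact vector notation later introduced by Oliver Heaviside (a modern restatement, not the form in which Maxwell himself published his twenty equations), the fundamental equations read:

div E = ρ/ε0        div B = 0
curl E = -∂B/∂t        curl B = μ0J + μ0ε0 ∂E/∂t

where E and B are the electric and magnetic fields, ρ the charge density, and J the current density. In empty space (ρ = 0, J = 0) they combine into a wave equation whose solutions travel at the speed c = 1/√(μ0ε0), in agreement with the Kohlrausch-Weber value.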
 The treatise also suggested that a whole family of electromagnetic radiation must exist, of which visible light was only one part. In 1888 German physicist Heinrich Hertz made the sensational discovery of radio waves, a form of electromagnetic radiation with wavelengths too long for our eyes to see, confirming Maxwell’s ideas. Unfortunately, Maxwell did not live long enough to see this vindication of his work. He also did not live to see the ether (the medium in which light waves were said to be propagated) disproved with the classic experiments of German-born American physicist Albert Michelson and American chemist Edward Morley in 1881 and 1887. Maxwell had suggested an experiment much like the Michelson-Morley experiment in the last year of his life. Although Maxwell believed the ether existed, his equations were not dependent on its existence, and so remained valid.
 Maxwell's other major contribution to physics was to provide a mathematical basis for the kinetic theory of gases, which explains that gases behave as they do because they are composed of particles in constant motion. Maxwell built on the achievements of German physicist Rudolf Clausius, who in 1857 and 1858 had shown that a gas must consist of molecules in constant motion colliding with each other and with the walls of their container. Clausius developed the idea of the mean free path, which is the average distance that a molecule travels between collisions.
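 In its modern form, the mean free path of hard-sphere molecules of diameter d at pressure p and temperature T is λ = kT/(√2 πd²p). A quick illustrative calculation in Python (the molecular diameter below is an assumed, typical value, not a figure from Clausius):

import math

k = 1.380649e-23     # Boltzmann constant, J/K
d = 3.7e-10          # effective diameter of an N2 molecule, m (assumed)
T = 300.0            # temperature, K
p = 101325.0         # atmospheric pressure, Pa

mfp = k * T / (math.sqrt(2) * math.pi * d ** 2 * p)
print(f"mean free path ~ {mfp * 1e9:.0f} nm")   # ~70 nm, hundreds of molecular diameters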
Maxwell's development of the kinetic theory of gases was stimulated by his success with the similar problem of Saturn's rings. It dates from 1860, when he used a statistical treatment to express the wide range of velocities (speeds and directions) that the molecules in a quantity of gas must inevitably possess. He arrived at a formula to express the distribution of velocities among gas molecules, relating it to temperature. He showed that gases store heat in the motion of their molecules, so the molecules in a gas speed up as the gas's temperature increases. Maxwell then applied his theory with some success to viscosity (how much a gas resists movement), diffusion (how gas molecules move from an area of higher concentration to an area of lower concentration), and other properties of gases that depend on the nature of the molecular motion.
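 Maxwell's distribution is still used in essentially this form. The sketch below evaluates it for nitrogen at room temperature; the gas, molecular mass, and temperature are assumptions chosen for illustration:

import math

k = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26       # mass of an N2 molecule, kg
T = 300.0          # temperature, K

def maxwell_pdf(v):
    """Probability density for a molecular speed v, in (m/s)^-1."""
    a = m / (2 * k * T)
    return 4 * math.pi * (a / math.pi) ** 1.5 * v * v * math.exp(-a * v * v)

v_p = math.sqrt(2 * k * T / m)   # most probable speed
print(f"most probable speed ~ {v_p:.0f} m/s")        # ~422 m/s
print(f"density at that speed = {maxwell_pdf(v_p):.2e} per (m/s)")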
Maxwell's kinetic theory did not fully explain heat conduction (how heat travels through a gas). Austrian physicist Ludwig Boltzmann modified Maxwell’s theory in 1868, resulting in the Maxwell-Boltzmann distribution law, which gives the number of particles (n) having an energy (E) in a system of particles in thermal equilibrium. It has the form:
n = n0 exp(-E/kT),
where n0 is the number of particles having the lowest energy, k the Boltzmann constant, and T the thermodynamic temperature.
If the particles can only have certain fixed energies, such as the energy levels of atoms, the formula gives the number ni of particles with energy Ei above the ground-state energy. In certain cases several distinct states may have the same energy, and the formula then becomes:
ni = gi n0 exp(-Ei/kT),
where gi is the statistical weight of the level of energy Ei, i.e., the number of states having energy Ei. The distribution of energies obtained by the formula is called a Boltzmann distribution.
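 Applied to a hypothetical two-level system, the formula shows how strongly temperature controls the population of an excited level; the energy gap and temperatures below are illustrative assumptions, not data from the text:

import math

k = 1.380649e-23   # Boltzmann constant, J/K

def boltzmann_population(n0, g, E, T):
    """Occupation of a level of energy E (J, above the ground state) at temperature T (K)."""
    return g * n0 * math.exp(-E / (k * T))

E_gap = 2.0e-20    # excitation energy, J (about 0.125 eV, assumed)
for T in (300.0, 1000.0, 3000.0):
    ratio = boltzmann_population(1.0, g=1, E=E_gap, T=T)
    print(f"T = {T:5.0f} K : n/n0 = {ratio:.3e}")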
Maxwell's relations and Boltzmann's formulation contributed to a succession of refinements of kinetic theory, which in time proved applicable to virtually all the properties of gases. The theory also led Maxwell to an accurate estimate of the size of molecules and to a method of separating gases in a centrifuge. Because the kinetic theory was derived using statistics, it also revised opinions on the validity of the second law of thermodynamics, which states that heat cannot flow from a colder to a hotter body of its own accord. In the case of two connected containers of gas at the same temperature, it is statistically possible for the molecules to diffuse so that the faster-moving molecules all concentrate in one container while the slower molecules gather in the other, making the first container hotter and the second colder. This thought experiment became known as Maxwell's demon. Although such an event is very unlikely, it is possible, and the second law is therefore not absolute but only highly probable.
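 How improbable such a spontaneous sorting is can be seen from a one-line estimate: the chance that every one of N molecules happens to sit in the same half of the apparatus at once is (1/2)^N (the values of N below are illustrative):

for n in (10, 100, 1000):
    print(f"N = {n:5d} molecules : probability = {0.5 ** n:.3e}")

 For anything like a real gas, with N of the order of 10^23, the probability is so small that the second law is, for every practical purpose, absolute.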
Maxwell is generally considered the greatest theoretical physicist of the 1800s. He combined a rigorous mathematical ability with great insight, which enabled him to make brilliant advances in the two most important areas of physics of his time. In building on Faraday's work to discover the electromagnetic nature of light, Maxwell not only explained electromagnetism but also paved the way for the discovery and application of the whole spectrum of electromagnetic radiation that has characterized modern physics. Physicists now know that this spectrum also includes radio, infrared, ultraviolet, and X-ray waves, to name a few. In developing the kinetic theory of gases, Maxwell gave the final proof that the nature of heat resides in the motion of molecules.
Maxwell's famous equations, devised in 1864, use mathematics to explain the interaction between electric and magnetic fields. His work demonstrated the principles behind electromagnetic waves, which are created when electric and magnetic fields oscillate simultaneously. Maxwell realized that light was a form of electromagnetic energy, but he also concluded that the complete electromagnetic spectrum must include many other forms of waves as well.
 With the discovery of radio waves by German physicist Heinrich Hertz in 1888 and X-rays by German physicist Wilhelm Roentgen in 1895, Maxwell’s ideas were proved correct. In 1897 British physicist Sir Joseph J. Thomson discovered the electron, a subatomic particle with a negative charge. This discovery countered the long-held notion that atoms were the basic unit of matter.
As in chemistry, these 19th-century discoveries in physics proved to have immense practical value. No one was more adept at harnessing them than American physicist and prolific inventor Thomas Edison. Working from his laboratories in Menlo Park, New Jersey, Edison devised the carbon-granule microphone in 1877, which greatly improved the recently invented telephone. He also invented the phonograph, the electric light bulb, several kinds of batteries, and the electric meter. Edison was granted more than 1,000 patents for electrical devices, a phenomenal feat for a man who had no formal schooling.
In the earth sciences, the 19th century was a time of controversy, with scientists debating the age of the Earth; estimates ranged from less than 100,000 years to several hundred million years. In astronomy, greatly improved optical instruments enabled important discoveries to be made. The first observation of an asteroid, Ceres, took place in 1801. Astronomers had long noticed that Uranus exhibited an unusual orbit. French astronomer Urbain Jean Joseph Leverrier predicted that a nearby planet caused Uranus’s odd orbit. Using mathematical calculations, he narrowed down where such a planet would be located in the sky. In 1846, with the help of German astronomer Johann Galle, Leverrier discovered Neptune. The Irish astronomer William Parsons, the third Earl of Rosse, became the first person to see the spiral form of galaxies beyond our own. He did this with the Leviathan, a 183-cm (72-in) reflecting telescope built on the grounds of his estate in Parsonstown (now Birr), Ireland, in the 1840s. His observations were hampered by Ireland's damp and cloudy climate, but his gigantic telescope remained the world's largest for more than 70 years.
 In the 19th century the study of microorganisms became increasingly important, particularly after French biologist Louis Pasteur revolutionized medicine by correctly deducing that some microorganisms are involved in disease. In the 1880's Pasteur devised methods of immunizing people against diseases by deliberately treating them with weakened forms of the disease-causing organisms themselves. Pasteur’s vaccine against rabies was a milestone in the field of immunization, one of the most effective forms of preventive medicine the world has yet seen. In the area of industrial science, Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods.
 Pasteur’s work on fermentation and spontaneous generation had considerable implications for medicine, because he believed that the origin and development of disease are analogous to the origin and process of fermentation. That is, disease arises from germs attacking the body from outside, just as unwanted microorganisms invade milk and cause fermentation. This concept, called the germ theory of disease, was strongly debated by physicians and scientists around the world. One of the main arguments against it was the contention that the role germs played during the course of disease was secondary and unimportant; the notion that tiny organisms could kill vastly larger ones seemed ridiculous to many people. Pasteur’s studies convinced him that he was right, however, and in the course of his career he extended the germ theory to explain the causes of many diseases.
Pasteur also determined the natural history of anthrax, a fatal disease of cattle. He proved that anthrax is caused by a particular bacillus and suggested that animals could be given anthrax in a mild form by vaccinating them with attenuated (weakened) bacilli, thus providing immunity from potentially fatal attacks. To prove his theory, Pasteur began by vaccinating twenty-five sheep; a few days later he inoculated these and twenty-five unvaccinated sheep with an especially virulent culture, and he left ten sheep untreated. He predicted that the twenty-five unvaccinated sheep would all perish, and he concluded the experiment dramatically by showing, to a sceptical crowd, the carcasses of those twenty-five sheep lying side by side.
Pasteur spent the rest of his life working on the causes of various diseases, including septicaemia, cholera, diphtheria, fowl cholera, tuberculosis, and smallpox, and their prevention by means of vaccination. He is best known for his investigations concerning the prevention of rabies, otherwise known in humans as hydrophobia. After experimenting with the saliva of animals suffering from this disease, Pasteur concluded that the disease rests in the nerve centres of the body; when an extract from the spinal column of a rabid dog was injected into the bodies of healthy animals, symptoms of rabies were produced. By studying the tissues of infected animals, particularly rabbits, Pasteur was able to develop an attenuated form of the virus that could be used for inoculation.
In 1885, a young boy and his mother arrived at Pasteur’s laboratory; the boy had been bitten badly by a rabid dog, and Pasteur was urged to treat him with his new method. By the end of the treatment, which lasted ten days, the boy was being inoculated with the most potent rabies virus known; he recovered and remained healthy. Since that time, thousands of people have been saved from rabies by this treatment.
Pasteur’s research on rabies resulted, in 1888, in the founding of a special institute in Paris for the treatment of the disease. This became known as the Institut Pasteur, and it was directed by Pasteur himself until he died. (The institute still flourishes and is one of the most important centres in the world for the study of infectious diseases and other subjects related to microorganisms, including molecular genetics.) By the time of his death in Saint-Cloud on September 28, 1895, Pasteur had long since become a national hero and had been honoured in many ways. He was given a state funeral at the Cathedral of Notre-Dame, and his body was placed in a permanent crypt in his institute.
Also during the 19th century, the Austrian monk Gregor Mendel laid the foundations of genetics, although his work, published in 1866, was not recognized until after the century had closed. Nevertheless, the British scientist Charles Darwin towers above all other scientists of the 19th century. His publication of On the Origin of Species in 1859 marked a major turning point for both biology and human thought. His theory of evolution by natural selection (independently and simultaneously developed by British naturalist Alfred Russel Wallace) initiated a violent controversy that has not subsided to this day. Particularly controversial was Darwin’s theory that humans resulted from a long process of biological evolution from apelike ancestors. The greatest opposition to Darwin’s ideas came from those who believed that the Bible was an exact and literal statement of the origin of the world and of humans. Although the public initially castigated Darwin’s ideas, by the late 1800s most biologists had accepted that evolution occurred, although not all agreed on the mechanism, known as natural selection, that Darwin proposed.
 In the 20th century, scientists achieved spectacular advances in the fields of genetics, medicine, social sciences, technology, and physics.
At the beginning of the 20th century, the life sciences entered a period of rapid progress. Mendel's work in genetics was rediscovered in 1900, and by 1910 biologists had become convinced that genes are located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA). During the 1940's American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. From these experiments it became clear that DNA is the chemical that makes up genes and is thus the key to heredity.
After American biochemist James Watson and British biophysicist Francis Crick established the structure of DNA in 1953, geneticists became able to understand heredity in chemical terms. Since then, progress in this field has been astounding. Scientists have identified the complete genome, or genetic catalogue, of the human body. In many cases, scientists now know how individual genes become activated and what effects they have in the human body. Genes can now be transferred from one species to another, sidestepping the normal processes of heredity and creating hybrid organisms that are unknown in the natural world.
At the turn of the 20th century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but by a dietary deficiency of certain substances now called vitamins. In 1909 German bacteriologist Paul Ehrlich introduced the world's first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient's cells as well. Following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine’s chemical armoury, making the fight against bacterial infection almost a routine matter. Antibiotics cannot act against viruses, but vaccines have been used to great effect to prevent some of the deadliest viral diseases. Smallpox, once a worldwide killer, was completely eradicated by the late 1970's, and in the United States the number of polio cases dropped from 38,000 in the 1950's to fewer than ten a year by the 21st century. By the middle of the 20th century scientists believed they were well on the way to treating, preventing, or eradicating many of the most deadly infectious diseases that had plagued humankind for centuries. Nevertheless, by the 1980's the medical community’s confidence in its ability to control infectious diseases had been shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacteria strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause haemorrhagic fever and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome.
In other fields of medicine, the diagnosis of disease has been revolutionized by the use of new imaging techniques, including magnetic resonance imaging and computed tomography. Scientists were also on the verge of success in curing some diseases using gene therapy, in which the insertion of a normal or genetically altered gene into a patient’s cells replaces nonfunctional or missing genes.
Improved drugs and new tools have made surgical operations that were once considered impossible now routine. For instance, drugs that suppress the immune system enable the transplant of organs or tissues with a reduced risk of rejection. Endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery. Advances in high-speed fiberoptic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as ‘telemedicine’, this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.
 In the 20th century the social sciences emerged from relative obscurity to become prominent fields of research. Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the ‘Copernicus of the mind’. In 1948 the American biologist Alfred Kinsey published Sexual Behaviour in the Human Male, which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.
 The 20th century also brought dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution. A completely new and surprising source of anthropological information became available from studies of the DNA in mitochondria, cell structures that provide energy to fuel the cell’s activities. Mitochondrial DNA has been used to track certain genetic diseases and to trace the ancestry of a variety of organisms, including humans.
In the field of communications, Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean in 1901. American inventor Lee De Forest invented the triode, or vacuum tube, in 1906. The triode eventually became a key component in nearly all early radio, radar, television, and computer systems. In 1926 Scottish engineer John Logie Baird demonstrated the Baird Televisor, a primitive television that provided the first transmission of a recognizable moving image. In the 1920's and 1930's American electronic engineer Vladimir Kosma Zworykin significantly improved the television’s picture and reception. In 1935 British physicist Sir Robert Watson-Watt used reflected radio waves to locate aircraft in flight. Radar signals have since been reflected from the Moon, planets, and stars to learn their distance from Earth and to track their movements.
In 1947 American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current. Transistors are much smaller and far less expensive than triodes, require less power to operate, and are considerably more reliable. Since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.
During the 1950's and early 1960's minicomputers were developed using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC), first introduced in 1946 by American physicist John W. Mauchly and American electrical engineer John Presper Eckert, Jr., used as many as 18,000 triodes and filled a large room. However, the transistor initiated a trend toward microminiaturization, in which individual electronic circuits can be reduced to microscopic size. This drastically reduced the computer's size, cost, and power requirements and eventually enabled the development of electronic circuits with processing speeds measured in billionths of a second.
Further miniaturization led in 1971 to the first microprocessor, a computer on a chip. When combined with other specialized chips, the microprocessor becomes the central arithmetic and logic unit of a computer smaller than a portable typewriter. With their small size and a price less than that of a used car, today’s personal computers are many times more powerful than the physically huge, multimillion-dollar computers of the 1950's. Once used only by large businesses, computers are now used by professionals, small retailers, and students to complete a wide variety of everyday tasks, such as keeping data on clients, tracking budgets, and writing school reports. People also use computers to communicate with each other over worldwide networks, such as the Internet and the World Wide Web, to send and receive e-mail, to shop, or to find information on just about any subject.
 During the early 1950's public interest in space exploration developed. The focal event that opened the space age was the International Geophysical Year from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the Earth’s near-space environment. As part of this study, both the United States and the Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary space activities.
When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight. Throughout the 1960's NASA experienced its greatest growth. Among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960's and 1970's, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets in Earth’s solar system.
 In the 1970's through 1990's, NASA focussed its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle, along with its Russian counterpart known as Soyuz, became the workhorses that enabled the construction of the International Space Station.
In 1900 the German physicist Max Planck proposed the then sensational idea that energy is not infinitely divisible but is always given off in set amounts, or quanta. Five years later, German-born American physicist Albert Einstein successfully used quanta to explain the photoelectric effect, the release of electrons from metals bombarded by light. This, together with Einstein's special and general theories of relativity, challenged some of the most fundamental assumptions of the Newtonian era.
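 Einstein's analysis reduces to the relation K = hf - W, where hf is the energy of one light quantum and W is the work function, the energy needed to free an electron from the metal. A minimal sketch follows; the frequency and the sodium work function used below are illustrative assumptions:

h = 6.62607015e-34     # Planck constant, J*s
eV = 1.602176634e-19   # joules per electronvolt

f = 6.0e14             # frequency of the incident light, Hz (assumed)
W = 2.28               # work function of sodium, eV (assumed value)

E_photon = h * f / eV          # photon energy in eV
K_max = E_photon - W           # maximum kinetic energy of the ejected electron
print(f"photon energy = {E_photon:.2f} eV, K_max = {K_max:.2f} eV")

 Below the threshold frequency f = W/h no electrons are released, however intense the light, which is just what the quantum picture requires and the wave picture cannot explain.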
Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world in which the attributes of any single particle can never be completely known, an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. The principle states that the product of the uncertainty in the measured value of a component of momentum (px) and the uncertainty in the corresponding coordinate (x) is of the same order of magnitude as the Planck constant. In its most precise form:
Δpx Δx ≥ h/4π
where Δx represents the root-mean-square value of the uncertainty. For most purposes one can assume:
Δpx Δx = h/2π
The principle can be derived exactly from quantum mechanics, a physical theory that grew out of Planck’s quantum theory and deals with the mechanics of atomic and related systems in terms of quantities that can actually be measured. Quantum mechanics exists in several mathematical forms, including ‘wave mechanics’ (Schrödinger) and ‘matrix mechanics’ (Born and Heisenberg), all of which are equivalent.
Nonetheless, it is most easily understood as a consequence of the fact that any measurement of a system must disturb the system under investigation, with a resulting lack of precision in the measurement. For example, to see an electron and thus measure its position, photons would have to be reflected from it. If a single photon could be used and detected with a microscope, the collision between the electron and the photon would change the electron’s momentum (the Compton effect), and the wavelength of the photon would be increased by an amount Δλ, where:
Δλ = (2h/m0c) sin²(½φ).
This is the Compton equation, in which h is the Planck constant, m0 the rest mass of the particle, c the speed of light, and φ the angle between the directions of the incident and scattered photon. The quantity h/m0c is known as the Compton wavelength, symbol λC, which for an electron is equal to 0.00243 nm.
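 A numerical check of the Compton equation, using present-day constants (assumed for illustration, not figures from the text):

import math

h = 6.62607015e-34      # Planck constant, J*s
m0 = 9.1093837015e-31   # electron rest mass, kg
c = 2.99792458e8        # speed of light, m/s

lambda_C = h / (m0 * c)                             # Compton wavelength
phi = math.radians(90.0)                            # scattering angle
d_lambda = (2 * h / (m0 * c)) * math.sin(phi / 2) ** 2

print(f"lambda_C        ~ {lambda_C * 1e9:.5f} nm")   # ~0.00243 nm
print(f"shift at 90 deg ~ {d_lambda * 1e9:.5f} nm")   # equals lambda_C at 90 degrees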
 A similar relationship applies to the determination of energy and time, thus:
ΔE Δt ≥ h/4π.
The effects of the uncertainty principle are not apparent with large systems because of the small size of h. However, the principle is of fundamental importance in the behaviour of systems on the atomic scale. For example, the principle explains the inherent width of spectral lines: if the lifetime of an atom in an excited state is very short, there is a large uncertainty in its energy, and the line resulting from a transition is broad.
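 For instance, applying ΔE Δt ≥ h/4π with Δt taken as the lifetime of the excited state gives the minimum width of the emitted line; the 10-nanosecond lifetime below is an assumed, typical atomic value:

import math

h = 6.62607015e-34   # Planck constant, J*s
tau = 1.0e-8         # lifetime of the excited state, s (assumed)

dE = h / (4 * math.pi * tau)   # minimum energy uncertainty, J
df = dE / h                    # corresponding spread in frequency, Hz
print(f"dE ~ {dE:.2e} J, minimum linewidth ~ {df / 1e6:.1f} MHz")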
One consequence of the uncertainty principle is that the behaviour of a system cannot be fully predicted and that the macroscopic principle of causality cannot apply at the atomic level. Quantum mechanics gives a statistical description of the behaviour of physical systems.
 Nevertheless, while there is uncertainty on the subatomic level, quantum physics successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world, that is, the one in which we live.
In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements but had instead achieved the splitting, or fission, of the uranium atom's nucleus. These early experiments led to the development of fission both as an energy source and as a weapon.
 These fission studies, coupled with the development of particle accelerators in the 1950's, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Far from being indivisible, scientists now know that atoms are made up of twelve fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.
 Advances in particle physics have been closely linked to progress in cosmology. From the 1920's onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between ten and twenty billion years ago. However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.
Descartes posited the existence of two categorically different domains of existence: the res extensa and the res cogitans, or the ‘extended substance’ and the ‘thinking substance’. He defined the extended substance as the realm of physical reality, within which the primary mathematical and geometrical forms reside, and the thinking substance as the realm of human subjective reality. Given that Descartes distrusted the information of the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind, or in human subjectivity, was accurate, much less absolutely true? He did so by making a leap of faith: God, said Descartes, constructed the world in accordance with the mathematical ideas that our minds are capable of uncovering in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally ‘revealed’ truths, and it was this seventeenth-century metaphysical presupposition that became, in the history of science, what we term the ‘hidden ontology of classical epistemology’.
While classical epistemology would serve the progress of science very well, it also presented us with a terrible dilemma about the relationship between ‘mind’ and ‘world’. If there is no real or necessary correspondence between non-mathematical ideas in subjective reality and external physical reality, how do we know that the world in which we live, breathe, and have our being actually exists? Descartes’s resolution of this dilemma took the form of an exercise. He asked us to direct our attention inward and to divest our consciousness of all awareness of external physical reality. If we do so, he concluded, the real existence of human subjective reality could be confirmed.
As it turned out, this resolution was considerably more problematic and oppressive than Descartes could have imagined. ‘I think, therefore I am’ may be a marginally persuasive way of confirming the real existence of the thinking self. However, the understanding of physical reality that obliged Descartes and others to doubt the existence of this self implied that the separation between the subjective world, or the world of life, and the real world of physical reality was absolute.
Our proposed new understanding of the relationship between mind and world is framed within the larger context of the history of mathematical physics, the origins and extensions of the classical view of the foundations of scientific knowledge, and the various ways that physicists have attempted to obviate previous challenges to the efficacy of classical epistemology. This serves as background for a new relationship between parts and wholes in quantum physics, as well as for a similar view of that relationship that has emerged in the so-called ‘new biology’ and in recent studies of the evolution of modern humans.
Nevertheless, at the end of this arduous journey lie two conclusions. First, there is no basis in contemporary physics or biology for believing in the stark Cartesian division between mind and world, which some have described as ‘the disease of the Western mind’. Second, there is a new basis for dialogue between two cultures that are now badly divided and very much in need of an enlarged sense of common understanding and shared purpose. Let us briefly consider the legacy in Western intellectual life of the stark division between mind and world sanctioned by classical physics and formalized by Descartes.
The first scientific revolution of the seventeenth century freed Western civilization from the paralysing and demeaning forces of superstition, laid the foundations for rational understanding and control of the processes of nature, and ushered in an era of technological innovation and progress that provided untold benefits for humanity. Nevertheless, as classical physics progressively dissolved the distinction between heaven and earth and united the universe in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien to the world of everyday life.
Descartes quickly realized that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced, however, that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting point or foundation on the basis of which alone progress is possible. This method of investigating the extent of knowledge and its basis in reason or experience attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The process is eventually dramatized in the figure of the evil demon, or malin génie, whose aim is to deceive us, so that our senses, memories, and reasonings lead us astray. The task then becomes one of finding a demon-proof point of certainty, and Descartes produces this in the famous ‘Cogito ergo sum’: ‘I think, therefore I am’. It is on this slender basis that the correct use of our faculties has to be re-established, but it seems as though Descartes has denied himself any materials to use in reconstructing the edifice of knowledge. He has a basis, but no way of building on it without invoking principles that will not be demon-proof, and so will not meet the standards he had apparently set himself. It is possible to interpret him as using ‘clear and distinct ideas’ to prove the existence of God, whose benevolence then justifies our use of clear and distinct ideas (‘God is no deceiver’): this is the notorious Cartesian circle. Descartes’s own attitude to this problem is not quite clear; at times he seems more concerned with providing a stable body of knowledge that our natural faculties will endorse than with one that meets the more severe standards with which he starts. For example, in the second set of Replies he shrugs off the possibility of ‘absolute falsity’ of our natural system of belief, in favour of our right to retain ‘any conviction so firm that it is quite incapable of being destroyed’. The need to add such natural belief to anything certified by reason eventually became the cornerstone of Hume’s philosophy, and the basis of most 20th-century reactions to the method of doubt.
In his own time René Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems concerning the nature of the causal interaction between mind and body. One response, occasionalism, holds that events in the world merely form occasions on which God acts so as to bring about the events normally accompanying them, and thought of as their effects. Although the position is associated especially with Malebranche, it is much older; it appears among the Islamic philosophies of kalam, whose practice was to adduce philosophical proofs to justify elements of religious doctrine, and which played a role in Islam parallel to that which scholastic philosophy played in the development of Christianity. The practitioners of kalam were known as the Mutakallimun. Descartes’s dualism also gives rise to the problem, insoluble in its own terms, of ‘other minds’. His notorious denial that nonhuman animals are conscious is a stark illustration of the problem.
In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the nature of a ‘ball of wax’ surviving changes to its sensible qualities, matter is not an empirical concept but an entirely geometrical one, with extension and motion as its only physical nature. Descartes’s thought here is reflected in Leibniz’s view, held later by Russell, that the qualities of sense experience have no resemblance to the qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension, there can be no empty space or ‘void’; and since there is no empty space, motion is not a question of occupying previously empty space but is to be thought of in terms of vortices (like the motion of a liquid).
Although Descartes’s epistemology, theory of mind, and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.
It seems, nonetheless, that the radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concerns about its spiritual dimension or ontological foundations. In the meantime, attempts to rationalize, reconcile, or eliminate Descartes’s stark division between mind and matter became perhaps the most central feature of Western intellectual life.
Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes’s compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that “Liberty, Equality, Fraternity” are the guiding principles of this consciousness. Rousseau also deified the idea of the ‘general will’ of the people to achieve these goals and declared that those who did not conform to this will were social deviants.
Rousseau’s attempt to posit a ground for human consciousness by reifying nature was revived in a different form by the nineteenth-century Romantics in Germany, England, and the United States. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an indivisible spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific musing. In Goethe’s attempt to wed mind and matter, nature became a mindful agency that ‘loves illusion’, shrouds man in mist, ‘presses him to her heart’, and punishes those who fail to see the ‘light’. Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful creative spirit that unifies mind and matter is progressively moving toward self-realization and undivided wholeness.
Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two kinds of things, bodies and minds, are completely different from one another: bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.
For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human being may cause that person’s limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by light, pressure, or sound, external sources that in turn affect the brain, affecting mental states. Thus, the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed.
 Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.
 Most philosophers since Plato have held that the highest ethical good is the same for everyone; insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, “I must find a truth that is true for me . . . the idea for which I can live or die.” Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.
Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger; anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.
 Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many premodern philosophers and writers.
 The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: The human self, which combines mind and body, is itself a paradox and contradiction.
 Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a ‘leap of faith’ into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.
 Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “Death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.
Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis, in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology as well as on language.
Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a ‘futile passion’. Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.
Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially the belief that a personal sense of authenticity and commitment is essential to religious faith.
 A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”
In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theater of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.
The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was Friedrich Nietzsche, the philosopher of the death of God. After declaring that God and ‘divine will’ do not exist, Nietzsche reified the ‘essences’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The problem, claimed Nietzsche, is that earlier versions of the ‘will to truth’ disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will’.
In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no real or necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in ‘a prison house of language’. The prison as he conceived it, however, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on will.
Those who fail to enact their existence in this space, says Nietzsche, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favours reductionistic examinations of phenomena at the expense of mind. It also seeks to reduce mind to a mere material substance, and thereby to displace or subsume the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.
Nietzsche’s emotionally charged defence of intellectual freedom and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved profoundly influential on twentieth-century thought. Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, attempts by Edmund Husserl, a philosopher trained in higher mathematics and physics, to resolve this crisis resulted in a view of the character of human consciousness that closely resembled that of Nietzsche.
Friedrich Nietzsche is openly pessimistic about the possibility of knowledge: ‘We simply lack any organ for knowledge, for “truth”: we know (or believe or imagine) just as much as may be useful in the interests of the human herd, the species: and even what is called “utility” is ultimately also a mere belief, something imaginary and perhaps precisely that most calamitous stupidity of which we shall perish some day’ (The Gay Science).
This position is very radical: Nietzsche does not simply deny that knowledge, construed as the adequate representation of the world by the intellect, exists. He also rejects the pragmatist identification of knowledge and truth with usefulness: he writes that we think we know what we think is useful, and that we can be quite wrong about the latter.
Nietzsche’s view, his ‘Perspectivism’, depends on his claim that there is no sensible conception of a world independent of human interpretation and to which interpretations would correspond if they were to constitute knowledge. He sums up this highly controversial position in The Will to Power: ‘Facts are precisely what there is not, only interpretations’.
It is often claimed that Perspectivism is self-undermining. If the thesis that all views are interpretations is true then, it is argued, there is at least one view that is not an interpretation. If, on the other hand, the thesis is itself an interpretation, then there is no reason to believe that it is true, and it follows again that not every view is an interpretation.
Yet this refutation assumes that if a view, like Perspectivism itself, is an interpretation it is wrong. This is not the case. To call any view, including Perspectivism, an interpretation is to say that it can be wrong, which is true of all views, and that is not a sufficient refutation. To show that Perspectivism is literally false, it is necessary to produce another view superior to it on specific epistemological grounds.
Perspectivism does not deny that particular views can be true. Like some versions of contemporary anti-realism, it attributes to specific approaches truth in relation to facts specified internally by those approaches themselves. But it refuses to envisage a single independent set of facts to be accounted for by all theories. Thus Nietzsche grants the truth of specific scientific theories; he does, however, deny that a scientific interpretation can possibly be ‘the only justifiable interpretation of the world’ (The Gay Science): neither the facts science addresses nor the methods it employs are privileged. Scientific theories serve the purposes for which they have been devised, but these have no priority over the many other purposes of human life. The existence of many purposes and needs relative to which the value of theories is established (another crucial element of Perspectivism) is sometimes thought to imply a radical relativism, according to which no standards for evaluating purposes and theories can be devised. This is correct only in that Nietzsche denies the existence of a single set of standards for determining epistemic value; he holds that specific views can be compared with and evaluated in relation to one another, for the ability to use criteria acceptable in particular circumstances does not presuppose the existence of criteria applicable in all. Agreement is therefore not always possible, since individuals may sometimes differ over the most fundamental issues dividing them.
Still, Nietzsche would not be troubled by this fact, which his opponents also have to confront; they, he would argue, merely suppress it by insisting on the hope that all disagreements are in principle eliminable, even if our practice falls woefully short of the ideal. Nietzsche abandons that ideal. He considers irresoluble disagreement an essential part of human life.
Knowledge for Nietzsche is again material, but now based on desire and bodily needs more than on social refinements. Perspectives are to be judged not by their relation to the absolute but on the basis of their effects in a specific era. The possibility of any truth beyond such a local, pragmatic one becomes a problem in Nietzsche, since neither a noumenal realm nor an historical synthesis exists to provide an absolute criterion of adjudication for competing truth claims: what get called truths are simply beliefs that have been held for so long that we have forgotten their genealogy. In this Nietzsche reverses the Enlightenment dictum that truth is the way to liberation, suggesting that truth claims, insofar as they are treated as absolute and closed to debate, arrest conceptual progress and cause backwardness and unnecessary misery. Nietzsche moves back and forth, without resolution, between the positing of trans-historical truth claims, such as his claim about the will to power, and a kind of epistemic nihilism that calls into question not only the possibility of truth but the need and desire for it as well. Perhaps most important, however, Nietzsche introduces the notion that truth is a kind of human practice, a game whose rules are contingent rather than necessary. The evaluation of truth claims should be based on their strategic effects, not on their ability to represent a reality conceived of as separate from and autonomous of human influence. For Nietzsche, all truth is truth from or within a particular perspective. The perspective may be a general human point of view, set by such things as the nature of our sensory apparatus, or it may be thought to be bound by culture, history, language, class or gender. Since there may be many perspectives, there are also different families of truth. The term Perspectivism is, of course, most frequently applied to Nietzsche’s philosophy.
The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger and Sartre became foundational to that of the principal architects of philosophical postmodernism: the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault and Jacques Derrida. This direct linkage between the nineteenth-century crisis over the epistemological foundations of logic and mathematics and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form.
In his main philosophical work, Being and Nothingness, Sartre examines the relationship between Being For-itself (consciousness) and Being In-itself (the non-conscious world). He rejects central tenets of the rationalist and empiricist traditions, calling the view that the mind or self is a thing or substance ‘Descartes’s substantialist illusion’, and claiming also that consciousness does not contain ideas or representations, which he dismisses as ‘idols invented by the psychologists’. Sartre also attacks idealism in the forms associated with Berkeley and Kant, and concludes that his account of the relationship between consciousness and the world is neither realist nor idealist.
Sartre also discusses Being For-others, which comprises those aspects of experience that involve interactions with other minds. His views are subtle: roughly, he holds that one’s awareness of others is constituted by feelings of shame, pride, and so on.
Sartre’s rejection of ideas, and his denial of idealism, appear to commit him to direct realism in the theory of perception. This is not inconsistent with his claim to be neither realist nor idealist, since by ‘realist’ he means views that allow for the mutual independence or in-principle separability of mind and world. Against this Sartre emphasizes, after Heidegger, that perceptual experience has an active dimension, in that it is a way of interacting and dealing with the world rather than a way of merely contemplating it (‘activity, as spontaneous, unreflecting consciousness, constitutes a certain existential stratum in the world’). Consequently, he holds that experience is richer, and open to more aspects of the world, than empiricist writers customarily claim:
When I run after a streetcar . . . there is consciousness of-the-streetcar-having-to-be-overtaken, etc., . . . I am then plunged into the world of objects, it is they that constitute the unity of my consciousness, it is they that present themselves with values, with attractive and repellent qualities . . .
Relatedly, he insists that I experience material things as having certain potentialities-for-me: I see doors and bottles as openable, bicycles as ridable (these matters are linked ultimately to the doctrine of extreme existentialist freedom). Similarly, if my friend is not where I expect to meet her, then I experience her absence ‘as a real event’ (an instance of ‘nothingness’).
These phenomenological claims are striking and compelling, but Sartre pays insufficient attention to such things as illusions and hallucinations, which are normally cited as problems for direct realists. In his discussion of mental imagery, however, he describes the act of imaging as a ‘transformation’ of ‘psychic material’. This connects with his view that even a physical image such as a photograph of a tree does not figure as an object of consciousness when it is experienced as a tree-representation (rather than as a piece of coloured card). Nonetheless, the fact remains that the photograph continues to contribute to the character of the experience. Given this, it is hard to see how Sartre avoids positing a mental analogue of a photograph for episodes of mental imaging, and harder still to see how to reconcile this with his rejection of visual representations. Imaging might instead be regarded as a debased or derivative form of perceptual awareness, but this merely raises once more the issue of perceptual illusion and hallucination, and the problem of reconciling them with a theory built upon direct realism.
Much of Western religious and philosophical thought since the seventeenth century has sought to obviate this prospect with an appeal to ontology or to some conception of God or Being. Yet we continue to struggle, as philosophical postmodernism attests, with the terrible prospect raised by Nietzsche: we are locked in a prison house of our individual subjective realities in a universe that is as alien to our thought as it is to our desires. This universe may seem comprehensible and knowable in scientific terms, and science does seek in some sense, as Koyré puts it, to ‘find a place for everything.’ Nonetheless, the ghost of Descartes lingers in the widespread conviction that science does not provide a ‘place for man’ or for all that we know as distinctly human in subjective reality.
With The Gay Science (1882) Nietzsche began his crucial explorations of self-mastery, the relations between reason and power, and the revelation of the unconscious striving after power that provides the actual energy for the apparent self-denial of the ascetic and the martyr. It was during this period that Nietzsche’s failed relationship with Lou Salomé resulted in the emotional crisis from which Also sprach Zarathustra (1883-5, trans. as Thus Spoke Zarathustra) signals a recovery. This work is frequently regarded as Nietzsche’s masterpiece. It was followed by Jenseits von Gut und Böse (1886, trans. as Beyond Good and Evil) and Zur Genealogie der Moral (1887, trans. as On the Genealogy of Morals).
 In Thus Spake Zarathustra (1883-85), Friedrich Nietzsche introduced in eloquent poetic prose the concepts of the death of God, the superman, and the will to power. Vigorously attacking Christianity and democracy as moralities for the ‘weak herd’, he argued for the ‘natural aristocracy’ of the superman who, driven by the ‘will to power’, celebrates life on earth rather than sanctifying it for some heavenly reward. Such a heroic man of merit has the courage to ‘live dangerously’ and thus rise above the masses, developing his natural capacity for the creative use of passion.
Also known as radical theology, this movement flourished in the mid 1960s. As a theological movement it never attracted a large following, did not find a unified expression, and passed off the scene as quickly and dramatically as it had arisen. There is even disagreement as to who its major representatives were. Some identify two, and others three or four. Although small, the movement attracted attention because it was a spectacular symptom of the bankruptcy of modern theology and because it was a journalistic phenomenon. The very statement "God is dead" was tailor-made for journalistic exploitation. The representatives of the movement effectively used periodical articles, paperback books, and the electronic media. This movement gave expression to an idea that had been incipient in Western philosophy and theology for some time, the suggestion that the reality of a transcendent God at best could not be known and at worst did not exist at all. The philosopher Kant and the theologian Ritschl denied that one could have theoretical knowledge of the being of God. Hume and the empiricists for all practical purposes restricted knowledge and reality to the material world as perceived by the five senses. Since God was not empirically verifiable, the biblical world view was said to be mythological and unacceptable to the modern mind. Such atheistic existentialist philosophers as Nietzsche despaired even of the search for God; it was he who coined the phrase "God is dead" almost a century before the death of God theologians.
 Mid-twentieth century theologians not associated with the movement also contributed to the climate of opinion out of which death of God theology emerged. Rudolf Bultmann regarded all elements of the supernaturalistic, theistic world view as mythological and proposed that Scripture be demythologized so that it could speak its message to the modern person.
Paul Tillich, an avowed anti-supernaturalist, said that the only nonsymbolic statement that could be made about God was that he was being itself. He is beyond essence and existence; therefore, to argue that God exists is to deny him. It is more appropriate to say God does not exist. At best Tillich was a pantheist, but his thought borders on atheism. Dietrich Bonhoeffer (whether rightly understood or not) also contributed to the climate of opinion with some fragmentary but tantalizing statements preserved in Letters and Papers from Prison. He wrote of the world and man ‘coming of age’, of ‘religionless Christianity’, of the ‘world without God’, and of getting rid of the ‘God of the gaps’ and getting along just as well as before. It is not always certain what Bonhoeffer meant, but if nothing else, he provided a vocabulary that later radical theologians could exploit.
 It is clear, then, that as startling as the idea of the death of God was when proclaimed in the mid 1960s, it did not represent as radical a departure from recent philosophical and theological ideas and vocabulary as might superficially appear.
 Just what was death of God theology? The answers are as varied as those who proclaimed God's demise. Since Nietzsche, theologians had occasionally used "God is dead" to express the fact that for an increasing number of people in the modern age God seems to be unreal. Nonetheless, the idea of God's death began to have special prominence in 1957 when Gabriel Vahanian published a book entitled God is Dead. Vahanian did not offer a systematic expression of death of God theology. Instead, he analysed those historical elements that contributed to the masses of people accepting atheism not so much as a theory but as a way of life. Vahanian himself did not believe that God was dead. Still, he urged that there be a form of Christianity that would recognize the contemporary loss of God and exert its influence through what was left. Other proponents of the death of God had the same assessment of God's status in contemporary culture, but were to draw different conclusions.
Thomas J. J. Altizer believed that God had really died. Altizer, however, often spoke in exaggerated and dialectical language, occasionally with heavy overtones of Oriental mysticism. It is sometimes difficult to know exactly what Altizer meant when he spoke in dialectical opposites such as "God is dead, thank God!" Apparently the real meaning of Altizer's belief that God had died is to be found in his belief in God's immanence. To say that God has died is to say that he has ceased to exist as a transcendent, supernatural being. Rather, he has become fully immanent in the world. The result is an essential identity between the human and the divine. God died in Christ in this sense, and the process has continued time and again since then. Altizer claims the church tried to give God life again and put him back in heaven by its doctrines of resurrection and ascension. However, the traditional doctrines about God and Christ must be repudiated because man has discovered after nineteen centuries that God does not exist. Christians must even now will the death of God by which the transcendent becomes immanent.
For William Hamilton the death of God describes the event many have experienced over the last two hundred years: they no longer accept the reality of God or the meaningfulness of language about him. Nontheistic explanations have been substituted for theistic ones. This trend is irreversible, and everyone must come to terms with the historical-cultural death of God. God's death must be affirmed and the secular world embraced as normative intellectually and good ethically. Indeed, Hamilton was optimistic about the world, because he was optimistic about what humanity could do and was doing to solve its problems.
 Paul van Buren is usually associated with death of God theology, although he himself disavowed this connection. Yet, his disavowal seems hollow in the light of his book The Secular Meaning of the Gospel and his article "Christian Education Post Mortem Dei." In the former he accepts empiricism and the position of Bultmann that the world view of the Bible is mythological and untenable to modern people. In the latter he proposes an approach to Christian education that does not assume the existence of God but does assume ‘the death of God’ and that ‘God is gone’.
 Van Buren was concerned with the linguistic aspects of God's existence and death. He accepted the premise of empirical analytic philosophy that real knowledge and meaning can be conveyed only by language that is empirically verifiable. This is the fundamental principle of modern secularists and is the only viable option in this age. If only empirically verifiable language is meaningful, ipso facto all language that refers to or assumes the reality of God is meaningless, since one cannot verify God's existence by any of the five senses. Theism, belief in God, is not only intellectually untenable, it is meaningless. In The Secular Meaning of the Gospel van Buren seeks to reinterpret the Christian faith without reference to God. One searches the book in vain for even one clue that van Buren is anything but a secularist trying to translate Christian ethical values into that language game. There is a decided shift in van Buren's later book Discerning the Way, however.
 In retrospect, there was clearly no single death of God theology, only death of God theologies. Their real significance was that modern theologies, by giving up the essential elements of Christian belief in God, had logically led to what were really antitheologies. When the death of God theologies passed off the scene, the commitment to secularism remained and manifested itself in other forms of secular theology in the late 1960s and the 1970s.
Nietzsche is unchallenged as the most insightful and powerful critic of the moral climate of the 19th century (and of what of it remains in ours). His exploration of unconscious motivation anticipated Freud. He is notorious for stressing the ‘will to power’ that is the basis of human nature, the ‘resentment’ that comes when it is denied its basis in action, and the corruptions of human nature encouraged by religions, such as Christianity, that feed on such resentment. Yet the powerful human being who escapes all this, the Übermensch, is not the ‘blond beast’ of later fascism: it is a human being who has mastered passion, risen above the senseless flux, and given creative style to his or her character. Nietzsche’s free spirits recognize themselves by their joyful attitude to eternal return. He frequently presents the creative artist rather than the warlord as his best exemplar of the type, but the disquieting fact remains that he seems to leave himself no words to condemn any uncaged beast of prey who finds his style by exerting repulsive power over others. This problem is not helped by Nietzsche’s frequently expressed misogyny, although in such matters the interpretation of his many-layered and ironic writings is not always straightforward. Similarly, such anti-Semitism as has been found in his work is balanced by an equally vehement denunciation of anti-Semitism, and an equal or greater dislike of the German character of his time.
Nietzsche’s current influence derives not only from his celebration of will, but more deeply from his scepticism about the notions of truth and fact. In particular, he anticipated many of the central tenets of postmodernism: an aesthetic attitude toward the world that sees it as a ‘text’; the denial of facts; the denial of essences; the celebration of the plurality of interpretations and of the fragmented self; as well as the downgrading of reason and the politicization of discourse. All awaited rediscovery in the late 20th century. Nietzsche also has the incomparable advantage over his followers of being a wonderful stylist, and his Perspectivism is echoed in the shifting array of literary devices (humour, irony, exaggeration, aphorism, verse, dialogue, parody) with which he explores human life and history.
Yet, as we have seen, the origins of the present division can be traced to the emergence of classical physics and the stark Cartesian division between mind and the bodily world as two separate substances, in which the self happens to be associated with a particular body but is self-subsisting and capable of independent existence. This Cartesian duality, like the ‘ego’ that we are tempted to imagine as a simple, unique thing that makes up our essential identity, seemed sanctioned by that physics. The tragedy of the Western mind, well represented in the work of a host of writers, artists, and intellectuals, is that the Cartesian division was perceived as incontrovertibly real.
Beginning with Nietzsche, those who wished to free the realm of the mental from the oppressive implications of the mechanistic world-view sought to undermine the allegedly privileged character of the knowledge claims of physics with an attack on their epistemological authority. Husserl tried and failed to save the classical view of correspondence by grounding the logic of mathematical systems in human consciousness. This attempt not only resulted in a view of human consciousness that became characteristically postmodern; it also represents a direct link between the epistemological crisis about the foundations of logic and number in the late nineteenth century and the epistemological crisis occasioned by quantum physics beginning in the 1920s. The result was disparate views on the existence of ontology and the character of scientific knowledge that fuelled the conflict between the two.
If there were world enough and time enough, the conflict between the two could be viewed as an interesting artifact of the richly diverse knowledge systems of higher education. Nevertheless, as the ecological crisis teaches us, the ‘world enough’ capable of sustaining the growing number of our life forms and the ‘time enough’ that remains to reduce and reverse the damage we are inflicting on this world are rapidly diminishing. We should therefore put an end to this absurd ‘betweenness’ and get on with the business of coordinating human knowledge in the interest of human survival, in a new age of enlightenment that could be far more humane and much more enlightened than any that has gone before.
It seems, nonetheless, that there have been significant advances in our understanding of the purposive mind. Cognitive science is an interdisciplinary approach to cognition that draws primarily on ideas from cognitive psychology, artificial intelligence, linguistics and logic. Some philosophers may be cognitive scientists, and others concern themselves with the philosophy of cognitive psychology and cognitive science. Since the inauguration of cognitive science these disciplines have attracted much attention from certain philosophers of mind. This has changed the character of philosophy of mind, and there are areas where philosophical work on the nature of mind is continuous with scientific work. Yet the problems that make up this field, concerning the nature of ‘thinking’ and ‘mental properties’, are still those standardly and traditionally treated within philosophy of mind rather than those that emerge from recent developments in cognitive science. The cognitive aspect of a sentence, what has to be understood in order to know what would make the sentence true or false, is frequently identified with its truth condition. Cognitive science is also the scientific study of processes of awareness, thought, and mental organization, often by means of computer modelling or artificial intelligence research. The fact that a theory is technically scientific does not mean that the scientific community currently accredits it; many theories, though technically scientific, have been rejected because the scientific evidence tells strongly against them.
 The historical enquiry into the evolution of self-consciousness, developing from elementary sense experience to fully rational, free thought processes capable of yielding knowledge, is in its present form associated with the work and school of Husserl. Following Brentano, Husserl realized that intentionality was the distinctive mark of consciousness, and saw in it a concept capable of overcoming traditional mind-body dualism. The study of consciousness, therefore, maintains two sides: a conscious experience can be regarded as an element in a stream of consciousness, but also as a representative of one aspect or ‘profile’ of an object. In spite of Husserl’s rejection of dualism, his belief that there is a subject-matter remaining after the epoché, or bracketing, of the content of experience associates him with the priority accorded to elementary experiences in the parallel doctrine of phenomenalism, and phenomenology has partly suffered from the eclipse of that approach to problems of experience and reality. Later phenomenologists such as Merleau-Ponty, however, do full justice to the world-involving nature of experience.
 Phenomenological theories, in a different sense, are empirical generalizations of the data of experience, of what is manifest in experience. More generally, the phenomenal aspects of things are the aspects that show themselves, rather than the theoretical aspects that are inferred or posited in order to account for them. Such theories merely describe the recurring processes of nature and do not refer to their causes. In the words of J. S. Mill, ‘objects are the permanent possibilities of sensation’. To inhabit a world of independent, external objects is, on this view, to be the subject of actual and possible orderly experiences. Espoused by Russell, the view issued in a programme of translating talk about physical objects and their locations into talk about possible experiences.
The attempt is widely supposed to have failed, and the priority the approach gives to experience has been much criticized. It is more common in contemporary philosophy to see experience as itself a construct from the actual way of the world, than the other way round.
Phenomenological theories are also called ‘scientific laws’, ‘physical laws’ and ‘natural laws’. Newton’s third law is one example: it says that every action has an equal and opposite reaction. ‘Explanatory theories’ attempt to explain the observations rather than generalize them. Whereas laws are descriptions of empirical regularities, explanatory theories are conceptual constructions that explain why the data exist; for example, atomic theory explains why we see certain observations, and the same could be said of DNA and relativity. Explanatory theories are particularly helpful in cases where the entities involved (like atoms, DNA . . . ) cannot be directly observed.
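As a minimal formal gloss (standard physics notation, not part of the original text), Newton’s third law can be written:

\[
\vec{F}_{A \to B} = -\,\vec{F}_{B \to A}
\]

where \(\vec{F}_{A \to B}\) is the force body A exerts on body B. Note that the law states the regularity itself without explaining it, which is precisely the contrast drawn above between phenomenological laws and explanatory theories.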
What is knowledge? How does knowledge get to have the content it has? The problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato, for whom knowledge is true belief plus a logos, that which enables us to apprehend the principles and forms, i.e., an aspect of our own reasoning.
What makes a belief justified, and what measure of justified true belief is knowledge? According to most epistemologists, knowledge entails belief, so that one cannot know that such and such is the case without believing that it is. Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or some facsimile of belief, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis). The incompatibility thesis hinges on the equation of knowledge with certainty: the assumption is that we believe in the truth of a claim only when we are not certain about its truth. Given that belief always involves uncertainty, while knowledge never does, believing something rules out knowing it. But there is no reason to grant that states of belief are never ones involving confidence: conscious beliefs clearly involve some level of confidence, and to suggest otherwise, that we cease to believe things about which we are completely confident, is bizarre.
A. D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley says that knowing is a matter of ‘what I can do’, where what I can do may include answering questions. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge: it would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct.’ Woozley explains this tension using a distinction between the conditions under which we are justified in making a claim (such as a claim to know something) and the conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.
Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with a lack of certainty, it is also compatible with a complete lack of belief. He argues by example: Jean has forgotten that he learned some English history years earlier, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he has forgotten that he ever learned English history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066.
Those who agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting the Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, or adopt a behaviourist conception of belief, according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.
D. M. Armstrong (1973) takes a different tack against Radford. Jean does know that the Battle of Hastings took place in 1066; Armstrong grants Radford that point. In fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and had he forgotten being ‘taught’ this and subsequently ‘guessed’ that it took place in 1066, we would surely describe the situation as one in which Jean’s false belief about the Battle became a memory trace that was causally responsible for his guess. Thus while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.
Suppose that Jean’s memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his response is mere guesswork, and so he has every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.
The philosophy of religion attempts to understand the concepts involved in religious belief: existence, necessity, fate, creation, sin, justice, mercy, redemption, God. Until the 20th century the history of Western philosophy was closely intertwined with attempts to make sense of aspects of pagan, Jewish or Christian religion, while in other traditions such as Hinduism, Buddhism or Taoism there is even less distinction between religious and philosophical enquiry. The classic problem of conceiving an appropriate object of religious belief is that of understanding whether any term can be predicated of it: does it make any sense to talk of its creating things, willing events, or being one thing or many? The via negativa of theology is to claim that God can only be known by denying ordinary terms any application to him; another influential suggestion is that ordinary terms only apply metaphorically, and that there is no hope of cashing out the metaphors. Once a description of a Supreme Being is hit upon, there remains the problem of providing any reason for supposing that anything answering to the description exists. The medieval period was the high-water mark for purported proofs of the existence of God, such as the Five Ways of Aquinas, or the ontological argument of Anselm. Such proofs have fallen out of general favour since the 18th century, although they still sway many people and some philosophers.
Generally speaking, even religious philosophers (or perhaps they especially) have been wary of popular manifestations of religion. Kant, himself a friend of religious faith, nevertheless distinguished various perversions: theosophy (using transcendental conceptions that confuse reason), demonology (indulging an anthropomorphic mode of representing the Supreme Being), theurgy (a fanatical delusion that feeling can be communicated from such a being, or that we can exert an influence on it), and idolatry, or a superstitious delusion that one can make oneself acceptable to this Supreme Being by means other than that of having the moral law at heart (Critique of Judgement). Such warmly devotional tendencies have, however, been increasingly important in modern theology.
Since Feuerbach there has been a growing tendency for the philosophy of religion either to concentrate upon the social and anthropological dimensions of religious belief, or to treat it as a manifestation of various explicable psychological urges. Another reaction is to retreat into a celebration of purely subjective existential commitments. Still, the ontological argument continues to attract attention, and modern anti-foundationalist trends in epistemology are not entirely hostile to cognitive claims based on religious experience.
Still, the problem of reconciling the subjective or psychological nature of mental life with its objective and logical content preoccupied Husserl, whose next attack on the problem was the elephantine Logische Untersuchungen (1900-1, trans. as Logical Investigations, 1970). Its aim was to keep a subjective and a naturalistic approach to knowledge together; Husserl eventually abandoned the naturalism in favour of a kind of transcendental idealism. The precise nature of his change is disguised by a penchant for new and impenetrable terminology, but the ‘bracketing’ of external questions to a great extent acknowledged the implications of a solipsistic, disembodied Cartesian ego as its starting-point, with it thought of as inessential that the thinking subject is either embodied or surrounded by others. However, by the time of Cartesian Meditations (first published in French as Méditations cartésiennes, 1931; trans. 1960), a shift in priorities had begun, with the embodied individual, surrounded by others, rather than the disembodied Cartesian ego, now returned to a fundamental position. The extent to which this desirable shift undermines the programme of phenomenology identified with Husserl’s earlier approach remains unclear, although later phenomenologists such as Merleau-Ponty have worked fruitfully from the later standpoint.
Pythagoras established and was the central figure in a school of philosophy, religion, and mathematics; he was apparently viewed by his followers as semi-divine. For his followers the regular solids (symmetrical three-dimensional forms in which all sides are the same regular polygon) and whole numbers became revered essences. In contrast with ordinary language, the language of mathematical and geometric forms seemed closed, precise and pure: provided one understood the axioms and notation, the meaning conveyed was invariant from one mind to another. For the Pythagoreans this was the language empowering the mind to leap beyond the confusion of sense experience into the realm of immutable and eternal essences. This mystical insight made Pythagoras the figure from antiquity most revered by the creators of classical physics, and it continues to have great appeal for contemporary physicists as they struggle with the epistemological implications of the quantum mechanical description of nature.
Pythagoras (b. c.570 BC) was the son of Mnesarchus of Samos, but emigrated (c.531 BC) to Croton in southern Italy. Here he founded a religious society, but was forced into exile and died at Metapontum. Membership of the society entailed self-discipline, silence and the observance of his taboos, especially against eating flesh and beans. Pythagoras taught the doctrine of metempsychosis, or the cycle of reincarnation, and was supposed able to remember former existences. The soul, which has its own divinity and may have existed as an animal or plant, can, however, gain release by a religious dedication to study, after which it may rejoin the universal world-soul. Pythagoras is usually, but doubtfully, credited with having discovered the basis of acoustics, the numerical ratios underlying the musical scale, thereby initiating the arithmetical interpretation of nature. This tremendous success inspired the view that the whole of the cosmos should be explicable in terms of harmonia or number. The view represents a magnificent break from the Milesian attempt to ground physics on a conception of matter shared by all things, concentrating instead on form, meaning that physical nature receives an intelligible grounding in different geometric structures. The view is vulgarized in the doctrine usually attributed to Pythagoras, that all things are numbers. The association of abstract qualities with numbers reached remarkable heights, with occult attachments, for instance, between justice and the number four, and mystical significance, especially of the number ten. Cosmologically Pythagoras explained the origin of the universe in mathematical terms, as the imposition of limit on the limitless by a kind of injection of a unit. Followers of Pythagoras included Philolaus, the earliest cosmologist known to have understood that the earth is a moving planet. It is also likely that the Pythagoreans discovered the irrationality of the square root of two.
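To make the acoustical claim concrete, these are the whole-number ratios of string length traditionally credited to the Pythagoreans (standard values, supplied here for illustration rather than taken from the original text):

\[
\text{octave} = 2:1, \qquad \text{perfect fifth} = 3:2, \qquad \text{perfect fourth} = 4:3
\]

The discovery that audible harmony tracks such simple ratios is what made the programme of explaining the whole cosmos through harmonia seem plausible.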
The Pythagoreans considered numbers to be among the building blocks of the universe. In fact, one of the most central beliefs of Pythagoras’s mathematikoi, his inner circle, was that reality was mathematical in nature. This made numbers valuable tools, and over time even the knowledge of a number’s name came to be associated with power. If you could name something you had a degree of control over it, and to have power over the numbers was to have power over nature.
One, for example, stood for the mind, emphasizing its Oneness. Two was opinion, taking a step away from the singularity of mind. Three was wholeness (a whole needs a beginning, a middle and an ending to be more than a one-dimensional point), and four represented the stable squareness of justice. Five was marriage, being the sum of three and two, the first odd (male) and even (female) numbers. (Three was the first odd number because the number one was considered by the Greeks to be so special that it could not form part of an ordinary grouping of numbers.)
The allocation of interpretations went on up to ten, which for the Pythagoreans was the number of perfection. Not only was it the sum of the first four numbers, but when a series of ten dots is arranged in rows of 1, 2, 3 and 4, each row above the next, it forms a perfect triangle, the simplest of the two-dimensional shapes. So convinced were the Pythagoreans of the importance of ten that they assumed there had to be a tenth body in the heavens on top of the known ones, an anti-Earth, never seen as it was constantly behind the Sun. The power of the number ten may also have linked with ancient Jewish thought, where it appears in a number of guises: the ten commandments, and the ten components of the Jewish mystical cabbala tradition.
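The arithmetic behind this ‘perfect triangle’ (the tetractys) is easily checked; in modern notation (the triangular-number formula is a gloss, not in the original), ten is the fourth triangular number:

\[
1 + 2 + 3 + 4 = 10, \qquad T_n = \frac{n(n+1)}{2}, \qquad T_4 = \frac{4 \cdot 5}{2} = 10
\]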
Such numerology, ascribing a natural or supernatural significance to numbers, can also be seen in Christian works, and continues in some new-age traditions. In the Opus majus, written in 1266, the English scientist-friar Roger Bacon wrote that: ‘Moreover, although a manifold perfection of number is found according to which ten is said to be perfect, and seven, and six, yet most of all does three claim itself perfection’.
Ten, as we have already seen, was allocated to perfection. Seven was the number of planets according to the ancient Greeks, while the Pythagoreans had designated the number seven as the universe. Six also has a mathematical significance, as Bacon points out, because if you break it down into the factors that can be multiplied together to make it (one, two and three), they also add up to six:
1 x 2 x 3 = 6 = 1 + 2 + 3
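In modern terminology (a gloss on Bacon’s point, not his own), six is the first ‘perfect number’, a number equal to the sum of its proper divisors; the next is 28:

\[
6 = 1 + 2 + 3, \qquad 28 = 1 + 2 + 4 + 7 + 14
\]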
Such, it is said, was the concern of the Pythagoreans to keep irrational numbers to themselves. Bearing this in mind, it might seem amazing that the Pythagoreans could cope with the values involved in this discovery at all. After all, since the square root of 2 cannot be represented by a ratio, we have to use a decimal fraction to write it out. It would indeed be amazing were it true that the Greeks had a grasp of irrational numbers as decimal fractions. In fact, though you might find it mentioned that the Pythagoreans did, to talk about them understanding numbers in this way totally misrepresents the way they thought.
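The argument usually reconstructed for the Pythagorean discovery (a standard modern sketch, not the Greeks’ own formulation) runs by contradiction. Suppose

\[
\sqrt{2} = \frac{p}{q}, \qquad p, q \ \text{whole numbers with no common factor.}
\]

Then \(p^2 = 2q^2\), so \(p\) is even; writing \(p = 2r\) gives \(q^2 = 2r^2\), so \(q\) is even as well, contradicting the assumption that the fraction was in lowest terms. Hence no ratio of whole numbers can equal \(\sqrt{2}\).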
 At this point the view becomes fused with Christian doctrine, when the logos is God’s instrument in the development (redemption) of the world. The notion survives in the idea of laws of nature, if these are conceived of as independent guides of the natural course of events, existing beyond the temporal world that they order. The theory of knowledge and its central questions include the origin of knowledge, the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world.
 One group of problems concerns the relation between mental and physical properties. Collectively they are called ‘the mind-body problem’. This problem has been the central question of the philosophy of mind since Descartes formulated it three centuries ago, and for many people understanding the place of mind in nature is the greatest philosophical problem. Mind is often thought to be the last domain that stubbornly resists scientific understanding, and philosophers differ over whether they find that a cause for celebration or scandal. The mind-body problem in the modern era was given its definitive shape by Descartes, although the dualism that he espoused is far older and more widespread, occurring in some form wherever there is a religious or philosophical tradition by which the soul may have an existence apart from the body. While most modern philosophers of mind would reject the imaginings that lead us to think that this makes sense, there is no consensus over the way to integrate our understanding of people as bearers of physical properties on the one hand and as subjects of mental lives on the other.
 It is our conviction that the discovery of non-locality has more potential to transform our conceptions of the ‘way things are’ than any previous discovery. These implications extend well beyond the domain of the physical sciences, and the best efforts of many thoughtful people will be required to understand them.
 Perhaps the most startling and potentially revolutionary of these implications in human terms is a view of the relationship between mind and world that is utterly different from that sanctioned by classical physics. René Descartes was among the first to realize that mind or consciousness in the mechanistic world-view of classical physics appeared to exist in a realm separate and distinct from nature. Philosophers quickly realized that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced, however, that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
 Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This is sometimes known as the method of hyperbolic (extreme) doubt, or Cartesian doubt: the method of investigating how much knowledge has its basis in reason or experience, used by Descartes in the first two Meditations. The foundation is eventually found in the celebrated ‘Cogito ergo sum’: I think, therefore I am. By finding the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counterattacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes needed a divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invoked a ‘clear and distinct perception’ of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, ‘to have recourse to the veracity of the supreme Being, to prove the veracity of our senses, is surely making a very unexpected circuit.’
 In his own time Descartes’s conception of the entirely separate substance of the mind was recognized to lead to insoluble problems about the nature of the causal connection between mind and body. It also led to the parallelist picture of the mental and the physical as two systems running in parallel. When I stub my toe, this does not cause pain, but there is a harmony between the mental and the physical (perhaps due to God) that ensures that there will be a simultaneous pain; when I form an intention and then act, the same benevolence ensures that my action is appropriate to my intention. The theory has never been wildly popular, and in its application to the mind-body problem many philosophers would say that it was the result of a misconceived ‘Cartesian dualism’ of ‘subjective knowledge’ and ‘physical theory.’
 It also produces the problem, insoluble in its own terms, of ‘other minds.’ Descartes’s notorious denial that nonhuman animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes’s thought is reflected in Leibniz’s view, held later by Russell, that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes builds a remarkable physics. Since matter is in effect the same as extension there can be no empty space or ‘void’; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).
 Although the structure of Descartes’s epistemology, theory of mind, and theory of matter has been rejected often, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.
 A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional co-ordinates. Following the publication of Isaac Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and principle of scientific knowledge.
 The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concerns about its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile or eliminate Descartes’s stark division between mind and matter became the most central feature of Western intellectual life.
 Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that ‘Liberty, Equality, Fraternity’ are the guiding principles of this consciousness. Rousseau also invoked the idea of the ‘general will’ of the people to achieve these goals, and declared that those who do not conform to this will were social deviants.
 The Enlightenment idea of deism, which imaged the universe as a clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at origins, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the gap between mind and matter was pure reason. Traditional Judeo-Christian theism, which had previously been based on both reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that the truths of spiritual reality can be known only through divine revelation. This engendered a conflict between reason and revelation that persists to this day. It also laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating relations between mind and matter and the manner in which the special character of each should be ultimately defined.
 Rousseau’s attempt to posit a ground for human consciousness by reifying nature was revived in a different form by the nineteenth-century Romantics in Germany, England, and the United States. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are grounded in an indivisible spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific musings. In Goethe’s attempt to wed mind and matter, nature becomes a mindful agency that ‘loves illusion,’ ‘shrouds man in mist,’ ‘presses him to her heart,’ and punishes those who fail to see the ‘light.’ Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful spirit that unites mind and matter is progressively moving toward self-realization and undivided wholeness.
 The flaw of pure reason is, of course, the absence of emotion, and purely rational accounts of the relation between mind and an external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the ‘death of God’ theologian Friedrich Nietzsche. The death of God he called the greatest event in modern history and the cause of extreme danger. Yet there is a paradox contained in these words. He never said that there was no God, but that the Eternal had been vanquished by Time and that the Immortal suffered death at the hands of mortals. ‘God is dead’: it is like a cry mingled of despair and triumph, reducing, by comparison, the whole story of atheism and agnosticism before and after him to the level of respectable mediocrity, making it sound like a collection of announcements by people who regret that they are unable to invest in an unsafe proposition. This is the very core of Nietzsche’s spiritual existence, and what follows from it is despair and hope in a new greatness of man, visions of catastrophe and glory, the icy brilliance of analytical reason fathoming with affected irreverence those depths until now hidden by awe and fear, and, side by side with it, the ecstatic invocations of a ritual healer.
 Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth.’ The problem, claimed Nietzsche, is that earlier versions of the ‘will to truth’ disguise the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will.’
 In Nietzsche’s view, the separation between ‘mind’ and ‘matter’ is more absolute and total than had previously been imagined. Based on the assumption that there are no real or necessary correspondences between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in ‘a prison house of language.’ The prison as he conceived it, however, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on will.
 Those who fail to enact their existence in this space, says Nietzsche, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favours reductionistic examination of phenomena at the expense of mind; it also seeks to reduce mind to a mere material substance, and thereby to displace or subsume the separateness and uniqueness of mind with mechanistic descriptions that disallow a basis for the free exercise of individual will.
 Nietzsche’s emotionally charged defence of intellectual freedom and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless scientific universe proved terribly influential on twentieth-century thought. Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. As it turned out, the efforts to resolve that crisis resulted in paradoxes of recursion and self-reference that threatened to undermine both the efficacy of the correspondence between mathematical theory and physical reality and the privileged character of scientific knowledge.
 Through a curious course of events, attempts by Edmund Husserl, a philosopher trained in higher mathematics and physics, to resolve this crisis resulted in a view of the character of human consciousness that closely resembled that of Nietzsche.
 For Nietzsche, however, all the activities of human consciousness share the predicament of psychology. There can be, for him, no ‘pure’ knowledge, only satisfaction, however sophisticated, of an ever-varying intellectual need of the will to know. He therefore demands that man should accept moral responsibility for the kind of questions he asks, and that he should realize what values are implied in the answers he seeks; and in this he was more Christian than all our post-Faustian Fausts of truth and scholarship. ‘The desire for truth,’ he says, ‘is itself in need of critique. Let this be the definition of my philosophical task. By way of experiment, one will question for oneself the value of truth.’ And question it he does. He protests that, in an age that is as uncertain of its values as is his and ours, the search for truth will issue in trivialities or in catastrophe. We might wonder how he would react to the pious hope of our day that the intelligence and moral conscience of politicians will save the world from the disastrous products of our scientific explorations and engineering skills. It is perhaps not too difficult to guess; for he knew that there was a fatal link between the moral resolution of scientists to follow the scientific search wherever, by its own momentum, it will take us, and the moral debility of societies not altogether disinclined to ‘apply’ the results, however catastrophic. Believing that there was a hidden identity among all the expressions of the ‘Will to Power’, he saw the element of moral nihilism in the ethics of our science: its determination not to let ‘higher values’ interfere with its highest value, Truth (as it conceives it). Thus he said that the goal of knowledge pursued by the natural sciences means perdition.
 In these regions of his mind dwells the terror that he may have helped to bring about the very opposite of what he desired. When this terror comes to the fore, he is much afraid of the consequences of his teaching. Will perhaps the best be driven to despair by it, and the very worst accept it? Once he put into the mouth of some imaginary titanic genius what is his most terrible prophetic utterance: ‘Oh grant madness, you heavenly powers, madness that at last I may believe in myself,
. . . I am consumed by doubts, for I have killed the Law. If I am not more than the Law, then I am the most abject of all men’.
 Still ‘God is dead,’ and, sadly, he had to think the meanest thought: he saw in the real Christ an illegitimate son of the Will to Power, a frustrated rabbi who set out to save himself and the underdog from the intolerable strain of impotently resenting the Caesars: not to be Caesar was now proclaimed a spiritual distinction, a newly invented form of power, the power to be powerless.
 Nietzsche, for the nineteenth century, brings to its perverse conclusion a line of religious thought and experience linked with the names of St. Paul, St. Augustine, Pascal, Kierkegaard, and Dostoevsky, minds for whom God was not simply the creator of an order of nature within which man has his clearly defined place, but one who came to challenge their natural being, making demands that appeared absurd in the light of natural reason. These men are of the family of Jacob: having wrestled with God for His blessing, they ever after limp through life with the framework of Nature incurably out of joint. Nietzsche too is such a wrestler, except that in him the shadow of Jacob merges with the shadow of Prometheus. Like Jacob, Nietzsche believed that he prevailed against God in that struggle, and won a new name for himself, the name of Zarathustra. Yet the words he spoke on his mountain to the angel of the Lord were: ‘I will not let thee go, except thou curse me.’ Or, in words that Nietzsche did in fact speak: ‘I have on purpose devoted my life to exploring the whole contrast to a truly religious nature. I know the Devil and all his visions of God.’
 Husserl’s best-known disciple was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. Sartre’s first novel, La Nausée, was published in 1938 (trans. as Nausea, 1949). L’Imaginaire (1940, trans. as The Psychology of the Imagination, 1948) is a contribution to phenomenological psychology. Briefly captured by the Germans, Sartre spent the later war years in Paris, where L’Être et le néant, his major purely philosophical work, was published in 1943 (trans. as Being and Nothingness, 1956). The lecture L’Existentialisme est un humanisme (1946, trans. as Existentialism is a Humanism, 1947) consolidated Sartre’s position as France’s leading existentialist philosopher.
 Sartre’s philosophy is concerned entirely with the nature of human life, and the structures of consciousness. As a result it gains expression in his novels and plays as well as in more orthodox academic treatises. Its immediate ancestor is the phenomenological tradition of his teachers, and Sartre can most simply be seen as concerned to rebut the charge of idealism as it is laid at the door of phenomenology. The agent is not a spectator of the world, but, like everything in the world, constituted by acts of intentionality. The self thus constituted is historically situated, but as an agent whose own mode of finding itself in the world makes for responsibility and emotion. Responsibility is, however, a burden that we frequently cannot bear, and bad faith arises when we deny our own authorship of our actions, seeing them instead as forced responses to situations not of our own making.
 Sartre thus locates the essential nature of human existence in the capacity for choice, although choice, being equally incompatible with determinism and with the existence of a Kantian moral law, implies a synthesis of consciousness (being for-itself) and the objective (being in-itself) that is forever unstable. The unstable and constantly disintegrating nature of free will generates anguish. For Sartre our capacity to make negative judgements is one of the fundamental puzzles of consciousness. Like Heidegger he took an ‘ontological’ approach to the nature of nonbeing, a move that decisively differentiated him from the Anglo-American tradition of modern logic.
 The work of Husserl, Heidegger and Sartre became foundational to that of the principal architects of philosophical postmodernism: the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault, and Jacques Derrida. This direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form.
 The American Romantics envisioned a unified spiritual reality that manifested itself as a personal ethos that sanctioned radical individualism and bred aversion to the emergent materialism of the Jacksonian era. They were also more inclined than their European counterparts, as the examples of Thoreau and Whitman attest, to embrace scientific descriptions of nature. However, the Americans also dissolved the distinction between mind and matter with an appeal to ontological monism, and alleged that mind could free itself from the constraints of matter in moments of mystical awareness.
 Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality, and knew virtually nothing about the physical substrates of human consciousness, the business of examining the dynamics and structure of mind became the province of social scientists and humanists. Adolphe Quételet proposed a ‘social physics’ that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.
 Nonetheless, physicists like Planck and Einstein understood and embraced holism as an inescapable condition of our physical existence. According to Einstein’s general theory of relativity, wrote Planck, ‘each individual particle of a system in a certain sense, at any one time, exists simultaneously in every part of the space occupied by the system’. The system, as Planck made clear, is the entire cosmos. As Einstein put it, ‘physical reality must be described in terms of continuous functions in space. The material point, therefore, can hardly be conceived any more as the basic concept of the theory.’
 More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each was obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.
 The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, ‘relativistic’ notions.
 As for Newton, the British mathematician and physicist whom Hume called ‘the greatest and rarest genius that ever arose for the ornament and instruction of the species’: his mathematical discoveries are usually dated to between 1665 and 1666, when he was secluded in Lincolnshire, the university being closed because of the plague. His great work, the Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy, usually referred to as the Principia), was published in 1687.
 Yet throughout his career, Newton engaged in scientific correspondence and controversy. The often-quoted remark, ‘If I have seen further it is by standing on the shoulders of Giants’, occurs in a conciliatory letter to Robert Hooke (1635-1703). Newton was in fact echoing the remark of Bernard of Chartres in 1120: ‘We are dwarfs standing on the shoulders of giants’. The dispute with Leibniz over the invention of the calculus is his best-known quarrel: Newton himself appointed the committee of the Royal Society that judged the question of precedence, and then wrote its report, the Commercium Epistolicum, awarding himself the victory. Although he belonged to the ‘age of reason,’ Newton was himself interested in alchemy, prophecy, gnostic wisdom and theology.
 The philosophical influence of the Principia was incalculable, and from Locke’s Essay onward philosophers recognized Newton’s work as a new paradigm of scientific method, without being entirely clear what parts reason and observation play in the edifice. Although Newton ushered in so much of the scientific world-view, in the General Scholium at the end of the Principia he argues that ‘it is not to be conceived that mere mechanical causes could give birth to so many regular motions’, and hence that his discoveries pointed to the operations of God, ‘to discourse of whom from phenomena does certainly belong to natural philosophy.’ Newton confesses that he has ‘not been able to discover the cause of those properties of gravity from phenomena’: Hypotheses non fingo (I do not make hypotheses). It was left to Hume to argue that the kind of thing Newton does, namely place the events of nature into law-like orders and patterns, is the only kind of thing that scientific enquiry can ever do.
 An ‘action at a distance’ is a much contested concept in the history of physics. Aristotelian physics holds that every motion requires a conjoined mover; action can therefore never occur at a distance, but needs a medium enveloping the body, parts of which befit its motion and push it from behind (antiperistasis). Although natural motions like free fall and magnetic attraction (quaintly called ‘coition’) were recognized in the post-Aristotelian period, the rise of the ‘corpuscularian’ philosophy again banned ‘attraction’, or unmediated action at a distance: the classic argument is that ‘matter cannot act where it is not’. Boyle, who expounded the corpuscularian view in his Sceptical Chymist (1661) and The Origin of Forms and Qualities (1666), held that all material substances are composed of minute corpuscles, themselves possessing shape, size, and motion. The different properties of materials would arise from different combinations and collisions of corpuscles: chemical properties, such as solubility, would be explicable by the mechanical interactions of corpuscles, just as the capacity of a key to turn a lock is explained by their respective shapes. In Boyle’s hands the idea is opposed to the Aristotelian theory of elements and principles, which he regarded as untestable and sterile. His approach is a precursor of modern chemical atomism, and had immense influence on Locke. Locke, however, recognized the need for a different kind of force guaranteeing the cohesion of atoms, and both this and the interaction between such atoms were criticized by Leibniz.
 Cartesian physical theory also postulated ‘subtle matter’ to fill space and provide the medium for force and motion. Its successor, the aether, was postulated in order to provide a medium for transmitting forces and causal influences between objects that are not in direct contact. Even Newton, whose treatment of gravity might seem to leave it conceived of as action at a distance, supposed that an intermediary must be postulated, although he could make no hypothesis as to its nature. Locke, having originally said that bodies act on each other ‘manifestly by impulse and nothing else’, later changed his mind and struck out the words ‘and nothing else,’ although impulse remains ‘the only way that we can conceive bodies to operate in’. In the Metaphysical Foundations of Natural Science Kant clearly sets out the view that the way in which bodies impel each other is no more natural, or intelligible, than the way in which they act at a distance; in particular he repeats the point half-understood by Locke, that any conception of solid, massy atoms requires understanding the force that makes them cohere as a single unity, which cannot itself be understood in terms of elastic collisions. Many contemporary field theories admit of alternative equivalent formulations, one with action at a distance, one with local action only.
 Albert Einstein put forward two such theories: the special theory of relativity (1905) and the general theory of relativity (1915).
 The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had postulated absolute space. In electromagnetism the ether was supposed to provide an absolute basis with respect to which motion could be determined. The Galilean transformation equations represent the set of equations:
x′ = x ‒ vt
y′ = y
z′ = z
t′ = t
They are used for transforming the parameters of position and motion from an observer at the point O with co-ordinates (x, y, z) to an observer at O′ with co-ordinates (x′, y′, z′). The x-axis is chosen to pass through O and O′, the times of an event, t and t′, in the frames of reference of the observers at O and O′ coincide, and ‘v’ is the relative velocity of separation of O and O′. The equations conform to Newtonian mechanics. The Lorentz transformation equations, by contrast, are a set of equations for transforming the position and motion parameters from an observer at a point O(x, y, z) to an observer at O′(x′, y′, z′) moving relative to one another. They replace the Galilean transformation equations of Newtonian mechanics in relativity problems. If the x-axes are chosen to pass through O and O′, and the times of an event are t and t′ in the frames of reference of the observers at O and O′ respectively (where the zeros of their time scales are the instants at which O and O′ coincided), the equations are:
x′ = β(x ‒ vt)
y′ = y
z′ = z
t′ = β(t ‒ vx/c2)
where ‘v’ is the relative velocity of separation of O and O′, ‘c’ is the speed of light, and ‘β’ is the function:
β = (1 ‒ v2/c2)-½
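 A minimal numerical sketch of the two transformations in Python (assuming SI units; the factor called β above is more often written γ) might read:

  # Galilean vs. Lorentz transformation of an event (x, t); y and z are unchanged.
  C = 299_792_458.0  # speed of light, m/s

  def galilean(x, t, v):
      return x - v * t, t

  def lorentz(x, t, v):
      beta = 1.0 / (1.0 - v**2 / C**2) ** 0.5
      return beta * (x - v * t), beta * (t - v * x / C**2)

  # At everyday speeds the two nearly agree; near c they diverge sharply.
  print(galilean(1.0e3, 1.0e-5, 30.0))
  print(lorentz(1.0e3, 1.0e-5, 30.0))
  print(lorentz(1.0e3, 1.0e-5, 0.9 * C))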
 Newton’s laws of motion: in his Principia (1687), Newton stated the three fundamental laws of motion that are the basis of Newtonian mechanics. The First Law states that all bodies persevere in their state of rest, or of uniform motion in a straight line, except in so far as they are compelled to change that state by forces impressed on them. This may be regarded as a definition of force. The Second Law states that the rate of change of linear momentum is proportional to the force applied, and takes place in the straight line in which that force acts. This definition can be regarded as formulating a suitable way by which forces may be measured, that is, by the acceleration they produce:
F = d(mv)/dt
i.e., F = m(dv/dt) + v(dm/dt),
where F = force, m = mass, v = velocity, t = time, and a = acceleration. In the majority of cases the mass is constant (the non-relativistic case), so that dm/dt = 0, and then:
F = ma.
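 A tiny numerical sketch of the second law in Python, showing that the general form F = m(dv/dt) + v(dm/dt) reduces to F = ma when the mass is constant (the mass-shedding example with dm/dt ≠ 0 is purely illustrative):

  # General second law: F = d(mv)/dt = m*dv/dt + v*dm/dt.
  def force(m, dv_dt, v=0.0, dm_dt=0.0):
      return m * dv_dt + v * dm_dt

  print(force(2.0, 3.0))                      # constant mass: F = ma = 6.0 N
  print(force(2.0, 3.0, v=10.0, dm_dt=-0.1))  # mass-shedding body: 5.0 N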
 The Third Law states that forces are caused by the interaction of pairs of bodies. The force exerted by ‘A’ upon ‘B’ and the force exerted by ‘B’ upon ‘A’ are simultaneous, equal in magnitude, opposite in direction and in the same straight line, caused by the same mechanism.
 The popular statement of this law, in terms of ‘action and reaction,’ leads to much misunderstanding. In particular, any two forces that happen to be equal and opposite are apt to be treated as action and reaction even if they act on the same body; one force, arbitrarily called the ‘reaction,’ is supposed to be a consequence of the other and to happen subsequently; and the two forces are supposed to oppose each other, causing equilibrium. Further, certain forces such as those exerted by a support or by propellants are conventionally called ‘reactions,’ causing considerable confusion.
 The third law may be illustrated by the following examples. The gravitational force exerted by a body on the earth is equal and opposite to the gravitational force exerted by the earth on the body. The intermolecular repulsive force exerted on the ground by a body resting on it, or hitting it, is equal and opposite to the intermolecular repulsive force exerted on the body by the ground. A more general system of mechanics was given by Einstein in his theory of relativity. This reduces to Newtonian mechanics when all velocities relative to the observer are small compared with that of light.
 Einstein rejected the concept of absolute space and time, and made two postulates: (i) the laws of nature are the same for all observers in uniform relative motion, and (ii) the speed of light is the same for all such observers, independently of the relative motions of sources and detectors. He showed that these postulates were equivalent to the requirement that the co-ordinates of space and time used by different observers should be related by the Lorentz transformation equations. The theory has several important consequences.
 The transformation of time implies that two events that are simultaneous according to one observer will not necessarily be so according to another in uniform relative motion. This does not affect the order of causally related events, and so does not violate causality. It will appear to two observers in uniform relative motion that each other’s clock runs slowly. This is the phenomenon of ‘time dilation’: for example, an observer moving with respect to a radioactive source finds a longer decay time than is found by an observer at rest with respect to it, according to:
Tv = T0/(1 ‒ v2/c2)½
where Tv is the mean life measured by an observer at relative speed ‘v’, T0 is the mean life measured by an observer at rest with respect to the source, and ‘c’ is the speed of light.
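 As a rough numerical sketch in Python (the muon’s mean life at rest, about 2.2 microseconds, is a standard textbook value):

  # Time dilation: Tv = T0 / (1 - v^2/c^2)^(1/2).
  T0 = 2.2e-6      # muon mean life at rest, in seconds
  v_over_c = 0.98  # speed as a fraction of c

  Tv = T0 / (1.0 - v_over_c**2) ** 0.5
  print(Tv)        # about 1.1e-5 s: the moving muon appears to live ~5x longer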
 This formula has been verified in innumerable experiments. One consequence is that no body can be accelerated from a speed below ‘c’ with respect to any observer to one above ‘c’, since this would require infinite energy. Einstein deduced that the transfer of energy δE by any process entailed the transfer of mass δm, where δE = δmc2, and so he concluded that the total energy ‘E’ of any system of mass ‘m’ would be given by:
E = mc2
 The principle of conservation of mass states that the total mass of a closed system is constant. Although conservation of mass was verified in many experiments, the evidence for this was limited. In contrast, the great success of theories assuming the conservation of energy established that principle, and Einstein assumed it as an axiom in his theory of relativity. According to this theory the transfer of energy ‘E’ by any process entails the transfer of mass m = E/c2. Therefore, the conservation of energy ensures the conservation of mass.
 In Einstein’s theory, then, mass and energy are equivalent. This leads to alternative statements of the principle, in which terminology is not generally consistent. The law of equivalence of mass and energy states that mass ‘m’ and energy ‘E’ are related by the equation E = mc2, where ‘c’ is the speed of light in a vacuum. Thus, a quantity of energy ‘E’ has a mass ‘m’, and a mass ‘m’ has intrinsic energy ‘E’. The kinetic energy of a particle as determined by an observer with relative speed ‘v’ is (m ‒ m0)c2, which tends to the classical value ½m0v2 if v ≪ c.
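 A quick check of that limit in Python (a sketch; the rest mass of 1 kg is arbitrary):

  # Relativistic kinetic energy (m - m0)c^2 vs. the classical value (1/2) m0 v^2.
  C = 299_792_458.0
  m0 = 1.0  # rest mass, kg (arbitrary)

  for v in (1.0e3, 1.0e7, 0.5 * C):
      gamma = 1.0 / (1.0 - (v / C) ** 2) ** 0.5
      ke_rel = (gamma - 1.0) * m0 * C**2
      print(v, ke_rel / (0.5 * m0 * v**2))  # ratio tends to 1 as v/c -> 0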
 Attempts to express quantum theory in terms consistent with the requirements of relativity were begun by Sommerfeld (1915); eventually Dirac (1928) gave a relativistic formulation of the wave mechanics of fermions. This explained the concept of spin and the associated magnetic moment, which had been postulated to account for certain details of spectra. The theory led to results very important for the theory of elementary particles. The Klein-Gordon equation is the relativistic wave equation for ‘bosons’. It is applicable to bosons of zero spin, such as the ‘pion’. For example, the Klein-Gordon Lagrangian describes a single spin-0 scalar field φ:
L = ½[∂tφ∂tφ ‒ ∂xφ∂xφ ‒ ∂yφ∂yφ ‒ ∂zφ∂zφ] ‒ ½(2πmc/h)2φ2
Then:
∂L/∂(∂μφ) = ∂μφ
leading to the equation:
∂L/∂φ = ‒(2πmc/h)2φ
and therefore the Lagrange equation requires that:
∂μ∂μφ + (2πmc/h)2φ = 0,
which is the Klein-Gordon equation describing the evolution in space and time of the field φ. Individual excitations of the normal modes of φ are particles of spin-0 and mass ‘m’.
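 In natural units (ħ = c = 1) the equation reads ∂t2φ ‒ ∇2φ + m2φ = 0, and a plane wave solves it exactly when ω2 = k2 + m2. A symbolic check of this dispersion relation, sketched here with the sympy library, might be:

  # Verify that exp(i(kx - wt)) satisfies the 1-D Klein-Gordon equation
  # in natural units when w^2 = k^2 + m^2.
  import sympy as sp

  t, x = sp.symbols('t x', real=True)
  k, m = sp.symbols('k m', positive=True)
  w = sp.sqrt(k**2 + m**2)              # the dispersion relation
  phi = sp.exp(sp.I * (k * x - w * t))  # plane-wave mode

  kg = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2 * phi
  print(sp.simplify(kg))                # prints 0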
 A mathematical formulation of the special theory of relativity was given by Minkowski. It is based on the idea that an event is specified by four co-ordinates: three spatial co-ordinates and one time co-ordinate. These four co-ordinates define a four-dimensional space, and the motion of a particle can be described by a curve in this space, which is called ‘Minkowski space-time.’ In certain formulations of the theory, use is made of a four-dimensional co-ordinate system in which three dimensions represent the spatial co-ordinates x, y, z and the fourth dimension is ‘ict’, where ‘t’ is time, ‘c’ is the speed of light and ‘i’ is √‒1; points in this space are called events. The equivalent of the distance between two points is the interval (δs) between two events, given by a Pythagoras-like law in space-time as:
(δs)2 = ij ηij δ χI χj
Where:
χ = χ1, y = χ2, z = χ3 . . . , t = χ4 and η11 (χ) η33 (χ) = 1? η44 (χ)=1.
 Here ηij are the components of the Minkowski metric tensor. The distance between two points is not invariant under the ‘Lorentz transformation’, because measurements of the positions of the points that are simultaneous according to one observer will not be simultaneous according to another in uniform motion with respect to the first. By contrast, the interval between two events is invariant.
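 The invariance of the interval is easy to verify numerically. A short sketch in Python, in units with c = 1:

  # The interval s^2 = x^2 - t^2 is unchanged by a Lorentz boost,
  # unlike the spatial coordinate x alone.
  def boost(x, t, v):
      g = 1.0 / (1.0 - v * v) ** 0.5
      return g * (x - v * t), g * (t - v * x)

  x, t = 4.0, 5.0
  xp, tp = boost(x, t, 0.6)
  print(x**2 - t**2, xp**2 - tp**2)  # both -9.0 (up to rounding)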
 The equivalent of a vector in the four-dimensional space is a ‘four-vector’, which has three space components and one time component. For example, the four-vector momentum has a time component proportional to the energy of the particle; the four-vector potential has space components given by the magnetic vector potential, while the time component corresponds to the electric potential.
 The special theory of relativity is concerned with relative motion between non-accelerated frames of reference. The general theory deals with general relative motion between accelerated frames of reference. In accelerated systems of reference, certain fictitious forces are observed, such as the centrifugal and Coriolis forces found in rotating systems. These are known as fictitious forces because they disappear when the observer transforms to a non-accelerated system. For example, to an observer in a car rounding a bend at constant speed, objects in the car appear to suffer a force acting outward. To an observer outside the car, this is simply their tendency to continue moving in a straight line. The inertia of the objects is seen to cause a fictitious force, and the observer can distinguish between non-inertial (accelerated) and inertial (non-accelerated) frames of reference.
 A further point is that, to the observer in the car, all the objects are given the same acceleration whatever their mass. This implies a connection between the fictitious forces arising in accelerated systems and forces due to gravity, where the acceleration produced is independent of the mass. Near the surface of the earth the acceleration of free fall, ‘g’, is measured with respect to a nearby point on the surface. Because of the axial rotation the reference point is accelerated toward the centre of the circle of its latitude, so ‘g’ differs slightly in magnitude and direction from the acceleration toward the centre of the earth given by the theory of gravitation. In 1687 Newton presented his law of universal gravitation, according to which every particle attracts every other particle with a force ‘F’ given by:
F = Gm1m2/x2,
where m1 and m2 are the masses of two particles a distance ‘x’ apart, and ‘G’ is the gravitational constant, which, according to modern measurements, has the value:
G = 6.67259 x 10-11 m3 kg-1 s-2.
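 A one-line application of the law in Python, with rough (approximate, from-memory) Earth-Moon values:

  # Newton's law F = G m1 m2 / x^2.
  G = 6.67259e-11    # m^3 kg^-1 s^-2, as quoted above
  m_earth = 5.97e24  # kg (approximate)
  m_moon = 7.35e22   # kg (approximate)
  x = 3.84e8         # mean Earth-Moon separation, m (approximate)

  print(G * m_earth * m_moon / x**2)  # roughly 2e20 N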
 For extended bodies the forces are found by integration. Newton showed that the external effect of a spherically symmetric body is the same as if the whole mass were concentrated at the centre. Astronomical bodies are roughly spherically symmetrical, so they can be treated as point particles to a very good approximation. On this assumption Newton showed that his law was consistent with Kepler’s Laws. Until recently, all experiments had confirmed the accuracy of the inverse square law and the independence of the law of the nature of the substances involved, but in the past few years evidence has been found against both.
 The strength of a gravitational field at any point is given by the force exerted on unit mass at that point. The field intensity at a distance ‘x’ from a point mass ‘m’ is therefore Gm/x2, and acts toward ‘m’. Gravitational field strength is measured in newtons per kilogram. The gravitational potential ‘V’ at a point is the work done in moving a unit mass from infinity to the point against the field. Importantly: (a) the potential at a point distance ‘x’ from the centre of a hollow homogeneous spherical shell of mass ‘m’, outside the shell, is
V = ‒Gm/x
The potential is the same as if the mass of the shell were concentrated at the centre. (b) At any point inside the spherical shell the potential is equal to its value at the surface:
V = ‒Gm/r
where ‘r’ is the radius of the shell; thus there is no resultant force acting at any point inside the shell, since no potential difference exists between any two points. (c) The potential at a point distance ‘x’ from the centre of a homogeneous solid sphere, outside the sphere, is the same as that for a shell:
V = ‒Gm/x
(d) At a point inside the sphere, of radius ‘r’:
V = ‒Gm(3r2 ‒ x2)/2r3
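 These cases fit together into a single piecewise function, sketched below in Python with rough Earth-like numbers; the check at the surface shows that the inside and outside formulas for the solid sphere join continuously at x = r:

  # Gravitational potential of a homogeneous solid sphere of mass m, radius r:
  # outside: V = -G m / x ; inside: V = -G m (3 r^2 - x^2) / (2 r^3).
  G = 6.67259e-11

  def potential(x, m, r):
      if x >= r:
          return -G * m / x
      return -G * m * (3 * r**2 - x**2) / (2 * r**3)

  m, r = 5.97e24, 6.37e6  # rough Earth values
  print(potential(r, m, r), potential(r * 1.000001, m, r))  # continuous at x = r
  print(potential(0.0, m, r) / potential(r, m, r))          # 1.5 at the centre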
 The essential property of gravitation is that it causes a change in motion, in particular the acceleration of free fall (g) in the earth’s gravitational field. According to the general theory of relativity, gravitational fields change the geometry of space and time, causing it to become curved. It is this curvature of space and time, produced by the presence of matter, that controls the natural motions of bodies. General relativity may thus be considered a theory of gravitation, differences between it and Newtonian gravitation only appearing when the gravitational fields become very strong, as with ‘black holes’ and ‘neutron stars’, or when very accurate measurements can be made.
 There is thus an equivalence between accelerated systems and forces due to gravity, where the acceleration produced is independent of the mass. For example, a person in a sealed container could not easily determine whether he was being pressed toward the floor by gravity or whether the container was in space and being accelerated upward by a rocket. Observations extended in space and time could distinguish between these alternatives, but otherwise they are indistinguishable. This leads to the ‘principle of equivalence’, from which it follows that the inertial mass is the same as the gravitational mass. A further principle used in the general theory is that the laws of mechanics are the same in inertial and non-inertial frames of reference.
 The equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using Riemannian space-time, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any force is represented by a straight line in Minkowski space-time. In Riemannian space-time the motion is represented by a line that is no longer straight in the Euclidean sense, but is the line giving the shortest distance. Such a line is called a geodesic, and space-time is accordingly said to be curved. The extent of this curvature is given by the ‘metric tensor’ for space-time, the components of which are solutions to Einstein’s ‘field equations’. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.
 The predictions of general relativity differ from those of Newton’s theory only by small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light or other electromagnetic radiation in the presence of large bodies, and the Einstein shift. Very close agreement between the predictions of general relativity and accurately measured values has now been obtained. The ‘Einstein shift’, or ‘gravitational red-shift’, is a small shift toward the red in the lines of a stellar spectrum, caused by the gravitational potential at the level in the star at which the radiation is emitted (for a bright line) or absorbed (for a dark line). The shift can be explained in quantum terms: a quantum of energy hv has mass hv/c2; on moving between two points with gravitational potential difference φ, the work done is φhv/c2, so the change of frequency δv is φv/c2.
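 Near the earth’s surface φ ≈ gh, so the fractional shift over a height h is gh/c2. A back-of-envelope sketch in Python (the 22.5 m height is the commonly quoted tower of the Pound-Rebka experiment; treat it here as an assumption):

  # Gravitational frequency shift: dv/v = phi / c^2, with phi = g * h.
  g = 9.81  # m s^-2
  h = 22.5  # m
  c = 299_792_458.0

  print(g * h / c**2)  # about 2.5e-15 fractional shift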
 The assumptions on which Einstein’s special theory of relativity (1905) rests are: (i) inertial frameworks are equivalent for the description of all physical phenomena, and (ii) the speed of light in empty space is constant for every observer, regardless of the motion of the observer or the light source. Although the second assumption may seem plausible in the light of the Michelson-Morley experiment of 1887, which failed to find any difference in the speed of light in the direction of the earth’s motion or when measured perpendicular to it, it seems likely that Einstein was not influenced by the experiment, and may not even have known the results. Because of the second postulate, no matter how fast she travels, an observer can never overtake a ray of light and see it as stationary beside her: however nearly her speed approaches that of light, light still retreats at its classical speed. The consequences are that space, time and mass become relative to the observer. Measurements of quantities in an inertial system moving relative to one’s own reveal slow clocks, with the effect increasing as the relative speed of the systems approaches the speed of light. Events deemed simultaneous as measured within one such system will not be simultaneous as measured from the other; time and space thus lose their separate identities and become parts of a single space-time. The special theory also has the famous consequence (E = mc2) of the equivalence of energy and mass.
 Einstein’s general theory of relativity (1916) treats of non-inertial systems, i.e., those accelerating relative to each other. The leading idea is that the laws of motion in an accelerating frame are equivalent to those in a gravitational field. The theory treats gravity not as a Newtonian force acting in an unknown way across distance, but as a metrical property of a space-time continuum that is curved near matter. Gravity can be thought of as a field described by the metric tensor at every point. The first serious non-Euclidean geometry is usually attributed to the Russian mathematician N.I. Lobachevski, writing in the 1820s. Euclid’s fifth axiom, the axiom of parallels, states that through any point not falling on a straight line, one straight line can be drawn that does not intersect the first. In Lobachevski’s geometry several such lines can exist. Later G.F.B. Riemann (1826-66) realized that the two-dimensional geometry that would be hit upon by persons confined to the surface of a sphere would be different from that of persons living on a plane: for example, π would be smaller, since the diameter of a circle, as drawn on a sphere, is large compared with the circumference. Generalizing, Riemann reached the idea of a geometry in which there are no straight lines that do not intersect a given straight line, just as on a sphere all great circles (the lines of shortest distance between two points) intersect.
 The way then lay open to separating the question of the mathematical nature of a purely formal geometry from the question of its physical application. In 1854 Riemann showed that space of any curvature could be described by a set of numbers known as its metric tensor: for example, ten numbers suffice to describe the metric at each point of a four-dimensional manifold. To apply a geometry means finding coordinative definitions correlating the notions of the geometry, notably those of a straight line and an equal distance, with physical phenomena such as the path of a light ray, or the size of a rod at different times and places. The status of these definitions has been controversial, with some, such as Poincaré, seeing them simply as conventions, and others seeing them as important empirical truths. With the general rise of holism in the philosophy of science the question of status has abated a little, it being recognized simply that the coordination plays a fundamental role in physical science.
 The classic analogy for curved space-time is a rock sitting on a bed. If a heavy object is rolled across the bed, it is deflected toward the rock not by a mysterious force, but by the deformation of the space, i.e., the depression of the sheet around the rock; it follows a curvilinear trajectory. Interestingly, the general theory lends some credit to a version of the Newtonian absolute theory of space, in the sense that space itself is regarded as a thing with metrical properties of its own. The search for a unified field theory is the attempt to show that, just as gravity is explicable through the nature of space-time, so are the other fundamental physical forces: the strong and weak nuclear forces, and the electromagnetic force. The theory of relativity is the most radical challenge to the ‘common sense’ view of space and time as fundamentally distinct from each other, with time as an absolute linear flow in which events are fixed in objective relationships.
 After adaptive changes in the brains and bodies of hominids made it possible for modern humans to construct a symbolic universe using complex language systems, something quite dramatic and wholly unprecedented occurred. We began to perceive the world through the lenses of symbolic categories, to construct similarities and differences in terms of categorical priorities, and to organize our lives according to themes and narratives. Living in this new symbolic universe, modern humans were compelled to encode and recode experiences, to translate everything into representation, and to seek out the deeper hidden and underlying logic that eliminates inconsistencies and ambiguities.
 The first mega-narrative or frame tale that served to legitimate and rationalize the categorical oppositions and terms of relation between the myriad constructs in the symbolic universe of modern humans was religion. The use of religious thought for these purposes is quite apparent in the artifacts found in the fossil remains of people living in France and Spain forty thousand years ago. These artifacts provide the first concrete evidence that a fully developed language system had given birth to an intricate and complex social order.
 Both religious and scientific thought seek to frame or construct reality in terms of origins, primary oppositions, and underlying causes, and this partially explains why fundamental assumptions in the Western metaphysical tradition were eventually incorporated into a view of reality that would later be called scientific. The history of scientific thought reveals that the dialogue between assumptions about the character of spiritual reality in ordinary language and the character of physical reality in mathematical language was intimate and ongoing from the early Greek philosophers to the first scientific revolution in the seventeenth century. However, this dialogue did not conclude, as many have argued, with the emergence of positivism in the eighteenth and nineteenth centuries. It was perpetuated in disguised form in the hidden ontology of classical epistemology, the central issue in the Bohr-Einstein debate.
The assumption that a one-to-one correspondence exists between every element of physical reality and physical theory may serve to bridge the gap between mind and world for those who use physical theories. Yet it also presupposes that the Cartesian division is real and insurmountable in constructions of physical reality based on ordinary language. This explains in no small part why the radical separation between mind and world sanctioned by classical physics and formalized by Descartes (1596-1650) remains, as philosophical postmodernism attests, one of the most pervasive features of Western intellectual life.
Nietzsche, in subverting the epistemological authority of scientific knowledge, posited a division between mind and world much starker than that originally envisioned by Descartes. What is not widely known, however, is that Nietzsche and other seminal figures in the history of philosophical postmodernism were very much aware of an epistemological crisis in scientific thought that arose much earlier than the one occasioned by wave-particle dualism in quantum physics. This crisis resulted from attempts during the last three decades of the nineteenth century to develop a logically self-consistent definition of number and arithmetic that would serve to reinforce the classical view of correspondence between mathematical theory and physical reality. As it turned out, these efforts resulted in paradoxes of recursion and self-reference that threatened to undermine both the efficacy of this correspondence and the privileged character of scientific knowledge.
Nietzsche appealed to this crisis to reinforce his assumption that, without ontology, all knowledge (including scientific knowledge) was grounded only in human consciousness. As the crisis continued, a philosopher trained in higher mathematics and physics, Edmund Husserl (1859-1938), attempted to preserve the classical view of correspondence between mathematical theory and physical reality by deriving the foundations of logic and number from consciousness in ways that would preserve self-consistency and rigour. This effort to ground mathematical physics in human consciousness, or in human subjective reality, was no trivial matter; it represents a direct link between these early challenges to the efficacy of classical epistemology and the tradition in philosophical thought that culminated in philosophical postmodernism.
Since Husserl’s epistemology, like that of Descartes and Nietzsche, was grounded in human subjectivity, a better understanding of his attempt to preserve the classical view of correspondence not only reveals more about the legacy of Cartesian dualism. It also suggests that the hidden ontology of classical epistemology was more responsible for the deep division and conflict between the two cultures, humanists and social scientists on the one side and scientists and engineers on the other, than was previously thought. The central question in this late-nineteenth-century debate over the status of the mathematical description of nature was the following: Is the foundation of number and logic grounded in classical epistemology, or must we assume, without any ontology, that the rules of number and logic are grounded only in human consciousness? In order to frame this question in the proper context, we should first examine in more detail the intimate and ongoing dialogue between physics and metaphysics in Western thought.
The history of science reveals that scientific knowledge and method did not emerge full-blown from the minds of the ancient Greeks any more than language and culture emerged fully formed in the minds of Homo sapiens sapiens. Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometric and numerical relationships. We speculate that the seeds of the scientific imagination were planted in ancient Greece, as opposed to Chinese or Babylonian culture, partly because the social, political and economic climate in Greece was more open to the pursuit of knowledge with marginal cultural utility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigation. Nevertheless, it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The philosophical debate that led to conclusions useful to the architects of classical physics can be briefly summarized. Thales’ fellow Milesian Anaximander claimed that the first substance, although indeterminate, manifested itself in a conflict of oppositions between hot and cold, moist and dry. The idea of nature as a self-regulating balance of forces was subsequently elaborated upon by Heraclitus (d. after 480 BC), who asserted that the fundamental substance is strife between opposites, which is itself the unity of the whole. It is, said Heraclitus, the tension between opposites that keeps the whole from simply ‘passing away.’
Parmenides of Elea (b. c. 515 BC) argued in turn that the unifying substance is unique and static Being. This led to a conclusion about the relationship between ordinary language and external reality that was later incorporated into the view of the relationship between mathematical language and physical reality. Since thinking or naming involves the presence of something, said Parmenides, thought and language must be dependent upon the existence of objects outside the human intellect. Presuming a one-to-one correspondence between word and idea and actually existing things, Parmenides concluded that our ability to think or speak of a thing at various times implies that it exists at all times. Hence the indivisible One does not change, and all perceived change is an illusion.
These assumptions emerged in roughly the form in which they would be used by the creators of classical physics in the thought of the atomists, Leucippus (fl. 450-420 BC) and Democritus (c. 460-c. 370 BC). They reconciled the two dominant and seemingly antithetical conceptions of the fundamental character of being, Becoming (Heraclitus) and unchanging Being (Parmenides), in a remarkably simple and direct way. Being, they said, is present in the invariable substance of the atoms that, through blending and separation, make up the things of the changing or becoming world.
The last remaining feature of what would become the paradigm for the first scientific revolution in the seventeenth century is attributed to Pythagoras (b. c. 570 BC). Like Parmenides, Pythagoras held that the perceived world is illusory and that there is an exact correspondence between ideas and aspects of external reality. Pythagoras, however, had a different conception of the character of the idea that embodied this correspondence. The truth about the fundamental character of the unified and unifying substance, which could be uncovered through reason and contemplation, is, he claimed, mathematical in form.
Pythagoras established and was the central figure in a school of philosophy, religion and mathematics; he was apparently viewed by his followers as semi-divine. For his followers the regular solids (symmetrical three-dimensional forms in which all sides are the same regular polygon) and whole numbers became revered essences of sacred ideas. In contrast with ordinary language, the language of mathematics and geometric forms seemed closed, precise and pure. Provided one understood the axioms and notation, the meaning conveyed was invariant from one mind to another. The Pythagoreans felt that this language empowered the mind to leap beyond the confusion of sense experience into the realm of immutable and eternal essences. This mystical insight made Pythagoras the figure from antiquity most revered by the creators of classical physics, and it continues to have great appeal for contemporary physicists as they struggle with the epistemological implications of the quantum mechanical description of nature.
Progress was nonetheless made in mathematics, and to a lesser extent in physics, from the time of classical Greek philosophy to the seventeenth century in Europe. In Baghdad, for example, from about A.D. 750 to A.D. 1000, substantial advances were made in medicine and chemistry, and the relics of Greek science were translated into Arabic, digested, and preserved. Eventually these relics reentered Europe via the Arabic kingdoms of Spain and Sicily, and the works of figures like Aristotle (384-322 BC) and Ptolemy (fl. A.D. 127-148) reached the budding universities of France, Italy, and England during the Middle Ages.
For much of this period the Church provided the institutions, like the teaching orders, needed for the rehabilitation of philosophy. Nonetheless, the social, political and intellectual climate in Europe was not ripe for a revolution in scientific thought until the seventeenth century. Even well into the nineteenth century, the work of the new class of intellectuals we call scientists was more avocation than vocation, and the word ‘scientist’ did not appear in English until around 1840.
Copernicus (1473-1543) would have been described by his contemporaries as an administrator, a diplomat, an avid student of economics and classical literature, and, most notably, a highly honoured and highly placed church dignitary. Although we have named a revolution after him, this devoutly conservative man did not set out to create one. The placement of the Sun at the centre of the universe, which seemed right and necessary to Copernicus, was not the result of making careful astronomical observations. In fact, he made very few observations while developing his theory, and then only to ascertain whether his prior conclusions seemed correct. The Copernican system was also not any more useful in making astrological calculations than the accepted model and was, in some ways, much more difficult to implement. What, then, was his motivation for creating the model, and his reason for presuming that the model was correct?
Copernicus felt that the placement of the Sun at the centre of the universe made sense because he viewed the Sun as the symbol of the presence of a supremely intelligent and intelligible God in a man-centred world. He was apparently led to this conclusion in part because the Pythagoreans believed that fire exists at the centre of the cosmos, and Copernicus identified this fire with the fireball of the Sun. The only support that Copernicus could offer for the greater efficacy of his model was that it represented a simpler and more mathematically harmonious model of the sort that the Creator would obviously prefer. The language used by Copernicus in ‘On the Revolutions of the Heavenly Orbs’ illustrates the religious dimension of his scientific thought: ‘In the midst of all, the sun reposes, unmoving. Who, indeed, in this most beautiful temple would place the light-giver in any other part than that from which it can illumine all the other parts?’
The belief that the mind of God as Divine Architect permeates the workings of nature was central to the scientific thought of Johannes Kepler (1571-1630). For this reason, most modern physicists would probably feel some discomfort in reading Kepler’s original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of that word. Physical laws, wrote Kepler, ‘lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His own image, in order that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God’s, at least insofar as we can understand something of it in this mortal life.’
Believing, like Newton after him, in the literal truth of the words of the Bible, Kepler concluded that the word of God is also transcribed in the immediacy of observable nature. Kepler’s discovery that the motions of the planets around the Sun were elliptical, as opposed to perfect circles, may have made the universe seem a less perfect creation of God. For Kepler, however, the new model placed the Sun, which he also viewed as the emblem of divine agency, even more at the centre of a mathematically harmonious universe than the Copernican system had allowed. Communing with the perfect mind of God requires, as Kepler put it, ‘knowledge of numbers and quantity.’
Since Galileo did not use, or even refer to, the planetary laws of Kepler when those laws would have made his defence of the heliocentric universe more credible, his attachment to the god-like circle was probably a deeply rooted aesthetic and religious ideal. However, it was Galileo, even more than Newton, who was responsible for formulating the scientific idealism that quantum mechanics now forces us to abandon. In the ‘Dialogue Concerning the Two Chief World Systems,’ Galileo said the following about the followers of Pythagoras: ‘I know perfectly well that the Pythagoreans had the highest esteem for the science of number and that Plato himself admired the human intellect and believed that it participates in divinity solely because it is able to understand the nature of numbers. I myself am inclined to make the same judgement.’
This article of faith, that mathematical and geometric ideas mirror precisely the essences of physical reality, was the basis for the first scientific law of this new science, a constant describing the acceleration of bodies in free fall. Yet the law could not be confirmed by experiment. The experiments conducted by Galileo, in which balls of different sizes and weights were rolled simultaneously down an inclined plane, did not, as he frankly admitted, yield precise results. And since the vacuum pump had not yet been invented, there was simply no way that Galileo could subject his law to rigorous experimental proof in the seventeenth century. Galileo believed in the absolute validity of this law in the absence of experimental proof because he also believed that movement could be subjected absolutely to the law of number. What Galileo asserted, as the French historian of science Alexandre Koyré put it, was ‘that the real is, in its essence, geometrical and, consequently, subject to rigorous determination and measurement.’
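Galileo’s law can be stated today as a constant acceleration g, so that a body falling from rest covers a distance d = ½gt². A short sketch using the modern value of g, which Galileo of course did not possess:

```python
# Galileo's law of falling bodies: starting from rest, the distance
# fallen grows with the square of the elapsed time, d = (1/2) * g * t**2.
g = 9.81  # m/s^2, modern value, used here only for illustration

for t in (1.0, 2.0, 3.0):
    d = 0.5 * g * t**2
    print(f"t = {t:.0f} s: fallen {d:6.2f} m")
# Distances scale as 1:4:9, consistent with the odd-number rule
# (1:3:5 in successive equal intervals) Galileo inferred from his
# inclined-plane experiments.
```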
The popular image of Isaac Newton (1642-1727) is that of a supremely rational and dispassionate empirical thinker. Newton, like Einstein, could concentrate unswervingly on complex theoretical problems until they yielded a solution. Yet what most consumed his restless intellect were not the laws of physics. Beyond believing, like Galileo, that the essences of physical reality could be read in the language of mathematics, Newton also believed, with perhaps even greater intensity than Kepler, in the literal truths of the Bible.
For Newton the mathematical language of physics and the language of biblical literature were equally valid sources of communion with the eternal. His writings on biblical subjects in the extant documents alone consist of more than a million words in his own hand, and some of his speculations seem quite bizarre by contemporary standards. The Earth, said Newton, will still be inhabited after the day of judgement, and heaven, or the New Jerusalem, must be large enough to accommodate both the quick and the dead. Newton then put his mathematical genius to work and determined the dimensions required to house this population; his precise estimate was ‘the cube root of 12,000 furlongs.’
The point is that during the first scientific revolution the marriage between mathematical idea and physical reality, or between mind and nature via mathematical theory, was viewed as a sacred union. In our more secular age, the correspondence takes on the appearance of an unexamined article of faith or, to borrow a phrase from William James (1842-1910), ‘an altar to an unknown god.’ Heinrich Hertz, the famous nineteenth-century German physicist, nicely described what there is about the practice of physics that tends to inculcate this belief: ‘One cannot escape the feeling that these mathematical formulae have an independent existence and an intelligence of their own, that they are wiser than we are, wiser even than their discoverers, that we get more out of them than was originally put into them.’
Although Hertz made this statement without having to contend with the implications of quantum mechanics, the feeling he described remains among the most enticing and exciting aspects of physics. That elegant mathematical formulae provide a framework for understanding the origins and transformations of a cosmos of enormous age and dimensions is a staggering realization for budding physicists. Professors of physics do not, of course, tell their students that the study of physical laws is an act of communion with the perfect mind of God or that these laws have an independent existence outside the minds that discover them. The business of becoming a physicist typically begins, however, with the study of classical or Newtonian dynamics, and this training provides considerable covert reinforcement of the feeling that Hertz described.
Perhaps the best way to examine the legacy of the dialogue between science and religion in the debate over the implications of quantum non-locality is to examine the source of Einstein’s objections to quantum epistemology in more personal terms. Einstein apparently lost faith in the God portrayed in biblical literature in early adolescence. However, as the following passage from his ‘Autobiographical Notes’ suggests, aspects of this religious sensibility carried over into his understanding of the foundations of scientific knowledge: ‘Thus I came, despite the fact that I was the son of entirely irreligious (Jewish) parents, to a deep religiosity, which, however, found an abrupt end at the age of twelve. Through the reading of popular scientific books I soon reached the conviction that much in the stories of the Bible could not be true. The consequence was a positively fanatic orgy of freethinking coupled with the impression that youth is intentionally being deceived by the state through lies; it was a crushing impression. Suspicion against every kind of authority grew out of this experience. It was clear to me that the religious paradise of youth, which was thus lost, was a first attempt to free myself from the chains of the “merely personal”. The mental grasp of this extra-personal world within the frame of the given possibilities swam as highest aim half consciously and half unconsciously before the mind’s eye.’
It was, Einstein suggested, belief in the word of God as revealed in biblical literature that had allowed him to dwell in a ‘religious paradise of youth’ and to shield himself from the harsh realities of social and political life. In an effort to recover the inner sense of security that was lost after exposure to scientific knowledge, or to become free again of the ‘merely personal’, he committed himself to understanding the ‘extra-personal world within the frame of the given possibilities’, that is, to the study of physics. Although the existence of God as described in the Bible may have been in doubt, the qualities of mind that the architects of classical physics associated with this God were not. This is clear in Einstein’s comments on the uses of mathematics: ‘Nature is the realization of the simplest conceivable mathematical ideas, and one may be convinced that we can discover, by means of purely mathematical construction, those concepts and those lawful connections between them that furnish the key to the understanding of natural phenomena. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. Nevertheless, the creative principle resides in mathematics. In a certain sense, therefore, it is true that pure thought can grasp reality, as the ancients dreamed.’
This article of faith, first articulated by Kepler, that ‘nature is the realization of the simplest conceivable mathematical ideas,’ allowed Einstein to posit the first major law of modern physics much as it had allowed Galileo to posit the first major law of classical physics. During the period when the special and then the general theory of relativity had not yet been confirmed by experiment, and when many established physicists viewed them as dubious at best, Einstein remained entirely confident of their predictions. Ilse Rosenthal-Schneider, who visited Einstein shortly after Eddington’s eclipse expedition confirmed a prediction of the general theory (1919), described Einstein’s response to this news: ‘When I was giving expression to my joy that the results coincided with his calculations, he said quite unmoved, “But I knew the theory is correct,” and when I asked what if there had been no confirmation of his prediction, he countered: “Then I would have been sorry for the dear Lord; the theory is correct.”’
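The prediction the eclipse expedition tested was the deflection of starlight grazing the Sun, which the general theory gives as α = 4GM/c²R, about 1.75 seconds of arc, twice the Newtonian value. A rough numerical check, using standard values for the constants (illustrative only):

```python
import math

# Deflection of light grazing the Sun: alpha = 4GM / (c^2 R) radians.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30        # solar mass, kg
c = 2.998e8         # speed of light, m/s
R = 6.963e8         # solar radius, m

alpha_rad = 4 * G * M / (c**2 * R)
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(f"predicted deflection: {alpha_arcsec:.2f} arcseconds")  # ~1.75
```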
 Einstein was not given to making sarcastic or sardonic comments, particularly on matters of religion. These unguarded responses testify to his profound conviction that the language of mathematics allows the human mind access to immaterial and immutable truths existing outside the mind that conceived them. Although Einstein’s belief was far more secular than Galileo’s, it retained the same essential ingredients.
This same article of faith informed Einstein’s side of his twenty-three-year-long debate with Bohr over the merits and limits of a physical theory. At the heart of this debate was the fundamental question: ‘What is the relationship between the mathematical forms in the human mind called physical theory and physical reality?’ Einstein did not believe in a God who spoke in tongues of flame from the mountaintop in ordinary language, and he could not sustain belief in the anthropomorphic God of the West. There is also no suggestion that he embraced ontological monism, or the conception of Being featured in Eastern religious systems like Taoism, Hinduism, and Buddhism. The closest that Einstein apparently came to affirming the existence of the ‘extra-personal’ in the universe was a ‘cosmic religious feeling’, which he closely associated with the classical view of scientific epistemology.
The doctrine that Einstein fought to preserve seemed the natural inheritance of physics until the advent of quantum mechanics. Although the mind that constructs reality might be evolving fictions that are not necessarily true or necessary in social and political life, there was, Einstein felt, a way of knowing that was purged of deceptions and lies. He was convinced that knowledge of physical reality in physical theory mirrors the preexistent and immutable realm of physical laws. And as Einstein consistently made clear, this knowledge mitigates loneliness and inculcates a sense of order and reason in a cosmos that might otherwise appear bereft of meaning and purpose.
What most disturbed Einstein about quantum mechanics was the fact that this physical theory might not, in experiment or even in principle, mirror precisely the structure of physical reality. Because there is an inherent uncertainty in the measurement of any quantum mechanical process, quantum theory seemed to preclude any such exact correspondence even while claiming completeness as a physical theory. Einstein feared that accepting the theory as complete would force us to recognize that this inherent uncertainty applies to all of physics and, therefore, that the ontological bridge between mathematical theory and physical reality does not exist. This would mean, as Bohr was among the first to realize, that we must profoundly revise the epistemological foundations of modern science.
The world view of classical physics allowed the physicist to assume that communion with the essences of physical reality via mathematical laws and associated theories was possible, but it made no provision for the knowing mind. In our new situation, the status of the knowing mind seems quite different. Modern physics has come to view the universe as an unbroken, undissectible and undivided dynamic whole. ‘There can hardly be a sharper contrast,’ said Milič Čapek, ‘than that between the everlasting atoms of classical physics and the vanishing “particles” of modern physics.’ As Stapp put it: ‘Each atom turns out to be nothing but the potentialities in the behaviour patterns of others. What we find, therefore, are not elementary space-time realities, but rather a web of relationships in which no part can stand alone; every part derives its meaning and existence only from its place within the whole.’
The characteristics of particles and quanta are not isolatable, given particle-wave dualism and the incessant exchange of quanta within matter-energy fields. Matter cannot be dissected from the omnipresent sea of energy, nor can we, in theory or in fact, observe matter from the outside. As Heisenberg put it decades ago, ‘the cosmos is a complicated tissue of events, in which connections of different kinds alternate or overlay or combine and thereby determine the texture of the whole.’ This means that a purely reductionist approach to understanding physical reality, which was the goal of classical physics, is no longer appropriate.
While the formalism of quantum physics predicts that correlations between particles over space-like separated regions are possible, it can say nothing about what this strange new relationship between parts (quanta) and whole (cosmos) means outside that formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be ‘mutually adaptive and complementary to one another.’
Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts that constitute the whole, even though the whole is exemplified only in its parts. This principle of order, Harris continued, ‘is nothing really in itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.’
In a genuine whole, the relationship between the constituent parts must be ‘internal or immanent’ in the parts, as opposed to a spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that would allegedly make up the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics. It is also consistent with the manner in which we have begun to understand the relation between parts and whole in modern biology.
Modern physics also reveals, claims Harris, a complementary relationship between the differences between parts that constitute content and the universal ordering principle that is immanent in each part. While the whole cannot be finally disclosed in the analysis of the parts, the study of the differences between parts provides insights into the dynamic structure of the whole present in each of the parts. The part can never, nonetheless, be finally isolated from the web of relationships that discloses its interconnections with the whole, and any attempt to do so results in ambiguity.
Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet order in complementary relationships between difference and sameness in any physical event is never external to that event; the connections are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic whole is not surprising. The relationship between part, as quantum event apparent in observation or measurement, and the inseparable whole, revealed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.
If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos as a whole displays a progressive principle of order in complementary relations with its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.
However, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute the position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined or invalidated by appeals to scientific knowledge.
We have consistently tried to distinguish between scientific knowledge and philosophical speculation based on this knowledge; there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss the speculation on this basis are obviously free to do so. There is, however, one conclusion that is firmly grounded in scientific theory and experiment: there is no basis in the scientific description of nature for the radical Cartesian division between mind and world sanctioned by classical physics. It now seems clear that this separation was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realm of their applicability.
Thus the grounds for objecting to quantum theory, the lack of a one-to-one correspondence between every element of the physical theory and the physical reality it describes, may seem justifiable and reasonable in strictly scientific terms. After all, the completeness of all previous physical theories was measured against this criterion with enormous success. Since it was this success that gave physics its reputation of being able to disclose physical reality with magnificent exactitude, one might hope that a more comprehensive quantum theory will emerge to meet these requirements.
All indications are, however, that no future theory can circumvent quantum indeterminacy, and the success of quantum theory in co-ordinating our experience with nature is eloquent testimony to this conclusion. As Bohr realized, the fact that we live in a quantum universe in which the quantum of action is a given or unavoidable reality requires a very different criterion for determining the completeness of a physical theory. The new measure for a complete physical theory is that it unambiguously confirms our ability to co-ordinate more experience with physical reality.
 If a theory does so and continues to do so, which is clearly the case with quantum physics, then the theory must be deemed complete. Quantum physics not only works exceedingly well, it is, in these terms, the most accurate physical theory that has ever existed. When we consider that this physics allows us to predict and measure quantities like the magnetic moment of electrons to the fifteenth decimal place, we realize that accuracy per se is not the real issue. The real issue, as Bohr rightly intuited, is that this complete physical theory effectively undermines the privileged relationship in classical physics between ‘theory’ and ‘physical reality’.
In quantum physics, one calculates the probability of an event that can happen in alternative ways by adding the wave functions and then taking the square of the amplitude of the sum. In the two-slit experiment, for example, the electron is described by one wave function if it goes through one slit and by another wave function if it goes through the other slit. In order to compute the probability of where the electron will end up on the screen, we add the two wave functions, compute the absolute value of their sum, and square it. Although the recipe in classical probability theory seems similar, it is quite different. In classical physics, we would simply add the probabilities of the two alternative ways and let it go at that. The classical procedure does not work here, because we are not dealing with classical atoms. In quantum physics additional terms arise when the wave functions are added, and the probability is computed in accordance with what is known as the ‘superposition principle’.
The superposition principle can be illustrated with an analogy from simple arithmetic. Adding two numbers and then taking the square of their sum is not the same as adding the squares of the two numbers. Obviously, (2 + 3)² is not equal to 2² + 3²: the former is 25 and the latter is 13. In the language of quantum probability theory:
|ψ1 + ψ2|² ≠ |ψ1|² + |ψ2|²
where ψ1 and ψ2 are the individual wave functions. On the left-hand side, the superposition principle gives rise to extra terms that are absent on the right-hand side. The left-hand side of the above relation is the way a quantum physicist computes probabilities, and the right-hand side is the classical analogue. In quantum theory, the right-hand side is realized when we know, for example, which slit the electron went through. Heisenberg was among the first to compute what would happen in such a case. The extra superposition terms contained in the left-hand side of the above relation would vanish, and the peculiar wavelike interference pattern would disappear. The observed pattern on the final screen would, therefore, be what one would expect if the electrons were behaving like bullets, and the final probability would be the sum of the individual probabilities. Once we know which slit the electron went through, this interaction with the system causes the interference pattern to disappear.
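The difference between the two sides of this relation is easy to exhibit numerically. In the sketch below, the wave functions at a single point on the screen are represented as complex numbers; the amplitudes and the relative phase are hypothetical values chosen purely for illustration:

```python
# Quantum vs. classical combination of two alternatives at one screen point.
# psi1 and psi2 are hypothetical complex amplitudes for "via slit 1/2".
import cmath

psi1 = 0.6 * cmath.exp(1j * 0.0)   # amplitude via slit 1
psi2 = 0.6 * cmath.exp(1j * 2.5)   # amplitude via slit 2, phase-shifted

quantum = abs(psi1 + psi2) ** 2              # superpose, then square
classical = abs(psi1) ** 2 + abs(psi2) ** 2  # square, then add

print(f"quantum   |psi1+psi2|^2     = {quantum:.3f}")
print(f"classical |psi1|^2+|psi2|^2 = {classical:.3f}")
# The difference, 2*Re(psi1 * conj(psi2)), is the interference term that
# vanishes once we know which slit the electron went through.
```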
In order to give a full account of the quantum recipes for computing probabilities, one has to examine what happens in compound events. Compound events are ‘events that can be broken down into a series of steps, or events that consist of a number of things happening independently.’ The recipe here calls for multiplying the individual wave functions and then following the usual quantum recipe of taking the square of the amplitude.
The quantum recipe is |ψ1 · ψ2|², and, in this case, the result is the same as if we had multiplied the individual probabilities, as one would in classical theory. Thus, the recipes for computing results in quantum theory and classical physics can be totally different. The quantum superposition effects are completely nonclassical, and there is no mathematical justification per se for why the quantum recipes work. What justifies the use of quantum probability theory is the same thing that justifies the use of quantum physics: it has allowed us, in countless experiments, to extend our ability to co-ordinate experience with nature.
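By contrast, for compound events the quantum and classical recipes coincide, since the square of a product factorizes. A quick illustration, again with hypothetical amplitudes:

```python
import cmath

# For independent steps the amplitudes multiply, and
# |psi1 * psi2|^2 = |psi1|^2 * |psi2|^2 -- no interference terms arise.
psi1 = 0.6 * cmath.exp(1j * 0.7)
psi2 = 0.5 * cmath.exp(1j * 2.1)

quantum = abs(psi1 * psi2) ** 2
classical = (abs(psi1) ** 2) * (abs(psi2) ** 2)
print(quantum, classical)  # both 0.09: the two recipes agree here
```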
Quantum theory is a departure from the classical mechanics of Newton involving the principle that certain physical quantities can only assume discrete values. In quantum theory, introduced by Planck (1900), certain conditions are imposed on these quantities to restrict their values; the quantities are then said to be ‘quantized’.
Up to 1900, physics was based on Newtonian mechanics, which adequately describes large-scale systems. Several problems, however, could not be solved, in particular the explanation of the curves of energy against wavelength for ‘black-body radiation’, with their characteristic maximum. Attempts at explanation were based on the idea that the enclosure producing the radiation contains a number of ‘standing waves’ and that the energy of an oscillator is kT, where k is the ‘Boltzmann Constant’ and T the thermodynamic temperature. It is a consequence of classical theory that the energy does not depend on the frequency of the oscillator, and this inability to explain the observed curves became known as the ‘ultraviolet catastrophe’.
Planck tackled the problem by discarding the idea that an oscillator can gain or lose energy continuously, suggesting that it could change only by some discrete amount, which he called a ‘quantum.’ This unit of energy is given by hν, where ν is the frequency and h is the ‘Planck Constant’; h has the dimensions of energy × time, i.e., of action, and was called the ‘quantum of action.’ According to Planck an oscillator could only change its energy by an integral number of quanta, i.e., by hν, 2hν, 3hν, etc. This meant that the radiation in an enclosure has certain discrete energies, and by considering the statistical distribution of oscillators with respect to their energies, he was able to derive the Planck radiation formula, which expresses the distribution of energy in the normal spectrum of ‘black-body’ radiation. Its usual form is:
8πch·dλ / {λ⁵(exp[ch/kλT] − 1)}
which represents the amount of energy per unit volume in the range of wavelengths between λ and λ + dλ, where c is the speed of light, h the Planck constant, k the Boltzmann constant, and T the thermodynamic temperature.
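The formula can be checked numerically; the sketch below evaluates it for a body at 5,000 K (standard constants, wavelengths chosen for illustration) and shows the characteristic maximum rather than the classical divergence at short wavelengths:

```python
import math

# Planck's law: energy per unit volume per wavelength interval,
# u(lam) = 8*pi*h*c / (lam**5 * (exp(h*c/(lam*k*T)) - 1)).
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K
T = 5000.0      # temperature, K

def u(lam: float) -> float:
    return 8 * math.pi * h * c / (lam**5 * math.expm1(h * c / (lam * k * T)))

for lam_nm in (200, 400, 600, 800, 1000, 2000):
    lam = lam_nm * 1e-9
    print(f"{lam_nm:5d} nm: u = {u(lam):.3e} J/m^4")
# u rises, peaks near 580 nm (Wien's law: ~2.9e-3 / T metres), then falls,
# instead of growing without bound at short wavelengths as classical
# theory predicted.
```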
The idea of quanta of energy was soon applied to other problems in physics. In 1905 Einstein explained features of the ‘Photoelectric Effect’ by assuming that light is absorbed in quanta (photons). A further advance was made by Bohr (1913) in his theory of atomic spectra, in which he assumed that the atom can only exist in certain energy states and that light is emitted or absorbed as a result of a change from one state to another. He used the idea that the angular momentum of an orbiting electron could only assume discrete values, i.e., was quantized. A refinement of Bohr’s theory was introduced by Sommerfeld in an attempt to account for fine structure in spectra. Other successes of quantum theory were its explanations of the ‘Compton Effect’ and the ‘Stark Effect.’ Later developments involved the formulation of a new system of mechanics known as ‘Quantum Mechanics.’
The Compton effect is an interaction between a photon of electromagnetic radiation and a free electron, or other charged particle, in which some of the energy of the photon is transferred to the particle. As a result, the wavelength of the photon is increased by an amount Δλ, where:
Δλ = (2h/m0c) sin²(½φ).
This is the Compton equation: h is the Planck constant, m0 the rest mass of the particle, c the speed of light, and φ the angle between the directions of the incident and scattered photons. The quantity h/m0c is known as the ‘Compton Wavelength,’ symbol λC, which for an electron is equal to 0.002 43 nm.
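A short numerical sketch of the Compton equation, with standard constants and a 90-degree scattering angle chosen for illustration:

```python
import math

# Compton shift: delta_lambda = (2h / (m0 * c)) * sin^2(phi / 2).
h = 6.626e-34    # Planck constant, J s
m0 = 9.109e-31   # electron rest mass, kg
c = 2.998e8      # speed of light, m/s

compton_wavelength = h / (m0 * c)          # ~2.43e-12 m = 0.00243 nm
phi = math.radians(90.0)                   # scattering angle
delta_lam = 2 * compton_wavelength * math.sin(phi / 2) ** 2

print(f"Compton wavelength:   {compton_wavelength * 1e9:.5f} nm")
print(f"shift at 90 degrees:  {delta_lam * 1e9:.5f} nm")
# At 90 degrees sin^2(phi/2) = 1/2, so the shift equals exactly one
# Compton wavelength.
```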
The outer electrons in all elements, and the inner ones in those of low atomic number, have ‘binding energies’ negligible compared with the quantum energies of all except very soft X- and gamma rays. Thus most electrons in matter are effectively free and at rest and so cause Compton scattering. In the range of quantum energies 10⁵ to 10⁷ electronvolts, this effect is commonly the most important process of attenuation of radiation. The scattering electron is ejected from the atom with large kinetic energy, and the ionization that it causes plays an important part in the operation of detectors of radiation.
In the ‘Inverse Compton Effect’ there is a gain in energy by low-energy photons as a result of being scattered by free electrons of much higher energy; as a consequence, the electrons lose energy. In the ‘Stark Effect,’ by contrast, the wavelength of light emitted by atoms is altered by the application of a strong transverse electric field to the source, the spectral lines being split up into a number of sharply defined components. The displacements are symmetrical about the position of the undisplaced line and are proportional to the field strength up to about 100,000 volts per cm.
Quantum mechanics is the system of mechanics that grew out of Planck’s quantum theory and deals with the mechanics of atomic and related systems in terms of quantities that can be measured. The subject developed in several mathematical forms, including ‘Wave Mechanics’ (Schrödinger) and ‘Matrix Mechanics’ (Born and Heisenberg), all of which are equivalent.
In quantum mechanics, it is often found that the properties of a physical system, such as its angular momentum and energy, can only take discrete values. Where this occurs the property is said to be ‘quantized’ and its various possible values are labelled by a set of numbers called quantum numbers. For example, according to Bohr’s theory of the atom, an electron moving in a circular orbit could not occupy any orbit at any distance from the nucleus but only an orbit for which its angular momentum (mvr) was equal to nh/2π, where n is an integer (1, 2, 3, etc.) and h is the Planck constant. Thus the property of angular momentum is quantized, and n is a quantum number that gives its possible values. The Bohr theory has now been superseded by a more sophisticated theory in which the idea of orbits is replaced by regions in which the electron may move, characterized by quantum numbers n, l, and m.
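Bohr’s quantization condition is easy to tabulate. The sketch below lists the allowed angular momenta and, using the standard Bohr-model relation r = n²a0 (a0 being the Bohr radius), the corresponding orbit radii:

```python
import math

# Bohr quantization: angular momentum L = n * h / (2*pi), n = 1, 2, 3, ...
# The orbit radii then follow as r_n = n^2 * a0, with a0 the Bohr radius.
h = 6.626e-34      # Planck constant, J s
a0 = 5.292e-11     # Bohr radius, m

for n in (1, 2, 3):
    L = n * h / (2 * math.pi)
    r = n**2 * a0
    print(f"n={n}: L = {L:.3e} J s, orbit radius = {r:.3e} m")
# Only these discrete values are allowed; intermediate orbits are forbidden.
```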
The Bohr theory (1913) was the first significant application of the quantum theory to atomic structure, although it has since been replaced by the quantum mechanics described above.
A mathematical formulation of the special theory of relativity was given by Minkowski. It is based on the idea that an event is specified by four coordinates: three space coordinates and one time coordinate. These coordinates define a four-dimensional space, and the motion of a particle can be described by a curve in this space, which is called Minkowski space-time.
The equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using Riemannian space-time, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any forces is represented by a straight line in Minkowski space-time. In general relativity, using Riemannian space-time, the motion is represented by a line that is no longer straight (in the Euclidean sense) but is the line giving the shortest distance. Such a line is called a ‘geodesic,’ and space-time is accordingly said to be curved. The extent of this curvature is given by the metric tensor for space-time, the components of which are solutions to Einstein’s field equations. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.
The predictions of general relativity differ from Newton’s theory only by small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light or other electromagnetic radiation in the presence of large bodies, and the Einstein shift, a small redshift in the lines of a stellar spectrum caused by the gravitational potential at the level in the star at which the radiation is emitted (for a bright line) or absorbed (for a dark line). This shift can be explained in terms of either the special or the general theory of relativity. In the simplest terms, a quantum of energy hν has mass hν/c². On moving between two points with gravitational potential difference Φ, the work done is Φhν/c², so the change of frequency δν is Φν/c².
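For light escaping the surface of the Sun, for example, this fractional shift is about two parts in a million. A rough sketch with standard values:

```python
# Einstein (gravitational) shift: delta_nu / nu = Phi / c^2,
# with Phi ~ GM/R the potential difference in escaping a star's surface.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.963e8     # solar radius, m
c = 2.998e8     # speed of light, m/s

fractional_shift = G * M / (R * c**2)
print(f"fractional frequency shift: {fractional_shift:.2e}")  # ~2.1e-6
```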
Bohr’s theory dealt with the simplest atom, that of hydrogen, consisting of a nucleus and one electron. It was assumed that there is a ground state in which an isolated atom remains permanently, and short-lived states of higher energy to which the atom can be excited by collisions or by absorption of radiation. It was supposed that radiation was emitted or absorbed in quanta of energy equal to integral multiples of hν, where h is the Planck constant and ν is the frequency of the electromagnetic waves. (Later it was realized that a single quantum has the unique value hν.) The energy of the radiation emitted on capturing a free electron into the nth state (where n = 1 for the ground state) was supposed to be nh/2 times the rotational frequency of the electron in its circular orbit. This idea led to, and was replaced by, the concept that the angular momentum of an orbit is quantized in units of h/2π. The energy of the nth state was found to be given by:
En = −me⁴/8ε0²h²n²
where m is the reduced mass of the electron. This formula gave excellent agreement with the then-known series of lines in the visible and infrared regions of the spectrum of atomic hydrogen and predicted a series in the ultraviolet that was soon found by Lyman.
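As a sketch, the formula can be evaluated directly, and the wavelengths of the first Lyman lines (transitions down to n = 1) follow from the energy differences. Standard constants are used; the reduced-mass correction is ignored for simplicity:

```python
# Bohr energies: E_n = -m e^4 / (8 eps0^2 h^2 n^2). Lyman lines are
# transitions n -> 1, with photon wavelength lambda = h*c / (E_n - E_1).
m = 9.109e-31      # electron mass, kg (reduced-mass correction ignored)
e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s

def E(n: int) -> float:
    return -m * e**4 / (8 * eps0**2 * h**2 * n**2)

print(f"ground state: {E(1) / 1.602e-19:.2f} eV")   # ~ -13.6 eV
for n in (2, 3, 4):
    lam = h * c / (E(n) - E(1))
    print(f"Lyman {n}->1: {lam * 1e9:.1f} nm")       # 121.5, 102.6, 97.2 nm
```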
 The extension of the theory to more complicated atoms had success but raised innumerable difficulties, which were only resolved by the development of wave mechanics.
An orbital is an allowed wave function of an electron in an atom, obtained by solution of the Schrödinger wave equation. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is −e²/r, where e is the electron charge and r its distance from the nucleus. A precise orbit cannot be considered, as it was in Bohr’s theory of the atom; instead the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |Ψ|²dτ is the probability of locating the electron in the element of volume dτ.
Solution of Schrödinger’s equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position, and each has an associated value of the energy E. These allowed wave functions, or orbitals, are characterized by three quantum numbers, similar to those characterizing the allowed orbits in the earlier quantum theory of the atom.
n, the principal quantum number, can have values 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called shells and designated the K, L, M shells, etc. l, the azimuthal quantum number, can for a given value of n have the values 0, 1, 2, . . . (n − 1). Thus when n = 1, l can only have the value 0. An electron in the L shell of an atom (n = 2) can occupy two subshells of different energy corresponding to l = 0 and l = 1. Similarly the M shell (n = 3) has three subshells with l = 0, l = 1, and l = 2. Orbitals with l = 0, 1, 2 and 3 are called s, p, d, and f orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron; the orbital angular momentum of an electron is given by:

√[l(l + 1)]·(h/2π)
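The bookkeeping of these quantum numbers is easily laid out. The sketch below enumerates the subshells of the K, L, and M shells, the electron capacity 2(2l + 1) that each subshell acquires once spin and the exclusion principle (discussed below) are taken into account, and the orbital angular momentum for each l:

```python
import math

# Shells and subshells: for each n, l runs from 0 to n-1.
# Each subshell holds 2*(2l+1) electrons (the m_l values times two spins).
H_BAR = 6.626e-34 / (2 * math.pi)   # h/2pi, J s
LETTERS = "spdf"

for n in (1, 2, 3):
    shell = "KLM"[n - 1]
    for l in range(n):
        L = math.sqrt(l * (l + 1)) * H_BAR   # orbital angular momentum
        print(f"{shell} shell: {n}{LETTERS[l]} subshell, "
              f"capacity {2 * (2 * l + 1)}, |L| = {L:.2e} J s")
# Summing the capacities gives 2, 8, 18 electrons for the K, L, M shells.
```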
The Bohr theory of the atom (1913) introduced the concept that an electron in an atom is normally in a state of lowest energy (the ground state) in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with a particle the atom may be excited; that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes (typically nanoseconds) and the electron returns to the ground state, commonly by emitting one or more quanta of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld, 1915) and electron spin (Pauli, 1925), but a satisfactory theory only became possible with the development of wave mechanics after 1925.
According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr but is in a state described by the solution of a wave equation. This determines the probability that the electron may be located in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the Pauli exclusion principle, not more than one electron can be in a given state.
An exact calculation of the energies and other properties of the quantum states is only possible for the simplest atoms, but there are various approximate methods that give useful results. The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet. Certain details have been studied using microwaves, and other information may be obtained from magnetic and chemical properties.
Properties of elementary particles are also described by quantum numbers. For example, an electron has the property known as ‘spin’ and can exist in two possible energy states depending on whether this spin is set parallel or antiparallel to a certain direction. The two states are conveniently characterized by the quantum numbers +½ and −½. Similarly, properties such as charge, isospin, strangeness, parity and hypercharge are characterized by quantum numbers. In interactions between particles, a particular quantum number may be conserved, i.e., the sum of the quantum numbers of the particles before and after the interaction remains the same. It is the type of interaction (strong, electromagnetic, weak) that determines whether a quantum number is conserved.
Bohr discovered that if Planck’s constant is used in combination with the known mass and charge of the electron, the approximate size of the hydrogen atom can be derived. Assuming that a jumping electron absorbs or emits energy in units of Planck’s constant, in accordance with the formula Einstein used to explain the photoelectric effect, Bohr was able to account for the spectral lines of hydrogen. More important, the model also served to explain why the electron does not, as electromagnetic theory says it should, quickly radiate its energy away and collapse into the nucleus.
Bohr reasoned that this does not occur because the orbits are quantized: electrons absorb and emit energy corresponding to the specific orbits. Their lowest energy state, or lowest orbit, is the ground state. What is notable, however, is that Bohr, although obliged to use macro-level analogies and classical theory, quickly and easily posited a view of the functional dynamics of the energy shells of the electron that has no macro-level analogy and is inexplicable within the framework of classical theory.
 The central problem with Bohr’s model from the perspective of classical theory was pointed out by Rutherford shortly before the first of the papers describing the model was published. “There appears to me,” Rutherford wrote in a letter to Bohr, “one grave problem in your hypothesis that I have no doubt you fully realize, namely, how does an electron decide what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.” Viewing the electron as atomic in the Greek sense, or as a point-like object that moves, there is cause to wonder, in the absence of a mechanistic explanation, how this object instantaneously ‘jumps’ from one shell or orbit to another. It was essentially efforts to answer this question that led to the development of quantum theory.
The effect of Bohr’s model was to raise more questions than it answered. Although the model suggested that we can explain the periodic table of the elements by assuming that a maximum number of electrons are found in each shell, Bohr was not able to provide any mathematically acceptable explanation for this hypothesis. That explanation was provided in 1925 by Wolfgang Pauli, known throughout his career for his extraordinary talents as a mathematician.
Bohr’s model had come to use three quantum numbers. Pauli added a fourth, described as spin, which was initially represented with the macro-level analogy of a spinning ball on a pool table. Rather predictably, the analogy does not work. Whereas a classical spin can point in any direction, a quantum mechanical spin points either up or down along the axis of measurement. In total contrast to the classical notion of a spinning ball, we cannot even speak of the spin of the particle if no axis is measured.
When Pauli added this fourth quantum number, he found a correspondence between the number of electrons in each full shell of atoms and the new set of quantum numbers describing the shell. This became the basis for what we now call the ‘Pauli exclusion principle’. The principle is simple and yet quite startling: two electrons cannot have all their quantum numbers the same, and no two actual electrons are identical in the sense of having the same quantum numbers. The exclusion principle explains mathematically why there is a maximum number of electrons in each shell of any given atom. If the shell is full, adding another electron would be impossible because this would result in two electrons in the shell having the same quantum numbers.
This may sound a bit esoteric, but the fact that nature obeys the exclusion principle is quite fortunate from our point of view. If electrons did not obey the principle, all elements would exist at the ground state and there would be no chemical affinity between them. It is the shell structure enforced by the exclusion principle that allows for chemical bonds, which, in turn, give rise to the hierarchy of structures from atoms, molecules, cells, plants, and animals. Structures like crystals and DNA would otherwise not exist.
An energy level is the energy associated with a quantum state of an atom or other system that is fixed, or determined, by a given set of quantum numbers. It is one of the various quantum states that can be assumed by an atom under defined conditions. The term is often used to mean the state itself, which is strictly incorrect, because (i) the energy of a given state may be changed by externally applied fields, and (ii) there may be a number of states of equal energy in the system.
The electrons in an atom can occupy any of an infinite number of bound states with discrete energies. For an isolated atom the energy of a given state is exactly determinate except for the effects of the ‘uncertainty principle’. The ground state, with lowest energy, has an infinite lifetime; hence its energy is, in principle, exactly determinate. The energies of these states are most accurately measured by finding the wavelength of the radiation emitted or absorbed in transitions between them, i.e., from their line spectra, and theories of the atom have been developed to predict these energies by calculation. Wave mechanics, due to de Broglie and extended by Schrödinger, Dirac and many others, originated in the suggestion that light consists of corpuscles as well as of waves, and in the consequent suggestion that all elementary particles are associated with waves. Wave mechanics is based on the Schrödinger wave equation describing the wave properties of matter. It relates the energy of a system to a wave function; usually it is found that a system, such as an atom or molecule, can only have certain allowed wave functions (eigenfunctions) and certain allowed energies (eigenvalues). In wave mechanics the quantum conditions arise in a natural way from the basic postulates as solutions of the wave equation. The energies of unbound states of positive energy form a continuum, which gives rise to the continuum background to an atomic spectrum as electrons are captured from unbound states. The energy of an atomic state can, however, be changed by the ‘Stark Effect’ or the ‘Zeeman Effect’.
 The vibrational energies of a molecule also have discrete values; for example, in a diatomic molecule the atoms oscillate along the line joining them. There is an equilibrium distance at which the force is zero; the atoms repel when closer and attract when further apart. The restoring force is very nearly proportional to the displacement, hence the oscillations are simple harmonic. Solution of the Schrödinger wave equation gives the energies of a harmonic oscillator as:
En = (n + ½)hƒ,
Where ‘h’ is the Planck constant, ƒ is the frequency, and ‘n’ is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hƒ; this is the origin of the zero-point energy. The potential energy of interaction of the atoms is described more exactly by the ‘Morse equation’, which shows that the oscillations are anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra’.
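 As a rough numerical sketch of this formula (in Python; the frequency chosen is an illustrative order of magnitude for a diatomic molecule, not a value from the text):

h = 6.626e-34   # Planck constant, J s
f = 8.7e13      # vibrational frequency, Hz (illustrative value)
for n in range(4):
    E = (n + 0.5) * h * f
    print(n, E)   # the n = 0 entry, hf/2, is the zero-point energy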
 The rotational energy of a molecule is also quantized; according to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:
EJ = h²J(J + 1)/8π²I,
Where ‘J’ is the rotational quantum number, which can be zero or a positive integer. Rotational energies are found from ‘band spectra’.
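 A companion sketch for the rotational levels (the moment of inertia is likewise an illustrative order of magnitude, not from the text):

import math
h = 6.626e-34  # Planck constant, J s
I = 1.4e-46    # moment of inertia, kg m^2 (illustrative value)
for J in range(4):
    E = h**2 * J * (J + 1) / (8 * math.pi**2 * I)
    print(J, E)  # level spacing grows linearly with J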
 The energies of the states of the nucleus are determined from the gamma-ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer effect’ has permitted the observation of some minute changes.
 Quantum theory, introduced by Max Planck (1858‒1947) in 1900, was the first serious scientific departure from Newtonian mechanics. It involved supposing that certain physical quantities can only assume discrete values. In the following two decades it was applied successfully by Einstein and the Danish physicist Niels Bohr (1885‒1962). It was superseded by quantum mechanics in the years following 1924, when the French physicist Louis de Broglie (1892‒1987) introduced the idea that a particle may also be regarded as a wave: a set of waves that represent the behaviour, under appropriate conditions, of a particle (e.g., its diffraction by a crystal lattice). The wavelength is given by the de Broglie equation. These waves are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point; they were predicted by de Broglie in 1924 and observed in 1927 in the Davisson-Germer experiment. The Schrödinger wave equation relates the energy of a system to a wave function; the square of the amplitude of the wave is proportional to the probability of a particle being found in a specific position. The wave function expresses the impossibility of defining both the position and momentum of a particle exactly, an expression of the ‘uncertainty principle’; the allowed wave functions describe stationary states of a system.
 Part of the difficulty with the notions involved is that a system may be in an indeterminate state at a time, characterized only by the probability of some result for an observation, but then ‘become’ determinate (the collapse of the wave packet) when an observation is made, as with the position and momentum of a particle, if the indeterminacy is taken to apply to reality itself rather than to mere limitations of measurement. It is as if there is nothing but a potential for observation, a probability wave, before an observation is made, but when an observation is made the wave becomes a particle. The wave-particle duality seems to block any classical way of conceiving of physical reality in quantum terms. In the famous two-slit experiment, an electron is fired at a screen with two slits, like a tennis ball thrown at a wall with two doors in it. If one puts detectors at each slit, every electron passing the screen is observed to go through exactly one slit. When the detectors are taken away, the electron acts like a wave process going through both slits and interfering with itself. A particle such as an electron is usually thought of as always having an exact position, but its wave is not absolutely zero anywhere; there is therefore a finite probability of it ‘tunnelling through’ from one position to emerge at another.
 The unquestionable success of quantum mechanics has generated a large philosophical debate about its ultimate intelligibility and its metaphysical implications. The wave-particle duality is already a departure from ordinary ways of conceiving of things in space, and its difficulty is compounded by the probabilistic nature of the fundamental states of a system as they are conceived in quantum mechanics. Philosophical options for interpreting quantum mechanics have included variations of the belief that it is at best an incomplete description of a better-behaved classical underlying reality (Einstein), the Copenhagen interpretation according to which there are no objective unobserved events in the micro-world (Bohr and W. K. Heisenberg, 1901‒76), an ‘acausal’ view of the collapse of the wave packet (J. von Neumann, 1903‒57), and a ‘many worlds’ interpretation in which time forks perpetually toward innumerable futures, so that different states of the same system exist in different parallel universes (H. Everett).
 In recent years the proliferation of subatomic particles (there are 36 kinds of quarks alone, in six flavours) has led physicists to look in various directions for unification. One avenue of approach is superstring theory, in which the four-dimensional world is thought of as the upshot of the collapse of a ten-dimensional world, with the four primary physical forces (gravity, electromagnetism, and the strong and weak nuclear forces) seen as the result of the fracture of one primary force. While the scientific acceptability of such theories is a matter for physics, their ultimate intelligibility plainly requires some philosophical reflection.
 Quantum gravity, a theory of gravitation that is consistent with quantum mechanics, is a subject still in its infancy, with no completely satisfactory theory. In conventional quantum gravity, the gravitational force is mediated by a massless spin-2 particle, called the ‘graviton’. The internal degrees of freedom of the graviton are carried by hij(χ), the deviations from the metric tensor of flat space. This formulation of general relativity reduces it to a quantum field theory, which has a regrettable tendency to produce infinities for measurable quantities. However, unlike other quantum field theories, quantum gravity cannot appeal to renormalization procedures to make sense of these infinities. It has been shown that renormalization procedures fail for theories, such as quantum gravity, in which the coupling constants have the dimensions of a positive power of length. The coupling constant for general relativity is the Planck length,
Lp = (Għ/c³)½ ≈ 10⁻³⁵ m.
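 A quick numerical check of this order of magnitude (a Python sketch; the rounded constants are standard values, and the reduced constant ħ is used as in the conventional definition):

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J s
c = 2.998e8       # speed of light, m s^-1
L_p = (G * hbar / c**3) ** 0.5
print(L_p)  # ≈ 1.6e-35 m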
Supersymmetry has been suggested as a structure that could be free from these pathological infinities. Many theorists believe that an effective superstring field theory may emerge, in which the Einstein field equations are no longer valid and general relativity is required to appear only as a low-energy limit. The resulting theory may be structurally different from anything that has been considered so far. Supersymmetric string theory (or superstring theory) is an extension of the ideas of supersymmetry to one-dimensional string-like entities that can interact with each other and scatter according to a precise set of laws. The normal modes of superstrings represent an infinite set of ‘normal’ elementary particles whose masses and spins are related in a special way. Thus, the graviton is only one of the string modes: when the string-scattering processes are analysed in terms of their particle content, the low-energy graviton scattering is found to be the same as that computed from supersymmetric gravity. The graviton mode may still be related to the geometry of the space-time in which the string vibrates, but it remains to be seen whether the other, massive, members of the set of ‘normal’ particles also have a geometrical interpretation. The intricacy of this theory stems from the requirement of a space-time of at least ten dimensions to ensure internal consistency. It has been suggested that there are the normal four dimensions, with the extra dimensions being tightly ‘curled up’, presumably to a circle of Planck-length size.
 The conceptual representation of the atom was first introduced by the ancient Greeks, as a tiny indivisible component of matter; it was developed by Dalton, as the smallest part of an element that can take part in a chemical reaction, and was made very much more precise by theory and experiment in the late-19th and 20th centuries.
 Following the discovery of the electron (1897), it was recognized that atoms had structure: since electrons are negatively charged, a neutral atom must have a positive component. The experiments of Geiger and Marsden on the scattering of alpha particles by thin metal foils led Rutherford to propose a model (1912) in which nearly all the mass of an atom is concentrated at its centre in a region of positive charge, the nucleus, of radius of the order of 10⁻¹⁵ metre. The electrons occupy the surrounding space to a radius of 10⁻¹¹ to 10⁻¹⁰ m. Rutherford also proposed that the nucleus has a charge of ‘Ze’ and is surrounded by ‘Z’ electrons (Z is the atomic number). According to classical physics such a system must emit electromagnetic radiation continuously, and consequently no permanent atom would be possible. This problem was solved by the development of the quantum theory.
 The ‘Bohr theory of the atom’, 1913, introduced the concept that an electron in an atom is normally in a state of lowest energy, or ground state, in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with another particle the atom may be excited; that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes, typically nanoseconds, and the electron returns to the ground state, commonly by emitting one or more quanta of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld, 1915) and electron spin (Pauli, 1925), but a satisfactory theory only became possible upon the development of ‘wave mechanics’ after 1925.
 According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr, but is in a state described by the solution of a wave equation. This determines the probability that the electron may be located in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the Pauli exclusion principle, not more than one electron can be in a given state.
 The Pauli exclusion principle states that no two identical ‘fermions’ in any system can be in the same quantum state, that is, have the same set of quantum numbers. The principle was first proposed (1925) in the form that not more than two electrons in an atom could have the same set of the first three quantum numbers. This hypothesis accounted for the main features of the structure of the atom and for the periodic table. An electron in an atom is characterized by four quantum numbers, n, l, m, and s. A particular atomic orbital, which has fixed values of n, l, and m, can thus contain a maximum of two electrons, since the spin quantum number ‘s’ can only be +½ or ‒½. In 1928 Sommerfeld applied the principle to the free electrons in solids, and his theory has been greatly developed by later associates.
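 A small enumeration makes the counting concrete (a Python sketch; it simply lists the allowed (l, m, s) combinations for each shell):

for n in range(1, 5):
    states = [(l, m, s)
              for l in range(n)           # l = 0 .. n-1
              for m in range(-l, l + 1)   # m = -l .. +l
              for s in (0.5, -0.5)]       # the two spin directions
    print(n, len(states))  # prints 2, 8, 18, 32: the 2n² capacities of the K, L, M, N shells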
 Additionally, there is an effect that occurs when atoms emit or absorb radiation in the presence of a moderately strong magnetic field. Each spectral line is split into closely spaced polarized components: when the source is viewed at right angles to the field there are three components, the middle one having the same frequency as the unmodified line, and when the source is viewed parallel to the field there are two components, the undisplaced line being absent. This is the ‘normal’ Zeeman effect. With most spectral lines, however, the anomalous Zeeman effect occurs, in which there is a greater number of symmetrically arranged polarized components. In both effects the displacement of the components is a measure of the magnetic field strength. In some cases the components cannot be resolved and the spectral line appears broadened.
 The Zeeman effect occurs because the energies of individual electron states depend on their inclination to the direction of the magnetic field, and because quantum energy requirements impose conditions such that the plane of an electron orbit can only set itself at certain definite angles to the applied field. These angles are such that the projection of the total angular momentum on the field direction is an integral multiple of h/2π (h is the Planck constant). The Zeeman effect is observed with moderately strong fields, where the precession of the orbital angular momentum and the spin angular momentum of the electrons about each other is much faster than the total precession around the field direction. The normal Zeeman effect is observed when the conditions are such that the Landé factor is unity; otherwise the anomalous effect is found. This anomaly was one of the factors contributing to the discovery of electron spin.
 Fermi-Dirac statistics are concerned with the equilibrium distribution of elementary particles of a particular type among the various quantized energy states, on the assumption that these elementary particles are indistinguishable. The ‘Pauli exclusion principle’ is obeyed, so that no two identical ‘fermions’ can be in the same quantum mechanical state. The exchange of two identical fermions, e.g., two electrons, does not affect the probability distribution, but it does involve a change in the sign of the wave function. The ‘Fermi-Dirac distribution law’ gives n̄, the average number of identical fermions in a state of energy E:
n̄ = 1/[e^(α + E/kT) + 1],
Where ‘k’ is the Boltzmann constant, ‘T’ is the thermodynamic temperature, and α is a quantity depending on temperature and the concentration of particles. For the valence electrons in a solid, α takes the form ‒EF/kT, where EF is the Fermi level. At the Fermi level (or Fermi energy) EF the value of n̄ is exactly one half; thus, for a system in equilibrium, one half of the states with energy very nearly equal to EF (if any) will be occupied. The value of EF varies very slowly with temperature, tending to E₀ as ‘T’ tends to absolute zero.
 In Bose-Einstein statistics, the Pauli exclusion principle is not obeyed, so that any number of identical ‘bosons’ can be in the same state. The exchange of two bosons of the same type affects neither the probability distribution nor the sign of the wave function. The ‘Bose-Einstein distribution law’ gives n̄, the average number of identical bosons in a state of energy E:
n̄ = 1/[e^(α + E/kT) ‒ 1].
The formula can be applied to photons, considered as quasi-particles, provided that the quantity α, which conserves the number of particles, is zero. Planck’s formula for the energy distribution of ‘black-body radiation’ was derived from this law by Bose. At high temperatures and low concentrations both the quantum distribution laws tend to the classical distribution:
n̄ = Ae^(‒E/kT).
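 The convergence of the three laws is easy to exhibit numerically (a Python sketch with α = 0 and kT = 1, illustrative values only):

import math
def fermi_dirac(E, alpha, kT):
    return 1.0 / (math.exp(alpha + E / kT) + 1.0)
def bose_einstein(E, alpha, kT):
    return 1.0 / (math.exp(alpha + E / kT) - 1.0)
def classical(E, alpha, kT):
    return math.exp(-(alpha + E / kT))
for E in (0.5, 2.0, 10.0):  # at E/kT = 10 the three values nearly coincide
    print(E, fermi_dirac(E, 0.0, 1.0), bose_einstein(E, 0.0, 1.0), classical(E, 0.0, 1.0))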
 Additionally, paramagnetism is the property of substances that have a positive magnetic ‘susceptibility’, the quantity μr ‒ 1, where μr is the ‘relative permeability’ (the corresponding electric quantity is Єr ‒ 1, where Єr is the ‘relative permittivity’). It is caused by the ‘spins’ of electrons: paramagnetic substances have molecules or atoms in which there are unpaired electrons, resulting in a ‘magnetic moment’. There is also a contribution to the magnetic properties from the orbital motion of the electrons. The relative permeability of a paramagnetic substance is thus greater than that of a vacuum, i.e., it is greater than unity.
 A ‘paramagnetic substance’ is regarded as an assembly of magnetic dipoles that have random orientation. In the presence of a field the magnetization is determined by competition between the effect of the field, which tends to align the magnetic dipoles, and the random thermal agitation. In small fields and at high temperatures the magnetization produced is proportional to the field strength, whereas at low temperatures or high field strengths a state of saturation is approached. As the temperature rises, the susceptibility falls according to Curie’s law or the Curie-Weiss law.
 By Curie’s law, the susceptibility (χ) of a paramagnetic substance is inversely proportional to the ‘thermodynamic temperature’ (T): χ = C/T. The constant ‘C’ is called the ‘Curie constant’ and is characteristic of the material. This law is explained by assuming that each molecule has an independent magnetic ‘dipole’ moment and that the tendency of the applied field to align these molecules is opposed by the random motion due to the temperature. A modification of Curie’s law, followed by many paramagnetic substances, is the Curie-Weiss law, of the form:
χ = C/(T ‒ θ ).
The law shows that the susceptibility is proportional to the excess of temperature over a fixed temperature θ: ‘θ’ is known as the Weiss constant and is a temperature characteristic of the material. Some metals, such as sodium and potassium, also exhibit a type of paramagnetism resulting from the magnetic moments of free, or nearly free, electrons in their conduction bands. This is characterized by a very small positive susceptibility and a very slight temperature dependence, and is known as ‘free-electron paramagnetism’ or ‘Pauli paramagnetism’.
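 The two laws are simple enough to tabulate side by side (a Python sketch; the Curie constant and Weiss constant used are hypothetical):

def curie(T, C):
    return C / T
def curie_weiss(T, C, theta):
    return C / (T - theta)
for T in (100.0, 300.0, 1000.0):  # the susceptibility falls as the temperature rises
    print(T, curie(T, 1.0), curie_weiss(T, 1.0, 30.0))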
 Ferromagnetism is a property of certain solid substances that have a large positive magnetic susceptibility and are capable of being magnetized by weak magnetic fields. The chief elements are iron, cobalt, and nickel, and many ferromagnetic alloys based on these metals also exist. Ferromagnetic materials exhibit magnetic ‘hysteresis’, a lag of an observed effect behind the change in the mechanism producing the effect: the magnetic flux through the medium depends not only on the existing magnetizing field, but also on the previous state or states of the substance. The existence of this phenomenon necessitates a dissipation of energy when the substance is subjected to a cycle of magnetic changes; this is known as the magnetic hysteresis loss. The magnetic hysteresis loop is the curve obtained by plotting the magnetic flux density ‘B’ of a ferromagnetic material against the corresponding value of the magnetizing field ‘H’; its area gives the hysteresis loss per unit volume in taking the specimen through the prescribed magnetizing cycle. The general form of the loop is obtained for a symmetrical cycle between ‘H’ and ‘‒H’.
 The magnetic hysteresis loss is the dissipation of energy due to magnetic hysteresis when the magnetic material is subjected to changes, particularly cyclic changes, of magnetization. Ferromagnetics are able to retain a certain degree of magnetization when the magnetizing field is removed. Those materials that retain a high percentage of their magnetization are said to be hard, and those that lose most of their magnetization are said to be soft; typical examples of hard ferromagnetics are cobalt steel and various alloys of nickel, aluminium and cobalt, while typical soft magnetic materials are silicon steel and soft iron. The coercive force is the reversed magnetic field that is required to reduce the magnetic ‘flux density’ in a substance from its remanent value to zero. Ferromagnetism is explained by the presence of domains. A ferromagnetic domain is a region of crystalline matter, whose volume may be 10⁻¹² to 10⁻⁸ m³, which contains atoms whose magnetic moments are aligned in the same direction. The domain is thus magnetically saturated and behaves like a magnet with its own magnetic axis and moment. The magnetic moment of the ferromagnetic atom results from the spin of the electrons in an unfilled inner shell of the atom. The formation of a domain depends upon the strong interaction forces (exchange forces) that are effective in a crystal lattice containing ferromagnetic atoms.
 In an unmagnetized volume of a specimen, the domains are arranged in a random fashion with their magnetic axes pointing in all directions, so that the specimen has no resultant magnetic moment. Under the influence of a weak magnetic field, those domains whose magnetic axes have directions near to that of the field grow at the expense of their neighbours. In this process the atoms of neighbouring domains tend to align in the direction of the field, but the strong influence of the growing domain causes their axes to align parallel to its magnetic axis. The growth of these domains leads to a resultant magnetic moment and hence magnetization of the specimen in the direction of the field. With increasing field strength, the growth of domains proceeds until there is, effectively, only one domain whose magnetic axis approximates to the field direction. The specimen now exhibits strong magnetization. Further increases in field strength cause the final alignment and magnetic saturation in the field direction. This explains the characteristic variation of magnetization with applied field strength. The presence of domains in ferromagnetic materials can be demonstrated by the use of ‘Bitter patterns’ or by the ‘Barkhausen effect’: the magnetization of a ferromagnetic substance does not increase or decrease steadily with steady increase or decrease of the magnetizing field, but proceeds in a series of minute jumps. The effect gives support to the domain theory of ferromagnetism.
 For ferromagnetic solids there is a change from ferromagnetic to paramagnetic behaviour above a particular temperature, and the paramagnetic material then obeys the Curie-Weiss law above this temperature; this is the ‘Curie temperature’ for the material. Below this temperature the law is not obeyed. Some paramagnetic substances obey the law above a temperature ‘θC’ and do not obey it below, yet are not ferromagnetic below this temperature. The value ‘θ’ in the Curie-Weiss law can be thought of as a correction to Curie’s law reflecting the extent to which the magnetic dipoles interact with each other. In materials exhibiting ‘antiferromagnetism’ the temperature ‘θ’ corresponds to the ‘Néel temperature’.
 Antiferromagnetism, in turn, is the property of certain materials that have a low positive magnetic susceptibility, as in paramagnetism, but exhibit a temperature dependence similar to that encountered in ferromagnetism. The susceptibility increases with temperature up to a certain point, called the ‘Néel temperature’, and then falls with increasing temperature in accordance with the Curie-Weiss law. The material thus becomes paramagnetic above the Néel temperature, which is analogous to the Curie temperature in the transition from ferromagnetism to paramagnetism. Antiferromagnetism is a property of certain inorganic compounds such as MnO, FeO, FeF2 and MnS. It results from interactions between neighbouring atoms leading to an antiparallel arrangement of adjacent magnetic dipole moments. A dipole, it should be added, is a system of two equal and opposite charges placed at a very short distance apart; the product of either of the charges and the distance between them is known as the electric dipole moment. A small loop carrying a current I behaves as a magnetic dipole, with moment IA, where A is the area of the loop.
 An exact calculation of the energies and other properties of the quantum states is only possible for the simplest atoms, but there are various approximate methods that give useful results. Perturbation theory is such an approximate method of solving a difficult problem, applicable if the equations to be solved depart only slightly from those of some problem already solved. For example, the orbit of a single planet round the sun is an ellipse; the perturbing effect of other planets modifies the orbit slightly, in a way calculable by this method. The technique finds considerable application in ‘wave mechanics’ and in ‘quantum electrodynamics’. Phenomena that are not amenable to solution by perturbation theory are said to be non-perturbative.
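 A minimal worked instance of first-order perturbation theory (a Python sketch, not from the text: a quartic term λx⁴ added to a harmonic oscillator, in units with ħ = m = ω = 1, where the analytic first-order shift is 3λ/4):

import math
lam = 0.01  # strength of the quartic perturbation (hypothetical)
def psi0_sq(x):
    # squared ground-state wave function of the oscillator: exp(-x^2)/sqrt(pi)
    return math.exp(-x * x) / math.sqrt(math.pi)
dx = 0.001
# first-order shift E1 = integral of |psi0|^2 * lam * x^4 dx, done numerically
E1 = sum(psi0_sq(-8.0 + i * dx) * lam * (-8.0 + i * dx) ** 4 * dx
         for i in range(int(16.0 / dx)))
print(E1, 3.0 * lam / 4.0)  # numerical shift against the analytic result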
 When X-rays are scattered by atomic centres arranged at regular intervals, interference phenomena occur, crystals providing gratings of a suitably small interval. The interference effects may be used to provide a spectrum of the beam of X-rays since, according to ‘Bragg’s law’, the angle of reflection of X-rays from a crystal depends on the wavelength of the rays. For lower-energy X-rays mechanically ruled gratings can be used. Each chemical element emits characteristic X-rays in sharply defined groups in more widely separated regions, known as the K, L, M, N, etc., series; the lines of any series move toward shorter wavelengths as the atomic number of the element concerned increases. If a parallel beam of X-rays of wavelength λ strikes a set of crystal planes, it is reflected from the different planes, interference occurring between X-rays reflected from adjacent planes. Bragg’s law states that constructive interference takes place when the difference in path-lengths, BAC, is equal to an integral number of wavelengths:
2d sin θ = nλ,
In which ‘n’ is an integer, ‘d’ is the interplanar distance, and ‘θ’ is the angle between the incident X-ray and the crystal plane. This angle is called the ‘Bragg angle’, and a bright spot will be obtained on an interference pattern at this angle. A dark spot will be obtained, however, if 2d sin θ = mλ, where ‘m’ is half-integral. The structure of a crystal can be determined from a set of interference patterns found at various angles from the different crystal faces.
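 The Bragg angles for the first few orders follow directly from the law (a Python sketch; the spacing and wavelength are illustrative laboratory values, not taken from the text):

import math
d = 2.82e-10    # interplanar spacing, m (illustrative)
lam = 1.54e-10  # X-ray wavelength, m (illustrative)
for n in (1, 2, 3):
    s = n * lam / (2 * d)
    if s <= 1.0:  # a reflection exists only if sin(theta) <= 1
        print(n, math.degrees(math.asin(s)))  # Bragg angle in degrees for each order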
 The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet. Certain details have been studied using microwaves, notably the small difference in energy between the 2S½ and 2P½ states of hydrogen (the Lamb shift). These levels would have the same energy according to the wave mechanics of Dirac; the actual shift can be explained by a correction to the energies based on the theory of the interaction of electromagnetic fields with matter, in which the fields themselves are quantized. Yet other information may be obtained from magnetism and from chemical properties.
 The appearance potential is defined as: (1) the potential difference through which an electron must be accelerated from rest to produce a given ion from its parent atom or molecule; or (2) this potential difference multiplied by the electron charge, giving the least energy required to produce the ion. A simple ionizing process gives the ‘ionization potential’ of the substance, for example:
Ar + e⁻ ➝ Ar⁺ + 2e⁻.
Higher appearance potentials may be found for multiply charged ions:
Ar + e⁻ ➝ Ar⁺⁺ + 3e⁻.
 The atomic number is the number of protons in the nucleus of an atom, equal to the number of electrons revolving around the nucleus in the neutral atom. The atomic number determines the chemical properties of an element and the element’s position in the periodic table, the classification of the chemical elements, in tabular form, in the order of their atomic number. The elements show a periodicity of properties, chemically similar elements recurring in a definite order. The sequence of elements is thus broken into horizontal ‘periods’ and vertical ‘groups’, the elements in each group showing close chemical analogies, i.e., in valency, chemical properties, etc. All the isotopes of an element have the same atomic number, although different isotopes have different mass numbers.
 An orbital is an allowed ‘wave function’ of an electron in an atom obtained by a solution of the Schrödinger wave equation. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is ‒e²/r, where ‘e’ is the electron charge and ‘r’ its distance from the nucleus. A precise orbit cannot be considered, as in Bohr’s theory of the atom; instead the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |Ψ|²dτ is the probability of finding the electron in the element of volume ‘dτ’.
 Solution of Schrödinger’s equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position. They also have an associated value of energy ‘E’. These allowed wave functions, or orbitals, are characterized by three quantum numbers similar to those characterizing the allowed orbits in the quantum theory of the atom: ‘n’, the ‘principal quantum number’, can have values of 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called ‘shells’ and designated the K, L, M shells, etc. ‘l’, the ‘azimuthal quantum number’, for a given value of ‘n’ can have values of 0, 1, 2, . . . (n ‒ 1); the ‘M’ shell (n = 3), for example, has three subshells with l = 0, l = 1, and l = 2. Orbitals with l = 0, 1, 2, and 3 are called s, p, d, and f orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron; the orbital angular momentum of an electron is given by:
√[l(l + 1)] (h/2π).
‘m’, the ‘magnetic quantum number’, for a given value of ‘l’ can have values of ‒l, ‒(l ‒ 1), . . . , 0, . . . , (l ‒ 1), l. Thus for a ‘p’ orbital, for which l = 1, there are in fact three different orbitals, with m = ‒1, 0, and 1. These orbitals, with the same values of ‘n’ and ‘l’ but different ‘m’ values, have the same energy. The significance of this quantum number is that it shows the number of different levels that would be produced if the atom were subjected to an external magnetic field.
 According to wave theory the electron may be at any distance from the nucleus, but in fact there is only a reasonable chance of it being within a distance of about 5 x 10⁻¹¹ metre. Indeed, the maximum probability occurs when r = a0, where a0 is the radius of the first Bohr orbit. It is customary to represent an orbital by a surface enclosing a volume within which there is an arbitrarily decided probability (say 95%) of finding the electron. Notably, although ‘s’ orbitals are spherical (l = 0), orbitals with l > 0 have an angular dependence. Finally, the electron in an atom can have a fourth quantum number, ‘ms’, characterizing its spin direction. This can be +½ or ‒½, and according to the Pauli exclusion principle each orbital can hold only two electrons. The four quantum numbers lead to an explanation of the periodic table of the elements.
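 The claim that the maximum probability falls at the Bohr radius is easy to verify numerically (a Python sketch using the standard ground-state wave function of hydrogen):

import math
a0 = 5.29e-11  # radius of the first Bohr orbit, m
def radial_probability(r):
    # P(r) = 4 pi r^2 |psi_1s|^2, with |psi_1s|^2 = exp(-2r/a0) / (pi a0^3)
    return 4.0 * math.pi * r**2 * math.exp(-2.0 * r / a0) / (math.pi * a0**3)
rs = [i * a0 / 100.0 for i in range(1, 401)]
r_peak = max(rs, key=radial_probability)
print(r_peak / a0)  # ≈ 1.0: the maximum falls at r = a0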
 The wavelength is the least distance in a progressive wave between two surfaces with the same phase. If ‘v’ is the phase speed and ‘ν’ the frequency, the wavelength is given by v = νλ. For electromagnetic radiation the phase speed and wavelength in a material medium are equal to their values in free space divided by the ‘refractive index’. The wavelengths of spectral lines are normally specified for free space.
 Optical wavelengths are measured absolutely using interferometers or diffraction gratings, or comparatively using a prism spectrometer. The wavelength can only have an exact value for an infinite wave train. If an atomic body emits a quantum in the form of a train of waves of duration τ, the fractional uncertainty of the wavelength, Δλ/λ, is approximately λ/2πcτ, where ‘c’ is the speed in free space. This is associated with the indeterminacy of the energy given by the uncertainty principle.
 The wave function, in turn, is a mathematical quantity analogous to the amplitude of a wave that appears in the equations of wave mechanics, particularly the Schrödinger wave equation. The most generally accepted interpretation is that |Ψ|²dV represents the probability that a particle is within the volume element dV. Matter waves are a set of waves that represent the behaviour, under appropriate conditions, of a particle, e.g., its diffraction by a crystal lattice. The wavelength is given by the ‘de Broglie equation’. They are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point. These waves were predicted by de Broglie in 1924 and observed in 1927 in the Davisson-Germer experiment. Still, ‘Ψ’ is often a complex quantity.
 The analogy between ‘Ψ’ and the amplitude of a wave is purely formal. There is no macroscopic physical quantity with which ‘Ψ’ can be identified, in contrast with, for example, the amplitude of an electromagnetic wave, which is expressed in terms of electric and magnetic field intensities.
 Overall, there are an infinite number of functions satisfying a wave equation, but only some of these will satisfy the boundary conditions: ‘Ψ’ must be finite and single-valued at every point, and the spatial derivatives must be continuous at an interface. For a particle subject to a law of conservation of numbers, the integral of |Ψ|²dV over all space must remain equal to 1, since this is the probability that it exists somewhere; to satisfy this condition the wave equation must be of the first order in (dΨ/dt). Wave functions obtained when these conditions are applied form a set of characteristic functions of the Schrödinger wave equation. These are often called eigenfunctions and correspond to a set of fixed energy values in which the system may exist, called eigenvalues; energy eigenfunctions describe stationary states of the system. For certain bound states of a system the eigenfunctions do not change sign on reversing the co-ordinate axes; these states are said to have even parity. For other states the sign changes on space reversal and the parity is said to be odd.
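 The normalization condition can be checked directly for a concrete wave function (a Python sketch with an illustrative Gaussian, not from the text):

import math
sigma = 1.0  # width parameter of the Gaussian (hypothetical)
def psi_sq(x):
    # |psi|^2 for a normalized Gaussian: exp(-x^2/sigma^2) / (sigma sqrt(pi))
    return math.exp(-x**2 / sigma**2) / (sigma * math.sqrt(math.pi))
dx = 0.001
total = sum(psi_sq(-10.0 + i * dx) * dx for i in range(int(20.0 / dx)))
print(total)  # ≈ 1.0: the particle exists somewhere with certainty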
 Eigenvalue problems in physics take the form:
ΩΨ = λΨ
Where Ω is some mathematical operation (multiplication by a number, differentiation, etc.) on a function Ψ, which is called the ‘eigenfunction’; λ is called the ‘eigenvalue’, which in a physical system will be identified with an observable quantity.
 Eigenvalue problems are ubiquitous in classical physics and occur whenever the mathematical description of a physical system yields a series of coupled differential equations. For example, the collective motion of a large number of interacting oscillators may be described by a set of coupled differential equations, each describing the motion of one of the oscillators in terms of the positions of all the others. A ‘harmonic’ solution may be sought, in which each displacement is assumed to undergo simple harmonic motion in time. The differential equations then reduce to 3N linear equations with 3N unknowns, where ‘N’ is the number of individual oscillators, each with three degrees of freedom. The whole problem is now easily recast as a ‘matrix’ equation of the form:
Mχ = ω²χ,
Where ‘M’ is a 3N x 3N matrix called the dynamical matrix, χ is a 3N x 1 column matrix, and ω² is the squared frequency of the harmonic solution. The problem is now an eigenvalue problem with eigenfunctions χ, which are the normal modes of the system, and corresponding eigenvalues ω². As χ can be expressed as a column vector, χ is a vector in a 3N-dimensional vector space; for this reason, χ is also often called an eigenvector.
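 The simplest nontrivial case is two equal masses joined by springs (a Python sketch with hypothetical spring constants; numpy’s eigh returns the eigenvalues ω² and the eigenvectors):

import numpy as np
k, kc, m = 1.0, 0.5, 1.0  # outer springs, coupling spring, mass (hypothetical)
# dynamical matrix for the two displacements x1, x2
M = np.array([[(k + kc) / m, -kc / m],
              [-kc / m, (k + kc) / m]])
omega2, modes = np.linalg.eigh(M)
print(np.sqrt(omega2))  # the two normal-mode frequencies
print(modes)            # columns: the in-phase and out-of-phase modes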
 When the collection of oscillators is a complicated three-dimensional molecule, the casting of the problem into normal modes is an effective simplification of the system. By the symmetry principles of group theory, the symmetry operations of any physical system must possess the properties of a mathematical group. Groups of rotations, both finite and infinite, are important in the analysis of the symmetry of atoms and molecules, which underlies the quantum theory of angular momentum. Eigenvalue problems arising in the quantum mechanics of atomic or molecular systems yield stationary states corresponding to the normal modes of oscillation of either electrons in an atom or atoms within a molecule. Angular momentum quantum numbers correspond to a labelling system used to classify these normal modes, and analysing the transitions between them leads to a theoretical prediction of the atomic or molecular spectrum. The symmetry principles of group theory can then be applied to classify the modes accordingly. This kind of analysis requires an appreciation of the symmetry properties of the molecule: the operations (rotations, inversions, etc.) that leave the molecule invariant make up the point group of that molecule. Normal modes sharing the same ω eigenvalues are said to correspond to the irreducible representations of the molecule’s point group. It is among these irreducible representations that one will find the infrared absorption spectrum for the vibrational normal modes of the molecule.
 Eigenvalue problems play a particularly important role in quantum mechanics, in which physical observables such as location, momentum, energy, etc., are represented by operators (differentiation with respect to a variable, multiplication by a variable), which act on wave functions. Wave functions differ from classical waves in that they carry no energy: for a classical wave, the square modulus of its amplitude measures its energy; for a wave function, the square modulus of its amplitude at a location χ represents not energy but probability, i.e., the probability that a particle, a localized packet of energy, will be observed if a detector is placed at that location. The wave function therefore describes the distribution of possible locations of the particle and is discernible only after many detection events have occurred. A measurement of position of a quantum particle may be written symbolically as:
XΨ(χ) = χΨ(χ),
Where Ψ(χ) is said to be an eigenvector of the location operator and ‘χ’ is the eigenvalue, which represents the location. Each Ψ(χ) represents the amplitude at the location ‘χ’, and |Ψ(χ)|² is the probability that the particle will be found in an infinitesimal volume at that location. The wave function describing the distribution of all possible locations for the particle is the linear superposition of all Ψ(χ) for 0 ≤ χ ≤ ∞. Superposition principles such as this hold generally in physics wherever linear phenomena occur. In elasticity, the principle states that each stress is accompanied by the same strains whether it acts alone or in conjunction with others; it is true so long as the total stress does not exceed the limit of proportionality. In vibrations and wave motion the principle asserts that one set of vibrations or waves is unaffected by the presence of another set. For example, two sets of ripples on water will pass through one another without mutual interaction, so that at a particular instant the resultant disturbance at any point traversed by both sets of waves is the sum of the two component disturbances.
 The superposition of two vibrations, y1 and y2, both of frequency ƒ, produces a resultant vibration of the same frequency, its amplitude and phase being functions of the component amplitudes and phases; thus if:
y1 = a1 sin(2πt + δ1)
y2 = a2 sin(sin(2πt + δ2)
Then the resultant vibration, y, is given by:
y1 + y2 = A sin(2πt + Δ),
Where the amplitude A and phase Δ are both functions of a1, a2, δ1, and δ2.
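 The resultant amplitude and phase can be computed by adding the two components like phasors (a Python sketch; the amplitudes and phases chosen are illustrative):

import math
a1, d1 = 1.0, 0.0            # amplitude and phase of y1 (hypothetical)
a2, d2 = 0.5, math.pi / 3.0  # amplitude and phase of y2 (hypothetical)
x = a1 * math.cos(d1) + a2 * math.cos(d2)  # sum of in-phase parts
y = a1 * math.sin(d1) + a2 * math.sin(d2)  # sum of quadrature parts
A = math.hypot(x, y)
Delta = math.atan2(y, x)
print(A, Delta)  # amplitude and phase of the resultant vibration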
 The eigenvalues in quantum mechanics therefore represent the possible values of an observable (position, in the case of χ) that the quantum system can take in its stationary states. The uncertainty principle states that the product of the uncertainty in a component of momentum (pχ) and the uncertainty in the corresponding co-ordinate position (χ) is of the same order of magnitude as the Planck constant. As a result, no exact, repeatable measurement of position is possible: measurements of the position acquire a spread themselves, which makes the continuous monitoring of the position impossible.
 Quantum mechanics, like classical mechanics, may take differential or matrix forms, and both forms have been shown to be equivalent. The differential form of quantum mechanics is called wave mechanics (Schrödinger), where the operators are differential operators or multiplications by variables; eigenfunctions in wave mechanics are wave functions corresponding to stationary states. The matrix form of quantum mechanics is called matrix mechanics (Born and Heisenberg); the operators are represented by matrices acting on eigenvectors.
 The relationship between matrix and wave mechanics is similar to the relationship between matrix and differential forms of eigenvalue problems in classical mechanics. The wave functions representing stationary states are really normal modes of the quantum wave. These normal modes may be thought of as vectors that span a vector space, which has a matrix representation.
 Pauli, in 1925, suggested that each electron could exist in two states with the same orbital motion. Uhlenbeck and Goudsmit interpreted these states as due to the spin of the electron about an axis. The electron is assumed to have an intrinsic angular momentum in addition to any angular momentum due to its orbital motion. This intrinsic angular momentum is called ‘spin’. It is quantized in values of:
√[s(s + 1)] (h/2π),
Where ‘s’ is the ‘spin quantum number’ and ‘h’ the Planck constant. For an electron the component of spin in a given direction can have values of +½ and ‒½, leading to the two possible states. An electron with spin behaves like a small magnet and carries an intrinsic magnetic moment. The magneton is the fundamental constant in which such moments are expressed: the circulatory current created by the angular momentum ‘p’ of an electron moving in its orbit produces a magnetic moment μ = ep/2m, where ‘e’ and ‘m’ are the charge and mass of the electron. Substituting the quantized relation p = jh/2π (h = the Planck constant; j = the magnetic quantum number) gives μ = jeh/4πm. When j is taken as unity the quantity eh/4πm is called the Bohr magneton; its value is:
9.274 0780 x 10⁻²⁴ A m².
According to the wave mechanics of Dirac, the magnetic moment associated with the spin of the electron would be exactly one Bohr magneton, though quantum electrodynamics shows that a small difference can be expected. The nuclear magneton, ‘μN’, is equal to (me/mp)μB, where mp is the mass of the proton. The value of μN is:
5.050 8240 x 10⁻²⁷ A m².
The magnetic moment of a proton is, in fact, 2.792 85 nuclear magnetons. The two states of different energy result from interactions between the magnetic field due to the electron’s spin and that caused by its orbital motion. These are two closely spaced states resulting from the two possible spin directions, and these lead to the two lines in the doublet.
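 Both constants follow from the formula eh/4πm (a Python sketch with rounded standard values of the constants):

import math
e = 1.602e-19    # electron charge, C
h = 6.626e-34    # Planck constant, J s
m_e = 9.109e-31  # electron mass, kg
m_p = 1.673e-27  # proton mass, kg
mu_B = e * h / (4.0 * math.pi * m_e)  # Bohr magneton
mu_N = mu_B * (m_e / m_p)             # nuclear magneton
print(mu_B)  # ≈ 9.27e-24 A m^2
print(mu_N)  # ≈ 5.05e-27 A m^2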
 In an external magnetic field the angular momentum vector of the electron precesses. By way of illustration, if a body spins about its axis of symmetry OC (where O is a fixed point) and OC itself rotates round an axis OZ fixed outside the body, the body is said to be precessing round OZ; OZ is the precession axis. A gyroscope precesses due to an applied torque called the precessional torque. If the moment of inertia of a body about OC is I and its angular velocity is ω, a torque ‘K’ whose axis is perpendicular to the axis of rotation will produce an angular velocity of precession Ω about an axis perpendicular to both ω and the torque axis, where:
Ω = K/Iω.
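 A one-line numerical instance of this relation (a Python sketch with hypothetical values):

I = 2.0e-3     # moment of inertia about the spin axis, kg m^2 (hypothetical)
omega = 100.0  # spin angular velocity, rad/s (hypothetical)
K = 0.05       # applied torque, N m (hypothetical)
Omega = K / (I * omega)
print(Omega)   # precessional angular velocity, rad/s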
 Not all orientations of the angular momentum vector to the field direction are allowed: there is a quantization, so that the component of the angular momentum along the field direction is restricted to certain values of h/2π. The angular momentum vector has allowed directions such that the component is ms(h/2π), where ms is the magnetic spin quantum number. For a given value of s, ms has the values s, (s ‒ 1), . . . , ‒s. For example, if s = 1, ms is 1, 0, and ‒1. The electron has a spin of ½ and thus ms is +½ and ‒½; the components of its spin angular momentum along the field direction are therefore ±½(h/2π). These phenomena are called ‘space quantization’.
 The resultant spin of a number of particles is the vector sum of the spins (s) of the individual particles and is given by the symbol S. For example, in an atom two electrons with spin of ½ could combine to give a resultant spin of S = ½ + ½ = 1, or a resultant of S = ½ ‒ ½ = 0.
 Alternative symbols used for spin are J (for elementary particles in the standard theory) and I (for a nucleus). Most elementary particles have a non-zero spin, which may be either integral or half-integral. The spin of a nucleus is the resultant of the spins of its constituent nucleons.
 Angular momentum is the moment of momentum about an axis (symbol: L), the product of the moment of inertia and the angular velocity (Iω); angular momentum is a ‘pseudo-vector quantity’ and is conserved in an isolated system. The moment of inertia of a body about an axis is the sum of the products of the mass of each particle of the body and the square of its perpendicular distance from the axis; this addition is replaced by an integration in the case of a continuous body. For a rigid body moving about a fixed axis, the laws of motion have the same form as those of rectilinear motion, with moment of inertia replacing mass, angular velocity replacing linear velocity, angular momentum replacing linear momentum, etc. Hence the kinetic energy of a body rotating about a fixed axis with angular velocity ω is ½Iω², which corresponds to ½mv² for the kinetic energy of a body of mass ‘m’ translated with velocity ‘v’.
 The linear momentum ‘p’ of a particle is the product of the mass and the velocity of the particle. It is a ‘vector’ quantity directed through the particle in the direction of motion; the linear momentum of a body or of a system of particles is the vector sum of the linear momenta of the individual particles. If a body of mass ‘M’ is translated (moved so that all points travel in parallel directions through equal distances) with a velocity ‘V’, its momentum is ‘MV’, which is the momentum of a particle of mass ‘M’ at the centre of gravity of the body.
 If the moment of inertia of a body of mass ‘M’ about an axis through the centre of mass is I, the moment of inertia about a parallel axis at distance ‘h’ from the first axis is I + Mh². If the radius of gyration is ‘k’ about the first axis, it is √(k² + h²) about the second. The moment of inertia of a uniform solid body about an axis of symmetry is given by the product of the mass and the sum of the squares of the other semi-axes, divided by 3, 4, or 5 according to whether the body is rectangular, elliptical or ellipsoidal.
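 The parallel-axis rule is a one-step computation (a Python sketch with hypothetical values):

M = 2.0     # mass of the body, kg (hypothetical)
I_cm = 0.1  # moment of inertia about the axis through the centre of mass, kg m^2
h = 0.25    # distance between the two parallel axes, m
print(I_cm + M * h**2)       # moment of inertia about the displaced axis
k = (I_cm / M) ** 0.5        # radius of gyration about the first axis
print((k**2 + h**2) ** 0.5)  # radius of gyration about the second axis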
 The circle is a special case of the ellipse. Routh’s rule works for a rectangular or elliptical cylinder and for elliptical discs, and it works for all three axes of symmetry. For example, for a circular disc of radius ‘a’ and mass ‘M’, the moment of inertia about an axis through the centre of the disc and lying (a) perpendicular to the disc, (b) in the plane of the disc is:
(a) ¼M(a² + a²) = ½Ma²
(b) ¼Ma².
Routh’s rule is the formula for calculating moments of inertia:
I = mass × (a²/(3 + n) + b²/(3 + n′)),
where n and n′ are the numbers of principal curvatures of the surface that ends the semi-axes in question, and ‘a’ and ‘b’ are the lengths of the semi-axes. Thus, if the body is a rectangular parallelepiped, n = n′ = 0, and
I = mass × (a²/3 + b²/3).
If the body is a cylinder then, for an axis through its centre, perpendicular to the cylinder axis, n = 0 and n′ = 1, so that
I = mass × (a²/3 + b²/4).
If ‘I’ is desired about the axis of the cylinder, then n = n′ = 1 and a = b = r (the cylinder radius), and
I = mass × (r²/2).
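 The cases above can be checked with a few lines of code (a sketch; the masses and semi-axes are arbitrary):

    # Routh's rule, I = mass*(a^2/(3+n) + b^2/(3+n')), for the cases above.
    def routh(mass, a, b, n, nprime):
        return mass * (a**2 / (3 + n) + b**2 / (3 + nprime))

    M = 1.0
    print(routh(M, 0.3, 0.2, 0, 0))     # rectangular lamina: M(a^2/3 + b^2/3)
    print(routh(M, 0.3, 0.2, 0, 1))     # transverse axis of a cylinder
    r = 0.5
    print(routh(M, r, r, 1, 1))         # axis of the cylinder
    print(0.5 * M * r**2)               # agrees: M r^2 / 2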
 A matrix is an array of mathematical quantities that is similar in appearance to a determinant but differs from it in not having a numerical value in the ordinary sense of the term. It obeys definite rules of multiplication, addition, etc. An array of ‘mn’ numbers set out in ‘m’ rows and ‘n’ columns is a matrix of order m × n. The separate numbers are usually called elements. Such arrays of numbers, treated as single entities and manipulated by the rules of matrix algebra, are of use whenever simultaneous equations are found, e.g., in changing from one set of Cartesian axes to another set inclined to the first, in quantum theory, and in electrical networks. Matrices are very prominent in the mathematical expression of quantum mechanics.
 Matrix mechanics is a mathematical form of quantum mechanics that was developed by Born and Heisenberg simultaneously with, but independently of, wave mechanics. It is equivalent to wave mechanics, but in it the wave function is replaced by ‘vectors’ in an abstract space (Hilbert space), and observable quantities of the physical world, such as energy, momentum, co-ordinates, etc., are represented by ‘matrices’.
 The theory involves the idea that a measurement on a system disturbs, to some extent, the system itself. With large systems this is of no consequence, and the system can be treated by classical mechanics. On the atomic scale, however, the result depends on the order in which the observations are made. Thus if ‘p’ denotes an observation of a component of momentum and ‘q’ an observation of the corresponding co-ordinate, pq ≠ qp. Here ‘p’ and ‘q’ are not physical quantities but operators. In matrix mechanics they obey the relationship, where ‘h’ is the Planck constant (6.626 076 × 10⁻³⁴ J s):
pq ‒ qp = ih/2π
The matrix elements are connected with the transition probability between various states of the system.
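 The non-commutativity can be exhibited numerically (a sketch in units with h/2π = 1; the truncated oscillator-ladder construction of the matrices is an illustrative choice, not anything prescribed above):

    import numpy as np

    # Matrices q and p in a truncated 8-state basis.
    N = 8
    adag = np.diag(np.sqrt(np.arange(1, N)), -1)    # raising matrix
    a = adag.conj().T                               # lowering matrix

    q = (a + adag) / np.sqrt(2)                     # co-ordinate matrix
    p = 1j * (a - adag) / np.sqrt(2)                # momentum matrix

    comm = p @ q - q @ p
    print(np.round(np.diag(comm), 6))
    # i on every diagonal entry except the last (a truncation artifact),
    # reproducing pq - qp = i*h/2pi on the retained states.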
 A vector is a quantity with magnitude and direction. It can be represented by a line whose length is proportional to the magnitude and whose direction is that of the vector, or by three components in a rectangular co-ordinate system. The angle between any two of the co-ordinate unit vectors is 90°, so the scalar product of two different unit vectors is zero, while that of a unit vector with itself is one.
 A true vector, or polar vector, involves a displacement or virtual displacement. Polar vectors include velocity, acceleration, force, and electric and magnetic field strengths. The signs of their components are reversed on reversing the co-ordinate axes. Their dimensions include length to an odd power.
 A pseudo-vector, or axial vector, involves the orientation of an axis in space. The direction is conventionally obtained in a right-handed system by sighting along the axis so that the rotation appears clockwise. Pseudo-vectors include angular velocity, vector area, and magnetic flux density. The signs of their components are unchanged on reversing the co-ordinate axes. Their dimensions include length to an even power.
 Polar vectors and axial vectors obey the same laws of vector analysis. (a) Vector addition: if two vectors ‘A’ and ‘B’ are represented in magnitude and direction by the adjacent sides of a parallelogram, the diagonal represents the vector sum (A + B) in magnitude and direction; forces, velocities, etc., combine in this way. (b) Vector multiplication: there are two ways of multiplying vectors. (i) The ‘scalar product’ of two vectors equals the product of their magnitudes and the cosine of the angle between them, and is a scalar quantity. It is usually written
A • B (read as ‘A dot B’).
(ii) The vector product of two vectors ‘A’ and ‘B’ is defined as a pseudo-vector of magnitude AB sin θ, having a direction perpendicular to the plane containing them. The sense of the product along this perpendicular is defined by the rule: if ‘A’ is turned toward ‘B’ through the smaller angle, this rotation appears clockwise when sighting along the direction of the vector product. A vector product is usually written:
A × B (read as ‘A cross B’).
Vectors should be distinguished from scalars by printing the symbols in bold italic letters.
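 Both products are easily exhibited numerically (a sketch with arbitrary vectors):

    import numpy as np

    A = np.array([1.0, 2.0, 3.0])
    B = np.array([4.0, 0.0, -1.0])

    print(A @ B)              # scalar product A.B = |A||B| cos(theta)
    C = np.cross(A, B)        # vector product, perpendicular to A and B
    print(C, A @ C, B @ C)    # the last two values are zero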
 A unified field theory seeks to unite the properties of gravitational, electromagnetic, weak, and strong interactions so as to predict all their characteristics. At present it is not known whether such a theory can be developed, or whether the physical universe is amenable to a single analysis in terms of the current concepts of physics. There are unsolved problems in using the framework of a relativistic quantum field theory to encompass the four fundamental interactions. It is possible, however, that theories using extended objects, such as superstring and supersymmetric theories, may yet enable such a synthesis to be achieved.
 A Grand Unified Theory is a unified quantum field theory of the electromagnetic, weak, and strong interactions. In most models, the known interactions are viewed as a low-energy manifestation of a single unified interaction, the unification taking place at energies (typically 10¹⁵ GeV) very much higher than those currently accessible in particle accelerators. One feature of a Grand Unified Theory is that ‘baryon’ number and ‘lepton’ number would no longer be absolutely conserved quantum numbers, with the consequence that such processes as ‘proton decay’, for example the decay of a proton into a positron and a π⁰, p → e⁺π⁰, would be expected to be observed. Predicted lifetimes for proton decay are very long, typically 10³⁵ years. Searches for proton decay are being undertaken by many groups, using large underground detectors, so far without success.
 One of the mutual attractions binding the universe into its own totality, yet independent of electromagnetism and of the strong and weak nuclear interactions, is gravitation. Newton showed that the external effect of a spherically symmetric body is the same as if the whole mass were concentrated at the centre. Astronomical bodies are roughly spherically symmetric, so they can be treated as point particles to a very good approximation. On this assumption Newton showed that his law was consistent with Kepler’s laws. Until recently, all experiments had confirmed the accuracy of the inverse square law and the independence of the law of the nature of the substances involved, but in the past few years evidence has been found against both.
 The strength of a gravitational field at any point is given by the force exerted on unit mass at that point. The field intensity at a distance ‘χ’ from a point mass ‘m’ is therefore Gm/χ², and acts toward ‘m’. Gravitational field strength is measured in ‘newtons’ per kilogram. The gravitational potential ‘V’ at that point is the work done in moving a unit mass from infinity to the point against the field; for a point mass,
V = Gm ∫ (from ∞ to χ) dχ/χ² = ‒Gm/χ.
‘V’ is a scalar, measured in joules per kilogram. The following special cases are also important: (a) the potential at a point distance χ from the centre of a hollow homogeneous spherical shell of mass ‘m’, outside the shell, is
V = ‒Gm / χ.
The potential is the same as if the mass of the shell were concentrated at the centre. (b) At any point inside the spherical shell the potential is equal to its value at the surface:
V = ‒Gm / r
where ‘r’ is the radius of the shell. Thus there is no resultant force acting at any point inside the shell, since no potential difference exists between any two points. (c) The potential at a point distance ‘χ’ from the centre of a homogeneous solid sphere, outside the sphere, is the same as that for a shell:
V = ‒Gm / χ
(d) At a point inside the sphere, of radius ‘r’:
V = ‒Gm(3r² ‒ χ²)/2r³.
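 The formulas for cases (c) and (d) can be evaluated directly (a sketch; the Earth-like mass and radius are illustrative figures):

    # Potential of a homogeneous solid sphere, outside and inside.
    G = 6.674e-11                        # N m^2 kg^-2

    def V(m, r, chi):
        if chi >= r:                     # outside: as if a point mass
            return -G * m / chi
        return -G * m * (3 * r**2 - chi**2) / (2 * r**3)   # inside

    M, R = 5.97e24, 6.37e6
    print(V(M, R, R))        # surface value, ~ -6.3e7 J/kg
    print(V(M, R, 0.0))      # centre: 1.5 times the surface value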
 The essential property of gravitation is that it causes a change in motion, in particular the acceleration of free fall (g) in the earth’s gravitational field. According to the general theory of relativity, gravitational fields change the geometry of space-time, causing it to become curved. It is this curvature of space-time, produced by the presence of matter, that controls the natural motions of bodies. General relativity may thus be considered as a theory of gravitation, differences between it and Newtonian gravitation only appearing when the gravitational fields become very strong, as with ‘black holes’ and ‘neutron stars’, or when very accurate measurements can be made.
 Another binding characteristic embodied universally is the electromagnetic interaction between elementary particles, arising as a consequence of their associated electric and magnetic fields. The electrostatic force between charged particles is an example. This force may be described in terms of the exchange of virtual photons, because the uncertainty principle permits the law of conservation of mass and energy to be violated by an amount ΔE, provided this occurs only for a time Δt such that:
ΔEΔt ≤ h/4π.
This makes it possible for particles to be created for short periods of time where their creation would normally violate the conservation of energy. These particles are called ‘virtual particles’. For example, in a complete vacuum, in which no ‘real’ particles exist, pairs of virtual electrons and positrons are continuously forming and rapidly disappearing (in less than 10⁻²³ seconds). Other conservation laws, such as those applying to angular momentum, isospin, etc., cannot be violated even for short periods of time.
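 A back-of-envelope calculation (a rough sketch, not a derivation) reproduces this fleetingly short timescale to within an order of magnitude:

    import math

    # Time for which an electron-positron pair can be 'borrowed'.
    h = 6.626e-34                        # J s
    dE = 2 * 0.511e6 * 1.602e-19         # two electron rest energies, J

    print(h / (4 * math.pi * dE))        # ~3e-22 s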
 Because the strength of the electromagnetic interaction lies between those of the strong and weak nuclear interactions, particles decaying by electromagnetic interaction do so with lifetimes shorter than those decaying by the weak interaction but longer than those decaying under the influence of the strong interaction. An example of electromagnetic decay is:
π⁰ → γ + γ.
 This decay process, with a mean lifetime of 8.4 × 10⁻¹⁷ seconds, may be understood as the annihilation of the quark and the antiquark making up the π⁰ into a pair of photons. The following quantum numbers have to be conserved in electromagnetic interactions: angular momentum, charge, baryon number, isospin quantum number I₃, strangeness, charm, parity, and charge conjugation parity.
 Quantum electrodynamic descriptions of the photon-mediated electromagnetic interaction have been verified over a great range of distances and have led to highly accurate predictions. Quantum electrodynamics is a ‘gauge theory’: the electromagnetic force can be derived by requiring that the equations describing the motion of a charged particle remain unchanged in the course of local symmetry operations. Specifically, if the phase of the wave function by which the charged particle is described can be altered independently at every point in space, quantum electrodynamics requires that the electromagnetic interaction and its mediating photon exist in order to maintain symmetry.
 The weak interaction is a kind of interaction between elementary particles that is weaker than the strong interaction by a factor of about 10¹². When strong interactions can occur in reactions involving elementary particles, the weak interactions are usually unobservable. However, sometimes strong and electromagnetic interactions are prevented because they would violate the conservation of some quantum number, e.g., strangeness, that has to be conserved in such reactions. When this happens, weak interactions may still occur.
 The weak interaction operates over an extremely short range (about 2 × 10⁻¹⁸ m). It is mediated by the exchange of a very heavy particle (a gauge boson) that may be the charged W⁺ or W‒ particle (mass about 80 GeV/c²) or the neutral Z⁰ particle (mass about 91 GeV/c²). The gauge bosons that mediate the weak interaction are analogous to the photon that mediates the electromagnetic interaction. Weak interactions mediated by W particles involve a change in the charge, and hence the identity, of the reacting particle. The neutral Z⁰ does not lead to such a change in identity. Both sorts of weak interaction can violate parity.
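 The quoted range can be recovered from the boson mass by a one-line estimate, taking the range as the reduced Compton wavelength ħ/mc (a sketch):

    hbar, c = 1.055e-34, 2.998e8
    mW = 80e9 * 1.602e-19 / c**2         # 80 GeV/c^2 expressed in kg

    print(hbar / (mW * c))               # ~2.5e-18 m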
 Most of the long-lived elementary particles decay as a result of weak interactions. For example, the kaon decay K⁺ ➝ μ⁺νμ may be thought of as being due to the annihilation of the quark and antiquark in the K⁺ to produce a virtual W⁺ boson, which then converts into a positive muon and a neutrino. This decay cannot proceed by the strong or electromagnetic interaction because strangeness is not conserved. Beta decay is the most common example of weak interaction decay. Because the interaction is so weak, particles that can only decay by weak interactions do so slowly, i.e., they have very long lifetimes. Other examples of weak interactions include the scattering of the neutrino by other particles and certain very small effects on electrons within the atom.
 Understanding of weak interactions is based on the electroweak theory, in which it is proposed that the weak and electromagnetic interactions are different manifestations of a single underlying force, known as the electroweak force. Many of the predictions of the theory have been confirmed experimentally.
 The electroweak theory is a gauge theory, also called quantum flavour dynamics, that provides a unified description of both the electromagnetic and weak interactions. In the Glashow-Weinberg-Salam theory, also known as the standard model, electroweak interactions arise from the exchange of photons and of massive charged W± and neutral Z⁰ bosons of spin 1 between quarks and leptons. The extremely massive charged particle, symbol W⁺ or W‒, mediates certain types of weak interaction; the neutral Z-particle, or Z boson, symbol Z⁰, mediates the other types. Both are gauge bosons. The W- and Z-particles were first detected at CERN (1983) by studying collisions between protons and antiprotons with total energy 540 GeV in centre-of-mass co-ordinates. The rest masses were determined as about 80 GeV/c² and 91 GeV/c² for the W- and Z-particles, respectively, as had been predicted by the electroweak theory.
 The interaction strengths of the gauge bosons to quarks and leptons, and the masses of the W and Z bosons themselves, are predicted by the theory in terms of a single parameter, the Weinberg angle θW, which must be determined by experiment. The Glashow-Weinberg-Salam theory successfully describes all existing data from a wide variety of electroweak processes, such as neutrino-nucleon, neutrino-electron, and electron-nucleon scattering. A major success of the model was the direct observation in 1983-84 of the W± and Z⁰ bosons with the predicted masses of 80 and 91 GeV/c² in high-energy proton-antiproton interactions. The decay modes of the W± and Z⁰ bosons have been studied in very high energy p̄p and e⁺e‒ interactions and found to be in good agreement with the standard model. The six known types (or flavours) of quarks and the six known leptons are grouped into three separate generations of particles as follows:
1st generation: e‒  νe  u  d
2nd generation: μ‒  νμ  c  s
3rd generation: τ‒  ντ  t  b
The second and third generations are essentially copies of the first generation, which contains the electron and the ‘up’ and ‘down’ quarks making up the proton and neutron, but involve particles of higher mass. Communication between the different generations occurs only in the quark sector and only for interactions involving W± bosons. Studies of Z⁰ boson production in very high energy electron-positron interactions have shown that no further generations of quarks and leptons can exist in nature (an arbitrary number of generations is a priori possible within the standard model), provided only that any new neutrinos are approximately massless.
 The Glashow-Weinberg-Salam model also predicts the existence of a heavy spin-0 particle, not yet observed experimentally, known as the Higgs boson. The spontaneous symmetry-breaking mechanism used to generate non-zero masses for the W± and Z bosons in the electroweak theory postulates the existence of two new complex fields, φ(χμ) = φ1 + iφ2 and Ψ(χμ) = Ψ1 + iΨ2, which are functions of the space-time co-ordinates χμ = χ, y, z, t and which form a doublet (φ, Ψ). This doublet of complex fields transforms in the same way as leptons and quarks under electroweak gauge transformations. Such gauge transformations rotate φ1, φ2, Ψ1, Ψ2 into each other without changing the nature of the physical system.
 The vacuum does not share the symmetry of the fields (φ, Ψ), and a spontaneous breaking of the vacuum symmetry occurs via the Higgs mechanism. Consequently, the fields φ and Ψ have non-zero values in the vacuum. A particular orientation of φ1, φ2, Ψ1, Ψ2 may be chosen so that all the components vanish in the vacuum except one (φ1). This component responds to electroweak fields in a way that is analogous to the response of a plasma to electromagnetic fields. Plasmas oscillate in the presence of electromagnetic waves; however, electromagnetic waves can only propagate at frequencies above the plasma frequency ωp, given by the expression:
ωp² = ne²/mε
where ‘n’ is the charge number density, ‘e’ the electron charge, ‘m’ the electron mass, and ‘ε’ the permittivity of the plasma. In quantum field theory, this minimum frequency for electromagnetic waves may be thought of as a minimum energy for the existence of a quantum of the electromagnetic field (a photon) within the plasma. This minimum energy amounts to a mass for the photon, which becomes the field quantum of a finite-range force. Thus, in a plasma, photons acquire a mass and the electromagnetic interaction has a finite range.
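 For a feel of the numbers, the expression can be evaluated for an assumed laboratory-plasma density (a sketch; the density figure and the use of the free-space permittivity are illustrative assumptions):

    import math

    n = 1e18                 # electrons per m^3 (assumed)
    e = 1.602e-19            # C
    m = 9.109e-31            # kg
    eps = 8.854e-12          # F/m

    print(math.sqrt(n * e**2 / (m * eps)))   # omega_p ~ 5.6e10 rad/s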
 The vacuum field φ1 responds to weak fields by giving a mass and a finite range to the W± and Z bosons; the electromagnetic field, however, is unaffected by the presence of φ1, so the photon remains massless. The mass acquired by the weak interaction bosons is proportional to the vacuum value of φ1 and to the weak charge strength. A quantum of the field φ1 is an electrically neutral particle called the Higgs boson. It interacts with all massive particles with a coupling that is proportional to their mass. The standard model does not predict the mass of the Higgs boson, but it is known that it cannot be too heavy (not much more than about 1000 proton masses), since this would lead to strong self-interactions; such self-interactions are not believed to be present, because the theory that omits them nevertheless successfully predicts the masses of the W± and Z bosons. The Higgs particle thus results from the spontaneous symmetry-breaking mechanism used to generate non-zero masses for the W± and Z⁰ bosons, and is presumably too massive to have been produced in existing particle accelerators.
 We now turn our attention to the third universal binding force: the strong interaction between elementary particles. This force is about one hundred times greater than the electromagnetic force between charged elementary particles. However, it is a short-range force: it is only important for particles separated by a distance of less than about 10⁻¹⁵ m, and it is the force that holds protons and neutrons together in atomic nuclei. For ‘soft’ interactions between hadrons, where small-scale transfers of momentum are involved, the strong interaction may be described in terms of the exchange of virtual hadrons, just as electromagnetic interactions between charged particles may be described in terms of the exchange of virtual photons. At a more fundamental level, the strong interaction arises as the result of the exchange of gluons between quarks and/or antiquarks, as described by quantum chromodynamics.
 In the hadron-exchange picture, any hadron can act as the exchanged particle provided certain quantum numbers are conserved. These quantum numbers are the total angular momentum, charge, baryon number, isospin (both I and I₃), strangeness, parity, charge conjugation parity, and G-parity. Strong interactions are investigated experimentally by observing how beams of high-energy hadrons are scattered when they collide with other hadrons. Two hadrons colliding at high energy will only remain near to each other for a very short time. However, during the collision they may come sufficiently close to each other for a strong interaction to occur by the exchange of a virtual particle. As a result of this interaction, the two colliding particles will be deflected (scattered) from their original paths. If the virtual hadron exchanged during the interaction carries some quantum numbers from one particle to the other, the particles found after the collision may differ from those before it. Sometimes the number of particles is increased in a collision.
 In hadron-hadron interactions, the number of hadrons produced increases approximately logarithmically with the total centre-of-mass energy, reaching about 50 particles for proton-antiproton collisions at 900 GeV, for example. In some of these collisions, two oppositely directed collimated ‘jets’ of hadrons are produced, which are interpreted as due to an underlying interaction involving the exchange of an energetic gluon between, for example, a quark from the proton and an antiquark from the antiproton. The scattered quark and antiquark cannot exist as free particles but instead ‘fragment’ into a large number of hadrons (mostly pions and kaons) travelling approximately along the original quark or antiquark directions. This results in collimated jets of hadrons that can be detected experimentally. Studies of this and other similar processes are in good agreement with the predictions of quantum chromodynamics.
 An elementary particle is a particle that, as far as is known, is not composed of other simpler particles. Elementary particles represent the most basic constituents of matter and are also the carriers of the fundamental forces between particles, namely the electromagnetic, weak, strong, and gravitational forces. The known elementary particles can be grouped into three classes: leptons, quarks, and gauge bosons. Hadrons, strongly interacting particles such as the proton and neutron, which are bound states of quarks and antiquarks, are also sometimes called elementary particles.
 Leptons undergo electromagnetic and weak interactions, but not strong interactions. Six leptons are known: the negatively charged electron, muon, and tauon, plus three associated neutrinos: νe, νμ and ντ. The electron is a stable particle, but the muon and tau leptons decay through the weak interaction with lifetimes of about 10⁻⁶ and 10⁻¹³ seconds respectively. Neutrinos are stable neutral leptons, which interact only through the weak interaction.
 Corresponding to the leptons are six quarks, namely the up (u), charm (c), and top (t) quarks with electric charge equal to +⅔ that of the proton, and the down (d), strange (s), and bottom (b) quarks of charge ‒⅓ the proton charge. Quarks have not been observed experimentally as free particles but reveal their existence only indirectly, in high-energy scattering experiments and through patterns observed in the properties of hadrons. They are believed to be permanently confined within hadrons, either in baryons, half-integer-spin hadrons containing three quarks, or in mesons, integer-spin hadrons containing a quark and an antiquark. The proton, for example, is a baryon containing two ‘up’ quarks and a ‘down’ (d) quark, while the π⁺ is a positively charged meson containing an up quark and an anti-down (d̄) antiquark. The only hadron that is stable as a free particle is the proton. The neutron is unstable when free. Within a nucleus, protons and neutrons are generally both stable, but either particle may transform into the other by beta decay or capture.
 Interactions between quarks and leptons are mediated by the exchange of particles known as ‘gauge bosons’, specifically the photon for electromagnetic interactions, W± and Z⁰ bosons for the weak interaction, and eight massless gluons in the case of the strong interaction.
 A class of eigenvalue problems in physics takes the form
ΩΨ = λΨ,
where ‘Ω’ is some mathematical operation (multiplication by a number, differentiation, etc.) on a function ‘Ψ’, which is called the ‘eigenfunction’; ‘λ’ is called the ‘eigenvalue’, which in a physical system will be identified with an observable quantity. ‘Ψ’ is analogous to the amplitude of a wave that appears in the equations of wave mechanics, particularly the Schrödinger wave equation. The most generally accepted interpretation is that | Ψ |²dV represents the probability that a particle is located within the volume element dV; a particle of mass ‘m’ moving with a velocity ‘v’ will, under suitable experimental conditions, exhibit the characteristics of a wave of wavelength λ given by the equation λ = h/mv, where ‘h’ is the Planck constant (6.626 076 × 10⁻³⁴ J s). This equation is the basis of wave mechanics.
 Eigenvalue problems are ubiquitous in classical physics and occur whenever the mathematical description of a physical system yields a series of coupled differential equations. For example, the collective motion of a large number of interacting oscillators may be described by a set of coupled differential equations. Each differential equation describes the motion of one of the oscillators in terms of the positions of all the others. A ‘harmonic’ solution may be sought, in which each displacement is assumed to have a ‘simple harmonic motion’ in time. The differential equations then reduce to 3N linear equations with 3N unknowns, where ‘N’ is the number of individual oscillators, each with three degrees of freedom. The whole problem is now easily recast as a ‘matrix equation’ of the form:
Mχ = ω²χ
where ‘M’ is an N × N matrix called the ‘dynamical matrix’, χ is an N × 1 column matrix, and ω² is the square of an angular frequency of the harmonic solution. The problem is now an eigenvalue problem with eigenfunctions ‘χ’, which are the normal modes of the system, and corresponding eigenvalues ω². As ‘χ’ can be expressed as a column vector, χ is a vector in some N-dimensional vector space. For this reason, χ is often called an eigenvector.
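 The simplest concrete case is two equal masses joined by three equal springs between fixed walls (a sketch, with k/m = 1 assumed):

    import numpy as np

    # Dynamical matrix for wall-mass-mass-wall with unit springs and masses.
    M = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])

    w2, modes = np.linalg.eigh(M)    # solve M x = w^2 x
    print(np.sqrt(w2))    # ~[1.0, 1.732]: in-phase and out-of-phase modes
    print(modes)          # columns are the normal-mode eigenvectors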
 When the collection of oscillators is a complicated three-dimensional molecule, the casting of the problem into normal modes is an effective simplification of the system. The symmetry principles of ‘group theory’ can then be applied, classifying normal modes according to their ‘ω’ eigenvalues (frequencies). This kind of analysis requires an appreciation of the symmetry properties of the molecule: the set of operations (rotations, inversions, etc.) that leave the molecule invariant makes up the ‘point group’ of that molecule. Normal modes sharing the same ‘ω’ eigenvalues are said to correspond to the ‘irreducible representations’ of the molecule’s point group. It is among these irreducible representations that one will find the infrared absorption spectrum for the vibrational normal modes of the molecule.
 Eigenvalue problems play a particularly important role in quantum mechanics. In quantum mechanics, physical observables (location, momentum, energy, etc.) are represented by operators (differentiation with respect to a variable, multiplication by a variable), which act on wave functions. Wave functions differ from classical waves in that they carry no energy. For a classical wave, the square modulus of its amplitude measures its energy. For a wave function, the square modulus of its amplitude (at a location χ) represents not energy but probability, i.e., the probability that a particle, a localized packet of energy, will be observed if a detector is placed at that location. The wave function therefore describes the distribution of possible locations of the particle and is perceptible only after many location detection events have occurred. A measurement of position on a quantum particle may be written symbolically as:
XΨ(χ) = χΨ(χ)
where Ψ(χ) is said to be an eigenvector of the location operator and ‘χ’ is the eigenvalue, which represents the location. Each Ψ(χ) represents the amplitude at the location χ, and | Ψ(χ) |² is the probability that the particle will be located in an infinitesimal volume at that location. The wave function describing the distribution of all possible locations of the particle is the linear superposition of all Ψ(χ) for 0 ≤ χ ≤ ∞. The superposition principle, in its general form, states that each stress is accompanied by the same strains whether it acts alone or in conjunction with others; this is true so long as the total stress does not exceed the limit of proportionality. Also, in vibrations and wave motion the principle asserts that one set of vibrations or waves is unaffected by the presence of another set. For example, two sets of ripples on water will pass through one another without mutual interaction so that, at a particular instant, the resultant disturbance at any point traversed by both sets of waves is the sum of the two component disturbances.
 The eigenvalue problem in quantum mechanics therefore represents the act of measurement. Eigenvectors of an observable represent the possible states (positions, in the case of χ) that the quantum system can have. Conjugate attributes of a quantum system, such as position and momentum, are related by the Heisenberg uncertainty principle, which states that the product of the uncertainty in the measured value of a component of momentum (pχ) and the uncertainty in the corresponding co-ordinate of position (χ) is of the same order of magnitude as the Planck constant. Thus, while an accurate measurement of position is possible, as a result of the uncertainty principle it produces a large momentum spread. Subsequent measurements of the position acquire a spread themselves, which makes continuous monitoring of the position impossible.
 The eigenvalues are the values that observables take on within these quantum states. As in classical mechanics, eigenvalue problems in quantum mechanics may take differential or matrix forms. Both forms have been shown to be equivalent. The differential form of quantum mechanics is called ‘wave mechanics’ (Schrödinger), in which the operators are differential operators or multiplications by variables. Eigenfunctions in wave mechanics are wave functions corresponding to stationary wave states that satisfy some set of boundary conditions. The matrix form of quantum mechanics is often called matrix mechanics (Born and Heisenberg), in which the operators are represented by matrices acting on eigenvectors.
 The relationship between matrix and wave mechanics is very similar to the relationship between matrix and differential forms of eigenvalue problems in classical mechanics. The wave functions representing stationary states are really normal modes of the quantum wave. These normal modes may be thought of as vectors that span a vector space, which have a matrix representation.
 Once again, the Heisenberg uncertainty relation, or indeterminacy principle, of quantum mechanics associates the physical properties of particles into pairs such that both together cannot be measured to within more than a certain degree of accuracy. If ‘A’ and ‘V’ form such a pair, called a conjugate pair, then ΔAΔV > k, where ‘k’ is a constant and ΔA and ΔV are the spreads in the experimental values of the attributes ‘A’ and ‘V’. The best-known instance of the equation relates the position and momentum of an electron: ΔpΔχ > h, where ‘h’ is the Planck constant. This is the Heisenberg uncertainty principle. The usual value given for Planck’s constant is 6.6 × 10⁻²⁷ erg s. Since Planck’s constant is not zero, mathematical analysis reveals the following: the ‘spread’, or uncertainty, in position times the ‘spread’, or uncertainty, of momentum is greater than, or possibly equal to, the value of the constant or, more accurately, Planck’s constant divided by 2π. If we choose to know momentum exactly, then we know nothing about position, and vice versa.
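 To see the scale of the effect, take the position uncertainty to be an atomic size (a sketch; the 5 × 10⁻¹¹ m figure is illustrative, and the crude form Δp Δχ > h quoted above is used):

    h = 6.626e-34                    # J s
    dx = 5e-11                       # an atomic size, m

    dp = h / dx
    print(dp)                        # ~1.3e-23 kg m/s
    print(dp / 9.109e-31)            # electron velocity spread ~1.5e7 m/s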
 The presence of Planck’s constant means that we confront in quantum physics a situation in which the mathematical theory does not allow precise prediction of, or exist in exact correspondence with, physical reality. If nature did not insist on making changes or transitions in precise chunks of Planck’s quantum of action, or in multiples of these chunks, there would be no crisis. However, whether we view it as a cancerous growth in the body of an otherwise perfect knowledge of the physical world or as grounds for believing, in principle at least, in human freedom, one thing appears certain: it is an indelible feature of our understanding of nature.
 In order to explain further how fundamental the quantum of action is to our present understanding of nature, let us attempt to do what quantum physics says we cannot do and visualize its role in the simplest of all atoms, the hydrogen atom. Imagine standing at the centre of the SkyDome, at roughly where the pitcher’s mound is. Place a grain of salt on the mound, and picture a speck of dust moving furiously around the outer limits of the stadium. This represents, roughly, the relative size of the nucleus and the distance between electron and nucleus inside the hydrogen atom when imagined in its particle aspect.
 In quantum physics, however, the hydrogen atom cannot be visualized with such macro-level analogies. The orbit of the electron is not a circle in which a planetlike object moves, and each orbit is described in terms of a probability distribution for finding the electron in an average position corresponding to each orbit, as opposed to an actual position. Without observation or measurement, the electron could be in some sense anywhere or everywhere within the probability distribution. Also, the space between probability distributions is not empty; it is infused with energetic vibrations capable of manifesting themselves as quanta.
 The energy levels manifest at certain distances because the transition between orbits occurs in terms of precise units of Planck’s constant. If we attempt to observe or measure where the particle-like aspect of the electron is, the existence of Planck’s constant will always prevent us from knowing precisely all the properties of that electron that we might presume to be there without measurement. Also, as in the two-slit experiment, our presence as observers and what we choose to measure or observe are inextricably linked to the results obtained. Since all complex molecules are built from simpler atoms, what is true of the hydrogen atom applies generally to all material substances.
 The grounds for objecting to quantum theory, the lack of a one-to-one correspondence between every element of the physical theory and the physical reality it describes, may seem justifiable and reasonable in strict scientific terms. After all, the completeness of all previous physical theories was measured against that criterion with enormous success. Since it was this success that gave physicists the reputation of being able to disclose physical reality with magnificent exactitude, perhaps a more complex quantum theory will emerge by continuing to insist on this requirement.
 All indications are, however, that no future theory can circumvent quantum indeterminacy, and the success of quantum theory in co-ordinating our experience with nature is eloquent testimony to this conclusion. As Bohr realized, the fact that we live in a quantum universe in which the quantum of action is a given or an unavoidable reality requires a very different criterion for determining the completeness of physical theory. The new measure for a complete physical theory is that it unambiguously confirms our ability to co-ordinate more experience with physical reality.
 If a theory does so and continues to do so, which is certainly the case with quantum physics, then the theory must be deemed complete. Quantum physics not only works exceedingly well, it is, in these terms, the most accurate physical theory that has ever existed. When we consider that this physics allows us to predict and measure quantities like the magnetic moment of electrons to the fifteenth decimal place, we realize that accuracy per se is not the real issue. The real issue, as Bohr rightly intuited, is that this complete physical theory effectively undermines the privileged relationship in classical physics between physical theory and physical reality. Another measure of success in physical theory is also met by quantum physics: elegance and simplicity. The quantum recipe for computing probabilities given by the wave function is straightforward and can be successfully employed by any undergraduate physics student: take the square of the wave amplitude and compute the probability of what can be measured or observed with a certain value. Yet there is a profound difference between the recipe for calculating quantum probabilities and the recipe for calculating probabilities in classical physics.
 In quantum physics, one calculates the probability of an event that can happen in alternative ways by adding the wave functions and then taking the square of the amplitude. In the two-slit experiment, for example, the electron is described by one wave function if it goes through one slit and by another wave function if it goes through the other slit. In order to compute the probability of where the electron is going to end up on the screen, we add the two wave functions, compute the absolute value of their sum, and square it. Although the recipe in classical probability theory seems similar, it is quite different. In classical physics, one would simply add the probabilities of the two alternative ways and let it go at that. That classical procedure does not work here, because we are not dealing with classical atoms. In quantum physics additional terms arise when the wave functions are added, and the probability is computed in a process known as the ‘superposition principle’. The superposition principle can be illustrated with an analogy from simple mathematics. Add two numbers and then take the square of their sum, as opposed to just adding the squares of the two numbers. Obviously, (2 + 3)² is not equal to 2² + 3²: the former is 25, and the latter is 13. In the language of quantum probability theory:
| Ψ1 + Ψ2 |² ≠ | Ψ1 |² + | Ψ2 |²
where Ψ1 and Ψ2 are the individual wave functions. On the left-hand side, the superposition principle results in extra terms that cannot be found on the right-hand side. The left-hand side of the above relation is the way a quantum physicist would compute probabilities, and the right-hand side is the classical analogue. In quantum theory, the right-hand side is realized when we know, for example, which slit the electron went through. Heisenberg was among the first to compute what would happen in an instance like this. The extra superposition terms contained in the left-hand side of the above relation would not be there, and the peculiar wave-like interference pattern would disappear. The observed pattern on the final screen would, therefore, be what one would expect if electrons were behaving like bullets, and the final probability would be the sum of the individual probabilities. Thus, when we know which slit the electron went through, this interaction with the system causes the interference pattern to disappear.
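 The difference between the two recipes is easily exhibited (a sketch with arbitrary complex amplitudes for the two paths):

    import cmath

    psi1 = 0.6
    psi2 = 0.6 * cmath.exp(2.0j)          # relative phase of 2 radians

    quantum = abs(psi1 + psi2)**2         # |psi1 + psi2|^2
    classical = abs(psi1)**2 + abs(psi2)**2
    print(quantum, classical)             # differ by the interference terms
    print(2 * (psi1 * psi2.conjugate()).real)   # the extra cross term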
 In order to give a full account of quantum recipes for computing probabilities, one has to examine what would happen in events that are compounded. Compound events are events that can be broken down into a series of steps, or events that consist of a number of things happening independently. The recipe here calls for multiplying the individual wave functions and then following the usual quantum recipe of taking the square of the amplitude.
 The quantum recipe is | Ψ1 • Ψ2 |², and in this case the result would be the same as if we multiplied the individual probabilities, as one would in classical theory. Thus the recipes for computing results in quantum theory and classical physics can be totally different: quantum superposition effects are completely non-classical, and there is no mathematical justification for why the quantum recipes work. What justifies the use of quantum probability theory is the same thing that justifies the use of quantum physics: it has allowed us, in countless experiments, to extend vastly our ability to co-ordinate experience with nature.
 The view of probability in the nineteenth century was greatly conditioned and reinforced by classical assumptions about the relationship between physical theory and physical reality. In that century, physicists developed sophisticated statistics to deal with large ensembles of particles before the actual character of these particles was understood. Classical statistics, developed primarily by James C. Maxwell and Ludwig Boltzmann, was used to account for the behaviour of molecules in a gas and to predict the average speed of a gas molecule in terms of the temperature of the gas.
 The presumption was that the statistical averages were workable approximations that subsequent physical theories, or better experimental techniques, would disclose with precision and certainty. Since nothing was known about quantum systems, and since quantum indeterminacy is small when dealing with macro-level effects, this presumption was quite reasonable. We know, however, that quantum mechanical effects are present in the behaviour of gases and that the choice to ignore them is merely a matter of convenience in getting workable or practical results. It is, therefore, no longer possible to assume that the statistical averages are merely higher-level approximations for a more exact description.
 Perhaps the best-known defence of the classical conception of the relationship between physical theory and physical reality is the celebrated animal introduced by the Austrian physicist Erwin Schrödinger (1887-1961) in 1935, in a ‘thought experiment’ showing the strange nature of the world of quantum mechanics. The cat is thought of as locked in a box with a capsule of cyanide, which will break if a Geiger counter is triggered. This will happen if an atom in a radioactive substance in the box decays, and there is a 50% chance of such an event within an hour. If the atom decays, the cat is killed; otherwise, the cat is alive. The problem is that the system is in an indeterminate state. The wave function of the entire system is a ‘superposition’ of states, fully described by the probabilities of events occurring when it is eventually measured, and therefore ‘contains equal parts of the living and dead cat’. When we look and see, we will find either a breathing cat or a dead cat, but if it is only as we look that the wave packet collapses, quantum mechanics forces us to say that before we looked it was not true that the cat was dead and not true that it was alive. The thought experiment makes vivid the difficulty of conceiving of quantum indeterminacies when these are translated to the familiar world of everyday objects.
 The ‘electron’ is a stable elementary particle having a negative charge, ‘e’, equal to:
1.602 189 25 × 10⁻¹⁹ C
and a rest mass, m₀, equal to:
9.109 389 7 × 10⁻³¹ kg
equivalent to: 0.511 0034 MeV/c²
It has a spin of ½ and obeys Fermi-Dirac Statistics. As it does not have strong interactions, it is classified as a ‘lepton’.
 The discovery of the electron was reported in 1897 by Sir J. J. Thomson, following his work on the rays from the cold cathode of a gas-discharge tube. It was soon established that particles with the same charge and mass were obtained from numerous substances by the ‘photoelectric effect’, ‘thermionic emission’, and ‘beta decay’. Thus, the electron was found to be part of all atoms, molecules, and crystals.
 Free electrons are studied in a vacuum or a gas at low pressure, where beams are emitted from hot filaments or cold cathodes and are subjected to ‘focussing’, as with the electron beam in, for example, a cathode-ray tube. The principal methods are: (i) electrostatic focussing, in which the beam is made to converge by the action of electrostatic fields between two or more electrodes at different potentials. The electrodes are commonly cylinders coaxial with the electron tube, and the whole assembly forms an electrostatic electron lens. The focussing effect is usually controlled by varying the potential of one of the electrodes, called the focussing electrode. (ii) Electromagnetic focussing, in which the beam is made to converge by the action of a magnetic field that is produced by the passage of direct current through a focussing coil. The latter is commonly a coil of short axial length mounted so as to surround the electron tube and to be coaxial with it.
 The force FE on an electron in an electric field of strength E is given by FE = Ee and, because of the electron’s negative charge, is directed opposite to the field. On moving through a potential difference V, the electron acquires a kinetic energy eV; hence obtaining beams of electrons of accurately known kinetic energy is possible. In a magnetic field of magnetic flux density ‘B’, an electron with speed ‘v’ is subject to a force, FB = Bev sin θ, where θ is the angle between ‘B’ and ‘v’. This force acts at right angles to the plane containing ‘B’ and ‘v’.
 The mass of any particle increases with speed according to the theory of relativity. If an electron is accelerated from rest through 5 kV, its mass is 1% greater than it is at rest. Thus account must be taken of relativity for calculations on electrons with quite moderate energies.
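 The 1% figure is easily checked (a sketch):

    rest = 511e3                   # electron rest energy, eV
    gamma = 1 + 5e3 / rest         # total energy over rest energy at 5 kV
    print((gamma - 1) * 100)       # ~0.98 percent mass increase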
 According to ‘wave mechanics’ a particle with momentum ‘mv’ exhibits diffraction and interference phenomena, similar to a wave with wavelength λ = h/mv, where ‘h’ is the Planck constant. For electrons accelerated through a few hundred volts, this gives wavelengths somewhat less than typical interatomic spacings in crystals. Hence, a crystal can act as a diffraction grating for electron beams.
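 The wavelength for a given accelerating voltage follows in a few lines (a non-relativistic sketch; the 300 V figure is illustrative):

    import math

    # lambda = h/(m v), with v from eV = (1/2) m v^2.
    h, m, e = 6.626e-34, 9.109e-31, 1.602e-19

    def wavelength(volts):
        v = math.sqrt(2 * e * volts / m)
        return h / (m * v)

    print(wavelength(300))    # ~7e-11 m, below typical interatomic spacings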
 Owing to the fact that electrons are associated with a wavelength λ given by λ = h/mv, where ‘h’ is the Planck constant and mv the momentum of the electron, a beam of electrons suffers diffraction in its passage through crystalline material, similar to that experienced by a beam of X-rays. The diffraction pattern depends on the spacing of the crystal planes, and the phenomenon can be employed to investigate the structure of surface and other films. A set of waves that represents the behaviour, under appropriate conditions, of a particle (e.g., its diffraction by a crystal lattice) is given by the ‘de Broglie equation’. Such waves are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point.
 The first experiment to demonstrate ‘electron diffraction’, and hence the wavelike nature of particles, projected a narrow pencil of electrons from a hot filament cathode in vacuo onto a nickel crystal. The experiment showed the existence of a definite diffracted beam at one particular angle, which depended on the velocity of the electrons. Assuming this to be the Bragg angle (the structure of a crystal can be determined from a set of interference patterns found at various angles from the different crystal faces), the wavelength of the electrons was calculated and found to be in agreement with the ‘de Broglie equation’.
 At kinetic energies less than a few electron-volts, electrons undergo elastic collisions with atoms and molecules: because of the large ratio of the masses and the conservation of momentum, only an extremely small transfer of kinetic energy occurs. Thus, the electrons are deflected but not slowed appreciably. At higher energies collisions are inelastic. Molecules may be dissociated, and atoms and molecules may be excited or ionized. The ionization energy is the least energy that causes an ionization:
A ➝ A⁺ + e‒
where the ion and the electron are far enough apart for their electrostatic interaction to be negligible and no extra kinetic energy remains. The electron removed is that in the outermost orbit, i.e., the least strongly bound electron. Removal of electrons from inner orbits, for which the binding energy is greater, is also possible. Excited particles or recombining ions emit electromagnetic radiation, mostly in the visible or ultraviolet.
 For electron energies of the order of several keV upwards, X-rays are generated. Electrons of high kinetic energy travel considerable distances through matter, leaving a trail of positive ions and free electrons. The energy is mostly lost in small increments (about 30 eV), with only an occasional major interaction causing X-ray emission. The range increases at higher energies. The positron is the antiparticle of the electron, i.e., an elementary particle with the electron’s mass and a positive charge equal in magnitude to that of the electron. According to the relativistic wave mechanics of Dirac, space contains a continuum of electrons in states of negative energy. These states are normally unobservable, but if sufficient energy is given, an electron may be raised into a state of positive energy and become observable. The vacant negative-energy state behaves as a positive particle of positive energy, which is observed as a positron.
 The simultaneous formation of a positron and an electron from a photon is called ‘pair production’, and occurs when a gamma-ray photon with an energy of at least 1.02 MeV passes close to an atomic nucleus. In the reverse process, annihilation, the interaction between a particle and its antiparticle causes both to disappear, and photons or other elementary particles or antiparticles are created, in accordance with energy and momentum conservation.
 At low energies, an electron and a positron annihilate to produce electromagnetic radiation. Usually the particles have little kinetic energy or momentum in the laboratory system before interaction; hence the total energy of the radiation is nearly 2m₀c², where m₀ is the rest mass of an electron. In nearly all cases two photons are generated, each of 0.511 MeV, in almost exactly opposite directions to conserve momentum. Occasionally, three photons are emitted, all in the same plane. Electron-positron annihilation at high energies has been extensively studied in particle accelerators. Generally, the annihilation results in the production of a quark and an antiquark or of a charged lepton plus an antilepton (e.g., e⁺e‒ ➝ μ⁺μ‒). The quarks and antiquarks do not appear as free particles but convert into several hadrons, which can be detected experimentally. As the energy available in the electron-positron interaction increases, quarks and leptons of progressively larger rest mass can be produced. In addition, striking resonances are present, which appear as large increases in the rate at which annihilations occur at particular energies. The J/ψ particle and similar resonances containing a charmed quark and antiquark are produced at an energy of about 3 GeV, for example, giving rise to abundant production of charmed hadrons. Bottom (b) quark production occurs at energies greater than about 10 GeV. A resonance at an energy of about 90 GeV, due to the production of the Z⁰ gauge boson involved in the weak interaction, is currently under intensive study at the LEP and SLC e⁺e‒ colliders. Accelerators are machines for increasing the kinetic energy of charged particles or ions, such as protons or electrons, by accelerating them in an electric field; a magnetic field is used to maintain the particles in the desired direction. The particles can travel in straight, spiral, or circular paths. At present, the highest energies are obtained in the proton synchrotron.
 The Super Proton Synchrotron at CERN (Geneva) accelerates protons to 450 GeV. It can also cause proton-antiproton collisions with total kinetic energy, in centre-of-mass co-ordinates of 620 GeV. In the USA the Fermi National Acceleration Laboratory proton synchrotron gives protons and antiprotons of 800 GeV, permitting collisions with total kinetic energy of 1600 GeV. The Large Electron Positron (LEP) system at CERN accelerates particles to 60 GeV.
 All the aforementioned devices are designed to produce collisions between particles travelling in opposite directions. This gives effectively very much higher energies available for interaction than are possible with stationary targets. High-energy reactions occur when the particles collide, either with each other or with a stationary target. The particles created in these reactions are detected by sensitive equipment close to the collision site. New particles, including the tauon and the W and Z particles, requiring enormous energies for their creation, have been detected and their properties determined.
 A ‘nucleon’ and an ‘anti-nucleon’ annihilating at low energy produce about half a dozen pions, which may be neutral or charged. By definition, mesons are both hadrons and bosons, just as the pion and kaon are mesons. Mesons have a substructure composed of a quark and an antiquark bound together by the exchange of particles known as gluons.
 The conjugate particle, or antiparticle, corresponds to another particle of identical mass and spin but has quantum numbers, such as charge (Q), baryon number (B), strangeness (S), charm, and isospin (I₃), of equal magnitude but opposite sign. Examples of a particle and its antiparticle include the electron and positron, the proton and antiproton, the positive and negative pions, and the ‘up’ quark and ‘up’ antiquark. The antiparticle corresponding to a particle with the symbol ‘a’ is usually denoted ‘ā’. When a particle and its antiparticle are identical, as with the photon and neutral pion, it is called a ‘self-conjugate particle’.
 The critical potential, or excitation energy, required to change an atom or molecule from one quantum state to another of higher energy is equal to the difference in energy of the states, and is usually the difference in energy between the ground state of the atom and a specified excited state: the state of a system, such as an atom or molecule, when it has a higher energy than its ground state.
 The ground state is the state of a system with the lowest energy. An isolated body will remain in it indefinitely. It is possible for a system to have two or more ground states, of equal energy but with different sets of quantum numbers. In the case of atomic hydrogen there are two such states, for which the quantum numbers n, l, and m are 1, 0, and 0 respectively, while the spin may be +½ or ‒½ with respect to a defined direction. An allowed wave function of an electron in an atom is obtained by solution of the ‘Schrödinger wave equation’. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is ‒e²/r, where ‘e’ is the electron charge and ‘r’ its distance from the nucleus. A precise orbit cannot be considered, as in Bohr’s theory of the atom; instead the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that | Ψ |²dτ is the probability of locating the electron in the element of volume ‘dτ’.
 Solution of Schrödinger's equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position. They also have an associated value of the energy E. These allowed wave functions, or orbitals, are characterized by three quantum numbers similar to those that characterized the allowed orbits in the earlier quantum theory of the atom. n, the principal quantum number, can have values of 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called shells and designated the K, L, M shells, etc. l, the azimuthal quantum number, can for a given value of n have values of 0, 1, 2, . . ., (n − 1). An electron in the L shell of an atom, with n = 2, can thus occupy two sub-shells of different energy, corresponding to l = 0 and l = 1. Orbitals with l = 0, 1, 2 and 3 are called s, p, d, and f orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron. The orbital angular momentum of an electron is given by:
√[l(l + 1)] (h/2π).
m, the magnetic quantum number, can for a given value of l have the values −l, (−l + 1), . . ., 0, . . ., (l − 1), l; a p orbital (l = 1), for example, has orbitals with m = −1, 0, and +1. These orbitals, with the same values of n and l but different m values, have the same energy. The significance of this quantum number is that it indicates the number of different levels that would be produced if the atom were subjected to an external magnetic field.
 According to wave theory the electron may be at any distance from the nucleus, but in fact there is only a reasonable chance of finding it within a distance of about 5 × 10⁻¹¹ metre. The maximum probability occurs when r = a₀, where a₀ is the radius of the first Bohr orbit. It is customary to represent an orbital by a surface enclosing a volume within which there is an arbitrarily decided probability (say 95%) of finding the electron.
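 As a rough illustration of this point, here is a small Python sketch, assuming the standard 1s radial probability density P(r) ∝ r²e^(−2r/a₀); it locates the most probable radius numerically and should recover the Bohr radius quoted above:

```python
import math

A0 = 5.29e-11  # Bohr radius in metres

def radial_probability(r):
    """Radial probability density for the hydrogen 1s state:
    P(r) proportional to r**2 * exp(-2r/a0) (normalization omitted)."""
    return r**2 * math.exp(-2.0 * r / A0)

# Scan outward in 0.1 pm steps and pick the radius of maximum probability.
radii = [i * 1.0e-13 for i in range(1, 2001)]
best = max(radii, key=radial_probability)
print(f"most probable radius ≈ {best:.3e} m (Bohr radius a0 = {A0:.3e} m)")
```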
 Finally, the electron in an atom can have a fourth quantum number, m_s, characterizing its spin direction. This can be +½ or −½, and according to the Pauli exclusion principle each orbital can hold only two electrons, of opposite spin. The four quantum numbers lead to an explanation of the periodic table of the elements.
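 A short Python sketch of how the four quantum numbers generate the familiar shell capacities; the counting rules used here (l from 0 to n − 1, m from −l to +l, two spin states per orbital) are exactly those stated above:

```python
# Enumerate the states allowed by the quantum numbers (n, l, m, m_s),
# illustrating the shell capacities 2, 8, 18, ... (i.e., 2n**2).
for n in range(1, 4):
    states = [(l, m, ms)
              for l in range(n)            # l = 0, 1, ..., n - 1
              for m in range(-l, l + 1)    # m = -l, ..., 0, ..., +l
              for ms in (0.5, -0.5)]       # two spin directions (Pauli)
    print(f"n = {n}: {len(states)} states (2n^2 = {2 * n * n})")
```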
 Earlier mentions of 'moments' referred to such quantities as the moment of inertia and the moment of momentum. The moment of a force about an axis is the product of the perpendicular distance of the axis from the line of action of the force and the component of the force in the plane perpendicular to the axis. The moment of a system of coplanar forces about an axis perpendicular to the plane containing them is the algebraic sum of the moments of the separate forces about that axis, anticlockwise moments conventionally being taken as positive and clockwise ones as negative. The moment of momentum about an axis, symbol L, is the product of the moment of inertia and the angular velocity (Iω). Angular momentum is a pseudo-vector quantity and is conserved in an isolated system; about a fixed axis it is a scalar and is given a positive or negative sign, as with the moment of a force. When dealing with systems in which forces and motions do not all lie in one plane, the concept of the moment about a point is needed. The moment of a vector P (e.g., a force or a momentum) about a point A is a pseudo-vector M equal to the vector product of r and P, where r is any line joining A to any point B on the line of action of P. The vector product M = r × P is independent of the position of B, and the relation between the scalar moment about an axis and the vector moment about a point on the axis is that the scalar is the component of the vector in the direction of the axis.
 The linear momentum of a particle, p, is the product of the mass and the velocity of the particle. It is a vector quantity directed through the particle in the direction of motion. The linear momentum of a body or of a system of particles is the vector sum of the linear momenta of the individual particles. If a body of mass M is translated with a velocity V, its momentum is MV, which is the momentum of a particle of mass M at the centre of gravity of the body. (1) In any system of mutually interacting or impinging particles, the linear momentum in any fixed direction remains unaltered unless there is an external force acting in that direction. (2) Similarly, the angular momentum is constant in the case of a system rotating about a fixed axis provided that no external torque is applied.
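 As a minimal worked example of conservation law (1), the following Python sketch checks that total linear momentum is unchanged in a one-dimensional perfectly inelastic collision; the masses and velocities are illustrative values only:

```python
# Conservation of linear momentum in a 1-D perfectly inelastic collision.
# The masses (kg) and velocities (m/s) are illustrative values only.
m1, v1 = 2.0, 3.0    # first particle
m2, v2 = 1.0, -1.5   # second particle, moving the opposite way

p_before = m1 * v1 + m2 * v2     # total momentum before impact
v_common = p_before / (m1 + m2)  # the coupled bodies' common velocity
p_after = (m1 + m2) * v_common   # total momentum after impact

print(f"before: {p_before:.3f} kg m/s, after: {p_after:.3f} kg m/s")
```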
 Subatomic particles fall into two major groups: the elementary particles and the hadrons. An elementary particle is not composed of any smaller particles and therefore represents the most fundamental form of matter. A hadron is composed of smaller particles, the quarks. The major constituents of the atom illustrate both groups: the electron is an elementary particle, while the proton and the neutron are hadrons. The neutron is a particle with zero charge and a rest mass equal to:
1.674 9542 × 10⁻²⁷ kg,
i.e., 939.5729 MeV/c².
It is a constituent of every atomic nucleus except that of ordinary hydrogen. Free neutrons decay by beta decay with a mean life of 914 s. The neutron has spin ½, isospin ½, and positive parity. It is a fermion and is classified as a hadron because it has strong interactions.
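 The two figures quoted above for the neutron's rest mass can be checked against each other with E = mc²; a minimal Python sketch, with the physical constants taken at their standard values:

```python
# Check the neutron's rest energy from its rest mass via E = m * c**2.
M_N = 1.6749542e-27   # neutron mass in kg, as quoted above
C = 2.99792458e8      # speed of light in m/s
EV = 1.602176634e-19  # joules per electronvolt

energy_mev = M_N * C**2 / EV / 1.0e6
print(f"neutron rest energy ≈ {energy_mev:.4f} MeV")  # ≈ 939.57 MeV
```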
 Neutrons can be ejected from nuclei by high-energy particles or photons; the energy required is usually about 8 MeV, although it is sometimes less. Nuclear fission is the most productive source. Neutrons are detected using all the normal detectors of ionizing radiation, by way of the secondary particles they produce in nuclear reactions. The discovery of the neutron (Chadwick, 1932) involved the detection of the tracks of protons ejected by neutrons in elastic collisions in hydrogenous materials.
 Unlike other nuclear particles, neutrons are not repelled by the electric charge of a nucleus, so they are very effective in causing nuclear reactions. When there is no threshold energy, the interaction cross sections become very large at low neutron energies, and the thermal neutrons produced in great numbers by nuclear reactors cause nuclear reactions on a large scale. The capture of neutrons by the (n, γ) process produces large quantities of radioactive materials, both useful nuclides such as ⁶⁰Co for cancer therapy and undesirable by-products. The threshold energy is the least energy required to cause a certain process, in particular a reaction in nuclear or particle physics; it is often important to distinguish between the energies required in the laboratory and in centre-of-mass coordinates. Fission is the splitting of a heavy nucleus of an atom into two or more fragments of comparable size, usually as the result of the impact of a neutron on the nucleus. It is normally accompanied by the emission of neutrons or gamma rays. Plutonium, uranium, and thorium are the principal fissionable elements.
 In a nuclear reaction, a reaction between an atomic nucleus and a bombarding particle or photon leads to the creation of a new nucleus and the possible ejection of one or more particles. Nuclear reactions are often represented by writing in brackets the symbols for the incoming and outgoing particles, with the initial and final nuclides shown outside the brackets. For example:
14N(α, p)17O.
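 In this notation the mass number A and the charge Z must balance on each side of the reaction. A small Python sketch checking the bookkeeping for the example above:

```python
# Bookkeeping for 14N(alpha, p)17O: mass number A and charge Z must balance.
# Each participant is written as (A, Z).
n14, alpha, proton, o17 = (14, 7), (4, 2), (1, 1), (17, 8)

lhs = (n14[0] + alpha[0], n14[1] + alpha[1])    # 14N + alpha
rhs = (o17[0] + proton[0], o17[1] + proton[1])  # 17O + p
print(f"(A, Z) before: {lhs}, after: {rhs}, balanced: {lhs == rhs}")
```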
Energy from nuclear fission arises, in gross effect, because the nuclei of atoms of moderate size are more tightly held together than the largest nuclei, so that if the nucleus of a heavy atom can be induced to split into two nuclei of moderate mass, there should be a considerable release of energy. By Einstein's law of the equivalence of mass and energy, this mass difference is equivalent to the energy released when the nucleons bind together: this energy is the binding energy. The graph of binding energy per nucleon, EB/A, increases rapidly up to a mass number of about 50-60 (iron, nickel, etc.) and then decreases slowly. There are therefore two ways in which energy can be released from a nucleus, both of which entail rearranging nuclei from the less tightly bound parts of the curve into nuclei in the more tightly bound part. Fission is the splitting of heavy atoms, such as uranium, into lighter atoms, accompanied by an enormous release of energy; fusion of light nuclei, such as deuterium and tritium, releases an even greater quantity of energy per unit mass.
 The electron affinity is the energy released when a free electron is captured by an atom or molecule to form a negative ion; equivalently, it is the least amount of work that must be done to detach the electron from the ion again, and it is usually expressed in electronvolts. The capture process is sometimes called 'electron capture', but that term is more usually applied to nuclear processes. Many atoms, molecules, and free radicals form stable negative ions in this way.
 The uranium isotope 235U will readily accept a neutron, but one-seventh of the resulting nuclei are stabilized by gamma emission while six-sevenths split into two parts. Most of the energy released, about 170 MeV, is in the form of the kinetic energy of these fission fragments. In addition, an average of 2.5 neutrons of average energy 2 MeV and some gamma radiation are produced. Further energy is released later by the radioactivity of the fission fragments. The total energy released is about 3 × 10⁻¹¹ joule per atom fissioned, i.e., 6.5 × 10¹³ joule per kg of 235U consumed.
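 These figures are mutually consistent: if roughly six-sevenths of the 235U atoms in a kilogram undergo fission at about 3 × 10⁻¹¹ J each, the quoted yield per kilogram follows. A rough Python check (Avogadro's number assumed, and the six-sevenths fraction is our reading of the passage above):

```python
# Rough check of the quoted fission yield for 235U.
AVOGADRO = 6.022e23
E_PER_FISSION = 3.0e-11        # joules per atom fissioned, as quoted
FISSION_FRACTION = 6.0 / 7.0   # six-sevenths of capturing nuclei split

atoms_per_kg = AVOGADRO * 1000.0 / 235.0
energy_per_kg = atoms_per_kg * FISSION_FRACTION * E_PER_FISSION
print(f"≈ {energy_per_kg:.2e} J per kg of 235U")  # about 6.6e13 J
```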
 To extract energy in a controlled manner from fissionable nuclei, arrangements must be made for a sufficient proportion of the neutrons released in the fissions to cause further fissions in their turn, so that the process is continuous; the minimum mass of fissile material that will sustain a chain reaction is called the critical mass. A reactor with a large proportion of 235U or plutonium 239Pu in the fuel uses the fast neutrons as they are liberated from the fission; such a reactor is called a 'fast reactor'. Natural uranium contains only 0.7% of 235U, and a chain reaction can be sustained only if the liberated neutrons are slowed before they have much chance of being captured by the more common 238U atoms, so that they can go on to cause further fissions. To slow the neutrons, a moderator is used, containing light atoms to which the neutrons give kinetic energy by collision. When the neutrons eventually acquire energies appropriate to gas molecules at the temperature of the moderator, they are said to be thermal neutrons and the reactor is a thermal reactor.
 In typical thermal reactors, the fuel elements are rods embedded as a regular array in the bulk of the moderator, so that a typical neutron from a fission process has a good chance of escaping from the narrow fuel rod and making many collisions with nuclei in the moderator before again entering a fuel element. Suitable moderators are pure graphite, heavy water (D2O), and ordinary water (H2O), which is sometimes also used as a coolant. Very pure materials are essential, as some unwanted nuclei capture neutrons readily. The reactor core is surrounded by a reflector made of suitable material to reduce the escape of neutrons from the surface. Each fuel element is encased, e.g., in magnesium alloy or stainless steel, to prevent the escape of radioactive fission products. The coolant, which may be gaseous or liquid, flows along the channels over the canned fuel elements. There is an emission of gamma rays inherent in the fission process, and many of the fission products are intensely radioactive. To protect personnel, the assembly is surrounded by a massive biological shield of concrete, with an inner iron thermal shield to protect the concrete from high temperatures caused by the absorption of radiation.
 To keep the power production steady, control rods are moved in or out of the assembly. These contain material that captures neutrons readily, e.g., cadmium or boron. The power production can be held steady by allowing the currents in suitably placed ionization chambers automatically to modify the settings of the rods. Further absorbent rods, the shut-down rods, are driven into the core to stop the reaction in an emergency, if the control mechanism fails. To attain high thermodynamic efficiency, so that a large proportion of the liberated energy can be used, the heat should be extracted from the reactor core at a high temperature.
 In fast reactors no moderator is used, the frequency of collisions between neutrons and fissile atoms being increased by enriching the natural uranium fuel with 239Pu or additional 235U atoms, which are fissioned by fast neutrons. The fast neutrons thus build up a self-sustaining chain reaction. In these reactors the core is usually surrounded by a blanket of natural uranium into which some of the neutrons are allowed to escape. Under suitable conditions some of these neutrons will be captured by 238U atoms, forming 239U atoms that are converted to 239Pu. As more plutonium can be produced than is required to enrich the fuel in the core, such reactors are called 'fast breeder reactors'.
 A neutrino is a neutral elementary particle with spin ½ that takes part only in weak interactions. The neutrino is a lepton and exists in three types corresponding to the three types of charged leptons: the electron neutrino (νe), the muon neutrino (νμ), and the tauon neutrino (ντ). The antiparticle of the neutrino is the antineutrino.
 Neutrinos were originally thought to have zero mass, but recent indirect experimental evidence suggests the contrary. In 1985 a Soviet team reported a measurement, for the first time, of a non-zero neutrino mass. The mass measured was extremely small, some 10,000 times smaller than the mass of the electron, but subsequent attempts to reproduce the measurement were unsuccessful. More recently (1998-99), the Super-Kamiokande experiment in Japan provided indirect evidence for massive neutrinos. The new evidence is based upon studies of neutrinos created when highly energetic cosmic rays bombard the earth's upper atmosphere. By classifying the interactions of these neutrinos according to the type of neutrino involved (an electron neutrino or a muon neutrino), and counting their relative numbers as a function of the distance they have travelled, an oscillatory behaviour can be demonstrated. Oscillation in this sense is the changing back and forth of a neutrino's type as it travels through space or matter. The Super-Kamiokande result indicates that muon neutrinos are changing into another type of neutrino, e.g., sterile neutrinos. The experiment does not, however, determine the masses directly, though the oscillations suggest very small differences in mass between the oscillating types.
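 For illustration, the standard two-flavour vacuum oscillation formula, P = sin²(2θ) sin²(1.27 Δm² L/E), can be evaluated in Python; the mixing angle and mass-squared difference below are assumed, illustrative values of roughly the atmospheric scale, not results taken from the text:

```python
import math

def oscillation_probability(l_km, e_gev, sin2_2theta, dm2_ev2):
    """Two-flavour vacuum oscillation probability:
    P = sin^2(2 theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * l_km / e_gev) ** 2

# Illustrative parameters: maximal mixing and dm^2 ≈ 2.5e-3 eV^2.
for l in (10.0, 100.0, 1000.0, 10000.0):
    p = oscillation_probability(l, 1.0, 1.0, 2.5e-3)
    print(f"L = {l:7.0f} km: P(nu_mu -> other) ≈ {p:.3f}")
```

The probability is negligible over tens of kilometres but large over thousands, which is why downward-going and upward-going atmospheric neutrinos show different rates.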
 The neutrino was first postulated (Pauli, 1930) to explain the continuous spectrum of beta rays. It is assumed that there is the same amount of energy available for each beta decay of a particular nuclide and that this energy is shared according to a statistical law between the electron and a light neutral particle, now classified as the antineutrino, ν̄e. It was later shown that the postulated particle would also conserve angular momentum and linear momentum in beta decays.
 In addition to beta decay, the electron neutrino is also associated with, for example, positron decay and electron capture:
 22Na → 22Ne + e⁺ + νe
55Fe + e⁻ → 55Mn + νe
The absorption of antineutrinos in matter by the process
1H + ν̄e → n + e⁺
was first demonstrated by Reines and Cowan. The muon neutrino is generated in such processes as:
π⁺ → μ⁺ + νμ
Although the interactions of neutrinos are extremely weak, the cross sections increase with energy, and reactions can be studied at the enormous energies available with modern accelerators. In some forms of grand unified theory, neutrinos are predicted to have a non-zero mass; until the oscillation results described above, however, no evidence had been found to support this prediction.
 The positron is the antiparticle of the electron, i.e., an elementary particle with the mass of the electron and a positive charge equal in magnitude to the electron's charge. According to the relativistic wave mechanics of Dirac, space contains a continuum of electrons in states of negative energy. These states are normally unobservable, but if sufficient energy is given, an electron may be raised into a state of positive energy and become observable. The vacant state of negative energy then behaves as a positive particle of positive energy, which is observed as a positron.
 String theory is a theory of elementary particles based on the idea that the fundamental entities are not point-like particles but finite lines (strings) or closed loops formed by strings. The original idea was that an elementary particle was the result of a standing wave in a string. A considerable amount of theoretical effort has been put into the development of string theories. In particular, combining the idea of strings with that of supersymmetry has led to the idea of superstrings. This theory may be a more useful route to a unified theory of fundamental interactions than quantum field theory, because it probably avoids the infinities that arise when gravitational interactions are introduced into field theories. Thus, superstring theory inevitably leads to particles of spin 2, identified as gravitons. String theory also shows why particles violate parity conservation in weak interactions.
 Superstring theories involve the idea of higher-dimensional spaces: 10 dimensions for fermions and 26 dimensions for bosons. It has been suggested that there are the normal four space-time dimensions, with the extra dimensions being tightly 'curled up'. Still, there is no direct experimental evidence for superstrings. They are thought to have a length of about 10⁻³⁵ m and energies of 10¹⁴ GeV, which is well above the energy of any accelerator. An extension of the theory postulates that the fundamental entities are not one-dimensional but two-dimensional, i.e., they are supermembranes.
 A symmetry operation on a system is an operation that does not change the system; the set of such invariances defines the system's symmetry. Symmetry is studied mathematically using group theory. Some symmetries are directly physical, for instance the reflections and rotations of molecules and the translations in crystal lattices. More abstract symmetries involve changing properties, as in the CPT theorem and the symmetries associated with gauge theory. Gauge theories are now thought to provide the basis for a description of all elementary particle interactions. The electromagnetic interactions are described by quantum electrodynamics, which is called an Abelian gauge theory.
 A gauge theory is a quantum field theory in which measurable quantities remain unchanged under a group transformation. In quantum field theory, particles are represented by fields whose normal modes of oscillation are quantized, and elementary particle interactions are described by relativistically invariant theories of quantized fields, i.e., by relativistic quantum field theories. Gauge transformations can take the form of a simple multiplication by a constant phase; such transformations are called 'global gauge transformations'. In local gauge transformations, the phase of the fields is altered by amounts that vary with space and time; i.e.,
Ψ → e^{iθ(x)} Ψ,
where θ(x) is a function of space and time. In Abelian gauge theories, consecutive field transformations commute, i.e.,
Ψ → e^{iθ(x)} e^{iφ(x)} Ψ = e^{iφ(x)} e^{iθ(x)} Ψ,
where φ(x) is another function of space and time. Quantum chromodynamics (the theory of the strong interaction) and the electroweak and grand unified theories are all non-Abelian gauge theories; in these theories consecutive field transformations do not commute. All non-Abelian gauge theories are based on work proposed by Yang and Mills in 1954, describing the interaction between two quantum fields of fermions. Einstein's theory of general relativity can also be formulated as a local gauge theory.
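 The commuting/non-commuting distinction can be made concrete numerically: ordinary phase factors (a U(1), Abelian transformation) commute, while matrix-valued SU(2) transformations generally do not. A loose numerical illustration in Python with NumPy, using the identity e^(iaσ) = cos(a)I + i sin(a)σ for a Pauli matrix σ (the parameter values are arbitrary):

```python
import numpy as np

theta, phi = 0.7, 1.3  # arbitrary transformation parameters

# Abelian case: two U(1) phase factors commute.
print("U(1) phases commute:",
      np.isclose(np.exp(1j * theta) * np.exp(1j * phi),
                 np.exp(1j * phi) * np.exp(1j * theta)))

# Non-Abelian case: SU(2) transformations built from Pauli matrices.
# For a Pauli matrix s (with s @ s = I), exp(i*a*s) = cos(a) I + i sin(a) s.
I2 = np.eye(2, dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

U1 = np.cos(theta) * I2 + 1j * np.sin(theta) * sigma_x
U2 = np.cos(phi) * I2 + 1j * np.sin(phi) * sigma_y
print("SU(2) transformations commute:", np.allclose(U1 @ U2, U2 @ U1))
```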
 Supersymmetry is a symmetry including both bosons and fermions: in theories based on supersymmetry every boson has a corresponding fermion partner and every fermion has a corresponding boson partner. The boson partners of existing fermions have names formed by prefixing the name of the fermion with an 's' (e.g., selectron, squark, slepton). The names of the fermion partners of existing bosons are obtained by changing the terminal -on of the boson to -ino (e.g., photino, gluino, and zino). Although supersymmetries have not been observed experimentally, they may prove important in the search for a unified field theory of the fundamental interactions.
 The quark is a fundamental constituent of hadrons, i.e., of particles that take part in strong interactions. Quarks are never seen as free particles, which is substantiated by the lack of experimental evidence for isolated quarks. The explanation given for this phenomenon by the gauge theory in which quarks are described, quantum chromodynamics, is that the quark interaction becomes weaker as quarks come closer together and falls to zero when the distance between them is zero. The converse is that the attractive forces between quarks become stronger as they move apart; since this process has no limit, quarks can never separate from each other. In some theories it is postulated that at very high temperatures, such as may have prevailed in the early universe, quarks can separate; the temperature at which this occurs is called the 'deconfinement temperature'. The existence of quarks has nevertheless been demonstrated in high-energy scattering experiments and by symmetries in the properties of observed hadrons. They are regarded as elementary fermions, with spin ½, baryon number ⅓, strangeness 0 or −1, and charm 0 or +1. They are classified in six flavours: up (u), charm (c), and top (t), each with charge ⅔ the proton charge, and down (d), strange (s), and bottom (b), each with charge −⅓ the proton charge. Each type has an antiquark with reversed signs of charge, baryon number, strangeness, and charm. The top quark has not yet been observed experimentally, but there are strong theoretical arguments for its existence, and its mass is known to be greater than about 90 GeV/c².
 The fractional charges of quarks are never observed in hadrons, since the quarks form combinations in which the sum of their charges is zero or integral. Hadrons are either baryons or mesons; essentially, baryons are composed of three quarks while mesons are composed of a quark-antiquark pair. These components are bound together within the hadron by the exchange of particles known as gluons. Gluons are neutral massless gauge bosons; quantum chromodynamics, the quantum field theory of the strong interaction, treats the gluon as the analogue of the photon, with a quantum number known as 'colour' replacing electric charge. Each quark type (or flavour) comes in three colours (red, blue and green, say), where colour is simply a convenient label and has no connection with ordinary colour. Unlike the photon of quantum electrodynamics, which is electrically neutral, the gluons of quantum chromodynamics carry colour and can therefore interact with themselves. Particles that carry colour are believed not to be able to exist as free particles. Instead, quarks and gluons are permanently confined inside hadrons (strongly interacting particles, such as the proton and the neutron).
 The gluon self-interaction leads to the property known as 'asymptotic freedom', in which the strength of the strong interaction decreases as the momentum transfer involved in an interaction increases. This allows perturbation theory to be used, and quantitative comparisons to be made with experiment, similar to, but less precise than, those possible in quantum electrodynamics. Quantum chromodynamics has been tested successfully in high-energy muon-nucleon scattering experiments and in proton-antiproton and electron-positron collisions at high energies. Strong evidence for the existence of colour comes from measurements of the interaction rates for e⁺e⁻ → hadrons and e⁺e⁻ → μ⁺μ⁻. The relative rate for these two processes is a factor of three larger than would be expected without colour; this factor directly measures the number of colours, i.e., three for each quark flavour.
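 This counting argument is easy to sketch numerically. A minimal Python illustration, to lowest order and ignoring QCD corrections (the choice of flavour sets is ours, for illustration):

```python
# R = sigma(e+e- -> hadrons) / sigma(e+e- -> mu+mu-)
# ≈ N_colours * sum of squared quark charges, over the flavours
# light enough to be produced at the collision energy.
N_COLOURS = 3
CHARGE = {"u": 2/3, "d": -1/3, "s": -1/3, "c": 2/3, "b": -1/3}

r_uds = N_COLOURS * sum(CHARGE[f] ** 2 for f in "uds")
r_all = N_COLOURS * sum(q ** 2 for q in CHARGE.values())
print(f"R below charm threshold (u, d, s): {r_uds:.2f}")  # 2.00
print(f"R with u, d, s, c, b:              {r_all:.2f}")  # 3.67
```

Without the factor of three for colour, both values would be three times smaller, in clear conflict with the measured rates.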
 The quarks and antiquarks with zero strangeness and zero charm are the u, d, ū, and d̄. They form the combinations:
protons (uud), antiprotons (ūūd̄)
neutrons (udd), antineutrons (ūd̄d̄)
pions: π⁺ (ud̄), π⁻ (ūd), π⁰ (uū or dd̄).
The charge and spin of these particles are the sums of the charges and spins of the component quarks and antiquarks.
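 This additivity is easy to verify. The following Python sketch sums the quark charges for the combinations listed above (the uppercase-letter convention for antiquarks is our own, purely for illustration):

```python
from fractions import Fraction

# Quark charges in units of the proton charge; an uppercase letter
# denotes the corresponding antiquark (our own convention).
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def hadron_charge(quarks):
    """Sum the signed charges of a string of quark symbols."""
    return sum(-CHARGE[q.lower()] if q.isupper() else CHARGE[q]
               for q in quarks)

for name, content in [("proton", "uud"), ("neutron", "udd"),
                      ("antiproton", "UUD"), ("pi+", "uD"), ("pi-", "Ud")]:
    print(f"{name:10s} ({content}): charge = {hadron_charge(content)}")
```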
 In the strange baryons, e.g., the Λ and Σ, and in the strange mesons, one or more of the quarks or antiquarks is strange. Similarly, the presence of one or more c quarks leads to the charmed baryons, and a c or c̄ to the charmed mesons. It has been found useful to introduce a further subdivision of quarks, each flavour coming in three colours (red, green, blue). Colour as used here serves simply as a convenient label and is unconnected with ordinary colour. A baryon comprises a red, a green, and a blue quark, and a meson comprises a red and an antired, a blue and an antiblue, or a green and an antigreen quark and antiquark. In analogy with combinations of the three primary colours of light, hadrons carry no net colour, i.e., they are 'colourless' or 'white'. Only colourless objects can exist as free particles. The characteristics of the six quark flavours, with charges in units of the proton charge, are summarized below:

Flavour    Symbol   Charge   Strangeness   Charm
up         u        +⅔       0             0
down       d        −⅓       0             0
strange    s        −⅓       −1            0
charm      c        +⅔       0             +1
bottom     b        −⅓       0             0
top        t        +⅔       0             0
 The central feature of quantum field theory is that the essential reality is a set of fields subject to the rules of special relativity and quantum mechanics; all else is derived as a consequence of the quantum dynamics of those fields. The quantization of fields is essentially an exercise in which we use complex mathematical models to analyse the field in terms of its associated quanta. Material reality as we know it in quantum field theory is constituted by the transformation and organization of fields and their associated quanta. Hence, this reality reveals a fundamental complementarity between particles, which are localized in space-time, and fields, which are not. In modern quantum field theory, all matter is composed of six strongly interacting quarks and six weakly interacting leptons. The six quarks are called up, down, charmed, strange, top, and bottom, and have different rest masses and fractional charges. The up and down quarks combine through the exchange of gluons to form protons and neutrons.
 The lepton belongs to the class of elementary particles that do not take part in strong interactions. Leptons have no substructure of quarks and are considered indivisible. They are all fermions and are categorized into six distinct types: the electron, muon, and tauon, which are all identically charged but differ in mass, and the three neutrinos, which are all neutral and thought to be massless or nearly so. In their interactions the leptons appear to observe boundaries that define three families, each composed of a charged lepton and its neutrino. The families are distinguished mathematically by three quantum numbers, Le, Lμ, and Lτ, called lepton numbers. In weak interactions the total lepton numbers Le, Lμ, and Lτ of the participating particles are conserved.
 In quantum field theory, potential vibrations at each point in the four fields are capable of manifesting themselves, in their complementary expression, as individual particles. The interactions of the fields result from the exchange of quanta that are carriers of the fields. These carriers, known as messenger quanta, are the 'coloured' gluons for the strong binding force, the photon for electromagnetism, the intermediate bosons for the weak force, and the graviton for gravitation. If we could re-create the energies present in the first trillionths of trillionths of a second in the life of the universe, these four fields would, according to quantum field theory, become one fundamental field.
 The movement toward a unified theory has evolved progressively from supersymmetry to supergravity to string theory. In string theory the one-dimensional trajectories of particles, illustrated in Feynman diagrams, are replaced by the two-dimensional orbits of a string. In addition to introducing an extra dimension, represented by the small diameter of the string, string theory also features another small but non-zero constant, which is analogous to Planck's quantum of action. Since the value of the constant is quite small, it can generally be ignored except at extremely small dimensions. Still, since the constant, like Planck's constant, is not zero, this results in departures from ordinary quantum field theory at very small dimensions.
 Part of what makes string theory attractive is that it eliminates, or 'transforms away', the inherent infinities found in the quantum theory of gravity. If the predictions of this theory are proven valid in repeatable experiments under controlled conditions, it could allow gravity to be unified with the other three fundamental interactions. Nevertheless, even if string theory leads to this grand unification, it will not alter our understanding of wave-particle duality. While the success of the theory would reinforce our view of the universe as a unified dynamic process, it applies to very small dimensions and therefore does not alter our view of wave-particle duality.
 While the formalism of quantum physics predicts that correlations between particles over space-like separations are possible, it can say nothing about what this strange new relationship between parts (quanta) and the whole (cosmos) causes to result outside this formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be 'mutually adaptive and complementary to one another.'
 Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts constituting the whole, even though the whole is exemplified only in its parts. This principle of order, Harris continued, 'is nothing really in and of itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.'
 In a genuine whole, the relationship between the constituent parts must be 'internal or immanent' in the parts, as opposed to a more spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that would allegedly constitute the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics; it is also consistent with the manner in which we have begun to understand the relations between parts and whole in modern biology.
 Modern physics also reveals, claimed Harris, a complementary relationship between the differences between the parts that constitute content and the universal ordering principle that is immanent in each part. While the whole cannot be finally disclosed in the analysis of the parts, the study of the differences between parts provides insight into the dynamic structure of the whole present in each part. The part can never, however, be finally isolated from the web of relationships that discloses the interconnections with the whole, and any attempt to do so results in ambiguity.
 Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet the order in complementary relationships between difference and sameness in any physical event is never external to that event; the connections are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic function of wholeness is not surprising. The relationship between part, as quantum event apparent in observation or measurement, and the indivisible whole, disclosed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.
 If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of the system, we can assume that the cosmos is a single significant whole that evinces progressive order in complementary relations to its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain (well protected within the walls of the cranium), and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is not unreasonable to conclude, in philosophical terms at least, that the universe is conscious.
 Even so, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representation or description. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.
 While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on that knowledge, let us be quite clear on one point: there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss the speculation are obviously free to do so. However, there is another conclusion to be drawn that is firmly grounded in scientific theory and experiment: there is no basis in the scientific description of nature for believing in the radical Cartesian division between mind and world sanctioned by classical physics. Clearly, this radical separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realm of their applicability.
 All the same, the philosophical implications might serve as a motive for considering how our proposed new understanding of the relationship between parts and wholes in physical reality might affect the manner in which we deal with some major real-world problems. This will demonstrate why a timely resolution of these problems is critically dependent on a renewed dialogue between members of the two cultures, the humanists-social scientists and the scientists-engineers. We will also argue that the resolution of these problems could be dependent on a renewed dialogue between science and religion.
 As many scholars have demonstrated, the classical paradigm in physics has greatly influenced and conditioned our understanding and management of human systems in economic and political realities. Virtually all models of these realities treat human systems as if they consist of atomized units or parts that interact with one another in terms of laws or forces external to or between the parts. These systems are also viewed as hermetic or closed and, thus, as discrete, separate, and distinct.
 Consider, for example, how the classical paradigm influenced our thinking about economic reality. In the eighteenth and nineteenth centuries, the founders of classical economics, figures like Adam Smith, David Ricardo, and Thomas Malthus, conceived of the economy as a closed system in which interactions between parts (consumers, producers, distributors, etc.) are controlled by forces external to the parts (supply and demand). The central legitimating principle of free market economics, formulated by Adam Smith, is that lawful or law-like forces external to the individual units function as an invisible hand. This invisible hand, said Smith, frees the units to pursue their best interests, moves the economy forward, and generally legislates the behaviour of parts in the best interests of the whole. (The resemblance between the invisible hand and Newton's universal law of gravity, and between the relations of parts and wholes in classical economics and classical physics, should be transparent.)
 After roughly 1830, economists shifted their focus to the properties of the invisible hand in the interactions between parts, using mathematical models. Within these models, the behaviour of parts in the economy is assumed to be analogous to the lawful interactions between parts in classical mechanics. It is, therefore, not surprising that differential calculus was employed to represent economic change in a virtual world in terms of small or marginal shifts in consumption or production. The assumption was that the mathematical description of marginal shifts in the complex web of exchanges between parts (atomized units and quantities) and whole (closed economy) could reveal the lawful, or law-like, machinations of the closed economic system.
 These models later became one of the foundations of microeconomics. Microeconomics seeks to describe interactions between parts in exact quantifiable measures, such as marginal cost, marginal revenue, marginal utility, and growth of total revenue as indexed against individual units of output. In analogy with classical mechanics, the quantities are viewed as initial conditions that can serve to explain subsequent interactions between parts in the closed system in something like deterministic terms. The combination of classical macro-analysis with micro-analysis resulted in what Thorstein Veblen in 1900 termed neoclassical economics, the model for understanding economic reality that is most widely used today.
 Beginning in the 1930s, the challenge became to subsume the understanding of the interactions between parts in closed economic systems within more sophisticated mathematical models using devices like linear programming, game theory, and new statistical techniques. In spite of the growing mathematical sophistication, these models are based on the same assumptions from classical physics featured in previous neoclassical economic theory, with one exception: they also appeal to the assumption that systems exist in equilibrium or in perturbations from equilibria, and they seek to describe the state of the closed economic system in these terms.
 One could argue that the fact that our economic models rest on assumptions from classical mechanics is not a problem, by appealing to the two-domain distinction between macro-level and micro-level processes expatiated upon earlier. Since classical mechanics serves us well in our dealings with macro-level phenomena, in situations where the speed of light is so large and the quantum of action is so small as to be safely ignored for practical purposes, economic theories based on assumptions from classical mechanics should serve us equally well in dealing with the macro-level behaviour of economic systems.
 The obvious problem is that nature is reluctant to operate in accordance with these assumptions. In the biosphere, the interaction between parts is intimately related to the whole, no collection of parts is isolated from the whole, and the ability of the whole to regulate the relative abundance of atmospheric gases suggests that the whole of the biota displays emergent properties that are more than the sum of its parts. What the current ecological crisis reveals is the abstract and virtual character of the world described by neoclassical economic theory. The real economy comprises all human activities associated with the production, distribution, and exchange of tangible goods and commodities and with the consumption and use of natural resources, such as arable land and water. Although expanding economic systems in the real economy are obviously embedded in a web of relationships with the entire biosphere, our measures of healthy economic systems disguise this fact very nicely. Consider, for example, the description of a healthy economic system written in 1996 by Frederick Hu, head of the competitiveness research team for the World Economic Forum: short of military conquest, economic growth is the only viable means for a country to sustain increases in national living standards; an economy is internationally competitive if it performs strongly in three general areas: abundant productive inputs from capital, labour, infrastructure and technology; optimal economic policies such as low taxes, little interference and free trade; and sound market institutions such as the rule of law and the protection of property rights.
 The prescription for medium-term growth of economies in countries like Russia, Brazil, and China may seem utterly pragmatic and quite sound. However, the virtual economy described is a closed and hermetically sealed system in which the invisible hand of economic forces allegedly results in a healthy growth economy if impediments to its operation are removed or minimized. It is, of course, often true that such prescriptions can have the desired results in terms of increases in living standards, and Russia, Brazil and China are seeking to implement them in various ways.
 In the real economy, however, these systems are clearly not closed or hermetically sealed: Russia uses carbon-based fuels in production facilities that produce large amounts of carbon dioxide and other gases that contribute to global warming; Brazil is in the process of destroying a rain forest that is critical to species diversity and to the maintenance of a relative abundance of atmospheric gases that regulate Earth's temperature; and China is seeking to build a first-world economy based on highly polluting old-world industrial plants that burn soft coal. Not to forget, the virtual economic system that the world now seems to regard as the best example of the benefits that can be derived from the workings of the invisible hand, that of the United States, operates in the real economy as one of the primary contributors to the ecological crisis.
 In 'Consilience,' Edward O. Wilson makes the case that effective and timely solutions to the problems threatening human survival are critically dependent on something like a global revolution in ethical thought and behaviour. Nonetheless, his view of the basis for this revolution is quite different from our own. Wilson claimed that since the foundations for moral reasoning evolved in what he termed 'gene-culture' evolution, the rules of ethical behaviour are emergent aspects of our genetic inheritance. Based on the assumption that the behaviour of contemporary hunter-gatherers resembles that of our hunter-gatherer forebears in the Palaeolithic Era, he drew on accounts of Bushman hunter-gatherers living in the central Kalahari in an effort to demonstrate that ethical behaviour is associated with instincts like bonding, cooperation, and altruism.
 Wilson argued that these instincts evolved in our hunter-gatherer ancestors through genetic mutation, and that the ethical behaviour associated with these genetically based instincts provided a survival advantage. He then claimed that since these genes were passed on to subsequent generations and eventually became pervasive in the human genome, the ethical dimension of human nature has a genetic foundation. When we fully understand the 'innate epigenetic rules of moral reasoning,' he suggested, the rules will probably turn out to be an ensemble of many algorithms whose interlocking activities guide the mind across a landscape of nuanced moods and choices.
 Any reasonable attempt to lay a firm foundation beneath the quagmire of human ethics in all of its myriad and often contradictory formulations is admirable, and Wilson's attempt is more admirable than most. In our view, however, there is little or no prospect that it will prove successful, for any number of reasons. While some linkage between genes and behaviour may well be discovered, human ethical behaviour, and the range of advantages conferred by this behaviour, is far too complex and inconsistent to be reduced to a given set of 'epigenetic rules of moral reasoning.'
 Also, moral codes may derive in part from instincts that confer a survival advantage, but when we examine these codes, they are clearly primarily cultural products. This explains why ethical systems are constructed in a bewildering variety of ways in different cultural contexts and why they often sanction or legitimate quite different thoughts and behaviours. Let us not forget that rules of ethical behaviour are quite malleable and have been used to legitimate human activities such as slavery, colonial conquest, genocide and terrorism. As Cardinal Newman cryptically put it, 'Oh how we hate one another for the love of God.'
 According to Wilson, the 'human mind evolved to believe in the gods' and people 'need a sacred narrative', yet gods and sacred narratives are in his view merely human constructs, and therefore there is no basis for dialogue between the world views of science and religion. 'Science, for its part, will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments. The result of the competition between the two world views, I believe, will be the secularization of the human epic and of religion itself.'
 Wilson obviously has a right to his opinions, and many will agree with him for their own good reasons, but what is most interesting about his thoughtful attempt to posit a more universal basis for human ethics is that it is based on classical assumptions about the character of both physical and biological realities. While Wilson does not argue that human behaviour is genetically determined in the strict sense, he does allege that there is a causal linkage between genes and behaviour that largely conditions this behaviour, and he appears to be a firm believer in the classical assumption that reductionism can uncover the lawful essences that govern the physical aspects of reality, including those associated with the alleged 'epigenetic rules of moral reasoning.'
 Once again, in Wilson's view there is apparently nothing that cannot be reduced to scientific understanding or fully disclosed in scientific terms, and his hope for the future of humanity is that the triumph of scientific thought and method will allow us to achieve the Enlightenment's ideal of disclosing the lawful regularities that govern or regulate all aspects of human experience. Hence, science will uncover the 'bedrock of moral and religious sentiment,' and the entire human epic will be mapped in the secular space of scientific formalism. The intent here is not to denigrate Wilson's attempt to posit a more universal basis for the human condition, but to demonstrate that any attempt to understand or improve upon human behaviour based on appeals to outmoded classical assumptions is unrealistic. If the human mind did, in fact, evolve in something like deterministic fashion in gene-culture evolution, and if there were, in fact, innate mechanisms in mind that are both lawful and benevolent, Wilson's program for uncovering these mechanisms could have merit. Nevertheless, for all the reasons that have been posited, classical determinism cannot explain the human condition, and Darwinian evolution should be modified to accommodate the complementary relationships between the cultural and biological principles that govern human evolution, through which human beings find their way toward self-realization and undivided wholeness.
 Equally important, the classical assumption that the only privileged or valid knowledge is scientific is one of the primary sources of the stark division between the two cultures of humanists-social scientists and scientists-engineers. In this view, Wilson is quite correct in assuming that a timely end to the two-culture war and a renewed dialogue between members of these cultures are now critically important to human survival. It is also clear, however, that dreams of reason based on the classical paradigm will only serve to perpetuate the two-culture war. Since these dreams are also remnants of an old scientific world view that no longer applies, in theory or in fact, to the actual character of physical reality, they will probably serve only to frustrate the solution of real-world problems.
 However, there is a renewed basis for dialogue between the two cultures, and it is, we believe, quite different from that described by Wilson. Since classical epistemology has been displaced, or is in the process of being displaced, by the new epistemology of science, the truths of science can no longer be viewed as transcendent and absolute in the classical sense. The universe more closely resembles a giant organism than a giant machine, and it displays emergent properties that serve to perpetuate the existence of the whole, in both physics and biology, that cannot be explained in terms of unrestricted determinism, simple causality, first causes, linear movements and initial conditions. Perhaps the first and most important precondition for renewed dialogue between the two cultures is the shared realization, as Einstein put it, that a human being is a 'part of the whole'. It is this shared awareness that gives us the freedom, and the existential choice, to free ourselves of the 'optical illusion' of our present conception of self as a 'part limited in space and time', and to widen 'our circle of compassion to embrace all living creatures and the whole of nature in its beauty'. One cannot, of course, merely reason oneself into an acceptance of this view; what is also required is the capacity for what Einstein termed 'cosmic religious feeling'. Perhaps it is this capacity to experience a sense of unity with the whole, of something like universal consciousness, that makes an essential difference to how we understand our own existence within the universe.
 Those who have this capacity will be able to communicate their enhanced scientific understanding of the relations between part, which is our self, and whole, which is the universe, in ordinary language with enormous emotional appeal. The task that lies before the poets of this new reality has been nicely described by Jonas Salk: man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect 'reality'. By using the processes of Nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing 'reality' as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth as comprehensive guides to living. In this way, man's efforts of imagination and intellect can play the vital roles of ensuring his survival and his continued evolution.
 It is time, we suggest, for the religious imagination and the religious experience to engage the complementary truths of science and to fill that silence with meaning. This does not mean that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require any ontology, and is in no way diminished by the lack of one. One is free to recognize a basis for a dialogue between science and religion for the same reason that one is free to deny that this basis exists: there is nothing in our current scientific world view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being. The question of belief in some ontology remains what it has always been, a question, and the physical universe on the most basic level remains what it has always been, a riddle. The ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.
 The present time is clearly a time of a major paradigm shift, but consider the last great paradigm shift, the one that resulted in the Newtonian framework. This previous paradigm shift was profoundly problematic for the human spirit: it led to the conviction that we are strangers, freaks of nature, conscious beings in a universe that is almost entirely unconscious, and that, since the universe is strictly deterministic, even the free will we feel in regard to the movements of our bodies is an illusion. Yet going through the acceptance of such a paradigm was probably necessary for the Western mind.
 The overwhelming success of Newtonian physics led most scientists and most philosophers of the Enlightenment to rely on it exclusively. As far as the quest for knowledge about reality was concerned, they regarded all other modes of expressing human experience, such as accounts of numinous emergences, poetry, art, and so on, as irrelevant. This reliance on science as the only way to the truth about the universe is clearly obsolete. Science has to give up the illusion of its self-sufficiency and of the self-sufficiency of human reason. It needs to unite with other modes of knowing, in particular with contemplation, and help each of us move to higher levels of being and toward the experience of oneness.
 If this is the direction of the emerging world view, then the paradigm shift we are presently going through will prove to be nourishing to the human spirit and in correspondence with its deepest conscious or unconscious yearning: the yearning to emerge out of Plato's shadows and into the light. The Big Bang theory seeks to explain what happened at or soon after the beginning of the universe. Scientists can now model the universe back to 10⁻⁴³ seconds after the big bang. For the time before that moment, the classical theory of gravity is no longer adequate. Scientists are searching for a theory that merges gravity (as explained by Einstein's general theory of relativity) and quantum mechanics, but have not found one yet. Many scientists hope that string theory, also known as M-theory, will tie together gravity and quantum mechanics and help scientists explore further back in time.
 Because scientists cannot look back in time beyond that early epoch, the actual big bang is hidden from them. There is no way at present to detect the origin of the universe. Further, the big bang theory does not explain what existed before the big bang. Perhaps time itself began at the big bang, so that it makes no sense to discuss what happened ‘before’ the big bang.
 According to the big bang theory, the universe expanded rapidly in its first microseconds. A single force existed at the beginning of the universe, and as the universe expanded and cooled, this force separated into those we know today: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. A theory called the electroweak theory now provides a unified explanation of electromagnetism and the weak nuclear force. Physicists are searching for a grand unification theory that would incorporate the strong nuclear force as well. String theory seeks to incorporate the force of gravity with the other three forces, providing a theory of everything (TOE).
 One widely accepted version of big bang theory includes the idea of inflation. In this model, the universe expanded much more rapidly at first, to about 10⁵⁰ times its original size in the first 10⁻³² second, then slowed its expansion. The theory was advanced in the 1980s by American cosmologist Alan Guth and elaborated upon by American astronomer Paul Steinhardt, Russian-American scientist Andrei Linde, and British astronomer Andreas Albrecht. The inflationary universe theory solves a number of problems in cosmology. For example, it explains why the universe now appears close to the type of flat space described by the laws of Euclid’s geometry: we see only a tiny region of the original universe, similar to the way we do not notice the curvature of the earth because we see only a small part of it. The inflationary universe also explains why the universe appears so homogeneous. If the universe we observe was inflated from some small, original region, it is not surprising that it appears uniform.
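 (An illustrative calculation, not from the original text: growth by a factor of about 10⁵⁰ corresponds to roughly 115 ‘e-folds’, or about 166 doublings, of the scale of the universe within 10⁻³² second; the exact numbers vary between inflationary models.)

    import math

    growth_factor = 1e50   # expansion factor quoted in the text
    duration = 1e-32       # seconds, as quoted

    e_folds = math.log(growth_factor)      # natural log: ~115 e-folds
    doublings = math.log2(growth_factor)   # ~166 doublings
    print(f"{e_folds:.0f} e-folds ({doublings:.0f} doublings) in {duration:.0e} s")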
 Once the expansion of the initial inflationary era ended, the universe continued to expand more slowly. The inflationary model predicts that the universe is on the boundary between being open and closed. If the universe is open, it will keep expanding forever. If the universe is closed, the expansion of the universe will eventually stop and the universe will begin contracting until it collapses. Whether the universe is open or closed depends on the density, or concentration of mass, in the universe. If the universe is dense enough, it is closed.
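 (The dividing line between open and closed can be made quantitative, though the following is an illustration rather than part of the original text: general relativity gives a ‘critical density’ of 3H₀²/8πG, where H₀ is the Hubble constant. A Python sketch, assuming a Hubble constant of about 70 km/s per megaparsec:)

    import math

    H0_km_s_Mpc = 70.0     # assumed Hubble constant, km/s per Mpc
    Mpc_in_m = 3.0857e22   # metres in one megaparsec
    G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2

    H0 = H0_km_s_Mpc * 1000.0 / Mpc_in_m       # convert to 1/s
    rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3
    print(f"critical density: {rho_crit:.1e} kg/m^3")  # ~9e-27, a few hydrogen atoms per cubic metre

 If the actual mean density exceeds this value, the universe is closed; if it falls short, the universe is open.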
 The theory is based on the mathematical equations, known as the field equations, of the general theory of relativity set forth in 1915 by Albert Einstein. In 1922 Russian physicist Alexander Friedmann provided a set of solutions to the field equations. These solutions have served as the framework for much of the current theoretical work on the big bang theory. American astronomer Edwin Hubble provided some of the greatest supporting evidence for the theory with his 1929 discovery that the light of distant galaxies was universally shifted toward the red end of the spectrum. Once ‘tired light’ theories-that light slowly loses energy naturally, becoming more red over time-were dismissed, this shift proved that the galaxies were moving away from each other. Hubble found that galaxies farther away were moving away proportionally faster, showing that the universe is expanding uniformly. However, the universe’s initial state was still unknown.
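 (To make Hubble’s proportionality concrete, here is a small illustration, not part of the original text, assuming a modern value of about 70 km/s per megaparsec for the constant of proportionality; Hubble’s own 1929 estimate was considerably larger.)

    H0 = 70.0  # assumed Hubble constant, km/s per Mpc

    # v = H0 * d: recession velocity grows in proportion to distance
    for d_Mpc in (10, 100, 1000):
        v = H0 * d_Mpc
        print(f"galaxy at {d_Mpc:>4} Mpc recedes at about {v:,.0f} km/s")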
 In the 1940s Russian-American physicist George Gamow worked out a theory that fit with Friedmann’s solutions, in which the universe expanded from a hot, dense state. In 1950 British astronomer Fred Hoyle, in support of his own opposing steady-state theory, referred to Gamow’s theory as a mere ‘big bang,’ but the name stuck.
 The overall framework of the big bang theory came out of solutions to Einstein’s general relativity field equations and remains unchanged, but various details of the theory are still being modified today. Einstein himself initially believed that the universe was static. When his equations seemed to imply that the universe was expanding or contracting, Einstein added a constant term to cancel out the expansion or contraction of the universe. When the expansion of the universe was later discovered, Einstein stated that introducing this ‘cosmological constant’ had been a mistake.
 After Einstein’s work of 1917, several scientists, including the Abbé Georges Lemaître in Belgium, Willem de Sitter in Holland, and Alexander Friedmann in Russia, succeeded in finding solutions to Einstein’s field equations. The universes described by the different solutions varied. De Sitter’s model had no matter in it. This is not a bad approximation, since the average density of the universe is extremely low. Lemaître’s universe expanded from a ‘primeval atom.’ Friedmann’s universe also expanded from a very dense clump of matter, but did not involve the cosmological constant. These models explained how the universe behaved shortly after its creation, but there was still no satisfactory explanation for the beginning of the universe.
 In the 1940s George Gamow was joined by his students Ralph Alpher and Robert Herman in working out details of Friedmann’s solutions to Einstein’s theory. They expanded on Gamow’s idea that the universe expanded from a primordial state of matter called ylem, consisting of protons, neutrons, and electrons in a sea of radiation. They theorized that the universe was very hot at the time of the big bang (the point at which the universe explosively expanded from its primordial state), since elements heavier than hydrogen can be formed only at a high temperature. Alpher and Herman predicted that radiation from the big bang should still exist. Cosmic background radiation roughly corresponding to the temperature predicted by Gamow’s team was detected in the 1960s, further supporting the big bang theory, though by then the work of Alpher, Herman, and Gamow had been largely forgotten.
 The universe cooled as it expanded. After about one second, protons formed. In the following few minutes-often referred to as the ‘first three minutes’-combinations of protons and neutrons formed the isotope of hydrogen known as deuterium and some of the other light elements, principally helium, along with some lithium, beryllium, and boron. The study of the distribution of deuterium, helium, and the other light elements is now a major field of research. The uniformity of the helium abundance around the universe supports the big bang theory, and the abundance of deuterium can be used to estimate the density of matter in the universe.
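 (A back-of-the-envelope illustration, not from the original text: in standard accounts roughly one neutron survived for every seven protons at the start of nucleosynthesis, and nearly all of those neutrons ended up bound in helium-4. That single assumption already fixes the helium mass fraction near 25 percent, close to what is observed.)

    # Helium mass fraction from the neutron-to-proton ratio:
    #   Y = 2(n/p) / (1 + n/p), assuming all neutrons end up in helium-4
    n_over_p = 1.0 / 7.0  # assumed ratio at the start of nucleosynthesis
    Y = 2 * n_over_p / (1 + n_over_p)
    print(f"predicted helium mass fraction: {Y:.2f}")  # 0.25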
 From about 380,000 to about one million years after the big bang, the universe cooled to about 3,000°C (about 5,000°F) and protons and electrons combined to form hydrogen atoms. Hydrogen atoms can only absorb and emit specific colours, or wavelengths, of light. The formation of atoms allowed many other wavelengths of light, wavelengths that had previously been scattered by the free electrons, to travel much farther than before. This change set free radiation that we can detect today. After billions of years of cooling, this cosmic background radiation has a temperature of about 3 K (about −270°C/−454°F). The cosmic background radiation was first detected and identified in 1965 by American astrophysicists Arno Penzias and Robert Wilson.
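 (One more illustrative calculation, not part of the original text: the radiation temperature scales with the expansion as T = T₀(1 + z), so comparing the roughly 3,000-degree temperature at which the atoms formed with the 3 K measured today tells us how much the universe has stretched since the radiation was set free.)

    T_then = 3000.0  # K, approximate temperature when hydrogen atoms formed
    T_now = 2.7      # K, measured cosmic background temperature today

    # 1 + z = T_then / T_now: the factor by which wavelengths have stretched
    z = T_then / T_now - 1
    print(f"redshift of the background radiation: z ~ {z:.0f}")  # ~1100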
