
How does the brain instinctively know the math behind Newtonian Physics?


A 10-year-old child does not have the intellectual ability to accurately calculate the energy required to throw a ball an arbitrary distance. Yet they are able to throw a ball accurately at a specific target.

Similarly, we often "instinctively" know an approximation of the amount of energy and force required to accomplish certain tasks, even though many of us have minimal knowledge of the mathematics behind those tasks.

How does the human brain know the values of force, energy, etc. required to accomplish certain tasks, while being ignorant of the Newtonian physics behind them? Is it just a game of "guess and check"?


Trial and error.

The CNS is very much a living, changing organ system - much more so before you are an adult. While you are a child, your neurons fire, muscles contract, and force is exerted on objects - then your CNS modifies itself (either through redistribution of neural contacts or growth of new neurons spurred by chemical changes) to account for the results.

This is why "practice makes perfect." As you continuously perform a task, like throwing a ball, your body keeps track of the results. Did X muscle fibers contracting in Y muscles overshoot your buddy? You register the failure and compensate by recruiting X-N muscle fibers in those Y muscles until you consistently satisfy your goal.
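A minimal sketch of that kind of error-driven correction, assuming a toy one-dimensional throw model and made-up numbers (not how the CNS actually encodes anything); the point is only that repeated adjustment from feedback converges on the right effort without ever solving the physics:

```python
import random

def throw(effort):
    """Toy model: distance thrown grows with effort, plus a little motor noise."""
    return effort * 2.0 + random.gauss(0, 0.2)

def practice(target, trials=30, gain=0.3):
    """Nudge the effort after each throw in proportion to how far it missed."""
    effort = 1.0                        # arbitrary first guess
    for _ in range(trials):
        miss = throw(effort) - target   # positive = overshoot, negative = undershoot
        effort -= gain * miss           # correct against the error; no equations of motion used
    return effort

if __name__ == "__main__":
    learned = practice(target=10.0)
    print(f"learned effort ~ {learned:.2f}, typical throw ~ {throw(learned):.2f} m")
```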

Your CNS is keeping track of everything, all the time. There isn't a single instant in your life - asleep or awake - that's not being monitored, so your body has copious amounts of data to work with when making its estimates.

This is also why simulations take time to adapt to, even when the person fully understands the basic physics involved.


First, a small point: the laws of physics describe what is going on; they are not what is actually going on. Those formulas are about being able to make nearly precise predictions about the outcome of a certain action, and about getting a feeling for how and why it actually happens this way. I would bet that neither Newton nor Einstein nor any other great physicist ever tried to predict a real-life action using the mathematics they knew.

The key to understanding how organisms predict actions is heuristics. You, like every other organism on Earth, use intuitively estimated values to predict actions. You can easily predict that every object you throw into the air will fall to the ground, whether you are a physicist or not.

Prof. Gigerenzer explains this phenomenon in this presentation at 36:50. Unfortunately it is in German, but the pictures he shows may help a little: https://youtu.be/IderadHRCu8

An easy example: a dog trying to catch a thrown frisbee in mid-air does not know anything about mathematics, yet dogs seem to be pretty good at predicting where a frisbee will be in a good position to be caught while it is still in the air. How do they do this? The dog fixes its gaze on the frisbee at a certain angle relative to itself, an angle at which the frisbee would be catchable once it is close enough. It then runs faster or slower, always keeping that same angle, not the same distance, to the frisbee. No matter what parabola the frisbee follows, as long as the dog keeps the angle, all is fine. When the frisbee comes close enough to be caught, the dog bites and catches it.
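A rough sketch of that strategy (what Gigerenzer calls the gaze heuristic), with invented launch numbers and the frisbee modelled as a simple projectile; the dog never computes the parabola, it only repositions itself so that the sight-line angle it locked onto at the start never changes:

```python
import math

def catch_with_gaze_heuristic(dt=0.01):
    # Frisbee as a toy projectile released just past its peak (invented numbers, no drag).
    fx, fy = 0.0, 5.0      # frisbee position (m)
    vx, vy = 8.0, 0.0      # frisbee velocity (m/s)
    dog_x = 12.0           # the dog waits downfield on the ground

    # The dog locks onto whatever gaze angle it sees right now...
    locked = math.atan2(fy, dog_x - fx)

    while fy > 0.3:        # ...and holds it until the disc drops to mouth height
        fx += vx * dt
        fy += vy * dt
        vy -= 9.81 * dt
        # Run forward or backward just enough to keep the gaze angle unchanged.
        dog_x = fx + fy / math.tan(locked)

    return fx, dog_x

if __name__ == "__main__":
    frisbee_x, dog_x = catch_with_gaze_heuristic()
    print(f"frisbee drops at x ~ {frisbee_x:.1f} m; the dog ends at x ~ {dog_x:.1f} m, a lunge away")
```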

Do we do unconscious mathematics? Yes, of a sort: simple sensory heuristics. Do we do unconscious mathematics like physics? Definitely not. As I said, the formulas of Newton and others work well to EXPLAIN what happens; they are not what is actually happening.


And just to be different.

You are looking at roughly 30 million years of evolution and natural selection behind the ability to gauge distance and relative position in three-dimensional space accurately. The penalty for making a mistake is death by falling out of a tree. http://anthro.palomar.edu/earlyprimates/early_2.htm

As for throwing… that is another 3 million years of evolution in how to use such a tool. The penalty for missing a target… well, I am guessing it is being eaten, or having your child eaten, by one or more predators. https://en.wikipedia.org/wiki/Oldowan

Humans are good at throwing things… better than good. Humans can throw fast and accurately. Our shoulder joints have undergone natural selection such that humans have unusual shoulder blades compared to other great apes. http://www.natureworldnews.com/articles/16572/20150909/ape-human-evolution-more-clues-revealed-shoulders.htm

If there was enough selection pressure to reshape the shoulder of an ape to enable it to throw rocks (projectiles) fast, I am sure there was equal pressure on the software side to make sure said rock hit its intended target.


10 Brain-Breaking Scientific Concepts

Many of us here at Listverse really enjoy messing with the heads of our readership. We know that you come here to be entertained and informed; perhaps it’s just our sterling work ethic, but on some days we feel the need to give you much, much more than you bargained for. This is one of those days.

So while you depend on us to provide a pleasant distraction from whatever part of the day in which you’re visiting, please allow us to instead give you a MAJOR distraction from the REST of the day, and perhaps tomorrow as well. Here are ten things that are going to take a while to get your brain around; we still haven’t done it, and we write about this stuff.

Have you ever been able to tell what another person is thinking? How do you know? It’s one thing to suggest that we’re not really capable of knowing anybody else’s thoughts; it’s quite another to suggest that that person may not have any thoughts for you to know.

The philosophical zombie is a thought experiment, a concept used in philosophy to explore problems of consciousness as it relates to the physical world. Most philosophers agree that they don’t actually exist, but here’s the key concept: all of those other people you encounter in the world are like the non-player characters in a video game. They speak as if they have consciousness, but they do not. They say “ow!” if you punch them, but they feel no pain. They are simply there in order to help usher your consciousness through the world, but possess none of their own.

The concept of zombies is used largely to poke holes in physicalism, which holds that there are no things other than physical things, and that anything that exists can be defined solely by its physical properties. The “conceivability argument” holds that whatever is conceivable is possible, therefore zombies are possible. Their very possibility—vastly unlikely though it may be—raises all kinds of problems with respect to the function of consciousness, among other things—the next entry in particular.

Qualia are, simply, the subjective experiences of another. It may seem simple to state that it’s impossible to know exactly what another person’s experience is, but the idea of qualia (that term, by the way, is plural; the singular is “quale”) goes quite a bit beyond the simplicity of that statement.

For example, what is hunger? We all know what being hungry feels like, right? But how do you know that your friend Joe’s experience of hunger is the same as yours? We can even describe it as “an empty, kind of rumbling feeling in your stomach”. Fine—but Joe’s feeling of “emptiness” could be completely different from yours as well. For that matter, consider “red”. Everyone knows what red looks like, but how would you describe it to a blind person? Even if we break it down and discuss how certain light frequencies produce a color we call “red”, we still have no way of knowing if Joe perceives the color red as the color you know to be, say, green.

Here’s where it gets very weird. A famous thought experiment on qualia concerns a woman who is raised in a black and white room, gaining all of her information about the world with black and white monitors. She studies and learns everything there is to know about the physical aspects of color and vision: wavelength frequencies, how the eyes perceive color, everything. She becomes an expert, and eventually knows literally all the factual information there is to know on these subjects.

Then, one day she is released from the room and gets to actually SEE colors for the first time. Doing so, she learns something about colors that she didn’t know before—but WHAT?

British philosopher John Stuart Mill, in the 19th century, set forth a theory of names that held for many decades—essentially, that the meaning of a proper name is whatever bears that name in the external world; simple enough. The problem with the theory arises when things do not exist in the external world, which would make sentences like “Harry Potter is a great wizard” completely meaningless according to Mill.

German logician Gottlob Frege challenged this view with his Descriptivist theory, which holds that the meanings—semantic contents—of names are the collections of descriptions associated with them. This makes the above sentence make sense, since the speaker and presumably listener would attach the description “character from popular culture” or “fictional boy created by J.K. Rowling” to the name “Harry Potter”.

It seems simple, but in philosophy of language there had not been a distinction—until Frege—between sense and reference. That is to say, there are multiple meanings associated with words as a matter of necessity—the OBJECT to which the term refers, and the WAY in which the term refers to the object.

Believe it or not, descriptivist theory has had some pretty serious holes blown in it in recent decades, notably by American philosopher Saul Kripke in his book Naming and Necessity. Just one argument proposes (in a nutshell) that if information about the subject of a name is incorrect or incomplete, then a name could actually refer to a completely different person about which the known information would be more accurate. Kripke’s objections only get more headache-inducing from there.

The Mind-Body problem is an aspect of Dualism, which is a philosophy that basically holds that for all systems or domains, there are always two types of things or principles—for example, good and evil, light and dark, wet and dry—and that these two things necessarily exist independently of each other, and are more or less equal in terms of their influence on the system. A Dualist view of religion believes this of God and Satan, contrasting with a Monist view (which would believe, perhaps, in only one or the other, or that we are all one consciousness) or a pluralist view (which might hold that there are many gods).

The Mind-Body problem, then, is simple: what’s the relationship between body and mind? If dualism is correct, then humans should be either physical or mental entities, yet we appear to have properties of both. This causes all kinds of problems that present themselves in all kinds of ways: are mental states and physical states somehow the same thing? Or do they influence each other? If so, how? What is consciousness, and if it is distinct from the physical body, can it exist OUTSIDE the physical body? What is “self”—are “you” the physical you? Or are “you” your mind?

The problem that Dualists cannot reconcile is that there is no way to build a satisfactory picture of a being possessed of both a body AND a mind, which may bring you back to the concept of philosophical zombies up there. Unless the next example goes ahead and obliterates all of this thinking for you, which it might.

Since the release of The Matrix, we have all wondered from time to time if we could really be living in a computer simulation. We’re all kind of used to that idea, and it’s a fun idea to kick around. This doesn’t blow our minds anymore, but the “Simulation Argument” puts it into a perspective that . . . frankly, is probably going to really freak you out, and we’re sorry. You did read the title of the article, though, so we’re not TOO sorry.

First, though, consider the “Dream Argument”. When dreaming, one doesn’t usually know it; we’re fully convinced of the dream’s reality. In that respect, dreams are the ultimate virtual reality, and proof that our brains can be fooled into thinking that pure sensory input represents our true physical environment, when it actually does not. In fact, it’s sort of impossible to tell whether you may be dreaming now—or always. Now consider this:

Human beings will probably survive as a species long enough to be capable of creating computer simulations that host simulated persons with artificial intelligence. Informing the AI of its nature as a simulation would defeat the purpose—the simulation would no longer be authentic. Unless such simulations are prohibited in some way, we will almost certainly run billions of them—to study history, war, disease, culture, etc. Some, if not most, of these simulations will also develop this technology and run simulations within themselves, and on and on ad infinitum.

So, which is more likely—that we are the ONE root civilization which will first develop this technology, or that we are one of the BILLIONS of simulations? It is, of course, more likely that we’re one of the simulations—and if indeed it eventually comes to pass that we develop the technology to run such simulations, it is ALL BUT CERTAIN.
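The arithmetic behind that "more likely" is just a ratio. A back-of-the-envelope sketch, with a purely made-up count of simulated civilizations:

```python
simulated = 1_000_000_000   # hypothetical number of ancestor simulations ever run
root = 1                    # the single non-simulated "base" civilization

# If you could equally well be any one of these civilizations:
p_simulated = simulated / (simulated + root)
print(f"chance of being simulated under these assumptions: {p_simulated:.9f}")  # ~0.999999999
```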

Synchronicity, aside from being a very good Police record, is a term coined by famous psychologist and philosopher Carl Jung. It is the concept of “meaningful coincidences”, and Jung was partially inspired by a very strange event involving one of his patients.

Jung had been kicking around the idea that coincidences that appear to have a causal connection are in fact manifested in some way by the consciousness of the person perceiving the coincidences. One patient was having trouble processing some subconscious trauma, and one night dreamed of an insect—a golden scarab, a large and rare type of beetle. The next day, in a session with Jung and after describing the dream, an insect bounced off the window of the study in which they sat. Jung collected it—a golden scarab, very rare for the region’s climate. He released it into the room and, as the patient gathered her jaw up off the floor, proceeded to describe his theory of meaningful coincidences.

The meaning of the scarab itself—the patient was familiar with its status as a totem of death and rebirth in ancient Egyptian philosophy—was symbolic of the patient’s need to abandon old ways of thinking in order to progress with her treatment. The incident solidified Jung’s ideas about synchronicity, and its implication that our thoughts and ideas—even subconscious ones—can have a real effect somehow on the physical world, and manifest in ways that are meaningful to us.

You probably recognize by now that a main thrust of many of these concepts is an attempt to understand the nature of consciousness. The theory of Orchestrated Objective Reduction is no different, but was arrived at independently by two very smart people from two very different angles—one from mathematics (Roger Penrose, a British theoretical physicist), and one from anesthesiology (Stuart Hameroff, a University professor and anesthesiologist). They assimilated their combined research into the “Orch-OR” theory after years of working separately.

The theory is an extrapolation of Gödel’s Incompleteness Theorem, which revolutionized mathematics and states that “any . . . theory capable of expressing elementary arithmetic cannot be both consistent and complete”. Basically, it proves the incompleteness of mathematics and of any defined system in general. Penrose took this a step further—stating that since a mathematician is a “system” and theorems like Gödel’s are provable by human mathematicians, “The inescapable conclusion seems to be: Mathematicians are not using a knowably sound calculation procedure in order to ascertain mathematical truth. We deduce that mathematical understanding—the means whereby mathematicians arrive at their conclusions with respect to mathematical truth—cannot be reduced to blind calculation.”

This means that the human brain is not merely performing calculations—like a computer but way, way faster—but doing . . . something else. Something that no computer could ever replicate, some “non-computable process” that cannot be described by an algorithm. There are not many things in science that fit this description; quantum wave function collapse is one of them, but that opens up a completely new can of worms.

Quantum physics deals with particles (or maybe they’re waves) so small that even the act of observing them, or measuring them, can change what they’re doing. That is the fundamental idea behind the so-called Uncertainty Principle, first described by Werner Heisenberg; it may answer a different question that has bothered a few of you for some time.

This dual nature of quanta was proposed to help explain this. If a particle appears to be in two places at once, or acts like a wave at one point and a particle the next, or appears to pop in and out of existence—all things that are known to be par for the course at the quantum level—it may be because the act of measurement, of observation, influences what is being observed.

Because of this, while it may be possible to get an accurate representation of one state of a quantum object’s being (say, an electron’s velocity), the means being used to achieve that measurement (say, firing a photon at it to intercept it) will affect its complementary properties (like its location) so that a COMPLETE picture of such an object’s state of being is impossible—those other properties become uncertain. Simple, right?
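To put rough numbers on that trade-off, here is a small sketch using the standard position–momentum form of the Heisenberg relation, Δx·Δp ≥ ħ/2, with an invented velocity uncertainty for the electron:

```python
HBAR = 1.054571817e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31    # electron mass, kg

delta_v = 1.0e3               # suppose we pin the electron's velocity down to about 1 km/s
delta_p = M_ELECTRON * delta_v       # corresponding momentum uncertainty
min_delta_x = HBAR / (2 * delta_p)   # best possible position uncertainty

print(f"minimum position uncertainty: {min_delta_x:.2e} m")  # ~5.8e-8 m, i.e. tens of nanometres
```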

There are a number of problems with the “Big Bang” model of cosmology, not the least of which is the likelihood of a theoretical “Big Crunch” in which the expanding Universe contracts (the “oscillating universe” theory) or the ultimate heat death of the universe. One theory that eliminates all of these problems is the theory of Eternal Return—which suggests simply that there is no beginning OR end to the Universe; that it recurs, infinitely, and always has been.

The theory depends upon infinite time and space, which is by no means certain. Assuming a Newtonian cosmology, it has been proven by at least one mathematician that the eternal recurrence of the Universe is a mathematical certainty, and of course the concept shows up in many religions both ancient and modern.

This concept is central to the writings of Nietzsche, and has serious philosophical implications as to the nature of free will and destiny. It seems like a heavy, almost unbearable burden to be pinned to space and time, destined to repeat the entirety of our existence throughout a literal eternity—until you consider the alternative . . .

If the concept of a Universe with no beginning or end, in which the same events take on fixed and immovable meaning, seems heavy—then consider the philosophical concept of Lightness, which is the exact opposite. In a Universe in which there IS a beginning and there IS an end, in which everything that exists will very soon exist no more, then everything is fleeting, and nothing has meaning. Which makes this Lightness the ultimate burden to bear, in a Universe in which everything is “without weight . . . and whether it was horrible, beautiful, or sublime . . . means nothing.”

The above quote is from the appropriately titled “The Unbearable Lightness Of Being” by reclusive author Milan Kundera, an in-depth exploration of the philosophy that we are really not sure we ever want to read. However, Zen Buddhism endorses this concept—and teaches us to rejoice in it. Indeed, many Eastern philosophies view recognition and acceptance of this condition as a form of perfection and enlightenment.

We suppose it all depends on your personal point of view, which . . . now that we think about it, is sort of the point of all of this.


Is Matter Conscious?

The nature of consciousness seems to be unique among scientific puzzles. Not only do neuroscientists have no fundamental explanation for how it arises from physical states of the brain, we are not even sure whether we ever will. Astronomers wonder what dark matter is, geologists seek the origins of life, and biologists try to understand cancer—all difficult problems, of course, yet at least we have some idea of how to go about investigating them and rough conceptions of what their solutions could look like. Our first-person experience, on the other hand, lies beyond the traditional methods of science. Following the philosopher David Chalmers, we call it the hard problem of consciousness.

But perhaps consciousness is not uniquely troublesome. Going back to Gottfried Leibniz and Immanuel Kant, philosophers of science have struggled with a lesser known, but equally hard, problem of matter. What is physical matter in and of itself, behind the mathematical structure described by physics? This problem, too, seems to lie beyond the traditional methods of science, because all we can observe is what matter does, not what it is in itself—the “software” of the universe but not its ultimate “hardware.” On the surface, these problems seem entirely separate. But a closer look reveals that they might be deeply connected.


Consciousness is a multifaceted phenomenon, but subjective experience is its most puzzling aspect. Our brains do not merely seem to gather and process information. They do not merely undergo biochemical processes. Rather, they create a vivid series of feelings and experiences, such as seeing red, feeling hungry, or being baffled about philosophy. There is something that it’s like to be you, and no one else can ever know that as directly as you do.

Our own consciousness involves a complex array of sensations, emotions, desires, and thoughts. But, in principle, conscious experiences may be very simple. An animal that feels an immediate pain or an instinctive urge or desire, even without reflecting on it, would also be conscious. Our own consciousness is also usually consciousness of something—it involves awareness or contemplation of things in the world, abstract ideas, or the self. But someone who is dreaming an incoherent dream or hallucinating wildly would still be conscious in the sense of having some kind of subjective experience, even though they are not conscious of anything in particular.

Where does consciousness—in this most general sense—come from? Modern science has given us good reason to believe that our consciousness is rooted in the physics and chemistry of the brain, as opposed to anything immaterial or transcendental. In order to get a conscious system, all we need is physical matter. Put it together in the right way, as in the brain, and consciousness will appear. But how and why can consciousness result merely from putting together non-conscious matter in certain complex ways?

This problem is distinctively hard because its solution cannot be determined by means of experiment and observation alone. Through increasingly sophisticated experiments and advanced neuroimaging technology, neuroscience is giving us better and better maps of what kinds of conscious experiences depend on what kinds of physical brain states. Neuroscience might also eventually be able to tell us what all of our conscious brain states have in common: for example, that they have high levels of integrated information (per Giulio Tononi’s Integrated Information Theory), that they broadcast a message in the brain (per Bernard Baars’ Global Workspace Theory), or that they generate 40-hertz oscillations (per an early proposal by Francis Crick and Christof Koch). But in all these theories, the hard problem remains. How and why does a system that integrates information, broadcasts a message, or oscillates at 40 hertz feel pain or delight? The appearance of consciousness from mere physical complexity seems equally mysterious no matter what precise form the complexity takes.

Nor would it seem to help to discover the concrete biochemical, and ultimately physical, details that underlie this complexity. No matter how precisely we could specify the mechanisms underlying, for example, the perception and recognition of tomatoes, we could still ask: Why is this process accompanied by the subjective experience of red, or any experience at all? Why couldn’t we have just the physical process, but no consciousness?

Other natural phenomena, from dark matter to life, as puzzling as they may be, don’t seem nearly as intractable. In principle, we can see that understanding them is fundamentally a matter of gathering more physical detail: building better telescopes and other instruments, designing better experiments, or noticing new laws and patterns in the data we already have. If we were somehow granted knowledge of every physical detail and pattern in the universe, we would not expect these problems to persist. They would dissolve in the same way the problem of heritability dissolved upon the discovery of the physical details of DNA. But the hard problem of consciousness would seem to persist even given knowledge of every imaginable kind of physical detail.

In this way, the deep nature of consciousness appears to lie beyond scientific reach. We take it for granted, however, that physics can in principle tell us everything there is to know about the nature of physical matter. Physics tells us that matter is made of particles and fields, which have properties such as mass, charge, and spin. Physics may not yet have discovered all the fundamental properties of matter, but it is getting closer.

Yet there is reason to believe that there must be more to matter than what physics tells us. Broadly speaking, physics tells us what fundamental particles do or how they relate to other things, but nothing about how they are in themselves, independently of other things.

Charge, for example, is the property of repelling other particles with the same charge and attracting particles with the opposite charge. In other words, charge is a way of relating to other particles. Similarly, mass is the property of responding to applied forces and of gravitationally attracting other particles with mass, which might in turn be described as curving spacetime or interacting with the Higgs field. These are also things that particles do or ways of relating to other particles and to spacetime.
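As a standard illustration of that point (not drawn from the article itself), Coulomb's law, Newtonian gravitation, and Newton's second law all define charge and mass purely by what a particle does to other particles or in response to forces:

```latex
F_{\text{electric}} = k_e \,\frac{q_1 q_2}{r^2}, \qquad
F_{\text{gravity}}  = G  \,\frac{m_1 m_2}{r^2}, \qquad
F = m a
```

Each equation relates two bodies, or a body and an applied force; none of them says what the charged or massive "stuff" is in itself.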

In general, it seems all fundamental physical properties can be described mathematically. Galileo, the father of modern science, famously professed that the great book of nature is written in the language of mathematics. Yet mathematics is a language with distinct limitations. It can only describe abstract structures and relations. For example, all we know about numbers is how they relate to the other numbers and other mathematical objects—that is, what they “do,” the rules they follow when added, multiplied, and so on. Similarly, all we know about a geometrical object such as a node in a graph is its relations to other nodes. In the same way, a purely mathematical physics can tell us only about the relations between physical entities or the rules that govern their behavior.

One might wonder how physical particles are, independently of what they do or how they relate to other things. What are physical things like in themselves, or intrinsically? Some have argued that there is nothing more to particles than their relations, but intuition rebels at this claim. For there to be a relation, there must be two things being related. Otherwise, the relation is empty—a show that goes on without performers, or a castle constructed out of thin air. In other words, physical structure must be realized or implemented by some stuff or substance that is itself not purely structural. Otherwise, there would be no clear difference between physical and mere mathematical structure, or between the concrete universe and a mere abstraction. But what could this stuff that realizes or implements physical structure be, and what are the intrinsic, non-structural properties that characterize it? This problem is a close descendant of Kant’s classic problem of knowledge of things-in-themselves. The philosopher Galen Strawson has called it the hard problem of matter.

It is ironic, because we usually think of physics as describing the hardware of the universe—the real, concrete stuff. But in fact physical matter (at least the aspect that physics tells us about) is more like software: a logical and mathematical structure. According to the hard problem of matter, this software needs some hardware to implement it. Physicists have brilliantly reverse-engineered the algorithms—or the source code—of the universe, but left out their concrete implementation.

The hard problem of matter is distinct from other problems of interpretation in physics. Current physics presents puzzles, such as: How can matter be both particle-like and wave-like? What is quantum wavefunction collapse? Are continuous fields or discrete individuals more fundamental? But these are all questions of how to properly conceive of the structure of reality. The hard problem of matter would arise even if we had answers to all such questions about structure. No matter what structure we are talking about, from the most bizarre and unusual to the perfectly intuitive, there will be a question of how it is non-structurally implemented.

Indeed, the problem arises even for Newtonian physics, which describes the structure of reality in a way that makes perfect intuitive sense. Roughly speaking, Newtonian physics says that matter consists of solid particles that interact either by bumping into each other or by gravitationally attracting each other. But what is the intrinsic nature of the stuff that behaves in this simple and intuitive way? What is the hardware that implements the software of Newton’s equations? One might think the answer is simple: It is implemented by solid particles. But solidity is just the behavior of resisting intrusion and spatial overlap by other particles—that is, another mere relation to other particles and space. The hard problem of matter arises for any structural description of reality no matter how clear and intuitive at the structural level.

Like the hard problem of consciousness, the hard problem of matter cannot be solved by experiment and observation or by gathering more physical detail. This will only reveal more structure, at least as long as physics remains a discipline dedicated to capturing reality in mathematical terms.

Might the hard problem of consciousness and the hard problem of matter be connected? There is already a tradition for connecting problems in physics with the problem of consciousness, namely in the area of quantum theories of consciousness. Such theories are sometimes disparaged as fallaciously inferring that because quantum physics and consciousness are both mysterious, together they will somehow be less so. The idea of a connection between the hard problem of consciousness and the hard problem of matter could be criticized on the same grounds. Yet a closer look reveals that these two problems are complementary in a much deeper and more determinate way. One of the first philosophers to notice the connection was Leibniz all the way back in the late 17th century, but the precise modern version of the idea is due to Bertrand Russell. Recently, contemporary philosophers including Chalmers and Strawson have rediscovered it. It goes like this.

The hard problem of matter calls for non-structural properties, and consciousness is the one phenomenon we know that might meet this need. Consciousness is full of qualitative properties, from the redness of red and the discomfort of hunger to the phenomenology of thought. Such experiences, or “qualia,” may have internal structure, but there is more to them than structure. We know something about what conscious experiences are like in and of themselves, not just how they function and relate to other properties.

For example, think of someone who has never seen any red objects and has never been told that the color red exists. That person knows nothing about how redness relates to brain states, to physical objects such as tomatoes, or to wavelengths of light, nor how it relates to other colors (for example, that it’s similar to orange but very different from green). One day, the person spontaneously hallucinates a big red patch. It seems this person will thereby learn what redness is like, even though he or she doesn’t know any of its relations to other things. The knowledge he or she acquires will be non-relational knowledge of what redness is like in and of itself.

This suggests that consciousness—of some primitive and rudimentary form—is the hardware that the software described by physics runs on. The physical world can be conceived of as a structure of conscious experiences. Our own richly textured experiences implement the physical relations that make up our brains. Some simple, elementary forms of experiences implement the relations that make up fundamental particles. Take an electron, for example. What an electron does is to attract, repel, and otherwise relate to other entities in accordance with fundamental physical equations. What performs this behavior, we might think, is simply a stream of tiny electron experiences. Electrons and other particles can be thought of as mental beings with physical powers; as streams of experience in physical relations to other streams of experience.

This idea sounds strange, even mystical, but it comes out of a careful line of thought about the limitations of science. Leibniz and Russell were determined scientific rationalists—as evidenced by their own immortal contributions to physics, logic, and mathematics—but equally deeply committed to the reality and uniqueness of consciousness. They concluded that in order to give both phenomena their proper due, a radical change of thinking is required.

And a radical change it truly is. Philosophers and neuroscientists often assume that consciousness is like software, whereas the brain is like hardware. This suggestion turns this completely around. When we look at what physics tells us about the brain, we actually just find software—purely a set of relations—all the way down. And consciousness is in fact more like hardware, because of its distinctly qualitative, non-structural properties. For this reason, conscious experiences are just the kind of things that physical structure could be the structure of.

Given this solution to the hard problem of matter, the hard problem of consciousness all but dissolves. There is no longer any question of how consciousness arises from non-conscious matter, because all matter is intrinsically conscious. There is no longer a question of how consciousness depends on matter, because it is matter that depends on consciousness—as relations depend on relata, structure depends on realizer, or software on hardware.

One might object that this is plain anthropomorphism, an illegitimate projection of human qualities on nature. After all, why do we think that physical structure needs some intrinsic realizer? Is it not because our own brains have intrinsic, conscious properties, and we like to think of nature in familiar terms? But this objection does not hold. The idea that intrinsic properties are needed to distinguish real and concrete from mere abstract structure is entirely independent of consciousness. Moreover, the charge of anthropomorphism can be met by a countercharge of human exceptionalism. If the brain is indeed entirely material, why should it be so different from the rest of matter when it comes to intrinsic properties?

This view, that consciousness constitutes the intrinsic aspect of physical reality, goes by many different names, but one of the most descriptive is “dual-aspect monism.” Monism contrasts with dualism, the view that consciousness and matter are fundamentally different substances or kinds of stuff. Dualism is widely regarded as scientifically implausible, because science shows no evidence of any non-physical forces that influence the brain.

Monism holds that all of reality is made of the same kind of stuff. It comes in several varieties. The most common monistic view is physicalism (also known as materialism), the view that everything is made of physical stuff, which only has one aspect, the one revealed by physics. This is the predominant view among philosophers and scientists today. According to physicalism, a complete, purely physical description of reality leaves nothing out. But according to the hard problem of consciousness, any purely physical description of a conscious system such as the brain at least appears to leave something out: It could never fully capture what it is like to be that system. That is to say, it captures the objective but not the subjective aspects of consciousness: the brain function, but not our inner mental life.

Russell’s dual-aspect monism tries to fill in this deficiency. It accepts that the brain is a material system that behaves in accordance with the laws of physics. But it adds another, intrinsic aspect to matter which is hidden from the extrinsic, third-person perspective of physics and which therefore cannot be captured by any purely physical description. But although this intrinsic aspect eludes our physical theories, it does not elude our inner observations. Our own consciousness constitutes the intrinsic aspect of the brain, and this is our clue to the intrinsic aspect of other physical things. To paraphrase Arthur Schopenhauer’s succinct response to Kant: We can know the thing-in-itself because we are it.

Dual-aspect monism comes in moderate and radical forms. Moderate versions take the intrinsic aspect of matter to consist of so-called protoconscious or “neutral” properties: properties that are unknown to science, but also different from consciousness. The nature of such neither-mental-nor-physical properties seems quite mysterious. Like the aforementioned quantum theories of consciousness, moderate dual-aspect monism can therefore be accused of merely adding one mystery to another and expecting them to cancel out.

The most radical version of dual-aspect monism takes the intrinsic aspect of reality to consist of consciousness itself. This is decidedly not the same as subjective idealism, the view that the physical world is merely a structure within human consciousness, and that the external world is in some sense an illusion. According to dual-aspect monism, the external world exists entirely independently of human consciousness. But it would not exist independently of any kind of consciousness, because all physical things are associated with some form of consciousness of their own, as their own intrinsic realizer, or hardware.

As a solution to the hard problem of consciousness, dual-aspect monism faces objections of its own. The most common objection is that it results in panpsychism, the view that all things are associated with some form of consciousness. To critics, it’s just too implausible that fundamental particles are conscious. And indeed this idea takes some getting used to. But consider the alternatives. Dualism looks implausible on scientific grounds. Physicalism takes the objective, scientifically accessible aspect of reality to be the only reality, which arguably implies that the subjective aspect of consciousness is an illusion. Maybe so—but shouldn’t we be more confident that we are conscious, in the full subjective sense, than that particles are not?

A second important objection is the so-called combination problem. How and why does the complex, unified consciousness of our brains result from putting together particles with simple consciousness? This question looks suspiciously similar to the original hard problem. I and other defenders of panpsychism have argued that the combination problem is nevertheless not as hard as the original hard problem. In some ways, it is easier to see how to get one form of conscious matter (such as a conscious brain) from another form of conscious matter (such as a set of conscious particles) than how to get conscious matter from non-conscious matter. But many find this unconvincing. Perhaps it is just a matter of time, though. The original hard problem, in one form or another, has been pondered by philosophers for centuries. The combination problem has received much less attention, which gives more hope for a yet undiscovered solution.

The possibility that consciousness is the real concrete stuff of reality, the fundamental hardware that implements the software of our physical theories, is a radical idea. It completely inverts our ordinary picture of reality in a way that can be difficult to fully grasp. But it may solve two of the hardest problems in science and philosophy at once.

Hedda Hassel Mørch is a Norwegian philosopher and postdoctoral researcher hosted by the Center for Mind, Brain, and Consciousness at NYU. She works on the combination problem and other topics related to dual-aspect monism and panpsychism.


Materialism belongs to the class of monist ontology, and is thus different from ontological theories based on dualism or pluralism. For singular explanations of the phenomenal reality, materialism would be in contrast to idealism, neutral monism, and spiritualism. It can also contrast with phenomenalism, vitalism, and dual-aspect monism. Its materiality can, in some ways, be linked to the concept of determinism, as espoused by Enlightenment thinkers.

Despite the large number of philosophical schools and subtle nuances between many, [1] [2] [3] all philosophies are said to fall into one of two primary categories, defined in contrast to each other: idealism and materialism. [a] The basic proposition of these two categories pertains to the nature of reality—the primary distinction between them is the way they answer two fundamental questions: "what does reality consist of?" and "how does it originate?" To idealists, spirit or mind or the objects of mind (ideas) are primary, and matter secondary. To materialists, matter is primary, and mind or spirit or ideas are secondary—the product of matter acting upon matter. [3]

The materialist view is perhaps best understood in its opposition to the doctrines of immaterial substance applied to the mind historically by René Descartes; however, by itself materialism says nothing about how material substance should be characterized. In practice, it is frequently assimilated to one variety of physicalism or another.

Modern philosophical materialists extend the definition of matter to include other scientifically observable entities such as energy, forces and the curvature of space; however, philosophers such as Mary Midgley suggest that the concept of "matter" is elusive and poorly defined. [4]

During the 19th century, Karl Marx and Friedrich Engels extended the concept of materialism to elaborate a materialist conception of history centered on the roughly empirical world of human activity (practice, including labor) and the institutions created, reproduced or destroyed by that activity. They also developed dialectical materialism, by taking Hegelian dialectics, stripping them of their idealist aspects, and fusing them with materialism (see Modern philosophy). [5]

Non-reductive materialism

Materialism is often associated with reductionism, according to which the objects or phenomena individuated at one level of description, if they are genuine, must be explicable in terms of the objects or phenomena at some other level of description—typically, at a more reduced level.

Non-reductive materialism explicitly rejects this notion, however, taking the material constitution of all particulars to be consistent with the existence of real objects, properties or phenomena not explicable in the terms canonically used for the basic material constituents. Jerry Fodor argues this view, according to which empirical laws and explanations in "special sciences" like psychology or geology are invisible from the perspective of basic physics. [6]

Before Common Era

Materialism developed, possibly independently, in several geographically separated regions of Eurasia during what Karl Jaspers termed the Axial Age (c. 800–200 BC).

In ancient Indian philosophy, materialism developed around 600 BC with the works of Ajita Kesakambali, Payasi, Kanada and the proponents of the Cārvāka school of philosophy. Kanada became one of the early proponents of atomism. The Nyaya–Vaisesika school (c. 600–100 BC) developed one of the earliest forms of atomism (although their proofs of God and their positing that consciousness was not material preclude labelling them as materialists). Buddhist atomism and the Jaina school continued the atomic tradition.

Ancient Greek atomists like Leucippus, Democritus and Epicurus prefigure later materialists. The Latin poem De Rerum Natura by Lucretius (99 – c. 55 BC) reflects the mechanistic philosophy of Democritus and Epicurus. According to this view, all that exists is matter and void, and all phenomena result from different motions and conglomerations of base material particles called atoms (literally 'indivisibles'). De Rerum Natura provides mechanistic explanations for phenomena such as erosion, evaporation, wind, and sound. Famous principles like "nothing can touch body but body" first appeared in the works of Lucretius. Democritus and Epicurus, however, did not hold to a monist ontology, since they held to the ontological separation of matter and space (i.e. space being "another kind" of being), indicating that the definition of materialism is wider than the given scope of this article.

Early Common Era

Wang Chong (27 – c. 100 AD) was a Chinese thinker of the early Common Era said to be a materialist. [7] Later Indian materialist Jayaraashi Bhatta (6th century) in his work Tattvopaplavasimha ('The upsetting of all principles') refuted the Nyāya Sūtra epistemology. The materialistic Cārvāka philosophy appears to have died out some time after 1400; when Madhavacharya compiled Sarva-darśana-samgraha ('a digest of all philosophies') in the 14th century, he had no Cārvāka (or Lokāyata) text to quote from or refer to. [8]

Modern philosophy

Thomas Hobbes (1588–1679) [10] and Pierre Gassendi (1592–1665) [11] represented the materialist tradition in opposition to the attempts of René Descartes (1596–1650) to provide the natural sciences with dualist foundations. There followed the materialist and atheist abbé Jean Meslier (1664–1729), along with the works of the French materialists: Julien Offray de La Mettrie, German-French Baron d'Holbach (1723–1789), Denis Diderot (1713–1784), and other French Enlightenment thinkers. In England, John "Walking" Stewart (1747–1822) insisted on seeing matter as endowed with a moral dimension, which had a major impact on the philosophical poetry of William Wordsworth (1770–1850).

In late modern philosophy, German atheist anthropologist Ludwig Feuerbach would signal a new turn in materialism through his book The Essence of Christianity (1841), which presented a humanist account of religion as the outward projection of man's inward nature. Feuerbach introduced anthropological materialism, a version of materialism that views materialist anthropology as the universal science. [12]

Feuerbach's variety of materialism would go on to heavily influence Karl Marx, [13] who in the late 19th century elaborated the concept of historical materialism—the basis for what Marx and Friedrich Engels outlined as scientific socialism:

The materialist conception of history starts from the proposition that the production of the means to support human life and, next to production, the exchange of things produced, is the basis of all social structure; that in every society that has appeared in history, the manner in which wealth is distributed and society divided into classes or orders is dependent upon what is produced, how it is produced, and how the products are exchanged. From this point of view, the final causes of all social changes and political revolutions are to be sought, not in men's brains, not in men's better insights into eternal truth and justice, but in changes in the modes of production and exchange. They are to be sought, not in the philosophy, but in the economics of each particular epoch.

Through his Dialectics of Nature (1883), Engels later developed a "materialist dialectic" philosophy of nature, a worldview that would be given the title dialectical materialism by Georgi Plekhanov, the father of Russian Marxism. [14] In early 20th-century Russian philosophy, Vladimir Lenin further developed dialectical materialism in his book Materialism and Empirio-criticism (1909), which connected the political conceptions put forth by his opponents to their anti-materialist philosophies.

A more naturalist-oriented materialist school of thought that developed in the middle of the 19th century was German materialism, which included Ludwig Büchner (1824–99), the Dutch-born Jacob Moleschott (1822–93) and Carl Vogt (1817–95), [15] [16] even though they held different views on core issues such as evolution and the origins of life in nature. [17]

Analytic philosophy

Contemporary analytic philosophers (e.g. Daniel Dennett, Willard Van Orman Quine, Donald Davidson, and Jerry Fodor) operate within a broadly physicalist or scientific materialist framework, producing rival accounts of how best to accommodate the mind, including functionalism, anomalous monism, identity theory, and so on. [18]

Scientific materialism is often synonymous with, and has typically been described as, reductive materialism. In the early 21st century, Paul and Patricia Churchland [19] [20] advocated a radically contrasting position (at least, in regard to certain hypotheses): eliminative materialism. Eliminative materialism holds that some mental phenomena simply do not exist at all, and that talk of those mental phenomena reflects a totally spurious "folk psychology" and introspection illusion. A materialist of this variety might believe that a concept like "belief" simply has no basis in fact (e.g. the way folk science speaks of demon-caused illnesses).

With reductive materialism being at one end of a continuum (our theories will reduce to facts) and eliminative materialism on the other (certain theories will need to be eliminated in light of new facts), revisionary materialism is somewhere in the middle. [18]

Continental philosophy

Contemporary continental philosopher Gilles Deleuze has attempted to rework and strengthen classical materialist ideas. [21] Contemporary theorists such as Manuel DeLanda, working with this reinvigorated materialism, have come to be classified as new materialist in persuasion. [22] New materialism has now become its own specialized subfield of knowledge, with courses being offered on the topic at major universities, as well as numerous conferences, edited collections and monographs devoted to it.

Jane Bennett's book Vibrant Matter (2010) has been particularly instrumental in bringing theories of monist ontology and vitalism back into a critical theoretical fold dominated by poststructuralist theories of language and discourse. [23] Scholars such as Mel Y. Chen and Zakiyyah Iman Jackson, however, have critiqued this body of new materialist literature for its neglect in considering the materiality of race and gender in particular. [24] [25]

Métis scholar Zoe Todd, as well as Mohawk (Bear Clan, Six Nations) and Anishinaabe scholar Vanessa Watts, [26] query the colonial orientation of the race for a "new" materialism. [27] Watts in particular describes the tendency to regard matter as a subject of feminist or philosophical care as a tendency that is too invested in the reanimation of a Eurocentric tradition of inquiry at the expense of an Indigenous ethic of responsibility. [28] Other scholars, such as Helene Vosters, echo their concerns and have questioned whether there is anything particularly "new" about this so-called "new materialism," as Indigenous and other animist ontologies have attested to what might be called the "vibrancy of matter" for centuries. [29] Other scholars such as Thomas Nail have critiqued "vitalist" versions of new materialism for its depoliticizing "flat ontology" and for being ahistorical in nature. [30] [31]

Quentin Meillassoux proposed speculative materialism, a post-Kantian return to David Hume which is also based on materialist ideas. [32]

The nature and definition of matter—like other key concepts in science and philosophy—have occasioned much debate: [33]

  • Is there a single kind of matter (hyle) that everything is made of, or multiple kinds?
  • Is matter a continuous substance capable of expressing multiple forms (hylomorphism) [34] or a number of discrete, unchanging constituents (atomism)? [35]
  • Does it have intrinsic properties (substance theory) [36][37] or is it lacking them (prima materia)?

One challenge to the conventional concept of matter as tangible 'stuff' came with the rise of field physics in the 19th century. Relativity shows that matter and energy (including the spatially distributed energy of fields) are interchangeable. This enables the ontological view that energy is prima materia and matter is one of its forms. In contrast, the Standard Model of particle physics uses quantum field theory to describe all interactions. On this view it could be said that fields are prima materia and energy is a property of the field.
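The interchangeability mentioned above is the familiar mass–energy relation from special relativity, added here only for reference:

```latex
E = mc^2
```

A quantity of mass m thus corresponds to an energy E, and field energy can equally be assigned a mass E/c^2, which is what allows either energy or fields to be cast in the role of prima materia.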

According to the dominant cosmological model, the Lambda-CDM model, less than 5% of the universe's energy density is made up of the "matter" described by the Standard Model, and the majority of the universe is composed of dark matter and dark energy, with little agreement among scientists about what these are made of. [38]

With the advent of quantum physics, some scientists believed the concept of matter had merely changed, while others believed the conventional position could no longer be maintained. For instance Werner Heisenberg said, "The ontology of materialism rested upon the illusion that the kind of existence, the direct 'actuality' of the world around us, can be extrapolated into the atomic range. This extrapolation, however, is impossible…atoms are not things." [39]

The concept of matter has changed in response to new scientific discoveries. Thus materialism has no definite content independent of the particular theory of matter on which it is based. According to Noam Chomsky, any property can be considered material, if one defines matter such that it has that property. [40]

George Stack distinguishes between materialism and physicalism:

In the twentieth century, physicalism has emerged out of positivism. Physicalism restricts meaningful statements to physical bodies or processes that are verifiable or in principle verifiable. It is an empirical hypothesis that is subject to revision and, hence, lacks the dogmatic stance of classical materialism. Herbert Feigl defended physicalism in the United States and consistently held that mental states are brain states and that mental terms have the same referent as physical terms. The twentieth century has witnessed many materialist theories of the mental, and much debate surrounding them. [41]

However, not all conceptions of physicalism are tied to verificationist theories of meaning or direct realist accounts of perception. Rather, physicalists believe that no "element of reality" is missing from the mathematical formalism of our best description of the world. "Materialist" physicalists also believe that the formalism describes fields of insentience. In other words, the intrinsic nature of the physical is non-experiential. [ citation needed ]

From contemporary physicists

Rudolf Peierls, a physicist who played a major role in the Manhattan Project, rejected materialism: "The premise that you can describe in terms of physics the whole function of a human being… including knowledge and consciousness, is untenable. There is still something missing." [42]

Erwin Schrödinger said, "Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else." [43]

Werner Heisenberg, who came up with the uncertainty principle, wrote, "The ontology of materialism rested upon the illusion that the kind of existence, the direct 'actuality' of the world around us, can be extrapolated into the atomic range. This extrapolation, however, is impossible… Atoms are not things." [44]

Quantum mechanics

Some 20th-century physicists (e.g., Eugene Wigner [45] and Henry Stapp), [46] as well as modern-day physicists and science writers (e.g., Stephen Barr, [47] Paul Davies, and John Gribbin) have argued that materialism is flawed due to certain recent scientific findings in physics, such as quantum mechanics and chaos theory. According to Gribbin and Davies (1991):

Then came our Quantum theory, which totally transformed our image of matter. The old assumption that the microscopic world of atoms was simply a scaled-down version of the everyday world had to be abandoned. Newton's deterministic machine was replaced by a shadowy and paradoxical conjunction of waves and particles, governed by the laws of chance, rather than the rigid rules of causality. An extension of the quantum theory goes beyond even this: it paints a picture in which solid matter dissolves away, to be replaced by weird excitations and vibrations of invisible field energy. Quantum physics undermines materialism because it reveals that matter has far less "substance" than we might believe. But another development goes even further by demolishing Newton's image of matter as inert lumps. This development is the theory of chaos, which has recently gained widespread attention.

Digital physics

The objections of Davies and Gribbin are shared by proponents of digital physics, who view information, rather than matter, as fundamental. Famous physicist and proponent of digital physics John Archibald Wheeler wrote, "all matter and all things physical are information-theoretic in origin and this is a participatory universe." [48] Their objections were also shared by some founders of quantum theory, such as Max Planck, who wrote:

As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter.

James Jeans concurred with Planck saying, "The Universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter." [49]

Religious and spiritual views

According to Constantin Gutberlet, writing in the Catholic Encyclopedia (1911), materialism, defined as "a philosophical system which regards matter as the only reality in the world…denies the existence of God and the soul." [50] In this view, materialism could be perceived as incompatible with world religions that ascribe existence to immaterial objects. [51] Materialism may be conflated with atheism; [ citation needed ] according to Friedrich A. Lange (1892), "Diderot has not always in the Encyclopædia expressed his own individual opinion, but it is just as true that at its commencement he had not yet got as far as Atheism and Materialism." [52]

Most of Hinduism and transcendentalism regard all matter as an illusion, or maya, blinding humans from knowing the truth. Transcendental experiences like the perception of Brahman are considered to destroy the illusion. [53]

Joseph Smith, the founder of the Latter Day Saint movement, taught: "There is no such thing as immaterial matter. All spirit is matter, but it is more fine or pure, and can only be discerned by purer eyes; we cannot see it, but when our bodies are purified we shall see that it is all matter." [54] This spirit element is believed to always have existed and to be co-eternal with God. [55]

Mary Baker Eddy, the founder of the Christian Science movement, denied the existence of matter on the basis of the allness of Mind (which she regarded as a synonym for God). [56]

Philosophical objections

In the Critique of Pure Reason, Immanuel Kant argued against materialism in defending his transcendental idealism (as well as offering arguments against subjective idealism and mind–body dualism). [57] [58] However, Kant, with his refutation of idealism, argues that change and time require an enduring substrate. [59] [60]

Postmodern/poststructuralist thinkers also express a skepticism about any all-encompassing metaphysical scheme. Philosopher Mary Midgley [61] argues that materialism is a self-refuting idea, at least in its eliminative materialist form. [62] [63] [64] [65]

Varieties of idealism

Arguments for idealism, such as those of Hegel and Berkeley, often take the form of an argument against materialism; indeed, the idealism of Berkeley was called immaterialism. Now, matter can be argued to be redundant, as in bundle theory, and mind-independent properties can, in turn, be reduced to subjective percepts. Berkeley presents an example of the latter by pointing out that it is impossible to gather direct evidence of matter, as there is no direct experience of matter; all that is experienced is perception, whether internal or external. As such, the existence of matter can only be assumed from the apparent (perceived) stability of perceptions; it finds absolutely no evidence in direct experience. [ citation needed ]

If matter and energy are seen as necessary to explain the physical world, but incapable of explaining mind, dualism results. Emergence, holism and process philosophy seek to ameliorate the perceived shortcomings of traditional (especially mechanistic) materialism without abandoning materialism entirely. [ citation needed ]

Materialism as methodology

Some critics object to materialism as part of an overly skeptical, narrow or reductivist approach to theorizing, rather than to the ontological claim that matter is the only substance. Particle physicist and Anglican theologian John Polkinghorne objects to what he calls promissory materialism—claims that materialistic science will eventually succeed in explaining phenomena it has not so far been able to explain. [66] Polkinghorne prefers "dual-aspect monism" to materialism. [67]

Some scientific materialists have been criticized for failing to provide clear definitions for what constitutes matter, leaving the term materialism without any definite meaning. Noam Chomsky states that since the concept of matter may be affected by new scientific discoveries, as has happened in the past, scientific materialists are being dogmatic in assuming the opposite. [40]

a. ^ Indeed, it has been noted that it is difficult if not impossible to define one category without contrasting it with the other. [2] [3]


"Determinism" may commonly refer to any of the following viewpoints.

Causal determinism

Causal determinism, sometimes synonymous with historical determinism (a sort of path dependence), is "the idea that every event is necessitated by antecedent events and conditions together with the laws of nature." [4] However, it is a broad enough term to consider that: [5]

… one's deliberations, choices, and actions will often be necessary links in the causal chain that brings something about. In other words, even though our deliberations, choices, and actions are themselves determined like everything else, it is still the case, according to causal determinism, that the occurrence or existence of yet other things depends upon our deliberating, choosing and acting in a certain way.

Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. The relation between events may not be specified, nor the origin of that universe. Causal determinists believe that there is nothing in the universe that is uncaused or self-caused. Causal determinism has also been considered more generally as the idea that everything that happens or exists is caused by antecedent conditions. [6] In the case of nomological determinism, these conditions are considered events also, implying that the future is determined completely by preceding events—a combination of prior states of the universe and the laws of nature. [4] Yet they can also be considered metaphysical of origin (such as in the case of theological determinism). [5]

Nomological determinism

Nomological determinism, generally synonymous with physical determinism (its opposite being physical indeterminism), the most common form of causal determinism, is the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, that every occurrence results inevitably from prior events. Nomological determinism is sometimes illustrated by the thought experiment of Laplace's demon. [7] Nomological determinism is sometimes called scientific determinism, although that is a misnomer.

Necessitarianism

Necessitarianism is closely related to the causal determinism described above. It is a metaphysical principle that denies all mere possibility; there is exactly one way for the world to be. Leucippus claimed there were no uncaused events, and that everything occurs for a reason and by necessity. [8]

Predeterminism

Predeterminism is the idea that all events are determined in advance. [9] [10] The concept is often argued by invoking causal determinism, implying that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. In the case of predeterminism, this chain of events has been pre-established, and human actions cannot interfere with the outcomes of this pre-established chain.

Predeterminism can be used to mean such pre-established causal determinism, in which case it is categorised as a specific type of determinism. [9] [11] It can also be used interchangeably with causal determinism—in the context of its capacity to determine future events. [9] [12] Despite this, predeterminism is often considered as independent of causal determinism. [13] [14]

Biological determinism

The term predeterminism is also frequently used in the context of biology and heredity, in which case it represents a form of biological determinism, sometimes called genetic determinism. [15] Biological determinism is the idea that all human behaviors, beliefs, and desires are fixed by human genetic nature.

Fatalism

Fatalism is normally distinguished from "determinism", [16] as a form of teleological determinism. Fatalism is the idea that everything is fated to happen, so that humans have no control over their future. Fate has arbitrary power, and need not follow any causal or otherwise deterministic laws. [6] Types of fatalism include hard theological determinism and the idea of predestination, where there is a God who determines all that humans will do. This may be accomplished either by knowing their actions in advance, via some form of omniscience [17] or by decreeing their actions in advance. [18]

Theological determinism

Theological determinism is a form of determinism that holds that all events that happen are either preordained (i.e., predestined) to happen by a monotheistic deity, or are destined to occur given its omniscience. Two forms of theological determinism exist, referred to as strong and weak theological determinism. [19]

Strong theological determinism is based on the concept of a creator deity dictating all events in history: "everything that happens has been predestined to happen by an omniscient, omnipotent divinity." [20]

Weak theological determinism is based on the concept of divine foreknowledge—"because God's omniscience is perfect, what God knows about the future will inevitably happen, which means, consequently, that the future is already fixed." [21] There exist slight variations on this categorisation, however. Some claim either that theological determinism requires predestination of all events and outcomes by the divinity—i.e., they do not classify the weaker version as theological determinism unless libertarian free will is assumed to be denied as a consequence—or that the weaker version does not constitute theological determinism at all. [22]

With respect to free will, "theological determinism is the thesis that God exists and has infallible knowledge of all true propositions including propositions about our future actions," a more minimal criterion designed to encapsulate all forms of theological determinism. [23]

Theological determinism can also be seen as a form of causal determinism, in which the antecedent conditions are the nature and will of God. [5] Some have asserted that Augustine of Hippo introduced theological determinism into Christianity in 412 CE, whereas all prior Christian authors supported free will against Stoic and Gnostic determinism. [24] However, there are many Biblical passages that seem to support the idea of some kind of theological determinism including Psalm 115:3, Acts 2:23, and Lamentations 2:17.

Logical determinism

Logical determinism, or determinateness, is the notion that all propositions, whether about the past, present, or future, are either true or false. Note that one can support causal determinism without necessarily supporting logical determinism and vice versa (depending on one's views on the nature of time, but also randomness). The problem of free will is especially salient with logical determinism: how can choices be free, given that propositions about the future already have a truth value in the present (i.e. it is already determined as either true or false)? This is referred to as the problem of future contingents.

Often synonymous with logical determinism are the ideas behind spatio-temporal determinism or eternalism: the view of special relativity. J. J. C. Smart, a proponent of this view, uses the term tenselessness to describe the simultaneous existence of past, present, and future. In physics, the "block universe" of Hermann Minkowski and Albert Einstein assumes that time is a fourth dimension (like the three spatial dimensions). In other words, all the other parts of time are real, like the city blocks up and down a street, although the order in which they appear depends on the driver (see Rietdijk–Putnam argument).

Adequate determinism

Adequate determinism is the idea that, because of quantum decoherence, quantum indeterminacy can be ignored for most macroscopic events. Random quantum events "average out" in the limit of large numbers of particles (where the laws of quantum mechanics asymptotically approach the laws of classical mechanics). [25] Stephen Hawking explains a similar idea: he says that the microscopic world of quantum mechanics is one of determined probabilities. That is, quantum effects rarely alter the predictions of classical mechanics, which are quite accurate (albeit still not perfectly certain) at larger scales. [26] Something as large as an animal cell, then, would be "adequately determined" (even in light of quantum indeterminacy).
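A toy numerical sketch (not from the cited sources) of the "averaging out" claim: individually random two-valued events produce a sample mean that becomes effectively fixed as the number of events grows, which is the intuition behind calling macroscopic outcomes "adequately determined". The helper name below is purely illustrative.

```python
import random

# Each "quantum event" yields +1 or -1 with equal probability: individually
# indeterminate, but the average over many events converges (law of large numbers).
def mean_outcome(n_events: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    return sum(rng.choice((-1, 1)) for _ in range(n_events)) / n_events

for n in (10, 1_000, 100_000, 1_000_000):
    print(f"{n:>9} events -> mean = {mean_outcome(n, seed=n):+.5f}")

# The fluctuation of the mean shrinks roughly like 1/sqrt(n), so for the ~10^23
# particles in a macroscopic object the residual quantum "noise" is negligible.
```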

Many-worlds

The many-worlds interpretation accepts the linear causal sets of sequential events with adequate consistency, yet also suggests constant forking of causal chains creating "multiple universes" to account for multiple outcomes from single events. [27] That is, the causal sets of events leading to the present are all valid, yet they appear as a singular linear time stream within a much broader, unseen conic probability field of other outcomes that "split off" from the locally observed timeline. Under this model causal sets are still "consistent" yet not exclusive to singular iterated outcomes.

The interpretation sidesteps the exclusive retrospective causal chain problem of "could not have done otherwise" by suggesting that "the other outcome does exist" in a set of parallel universe time streams that split off when the action occurred. This theory is sometimes described with the example of agent-based choices, but more involved models argue that recursive causal splitting occurs with all particle wave functions at play. [28] This model is highly contested, with multiple objections from the scientific community.

Philosophical varieties

Determinism in nature/nurture controversy

Although some of the above forms of determinism concern human behaviors and cognition, others frame themselves as an answer to the debate on nature and nurture. They will suggest that one factor will entirely determine behavior. As scientific understanding has grown, however, the strongest versions of these theories have been widely rejected as a single-cause fallacy. [29] In other words, the modern deterministic theories attempt to explain how the interaction of both nature and nurture is entirely predictable. The concept of heritability has been helpful in making this distinction.

  • Biological determinism, sometimes called genetic determinism, is the idea that all human behaviors, beliefs, and desires are fixed by human genetic nature.
  • Behaviorism involves the idea that all behavior can be traced to specific causes—either environmental or reflexive. John B. Watson and B. F. Skinner developed this nurture-focused determinism.
  • Cultural determinism, along with social determinism, is the nurture-focused theory that the culture in which we are raised determines who we are.
  • Environmental determinism, also known as climatic or geographical determinism, proposes that the physical environment, rather than social conditions, determines culture. Supporters of environmental determinism often [quantify] also support behavioral determinism. Key proponents of this notion have included Ellen Churchill Semple, Ellsworth Huntington, Thomas Griffith Taylor and possibly Jared Diamond, although his status as an environmental determinist is debated. [30]

Determinism and prediction

Other 'deterministic' theories actually seek only to highlight the importance of a particular factor in predicting the future. These theories often use the factor as a sort of guide or constraint on the future. They need not suppose that complete knowledge of that one factor would allow us to make perfect predictions.

  • Psychological determinism can mean that humans must act according to reason, but it can also be synonymous with some sort of psychological egoism. The latter is the view that humans will always act according to their perceived best interest.
  • Linguistic determinism claims that our language determines (at least limits) the things we can think and say and thus know. The Sapir–Whorf hypothesis argues that individuals experience the world based on the grammatical structures they habitually use.
  • Economic determinism attributes primacy to economic structure over politics in the development of human history. It is associated with the dialectical materialism of Karl Marx.
  • Technological determinism is a reductionist theory that presumes that a society's technology drives the development of its social structure and cultural values.

Philosophy has explored the concept of determinism, which derives from the principle of causality, for thousands of years. Philosophers, however, often do not clearly distinguish between cosmic nature, human nature, and historical reality; anthropologists define historical reality as synonymous with culture. Determinism, as an element beyond human control, unfolds in the classification of the various types of society that follow the overcoming of the "society of nature", that is, of a society without any structure, consistent with the nature of animal species endowed with minimal sociality and minimal psychic processing. Structured societies, by contrast, are based on cultural mechanisms, that is, on mechanisms other than the natural drives common to all social animals. Even in some animal species with less intellectual capacity than Homo sapiens, elements of structure can be noted, namely elements of the societies of hordes, of tribal societies, or of those with stable social stratifications. These structural elements, insofar as they are artificial, or extraneous to the nature of the specific species in which they emerge, constitute factors of external determination, that is, of upheaval, acting on the drives, desires, needs, and purposes of the individuals of that particular species.

Contemporary human beings are generally embedded in a social reality equipped with structures of an organic, stratified type, based on the concept and essence of the state, and therefore definable as a structural, state-based reality. From this structural reality they suffer a decisive influence, one that determines, almost entirely, their character, their thinking, and their behavior. Of this decisive influence human beings are only slightly, or not at all, conscious, and they can become aware of it only through in-depth philosophical study and individual reflection. Individually, they can abstract themselves from this decisive influence, at least partially, only if they self-marginalize from the reality of these same structures, in the specific form that the latter assume in the historical era in which the individual happens to live. This marginalization does not necessarily imply social isolation, or a retreat into asociality, but rather a renunciation of active involvement in the logic of the specific historical moment and, even more, an abstraction from the hierarchical logic, based on the principle of authority, which is characteristic of this structural reality, historically determined and, in turn, decisive for individuals and peoples. [31]

With free will

Philosophers have debated both the truth of determinism and the truth of free will. This creates four possible positions. Compatibilism refers to the view that free will is, in some sense, compatible with determinism. The three incompatibilist positions, on the other hand, deny this possibility. The hard incompatibilists hold that free will is incompatible with both determinism and indeterminism, the libertarians that determinism does not hold and free will may exist, and the hard determinists that determinism does hold and free will does not exist.

The Dutch philosopher Baruch Spinoza was a determinist thinker, and argued that human freedom can be achieved through knowledge of the causes that determine our desires and affections. He defined human servitude as the state of bondage of the man who is aware of his own desires, but ignorant of the causes that determined him. The free or virtuous man, on the other hand, becomes capable, through reason and knowledge, of being genuinely free, even as he is being "determined". For the Dutch philosopher, acting out of our own internal necessity is genuine freedom, while being driven by exterior determinations is akin to bondage. Spinoza's thoughts on human servitude and liberty are respectively detailed in the fourth [32] and fifth [33] parts of his work Ethics.

The standard argument against free will, according to philosopher J. J. C. Smart, focuses on the implications of determinism for 'free will'. [34] However, he suggests free will is denied whether determinism is true or not. On one hand, if determinism is true, all our actions are predicted and we are assumed not to be free; on the other hand, if determinism is false, our actions are presumed to be random and as such we do not seem free because we had no part in controlling what happened.

With the soul

Some determinists argue that materialism does not present a complete understanding of the universe, because while it can describe determinate interactions among material things, it ignores the minds or souls of conscious beings.

A number of positions can be delineated:

  • Immaterial souls are all that exist (idealism).
  • Immaterial souls exist and exert a non-deterministic causal influence on bodies (traditional free-will, interactionist dualism). [35][36]
  • Immaterial souls exist, but are part of a deterministic framework.
  • Immaterial souls exist, but exert no causal influence, free or determined (epiphenomenalism, occasionalism)
  • Immaterial souls do not exist – there is no mind-body dichotomy, and there is a materialistic explanation for intuitions to the contrary.

With ethics and morality

Another topic of debate is the implications that determinism has for morality. Hard determinism (a belief in determinism, and not free will) is particularly criticized for seeming to make traditional moral judgments impossible. Some philosophers find this an acceptable conclusion.

Philosopher and incompatibilist Peter van Inwagen introduces this thesis, arguing that free will is required for moral judgments, as follows (a schematic restatement appears after the list): [37]

  1. The moral judgment that X should not have been done implies that something else should have been done instead
  2. That something else should have been done instead implies that there was something else to do
  3. That there was something else to do implies that something else could have been done
  4. That something else could have been done implies that there is free will
  5. If there is no free will to have done other than X, we cannot make the moral judgment that X should not have been done.
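One schematic way to compress the five steps (a paraphrase for clarity, not van Inwagen's own notation):

```latex
\mathrm{Wrong}(X)
  \;\Rightarrow\; \mathrm{Ought}(\text{something else})
  \;\Rightarrow\; \exists\,\text{alternative}
  \;\Rightarrow\; \Diamond\,\text{done otherwise}
  \;\Rightarrow\; \mathrm{FreeWill}
\qquad\text{hence}\qquad
\neg\,\mathrm{FreeWill} \;\Rightarrow\; \neg\,\mathrm{Wrong}(X)
```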

However, a compatibilist might take issue with van Inwagen's process, because his argument centers on a past that cannot be changed. A compatibilist who centers instead on plans for the future might posit:

  • The moral judgment that X should not have been done implies that something else could have been done instead
  • That something else can be done instead implies that there is something else to do
  • That there is something else to do implies that something else can be done
  • That something else can be done implies that there is free will for planning future recourse
  • If there is free will to do other than X, the moral judgment can be made that something other than X should be done, and a responsible party who did X while knowing it should not have been done can be punished, to help ensure that X is not done in the future.

Mecca Chiesa notes that the probabilistic or selectionistic determinism of B. F. Skinner comprised a wholly separate conception of determinism that was not mechanistic at all. Mechanistic determinism assumes that every event has an unbroken chain of prior occurrences, but a selectionistic or probabilistic model does not. [38] [39]

Western tradition

In the West, some elements of determinism have been expressed in Greece from the 6th century BC by the Presocratics Heraclitus [40] and Leucippus. [41] The first full-fledged notion of determinism appears to originate with the Stoics, as part of their theory of universal causal determinism. [42] The resulting philosophical debates, which involved the confluence of elements of Aristotelian Ethics with Stoic psychology, led in the 1st-3rd centuries CE in the works of Alexander of Aphrodisias to the first recorded Western debate over determinism and freedom, [43] an issue that is known in theology as the paradox of free will. The writings of Epictetus as well as middle Platonist and early Christian thought were instrumental in this development. [44] Jewish philosopher Moses Maimonides said of the deterministic implications of an omniscient god: [45] "Does God know or does He not know that a certain individual will be good or bad? If thou sayest 'He knows', then it necessarily follows that [that] man is compelled to act as God knew beforehand he would act, otherwise God's knowledge would be imperfect." [46]

Newtonian mechanics

Determinism in the West is often associated with Newtonian mechanics/physics, which depicts the physical matter of the universe as operating according to a set of fixed, knowable laws. The "billiard ball" hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established, the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplace's demon). In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results.
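A minimal sketch of the "billiard ball" picture, assuming idealized one-dimensional frictionless balls and the textbook elastic-collision formula (the function name is illustrative): given exact initial conditions, every future state follows by computation alone.

```python
# Two point masses on a line; perfectly elastic collision.
def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision (standard formula)."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Ball 1 (1 kg at 2 m/s) approaches ball 2 (1 kg at rest) 4 m ahead.
x1, v1, m1 = 0.0, 2.0, 1.0
x2, v2, m2 = 4.0, 0.0, 1.0

t_hit = (x2 - x1) / (v1 - v2)                 # collision occurs at t = 2 s
v1, v2 = elastic_collision(m1, v1, m2, v2)    # equal masses simply swap velocities
print(f"collision at t = {t_hit} s; afterwards v1 = {v1} m/s, v2 = {v2} m/s")
# A Laplace's demon with perfect knowledge of the initial state could, in principle,
# repeat this kind of computation for every particle in the universe.
```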

Whether or not it is all-encompassing in so doing, Newtonian mechanics deals only with caused events; for example, if an object begins in a known position and is hit dead on by an object with some known velocity, then it will be pushed straight toward another predictable point. If it goes somewhere else, the Newtonians argue, one must question one's measurements of the original position of the object, the exact direction of the striking object, gravitational or other fields that were inadvertently ignored, etc. Then, they maintain, repeated experiments and improvements in accuracy will always bring one's observations closer to the theoretically predicted results. When dealing with situations on an ordinary human scale, Newtonian physics has been so enormously successful that it has no competition. But it fails spectacularly as velocities become some substantial fraction of the speed of light and when interactions at the atomic scale are studied. Before the discovery of quantum effects and other challenges to Newtonian physics, "uncertainty" was always a term that applied to the accuracy of human knowledge about causes and effects, and not to the causes and effects themselves.

Newtonian mechanics, as well as any following physical theories, are results of observations and experiments, and so they describe "how it all works" within a tolerance. However, earlier Western scientists believed that if any logical connections are found between an observed cause and effect, there must also be some absolute natural laws behind them. Belief in perfect natural laws driving everything, rather than laws that merely describe what we should expect, led to the search for a set of universal simple laws that rule the world. This movement significantly encouraged deterministic views in Western philosophy, [47] as well as the related theological views of classical pantheism.

Eastern tradition

The idea that the entire universe is a deterministic system has been articulated in both Eastern and non-Eastern religion, philosophy, and literature.

In I Ching and Philosophical Taoism, the ebb and flow of favorable and unfavorable conditions suggests the path of least resistance is effortless (see Wu wei).

In the philosophical schools of the Indian Subcontinent, the concept of karma deals with similar philosophical issues to the western concept of determinism. Karma is understood as a spiritual mechanism which causes the entire cycle of rebirth (i.e. Saṃsāra). Karma, either positive or negative, accumulates according to an individual's actions throughout their life, and at their death determines the nature of their next life in the cycle of Saṃsāra. Most major religions originating in India hold this belief to some degree, most notably Hinduism, Jainism, Sikhism, and Buddhism.

The views on the interaction of karma and free will are numerous, and diverge from each other greatly. For example, in Sikhism, God's grace, gained through worship, can erase one's karmic debts, a belief which reconciles the principle of karma with a monotheistic God one must freely choose to worship. [48] Jains, on the other hand, believe in a sort of compatibilism, in which the cycle of Saṃsara is a completely mechanistic process, occurring without any divine intervention. The Jains hold an atomic view of reality, in which particles of karma form the fundamental microscopic building material of the universe, resembling in some ways modern-day atomic theory.

Buddhism

Buddhist philosophy contains several concepts which some scholars describe as deterministic to various levels. However, the direct analysis of Buddhist metaphysics through the lens of determinism is difficult, due to the differences between European and Buddhist traditions of thought.

One concept which is argued to support a hard determinism is the idea of dependent origination, which claims that all phenomena (dharma) are necessarily caused by some other phenomenon, which it can be said to be dependent on, like links in a massive chain. In traditional Buddhist philosophy, this concept is used to explain the functioning of the cycle of saṃsāra: all actions exert a karmic force, which will manifest results in future lives. In other words, righteous or unrighteous actions in one life will necessarily cause good or bad responses in another. [49]

Another Buddhist concept which many scholars perceive to be deterministic is the idea of non-self, or anatta. [50] In Buddhism, attaining enlightenment involves one realizing that in humans there is no fundamental core of being which can be called the "soul", and that humans are instead made of several constantly changing factors which bind them to the cycle of Saṃsāra. [50]

Some scholars argue that the concept of non-self necessarily disproves the ideas of free will and moral culpability. If there is no autonomous self, in this view, and all events are necessarily and unchangeably caused by others, then no type of autonomy can be said to exist, moral or otherwise. However, other scholars disagree, claiming that the Buddhist conception of the universe allows for a form of compatibilism. Buddhism perceives reality as occurring on two different levels: the ultimate reality, which can only be truly understood by the enlightened, and the illusory and false material reality. Therefore, Buddhism perceives free will as a notion belonging to material reality, while concepts like non-self and dependent origination belong to the ultimate reality; the transition between the two can be truly understood, Buddhists claim, by one who has attained enlightenment. [51]

Generative processes

Although it was once thought by scientists that any indeterminism in quantum mechanics occurred at too small a scale to influence biological or neurological systems, there is indication that nervous systems are influenced by quantum indeterminism due to chaos theory. [ citation needed ] It is unclear what implications this has for the problem of free will given various possible reactions to the problem in the first place. [52] Many biologists do not grant determinism: Christof Koch, for instance, argues against it, and in favour of libertarian free will, by making arguments based on generative processes (emergence). [53] Other proponents of emergentist or generative philosophy, cognitive sciences, and evolutionary psychology argue that a certain form of determinism (not necessarily causal) is true. [54] [55] [56] [57] They suggest instead that an illusion of free will is experienced due to the generation of infinite behaviour from the interaction of a finite, deterministic set of rules and parameters. Thus the unpredictability of the emerging behaviour from deterministic processes leads to a perception of free will, even though free will as an ontological entity does not exist. [54] [55] [56] [57]

As an illustration, the strategy board games chess and Go have rigorous rules in which no information (such as cards' face-values) is hidden from either player and no random events (such as dice-rolling) happen within the game. Yet chess, and especially Go with its extremely simple deterministic rules, can still produce an extremely large number of unpredictable moves. When chess is simplified to 7 or fewer pieces, however, endgame tables are available that dictate which moves to play to achieve a perfect game. This implies that, given a less complex environment (with the original 32 pieces reduced to 7 or fewer pieces), a perfectly predictable game of chess is possible. In this scenario, the winning player can announce that a checkmate will happen within a given number of moves, assuming a perfect defense by the losing player, or fewer moves if the defending player chooses sub-optimal moves as the game progresses into its inevitable, predicted conclusion. By this analogy, it is suggested, the experience of free will emerges from the interaction of finite rules and deterministic parameters that generate nearly infinite and practically unpredictable behavioural responses. In theory, if all these events could be accounted for, and there were a known way to evaluate these events, the seemingly unpredictable behaviour would become predictable. [54] [55] [56] [57] Another hands-on example of generative processes is John Horton Conway's playable Game of Life. [58] Nassim Taleb is wary of such models, and coined the term "ludic fallacy."
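A minimal sketch of the Game of Life mentioned above: the update rule is fully deterministic and fits in a few lines, yet the long-run behaviour of most starting patterns is practically impossible to foresee without simply running them.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation iff it has 3 live neighbours,
    # or it is currently alive and has exactly 2 live neighbours.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic "glider": after 4 deterministic steps the same shape reappears,
# shifted diagonally by one cell.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
print(sorted(state))
```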

Compatibility with the existence of science

Certain philosophers of science argue that, while causal determinism (in which everything including the brain/mind is subject to the laws of causality) is compatible with minds capable of science, fatalism and predestination are not. These philosophers make the distinction that causal determinism means that each step is determined by the step before, and therefore allows sensory input from observational data to determine what conclusions the brain reaches, while fatalism, in which the steps between do not connect an initial cause to the results, would make it impossible for observational data to correct false hypotheses. This is often combined with the argument that if the brain had fixed views and the arguments were mere after-constructs with no causal effect on the conclusions, science would have been impossible and the use of arguments would have been a meaningless waste of energy with no persuasive effect on brains with fixed views. [59]

Mathematical models

Many mathematical models of physical systems are deterministic. This is true of most models involving differential equations (notably, those measuring rate of change over time). Mathematical models that are not deterministic because they involve randomness are called stochastic. Because of sensitive dependence on initial conditions, some deterministic models may appear to behave non-deterministically; in such cases, a deterministic interpretation of the model may not be useful due to numerical instability and a finite amount of precision in measurement. Such considerations can motivate the consideration of a stochastic model even though the underlying system is governed by deterministic equations. [60] [61] [62]
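A toy illustration (not from the cited references) of the deterministic/stochastic distinction: the same deterministic update always reproduces the same trajectory, while adding a noise term makes every run different. The function names and parameter values are illustrative only.

```python
import random

DT = 0.1  # time step for the simple Euler integration below

def deterministic_decay(x0, k=0.5, steps=5):
    """Euler steps for dx/dt = -k*x: identical inputs always give identical outputs."""
    xs, x = [x0], x0
    for _ in range(steps):
        x += -k * x * DT
        xs.append(x)
    return xs

def stochastic_decay(x0, k=0.5, noise=0.05, steps=5):
    """The same model plus a random forcing term: repeated runs differ."""
    xs, x = [x0], x0
    for _ in range(steps):
        x += -k * x * DT + noise * random.gauss(0.0, 1.0)
        xs.append(x)
    return xs

print(deterministic_decay(1.0) == deterministic_decay(1.0))  # True, every time
print(stochastic_decay(1.0) == stochastic_decay(1.0))        # almost surely False
```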

Quantum and classical mechanics

Day-to-day physics

Since the beginning of the 20th century, quantum mechanics—the physics of the extremely small—has revealed previously concealed aspects of events. Before that, Newtonian physics—the physics of everyday life—dominated. Taken in isolation (rather than as an approximation to quantum mechanics), Newtonian physics depicts a universe in which objects move in perfectly determined ways. At the scale where humans exist and interact with the universe, Newtonian mechanics remains useful, and makes relatively accurate predictions (e.g. calculating the trajectory of a bullet). But whereas in theory, absolute knowledge of the forces accelerating a bullet would produce an absolutely accurate prediction of its path, modern quantum mechanics casts reasonable doubt on this main thesis of determinism.

Relevant is the fact that certainty is never absolute in practice (and not just because of David Hume's problem of induction). The equations of Newtonian mechanics can exhibit sensitive dependence on initial conditions. This is an example of the butterfly effect, which is one of the subjects of chaos theory. The idea is that something even as small as a butterfly could cause a chain reaction leading to a hurricane years later. Consequently, even a very small error in knowledge of initial conditions can result in arbitrarily large deviations from predicted behavior. Chaos theory thus explains why it may be practically impossible to predict real life, whether determinism is true or false. On the other hand, the issue may not be so much about human abilities to predict or attain certainty as much as it is the nature of reality itself. For that, a closer, scientific look at nature is necessary.
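A standard numerical example of such sensitive dependence (a textbook illustration, not specific to this article): the logistic map is a fully deterministic rule, yet two starting values differing by one part in ten billion soon disagree completely.

```python
# Logistic map x_{n+1} = 4 x_n (1 - x_n): deterministic, but chaotic.
def orbit(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.2, 60)
b = orbit(0.2 + 1e-10, 60)   # initial conditions differ by 1e-10
for n in (0, 10, 30, 50):
    print(f"n = {n:2d}   |a - b| = {abs(a[n] - b[n]):.3e}")
# The tiny initial error roughly doubles each iteration, so after a few dozen
# steps the two "forecasts" are as different as two unrelated numbers in [0, 1].
```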

Quantum realm

Quantum physics works differently in many ways from Newtonian physics. Physicist Aaron D. O'Connell explains that understanding our universe, at such small scales as atoms, requires a different logic than day-to-day life does. O'Connell does not deny that it is all interconnected: the scale of human existence ultimately does emerge from the quantum scale. O'Connell argues that we must simply use different models and constructs when dealing with the quantum world. [63] Quantum mechanics is the product of a careful application of the scientific method, logic and empiricism. The Heisenberg uncertainty principle is frequently confused with the observer effect. The uncertainty principle actually describes how precisely we may measure the position and momentum of a particle at the same time – if we increase the accuracy in measuring one quantity, we are forced to lose accuracy in measuring the other. "These uncertainty relations give us that measure of freedom from the limitations of classical concepts which is necessary for a consistent description of atomic processes." [64]
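The uncertainty relation referred to here is usually written as follows, with ħ the reduced Planck constant and Δx, Δp the standard deviations of position and momentum:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```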

This is where statistical mechanics comes into play, and where physicists begin to require rather unintuitive mental models: a particle's path simply cannot be exactly specified in its full quantum description. "Path" is a classical, practical attribute in our everyday life, but one that quantum particles do not meaningfully possess. The probabilities discovered in quantum mechanics do nevertheless arise from measurement (of the perceived path of the particle). As Stephen Hawking explains, the result is not traditional determinism, but rather determined probabilities. [65] In some cases, a quantum particle may indeed trace an exact path, and the probability of finding the particle in that path is one (certain to be true). In fact, as far as prediction goes, the quantum development is at least as predictable as the classical motion, but the key is that it describes wave functions that cannot be easily expressed in ordinary language. As far as the thesis of determinism is concerned, these probabilities, at least, are quite determined. These findings from quantum mechanics have found many applications, and allow us to build transistors and lasers. Put another way: personal computers, Blu-ray players and the Internet all work because humankind discovered the determined probabilities of the quantum world. [66] None of that should be taken to imply that other aspects of quantum mechanics are not still up for debate.
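The "determined probabilities" at issue are the ones given by the Born rule: the wave function ψ evolves deterministically, and the probability density for finding the particle near position x at time t is given by

```latex
P(x, t) \;=\; \lvert \psi(x, t) \rvert^{2}
```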

On the topic of predictable probabilities, the double-slit experiments are a popular example. Photons are fired one-by-one through a double-slit apparatus at a distant screen. They do not arrive at any single point, nor even the two points lined up with the slits (the way it might be expected of bullets fired by a fixed gun at a distant target). Instead, the light arrives in varying concentrations at widely separated points, and the distribution of its collisions with the target can be calculated reliably. In that sense the behavior of light in this apparatus is deterministic, but there is no way to predict where in the resulting interference pattern any individual photon will make its contribution (although, there may be ways to use weak measurement to acquire more information without violating the uncertainty principle).
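A sketch of the "calculated reliably" claim, under the idealization of two point-like slits (the wavelength and slit separation below are assumed values chosen only for illustration): the expected arrival pattern follows the standard two-slit formula, even though each individual photon's landing point is unpredictable.

```python
import math

WAVELENGTH = 500e-9   # 500 nm light (assumed)
SLIT_SEP   = 10e-6    # 10 micrometre slit separation (assumed)

def relative_intensity(theta_rad):
    """Idealized two-slit pattern: I/I0 = cos^2(pi * d * sin(theta) / lambda)."""
    phase = math.pi * SLIT_SEP * math.sin(theta_rad) / WAVELENGTH
    return math.cos(phase) ** 2

for tenths in range(0, 31, 5):
    theta = math.radians(tenths / 10)
    print(f"theta = {tenths / 10:3.1f} deg   I/I0 = {relative_intensity(theta):.3f}")
# Bright fringes appear wherever d * sin(theta) is an integer number of wavelengths;
# the distribution is deterministic, the individual photon hits are not.
```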

Some (including Albert Einstein) argue that our inability to predict any more than probabilities is simply due to ignorance. [67] The idea is that, beyond the conditions and laws we can observe or deduce, there are also hidden factors or "hidden variables" that determine absolutely in which order photons reach the detector screen. They argue that the course of the universe is absolutely determined, but that humans are screened from knowledge of the determinative factors. So, they say, it only appears that things proceed in a merely probabilistically determinative way. In actuality, they proceed in an absolutely deterministic way.

John S. Bell criticized Einstein's work in his famous Bell's theorem, which proved that quantum mechanics can make statistical predictions that would be violated if local hidden variables really existed. A number of experiments have tried to verify such predictions, and so far they do not appear to be violated. Current experiments continue to verify the result, including the 2015 "Loophole Free Test" that plugged all known sources of error and the 2017 "Cosmic Bell Test" experiment that used cosmic data streaming from different directions toward the Earth, precluding the possibility the sources of data could have had prior interactions. However, it is possible to augment quantum mechanics with non-local hidden variables to achieve a deterministic theory that is in agreement with experiment. [68] An example is the Bohm interpretation of quantum mechanics. Bohm's Interpretation, though, violates special relativity and it is highly controversial whether or not it can be reconciled without giving up on determinism.
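The statistical predictions at issue are often stated in the CHSH form of Bell's inequality: for correlation values E measured at detector settings a, a′ on one side and b, b′ on the other,

```latex
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad
\lvert S \rvert \le 2 \ \ \text{(any local hidden-variable theory)}, \qquad
\lvert S \rvert \le 2\sqrt{2} \ \ \text{(quantum mechanics, the Tsirelson bound)}
```

Experiments of the kind described above measure values of S greater than 2, which is what rules out the local hidden variables Einstein hoped for.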

More advanced variations on these arguments include Quantum contextuality, by Bell, Simon B. Kochen and Ernst Specker, which argues that hidden variable theories cannot be "sensible," meaning that the values of the hidden variables inherently depend on the devices used to measure them.

This debate is relevant because it is easy to imagine specific situations in which the arrival of an electron at a screen at a certain point and time would trigger one event, whereas its arrival at another point would trigger an entirely different event (e.g. see Schrödinger's cat - a thought experiment used as part of a deeper debate).

Thus, quantum physics casts reasonable doubt on the traditional determinism of classical, Newtonian physics in so far as reality does not seem to be absolutely determined. This was the subject of the famous Bohr–Einstein debates between Einstein and Niels Bohr and there is still no consensus. [69] [70]

Adequate determinism (see Varieties, above) is the reason that Stephen Hawking calls Libertarian free will "just an illusion". [65]

Other matters of quantum determinism

All uranium found on earth is thought to have been synthesized during a supernova explosion that occurred roughly 5 billion years ago. Even before the laws of quantum mechanics were developed to their present level, the radioactivity of such elements had posed a challenge to determinism due to its unpredictability. One gram of uranium-238, a commonly occurring radioactive substance, contains some 2.5 × 10²¹ atoms. Each of these atoms is identical and indistinguishable according to all tests known to modern science. Yet about 12,600 times a second, one of the atoms in that gram will decay, giving off an alpha particle. The challenge for determinism is to explain why and when decay occurs, since it does not seem to depend on external stimulus. Indeed, no extant theory of physics makes testable predictions of exactly when any given atom will decay. At best scientists can discover determined probabilities in the form of the element's half-life.
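The figures quoted above can be checked with a back-of-the-envelope calculation from the half-life of uranium-238 (about 4.47 billion years); a rough sketch:

```python
import math

AVOGADRO   = 6.022e23            # atoms per mole
MOLAR_MASS = 238.0               # g/mol for uranium-238
HALF_LIFE  = 4.468e9 * 3.156e7   # ~4.468 billion years, expressed in seconds

atoms_per_gram = AVOGADRO / MOLAR_MASS          # ~2.5e21, as stated above
decay_constant = math.log(2) / HALF_LIFE        # decay probability per atom per second
decays_per_sec = decay_constant * atoms_per_gram

print(f"atoms per gram   : {atoms_per_gram:.2e}")
print(f"decays per second: {decays_per_sec:.0f}")
# ~1.2e4 decays per second, the same ballpark as the "about 12,600 times a second"
# figure quoted above. Which particular atom decays, and when, remains unpredictable.
```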

The time-dependent Schrödinger equation gives the first time derivative of the quantum state. That is, it explicitly and uniquely predicts the development of the wave function with time.
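The equation in question is

```latex
i\hbar \,\frac{\partial}{\partial t}\,\psi(\mathbf{r}, t) \;=\; \hat{H}\,\psi(\mathbf{r}, t)
```

Given the state ψ at one time and the Hamiltonian Ĥ, the state at every later time follows uniquely, which is the sense in which the unitary evolution discussed below is deterministic.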

So if the wave function itself is reality (rather than probability of classical coordinates), then the unitary evolution of the wave function in quantum mechanics can be said to be deterministic. But the unitary evolution of the wave function is not the entirety of quantum mechanics.

Asserting that quantum mechanics is deterministic by treating the wave function itself as reality might be thought to imply a single wave function for the entire universe, starting at the origin of the universe. Such a "wave function of everything" would carry the probabilities of not just the world we know, but every other possible world that could have evolved. For example, large voids in the distributions of galaxies are believed by many cosmologists to have originated in quantum fluctuations during the big bang. (See cosmic inflation, primordial fluctuations and large-scale structure of the cosmos.)

However, neither the posited reality nor the proven and extraordinary accuracy of the wave function and quantum mechanics at small scales can imply or reasonably suggest the existence of a single wave function for the entire universe. Quantum mechanics breaks down wherever gravity becomes significant, because nothing in the wave function, or in quantum mechanics, predicts anything at all about gravity. And this is obviously of great importance on larger scales.

Gravity is thought of as a large-scale force, with a longer reach than any other. But gravity becomes significant even at masses that are tiny compared to the mass of the universe.

A wave function the size of the universe might successfully model a universe with no gravity. Our universe, with gravity, is vastly different from what quantum mechanics alone predicts. To forget this is a colossal error.


I Am Not a Machine. Yes You Are.

I’m trying to explain to Arthur I. Miller why artworks generated by computers don’t quite do it for me. There’s no human being behind them. The works aren’t a portal into another person’s mind, where you can wander in a warren of intention, emotion, and perception, feeling life being shaped into form. What’s more, it often seems, people just ain’t no good, so it’s transcendent to be reminded they can be. Art is one of the few human creations that can do that. Machine art never can because it’s not, well, human. No matter how engaging the songs or poems that a computer generates may be, they ultimately feel empty. They lack the electricity of the human body, the hum of human consciousness, the connection with another person. Miller, a longtime professor, a gentleman intellect, dressed in casual black, is listening patiently, letting me have my say. But I can tell he’s thinking, “This guy’s living in the past.”

Miller is sitting at a simple table in a dim and sparsely furnished apartment on New York City’s Lower East Side. It’s an Airbnb place that’s keeping him housed while he gives talks in bookstores and colleges in the city about his latest book, The Artist in the Machine: The World of AI-Powered Creativity. Miller is the Virgil of art and science writing, a guide through the underworld of artists employing scientific practices like gene-splicing, brain imaging, and computer-code writing to create works that ask viewers to reflect on how science and technology are changing our views of the world and everything in it, including us. His previous book, Colliding Worlds, features artists like Austrian sculptor Julian Voss-Andreae, who studied quantum physics. One Voss-Andreae work, Quantum Man, stands eight feet tall and is constructed of more than 100 vertical steel sheets. The sculpture looks like a man when seen from the front, but as viewers move around it, the figure vanishes, representing how experiments in quantum physics, depending on how they’re set up, detect electrons as either waves or particles: “how you look at it, that’s what it is,” writes Miller.

Miller has a Ph.D. in physics from MIT and is an emeritus professor in history and the philosophy of science at University College London. The Artist in the Machine profiles an array of fascinating artists and engineers who write computer programs to generate music, paintings, and literature. (Miller’s profile of Ross Goodwin, who’s written algorithms to generate screenplays and a novel, is featured in this week’s chapter of Nautilus.) Miller argues that AI-fueled art gains independence from its algorithmic parents and takes flight in works that bear the hallmarks of creativity and genius and will one day exceed human artists’ wildest imaginative dreams. Miller says he sympathizes with what I’m saying about the power of art coming from the connection with a human artist, plumbing their emotions and consciousness. But I’m being premature. Just wait, he says, computers will one day produce art as transcendent as the works of Beethoven and Picasso were in their times.

THE THIRD CULTURE: “Scientists and artists speak the same language, especially in that nascent moment of creativity, when everything comes together, when you have the solution to your problem,” says Arthur I. Miller (above). Lesley Miller

So you’re saying we’ll one day connect with machine art as profoundly as we do now with human art?

Yes. The machine sees the world in a different way than we see the world. Just like an artist does. That gives you an inkling that machines will have a different physiology. In time, they will evolve emotions. Just from scanning the web now, they could imitate our emotions. They’ll say, “Oh, thirst, that’s cool. I think I’ll be thirsty,” and they can convince you they’re thirsty. “Love, that sounds cool too, I just had this nice discussion with a machine down the street, and it seems like love.” They’ll hone their notion of love by reading novels, and soon they will evolve emotions and consciousness. That will be the point of artificial general intelligence. Then it’s just a hop, skip, and a jump to artificial superintelligence, where they go beyond us in intelligence, emotions, and consciousness.

Does machine superintelligence worry you?

No. I think if we teach machines to be creative, then they’ll be beneficent toward us, rather than just keeping us around as household pets.

That’s good to hear. But back to art.

OK. I think we should keep your question in mind: Can we appreciate art that we know has been produced by machines?

We could learn to, yes. We may develop a preference for it. Just as we may eventually have a preference for the prose produced by sophisticated machines, which at first may be nonsense to us. Now machines generate prose with word play that we’re not used to. That shows us machines can change the landscape of language. It’s not unlike what happened in the 1840s, when the camera was invented, which freed artists from a too literal interpretation of nature, and opened the doors for the impressionists.

We can have hopes, dreams, and aspirations for humankind. But we are machines, just like computers are machines.

Still, I don’t see how we can feel the agency of machine art as we do with human art.

Machines in art can have intent, and they can have a bit of free will, too. You can take a painting-robot out of a studio, out on the road, with a webcam attached to it, and it will look around and see something that strikes its fancy as a nice pattern, and say, “I think I’ll paint that.” So that’s intent. That’s a bit of free will, too. So machines can be out there in the world, and have some sort of physical experience.

Are you saying we don’t need the human element to appreciate art?

Look what’s going on today with people sitting in front of their screens. Where’s the human element? People are walking down the street looking at their phones. They’re getting hit by cars.

I’ve interviewed neuroscientists who’ve said human connection activates brain networks that move us profoundly, perhaps more than practically anything else.

Well, you have to leave your mind open. That may well change.

Has researching the connections between science and art given you a more mechanical view of humans?

Yes, it has. In doing this work, I have developed a reductionist view of nature. There is a definite relationship between us and machines. We're like machines; machines are like us. We are basically biological machines.


What does it mean to be a biological machine?

It means we’re made up of atoms and molecules. They obey laws of nature and evolved to produce us. They are the power behind our engines. We’re just big chemical reactions. Each part of a machine is made by means of the good old laws of Newtonian physics, which were deterministic. But when you put this conglomeration together, it’s capable of unpredictable or chaotic behavior. Unpredictability is one of the hallmarks of creativity. So right from the word go, machines can be creative. Since we are machines, we can just go out of existence like a machine. We can have consciousness, but when the electrons stop flowing, consciousness disappears and that’s it. But that doesn’t take away from our creativity. We can still be creative in an inspiring way. We can have hopes, dreams, and aspirations for humankind. But we are machines, just like computers are machines. When one says that to people, some get very upset. They ask me, Is there a soul in a machine? I say, you have to believe in a soul, and I don’t believe in a soul. It isn’t necessary.

Does it make you queasy to think of yourself as a machine?

Because being a machine implies we’re designed.

We are designed. Just not by a designer. We’re a big mistake. It just so happens a number of billions of years ago that there was an isotope of carbon in the sun that was spit out, landed on the Earth, and so we’re carbon-based. This accident could occur, probably did occur, somewhere else in the universe.

Right. But the biology that makes us what and who we are has evolved over billions of years. That evolution makes us very different from machines.

Well, machines are evolving too, and much, much faster than us.

But they were engendered by humans.

We were engendered by nature. Yes, a human made the program, and a human put the machine in operation. But that’s like saying Leopold Mozart taught Wolfgang the rules of music. But we don’t attribute Wolfgang’s creations to his father. Similarly, you don’t attribute AlphaGo’s success at Go to the AlphaGo team. Or if you teach your 4-year-old daughter how to draw, she will draw like you at first. But 20 years down the line, when she’s at art school, she’ll be drawing much differently.

Machines in art can have intent, and they can have a bit of free will, too.

It sounds like you’re saying there’s basically no difference between machine intelligence and human intelligence.

The bottom line is there is no difference. We are computable and machines are of course computable, so there’s no reason why we’re not like machines. Consciousness is computational. It’s computed from 100 billion neurons in our mind. Since machines use computation, they’re computational, and there’s no reason why they cannot have consciousness.

You said you’ve become a reductionist. What got reduced in your mind?

What got reduced in my mind is our body, us. You could say to a reductionist that there are things that science can’t explain. But science can and will explain everything we see around us.

Let’s step back. What is art?

Art is representations with concepts. That’s what it is to me. AI art does that. AI art has concepts because it’s generated by scientific means. It goes beyond the science, but it has a scientific basis to it, and so it has concepts. I have my own definition of aesthetics, too.

It makes art historians’ hair stand on end. Aesthetics equals the image in a work of art—and the image need not be a visual image—but an image from our five senses, plus the apparatus that generates it. For example, at CERN, I asked a physicist for his definition of aesthetics, and he said, “My notion of aesthetics is nicely laid out wires.” I saw them. There are these units of parallel wires, nothing crossed over, and the wires are color coded. It was beautiful.

I like what Simon Colton, a professor of digital game technologies, tells you. One of his projects, The Painting Fool, is great. He fed Guardian news articles to a computer and had software analyze words and phrases for moods like happy or sad. The computer then created portraits of people, which varied, according to its mood. Colton says comparing AI to human art is irrelevant. We should “be loud and proud about the AI processes leading to their generation,” he says. “People can then enjoy that these have been created by a computer.” I agree that’s the way to appreciate AI art, as its own beast.

Me too. We should not judge products of AI on the basis of whether they can be distinguished from products made by us. Because what’s the point? They may produce something that we can’t imagine right now, that could look like nonsense, but may then turn out to be better than what we could produce.

Why is the intersection of art and science, what you call a third culture, important?

Because it investigates the world as it is on a much better basis than somebody standing in front of an easel with oil paints. We’re looking at the world of the future. We’re looking at a world where we are merging with machines. A world where machines will be of importance. We will eventually be collaborating with them. They will produce works for us to enjoy and for their brethren to enjoy. So this is the way the world is evolving, and this is why we should be aware of AI art, and try to understand. Try to understand that it’s not stealing things from us.

Finally, what has your research into this third culture taught you about people today?

That they should widen their point of view. Too many artists and too many scientists have a narrow point of view. Scientists think art has nothing to do with science and artists think science has little to do with art. It is easier for a scientist to understand art than an artist to understand science. But, in a conceptual sense, art is an attempt to understand the concepts of science. I think my research has changed my view of humanity in the sense that people should leave their minds open to what goes on in fields other than their own. It could widen and even change their beliefs.

Kevin Berger is the editor of Nautilus.

Lead image: Juan Gris—Portrait of Pablo Picasso—Google Art Project

This article first appeared in our “Catalysts” issue in December 2019.


Kinematics and Calculus

Calculus is an advanced math topic, but it makes deriving two of the three equations of motion much simpler. By definition, acceleration is the first derivative of velocity with respect to time. Take the operation in that definition and reverse it. Instead of differentiating velocity to find acceleration, integrate acceleration to find velocity. This gives us the velocity-time equation. If we assume acceleration is constant, we get the so-called first equation of motion [1].

Again by definition, velocity is the first derivative of position with respect to time. Reverse this operation. Instead of differentiating position to find velocity, integrate velocity to find position. This gives us the position-time equation for constant acceleration, also known as the second equation of motion [2].
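To make these two integrations concrete, here is a minimal sketch in Python using sympy. It is not part of the original text, and the symbol names (t, a, v0, s0) are my own; it simply integrates a constant acceleration once to get the velocity-time equation and again to get the position-time equation.

```python
# A sketch, not from the original text: derive the first two equations of
# motion by integrating a constant acceleration (symbol names are assumptions).
import sympy as sp

t, a, v0, s0 = sp.symbols('t a v0 s0')

# Integrate constant acceleration over time to get velocity (first equation).
v = v0 + sp.integrate(a, (t, 0, t))
print(v)             # equals v0 + a*t

# Integrate velocity over time to get position (second equation).
s = s0 + sp.integrate(v, (t, 0, t))
print(sp.expand(s))  # equals s0 + v0*t + a*t**2/2
```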

Unlike the first and second equations of motion, there is no obvious way to derive the third equation of motion (the one that relates velocity to position) using calculus. We can't just reverse engineer it from a definition. We need to play a rather sophisticated trick.

The first equation of motion relates velocity to time. We essentially derived it from this derivative…

dv/dt = a

The second equation of motion relates position to time. It came from this derivative…

ds/dt = v

The third equation of motion relates velocity to position. By logical extension, it should come from a derivative that looks like this…

dv/ds = ?

But what does this equal? Well nothing by definition, but like all quantities it does equal itself. It also equals itself multiplied by 1. We'll use a special version of 1 (dt/dt) and a special version of algebra (algebra with infinitesimals). Look what happens when we do this. We get one derivative equal to acceleration (dv/dt) and another derivative equal to the inverse of velocity (dt/ds).

dv/ds = (dv/ds)(1)
dv/ds = (dv/ds)(dt/dt)
dv/ds = (dv/dt)(dt/ds)
dv/ds = (a)(1/v)

Next step, separation of variables. Get things that are similar together and integrate them. Here's what we get when acceleration is constant…

v dv = a ds

∫ v dv (from v0 to v) = ∫ a ds (from s0 to s)

½(v² − v0²) = a(s − s0)

v² = v0² + 2a(s − s0) [3]

Certainly a clever solution, and it wasn't all that much more difficult than the first two derivations. However, it really only worked because acceleration was constant — constant in time and constant in space. If acceleration varied in any way, this method would be uncomfortably difficult. We'd be back to using algebra just to save our sanity. Not that there's anything wrong with that. Algebra works and sanity is worth saving.

v = v0 + at [1]
+
s = s0 + v0t + ½at² [2]
=
v² = v0² + 2a(s − s0) [3]
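As a quick sanity check (my own addition, not from the text), the third equation can be verified symbolically by substituting the first two into it and confirming that time drops out entirely.

```python
# A sketch, not from the original text: confirm the third equation of motion
# follows from the first two (symbol names are assumptions).
import sympy as sp

t, a, v0, s0 = sp.symbols('t a v0 s0')
v = v0 + a*t                                  # first equation of motion
s = s0 + v0*t + sp.Rational(1, 2)*a*t**2      # second equation of motion

# If the third equation holds, this difference simplifies to zero for all t.
print(sp.simplify(v**2 - (v0**2 + 2*a*(s - s0))))   # prints 0
```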

Constant jerk

The method shown above works even when acceleration isn't constant. Let's apply it to a situation with an unusual name — constant jerk. No lie, that's what it's called. Jerk is the rate of change of acceleration with time.

j = da/dt

This makes jerk the first derivative of acceleration, the second derivative of velocity, and the third derivative of position.

j = da/dt = d²v/dt² = d³s/dt³

The SI unit of jerk is the meter per second cubed (m/s³).

Jerk is not just some wise-ass physicist's response to the question, "Oh yeah, so what do you call the third derivative of position?" Jerk is a meaningful quantity.

The human body comes equipped with sensors to sense acceleration and jerk. Located deep inside the ear, integrated into our skulls, lies a series of chambers called the labyrinth. Part of this labyrinth is dedicated to our sense of hearing (the cochlea) and part to our sense of balance (the vestibular system). The vestibular system comes equipped with sensors that detect angular acceleration (the semicircular canals) and sensors that detect linear acceleration (the otoliths). We have two otoliths in each ear — one for detecting acceleration in the horizontal plane (the utricle) and one for detecting acceleration in the vertical plane (the saccule). Otoliths are our own built-in accelerometers.

The word otolith comes from the Greek οτο (oto) for ear and λιθος (lithos) for stone. Each of our four otoliths consists of a hard bone-like plate attached to a mat of sensory fibers. When the head accelerates, the plate shifts to one side, bending the sensory fibers. This sends a signal to the brain saying "we're accelerating." Since gravity also tugs on the plates, the signal may also mean "this way is down." The brain is quite good at figuring out the difference between the two interpretations. So good, that we tend to ignore it. Sight, sound, smell, taste, touch — where's balance in this list? We ignore it until something changes in an unusual, unexpected, or extreme way.

I've never been in orbit or lived on another planet. Gravity always pulls me down in the same way. Standing, walking, sitting, lying — it's all quite sedate. Now let's hop in a roller coaster or engage in a similarly thrilling activity like downhill skiing, Formula One racing, or cycling in Manhattan traffic. Acceleration is directed first one way, then another. You may even experience brief periods of weightlessness or inversion. These kinds of sensations generate intense mental activity, which is why we like doing them. They also sharpen us up and keep us focused during possibly life ending moments, which is why we evolved this sense in the first place. Your ability to sense jerk is vital to your health and well being. Jerk is both exciting and necessary.

Constant jerk is easy to deal with mathematically. As a learning exercise, let's derive the equations of motion for constant jerk. You are welcome to try more complicated jerk problems if you wish.

Jerk is the derivative of acceleration. Undo that process. Integrate jerk to get acceleration as a function of time. I propose we call this the zeroth equation of motion for constant jerk [0]. The reason why will be apparent after we finish the next derivation.

j = da/dt

da = j dt

∫ da (from a0 to a) = ∫ j dt (from 0 to t)

a − a0 = jt

a = a0 + jt [0]

Acceleration is the derivative of velocity. Integrate acceleration to get velocity as a function of time. We've done this process before. We called the result the velocity-time relationship or the first equation of motion when acceleration was constant. We should give it a similar name. This is the first equation of motion for constant jerk [1].

Velocity is the derivative of displacement. Integrate velocity to get displacement as a function of time. We've done this before too. The resulting displacement-time relationship will be our second equation of motion for constant jerk [2].
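The same three integrations can be sketched in a few lines of sympy (my own illustration, not part of the text; the symbols j, a0, v0, s0 are assumed names). It reproduces the zeroth, first, and second equations of motion for constant jerk quoted below.

```python
# A sketch, not from the original text: repeat the integration trick for a
# constant jerk j (symbol names are assumptions).
import sympy as sp

t, j, a0, v0, s0 = sp.symbols('t j a0 v0 s0')

a = a0 + sp.integrate(j, (t, 0, t))   # zeroth equation:  a = a0 + j*t
v = v0 + sp.integrate(a, (t, 0, t))   # first equation:   v = v0 + a0*t + j*t**2/2
s = s0 + sp.integrate(v, (t, 0, t))   # second equation:  s = s0 + v0*t + a0*t**2/2 + j*t**3/6

for name, expr in (('a', a), ('v', v), ('s', s)):
    print(name, '=', sp.expand(expr))
```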

Please notice something about these equations. When jerk is zero, they all revert back to the equations of motion for constant acceleration. Zero jerk means constant acceleration, so all is right with the world we've created. (I never said constant acceleration was realistic. Constant jerk is equally mythical. In hypertextbook world, however, all things are possible.)

Where do we go next? Should we work on a velocity-displacement relationship (the third equation of motion for constant jerk)?

v = v0 + a0t + ½jt² [1]
+
s = s0 + v0t + ½a0t² + ⅙jt³ [2]
=
v = f(s) [3]

How about an acceleration-displacement relationship (the fourth equation of motion for constant jerk)?

a = a0 + jt [1]
+
s = s0 + v0t + ½a0t² + ⅙jt³ [2]
=
a = f(s) [4]

I don't even know if these can be worked out algebraically. I doubt it. Look at that scary cubic equation for displacement. That can't be our friend. At the moment, I can't be bothered. I don't know if working this out would tell me anything interesting. I do know I've never needed a third or fourth equation of motion for constant jerk — not yet. I leave this problem to the mathematicians of the world.

This is the kind of problem that distinguishes physicists from mathematicians. A mathematician wouldn't necessarily care about the physical significance and just might thank the physicist for an interesting challenge. A physicist wouldn't necessarily care about the answer unless it turned out to be useful, in which case the physicist would certainly thank the mathematician for being so curious.

Constant nothing

This page in this book isn't about motion with constant acceleration, or constant jerk, or constant snap, crackle or pop. It's about the general method for determining the quantities of motion (position, velocity, and acceleration) with respect to time and each other for any kind of motion. The procedure for doing so is either differentiation (finding the derivative)…

  • The derivative of position with time is velocity (v = ds/dt).
  • The derivative of velocity with time is acceleration (a = dv/dt).

or integration (finding the integral)…

  • The integral of acceleration over time is change in velocity (∆v = ∫ a dt).
  • The integral of velocity over time is change in position (∆s = ∫ v dt).

Here's the way it works. Some characteristic of the motion of an object is described by a function. Can you find the derivative of that function? That gives you another characteristic of the motion. Can you find its integral? That gives you a different characteristic. Repeat either operation as many times as necessary. Then apply the techniques and concepts you learned in calculus and related branches of mathematics to extract more meaning — range, domain, limit, asymptote, minimum, maximum, extremum, concavity, inflection, analytical, numerical, exact, approximate, and so on. I've added some important notes on this to the summary for this topic.
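As an illustration of this general method (my own sketch, not from the text), the same derivative and integral relationships can be applied numerically to sampled motion data; the function s(t) = sin(t) below is just a stand-in for any recorded position.

```python
# A sketch, not from the original text: differentiate sampled position to get
# velocity and acceleration, then integrate back to recover the changes.
import numpy as np

t = np.linspace(0, 10, 1001)   # time samples
s = np.sin(t)                  # position samples (any record would do)

v = np.gradient(s, t)          # v = ds/dt
a = np.gradient(v, t)          # a = dv/dt

# Trapezoid rule: ∆v = ∫ a dt and ∆s = ∫ v dt
dv = np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(t))
ds = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))

print(dv, v[-1] - v[0])        # both approximate the total change in velocity
print(ds, s[-1] - s[0])        # both approximate the total change in position
```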


What is avalanche collapse?

Geeks: At breakdown, the electric field frees bound electrons and photons. If the applied electric field is sufficiently high, free electrons from background radiation may be accelerated to velocities that can liberate additional electrons during collisions with neutral atoms or molecules, in a process called avalanche breakdown of superconduction. Breakdown occurs quite abruptly, typically in nanoseconds, resulting in the formation of an electrically conductive path and a disruptive discharge through the material. For a solid material, like a liquid crystalline lattice of water in a nanotube, a breakdown event severely degrades its proton and electron currents and may even destroy its insulating capability.

Non Geeks: When we understand this, we are back to Einstein’s math of the energy-mass equivalence equation, E = mc². This also explains why no one really knows today what the optimal Vitamin D level in humans is. There is no group worldwide with normal semiconductors, so we have no control group in which to study this properly. We have to rely on data collected on humans pre-1920 to really have an idea of what might be optimal. I have, and this is why I like the range of 70-120 ng/ml. Studies done on people of today will never be accurate enough to assess a good Vitamin D level for optimal health and will only act to fool physicians and patients.

Collagen is the number one protein in the body by sheer weight. It makes up the protein sheaths and fascia that all carbon nanotubes are connected to. These tubes are all designed to be filled with water. Collagen is the negative semiconductor in most tissues, like the bone mentioned above. I believe in some tissues it can be the positive semiconductor. When dielectric blockers are added to the collagen matrix for any reason, it cannot make or transduce energy from sunlight or any other quantum effect (electrons, photons, and phonons), and the end result we see when we assay is an alteration of our Vitamin D, DHEA, and HS-CRP levels. You might be seeing why Vitamin D levels are low in obesity and in metabolic syndrome now. These two diseases are the base of the sickness pyramid modern man faces today. Quantum biology predicts that both diseases are underpinned by bad semiconductors that cannot hold their charge for some reason. Both are diseases where we have lost electrons, photons, or phonons to the environment from our semiconductors, as I laid out in the EMF series and the recent May Webinar of 2013.

This also explains why the boilerplate version of the paleo template helps improve your health to some degree. It also helps explain why so many on the template do not get totally better or reverse their diseases. You need higher power to generate a high DC current to regenerate. This requires more DHA. It is akin to the physics experiment I mentioned to you above. There is more to the story, because there has to be a way to reverse every disease, since humans are not designed by nature to be filled with disease. That story is linked to water and its hydrogen content. We need to approach our reasoning about modern diseases differently if we are going to solve today’s conditions of existence. The paleo blueprint improves the collagen semiconductor best because it increases the amount of protein to make collagen while cutting the dielectric blockers in the water that collagen uses in quantum semiconduction biology. But oftentimes it does not go far enough for those of us whose diseases are tied to other semiconductors used in the brain, which are made from DHA and water and not collagen. Osteoporosis, obesity, autism, depression, AD, PCOS, and metabolic syndrome are some of these examples, but there are many others. I will be writing about them as time goes on.

This is where the Epi-Paleo Rx steps up the game to include all the semiconductors we have uncovered to date. I fully expect more to be found when people begin to look for them, and the more we find, the more we will understand why ancient cultures did what they did. I believe they used the power of time and empiric testing to instinctively know how to best utilize the photoelectric effect to stay well. The ancestral human diet formed around the East African rift zone and the conditions of existence present at our origin; these environmental changes had huge impacts on water and light frequencies. The Epi-Paleo Rx seems to help the collagen semiconductor in the body and brain, while maximizing the other semiconductors, such as water, to improve their ability to transmit energy using the coherence of water. When these are maximized, we magically see Vitamin D levels rise without supplementation, DHEA levels rise, and HS-CRP levels fall. This also helps explain to physicians why some patients whom they supplement with large doses of Vitamin D3 never seem to make a dent in their levels when they measure their serum after some time of aggressive supplementation. I have my patients consider adding 100 micrograms of Vitamin K2 with every 1000 IU increase of Vitamin D3 we add, while we simultaneously improve their semiconductors using the Epi-Paleo Rx. This may not be useful in the future. This can help repair the semiconductors quicker when the Epi-Paleo Rx is used simultaneously to improve the other factors important to quantum biology. When we do it this way, we begin to see changes and permanence of the Vitamin D level 12-18 months in. The duration of change is directly proportional to the quality of the semiconductor’s ability to transduce energy given the current disease status. The sicker you are, the longer it will take. This also is based in Einstein’s math built into the photoelectric effect of light. It is based upon his fourth paper from 1905 on Brownian motion and stochastic calculus. To gain the benefit of sunlight’s frequencies, we need the ability to absorb them. If our semiconductors are run down for any reason, the power of the sun can alter cellular signaling to cause disease. This does not mean we should avoid the sun; it means we need to improve our conditions of existence, to improve our bodies’ semiconductors’ ability to transmit the frequencies of the spectrum of light, to power our cells. Our body is designed to take the sun’s natural electromagnetic radiation, called visible light, and its intrinsic energy package, the photon, and turn it from an EMF message into a chemical message in our cells to signal and power life.

When the semiconductors of the body are degraded for any reason, vitamin D levels will always be low. The reason they are low is not because the sun’s light energy has recently changed. It is because our semiconductors have been degraded by our epigenetic choices or by how we are forced to live our modern lives. Today’s low levels of Vitamin D tell you something deep about the recommendations “experts” have made for us, once you realize the sun’s light has not changed. It means their recommendations for how we should live have moved away from what the proper operation of our semiconductors requires. It also means their beliefs have dramatically altered the choices we have allowed to occur for us and our families. These beliefs have created environmental mismatches which have acted to subjugate the rules of nature for a healthy life.

We are the experts on us, and we need to realize this. We need to allow the laws of nature to dictate our health outcomes. This is where the science of quantum mechanics should marry modern healthcare. We need to move away from the experts’ ideas and move back toward how Einstein says the elements of life should work at a foundational level. We need to allow nature’s design, and not a randomized controlled clinical trial, to dictate our choices. There is a deep lesson for all life buried in Einstein’s genius. It is time we all pay attention to it to make the difference we are looking for in medicine.


Math and Science

The mission of the Department of Mathematics and Science is threefold. The first goal is to acquaint students with scientific methodologies, critical thinking, and the history of scientific thought. The second is to address the interface between science and art, architecture, and design, whether it is through the physics of light, the chemistry of color, the biology of form, or the mathematics of symmetry. The third is to educate students so that they can respond intelligently and critically to today’s new developments in science and technology and make informed decisions regarding current scientific matters that affect public policy issues and ethics.

Acting Chair
Christopher Jensen
[email protected]

Assistant to the Chair
Margaret Dy-So
[email protected]

Laboratory Technician
Mary Lempres

This introduction to physics and chemistry is designed to prepare architecture students for their technological courses involving building, building materials, and building infrastructure. The course is non-calculus based.

This course is a survey of basic mathematical concepts that demonstrate the nature of mathematics. Topics are chosen from areas such as the concept of paradoxes and controversies, infinities, elementary number theory, modular arithmetic, fractals and chaos, topology, elementary probability and statistics.

This course explores some visual aspects of mathematics. Topics are chosen from areas such as geometric constructions, tessellations of the plane, symmetry groups, Platonic and Archimedean solids, spirals, Fibonacci numbers, the golden mean, phyllotaxis, spaces of dimension greater than three, and non-Euclidean geometry.

Introduces students to the mathematical principles underlying their computer programs. It familiarizes them with equations of lines and planes, forms for rotating and translating figures on a computer, transformations for 3-D, and perspective projections onto the screen.

This course is designed to improve the quantitative literacy of its students by exposing them to many of the financial decisions they will face in their lives. Students will work with mathematical tools that are commonly used to gain insight and clarity into these decisions, as well as learn how to communicate the results of their calculations. Our discussion of money will flow the way it does through adult lives, from earning an income and paying taxes to spending, saving, investing, and borrowing when we don't have the money to get what we want or need. This course will lead you on the path to equipping yourself with the necessary mathematical tools and know-how to handle money in an informed way.

This introduction to light and optical phenomena in nature and technology will acquaint students with various physical aspects of light. We will delve into optical effects in nature such as the formation of rainbows, the colors of the sky and bubbles, mirages, the formation of images by our eyes and reception of those images by the rods and cones of our retinas. The use of light in technology will be explored by examining topics such as fiber optics, light sources (from the sun to light bulbs to pixels), one-way mirrors, 3D movie glasses, and image formation with pinholes, lenses and mirrors. Special attention will be paid to the operating principles and functioning of cameras from their lenses, to their viewfinders, apertures and filters.

This is a science course intended for the student curious about modern electronics and its use in enhancing their own designs, as well as in preparation for Pratt's DDA and ID courses in interactive installations and robotics. Covering basic physics and electronics theory with practical applications in circuit design and interfacing, the course requires students to use critical and logical thinking to construct working electronic circuits that provide for control of input and output devices and the safe and reliable connection of one circuit to another or to an embedded controller (Arduino, Raspberry Pi, etc.) or computer port.

This is a course in basic astronomy, which will provide an overview of our current understanding of the universe around us. Topics will include the origin of the universe, galaxies, stars, planets, interstellar matter, black holes, supernovas, space travel, and the possibility (or not) of extraterrestrial life, as well as the observational techniques we use to reveal the universe.

This course provides an overview of our current understanding of the universe, allowing students to explore the vastness and details of the cosmos while inviting them to integrate scientific ideas into their own works of film, podcasts, discussions, and writing. Topics include the origin of the universe and that of matter, galaxies, stars, planets in and outside of our solar system, black holes, supernovae, dark matter, dark energy, the possibility of extraterrestrial life, and space travel, as well as the observational techniques used to reveal the cosmos. Students will gain perspective on our place in the universe as we explore how we know what we know, exposing how science is a process rather than an outcome. Discussions will also address the underrepresentation of minorities and women in the sciences.

This is a "hands-on" core course that introduces students to the chemistry behind artists' materials, including the chemistry of frescoes, traditional oil paintings, dyes, inks, illuminated manuscripts and textiles. Laboratory experiments, trips to museums, molecular visualizations of materials, films and multimedia presentations also part of the course. By the end of the semester, students produce their own fresco and tempera paintings, illuminated manuscripts and dyed textiles and are able to discuss the chemistry involved in each of these processed and how these different typed of works of art deteriorate with time.

This course provides a survey of the composition, structure, and history of the solid earth, with emphasis on how internal processes shape the earth. Major areas of focus include plate tectonics, the rock cycle, seismology, volcanic processes, and mineral resources.

In this course we analyze how the earth works: the ways in which solar energy, internal heat, and human civilization mold the earth's surface environment-its scenery, climates, and vegetation. We examine the Earth's component parts and interactions in order to better understand its past, present and future.

Earthquakes, tsunamis, hurricanes, floods, meteors, and climate change impact our world. In this course we take a "real world" case history approach to examining the physical causes of natural disasters and, equally important, the human contribution to them. We also discuss the engineering, planning, and political steps necessary to prevent disasters or at least soften their impact.

Botany is the scientific study of plants. This course provides an introduction to the essential components of botany. These include: morphology (What does a plant look like? How can we describe the differences between plants to classify them and understand how they are related to each other?), physiological function (How does a plant work? What does it need to grow? How does it respond to environmental stressors like drought?), and cellular function and genetics (How are plant cells different from animal cells? What about plant sex? How do plants reproduce and evolve into the great diversity of plants on planet Earth?)

The natural world is constructed from quite simple components. These components are however configured into increasingly complex degrees of myriad forms which are then reflective of their function within specific environments. This course will survey this diversity of form and design beginning with molecules which, in their simplest configurations, give rise to water and minerals (including fossils) and, more complexly, biological macromolecules. We will then consider the 'lower' life forms: protists (single-celled free-living organisms), fungi (much more complex and interesting than just 'mushrooms') and plants (flowers are just the beginning). Finally, we will conduct a more thorough investigation of the great variety and beauty of aquatic and terrestrial animal life from the simplest sponge to humans. All of the above will be presented from an evolutionary perspective via weekly lectures and hands-on micro- and macroscopic examination and study of laboratory specimens. Trips to parks and museums will be required. There is an expectation of sustained class engagement and personal responsibility in timely and accurate completion of assignments.

The natural world is constructed from quite simple components. These components are however configured into increasingly complex degrees of myriad forms which are then reflective of their function within specific environments. This course will survey this diversity of form and design beginning with molecules which, in their simplest configurations, give rise to water and minerals (including fossils) and, more complexly, biological macromolecules. We will then consider the 'lower' life forms: protists (single-celled free-living organisms), fungi (much more complex and interesting than just 'mushrooms') and plants (flowers are just the beginning). Finally, we will conduct a more thorough investigation of the great variety and beauty of aquatic and terrestrial animal life from the simplest sponge to humans. All of the above will be presented from an evolutionary perspective via weekly lectures and hands-on micro- and macroscopic examination and study of laboratory specimens. Trips to parks and museums will be required. There is an expectation of sustained class engagement and personal responsibility in timely and accurate completion of assignments while adhering to the highest artistic standards befitting a student of Pratt Institute.

Like any other organism, humans rely on their environment, most prominently the living part of that environment, in order to survive. But unlike any other species, humans have the ability to re-shape the diverse environments they inhabit in profound, fundamental, and potentially destructive ways. This course explores how living ecosystems function and how that functioning provides the resources required by both individual humans and the societies we form. It also considers how we have transformed our environment in ways that can threaten both our own health and the health of the ecosystems upon which human civilization depends. Many scientists suggest that we have entered a new geologic epoch, the Anthropocene; this course explores ways in which the "age of humanity" can become a sustainable, rather than apocalyptic, episode in evolutionary history.

Architects build structures that serve as environments for organisms: human beings. Therefore, it is crucial that architects understand the ways in which organisms interact with the environment and other organisms. This course will investigate topics in ecology that will enable students to think more broadly about what it means to design living and working spaces.

The underlying nature of our world, as revealed through science, has a controlling impact on the materials, designs, and structures available for construction of our built environment. Conversely, both the act of fabrication of our built environment and the nature of the structures we build have a profound effect on our natural environment. This course will introduce concepts in the natural, biological and physical sciences that clarify these interactions and prepare students to understand the environmental impact of their construction choices.

To achieve a sustainable future, we need buildings that provide for our comfort and security while imposing a far smaller impact on the environment than do today's buildings. This course will use many techniques of physical science to see how this can be done, both in new construction and in today's built environment. An introduction to climate science is also included. The course is worth three (3) credits and fulfills the Math and Science CORE course requirement.

Human civilization is threatened by its own success at a level not seen in recorded history. The threat, climate change, is well understood scientifically, technically, and economically. Although now penetrating the cultural realm, the political response remains woefully inadequate. This course will use the techniques of science to promote a deep understanding of the nature and urgency of the threat, preparing students to take part in the struggle against climate change that will occur in their lifetimes. The course will be based largely on reports of the Intergovernmental Panel on Climate Change (IPCC), augmented by recent literature findings.

Topics in analytic geometry, functions of one variable, limiting processes, differentiation of algebraic and trigonometric functions, definite and indefinite integrals are covered.

Applications of the definite integral; transcendental functions; methods of integration; improper integrals; curves in rectangular, polar, and parametric forms; iterative and numerical methods.

This is a comprehensive survey course in statistical theory and methodology. Statistical theory topics include descriptive statistics, data analysis, elementary probability, and hypothesis testing; methodology topics include sampling, goodness-of-fit testing, analysis of variance, and least squares estimation.

This course introduces Art History majors to the basics of chemistry and the chemistry behind artists' materials and techniques. Students engage in guided activities, such as guided laboratory experiments, to gain insight into the properties and chemical behavior of artists' materials. Lectures are developed to reinforce the understanding of chemical principles and address their connection to artists' materials. In addition, several guest speakers, including art conservators and conservation scientists, will introduce issues related to their fields of expertise.

In this course students will gain an understanding of the chemistry involved in the art and architectural materials utilized in ancient Rome. The course will draw on research from Pompeii and Herculaneum, which provide a wealth of preserved information about the history, technology, and culture of the Roman people. Through case studies, students will learn about the chemistry of Roman building materials, glasses, and pigments. Deterioration of wall paintings and mosaics will be discussed and students will learn how scientific analysis can provide guidelines for conservators on how to preserve the art at the ancient sites.

This course explores the evolution of sexual reproduction as an alternative to nature's original means of propagating genes (asexual cloning). We'll explore why sex evolved, weighing the benefits and liabilities associated with sexual reproduction and will also look at the diversity of sexual strategies employed across all kingdoms of life, considering the conflict and cooperation inherent in the reproductive process. The course will conclude by looking at the sexual behavior of humans and our closest primate relatives.

In this class students will explore the underlying muscular and skeletal structures that support movement. By conducting detailed anatomical investigations, through exploration of skeletal material, models, our own bodies, and dissections, students will explore the relationship between structure and the biomechanics of animal movement. Students will be challenged to apply their understanding of the anatomy of motion to the completion of a creative project in which they are the designers of their own anatomical structures and the movements that arise from these structures.

To achieve a sustainable future, we need buildings that provide for our comfort and security while imposing a far smaller impact on the environment than do today's buildings. This course will use many techniques of physical science to see how this can be done, both in new construction and in today's built environment. An introduction to climate science is also included. Each student will carry out a detailed energy assessment of an actual building.

Human civilization is threatened by its own success at a level not seen in recorded history. The threat, climate change, is well understood scientifically, technically, and economically. Although now penetrating the cultural realm, the political response remains woefully inadequate. This course will use the techniques of science to promote a deep understanding of the nature and urgency of the threat, preparing students to take part in the struggle against climate change that will occur in their lifetimes. Students will prepare an actual climate change mitigation plan for a city, state, or country of their choosing. The course will be based largely on reports of the Intergovernmental Panel on Climate Change (IPCC), augmented by recent literature findings.

In this course we study how color is created at the atomic and molecular level by interaction of light at the physical surface of reflective objects. From there we elucidate the chemistry of the perceptive organ, the eye, via its interaction with light with some coverage of the neurological/perceptual factors of the synthesizing organ, the brain. We will perform several lab experiments treating the nature of color from both a physical and chemical perspective.

In this course students will gain an understanding of how art and design materials degrade and how they can be preserved. Dirt plays a major role in the deterioration of materials; therefore optimal cleaning methods are a necessity. Scientific methods are important for the study of art and design materials. The use of multi-spectral imaging and polarized light microscopy for characterization of art and design materials will be discussed. We will cover how to determine realistic goals for treatments. Students will choose an art or design material and get a chance to scientifically characterize, clean, degrade, and apply a treatment, allowing for a deeper understanding of the materials they use in their practices.

The development of synthetic polymers such as plastic, rubber, and nylon is one of the main achievements of the 20th century. This course introduces students to the fundamentals of organic chemistry within the context of modern polymeric materials. Students will prepare various synthetic polymers and also work with commercially available polymeric materials. Works of art made of such materials are extremely challenging to conservators since they are vulnerable to deterioration. Signs of degradation such as discoloration, stickiness, and cracking are usually observed within less than 30 years. Analytical instrumentation will be used to identify and characterize molecular changes before and after artificial aging.

In this course students will gain an understanding of the fundamental similarities and differences between ceramics, metals and glass. By first exploring the similarities and differences between each material based on their crystalline structures at the microscopic level, students will learn about the related material strengths, working properties, and manufacturing techniques. Then we will focus on causes of degradation of each material, with particular attention to pollution, its origins, and the resulting chemical reactions as the inorganic materials interact with pollutants in their environments, as well as the results of increased pollutants and their origins due to climate change. Project-based work will serve as a focal learning tool, with semester-long research projects and weekly lab work/independent work. Students will recreate degradation properties using mockups, and throughout the semester each student will observe and document how the materials change, always reflecting on our living environment.

Focuses on areas of topical interest and current faculty research. The subject matter of these courses changes from semester to semester as a reflection of new scholarly developments and student/faculty interests. Since schedules and topics change frequently, students should seek information on current MSCI-490 offerings from the Department of Math and Science by emailing [email protected] or checking the Department's web page: https://www.pratt.edu/academics/liberal-arts-and-sciences/mathematics-and-science/math-science-courses

The Science and Society course explores some of the most pressing science issues facing the human condition today. Through lectures, readings, discussions, and writing, the class will explore such issues as climate change, alternative energy, genetic engineering, emerging infectious diseases, and the overall forecast for the human condition in the next several decades. Students will gain an appreciation of how science can inform policies that will shape our society, and will recognize the limitations of our current knowledge in predicting how modern technology will shape the human condition in the future.

This introduction to light and optical phenomena in nature and technology will acquaint students with various physical aspects of light. We will delve into optical effects in nature such as the formation of rainbows, the colors of the sky and bubbles, mirages, the formation of images by our eyes and reception of those images by the rods and cones of our retinas. The use of light in technology will be explored by examining topics such as fiber optics, light sources (from the sun to light bulbs to pixels), one-way mirrors, 3D movie glasses, and image formation with pinholes, lenses and mirrors. Special attention will be paid to the operating principles and functioning of cameras from their lenses, to their viewfinders, apertures and filters.

Music enriches our lives and plays a major role in societies, cultures and economies around the globe. In this course, we will explore the underlying physics behind acoustic music. We will start with a general description of sound waves before delving into how sound is produced by musical instruments. We will cover how we perceive music, including the functioning of our ears, and will analyze notes, musical scales and chords in terms of the frequencies involved. The surroundings in which we hear music matter as well, so we will also examine the acoustics of indoor and outdoor spaces.

Before the advent of the discipline of chemistry, artists relied solely on pigments that could be harvested from the natural environment. In this course you will be introduced to the creation of pigments by chemical means. The course is a general chemistry course with the main focus on inorganic chemistry. Through the synthesis of pigments we will explore basic chemical concepts like chemical bonding and different chemical reactions. We will discover how chemical properties allow us to understand the color of pigments, and we will touch on the chemical makeup of binders and the making of paint.

This course provides a background in the fundamental principles of evolution, including natural selection, adaptation, population genetics, coevolution, speciation, and macroevolution. Using historical texts as well as cutting-edge research papers, we will explore the ongoing development of Darwin's theory of evolution. Through the readings, activities, and dialogue supported by the course, students will learn to apply evolutionary concepts to both the natural and human-mediated world around them.

The drive to create and innovate is central to the human condition and is unmatched in the animal kingdom. It may be the most defining feature of the behavioral changes, resulting in behavioral modernity, that distinguish humans from our nearest primate and human ancestors. This course explores the concept of behavioral modernity and asks the questions: What evidence is there for the earliest appearance of art and technology in the fossil record? What role do these advances play in the biological success of our species? What accumulated knowledge do we take for granted that allows us to appreciate art, interpret symbolism, and interact with technology in ways our ancestors could not? In answering, students will explore the nature of art and technology through a biological lens, as adaptations to harsh environments and varied landscapes. We will explore the earliest evidence for tool use and artmaking as well as search the animal kingdom for evidence of these same behaviors. We will observe how technological advances can tell us about cognitive advances, looking both to the fossil record and cognitive development for evidence. Finally, we'll consider whether there are costs to the adaptations that led to our reliance on innovation.

Like any other organism, humans rely on their environment, most prominently the living part of that environment, in order to survive. But unlike any other species, humans have the ability to re-shape the diverse environments they inhabit in profound, fundamental, and potentially destructive ways. This course explores how living ecosystems function and how that functioning provides the resources required by both individual humans and the societies we form. It also considers how we have transformed our environment in ways that can threaten both our own health and the health of the ecosystems upon which human civilization depends. Many scientists suggest that we have entered a new geologic epoch, the Anthropocene; this course explores ways in which the "age of humanity" can become a sustainable, rather than apocalyptic, episode in evolutionary history.

Humans are the only species to play host to two complex evolving systems: One genetic and one cultural. Our unique and extensive use of culture has allowed us to become the most dominant species the earth has ever seen. But the use of cumulative culture as our greatest means of surviving also creates a variety of dilemmas, both for individual people and our species as a whole. This course explores our roles as baby breeders, culture propagators, and idea creators. Understanding these fundamental human activities will allow us to understand how our genes and culture have coevolved and what that unique coevolution means for the present and future of our species.

This course explores very briefly how the concept of energy as the ability to do work was developed during the Industrial Revolution as a new vision of Nature that transformed the Newtonian/Mechanical conception into a new world of thermodynamics. Although energy is one of the most elusive and intangible scientific concepts, it would be impossible to understand our civilization without knowing how our society has historically extracted, transported, used, stored, and saved energy sources. This course will explore how those energy landscapes have transitioned over time, and analyzes some of the main environmental impacts generated by our energy systems.

Students learn about the mechanics of solids, including statics and dynamics, work, energy, machines, elasticity, fluids at rest and in motion, fundamental concepts of heat and temperature and heat transfer. Laboratory experiments are coordinated with classwork.

Covers such topics as electricity and magnetism, including resistance, inductance, and capacitance; DC and AC circuits; measuring instruments; production, transmission, and absorption of sound; and light sources and intensity measurements. Laboratory experiments are coordinated with classroom work.



DISCOVER Vol. 21 No. 10 (October 2000)

20 Ideas That Will Rule Research in the Next 20 Years
On the edge of a brave new world.

By Matt Mahurin

As we head into the 21st century, knowledge is being created— and disseminated— far, far faster than ever before. Given this wealth of discovery, we invited some scientists to predict what questions or ethical issues will dominate their fields over the next 20 years. Their replies express not only excitement at the pace of discovery but also a broad concern about the use of new technologies and scientific information. Many noted the impact of the Human Genome Project— and the ethical problems that will attend screening patients' DNA for information about genetic vulnerabilities. Others remarked upon the parallel explosion of information— or, more precisely, surveillance— in cyberspace and how it, too, threatens individual privacy.

The business of science, so to speak, is knowledge. And knowledge is power. Deciding how to handle these new forms of knowledge will be just as important as— and probably far more problematic than— creating the knowledge itself.

Francis S. Collins, geneticist, director of the Human Genome Project
With the sequence of the human genome largely determined, laboratory research of human diseases will shift as researchers adopt a "genome attitude" toward solving problems. First, there will be increased emphasis on a systems approach. Researchers will examine the integrated functions among many genes, gain insight into the web of coordinated interactions among cellular pathways, and determine the impact of external factors. The number of potential therapeutic targets will increase dramatically as a consequence.

Second, there will be a heavy emphasis on determining the hereditary contributions to common diseases. Among the insights with the greatest immediate consequence will be an understanding of individual variability in response to drugs.

Third, our increasing ability to predict the structure of proteins will accelerate our understanding of how individual proteins work and interact with other proteins and/or DNA elements. This will also contribute to more rapid identification of potential therapeutic agents.

Fourth, human genetic and genomic research will become significantly more computational in approach. In silico will replace in vitro or even in vivo for many experiments.

Fifth, the debate about the ethical, legal, and social consequences of research in human genetics will intensify. While it is hoped that legislative solutions to the problems of genetic discrimination and breaches of privacy will be implemented in many countries, the challenge of educating health care providers to be practitioners of this new brand of genetic medicine will be considerable. Furious debates, not all of them grounded in the scientific facts, will rage about the limits of genetic intervention of our own species. To traverse these troubled waters successfully, we will need full and informed engagement by a diverse group of potential stakeholders.

Antonio Damasio, neuroscientist, University of Iowa College of Medicine
The near future of fundamental neuroscience will be dominated by the problem of consciousness. Curiously, the part of the problem that most thinkers consider difficult, perhaps impossible, to tackle is likely to be elucidated relatively soon. This is the problem of the self, which has to do with how the brain lets us know of our existence as individuals and of the amazing fact that each of us has a private mind that belongs to us and to no one else. But there is another part of the consciousness problem, the part that I describe in my book The Feeling of What Happens as "the movie in the brain." A lot is already known about the molecules, neurons, and circuits with which the brain constructs the sensory patterns necessary to make a movie in the brain. Yet there is a gap in our understanding of how those sensory patterns, which do occur in well-specified circuits of this or that brain region, ever become mental images. The challenge is to fill this gap.

But the developments in clinical neuroscience will be no less important. The remarkable success in identifying the genetic basis of single gene disorders, such as Huntington's disease, and in identifying the many genes that make individuals vulnerable to such disorders as Alzheimer's disease, suggests that the genetic contribution to several devastating neurological conditions will be discovered in just a few years. If neuroscience does its job properly, it will be possible to discover, for example, how the abnormal protein produced by a sick gene can lead to the death of nerve cells, thus opening a new universe of treatment possibilities before our eyes. We will be able to screen our own genome in the early part of our lives, and we will be able, by taking appropriate medications, to prevent the damage that a sick gene will cause, or repair it rapidly. However, this optimistic scenario is not without pitfalls. It will be argued that having one's genetic screening made public could limit the choice of a career, preclude certain forms of employment, and make one uninsurable or perhaps insurable only at prohibitive cost. These dire scenarios can be preempted only by the development of effective treatments, much compassion, intense social awareness, and protective laws.

Ron Graham, professor of computer and information science, University of California at San Diego
It seems very likely that mathematics will become an increasingly essential component of almost all of the emerging sciences. These include physics (string theory and the subatomic "zoo"), biology (understanding the human genome and predicting protein folding), computing (creating effective Internet algorithms and guaranteeing security and privacy), chemistry (designing innovative methods for constructing new compounds), and economics (predicting the complex dynamics of the world economy), to name a few. As any science matures, its methods inevitably become more quantitative. So mathematics, as the language of science, is ideally suited to provide a deeper understanding of each of these fields.

Bernardo Huberman, theoretical physicist, Xerox PARC, Palo Alto, California
One trend I see having a sizable impact will be our ability to access all kinds of information on a global scale, including genetic and private records of individuals. But we will also create legal and ethical problems around such access. Issues of privacy, ownership, and rights to information will become central to people all over the world, thus leading to the creation of novel mechanisms and international institutions. Twenty years ago, I thought that the most important trends would revolve around nonlinear dynamics and the increased sophistication of computers. But I did not envision the cheap global connectivity that the Internet would bring.

Mary-Claire King, professor of medicine and genetics, University of Washington
I think we will explore what it means to be human in new ways. How are we different from our closest relatives? What defines us as a species? What is the genetic basis of our definitive traits? At the genomic level, the answers will be learnable and probably ultimately pretty straightforward. But their philosophical meaning will be immense. Biologists and humanists will need to learn to talk together in ways we have only just begun to develop. I am glad I will still be working on these questions in 20 years.

Steven Pinker, professor of cognitive neuroscience, Massachusetts Institute of Technology
The biologist E. O. Wilson suggested a useful word for a trend in the human sciences that will accelerate in the next two decades: consilience, the unification of knowledge. The natural sciences will blend into the social sciences and humanities via an understanding of human nature provided by bridging disciplines, such as cognitive neuroscience, evolutionary psychology, and behavioral genetics. This will stand in stark contrast to the two-cultures view in which biology and culture are sealed in parallel universes and it is politically dangerous for one to impinge on the other.

The humanities will be freed from sterile postmodernism and social determinism and be seen as being about products of human minds; they will thereby benefit from insights about perception, cognition, and emotion. Politics and history will be enriched by an understanding of the psychological roots of human aggression, cooperation, coalition formation, and conflict resolution, rather than invoking unanalyzed "social forces." Law will replace its folk theories of free will, deterrence, and "the reasonable person" with ones compatible with neuroscience, genetics, and evolution. Likewise, economics will augment its folk theory of "economic man" with research about human reasoning, decision-making, and passion.

Medicine will transcend its craftsman's understanding of disease and place itself on a theoretical foundation drawn from evolutionary biology. Education will start with a better understanding of which skills develop instinctively in children and which require intensive instruction and hard work. These changes will not be unopposed. Professional insularity, lazy political arguments, and the ancient doctrine that the mind is a blank slate will slow them down. But the gains in insight will be too great to halt them for long.

Lee Smolin, professor of physics, Pennsylvania State University
During the next 20 years a revolution in physics that has been in progress since Einstein overthrew Newtonian physics will culminate in a new physical theory. It will combine all we have learned in the last century about relativity, quantum theory, elementary particle physics, and cosmology. The remainder of those 20 years will be spent working out its implications. Dramatic progress in observational cosmology and experimental physics will also give rise to tests of the new theory. Then physicists and cosmologists will be able to attack questions on which progress was not previously possible, such as what happened before the Big Bang and why the universe is hospitable to life. The next 20 years also will be remembered as the time that real progress began to be made resolving the great problems of origins: the origin of life, the origin of galaxies, the origin of the human species, the origin of language and human social organization.

On the ethical side, the rapid growth of new opportunities outside the universities for people with scientific training will sooner or later give rise to a long overdue reform of the university system. Universities are among the most bureaucratic institutions in society; they will have to reform to compete for talent. The important ethical question is the extent to which these reforms can be managed to foster the main values of the university: teaching, research, and scholarship. The question will be how to protect and foster these necessarily labor-intensive and fragile activities within new organizational structures that will resemble the horizontal structures of small technical companies far more than they do the present rigid and hierarchical university system.

Christopher Wills, professor of biology, University of California at San Diego
Twenty years ago the Human Genome Project was not even a gleam in anybody's eye. Today the project is virtually complete. Much has been written about the project's impact on understanding cancer and aging and on the possibility of finding genes for intelligence and behavior. But another aspect of the project may be the most important of all. Our population is exploding, leading to widespread environmental damage, wars over shrinking resources, and famines. Our ability to take control of our reproduction has not kept pace with the problem. Condoms, the most widely used devices for preventing conception, were employed by the ancient Egyptians in 1000 B.C. The Pill is a crude manipulation of female hormone levels that can have undesirable side effects.

Now, thanks to the Human Genome Project, we have the sequences of all the genes involved in human reproduction. This includes all the genes involved in the production of sperm, all the genes that govern the environment inside the uterus, all the genes involved in ovulation and implantation, and all the genes that code for neuropeptides and other hormones that influence sexual behavior. We do not yet know how most of these genes work, but the opportunities for finding cheap, safe, effective, and reversible ways to prevent conception are boundless. Let's find them before the next 20 years are up.