A phenomenal, wide-ranging book touching on everything from physics to history to economics. If you like books like Sapiens or Gödel, Escher, Bach, check it out.
In this book I argue that all progress, both theoretical and practical, has resulted from a single human activity: the quest for what I call good explanations.
And that whole universe is just a sliver of an enormously larger entity, the multiverse, which includes vast numbers of such universes.
Experience is indeed essential to science, but its role is different from that supposed by empiricism. It is not the source from which theories are derived. Its main use is to choose between theories that have already been guessed. That is what ‘learning from experience’ is.
So much for inductivism. And since inductivism is false, empiricism must be as well. For if one cannot derive predictions from experience, one certainly cannot derive explanations. Discovering a new explanation is inherently an act of creativity.
The misconception that knowledge needs authority to be genuine or reliable dates back to antiquity, and it still prevails. To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know…?’ is transformed into ‘by what authority do we claim…?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.
The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism.
Fallibilists expect even their best and most fundamental explanations to contain misconceptions in addition to truth, and so they are predisposed to try to change them for the better. In contrast, the logic of justificationism is to seek (and typically, to believe that one has found) ways of securing ideas against change.
Moreover, the logic of fallibilism is that one not only seeks to correct the misconceptions of the past, but hopes in the future to find and change mistaken ideas that no one today questions or finds problematic.
So it is fallibilism, not mere rejection of authority, that is essential for the initiation of unlimited knowledge growth – the beginning of infinity.
one thing that all conceptions of the Enlightenment agree on is that it was a rebellion, and specifically a rebellion against authority in regard to knowledge.
Testability is now generally accepted as the defining characteristic of the scientific method. Popper called it the ‘criterion of demarcation’ between science and non-science.
The reason that testability is not enough is that prediction is not, and cannot be, the purpose of science.
Instrumentalism is one of many ways of denying realism, the commonsense, and true, doctrine that the physical world really exists, and is accessible to rational inquiry.
That is relativism, the doctrine that statements in a given field cannot be objectively true or false: at most they can be judged so relative to some cultural or other arbitrary standard.
Since theories can contradict each other, but there are no contradictions in reality, every problem signals that our knowledge must be flawed or inadequate.
The quest for good explanations is, I believe, the basic regulating principle not only of science, but of the Enlightenment generally.
That is what a good explanation will do for you: it makes it harder for you to fool yourself.
we observe nothing directly anyway. All observation is theory-laden.
So fruitful has this abandonment of anthropocentric theories been, and so important in the broader history of ideas, that anti-anthropocentrism has increasingly been elevated to the status of a universal principle, sometimes called the ‘Principle of Mediocrity’: there is nothing significant about humans (in the cosmic scheme of things).
So, while intergalactic space would kill me in a matter of seconds, Oxfordshire in its primeval state might do it in a matter of hours – which can be considered ‘life support’ only in the most contrived sense. There is a life-support system in Oxfordshire today, but it was not provided by the biosphere. It has been built by humans.
So, just as our senses cannot detect neutrinos or quasars or most other significant phenomena in the cosmic scheme of things, there is no reason to expect our brains to understand them. To the extent that they already do understand them, we have been lucky – but a run of luck cannot be expected to continue for long. Hence Dawkins agrees with an earlier evolutionary biologist, J. B. S. Haldane, who expected that ‘the universe is not only queerer than we suppose, but queerer than we can suppose.’
That is to say, every putative physical transformation, to be performed in a given time with given resources or under any other conditions, is either impossible because it is forbidden by the laws of nature, or achievable, given the right knowledge.
everything that is not forbidden by laws of nature is achievable, given the right knowledge.
It is inevitable that we face problems, but no particular problem is inevitable. We survive, and thrive, by solving each problem as it comes up. And, since the human ability to transform nature is limited only by the laws of physics, none of the endless stream of problems will ever constitute an impassable barrier. So a complementary and equally important truth about people and the physical world is that problems are soluble. By ‘soluble’ I mean that the right knowledge would solve them.
The environments that could create an open-ended stream of knowledge, if suitably primed – i.e. almost all environments.
From the least parochial perspectives available to us, people are the most significant entities in the cosmic scheme of things. They are not ‘supported’ by their environments, but support themselves by creating knowledge.
Apart from the thoughts of people, the only process known to be capable of creating knowledge is biological evolution.
That a gene is adapted to a given function means that few, if any, small changes would improve its ability to perform that function. Some changes might make no practical difference to that ability, but most of those that did would make it worse. In other words good adaptations, like good explanations, are distinguished by being hard to vary while still fulfilling their functions.
Human brains and DNA molecules each have many functions, but among other things they are general-purpose information-storage media: they are in principle capable of storing any kind of information. Moreover, the two types of information that they respectively evolved to store have a property of cosmic significance in common: once they are physically embodied in a suitable environment, they tend to cause themselves to remain so. Such information – which I call knowledge – is very unlikely to come into existence other than through the error-correcting processes of evolution or thought.
the biosphere is much less pleasant for its inhabitants than anything that a benevolent, or even halfway decent, human designer would design.
‘Why should not this answer serve for the watch as well as for the stone; why is it not as admissible in the second case as in the first?’ And he knew why. Because the watch not only serves a purpose, it is adapted to that purpose:
Hence, as many critics have since noticed, if we substitute ‘ultimate designer’ for ‘watch’ in Paley’s text above, we force Paley to ‘the [inevitable] inference…that the ultimate designer must have had a maker’.
Jean-Baptiste Lamarck proposed an answer that is now known as Lamarckism. Its key idea is that improvements acquired by an organism during its lifetime can be inherited by its offspring.
But what genes are adapted to – what they do better than almost any variant of themselves – has nothing to do with the species or the individuals or even their own survival in the long run. It is getting themselves replicated more than rival genes.
Neo-Darwinism does not refer, at its fundamental level, to anything biological. It is based on the idea of a replicator (anything that contributes causally to its own copying). For instance, a gene conferring the ability to digest a certain type of food causes the organism to remain healthy in some situations where it would otherwise weaken or die. Hence it increases the organism’s chances of having offspring in the future, and those offspring would inherit, and spread, copies of the gene.
Ideas can be replicators too. For example, a good joke is a replicator: when lodged in a person’s mind, it has a tendency to cause that person to tell it to other people, thus copying it into their minds. Dawkins coined the term memes (rhymes with ‘dreams’) for ideas that are replicators.
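The replicator idea is abstract enough to simulate without any biology. The toy sketch below (all variant names, copy probabilities, and population sizes are invented for illustration, not from the book) tracks variants of an idea that differ only in how well they cause themselves to be copied; the best replicator comes to dominate.

```python
import random

# Copy probability per generation for each variant of an idea (or gene).
# These numbers are illustrative assumptions.
COPY_PROB = {"variant_a": 0.2, "variant_b": 0.5, "variant_c": 0.8}

def step(population, capacity=1000):
    """One generation: each replicator may produce a copy of itself,
    then the population is culled at random back to a fixed capacity."""
    offspring = [r for r in population if random.random() < COPY_PROB[r]]
    population = population + offspring
    random.shuffle(population)
    return population[:capacity]

random.seed(0)
pop = ["variant_a", "variant_b", "variant_c"] * 100  # equal starting shares
for _ in range(50):
    pop = step(pop)

shares = {v: pop.count(v) / len(pop) for v in COPY_PROB}
# The variant that contributes most to its own copying takes over,
# regardless of any benefit or harm to anything else.
print(shares)
```

Nothing in the loop mentions organisms or minds: differential copying plus a bounded population is all it takes, which is the sense in which genes and memes fall under one theory.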
The existence of a force of gravity is, astonishingly, denied by Einstein’s general theory of relativity, one of the two deepest theories of physics. This says that the only force on your arm in that situation is that which you yourself are exerting, upwards, to keep it constantly accelerating away from the straightest possible path in a curved region of spacetime.
Following the philosopher Daniel Dennett, Hofstadter eventually concludes that the ‘I’ is an illusion. Minds, he concludes, can’t ‘push material stuff around’, because ‘physical law alone would suffice to determine [its] behaviour’. Hence his reductionism.
This also illustrates the emptiness of reductionism in philosophy. For if I ask you for advice about what objectives to pursue in life, it is no good telling me to do what the laws of physics mandate. I shall do that in any case.
So there is no avoiding what-to-do-next problems, and, since the distinction between right and wrong appears in our best explanations that address such problems, we must regard that distinction as real. In other words, there is an objective difference between right and wrong: those are real attributes of objectives and behaviours.
In reality, explanations do not form a hierarchy with the lowest level being the most fundamental. Rather, explanations at any level of emergence can be fundamental. Abstract entities are real, and can play a role in causing physical phenomena. Causation is itself such an abstraction.
Some historians believe that the idea of an alphabet-based writing system was conceived only once in human history – by some unknown predecessors of the Phoenicians, who then spread it throughout the Mediterranean – so that every alphabet-based writing system that has ever existed is either descended from or inspired by that Phoenician one.
Just as one could upgrade the vocabulary of an ancient writing system by adding pictograms, so one could add symbols to a system of numerals to increase its range. And this was done. But the resulting system would still always have a highest-valued symbol, and hence would not be universal for doing arithmetic without tallying.
The only way to emancipate arithmetic from tallying is with rules of universal reach. As with alphabets, a small set of basic rules and symbols is sufficient. The universal system in general use today has ten symbols, the digits 0 to 9, and its universality is due to a rule that the value of a digit depends on its position in the number
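The universality of the positional rule is easy to make concrete: ten fixed symbols plus the place-value rule suffice for every natural number, with no highest-valued symbol needed. A minimal sketch:

```python
DIGITS = "0123456789"  # the entire symbol set

def to_numeral(n: int) -> str:
    """Write n positionally: each digit contributes digit * 10**position."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, d = divmod(n, 10)
        out.append(DIGITS[d])
    return "".join(reversed(out))

def value(numeral: str) -> int:
    """Invert the rule: the value of a digit depends on its position."""
    total = 0
    for ch in numeral:
        total = total * 10 + DIGITS.index(ch)
    return total

# A tally- or Roman-style system needs ever more symbols for larger
# numbers; here the same ten handle numbers of unbounded size.
print(to_numeral(10**30))
```

The loop in `to_numeral` terminates for any `n`, which is exactly the claim of universal reach: a small, fixed rule set with unbounded range.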
First the brain was supposed to be like an immensely complicated set of gears and levers. Then it was hydraulic pipes, then steam engines, then telephone exchanges – and, now that computers are our most impressive technology, brains are said to be computers. But this is still no more than a metaphor, says Searle, and there is no more reason to expect the brain to be a computer than a steam engine.
But there is. A steam engine is not a universal simulator. But a computer is, so expecting it to be able to do whatever neurons can is not a metaphor: it is a known and proven property of the laws of physics as best we know them.
Lady Lovelace’s objection has almost the same logic as Douglas Hofstadter’s argument for reductionism (Chapter 5) – yet Hofstadter is one of today’s foremost proponents of the possibility of AI. That is because both of them share the mistaken premise that low-level computational steps cannot possibly add up to a higher-level ‘I’ that affects anything. The difference between them is that they chose opposite horns of the dilemma that that poses: Lovelace chose the false conclusion that AI is impossible, while Hofstadter chose the false conclusion that no such ‘I’ can exist.
The jump to universality: The tendency of gradually improving systems to undergo a sudden large increase in functionality, becoming universal in some domain.
This is how much success the quest for ‘machines that think’ had achieved in the fifty-eight years following Turing’s paper: nil. Yet, in every other respect, computer science and technology had made astounding progress during that period.
At the present state of the field, a useful rule of thumb is: if it can already be programmed, it has nothing to do with intelligence in Turing’s sense. Conversely, I have settled on a simple test for judging claims, including Dennett’s, to have explained the nature of consciousness (or any other computational task): if you can’t program it, you haven’t understood it.
But his test is rooted in the empiricist mistake of seeking a purely behavioural criterion: it requires the judge to come to a conclusion without any explanation of how the candidate AI is supposed to work. But, in reality, judging whether something is a genuine AI will always depend on explanations of how it works.
The test is only about who designed the AI’s utterances: who adapted its utterances to be meaningful – who created the knowledge in them? If it was the designer, then the program is not an AI. If it was the program itself, then it is an AI.
Thus the very same utterance by the program – the joke – can be either evidence that it is not thinking or evidence that it is thinking depending on the best available explanation of how the program works.
Hence, even if chatbots did at some point start becoming much better at imitating humans (or at fooling humans), that would still not be a path to AI. Becoming better at pretending to think is not the same as coming closer to being able to think.
There is a philosophy whose basic tenet is that those are the same. It is called behaviourism – which is instrumentalism applied to psychology. In other words, it is the doctrine that psychology can only, or should only, be the science of behaviour, not of minds; that it can only measure and predict relationships between people’s external circumstances (‘stimuli’) and their observed behaviours (‘responses’).
The Turing-test idea makes us think that, if it is given enough standard reply templates, an Eliza program will automatically be creating knowledge; artificial evolution makes us think that if we have variation and selection, then evolution (of adaptations) will automatically happen. But neither is necessarily so. In both cases, another possibility is that no knowledge at all will be created during the running of the program, only during its development by the programmer.
Every room is at the beginning of infinity. That is one of the attributes of the unbounded growth of knowledge too: we are only just scratching the surface, and shall never be doing anything else.
if unlimited progress really is going to happen, not only are we now at almost the very beginning of it, we always shall be.
Russian roulette is merely random. Although we cannot predict the outcome, we do know what the possible outcomes are, and the probability of each, provided that the rules of the game are obeyed. The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created. Hence the possible outcomes are not yet known, let alone their probabilities.
Pessimists believe that the present state of our own civilization is an exception to that pattern. But what does the precautionary principle say about that claim? Can we be sure that our present knowledge, too, is not riddled with dangerous gaps and misconceptions? That our present wealth is not pathetically inadequate to deal with unforeseen problems? Since we cannot be sure, would not the precautionary principle require us to confine ourselves to the policy that would always have been salutary in the past – namely innovation and, in emergencies, even blind optimism about the benefits of new knowledge?
Malthus slipped from educated guesswork into blind prophecy. He and many of his contemporaries were misled into believing that he had discovered an objective asymmetry between what he called the ‘power of population’ and the ‘power of production’. But that was just a parochial mistake – the same one that Michelson and Lagrange made. They all thought they were making sober predictions based on the best knowledge available to them. In reality they were all allowing themselves to be misled by the ineluctable fact of the human condition that we do not yet know what we have not yet discovered.
A probability of one in 250,000 of such an impact in any given year means that a typical person on Earth would have a far larger chance of dying of an asteroid impact than in an aeroplane crash. And the next such object to strike us is already out there at this moment, speeding towards us with nothing to stop it except human knowledge.
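The annual figure can be turned into a lifetime risk with elementary probability. In the sketch below, the 80-year lifespan and the assumption that years are independent are mine, not the book's:

```python
# Annual probability of such an impact, from the text.
p_year = 1 / 250_000

# Probability of at least one impact over an assumed 80-year lifetime,
# treating years as independent trials.
lifetime_years = 80
p_lifetime = 1 - (1 - p_year) ** lifetime_years

print(f"{p_lifetime:.4%}")  # about 0.032%, i.e. roughly 1 in 3,100
```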
The Principle of Optimism: All evils are caused by insufficient knowledge.
Optimism (in the sense that I have advocated) is the theory that all failures – all evils – are due to insufficient knowledge.
Problems are inevitable, because our knowledge will always be infinitely far from complete. Some problems are hard, but it is a mistake to confuse hard problems with problems unlikely to be solved.
Problems are soluble, and each particular evil is a problem that can be solved.
you can indeed become ever more like gods in ever more ways, if you choose to. (Though you will always remain fallible.)
But, if we choose to, are you saying that there is no upper bound to how much we can eventually understand, and control, and achieve?
Crucially, this is the same objective that the originator of the theory had. If it is a good theory – if it is a superb theory, as the fundamental theories of physics nowadays are – then it is exceedingly hard to vary while still remaining a viable explanation. So the learners, through criticism of their initial guesses and with the help of their books, teachers and colleagues, seeking a viable explanation, will arrive at the same theory as the originator. That is how the theory manages to be passed faithfully from generation to generation, despite no one caring about its faithfulness one way or the other. Slowly, and with many setbacks, the same is becoming true in non-scientific fields. The way to converge with each other is to converge upon the truth.

11. The Multiverse

The idea of a ‘doppelgänger’ (a ‘double’ of a person) is a frequent theme of science fiction.
A writer of real science fiction faces two conflicting incentives. One is, as with all fiction, to allow the reader to engage with the story, and the easiest way to do that is to draw on themes that are already familiar. But that is an anthropocentric incentive. For instance, it pushes authors to imagine ways around the absolute speed limit that the laws of physics impose on travel and communication (namely the speed of light).
I had better warn the reader that the account that I shall give – known as the ‘many-universes interpretation’ of quantum theory (rather inadequately, since there is much more to it than ‘universes’) – remains at the time of writing a decidedly minority view among physicists.
For example, if the balance in your (electronic) bank account is one dollar, and the bank adds a second dollar as a loyalty bonus and later withdraws a dollar in charges, there is no meaning to whether the dollar they withdrew is the one that was there originally or the one that they had added – or is composed of a little of each. It is not merely that we cannot know whether it was the same dollar, or have decided not to care: because of the physics of the situation there really is no such thing as taking the original dollar, nor such a thing as taking the one added subsequently.
Dollars in bank accounts are what may be called ‘configurational’ entities: they are states or configurations of objects, not what we usually think of as physical objects in their own right.
Another example of fungible configurational entities in classical physics is amounts of energy: if you pedal your bicycle until you have built up a kinetic energy of ten kilojoules, and then brake until half that energy has been dissipated as heat, there is no meaning to whether the energy dissipated was the first five kilojoules that you had added or the second, or any combination. But it is meaningful that half the energy that was there has been dissipated.
The effects of a wave of differentiation usually diminish rapidly with distance – simply because physical effects in general do. The sun, from even a hundredth of a light year away, looks like a cold, bright dot in the sky. It barely affects anything. Nor, from a thousand light years away, does a supernova. Even the most violent of quasar jets, when viewed from a neighbouring galaxy, would be little more than an abstract painting in the sky. There is only one known phenomenon which, if it ever occurred, would have effects that did not fall off with distance, and that is the creation of a certain type of knowledge, namely a beginning of infinity.
There is a way – I think it is the only way – to meet simultaneously the requirements that our fictional laws of physics be universal and deterministic, and forbid faster-than-light and inter-universe communication: more universes. Imagine an uncountably infinite number of them, initially all fungible. The transporter causes previously fungible ones to become different, as before; but now the relevant law of physics says, ‘The voltage surges in half the universes in which the transporter is used.’ So, if the two starships both run their transporters, then, after the two spheres of differentiation have overlapped, there will be universes of four different kinds: those in which a surge happened only in the first starship, only in the second, in neither, and in both. In other words, in the overlap region there are four different histories, each taking place in one quarter of the universes.
Notice that when a random outcome (in this sense) is about to happen, it is a situation of diversity within fungibility: the diversity is in the variable ‘what outcome they are going to see’. The logic of the situation is the same as in cases like that of the bank account I discussed above, except that this time the fungible entities are people. They are fungible, yet half of them are going to see the surge and the other half not.
The number of distinct histories will now increase rapidly. Whenever the transporter is used, it takes only microseconds for the sphere of differentiation to engulf the whole starship, so, if it is typically used ten times per day, the number of distinct histories inside the whole starship will double about ten times a day. Within a month there will be more distinct histories than there are atoms in our visible universe.
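The arithmetic behind that claim is easy to check. The figure of roughly 10^80 atoms in the visible universe is a standard order-of-magnitude estimate, not from the text:

```python
doublings_per_day = 10   # transporter used ten times per day (from the text)
days = 30
histories = 2 ** (doublings_per_day * days)   # 2**300 after a month

ATOMS_IN_VISIBLE_UNIVERSE = 10 ** 80  # standard order-of-magnitude estimate

# 2**300 is about 2 x 10**90, comfortably exceeding 10**80.
print(histories > ATOMS_IN_VISIBLE_UNIVERSE)
```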
So our rule, in short, is that interference can happen only in objects that are unentangled with the rest of the world. This is why, in the interference experiment, the two applications of the transporter have to be ‘in quick succession’. (Alternatively, the object in question has to be sufficiently well isolated for its voltages not to affect its surroundings.)
In entangled objects, further splitting happens instead of interference.
atoms could not exist at all according to classical physics. An atom consists of a positively charged nucleus surrounded by negatively charged electrons. But positive and negative charges attract each other and, if unrestrained, accelerate towards each other, emitting energy in the form of electromagnetic radiation as they go.
In non-technical accounts, the structure of atoms is sometimes explained by analogy with the solar system: one imagines electrons in orbit around the nucleus like planets around the sun. But that does not match the reality. For one thing, gravitationally bound objects do slowly spiral in, emitting gravitational radiation (the process has been observed for binary neutron stars), and the corresponding electromagnetic process in an atom would be over in a fraction of a second.
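A standard back-of-envelope estimate (mine, not the book's) of that classical collapse time follows from the Larmor radiation formula. In terms of the Bohr radius $a_0$ and the classical electron radius $r_e$:

$$
t \;\approx\; \frac{a_0^3}{4\,r_e^2\,c}
\;=\; \frac{(5.3\times10^{-11}\,\mathrm{m})^3}{4\,(2.8\times10^{-15}\,\mathrm{m})^2\,(3.0\times10^{8}\,\mathrm{m/s})}
\;\approx\; 1.6\times10^{-11}\,\mathrm{s},
$$

confirming that the corresponding electromagnetic process in an atom really would be over in a fraction of a second.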
Thanks to the strong internal interference that it is continuously undergoing, a typical electron is an irreducibly multiversal object, and not a collection of parallel-universe or parallel-histories objects. That is to say, it has multiple positions and multiple speeds without being divisible into autonomous sub-entities each of which has one speed and one position. Even different electrons do not have completely separate identities. So the reality is an electron field throughout the whole of space, and disturbances spread through this field as waves, at the speed of light or below. This is what gave rise to the often-quoted misconception among the pioneers of quantum theory that electrons (and likewise all other particles) are ‘particles and waves at the same time’. There is a field (or ‘waves’) in the multiverse for every individual particle that we observe in a particular universe.
The physical world is a multiverse, and its structure is determined by how information flows in it. In many regions of the multiverse, information flows in quasi-autonomous streams called histories, one of which we call our ‘universe’. Universes approximately obey the laws of classical (pre-quantum) physics. But we know of the rest of the multiverse, and can test the laws of quantum physics, because of the phenomenon of quantum interference. Thus a universe is not an exact but an emergent feature of the multiverse. One of the most unfamiliar and counter-intuitive things about the multiverse is fungibility. The laws of motion of the multiverse are deterministic, and apparent randomness is due to initially fungible instances of objects becoming different. In quantum physics, variables are typically discrete, and how they change from one value to another is a multiversal process involving interference and fungibility.
With hindsight, we can state the rule of thumb like this: whenever a measurement is made, all the histories but one cease to exist. The surviving one is chosen at random, with the probability of each possible outcome being equal to the total measure of all the histories in which that outcome occurs.
Let me define ‘bad philosophy’ as philosophy that is not merely false, but actively prevents the growth of other knowledge. In this case, instrumentalism was acting to prevent the explanations in Schrödinger’s and Heisenberg’s theories from being improved or elaborated or unified.
Bad philosophy has always existed too. For instance, children have always been told, ‘Because I say so.’ Although that is not always intended as a philosophical position, it is worth analysing it as one, for in four simple words it contains remarkably many themes of false and bad philosophy.
it reinterprets a request for true explanation (why should something-or-other be as it is?) as a request for justification (what entitles you to assert that it is so?), which is the justified-true-belief chimera.
it confuses the nonexistent authority for ideas with human authority (power) – a much-travelled path in bad political philosophy.
One currently influential philosophical movement goes under various names such as postmodernism, deconstructionism and structuralism, depending on historical details that are unimportant here. It claims that because all ideas, including scientific theories, are conjectural and impossible to justify, they are essentially arbitrary: they are no more than stories, known in this context as ‘narratives’.
Mixing extreme cultural relativism with other forms of anti-realism, it regards objective truth and falsity, as well as reality and knowledge of reality, as mere conventional forms of words that stand for an idea’s being endorsed by a designated group of people such as an elite or consensus, or by a fashion or other arbitrary authority. And it regards science and the Enlightenment as no more than one such fashion, and the objective knowledge claimed by science as an arrogant cultural conceit.
Perhaps inevitably, these charges are true of postmodernism itself: it is a narrative that resists rational criticism or improvement, precisely because it rejects all criticism as mere narrative. Creating a successful postmodernist theory is indeed purely a matter of meeting the criteria of the postmodernist community – which have evolved to be complex, exclusive and authority-based.
In explanationless science, one may acknowledge that actual happiness and the proxy one is measuring are not necessarily equal. But one nevertheless calls the proxy ‘happiness’ and moves on.
Next, one defines the ‘heritability’ of a trait as its degree of statistical correlation with how genetically related the people are. Again, that is a non-explanatory definition: according to it, whether one was a slave or not was once a highly ‘heritable’ trait in America: it ran in families.
one does the study and finds that ‘happiness’ is, say, 50 per cent ‘heritable’. This asserts nothing about happiness itself, until the relevant explanatory theories are discovered (at some time in the future – perhaps after consciousness is understood and AIs are commonplace technology). Yet people find the result interesting, because they interpret it via everyday meanings of the words ‘happiness’ and ‘heritable’.
The headline will say, ‘New Study Shows Happiness 50% Genetically Determined’ – without quotation marks around the technical terms.
All the formal rules of ‘how to keep from fooling ourselves’ may have been followed. And yet no progress could possibly be made, because it was not being sought: explanationless theories can do no more than entrench existing, bad explanations.
As the above example illustrates, a generic feature of experimentation is that the bigger the errors you make, either in the numbers or in your naming and interpretation of the measured quantities, the more exciting the results are, if true. So, without powerful techniques of error-detection and -correction – which depend on explanatory theories – this gives rise to an instability where false results drown out the true. In the ‘hard sciences’ – which usually do good science – false results due to all sorts of errors are nevertheless common. But they are corrected when their explanations are criticized and tested. That cannot happen in explanationless science.
Positivism: The bad philosophy that everything not ‘derived from observation’ should be eliminated from science.
Logical positivism: The bad philosophy that statements not verifiable by observation are meaningless.
Balinski and Young’s Theorem: Every apportionment rule that stays within the quota suffers from the population paradox.
This powerful ‘no-go’ theorem explains the long string of historical failures to solve the apportionment problem. Never mind the various other conditions that may seem essential for an apportionment to be fair: no apportionment rule can meet even the bare-bones requirements of proportionality and the avoidance of the population paradox. Balinski and Young also proved no-go theorems involving other classic paradoxes.
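One of those classic paradoxes is easy to reproduce. The sketch below (the populations are invented for illustration) implements Hamilton's largest-remainder rule, which does stay within the quota, and exhibits the 'Alabama paradox': enlarging the house costs one state a seat.

```python
from math import floor

def hamilton(populations, seats):
    """Largest-remainder (Hamilton) apportionment: give each state the
    floor of its exact quota, then award the leftover seats to the
    largest fractional remainders. Always stays within the quota."""
    total = sum(populations.values())
    quotas = {s: seats * p / total for s, p in populations.items()}
    alloc = {s: floor(q) for s, q in quotas.items()}
    leftovers = seats - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)
    for s in by_remainder[:leftovers]:
        alloc[s] += 1
    return alloc

# Invented populations chosen to trigger the paradox:
pops = {"A": 6000, "B": 6000, "C": 2000}
print(hamilton(pops, 10))  # {'A': 4, 'B': 4, 'C': 2}
print(hamilton(pops, 11))  # {'A': 5, 'B': 5, 'C': 1}  C loses a seat as the house grows
```

No tinkering with the rule escapes this: Balinski and Young's theorem says any rule that cures such paradoxes must sometimes violate the quota instead.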
Popper’s criterion Good political institutions are those that make it as easy as possible to detect whether a ruler or policy is a mistake, and to remove rulers or policies without violence when they are.
It is a mistake to conceive of choice and decision-making as a process of selecting from existing options according to a fixed formula. That omits the most important element of decision-making, namely the creation of new options.
rational thinking does not consist of weighing the justifications of rival theories, but of using conjecture and criticism to seek the best explanation, so coalition governments are not a desirable objective of electoral systems.
They should be judged by Popper’s criterion of how easy they make it to remove bad rulers and bad policies. That designates the plurality voting system as best in the case of advanced political cultures.
Following a plurality-voting election, the usual outcome is that the party with the largest total number of votes has an overall majority in the legislature, and therefore takes sole charge. All the losing parties are removed entirely from power. This is rare under proportional representation, because some of the parties in the old coalition are usually needed in the new one. Consequently, the logic of plurality is that politicians and political parties have little chance of gaining any share in power unless they can persuade a substantial proportion of the population to vote for them. That gives all parties the incentive to find better explanations, or at least to convince more people of their existing ones, for if they fail they will be relegated to powerlessness at the next election.
Plurality voting typically ‘over-represents’ the two largest parties, compared with the proportion of votes they receive. Moreover, it is not guaranteed to avoid the population paradox, and is even capable of bringing one party to power when another has received far more votes in total. These features are often cited as arguments against plurality voting and in favour of a more proportional system – either literal proportional representation or other schemes such as transferable-vote systems and run-off systems which have the effect of making the representation of voters in the legislature more proportional.
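That last feature – one party taking power while another receives far more votes in total – is easy to see in a toy single-seat-district model. The vote counts below are invented for illustration: one party wins two districts narrowly while the other piles up an enormous majority in the district it wins.

```python
def seats_won(districts):
    """Plurality rule: each district's single seat goes to whichever
    party polled the most votes in that district."""
    n_parties = len(districts[0])
    seats = [0] * n_parties
    for votes in districts:
        seats[votes.index(max(votes))] += 1
    return seats

# Party A wins two districts by two votes; party B wins one by 98.
districts = [(51, 49), (51, 49), (1, 99)]
print(seats_won(districts))                   # [2, 1] - A takes sole charge
print([sum(col) for col in zip(*districts)])  # [103, 197] - B got far more votes
```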
Empiricism miscasts science as an automatic, non-creative process. And art, though acknowledged as ‘creative’, has often been seen as the antithesis of science, and hence irrational, random, inexplicable – and hence unjudgeable, and non-objective. But if beauty is objective, then a new work of art, like a newly discovered law of nature or mathematical theorem, adds something irreducibly new to the world.
We have an inborn aversion to heights and to falling, yet people go skydiving – not in spite of this feeling, but because of it. It is that very feeling of inborn aversion that humans can reinterpret into a larger picture which to them is attractive – they want more of it; they want to appreciate it more deeply. To a skydiver, the vista from which we were born to recoil is beautiful. The whole activity of skydiving is beautiful, and part of that beauty is in the very sensations that evolved to deter us from trying it. The conclusion is inescapable: that attraction is not inborn, just as the contents of a newly discovered law of nature are not inborn.
what is surprising is that these same flowers also attract humans. This is so familiar a fact that it is hard to see how amazing it is. But think of all the countless hideous animals in nature, and think also that all of them who find their mates by sight have evolved to find that appearance attractive – so it is not surprising that we do not.
Given the prevailing assumptions in the scientific community – which are still rather empiricist and reductionist – it may seem plausible that flowers are not objectively beautiful, and that their attractiveness is merely a cultural phenomenon. But I think that explanation fails under closer inspection. We find flowers beautiful that we have never seen before, and which have not been known to our culture before – and quite reliably, for most humans in most cultures. The same is not true of the roots of plants, or the leaves. Why only the flowers?
I can see only one explanation for the phenomenon of flowers being attractive to humans, and for the various other fragments of evidence I have mentioned. It is that the attribute we call beauty is of two kinds. One is a parochial kind of attractiveness, local to a species, to a culture or to an individual. The other is unrelated to any of those: it is universal, and as objective as the laws of physics. Creating either kind of beauty requires knowledge; but the second kind requires knowledge with universal reach. It reaches all the way from the flower genome, with its problem of competitive pollination, to human minds which appreciate the resulting flowers as art. Not great art – human artists are far better, as is to be expected. But with the hard-to-fake appearance of design for beauty.
Humans are quite unlike that: the amount of information in a human mind is more than that in the genome of any species, and overwhelmingly more than the genetic information unique to one person. So human artists are trying to signal across the same scale of gap between humans as the flowers and insects are between species. They can use some species-specific criteria; but they can also reach towards objective beauty.
Aesthetic truths are linked to factual ones by explanations, and also because artistic problems can emerge from physical facts and situations. The fact that flowers reliably seem beautiful to humans when their designs evolved for an apparently unrelated purpose is evidence that beauty is objective. Those convergent criteria of beauty solve the problem of creating hard-to-forge signals where prior shared knowledge is insufficient to provide them.
A fundamental question in the study of cultures is: what is it about a long-lived meme that gives it this exceptional ability to resist change throughout many replications? Another – central to the theme of this book – is: when such memes do change, what are the conditions under which they can change for the better?
just as native English speakers may be mistaken about why they have said ‘the’ in a given sentence, people enacting all sorts of other memes often give false explanations, even to themselves, of why they are behaving in that way.
A meme exists in a brain form and a behaviour form, and each is copied to the other.
A gene exists in only one physical form, which is copied.
So, for example, although religions prescribe behaviours such as educating one’s children to adopt the religion, the mere intention to transmit a meme to one’s children or anyone else is quite insufficient to make that happen.
The overwhelming majority of ideas simply do not have what it takes to persuade (or frighten or cajole or otherwise cause) children or anyone else into doing the same to other people. If establishing a faithfully replicating meme were that easy, the whole adult population in our society would be proficient at algebra, thanks to the efforts made to teach it to them when they were children. To be exact, they would all be proficient algebra teachers.
Hence the frequently cited metaphor of the history of life on Earth, in which human civilization occupies only the final ‘second’ of the ‘day’ during which life has so far existed, is misleading. In reality, a substantial proportion of all evolution on our planet to date has occurred in human brains. And it has barely begun. The whole of biological evolution was but a preface to the main story of evolution, the evolution of memes.
For a society to be static, something else must be happening as well. One thing my story did not take into account is that static societies have customs and laws – taboos – that prevent their memes from changing. They enforce the enactment of the existing memes, forbid the enactment of variants, and suppress criticism of the status quo.
But what sort of idea is best suited to getting itself adopted many times in succession by many people who have diverse, unpredictable objectives? A true idea is a good candidate. But not just any truth will do. It must seem useful to all those people, for it is they who will be choosing whether to enact it or not. ‘Useful’ in this context does not necessarily mean functionally useful: it refers to any property that can make people want to adopt an idea and enact it, such as being interesting, funny, elegant, easily remembered, morally right and so on. And the best way to seem useful to diverse people under diverse, unpredictable circumstances is to be useful. Such an idea is, or embodies, a truth in the broadest sense: factually true if it is an assertion of fact, beautiful if it is an artistic value or behaviour, objectively right if it is a moral value, funny if it is a joke, and so on.
When girls strive to be ladylike and to meet culturally defined standards of shape and appearance, and when boys do their utmost to look strong and not to cry when distressed, they are struggling to replicate ancient ‘gender-stereotyping’ memes that are still part of our culture – despite the fact that explicitly endorsing them has become a stigmatized behaviour.
Thus, memes of this new kind, which are created by rational and critical thought, subsequently also depend on such thought to get themselves replicated faithfully. So I shall call them rational memes. Memes of the older, static-society kind, which survive by disabling their holders’ critical faculties, I shall call anti-rational memes.
That anti-rational memes are still, today, a substantial part of our culture, and of the mind of every individual, is a difficult fact for us to accept.
Children who asked why they were required to enact onerous behaviours that did not seem functional would be told ‘Because I say so’, and in due course they would give their children the same reply to the same question, never realizing that they were giving the full explanation. (This is a curious type of meme whose explicit content is true though its holders do not believe it.)
One need look no further than our clothing styles, and the way we decorate our homes, to find evidence. Consider how you would be judged by other people if you went shopping in pyjamas, or painted your home with blue and brown stripes. That gives a hint of the narrowness of the conventions that govern even these objectively trivial and inconsequential choices about style, and the steepness of the social costs of violating them. Is the same thing true of the more momentous patterns in our lives, such as careers, relationships, education, morality, political outlook and national identity? Consider what we should expect to happen when a static society is gradually switching from anti-rational to rational memes.
Another is the formation within the dynamic society of anti-rational subcultures. Recall that anti-rational memes suppress criticism selectively and cause only finely tuned damage. This makes it possible for the members of an anti-rational subculture to function normally in other respects. So such subcultures can survive for a long time, until they are destabilized by the haphazard effects of reach from other fields. For example, racism and other forms of bigotry exist nowadays almost entirely in subcultures that suppress criticism. Bigotry exists not because it benefits the bigots, but despite the harm they do to themselves by using fixed, non-functional criteria to determine their choices in life.
Existing accounts of memes have neglected the all-important distinction between the rational and anti-rational modes of replication.
Rational meme: An idea that relies on the recipients’ critical faculties to cause itself to be replicated.
Anti-rational meme: An idea that relies on disabling the recipients’ critical faculties to cause itself to be replicated.
Western civilization is in an unstable transitional period between stable, static societies consisting of anti-rational memes and a stable dynamic society consisting of rational memes.
several other species have memes. But what they do not have is the means of improving them other than through random trial and error.
So there is no such thing as ‘just imitating the behaviour’ – still less, therefore, can one discover those ideas by imitating it. One needs to know the ideas before one can imitate the behaviour. So imitating behaviour cannot be how we acquire memes.
The same holds if the behaviour consists of stating the memes. As Popper remarked, ‘It is impossible to speak in such a way that you cannot be misunderstood.’ One can only state the explicit content, which is insufficient to define the meaning of a meme or anything else. Even the most explicit of memes – such as laws – have inexplicit content without which they cannot be enacted. For example, many laws refer to what is ‘reasonable’. But no one can define that attribute accurately enough for, say, a person from a different culture to be able to apply the definition in judging a criminal case.
But humans do not especially copy any behaviour. They use conjecture, criticism and experiment to create good explanations of the meaning of things – other people’s behaviour, their own, and that of the world in general. That is what creativity does. And if we end up behaving like other people, it is because we have rediscovered the same idea.
I think that both those puzzles have the same solution: what replicates human memes is creativity; and creativity was used, while it was evolving, to replicate memes. In other words, it was used to acquire existing knowledge, not to create new knowledge. But the mechanism to do both things is identical, and so in acquiring the ability to do the former, we automatically became able to do the latter. It was a momentous example of reach, which made possible everything that is uniquely human.
Not only is creativity necessary for human meme replication, it is also sufficient. Deaf people and blind people and paralysed people are still able to acquire and create human ideas to a more or less full extent. Hence, neither upright walking nor fine motor control nor the ability to parse sounds into words nor any of those other adaptations, though they might have played a role historically in creating the conditions for human evolution, were functionally necessary to allow humans to become creative. Nor, therefore, are they philosophically significant.
The Easter Islanders may or may not have suffered a forest-management fiasco. But, if they did, the explanation would not be about why they made mistakes – problems are inevitable – but why they failed to correct them.
In other words, progress is sustainable, indefinitely. But only by people who engage in a particular kind of thinking and behaviour – the problem-solving and problem-creating kind characteristic of the Enlightenment. And that requires the optimism of a dynamic society.
Diamond says that his main reason for writing Guns, Germs and Steel was that, unless people are convinced that the relative success of Europeans was caused by biogeography, they will for ever be tempted by racist explanations. Well, not readers of this book, I trust! Presumably Diamond can look at ancient Athens, the Renaissance, the Enlightenment – all of them the quintessence of causation through the power of abstract ideas – and see no way of attributing those events to ideas and to people; he just takes it for granted that the only alternative to one reductionist, dehumanizing reinterpretation of events is another.
The Easter Island civilization collapsed because no human situation is free of new problems, and static societies are inherently unstable in the face of new problems.
Once I realized that Ehrlich’s prophecies amounted to saying, ‘If we stop solving problems, we are doomed,’ I no longer found them shocking, for how could it be otherwise?
Even while my pessimistic colleague was dismissing colour television technology as useless and doomed, optimistic people were discovering new ways of achieving it, and new uses for it – uses that he thought he had ruled out by considering for five minutes how well colour televisions could do the existing job of monochrome ones.
In the pessimistic conception, they are wasters: they take precious resources and madly convert them into useless coloured pictures. This is true of static societies: those statues really were what my colleague thought colour televisions are – which is why comparing our society with the ‘old culture’ of Easter Island is exactly wrong.
In the optimistic conception – the one that was unforeseeably vindicated by events – people are problem-solvers: creators of the unsustainable solution and hence also of the next problem. In the pessimistic conception, that distinctive ability of people is a disease for which sustainability is the cure. In the optimistic one, sustainability is the disease and people are the cure.
to expect that problems will always be solved in time to avert disasters would be the same fallacy.
Strategies to prevent foreseeable disasters are bound to fail eventually, and cannot even address the unforeseeable. To prepare for those, we need rapid progress in science and technology and as much wealth as possible.
As the economist David Friedman has remarked, most people believe that an income of about twice their own should be sufficient to satisfy any reasonable person.
Perhaps a more practical way of stressing the same truth would be to frame the growth of knowledge (all knowledge, not only scientific) as a continual transition from problems to better problems, rather than from problems to solutions or from theories to better theories.
To the inhabitants – who would eventually have to upload their personalities into computers made of something like pure tides – the universe would last for ever because they would be thinking faster and faster, without limit, as it collapsed, and storing their memories in ever smaller volumes so that access times could also be reduced without limit. Tipler called such universes ‘omega-point universes’. At the time, the observational evidence was consistent with the real universe being of that type.
Evidence – including a remarkable series of studies of supernovae in distant galaxies – has forced cosmologists to the unexpected conclusion that the universe not only will expand for ever but has been expanding at an accelerating rate. Something has been counteracting its gravity.
Most advocates of the Singularity believe that, soon after the AI breakthrough, superhuman minds will be constructed and that then, as Vinge put it, ‘the human era will be over.’ But my discussion of the universality of human minds rules out that possibility. Since humans are already universal explainers and constructors, they can already transcend their parochial origins, so there can be no such thing as a superhuman mind as such.
Many people have an aversion to infinity of various kinds. But there are some things that we do not have a choice about. There is only one way of thinking that is capable of making progress, or of surviving in the long run, and that is the way of seeking good explanations through creativity and criticism. What lies ahead of us is in any case infinity. All we can choose is whether it is an infinity of ignorance or of knowledge, wrong or right, death or life.