Sunday, November 11, 2007

Mismatch: Why Our World No Longer Fits Our Bodies

As you may have guessed, it was this book's subtitle that caught my eye. Why Our World No Longer Fits Our Bodies. You can't get more blatant than that. Does the book live up to this audacious title? Are today's lifestyles more than our bodies can handle? I'll leave it up to the reader to draw their own conclusions, as this question is difficult to answer, but the evidence is compelling enough to suggest that changes need to be made.

The authors, Peter Gluckman and Mark Hanson, do a very thorough job of exploring the premise that the human body is mismatched to the modern urban lifestyle. The evidence is clear, argue the authors, and can be found in the increasing cases of diabetes, heart disease and obesity in the developed world. There are also more subtle consequences of our mismatch, such as those brought about by the falling age of puberty. Their solutions come right at the end of the book, and are a logical follow-on from their supporting evidence, but are perhaps a bit too brief. The aim of these solutions seems to be to raise people's awareness of the issues rather than to present final answers, which is reasonable, given that the authors note that the application of evolutionary developmental biology to human medicine is a new field of study.

The relationship between our evolved biology and the nature of the environments in which we live is the real focus of this book. Generalist species such as humans have a broad capacity to adapt or cope over a range of environments but may not be so well-equipped to live in a particular environment as a specialist species. But it is important, the authors note, to distinguish between thriving in an environment and surviving or coping in that environment. Trade-offs which can affect our health and reproduction may have to be made once we move away from the centre of our comfort zone.

Living successfully for humans means being well-matched to the environment. The greater the shift from the environment at the centre of the comfort zone, the greater are the changes in physiology and behaviour needed to cope, until at some point significant costs appear.

When humans migrated progressively northwards from their ancestral home in the African savannah, they moved into colder climates where the average daily amount of sunlight fell. Occupying these new habitats in Europe and even further north gave advantages in terms of opportunities to hunt, and later to cultivate some simple crops and domesticate animals. But it also brought new threats. For example, the low exposure to sunlight, especially in the winter months, reduced the production of vitamin D.

Vitamin D is made in the skin by the action of sunlight which converts a precursor molecule found in our diet into the active form of vitamin D. It is vital for many body processes, notably the deposition of bone during development. People who have chronically low levels of vitamin D during development suffer from rickets, and this is associated with skeletal deformity. Older adults who are vitamin D deficient are more likely to suffer brittle bone disease (osteoporosis) and even minor accidents produce fractures. This had not been a problem in Africa as sunlight levels were high throughout the year, and our ancestors had evolved to have dark skins as the melanin protected against the other harmful effects of sunlight.

In moving north we needed to have paler skins, filtering out less of the sun's rays and optimising our production of vitamin D. The cost of this strategy is that there is a higher risk of skin cancer in people with paler skins, triggered even in Europe during the summer when the sunlight exposure can still be high.

Before reading Mismatch, I had a limited understanding of evolutionary biology. So I found the authors' clear and concise introduction to its principles and processes quite useful. All the basics are covered early on, such as genes, variation, selection, adaptation, and inheritance. At the risk of veering off-track, I'll now provide a short overview of evolution for anyone who's interested.

The creation of a new species is the result of an accumulation of changes in the gene pool of an ancestral species. This usually occurs when some members of a given species become separated from the whole. This reduction of the gene pool means that over time new traits gradually become dominant, until the new group no longer resembles the old, and interbreeding between the two is not possible. This is known as macroevolution, or speciation, and takes thousands of generations. The genetic changes that occur in a population with the passage of each generation are known as microevolution. The accumulation of many microevolutionary changes leads to macroevolution.

Microevolution goes something like this: (1) Two individuals of the same species get together and mate. (2) Two sex cells or gametes (one from each parent) unite and form the first cell of a new individual, known as the zygote. (3) The zygote divides multiple times, producing identical copies of itself. Errors in this copying process result in mutation. (4) Mutation alters the genetic code of the offspring, resulting in the introduction of new traits not found in the parental generation. These traits are termed variations. (5) If a certain variation helps this new individual to better adapt to its environment, it will be more likely to survive and reproduce, thus passing on the variation to its progeny. This is known as the theory of natural selection.

In short, natural selection acts to select characteristics or traits that confer greater fitness within a given environment. It involves four straightforward principles: (1) there are more members of a species born in each generation than will survive; (2) there is variation in physical and behavioural characteristics among individuals within species; (3) this variation is heritable, and (4) characteristics that result in an individual surviving and reproducing tend to increase in frequency in the population, whereas characteristics of non-survivors decrease.

So natural selection acts on the most advantageous heritable variations. Those individuals whose characteristics best match them to their environment are said to have the most advantageous phenotype, which is defined as the observable characteristics of an organism. The phenotype is determined by both genetic and environmental influences. The genetic basis of phenotypic traits is known as the genotype of an organism. The genotype is the range of specific genes an organism possesses - in other words, the genetic constitution of an organism.

For example, my genotype for earlobes describes which version, or allele, of the earlobe gene I have been gifted with. My father has free earlobes (dominant allele, which we'll call E), while my mother has attached earlobes (recessive allele, which we'll call e). Free earlobes hang below the point where they attach to the head, while attached ones do not. My father's genotype for earlobes is Ee, and my mother's is ee. Since I have attached earlobes, my genotype for earlobes is also ee. If I possessed the dominant E allele, the instructions from it would have overpowered those from the recessive e allele and I would have free earlobes.
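
To see why an Ee father and an ee mother can have a child with either earlobe type, here is a minimal sketch in Python that simply enumerates the four equally likely allele combinations - a Punnett square in code form. The E/e labels are just the shorthand used above, not real gene names, and the single-gene model of earlobes is the textbook simplification this example leans on.

    from itertools import product

    # Alleles each parent can pass on: father is Ee, mother is ee.
    father_alleles = ["E", "e"]
    mother_alleles = ["e", "e"]

    def phenotype(genotype):
        # E (free earlobes) is dominant, so one copy is enough to show the trait.
        return "free" if "E" in genotype else "attached"

    # Enumerate the four equally likely allele pairings (the Punnett square).
    counts = {}
    for a, b in product(father_alleles, mother_alleles):
        geno = "".join(sorted(a + b))  # "Ee" or "ee"
        counts[geno] = counts.get(geno, 0) + 1

    total = sum(counts.values())
    for geno, n in counts.items():
        print(f"{geno}: {n/total:.0%} of offspring, {phenotype(geno)} earlobes")

Run as-is, it reports 50% Ee offspring (free earlobes) and 50% ee offspring (attached earlobes), which is how a free-lobed father and an attached-lobed mother end up with an attached-lobed child like me.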

As a rule, any trait that reduces an individual's struggles in life could be considered beneficial. And anything that brings struggle into an individual's life could be considered a predator, in the broadest sense. Those individuals that have the necessary traits to overcome predators will survive long enough to reproduce. As an example, consider two different varieties of ladybird living in a lush green forest. One is coloured green, the other orange. Green ladybirds blend into their environment, so are much safer from hungry birds than the orange ones, which stand out like a sore thumb. Thus the green ladybirds will survive and increase their numbers, while the orange ones will slowly disappear from the species.

Without these two variations in ladybird colour, there would be no selection. The green colouring trait happens to be beneficial and over time becomes more common in successive generations of the population. The proliferation of green ladybirds means that as a whole, the species has become better adapted to its specific ecological niche, and has thus evolved.
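
The ladybird story, together with the four principles listed earlier, amounts to a small algorithm: overproduce, vary, inherit, and survive differentially. The toy simulation below plays that algorithm out in Python; the survival probabilities, brood size, and carrying capacity are invented purely for illustration, not taken from any real study.

    import random

    # Illustrative survival probabilities: green ladybirds are better camouflaged.
    SURVIVAL = {"green": 0.9, "orange": 0.5}
    OFFSPRING_PER_SURVIVOR = 3   # more are born each generation than can survive
    CARRYING_CAPACITY = 1000     # the forest only supports so many ladybirds

    def next_generation(population):
        # Differential survival: predation removes more orange than green.
        survivors = [colour for colour in population if random.random() < SURVIVAL[colour]]
        # Heritable variation: offspring inherit their parent's colour.
        offspring = [colour for colour in survivors for _ in range(OFFSPRING_PER_SURVIVOR)]
        random.shuffle(offspring)
        return offspring[:CARRYING_CAPACITY]

    population = ["green"] * 500 + ["orange"] * 500
    for gen in range(10):
        population = next_generation(population)
        green_share = population.count("green") / len(population)
        print(f"generation {gen + 1}: {green_share:.1%} green")

With these made-up numbers the green share climbs towards 100% within a few generations; swap the survival probabilities and the orange form wins instead, which is the sense in which "beneficial" is defined by the environment rather than by the trait itself.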

The genotype provides the crude settings for the organism's development of its mature phenotype. But the environment also plays a large part in an organism's development. Epigenetic changes (changes in how genes are expressed that occur without altering the underlying DNA sequence) during development drive the phenotype towards a better match with its environment. The authors show that there are circumstances in which the resulting phenotype can turn out to be inappropriate, and these are mainly a consequence of modern environmental influences.

After all, humans evolved to inhabit a world very different from that which we have constructed for ourselves. It is well-known that Homo sapiens originated in the grasslands of Africa some 160,000 years ago. We lived as hunter-gatherers, relying on food from two major sources - (1) the collection of seeds, tubers, nuts, and fruits, and (2) hunting. There have been various estimates of the content of this Palaeolithic diet and it is clear that it differed significantly in a number of respects from the modern diet.

The authors explain: "The Palaeolithic diet was higher in fibre content and had a much lower glycaemic index (ability to raise blood sugar rapidly) because the foods were less refined. Wild honey would be the only source of concentrated sugars and it would have been a minor component of the diet. The diet had a very different mix of fatty acids, a higher protein content, and a much lower salt but higher potassium content. There was no milk, butter or cheese, and the meat was generally much leaner than today. Hunter-gatherers dispersed in accordance with available food supplies. They could choose their environments, within limits. There is no fossil evidence to suggest that they suffered chronic malnutrition - quite the reverse, because the data available from skeletons suggest that they achieved modern, or close to modern, heights."

Our Palaeolithic ancestors may have had a good diet, but nothing lasts forever, and after the end of the last Ice Age the changes in climate and vegetation made hunting and foraging more difficult. A significant turning point in the history of our species was about to take place - the development of agriculture from about 11,000 years ago. Agriculture first appeared in the Fertile Crescent, the arc extending from the Levant through modern-day Syria and Iraq to the Tigris and Euphrates, and with it came a progressive change in diet. Herding allowed the collection of milk as a food source and there was access to fatter meat on a more consistent basis.

This transition to agriculturalist and settlement-dweller eventually led to the development of urbanisation, complex power hierarchies, and the growth of cities, states, and empires. Some people now lived in contact with much wider networks of others than they would have had they maintained the hunter-gatherer or pastoralist way of life. Prior to the development of settlement, humans lived in social groups of fewer than 150, and perhaps as few as 20 to 50, people. These would have been extended family groups, and this has been an important component of our species' success.

With settlement, much larger numbers of people came into direct contact with each other. Those who lived in cities came into contact with many hundreds of people. From living in a small clan where individual roles and relationships were clearly evident and the power hierarchy simple, humans came to occupy complex networked social structures where roles were subdivided and separated and intricate power and control hierarchies emerged.

Clearly our social structures are far more complex today than they were when we evolved on the African savannah. This evolutionary discrepancy has been further magnified by a phenomenon the authors term "maturational mismatch". They observe that throughout most of our history, there was synchronous maturation of our bodies and our brains at puberty. But over the last hundred years they have diverged; while psychosocial requirements have become more demanding and full maturation appears to have been delayed, physical maturation is getting earlier.

Two centuries or more ago many teenagers could, and did, take on roles as mature adults. Junior officers in the Royal Navy during the Napoleonic Wars were in their early teens. Has today's society become so much more complex that adolescents simply have more to learn before they can function as adults?

Gluckman and Hanson surmise that external influences like the media and the loss of tight societal pressures may have reinforced exploratory behaviour in adolescents, thereby delaying the development of attributes such as responsibility and self-control. The way adolescents are treated by society may also explain their delayed psychosocial maturity - as a society we confuse physical maturation with psychosocial maturation so we have a tendency to assume a person who is biologically mature is a full adult. These assumptions lead to rebellious adolescent behaviour.

This goes some way towards explaining maturational mismatch in relation to brain development. But it doesn't explain why the age of puberty is falling. The answer to this puzzle lies with diet and nutrition. The development of agriculture and settlement led to population growth, which brought humans into greater proximity with each other - static settlements dependent on agriculture allowed more people to live in one place. This led to fundamental changes in our nutrition. While hunter-gatherers had multiple ways to obtain food, populations dependent on a fixed location for their herds and crops were inevitably more at the mercy of climate and war. Malnutrition affected children first, reducing their growth. When childhood nutrition is poor, puberty is delayed, and so with changing patterns of settlement came a delay in puberty.

This allowed physical maturation to remain synchronised with the concurrent delay in psychosocial maturation, as by this time social structures had become more complex and the skills needed to thrive in society took longer to learn. In today's society, complexity has increased still further, but this time puberty has not kept up. In fact, it has moved in the opposite direction. Gluckman and Hanson suggest that the reason for this is the health and wealth of modern life. The age of puberty is returning to where it was during our nutritionally balanced Stone Age days, when life expectancy was short (around 35 years) and puberty in the female needed to begin between the ages of 7 and 8 to allow her a reproductive span of 16-18 years, enough time to support the youngest of her progeny to a fully independent existence.
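
The Stone Age arithmetic quoted above is easy to check. Taking the figures at face value (puberty at about 8, a reproductive span of about 17 years, life expectancy of about 35 - all rough midpoints of the ranges given, not precise data), a quick back-of-envelope calculation looks like this:

    # Back-of-envelope check of the Stone Age figures quoted above (all rough estimates).
    life_expectancy = 35       # years
    puberty_age = 8            # roughly the 7-8 range quoted
    reproductive_span = 17     # midpoint of the 16-18 years quoted

    age_at_last_birth = puberty_age + reproductive_span
    years_left_for_youngest = life_expectancy - age_at_last_birth

    print(f"Last child born around age {age_at_last_birth}")
    print(f"About {years_left_for_youngest} years left to raise that child to independence")

On those numbers the last child is born at around 25, leaving roughly a decade for the mother to raise it to independence before she reaches the quoted life expectancy.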

The effect of nutrition on puberty differs before and after birth. While poor childhood nutrition delays puberty, poor fetal growth may lead to earlier puberty. Poor fetal growth due to deficient nutrient supplies is actually an adaptive response by the fetus. When the supply of nutrients from mother to fetus is poor, the fetus must reduce its growth rate just to survive. If nutrient levels are too low, babies can be born prematurely. Their biological processes are able to sense that the environment within the womb is so threatening that it is wise to get out early in order to maximise the chances of survival. Accelerated maturation and premature birth mean that the individual is less likely to live a long life, so sexual maturation is also accelerated to ensure gene transmission to the next generation.

When a fetus develops in a constrained environment and then lives in a richer one after birth, there are several outcomes. Advanced sexual maturation is one - this is most dramatically observed in children who were born in very poor societies but then adopted and brought up in the richest countries. Their rapid switch from poor early nutrition to good childhood nutrition is associated with much earlier puberty - with some girls having their first period as early as 6 to 8 years of age.

A further outcome is a greater likelihood of developing obesity. Maternal constraint is a natural process in the mother that places an upper limit on the nutrient supply the fetus receives, and therefore on how rich an environment the fetus senses. It may have given our species an adaptive edge during our evolution, because the predictive responses always made us expect to live in a slightly harsher environment than existed during our gestation. Thus as a species we are pre-adapted to expect worse than we may experience, and this would have given us an inbuilt safety margin. In fact, this is why the human neonate is born with a layer of high-energy fat reserves. If nutrition is compromised, these fat reserves provide an energy buffer so that brain growth can continue.

But as our nutritional environments have got richer, the discordance between predictions created by these constraint mechanisms and reality has become greater. Rather than being evolutionarily advantageous, these prenatal forecasts have now become disadvantageous, and are the major cause of what the authors call a "metabolic mismatch". The consequences of metabolic mismatch become manifest as heart disease and diabetes. The preference for high-calorie and high-fat foods that developed during gestation (in order to build the layer of fat) will lead to weight gain and eventually obesity in a rich environment.

This problem is particularly important among sections of the population who have inadequate education or are in lower socio-economic groups, in both the developed and developing world. These people sometimes worsen their situation because they lack the knowledge, means, or opportunity to act. The range of foods they can afford often has a higher fat and carbohydrate content, which compounds their degree of mismatch. The poor are more at risk, and poverty is a major contributor to chronic disease.

Further research into maternal constraint is very important, the authors argue, as it is clear that the increasing incidence of heart disease and diabetes has its origin in the mismatch that arises from the interplay between developmental plasticity and the postnatal environment. One might think that evolutionary processes would have worked to exclude such nasty fates for our species. They have not, because for the most part these issues do not interfere with our reproductive fitness. Until recently these diseases only appeared in middle age, well after reproduction had been completed. Evolution cannot select against traits that appear after reproduction has ceased.

Advances in medicine have allowed humans to live longer than ever before. Right up until the last century, longevity to middle age and beyond was rare. We have always known the certainty of death, but it was not within our power to delay it. We instead used religion to deny it through concepts of an afterlife. These days, we no longer tolerate the evolutionary imperative of decline and demise after reproduction. But there is a cost to living longer.

One has been the rapid rise in the occurrence of diseases of degeneration. There is a theory (the disposable soma model developed by Tom Kirkwood) stating that there is a trade-off between lifetime investment in growth, reproduction, and repair. According to this theory, those individuals that anticipate a short life invest less in repair and more in early reproduction.

It then follows that women who are able to reproduce later in life are also likely to live longer. A study in Boston found that women who were able to conceive children naturally after the age of 40 had a four times greater chance of living to 100. There are several other studies showing the relationship between a late menopause and longevity and others showing the reverse, namely that early menopause is a marker for a shorter lifespan.

Experiments on mice have shown that longevity has genetic determinants: the genes involved are those associated with growth and metabolism. If a mouse predicts a threatening environment during its prenatal development, it will invest less in growth, metabolism, repair, and longevity and try to hasten its reproduction. Conversely if it predicts a benign environment, it will invest in greater longevity. Thus in mice, prenatal undernutrition leads to reduced longevity, whereas postnatal undernutrition leads to a marked prolongation of the lifespan.

There is evidence to show that these developmental trade-offs between components of the life-course strategy also exist in humans. So what can be done? The authors suggest that the route to reducing the impact of our many mismatches lies in technological, environmental, and cultural development.

One example of technological development involves modifying the epigenetic component of our evolutionary inheritance. An experiment is described in which undernourished newborn rats can be tricked into thinking that they are fatter than they really are by giving them injections of a hormone that is normally made by fat. These injections stopped the development of obesity even when the rats were fed a high-fat diet.

Other useful recommendations are given, such as social intervention programmes and further research into the causes and consequences of mismatch. The most wide-ranging and perhaps radical recommendation involves optimising the diet and body composition of all women of reproductive age.

Any reader with an interest in biologically-based approaches to human health would find this book appealing, as would science buffs who enjoy interesting facts and trivia. This is a very informative and well-written book, and health policy-makers should be made aware of the recommendations it proposes.

Saturday, October 6, 2007

The Nuclear Comeback

I went to the Academy Cinema for the second time in a week on Thursday, to see the Documentary Film Festival screening of The Nuclear Comeback. The director is a New Zealander, Justin Pemberton, and at the end of the film he stood up in front of the audience for a question and answer session. Documentaries are the kind of film in which hearing from the director can add so much to the viewing experience. While the director obviously intended to focus more on the problems associated with nuclear power rather than the benefits, I found him to be pretty level-headed in the Q&A session and not excessively biased towards his cause. During the session there was a lively debate between the director and a pro-nuclear supporter. I'll mention more about it after the following brief overview of the film.

The title relates to the recent worldwide resurgence in nuclear power generation, which has happened as a direct result of climate change and global warming fears. Global warming is driven by carbon emissions, and the major benefit of nuclear power is that because no fossil fuels are burned, no carbon emissions are produced at the point of generation. Nuclear power also produces far more energy per tonne of fuel than any other source, which is extremely important given that the world's electricity consumption is expected to double in the next 25 years.
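
To give that per-tonne claim a sense of scale, here is a rough comparison using standard textbook energy densities. The values are approximate, they assume complete fission of uranium-235 (which real reactors come nowhere near), and they are not figures from the film - treat the result as an order-of-magnitude illustration only.

    # Approximate thermal energy densities, in joules per kilogram (rough textbook values).
    ENERGY_DENSITY = {
        "coal": 2.4e7,                             # ~24 MJ/kg
        "crude oil": 4.2e7,                        # ~42 MJ/kg
        "uranium-235 (complete fission)": 8.0e13,  # ~80 TJ/kg
    }

    coal = ENERGY_DENSITY["coal"]
    for fuel, density in ENERGY_DENSITY.items():
        print(f"{fuel}: {density:.1e} J/kg, roughly {density / coal:,.0f}x coal")

Even allowing for enrichment and for the fact that only a small fraction of reactor fuel is actually fissioned, the gap remains several orders of magnitude, which is where the per-tonne claim comes from.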

Governments are paying attention, and consequently 27 nuclear power stations are under construction, with projections for another 136 within a decade. A new group of campaigners has arisen, known as pro-nuclear environmentalists. Bruno Comby of Environmentalists For Nuclear Energy says, "We have absolutely no choice. We need to turn to nuclear energy because it is both clean, safe and abundant enough to ensure the survival of our civilisation."

Although several pro-nuclear commentators were interviewed, including a French pro-nuclear environmentalist who said that he would be happy to have nuclear waste stored underneath his house, the bulk of the film focused on the director's visits to several notorious nuclear sites around the world. These included a nuclear fuel reprocessing plant at Sellafield in Cumbria, which suffered a large, highly radioactive leak in 2005, and of course, Chernobyl.

Chernobyl was the focus of the debate I mentioned earlier between the director and a pro-nuclear supporter in the audience. This man was no ordinary audience member, however. He appeared in the film, firstly speaking to a community group about the benefits of nuclear power and then to the director about his view that New Zealanders need to be more open-minded when it comes to nuclear energy.

The man's name is Dr. Ron Smith, and during the Q&A session he became very irate about the film's inconclusiveness regarding how many people actually died as a result of the meltdown. He said that the director had not even mentioned a recent World Health Organisation report which claimed that the harmful effects of the accident had been overestimated.

I had a brief read of that report. Here is a quote from it: "Given the low radiation doses received by most people exposed to the Chernobyl accident, no effects on fertility, numbers of stillbirths, adverse pregnancy outcomes or delivery complications have been demonstrated nor are there expected to be any. A modest but steady increase in reported congenital malformations in both contaminated and uncontaminated areas of Belarus appears related to improved reporting and not to radiation exposure."

The director did indeed leave room for speculation in this section of the film. One pro-nuclear commentator he spoke to said that the best sources he had access to put the number of deaths at 56. However, snippets of interviews with two other people were then inserted, one of whom said that because of Soviet cover-ups it was hard to get an accurate picture of the real number of deaths. She said numbers as low as 37 and as high as seven million have been given.

During the Q&A session, Justin mentioned these interviews in response to Ron's outburst. He continued by saying, "We've had this debate, Ron, so I don't want to get into it now." An audience member then piped up and said, "Yeah, but the audience hasn't heard it." Sadly, they didn't go into it, and that was the last we heard from Ron.

Ron Smith was interviewed on Campbell Live a little while back, and made it quite clear that he feels nuclear energy is very safe, clean, efficient etc. Providing the opposing viewpoint was the acting Energy Minister, Trevor Mallard. I thought Trevor made good sense and also got to the heart of the matter by highlighting the prohibitively high costs of building nuclear power plants. That interview can be viewed here.

As mentioned before, climate change is the driving force behind the renewed interest in nuclear power. However, it is not completely clear whether using nuclear power will actually reduce carbon dioxide emissions. A study conducted by the Institute for Applied Ecology concluded that based on the emission of global warming gases, nuclear power compares unfavourably to:

1) Conservation through efficiency improvements
2) Run-of-river hydro plants
3) Offshore wind generators
4) Onshore wind generators
5) Power plants run by gas-fired internal combustion engines
6) Power plants run by bio-fuel-powered internal combustion engines

Of the eleven ways to generate electricity that were analysed by the Institute, only four are worse than nuclear power in terms of greenhouse gas emissions. Every stage of nuclear power production, from the manufacture and eventual dismantling of nuclear plants, to the mining, processing, transport, and enrichment of uranium fuel produces emissions. Further emissions accompany the eventual processing, transport, and burial of nuclear wastes.

Even if this data is wrong and nuclear power is clean and green, not to mention safe, as Ron Smith would have us believe, there is a far more important issue that few seem to realise. And that is that many natural resources have reached their peak, and are now in decline. Not just oil, but metals, minerals, fish harvests, fresh water, fertile land. The list goes on. Our demand for all of these resources is increasing, while the supply is shrinking.

Having an abundant energy supply such as that provided by nuclear power would just bring further problems. More available energy means further industrialisation, leading to greater economic growth. This in turn leads to further increases in population and consumption. Which would also bring about more greenhouse gas emissions.

The simple truth is that the Earth has its limits, and cannot cope with humanity's over-exploitation of everything it has to offer. We will see economic contraction in the not-too-distant future. The question is whether societies will contract and simplify intelligently, or valiantly try to maintain the status quo with ambitious projects that will be ultimately unsustainable.

Some studies suggest that economically recoverable uranium is unlikely to last much beyond 2050. There are plenty of other energy sources that will always be around, and the way forward is finding out how to utilise them efficiently. Solar and wind energy are intermittent, so we need better electricity storage to ensure power can still be supplied when the sun isn't shining or the wind isn't blowing. I think New Zealand is on the right track. We know we don't have a need for nuclear power, and we know that renewable energy sources have to be tapped to mitigate our dependence on rapidly depleting fossil fuels.

If we want the future to offer hope, we have to realise that we live in a world of scarce resources. We have to work within those parameters. The era of great material abundance is over.

Sunday, September 30, 2007

Crude Impact

The 2007 Documentary Film Festival is on in Auckland at the moment, so today I went to the Academy Cinema and saw Crude Impact, which explores the interconnection between human domination of the planet, and the discovery and use of oil.

First off, I want to say that this is an important film, and I think everyone should watch it. It presents the subject matter in an accessible way, without being too preachy, and made me realise the influence oil has had in shaping today's society.

It begins by linking the world's population explosion during the last century to the utilisation of oil. Thanks to oil, agricultural practices advanced and the result was mass food production. Basic evolutionary theory states that if the food supply is abundant, the population will grow.

Nowhere is this more apparent than in the United States, and as one would expect, the US seems to be the country that the sustainability message is most aimed at. The US is a huge consumer of oil relative to its population, and this stems from the days when it produced more oil than any other country. This is what enabled its ascent to power - it was able to use oil to produce a wide range of products, which provided export income. With increased wealth came increased expectations. In the 1950s, the American dream of owning a large house, a car, and many possessions began.

The film mentioned that Americans are no happier today than they were in 1950, yet consumption levels are now many times higher. Ever-increasing consumption requires ever-increasing amounts of energy, so if more possessions will not lead to more happiness, why has this culture of excess become so ingrained?

In 1992, President George H. W. Bush famously said "The American way of life is non-negotiable", and it was this attitude that provided the justification to strike oil deals with the Middle East. More oil had to be found to preserve the American way of life, and US supplies had long since failed to meet demand. In 1956, Shell geologist M. King Hubbert had warned that oil production in the US would peak in the early '70s and decline steadily thereafter. He wasn't taken seriously.

But Hubbert was right. The demand for oil in the US has continued to rise, and much of it is now imported, a good deal of it from developing countries. One such country is Saudi Arabia, and the fact that the US willingly compromised its ideals to create a partnership with the Saudis is a good example of the power that oil exerts. Saudi Arabia is a country that is not greatly concerned with its citizens' general well-being. The government rules in an authoritarian manner, and would seem to have very different values to those which the US espouses. Yet a deal was struck with the Saudis - the terms being that they would provide the US with oil, while the US would help them maintain their power and provide weapons when needed.

The human impact of this insatiable demand for oil was also explored in the film, with the example given of how the indigenous peoples of Ecuador had their habitat destroyed by oil drilling. Their water sources were irreversibly polluted by sub-standard drilling practices, which has led to many of the natives dying from exposure to the carcinogens left behind. The Crude Impact website states: "As oil production increases, often the poverty level of regular citizens and indigenous peoples increases as well. These people rarely benefit from the wealth extracted from the land on which they live."

Another example of the human impact showed a prominent protestor in Nigeria being executed for trying to stop oil drilling from taking place on his people's land. African countries are often ruled by dictatorships, and the dictators will make deals with foreign countries without a thought for their fellow countrymen. The people on these lands are having their most precious asset stolen from them, and they can't do a thing about it.

Environmental issues were also raised, such as widespread species extinction due to pollution and global warming. Our continuing dependence on fossil fuels is the primary cause of global warming, and while the film did have a shot of a field of solar panels, there were not really any ideas given about how to meet our energy needs in an alternative way. The advice for now seems to be "reduce your energy demands, so that the oil that is left will last longer."

This may be easier said than done, considering the fact that the Chinese are experiencing an industrial boom. It was said in the film that if every Chinese person were to consume as much as each American, we would need six Earths. Another interesting statistic was given, and this was that if each American household replaced one of its lightbulbs with an energy-efficient bulb, the resulting reduction in energy consumption would be akin to removing one million cars from the road.

Current consumption levels cannot continue. The Earth has its limits, and replacing one energy-producing resource with another will not change this. Four recommendations were given to pave the way for sustainability:

1) Reducing the population. The film states that when women are given social, political and economic power, population stabilises and may even decrease. This is really about gender equality and giving women in less developed countries the same right to education as men. Women with more education have fewer children.
2) Reducing dependence on fossil fuels.
3) Buying locally produced food and other goods, so that less transportation energy is consumed.
4) Spreading the sustainability message to political leaders.

Our technological skill has progressed exponentially over the past century, but little attention has been paid to the long-term costs of our actions. The "bigger is better" attitude needs to be erased from the human psyche and new paradigms have to be developed. Without sustainability in the forefront of our minds, our short-term gains will do nothing more than bring long-term pain.

Sunday, September 23, 2007

Ratatouille

I saw Ratatouille today and I thought it was amazing. Intelligent, funny, heart-warming, emotionally satisfying - I can't say enough good things about it. The animation is brilliant, the story has meaning, and the characters don't feel as if they've been dumbed down to target the younger audience, as seems to be the case with most of the kiddie fare produced these days. I was spellbound all the way through. This is a very likeable and well-made movie and I would have to say it is easily one of the best movies I have seen this year.

One thing I particularly appreciated was that the story felt less conventional than what I have come to expect from my cinema visits. Ratatouille's themes - such as "know yourself", "follow your dreams", "embrace new ideas" - were all presented in a non-preachy way, and this added to my overall impression of the movie as having a nice, simple charm. There wasn't any low-brow humour either, which I thought made a nice change.

The most surprising thing for me was that during one scene I found myself getting teary-eyed. This was not a sad scene, just an intensely emotional one. It seems to me that there are many things that I tend not to get emotionally involved in, so for a movie to bring about a strong reaction in me was a little out of the ordinary. It got me thinking that there are not enough intensely joyful moments in my life.

But that's another story. My final verdict of Ratatouille is: a thoroughly entertaining dose of escapism. You must go see it.

Sunday, September 16, 2007

The Accident That Is Life

Charles Darwin once speculated that life on Earth arose in a "warm little pond" of organic chemicals. His hypothesis, stated in a letter to Joseph Dalton Hooker in 1871, was that this pond would have needed to contain all sorts of ammonia and phosphoric salts, along with light, heat, and electricity. However, he was not able to test his theory, and so research in this field, known as abiogenesis, progressed slowly.

Then in 1924, Russian biologist Aleksandr Oparin put forward a theory of life on Earth developing within a "primeval soup", through the gradual chemical evolution of carbon-based molecules. A further theory, by J.B.S. Haldane, asserted that ultraviolet light acting on Earth's primitive atmosphere caused amino acids (the building blocks of life) to form and concentrate in the oceans. And then in the famous Miller-Urey experiment conducted in 1953, Stanley Miller passed sparks of electricity through a glass chamber filled with water, methane, ammonia and hydrogen. This experiment was intended to re-create the conditions present on primitive Earth, right down to simulating lightning, which was thought to be an important catalyst in early chemical reactions.

Using paper chromatography, Miller was able to detect amino acids and other organic molecules that had formed in a trap connected to the apparatus. The experiment had therefore demonstrated that organic molecules could form spontaneously from inorganic precursors, and it made headlines around the world. It seemed that the mystery of the origin of life had been solved. The hypothesis was that organic molecules were formed in the atmosphere after coming into contact with lightning, and were then rained into the ocean, where they combined to make proteins and nucleic acids, which are the basis of all life forms.

However, as with all experiments, objections were raised. Of particular concern was the fact that Harold Urey chose the gases that would be used, after assuming that these gases were present in early Earth's atmosphere. It was argued that by choosing gases that were very chemically active, Urey ensured that something would happen when the gases were placed together and a catalyst was applied. It was also later shown that the atmosphere on primitive Earth did not contain significant amounts of methane or ammonia. Scientists now believe that the atmosphere contained a far less reactive mix of carbon dioxide and nitrogen.

When Miller repeated his experiment in 1983 using these gases in place of methane and ammonia, the resulting brew contained negligible levels of amino acids. Creationists seized upon this failure as evidence against abiogenesis. Scientists returned to earlier theories to explain the origin of life, such as panspermia. This theory suggests that the origin of life depended heavily on chemicals delivered to Earth by comets and meteorites.

This is difficult to test however, and this is why it is preferable to assume that life originated on Earth rather than elsewhere in the universe. Still, the theory is considered possible, and it has the advantage of extending the available time frame and range of environments for life to develop. Moreover, panspermia does not conflict with the findings of the original Miller/Urey experiment, as many of the organic molecules that were detected by Stanley Miller are known to exist in outer space.

A meteorite that fell in Murchison, Australia in 1969 was shown to be rich in amino acids. Researchers studying the meteorite have identified over 90 amino acids, 19 of which are found on Earth. Since primitive Earth used to be nothing more than an enormous lump of rock, similar in composition to many of the asteroids and comets roaming the galaxy, it would make sense that amino acids were formed at the same time as the Earth, and hung around, in an inactive state. It also follows that many amino acids would have been transferred here by meteoritic infall.

Findings such as these lend credence to the idea that elements not originally present on Earth made their way here from space and were then responsible for the development of life. The early Earth was bombarded heavily by comets, and it is likely that this brought water here, along with a supply of complex organic molecules. Local evidence has also supported this. A meteorite streaked across New Zealand's sky on November 26, 1908. Two pieces of it were retrieved from a small crater at Mokoia, Taranaki. These pieces have been extensively studied, because the meteorite is one of a rare group that contains compounds of carbon and hydrogen.

Even though panspermia is a credible theory, science is continually unearthing new evidence, and recent experiments by Jeffrey Bada seem to have returned the origin of life to Earth. Bada is a chemist at Scripps Institution of Oceanography in La Jolla, California. He discovered that Miller's 1983 experiment, which used carbon dioxide and nitrogen to simulate the early atmosphere, produced chemicals called nitrites, which destroy amino acids as quickly as they form. Bada noted that the early Earth would have had significant amounts of iron and carbonate minerals, which neutralise the effects of nitrites. When Bada added these minerals to the experiment, the resulting liquid was filled with amino acids.

But other researchers are still sceptical of the claim that this finding disproves panspermia. James Ferris, a prebiotic chemist at Rensselaer Polytechnic Institute in New York, agrees that proteins can form after amino acids have been activated by lightning, but he doesn't see how the building blocks of nucleic acids would have developed. His argument seems to suggest that the first cellular organism arose from a combination of earthly amino acids and interplanetary microbes.

At any rate, experiments such as Bada's provide increasing evidence that science, given enough time, will find the answers to the mystery of the origin of life. It is natural to expect that for the time being, the answers will be in a continual state of change. This is the way science works - all "truths" are dependent on examination by others.

For me, the fact that the answer is not set in stone is what makes the contemplation of the origin of life so intriguing. In contrast, the problem with believing that an intelligent designer is responsible for starting life is that you have to maintain this belief, even when evidence comes along that may refute it.

Creationists like to believe that life is so infinitely complex and perfectly ordered that there is no way it could have all come about by chance. They argue that one need only look at the patterns inherent in the natural world to conclude that nature had a designer with intelligence and immense power.

Unfortunately, creationists usually overlook the fact that the creator of something as immensely complex as the universe would also have to have been created. If life cannot be the result of mere chance, it follows that an omnipotent supernatural intelligence would not just spring into existence. Therefore, there must be another designer - a super-designer - with so much power that designing a designer that can design everything is all in a day's work.

And then in order to have a super-designer you would need a super-super-designer. This is where the whole theory falls flat. I'm no closer to an answer about the origins of life than when I first started my philosophical musings.

There seems no reason to assume that the appearance of life on Earth was planned. Just look at atoms and their weird worlds of chaos. At that scale, electrons do not follow neat, predictable paths; their behaviour can only be described in terms of probabilities. Creationists are well aware of this but do not find it convincing. They claim that the chance occurrence of the right combination of atoms needed to form even the simplest of living organisms is so remote that life must be the result of intelligent planning.

Creationists who make statements such as the above do not truly understand chemical evolution. The complex compounds that make life possible are not the result of a sudden combination of atoms; rather, they are the result of many intermediate steps and synthesising processes. Life started off in the most basic way possible, then succeeded in pushing forward. Life may indeed be an extraordinarily unusual occurrence, but this doesn't mean we should immediately assume conscious design. I quote the argument of W.T. Stace:

A man walking along a street is killed by a tile blown off a roof by the wind. We attribute this to the operation of blind natural laws and forces, without any special design on the part of anyone. Yet the chances against that event happening were almost infinite. The man might have been, at the moment the tile fell, a foot away from the spot on the sidewalk on which the tile fell, or two feet away, or twenty feet away, or a mile away. He might have been at a million other places on the surface of the Earth. Or the tile might have fallen at a million other moments than the moment in which it did fall. Yet in spite of the almost infinite improbability of that happening, we do not find it necessary to suppose that someone threw the tile down from the roof on purpose. We are quite satisfied to attribute the event to the operation of natural forces.

Biologists have accumulated a vast body of knowledge about the natural world, and this was achieved by looking no further than nature itself for explanations. Yet even with the quantity of information available, the origin of life is a topic in which the creationist view prevails. This is because it is one of the few areas for which science does not have a conclusive answer. However, having unanswered questions does not mean we should create a "god of the gaps".

Gods were responsible for disease until we found bacteria and viruses. Until recently, mental illness was thought to be caused by demonic possession. Now we know there are biochemical causes. It is only natural that God's sphere of influence will steadily shrink as we find out more and more about the universe in which we live.

Because there is no way to dust for the fingerprints of an intelligent designer who transcends natural processes, we have to stick with what is observable. Science can only deal with what is observable. In order for us to function as rational human beings, we must stop seeing patterns where there is only randomness and see things as they really are.

Sunday, June 24, 2007

Plants Can Tell Who's Who

According to an article that I read in the New Zealand Herald last week, it seems that plants are able to tell relatives apart from strangers. The article suggested that plants are operating on a higher cognitive level than we give them credit for.

I've reprinted it below for your viewing pleasure. It is from the 19 June 2007 edition.

What will the vegans eat now? Researchers at McMaster University have found that plants get fiercely competitive when forced to share their pot with strangers of the same species, but they're accommodating when potted with their siblings. "The ability to recognise and favour kin is common in animals, but this is the first time it has been shown in plants," said Susan Dudley, associate professor of biology at McMaster University in Hamilton, Canada. "When plants share their pots, they get competitive and start growing more roots, which allows them to grab water and mineral nutrients before their neighbours get them. It appears, though, that they only do this when sharing a pot with unrelated plants; when they share a pot with family they don't increase their root growth." Though they lack cognition and memory, the study shows, plants are capable of complex social behaviours such as altruism towards relatives, says Dudley.

If I were looking for a non-biological explanation for this phenomenon, I would mention Rupert Sheldrake's theory of morphic resonance fields, which basically states that invisible energy patterns or morphic fields surround and affect all living things. Organisms that have surrounding energy fields of similar vibrations can communicate telepathically, and perhaps that is what these plants are doing.

However, a more logical explanation would be the biological concept of resource competition. According to the article, a plant grows bigger when it is potted with an unrelated member of its species. One obvious thing to look for is at what time each day the plant absorbs the most water and mineral nutrients. Siblings are likely to all operate to the same schedule, since their genetic makeup is similar. Therefore, they'll use available resources less efficiently than strangers that operate to different schedules.

An alternative explanation is the biological process known as allelopathy, where one plant harms another with specific biomolecules, in order to hinder this plant's growth and further its own. According to Wikipedia, "Although allelopathic science is a relatively new field of study, there exists convincing evidence that allelopathic interactions between plants play a crucial role in both natural and manipulated ecosystems. These interactions are undoubtedly an important factor in determining species distribution and abundance within some plant communities."

In any case, concluding that plants can "recognise" relatives and strangers seems a bit suspect. We have brains to perform this task. Plants do not. The researcher also makes the assumption that plants are capable of complex social behaviours such as altruism towards relatives. That's quite a leap to make. Looks to me like another case of anthropomorphism.

Friday, May 25, 2007

Earthlings

Just over a week ago, I saw a brutal and hard-hitting documentary called Earthlings. Narrated by Joaquin Phoenix, it critically explores how, over the course of history, humans have placed their own interests far above those of other living creatures. Consequently, the animals who share the planet with us undergo a tremendous amount of suffering in the name of human progress.

We were warned at the start of the movie that there would be some unpleasant scenes, but that if we managed to stay to the end, we would receive a bag of goodies. Well, "unpleasant scenes" was right, and somewhat of an understatement. Earthlings was filled to the brim with hidden-camera footage of animals being mistreated and tortured. The scene I found most repellent was of a goat that had been skinned alive. The expression on its face was one of pure terror.

The film was surprisingly comprehensive in its examination of the ways in which humans exploit animals. It started with animals being used as pets, then moved on to how we use animals for food, clothing, entertainment, and medical research. However, the brutal examples that were shown in all these areas were not, in my opinion, representative of society as a whole. There was a scene which had hillbilly types swearing at and beating pigs. Another scene showed dog catchers throwing a stray dog into a garbage truck and then watching as it is crushed with the rubbish. Japanese fishermen were shown slicing open dolphins. The most extreme and shocking examples were used in order to get the audience to sit up and pay attention.

A thought-provoking point was raised at one point: that if we had to kill our own meat, we would all be vegetarians. I can understand this, as I wouldn't want the blood of the free-range chicken that I eat once a week to be on my hands. However, many indigenous tribes around the world have no problem hunting for and killing their own meat. That's how they feed their families. Hunting is in their genes. It's who they are. The moral aspects of killing a living being don't come into it. It's the law of nature - survival of the fittest.

The men in these tribes who have been taught to hunt bring back the meat, and women and children partake in its consumption. In the same way, our society has organisations that specialise in supplying the meat. What I object to then is not the killing for food, but rather the way that these organisations 1) Waste resources, and 2) Cause animal suffering.

In the US, vast areas of land are used to keep livestock. More than 800 million acres of US land (more than a third of the country), and approximately 24% of the planet's land area, is grazing ground for cattle. Then there's the land and water used to produce feed for those cattle. One acre of land can produce enough grain to feed about 25 cows for a day; the same acre could instead produce enough grain to make 2600 loaves of bread. One hundred acres of land can only produce enough beef to feed 20 people, while the same acreage can produce enough wheat to feed 240 people. The world's poor are starving due to this gross misappropriation of resources.
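
Taking the film's figures at face value (I haven't verified them independently), the ratios the argument turns on are straightforward to compute:

    # Figures as quoted from the film; treat them as the film's claims, not verified data.
    people_fed_by_beef_per_100_acres = 20
    people_fed_by_wheat_per_100_acres = 240
    cow_feed_days_per_acre = 25        # one acre feeds ~25 cows for a day
    loaves_of_bread_per_acre = 2600    # the same acre, used for bread grain

    wheat_to_beef_ratio = people_fed_by_wheat_per_100_acres / people_fed_by_beef_per_100_acres
    print(f"Wheat feeds {wheat_to_beef_ratio:.0f}x as many people as beef on the same land")
    print(f"Per acre: {cow_feed_days_per_acre} cow-days of feed vs {loaves_of_bread_per_acre} loaves of bread")

In other words, by the film's own numbers, the same land feeds twelve times as many people when used for wheat as when used for beef.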

In stark contrast, it seems that the chicken and pork industries don't use enough land. Battery hens are kept in tiny cages for their entire lives, and pigs are crowded together in pens that prevent them from carrying out natural behaviours such as rolling in the slop and nursing their young. It has been said that chickens are the most abused animals on the planet. They are confined to sloping wire cages in dark sheds with little or no natural light. They will never see the sun, scratch the earth, or forage for food. Battery hens are routinely de-beaked; a process where chicks have their beaks cut back with a hot blade, causing instant and chronic pain. Day-old male chicks are killed in a huge grinder because they can't produce eggs and are too scrawny to be bred for meat.

I think if we want chickens to have better lives, free-range will have to become the norm. Of course, far fewer chickens and eggs would then be available for people to eat, and they would be more expensive. But this would be the only way to end the suffering. It would probably force many people into going without chicken and could potentially create many converts to vegetarianism. At least free-range chickens have a relatively happy life before ending up on the dinner plate.

I don't want to say much about fish, other than the fact that Earthlings tried to put the viewer off eating fish entirely by showing diseased fish. The virus that caused the disease is supposedly more lethal than AIDS. There is also the fact that most of the world's oceans are being fished to their limits.

I eat a lot of fish, and I am convinced that it has many health benefits. This is the one meat I would want to continue eating. I don't think beef is a healthy meat, due to its high levels of saturated fat. The argument that overconsumption of beef is partly responsible for the widespread occurrence of obesity and diabetes in the developed world was given a brief mention in the film. What needs to happen is for the mass media to present this information unequivocally. This would hopefully reduce the public's desire for beef, and less land would then be needed for cattle farming.

However, there are alternatives to beef. Just last week, I read an article in the New Zealand Herald that espoused the benefits of horsemeat. Guess what? Horsemeat is 50% leaner than beef, higher in protein, has 10 times more Omega 3, and gram for gram has more iron than spinach. It is also high in vitamin B12, rich in zinc, and very low in saturated fat. Gordon Ramsay is going to serve it at his restaurant in London. Animal rights activists weren't too happy when they heard the news, and dumped a truckload of horse manure outside the restaurant.

My guess is that people are more opposed to eating horses than cows because we use horses for entertainment. We ride them and race them, and they are seen as graceful creatures of beauty. The Herald article (which I should add, was reprinted from a British newspaper) agreed that pigs are intelligent, but said that horses are no more intelligent than chickens.

The intelligence argument is interesting. It is an example of anthropomorphism. Intelligence is a human characteristic and we can readily identify with animals that display this characteristic. It is human nature to put the most value on creatures with the most human characteristics. We have more sympathy for four-legged mammals than we do for sea-dwelling creatures with scales and fins. And most people would place the life of a fish above the life of a mosquito.

Our tendency to see a human life as superior to an animal life is given a name in the film: speciesism. This form of prejudice is compared with racism and sexism - the side with more power sees the other side as inferior and therefore tries to exploit this power.

Of course, speciesism is just another word for anthropocentrism, which is the view that humans are the most important beings on Earth. This way of looking at the world came from the Bible, which teaches that humans are the apex of God's creation and that all of creation is there for humans to develop and use responsibly.

If it is not in our nature to see all lives as equal, what lives should we value? And what is it about these forms of life that makes them valuable? Some would say that all self-aware creatures should be valued. Some say that we should value animals that can express pain in a way we can relate to - if they scream and writhe, then why should we be masters of their fate? And if animals were killed painlessly, would it then be moral to eat them?

I don't have the answers to these questions, because the decisions people make are based on their values. There's no "one size fits all" way of living life. I don't eat red meat. I don't eat pork. In fact, I don't eat mammals at all. I may not be a vegetarian, but I do consider myself an intelligent eater. I know that vegetarians are said to be healthier than non-vegetarians, but in the end, the choice not to eat meat must be an individual one.

Wednesday, April 25, 2007

Six Impossible Things Before Breakfast

I saw Six Impossible Things Before Breakfast in Dymocks, skimmed through a few pages, and concluded that I had stumbled upon quite an interesting topic. So I borrowed the book from the library. It is subtitled "The Evolutionary Origins of Belief", and if you've ever wondered why people believe what they believe, this book attempts to provide some of the answers. Why is belief so important? Because all of our behaviours are influenced by what we believe. Furthermore, the beliefs of people in positions of power dictate how society operates. It is up to these people to ensure that their beliefs are conducive to a well-functioning society. To quote Harry Kroto, "We can't afford to have power concentrated in the hands of people influenced by mystical rather than rational philosophies."

The author, Lewis Wolpert, is a British developmental biologist. He is also one of the UK's foremost science writers. In this book he looks at how beliefs originated, and puts forward a theory that we naturally try to find a causal explanation for things, even when there is insufficient information to do so. Why do we do this? It's all to do with how our brains evolved in relation to our environment. Here's a story to illustrate the point:

Back when man was living the hunter-gatherer life, uncertainty was everywhere. He could be taking a leisurely stroll one minute, then be pounced upon by a sabre-toothed tiger the next. He needed some way to exert greater control over his destiny. One day, he was out hunting near his cave, when suddenly there was a rock slide. He leapt back, but he wasn't fast enough, and one of the rocks landed on his foot. He fell to the ground, howling in pain.

In the branches of a nearby tree, a peacefully dozing sabre-toothed tiger was suddenly jolted awake. Growling angrily, the tiger jumped out of the tree and charged towards the injured caveman. The caveman had to think fast. He couldn't run away. He saw the rocks all around him, remembered the pain they could cause, and instinctively grabbed one in each hand. As the beast swiped its paw at his face, he brought the rocks together and crushed the paw between them. The tiger roared in pain and limped off into the jungle.

After the caveman's fear had subsided, he realised that the rocks he was holding were extremely valuable. If he were ever in such a situation again, he would need those rocks. But first, he would have to make them easier to carry. Tool making was born. The hassle of the hunting process was greatly reduced. Humans had an adaptive advantage and the process of evolution moved forward. More patterns in nature were found - as well as causing pain, two rocks could cause sparks to appear if struck together at the right angle. When these sparks landed on dry twigs, man became owner of a powerful tool with which to control the environment - fire.

Man now had time to ponder in front of a warm fire. This led to questions. He questioned why things happened - what caused the sun to rise each morning, what caused his fellow tribe members to become ill, what happened after death. Using a combination of experience and intuition, man began to devise answers. Causal beliefs were born.

For those who are unfamiliar with the term, a causal belief is the belief that something will happen as a direct result of another thing. For example, I believe that when I strike a match, I will see a flame appear at its tip. Of course, this is a rational causal belief, as there is plenty of evidence to suggest that a flame will indeed appear. However, when the evidence is missing, causal beliefs still arise, and people stubbornly hold on to them, no matter how irrational they may be. Wolpert convincingly argues that there is an evolutionary basis for this behaviour, and after reading this book, I find that I agree with him.

Looking at human history, we can see that many beliefs evolved to help with survival. This is especially evident in the many irrational beliefs people hold about health. In many tribes around the world, illness is still seen as the result of failing to please one of the many omnipresent supernatural beings. In Western society, vitamins are often perceived as providing a defence against a variety of illnesses. However, studies have shown that swallowing a sugar pill is often just as effective, because the person believes they are swallowing a vitamin and expects to be cured. This is known as the placebo effect. Strangely enough, the placebo effect also comes into play when a person prays. If someone believes that praying to God can cure an ailment, a positive effect can indeed be achieved. The person believes it will work, so it works.

Frame of mind is also important for those who have a belief in the paranormal. When sceptics observe a psychic, they believe that they will be exposed to some sort of trickery, whereas believers in psychic powers expect a genuine display, and hope to make contact with another world. These expectations do affect the observer's experience. Their later recall of what happened is greatly influenced by their beliefs - the believers recall psychic phenomena even when the demonstration has been unsuccessful.

Societies around the world have developed different ways of thinking and consequently have different beliefs. Wolpert states: "Compared to Europe and the USA, China and other East Asian societies remain committed to the idea of the individual as less important than the society." And further on: "Americans perceive the cause of murder as mental instability, whereas Chinese see it as reflecting society's failure."

Every society around the world has a religious tradition of some sort. We make gods and religious systems for the same reason that we make tools. Religion is a tool for the soul. Wolpert is aware that many people need religion. It gives their lives purpose and meaning, and the comfort of knowing that there is a controlling force. He acknowledges that there is evidence that religious activities reduce psychological stress and promote greater well-being and optimism. He also mentions the suggestion, raised by some, that a full apprehension of the human condition would lead to insanity, and that paranormal beliefs can help guard against this, particularly the belief that we live in some domain larger and more permanent than everyday existence.

To me this seems to suggest that everyday existence is not enough for religious people. I can accept this, but not the fact that the concept of a higher power or an afterlife is touted as something that everyone needs to have faith in. Faith should be a personal thing. A person may intuitively feel that there is an entity greater than themselves at work in the universe, but trying to explain this to others is futile, because other people cannot experience it.

Although Wolpert examines where belief came from, he refrains from attacking it. He remains very level-headed and unbiased, and simply says that people should look at the evidence for their beliefs rather than accept them without question. Of course, for many people this is easier said than done, since beliefs are formed using different principles and methods than the critical thinking approach inherent to science. Our belief engine prefers quick decisions, is bad with numbers, and sees patterns where there is only randomness. It is too often influenced by authority and it has a liking for mysticism. Perhaps Wolpert refrains from attacking belief because he accepts that causal thinking is a result of human evolution.

Richard Dawkins, on the other hand, revels in pointing out the logical flaws in religious belief, and even goes so far as to claim that churchgoers are morally worse than those who stay away. This is actually quite an interesting point, as there is some truth to the underlying observation. C.S. Lewis conceded the possibility in his popular theological work, Mere Christianity. The Gospel is generally preached to the weak and the poor, as when missionaries go to Africa to spread the word. Because of who is targeted when spreading the word, troubled souls may well be drawn disproportionately to the Church.

According to Lewis, the appropriate contrast should not, therefore, be between the behaviour of churchgoers and nongoers, but between the behaviour of people before and after they find religion. Dawkins's opinion that churchgoers are morally inferior to the rest of us is obviously logically unsound; one could use the same logic to deduce that medicine is bad because those sitting in a doctor's office are on average sicker than the rest of us. Dawkins also thinks that humanity as a whole would have been better off without religion. Many people who gain fulfilment from their religion would disagree, but in any case, the answer is unknowable. Our history has been shaped by Christian traditions. And the way our brains are wired means that if religion wasn't available, people would find something else to fill the God-shaped hole in their consciousness. Like celebrities. Or football.

Interestingly, Wolpert mentions that tools and causal beliefs may be the basis of the human fascination with ball games. "All involve focusing on how a ball will behave when struck or thrown, and thus involve a basic understanding of the physical forces involved." He asserts that all sports may reflect an innate interest in such processes, especially for those no longer making or using tools. Physical causality is indeed an innate attribute in humans. Wolpert observes that by eighteen months, babies will use a "tool" like a rake to pull a toy towards them that was out of reach. Chimpanzees find this difficult.

He goes on to say: "Babies one year old already point at things, which is something no ape, young or adult, ever does. Babies do this to get a toy before they can talk. It means that they know that what they see, some other person can also see." According to Wolpert, by the age of four, children have a well-developed theory of mind, and recognise that others have an image of the world. They know that beliefs are different from real objects, and know that beliefs determine to a large extent how people behave.

There are interesting anecdotes all throughout the book, and this is one thing I really liked about it. Here is an example: "A test for whether children have a theory of mind, that is, whether they can understand what others are thinking, involves showing them a small tube with a very characteristic pattern that normally contains Smarties: the well-known sweets. A child, when asked what such a tube contains, will say that it contains Smarties. The child is then shown that it does in fact contain pencils and is then asked what their best friend would think was in the tube. Children with a theory of mind will reply that the friend would think it contained Smarties. Autistic children cannot give the correct reply, and will suggest pencils."

There is another interesting story about a tribe whose beliefs foster cooperation. The Chewong in Malaysia distribute food according to their belief in "punen", a misfortune caused by the failure to satisfy an urgent need. To avoid punen, the group ensures that everyone's hunger is satisfied when food is shared.

In chapter seven, Wolpert looks at ways people can create false beliefs, using processes such as confabulation, delusion, and hypnosis. He also mentions schizophrenia and the hallucinatory effects of certain drugs. Especially interesting were the experiments showing that beliefs can be created in a hypnotised person's mind. A person's level of suggestibility and their willingness to accept what an authority figure tells them are the key factors in this being accomplished successfully.

There are a number of neurological illnesses that result in delusional beliefs, one being the Capgras delusion. Wolpert explains: "When the patient sees someone he knows very well, a wife or parent, or child, he claims that the person looks like, for example, his spouse, but she is not really his wife and may be an alien imposter. In other respects the patient may be largely normal. One explanation is that in seeing his wife, he recognises her, but that the normal emotional response is absent and thus it could not be his wife, and so he believes it must be an alien pretending to be her." Pretty strange.

In case you're wondering, the title of the book is a direct quotation from Lewis Carroll's Through the Looking-Glass:

Alice laughed: "There's no use trying," she said; "one can't believe impossible things."

"I daresay you haven't had much practice," said the Queen. "When I was younger, I always did it for half an hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."

Carroll makes his White Queen proud of believing impossible things - a feature of many passionate believers. In my opinion, irrationality is not a good way to live life. However, as Wolpert states: "Beliefs, once acquired, have a kind of inertia in that there is a preference to alter them as little as possible. There is a tendency to reject evidence or ideas that are inconsistent with current beliefs, particularly if they undermine central beliefs; this is known as the principle of conservatism." As Francis Bacon once said, "Man prefers to believe what he prefers to be true."

I find it difficult to believe in anything that I can't know for sure. For example, I can see no evidence for the existence of a soul. Personally, I think the idea of a soul is unnecessary. It certainly doesn't affect our lives here on Earth. I think a plausible explanation is that the idea of a soul came about as a way for people to deal with the fear of death. The body dies, but the soul lives on. Many people would find this comforting.

I have a few criticisms of the book. At times I found it a little unfocused, and there are a number of poorly constructed sentences. In the chapter on religion, Wolpert admits that his evidence is often weak, and he often prefers to provide trivia rather than argue his own perspective. Wolpert classifies himself as an atheist reductionist materialist, and seems to be puzzled by belief, but accepts it nonetheless. As I see it, his main message, which appears on page 22, is this: "The freedom to have beliefs is very important, but it carries with it the obligation to carefully examine the evidence for them."

I'll conclude with a quote from Sam Harris: "The only thing that permits human beings to collaborate with one another in a truly open-ended way is their willingness to have their beliefs modified by new facts."

Thursday, April 12, 2007

Dreams and Reality

A couple of nights ago I had an unusual dream. I was in my bedroom in my childhood home, when all of a sudden this praying mantis showed up. It was a large praying mantis, about the size of a dog. I threw a handful of bugs towards it, and it gobbled them up. It then moved nearer to me, and I could see that spikes were growing on its head. I didn't feel scared at all. And that's all I remember.

I only pay attention to my dreams when there's something memorable about them. When I woke up, I remembered the praying mantis, and I wanted to find out what it meant. Apparently, seeing a praying mantis in your dream suggests that you are in a destructive relationship. So I had a think through my relationships and came to the conclusion that the only one that could possibly fit the bill is my relationship with my job.

It just so happens that during my working day before I had the dream, my legs were feeling stiff from lack of movement. I had a strong desire to go for a run just to get the blood flowing. Sitting at my desk in the office for hours on end can't be good for my body. So, was my subconscious telling me that my job will bring destruction of a physical nature, or was it something deeper?

If we're talking job satisfaction, there are certainly aspects of my job that I do not enjoy. Yesterday was particularly unenjoyable. I spent the entire day calling prospects and rattling off the same spiel. Hardly anyone was interested. All in all, it was an empty day. However, last week I felt good after having convinced a prospect to buy. I made a sale. I tooted the horn. My boss recognised my value to the company. I guess you could say that I have a love/hate relationship with my job. Some days it seems challenging, other days it seems like a complete waste of time.

I was reading something the other day about what you need to get out of your job in order to feel passionate about it. Passion at work is not just about the product or service you are selling. It depends on two key things: whether you are doing something that has meaning for you (which can come from either the product or the process), and whether you are making progress. If you find a job that meets both criteria, your engagement, fulfilment, and ultimately your contribution will rise dramatically.

Friday, April 6, 2007

Pan's Labyrinth

Recently I went to the movies and saw Pan's Labyrinth. One of the reasons I chose to see this movie was because I read it had been nominated for six Oscars. I also looked at the reviews beforehand and saw that it was universally endorsed by the critics. The synopsis described it as an enchanting, yet dark fairytale. So I was really looking forward to seeing it and was expecting a fantasy tale along the lines of Spirited Away or The Chronicles of Narnia.

Pan's Labyrinth was not at all like those movies. This is not a bad thing, but I feel that it has been marketed in a somewhat misleading fashion. At first I thought it might be a movie that children would go and see, but I quickly changed my mind after a particularly brutal and graphic scene. The movie has a fair amount of violence, and I'm talking violence that is very realistic and unstylised.

The story is set in Spain in the years just after the Civil War. It centres on a young girl named Ofelia, who has just moved with her pregnant mother to live with her new stepfather. The stepfather is a particularly nasty army commander, and most of the movie's violence happens when he is onscreen. This aspect of the movie contrasts sharply with the mystical world of fauns and fairies that Ofelia frequently visits.

Ofelia learns that she is the human incarnation of an ancient princess, and to return to her rightful spot on the throne, she must complete three tasks before the next full moon. I became excited when this aspect of the story was introduced, and was anticipating a wildly imaginative fantasy adventure. When Ofelia began her first task by entering a hollowed-out old tree and making her way through a dark muddy tunnel, I was reminded of Alice in Wonderland, and I said to myself, "Yeah! This is the stuff!"

However, after Ofelia's completion of the task, we were back to reality and the story returned to its focus on the battle between the Spanish soldiers and the rebels. This side of the story actually receives more attention than the above-mentioned fantastical tale. It has elements of espionage and torture, and is quite intriguing. But I still found myself a little disappointed, because I thought I was getting a story set in a fantasy world. War movies generally don't appeal to me.

Yet this was a memorable movie - the character development is good, and the addition of fairytale-like aspects gave it quite an unusual feel. There's one very creepy scene where Ofelia enters a hall that is home to a hideous creature. The scene had a foreboding atmosphere, similar to what one might expect from a David Lynch movie.

I don't want to give away the ending; suffice it to say that the two stories merge in quite a meaningful way. I did want to see more of the fantasy world, but I think that may be because I had been led to believe the movie was something it wasn't. The viewer should know beforehand that this is a war drama/suspense film, and that the fantasy elements are secondary. I'll finish off by giving my interpretation of the deeper meaning of the film, which won't make much sense unless you've seen it: Pan's Labyrinth is an exploration of the human need for escapism during difficult times.

Monday, March 26, 2007

The Brain Test

The reading I've been doing lately about our growing understanding of how the brain works prompted me to do the Tickle.com Brain Test. The questions themselves were quite interesting, as there were no wrong answers - the option you choose will depend on what feels most natural. The test aims to find out if you are dominant in the left or right hemisphere of your brain, and will also determine if you are a visual or auditory learner. I was not surprised by my lack of hemispherical dominance. Here's what the report said:

Paul, you are balanced-brained, which means that you rely equally on both the left and right hemispheres of your brain.

You are able to draw on the strengths of both the right and left hemispheres depending on context. Typically, people with balanced right and left hemispheres are very comfortable with switching between local and global perspectives - that is, paying attention to both small details and larger issues when the circumstance indicates. That means they can identify elements that make up an image or situation and also attend to the larger, more holistic pattern or unified whole that those details comprise.

You are able to capitalize on the left hemisphere's skills in verbal communication as well as on the right hemisphere's focus on patterns and association-making. This rare combination makes you a very creative and flexible thinker.

Depending on the situation, you may rely on one hemisphere or the other. Some situations may lend themselves to using your right brain's creativity and flexibility while other situations may call for a more structured approach as dictated by your left brain.


My test results also showed that I am a visual learner. Apparently, Stephen Hawking is also balanced-brained and a visual learner. As he is said to be the smartest man alive, I am in good company. The test report also has some interesting information about brain physiology:

Your brain is made up of many different parts and is responsible for many different functions of your body. Because of this, it has adapted to be a very specialized organ. There are parts that control what you taste, what you feel, how you learn, how you think, and how you reason. All of this is so no one part gets overtaxed or worn out, and also so you can perform more than one task at a time.

Your brain stem controls your reflexes and involuntary functions such as breathing, heart rate, blood pressure, and digestion. Your cerebellum helps coordinate movement. Your hypothalamus controls body temperature and feeds behaviors like eating, drinking, aggression, and physical pleasure. Your cerebrum, or cerebral cortex, translates information transmitted from all of your sensing organs. It helps start motor functions, it controls emotions, and it is the center for all thinking, reasoning, learning, and memory. In short, it analyzes all information you feed to it.

The cerebral cortex is divided into two hemispheres. The left hemisphere is responsible for speech, controls the right side of your body, and serves as your logic and reasoning center. The right hemisphere governs your creativity and your athleticism among other things. In the past, people oversimplified this relationship.

People used to say if you were logical, you were definitely left-brained, and if you were creative, you were definitely right-brained. This is no longer the case. New research indicates that there's more flexibility when it comes to our gray matter. And if you know where your strengths and weaknesses lie, you can train your brain to become more organized, creative, or better able to process all sorts of information. Here's some general information on the differences between the left and right hemispheres.

Left hemisphere
There's more to your left hemisphere than analytical strength. Your left hemisphere is involved in linear analytical processes, including processing word meanings and symbols, interpreting facts, and much of your language production and reception.

When you look at a photograph or a painting, your left hemisphere is the one that orients on the logical, linear, and literal action in the picture, such as the storyline or the characters in the picture, as opposed to the more abstract or conceptual elements. Furthermore, when you hear a word, it is the left side that decodes that word's meaning, as opposed to something that word might remind you of. Overall, the left hemisphere is heavily involved in more reductionistic processes, such as breaking a picture into its constituent parts, as opposed to seeing it as a single and unified whole.

Right hemisphere
Similarly, the right hemisphere is not just the seat of intuition. Perhaps it is more intuitively oriented than the left, but in most cases it also identifies patterns and performs spatial analyses. This hemisphere tends to process information in non-linear ways, looking at the whole instead of all the parts that make it up.

When you look at a photograph or painting and notice the overall pattern or abstract contour of the image, it is your right hemisphere that is being activated. As another example, the right side looks at a spiral and sees a unified spiral pattern, whereas the left side of your brain would see the series of lines making up the spiral and interpret it piece by piece.

If you'd like to take the test, click here.

Sunday, March 25, 2007

Brain Science

The breakthroughs that neuroscientists are making into understanding the human brain are quite fascinating. Just recently, I read about a type of brain scanning technology that can look deep inside a person's brain and read their intentions before they act. The scan is done using a technique called functional magnetic resonance imaging (fMRI), which uses the rate of blood flow to measure neural activity. Once patterns of activity are identified, they are translated into meaningful thoughts using specially-designed software, and a person's intentions can then be revealed before they have been acted upon.

Sound familiar? Steven Spielberg's 2002 movie, Minority Report, dealt with the kinds of problems that may arise with widespread use of such an advanced technology. Neuroscientists are aware of the serious ethical issues over how brain-reading technology may be used in the future, and the recent rapid advances have forced those in the field to set up their own neuroethics society.

Barbara Sahakian, a professor of neuro-psychology at Cambridge University in England, said: "Do we want to become a 'Minority Report' society where we're preventing crimes that might not happen? For some of these techniques, it's just a matter of time. It is just another new technology that society has to come to terms with and use for the good, but we should discuss and debate it now because what we don't want is for it to leak into use in court willy nilly without people having thought about the consequences."

Professor Colin Blakemore, a neuroscientist and director of the Medical Research Council, said: "We shouldn't go overboard about the power of these techniques at the moment, but what you can be absolutely sure of is that these will continue to roll out and we will have more and more ability to probe people's intentions, minds, background thoughts, hopes and emotions."

Neuroscience is still far from developing a scanner that could easily read random thoughts. Currently, the scanning technique can read simple intentions, attitudes or emotional states. The computer learns unique patterns of brain activity or signatures that correspond to different thoughts. It then scans the brain to look for these signatures and predicts what the person is thinking. During a study, the researchers asked volunteers to decide whether to add or subtract two numbers they were later shown on a screen. The volunteers' brain imaging revealed signatures of activity in a marble-sized part of the brain called the medial prefrontal cortex that changed depending on their intention to add the numbers or subtract them. The software was able to predict the volunteers' intentions with 70% accuracy.

A 70% hit rate is better than the 50% expected from random guessing on a two-choice task, but it shows that the system still has some way to go before it can reliably work out which patterns of activity correspond to which thoughts.
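To make the general idea concrete, here is a toy sketch of supervised pattern classification on synthetic "activity" data, using scikit-learn. It only illustrates the principle described above, not the study's actual pipeline; the voxel count, the strength of the signal, and the choice of a linear support vector machine are all assumptions of mine.

    # Toy illustration of reading a two-way intention from activity patterns.
    # Synthetic data; not the method used in the study described above.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Pretend each scan is a vector of 50 voxel activations; the "add"
    # intention adds a weak offset to the first five voxels.
    n_scans, n_voxels = 200, 50
    labels = rng.integers(0, 2, size=n_scans)   # 0 = subtract, 1 = add
    X = rng.normal(size=(n_scans, n_voxels))
    X[labels == 1, :5] += 0.8                   # weak "signature"

    X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With this much noise, the classifier lands comfortably above chance but well short of perfect, which is roughly the situation the researchers describe.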

John-Dylan Haynes, the neuroscientist who led the study, has estimated that research into unspoken intentions could yield simple applications within the next 5 to 10 years, such as reading a person's attitude to a company during a job interview or testing consumer preferences through "neuromarketing".

There are already companies trying to use brain scanners to build a more accurate lie detector. Several recent studies have also used brain imaging to identify tell-tale activity linked to violent behaviour and racial prejudice. Lie detection is more complex, says Haynes, because it can violate mental privacy but also prove innocence. In some cases, refusing to use it to uphold a right to mental privacy could end up denying an accused person's right to self-defence.

Those most excited about this technology will be disabled people, as it has the potential to improve their quality of life. Being able to read thoughts as they arise in a person's mind could lead to computers that allow people to operate email and the internet using thought alone, and write with word processors that can predict which word or sentence you want to type. The technology is also expected to lead to improvements in thought-controlled wheelchairs and artificial limbs that respond when a person imagines moving.

As for using brain scanners to eavesdrop on people's thoughts for the purpose of judging whether they are likely to commit crimes, a crime is only a crime once it's been committed. If governments of the world really want to stop potential law-breakers, they'll need to rewrite the laws so that thinking about crimes is a crime. And then of course they'll have to create a new division in the police force, known as the "Thought Police". Need I say more?

Tuesday, March 20, 2007

Round the Bays 2007

On Sunday I competed in the annual Round the Bays fun run. This was the first time I had done the run. The 8.4 km distance seemed shorter than I had been expecting, and I crossed the finish line with a good time. I wasn't planning to run continuously all the way, but the number of people around me forced me to change my mind. During my training runs, I was running for 4 minutes and then walking for 25 seconds. I think this is a good way to minimise fatigue, and it also gives the legs a chance to rest, reducing the likelihood of injury.

But during the Round the Bays, I felt like I had to run all the way - I didn't want to be overtaken, and I didn't want to lose momentum. I made a decision to follow the words of Satchel Paige: "Don't look back, something might be gaining on you." The only time I slowed down was to grab a cup of water from the stands at the side of the road. I overtook a lot of my fellow competitors, dodging and weaving my way past kids, mothers with prams, a DHL float, a girl in a Wonder Woman outfit...

Considering the distance I was from the start line when the race began, I think I made up a lot of ground. The advantage the people right at the front have over those further back is the biggest downside of this race, but I guess it can't be helped. When the cannon goes off, the people at the front start moving immediately, and run a distance of 8.4 km. However, the people further back have to slowly edge their way through the crowd for several minutes before even getting to the start line. So most of the participants travel further than 8.4 km, and are also not able to move when the cannon sounds, because everyone is tightly packed together like sardines.

The race was due to start at 9:30 am. I arrived at about 9:15, and jostled my way through the crowd so that I got to a position that was probably one-third of the way back. But then there were no more gaps to move through, so I had to stay put. When the cannon sounded at 9:33, none of the people around me moved for about 30 seconds. When we did start moving, we could only slowly shuffle forward until the crowd thinned. Four minutes and 32 seconds later I got to the start line. So I decided to time my run from this point. I ran it in 39 minutes and 21 seconds, but my official time was 43 minutes and 53 seconds.

The guy who won finished in 26 mins. Very impressive. I calculated that he would have been running one km every 3 minutes and 6 seconds. I was doing one km every 4 minutes and 41 seconds. The fastest woman said this was her third year doing the race. In the previous two years she didn't do so well, because she was far back in the crowd. This year she arrived an hour and a half before start time, just so she could be at the front. I wonder if it is worth it, having to wait around all that time just so a better time can be achieved. I guess if you are competitive and really want to win, then it is.
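For anyone who wants to check the arithmetic, here is a short Python sketch of the pace calculation, using the 8.4 km distance and the finish times quoted above:

    # Per-kilometre pace from a finish time over the 8.4 km course.
    def pace_per_km(minutes, seconds, distance_km=8.4):
        total_seconds = minutes * 60 + seconds
        per_km = total_seconds / distance_km
        return int(per_km // 60), int(round(per_km % 60))

    print(pace_per_km(26, 0))    # winner's time -> (3, 6),  i.e. 3:06 per km
    print(pace_per_km(39, 21))   # my net time   -> (4, 41), i.e. 4:41 per km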

So next year I will do what she did and arrive early, to get a good head start on everyone else. And I will aim to improve on my time of 39:21; I think I could probably get it down to around 35 minutes. To do this, I will have to develop a faster stride rate. A faster stride rate means a better time, and the quicker steps will also keep my body closer to the ground, reducing the impact of bounce on my ankles and feet.

Researchers have determined that most elite distance runners have a stride rate of about 180 strides per minute. To check my stride rate, what I'm going to do is go out for a normal run, get into my natural running rhythm, and then time myself for 60 seconds as I count my strides. The simple way to do this is to count each time my right foot hits the ground, then multiply by two. The best way to boost stride rate is to focus on rhythm, stay relaxed, and try to glide over the ground.
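The counting method is simple enough to write down as a few lines of Python. The 85 right-foot strikes in the example below are a made-up figure, just to show the calculation:

    # Stride rate via the counting method described above: count right-foot
    # strikes over a timed interval, double the count, scale to per minute.
    def stride_rate(right_foot_strikes, interval_seconds=60):
        strides = right_foot_strikes * 2
        return strides * 60 / interval_seconds

    rate = stride_rate(85)   # hypothetical count over 60 seconds
    print(f"{rate:.0f} strides per minute (elite benchmark is about 180)")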

I think that being able to chart my progress like this is a great way to keep up my motivation to run. Seeing improvement is a good feeling. To quote Friedrich Nietzsche, "What is good? All that heightens the feeling of power in man, the will to power, power itself. What is bad? All that is born of weakness. What is happiness? The feeling that power is growing, that resistance is overcome."

It's not just the feeling of overcoming resistance that makes runners a happy bunch. Hundreds of studies have been done on exercise-induced neurochemicals, which have been shown to produce feelings of elation, inner harmony, and peacefulness. The "runner's high", once believed to be caused by endorphins, has more recently been attributed to endocannabinoids - substances released during exercise that produce an effect similar to a marijuana high. Also contributing to this state of euphoria is epinephrine (adrenaline) - the surge that comes with getting excited for a race, which can also boost confidence and dull pain. Add serotonin and dopamine, two other feel-good brain chemicals well known for their ability to reduce depression, and you've got a physiological cocktail that can turn a workout into happy hour.

We all have these neurochemicals flowing through us. Some people are able to tap into them and use them on demand, because their brains have developed a high level of neuroplasticity. Neuroplasticity refers to the brain's ability to change its structure and function by expanding or strengthening certain neural circuits while shrinking or weakening others. "Neurotransmitters released during exercise can contribute to neuroplasticity," says neuroscientist Ronald Duman, Ph.D., a professor at Yale University's School of Medicine. "Neuroplasticity within the brain's motivation and reward pathways may play a role in the perception of experiences, including exercise."

Simply put, the way you view exercise determines how motivated you will be to do it. Athletes can think and behave like better athletes by using positive thinking to reshape their brains. Once you begin to think positively about exercise, you just need to concentrate on doing it - and ensuring that it remains a pleasurable experience. Entering a state of flow when exercising - where your brain checks out and your body takes over - is the key to making the experience pleasurable, because it allows you to lose yourself in the moment - time flies, and you are totally engaged.

These days, I'm making a lot of time for exercise. During my lunch break, I make sure I go for a walk. I do Swiss ball exercises every day, which are good for flexibility. Some days I use a rebounder for 15 minutes or so, as jumping is a good exercise for cardio health. I run regularly. This has had a positive effect on my endurance levels. And lastly, when I get the time I also work out on my Total Gym 1000. The 45 minute workout is great for building strength. In conclusion, exercise is good and I will definitely keep at it.

Sunday, March 11, 2007

Real Life as Art

Art often imitates life. There are artists who take this phrase literally and create paintings that look so realistic that they are mistaken for photographs. Some people are not fond of this style of painting, known as photorealism. They feel that it lacks an expression of the artist's inner feelings and experiences. For me, however, paintings that look like photographs are greatly impressive. I like the way they represent life as it is and have no underlying sense of abstraction. I am able to focus on the simple aesthetic beauty of the work and appreciate how much skill must be needed to accurately replicate a photograph using nothing but brushes and paint.

Just over a week ago, I attended a friend's art exhibition. Her work was also an example of art imitating life. But it was not a collection of paintings. It was a little room behind a glass screen, with a desk and chair, biology notes on the wall, a couple of pot plants, a geometric shape made of pencil leads, and a carefully balanced stack of books and CDs. There were a number of other objects strewn arbitrarily around the room. It was this lack of order that prevented me from truly appreciating what this artwork had to offer.

I have now interpreted the work as a metaphor for mental clutter. I would also like to think that it represents the transformation of clutter into clarity. Let me explain. Life may seem random and patternless at times, yet we are still able to exert a certain amount of control over it. We create meaning for ourselves, often by way of creative expression. The revelations that come to us through this creativity help us to fashion order from the chaos. During the exhibition, I found out that the aim of my friend's artwork was to display a workspace where creativity takes place. But I missed the deeper meaning behind the randomly placed objects in the workspace. I now realise that they were there to represent the jumbled mess of thoughts swirling around in a person's head as they try to complete a project of any kind.

However, displaying this intellectually-appealing theme by way of a visual representation is somewhat of a catch-22 situation. I don't know how others feel about this, but I find beauty in patterns and structures. I don't find beauty in haphazard disorder. Visual art is all about appreciating a creation for its form rather than function. The workspace had a function, and that was to exist as a place where chaos is transformed into order. The form through which this function was communicated satisfied my brain, but not my eyes.

Obviously, this opinion won't hold true for everyone. As much as I want to say that artists these days seem unconcerned with creating a work of beauty, it just isn't true. Beauty is subjective. For some art appreciators, a piece is beautiful if it requires interpretation of the emotions the artist felt while working on his/her creation. I'm reminded of the episode of The Simpsons where Homer attempted to build a barbecue pit in his backyard, and ended up creating a deformed mess of metal with an umbrella sticking out one side.

When Homer tries to dump the mangled barbecue, it is somehow brought to the attention of an art dealer in Springfield. She comes to the door of the Simpson home and wants to buy it. Homer tells her, "This isn't art. It's a barbecue that pushed me over the edge." The art dealer, rendered as a sophisticated academic, says, "Art isn't just pretty pictures. It's an expression of raw human emotion. In your case, rage."

Homer becomes an instant art-world celebrity. However, his follow-up artistic attempts are met with disapproval. His initial piece was created spontaneously - perhaps his receptiveness to the path laid by circumstance aligned the stars in his favour. But further success proves elusive. He finds out, as many artists have, that special and evocative artworks are difficult to create. The creation process seems to be all about instincts and reactions, and requires that the artist is in a particularly alert state of mind.

Even if the right factors converge, artists have no way of knowing how the art enthusiasts will react to their work. The more experimental the genre, the greater the uncertainty. Still, artists are always willing to share their creations. And the inspiration their art brings helps other people to inject a little bit of meaning into their own lives.