The Train Wreck of Theism

Yesterday, upon the stair,
I met a man who wasn’t there
He wasn’t there again today
I wish, I wish he’d go away. – Hughes Mearns

Religious experience may be a strong indication of the reality of what we may call God, but the philosophical argument from religious experience comes to grief between a rock and a hard place, the one scientific, the other a point of logic.

The Rock

The God of Classical Theism is, in the end, a synthetic God, an amalgam of the Biblical God, anthropomorphic, deeply personal, for both good and ill, and highly interventionist, with the abstract Greek philosophical concept of an impersonal, ultimate being. It is an unstable construction, yet it has held uneasily together for the best part of 2,000 years. The challenge of science fractures this synthesis, fatally wounding the Biblical, interventionist God and driving the philosopher’s abstract concept into the shadowy half-life of Deism.

The challenge of science is not, as might at first be imagined, the challenge of evolution, with which Fundamentalism has been futilely grappling for close on 200 years, nor yet the more modern challenges of Relativity, Quantum Mechanics or A.I., let alone such things as String Theory or Chaos Theory, which it has yet to consider, but rather something much more fundamental: the challenge to the Christian (and the Jewish and the Muslim) belief that God acts in the world, to answer prayer, to work miracles, to suspend natural, physical laws, to provide providential guidance and to determine events other than through the long chain of cause and effect. Christian (and Jewish and Muslim) faith has ordinarily believed that God has interacted, and continues to interact, with the world in these ways, at least from time to time, both in order to preserve it and, when needful, to alter its course.

Science has usually claimed that on the macro level, the level of human knowledge, experience and observation, events are determined by natural laws, no matter what interpretation we may choose to read into those events. This entails the conviction that all macro-level events may, at least in principle, be thoroughly explained by reference to natural causal principles, one of the most important of which is the law of the conservation of energy. This law states that, in a closed system, the total amount of energy must remain the same: it may change form, but it cannot be created or destroyed.

That being so, how can God be said to act in the world, when any divine action within the cosmos would violate this law by being an illicit addition of energy? Nor can we resort to Aquinas’ pre-scientific argument of primary and secondary causality, with God acting through natural processes, because for that claim to have any meaning whatsoever, we would have to be able to state what would happen differently if God were not intervening. As things follow a regular pattern either way, God cannot meaningfully be said either to intervene or not to intervene, and so the claim is without meaning.
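The constraint can be stated compactly. In its standard textbook form (nothing here is specific to the theological argument):

$$E_{\text{total}} = \sum_i E_i = \text{constant in a closed system, i.e. } \frac{dE_{\text{total}}}{dt} = 0.$$

An interaction may redistribute energy among the parts $E_i$ (kinetic, potential, thermal and so on), but the sum is fixed; a divine act that injected energy from outside the system would make $dE_{\text{total}}/dt \neq 0$, which is precisely the violation at issue.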

The only possible chink of light seems to come from Chaos Theory, but it is a mere chink, not a solution.

There is in chaotic systems an apparent openness or indeterminacy: the ordinary causal principles at work, the interchange of energy between the isolable constituent parts of a physical system, do not seem wholly to determine the system’s future. Two chaotic systems that are at one point virtually identical may end up vastly different. This, the argument runs, indicates that another causal influence is active, as yet unidentified. John Polkinghorne, Professor of Mathematical Physics at Cambridge University and an Anglican priest, has suggested that this additional influence adds not energy but information to the system and, as such, provides a means for God to operate as an informational causality that does not violate the law of the conservation of energy.

However, there is a fatal flaw in Polkinghorne’s thesis: he moves all too rapidly from epistemology to ontology, claiming that our current inability to predict in detail the future states of chaotic systems is, in fact, a function of a genuine indeterminacy within the system. In claiming this, he reveals his bias and attempts to create a gap for God to exploit. The problem with a “God of the gaps”, though, is, as history shows, that the gaps in scientific knowledge are filled in time, and God is, once again, squeezed out. In this case even other Christians active in Polkinghorne’s field do not share his view, but rather hold the more natural theory that chaotic systems are entirely deterministic and that our inability to predict their future accurately lies not in our failure to see God at work, supplying something extra in terms of information, but simply in their being too complicated and too sensitive for us to measure at present – an example of small changes producing large effects. Furthermore, the number of truly chaotic systems in nature is far, far smaller than was once thought, leaving God very little opportunity for intervention. Indeed, even if Polkinghorne were correct in his thesis, and even if chaotic systems were commonplace enough to afford God ample opportunity to intervene in the cosmos, the question would remain: to whom or what would information be conveyed, and how would it be conveyed without energy?
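The deterministic-but-unpredictable point is easy to demonstrate. The following is an illustrative sketch using the logistic map, a textbook toy chaotic system (it is not drawn from Polkinghorne or his critics): two trajectories, evolved by exactly the same rule from starting points differing by one part in ten billion, become completely uncorrelated within a few dozen steps, with no randomness and no added “information” anywhere.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n); chaotic for r = 4.
# Fully deterministic, yet exquisitely sensitive to initial conditions.
r = 4.0
x, y = 0.3, 0.3 + 1e-10  # two almost identical starting points

for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")

# The gap roughly doubles each step, so by around step 40 the trajectories
# are unrelated: unpredictability in practice, with no gap in the causal story.
```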

Without the ability to intervene in miracle and in answer to prayer, to still the storm, walk upon the water, multiply the loaves, raise the dead, part the waves and make the sun stand still, the God of the Bible is dead, and the God of the philosophers shuffles off into the dark recesses of Deism, a God who may, in some sense, have set things going, but has long since lost all interest in the project. If this were our only option, we would have to agree with Richard Dawkins: God is no more than a delusion.

The Hard Place

The other fatal problem for Classical Theism may be summed up simply:

  1. If God exists, then God is, necessarily, omnipotent, omniscient, and morally perfect.
  2. If God is omnipotent, then God has the power to eliminate all evil.
  3. If God is omniscient, then God knows evil exists.
  4. If God is morally perfect, then God has the desire to eliminate evil.
  5. Evil exists.
  6. If evil exists and God exists, then either God doesn’t have the power to eliminate all evil, or doesn’t know evil exists, or doesn’t have the desire to eliminate evil.
  7. Therefore, God doesn’t exist.

The problem is actually as old as the book of Job, and it is not entirely true to say that it is fatal: the issue has been wrestled with for millennia, and the branch of theology termed theodicy is devoted to exonerating God. Its characteristic solution is to say that while God is both all-loving and all-powerful, he, for his own purposes, chooses to allow the existence of evil in pursuit of other ends.

The problem of evil, then, is the problem of reconciling belief in an omniscient, omnipotent, and benevolent God with the obvious existence of evil and suffering. The problem may be described either experientially or theoretically. Experientially, the problem is the difficulty of believing in a loving God in the face of suffering and evil such as pandemics, wars, murders, rapes or terror attacks in which innocent children, women and men become victims. Theoretically, the problem is both logical and evidential.

Originating with Epicurus, the logical argument has been expressed as:

  1. If an omnipotent, omniscient and benevolent God exists, then evil cannot.
  2. Evil exists.
  3. Therefore, an omnipotent, omniscient and benevolent God does not exist.

This argument is logically valid and, if its premises are true, the conclusion follows of necessity.
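Formally, the inference is a simple modus tollens. Writing $G$ for “an omnipotent, omniscient and benevolent God exists” and $E$ for “evil exists”, the argument runs:

$$G \rightarrow \neg E,\quad E\ \ \vdash\ \ \neg G$$

Its validity is therefore not in dispute; the entire debate concerns the truth of the first premise.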

Most philosophical debate has focused on the proposition that God cannot exist with evil, with defenders of theism such as Leibniz and Plantinga arguing that God could not only coexist with evil but actually allow it, if his purpose was to achieve a greater good. This greater good is generally theorised to be the existence of fully autonomous human beings exercising free will. It is far from clear exactly when evolving hominids began to exercise free will, but it was long after the advent of evil and suffering. While the concept of omnipotence certainly does not include God being able to achieve the logically impossible (to create a square circle, or a rock too heavy for him to lift, or a being greater than himself), are we really to believe that he could not have arranged circumstances in which this one species is able to exercise free will – if, indeed, we really do have free will – without the existence of evil?

The purely intellectual puzzle that is the “logical problem”, however, is as nothing compared with the real world contemplation of evil in the “evidential problem”. William Rowe gives the following example: “In some distant forest lightning strikes a dead tree, resulting in a forest fire. In the fire a fawn is trapped, horribly burned, and lies in terrible agony for several days before death relieves its suffering.” Rowe similarly cites the example of an innocent child who suffers as the victim of violence.

The evidential problem of evil demonstrates that the existence of evil and suffering, particularly innocent suffering, lowers the probability of the existence of God to almost nil; what is left, when all is said and done, is but a shell.

Rowe’s thesis, then, is this:

1. There exist instances of intense suffering which an omnipotent, omniscient being could have prevented without thereby losing some greater good or permitting some evil equally bad or worse.

2. An omniscient, wholly good being would prevent the occurrence of any intense suffering it could, unless it could not do so without thereby losing some greater good or permitting some evil equally bad or worse.

3. Therefore there does not exist an omnipotent, omniscient, wholly good being.

There does not exist an omnipotent, omniscient, wholly good being. Between the rock and the hard place this being, the God of Classical Theism, is wrecked. There is nothing left to salvage. Despite anguished apologetics from conservatives (Catholic, Evangelical and Fundamentalist) in denial, and despite his distasteful smugness and “preachy” tone, Dawkins is right. God is a delusion … or, at least, that particular model of God, the synthetic God of Classical Theism, is a delusion. So where do we go from here?

Before we begin to explore an answer to that question, we need to examine the idea of free will, the final bunker of “fortress Fundamentalism”, where, despite the ruins of theism which surround them, Fundamentalists still cling on. Yes, the arguments against their God are great, but God’s hands are tied. God has not created a theoretical, perfect world, but the best possible world in which to cultivate and nurture free will: a world in which, through making choices between good and evil, by enduring this vale of soul-making, a people come into existence who freely choose to love him; so that through God’s weakness, rather than through his power, he ultimately achieves his glorious goal, that he should be freely worshipped and glorified by those who have endured, and that, in heaven, Eden may be restored.

‘the sufferings of millions of the lower animals throughout almost endless time’ are apparently irreconcilable with the existence of a creator of ‘unbounded’ goodness.

— Charles Darwin, 1856

The Dawn of Religion

The Neolithic Revolution

Around 12,000 years ago, after about 65,000 years of the Middle Stone Age, a huge revolution took place, ushering in the Neolithic era, the “New Stone Age”, and marking a complete break for most humans from what had gone before. Its most obvious aspect was the move from a nomadic hunter-gatherer lifestyle, following the movements of game and living in caves and rock shelters, to a settled agrarian lifestyle, living in permanent communities, planting crops and domesticating animals. Quite what triggered this change is a matter of debate amongst academics, but it led, within 10,000 years, to the growth and proliferation of humans from a world population of around 5 million to one of 7 billion. It has been quite rightly dubbed the “Neolithic Revolution.” The traditional hunter-gatherer lifestyle, followed by humans since their earliest evolution, was swept aside in favour of permanent settlements and a reliable food supply, and out of these agricultural communities cities and civilisations grew.

There was no single factor, or combination of factors, common to the take-up of farming in different parts of the world. In the Near East, for example, it is thought that climate change at the end of the last ice age brought seasonal conditions that favoured annual plants like wild cereals. Elsewhere, such as in East Asia, increased pressure on natural food resources may have forced people to find homegrown solutions. But whatever the reasons for its independent origins, farming sowed the seeds for the modern age.

The wild progenitors of crops including wheat, barley, and peas are traced to the Near East region. Cereals were grown in Syria as long as 9,000 years ago, while figs were cultivated even earlier; prehistoric seedless fruits discovered in the Jordan Valley suggest fig trees were being planted some 11,300 years ago. Though the transition from wild harvesting was gradual, the switch from a nomadic to a settled way of life is marked by the appearance of early Neolithic villages with homes equipped with grinding stones for processing grain.

The origins of rice and millet farming date to the same Neolithic period in China. The world’s oldest known paddy fields, discovered in 2007, reveal evidence of ancient cultivation techniques such as flood and fire control.

Cattle, goats, sheep, and pigs all have their origins as farmed animals in the “Fertile Crescent”, a region covering eastern Turkey, Iraq, and southwestern Iran. This region was the birthplace of the Neolithic Revolution, and dates for the domestication of these animals range between 13,000 and 10,000 years ago.

Genetic studies show that goats and other livestock accompanied the westward spread of agriculture into Europe, helping to revolutionise Stone Age society there. It is far from clear to what extent the farmers themselves migrated west, but the dramatic impact of dairy farming on Europeans is written in their DNA. Prior to the arrival of domesticated cattle in Europe, prehistoric populations could not process raw cow’s milk, but at some point during the spread of farming into southeastern Europe a mutation occurred that conferred lactose tolerance and then increased in frequency, through natural selection, because of the nutritious benefits of milk. Given the prevalence of the milk-drinking gene in Europeans today, as high as 90 percent in the populations of some northern countries, the vast majority must be descended from cow herders.
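To see how quickly such an advantage spreads, here is a minimal sketch of the textbook haploid selection recursion; the 5 per cent fitness advantage and the starting frequency are illustrative assumptions, not measured values for the lactase-persistence allele.

```python
# Deterministic single-locus selection: an allele with relative fitness
# 1 + s (against 1) changes frequency as p' = p(1 + s) / (1 + p*s).
def generations_until(target, p0=0.001, s=0.05):
    """Generations for the allele to climb from frequency p0 to target."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

# A rare mutation with a modest 5% advantage reaches 90% frequency in
# under 200 generations -- roughly 5,000 years at 25-30 years a generation.
print(generations_until(0.9))  # ~187
```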

There were other changes too: clothing was still frequently made from animal skins, but bone needles meant that it was increasingly tailored and more sophisticated; furthermore, wool and flax began to be woven and sewn. Tools and weapons were also increasingly sophisticated, a greater variety of game was hunted, and art and ornamentation became more elaborate. Above all, the settled communities began to develop new structures, becoming more hierarchical, developing specialist trades and ruling elites, and becoming patriarchal in the process. This last may have been due to the need to defend settled territories in a way that had not been necessary when Homo sapiens was a nomadic hunter-gatherer: defence required combat skills which, until the development of military technology in more modern times, relied primarily on physical force and therefore, on the whole, favoured males. Settling in community also led to another significant change, the move from spirituality, which is essentially individualistic and personal, to religion, which is corporate and communal. We see this evidenced by the development of megaliths and circle structures, the latter demonstrating a clear link to the lunisolar calendar, being aligned with sunrise and sunset at the solstices. Some of these reveal the remains of feasts and sacrifices, even, in cases such as the Goseck Circle in Saxony, of human sacrifice.

The Development of Religion

Quite what the religion of the Neolithic era consisted of is a matter of conjecture. The earlier spirituality had been shamanistic; the later religion of the Bronze Age was focussed on specific deities who formed a pantheon, as we can see from the inscriptions and literature its peoples left behind. From the Neolithic period, however, there are no literary remains, and the artifacts are open to interpretation. There is suddenly a vast number of Venus figurines; there are also bulls and what are thought to be sun-chariots. What may have been happening at this time is the personification of the numinous and its differentiation into a male and a female principle in reciprocity, a god and a goddess, just as humans and animals are differentiated into male and female.

Certainly this is the case with the early pantheons. In the proto-Semitic, for instance, we have ‘ilu (male) and ‘ilatu (female), the high god and goddess, ‘Attaru (male) and ‘Attartu (female), the fertility god and goddess, Samsu (the sun goddess) and Warihu (the moon god), and so on. Reading this back into the finds from the Neolithic era, the proliferation of Venus figurines probably represents the goddess, personified from the earlier, more magical focus on female fertility (represented abstractly by the sex organs: breasts, hips and vulva), while the bulls probably represent the god, expressing unbridled power, physical strength and virility. The solar chariots meanwhile link to the sun and to the solstices, festivals and timings for planting, lambing, harvesting and so on.

Quite why the mysterious numen, which is intuited and felt rather than formulated, and which is characterised by an unfocussed sense of awe and an at-oneness with nature such as we see in, say, Australian Aboriginal spirituality, should develop into a ritualised devotion to a personal deity is not easy to explain, although studies like that of Guy Swanson of the University of California at Berkeley in 1960, which surveyed 50 “primitive” societies, demonstrate that a correlation does exist between the complexity and size of a community and the likelihood that its gods will be highly developed and moralising. The development may be due to humanity’s natural tendency to personify almost everything: we read into animal behaviour thoughts, motives and emotions which animals are unable to form or feel, and see in unusual circumstances or the coincidence of events patterns and portents of something more. Our natural impulse toward empathy, to see as others see and to feel as others feel, to get inside the other’s mind, may cause us to project personality and purpose onto the universe itself. Or it may be that religion is potentially a powerful vehicle to manipulate, organise, control and direct the community and that, to this end, a message from the gods is a great motivator, as history has shown. Or, then again, it may quite easily be both.

What we see in the Abrahamic religions, is the further development of this process.

The Judaeo-Christian Tradition

During the Iron Age, between the 10th and 7th centuries BCE, ancient Israelite and Judean religion was essentially polytheistic, but with a particular devotion to one or two primary deities, a practice known as henotheism: the recognition, and even worship, of many deities, but with primary devotion focused on a single one. Within the Iron Age Judean and Israelite communities, primary devotion was generally focussed on Yahweh (usually rendered “LORD” in English translations, following the Jewish practice of not speaking the divine name out of respect, but capitalised to show that it is not a translation of אדון (adonai; Greek κύριος, kurios), “lord”, a deferential title). Worship revolved as much around local shrines associated with the Israelites’ legendary prehistory, such as Bethel, as around the central shrine at Shiloh and later at Jerusalem. In terms of practices, sacrificial rituals like Yom Kippur, New Moon festivals, divination using the Urim and Thummim, and prophecy were all common.

Although the Jewish and Christian traditions suggest that Yahweh was the only deity worshipped throughout Israelite and Judean history, archaeology, inscriptions, and even the Hebrew Bible (תנ״ך = Tanach or “Old Testament”) itself all indicate otherwise. However, Yahweh, who may originally have been a wilderness deity, was the god principally worshipped, and was understood to be, in some sense, uniquely, almost tangibly present in the Jerusalem temple in the form of his Shekinah (glory/radiance), to have a spiritual body, and to be personal, or even a person, with purposes, emotions and willpower, who communicated, judged, rewarded and punished. Furthermore, ancient Israelite and Judean religion shared the common belief that Yahweh was to be worshipped in ritual purity (holiness) and that worshippers were required to maintain the temple’s holiness in order to ensure that the deity would continue to live in its Holy of Holies. To this end, sacrifices, offerings, and liturgy were offered to him, and a strict holiness code was rigidly enforced.

Before a centralised state began to take shape, people in Syria-Palestine practiced a form of family religion. Literature dating from the 14th century BCE, the Tel el Amarna letters, together with inscriptions from throughout Syria-Palestine, demonstrates this to be so. The data is, however, fragmentary: it is as though we have 400 pieces of a 2,000-piece jigsaw puzzle. Yet, when we connect the puzzle with other historical sources, it becomes clear that family religion was the norm at the time Israel and Judah began forming their national identity. It is entirely possible that families honoured their ancestors in verbal rites and the presentation of offerings; certainly there is evidence that they focused their religious devotion on the ‘god of the father’ or the ‘god of the house’ (i.e. “the God of Abraham and the Fear (פחד = pachad) of Isaac”). In so doing, they anchored their collective identity in their lineage and place of origin. This was the context in which the proto-Israelite religion began to develop. Assuming the Hebrew Bible reflects this proto-Israelite religion, scholars believe that it shows that rituals were performed in honour of the deceased (1 Samuel 20), with the entire clan gathered for a communal meal, eating meat together on the inherited ancestral land where the remains of its ancestors were buried.

Outside of the Hebrew Bible, one of the best examples of ancient Israelite and Judean religion comes from an archaeological site called Kuntillet ‘Ajrud, dating from as early as the 10th century BCE. One inscription from this site reads, “to YHWH of Samaria and to Asherata.” Another reads, “To YHWH of Teman and to Asherata”. Both of these inscriptions demonstrate that some ancient Israelites and Judeans were certainly not monotheistic in the practice of their religion, but henotheistic. YHWH (יהוה), read as Yahweh, was the primary tribal deity, and is best known from the Hebrew Bible. Asherata, also known as Asherah, was a deity within the Ugaritic pantheon, and is also a common figure in the Hebrew Bible. We can therefore confidently say that, on the spectrum of how people in ancient Israel and Judah practiced religion, Asherah and Yahweh were both honoured in cults, with priority tending to be given to Yahweh.

An inscription from another site (Khirbet el-Qom, 8th century BCE) reads: “Blessed is Uriahu by YHWH for through Asherata he has saved him from his enemy.” Here, we see strong evidence that Asherata, a deity, represented a person named Uriahu before Yahweh. In Ugaritic literature, we see a similar understanding of the deities. The Ugaritic goddess Athirat was a mediator for El, the chief god of the Ugaritic pantheon. The parallel in how people understood deities (Yahweh is to Asherata as El is to Athirat) demonstrates how ancient Israel and Judah shared a cultural and religious framework with the broader West Semitic culture; yet, they were also unique in the sense that they worshipped a particular deity who uniquely represented their tribal system.

Other examples may be found in the Hebrew Bible itself. In Psalm 82, for example, Yahweh stands in the council of El, the high deity of West Semitic mythology. Yahweh accuses the other deities in the council (the word Elohim (אלוהים), generally translated “God” in English Bibles, is actually a plural noun, “gods”; it is not a plural of majesty, as some Evangelical scholars claim, but the technical, collective noun used for the council of the gods in West Semitic culture) of not helping the poor and needy. In other words, the other deities had failed to fulfil their duties as deities. As a result, El strips them of their divine status and commands Yahweh to rule over the nations. In this piece of poetic theatre from Judah and Israel, we have an example of a tradition in which other deities exist within the pantheon and yet Yahweh takes the central role. Yet even in this piece of Yahwistic triumphalism, in which the god of Israel and Judea is pictured as supplanting the gods of the nations, he does so not by asserting his intrinsic right over them, but by being appointed to that role by a superior deity. Even in asserting Yahweh’s superiority, the writer underscores the henotheistic reality of the prevailing culture.

Narrative in the Hebrew Bible tells a similar story. In 1 Kings 16:33, King Ahab makes a shrine for Asherah. 2 Kings 17:16 refers to people who worship Asherah and Ba’al and Ba’al worship occurs consistently throughout the narrative, suggesting that this god played a large part in the belief of the Israelite population during the Iron Age.

Interestingly, one of the earliest translations of the Hebrew Bible, the Septuagint (LXX), renders Deuteronomy 32:8: “When the Most High was apportioning the nations, as he scattered Adam’s sons, he fixed the boundaries of the nations according to the number of divine sons“. “Most High” is a reference to El, and in this verse El is said to assign nations and people groups to his divine sons, that is, to the deities of the West Semites. In this verse, Yahweh is assigned to Israel, and other deities to other peoples. Thus even the Hebrew Bible itself reflects the henotheism of ancient Israel.

As these inscriptions demonstrate, worship of deities other than Yahweh seems to have been a regular part of life for the people of Israel and Judea before the exile. Throughout the Hebrew Bible, Yahweh is always presented as the deity people ought to worship, but these inscriptions, together with the texts of the books of Psalms, Kings and Deuteronomy, show that this was not the practice: henotheism was the norm for ancient Israelites and Judeans. In other words, the Hebrew Bible does not accurately reflect how people actually practiced their religion in ancient Israel and Judea. The reason lies in the fact that the Hebrew Bible was substantially edited, and even in part written, in the period spanning between what is usually characterised as the “Reformation” under King Josiah, but is in reality better described as the “reinvention” of Judaism following the triumph of the Yahwistic party, and the Maccabean Risorgimento. So that, although it clearly preserves narrative and poetic fragments from far earlier traditions, the theological beliefs and practices of the post-exilic period, influenced as they were by the strong monotheism of the Zoroastrianism encountered in exile, have been read back into the nation’s past. Israel’s monotheistic history is, therefore, an example of the victors re-writing history, or rather, of inventing it.

Monotheism became the hallmark of post-exilic Judaism and is its gift to both of the other Abrahamic religions, Christianity and Islam. As what we know as Classical Theism, it is both the understanding of God found in the Bible and the model of God championed by philosophy. However, while religious experience may be a strong indication of the reality of something we may characterise as “God” in some sense, as a philosophical argument for the existence of the God of Classical Theism the argument from religious experience fails in the same way that all other arguments for his existence fail, foundering, like a ship caught between Scylla and Charybdis, on two hazards: the one scientific, the other a point of logic.

Spirituality

Australian aboriginal spirituality

So what can we guess about the spirituality of early Homo sapiens? We have already seen that studies of the contemporary spirituality of the southern African San Bushmen seem to demonstrate a correspondence to the artistic remains of the culture which centred on the Blombos Cave, and may also show a similar correspondence, to some degree, to the various cultures that produced cave art in Europe, and even to the Shamanism of the Siberian steppes. They seem to demonstrate an underlying, common worldview which exists in primitive cultures despite superficial differences in its expression in myth and sacred story. It is an approach to life, to being, to the numinous, to the awe-inspiring wonder at the heart of our existence: a sense of oneness with, and creaturely dependence on, that which is at once entirely other than us and yet in which we and all things participate. It seems to some degree to have involved altered states of consciousness and, in each case, a sense of interdependence with an animated natural world, and of communion with it. We see it in other stone age cultures, the two best known of which, albeit now impacted by the modern world, are those of the Australian Aborigines and the Native Americans.

Australian Aboriginal spirituality, according to the Aboriginal writer, Mudrooroo Nyoongah, “is a feeling of oneness, of belonging”, a connectedness with “deep innermost feelings”. (“Us Mob” 1995)

For the Yankunytjatjara people from South Australia, everyone is responsible for everyone else. There is a principle of connectedness that underpins Aboriginal life. Because of this interconnectedness (Kanyini) the individual focusses, not on themselves, but on the community: “We practise Kanyini by learning to restrict the ‘mine-ness’, and to develop a strong sense of ‘ours-ness’,” writes Aboriginal Elder, Uncle Bob Randall, in ‘Songman: The Story of an Aboriginal Elder of Uluru’, Bob Randall, ABC 2003. He continues: “We do not separate the material world of objects we see around us with our ordinary eyes, and the sacred world of creative energy that we can learn to see with our inner eye. …. we work through ‘feeling’, what white people call intuitive awareness … white people, separate things out, even the relationship between their minds and their bodies, but especially between themselves and other people and nature … (and) spirit.”

Everything is interconnected: people, plants, animals, land forms, sun, stars and moon. Everything is related to everything else. “Our spirituality is a oneness and an interconnectedness with all that lives and breathes, even with all that does not live or breathe.” Mudrooroo (op cit).

A second important aspect of Aboriginal spirituality is that everything is alive, as Professor Jakelin Troy, a Ngarigu woman from the Snowy Mountains of New South Wales, explains. “All elements of the natural world are animated. Every rock, mountain, river, plant and animal all are sentient (able to perceive or feel), having individual personalities and a life force.” ‘Trees are at the heart of our country – we should learn their Indigenous names’, The Guardian 01/04/2019

“It’s an aspect common to many indigenous philosophies which has some scientific support, at least as far as plants are concerned.” ‘Plants can see, hear and smell – and respond’, BBC Earth, 10/01/2017

This relationship to, and interconnectedness with, everything is expressed in sacred stories. These creation stories describe how the activities of powerful creator ancestors shaped and developed the world as people know and experience it. This sacred Aboriginal oral literature, known as Dreamtime or Dreaming stories, is expressed in performances within each of the language groups across Australia.

What both Mudrooroo and Uncle Bob Randall are referring to when they use terms such as ‘feelings’, ‘inner eye’ and ‘intuitive awareness’ are ‘things’ that cannot be defined by words and thoughts because they are beyond the mind. We can only start to understand them by a process of negation, by understanding what they are not. The first step is to understand that for the Aboriginal, the land is part of their being; it is family. Aboriginal spiritual beliefs are intimately associated with the land they live on. Theirs is not so much a the-ology as a ge-ology; it is earth-centred rather than God-centred. The earth is “impregnated with the power of the Ancestor Spirits”, which Aboriginal people draw upon. They experience a connection to the land, and to the entirety of nature associated with it, that is alien to white people. A key feature of Aboriginal spirituality is to care for and protect the land: it is an obligation which has been passed down as law for thousands of years.

Land joins the commonly cited triad of mind, body and spirit: “To recover our wellbeing, we have to pay attention to all four dimensions of our being: mind, body, spirit and land … spirituality is about tapping into the still places I go to when I’m on country and I feel like I’m part of all the things around me” Uncle Bob Randall (op cit). Land is seen as a member of the family, and through silent communion the Aboriginal is connected with the land and in that spiritual contact held in their proper place. “We don’t own the land, the land owns us. The land is my mother, my mother is the land. Land is the starting point to where it all began. It’s like picking up a piece of dirt and saying this is where I started and this is where I’ll go. The land is our food, our culture, our spirit and our identity.” S. Knight, Our Land Our Life, Aboriginal and Torres Strait Islander Commission, Canberra, 1996. This implies that, for the Aboriginal, not only humans and animals but plants, and even rocks, have a soul.

Finally, an Aboriginal person’s soul or spirit is believed to “continue on after our physical form has passed through death”, Mudrooroo (op cit). That is to say that, after the death of an Aboriginal person, their spirit returns to the Dreamtime from where it will return through birth as a human, an animal, a plant or a rock. The shape is irrelevant, because each form shares the same soul or spirit from the Dreamtime.

“‘The Dreaming’ or ‘the Dreamtime’ indicates a psychic state in which, or during which, contact is made with the ancestral spirits, or the law, or that special period of the beginning.” Mudrooroo (op cit).

Aboriginal spirituality does not consider the ‘Dreamtime’ as something past; in fact, not as a time at all. Time refers to past, present and future, but the ‘Dreamtime’ is none of these. The ‘Dreamtime’ is there with the Aboriginal, an ever-present reality, not something a long way away. It is the environment that the Aboriginal lived in, and still lives in today. Because of this, it is more appropriate to translate the idea as ‘Dreaming’ rather than the more familiar ‘Dreamtime’, as this better expresses the timeless concept of moving from ‘dream’ to reality, which is in itself an act of creation and the basis of many Aboriginal creation myths. None of the 250 Aboriginal languages contains a word for time.

The Dreaming also explains the creation process. Aboriginals believe that ancestor beings rose and roamed the initially barren land, fought and loved, and created the land’s features as we see them today. After creating the ‘sacred world’, the spiritual beings turned into rocks or trees or a part of the landscape. These became sacred places, to be seen as such only by initiates. The spirits of the ancestor beings are passed on to their descendants, the shark, the kangaroo, the honey ant, the snake and hundreds of others, which have become totems within the diverse Aboriginal groups across the Australian continent. Each person shares the spirituality of the ‘Dreaming’. Aboriginals believe that during family or tribal ceremonies, while in a trance-like, dreaming state, the ancestral form seizes them and they then connect with the ancestral being. Dreaming gives the individual Aboriginal identity and identifies for them which other people are related to them as close family, through sharing the same Dreaming.

Native American spirituality

This same animist respect for the environment, expressed as an ecological wisdom and spirituality by Native Americans, is legendary. Sadly, the much celebrated speech of Chief Seattle, beloved of environmentalists and new-agers, is actually the product of a 1970s scriptwriter, not a 19th-century Native American. However, while not authentic, it picks up on themes from speeches that were. For Native Americans, animals were respected as having equal rights to humans. Of course they were hunted, but only for food, and the hunter first asked permission of the animal’s spirit. Among the hunter-gatherers the land was owned in common: there was no concept of private property in land, and the idea that it could be bought and sold was repugnant.

Myths and beliefs varied between tribes, as did ritual practices, and yet there was a widespread belief in a Great Spirit who not only created the earth but also pervaded everything. Their belief was at once animistic, in common with the Shamanism we have been considering, and panentheistic (rather than pantheistic). It linked an animism which saw kindred spirits in all animals and plants with a belief in a Great Spirit, the spirit of the universe, a God entirely immanent within the creation wrought from its own being.

The Native Americans viewed the white man’s attitude to nature as the polar opposite of theirs. The white man seemed bent on destroying not just the Native Americans, but the whole natural order, felling forests, clearing land and killing animals purely for sport.

Not everything that every tribe did was earth-focussed. The Anasazi of Chaco Canyon probably helped to ruin their environment through deforestation. In the potlatch the Kwakiutl regularly burned heaps of canoes, blankets and other possessions simply to prove their superiority to their neighbours, an archetypal example of wanton overconsumption for status. Even the plains Indians often killed far more bison than they needed, in drives of up to 900 animals. There was folly and vanity as well as wisdom. That wisdom was derived from a way of life, and was as fragile as nature, as their way of life began to change, so too their beliefs changed. The introduction of agriculture brought shifts in attitudes to nature. Contact with the white settler changed it further. From the 17th century, it was a way of life and a system of belief that was dying.

Its epitaph was written by Chief Luther Standing Bear in Land of the Spotted Eagle, Houghton Mifflin, Boston & New York, 1933.

“The Lakota was a true lover of nature, loving the earth and all things of the earth, the attachment growing with age. The old people came literally to love the soil and they sat or reclined on the ground with a feeling of being close to a mothering power. It was good for the skin to touch the earth and the old people liked to remove their moccasins and walk with bare feet on the sacred earth. Their tepees were built upon the earth and their altars were made of earth, and it was the final abiding place of all things that lived and grew. The soil was soothing, strengthening, cleansing and healing. Kinship with all creatures of the earth, sky and water was a real and active principle. For the animal and bird world there existed a brotherly feeling that kept the Lakotas safe among them and so close did some of the Lakotas come to their feathered and furred friends that in true brotherhood they spoke a common tongue.

The Lakota knew that the human heart away from nature becomes hard, that lack of respect for growing, living things soon led to lack of respect for humans, too. So he kept his youth close to its softening influence.

In the Lakota, the spirit of the land is vested; it will be until other men are able to divine and meet its rhythm. Men must be born and reborn to belong. Their bodies must be formed of the dust of their forefathers’ bones.

The Lakota was kin to all living things and he gave to all creatures equal rights with himself. Everything of earth was loved and reverenced.”

This worldview is common not in any uniformity of its myths and sacred stories, but in its sense of interdependence as a part of creation; common in its appreciation of the mystery of life, of being, of the fragility of existence; common in its sense of wonder in the face of the numinous, and of creaturely dependence on that which is at once entirely other than us and yet in which humans and all things participate. This would seem to have been the spirituality of humanity for around 65,000 years, from the rise of archaic Homo sapiens until the dawn of the Neolithic Revolution and the birth of religion.

Shell Daubers To Worshippers

The difficult task facing us at this point is to discover what happened to transform shell decorators into worshippers in any recognised sense. Difficult, because we can only make conjectures based on evidence which is limited to scanty remains from the period, in the form of art, and contemporary studies of primitive societies that seem to parallel those which produced that art. It is an exercise in educated guesswork.

We have already discussed the etchings in the Blombos Cave in South Africa, which probably date from 77,000 years ago, but may quite easily be even older. This discovery represents the oldest human art. It is abstract and while we can see that it was deliberate, we cannot say with certainty what it represents or why it was produced. It could as easily be a primitive accounting system as it could be a piece of art.

The Dawn of Art

What we do know about the finds at Blombos Cave is this: over 8,000 pieces of ochre have been found, dating back to the Middle Stone Age, deliberately shaped for a particular use; some of these are engraved with the abstract designs already mentioned. Of these, the most famous are two ochre pieces discovered in 2002 that have finely engraved cross-hatched designs and parallel incised lines running across them. Six more of these geometrically carved ochre pieces were found in 2009, and even more of this style of pattern has been found on engraved bone within the cave. In addition, a processing workshop for this ochre was uncovered in 100,000-year-old areas of the cave in 2008, where liquefied ochre was found stored in abalone shells. Whatever its purpose, this deliberate abstract representation (art, for want of a better word) was going on in this location for anything up to 20,000 years. Whatever was happening, it was important enough to take these early humans away from the daily struggle for survival for a significant amount of time.

The Blombos Cave Etchings

More recently, around 66,000 years ago in the Western Cape, South Africa, another group of early humans, living in the Diepkloof Rock Shelter, left 270 fragments of containers made from ostrich eggshells, believed to have originally been water containers. These eggshell containers once again had engravings on them in a geometric style, with abstract linear patterns that repeat across the fragments. The similarities in the patterns on the various eggshell fragments suggest that there were set rules for composing these designs, but that outside of those guidelines creative liberty could be taken. It is also worth noting that, due to the texture and hardness of ostrich eggshells, these engravings would not have been at all easy to make.

Abstract art, then, was established in human society between 60,000 and 100,000 years ago. It may have been no more than pleasing to the eye, of as much cultural significance as the lid of a biscuit tin, or it may have had a deeper significance, like the design on a willow pattern tea service; but it was deliberate, time-consuming and required skill. However, between 40,000 and 60,000 years ago, things changed.

Cueva de El Castillo in Cantabria in Spain houses some of the world’s oldest representational cave paintings, from 40,800 years ago. The oldest is a simple red disk shape; other paintings in the cave are hand stencils, both positive (made by painting the hand and touching the wall) and negative (made by placing the hand and forearm onto the wall and blowing paint through reeds or bone tubes around it), and claviforms (like the ace of clubs in a deck of cards), which are 37,300 years old. There are also more modern images in charcoal and red ochre spanning the cave’s ceilings, dating back through the Upper Palaeolithic Age and showing that the site was in continued use for thousands of years. Identical disks and hand stencils have been found in Sulawesi in Indonesia dating from 40,000 years ago. The disks were made by blowing paint onto the wall and have been dated at 40,000 years old by Indonesian and Australian scientists, who were able to estimate this from the ages of the stalactites that had formed on top of the paintings. The hand stencils in Sulawesi were also dated in the same way, to 39,900 years ago, making them the earliest hand stencil cave art in the world.

The researchers responsible for dating most of the art in Cueva de El Castillo were a team of British, Spanish, and Portuguese experts led by Dr Alistair Pike of the University of Bristol. They similarly dated the paintings by dating the minerals and stalactites covering them, as they could not use the traditional method of radiocarbon dating due to a lack of organic pigment in the cave. This method gave them the minimum age of most of the cave art, and where larger stalagmites had formed they were also able to obtain maximum ages.
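The uranium-series dating on which such minimum ages rest follows a simple principle; the formula below is the idealised, simplest form (real analyses correct for detrital thorium and for disequilibrium among the uranium isotopes). Freshly deposited calcite takes up uranium but no thorium, and $^{230}$Th then grows in at a known rate, so the measured ratio dates the crust, and anything painted beneath it must be older:

$$\frac{^{230}\mathrm{Th}}{^{234}\mathrm{U}} \approx 1 - e^{-\lambda_{230}t} \quad\Longrightarrow\quad t \approx -\frac{1}{\lambda_{230}}\,\ln\!\left(1 - \frac{^{230}\mathrm{Th}}{^{234}\mathrm{U}}\right),$$

where $\lambda_{230}$ is the decay constant of $^{230}$Th: with a half-life of about 75,000 years, $\lambda_{230} = \ln 2 / 75{,}000 \approx 9.2 \times 10^{-6}$ per year.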

The more popular idea of cave art: representations of bison, deer, mammoths and other animals in hunting scenes developed a mere 2,000 – 3,000 years later, as evidenced by the finds in the Chauvet-Pont-d’Arc Cave in Ardèche in France, arguably the most significant prehistoric art site in the world because of the large collection of the best preserved cave paintings ever discovered.

Within the cave are hundreds of paintings of at least 13 different species of animal, both herbivorous and predatory. Other paintings include red ochre hand stencils, abstract lines and dots, a “Venus” figure of a vulva and legs, and two unidentified images that may have ritual significance. There is also a painting that appears to show a volcano erupting lava, the earliest known depiction of a volcanic eruption. The paintings were radiocarbon-dated in 2016 to two periods, one between 37,000 and 33,500 years ago and the other between 31,000 and 28,000 years ago.

Roughly contemporary with the midpoint between the two periods of painting at the Chauvet-Pont-d’Arc Cave are more recently discovered paintings at the Timpuseng Cave in Sulawesi, Indonesia (referred to previously), where, in addition to the disks and hand prints, there is a depiction of a babirusa, or pig-deer, dated at 35,400 years old; and in the Coliboaia Cave in Romania, dated to 35,000 years ago, there are a bison head and a rhinoceros head in black charcoal, together with drawings of humans.
Also found at sites like these are many small carved and engraved bone or ivory (less often stone) pieces, dating from the same periods, of similar large animals.

The most common subjects in cave paintings and carvings are, then, large wild animals such as bison, horses, aurochs, and deer: species suitable for hunting by humans, yet not necessarily the prey found in the associated deposits of bones. At Lascaux, for instance, the community predominantly left reindeer bones, but reindeer do not appear in the cave paintings, which are mainly of horses. It has been suggested that the paintings of the hunt are anticipatory, perhaps an invocation or “prayer” for a successful kill of the animal portrayed, whereas the remains show what the community actually killed in the hunt; or it may be that the remains represent their typical diet and the pictures what they aspired to kill for a particular celebration.

The paintings of humans, unlike those of the animals, lack realism and are more representational, as though there were some taboo about drawing people. If the paintings and figurines were thought somehow to give power over the animal portrayed, to enable it to be successfully killed, it would make sense not to portray humans in the same way. “Venus” figurines have also been found, which have no real equivalent in cave paintings. Most of these date from 26,000 to 21,000 years ago, but there are examples, such as the “Venus of Hohle Fels”, which date back at least 35,000 years. These figurines were carved from soft stone, bone or ivory, or formed of clay and fired. In total, around 140 or so are known. They generally portray a woman with wide hips and legs that taper to a point; arms and feet are often absent, and the head is usually small and faceless. They exaggerate the abdomen, hips, breasts, thighs and vulva, hence their being named after the Roman goddess of love, Venus. In this case the greater realism, compared with the depiction of hunters in the paintings, may well be associated with gaining power over another person, though for fertility rather than subjugation and victory: hence the exaggerated vulva, hips and breasts.

If these interpretations are correct, then religion is already present in these communities, though it is likely that it would be somewhat impersonal and magical, seeking to manipulate the forces of nature or even to obtain the benevolence of the ancestors, whose physical bodies had, after all, returned to the dust of the earth.

Entering the Tomb with the Ancestors?

Why were the hand prints made? One suggestion is made by David Lewis-Williams, Professor Emeritus of Cognitive Archaeology at the University of the Witwatersrand, who is best known for his research on southern African rock art. His theory, based on ethnographic studies of contemporary hunter-gatherer societies, is that the paintings were made by palaeolithic shamans who penetrated into the darkness of tomb-like caves deep in the earth, entered a trance state while there, and then painted the images they saw in their visions, with the intention of drawing power out of the earth itself through the cave walls. This is the current practice of the Ju/’hoansi in Botswana and Namibia, a society at a similar stage of development to the Mesolithic people responsible for the southern African rock art.

The shamanic world has several tiers of reality inhabited by spirits that can be accessed through altered states of consciousness. The world inhabited by humans is one reality, but it is supplemented by others that are understood to exist beyond it. Shamans have the ability to mediate between these other worlds. For the San, other realms could be accessed during altered states of consciousness, at the rock faces where rock art can be found. Everything created on the rock face was, in some way, associated with the spirit world.

More significantly, the collection of San ethnography shows that trance lies at the core of San belief. Metaphors for death are contained in the trance-dance: as the San shamans dance, their supernatural power builds up until it reaches a breaking point at which their spirit flies out of the body and travels to another reality where spirits dwell, in the same way as the soul is believed to travel after death.

The trance dance correlates with the symbolism in the rock art. Even though the mythology, the sacred story, differs, the parallels are there at a deeper level. Lewis-Williams has also studied the Palaeolithic rock art of France since the 1970s and argues that there are parallels between San rock art and French Palaeolithic rock art.

It seems, then, that everything was seen as part of one unified whole; that the earth, and the forces of nature associated with it, were seen as sources of power and fecundity; and perhaps also, because the dead decayed to dust and so became part of the earth, as repositories of the power, skill and wisdom of the ancestors too. It also seems that this power may have been thought capable of being harnessed or accessed in some way, and that this power and wisdom could be drawn from the earth.

Whether the shamanic model is entirely correct or not, we certainly catch a glimpse of the deep awe early humans felt as they contemplated the numinous they believed permeated their world, and something too of their perceived need to reach out to something beyond themselves as they faced the exigencies of life. They knew themselves to be, to exist as subjects of their experience; they knew others like themselves to have being, and that they all existed together and were mutually dependent on each other; but they also experienced something else: that they all existed as parts of an all-inclusive whole, the reality on which they all depended absolutely and which, in its way, relatively, depended on them; something entirely other than them, something that created that sense of contingency in them and to which they must somehow relate. It is out of these certainties that the religions were to arise.

The Birth of Humankind

The Problem With Creationism

For years I, like other Fundamentalists, held to a simplistic and literalist interpretation of the book of Genesis, as though it were a scientific and historical textbook explaining both how the universe came about and how humankind developed, something I justified by extended special pleading. I felt increasingly uncomfortable doing so, but was trapped in a theological cul-de-sac. So much of my theological superstructure rested on these somewhat doubtful foundations that I could not bring myself to examine them carefully. I did not have the time or the energy; I was on the treadmill of parish ministry. As a student I had felt a sense of embarrassment, almost shame, at being a creationist. Certainly the literature did not seem credible (https://answersingenesis.co.uk/store/product/it-trueevidence-creation/?sku=00-1-136), indeed much of it was almost laughable, but it raised challenging questions (how could a fish, which breathes through its gills, develop into something that could breathe out of water without suffocating in the process? Yes, ok, I know about mudskippers, I’ve even seen them, but back then the question seemed compelling).

My problem lay with Adam: if Adam was not a special creation, but simply a mythical first human being, where did that leave original sin? Paul, in 1 Corinthians 15:20-22, had written, “the truth is that Christ has been raised from death, as the guarantee that those who sleep in death will also be raised for just as all people die because of their union with Adam, in the same way all will be raised to life because of their union with Christ.” Without Adam’s fall, there was no need for the atonement, and Jesus’ death became pointless. And there could be no historical fall without an historical Adam. To safeguard the Gospel, I had to reject evolution. This view had been passed to me by my father and I was charged, like Atlas, to hold it firm, but all the while, doubt stood at my shoulder, whispering.

Others, I know, have no such doubts at all; theirs is a blithely confident faith. To quote from a recent publication:

“… there is no reason to believe that the universe and the earth in particular are billions of years old … as many astronomers and geologists insist. A real creation would of necessity require that some aspects of the universe would have come from the hand of its Creator with an appearance of age. For example, Adam in the very hour he was created would have appeared to be a mature man of some years. Then the geological upheaval at the time of the Flood (see Gen. 7:11; 2 Pet. 3:6) could also account for much of the geologist’s “evidence” for an ancient earth … we simply cannot discover the age of the earth or of man on the basis of evidence … but the tendency of Scripture, limiting the known gaps … to tens and hundreds of years and not thousands and millions of years, seems to be toward a relatively young earth and a relatively short history of man to date.”

– Robert Reymond, “A New Systematic Theology of The Christian Faith”

But such a blithely confident faith is only possible if we ignore the genre of the literature on which this model is based, adopt a wilful obscurantism towards the findings of science (ignoring in the process the overwhelming consensus of the scientific community since Galileo in the 17th century), and credit the Bible with both supernatural authorship and its attendant infallibility. A recent study by biochemists at Brandeis University, using a computer model to calculate the probability that all forms of life on Earth developed from a single common ancestor, also tested the Fundamentalist model (that humans arose in their current form and have no evolutionary ancestors) and found that, statistically, the probability that humans were created separately from everything else is 1 in 10 to the 6,000th power. It is beyond improbable. The Fundamentalist model simply does not stand up to any degree of scrutiny or take into account clearly observable facts (hence Reymond’s assertion about not being able to rely on evidence). Indeed, the picture which emerges from consideration of the evidence is that the universe began with the “Big Bang” as an unimaginably hot, dense point, which, when it was just one hundredth of a billionth of a trillionth of a trillionth of one second old, experienced an incredible burst of expansion in which space itself expanded faster than the speed of light. During this period, the universe doubled in size at least 90 times, going from subatomic scale to the size of a golf ball almost instantly. As space expanded, the universe cooled and matter formed. One second after the Big Bang, the universe was filled with neutrons, protons, electrons, anti-electrons, photons and neutrinos.

From Big Bang to Bacteria

For 380,000 years or so, the universe was too hot for light to shine. The heat of creation smashed atoms together with enough force to break them up into a dense plasma that scattered light like fog. Roughly 380,000 years after the Big Bang, matter cooled enough for atoms to form, setting loose the initial flash of light created during the Big Bang, which is still detectable today as the cosmic microwave background radiation. About 400 million years after the Big Bang, during a period which lasted more than half a billion years, clumps of gas collapsed enough to form the first stars and galaxies. A little over 9 billion years after the Big Bang, our own solar system was born. By dating the rocks in Earth’s crust, as well as the rocks in Earth’s neighbours, such as the moon and visiting meteorites, scientists have calculated that Earth is 4.54 billion years old, with an error range of 50 million years.
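As an aside, the logic of such rock-dating can be made concrete. Radiometric methods rest on the exponential decay law: for a mineral that has remained a closed system, its age can be recovered from the ratio of daughter to parent isotopes it now contains, via t = ln(1 + D/P) / λ, where λ = ln(2) / half-life. The following minimal Python sketch illustrates the arithmetic only; the uranium–lead system is real, but the measured ratio used here is a hypothetical figure chosen for the example, not data from any particular study.

import math

def age_from_ratio(daughter_to_parent, half_life_years):
    # Closed-system decay: N(t) = N0 * exp(-lambda * t), with D = N0 - N(t).
    # Rearranging gives t = ln(1 + D/P) / lambda, where lambda = ln(2) / half-life.
    decay_constant = math.log(2) / half_life_years
    return math.log(1.0 + daughter_to_parent) / decay_constant

# U-238 decays (through a chain) to Pb-206, with a half-life of about 4.468 billion years.
# A hypothetical measured Pb-206/U-238 ratio of 1.02 implies:
print(age_from_ratio(1.02, 4.468e9))  # ~4.53e9 years

Run as written, this yields roughly 4.5 billion years, in line with the figure quoted above; real uranium–lead dating cross-checks two independent decay chains and corrects for any lead initially present, refinements this toy calculation ignores.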

The environment on the early Earth was volatile and hostile to life, yet life somehow began here. We might ask how this could possibly be, yet microbial life forms have been discovered that can survive and thrive at extremes of both high and low temperature and pressure, and in conditions of acidity, salinity, alkalinity, and concentrations of heavy metals that would have been regarded as lethal even a few years ago. These discoveries include the wide diversity of life near sea-floor hydrothermal vents, where some organisms live on chemical energy in the absence of sunlight. This demonstrates how life could have begun in that hostile primordial soup. Fossil evidence suggests life emerged some time prior to 3.7 billion years ago, and perhaps as early as 4.1 to 4.28 billion years ago. As mentioned above, the similarities among all known present-day species indicate that they have diverged through the process of evolution from a common ancestor. This common ancestor developed into the three domains which form the branches of the evolutionary tree: archaea, bacteria and eukaryotes. Animals are, essentially, multicellular eukaryotes, and it is from these eukaryotes that animal life developed, to jellyfish and thence to vertebrate fish, amphibians, reptiles, mammals and on and on to the earliest hominids. The story is one of continual development and adaptation over the millennia. But it is with the rise of the hominids that we begin to approach “God”, for while consciousness of self and of the other is present in such animal life as octopuses, it is four very specific cognitive developments in the brain which set the hominids apart, and these developments were necessary for the prehension of “God”.

The Rise of Humankind

If we begin with Homo habilis, about 2 million years ago, we find hominins progressively experiencing a significant increase in brain size and general intelligence. Mammals (and, obviously, their brains) had been evolving for 200 million years before that time. For the first 140 million years of their existence, mammals were insignificant creatures living in the nooks and crannies of a dinosaur’s world. During that time, evolution was experimenting with the development of the three-part brain (the forebrain, midbrain, and hindbrain) that forms the central nervous system for mammals. About 65 million years ago, an asteroid apparently struck Earth, producing a cataclysm that killed the dinosaurs. Mammals not only survived but thrived in a world now devoid of their Jurassic predators. Consciousness would not have evolved if a cosmic catastrophe had not claimed the dinosaurs as victims; in a very literal sense, we owe our existence, as reasoning mammals, to our lucky stars. With the disappearance of the dinosaurs, mammals rapidly diversified, grew, and dominated Earth. The mammal forebrain increased disproportionately in size compared to the midbrain and hindbrain and eventually occupied most of the skull. As it grew, the forebrain differentiated into the four lobes (frontal, temporal, parietal, and occipital) and developed a thin outer layer called the neocortex. This was the key innovation of the mammal brain, because it comprised six layers of neurons, compared to the three layers in the cortex of earlier animals. Since neurons are connected three-dimensionally, both horizontally and vertically, to other neurons, the additional three layers increased the connections exponentially, making possible the processing of much more complex information and thought. The first primates appeared approximately 60 million years ago. They proliferated rapidly into hundreds of species, of which 235 still exist. About 30 million years ago, the New World monkeys (cebus monkeys and marmosets) went their separate way, and 25 million years ago the Old World monkeys (baboons and macaques) did the same. The great apes, the group most closely related to us, began dividing about 18 million years ago, with the orangutan, and then the gorilla, following separate evolutionary paths. Finally, about six million years ago, the hominins separated from the chimpanzees, our closest living relatives among the great apes. Insofar as the evolution of chimpanzees was subject to evolutionary pressures similar to those acting on hominins, it would not be surprising, given the principles of parallel evolution, to find that chimpanzees, and indeed other great apes, developed some cognitive abilities similar to those developed by hominins. Awareness of self is an example of such parallel development. Perhaps more surprisingly, it is also present in octopuses.

Homo habilis is generally regarded as the first hominin to have diverged significantly from its primate ancestors, although its precise relationship to other early members of the genus Homo, such as Homo rudolfensis, Homo ergaster, and the recently discovered Homo naledi, is far from settled. Fossils of Homo habilis have been discovered in Ethiopia, Kenya and Tanzania. Homo habilis lived between 2.3 and 1.4 million years ago, although recent finds in Ethiopia suggest that it may have existed as early as 2.8 million years ago. Its average brain size is estimated to have been about one-third larger than that of Australopithecus, making it smarter than Australopithecus, as demonstrated by its manufacture of crude stone tools, made by breaking rocks to produce sharp stone edges. The use of tools is not unique to hominins, of course: crows use sticks and carefully cut leaves to extract insects, sea otters use stones to break the shells of crabs, monkeys use sticks to kill snakes, and chimpanzees use sticks, from which they strip the leaves, to forage in termite mounds, and stones to crack open nuts. Crude stone tools have been found dating to 3.3 million years ago, but those made by Homo habilis were more sophisticated and have been found in abundance in association with Homo habilis fossils. Although crude, such tools would have been effective for cutting the hides and tendons of dead animals, allowing the tool-user to strip meat. The stone tools could also have been used to break open animals’ long bones and extract the marrow, an especially rich source of protein. Animal bones found in association with the stone tools suggest that the tools were used in this way. The bones also suggest that Homo habilis was probably a meat eater, in contrast to earlier hominin species. There is no evidence that Homo habilis hunted animals, so they probably scavenged the carcasses of animals that had been killed by other animals or had died of old age or disease. What makes the stone tools used by Homo habilis different is their complexity. According to the archeologist Steven Mithen of the University of Reading, to detach the type of flakes found at the sites of Olduvai Gorge, it is necessary to recognize acute angles on the stone nodules, to select so-called striking platforms and to employ good hand-eye co-ordination to strike the nodule in the correct place, in the right direction and with the appropriate amount of force. Homo habilis possessed advanced physical skills and the ability to plan. They were clearly smarter than their hominin ancestors. However, despite their greater intelligence, there is no evidence that they possessed self-awareness or any of the other higher cognitive functions that would distinguish later hominins and lead to the emergence of the gods. Theirs were clever brains, but blank minds.

With Homo erectus we see the development of an awareness of self. Homo erectus first appeared approximately 1.8 million years ago and survived until 300,000 years ago, thus existing for 1.5 million years. They had physical features, especially their arms and toes, that show they had more or less given up climbing trees. Their brains ranged from 750 to 1,250 cubic centimetres, averaging about 1,000 cubic centimetres (about 60 per cent larger than the brain of Homo habilis) and only slightly smaller than that of modern Homo sapiens, at about 1,350 cubic centimetres. It has been claimed, with some justification, that Homo erectus was the first hominid species whose anatomy and behaviour justify the label “human”.
The larger brain led to new behaviours. Stone tools, some of which have been dated to more than 1.7 million years ago, became elegantly flaked on two sides. This new tool is generally referred to as a handaxe, even though it was really just an elegantly flaked rock weighing several pounds, significantly sharper than earlier tools. Homo erectus also made wooden spears, up to six feet in length and sharpened at both ends. Eleven spears found in Germany were apparently used to hunt wild horses, while in southern England and in Spain, Homo erectus hunted other large mammals, including bison, deer, bears, and elephants. Such hunts required the co-operation of a large number of people. Stone-tipped spears dated to 460,000 years ago have recently been found in South Africa. Homo erectus was also the first hominid to control and use fire, and there is good evidence for the controlled use of fire by 790,000 years ago. Co-operative hunting and co-operative living (suggested by archeological remains in natural shelters, such as caves, and by the building of artificial shelters) would have required communication, though the degree of language development remains the subject of debate.

However, Homo erectus was not only smarter, but was becoming self-aware. Self-awareness in children is, we know, a gradual process; it develops in stages. Its development is not dependent on achieving a specific age but on achieving a critical level of brain development, which varies among children; this is illustrated by the fact that most children with autism develop mirror self-recognition at a later age than other children. Similarly, it would be expected that self-awareness in Homo erectus would have developed slowly and would have fluctuated in its early stages. The neuroanatomist Bud Craig, of Arizona State University, has defined self-awareness as “knowing that I exist”, “the feeling that ‘I am’” – in other words, a sense of our own being. We need to be able to experience our own existence before we can experience the existence of anything else in the environment. Without an “I” there can be no “you.” By developing an awareness of self, Homo erectus would have developed an awareness of others and, therefore, been able to initiate co-operative activities. Such an awareness of others would not have included a “theory of mind”, but would have been more like that found in animals that hunt jointly, such as wolves: they are aware of one another, and so able to co-operate, without understanding what the other is thinking. Self-awareness is, also, not unique to humans. Charles Darwin, while visiting a zoo, held a mirror up to an orangutan and carefully observed the reaction of the ape, which made a series of facial expressions. In fact, the majority of chimpanzees in studies learn to use a mirror to explore body areas they could not otherwise see, such as their teeth and ears, and yet monkeys tested similarly show no such signs of self-recognition. The capacity for self-recognition, it seems, does not extend below humans and the great apes.

It is with Archaic Homo sapiens, however, beginning about 200,000 years ago, that the development of an awareness of others’ thoughts, commonly referred to as a “theory of mind”, may be seen. In terms of longevity, Homo erectus was the most successful hominin species to inhabit this planet, surviving 15 times longer than our own species has so far.
Given its success and broad geographical distribution, it is not surprising that, around 700,000 years ago, Homo erectus began evolving into several other hominin species, commonly grouped together and designated “Archaic Homo sapiens”. Some members of this group apparently developed a new, major cognitive advance that would be essential for becoming modern Homo sapiens. Depending on where they lived geographically, these hominins developed as Homo heidelbergensis and Homo neanderthalensis in Europe (specimens from Spain, dated to approximately 430,000 years ago, display features of both); Homo rhodesiensis in Africa; Homo floresiensis in Indonesia; and the Denisovans, a sister group to the Neandertals, whom they apparently outnumbered and with whom they interbred, in Siberia. Neandertals lived in Europe from 230,000 to 40,000 years ago, the largest concentration living in what is now southern France, though they were widely distributed from Wales in the west to Uzbekistan in the east. There is no evidence that Neandertals ever migrated to China or Indonesia, as Homo erectus had done, or that they ever lived in Africa. The most striking physical characteristic of the Neandertals was their large brain, which averaged 1,480 cubic centimetres, larger than the average for modern humans.


Their short, stocky frames, similar to those of modern-day Inuit, were suited to the cold European climate. They followed animal herds in the summer and spent winters in a home base, often a cave. They made extensive use of fire and of animal skins for warmth, and were excellent hunters. They made both stone and bone tools, and weapons significantly more sophisticated than those of Homo erectus. Their spears were as elegantly balanced as Olympic javelins and were used to hunt herd animals, the source of their largely protein diet. Much of the hunting was done in groups, and there is evidence of co-ordination, such as driving herds of bison and mammoths over a cliff. They also fished and trapped birds. Despite having large brains and sophisticated hunting techniques, however, the culture of the Neandertals was remarkably static. There were no innovations, just a narrow repertoire of ancient technologies that sustained them for thousands of years. They never invented the harpoon, the bow and arrow, or other such weapons, despite hunting large animals for almost 200,000 years. Based on their brain size alone, Neandertals should have built computers and flown to the moon; the discrepancy between their brain size and their lifestyle has been characterized as a brain–culture mismatch. There are, though, hints of something more: crosshatched lines, 39,000 years old and suggestive of early art, have been found carved into the rock of a cave on Gibraltar (crudely painted shells, and collections of feathers from large birds such as eagles, have also been found). And one clear difference from their hominin predecessors is established: for the first time in history, there are suggestions of caring for other members of the group. The remains of nine Neandertals, who died 60,000 to 80,000 years ago, have been found in caves in Iraq. Amongst them was one older man who showed evidence of severe injuries sustained many years before his death, with multiple fractures, including trauma to his right arm and leg that would have crippled him, as well as a blow to his head that had left him blind in one eye. Such a hominin would not have survived for long on his own, and it is clear that others provided care for him for many years. Another example of caring among Neandertals was their practice of burying their dead. From between 75,000 and 35,000 years ago, at least 59 intentional Neandertal burials at 20 sites have been found, mostly in southwestern France. Most were placed in a tightly flexed foetal position, which may have had a symbolic significance. Neandertal burials may suggest belief in some sort of afterlife but, at the very least, they show a strength of attachment between individuals that transcends anything seen previously: a gesture toward the dead that was far from obligatory for any but emotional reasons.

Providing care for another suggests that you are able to share their emotional perspective, which is to say, to empathize with them, to get into the mind of the other, to know what they are thinking and feeling. Psychologists refer to this as having a theory of mind: an understanding that the behaviour of others is motivated by thoughts, emotions, and beliefs. It is not merely being aware of the physical presence and intentions of another; early hominins all had that ability, as do many animals, as when wolves submit to an alpha male. A theory of mind, however, involves actually putting yourself into the other person’s mind.
We read the minds of others not only by listening to what they say but also by observing their facial expressions, gaze, posture, and movements. By definition, an awareness of others cannot develop until an awareness of self has first developed, since you cannot understand the thoughts and emotions of another unless you are aware of your own, which is your point of reference. It was at this point that humans became aware of themselves as one amongst many, no longer as the centre of the universe.

It is very unlikely, however, that Neandertals believed in gods. Although they had apparently acquired a theory of mind, they had not yet acquired the second-order theory of mind that would allow them to think about what the other was thinking about them. Nor had they acquired the ability to fully project themselves into the past and future and to use their past experiences to plan their future. They were not yet cognitively mature enough to prehend and honour the gods.

By 100,000 years ago, hominins had been separated from their primate ancestors for about 5.9 million years – 99 per cent of the time from the point of separation to the present day. What were the odds, in the remaining mere 100,000 years, that hominins would write Handel’s Messiah, split the atom and fly to the moon, let alone build monuments such as Angkor Wat and Chartres Cathedral to honour gods? Something remarkable was about to happen.

Breakthrough

About 100,000 years ago, early Homo sapiens developed an introspective ability to reflect on their own thoughts. Thus, they could think not only about what others were thinking, but also about what others were thinking about them, and about their own reaction to such thoughts.

There is archaeological evidence from South Africa of cave dwellers (who also made crude rock shelters where there were no caves) who ate a varied diet including seafood and game, lived a reasonably settled existence, used bedding made from various grasses and plants (including some plants with insecticidal properties against, for example, mosquitoes), and made herbal medicines. In these caves, which centre on the Blombos Cave in the Southern Cape, and are dated to 77,000 years ago, were also discovered seashells, covered with red ochre, which had been deliberately perforated, allowing them to be strung together as necklaces or bracelets. A 100,000-year-old red-ochre processing workshop was recently uncovered in these same caves. Ochre can be used for tanning skins, as an insect repellent when applied to the skin, and for hafting stone tools onto wood handles, but also as body decoration. While it is not possible to say definitely what the ochre was being used for 100,000 years ago, the fact that it was applied to perforated shells suggests that it was, at least occasionally, being used for decoration. Altogether, five different kinds of shells have been identified, and among the shell beads it was found that “beads from two separate archeological layers displayed patterns of wear distinct from one another suggesting that they had been re-strung and worn differently at different times”. This is the first evidence of evolving hominin fashion consciousness. Also in the South African caves, 15 pieces of ochre were found that had been modified by scraping and grinding, and then deliberately engraved with straight-line designs. On one, for example, the cross-hatching consists of two sets of six and eight lines partly intercepted by a longer line. Some of the engraved pieces of ochre have been dated to approximately 99,000 years ago.
Elsewhere, in what is now Botswana, a six-metre-long rock, deliberately carved as a piece of public art to resemble the head of a snake, has been dated to 70,000 years ago. There is also evidence that the inhabitants of the South African caves were wearing fitted clothing at this time. Animal skins had been worn for warmth for thousands of years by members of Homo erectus and Archaic Homo sapiens living in the colder climates of Europe and Asia; however, about 72,000 years ago, modern humans began to wear clothing that was more tailored than simple animal-skin capes, even in hotter climates.

Sophisticated tools, pierced shells, fitted clothing, engravings on ochre, rocks carved to resemble animals: a new kind of hominin had clearly emerged. The behaviour of these hominins was so at variance with the behaviour of their predecessors that we designate this group Homo sapiens, “wise man”. We assume that such individuals must have made some kind of major cognitive leap forward, as wearing shell jewellery, decorating one’s body, and wearing fitted clothing all suggest that early Homo sapiens had become not merely self-aware, but aware of what others were thinking about them. Self-adornment is a means of advertising family relationships, social class, group allegiance, and even sexual availability. It is intended to send a message to observers. Self-adornment has been used by Homo sapiens in every known culture, often involving extraordinary investments of time and resources. At the heart of self-adornment is one Homo sapiens thinking about what another Homo sapiens is thinking about him or her. This is the second-order theory of mind: thinking about what one person thinks another person is thinking. Its acquisition requires the person to view the self as an object: not merely looking in a mirror and recognizing the self, but being able to think about what you look like to other people, how they see you, and what you think about how they see you. It includes being able to think about yourself thinking about yourself. It is, in short, the introspective self. The fact that early Homo sapiens were apparently decorating themselves and wearing fitted clothing suggests that they were now thinking about themselves and how they appeared to others. The evolution of an introspective self provided early Homo sapiens with major advantages over other hominins, especially in social interactions, as they were able to predict others’ behaviour. It would have greatly facilitated group activities such as group hunting, and put Homo sapiens at a significant advantage in warfare against other hominins who did not possess this cognitive skill. Zygmunt Bauman wrote, “Unlike other animals, we not only know; we know that we know. We are aware of being aware, conscious of ‘having’ consciousness, of being conscious. Our knowledge is itself an object of knowledge: we can gaze at our thoughts the same way we look at our hands and feet and at the ‘things’ which surround our bodies not being part of them.” The evolution of the introspective self may be the defining moment in the development of human cognition. Thought is born.

In the Bible, this emergence of an introspective self is symbolized by the Genesis myth of Adam and Eve, who eat fruit from the forbidden tree in the Garden of Eden and, for the first time, become aware of themselves and their nakedness.
This is most likely drawn from the experience of childhood, when at an early age we become aware of ourselves, rather than being a commentary on the development of humanity, but the story powerfully addresses both. We experience differentiation and, with it, alienation, something both good and bad. It is, ultimately, the source of the existential longing and loss which forms the basis of religion, but it is not so much a “fall” as a great leap forward.

The concept of gods would not have occurred to hominins prior to about 100,000 – 77,000 years ago, and would probably not have fully developed before about 10,000 years ago, with the rise of settled agrarian society. The human brain, and thus the self-aware human world, would not have been ready for them before that time. However, somewhere between 100,000 and 10,000 years ago, between Homo sapiens’ awareness of others and the appearance of the pantheons of gods which characterise settled agrarian communities, something happened. That something is the prehension of the numinous, of the wholly other: that which transformed Homo sapiens from a self-aware shell-decorator into a worshipper.

Homo Religiosus?

The Argument from Religious Experience

The argument from religious experience is the argument from experiences of “God” to the existence of “God”. In its strong form, it asserts that it is only possible to experience something which actually exists, and so the universal phenomenon of religious experience demonstrates the existence of “God”: people experience “God”, therefore “God” must be real. In its weaker form, the argument asserts only that religious experiences constitute evidence for “God’s” existence; this form of the argument has been defended by Richard Swinburne with an appeal to the “principle of credulity”. It is slightly different from, and perhaps more credible than, Anselm’s ontological argument: it does not begin from an imagined reality and argue that, if we are able to conceive it, it must necessarily exist, but rather begins from our experience of a reality. Religion very clearly does exist; therefore that which is its focus, that which lies behind it, must also exist. There really is a horned quadruped that gallops across the plains; it just turns out that the unicorn of our dreams is, in fact, a rhinoceros.

Both forms of the argument assume that religious experiences are a type of perceptual experience in which the person having the experience perceives something external to themselves. However, it could be argued that religious experiences involve imagination, rather than perception, and that the object of the experience is not something that exists objectively but rather is something that exists subjectively in the mind of the person having the experience: it is their interpretation of the experience.

Conflicting Religious Experiences

A further difficulty with the argument is that of conflicting experiences: adherents of all religions claim to have had experiences that validate their religion. But if any of these appeals to experience is valid, then surely all are? Yet it cannot be the case that all such experiences are valid, because religions are mutually exclusive and mutually inconsistent; they conflict with one another. This seems to turn the argument on its head and to suggest that, in fact, none of these appeals to experience is valid, since each seeks to validate something different and mutually contradictory.

Objections could also be raised along lines suggested by philosophical scepticism. These are arguments that our experiences of the external world, the world of the familiar, everyday objects around us, are insufficient to justify our belief in their reality. Descartes’ argument from dreaming is the best known of these, though “external world scepticism” can be traced as far back as ancient Greece. However, this becomes something of a parlour game, a way of whiling away the time and little more, when one considers that even quantum physicists sit on chairs, even though they know that at a quantum level nothing is solid and everything is in a state of flux. To a certain degree, experience trumps theory.

That said, the problem is a real one: all experience is subjective, and any subjective experience is logically consistent with a number of objective states. No matter how I perceive the world to be, there are a number of ways that it could in fact be; I could be dreaming, or hallucinating, for example. And if our experience of the tangible world is at least open to question, how much less certain must be the connection between an uncertain religious experience and belief in “God”?

And yet, as Jung maintained, every generation before our own had some belief in “God”; many, indeed, have argued that such belief is the defining characteristic of humanity. Certainly from Stonehenge and Sungir to Saqqara, and from Chichen Itza and Cahokia to Chenzishan, early humans were captivated by the transcendental “other”, by “God”. Equally clearly, during the long ages before the rise of the hominids, there was no prehension of it. It seems to me, then, that we are left with two alternatives to account for this near-universal “God” awareness: either, as Fundamentalists maintain, humans were created with an immediate awareness of their creator, or, at some point in the long history of human evolution, humans became aware of something beyond themselves which they needed to understand and to relate to differently than to anything else in their experience. However, to argue, as Fundamentalists do, for the special creation of Homo sapiens (let alone for the special creation of Adam and Eve, two fully articulate, morally competent beings in an intimate relationship with their creator, on the sixth day of creation) not only strains credulity, but requires a degree of intricate special pleading that wilfully ignores the findings of science. That leaves us only with a developing awareness, at some point in the history of human evolution, of that which we choose to call “God”.

In other words, while the Argument from Religious Experience fails to prove the existence of “God”, as indeed do all the other philosophical arguments we have considered, it takes us closer than any of the others. For while it most emphatically does not objectively prove “God’s” existence, it does point to the emergence of something we refer to as “God”, something for which we must account. So, in our quest to understand what we mean by “God”, we must turn our attention from considering the transcendent “out there” to considering the immanent within. To understand “God”, we must first understand ourselves. Theology, then, must begin with anthropology.

The Talking Cricket

“When you get in trouble
And you don’t know right from wrong,
Give a little whistle, give a little whistle.

When you meet temptation
And the urge is very strong,
Give a little whistle, give a little whistle.

Not just a little squeak, pucker up and blow,
And if your whistle is weak, yell ‘Jiminy Cricket!’

Right!

Take the straight and narrow path
And if you start to slide,
Give a little whistle, give a little whistle,
And always let your conscience be your guide.

Take the straight and narrow path
And if you start to slide,
Give a little whistle (yoo-hoo),
Give a little whistle (yoo-hoo),
And always let your conscience be your guide.”
—Pinocchio, Walt Disney

The Moral Argument: Kant

“Therefore, the summum bonum [“greatest good”] is possible in the world only on the supposition of a supreme Being having a causality corresponding to moral character. Now a being that is capable of acting on the conception of laws is an intelligence (a rational being), and the causality of such a being according to this conception of laws is his will; therefore the supreme cause of nature, which must be presupposed as a condition of the summum bonum, is a being which is the cause of nature by intelligence and will, consequently its author, that is God.”
—Immanuel Kant, Critique of Practical Reason, 1788

The moral argument is the last of the major arguments for the existence of “God” advanced by theists. It takes two forms: the classical, as developed by Kant, and the popular, as developed by CS Lewis.

The first maintains that there is a moral order in the world that can only be explained by the existence of a divine lawgiver.

Of course, there are some who deny that there is any such thing as an objective morality; they tend to argue that morality is the product of society and that conscience is simply that societal norm internalised and buried deep in our subconscious. A better solution is to recognize that there is a universal, objective moral philosophy, built on reason, empathy and a recognition of our shared humanity. “The Ineffable Carrot and the Infinite Stick” explains this moral system more fully and shows why “God” is unnecessary to account for its existence.

In fact, it could be argued that atheism is a better foundation for an objective morality than is theism. There is a meme which crops up on Facebook from time to time which challenges Christians regarding the basis of their morality, saying, “If you have to believe in hell in order to live a good life, you are not a good person.” The force of this challenge ought to make evangelicals and fundamentalists uncomfortable; that it rarely has that effect is an indictment.

For most of my life I believed in “God” and lived in obedience to his laws as set out in the Bible. This is how I was brought up. Despite injunctions to love “God”, and despite straining with every ounce of willpower to do so, I could not. I feared him and obeyed him, feared hell and developed the tenderest of consciences, but I could never love that tyrant, despite my gratitude to him that I was saved. Loving this “God” is an institutionalised form of Stockholm Syndrome, whereby hostages come to identify with their captors; it is not genuine love. Yet this is the “God” who lies above and behind this concept of a divinely inspired universal morality and to which this moral argument attests.

The weakness of this argument is that to postulate, as theists do, a being who is the ultimate source of morality brings us to what is known as the Euthyphro dilemma: does “God” approve of something because it is good, or is it good because “God” approves of it? Fundamentalists generally rely on smoke and mirrors at this point and claim that the two are one and the same: two sides of one coin. However, if “God” approves of something because it is good, then there is an objective standard of morality outside of “God”, and we can simply bypass the notion of “God” and appeal to this standard directly, a standard to which “God” is apparently, in some sense, subject. If, on the other hand, a thing is deemed to be morally right because it is approved by “God”, then good and evil are entirely determined by “God’s” whims, and there is no genuine objective morality, and thus no moral order, at all. In this respect, the moral argument is self-defeating. That conservative Christians, Catholic and Protestant alike, see no issue with this is perhaps in large part because they are inured to its obscenity by the multiple instances in the Bible where “God” commands acts of questionable morality (if not outright wickedness): Abraham being commanded to sacrifice Isaac, the Israelites being commanded to commit genocide against the Canaanites, and so on.

The Moral Argument of CS Lewis

A variant of this argument, advanced by the Christian apologist and Oxford don CS Lewis, author of the Narnia Chronicles and other popular presentations of conservative Christianity, claims that any judgment about good and evil presupposes God’s existence, because one can only judge the goodness of something by comparing it to God, who is the highest good.

“…we know that men find themselves under a moral law, which they did not make, and cannot quite forget even when they try, and which they know they ought to obey. …If there was a controlling power outside the universe… [t]he only way in which we could expect it to show itself would be inside ourselves as an influence or a command trying to get us to behave in a certain way. And that is just what we do find inside ourselves.”
—C.S. Lewis, Mere Christianity, 1960.

Alternatively, it is claimed that there is a universal moral law, a standard of right and wrong of which all human beings are innately aware, even if some choose to violate it. Those who present this argument claim that such a moral awareness could only have been put into us by God.

The most significant problem with this argument is that human beings are not all aware of the same moral law, as even a cursory examination of human history would reveal. In various societies throughout history, behaviours such as polygamy, segregation, slavery and racism, physical abuse as a method of discipline, infanticide, incest, pedophilia, human sacrifice, ritual suicide, ritual murder, cannibalism, genital mutilation and genocide were widely accepted, even encouraged. None of the societies that did these things seemed to feel that there was anything wrong with them; many justified their actions by appealing to their “God”. Some societies have shunned violence of any kind, while others have encouraged war and militarism. Some have advocated free speech and individual rights, while others have mandated conformity and the supremacy of the state. Even today, there are furious debates over the ethics of such topics as same-sex marriage, abortion, capital punishment, sex education, drug legalization, the use of contraceptives and euthanasia. Claiming that God is responsible for humanity’s universal sense of right and wrong fails to explain why there is, and has always been, such widespread disagreement over morality.

However, it is possible to accommodate both the existence of a moral law and the fact that, obviously, not every culture or individual seems to be aware of it. This is because morality is not something implanted in everyone by “God”, but something derived from careful deliberation and a rational understanding of our place in the world and our relationships to one another; there is no more reason to expect it to be immediately apprehended by any one individual than there is to expect the same of the (equally) universal laws of physics.