Imagine a future world where technology lets us control our own destiny, enhance our physical and mental performance, and extend our lives – perhaps indefinitely. How will we come to see ourselves as human beings? What will it mean to be human? And how can we manage it all for the common good?
This is the world of Humanity 2.0, and the subject of a new book from Warwick University Professor of Sociology Steve Fuller.
I have to say up front that this is the first of Fuller’s books I’ve read through cover to cover, and frankly it was quite a challenge. Whether it’s the sociologist’s writing style or the somewhat discordant mix of practical and theological content, extracting what Fuller is really trying to say – his thesis, if you like – was an uphill job. To his credit, Fuller has made a series of six short videos summarising the content, which I’ve added to the end of this post. They came too late for me, but you’re advised to watch them before reading the book.
That said, I want to give an overview of the content here and critique a few areas where I have particular issues.
Fuller wants to create egalitarian policy for the development and implementation of transhumanist technologies, and justify sociology’s seat at the multi-disciplinary table that will deliver it. It’s the laudable focus of his Chapter 3.
But his broader agenda is to dethrone what he sees as a prevailing hegemony of Neo-Darwinism (essentially what Darwin knew plus our knowledge of molecular genetics) and get an alternative variant of intelligent design (I.D.) taught in school science classes; p180:
…the most controversial aspect of my position, namely, that the active promotion of a certain broadly Abrahamic theological perspective is necessary to motivate students to undertake lives in science and to support those who decide to do so.
He’s accordingly raised his game by developing a brand of I.D. better suited to the task as he sees it; p177:
As a true social constructivist, I see myself as one of the constructors of intelligent design theory. I am not simply remarking from the sidelines about what others have done or are doing, as a historian or journalist might. Rather I am making a front-line contribution to defining the theory’s identity.
although it’s not clear how much of this is driven by heartfelt conviction. Variously describing himself as a Secular Humanist, a Humanist, and now a Transhumanist, in this Guardian interview from 2006 he appeared not to favour I.D., but felt it deserved a “fair run for its money”; apparently he’ll back any horse, however lame, that will run against Neo-Darwinism.
Fuller’s appeal to I.D. in Humanity 2.0 is itself ambiguous: he uses the term variously in contexts related to a recognisable deity, p187:
I have been quite open about identifying the ‘intelligence’ of intelligent design with the mind of a version of the Abrahamic God into which the scientist aspires to enter by virtue of having been created in imago dei.
then more in relation to nature, as in his discussion around civic religion, p182:
But what remains specifically ‘religious’ about ‘civic religion’? Two aspects: (1) Science’s findings are framed in terms of the larger significance of things, nature’s ‘intelligent design’, if you will. (2) Science’s pursuit requires a particular species of faith – namely, perseverence in the face of adversity – given science’s rather contestable balance sheet in registering goods and harms….
The former quote is consistent with Fuller’s broader counter to Neo-Darwinism, my reading of which can be summed up as (i.e. my words):
Those committed to a Neo-Darwinist world view are aligned with a historical tradition that decrees we can never know a god who is different from us in kind. Such people are uninterested in science or technology beyond that required for a continued existence with their fellow animals in a sustainably snug microcosm. They likewise have no interest in the science and technology of a transhumanist agenda.
It’s never quite clear whether Fuller is projecting God’s image onto man, or man onto God – a model more in line with his version of secular humanism as described in the aforementioned Guardian interview: “human beings at the centre of reality, creating God in their image and likeness” and “taking control of evolution”. With I.D. tied up with hardcore Creationism in the US, however inappropriately from Fuller’s perspective (he doesn’t support Creationism), some clarification would be helpful.
Coming to structure and content. The first two chapters major on the idea of human ‘distinctiveness’, or that which makes us uniquely human, discussed in the frame of race and religion aligned with various biological and theological perspectives from the past, present, and future. Chapter two specifically defines world views broadly corresponding to ‘naturalistic’ Neo-Darwinism, and a divinely-inspired alternative.
Where the naturalistic camp see themselves “embedded”, at one with nature – animals like any others, emerging from a process of evolution by natural selection – the divinely inspired are special: fundamentally separate from and above the animals, they recognise God because he is an intelligently-designing technician as they are, intent on preserving the essence of their specialness – their humanity. Traditionally they’d look to do that in soul form, but now they have an eye to the alternatives future transhumanist technologies might offer. All a bit sci-fi for now, but think of uploading thoughts, memories, or consciousness to a microchip, robot, clone, hive-mind, or whatever.
Chapter three’s more grounded ‘Policy Blueprint’ centres around the so-called Converging Technologies Agenda (CTA) for the delivery, management, and regulation of technologies for human enhancement, or transhumanism; so: Nanotechnology, Information Technology, Biotechnology and Cognitive Sciences working together under Fuller’s favoured policy regime of ‘anticipatory governance’.
Although it’s more a checklist than a roadmap – I’m still uncertain of the next steps – there’s interesting discussion here on topics like the substantive PR task of selling transhumanist ideas to a CT-sceptical public (think nanotech), the use of IT-style early end-user involvement to progress it, and the role for media and science communication.
We can expect issues around personal risk and willingness to participate in enhancement technology trials. Fuller points to the danger of CT being perceived as hollow rebranding (again, echoes of nanotech’s relation to chemistry), and to questions around standards and norms for developments and applications: e.g. would we take a nanotech or a medical lead in a medical situation using that technology? There are also emerging and diverse management philosophies to accommodate or rationalise: the USA takes a more ROI-focused, proactionary, hands-off approach with a human-performance emphasis, while Europe favours a precautionary, state-controlling approach with a human-welfare emphasis.
For Fuller, sociology’s egalitarian pedigree, manifest in the Welfare State, qualifies its latent contribution. And with funding for CT industries biased to the private sector, it looks like the common man is going to need a champion. No centrally driven, government-funded, benevolent upgrade for the species, this. The portents are rather for increasing societal inequality and differentiation: a position Fuller contrasts with the public-focused ‘common good’ research environment of the Cold War. Cynically, and outside any higher moral ambition, CTA could simply serve as a ‘techno-fix’ for over-population or other pressures on the Welfare State, forcing us to work harder and longer for our deferred pensions (no thanks), or getting us off the hook of our ecological responsibilities.
It’s all scary stuff. When we’re popping cogno-enhancers over the cornflakes, and little Jimmy’s off to college by the grace of his cerebral implant, and your investment-banker neighbours have signed up for the latest ‘life-doubler’ programme, one wonders what will qualify us to live, never mind define our humanity. That’s me fantasising, but drug-based cogno-enhancement is here, and Fuller’s born “always already disabled” scenario could happen, hitting hardest the under-privileged and those who don’t want, or can’t afford, the latest upgrades.
Chapters four and five are a return to theology and full-on Neo-Darwinist bashing, which is a shame given I suspect there is so much more to say in the vein of Chapter three.
Various off-shoots and mini-theses sprout from the core agenda, like the discussion of the debt that science, and both the Secularist and Enlightenment movements, owe to religion for their existence, albeit with a concession that the influence has waned:
…even if it is true that all supernaturally motivated scientific insights are eventually absorbed into the naturalistic worldview, it does not follow either that the supernaturalism was unnecessary or that naturalism is the final word.
Newton appears as the quintessential religiously motivated scientist, which is fair enough provided we remember back then he had only religion to explain anything. It’s interesting to ask what sort of science a modern-day Newton might pursue. Would he be one of Fuller’s Neo-Darwinists for whom ‘God differs in kind’, causing him to eschew all impractical science like cosmology, particle physics and String Theory?
I do struggle with this idea that scientists can’t, won’t, or won’t want to do fancy science unless they turn all ‘intelligent design’. It’s saying we have to be designed in order to aspire to knowledge or value truth. Or that because Neo-Darwinists wouldn’t recognise God if they found him curled up in the 10th dimension, they wouldn’t bother with String Theory.
Yet scientists, many of whom are Neo-Darwinists, do that kind of science – big time! So what is it – force of habit? Well why not? Maybe we enjoy all that Brian Cox ‘wonders’ stuff because of an evolutionary misfire: a historic brain artifact associated with some evolved inquisitive tendency for practical survival. We do fancy science, we make a discovery, we revel in our dopamine spike, we do more fancy science. Simples. That’s why scientists are such fun folk to have around.
Fuller might see that as a reductionist, even nihilistic, worldview. He’s said that when Darwin killed God he also killed man, or the only part of man that matters – his humanity. And this is why, despite presenting his arguments in a frame of reasoned academic detachment, I’m coming round to thinking Fuller’s propositions are in the end religious, plain and simple – even if the religion is his own science-flavoured brand. He ‘feels’ there is no humanity without god, so we must have god.
Conclusion
If you’re not used to reading sociology texts, which I’m not, Humanity 2.0 is hard going.
It should be clear by now that Humanity 2.0’s high-tech cover art conceals a heavy theological edge with pervasive references to intelligent design in the context of an anti-Neo-Darwinism agenda. And that’s a shame because it distracts from the more diverse, and frankly more interesting, material also there in plenty for those with open minds.
There’s nothing wrong with theological arguments per se, but mixing rational policy debate with what many will see as off-the-wall, politically charged, I.D. rhetoric is a mistake that’s likely to destructively provoke the very individuals and organisations Fuller should be onboarding to secure sociology’s role in the transhumanist agenda.
Can you tell the temperature from how fast crickets chirrup in the evening? Neil deGrasse Tyson thinks so, according to this Tweet yesterday evening:
Sounds like a great idea, and as I’m in the foothills of the San Gabriel mountains – cricket central by my standards – I’ve tested out the theory.
Dr Tyson is not the first person to suggest you can tell the temperature with a cricket, and he’s only having a bit of fun, so in the worst case he’ll be guilty of spreading, rather than generating, misleading information ;-).
Armed with a digital recorder and a laboratory thermometer, I quickly found a suitable subject. The temperature read 65 degrees Fahrenheit. This is what the chirruping sounded like:
Press the arrow key:
– Cricket at 65F, 20.40hrs
From this sample, using only my ears, I counted 67 chirps in a 15 second period (it’s tricky counting that fast, but I found I could do it by checking off groups of 8 chirps on my fingers). According to Dr Tyson’s formula, that gives a temperature of 67 plus 40 = 107 F; a whole 42 degrees above the actual temperature.
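For anyone who wants to play along, the arithmetic is trivially scriptable. Here’s a minimal Python sketch of the rule as I’ve taken it from the tweet – count the chirps over 15 seconds and add 40 to get degrees Fahrenheit – with my own count and thermometer reading plugged in:

```python
def cricket_temperature_f(chirps_in_15s):
    """Estimate temperature (deg F) from chirps counted in a 15-second window,
    using the 'chirps in 15 seconds plus 40' rule from the tweet."""
    return chirps_in_15s + 40

# My observation: 67 chirps in 15 seconds, thermometer reading 65 F.
estimate = cricket_temperature_f(67)
print(estimate)        # 107
print(estimate - 65)   # 42 degrees above the measured temperature
```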
Why the difference?
We’re doing science here, which means there’s a whole load of stuff to check out before rushing to condemn Dr Tyson for inaccurate tweeting.
Was it indeed a cricket I was listening to? Sounded like one, but I didn’t actually see it.
Was Neil referring to a specific type of cricket, but the 140-character Twitter limit restricted the detail he could provide? If he’d missed out a division by a factor of 2 on the chirp count, that would put my number in the right ballpark.
Did I listen to the cricket long enough? Was it in a cricket warm-up or warm-down mode?
Was my thermometer broken? Ideally I’d have two or more to check, calibrated against a standard. But I don’t think it was the problem.
Could the cricket be hiding under someone’s air-conditioning unit outlet? This isn’t so far-fetched actually. We have one in the house at the moment, living under our fridge because it’s warm.
Was my sample large enough – both in terms of number of recordings and number of crickets? I did make four separate recordings and (for now take my word for it) they were pretty similar. That said, I should really come back over a number of evenings at different times to be sure – right?
Well, in the longer term the sample could get large, as I’ll probably be listening out for these things obsessively for the rest of my life now.
What is a chirp?
Meantime, I wondered if the explanation was down to the definition of a ‘chirp’. I convinced myself the chirps I had recorded might be doubling up – maybe something the cricket was doing with its legs: ‘chirp-chirp’, ‘chirp-chirp’, etc. – with each ‘chirp-chirp’ counting as one ‘chirp’. Had Neil counted these double chirps as single chirps? Was it an issue of resolution and my ears? To find out, I slowed the recording to 0.19 times its normal speed and re-recorded a sample to get this:
Press the arrow key to stream live:
and a waveform looking like this:
Interestingly, what you hear on the playback isn’t ‘chirp-chirp’ at all, but ‘chirp-chirp-chirp’. And it doesn’t help us, because each group of three sub-chirps makes up only a single one of our original chirps. And there is no indication of a slower beat or modulation that would yield a lower chirp count. My original estimate, remember, was 67, and if you count the groups on the expanded trace above you’ll find there are 13 in 15 seconds on the slowed-down trace which, correcting for the factor of 0.19, gives us 68.4. Virtually where I started. The cricket still says it’s 107F when it’s only 65F. (BTW – you can also hear another animal making an even faster noise in the background.)
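As a sanity check, here’s that correction worked through in a short Python sketch (the 13 groups, the 15-second window, and the 0.19 playback factor are just the numbers from my slowed-down recording):

```python
# Sanity check on the slow-motion correction: the recording was slowed to
# 0.19x normal speed, and 13 chirp groups were counted in a 15-second window
# of the slowed playback.
playback_factor = 0.19   # playback speed relative to real time
groups_heard = 13        # chirp groups counted in 15 s of slowed playback

# 15 s of slowed playback covers only 15 * 0.19 = 2.85 s of real cricket time,
# so scale the count back up to a real 15-second window.
real_chirps_per_15s = groups_heard / playback_factor
print(round(real_chirps_per_15s, 1))   # 68.4 - virtually my original count of 67
```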
Conclusion
In conclusion, accepting all the experimental limitations and caveats listed above, this test alone does not inspire confidence in the formula, and hence in the value of the tweet.
But hey, on the bright side we’ve all learned some possibly quite useless information about crickets, plus, more importantly, something of the pitfalls to watch out for in chronological cricket research (or any research for that matter).
It’s turning into quite an artsy fortnight. On Thursday, I went to see Getty CEO Jim Wood interviewed at Caltech, then paid a visit, with dinner, to the Getty Center itself on Saturday night, before taking my chances on Monday with the holiday crowds at the Los Angeles County Museum of Art (LACMA). Between times I’ve been viewing some wonderful examples of Arts & Crafts era houses in Pasadena, and learning about the origins of Californian en plein air (outdoor) painting. A few notes on the Caltech event…
‘Science and Art’ featured J. Paul Getty Trust President and CEO Jim Wood talking with broadcaster Madeleine Brand.
Despite the wide-open title, the conversation focused on the Getty’s expertise in artifact conservation, and an upcoming series of region-wide exhibitions intended to show how post-WWII Californian art was influenced by the science and technology of the period.
Wood began by describing the full extent of the Getty’s capabilities beyond the public face of the Museum, and how its scientists have developed conservation techniques that are deployed on conservation projects around the world. These range from the restoration of flood-damaged panels in Florence to the recovery of poorly preserved mosaics in Damascus.
The upcoming exhibition series will feature artists from Los Angeles, and cover the 1945-1980 period of rapid industrial development and space exploration. Californian artists in particular stayed close to technological developments at this time, and incorporated emerging new materials and techniques into their art. The period is coincident with the Cold War, so it will be interesting to watch for any cultural references in that direction (I’m thinking of the type of arts exhibits from the USA featured in the Victoria & Albert Museum’s Cold War Modern exhibition last year).
The Q&A kicked off refreshingly backwards, with Jim Wood suggesting it’s important to understand the differences between art and science. He takes the view that science deals with progress – it moves towards a goal – but art, while evolving, doesn’t do that; it’s less about facts than ideas. All in all though, despite Wood’s best efforts, these forays into more philosophical territory didn’t really get picked up by the interviewer or the audience; something of a missed opportunity, I felt.
There was an interesting question to Wood on the role of art as a tool to explain difficult scientific concepts; had such art been produced, and should it be preserved? Making a distinction between illustrative and creative art, Wood suggested scientifically illustrative works were likely to be valued; but more for their documentary than artistic qualities. For me, the role of illustrative art is undeniable – look at the depictions of cosmological concepts in popular physics books. The role for creative art in science communication is more ambiguous. It can tell us about prevailing cultural attitudes towards science and technology – back to the Cold War again, consider those swirling atoms and mushroom cloud depictions of atomic power. But it’s less obvious – to me at least – how an abstract artistic aesthetic might translate into, or inform, science.
Wood was asked how we decide when it is right to return an artifact fully to its original state, as the conservator’s toolkit gets ever more impressive. It seems there are some difficult calls, but it’s more usual to conserve than restore.
That brought to mind a whole area of science-art interaction that the evening hadn’t touched upon: the use of technology for artifact simulation and display, whereby an original piece is presented next to a simulation of how the item would have originally appeared. I’m thinking here of Roman and Greek statues in their original livery, the brightly painted interiors of Catholic cathedrals, and projection techniques that bring faded tapestries – however temporarily – back to life. I digress; but for more on the topic, here’s a nice piece on statuary, ‘Gods in Color’, from the Boston Globe.
Anyway, that was a very brief update on my brush with science and art at Caltech and the Getty.
Incidentally, one important feature of the Getty Center that Wood didn’t mention is its restaurant, commendable as much for its location as the food. Perched high overlooking the Los Angeles basin towards the ocean, the views are an inspiration to artist and scientist alike.
I took this short sequence in the garden this afternoon. No photo-shopping, just a nice illustration of the splitting of sunlight into its component colors by refraction through a water drop – shuddering in the breeze after a storm.
The simplest of things, it put me in mind of John Keats’s supposed lament that Isaac Newton had destroyed the beauty of the rainbow by explaining the science behind it, the underlying sentiment of which he included in the poem Lamia. I say supposed, because I can’t find a primary reference to Keats actually ‘having a go’ at Newton over his prism or whatever. Lamia however speaks for itself (see below).
Richard Dawkins gives an alternative view in his book, Unweaving the Rainbow, where he argues scientific understanding enhances rather than diminishes beauty. I’m with Dawkins on this one. And while those going through life without a scientific education (for whatever reason) experience it in a way that is different, I believe they are also simply missing out.
Keats’s rainbow reference appears in his poem Lamia Part II:
What wreath for Lamia? What for Lycius?
What for the sage, old Apollonius?
Upon her aching forehead be there hung
The leaves of willow and of adder’s tongue;
And for the youth, quick, let us strip for him
The thyrsus, that his watching eyes may swim
Into forgetfulness; and, for the sage,
Let spear-grass and the spiteful thistle wage
War on his temples. Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an Angel’s wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine—
Unweave a rainbow, as it erewhile made
The tender-person’d Lamia melt into a shade.
The last thing I expected at a history talk with Stephen Fry was a discussion on the relative merits of rationalism and empiricism. But that’s what we got for part of the time at the Harper Collins Annual History Lecture at the Royal Institute of British Architects last month. And for some reason, the topic’s stuck in my head.
A rush to rationalism?
The difference between rationalism and empiricism essentially turns on the degree to which we draw on the evidence of our senses in creating knowledge.
Fry’s comments were a warning through illustration of over-dependence on apparently rational decisions. As the conversation moved to the fall of the Berlin Wall, Fry made the point that while it seemed rational to liberate Eastern Europe with the flourish, rapidity, and completeness now symbolised by the dismantling of the wall, that process also had unforeseen consequences in the form of unprecedented crime and corruption.
Fry likened it to the activation of a sleeping cancer one might find in a patient from Oliver Sacks’s book Awakenings. These negative developments had been kept in check only by the strictures of the former regime, and were now – in some quarters – the cause of discontent and a call for a return to a more certain past.
Stephen Fry in conversation with Lisa Jardine at RIBA (Photo: Sven Klinge)
It’s hard to know whether an empiricist approach would have predicted the unlooked-for outcome, or whether the experience of Eastern Europe has informed China’s more recent and ongoing transformation. But when looked at in this way, the Chinese process, whereby economic liberation moves ahead of relaxation in political and social controls, might not be all bad. For while the West finds elements of the process distasteful, what greater chaos might be unleashed under a less managed regime?
Yet at an emotional level, attacks on rationality can grate, especially with scientists and technologists. I bristled when Fry likened over-zealous support for rationalism to belief in religion. Was this the same Stephen Fry whose debate trounced the Catholic Church, and who regularly shares platforms with the likes of Richard Dawkins? But rather than rejecting rationalism, I believe he made a valid point: that it is too easy to assume a rationalist approach in all situations – however complex – when sometimes the abstract premises from which we deduce knowledge for decision making are just not up to it.
A palette of reason
Moving on, but with an eye to Fry’s sentiments, there seem to be an awful lot of reasonable sounding words out there: like ‘rational’, ’empirical’, ‘evidence-based’, ‘logical’; and indeed – ‘reasonable’. Whether in the context of drugs policy, climate change, faith schools, or whatever; these words sit like so many pigments on a palette of reason, wielded by individuals and governments alike, to convince us – and themselves – that a particular course of action carries some special sanction. But why do the same words frequently lead to misunderstandings and angst?
It seems to be down to definition and interpretation. Boiling our list down to rationalism and empiricism (subsuming ‘evidence-based’ into empiricism and logic into rationalism) the dictionary definitions and learned philosophical commentaries leave plenty of scope for confusion.
One dictionary defines rationalism as ‘the practice or principle of basing opinions and actions on reason and knowledge rather than on religious belief or emotional response’, and empiricism as
‘the theory that all knowledge is derived from experience and observation‘
which seems pretty clear. But the Oxford Pocket English Dictionary muddies the rational water by including philosophical and theological interpretations that flex the definition of rationalism to a form no scientist could agree with. It seems scientific rationalism is just one brand. I’ve really no idea what to make of the theological interpretation given as:
‘the practice of treating reason as the ultimate authority in religion’,
but it put me in mind of this quote from the current Pope, relayed in this interview by the Vatican astronomer Guy Consolmagno, and equally confusing to my concept of rationality:
“religion needs science to keep itself away from superstition“
No wonder there’s confusion
This all goes some way to explain why scientists find themselves at odds with the government on issues like drugs policy and the recent Nutt affair.
Professor David Nutt led a committee advising the British Government on drugs policy, until he was sacked for speaking publicly in a manner the Home Secretary judged inconsistent with his position. The sacking blew up into a huge debate about the role of scientific advisors and their advice, what they can say when, and the way scientific evidence is used in a politically cognisant, but surely still rational, decision making process.
Some of our reasonable words appeared in the popular press, such as ‘empirical‘ in this Daily Mail piece by A. N. Wilson:
‘The trouble with a ‘scientific’ argument, of course, is that it is not made in the real world, but in a laboratory by an unimaginative academic relying solely on empirical facts.’
Evan Lerner has argued the technical inaccuracy of this statement, which leaves us nowhere to go. If empirical facts are no good, decision makers must be following a rationalist stance or some ‘third way’ unbeknown to philosophers. But I’d argue the politicians are just following a brand of rationalism that suits their purpose; it’s just not a scientific one. And when A. N. Wilson goes on to invoke the R-word:
‘Those who dare question scientists are demonised for their irrationality. Global warming may or may not be a certainty, but anyone who queries it has his sanity questioned. Cast doubt on these gods of certainty and you are accused of wanting to suppress free expression -…’
he’s right; anyone who doesn’t comply with the scientific definition of rationality is demonised. Personally I’d like the scientific definition to be universally accepted, but while there are powerful constituencies who benefit from and delight in woolliness defended as realism or flexibility (politicians, theologians, dictionary compilers), I can’t see it happening.
Likewise, the only kind of rationality under which a discussion on the virtues of faith schools makes sense is one that allows ethical and metaphysical propositions (e.g. is there a god). Moreover, we’re left with politicians working up a drugs policy using an ethics-based ‘political rationality’, and an education policy that recognises and values a ‘religious rationality’.
Unfortunately, the transparency being called for concerning when and under what circumstances this flexing of scientific rationalism happens, also threatens politicians with the anathema of exposing less visible agendas traditionally played close to the chest.
Since I posted this blog, the BHA have issued a video of the whole event. So for a summary – read the blog; for the whole smash…here it is!
Disney’s Dumbo the Elephant got rid of his magic feather. He realised it was just a temporary crutch that gave him the courage to be all that he could be.
For philosopher Daniel Dennett, speaking on ‘A Darwinian Perspective on Religions’, religion is just like Dumbo’s feather – a crutch we can do without. This is a summary of the British Humanist Association (BHA) event I joined earlier this month at South Place Ethical Society’s Conway Hall in London.
Chairing this second lecture in the BHA’s Darwin 200 special lecture series, Richard Dawkins introduced Daniel Dennett as the scientists’ philosopher; someone who takes time out to keep up to date with the scientific literature. And strangely perhaps, of these two champions of atheism it is Dennett the philosopher, not Dawkins the scientist, who tends to take the more studious, less obviously attacking, line on religion.
Taking to the podium in cheerful good humour, prompted in part by the obvious similarity between his own bearded visage and that of the cardboard Darwin cut-out standing stage left, Dennett launched enthusiastically into the reverse engineering of religion.
What was in store for the world’s religions? Would they sweep the planet? Would they die out rapidly or drift out of fashion – like the smoking habit? Or would they transform themselves into creedless moral entities – keeping up the good work but without the mumbo-jumbo? Whatever the future holds for religion, Dennett’s mantra is that if we are going to have any steer over it, we had better understand it – from a scientific point of view.
Dennett treats religion as a Darwinian phenomenon. Human beings put a lot of energy into it – so what’s the biological justification behind it?
Religions, Dennett argues, are the inevitable product of word evolution. He sees words simply as memes that can be pronounced – memes being the name coined by Dawkins to describe units of cultural information transfer that are in some ways similar to genes. Further, words and letters represent a digitisation of language, meaning they can be accurately replicated – even without understanding – because of their consistency with a semantic alphabet. So however crazy an idea expressed in words might be, it can still multiply irrespective of its meaning being understood or making rational sense.
How might the first word memes have come about? Using a Darwinian analogy, Dennett likened the first word memes to wild animals evolving through natural selection, in which “evolution is the amplification of something that almost never happens”. As such, it would only have taken someone to give an arbitrary name to a strange noise in the woods one day (fairy, goblin, monster etc.) for that name eventually to get around a wider community. The seeds of superstition would have been sown. Some notable memes, by virtue of a special repulsiveness or attractiveness, would have survived into folklore. It is these memes, Dennett said, that are “the ancestors of the gods” at the core of the world’s religions.
But that was only phase one. When these ‘wild memes’ are purposefully looked at, studied, and manipulated by people, they become more powerful. Some humans (e.g. priests) might dedicate themselves to keeping such memes alive and thriving, even when by themselves they are no longer very convincing. The modern religions resulting from this process, and that still survive today, represent a tiny fraction of all past religions, and are analogous to surviving languages or species.
Good design means these husbanded memes have inbuilt mechanisms for survival. For example, many religions make man a ‘slave to the meme’ – it’s called subservience.
Dennett described an interesting possible influence of the placebo effect on our cultural religious development. Human susceptibility to ritual may be a result of our reproductively successful ancestors being the ones who – through receptiveness to placebo – enjoyed the health benefits of shaman ritual. Other self-maintenance devices built into modern religions include the glorification of incomprehensibility, warnings not to engage with reasonable criticism (on the basis that you’re talking to the Devil, and he’s a better debater than you), and the idea that belief in a god is a pre-condition for morality.
And that brought Dennett near to his close, and us full circle to Dumbo, and the argument that we have religion because we need it. Dennett argued we no longer need the crutch represented by Dumbo’s feather. Indeed, it’s harmful to hang on to religion, what with the likes of cult suicides and death sentences for blasphemy. But religion is most harmful as a threat to a rational world view. And how does religion differ from other factors that disable rationality, such as drugs or alcohol? Only religion, Dennett said, “honours the disability”.
Are you a scientist, or more of an artistic person? Or maybe you’re a bit of both? Do you care? And does it matter?
It mattered to CP Snow in 1959, when he wrote the essay ‘The Two Cultures’. Snow saw society split into two groups, or cultures: the artistic intellectuals and the natural scientists (natural because they study the natural world). Each misunderstood the other’s language, ideas, and contribution. The relationship was often one of unproductive hostility.
Fifty years on, towards the end of last month, I joined the London Consortium’s ‘Art and Science Now’ programme, to see how Snow’s ideas are standing up in the eyes of leading figures from the world of arts, science, public policy, science communication and philosophy.
This post is part summary, part observation, and part photo gallery (thanks largely to Sven Klinge, whom I met at the conference and who provided most of the pictures here). Not all the speakers from the Dana event are reported here. Don’t read any significance into that – it’s just a time issue. Also, plenty of additional ideas came through in the final panel session and Q&A, which are likely to inspire future posts – but there’s more than enough to be going on with here. The full event ran for three days, 22nd–24th January; I joined on the 23rd and 24th.
NOTE: now includes a summary of Gillian Beer’s paper not included in the original post
First Day – 23rd January, Science Museum’s Dana Centre
The session kicked off with Steve Connor, Director of the London Consortium, introducing keynote speaker George Rousseau.
Reading a prepared text, Rousseau’s talk majored on the virtues and challenges of cultural collaboration framed as inter-disciplinary working, including the concept of ‘bridging’.
Today, multi-disciplinary working is needs-driven by otherwise insoluble complex problems, yet we attack and treat with suspicion those who move between disciplines – “we rush to shoot them down”. Acknowledging that the best minds have always worked in multiple fields (Rousseau made the standard reference to Lunar Men here) doesn’t seem to help us.
Moving on to ideas of responsibility, can those on each side of the science / humanities divide understand each other sufficiently to be conscious of their respective responsibilities in the world? Scientists might understand the arts better than artists understand science, but so what if they can’t sort out their responsibilities? Can we not work with a unified sense of knowledge to achieve that?
Rousseau’s vision is for a world in which we all work across boundaries and know what we have to do (this is what academics call a crude summary). Rousseau went on to discuss at length what that vision might mean within academia, such as the emergence of multi-disciplinary selection committees filling leading academic appointments with multi-skilled candidates.
Rousseau’s Q&A was short and disappointing. A question on unified knowledge – asking how the different ways in which artists and scientists define the word ‘knowledge’ might prevent sensible communion – was not engaged with. I took the question to mean knowledge derived from ‘feeling’ – acceptable and embraced by the artist – as opposed to the scientists’ equivalent based on the ‘rational’. But Rousseau simply declared the word knowledge to be a lexical term and said he didn’t really know what knowledge was. Considering the forum, I found that answer surprising and unfulfilling.
The Science Museum’s Robert Bud opened the afternoon session with his keynote speech. Opening his talk with a photo of Dawkins’s atheist bus, Bud quickly took us back a hundred years, to the split between the ‘two sides of the road’ in South Kensington – close enough to what we call the V&A and the Science Museum today. The ‘arts side’ was painted as a backward looking centre for the maintenance of elitist taste; the science side more practical and forward looking, representing progressive materialism and a rejection of the spiritual. The tension between these, Bud argued, formed the roots of Snow’s cultural divide.
Bud set himself the mission of clarifying Snow’s real meaning. In developing the story, he pointed to the iconic importance of Francis Crick’s DNA double helix, not only as the basis for life and a replacement for the soul, but as the basis for Crick’s, and later Snow’s, fundamental beliefs.
Referencing an essay by Jacob Bronowski, Bud linked Snow to Crick. With a letter from Snow to Bronowski as evidence, he showed that the content of Bronowski’s essay, ‘the abacus and the rose – a dialogue concerning the two world systems‘, was aligned with the real intent behind Snow’s essay. Bronowski couched his text as a dialogue between Potts, a Crick-type scientific stereotype, and Harping, a Kingsley Amis-type reader in English. In the fictional debate, the artistic protagonist sees no beauty in life unless there is some subjectivity or element of human judgment associated with it. Potts, on the other hand, the scientist, has no need of that. In other respects too, the Harping character generally meets our modern stereotype of an anti-technology, anti-progress Luddite. This is starting to remind me of the themes in Dawkins’s ‘Unweaving the Rainbow‘.
Sure enough, having implicated Crick with Bronowski’s Potts character, and aligned Bronowski’s views with the intent of Snow’s essay, Bud now linked Crick to Richard Dawkins. Not only through their shared activity on genetic themes, but also through Dawkins’s atheism, expressed so recently on the sides of another British cultural icon – the London Bus.
Through this elegant, methodical approach, with its use of evidence, Bud had boxed Snow’s intended meaning down to the sort of black-and-white intellectual stand-off that is unfashionable in some quarters. Alan Sokal would later show a similar level of attention to the content and bounding of his argument.
Bud concluded, for the avoidance of any doubt, with this reference to the two cultures debate: [it has] “not been primarily about the conflict between academic disciplines, between whether history or physics is more important, it’s been about materialism against the reality of non-material entities; about god and life. Thank you”
Second Day – 24th January, Tate Modern Art Gallery
This was the first time I’d seen Marcus du Sautoy in the flesh.
With an appearance and demeanor somewhere between a children’s TV presenter and a younger Jasper Carrott, wearing a tee-shirt emblazoned with the phrase ‘i are scientists‘, I can see how he’ll fit well into Dawkins’s old job, but definitely not his shoes.
Full of energy and wit, there is also something of the diplomat about du Sautoy. The recurring theme of ‘bridging’ came up, with maths as the unlikely bridge between science and the arts. Du Sautoy explained the mathematical structure of music; he is involved in a range of projects that link the two. He was sympathetic and supportive of the ‘artist in residence’ type of cross-rift exchange that one audience member was involved with.
Sad as it might seem, I’ve often pondered, on behalf of Anthony Grayling, the wisdom of a book-branding philosophy that entails titles ending with the words “…of things”: ‘The Mystery of Things’, ‘The Reason of Things’, ‘The Meaning of Things’, ‘The Heart of Things’, ‘The Form of Things’ – and that’s just one tranche of his extensive range of popular philosophy books. Maybe it’s just me, but agonising over a purchasing decision in a bookstore, I can never remember which of these ‘things’ editions I already own, and invariably end up leaving the store empty handed.
Yet Grayling remains the philosopher I feel I could most comfortably engage with in chat over a beer. His ability, without patronising, to transpose complex ideas into the common tongue, combined with an unforced sense of humour, is appealing.
Grayling approached Snow from historical and educational perspectives. Again, as Bud had stressed earlier, Snow was late to the debate. In the early decades of the 20th century, Wittgenstein, whose texts Grayling reminded us would be familiar from our previous night’s bath-time reading, had defended the ‘feeling’ world of morality and religion from the reductive encroachments of the empirical scientists’ world view.
In 1880, at a time when the so-called Civic Universities, like my alma-mater Birmingham, were for the first time concentrating on scientific and practical subjects, Thomas Huxley addressed the issues perceived around the arts and sciences, spelling out the importance of a scientific education for all in understanding the world. In retort, Huxley’s friend Matthew Arnold, while agreeing with much of his colleague’s argument for science, strongly defended a complementary role for the humanities as a vehicle for human reflectivity – amongst other virtues.
Yet class was to override any happy balance that the Huxley-Arnold discourse might have anticipated. The wealthy and influential social elite, trained in the classics from school through to Oxbridge, were empowered and equipped to exercise a disproportionate influence over society, compared that is to the growing, parallel yet separate, class of technically proficient industrialists.
Key developments in the twentieth century, Grayling argued, were a change in status for science in recognition of its indispensability to modern infrastructure and the exercise of war, and the increase in complexity and requirement for specialism in scientific pursuits.
Specialism led to streaming in the schools system, separating the population at an early age into classicists and scientists. It was this galvanisation through specialisation of an already divisive educational system that, Grayling argued, had prompted Snow’s commentary. The powerful elite saw the importance of science, but without understanding. It prompted the pithy rationale of keeping the “expert on tap, not on top”. Efforts by the ‘new’ universities to introduce reciprocal training for arts and science students enjoyed abject failure.
In summing up the position today, Grayling considered that in important respects, and specifically with regard to the scientific literacy of the general public and elected officials, Snow’s gap is very much wider fifty years on. Further, this condition is damagingly influential on how scientific issues are being read. Grayling pointed to how the public debate on evolution could only persist through ignorance of biology, causing a mistaken belief that there is any substance at all in the contrary argument. The tradition of thinking in the humanities is, Grayling believes, more towards the need for neat answers and closure to issues and debates, representing the antithesis of the scientific acceptability of open ended questions.
Grayling closed with an appeal for scientific literacy, achieved in part through a change in attitude towards, and engagement as adults with, life-long learning.
The relevance of Charles Darwin to a debate on Snow might not be obvious; yet Gillian Beer made it so.
Before developing her main theme around extinction and Darwin, Beer informed us that Snow’s main reason for wanting to see science and technology working effectively in the world was so it could be used to combat global poverty. Maybe that is a less well known, more human, driver in the debate around Snow’s motivations?
As it turns out, as illustration, Darwin’s attitude to extinction, and the various interplays of scientific, social, and personal experience that influenced that attitude, have much to inform our thinking. When human beings are involved, thoughts and actions are rarely down to science alone.
For Darwin, said Beer, extinction was an ordinary necessity in the process of natural selection. Without extinction there would be no new animals, no improvement. And anyhow, the whole process dovetailed nicely with the Victorian cultural values of progress and hierarchy.
There was some sadness attached to species loss, yet the world remained full and in balance. Why react? Extinction is a gradual process; Darwin scoffed at the idea of cataclysm. And there was every reason to believe the animal kingdom could continue to look after itself well into the distant future.
The very far future was different though. A cooling sun and dying earth played on Darwin’s mind. The religious had their afterlife, but for Darwin the finality of an extinction of man was an intolerable prospect.
How different things are today.
Extinctions are more simultaneous and widespread; we contemplate E. O. Wilson’s ominous forecast of 50% species loss over 50 years. And we feel guilt. We can place five historical mass extinctions, and further cataclysm is a real prospect on a warming earth.
More positively, stuffed animals in glass cases have made way for films of living, moving animals. And while, like Darwin, we might not see a personal eternity, our understanding of genes at least gives us some intellectual compensation that something continues.
In her close, Beer came back to the theme of the interaction of science and culture over time, [concluding that while]:
different elements of Darwin’s theory point in different directions, and have been pressed into service by opposed ideologies; that does not undermine his experimental evidence or his theoretical reach, but it does I think demonstrate how the cultures of scientific enquiry, social assumption, and bodily experience all interact, over time, to change ideas. Thankyou.
This was my second serving of Ben Goldacre in a week; having heard him speak at a Centre For Inquiry event reported here.
Today, less spontaneous and the better for it, Goldacre’s rant was unambiguously focused on the evils of the ‘humanities graduate editors’ of mainstream news media. He really hates these guys, who for Goldacre most closely resemble the Snowian (Snowic? Snowoid?) stereotype of the artist as scientific philistine – ignorant, and dangerously active with it.
Goldacre’s thesis is that we have moved beyond a condition of mere disconnect, in which arts graduates are dismissive of science, to one in which the same group feel entitled to comment in areas and on subjects they know nothing about. He resents the lack of in-depth science reporting in the mainstream media and, as a mantra now, champions the non-specialist but educated reader – the people who “did bio-chemistry at Leicester, and now work in senior management at Marks & Spencers“. He went on to illustrate his talk with slides covering a whole range of mercilessly crass and inaccurate reports extracted from his beloved tabloids and equally unsafe broadsheets. For more, Ben’s book Bad Science is an enjoyable read.
Goldacre gets his important points across with a sense of fun. That said, today’s comic highlight was at his own expense when, during the final panel session, a deferential compliment backfired. Having answered a string of audience questions specifically addressed to him, Ben paused to announce that as a mere ‘D-list’ public intellectual he should defer to the ‘B-list’ public intellectuals – Grayling and Sokal – seated either side of him. Ben’s categorisation was arguably correct for a small population that counts Dawkins in its number, but that didn’t stop him changing colour in response to Grayling’s playful objection to his grade.
Jonathan Miller doesn’t like to be called a polymath.
But what do you call a writer, performer, neurologist, and internationally distinguished director of theatre and opera? Perhaps Steven Connor’s introduction of Miller as an intellectual amphibian was more acceptable; it got a smile.
Sitting across from his interviewer, Associate Director of the London Consortium Colin McCabe, Miller played upon his intimidating status to the full, a status that has earnt him the right, and the expectation from others, to speak plainly. He did so with a vengeance.
The idea of ‘Two Cultures’ would be first a mystery, and second anathema, in the household of Miller’s formative years. Engaged equally and variously as artists and scientists, the family were simply doing what intelligent people did; they flitted effortlessly between interests.
Schooling had played a part, but only in the form of an unruly master who ignored the formal curriculum, a reference that put me in mind of Richard Dawkins’s reminiscing about his happy days at Oundle School. But how far would those examples get, elitist at best by current fashion, as educational models today?
With his neurologist cap on, Miller shared his fascination with the mind’s learning and skills capabilities. For example, taking three days off from a failed activity can result in a skill magically appearing, subconscious processing having kicked in unbeknown.
Miller’s latest interest is to form shapes and assemblies out of metal – just structures that interest him and which, again, he resists labelling as art. Art and science are very different things to Miller. Most art is “to do with people doodling“, while science is “quite clearly directed enquiry prompted by a context which determines what is going to count as an interesting problem“. He sees the “rightness” of what constitutes a satisfactory artistic endeavour as totally different to the “rightness” of what can be concluded from science.
Miller took a similar line to Bud, in so far as he saw the important differences between art and science as quite fundamental. As for the two helping each other out, Miller has a “deep suspicion of this notion of being artist in residence” – a reference to an earlier audience point – and what he saw as an unnecessary formalisation, given that art and science are ‘in residence’ within each of us all the time. He pointed to the nine-year period between 1905 and 1914, when interesting developments ranging from relativity to cubism to the deconstruction of music appeared as the products of a natural cross-fertilisation of interests. The first cubist artists, responsible for the development of military camouflage, were not “artists in residence“ but “artists in uniform“, their input naturally falling into place.
Miller’s point, that the propensity for the arts and natural sciences to unconsciously intermingle is a product of the time we are living in, may well be valid. Yet do we live in such a golden age today? Maybe we need to make some clunky ‘artist in residence’ type gestures to kick the right-thinking along. Miller conceded that the very complexity of science today made polymathism [my word!] a challenge; there are too few hours in the day. Indeed, to expand on a recurring theme, it is problematic enough today for scientists in the same subject area to understand each others’ work. In my experience, it is the same complexity and associated promise of narrowness that drives potential candidates away from a training or career in science.
In a forgivable ploy to avoid final-session empty seat syndrome, Alan Sokal was held back till the 4.00pm slot.
Sokal is something of an icon for followers of the arts-science debate. In 1996, he wrote a spoof academic paper parodying the pseudo-intellectual content and style then being used by an academic group known as the cultural relativists – signatories to a particularly strong flavour of post-modernism. Their thesis as it applies to science is that science is made up – constructed – by people to meet their political wants. Any cultural relativist could write a ten thousand word essay on how over-simplistic that definition is; but that’s partly the point. There is a socio-cultural angle to the most astringent definitions of science, but in the 80s and 90s it all got out of hand. Meanwhile, Sokal’s meaningless paper was accepted and published by the journal ‘Social Text’. His own revelation that the paper was meaningless did much to expose the low standards of evidence and understanding in the prevailing discourse of the cultural relativists. It also precipitated what became known as ‘the science wars’.
Speaking on his birthday, Sokal immediately set out a field he could play on. That meant focusing on the relationship of science and scientific inquiry with society as a whole, rather than the Snow question per se, and defining exactly which definition of science he would be talking about.
Sokal’s was an appeal for a scientific world view in which we are all scientists, some better than others perhaps, but all working to the same rules. The cornerstones of this vision would be clear thinking and a respect for evidence.
Clear thinking meant also the clear writing that Sokal, and earlier Grayling, had demanded. But what is science? It suits some academic commentators not to bolt down definitions in this way, they see it as restrictive and narrowing of debate, but Sokal is firmly out of that camp. It’s worth pondering Sokal’s four definitions of science:
1. It denotes an intellectual endeavour aimed at a rational understanding of the natural and social world
2. It denotes a corpus of currently accepted substantive knowledge
3. It denotes the community of scientists, with its mores and social and economic structure
4. It denotes applied science and technology
Again, there are those who would argue that the first two cannot be discussed outside of the second two, and would attach especial dependency of 1, 2, and 4 on 3 – the social aspect. Not so Sokal.
As if nothing had changed in the thirteen years since his ‘exposure’ publication, he launched a scathing attack on those who would deny that scientific knowledge constitutes objective knowledge of a reality external to ourselves. In these moments, in that lecture hall, it felt like Snow’s rift had evaporated – ‘science’ had won. There was no robust response from the floor as Sokal ridiculed one sociologist after another by relaying quotes from his book Beyond The Hoax. For example:
the validity of theoretical propositions in the sciences is in no way affected by factual evidence – Kenneth Gergen
the natural world has a small or non-existent role in the construction of scientific knowledge – Harry Collins
for the relativists such as ourselves, there is no sense attached to the idea that some standards or beliefs are really rational as distinct from merely locally accepted as such – Barry Barnes & David Bloor.
For Sokal, this type of writing, and the thought behind it, aims to confuse truth with claims of truth. Of course, any post-modernist worth his salt would head that one off at the pass with a denial of there being any absolute truth. Sokal conceded that the more extreme forms of post-modernist thinking were now in retreat, in part due to his own efforts, but mostly due to the example provided by George Bush of where “science bashing” can worryingly lead.
A diversion into pseudoscience complemented Goldacre’s talk and led into the real point Sokal wanted to address – the universal applicability of the scientific method across all areas of human activity. Why do we use one set of standards for evidence in physics, chemistry and biology, and then relax the standard for religion, medicine or politics? Sokal interestingly framed his argument for a unified approach as the “inverse of scientific imperialism“, whereby science should be seen as just one instance of the application of a rational world view in which empirical claims are supported by empirical evidence (the antithesis of dogma).
So at the end of the conference we had returned to the dichotomy of rational and dogmatic world views that Bud, the day before, had characterised as Snow’s true intent, and which Grayling had reinforced as the real issue. In concluding his talk, Sokal stated his belief that the transition from a dogmatic to an evidence-based world view is very far from complete.
Notes
Gillian Beer was King Edward VII Professor at the University of Cambridge
Ben Goldacre is a writer, broadcaster, and medical doctor. He has a weekly Bad Science column in the Guardian newspaper
Anthony Grayling is Professor of Philosophy at Birkbeck, University of London
Marcus du Sautoy is Charles Simonyi Professor for the Public Understanding of Science and Professor of Mathematics at the University of Oxford
Jonathan Miller is a neurologist, writer, TV presenter, director of plays, and many other things!
Alan Sokal is a Professor of Physics at New York University and Professor of Mathematics at University College London
Mainstream Press on this topic
‘The crossing of the intellectual divide’ – Steve Connor, The Independent (newspaper), 6th Feb 2009. Connor’s article mentions events from the day of the conference at Birkbeck College that I did not attend, so it is worth a read.
Run over from behind by a bus. That’s how physicist and skeptic Professor Robert Park wants to go when his time is up.
I joined Bob Park at the Royal Institution this evening to hear him talk about his new book – ‘Superstition: belief in the age of science’.
To be candid, I’m not sure we got much of an insight into the book, and with a good showing of the ‘usual suspects’ (purely based on my memory of familiar faces – National Secular Society, British Humanist Association, Brights, and atheists of other flavours no doubt – not to mention scientists) in the audience, this was pretty much preaching to the converted. But it didn’t matter; Bob came across as a great guy – gentle and sharp at the same time; but most of all – human.
Following an introduction by Jo Marchant from New Scientist, Bob launched straight into the tale of how two Catholic priests had given him the last rites, having stumbled across him, unconscious, under a fallen giant oak. He had photographs to prove it, and that pretty much set the tone for the evening.
We, Bob explained, as Homo sapiens, had only been around for 35,000 years when he was a lad; but today we were 160,000 years old. How come? There’s just more evidence today – we have a 160,000-year-old skull. And as we’ve only been civilised (read post-hunter/gatherer) for 10,000 of those years, it’s fair to say our brains aren’t exactly wired to watch TV, never mind cancel the irritating offer of a wi-fi connection that repeatedly popped up throughout Bob’s PC presentation. Yet despite our brains being rigged to escape tigers and seek out elusive berry bushes, those same brains do a pretty good job of enjoying concertos, fine art, and solving complex differential equations. So we are somehow managing to get along with less than fit-for-purpose equipment. The secret now is to understand the brain sufficiently well that we can explain and counter some of its more noxious excesses – war, for example.
But getting on to superstition, Bob explained that as early as 585 BC, Thales of Miletus had understood how solar eclipses came about, if not how to predict them. And yet, armed with this and doubtless many other supportive evidences for causation, we failed to declare the rational age of man, but rather continued, as we still do, to be superstitious.
Religion is a superstition, Bob maintained. And with 90% of the global population subscribing to some form of religion, doesn’t that make most of us superstitious? In Bob’s reckoning, that should be a concern.
There followed a variety of God-Delusionesque arguments around the illogical multiplicity of Christian and other religions, what I thought was a somewhat confused description and use of the anthropic principle, and a potted history of John Templeton and the Templeton Prize. The prize is given to individuals whose research advances ‘spiritual discovery’ – and it is big bucks; the last one was £820,000 to Michael Heller, a cosmologist and Catholic priest. We learnt that Templeton’s only dictate on the value of the prize was that it should always exceed whatever Nobel is offering. Bob shared the results of a Templeton-funded study that must be seen as an own goal in some quarters: a controlled trial to assess the value of prayer on the recovery rates of coronary bypass patients. No effect was found. Interestingly, there was a negative impact on the health of a sub-group of patients who were told up-front they would be receiving prayers.
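For the curious, here is a minimal sketch in Python of the kind of comparison such a trial turns on – a two-proportion test of complication rates between a prayed-for group and a control group. The patient counts are entirely hypothetical, invented for illustration, and are not the figures from the study Bob described.

```python
# Sketch only: hypothetical counts, not data from the actual prayer trial.
from math import sqrt, erf

def two_proportion_ztest(events_a, n_a, events_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: the two rates are equal."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return z, p_value

# Hypothetical: 300 complications in 600 prayed-for patients vs 290 in 600 controls.
z, p = two_proportion_ztest(300, 600, 290, 600)
print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value means no detectable effect
```

With numbers like these the p-value is large, which is the statistical shape of the trial’s headline result: no detectable benefit from intercessory prayer.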
We moved on to a debunking of the ten commandments as the basis for our moral code, and an appeal instead to the Golden Rule of ‘do unto others as you would have them do unto you’, which Bob put down to sensible evolutionary development rather than any biblical dictate. (As it happens, A. C. Grayling challenged the attractiveness of the Golden Rule earlier this week – but that’s another story….)
On the role of science, Bob believes that if there is one thing science has to offer over everything else, it’s openness – a reference to open data sharing and peer review.
So what are we left with? A questioner from the audience asked what we all wonder now and again – ‘does life have any meaning?’
But Bob had already answered the question in his slides. There is no plan, and if there’s no plan, there’s no purpose beyond that we give to life ourselves. But, as Bob continued, “that doesn’t mean that we can’t have good lives, enjoyable lives, and part of doing that is the way we treat other people”. There’s nothing more to say.
Also of Interest
Professor Robert Park interview in the Guardian
As the last remnants of Christmas turkey fall to sandwich and soup, forget not Bertrand Russell’s musings on the fate of turkeys subscribing to Francis Bacon’s philosophy of inductivist scientific reasoning.
They don’t see it coming (photo WikiCommons)
On his first day at the farm, the turkey noted he was fed at 9.00 am. After this procedure had been repeated for many weeks, the turkey relaxed a bit, safely drawing the conclusion “I am always fed at 9 am”. Regrettably for the turkey, this conclusion was proved wrong on Christmas Eve when, rather than being fed, he got his throat cut. Which just goes to show that any number of true observations can still lead to a false conclusion; the turkey’s argument was simply not logical.
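For the programmatically minded, here is a toy sketch in Python of the turkey’s reasoning. The feeding rule and the hundred-day timeline are invented purely for illustration – the point is that however many confirming observations accumulate, they never logically entail the next one.

```python
# A toy model of inductive inference: the turkey generalises from uniform past
# experience, but the farmer's actual rule (unknown to the turkey) breaks the pattern.

def fed_on(day, slaughter_day=100):
    """The farmer's actual rule - hidden from the turkey."""
    return day < slaughter_day

observations = []
conclusion = None
for day in range(1, 101):
    observations.append(fed_on(day))
    if all(observations):
        # Induction: every observation so far supports the generalisation.
        conclusion = "I am always fed at 9 am"
    else:
        print(f"Day {day}: not fed. {len(observations) - 1} true observations, "
              f"yet the conclusion '{conclusion}' was false all along.")
        break
```

Ninety-nine confirmations, one counter-example, and the generalisation collapses – which is the turkey’s (and Russell’s) point.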