The articles in this issue of the Newsletter were all contributed by new Fellows who spoke at the RSC Café in Quebec City last November. Thanks to all of them for their contributions.
In his article, Wayne Hocking deplores the trend to “science lite” that he fears has taken over the debate about climate change. I suggest that this is but a part of a much broader problem, characterized by an anti-science attitude amongst much of our political leadership, an overly simplistic approach to science in the media, and a loss of trust in science amongst a large segment of the general public.
Our modern era, characterized as it is by the wonders of scientific discovery, technological achievement, the gradual elimination of poverty, the near-elimination of many diseases through the triumphs of public health, and the gradual (albeit sputtering) spread of democracy and individual human rights, all began with the Age of Enlightenment in the mid-1600s, when reason, analysis, and logic were promoted by thinkers such as Francis Bacon, John Locke, and Voltaire, and by philosophers such as Descartes, Kant, and Hume. The progress of science would not even have begun without the ability of "naturalists" (as they were then called) to challenge the orthodoxy of the Bible and of tradition. Some of the great advances made as a result, such as the discovery of evolution and the process of natural selection, are still challenged in some quarters, but by and large science has triumphed.
However, as David Colquhoun wrote in The Guardian in 2007, “The past 30 years or so have been an age of Endarkenment, a period in which truth ceased to matter very much, and dogma and irrationality became once more respectable.” He cited “magical and superstitious ideas about medicine”, the belief in a young Earth (<6000 years old), creationism, and the healing power of crystals, amongst other strange trends.
Recently Timothy Caulfield published a book called “Is Gwyneth Paltrow wrong about everything?” in which he shows how celebrity culture has succeeded in misleading the public about a wide range of issues. A distrust of “Big Pharma” is one characteristic of this trend, which has generated popularity for so-called “natural” drugs and diet supplements and fuelled suspicion of vaccines. Scientists have not helped their cause here. Many articles in medical journals written to highlight the efficacy of a new drug have actually been written by drug companies, not by the researchers under whose names they are published. Outright fraud has occurred, as in the well-known case of Andrew Wakefield’s completely false promotion of vaccination as a cause of autism. Many people have been taken in by so-called “natural” cures for cancer, usually with premature death as the result.
What’s going on here? Politicization of legitimate scientific debate is often part of the problem. Another issue is the tendency of the media to over-simplify complex issues, an approach that is likely to emphasize, rather than help to resolve, controversy. A third cause is a loss of authority: large organizations, both public and private (i.e., the corporate sector), can no longer take it for granted that the statements and assertions they make about their work will automatically be taken at face value.
The plain truth is that real science, which has been increasing the stock of knowledge now for two hundred years, has uncovered complexities that are impenetrable to most individuals (Hocking’s complaint about universities bypassing calculus is part of this). This is why those scientists with the skill to explain science to the public are so valuable. The McNeil Medal is a significant means for the Royal Society to recognize the best of these individuals. Scientific progress is often associated with controversy, and this is one area where our Expert Panel process can be so important. Individual Fellows need to continue to be encouraged to write popular articles and promote real science in the media. This Newsletter is one place to do this!
If you, like me, live beside a busy four-lane street in front of a large hospital, you know that the only sounds you are aware of are those of the street and air ambulances. The continuous road traffic noise, the din, is not noticed at all. The auditory cortex is responsible for that. The process is partly due to habituation, which fairly quickly (within an hour) turns down the gain of the central auditory system for that particular sound. But if the reduction in sensation level were due only to habituation, you would hear the din again after a good night's sleep, and that does not happen.
Over the last decade my lab, thanks to the efforts of some exceptional postdocs, has investigated the mechanisms behind these long-term effects of moderate-level noise. Adult cats were exposed for three weeks to noise or other multi-frequency sounds, either continuous or 12 hr on-12 hr off, in the frequency range of 4-20 kHz and presented at levels below 68 dBA. Because the legal limit for an 8 hr daily exposure is 85 dBA, and tripling the exposure duration (to cover a full day) would still allow ~80 dBA, this should not cause a hearing loss. Indeed, we found no hearing loss as measured by electrical responses in the brainstem, and no damage to the hair cells in the inner ear.
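The duration-level trade-off invoked here is the equal-energy rule (a 3 dB exchange rate), under which each doubling of daily exposure duration lowers the permissible level by 3 dB. A minimal sketch of that arithmetic (the function name and defaults are ours, keyed to the 85 dBA / 8 hr criterion cited above):

```python
import math

def permissible_level(hours, ref_level=85.0, ref_hours=8.0):
    """Permissible continuous level (dBA) for a given daily exposure
    duration, under the equal-energy (3 dB exchange rate) rule."""
    return ref_level - 10.0 * math.log10(hours / ref_hours)

# Tripling the duration from 8 hr to 24 hr subtracts 10*log10(3) = ~4.8 dB
print(round(permissible_level(24), 1))  # ~80.2 dBA, the ~80 dBA figure above
```

Since the 68 dBA exposure levels used in the experiments sit well below this round-the-clock allowance, no hearing loss would be expected by this criterion.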
To our surprise, however, we did find that neurons in the auditory cortex of the cat that code the exposure-frequency region of 4-20 kHz largely ceased to respond to sound in that frequency range up to levels of 75 dB. In contrast, the neural responses in the octave regions below and above the exposure frequencies were greatly enhanced in strength, and threshold levels were reduced by ~20 dB. The decrease of neural activity in the exposure-frequency region likely explains the near inaudibility of these sounds. We then looked at how long it took to reach this reduced level, by exposing for durations from 2 days to 3 weeks. We found that two weeks was sufficient and that longer exposures did not change the effect. Then we studied how long it took for the effect to disappear, assuming it reflected a plastic mechanism. We found that it took at least 6 weeks before cortical neurons again responded to all sound frequencies at the normal sensitivity level (3).
So no long-term harm done? Subsequent experiments and analyses showed that the normal regular mapping of sound frequency in the auditory cortex was now completely random for the frequencies between 4 and 20 kHz. This tonotopic mapping did not return to normal even after 3 months of recovery from the exposure in the quiet cat room that held a number of freely roaming, but mostly sleeping, cats. It is currently not clear whether this map reorganization has perceptual consequences for sound discrimination.
What does potentially have an effect on sound discrimination is the difference in central gain between the exposure region (reduced) and the octaves above and below it (increased). To transpose this to adult humans, whose hearing is in practice limited to frequencies below 15 kHz (a consequence of exposure to loud occupational and recreational noise), consider that street-noise frequencies are mostly below 3 kHz. That frequency region would thus show a reduced gain, whereas frequencies up to 6 kHz would be enhanced. This could result in problems with understanding speech, especially in noisy environments (1,3).
A disconcerting aspect of these findings is that continuous daily exposure does not allow recovery, since the induction time is so much shorter than the recovery time. It is thus conceivable that workers exposed to machine and other noises, even for only 8 hr/day and wearing hearing protectors that reduce the effective levels to below the legal limit, can over time build up a permanent perceptual deficit, probably more serious than that produced by street noise. This has been confirmed by research in Finland (2), which found that long-time workers in a noisy factory environment, while having clinically normal hearing, could not distinguish a /ba/ phoneme from a /pa/ phoneme. In addition, they did not show the normal electrophysiological response (the mismatch negativity) that signals the brain's pre-attentive ability to differentiate those phonemes.
About 10% of people have an abnormal perception of sound level known as hyperacusis: sounds perceived as normal by the other 90% are found disturbingly, and sometimes painfully, loud. This is generally attributed to an increased central gain resulting from reduced neural inhibition. Recall that in the cats exposed to the 4-20 kHz sound, central gain was increased above and below the exposed frequency region, likely through the disappearance of the lateral inhibition normally provided by the neurons responding to 4-20 kHz. This increased central gain would cause hyperacusis in those frequency regions, as the much-enhanced neural responses recorded from cat auditory cortex indicate. Central gain increases are based on changes in the transmitter-release properties of excitatory synapses. These changes increase not only sound-evoked release but also spontaneous transmitter release, resulting in increased spontaneous firing rates in cortical neurons. This has long been considered a necessary, albeit not sufficient, condition for tinnitus (i.e., ringing in the ears). So a side effect of long-term exposure to moderate-level sounds, long considered safe for the auditory system, would be tinnitus and hyperacusis, both in the presence of clinically normal hearing (3).
Answering the question posed in the title is not straightforward. Laboratory experiments are performed in highly controlled acoustic environments, whereas human sound exposure is variable and not exactly the same every day. It may well be that certain changes in sound exposure after work can offset some of the induced changes, and that recovery in a quiet environment (<45 dBA) is not always beneficial. Nevertheless, our findings suggest that annual testing of workers exposed to long-duration, daily repeated noise should not be limited to measuring hearing sensitivity thresholds, i.e., an audiogram. Testing their ability to understand speech in noise would provide a sensitive diagnostic of the hearing problems that do occur in the absence of hearing sensitivity loss.
1) Gourévitch B, Edeline J-M, Occelli F, Eggermont JJ. (2014) Is the din really harmless? Experience-related neural plasticity for non-traumatic noise levels. Nature Reviews Neuroscience 15: 483-491.
2) Kujala T, Shtyrov Y, Winkler I, et al. (2004). Long-term exposure to noise impairs cortical sound processing and attention control. Psychophysiology, 41, 875–881.
3) Pienkowski M, Eggermont JJ. (2012) Reversible long-term changes in auditory processing in mature auditory cortex in the absence of hearing loss induced by passive, moderate-level sound exposure. Ear and Hearing, 33: 305-314.
I will begin this article by reflecting on global warming. It is not another rant for or against global warming, nor is it about bashing any political party. Climate change is part of the story, but only part. The article looks at the evolution of science - and especially hardcore science - over recent decades.
The first alarm bells regarding global warming were sounded by scientists. Likewise the first warnings about stratospheric ozone depletion. And so it will be with regard to other potential (possibly unforeseen) catastrophes. While in the domain of science, these topics could be discussed with objectivity and tolerance. Yes, these effects were there, but so were other, more natural events such as El Niño, La Niña, Milankovitch cycles, and so on. What was the balance?
But then global warming became public property - rightfully so. But there were issues in dealing with a process that involved temperature changes of a fraction of a degree per decade. Was it really important? So the topic changed to "climate change" - not just global warming, but any dramatic changes that might raise public awareness. Severe weather became an area of focus. And somewhere along the path, the issue became one of great polarity. It was inevitable, of course, once mega-bucks entered the picture. Oil companies denied it. But equally, it was politically "cool" to "fight" for global warming, whether the protagonist believed in it or not. Carbon taxes, originally envisaged with the best of intent, became new money-making machines. A sense of balance was lost. Scientists who wanted to see the whole picture - including astronomical cycles and natural weather variations - were dismissed as sceptics, categorized along with those who were sure there was no effect.
However, alas, they were not lone outcasts for long. For it soon came to pass that all scientists became outcasts. While we speak of the pendulum of public opinion, the pendulum in this case is not a pendulum - it is more like a magnet on a string moving between two steel walls: it oscillates for a while, but eventually is grasped by the pull of one wall or the other and is drawn unforgivingly in that direction. Political leaders of all stripes recognize the value of manipulating these steel walls, aka the extrema of public opinion. But science was a problem. It clouds the issues - a real scientist sees many sides of the picture. It's a little sad when a member of the public cannot distinguish between the stratospheric ozone layer and the mesospheric one. It's tragic when a minister of science or of the environment cannot, especially if the display of ignorance is a public one. So it became clear that, in order to avoid embarrassment, science should be moved out of the picture. Scientists were told - more than once - that "they had done their job and raised the alarm - now it was up to the politicians to develop future policy". New layers of bureaucracy were introduced between the political and scientific platforms, so that high-ranking public servants - and even company CEOs and university administrators - never really needed to talk directly to a scientist. It also endowed these overlords with enormous power: granting agencies, for example, were no longer working to optimize scientific productivity, but rather controlling the scientists. Such people now speak without shame about how they like to support their favourite scientists (and, by inference, do less for less-preferred souls) - something that should reek of conflict of interest, but no longer does, it seems. Indeed the top-down control is even more ominous - ironically, the worst administrators are often those with just enough knowledge to be dangerous.
Recent PhD graduates who move directly into administration, with no postdoctoral experience, but confident in their (limited) science background, can be a bigger threat than an administrator who knows little science but is smart enough to know that they know little science, and so seeks advice. In any case, good "leadership" must be subdued and supportive.
"Global warming" and "Climate Change" now has legions of experts who are not scientists. Hardcore science has lost its grip. At least one university, in setting up a so-called ""sustainability program", refused to allow courses involving calculus into the program, these being of course too challenging for the thousands of non-scientists who might take the program. Hard-core courses would attract too few students to be economically viable - after all, for universities, it's really about money, not knowledge. Yet the basis of much of what we know about climate change comes from hard-core science. Computer models that help us see the future rely on techniques like -yikes! - calculus (and more). So we have reached the point where arguments about climate change are debated in-vaccuo. Science has much more to offer, but has been sidelined. I recently attended a seminar by an (apparently) eminent UK philosopher, who is a government consultant on matters of climate change. The talk was about the statistics of climate change, particularly about "Bayesian statistics" and "double-counting". These topics may seem obscure to you, dear reader; but unfortunately the speaker was similarly hampered, so useful advice from this source would have to be questionable. There is much that science can and must still do - but now, with science effectively side-lined, discussions about these topics take place in a vacuum of fundamental knowledge. The science that is left in these programs is a sort of "science lite" - it wears the attire of science, and looks like science to the untrained eye, and of course to the administrators- both in governments and universities at all administrative levels - who dressed it; but take away the clothes, and underneath there is little flesh.
Yet so much is missed without real science in the picture. There is so much more to our future, and it requires that scientists be produced with a deeper, hard-core focus. We face a period of potential asteroid strikes, excessive man-made heating, and magnetic field changes, among other potential disasters, and it is only scientists trained at a fundamental level who can truly address these issues. Lite science, and speculation by untrained personnel, will not even foresee these events, let alone mitigate them. Yes, everyone needs to be involved - but that should include hard-core scientists. And in particular, these scientists cannot function under the constraints of corporate structure - scientific freedom is at the heart of our survival. Science is not a democracy - lone opinions count, as illustrated by legions of "out-of-the-box" thinkers, from Newton to Einstein and beyond. We cannot allow science to be subjugated by the will of corporate and government overlords, and must resist the current trend toward top-down control.
White dwarfs (WDs) are stars that have depleted their nuclear sources of energy, but remain luminous by radiating their stored thermal energy out into space. Since they cool at a very predictable rate, they can be used as delicate probes of a wide variety of physical phenomena including establishing the chronology of formation of our Galaxy, testing neutrino production rates and exploring crystallization in the core of these fascinating objects.
In 1844 the great astronomer and mathematician F. W. Bessel demonstrated that Sirius, the brightest star in the sky, wobbled as it moved through space. This was not in itself unusual: it simply meant that Sirius had a faint companion (Sirius B), and that it and Sirius A formed a binary system whose components orbit a common centre of mass with a 50-year period. What did turn out to be unusual was that the companion, first observed in 1862 by the American Alvan Clark, was a faint blue star. All known faint stars at that time were red, implying a low surface temperature. The very blue colour of Sirius B signified that it was very hot, and its faintness meant that it was also very small. Sirius B was the first discovered example of the class of stars called white dwarfs (WDs). These objects are the remnants of stars of up to about seven times the mass of our Sun that have completed their nuclear evolution. Their light at this stage comes simply from the radiation into space of their stored thermal energy. In a time much longer than the current age of the Universe (13.7 billion years), they will deplete their store of energy and become black dwarfs.
While these stars did not lead directly to the development of quantum mechanics, they do require quantum theory to explain their structure; as such they can be considered a macroscopic demonstration of quantum physics. One aspect of this structure, predicted by the Indian-American astrophysicist Subrahmanyan Chandrasekhar, is that there is an upper limit to their mass, at about 40 percent more than the mass of the Sun. Any WD exceeding this limit would explode as a supernova. Such supernovae have been observed, and analysis of their brightness, coupled with the velocity of their host galaxy, has led to the concept of “dark energy” dominating the Universe.
WDs cool at a very predictable rate, much like the embers of a fire that is burning down. Thus nature has provided us with precision clocks that we have used to date various components of our Galaxy and provide a lower limit to the age of the Universe that is independent of its expansion rate. Below I detail two other important physical phenomena that we are exploring using these remarkable stars.
Rate of neutrino emission
The Sun is powered by the nuclear conversion of hydrogen into helium. During this process, energetic particles called neutrinos are produced. Neutrinos are almost massless particles that travel near the speed of light and interact very weakly with matter. These solar neutrinos provide the only direct confirmation of the source of solar energy. Attempts to detect them began in the 1960s, but almost from the very beginning it was clear that the rate of detection did not agree with theory - only about one third of the expected number were seen. Eventually it was understood that neutrinos can “oscillate” from one type to another (there are three known neutrino types) in the solar interior and, because the existing experiment was sensitive to only one type, the observed rate was only about a third of that expected. In 2002 the Nobel Prize was awarded to the two physicists who led this experimental work.
In the interior of hot WDs, neutrinos are copiously produced. These neutrinos are almost one thousand times less energetic than those manufactured in the Sun, and they are manufactured in an exotic and entirely different manner from the solar neutrinos. The production process here is the plasmon neutrino process, wherein a photon, interacting with the plasma of electrons in the WD core, attains an “effective mass”. This then allows the photon to produce a virtual electron-positron pair that subsequently annihilates into a neutrino-antineutrino pair. Such a process is normally forbidden because a photon has no mass and therefore cannot decay while conserving both momentum and energy. Neutrino rates for this process have been calculated and experimentally inferred, but a direct astrophysical check on the rates in this energy regime has never been carried out.
In 2013 I had a program with colleagues to use the Hubble Space Telescope (HST) to carry out imaging of an ancient, populous star cluster. The experiment exploited the ultraviolet capabilities of HST to observe the core of the cluster, where the youngest and hottest WDs reside. We identified in excess of 10,000 WDs, the hottest of which had a surface temperature of around 100,000 degrees.
In a comparison with theoretical models, we determined that the very hottest WDs were cooling at the expected rate to within a factor of two or so, thus verifying the neutrino production mechanism. While this is a superb result, several astronomical quantities entered into the analysis, all having significant errors and hence reducing the precision.
Crystallization of the WD core
The core of a typical WD consists of the products of helium burning, that is, a mixture of carbon and oxygen. The density is very high - about a million times that of water. When the core is hot the carbon atoms behave as a classical gas, moving about with high thermal velocities. But as the WD cools, the high density can force the atoms into a crystal lattice and, in effect, the core of the WD “freezes”. When this happens the star will emit some excess energy as its atoms lose their thermal velocities, which will show up as a slowing in the rate of cooling of the WDs. Observationally, this will exhibit itself as a “bump” in a plot of the number of WDs versus their age.
We have looked for and have found an excess number of WDs in our data just where the core is predicted to crystallize – a lovely confirmation of WD cooling theory. WDs in effect become huge diamonds in the sky late in their evolution.