The Growth of the Atom: A History of Scientific Development

 

From the earliest times, thoughtful people have considered it probable that there must be a limit to the extent to which anything could be repeatedly divided - that, in other words, there must be a smallest size for everything. Thought about this led to the idea of the atom (from the Greek atomos meaning ‘not able to be cut’). The idea of very small particles from which everything is made was mooted at least 2500 years ago, which is rather remarkable considering that it was not until the end of the nineteenth century that any real facts about the nature of these particles became available.

 

The first well-known account of a theory of atoms was proposed by the Greek scientific thinker, or ‘natural philosopher’, Democritus (c.470 - 380 BC). Actually, there is evidence that Democritus cribbed this idea from his teacher Leucippus of Miletus, who flourished in the fifth century BC. Hardly anyone has ever heard of Leucippus while Democritus is quite well known, so, naturally, the latter gets the credit for this important idea. Democritus taught that all matter was made up of particles so small that nothing smaller could be conceived. These particles, which he called atoms, were indestructible and, of course, could not be cut. They were solid, hard and incompressible, and each type of material was made up of large numbers of individual atoms. A pure material would consist of only one type of atom in huge numbers. More complex materials contained a range of types of atoms. Atoms were of different shapes, and it was this that gave materials their properties. For instance, white material had atoms with smooth white surfaces. A sour taste was caused by needle-sharp atoms. The human soul was made of atoms that were smaller and finer than any others.

 

Democritus taught that atoms could combine together, in accordance with strict laws of nature, to form different substances. Although fanciful in some of its details, the atomic theory of Democritus (or his boss) was ingenious and was capable of providing some kind of an explanation for a number of facts. It was also a great advance on the rather feeble magical thinking of other philosophers. So, in that respect, it was, for the time, quite good science; but as a fully adequate explanation, it had to be scored pretty low.

 

Most of the Greek philosophers were, unfortunately, convinced that natural science was a rather low-grade activity and distinctly infra dignitatem, as the Romans would have put it. It was fine for slaves and mechanics and other blue-collar workers who didn’t mind getting their hands dirty, but it was not quite the thing for gentlemen. So for a couple of centuries natural science languished while Greek philosophy, under the influence of Socrates, turned to moral and metaphysical questions.

 

The atomic theory of Democritus was not wholly forgotten, however. The Greek philosopher Epicurus (341-270 BC) thought it was rather good and even found it useful. Epicurus, like many people of today, took a mechanistic view of the universe. Greek superstition and belief in magic and in the power of the Gods offended him and he strongly advocated Democritus’ atomic theory to try to counter this superstition. If, as he maintained, the entire universe consisted only of atoms and nothing else, then even the gods were made of atoms and were subject to exactly the same scientific laws as were humans. So there was no need to worry.

 

Another important advocate of the general idea of atoms was the Roman poet and philosopher Titus Lucretius (c.99 - 55 BC). He put the notion in rather a fine way in his great scientific poem De Rerum Natura (On the Nature of Things). To illustrate the idea of atoms, Lucretius described how he stood on a high hill watching an army on a plain so far below him that it resembled a single massive solid body glittering in the sun. Although it looked solid, he was, of course, aware that it was made up of an enormous number of individual parts. Lucretius’ poem was almost lost and was unknown throughout the Middle Ages. But a single manuscript copy survived and was rediscovered in 1417. Soon after the invention of printing, the poem was published in full (the first printed edition appeared in 1473) and became widely popular. The idea of atoms was thus available to the thinking of many educated people after the Renaissance.

 

Another reason for the preservation of this idea was, ironically, the powerful opposition of the immensely influential Greek philosopher and scientist Aristotle (384 - 322 BC). It has to be remembered that very few of the Greeks were scientists in the sense we use the term today. They did not make observations or carry out experiments. In their view, only pure thought was truly worthy. So, to them, science consisted of dreaming up intellectual theories that would explain nature. Aristotle was very good at this kind of thing. Furthermore, he hated Democritus’ idea of atoms.

 

For nearly two thousand years after his death Aristotle was by far the most widely read of the philosophers and his views carried great weight. He was also remarkably popular because, for many, he was the only source of information on such matters as physiology, logic, ethics, the acquisition of wealth, politics and psychology. Aristotle’s attack on Democritus’ atomic theory was not based on any real scientific grounds but was grounded in pure prejudice and speculative philosophy. This, he believed, was how knowledge was to be obtained. Aristotle also had the backing of the Church and anyone who rejected Aristotle was liable to be in serious trouble. In particular, the Roman Catholic theologians decided that the atomic ideas of Democritus were not only materialistic; they were frankly atheistic.

 

So atoms had a bad press for many centuries until the growth of real science began to look again at the question. In the seventeenth century the popularity of De Rerum Natura and the growth of scientific, and especially chemical, knowledge helped to maintain a debate between what was then considered the orthodoxy of Aristotle and the revival of the ideas of atomism. At the beginning of the nineteenth century the idea of atoms became one of practical importance to the chemists. The English schoolteacher John Dalton (1766 - 1844), who was interested in all branches of science, proposed that ‘the ultimate particles of all homogeneous bodies are perfectly alike in weight, figure etcetera’. By ‘homogeneous bodies’ Dalton did not mean only elements, but included compounds such as water. So, in this, he was not distinguishing atoms from molecules. He was, however, aware that the atoms of an element were, in a chemical context, indivisible and that they remained unchanged while undergoing chemical reactions. He went further. Taking hydrogen as the lightest element, he was able to work out the relative weights of the atoms of other elements, such as oxygen, nitrogen and sulphur.

 

Dalton’s ideas advanced scientific thinking considerably and, because he used his ideas of the atom to explain other phenomena, he can quite realistically be considered the father of modern atomic theory. But were Dalton’s ideas of the atom adequate? Regrettably, no. There were too many unanswered questions and, as science advanced, these increased in number. One question that arose from scientific advance was how to account for the extraordinary fact that certain elements, when purified, were not only warmer than their surroundings but were giving off energy. Two years before the end of the nineteenth century, Marie and Pierre Curie had discovered the radioactive elements polonium and radium. Up to this point, everyone had accepted the idea that atoms were the ultimate small entities. By definition they were indivisible. That’s what they were called. But here were atoms in which something very odd was going on and it began to look likely that these atoms, at least, were rather more complicated than people had imagined.

 

At about the same time as the discovery of radium another remarkable new fact appeared. In 1897, the English physicist Joseph John Thomson (1856 - 1940), known to his colleagues as ‘J.J.’, had been working with cathode ray tubes of the kind devised by William Crookes. Thomson knew that the rays, which produced a fine spot on a fluorescent screen on the end of such a tube, could be deflected by magnets and by an electric charge on plates within the tube. Changing the polarity of the charge showed that the beam was attracted by a positive charge and repelled by a negative charge. So, because ‘like’ charges were known to repel each other, the beam itself had to carry a negative charge. Thomson assumed that the beam consisted of a stream of negatively charged particles, or ‘corpuscles’, as he called them. Measuring the deflection of the spot allowed him to quantify the ratio of the charge to the mass of one of these particles. From this ratio, the mass of each particle was shown to be less than one-thousandth of the mass of a hydrogen atom. In other words, particles existed that were smaller than an atom. Shock horror!
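
For readers who like to see the arithmetic, here is a minimal sketch in Python of the inference involved. The constants are modern values, not Thomson’s own measurements, so treat it as an illustration rather than history:

```python
# Rough check of Thomson's inference: given a charge-to-mass ratio and a
# value for the charge, the particle's mass follows - and it comes out far
# smaller than a hydrogen atom's. Modern constants are assumed throughout.

E_OVER_M = 1.76e11      # electron charge-to-mass ratio (C/kg)
E_CHARGE = 1.60e-19     # elementary charge (C)
M_HYDROGEN = 1.67e-27   # mass of a hydrogen atom (kg)

m_electron = E_CHARGE / E_OVER_M
print(f"electron mass ~ {m_electron:.2e} kg")
print(f"fraction of hydrogen mass ~ 1/{M_HYDROGEN / m_electron:.0f}")
# prints roughly 1/1800 - comfortably 'less than one-thousandth'
```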

 

For a time, no one believed J. J. The indivisibility of the atom had been such a firmly entrenched dogma throughout the whole of the nineteenth century, that when, at a meeting at the Royal Institution in 1897, Thomson announced the discovery of the electron, a distinguished physicist told him afterwards that he thought Thomson had been pulling their legs.

 

As soon as it became known that the atom contained particles smaller than itself speculation arose as to its structure. J. J., of course, speculated on this question. His tentative suggestion was that the atom consisted of a hard ball of positive electricity with electrons stuck on to it, or embedded in it, like currants in a bun. Was this an adequate description of the atom? Unfortunately not. It raised even more questions than it answered.

 

Working briefly under J. J. Thomson at Cambridge was the New Zealand student Ernest Rutherford (1871 - 1937). Rutherford was a high-flier who at 27 became professor of physics at McGill University and by the age of 37 was a Nobel Prizewinner. Interested in the then new field of radioactivity, he knew that the rays given off by radioactive materials were of different kinds. He considered this important. So, quite arbitrarily, he called the positively charged rays ‘alpha rays’ and the negatively charged ones ‘beta rays’. Later, between 1906 and 1909, assisted by the young Hans Wilhelm Geiger at Manchester University, he was able to prove that alpha rays were streams of helium atoms minus their electrons. These particles had a double positive charge.

 

In 1910, Geiger and another of Rutherford’s students, Ernest Marsden, fired streams of alpha particles at a very thin sheet of gold foil. Most of them passed straight through and were registered on the detecting screen behind. But a very small number - about one in 20,000 - actually bounced back. This was an astonishing finding. ‘It was almost as incredible,’ said Rutherford, ‘as if you fired a fifteen-inch shell at a sheet of tissue paper and it came back and hit you.’

 

These observations led Rutherford to two important conclusions: that atoms were mostly empty space and that they had a positively charged centre, somewhat similar to the alpha particles. Because like charges (two positive or two negative charges) repel, alpha particles that happened to strike a positively charged atom centre were forced back. Rutherford now felt able to propose a new model for the structure of the atom, and he did so in 1911. It consisted, he suggested, of a positively charged central part, the nucleus, occupying only a tiny proportion of the whole volume of the atom, surrounded by a large space containing negatively charged electrons. He was able to work out that the mass of the positively charged particle of the nucleus, which he called a proton, was more than 1800 times that of an electron. So almost all the mass of the atom resided in the nucleus. Because electrons were of opposite charge to the nucleus and would be attracted to it, they had to have energy of their own in the form of rapid movement around the nucleus. The analogy with the solar system was irresistible and the electrons came to be described as ‘planetary electrons’. Their angular velocity in their orbits round the nucleus, he suggested, provided just the required degree of centrifugal force to balance the attraction of the nucleus.
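
The balance the planetary picture relies on can be put into a few lines. What follows is a hedged sketch, not Rutherford’s own calculation: it assumes a hydrogen-like atom, a circular orbit at what we now call the Bohr radius, and modern constants:

```python
# A minimal sketch of the balance in the planetary model: the Coulomb
# pull of the nucleus supplies the centripetal force for the orbiting
# electron, i.e. k*e^2/r^2 = m*v^2/r, so v = sqrt(k*e^2/(m*r)).
import math

K = 8.99e9        # Coulomb constant (N m^2 / C^2)
E = 1.60e-19      # elementary charge (C)
M_E = 9.11e-31    # electron mass (kg)
R = 5.29e-11      # assumed orbital radius: the Bohr radius (m)

v = math.sqrt(K * E**2 / (M_E * R))
print(f"orbital speed ~ {v:.2e} m/s")   # ~2.2e6 m/s, about 1% of light speed
```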

 

For each proton in the nucleus there was one orbital electron, so the atom remained electrically neutral. Hydrogen had one proton in the nucleus and one planetary electron. Helium had two protons and two electrons. Lithium had three protons and three electrons. Chlorine had 17 protons and 17 electrons, and so on. So it was the number of electrons in the atom that determined the chemical properties, and the number of electrons was ultimately determined by the number of protons in the nucleus.

 

Rutherford knew that the helium nucleus had twice the mass it would have if the two protons were all it contained. At first, he suggested that it might contain four protons, two of which were neutralized by two electrons. But there were various objections to this explanation and he had reason to believe that an uncharged (neutral) particle of the same mass as a proton actually existed. In the 1920s Rutherford and his assistant James Chadwick (1891 - 1974) spent a long time looking for such a particle - which could be called a ‘neutron’ - but failed. Uncharged particles were very difficult to detect simply because they were uncharged. They could not be directly detected by any of the electrostatic methods then in use. Chadwick eventually proved the existence of the neutron in 1932 and won a Nobel Prize and a knighthood for himself.

 

Rutherford’s concept of the atom immediately provided explanations for many well-known chemical and other phenomena and was one of the most germinal and fruitful hypotheses in the entire history of science. It was a great leap forward and advanced physical science and chemistry enormously. But was it true?

 

For a time it seemed the complete answer. On the basis of this model, scientists were soon able to make the following important statements. Hydrogen is the only atom with no neutron in the nucleus. Helium has two protons and two neutrons. Lithium has three protons and, in its commonest form, four neutrons. The atomic number can be taken to be the number of protons, and this rises by one with each different heavier element until we reach uranium with 92 protons - the heaviest of the naturally occurring elements. The number of neutrons, however, is not always the same as the number of protons. Many of the heavier atoms have more neutrons than protons, and many atoms with the same number of protons (i.e. of the same element) have different numbers of neutrons. Most samples of uranium, for instance, have a mass number of 238 because the nuclei contain 92 protons and 146 neutrons. Some samples - the kind used in the early atom bombs - have a mass number of 235, with 92 protons and only 143 neutrons.
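
The bookkeeping here is simple enough to mechanize. A small sketch (the isotope list is merely illustrative) makes the proton and neutron counting explicit:

```python
# Proton/neutron bookkeeping as described above:
# neutrons = mass number (A) - atomic number (Z, the proton count).

ISOTOPES = {            # name: (Z, A)
    "hydrogen-1":  (1, 1),
    "helium-4":    (2, 4),
    "lithium-7":   (3, 7),
    "uranium-238": (92, 238),
    "uranium-235": (92, 235),
}

for name, (z, a) in ISOTOPES.items():
    print(f"{name}: {z} protons, {a - z} neutrons")
# uranium-238: 92 protons, 146 neutrons
# uranium-235: 92 protons, 143 neutrons
```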

 

The chemical properties of an atom depend on how it links with other atoms by way of its electrons. So these properties depend on the number of electrons and, consequently, on the number of protons in the nucleus. The chemical properties are quite unaffected by the number of neutrons. Atoms with the same number of protons but different numbers of neutrons are called isotopes, literally ‘equally placed’ (in the periodic table). The physical properties, however, depend also on the number of neutrons. Very heavy atoms with many neutrons are often unstable and can break down, for example by giving off alpha particles (two protons and two neutrons) from the nucleus. The loss of two protons, of course, means a loss of two electrons and consequently a complete change to a different element with different chemical properties. This is called transmutation. Elements that undergo spontaneous changes of this kind are said to be radioactive. Some isotopes can also be radioactive.
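
As a worked illustration of transmutation by alpha decay (using uranium-238, which really does decay to thorium-234 in this way):

```python
# Alpha emission removes 2 protons and 2 neutrons: Z drops by 2, A by 4,
# and the atom becomes a different element.

def alpha_decay(z, a):
    """Return (Z, A) of the daughter nucleus after alpha emission."""
    return z - 2, a - 4

z, a = 92, 238                      # uranium-238
dz, da = alpha_decay(z, a)
print(f"daughter: Z={dz}, A={da}")  # Z=90, A=234: thorium-234
```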

 

Rutherford was awarded the Nobel Prize in 1908 for his work on radioactivity and was knighted in 1914. He became a professor of physics at Cambridge in 1919 and succeeded J. J. Thomson as Director of the Cavendish Laboratory. He received the Order of Merit in 1925, was President of the Royal Society from 1925 to 1930, and was raised to the peerage, as Baron Rutherford of Nelson, in 1931. But was his model of the atom correct?

 

Unfortunately, far from being a complete account of the nature of the atom, consistent with all other knowledge, Rutherford’s model was seriously inaccurate, and it was soon apparent that it would not do at all. There were certain important facts which it simply could not explain. If we view the atom as a kind of miniature solar system, we immediately run into a major problem. Orbiting electrons must, according to classical theory, emit energy in the form of electromagnetic radiation. All accelerating electric charges emit radiation - this is how radio and TV transmitters work - and an electron moving in a circle is constantly accelerating towards the centre. But if electrons gave off energy in the form of radiation they would have less to keep them moving round in their orbits and would spiral down into the nucleus. Their kinetic energy (the energy of movement) would decline and they would fall.
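
Just how quickly this classical disaster would unfold can be estimated. The following is a hedged back-of-envelope sketch using a standard classical-electrodynamics result, assuming a circular orbit, non-relativistic motion and modern constants:

```python
# Classical collapse time of a radiating electron spiralling in from the
# Bohr radius: t = a0^3 / (4 * re^2 * c), a textbook estimate derived
# from the Larmor radiation formula.

A0 = 5.29e-11    # Bohr radius (m)
RE = 2.82e-15    # classical electron radius (m)
C = 3.00e8       # speed of light (m/s)

t_collapse = A0**3 / (4 * RE**2 * C)
print(f"classical collapse time ~ {t_collapse:.1e} s")  # ~1.6e-11 s
# On classical theory, every atom should wink out in a hundredth of a
# nanosecond. Atoms plainly do nothing of the kind.
```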

 

Remember that electrons are negative, the atomic nucleus is positive, and that unlike charges attract each other. In Rutherford’s model (which seemed so convincing that it came to be called the classical theory) the only thing that kept the electrons from being attracted by the positive charge on the nuclear protons and plunging into the nucleus was their kinetic energy. This, he explained, supplied a kind of centrifugal force. In fact, orbiting electrons do not radiate continuously and do not spiral down into the nucleus. Atoms certainly emit electromagnetic radiation, but they do so at frequencies specific to the type of atom (see below) and they go on doing so without collapsing. This was too big a camel to swallow. Rutherford’s atom just wasn’t good enough.

 

Another problem troubling the physicists was the demonstrable fact that you could do an experiment to prove that light was a wave phenomenon. You could, for instance, show the expected interference between two sets of light waves just as you can show interference between sound waves and even sea waves. But you can also do an experiment to prove that light consists of particles. Rutherford’s model of the atom has nothing to say to us on this seemingly fundamental difficulty.

 

Other things were worrying the physicists. Principal among these was the very odd fact about the way hot bodies gave off energy. When you heat a bit of iron it gets red, then orange, yellow, green, blue and violet. White heat is just a mixture of all these colours. The colour change through the spectrum from red to violet is simply a matter of the wavelength of the energy given off. Red means long wavelengths, yellow shorter, blue shorter still and violet shortest of all in the visible spectrum. Now, the shorter the wavelength, the more energetic the wave. So, according to classical theory, radiation at the violet end of the spectrum should have a lot of energy, and radiation beyond that should continue to rise steeply.

 

The visible spectrum is only a tiny part of the whole electromagnetic spectrum, which extends a long way on either side of visible light. In terms of wavelength, there is a great region of infrared radiation, of increasing wavelength, below the red; and above the violet, there is a great region of ultraviolet radiation of decreasing wavelength, to say nothing of the X-rays and gamma rays beyond that. By classical theory, then, the energy of radiation should, somewhere in the ultraviolet, reach catastrophic levels. As a consequence, this idea became known as the ‘ultraviolet catastrophe’ and no one had the least idea why it didn’t happen. In fact, simple measurements showed that, within the visible spectrum, as the wavelength decreased, the energy, after rising at first, began to fall again. The ultraviolet catastrophe remained a painful puzzle for years.
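
The mismatch between the classical prediction and what was measured can be seen numerically. This sketch compares the classical (Rayleigh-Jeans) formula with the quantum formula Planck was about to discover (see below), for a body assumed to be at 5000 K:

```python
# Ultraviolet catastrophe in numbers: the classical Rayleigh-Jeans law
# blows up as wavelength shrinks; Planck's law (introduced below) peaks
# and falls again, which is what measurement showed. 5000 K is assumed.
import math

H = 6.626e-34   # Planck's constant (J s)
C = 3.00e8      # speed of light (m/s)
KB = 1.381e-23  # Boltzmann's constant (J/K)
T = 5000.0      # assumed temperature (K)

def rayleigh_jeans(lam):
    return 2 * C * KB * T / lam**4

def planck(lam):
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * T)) - 1)

for nm in (2000, 1000, 580, 300, 100):        # infrared -> ultraviolet
    lam = nm * 1e-9
    print(f"{nm:5d} nm  classical: {rayleigh_jeans(lam):.2e}"
          f"  Planck: {planck(lam):.2e}")
# The classical column climbs without limit; the Planck column peaks
# near 580 nm and then falls away.
```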

 

In October 1900, the German physicist Max Planck (1858 - 1947) took a walk in the Grunewald woods outside Berlin. Like all his physicist colleagues, Planck had long been worrying about these and other facts that did not make sense on the basis of classical Newtonian physics. He was thinking hard how to balance an important mathematical equation that had stubbornly refused to balance. When he returned from his walk, he modestly wrote down: ‘Today I made a discovery as important as Newton’s discovery of gravitation.’

 

Under Newtonian physics the emission of energy - light, heat and other forms of radiation - was assumed to be continuous. There was no reason to think otherwise. What Planck postulated was different. Energy, he decided, was given off in a series of very small separate packets, which he called ‘quanta’. This was a crazy idea, but it fitted nicely with certain incontrovertible facts that could not be explained otherwise. Planck proposed that the amount of energy carried in a quantum was directly related to the frequency (number of cycles per second) of the wave. Remember that frequency and wavelength are completely bound up in each other. They are reciprocally related: as one increases, the other decreases. If you double the number of wiggles that fit into a given period of time, each of the new wiggles must be half the length of the previous ones.

 

Planck’s Grunewald insight was that the energy was equal to the frequency multiplied by a very small number h – a constant that is now universally known as Planck’s constant. That tiny number was to prove important enough not only to alter fundamentally the ideas about the nature of the atom, but also to turn classical physics upside-down. Planck’s idea of quanta sounded like nonsense at first, but it did provide a way of answering the riddle of the ultraviolet catastrophe. At high frequencies (or short wavelengths, such as those in the ultraviolet) a great deal of energy would be needed to emit a quantum. Only a few of the energy emitters of atoms - the electrons - would be energetic enough to supply this amount of energy. At low frequencies, there are masses of electrons with enough energy to emit quanta of low energy. Somewhere in between would be a peak. The mathematics fitted nicely, but the whole idea worked only on the ridiculous basis that energy was given off in packets. As everyone knew, electromagnetic radiation was a wave.
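
The cost of a quantum at different frequencies is a one-line calculation. This sketch (modern constants assumed) shows why quanta at the violet and ultraviolet end are so much more expensive than those at the red end:

```python
# E = h * nu: the energy of one quantum grows with frequency, so
# high-frequency (short-wavelength) quanta cost the most.

H = 6.626e-34   # Planck's constant (J s)
C = 3.00e8      # speed of light (m/s)
EV = 1.602e-19  # joules per electron-volt

for name, nm in (("red", 700), ("violet", 400), ("ultraviolet", 100)):
    nu = C / (nm * 1e-9)            # frequency from wavelength
    e_quantum = H * nu              # energy of one quantum
    print(f"{name:12s} {nm:4d} nm  E = {e_quantum / EV:.1f} eV")
# red ~1.8 eV, violet ~3.1 eV, ultraviolet ~12.4 eV
```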

 

Einstein, investigating how light falling on certain materials, such as selenium, would cause a small electric current to flow (the photoelectric effect), had shown that a certain precise minimum amount of light energy was needed to knock an electron off an atom. He also showed that the kinetic (movement) energy of the electron flying off was equal to the energy of the knocking-off photon minus the energy needed to do the knocking-off. Planck’s new idea led to the theory that, in an atom, each electron is in a certain energy state and can move to a higher energy state only by absorbing a precise quantum of energy. As it does so, it jumps instantaneously to the higher energy level. There is no question of a particle acquiring a smoothly varying amount of energy. It can only make a quantum leap - a tiny but precise change in its energy.
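
Einstein’s bookkeeping can be stated in one line: kinetic energy out equals photon energy in, minus the cost of extraction. A hedged sketch, with an assumed (merely typical) value for that extraction cost:

```python
# Photoelectric arithmetic: KE = (photon energy) - (work function).
# The work function below is an assumed, typical value, not a measured one.

H = 6.626e-34   # Planck's constant (J s)
C = 3.00e8      # speed of light (m/s)
EV = 1.602e-19  # joules per electron-volt

work_function = 2.3                            # assumed, in eV
photon_nm = 400                                # violet light
photon_ev = H * C / (photon_nm * 1e-9) / EV    # ~3.1 eV

if photon_ev > work_function:
    print(f"electron ejected with ~{photon_ev - work_function:.1f} eV")
else:
    print("below threshold: no electron, however bright the light")
```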

 

This idea nicely dealt with the problem of how electrons could give off energy and still stay in situ. They only gave off energy that they had received from outside the atom. The idea answered a lot of other questions, but the mare’s nest of new problems it uncovered proved to be unprecedented in the whole history of science.

 

Some of the consequences of quantum theory are virtually unbelievable. Take the change in energy level of an electron. (If you want to continue to think of an atom as being like a solar system with the sun as the nucleus and the electrons as the planets, you can safely think of different energy levels as being different orbits.) The Danish physicist Niels Bohr (1885 - 1962) proposed that electrons can move only in certain permitted orbits and while in these orbits do not emit radiation. The energy of an electron in a particular orbit is definite and consists of two parts - its potential energy by virtue of its distance from the nucleus, and its kinetic energy from its movement. Each permitted orbit, therefore, is associated with a particular level of energy. An electron, he suggested, can move suddenly from an orbit of higher energy to one of lower energy. When it does so, the energy difference is emitted as a quantum of electromagnetic radiation, such as light, for instance, of a particular frequency.
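
For hydrogen, the ladder of permitted energies takes a famously simple form. This sketch uses the modern textbook statement of Bohr’s result, not his 1913 notation:

```python
# Bohr's hydrogen ladder: the nth permitted orbit has energy
# E_n = -13.6 eV / n^2, and a jump between orbits emits the difference
# as a single quantum.

def energy_ev(n):
    """Energy of the nth permitted Bohr orbit in hydrogen (eV)."""
    return -13.6 / n**2

for n in (1, 2, 3, 4):
    print(f"n={n}: {energy_ev(n):6.2f} eV")

# A jump from n=3 down to n=2 emits a quantum of:
print(f"3 -> 2 emits {energy_ev(3) - energy_ev(2):.2f} eV")  # ~1.89 eV
```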

 

Every element emits its own characteristic light wavelengths when heated. The sodium in common salt gives a yellow colour, for instance, because when the excited electrons in the sodium atom return to their non-excited level, they give off energy at precisely the frequency of yellow light. Heating supplies energy and raises electrons to a higher energy level. When they fall back, they emit light of a precise wavelength that is determined by the structure of the atom. Checking the wavelength of the light allows us to identify the atom. This is the basis of spectroscopy - the technique that had allowed scientists for many years to tell what distant stars are made of. The new quantum theory image of the atom provided an explanation of this phenomenon.
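
Continuing the hydrogen sketch above, converting the energy of the 3-to-2 jump into a wavelength lands on a line spectroscopists actually see - the red H-alpha line:

```python
# Wavelength of the quantum from hydrogen's 3 -> 2 jump (~1.89 eV):
# lambda = h * c / E.

H = 6.626e-34   # Planck's constant (J s)
C = 3.00e8      # speed of light (m/s)
EV = 1.602e-19  # joules per electron-volt

e_quantum = 1.89 * EV
wavelength_nm = H * C / e_quantum * 1e9
print(f"emitted line ~ {wavelength_nm:.0f} nm")  # ~656 nm: red H-alpha
```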

 

Gradually, Planck’s idea took hold and, with it, Bohr’s model of the atom. Once scientists had grasped its principles, the Bohr atom took the scientific world by storm. The physicists immediately got to work designing experiments to prove that it was right. Success quickly followed success. The idea of the permitted orbits with their ladder of energies was proved by James Franck (1882 - 1964) and Gustav Hertz (1887 - 1975) - the nephew of Heinrich - and won them the 1925 Nobel Prize. Using gaseous mercury, which they bombarded with electrons, they showed that the energy was absorbed by the gas in discrete amounts (quanta) of 4.9 electron-volts. This caused the mercury to get excited and then to return to its original state after giving off a photon of light of precise wavelength.
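
The arithmetic connecting Franck and Hertz’s 4.9 electron-volts to a photon ‘of precise wavelength’ is a one-liner, and it lands in the ultraviolet, at about 253 nanometres - the mercury line actually observed:

```python
# Wavelength of the photon re-emitted by mercury after absorbing a
# 4.9 eV quantum: lambda = h * c / E.

H = 6.626e-34   # Planck's constant (J s)
C = 3.00e8      # speed of light (m/s)
EV = 1.602e-19  # joules per electron-volt

wavelength_nm = H * C / (4.9 * EV) * 1e9
print(f"mercury emission line ~ {wavelength_nm:.0f} nm")  # ~253 nm, ultraviolet
```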

 

The result of this experiment was, of course, a great encouragement both to Bohr and to Max Planck and gave them additional stimulus to go on developing the idea of the atom and quantum theory respectively.

 

So was the Bohr atom, with its basis in the newly established quantum physics, the last word? Was it the complete answer? Regrettably, no. Bohr’s model still had many shortcomings and, in spite of its power, was destined to be swept aside a mere 12 years after it was first announced. The Bohr atom model could not account for the spectral lines of atoms with more than one electron - that is, atoms heavier than hydrogen. Furthermore, it did nothing to account for the extraordinary wave-particle problem - the fact that there was clear experimental evidence that light behaved both as a wave and as a particle.

 

It is time to introduce an aristocrat - a prince, no less, who later became a duke. Louis-Victor Pierre Raymond, Duc de Broglie (1892 - 1987) (pronounced ‘broy’) was a nobleman whose great-great-grandfather had the distinction of having been guillotined during the French Revolution. De Broglie was expected to enter the diplomatic service or the army and was sent to the Sorbonne to read history. But he had already become interested in science because of the work his elder brother was doing on X-ray spectroscopy in his private laboratory, and he wasted much of his study time immersed in science books. Nevertheless, he got his history degree. During World War I, while still a mere prince, he was posted to the Eiffel Tower radio station and this got him even more interested in science. So after the war, he returned to the Sorbonne, where, in 1924, he took a doctorate in physics. He very nearly didn’t, because his doctoral thesis was so far ahead of his professors that some of them thought it was rubbish. Fortunately, they sent a copy to Einstein who said, in effect: ‘This is good stuff.’ So the prince became a doctor and a totally new concept was presented to the world.

 

By now, Einstein’s celebrated equation E = mc² was well known. Energy equals mass multiplied by the speed of light multiplied by the speed of light. Everything that has mass has energy. Particles, such as electrons, have mass so they also have energy. Planck had pointed out that energy was equal to frequency multiplied by a very small number called Planck’s constant: E = hν, where h is Planck’s constant and ν is the frequency. Everyone knew this, too.

 

So now it was de Broglie’s turn. All matter, he suggested, must display wave-like properties - must, indeed, act like waves. To him it seemed obvious. Energy implies frequency; frequency implies waves; therefore particles must behave as waves. How did he explain this incredible suggestion? Simple. Einstein’s equation and Planck’s equation are not two separate statements; they are interrelated. If you know the mass of something, you multiply it by the speed of light squared and you get the energy. And if you divide this energy by Planck’s constant you get the frequency. So every particle has a definite frequency, or rate of pulsation, associated with it.
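
De Broglie’s chain of reasoning - mass to energy to frequency - can be followed with a calculator. This sketch uses modern constants and an arbitrarily assumed electron speed; the wavelength formula λ = h/(mv) is the one his thesis made famous:

```python
# De Broglie's arithmetic: E = m*c^2 gives the energy locked in a mass;
# nu = E/h turns that into a frequency; lambda = h/(m*v) gives the
# matter wavelength of a moving electron.

H = 6.626e-34    # Planck's constant (J s)
C = 3.00e8       # speed of light (m/s)
M_E = 9.11e-31   # electron mass (kg)

energy = M_E * C**2          # rest energy of the electron
frequency = energy / H       # de Broglie's intrinsic frequency
print(f"frequency ~ {frequency:.2e} Hz")       # ~1.2e20 Hz

v = 2.0e6                    # an assumed electron speed (m/s)
wavelength = H / (M_E * v)   # de Broglie wavelength
print(f"wavelength ~ {wavelength:.2e} m")      # ~3.6e-10 m: atom-sized
```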

 

If you consider a wave, it can be thought of as a simple up and down motion, like that of a cork on a pond when a stone is thrown in. But it can also be thought of as the outward propagation of the ripples from around the point at which the stone dropped. De Broglie incorporated both ideas. A particle at rest has a local up and down vibration and also a wave that is propagated outward to infinity. The movement of a particle, at speeds much less than the speed of propagation of the wave, can be interpreted as the movement of the resultant wave packet formed by the interference of many waves of slightly differing frequency. Matter-waves eluded experimental demonstration for a time, but in 1927 they were actually detected, when Clinton Davisson and Lester Germer observed electrons being diffracted by a crystal, exactly as waves would be.

 

So the Bohr model of the atom had to be superseded by a new model, proposed by the Austrian physicist Erwin Schrödinger (1887 - 1961), based on his new discipline, wave mechanics. Schrödinger’s atom incorporates Louis de Broglie’s concept of the electron as having wave properties. Electrons can be in any orbit around which an exact number of wavelengths can occur, setting up what is called a ‘standing wave’, like the sound waves in an organ pipe. As there was no accelerating charge, there was no radiation. ‘Permissible’ orbits were determined by the need for the exact number of wavelengths to be present. Other conceivable orbits would involve more or less than a whole number of waves and so would not occur. Schrödinger’s model, published in 1926, offered a more rigorous and mathematically sound account of the atom than that of Bohr. All three men - Bohr, de Broglie and Schrödinger - were awarded well-deserved Nobel Prizes.
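
The standing-wave condition is worth seeing in numbers. A whole number n of de Broglie wavelengths must fit around the orbit (n × λ = 2πr), which is exactly what reproduces Bohr’s permitted orbits. This sketch checks the ground-state orbit of hydrogen, assuming modern values for the radius and the electron’s speed there:

```python
# Standing-wave check: with lambda = h/(m*v), counting how many
# wavelengths fit around the circumference 2*pi*r of the innermost
# orbit should give a whole number (n = 1 for the ground state).
import math

H = 6.626e-34    # Planck's constant (J s)
M_E = 9.11e-31   # electron mass (kg)
R = 5.29e-11     # Bohr radius (m), ground-state orbit assumed
V = 2.19e6       # electron speed in that orbit (m/s), assumed

wavelength = H / (M_E * V)
n = 2 * math.pi * R / wavelength
print(f"wavelengths around the orbit: n ~ {n:.2f}")  # ~1.00: the wave fits
```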

 

 

‘Scientific Blunders – A Brief History of How Wrong Scientists Can Sometimes Be’ p.108 – p.123

Robert Youngson