DNA Replication, Part III: How It Is Done

In 1957, Arthur Kornberg made a really interesting discovery. He showed that DNA can be replicated outside of a cell, in a laboratory test tube. Kornberg wasn’t much interested in which model of replication was right. Instead, he was interested in how, specifically, replication occurred. Watson and Crick had suggested that the replication of DNA might not actually require an enzyme. If you could somehow unzip DNA, they thought, new DNA might just self-assemble, because complementary base-pairing would bring in all the appropriate nucleotides. Kornberg thought, though, that there must be some enzyme involved. He set out to figure out what that enzyme was.

What Kornberg did was some old-fashioned tedious work. He thought that there must be some enzyme, so he started looking at candidate proteins from bacterial cells to see whether any of them could synthesize DNA. He would essentially create a culture medium containing three things: some single-stranded DNA to serve as a template for making new DNA, the nucleotides that the new DNA would be made of, and a candidate enzyme from bacterial cells. He then would see whether DNA was synthesized. He did this over and over, with all sorts of candidate enzymes.

He did find one, actually. He found an enzyme that he named DNA polymerase. When he put this enzyme into the mix, he would get the synthesis of a new complementary strand on the old existing strand that he added.

This was a really important discovery. It opened up the modern era of molecular biology. The discovery of DNA polymerase kick-started a 50-year-long (and still continuing) study of how DNA replicates, and specifically of how DNA polymerase does its job. This is often how science works. There is an initial discovery, and then generations of scientists come along and fill in the details.


The Mechanisms of Replication


What I want to do now is to step through the mechanisms we think are responsible for replicating DNA. The first thing we know has to happen is that the two sides of the DNA double strand have to be separated. They are separated by an enzyme called helicase. You might by now note a pattern in how scientists name enzymes. The names end with “ase”, and they tell you exactly what the enzymes do.

Once helicase has done its job, other sets of proteins come along. These are generically called single-strand binding proteins. They come and sit on the now-open DNA and hold it open. This is like putting a chock in there, so that the thing can’t zip back.

Now it is the turn of the major player, the enzyme Kornberg found: DNA polymerase. What this enzyme does, as its name implies, is to polymerize new nucleotides onto the growing new strand of DNA as each is complementarily matched to the older template strand. The nucleotide monomers that are added to the growing strand come in a slightly different form: they are nucleoside triphosphates, and the energy released when their extra phosphates are cleaved off helps drive the polymerization.

The bases being added are not chosen by DNA polymerase itself. Instead, they are lined up through the complementary base-pairing that’s inherent to the nucleotides: A with T, C with G. All DNA polymerase does is go to the next nucleotide and build a strong chemical bond between the growing strand and the next nucleotide to be added. It just finds what’s there, and what’s there is there because of the complementary base-pairing of that nucleotide with the corresponding one on the old template strand.
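As a toy sketch (not real biochemistry, and the function name is my own invention), the logic of template-directed synthesis fits in a few lines of Python: the pairing table does the “choosing”, and the loop plays the role of the polymerase, joining whatever base the template has lined up.

```python
# Toy sketch of template-directed synthesis. The pairing table does the
# choosing; the loop just joins whatever base the template selects.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def synthesize_complement(template):
    """Return the strand complementary to the given template strand."""
    return "".join(PAIR[base] for base in template)

print(synthesize_complement("ATTGCAC"))  # -> TAACGTG
```

Notice that complementing twice gives back the original sequence, which is exactly why each separated strand can regenerate its partner.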

Complementary base-pairing is really the key to DNA replication. Everything else is just details. We could talk about all those details, but we’ve already seen the key issues. In the end we see that Watson and Crick not only had it right, but that almost 50 years later we’ve seen exactly how the mechanism works. Our understanding of these mechanisms of DNA replication has been central to sequencing human DNA and manipulating it to our whims. It also helps us understand one of the essential processes of life, which is reproduction, and, by extension, the evolution of life.

DNA Replication, Part II

Watson and Crick’s semi-conservative model contrasted with a couple of other possibilities for how DNA could replicate. There is the conservative model, which suggests that both strands in the original DNA double helix stay together during replication. Then there is the dispersive model, which suggests that both strands are not only separated, but even broken up into smaller pieces during replication. Deciding which of these models was correct seemed like it should be easy, because they make very different predictions. It was not obvious, however, how to test them in the laboratory.

The prediction of Watson and Crick’s semi-conservative hypothesis is that each of the two daughter double helixes, after one round of replication, should be made up of one old strand and one new strand. The conservative model predicts that after replication, one of the resulting double helixes would be entirely old and the other entirely new. Finally, the dispersive model predicts that both daughter helixes would be made up of a mixture of old and new DNA.

This sounds simple, but the difficulty was figuring out how to actually test it. We need some way to determine what’s old and what’s new DNA after replication. It took several years before anybody figured out how to do this.


Meselson-Stahl Experiment


This brings us to a pair of researchers, Matthew Meselson and Franklin Stahl. In 1957, a few years after Watson and Crick’s work, they came up with a novel method for distinguishing new and old DNA during replication. Let me explain how they did that.

First, they grew bacteria in two different kinds of culture media. One of these culture media had normal nitrogen in it (N14). The other media had a heavier isotope of nitrogen in it (N15). This isotope of nitrogen is not radioactive, it’s just a little bit heavier. Not much heavier, just a little bit.

The point of culturing bacteria in these two different media is that the nitrogen in those media would be taken up and incorporated into any new biological molecules that were being synthesized. Specifically, the nitrogen would be taken up and incorporated into any new DNA that was being synthesized.

If you culture bacteria for some period of time, spanning many generations, then you can assume that all of the nitrogen incorporated in their DNA would be either N14 or N15, depending on the culture medium in which you grew them. In this way, Meselson and Stahl could essentially label old and new DNA by how heavy (dense, really) that DNA was.

As you can imagine, the density difference between DNA that had been made with N15, as compared to DNA that was made with N14, is really small. The really clever part was figuring out how to very accurately measure the densities of these kinds of DNA.


The Clever Part


To do this, Meselson and Stahl devised a new kind of procedure called “density gradient centrifugation”. This density gradient centrifugation allowed them literally to sort out DNA according to how dense it was.

The idea behind this is actually similar to the reason why swimmers don’t sink in the Great Salt Lake. If the densities of a liquid and an object in it are more or less the same, then the object will neither sink nor float; it will just sort of stay where you put it. So, if you add a lot of salt to water, its density becomes about the same as that of our own tissues, and you don’t sink in it, you just sort of stay there.

It is more interesting, though, if you have a gradient of densities. In other words, if you have some range from high to low density in a liquid medium, then objects of slightly different densities will sort themselves out. Each object will end up at exactly the point in the gradient where its own density matches the density of the liquid at that point.

That’s the idea that Meselson and Stahl had. How do you create a density gradient? After trying a number of different kinds of solutions, they found a compound called caesium chloride. This is a salt whose concentrated solution has approximately the same density as DNA. What they then did was take a tube of caesium chloride solution and centrifuge it. If you centrifuge a tube, the heavier stuff goes to the bottom of the tube, and the lighter stuff stays at the top.

What Meselson and Stahl had to do in this experiment was centrifuge the caesium chloride solution enormously quickly. They actually spun it around so fast that they created about 100,000 g of force. This is really fast, so it is called ultra-centrifugation. They did it for a number of days. At the end, they would get a density gradient of the caesium chloride along the tube. Remember, caesium chloride solution is about the same density as DNA.

That means that if you then take some DNA, put it in that tube, and spin it around, the DNA would ordinarily just be dispersed in the tube, because it is more or less the same density as the caesium chloride. As the caesium chloride density gradient develops, however, the DNA all coalesces into a single band. That band sits at the position along the length of the tube where the gradient’s density exactly matches the density of the DNA.

This method was so sensitive that Meselson and Stahl found they could tell the difference between N15 and N14 DNA.


Watson and Crick Had it Right


So, that’s the technique. Armed with it, Meselson and Stahl then did the following experiment, which should be fairly obvious based on what we’ve talked about. They took a culture of N15 bacteria and transferred them to a culture flask that had N14 medium. Now, those bacteria, when they started to replicate their DNA, would start incorporating the lighter nitrogen. Any new DNA produced by those bacteria would be lighter than the old DNA that they had.

They waited for about 20 minutes, which is long enough for just one round of DNA replication. Then they took the bacteria out, extracted the DNA from them, and used their density gradient centrifugation method to determine what the densities of the DNA in the sample were.

The conservative model predicts that at this point there should be two separate sets of DNA: lighter DNA and heavier DNA. The new DNA is going to be lighter and the old DNA is going to be heavier, because according to this model the parent helix stays intact. After one replication, you should have some DNA that is all heavy, and some DNA that is all light.

This is not the result they observed. What they saw was just one intermediate band. So, they could rule out the conservative model directly. They couldn’t rule out the dispersive model, though. After just one round of replication, both the dispersive and semi-conservative models make the same prediction: each daughter double helix should be composed of half old (heavy) DNA and half new (light) DNA. All of the DNA in the sample, after one replication, should be at some intermediate weight. This is what they saw, but the dispersive hypothesis made exactly the same prediction.

If you wait for two replications, however, all of a sudden you get a distinct difference between the predictions made by the semi-conservative and dispersive models. After two replications (about 40 minutes), the dispersive model still predicts there would be only one band, because new and old DNA are all mixed up together, so all of the DNA would be about the same density. The only thing that should change is the position of that band along the gradient, as the mixture gets lighter. The semi-conservative model, by contrast, predicts two bands: half of the helixes would be hybrid (one old strand, one new), and half would be made entirely of new, light DNA.

What Meselson and Stahl saw was exactly that: two bands after two rounds of replication, one at the intermediate density and one at the lighter density, which confirmed the prediction of the semi-conservative model.
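The bookkeeping behind these predictions is simple enough to sketch in a few lines of Python (a toy model, not their actual analysis; the labels “H” and “L” for heavy and light strands are my own):

```python
from collections import Counter

def replicate(helices):
    """One round of semi-conservative replication: each strand of every
    parent helix is kept and paired with a newly made light ("L") strand."""
    daughters = []
    for strand_a, strand_b in helices:
        daughters.append((strand_a, "L"))
        daughters.append((strand_b, "L"))
    return daughters

def bands(helices):
    """Classify each helix by density from its number of heavy strands."""
    kind = {2: "heavy", 1: "hybrid", 0: "light"}
    return Counter(kind[helix.count("H")] for helix in helices)

population = [("H", "H")]            # all-heavy DNA grown on N15
population = replicate(population)   # switch to N14, one round (~20 min)
print(bands(population))             # one intermediate (hybrid) band
population = replicate(population)   # second round (~40 min)
print(bands(population))             # two bands: hybrid and light
```

Running the dispersive model through the same kind of bookkeeping would instead keep every helix at a single, steadily lightening density, which is exactly the difference the second round of replication exposes.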

Well, it took some years to prove it, but Watson and Crick had it right.

DNA Replication, Part I

Lately I’ve been writing a lot about DNA, its history and structure. I think this is really important, I would say key, to understanding what life is about and how it evolves. Here I’ll continue with this business by looking at the proposal Watson and Crick had for how the double helix might be replicated, thanks to the complementary base-pairing they discovered. This will be a series on DNA replication, which I think is one of the most fascinating and complex processes in the universe.

Watson and Crick suggested that the DNA molecule must unzip, and then each half of the molecule could serve as a template for a newly formed half. This is a good hypothesis, but was it correct? When Watson and Crick proposed it, there were two alternative hypotheses on the scene.


The Alternative Hypotheses


The first alternative suggested that the DNA double helix must remain completely intact when it is replicated. That is, the two strands do not separate. The entire molecule is somehow used as a template for making more DNA. This alternative was called the conservative hypothesis of replication: the original DNA double helix molecule remained completely intact and conserved. The idea was that there must be some intermediate molecule that got information from the structure of the helix and used it to build a completely new helix.

A second alternative suggested that the original DNA molecule becomes completely broken down during replication, with the newly copied DNA assembled by some unknown mechanism. In other words, the DNA double helix would actually be irrelevant. This alternative was called the dispersive model. It was called dispersive obviously because the DNA in the original helix just becomes dispersed and incorporated in the new copies that were being created.

Based on what was known about molecular biology and DNA in the 1950’s, both of these hypotheses were reasonable. Neither offered a solution to the problem of duplicating the exact order of nucleotides, however. This order is the information that we are seeking. Based on what we know today, both of these alternatives seem unlikely. Their value then was to serve as alternative hypotheses against which to test specific predictions made by the Watson and Crick model.

The mechanism that Watson and Crick proposed became known as the semi-conservative model of DNA replication. This was called semi-conservative because it predicts that during replication, the double helix unzips and the new daughter helixes would both have one strand of the old helix. We begin with one helix, it separates somehow, and the resulting daughter helixes that are formed maintain the original halves of that parent helix. Upon this old half, new halves are formed to create the new double helixes.

This hypothesis led to a specific prediction: if you could tell which was old and which was new DNA after replication occurred, all the old DNA that was in the original parental double helix would now be divided equally between the two daughter helixes. Each daughter helix would be composed of one half old DNA and one half new DNA.

Let’s contrast that with the predictions made by the other two models. Think about the conservative model first. That model suggests that the DNA helix just remains intact once you’ve got it. After replication, that model would suggest that the two daughter helixes would be made up of, on the one hand, all old DNA, and on the other hand, all new DNA. The old DNA of the original parent is still in the original parent, and the daughter DNA helix is completely new.

The dispersive model made yet another prediction: the old DNA that was found in the original parental helix would just be randomly scattered across the two daughter helixes.

We have three specific and different predictions that could be used to distinguish between these three models of replication. The trick is figuring out how to know what’s old and new DNA. Actually, it was several years after Watson and Crick’s original proposal that anybody could figure out how to experimentally test it.

The way to test the hypothesis was pretty obvious conceptually. Let’s ask ourselves: after replication, what happens to the material in the original parent helix? We have very distinct predictions. The problem was figuring out how to know where old and new DNA was. This is often how it is in science. An idea comes forward, people understand what they have to do, but the critical experiment isn’t actually done until somebody comes along and figures out not what the experiment is, but how you can actually do it. This may take years, as it did in this case.

Eventually, researchers figured out an extremely clever way to know what the difference is between old and new DNA. I will talk about this interesting and brilliant experiment in my next article.

The DNA Structure, Part II

James Watson was a young American, who had just completed his PhD. He was interested in protein structure. He moved to Cambridge, England, and began working with Francis Crick, who was a physicist familiar with x-ray crystallography and how to interpret it. The story goes that Watson happened to visit London for a seminar, and saw the x-ray diffraction patterns that Rosalind Franklin had obtained from Maurice Wilkins’ purified DNA. Watson made some notes, rushed back to Cambridge and told Crick what he had seen.

Using Franklin’s data, Watson and Crick were able to deduce a number of key structural elements of how DNA must be shaped. These are things they figured out by looking at those dots on the x-ray crystallograph. First, they learned that the molecule had to form some kind of helix. It had to have a kind of spiral structure, similar to the alpha helix that is characteristic of many parts of proteins. Second, they figured out that the width of this helix was about two nanometers. The interesting thing is that this width was twice what you would have expected if there were only a single helix. That gave them the idea that there had to be more than one helix. A double helix, perhaps.

Another thing they learned was the regular spacing of the repeated patterns along the length of the molecule. They saw that there was a repeating pattern at about 0.34 nanometers, which corresponded to the size of one nucleotide. Then there was a larger repeating pattern ten times that size. From this they inferred that about ten nucleotides had to occur in each full turn of the spiral, from one point around and back to the same point again.

That isn’t a lot of information. If I gave you that information you couldn’t tell me the structure of DNA. What Watson and Crick did was to use that data and set out to figure out the structure of DNA the old-fashioned way. They made physical models of the molecule with metal rods. They made large-scale models of DNA several feet tall.

They built many models and asked each time: when we have this model, does it all fit together? They tried over and over again. Eventually they came up with a model that fit. The trickiest part of the modeling was to figure out how the nitrogenous bases fit into the picture. Remember, a polymer of DNA is a repeating pattern of sugars and phosphates, with different nitrogenous bases hanging off the side (the guanines, the adenines and so forth). Where did they fit? If you had two, or even more molecules of DNA that were spiraling together, where do the nitrogenous bases go?

Well, after a couple of failed attempts at putting the nitrogenous bases on the outside, Watson and Crick realized they had to go on the inside. Why might they have wanted to put them on the outside? It is the nitrogenous bases that vary along the length of the molecule. It is the variation of the different kinds of nucleotides that must somehow carry the code. If we’re going to get access to that code, we’ve got to make what’s different about it available to the outside world. But they couldn’t get the backbones to fit together in any way that made sense with the nitrogenous bases on the outside.

If they turned those nitrogenous bases in, and had them connecting with each other, forming a kind of stairs, with the backbones of the molecules forming the stringers that hold the steps, the molecule began to fit together. This actually made sense, because the nitrogenous bases are chemically repelled by water. They want to be on the inside of the molecule because of that.

There was one interesting additional problem. This is actually the most interesting part of the story: how did the bases fit together? If they put the nitrogenous bases on the inside, they could get a double helical structure that began to fit the data, but there was still a problem remaining. There are two kinds of bases, the pyrimidines and the purines, and they are of different sizes. Purines have two rings, and pyrimidines only have one.

If you just try to put these stair steps across the two sides of the double helix, you’ll have some steps that are wider, and some that are narrower. For example, if you got two purines together, you’d have a relatively wide step; two pyrimidines together would give a relatively narrow one. The outside of the spiral would be going in and out, which is not structurally stable. That’s where Chargaff’s rule came in. They realized the implications of Chargaff’s rule, which says that the amount of the base adenine (A) always equals the amount of thymine (T). Similarly, the amount of guanine (G) always equals the amount of cytosine (C). This suggested that one may always pair up with the other when they match together on the inside of the helix.

It turns out that when they looked at how these nitrogenous bases would match up, they found that those particular combinations (A with T, C with G) would always maximize the weak hydrogen bonding that occurs between the bases. With this, they actually solved two problems. They figured out how you can have a regular width along the whole length of the staircase. Also, they figured out what could hold the staircase together. If you always match A with T and G with C, the bonding that holds the two sides together is maximized.


In April, 1953, which is only a year after people became convinced that DNA was the information molecule, Watson and Crick published a one-page paper in the journal Nature, which described the double helix. They described the molecular structure and how they thought it would all fit together. The real significance of this work was not simply to describe the 3D structure of DNA, but to show how that 3D structure might actually say something about replication.

Watson and Crick’s paper ends with the following sentence: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”

The fact that A and T, and G and C, are always paired together meant that if you took the two sides of the molecule apart, you would always know what the other side had to be. That’s called complementary base-pairing. It was this fact that suggested the mechanism by which DNA is replicated. I’m very excited about this subject, but I will leave it for another article.

The DNA Structure, Part I

After the work of Hershey and Chase, biologists in the early 1950’s became convinced that DNA was what they needed to look at to understand the genetic code. They actually had no idea how DNA could possibly act as a mechanism for genetic inheritance. Let’s step back and remind ourselves of what this molecule has to accomplish. It needs to do two things. First, it needs to have some way of providing a code that can store information about proteins.

The linear structure of DNA, with variable bases along the chain, was consistent with the idea that it could provide such a code. Proteins also are linear chains. So, you can imagine that there could be some sort of mapping of the pattern of one molecule onto the pattern of the other. It wasn’t clear what this mapping could be, but they thought they could figure that out. I will talk about that code in another article.

Second, the molecule had to be able to replicate. If we are going to transmit genetic information from one generation to the next, it won’t work unless we make duplicate copies of the code, so we can hand one copy of the blueprint to the offspring. The real problem was that it wasn’t clear how DNA could be replicated. The linear structure of DNA offered some hope for a code, but it didn’t offer a clue about how replication might occur.

This is where the race, literally a race, for discovering the three-dimensional structure of DNA began. Scientists were convinced they needed to know the three dimensional structure of DNA to understand replication. It was this impetus that led to the discovery of the now famous DNA double-helix, by Watson and Crick. This was arguably the most important finding in biology in the 20th century.

Why should biologists be interested in the three-dimensional structure of DNA? Biochemists had begun to understand, in the 1950’s, that the function of proteins could be understood by figuring out something about their structure. So, it was hoped that some aspect of the function of DNA, specifically how it was replicated, could be understood by deducing its structure.

What kind of evidence could you use to deduce the three-dimensional structure? The first kind of evidence came from a procedure known as x-ray crystallography. The positions of atoms in a crystal can be inferred from the pattern that they create when you shoot a beam of x-rays through that substance. The x-rays bend around those atoms, and then, when you look at the pattern on the other side, you see lines and dots in particular orientations and spacings. From them, if you’re familiar with the procedure and very clever, you’ll be able to deduce something about the relative positions and orientations of the atoms that make up that structure.

If you shoot an x-ray through a relatively simple crystal, you’ll get a very regular pattern. If you shoot an x-ray through a more complex material, like an organic molecule, you’ll get a much more complex pattern. It is pretty difficult to pull information from that pattern and infer something about the way the molecule must be structured.

You may be asking, what are we talking about here, crystals or organic molecules? Well, even the most complex organic molecules, in the hands of a good biochemist, can be crystallized. In fact, that’s where the story starts. Maurice Wilkins, a biophysicist working at King’s College in London, was able to produce a remarkably pure crystal of DNA. Working with Wilkins was a woman named Rosalind Franklin. She was an expert x-ray crystallographer. She took Wilkins’ purified DNA crystal and was able to get what was to that point the most clear and accurate x-ray crystallograph of DNA that had yet been obtained.

As I said, these patterns are quite hard to interpret. Nowadays we can use computers and algorithms to deduce the structure of proteins from this kind of data, but back then it was pretty much painstaking hand calculation and educated guesswork. As it turns out, Wilkins and Franklin puzzled over their x-ray crystallograph trying to deduce something about the structure of DNA. They didn’t quite figure it out. At about this time, another person arrived on the scene: James Watson.

Before I introduce Watson, I want to introduce another kind of clue. There was a scientist working at Columbia University named Erwin Chargaff, who discovered a peculiar thing about DNA. He found that if you took the DNA from any organism, and decomposed it into its component nucleotides, you always found a quite interesting relationship. You always found that the amount of the base adenine (A), always equals the amount of thymine (T). Similarly, the amount of guanine (G), always equals the amount of cytosine (C). So, if you take apart the DNA of any organism, you always will find that the amount of A equals the amount T, and the amount of G equals the amount of C.
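Chargaff’s bookkeeping is easy to illustrate with a short Python check (a toy example; the short sequences below are made up, and real genomes are of course vastly longer):

```python
from collections import Counter

def obeys_chargaff(dna):
    """Check Chargaff's rule on a pooled double-stranded sequence:
    the count of A equals the count of T, and G equals C."""
    counts = Counter(dna)
    return counts["A"] == counts["T"] and counts["G"] == counts["C"]

# Pool both strands of a short double helix together:
strand = "ATGGCA"
complement = "TACCGT"
print(obeys_chargaff(strand + complement))  # True
```

In hindsight the rule is an automatic consequence of base-pairing: every A on one strand faces a T on the other, and every G faces a C, so the pooled counts must balance.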

This was a very curious relationship that became known as Chargaff’s rule. Next time we’re going to talk about James Watson and his codiscovery of the double-helix with Francis Crick.

The Building Blocks of DNA: What Is DNA Made Of?

The building blocks of nucleic acids are called nucleotides. There are only four types of nucleotides. This is one of the reasons why nucleic acids seem relatively simple compared to proteins. Each nucleotide has a sugar that forms a ring. Bonded to one part of this sugar is something called a phosphate group. This phosphate group is just a phosphorus atom with a bunch of oxygen around it. Bonded to another part of the sugar is a nitrogenous base. It is the nitrogenous base that differs from nucleotide to nucleotide. There are four different kinds of nitrogenous bases.

Organic chemists like to number carbons. They actually love carbons, because organic chemistry is all about carbons, so much so that they don’t even write them down when they are drawing structures of molecules. If there is a line between two elements, you must assume that there is a carbon there. Instead of writing them out, they number the carbons in the sugar around the ring; in the five-carbon sugar of DNA, the numbering runs from 1′ to 5′.

There are four kinds of nitrogenous bases. These can be classed into two groups which differ in size. The pyrimidines have a single six-element ring that’s made of carbons and nitrogens. There are two kinds of pyrimidines: cytosine and thymine. The details of their structures aren’t important here.

The purines, the second group of nitrogenous bases, are bigger. They are composed of two rings: in addition to the six-element ring that the pyrimidines have, there is a five-element ring attached on the side. There are two kinds of purines: adenine and guanine.

We usually refer to the different nucleotides that we find in nucleic acids by the single letter which designates the type of base that it has. We have:

A: adenine.
G: guanine.
C: cytosine.
T: thymine.

Next time we’ll begin to see how this relatively simple polymer, with phosphates and sugars, could serve as a code.

A History Of DNA, Part III: The Code is in DNA

In my last post we saw how Avery and his colleagues demonstrated (though not conclusively to the scientific community) that the molecule which holds the genetic material in living things is DNA. Now I want to look at a very interesting experiment that really changed the minds of biologists on the matter. In the early 1950’s, Hershey and Chase took a novel approach to trying to find out what the genetic material might be made of, by looking at how a particular kind of virus worked.

Let me give you some background. Viruses are not true cells. They are made of an outer coat of protein with an inner core of nucleic acid. Viruses are made of just those two things. The way a virus makes its living is by attaching to a cell, say a bacterial cell, injecting something into that cell, and taking over the machinery of the cell. Here is a great introduction to viruses by Salman Khan (Sal); I really recommend you watch it to understand viruses better.



Hershey and Chase were working with a particular kind of virus, called the T2 phage. This is a bacteria-eating virus, which makes its living by taking over a bacteria and using the protein-synthesizing machinery of the bacteria to make more viruses. Viruses can’t replicate themselves, they have to take over another cell. Clearly, then, what a virus must be doing, is injecting some information. It’s the information that would cause the cell to be taken over. What Hershey and Chase set out to do was to ask, what is it that these T2 viruses are actually putting inside the bacteria? There were only two candidates, proteins and nucleic acids.

The trick was to figure out how to determine which part was being injected. It is a very simple experiment to propose conceptually, but like many experiments in science, the devil is in the details. Hershey and Chase developed a very clever way to figure that out. They did this by radioactively labeling the proteins and the DNA that the virus was made of. In proteins, sulfur is a fairly common element, and there is a radioactive form of sulfur (S-35). So, they could grow some T2 viruses in a medium that had a lot of this radioactive sulfur in it. As the viruses reproduced, they would incorporate the radioactive sulfur into their protein coats. That meant that you could ask not where the protein went, but where the radioactivity went.

Alternatively, they could label the DNA. They could grow the same kind of virus in a medium that had radioactive phosphorus (P-32). Phosphorus is not found in proteins, but it is a major chemical constituent of DNA.

So, they grew viruses in a medium that either had radioactive sulfur or radioactive phosphorus. This resulted in some viruses having their proteins radioactively labeled, and others their DNA radioactively labeled.

In separate experiments, they added to bacterial cultures either the sulfur-labeled viruses (with the radioactive protein) or the phosphorus-labeled viruses (with the radioactively labeled DNA). In both cases they would give the viruses just a couple of minutes: enough time for them to attach to the bacteria and inject whatever they were injecting, but not enough time to take over the cells and cause them to build more viruses. Then they would stop the whole process.

After this brief incubation, they put the solution in a blender, which sheared the empty viral coats off the bacterial surfaces. Then they put the solution in a centrifuge, which spins it around. Because of the action of the centrifuge, the heavier stuff goes down to the bottom of the tube. This would be the relatively large bacterial cell bodies. The lighter stuff, the outer coats of the tiny viruses, remains up in the solution. If you centrifuge them just right, you’ll get a little lump of stuff at the bottom of the tube; that’s going to be all the bacteria. The rest of the fluid in the tube would include the viral coats.

They then would ask: where is the radioactivity? Is it at the bottom of the tube, or in the rest of the fluid? What they found was that when they labeled the proteins with sulfur, the radioactivity was found in the fluid, where the viral coats were. When they labeled the DNA with phosphorus, the radioactivity was found at the bottom, where the bacteria were. This was a very simple result, but it took the world by storm, because it showed incontrovertibly that what these viruses were injecting into the bacteria, which happened to be the genetic material, was DNA.

Hershey and Chase published these results in 1952, and it really caused a lot of interest. Biologists began to take a closer look at nucleic acids. That is what I want to do in my next post, look at the structure of DNA.

A History Of DNA, Part II: Proteins Vs. DNA

In the early part of the 20th century, when Griffith published his work, there was generally an assumption that the genetic material must be a protein. Why did they think that? They thought it because pretty much everything that happens in the cell is done by a protein. It makes sense that if you have something complex and important being done in the cell, like carrying information, it is probably going to be done by a protein.

By weight, if you analyze the content of a typical chromosome, there is five to ten times more protein than DNA. Another thing going against DNA was its apparent simplicity: nucleic acids (DNA and RNA) are strings of subunits called nucleotides, and there are only four different kinds of nucleotides. Proteins, on the other hand, are made of 20 different kinds of amino acids.

Another thing they knew about nucleic acids was that they are structurally quite boring. Proteins have complex structures that determine their functions. Nucleic acids seemed to be strings that just lay there; they didn’t have these complex structures. So, the sequence of the building blocks of nucleic acids seemed rather simple (with only four kinds), and the structure of nucleic acids seemed kind of simple and not very useful. Everybody assumed that it must be proteins that were somehow holding the code.

There was, however, one nagging piece of evidence that argued against proteins. If you heat proteins up, they break down, because the chemical interactions that hold a protein together start to break apart; the protein loses its configuration and changes its shape. The problem is that when Griffith heated up those S strain bacteria to kill them, he probably denatured a lot of their proteins, and yet the transforming principle still worked. So, there was some evidence that it might not be proteins, but nonetheless most biochemists thought that they should be looking at proteins in the early part of the 20th century.

How did scientists try to solve this problem? Back then, they used biochemical procedures that would selectively break down particular kinds of molecules. This kind of work was done in the early 1940’s by three researchers at the Rockefeller Institute: Oswald Avery, Colin MacLeod, and Maclyn McCarty. These were biochemists who had been developing relatively sophisticated techniques for selectively breaking down different classes of biological molecules. We have four major classes of molecules: proteins, nucleic acids, carbohydrates, and lipids.

If you could take a beaker of transforming principle from experiments similar to Griffith’s, and selectively break down each of these classes of biological molecules, then you could ask which molecule, when it is broken down, causes the transforming principle to no longer work. You take some S strain and R strain bacteria, extract the substance you would call transforming principle, and then treat that solution to selectively knock out the proteins, or the nucleic acids, or the other molecules. Then you ask: which one, when broken down, causes the transforming principle to no longer transform?

They did that, and this is what they found. They could break down the carbohydrates: no problem, they still got transformation. They could break down the lipids: still got transformation. They could break down the proteins, and there was still no problem. If they broke down the nucleic acids, however, the transformation stopped. They concluded from that that the transforming principle Griffith discovered must be some kind of nucleic acid.

To me that is pretty good evidence, but interestingly, in the 1940’s, that result wasn’t widely accepted. This was for a couple of reasons. First of all, there was growing interest in protein biochemistry, and a lot of people were still focusing on the importance of proteins. There was a bias against believing it could possibly not be proteins that held the code. The other reason people were critical is that the biochemical techniques these researchers were using were relatively novel. There was some argument that they might not have destroyed all of the class of molecules they thought they had destroyed.

So, there was no way for Avery and his colleagues to prove otherwise at the time, and the issue stood. This is where the work of two other researchers came in, about a decade later: Alfred Hershey and Martha Chase. In my next post I will talk about their clever experiment, which shaped modern biochemistry.

A History Of DNA, Part I: Before the Discovery

Proteins are the biological molecules that make things work in living systems. We could say that they are involved in every process in the cell. They are controllers of biochemical reactions, structural elements that hold parts of the cell together, motors that make things move, signals, and so forth. The function of a protein depends almost entirely on its shape. Its three dimensional shape determines its physical and chemical properties, which in turn allow the protein to serve its unique function. The three dimensional shape of a protein, in turn, depends almost entirely on the linear sequence of its building blocks, the amino acids.

There are 20 kinds of amino acids, and an average protein might have a few hundred of them. So, proteins are the workhorses, and their function is determined by their sequence of amino acids. Now, how do we get a protein of a particular sequence built? To answer this question we need to address two things. First, what is the blueprint that is used to build proteins? Second, how does that molecule actually work?

It is the first question that I want to talk about today. Before we can understand how the code works, we need to understand what the code is made of. I think that we all know the answer today: DNA. Interestingly, though, that was one of the questions that defined molecular biology in the 20th century.

At the very beginning of the 20th century, it was already known to scientists that the code was in genes, which in turn resided in chromosomes. This was known from the work of early cell biologists, Walter Sutton and Theodor Boveri being the most important, who discovered that the particular movements of chromosomes that occur when cells divide corresponded to the patterns of transmission of traits between parents and their offspring. These patterns of trait transmission had actually been discovered earlier by the Austrian monk Gregor Mendel.

Cell biologists knew about Mendel’s work, how chromosomes moved, and they developed what now is called the chromosomal theory of inheritance. This theory basically says that the way chromosomes move is somehow related to the way that inheritance occurs, therefore, chromosomes are related to information in cells.

We know today that chromosomes are made of DNA, but how did that become a known fact? We must begin by going back to the earlier part of the 20th century, to the work of an English physician named Frederick Griffith. The experiment that I am about to describe really provided the first insight into the chemical nature of genetic information.


Griffith’s Experiment


Griffith was a physician, and he wasn’t interested in the molecular basis of inheritance. Instead, he was working on a much more applied problem. He was studying Streptococcus pneumoniae, a bacterium that causes pneumonia in humans. What Griffith wanted to do was to develop a vaccine against this particular organism, because the pneumonia it caused often proved fatal. This was before the advent of antibiotics, of course.

As is often the case for disease-causing bacteria, there were different strains that varied in their virulence, in how likely they were to induce the disease and cause death. Griffith was working with two strains of Streptococcus pneumoniae. On the one hand, he was working with what we call the S strain. This was a very virulent strain. It is called S because if you grow it in a colony, it actually looks kind of shiny and smooth. He had another strain, which he called the R strain, which was non-virulent. If you got the R strain, you might get a little sick, but you wouldn’t die. It is called the R strain because the bacteria look kind of rough on the surface.

The important thing to note is that these strains bred true. In other words, as they reproduced, their offspring had the same properties: S strain bacteria gave rise to more S strain bacteria, and so on. It can be inferred from this that the difference between the S strain and the R strain bacteria must somehow be genetically encoded.

What did Griffith do? Griffith was using the approach pioneered by Louis Pasteur, which was to take the organism that you want to develop a vaccine for and kill it. This organism could no longer harm you, but nonetheless, would perhaps induce some sort of immune response if injected in a subject. The idea was to take S strain bacteria, kill them by heating them up, and then take these dead S strain bacteria and inject them into a laboratory mouse, and see if that mouse develops immunity. The mouse wouldn’t die if you injected dead S strain bacteria, but the parts of the bacteria that are injected might nonetheless induce an immune response.

This is a great idea, but it didn’t work. It often doesn’t work. There wasn’t enough left of these dead S strain bacteria to induce an immune response. If you injected these in a mouse, and then injected live S strain bacteria, the mouse would die.

Another common technique that was used, then and now, to develop vaccines was to inject not only the dead offending organism, but also some related organism that was less virulent. In this case, we are talking about the R strain bacteria. The idea is that the live R strain bacteria, because they are alive, would induce the organism to develop a full immune response, and in developing that full response the animal would somehow pick up some immunity to the dead S strain bacteria. This is a common technique and it often works.

What did Griffith do then? He killed some S strain bacteria and injected them in with living R strain bacteria. What he hoped would happen is that the mouse would develop immunity to the S strain, but what happened instead was unexpected, and unfortunate for the mouse: the mouse died. This is a surprising result. What was injected into the mouse were dead S strain bacteria, which wouldn’t kill the mouse, and live R strain bacteria, which wouldn’t kill the mouse either. Nothing was injected into the mouse that should have killed it, and yet the mouse died.

What Griffith found out when he dissected the poor dead mouse was that inside it were living S strain bacteria. He injected dead S strain, and when he took the mouse apart, he found living S strain. What Griffith concluded from this work, correctly, was that the living R strain bacteria had somehow taken up something from the dead S strain bacteria and incorporated it into themselves. That somehow transformed the R strain into the S strain.

Griffith later showed that he didn’t need the mouse; you can do this in a beaker. If you put dead S strain and live R strain bacteria in a beaker, some of the live R strain become transformed into live S strain.

What’s really interesting from this experiment, in my opinion, is that the material the R strain bacteria incorporated must somehow be genetic material. It must somehow have information in it. How else could the R strain bacteria now become virulent like S strain bacteria? The difference between S and R had to have something to do with genetically transmitted information. Griffith, however, left it there. It was only 1928 when he was doing this work. He said that he had discovered what he called the transforming principle.

Our interest in this, and the interest of scientists studying genetics, is that this transforming stuff must somehow involve genetic material. In my next post we will see how this experiment helped in discovering that DNA is the molecule that holds the genetic material in living things.

Christians Juggling Biology and Theology Before Darwin

The great French naturalist Georges Cuvier, more than anyone else, founded modern biology during the early 1800’s. He had a very long career in France, held very important positions, and was highly regarded worldwide. He was part of the reaction against the atheistic speculations that emerged in the 18th century. He based his work on empirical research rather than on speculation, and with that research, Cuvier thought he found plenty of evidence against evolution.

In his early work, he focused on the internal structure of various species, rather than on their external structure. He carefully studied how species are designed internally. From this work, Cuvier concluded that there are only a few basic patterns of animal organization. The various species that we see out there are simply variations on these basic designed types.

Looking at individual species, he saw that the bodily interactions within each species are so delicate that any significant change in them would render the individual incapable of survival. Organisms are so finely balanced that if anything in them changes significantly, the organism collapses.

From his viewpoint, the origin of new species through evolution was simply impossible. It is not surprising that this view fit with Cuvier’s basic Christian convictions; in his case, however, it was fundamentally based on his scientific research.

He became Chief of the French Museum of Natural History during the Napoleonic era. There, Cuvier moved from individual research to overseeing an entire laboratory. He oversaw the first comprehensive collections of fossils and biological specimens. Napoleon was conquering much of the world, and he wanted to bring trophies back to Paris. At this time, science was a vehicle of showing success and power. Fossils and mummies were the trophies, and Cuvier was in charge of overseeing a lot of them.

From his research with these fossils and his studies of biological specimens, he found that there were no significant changes in living organisms over time. The fossils seemed unchanged from the earliest times. He maintained that the basic types worked, and that they couldn’t be changed.

Cuvier also was the man to establish that certain fossils, such as the mastodon from America, represented extinct species. This doesn’t directly agree with the Bible. With this, he showed his openness to new evidence (something which many Christians today should try to imitate).

When he looked closer he found sharp breaks in the fossil record, with each break containing a distinctive array of fossil types. He was the one who originated the idea of different geological epochs. Before Cuvier, the past was just the same as the present. He broke it into periods: the Paleozoic, where you find mostly marine invertebrates; the Mesozoic, where giant reptiles appeared; and then the Cenozoic, the age of birds and mammals.

He saw that each epoch had its own types of species; then there were breaks, and new types were laid on top. This suggested to Cuvier that there had been great catastrophic extinctions in the past, caused by vast environmental catastrophes, probably worldwide floods (though he never suggested Noah’s flood was one of them). When his followers could not find any source for the repopulation of regions after the catastrophes, they concluded that God, or some vital force in nature, must have recreated life after each catastrophe, modeled on the few basic viable types.

Fully developed by the mid 1800’s, this theory held that the Earth had gone through a series of massive floods or ice ages. This was followed by the creation of new life. This theory vastly elongated the biblical chronology. Many of Cuvier’s followers, who were Christians, tried to reconcile it with the account of creation found in the book of Genesis, by arguing that the days mentioned in Genesis were geological ages. This is the day-age view of creation.

After each massive catastrophe, God would recreate life. The intelligent design of each species, and the ongoing need for the intervention of a superior intelligence, proved, to these Christians, God’s existence and his intervention throughout history. Cuvier’s science was employed to prove Christian theology. This is where biology stood in Darwin’s youth. This is the biology that Darwin learnt when he went to Cambridge.

De Maillet's Theory of Evolution

Among the earliest people to suggest that life had developed from simple to complex forms was Benoît de Maillet, who lived from 1656 to 1738. He realized his ideas were over the top for his day, so he didn’t just come right out and declare the evolution of life to be his view. He used instead an old tactic that others had tried before him: put the ideas out there, but in a form that permits you to say “I’m not saying I believe this, I’m just reporting what others have said”. In this case, de Maillet placed the evolutionary ideas in the mouth of an Indian philosopher.

How did de Maillet conclude that the Earth had been evolving in the first place? The answer to that is pretty interesting. He came from a good family in France and ended up as an ambassador to Egypt at the beginning of the 18th century. He went above and beyond the normal practice of making acquaintance with the region. De Maillet traveled widely in the Mediterranean. He was enormously curious about lots of things. For example, he wanted to know about the features of the Earth’s surface in the regions he visited.

I think he got some of this natural curiosity from his upbringing. His grandfather seemed to have been a similar kind of person. The family home was near the sea shore, and his grandfather had a theory about the sea that he passed down to his grandson. He thought he observed that the water level of the sea was dropping. De Maillet’s grandfather convinced his grandson of this. When he found himself in Egypt and other places around the Mediterranean, de Maillet began collecting his own information.

He mastered Arabic and read the histories of Arabic writers. When he traveled around he became familiar with historical landmarks, including the historical records that went with them. He deliberately exposed himself to this foreign Near Eastern culture, whose understanding of the history of the Earth was very different from that of Christian France. He became more open to the possibility that history had been going on a lot longer than what he learned as a youth.

When he examined sites from ancient Carthage he determined that the sea level had indeed been higher back when Carthage was an active port. His calculations suggested a rate of drop of three feet in a thousand years. He assumed this rate was, and had always been, constant. He then turned to the implications of this idea.

De Maillet expanded his new system to include the entire history of the Earth. He was a follower of Rene Descartes, who had used the collisions of matter to explain how everything worked in nature. De Maillet utilized such mechanical interactions together with his own observations to create a non-Christian cosmogony.


The Publication of the Telliamed


De Maillet knew his manuscript tested the limits of acceptability, so he tried to deflect criticism by attributing the views expressed in the book to a pagan foreigner. The title of the work was the foreigner’s name (which was his own name spelled backwards, how original): “Telliamed. Conversations of an Indian philosopher with a French missionary on the diminution of the sea, the formation of the Earth, the origin of man.”

From the alleged Indian understanding of the Earth’s past, the French missionary learned that the Earth was originally covered by water, whose currents carved out the mountains beneath its surface. The depths of the primitive seas gradually decreased, exposing the highest mountains. As the process of diminution continued, more dry land emerged. As the French missionary pondered these ideas, he brought them to their logical conclusion: “This emergence led to the growth of grass and plants on the rocks. The vegetation, in turn, led to the creation of animals. And finally, the animals led to the creation of man, as the last work of the hands of God”.

Telliamed himself did not invoke the direct action of God to explain the appearance of life. He didn’t give details, but he maintained that various forms of aquatic animals had changed, by natural processes, during the time the sea was gradually dropping. Flying fish grew little wings and became birds. Other fish grew feet and walked on land.

Clearly, such processes had taken a great deal of time, far longer than 6,000 years. Using the rate of diminution of three feet every thousand years, de Maillet concluded that over 2 billion years had passed since the primitive waters had begun to drop. Humans themselves were over 500,000 years old.
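De Maillet’s extrapolation is simple uniformitarian arithmetic: if the sea drops at a constant rate, the time needed to expose a given depth of land is just that depth divided by the rate. Here is a toy sketch of the reasoning; the constant-rate assumption is his, but the example depth is mine, chosen purely for illustration:

```python
# De Maillet's assumed rate: the sea drops 3 feet every 1000 years.
FEET_PER_MILLENNIUM = 3

def years_to_expose(depth_feet):
    """Years for the sea to drop by depth_feet at the assumed constant rate."""
    return depth_feet / FEET_PER_MILLENNIUM * 1000

# Exposing a (hypothetical) 6,000-foot peak would already take 2 million
# years, far beyond Ussher's roughly 6,000-year chronology.
print(years_to_expose(6000))  # → 2000000.0
```

The whole argument stands or falls with that one assumption of a constant rate, which is exactly where later critics attacked it.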

The public immediately saw through de Maillet’s technique of camouflaging his ideas in a pagan philosophy. The reaction, I’m sure you’re not surprised, was outrage. Even Voltaire thought de Maillet had gone too far. In the years after Telliamed appeared, Voltaire noted that there was no support for such an outrageous notion.

Retribution came down on this heretical work from far more official sources than Voltaire. De Maillet, who was long dead, was safe, but his book was roundly denounced. I find the story of De Maillet very interesting and enlightening. It shows us how difficult it was, and it is still today, to put forth ideas contrary to popular belief.

Return from De Maillet's Theory of Evolution to Darwin's Theory of Evolution

Theories of Origins Before Darwin

What were the theories of origins before Darwin? What was there before Darwin? What did people think about the origin of life and the different species on Earth? One thing characterizes people, as Descartes would say: they think. As thinking creatures, we have always wondered about how the universe and the things in it originated. We are particularly interested in our own origin.

The first chapter of Genesis contains the Christian creation account. It tells of God creating the heavens and the Earth, plants and animals, and then man in God’s image, all in six days. The Bible doesn’t state when this creation occurred, but most early Christians probably assumed that it was not too long ago. In the 1600’s, the Anglican bishop James Ussher fixed the date of creation at 4004 B.C.E. This became the established biblical chronology, and it persists in some circles to the present.


The Early Scientific Accounts of Origins


Over the past 2000 years, this creationist account has not existed alone within the Western tradition; religious accounts of origins have competed with scientific accounts. Science began with the Greeks, about 600 B.C.E. At this time we find the first scientific explanations for natural phenomena.

Although many Greeks retained religious theories about nature founded on revelation or mythical stories, some philosophers proposed materialistic explanations founded on reason. What do I mean by materialistic? It means that they explained natural phenomena without recourse to God or the supernatural. These philosophers said that natural phenomena can be explained as the result of physical matter moving in accord with natural law, with God, at most, as the remote creator of the primordial matter and the laws of motion. We find this sort of account in Plato, for whom God created the primordial matter and its laws, and then left it to operate.

Biological origins posed a particular puzzle for Greeks who tried to devise purely materialistic explanations for natural phenomena. Biological organisms, people especially, seemed much more intricate and intelligently designed than rocks or mountains. They seemed created, and creation implies a creator.

So, to explain the origin of biological organisms, early natural philosophers like Anaximander and the so-called atomists proposed crude theories of evolution. They had the idea that there was some sort of spontaneous generation of life and that species could somehow evolve over time, but these theories were not very detailed and weren’t worked out very well.

Aristotle critiqued these ideas. Aristotle himself was no creationist, and he was first and foremost a biologist. He was a very avid observer of life, particularly of fishes. Based on his close study of animals, Aristotle defined a species as a breeding group: a group of particular animals or plants that can interbreed and produce offspring that can themselves reproduce. He concluded that species were fixed.

Rejecting both creation and evolution, Aristotle simply saw the species as eternal. They always existed. Later Christian philosophers tried to integrate Genesis with Aristotle. They typically viewed each species as created by God in the beginning, but then, using Aristotelian authority, asserted that these species remained fixed for all time in a perfect (albeit fallen) creation.

This was the dominant view for a millennium in the West. It began to break down, though, when religious authority began to break down.


Deist and Atheist Accounts


The breakdown of religious authority finally occurred during the Enlightenment, in the 1700’s, and notions of evolution began creeping back in. This happened particularly in France, where natural philosophers again struggled to devise purely materialistic explanations for life. Seeking to push God back to the beginning, deists proposed a variety of ideas. They proposed, for example, that the solar system was created not by God, but by a comet that once hit the sun and knocked off a bunch of matter, which separated, each piece becoming a planet.

They also proposed ideas for the origin of species, saying that the tremendous array of species evolved from a few common ancestral types. Some of the French natural philosophers were even more atheistic. Denis Diderot, for example, a committed materialist, proposed that all living forms developed by random chance mutations from spontaneously generated organisms.

Probably the most influential natural philosopher from this period was the astronomer Pierre Laplace, who proposed a purely materialistic explanation for the origin of the solar system. He said that the solar system was once a big rotating gas nebula; as it rotated and contracted, most of the matter was drawn into the center, which became the sun, while the contraction left behind rings of material that collapsed into the different planets. This was called the nebular hypothesis.

When Laplace described his theory to Napoleon, he was asked “how does God fit into it?”, Laplace famously responded “I have no need for God in my hypothesis”.

All the ideas that we’ve gone over were highly speculative, driven more by philosophy than by empirical scientific research. There were a few discoveries at the time, though, that reinforced these ideas.

For example, Abraham Trembley discovered that polyps, which are very simple aquatic creatures, could regenerate: cut into pieces, each piece regenerated a whole animal. They could even be turned inside out and still function. People saw this as “almost spontaneous generation”, and philosophers took it as scientific evidence for their speculations.

Overall, however, the empirical research of this period cut the other way. Even as these ideas were being speculated about, when people actually did the experimental and observational work in nature, most of what they found opposed the evolutionary ideas. The generation after the Enlightenment reacted against the speculative nature of evolutionary ideas. They returned to creationism: not the creationism of the Bible, but a creationism based on scientific evidence. I will discuss this in my next post, when we see how Georges Cuvier founded modern biology.

Return From Theories of Origins Before Darwin To Darwin's Theory of Evolution

The Truth About Ussher's Chronology

James Ussher was an Irishman born near the end of the 16th century. Elizabeth was on Britain’s throne and remained there until Ussher was 22. By that time, he had accomplished a great deal. He was very gifted with languages. Young James went off to Dublin to enter Trinity College when he was only 13. He was ordained a priest at the age of 20, and became a professor at Trinity when he was only 26. When he was 44, he became Archbishop of Armagh. That made him the head of the Anglo-Irish Church, a Protestant leader in a predominantly Catholic land.

Ussher’s skills and interests were scholarly. He held an administrative position as Archbishop, but his heart lay elsewhere. In fact, he was criticized as an administrator because his inclination was to debate, not to simply deal with opposition through the politics of intolerant decree.

It was during the final period of his life that he wrote the work for which he is now famous, the "Annals of the Old Testament, deduced from the first origins of the world", which appeared right at mid-century, in 1650. It is common to say that Ussher reached his famous date of 4004 B.C.E. simply by counting back from the time of Jesus, adding up the years in the lineages of Christ given in the Bible all the way back to Adam. It was much more complicated than that.

The Old Testament gives complete enough information to make an accurate calculation up to the time of Solomon, but after that, ambiguities begin to creep in. For approximately the last 400 years before the birth of Jesus, the Bible gives no help at all. What Ussher did was to correlate information from this period with known dates from the histories of other cultures, specifically the Chaldean and Persian cultures. This required incredible expertise in biblical history, secular history, and languages. The bulk of the knowledge he applied was non-biblical. Ussher had one of the best minds of his time.

The late Stephen Jay Gould, noted paleontologist and Darwinian evolutionist, once wrote a wonderful explanation of how Ussher came to his conclusions, including how he arrived at the precise date he gave for the creation: October 23, at noon.

Gould’s point in taking up the subject was to criticize those who dismiss Ussher’s work as the application of dogma to a scientific subject. They are not only ignorant, but they miss the point entirely. Gould said “I close with a final plea for judging people by their own criteria, not by later standards that they couldn't possibly know or assess”.

He was delighted with Ussher’s explanation of how he determined his result: not only by the plain use of Holy Scripture, but also by the light of reason well directed. Because of his erudition, Ussher’s calculation remained accepted in the Western Christian tradition for a long time.

King James commissioned a translation of the Bible in the first decade of the 17th century. It became the Authorized Version in English and survives to this day. Within a half-century of Ussher’s death, his date of 4004 B.C.E. for the creation was inserted into the column of annotations that stood between the double columns of text. It remained there until the second half of the 20th century.

The Origin of Life, Part IV: Genetic Code

Okay. We’ve shown that it is possible for cell-like structures to generate spontaneously under certain conditions. How do we get from protobionts to all the enormously complicated and diverse life we see today? We don’t know the answer to that question, and we probably never will. We do know part of the answer, however, and it has to do with reproduction.

How does a living system reproduce? What, minimally, do we need to get reproduction? How reproduction arose is an especially tricky problem; it is the most debated question in origin-of-life research today.

To understand what is needed for reproduction, let’s imagine we’re back in time. Let’s imagine we have some proto-cells that are functioning. Let’s say that by chance, one of these protobionts just happens to come up with some unique new trait. This trait could be anything. For example, it could be a new kind of molecule that makes this cell more durable. It could be a new kind of molecule that increases its ability to take up material from the outside.

This protobiont is different from the rest: it is somehow more efficient, better at doing its job. The problem is that there is only one of it, and that individual won’t last forever. Even if it did, there would still be only one. This issue leads us to reproduction. The problem would be solved if our protobiont could reproduce itself in a way that passed the useful trait on to its progeny. How does it do that? Well, cells split in two: one cell grows a little larger and divides. In essence, that’s reproduction. It is not enough, however.


The Genetic Code and the Problem of Replication


If the trait we are talking about is a molecule, which daughter cell gets the molecule? Even if there are many copies of the molecule and each daughter cell gets half of them, and the daughters of those cells get half again, the trait will eventually be diluted away. What we need instead is for these primitive cells to somehow make complete, accurate new copies of themselves. They have to be able to store information about the structure of the molecule and transfer that information to their offspring.
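To see why simple splitting is not enough, here is a toy calculation (my own illustration; the starting count is arbitrary): if a protobiont starts with some stock of the useful molecule, never makes new copies, and divides its stock evenly at each division, the average count per cell falls geometrically.

```python
# Toy model of trait dilution: a protobiont starts with `initial_copies` of a
# useful molecule, never synthesizes new ones, and splits its stock evenly
# between two daughters at every division.
def copies_per_cell(initial_copies, generations):
    """Average copies of the molecule per cell after `generations` divisions."""
    return initial_copies / (2 ** generations)

for gen in (0, 5, 10, 20):
    print(f"generation {gen:2d}: {copies_per_cell(1000, gen):.4f} copies/cell")
```

After only ten divisions, a stock of 1,000 molecules averages less than one copy per cell. Without a way to copy the information behind the trait, the trait simply fades out of the population.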

How such a mechanism for storing and transmitting this kind of information came about is one of the unresolved questions about the origin of life. We know, however, that there is such a molecule in modern cells: a molecule that serves as a blueprint for making the cell’s other molecules. This molecule is called deoxyribonucleic acid, or DNA.

DNA passes its information on to another kind of nucleic acid, RNA, and from RNA the information goes into proteins. This is how information flows in modern cells. In this system, DNA acts as a blueprint, RNA as the translator, and proteins are the product of that blueprint. Proteins do much of the real work in modern cells.

Here we encounter a really serious problem, however. DNA could not have been the storage molecule that first arose in early life. Why not? The reason is that DNA can’t replicate itself. DNA requires a huge number of other proteins acting as enzymes to replicate. DNA in modern cells can be replicated but only if there are proteins to do the replication job. Proteins that could do that replication job might have arisen sometime in the early history of life on Earth, but they couldn’t have arisen before there was DNA to store their code. We need to postulate simultaneously the appearance of DNA that could store information about proteins and proteins that could replicate that DNA. Which came first, the chicken or the egg?

Neither could have come first, because DNA and proteins cannot exist without each other in modern cells. And it is unbelievably improbable that just the right kind of proteins and just the right kind of DNA both happened to arise spontaneously at the same time in the early history of life.
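To get a feel for “unbelievably improbable”, here is a rough back-of-the-envelope sketch (the protein length and the uniform-chance assumption are mine, purely for illustration): the chance of randomly assembling one specific, modest-sized protein from the 20 standard amino acids.

```python
# Back-of-the-envelope odds of randomly assembling one specific protein
# 100 residues long, picking uniformly from the 20 standard amino acids.
# (The length and the uniform-draw assumption are illustrative only.)
ALPHABET = 20   # standard amino acids
LENGTH = 100    # residues in our hypothetical protein

probability = ALPHABET ** -LENGTH  # one chance in 20**100, roughly 10**-130
print(f"chance of one specific sequence: {probability:.2e}")
```

And that is for a single protein. The chicken-and-egg scenario would need the right proteins and the right DNA to appear together, which is far less likely still.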

What was needed, instead, was some kind of molecule that could do both of these things: a molecule that could replicate itself and also do other useful work in the cell. Today we think that when life arose, the molecule that did this was the nucleic acid RNA, or some early form of what we know today as RNA. Why do we think that?


The RNA World


In the early 1960s, researchers began to suspect that RNA might have acted as the first blueprint, or genetic material. In the laboratory, it is possible to put some RNA and some building blocks into solution and, under the right conditions, have the RNA replicate. The RNA in the solution somehow acts as a template that helps the monomers come together in the right order and polymerize.

A second breakthrough that led people to think RNA might have been the first information-processing molecule came in the early 1980s, when Thomas Cech discovered that, in modern cells, some kinds of RNA do act as catalysts the way protein enzymes do. That is, they perform important biochemical tasks in the cell. They are generally called ribozymes.

The important point is that these ribozymes function as catalytic molecules just like protein enzymes. So we now have two things: evidence that RNA can replicate itself, and evidence that RNA can have catalytic function. Taken together, these results suggest that the very early stages of life, that magical point where a non-living protobiont somehow slipped over the edge into a state we might want to call a living cell, took place in what we now call an RNA world, a world in which RNA dominated as the key biological molecule.

At some point after the RNA world, things changed. RNA had gotten the system rolling, but eventually DNA and proteins took over. DNA took over the job of being the information-bearing molecule, and proteins took over the catalytic and other kinds of work in the cell. RNA was relegated to the role of an intermediate in the process.

Why this happened is fairly obvious. Proteins are extraordinarily versatile molecules that perform an enormous number of tasks. Their versatility comes from the fact that they can assume all sorts of complicated shapes in a way that RNA cannot. Proteins took over the real work of the cell because they were simply better at it. DNA, for its part, assumes a chemical configuration that makes it very good at storing information in a way that RNA is not. Once DNA existed, it was much better than RNA at storing information and being copied accurately, so it took over that job. RNA became just an intermediate.

I think that with this we have what is basically needed for the appearance of life. We’ve explained the origin of life, at least in part. Quite an accomplishment, eh? How we get from these simple cells to platypuses and everything else is another subject, and don’t worry, I’ll try to tackle it.


The Origin of Life, Part III: Primitive Cells

The experiments of Miller, Fox, Ferris, and others had shown that complex polymers could arise spontaneously on the early Earth. We know, however, that the organic molecules that make us up are not just a jumble of things floating around in a primordial soup; they are highly ordered. They come in highly ordered packages. There are many such packages in living systems, but the most fundamental one is the cell. All living things are made of units called cells. Minimally, being alive requires a barrier between the living part and the non-living environment. That barrier is what defines the cell.

Is it possible that some cell-like structure could arise spontaneously on the early Earth? Here, too, laboratory experiments suggest that the answer is yes. A number of experiments have demonstrated that, under conditions that are not too stringent, you can get aggregations of molecules that spontaneously form cell-like structures.

These spontaneously formed cell-like structures are called protobionts. You can actually make protobionts; it is not difficult to do, and it works under a number of different conditions. For example, if you have the right kind of lipids, you can almost literally put them in water and they spontaneously form a package in which a membrane of lipids encloses a central space.

The most remarkable kind of protobiont, called a coacervate, has been made to self-assemble out of a solution containing polypeptides, nucleic acids, and polysaccharides. Under the right conditions, these self-assemble into a cell-like object. What is really interesting about coacervates is that if you then throw some real biological molecules into the mix, a protein enzyme taken from a living cell, for example, the coacervates can take up those enzymes, bringing them inside themselves.

Those enzymes then start working inside the coacervates. What an enzyme does is process one kind of biological molecule into another, and once taken up by a coacervate, it starts carrying out its reactions and releasing the products. This is getting remarkably close to something we might want to call living.

I am not saying that we can make primitive cells. Nobody has yet made a cell that a biologist would look at and say “oh, that’s a cell you just made”. People are trying to do that now, but it hasn’t been done. We can, however, make cell-like things, and it doesn’t seem to be any big trick. These things form spontaneously; we know that for sure.


The Origin of Life, Part II: Polymerization

Okay, let’s go on with the origin of life. In my last post I talked about Miller’s experiment. Its significance was to show that non-biological processes can result in the formation of organic molecules, including amino acids and nucleotides. The molecules Miller got, however, were still relatively simple. They represented only a first small step.

Amino acids and nucleotides by themselves don’t get us very far; we need these simple molecules linked together. They act as building blocks for the more complicated stuff that we are really made of. The technical term for this process is polymerization. In other words, complex organic molecules like proteins and DNA are polymers: long chains of building blocks (monomers).

Miller was able to make the building blocks, but living things need those building blocks strung together into polymers.

Ordinarily, in living things today, a series of specialized proteins called enzymes is responsible for building these polymers out of the monomeric building blocks. What could have happened on the early Earth, in the absence of this specialized protein machinery, that might have led to polymerization?


Polymerization in the Laboratory


The first evidence that this was possible came fairly early, in the late 1950s, through the work of Sidney Walter Fox. Fox took Miller’s experiment one step further. He was able to take amino acids that might have been created in an experiment like Miller’s and get them to start joining together, but only under certain conditions: just the right proportions, just the right temperature, just the right heating time. Under those conditions, he could get short polymers of amino acids. We call a polymer of amino acids a protein, but we also call it a polypeptide chain, simply because the chemical bond that links the monomeric amino acids into the chain is called a peptide bond.

What Fox got were fairly short polypeptides, showing that spontaneous polymerization is possible. The problem was that Fox could only do this under a very narrow range of conditions. In Miller’s work, you could just throw a bunch of stuff into those flasks and get some sort of organic molecule. Fox’s work, however, required much more controlled conditions, conditions unlikely to have been those of the early Earth.


Origin in Clay?


Fox and a number of other scientists, however, speculated that you might get more spontaneous formation of polymers if you had some sort of non-biological catalyst. A catalyst is simply something that makes a chemical reaction run faster. What Fox and others suggested was that something non-biological could have catalyzed these polymerization reactions. Specifically, they proposed that certain kinds of clays may have acted as inorganic catalysts.

Why clay? It turns out that some kinds of clay, when they dry out, form very regular, ladder-like structures. Furthermore, these clays carry weak electrical charges on their surfaces, and organic molecules can adhere to those charges. The idea is that somewhere on the early Earth, the shore of a primitive ocean had a bed of clay. As organic molecules created in that primitive ocean accumulated along the shore, they adhered to the clay. And the clay, because of its regular order and spacing, would increase the probability of spontaneous polymerization.

Wow, that’s an interesting idea. Is there any evidence that it could work? We don’t know what the primitive Earth was like at that scale, but recent work by James Ferris, at the Rensselaer Polytechnic Institute, has shown that exactly this process does work under abiotic conditions. In the laboratory, Ferris and his colleagues have been able to synthesize not only short polypeptides but also short stretches of nucleic acid from the component building blocks created in experiments like those done by Miller.

The polymers that Ferris and others have produced are not functional. They are strings of monomers that have been polymerized, but they don’t “make sense”; they are not sequences that would do anything the way a real biological molecule might. Nevertheless, it is a start. We can now postulate that biological polymers could arise spontaneously.

So, let’s imagine that we have complex polymers. Let’s imagine a primitive ocean brimming with organic polymers, what has been called the primordial soup. Let’s imagine that some of these polymers have even come together, by chance, as strings with some sort of useful biological function, like modern polymers. Where do we go from there?

The experiments of Miller, Fox, Ferris, and others showed that this is possible, but even with all of this we still don’t have anything approaching what we would want to call life. Why not? Because the organic molecules that make us up are not just a jumble of things floating around in a primordial soup; they are highly ordered. They come in highly ordered packages. This will be the subject of my next post: cells. Stay tuned.


The Origin of Life, Part I: Miller's Experiment

As Richard Dawkins puts it, “the theory of evolution is about as much open to doubt as the theory that the Earth goes round the sun”. We can also be confident that all the diverse forms of life we see around us today have arisen from some common, primitive, single original living entity. Our first problem, then, is how this living “thing” originated. In 1953, Stanley Miller conducted his famous (or infamous) experiment. At the time he was a graduate student at the University of Chicago. For decades, scientists had speculated about whether the complex organic compounds characteristic of living things could somehow have been generated spontaneously on the early Earth. Spontaneous generation of organic compounds can’t happen today, because organic compounds are too fragile.

It is possible that, given enough time, a complex compound might just come together. If it did, however, it would immediately be taken apart, because today our planet is filled with oxygen. Oxygen breaks down organic compounds: it pulls electrons out of them and turns them into inorganic compounds.

How can we even get the formation of any kind of organic compound, if as soon as anything begins to arise by chance, it is immediately taken apart? Well, this one is easy. If oxygen is bothering you, just get rid of it.

Before the Miller-Urey experiment, two scientists, Alexander Oparin and J.B.S. Haldane, had independently suggested that the early Earth did not have much, or any, oxygen. Oxygen is all around us in the atmosphere today, but they proposed that when the planet formed, its first atmosphere was composed of just a few gases: hydrogen, methane, ammonia, and water vapor. This would be similar to the atmospheres that have been described for the moons of other planets.

Oparin and Haldane independently suggested that the problem of spontaneous generation of organic compounds wasn’t really a big deal, because the early Earth did not have an oxidizing atmosphere. To test this hypothesis, Miller set out to reproduce the conditions presumed to exist on the early Earth before life had arisen, and to see whether he could get spontaneous production of organic compounds.


A Simple Experiment, Powerful Results


Miller’s experiment was set up this way: he had two flasks connected by a series of glass tubes. In the lower flask he put water, which he heated gently with a small flame, causing it to evaporate and circulate as vapor into the upper flask. To the upper flask, Miller also added a number of other gases, creating an atmosphere similar to that of the early Earth, consisting of hydrogen, methane, ammonia, and water vapor.


The experiment. Source: Wikipedia.

Miller also exposed the gases in this upper chamber to a lot of energy, inserting two electrodes that created electrical sparks. He knew that energy was needed to create any kind of compound, certainly organic compounds.

This is actually a pretty simple experiment; you could almost do it in your own house, since all the materials are easily available. You could replicate Miller’s experiment and his results, which were spectacular. In only a couple of days, he found he could synthesize a whole range of different organic compounds, including some very complex ones, like amino acids.

The scientific community immediately set out to replicate this. Many people repeated the experiment, and it quickly became clear that, depending on the starting conditions, it was possible to produce spontaneously, without any preexisting organic molecule, all of the amino acids normally found in living material. Most intriguingly of all, you could create nucleotides, the building blocks of the nucleic acids DNA and RNA.

The implication of Miller’s experiment and those that followed was that complex organic compounds had no trouble at all arising spontaneously on the inorganic early Earth. This is a first stepping stone on the path from non-living matter to life.

On the other hand, as exciting as this result was, the organic compounds that Miller created were still relatively simple compared to the stuff we are made of. What else do we need to get something we would call “living”? We have to take the synthesis of organic compounds even further, beyond these building blocks, to the varied and extremely complex molecules that living systems are really made of. We will see how that was possible on the early Earth next time.


In the Beginning

In the beginning... there was a singularity. Physicists tell us that the universe, as we know it, began about 13.8 billion years ago, at a moment in time they call the Big Bang. Our own star is comparatively young; estimates are that it formed about 5 billion years ago. As our solar system was forming, cosmic dust gradually got swept up and began to form planets. Scientists estimate that our own planet reached its present size by about 4.6 billion years ago. That is generally taken as the age of the planet Earth.

In the beginning, planet Earth was a really miserable place. The way the planet formed, with ever larger chunks of material slamming into it, generated an enormous amount of heat. When the planet first formed, it was molten, no place where one could conceive of life existing. Less than a billion years later, however, the fossil record clearly shows that life was there, in the form of simple cells resembling the bacteria we see around us today.

This is pretty fast work, especially when you consider that it took about half a billion years just for the Earth to cool enough to have rocks and an atmosphere. In fact, some scientists now argue, based on fossil evidence, that life might have been present even earlier, as early as four billion years ago.

What we can take from this is that life appeared on the planet almost as soon as it was possible to do so. As soon as there were rocks to record the existence of life, we find evidence that life is there.


Where do these organisms come from?


In the beginning, life originated on the early Earth from non-living materials. All of the diverse forms of life we see around us today have arisen from some common, primitive, single original living entity. This is pretty deep stuff, a very cool idea.

There are alternatives to this account, of course. Many religious faiths hold that, in the beginning, life was bestowed on the planet by the work of a deity, but this is a pretty boring idea. Another alternative, one that has been suggested repeatedly over the years by a number of scientists, is the panspermia hypothesis. It suggests that the first life on Earth came from somewhere else in space.

Both of these alternatives, however, only push back the question of how living matter could arise from non-living matter. And that brings an important question to the table.

What is the minimal difference between living and non-living matter? This is basically the same as asking “what is life?”, a question that has been around for a long time. With all the knowledge we have at our disposal today, however, I think addressing it is pretty simple.


So, what is life?


The most fundamental difference between living and non-living matter has to do with chemistry; life is defined by what is called organic chemistry. Living things all have in common the fact that they are made of a particular class of chemical compounds, compounds built around the unique chemical properties of the element carbon. These are called organic compounds, so named because they are uniquely associated with living, organic things.

Broadly speaking, there are only four kinds of organic compounds. The first kind is the amino acids, the building blocks of proteins. The second kind is the nucleic acids, DNA and RNA. The third class is the carbohydrates, what we commonly call sugars. The last general class is the lipids. Lipids are what we commonly call fats in many cases, but they can actually take a number of different forms.

These organic compounds have particular and quite sophisticated chemical properties that are unique to them. One property is particularly remarkable: the complex organic compounds we find on the planet today, the stuff we are made of, are generally produced only through the action of living things. Another way to put this is that the creation of new organic matter depends on the existence of organic matter. You can’t make more organic compounds unless you already have organic compounds to make them with.

Given what we know about how the planet formed, we can be quite confident that the early Earth was entirely inorganic. Then we have to ask: where did the organic compounds that life depends on come from in the beginning? At this point, you might think I’m going to throw Intelligent Design and Creationist arguments at you. I won’t, don’t worry. I’ll create some tension and leave that question unanswered until next time, when we talk about the exciting, random, and unintelligent origin of life.


Copyright © 2010