'''Error catastrophe''' is the extinction of an [[organism]] (often in the context of [[microorganism]]s such as [[viruses]]) as a result of excessive mutations. Error catastrophe is predicted by mathematical models and has also been observed empirically.<ref>[http://www.ncbi.nlm.nih.gov/pubmed/15649564 Action of mutagenic agents and antiviral inhibitors on foot-and-mouth disease virus, Virus Res. 2005]</ref>
Like every organism, viruses 'make mistakes' (or [[mutate]]) during replication. The resulting mutations increase [[biodiversity]] within the population and help subvert the ability of a host's immune system to recognise the virus in a subsequent infection. The more mutations a virus makes during replication, the more likely it is to avoid recognition by the immune system and the more diverse its population will be (see [[biodiversity]] for an explanation of the selective advantages of this). However, if it makes too many mutations, it may lose some of the biological features that have evolved to its advantage, including its ability to reproduce at all.
The question arises: ''how many mutations can be made during each replication before the population of viruses begins to lose self-identity?''
==Basic mathematical model==
Consider a virus whose genetic identity is modeled by a string of ones and zeros (e.g. 11010001011101...). Suppose that the string has fixed length ''L'' and that during replication the virus copies each digit one by one, making a mistake with probability ''q'' independently of all other digits.
Due to the mutations resulting from erroneous replication, there exist up to ''2<sup>L</sup>'' distinct strains derived from the parent virus. Let ''x<sub>i</sub>'' denote the concentration of strain ''i''; let ''a<sub>i</sub>'' denote the rate at which strain ''i'' reproduces; and let ''Q<sub>ij</sub>'' denote the probability of a virus of strain ''i'' mutating to strain ''j''.
Then the rate of change of concentration ''x<sub>j</sub>'' is given by
:<math>\dot{x}_j = \sum_i a_i Q_{ij} x_i</math>
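A minimal numerical sketch of this rate equation is given below. The code, parameter values and the use of NumPy are illustrative assumptions, not part of any published model; the mutation matrix is built from the per-digit error rate ''q'' via the Hamming distance between strains, consistent with the independent-error assumption above.
<syntaxhighlight lang="python">
# Sketch of the basic model: binary genomes of length L, per-digit error
# probability q, and the rate equation  dx_j/dt = sum_i a_i Q_ij x_i.
# All numerical values are illustrative.
import numpy as np

L, q = 4, 0.05                       # genome length and per-digit error rate
n = 2 ** L                           # number of possible strains


def hamming(i, j):
    """Hamming distance between the bit patterns of strain indices i and j."""
    return bin(i ^ j).count("1")


# Q[i, j]: probability that a copy of strain i comes out as strain j
Q = np.array([[q ** hamming(i, j) * (1 - q) ** (L - hamming(i, j))
               for j in range(n)] for i in range(n)])

a = np.ones(n)                       # reproduction rates; strain 0 is the fittest
a[0] = 2.0

x = np.zeros(n)
x[0] = 1.0                           # start with the fittest strain only

dt = 0.01
for _ in range(5000):                # crude forward-Euler integration
    x = x + dt * ((a * x) @ Q)       # dx_j/dt = sum_i a_i Q_ij x_i

print("fraction of fittest strain:", x[0] / x.sum())
</syntaxhighlight>
For realistic ''L'' the number of strains ''2<sup>L</sup>'' is astronomically large, which motivates the two-group idealisation that follows.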
At this point we make a mathematical idealisation: we pick the fittest strain (the one with the greatest reproduction rate ''a<sub>j</sub>'') and assume that it is unique (i.e. that the chosen ''a<sub>j</sub>'' satisfies ''a<sub>j</sub> > a<sub>i</sub>'' for all ''i''), and we group the remaining strains together. Let the concentrations of the two groups be ''x'' and ''y'', with reproduction rates ''a'' > ''b'' respectively; let ''Q'' be the probability that a virus in the first group (''x'') mutates to a member of the second group (''y''), and let ''R'' be the probability that a member of the second group returns to the first (via an unlikely and very specific mutation). The equations governing the development of the populations are:
:<math>
\begin{cases}
\dot{x} = a(1-Q)x + bRy \\
\dot{y} = aQx + b(1-R)y
\end{cases}
</math>
We are particularly interested in the case where ''L'' is very large, so we may safely neglect ''R'' and instead consider:
:<math>
\begin{cases}
\dot{x} = a(1-Q)x \\
\dot{y} = aQx + by
\end{cases}
</math>
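The reduced system can be integrated directly. The following sketch (parameter values chosen for illustration only, not taken from any source) shows the ratio ''x/y'' settling to a constant when ''a''(1-''Q'') > ''b'':
<syntaxhighlight lang="python">
# Sketch of the reduced two-group system (R neglected):
#   dx/dt = a(1-Q)x,   dy/dt = aQx + by.
# Parameter values are illustrative.
a, b = 1.0, 0.8           # reproduction rate of the fittest strain and of the rest
Q = 0.1                   # probability that a copy of the fittest strain mutates away

x, y = 1.0, 0.0           # start with the fittest strain only
dt = 0.001
for step in range(1, 200001):        # forward-Euler integration up to t = 200
    dx = a * (1 - Q) * x
    dy = a * Q * x + b * y
    x, y = x + dt * dx, y + dt * dy
    if step % 100000 == 0:           # report the ratio x/y at t = 100 and t = 200
        print("t =", step * dt, " x/y =", x / y)
</syntaxhighlight>
With these values ''a''(1-''Q'') = 0.9 > ''b'' = 0.8, and the printed ratio settles close to 1.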
Then setting ''z = x/y'' we have
:<math>
\begin{align}
\frac{dz}{dt} & = \frac{\dot{x} y - x \dot{y}}{y^2} \\
& = \frac{a(1-Q)xy - x (aQx + by)}{y^2} \\
& = a(1-Q)z - (aQz^2 + bz) \\
& = z(a(1-Q) - aQz - b).
\end{align}
</math>
Assuming ''z'' achieves a steady concentration over time, ''z'' settles down to satisfy
:<math> z(\infty) = \frac{a(1-Q)-b}{aQ} </math>
(which is deduced by setting the derivative of ''z'' with respect to time to zero and discarding the trivial solution ''z'' = 0).
So the important question is: ''under what parameter values does the original population persist?'' The population persists if and only if the steady-state value of ''z'' is strictly positive, i.e. if and only if
:<math> z(\infty) > 0 \iff a(1-Q)-b > 0 \iff (1-Q) > b/a .</math>
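A short check of this criterion; the helper function and the parameter values below are illustrative assumptions:
<syntaxhighlight lang="python">
# Check of the persistence criterion z(inf) > 0, i.e. a(1-Q) > b.
def z_infinity(a, b, Q):
    """Steady-state ratio z = x/y; a negative value means the only
    admissible steady state is z = 0, i.e. the fittest strain is lost."""
    return (a * (1 - Q) - b) / (a * Q)

print(z_infinity(a=1.0, b=0.8, Q=0.1))   #  1.0   -> population persists
print(z_infinity(a=1.0, b=0.8, Q=0.3))   # -0.33  -> error catastrophe
</syntaxhighlight>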
This result is more popularly expressed in terms of the ratio ''b/a'' and the error rate ''q'' of individual digits. Set ''b/a = 1-s'' and note that the probability of copying all ''L'' digits without error is (1-''q'')<sup>''L''</sup>, so that 1-''Q'' = (1-''q'')<sup>''L''</sup>. The condition then becomes
:<math> z(\infty) > 0 \iff (1-Q) = (1-q)^L > 1-s </math>
Taking logarithms of both sides and approximating for small ''q'' and ''s'' gives
:<math>L \ln{(1-q)} \approx -Lq > \ln{(1-s)} \approx -s</math>
reducing the condition to:
:<math> Lq < s </math>
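A quick numerical comparison of the exact condition (1-''q'')<sup>''L''</sup> > 1-''s'' with the linearised condition ''Lq'' < ''s''; the values of ''L'', ''q'' and ''s'' are chosen for illustration only:
<syntaxhighlight lang="python">
# Comparison of the exact threshold with its linearised form.
L, s = 100, 0.5

for q in (0.004, 0.008):
    exact = (1 - q) ** L > 1 - s      # (1-Q) > b/a with b/a = 1-s
    approx = L * q < s                # linearised threshold Lq < s
    print(f"q = {q}: exact {exact}, approx {approx}")
# q = 0.004: exact True, approx True    (below the error threshold)
# q = 0.008: exact False, approx False  (above the error threshold)
</syntaxhighlight>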
[[RNA viruses]] that replicate close to the error threshold have genomes of the order of 10<sup>4</sup> [[base pairs]]. Human [[DNA]] is about 3.3 billion (3.3 × 10<sup>9</sup>) base pairs long. This means that the replication machinery for DNA must be [[orders of magnitude]] more accurate per base than that of RNA viruses.
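Taking ''s'' ≈ 1 as a rough upper bound (an assumption made only for illustration), the threshold ''Lq'' < ''s'' gives the following order-of-magnitude limits on the per-base error rate:
<syntaxhighlight lang="python">
# Order-of-magnitude bound on the per-base error rate from Lq < s,
# taking s ~ 1 as a rough upper bound (assumption for illustration).
s = 1.0
for label, L in [("RNA virus, L ~ 1e4", 1e4),
                 ("human genome, L ~ 3.3e9", 3.3e9)]:
    print(f"{label}: q must be below ~{s / L:.1e} per base per replication")
# RNA virus, L ~ 1e4: q must be below ~1.0e-04 per base per replication
# human genome, L ~ 3.3e9: q must be below ~3.0e-10 per base per replication
</syntaxhighlight>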
===Information-theory based presentation===
To avoid error catastrophe, the amount of information lost through mutation must be less than the amount gained through natural selection. This fact can be used to arrive at essentially the same equations as the differential-equation presentation above.<ref>M. Barbieri, ''The Organic Codes'', p. 140</ref>
The information lost can be quantified as the genome length ''L'' times the replication error rate ''q''. The probability of survival, ''S'', determines the amount of information contributed by natural selection; [[information]] is the negative logarithm of probability. Therefore a genome can only survive unchanged when
:<math> Lq < -\ln{S}</math>
For example, the very simple genome with ''L'' = 1 and ''q'' = 1 is a genome with one bit that always mutates. Since ''Lq'' is then 1, each replication loses one unit of information, and (measuring information in bits) ''S'' has to be ½ or less. This corresponds to half the offspring surviving, namely the half with the correct genome.
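The one-bit example can be checked numerically. The sketch below is illustrative only; it notes that the bound ''S'' ≤ ½ corresponds to measuring the lost information in bits, while the natural-logarithm form of the condition above gives the slightly tighter bound ''S'' < 1/''e'':
<syntaxhighlight lang="python">
# The one-bit example: L = 1, q = 1, so one unit of information is lost per
# replication.  Measured in bits, -log2(S) >= 1 gives S <= 1/2 (the bound
# quoted in the text); the natural-log form Lq < -ln(S) gives S < 1/e.
import math

L, q = 1, 1.0
S_max_bits = 2.0 ** (-(L * q))       # from -log2(S) >= L*q
S_max_nats = math.exp(-(L * q))      # from -ln(S)   >  L*q
print(S_max_bits, round(S_max_nats, 3))   # 0.5 0.368
</syntaxhighlight>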
==Applications of the theory==
Some viruses, such as [[polio]] or [[hepatitis C]], operate very close to the critical mutation rate (i.e. the largest ''q'' that their genome length ''L'' will allow). Drugs have been created to increase their mutation rate, pushing them over the critical boundary so that they lose self-identity. However, given the criticism of the basic assumptions of the mathematical model, this approach is problematic.
The result introduces a [[Catch-22 (logic)|Catch-22]] for biologists: in general, accurate replication requires specialised [[enzymes]], which must be encoded by a large genome, but a large genome can only persist if the per-digit error rate ''q'' is low (i.e. the copying accuracy 1-''q'' is high). Which comes first, and how does it happen? An illustration of the difficulty: even with ''s'' close to 1, ''L'' can only reach about 100 if ''q'' is 0.01 (a per-digit accuracy of 0.99) - a very small string length in terms of genes.
==See also==
*[[Viral decay acceleration]]
==References==
{{reflist}}
==External links==
*[http://www.pnas.org/cgi/content/extract/99/21/13374 Error catastrophe and antiviral strategy]
*[http://www.i-sis.org.uk/meltdown.php Applications of error catastrophe to the persistence of GM crops]
*[http://longevity-science.org/orgel.html Orgel's error catastrophe theory of aging and longevity]
*[http://jvi.asm.org/cgi/content/full/80/1/20 Examining the theory of error catastrophe]
{{DEFAULTSORT:Error Catastrophe}}
[[Category:Pathology]]
[[Category:Population genetics]]