Simple versus Complex Life Cycles

When you first start learning about parasite ecology—and even decades later, when you’re an expert—parasite life cycles can be confusing! Parasite life stages have a plethora of fancy names, like L3 larva and miracidium, which can be difficult to remember. Furthermore, every parasite’s trajectory from immature to adult stages seems different than the last, and we don’t even know all the life cycle details for most parasite species. So, if you’re feeling confused by parasite life cycles, you’re in good company! You might never memorize all the complex life cycles that exist, but you can understand the general ecology and evolution of complex life cycles. We’ll cover the basics in this post.

Simple versus Complex Life Cycles:

Let’s start with simple life cycles, which are sometimes called direct life cycles or one-host life cycles. Monoxenous (also called homoxenous) parasite species with simple life cycles use only a single host species in their life. Infective stages of the parasite enter that single host species, and the parasite then grows and develops in the host before it begins reproducing there.

Direct life cycle parasites include parasites with fecal-oral transmission, like Ascaris lumbricoides, a roundworm that infects humans. Adult roundworms live in the small intestine, where females release fertilized eggs that pass into the environment in human feces. Eggs are ingested by people when they (usually accidentally) consume fecal matter. The parasite then grows and develops through a series of larval stages in multiple tissues. The larvae eventually make their way to the respiratory system, where they are coughed up and swallowed, which allows them to reach the small intestine and mature. This entire life cycle uses just a single host (humans), even as the parasite goes through many developmental stages inside and outside the host.

This life cycle diagram for a simple life cycle parasite, Ascaris lumbricoides, depicts the parasite moving through several life stages from egg to larvae to adult, where the eggs exist in the external environment and the other life stages occur in the human host.

Now let’s move on to complex life cycles, which are sometimes called indirect life cycles or multi-host life cycles. Parasites with complex life cycles are indirectly transmitted from one host species to the next. These heteroxenous parasites need to use multiple host species in sequence to successfully develop and reproduce. Reproduction occurs in the final host in the life cycle, which is called the definitive host. The one or more hosts where the parasite grows or develops (but does not reproduce) are called intermediate hosts. Sometimes a parasite does not develop at all within a host and instead only uses that host for transportation; those transport hosts are called paratenic hosts. The number of hosts needed to complete a life cycle is the life cycle length, and for a complex life cycle it must be at least two.

Ready for some examples? Schistosomiasis is caused by parasites with a two-host life cycle that use humans as the definitive host and snails as the intermediate host. Euhaplorchis californiensis is an example of a parasite with a three-host life cycle; it uses birds like herons as the definitive host, snails as the first intermediate host, and killifish as the second intermediate host. Life cycle lengths appear to have an upper limit, because most life cycles require four or fewer host species. Why do you think that is?

The life cycle of Euhaplorchis californiensis, a trematode that must sequentially infect three host species to complete its life cycle. This life cycle diagram came from The Ethogram Blog.

To get from one host to the next, parasites with complex life cycles can use a few different modes of horizontal transmission. Passive transmission occurs when the parasite just waits around for the next host, like when E. californiensis eggs in heron feces wait to be consumed by a salt marsh snail. Active transmission occurs when the parasite is free-living in the environment and moves around to seek out the next host, like when E. californiensis cercariae leave their snail host and swim around looking for a killifish to infect. And finally, complex life cycles often involve trophic transmission, where the parasite is consumed along with its intermediate host by the next host in the life cycle. Trophic transmission is used by all cestodes and acanthocephalans and many nematode and trematode species. Trophic transmission is probably so common in complex life cycles because predator–prey interactions are one of the most common ways that two (host) species might interact.  

Three ways to add hosts to a life cycle

You might have noticed that we need to make an important distinction between whether a parasite uses multiple host species in sequence, in parallel, or both. For example, E. californiensis does both: it must infect birds, snails, and killifish in that order to complete its life cycle, so it uses multiple host species in sequence. But it can also use multiple host species at a given life stage; in particular, E. californiensis can successfully infect and reproduce in more than one bird species. The number of host species that a parasite can successfully use for any given life stage is quantified as host specificity. Many complex life cycle parasites have high host specificity for some parts of their life cycle (e.g., they can only infect a single snail species as a first intermediate host) and low specificity for other parts of their life cycle (e.g., they can infect many bird species as definitive hosts).
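If it helps to make “host specificity” concrete, here is a minimal sketch (in Python) of how you might tally the number of host species a parasite can use at each life stage. The host records are hypothetical, loosely based on the E. californiensis example above, and the function name is my own invention.

```python
from collections import defaultdict

# Hypothetical host-use records for a three-host life cycle, loosely based on
# the E. californiensis example above (the real host lists are longer; these
# are for illustration only).  Each record is (life stage, host species).
host_records = [
    ("first intermediate host", "California horn snail"),
    ("second intermediate host", "California killifish"),
    ("definitive host", "great blue heron"),
    ("definitive host", "snowy egret"),
    ("definitive host", "great egret"),
]

def hosts_per_stage(records):
    """Count how many host species the parasite can use at each life stage.

    Fewer species at a stage means higher host specificity at that stage.
    """
    stage_hosts = defaultdict(set)
    for stage, host in records:
        stage_hosts[stage].add(host)
    return {stage: len(hosts) for stage, hosts in stage_hosts.items()}

print(hosts_per_stage(host_records))
# {'first intermediate host': 1, 'second intermediate host': 1, 'definitive host': 3}
```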

This brings up an important question: how do host species get “added” to a parasite’s life cycle? We assume that parasite species start by infecting just one host species and then complex life cycles evolve from those simple life cycles. There are two ways that a host species is thought to be added in sequence, thereby increasing the life cycle length: upward incorporation and downward incorporation.

In upward incorporation, a new definitive host that is a predator of the original definitive host is added to the life cycle. Parasites that can infect the new predator without being digested are selected for, because they have avoided a source of mortality. They also likely attain larger adult body sizes, longer life spans, and higher fecundity inside the new definitive host, because the new definitive host species should be larger and longer-lived, on average, than the old definitive host species. After the new definitive host species is added to the life cycle, the parasite then suppresses reproduction in the old definitive host species, which is now used as an intermediate host. There is an upper limit to how long a life cycle can grow through upward incorporation, because there are only so many trophic levels in a given food web.

In downward incorporation, a new intermediate host that consumes free-living stages of the parasite and is consumed by the definitive host is added to the life cycle. (Yes, parasites are often eaten by predators!) This again reduces parasite mortality, because the parasites can now infect the new intermediate host instead of being digested. Since fewer parasite stages are lost to mortality and more make it to the definitive host via trophic transmission, overall transmission rates increase. Both downward incorporation and upward incorporation are thought to have led to the evolution of complex life cycles for some parasite species.  

This diagram from Parker et al. (2015) shows how a host at a higher trophic level is added in upward incorporation and a host at a lower trophic level is added in downward incorporation.

There is also lateral incorporation, where host species are added in parallel, making the parasite species less host specific (more of a generalist) for a given life stage. Parasites can benefit from infecting more host species in a given life stage whenever doing so makes them more likely to find a suitable host that can continue their life cycle. However, there are also likely costs associated with being a generalist instead of specializing on just one host resource.

Final thoughts

In summary, some parasite species have complex life cycles and some have simple life cycles. Some parasite species are highly host specific at every life stage and others are host generalists that seem to infect nearly everything. There’s a lot of variability in parasite life cycles, but in general, they can be described by their life cycle length and the transmission modes that parasites use to get from one host to the next. Beyond that, each life cycle diagram is just a bunch of fancy terminology.

If you’re new to parasite ecology and thinking about life cycles for the first time, I have a question to leave you with: how do you think scientists have figured out all these life cycles? If you found a new species of larval trematode in a fish, how would you figure out its life cycle?

References:

Parker, G.A., Ball, M.A. & Chubb, J.C. (2015). Evolution of complex life cycles in trophically transmitted helminths. I. Host incorporation and trophic ascent. J. Evol. Biol., 28, 267–291.

Links, news, and paper highlights: January 2016

I’m trying to do a better job of keeping up with parasite ecology and epidemiology related news this year. Here’s some recent work that might be of interest:

Links:

Tasmanian devils have TWO types of infectious cancer!

Romans were wormy, despite relatively good hygienic practices.

The West African Ebola outbreak is over.

It looks like the mosquito-borne Zika virus is the likely culprit of the rapid increase in microcephaly in infants born in Brazil.

Paper highlights:

Pertussis, also known as whooping cough, kills tens of thousands of children per year, despite high global vaccination coverage. Additionally, developed countries with high pertussis vaccine coverage – like the United States – have experienced bigger outbreaks in recent years. Many hypotheses have been suggested to explain the “resurgence” of pertussis: (1) there is waning immunity to the vaccines and adults act as bacterial reservoirs; (2) the new acellular vaccine isn’t as good as the previous whole-cell vaccine; (3) the vaccines protect against infection but not transmission; and (4) there isn’t really a resurgence; we’re just better at detecting pertussis now than we used to be. A recent paper argues that all of those commonly held views are wrong and proposes some new hypotheses. Cool stuff!

Antibiotic resistance is a huge challenge facing global medicine. We usually assume that when bacteria evolve resistance to a given antibiotic, the mutation that provides resistance is costly. Because we assume that those resistance mutations are costly, we also assume that if we stop using an antibiotic, bacterial populations will evolve back to their susceptible state by acquiring compensatory mutations that restore the function(s) lost to resistance mutations. But resistance mutations vary in how costly they are. Some aren’t costly at all. And there are only so many compensatory mutations that can restore a given function. So we can’t necessarily expect a resistant population to revert to susceptibility, whether or not a compensatory mutation pops up in the population. Furthermore, there are many other possible mutations that can reduce or eliminate the cost of resistance just by increasing overall bacterial fitness, without actually restoring lost functions. We might be overlooking the importance of those “generally beneficial mutations” in the evolution and subsequent loss of antibiotic resistance in bacterial populations. Check it out.

Antibiotic Resistance is Old News

I just read a short review paper by Spellberg et al. (2013) that gave me a new perspective on antibiotic resistance.  They pointed out that antibiotics are not a “new” thing in evolutionary/geological time.  Antibiotics have been used by bacteria during ‘ecological warfare’ for ages!  In fact, they cite a cool PLoS ONE paper from Bhullar et al. (2012) that found that bacteria from a cave that had been isolated for more than 4 MILLION YEARS could resist a variety of antibiotics that humans currently use.  Spellberg et al. (2013) therefore suggested that there are no “new” antibiotic targets to be found – that is, any biochemical target that we might think of has probably already been targeted by an antibiotic, and thus resistance mechanisms related to that target probably already exist somewhere in the world.  Interesting!  And terrifying!

I make a lovely cartoon lab bench, if I do say so myself.

References:   

Spellberg, B., J.G. Bartlett, and D.N. Gilbert.  2013.  The future of antibiotics and resistance. The New England Journal of Medicine 368(4): 299-302.  (Open access link!)

Bhullar, K., N. Waglechner, A. Pawlowski, K. Koteva, E.D. Banks, M.D. Johnston, H.A. Barton, and G.D. Wright.  2012. Antibiotic resistance is prevalent in an isolated cave microbiome.  PLoS ONE 7(4): e34953.  (Open access link!)

Parasites, Spatially Structured Populations, and the Evolution of Virulence

In the past few weeks, I’ve spent a lot of time thinking about how spatial heterogeneity and spatially structured host populations affect parasite transmission.  Consider this post my brain dump regarding this fascinating line of inquiry.

Modeling parasite transmission – well-mixed or spatially structured?

When we model parasite transmission, we usually assume that the parasite is moving through a population of hosts that is homogeneous and well-mixed.  This is the “mass-action” type model.  The assumption (for a directly transmitted parasite) is that every infected host is equally likely to interact with every susceptible host.  How realistic is this assumption?  Uhhh… probably not realistic at all, actually.
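To make the mass-action assumption concrete, here is a minimal sketch (in Python; every parameter value is invented for illustration) of the kind of well-mixed model we usually write down, where new infections arise at rate beta * S * I because every infected host is equally likely to contact every susceptible host.

```python
# A minimal well-mixed (mass-action) SI model with host births and deaths,
# integrated with a simple Euler loop.  All parameter values are invented.
beta = 0.002   # transmission rate: new infections arise at rate beta * S * I
b = 0.05       # host birth rate
mu = 0.01      # background host death rate
alpha = 0.05   # extra mortality of infected hosts (virulence)
K = 1000.0     # host carrying capacity

S, I = 990.0, 10.0
dt = 0.1
for _ in range(int(200 / dt)):
    N = S + I
    births = b * N * (1.0 - N / K)   # density-dependent births; all hosts born susceptible
    new_infections = beta * S * I    # mass action: any I can meet any S
    dS = births - new_infections - mu * S
    dI = new_infections - (mu + alpha) * I
    S, I = S + dS * dt, I + dI * dt

print(f"After 200 time units: S = {S:.1f}, I = {I:.1f}")
```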

Instead of being well-mixed, interactions within the host population might occur at a local scale, where hosts only interact with their nearest neighbors.  What if we were to model parasite transmission in two different ways: first under the assumption that the host population is spatially structured with local interactions, and then under the assumption that the host population is homogeneous and well-mixed?  Would the outcomes of the two models be different?

Enter a recent, freaking awesome Am Nat paper by Wodarz et al. (2013), who did just that.  When they used spatially structured vs. mixed populations in an agent-based model, they found that host (and thus parasite) extinction was more likely in the spatially structured population.  They saw the same outcome using ODE metapopulation models.  Why should restricting interactions to the local scale increase the risk of extinction?  Wodarz et al. (2013) argue that it is because the carrying capacity at the local scale is smaller than the carrying capacity in the well-mixed model.  (See the self-shading idea, below.)  Basically, a giant chunk of the “population persistence” parameter space in well-mixed models is lost when we switch to spatially structured models.

Figure 3 from Wodarz et al. (2013). Extinction is more likely in populations that are spatially structured than in populations that are well-mixed.
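If it helps to see what “spatially structured with local interactions” means operationally, here is a toy agent-based sketch in Python. To be clear, this is not the Wodarz et al. (2013) model, the parameters are invented, and I make no promise that it reproduces their figure; it just shows where the nearest-neighbor vs. well-mixed contact rule enters this kind of simulation. In the spirit of their argument, the local version confines each infected host to at most four potential contacts, so the local pool of susceptible hosts it can draw on is tiny compared with the well-mixed version.

```python
import random

# Toy stochastic lattice model in the spirit of (but much, much simpler than)
# the Wodarz et al. (2013) agent-based model.  Parameter values are invented
# and the sketch is not expected to reproduce their results.  Cells are empty
# (0), susceptible hosts (1), or infected hosts (2).  The ONLY difference
# between the two runs is whether infected hosts contact a nearest neighbor
# or a random cell anywhere on the grid.

SIZE = 30          # 30 x 30 toroidal grid
P_BIRTH = 0.3      # chance a susceptible host reproduces into one empty neighbor
P_INFECT = 0.8     # chance a contacted susceptible host becomes infected
P_DEATH_I = 0.4    # per-step death probability of infected hosts
STEPS = 500

def neighbors(i, j):
    return [((i - 1) % SIZE, j), ((i + 1) % SIZE, j),
            (i, (j - 1) % SIZE), (i, (j + 1) % SIZE)]

def run(local, seed=1):
    rng = random.Random(seed)
    grid = [[1] * SIZE for _ in range(SIZE)]
    grid[SIZE // 2][SIZE // 2] = 2                 # seed one infection in the middle
    for _ in range(STEPS):
        cells = [(i, j) for i in range(SIZE) for j in range(SIZE)]
        rng.shuffle(cells)
        for i, j in cells:
            if grid[i][j] == 1 and rng.random() < P_BIRTH:
                ni, nj = rng.choice(neighbors(i, j))
                if grid[ni][nj] == 0:
                    grid[ni][nj] = 1
            elif grid[i][j] == 2:
                if local:                                            # nearest-neighbor contact
                    ti, tj = rng.choice(neighbors(i, j))
                else:                                                # well-mixed contact
                    ti, tj = rng.randrange(SIZE), rng.randrange(SIZE)
                if grid[ti][tj] == 1 and rng.random() < P_INFECT:
                    grid[ti][tj] = 2
                if rng.random() < P_DEATH_I:
                    grid[i][j] = 0
    flat = [cell for row in grid for cell in row]
    return flat.count(1), flat.count(2)

print("local contacts (S, I):     ", run(local=True))
print("well-mixed contacts (S, I):", run(local=False))
```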

You ought to go take a look at the Wodarz et al. (2013) paper, because it is packed with cool stuff.  Like, what if we change the scale of the local interactions?  What if we introduce migration among local neighborhoods (=patches)?  Also, it’s open access.  GO LOOK.

(EDIT:  Begon et al. (2002) argue that you can also have what they call “homogeneous contact experience” without having homogeneous mixing – that is, even when interactions are spatially structured.  If the rates of contact at a local, nearest-neighbor scale are the same as those at the global scale, you still get a homogeneous contact experience.  Wodarz et al. considered both types of spatial structuring – the kind where local interactions scale with global interactions, and the kind where they don’t.)

How do spatially structured host populations affect parasite evolution?

Evolutionarily speaking, parasites don’t “want” their host population to go extinct.  So, we should expect that there is some evolutionary pressure to maximize parasite transmission while minimizing the probability of host extinction.  (I’ve talked about the tradeoff between transmission and virulence in a previous post.)  In spatially-structured populations, where host (and thus parasite) extinction is more likely, we might therefore expect strong pressures for the evolution of less virulent parasites and/or lower transmission rates.

Boots and Mealor (2007) did an interesting experiment to test the hypothesis that parasites will evolve lower transmission rates in more spatially structured host populations.  By interesting, I mean that they put moth larvae (the hosts) in three concentrations of jello – soft, intermediate, and hard.  In the hard jello, moth larvae had the most restricted movement, and thus the most spatially structured populations.  Then they introduced a virus into the moth-jello environment and tracked the evolution of the virus’s infectivity (which is one component of the transmission rate).  As predicted, they found evolution toward reduced infectivity/transmission in the hard jello.

When I first read Boots and Mealor (2007), I could not wrap my head around this idea that parasites with high transmission rates would “self-shade” themselves into extinction in highly structured host populations.  The idea is that in spatially structured populations, every infected host individual will be surrounded by other infected individuals if transmission rates are high, so the parasite’s offspring will have no new territory to conquer.  At first, that sounds pretty good for the parasite – it was so successful that it spread to all available hosts!  But if no new susceptible hosts turn up to be infected – either from birth processes or immigration – then the parasite will go extinct.  Enter a cool modeling paper by Lion and Boots (2010).  They show that yes, evolution can select for parasites that are “less harmful” (=lower virulence) and “slower transmitting” (e.g., lower infectivity), but this depends on the rate of demographic turnover in the population.  So. Cool.

References:

Boots, M., and M. Mealor. 2007. Local Interactions Select for Lower Pathogen Infectivity.  Science 315: 1284-1286.

Lion, S., and M. Boots. 2010. Are parasites prudent in space?  Ecology Letters 13: 1245–1255.  (Open access link to paper)

Wodarz, D., Z. Sun, J.W. Lau, and N.L. Komarova.  2013.  Nearest-Neighbor Interactions, Habitat Fragmentation, and the Persistence of Host-Pathogen Systems.  American Naturalist 182(3): E94-E111.  (Link to paper)

The evolution of virulence and ‘virulence management’

By definition, parasites/pathogens harm their hosts.  The degree to which parasites harm their hosts is called virulence.  The more virulent the parasite, the more it damages the host.  From an ecological perspective, we measure virulence in terms of reduced host survival and/or reproduction.

Why are some parasites/pathogens more virulent than others?  When should parasites evolve to be very virulent, and when should they evolve to be more benign?

Ewald (1993 and elsewhere) argued that a parasite’s virulence evolution should be related to the transmission mode of the parasite.  For instance, consider three types of transmission: direct transmission where the parasite cannot survive for long in the environment (e.g., the common cold – rhinovirus), direct transmission where the parasite can survive for a long time in the environment (e.g., smallpox), and vector transmission (e.g., malaria).  Virulence should be maladaptive if it hampers transmission – as in case 1.  If you’re so sick with a cold that you cannot leave your house, the virus is less likely to infect new hosts.  Virulence should be high whenever it increases transmission – as in cases 2 and 3.  If the virus can survive for a long time in the environment, why not go crazy replicating in the host (to the demise of the host) and then hang out in the environment until another host comes around?  And if the virus is vector-transmitted, why not replicate to a high density (to the detriment of the host) to ensure that the vector gets a good dose of parasite with a blood meal?  Also, making the host lethargic might increase the likelihood that a vector gets a blood meal.

Ewald (1993) also suggested that by understanding how transmission can affect virulence evolution, medical scientists might be able to manipulate the evolution of virulence in important parasites/pathogens.  By reducing the probability of transmission, we could increase the cost of virulence (assuming that there is a trade-off between virulence and transmission).  Parasites should be more prudent when the probability of transmission is low.
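For readers who like to see the verbal argument as math: here is the standard textbook way to formalize a virulence–transmission trade-off. This is not a derivation from Ewald (1993) itself, just the usual R0-maximization sketch, with the notation spelled out in the comments.

```latex
% Basic reproduction number of a directly transmitted parasite (standard
% trade-off theory, not taken from Ewald 1993):
%   beta(alpha) = transmission rate, assumed to increase with virulence alpha
%                 but with diminishing returns (saturating)
%   S           = density of susceptible hosts
%   mu          = background host mortality;  gamma = recovery rate
\[
  R_0(\alpha) = \frac{\beta(\alpha)\, S}{\mu + \alpha + \gamma}
\]
% Setting dR_0/d(alpha) = 0 gives the classic marginal condition for the
% evolutionarily favored virulence level alpha*:
\[
  \beta'(\alpha^{*}) = \frac{\beta(\alpha^{*})}{\mu + \alpha^{*} + \gamma}
\]
% Because beta(alpha) saturates, alpha* sits at an intermediate value: extra
% virulence buys less and less additional transmission, but always shortens
% the time the host survives (and stays infectious) to transmit.
```

In this framing, Ewald’s comparisons are arguments about how steeply transmission falls off with host illness under each transmission mode: a parasite that needs an ambulatory host pays a large transmission cost for high virulence, whereas a durable environmental or vector-borne parasite pays much less, so its favored virulence sits higher.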

Ewald (1993) gives several examples of pathogens that have evolved to become less virulent when the probability of transmission was reduced, but let’s just talk about HIV.  HIV is the sexually-transmitted retrovirus that causes AIDS.  HIV hangs out in white blood cells and can remain latent (=inactive) for long periods within the host, but the virus can also rapidly reproduce.  The longer the latent period, the less virulent the virus is, and the faster the reproduction rate, the more virulent the virus is.

How could we increase the probability of transmission of HIV?  If a human population were to change culturally from a monogamous, family-oriented culture to a more polygamous one, the rate of partner change and thus HIV transmission would increase.  Correspondingly, Ewald (1993) discussed some evidence that HIV can be more virulent in urban areas with many unmarried individuals than in rural areas that are more family-oriented.  (Of course, this is just one of many factors that affect the probability of HIV transmission, so please don’t go crazy on the cultural interpretations.)

How could we decrease the probability of transmission of HIV?  Condoms and safe sex education!  Ewald (1993) presented some evidence that the use of the drug AZT to treat HIV couldn’t completely explain the evolution of lower HIV virulence in homosexual males in urban areas in the 1980s.  Increased use of safe sex practices might explain some of that decreased virulence.

Ewald (1993) is a bit old – there’s some great evidence for/against this idea of ‘virulence management’ in the literature now.  But the Ewald (1993) paper is a good read and a ‘popular’ article, so check it out!

Reference:

Ewald, P.W. 1993. Evolution of virulence.  Scientific American.

Related 2001 interview with Dr. Ewald from PBS.

Parasite(ish)-Host Coevolution

Continuing on my not-parasite-but-kinda-similar trend, let’s talk about bacteria and phages.  This post was stimulated by a really cool talk by Britt Koskella, from the University of Exeter.  She has a wordpress site and has been tweeting about the EEID conference, so head that way for more cool stuff.  (Maybe she’ll also come and correct any places where I butcher her work.)

A bacteriophage is a virus that infects bacteria.  In Britt’s case, the bacteria of interest are parasites of the horse chestnut tree.  She studies the co-evolution of these three groups: the trees, the bacteria, and the phages.  Importantly, because these organisms/phages have very different life spans, we expect phages to evolve faster than bacteria, which should in turn evolve faster than trees.

Question 1:  Do we see local adaptation of phages to the bacteria of a given tree?  Answer:  Yep!  If you take leaves from multiple trees and culture the bacteria from those leaves, and then test all the phages on all of those bacteria cultures, you find that phages do best on bacteria from the tree that they were collected on.  Also, phages did better on bacteria from the interior of the leaves, which makes sense because the exterior is likely highly controlled by abiotic processes (e.g., UV radiation).

My cartoon of Britt’s graph. Phages are more successful at infecting bacteria from the tree they were collected on (sympatric tree) than other trees (allopatric trees).

Question 2:  Given that we see local adaptation of phages to bacteria, does that adaptation vary with time?  Answer:  Yep!  In this part, Britt calculated “local adaptation” as an index comparing phage success on bacteria from sympatric vs. allopatric trees.  She cultured bacteria from trees from each month in the season, and tried the phages from the last month on all of those cultures.  Does that make sense?  So, September phages on September bacteria, September phages on August bacteria, September phages on July bacteria, etc.  Here’s what she found: phages were most adapted to the bacteria from the previous month (=August), and then adaptation declined as you went further back in time.  She suggested that this is demonstrative of fluctuating selection, rather than an arms race between bacteria and phages.  That is, in an arms race, you should never see a decline in phage success as you go backwards in time.

Phages are most adapted to the bacteria of the prior month, and then adaptation declines as you continue backwards in time.
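This is not Britt’s actual analysis or data, but if you want to see how such a time-shift comparison can be computed, here is a sketch in Python. The infectivity values are invented, and I’ve defined the “local adaptation” index simply as mean phage success on sympatric bacteria minus mean success on allopatric bacteria, which is one common way to do it but may not match the index used in the talk.

```python
# Toy time-shift analysis: how locally adapted are September phages to bacteria
# sampled in earlier months?  All values are invented for illustration, and the
# index (mean sympatric success minus mean allopatric success) is an assumption.

# infectivity[month][(phage_tree, bacteria_tree)] = proportion of bacteria infected
infectivity = {
    "August": {("tree1", "tree1"): 0.70, ("tree1", "tree2"): 0.30,
               ("tree2", "tree2"): 0.65, ("tree2", "tree1"): 0.25},
    "July":   {("tree1", "tree1"): 0.55, ("tree1", "tree2"): 0.35,
               ("tree2", "tree2"): 0.50, ("tree2", "tree1"): 0.30},
    "June":   {("tree1", "tree1"): 0.40, ("tree1", "tree2"): 0.35,
               ("tree2", "tree2"): 0.38, ("tree2", "tree1"): 0.33},
}

def local_adaptation(assays):
    """Mean infectivity on sympatric (own-tree) bacteria minus allopatric bacteria."""
    sympatric = [v for (phage, bacteria), v in assays.items() if phage == bacteria]
    allopatric = [v for (phage, bacteria), v in assays.items() if phage != bacteria]
    return sum(sympatric) / len(sympatric) - sum(allopatric) / len(allopatric)

for month, assays in infectivity.items():
    print(f"September phages on {month} bacteria: local adaptation = {local_adaptation(assays):.2f}")
# With these made-up numbers, adaptation is strongest on last month's bacteria and
# decays as you go further back in time, consistent with the fluctuating-selection
# pattern described above.
```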

So, that was a lot about the phage evolution, but what about the bacteria?  Question 3: Do bacteria evolve resistance to phages?  Answer: Yep!  Since she had all of those monthly bacteria  and phage samples, she tested bacterial resistance to phages that were from the past time step, the present time step, and the future time step.  Bacteria were most resistant to past phages, indicating that bacteria evolve to resist their phage.  They were least resistant to phages from the future, which indicates that phages also evolve to better infect bacteria.  Neat!

(Edit: Check out Britt’s comment below about whether this pattern is the result of pairwise coevolution or species sorting.  More coolness to come!)


All of the parasite-host co-evolution stuff is super cool.  Britt also looks at co-evolution in other disease systems, and you can check out some of that work here.

Don’t you really, really want to do experiments where you’re looking “into the future?”  Futuristic snails must be awesome.  I’m sensing an upcoming cartoon…

References:

Britt Koskella. 2013. Bacteria-phage interactions within a long-lived host.  EEID.

Koskella, B., Thompson, J.N., Preston, G.M. & Buckling, A. 2011. Local biotic environment shapes the spatial scale of bacteriophage adaptation to bacteria. The American Naturalist 177(4): 440-451.

More coming soon!