
Literature demonstrating that organisms have a competitive advantage in greater numbers


I'm studying two bird populations that are competing against each other for a resource: Population A and Population B.

Population A is present in much higher numbers than Population B, and as a consequence, I think that Population A has a competitive advantage over Population B.

I'm looking for literature demonstrating that when one population is present in higher numbers than another, the more numerous population has a competitive advantage. Can anyone point me in the direction of literature that demonstrates this?


I have found these papers: http://www.sciencedirect.com/science/article/pii/S0169534713002322

http://www.sciencedirect.com/science/article/pii/S0040580912001360

http://www.ncbi.nlm.nih.gov/pubmed/19416834

http://www.ncbi.nlm.nih.gov/pubmed/21930936

The last two argue that larger groups have better "problem solving" capability, and this is a clear advantage in competition.
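Not a substitute for the literature you're after, but for intuition: in the classic Lotka–Volterra competition model, when interspecific competition is strong the outcome depends on initial abundances, so the population that starts larger excludes the other (a priority effect). A toy sketch with arbitrary, symmetric parameters (not taken from any of the papers above):

```python
def compete(n_a, n_b, r=0.5, K=1000.0, alpha=1.2, beta=1.2,
            dt=0.01, steps=60000):
    """Euler integration of two-species Lotka-Volterra competition.
    With alpha, beta > 1 the coexistence point is unstable, so initial
    numbers decide which population wins."""
    for _ in range(steps):
        da = r * n_a * (1 - (n_a + alpha * n_b) / K)
        db = r * n_b * (1 - (n_b + beta * n_a) / K)
        n_a += dt * da
        n_b += dt * db
    return round(n_a), round(n_b)

print(compete(n_a=800, n_b=100))  # population A approaches K, B is excluded
print(compete(n_a=100, n_b=800))  # same parameters, reversed outcome
```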


A randomised clinical study to determine the effect of a toothpaste containing enzymes and proteins on plaque oral microbiome ecology

The numerous species that make up the oral microbiome are now understood to play a key role in the establishment and maintenance of oral health. The ability to taxonomically identify community members at the species level is important for elucidating its diversity and its association with health and disease. We report the overall ecological effects of using a toothpaste containing enzymes and proteins, compared to a control toothpaste, on the plaque microbiome. The results reported here demonstrate that a toothpaste containing enzymes and proteins can augment natural salivary defences to promote an overall community shift, resulting in an increase in bacteria associated with gum health and a concomitant decrease in those associated with periodontal disease. Statistical analysis shows significant increases in 12 taxa associated with gum health, including Neisseria spp., and a significant decrease in 10 taxa associated with periodontal disease, including Treponema spp. The results demonstrate that a toothpaste containing enzymes and proteins can significantly shift the ecology of the oral microbiome (at the species level), resulting in a community with a stronger association with health.


Alleged Disadvantages

Genetically modified food products are said to pose a threat to humans and to biodiversity as a whole, and thereby to harm the environment. Because these plants are engineered to kill insect pests, the insects that survive develop resistance and become "super bugs". Critics also argue that genetically modified products are chiefly a tool for corporations to make money.
Farmers are not allowed to save and replant these seeds, and if someone does so even accidentally, the company reserves the right to sue that person. Where a crop is ruined, farmers lose money, because the seeds are more expensive and must be purchased anew every growing season. Genetically modified seeds and plants are also said to affect the soil and the wider environment, because their altered genes can spread to other plants they come into contact with (Ferry and Gatehouse, 2009).
Genetically modified food products have also been reported to affect human health. In a number of cases, people, especially children, suffered allergic reactions after they began consuming genetically modified foods; these reactions differed from other allergic problems and were reported to have graver effects than the traditional ones (Dona and Arvanitoyannis, 2009).
Larry Bohlen, Director of the Community Health and Environment Program at Friends of the Earth, writes, “I would say that the StarLink contamination incident is a serious setback for the biotech industry. Companies like Aventis and Monsanto have been aggressively and recklessly marketing their products, climbing over each other to get to the patent office so they can maximize their profits. That means they've ignored critical safety and environmental tests that should have been run . . . .” (Bohlen).


The theory of competitive prices

The competitive structure of industry leads to the establishment of competitive prices. Competitive prices are characterized by two main properties. The property of clearing markets is that of distributing existing supplies efficiently; the property of equalizing returns to resources is that of directing production efficiently.

The clearing of markets

A competitive price is one that is not perceptibly influenced by any one buyer or seller. When we say that such prices are fixed by “supply and demand” we mean that the ensemble of all buyers and sellers determines price.

Since every buyer can purchase all he wishes of the good or service at the market price, there are no queues or unsatisfied demands, given the price. Since every seller can sell all he wishes at this market price, there are no undisposable stocks, other than inventories that are voluntarily held for future periods. The competitive price, then, clears the market—it equates the quantities offered by sellers and sought by buyers.

Whenever we find a persistent queue among buyers, we know that the price is being held below the level that clears the market, which we naturally call the equilibrium price. For example, when housing is unavailable under rent controls, we know that rents are below the equilibrium level. Whenever we find stocks held by sellers to be in excess of inventory needs, we know price is above the equilibrium level. The vast stocks of agricultural products held by the U.S. government are evidence that the prices of these products (more precisely, the amounts the government will lend on the products) are above the equilibrium level.
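As a numerical illustration of market clearing (the linear schedules below are hypothetical, chosen only for arithmetic convenience):

```python
def quantity_demanded(p):
    """Hypothetical linear demand schedule (illustrative numbers only)."""
    return 100 - 2 * p

def quantity_supplied(p):
    """Hypothetical linear supply schedule."""
    return 10 + 3 * p

# Equilibrium: 100 - 2p = 10 + 3p  ->  p* = 18, and both sides trade 64 units.
p_star = 18
print(quantity_demanded(p_star), quantity_supplied(p_star))  # 64 64

# A price held below p* leaves a queue (excess demand), as under rent control;
# a price held above p* leaves unsold stocks (excess supply), as under support prices.
for p in (10, 25):
    print(p, quantity_demanded(p) - quantity_supplied(p))  # +40 shortage, -35 surplus
```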

The importance of prices that clear markets is that they put goods and services in the hands of the people who most urgently wish them. If a price is held too low, some buyers who set a lower value on the commodity will get it while others in the queue who set a higher value get none. If the price is set too high, goods that buyers would be glad to purchase at a lower price go unsold even though (if a minimum price is imposed on a competitive industry) sellers would prefer to sell at this lower price.

The equalization of returns

It is part of the definition of industrial competition that every resource in an industry earn as much as it would earn in other industries, but no more. The self-interest of the owners of productive resources (including, of course, that most important resource, the laborer) leads them to apply their resources where they yield the most and thus to enter unusually attractive fields and abandon unattractive fields.

This equalization of returns, however, can be shown to imply that the prices of goods and services equal their (marginal) costs of production. The cost of a productive service to an industry is the amount that must be paid to attract it away from other uses—its foregone alternatives. (This most basic concept of cost is the essence of the alternative or opportunity cost theory.) If the amount the productive resource earns in an industry is in excess of this cost, clearly other units of the resource presently outside the industry could earn more if they entered. Conversely, if the productive resource is earning less than its cost or alternative product, it will leave the industry. Hence, if price exceeds cost, resources will flow into the industry and lower the price (and perhaps raise cost by raising the prices of the resources); if price is less than cost, resources will flow out and increase the price (and perhaps reduce cost).

The equality of the marginal products of a resource in all its uses is the condition for efficient production. The equality of average products has often been substituted, with a regrettable loss of logic: consider the catastrophic waste (of capital) in having equal output per worker in two industries when the capital equipment per worker is ten times as large in one industry as in the other. But if the marginal product of a resource is equal in its various uses, it follows that marginal cost must equal price. The resources necessary to produce one more unit of product A could produce an equal value of B, so the marginal cost of A—which is the foregone alternative of producing B—is equal to the value of A that it produces. Marginal cost, formally defined as an increment of cost divided by the increment of product associated with the increment of cost, and not the more easily measured average cost (total cost divided by output), is the economist’s fundamental criterion of competitive price—and of optimum price.
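In symbols (the notation w, MP, and p is added here for clarity and is not used in the text above): writing marginal cost as the text defines it, and letting a resource's price equal its marginal value product in every use, price equals marginal cost.

```latex
% Marginal cost (the text's definition) versus average cost:
\[
  MC = \frac{\Delta C}{\Delta q}, \qquad AC = \frac{C}{q}.
\]
% If the resource price w equals its marginal value product in every use,
% e.g. w = p_B \, MP_B, then the marginal cost of product A is
\[
  MC_A = \frac{w}{MP_A} = \frac{p_B \, MP_B}{MP_A},
\]
% the value of B forgone per extra unit of A; competition drives p_A = MC_A.
```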

Marshall’s period analysis

The alternative uses open to a resource depend upon the time available for its redeployment (or more fundamentally, how much one is willing to spend on its movement). This principle, joined to an empirical observation that one can alter the rate of operation of a plant much sooner than one can build a new plant or wear out an existing one, provides the basis for the standard (Marshallian) theory of long-run and short-run competitive prices (Marshall 1890).

In the short run, defined as the period within which one cannot appreciably alter the number of plants (physical production units), the only method of varying output is to work a given plant more or less intensively. The so-called variable productive factors (labor, materials, fuel) are the only resources with effective alternative uses in this period and therefore the only services whose returns enter into marginal costs. The returns to the productive factors embodied in the plant are called quasi rents. So long as quasi rents are greater than zero it will be more profitable to operate a plant than to close it down.
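A compact statement of the short-run operating rule implied above (standard textbook notation, not the text's own):

```latex
% Quasi rent = revenue minus payments to the variable factors:
\[
  \text{quasi rent} = p\,q - VC(q).
\]
% Operate the plant in the short run whenever p\,q - VC(q) > 0
% (equivalently, whenever p > AVC), since the plant itself is fixed in this period.
```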

The long run is defined as the period within which the entrepreneur can make any desired decision—including the decision to leave one industry and enter another. In this period all resources are variable in quantity, and therefore the returns to all factors enter into marginal cost.

The Marshallian apparatus permits very useful simplifications in price theory, but only if its underlying empirical assumption is fulfilled: the long-run adjustments of the firm are of negligible magnitude in the short run (and hence can be neglected), and the short-run adjustments do not appreciably affect the long-run costs. When these conditions are not met (they fail, for example, if discharge of workers in this period will lead to higher wage rates in the next period), the full analysis of the short run will still require explicit analysis of the long-run repercussions of the short-run decisions.


Aging yeast gain a competitive advantage on non-optimal carbon sources

Animals, plants and fungi undergo an aging process with remarkable physiological and molecular similarities, suggesting that aging has long been a fact of life for eukaryotes and one to which our unicellular ancestors were subject. Key biochemical pathways that impact longevity evolved prior to multicellularity, and the interactions between these pathways and the aging process therefore emerged in ancient single-celled eukaryotes. Nevertheless, we do not fully understand how aging impacts the fitness of unicellular organisms, and whether such cells gain a benefit from modulating rather than simply suppressing the aging process. We hypothesized that age-related loss of fitness in single-celled eukaryotes may be counterbalanced, partly or wholly, by a transition from a specialist to a generalist life-history strategy that enhances adaptability to other environments. We tested this hypothesis in budding yeast using competition assays and found that while young cells are more successful in glucose, highly aged cells outcompete young cells on other carbon sources such as galactose. This occurs because aged yeast divide faster than young cells in galactose, reversing the normal association between age and fitness. The impact of aging on single-celled organisms is therefore complex and may be regulated in ways that anticipate changing nutrient availability. We propose that pathways connecting nutrient availability with aging arose in unicellular eukaryotes to capitalize on age-linked diversity in growth strategy and that individual cells in higher eukaryotes may similarly diversify during aging to the detriment of the organism as a whole.

The progressive decline commonly associated with aging results in loss of fitness and eventually of viability. However, the almost universal conservation of aging amongst eukaryotes indicates that aging existed in the single-celled ancestors of extant eukaryotes (Jones et al., 2014). Therefore, aging emerged not in multicellular animals but in single-celled organisms, and it remains to be tested whether age-related physiological changes always reduce the fitness of individual cells. If the fitness impact of aging differs between single-celled and multicellular organisms, it is possible that aging or age-modulatory pathways are under positive selection. Formidable arguments refute this concept in higher eukaryotes, but pathways that evolved under historical positive selection may still modulate aging in higher eukaryotes, albeit not necessarily to a useful end.

Fitness is a measure of the transfer of genes to the next generation and, in non-social organisms, is largely determined by number of offspring. Fecundity is a function of both individual and environment, and there can be trade-offs between life-history strategies with fitness returns not always being equal under different environments. We asked whether age-linked physiological changes in single-celled eukaryotes represent a loss of specialization that may enhance adaptation to alternate environments (Fig. 1A). To this end, we designed a competitive growth assay for young and aged budding yeast in static environments and during environmental change (Fig. 1B). Saccharomyces cerevisiae is a glucose specialist, but can efficiently metabolize other carbon sources such as galactose in the absence of glucose.

(Figure 1 legend, partial: 300 viable cells plated per condition; analysis by t-test.)

We employed the mother enrichment programme (MEP) (Lindstrom & Gottschling, 2009) to test the fitness of replicatively aged yeast (Fig. S1, Supporting information). We competed young cells against cells aged for 6, 24 and 48 h (Table S1, Supporting information, gives age distributions for each time point): at 6 h the aged cells are fully viable, reproductive viability starts to decline by 24 h, and median lifespan has passed by 48 h (Lindstrom & Gottschling, 2009; Fig. S2, Supporting information). Cells of different ages were mixed, inoculated in glucose or galactose and outgrown to saturation (Fig. 1B). A change in the proportion of cells derived from the young and aged populations between inoculation and saturation indicates a fitness difference in that specific condition and is expressed as an Aged Fitness Score, with positive scores indicating that older cells engendered more progeny than younger cells during outgrowth (Eqn S1, Supporting information).
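The score itself is defined in the paper's Eqn S1 (Supporting information), which is not reproduced here. Purely as an illustration of the kind of quantity described, assuming a log-ratio form and using made-up proportions:

```python
import math

def aged_fitness_score(aged_frac_inoculation, aged_frac_saturation):
    """Illustrative log-odds-ratio score: positive when the aged-derived
    fraction of the mixed culture rises between inoculation and saturation.
    This is an assumed form, NOT the paper's Eqn S1."""
    odds_start = aged_frac_inoculation / (1 - aged_frac_inoculation)
    odds_end = aged_frac_saturation / (1 - aged_frac_saturation)
    return math.log2(odds_end / odds_start)

print(aged_fitness_score(0.50, 0.65))  # > 0: aged cells engendered more progeny
print(aged_fitness_score(0.50, 0.35))  # < 0: young cells engendered more progeny
```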

Consistent with an age-related physiological decline, young cells outcompeted 24- and 48-h aged cells when aging and outgrowth were performed in glucose, whereas young and 6-h aged cells showed similar fitness (Fig. 1C, GLU). However, when cells were aged in glucose but outgrown in galactose, the fitness advantage of young cells relative to 24-h aged cells was reduced and, strikingly, 48-h aged cells outcompeted young cells in every replicate (Fig. 1C, GAL; n = 4). This shows that the relationship between age and fitness depends on the environment.

The MEP system is not activated in young cells, and young populations also contain 50% newborn cells with an extended cell cycle (Hartwell & Unger, 1977). As both factors may confound competition assays, we compared cells aged for 6 and 48 h. Again, aged cells outcompeted young cells on galactose in all replicates (Fig. 1D, n = 8), and also on raffinose or acetate (Fig. 1E). Populations of aged cells only show a significant advantage at 48 h, by which time the average age is 16–30 generations, and such cells are rare in the wild. However, individual cells may prosper much earlier. We measured colonies formed on glucose or galactose plates by cells aged for 2 or 18 h (approximately 11 generations) and observed that aged cells form significantly larger colonies on galactose, demonstrating a significant growth advantage even at intermediate age (Figs 1F and S3, Supporting information).

Time-course and redilution experiments provided no evidence for permanent genetic adaptation to galactose (Figs 2A and S4, Supporting information); instead, aged cells have a transient advantage that is rapidly lost in their progeny. This could be explained either by aged cells growing more rapidly than young cells on galactose or by aged cells resuming growth more rapidly after an environmental change. To distinguish these, we compared cells aged in galactose prior to outgrowth in glucose or galactose (Fig. 2B). As before, aged cells outcompeted young cells in galactose, showing that an environmental change is not required. Furthermore, we did not observe an accelerated galactose response in aged cells or a dependence on nutrient storage (Fig. S5, Supporting information).

(Figure 2 legend, partial) Cells aged 13–14 divisions by micromanipulation on glucose or galactose plates; analysis by unpaired t-test with Welch's correction (P values compare means; P(F) values are from F-tests comparing variances). (D) Average cell division time obtained by OD measurement for log-phase cells, or by counting bud scars after 24 h of growth for aged cells; analysis by one-way ANOVA, n = 3.

This implies that aging cells divide faster than young cells in galactose. We measured cell cycle times on glucose and galactose of wild-type (non-MEP) diploid cells aged for approximately 14 generations by micromanipulation (cf. 16–30 generations for the competition assays), and also of the daughters of these cells (Figs 2C and S6, Supporting information). On glucose, the cell division times of mothers and daughters were similar, although division-time heterogeneity increased with age, as reported (Liu et al., 2015). On galactose the opposite was observed: division time decreased significantly with age while heterogeneity lessened. To independently confirm this remarkable observation, we measured average cell division time across 24 h of aging by bud scar counting and again observed that aged cells divide faster than young cells on galactose (Fig. 2D).
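For readers unfamiliar with the statistics named in the figure legend, a sketch of how such comparisons are typically run in Python (the division-time numbers below are invented for illustration, not the paper's data):

```python
import numpy as np
from scipy import stats

# Invented division times (hours) for illustration only.
young = np.array([1.9, 2.1, 2.0, 2.3, 2.2, 2.4, 2.0, 2.5])
aged = np.array([1.6, 1.7, 1.5, 1.8, 1.6, 1.7, 1.6, 1.8])

# Welch's t-test (unequal variances) compares mean division times.
t_stat, p_mean = stats.ttest_ind(young, aged, equal_var=False)

# Two-sided F-test compares variances (division-time heterogeneity).
f_stat = np.var(young, ddof=1) / np.var(aged, ddof=1)
dfn, dfd = len(young) - 1, len(aged) - 1
p_var = 2 * min(stats.f.sf(f_stat, dfn, dfd), stats.f.cdf(f_stat, dfn, dfd))

print(f"Welch t = {t_stat:.2f}, P = {p_mean:.3g}; F = {f_stat:.2f}, P(F) = {p_var:.3g}")
```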

Our experiments show that aging does not entail a simple decline in fitness for yeast. Rather, aging cells lose glucose specialization as gene expression studies have suggested (Lin et al., 2001; Lesur & Campbell, 2004), but gain fitness for other carbon sources. This is very surprising, but is consistent with conflicts between optimal life-history strategies evident from mutants with reduced fitness on glucose but improved fitness in other media (Qian et al., 2012). Our data show that aging in yeast marks a transition between life-history strategies appropriate for different environments. This transition may emerge serendipitously from age-linked physiological changes, or may represent a defined programme that has evolved to be associated with (but is not necessarily causal to) the aging process. Either way, it creates a selective pressure for the evolution of aging regulatory systems as aged cells would be under positive selection in non-glucose and fluctuating environments. Consequently, as nutrient responsive signalling pathways evolved in early eukaryotes, manipulation of the aging process may have provided a way to tune growth strategy for current and future nutrient availability, and indeed aging is accelerated by galactose (Liu et al., 2015). To what extent such regulatory systems have been conserved in higher eukaryotes remains to be determined. However, loss of specialization seems intimately connected to aging, and the inability of cells to perform specialized functions is a major contributor to aging pathology in higher eukaryotes. It is therefore tempting to speculate that the modulation of aging by nutrient signalling in multicellular organisms stems from an ancient mechanism to control specialization.


Examples of Competition

Intraspecific Competition

Intraspecific competition is a density-dependent form of competition. “Intra” refers to within a species, as opposed to “inter” which means between. Intraspecific competition can be summed up in the image below.

In this image, two wild dogs known as dholes fight over a carcass. The carcass is a resource, something both organisms need to survive. Intraspecific competition is density dependent for a simple reason: the more dholes there are, the less food each one gets. To the individual dhole, food is everything. With very few predators of their own, the most successful dholes (the ones who survive and reproduce the most) are often simply the ones who eat the most.

Thus, while these dholes may have coordinated to take down this deer, they are now competing to see which one will get to eat first. The one that eats first will get more, and be more likely to survive and reproduce. The other one (or the last one if there are many) will not get as much. This will lower its survivability and the chances it will get to reproduce. Since evolution relies mainly on which organisms reproduce, this form of competition can quickly lead to changes in a population if only a few of the individuals are surviving and reproducing.
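Density dependence of this kind is usually written as logistic growth, where per-capita growth falls as numbers rise; a toy sketch with arbitrary numbers:

```python
def logistic_step(n, r=0.8, k=50.0, dt=0.1):
    """One Euler step of logistic growth: per-capita growth r*(1 - n/k)
    shrinks as density n rises -- the signature of intraspecific competition."""
    return n + dt * r * n * (1 - n / k)

n = 2.0
for _ in range(1000):
    n = logistic_step(n)
print(round(n, 1))  # settles near the carrying capacity k = 50
```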

Interspecific Competition

In this picture, there are dozens of species. There are several species of fish. Behind them, as a backdrop many people would ignore, is a canvas of dozens of species of coral. Coral, while it may look like some sort of rock or plant, is actually a colony of tiny animals. These tiny animals filter organic material from the water and host symbiotic photosynthetic algae that capture additional energy from sunlight. Thus, each coral species is competing not only with the other corals, but also with the fish for available nutrients and sunlight.

While corals might not seem like a competitive bunch, they are actually directly competitive with other corals. When an enemy coral is encroaching on their space, they can deploy chemical warfare to counter their rival. Often, coral fights end in one of the corals being killed by the other. While the corals are not predators of each other, the competition still ends in the death of one of the corals. The victorious coral was simply fighting for the resources it needs.

Direct and Indirect Competition

There is also another aspect of competition that can be applied to scenarios of limited resources: the idea of direct versus indirect competition. Direct competition resembles both of the scenarios above, and there are many more examples of it. Any time two or more animals fight, or engage in a symbolic confrontation, it is probably some form of competition for a resource.

However, indirect competition occurs when the two animals do not interact directly, but the presence of both in the same territory creates the competition. Think of the fish in the example above. If those fish feed on the same resources used by the corals, then the fish and corals are in competition for those limited resources. Coral, being more or less anchored to the ocean floor, has little chance of directly attacking the fish. Instead, this would be referred to as asymmetric indirect competition: the fish eat as much of the food as they want, while the corals are limited to scraps and have no way of competing back. Luckily for most coral reef systems around the world, the ocean provides plenty of food for both.


2.3. Trust in Data

Among the most essential but intangible aspects of data reuse is the ability to trust data collected by others. Scientific practice depends upon the ability to trust knowledge claims and products of others, a concept known as ‘epistemic trust’ (Darch, 2019; Porter, 1996; Shapin, 1994). Epistemic trust has several dimensions, and is relational, rather than something inherent in a dataset. One dimension is interpersonal trust, such as trust in the team that created a dataset (Prieto, 2009). For example, Jirotka et al.’s (2005) study of distributed readings of mammograms revealed strategies for assessing trustworthiness based on whether the data creator was known to produce reliable data. Similarly, ecologists assess data by disciplinary standards involved in their production and by reputation of the data creator (Yakel, Faniel, Kriesberg, & Yoon, 2013).

Other dimensions of trust include the ability to evaluate the quality of data, the reputations of the archives that host relevant datasets, and organizations responsible for the data curation process (Bietz & Lee, 2009; Borgman, Scharnhorst, & Golshan, 2019; Faniel & Jacobsen, 2010).

In an influential policy report, the U.S. National Science Board (2005) categorized data collections along a continuum from local to global uses. Research data collections are those that result from focused research projects; curation is limited. Resource collections serve a community, have more extensive curation, and establish community-level standards. Reference collections are broader in scope, serve large communities, and conform to robust and comprehensive standards. The latter are intended to promote epistemic trust by their communities.

Our hypothesis, posed in the 2015 grant proposal that supported this research, is that centralized data collection requires early agreements on data management, resulting in particular kinds of expertise and disciplinary configurations, whereas distributed data collection is more flexible and adaptive to local conditions, but the resulting datasets are more difficult to integrate later. Centralized data collection and curation, such as reference collections and sky surveys, results in standardized datasets that are valuable for comparative reuses. Investigator-led projects are based in specific sets of research questions, models, theories, methods, and knowledge infrastructures. The resulting datasets, whether or not contributed to public collections, will be more idiosyncratic than those designed and curated for standardized comparisons (Borgman et al., 2015; Boscoe, 2019; Darch & Borgman, 2016; Sands, 2017).


Effects of density stratification on microorganisms in aquatic ecosystems

Microorganisms play pivotal functions in nature, particularly within aquatic ecosystems. Whether in an ocean or a lake, they are key players in the food chain and the vitality of individual ecosystems.

A team of researchers led by Arezoo M. Ardekani, the Rev. John Cardinal O'Hara, C.S.C., Assistant Professor of Aerospace and Mechanical Engineering at the University of Notre Dame, has shown that density stratification, a frequent feature of aquatic environments, has important ecological consequences for these small organisms.

The team recently published a paper in the Proceedings of the National Academy of Sciences demonstrating that the density variations encountered by organisms at pycnoclines, regions of sharp vertical variation in fluid density, have a major effect on the flow field, energy expenditure and nutrient uptake of small organisms. Organisms at pycnoclines are afforded a competitive advantage due to a smaller risk of predation. These results can be used to explain why an accumulation of organisms and particles, which leads to a wide range of environmental and oceanographic processes, is associated with pycnoclines.
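Stratification strength at a pycnocline is conventionally quantified by the buoyancy (Brunt–Väisälä) frequency; this is the standard oceanographic definition, not an equation quoted from the PNAS paper:

```latex
\[
  N^2 = -\frac{g}{\rho_0}\,\frac{d\rho}{dz},
\]
% where rho(z) is the density profile, rho_0 a reference density, g the
% gravitational acceleration, and z increases upward; a large N^2 marks a
% sharp pycnocline of the kind studied here.
```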

Ardekani joined the University in 2011. Her research interests focus on the fundamental properties of multiphase flows of Newtonian and non-Newtonian fluids relevant to biofluids, and micro/nanofluids for use in biomimetic applications, biomedical devices, alternative energy technologies and environmental remediation.

Most recently, she was awarded a 2012 National Science Foundation Faculty Early Career Development Award for her work on the fluid dynamics of bacterial aggregation and the formation of biofilm streamers. Prior to joining the University, Ardekani served as a Shapiro Postdoctoral Fellow at the Massachusetts Institute of Technology. She is currently a member of the American Association for the Advancement of Science, American Chemical Society, American Physical Society, American Society of Mechanical Engineers and Society of Rheology.

She received her doctorate (2009) and master's (2005) in mechanical and aerospace engineering from the University of California at Irvine and her bachelor's in mechanical engineering from the Sharif University of Technology in Iran (2003).


Characteristics of Aquatic Biomes

Aquatic ecosystems include both saltwater and freshwater biomes. The abiotic factors important for the structuring of aquatic ecosystems can be different than those seen in terrestrial systems. Sunlight is a driving force behind the structure of forests and also is an important factor in bodies of water, especially those that are very deep, because of the role of photosynthesis in sustaining certain organisms.

Like terrestrial biomes, aquatic biomes are influenced by a series of abiotic factors. The aquatic medium—water—has different physical and chemical properties than air, however. Even if the water in a pond or other body of water is perfectly clear (there are no suspended particles), water, on its own, absorbs light. As one descends into a deep body of water, there will eventually be a depth that sunlight cannot reach. While there are some abiotic and biotic factors in a terrestrial ecosystem that might obscure light (like fog, dust, or insect swarms), usually these are not permanent features of the environment. The importance of light in aquatic biomes is central to the communities of organisms found in both freshwater and marine ecosystems. In freshwater systems, stratification due to differences in density is perhaps the most critical abiotic factor and is related to the energy aspects of light. The thermal properties of water (rates of heating and cooling) are significant to the function of marine systems and have major impacts on global climate and weather patterns. Marine systems are also influenced by large-scale physical water movements, such as currents; these are less important in most freshwater lakes.

The ocean is categorized by several areas or zones (Figure 11). All of the ocean’s open water is referred to as the pelagic realm (or zone). The benthic realm (or zone) extends along the ocean bottom from the shoreline to the deepest parts of the ocean floor. Within the pelagic realm is the photic zone, which is the portion of the ocean that light can penetrate (approximately 200 m or 650 ft). At depths greater than 200 m, light cannot penetrate; thus, this is referred to as the aphotic zone. The majority of the ocean is aphotic and lacks sufficient light for photosynthesis. The deepest part of the ocean, the Challenger Deep (in the Mariana Trench, located in the western Pacific Ocean), is about 11,000 m (about 6.8 mi) deep. To give some perspective on the depth of this trench, the ocean is, on average, 4267 m or 14,000 ft deep. These realms and zones are relevant to freshwater lakes as well.
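The depth dependence of light described above is commonly modeled with an exponential (Beer–Lambert) attenuation law; the attenuation coefficient used below is a rough, illustrative value for clear ocean water, not a number from this text:

```latex
\[
  I(z) = I_0 \, e^{-k z}
\]
% With k on the order of 0.02-0.03 per meter for very clear ocean water,
% irradiance falls to about 1% of its surface value near z = ln(100)/k,
% i.e. roughly 150-230 m, consistent with a photic zone of about 200 m.
```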

Figure 11. The ocean is divided into different zones based on water depth and distance from the shoreline.

Practice Question

In which of the following regions would you expect to find photosynthetic organisms?

  1. the aphotic zone, the neritic zone, the oceanic zone, and the benthic realm
  2. the photic zone, the intertidal zone, the neritic zone, and the oceanic zone
  3. the photic zone, the abyssal zone, the neritic zone, and the oceanic zone
  4. the pelagic realm, the aphotic zone, the neritic zone, and the oceanic zone

FUTURE CHALLENGES FOR ROOT SYSTEMS BIOLOGY

Major advances have recently been made employing top-down systems approaches to identify the key molecular players (genes, RNAs, proteins, etc.) and to elucidate several GRNs that control root growth and development (Benfey et al., 2010; Ruffel et al., 2010; Bassel et al., 2012). However, regulatory networks operate not only at the molecular scale; to probe the mechanisms controlling root development, they must also integrate information from the cell to the organ to the rhizosphere scale. Mathematical and computational models provide invaluable tools to bridge these distinct spatial and temporal scales and generate new mechanistic insights about the regulation of root growth and development (Band et al., 2012a). Nevertheless, we are only at the beginning of this new research area, and several challenging issues remain to be addressed for this field to move forward.

Integrating Models

In general, the models highlighted in this review have been designed independently of one another, raising important issues relating to interoperability. To proceed further, models must start to be assembled into unified frameworks. A number of conceptual and technological factors make model integration challenging. First, models may operate at different spatial resolutions or dimensions. For example, coupling a two-dimensional mechanical model of root tissues at a subcellular resolution with a two-dimensional model of auxin transport designed at a cellular resolution would require homogenizing the spatial resolution such that the mechanical model updates cell geometries over time. Second, models may also operate at different temporal resolutions. For example, mechanical processes are often assumed to be much faster than biochemical processes. Hence, cell size and shape can be considered constant in computing chemical simulations. Third, combining different types of mathematical models (e.g. Boolean, stochastic, ordinary differential equations, and partial differential equations) can often prove challenging. Nevertheless, models combining mechanistic and stochastic components have been successfully designed at the organism level for aerial tissues (Costes et al., 2008).
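A minimal sketch of the multi-rate coupling issue described above, with toy one-variable stand-ins for the mechanical and auxin-transport models (the update rules and parameters are invented for illustration, not taken from any published root model):

```python
def mechanical_update(cell_length, dt, growth_rate=0.05):
    """Toy 'fast' mechanics: the cell elongates a little each small step."""
    return cell_length * (1 + growth_rate * dt)

def auxin_update(auxin, cell_length, dt, supply=1.0, turnover=0.2):
    """Toy 'slow' biochemistry: auxin balance using the current geometry."""
    return auxin + dt * (supply / cell_length - turnover * auxin)

def couple_models(cell_length=1.0, auxin=0.0, t_end=10.0,
                  dt_chem=0.1, substeps=10):
    """Advance the mechanical model with a finer time step than the
    biochemical one, handing the updated geometry to each chemical step."""
    dt_mech = dt_chem / substeps
    t = 0.0
    while t < t_end:
        for _ in range(substeps):                      # fast process
            cell_length = mechanical_update(cell_length, dt_mech)
        auxin = auxin_update(auxin, cell_length, dt_chem)  # slow process
        t += dt_chem
    return cell_length, auxin

print(couple_models())
```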

Creating combinations of models often represents a large amount of theoretical and experimental effort. Hence, it is important that these models are freely available to the community as shared data sets, open-source model formats such as SBML (Hucka et al., 2003) or CellML (Cuellar et al., 2003), or common modeling software platforms such as VV (Prusinkiewicz, 2004), OpenAlea (Pradal et al., 2008), or MorphoGraphX (Kierzkowski et al., 2012). The move toward model integration and flexibility in the crop-modeling community should be translatable to the root systems biology ethos. For example, RECORD (for Renovation and Coordination of Agroecosystem Modeling) represents a concerted effort by French researchers at INRA to develop an “integrated modeling platform” that could eventually include Geographic Information System data, open source statistical resources, and data-handling resources (Bergez et al., 2012). Additionally, planned as a repository for crop models, the RECORD platform includes many principles of root systems biology, including multiple time scales (seconds to months), spatial variation, and environmental dynamics. RECORD aimed to use as many existing models as possible. However, to foster the incorporation of sub-crop-scale models as advocated in root systems biology, RECORD would need to look beyond crop-soil interactions to the root system itself.
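As an example of what model sharing in an open format looks like in practice, reading an SBML file with the python-libsbml bindings might look roughly like this ("model.xml" is a placeholder path, and this snippet is not taken from any of the platforms cited above):

```python
import libsbml  # python-libsbml bindings

doc = libsbml.readSBML("model.xml")  # placeholder filename
if doc.getNumErrors() > 0:
    doc.printErrors()  # report parsing/consistency problems
else:
    model = doc.getModel()
    print(model.getId(), model.getNumSpecies(), model.getNumReactions())
```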

Toward Digital Plants and Populations

Developing a mechanistic model of a whole plant represents a logical next step. Indeed, given the exchange of water, nutrients, and signals between root and shoot organs, developing virtual root or shoot models independently of each other could be considered naive in the longer term. To date, models of diverse root system subprocesses have been developed at different scales. Compared with initial approaches in systems biology, most of these models make explicit use of spatial information. Such spatial information represents different aspects of realistic root structures and can take the form of a continuous medium, a branching structure of connected elements (e.g. root meristems), a multicellular population, or a set of interacting subcellular compartments. By progressively integrating more functional aspects into these realistic representations, researchers have created new models (Sievänen et al., 2000; Godin and Sinoquet, 2005). Functional structural plant models provide a promising platform with which to create a digital plant model.

Compared with many previous models developed on aerial parts (Vos et al., 2010) and on root systems (Bidel et al., 2000; Pagès et al., 2004), recent functional structural plant models also integrate gene regulation and signaling as a new dimension in the analysis of development (Prusinkiewicz et al., 2007; Han et al., 2010). Through the combined modeling of genetic networks, physiological processes, and spatial interaction between components, development of a new generation of functional structural plant models opens the way to building digital versions of real plants (Coen et al., 2004; Cui et al., 2010) and testing biological hypotheses in silico (Stoma et al., 2008; Perrine-Walker et al., 2010).

Ultimately, breeders measure plant performance at a population scale rather than an individual scale. Hence, mechanistic multiscale models need to be developed that bridge the remaining physical scale between the plant and the field to relate genotype to phenotype and enable the engineering of crop traits. Relatively few examples exist where modeling results have been used to direct the selection of new crop varieties. One promising example is the optimization of bean root systems for phosphorus and water acquisition. Using the model SimRoot, Lynch and coworkers investigated the impact of root gravitropism (Ge et al., 2000), root hair production (Ma et al., 2001), and aerenchyma production (Postma and Lynch, 2011a, 2011b) on plant performance under various phosphorus and/or water conditions. These results defined root system ideotypes (Lynch and Brown, 2001; Lynch, 2011, 2013) and were validated by several experimental trials (Bates and Lynch, 2001; Rubio et al., 2003). This identified QTLs for basal root gravitropism (Liao et al., 2004) and root hairs (Yan et al., 2004) and led to the selection of new bean varieties, now used by breeding programs in South America and Africa. This example illustrates how modeling at the plant level can drive research, untangle processes at the molecular level, and inform crop breeding.

Integrating Environmental Information

Phenotype represents the output of the interaction between genotype and environment. In this review, almost all of the models described study intrinsic root regulatory processes. This provides a solid foundation for describing basic developmental mechanisms, but to account for root plasticity, models must also integrate environmental information. Developing mixed genetic-ecophysiological models that bridge the gap between genetic and environmental regulation represents an important goal (Roose and Schnepf, 2008; Draye et al., 2010). These environmental factors include soil physical properties; water, nutrient, and macroelement/microelement availability and distribution; mycorrhization and nodulation; and competition and interaction with other root systems. Obtaining such spatial information remains very challenging. Researchers traditionally assess soil structure by imaging and then physically removing successive layers of soil and examine root structure (of soil-grown roots) by root washing. As both are destructive, the construction of combined root and soil data sets requires the integration of measurements obtained from different samples. High-resolution synchrotron and x-ray micro-computed tomography imaging has the potential to provide rich data sets from single samples, with concurrent measurements of root and soil (Mooney et al., 2012; Keyes et al., 2013).

Root systems biology also urgently requires new methods to assay rhizosphere parameters in addition to root and soil. For example, having tools to dynamically monitor quantitative changes in root biology (hormones, water status, nutrients, etc.) and the root environment (pH, nutrient content) would address a major challenge. The development of novel biosensors based on new understanding of the pathways responsible for the perception of small signaling molecules and advances in imaging technologies and mathematical modeling will enable a truly quantitative analysis of biological processes (Band et al., 2012b; for review, see Wells et al., 2013). Combined with new dynamic sensors for environmental parameters like optodes (Elberling et al., 2011), new biosensors promise to provide a deeper understanding of how plant systems interact with their environment. Moreover, these sensors could result in innovative technologies to monitor biotic or abiotic stresses in the field (Chaerle et al., 2009). Biosensors could help optimize the use of inputs such as water, fertilizers, or pesticides and, therefore, achieve more environment-friendly and sustainable agricultural practices. Such tools and the information generated will also greatly aid the development of more realistic root-rhizosphere models and, ultimately, help optimize crop root architectures for soil types and nutrient regimens.

