I am trying to use the DAVID tool to do some gene analysis. I have probe set intensities for some cancer cell lines. I found this page in the DAVID tool: http://david.abcc.ncifcrf.gov/tools.jsp. I am a bit confused by the terminology introduced there. It calls the probe sets a gene list. Why is that? For example, you can see probe sets like
1007_s_at 1053_at 117_at 121_at 1255_g_at 1294_at 1316_at 1320_at 1405_i_at 1431_at 1438_at 1487_at 1494_f_at 1598_g_at
But why are they called a gene list and not probe sets?
It's a bit confusing, but DAVID uses the term Gene List as a generic term.
Looking at Step 2, you can submit many kinds of lists to DAVID, including actual gene symbols, Ensembl or RefSeq accessions, etc.; nearly 30 kinds of identifiers in fact, including 'not sure', which probably looks at your list and tries to guess.
Affymetrix or Illumina probe sets are each designed to measure a gene, ideally, though it is not precisely a one-probe-set-to-one-gene relationship. When an array is designed, there may be partial transcript RNA records that later turn out to be parts of a single gene. There are also probe sets that may hybridize to similar sequences in more than one gene.
It's messy, but it's also true that more than one gene symbol often appears for the same gene because of historical naming conventions.
Confusion related to the DAVID tool - Biology
List Hits (LH) = 3
List Total (LT) = 300
Population Hits (PH) = 40
Population Total (PT) = 30,000
Bonferroni: The Bonferroni in DAVID is the Šidák p-value (Šidák 1967), which is a technique slightly less conservative than Bonferroni.
Šidák, Z. (1967). "Rectangular Confidence Regions for the Means of Multivariate Normal Distributions." Journal of the American Statistical Association 62:626-633.
Benjamini: Benjamini in DAVID requests adjusted p-values by using the linear step-up method of Benjamini and Hochberg (1995).
Benjamini, Y. and Hochberg, Y. (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing." Journal of the Royal Statistical Society. Series B, Statistical Methodology 57:289-300.
FDR: FDR in DAVID requests adaptive linear step-up adjusted p-values for approximate control of the false discovery rate, as discussed in Benjamini and Hochberg (2000). It uses the lowest-slope method to estimate the number of true null hypotheses.
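As a plain-Python sketch of the two corrections described above (an illustration of the published definitions, not DAVID's own implementation), the Šidák adjustment and the Benjamini-Hochberg linear step-up procedure look like this:

```python
def sidak_adjust(pvals):
    """Šidák correction: adjusted p = 1 - (1 - p)^m,
    slightly less conservative than Bonferroni's p * m."""
    m = len(pvals)
    return [min(1.0, 1.0 - (1.0 - p) ** m) for p in pvals]

def bh_adjust(pvals):
    """Benjamini-Hochberg linear step-up adjusted p-values (FDR)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

For example, `bh_adjust([0.01, 0.04, 0.03, 0.005])` returns adjusted values of roughly `[0.02, 0.04, 0.04, 0.02]`; note that DAVID's FDR column additionally uses an adaptive estimate of the number of true nulls, which this sketch omits.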
EASE Score Threshold (Maximum Probability)
The threshold of the EASE Score, a modified Fisher Exact p-value, for gene-enrichment analysis. It ranges from 0 to 1. A Fisher Exact p-value of 0 represents perfect enrichment. Usually, a p-value equal to or smaller than 0.05 is considered strongly enriched in the annotation categories. The default is 0.1.
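Using the example counts given earlier (LH = 3, LT = 300, PH = 40, PT = 30,000), the one-tailed Fisher Exact p-value and the EASE Score can be sketched with the Python standard library. This follows the common published formulation in which EASE removes one gene from the list hits; it is an illustration, not DAVID's code:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) when n genes are drawn from a population of N
    genes, K of which carry the annotation (hypergeometric tail)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Example counts from the text
LH, LT, PH, PT = 3, 300, 40, 30_000

fisher_p = hypergeom_sf(LH, PT, PH, LT)              # one-tailed Fisher Exact
ease_p = hypergeom_sf(max(LH - 1, 1), PT, PH, LT)    # EASE: one list hit removed
fold_enrichment = (LH / LT) / (PH / PT)              # (3/300) / (40/30000) = 7.5
```

With these counts the ordinary Fisher Exact p-value falls below 0.05 while the more conservative EASE Score does not, which is exactly the kind of borderline case the 0.1 default threshold is meant to retain.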
Count Threshold (Minimum Count)
The threshold of minimum gene counts belonging to an annotation term. It must be greater than or equal to 0. The default is 2. In short, you should not trust a term that has only one gene involved.
The p-values associated with each annotation term inside each cluster are exactly the same values as the p-values (Fisher Exact/EASE Score) shown in the regular chart report for the same terms.
The Group Enrichment Score, the geometric mean (in -log scale) of members' p-values in a corresponding annotation cluster, is used to rank their biological significance. Thus, the top-ranked annotation groups most likely have consistently lower p-values for their annotation members.
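The geometric mean in -log scale reduces to a simple arithmetic mean of the -log p-values. A minimal sketch (assuming base-10 logarithms, which the documentation does not state explicitly):

```python
from math import log10

def group_enrichment_score(pvals):
    """Geometric mean of member p-values on a -log10 scale:
    the mean of -log10(p) over a cluster's annotation terms."""
    return sum(-log10(p) for p in pvals) / len(pvals)

# Three member terms spanning three orders of magnitude:
# -log10 values are [3, 2, 1], so the score is their mean, 2.0
score = group_enrichment_score([0.001, 0.01, 0.1])
```

A score of 2.0 thus corresponds to a geometric-mean p-value of 0.01; higher scores mean consistently smaller member p-values.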
Gene Report is a highly integrated view of a single gene and its general annotations/accessions from multiple resources. It can quickly give a global idea about the gene. The hyperlinks throughout the report will lead users to the original resources for further details.
An independent database in a category, such as BioCarta Pathways.
DAVID Gene ID
An internal ID generated from the "DAVID Gene Concept" in the DAVID system. One DAVID gene ID represents one unique gene cluster belonging to a single gene entry.
After user input gene IDs are converted to the corresponding DAVID gene IDs, this refers to the percentage of DAVID genes in the list associated with a particular annotation term. Since a DAVID gene ID is unique per gene, DAVID ID% presents the gene-annotation association more accurately, because it removes any redundancy in the user's gene list (i.e., cases where two user IDs represent the same gene).
It represents the DAVID Oracle databases, which collect a large volume of annotation information from a wide range of public bioinformatics resources. It is probably the largest and most comprehensive integrated database in the field.
An alternative name for the Fisher Exact statistic in the DAVID system, referring to the one-tailed Fisher Exact probability value used for gene-enrichment analysis.
A set of the user's input genes that is highly associated with certain terms, as statistically measured by the Fisher Exact test in the DAVID system.
Hierarchical Structure: Category → Annotation Source → Term
Pathways → BioCarta → p53 signaling pathway.
DAVID Bioinformatics Resources: expanded annotation database and novel algorithms to better extract biology from large gene lists
All tools in the DAVID Bioinformatics Resources aim to provide functional interpretation of large lists of genes derived from genomic studies. The newly updated DAVID Bioinformatics Resources consists of the DAVID Knowledgebase and five integrated, web-based functional annotation tool suites: the DAVID Gene Functional Classification Tool, the DAVID Functional Annotation Tool, the DAVID Gene ID Conversion Tool, the DAVID Gene Name Viewer and the DAVID NIAID Pathogen Genome Browser. The expanded DAVID Knowledgebase now integrates almost all major and well-known public bioinformatics resources centralized by the DAVID Gene Concept, a single-linkage method to agglomerate tens of millions of diverse gene/protein identifiers and annotation terms from a variety of public bioinformatics databases. For any uploaded gene list, the DAVID Resources now provides not only the typical gene-term enrichment analysis, but also new tools and functions that allow users to condense large gene lists into gene functional groups, convert between gene/protein identifiers, visualize many-genes-to-many-terms relationships, cluster redundant and heterogeneous terms into groups, search for interesting and related genes or terms, dynamically view genes from their lists on bio-pathways and more. With DAVID (http://david.niaid.nih.gov), investigators gain more power to interpret the biological mechanisms associated with large gene lists.
A DAVID gene constructed by a single linkage algorithm. Two UniRef100 clusters, two…
An HTML report from the Functional Annotation Clustering. The annotation cluster 1 in…
MAKING SENSE OF YOUR PERCEPTIONS
When you step into Wilsaan Joiner’s lab, the foosball table in the corner might seem a bit out of place. But for Joiner’s research on perception and eye movement, playing a simple game can tell you a lot about how your visual system works.
Your eyes are amazing sensors. Visual information sweeps across the retinas so fast that what you perceive should be a blur. However, your visual system smooths the action like an image stabilization tool for shaky camera shots. Your brain constantly applies corrections, providing a seamless picture of your world.
In the brains of people who have schizophrenia, bipolar disorder and other mental illnesses, these unconscious functions are in disarray, blurring the lines between internal and external sensations.
“Your central nervous system is making constant predictions about your body all the time,” says Joiner, an assistant professor of neurobiology, physiology and behavior. “It’s always going on in the background and you’re not typically aware of it. But when you can’t do it, it has pronounced consequences that are fairly devastating. You can’t make sense of many of the common things we experience in the world.”
UC Davis College of Biological Sciences neuroscientists like Joiner and his colleague Jochen Ditterich are exploring new ways to understand how our brains make sense of our perceptions, in hopes of helping to diagnose and fight these debilitating conditions.
With a simple visual test, Joiner can evaluate corollary discharge in both non-human primates and humans with schizophrenia. David Slipher/UC Davis
To fool the eye—or brain?
A key to understanding our visual processes is the concept of “corollary discharge,” a term that describes the brain’s capability to anticipate a change in sensory information due to self-movement. This guidance system allows you to distinguish the source of changes that occur in your environment and likely contributes to performing rapid activities, like hitting a baseball.
Another way to think about this internal, unconscious signal is to consider how it’s impossible to tickle yourself. Somehow, your brain recognizes your self-initiated movement and dampens the physical stimulation you’re trying to induce.
In healthy brains, the thalamus likely conveys this information with great fidelity. But patients with schizophrenia have perceptual difficulty with tasks that rely on corollary discharge, including identification of visual changes in the environment.
It’s unclear whether this difficulty is related to the transmission or the actual utilization of the signal. Without it, test subjects make perceptual decisions based solely on visual information rather than on a combination of internal knowledge of their movements and the experienced visual information.
With a simple visual test, Joiner can evaluate corollary discharge in both non-human primates and humans with schizophrenia. In the experiments, which involve eye movements and perceptual decisions, Joiner found subjects relying solely on visual information consistently make the wrong choices.
“It’s only when you have this kind of deficit that you have more pronounced perceptual symptoms,” says Joiner. “So what this is showing is a somewhat simple visual perception task that correlates very well to the extent that you have delusions and hallucinations.”
While the absence of these internal signal cues reveals a larger void in our understanding of the origins of psychosis, it provides clues about how individuals with mental illness perceive themselves and the origins of their thoughts and ideas.
Joiner’s research suggests that deficits in corollary discharge may be an accurate and objective tool for diagnosing mental health conditions with psychotic symptoms.
Joiner discovered that as an individual’s deficit in corollary discharge increases, their sense of agency (e.g., ownership over thoughts or actions) decreases. This behavior can lead to trouble recognizing self-caused vs. externally caused sensations, which may lead to confusion, hearing voices and other psychoses.
Joiner’s long-term hope is that the absence of corollary discharge may help provide a simple, but objective litmus test that clinicians can use to accurately identify and develop treatments for these neurological diseases.
“If you have deficits in transmitting or utilizing corollary discharge signals, it speaks to higher mental disorders that are very pronounced, but we don’t quite understand,” says Joiner.
Ditterich’s research suggests that patients with Parkinson's experience problems using previous knowledge when making decisions. David Slipher/UC Davis
The power of decision-making
If you hear an unfamiliar sound in the woods, your survival could depend on making a rapid decision with very limited information.
Ditterich, an associate professor of neurobiology, physiology and behavior, wants to better understand how our brains make such quick decisions. He evaluates this process through a “decisional threshold,” which describes the amount of information you want to collect before you commit to a particular choice. For any scenario, the goal is to find a tradeoff between maximizing the accuracy of the decision and minimizing the time it takes to do so.
With the clear and precise diction of his German accent, combined with a system engineer’s analytical perspective, Ditterich methodically outlines his plan to transform the way we treat neurological and psychiatric diseases that involve cognitive deficits.
“Implanting a technical device called a deep brain stimulator (DBS), has become a viable treatment option for patients with motor disorders, like Parkinson’s, that do not respond well to drug therapy,” says Ditterich. “Using a more intelligent version of stimulation, we might at some point be able to treat cognitive deficits resulting from neurological or mental disorders that are also difficult to treat with medications.”
His approach is ambitious, but if successful, it could one day improve cognitive functioning related to decision-making, attention, memory and more. The basic idea is to design an intelligent, implantable device that directly communicates with the brain and steers it in an attempt to restore healthy neural signaling. In concept, such an advanced device would monitor and decode a patient’s neural activity and dynamically stimulate the brain to achieve a desired state. But first, Ditterich needs to understand precisely how cognitive functions are implemented within a healthy brain.
While this scenario may sound like science fiction, implants are already being used on a regular basis to treat patients with Parkinson’s. They are also being tested to treat conditions like depression and obsessive-compulsive disorder. But these devices aren’t particularly intelligent. Instead of responding to dynamic brain states, the current generation of stimulators provides only constant and steady stimulation.
Parkinson’s is generally thought of as a motor disorder, but it turns out patients experience cognitive deficits as well, including decision-making deficits, which are not typically addressed through current treatment options.
Ditterich’s research suggests that these patients experience problems using previous knowledge when making decisions. It’s not a learning problem but an implementation problem, and the patients’ decision thresholds cannot adjust appropriately.
Jochen Ditterich envisions a future in which an intelligent, implantable device can monitor and respond to changes in brain activity. The device would communicate with the brain using electrical impulses to steer and restore healthy neural signaling. ATS/UC Davis
Lifting the neural veil
By compiling neural activity from healthy brains during different decision-making situations, Ditterich can use machine learning to plot an optimal decision path for any scenario. It’s like coming up with enough evidence for a choice before the exact moment of committing to it. And amazingly, the data shows that in healthy brains the decision process is an approximation of a statistically optimal algorithm.
Your healthy brain operates across a vast distributed network involving the frontal cortex, parietal cortex, basal ganglia and other subcortical areas that collectively compute different outcomes simultaneously. You can take pride that the final results of your “organic computing” are on par with even the most advanced supercomputers.
This staggering concept is what first drew Ditterich to neuroscience. An electrical engineer by training, he began investigating how the eye recalibrates during movement, similar to Joiner’s research on visual perception.
For Ditterich, who sees the central nervous system as the ultimate information processor, “there are some things that are just very, very hard to do with machines that the brain can accomplish with ease. We have to figure out how,” he says.
“As engineers, we know a lot about how machines process information. You can use all the mathematics and the engineering tools behind that to analyze what’s going on biologically, to reverse-engineer the brain,” he adds.
While such advanced implantable devices are still a ways off, Ditterich is already in talks with control engineers at UC Davis to explore machine guidance. “They know very well how to steer airplanes and navigate other complex technical systems,” says Ditterich. “Could we use this understanding to steer the brain into a particular desired state?”
Now Ditterich is collaborating with UC Davis Health clinicians to monitor brain activity in patients receiving a DBS implant. He conducts research performing the same perceptual decision tests in both humans and rhesus monkeys.
“We use identical tasks to understand how cognitive functions work in humans and can be validated in non-human primates,” Ditterich says. “Behaviorally, in visual decision-making tasks, we find very, very similar results.”
Reframing your world
Your eye movements and decision-making processes are things you probably take for granted. Yet your identity is intimately connected to your ability to make decisions independently.
But imagine if you couldn’t answer, “Who’s in control?” How would this impact your routine decisions, like “What will I eat for lunch? What do I do next in my day?” These are the very real challenges that people with cognitive deficits face every day.
Fortunately, building the foundations to diagnose and treat these conditions is a driving force for Joiner and Ditterich and many other faculty and student researchers at UC Davis.
They’re pushing the boundaries of knowledge to make sense of our world, and to help us make sense of our place in it.
Variants of concern
The names, taken from the Greek alphabet (see ‘Variants of concern’), are not intended to replace scientific labels, but will serve as a handy shorthand for policymakers, the public and other non-experts who are increasingly losing track of different variant names.
“It is a lot easier for a radio newsreader to say ‘Delta’ than bee-one-six-one-seven-two,” says Jeffrey Barrett, a statistical geneticist leading SARS-CoV-2-sequencing efforts at the Wellcome Sanger Institute in Hinxton, UK. “So I’m willing to give it a try to help it take off.”
“Let’s hope it sticks,” says Tulio de Oliveira, a bioinformatician and director of the KwaZulu-Natal Research Innovation and Sequencing Platform in Durban, South Africa, whose team identified the Beta variant. “I find the names quite simple and easy.”
The system could be especially useful in countries battling a number of variants, such as South Africa, where a variant found in the United Kingdom and known to scientists as B.1.1.7 — now called Alpha — is on the rise, and researchers such as de Oliveira are watching out for cases of the B.1.617.2 variant identified in India, now called Delta. “For a country like South Africa, to follow Beta and Alpha and keep a small eye on Delta, that will potentially be easier,” he says.
Confusion isn’t the only reason to go with a simplified naming system, say advocates of the new system. Terms such as ‘the South African variant’ and ‘the Indian variant’ can stigmatize countries and their residents, and might even discourage nations from running surveillance for new variants. “The geographical names, we have to stop with that — really,” says de Oliveira. He is aware of countries in Africa where health ministers have been reluctant to announce the discovery of new local variants because of concerns about being made pariahs.
“I can understand why people just call it ‘the South African variant’ — they don’t mean anything by it,” says Salim Abdool Karim, an epidemiologist at the Centre for the AIDS Program of Research in South Africa in Durban. “The problem is, if we allow it to continue, there are people who have an agenda and will use it.”
Barrett intends to embrace the new naming system in media appearances, but he suspects geographical descriptors won’t go away quickly. “The reason we use country names (which is problematic) is that it ties the variants to the story of the pandemic in a way that’s easier to remember,” he wrote in an e-mail to Nature. “The new system is still very anonymous and it will still be hard for the public to remember who’s who.”
In recent months, most scientists have settled on a single lineage-naming system that describes the evolutionary relationships between variants. With time, the WHO’s naming system might gain the same currency among the general public, says Jeremy Kamil, a virologist at Louisiana State University Health in Shreveport. “If people use it, it will become the default.”
Modifying HCV: moving forward for greener agricultural products
We urge agriculture interests and conservationists to take a precautionary approach to the application of HCV to agriculture. We do not yet know (and may never know with certainty) which precise areas of forest merit HCV1 and/or HCV3 designation. For instance, the best available resource on critically endangered species—the IUCN range maps—shows the maximum extent of each species’ range and is not suitable for finer-scale designations. HCV1 and HCV3 will certainly protect some forests (particularly those in Sundaland and Southeastern Brazil), but precisely how much is impossible to determine. We thus see HCV criteria based upon area thresholds, which are easily measured and policed using satellite imagery, as the frontline in the battle to guarantee the protection of valuable forests from conversion to sustainable agricultural plantations.
Large, landscape-level habitats above thresholds of 20,000–500,000 ha are protected under HCV2. However, there is currently no HCV criterion designed to protect the important biodiversity and habitat connectivity that are provided by large patches of forest within the agricultural matrix. We highlight this as a major shortfall in the application of HCV to environmentally friendly agriculture. Without tackling this issue, it is plausible that we could lose vast amounts of biodiverse forest (e.g., Figure 1) to agricultural plantations under a green label. We thus promote the creation of an additional HCV criterion designed specifically to help ensure sustainability within agricultural plantation settings. In particular, we recommend the inclusion of:
HCV7: Maintenance of agricultural matrix-level biodiversity and connectivity.
We can be confident that the threshold for forest patches protected under HCV7 can be set at 1,000 ha (or less) and still permit sufficient expansion of certified crops. Such a threshold would certainly guarantee the protection of much biodiversity and the retention of connectivity within the agricultural matrix. However, in many locations, such a threshold would inevitably still leave much (or even all) forest without any protection. What must be decided is which, if any, of these smaller forest fragments can be sustainably converted.
Although it is tempting to argue for the protection of all smaller forest patches (Ehrlich & Wilson 1991), such an argument could undermine the HCV concept because as fragment size declines so does conservation value. Furthermore, thresholds that are too stringent might prevent the economies of scale required by large plantation companies to warrant investment in infrastructure (e.g., to build processing plants). To this end, in addition to a threshold size that protects large forest patches (≥1,000 ha), HCV7 should also protect a proportion of fragment area that falls below 1,000 ha. As a hypothetical illustration, perhaps 25% of forest area within a plantation should be protected, which could be targeted to larger fragments (hundreds of hectares in size) and/or fragments that create stepping-stones between larger blocks of forest. An alternative would be to permit the conversion of forest patches below 1,000 ha conditional on setting aside an even larger area of land in a Biodiversity Bank that protects large, contiguous blocks of forest (Edwards et al. 2010). Of course, HCV1 would still apply, providing protection to any critically endangered species in even the smallest patches.
We believe the most appropriate way forward is for the HCV Resource Network supported by ProForest—the independent organization that produced the global HCV toolkit (Jennings et al. 2003)—to create a revised HCV toolkit that includes HCV7 for the production of certified plantation crops. It would then be the responsibility of all of the sustainability councils (including the FSC in the context of tree plantations) to adopt this expanded toolkit to ensure that green-labeled agricultural products are comparable in their environmental promise. The reason why all of the councils should abide by the same size threshold is to ensure consistency: it would make little sense to prohibit certified-sustainable soy growers from clearing forest fragments greater than 1,000 ha, but to then allow certified-sustainable sugarcane growers to do so. Such complications would confuse consumers and devalue green labels.
As noted earlier, certified crops currently represent only a small, but rapidly expanding, part of the global market for each of these commodities. We suggest that it is important that all certified crops demonstrate a very high standard of environmental responsibility, lest consumers feel betrayed or lose trust in the certification process.
All work with human subjects, as appropriate, was performed under approved protocols (IRB#1210012775 and #1803020378). During the process of designing the graph rubric, we gathered validity and reliability evidence so that we, and others, could use the graph rubric in teaching and research. Validity in our study is “the relationship between the content of the test and the construct it is intended to measure” (American Educational Research Association, American Psychological Association, and National Council on Measurement in Education [AERA, APA, and NCME], 2014, p. 14). In the context of our work, we sought validity evidence to ensure that the graph rubric has appropriate categories, descriptions, and guidelines that can be used to measure and assess student understanding and application of concepts and skills relevant to graph choice and construction. To this end, our design process involved establishing construct validity, which refers to the claim that the content and features of the instrument (i.e., the graph rubric) are well supported with evidence (Benson, 1998; AERA, APA, and NCME, 2014, p. 11). In support of our overall claim of construct validity for the graph rubric as a tool to evaluate graphs, we gathered evidence for content and face validity. Establishing content validity involves gathering data in support of the claim that the instrument includes all relevant features of the subject under examination (Benson, 1998). In our case, we consulted diverse sources to ensure that the graph rubric encompasses appropriate criteria or content used to evaluate graphs (Table 1). We also approached diverse users to gather evidence of face validity, which is the ability to conclude that an instrument (i.e., the graph rubric) is appropriate and effective in achieving its aims (Holden, 2010).
While the rubric is not a test instrument, our design and construct validation process was informed by the instrument design literature and its application in discipline-based education research (Benson, 1998; Corwin et al., 2015) and consisted of three stages: 1) substantive, 2) structural, and 3) external. Although this process generally follows a linear path, there were cycles of revision and repetition of some stages. These design stages, our activities, and the types of validity evidence they contribute to are summarized in Table 1. As part of the evaluation of the construct validity of the rubric, we used interrater reliability (IRR) with a diverse group of users to understand consistency in judgment and scoring of graphs using the rubric (Holsti, 1969; Jonsson and Svingby, 2007; see Data Analysis below).
TABLE 1. Process of graph rubric design and construct validation with the three stages for graph rubric construct validation defined, the associated steps taken for each stage presented, and places in which evidence for content and face validity were obtained in support of the construct validation indicated
Review literature and data to establish the graph rubric categories, subcategories, and definitions.
Review of graphing and visualizations literature formed the initial basis of the rubric
Mine classroom data: Graph artifacts and student reflections on graph choice (Angra, 2016)
Mine clinical interview data: Graph artifacts and themes from student and professor graphing interviews (Angra and Gardner, 2017)
Solicit feedback from diverse audiences.
Revise rubric categories and descriptions, as needed.
Face validity: The quality enabling diverse users to conclude that the purpose of the rubric is to evaluate graphs
Content validity: Assurance from diverse sources that the graph rubric encompasses appropriate criteria or content used to evaluate graphs
Solicit input to establish content and face validity from:
Science education scholar feedback from rubric use
Non-education graduate student feedback from rubric use
Undergraduate student feedback from use of the rubric in the classroom (Spring 2015) and on the graph rubric categories, usability, and utility
Biology instructor feedback on the graph rubric categories, usability, and utility
Evaluate the rubric by using it to assess a diversity of graphs.
Confirm the features and structure of the graph rubric as appropriate and useful for evaluating graphs.
Face validity: The quality enabling diverse users to conclude that the purpose of the rubric is to evaluate graphs
Content validity: Assurance from diverse sources that the graph rubric encompasses appropriate criteria or content used to evaluate graphs
Rubric use by different stakeholders to evaluate diverse graphs:
Undergraduate student evaluation of graphs generated in a class they had taken previously
Biology instructor evaluation of student-generated graphs from their courses
Evaluation of graphs from selected chapters from various introductory biology texts
Stage 1. Substantive Stage: Identifying Graphing Elements by Consulting the Literature and Ongoing Research
The substantive stage led to the initial draft of the graph rubric with its categories, subcategories, and definitions. Three sources of information contributed to this stage and supplied content validity evidence for the concepts within the rubric (Table 1). We consulted the graphing and visual representations literature, student-generated graphs and reflections from a classroom study (Angra and Gardner, 2015; Angra, 2016), and graphs and the articulated reasoning constructed by students and professors in a think-aloud clinical graphing interview (Angra and Gardner, 2017).
We began the process of rubric development by consulting books and primary literature that discuss appropriate graphing practices. Because graphs are ubiquitous in many fields, we did not restrict our literature search to biology at this stage. When doing our literature search for articles on graphing, we consulted Google Scholar and the university’s online library for article recommendations. We searched broadly for articles using keywords including “graph,” “construction,” “choice,” “presentation,” “science,” and “practices.” We then extended our research by consulting the reference sections in the articles. We read each reference, made notes on the authors’ recommendations on proper graph choice and construction practices, and grouped similar recommendations together. As graphs are visual representations of data, we consulted select seminal work in the visual representations literature to identify theory and best practices (e.g., Tufte, 1983; diSessa, 2004).
To supplement the literature review and aid in rubric development, we used data from two ongoing graphing studies (Angra, 2016; Angra and Gardner, 2017). Briefly, the first graphing study took place in a physiology laboratory in which students produced graphs from their experimental data. Specifically, we were interested in the general qualities of the graphs produced (graph type, data plotted, overall appearance, understanding of the take-home message) and student reasoning for graphs they produced (Angra, 2016). The second graphing study was an expert–novice analysis conducted to understand how professors and students constructed and reflected on their graphs in a think-aloud interview setting (Angra and Gardner, 2017).
Stage 2. Structural Stage: Soliciting Feedback to Establish Content and Face Validity
During this stage, we sought content and face validity evidence to convince us that the rubric contents and structure were appropriate and relevant for evaluating graphs in biology (Table 1). We accomplished this by seeking feedback on the rubric from four different groups of people: 1) science education scholars, 2) non–education research biology graduate students who were actively pursuing either a master’s or doctoral degree, 3) undergraduate biology students enrolled in an upper-level physiology laboratory course, and 4) biology instructors. Incorporating feedback from participants at various levels of education and with expertise in various fields allowed us to check the learning goals and usability of the rubric. Feedback from students allowed us to make sure that the language in the rubric was clear and easy to understand.
Science Education Scholars.
Drafts of the rubric were presented to an interdepartmental biology education research group of science education scholars (Table 1) that includes chemistry and biology education graduate students and postdoctoral fellows, as well as instructors from the departments of curriculum and instruction, biology, and chemistry. We shared the graph rubric with science education scholars to obtain feedback from people with pedagogical expertise. The objective of the first meeting with this group was to obtain targeted feedback on the first draft of the graph rubric. In the first draft, we used a binary scale (i.e., present/not present) for the mechanics category and three levels of achievement for the other categories. We presented two de-identified student graphs (Graphs 1 and 2 in Appendix C, Supplemental Material) produced by different student groups in a physiology laboratory course, along with a brief overview of the students’ experimental designs and the variables associated with that particular laboratory context. Each science education scholar was instructed to independently use the graph rubric to evaluate both student graphs, then pair and discuss their ratings with a partner; this was followed by a group discussion guided by A.A. and S.M.G. The guided group discussion began with broad questions to solicit feedback from the participants about rubric use, appropriateness, and the descriptions of the rubric categories and subcategories. Percent agreement as an estimate of IRR between the science education scholars and the authors was calculated after the meeting to gauge consistency in rubric scoring across the categories (see Results). IRR scores from the first meeting were low and are not reported in this article, but the conversations about rubric scoring, which proved fruitful for rubric revisions, are described in the Results section.
After the initial round of feedback, the rubric categories and subcategories were expanded and refined based on comments from the science education scholar group, further literature review, and ongoing graphing research (Table 1). We standardized the levels of achievement to three categories: “present/appropriate,” “needs improvement,” and “unsatisfactory.” In addition, we adjusted the weighting of the scoring of the subcategories across the three main categories of the rubric to reflect the level of cognitive demand: scoring of items in the “mechanics” category is weighted less than scoring of items in the “communication” and “graph choice” categories (Figure 1). Using similar protocols but at a later time, the science education scholars were asked to use the revised rubric to evaluate Graph 3 (Appendix C, Supplemental Material).
FIGURE 1. The graph rubric. Final version of the analytic graph rubric with three levels of achievement. There are three broad categories: graph mechanics, communication, and graph choice. Within graph mechanics are seven subcategories: title, x-axis and y-axis labels and units, scale, and key. Within communication are two subcategories: aesthetics and take-home message. Within graph choice are three subcategories: graph type, data displayed, and alignment. We suggest weighting the graph mechanics lower than the other two categories, as indicated by the scoring criteria.
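The weighting scheme described above can be sketched as a small scoring function. The subcategory names follow the figure caption, but the specific weights (mechanics counted once, the other two categories counted double) and the 0–2 achievement scale are illustrative assumptions, not the published point values from Figure 1.

```python
# Sketch of a weighted analytic rubric score. The weights and the 0-2
# achievement scale are hypothetical stand-ins for the actual scoring
# criteria shown in Figure 1.

SUBCATEGORIES = {
    # category: (subcategory names, weight applied to each subcategory)
    "graph mechanics": (
        ["title", "x-axis label", "x-axis units", "y-axis label",
         "y-axis units", "scale", "key"], 1),
    "communication": (["aesthetics", "take-home message"], 2),
    "graph choice": (["graph type", "data displayed", "alignment"], 2),
}

def rubric_score(levels):
    """levels maps each subcategory name to an achievement level:
    0 = unsatisfactory, 1 = needs improvement, 2 = present/appropriate."""
    return sum(weight * sum(levels[name] for name in names)
               for names, weight in SUBCATEGORIES.values())

# A graph rated at the top level on every subcategory:
perfect = {name: 2 for names, _ in SUBCATEGORIES.values() for name in names}
print(rubric_score(perfect))  # prints 34
```

Because mechanics items carry half the weight of communication and graph choice items, a graph with a missing axis label loses less credit than one whose graph type is misaligned with the research question, mirroring the cognitive-demand rationale above.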
Biology Graduate Students.
We obtained feedback from 10 biology graduate students present at a biweekly graduate seminar (Table 1), using the revised version of the graph rubric (Figure 1). Feedback from this group is important because of the role graduate students play as teaching assistants, helping the main instructor deliver content and/or provide feedback to students, usually with a specific rubric or answer key. We gave the biology graduate students a copy of the graph rubric and a student-generated graph (Graph 3 in Appendix C, Supplemental Material) with the corresponding research question and hypothesis to review independently; this was followed by a think–pair–share and a general discussion. IRR was calculated after the meeting to gauge consistency of rubric scoring across the graph rubric categories.
We tested the utility of the graph rubric in an upper-level physiology laboratory classroom with undergraduate students to 1) provide instructor feedback on graphs they constructed as a group and 2) have them use the graph rubric to provide peer feedback. Briefly, students worked in teams to design original experiments, collect data, and display findings in graphs. In conjunction with previously published graph tools (Angra and Gardner, 2016), students used the graph rubric to guide their graph construction and to inform their anonymous graph peer review, which occurred four times during the semester. At the end of the semester, students were prompted to anonymously complete a survey providing feedback on the usability of the rubric and its appropriateness for the task, and offering suggestions for improvement.
We recruited four research-active biology instructors from diverse biology subdisciplines to gather face and content validity. Instructors were shown a copy of the graph rubric (Figure 1) and were asked for feedback regarding the appropriateness of the rubric categories, its potential usability in the classroom and helpfulness to students, and the scoring features of the rubric.
Stage 3. External Stage: Usage of the Graph Rubric in Different Contexts and by Diverse Users
This stage consisted of using the final rubric (Figure 1) to evaluate graphs from different sources and by users from diverse external stakeholder groups to provide us with additional content and face validity evidence. The sources of evidence were derived from evaluation of 1) student-generated graphs from an upper-level undergraduate physiology class; 2) student-generated graphs from a biology instructor’s class; and 3) graphs from selected chapters of five introductory biology textbooks. To standardize and guide independent users’ scoring of graphs with the rubric, we constructed graph rubric training materials (Appendix B, Supplemental Material). These materials define and explain the features of the rubric and include example scoring of five graphs, each from the three levels of achievement, as shown on the final version of the graph rubric. IRR was calculated between each external user and an expert rater.
Feedback from Undergraduate Biology Majors.
We gathered feedback on an independent graph evaluation task from undergraduate students (n = 7) who had successfully completed an upper-level physiology course. We provided the participants with the graph rubric training materials (Appendix B, Supplemental Material) and five de-identified student-generated graphs to evaluate with the rubric (Appendix D, Supplemental Material). The graphs chosen represented typical graph types and displayed some common undesirable attributes, such as plots of all raw data when a descriptive statistic would be appropriate; the use of dark backgrounds and gridlines, which deflect attention from the data displayed; plots of averages without error bars; and misalignment of the graph with the research question and/or hypothesis. Students were encouraged to comment on and explain their reasoning for their scoring in each of the graph rubric subcategories.
Feedback from Biology Instructors.
To gather feedback and evaluate the rubric as a teaching tool within the context of undergraduate biology courses, we recruited biology instructors who have students create or interpret graphs as part of their normal classroom instruction. We purposely recruited instructors who teach courses ranging from the introductory level to advanced undergraduate and graduate levels. The four faculty instructors taught a range of courses: a course-based undergraduate research experience (CURE) introductory biology laboratory; intermediate-level physiology and cell biology courses; and upper-level field ecology, conservation biology, and neurobiology courses. We provided each instructor with the graph rubric and rubric training materials (Appendix B, Supplemental Material) and asked them to select and evaluate between five and 10 student graphs (with accompanying research question and/or hypothesis statements) with the graph rubric (see Appendix E in the Supplemental Material for descriptions). The graphs were returned to the research team for “expert” scoring with the graph rubric for comparison with each instructor’s scoring. In addition, each instructor completed a brief survey to provide feedback on the clarity, usability, and appropriateness of the rubric for evaluating student graphs in their courses.
Evaluation of Biology Textbook Graphs.
Because undergraduate students may encounter graphs in their textbooks as part of their course work, we evaluated graphs from five introductory biology textbooks to augment our content validity evidence (see Table 7 later in this article and Appendix H, Supplemental Material). We chose four textbooks (Raven et al., 2008; Sadava et al., 2009; Singh-Cundy and Shin, 2010; Urry et al., 2014) based on the undergraduate curriculum for biology students at a large midwestern university. The fifth textbook (Campbell et al., 2014) was chosen because it integrates the recommendations put forth by Vision and Change to incorporate more quantitative thinking in biology (AAAS, 2011). Our selection criteria and graph analysis followed those of Rybarczyk (2011) and Hoskins et al. (2007). We randomly selected 10 chapters from each textbook and analyzed pages with graphs as stand-alone artifacts using the graph rubric. The definition that we use for a graph is taken from Kosslyn’s (1994, p. 2) work: “a visual display that illustrates one or more relationships among numbers.” We expanded this definition and analyzed graphs that were in a Cartesian coordinate system, framed with x- and y-axes, and found in the main chapter or in the side-panel chapter exercises (see Appendix G in the Supplemental Material for a list of graphs on which evaluation was performed). We excluded interactive graphs, graphs found in videos, and graphs found in the end-of-chapter exercises. Because the graphs in textbooks were rarely directly derived from or presented as related to experiments, we did not include evaluation of the “alignment” subcategory of the rubric.
We used IRR to quickly identify and refine areas of the rubric during the structural stage of rubric design and to provide feedback on the broad use and scope of the rubric during the external stage (Table 1). In this way, the IRR analysis contributed to both content and face validity evidence. We were able to identify areas in which the content and structure of the rubric were well understood and relevant to users. In addition, IRR provided us with insight into how raters at various skill levels use the rubric and how they rate graphs that they are most likely to encounter in their own contexts. We first calculated IRR in the form of percent agreement between raters to quantify reliability between the expert raters (A.A. and S.M.G.) and each population that was asked to use the graph rubric during the structural stage (McHugh, 2012). Because the percent agreement between the two expert raters was high (>90%), percent agreement between other raters (e.g., students or instructors) and either expert rater is used for the values presented here (Stemler, 2004). In qualitative research, an IRR agreement of 80% or higher is considered acceptable (Holsti, 1969). These results inform the limitations and usage of the rubric and suggest possible avenues for classroom implementation.
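The percent-agreement form of IRR described above is simply the fraction of items on which two raters assign the same score. A minimal sketch, using hypothetical rater scores (no raw scores are reported in this section):

```python
# Percent agreement between two raters over the same set of rubric
# subcategories. The example scores below are hypothetical; the study
# reports >90% agreement between the two expert raters and treats 80%
# or higher as acceptable (Holsti, 1969).

def percent_agreement(rater_a, rater_b):
    """Percentage of items on which the two raters gave identical scores."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must score the same set of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Hypothetical scores for 10 rubric subcategories on a 0-2 scale:
expert = [2, 2, 1, 0, 2, 1, 2, 2, 1, 2]
novice = [2, 2, 1, 1, 2, 1, 2, 0, 1, 2]
print(percent_agreement(expert, novice))  # prints 80.0
```

Note that percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are a common alternative when score categories are few.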
The Biology department provides students with strong foundations that enable them to think critically about biological topics, learn technical skills for solving biological problems, and communicate biological information in multiple formats.
By offering courses that span the spectrum of biological disciplines, students can prepare for advanced studies in a wide variety of fields. The curriculum also prepares students to take a leadership role in society by furnishing them with the tools to make informed decisions about scientific issues.
Prof. Karen Hales speaks to Genetics students about DNA sequence analysis of a fruit fly gene that they characterized by CRISPR-Cas9 targeted mutagenesis.
Prof. Chris Paradise and Davidson Research Institute summer fellows collect data on the insect population of a local farm.
Students study the effects of vaping using an exposure chamber that Prof. Karen Bernd's lab developed specifically for e-cigarette vapor experiments.
Molecular Biology & Genetics
Molecular Biology and Genetics seek to understand how the molecules that make up cells determine the behavior of living things. Biologists use molecular and genetic tools to study the function of those molecules in the complex milieu of the living cell. Groups in our department are using these approaches to study a wide variety of questions, including the fundamental processes of transcription and translation, mechanisms of global gene control including signal transduction pathways, the function of the visual and olfactory systems, and the nature of genetic diversity in natural populations and how that affects their evolution, among others. The systems under study cover the range of model organisms (bacteria, yeast, slime molds, worms, fruit flies, zebrafish, and mice) though the results of these studies relate directly or indirectly to human health.
Faculty with Interest in Molecular Biology and Genetics:
We are developing new transgenic mouse models of human prostate cancer.
We are investigating the regulation of brain development and metabolism. These studies are expected to contribute to the prevention of neural tube birth defects and the treatment of stroke.
We investigate the role of ubiquitin/proteasome mediated protein degradation in transcription and the regulation of gene expression in eukaryotes.
Genes and pathways involved in copper tolerance, biofilm formation, and nanoparticle synthesis in the marine bacterium Alteromonas; megaplasmids in bacterial niche adaptation.
We study the role of the Wnt signaling pathway in controlling cell fate decisions during C. elegans development. We also study regulation and function of the Hox gene lin-39 in C. elegans.
Cross-linking between experimental assays and in-silico data for regulatory elements.
Molecular genetics of translational accuracy in the yeast Saccharomyces cerevisiae and bacterium Escherichia coli.
Studying bacterial physiology using systems and synthetic biology; determining how microbes sense the environment and obtain energy; examining the mechanisms of plant cell wall degradation in bacteria.
Understanding epigenetics and the regulation of the genome through investigation of histone post-translational modifications; dissecting the role of protein post-translational modifications in nuclear signaling pathways.
Genetic mapping of quantitative traits, association mapping to identify the effects of natural polymorphism in candidate genes on phenotypic variation.
Characterizing function of genes regulating plant innate immunity and dissecting defense signaling networks.
Molecular phylogenetic systematics; phylogenetic reconstruction of gene families.
Identification and characterization of transposons for tagging important developmental loci in Volvox carteri.
We are interested in understanding how alterations in key oncogenes and tumor suppressors impact ovarian cancer progression.
My research program uses the techniques of molecular biology to explore structure function relationships of visual pigments.
Molecular microbial ecology, physiology and genetics.
We use loss-of-function and gain-of-function genetic strategies in Drosophila to identify new genes involved in cell migration, and to better understand molecular pathways required for cell movement.
We study the role of G-protein coupled receptors (GPCRs) in regulating both normal and disease states, as well as the regulatory mechanisms that modulate GPCR responsiveness at the molecular level.
We are interested in how different genetic pathways and electrical activity levels in neurons regulate neuronal network development, stabilization, and aging in Drosophila.
We will study transcriptional changes facilitated by primary tumors to promote cancer metastasis. We will investigate various cell signaling mechanisms used by tumors to communicate with other cells. We will use bioinformatics, proteomics, and molecular biology techniques to identify and characterize key regulators of metastasis and cellular dormancy in the bone marrow.