
Automatic cropping of collimated digital X-ray images


Do you have any suggestions for a library which offers an algorithm to crop digital X-ray images automatically?

We receive full-size images (43x43 cm) from a detector, independent of the collimator settings. If the collimator has been closed to 10x15 cm, we need to crop the image to that area. This should happen automatically, so that the user never sees the full-size image, which contains a lot of information that is not needed.

Now we are looking for a good library which offers such an algorithm. This sounds like a simple task, but depending on the dose and the collimation, it can be pretty hard to detect the collimated area of the image. Therefore, we are looking for a professional solution rather than implementing it ourselves.

Any suggestions?


There is a software package, ImageJ, which can be used to do what you are asking. Depending on your capability and the complexity of what you are doing, you can either write a macro in the program to crop images or extend the program in Java and write a custom plugin to do what you describe.

I used this program for years; it is a very strong program.
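
To give you an idea of what's involved, here is a minimal Python sketch of the simplest approach (a global Otsu threshold plus a bounding box); the function name, threshold choice, and margin are illustrative assumptions, and, as you say, low-dose exposures and angled collimator blades usually call for proper edge or line detection instead:

```python
# Sketch of threshold-based collimation detection; NOT production-grade.
import numpy as np
from skimage.filters import threshold_otsu

def crop_collimated(image: np.ndarray, margin: int = 8) -> np.ndarray:
    """Crop a detector-sized image down to the exposed (collimated) field.

    Assumes higher pixel values mean more exposure; invert the comparison
    if your detector encodes dose the other way around.
    """
    # Separate the exposed field from the unexposed border.
    mask = image > threshold_otsu(image)
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
    c0, c1 = np.argmax(cols), len(cols) - np.argmax(cols[::-1])
    # Keep a small safety margin so the collimator edge stays visible.
    r0, c0 = max(r0 - margin, 0), max(c0 - margin, 0)
    r1 = min(r1 + margin, image.shape[0])
    c1 = min(c1 + margin, image.shape[1])
    return image[r0:r1, c0:c1]
```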


X-ray software

dicomPACS ® DX-R is a professional acquisition software for X-ray images generated by various flat panel systems (DR) and CR units (computed radiography with imaging plates). The software also controls the operation of X-ray generators and X-ray units manufactured by diverse companies, thus ensuring an efficient and orderly workflow. The straightforward and user-friendly GUI (graphical user interface) functions via touchscreen and mouse. The dicomPACS ® DX-R image processing produces images of outstanding quality and can be adapted to special customer needs. High-performance image processing allows organ-specific optimisation, thereby guaranteeing highest quality X-ray images.

Everyday medical care is made easier by an array of integrated functions (e.g., a multimedia X-ray positioning guide) and an intuitive design. dicomPACS ® DX-R software can readily be integrated with existing information management systems. Furthermore, X-ray images can be evaluated using the dicomPACS ® viewer module included in the acquisition software. Thus, the system functions as a fully-fledged diagnostic workstation with the option to upgrade to a PACS (Picture Archiving and Communication System).

Make dicomPACS ® DX-R the linchpin of your direct digital X-ray system - be it a new unit with generator control, a retrofit of an existing X-ray machine, or a portable suitcase solution for mobile X-ray generators.


Introduction

In recent months, there has been a surge in patients presenting to the emergency department (ED) with respiratory illnesses associated with the coronavirus disease 2019 (COVID-19) 1,2 . Evaluating the risk of deterioration of these patients to perform triage is crucial for clinical decision-making and resource allocation 3 . While ED triage is difficult under normal circumstances 4,5 , during a pandemic, strained hospital resources increase the challenge 2,6 . This is compounded by our incomplete understanding of COVID-19. Data-driven risk evaluation based on artificial intelligence (AI) could, therefore, play an important role in streamlining ED triage.

As the primary complication of COVID-19 is pulmonary disease, such as pneumonia 7 , chest X-ray imaging is a first-line triage tool for COVID-19 patients 8 . Although other imaging modalities, such as computed tomography (CT), provide higher resolution, chest X-ray imaging is less costly, inflicts a lower radiation dose, and is easier to obtain without incurring the risk of contaminating imaging equipment and disrupting radiologic services 9 . In addition, abnormalities in the chest X-ray images of COVID-19 patients have been found to mirror abnormalities in CT scans 10 . Although the knowledge of the disease is rapidly evolving, the understanding of the correlation between pulmonary parenchymal patterns visible in the chest X-ray images and clinical deterioration remains limited. This motivates the use of machine learning approaches for risk stratification using chest X-ray imaging, which may be able to learn such correlations automatically from data.

The majority of related previous work using imaging data of COVID-19 patients focuses more on diagnosis than prognosis 11,12,13,14,15,16,17,18 . Prognostic models used for predicting mortality, morbidity and other outcomes related to the disease course have a number of potential real-life applications, such as: consistently defining and triaging sick patients, alerting bed management teams on expected demands, providing situational awareness across teams of individual patients, and more general resource allocation 11 . Prior methodology for prognosis of COVID-19 patients via machine learning mainly uses routinely collected clinical variables 2,19 such as vital signs and laboratory tests, which have long been established as strong predictors of deterioration 20,21 . Some studies have proposed scoring systems for chest X-ray images to assess the severity and progression of lung involvement using deep learning 22 , or more commonly, through manual clinical evaluation 7,23,24 . In general, the role of deep learning for the prognosis of COVID-19 patients using chest X-ray imaging has not yet been fully established. Using both the images and the clinical variables in a single AI system also has not been studied before. We show that they both contain complementary information, which opens a new perspective on building prognostic AI systems for COVID-19.

In this retrospective study, we develop an AI system that performs an automatic evaluation of deterioration risk, based on chest X-ray imaging, combined with other routinely collected non-imaging clinical variables. An overview of the system is shown in Fig. 1a. The goal is to provide support for critical clinical decision-making involving patients arriving at the ED in need of immediate care 2,25 , based on the need for efficient patient triage. The system is based on chest X-ray imaging, while also incorporating other routinely collected non-imaging clinical variables that are known to be strong predictors of deterioration.

a Overview of the AI system that assesses the patient’s risk of deterioration every time a chest X-ray image is collected in the ED. We design two different models to process the chest X-ray images, both based on the GMIC neural network architecture 26,27 . The first model, COVID-GMIC, predicts the overall risk of deterioration within 24, 48, 72, and 96 h, and computes saliency maps that highlight the regions of the image that most informed its predictions. The predictions of COVID-GMIC are combined with predictions of a gradient boosting model 28 that learns from routinely collected clinical variables, referred to as COVID-GBM. The second model, COVID-GMIC-DRC, predicts how the patient’s risk of deterioration evolves over time in the form of deterioration risk curves. b Architecture of COVID-GMIC. First, COVID-GMIC utilizes the global network to generate four saliency maps that highlight the regions on the X-ray image that are predictive of the onset of adverse events within 24, 48, 72, and 96 h, respectively. COVID-GMIC then applies a local network to extract fine-grained visual details from these regions. Finally, it employs a fusion module that aggregates information from both the global context and local details to make a holistic diagnosis.

Our AI system uses deep convolutional neural networks to perform risk evaluation from chest X-ray images. In particular, we designed our imaging-based classifier based on the Globally-Aware Multiple Instance Classifier (GMIC) 26,27 , denoted as COVID-GMIC, aiming for accurate performance and interpretability (see Fig. 1b). The system also learns from routinely collected clinical variables using a gradient boosting model (GBM) 28 , denoted as COVID-GBM. Both models were trained using a dataset of 3661 patients admitted to NYU Langone Health between March 3, 2020, and May 13, 2020. To learn from both modalities, we combined the output predictions of COVID-GMIC and COVID-GBM to predict each patient’s overall risk of deterioration over different time horizons, ranging from 24 to 96 h. In addition, the system includes a model that predicts how the risk of deterioration is expected to evolve over time by computing deterioration risk curves (DRC), in the spirit of survival analysis 29 , denoted as COVID-GMIC-DRC.
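
As an illustration of the fusion step, the sketch below averages the two models' per-horizon output probabilities with equal weights; the weighting scheme and the numbers are assumptions for illustration, not the authors' released implementation:

```python
import numpy as np

# Hypothetical per-horizon deterioration probabilities (24/48/72/96 h).
p_gmic = np.array([0.12, 0.18, 0.22, 0.27])  # image-based COVID-GMIC
p_gbm = np.array([0.10, 0.20, 0.24, 0.30])   # clinical-variable COVID-GBM

# Equal-weight average of the two modalities' predictions.
for h, p in zip((24, 48, 72, 96), (p_gmic + p_gbm) / 2):
    print(f"risk of deterioration within {h} h: {p:.2f}")
```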

Our system is able to accurately predict the deterioration risk on a test set of new patients. It achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.745–0.830), and an area under the precision-recall curve (PR AUC) of 0.517 (95% CI: 0.429–0.600) for prediction of deterioration within 96 h. Additionally, its estimated probability of the temporal risk evolution discriminates effectively between patients, and is well-calibrated. The imaging-based model achieves a comparable AUC to two experienced chest radiologists in a reader study, highlighting the potential of our data-driven approach. In order to verify our system’s performance in a real clinical setting, we silently deployed a preliminary version of it at NYU Langone Health during the first wave of the pandemic, demonstrating that it can produce accurate predictions in real time. Overall, these results strongly suggest that our system is a viable and valuable tool to inform triage of COVID-19 patients. For reproducibility, we published our code and the trained models at https://github.com/nyukat/COVID-19_prognosis.


Materials and Methods

Datasets

All datasets were deidentified and compliant with the Health Insurance Portability and Accountability Act. The Belarus and Thomas Jefferson University datasets were exempted from institutional review board review at Thomas Jefferson University Hospital. The National Institutes of Health datasets were exempted from review by the institutional review board (No. 5357) by the National Institutes of Health Office of Human Research Protection Programs. This was a retrospective study that involved four datasets (Table 1). These included two publicly available datasets maintained by the National Institutes of Health, which are from Montgomery County, Maryland, and Shenzhen, China (20). The other two datasets are from Thomas Jefferson University Hospital, Philadelphia, and the Belarus Tuberculosis Portal maintained by the Belarus TB public health program (21). For the Thomas Jefferson University and Belarus datasets, the positive cases with radiologic manifestations of pulmonary TB were confirmed with pathologic findings of sputum, original authors of the radiology reports, and an independent radiologist (P.L., with 10 years of experience). For the Thomas Jefferson University Dataset, the healthy control patients were established from the original authors of the radiology reports and an independent radiologist (P.L.). For the National Institutes of Health datasets, patients who were positive for TB and healthy control patients were established from clinical records and expert readers. For the Belarus dataset, the first 88 consecutive cases (of 420 in the portal) were downloaded for patients who underwent posteroanterior chest radiography at the time of initial diagnosis and pathologic analysis. Because the Belarus dataset consisted of patients who were positive for TB, a similar number of healthy control patients were obtained from Thomas Jefferson University Hospital so that the cumulative total of all datasets would have a similar number of patients who were positive for TB and healthy patients (Table 1). The dataset from China included a minority of pediatric images (21 pediatric, 641 adults), so the image sizes had a larger range (Table 1). The patient demographics for the datasets and additional pertinent findings such as pleural effusion, miliary pattern of disease, and presence of cavitation for positive cases are also provided in Table 1.

Note.—Data in parentheses are numerator and denominator. There were a total of 1007 cases (492 cases positive for TB and 515 healthy control patients). “Positive cases” refer to cases that were positive for TB. CR = computed radiography, DICOM = Digital Imaging and Communications in Medicine, DR = digital radiography, PNG = Portable Network Graphics.

Methods

The chest radiographic images were resized to a 256 × 256 matrix and converted into Portable Network Graphics format. The images were loaded onto a computer with a Linux operating system (Ubuntu 14.04; Canonical, London, England) and the Caffe deep learning framework (http://caffe.berkeleyvision.org; BVLC, Berkeley, Calif) (22), with CUDA 7.5/cuDNN 5.0 (Nvidia Corporation, Santa Clara, Calif) dependencies for graphics processing unit acceleration. The computer contained an Intel Core i5-3570K 3.4-GHz processor (Intel, Santa Clara, Calif), 4 TB of hard disk space, 32 GB of RAM, and a CUDA-enabled Nvidia Titan X 12-GB graphics processing unit (Nvidia).
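
A minimal sketch of this preprocessing step; the paths and the PIL-based approach are illustrative assumptions, since the paper does not specify its tooling:

```python
# Sketch of the described preprocessing: resize each radiograph to a
# 256 x 256 matrix and save as PNG. Paths are illustrative; every file
# in the source folder is assumed to be a readable grayscale image.
from pathlib import Path
from PIL import Image

SRC, DST = Path("radiographs_raw"), Path("radiographs_png")
DST.mkdir(exist_ok=True)

for path in SRC.glob("*"):
    with Image.open(path) as img:
        img.convert("L").resize((256, 256)).save(DST / f"{path.stem}.png")
```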

Two different deep convolutional neural network architectures were evaluated in this study, AlexNet (23) and GoogLeNet (24), including pretrained and untrained models. Pretrained networks had already been trained on 1.2 million everyday color images in 1000 categories from ImageNet (http://www.image-net.org/) before learning from the chest radiographs in this study; untrained networks had not been trained before they were used. This yielded four models: AlexNet untrained (AlexNet-U), AlexNet pretrained (AlexNet-T), GoogLeNet untrained (GoogLeNet-U), and GoogLeNet pretrained (GoogLeNet-T). Pretrained networks were obtained from the Caffe Model Zoo, an open-access repository of pretrained models for use with Caffe. The following solver parameters were used for training: 120 epochs; base learning rate of 0.01 for untrained models and 0.001 for pretrained models; stochastic gradient descent with step-down of 33% and γ of 0.1.
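
For concreteness, these hyperparameters can be written out as a Caffe solver definition. The sketch below is an assumption-laden reconstruction: the batch size, network definition file, and epoch-to-iteration conversion are not given in the text; only the hyperparameter values are.

```python
# Reconstruction of the reported solver settings as a Caffe solver file.
train_size, batch_size = 685, 32              # batch size assumed
iters_per_epoch = train_size // batch_size
max_iter = 120 * iters_per_epoch              # 120 epochs

solver_prototxt = f"""net: "train_val.prototxt"
base_lr: 0.001                     # 0.01 for the untrained models
lr_policy: "step"
gamma: 0.1
stepsize: {int(0.33 * max_iter)}   # step down every 33% of training
max_iter: {max_iter}
solver_mode: GPU
"""
with open("solver.prototxt", "w") as f:
    f.write(solver_prototxt)
```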

Of the 1007 patients in the total dataset (Table 1), 150 random patients (14.9%) were selected for testing. Randomization was performed by using pseudorandom numbers generated from the random function in the Python Standard Library (Python 2.7.13; Python Software Foundation, Wilmington, Del). Of these 150 test patients, 75 were positive for TB and 75 were healthy. The remaining 857 patients were randomly split in an 80%:20% ratio into a training set (685 patients) and a validation set (172 patients). The training set was used to train the algorithm, the validation set was used for model selection, and the test set was used for assessment of the final chosen model. In deciding the percentage split, the goal was to keep enough data for the algorithms to train on while retaining enough validation and test cases to maintain a reasonable confidence interval for the accuracy of the model (26).
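
A sketch of this split using the Python Standard Library's random module, as described; the patient identifiers and the seed are made up for illustration:

```python
import random

patients_tb = [f"tb_{i}" for i in range(492)]
patients_healthy = [f"healthy_{i}" for i in range(515)]

random.seed(0)  # the original seed is not reported
test = random.sample(patients_tb, 75) + random.sample(patients_healthy, 75)
test_set = set(test)
remaining = [p for p in patients_tb + patients_healthy if p not in test_set]
random.shuffle(remaining)
cut = int(0.8 * len(remaining))         # 80%:20% split of the 857 left
train, val = remaining[:cut], remaining[cut:]
print(len(test), len(train), len(val))  # 150, 685, 172
```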

The 75 test patients positive for TB were analyzed by a cardiothoracic radiologist (P.L.) for degree of pulmonary parenchymal involvement by TB and placed into one of the following three categories: subtle (pulmonary parenchymal involvement, <4%), intermediate (pulmonary parenchymal involvement, 4%–8%), and readily apparent (pulmonary parenchymal involvement, >8%) (Table 2). To determine this, the right and left lungs were divided into three zones (upper, middle, and lower). Opacities that occupied half or more of one zone were considered readily apparent. Opacities occupying a fourth to half of a zone were considered intermediate. Opacities occupying less than a fourth of a zone were considered subtle.
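
The zone-occupancy rule can be stated compactly; the function below is a hypothetical encoding of it, not part of the study's software:

```python
def severity_category(max_zone_fraction: float) -> str:
    """Map the largest fraction of any lung zone occupied by opacities
    (each lung divided into upper/middle/lower zones) to a category."""
    if max_zone_fraction >= 0.5:
        return "readily apparent"
    if max_zone_fraction >= 0.25:
        return "intermediate"
    return "subtle"
```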

Table 2 Distribution of Test Cases Positive for TB

Note.—Data in parentheses are numerator and denominator.

Statistical and Data Analysis

All statistical analyses were performed by using software (MedCalc v. 16.8; MedCalc Software, Ostend, Belgium). On the test datasets, receiver operating characteristic curves and AUCs were determined (27). Contingency tables, accuracy, sensitivity, and specificity were determined from the optimal threshold by the Youden index, which is given by the following equation: [1 − (false-positive rate + false-negative rate)]. For the receiver operating characteristic curves, standard error, 95% confidence intervals, and comparisons between AUCs were made by using a nonparametric approach (28–31). The adjusted Wald method was used to determine 95% confidence intervals on the accuracy, sensitivity, and specificity from the contingency tables (32). P values less than .05 were considered to indicate statistical significance.
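
The Youden criterion [1 − (false-positive rate + false-negative rate)] equals sensitivity + specificity − 1, i.e., TPR − FPR. Below is a sketch of the equivalent computation with scikit-learn; the study itself used MedCalc, and the toy data are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3])

print("AUC:", roc_auc_score(y_true, y_score))
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)            # Youden-optimal operating point
print("optimal threshold:", thresholds[best])
print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```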

Ensembles were formed by taking different weighted averages of the probability scores generated by the two classifiers (AlexNet and GoogLeNet). The weighting ranged from equal (50% AlexNet and 50% GoogLeNet) to up to 10-fold bias toward either classifier. Receiver operating characteristic curves, AUC, and optimal sensitivity and specificity values were then determined for the various ensemble approaches.
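
A sketch of this weighted-average ensembling on toy scores; the weight grid mirrors the stated range, from equal weighting to 10-fold bias toward either classifier:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])
p_alexnet = np.array([0.2, 0.7, 0.6, 0.4, 0.9, 0.1, 0.5, 0.3])
p_googlenet = np.array([0.3, 0.8, 0.4, 0.2, 0.8, 0.2, 0.7, 0.4])

for w in (1 / 11, 1 / 6, 0.5, 5 / 6, 10 / 11):  # 10:1 ... 1:10 weighting
    p_ens = w * p_alexnet + (1 - w) * p_googlenet
    print(f"AlexNet weight {w:.2f}: AUC = {roc_auc_score(y_true, p_ens):.3f}")
```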

For cases where the AlexNet and GoogLeNet classifiers had disagreement, an independent board-certified cardiothoracic radiologist (B.S., with 18 years of experience) blindly interpreted the images as either having manifestations of TB or as normal. Contingency tables and sensitivity and specificity values were then created from these results (Fig 1).

Figure 1: Contingency tables. A, Sensitivity, 92.0% (95% confidence interval: 83.3%, 96.6%); specificity, 98.7% (95% confidence interval: 92.1%, 100%); accuracy, 95.3% (95% confidence interval: 90.5%, 97.9%). B, Sensitivity, 92.0% (95% confidence interval: 83.3%, 96.6%); specificity, 94.7% (95% confidence interval: 86.7%, 98.3%); accuracy, 93.3% (95% confidence interval: 88.0%, 96.5%). C, Sensitivity, 97.3% (95% confidence interval: 90.2%, 99.8%); specificity, 94.7% (95% confidence interval: 86.7%, 98.3%); accuracy, 96.0% (95% confidence interval: 91.4%, 98.3%). D, Sensitivity, 97.3% (95% confidence interval: 90.2%, 99.8%); specificity, 100% (95% confidence interval: 95.8%, 100%); accuracy, 98.7% (95% confidence interval: 95.0%, 99.9%).


Abstract

Current plant phenotyping technologies to characterize agriculturally relevant traits have been primarily developed for use in laboratory and/or greenhouse conditions. In the case of root architectural traits, this limits phenotyping efforts, largely, to young plants grown in specialized containers and growth media. Hence, novel approaches are required to characterize mature root systems of older plants grown under actual soil conditions in the field. Imaging methods able to address the challenges associated with characterizing mature root systems are rare due, in part, to the greater complexity of mature root systems, including the larger size, overlap, and diversity of root components. Our imaging solution combines a field-imaging protocol and algorithmic approach to analyze mature root systems grown in the field. Via two case studies, we demonstrate how image analysis can be utilized to estimate localized root traits that reliably capture heritable architectural diversity as well as environmentally induced architectural variation of both monocot and dicot plants. In the first study, we show that our algorithms and traits (including 13 novel traits inaccessible to manual estimation) can differentiate nine maize (Zea mays) genotypes 8 weeks after planting. The second study focuses on a diversity panel of 188 cowpea (Vigna unguiculata) genotypes to identify which traits are sufficient to differentiate genotypes even when comparing plants whose harvesting date differs by up to 14 d. Overall, we find that automatically derived traits can increase both the speed and reproducibility of the trait estimation pipeline under field conditions.

Crop root systems represent an underexplored target for improvements as part of community efforts to ensure that global crop yields and productivity keep pace with population growth (Godfray et al., 2010; Gregory and George, 2011; Nelson et al., 2012). The challenge in improving crop root systems is that yield and productivity also depend on soil fertility, which is also a major constraint to global food production (Lynch, 2007). Hence, desired improvements to crop root systems include enhanced water use efficiency and water acquisition given the increased likelihood of drought in future climates (Intergovernmental Panel on Climate Change, 2014). Over the long term, the development of crop genotypes with improved root phenotypes requires advances in the characterization of root system architecture (RSA) and in the relationship between RSA and function.

The emerging discipline of plant phenomics aims to expand the scope, throughput, and accuracy of plant trait estimates (Furbank, 2009). In the case of plant roots, structural traits may describe RSA as geometric or topological measures of the root shape at various scales (e.g., diameters and width of the whole root system or a single branch; Lynch, 1995; Den Herder et al., 2010). These traits can be used to predict yield under specific conditions such as drought or low fertility. Understanding the diversity and development of root architectural traits is crucial, because spatial and temporal root deployment affects plant fitness, especially water and nutrient acquisition (Rich and Watt, 2013). Thus, improving plant performance may benefit from improvements in the characterization of root architecture, including understanding how trait variation arises as a function of genotype and environmental conditions (Band et al., 2012; Shi et al., 2013).

Current efforts to understand the structure of crop root systems have already led to a number of imaging solutions (Lobet et al., 2013) that are able to extract root architecture traits under various conditions (Fiorani et al., 2012), including laboratory conditions (de Dorlodot et al., 2007) in which plants are often grown in pots or glass containers (Zeng et al., 2008; Armengaud et al., 2009; Le Bot et al., 2010; Clark et al., 2011; Lobet et al., 2011; Naeem et al., 2011; Galkovskyi et al., 2012). In the case of pots, expensive magnetic resonance imaging technologies represent one noninvasive approach to capture high-resolution details of root architecture (Schulz et al., 2013), similar to the capabilities of x-ray microcomputed tomography (μCT) systems. X-ray systems allow capturing of the root architecture at a fine scale in containers with a wide variety of soil types (Mairhofer et al., 2012; Mooney et al., 2012). It has been shown that x-ray μCT paired with specifically designed algorithms has sufficient resolution to recover the root structure in many cases (Mairhofer et al., 2013). Nevertheless, x-ray μCT systems are currently unable to image mature root systems because of technical restrictions in container size.

As an alternative, root systems can be imaged directly with a digital camera when grown in glass containers with transparent media such as gellan gum or transparent soil replacements (Downie et al., 2012). Such in situ imaging benefits from controlled lighting conditions during image acquisition, even more so when focusing on the less complex root structures of younger plants, which allow three-dimensional reconstruction (Clark et al., 2011; Topp et al., 2013). Under such controlled conditions, it is expected that imaging would enable the study of root growth over time (Spalding and Miller, 2013; Sozzani et al., 2014).

However, all of the above-listed solutions have been used primarily to assess root structures in the early seedling stage (French et al., 2009; Brooks et al., 2010; Sozzani et al., 2014), up to approximately 10 d after germination (Clark et al., 2011), which makes it all but impossible to directly observe mature root systems. For example, primary and seminal roots make up the major portion of the seedling root system in maize (Zea mays) during the first weeks after germination. Later in development, postembryonic shoot-borne roots become the major component of the maize root system (Hochholdinger, 2009), not yet accessible to laboratory phenotyping platforms. In addition to phenological limitations, current phenotyping approaches for root architecture require specialized growth conditions with aerial and soil environments that differ from field conditions; the effects of such differences on RSA are only sparsely reported in the literature (Hargreaves et al., 2009; Wojciechowski et al., 2009).

Indeed, high-throughput field phenotyping can be seen as a new frontier for crop improvement (Araus and Cairns, 2014), because imaging a mature root system under realistic field conditions poses unique challenges and opportunities (Gregory et al., 2009; Zhu et al., 2011; Pieruschka and Poorter, 2012). Challenges are intrinsic to roots grown in the field because the in situ belowground imaging systems available to date are unable to capture fine root systems. As a consequence, initial attempts to characterize root systems in the field focused on the manual extraction of structural properties. Manual approaches analyzed the root system’s branching hierarchy in relation to root length and rooting depth (Fitter, 1991). In the late 1980s, imaging techniques were first used (Tatsumi et al., 1989) to estimate the space-filling behavior of roots, an estimation process that was recently automated (Zhong et al., 2009). A weakness of such approaches is that exact space-filling properties, such as the fractal dimension, are sensitive to the incompleteness of the excavated root network (Nielsen et al., 1997, 1999). In particular, the box counting method was criticized for aboveground branching networks of tree crowns (Da Silva et al., 2006). The same critiques apply to root systems, because fine secondary or tertiary roots can be lost, cut off, or stuck to each other during the cleaning process, making it impossible to analyze the entire network.

As an alternative, the shovelomics field protocol has been proposed to characterize the root architecture of maize under field conditions (Trachsel et al., 2011). In shovelomics, the researcher excavates the root at a radius of 20 cm around the hypocotyl and 20 cm below the soil surface. This standardized process captures the majority of the root system biomass within the excavation area. After excavation, the shoot is separated from the root 20 cm above the soil level and washed in water containing mild detergent to remove soil. The current procedure places the washed root on a phenotyping board consisting of a large protractor to measure dominant root angles relative to the soil level at depth intervals, with marks to score length and density classes of lateral roots. A digital caliper is used to measure root stem diameters (Fig. 1). Observed traits vary slightly from crop to crop but generally fit into the following categories by depth or root class: angle, number, density, and diameter. In this way, field-based shovelomics allows the researcher to visually quantify the excavated structure of the root crown and compare genotypes via a common set of traits that do not depend on knowledge or observation of the entire root system network. Of note, shovelomics is of particular use in developing countries, which have limited access to molecular breeding platforms (Delannay et al., 2012) and for which direct phenotypic selection is an attractive option.

A, Classic shovelomics scoring board used to score the angle of maize roots relative to the soil level. B, An example of scoring rooting depth and angle in common bean.


Conclusions

Mutational analysis remains the gold standard for identifying and characterizing gene function and this is being facilitated by high-throughput phenotyping. Given the demand for high-throughput phenotypic analysis in many organisms, we can expect the further development of large-scale phenotyping to unravel complex genotype-phenotype relationships. As an example, automated microscopy provides the opportunity to collect vast amounts of data that need to be standardized, normalized and analyzed. This increases the need for community access to store and search these large datasets. It would be of great benefit if large-scale phenotypic data could be easily compared and shared between labs. However, current limitations to the reuse and sharing of such data include the lack of standardized vocabulary terms, experimental parameters and quantitative benchmarks. Therefore, there is a pressing need for clearly defined standards and terms agreed upon by a given community. To achieve this goal, databases that contain phenotypic information and, especially, integration of phenomic and other genome-wide data are required. Multi-organism phenotype-genotype databases that facilitate cross-species identification of genes associated with orthologous phenotypes are now becoming available (for example, PhenomicDB) [83, 84]. In the next few years, the ability to harvest the full benefit of such large datasets can only be obtained by combining the genomic, epigenomic, transcriptomic, proteomic, metabolomic and phenomic data into shared databases. This resource will be invaluable for the investigation and eventual elucidation of molecular mechanisms regulating the biology of multicellular organisms, and will form a comprehensive description of the whole organism, opening new paths into systems biology.


Interpretation of Images

Radiographic images are complex, two-dimensional representations of three-dimensional subjects that are generated in a format unfamiliar to the average individual. Substantial experience and attention to detail are required to become proficient in interpretation. The starting point of radiographic interpretation is a properly positioned and exposed study. Studies that are poorly or inconsistently positioned are difficult to interpret, and improper technique further decreases the amount of information interpretable from the radiograph.

Although interpretation is aided by experience, conscious use of a systematic approach to evaluation of the image will improve the reading skill of even very experienced individuals and ensure that lesions in areas not of primary interest or near the edge of the image are not missed. However, many studies have shown that experience is the best teacher with regard to evaluation of radiographs. So, although anyone will become more adept at image interpretation with time, those individuals who interpret large numbers of images will be the most proficient.

Even proficient individuals can miss lesions that are unfamiliar to them, or so-called "lesions of omission." A lesion of omission is one in which a structure or organ generally depicted on the image is missing. A good example of this is the absence of one kidney or the spleen on an abdominal radiograph. Therefore, particular attention to systematic evaluation of the image is very important. It is perhaps best to begin interpretation of the image in an area that is not of primary concern. For instance, when evaluating the thorax of an animal with a heart murmur, the vertebral column and skeleton should be evaluated first, because if a substantial lesion is identified in the heart, the skeletal structures might otherwise not be examined.

It is essentially impossible to evaluate radiographs without a preexisting bias resulting from knowledge of the history, physical examination findings, and previously obtained laboratory results. This bias can easily promote under-evaluation of the image by focusing attention on only the area of interest associated with the bias. Even so, knowledge of the history and signalment is necessary to achieve consistent and accurate interpretation of radiographic studies.

Courtesy of Dr. Jimmy Lattimer.

Large, right-middle lung lobe mass: The ventrodorsal view (middle image) indicates the presence of an alveolar opacity in the right mid lung, which is not visible at all on the right lateral projection (right image). However, the mass is clearly seen on the left lateral view (left image). This lesion could have been completely missed on a radiographic examination consisting of a single right lateral view.

Interpretation of radiographic images depends on a thorough knowledge of anatomy and an understanding of disease pathology. Anatomic changes, such as in size, shape, location/position, opacity, and margin sharpness, represent the basis of radiographic interpretation. In addition, the degree of change, whether generalized throughout an organ or associated with other abnormalities, must be evaluated. The presence of lesions that do not affect the entirety of an organ, such as focal enlargement of the liver or focal opacification of the lung field, is strongly suggestive of localized disease such as tumors or bacterial infections. Conversely, lesions causing generalized change throughout an organ such as the liver or kidneys are most suggestive of a systemic disease such as viral infections or toxicities. Combinations of lesions in different locations or organs also help narrow down the potential diagnosis. Careful attention to the basic principles of interpretation and use of a careful systematic approach will often provide answers not readily apparent on initial examination.

Once all of the lesions on the study are identified, a rational cause for those lesions can be formulated. The maximum amount of information is derived from the radiographic study when interpretation is done in light of the clinical and clinicopathologic information available. In this way, the most likely cause for the animal’s condition can be determined. However, many diseases can cause similar radiographic lesions, and radiographs must be interpreted in light of the entire gestalt of lesions present and not based on any single lesion if multiple abnormalities are present. In many cases, it is appropriate and advisable to seek the opinion of a radiologist for interpretation of radiographic images, particularly as the number of radiographic studies available and potential diagnoses proliferate.


Acknowledgements

The authors would like to thank all members of the Zhou laboratory at Nanjing Agricultural University, the National Institute of Agricultural Botany (Cambridge Crop Research) and the Earlham Institute, as well as the Penfield Group at the John Innes Centre, for fruitful discussions and cross-disciplinary collaborations. We thank Mark Scoles in the Desktop Support team of the Norwich BioScience Institutes Partnership (NBIP) Computing department and John Humble in the Equipment Services team of the NBIP Facilities department for excellent technical support in networking and hardware manufacture. We also thank researchers at the John Innes Centre and the University of East Anglia for constructive suggestions. We gratefully acknowledge the support of the NVIDIA Corporation with the award of the Quadro GPU used for this research. The authors declare no competing financial interests.

JZ was partially funded by the United Kingdom Research and Innovation (UKRI) Biotechnology and Biological Sciences Research Council's (BBSRC) Designing Future Wheat Strategic Programme (BB/P016855/1, awarded to Graham Moore, and BBS/E/T/000PR9785, awarded to JZ). JC was supported by BBSRC's National Productivity Investment Fund CASE Award (BB/S507441/1 to JZ), hosted at the Norwich Research Park Biosciences Doctoral Training Partnership (BB/M011216/1), in collaboration with RB at Syngenta. DR was partially supported by the Core Strategic Programme Grant (BB/CSP17270/1) at the Earlham Institute. AB, TLC and DW were partially supported by NRP's Translational Fund (GP072/JZ1/D) and Syngenta's industrial collaboration fund (GP104/JZ1/D) awarded to JZ. WL and JZ were also supported by the Jiangsu Collaborative Innovation Center for Modern Crop Production. This work was also supported by the BBSRC via grant BB/P013511/1 to the John Innes Centre.


Comparative study of the fullness of dwarf Siberian pine seeds Pinus pumila (Pall.) Regel from places of natural growth and collected from plants introduced in northwestern Russia by microfocus X-ray radiography to predict their sowing qualities

As a result of analysis of the quality of Pinus pumila seeds by the method of microfocus X-ray radiography in combination with automatic analysis of digital X-ray images, it was found that the best characteristics of individual seed structures and organs were demonstrated by samples collected from trees growing at site 71 of BIN RAS, and the worst by seeds taken from South Sakhalin. The sowing qualities of the Pinus pumila seed samples were determined by standard methods. Based on analysis of the characteristics of digital X-ray images of Pinus pumila seeds, it was found that the seed sample from site 71 of BIN RAS was characterized by a high embryo area (4.19±0.49 mm²), a maximum embryo-to-thalus ratio (60.95±7.45%), a high endosperm area (23.93±1.24 mm²), and a maximum ratio square of embryo area (9.45±1.17%). The same sample was characterized by a maximum weight of 1000 seeds and maximum absolute and soil germination ratios compared with the other samples. The obtained data showed that Pinus pumila seeds collected from plants introduced in northwestern Russia are, by most parameters, not inferior, or are even superior, to seeds from the natural range.

Dwarf Siberian pine, Pinus pumila, seed quality, X-ray radiography of seeds, seed image analysis, soil germination of seeds



OBJECTIVE. The purpose of this review is to summarize 10 steps a practice can take to manage radiation exposure in pediatric digital radiography.

CONCLUSION. The Image Gently campaign raises awareness of opportunities for lowering radiation dose while maintaining diagnostic quality of images of children. The newest initiative in the campaign, Back to Basics, addresses methods for standardizing the approach to pediatric digital radiography, highlighting challenges related to the technology in imaging of patients of widely varying body sizes.

Radiography is the most common type of examination performed in diagnostic imaging. The 2006 National Council on Radiation Protection and Measurements report 160 [1] states that 74% of all radiologic examinations are radiography. In a 2005–2007 survey of five large health care markets, radiography represented 85% of all ionizing radiation imaging examinations of children [2]. Chest radiography was the most common examination performed, followed by extremity, spinal, and abdominal examinations. During the 3-year period of the survey, 40% of the children underwent at least one radiographic examination, 22% underwent two examinations, and 14% underwent three or more examinations. For all ionizing radiation examinations combined, a child is expected to undergo more than seven studies by 18 years of age.

Digital radiography has largely replaced film-screen radiography throughout the United States. Radiologists, radiologic technologists, and medical imaging physicists are responsible for understanding and properly using digital radiography. Although digital images can be acquired with a low radiation dose, without careful attention the exposure factors can increase over time, resulting in overexposure of patients, a phenomenon called exposure creep [3].

The purpose of the Image Gently campaign is to promote radiation protection of children. The newest initiative developed by the digital radiography committee, called Back to Basics, addresses methods of standardizing the approach to pediatric digital radiography. It highlights the challenges related to the technology used for imaging of patients of widely varying body sizes. The purpose of this review is to describe 10 steps that a practice can take to manage radiation exposure in pediatric digital radiography. The basics of digital radiography are reviewed, and technology, terminology, and quality assurance are briefly described. In going Back to Basics ( Fig. 1 ), the approach standardized for film-screen radiography still applies for digital radiography of children.

Fig. 1 —Poster for Image Gently campaign. (Reproduced with permission of Alliance for Radiation Safety in Pediatric Radiology)

Digital radiography (Table 1) encompasses both computed radiography and direct digital radiography. These technologies digitally capture an x-ray image, replacing analog film-screen cassettes as image receptors. Computed radiography is performed with a photostimulable storage phosphor imaging plate that absorbs energy from the x-rays exiting a patient's body to form an invisible image. The cassette is placed in a laser reader, which scans the plate, creating a visible digital image on a monitor in 30–40 seconds. Europium-activated barium fluorohalide (BaFX:Eu²⁺) powder phosphor imaging plates are the most common type for computed radiography. Needle phosphors composed of cesium bromide (CsBr) also have been developed and have improved physical properties [4] that reduce exposure of the patient [5].

Direct digital radiography is performed with x-ray sensors bonded onto thin-film transistor integrated circuits that rapidly convert the image stored on the sensor to a visible digital image, eliminating the need for a separate scanning step. Direct digital radiography can entail either direct detection (converting x-rays into electronic charge) or indirect detection (first converting x-rays into light, which is then converted to an electronic charge), resulting in readouts that are much faster (typically less than 10 seconds) than those of computed radiography [4]. Selenium is the most common material used for direct conversion digital radiography. Cesium iodide and gadolinium oxysulfide (Gd2O2S) are most commonly used for indirect conversion digital radiography.

Digital radiography has several advantages over traditional film-screen radiography. It has a latitude of exposure approximately 100 times that of film-screen radiography [6], reducing the number of examinations repeated because of underexposure or overexposure. Image manipulation (processing) is possible for changing the appearance of the image, making subtle characteristics in the image more apparent. The electronic images can be stored and distributed anywhere in the hospital network [7], allowing point-of-care access to the images within minutes after exposure. Although the spatial resolution (sharpness) of digital images is typically less than that of a film-screen image (on the order of 2.5–3.5 line pairs per millimeter for digital radiography versus 5 line pairs per millimeter or greater for film-screen radiography [4]), the superior contrast and other improvements in image quality, including image processing available only on digital images, result in superior clinical examinations with digital radiography [4].

The performance of a digital imaging system can be characterized by its spatial resolution and noise level under different exposure conditions. Together these qualities determine the efficiency of an imaging system in converting the x-ray pattern in space that passes through the patient into an image. Detective quantum efficiency (DQE) is the measure of this efficiency [8]. DQE is a function of spatial frequency, which is related to object size (high-spatial-frequency information is needed to see small objects). An ideal detector would have a DQE of 1.0 across all spatial frequencies. Experiments have shown that at lower effective beam energies, such as those used commonly in pediatric radiography, digital radiography with the newest columnar cesium iodide phosphor can achieve up to a 30% higher DQE than computed radiography and film-screen radiography [9, 10]. The higher the DQE, the less radiation exposure is needed to achieve the same image quality.

Radiologists, radiologic technologists, and medical physicists must leverage the strengths and weaknesses of each of their detectors to optimize exposure factors and reduce doses, especially when imaging children.

In the past, at film-screen radiography radiologists and radiologic technologists had immediate and direct feedback about overexposure and underexposure. An overexposed image was too black, and an underexposed image was too white. The radiographic optical density of film-screen images was directly coupled with the exposure technique.

Digital radiography is fundamentally different: The optical density feedback to radiologists and radiologic technologists is lost [11]. Image processing is designed to produce adequate gray-scale images of the correct brightness despite underexposure or overexposure. Underexposed images have fewer x-rays absorbed by the detector, resulting in increased quantum mottle. If the underexposure is substantial, radiologists recognize and object to the marked noisy and grainy appearance and may request a repeat image. Increased exposure reduces noise at digital radiography. The radiologist may not recognize this subtle reduction in grainy appearance in the image. Thus overexposed images may go unnoticed, resulting in needless overexposure and potential harm to the patient. This recognition of underexposed, noisy images and lack of recognition of overexposed images is analogous to CT exposure concerns.

The image acquisition process varies by vendor and equipment type, and radiologic technologists must adjust techniques accordingly. The techniques required to achieve optimal digital radiographs probably will be different from those used for film-screen radiography [10]. Furthermore, different digital detectors may require different techniques owing to differences in efficiency (quantitated as DQE) [10]. The differences in technique between digital systems can cause confusion and result in varying image quality at facilities where more than one vendor or detector system is in use. Operators should determine a standard approach to producing consistent, high-quality digital radiographs based not on image brightness on the monitor but on feedback provided from detector exposure indicators and individual image-quality analysis.

Each manufacturer has a proprietary method of estimating exposure to the image receptor, which can be used to indicate the adequacy of radiographic technique [11]. When only a small number of manufacturers existed, it was relatively easy to learn and use the proprietary language. Now more than 15 manufacturers of digital radiographic equipment are in the market. Hospitals frequently have more than one detector type from more than one vendor, making it much more difficult for radiologic technologists and radiologists to become familiar with proprietary exposure terminology.

Standardized terminology was espoused by the 2004 conference on the as low as reasonably achievable (ALARA) principle in digital radiography [3] and by medical physicists. The International Electrotechnical Commission (IEC) [12], a standards-writing body, and the American Association of Physicists in Medicine [13] subsequently and independently developed standardized terminology designed to eliminate proprietary terminology for equipment installed in the future. In part because of the advocacy that occurred at the Image Gently Digital Radiography Summit in 2010 [14], manufacturers, through the Medical Imaging and Technology Alliance, publicly agreed to adopt the IEC standard [15].

Radiologists, radiologic technologists, and medical physicists have three important terms to learn from the IEC standard: target exposure index (EIT), exposure index (EI), and deviation index (DI) [16]. The EIT represents the ideal exposure at the image receptor. The EIT is programmed for each anatomic examination and each imaging apparatus. It can be set by either the manufacturer or the user facility. The EI is a direct, linear measure, with respect to the tube current–exposure time product, of radiation exposure at the image receptor in the relevant region of the image; it is not a patient dose metric. Among other factors, the EI depends on the body part selected; the body part thickness; the tube voltage, measured as peak kilovoltage; the added filtration in the x-ray beam; and the type of detector. For the same patient, body part, tube voltage, and filtration selected, doubling the tube current–exposure time product will double the EI. The DI indicates to the radiologic technologist and the radiologist the degree to which the EI for an imaging examination deviates from the EIT. The DI is defined as 10 × log10(EI / EIT).

In an ideal situation in which the EI equals the EIT, the DI is zero. If the EI is higher than the EIT (overexposed), the DI is positive, and if it is lower than the EIT (underexposed), the DI is negative. A DI of −1 is approximately 20% below the appropriate exposure, and a DI of +1 is approximately 26% overexposure. Furthermore, a DI of ±3 indicates halving or doubling of the exposure relative to the EIT. The DI serves as an immediate feedback number to both the radiologic technologist and the interpreting radiologist, indicating the adequacy of the exposure. The goal is DI values in the range −1 to 1, with very few images less than −3 or greater than 3.
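
The quoted percentages follow directly from the DI definition; a short check in Python (for illustration only):

```python
# The quoted figures follow directly from DI = 10 * log10(EI / EIT).
from math import log10

def deviation_index(ei: float, ei_t: float) -> float:
    return 10 * log10(ei / ei_t)

def exposure_ratio(di: float) -> float:
    return 10 ** (di / 10)     # EI / EIT implied by a given DI

print(exposure_ratio(-1))  # ~0.79 -> roughly 20% underexposure
print(exposure_ratio(+1))  # ~1.26 -> roughly 26% overexposure
print(exposure_ratio(+3))  # ~2.00 -> doubling of exposure
```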

The new standard is expected to reduce confusion resulting from the current proprietary terminology. It addresses, however, only one factor in image quality: the presence of noise on the image. Proper positioning, elimination of patient motion, appropriate use of grids, and collimation will still have to be checked in each examination to ensure image quality.

The trio of automatic exposure control (AEC) sensors, commonly used for imaging of adults, is often problematic in children if the body part is smaller than the three AEC sensors [17] ( Fig. 2 ). On some equipment, AEC can be used for children if only the center sensor is activated and the child's body part is positioned to completely cover the entire single sensor. However, for smaller children, the anatomic area imaged may be smaller than the single central sensor. Thus manual techniques may be most appropriate for small children. To use a manual technique, one must develop pediatrics-specific technique charts. Establishing technique charts for common examinations in digital radiology is similar to the process for CT. Size-, weight-, or body part circumference–generated technique charts are used to appropriately size the tube current–exposure time product and tube voltage settings for each patient.

Fig. 2 —Photograph shows that two of the three automatic exposure control (AEC) chambers (pink circles) lie outside the thorax of an infant model. Simulation of examination of a tiny infant underscores the difficulty of using AEC chambers for pediatric patients. (Reprinted with permission of Springer Science+Business Media from [17])

The technique chart should be established by a team consisting of radiologists, radiologic technologists, and medical physicists with input from the manufacturer. The radiologist understands the clinical indications and the amount of noise to tolerate when interpreting the examination. The radiologic technologist is familiar with the technical factors (tube voltage, tube current–time product, grid, source-to-image distance, added filtration) and capabilities of each room. The medical physicist understands the science of image formation, the physics of the detector, and optimum image processing. The manufacturer is knowledgeable about its equipment and can serve as a resource for appropriate use in pediatrics. The manufacturer may refer the team to an appropriate children's hospital for technique suggestions. In a department unfamiliar with the equipment, the approach should start with a limited number of common examinations, such as chest, abdomen, and small parts such as hands and feet. As one becomes more familiar with a standard approach, additional examinations and body parts can be added to complete the technique chart.

The technique charts should be specific to each detector model. For example, an older powder-phosphor computed radiography system may require more exposure than a higher DQE cesium iodide digital radiography detector for the same examination. Each detector may require its own tailored technique chart.

Many vendors have different processing programs for children and adults for the same body part. Image processing differs for pediatric and adult patients, especially for chest radiographs [18, 19]. The anatomically programmed radiographic techniques must be reviewed and adjusted to assure that appropriate values are included for both AEC and manual technique selection. Use of preprogrammed adult techniques for pediatric imaging may not result in the appropriate image quality or patient dose.

X-ray absorption and transmission depend on the composition of the body part being imaged and its thickness. Patient age is a poor substitute for thickness. As with any projection radiograph, body part thickness is the most important determinant of the technique. The abdomens of the largest 3-year-olds are the same size as the abdomens of the smallest 18-year-olds [20]. One cannot reliably use patient age as a guide for technique.

Going Back to Basics by measuring patients with calipers will ensure that a standardized technique is selected. Knowing the body part and its thickness, one can then set the tube voltage, filtration, and tube current–exposure time product for that specific examination to appropriately size the examination for the child on the basis of the technique chart. The goal is to obtain reproducible, consistent images for children with body parts of the same thickness.

The main purpose of antiscatter grids is to remove scatter from the image to improve the subject contrast on the image. Scatter starts to markedly degrade subject contrast on an image when the body part is at least 10–12 cm of water-equivalent thickness [21, 22]. The composition of the body part also determines when a grid may be beneficial. A solid body part (not an air-containing part such as the lung), for instance the abdomen, pelvis, or spine, may benefit from use of a grid when it is more than 12 cm thick. Structures that are more than 12 cm thick and contain air, especially the chest, can be successfully imaged without a grid. Imaging of larger x-ray field areas, increasing tube voltage, and additional filtration in the x-ray beam increase the production of scatter [23]. Depending on the grid selected, antiscatter grids double or triple the exposure factors necessary to obtain an adequate image [23]. Therefore, removing the grid when it is not necessary greatly reduces patient exposure.

Both the American College of Radiology– Society for Pediatric Radiology [24] and the American College of Radiology–American Association of Physicists in Medicine–Society for Imaging Informatics [25] guidelines for digital radiography state that grids should be used sparingly in pediatrics. Both sets of guidelines state that grids should not be routinely used for extremity imaging or for imaging of body parts with thicknesses of 10–12 cm or less. Furthermore, if a lower peak kilovoltage technique is used, a grid may not be necessary for imaging some areas of the body that are predominantly bone [23].

With the advent of digital radiography, it is possible to open the collimators and manipulate and electronically crop an image after the exposure. In a recent American Society of Radiologic Technologists survey of digital radiology trends, almost 50% of the technologists reported using electronic cropping of the image after the exposure 75% of the time [26]. Radiologists may not be aware that cropping is occurring (Fig. 3), yet radiologists are responsible for the image before cropping occurs. The cropped portions of the body are exposed to unnecessary radiation. Although opening the collimators may occasionally be necessary for inclusion of structures such as an arm with a percutaneously inserted central venous catheter, under the best of circumstances it is better to immobilize the patient and collimate appropriately before the exposure rather than crop the image after the exposure [17].

Fig. 3 —Portable chest radiograph of infant. Request was made to include left arm for percutaneously inserted central venous catheter placement. Collimators were left open, unnecessarily exposing entire torso and portions of extremities. Image was cropped to insert box for interpretation. (Reproduced with permission of Alliance for Radiation Safety in Pediatric Radiology)

Collimation is necessary to eliminate x-ray exposure of body parts not affecting the clinical diagnosis to reduce the area exposed and lower the dose area product (DAP). Collimation also reduces scatter radiation, improving image quality [27]. A well-collimated field improves the accuracy of the image processing. Extraneous structures outside the area of interest, such as shields, are excluded and prevented from negatively affecting the applied image processing. When collimators are open, extraneous body parts and free-in-air exposure affect the exposure indicator [17].
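
As a back-of-the-envelope illustration, the area term in the DAP shows why collimating before the exposure, rather than cropping afterward, reduces dose; the field sizes below are illustrative and echo the cropping example at the top of this page:

```python
# Illustrative arithmetic only: at the same entrance dose, DAP scales
# with the exposed field area, so collimating before the exposure
# (rather than cropping afterward) cuts the DAP proportionally.
full_field = 43 * 43      # cm^2, collimators fully open on a 43x43 detector
collimated = 10 * 15      # cm^2, properly collimated field
print(f"exposed area (and DAP) reduced {full_field / collimated:.1f}-fold")
```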

The radiologist should become familiar with technical factors used for common pediatric radiographic examinations. This requires that the tube voltage, tube current–exposure time product, added beam filtration, exposure indicators including EI, and, especially, DI are present on the displayed image. Pediatric radiologists and physicists have been advocating this position since the ALARA conference in 2004 [3]. Ideally DAP meter results should also be displayed. (DAP meters are not required in the United States but are required in Europe.) The image-processing organ program (such as portable chest, abdomen, hand) should also be displayed. These displayed values provide feedback to the radiologist and can be used to help solve problems when an image is not acceptable ( Fig. 4 ).

Fig. 4 —Infant chest radiograph displays International Electrotechnical Commission standard technique factors (arrow) in left lower corner: EI = exposure index, DI = deviation index, DAP = dose area product, kvp = tube voltage, mAs = tube current–exposure time product in milliampere-seconds, microAs = tube current–time product in microampere-seconds, Grid = grid used or not used, last line = image processing. In this case DI is zero, indicating that EI is same as target exposure index (EIT, not displayed), and exposure technique is optimal. (Courtesy of Ann and Robert H. Lurie Children's Hospital of Chicago)

Radiologists prefer images that have little noise [28]; however, noise intolerance can lead to exposure creep. To avoid exposure creep, radiologists need to become familiar with the exposure indicators for their equipment and understand the relation between exposure indicators (e.g., EI) and the visual appearance of noise on an image. Once this relation is understood, an appropriate target exposure value (EIT) can be established, and the DI can be calculated and displayed at the interpreting workstation. With routine monitoring of the appropriateness of the technique based on the level of image noise along with the DI, exposure creep can be avoided.

Radiologists may be tolerant of more noise in some body tissues than in others. For example, noise has a lesser effect on visualization of high-resolution structures, such as bones, endotracheal tubes, and chest tubes [28]. The ability to identify disease processes, such as surfactant deficiency and respiratory distress syndrome of the premature newborn, and low-contrast structures is more noise sensitive [29, 30]. As users become more comfortable with the technique-noise relation of digital radiography, lower-dose follow-up examinations (as after adjustment of line placement) tailored to answer a specific question may become more common.

It is critical that radiologists, radiologic technologists, and physicists develop standards for their institutions, using a team approach to assure diagnostic image quality at a properly managed dose for pediatric patients. A 2012 study [31] showed that as many as 40% of digital radiographs obtained at one adult center were overexposed. The same center noted that exposure creep was occurring in ICU examinations. At a pediatric center, 43% of computed radiographs were likewise reported to be overexposed [28]. By recording and monitoring exposure indicators, an individual hospital can control and reverse exposure creep [31]. Analyzing the percentage of images that fall within and outside an acceptable range can be used to educate radiologic technologists and decrease variation while improving the image quality goals of the department.

With the advent of digital imaging, much information is contained in the header of the DICOM image, which can be exported and used in a quality assurance program. The Integrating the Healthcare Enterprise radiation exposure monitoring profile facilitates collection and distribution of exposure information [32]. This is an easy method of exporting routine evaluation of the performance metrics of digital radiography for analysis by the quality assurance team. Coupled with the new IEC standard, there is a common terminology that can be used for an individual hospital to develop its own quality assurance program. One center used the new IEC standard for monitoring EI in an individual neonatal ICU over a 3-month period and found no tendency toward exposure creep [33].

Groups of hospitals with similar image receptors will be able to collaborate on their exposure techniques and exposure indicator ranges using DICOM structured reports and the Integrating the Healthcare Enterprise radiation exposure monitoring profile. This type of collaboration has already occurred with four neonatal ICUs at four children's hospitals using a single manufacturer's equipment and the new IEC standard. In that study [34], two hospitals had older powder-phosphor plates and two hospitals had newer cesium bromide needle-phosphor plates. The mean EI for the four hospitals varied less than 50%, likely reflective of the noise tolerance of the individual departments. Those investigators used the hospitals' existing techniques and made no systematic change in technical factors to take advantage of the higher DQE of the needle phosphor. With more experience and education, it may be possible for collaborating hospitals to modify their exposure techniques to take full advantage of newer technologies to reduce exposure.

It is likely that in the future there will be national standards of diagnostic reference levels to help radiology departments to compare their digital radiographic techniques. The American College of Radiology has a dose index registry [11] program for which a digital radiography registry has been approved [14]. It is likely that diagnostic reference levels will be developed from these data on the basis of detector type, body part, and thickness.

Pediatric digital radiography, although an imaging modality that entails less ionizing radiation than CT, is commonly performed at both adult and pediatric facilities. Increased knowledge of this versatile and efficient technology will give radiologists and radiologic technologists a basis for standardizing an approach to imaging of pediatric patients, thereby reducing the tendency for excess radiation through exposure creep. The new Image Gently Back to Basics campaign is a reminder that a consistent approach to technical factors based on body part thickness, elimination of grids when not needed, appropriate collimation, and a rigorous quality assurance program that tracks the new exposure indicators should improve image quality and properly manage the radiation dose during pediatric imaging.

S. Don has received research funding from Carestream, Inc., and is a speaker on digital radiography for Siemens.