Bangyou Zheng is a research scientist in digital agriculture at CSIRO. His research interests include crop modelling, crop phenotype-to-genotype adaptation, high-throughput phenotyping, big data in agriculture, and R programming.
He works in the Crop Adaptation and Modelling team. The team deploys skills in physiology, phenomics, data management and decision systems to deliver innovations in crop yield.
Opinions are my own. Posts are not an endorsement.
Genotype-by-environment interaction (G×E) for a target trait, e.g. yield, is an emergent property of agricultural systems and results from the interplay between a hierarchy of secondary traits involving the capture and allocation of environmental resources during the growing season. This hierarchy of secondary traits ranges from basic traits that correspond to response mechanisms/sensitivities, to intermediate traits that integrate a larger number of processes over time and therefore show a larger amount of G×E. Traits underlying yield differ in their contribution to adaptation across environmental conditions and have different levels of G×E. Here, we provide a framework to study the performance of genotype-to-phenotype (G2P) modeling approaches. We generate and analyze response surfaces, or adaptation landscapes, for yield and yield-related traits, emphasizing the organization of the traits in a hierarchy and their development and interactions over time. We use the crop growth model APSIM-wheat with genotype-dependent parameters as a tool to simulate non-linear trait responses over time with complex trait dependencies, and apply it to wheat crops in Australia. For biological realism, APSIM parameters were given a genetic basis of 300 QTLs sampled from a gamma distribution whose shape and rate parameters were estimated from real wheat data. In the simulations, the hierarchical organization of the traits and their interactions over time cause G×E for yield even when the underlying traits themselves do not show G×E. Insight into how G×E arises during growth and development helps to improve the accuracy of phenotype predictions within and across environments and to optimize trial networks. We produced a tangible simulated adaptation landscape for yield, which we first examined for biological credibility using statistical models for G×E that incorporate genotypic and environmental covariables. Subsequently, the simulated trait data were used to evaluate statistical genotype-to-phenotype models for multiple traits and environments and to characterize relationships between traits over time and across environments, as a way to identify traits that could be useful to select for specific adaptation. Designed appropriately, these types of simulated landscapes might also serve as a basis for training deep learning methodologies, with the aim of transferring such network models to real-world situations.
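As a rough illustration of how genotype-dependent parameters of this kind can be constructed, the sketch below samples 300 QTL effect sizes from a gamma distribution and sums them per genotype into a single APSIM-style parameter value. It is a minimal sketch only: the shape, rate, genotype count, baseline and 0/1 QTL coding are hypothetical placeholders, not the values or coding estimated from the real wheat data used in the study.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical settings: 300 QTLs and 100 genotypes. The gamma shape/rate and
# the baseline below are illustrative placeholders, not the values estimated
# from the real wheat data used in the study.
n_qtl, n_geno = 300, 100
shape, rate = 0.4, 2.0
effects = rng.gamma(shape, scale=1.0 / rate, size=n_qtl)  # QTL effect sizes
effects *= rng.choice([-1.0, 1.0], size=n_qtl)            # favourable / unfavourable alleles

# Biallelic QTL genotypes coded 0/1 (presence of the "plus" allele).
genotypes = rng.integers(0, 2, size=(n_geno, n_qtl))

# One genotype-dependent APSIM-style parameter (e.g. a phenology coefficient):
# a baseline value plus the summed effects of the QTL alleles each genotype carries.
baseline = 1.0
param = baseline + genotypes @ effects

print(param.min(), param.mean(), param.max())
```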
Image analysis using proximal sensors can help accelerate the selection process in plant breeding and improve breeding efficiency. However, the accuracy of extracted phenotypic traits, especially those that require image classification, is affected by the pixel size of the images. Ground coverage (GC), the ratio of vegetation area projected onto the ground to total land area, is a simple and important trait for monitoring crop growth and development and is often captured by visible-spectrum cameras on multiple platforms, from ground-based vehicles to satellites. In this study, we used GC as an example trait and explored its dependency on pixel size. When developing new spring wheat varieties, breeders often want rapid GC estimates, which is especially challenging when coverage is low (<25%) in a species with thin leaves (about 2 to 15 mm across). In a wheat trial comprising 28 treatments, high-resolution images were taken manually at ca. 1 m above the canopies on seven occasions from emergence to flowering. Using a cubic interpolation algorithm, the original images with small pixel size were degraded into coarse images with larger pixel sizes (from 0.1 to 5.0 cm per pixel, 26 extra levels in total) to mimic image acquisition at different flight heights of an unmanned aerial vehicle (UAV) based platform. A machine learning based classification model was used to classify pixels of the original images and the corresponding degraded images into vegetation and background classes, and GC was then computed for each image. The GC of each original image served as the reference value for its corresponding degraded images. As pixel size increased, GC of the degraded images tended to be underestimated when the reference GC was less than about 50% and overestimated when it was greater than 50%. The greatest errors (about 30%) were observed when reference GCs were around 30% and 70%. Meanwhile, the largest pixel size that could distinguish between two treatments depended on the difference between their GCs and increased rapidly once that difference exceeded a threshold at a given significance level (about 10%, 8% and 6% for P < 0.01, 0.05 and 0.1, respectively). For wheat, a small pixel size (e.g. <0.1 cm) is always required to estimate ground coverage accurately, whereas the most practical flight height at present is about 20 to 30 m. This study provides a guideline for choosing appropriate pixel sizes and flight plans to estimate GC and other traits in crop breeding using UAV-based high-throughput phenotyping (HTP) platforms.
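The sketch below illustrates the general idea of degrading an image to coarser pixel sizes with cubic interpolation and recomputing GC at each level. It is not the study's pipeline: a simple excess-green threshold stands in for the machine learning classifier, and the file path, original pixel size (0.03 cm) and threshold value are assumptions made for illustration.

```python
import cv2
import numpy as np

def ground_coverage(img_bgr, exg_threshold=0.1):
    """Fraction of pixels classified as vegetation.

    A simple excess-green (ExG) threshold stands in for the machine learning
    classifier used in the study; the threshold value is illustrative only.
    """
    img = img_bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    exg = 2.0 * g - r - b                       # excess-green index
    return float(np.mean(exg > exg_threshold))

def degrade(img_bgr, orig_cm_per_px, target_cm_per_px):
    """Resample to a coarser pixel size with cubic interpolation,
    mimicking image acquisition at a higher UAV flight height."""
    scale = orig_cm_per_px / target_cm_per_px
    h, w = img_bgr.shape[:2]
    new_size = (max(1, int(w * scale)), max(1, int(h * scale)))
    return cv2.resize(img_bgr, new_size, interpolation=cv2.INTER_CUBIC)

# Hypothetical usage: the image path is a placeholder and the original pixel
# size of 0.03 cm per pixel is an assumption.
img = cv2.imread("plot_image.jpg")
gc_ref = ground_coverage(img)
for px in (0.1, 0.5, 1.0, 5.0):
    gc = ground_coverage(degrade(img, 0.03, px))
    print(f"{px:>4.1f} cm/px: GC = {gc:.3f} (reference GC = {gc_ref:.3f})")
```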