
dc.contributor.author: Beck, Michael A.
dc.contributor.author: Liu, Chen-Yi
dc.contributor.author: Bidinosti, Christopher P.
dc.contributor.author: Henry, Christopher J.
dc.contributor.author: Godee, Cara M.
dc.contributor.author: Ajmani, Manisha
dc.date.accessioned: 2020-12-18T05:08:03Z
dc.date.available: 2020-12-18T05:08:03Z
dc.date.issued: 2020-12-17
dc.identifier.citation: Beck, Michael A., Chen-Yi Liu, Christopher P. Bidinosti, Christopher J. Henry, Cara M. Godee, and Manisha Ajmani. "An embedded system for the automated generation of labeled plant images to enable machine learning applications in agriculture." PLoS ONE 15(12) (2020): e0243923. DOI: 10.1371/journal.pone.0243923.
dc.identifier.issn: 1932-6203
dc.identifier.uri: https://hdl.handle.net/10680/1882
dc.description.abstract: A lack of sufficient training data, both in terms of variety and quantity, is often the bottleneck in the development of machine learning (ML) applications in any domain. For agricultural applications, ML-based models designed to perform tasks such as autonomous plant classification will typically be coupled to just one or perhaps a few plant species. As a consequence, each crop-specific task is very likely to require its own specialized training data, and the question of how to serve this need for data now often overshadows the more routine exercise of actually training such models. To tackle this problem, we have developed an embedded robotic system to automatically generate and label large datasets of plant images for ML applications in agriculture. The system can image plants from virtually any angle, thereby ensuring a wide variety of data; and with an imaging rate of up to one image per second, it can produce labeled datasets on the scale of thousands to tens of thousands of images per day. As such, this system offers an important alternative to time- and cost-intensive methods of manual generation and labeling. Furthermore, the use of a uniform background made of blue keying fabric enables additional image processing techniques such as background replacement and image segmentation. It also helps in the training process, essentially forcing the model to focus on the plant features and eliminating random correlations. To demonstrate the capabilities of our system, we generated a dataset of over 34,000 labeled images, with which we trained an ML model to distinguish grasses from non-grasses in test data from a variety of sources. We now plan to generate much larger datasets of Canadian crop plants and weeds that will be made publicly available in the hope of further enabling ML applications in the agriculture sector.
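The abstract notes that the uniform blue keying fabric enables background replacement and image segmentation. The sketch below illustrates the general idea with a simple chroma-key style threshold in OpenCV; it is not the authors' implementation, and the HSV bounds and smoothing step are illustrative assumptions only.

import cv2
import numpy as np

def segment_plant(image_bgr, lower_hsv=(90, 60, 40), upper_hsv=(130, 255, 255)):
    """Return a binary plant mask by masking out a uniform blue keying background.
    The HSV bounds are illustrative guesses for a blue backdrop, not values from the paper."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    plant_mask = cv2.bitwise_not(background)      # plant = everything that is not blue
    plant_mask = cv2.medianBlur(plant_mask, 5)    # suppress small speckles in the mask
    return plant_mask

def replace_background(image_bgr, new_background_bgr):
    """Composite the segmented plant onto a new background image of the same size."""
    mask = segment_plant(image_bgr)
    foreground = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
    backdrop = cv2.bitwise_and(new_background_bgr, new_background_bgr, mask=cv2.bitwise_not(mask))
    return cv2.add(foreground, backdrop)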
dc.description.sponsorship: M.B., C.B., C.H., C.-Y.L., and C.G. received funding from: George Weston Limited – Seeding Food Innovation SFI18-0276, https://www.weston.ca; Mitacs – Accelerate IT14120, https://www.mitacs.ca; and Western Economic Diversification Canada – Regional Innovation Ecosystems Program 15453, https://www.wd-deo.gc.ca/eng/19775.asp. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
dc.description.uri: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0243923
dc.language.iso: en
dc.publisher: PLOS
dc.relation: Related dataset: Weed seedling images of species common to Manitoba, Canada.
dc.relation.uri: https://doi.org/10.5061/dryad.gtht76hhz
dc.rights: info:eu-repo/semantics/openAccess
dc.title: An embedded system for the automated generation of labeled plant images to enable machine learning applications in agriculture
dc.type: Article
dc.rights.license: Attribution 4.0 International (CC BY 4.0)
dc.identifier.doi: 10.1371/journal.pone.0243923

