Vision model brings near-unsupervised crop segmentation to the field

By leveraging a vision foundation model called Depth Anything V2, the method can accurately segment crops across diverse environments—field, lab, and aerial—reducing both time and cost in agricultural data preparation.
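
To make the idea concrete, here is a minimal sketch (not the paper's actual pipeline) of how Depth Anything V2 can be queried for a per-pixel depth map, which is then thresholded into a rough crop-versus-background mask. The checkpoint name, input file, and threshold value are illustrative assumptions.

```python
# Sketch: relative depth from Depth Anything V2 -> rough foreground (crop) mask.
# Assumes the Hugging Face "depth-estimation" pipeline and an example checkpoint;
# the published method's segmentation step may differ.
from transformers import pipeline
from PIL import Image
import numpy as np

# Load a Depth Anything V2 checkpoint (small variant chosen for illustration).
depth_estimator = pipeline(
    "depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",
)

image = Image.open("field_plot.jpg")                # hypothetical input image
depth = np.array(depth_estimator(image)["depth"])   # per-pixel relative depth

# Normalize to [0, 1]; plants standing above the soil typically receive
# distinctly different relative-depth values than the background.
depth_norm = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)

# Simple global threshold as a stand-in for the unsupervised segmentation step.
crop_mask = depth_norm > 0.5   # boolean mask: True where a crop is likely
```

Because the mask is derived from geometry rather than labeled examples, the same procedure can in principle be applied to field, lab, or aerial imagery without retraining, which is where the claimed savings in annotation time and cost come from.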
