DEVELOPMENT, VALIDATION, AND EVALUATION OF COMPUTER IMAGING FOR TICK DETECTION ON CATTLE

Summary

<div class="container" style="width:300px;">
<div class="leftcol" style="width:194px">
<b>Animal Health Component</b>
</div>
<div class="rightcol" style="width:56px; text-align:right">25%</div>
<div class="endrow" style="float:none; display:block;"></div>

<div class="leftcol">
<b>Research Effort Categories</b><br>
<div class="container" style="width: 375px;">
<div class="rec_leftcol">Basic</div>
<div class="rec_rightcol">50%</div>
<div class="endrow"></div>
<div class="rec_leftcol">Applied</div>
<div class="rec_rightcol">25%</div>
<div class="endrow"></div>
<div class="rec_leftcol">Developmental</div>
<div class="rec_rightcol">25%</div>
<div class="endrow"></div>
</div>
</div>
<div class="endrow"></div>

</div>

Objectives & Deliverables

<b>Project Methods</b><br> The computer vision network will be developed and validated at the University of Tennessee Middle Tennessee Research and Education Center (MTREC). Information about each sampled animal and its associated lot will be recorded. All collections will include the baseline information requested by USDA-APHIS. All data will be stored in a relational database and mapped in ArcMap 10.6.1. We will use a computer vision system mounted at a water supply, which is easily powered, requires minimal maintenance, and can capture high-resolution images when animals are present. Each animal already visits the water supply several times a day, so every animal is guaranteed to be assessed daily. Building on what we have initiated, we will develop a computer vision system consisting of several high-resolution cameras and deploy it at a water supply for cattle.

<b>Development.</b> MTREC has a bull evaluation facility that uses a C-LOCK™ SmartScale at each waterer. The SmartScale forces animals to approach the waterer from one direction and automatically logs the weight and radio frequency identification (RFID) tag of each animal. A solar panel kit, also from C-LOCK™, provides power to the SmartScale. The computer vision system will be developed and installed on or near the SmartScale. Because the animals can only approach the water from one side, cameras can be placed permanently to capture different body areas on the cattle. To capture the head, neck, and tail areas, the cameras will initially be mounted high; however, various camera placements will be tested for optimal image quality. The camera system will include a Raspberry Pi 4 (8 GB RAM) and high-resolution surveillance cameras; the exact camera model will be determined during the development phase. A custom-designed, 3D-printed enclosure will house and protect the camera system.
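The per-visit records (RFID tag, weight, images) will live in a relational database; a minimal sketch of what one table might look like, using Python's built-in sqlite3. The table and column names here are illustrative assumptions, not the project's actual schema:

```python
import sqlite3

# Illustrative schema for per-visit records logged at the SmartScale.
# Column names are assumptions; the project's actual schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE waterer_visits (
        visit_id    INTEGER PRIMARY KEY,
        rfid_tag    TEXT NOT NULL,      -- animal's RFID ear tag
        weight_kg   REAL,               -- weight logged by the SmartScale
        visited_at  TEXT,               -- ISO-8601 timestamp
        image_paths TEXT                -- semicolon-separated image files
    )
""")
conn.execute(
    "INSERT INTO waterer_visits (rfid_tag, weight_kg, visited_at, image_paths) "
    "VALUES (?, ?, ?, ?)",
    ("982000123456789", 412.5, "2024-05-01T09:32:00",
     "img_0001.jpg;img_0002.jpg"),
)
row = conn.execute(
    "SELECT rfid_tag, weight_kg FROM waterer_visits"
).fetchone()
```

Keying records on the RFID tag lets visit images be joined back to each animal's history when the database is later mapped in ArcMap.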
Power will be supplied by the same solar panel kit as the SmartScale. To optimize power usage, an ultrasonic sensor will trigger camera recording: a Python algorithm will read the sensor and trigger the cameras when an animal is at the waterer. The sensor position and the distance threshold that triggers the cameras will be determined experimentally. The cameras will be triggered to take multiple images (e.g., five) each time an animal is at the waterer. Data will be stored on a 1 TB flash drive. We expect each animal to visit the waterer at least twice a day; assuming 30 animals per pasture, the total daily recording for each pasture should be less than 150 images. Based on our preliminary results and calculations, the flash drive will last for more than 1.5 years, at which point it will be replaced.

The goal of the computer vision system is automated detection of ticks. This is an object detection task that has been studied extensively. Several object detection deep learning architectures exist; however, achieving high detection accuracy for a specific task requires a comprehensive training dataset. We propose to build such a dataset by incorporating a diverse set of annotated images (e.g., varied geographic locations, breeds, lighting, tick species, and environmental conditions). The project team, including two post-doctoral scholars and an undergraduate student, will be trained by an expert to label the images. An initial deep-learning model will be trained for tick detection using the dataset collected previously. As more training data are collected and labeled, they will be added to the dataset to improve model accuracy. Model accuracy will be evaluated using mean average precision (mAP), an evaluation standard in the field of deep learning.
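The trigger-and-storage logic for the waterer setup can be sketched in pure Python. The 0.5 m threshold and 4 MB image size below are placeholder assumptions for illustration, not measured values:

```python
def should_trigger(distance_m: float, threshold_m: float = 0.5) -> bool:
    """Fire the cameras when the ultrasonic sensor reads an object
    closer than the threshold (i.e., an animal at the waterer).
    The 0.5 m default is a placeholder to be set experimentally."""
    return 0.0 < distance_m < threshold_m

def burst_size() -> int:
    """Number of images captured per trigger (five, per the protocol)."""
    return 5

def days_of_storage(capacity_gb: float = 1000.0,
                    images_per_day: int = 150,
                    mb_per_image: float = 4.0) -> float:
    """Rough storage lifetime of the flash drive. 150 images/day is the
    project's upper bound per pasture; 4 MB/image is an assumed size."""
    return capacity_gb * 1000.0 / (images_per_day * mb_per_image)

# days_of_storage() ≈ 1666.7 days under these assumptions, consistent
# with the project's estimate of more than 1.5 years (~548 days).
```

On the deployed system, `should_trigger` would be called in a loop on live ultrasonic readings; here it is kept hardware-free so the decision logic can be tested on its own.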
The outcome will be a lightweight deep learning model that can accurately detect various tick species in the southeast region. Resulting images will be annotated and counted by three individuals, along with the computer vision system, to determine interrater and intrarater reliability. Both the individuals and the neural network will repeat the process three times with the same set of pictures. The total number of ticks counted by an individual rater and by the system for one picture will be averaged across the repetitions. The coefficient of variation (CV), the standard deviation divided by the mean, will be calculated for each rater and picture to determine the rater's dispersion around the mean; a lower CV means the rater produced a more precise estimate on each repetition. A mixed-model analysis of variance will be used to assess interrater reliability, with random effects of image and rater*image, while blocking on the image. To assess intrarater reliability, an intercept-only model in SAS 9.4 (Cary, NC) will be used with a random intercept. An intraclass correlation (ICC) will then be calculated for each rater by dividing the estimate by the residual; the ICC quantifies the correlation within a rater.

<b>Validation.</b> To validate the resulting automated method, to maximize the efficiency of collections, and to protect the safety of both the investigator and the animals, we will work with MTREC animals as they routinely enter chutes. The greater of 25% of the total lot size or ten animals will be sampled during each event, to avoid reducing the efficiency of the husbandry practices of the producer and/or market. Animals that pose a threat to their own safety or that of the investigator will not be sampled. Procedures will be approved through the University of Tennessee's IACUC. The computer vision system will be installed at the end of chutes.
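The rater-precision statistic used in the reliability analysis, CV = standard deviation / mean of one rater's repeated counts for an image, can be sketched directly; the counts below are made-up example data:

```python
from statistics import mean, stdev

def coefficient_of_variation(counts):
    """CV = sample standard deviation / mean of one rater's repeated
    tick counts for a single image. Lower CV = more precise repeats."""
    m = mean(counts)
    if m == 0:
        raise ValueError("CV is undefined when the mean count is zero")
    return stdev(counts) / m

# Hypothetical example: one rater counts the same image three times.
cv = coefficient_of_variation([12, 14, 13])  # sd = 1, mean = 13
```

The ICC itself comes out of the mixed-model variance estimates, so it is left to the SAS analysis rather than sketched here.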
As cattle are briefly stopped there, the computer vision system will first take image(s) of the animals. Then animals will be scratched for ticks at common attachment sites, along the ears, head, neck, tail, legs, belly, and underside of the tail, for 5 minutes to minimize host stress. A second person will record animal information, where the investigator was able to safely search for ticks, where ticks were found on the animal, the time it took to check the animal for ticks, and whether any other incidents occurred. Ticks will be stored in ethanol until counted and identified to species and life stage.

We will compare the number of ticks recorded by the automated system to the gold standard (number of ticks collected) for each animal. Additionally, analyses will include presence/absence variables such as the location of ticks and whether the animal could be checked. We will also compare the time it took a person to scratch an animal for ticks with the time the vision system needed to capture and process the images. Separate generalized linear mixed models will be conducted for each response variable; assessment method will be the fixed effect, and other variables associated with the animal (e.g., breed) and collection (e.g., time, season) will be random effects.

<b>Evaluation.</b> We will work with collaborators at livestock markets in North Carolina and Georgia and at the Cattle Fever Tick Eradication Program in Texas to evaluate the efficacy of the automated vision system and improve its efficiency to ensure it can be adopted and used at other markets. We will first develop a video and informational sheet on use of the vision system. Once instructions are developed, we will set up the vision systems at livestock markets for the spring, summer, and fall animal sales. Livestock markets will be visited weekly in year two by our team, and we will modify the automated vision system as suggested by the market crews.
For quality control, as well as for tests of sensitivity and specificity, we will also compare the number of ticks collected from cattle to the number of ticks detected by the vision system.
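The quality-control comparison reduces to two simple computations per animal: the count difference between the automated system and the gold standard (a plain paired comparison, standing in for the full mixed models), and a presence/absence confusion matrix for sensitivity and specificity. A minimal sketch with made-up example data (1 = ticks present):

```python
from statistics import mean

def mean_bias(automated, gold_standard):
    """Mean per-animal difference between automated tick counts and
    gold-standard scratch counts; near zero suggests agreement."""
    if len(automated) != len(gold_standard):
        raise ValueError("need one paired count per animal")
    return mean(a - g for a, g in zip(automated, gold_standard))

def sensitivity_specificity(detected, collected):
    """detected: 1/0 tick presence per animal from the vision system;
    collected: 1/0 per animal from the gold-standard scratch check."""
    tp = sum(1 for d, c in zip(detected, collected) if d == 1 and c == 1)
    fn = sum(1 for d, c in zip(detected, collected) if d == 0 and c == 1)
    tn = sum(1 for d, c in zip(detected, collected) if d == 0 and c == 0)
    fp = sum(1 for d, c in zip(detected, collected) if d == 1 and c == 0)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Hypothetical counts for five animals (automated vs. collected).
bias = mean_bias([3, 0, 5, 2, 1], [4, 0, 5, 2, 2])

# Hypothetical presence/absence labels for eight animals.
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1, 0, 0, 1],
                                     [1, 1, 1, 0, 1, 0, 0, 0])
```

A negative bias would indicate the vision system undercounting relative to the scratch check, which is the expected failure mode for ticks attached at sites the cameras cannot see.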

Principal Investigator(s)

Planned Completion date: 28/02/2025

Effort: $293,834.00

Project Status

ACTIVE

Funders

National Institute of Food and Agriculture

Researcher Organisations

UNIVERSITY OF TENNESSEE

Source Country

United Kingdom