What makes a robot move accurately? At RE2 Robotics, it’s a constant stream of unstructured data, as well as the machine learning “brain” that translates it. But annotation workflows often underestimate the time it takes to accurately train an algorithm to interpret dense datasets composed of annotated video. Download our Case Study to get a glimpse of how industry-leading video annotation from Innotescus helped another innovative Pittsburgh-based company bring quality ML to production.

In this Case Study, you will learn how the alpha release of AVA™, Innotescus’ Automated Video Annotator, helped RE2 take their ML to the next level.

  • RE2 saved significant time by leveraging the power of AVA’s™ tracking algorithm. Rather than annotating each video frame manually, the annotators fully annotated only 5% of the dataset; the remaining 95% of AVA™-generated annotations required only light edits.
  • Innotescus made the annotation process 20x more efficient. In a little over a month, RE2 manually annotated fewer than 2,000 images while AVA™ produced more than 40,000 in less time.
  • With a sufficiently large and high-quality dataset, the computer vision and autonomy team at RE2 was able to retrain their custom model and improve the mean average precision on the class of interest from 50% to over 90%—a staggering improvement.
 “The Innotescus Annotation tool (AVA™) saved our team a lot of time on a major project delivery. We were able to use their video annotation tool to generate over 40,000 annotations in a short period of time. Online examples and support had us up and running in one day.”

–  Amanda Sgroi, Ph.D., Director of Computer Vision and Autonomy at RE2 Robotics

Download the full case study for all the details on how Innotescus helps companies harness the power of their video using AVA’s™ single-shot object tracking algorithm and more.