Image annotation can seem like a simple part of any computer vision project: find an open-source tool, or cobble together an annotation program in-house, and start annotating. Nowadays, some annotation needs are simple enough to build a functioning proof of concept in little time, and open-source tools are assumed to be an asset. But when deadlines loom, your team can’t afford to lose precious time finagling with unintuitive tools or poring over minute details that just won’t line up. This critical step of building your dataset can’t be an afterthought; you need a platform that won’t underserve your work.

When researching, for example, it makes perfect sense to focus your resources on the experiment at hand. But when open-source tools and homemade solutions can’t provide everything you need, you’re forced to make concessions during the annotation process. How can you guarantee your data is annotated accurately and efficiently while still setting aside ample resources for the rest of your project?

    “Having the ability to leverage the Innotescus tools enabled us to focus on more critical aspects of the project.”

     – Shasa Antao, Carnegie Mellon University

CMU student researchers double annotation speed and accuracy with Innotescus

Student researchers at Carnegie Mellon University faced this exact challenge. Their work focused on unsafe road conditions: how do drivers identify and respond to these situations, and how can we teach machines to do the same? The students modeled both adverse and drivable terrain, using remote-controlled cars to recreate the decision-making process. But when it was time to annotate their data, the open-source tool they had selected proved tedious, unreliable, and inaccurate.

Enter: Innotescus.

Two sample model outputs predicting potentially unsafe road conditions (blue), including the original RGB images (left) and the output of the CMU students’ semantic segmentation model (right).

When the CMU students switched to Innotescus, they were amazed by its convenience and accuracy. Their model’s performance rose from 80% with annotations created in an open-source tool to well over 90% with annotations created in Innotescus; annotating with the platform radically improved the team’s own labeling accuracy.

Convenience—Features that make data labeling fast and organized

Few things are as frustrating as spending your time preparing and tracking your work instead of actually doing it. These problems all but disappeared once the student researchers started using Innotescus. The team immediately reaped the rewards of a centralized system: a single place to create, delegate, perform, store, and review annotation work, which eliminated the administrative overhead and inefficiencies their open-source tool had required.

Accuracy—Innotescus annotation tools are shaped by precision

Part of what made this research team’s annotation experience so different was Innotescus’ semantic segmentation tools. Semantic segmentation requires the highest level of accuracy—down to the pixel level—so the model can be trained on annotations that contain only the intended objects and nothing else. This level of accuracy requires the most sophisticated, and therefore often slow and cumbersome, tools. This was not the case with Innotescus. Even when using its most complex tools, the team found the platform to be highly responsive, owing to a technology stack that optimizes for in-browser performance.

From configurable tools to convenience features that let them speed through mundane tasks, every part of the platform gave these students results more accurate than they previously thought possible. Their fully annotated dataset yielded model accuracy above 90%, a big improvement over their incomplete attempt with an open-source annotation tool. Even with just a fraction of its functionality, Innotescus offered the right tools for the job.

    “The tools provided by the platform made it intuitive and convenient to label images quickly and accurately, especially when compared to other labeling platforms.”

    – Shaun Lu, Carnegie Mellon University

Even with only its basic features, Innotescus creates better image annotations, faster

Better data = better models. It’s a foundational idea that often gets overlooked. These student researchers experienced it first-hand: the easy-to-find, open-source platform left much to be desired, while switching to a platform as intuitive as Innotescus doubled their annotation speed and accuracy. Their work wasn’t just finished more quickly; it was more complete and more accurate than they had originally hoped.

See for yourself—sign up for a free trial today!