Most platforms can scale quantity, but the Consensus Tool lets you scale quality. Learn about one of the earliest features that truly captures the collaborative spirit of Innotescus, and how easy it is to incorporate into your project. 

Image annotations do more when they work together

At Innotescus, we know that better data is key to the advancement of machine learning—and we’re not the only ones. The industry is shifting its focus from quantity to quality, which, once you know how data augmentation works, makes sense. Already-high-quality datasets can be modified in countless small ways to create more data points. But how do you improve your data?

In the case of data annotation, one of the best ways to improve is through collaboration. By merging inputs across annotation sets, our consensus tool automates quality control; no more huddling around a screen and deciding in the moment what is and isn’t a chair. “Consensus ensures a high training data quality which in turn helps model performance every time,” says Shashank Deshpande, Co-founder & Lead ML Scientist at Innotescus.

Most platforms can scale quantity, but the consensus tool lets Innotescus scale quality.

Automating annotation quality from the start

The consensus tool was one of the first features incorporated into the Innotescus platform. Developers wanted an image-agnostic way to reduce ambiguity and automate the quality process.

“We saw consensus as a measure of consistency and accuracy. It was one of the first features introduced in the platform because it provides peace of mind to both us and our customers.”
— Shashank Deshpande, Co-founder & Lead ML Scientist

Since day one, we’ve looked for novel approaches to better data, and what we came up with was our proprietary algorithm that powers the consensus process. But when we’re talking about machine learning, what exactly is consensus?

Annotation consensus in action

Consensus is a structured way of agreeing. It’s more than a simple majority; it attempts to represent what is widely shared among participants.

In annotation, consensus comes from the synthesis of several users’ attempts at labeling an image. How can we tell which annotations from different sources refer to the same object, and which are merely nearby? How do we decide which annotations are more reliable than others? The consensus algorithm answers these questions automatically.
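The algorithm Innotescus uses is proprietary, but the questions above can be made concrete with a minimal sketch of one common approach (all names and thresholds here are illustrative assumptions, not the Innotescus implementation): group bounding boxes from multiple annotators by IoU overlap, majority-vote the class label, and average the coordinates.

```python
# Illustrative sketch of annotation consensus, NOT Innotescus's algorithm:
# boxes that overlap above an IoU threshold are treated as the same object,
# the class is decided by majority vote, and the boxes are averaged.
from collections import Counter

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def consensus(annotations, iou_threshold=0.5):
    """annotations: list of (box, label) pairs from all annotators.
    Returns merged (box, label) pairs, one per agreed-upon object."""
    groups = []
    for box, label in annotations:
        for group in groups:
            if iou(box, group[0][0]) >= iou_threshold:
                group.append((box, label))
                break
        else:
            groups.append([(box, label)])
    merged = []
    for group in groups:
        boxes = [b for b, _ in group]
        labels = [l for _, l in group]
        avg_box = tuple(sum(c) / len(c) for c in zip(*boxes))
        majority = Counter(labels).most_common(1)[0][0]
        merged.append((avg_box, majority))
    return merged

# Three annotators outline roughly the same object; one disagrees on class.
inputs = [((10, 10, 50, 50), "chair"),
          ((12, 11, 52, 49), "chair"),
          ((11, 9, 51, 51), "table")]
print(consensus(inputs))  # one merged box, labeled "chair" by majority
```

A real system would also weight annotators by historical reliability rather than counting every vote equally; the sketch keeps only the matching and voting steps.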

Three annotation inputs (above) and the resulting consensus (below)

The consensus algorithm available on Innotescus corrects common errors such as misclassification, inaccurate object boundaries, and entirely missed objects. Where it can’t correct an error outright, it helps identify ambiguities, pointing annotation managers to gaps in understanding that, once addressed, improve overall data quality.
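To see how merging can catch a missed object or vote out a stray label, here is a hedged illustration (again an assumption for exposition, not the proprietary method) using a pixel-wise majority vote over segmentation masks:

```python
# Illustrative pixel-wise majority vote over segmentation masks, NOT
# Innotescus's algorithm: a pixel counts as foreground when more than
# half of the annotators marked it.
import numpy as np

def mask_consensus(masks):
    """masks: list of equal-shaped boolean arrays, one per annotator."""
    stack = np.stack(masks)
    return stack.sum(axis=0) > (len(masks) / 2)

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True  # annotator 1: full object
b = np.zeros((4, 4), bool); b[1:3, 1:3] = True  # annotator 2: full object
c = np.zeros((4, 4), bool)                      # annotator 3: missed it...
c[0, 3] = True                                  # ...and added a stray pixel

result = mask_consensus([a, b, c])
# The object survives (marked by 2 of 3 annotators); the stray pixel,
# marked by only 1 of 3, is voted out.
```

The same vote that recovers an object one annotator missed also discards noise that only one annotator introduced, which is the intuition behind both the boundary and missed-object corrections described above.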

Empowering supervisors-in-the-loop

Users and creators of annotation platforms alike know that prudent design both anticipates and embraces the need for human judgment. That’s why collaboration and facilitation shape the consensus tool:

  • Avoid ambiguity at every step of the process by easily identifying outliers and other small issues that can hurt your model
  • Review individual annotations and their consensus side by side
  • Edit your team’s consensus as easily as moving an anchor point

Leading the ML/AI industry through consensus

Our consensus tool is the most accurate way to merge annotations, and it adds no extra time to your workflow. It works in tandem with our extensive project management features, making your work more collaborative than ever.

See for yourself—sign up for a free 30-day trial of Innotescus today!