Features

A more detailed overview of the features.

Model Difference

Examine differences between two models by highlighting where they disagree. In the example you can see the model difference view, followed by the ground truth label in the upper right and the label produced by the model in the middle on the right. Red in the model difference indicates where the two labels disagree, i.e. the classes assigned by the model differ from the ground truth classes. Where the two labels agree, the color is green.
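As a rough illustration, a difference image like this could be computed along the following lines, assuming both labels are per-pixel class-index arrays of the same shape; the function name and exact colours are illustrative, not the product's actual implementation:

```python
import numpy as np

def difference_mask(label_a: np.ndarray, label_b: np.ndarray) -> np.ndarray:
    """Return an RGB image: green where the two labels agree, red where they disagree."""
    if label_a.shape != label_b.shape:
        raise ValueError("labels must have the same shape")
    agree = label_a == label_b
    out = np.zeros((*label_a.shape, 3), dtype=np.uint8)
    out[agree] = (0, 255, 0)   # green: same class in both labels
    out[~agree] = (255, 0, 0)  # red: the classes differ
    return out
```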


Model Improvement

Visualize the improvement of one model over another. Here we can see three labels: the first is the ground truth, the second is the base or candidate model we wish to compare against, and the third is the label produced by our newest model. The question we wish to answer is: has the newest model improved over the base/candidate model, given the ground truth label?

The example image contains three different colors: red, green and blue.

The blue color means that the newer model is neither better nor worse than the candidate model. For example, if the candidate model was wrongly classifying a road as a tree and our newer model is also wrongly classifying the road as a tree; or, alternatively, if the candidate model is correctly classifying a road and our newer model is also correctly classifying the road.


The red color indicates that the newer model is misclassifying a pixel that was correctly classified by the candidate model. The green color, the thing we want to see most often, indicates that the newer model is correctly classifying a pixel that was misclassified by the candidate model.
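A minimal sketch of how such an improvement map could be derived from the three labels, again assuming per-pixel class-index arrays; the function and colour encoding are illustrative assumptions, not the tool's own code:

```python
import numpy as np

def improvement_map(ground_truth, candidate, newest):
    """Colour each pixel by whether the newest model improves on the candidate model."""
    gt, cand, new = map(np.asarray, (ground_truth, candidate, newest))
    cand_ok = cand == gt
    new_ok = new == gt
    out = np.zeros((*gt.shape, 3), dtype=np.uint8)
    out[cand_ok == new_ok] = (0, 0, 255)   # blue: no change (both correct or both wrong)
    out[cand_ok & ~new_ok] = (255, 0, 0)   # red: regression, the newest model got it wrong
    out[~cand_ok & new_ok] = (0, 255, 0)   # green: improvement, the newest model fixed it
    return out
```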

Confusion Matrix

Given any two labels, quickly examine the confusion matrix to understand the accuracy of your models and where the problem areas may be.

In the example you can observe the various classes along the left side: unlabeled, dynamic, ground, road, sidewalk. Each class has an index, like 0 for unlabeled and 3 for road. These are the ground truth labels. We can refer to each class either by its name or by its index, e.g. road (3).

If you look along the top row, you will see the same indices but without the class names, for brevity. These are the model's labels.

To determine the accuracy of our model, look along the diagonal and notice anything that is below 100 percent. For example, here we can see that almost all of the road (3) pixels were classified correctly, 96% of them. But if you look at sidewalk, you can see that our model classified only 26% of the pixels correctly as sidewalk.


To see which pixels the model misclassified, we can look along the row of the sidewalk (4) class and observe that 48% of the sidewalk pixels were misclassified as the unlabeled (0) class and 26% were misclassified as the road (3) class.
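For reference, a row-normalised confusion matrix like the one shown can be computed roughly as follows, assuming integer class-index labels; this is an illustrative sketch under those assumptions, not the product's implementation:

```python
import numpy as np

def confusion_matrix(ground_truth, prediction, num_classes):
    """Row-normalised confusion matrix: rows are ground-truth classes, columns are predicted classes."""
    gt = np.asarray(ground_truth).ravel()
    pred = np.asarray(prediction).ravel()
    counts = np.bincount(gt * num_classes + pred, minlength=num_classes ** 2)
    counts = counts.reshape(num_classes, num_classes).astype(float)
    row_sums = counts.sum(axis=1, keepdims=True)
    # Divide each row by its total so entries are fractions of the ground-truth class.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```

Each row then sums to 1, so the diagonal entry of a row is the fraction of that ground-truth class the model classified correctly, e.g. roughly 0.96 for road (3) and 0.26 for sidewalk (4) in the example above.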

Your Datasets

Manage multiple datasets.

Create multiple datasets, each one containing multiple model inferences. You can also see the progress as each dataset is uploaded or imported. The number of datasets you can create depends on the subscription plan you pick, with the free plan giving you one dataset to play around with.


Ready to try it out?

Create an account for free and import our showcase dataset.