Deepomatic or Google AutoML Vision: Which one to choose?

According to research firms such as Gartner, Forrester, and MarketsAndMarkets, the global computer vision market is set to reach USD 20 billion by 2025. The market is in full expansion, and companies' interest is growing along with it. Based on this projection, we wanted to find out which computer vision platforms are best suited to the needs of businesses. In this article we will focus on the differences between Deepomatic and Google AutoML as enterprise computer vision platforms.

Google AutoML launched in early 2018, making it the newest player in the game, while Deepomatic was founded in 2014 and has offices in New York and Paris. Today Google AutoML is best seen as a training platform, while Deepomatic focuses on serving a broader set of tools covering the whole application lifecycle.

Focus on Google AutoML

Google AutoML Vision performance page.

For this article, as for every article in this series, we chose the six market leaders and, to evaluate them, set up the same three projects on each platform. This gave us a sense of the features each platform provides and of any associated shortcomings. We then computed model performance, which gave us good insight into each platform's viability for production-level business applications. If you want to read our methodology in detail, please click here.

Google is one of those leaders and has long had a strong foothold in the ML ecosystem thanks to TensorFlow and its derivatives. Combined with its TPU offering, this makes for a very strong proposition.

Today that’s really where Google shines: training models. This is directly reflected in the name, AutoML. The promise is simple: give Google your annotated data and it will find the best model. That said, in our performance benchmark Google never ranked first.

The typical workflow is to import your images, perform some lightweight annotation in the platform, and then quickly move on to training a model. You can choose between cloud and edge deployment and then specify the training budget allowed for finding the best model.

Although very appealing, this can become quite costly. For instance, for one of our projects, using the parameters recommended by Google would have incurred a bill of $1,500 for a single training session.

This is not really aligned with the iterative best practice the industry is maturing into: annotate data, train a first model, use it to detect errors and speed up annotation, rinse and repeat. If you want to know more about this, you can read our white paper Lean AI & Computer Vision in Production for a rundown of the latest industry trends and best practices.

Google AutoML Vision is a bit different from other platforms as it focuses primarily on automatic model training, which is an important building block of the whole application lifecycle but not sufficient by itself to deploy applications in production.

And now, Deepomatic

Deepomatic project overview page.

Deepomatic is at the other end of the spectrum. Here, the value proposition is to enable the largest possible audience to create and deploy Enterprise computer vision applications. This means providing customers with a one-stop platform where everything is integrated, making it as easy to use as possible while promoting industry best practices.

Practically, Deepomatic provides an easy-to-use annotation interface deeply linked to model training. This means models are used to speed up annotation with active learning, but also to review existing annotations with error spotting; this alone can reduce annotation errors by up to 10% according to our latest tests.
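The active-learning idea described above can be sketched in a few lines: rank unlabeled images by model uncertainty and send the most uncertain ones to annotators first. This is a generic illustration, not Deepomatic's actual implementation; the least-confidence scoring rule, the function names, and the sample data are all our own assumptions.

```python
# Illustrative sketch of active learning via least-confidence sampling:
# predicted class probabilities are used to pick which images a human
# should annotate next. Pure Python; scoring rule is illustrative only.

def least_confidence(probs):
    """Uncertainty score: 1 minus the top class probability."""
    return 1.0 - max(probs)

def select_for_annotation(predictions, budget):
    """Pick the `budget` most uncertain images for human review.

    predictions: dict mapping image id -> list of class probabilities
    """
    scored = sorted(predictions.items(),
                    key=lambda kv: least_confidence(kv[1]),
                    reverse=True)
    return [image_id for image_id, _ in scored[:budget]]

preds = {
    "img_001": [0.98, 0.01, 0.01],  # confident -> low annotation priority
    "img_002": [0.40, 0.35, 0.25],  # uncertain -> annotate first
    "img_003": [0.55, 0.30, 0.15],
}
print(select_for_annotation(preds, budget=2))  # -> ['img_002', 'img_003']
```

In practice a platform would combine such an uncertainty ranking with the error-spotting review mentioned above, so annotators spend their time where the model is least reliable.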

Training is performed seamlessly with a few clicks and a full-featured performance dashboard is then used to analyze the model and identify potential improvements.

Unfortunately, training a model is not the end of the story when it comes to enterprise applications. You then need to be able to package your models, chain them to form complex applications, and version and monitor them, while being able to deploy them in the public cloud, on premises, or at the edge. All of these are built-in capabilities of the Deepomatic platform.

Only then can you focus on closing the loop: automatically sending interesting images back to the platform to improve model performance in a virtuous circle. Finally, Deepomatic provides a built-in monitoring dashboard to follow day-to-day field operations and an analytics dashboard to perform BI analysis on long-term business trends.

Deepomatic is the go-to platform if you want to address your entire enterprise application lifecycle from a centralized place, with built-in industry best practices and state-of-the-art models. It is the most feature-rich platform while at the same time requiring the least amount of coding and development skills.

Conclusion

Google’s platform is dedicated to the automatic training of models, and does it rather efficiently. Training, however, is only part of the work needed to build an effective computer vision application. If, on the contrary, you want a centralized platform that accompanies you from the start to the finish of your project, easily and without coding, you should use Deepomatic.

If you want to know more about the large-scale projects Deepomatic has carried out with its partners, click here. 


©Deepomatic 2020 – Privacy Policy
