Top 8 Google AI Tools
There is no doubt that Google is a giant of the IT world. It builds software for almost every imaginable area of activity, so whatever you need, Google probably has a solution, whether it is a smart voice assistant or an intelligent shopping list. Add streaming platforms, music tools, and other consumer applications, and it is fair to say that Google has built an entirely new ecosystem for its users.
But what about the IT community? It is certainly handy to keep your thoughts in Keep or to be reminded of an appointment by Calendar, but does Google offer any software for programmers? Especially when it comes to Artificial Intelligence (and could there be a better moment than a Terminator 1 anniversary?).
Luckily, Google cares about those who are interested in AI. Today we will look at the most interesting tools in this area. The article is split into categories of people who may be interested in specific kinds of tools: developers, researchers, and organizations.
For developers
First of all, there is software development. Google AI provides various tools for building intelligent systems, neural networks, and multilayered projects. Let's take a look at some of them.
TensorFlow (TF)
If you want to develop precise and maintainable Machine Learning (ML) systems, you have to know about TF. This open-source ML package was originally created for an internal Google system (namely, speech recognition), but today its main task is to help the artificial intelligence community with product development. Here are some basic advantages of TensorFlow:
- robust and independent ML production;
- research capabilities for experimental purposes;
- simple, high-level layers for model creation.
Of course, if you are an absolute newbie in the subject, it may be difficult to start with TF. However, it has a widespread community, stable and regular updates (Google recently presented version 2.0), and the full source code of the product is free to use.
TensorFlow currently supports many programming languages, but the main one is Python. A key advantage of the package is that all heavy computation runs in C++ modules, which is significantly faster and almost invisible to the Python user. TF also watches the most powerful 'players' in the Machine Learning world and incorporates those projects into itself; a bright example is Keras and its distributions.
The library lets you build a graph of operations in which multidimensional data arrays (tensors) of different shapes flow along the edges between nodes. You can define all variables and functions beforehand and run the project only after the prerequisites are handled. TF also lets you manage and inspect the final model through visualization instruments. In other words, the more complicated the system that needs to be built, the more likely TensorFlow is the right solution.
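As a toy illustration (not taken from the article's use cases), a Python function decorated with tf.function is traced into exactly such a graph, which can then be inspected or exported for visualization:

```python
import tensorflow as tf

@tf.function
def scale_and_sum(x, w):
    # Two graph operations: a matrix multiplication and a reduction.
    return tf.reduce_sum(tf.matmul(x, w))

x = tf.random.normal((2, 3))
w = tf.random.normal((3, 1))
print(scale_and_sum(x, w))  # executes the traced graph

# The traced graph can be inspected (or logged for TensorBoard).
graph = scale_and_sum.get_concrete_function(x, w).graph
print([op.name for op in graph.get_operations()])
```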
Use case: deep neural network for image recognition.
Let's assume you need to build a system that recognizes human faces and nothing else. Most probably, the database behind it will be voluminous, so a naive model could be slow and inaccurate. TensorFlow can help complete this task with higher performance.
Furthermore, it is possible to create as many layers and perceptrons (the individual nodes of a layer) as the task requires. First we define the model, then initialize and train it; as practice shows, if the dataset is good, the resulting system will be good as well. TF also supports graph visualization for comfortable monitoring of the workflow, which can be very informative during development. A minimal sketch of such a model follows.
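Here is a hedged sketch of what such a classifier could look like in Keras; the 64x64 input size, the layer sizes, and the train_images/train_labels arrays are illustrative assumptions, not details from the original use case:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical binary classifier: "face" vs "no face" on 64x64 RGB crops.
model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training and TensorBoard logging (assuming the arrays exist):
# model.fit(train_images, train_labels, epochs=10,
#           callbacks=[tf.keras.callbacks.TensorBoard(log_dir="logs")])
```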
ML Kit
This tool can be very helpful for mobile app creators. When you want to build a really big product, you have to take consumer interests into account, and the most obvious way to do that is event tracking, surveys, and so on. Google offers a very nice instrument for these features: Firebase. It allows you to store and process a large amount of user data, add lightweight analytics, and synchronize with BigQuery (a data warehouse) and Google Data Studio (business intelligence).
But what if the application needs some Machine Learning on top of that? ML Kit is one possible solution. Briefly speaking, it is a set of predefined APIs for the most common mobile use cases, so there is no need to worry about storage, modeling skills, and so on. If the problem is generic, ML Kit has probably already solved it; and even if it hasn't, the tool lets you plug in a custom model built with TensorFlow.
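Custom ML Kit models are shipped as TensorFlow Lite files, so the usual bridge from the TF world is a conversion step. A minimal sketch, assuming `model` is an already trained Keras model (the output file name is made up for illustration):

```python
import tensorflow as tf

# Assume `model` is a trained tf.keras model (e.g., the face classifier above).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink the model for mobile use
tflite_model = converter.convert()

# Hypothetical file name; this .tflite file is what the mobile app bundles.
with open("face_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```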
ML Kit models can either be bundled into the app or called from the cloud (Google Cloud). The former is faster and works without an internet connection; the latter is more powerful and consumes fewer phone resources.
Use case: image detection for Zyl
This is an example of a real enterprise project. The main idea of the application is saving and recommending the most important photos in the gallery. But how do you define the importance of different pictures? The Zyl creators decided to label the things inside each image: faces, smiles, animals, and so on.
They incorporated an already operational image detection API, and this Machine Learning model improved their results by 50%.
Google Open Source
Nobody likes closed and secret code; open source is one of the most attractive philosophies of the current century. If you like a certain technology, see potential upgrades, and know how to implement them, no problem: visit the full repository and fork it to experiment. And if you share the results of your upgrade (and it turns out reasonably well), the wide and active community can offer advice, participate in the development process, and even contribute upgrades of their own.
Through this portal, Google stimulates the creation of cool and useful projects, sometimes even without commercial interest. Code-in challenges, competitions, widespread popularization: that is only the tip of the iceberg.
Here you can explore not only your own code but also all the other hosted projects (more than 2,000). Try searching the repositories by keyword; something relevant to your current research may well turn up.
Use case: butteraugli.
Continuing the image-recognizer example, consider this project. Its main goal is to measure the difference between two pictures, which is useful for video compression and for evaluating lossy images. So we can not only build a powerful image detection system, but also understand how close particular samples from the dataset are to each other. And if, during development, a team decides its own code may be interesting and useful to others, it can be shared via Google Open Source.
Colaboratory
If you are familiar with Python, you have probably heard of a very popular study tool, the Jupyter Notebook. It is very illustrative and supports various add-ons and instruments, but there is always something to improve. For instance, it is not always easy to share a notebook and work on the same file, since libraries have to be installed properly, language dependencies resolved, and so on. So Google proposed Colaboratory: Jupyter on Google Drive.
Its main advantages are remote computing (it does not depend on your local machine) and the ability to share access to the files you are developing. As with any other Google document, several people can work in the same file at the same time. A final touch is the collection of ready-made code snippets for many generic tasks.
Use case: detailed tutorials for our image detection giant.
Okay, the model is accurate and its efficiency is proven. But what if you are obliged to produce some informational guides for it? With Colaboratory this is not difficult at all: tutorials can combine markdown explanations of specific places in the code with ready-made code snippets.
Finally, test results can be visualized for study purposes. In short, there is everything needed to create clear and understandable guidebooks, as in the sketch below.
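For example, a typical tutorial cell in such a notebook might plot the training history; this is an illustrative fragment that assumes a `history` object returned by `model.fit()` in an earlier cell:

```python
import matplotlib.pyplot as plt

# Assumes `history` came from model.fit(...) in a previous notebook cell.
plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.title("Face detector training progress")
plt.show()
```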
For researchers
As mentioned above, developers have plenty of tools for improving their work. But most IT specialists who have been involved in a real project will tell you that any work starts with research, and Google does not forget about this part of IT either, even in the artificial intelligence domain.
Google datasets
A Machine Learning model requires a good and balanced dataset, and the bigger part of preparing and tuning a model is work with raw data. Since Google publishes many diverse and very relevant datasets, it can often be useful to work with them.
At the moment there are 64 high-level datasets for popular ML tasks. All users need to do before downloading is complete a free survey; after that, the data can be downloaded in common formats (e.g., CSV).
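Once such a CSV is downloaded, a first look at it usually takes only a couple of lines; the file name below is hypothetical:

```python
import pandas as pd

# Hypothetical file name; replace it with the CSV downloaded from the catalog.
df = pd.read_csv("google_dataset_sample.csv")
print(df.shape)   # rows and columns
print(df.head())  # first few records
```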
Use case: facial expression dataset.
Now that we have covered several powerful AI instruments, we may need some test data. Google's catalog offers a few image datasets, and this is one of the most interesting. Let's assume the Machine Learning model already handles plain face recognition well; such a model is still not very universal or flexible, especially on 'blurry' images.
Google's facial expression dataset contains more than 500K triplets (three images in a row) in which two of the images show similar emotions. This makes it possible to evaluate differences and similarities between pairs of images; the main goal is dynamic expression comparison instead of plain, default emotion classification.
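One common way to train on such triplets (an illustration, not a method prescribed by the dataset) is a triplet loss over an embedding network, pulling the two similar-expression images together and pushing the dissimilar one away:

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Embeddings of the two similar-expression images should end up closer
    to each other than to the embedding of the dissimilar image."""
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))

# Hypothetical usage: `embed` maps face images to fixed-size vectors.
# loss = triplet_loss(embed(img_a), embed(img_b_similar), embed(img_c_different))
```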
Google datasets search
Sometimes a problem is non-typical, or the basic datasets are simply not good enough for the purpose at hand. Even then, Google has something helpful: Dataset Search finds the most relevant and largest datasets for almost any task or query.
Results are sorted by relevance (better-known sites, e.g. Kaggle, appear at the top), and the links lead directly to the description and the download. Keep in mind that this search engine is currently in beta.
Use case: image datasets search.
At some point you may need to feed the model new data, and it is easy to find interesting face-recognition datasets and test them. As an example, take a look at the Dataset for Smile Detection from Face Images: it enables detection of the smile emotion in images and improves accuracy on this type of picture.
For organizations
The more commercial a product is, the more important the application of expert systems becomes. That is why Google works on enterprise instruments in a very active and committed manner. Let's consider some examples.
Cloud TPU
The main goals of commercial products are increasing speed and reducing the use of local resources, and this is where Google Cloud comes in; all the tools in this category live under that umbrella. We will start with the dramatic computing 'booster', the Cloud TPU (Tensor Processing Unit). In simple terms, it is a way to run large computations with a significant performance boost. Google itself uses TPUs in some of its most popular products, such as Calendar and Gmail. There are several TPU versions (differing in power and pricing), so companies can scale their projects to be as large and powerful as needed.
Use case: image detection with Cloud TPU.
This is a practical implementation of the image detection model running on a TPU. The basic principle: the DNN responsible for picture recognition is created with TensorFlow, and the Cloud TPU is used to improve time-to-result (paired with lower local machine usage). The resulting network is faster, more robust, and more efficient. A sketch of the connection step follows.
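In TensorFlow 2.x, the usual way to place such a model on a Cloud TPU is a distribution strategy. This is a hedged sketch: the TPU name is hypothetical, and the exact API details can differ between TF versions:

```python
import tensorflow as tf

# Hypothetical TPU node name; on Colab an empty name often resolves automatically.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu-node")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Build the image-recognition network inside the strategy scope so that
    # its variables are replicated across the TPU cores.
    model = tf.keras.applications.ResNet50(weights=None, classes=2)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# model.fit(...) then runs the training steps on the TPU.
```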
Cloud AI
This tool is not about raw productivity but about artificial intelligence itself. Its style is close to ML Kit: you can use off-the-shelf solutions or build something unique. Unlike ML Kit, however, Cloud AI targets large systems, not just mobile applications, and gives access to more advanced technologies than basic ML solutions. Let's look at a few components of this cloud tool in more detail.
- AI Hub - fully maintained pipelines for end-to-end artificial intelligence projects, various tools for finer tuning, plus the ability to reuse AI modules built by other teams in the organization and to access thematic content published by Google AI, Google Cloud AI, and Google Cloud Partners.
- AI building blocks - add vision, language, conversation, and structured-data capabilities to custom applications. These building blocks cover a wide spectrum of typical use cases and needs. For example, Recommendations AI helps build advanced advisory systems based on consumer preferences. The tool works out of the box and just needs to be pointed at a company's product.
- AI Platform - unlike the previous components, this platform is dedicated to the direct development of 'thinking' computer systems. Engineers can build their own systems as portable pipelines (via Kubeflow) that can later run on Google Cloud Platform (see the sketch after this list). The platform offers remote computing and the ability to train new models without significant code modifications, and it integrates with many Google tools for full project synchronization: BigQuery, Google Cloud Platform, Deep Learning VM Image. After a successful build, the model can be shared via AI Hub.
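As a rough illustration of the Kubeflow-based workflow (using the v1-style Kubeflow Pipelines SDK; the container image path and step names are made up), a pipeline is just Python code that gets compiled into a portable definition:

```python
import kfp
from kfp import dsl

@dsl.pipeline(name="face-detector-training",
              description="Hypothetical training pipeline")
def training_pipeline():
    # Each step runs a container; the image path below is a placeholder.
    dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/face-trainer:latest",
        command=["python", "train.py"],
    )

# Compile to a portable definition that AI Platform / Kubeflow can run.
kfp.compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```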
One of the most interesting projects within Cloud AI is Cloud AutoML. Although it is still in beta, its popularity keeps growing. It is a set of Machine Learning tools for specific operations, e.g. training data creation, AutoML Translation, AutoML Tables.
In short, if you need remote and productive artificial intelligence projects, Cloud AI will most likely be able to cover most needs and requests.
Use case: vision AI for image segmentation.
This is one of the AI building blocks: a pretrained model family for image analysis and segmentation. It supports very different interfaces and levels of customization: a REST API or an AutoML interface, your own labels and images or the default pretrained insights, and so on. So instead of rebuilding something trivial, companies can simply use a ready solution that is continuously retrained and returns a large amount of information about images, parts of images, people, emotions, and more.
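For the pretrained side, the Cloud Vision client library is the usual entry point. A hedged sketch in Python, assuming Google Cloud credentials are configured and a local photo.jpg exists:

```python
from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:  # hypothetical local image
    image = vision.Image(content=f.read())

# Pretrained detectors: general labels and faces (with emotion likelihoods).
labels = client.label_detection(image=image).label_annotations
faces = client.face_detection(image=image).face_annotations

for label in labels:
    print(label.description, round(label.score, 2))
print(f"{len(faces)} face(s) detected")
```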
Conclusion
As you can see, Google does not spare time or effort on artificial intelligence tools. No matter who you are (or want to be), developer, researcher, or commercial specialist, there is something to profit from, and a large share of commercial and scientific tasks and use cases is covered by these products.
The main advantage of the Google AI stack is the direct integration of all these tools with each other. It means, for example, that data can be stored in BigQuery, processed by a custom TF model, accelerated by TPUs, and shared through AI Hub.
However, the tools mentioned here are far from the only ones available. Day by day, developers and anyone interested in ML can discover something new and interesting among Google's expert-system tools. That may help them not only improve software development and data storage, but also create more accurate and faster Machine Learning models.