Integrating IoT and Connected Devices into Enterprise-Grade AI Platforms
Synopsis
Enterprises can manage AI applications in production by leveraging Google Cloud Platform (GCP) together with the proprietary computing infrastructure Google exposes through it. An enterprise-grade AI platform requires more than just TensorFlow and CPUs: scalable training in the cloud is a necessity. GPUs increase training throughput so dramatically that renting them in the cloud often makes more sense than buying them outright. The next logical step is renting time on an entire TPU pod: hardware worth millions of dollars, a purchase that would be unimaginable for all but the most groundbreaking projects.
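To make the scalable-training point concrete, the following is a minimal sketch of multi-GPU training using TensorFlow's MirroredStrategy. The model, data, and shapes are placeholders rather than anything prescribed by the platform, and the same code falls back to a single replica on a machine without GPUs.

    import numpy as np
    import tensorflow as tf

    # MirroredStrategy replicates the model across all local GPUs and keeps
    # their gradients in sync; with no GPUs it runs on a single replica.
    strategy = tf.distribute.MirroredStrategy()
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():
        # Placeholder model; a real workload would define its own architecture.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    # Placeholder data; in practice this would stream from cloud storage.
    x = np.random.rand(1024, 32).astype("float32")
    y = np.random.rand(1024, 1).astype("float32")
    model.fit(x, y, epochs=3, batch_size=64)

Swapping MirroredStrategy for a TPU-backed distribution strategy is how the same training loop would scale onto rented TPU hardware.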
Beyond training, scalable prediction is equally essential: a hosted model offering scales up and down in line with user demand. Enterprises must also manage continuous deployment, testing model versions against production traffic, and continuous training, which combines existing data sets with fresh data. That fresh data, often arriving from Internet of Things (IoT) devices deployed in the wild, lets models adapt continuously to distribution changes and concept drift.
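The continuous-training idea can be sketched as follows, assuming TensorFlow 2.x. The datasets, model, and file names are illustrative placeholders standing in for the archived training data, a fresh batch of device telemetry, and the currently deployed model.

    import tensorflow as tf

    # Stand-ins for archived training data and a fresh batch of IoT telemetry
    # (in a GCP pipeline the fresh batch might arrive via a messaging or
    # storage service rather than being generated in memory).
    def historical_dataset():
        features = tf.random.uniform((2048, 32))
        labels = tf.random.uniform((2048, 1))
        return tf.data.Dataset.from_tensor_slices((features, labels))

    def fresh_iot_batch():
        features = tf.random.uniform((256, 32))
        labels = tf.random.uniform((256, 1))
        return tf.data.Dataset.from_tensor_slices((features, labels))

    # In production this would warm-start from the currently deployed artifact,
    # e.g. tf.keras.models.load_model("gs://<bucket>/models/current").
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Blend old and new data so the model tracks concept drift without
    # forgetting the original distribution.
    combined = historical_dataset().concatenate(fresh_iot_batch())
    model.fit(combined.shuffle(4096).batch(64), epochs=1)

    # The retrained candidate is saved and then promoted through the
    # continuous-deployment step, e.g. tested against production traffic.
    model.save("candidate_model.keras")

Retraining on a schedule and promoting only candidates that hold up against production traffic is what ties the continuous-training and continuous-deployment pieces together.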