Published On: Wed, Feb 17th, 2016

Google Makes It Easier To Take Machine Learning Models Into Production


Google launched TensorFlow Serving today, a new open source project that aims to help developers take their machine learning models into production. Unsurprisingly, TensorFlow Serving is optimized for Google’s own TensorFlow machine learning library, though the company says it can also be extended to support other models and data.

While projects like TensorFlow make it easier to build machine learning algorithms and train them for certain types of data inputs, TensorFlow Serving specializes in making these models usable in production environments. Developers train their models using TensorFlow and then use TensorFlow Serving’s APIs to respond to input from a client. Google also notes that TensorFlow Serving can make use of available GPU resources on a machine to speed up processing.
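To make that flow concrete, here is a minimal client sketch (not from the article) that sends an inference request to a running TensorFlow Serving instance over gRPC. The model name "my_model", the input key "x", the port, and the exact module paths are assumptions that depend on how the server was set up and on the tensorflow-serving-api version installed.

```python
# Minimal sketch of querying a TensorFlow Serving instance over gRPC.
# Assumes a server on localhost:8500 serving a model named "my_model"
# with an input tensor keyed "x"; all of these names are illustrative.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"                  # hypothetical model name
request.model_spec.signature_name = "serving_default"
request.inputs["x"].CopyFrom(                         # "x" is an assumed input key
    tf.make_tensor_proto([[1.0, 2.0, 3.0]], dtype=tf.float32))

response = stub.Predict(request, 10.0)                # 10-second timeout
print(response.outputs)
```

The client only needs the endpoint and the model's input/output names; which trained model the server is currently running is invisible to it.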

[Diagram: TensorFlow Serving training pipeline]

As Google notes, having a system like this in place doesn’t just mean developers can take their models into production faster; they can also experiment with different algorithms and models while keeping a stable architecture and API in place. In addition, as developers refine their models or the output changes based on new incoming data, the rest of the architecture still remains stable.
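As a sketch of what that stability can look like in practice (an assumed workflow, not something spelled out in the article), each retraining run can export a new numbered model directory under a base path the server watches, while clients keep calling the same endpoint. The base path, model name, and the use of the modern tf.saved_model export API are illustrative choices.

```python
# Sketch: export successive model versions into numbered directories so the
# serving layer and client-facing API stay fixed while the model is refined.
# Uses TensorFlow's SavedModel format; the paths and names are placeholders
# for wherever a real deployment points its model server.
import os
import tensorflow as tf

EXPORT_BASE = "/models/my_model"  # hypothetical directory watched by the server

def export_version(model, version):
    """Write one immutable, numbered model version (e.g. /models/my_model/3)."""
    export_dir = os.path.join(EXPORT_BASE, str(version))
    tf.saved_model.save(model, export_dir)

# After each retraining run, bump the version number; by default the server
# loads the newest version it finds, and clients never change their calls.
export_version(tf.Module(), version=1)
```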

As Google notes, TensorFlow Serving is written in C++ (and not Google’s own Go). The software is optimized for performance, and the company says it can handle over 100,000 queries per second per core on a 16-core Xeon machine.

The code for TensorFlow Serving — as well as a number of tutorials — is now available on GitHub under the Apache 2.0 license.
