Published On: Tue, Jun 20th, 2017

Google’s Tensor2Tensor makes it easier to conduct deep learning experiments

Google’s Brain team is open sourcing Tensor2Tensor, a new deep learning library designed to help researchers replicate results from recent papers in a fraction of the time and push the boundaries of what’s possible by trying new combinations of models, datasets and other parameters. The sheer number of variables in AI research, combined with the fast pace of new developments, makes it difficult for experiments run in two distinct settings to match. This is a pain for researchers and a drag on research progress.

The Tensor2Tensor library makes it easier to maintain best practices while conducting AI research. It comes equipped with key ingredients including hyperparameters, datasets, model architectures and learning rate decay schemes.
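To make one of those ingredients concrete, here is a minimal sketch of a learning rate decay scheme of the kind the library bundles: the warmup-then-inverse-square-root schedule described in “Attention Is All You Need.” This follows the formula in that paper; Tensor2Tensor’s actual implementation and naming may differ.

```python
def noam_schedule(step, d_model=512, warmup_steps=4000):
    """Learning rate rises linearly for `warmup_steps`, then decays as 1/sqrt(step).

    Formula from "Attention Is All You Need":
        lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)
    """
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate climbs during warmup, peaks at warmup_steps, then decays.
early = noam_schedule(100)
peak = noam_schedule(4000)
late = noam_schedule(100_000)
```

Packaging schedules like this alongside models and datasets is what lets an experiment be described entirely by named components rather than ad hoc code.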

The best part is that any of these components can be swapped in and out in a modular fashion without completely breaking everything. From a training perspective, this means that with Tensor2Tensor you can bring in new models and datasets at any time, a much easier process than would usually be possible.
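A toy illustration of the modular swap described above: models and datasets are registered by name, so an experiment can mix and match either without touching the rest of the pipeline. This is illustrative only; the names and helpers here are hypothetical, not Tensor2Tensor’s actual API.

```python
# Hypothetical registries keyed by name; not Tensor2Tensor's real API.
MODELS, DATASETS = {}, {}

def register(registry, name):
    """Decorator that files a function under `name` in the given registry."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@register(MODELS, "transformer")
def transformer(features):
    return f"transformer({features})"

@register(DATASETS, "wmt_ende")
def wmt_ende():
    return ["features"]

def run_experiment(model="transformer", dataset="wmt_ende"):
    """Look up both components by name and run the model over the data."""
    data = DATASETS[dataset]()
    return [MODELS[model](f) for f in data]
```

Registering a new model or dataset is then a one-line change to the experiment configuration, which is the property the paragraph above is pointing at.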

Google isn’t alone in its pursuit of making research more reproducible outside the lab. Facebook recently open sourced ParlAI, a tool to facilitate dialog research that comes prepackaged with commonly used datasets.

Similarly, Google’s Tensor2Tensor comes with models from recent Google research projects like “Attention Is All You Need” and “One Model to Learn Them All.” Everything is available now on GitHub, so you can start training your own deep learning-powered tools.
