Published On: Fri, Jun 15th, 2018

Amazon starts shipping the $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that's specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today's launch, we had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon's VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that's powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, audio out, etc.) to let you create prototype applications, no matter whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4-megapixel camera isn't going to win any prizes, but it's perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS's services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon's newest tool for building machine learning models.

These integrations are also what makes getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn't take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there's also a hot dog detection model.
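To give a sense of what working with a detection template's output involves, here is a minimal, self-contained sketch of the kind of post-processing an object detection sample produces. Everything here is an assumption for illustration: the tuple format, the label table, and the threshold are hypothetical, not the actual DeepLens sample's API.

```python
# Hypothetical post-processing for an object detection model's output.
# A 20-class detector typically emits (class_index, confidence, bounding_box)
# entries; the exact format below is assumed for illustration.

LABELS = {0: "person", 1: "dog", 2: "cat"}  # assumed subset of the 20 classes

def filter_detections(detections, threshold=0.5):
    """Keep detections at or above the confidence threshold, with labels attached."""
    results = []
    for class_idx, confidence, box in detections:
        if confidence >= threshold:
            results.append({
                "label": LABELS.get(class_idx, "unknown"),
                "confidence": confidence,
                "box": box,
            })
    return results

raw = [(1, 0.92, (10, 20, 110, 220)),   # a confident "dog"
       (2, 0.31, (5, 5, 50, 50))]       # a low-confidence "cat", dropped
print(filter_detections(raw))
```

A real DeepLens project would feed camera frames through the deployed model and run a step like this on every inference result before deciding what to do.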

But that's obviously just the beginning. As the DeepLens team stressed during the workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that's due to the fact that a DeepLens project consists of two parts: a model, and a Lambda function that runs instances of the model and lets you perform actions based on the model's output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.
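To make that two-part split concrete, here is a minimal sketch of how a model and a Lambda-style function might fit together. Both pieces are illustrative stand-ins under assumed interfaces: `detect` substitutes for the deployed model, and the handler's signature does not reflect the actual AWS DeepLens Lambda API.

```python
# Illustrative sketch of the DeepLens project split: a "model" that produces
# predictions, and a Lambda-style function that acts on the model's output.
# Both are stand-ins, not the real AWS DeepLens API.

def detect(frame):
    """Stub for the deployed model: returns (label, confidence) pairs."""
    # A real project would run inference on the camera frame here.
    return [("bear", 0.87), ("tree", 0.55)]

def handler(frame, alert_threshold=0.8):
    """Lambda-style function: inspect model output and decide on an action."""
    alerts = []
    for label, confidence in detect(frame):
        if label == "bear" and confidence >= alert_threshold:
            # In a real app this might publish to AWS IoT or send a notification.
            alerts.append(f"Alert: {label} detected ({confidence:.0%})")
    return alerts

print(handler(frame=None))  # stub frame stands in for a camera image
```

The appeal of this split is that extending a template mostly means editing the function, not retraining the model: the backyard-bear alert and the factory conveyor-belt monitor could share a detector and differ only in the action logic.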

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, but you're probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine, as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It's worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.

So why did AWS build DeepLens? "The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer," Sivasubramanian said. "To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions in a hands-on fashion on devices." And why did AWS decide to build its own hardware instead of simply working with a partner? "We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy," he said. "So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the whole infrastructure together. It takes too long for somebody who's excited about learning deep learning and building something fun."

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it's not cheap, but if you are already using AWS (and maybe even use Lambda already), it's probably the easiest way to get started with building these kinds of machine learning-powered applications.
