This week in AI #3: The USB stick to end them all


On the menu this week: three new applications of deep learning, two essays on its future and limitations, and one key tech enabler. Keep on reading if you want the full story!

1- GPUs? Why not a USB stick?

The first step of any real-world application of deep learning is training a network to sufficient accuracy. Alas, this is not the end of the story but just the beginning. From there you need to be able to run inference in a production setup. Sometimes you're lucky and can stream your data over the internet to an API or a datacenter, but for most remote applications this is not feasible.

Fortunately Movidius, which was acquired by Intel back in September, has got you covered! They just released their $79 deep learning USB stick, which can run most deep learning networks on pretty much any device. Ever dreamed of a fancy coffee machine that sets itself up automatically when it recognizes you? The Movidius might just have made it possible!

2- What’s the future of deep learning?

Francois Chollet, main author of the Keras library, at the TensorFlow Dev Summit 2017.

Depending on your sources you might think that the singularity is near and that humans will soon be replaced. Not according to Francois Chollet, whom you might know as the creator of the open-source deep learning library Keras. In the two essays he released this week, he tackles both the limitations of deep learning and its future. For him there's still a long way to go before we can even fathom the idea of a strong AI, one that would equal, and promptly exceed, a human's ability to reason.

3- Apple steps into the light

How Apple improves the realism of synthetic images.

Unlike the likes of Google, Facebook, or OpenAI, Apple has been secretive about what goes on in its Machine Learning lab, but this is all about to change. This week they introduced their brand new Machine Learning Journal, where we'll finally be able to peek at how Apple engineers use machine learning in their products! Their first post focuses on improving the realism of synthetic images. You should definitely read it, especially if you want useful tips on how they trained their GANs!
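The core idea in that post is a refiner network trained adversarially: it nudges a rendered synthetic image toward realism while a self-regularization term keeps it close to the original, so annotations (like gaze direction) stay valid. Here is a minimal numpy sketch of that combined loss; the function and variable names are illustrative, not Apple's actual code, and `d_refined_is_synthetic` stands in for a discriminator's output.

```python
import numpy as np

def refiner_loss(refined, synthetic, d_refined_is_synthetic, lam=0.1):
    """Sketch of a SimGAN-style refiner loss (illustrative names).

    - adversarial term: pushes the discriminator to rate the refined
      image as "real" (low probability of being synthetic)
    - self-regularization: L1 penalty keeping the refined image close
      to the synthetic input, weighted by lam
    """
    adversarial = -np.log(1.0 - d_refined_is_synthetic + 1e-12)
    self_reg = lam * np.abs(refined - synthetic).sum()
    return adversarial + self_reg

# Toy example: a 4x4 "image" and a refiner that barely changed it.
synthetic = np.zeros((4, 4))
refined = synthetic + 0.01
loss = refiner_loss(refined, synthetic, d_refined_is_synthetic=0.3)
```

In a real training loop this loss would be minimized over the refiner's parameters while the discriminator is trained in alternation, as in any GAN setup.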

4- Easier to fool than we thought…

Robust adversarial examples developed by OpenAI.

We’ve long known about adversarial examples, those slight modifications to images that are indistinguishable to us but can fool neural networks into classifying them as pretty much anything. Well, OpenAI just created the first robust adversarial examples: you can print the image on a piece of paper, view it from any angle, and it will still fool the network! As far as autonomous vehicles are concerned, this means that you could, in theory, design a stop sign that appears normal to you but doesn’t exist for the car. This must resonate with Elon Musk, co-founder of OpenAI, who’s been on a crusade to warn us about the dangers of AI for some time now…

5- Beethoven better watch out

One thing’s for sure: neural networks are not the only ones that can be fooled! Since the advent of Generative Adversarial Networks, we’ve seen a plethora of generated images that appeared real to us but were, in fact, completely fake. One might thus wonder if sight is the only sense that can be deceived. You might have guessed it by now, but it turns out… it’s not.

In their work, Justin Svegliato and Sam Witty designed Deep Jammer, a deep neural network that doubles as a music composer. Don’t believe me? Do a blind test with friends and try to guess which pieces are real!

Got any crazy ideas for the next application of AI? Let us know in the comment section. In the meantime, we’ll be reviewing the Movidius dev kit we just received to offer it as a deployment option.
