NeuralCandy

A Candy dispenser using Android Things and TensorFlow-Lite

NeuralCandy combines an image classifier and sugar highs in one delicious Android Things project. The application asks for a random image to be placed in front of the camera module; if the captured image matches the request, the motor of the candy dispenser is activated to release the reward.

NeuralCandy uses the TensorFlow Lite inference library for Android to classify the captured image locally against a model pre-trained on ImageNet. The model is good at recognizing the categories it was trained on. You can use a smartphone to search Google for the requested target image and hold it in front of the Pi camera. The Raspberry Pi 3 Model B handles both the image processing and the motor for the candy release.
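As a rough illustration of that step, the sketch below shows how a quantized MobileNet-style model could be loaded from the APK assets and run with the TensorFlow Lite Java Interpreter. The model file name, input size, and class name are assumptions for the example, not the project's actual code.

```java
import android.app.Activity;
import android.content.res.AssetFileDescriptor;
import android.graphics.Bitmap;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.List;
import org.tensorflow.lite.Interpreter;

public class ImageClassifier {
    // Input size and model file name are assumptions for a quantized
    // MobileNet-style model bundled in the APK assets.
    private static final int INPUT_SIZE = 224;
    private static final String MODEL_PATH = "mobilenet_quant_v1_224.tflite";

    private final Interpreter tflite;
    private final List<String> labels;  // one label per line in a labels asset
    private final int[] pixels = new int[INPUT_SIZE * INPUT_SIZE];

    public ImageClassifier(Activity activity, List<String> labels) throws IOException {
        this.tflite = new Interpreter(loadModelFile(activity));
        this.labels = labels;
    }

    /** Memory-maps the model from assets so TFLite can read it directly. */
    private static MappedByteBuffer loadModelFile(Activity activity) throws IOException {
        AssetFileDescriptor fd = activity.getAssets().openFd(MODEL_PATH);
        try (FileInputStream in = new FileInputStream(fd.getFileDescriptor())) {
            FileChannel channel = in.getChannel();
            return channel.map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }
    }

    /** Returns the label the model considers most likely for the given frame. */
    public String classify(Bitmap bitmap) {
        // One byte per RGB channel, since the model is quantized.
        ByteBuffer input = ByteBuffer.allocateDirect(INPUT_SIZE * INPUT_SIZE * 3)
                .order(ByteOrder.nativeOrder());
        Bitmap scaled = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, true);
        scaled.getPixels(pixels, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE);
        for (int pixel : pixels) {
            input.put((byte) ((pixel >> 16) & 0xFF));  // R
            input.put((byte) ((pixel >> 8) & 0xFF));   // G
            input.put((byte) (pixel & 0xFF));          // B
        }
        input.rewind();

        byte[][] output = new byte[1][labels.size()];
        tflite.run(input, output);

        // Pick the index with the highest (unsigned) confidence score.
        int best = 0;
        for (int i = 1; i < labels.size(); i++) {
            if ((output[0][i] & 0xFF) > (output[0][best] & 0xFF)) best = i;
        }
        return labels.get(best);
    }
}
```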

[demo]

Note that Android Things is still in the early-adopter stage and may still have some stability issues. NeuralCandy is built with the Preview 8 release, which requires rebooting the Raspberry Pi after installing the app, because the camera permission is not granted until the next device reboot. Another limitation of this preview release is the maximum camera resolution of 640×480. Hopefully, the next release of Android Things will support higher resolutions, since the Raspberry Pi Camera v2.1 is capable of taking 8 MP images and 1080p video at 30 fps.
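Given that cap, the capture side has to request the 640×480 size explicitly. Below is a minimal sketch using the Camera2 ImageReader; the class and method names are illustrative, not the app's actual code.

```java
import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;

public class CameraHelper {
    // 640x480 is the maximum resolution supported by this preview release.
    private static final int IMAGE_WIDTH = 640;
    private static final int IMAGE_HEIGHT = 480;

    private ImageReader imageReader;

    /** Creates a reader for single 640x480 JPEG captures. */
    public void initializeReader(Handler backgroundHandler) {
        imageReader = ImageReader.newInstance(
                IMAGE_WIDTH, IMAGE_HEIGHT, ImageFormat.JPEG, 1 /* maxImages */);
        imageReader.setOnImageAvailableListener(
                reader -> processImage(reader.acquireLatestImage()),
                backgroundHandler);
    }

    private void processImage(Image image) {
        if (image == null) return;
        // Decode the JPEG bytes, hand the bitmap to the classifier, then close.
        image.close();
    }
}
```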

What you’ll need

  • Raspberry Pi 3 Model B
  • Raspberry Pi Camera v2.1
  • Explorer pHAT (motor driver)
  • Candy dispenser with a DC motor

Why do I need a motor driver?

The Raspberry Pi’s GPIO ports can only supply a few mA of current (16 mA max). Attempting to draw more than this will damage the Pi. Motors typically require at least 400 mA to start spinning (although they draw far less once running). Motor drivers are often H-bridge circuits, capable of driving a motor forwards or backwards. The Explorer pHAT has a dual H-bridge and circuitry that makes controlling the motor easier.

[motor]

Notice that in this configuration the motor is powered by the same power supply as the Raspberry Pi, so the AAA batteries no longer need to be installed in the candy dispenser.
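For reference, driving one direction of the H-bridge from Android Things reduces to toggling a GPIO pin through the Peripheral I/O API. The sketch below assumes the Preview 8 PeripheralManager API, and the BCM pin name is a placeholder; check the Explorer pHAT pinout for the actual motor-driver inputs.

```java
import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.PeripheralManager;
import java.io.IOException;

public class CandyMotor {
    // Placeholder pin name: the Explorer pHAT routes its motor-driver
    // inputs to specific Raspberry Pi GPIOs (see the pHAT pinout).
    private static final String MOTOR_FORWARD_PIN = "BCM19";

    private final Gpio forward;

    public CandyMotor() throws IOException {
        PeripheralManager pio = PeripheralManager.getInstance();
        forward = pio.openGpio(MOTOR_FORWARD_PIN);
        forward.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
    }

    /** Spins the motor long enough to drop one candy, then stops. */
    public void dispense(long millis) throws IOException, InterruptedException {
        forward.setValue(true);
        Thread.sleep(millis);
        forward.setValue(false);
    }

    public void close() throws IOException {
        forward.close();
    }
}
```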

Flow of control by time ordering

The sequence diagram below shows the flow of actions as they unfold through the user's interaction with the application.

[sequence]
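In code, that interaction reduces to a request/verify/reward loop. The glue class below is purely illustrative, wiring together the hypothetical helpers sketched earlier rather than the app's actual classes.

```java
import android.graphics.Bitmap;
import java.io.IOException;
import java.util.List;
import java.util.Random;

/** Illustrative glue tying the earlier sketches together. */
public class DispenserFlow {
    private final ImageClassifier classifier;
    private final CandyMotor motor;
    private final List<String> labels;
    private final Random random = new Random();
    private String requestedLabel;

    public DispenserFlow(ImageClassifier classifier, CandyMotor motor,
                         List<String> labels) {
        this.classifier = classifier;
        this.motor = motor;
        this.labels = labels;
        this.requestedLabel = labels.get(random.nextInt(labels.size()));
    }

    /** Called for each captured frame: reward a match, then pick a new target. */
    public void onFrameCaptured(Bitmap frame)
            throws IOException, InterruptedException {
        if (classifier.classify(frame).equals(requestedLabel)) {
            motor.dispense(1500);  // run the motor ~1.5 s to drop one candy
            requestedLabel = labels.get(random.nextInt(labels.size()));
        }
    }
}
```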

Implementation classes

This is the implementation-level class diagram, which shows the classes involved in the NeuralCandy app.

[classes]

References

  • https://github.com/googlecodelabs/androidthings-imageclassifier
  • https://github.com/androidthings/sample-simpleui/tree/master/app
  • https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite

License

Copyright 2018 Al Bencomo

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.