Sebastien Cagnon has been a Behavior Architect at Aldebaran for 2 years, creating complex apps on NAO and Pepper. He is currently Head of Technical Support in Tokyo. Author of the blog About Robot, he regularly writes blog posts about the application creation process, with helpful resources.
Today, we share his latest blog post about packaging the object recognition feature in an application. You can find the original article on Sebastien's blog, and you can also find him on Twitter or Github.
NAO and Pepper have this awesome feature to recognize objects from a database of images. You can create the database quite easily with Choregraphe and upload it to your robot. It's rather simple. For more details on the basics of creating and using a Vision Recognition Database, check the official documentation here.
But what happens if you want two apps with two completely different databases? Or if you want to distribute your application with an image recognition database through the application store?
So in this little tutorial, I will show you a simple trick to package your object recognition database in your app. You can find the completed sample project on Github.
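The idea behind the trick is to ship the database files inside the application package and copy them into place on the robot when the behavior starts. Here is a minimal sketch in plain Python; the function name and both folder paths are my own illustrations, not the exact paths or API that NAOqi uses:

```python
import os
import shutil
import tempfile

def install_vision_db(app_root, robot_db_dir, db_name):
    """Copy a vision recognition database shipped with the app
    into the folder where the robot looks for databases.

    app_root     -- root folder of the installed application
    robot_db_dir -- destination folder on the robot (assumed path)
    db_name      -- name of the database folder (hypothetical)
    """
    src = os.path.join(app_root, db_name)
    dst = os.path.join(robot_db_dir, db_name)
    if os.path.isdir(dst):
        shutil.rmtree(dst)  # replace any stale copy from a previous install
    shutil.copytree(src, dst)
    return dst

# Demonstration with temporary folders so the sketch is self-contained.
app = tempfile.mkdtemp()
robot = tempfile.mkdtemp()
os.makedirs(os.path.join(app, "sample_db"))
open(os.path.join(app, "sample_db", "db.xml"), "w").close()

installed = install_vision_db(app, robot, "sample_db")
print(os.path.isfile(os.path.join(installed, "db.xml")))  # True
```

The point is simply that each app carries its own database, so two apps with two different databases no longer conflict: whichever app runs last installs its own copy.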
Important note: the following tutorial only works with Choregraphe 2.1.2 or older (the NAOqi version on your robot can be more recent); the feature to export the database to NAO does not work in Choregraphe 2.1.3.
Once you have created your vision database as explained here, there is one adjustment you should make to the recognition box.
By default, the Vision Reco. box keeps sending a signal every time it recognizes something, so you often get several signals for the same object over 1 or 2 seconds.
To avoid that, I right-click on the onPictureLabel output, choose Edit, and change its "nature" to "onStopped". When the nature of the output is "onStopped", the box is unloaded and the robot stops the vision recognition, so you can process that one object and come back to the Vision Reco. box later in a loop.
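If you would rather keep the box loaded and filter the burst of duplicate signals yourself, a small debouncer in your own callback does the job. This is not how the box works internally; it is just a plain-Python sketch under my own names, where `accept` returns True only for the first signal per label inside the time window:

```python
import time

class RecognitionDebouncer:
    """Drop repeated recognition signals for the same label
    arriving within a short window (the 1-2 second burst)."""

    def __init__(self, window=2.0, clock=time.time):
        self.window = window
        self.clock = clock   # injectable clock, handy for testing
        self._last = {}      # label -> timestamp of last accepted signal

    def accept(self, label):
        now = self.clock()
        last = self._last.get(label)
        if last is not None and now - last < self.window:
            return False     # duplicate within the window: ignore it
        self._last[label] = now
        return True

# Simulated burst: the same object recognized three times in one second,
# then again after the window has passed.
fake_now = [0.0]
deb = RecognitionDebouncer(window=2.0, clock=lambda: fake_now[0])
results = []
for t, label in [(0.0, "cup"), (0.5, "cup"), (1.0, "cup"), (3.0, "cup")]:
    fake_now[0] = t
    results.append(deb.accept(label))
print(results)  # [True, False, False, True]
```

The "onStopped" approach from the tutorial is still the simpler choice inside Choregraphe; this sketch is only useful if your design needs continuous recognition.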
Don't forget to check the completed sample project on Github.