
What's new in Choregraphe 2.1

victor.png

A few days ago we released a new version of NAOqi that runs on both NAO and Pepper. With it came a new version of Choregraphe, through which you can access all the new functionalities of our robots: dialog, facial recognition… We sat down with Victor Paléologue to talk about what’s new in this release.

Please introduce yourself and your work at Aldebaran.

I’m working with the Desktop Team. We’re in charge of all of Aldebaran’s graphical apps, including Choregraphe, a powerful tool for developing apps for the robot. We also work on Monitor and some other tools for internal use.

My job is to take NAOqi and create graphical interfaces to control it, with small modules and a bit of refinement of what’s already in NAOqi. Sometimes I’ll work on NAOqi itself, for example to make sure we have events for what’s happening on the robot, or for the mechanics of launching behaviors. In the end, everything in the graphical language maps to what’s happening in NAOqi, and in the robot.

So we can thank you for the robot being so easy to develop for. That’s pretty cool. Can you tell us a bit about the latest evolution of Choregraphe coming with version 2.1?

The latest big evolution is a new project type. We’ve actually gone back to something that existed in the early versions of Choregraphe: project files. Their advantage is that you’re not limited to a single behavior.

A behavior is an assembly of boxes driven by event logic. That’s enough if you want to do one activity with the robot. But if you want, you can add dialogs, trajectories, timelines, animations… a whole constellation of small behaviors that complete your app.

An app is not just one behavior: it’s made of many pieces of content, including Choregraphe behaviors, C++ code and Python modules. Now you can pack all of that into one app through this new project type, called PML, which references every file in your project. CRGs are still available though!

choregraphe capture project.png

What is the advantage of doing that in terms of interacting with the robot?

All these behaviors now run in the robot’s autonomous life, in a module called ALAutonomousLife. But there are also several other modules that can manage the launch conditions of behaviors. You can tell each behavior when to launch, what permissions it has, which key phrases trigger it, and so on.
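The same module is also reachable from outside Choregraphe. Here is a minimal sketch using the NAOqi Python SDK, assuming NAOqi 2.1 and a reachable robot; the address and the behavior path are placeholders:

    from naoqi import ALProxy

    # Connect to the autonomous life module (the address is a placeholder).
    life = ALProxy("ALAutonomousLife", "nao.local", 9559)

    print(life.getState())     # e.g. "solitary", "interactive" or "disabled"
    life.setState("solitary")  # turn autonomous life on, waiting for stimuli

    # Hand the focus to one installed behavior; the path is hypothetical.
    life.switchFocus("my-app/behavior_1")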

You can also make a dialog collaborative: the robot will find it and feed it into the ongoing discussion. Using that, you can launch animations or apps through a natural interaction with the robot. You no longer need to click a “play” button in Choregraphe.
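For the curious, what the autonomous life does for you here can also be done by hand through ALDialog. A hedged sketch, again with the Python SDK; the address, topic path and subscriber name are placeholders, and the .top file must already be installed on the robot:

    from naoqi import ALProxy

    # Connect to the dialog module (the address is a placeholder).
    dialog = ALProxy("ALDialog", "nao.local", 9559)

    # Load and activate a topic file installed on the robot (hypothetical path).
    topic = dialog.loadTopic("/home/nao/my-app/greetings_enu.top")
    dialog.activateTopic(topic)

    # Start the dialog engine under an arbitrary subscriber name.
    dialog.subscribe("my_module")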

Back to Choregraphe in particular… Is there a feature that you feel is powerful but underused?

There’s a big change in the behavior execution engine: now, if an error occurs in a behavior, the behavior stops. It’s like a program hitting an exception: if no one catches it, the program stops.

Behaviors now use the same mechanism: you have to catch exceptions if you don’t want the app to stop. Putting “try/except” everywhere in the Python code is cumbersome, but there’s a trick: add an “onError” output (type: “string”, nature: “onStopped”) to your boxes, so you can get the exception through that output and send it to another box that takes care of it. You get an execution fork in Choregraphe.
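Concretely, a box script using that trick can look like this. A sketch only: GeneratedClass is the base class Choregraphe generates for every box, and risky_operation stands for whatever your box actually does:

    class MyClass(GeneratedClass):
        def __init__(self):
            GeneratedClass.__init__(self)

        def onInput_onStart(self):
            try:
                risky_operation()  # placeholder for the box's real work
            except Exception as e:
                # Fire the "onError" output instead of letting the
                # exception stop the whole behavior.
                self.onError(str(e))
                return
            self.onStopped()  # normal termination

Connect the “onError” output to a box that logs the message or recovers, and the failure stays inside your diagram instead of killing the app.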

This way, when the exception is caught, you’ll see your box turn red in Choregraphe, and you can take care of all these exceptions in the graphical interface.

What feature of Choregraphe are you particularly proud of?

  • Start a new Choregraphe project.
  • Drop a "Say" box onto the main diagram and connect its I/Os.
  • Connect to a robot and click play: your robot talks!

This is one of the easiest "Hello World"s in the galaxy; I'm proud to keep it that simple while Choregraphe grows in power.

choregraphe hello world.png
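For comparison, the closest equivalent with the NAOqi Python SDK is barely longer. A minimal sketch; the address is a placeholder for your robot's:

    from naoqi import ALProxy

    # Connect to the text-to-speech module and speak (address is a placeholder).
    tts = ALProxy("ALTextToSpeech", "nao.local", 9559)
    tts.say("Hello World")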

Are there ways the developers have used Choregraphe that have surprised you?

It has become pretty standard now, but it surprised me at first: using timelines as state machines. I was used to Flash at the time, and to using timelines that way, but I didn’t think it would become so widely used. I thought I was the only one doing it, but it actually became the norm. Now we plan to do something official to support this mechanic.

In the new version we’ve changed a couple of things, notably on timelines. If a timeline has no content and no end frame set, it will now stop right after its last behavior keyframe. Before, if you didn’t stop your timeline yourself, it would run on indefinitely.

So if some people have been using timelines in state-machine mode, relying on keyframes that keep running: I’m sorry if we break some workflows!

Extra tip: you can now set the FPS to 0 if you wish to use the timeline as a state machine. The frames let you create a “go to” from one part of the code to another and load or unload keyframe behaviors.
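If the pattern is unfamiliar, here is the same idea in plain Python, with no Choregraphe API involved: each keyframe plays the role of a state, and the “go to” jumps are the transitions (the state names are made up for the illustration):

    # Each function is one "keyframe": it does its work, then returns the
    # name of the next state to jump to, or None to stop the machine.
    def greet():
        print("state: greet")
        return "listen"

    def listen():
        print("state: listen")
        return None  # no transition: the machine stops here, like a
                     # timeline stopping after its last keyframe

    STATES = {"greet": greet, "listen": listen}

    state = "greet"
    while state is not None:
        state = STATES[state]()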

We’ve talked about what’s new, let’s talk about what’s next. What do you dream of implementing in Choregraphe in the future?

So many things, it's hard to choose... As a lead developer, the sexiest thing I can think of is a new behavior system with live introspection! It would come with a deep rework of our node-based graphical behavior editor.

Understand what's on your robot's mind. See how the autonomous life triggers behaviors. See how each behavior works and review its history of events. Apply the same to services and NAOqi tasks, so you don't have to care whether something comes from Choregraphe or from C++ code.

Underneath the logic with which you program the robot lies the logic with which the robot will reason autonomously. This is exciting!
