Pepper Tutorial <6>: Touch sensor, Human recognition
In this tutorial, we will explain the specifications and behaviour of Pepper’s touch sensors and human-recognition functionality.
A physical Pepper robot is required for this tutorial, as touch sensing and human recognition cannot be simulated on the virtual robot.
1. 3 sensors on the head: front [A], centre [B], back [C]
2. 1 sensor on the back of each hand
In this tutorial, we will make Pepper respond when a touch on the head or hand sensors is detected.
1. Prepare boxes
Sensing > Touch > Tactile Head
Sensing > Touch > Tactile L.Hand
Sensing > Touch > Tactile R.Hand
Speech > Creation > Say x 5
2. Connect boxes
Just like the “Tactile Head” box, the “Tactile L.Hand” and “Tactile R.Hand” boxes have 3 outputs, but on Pepper only the middle output, backTouched, fires, since the hands only carry a sensor on the back.
3. Set parameters of “Say” boxes
Modify the parameters of each “Say” box so that Pepper says different things depending on which tactile sensor is touched.
The application is now ready to run. To check its operation, connect to Pepper and run the application. Try touching different sensors and confirm that Pepper says the corresponding words.
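The wiring above amounts to a simple mapping from touch memory events to phrases. The sketch below illustrates that logic outside Choregraphe; the event keys are the standard NAOqi touch events, while `say` is a hypothetical stand-in for the “Say” box (on a real robot it would call ALTextToSpeech), and the phrases are examples:

```python
# Map each NAOqi touch memory event to the phrase a "Say" box would speak.
PHRASES = {
    "FrontTactilTouched":   "You touched the front of my head",
    "MiddleTactilTouched":  "You touched the middle of my head",
    "RearTactilTouched":    "You touched the back of my head",
    "HandLeftBackTouched":  "You touched my left hand",
    "HandRightBackTouched": "You touched my right hand",
}

def on_touch_event(event_name, value, say=print):
    """Speak only on press (value 1.0), not on release (0.0)."""
    if value == 1.0 and event_name in PHRASES:
        say(PHRASES[event_name])
        return PHRASES[event_name]
    return None
```

Note that each touch event fires twice, with value 1.0 on press and 0.0 on release, which is why the handler filters on the value.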
Pepper identifies people nearby and detects where they are by using various sensors.
In the Robot view pane, semicircular zones 1, 2 and 3 are displayed on the floor, as in the picture below. These are called the “engagement zones”, and the robot’s behaviour can be modified depending on events occurring within them.
The engagement zones can be customised via the API, but the default parameters are:
FirstDistance = 1.5m from Pepper (Zone 1)
SecondDistance = 2.5m from Pepper (Zone 1 + Zone 2)
LimitAngle = 90°
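These defaults can be expressed as a small classifier. The sketch below is an illustration of the zone geometry, not robot code; it assumes LimitAngle is the full aperture of the detection cone (so a person must be within ±45° of Pepper’s forward axis), with positions given as (x, y) in metres in Pepper’s frame, x pointing forward:

```python
import math

FIRST_DISTANCE = 1.5    # Zone 1 boundary (m)
SECOND_DISTANCE = 2.5   # Zone 2 boundary (m)
LIMIT_ANGLE = 90.0      # assumed full aperture of the detection cone (deg)

def engagement_zone(x, y):
    """Return 1, 2 or 3 for a person at (x, y), or None outside the cone."""
    distance = math.hypot(x, y)
    angle = math.degrees(math.atan2(y, x))  # 0 deg = straight ahead
    if abs(angle) > LIMIT_ANGLE / 2:
        return None
    if distance <= FIRST_DISTANCE:
        return 1
    if distance <= SECOND_DISTANCE:
        return 2
    return 3
```

For example, a person 1 m straight ahead is in Zone 1, at 2 m in Zone 2, and beyond 2.5 m in Zone 3.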
So far, we have only used boxes to receive inputs such as touch and face-recognition information. We will now explain how to use memory events to receive the engagement zone information.
Memory events are one of the facilities provided by ALMemory.
ALMemory is a mechanism that consolidates information about the robot: it accumulates and shares data such as hardware state and values computed from the input of the hardware sensors.
ALMemory also stores key/value pairs and can notify subscribers of changes to them in the form of memory events.
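The key/value-plus-notification model can be illustrated with a tiny in-memory stand-in. This is a sketch of the concept only, not the ALMemory API; on a real robot the same pattern runs through ALMemory calls such as subscribeToEvent, raiseEvent and getData:

```python
class MiniMemory:
    """Toy stand-in for ALMemory: stores key/value pairs and notifies subscribers."""

    def __init__(self):
        self._data = {}
        self._subscribers = {}  # key -> list of callbacks

    def subscribe(self, key, callback):
        """Register a callback to be invoked whenever `key` is raised."""
        self._subscribers.setdefault(key, []).append(callback)

    def raise_event(self, key, value):
        """Store the value and notify every subscriber of that key."""
        self._data[key] = value
        for cb in self._subscribers.get(key, []):
            cb(key, value)

    def get_data(self, key):
        """Return the last stored value for `key`, or None."""
        return self._data.get(key)
```

A box input bound to a memory event behaves like one of these callbacks: it fires each time the event is raised, receiving the event’s value.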
Boxes that react to a particular event detect the occurrence of the corresponding memory event and send output. For example, when you double click on the “Tactile Head” box, you can see that its structure looks something like this:
The flows we have looked at so far had only one onStart input on the left, but this flow has three additional inputs. Details can be checked by mousing over each input; each one corresponds to a different memory event (FrontTactilTouched [A], MiddleTactilTouched [B], RearTactilTouched [C]) and sends a signal when that event fires.
Using Memory Watcher:
Memory events can be inspected on the Memory watcher pane.
1. Go to the View menu and select [Memory watcher] and open the pane.
2. Double click on <Select memory keys to watch> or right click on it and select [+] Select…
3. Select the memory event to watch. In this tutorial, we will select the FrontTactilTouched key, which is used by the “Tactile Head” box. Type the key name into the Filter field, tick the box that appears below, then click [OK].
4. The selected memory event name and its value now appear on the memory watcher pane.
Memory watcher polls the robot’s memory values periodically and updates the display. The polling period can be changed with the [Period] box found at the bottom of the pane.
Try touching Pepper’s head and see how the value of FrontTactilTouched changes on the Memory watcher pane.
Human Approach Detection
In this tutorial, we will make Pepper say “Hello” upon detecting a human approaching, and say “See you later” when someone leaves the detection zone.
1. Prepare boxes
Speech > Creation > Say x2
2. Detect someone approaching or leaving as memory events.
Click the [+] button on the left of the flow diagram to open the Select memory events dialog box.
3. We will be using the memory event called “PersonApproached”.
Enter “Person” in the filter field and tick the box next to the PersonApproached event under EngagementZones/, then click [OK].
4. A new input appears on the left of the flow diagram. This input produces a signal when the PersonApproached event occurs.
Details of the memory event can be checked by mousing over the input.
5. Repeat steps 2–4 and add another memory event, “PersonMovedAway”.
6. Connect the boxes
7. Set parameters of “Say” boxes.
Set the first one as “Hello” and the other as “See you later”.
The application is now ready to run. To check its operation, connect to Pepper and run the application. Try walking towards and away from Pepper (move far enough away that you are well outside the detection zone), and check whether Pepper says “Hello” or “See you later”.
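The two memory-event inputs and two “Say” boxes implement a simple greeter, which can be sketched as the dispatcher below. The event names are the EngagementZones events selected in the dialog above; `say` is a hypothetical stand-in for the “Say” box:

```python
# Phrases matching the two "Say" boxes in the flow.
GREETINGS = {
    "EngagementZones/PersonApproached": "Hello",
    "EngagementZones/PersonMovedAway":  "See you later",
}

def on_engagement_event(event_name, say):
    """Speak the greeting wired to this engagement event, if any."""
    phrase = GREETINGS.get(event_name)
    if phrase is not None:
        say(phrase)
    return phrase
```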
Entering in the Engagement Zone:
In this tutorial, we will make Pepper detect which zone a person is entering and say the corresponding zone number.
Just like the previous task, we will use memory events to detect which zone a person is entering.
1. Prepare boxes
Speech > Creation > Say x3
2. Add memory events.
The steps to add memory event inputs are exactly the same as in the previous tutorial, [Human Approach Detection].
3. Connect boxes
4. Set parameters of “Say” boxes
The application is now ready to run. To check its operation, connect to Pepper and run the application. Try walking towards and away from Pepper, and see whether Pepper tells you which zone you are entering.
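Assuming the three memory events selected here are EngagementZones/PersonEnteredZone1, EngagementZones/PersonEnteredZone2 and EngagementZones/PersonEnteredZone3 (the standard NAOqi engagement-zone entry events), the three “Say” boxes amount to the small dispatcher below; `say` is again a stand-in for the “Say” box and the phrase wording is an example:

```python
def on_zone_event(event_name, say):
    """Announce the engagement zone a person has just entered."""
    for zone in (1, 2, 3):
        if event_name == "EngagementZones/PersonEnteredZone%d" % zone:
            phrase = "You are in zone %d" % zone
            say(phrase)
            return phrase
    return None
```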