Pepper Tutorial <6>: Touch sensor, Human recognition

In this tutorial, we will explain the specifications and behaviour of the touch sensor and human recognition functionalities.

For this tutorial, a physical Pepper robot is required, as the touch sensors and human recognition cannot be simulated on the virtual robot.

Sensor specifications

1. 3 sensors on the head: front [A], centre [B], back [C]

2. 1 sensor on the back of each hand

Touch Detection

In this tutorial, we will make Pepper respond when a touch on the head or hand sensors is detected.

1. Prepare boxes

  • Sensing > Touch > Tactile Head

  • Sensing > Touch > Tactile L.Hand

  • Sensing > Touch > Tactile R.Hand

  • Speech > Creation > Say x 5

2. Connect boxes

Like the “Tactile Head” box, the “Tactile L.Hand” and “Tactile R.Hand” boxes each have three outputs, but on Pepper only the middle backTouched output fires.

3. Set parameters of “Say” boxes

Modify the parameters of each “Say” box so that Pepper says different things depending on which tactile sensor detects a touch.

The application is now ready to run. To check the operation, connect to Pepper and run the application, then try touching the different sensors and check that Pepper says the corresponding words.
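The wiring above can also be sketched in script form. The sketch below maps each touch event to a phrase, standing in for the five “Say” boxes; the head event names (FrontTactilTouched, MiddleTactilTouched, RearTactilTouched) are the standard NAOqi ones, and HandLeftBackTouched / HandRightBackTouched are assumed here for the hand sensors. The phrases themselves are placeholders — on a real robot the events would arrive from ALMemory, and the returned string would be passed to text-to-speech.

```python
# One placeholder phrase per tactile sensor, mirroring the "Say" boxes.
PHRASES = {
    "FrontTactilTouched": "You touched the front of my head",
    "MiddleTactilTouched": "You touched the middle of my head",
    "RearTactilTouched": "You touched the back of my head",
    "HandLeftBackTouched": "You touched my left hand",
    "HandRightBackTouched": "You touched my right hand",
}

def on_touch(event_name, value):
    """React only to a press (value 1.0), like the box outputs;
    releases (value 0.0) are ignored."""
    if value == 1.0:
        return PHRASES.get(event_name)
    return None
```

On release the sensors raise the same event with value 0.0, which is why the sketch checks the value before answering.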

Human recognition

Engagement Zone:

Pepper identifies people nearby and detects where they are by using various sensors.

In the Robot view pane, semicircular zones 1, 2, 3 are displayed on the floor as in the picture below. These are called the “engagement zones”, and the robot’s behaviour can be modified depending on the events happening within those zones.

The definition of the engagement zones can be customised via the API, but the default parameters are:

  • FirstDistance = 1.5m from Pepper (Zone 1)

  • SecondDistance = 2.5m from Pepper (Zone 1 + Zone 2)

  • LimitAngle = 90°
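The default geometry above can be illustrated with a small, hypothetical helper that classifies a person's position into a zone. It assumes LimitAngle is the total aperture of the cone in front of Pepper (so ±45° around straight ahead); this interpretation and the function itself are illustrative, not part of the NAOqi API.

```python
# Default engagement-zone parameters from the tutorial.
FIRST_DISTANCE = 1.5   # metres: boundary of Zone 1
SECOND_DISTANCE = 2.5  # metres: boundary of Zone 1 + Zone 2
LIMIT_ANGLE = 90.0     # degrees: total aperture in front of the robot (assumed)

def engagement_zone(distance_m, angle_deg):
    """Return 1, 2 or 3 for a person at the given distance and bearing
    (0 deg = straight ahead), or None outside the aperture."""
    if abs(angle_deg) > LIMIT_ANGLE / 2.0:
        return None  # behind the cone's edge: not in any zone
    if distance_m <= FIRST_DISTANCE:
        return 1
    if distance_m <= SECOND_DISTANCE:
        return 2
    return 3
```

On the real robot these limits would be changed through the engagement-zones API rather than by editing constants.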

Memory Event:

So far, we have only been using boxes to receive various inputs such as touch and face recognition information. We will now explain how to use memory events to receive inputs from the engagement zone information.

Memory event is one of the functionalities provided by ALMemory.

ALMemory is a mechanism that consolidates information about the robot: it accumulates and shares data such as hardware state and values computed from the input of the hardware sensors.

ALMemory also stores key and value combinations, and can notify subscribers of changes to them in the form of memory events.
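The key/value-plus-notification mechanism can be illustrated with a toy stand-in, which is not the real NAOqi API: a store where writing a key both records the value and calls any callbacks subscribed to that key, just as writing to ALMemory raises a memory event.

```python
class MiniMemory:
    """Toy illustration of ALMemory's key/value store with event notification."""

    def __init__(self):
        self._data = {}
        self._subscribers = {}  # key name -> list of callbacks

    def insert_data(self, key, value):
        """Store a value and notify subscribers, like raising a memory event."""
        self._data[key] = value
        for callback in self._subscribers.get(key, []):
            callback(key, value)

    def get_data(self, key):
        return self._data[key]

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)
```

A box input that reacts to a memory event corresponds to a subscribed callback in this picture: it runs each time the key's value is written.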

Boxes that react to a particular event detect the occurrence of the corresponding memory event and send out their outputs. For example, if you double-click the “Tactile Head” box, you can see that its structure looks something like this:

The flows we have looked at so far had only one onStart input on the left, but this flow has three additional inputs. Their details can be checked by hovering the mouse over them, and each input corresponds to a different memory event (FrontTactilTouched [A], MiddleTactilTouched [B], RearTactilTouched [C]) that sends out the signal.

Using Memory Watcher:

Memory events can be checked on the Memory watcher pane.

1. Go to the View menu and select [Memory watcher] to open the pane.

2. Double click on <Select memory keys to watch> or right click on it and select [+] Select…

3. Select the memory event to watch. In this tutorial, we will select the FrontTactilTouched key, which is used in the “Tactile Head” box. Type the key name into the Filter field, tick the box that appears below, then click [OK].

4. The selected memory event name and its value now appear on the memory watcher pane.

The Memory watcher polls the robot’s memory values at regular intervals and updates the display. The polling period can be changed with the [Period] box found at the bottom of the pane.

Try touching Pepper’s head and see how the value of FrontTactilTouched changes on the Memory watcher pane.
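What the Memory watcher does can be sketched as a simple polling loop: read a key at a fixed period and record each sampled value. The function below is illustrative only; a plain dict plays the role of the robot's memory, and an injectable sleep function keeps the sketch testable without real delays.

```python
import time

def watch(memory, key, period_s, samples, sleep=None):
    """Poll memory[key] `samples` times, `period_s` apart; return the readings.

    `sleep` can be overridden (e.g. with a no-op) for testing; by default
    it really waits, like the [Period] setting in the watcher pane."""
    sleep = sleep or time.sleep
    readings = []
    for _ in range(samples):
        readings.append(memory.get(key))
        sleep(period_s)
    return readings
```

With a period of 0.5 s, for example, a touch shorter than half a second could fall between two polls, which is why lowering the period makes fast-changing values such as FrontTactilTouched easier to observe.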

Human Approach Detection

In this tutorial, we will make Pepper say “Hello” upon detecting a human approach, and say “See you later” when someone leaves the detection zone.

1. Prepare boxes

  • Speech > Creation > Say x2

2. Receive the information about someone approaching or leaving as memory events.

Click the [+] button on the left of the flow diagram to open the Select memory events dialog box.

3. We will be using the memory event called “PersonApproached”.

Enter “Person” in the Filter field and tick the box next to the PersonApproached event under EngagementZones/, then click [OK].
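The finished application amounts to routing two engagement-zone events to two phrases, one per “Say” box. The sketch below assumes the approach event is EngagementZones/PersonApproached as selected above, and that the companion leave event is named EngagementZones/PersonMovedAway (an assumption; check the event list in the same dialog on your robot). On the real robot, the returned string would go to text-to-speech.

```python
# One phrase per engagement-zone memory event, like the two "Say" boxes.
GREETINGS = {
    "EngagementZones/PersonApproached": "Hello",
    "EngagementZones/PersonMovedAway": "See you later",  # assumed event name
}

def on_engagement_event(event_name):
    """Return the phrase for a recognised event, or None for anything else."""
    return GREETINGS.get(event_name)
```

Connecting each event input directly to its own “Say” box in Choregraphe achieves the same routing without any script.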