Image Signal Processing Techniques for Hand-Eye-Foot Coordinated
System
TAIN-SOU TSAY
Department of Aeronautical Engineering,
National Formosa University,
64, Wen-Hua Road, Huwei, Yunlin, 63208, TAIWAN
Abstract: - In this paper, image signal processing techniques are applied to a Hand-Eye-Foot Coordinated System. The system integrates an IP camera, a multi-joint robotic arm and a wheeled mobile carrier to simulate the coordination of human hands, eyes and feet and to carry out the behaviour of taking an object in the air while on the move. The IP camera captures images and, together with the image processing software, performs target recognition and target selection in the manner of the human eye. The multi-joint robotic arm simulates the human arm and grabs the target object. The wheeled mobile carrier simulates human movement: forward, backward and turning. The rapid image processing and target identification techniques for the system are developed, and 2D target tracking techniques are proposed, including (1) a camera-tracks-target law and (2) a platform-tracks-camera law.
Key-Words: -image processing, hand-eye-foot coordination, intelligent robot, robot, target tracking
Received: March 17, 2021. Revised: April 12, 2022. Accepted: May 8, 2022. Published: June 8, 2022.
1 Introduction
In recent years, the development of robots has become more and more vigorous, and the field of robot research is very broad. Research on autonomous robots has become a very important part of it; there are now many care robots, cleaning robots, display robots and so on, and these service robots are gradually entering human life. A robot's appearance can easily be designed to resemble the human form, but its action behaviour is just as important and is worth studying. This is the motivation for this paper: to design a robot that simulates human action behaviour, can identify a desired object in a complex environment, move to the object and grab it, and to make the robot find the target object quickly and accurately while coordinating and adjusting its actions during the movement.
In this paper, a system that integrates an IP camera, a multi-joint robotic arm and a wheeled mobile carrier is proposed to simulate the coordination of human hands, eyes and feet and to carry out the behaviour of taking an object in the air while on the move. The IP camera captures images and, together with the image processing software, performs target recognition and target selection in the manner of the human eye. The multi-joint robotic arm simulates the human arm and grabs the target object. The wheeled mobile carrier simulates human movement: forward, backward and turning. The proposed techniques are verified by real tests, which show that the system can faithfully simulate the behaviour of a human moving to take an object in the air.
2. Operational Concepts of the Hand-Eye-Foot Coordinated System
2.1. The Proposed System Hardware Design
The proposed system design is shown in Fig.1 [1-5]. It includes the PZ7512 CCD camera, a robot WiFi module, a human moving detector, wheel incremental encoders, an ultrasonic range finder, an IR range finder, the 7-joint robot arm (Fig.2) and pressure transducers for the fingers.
Fig.1. The Proposed System.
Fig.2. The 7-Joint Robot Arm.
The system hardware interface architecture is shown in Fig.3, with the PMS5005 motion card as the core of the system. Three ultrasonic distance sensors and seven infrared distance sensors are connected to the PMS5005 motion card, together with the incremental encoders that monitor the left and right wheels. In addition, a WFS802g wireless communication module connects the system core and the CM-5 controller to the PC over the wireless network through a wireless base station. The CM-5 controller drives the robot arm, which is composed of ten AX-12 servo motors, and the camera has its own wireless communication module built in. The entire development process is therefore carried out on the PC, with all control performed over the wireless network, so the PMS5005 and camera IP addresses must be set up first. The camera is at 192.168.0.199:8081, while the PMS5005 is connected through the X80SV WiRobot Gateway software as shown in Fig.3: enter PMS5005 in the Robot ID field, select the second option, WiFi Connect, enter IP 192.168.0.205 and Port 1001, and press Connect to complete the connection. The power supply of the system is divided into two parts: the X80SV and the camera are supplied by a 12V NiMH battery, and the arm and CM-5 controller are supplied by 12V DC.
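As an illustration of how the PC side could pull frames from the camera over the wireless link, a minimal sketch is given below. It assumes the PZ7512 camera exposes an HTTP/MJPEG stream at the address configured above and that OpenCV is available on the PC; the stream path in the URL is hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' code): grab frames from the IP camera over
# the wireless network. The stream path "/video" is an assumption.
import cv2

CAMERA_URL = "http://192.168.0.199:8081/video"  # hypothetical MJPEG stream path

def grab_frame():
    """Return one 320x240 BGR frame from the IP camera, or None on failure."""
    cap = cv2.VideoCapture(CAMERA_URL)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    return cv2.resize(frame, (320, 240))  # match the 320x240 processing size

if __name__ == "__main__":
    frame = grab_frame()
    print("frame received" if frame is not None else "no frame")
```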
Fig.3. System Hardware Interface.
2.2. Tracking Control Configuration
The relationship on the horizontal plane between the target, the camera and the moving platform is shown in Fig.4, in which $\psi_{SB}$ represents the angular deviation between the central lines of the platform and the camera, and $\psi_S$ represents the line-of-sight angle of the target. A zero value of $\psi_S$ means the camera is pointing at the target; a zero value of $\psi_{SB}$ means the body is tracking the line of sight.
Fig.4. Relationships between Target, Camera and
Platform (Vehicle).
The tracking and control configuration on the horizontal plane is shown in Fig.5. Two tracking laws will be developed: one for the camera tracking the target and one for the vehicle tracking the camera. Here $\psi_S$ is the yawing angle of the camera; $\psi_B$ is the yawing angle of the platform; $(X_B, Y_B)$ and $(X_T, Y_T)$ are the platform and target positions, respectively; $R_{BT}$ is the distance between the platform and the target; $\sigma$ is the line-of-sight angle; $e$ is the tracking angular error of the target with respect to the centre of the camera screen; and $X_{err}$ is the tracking error in pixels. Their relationships are

$$\sigma = \tan^{-1}\frac{Y_T - Y_B}{X_T - X_B} \qquad (1)$$

$$e = \sigma - \psi_S \qquad (2)$$

$$X_{err} = 160\,\tan^{-1}(e)\big/\tan^{-1}(H_{FOV}/2) \qquad (3)$$

and

$$R_{BT} = \sqrt{(X_T - X_B)^2 + (Y_T - Y_B)^2} \qquad (4)$$
where $H_{FOV}$ is the horizontal field of view of the camera. Note that the tracking error $e$ is replaced by $X_{err}$, for it is difficult to find $e$ when the size of
the target is unknown; however, $X_{err}$ approaching zero is equivalent to $e$ approaching zero. The definition of $X_{err}$ is given in Fig.6. Fig.5 can be used to evaluate the tracking laws. The response of the image seeker must be faster than that of the vehicle yawing so that the disturbance from the platform can be suppressed. The vehicle yawing is accomplished by driving the two wheels at different speeds [6-9].
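For reference, a short numerical sketch of Eqs. (1)-(4) is given below (my reading of the reconstructed formulas, not the authors' code); it maps the platform and target positions, the camera yaw and the horizontal field of view to the line-of-sight angle $\sigma$, the tracking error $e$, its pixel equivalent $X_{err}$ and the range $R_{BT}$.

```python
# Sketch of Eqs. (1)-(4): line of sight, camera tracking error, pixel error and range.
# All angles are in radians; the 160-pixel half-width matches the 320x240 image.
import math

def tracking_geometry(xb, yb, xt, yt, psi_s, hfov):
    """(xb, yb)/(xt, yt): platform/target positions; psi_s: camera yaw; hfov: horizontal FOV."""
    sigma = math.atan2(yt - yb, xt - xb)                    # Eq. (1), atan2 avoids division by zero
    e = sigma - psi_s                                       # Eq. (2): camera tracking error
    x_err = 160.0 * math.atan(e) / math.atan(hfov / 2.0)    # Eq. (3): tracking error in pixels
    r_bt = math.hypot(xt - xb, yt - yb)                     # Eq. (4): platform-target range
    return sigma, e, x_err, r_bt

if __name__ == "__main__":
    print(tracking_geometry(0.0, 0.0, 2.0, 1.0, 0.2, math.radians(60)))
```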
Fig.5. Tracking and Control Configuration of
Horizontal Plane.
Fig.6. Tracking error definition of the Target.
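The tracking laws themselves are not listed in this section; purely to illustrate the configuration in Fig.5, one possible form is a pair of proportional laws, with the camera (seeker) loop run at a higher gain and rate than the platform loop so that platform disturbances are suppressed. The gains, signs and rates below are illustrative assumptions, not values from the paper.

```python
# Illustrative only: one possible realisation of the two tracking laws in Fig.5.
# K_CAM and K_BODY are assumed gains, not values from the paper; signs depend on
# the chosen angle conventions.
K_CAM = 0.02    # camera (seeker) loop gain, pixels -> rad per step; fast loop
K_BODY = 0.2    # platform loop gain, rad -> rad per step; slower loop

def camera_tracking_law(x_err):
    """Camera tracks target: drive the pixel error X_err toward zero."""
    return -K_CAM * x_err        # camera yaw-rate command

def platform_tracking_law(psi_sb):
    """Platform tracks camera: drive the camera/body deviation psi_SB toward zero."""
    return -K_BODY * psi_sb      # platform yaw-rate command, realised by running
                                 # the two wheels at different speeds
```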
2.4. The Proposed System Operation Procedure Design
The system operation procedure is shown in Fig.7.
Fig.7. System Operation Flow Chart.
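As a rough outline of the operation flow in Fig.7, based only on the behaviour described in the text (scan for the target, let the camera track it, let the platform follow the camera, approach and grab), a hypothetical top-level loop is sketched below; all helper names and thresholds are placeholders, not part of the original system.

```python
# Hypothetical outline of the operation flow in Fig.7; helper methods and
# thresholds are placeholders for the real hardware interfaces.
SC_MIN = 0.002   # assumed minimum screen fraction for "target present"
R_GRAB = 0.30    # assumed grab distance in metres

def operate(robot):
    while True:
        frame = robot.grab_frame()
        x_err, y_err, sc_im = robot.process_image(frame)  # Section 3 pipeline
        if sc_im < SC_MIN:
            robot.scan()                 # no target on screen: rotate camera to search
            continue
        robot.steer_camera(x_err)        # camera tracking law: null X_err
        robot.steer_platform()           # platform tracking law: null psi_SB
        if robot.range_to_target() <= R_GRAB:
            robot.grab()                 # 7-joint arm picks up the object
            break
        robot.drive_forward()
```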
3. Image Processing for Finding Target
Position Errors
The image processing procedure [10-13] for the target is shown in Fig.8. The steps are: (1) take an image with a size of 320×240; (2) extract the RGB pixel array data of each layer; (3) convert the RGB colour model to the HSI colour model and set the colour of the target object to be selected; (4) filter out the desired colour, separating the part to be selected from the unwanted part as a black-and-white image (shown in Fig.9); (5) calculate the centre of the target $(X_C, Y_C)$, its deviation from the centre of the screen $(X_{err}, Y_{err})$, and the proportion of the screen occupied by the target, $SC_{im}$. The formulas for the centre of the target are listed as follows:
$$X_C = \frac{\sum_i \sum_j i\,f(i,j)}{\sum_i \sum_j f(i,j)}, \quad i = 0{\sim}319;\ j = 0{\sim}239 \qquad (5)$$

$$Y_C = \frac{\sum_i \sum_j j\,f(i,j)}{\sum_i \sum_j f(i,j)}, \quad i = 0{\sim}319;\ j = 0{\sim}239 \qquad (6)$$

where $f(i,j)$ represents the pixel value at position $(i,j)$; a value of 0 represents black and a value of 255 represents white. The deviations from the centre of the screen, $(X_{err}, Y_{err})$, are

$$X_{err} = X_C - X_O; \qquad Y_{err} = Y_C - Y_O \qquad (7)$$

where $(X_O, Y_O) = (160, 120)$. The proportion of the screen occupied by the target is

$$SC_{im} = \frac{\sum_i \sum_j f(i,j)/255}{320 \times 240}, \quad i = 0{\sim}319;\ j = 0{\sim}239 \qquad (8)$$
The $SC_{im}$ value is used to determine whether a target is present. Fig.9 shows the final output when the red/blue ball is selected.
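A compact sketch of steps (1)-(5) and Eqs. (5)-(8) is given below, assuming NumPy/OpenCV (the paper does not name its image library) and using OpenCV's HSV colour space as a stand-in for the HSI model; the hue/saturation bounds for the red ball are illustrative assumptions.

```python
# Sketch of the target-selection pipeline: colour filtering, centroid (Eqs. 5-6),
# screen-centre errors (Eq. 7) and screen-coverage ratio (Eq. 8).
import cv2
import numpy as np

def find_target(bgr_320x240):
    hsv = cv2.cvtColor(bgr_320x240, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))  # illustrative red range
    f = mask.astype(np.float64)                            # f(i,j): 0 or 255

    total = f.sum()
    sc_im = (total / 255.0) / (320 * 240)                  # Eq. (8)
    if total == 0:
        return None                                        # no target on screen

    j_idx, i_idx = np.indices(f.shape)                     # rows j (0..239), cols i (0..319)
    xc = (i_idx * f).sum() / total                         # Eq. (5)
    yc = (j_idx * f).sum() / total                         # Eq. (6)
    x_err, y_err = xc - 160, yc - 120                      # Eq. (7), (X_O, Y_O) = (160, 120)
    return xc, yc, x_err, y_err, sc_im
```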
Fig.8. Data Processing Flow Chart & Parameter
Definition.
Fig.9. RGB to HSI Transformation for Selecting the Red/Blue Ball.
4. System Verifications
The verification of the whole system design is shown in Figs.10-12. Fig.10 shows the approach images from the initial movement to the final grabbing process. Fig.11 shows the internally evaluated data for each system parameter. Fig.12 shows the tracking errors $X_{err}$ and $\psi_{SB}$.
Fig.10. Moving and Scanning to find Target and
Grab Target.
Fig.11. Screen Show of the Moving, Scanning and
Grabbing Process.
Fig.12. Tracking Errors of the Moving, Scanning
and Grabbing Process.
5. Conclusions
In this work, a Hand-Eye-Foot Coordinated system has been developed for an intelligent robot. The developed techniques include (1) rapid image processing and target identification, selecting objects of particular shapes and colours; and (2) target tracking, in which the camera-tracks-target law keeps the camera on the target and the platform-tracks-camera law makes the carrier coincide with the camera's central axis. The proposed techniques have been verified by real tests, which show that the developed system can faithfully simulate the behaviour of a human moving to take an object in the air.
References
[1] WiFi Mobile Robot Development Platform with
High Resolution Pan-Tilt-Zoom Camera,
X80SV Quick Start Guide, Dr. Robot Inc, 2013.
[2] A. Salerno and J. Angeles, “A new family of two-wheeled mobile robots: modelling and controllability,” IEEE Transactions on Robotics, Vol.23, No.1, pp.169-173, 2007.
[3] J. Y. Kim, “Trajectory Generation of a Two-Wheeled Mobile Robot in an Uncertain Environment,” IEEE Transactions on Industrial Electronics, Vol.67, No.7, pp.5586-5594, 2020.
[4] T. Urakubo, K. Tsuchiya and K. Tsujita, “Motion control of a two-wheeled mobile robot,” Advanced Robotics, pp.711-728, 2012.
[5] B. Kocaturk, “Motion Control of Wheeled Mobile Robots,” Interdisciplinary Description of Complex Systems, Vol.13, No.1, pp.41-47, 2015.
[6] D. S. Jang, G. Y. Kim, and H. I. Choi, “Model-based Tracking of Moving Object,” Pattern
Recognition, Vol. 30, No. 6, pp. 999-1008,
1997.
[7] A. Talukder, S. Goldberg, L. Matthies, and A. Ansar, “Real-time detection of moving objects in a dynamic scene from moving robotic vehicles,” Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27-31 Oct. 2003, Vol.2, pp.1307-1313.
[8] S. Hutchinson, G. D. Hager and P. I. Corke, “A Tutorial on Visual Servo Control,” IEEE Transactions on Robotics and Automation, No.5, pp.651-670, 1996.
[9] A. J. Lipton, H. Fujiyoshi, and R. S. Patil, “Moving Target Classification and Tracking from Real-Time Video,” IEEE Workshop on Applications of Computer Vision, pp.8-14, 1998.
[10] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 4/e, Pearson International Edition, NY, 2017.
[11] L. G. Shapiro and G. C. Stockman, Computer Vision, Prentice Hall, Upper Saddle River, NJ, 2001.
[12] T. S. Tsay, “Data Preprocessing Circuit Designs and Analyses for Subsonic Cruise Missile Infrared Image Seeker,” The Open Automation and Control Systems Journal, Vol.12, pp.14-19, 2008.
[13] B. Kaur and J. Bhattacharya, “A Hybrid Approach for Scene Matching used for Autonomous Vehicles,” IEEE International Conference on Communications Workshops, Montreal, QC, Canada, 14-23 June 2021.
This article is published under the terms of the Creative Commons Attribution License 4.0 (Attribution 4.0 International, CC BY 4.0):
https://creativecommons.org/licenses/by/4.0/deed.en_US