Clustering in a Wireless Sensor Network (WSN) is one of the methods for minimizing the energy usage of a sensor network, and the design of the network itself can prolong its lifetime. The cluster head in each cluster is an important part of clustering: it acts as an intermediary node between the other sensors and thus helps preserve the lifetime of each sensor node. Sensor nodes are limited by their batteries, which cannot be replaced once the nodes have been deployed. Thus, this paper presents an improved clustering algorithm for a two-tier network, which we name the Multi-Tier Algorithm (MAP). For cluster head selection, a fuzzy logic approach is used, which can minimize the energy usage of the sensor nodes and hence maximize the network lifetime. The MAP clustering approach used in this paper covers an average network area of 100 m × 100 m and involves three parameters that work together to select the cluster head: residual energy, communication cost, and centrality. It is concluded that MAP prolongs the lifetime of the WSN compared with the LEACH and SEP protocols. In future work, the stability of this algorithm can be verified in detail with different data and energy settings.
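As an illustration of the cluster head selection described above, the sketch below scores each node with a single fuzzy rule over the three stated parameters (residual energy, communication cost, centrality) and elects the highest-scoring node; the triangular membership functions, breakpoints, and rule are hypothetical placeholders, not MAP's actual rule base.

```python
# Minimal sketch of fuzzy cluster-head selection; all membership
# breakpoints and the single rule below are hypothetical examples.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def chance(residual_energy, comm_cost, centrality):
    """Fuzzy 'chance' of a node becoming cluster head (inputs in [0, 1]).
    Higher residual energy, lower communication cost, and higher
    centrality raise the chance."""
    energy_high  = tri(residual_energy, 0.4, 1.0, 1.6)  # peak at full energy
    cost_low     = tri(comm_cost,      -0.6, 0.0, 0.6)  # peak at zero cost
    central_high = tri(centrality,      0.4, 1.0, 1.6)
    # Rule: IF energy is high AND cost is low AND centrality is high
    # THEN chance is high (min models AND; the value is used as the score).
    return min(energy_high, cost_low, central_high)

# Elect the node with the highest fuzzy chance as cluster head.
nodes = {
    "n1": (0.9, 0.2, 0.8),
    "n2": (0.5, 0.1, 0.9),
    "n3": (0.7, 0.6, 0.4),
}
head = max(nodes, key=lambda n: chance(*nodes[n]))
print("cluster head:", head)  # n1: high energy, low cost, high centrality
```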
This paper proposes a 3D object recognition method based on 3D SURF and the derivation of the robot space transformations. In a previous work, a three-fingered robot hand was developed for grasping tasks. The reference position of the robot hand was programmed from predetermined values for grasping two differently shaped objects. That work showed successful grasping, but the hand could not generate the reference position on its own since no external sensor was used; hence, it was not fully automated. Later, 2D Speeded-Up Robust Features (SURF) and a 3D point cloud algorithm were applied to calculate the object's 3D position, and the results showed that the method was capable of recognising the object but unable to calculate its 3D position. Thus, the present study develops 3D SURF by combining images recognised with 2D SURF and a triangulation method. The identified object grasping points are then converted to robot space using the robot's transformation equation, which is derived from the dimensions between the robot and the camera in the workspace. The results support the capability of the SURF algorithm to recognise the target without fail for nine random images, albeit with errors in the 3D position. Meanwhile, the transformation was successful: the calculated object positions are inclined towards the actual measured positions in robot coordinates. However, a maximum error of 3.90 cm was observed due to inaccuracy in SURF detection and human error during manual measurement, which can be addressed by improving the SURF algorithm in future work.
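To make the two geometric steps concrete, the sketch below triangulates a 3D point from a SURF keypoint matched across a rectified stereo pair and maps it into the robot frame with a homogeneous transform; the camera intrinsics, baseline, and camera-to-robot pose are hypothetical placeholders, not the paper's calibration values.

```python
# Minimal sketch of (i) triangulating a matched SURF keypoint and
# (ii) transforming the point into the robot frame. All numeric
# parameters are illustrative assumptions.
import numpy as np

def triangulate(uL, vL, uR, f=700.0, cx=320.0, cy=240.0, B=0.10):
    """3D camera-frame point (metres) from the pixel coordinates of the
    same keypoint in the left (uL, vL) and right (uR, vL) images of a
    rectified stereo pair with focal length f, principal point (cx, cy),
    and baseline B."""
    disparity = uL - uR
    Z = f * B / disparity            # depth along the optical axis
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z, 1.0])  # homogeneous camera-frame point

# Camera-to-robot transform as a 4x4 homogeneous matrix; in the paper it
# is derived from measured dimensions between the robot and the camera.
T_robot_cam = np.array([
    [ 0.0,  0.0, 1.0, 0.30],  # hypothetical pose: camera z -> robot x
    [-1.0,  0.0, 0.0, 0.00],
    [ 0.0, -1.0, 0.0, 0.50],
    [ 0.0,  0.0, 0.0, 1.00],
])

p_cam = triangulate(uL=400.0, vL=260.0, uR=330.0)  # disparity of 70 px
p_robot = T_robot_cam @ p_cam
print("grasp point in robot frame (m):", p_robot[:3])
```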