International Journal of Applied Science and Engineering
Published by Chaoyang University of Technology

Special issue: The 10th International Conference on Awareness Science and Technology (iCAST 2019)

Rung-Ching Chen1*, Christine Dewi1,3, Wei-Wei Zhang1,2, Jia-Ming Liu1, Su-Wen Huang1,4*

1 Department of Information Management, Chaoyang University of Technology, Taiwan, R.O.C.
2 Ningxia University, Xixia District, Yinchuan City, Ningxia, China
3 Information Technology, Satya Wacana Christian University, Central Java, Indonesia.
4 Taichung Veterans General Hospital, Taiwan, R.O.C.




Due to the rise of the Internet of Things (IoT), more and more devices can connect to the Internet, and the large amounts of data they collect can serve many different applications. IoT hardware is used not only in industry but also in smart homes. The smart home covers broad topics, including remote control of home appliances, human presence sensing, temperature-controlled air conditioning, and security monitoring. For all of these applications, the human-machine interface is essential. Few gesture recognition systems have been deployed in real applications, because achieving a high accuracy rate under real-world conditions is complicated. The gesture-control products commonly available on the market are gesture-control sensor boards, which detect changes in an electric field to determine the gesture. Their limitation is that they require close-range operation and suffer from critical-point sensitivity. In this paper, we combine a gesture control board with gesture image recognition to perform double-authentication gesture recognition. A Raspberry Pi serves as the control center and integrates a Philips HUE smart light bulb to form the gesture recognition system. The results show that the proposed method achieves a gesture recognition accuracy of 90%, which is higher than that of the SVM method.
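To illustrate the control path from a recognized gesture to the HUE bulb, the following Python sketch maps a gesture label to a light state and sends it to the Hue bridge's REST API, which a Raspberry Pi can reach over the local network. The gesture names, the gesture-to-state mapping, and the bridge address are our assumptions for illustration, not the paper's actual command set.

```python
import json
import urllib.request

def gesture_to_hue_state(gesture: str) -> dict:
    """Map a recognized gesture label to a Philips Hue light state.

    The gesture vocabulary below is hypothetical; a real system would
    use whatever labels the recognition model emits.
    """
    states = {
        "swipe_up":   {"on": True, "bri": 254},  # full brightness
        "swipe_down": {"on": True, "bri": 64},   # dimmed
        "wave":       {"on": True},               # switch on
        "fist":       {"on": False},              # switch off
    }
    if gesture not in states:
        raise ValueError(f"unrecognized gesture: {gesture}")
    return states[gesture]

def send_to_bridge(bridge_ip: str, username: str,
                   light_id: int, state: dict) -> None:
    """PUT the state to the Hue bridge's v1 REST API."""
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    req = urllib.request.Request(
        url, data=json.dumps(state).encode(), method="PUT")
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

# Example (requires a real Hue bridge and API username on the LAN):
# send_to_bridge("192.168.1.10", "<api-username>", 1,
#                gesture_to_hue_state("fist"))
```

Separating the gesture-to-state mapping from the network call keeps the mapping testable offline and lets the recognition stage (sensor board or image model) remain agnostic to the lighting backend.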

Keywords: Raspberry Pi, Gesture recognition, Deep learning.






Received: 2020-03-29
Revised: 2020-06-06
Accepted: 2020-07-28
Available Online: 2020-09-01

Cite this article:

Chen, R.C., Dewi, C., Zhang, W.W., Liu, J.M., Huang, S.W. 2020. Integrating gesture control board and image recognition for gesture recognition based on deep learning. International Journal of Applied Science and Engineering, 17, 237–248.

  Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.