International Journal of Applied Science and Engineering
Published by Chaoyang University of Technology

Rung-Ching Chen*, Vani Suthamathi Saravanarajan, Hsiu-Te Hung

Department of Information Management, Chaoyang University of Technology, Taichung City, Taiwan


 



ABSTRACT


With the rapid development of science and technology, machine learning and artificial intelligence are increasingly applied to transportation, logistics, and the home. Pet monitoring, in particular, has become very popular in recent years. In this study, a real-time monitoring system for home pets is developed using a Raspberry Pi. The proposed method consists of a Raspberry Pi-based YOLOv3-Tiny identification system for rapid detection and better bounding-box prediction of cat behavior. Based on the YOLOv3-Tiny method, the following fine-tuning is implemented: (1) the images collected with mobile phones are uniformly cropped to 416 × 416 pixels; (2) the images are captured in different periods and randomly rotated by up to plus or minus 20 degrees to make the dataset more robust for training; (3) one thousand four hundred pictures of the cat's movements in the room are labeled and used for training. The same dataset is also used to train the YOLOv3 model. According to the input image, the output is categorized into one of six cat actions: sleeping, eating, sitting down, walking, going to the toilet, and searching a trash can. The average accuracy of both models is 98%. Depending on environmental constraints such as speed and memory consumption, either model can be used for image recognition. The Raspberry Pi system continuously captures images and instantly sends a message to the registered mobile phone as a preventive measure if the cat stays in the toilet for too long or flips through the trash can.
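The preprocessing described above (uniform resizing to 416 × 416 pixels and random rotation within ±20 degrees) can be sketched as follows. This is an illustrative Python/Pillow sketch, not the authors' code; the folder names and the use of Pillow are assumptions made for the example.

```python
# Illustrative sketch (not the authors' implementation) of the preprocessing in the
# abstract: resize every photo to 416 x 416 pixels and apply a random rotation of up
# to +/- 20 degrees. Pillow and the folder names below are assumptions.
import random
from pathlib import Path

from PIL import Image

INPUT_DIR = Path("cat_photos")     # hypothetical folder of raw mobile-phone photos
OUTPUT_DIR = Path("train_images")  # hypothetical folder for processed training images
OUTPUT_DIR.mkdir(exist_ok=True)

for img_path in sorted(INPUT_DIR.glob("*.jpg")):
    img = Image.open(img_path).convert("RGB")
    # Uniform 416 x 416 input size used by YOLOv3 / YOLOv3-Tiny.
    img = img.resize((416, 416))
    # Random rotation in [-20, +20] degrees to make the dataset more robust.
    img = img.rotate(random.uniform(-20.0, 20.0))
    img.save(OUTPUT_DIR / img_path.name)
```

In a full pipeline, the bounding-box annotations would also have to be transformed together with the rotated images before training the YOLOv3 and YOLOv3-Tiny models.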


Keywords: Raspberry Pi, Pattern recognition, YOLO model, Deep learning.





ARTICLE INFORMATION


Received: 2021-05-12
Revised: 2021-06-28
Accepted: 2021-07-07
Available Online: 2021-09-01


Cite this article:

Chen, R.-C., Saravanarajan, V.S., Hung, H.-T. 2021. Monitoring the behaviours of pet cat based on YOLO model and raspberry Pi. International Journal of Applied Science and Engineering, 18, 2021137. https://doi.org/10.6703/IJASE.202109_18(5).016

  Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.