International Journal of Applied Science and Engineering
Published by Chaoyang University of Technology

A New Approach for Handling the Iris Data Classification Problem

Shyi-Ming Chen a,* and Yao-De Fang b

a Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan, R.O.C.
b Department of Electronic Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan, R.O.C.




ABSTRACT


In this paper, we present a new method for handling the Iris data classification problem based on the distribution of the training instances. First, we identify the two attributes of the Iris data that are most suitable for classification, i.e., the two attributes whose value distributions for the three species (Setosa, Versicolor, and Virginica) have the least overlap among the training instances. We then calculate, for each species, the average values and the standard deviations of these two useful attributes, together with the overlapping areas formed between species by their values. To classify a testing instance, we calculate the difference between its values of these two attributes and those of each species of the training instances, and we choose the species with the smallest difference as the classification result. The proposed method achieves a higher average classification accuracy rate than the existing methods.
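As a rough illustration of the procedure described above, the following Python sketch classifies Iris instances using per-species attribute statistics. It is only an approximation under stated assumptions: it takes petal length and petal width as the two useful attributes (the pair whose per-species value distributions overlap least), uses the Iris data bundled with scikit-learn for convenience, and scores a testing instance by a standard-deviation-scaled difference from each species' average attribute values; the paper's actual difference measure, which also involves the overlapping areas between species, is simplified here.

import numpy as np
from sklearn.datasets import load_iris

# Iris data; columns 2 and 3 are petal length and petal width, assumed here to
# be the two "useful" attributes whose per-species distributions overlap least.
X, y = load_iris(return_X_y=True)
cols = [2, 3]

# Split the data into training and testing instances.
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
train, test = idx[:120], idx[120:]

# Per-species average values and standard deviations of the two attributes,
# computed from the training instances.
means = {c: X[train][y[train] == c][:, cols].mean(axis=0) for c in range(3)}
stds = {c: X[train][y[train] == c][:, cols].std(axis=0) for c in range(3)}

def classify(instance):
    # Assign the species whose average attribute values differ least from the
    # instance's values; scaling by the standard deviation is an assumption
    # standing in for the paper's overlap-based weighting.
    diffs = {c: np.sum(np.abs(instance[cols] - means[c]) / stds[c]) for c in means}
    return min(diffs, key=diffs.get)

pred = np.array([classify(x) for x in X[test]])
print("average classification accuracy rate:", (pred == y[test]).mean())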


Keywords: Iris data; maximum attribute value; minimum attribute value; standard deviation; average classification accuracy rate.






ARTICLE INFORMATION




Accepted: 2005-02-15
Available Online: 2005-04-03


Cite this article:

Chen, S.-M., Fang, Y.-D. 2005. A new approach for handling the iris data classification problem. International Journal of Applied Science and Engineering, 3, 37–49. https://doi.org/10.6703/IJASE.2005.3(1).37