DEVELOPMENT OF AN EXPLAINABLE CYBER THREAT DETECTION SYSTEM FOR THE SOC USING EXPLAINABLE AI

Authors

  • N.N. Jo‘rayev
  • A.Sh. Juraboyev

DOI:

https://doi.org/10.5281/zenodo.18950570

Keywords:

cybersecurity, Explainable AI, XAI, SOC, cyberattack detection, machine learning, network traffic analysis, SHAP, LIME.

Abstract

In recent years, the use of artificial intelligence and machine learning technologies in cybersecurity has expanded considerably. However, because many algorithms operate on a "black box" principle, their decision-making process can be difficult to interpret. This is particularly significant for Security Operations Centers (SOCs), since security analysts must also understand the reasons behind detected threats. This study proposes a model of an explainable cyber threat detection system based on the Explainable Artificial Intelligence (XAI) approach. The system analyzes network traffic using machine learning algorithms and applies explanation mechanisms such as SHAP or LIME to the detected threats. As a result, SOC analysts can quickly identify the root causes of attacks and the factors contributing to them. The findings show that Explainable AI technologies play an important role in increasing the reliability, transparency, and effectiveness of cybersecurity systems.
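As an illustration of the kind of explanation mechanism the abstract refers to, the sketch below implements a minimal LIME-style local explanation in plain NumPy: perturb a flagged network flow, query a black-box scorer, and fit a proximity-weighted linear model whose coefficients rank feature influence. The feature names, the toy scoring function, and the sampling scales are illustrative assumptions, not the system or datasets described in the paper (which uses SHAP or LIME proper).

```python
import numpy as np

# Hypothetical black-box detector: higher score = more likely an attack.
# Toy features: [packets_per_sec, failed_logins, bytes_out]
def black_box_score(X):
    z = 0.02 * X[:, 0] + 1.5 * X[:, 1] + 0.0001 * X[:, 2] - 5.0
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x0 = np.array([150.0, 1.0, 5000.0])        # the flagged flow to explain
scale = np.array([50.0, 1.0, 5000.0])      # assumed per-feature sampling scales

# LIME-style steps: sample perturbations around x0, score them with the
# black box, weight samples by proximity, fit a weighted linear surrogate.
Z = x0 + rng.normal(0.0, 1.0, size=(500, 3)) * scale
y = black_box_score(Z)
d = np.linalg.norm((Z - x0) / scale, axis=1)
w = np.exp(-(d ** 2) / 2.0)                # Gaussian proximity kernel

A = np.hstack([np.ones((len(Z), 1)), (Z - x0) / scale])
sw = np.sqrt(w)[:, None]                   # weighted least squares via sqrt-weights
coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)

for name, c in zip(["packets_per_sec", "failed_logins", "bytes_out"], coef[1:]):
    print(f"{name}: {c:+.3f}")
```

With this toy scorer, `failed_logins` receives the largest positive weight, mirroring its dominant coefficient inside the black box; this is the per-alert attribution an SOC analyst would inspect.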

Author Biographies

N.N. Jo‘rayev


University of Military Security and Defense of the Republic of Uzbekistan,
Tashkent Military District Faculty, Head of the Department of Advanced Military Technologies, Lieutenant Colonel

A.Sh. Juraboyev


University of Military Security and Defense of the Republic of Uzbekistan,
Tashkent Military District Faculty, Senior Lecturer, Lieutenant Colonel

References

Goodfellow I., Bengio Y., Courville A. Deep Learning. – Cambridge: MIT Press, 2016. – 775 p.

Bishop C. M. Pattern Recognition and Machine Learning. – New York: Springer, 2006. – 738 p.

Géron A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. – 2nd ed. – Sebastopol: O'Reilly Media, 2019. – 851 p.

Chollet F. Deep Learning with Python. – 2nd ed. – New York: Manning Publications, 2021. – 504 p.

Molnar C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. – Munich: Leanpub. – 350 p.

Lundberg S. M., Lee S. I. A Unified Approach to Interpreting Model Predictions // Advances in Neural Information Processing Systems. – 2017. – Vol. 30. – P. 4765–4774.

Ribeiro M. T., Singh S., Guestrin C. "Why Should I Trust You?" Explaining the Predictions of Any Classifier // Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. – 2016. – P. 1135–1144.

Chen T., Guestrin C. XGBoost: A Scalable Tree Boosting System // Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. – 2016. – P. 785–794.

Ke G. et al. LightGBM: A Highly Efficient Gradient Boosting Decision Tree // Advances in Neural Information Processing Systems. – 2017. – P. 3146–3154.

Prokhorenkova L. et al. CatBoost: Unbiased Boosting with Categorical Features // Advances in Neural Information Processing Systems. – 2018. – P. 6638–6648.

Sommer R., Paxson V. Outside the Closed World: On Using Machine Learning for Network Intrusion Detection // IEEE Symposium on Security and Privacy. – 2010. – P. 305–316.

Sharafaldin I., Lashkari A. H., Ghorbani A. A. Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization (CIC-IDS2017) // Proceedings of the International Conference on Information Systems Security and Privacy (ICISSP). – 2018. – P. 108–116.

Moustafa N., Slay J. UNSW-NB15: A Comprehensive Data Set for Network Intrusion Detection Systems // Military Communications and Information Systems Conference. – 2015. – P. 1–6.

Hinton G. E., Salakhutdinov R. R. Reducing the Dimensionality of Data with Neural Networks // Science. – 2006. – Vol. – P. 504–507.

Saito T., Rehmsmeier M. The Precision–Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets // PLoS ONE. – 2015. – Vol. 10(3).

Fawcett T. An Introduction to ROC Analysis // Pattern Recognition Letters. – 2006. – Vol. 27. – P. 861–874.

Pedregosa F. et al. Scikit-learn: Machine Learning in Python // Journal of Machine Learning Research. – 2011. – Vol. – P. 2825–2830.

McNemar Q. Note on the Sampling Error of the Difference Between Correlated Proportions or Percentages // Psychometrika. – 1947. – Vol. 12. – P. 153–157.

Published

2026-03-01