Archive category

Computer science, instrumentation and automation
Authors: Zhailin A.G., Bekarystankyzy A., Aktanova B.M.

Abstract. Quantum computing is an advanced technology that disrupts traditional methods of computation and poses a major challenge to the cryptographic systems that form the basis of our digital security. This research addresses cryptographic resilience in the quantum age, when public-key algorithms such as RSA and elliptic curve cryptography are compromised by Shor's algorithm, while symmetric primitives additionally lose half of their effective security against Grover's search. The aim of the research is to thoroughly understand the quantum threat model and, through experiments, determine the realistic "cost of quantum safety" for classical and post-quantum cryptographic mechanisms. The methodology combines a systematic literature review with an experiment carried out using a reproducible benchmarking framework (QCCB) that outputs statistical performance estimates (mean, dispersion, and confidence intervals) and machine-readable result artifacts. The experimental findings support a risk-quantified migration viewpoint: (i) the vulnerability window of RSA-2048 and other classical public-key schemes, and (ii) the comparative performance and size characteristics of post-quantum candidates that agree with the NIST standardization. Furthermore, the study introduces a decision-oriented evaluation metric, the Security Cost Index (SCI), which relates target security levels to computational overhead and enables different deployment planning scenarios to be assessed against the existing trade-offs. The paper argues that migration to standardized post-quantum cryptography, evaluated measurably and reproducibly with clear trade-off figures, should be the mainstay of efforts to secure confidentiality, integrity, and authenticity against the long-term "harvest now, decrypt later" risk.
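The QCCB framework itself is not reproduced in this archive entry; the following is a minimal, hypothetical sketch of how per-operation timing statistics (mean, dispersion, 95% confidence interval) and a ratio-style cost index could be computed in Python. It assumes the cryptography package for the classical RSA-2048 baseline, and the SCI formula shown is purely illustrative, not the authors' definition.

import statistics
import time

from cryptography.hazmat.primitives.asymmetric import rsa

def benchmark(op, runs=30):
    """Time an operation repeatedly; return mean, standard deviation, and 95% CI half-width."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    mean = statistics.mean(samples)
    std = statistics.stdev(samples)
    ci95 = 1.96 * std / len(samples) ** 0.5  # normal-approximation confidence interval
    return mean, std, ci95

# Classical baseline: RSA-2048 key generation (the scheme threatened by Shor's algorithm).
rsa_mean, rsa_std, rsa_ci = benchmark(
    lambda: rsa.generate_private_key(public_exponent=65537, key_size=2048)
)
print(f"RSA-2048 keygen: {rsa_mean * 1e3:.1f} ms ± {rsa_ci * 1e3:.1f} ms (95% CI)")

# Illustrative Security Cost Index: overhead of a candidate mechanism relative to the
# baseline, normalized by its target security level in bits (NOT the paper's formula).
def security_cost_index(candidate_mean, baseline_mean, security_bits):
    return (candidate_mean / baseline_mean) / security_bits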

Keywords: Quantum Computing, Post-Quantum Cryptography, NIST FIPS 203/204/205, ML-KEM, ML-DSA, Cryptographic Resilience, Shor’s Algorithm, Grover’s Algorithm, Hybrid Cryptography, Security Migration, HNDL Attack, Lattice-Based Cryptography

Authors: Bazarova M.Zh., Alibekkyzy K., Adikanova S., Bugubayeva A.

Abstract. The article discusses the development of an ontology designed to formalize and support the process of teacher professional development in the context of the digital transformation of education, with an emphasis on STEM-based approaches. The relevance of the research stems from the need for a systematic representation of the professional competencies of 21st-century teachers and for ensuring their measurability in digital educational environments. The aim of the work is to create an ontological model that integrates STEM methods, competencies, indicators of their formation, assessment tools, and forms of control into a single information structure. The study used methods of ontological modeling, formal knowledge description in OWL, and SPARQL queries for data analysis and extraction. As a result, an ontology has been developed that implements the "STEM-method – competence – indicators – tools – forms of control" compliance matrix and supports automated monitoring of teachers' professional development. The scientific novelty of the work lies in the comprehensive formalization of the professional development process based on an ontological approach. The practical significance lies in the possibility of using the ontology in digital professional development platforms, decision support systems, and quality analysis of teacher education.
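The ontology itself is not included in this entry; the sketch below only illustrates how the "STEM-method – competence – indicators – tools – forms of control" matrix could be queried with SPARQL via rdflib. The file name, namespace, and property names are hypothetical placeholders, not the authors' actual schema.

from rdflib import Graph

g = Graph()
g.parse("teacher_pd_ontology.owl")  # assumed OWL export of the ontology

query = """
PREFIX pd: <http://example.org/teacher-pd#>
SELECT ?method ?competence ?indicator ?tool ?control
WHERE {
    ?method     pd:develops      ?competence .
    ?competence pd:hasIndicator  ?indicator .
    ?indicator  pd:measuredBy    ?tool .
    ?tool       pd:usedInControl ?control .
}
"""

# Each result row corresponds to one cell of the compliance matrix.
for row in g.query(query):
    print(*row)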

Keywords: STEM education, teacher training, professional competencies, digital transformation of education, STEM methods, methodological model, competence assessment.

Authors: Adilzhanova S.A., Rakhysh A.Y.

Abstract. This article examines the temporal dynamics of cyber incidents in the Republic of Kazakhstan and provides a comparative analysis with digitally advanced states (the United States and Singapore). The study's methodological basis is the use of econometric tools for time series analysis: the augmented Dickey-Fuller (ADF) test, the Johansen cointegration test, and the vector autoregressive (VAR) model. An analysis of KZ-CERT data (2015–2025) revealed that botnet prevalence is stationary and remains at a consistently high level, while virus and phishing attacks show an upward trend. A comparison with reports from the United States (FBI IC3) and Singapore (CSA) revealed that infrastructural and technical threats predominate in Kazakhstan, whereas developed countries have a high share of social engineering and targeted attacks. Based on calculations performed in Python (pandas, statsmodels), a short-term forecast for 2026–2028 was compiled and scientifically grounded recommendations for improving the national cybersecurity system were provided.
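The KZ-CERT series are not reproduced here; the sketch below only shows the shape of the named pipeline (ADF stationarity testing, VAR fitting, three-step forecast) with pandas and statsmodels. The CSV file and column names are hypothetical.

import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("kz_cert_incidents.csv", index_col="year")  # assumed annual counts
series = df[["botnets", "viruses", "phishing"]]

# Augmented Dickey-Fuller test for the stationarity of each incident series.
for name, col in series.items():
    stat, pvalue, *_ = adfuller(col.dropna())
    print(f"{name}: ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")

# A Johansen cointegration test (statsmodels.tsa.vector_ar.vecm.coint_johansen)
# would normally precede the choice between VAR and VECM; it is omitted here.

# Fit a VAR model and produce a three-step-ahead forecast (e.g. 2026-2028).
model = VAR(series).fit(maxlags=2, ic="aic")
forecast = model.forecast(series.values[-model.k_ar:], steps=3)
print(forecast)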

Keywords: cybersecurity, mathematical modeling, VAR model, ADF test, botnets, phishing, comparative analysis.

Authors: Aitim A.

Abstract. A rule-based approach is presented for the morphological analysis and generation of Kazakh, a highly agglutinative and morphologically complex language. Computational modeling of Kazakh morphology demands an exact and methodical approach due to the language's extensive use of affixation and phonological alternations such as vowel harmony and consonant mutation. The core technology is finite-state transducers (FSTs), which provide both formal rigor and computational efficiency for faithfully capturing the regular patterns of word formation. The system has two main components: a morphological generator that builds well-formed surface forms from abstract morphological representations, and a morphological analyzer that decomposes surface word forms into a root and affixes with the associated grammatical properties. For nominal and verbal paradigms covering tense, mood, aspect, person, number, and case, the FST architecture encodes morphotactic rules, phonological constraints, and affix ordering. To support the transducer-based analysis, a comprehensive lexicon of Kazakh lemmas is constructed and organized by part of speech. The handcrafted morphological rules cover both inflectional and derivational morphology and reflect the linguistic structure of the language. Evaluation on a manually annotated corpus of modern Kazakh texts shows high accuracy in both analysis and generation tasks.
The resulting tool serves as a basic component for downstream natural language processing applications such as part-of-speech tagging, syntactic parsing, and machine translation. Released as an open-source module to enable wider use and further research in Kazakh computational linguistics, the system contributes to the development of language technology for low-resource languages.
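The FST grammar itself is not part of this entry; the fragment below is a greatly simplified plain-Python illustration of the kind of phonological conditioning (vowel harmony and consonant-sensitive allomorph selection) that the transducer rules must encode, shown for the plural suffix only. It is not the authors' system and ignores many real alternations.

BACK_VOWELS = set("аоұыу")
FRONT_VOWELS = set("әеіөү")
LIQUIDS_GLIDES = set("лруй")  # stems ending in these take -лар/-лер
VOICED = set("мнңзж")         # stems ending in these take -дар/-дер

def plural(stem: str) -> str:
    vowels = BACK_VOWELS | FRONT_VOWELS
    last_vowel = next((c for c in reversed(stem) if c in vowels), "")
    front = last_vowel in FRONT_VOWELS
    final = stem[-1]
    if final in vowels or final in LIQUIDS_GLIDES:
        onset = "л"
    elif final in VOICED:
        onset = "д"
    else:                     # voiceless finals and remaining consonants take -тар/-тер
        onset = "т"
    return stem + onset + ("ер" if front else "ар")

for word in ["бала", "кітап", "мектеп", "қыз", "үй"]:
    print(word, "→", plural(word))  # балалар, кітаптар, мектептер, қыздар, үйлер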

Keywords: Kazakh language, morphological analysis, morphological generation, finite-state transducers, agglutinative languages, natural language processing, rule-based systems.

Authors: Alkhanova G.A., Alimbekova I.A., Zhuzbaev S.S., Juzbayeva B.G.

Abstract. This paper presents a survey of the most significant machine learning methods and models as of the end of 2025 and proposes their systematic classification. Key trends from leading international conferences, including NeurIPS 2025, ICML 2025, and ICLR 2025, are analyzed. Special attention is given to State Space Models, Offline and Counterfactual Reinforcement Learning, Federated and Continual Learning, compact and efficient models, and multimodal systems. An analysis of industrial reports indicates the dominance of efficiency-first and privacy-by-design principles. In resource-constrained environments, priority is given to architectures capable of processing long sequences without transmitting raw data. The scientific novelty of this study lies in the development of a new classification model based on performance, energy consumption, security, and privacy metrics, as well as in the experimental validation of the advantages of hybrid approaches. Such models provide a balance between accuracy, speed, and privacy in decentralized and long-term systems. The practical significance of the research lies in offering concrete recommendations for model selection and integration when designing intelligent systems of various scales and application domains.

Keywords: machine learning, State Space Models, Mamba, Federated Learning, Continual Learning, small language models, hybrid architectures.

Authors: Makatov Ye., Razaque A., Makatova A.Ye.

Abstract. Social networks have become complex socio-digital ecosystems exposed to misinformation, phishing, and manipulative influence on users' affect. The topic is timely due to the acceleration of digital risk driven by artificial intelligence (AI) and large-scale automation. The subject of this study is a cognitive protection architecture for social media; the objective is to substantiate a model that integrates perception, interpretation, memory, decision-making, and an ethical filter. The tasks are to: (I) review approaches in behavioral analytics, affective computing, and explainable AI (XAI) relevant to social-media security; (II) develop an architectural framework capable of multimodal processing; (III) specify component roles and interconnections with emphasis on user interaction and transparency; and (IV) demonstrate novelty over rule-based and machine learning (ML) systems via integrated XAI, attention-based contextual modeling, and emotional-semantic analysis. Methods include hierarchical information processing; an integrative threat index T = αE + βB + γS + δC; Bayesian trust updating; ontological reasoning; softmax over the ethically admissible action set (Aeth); feedback-driven adaptation; and privacy-preserving mechanisms (federated learning, differential privacy (DP)). The main results demonstrate architectural coherence and functional feasibility, provide a mapping from threat levels to adaptive responses, and embed XAI interfaces while aligning with the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (EU AI Act) requirements. Conclusions and implications: by coupling cognitive depth with interpretability and privacy, the architecture enhances reliability, personalization, and trust; future work should extend toward multilingual and cross-cultural adaptation, neuro-symbolic integration, and participatory human-in-the-loop training; prototype verification will target phishing, manipulative content, and varied-trust sources with metrics spanning accuracy, false alarms, user trust, compute, and compliance.
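The architecture is described only conceptually in this abstract; the sketch below is a small numerical illustration of three of the named elements: the integrative threat index T = αE + βB + γS + δC, a Beta-Bernoulli style trust update, and a softmax restricted to the ethically admissible action set (Aeth). Component meanings, weights, and action scores are assumed for illustration and do not come from the paper.

import numpy as np

def threat_index(E, B, S, C, alpha=0.3, beta=0.3, gamma=0.2, delta=0.2):
    """Weighted combination of the four threat signals into a single index T."""
    return alpha * E + beta * B + gamma * S + delta * C

def update_trust(successes, failures, observation_ok):
    """Bayesian (Beta-Bernoulli) trust update; returns counts and the posterior mean trust."""
    successes += int(observation_ok)
    failures += int(not observation_ok)
    return successes, failures, successes / (successes + failures)

def choose_action(scores, temperature=1.0):
    """Softmax over the scores of ethically admissible actions only."""
    z = np.asarray(scores, dtype=float) / temperature
    p = np.exp(z - z.max())
    return p / p.sum()

T = threat_index(E=0.7, B=0.4, S=0.6, C=0.2)
probs = choose_action([T, 0.5 * T, 0.1])  # e.g. {warn user, downrank content, ignore}
print(f"T = {T:.2f}, action distribution = {np.round(probs, 3)}")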

Keywords: cognitive security architecture; multimodal perception; affective computing; explainable artificial intelligence; threat ontology; ethical decision-making; federated learning.

Authors: Nazyrova A.Ye., Zhumabayeva A.S., Zhartybayeva M.G., Iskakov Y.K., Lamasheva Zh.B.

Abstract. Urban growth is making it increasingly difficult to plan land use in a way that is beneficial for the environment. This study examines how land use changes from 2020 to 2040 using a combined modeling approach that couples agent-based simulation with machine learning. Changes are modeled for three main land types: residential, forest, and agricultural. The model also includes key decision factors such as proximity to roads, the land productivity index, previous land use, and land-use change in neighboring cells. A feature importance analysis shows that proximity to roads is the most important factor in land-use decisions, followed by land productivity. Of the machine learning models tested, the random forest classifier performed best, with an accuracy of 89.3%, a precision of 0.91, and a recall of 0.88, outperforming decision tree and neural network models. The results indicate that residential land use is likely to expand at the expense of forest area, whereas agricultural land is likely to grow only slightly. These findings demonstrate the usefulness of hybrid modeling approaches for predicting spatial change and for informing policy decisions that balance urban growth with environmental protection. Combining geographical data and historical trends in predictive modeling provides a strong framework for land management and urban planning strategies.
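The underlying land-use data are not part of this entry; the sketch below only illustrates the machine-learning half of the described pipeline, i.e. a random forest classifier over the listed decision factors with a feature-importance readout, using scikit-learn. Feature and file names are hypothetical, and the reported scores (accuracy 89.3%, precision 0.91, recall 0.88) are the authors' results, not outputs of this sketch.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("land_use_cells.csv")  # assumed per-cell training table
features = ["dist_to_road", "productivity_index", "previous_use", "neighbor_change"]
X, y = df[features], df["land_use_2040"]  # classes: residential / forest / agricultural

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)

pred = rf.predict(X_te)
print("accuracy: ", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro"))
print("recall:   ", recall_score(y_te, pred, average="macro"))

# Feature-importance analysis (the abstract reports road proximity as the dominant factor).
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")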

Keywords: agent-based modeling, machine learning, policy assessment, agricultural decision-making, sustainability

Authors: Ismukhamedova A., Belginova S., Bakanova A., Rysbayev T., Khalimov A.

Abstract. This article presents the design of a web-based medical decision support system (CBSA) intended to predict the risk of death in ICU patients. The proposed solution combines modern machine learning approaches with an asynchronous web architecture and an intelligent interactive interface. The system follows a microservice approach built on Django and WebSocket channels, which provides a responsive user interface and allows a large number of concurrent connections to be handled in real time. Clinical data from the MIMIC-IV dataset served as the basis for training the analytical core of the system, which includes a multi-stage data processing pipeline with missing-value imputation, feature engineering, and ensemble modeling based on LightGBM gradient boosting. The experimental results showed that the model achieves high predictive performance (AUC-ROC 0.982) while maintaining well-calibrated probabilistic estimates. Interpreting the predictions with the SHAP method increased clinicians' confidence and highlighted the key clinical factors. Special attention is paid to the support of multimodal input data, such as medical documents in PDF and Excel formats as well as text messages, which makes the system more suitable for clinical operations.
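MIMIC-IV cannot be redistributed here; the sketch below only outlines the analytical core named in the abstract, i.e. a LightGBM classifier for ICU mortality with SHAP-based interpretation. The engineered feature file and column names are hypothetical, and the full preprocessing pipeline (missing-value imputation, feature engineering) is assumed to have run beforehand.

import lightgbm as lgb
import pandas as pd
import shap
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_parquet("mimic_iv_icu_features.parquet")  # assumed engineered feature table
X, y = df.drop(columns=["hospital_expire_flag"]), df["hospital_expire_flag"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("AUC-ROC:", roc_auc_score(y_te, proba))  # the paper reports 0.982 on its own split

# SHAP values show which clinical factors drive each individual risk estimate.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)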

Keywords: MIMIC-IV, web application architecture, Django channels, machine learning, mortality prediction, interpretability of models, WebSocket, medical informatics, clinical decision support system.

Authors: Nessipkaliyev U.E., Sailaukyzy Zh., Khassenova Z.T., Bigaliyeva A.Z., Khamitov D.R.

Abstract. Reliable data transmission in radio communication systems largely depends on the efficiency of error correction algorithms. For low-power and real-time radio systems, the use of decoding methods with reduced computational complexity and optimized hardware resource utilization is of particular importance. In this context, multi-threshold decoding algorithms represent a promising alternative to conventional iterative decoding techniques. The purpose of this paper is to investigate and analyze the hardware implementation of multi-threshold decoding algorithms for radio communication channels. The object of the study is an error correction system comprising encoder and decoder units implemented on a programmable logic device. The research methodology includes hardware modeling, experimental verification, and comparative performance analysis.
The proposed multi-threshold decoding algorithm was implemented on an Altera Cyclone IV EP4CE6E22C8N FPGA platform and tested under laboratory conditions. Experimental results demonstrate that after 20 iterations, the bit error probability is reduced to the level of 10⁻⁸. A comparative analysis with LDPC and Turbo codes shows that the proposed solution requires approximately 40% fewer FPGA resources and achieves lower power consumption. The obtained results confirm that multi-threshold decoding is an efficient and resource-saving solution for modern radio communication systems. The proposed approach is well suited for low-power and real-time applications, providing a favorable balance between decoding performance and hardware complexity.
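The FPGA decoder itself cannot be reproduced in an abstract; the fragment below is only a software Monte Carlo sketch of how decoded bit-error-rate figures of a threshold (majority-logic) decoder can be estimated, using a simple repetition code over a binary symmetric channel as a stand-in for the authors' multi-threshold scheme. All parameters are illustrative.

import random

def ber_repetition_threshold(p_channel, n_rep=7, n_bits=200_000):
    """Estimate the decoded BER: each bit is repeated n_rep times over a BSC with error
    probability p_channel and decoded by a majority (threshold) vote."""
    errors = 0
    for _ in range(n_bits):
        flips = sum(random.random() < p_channel for _ in range(n_rep))
        if flips > n_rep // 2:  # the threshold is exceeded, so the decoded bit is wrong
            errors += 1
    return errors / n_bits

for p in (0.05, 0.02, 0.01):
    print(f"channel p = {p}: decoded BER ≈ {ber_repetition_threshold(p):.2e}")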

Keywords: decoding, channel coding, multi-threshold decoding, error control coding, programmable logic integrated circuits, FPGA, radio communication, telecommunications

Authors: Kabdullin M., Naizabayeva L., Kabdullin A., Zhonkeshova A.

Abstract. This paper examines methods for the automatic analysis of biomedical images in cardiology using machine learning techniques. The relevance of the study is determined by the need to improve the accuracy of cardiovascular disease diagnosis through automated processing of heart and capillary images. The study focuses on analyzing data obtained from digital microscopes and electrocardiographs, emphasizing the identification of key diagnostic features. The proposed approach includes image preprocessing, noise removal, feature extraction, and classification based on principal component analysis (PCA) and neural network models. The preprocessing phase involves image filtering, segmentation, and data normalization. The study employs machine learning classification algorithms and deep learning techniques adapted for medical image analysis. Performance evaluation criteria and training parameters are examined to enhance diagnostic efficiency and ensure model generalization. Particular attention is paid to the biological safety aspects related to biomedical data processing, including personal data protection and classification accuracy. The study also evaluates the robustness of different models to variations in image quality and external factors. Additionally, it discusses the integration of machine learning-based image analysis with medical decision support systems for improved diagnostic precision. The paper analyzes the limitations of existing algorithms and suggests directions for their further improvement, including adaptation to different types of data and complex clinical scenarios. Future research perspectives include the optimization of feature extraction methods, refinement of classification algorithms, and the development of hybrid models that combine multiple approaches to improve diagnostic accuracy. Thus, the presented review of machine learning methods and biomedical image analysis algorithms identifies the most effective approaches for automated cardiovascular disease diagnosis and highlights the prospects for further development of intelligent medical systems.
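The imaging data are not part of this entry; the sketch below only illustrates the feature-reduction and classification stage described in the abstract, i.e. principal component analysis followed by a small neural-network classifier in scikit-learn. File and label names are hypothetical, and the preprocessing steps (filtering, segmentation, normalization) are assumed to have produced the feature matrix already.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("capillary_image_features.npy")  # assumed per-image feature vectors
y = np.load("diagnosis_labels.npy")          # assumed labels (pathology / norm)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),  # keep the components explaining 95% of the variance
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1),
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))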

Keywords: machine learning, artificial intelligence, neural networks, biomedical image processing, cardiology.

Authors: Kereev A.K., Zhaylybayeva A.O., Tashimova A.K., Kaparova L.Y., Umirzakova B.G.

Abstract. An important task for autonomous robots is to navigate safely in unfamiliar environments, potentially using computer vision to detect and recognize obstacles. Vision-based control systems have been developed for several years. Some rely on artificial landmarks, while more advanced systems make use of natural landmarks. The latter approach is preferable when a robot must operate in real, unstructured environments.
In the field of autonomous robot navigation, which includes map building, path planning, and self-localization, this work develops the concept of a simple autonomous agent that relies exclusively on visual information. The integrated navigation system reproduces certain functions of natural systems: it requires minimal prior knowledge and limited onboard computation and operates without an omnidirectional field of view. Since the goal is to move the robot across the floor while avoiding obstacles and people, the camera is mounted on top of the robot in a fixed forward-facing position. The article focuses on one of the fundamental tasks of image processing: detecting the boundaries of objects in the observed scene. The aim of the research is to study contour detection algorithms based on preliminary image filtering and to compare the proposed approaches with well-known edge detectors such as Sobel, Canny, and Laplacian of Gaussian (LoG). Preliminary filtering is used to suppress image noise and enhance edges. The scientific novelty of the work lies in the development and experimental evaluation of a contour detection algorithm that incorporates a pre-filtering stage based on contrast enhancement, the Kalman filter, and Monte Carlo methods, which increases the robustness of video processing for mobile robots operating in noisy environments. The Sobel, Canny, and LoG algorithms were comprehensively analyzed and compared using a set of metrics, including the number of lost pixels, mean squared error, normalized MSE, and the structural similarity index, providing a deeper understanding of their effectiveness under different noise conditions.
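The video data and the Kalman/Monte Carlo pre-filter are not given in this entry; the sketch below only shows the comparison framework for the named edge detectors (Sobel, Canny, Laplacian of Gaussian) with MSE and structural similarity computed against a reference edge map, using OpenCV and scikit-image. A plain Gaussian blur stands in for the paper's pre-filtering stage, and the file names are hypothetical.

import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference_edges.png", cv2.IMREAD_GRAYSCALE)  # ground-truth contours

blurred = cv2.GaussianBlur(frame, (5, 5), 1.0)  # noise suppression before edge detection

sobel = cv2.convertScaleAbs(cv2.magnitude(cv2.Sobel(blurred, cv2.CV_64F, 1, 0),
                                          cv2.Sobel(blurred, cv2.CV_64F, 0, 1)))
canny = cv2.Canny(blurred, 50, 150)
log = cv2.convertScaleAbs(cv2.Laplacian(blurred, cv2.CV_64F, ksize=5))

for name, edges in (("Sobel", sobel), ("Canny", canny), ("LoG", log)):
    mse = np.mean((edges.astype(float) - reference.astype(float)) ** 2)
    print(f"{name}: MSE = {mse:.1f}, SSIM = {ssim(reference, edges):.3f}")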

Keywords: computer vision, image processing algorithms, Kalman filter, Monte Carlo methods, image segmentation, Laplacian of Gaussian, Sobel and Canny algorithms, real-time video analysis.

Authors: Bizhanov D., Zhetenbayev N., Maksut A., Kotov S.

Abstract. This paper presents the design and experimental evaluation of an exoskeleton device intended for the rehabilitation of the elbow joint’s pronation/supination motion. Accurately and safely performing this type of movement remains one of the main technical challenges in rehabilitation exoskeletons, as it involves complex biomechanical interactions and rotational motion of the forearm bones. Within the scope of the project, a CAD model with two degrees of freedom (DOF) was developed in SolidWorks. A physical prototype was then fabricated, and laboratory testing was conducted for one DOF – the pronation/supination motion. The accuracy of motion execution was recorded using an IMU sensor, and control performance was evaluated.
The proposed device is intended for use in the rehabilitation process of patients with neurological disorders, including restoring neuromotor functions after stroke and correcting elbow joint contractures. Experimental results demonstrate the suitability of the device for rehabilitation applications.

Keywords: elbow exoskeleton, pronation-supination, rehabilitation robotics, CAD model, IMU sensor, 2 DOF exoskeleton.
