Abstract. Social networks have become complex socio-digital ecosystems exposed to misinformation, phishing, and manipulative influence on users’ affect. The topic is timely given the acceleration of digital risk driven by artificial intelligence (AI) and large-scale automation. The subject of this study is a cognitive protection architecture for social media; the objective is to substantiate a model that integrates perception, interpretation, memory, decision-making, and an ethical filter. The tasks are to: (I) review approaches in behavioral analytics, affective computing, and explainable AI (XAI) relevant to social-media security; (II) develop an architectural framework capable of multimodal processing; (III) specify component roles and interconnections with emphasis on user interaction and transparency; and (IV) demonstrate novelty over rule-based and machine learning (ML) systems via integrated XAI, attention-based contextual modeling, and emotional-semantic analysis. Methods include hierarchical information processing; an integrative threat index T = αE + βB + γS + δC; Bayesian trust updating; ontological reasoning; softmax over the ethically admissible action set (Aeth); feedback-driven adaptation; and privacy-preserving mechanisms (federated learning, differential privacy (DP)). The main results demonstrate architectural coherence and functional feasibility, provide a mapping from threat levels to adaptive responses, and embed XAI interfaces while aligning with the requirements of the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (EU AI Act).
Conclusions and implications: by coupling cognitive depth with interpretability and privacy, the architecture enhances reliability, personalization, and trust; future work should extend toward multilingual and cross-cultural adaptation, neuro-symbolic integration, and participatory human-in-the-loop training; prototype verification will target phishing, manipulative content, and varied-trust sources with metrics spanning accuracy, false alarms, user trust, compute, and compliance.
Keywords: cognitive security architecture; multimodal perception; affective computing; explainable artificial intelligence; threat ontology; ethical decision-making; federated learning.
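To make the listed methods concrete, the following is a minimal sketch of the integrative threat index, Bayesian trust updating, and the softmax policy over the ethically admissible action set Aeth. All function names, weight values, and signal values here are illustrative assumptions, not the paper's implementation; weights α, β, γ, δ would be calibrated empirically.

```python
import math

def threat_index(E, B, S, C, alpha=0.4, beta=0.3, gamma=0.2, delta=0.1):
    """Integrative threat index T = alpha*E + beta*B + gamma*S + delta*C.
    E: emotional/affective risk, B: behavioral anomaly, S: semantic/content risk,
    C: contextual risk -- each assumed normalized to [0, 1].
    The weights are hypothetical placeholders, not values from the paper."""
    return alpha * E + beta * B + gamma * S + delta * C

def bayesian_trust_update(prior, likelihood_if_trusted, likelihood_if_untrusted):
    """Posterior trust in a source after observing evidence e:
    P(trust|e) = P(e|trust)P(trust) /
                 [P(e|trust)P(trust) + P(e|untrust)(1 - P(trust))]."""
    num = likelihood_if_trusted * prior
    return num / (num + likelihood_if_untrusted * (1.0 - prior))

def softmax_policy(scores):
    """Softmax over the ethically admissible action set Aeth.
    `scores` maps each admissible action to a utility; actions rejected by
    the ethical filter are simply absent from the dict before normalization."""
    m = max(scores.values())  # subtract max for numerical stability
    exp = {a: math.exp(s - m) for a, s in scores.items()}
    z = sum(exp.values())
    return {a: v / z for a, v in exp.items()}

# Illustrative scenario: a message with elevated behavioral and semantic risk.
T = threat_index(E=0.6, B=0.7, S=0.8, C=0.3)          # 0.64
trust = bayesian_trust_update(0.5, 0.2, 0.8)          # 0.2
policy = softmax_policy({"warn_user": T,
                         "quarantine": T * (1.0 - trust),
                         "ignore": 1.0 - T})
```

The softmax step illustrates the architectural point that ethically inadmissible actions never receive probability mass: the filter removes them before normalization rather than merely down-weighting them.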