The Problem of Control Over Artificial Intelligence (AI) in Western Political Discourse

Leonova Olga G

Control over artificial intelligence (AI) is essential to mitigating the risks it poses. Almost all politicians, civil society leaders, academics and independent experts agree that AI needs to be controlled, but there is no unified approach to solving this problem.
Attempts to solve this problem run into the following obstacles: AI development is decentralized and driven by private companies; the major powers leading in AI differ in how they understand the need for control over these technologies and how they would implement it; each new AI model requires a dedicated and flexible control mechanism adapted specifically to that model; and AI developers are becoming geopolitical actors, wielding power in areas that previously belonged to nation-states and acting, in effect, as sovereign entities.
Many countries are attempting to address this problem, but none of the proposed initiatives is capable of creating a system of effective control over AI. The main goal of such control should be to identify and reduce risks to the stable development of the global community, which requires the creation of institutional structures and the definition of common basic characteristics and principles of control.
A concept of control over the development and distribution of AI can rest on the following principles: precaution, flexibility and adaptability, global scope, inclusiveness, and feedback.
Implementing control mechanisms requires institutionalization, i.e. the creation of supranational institutions to carry out this process. Establishing an AI control system opens the possibility of moving to a new level: managing the process of developing, distributing and using artificial intelligence.
Keywords: artificial intelligence, risks, problems, characteristics, mechanism, principles.