Abstract
This article examines the evolution of definitions of artificial intelligence (AI) and the challenges of applying AI in recruitment, particularly with respect to protecting candidates’ privacy. A key issue is the imprecision of AI definitions, which undermines the effectiveness of regulation and the establishment of ethical standards: inaccurate definitions can produce either excessive or insufficient regulation, capturing technologies unrelated to AI or overlooking advanced, high-risk systems. Moreover, the use of AI in recruitment raises privacy concerns that go beyond traditional data protection. To address these challenges, the concept of hybrid privacy is proposed, integrating several dimensions of privacy protection: informational, physical and virtual accessibility, and decision-making. The evolution of AI definitions, from Turing’s test (1950) to contemporary approaches, illustrates how the understanding of this technology has changed and what this implies for ethics and legal regulation. Current AI definitions and regulations require further refinement to keep pace with the technology’s rapid development, as imprecise definitions create interpretive problems that hinder the effective implementation of the law. The article concludes that flexible regulation and well-developed ethical standards are needed to protect candidates’ privacy while also taking employers’ interests into account. Crucial here is a dynamic balance between rigid and flexible rules, one that allows the law to adapt to the evolving nature of AI technology and the ethical challenges it raises.