This article examines the difficulties of designing laws to regulate the use of Artificial Intelligence (AI) and argues that companies should instead focus on developing effective systems of ethical principles to guide AI development. AI poses unique regulatory challenges: its systems learn continuously, and their algorithmic decisions cannot always be monitored. AI also raises questions of fairness, accountability, and security that differ from those posed by traditional software tools. Framing regulations to address all of these issues is difficult, since lawmaking is generally slow and cannot keep pace with the rapid development of AI. Furthermore, the article argues that while laws can increase transparency, they cannot always guarantee that AI systems are ethical or effective. To deploy AI effectively, companies should therefore develop their own ethical principles and establish methods for monitoring and governing their use of AI.