Faced with the rapid global development of AI and blockchain technologies, policymakers are empowered to propose laws that protect fundamental human rights while seizing the opportunities and addressing the challenges, even threats, posed by AI applications in everyday life. Such regulation aims to set future-proof, innovation-friendly standards, draft legal frameworks, develop new global norms, and harmonize landmark rules so that AI can be trusted: a force for good in society that works for people rather than posing a clear threat to them. Democracy, the rule of law, safety and security, transparency, and trust, alongside the protection of fundamental human rights, are at stake. AI systems that manipulate human behavior to circumvent users' free will, or that enable 'social scoring' by governments or pro-government parliamentary majorities, demonstrate a potential danger, a clear threat, and an unacceptable risk, and should therefore be banned.

Several countries worldwide (Australia, Brazil, Canada, China, India, Japan, Korea, New Zealand, Saudi Arabia, Singapore, the United Kingdom, and the USA) have adopted a proactive approach toward AI regulation. Rather than introducing specific legislation to regulate its growth, these countries aim to implement essential policies and infrastructure measures that cultivate a robust AI sector. In the absence of comprehensive legislation, their governments have published legal frameworks, guidelines, roadmaps, and white papers that depict the future of possible AI regulation in these countries and help manage AI usage responsibly. Finally, the European Union joined 'the club' with the political agreement reached on December 11, 2023, between the European Parliament and the Council on the Artificial Intelligence Act, the first-ever comprehensive legal framework on AI globally, proposed by the Commission in April 2021.