Regulating the Future: A Comparative Analysis of Pakistan’s Need for EU-Based Cyber Privacy Frameworks
Keywords:
Digital rights, Cyber-policing, AI-enabled law enforcement, Algorithmic bias, Pakistan data protection
Abstract
Cyber technologies now permeate Pakistan's governance system, commercial practices, law enforcement, and daily life, delivering substantial efficiency gains while also posing serious threats to privacy, data security, and other essential rights. In the absence of a comprehensive, enforceable AI regulatory framework grounded in rights-based protections, existing safeguards remain fragmented, largely aspirational, and weak. By contrast, the coherent, risk-oriented regulatory model implemented in the European Union through the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act) rests on the principles of human dignity, accountability, and democratic control.
The article pursues three research objectives: first, to doctrinally examine Pakistan's current and proposed laws on data protection and AI-based decision-making; second, to comparatively explore how the EU GDPR and AI Act respond to analogous regulatory challenges through binding obligations, institutional oversight, and risk classification; and third, to normatively evaluate Pakistan's AI governance framework against international human rights standards, specifically the International Covenant on Civil and Political Rights (ICCPR) and EU fundamental rights standards.
Methodologically, the study combines doctrinal legal analysis, comparative legal research, and normative evaluation of statutory texts, policy documents, judicial principles, and scholarly literature. The analysis reveals considerable regulatory gaps in Pakistan: the absence of enforceable limits on automated decision-making, insufficient transparency and accountability requirements for AI developers and deployers, ineffective redress for rights infringements, and the lack of an independent regulatory body with effective powers. The EU model, by contrast, combines data protection principles, including lawfulness, data minimization, purpose limitation, transparency, and accountability, with AI-specific risk-based regulation and strong institutional enforcement mechanisms.
License
Copyright (c) 2025 Scholar Insight Journal

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain the copyright of their work. All articles in Scholar Insight Journal are published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0).
This license permits anyone to read, download, copy, distribute, print, search, or link to the full texts of the articles, and to use them for any other lawful purpose, without asking prior permission from the author(s) or the publisher, provided proper attribution is given to the original work.