Beyond Generation: Addressing Key Privacy and Security Risks in Generative AI

Author: Dr. Meghna Bhatia

HOD, Department of Information Technology

S.I.E.S. (Nerul) College of Arts, Science and Commerce (Autonomous), Navi Mumbai, Maharashtra

Published in: Journal of Computer Science and Engineering in Innovations and Research (ISSN No: 3049-1762 online)

Publication Date: June 15, 2025

Pages: 55–60

Abstract

Generative Artificial Intelligence (GenAI) has emerged as a transformative paradigm capable of producing human-like text, images, audio, and video. Its adoption is accelerating at an unprecedented scale: the global GenAI market was valued at USD 13.7 billion in 2023 and is projected to exceed USD 1 trillion by 2032, a compound annual growth rate of nearly 36%. Similarly, the user base is forecast to rise from 115 million in 2020 to over 950 million by 2030, reflecting rapid mainstream integration across sectors. Applications of GenAI now span healthcare, education, finance, transportation, and the creative industries, driving innovation but simultaneously exposing new vulnerabilities. Key risks include adversarial attacks, disinformation through deepfakes, intellectual property infringement, and privacy leakage. This study adopts a mixed-methods approach, combining statistical analysis, thematic review, and case studies of ChatGPT, Google Bard, and DALL·E, to investigate both the opportunities and threats posed by GenAI. The findings confirm GenAI's dual role: a catalyst for productivity and creativity, and a potential multiplier of cyber threats. The research underscores the urgent need for privacy-preserving architectures, ethical frameworks, and regulatory safeguards to enable the safe and sustainable deployment of GenAI.

Keywords

Generative AI, Security, Privacy, Adversarial Attacks, ChatGPT, DALL·E, Deepfakes

I. Introduction

Generative AI (GenAI) refers to machine learning systems that learn patterns in existing data to produce new, realistic outputs such as text, images, audio, and video. Unlike predictive AI, which classifies or forecasts outcomes, GenAI creates novel synthetic instances. Well-known examples include ChatGPT for text, DALL·E for images, and StyleGAN for high-resolution image synthesis.

GenAI adoption is proceeding at an unprecedented pace. The industry's global revenue exceeded $100 billion in 2024 and is expected to reach $217 billion in 2025, with long-term forecasts of over $1 trillion by 2032 [1,2]. The user base grew from 115 million in 2020 to 254 million in 2023, with estimates of roughly 379 million users in 2025 and 729 million by 2030 [3].

Although these systems offer revolutionary advantages, they also present serious privacy and security risks: adversarial manipulations can jeopardize safety-critical applications such as autonomous vehicles, AI models may leak private training data, and synthetic content can be abused for disinformation. This combination of promise and danger calls for thorough examination.
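As a toy illustration of how an adversarial manipulation works, the sketch below applies the fast gradient sign method (FGSM) to a hypothetical logistic-regression classifier. The model, its weights, and the perturbation budget are illustrative assumptions for this sketch, not drawn from the paper; the same principle scales to the deep networks used in safety-critical systems.

```python
import numpy as np

# Hypothetical toy model: a logistic-regression "classifier" with fixed
# weights, standing in for a trained network.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # fixed model weights (illustrative assumption)
x = rng.normal(size=8)   # a clean input

def predict(x):
    """Sigmoid score in (0, 1) for the positive class."""
    return 1.0 / (1.0 + np.exp(-w @ x))

# For logistic regression with true label y = 1, the gradient of the
# negative log-likelihood with respect to the input x is (p - y) * w.
y = 1.0
p = predict(x)
grad_x = (p - y) * w

# FGSM step: nudge every input coordinate by epsilon in the sign of the
# gradient, i.e. the direction that increases the loss the fastest.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```

Each coordinate of `x_adv` differs from `x` by at most `epsilon`, yet the model's confidence in the true class drops; on high-dimensional inputs such as images, perturbations this small are typically imperceptible to humans.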

GenAI's impact spans healthcare, education, transportation, finance, and the creative industries, fueling innovation and productivity. However, risks including adversarial attacks, data poisoning, deepfakes, misinformation, and intellectual property violations now demand urgent attention from technologists and policymakers alike.