AI Is Not Rewriting the Rules: It’s Reminding Us Why They Existed in the First Place
Artificial intelligence (AI, GenAI, agentic AI – pick your buzzword) is everywhere now. It’s being pitched as revolutionary, disruptive, and transformative. In many ways, it is. I use it every day to search, refine my writing, and review code. But here’s the truth: AI is not magic. It’s a tool, and a powerful one we should all adopt. Like many technologies before it – cloud, mobile, even the internet itself – its risks and benefits flow directly from how we choose to use it, govern it, and secure it.
What’s changing isn’t the rules. It’s the speed, volume, and scale. AI is moving so fast that people and organizations are struggling to keep up. But the principles that matter most about data, governance, and responsible use haven’t changed. If anything, they matter more than ever.
Nature of AI: Still a Tool
AI feels different because it automates at scale and operates at a velocity we’ve never seen before. In the end, it’s still just a tool: it processes inputs, applies logic (learned, designed, or sometimes fabricated), and delivers outputs. That means the foundational questions are still the same:
- Where does the data come from?
- Who owns it?
- How is it protected?
- How can it be misused?
For example, when OpenAI launched ChatGPT, companies quickly realized employees were pasting sensitive source code and customer data into prompts. The model wasn’t malicious—but the use was careless. A tool amplifies both good and bad behaviors.
Don’t confuse speed with novelty. Your fundamentals of security, privacy, and governance remain your bedrock.
Renewed Data Fundamentals
The real heart of AI risk comes back to the data itself. Leaders should view this as a chance, an urgent one, to double down on fundamentals:
- Data Minimization: The less you collect, the less you share, the less there is to leak, expose, or misuse. This principle is already embedded in GDPR and ISO 27701. In the age of AI, it’s not optional. Models thrive on volume—but your risk multiplies with it.
- Example: Why store every customer chat log indefinitely? Instead, anonymize and aggregate only what’s needed for model training (a short sketch follows this list).
- Data Use: Just because AI can process data doesn’t mean it should. Define clear purposes and boundaries. The FTC has warned companies against “AI washing” (deceptive claims about AI) and against deploying AI on data without clear justification or user consent.
- Example: Using employee keystroke data to “predict productivity” might be technically possible but could create toxic workplace surveillance and legal liability.
- Data Procedures: Establish clear processes for the data lifecycle: collection, classification, use, retention, and deletion. The NIST AI Risk Management Framework emphasizes documentation and reproducibility, two things many AI PoCs skip in the rush to production.
- Data Protection: Leverage least privilege for AI integrations, encrypt everything, and, when possible, opt for your own tenant instead of shared environments. If you’re using SaaS AI features, ask whether your enterprise data could be commingled with others in a multi-tenant architecture.
- Supply Chain: Every vendor, API, and model introduces a risk vector. Large language models are often a “black box” dependency. You don’t just need SOC 2 reports anymore; you need to ask what data feeds their AI pipeline.
- Example: An HR platform adding an AI resume screener suddenly puts you on the hook for discrimination risk if the model favors specific schools or names.
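The anonymize-and-aggregate example above can be surprisingly small in practice. Below is a minimal Python sketch, with illustrative field names and regex patterns (assumptions, not a production PII scrubber), showing the idea of keeping only the fields a model actually needs:

```python
# Minimal sketch: strip obvious identifiers and keep only the fields needed
# for training. Field names and patterns are illustrative assumptions.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def anonymize(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("<EMAIL>", text)
    text = PHONE.sub("<PHONE>", text)
    return text

def minimize(chat_logs: list[dict]) -> list[dict]:
    """Keep only what training needs; drop customer IDs and everything else."""
    return [{"intent": log["intent"], "text": anonymize(log["text"])}
            for log in chat_logs]

raw = [{"customer_id": "C-1042", "intent": "refund",
        "text": "My email is jane@example.com, call me at 555-010-1234"}]
print(minimize(raw))
# [{'intent': 'refund', 'text': 'My email is <EMAIL>, call me at <PHONE>'}]
```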
It's classic security, applied with modern urgency and volume.
Beyond the Data: Bias, Harm, and Accountability
Of course, AI does introduce new urgency around topics that extend beyond pure data issues. For example:
- Bias: Models trained on biased data can reproduce and amplify inequities. If that bias flows into hiring, lending, or law enforcement decisions, the consequences are systemic. Consider the MIT Media Lab’s Gender Shades study, which found facial recognition systems misidentifying women and people of color at markedly higher rates.
- Harm: Unchecked outputs can cause reputational, financial, or even physical harm. Think of medical chatbots suggesting dangerous advice or deepfake scams convincing employees to transfer funds.
- Accountability: When AI influences a decision, who owns it? The EU AI Act explicitly requires human oversight for high-risk systems. Without a named accountable party, organizations risk responsibility diffusion.
- Transparency: People demand to know why a decision was made, not simply that it was. Explainability frameworks like SHAP, LIME, or PFI are early attempts (a brief SHAP sketch follows this list), but for most end users, plain-language transparency is still missing.
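To make the explainability point concrete, here is a minimal SHAP sketch. The dataset and model are purely illustrative, and it assumes the shap and scikit-learn packages are installed; treat it as a taste of feature-level attribution, not plain-language transparency:

```python
# Minimal sketch: per-feature attributions with SHAP on an illustrative model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic explainer over the model's positive-class probability.
explainer = shap.Explainer(lambda rows: model.predict_proba(rows)[:, 1], X)
shap_values = explainer(X.iloc[:50])

# Global view: which features drive the model's decisions overall.
shap.plots.bar(shap_values)
```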
These aren’t brand-new issues. They’re governance issues. AI just magnifies their urgency and compresses the timeline for addressing them.
Practical Guidance for Security Leaders
Turning principles into practice requires focus:
- Risk Assessments: Apply traditional frameworks (ISO 27001, NIST CSF), but extend them to include bias, reputational harm, and regulatory risks. Test for these frequently and don’t wait for quarterly or yearly audits.
- Vendor Risk Reviews: Update third-party reviews to account for AI-specific dependencies. Does your vendor fine-tune on your data? Where are their models hosted? How do they handle model drift, prompt injection, or data poisoning attacks?
- Guardrails & Policies: Create clear acceptable use guidelines. Technical protections should include controls against prompt injection, data leakage, and model abuse. Microsoft’s Azure AI or promptfoo publishes guidance on red-teaming exercises - use them.
- Data Inventory: Know what you have, where it lives, and how it flows into AI. Gartner warns that through 2026, organizations will abandon a majority of AI projects that lack AI-ready data. Inventory is the first step to preventing garbage in, garbage out.
- Human Factor: Train employees to avoid feeding sensitive data into public AI tools (a simple screening sketch follows this list). Samsung famously learned this the hard way in 2023, when engineers pasted confidential source code into ChatGPT, putting it outside the company’s control and potentially into future training data.
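As a concrete (and deliberately simple) illustration of the guardrail and human-factor points above, the sketch below screens prompts for obviously sensitive content before they reach an external model. The patterns and the blocking policy are illustrative assumptions, not a substitute for a real DLP control:

```python
# Minimal sketch: screen prompts for sensitive content before they leave.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Prevent, don't just detect: block or route for review before sending.
        raise ValueError(f"Prompt blocked, sensitive content found: {findings}")
    # send_to_approved_model(prompt)  # hypothetical call to a sanctioned endpoint

try:
    submit("Summarize this CONFIDENTIAL design doc; the key is sk_live_abcdef1234567890")
except ValueError as err:
    print(err)  # Prompt blocked, sensitive content found: ['api_key', 'internal_marker']
```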
Principles only matter if they are reinforced and embedded into daily operations.
Hype vs. Reality
The AI hype is loud, but the reality is quieter and more familiar. This is not the first technology to promise transformation. It’s also not the first technology to squeeze “25% more productivity” out of workers. (Spoiler: people rarely reinvest that time, and the overhead costs often rise.)
What makes AI different is velocity and scale of adoption. That’s what creates the illusion of new risks.
Meanwhile, regulators are already moving to catch up:
- The EU AI Act sorts systems into “unacceptable,” “high,” “limited,” and “minimal” risk categories, with obligations scaled to match.
- In the U.S., the FTC has warned against deceptive AI claims and misuse of consumer data.
- NIST’s AI Risk Management Framework offers practical controls for developers and enterprises alike.
Compliance and governance are no longer 'wait and see'. If you've not started, you're already behind.
And culturally, organizations must help employees understand not just AI’s power but the responsibilities of adoption. Shadow AI projects, where rogue teams deploy ChatGPT plugins or API wrappers without oversight, are today’s “Shadow IT.”
Conclusion
AI is not rewriting the rules. It's reminding us why the rules existed in the first place. You still had that service account API key with overly broad permissions. You still had that business partner emailing confidential HR data. You still had a DLP system that detected instead of prevented.
Now you just have it happening at scale and speed.
The challenge for leaders is to anchor their organizations in the principles that endure: minimize the data you collect, use it responsibly, enforce strong governance, and rigorously audit your supply chain. That’s difficult to do while chasing every shiny new AI product.
AI is fast, but fundamentals are timeless. If we hold on to those fundamentals and practice them with discipline, we truly can harness the power of AI without forgetting the responsibility that comes with it.
References
NIST AI Risk Management Framework (AI RMF 1.0) – U.S. guidance on trustworthy AI: https://www.nist.gov/itl/ai-risk-management-framework
EU AI Act – Final text and risk classifications (high/limited/unacceptable): https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
FTC on AI and “AI washing” – Business guidance on deceptive claims and AI use: https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes
MIT Media Lab Gender Shades study (Joy Buolamwini) – facial recognition bias research: https://www.youtube.com/watch?v=TWWsW1w-BVo
Gartner prediction on AI and data quality issues: https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk
Microsoft AI red-teaming and prompt injection guidance: https://learn.microsoft.com/en-us/security/ai-red-team/
Promptfoo documentation – overview of prompt/LLM evaluation and red-teaming: https://www.promptfoo.dev/docs/intro
Samsung ChatGPT data leak incident (2023) – TechCrunch coverage: https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/