Welcome to Beyond the Algorithm

A forensic investigation perspective on the AI systems reshaping decisions in law, business, and governance.

Published: January 30, 2026 | By: Joe Sremack

An Introduction to Beyond the Algorithm

I spend my professional life investigating systems that aren't supposed to be investigated. For over two decades, I've analyzed artificial intelligence systems, source code, data flows, and the decisions they make. My work has taken me into boardrooms, courtrooms, regulatory agencies, and crisis response centers across fifty countries. I've examined systems that made hiring decisions affecting thousands of people, algorithms that determined creditworthiness, and AI tools that influenced criminal justice outcomes.

What I've learned is this: AI systems don't exist in a vacuum. They sit at the intersection of technology, business incentives, regulatory oversight, and human judgment. Understanding them requires more than reading research papers or relying on vendor documentation. It requires forensic thinking—the discipline of asking difficult questions, following evidence where it leads, and building arguments based on proof rather than assumption.

Beyond the Algorithm is my attempt to share what I've learned through that investigative lens. This newsletter is written for attorneys, corporate counsel, compliance officers, and business leaders who need to understand AI systems not as abstract technology, but as functional tools with real consequences in the real world.

About My Background

I'm an advisory partner leading the Forensic Data Analytics team at CBIZ. My background is in computer science and philosophy—a combination that taught me to think both technically and critically about why systems work the way they do. Over the past 20+ years, I've been involved in over 500 matters in more than 50 countries, ranging from digital forensics investigations to complex data analytics projects to serving as an expert witness in litigation.

My expertise spans AI forensics, source code analysis, algorithmic system investigation, data analytics, and digital evidence. I hold certifications as a Certified Information Systems Auditor (CISA), Certified Fraud Examiner (CFE), and Certified Information Privacy Professional (CIPP/US). I'm also the author of AI Forensics: Investigation and Analysis of Artificial Intelligence Systems, forthcoming from Chapman and Hall/CRC in March 2026.

This experience has given me a particular perspective: I don't analyze AI systems from the inside, as a developer or product manager trying to build something. I analyze them from the outside, as someone tasked with understanding what they actually do, how they do it, what can go wrong, and how to prove it in contexts where proof matters—in litigation, in regulatory investigations, and in crisis management.

What You Can Expect From This Blog

This newsletter covers three main areas, each essential for anyone working with AI systems in professional, legal, or regulatory contexts:

Emerging Technology Risks

New AI capabilities create new risks. I'll analyze emerging technologies—from agentic AI systems that operate autonomously to multimodal systems processing text, images, and video—examining the failure modes, liability implications, and evidence challenges they create. What can go wrong? How do you detect it? What does it mean for your organization?

Practical Forensic Techniques

Theory is valuable, but practice is essential. I'll break down how to actually investigate AI systems. How do you analyze whether an algorithm is biased? How do you trace the data flowing through a system? How do you evaluate whether an AI-generated piece of content actually came from the system that vendors claim created it? How do you examine the source code that implements these systems? These aren't academic exercises—they're the techniques you'll need if you're investigating AI in litigation, compliance, or incident response.

Cases and Developments Worth Watching

The AI litigation and regulatory landscape is moving fast. I'll track emerging cases, regulatory developments, and industry changes that matter—not from a legal prediction standpoint, but from a technical and investigative angle. What's happening in the courts that reveals how judges are thinking about AI evidence? What are regulators focusing on? What are companies doing in response?

Topics You'll See in Upcoming Posts

To give you a sense of where this is headed, here are some of the topics I plan to cover in the coming months:

AI Benchmarks and What They Actually Mean

How AI systems are measured, what benchmark scores tell you (and what they don't), and how to use benchmarks as evidence in litigation. Why a high benchmark score doesn't guarantee a system will work in the real world.
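A toy example can make that gap concrete. The sketch below uses entirely hypothetical predictions, labels, and subgroups (none of it comes from a real system or benchmark) to show how a single aggregate score can mask a subgroup the model handles poorly:

```python
# Illustrative only: hypothetical predictions and labels showing how an
# aggregate benchmark score can hide much weaker performance on a subgroup.
preds  = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]   # hypothetical model outputs
labels = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # hypothetical ground truth
groups = ["A"] * 6 + ["B"] * 4            # hypothetical subgroup membership

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    pairs = list(pairs)
    return sum(p == l for p, l in pairs) / len(pairs)

overall = accuracy(zip(preds, labels))
by_group = {
    g: accuracy((p, l) for p, l, gg in zip(preds, labels, groups) if gg == g)
    for g in ("A", "B")
}
print(f"Overall accuracy: {overall:.0%}")   # the headline number: 70%
print(f"Per-group accuracy: {by_group}")    # group A ~83%, group B only 50%
```

A vendor quoting only the 70% headline figure would be telling the truth and still concealing the finding that matters in litigation: the system performs markedly worse for one group.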

Forensic Investigation of Agentic AI Systems

AI systems that make autonomous decisions create unique investigation challenges. How do you trace the decision-making process when the system is taking actions on its own? What evidence do you need? What goes wrong?

IP Disputes Involving AI-Generated Content and Code

Who owns the output of AI systems? How do you investigate claims of AI-generated content theft? What does copyright law mean for systems trained on large datasets? These questions are increasingly central to IP litigation.

AI Evidence in the Courtroom

How courts are treating AI evidence, what judges expect to see, how to present technical AI analysis credibly to non-technical decision-makers, and the evidentiary standards emerging for AI expert testimony.

Regulatory Developments in AI Governance

How the SEC, FTC, FDA, and other regulators are approaching AI governance. What compliance requirements are emerging? What does "responsible AI" actually mean from a regulatory perspective?

Why AI Forensics Matters Now

We've reached an inflection point. AI systems are no longer experimental technology—they're making real decisions with real consequences. And when consequences emerge, organizations need to understand what happened. This is where forensic analysis becomes essential.

The Employment and Hiring Crisis

AI systems are increasingly used to screen job applicants, evaluate candidate quality, and make promotion recommendations. These systems are failing in documented ways—exhibiting bias against protected classes, producing discriminatory screening patterns, and making recommendations that contradict human judgment in systematic ways.

When employment litigation arises from AI screening systems, the forensic question is straightforward: Did this algorithm discriminate? Can you prove it? Can you quantify the harm? The answers require investigating the system's training data, the decision boundaries it learned, and its performance disparities across demographic groups. This isn't legal argumentation—it's forensic analysis.
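One standard first-pass check in adverse-impact analysis is the EEOC's four-fifths rule: if one group's selection rate falls below 80% of the most-favored group's rate, the disparity warrants deeper investigation. A minimal sketch, using hypothetical counts (the numbers below are illustrative, not drawn from any real matter):

```python
# Illustrative sketch of the "four-fifths rule" heuristic used as a first
# screen for adverse impact in selection systems. All counts are hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total

# Hypothetical screening outcomes for two demographic groups
rate_a = selection_rate(120, 200)   # group A: 60% selected
rate_b = selection_rate(45, 150)    # group B: 30% selected

# Impact ratio: disadvantaged group's rate over advantaged group's rate
impact_ratio = rate_b / rate_a
print(round(impact_ratio, 2))       # 0.5 -- well below the 0.8 threshold,
                                    # a red flag calling for deeper analysis
```

The ratio is only a screen, not proof of discrimination; the forensic work is in what follows it—examining the training data, the features the model weighted, and whether the disparity survives controls for legitimate qualifications.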

AI-Generated Content and Authenticity Challenges

AI systems can now generate text, images, video, and audio with sufficient realism to fool human observers. This creates entirely new categories of problems: AI-generated content being published as authentic, deepfake concerns, copyright infringement disputes over training data, and authentication challenges where the origin of content becomes legally relevant.

Investigating these cases requires understanding how generative AI systems work, what digital artifacts they leave behind, how to distinguish AI-generated content from human-created work, and how to trace content back to specific training data or systems. Traditional digital forensics tools aren't enough—you need AI-forensic thinking.

Autonomous System Liability

As AI systems operate more autonomously—making decisions with minimal human review, taking actions without explicit human approval, operating in safety-critical contexts like medical diagnosis or autonomous vehicles—the liability implications become profound. When harm occurs, the forensic questions multiply: What decision did the system make? Was it operating as designed? Did humans understand what the system was doing? Could the harm have been prevented?

These are questions that require investigating not just the code, but the entire context in which the system operated: training data, performance testing, known limitations, documentation, operator understanding, and the actual sequence of events that led to harm.

Data Collection and Privacy Implications

AI systems require enormous amounts of data. That data comes from somewhere—often from users, and often collected without their full understanding or consent. As data privacy regulation tightens globally, investigations into how AI systems were trained, what data was used, how it was obtained, and whether users consented become essential. The GDPR, California's privacy laws, and emerging regulations worldwide are creating consequences for inappropriate data collection.

Criminal Justice and Forensic Credibility

AI systems are entering criminal justice contexts—predictive policing tools, recidivism assessment systems, and evidence analysis tools. When these systems influence charges, sentences, or investigative direction, the forensic questions are existential: Is this system accurate? Is it fair? Can we trust its outputs as evidence? Defending criminal cases increasingly requires understanding and challenging the AI systems that may have shaped the prosecution in the first place.

Getting Started

If you're new to this work, here are the resources on this site that will help you understand what I do and how I approach these problems:

About Me

A detailed overview of my background, experience, credentials, and approach to AI forensics.

Services

The specific types of investigations and analysis I provide—AI forensics, software analysis, and data analytics.

AI Forensics Book

My comprehensive guide to investigating AI systems, available March 2026 from Chapman and Hall/CRC.

Contact

If you need to investigate an AI system or have questions about a specific case or situation.

Why I'm Doing This Now

I've spent years working with attorneys, corporate counsel, regulators, and business leaders who face AI problems they don't fully understand. The technical explanations available are either too academic (written for other researchers) or too superficial (marketing materials from AI vendors). There's a gap in the middle—practical, forensic-minded analysis that treats AI systems the way you'd treat any complex technology that requires investigation.

The stakes are too high for that gap to continue. Companies are deploying AI systems with insufficient understanding. Regulators are trying to govern systems they struggle to investigate. Litigants are making decisions about AI-related disputes based on incomplete information. The legal system is grappling with how to treat AI evidence and expert testimony.

Beyond the Algorithm is my attempt to help close that gap. It's written by someone who investigates these systems professionally, who understands the forensic questions you need to ask, who has seen what happens when organizations misunderstand the technology they've deployed, and who believes that rigorous, evidence-based thinking about AI is essential for everyone from attorneys to business leaders to regulators.

Related Resources

Start with these resources to deepen your understanding of AI forensics and investigation:

AI Forensics: Investigation and Analysis of Artificial Intelligence Systems

A comprehensive guide to investigating AI systems in litigation and regulatory contexts. Forthcoming from Chapman and Hall/CRC, March 2026.

AI Forensics Services

Expert analysis of AI systems for litigation, investigation, and regulatory compliance matters.

Full Blog Archive

Read all posts in the Beyond the Algorithm series, covering benchmarks, litigation strategy, regulatory analysis, and forensic techniques.

Subscribe to Beyond the Algorithm

Get regular expert insights on AI forensics, emerging technology risks, and litigation strategy delivered to your inbox.

Subscribe on LinkedIn