Holidays Are Coming, Holidays Are Coming | ✉️ #81


Hey! 👋

Have you seen the new version of Coca-Cola’s iconic “Holidays Are Coming” ad yet? I fucking hate it. Not because some multibillion megacorp swapped humans for AI slop, or because of how inconsistent the video is (people on the internet counted ten different versions of the same truck), but because the old ads, shot with practical effects, actually created a sense of magic. Somewhere in the real world there was that caravan of glowing trucks bringing the holiday with them, and around them a million little lights lit up with festive magic. None of that is in the AI version. Zero magic. Not even the illusion of magic. It’s like a magician walking on stage and saying, “I’m not a wizard, I’m skillfully screwing you, you’re just too dumb to understand how exactly.” Of course we all know it’s a trick. But playing along with the illusion of magic is still pleasant. And Coca-Cola literally spells out how and with what tool they screwed us. It’s annoying and not exactly inspiring.

What’s interesting is how two events collided. On one side of the ocean Coca-Cola labels its AI slop as AI slop, and on the other side a seven-month marathon kicks off around the EU’s Code of Practice on transparency of AI-generated content. The kick-off session happened on November 5. The first draft is due in December, the second in March, and the finale in May–June 2026. The task is simple and pragmatic: agree on how exactly to label AI slop so that both humans and machines understand it, and so companies can show compliance with Art. 50 of the AI Act without pointless bureaucracy. Full enforcement starts on August 2, 2026.

The coincidence seemed interesting to me. I think the hate that hit Coca-Cola, combined with regulation, will make companies less willing to publish AI content. Or at least less willing to admit it. And that made it worth digging into the methods used for labeling. Short version: it’s a combo of two methods. First, a visible label in the frame or description. A plain disclaimer “This was made with AI.” Second, a “trace” inside the file that machines can detect: invisible markers and origin metadata. Think watermarks for images, video, and audio. You’ve probably read stories about how people used AI to reverse-engineer a watermark’s algorithm and pattern, isolate it, then mask it out of the file. So there are additional methods like metadata and crypto-signatures (C2PA/Content Credentials + IPTC). Plus auto-detection algorithms used by TikTok and YouTube. Each method breaks when used alone, so the industrial recipe the EU session is working on is the bundle “label for humans + signal for machines.”

What does this change right now? If you create visuals or video with AI, prepare for mandatory labeling and for your content to be “detectable.” This is no longer about taste or how you feel about AI slop. It’s about compliance, reputation, and product trust. That’s exactly what the EU wants: synthetic content clearly marked, and detection tools working across the ecosystem, from studio to platform.

And I can’t resist plugging ourselves: we help companies roll this out. EU AI Act audit and roadmap, inventory of AI systems, and upgrading team AI literacy so everyone understands where a visible label is mandatory and where a machine-readable signal is enough. Call us!


What We've Discovered


The 82nd mkdev dispatch will arrive on Friday, November 28th. See you next time!