Chatbots Behaving Badly™

Tasteful AI, Revisited - From Style Knobs to Taste Controls

By Markus Brinsa  |  October 20, 2025

Everyone promised personalization; few handed you the off switch. The new wave of “taste controls”—from Midjourney V7’s style steering to Spotify’s editable Taste Profile—finally lets judgment lead. This isn’t AI replacing taste. It’s infrastructure for it. What follows is how to use it without drifting into beautiful sameness.

The first time we discussed Tasteful AI, the world was overwhelmed by infinite options and yearned for judgment. That hasn’t changed. What has changed is that “taste” isn’t just an aesthetic sermon anymore; it’s creeping into the stack as a feature—sometimes even a setting. Platforms are handing us new dials. Research is challenging the easy narratives. And the models themselves are learning how to maintain a distinct perspective without collapsing into sameness.

A good way to see the shift is to look at the tools people actually use. Midjourney’s Version 7 didn’t just bump resolution; it tightened the feedback loop between a reference image and the generated look, and made “style reference” less of a vibe and more of a lever. It became the default this summer, adding draft mode and “Omni Reference” so creators can steer look and feel more deliberately instead of whispering at the prompt parser.

Adobe chased a similar seam. Firefly’s Style Reference and Structure Reference matured into practical controls you can rely on in client work, not just demo theater. The July and earlier 2025 drops layered on composition and video features; the upshot is brand-consistent outputs with far less trial-and-error prompt wrangling. This is taste in tooling: not a promise of genius, but the removal of friction so your judgment shows up more clearly, more often.

Taste also leaked into everyday consumer UX. Spotify quietly made “taste” editable. You can now exclude tracks from your Taste Profile—quarantining kids’ songs, sleep noise, or one-off curiosity listens so they don’t stain your recommendations—and its AI DJ picked up richer, voice-driven steering along the way. Instead of passively reverse-engineering you, the product lets you participate in defining you. That’s not just personalization; that’s taste scaffolding.

Underneath these UI flourishes, there’s a scramble to quantify “good.” LAION’s Aesthetics Predictor—once just a prefilter for nicer training images—continues to seep into real experiences, from museum collections that now offer a “sort by beauty” toggle to research benchmarks that treat “aesthetic preference” as a first-class signal. The signal is imperfect and culturally loaded, but it’s affecting what we see and ship.
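
To make the mechanics concrete: predictors in this family typically run a small learned head over a CLIP image embedding and rank by the resulting score. Here is a minimal sketch of that idea, assuming the open_clip package and a hypothetical aesthetic_head.pt checkpoint for the head’s weights; the model name, file, and single linear layer are illustrative stand-ins, not the exact LAION pipeline.

```python
# Sketch: "sort by beauty" via a learned head over CLIP image embeddings.
# Assumes: pip install torch open_clip_torch pillow
# "aesthetic_head.pt" is a hypothetical checkpoint holding Linear(768 -> 1) weights.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("ViT-L-14", pretrained="openai")
model.eval()

head = torch.nn.Linear(768, 1)                          # ViT-L/14 image embeddings are 768-d
head.load_state_dict(torch.load("aesthetic_head.pt"))   # hypothetical weights
head.eval()

@torch.no_grad()
def aesthetic_score(path: str) -> float:
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    emb = model.encode_image(image)
    emb = emb / emb.norm(dim=-1, keepdim=True)           # score a normalized embedding
    return head(emb).item()

paths = ["hall_01.jpg", "hall_02.jpg", "hall_03.jpg"]    # placeholder filenames
ranked = sorted(paths, key=aesthetic_score, reverse=True)
print(ranked)  # "prettiest" first, by one small set of learned weights
```

The entire notion of “good” lives in that one small head, which is exactly why the monoculture worry later in this piece is not hypothetical.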

At the same time, researchers are trying to bottle personal taste without flattening it. One line of work shows clever “activation steering” tricks that imprint a user’s style as a vector—no per-user fine-tuning required. Another reminds us that, despite all the swagger, models still struggle to imitate the implicit texture of a person’s writing across messy, informal contexts.
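
The steering idea is simpler than it sounds: derive a direction in the model’s hidden-state space from samples of your writing, then nudge the activations toward it at generation time. Below is a minimal sketch with Hugging Face transformers and GPT-2; the layer index, scaling factor, and the mean-difference recipe for the style vector are illustrative choices, not the specific method of any paper alluded to here.

```python
# Sketch: steer generation toward a "style vector" with a forward hook.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
LAYER, ALPHA = 6, 4.0  # which block to steer and how hard -- both illustrative knobs

@torch.no_grad()
def mean_hidden(texts, layer):
    """Average hidden state at `layer` over a handful of texts."""
    vecs = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        hs = model(**ids, output_hidden_states=True).hidden_states[layer]
        vecs.append(hs.mean(dim=1).squeeze(0))
    return torch.stack(vecs).mean(dim=0)

# Style vector = (mean activation on your sentences) - (mean on neutral ones).
mine = ["Short. Punchy. No hedging.", "Cut the adjective, keep the verb."]
neutral = ["The meeting has been rescheduled to Tuesday.", "Please find the report attached."]
style_vec = mean_hidden(mine, LAYER) - mean_hidden(neutral, LAYER)

def steer(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * style_vec  # nudge every position toward the style direction
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("The quarterly update:", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40, do_sample=True, top_p=0.9)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```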

The lesson is humbling: taste can be hinted at and nudged, but the tacit is stubborn.

If you zoom out, even the cultural conversation has matured. “Taste is the instinct that tells us not just what can be done, but what should be done,” wrote The Atlantic this summer—a line that’s aged well as executives realize that style guidelines, design language, and editorial voice are now strategic assets in the AI era. Meanwhile, a counter-current has warned about the risks of benchmark-driven sameness—“opinionated models” that all optimize toward the same handful of aesthetic evaluators, as one essayist put it. Both are right. Taste can either become a moat or a monoculture; the difference is how intentionally we model it and who gets to decide.

Apple’s “Writing Tools” add another wrinkle: they make tone-forward editing ambient—compose, restyle, summarize—baked into the OS, with ChatGPT even roped into Siri. That move mainstreams stylistic manipulation, but it also raises awkward questions about voice. If everyone can sound “more like themselves,” will they? Or will we all sound like the same agreeable paragraph? The fact that Apple had to design guardrails around privacy and opt-ins underscores that taste is not just output—it’s identity.

So where does that leave “Tasteful AI” today?

First, curation fatigue is real, but the new controls reduce it. When you can pin down structure and style directly, you spend less time rejecting “almosts” and more time exercising judgment. That’s not automation of taste; it’s amplification of it.

Second, codifying taste is power. Aesthetic predictors and preference models don’t just curate datasets; they steer culture. When “good” is mathematically defined by a few evaluators or historical labels, you risk beautiful conformity. A tasteful practice documents why something was chosen, rather than just stating that it ranked higher.

Third, personalization is moving from “for you” to “by you.” Tools like Spotify’s Taste Profile toggle are small but profound. They model taste as editable, not inferred. Expect more products to follow: sliders for divergence vs. familiarity, dials for risk-taking, switches for “in my voice” vs. “in conversation with my voice.”

Fourth, we’re still early on personal style fidelity. The frontier is getting a model to keep your implicit rhythm—the cadence you never wrote down—across formats without plagiarizing you or anyone else. That’s where the next round of breakthroughs and governance fights will live.

Practically, a tasteful workflow in 2025 looks like this: anchor intent, choose a reference that truly represents you or the brand, lock structure where it matters and keep “play” where it doesn’t, generate widely, shortlist narrowly, and annotate why the survivors survived. Treat every shipped artifact as a style sample to refine your model of taste—your own private “taste layer.”
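
If you want the “annotate why the survivors survived” step to compound, it helps to give each decision a shape you can query later. A minimal sketch of what one record in such a private taste layer could look like; the field names and the JSONL file are placeholders of my own, not a feature of any product mentioned above.

```python
# Sketch: one record per shipped artifact, appended to a private "taste layer" log.
# Field names and the JSONL file are illustrative placeholders.
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class TasteRecord:
    artifact: str                 # what shipped (path, URL, or title)
    intent: str                   # the anchored intent for this piece
    reference: str                # the style/structure reference it was steered by
    candidates_seen: int          # how widely you generated
    shortlisted: int              # how narrowly you chose
    why_it_survived: str          # the actual judgment, in your own words
    rejected_because: list[str] = field(default_factory=list)
    shipped_on: str = field(default_factory=lambda: date.today().isoformat())

def log_record(rec: TasteRecord, path: str = "taste_layer.jsonl") -> None:
    """Append the decision, not just the artifact, so future-you can see the pattern."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(rec)) + "\n")

log_record(TasteRecord(
    artifact="q3-launch-hero.png",
    intent="calm confidence, no gradient clichés",
    reference="brand-board-2025.png",
    candidates_seen=48,
    shortlisted=3,
    why_it_survived="negative space carries the headline; nothing competes with it",
    rejected_because=["too literal", "stock-photo energy"],
))
```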

Do it consistently and you build a moat: not because the AI is magical, but because your decisions are.

Tasteful AI was never about making machines refined. It was about making human refinement visible—and scalable—without losing its soul. The tools are finally catching up. Whether the outputs feel alive will depend on us.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. He created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 15 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™