You can practically hear the keynotes now: “Let AI handle the messaging so leaders can focus on strategy.” Somewhere, a demo shows a smiling manager clicking “Auto-Write,” and a perfectly formatted pulse of empathy lands in everyone’s inbox. What the demo never shows is the reply-all thread from the actual humans who received it, the quiet credibility hit the manager just took, and the compliance officer who just started to sweat.
Recently, HR Dive highlighted peer-reviewed research on exactly this phenomenon: employees can spot when their bosses lean too hard on AI to write to them—and they trust those bosses less when it happens. The findings aren’t hand-wavy opinion; they’re grounded in a study of 1,100 professionals published in the International Journal of Business Communication. Low-assist editing (think Grammarly) was fine. But when AI did the composing—especially for praise, feedback, or anything that requires tone and care—perceived sincerity and trust dropped off a cliff. In other words, if the message is supposed to feel human, outsourcing the humanity backfires.
“Let the bot answer; we’re busy,” sounds efficient—until the bot is wrong. Ask Air Canada, which was ordered to compensate a passenger after its customer-facing chatbot misstated the airline’s bereavement policy. The tribunal rejected the company’s argument that the chatbot was a “separate legal entity.” If your system says it, you said it. That precedent doesn’t just sting; it clarifies accountability in the age of automated replies.
New York City learned a similar lesson in public. Its official MyCity chatbot, meant to help entrepreneurs navigate the rules, told business owners it was okay to do things that are, in fact, illegal, such as firing workers who complain about harassment or refusing cash payments. The city kept it online while “testing,” and the national press did the rest. This is what happens when an answer engine is treated like a search box with manners. If the output can change behavior or create exposure, you cannot “ship it and see.”
Back to your team. The Florida/USC study found that people were broadly okay with AI for proofreading and polishing, but viewed managers as less sincere once the AI’s role moved from touching up the message to composing it outright. Acceptance plummeted for congratulations, motivation, and feedback: the very messages that set culture. The more the machine “sounded like you,” the less you sounded like you. That’s not a vibe problem; it’s a leadership one.
There’s no clause in the Sarbanes–Oxley Act that says, “Thou shalt not use generative AI.” What SOX does require is that public companies maintain effective internal control over financial reporting (Section 404) and robust disclosure controls and procedures (Exchange Act Rule 13a-15). In plain English: material information must be accurate, authorized, and controlled; your processes must ensure that, and you must be able to prove it. If a bot can send messages that touch policy, finance, controls, or investor-relevant disclosures without human authorization, you may be undermining the very controls SOX expects you to have. It’s not the tool that violates SOX—it’s how you design the process around it.
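To make the design point concrete, here is a minimal sketch of what “human authorization as a control” can look like in code. It is illustrative only, assuming a hypothetical in-house messaging pipeline; the topic list and names like `dispatch` and `send_and_retain` are invented for the example, not drawn from any real compliance library.

```python
from dataclasses import dataclass

# Topics that fall under disclosure controls. Illustrative list only;
# a real one comes from counsel and your controls team, not a blog post.
CONTROLLED_TOPICS = {"finance", "policy", "internal_controls", "disclosure"}

@dataclass
class DraftMessage:
    author: str                   # the human accountable for the message
    topic: str                    # coarse classification of the content
    body: str
    human_approved: bool = False  # flipped only by a named person

def dispatch(msg: DraftMessage) -> None:
    """Send a bot-drafted message only if the control is satisfied."""
    if msg.topic in CONTROLLED_TOPICS and not msg.human_approved:
        # The control itself: no automated send on controlled topics
        # without a recorded human sign-off. Fail closed.
        raise PermissionError(
            f"{msg.topic!r} message from {msg.author} requires human approval"
        )
    send_and_retain(msg)

def send_and_retain(msg: DraftMessage) -> None:
    # Stand-in for your real delivery and retention stack.
    print(f"sent and archived: {msg.author}: {msg.body[:40]}...")
```

The detail that matters is failing closed: a controlled message cannot leave the building without a named human in the record, which is exactly the kind of process a SOX auditor wants to see.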
If you operate in regulated corners of finance, the bar is higher. The SEC has hammered firms with hundreds of millions in penalties for “off-channel” communications that weren’t preserved or supervised. Those cases weren’t about AI per se, but they’re a bright-red warning for anyone auto-sending messages from systems that your recordkeeping stack can’t capture. If your AI drafts or dispatches messages outside approved channels—or in approved channels without retention—you’ve recreated the same exposure with a shinier interface.
And if your “automated messaging” includes calls or voice drops, remember the FCC’s 2024 ruling: AI-generated voices in robocalls fall under the Telephone Consumer Protection Act. Translation: consent and other TCPA requirements apply, and regulators can fine you for skipping them. Email-style campaigns carry their own obligations under CAN-SPAM as well; monitoring what vendors send “on your behalf” is part of the law. “The bot did it” is not a defense.
Automated drafting is also a data governance problem. High-profile companies have restricted or banned use of public chatbots after staff pasted sensitive source code and meeting transcripts into them—exactly the sort of leak your policies are supposed to prevent. If your AI system is cloud-hosted, logs prompts, or trains on enterprise content, you’re not just sending a message to an employee—you might be sending your secrets to the world.
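One common guardrail is to screen prompts before they ever reach a hosted model. The sketch below is a toy version of that idea; the patterns and the `call_hosted_model` stub are hypothetical stand-ins, and a real deployment would lean on proper DLP tooling rather than a handful of regexes.

```python
import re

# Illustrative patterns only: key material, AWS-style access key IDs,
# and obvious confidentiality markers.
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
    re.compile(r"(?i)\bconfidential\b|\binternal only\b"),
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt looks like it carries enterprise secrets."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def send_to_chatbot(prompt: str) -> str:
    if not safe_to_send(prompt):
        # Fail closed: block the prompt rather than leak it to a hosted model.
        raise ValueError("prompt blocked: possible sensitive content")
    return call_hosted_model(prompt)  # hypothetical vendor API client

def call_hosted_model(prompt: str) -> str:
    return "stub response"  # stand-in for a real SDK call
```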
Employees aren’t naïve. They notice the sudden shift to frictionless “voice,” the identical phrasing across different managers, the 3:07 a.m. timestamp, the odd emotional temperature. The University of Florida release quantifies the reaction: sincerity scores for managers crater when AI is perceived to be doing the composing. Trust isn’t built by tightening your prose; it’s built by taking the time to write it yourself when it matters. AI can proofread; it can’t care. Your team can tell the difference.
Use AI like a seatbelt, not a chauffeur. Let it catch typos and tidy syntax. Don’t let it deliver praise, criticism, or anything with legal or financial teeth. For external comms, keep it on a leash: approved channels, human authorization, retention turned on, and language that has actually been read by a person who will sign their name to it. If your company insists on auto-sending at scale, treat every automated message as if it were marketing under CAN-SPAM and as if a regulator will ask for the log. Because one day, they might.
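If you do automate, the minimum viable discipline looks something like the sketch below: log first, send second. This is an assumption-laden toy (the file-based log and the `deliver` function are stand-ins for real archiving and transport), but the shape is the point: every automated send leaves a record a regulator could ask for.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("automated_messages.jsonl")  # illustrative location only

def send_with_retention(channel: str, recipient: str, body: str,
                        approved_by: str) -> None:
    """Record an automated message before it leaves the building.

    Minimal sketch: a real system would write to WORM storage or an
    archiving service, not a local file, and capture vendor sends too.
    """
    record = {
        "ts": time.time(),           # when it was sent
        "channel": channel,          # must be an approved, captured channel
        "recipient": recipient,
        "approved_by": approved_by,  # the human who signed off
        "body": body,
    }
    # Append-only, written before dispatch, so there is no window where
    # a message exists but its record doesn't.
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    deliver(channel, recipient, body)  # hypothetical transport layer

def deliver(channel: str, recipient: str, body: str) -> None:
    print(f"[{channel}] -> {recipient}: {body[:40]}...")
```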
The “AI wrote it for me” pitch promises to make managers efficient. But the moments where efficiency matters least are the ones your team remembers most. Congratulating someone. Owning a mistake. Explaining a hard call. Those are not throughput problems; they’re relationship investments. Delegating them to a machine doesn’t make you modern. It makes you absent.
And if you’ve been on the receiving end of those uncanny notes that read like corporate Mad Libs: you’re not crazy. The vibe is off because the authorship is off. If the goal is to be a better communicator, the shortcut isn’t a bot; it’s time, clarity, and a willingness to hit backspace yourself.