GenAI and ESG: The new language of business
The Barcelona Accounting Summer Workshop brought together top researchers to explain how artificial intelligence is reshaping corporate communications—and the resulting urgent questions about credibility and transparency.
Not long ago, corporate communication followed a conventional pattern. Annual reports were drafted by teams of people who spent weeks debating every line, and the tone of corporate disclosure changed only slowly over time. Today, that pattern is being rewritten.
Artificial intelligence can now draft corporate messages in seconds. Goldman Sachs CEO David Solomon recently pointed out that drafting an S-1 filing, a task that once required a six-person team and two weeks of effort, can now be “95% done by AI in minutes.” At the same time, corporations face growing demands for “non-traditional” disclosure, with sustainability information taking a prominent role, to the point of shaping executive pay. Even in less regulated environments, firms are experimenting with more flexible forms of disclosure to attract investors. The result is a new corporate language: faster, and often more ambitious. But as communication accelerates, a central tension emerges: when words become cheap and easy to generate, distinguishing genuine commitments from polished narratives becomes increasingly difficult.
That question was at the heart of the Barcelona Accounting Summer Workshop (2025 edition), held last June at Universitat Pompeu Fabra and co-organized with Esade Business School. The workshop brought together researchers from leading universities to examine how generative AI, sustainability, and governance choices are reshaping corporate communication.
The rise of automated corporate language
One of the most visible changes in corporate communication is the growing use of generative artificial intelligence. Firms now experiment with AI tools to draft earnings announcements, commentaries, press releases, and internal communications.
Research by Elizabeth Blankespoor shows that GenAI is already writing the corporate present. The study detects AI usage across all major corporate disclosure forms, particularly in those with high ‘cost of drafting’—such as the Management’s Discussion and Analysis section of annual reports or Business Descriptions in IPO prospectuses. While this automation can improve consistency, the study warns of a “fluency trap”: when communication sounds confident and polished, it becomes harder for readers to assess underlying uncertainty or risk.
A key insight from Blankespoor is that GenAI reports are more positive in tone and less readable, suggesting that firms may obfuscate negative news behind sophisticated prose. This raises new accountability questions: when machines are involved in drafting reports for investors, interpretation rather than disclosure becomes the new challenge.
As AI holds the pen, disclosure interpretability suffers, masking risks and uncertainty behind a robotic prose curtain
Working with AI, not just deploying it
While Blankespoor’s work focuses on how AI shapes external communication, research by Jasmijn Bol studies what happens inside organizations when humans and AI collaborate. This experimental work distinguishes between different ways AI can be used: generating first drafts, refining existing text, or providing feedback to human authors. These design choices matter. When AI fully takes over content creation, employees may lose a sense of ownership and responsibility. When AI is used as a feedback or support tool, humans tend to remain more engaged while still benefiting from AI’s analytical strengths.
The broader message from Bol is that GenAI adoption is not just a technology decision—it is an organizational design choice. Firms that think carefully about how humans and AI interact are more likely to improve communication without undermining accountability or trust.
ESG targets and incentives
If GenAI is changing how firms speak, ESG is increasingly shaping what they say.
Sustainability targets and outcomes are now common components of firms’ narratives, with direct implications for firms and their incentive systems. Many firms now tie CEO bonuses to ESG targets to prove their commitment to sustainability. But Clara Chen’s research provides a jarring reality check: firms that "meet" or "beat" their ESG targets often see worse actual outcomes—including more air emissions and lower workplace safety—the following year. This suggests that many ESG metrics may be "diversity washing" or "greenwashing" exercises, with ESG targets often set at an easily achievable level to justify higher CEO pay and attract socially responsible investors, who are still being "tricked" by these signals and narratives.
Governance and firm narratives
Questions of incentives and credibility extend beyond ESG. Research by Iván Marinovic examines how firms justify and structure extreme executive compensation. Using high-profile cases such as Elon Musk’s compensation at Tesla, the work illustrates how complex pay packages rely on assumptions that are difficult to verify in advance and easy to rationalize after the fact. It thus highlights the role of narratives about exceptional executive pay in justifying outcomes that would otherwise be hard to defend. In that sense, language is not just descriptive; it is part of how power and incentives are exercised within organizations.
As corporate language becomes easier to produce, the challenge shifts from output (disclosure) to credibility and trust
Credibility, the common denominator
Across discussions of GenAI, ESG, and executive incentives, one concept surfaced repeatedly: credibility. Research by Thomas Bourveau focuses on firms operating under alternative, more flexible reporting standards in the over-the-counter “Pink Sheets” market. These regimes allow companies greater discretion in how they disclose information. The study shows that disclosure is most effective when it is supported by credibility mechanisms such as verification, consistent reporting behavior, and strong governance. When these mechanisms are absent, additional disclosure does little to help firms raise capital. Markets discount information that appears cheap, unverifiable, or purely symbolic, a discount that may equally apply to AI-produced narratives.
Related evidence on governance transparency came from Fabrizio Ferri, who studies a U.S. Securities and Exchange Commission rule requiring firms to disclose who recommends new independent directors: executives, independent directors, or external headhunters. It turns out that, while firms frequently fail to comply, the source of director recommendations is highly informative, with board independence, expertise, and diversity all being higher when directors are recommended by external headhunters. The fact that firms often withhold this information weakens the credibility of governance disclosures and limits their usefulness.
This insight ties the workshop together. As corporate language becomes easier to generate, the real constraint shifts from production to credibility. The central question is no longer how much firms say, but how much trust they convey.
Going forward
The language of business is changing. It is faster, more adaptive, and increasingly shaped by algorithms and metrics. Yet the underlying challenge remains familiar: aligning words with reality. As the prior work makes clear, GenAI and ESG are not separate trends. Together, they are reshaping how firms communicate performance, purpose, and responsibility. Whether this new language strengthens trust—or undermines it—will depend less on technology itself than on how organizations choose to govern it. In an era of cheap talk, credibility will become the most valuable corporate asset.