
Master's thesis collaboration on AI transparency and trust in news media

  • Writer: Molly Grönlund Müller
  • Sep 11
  • 4 min read

How do we talk about AI in the newsroom without making people trust us less? Pranit Popli, a master’s student in interactive media design at KTH, has spent the spring exploring this question for his thesis, supported by IN/LAB.





Pranit's research looks at the “paradox of disclosure”: being open about AI use can sometimes lower trust, while hiding it can backfire. So how should newsrooms address this? The work resulted in five key tensions, a number of solution prototypes, and practical recommendations for newsrooms trying to get AI transparency right. Below is a summary of the thesis, written by Pranit. We are very grateful that Pranit chose to work together with us on this important topic and wish him the best of luck in the future!

Tailoring transparency

The integration of generative AI into journalism presents both significant opportunities and critical challenges. Among the most pressing is the question of transparency: how can newsrooms effectively disclose their use of AI in a way that builds and maintains reader trust? My thesis project, conducted together with IN/LAB, looks into this complex issue.


The research addresses a central challenge known as the “paradox of disclosure”: simply revealing AI involvement can sometimes erode trust, while concealing it can lead to accusations of deception. Furthermore, providing too much technical detail can lead to disclosure fatigue, causing readers to tune out, whereas providing too little information can breed suspicion. This project aimed to answer a critical question: how can we design transparency that nurtures trust without overwhelming the user or compromising the integrity of the journalistic experience?


The research approach


To navigate this challenge, I employed a multi-faceted research methodology. The process began with an extensive review of academic literature, followed by in-depth interviews with nine experts from across the Nordic media company Schibsted, who collectively hold 129 years of industry experience.

These interviews helped identify key tensions and trade-offs in AI transparency. The insights gathered were then used to design and build a series of functional prototypes. These prototypes were subsequently evaluated by newsroom experts through a combination of heuristic evaluation, a think-aloud protocol, and a value checklist to gauge their effectiveness and usability in a real-world context.


Key insights: deconstructing the transparency paradox

The research identified five core tensions that news organizations must navigate when implementing AI transparency measures:


  1. Human expectations: Readers generally expect journalism to have a human touch. Overt AI disclosure can disrupt the traditional sense of authorship and connection.

  2. Verifiability: Even with human oversight, users desire a way to audit the information presented. They want to understand the origins of the facts, especially when an AI is involved.

  3. Cognitive fatigue: Overloading users with technical details about how an AI was used can be counterproductive, as it creates confusion rather than confidence.

  4. AI literacy: A significant portion of the audience may not understand how generative AI works, making generic disclosures ineffective or even misleading.

  5. Editorial control: For transparency to be meaningful, newsrooms require robust tools to manage, oversee, and take responsibility for any AI-assisted content.



The five parts of the transparency paradox


Prototypes for adaptive transparency

Based on these findings, I developed several prototypes designed to test adaptive user experience (UX) strategies within existing Schibsted news products such as Omni and Aftonbladet.

  • Disclosure Builder (for Omni’s CMS)

    An internal tool designed to give editors granular control over how AI involvement is communicated. It features an "AI Impact" score to assess the extent of AI influence, editable disclosure messages to match brand tone, and options for linking to source materials. (A minimal sketch of such a tool's data model follows below.)

  • Chain of Thought and Tone

    • The first feature, integrated into the Hej Aftonbladet chatbot, allows users, with a simple click, to see the step-by-step logic the AI used to generate an answer to a complex query.

    • The second feature explores user personalization, allowing readers to adjust settings like tone and preferred topics while ensuring editorial guardrails remain firmly in place.

  • Content settings

    A feature that allows users to switch between different formats of an article (e.g., text, audio, video) while maintaining consistent, editorially approved information.



    Three prototypes for adaptive transparency
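
To make the Disclosure Builder concept concrete, here is a minimal sketch, in TypeScript, of what such a tool's underlying data model might look like. The names (AiDisclosure, renderDisclosureLabel) and the scoring thresholds are illustrative assumptions, not Omni's actual CMS schema.

```typescript
// Hypothetical data model for a Disclosure Builder; for illustration only,
// not Omni's actual CMS schema.

/** How strongly AI shaped the piece, from 0 (none) to 1 (fully generated). */
type AiImpactScore = number;

interface SourceLink {
  label: string; // e.g. "Original wire report"
  url: string;
}

interface AiDisclosure {
  impactScore: AiImpactScore; // set by the editor, surfaced to readers as a simple scale
  message: string;            // editable disclosure text, matched to brand tone
  sources: SourceLink[];      // optional links letting readers audit the facts
  reviewedBy: string;         // the editor taking responsibility for the content
}

/** Render a short reader-facing label from the editor's settings. */
function renderDisclosureLabel(d: AiDisclosure): string {
  const level =
    d.impactScore < 0.2 ? "minimal" : d.impactScore < 0.6 ? "moderate" : "substantial";
  return `${d.message} (AI involvement: ${level}; reviewed by ${d.reviewedBy})`;
}

// Example: an editor configures a disclosure for an AI-assisted summary.
const disclosure: AiDisclosure = {
  impactScore: 0.4,
  message: "This summary was drafted with AI assistance and edited by our newsroom.",
  sources: [{ label: "Full interview transcript", url: "https://example.com/transcript" }],
  reviewedBy: "Newsroom desk",
};
console.log(renderDisclosureLabel(disclosure));
```

The point of the sketch is the separation of concerns: the editor controls the score, message, and sources, while the reader-facing label is generated consistently from those settings.
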


Recommendations for newsrooms

The evaluation of these prototypes by experts led to a set of actionable recommendations for any newsroom looking to implement generative AI responsibly.

  1. Quantify substantial impact: Move beyond vague labels. Develop a clear framework to measure and communicate how significantly AI has influenced a given story. This creates a common, understandable language for both editors and readers.

  2. Implement chain of thought transparency: When applicable, show users how the AI arrived at its conclusions. Revealing the AI’s reasoning process, even in a simplified form, demystifies the technology and strengthens auditability.

  3. Strengthen editorial control in the CMS: Integrate transparency tools directly into existing editorial workflows. Giving editors intuitive, context-aware controls ensures that transparency is not an afterthought but a core part of the publishing process.

  4. Strategize personalization carefully: While giving users choices is valuable, it must be balanced with strong ethical guidelines. Ensure that personalization features do not allow users to filter out essential information or create unintended filter bubbles.

  5. Prioritize on-demand transparency: Avoid information overload by using a layered approach. Instead of displaying all details upfront, use prompts like a "How This Was Made" button that allows curious readers to explore further, giving users control over the level of detail they engage with. (A minimal sketch of such a layered disclosure follows this list.)
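
To illustrate recommendations 1 and 5 together, here is a minimal sketch of a layered "How This Was Made" panel that combines a quantified impact score with on-demand detail. All names and thresholds are illustrative assumptions, not a description of any Schibsted implementation.

```typescript
// Hypothetical layered, on-demand transparency panel: the UI shows layer 0
// by default and reveals further layers only when the reader asks for them.

interface TransparencyLayer {
  title: string;
  body: string;
}

/** Build disclosure layers from least to most detailed. */
function buildLayers(impactScore: number, reasoningSteps: string[]): TransparencyLayer[] {
  const layers: TransparencyLayer[] = [
    {
      title: "How this was made",
      body:
        impactScore < 0.5
          ? "AI assisted with parts of this story; an editor reviewed everything."
          : "AI generated substantial parts of this story under editorial oversight.",
    },
    {
      title: "AI impact",
      body: `Estimated AI influence on this story: ${Math.round(impactScore * 100)}%.`,
    },
  ];
  // Optional deepest layer: the AI's step-by-step reasoning, when available.
  if (reasoningSteps.length > 0) {
    layers.push({
      title: "Step-by-step reasoning",
      body: reasoningSteps.map((s, i) => `${i + 1}. ${s}`).join("\n"),
    });
  }
  return layers;
}

// Example: a curious reader expands all layers of the panel.
const layers = buildLayers(0.4, [
  "Summarised the source interview",
  "Drafted headline options",
]);
layers.forEach((l) => console.log(`${l.title}\n${l.body}\n`));
```

The design choice here is that the least detailed layer always comes first: readers who only want reassurance get a single sentence, while curious readers can drill down to the reasoning steps without the default view ever overwhelming anyone.
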


The future of AI transparency

This research underscores that a one-size-fits-all approach to AI disclosure is insufficient: to build trust, AI transparency in journalism must move beyond simple labels and become adaptive and explainable. The path forward lies in developing adaptive, user-centric, and editorially driven transparency strategies. The next phase of this work will involve expanding testing to a diverse range of everyday readers to further refine these concepts. By turning transparency into an experience rather than a simple declaration, news organizations can create a more resilient and trustworthy relationship with their audience.


You can find a presentation summarising Pranit's work here.

Reach out to molly.gronlund.muller@schibsted.com regarding any questions about the thesis.



Pranit Popli



