A staggering 53 journalists across 29 news organizations have reportedly had their work plagiarized on Nota News, a network of local news sites run by the AI company Nota. The revelation has sent shockwaves through the journalism community, prompting immediate reviews of existing partnerships, and it highlights the urgent need for transparency and accountability when integrating AI into content creation.
Nota’s “Experiment” Unravels Amidst Plagiarism Allegations
Nota, an AI firm, is now grappling with severe backlash after its experimental network, Nota News, was discovered to have lifted content from numerous journalists and published it without attribution. The sites were ostensibly intended to aggregate public information, but instead ran articles from local outlets through Nota's tools and republished them nearly verbatim. The breach of journalistic ethics has led to significant repercussions.
Several news organizations have taken swift action. The Boston Globe has instructed its staff to cease using Nota products and is actively working to terminate its contract with the company. Concurrently, the Institute for Nonprofit News is disseminating Poynter’s findings to its members, raising awareness of the potential risks associated with Nota’s services.
Doubt Cast on AI’s Role in Journalism and Data Ethics
Nota CEO Josh Brandau has characterized the Nota News sites as an “experiment,” attributing the plagiarism to human error by contractors rather than the AI software itself. However, this explanation has been met with skepticism. The fact that at least three Nota clients had their own content plagiarized raises serious questions about the integrity and security of Nota’s broader services, including those offered to other newsrooms.
Further complicating the narrative, none of the queried media organizations granted Nota permission to train its AI tools on their data. This contrasts sharply with deals like Meta’s $150 million agreement with News Corp, which explicitly involves content licensing for AI training. The company’s previous statements about building its large language model, Polaris, by refining open-source models with “high quality journalism” from clients now appear deeply problematic, especially when unauthorized data use is suspected.
📊 Key Numbers
- Journalists impacted: 53
- News outlets affected: 29
- Nota News sites launched: September
- Nota News sites closed: March 31
- Contractor Jorge Rodríguez fired: March 30
- Contractor Isabella Rolz terminated: April 7
- Rolz’s bylined plagiarized stories: 41
- Rodríguez’s bylined plagiarized stories: 30
- Rolz’s weekly story target: Upward of 80
- Boston Globe ceased Nota tool use: April 3
- Meta/News Corp deal value: $150 million
- Meta/News Corp deal duration: 3 years
🔍 Context
The plagiarism scandal involving AI company Nota and its Nota News network exposes a critical gap in the responsible deployment of AI within the media industry: the urgent need for verifiable attribution and ethical data sourcing in the AI content tools rapidly entering newsrooms. The situation directly challenges the trend of AI companies monetizing journalistic content without clear consent or compensation frameworks, in contrast to explicit licensing agreements such as Meta's partnership with News Corp.
The current AI landscape is increasingly focused on content creation at scale, but this incident underscores the risks of inadequate oversight and potential data misuse. What has changed in the last six months is the increasing adoption of AI tools by news organizations, making the consequences of such ethical breaches more immediate and widespread. The reliance on contractors without robust quality control mechanisms, as suggested by Nota’s CEO, is a notable point of scrutiny.
💡 AIUniverse Analysis
★ LIGHT: The rapid closure of Nota News sites and the termination of responsible contractors, alongside The Boston Globe’s decisive action, signal a crucial industry response to ethical breaches. It demonstrates that media organizations are willing to sever ties with AI vendors when trust is compromised, pushing for greater accountability in AI development and deployment.
★ SHADOW: Nota’s framing of the plagiarism as an isolated “experiment” by contractors, separate from its core AI tools sold to newsrooms, warrants deep skepticism. The concurrent use of similar AI tools for both the experimental sites and client services, combined with unacknowledged data training practices and a questionable attempt to impose NDAs on a terminated contractor for owed payment, suggests a systemic issue rather than mere oversight. The legal implications of the NDA demands also raise serious concerns about Nota’s labor practices.
For this situation to matter in 12 months, AI vendors must establish demonstrably transparent data sourcing policies and robust plagiarism detection mechanisms, and media outlets need to conduct rigorous due diligence before integrating any AI solution.
⚖️ AIUniverse Verdict
⚠️ Overhyped. Nota’s framing of the plagiarism as an isolated “experiment” is undercut by the broad scope of the copying and the subsequent client backlash, which point to systemic problems with data ethics and contractor oversight rather than a one-off lapse.
Developers: Developers must ensure that AI tools intended for newsrooms have built-in safeguards against plagiarism and unauthorized content usage.
Enterprise & Mid-Market: Enterprise media companies must rigorously vet AI vendors, conduct thorough due diligence, and establish clear contractual terms regarding data usage and AI output.
General Users: Users may experience a decline in trust for AI-generated news content if incidents of plagiarism and lack of transparency continue to surface.
⚡ TL;DR
- What happened: AI company Nota’s news sites were found to have plagiarized content from numerous journalists, leading to backlash.
- Why it matters: This incident underscores significant ethical and trust issues surrounding AI tools in journalism and data usage.
- What to do: News organizations must exercise extreme caution and demand transparency from AI vendors regarding data sources and content integrity.
📖 Key Terms
- Nota News
- A network of local news sites operated by the AI company Nota, which was found to have plagiarized content.
- Polaris
- The large language model developed by Nota, reportedly built using open-source models refined with client journalism.
Analysis based on reporting by Poynter.

