In July 2025, WeTransfer quietly updated its terms and conditions, and the implications for creatives, businesses and anyone using the platform are sobering.

The update caused a wave of criticism on social media and tech forums, with the backlash fuelled by growing concerns over data privacy and AI transparency. Users, especially creatives and professionals, were alarmed by the idea that their work could be used without consent for AI training. 

So, what’s the big deal? And why are people so concerned?

The public reaction and WeTransfer’s response

Under the initial updated terms, WeTransfer quietly gave itself permission to train AI on any content users upload, without explicitly asking for consent or informing them.

In practice, this meant that the next time someone used WeTransfer to send design drafts, code, marketing plans, research or other intellectual property, they could be handing over rights to that work: not just to store or transfer it, but to feed it into AI systems, giving the platform the opportunity to commercialise it while the creator loses control over how it’s reused.

WeTransfer later clarified that it does not use AI to process customer content, nor does it sell user data to third parties. The company explained that the now-controversial clause was intended to cover hypothetical future features, such as AI-powered moderation tools that are not currently in use. Following the backlash, WeTransfer has since removed all references to machine learning from the updated terms.

The new age of ‘free’ services 

However, this issue isn’t confined to WeTransfer’s T&Cs: it’s indicative of a deeper shift in how we must think about free-to-use services in an AI-powered world.

For years, the trade-off with free platforms was simple: tolerate ads (or some form of user data collection) in exchange for convenience. But now, your work – your words, images and creations – could be absorbed as training fuel for corporate AI. And this changes everything. 

This isn’t just a privacy concern, either; it’s a corporate governance and IP control issue. Businesses, large and small, may be inadvertently leaking proprietary information, designs or strategies into models that they no longer control.

This is the essence of shadow AI: the invisible, largely unchecked use of AI by employees who bypass formal governance channels, risking long-term damage to your organisation.

Shadow AI and the corporate blind spot 

The rise of shadow AI means that organisations must get serious about how their data moves, especially as staff experiment with generative AI tools, or use everyday platforms that now quietly integrate with AI. 

We’re seeing the cracks widen across the digital landscape, and it’s not just companies like WeTransfer. Microsoft, by contrast, shows what governed AI can look like: tools like Microsoft Copilot integrate AI into its productivity apps with enterprise-grade governance, particularly when paired with Microsoft Purview. With Purview, organisations can monitor, classify and protect data across their environment, flagging inappropriate use and mitigating AI-related risks before they spiral out of control.
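
To make the classification piece concrete: sensitivity labels are the unit Purview works with, and they can be applied to files programmatically as well as through the compliance portal. The snippet below is a minimal sketch rather than production code: it assumes a Microsoft Graph app registration with the relevant (metered) permissions, an access token already acquired, and placeholder drive, file and label IDs.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app-access-token>"           # placeholder: acquired via MSAL / client credentials
DRIVE_ID = "<sharepoint-drive-id>"     # placeholder
ITEM_ID = "<file-item-id>"             # placeholder
LABEL_ID = "<sensitivity-label-guid>"  # placeholder: e.g. your 'Confidential' label

# Ask Graph to stamp a Purview sensitivity label on the file, so that
# downstream DLP and sharing policies can act on the classification.
resp = requests.post(
    f"{GRAPH}/drives/{DRIVE_ID}/items/{ITEM_ID}/assignSensitivityLabel",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sensitivityLabelId": LABEL_ID,
        "assignmentMethod": "standard",
        "justificationText": "Classified during data governance audit",
    },
    timeout=30,
)
resp.raise_for_status()  # 202 Accepted: labelling is processed asynchronously
print("Labelling request accepted:", resp.status_code)
```

Once a file carries a label, Purview’s DLP and sharing policies can act on it automatically, which is the ‘classify and protect’ loop described above.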

As our Cyber & Information Security Manager, James Scott, explains: “In a modern tech ecosystem, there are many levels of complexity that internal IT teams need to cover in order to keep their data and staff safe. As a starting point, tools like Microsoft SharePoint should be used as a central repository with centralised, managed sharing options, avoiding the need for external sharing platforms like WeTransfer where possible. Then, working with companies like Mimecast gives you a secure channel for external communication, and conducting AI readiness reviews to identify the potential gaps – and therefore risks – will help ensure that information egress is monitored and managed correctly by your team. Finally, make yourself aware of which cloud platforms are being used and review third-party platforms’ terms and conditions regularly.”

This is the kind of intentional architecture needed to survive and thrive in the AI era, especially as more platforms adopt vague and exploitative terms under the guise of innovation. 

So, what can you do? 

  1. Audit your workflows: know which tools your team are using to transfer or process content (see the sketch after this list for one way to start).
  2. Educate your employees: make it clear that ‘free’ doesn’t necessarily mean safe. 
  3. Leverage trusted AI providers: if you’re using AI, choose providers with transparent models and clear data-handling terms, such as Microsoft Copilot. 
  4. Secure your data: utilise the tools at your disposal, such as Microsoft Purview, to safeguard your data and guard against leakage. 
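
On the audit point (step 1 above), you can get a surprisingly useful first picture with very little code. The sketch below is illustrative only: it assumes a Microsoft 365 estate, a Microsoft Graph access token with Files.Read.All already acquired, and a placeholder drive ID. It walks the top level of a SharePoint document library and flags any ‘anyone with the link’ shares:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app-access-token>"        # placeholder: acquired via MSAL / client credentials
DRIVE_ID = "<sharepoint-drive-id>"  # placeholder: the library to audit

headers = {"Authorization": f"Bearer {TOKEN}"}

def flag_external_links(item_id: str, path: str) -> None:
    """Print any sharing links on an item that reach outside the tenant."""
    perms = requests.get(
        f"{GRAPH}/drives/{DRIVE_ID}/items/{item_id}/permissions",
        headers=headers, timeout=30,
    ).json().get("value", [])
    for perm in perms:
        link = perm.get("link")
        if link and link.get("scope") == "anonymous":
            print(f"ANONYMOUS LINK: {path} ({link.get('type')} access)")

# Walk the top level of the library; a real audit would recurse into
# folders and page through results via @odata.nextLink.
items = requests.get(
    f"{GRAPH}/drives/{DRIVE_ID}/root/children",
    headers=headers, timeout=30,
).json().get("value", [])

for item in items:
    flag_external_links(item["id"], item["name"])
```

Even a crude report like this tends to surface forgotten anonymous links, which are exactly the kind of unmanaged information egress James describes.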

Take WeTransfer’s new terms as a warning sign: as AI becomes ever-present, platforms will increasingly seek to extract value from your data. That includes your most valuable assets – your work and your team.