Microsoft has prohibited its employees from using the DeepSeek app, citing concerns over data security and potential propaganda influence. This announcement was made by Microsoft’s vice chairman and president, Brad Smith, during a recent Senate hearing. The company is worried that user data could be stored on servers in China and that the AI’s responses might be shaped by Chinese government agendas.
The core issue is that DeepSeek’s privacy policy indicates user data is stored on servers in China, making it subject to Chinese law, which requires companies to cooperate with the country’s intelligence agencies. Additionally, DeepSeek is known to censor topics the Chinese government considers sensitive.
Interestingly, despite these concerns, Microsoft offers DeepSeek’s R1 model on its Azure cloud service. The key difference is that customers can download the open model and host it on their own servers, so no data is sent back to China. However, self-hosting doesn’t eliminate other risks, such as the model spreading propaganda or generating insecure code.
Microsoft has taken steps to mitigate these risks. According to Smith, Microsoft has modified DeepSeek’s AI model to remove “harmful side effects”. The company also stated that DeepSeek underwent rigorous safety evaluations before being made available on Azure.
While Microsoft emphasizes security concerns, the situation also raises questions about competition: DeepSeek’s chatbot app competes directly with Microsoft’s own Copilot search app. That said, Microsoft doesn’t ban all competing chat apps from its Windows app store. Perplexity, for example, is available, while apps from Google, including Chrome and Gemini, are not.