Google has warned its employees not to disclose confidential information to its AI chatbot, Bard, and not to directly use the code it generates.
The policy isn't surprising, as the company also advised users not to include sensitive information in their conversations with Bard due to privacy concerns.
Other major companies have likewise warned their staff against leaking proprietary documents or code and have banned them from using other AI chatbots.
Google's internal warning, however, raises a broader concern: AI tools built by private companies may not be trustworthy, especially when the creators themselves avoid them over privacy and security risks.
The warning against directly using Bard-generated code also undercuts Google's claim that the chatbot helps developers become more productive.
Google told Reuters that the internal caution was issued because Bard can produce "unwanted code".
This could result in buggy programs or complex, bloated software that would cost developers more time to fix than if they had written the code from scratch without AI.