Google has warned its employees not to disclose confidential information to its AI chatbot, Bard, and not to use code generated by it.
The policy is not surprising, as the company has also advised users not to include sensitive information in their conversations with Bard, citing confidentiality concerns.
Other major companies have also warned their staff against leaking proprietary documents or code and banned them from using other AI chatbots.
Google's internal warning, however, raises concerns that AI tools created by private companies are not trustworthy — especially if the creators themselves avoid using them over privacy and security risks.
The company's warning to employees not to use code generated by Bard directly undermines Google's claims that its chatbot can help developers become more productive.
Google told Reuters that the internal ban was issued because Bard may produce "unwanted code".
This could lead to buggy or bloated, overly complex software that costs developers more time to fix than writing it from scratch without AI would have.