Bard not trained on Gmail data, glitch part of early experiment: Google


Tech giant Google has officially stated that its new AI chatbot is not trained on Gmail data. The Bard AI chatbot, which Google describes as an “early experiment that allows you to collaborate with generative AI,” was recently made available with limited access in the United States and the United Kingdom.


However, concerns were sparked by Bard’s response to a question from Kate Crawford, a Microsoft researcher. The chatbot stated that its training dataset included internal data from Google, including data from Google Search, Gmail, and other products. This led to a massive backlash against Google from Gmail users worried about the privacy of their personal information and communications on the platform. Former Google AI ethicist Margaret Mitchell also tweeted her concerns about data security following the apparent revelation.


Google immediately clarified that Bard is not trained on Gmail data, explaining that the erroneous response was a chatbot ‘hallucination’. The company said Bard is an early experiment based on Large Language Models (LLMs) and is therefore prone to errors.


People using the ChatGPT rival for the first time have also discovered several other mistakes made by the chatbot. Bard claimed that Google had already shut it down and that the third month of the year is ‘Maruary’.


Google had said earlier in a blog post, “You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post.”


Access to Google’s Bard is limited. Users in the United States and the United Kingdom can sign up for a waitlist at bard.google.com.

Originally appeared on: TheSpuzz
