Google’s Cautionary Tale on Chatbots
Generative AI is in high demand, and chatbots are the “poster child” of the movement. After all, it was ChatGPT that demonstrated the technology’s possibilities: its often human-sounding answers and resourceful applications appeal to the imagination. Google has its own chatbot, Bard, which competes with OpenAI’s product.
Secure AI
Google CEO Sundar Pichai is very cautious about Bard’s abilities. At the end of March, he said on a New York Times podcast that the chatbot was intended for “inspiration”, not really as a workhorse for professionals.
Google’s caution suggests that it does not position Bard as a viable resource for serious work, at least not yet, even though the chatbot is making headway with improved coding skills and mathematical reasoning.
What Google is aiming for is the safe handling of AI models. This means, for example, that cybersecurity must be in order and that data entered into a model must not leak to the outside world. That is why it recently introduced the Secure AI Framework, which reflects how Google views AI far better than the restrictions it places on its own staff.
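To make the leakage concern concrete, here is a minimal sketch of the kind of client-side guardrail such a policy implies: redacting obvious secrets before a prompt ever leaves the company network. The regex patterns and placeholder labels are illustrative assumptions, not part of any Google tooling, and real data-loss-prevention systems cover far more categories.

```python
import re

# Illustrative patterns only; real DLP tooling covers many more categories.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders before the text
    leaves the company network."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: contact jan@example.com, key sk-abc123def456ghi789jkl"
print(redact(prompt))
# -> Summarize: contact [EMAIL REDACTED], key [API_KEY REDACTED]
```

The design point is that redaction happens before the text reaches any external chatbot, so even a logging or retention mishap on the provider’s side cannot expose the original values.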
Copilots vs Chatbots
For applications based on the technology of competitor OpenAI, Microsoft in particular uses the term ‘copilot’. These are the same underlying LLMs (large language models) that also power ChatGPT, but applied to a specific purpose and with a host of restrictions. GitHub Copilot or the Copilot inside an Office 365 application, for example, is not as talkative as a general-purpose chatbot: it focuses on a specific task and is constrained in advance not to deviate from it. We have already referred to this as a ‘chatbot with a job’.
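As an illustration of what a ‘chatbot with a job’ looks like in practice, the sketch below narrows a general chat model with a task-scoped system prompt. It assumes the OpenAI Python SDK; the model name, prompt wording, and refusal text are placeholders, and production copilots such as GitHub Copilot add far more layers (fine-tuning, filtering, context injection) than a single prompt.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A task-scoped system prompt narrows a general chat model to one job.
SYSTEM_PROMPT = (
    "You are a spreadsheet formula assistant. Only answer questions about "
    "spreadsheet formulas. For anything else, reply exactly: "
    "'I can only help with spreadsheet formulas.'"
)

def copilot_reply(user_message: str) -> str:
    """Send one user message through the fixed 'job description'."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        temperature=0.2,  # low temperature keeps answers on-task
    )
    return response.choices[0].message.content

print(copilot_reply("Sum column B where column A equals 'Q3'"))  # on-task
print(copilot_reply("What do you think of today's news?"))  # off-task: the prompt instructs the model to refuse
```

The same underlying LLM answers both calls; only the fixed system message makes it behave like a copilot rather than a free-ranging chatbot.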
Ultimately, then, it is not surprising that Google is hesitant to simply use Bard for its own work: the company explicitly calls the chatbot an experiment.
Worrying Development
Even though companies such as Samsung and Amazon do not simply allow their staff to use an AI assistant, Reuters cites research indicating that 43 percent of professionals use ChatGPT anyway. This is a worrying development, as it could lead to sensitive data leaking to the outside world.
Google’s cautionary tale on chatbots is an important one. While the technology is advancing rapidly, the security of AI models must be in order, and companies should be wary of letting staff use AI assistants unchecked, as sensitive data could leak to the outside world.