The European Commission has issued a directive for its staff not to use generative AI services for critical work. This week, the European Commission published a series of internal guidelines for its staff relating to the use of generative AI tools such as ChatGPT and Bard. The document, titled “Guidelines for Staff on the Use of Generative Artificial Intelligence Tools Available Online,” was viewed by POLITICO and made available through the Commission’s internal information system.
The document aims to help staff assess the risks and limitations of generative AI tools available online and to establish the conditions for their safe use in the work of the Commission. The accompanying note is clear in its purpose: “The guidelines cover third-party tools that are publicly available online, such as ChatGPT. They are intended to help European Commission staff understand the risks and limitations that tools available online can pose and to support their appropriate use.”
The first risk outlined in the document is the potential disclosure of sensitive information or personal data. The guidelines note that any input provided to an online generative AI model is passed on to the AI provider, which can then use that information to feed outputs generated in the future that are available to the public. To prevent this from causing problems, Commission staff are prohibited from sharing “any information that is not already in the public domain, nor personal data,” with a generative AI model available online.
Staff should also be aware that the AI’s answers may be inaccurate or biased, and they must check whether AI-generated output could violate intellectual property rights. Most importantly, employees should never “cut and paste” AI-generated output directly into official documents. In addition, staff are asked to avoid using these tools in “critical and time-sensitive processes.” The guidelines make an exception for the Commission’s own internal AI services, which fall outside their scope.