Last month, Italy became the first country to restrict national access to ChatGPT. Companies such as Samsung, Amazon and JP Morgan have banned employee use of ChatGPT, in some cases after leaks of sensitive company information. These restrictions stem in part from the novel and, in some cases, poorly understood nature of these technologies.
As the age of AI begins, privacy and data protection are emerging as key issues, particularly around the use of AI tools in corporate settings. Companies worry that sensitive data entered into AI tools could be stored and eventually disclosed to competitors and other parties.
Governmental concerns come from a different angle. Regulators in European countries such as Italy have a wide range of concerns, including the use of personal data to train the underlying algorithms, the handling of personal data when AI systems are in use, and the potential harms AI outputs could cause, such as deception and manipulation.
Italy has since reinstated access to ChatGPT and similar tools. Company bans also appear to be temporary, pending the development of clear and secure working models for the use of AI. What is clear is that organisations and individuals alike see huge potential benefits in AI technology but will continue to approach it tentatively, given our early and still-emerging understanding of its benefits and harms.