I. Artificial Intelligence Risks and Best Practices
A. Overview
1. Generative Artificial Intelligence (GAI) is a type of artificial intelligence that automatically produces (generates) various types of content, including text, images, audio, video, and other materials. ChatGPT is currently the most popular and well-known GAI solution. In a typical use case, a user submits a question, request, or “prompt” to the GAI system (e.g., explain how a bill becomes a law, or write a job description for a chief information officer), and within seconds the GAI system returns results. GAI can also generate non-narrative text, such as semiconductor chip designs, and non-textual content, such as images, artwork, videos, and music.
2. The Risks and Mitigation of Risks when Using GAI in the Workplace. The following guidance can be useful to companies when implementing an AI use policy for their employees.
a. Quality Control and Accuracy: As acknowledged even by the large developers of GAI solutions, GAI can produce misleading, inappropriate, and inaccurate results. One of the strengths of GAI is that it can produce very convincing output, using complex and sophisticated language and phraseology, that appears to be true or accurate but in fact is not, or that contains significant errors. Thus, if and when approved to use GAI, employees should carefully review, verify, and edit the results for accuracy.
Employees may be held responsible and accountable – both within Company and by external third parties – for any materials they develop and distribute using GAI. Employees will not be relieved of obligations, responsibilities, or liability by asserting that they relied on a GAI solution when preparing the materials. Moreover, because employees are agents for and represent the interests of Company, Company will likewise be held responsible, liable, and accountable for all materials and content developed by employees using GAI.