2.
Inclusion and Respect. The use and development of AI should uplift communities,
connecting them effectively with resources to thrive, especially those historically marginalized.
As public stewards, we will use tools respectfully to reflect values of equity and social justice.
3.
Transparency and Accountability. We acknowledge the limits of foresight, but transparency
builds trust and enables collective learning. When AI is used, we must disclose that use
responsibly and share our workflows freely with other public servants and with the public.
4.
Innovation and Risk Management. We embrace responsible experimentation that maintains
control and respects privacy and security while developing uses that drive efficiency, dialogue,
and better service. We understand risks may not be fully apparent initially and commit to
proactive risk assessment.
B.
USE GUIDELINES
1.
Ask Early and Often. Generative AI creates content based on your inputs, so it pays to try
different questions (also known as “prompts”) and see how the tool responds. You can
specify the tone, style, and length of a text response, or the attributes and qualities of an
image. The more you experiment with different ways of steering the tools, the faster you will
learn how to instruct them to get the best results.
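Sample prompts that steer tone, style, and length (illustrative wording only — adapt to your task):
"Draft a 150-word memo in a professional tone announcing a change to office hours."
"Rewrite the paragraph below in plain language and keep it under 100 words."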
2.
Fact Check. Verify all AI-generated content, especially content intended for public use. Generative
AI can swiftly produce clear prose, but the information may be inaccurate or outdated. Research
claims to ensure accuracy. Watch for incorrect facts, events, links, or references, as well as
biased information that could harm vulnerable groups, including racial, ethnic, and gender
minorities and people with disabilities. We must actively mitigate risks from AI while benefiting
from its capabilities. As public stewards, we have an obligation to use these tools responsibly.
Sign-off at the Agency Director or Assistant Commissioner level is required before production use.
3.
Disclose. Label content created with generative AI as AI-assisted. Transparency is crucial for AI-
generated content, even when AI is used sparingly. Disclosing AI involvement fosters trust and
aids in error identification. Document the AI model, prompts, and methods employed; this
documentation helps colleagues and stakeholders understand and safely reuse your work.
Sample AI-generated content disclosures:
"This content was generated with the aid of ChatGPT and subsequently revised by Bob
Smith."
"This text was summarized for clarity using Google Bard."
4.
Sensitive Information. When prompting the AI or using AI models, do not disclose sensitive
or private information. We aim to enable responsible AI use while safeguarding sensitive
information.
a)
Do not share personally identifiable information (PII) about residents, colleagues, or
yourself. Do not share confidential or sensitive content.
b)
Do not use AI tools to transcribe or summarize meetings where sensitive topics are
discussed.
c)
Do not share any information that you wouldn’t share publicly.
III.
PROCEDURES
A.
HOW TO’S
1.
Drafting. Generative AI tools like ChatGPT can help draft memos, letters, and job descriptions.
When creating prompts, specify any format preferences, keywords, technical terms, or
phrases to include or avoid. Remember, you can tell the software how long you would like the
response to be (e.g., a word count), the style (e.g., professional or informal), and the language
(e.g., “please respond in Spanish”).
Example: Memos. In government we often have to write short documents that argue why a
policy should be adopted or a decision should be made. For instance, try