Sec. 22J.1. Background and Findings.
Sec. 22J.2. Definitions.
Sec. 22J.3. Roles and Responsibilities.
Sec. 22J.4. Enforcement.
Sec. 22J.5. Promotion of the General Welfare.
(a) Many technologists, historians, scientists, elected officials, and other societal leaders believe that Artificial Intelligence, which has advanced significantly with the release of generative systems, is revolutionizing our world and will continue to do so.
(b) Local governments have been using AI products since the early 1990s. However, beginning in the 2010s, significant advancements in AI technology, including machine learning and deep learning, led to a surge in the acquisition of AI products by local governments. With the advent of Generative AI products like ChatGPT and others that produce original content, the potential benefits and risks to San Francisco residents and workers have increased.
(c) Policymakers are trying to avoid repeating past mistakes with technological developments, such as the failure to regulate social media before it caused widespread societal harms, and to find ways to protect people from the most predictable problems of this newest wave of technological advancement.
(d) While City government, like government at all levels, continues to develop the best tools to both harness the benefits and protect against the harms of emerging AI technology, it is important that policymakers and the public understand the AI technologies the City uses now and will use in the future.
(e) The City has a decentralized Information Technology (IT) system. Most City departments have their own IT units, and as of 2024 the City’s Department of Technology (“DT”) generally did not know which AI products and systems were in use by departments.
(f) This Chapter 22J remedies this problem by requiring the City’s Chief Information Officer (“CIO”) to create a public inventory of AI technologies used within City government. The Inventory will include basic facts about each technology, including its purpose, accuracy, biases, and limits.
(g) As of 2024, the City used AI technologies in a variety of ways. Here are just a few illustrative examples:
(1) The Department of Technology used AI to review activity on IT infrastructure for network security, intrusion detection, and to identify other potential cybersecurity threats.
(2) The SF311 mobile application used AI to make upfront service-type recommendations based on the user’s description or picture of the issue, using a model trained on years of service request (SR) data.
(3) The Department of Public Health (DPH) Radiology Department used an AI-based medical imaging tool to support the confirmatory diagnosis of cerebrovascular events (strokes). The AI system reviewed imaging studies (CT scans) and provided supporting information to the physicians who make the diagnoses.
(h) The use of AI technologies by local governments can offer many benefits including but not limited to increased efficiency and effectiveness of public services, quick and accurate analysis of large volumes of data, automation of routine administrative tasks, facilitation of communication between residents and their local government through chatbots and virtual assistants, and prediction of potential hazards.
(i) However, with the increased use of AI technologies, local governments also potentially subject their workers, residents, and visitors to new risks, including:
(1) Privacy Concerns: AI systems often collect, store, and analyze vast amounts of data, which can include personal information of individuals. This raises concerns about privacy breaches, unauthorized data sharing, and surveillance, potentially leading to a loss of anonymity in public spaces.
(2) Bias and Discrimination: AI algorithms can perpetuate or amplify existing biases if they are trained on data that reflects societal inequities. This can result in discriminatory outcomes in areas such as law enforcement, housing, and public services, disproportionately affecting marginalized communities.
(3) Lack of Transparency: Many AI systems operate as “black boxes,” meaning the processes and decision-making criteria are not transparent to the public. This can erode trust and make it challenging for individuals to understand how decisions that affect their lives are made.
(4) Job Displacement: The automation of certain government functions through AI can lead to job losses in the public sector or in industries reliant on those functions, impacting the employment landscape and economic stability of communities.
(5) Security Risks: AI systems can be vulnerable to cyberattacks and exploitation. If malicious actors gain access to these systems, they can manipulate data, disrupt services, or compromise sensitive information, potentially leading to significant harm to individuals.
(6) Dependence on Technology: Increasing reliance on AI for critical services may create vulnerabilities. Technical failures or misconfigurations can result in service interruptions or errors that affect public safety and welfare.
(7) Legal and Ethical Concerns: The application of AI in sensitive areas (e.g., policing, social services) raises legal and ethical concerns about the appropriateness of AI decisions in life-altering contexts, such as risk assessment for individuals involved in the justice system or the allocation of social support.
(8) Erosion of Constitutional Rights and Civil Liberties: Heightened surveillance and data collection through AI can infringe on constitutional rights and civil liberties, prompting concerns about the potential overreach of government authority and reduced freedoms for individuals.
(9) Public Mistrust: The combination of the above risks can lead to a general sense of mistrust in government, where residents may feel that the government is not acting in their best interests or that their rights are being compromised.
(j) In order to promote the ethical, responsible, and transparent use of AI tools, it is important that policymakers and the public are aware of the AI technologies that the City uses, including information critical to understanding those technologies.
(Added by Ord. 288-24, File No. 241022, App. 12/19/2024, Eff. 1/19/2025)
For the purposes of this Chapter 22J, the following definitions shall apply:
“AI” means Artificial Intelligence.
“AI Technology” means logical and physical technology that uses Artificial Intelligence.
“Algorithms” means a set of rules that a machine follows to generate an outcome or a decision.
“Artificial Intelligence” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
“Chatbot” means a computer program that simulates conversations.
“CIO” means the City’s Chief Information Officer, or designee.
“City” means the City and County of San Francisco.
“COIT” means the Committee on Information Technology or one of its committees.
“Data” means information prepared, managed, used, or retained by a department or employee of the City or a data user relating to the activities or operations of the City.
“Department” means any unit or component of City government, including but not limited to boards and commissions, departments, offices, agencies, or officials.
“Department Head” means the head of a Department, or designee.
“DT” means the Department of Technology.
“Inventory” means the information collected and published in accordance with Section 22J.3.
“Training Data” means the dataset that is used by a machine learning model to learn the rules.
(Added by Ord. 288-24, File No. 241022, App. 12/19/2024, Eff. 1/19/2025)
(a) Chief Information Officer.
(1) Within six months of the effective date of this Chapter 22J, the CIO shall collect the data requested under subsections (b)(1)-(22) from Departments using AI technology, and begin publishing the Inventory responses on the DataSF platform.
(2) Within one year of the effective date of this Chapter 22J, the Inventory shall be complete, including any and all AI technology used by the City. In addition, within one year of the effective date, the CIO shall update the Inventory with any AI technology that the City is in the process of purchasing, borrowing, or receiving as a gift, with or without the exchange of compensation or other consideration, before acquiring the technology and/or putting the technology into use. If the technology is never obtained or is no longer used, it shall be removed from the Inventory.
(b) Department Head. The Department Head shall disclose and submit to the CIO for inclusion on the Inventory the AI technologies the Department has procured, borrowed, or received as a gift, with or without the exchange of money or compensation, and for each technology shall disclose the following information:
(1) Name of the technology and vendor;
(2) A brief description of the technology’s purpose and function;
(3) The intended use of the technology;
(4) The context or domain in which the technology is intended to be used;
(5) The data used to train the technology;
(6) An explanation of how the technology works;
(7) The data generated by the technology;
(8) A description of what the technology is optimizing for, and its accuracy, preferably with numerical performance metrics;
(9) Conditions necessary for the technology to perform optimally;
(10) Conditions under which the technology’s performance would decrease in accuracy;
(11) Whether testing has been performed to identify any bias in the technology such as bias based on race, gender, etc., and the results of those tests;
(12) A description of how and where people report bias, inaccuracies, or poor performance of the technology;
(13) A description of the conditions or circumstances under which the technology has been tested;
(14) A description of adverse incident monitoring and communication procedures;
(15) A description of the level of human oversight associated with the technology;
(16) A description of whether the data collected will or can be used for training of proprietary vendor or third-party systems;
(17) The individuals and communities that will interact with the technology;
(18) How the information or decisions generated by the technology could impact the public’s rights, opportunities, or access to critical resources or services;
(19) How people with diverse abilities will interact with the user interface of the technology and whether the system integrates and interacts with commonly used assistive technologies;
(20) Whether the technology is expected to replace any jobs currently being performed by human beings or could impact the employment and/or working conditions of City workers;
(21) Why it is important for the City to use the technology; and
(22) Potential risks of the technology and steps that would be taken to mitigate these risks.
(c) COIT, at the recommendation of the CIO, may modify the information requested under subsection (b).
(d) Exceptions. The requirements set forth in subsections (a) and (b) shall not apply to the following uses. COIT, at the recommendation of the CIO, may reevaluate and modify these exceptions:
(1) Internal Administration: AI technology solely used to improve internal administrative processes that does not affect rights or staffing decisions and does not make substantive changes affecting Department decisions, rights, or services. Examples include systems for internal data management, coding support, data analysis and visualization, graphic design and image creation, automation of manual processes, speech-to-text and transcription, email sorting, data entry, file management, document organization, grammar and spellcheck, and other text editing or text formatting.
(2) Internal Cybersecurity: AI technology solely used for internal cybersecurity purposes and that does not involve surveillance of the public, decision-making, or similar actions otherwise impacting the public’s rights or safety, including intrusion detection, threat monitoring, and other cyber defense systems.
(e) Each Department shall:
(1) Complete and return the Inventory to the CIO;
(2) For subsections (b)(1)-(16), it is anticipated but not required that the Department will obtain the information requested directly from the AI Technology Vendor;
(3) For subsections (b)(17)-(22), it is anticipated but not required that the Department will assess the intended use of the technology to answer the questions for the inventory;
(4) Notify DT of any updates to published Inventory information; and
(5) Participate in and facilitate a timely and accurate response to all information requested in subsections (b)(1)-(22).
(f) The Controller shall conduct an annual review of all Department inventory responses and by letter addressed to the Board of Supervisors confirm each Department’s compliance or noncompliance with this Section 22J.3.
(g) In addition to the Inventory, within 12 months of the effective date of this Chapter 22J and every two years thereafter, the CIO shall submit to the Board of Supervisors, and make available on the DataSF platform, an AI Technology Report covering all AI technologies used by the City. For each report the CIO submits to the Board of Supervisors, the CIO shall include a resolution to accept the report.
(h) The requirements of this Chapter 22J are in addition to any requirements in Chapter 19B, “Acquisition of Surveillance Technology.”
(Added by Ord. 288-24, File No. 241022, App. 12/19/2024, Eff. 1/19/2025)