# Assembly Bill No. 1027

## CHAPTER 824

An act to amend Sections 22677 and 22945 of, and to add and repeal Sections 22945.7 and 22945.9 of, the Business and Professions Code, relating to social media platforms.

[Approved by Governor October 13, 2023. Filed with Secretary of State October 13, 2023.]

### LEGISLATIVE COUNSEL'S DIGEST

AB 1027, Petrie-Norris. Social media platforms: drug safety policies.

Existing law, the California Consumer Privacy Act of 2018 (CCPA), as amended by the California Privacy Rights Act of 2020, an initiative measure, grants to a consumer various rights with respect to personal information, as defined, that is collected by a business, as defined. The CCPA requires a business that controls the collection of a consumer’s personal information to inform consumers of the categories of personal information collected, the purposes for which the categories of personal information are collected or used, and the length of time the business intends to retain each category of personal information, as specified.

Existing law, the Electronic Communications Privacy Act, generally prohibits a government entity from compelling the production of or access to electronic communication information or electronic device information, as defined, without a search warrant, wiretap order, order for electronic reader records, subpoena, or order for a pen register or trap and trace device, except for emergency situations, as specified.

The CCPA grants to a consumer various rights with respect to personal information, as defined, that is collected by a business, as defined, including the right to request that a business delete personal information about the consumer that the business has collected from the consumer. The California Privacy Rights Act of 2020, approved by the voters as Proposition 24 at the November 3, 2020, statewide general election, amended, added to, and reenacted the CCPA.

Existing law requires a social media company, as defined, to submit reports, as specified, starting no later than January 1, 2024, to the Attorney General, including, but not limited to, the current version of the terms of service for each social media platform owned or operated by the company, specified categories of content and what policies the social media company has for that platform to address that content, and data related to violations of the terms of service for each platform. Existing law requires the Attorney General to make all terms of service reports submitted pursuant to those provisions available to the public in a searchable repository on its official internet website.

This bill would add to those categories of content the distribution of controlled substances.

Existing law, until January 1, 2028, requires a social media platform to create and post a policy statement regarding the use of the social media platform to illegally distribute controlled substances, including a general description of its policies and procedures for responding to law enforcement inquiries. Existing law exempts from these requirements a business that generated less than $100,000,000 in gross revenue during the preceding calendar year.
This bill would delete the above-described exemption and would require the policy statement to include a general description of the social media platform’s policy on the retention of electronic communication information and policies and procedures governing when a platform proactively shares relevant information pertaining to distribution of a controlled substance, as specified. The bill would require a social media platform to retain content it has taken down or removed for a violation of its policy related to controlled substances, as specified, for a period of 90 days, except when the platform has a good faith belief that the content is related to the offering, seeking, or receiving of gender-affirming health care, gender-affirming mental health care, or reproductive health care that is lawful under California law. The bill would specify that it does not alter the rights or obligations established in any other law, including the Electronic Communications Privacy Act and the California Consumer Privacy Act. The people of the State of California do enact as follows: SECTION 1. It is the intent of the Legislature that this act is not intended to interfere with the offering, seeking, or receiving of gender-affirming health care, gender-affirming mental health care, or reproductive health care that is lawful under California law. SEC. 2. Section 22677 of the Business and Professions Code is amended to read: 22677. (a) On a semiannual basis in accordance with subdivision (b), a social media company shall submit to the Attorney General a terms of service report. The terms of service report shall include, for each social media platform owned or operated by the company, all of the following: (1) The current version of the terms of service of the social media platform. (2) If a social media company has filed its first report, a complete and detailed description of any changes to the terms of service since the previous report. (3) A statement of whether the current version of the terms of service define each of the following categories of content, and, if so, the definitions of those categories, including any subcategories: (A) Hate speech or racism. (B) Extremism or radicalization. --- (C) Disinformation or misinformation. (D) Harassment. (E) Foreign political interference. (F) Controlled substance distribution. (4) A detailed description of content moderation practices used by the social media company for that platform, including, but not limited to, all of the following: (A) Any existing policies intended to address the categories of content described in paragraph (3). (B) How automated content moderation systems enforce terms of service of the social media platform and when these systems involve human review. (C) How the social media company responds to user reports of violations of the terms of service. (D) How the social media company would remove individual pieces of content, users, or groups that violate the terms of service, or take broader action against individual users or against groups of users that violate the terms of service. (E) The languages in which the social media platform does not make terms of service available, but does offer product features, including, but not limited to, menus and prompts. (5) (A) Information on content that was flagged by the social media company as content belonging to any of the categories described in paragraph (3), including all of the following: (i) The total number of flagged items of content. (ii) The total number of actioned items of content. 
(iii) The total number of actioned items of content that resulted in action taken by the social media company against the user or group of users responsible for the content. (iv) The total number of actioned items of content that were removed, demonetized, or deprioritized by the social media company. (v) The number of times actioned items of content were viewed by users. (vi) The number of times actioned items of content were shared, and the number of users that viewed the content before it was actioned. (vii) The number of times users appealed social media company actions taken on that platform and the number of reversals of social media company actions on appeal disaggregated by each type of action. (B) All information required by subparagraph (A) shall be disaggregated into the following categories: (i) The category of content, including any relevant categories described in paragraph (3). (ii) The type of content, including, but not limited to, posts, comments, messages, profiles of users, or groups of users. (iii) The type of media of the content, including, but not limited to, text, images, and videos. (iv) How the content was flagged, including, but not limited to, flagged by company employees or contractors, flagged by artificial intelligence software, flagged by community moderators, flagged by civil society partners, and flagged by users. (v) How the content was actioned, including, but not limited to, actioned by company employees or contractors, actioned by artificial intelligence software, actioned by community moderators, actioned by civil society partners, and actioned by users. (b) (1) A social media company shall electronically submit a semiannual terms of service report pursuant to subdivision (a), covering activity within the third and fourth quarters of the preceding calendar year, to the Attorney General no later than April 1 of each year, and shall electronically submit a semiannual terms of service report pursuant to subdivision (a), covering activity within the first and second quarters of the current calendar year, to the Attorney General no later than October 1 of each year. (2) Notwithstanding paragraph (1), a social media company shall electronically submit its first terms of service report pursuant to subdivision (a), covering activity within the third quarter of 2023, to the Attorney General no later than January 1, 2024, and shall electronically submit its second terms of service report pursuant to subdivision (a), covering activity within the fourth quarter of 2023, to the Attorney General no later than April 1, 2024. A social media platform shall submit its third report no later than October 1, 2024, in accordance with paragraph (1). (c) The Attorney General shall make all terms of service reports submitted pursuant to this section available to the public in a searchable repository on its official internet website.

SEC. 3. Section 22945 of the Business and Professions Code is amended to read:

22945. (a) For purposes of this chapter, the following definitions apply: (1) (A) “Content” means statements or comments made by users and media that are created, posted, shared, or otherwise interacted with by users on an internet-based service or application. (B) “Content” does not include media put on a service or application exclusively for the purpose of cloud storage, transmitting files, or file collaboration. (2) “Controlled substance” has the same meaning as that term is defined in Section 11007 of the Health and Safety Code.
(3) “Social media platform” means a public or semipublic internet-based service or application that has users in California and that meets both of the following criteria: (A) (i) A substantial function of the service or application is to connect users in order to allow users to interact socially with each other within the service or application. (ii) A service or application that provides email or direct messaging services shall not be considered to meet this criterion on the basis of that function alone. (B) The service or application allows users to do all of the following: (i) Construct a public or semipublic profile for purposes of signing into and using the service. (ii) Populate a list of other users with whom an individual shares a social connection within the system. (iii) Create or post content viewable by other users, including, but not limited to, on message boards, in chat rooms, or through a landing page or main feed that presents the user with content generated by other users. (4) “Public or semipublic internet-based service or application” excludes a service or application used to facilitate communication within a business or enterprise among employees or affiliates of the business or enterprise, provided that access to the service or application is restricted to employees or affiliates of the business or enterprise using the service or application. (b) A social media platform that operates in the state shall create, and publicly post on the social media platform’s internet website, a policy statement that includes all of the following: (1) The social media platform’s policy on the use of the social media platform to illegally distribute a controlled substance. (2) A general description of the social media platform’s moderation practices that are employed to prevent users from posting or sharing electronic content pertaining to the illegal distribution of a controlled substance. The description shall not include any information that the social media platform believes would compromise operational efforts to identify prohibited content or user activity, or otherwise endanger user safety. (3) A link to mental health and drug education resources provided by governmental public health authorities. (4) A link to the social media platform’s reporting mechanism for illegal or harmful content or behavior on the social media platform, if one exists. (5) A general description of the social media platform’s policies and procedures for responding to law enforcement inquiries, including warrants, subpoenas, and other court orders compelling the production of or access to electronic communication information, as defined in Section 1546 of the Penal Code. (6) A general description of the social media platform’s policy on the retention of electronic communication information, as defined in Section 1546 of the Penal Code, including how long the platform retains that information. (7) A general description of the social media platform’s policies and procedures governing when a platform proactively shares relevant information pertaining to the illegal distribution of a controlled substance. (c) The disclosures required by this section may be posted separately or incorporated within another document or post, including, but not limited to, the terms of service or the community guidelines. (d) A person or entity operating a social media platform in the state shall do all of the following: (1) Update the policy statement created pursuant to subdivision (b) as necessary.
(2) Consider consulting with nonprofits, safety advocates, and survivors to assist in developing and supporting the policy statement created pursuant to subdivision (b). --- (3) (A) A social media platform shall retain data on content it has taken action to take down or remove for a violation of a policy prohibiting the unlawful sale, distribution, amplification, or otherwise proliferation of controlled substances and related paraphernalia. A social media platform shall retain the content that violated a policy and the username of the violating account or its user for a period of 90 days. (B) Notwithstanding subparagraph (A), a social media platform is not required to retain content removed in violation of the policy if there is a good faith belief that the content is related to the offering, seeking, or receiving of gender-affirming health care, gender-affirming mental health care, or reproductive health care that is lawful under California law. SEC. 4. Section 22945.7 is added to the Business and Professions Code, to read: 22945.7. Nothing in this chapter alters the rights or obligations established in any other law, including, but not limited to, the Electronic Communications Privacy Act (Chapter 3.6 (commencing with Section 1546) of Title 12 of Part 2 of the Penal Code) and the California Consumer Privacy Act of 2018 (Title 1.81.5 (commencing with Section 1798.100) of Part 4 of Division 3 of the Civil Code). SEC. 5. Section 22945.9 is added to the Business and Professions Code, to read: 22945.9. This chapter shall remain in effect only until January 1, 2028, and as of that date is repealed.
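An implementation note for readers mapping Section 22945(d)(3) onto a moderation pipeline: the retention rule reduces to a yes/no decision per removed item, held with the violating username for 90 days unless the health-care carve-out applies. The sketch below is illustrative only; the `RemovedContent` record and the `believed_protected_health_care` flag are hypothetical names standing in for whatever a platform's own review process produces, and the statute does not prescribe any particular implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=90)  # Section 22945(d)(3)(A)

@dataclass
class RemovedContent:
    content_id: str
    username: str                                # retained alongside the content
    removed_at: datetime
    violated_controlled_substance_policy: bool
    # Good-faith belief the content relates to lawful gender-affirming health
    # care, gender-affirming mental health care, or reproductive health care
    # (the subparagraph (B) carve-out); how this is determined is up to the platform.
    believed_protected_health_care: bool

def must_retain(item: RemovedContent, now: datetime) -> bool:
    """Return True while the statute requires the platform to hold the item."""
    if not item.violated_controlled_substance_policy:
        return False  # the retention duty attaches only to this policy category
    if item.believed_protected_health_care:
        return False  # subparagraph (B): retention not required
    return now < item.removed_at + RETENTION_PERIOD
```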
# ASSEMBLY BILL No. 1282

**Introduced by Assembly Member Lowenthal**

*(Coauthors: Assembly Members Connolly, Muratsuchi, and Villapudua)*

February 16, 2023

---

AMENDED IN SENATE SEPTEMBER 1, 2023 AMENDED IN SENATE JUNE 13, 2023 AMENDED IN ASSEMBLY APRIL 20, 2023 AMENDED IN ASSEMBLY APRIL 6, 2023 AMENDED IN ASSEMBLY MARCH 9, 2023 CALIFORNIA LEGISLATURE—2023–24 REGULAR SESSION

---

An act to add and repeal Part 4.3 (commencing with Section 5887) of Division 5 of the Welfare and Institutions Code, relating to mental health.

## LEGISLATIVE COUNSEL’S DIGEST

**AB 1282**, as amended, Lowenthal. Mental health: impacts of social media.

Existing law, the Mental Health Services Act, an initiative measure enacted by the voters as Proposition 63 at the November 2, 2004, statewide general election, establishes the Mental Health Services Oversight and Accountability Commission, and authorizes the commission to take specified actions, including advising the Governor or the Legislature regarding actions the state may take to improve care and services for people with mental illness.

This bill would require the commission to report to specified policy committees of the Legislature, on or before July 1, 2025, a statewide strategy to understand, communicate, and mitigate mental health risks associated with the use of social media by children and youth. The bill would require the report to include, among other things, (1) the degree to which individuals negatively impacted by social media are accessing and receiving mental health services and (2) recommendations to strengthen children and youth resiliency strategies and California’s use of mental health services to reduce the negative outcomes that may result from untreated mental illness, as specified. The bill would require the commission to explore, among other things, the persons and populations that use social media and the negative mental health risks associated with social media and artificial intelligence, as defined. The bill would repeal these provisions on January 1, 2029.

Vote: majority. Appropriation: no. Fiscal committee: yes. State-mandated local program: no.

The people of the State of California do enact as follows:

SECTION 1. Part 4.3 (commencing with Section 5888) is added to Division 5 of the Welfare and Institutions Code, to read:

PART 4.3. IMPACTS OF SOCIAL MEDIA AND ARTIFICIAL INTELLIGENCE ON MENTAL HEALTH

5888. As used in this part, the following definitions shall apply: (a) “Children and youth” means individuals up to 26 years of age. (b) “Commission” means the Mental Health Services Oversight and Accountability Commission established pursuant to Section 5845. (c) “Social media” means a social media platform, as defined in Section 22675 of the Business and Professions Code. (d) “Artificial intelligence” means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to do all of the following: (1) Perceive real and virtual environments. (2) Abstract those perceptions into models through analysis in an automated manner. (3) Use model inferences to formulate options for information or action.

5888.1.
(a) The commission shall report to the Senate and Assembly Committees on Health, the Senate Committee on Judiciary, the Assembly Committee on Privacy and Consumer Protection, and other relevant policy committees of the Legislature a statewide strategy to understand, communicate, and mitigate mental health risks associated with the use of social media by children and youth. The report shall include all of the following: (1) The degree to which individuals negatively impacted by social media are accessing and receiving mental health services. (2) Recommendations to strengthen children and youth resiliency strategies and California’s use of mental health services to reduce the negative outcomes that may result from untreated mental illness enumerated in subdivision (d) of Section 5840. (3) Any barriers to receiving the data relevant to completing this report. (b) In preparing the report, the commission shall explore all of the following: (1) The types of social media. (2) The persons and populations that use social media. (3) Opportunities to support resilience. (4) Negative mental health risks associated with social media, including all of the following: (A) Suicide. (B) Eating disorders. (C) Self-harm. (D) Prolonged suffering. (E) Depression. (F) Anxiety. (G) Bullying. (H) Substance abuse. (I) Other mental health risks as determined by the commission. (5) The negative health risks associated with artificial intelligence. (c) In formulating this report, the commission shall prioritize the perspectives of children and youth through a robust engagement process with a focus on transition-age youth, at-risk populations, in-need populations, and underserved cultural and linguistic populations. The commission shall also consult with the California mental health community, including, but not limited to, consumers, family members, providers, and other subject matter experts. (d) The report shall be submitted on or before July 1, 2025.

5888.2. This part shall remain in effect only until January 1, 2029, and as of that date is repealed.
# ASSEMBLY BILL No. 2839

**Introduced by Assembly Member Pellerin**

February 15, 2024

---

An act to amend Section 35 of the Code of Civil Procedure, and to add Section 20012 to the Elections Code, relating to elections.

---

## LEGISLATIVE COUNSEL’S DIGEST

**AB 2839**, as introduced, Pellerin. Elections: deceptive media in advertisements.

Existing law prohibits certain distribution of materially deceptive audio or visual media of a candidate within 60 days of an election at which the candidate will appear on the ballot, unless the media includes a disclosure stating that the media has been manipulated, subject to specified exemptions. Existing law authorizes a candidate whose voice or likeness appears in audio or visual media distributed in violation of these provisions to file specified actions, and it requires a court to place such proceedings on the calendar in the order of their date of filing and give the proceedings precedence.

This bill would prohibit a person, committee, or other entity from knowingly distributing an advertisement or other election communication, as defined, that contains certain materially deceptive and digitally altered or digitally created images or audio or video files, as defined, with the intent to influence an election or solicit funds for a candidate or campaign, subject to specified exemptions. The bill would apply this prohibition within 120 days of an election and, in specified cases, 60 days after an election. The bill would authorize a recipient of a materially deceptive and digitally altered or digitally created image or audio or video file distributed in violation of this section, or a candidate or committee participating in the election, to seek injunctive or other equitable relief or to bring an action for damages, as specified.

The people of the State of California do enact as follows:

## SECTION 1. Section 35 of the Code of Civil Procedure, as amended by Section 1 of Chapter 343 of the Statutes of 2023, is amended to read:

35. (a) Proceedings in cases involving the registration or denial of registration of voters, the certification or denial of certification of candidates, the certification or denial of certification of ballot measures, election contests, actions under Section 20010 or 20012 of the Elections Code, and actions under Chapter 2 (commencing with Section 21100) of Division 21 of the Elections Code shall be placed on the calendar in the order of their date of filing and shall be given precedence. (b) This section shall remain in effect only until January 1, 2027, and as of that date is repealed, unless a later enacted statute, that is enacted before January 1, 2027, deletes or extends that date.

## SEC. 2. Section 35 of the Code of Civil Procedure, as amended by Section 2 of Chapter 343 of the Statutes of 2023, is amended to read:

35. (a) Proceedings in cases involving the registration or denial of registration of voters, the certification or denial of certification of candidates, the certification or denial of certification of ballot measures, election contests, actions under Section 20012 of the Elections Code, and actions under Chapter 2 (commencing with Section 21100) of Division 21 of the Elections Code shall be placed on the calendar in the order of their date of filing and shall be given precedence. (b) This section shall become operative January 1, 2027.

## SEC. 3. Section 20012 is added to the Elections Code, to read:

20012. (a) The Legislature finds and declares as follows:
(1) California is entering its first-ever artificial intelligence (AI) election, in which disinformation powered by generative AI will pollute our information ecosystems like never before. Voters will not know what images, audio, or video they can trust. (2) In a few clicks, using current technology, bad actors now have the power to create a false image of a candidate accepting a bribe, or a fake video of an elections official “caught on tape” saying that voting machines are not secure, or generate an artificial robocall in the Governor’s voice telling millions of Californians their voting site has changed. (3) In the lead-up to the 2024 presidential elections, candidates and parties are already creating and distributing deepfake images and audio and video content. These fake images or files can skew election results, even if they use older methods of distribution, such as mail, television, telephone, and text, and undermine trust in the ballot counting process. (4) In order to ensure California elections are free and fair, California must, for a limited time before and after elections, prevent the use of deepfakes and disinformation meant to prevent voters from voting and deceive voters based on fraudulent content. (b) (1) A person, committee, or other entity shall not, during the time period set forth in subdivision (c), with the intent to influence an election or solicit funds for a candidate or campaign, knowingly distribute an advertisement or other election communication containing materially deceptive and digitally altered or digitally created images or audio or video files of any of the following: (A) A candidate portrayed as doing or saying something that the candidate did not do or say. (B) An officer holding an election or conducting a canvass portrayed as doing or saying something in connection with the election that the officer holding an election or conducting a canvass did not do or say. (C) An elected official portrayed as doing or saying something in connection with the election that the elected official did not do or say. (D) A voting machine, ballot, voting site, or other elections-related property or equipment portrayed in a materially false way. (2) Notwithstanding subparagraph (A) of paragraph (1), a candidate may portray themself as doing or saying something that the candidate did not do or say, but only if the image or audio or video file includes a disclosure stating “This ____ has been manipulated.” and complies with the following requirements: (A) The blank in the disclosure required by paragraph (2) shall be filled with whichever of the following terms most accurately describes the media: (i) Image. (ii) Audio. (iii) Video. (B) (i) For visual media, the text of the disclosure shall appear in a size that is easily readable by the average viewer and no smaller than the largest font size of other text appearing in the visual media. If the visual media does not include any other text, the disclosure shall appear in a size that is easily readable by the average viewer. For visual media that is video, the disclosure shall appear for the duration of the video. (ii) If the media consists of audio only, the disclosure shall be read in a clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than two minutes in
length, interspersed within the audio at intervals of not greater than two minutes each. (c) The prohibition in subdivision (b) applies only during the following time periods: (1) One hundred twenty days before any election. (2) For elections officials and items set forth in subparagraphs (B) and (C) of paragraph (1) of subdivision (b), 120 days before any election through 60 days after the election, inclusive. (d) (1) A recipient of a materially deceptive and digitally altered or digitally created image or audio or video file distributed in violation of this section, or a candidate or committee participating in the election, may seek injunctive or other equitable relief prohibiting the distribution of the materially deceptive and digitally altered or digitally created image or audio or video file in violation of this section. The court shall also award a prevailing plaintiff reasonable attorney’s fees and costs. An action under this paragraph shall be entitled to precedence in accordance with Section 35 of the Code of Civil Procedure. (2) A recipient of a materially deceptive and digitally altered or digitally created image or audio or video file distributed in violation of this section, or a candidate or committee participating in the election, may bring an action for general or special damages against the person, committee, or other entity that distributed the materially deceptive and digitally altered or digitally created image or audio or video file in violation of this section. The court shall also award a prevailing party reasonable attorney’s fees and costs. This subdivision shall not be construed to limit or preclude a plaintiff from securing or recovering any other available remedy at law or equity. (3) In any civil action alleging a violation of this section, the plaintiff shall bear the burden of establishing the violation through clear and convincing evidence. (e) (1) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, that broadcasts any materially deceptive and digitally altered or digitally created image or audio or video file prohibited by this section as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of bona fide news events, if the broadcast clearly acknowledges through content or a disclosure, in a manner that can be easily heard or read by the average listener or viewer, that the materially deceptive audio or visual media does not accurately represent any actual event, occurrence, appearance, speech, or expressive conduct. (2) This section does not apply to a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication, that routinely carries news and commentary of general interest, and that publishes any materially deceptive and digitally altered or digitally created image or audio or video file prohibited by this section, if the publication clearly states that the materially deceptive and digitally altered or digitally created image or audio or video file does not accurately represent any actual event, occurrence, appearance, speech, or expressive conduct. (3) This section does not apply to a materially deceptive and digitally altered or digitally created image or audio or video file that constitutes satire or parody.
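The audio-only disclosure rule in clause (ii) of subparagraph (B) above is effectively a scheduling algorithm: one disclosure at the start, one at the end, and, for audio longer than two minutes, additional disclosures so that no gap exceeds two minutes. Here is a minimal sketch of one reasonable reading, assuming offsets measured in seconds and that a disclosure every 120 seconds satisfies the interval requirement (the statute does not fix exact placements):

```python
def disclosure_times(duration_s: float, interval_s: float = 120.0) -> list[float]:
    """Offsets (in seconds) at which the spoken disclosure would occur for
    audio-only media: at the beginning, at the end, and, for audio longer
    than two minutes, interspersed at intervals of no more than two minutes."""
    times = [0.0]                  # at the beginning of the audio
    if duration_s > interval_s:    # "greater than two minutes in length"
        t = interval_s
        while t < duration_s:
            times.append(t)        # interspersed disclosures
            t += interval_s
    times.append(duration_s)       # at the end of the audio
    return times

# e.g. a 5-minute (300 s) clip -> disclosures at 0, 120, 240, and 300 seconds
print(disclosure_times(300.0))
```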
(f) For purposes of this section, the following definitions apply: (1) “Advertisement” means any general or public communication that is authorized or paid for the purpose of supporting or opposing a candidate for elective office or a ballot measure and that is broadcast by or through television, radio, telephone, or text, or disseminated by print media, including billboards, video billboards or screens, and other similar types of advertising. (2) “Committee” means a committee as defined in Section 82013 of the Government Code. (3) “Election communication” means any general or public communication not covered under “advertisement” that is broadcast by or through television, radio, telephone, or text, or disseminated by print media, including billboards, video billboards or screens, and other similar types of communications, that concerns any of the following: (A) A candidate for office or ballot measure. (B) Voting or refraining from voting in an election. (C) The canvass of the vote. (4) (A) “Materially deceptive and digitally modified or created image or audio or video file” means an image or an audio or video file that has been intentionally manipulated in a manner such that all of the following conditions are met: (i) The image or audio or video file is the product of digital manipulation, artificial intelligence, or machine learning, including deep learning techniques, that merges, combines, replaces, or superimposes content onto an image or an audio or video file, or generates an inauthentic image or an audio or video file that appears authentic. (ii) (I) The image or audio or video file represents a false portrayal of a candidate for elective office, an elected official, an elections official, or a voting machine, ballot, voting site, or other elections property or equipment. (II) For the purposes of this clause, “a false portrayal of the candidate for elective office, an elected official, an elections official, or a voting machine, ballot, voting site, or other elections property or equipment” means the image or audio or video file would cause a reasonable person to have a fundamentally different understanding or impression of the expressive content of the image or audio or video file than that person would have if the person were hearing or seeing the unaltered, original version of the image or audio or video file.
Minor changes include changes to brightness or contrast of images, removal of background noise in audio, and other minor changes that do not impact the content of the image or audio or video file. (5) “Officer holding an election or conducting a canvass” has the same meaning as in Section 18502. (6) “Recipient” includes a person who views, hears, or otherwise perceives an image or audio or video file that was initially distributed in violation of this section. (g) The provisions of this section apply regardless of the language used in the advertisement or solicitation. If the language used is not English, the disclosure required by paragraph (2) of subdivision (a) shall appear in the language used in the advertisement or solicitation. (h) The provisions of this section are severable. If any provision of this section or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application.
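Taken together, subdivisions (b) and (c) define a date window in which the prohibition operates. The following small sketch of that window check assumes election day itself counts as within the 120-day pre-election period (the text does not spell that out) and uses a boolean to flag the officials categories in subparagraphs (B) and (C) that trigger the extended post-election period:

```python
from datetime import date, timedelta

def in_restricted_window(distribution: date, election: date,
                         officials_category: bool) -> bool:
    """True if a distribution date falls within AB 2839's restricted period:
    120 days before any election (subd. (c)(1)), extended through 60 days
    after the election for the subparagraph (B) and (C) categories
    (subd. (c)(2))."""
    start = election - timedelta(days=120)
    end = election + timedelta(days=60) if officials_category else election
    return start <= distribution <= end

# e.g. content about an elections official, 30 days after a Nov. 5 election
print(in_restricted_window(date(2024, 12, 5), date(2024, 11, 5), True))  # True
```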
# Assembly Bill No. 302

## CHAPTER 800

An act to add Section 11546.45.5 to the Government Code, relating to automated decision systems.

[Approved by Governor October 13, 2023. Filed with Secretary of State October 13, 2023.]

### LEGISLATIVE COUNSEL'S DIGEST

AB 302, Ward. Department of Technology: high-risk automated decision systems: inventory.

Existing law establishes the Department of Technology within the Government Operations Agency and requires the Director of Technology to supervise the Department of Technology and report directly to the Governor on issues relating to information technology.

This bill would require the department, in coordination with other interagency bodies, to conduct, on or before September 1, 2024, a comprehensive inventory of all high-risk automated decision systems, as defined, that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, state agencies, as defined. The bill would require the comprehensive inventory to include a description of, among other things, the categories of data and personal information the automated decision system uses to make its decisions. On or before January 1, 2025, and annually thereafter, the bill would require the department to submit a report of the above-described comprehensive inventory to specified committees of the Legislature.

The people of the State of California do enact as follows:

### SECTION 1. Section 11546.45.5 is added to the Government Code, to read:

11546.45.5. (a) For purposes of this section: (1) “Automated decision system” means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons. “Automated decision system” does not include a spam email filter, firewall, antivirus software, identity and access management tools, calculator, database, dataset, or other compilation of data. (2) “Board” means any administrative or regulatory board, commission, committee, council, association, or authority consisting of more than one person whose members are appointed by the Governor, the Legislature, or both. (3) “Department” means the Department of Technology. (4) “High-risk automated decision system” means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice. (5) (A) “State agency” means any of the following: (i) Any state office, department, division, or bureau. (ii) The California State University. (iii) The Board of Parole Hearings. (iv) Any board or other professional licensing and regulatory body under the administration or oversight of the Department of Consumer Affairs. (B) “State agency” does not include the University of California, the Legislature, the judicial branch, or any board, except as provided in subparagraph (A).
(b) On or before September 1, 2024, the Department of Technology shall conduct, in coordination with other interagency bodies as it deems appropriate, a comprehensive inventory of all high-risk automated decision systems that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, any state agency. (c) The comprehensive inventory described by subdivision (b) shall include a description of all of the following: (1) (A) Any decision the automated decision system can make or support and the intended benefits of that use. (B) The intended uses of any use described in subparagraph (A). (2) The results of any research assessing the efficacy and relative benefits and harms and alternatives to the automated decision system described by paragraph (1). (3) The categories of data and personal information the automated decision system uses to make its decisions. (4) (A) The measures in place, if any, to mitigate the risks, including cybersecurity risk and the risk of inaccurate, unfairly discriminatory, or biased decisions, of the automated decision system. (B) Measures described by this paragraph may include, but are not limited to, any of the following: (i) Performance metrics to gauge the accuracy of the system. (ii) Cybersecurity controls. (iii) Privacy controls. (iv) Risk assessments or audits for potential risks. (v) Measures or processes in place to contest an automated decision. (d) (1) On or before January 1, 2025, and annually thereafter, the department shall submit a report of the comprehensive inventory described in subdivision (b) to the Assembly Committee on Privacy and Consumer Protection and the Senate Committee on Governmental Organization. (2) The requirement for submitting a report imposed under paragraph (1) is inoperative on January 1, 2029, pursuant to Section 10231.5. (3) A report to be submitted pursuant to paragraph (1) shall be submitted in compliance with Section 9795.
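For agencies assembling the inventory, subdivision (c) amounts to a fixed schema: each high-risk system contributes one record with the enumerated descriptions. A hypothetical sketch of such a record follows; the class and field names are mine, not the statute's, which prescribes content rather than format.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskADSInventoryEntry:
    """One inventory record per Government Code Section 11546.45.5(c)."""
    agency: str
    decisions_supported: list[str]  # (c)(1)(A): decisions the system can make or support
    intended_benefits: str          # (c)(1)(A): intended benefits of that use
    intended_uses: list[str]        # (c)(1)(B)
    efficacy_research: str          # (c)(2): research on efficacy, harms, and alternatives
    data_categories: list[str]      # (c)(3): data and personal information used
    # (c)(4): e.g. performance metrics, cybersecurity and privacy controls,
    # risk assessments or audits, and processes to contest an automated decision.
    mitigation_measures: list[str] = field(default_factory=list)
```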
# ASSEMBLY BILL No. 3204 ## Introduced by Assembly Member Bauer-Kahan February 16, 2024 --- An act to add Title 1.81.8 (commencing with Section 1798.321) to Part 4 of Division 3 of the Civil Code, relating to data digesters. --- ### LEGISLATIVE COUNSEL'S DIGEST AB 3204, as introduced, Bauer-Kahan. Data digesters. The California Consumer Privacy Act of 2018 (CCPA) grants a consumer various rights with respect to personal information that is collected or sold by a business. The CCPA defines various terms for these purposes. The California Privacy Rights Act of 2020 (CPRA), approved by the voters as Proposition 24 at the November 3, 2020, statewide general election, amended, added to, and reenacted the CCPA and establishes the California Privacy Protection Agency (agency) and vests the agency with full administrative power, authority, and jurisdiction to enforce the CCPA. Existing law requires data brokers to register with the agency, pay a registration fee, and provide certain information, prescribes penalties for failure to register as required by these provisions, requires the agency to create a page on its internet website where this registration information is accessible to the public, and creates a fund known as the “Data Brokers’ Registry Fund” that may be used by the agency, upon appropriation, to, among other things, offset the reasonable costs of establishing and maintaining the informational website and the costs incurred by the state courts and the agency in connection with enforcing these provisions, as specified. Existing law defines various terms for --- these purposes, including by incorporating specified definitions provided in the CPRA. This bill would require data digesters to register with the agency, pay a registration fee, and provide specified information, prescribe penalties for a failure to register as required by these provisions, require the agency to create a page on its internet website where this registration information is accessible to the public, and create a fund known as the “Data Digester Registry Fund” to be administered by the agency to be available for expenditure by the agency, upon appropriation, to offset the reasonable costs of establishing and maintaining the informational website and the costs incurred by the state courts and the agency in connection with enforcing these provisions, as specified. The bill would define “data digester” and incorporate specified definitions provided in the CPRA for these purposes. Vote: majority. Appropriation: no. Fiscal committee: yes. State-mandated local program: no. The people of the State of California do enact as follows: SECTION 1. Title 1.81.8 (commencing with Section 1798.321) is added to Part 4 of Division 3 of the Civil Code, to read: # TITLE 1.81.8. DATA DIGESTERS ## 1798.321. For purposes of this title: (a) The definitions in Section 1798.140 shall apply unless otherwise specified in this title. (b) “Data digester” means a business that uses personal information to train artificial intelligence. ## 1798.322. A fund to be known as the “Data Digester Registry Fund” is hereby created within the State Treasury. The fund shall be administered by the California Privacy Protection Agency. 
All moneys collected or received by the California Privacy Protection Agency under this title shall be deposited into the Data Digester Registry Fund, to be available for expenditure by the California Privacy Protection Agency, upon appropriation by the Legislature, to offset all of the following costs: (a) The reasonable costs of establishing and maintaining the informational internet website described in Section 1798.324. (b) The costs incurred by the state courts and the California Privacy Protection Agency in connection with enforcing this title, as specified in Section 1798.323.

1798.323. (a) On or before January 31 following each year in which a business meets the definition of data digester as provided in this title, the business shall register with the California Privacy Protection Agency pursuant to the requirements of this section. (b) In registering with the California Privacy Protection Agency, as described in subdivision (a), a data digester shall do all of the following: (1) Pay a registration fee in an amount determined by the California Privacy Protection Agency, not to exceed the reasonable costs of establishing and maintaining the informational internet website described in Section 1798.324. (2) Provide the following information: (A) The name of the data digester and its primary physical, email, and internet website addresses. (B) Each category of personal information that the data digester uses to train artificial intelligence, identified by reference to the applicable subparagraph enumerated under paragraph (1) of subdivision (v) of Section 1798.140. (C) Each category of sensitive personal information that the data digester uses to train artificial intelligence, identified by reference to the applicable paragraph and subparagraph enumerated under subdivision (ae) of Section 1798.140. (D) Each category of information related to consumers’ receipt of sensitive services, as that term is defined in Section 56.05, that the data digester uses to train artificial intelligence, identified by reference to the specific category of sensitive service enumerated in the definition. (E) Whether the data digester trains artificial intelligence using the personal information of minors. (F) Whether and to what extent the data digester or any of its subsidiaries is regulated by any of the following: (i) The federal Fair Credit Reporting Act (15 U.S.C. Sec. 1681 et seq.). (ii) The federal Gramm-Leach-Bliley Act (Public Law 106-102) and implementing regulations. (iii) The federal Driver’s Privacy Protection Act of 1994 (18 U.S.C. Sec. 2721 et seq.). (iv) The Insurance Information and Privacy Protection Act (Article 6.6 (commencing with Section 791) of Chapter 1 of Part 2 of Division 1 of the Insurance Code). (v) The Confidentiality of Medical Information Act (Part 2.6 (commencing with Section 56) of Division 1) or the privacy, security, and breach notification rules issued by the United States Department of Health and Human Services, Parts 160 and 164 of Title 45 of the Code of Federal Regulations, established pursuant to the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191). (vi) The privacy of pupil records pursuant to Article 5 (commencing with Section 49073) of Chapter 6.5 of Part 27 of Division 4 of Title 2 of the Education Code. (G) Any additional information or explanation the data digester chooses to provide concerning its artificial intelligence training practices.
(c) If the California Privacy Protection Agency reasonably believes that a data digester has failed to register within 90 days of the date on which it is required to register under this section, the California Privacy Protection Agency shall provide notice of failure to the data digester and post a copy of the notice on the informational internet website described in Section 1798.324. (d) A data digester that fails to register as required by this section is liable for administrative fines and costs in an administrative action brought by the California Privacy Protection Agency as follows: (1) Administrative fines according to the following schedule: (A) An administrative fine of two hundred dollars ($200) for each day the data digester fails to register as required by this section prior to the date on which notice is posted on the informational internet website pursuant to subdivision (c). (B) An administrative fine of five thousand dollars ($5,000) for each day the data digester fails to register as required by this section beginning the 15th day after notice is posted on the informational internet website pursuant to subdivision (c). (2) An amount equal to the fees that were due during the period it failed to register. (3) Expenses incurred by the California Privacy Protection Agency in the investigation and administration of the action as the court deems appropriate. (e) Any penalties, fines, fees, and expenses recovered in an action prosecuted under subdivision (d) shall be deposited in the Data Digester Registry Fund, created within the State Treasury pursuant to Section 1798.322, with the intent that they be used to fully offset costs incurred by the state courts and the California Privacy Protection Agency in connection with this title.

1798.324. The California Privacy Protection Agency shall create a page on its internet website where the registration information provided by data digesters described in paragraph (2) of subdivision (b) of Section 1798.323 shall be accessible to the public.

1798.325. (a) Except as provided in subdivision (b), the California Privacy Protection Agency may adopt regulations pursuant to the Administrative Procedure Act (Chapter 3.5 (commencing with Section 11340) of Part 1 of Division 3 of Title 2 of the Government Code) to implement and administer this title. (b) Notwithstanding subdivision (a), any regulation adopted by the California Privacy Protection Agency to establish fees authorized by this title shall be exempt from the Administrative Procedure Act (Chapter 3.5 (commencing with Section 11340) of Part 1 of Division 3 of Title 2 of the Government Code).

1798.326. This title shall not be construed to supersede or interfere with the operation of the California Consumer Privacy Act of 2018 (Title 1.81.5 (commencing with Section 1798.100)).

1798.327. An administrative action brought pursuant to this title alleging a violation of any of the provisions of this title shall not be commenced more than five years after the date on which the violation occurred.
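On one reading of the fine schedule in subdivision (d)(1), liability accrues at $200 per day before the notice is posted and at $5,000 per day beginning the 15th day after posting, with the intervening 14 days acting as an implicit cure window; the text is silent on those intervening days. A worked sketch of that reading, with illustrative names of my own:

```python
def administrative_fine(days_before_notice: int, days_after_notice: int) -> int:
    """Civil Code 1798.323(d)(1) fine schedule, on one plausible reading:
    $200 for each unregistered day before notice is posted (subpar. (A)),
    nothing during the 14 days after posting, and $5,000 per day from the
    15th day after posting onward (subpar. (B))."""
    fine = 200 * days_before_notice          # subparagraph (A)
    late_days = max(0, days_after_notice - 14)  # days counted from the 15th day on
    fine += 5000 * late_days                 # subparagraph (B)
    return fine

# e.g. 30 unregistered days before notice, 20 days after posting:
# 30 * $200 + 6 * $5,000 = $36,000 (plus unpaid fees under paragraph (2))
print(administrative_fine(30, 20))
```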
# SENATE BILL No. 1047 **Introduced by Senator Wiener** February 7, 2024 --- An act to add Chapter 22.6 (commencing with Section 22602) to Division 8 of the Business and Professions Code, and to add Sections 11547.6 and 11547.7 to the Government Code, relating to artificial intelligence. --- ## LEGISLATIVE COUNSEL’S DIGEST SB 1047, as introduced, Wiener. Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. Existing law requires the Secretary of Government Operations to develop a coordinated plan to, among other things, investigate the feasibility of, and obstacles to, developing standards and technologies for state departments to determine digital content provenance. For the purpose of informing that coordinated plan, existing law requires the secretary to evaluate, among other things, the impact of the proliferation of deepfakes, defined to mean audio or visual content that has been generated or manipulated by artificial intelligence that would falsely appear to be authentic or truthful and that features depictions of people appearing to say or do things they did not say or do without their consent, on state government, California-based businesses, and residents of the state. Existing law creates the Department of Technology within the Government Operations Agency and requires the department to, among other things, identify, assess, and prioritize high-risk, critical information technology services and systems across state government for modernization, stabilization, or remediation. This bill would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act to, among other things, require a --- developer of a covered model, as defined, to determine whether it can make a positive safety determination with respect to a covered model before initiating training of that covered model, as specified. The bill would define “positive safety determination” to mean a determination with respect to a covered model, that is not a derivative model, that a developer can reasonably exclude the possibility that the covered model has a hazardous capability, as defined, or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications. This bill would require that a developer, before initiating training of a nonderivative covered model, comply with various requirements, including implementing the capability to promptly enact a full shutdown of the covered model until that covered model is the subject of a positive safety determination. This bill would require a developer of a nonderivative covered model that is not the subject of a positive safety determination to submit to the Frontier Model Division, which the bill would create within the Department of Technology, an annual certification of compliance with these provisions signed by the chief technology officer, or a more senior corporate officer, in a format and on a date as prescribed by the Frontier Model Division. By expanding the scope of the crime of perjury, this bill would impose a state-mandated local program. The bill would also require a developer to report an artificial intelligence safety incident affecting a covered model to the Frontier Model Division in a manner prescribed by the Frontier Model Division. 
This bill would require a person that operates a computing cluster, as defined, to implement appropriate written policies and procedures to do certain things when a customer utilizes compute resources that would be sufficient to train a covered model, including assess whether a prospective customer intends to utilize the computing cluster to deploy a covered model. The bill would punish a violation of these provisions with a civil penalty, as prescribed, to be recovered by the Attorney General. This bill would also create the Frontier Model Division within the Department of Technology and would require the division to, among other things, review annual certification reports from developers received pursuant to these provisions and publicly release summarized findings based on those reports. The bill would authorize the division to assess related fees and would require deposit of the fees into the Frontier Model Division Programs Fund, which the bill would create. --- The bill would make moneys in the fund available for the purpose of these provisions only upon appropriation by the Legislature. This bill would also require the Department of Technology to commission consultants, as prescribed, to create a public cloud computing cluster, to be known as CalCompute, with the primary focus of conducting research into the safe and secure deployment of large-scale artificial intelligence models and fostering equitable innovation that includes, among other things, a fully owned and hosted cloud platform. The California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement. This bill would provide that no reimbursement is required by this act for a specified reason. Vote: majority. Appropriation: no. Fiscal committee: yes. State-mandated local program: yes. The people of the State of California do enact as follows: ## SECTION 1. This act shall be known, and may be cited, as the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. ## SEC. 2. The Legislature finds and declares all of the following: (a) California is leading the world in artificial intelligence innovation and research, through companies large and small, as well as through our remarkable public and private universities. (b) Artificial intelligence, including new advances in generative artificial intelligence, has the potential to catalyze innovation and the rapid development of a wide range of benefits for Californians and the California economy, including advances in medicine, wildfire forecasting and prevention, and climate science, and to push the bounds of human creativity and capacity. (c) If not properly subject to human controls, future development in artificial intelligence may also have the potential to be used to create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities. (d) The state government has an essential role to play in ensuring that California recognizes the benefits of this technology while avoiding the most severe risks, as well as to ensure that artificial --- # Chapter 22.6. Safe and Secure Innovation for Frontier Artificial Intelligence Systems ## 22602. 
22602. As used in this chapter:

(a) “Advanced persistent threat” means an adversary with sophisticated levels of expertise and significant resources that allow it, through the use of multiple different attack vectors, including, but not limited to, cyber, physical, and deception, to generate opportunities to achieve its objectives that are typically to establish and extend its presence within the information technology infrastructure of organizations for purposes of exfiltrating information or to undermine or impede critical aspects of a mission, program, or organization or place itself in a position to do so in the future.

(b) “Artificial intelligence model” means a machine-based system that can make predictions, recommendations, or decisions influencing real or virtual environments and can use model inference to formulate options for information or action.

(c) “Artificial intelligence safety incident” means any of the following:

(1) A covered model autonomously engaging in a sustained sequence of unsafe behavior other than at the request of a user.

(2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model.

(3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered model, designed to limit access to a hazardous capability of a covered model.

(4) Unauthorized use of the hazardous capability of a covered model.

(d) “Computing cluster” means a set of machines transitively connected by data center networking of over 100 gigabits that has a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training artificial intelligence.

(e) “Covered guidance” means any of the following:

(1) Applicable guidance issued by the National Institute of Standards and Technology and by the Frontier Model Division.

(2) Industry best practices, including relevant safety practices, precautions, or testing procedures undertaken by developers of comparable models, and any safety standards or best practices commonly or generally recognized by relevant experts in academia or the nonprofit sector.

(3) Applicable safety-enhancing standards set by standards setting organizations.

(f) “Covered model” means an artificial intelligence model that meets either of the following criteria:

(1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations.

(2) The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.

(g) “Critical harm” means a harm listed in paragraph (1) of subdivision (n).

(h) “Critical infrastructure” means assets, systems, and networks, whether physical or virtual, the incapacitation or destruction of which would have a debilitating effect on physical security, economic security, public health, or safety in the state.

(i) (1) “Derivative model” means an artificial intelligence model that is a derivative of another artificial intelligence model, including either of the following:

(A) A modified or unmodified copy of an artificial intelligence model.
(B) A combination of an artificial intelligence model with other software.

(2) “Derivative model” does not include an entirely independently trained artificial intelligence model.

(j) (1) “Developer” means a person that creates, owns, or otherwise has responsibility for an artificial intelligence model.

(2) “Developer” does not include a third-party machine-learning operations platform, an artificial intelligence infrastructure platform, a computing cluster, an application developer using sourced models, or an end-user of an artificial intelligence model.

(k) “Fine tuning” means the adjustment of the model weights of an artificial intelligence model that has been previously trained by training the model with new data.

(l) “Frontier Model Division” means the Frontier Model Division created pursuant to Section 11547.6 of the Government Code.

(m) “Full shutdown” means the cessation of operation of a covered model, including all copies and derivative models, on all computers and storage devices within custody, control, or possession of a person, including any computer or storage device remotely provided by agreement.

(n) (1) “Hazardous capability” means the capability of a covered model to be used to enable any of the following harms in a way that would be significantly more difficult to cause without access to a covered model:

(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.

(B) At least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.

(C) At least five hundred million dollars ($500,000,000) of damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human.

(D) Other threats to public safety and security that are of comparable severity to the harms described in paragraphs (A) to (C), inclusive.

(2) “Hazardous capability” includes a capability described in paragraph (1) even if the hazardous capability would not manifest but for fine tuning and posttraining modifications performed by third-party experts intending to demonstrate those abilities.

(o) “Machine-learning operations platform” means a solution that includes a combined offering of necessary machine-learning development capabilities, including exploratory data analysis, data preparation, model training and tuning, model review and governance, model inference and serving, model deployment and monitoring, and automated model retraining.

(p) “Model weight” means a numerical parameter established through training in an artificial intelligence model that helps determine how input information impacts a model’s output.

(q) “Open-source artificial intelligence model” means an artificial intelligence model that is made freely available and may be freely modified and redistributed.

(r) “Person” means an individual, proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, limited liability company, association, committee, or any other nongovernmental organization or group of persons acting in concert.
(s) “Positive safety determination” means a determination, pursuant to subdivision (a) or (c) of Section 22603, with respect to a covered model that is not a derivative model that a developer can reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications.

(t) “Posttraining modification” means the modification of the capabilities of an artificial intelligence model after the completion of training by any means, including, but not limited to, initiating additional training, providing the model with access to tools or data, removing safeguards against hazardous misuse or misbehavior of the model, or combining the model with, or integrating it into, other software.

(u) “Safety and security protocol” means documented technical and organizational protocols that meet both of the following criteria:

(1) The protocols are used to manage the risks of developing and operating covered models across their life cycle, including risks posed by enabling or potentially enabling the creation of derivative models.

(2) The protocols specify that compliance with the protocols is required in order to train, operate, possess, and provide external access to the developer’s covered model.

22603. (a) Before initiating training of a covered model that is not a derivative model, a developer of that covered model shall determine whether it can make a positive safety determination with respect to the covered model.

(1) In making the determination required by this subdivision, a developer shall incorporate all covered guidance.

(2) A developer may make a positive safety determination if the covered model will have lower performance on all benchmarks relevant under subdivision (f) of Section 22602 than either of the following:

(A) A non-covered model that manifestly lacks hazardous capabilities.

(B) Another model that is the subject of a positive safety determination.

(3) Upon making a positive safety determination, the developer of the covered model shall submit to the Frontier Model Division a certification under penalty of perjury that specifies the basis for that conclusion.

(b) Before initiating training of a covered model that is not a derivative model that is not the subject of a positive safety determination, and until that covered model is the subject of a positive safety determination, the developer of that covered model shall do all of the following:

(1) Implement administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, or misuse or unsafe modification of, the covered model, including to prevent theft, misappropriation, malicious use, or inadvertent release or escape of the model weights from the developer’s custody, that are appropriate in light of the risks associated with the covered model, including from advanced persistent threats or other sophisticated actors.

(2) Implement the capability to promptly enact a full shutdown of the covered model.

(3) Implement all covered guidance.

(4) Implement a written and separate safety and security protocol that does all of the following:

(A) Provides reasonable assurance that if a developer complies with its safety and security protocol, either of the following will apply:

(i) The developer will not produce a covered model with a hazardous capability or enable the production of a derivative model with a hazardous capability.
(ii) The safeguards enumerated in the policy will be sufficient to prevent critical harms from the exercise of a hazardous capability in a covered model.

(B) States compliance requirements in an objective manner and with sufficient detail and specificity to allow the developer or a third party to readily ascertain whether the requirements of the safety and security protocol have been followed.

(C) Identifies specific tests and test results that would be sufficient to reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications, and in addition does all of the following:

(i) Describes in detail how the testing procedure incorporates fine tuning and posttraining modifications performed by third-party experts intending to demonstrate those abilities.

(ii) Describes in detail how the testing procedure incorporates the possibility of posttraining modifications.

(iii) Describes in detail how the testing procedure incorporates the requirement for reasonable margin for safety.

(iv) Provides sufficient detail for third parties to replicate the testing procedure.

(D) Describes in detail how the developer will meet requirements listed under paragraphs (1), (2), (3), and (5).

(E) If applicable, describes in detail how the developer intends to implement the safeguards and requirements referenced in paragraph (1) of subdivision (d).

(F) Describes in detail the conditions that would require the execution of a full shutdown.

(G) Describes in detail the procedure by which the safety and security protocol may be modified.

(H) Meets other criteria stated by the Frontier Model Division in guidance to achieve the purpose of maintaining the safety of a covered model with a hazardous capability.

(5) Ensure that the safety and security protocol is implemented as written, including, at a minimum, by designating senior personnel responsible for ensuring implementation by employees and contractors working on a covered model, monitoring and reporting on implementation, and conducting audits, including through third parties as appropriate.

(6) Provide a copy of the safety and security protocol to the Frontier Model Division.

(7) Conduct an annual review of the safety and security protocol to account for any changes to the capabilities of the covered model and industry best practices and, if necessary, make modifications to the policy.

(8) If the safety and security protocol is modified, provide an updated copy to the Frontier Model Division within 10 business days.

(9) Refrain from initiating training of a covered model if there remains an unreasonable risk that an individual, or the covered model itself, may be able to use the hazardous capabilities of the covered model, or a derivative model based on it, to cause a critical harm.

(c) (1) Upon completion of the training of a covered model that is not the subject of a positive safety determination and is not a derivative model, the developer shall perform capability testing sufficient to determine whether the developer can make a positive safety determination with respect to the covered model pursuant to its safety and security protocol.
(2) Upon making a positive safety determination with respect to the covered model, a developer of the covered model shall submit to the Frontier Model Division a certification of compliance with the requirements of this section within 90 days, and no more than 30 days after initiating the commercial, public, or widespread use of the covered model, that includes both of the following:

(A) The basis for the developer’s positive safety determination.

(B) The specific methodology and results of the capability testing undertaken pursuant to this subdivision.

(d) Before initiating the commercial, public, or widespread use of a covered model that is not subject to a positive safety determination, a developer of the nonderivative version of the covered model shall do all of the following:

(1) Implement reasonable safeguards and requirements to do all of the following:

(A) Prevent an individual from being able to use the hazardous capabilities of the model, or a derivative model, to cause a critical harm.

(B) Prevent an individual from being able to use the model to create a derivative model that was used to cause a critical harm.

(C) Ensure, to the extent reasonably possible, that the covered model’s actions and any resulting critical harms can be accurately and reliably attributed to it and any user responsible for those actions.

(2) Provide reasonable requirements to developers of derivative models to prevent an individual from being able to use a derivative model to cause a critical harm.

(3) Refrain from initiating the commercial, public, or widespread use of a covered model if there remains an unreasonable risk that an individual may be able to use the hazardous capabilities of the model, or a derivative model based on it, to cause a critical harm.

(e) A developer of a covered model shall periodically reevaluate the procedures, policies, protections, capabilities, and safeguards implemented pursuant to this section in light of the growing capabilities of covered models and as is reasonably necessary to ensure that the covered model or its users cannot remove or bypass those procedures, policies, protections, capabilities, and safeguards.

(f) (1) A developer of a nonderivative covered model that is not the subject of a positive safety determination shall submit to the Frontier Model Division an annual certification of compliance with the requirements of this section signed by the chief technology officer, or a more senior corporate officer, in a format and on a date as prescribed by the Frontier Model Division.

(2) In a certification submitted pursuant to paragraph (1), a developer shall specify or provide, at a minimum, all of the following:

(A) The nature and magnitude of hazardous capabilities that the covered model possesses or may reasonably possess and the outcome of capability testing required by subdivision (c).

(B) An assessment of the risk that compliance with the safety and security protocol may be insufficient to prevent harms from the exercise of the covered model’s hazardous capabilities.

(C) Other information useful to accomplishing the purposes of this subdivision, as determined by the Frontier Model Division.

(g) A developer shall report each artificial intelligence safety incident affecting a covered model to the Frontier Model Division in a manner prescribed by the Frontier Model Division.
The notification shall be made in the most expedient time possible and without unreasonable delay and in no event later than 72 hours after learning that an artificial intelligence safety incident has occurred or learning facts sufficient to establish a reasonable belief that an artificial intelligence safety incident has occurred.

(h) (1) Reliance on an unreasonable positive safety determination does not relieve a developer of its obligations under this section.

(2) A positive safety determination is unreasonable if the developer does not take into account reasonably foreseeable risks of harm or weaknesses in capability testing that lead to an inaccurate determination.

(3) A risk of harm or weakness in capability testing is reasonably foreseeable if, by the time that a developer releases a model, an applicable risk of harm or weakness in capability testing has already been identified by either of the following:

(A) Any other developer of a comparable or comparably powerful model through risk assessment, capability testing, or other means.

(B) The United States Artificial Intelligence Safety Institute, the Frontier Model Division, or any independent standard-setting organization or capability-testing organization cited by either of those entities.

22604. A person that operates a computing cluster shall implement appropriate written policies and procedures to do all of the following when a customer utilizes compute resources that would be sufficient to train a covered model:

(a) Obtain a prospective customer’s basic identifying information and business purpose for utilizing the computing cluster, including all of the following:

(1) The identity of that prospective customer.

(2) The means and source of payment, including any associated financial institution, credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier.

(3) The email address and telephonic contact information used to verify a prospective customer’s identity.

(4) The Internet Protocol addresses used for access or administration and the date and time of each access or administrative action.

(b) Assess whether a prospective customer intends to utilize the computing cluster to deploy a covered model.

(c) Annually validate the information collected pursuant to subdivision (a) and conduct the assessment required pursuant to subdivision (b).

(d) Maintain for seven years and provide to the Frontier Model Division or the Attorney General, upon request, appropriate records of actions taken under this section, including policies and procedures put into effect.

(e) Implement the capability to promptly enact a full shutdown in the event of an emergency.

22605. (a) A developer of a covered model that provides commercial access to that covered model shall provide a transparent, uniform, publicly available price schedule for the purchase of access to that covered model at a given level of quality and quantity subject to the developer’s terms of service and shall not engage in unlawful discrimination or noncompetitive activity in determining price or access.

(b) A person that operates a computing cluster shall provide a transparent, uniform, publicly available price schedule for the purchase of access to the computing cluster at a given level of quality and quantity subject to the developer’s terms of service and shall not engage in unlawful discrimination or noncompetitive activity in determining price or access.

22606.
(a) If the Attorney General has reasonable cause to believe that a person is violating this chapter, the Attorney General shall commence a civil action in a court of competent jurisdiction.

(b) In a civil action under this section, the court may award any of the following:

(1) (A) Preventive relief, including a permanent or temporary injunction, restraining order, or other order against the person responsible for a violation of this chapter, including deletion of the covered model and the weights utilized in that model.

(B) Relief pursuant to this paragraph shall be granted only in response to harm or an imminent risk or threat to public safety.

(2) Other relief as the court deems appropriate, including monetary damages to persons aggrieved and an order for the full shutdown of a covered model.

(3) A civil penalty in an amount not exceeding 10 percent of the cost, excluding labor cost, to develop the covered model for a first violation and in an amount not exceeding 30 percent of the cost, excluding labor cost, to develop the covered model for any subsequent violation.

(c) In the apportionment of penalties assessed pursuant to this section, defendants shall be jointly and severally liable.

(d) A court shall disregard corporate formalities and impose joint and several liability on affiliated entities for purposes of effectuating the intent of this section if the court concludes that both of the following are true:

(1) Steps were taken in the development of the corporate structure among affiliated entities to purposely and unreasonably limit or avoid liability.

(2) The corporate structure of the developer or affiliated entities would frustrate recovery of penalties or injunctive relief under this section.

22607. (a) Pursuant to subdivision (a) of Section 1102.5 of the Labor Code, a developer shall not prevent an employee from disclosing information to the Attorney General if the employee has reasonable cause to believe that the information indicates that the developer is out of compliance with the requirements of Section 22603.

(b) Pursuant to subdivision (b) of Section 1102.5 of the Labor Code, a developer shall not retaliate against an employee for disclosing information to the Attorney General if the employee has reasonable cause to believe that the information indicates that the developer is out of compliance with the requirements of Section 22603.

(c) The Attorney General may publicly release any complaint, or a summary of that complaint, pursuant to this section if the Attorney General concludes that doing so will serve the public interest.

(d) Employees shall seek relief for violations of this section pursuant to Sections 1102.61 and 1102.62 of the Labor Code.

(e) Pursuant to subdivision (a) of Section 1102.8 of the Labor Code, a developer shall provide clear notice to all employees working on covered models of their rights and responsibilities under this section.

SEC. 4. Section 11547.6 is added to the Government Code, to read:

11547.6. (a) As used in this section:

(1) “Hazardous capability” has the same meaning as defined in Section 22602 of the Business and Professions Code.

(2) “Positive safety determination” has the same meaning as defined in Section 22602 of the Business and Professions Code.

(b) The Frontier Model Division is hereby created within the Department of Technology.
(c) The Frontier Model Division shall do all of the following:

(1) Review annual certification reports received from developers pursuant to Section 22603 of the Business and Professions Code and publicly release summarized findings based on those reports.

(2) Advise the Attorney General on potential violations of this section or Chapter 22.6 (commencing with Section 22602) of Division 8 of the Business and Professions Code.

(3) (A) Issue guidance, standards, and best practices sufficient to prevent unreasonable risks from covered models with hazardous capabilities including, but not limited to, more specific requirements on the duties required under Section 22603 of the Business and Professions Code.

(B) Establish an accreditation process and relevant accreditation standards under which third parties may be accredited for a three-year period, which may be extended through an appropriate process, to certify adherence by developers to the best practices and standards adopted pursuant to subparagraph (A).

(4) Publish anonymized artificial intelligence safety incident reports received from developers pursuant to Section 22603 of the Business and Professions Code.

(5) Establish confidential fora that are structured and facilitated in a manner that allows developers to share best risk management practices for models with hazardous capabilities in a manner consistent with state and federal antitrust laws.

(6) (A) Issue guidance describing the categories of artificial intelligence safety events that are likely to constitute a state of emergency within the meaning of subdivision (b) of Section 8558 and responsive actions that could be ordered by the Governor after a duly proclaimed state of emergency.

(B) The guidance issued pursuant to subparagraph (A) shall not limit, modify, or restrict the authority of the Governor in any way.

(7) Appoint and consult with an advisory committee that shall advise the Governor on when it may be necessary to proclaim a state of emergency relating to artificial intelligence and advise the Governor on what responses may be appropriate in that event.

(8) Appoint and consult with an advisory committee for open-source artificial intelligence that shall do all of the following:

(A) Issue guidelines for model evaluation for use by developers of open-source artificial intelligence models that do not have hazardous capabilities.

(B) Advise the Frontier Model Division on the creation and feasibility of incentives, including tax credits, that could be provided to developers of open-source artificial intelligence models that are not covered models.

(C) Advise the Frontier Model Division on future policies and legislation impacting open-source artificial intelligence development.

(9) Provide technical assistance and advice to the Legislature, upon request, with respect to artificial intelligence-related legislation.

(10) Monitor relevant developments relating to the safety risks associated with the development of artificial intelligence models and the functioning of markets for artificial intelligence models.

(11) Levy fees, including an assessed fee for the submission of a certification, in an amount sufficient to cover the reasonable costs of administering this section that do not exceed the reasonable costs of administering this section.

(12) (A) Develop and submit to the Judicial Council proposed model jury instructions for actions brought by individuals injured by a hazardous capability of a covered model.
(B) In developing the model jury instructions required by subparagraph (A), the Frontier Model Division shall consider all of the following factors:

(i) The level of rigor and detail of the safety and security protocol that the developer faithfully implemented while it trained, stored, and released a covered model.

(ii) Whether and to what extent the developer’s safety and security protocol was inferior, comparable, or superior, in its level of rigor and detail, to the mandatory safety policies of comparable developers.

(iii) The extent and quality of the developer’s safety and security protocol’s prescribed safeguards, capability testing, and other precautionary measures with respect to the relevant hazardous capability and related hazardous capabilities.

(iv) Whether and to what extent the developer and its agents complied with the developer’s safety and security protocol, to the full degree that doing so might plausibly have avoided causing a particular harm.

(v) Whether and to what extent the developer carefully and rigorously investigated, documented, and accurately measured, insofar as reasonably possible given the state of the art, relevant risks that its model might pose.

(d) There is hereby created in the General Fund the Frontier Model Division Programs Fund.

(1) All fees received by the Frontier Model Division pursuant to this section shall be deposited into the fund.

(2) All moneys in the fund shall be available, only upon appropriation by the Legislature, for purposes of carrying out the provisions of this section.

SEC. 5. Section 11547.7 is added to the Government Code, to read:

11547.7. (a) The Department of Technology shall commission consultants, pursuant to subdivision (b), to create a public cloud computing cluster, to be known as CalCompute, with the primary focus of conducting research into the safe and secure deployment of large-scale artificial intelligence models and fostering equitable innovation that includes, but is not limited to, all of the following:

(1) A fully owned and hosted cloud platform.

(2) Necessary human expertise to operate and maintain the platform.

(3) Necessary human expertise to support, train, and facilitate use of CalCompute.

(b) The consultants shall include, but not be limited to, representatives of national laboratories, public universities, and any relevant professional associations or private sector stakeholders.

(c) To meet the objective of establishing CalCompute, the Department of Technology shall require consultants commissioned to work on this process to evaluate and incorporate all of the following considerations into its plan:

(1) An analysis of the public, private, and nonprofit cloud platform infrastructure ecosystem, including, but not limited to, dominant cloud providers, the relative compute power of each provider, the estimated cost of supporting platforms as well as pricing models, and recommendations on the scope of CalCompute.

(2) The process to establish affiliate and other partnership relationships to establish and maintain an advanced computing infrastructure.

(3) A framework to determine the parameters for use of CalCompute, including, but not limited to, a process for deciding which projects will be supported by CalCompute and what resources and services will be provided to projects.

(4) A process for evaluating appropriate uses of the public cloud resources and their potential downstream impact, including mitigating downstream harms in deployment.

(5) An evaluation of the landscape of existing computing capability, resources, data, and human expertise in California for the purposes of responding quickly to a security, health, or natural disaster emergency.

(6) An analysis of the state’s investment in the training and development of the technology workforce, including through degree programs at the University of California, the California State University, and the California Community Colleges.

(7) A process for evaluating the potential impact of CalCompute on retaining technology professionals in the public workforce.

(d) The Department of Technology shall submit, pursuant to Section 9795, an annual report to the Legislature from the commissioned consultants to ensure progress in meeting the objectives listed above.

(e) The Department of Technology may receive private donations, grants, and local funds, in addition to allocated funding in the annual budget, to effectuate this section.

(f) This section shall become operative only upon an appropriation in a budget act for the purposes of this section.

SEC. 6. The provisions of this act are severable. If any provision of this act or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application.

SEC. 7. This act shall be liberally construed to effectuate its purposes.

SEC. 8. The duties and obligations imposed by this act are cumulative with any other duties or obligations imposed under other law and shall not be construed to relieve any party from any duties or obligations imposed under other law.

SEC. 9. No reimbursement is required by this act pursuant to Section 6 of Article XIII B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII B of the California Constitution.
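Editor's note on the numeric provisions: SB 1047's purely quantitative tests (the 10^26-operation training threshold in Section 22602(f)(1), the 10^20 operations-per-second cluster capacity in Section 22602(d), the 10 and 30 percent penalty ceilings in Section 22606(b)(3), and the 72-hour incident-reporting bound in Section 22603(g)) reduce to a few lines of arithmetic. The Python sketch below restates only those figures as one possible reading of the text; it is an illustration, not a compliance tool, and every function and variable name is ours rather than the bill's.

```python
from datetime import datetime, timedelta

# Numeric tests drawn from SB 1047 as introduced; all names below are
# illustrative, not statutory terms of art.
TRAINING_OPS_THRESHOLD = 1e26   # Sec. 22602(f)(1): training compute, integer or FP ops
CLUSTER_OPS_PER_SEC = 1e20      # Sec. 22602(d): theoretical maximum ops per second
INCIDENT_REPORT_HOURS = 72      # Sec. 22603(g): outer bound for incident reports


def exceeds_covered_model_compute(training_ops: float) -> bool:
    """First prong of the covered-model definition (Sec. 22602(f)(1)).
    The second prong (comparable general capability) is a qualitative
    judgment and is deliberately not modeled here."""
    return training_ops > TRAINING_OPS_THRESHOLD


def meets_cluster_capacity(ops_per_second: float) -> bool:
    """Capacity element of the computing-cluster definition (Sec. 22602(d)).
    The statute says 'a theoretical maximum computing capacity of 10^20';
    reading that as 'at least 10^20' is our assumption, as is ignoring the
    separate over-100-gigabit networking element."""
    return ops_per_second >= CLUSTER_OPS_PER_SEC


def max_civil_penalty(dev_cost_excluding_labor: float, prior_violation: bool) -> float:
    """Ceiling on the civil penalty under Sec. 22606(b)(3): 10 percent of
    non-labor development cost for a first violation, 30 percent for any
    subsequent violation."""
    rate = 0.30 if prior_violation else 0.10
    return rate * dev_cost_excluding_labor


def incident_report_deadline(learned_at: datetime) -> datetime:
    """Latest permissible report time under Sec. 22603(g): no later than
    72 hours after learning of (or forming a reasonable belief in) an
    artificial intelligence safety incident."""
    return learned_at + timedelta(hours=INCIDENT_REPORT_HOURS)
```

Under this reading, for example, a first violation involving a covered model whose non-labor development cost was $100,000,000 would carry a civil penalty ceiling of $10,000,000.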
# SENATE BILL No. 398

**Introduced by Senator Wahab**

**(Coauthor: Senator Limón)**

February 9, 2023

---

An act to add and repeal ~~Chapter 4 (commencing with Section 15210) of Part 6 of Division 3 of Title 2~~ Section 11546.8 of the Government Code, relating to technology.

---

## LEGISLATIVE COUNSEL'S DIGEST

SB 398, as amended, Wahab. Department of Technology: advanced technology: research.

Existing law establishes the Department of Technology, within the Government Operations Agency, under the supervision of the Director of Technology. Under existing law, the department is responsible for the approval and oversight of information technology projects. Existing law requires the department to submit various reports to the Legislature, including an annual information technology strategic plan that guides the acquisition, management, and use of information technology by state agencies.

This bill, the ~~Government Services Advanced Technology Act~~ Artificial Intelligence for California Research Act, would require the Department of ~~Justice~~ Technology, upon appropriation by the Legislature, to develop and implement a comprehensive research plan to study the feasibility of using advanced technology to improve state and local government services. The bill would require the research plan to include, among other things, an analysis of the potential benefits and risks of using artificial intelligence technology ~~to assist disaster victims in finding and applying for disaster relief funds and to assist individuals in determining their eligibility for various public benefits programs~~ in government services, as specified. The bill would require the department, on or before January 1, 2026, to provide a report to the Legislature on the findings of its research.

Vote: majority. Appropriation: no. Fiscal committee: yes. State-mandated local program: no.

The people of the State of California do enact as follows:

## SECTION 1. Section 11546.8 is added to the Government Code, to read:

11546.8. (a) This section shall be known, and may be cited, as the Artificial Intelligence for California Research Act.

(b) The purpose of this section is to dedicate funds to research the feasibility and risks of using advanced technology, such as artificial intelligence dialogue systems, to improve government services.

(c) Upon appropriation by the Legislature, the Department of Technology shall develop and implement a comprehensive research plan to study the feasibility of using advanced technology to improve government services.

(d) The research plan shall include, but is not limited to, all of the following:

(1) An analysis of the potential benefits and risks, including the impacts on equity, efficiency, accuracy, and cost effectiveness, of using artificial intelligence technology in different government services, including the use of artificial intelligence for each of the following:

(A) Virtual assistants powered by an artificial intelligence language system for unemployment and disability insurance to assist claimants in navigating the unemployment or disability insurance application process, answering questions, and providing real-time status updates.

(B) A rental assistance chatbot to assist renters in finding affordable housing options, assist with the application process for rental assistance programs, and provide real-time updates on their application status.
(C) Assisting disaster victims in finding and applying for disaster relief funds and assisting with navigating the application process.

(D) Assisting individuals in making public records requests, providing information on the required documents and processes, and providing real-time updates on the status of their requests.

(E) Assisting individuals in determining their eligibility for various public benefits programs, such as food stamps, energy assistance, and childcare assistance.

(2) A review of best practices and case studies from other government entities using similar advanced technology and an assessment of their successes and failures.

(3) A thorough cost-benefit analysis of implementing advanced technology in government services, including the costs of implementation, maintenance, and training, and the potential benefits and savings to government operations and the public.

(4) Recommendations for effectively integrating advanced technology into government services, including guidelines for implementation, risk management, and ongoing monitoring and evaluation.

(5) An analysis of any risks to individual privacy and recommendations for mitigating privacy issues in implementation.

(e) On or before January 1, 2026, the Department of Technology shall provide a report to the Legislature, in accordance with Section 9795, on the findings of its research conducted pursuant to this section.

(f) For the purposes of this section, "government services" means public benefits provided by the state or a local government.

(g) This section shall remain in effect only until January 1, 2026, and as of that date is repealed.

---

# Chapter 4. Government Services Advanced Technology Act

15210. (a) This act shall be known, and may be cited, as the Government Services Advanced Technology Act.

(b) The purpose of this act is to dedicate funds to research the feasibility and risks of using advanced technology, such as artificial intelligence dialogue systems, to improve government services.

15211. For the purposes of this chapter, "government services" means public benefits provided by the state or a local government.

15212. (a) Upon appropriation by the Legislature, the Department of Justice shall develop and implement a comprehensive research plan to study the feasibility of using advanced technology to improve government services.

(b) The research plan shall include, but is not limited to, all of the following:

(1) An analysis of the potential benefits and risks, including the impact on the equity, efficiency, accuracy, and cost-effectiveness, of using artificial intelligence technology in different government services, including the use of artificial intelligence for each of the following:

(A) Virtual assistants powered by an artificial intelligence language system for unemployment and disability insurance to assist claimants in navigating the unemployment or disability insurance application process, answering questions, and providing real-time status updates.

(B) A rental assistance chatbot to assist renters in finding affordable housing options, assist with the application process for rental assistance programs, and provide real-time updates on their application status.

(C) Assisting disaster victims in finding and applying for disaster relief funds and assisting with navigating the application process.

(D) Assisting individuals in making public records requests, providing information on the required documents and process, and providing real-time updates on the status of their request.
(E) Assisting individuals in determining their eligibility for various public benefits programs, such as food stamps, energy assistance, and child care assistance.

(2) A review of best practices and case studies from other government entities using similar advanced technology and an assessment of their successes and failures.

(3) A thorough cost-benefit analysis of implementing advanced technology in government services, including the costs of implementation, maintenance, and training, and the potential benefits and savings to government operations and the public.

(4) Recommendations for effectively integrating advanced technology into government services, including guidelines for implementation, risk management, and ongoing monitoring and evaluation.

(5) An analysis of any risks to individual privacy and recommendations for mitigating privacy issues in implementation.

15213. On or before January 1, 2026, the Department of Justice shall provide a report to the Legislature, in accordance with Section 9795, on the findings of its research conducted pursuant to this chapter.

15214. This chapter shall remain in effect only until January 1, 2026, and as of that date is repealed.
# Senate Bill No. 848

## CHAPTER 724

An act to add Section 12945.6 to the Government Code, relating to employment.

[Approved by Governor October 10, 2023. Filed with Secretary of State October 10, 2023.]

## LEGISLATIVE COUNSEL'S DIGEST

SB 848, Rubio. Employment: leave for reproductive loss.

Existing law, the California Fair Employment and Housing Act, makes it an unlawful employment practice for an employer to refuse to grant a request by any employee to take up to 5 days of bereavement leave upon the death of a family member.

This bill would additionally make it an unlawful employment practice for an employer to refuse to grant a request by an eligible employee to take up to 5 days of reproductive loss leave following a reproductive loss event, as defined. The bill would require that leave be taken within 3 months of the event, except as described, and pursuant to any existing leave policy of the employer. The bill would provide that if an employee experiences more than one reproductive loss event within a 12-month period, the employer is not obligated to grant a total amount of reproductive loss leave time in excess of 20 days within a 12-month period. Under the bill, in the absence of an existing policy, the reproductive loss leave may be unpaid. However, the bill would authorize an employee to use certain other leave balances otherwise available to the employee, including accrued and available paid sick leave. The bill would make leave under these provisions a separate and distinct right from any right under the California Fair Employment and Housing Act. The bill would make it an unlawful employment practice for an employer to retaliate against an individual, as described, because of the individual's exercise of the right to reproductive loss leave or the individual's giving of information or testimony as to reproductive loss leave, as described. The bill would require the employer to maintain employee confidentiality relating to reproductive loss leave, as specified.

Existing constitutional provisions require that a statute that limits the right of access to the meetings of public bodies or the writings of public officials and agencies be adopted with findings demonstrating the interest protected by the limitation and the need for protecting that interest.

This bill would make legislative findings to that effect.

The people of the State of California do enact as follows:

SECTION 1. Section 12945.6 is added to the Government Code, to read:

12945.6. (a) For purposes of this section, the following definitions apply:

(1) (A) "Assisted reproduction" means a method of achieving a pregnancy through artificial insemination or an embryo transfer and includes gamete and embryo donation.

(B) "Assisted reproduction" does not include any pregnancy achieved through sexual intercourse.

(2) "Employee" means a person employed by the employer for at least 30 days prior to the commencement of the leave.

(3) "Employer" means either of the following:

(A) A person who employs five or more persons to perform services for a wage or salary.

(B) The state and any political or civil subdivision of the state, including, but not limited to, cities and counties.

(4) "Failed adoption" means the dissolution or breach of an adoption agreement with the birth mother or legal guardian, or an adoption that is not finalized because it is contested by another party. This event applies to a person who would have been a parent of the adoptee if the adoption had been completed.
(5) "Failed surrogacy" means the dissolution or breach of a surrogacy agreement, or a failed embryo transfer to the surrogate. This event applies to a person who would have been a parent of a child born as a result of the surrogacy.

(6) "Miscarriage" means a miscarriage by a person, by the person's spouse or domestic partner, or by another individual if the person would have been a parent of a child born as a result of the pregnancy.

(7) "Reproductive loss event" means the day or, for a multiple-day event, the final day of a failed adoption, failed surrogacy, miscarriage, stillbirth, or an unsuccessful assisted reproduction.

(8) "Reproductive loss leave" means the leave provided by subdivision (b).

(9) "Stillbirth" means a stillbirth resulting from a person's pregnancy, the pregnancy of a person's current spouse or domestic partner, or another individual, if the person would have been a parent of a child born as a result of the pregnancy that ended in stillbirth.

(10) "Unsuccessful assisted reproduction" means an unsuccessful round of intrauterine insemination or of an assisted reproductive technology procedure. This event applies to a person, the person's current spouse or domestic partner, or another individual, if the person would have been a parent of a child born as a result of the pregnancy.

(b) (1) It shall be an unlawful employment practice for an employer to refuse to grant a request by any employee to take up to five days of reproductive loss leave following a reproductive loss event. If an employee experiences more than one reproductive loss event within a 12-month period, an employer shall not be obligated to grant a total amount of reproductive loss leave in excess of 20 days within a 12-month period.

(2) The employer shall allow the days an employee takes for reproductive loss leave to be nonconsecutive.

(3) (A) Except as provided in subparagraph (B), reproductive loss leave shall be completed within three months of the event entitling the employee to that leave under paragraph (1).

(B) Notwithstanding subparagraph (A), if, prior to or immediately following a reproductive loss event, an employee is on or chooses to go on leave from work pursuant to Section 12945, 12945.2, or any other leave entitlement under state or federal law, the employee shall complete their reproductive loss leave within three months of the end date of the other leave.

(4) (A) Reproductive loss leave shall be taken pursuant to any existing applicable leave policy of the employer.

(B) If there is no existing applicable leave policy, reproductive loss leave may be unpaid, except that an employee may use vacation, personal leave, accrued and available sick leave, or compensatory time off that is otherwise available to the employee.

(c) It shall be an unlawful employment practice for an employer to retaliate against an individual, including, but not limited to, refusing to hire, discharging, demoting, fining, suspending, expelling, or discriminating against, an individual because of either of the following:

(1) An individual's exercise of the right to reproductive loss leave.

(2) An individual's giving information or testimony as to their own reproductive loss leave, or another person's reproductive loss leave, in an inquiry or proceeding related to rights guaranteed under this section.

(d) It shall be an unlawful employment practice for an employer to interfere with, restrain, or deny the exercise of, or the attempt to exercise, any right provided under this section.
(e) The employer shall maintain the confidentiality of any employee requesting leave under this section. Any information provided to the employer pursuant to this section shall be maintained as confidential and shall not be disclosed except to internal personnel or counsel, as necessary, or as required by law.

(f) An employee's right to reproductive loss leave shall be construed as a separate and distinct right from any right under this part.

SEC. 2. The Legislature finds and declares that Section 1 of this act, which adds Section 12945.6 to the Government Code, imposes a limitation on the public's right of access to the meetings of public bodies or the writings of public officials and agencies within the meaning of Section 3 of Article I of the California Constitution. Pursuant to that constitutional provision, the Legislature makes the following findings to demonstrate the interest protected by this limitation and the need for protecting that interest:

The confidentiality provisions set forth in Section 1 further the need to protect the privacy rights of employees regarding a reproductive loss, and to protect the enforcement process related to violations of these provisions. These limitations are needed in order to strike the proper balance between the privacy interests of the employee and the employee's family, and the public's right to access.
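Editor's note on the leave arithmetic: Section 12945.6(b) above turns on three numbers: five days per reproductive loss event, a 20-day ceiling per 12-month period, and a three-month completion window that shifts to the end of any other statutory leave. The Python sketch below is a minimal illustration of that arithmetic under stated assumptions (a fixed 91-day approximation of "three months" and a caller-supplied count of days already granted in the trailing period); the names are ours, not the statute's.

```python
from datetime import date, timedelta

# Illustrative reading of Government Code Sec. 12945.6(b); all names are ours.
DAYS_PER_EVENT = 5       # Sec. 12945.6(b)(1): up to 5 days per event
CAP_PER_12_MONTHS = 20   # Sec. 12945.6(b)(1): employer need not grant more per 12 months
WINDOW_DAYS = 91         # Sec. 12945.6(b)(3): "three months", approximated as 91 days


def remaining_entitlement(days_already_granted_in_period: int) -> int:
    """Days an employer is obligated to grant for a new reproductive loss
    event, given days already granted in the trailing 12-month period.
    The days granted may be taken nonconsecutively (Sec. 12945.6(b)(2))."""
    cap_room = max(0, CAP_PER_12_MONTHS - days_already_granted_in_period)
    return min(DAYS_PER_EVENT, cap_room)


def completion_deadline(event_day: date, other_leave_end: date | None = None) -> date:
    """Approximate end of the completion window. Per Sec. 12945.6(b)(3)(B),
    the window runs from the end date of any other statutory leave taken
    around the event; otherwise it runs from the event itself."""
    start = other_leave_end if other_leave_end is not None else event_day
    return start + timedelta(days=WINDOW_DAYS)
```

For example, under this reading an employee already granted 18 days of reproductive loss leave in the trailing 12 months would be entitled to at most 2 more days for a new event.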
# SENATE BILL No. 893

**Introduced by Senator Padilla**

January 3, 2024

---

An act to add Chapter 5.1 (commencing with Section 11530) to Part 1 of Division 3 of Title 2 of the Government Code, relating to artificial intelligence.

---

## LEGISLATIVE COUNSEL’S DIGEST

**SB 893, as introduced, Padilla. California Artificial Intelligence Research Hub.**

Existing law requires the Secretary of Government Operations to develop a coordinated plan to, among other things, investigate the feasibility of, and obstacles to, developing standards and technologies for state departments to determine digital content provenance. For the purpose of informing that coordinated plan, existing law requires the secretary to evaluate, among other things, the impact of the proliferation of deepfakes, defined to mean audio or visual content that has been generated or manipulated by artificial intelligence that would falsely appear to be authentic or truthful and that features depictions of people appearing to say or do things they did not say or do without their consent, on state government, California-based businesses, and residents of the state.

This bill would require the Government Operations Agency, the Governor’s Office of Business and Economic Development, and the Department of Technology to collaborate to establish the California Artificial Intelligence Research Hub (hub) in the Government Operations Agency. The bill would require the hub to serve as a centralized entity to facilitate collaboration between government agencies, academic institutions, and private sector partners to advance artificial intelligence research and development that seeks to harness the technology’s full potential for public benefit while safeguarding privacy, advancing security, and addressing risks and potential harms to society, as prescribed.

Vote: majority. Appropriation: no. Fiscal committee: yes. State-mandated local program: no.

The people of the State of California do enact as follows:

SECTION 1. This act shall be known as the California Artificial Intelligence Research Hub Act.

SEC. 2. Chapter 5.1 (commencing with Section 11530) is added to Part 1 of Division 3 of Title 2 of the Government Code, to read:

# Chapter 5.1. California Artificial Intelligence Research Hub

11530. (a) As used in this section, “the hub” means the California Artificial Intelligence Research Hub.

(b) The Government Operations Agency, the Governor’s Office of Business and Economic Development, and the Department of Technology shall collaborate to establish the California Artificial Intelligence Research Hub in the Government Operations Agency.

(c) The hub shall serve as a centralized entity to facilitate collaboration between government agencies, academic institutions, and private sector partners to advance artificial intelligence research and development that seeks to harness the technology’s full potential for public benefit while safeguarding privacy, advancing security, and addressing risks and potential harms to society.

(d) The Government Operations Agency, the Governor’s Office of Business and Economic Development, and the Department of Technology shall consult with academic institutions within the state in establishing the hub.

(e) The hub shall do all of the following:

(1) (A) Increase lawful access to government data while protecting privacy and safeguarding access to data by developing a streamlined process for researchers to access data collected by state agencies.
(B) In complying with subparagraph (A), the hub shall create a process for eligibility that prioritizes security by limiting who can access the data and for what purpose.

(2) Support the access to, and development of, artificial intelligence computing capacity and technology by building out public computing infrastructure, facilitating access to existing commercial computing infrastructure, or finding ways to reduce costs and other economic barriers research institutions may face in accessing computing infrastructure.

(3) Spur innovation in artificial intelligence applications for the benefit of the public.

(4) Ensure the development of trustworthy artificial intelligence technologies with a focus on transparency, fairness, and accountability.

(5) Provide researchers with increased access to data and computing resources, education, and training opportunities in furtherance of applications of artificial intelligence for benefit to the people of California.
# SENATE BILL No. 896

**Introduced by Senator Dodd**

January 3, 2024

---

An act to add Chapter 5.9 (commencing with Section 11549.63) to Part 1 of Division 3 of Title 2 of the Government Code, relating to artificial intelligence.

---

## LEGISLATIVE COUNSEL’S DIGEST

**SB 896**, as introduced, Dodd. Artificial Intelligence Accountability Act.

Existing law requires the Secretary of Government Operations to develop a coordinated plan to, among other things, investigate the feasibility of, and obstacles to, developing standards and technologies for state departments to determine digital content provenance. For the purpose of informing that coordinated plan, existing law requires the secretary to evaluate, among other things, the impact of the proliferation of deepfakes, defined to mean audio or visual content that has been generated or manipulated by artificial intelligence that would falsely appear to be authentic or truthful and that features depictions of people appearing to say or do things they did not say or do without their consent, on state government, California-based businesses, and residents of the state.

This bill, the Artificial Intelligence Accountability Act, would, among other things, require the Government Operations Agency, the Department of Technology, and the Office of Data and Innovation to produce a State of California Benefits and Risk of Generative Artificial Intelligence Report that includes certain items, including an examination of the most significant, potentially beneficial uses for deployment of generative artificial intelligence tools by the state, and would require those entities to update the report, as prescribed. The bill would require, as often as is deemed appropriate by the Director of Emergency Services, the California Cybersecurity Integration Center, and the State Threat Assessment Center, those entities to perform a joint risk analysis of potential threats posed by the use of generative artificial intelligence to California’s critical energy infrastructure, including those that could lead to mass casualty events and environmental emergencies.

This bill would also require a state agency or department that utilizes generative artificial intelligence to directly communicate with a person, either through an online interface or telephonically, to clearly and in a conspicuous manner identify to that person that the person’s interaction with the state agency or department is being communicated through artificial intelligence. This bill would also require an automated decisionmaking system, as defined, utilized by a state agency or department to be evaluated for risk potential before adoption, as specified.

Vote: majority. Appropriation: no. Fiscal committee: yes. State-mandated local program: no.

The people of the State of California do enact as follows:

## SECTION 1. This act shall be known as the Artificial Intelligence Accountability Act.

## SEC. 2. Chapter 5.9 (commencing with Section 11549.63) is added to Part 1 of Division 3 of Title 2 of the Government Code, to read:

### Chapter 5.9. Artificial Intelligence Tools

11549.63. The Legislature finds and declares all of the following:

(a) The Legislature recognizes the tremendous potential of artificial intelligence (AI) to improve the lives of its citizens and the functioning of government.
However, the Legislature also recognizes that the use of AI must be guided by principles of fairness, transparency, privacy, and accountability to ensure that the rights and opportunities of all Californians are protected in the age of artificial intelligence.

(b) The Legislature further recognizes that generative artificial intelligence (GenAI) enables significant, beneficial uses through its unique capabilities, but GenAI raises novel risks compared to conventional AI across critical areas, including democratic and legal processes, biases and equity, public health and safety, and the economy, and requires measures to address insufficiently guarded governmental systems and unintended or emergent harmful effects from this technology. Additionally, because humans have explicit and implicit biases built into our society, GenAI has the capacity to amplify these biases as it learns from input data. Therefore, it is imperative to consider the implications for Californians of, among other categories, different regions, income, races, ethnicities, gender, ages, religions, abilities, and sexual orientation for all GenAI inputs, outputs, and products, both for prioritizing implementations that may promote equity and for guarding against bias and other negative impacts.

(c) No individual or group should be discriminated against on the basis of race, gender, age, religion, sexual orientation, or any other protected characteristic in the design, development, deployment, or use of AI systems. The unprecedented speed of innovation and deployment of GenAI technologies necessitates proactive guardrails to protect against potential risks or malicious uses, including, but not limited to, bioterrorism, cyberattacks, disinformation, deception, violation of privacy, and discrimination or bias.

(d) The Legislature affirms the importance of transparency in the use of AI systems. The public has the right to know when they are interacting with AI being used by the state and to have a clear and conspicuous identification of that interaction.

(e) The Legislature recognizes that the use of AI systems must be consistent with the protection of privacy and civil liberties and must be guided by a commitment to equity and social justice. It is the intent of the Legislature in enacting this legislation that all AI systems be designed and deployed in a manner that is consistent with state and federal laws and regulations regarding privacy and civil liberties, minimizes bias, and promotes equitable outcomes for all Californians.

(f) This act, in addition to the 2022 White House Blueprint for an AI Bill of Rights, executive guidance from the Governor, statutory or regulatory requirements, and evolving best practices, should guide the decisionmaking of state agencies, departments, and subdivisions in the review, adoption, management, governance, and regulation of automated decisionmaking technologies. Further, state leadership on adopting these standards and best practices should encourage the private sector to adopt these practices and safeguards.

(g) Public colleges and universities should collaborate with the private sector and relevant state agencies to train students to meet the AI workforce development needs of the state, including providing instruction on AI and related ethical, privacy, and security considerations while advancing research on best practices.
Further, the state needs to recruit, retain, and train AI professionals in certain state jobs, and agencies should collaborate to facilitate a pipeline and infrastructure to accomplish that goal, including adopting appropriate job classifications and incentive programs.

(h) State agencies, departments, and boards should utilize their full range of authority to protect consumers, patients, passengers, and students from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of GenAI, including risks to financial stability. State agencies, departments, and boards should consider rulemaking and emphasize or clarify whether existing regulations and guidance apply to GenAI or other automated decisionmaking systems. They should also clarify any responsibility of regulated entities to conduct due diligence with respect to third-party AI services they use and emphasize or clarify requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.

(i) The California Privacy Protection Agency should utilize its existing authority to promulgate automated decisionmaking technology regulations.

#### 11549.64. As used in this chapter:

(a) (1) “Automated decision system” means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.

(2) “Automated decision system” does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.

(b) “Generative artificial intelligence” means the class of artificial intelligence models that emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, videos, audio, text, and other digital content.

(c) “High-risk automated decision system” means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice.

(d) “Person” means a natural person.

(e) “Report” means the State of California Benefits and Risk of Generative Artificial Intelligence Report required by Section 11549.65.

11549.65. (a) (1) The Government Operations Agency, the Department of Technology, and the Office of Data and Innovation shall produce a State of California Benefits and Risk of Generative Artificial Intelligence Report that includes all of the following:

(A) An examination of the most significant, potentially beneficial uses for deployment of generative artificial intelligence tools by the state.

(B) An explanation of the potential risks of the uses described in subparagraph (A) to individuals, communities, and government workers, with a focus on high-risk uses, including the use of generative artificial intelligence to make a consequential decision affecting access to essential goods and services.
(C) An explanation of risks from bad actors and insufficiently guarded governmental systems, unintended or emergent effects, and potential risks toward democratic and legal processes, public health and safety, and the economy.

(2) The Government Operations Agency, the Department of Technology, and the Office of Data and Innovation shall update the report, as needed, to respond to significant developments and shall, as appropriate, consult with academia, industry experts, and organizations that represent state government employees.

(b) (1) (A) As often as is deemed appropriate by the Director of Emergency Services, the California Cybersecurity Integration Center, and the State Threat Assessment Center, those entities shall perform a joint risk analysis of potential threats posed by the use of generative artificial intelligence to California’s critical energy infrastructure, including those that could lead to mass casualty events and environmental emergencies.

(B) The entities described in subparagraph (A) shall develop, in consultation with appropriate external experts from academia and industry, a strategy to assess similar potential threats to other critical infrastructure.

(2) The analysis required by paragraph (1) shall be provided to the Governor, and, if appropriate, public recommendations shall be made reflecting changes to artificial intelligence technology, its applications, and risk management, including further private actions, administrative actions, and collaboration with the Legislature to guard against potential threats and vulnerabilities.

(c) (1) (A) The Government Operations Agency, the Department of General Services, the Department of Technology, and the California Cybersecurity Integration Center shall develop, maintain, and periodically evaluate and revise general guidelines for public sector procurement, uses, and required trainings for the use of generative artificial intelligence, including for high-risk scenarios and for consequential decisions affecting access to essential goods and services.

(B) The guidelines required by this paragraph shall build on guidance from the White House publication titled Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework and shall address safety, algorithmic discrimination, data privacy, and notice of when materials are generated by generative artificial intelligence.

(C) The Government Operations Agency shall engage and consult with organizations that represent state government employees and industry experts, including, but not limited to, trust and safety experts, academic researchers, and research institutions in developing the guidelines required by this paragraph.

(2) For purposes of the periodic evaluation and revision required by paragraph (1), the Government Operations Agency, the Department of General Services, the Department of Technology, and the California Cybersecurity Integration Center shall periodically evaluate any need to revise the guidelines and establish a consultative process by which to do so with academia, industry experts, and organizations that represent state government employees.
(d) (1) (A) The Government Operations Agency, the Department of Technology, and the Office of Data and Innovation shall develop, maintain, and periodically evaluate and revise guidelines for state agencies and departments to analyze the impact that adopting a generative artificial intelligence tool may have on vulnerable communities, including criteria to evaluate equitable outcomes in deployment and implementation of high-risk uses.

(B) The guidelines required by this paragraph shall inform whether and how a state agency or department deploys a particular generative artificial intelligence tool.

(C) The Government Operations Agency shall engage and consult with organizations that represent state government employees and industry experts, including, but not limited to, trust and safety experts, academic researchers, and research institutions in developing the guidelines required by this paragraph.

(2) For purposes of the periodic evaluation and revision required by paragraph (1), the Government Operations Agency, the Department of General Services, the Department of Technology, and the California Cybersecurity Integration Center shall periodically evaluate any need to revise the guidelines and establish a consultative process by which to do so with academia, industry experts, and organizations that represent state government employees.

(e) The Government Operations Agency, the Department of General Services, and the Department of Technology shall update, as needed, the state’s project approval, procurement, and contract terms to incorporate analysis and feedback obtained pursuant to subdivisions (c) and (d).

(f) (1) To assist the Government Operations Agency and the Department of Technology in their efforts to perform any periodic review and update under this section, all state agencies and departments shall, as requested by the Government Operations Agency or the Department of Technology, conduct and submit an inventory of all current high-risk uses of generative artificial intelligence within the agency or department to the Department of Technology, which shall administer the inventory.

(2) A state agency or department shall appoint senior-level management personnel responsible for maintaining, conducting, and reporting the results of the inventory described by paragraph (1) to the Department of Technology within 60 days of issuance of a request for an inventory pursuant to paragraph (1).

(g) Any state agency or department shall consider procurement and enterprise use opportunities in which generative artificial intelligence can improve the efficiency, effectiveness, accessibility, and equity of government operations, consistent with the Government Operations Agency’s, the Department of General Services’, and the Department of Technology’s guidelines for public sector generative artificial intelligence procurement.

(h) (1) The Department of Technology shall establish and maintain the infrastructure to conduct pilots of generative artificial intelligence projects, including Department of Technology-approved environments to test those pilot projects.
(2) An environment created pursuant to this subdivision shall be available to any state agency or department to help evaluate generative artificial intelligence tools and services, to further safe, ethical, and responsible implementations, and to inform decisions to use generative artificial intelligence consistent with state guidelines.

(i) (1) By July 1, ____, any state agency or department shall consider pilot projects of generative artificial intelligence applications, in consultation with organizations that represent state government employees and appropriate experts from academia and industry.

(2) A pilot project described by paragraph (1) shall measure both of the following:

(A) How generative artificial intelligence can improve Californians’ experience with, and access to, government services.

(B) How generative artificial intelligence can support state employees in the performance of their duties, in addition to any domain-specific impacts to be measured by the state agency or department.

(j) The Government Operations Agency, the Department of General Services, the Department of Technology, the Office of Data and Innovation, and the California Cybersecurity Integration Center shall engage with the Legislature and relevant stakeholders, including historically vulnerable and marginalized communities and organizations that represent state government employees, in the development and revision of any guidelines, criteria, reports, or training pursuant to this section.

(k) A state agency or department shall support the state government workforce and prepare for the next generation of skills needed to thrive in the generative artificial intelligence economy by complying with both of the following:

(1) The Government Operations Agency, the Department of Technology, and any other agencies deemed necessary shall make available trainings for state government worker use of state-approved generative artificial intelligence tools to achieve equitable outcomes and to identify and mitigate potential output inaccuracies, fabricated text, hallucinations, and biases of generative artificial intelligence, while enforcing public privacy and applicable state laws and policies. If appropriate, the Department of Technology and any other agency or department deemed necessary shall collaborate with organizations that represent state government employees and industry experts on developing and providing training.

(2) The Government Operations Agency, in consultation with appropriate state agencies and organizations that represent state government employees, shall establish criteria to evaluate the impact of generative artificial intelligence on the state government workforce and provide guidelines on how state agencies and departments can support state government employees to use these tools effectively and respond to these technological advancements.

(l) Legal counsel for any state agency or department shall consider any potential impact of generative artificial intelligence on regulatory issues under the respective agency’s or department’s authority and recommend necessary updates, if appropriate, as a result of this evolving technology.

11549.66.
(a) A state agency or department that utilizes generative artificial intelligence to directly communicate with a person, either through an online interface or telephonically, shall clearly and in a conspicuous manner identify to that person that the person’s interaction with the state agency or department is being communicated through artificial intelligence.

(b) A state agency or department that utilizes generative artificial intelligence to directly communicate with a person shall provide on the state agency’s or department’s internet website clear instructions, or a link to a web page with clear instructions, informing the person how to directly communicate with a person from the state agency or department.

11549.67. (a) (1) An automated decisionmaking system utilized by a state agency or department shall be evaluated for risk potential before adoption.

(2) A high-risk automated decision system shall receive appropriate consultation, testing, risk identification, and mitigation consistent with this chapter and shall not be adopted or utilized by a state agency or department without the prior approval of the director of that agency or department, or that person’s designee.

(b) A high-risk automated decision system that is utilized by a state agency or department shall receive ongoing monitoring and clear organizational oversight.
# Senate Concurrent Resolution No. 17

## RESOLUTION CHAPTER 135

Senate Concurrent Resolution No. 17—Relative to artificial intelligence.

[Filed with Secretary of State August 23, 2023.]

## LEGISLATIVE COUNSEL’S DIGEST

SCR 17, Dodd. Artificial intelligence.

This measure would affirm the California Legislature’s commitment to President Biden’s vision for safe AI and the principles outlined in the “Blueprint for an AI Bill of Rights” and would express the Legislature’s commitment to examining and implementing those principles in its legislation and policies related to the use and deployment of automated systems.

**WHEREAS**, The use of technology, data, and automated systems poses significant challenges to democracy and the rights of the public, as evidenced by incidents of unsafe, ineffective, or biased systems in health care, discriminatory algorithms in hiring and credit decisions, and unchecked data collection that threatens privacy and opportunities; and

**WHEREAS**, Automated systems also have the potential to bring about extraordinary benefits, including increasing efficiency in agriculture and revolutionizing industries through data analysis; and

**WHEREAS**, President Joseph R. Biden has affirmed civil rights and democratic values as a cornerstone of his administration and has ordered the federal government to work toward rooting out inequity and advancing civil rights, equal opportunity, and racial justice; and

**WHEREAS**, The White House Office of Science and Technology Policy has developed the “Blueprint for an AI Bill of Rights,” a set of five principles to guide the design, use, and deployment of automated systems in a manner that protects the rights of the public while leveraging the benefits of AI; now, therefore, be it

**Resolved by the Senate of the State of California, the Assembly thereof concurring**, That the California Legislature affirms its commitment to President Biden’s vision for safe AI and the principles outlined in the “Blueprint for an AI Bill of Rights,” including: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback; and be it further

**Resolved**, That the California Legislature commits to examining and implementing these principles in its legislation and policies related to the use and deployment of automated systems in the State of California; and be it further

**Resolved**, That the Secretary of the Senate transmit copies of this resolution to the author for appropriate distribution.
