 iment  with  new applications of AI and which have the potential to find
 new ways to employ technology at the service of New Yorkers. The goal of
 the legislature is to encourage safe innovation  in  the  AI  sector  by
 providing  clear  guidance  for  AI development, testing, and validation
 both before a product is launched  and  throughout  the  product's  life
 cycle.
  (d) New York must establish that the burden of proving that AI
products do not cause harm to New Yorkers will be shouldered by the
developers and deployers of AI. While government and civil society must
act to audit and enforce human rights laws around the use of AI,
 the  companies  employing  and profiting from the use of AI must lead in
 ensuring that their products are free from algorithmic discrimination.
  (e) Close collaboration and communication between New York state and
industry partners are key to ensuring that innovation can occur with
 safeguards to protect all New Yorkers. This legislation will ensure that
 lines of communication exist and that there is clear statutory authority
 to investigate and prosecute entities that break the law.
   (f) As new forms of AI are developed beyond what is currently  techno-
 logically  feasible,  the goal of the legislature is to use this section
 as a guiding light for future regulations.
   (g) Lastly, it is in the interest of all New Yorkers that certain uses
 of AI that infringe on fundamental rights, deepen structural inequality,
or result in unequal access to services shall be banned.
   § 3. The civil rights law is amended by adding a new  article  8-A  to
 read as follows:
                                ARTICLE 8-A
           PROTECTIONS REGARDING USE OF ARTIFICIAL INTELLIGENCE
 SECTION 85.   DEFINITIONS.
         86.   UNLAWFUL DISCRIMINATORY PRACTICES.
         86-A. DEPLOYER AND DEVELOPER OBLIGATIONS.
         86-B. WHISTLEBLOWER PROTECTIONS.
         87.   AUDITS.
         88.   HIGH-RISK AI SYSTEM REPORTING REQUIREMENTS.
         89.   RISK MANAGEMENT POLICY AND PROGRAM.
         89-A. SOCIAL SCORING AI SYSTEMS PROHIBITED.
         89-B. DEVELOPER SAFE HARBOR.
         89-C. ENFORCEMENT.
         89-D. SEVERABILITY.
   §  85. DEFINITIONS. THE FOLLOWING TERMS SHALL HAVE THE FOLLOWING MEAN-
 INGS:
   1. "ALGORITHMIC DISCRIMINATION" MEANS ANY CONDITION IN WHICH  THE  USE
 OF  AN  AI  SYSTEM  CONTRIBUTES TO UNJUSTIFIED DIFFERENTIAL TREATMENT OR
 IMPACTS, DISFAVORING PEOPLE BASED ON  THEIR  ACTUAL  OR  PERCEIVED  AGE,
 RACE, ETHNICITY, CREED, RELIGION, COLOR, NATIONAL ORIGIN, CITIZENSHIP OR
 IMMIGRATION   STATUS,   SEXUAL   ORIENTATION,  GENDER  IDENTITY,  GENDER
EXPRESSION, MILITARY STATUS, SEX, DISABILITY, PREDISPOSING GENETIC
CHARACTERISTICS, FAMILIAL STATUS, MARITAL STATUS, PREGNANCY, PREGNANCY
OUTCOMES, HEIGHT, WEIGHT, REPRODUCTIVE HEALTH CARE OR AUTONOMY, STATUS
AS A VICTIM OF DOMESTIC VIOLENCE, OR OTHER CLASSIFICATION
 PROTECTED UNDER STATE OR FEDERAL LAWS.  ALGORITHMIC DISCRIMINATION SHALL
 NOT INCLUDE:
   (A) A DEVELOPER'S OR DEPLOYER'S TESTING OF  THEIR  OWN  AI  SYSTEM  TO
 IDENTIFY, MITIGATE, AND PREVENT DISCRIMINATORY BIAS;
   (B)  EXPANDING AN APPLICANT, CUSTOMER, OR PARTICIPANT POOL TO INCREASE
 DIVERSITY OR REDRESS HISTORICAL DISCRIMINATION; OR
   (C) AN ACT OR OMISSION BY OR ON BEHALF OF  A  PRIVATE  CLUB  OR  OTHER
 ESTABLISHMENT  THAT  IS  NOT IN FACT OPEN TO THE PUBLIC, AS SET FORTH IN
 TITLE II OF THE FEDERAL CIVIL RIGHTS ACT  OF  1964,  42  U.S.C.  SECTION
 2000A(E), AS AMENDED.
   2.  "ARTIFICIAL  INTELLIGENCE  SYSTEM" OR "AI SYSTEM" MEANS A MACHINE-
 BASED SYSTEM OR COMBINATION OF SYSTEMS, THAT FOR EXPLICIT  AND  IMPLICIT
 OBJECTIVES,  INFERS, FROM THE INPUT IT RECEIVES, HOW TO GENERATE OUTPUTS
 SUCH AS PREDICTIONS, CONTENT, RECOMMENDATIONS,  OR  DECISIONS  THAT  CAN
 INFLUENCE  PHYSICAL  OR  VIRTUAL ENVIRONMENTS.   ARTIFICIAL INTELLIGENCE
 SYSTEM SHALL NOT INCLUDE:
   (A) ANY SYSTEM THAT (I) IS USED BY A BUSINESS ENTITY SOLELY FOR INTER-
 NAL PURPOSES AND (II) IS NOT USED AS A SUBSTANTIAL FACTOR  IN  A  CONSE-
 QUENTIAL DECISION; OR
   (B) ANY SOFTWARE USED PRIMARILY FOR BASIC COMPUTERIZED PROCESSES, SUCH
 AS  ANTI-MALWARE, ANTI-VIRUS, AUTO-CORRECT FUNCTIONS, CALCULATORS, DATA-
 BASES,  DATA  STORAGE,  ELECTRONIC  COMMUNICATIONS,  FIREWALL,  INTERNET
 DOMAIN  REGISTRATION,  INTERNET  WEBSITE  LOADING,  NETWORKING, SPAM AND
 ROBOCALL-FILTERING, SPELLCHECK TOOLS,  SPREADSHEETS,  WEB  CACHING,  WEB
 HOSTING,  OR  ANY  TOOL THAT RELATES ONLY TO INTERNAL MANAGEMENT AFFAIRS
SUCH AS ORDERING OFFICE SUPPLIES OR PROCESSING PAYMENTS, AND THAT DOES NOT
 MATERIALLY AFFECT THE RIGHTS, LIBERTIES, BENEFITS, SAFETY OR WELFARE  OF
 ANY INDIVIDUAL WITHIN THE STATE.
   3.  "AUDITOR"  SHALL  REFER TO AN INDEPENDENT ENTITY INCLUDING BUT NOT
 LIMITED TO AN INDIVIDUAL, NON-PROFIT,  FIRM,  CORPORATION,  PARTNERSHIP,
 COOPERATIVE, ASSOCIATION, ACADEMIC INSTITUTION, OR GROUP AFFILIATED WITH
 AN ACADEMIC INSTITUTION, COMMISSIONED TO PERFORM AN AUDIT.
   4.  "CONSEQUENTIAL  DECISION"  MEANS A DECISION OR JUDGMENT THAT HAS A
 MATERIAL, LEGAL OR  SIMILARLY  SIGNIFICANT  EFFECT  ON  AN  INDIVIDUAL'S
 ACCESS TO, OR THE COST, TERMS, OR AVAILABILITY OF, ANY OF THE FOLLOWING:
   (A)  EMPLOYMENT,  WORKERS'  MANAGEMENT, OR SELF-EMPLOYMENT, INCLUDING,
 BUT NOT LIMITED TO, ALL OF THE FOLLOWING:
   (I) PAY OR PROMOTION; AND
   (II) HIRING OR TERMINATION.
   (B) EDUCATION AND VOCATIONAL TRAINING, INCLUDING, BUT NOT LIMITED  TO,
 ALL OF THE FOLLOWING:
   (I) ACCREDITATION;
   (II) CERTIFICATION;
   (III) ADMISSIONS; AND
   (IV) FINANCIAL AID OR SCHOLARSHIPS.
   (C)  HOUSING  OR  LODGING,  INCLUDING  RENTAL OR SHORT-TERM HOUSING OR
 LODGING.
   (D) FAMILY  PLANNING,  INCLUDING  ADOPTION  SERVICES  OR  REPRODUCTIVE
 SERVICES, AS WELL AS ASSESSMENTS RELATED TO CHILD PROTECTIVE SERVICES.
   (E)  HEALTH  CARE  OR  HEALTH INSURANCE, INCLUDING MENTAL HEALTH CARE,
 DENTAL, OR VISION.
   (F) FINANCIAL SERVICES, INCLUDING A FINANCIAL SERVICE  PROVIDED  BY  A
 MORTGAGE COMPANY, MORTGAGE BROKER, OR CREDITOR.
   (G)  LAW  ENFORCEMENT  ACTIVITIES,  INCLUDING  THE  ALLOCATION  OF LAW
 ENFORCEMENT PERSONNEL OR ASSETS, THE ENFORCEMENT  OF  LAWS,  MAINTAINING
 PUBLIC ORDER, OR MANAGING PUBLIC SAFETY.
   (H) LEGAL SERVICES.
   5.  "DEPLOYER"  MEANS  ANY  PERSON, PARTNERSHIP, ASSOCIATION OR CORPO-
 RATION THAT OFFERS OR USES AN AI SYSTEM FOR COMMERCE IN THE STATE OF NEW
 YORK, OR PROVIDES AN AI SYSTEM FOR USE BY  THE  GENERAL  PUBLIC  IN  THE
 STATE  OF  NEW  YORK.    A DEPLOYER SHALL NOT INCLUDE ANY NATURAL PERSON
 USING AN AI SYSTEM FOR PERSONAL USE. A DEVELOPER MAY ALSO BE  CONSIDERED
 A DEPLOYER IF ITS ACTIONS SATISFY THIS DEFINITION.
   6.  "DEVELOPER"  MEANS  A  PERSON,  PARTNERSHIP,  OR  CORPORATION THAT
 DESIGNS, CODES, OR PRODUCES AN  AI  SYSTEM,  OR  CREATES  A  SUBSTANTIAL
 CHANGE  WITH  RESPECT  TO  AN  AI SYSTEM, WHETHER FOR ITS OWN USE IN THE
 STATE OF NEW YORK OR FOR USE BY A THIRD PARTY IN THE STATE OF NEW  YORK.
 A  DEPLOYER  MAY  ALSO  BE CONSIDERED A DEVELOPER IF ITS ACTIONS SATISFY
 THIS DEFINITION.
   7. "EMPLOYEE" MEANS AN INDIVIDUAL WHO PERFORMS SERVICES FOR AND  UNDER
 THE  CONTROL  AND  DIRECTION OF AN EMPLOYER FOR WAGES OR OTHER REMUNERA-
 TION, INCLUDING FORMER EMPLOYEES, OR NATURAL PERSONS EMPLOYED  AS  INDE-
 PENDENT  CONTRACTORS  TO  CARRY OUT WORK IN FURTHERANCE OF AN EMPLOYER'S
 BUSINESS ENTERPRISE WHO ARE NOT THEMSELVES EMPLOYERS.
   8. "EMPLOYER" MEANS ANY PERSON, FIRM, PARTNERSHIP, INSTITUTION, CORPO-
 RATION, OR ASSOCIATION THAT EMPLOYS ONE OR MORE EMPLOYEES.
   9. "END USER" MEANS ANY INDIVIDUAL OR GROUP OF INDIVIDUALS THAT:
   (A) IS THE SUBJECT OF A CONSEQUENTIAL DECISION  MADE  ENTIRELY  BY  OR
 WITH THE ASSISTANCE OF AN AI SYSTEM; OR
   (B)  INTERACTS, DIRECTLY OR INDIRECTLY, WITH THE RELEVANT AI SYSTEM ON
 BEHALF OF AN INDIVIDUAL OR GROUP THAT IS THE SUBJECT OF A  CONSEQUENTIAL
 DECISION MADE ENTIRELY BY OR WITH THE ASSISTANCE OF AN AI SYSTEM.
   10.  "HIGH-RISK  AI  SYSTEM"  MEANS ANY AI SYSTEM THAT, WHEN DEPLOYED:
 (A) IS A SUBSTANTIAL FACTOR IN MAKING A CONSEQUENTIAL DECISION;  OR  (B)
 WILL  HAVE  A MATERIAL IMPACT ON THE STATUTORY OR CONSTITUTIONAL RIGHTS,
 CIVIL LIBERTIES, SAFETY, OR WELFARE OF AN INDIVIDUAL IN THE STATE.
   11. "RISK MANAGEMENT POLICY AND PROGRAM"  MEANS  THE  RISK  MANAGEMENT
 POLICY AND PROGRAM CREATED PURSUANT TO SECTION EIGHTY-NINE OF THIS ARTI-
 CLE.
   12.  "SUBSTANTIAL  CHANGE"  MEANS ANY NEW VERSION, NEW RELEASE, OR ANY
 OTHER UPDATE TO AN AI SYSTEM THAT RESULTS IN SIGNIFICANT CHANGES TO SUCH
 AI SYSTEM'S  APPROPRIATE  USE  CASES,  KEY  FUNCTIONALITY,  OR  EXPECTED
 OUTCOMES.
   13. "SUBSTANTIAL FACTOR" MEANS A FACTOR THAT IS (A) MATERIAL IN MAKING
 A CONSEQUENTIAL DECISION, OR (B) IS CAPABLE OF ALTERING THE OUTCOME OF A
 CONSEQUENTIAL DECISION.
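
[To see how these defined terms compose in practice, here is a minimal, non-authoritative Python sketch of the coverage test: a system falls outside the article under the carve-outs in definition two, and is "high-risk" under definition ten when it is a substantial factor in a consequential decision or materially impacts rights. All class, field, and function names are illustrative assumptions, not terms drawn from the bill.]

```python
# Illustrative only: one reading of the section 85 definitions as a coverage
# test. Every name here (AISystemProfile, is_high_risk, ...) is hypothetical.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    infers_outputs_from_input: bool   # definition 2: machine-based, inferring system
    internal_use_only: bool           # definition 2(a)(i)
    basic_computerized_process: bool  # definition 2(b), e.g. spellcheck, firewall
    substantial_factor: bool          # definition 13: material or outcome-altering
    consequential_decision: bool      # definition 4: employment, housing, etc.
    material_rights_impact: bool      # definitions 2(b) and 10(b)

def is_covered_ai_system(p: AISystemProfile) -> bool:
    """An "AI system" under definition 2, after the carve-outs."""
    if not p.infers_outputs_from_input:
        return False
    # Carve-out (a): solely internal use and not a substantial factor in a
    # consequential decision.
    if p.internal_use_only and not (p.substantial_factor and p.consequential_decision):
        return False
    # Carve-out (b): basic computerized processes with no material effect.
    if p.basic_computerized_process and not p.material_rights_impact:
        return False
    return True

def is_high_risk(p: AISystemProfile) -> bool:
    """A "high-risk AI system" under definition 10."""
    return is_covered_ai_system(p) and (
        (p.substantial_factor and p.consequential_decision)
        or p.material_rights_impact
    )

# A resume screener that materially shapes hiring decisions is high-risk;
# a spellchecker with no material effect on rights is excluded.
screener = AISystemProfile(True, False, False, True, True, True)
spellcheck = AISystemProfile(True, False, True, False, False, False)
assert is_high_risk(screener) and not is_high_risk(spellcheck)
```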
   §  86.  UNLAWFUL  DISCRIMINATORY  PRACTICES.   IT SHALL BE AN UNLAWFUL
 DISCRIMINATORY PRACTICE FOR A DEVELOPER OR DEPLOYER TO  FAIL  TO  COMPLY
 WITH THE DUTIES UNDER THIS SECTION.
   1. A DEVELOPER OR DEPLOYER SHALL TAKE REASONABLE CARE TO PREVENT FORE-
 SEEABLE  RISK OF ALGORITHMIC DISCRIMINATION THAT IS A CONSEQUENCE OF THE
 USE, SALE, OR SHARING OF A HIGH-RISK AI SYSTEM OR A PRODUCT FEATURING  A
 HIGH-RISK AI SYSTEM.
   2.  ANY  DEVELOPER OR DEPLOYER THAT USES, SELLS, OR SHARES A HIGH-RISK
 AI SYSTEM SHALL HAVE COMPLETED AN INDEPENDENT AUDIT, PURSUANT TO SECTION
 EIGHTY-SEVEN OF THIS ARTICLE, CONFIRMING THAT THE DEVELOPER OR  DEPLOYER
 HAS  TAKEN  REASONABLE  CARE  TO PREVENT FORESEEABLE RISK OF ALGORITHMIC
 DISCRIMINATION WITH RESPECT TO SUCH HIGH-RISK AI SYSTEM.
   § 86-A. DEPLOYER AND DEVELOPER OBLIGATIONS. 1. (A) ANY  DEPLOYER  THAT
 EMPLOYS  A HIGH-RISK AI SYSTEM FOR A CONSEQUENTIAL DECISION SHALL COMPLY
 WITH THE FOLLOWING REQUIREMENTS; PROVIDED, HOWEVER, THAT WHERE THERE  IS
AN URGENT NECESSITY FOR A DECISION TO BE MADE TO CONFER A BENEFIT ON THE
 END  USER,  INCLUDING,  BUT  NOT  LIMITED  TO,  SOCIAL BENEFITS, HOUSING
 ACCESS, OR DISPENSING OF  EMERGENCY  FUNDS,  AND  COMPLIANCE  WITH  THIS
 SECTION  WOULD  CAUSE IMMINENT DETRIMENT TO THE WELFARE OF THE END USER,
SUCH OBLIGATIONS SHALL BE CONSIDERED WAIVED; PROVIDED FURTHER, THAT NOTH-
 ING IN THIS SECTION SHALL BE  CONSTRUED  TO  WAIVE  A  NATURAL  PERSON'S
 OPTION TO REQUEST HUMAN REVIEW OF THE DECISION:
   (I)  INFORM  THE END USER AT LEAST FIVE BUSINESS DAYS PRIOR TO THE USE
 OF SUCH SYSTEM FOR THE MAKING OF  A  CONSEQUENTIAL  DECISION  IN  CLEAR,
 CONSPICUOUS,  AND CONSUMER-FRIENDLY TERMS, MADE AVAILABLE IN EACH OF THE
 LANGUAGES IN WHICH THE COMPANY OFFERS ITS END SERVICES, THAT AI  SYSTEMS
 WILL BE USED TO MAKE A DECISION OR TO ASSIST IN MAKING A DECISION; AND
  (II) ALLOW SUFFICIENT TIME AND OPPORTUNITY, IN A CLEAR, CONSPICUOUS,
AND CONSUMER-FRIENDLY MANNER, FOR THE CONSUMER TO OPT OUT OF THE AUTO-
MATED CONSEQUENTIAL DECISION PROCESS AND FOR THE DECISION TO BE MADE BY
A HUMAN REPRESENTATIVE. A CONSUMER MAY NOT BE PUNISHED OR FACE ANY OTHER
ADVERSE ACTION FOR OPTING OUT OF A DECISION BY AN AI SYSTEM, AND THE
DEPLOYER SHALL RENDER A DECISION TO THE CONSUMER WITHIN FORTY-FIVE DAYS.
   (B)  IF  A  DEPLOYER EMPLOYS A HIGH-RISK AI SYSTEM FOR A CONSEQUENTIAL
 DECISION TO DETERMINE WHETHER TO OR ON WHAT TERMS TO CONFER A BENEFIT ON
 AN END USER, THE DEPLOYER SHALL OFFER THE END USER THE OPTION  TO  WAIVE
 THEIR  RIGHT TO ADVANCE NOTICE OF FIVE BUSINESS DAYS UNDER THIS SUBDIVI-
 SION.
   (C) IF THE END USER CLEARLY AND AFFIRMATIVELY WAIVES  THEIR  RIGHT  TO
 FIVE  BUSINESS DAYS' NOTICE, THE DEPLOYER SHALL THEN INFORM THE END USER
 AS EARLY AS PRACTICABLE BEFORE THE MAKING OF THE CONSEQUENTIAL  DECISION
 IN  CLEAR,  CONSPICUOUS,  AND CONSUMER-FRIENDLY TERMS, MADE AVAILABLE IN
 EACH OF THE LANGUAGES IN WHICH THE COMPANY OFFERS ITS END SERVICES, THAT
 AI SYSTEMS WILL BE USED TO MAKE A DECISION OR  TO  ASSIST  IN  MAKING  A
DECISION. THE DEPLOYER SHALL ALLOW SUFFICIENT TIME AND OPPORTUNITY, IN
A CLEAR, CONSPICUOUS, AND CONSUMER-FRIENDLY MANNER, FOR THE CONSUMER TO
OPT OUT OF THE AUTOMATED PROCESS AND FOR THE DECISION TO BE MADE BY A
HUMAN REPRESENTATIVE. A CONSUMER MAY NOT BE PUNISHED OR FACE ANY OTHER
ADVERSE ACTION FOR OPTING OUT OF A DECISION BY AN AI SYSTEM, AND THE
DEPLOYER SHALL RENDER A DECISION TO THE CONSUMER WITHIN FORTY-FIVE DAYS.
   (D) AN END USER SHALL BE ENTITLED TO NO MORE  THAN  ONE  OPT-OUT  WITH
 RESPECT TO THE SAME CONSEQUENTIAL DECISION WITHIN A SIX-MONTH PERIOD.
   2.  (A)  ANY  DEPLOYER THAT EMPLOYS A HIGH-RISK AI SYSTEM FOR A CONSE-
QUENTIAL DECISION SHALL INFORM THE END USER, WITHIN FIVE DAYS OF SUCH
DECISION AND IN A CLEAR, CONSPICUOUS, AND CONSUMER-FRIENDLY MANNER, THAT
A HIGH-RISK AI SYSTEM HAS
 BEEN  USED  TO  MAKE  A  CONSEQUENTIAL DECISION. THE DEPLOYER SHALL THEN
 PROVIDE AND EXPLAIN A PROCESS FOR THE END USER TO APPEAL  THE  DECISION,
 WHICH  SHALL  AT  MINIMUM ALLOW THE END USER TO (I) FORMALLY CONTEST THE
 DECISION, (II) PROVIDE INFORMATION TO SUPPORT THEIR POSITION, AND  (III)
 OBTAIN  MEANINGFUL  HUMAN  REVIEW  OF  THE  DECISION.   A DEPLOYER SHALL
 RESPOND TO AN END USER'S APPEAL WITHIN FORTY-FIVE DAYS OF RECEIPT OF THE
 APPEAL. THAT PERIOD MAY BE EXTENDED ONCE BY FORTY-FIVE  ADDITIONAL  DAYS
 WHERE  REASONABLY  NECESSARY,  TAKING  INTO  ACCOUNT  THE COMPLEXITY AND
 NUMBER OF APPEALS. THE DEPLOYER SHALL INFORM THE END USER  OF  ANY  SUCH
 EXTENSION WITHIN FORTY-FIVE DAYS OF RECEIPT OF THE APPEAL, TOGETHER WITH
 THE REASONS FOR THE DELAY.
   (B)  AN  END  USER  SHALL  BE ENTITLED TO NO MORE THAN ONE APPEAL WITH
 RESPECT TO THE SAME CONSEQUENTIAL DECISION IN A SIX-MONTH PERIOD.
  3. THE DEPLOYER OR DEVELOPER OF A HIGH-RISK AI SYSTEM IS LEGALLY
RESPONSIBLE FOR THE QUALITY AND ACCURACY OF ALL CONSEQUENTIAL DECISIONS
MADE BY THE AI SYSTEM ON ITS BEHALF, INCLUDING ANY BIAS OR ALGORITHMIC
DISCRIMINATION RESULTING FROM THE OPERATION OF SUCH SYSTEM.
   4.  THE RIGHTS AND OBLIGATIONS UNDER THIS SECTION MAY NOT BE WAIVED BY
 ANY PERSON, PARTNERSHIP, ASSOCIATION OR CORPORATION.
   5. WITH RESPECT TO A SINGLE CONSEQUENTIAL DECISION, AN  END  USER  MAY
 NOT EXERCISE BOTH ITS RIGHT TO OPT-OUT OF A CONSEQUENTIAL DECISION UNDER
 SUBDIVISION  ONE OF THIS SECTION AND ITS RIGHT TO APPEAL A CONSEQUENTIAL
 DECISION UNDER SUBDIVISION TWO OF THIS SECTION.
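
[As a rough aid to the timing rules above, the sketch below models the five-business-day advance notice and the forty-five-day decision and appeal windows. It is a simplified reading, assuming weekends are the only non-business days and that the forty-five-day periods run in calendar days; all helper names are hypothetical.]

```python
# Hypothetical sketch of the section 86-a timing rules. Assumes weekends are
# the only non-business days and that 45-day windows run in calendar days.
from datetime import date, timedelta

NOTICE_BUSINESS_DAYS = 5   # subdivision 1(a)(i): advance notice before the decision
RESPONSE_DAYS = 45         # opt-out decision and appeal-response windows
EXTENSION_DAYS = 45        # one permitted appeal extension, subdivision 2(a)

def add_business_days(start: date, days: int) -> date:
    """Advance a date by business days, skipping weekends (holidays ignored)."""
    d = start
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return d

def earliest_decision_date(notice_sent: date) -> date:
    """Earliest date the consequential decision may be made after notice,
    absent an affirmative waiver under subdivision 1(b)-(c)."""
    return add_business_days(notice_sent, NOTICE_BUSINESS_DAYS)

def appeal_response_deadline(appeal_received: date, extended: bool = False) -> date:
    """Deadline to respond to an appeal; extendable once by 45 days."""
    return appeal_received + timedelta(
        days=RESPONSE_DAYS + (EXTENSION_DAYS if extended else 0))

# Note: subdivisions 1(d), 2(b), and 5 cap the end user at one opt-out OR one
# appeal per consequential decision within a six-month period, not both.
print(earliest_decision_date(date(2025, 3, 3)))                    # 2025-03-10
print(appeal_response_deadline(date(2025, 3, 10), extended=True))  # 2025-06-08
```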
  § 86-B. WHISTLEBLOWER PROTECTIONS. 1. DEVELOPERS AND DEPLOYERS OF
 HIGH-RISK AI SYSTEMS SHALL NOT:
   (A) PREVENT ANY OF THEIR EMPLOYEES FROM DISCLOSING INFORMATION TO  THE
 ATTORNEY  GENERAL,  INCLUDING THROUGH TERMS AND CONDITIONS OF EMPLOYMENT
 OR SEEKING TO ENFORCE TERMS AND CONDITIONS OF EMPLOYMENT, IF THE EMPLOY-
 EE HAS REASONABLE CAUSE TO BELIEVE THE INFORMATION INDICATES A VIOLATION
 OF THIS ARTICLE; OR
   (B) RETALIATE AGAINST AN EMPLOYEE FOR DISCLOSING  INFORMATION  TO  THE
 ATTORNEY GENERAL PURSUANT TO THIS SECTION.
   2.  AN  EMPLOYEE  HARMED BY A VIOLATION OF THIS ARTICLE MAY PETITION A
 COURT FOR APPROPRIATE RELIEF AS PROVIDED IN SUBDIVISION FIVE OF  SECTION
 SEVEN HUNDRED FORTY OF THE LABOR LAW.
   3.  DEVELOPERS  AND  DEPLOYERS OF HIGH-RISK AI SYSTEMS SHALL PROVIDE A
 CLEAR NOTICE TO ALL OF THEIR EMPLOYEES WORKING ON  SUCH  AI  SYSTEMS  OF
 THEIR  RIGHTS  AND  RESPONSIBILITIES  UNDER  THIS ARTICLE, INCLUDING THE
 RIGHT OF EMPLOYEES OF CONTRACTORS AND SUBCONTRACTORS TO USE THE DEVELOP-
 ER'S INTERNAL PROCESS  FOR  MAKING  PROTECTED  DISCLOSURES  PURSUANT  TO
 SUBDIVISION FOUR OF THIS SECTION. A DEVELOPER OR DEPLOYER IS PRESUMED TO
 BE IN COMPLIANCE WITH THE REQUIREMENTS OF THIS SUBDIVISION IF THE DEVEL-
 OPER OR DEPLOYER DOES EITHER OF THE FOLLOWING:
  (A) AT ALL TIMES POSTS AND DISPLAYS WITHIN ALL WORKPLACES MAINTAINED
BY THE DEVELOPER OR DEPLOYER A NOTICE TO ALL EMPLOYEES OF THEIR RIGHTS
AND RESPONSIBILITIES UNDER THIS ARTICLE, ENSURES THAT ALL NEW EMPLOYEES
RECEIVE AN EQUIVALENT NOTICE, AND ENSURES THAT EMPLOYEES WHO WORK
REMOTELY PERIODICALLY RECEIVE AN EQUIVALENT NOTICE; OR
  (B) NO LESS FREQUENTLY THAN ONCE EVERY YEAR, PROVIDES WRITTEN NOTICE
TO ALL EMPLOYEES OF THEIR RIGHTS AND RESPONSIBILITIES UNDER THIS ARTICLE
AND ENSURES THAT THE NOTICE IS RECEIVED AND ACKNOWLEDGED BY ALL OF THOSE
EMPLOYEES.
  4. EACH DEVELOPER AND DEPLOYER SHALL PROVIDE A REASONABLE INTERNAL
PROCESS THROUGH WHICH AN EMPLOYEE MAY ANONYMOUSLY DISCLOSE INFORMATION
TO THE DEVELOPER OR DEPLOYER IF THE EMPLOYEE BELIEVES IN GOOD FAITH THAT
THE INFORMATION INDICATES THAT THE DEVELOPER OR DEPLOYER HAS VIOLATED
ANY PROVISION OF THIS ARTICLE OR ANY OTHER LAW, HAS MADE FALSE OR MATE-
RIALLY MISLEADING STATEMENTS RELATED TO ITS RISK MANAGEMENT POLICY AND
PROGRAM, OR HAS FAILED TO DISCLOSE KNOWN RISKS TO EMPLOYEES. SUCH PROC-
ESS SHALL INCLUDE, AT A MINIMUM, A MONTHLY UPDATE TO THE PERSON WHO MADE
THE DISCLOSURE REGARDING THE STATUS OF THE DEVELOPER'S OR DEPLOYER'S
INVESTIGATION OF THE DISCLOSURE AND THE ACTIONS TAKEN BY THE DEVELOPER
OR DEPLOYER IN RESPONSE TO THE DISCLOSURE.
   5. THIS SECTION DOES NOT LIMIT PROTECTIONS PROVIDED TO EMPLOYEES UNDER
 SECTION SEVEN HUNDRED FORTY OF THE LABOR LAW.
  § 87. AUDITS. 1. DEVELOPERS OF HIGH-RISK AI SYSTEMS SHALL CAUSE
THIRD-PARTY AUDITS TO BE CONDUCTED IN ACCORDANCE WITH THIS SECTION.
   (A) A DEVELOPER OF A HIGH-RISK AI SYSTEM SHALL COMPLETE AT LEAST:
   (I) A FIRST AUDIT WITHIN SIX MONTHS AFTER COMPLETION OF DEVELOPMENT OF
 THE  HIGH-RISK  AI  SYSTEM  AND THE INITIAL OFFERING OF THE HIGH-RISK AI
SYSTEM TO A DEPLOYER FOR DEPLOYMENT OR, IF THE DEVELOPER IS THE FIRST
DEPLOYER TO DEPLOY THE HIGH-RISK AI SYSTEM, AFTER INITIAL DEPLOYMENT;
AND
  (II) ONE AUDIT EVERY YEAR FOLLOWING THE SUBMISSION OF THE FIRST
 AUDIT.
   (B) A DEVELOPER AUDIT UNDER THIS SECTION SHALL INCLUDE:
   (I) AN EVALUATION AND DETERMINATION OF WHETHER THE DEVELOPER HAS TAKEN
 REASONABLE  CARE  TO  PREVENT  FORESEEABLE RISK OF ALGORITHMIC DISCRIMI-
 NATION WITH RESPECT TO SUCH HIGH-RISK AI SYSTEM; AND
   (II) AN EVALUATION OF THE DEVELOPER'S DOCUMENTED RISK MANAGEMENT POLI-
 CY  AND  PROGRAM  REQUIRED UNDER SECTION EIGHTY-NINE OF THIS ARTICLE FOR
 CONFORMITY WITH SUBDIVISION ONE OF SUCH SECTION EIGHTY-NINE.
  2. DEPLOYERS OF HIGH-RISK AI SYSTEMS SHALL CAUSE THIRD-PARTY AUDITS
TO BE CONDUCTED IN ACCORDANCE WITH THIS SECTION.
   (A) A DEPLOYER OF A HIGH-RISK AI SYSTEM SHALL COMPLETE AT LEAST:
   (I) A FIRST AUDIT WITHIN SIX MONTHS AFTER INITIAL DEPLOYMENT;
   (II)  A  SECOND  AUDIT WITHIN ONE YEAR FOLLOWING THE SUBMISSION OF THE
 FIRST AUDIT; AND
   (III) ONE AUDIT EVERY TWO YEARS FOLLOWING THE SUBMISSION OF THE SECOND
 AUDIT.
   (B) A DEPLOYER AUDIT UNDER THIS SECTION SHALL INCLUDE:
   (I) AN EVALUATION AND DETERMINATION OF WHETHER THE DEPLOYER HAS  TAKEN
 REASONABLE  CARE  TO  PREVENT  FORESEEABLE RISK OF ALGORITHMIC DISCRIMI-
 NATION WITH RESPECT TO SUCH HIGH-RISK AI SYSTEM;
   (II) AN EVALUATION OF SYSTEM ACCURACY AND RELIABILITY WITH RESPECT  TO
 SUCH HIGH-RISK AI SYSTEM'S DEPLOYER-INTENDED AND ACTUAL USE CASES; AND
   (III) AN EVALUATION OF THE DEPLOYER'S DOCUMENTED RISK MANAGEMENT POLI-
 CY  AND  PROGRAM  REQUIRED UNDER SECTION EIGHTY-NINE OF THIS ARTICLE FOR
 CONFORMITY WITH SUBDIVISION ONE OF SUCH SECTION EIGHTY-NINE.
   3. A DEPLOYER OR DEVELOPER MAY HIRE MORE THAN ONE AUDITOR  TO  FULFILL
 THE REQUIREMENTS OF THIS SECTION.
  4. THE ATTORNEY GENERAL MAY, IN THE ATTORNEY GENERAL'S DISCRETION:
   (A)  PROMULGATE FURTHER RULES AS NECESSARY TO ENSURE THAT AUDITS UNDER
 THIS SECTION ASSESS  WHETHER  OR  NOT  AI  SYSTEMS  PRODUCE  ALGORITHMIC
 DISCRIMINATION AND OTHERWISE COMPLY WITH THE PROVISIONS OF THIS ARTICLE;
 AND
   (B)  RECOMMEND AN UPDATED AI SYSTEM AUDITING FRAMEWORK TO THE LEGISLA-
 TURE, WHERE SUCH RECOMMENDATIONS ARE BASED ON A  STANDARD  OR  FRAMEWORK
 (I)  DESIGNED  TO  EVALUATE  THE  RISKS  OF AI SYSTEMS, AND (II) THAT IS
 NATIONALLY OR INTERNATIONALLY RECOGNIZED AND CONSENSUS-DRIVEN, INCLUDING
 BUT NOT LIMITED TO A RELEVANT  FRAMEWORK  OR  STANDARD  CREATED  BY  THE
INTERNATIONAL ORGANIZATION FOR STANDARDIZATION.
  5. THE INDEPENDENT AUDITOR SHALL HAVE ACCESS TO COMPLETE AND UNRE-
DACTED COPIES OF ALL REPORTS PREVIOUSLY FILED BY THE DEPLOYER OR DEVEL-
OPER UNDER SECTION EIGHTY-EIGHT OF THIS ARTICLE.
   6. AN AUDIT CONDUCTED UNDER THIS SECTION MAY BE COMPLETED IN PART, BUT
 SHALL NOT BE COMPLETED ENTIRELY, WITH THE ASSISTANCE OF AN AI SYSTEM.
   (A)  ACCEPTABLE  AUDITOR  USES  OF  AN  AI SYSTEM INCLUDE, BUT ARE NOT
 LIMITED TO:
   (I) USE OF AN AUDITED HIGH-RISK AI SYSTEM IN A CONTROLLED  ENVIRONMENT
 WITHOUT IMPACTS ON END USERS FOR SYSTEM TESTING PURPOSES; OR
   (II) DETECTING PATTERNS IN THE BEHAVIOR OF AN AUDITED AI SYSTEM.
   (B) AN AUDITOR SHALL NOT:
   (I)  USE A DIFFERENT HIGH-RISK AI SYSTEM THAT IS NOT THE SUBJECT OF AN
 AUDIT TO COMPLETE AN AUDIT; OR
   (II) USE AN AI SYSTEM TO DRAFT AN AUDIT  UNDER  THIS  SECTION  WITHOUT
 MEANINGFUL HUMAN REVIEW AND OVERSIGHT.
   7.  (A)  AN  AUDITOR  SHALL BE AN INDEPENDENT ENTITY INCLUDING BUT NOT
 LIMITED TO AN INDIVIDUAL, NON-PROFIT,  FIRM,  CORPORATION,  PARTNERSHIP,
 COOPERATIVE, OR ASSOCIATION.
   (B)  FOR  THE PURPOSES OF THIS ARTICLE, NO AUDITOR MAY BE COMMISSIONED
BY A DEVELOPER OR DEPLOYER OF A HIGH-RISK AI SYSTEM IF SUCH AUDITOR:
   (I) HAS ALREADY BEEN COMMISSIONED TO PROVIDE ANY AUDITING  OR  NON-AU-
 DITING  SERVICE,  INCLUDING  BUT  NOT  LIMITED  TO  FINANCIAL  AUDITING,
 CYBERSECURITY AUDITING, OR CONSULTING SERVICES OF  ANY  TYPE,    TO  THE
 COMMISSIONING COMPANY IN THE PAST TWELVE MONTHS; OR
   (II) IS, WILL BE, OR PLANS TO BE ENGAGED IN THE BUSINESS OF DEVELOPING
 OR DEPLOYING AN AI SYSTEM THAT CAN COMPETE COMMERCIALLY WITH SUCH DEVEL-
 OPER'S  OR DEPLOYER'S HIGH-RISK AI SYSTEM IN THE FIVE YEARS FOLLOWING AN
 AUDIT.
   (C) FEES PAID TO AUDITORS MAY NOT BE CONTINGENT ON THE RESULT  OF  THE
 AUDIT  AND THE COMMISSIONING COMPANY SHALL NOT PROVIDE ANY INCENTIVES OR
 BONUSES FOR A POSITIVE AUDIT RESULT.
   8. THE ATTORNEY GENERAL MAY PROMULGATE FURTHER RULES TO ENSURE (A) THE
 INDEPENDENCE OF AUDITORS UNDER THIS SECTION, AND (B) THAT TEAMS CONDUCT-
 ING AUDITS INCORPORATE FEEDBACK FROM COMMUNITIES THAT MAY FORESEEABLY BE
 THE SUBJECT OF ALGORITHMIC DISCRIMINATION WITH RESPECT TO THE AI  SYSTEM
 BEING AUDITED.
   9.  IF  A DEVELOPER OR DEPLOYER HAS AN AUDIT COMPLETED FOR THE PURPOSE
 OF COMPLYING WITH ANOTHER APPLICABLE FEDERAL, STATE,  OR  LOCAL  LAW  OR
 REGULATION,  AND THE AUDIT OTHERWISE SATISFIES ALL OTHER REQUIREMENTS OF
 THIS SECTION, SUCH AUDIT SHALL BE DEEMED TO SATISFY THE REQUIREMENTS  OF
 THIS SECTION.
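
[The audit cadence in subdivisions one and two of section 87 can be read as a schedule of latest-permissible dates. The following sketch computes those dates under the assumption that "within six months" and the follow-on intervals run in calendar months; the helper names are invented for illustration.]

```python
# Non-authoritative sketch of the section 87 audit cadence, read as
# latest-permissible dates and assuming intervals run in calendar months.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day (e.g. Aug 31 -> Feb 28)."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

def developer_audit_due_dates(offering: date, n: int = 4) -> list[date]:
    """Subdivision 1(a): first audit within six months of completion and
    initial offering (or initial deployment), then one every year."""
    first = add_months(offering, 6)
    return [add_months(first, 12 * i) for i in range(n)]

def deployer_audit_due_dates(deployment: date, n: int = 4) -> list[date]:
    """Subdivision 2(a): first audit within six months of initial deployment,
    a second within one year of the first, then one every two years."""
    first = add_months(deployment, 6)
    second = add_months(first, 12)
    return [first] + [add_months(second, 24 * i) for i in range(n - 1)]

# For a system first offered/deployed on 2026-01-15:
# developer audits due by 2026-07-15, 2027-07-15, 2028-07-15, 2029-07-15;
# deployer audits due by 2026-07-15, 2027-07-15, 2029-07-15, 2031-07-15.
print(developer_audit_due_dates(date(2026, 1, 15)))
print(deployer_audit_due_dates(date(2026, 1, 15)))
```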
   §  88.  HIGH-RISK AI SYSTEM REPORTING REQUIREMENTS. 1. EVERY DEVELOPER
 AND DEPLOYER OF A HIGH-RISK AI SYSTEM SHALL COMPLY  WITH  THE  REPORTING
 REQUIREMENTS OF THIS SECTION.
   2.  TOGETHER WITH EACH REPORT REQUIRED TO BE FILED UNDER THIS SECTION,
 EVERY DEVELOPER AND DEPLOYER SHALL FILE WITH THE ATTORNEY GENERAL A COPY
 OF THE LAST COMPLETED INDEPENDENT AUDIT REQUIRED BY THIS ARTICLE.
   3. DEVELOPERS OF HIGH-RISK AI SYSTEMS SHALL COMPLETE AND FILE WITH THE
 ATTORNEY GENERAL REPORTS IN ACCORDANCE WITH THIS SUBDIVISION.
   (A) A DEVELOPER OF A HIGH-RISK AI SYSTEM SHALL COMPLETE AND FILE  WITH
 THE ATTORNEY GENERAL AT LEAST:
   (I)  A  FIRST REPORT WITHIN SIX MONTHS AFTER COMPLETION OF DEVELOPMENT
 OF THE HIGH-RISK AI SYSTEM AND THE INITIAL OFFERING OF THE HIGH-RISK  AI
SYSTEM TO A DEPLOYER FOR DEPLOYMENT OR, IF THE DEVELOPER IS THE FIRST
DEPLOYER TO DEPLOY THE HIGH-RISK AI SYSTEM, AFTER INITIAL DEPLOYMENT;
   (II) ONE REPORT ANNUALLY FOLLOWING THE SUBMISSION OF THE FIRST REPORT;
 AND
   (III) ONE REPORT WITHIN SIX MONTHS OF ANY SUBSTANTIAL  CHANGE  TO  THE
 HIGH-RISK AI SYSTEM.
   (B) A DEVELOPER REPORT UNDER THIS SECTION SHALL INCLUDE:
   (I) A DESCRIPTION OF THE SYSTEM INCLUDING:
   (A)  THE  USES  OF THE HIGH-RISK AI SYSTEM THAT THE DEVELOPER INTENDS;
 AND
   (B) ANY EXPLICITLY UNINTENDED OR DISALLOWED USES OF THE  HIGH-RISK  AI
 SYSTEM;
   (II) AN OVERVIEW OF HOW THE HIGH-RISK AI SYSTEM WAS DEVELOPED;
   (III) AN OVERVIEW OF THE HIGH-RISK AI SYSTEM'S TRAINING DATA; AND
   (IV) ANY OTHER INFORMATION NECESSARY TO ALLOW A DEPLOYER TO:
   (A)  UNDERSTAND THE OUTPUTS AND MONITOR THE SYSTEM FOR COMPLIANCE WITH
 THIS ARTICLE; AND
   (B) FULFILL ITS DUTIES UNDER THIS ARTICLE.
   4. DEPLOYERS OF HIGH-RISK AI SYSTEMS SHALL COMPLETE AND FILE WITH  THE
 ATTORNEY GENERAL REPORTS IN ACCORDANCE WITH THIS SUBDIVISION.
   (A)  A  DEPLOYER OF A HIGH-RISK AI SYSTEM SHALL COMPLETE AND FILE WITH
 THE ATTORNEY GENERAL AT LEAST:
   (I) A FIRST REPORT WITHIN SIX MONTHS AFTER INITIAL DEPLOYMENT;
   (II) A SECOND REPORT WITHIN ONE  YEAR  FOLLOWING  THE  COMPLETION  AND
 FILING OF THE FIRST REPORT;
   (III)  ONE  REPORT EVERY TWO YEARS FOLLOWING THE COMPLETION AND FILING
 OF THE SECOND REPORT; AND
   (IV) ONE REPORT WITHIN SIX MONTHS OF ANY  SUBSTANTIAL  CHANGE  TO  THE
 HIGH-RISK AI SYSTEM.
   (B) A DEPLOYER REPORT UNDER THIS SECTION SHALL INCLUDE:
   (I) A DESCRIPTION OF THE SYSTEM INCLUDING:
   (A)  THE DEPLOYER'S ACTUAL, INTENDED, OR PLANNED USES OF THE HIGH-RISK
 AI SYSTEM WITH RESPECT TO CONSEQUENTIAL DECISIONS; AND
   (B) WHETHER THE DEPLOYER IS USING THE  HIGH-RISK  AI  SYSTEM  FOR  ANY
 DEVELOPER UNINTENDED OR DISALLOWED USES; AND
   (II) AN IMPACT ASSESSMENT INCLUDING:
   (A)  WHETHER  THE  HIGH-RISK  AI  SYSTEM  POSES  A RISK OF ALGORITHMIC
 DISCRIMINATION AND THE STEPS TAKEN TO ADDRESS THE  RISK  OF  ALGORITHMIC
 DISCRIMINATION;
   (B)  IF  THE HIGH-RISK AI SYSTEM IS OR WILL BE MONETIZED, HOW IT IS OR
 IS PLANNED TO BE MONETIZED; AND
   (C) AN EVALUATION OF THE COSTS AND BENEFITS TO CONSUMERS AND OTHER END
 USERS.
  (C) A DEPLOYER THAT IS ALSO A DEVELOPER AND IS REQUIRED TO SUBMIT
REPORTS UNDER SUBDIVISION THREE OF THIS SECTION MAY SUBMIT A SINGLE
JOINT REPORT, PROVIDED IT CONTAINS THE INFORMATION REQUIRED UNDER BOTH
THIS SUBDIVISION AND SUBDIVISION THREE OF THIS SECTION.
   5. THE ATTORNEY GENERAL SHALL:
   (A)  PROMULGATE  RULES  FOR A PROCESS WHEREBY DEVELOPERS AND DEPLOYERS
 MAY REQUEST REDACTION OF PORTIONS OF REPORTS REQUIRED UNDER THIS SECTION
 TO ENSURE THAT THEY ARE NOT REQUIRED TO DISCLOSE SENSITIVE AND PROTECTED
 INFORMATION; AND
   (B) MAINTAIN AN ONLINE DATABASE THAT  IS  ACCESSIBLE  TO  THE  GENERAL
 PUBLIC  WITH  REPORTS, REDACTED IN ACCORDANCE WITH THIS SUBDIVISION, AND
 AUDITS REQUIRED BY THIS ARTICLE, WHICH DATABASE SHALL BE UPDATED BIANNU-
 ALLY.
   6. FOR HIGH-RISK AI SYSTEMS WHICH ARE ALREADY  IN  DEPLOYMENT  AT  THE
 TIME  OF  THE  EFFECTIVE  DATE OF THIS ARTICLE, DEVELOPERS AND DEPLOYERS
 SHALL HAVE EIGHTEEN MONTHS FROM SUCH EFFECTIVE DATE TO COMPLETE AND FILE
 THE FIRST REPORT AND ASSOCIATED INDEPENDENT AUDIT REQUIRED BY THIS ARTI-
 CLE.
   (A) EACH DEVELOPER OF A HIGH-RISK AI SYSTEM SHALL THEREAFTER  FILE  AT
 LEAST  ONE  REPORT ANNUALLY FOLLOWING THE SUBMISSION OF THE FIRST REPORT
 UNDER THIS SUBDIVISION.
   (B) EACH DEPLOYER OF A HIGH-RISK AI SYSTEM SHALL  THEREAFTER  FILE  AT
 LEAST  ONE  REPORT EVERY TWO YEARS FOLLOWING THE SUBMISSION OF THE FIRST
 REPORT UNDER THIS SUBDIVISION.
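
[The reporting duties in subdivisions three and four of section 88 reduce to two filing shapes. Here is a hypothetical schema paraphrasing the required contents; the attorney general, not this sketch, would prescribe the actual form, and every field name below is an assumption.]

```python
# Hypothetical shapes for the section 88 filings; field names paraphrase the
# statute and are not an official form (the attorney general prescribes that).
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeveloperReport:                        # subdivision 3(b)
    intended_uses: list[str]                  # uses the developer intends
    disallowed_uses: list[str]                # explicitly unintended/disallowed uses
    development_overview: str                 # how the system was developed
    training_data_overview: str
    deployer_guidance: str                    # info a deployer needs to monitor and comply
    audit_copy_attached: bool                 # subdivision 2: file last audit with report

@dataclass
class DeployerReport:                         # subdivision 4(b)
    actual_or_planned_uses: list[str]         # uses for consequential decisions
    outside_developer_intent: bool            # any developer-disallowed uses?
    discrimination_risk_and_mitigations: str  # impact assessment
    monetization: Optional[str]               # how monetized, if at all
    consumer_cost_benefit: str
    audit_copy_attached: bool
```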
   § 89. RISK MANAGEMENT POLICY AND PROGRAM. 1. EACH DEVELOPER OR DEPLOY-
 ER OF HIGH-RISK AI SYSTEMS SHALL PLAN, DOCUMENT, AND  IMPLEMENT  A  RISK
 MANAGEMENT  POLICY  AND  PROGRAM TO GOVERN DEVELOPMENT OR DEPLOYMENT, AS
 APPLICABLE, OF SUCH HIGH-RISK AI SYSTEM.  THE RISK MANAGEMENT POLICY AND
PROGRAM SHALL SPECIFY AND INCORPORATE THE PRINCIPLES, PROCESSES, AND
PERSONNEL THAT THE DEVELOPER OR DEPLOYER USES TO IDENTIFY, DOCUMENT, AND
MITIGATE KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIMI-
NATION COVERED UNDER SUBDIVISION ONE OF SECTION EIGHTY-SIX OF THIS ARTI-
CLE. THE RISK MANAGEMENT POLICY AND PROGRAM SHALL BE AN ITERATIVE PROC-
ESS, PLANNED AND IMPLEMENTED OVER THE LIFE CYCLE OF A HIGH-RISK AI
SYSTEM AND SUBJECT TO REGULAR, SYSTEMATIC REVIEW AND UPDATES, INCLUDING
UPDATES TO DOCUMENTATION. A
 RISK MANAGEMENT POLICY AND PROGRAM IMPLEMENTED AND  MAINTAINED  PURSUANT
 TO THIS SECTION SHALL BE REASONABLE CONSIDERING:
   (A) THE GUIDANCE AND STANDARDS SET FORTH IN:
   (I) VERSION 1.0 OF THE "ARTIFICIAL INTELLIGENCE RISK MANAGEMENT FRAME-
 WORK" PUBLISHED BY THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY IN
 THE UNITED STATES DEPARTMENT OF COMMERCE, OR
   (II)  ANOTHER  SUBSTANTIALLY  EQUIVALENT  FRAMEWORK  SELECTED  AT  THE
 DISCRETION OF THE ATTORNEY GENERAL, IF SUCH FRAMEWORK  WAS  DESIGNED  TO
 MANAGE  RISKS  ASSOCIATED  WITH  AI  SYSTEMS,  IS NATIONALLY OR INTERNA-
 TIONALLY RECOGNIZED AND CONSENSUS-DRIVEN, AND IS AT LEAST  AS  STRINGENT
 AS  VERSION  1.0  OF THE "ARTIFICIAL INTELLIGENCE RISK MANAGEMENT FRAME-
 WORK" PUBLISHED BY THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY;
   (B) THE SIZE AND COMPLEXITY OF THE DEVELOPER OR DEPLOYER;
   (C) THE NATURE, SCOPE, AND INTENDED USES OF THE  HIGH-RISK  AI  SYSTEM
 DEVELOPED OR DEPLOYED; AND
   (D)  THE  SENSITIVITY  AND VOLUME OF DATA PROCESSED IN CONNECTION WITH
 THE HIGH-RISK AI SYSTEM.
  2. A RISK MANAGEMENT POLICY AND PROGRAM IMPLEMENTED PURSUANT TO
SUBDIVISION ONE OF THIS SECTION MAY COVER MULTIPLE HIGH-RISK AI SYSTEMS
DEVELOPED BY THE SAME DEVELOPER OR DEPLOYED BY THE SAME DEPLOYER, IF
SUFFICIENT TO ADDRESS THE RISKS OF EACH SUCH SYSTEM.
   3. THE ATTORNEY GENERAL MAY REQUIRE  A  DEVELOPER  OR  A  DEPLOYER  TO
 DISCLOSE  THE RISK MANAGEMENT POLICY AND PROGRAM IMPLEMENTED PURSUANT TO
 SUBDIVISION ONE OF THIS SECTION IN A FORM AND MANNER PRESCRIBED  BY  THE
 ATTORNEY  GENERAL. THE ATTORNEY GENERAL MAY EVALUATE THE RISK MANAGEMENT
 POLICY AND PROGRAM TO ENSURE COMPLIANCE WITH THIS SECTION.
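
[Since subdivision one of section 89 anchors the program to version 1.0 of the NIST Artificial Intelligence Risk Management Framework, one plausible way to document such a program is around that framework's four core functions (Govern, Map, Measure, Manage). The record layout below is a sketch under that assumption, not a prescribed filing; all names are illustrative.]

```python
# Illustrative record of a section 89 risk management program, organized
# around the four core functions of NIST AI RMF 1.0 (Govern, Map, Measure,
# Manage). The schema is a hypothetical sketch, not a prescribed form.
from dataclasses import dataclass, field

@dataclass
class RiskManagementProgram:
    covered_systems: list[str]            # may cover multiple systems (subdivision 2)
    framework: str = "NIST AI RMF 1.0"    # or an AG-approved equivalent (subdiv. 1(a))
    govern: list[str] = field(default_factory=list)   # policies, personnel, accountability
    map: list[str] = field(default_factory=list)      # context, intended uses, risk identification
    measure: list[str] = field(default_factory=list)  # bias testing, accuracy metrics
    manage: list[str] = field(default_factory=list)   # mitigations, monitoring, documentation updates
    last_reviewed: str = ""               # reviewed regularly over the system life cycle

program = RiskManagementProgram(
    covered_systems=["resume-screener-v2"],
    govern=["named AI risk officer", "escalation path for detected bias"],
    map=["hiring decisions only; no disallowed uses"],
    measure=["quarterly disparate-impact testing across protected classes"],
    manage=["retrain on corrected data; update documentation each release"],
    last_reviewed="2026-01-15",
)
```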
   § 89-A. SOCIAL SCORING AI SYSTEMS PROHIBITED. NO PERSON,  PARTNERSHIP,
 ASSOCIATION  OR  CORPORATION  SHALL  DEVELOP, DEPLOY, USE, OR SELL AN AI
 SYSTEM WHICH EVALUATES OR  CLASSIFIES  THE  TRUSTWORTHINESS  OF  NATURAL
 PERSONS  OVER A CERTAIN PERIOD OF TIME BASED ON THEIR SOCIAL BEHAVIOR OR
 KNOWN OR PREDICTED PERSONAL OR  PERSONALITY  CHARACTERISTICS,  WITH  THE
 SOCIAL SCORE LEADING TO ANY OF THE FOLLOWING:
   1.  DIFFERENTIAL  TREATMENT OF CERTAIN NATURAL PERSONS OR WHOLE GROUPS
 THEREOF IN SOCIAL CONTEXTS WHICH ARE UNRELATED TO THE CONTEXTS IN  WHICH
 THE DATA WAS ORIGINALLY GENERATED OR COLLECTED;
   2.  DIFFERENTIAL  TREATMENT OF CERTAIN NATURAL PERSONS OR WHOLE GROUPS
 THEREOF THAT IS UNJUSTIFIED OR DISPROPORTIONATE TO THEIR SOCIAL BEHAVIOR
 OR ITS GRAVITY; OR
   3. THE INFRINGEMENT OF ANY RIGHT GUARANTEED UNDER  THE  UNITED  STATES
 CONSTITUTION, THE NEW YORK CONSTITUTION, OR STATE OR FEDERAL LAW.
   §  89-B.  DEVELOPER  SAFE  HARBOR.  A DEVELOPER MAY BE EXEMPT FROM ITS
 DUTIES AND OBLIGATIONS UNDER SECTIONS EIGHTY-SIX, EIGHTY-SIX-A,  EIGHTY-
 SIX-B,  EIGHTY-SEVEN,  EIGHTY-EIGHT,  AND EIGHTY-NINE OF THIS ARTICLE IF
 SUCH DEVELOPER:
   1. RECEIVES A WRITTEN  AND  SIGNED  CONTRACTUAL  AGREEMENT  FROM  EACH
 DEPLOYER  AUTHORIZED TO USE THE ARTIFICIAL INTELLIGENCE SYSTEM DEVELOPED
BY SUCH DEVELOPER, INCLUDING THE DEVELOPER IF IT IS ALSO A DEPLOYER,
 THAT SUCH ARTIFICIAL INTELLIGENCE SYSTEM WILL NOT BE USED AS A HIGH-RISK
 AI SYSTEM;
   2.  IMPLEMENTS  REASONABLE TECHNICAL SAFEGUARDS DESIGNED TO PREVENT OR
 DETECT HIGH-RISK AI SYSTEM USE CASES OR OTHERWISE  DEMONSTRATES  REASON-
 ABLE  STEPS  TAKEN TO ENSURE THAT ANY UNAUTHORIZED DEPLOYMENTS OF ITS AI
 SYSTEMS ARE NOT BEING USED AS A HIGH-RISK AI SYSTEM;
   3. PROMINENTLY DISPLAYS ON ITS WEBSITE, IN MARKETING MATERIALS, AND IN
 ALL LICENSING AGREEMENTS OFFERED TO  PROSPECTIVE  DEPLOYERS  OF  ITS  AI
 SYSTEM THAT THE AI SYSTEM CANNOT BE USED AS A HIGH-RISK AI SYSTEM; AND
   4.  MAINTAINS  RECORDS OF DEPLOYER AGREEMENTS FOR A PERIOD OF NOT LESS
 THAN FIVE YEARS.
   § 89-C. ENFORCEMENT. 1. WHENEVER THERE SHALL BE A VIOLATION OF SECTION
 EIGHTY-SIX-A,  EIGHTY-SIX-B, EIGHTY-SEVEN, EIGHTY-EIGHT, EIGHTY-NINE, OR
 EIGHTY-NINE-A OF THIS ARTICLE, AN APPLICATION MAY BE MADE BY THE  ATTOR-
 NEY  GENERAL  IN THE NAME OF THE PEOPLE OF THE STATE OF NEW YORK, TO THE
 SUPREME COURT HAVING JURISDICTION  TO  ISSUE  AN  INJUNCTION,  AND  UPON
 NOTICE  TO  THE  RESPONDENT  OF  NOT  LESS  THAN TEN DAYS, TO ENJOIN AND
 RESTRAIN THE CONTINUANCE OF SUCH VIOLATION; AND IF IT  SHALL  APPEAR  TO
 THE SATISFACTION OF THE COURT THAT THE RESPONDENT HAS, IN FACT, VIOLATED
 THIS  ARTICLE,  AN  INJUNCTION MAY BE ISSUED BY THE COURT, ENJOINING AND
 RESTRAINING ANY FURTHER VIOLATIONS, WITHOUT  REQUIRING  PROOF  THAT  ANY
 PERSON  HAS,  IN  FACT,  BEEN  INJURED  OR  DAMAGED THEREBY. IN ANY SUCH
 PROCEEDING, THE COURT MAY MAKE ALLOWANCES TO  THE  ATTORNEY  GENERAL  AS
 PROVIDED  IN  PARAGRAPH  SIX  OF SUBDIVISION (A) OF SECTION EIGHTY-THREE
 HUNDRED THREE OF THE CIVIL PRACTICE LAW AND RULES, AND  DIRECT  RESTITU-
 TION.  WHENEVER THE COURT SHALL DETERMINE THAT A VIOLATION OF THIS ARTI-
 CLE HAS OCCURRED, THE COURT MAY IMPOSE A CIVIL PENALTY OF NOT MORE  THAN
 TWENTY THOUSAND DOLLARS FOR EACH VIOLATION.
   2.  THERE SHALL BE A PRIVATE RIGHT OF ACTION BY PLENARY PROCEEDING FOR
 ANY  PERSON  HARMED  BY  ANY  VIOLATION  OF      SECTION   EIGHTY-SIX-A,
 EIGHTY-SIX-B,  EIGHTY-SEVEN, EIGHTY-EIGHT, EIGHTY-NINE, OR EIGHTY-NINE-A
 OF THIS ARTICLE BY ANY NATURAL PERSON OR ENTITY.  THE COURT SHALL  AWARD
 COMPENSATORY DAMAGES AND LEGAL FEES TO THE PREVAILING PARTY.
   3.  IN EVALUATING ANY MOTION TO DISMISS A PLENARY PROCEEDING COMMENCED
PURSUANT TO SUBDIVISION TWO OF THIS SECTION, THE COURT SHALL PRESUME
THAT THE SPECIFIED AI SYSTEM WAS CREATED OR OPERATED IN VIOLATION OF A
SPECIFIED LAW OR LAWS AND THAT SUCH VIOLATION CAUSED THE HARM OR HARMS
ALLEGED.
   (A)  A DEFENDANT CAN REBUT PRESUMPTIONS MADE PURSUANT TO THIS SUBDIVI-
 SION THROUGH CLEAR AND CONVINCING EVIDENCE THAT THE SPECIFIED AI  SYSTEM
DID NOT CAUSE THE HARM OR HARMS ALLEGED OR DID NOT VIOLATE THE
 ALLEGED LAW OR LAWS. AN ALGORITHMIC AUDIT CAN BE CONSIDERED AS  EVIDENCE
 IN REBUTTING SUCH PRESUMPTIONS, BUT THE MERE EXISTENCE OF SUCH AN AUDIT,
 WITHOUT  ADDITIONAL EVIDENCE, SHALL NOT BE CONSIDERED CLEAR AND CONVINC-
 ING EVIDENCE.
   (B) WITH RESPECT TO A VIOLATION OF SECTION EIGHTY-SIX-A, EIGHTY-SIX-B,
 EIGHTY-SEVEN, EIGHTY-EIGHT, OR EIGHTY-NINE OF THIS ARTICLE, A  DEVELOPER
 CAN  REBUT  PRESUMPTIONS MADE PURSUANT TO THIS SUBDIVISION THROUGH CLEAR
 AND CONVINCING EVIDENCE THAT IT  HAS  COMPLIED  WITH  THE  DUTIES  UNDER
 SECTION EIGHTY-NINE-B OF THIS ARTICLE.
   (C) WHERE SUCH PRESUMPTIONS ARE NOT REBUTTED PURSUANT TO THIS SUBDIVI-
 SION, THE ACTION SHALL NOT BE DISMISSED.
   (D) WHERE SUCH PRESUMPTIONS ARE REBUTTED PURSUANT TO THIS SUBDIVISION,
 A MOTION TO DISMISS AN ACTION SHALL BE ADJUDICATED WITHOUT ANY CONSIDER-
 ATION OF THIS SECTION.
   4.  THE  SUPREME  COURT  IN THE STATE SHALL HAVE JURISDICTION OVER ANY
 ACTION, CLAIM, OR LAWSUIT TO ENFORCE THE PROVISIONS OF THIS ARTICLE.
   § 89-D. SEVERABILITY. IF ANY CLAUSE, SENTENCE, PARAGRAPH, SUBDIVISION,
 SECTION OR PART OF THIS ARTICLE SHALL BE ADJUDGED BY ANY COURT OF COMPE-
 TENT JURISDICTION TO BE INVALID, SUCH JUDGMENT SHALL NOT AFFECT, IMPAIR,
 OR INVALIDATE THE REMAINDER THEREOF, BUT SHALL BE CONFINED IN ITS OPERA-
 TION TO THE CLAUSE, SENTENCE, PARAGRAPH, SUBDIVISION, SECTION,  OR  PART
 THEREOF  DIRECTLY  INVOLVED  IN  THE  CONTROVERSY IN WHICH SUCH JUDGMENT
 SHALL HAVE BEEN MADE.
   § 4. Section 296 of the executive law  is  amended  by  adding  a  new
 subdivision 23 to read as follows:
   23. IT SHALL BE AN UNLAWFUL DISCRIMINATORY PRACTICE UNDER THIS SECTION
 FOR  A  DEPLOYER  OR  A  DEVELOPER, AS SUCH TERMS ARE DEFINED IN SECTION
 EIGHTY-FIVE OF THE CIVIL RIGHTS LAW, TO ENGAGE IN AN UNLAWFUL  DISCRIMI-
 NATORY PRACTICE UNDER SECTION EIGHTY-SIX OF THE CIVIL RIGHTS LAW.
   §  5. This act shall take effect one year after it shall have become a
 law; provided, however, that section 87 of  article  8-A  of  the  civil
 rights  law  as added by section three of this act shall take effect two
 years after it shall have become a law.
               
              
TechEquity supports S. 1169-A because it establishes essential protections that New Yorkers urgently need as AI systems increasingly make decisions about their jobs, homes, and healthcare.
Notice and transparency requirements are fundamental to fairness. Right now, people are being screened out of jobs, denied housing, or rejected for loans by AI systems they don't even know are being used. This bill's requirement for advance notice and opt-out rights ensures New Yorkers can make informed choices about how technology affects their lives, rather than discovering after the fact that an algorithm determined their fate.
Mandatory risk management and auditing can prevent real harm before it happens. The bill's requirements for developers to test their systems and deployers to conduct regular audits create critical checkpoints to catch discriminatory patterns early. This proactive approach is essential—by the time bias is discovered in deployment, thousands of people may have already been unfairly denied opportunities.
This bill is needed to prevent real harms driven by inaccurate or badly developed AI systems. For example, in 2021, HireVue's AI recruiting tool was found to exhibit bias against candidates with disabilities and certain ethnic backgrounds, screening out qualified applicants before human recruiters ever saw their applications. Under S. 1169-A, this system would have required pre-deployment auditing to identify these biases, advance notice to job applicants, and opt-out rights—potentially preventing discrimination against thousands of job seekers while preserving innovation in hiring technology.
New York has the opportunity to lead the nation in responsible AI governance that protects civil rights while fostering innovation. We urge passage of this essential legislation.
TechEquity advocates for technology policies that ensure everyone benefits from technological progress.
It will be interesting to see how this is policed and enforced if it is passed. Training data and accuracy metrics will need to be reviewed. It is a very broad bill, and I wonder whether there is a clear path forward. Transparency will not be easy.