# Understanding Responsibilities in AI Practices

Guidance for understanding roles and responsibilities to ensure responsible artificial intelligence (AI) practices.

## Purpose

Responsible AI is a collective responsibility, encompassing all levels within an agency, from executives to end users. It is important that everyone understands their role in ensuring the safe and responsible use of AI. This module provides guidance for aligning responsibilities and accountability within your agency, based on ISO/IEC standards and in alignment with the NSW AI Assessment Framework (AIAF).

NSW government agencies can use this guidance to understand what responsible AI means in practice. Public servants are encouraged to consider integrating these responsibilities into performance plans where appropriate, as well as into AI awareness training content and internal processes for AI governance, assurance, and the evaluation, development, and operation of AI solutions.

## How to use

Responsibilities are outlined through standard roles for flexibility and organised under broader strategic objectives. The recommended approach to using this guidance is:

1. Identify the general role descriptions that best relate to your role by reviewing the 'Roles' section.
2. Review the responsibilities and consider how to integrate them within your team's existing workflows.
3. If you manage direct reports, assess whether any responsibilities should be included in performance plans.
4. Implement the relevant changes.

## Roles and responsibilities

While executives within agencies are ultimately accountable for the safe and responsible use of AI technologies, ensuring responsible AI use involves everyone within an agency, not just technical or product teams. This guidance should be used to enhance existing structures, allowing agencies to define and assign AI-related responsibilities that best meet their unique operational needs.

### Roles

#### Executive level

**Definition:** Senior leaders and decision-makers in the organisation, such as CEOs, CFOs, CIOs, CDOs, and department heads.

**Responsibilities:** Setting strategic direction, approving major initiatives, and ensuring overall accountability and governance.

#### Management level

**Definition:** Mid-level leaders responsible for overseeing specific functions or departments within the organisation, with a strong focus on governance, assurance, cybersecurity, legal, record keeping, ethics, policy, and risk management.

**Responsibilities:** Implementing strategies, managing teams, ensuring compliance with policies, and reporting to the executive level.

#### Product owners

**Definition:** Individuals responsible for the development and success of products or projects that incorporate AI.

**Responsibilities:** Defining product vision, prioritising features, coordinating with development teams, and ensuring alignment with business objectives, policies and regulation.

#### Users

**Definition:** Individuals who interact with AI systems as part of their daily tasks or roles within the organisation.

**Responsibilities:** Using AI systems as intended, providing feedback on usability and performance, and adhering to usage guidelines.

#### Everyone

**Definition:** All members of the organisation, encompassing every role and level.

**Responsibilities:** Understanding and adhering to responsible AI practices, contributing to a culture of ethical AI use, and reporting any concerns or
issues.

## Responsibilities

The RACI matrices provided are to assist agencies in considering different roles and responsibilities. They are organised under broad strategic objectives to support the establishment of responsible AI practices.

### Foster a responsible AI culture

| Action | Accountable | Responsible | Consulted | Informed |
| --- | --- | --- | --- | --- |
| Promote a culture of responsible AI by integrating NSW AI ethics principles in business objectives, values, and communications. | Executive level | Management level | Users | Users |
| Periodically review organisational awareness of individual responsibilities outlined in this guidance to ensure responsible AI use. | Executive level | Management level | Users | Everyone |
| Regularly evaluate the impact of AI on the workforce to enhance strategic workforce planning, identifying needed skills and resources. | Executive level | Management level | Users | Everyone |
| Ensure product teams collaborate with legal, data, privacy and AI experts when AI is used. | Executive level | Management level | Product owners | - |
| Encourage innovation and responsible AI development through initiatives like hackathons, competitions and collaborative research projects. | Executive level | Management level | Everyone | Everyone |
| Report if you believe a solution you're using may influence decisions or actions that could be unethical, illegal, or unsafe. | Everyone | Everyone | Management level | Executive level |

### Ensure accountability and transparency

| Action | Accountable | Responsible | Consulted | Informed |
| --- | --- | --- | --- | --- |
| Clearly define and communicate responsible AI-related authorities within the organisation, including governance, assurance, procurement, ethics, cyber, privacy, legal, technology, data governance, and risk management. | Executive level | Management level | - | Everyone |
| Review and endorse the AI Assessment Framework (AIAF) compliance plans[1], detailing department/agency progress towards compliance in ensuring use of the AIAF. | Executive level | Management level | - | - |
| Ensure each AI solution has documented accountabilities for managing risks, ensuring continuity, enabling appeals, and providing evidence for decisions and actions. | Management level | Product owners | Product owners | Executive level |
| Ensure record-keeping for decisions related to managing the risk of AI solutions, such as the results of applying the AIAF and risk mitigations. | Executive level | Product owners | Management level | - |
| Publish regular transparency reports for high-risk or customer-facing AI systems, detailing use cases, performance, governance practices, and any incidents or interventions. | Executive level | Management level | Product owners | Everyone |

### Allocate resources

| Action | Accountable | Responsible | Consulted | Informed |
| --- | --- | --- | --- | --- |
| Support initiatives to increase AI risk management awareness and capabilities at all levels of the organisation. | Executive level | Management level | Everyone | Users |
| Allocate budget and resources for responsible AI, including expert advisory services (legal, data, privacy, ethics, technology, risk). | Executive level | Executive level | Management level | Product owners |
| Provide sufficient training and tools for ethical AI implementation. | Management level | Product owners | - | Everyone |
| Ensure adequate resources for continuous monitoring and evaluation of AI systems that could cause harm. | Executive level | Management level | Product owners | - |
| Reduce costs of digital governance and assurance with streamlined, integrated processes across cybersecurity, privacy, ethics, legal, AI, data governance, and technology & architecture domains. | Executive level | Management level | Product owners | Everyone |

### Ensure compliance and risk management

| Action | Accountable | Responsible | Consulted | Informed |
| --- | --- | --- | --- | --- |
| Ensure governance and assurance oversight for compliance with AI-related laws and regulations (e.g. human rights, privacy, data protection, administrative law, consumer, anti-discrimination, state records, critical infrastructure and cyber security). | Executive level | Management level | Product owners | - |
| Ensure AI system development complies with the NSW AI Ethics Policy, AI Assessment Framework, organisational values, and related standards. | Executive level | Management level | Product owners | - |
| Ensure that high-risk AI projects and solutions are presented to the AI Review Board (AIRC). | Executive level | Management level | Product owners | - |
| Approve AI project and solution risk tiering and treatment plans, and accept residual risks. | Executive level | Management level | Product owners | Executive level |
| Establish clear data governance policies for AI systems, including data collection, storage, and usage. | Executive level | Management level | Product owners | Everyone |
| Regularly review agency governance, risk, and compliance frameworks to ensure alignment with the AIAF and emerging AI regulatory guidance and legislation. | Executive level | Management level | Product owners | Executive level |
| Evaluate vendors and third-party AI solutions for compliance with the NSW AI Ethics Policy, AIAF, and AI procurement guidance and framework. | Management level | Product owners | - | Everyone |
| Ensure compliance with Digital NSW, department, and agency policies and guidelines on using public, non-secure applications, such as generative AI chatbots (e.g. ChatGPT). | Everyone | Everyone | Management level | Executive level |

### Establish oversight mechanisms

| Action | Accountable | Responsible | Consulted | Informed |
| --- | --- | --- | --- | --- |
| Create a multidisciplinary AI advisory board or committee to monitor and advise AI projects and solutions (ethics, legal, technology, data, privacy, etc.). | Executive level | Management level | - | Everyone |
| Include external experts and stakeholders in governance, assurance, audit, and advisory committees to ensure diverse perspectives. | Executive level | Management level | - | - |
| Designate a responsible owner for AI governance in the C-suite. | Executive level | Executive level | - | Management level |
| Ensure regular independent reviews of AI governance and assurance functions to assess performance and effectiveness. | Executive level | Management level | - | Everyone |
| Ensure that AI systems augment, rather than replace, human decision-making where their use could create harm. | Executive level | Management level | Users | Users |
| Ensure AI solutions with medium or higher risk have incident response plans, with tested, monitored, and communicated appeal processes that include human intervention. | Executive level | Management level | Product owners | - |
| Ensure high-risk AI solutions can provide clear explanations for their outputs when required and have established mechanisms to trace AI decisions back to their source data and logic. | Management level | Product owners | Users | Users |
| Ensure the benefits of using an AI solution outweigh the risks. | Management level | Product owners | Users | - |
| Conduct audits at a frequency determined by potential risk to ensure AI systems meet data quality standards, desired outcomes, and NSW ethical policy. | Management level | Management level | Product owners | - |

## What's next?

By following this guidance, government agencies can better structure their approach to responsible AI, ensuring that all levels of the organisation are aligned and actively contributing to ethical AI practices. Agencies should start by ensuring a Governance and Assurance function has clear accountability for overseeing responsible AI use.

## Resources

Download 'Understanding Responsibilities in AI Practices' (DOCX, 698.33 KB)

[1] Compliance plans communicate departmental compliance with the AIAF, tracking implementation progress and raising awareness of challenges for support. Reporting requirements apply to department Governance and Assurance functions and are collated by Digital NSW through the AI Secretariat.

## Explore further AIAF guidance

- Identifying AI
- AI Procurement Essentials

## Feedback

Agencies are encouraged to provide feedback on any suggested improvements to this guidance to the AI Secretariat: AlSecretariat@customerservice.nsw.gov.au