
A Taxonomy of Trustworthiness for Artificial Intelligence

This paper aims to provide a resource for organizations and teams developing Artificial Intelligence (AI) technologies, systems, and applications. It is specifically designed to assist users of the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF).

The report could, however, also be helpful to anyone using other kinds of AI risk or impact assessments, or developing model cards, system cards, or other types of AI documentation. It may likewise be useful for standards-setting bodies, policymakers, independent auditors, and civil society organizations working to evaluate and promote trustworthy AI.

  • Author(s): Jessica Newman
Format: White Paper
Publisher: Center for Long-Term Cybersecurity - UC Berkeley
Published: January 1, 2023
License: Copyrighted
Copyright: © CLTC Berkeley
