Usage Monitoring for Foundation Models

Monitoring foundation model usage is an evolving area of research. The resources below cover some of the available techniques, such as watermarking model outputs and gating access to models.

AI Incident Database
A database of harmful incidents involving AI systems, to which developers or users can submit incident reports.
Modalities: Text, Speech, Vision

BigScience Ethical Charter
Outlines BigScience's core values and how the organization promotes them, which in turn guides use restrictions and communicates acceptable usage to users.
Modalities: Text, Speech, Vision

Llama 2 Responsible Use Guide
Guidance for downstream developers on how to build responsibly with Llama 2, including details on how to report issues and instructions related to red-teaming and RLHF.
Modalities: Text, Speech, Vision

Model Gating from Hugging Face
A resource describing how to require user credentials for model access, which may be appropriate for models trained on sensitive topics such as hate speech.
Modalities: Text, Speech, Vision
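
For illustration, here is a minimal sketch of the downstream side of gating: downloading a gated model with a personal access token. The repo id is only an example, and the sketch assumes the user has already requested and been granted access on the Hub, with their token stored in an HF_TOKEN environment variable.

```python
import os

from huggingface_hub import snapshot_download

# Gating itself is enabled by the repo owner in the repository settings on
# the Hub; approved users then authenticate with a personal access token.
local_dir = snapshot_download(
    repo_id="meta-llama/Llama-2-7b-hf",  # example of a gated repo
    token=os.environ["HF_TOKEN"],        # token of a user granted access
)
print("Model files downloaded to", local_dir)
```

Without a valid token from an approved account, the Hub refuses the download, which is what makes gating enforceable rather than advisory.
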
Model Monitoring in Practice Tutorial
A tutorial given at FAccT and other venues describing how and why to monitor ML models, including a presentation on using transformer models for error detection.
Modalities: Text, Speech, Vision
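
The tutorial covers far more ground, but the core monitoring loop can be sketched generically: record each prediction against later-arriving labels and alert when a rolling error rate drifts above a threshold. The class and parameter names below are hypothetical, not taken from the tutorial.

```python
from collections import deque

class RollingErrorMonitor:
    """Hypothetical sketch: flag when a deployed model's recent error rate drifts."""

    def __init__(self, window: int = 500, threshold: float = 0.10):
        self.errors = deque(maxlen=window)  # 1 = mistake, 0 = correct
        self.threshold = threshold

    def record(self, prediction, label) -> None:
        self.errors.append(int(prediction != label))

    def error_rate(self) -> float:
        return sum(self.errors) / len(self.errors) if self.errors else 0.0

    def should_alert(self) -> bool:
        # Only alert once the window is full, to avoid noisy early readings.
        return len(self.errors) == self.errors.maxlen and self.error_rate() > self.threshold
```
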
Robust Distortion-free Watermarks for Language Models
A watermark for autoregressive language models that does not change the model's output distribution.
Modalities: Text
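
As a toy illustration of the family of techniques this paper belongs to, the sketch below implements the exponential-minimum (Gumbel-trick) sampling rule argmax_i r_i^(1/p_i), where r is a pseudorandom vector derived from a secret key, together with the usual detection statistic. All function names here are my own; this is a simplified sketch, not the paper's full scheme, which additionally uses a fixed key sequence and alignment-based detection for robustness to edits.

```python
import numpy as np

def keyed_uniforms(key: int, step: int, vocab_size: int) -> np.ndarray:
    # Pseudorandom uniforms in [0, 1) derived from the secret key and position.
    rng = np.random.default_rng([key, step])
    return rng.random(vocab_size)

def watermarked_sample(probs: np.ndarray, key: int, step: int) -> int:
    # Gumbel-trick sampling: argmax_i r_i^(1/p_i) draws token i with
    # probability p_i exactly, so the output distribution is unchanged.
    r = keyed_uniforms(key, step, len(probs))
    return int(np.argmax(r ** (1.0 / np.maximum(probs, 1e-12))))

def detection_score(tokens: list[int], key: int, vocab_size: int) -> float:
    # For watermarked text, r[token] is biased toward 1, so this sum is
    # large; compare it to a threshold calibrated on unwatermarked text.
    score = 0.0
    for step, token in enumerate(tokens):
        r = keyed_uniforms(key, step, vocab_size)
        score += -np.log(1.0 - r[token] + 1e-12)
    return score
```
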
HuggingFace Provenance, Watermarking & Deepfake Detection Collection
A collection of resources on provenance, watermarking, and deepfake-detection tools that are used to assess the outputs of foundation models.
Modalities: Text, Speech, Vision

AI Vulnerability Database
An open-source, extensible knowledge base of AI failures.
Modalities: Text, Speech, Vision