Monitoring foundation model usage is an evolving area of research. The resources below cover some of the techniques for doing so, such as watermarking model outputs and gating access to the model.

A database of harmful incidents tied to AI systems, where developers or users can submit incident reports.
Outlines BigScience’s core values and how the organization promotes them, which in turn guides use restrictions and communicates acceptable usage to users.
Guidance for downstream developers on how to build responsibly with Llama 2, including details on how to report issues and instructions related to red-teaming and RLHF.
A resource describing how to require user credentials for model access, which may be appropriate for models trained for topics such as hate speech (a minimal gating sketch follows this list).
A tutorial given at FAccT and other venues describing how and why to monitor ML models. Includes a presentation on using transformer models as monitors for error detection (see the monitoring sketch after this list).
A watermark for autoregressive language models (see the watermarking sketch after this list).
A collection of provenance, watermarking, and deepfake detection tools used to assess the outputs of foundation models.
An open-source, extensible knowledge base of AI failures.
FinetuneDB is an LLM Ops platform for customizing AI models to deliver personalized experiences at scale. It helps automate the creation of fine-tuning datasets on a per-user basis by transforming provided data into the required format, and its monitoring and evaluation suite checks that each personalized model stays aligned with its goals.
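
For the gated-access entry above, the sketch below shows one way to require user credentials before a model will answer requests. It assumes a Flask service and a hypothetical `APPROVED_TOKENS` allow-list standing in for a real credential store; the linked resource describes credential gating in general rather than this specific code.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical allow-list; in practice this would be a store of users who
# applied for access and accepted the model's terms of use.
APPROVED_TOKENS = {"token-for-alice", "token-for-bob"}


def generate(prompt: str) -> str:
    # Placeholder for the actual foundation model call.
    return f"model output for: {prompt}"


@app.route("/generate", methods=["POST"])
def gated_generate():
    # Only credentialed users may query the model.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if token not in APPROVED_TOKENS:
        abort(403)
    prompt = request.get_json(force=True).get("prompt", "")
    return jsonify({"completion": generate(prompt)})
```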
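
For the monitoring entry, a minimal usage-logging sketch: every prompt/completion pair is appended to an audit log along with a flag from a screening check. The `flag_output` heuristic is a placeholder; a deployed monitor might instead run a trained classifier, such as the transformer-based error detector discussed in the tutorial.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="model_usage.log", level=logging.INFO)


def flag_output(text: str) -> bool:
    # Placeholder screening rule; swap in a trained classifier in practice.
    return "ERROR" in text


def monitored_generate(generate, prompt: str) -> str:
    """Wrap a generation function so every call is logged and screened."""
    completion = generate(prompt)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "completion": completion,
        "flagged": flag_output(completion),
    }))
    return completion
```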
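
For the watermarking entry, the sketch below illustrates the green-list idea behind watermarks for autoregressive language models: at each step, the previous token seeds a pseudo-random split of the vocabulary, "green" tokens get a small logit boost before sampling, and a detector counts green tokens and reports a z-score. The parameter values, the hash-based seeding rule, and the toy vocabulary size are assumptions for illustration, not the cited scheme's exact settings.

```python
import hashlib
import numpy as np

VOCAB_SIZE = 50_000      # toy vocabulary size (assumption)
GREEN_FRACTION = 0.25    # fraction of the vocabulary marked "green"
GREEN_BIAS = 2.0         # logit boost applied to green tokens


def green_mask(prev_token_id: int) -> np.ndarray:
    """Pseudo-randomly split the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    greens = rng.choice(VOCAB_SIZE, size=int(GREEN_FRACTION * VOCAB_SIZE), replace=False)
    mask = np.zeros(VOCAB_SIZE, dtype=bool)
    mask[greens] = True
    return mask


def watermark_logits(logits: np.ndarray, prev_token_id: int) -> np.ndarray:
    """Bias the model's next-token logits toward the green list before sampling."""
    biased = logits.copy()
    biased[green_mask(prev_token_id)] += GREEN_BIAS
    return biased


def detect(token_ids: list[int]) -> float:
    """z-score for the hypothesis that a token sequence was watermarked."""
    hits = sum(green_mask(prev)[cur] for prev, cur in zip(token_ids, token_ids[1:]))
    n = len(token_ids) - 1
    expected = GREEN_FRACTION * n
    std = np.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return float((hits - expected) / std)
```

A large positive z-score means the text contains far more green tokens than chance would predict, which is evidence that it was generated with the watermark applied.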