Environmental Impact Resources for Foundation Models

Foundation model development is often resource intensive. The tools below help measure energy consumption and estimate the carbon intensity of the energy source used. Decisions made during or prior to model training can have a significant effect on the upstream and downstream environmental impact of a given model.

Azure Emissions Impact Dashboard (Text, Speech, Vision)
Monitoring the environmental impact of training machine learning models on Azure.

Carbontracker (Text, Speech, Vision)
carbontracker is a tool for tracking and predicting the energy consumption and carbon footprint of training deep learning models, as described in Anthony et al. (2020).
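A minimal sketch of wrapping a training loop with carbontracker, following the usage shown in the project's README; `train_one_epoch` is a hypothetical stand-in for your own training step:

```python
from carbontracker.tracker import CarbonTracker

max_epochs = 10
tracker = CarbonTracker(epochs=max_epochs)  # predicts the full run's footprint from early epochs

for epoch in range(max_epochs):
    tracker.epoch_start()
    train_one_epoch()  # hypothetical: your own training step goes here
    tracker.epoch_end()

tracker.stop()  # shut down monitoring cleanly
```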

CodeCarbon (Text, Speech, Vision)
Estimate and track carbon emissions from your computer, quantify and analyze their impact.
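A minimal sketch of CodeCarbon's tracker API, following its documented start/stop pattern; `train()` is a hypothetical placeholder:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="my-training-run")  # logs to emissions.csv by default
tracker.start()
try:
    train()  # hypothetical placeholder for your training code
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```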

Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model (Text)
A comprehensive account of the broader environmental impact of the BLOOM language model.

Experiment Impact Tracker (Text, Speech, Vision)
experiment-impact-tracker is meant to be a simple drop-in method to track the energy usage, carbon emissions, and compute utilization of your system. Currently, on Linux systems with Intel chips that support the RAPL or Power Gadget interfaces and NVIDIA GPUs, it records power draw from the CPU and GPU, hardware information, Python package versions, estimated carbon emissions, and more. In California it even supports real-time carbon emissions information by querying caiso.com.
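A sketch of the drop-in pattern described above, based on the usage shown in the project's README; the log directory name is arbitrary:

```python
from experiment_impact_tracker.compute_tracker import ImpactTracker

tracker = ImpactTracker("impact_logs/")  # power, hardware, and emissions data land here
tracker.launch_impact_monitor()  # background process samples CPU/GPU power while training runs

# ... run training as usual; logs accumulate until the main process exits ...
```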

Google Cloud Carbon Footprint Measurement (Text, Speech, Vision)
Tracks the emissions associated with using Google's cloud compute resources.

Making AI Less "Thirsty" (Text, Speech, Vision)
Uncovers and addresses the secret water footprint of AI models, estimating the water usage of training and deploying LLMs.
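As a rough illustration of the kind of accounting the paper motivates, the sketch below estimates operational water use from server energy, on-site water usage effectiveness (WUE, liters per kWh), power usage effectiveness (PUE), and the off-site water intensity of electricity generation. All constants are illustrative placeholders, not figures from the paper:

```python
def estimated_water_liters(server_energy_kwh: float,
                           wue_onsite_l_per_kwh: float = 0.5,   # placeholder; varies by site and climate
                           pue: float = 1.2,                    # placeholder facility overhead factor
                           ewif_offsite_l_per_kwh: float = 3.0  # placeholder water intensity of the grid
                           ) -> float:
    """Rough operational water estimate: on-site cooling plus off-site electricity generation."""
    onsite = server_energy_kwh * wue_onsite_l_per_kwh
    offsite = server_energy_kwh * pue * ewif_offsite_l_per_kwh
    return onsite + offsite

# e.g. a job that draws 10 MWh of server energy:
print(f"~{estimated_water_liters(10_000):,.0f} L of water")
```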

Scaling Laws for Neural Language Models (Text)
Provides scaling laws for determining the optimal allocation of a fixed compute budget.

Training Compute-Optimal Large Language Models (Text)
Provides details on the optimal model size and number of training tokens for a transformer-based language model under a given computational budget.
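To make the allocation question concrete, here is a small sketch under two widely cited approximations from this line of work: training compute C ≈ 6·N·D FLOPs for N parameters and D tokens, and the roughly 20-tokens-per-parameter compute-optimal ratio associated with Chinchilla. The paper fits its own coefficients, so treat this as an order-of-magnitude estimate only:

```python
import math

def compute_optimal_split(compute_flops: float, tokens_per_param: float = 20.0):
    """Split a FLOP budget into parameters N and tokens D using C ~ 6*N*D and D ~ 20*N."""
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# e.g. a 1e24 FLOP budget:
n, d = compute_optimal_split(1e24)
print(f"~{n / 1e9:.0f}B parameters trained on ~{d / 1e12:.1f}T tokens")
```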