DVCLive 3.0 is now available. Go to the release notes to see what changed.
This documentation provides the details of the dvclive Python API module,
which can be imported like any other Python module, for example:
```python
from dvclive import Live
```

If you use one of the supported ML Frameworks, you can jump directly to its
corresponding page.
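For example, a minimal sketch of the Keras integration (the model and data here are just placeholders; see the corresponding framework page for details):

```python
import numpy as np
from tensorflow import keras
from dvclive.keras import DVCLiveCallback

# Tiny placeholder model and data, only to illustrate the callback wiring.
model = keras.Sequential([keras.Input(shape=(8,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
x_train = np.random.rand(32, 8)
y_train = np.random.rand(32, 1)

# DVCLiveCallback logs the training metrics at the end of each epoch.
model.fit(x_train, y_train, epochs=2, callbacks=[DVCLiveCallback()])
```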
To use DVCLive directly, start by initializing it, typically as a context manager:

```python
with Live() as live:
```

See Live() for details.
```python
live.log_artifact("model.pt", type="model", name="gpt")
```

See Live.log_artifact().
```python
import numpy as np

img = np.ones((500, 500, 3), np.uint8)
live.log_image("image.png", img)
```

See Live.log_image().
```python
live.log_metric("acc", 0.9)
```

See Live.log_metric().
```python
params = {
    "num_classes": 10,
    "metrics": ["accuracy", "mae"],
    "optimizer": "adam"
}
live.log_params(params)
```

See Live.log_params().

```python
datapoints = [
    {"name": "petal_width", "importance": 0.4},
    {"name": "petal_length", "importance": 0.33},
    {"name": "sepal_width", "importance": 0.24},
    {"name": "sepal_length", "importance": 0.03}
]
live.log_plot(
    "iris", datapoints, x="importance", y="name",
    template="bar_horizontal", title="Iris Feature Importance"
)
```

See Live.log_plot().
```python
y_true = [0, 0, 1, 1]
y_pred = [0.2, 0.5, 0.3, 0.8]
live.log_sklearn_plot("roc", y_true, y_pred)
```

See Live.log_sklearn_plot().

Optionally, update the step number when you are done logging for the current step:

```python
live.next_step()
```

See Live.next_step().
Under the hood, Live.next_step() calls Live.make_summary(),
Live.make_dvcyaml(), and Live.make_report().
If DVC Studio access is configured, updates will also be sent to DVC Studio.
If you want to decouple the step update from the rest of the calls, you can
manually modify the Live.step property and call Live.make_summary() /
Live.make_dvcyaml() / Live.make_report().
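For illustration, a minimal sketch of that decoupled pattern (the metric name and values below are placeholders):

```python
from dvclive import Live

with Live() as live:
    for epoch in range(3):
        live.step = epoch                           # update the step manually
        live.log_metric("loss", 1.0 / (epoch + 1))  # placeholder metric value
        live.make_summary()                         # refresh the metrics summary
        live.make_dvcyaml()                         # refresh the DVC plot/metric definitions
        live.make_report()                          # refresh the report
```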
Joining the above snippets, you can include DVCLive in your training code:
```python
# train.py

from dvclive import Live

with Live() as live:
    live.log_param("epochs", NUM_EPOCHS)

    for epoch in range(NUM_EPOCHS):
        train_model(...)
        metrics = evaluate_model(...)

        for metric_name, value in metrics.items():
            live.log_metric(metric_name, value)

        live.next_step()

    live.log_artifact(path, type="model", name=name)
```

After you run your training code, all the logged data will be stored in the
dvclive directory and tracked as a DVC experiment for
analysis and comparison.
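With default settings, the resulting layout looks roughly like the sketch below (illustrative only; the exact files depend on what you log and how Live() is configured):

```
dvclive/
├── metrics.json   # latest value of each metric from log_metric()
├── params.yaml    # values from log_param() / log_params()
└── plots/         # per-step metric history, images, and custom plots
```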
Experimenting in Python interactively (like in notebooks) is great for
exploration, but eventually you may need a more structured way to run
reproducible experiments. By configuring DVC pipelines, you can
run experiments with dvc exp run. Pipelines help you organize your ML
workflow beyond a single notebook or script so you can modularize and
parametrize your code. See how to set up a pipeline to work with DVCLive.