
# Run

export const GitHubLink = ({url}) => <a href={url} target="_blank" rel="noopener noreferrer" className="github-source-link">
    <svg width="20" height="20" viewBox="0 0 24 24" fill="currentColor" xmlns="http://www.w3.org/2000/svg">
      <path d="M12 0C5.37 0 0 5.37 0 12c0 5.31 3.435 9.795 8.205 11.385.6.105.825-.255.825-.57 0-.285-.015-1.23-.015-2.235-3.015.555-3.795-.735-4.035-1.41-.135-.345-.72-1.41-1.23-1.695-.42-.225-1.02-.78-.015-.795.945-.015 1.62.87 1.845 1.23 1.08 1.815 2.805 1.305 3.495.99.105-.78.42-1.305.765-1.605-2.67-.3-5.46-1.335-5.46-5.925 0-1.305.465-2.385 1.23-3.225-.12-.3-.54-1.53.12-3.18 0 0 1.005-.315 3.3 1.23.96-.27 1.98-.405 3-.405s2.04.135 3 .405c2.295-1.56 3.3-1.23 3.3-1.23.66 1.65.24 2.88.12 3.18.765.84 1.23 1.905 1.23 3.225 0 4.605-2.805 5.625-5.475 5.925.435.375.81 1.095.81 2.22 0 1.605-.015 2.895-.015 3.3 0 .315.225.69.825.57A12.02 12.02 0 0024 12c0-6.63-5.37-12-12-12z" />
    </svg>
    GitHub source
  </a>;

<GitHubLink url="https://github.com/wandb/wandb/blob/main/wandb/sdk/wandb_run.py#L489" />

```python theme={null}
Run(
    settings: Settings,
    config: dict[str, Any] | None = None,
    sweep_config: dict[str, Any] | None = None,
    launch_config: dict[str, Any] | None = None,
)
```

## Description

A unit of computation logged by W\&B. Typically, this is an ML experiment.

Call [`wandb.init()`](https://docs.wandb.ai/models/ref/python/functions/init) to create a
new run. `wandb.init()` starts a new run and returns a `wandb.Run` object.
Each run is associated with a unique ID (run ID). W\&B recommends using
a context manager (a `with` statement) to automatically finish the run.

For distributed training experiments, you can either track each process
separately using one run per process or track all processes to a single run.
See [Log distributed training experiments](https://docs.wandb.ai/models/track/log/distributed-training)
for more information.

You can log data to a run with `wandb.Run.log()`. Anything you log using
`wandb.Run.log()` is sent to that run. See
[Create an experiment](https://docs.wandb.ai/models/track/create-an-experiment) or the
[`wandb.init`](https://docs.wandb.ai/models/ref/python/functions/init) API reference page
for more information.

There is another `Run` object in the
[`wandb.apis.public`](https://docs.wandb.ai/models/ref/python/public-api/api)
namespace. Use that object to interact with runs that have already been
created.

Attributes:

* **summary**: (Summary) A summary of the run, which is a dictionary-like
  object. For more information, see
  [Log summary metrics](https://docs.wandb.ai/models/track/log/log-summary).

## Examples:

Create a run with `wandb.init()`:

```python theme={null}
import wandb

# Start a new run and log some data.
# Use a context manager (`with` statement) to automatically finish the run.
with wandb.init(entity="entity", project="project") as run:
    run.log({"accuracy": 0.9, "loss": 0.1})
```

## Args:

* **settings**: (`Settings`) The run's settings.
* **config**: (`dict[str, Any] | None`) Configuration values for the run.
* **sweep\_config**: (`dict[str, Any] | None`) The sweep configuration, if the run is part of a sweep.
* **launch\_config**: (`dict[str, Any] | None`) The launch configuration, if the run was created by W\&B Launch.

## Properties:

### settings

A frozen copy of the run's `Settings` object.

### dir

The directory where files associated with the run are saved.

### config

Config object associated with this run.

### config\_static

Static config object associated with this run.

### name

Display name of the run.

Display names are not guaranteed to be unique and may be descriptive.
By default, they are randomly generated.

### notes

Notes associated with the run, if there are any.

Notes can be a multiline string and can also use markdown and LaTeX
equations inside `$$`, like `$$x + 3$$`.

### tags

Tags associated with the run, if there are any.

### id

Identifier for this run.

### sweep\_id

Identifier for the sweep associated with the run, if there is one.

### path

Path to the run.

Run paths include entity, project, and run ID, in the format
`entity/project/run_id`.

### start\_time

Unix timestamp (in seconds) of when the run started.

### resumed

True if the run was resumed, False otherwise.

### offline

True if the run is offline, False otherwise.

### disabled

True if the run is disabled, False otherwise.

### group

Name of the group associated with this run.

Grouping runs together allows related experiments to be organized and
visualized collectively in the W\&B UI. This is especially useful for
scenarios such as distributed training or cross-validation, where
multiple runs should be viewed and managed as a unified experiment.

In shared mode, where all processes share the same run object,
setting a group is usually unnecessary, since there is only one
run and no grouping is required.

### job\_type

Name of the job type associated with the run.

View a run's job type in the run's Overview page in the W\&B App.

You can use this to categorize runs by their job type, such as
"training", "evaluation", or "inference". This is useful for organizing
and filtering runs in the W\&B UI, especially when you have multiple
runs with different job types in the same project. For more
information, see [Organize runs](https://docs.wandb.ai/models/runs#organize-runs).

### project

Name of the W\&B project associated with the run.

### project\_url

URL of the W\&B project associated with the run, if there is one.

Offline runs do not have a project URL.

### sweep\_url

URL of the sweep associated with the run, if there is one.

Offline runs do not have a sweep URL.

### url

URL of the W\&B run, if there is one.

Offline runs do not have a run URL.

### entity

The name of the W\&B entity associated with the run.

Entity can be a username or the name of a team or organization.

## Methods:

### alert

Create an alert with the given title and text.
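
As an illustrative sketch (the threshold, title, and text are hypothetical), you might send an alert when a metric degrades:

```python theme={null}
def alert_on_low_accuracy(run, accuracy, threshold=0.8):
    # Hypothetical helper: send a W&B alert when accuracy drops
    # below the given threshold.
    if accuracy < threshold:
        run.alert(
            title="Low accuracy",
            text=f"Accuracy {accuracy:.3f} fell below {threshold}.",
        )
```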

### define\_metric

Customize metrics logged with `wandb.Run.log()`.

### display

Display this run in Jupyter.

### finish

Finish a run and upload any remaining data.

Marks the completion of a W\&B run and ensures all data is synced to the server.
The run's final state is determined by its exit conditions and sync status.

Run States:

* Running: Active run that is logging data and/or sending heartbeats.
* Crashed: Run that stopped sending heartbeats unexpectedly.
* Finished: Run completed successfully (`exit_code=0`) with all data synced.
* Failed: Run completed with errors (`exit_code!=0`).
* Killed: Run was forcibly stopped before it could finish.

### finish\_artifact

Finish a non-finalized artifact as an output of a run.

Subsequent "upserts" with the same distributed ID will result in a new version.

### link\_artifact

Link the artifact to a collection.

The term “link” refers to pointers that connect where W\&B stores the
artifact and where the artifact is accessible in the registry. W\&B
does not duplicate artifacts when you link an artifact to a collection.

View linked artifacts in the Registry UI for the specified collection.

### link\_model

Log a model artifact version and link it to a registered model in the model registry.

Linked model versions are visible in the UI for the specified registered model.

This method will:

* Check whether a model artifact named 'name' has already been logged. If so, use the
  artifact version whose files match those at 'path', or log a new version. Otherwise,
  log the files under 'path' as a new model artifact named 'name' of type 'model'.
* Check whether a registered model named 'registered\_model\_name' exists in the
  'model-registry' project. If not, create a new registered model with that name.
* Link a version of the model artifact 'name' to the registered model
  'registered\_model\_name'.
* Attach the aliases in the 'aliases' list to the newly linked model artifact version.
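
The steps above can be sketched as a single call. The path, artifact name, and registered model name below are hypothetical, and the call needs a configured W\&B account, so it is shown as a helper rather than executed:

```python theme={null}
def publish_checkpoint(run, checkpoint_dir):
    # Hypothetical names: "my-model" is the model artifact and
    # "My Registered Model" is the registered model to link into.
    run.link_model(
        path=checkpoint_dir,
        registered_model_name="My Registered Model",
        name="my-model",
        aliases=["staging"],
    )
```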

### log

Upload run data.

Use `log` to log data from runs, such as scalars, images, video,
histograms, plots, and tables. See [Log objects and media](https://docs.wandb.ai/models/track/log) for
code snippets, best practices, and more.

Basic usage:

```python theme={null}
import wandb

with wandb.init() as run:
    run.log({"train-loss": 0.5, "accuracy": 0.9})
```

The previous code snippet saves the loss and accuracy to the run's
history and updates the summary values for these metrics.

Visualize logged data in a workspace at [wandb.ai](https://wandb.ai) or on a
[self-hosted instance](https://docs.wandb.ai/platform/hosting) of the W\&B app,
or export the data to visualize and explore locally (for example, in a Jupyter
notebook) with the [Public API](https://docs.wandb.ai/models/track/public-api-guide).

Logged values don't have to be scalars. You can log any
[W\&B supported Data Type](https://docs.wandb.ai/models/ref/python/data-types)
such as images, audio, video, and more. For example, you can use
`wandb.Table` to log structured data. See the
[Log tables, visualize and query data](https://docs.wandb.ai/models/tables/tables-walkthrough)
tutorial for more details.

W\&B organizes metrics with a forward slash (`/`) in their name
into sections named using the text before the final slash. For example,
the following results in two sections named "train" and "validate":

```python theme={null}
with wandb.init() as run:
    # Log metrics in the "train" section.
    run.log(
        {
            "train/accuracy": 0.9,
            "train/loss": 30,
            "validate/accuracy": 0.8,
            "validate/loss": 20,
        }
    )
```

Only one level of nesting is supported; `run.log({"a/b/c": 1})`
produces a section named "a".

`run.log()` is not intended to be called more than a few times per second.
For optimal performance, limit your logging to once every N iterations,
or collect data over multiple iterations and log it in a single step.

By default, each call to `log` creates a new "step".
The step must always increase, and it is not possible to log
to a previous step. You can use any metric as the X axis in charts.
See [Custom log axes](https://docs.wandb.ai/models/track/log/customize-logging-axes)
for more details.

In many cases, it is better to treat the W\&B step like
you'd treat a timestamp rather than a training step.

```python theme={null}
with wandb.init() as run:
    # Example: log an "epoch" metric for use as an X axis.
    run.log({"epoch": 40, "train-loss": 0.5})
```

It is possible to use multiple `wandb.Run.log()` invocations to log to
the same step with the `step` and `commit` parameters.
The following are all equivalent:

```python theme={null}
with wandb.init() as run:
    # Normal usage:
    run.log({"train-loss": 0.5, "accuracy": 0.8})
    run.log({"train-loss": 0.4, "accuracy": 0.9})

    # Implicit step without auto-incrementing:
    run.log({"train-loss": 0.5}, commit=False)
    run.log({"accuracy": 0.8})
    run.log({"train-loss": 0.4}, commit=False)
    run.log({"accuracy": 0.9})

    # Explicit step:
    run.log({"train-loss": 0.5}, step=current_step)
    run.log({"accuracy": 0.8}, step=current_step)
    current_step += 1
    run.log({"train-loss": 0.4}, step=current_step)
    run.log({"accuracy": 0.9}, step=current_step, commit=True)
```

### log\_artifact

Declare an artifact as an output of a run.

### log\_code

Save the current state of your code to a W\&B Artifact.

By default, it walks the current directory and logs all files that end with `.py`.
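
To narrow what is captured, pass filters. A sketch with a hypothetical root directory and filter logic, shown as a helper rather than executed:

```python theme={null}
def snapshot_code(run):
    # Hypothetical filters: log Python files under src/, skipping
    # anything inside a virtual environment.
    run.log_code(
        root="src",
        include_fn=lambda path: path.endswith(".py"),
        exclude_fn=lambda path: ".venv" in path,
    )
```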

### log\_model

Log a model artifact containing the contents of 'path' to a run and mark it as an output of this run.

The name of the model artifact can only contain alphanumeric characters,
underscores, and hyphens.
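
A sketch of the call, with a hypothetical path and artifact name, shown as a helper since logging a model requires a configured run:

```python theme={null}
def save_model(run, path="models/best.pt"):
    # "resnet50-baseline" is a hypothetical artifact name; names may
    # contain only alphanumeric characters, underscores, and hyphens.
    run.log_model(path=path, name="resnet50-baseline")
```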

### mark\_preempting

Mark this run as preempting.

Also tells the internal process to immediately report this to the server.

### pin\_config\_keys

Pin config keys to display in the References section on Run Overview.

Pinned keys appear prominently above Notes on the Run Overview page.
String values are rendered as markdown; non-strings are rendered as
plain text. Calling this again replaces the previously pinned list.

### restore

Download the specified file from cloud storage.

The file is placed in the current directory or the run directory.
By default, the file is downloaded only if it doesn't already exist.

### save

Sync one or more files to W\&B.

Relative paths are relative to the current working directory.

A Unix glob, such as "myfiles/\*", is expanded at the time `save` is
called regardless of the `policy`. In particular, new files are not
picked up automatically.

A `base_path` may be provided to control the directory structure of
uploaded files. It should be a prefix of `glob_str`, and the directory
structure beneath it is preserved.

When given an absolute path or glob and no `base_path`, one
directory level is preserved.

Files are automatically deduplicated: calling `save()` multiple times
on the same file without modifications will not re-upload it.

### status

Get sync information about the current run from the internal backend.

### unwatch

Remove PyTorch model topology, gradient, and parameter hooks.

### upsert\_artifact

Declare (or append to) a non-finalized artifact as output of a run.

Note that you must call run.finish\_artifact() to finalize the artifact.
This is useful when distributed jobs need to all contribute to the same artifact.

### use\_artifact

Declare an artifact as an input to a run.

Call `download` or `file` on the returned object to get the contents locally.
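
A sketch with a hypothetical artifact name and alias, shown as a helper since using an artifact requires a configured (online) run:

```python theme={null}
def fetch_dataset(run):
    # "my-dataset:latest" is a hypothetical artifact name and alias.
    artifact = run.use_artifact("my-dataset:latest")
    # Download the artifact's contents and return the local directory.
    return artifact.download()
```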

### use\_model

Download the files logged in a model artifact 'name'.

### watch

Hook into the given PyTorch model to monitor gradients and the model's computational graph.

This function can track parameters, gradients, or both during training.
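
A sketch of a typical call; the model is any `torch.nn.Module`, and the logging cadence is illustrative:

```python theme={null}
def monitor(run, model):
    # Log both gradients and parameters every 100 steps.
    run.watch(model, log="all", log_freq=100)
```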
