Command-line interface#

swh objstorage#

Software Heritage Objstorage tools.

swh objstorage [OPTIONS] COMMAND [ARGS]...

Options

-C, --config-file <config_file>#

Configuration file.

fsck#

Check the objstorage is not corrupted.

swh objstorage fsck [OPTIONS]

import#

Import a local directory into an existing objstorage.

swh objstorage import [OPTIONS] DIRECTORY...

Arguments

DIRECTORY#

Required argument(s)

replay#

Fill a destination Object Storage using a journal stream.

This is typically used for a mirror configuration, by reading a Journal and retrieving objects from an existing source ObjStorage.

There can be several ‘replayers’ filling a given ObjStorage as long as they use the same group-id. You can use the KAFKA_GROUP_INSTANCE_ID environment variable to use KIP-345 static group membership.

This service retrieves object ids to copy from the ‘content’ topic. It will only copy an object’s content if the object’s description in the kafka message has status:visible set.

--exclude-sha1-file may be used to exclude some hashes to speed up the replay in case many of the contents are already in the destination objstorage. It must contain a concatenation of all (sha1) hashes, and it must be sorted. This file will not be fully loaded into memory at any given time, so it can be arbitrarily large.

--size-limit excludes file contents whose size is (strictly) above the given size limit. If 0, there is no size limit.

--check-dst sets whether the replayer should check in the destination ObjStorage before copying an object. You can turn that off if you know you’re copying to an empty ObjStorage.

--check-src-hashes computes the hashes of the fetched object before sending it to the destination.

--concurrency N sets the number of threads in charge of copying blob objects from the source objstorage to the destination one. Using a large concurrency value makes sense if both the source and destination objstorages support highly parallel workloads. Make sure not to set the batch_size configuration option too low, otherwise the concurrency will not actually be useful (each batch of kafka messages is dispatched among the threads).

The expected configuration file should have 3 sections:

  • objstorage: the source object storage from which the objects to copy are retrieved,

  • objstorage_dst: the destination object storage the objects are copied into,

  • journal_client: the configuration of the journal (kafka) client used to consume the ‘content’ topic.

In addition to these 3 mandatory sections, an optional ‘replayer’ section can be provided with an ‘error_reporter’ config entry allowing one to specify Redis connection parameters that will be used to report objects that could not be copied, e.g.:

objstorage:
  [...]
objstorage_dst:
  [...]
journal_client:
  [...]
replayer:
  error_reporter:
    host: redis.local
    port: 6379
    db: 1
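
For illustration only, the three mandatory sections could be filled along the following lines. This is a hedged sketch: the backend classes, hosts and values shown here are assumptions to adapt to the actual deployment, not defaults of the tool.

objstorage:
  cls: remote                          # source: an existing (read-only) RPC objstorage
  url: http://source-objstorage.internal:5003/
objstorage_dst:
  cls: pathslicing                     # destination: a local on-disk objstorage
  root: /srv/softwareheritage/objects
  slicing: 0:2/2:4/4:6
journal_client:
  cls: kafka
  brokers:
    - kafka1.internal:9092
  group_id: objstorage-replayer        # shared by all replayers filling this objstorage
  prefix: swh.journal.objects          # topic prefix under which the ‘content’ topic lives
  batch_size: 200                      # keep large enough for --concurrency to be useful
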
swh objstorage replay [OPTIONS]

Options

-n, --stop-after-objects <stop_after_objects>#

Stop after processing this many objects. Default is to run forever.

--exclude-sha1-file <exclude_sha1_file>#

File containing a sorted array of hashes to be excluded.

--size-limit <size_limit>#

Exclude files whose size is over this limit. 0 (default) means no size limit.

--check-dst, --no-check-dst#

Check whether the destination contains the object before copying.

--check-src-hashes#

Check objects in flight, i.e. compute the hashes of fetched objects before sending them to the destination.

--concurrency <concurrency>#

Number of concurrent threads doing the actual copy of blobs between the source and destination objstorages.

rpc-serve#

Run a standalone objstorage server.

This is not meant to be run on production systems.
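
The object storage to expose is read from the configuration file passed with -C/--config-file. As a minimal sketch for local testing, assuming the pathslicing backend (the path and slicing values below are only examples):

objstorage:
  cls: pathslicing
  root: /tmp/swh-objects        # example location for the object files
  slicing: 0:2/2:4/4:6          # how object ids are split into subdirectories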

swh objstorage rpc-serve [OPTIONS]

Options

--host <IP>#

Host ip address to bind the server on

Default:

'0.0.0.0'

-p, --port <PORT>#

Binding port of the server

Default:

5003

--debug, --no-debug#

Indicates if the server should run in debug mode

winery#

Winery related commands

swh objstorage winery [OPTIONS] COMMAND [ARGS]...

clean-deleted-objects#

Clean deleted objects from Winery

swh objstorage winery clean-deleted-objects [OPTIONS]

packer#

Run the winery packer process

This process is in charge of creating (packing) shard files when a winery writer has accumulated enough file objects to reach the shard’s max_size.

When a shard becomes full, it gets locked by this packer service. The shard file creation can then occur either as part of the packing step (within this process) when the create_images configuration option is set, or be waited for (in this case, the shard creation is delegated to the shard management tool, aka swh objstorage winery rbd).

When the shard file is ready, the shard gets packed.

If clean_immediately is set, the write shard is immediately removed and the shard is moved to the readonly state.

Note: when using a cls: directory type for the shards_pool configuration, it is advisable to set create_images to True; the rbd management process is then unnecessary (when writing directly to shard files, there is no need to provision the RBD volume, etc.).
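
As a rough sketch of that setup (directory-backed shards packed by this process): apart from max_size, create_images, clean_immediately and the shards_pool cls, which are mentioned above, the key layout and paths below are assumptions, not documented defaults.

objstorage:
  cls: winery
  shards:
    max_size: 107374182400        # assumed layout: shard size (here 100 GiB) triggering packing
  shards_pool:
    cls: directory                # write shard files directly on disk, no RBD provisioning needed
    base_directory: /srv/winery   # hypothetical key: where the shard files live
  packer:
    create_images: true           # create the shard files within the packer process
    clean_immediately: true       # drop the write shard right after packing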

swh objstorage winery packer [OPTIONS]

Options

--stop-after-shards <stop_after_shards>#

rbd#

Run a winery RBD image manager process

This process is in charge of creating and mapping image files for shards. This is required for shards_pool of type cls: rbd. It will:

  • Map all readonly shards (if need be).

  • If manage_rw_images is true, provision a new RBD image in the Ceph cluster each time a shard appears in the standby or writing state.

  • When a shard packing completes (shard status becomes one of packed, cleaning, readonly), the image is mapped read-only.

  • Record mapping events in the database.
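
For reference, a hedged sketch of the shards_pool section this process expects could look like the following; apart from cls: rbd, the key name and value below are assumptions about the Ceph setup, not documented defaults.

shards_pool:
  cls: rbd              # shards are stored as RBD images in a Ceph pool
  pool_name: shards     # hypothetical key: the Ceph pool holding the shard images

With such a pool, this process is typically started with --manage-rw-images so that new images get provisioned as shards appear in the standby or writing state.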

swh objstorage winery rbd [OPTIONS]

Options

--stop-instead-of-waiting#
--manage-rw-images#
--only-prefix <only_prefix>#

rw-shard-cleaner#

Run the winery read-write shard cleaner process

This process is responsible for cleaning winery DB tables for shards that have been packed.

It performs cleanup of the packed read-write shards as soon as they are recorded as mapped on enough (--min-mapped-hosts) hosts. They get locked in the cleaning state, the database cleanup is performed, then the shard is moved to the final readonly state.

This process should run continuously as a background process if the winery setup is configured with clean_immediately=false.
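
In that case, the packer part of the configuration sketched earlier would simply carry clean_immediately set to false (same hypothetical key layout as before):

packer:
  clean_immediately: false    # leave packed shards for this rw-shard-cleaner process to clean up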

swh objstorage winery rw-shard-cleaner [OPTIONS]

Options

--stop-after-shards <stop_after_shards>#
--stop-instead-of-waiting#
--min-mapped-hosts <min_mapped_hosts>#

Number of hosts on which the image should be mapped read-only before cleanup