This command is commonly needed after dvc remote add or
dvc remote default to set up credentials or other
customizations specific to each remote storage type.
Synopsis
usage: dvc remote modify [-h] [--global | --system | --local] [-q | -v]
[-u]
name option [value]
positional arguments:
name Name of the remote
option Name of the option to modify
value (optional) Value of the option
Description
The remote name and option name are required. Config option names are specific
to the remote type. See dvc remote add and the
Available parameters per storage type section below for a list of
remote storage types.
This command modifies a remote section in the project's
config file. Alternatively, dvc config or
manual editing could be used to change the configuration.
Command options (flags)
-u, --unset - delete the configuration value for the given config option.
Don't provide a value when using this flag.
--global - save remote configuration to the global config (e.g.
~/.config/dvc/config) instead of .dvc/config.
--system - save remote configuration to the system config (e.g.
/etc/dvc/config) instead of .dvc/config.
--local - modify a local config file
instead of .dvc/config. It is located in .dvc/config.local and is
Git-ignored. This is useful when you need to specify private config options in
your config that you don't want to track and share through Git (credentials,
private locations, etc).
-h, --help - prints the usage/help message, and exits.
-q, --quiet - do not write anything to standard output. Exit with 0 if no
problems arise, otherwise 1.
-v, --verbose - displays detailed tracing information.
The following config options are available for all remote types:
url - the remote location can always be modified. This is how DVC determines
what type of remote it is, and thus which other config options can be modified
(see each type in the next section for more details).
For example, for an Amazon S3 remote (see more details in the S3 section
below):
$ dvc remote modify s3remote url s3://mybucket/path
Or a local remote (a directory in the file system):
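For instance, assuming a remote named localremote and a placeholder directory path:
$ dvc remote modify localremote url /home/user/dvcstore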
jobs - change the default number of processes for
remote storage synchronization operations
(see the --jobs option of dvc push, dvc pull, dvc fetch, dvc status,
and dvc gc). Accepts positive integers. The default is typically 4.
$ dvc remote modify myremote jobs 8
verify - upon downloading cache files (dvc pull, dvc fetch),
DVC will recalculate the file hashes to make
sure that these haven't been modified or corrupted during download. It may
slow down the aforementioned commands. The calculated hash is compared to the
value saved in the corresponding
DVC-file.
Note that this option is enabled on Google Drive remotes by default.
$ dvc remote modify myremote verify true
Available parameters per storage type
The following are the types of remote storage (protocols) and their config
options:
Amazon S3
By default, DVC expects that your AWS CLI is already configured,
and will use the default AWS credentials file to access S3. To override some of
these settings, use the following options.
url - remote location, in the s3://<bucket>/<key> format:
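For example (bucket and path are placeholders):
$ dvc remote modify s3remote url s3://mybucket/new/path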
use_ssl - whether or not to use SSL. By default, SSL is used.
$ dvc remote modify myremote use_ssl false
listobjects - whether or not to use list_objects. By default,
list_objects_v2 is used. Useful for Ceph and other S3 emulators.
$ dvc remote modify myremote listobjects true
sse - server-side encryption algorithm to use (e.g. AES256, aws:kms). By
default, no encryption is used.
$ dvc remote modify myremote sse AES256
sse_kms_key_id - identifier of the key to encrypt data uploaded when using
SSE-KMS. Required when the sse parameter (above) is set to aws:kms. This
parameter will be passed directly to AWS S3 functions, so DVC supports any
value that S3 supports, including both key ids and aliases.
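A sketch, using a hypothetical KMS key alias (key IDs work as well):
$ dvc remote modify myremote sse_kms_key_id alias/mykey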
grant_full_control - grants FULL_CONTROL permissions at object level
access control list for specific grantees. Equivalent of grant_read +
grant_read_acp + grant_write_acp.
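For example, granting full control to two grantees identified by placeholder
canonical user IDs:
$ dvc remote modify myremote grant_full_control id=aws-canonical-user-id,id=another-aws-canonical-user-id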
Besides that, any settings that are available for Amazon S3 (see the previous
section) may also be available for S3-compatible storage. For example, let's set
up a DVC remote using the example-name DigitalOcean space
(equivalent to a bucket in AWS) in the nyc3 region:
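A sketch of that setup (the remote name dospace and space name example-name are
placeholders); the endpointurl option points DVC at the S3-compatible endpoint:
$ dvc remote add dospace s3://example-name
$ dvc remote modify dospace endpointurl https://nyc3.digitaloceanspaces.com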
Microsoft Azure
The connection string contains sensitive user info. Therefore, it's safer to
add it with the --local option, so it's written to a Git-ignored config
file.
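For example, with a placeholder value for the connection_string option:
$ dvc remote modify --local myremote connection_string 'mysecret'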
For more information on configuring Azure Storage connection strings, refer to
the Azure Storage documentation.
Google Drive
gdrive_client_secret - Client secret for authentication with OAuth 2.0 when
using a custom Google Client project. Also requires using gdrive_client_id.
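For example (the secret is a placeholder):
$ dvc remote modify --local myremote gdrive_client_secret 'client-secret'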
gdrive_trash_only - configures dvc gc to move remote files to trash instead of
deleting them permanently. false by default, meaning "delete". Useful for shared
drives/folders, where delete permissions may not be given.
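For example, to have dvc gc move remote files to trash:
$ dvc remote modify myremote gdrive_trash_only true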
A service account is a Google account associated with your GCP project, and not
a specific user. Please refer to
Using service accounts for
more information.
gdrive_use_service_account - instructs DVC to authenticate using a service
account instead of OAuth. Make sure that the service account has read/write
access (as needed) to the file structure in the remote url.
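To enable it:
$ dvc remote modify myremote gdrive_use_service_account true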
gdrive_service_account_email - email address of the Google Project's service
account when gdrive_use_service_account is on. Also requires using
gdrive_service_account_p12_file_path.
gdrive_service_account_p12_file_path - Google Project's service account
.p12 file path when gdrive_use_service_account is on. Also requires using
gdrive_service_account_email.
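A sketch configuring the pair (email and file path are placeholders):
$ dvc remote modify myremote gdrive_service_account_email 'service-account@myproject.iam.gserviceaccount.com'
$ dvc remote modify myremote gdrive_service_account_p12_file_path path/to/file.p12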
Google Cloud Storage
A service account is a Google account associated with your GCP project, and not
a specific user. Please refer to
Using service accounts for
more information.
credentialpath - path to the file that contains the
service account key.
Make sure that the service account has read/write access (as needed) to the
file structure in the remote url.
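For example (the key file path is a placeholder):
$ dvc remote modify myremote credentialpath /path/to/project-keys.json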
The key ID and secret key contain sensitive user info. Therefore, it's safer
to add them with the --local option, so they're written to a Git-ignored
config file.
SSH
⚠️ DVC requires both SSH and SFTP access to work with remote SSH locations.
Please check that you are able to connect both ways with tools like ssh and
sftp (GNU/Linux).
Note that your server's SFTP root might differ from its physical root (/).
user - username to access the remote.
$ dvc remote modify --local myremote user myuser
The order in which DVC picks the username:
user parameter set with this command (found in .dvc/config);
User defined in the URL (e.g. ssh://user@example.com/path);
User defined in ~/.ssh/config for this host (URL);
Current user
port - port to access the remote.
$ dvc remote modify myremote port 2222
The order in which DVC decides the port number:
port parameter set with this command (found in .dvc/config);
Port defined in the URL (e.g. ssh://example.com:1234/path);
Port defined in ~/.ssh/config for this host (URL);
Default SSH port 22
keyfile - path to private key to access the remote.
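For example (the key path is a placeholder); the related password option (a
password to access the remote) is set the same way:
$ dvc remote modify --local myremote keyfile /path/to/keyfile
$ dvc remote modify --local myremote password mypassword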
The username and password may contain sensitive user info. Therefore, it's
safer to add them with the --local option, so they're written to a
Git-ignored config file.
ask_password - ask for a private key passphrase or a password to access the
remote.
$ dvc remote modify myremote ask_password true
gss_auth - use Generic Security Services authentication if available on the
host (for example, with Kerberos).
Using this param requires paramiko[gssapi], which is currently only
supported by our pip package, and can be installed with
pip install 'dvc[ssh_gssapi]'. Other packages (Conda, Windows, and MacOS
PKG) do not support it.
$ dvc remote modify myremote gss_auth true
allow_agent - whether to use SSH agents
(true by default). Setting this to false is useful when ssh-agent is
causing problems, such as a "No existing session" error:
$ dvc remote modify myremote allow_agent false
HDFS
💡 Using an HDFS cluster as remote storage is also supported via the WebHDFS API.
Read more about it by expanding the WebHDFS section in
dvc remote add.
HTTP
auth - authentication method to use when accessing the remote (e.g. basic or
custom):
custom - an additional HTTP header field will be set for all HTTP requests
to the remote in the form: custom_auth_header: password. The
custom_auth_header and password (or ask_password) parameters should
also be configured.
$ dvc remote modify myremote auth basic
method - override the
HTTP method to
use for file uploads (e.g. PUT should be used for
Artifactory).
By default, POST is used.
$ dvc remote modify myremote method PUT
custom_auth_header - HTTP header field name to use when the auth parameter
is set to custom.
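For example (the header name is a placeholder):
$ dvc remote modify --local myremote custom_auth_header 'My-Header'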
The username and password may contain sensitive user info. Therefore, it's
safer to add them with the --local option, so they're written to a
Git-ignored config file.
ask_password - ask each time for the password to use for any auth method.
$ dvc remote modify myremote ask_password true
Note that the password parameter takes precedence over ask_password. If
password is specified, DVC will not prompt the user to enter a password
for this remote.
ssl_verify - allows disabling SSL certificate verification, which is enabled
by default.
$ dvc remote modify myremote ssl_verify false
WebHDFS
💡 WebHDFS serves as an alternative for using the same remote storage supported
by HDFS. Read more about it by expanding the WebHDFS section in
dvc remote add.
user - username to access the remote; can be empty when using token
or an HdfsCLI cfg file. May only be used when Hadoop security is
off. Defaults to the current user as determined by whoami.
$ dvc remote modify --local myremote user myuser
token - Hadoop delegation token for WebHDFS; can be empty when using
user or an HdfsCLI cfg file. May be used when Hadoop security is
on.
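For example (the token value is a placeholder):
$ dvc remote modify --local myremote token 'mysecret'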
hdfscli_config - path to an HdfsCLI cfg file. WebHDFS access depends on
HdfsCLI, which by default reads its configuration from
~/.hdfscli.cfg. In the file, multiple aliases can be set with their own
connection parameters, like url or user. If using a cfg file,
webhdfs_alias can be set to specify which alias to use.
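For example (the file path is a placeholder):
$ dvc remote modify --local myremote hdfscli_config /path/to/.hdfscli.cfg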
webhdfs_alias - alias in an HdfsCLI cfg file to use. Only relevant if used
in conjunction with hdfscli_config. If not defined, default.alias in the
HdfsCLI cfg file will be used instead.
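For example (the alias name is a placeholder):
$ dvc remote modify --local myremote webhdfs_alias dev-alias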
The username, token, webhdfs_alias, and hdfscli_config values may contain
sensitive user info. Therefore, it's safer to add them with the --local
option, so they're written to a Git-ignored config file.
WebDAV
The username, password, and token may contain sensitive user info.
Therefore, it's safer to add them with the --local option, so they're
written to a Git-ignored config file.
Note that user/password and token authentication are incompatible. You
should authenticate against your WebDAV remote with either user/password or a
token.
ask_password - ask each time for the password to use for user/password
authentication. This has no effect if password or token are set.
$ dvc remote modify myremote ask_password true
cert_path - path to certificate used for WebDAV server authentication, if
you need to use local client side certificates.
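For example (the certificate path is a placeholder):
$ dvc remote modify --local myremote cert_path /path/to/cert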