>> parser.parse_args(['-vvv'])\r\nNamespace(verbose=3)\r\n```\nthe problem with counting `-v` is there's no way to access `WARN` and `ERROR`. We could count `-q` (quiet) or similar for that.\n@casperdcl so, it would start on `INFO` by deafult, if you want just warnings then use one `-q`, if you want to silence everything `-qq`, and one `-v` will be for `DEBUG` and two for `TRACE`, right?\r\n\r\n:thinking: let's wait for the opinion of @shcheklein, @efiop , @Suor , @pared (need some quorum on this one, can't decide by myself)\nI am ok with counting `v/q`s, this is more friendly for typing in. `--log` might be useful for scripting - more obvious when you read it, not sure how much common scripting dvc is though.\r\n\r\nSo my preference: implement `v/q`s, and keep `--log` up our sleeves for now.\nAwesome, @Suor , I'll add those implementation details to the description.\nperhaps creating & using `logging.TRACE = 5` is better than this env var?\n@casperdcl could you explain a little bit more? I'm not sure where the user should set the `logging.TRACE = 5` is it through the config file?\n\ncmd|level\n--:|:--\ndefault|INFO\n`-v`|DEBUG\n`-vv`|TRACE\nOhh got it :+1: yep, that's what I purposed first on the linked issue:\r\n\r\n> My suggestion would be to create different verbose levels, -vvv (INFO, DEBUG, DATABASE?)\r\n> ...\r\n> Modify the -v so you can express the level of verbosity with several ones -vvv\r\n> https://github.com/iterative/dvc/issues/2329#issue-473668442\r\n\r\nI like this idea better instead of using the env var\nthough really I'd prefer `--log TRACE|DEBUG|INFO|WARN(ING)|ERROR|FATAL`, and for backward compatibility `-v` is a shortcut for `--log DEBUG` \n@casperdcl is this more common? I'm more used to the v's :sweat_smile: \r\n\r\nEven the `argparse` documentation has a section on it:\r\n```python\r\n>>> parser = argparse.ArgumentParser()\r\n>>> parser.add_argument('--verbose', '-v', action="https://huggingface.co/datasets/nebius/count")\r\n>>> parser.parse_args(['-vvv'])\r\nNamespace(verbose=3)\r\n```\nthe problem with counting `-v` is there's no way to access `WARN` and `ERROR`. We could count `-q` (quiet) or similar for that.\n@casperdcl so, it would start on `INFO` by deafult, if you want just warnings then use one `-q`, if you want to silence everything `-qq`, and one `-v` will be for `DEBUG` and two for `TRACE`, right?\r\n\r\n:thinking: let's wait for the opinion of @shcheklein, @efiop , @Suor , @pared (need some quorum on this one, can't decide by myself)\nI am ok with counting `v/q`s, this is more friendly for typing in. `--log` might be useful for scripting - more obvious when you read it, not sure how much common scripting dvc is though.\r\n\r\nSo my preference: implement `v/q`s, and keep `--log` up our sleeves for now.\nAwesome, @Suor , I'll add those implementation details to the description.":1,"Also found out that `dvc run --help` explicitly declares support for `dirs` for both of these options:\r\n\r\n```\r\n -m , --metrics \r\n Declare output metric file or directory.\r\n -M , --metrics-no-cache \r\n Declare output metric file or directory (do not put\r\n into DVC cache).\r\n```\nFrom Discord discussion I assume that `-m`/`-M` flags should not accept directories for now.\n@nik123 That's correct! Thank you so much for looking into it! :pray: ":1,"Hi @florianspecker !\r\n\r\nThanks for reporting this issue! Could you show us verbose log with `$ dvc pull -v`, please?\r\n\r\nI'm pretty sure it is coming from https://github.com/iterative/dvc/blob/0.90.2/dvc/remote/local.py#L457 , but just want to double check.\nHi @efiop thanks a lot for looking into it!\r\n\r\n```\r\n$ dvc pull -v\r\n2020-03-19 14:35:24,577 DEBUG: PRAGMA user_version;\r\n2020-03-19 14:35:24,577 DEBUG: fetched: [(3,)]\r\n2020-03-19 14:35:24,577 DEBUG: CREATE TABLE IF NOT EXISTS state (inode INTEGER PRIMARY KEY, mtime TEXT NOT NULL, size TEXT NOT NULL, md5 TEXT NOT NULL, timestamp TEXT NOT NULL)\r\n2020-03-19 14:35:24,578 DEBUG: CREATE TABLE IF NOT EXISTS state_info (count INTEGER)\r\n2020-03-19 14:35:24,578 DEBUG: CREATE TABLE IF NOT EXISTS link_state (path TEXT PRIMARY KEY, inode INTEGER NOT NULL, mtime TEXT NOT NULL)\r\n2020-03-19 14:35:24,578 DEBUG: INSERT OR IGNORE INTO state_info (count) SELECT 0 WHERE NOT EXISTS (SELECT * FROM state_info)\r\n2020-03-19 14:35:24,578 DEBUG: PRAGMA user_version = 3;\r\n2020-03-19 14:35:25,372 DEBUG: Preparing to download data from 's3:///eac-plv-dataset-iphone5-4k-okr_q4_2019'\r\n2020-03-19 14:35:25,372 DEBUG: Preparing to collect status from s3:///eac-plv-dataset-iphone5-4k-okr_q4_2019\r\n2020-03-19 14:35:25,373 DEBUG: Collecting information from local cache...\r\n2020-03-19 14:35:25,375 DEBUG: Path '../../../../../Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/2c/b5d8b77eb1ac4f56dfbce8aa2f4dfd' inode '403712'\r\n2020-03-19 14:35:25,376 DEBUG: SELECT mtime, size, md5, timestamp from state WHERE inode=?\r\n2020-03-19 14:35:25,376 DEBUG: fetched: [('1580748406000000000', '10042082', '2cb5d8b77eb1ac4f56dfbce8aa2f4dfd', '1584624915719617024')]\r\n2020-03-19 14:35:25,377 DEBUG: UPDATE state SET timestamp = ? WHERE inode = ?\r\n2020-03-19 14:35:25,377 DEBUG: cache '../../../../../Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/2c/b5d8b77eb1ac4f56dfbce8aa2f4dfd' expected '2cb5d8b77eb1ac4f56dfbce8aa2f4dfd' actual '2cb5d8b77eb1ac4f56dfbce8aa2f4dfd'\r\n2020-03-19 14:35:25,379 DEBUG: SELECT count from state_info WHERE rowid=?\r\n2020-03-19 14:35:25,379 DEBUG: fetched: [(506,)]\r\n2020-03-19 14:35:25,380 DEBUG: UPDATE state_info SET count = ? WHERE rowid = ?\r\n2020-03-19 14:35:25,381 ERROR: unexpected error - [Errno 30] Read-only file system: '/Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/2c/b5d8b77eb1ac4f56dfbce8aa2f4dfd'\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/main.py\", line 50, in main\r\n ret = cmd.run()\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/command/data_sync.py\", line 31, in run\r\n recursive=self.args.recursive,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/__init__.py\", line 28, in wrapper\r\n ret = f(repo, *args, **kwargs)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/pull.py\", line 28, in pull\r\n recursive=recursive,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/fetch.py\", line 52, in _fetch\r\n used, jobs, remote=remote, show_checksums=show_checksums\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/data_cloud.py\", line 82, in pull\r\n cache, jobs=jobs, remote=remote, show_checksums=show_checksums\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 382, in pull\r\n download=True,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 346, in _process\r\n download=download,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 255, in status\r\n local_exists = self.cache_exists(md5s, jobs=jobs, name=self.cache_dir)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 214, in cache_exists\r\n + (\"cache in \" + name if name else \"local cache\"),\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 216, in \r\n if not self.changed_cache_file(checksum)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/base.py\", line 759, in changed_cache_file\r\n self.protect(cache_info)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 448, in protect\r\n os.chmod(path, mode)\r\nOSError: [Errno 30] Read-only file system: '/Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/2c/b5d8b77eb1ac4f56dfbce8aa2f4dfd'\r\n------------------------------------------------------------\r\n\r\nHaving any troubles? Hit us up at https://dvc.org/support, we are always happy to help!\r\n\r\n$ dvc fetch -v\r\n2020-03-19 14:35:50,571 DEBUG: PRAGMA user_version;\r\n2020-03-19 14:35:50,572 DEBUG: fetched: [(3,)]\r\n2020-03-19 14:35:50,572 DEBUG: CREATE TABLE IF NOT EXISTS state (inode INTEGER PRIMARY KEY, mtime TEXT NOT NULL, size TEXT NOT NULL, md5 TEXT NOT NULL, timestamp TEXT NOT NULL)\r\n2020-03-19 14:35:50,572 DEBUG: CREATE TABLE IF NOT EXISTS state_info (count INTEGER)\r\n2020-03-19 14:35:50,572 DEBUG: CREATE TABLE IF NOT EXISTS link_state (path TEXT PRIMARY KEY, inode INTEGER NOT NULL, mtime TEXT NOT NULL)\r\n2020-03-19 14:35:50,572 DEBUG: INSERT OR IGNORE INTO state_info (count) SELECT 0 WHERE NOT EXISTS (SELECT * FROM state_info)\r\n2020-03-19 14:35:50,573 DEBUG: PRAGMA user_version = 3;\r\n2020-03-19 14:35:51,361 DEBUG: Preparing to download data from 's3:///eac-plv-dataset-iphone5-4k-okr_q4_2019'\r\n2020-03-19 14:35:51,361 DEBUG: Preparing to collect status from s3:///eac-plv-dataset-iphone5-4k-okr_q4_2019\r\n2020-03-19 14:35:51,361 DEBUG: Collecting information from local cache...\r\n2020-03-19 14:35:51,363 DEBUG: Path '../../../../../Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/2c/b5d8b77eb1ac4f56dfbce8aa2f4dfd' inode '403712'\r\n2020-03-19 14:35:51,363 DEBUG: SELECT mtime, size, md5, timestamp from state WHERE inode=?\r\n2020-03-19 14:35:51,364 DEBUG: fetched: [('1580748406000000000', '10042082', '2cb5d8b77eb1ac4f56dfbce8aa2f4dfd', '1584624925377250048')]\r\n2020-03-19 14:35:51,364 DEBUG: UPDATE state SET timestamp = ? WHERE inode = ?\r\n2020-03-19 14:35:51,365 DEBUG: cache '../../../../../Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/2c/b5d8b77eb1ac4f56dfbce8aa2f4dfd' expected '2cb5d8b77eb1ac4f56dfbce8aa2f4dfd' actual '2cb5d8b77eb1ac4f56dfbce8aa2f4dfd'\r\n2020-03-19 14:35:51,366 DEBUG: SELECT count from state_info WHERE rowid=?\r\n2020-03-19 14:35:51,366 DEBUG: fetched: [(506,)]\r\n2020-03-19 14:35:51,367 DEBUG: UPDATE state_info SET count = ? WHERE rowid = ?\r\n2020-03-19 14:35:51,368 ERROR: unexpected error - [Errno 30] Read-only file system: '/Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/2c/b5d8b77eb1ac4f56dfbce8aa2f4dfd'\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/main.py\", line 50, in main\r\n ret = cmd.run()\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/command/data_sync.py\", line 72, in run\r\n recursive=self.args.recursive,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/__init__.py\", line 28, in wrapper\r\n ret = f(repo, *args, **kwargs)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/__init__.py\", line 508, in fetch\r\n return self._fetch(*args, **kwargs)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/fetch.py\", line 52, in _fetch\r\n used, jobs, remote=remote, show_checksums=show_checksums\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/data_cloud.py\", line 82, in pull\r\n cache, jobs=jobs, remote=remote, show_checksums=show_checksums\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 382, in pull\r\n download=True,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 346, in _process\r\n download=download,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 255, in status\r\n local_exists = self.cache_exists(md5s, jobs=jobs, name=self.cache_dir)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 214, in cache_exists\r\n + (\"cache in \" + name if name else \"local cache\"),\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 216, in \r\n if not self.changed_cache_file(checksum)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/base.py\", line 759, in changed_cache_file\r\n self.protect(cache_info)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 448, in protect\r\n os.chmod(path, mode)\r\nOSError: [Errno 30] Read-only file system: '/Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/2c/b5d8b77eb1ac4f56dfbce8aa2f4dfd'\r\n------------------------------------------------------------\r\n\r\nHaving any troubles? Hit us up at https://dvc.org/support, we are always happy to help!\r\n\r\n$ dvc checkout -v\r\n2020-03-19 14:35:56,999 DEBUG: PRAGMA user_version;\r\n2020-03-19 14:35:56,999 DEBUG: fetched: [(3,)]\r\n2020-03-19 14:35:56,999 DEBUG: CREATE TABLE IF NOT EXISTS state (inode INTEGER PRIMARY KEY, mtime TEXT NOT NULL, size TEXT NOT NULL, md5 TEXT NOT NULL, timestamp TEXT NOT NULL)\r\n2020-03-19 14:35:57,000 DEBUG: CREATE TABLE IF NOT EXISTS state_info (count INTEGER)\r\n2020-03-19 14:35:57,000 DEBUG: CREATE TABLE IF NOT EXISTS link_state (path TEXT PRIMARY KEY, inode INTEGER NOT NULL, mtime TEXT NOT NULL)\r\n2020-03-19 14:35:57,000 DEBUG: INSERT OR IGNORE INTO state_info (count) SELECT 0 WHERE NOT EXISTS (SELECT * FROM state_info)\r\n2020-03-19 14:35:57,000 DEBUG: PRAGMA user_version = 3;\r\n2020-03-19 14:35:57,697 DEBUG: SELECT * FROM link_state\r\n2020-03-19 14:35:57,717 DEBUG: checking if 'short_range/pitch_4/IMG_2543.png'('{'md5': '97f8c0bdc301b6c7711b9d03a0f4da81'}') has changed.\r\n2020-03-19 14:35:57,718 DEBUG: 'short_range/pitch_4/IMG_2543.png' doesn't exist.\r\n2020-03-19 14:35:57,721 DEBUG: Path '../../../../../Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/97/f8c0bdc301b6c7711b9d03a0f4da81' inode '403738'\r\n2020-03-19 14:35:57,721 DEBUG: SELECT mtime, size, md5, timestamp from state WHERE inode=?\r\n2020-03-19 14:35:57,722 DEBUG: fetched: []\r\n2020-03-19 14:35:58,743 DEBUG: Path '../../../../../Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/97/f8c0bdc301b6c7711b9d03a0f4da81' inode '403738'\r\n2020-03-19 14:35:58,744 DEBUG: SELECT mtime, size, md5, timestamp from state WHERE inode=?\r\n2020-03-19 14:35:58,744 DEBUG: fetched: []\r\n2020-03-19 14:35:58,745 DEBUG: INSERT INTO state(inode, mtime, size, md5, timestamp) VALUES (?, ?, ?, ?, ?)\r\n2020-03-19 14:35:58,745 DEBUG: cache '../../../../../Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/97/f8c0bdc301b6c7711b9d03a0f4da81' expected '97f8c0bdc301b6c7711b9d03a0f4da81' actual '97f8c0bdc301b6c7711b9d03a0f4da81'\r\n2020-03-19 14:35:58,749 DEBUG: SELECT count from state_info WHERE rowid=?\r\n2020-03-19 14:35:58,749 DEBUG: fetched: [(506,)]\r\n2020-03-19 14:35:58,749 DEBUG: UPDATE state_info SET count = ? WHERE rowid = ?\r\n2020-03-19 14:35:58,750 ERROR: unexpected error - [Errno 30] Read-only file system: '/Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/97/f8c0bdc301b6c7711b9d03a0f4da81'\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/main.py\", line 50, in main\r\n ret = cmd.run()\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/command/checkout.py\", line 71, in run\r\n recursive=self.args.recursive,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/__init__.py\", line 28, in wrapper\r\n ret = f(repo, *args, **kwargs)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/__init__.py\", line 504, in checkout\r\n return self._checkout(*args, **kwargs)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/repo/checkout.py\", line 83, in _checkout\r\n filter_info=filter_info,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/funcy/decorators.py\", line 39, in wrapper\r\n return deco(call, *dargs, **dkwargs)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/stage.py\", line 161, in rwlocked\r\n return call()\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/funcy/decorators.py\", line 60, in __call__\r\n return self._func(*self._args, **self._kwargs)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/stage.py\", line 993, in checkout\r\n filter_info=filter_info,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/output/base.py\", line 301, in checkout\r\n filter_info=filter_info,\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/base.py\", line 980, in checkout\r\n checksum, path_info=path_info, filter_info=filter_info\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/base.py\", line 798, in changed_cache\r\n return self.changed_cache_file(checksum)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/base.py\", line 759, in changed_cache_file\r\n self.protect(cache_info)\r\n File \"/usr/local/Cellar/dvc/0.90.0/libexec/lib/python3.7/site-packages/dvc/remote/local.py\", line 448, in protect\r\n os.chmod(path, mode)\r\nOSError: [Errno 30] Read-only file system: '/Volumes/dvc/eac-plv-dataset-iphone5-4k-okr_q4_2019/97/f8c0bdc301b6c7711b9d03a0f4da81'\r\n------------------------------------------------------------\r\n\r\nHaving any troubles? Hit us up at https://dvc.org/support, we are always happy to help!\r\n```\n@florianspecker Yep, indeed the issue raises where expected. Thanks for clarifying! Will try to prepare a patch for it today along with a new release.\n@efiop great, thanks a lot for the quick response!":1,"Hi @gamis !\r\n\r\nLooks like shutil.move properly fallen back to copy&delete logic but then failed to delete the source file https://github.com/python/cpython/blob/117830de332c8dfbd9a437c0968e16e11aa7e6a1/Lib/shutil.py#L581 What are the permissions on `dvcregression\\TC3-096.oct`, is it read-only?\nHi @efiop , note that the cache file never gets renamed from it's temporary form, with the random suffix. I confirmed that the cache file is complete; it has the same sha-256 as the original file.\r\n\r\nAlso, TC3-096.oct was originally not read-only but it appears dvc changes it to read only at some point.\n@gamis Indeed, we make it read-only before the move, to make it atomic. https://github.com/iterative/dvc/blob/77bc27799b683bb2d77254b0911b1449989ac211/dvc/utils/fs.py#L111 Strange that we didn't catch it in our windows tests, need to take a closer look...\nI have now tried with a more recent python version (3.8.6, i had been using 3.7.2), and same problem.\r\nI even tried just making the cache on a different local drive, not even a network mount, and it fails.\r\nSo definitely something wrong with dvc knowing when it needs to copy rather than just rename.\n@gamis Makes sense. The regression is likely in https://github.com/iterative/dvc/pull/4832 . Looking into it.\n@gamis Ok, so the issue is that after #4832 we started testing access with that chmod, which also leaves the file with incorrect permissions in case of a failure. Need to handle it gracefully. \nDoes that explain why the cache file isn't being renamed from its temp form to the final form?\n@gamis Yeah, it fails to unlink the temporary file, because on windows if file is read-only - you can't delete it.\n@gamis We've adjusted this in the upstream (future 2.0) and reverted the PR that has caused this bug in 1.11.x. 1.11.12 will be out on pypi in a few minutes. Please give it a try and let us know if that works for you. Thanks! :pray: \nSorry to say, that didn't fix it! \r\n\r\nOn a separate thread of discussion, I've mentioned that we recently abandoned trying to use this shared cache over the network, instead using it as a \"local\" remote. But this same Windows issue persists. \r\n\r\nIf I call `dvc push -v -r z`, I get the output below.\r\n\r\n```bash\r\n2021-01-25 08:55:31,080 DEBUG: Check for update is disabled.\r\n2021-01-25 08:55:31,096 DEBUG: fetched: [(3,)]\r\n2021-01-25 08:57:50,768 DEBUG: Assuming 'H:\\researchdata\\.dvc\\cache\\2b\\95ceac01304e1ce8511f59b07c5db2.dir' is unchanged since it is read-only\r\n2021-01-25 08:57:50,768 DEBUG: Assuming 'H:\\researchdata\\.dvc\\cache\\2b\\95ceac01304e1ce8511f59b07c5db2.dir' is unchanged since it is read-only\r\n2021-01-25 08:57:50,768 DEBUG: Assuming 'H:\\researchdata\\.dvc\\cache\\15\\393c7590870fe4b29f4b6836a38c98.dir' is unchanged since it is read-only\r\n2021-01-25 08:57:50,768 DEBUG: Assuming 'H:\\researchdata\\.dvc\\cache\\15\\393c7590870fe4b29f4b6836a38c98.dir' is unchanged since it is read-only\r\n2021-01-25 08:57:50,784 DEBUG: Preparing to upload data to 'Z:\\researchdata'\r\n2021-01-25 08:57:50,784 DEBUG: Preparing to collect status from Z:\\researchdata\r\n2021-01-25 08:57:50,847 DEBUG: Collecting information from local cache...\r\n2021-01-25 08:53:19,893 DEBUG: cache 'H:\\researchdata\\.dvc\\cache\\81\\466ad7a5e5d242552ae7e170b65bf6' expected 'HashInfo(name='md5', value='81466ad7a5e5d242552ae7e170b65bf6', dir_info=None, size=None, nfiles=None)' actual 'None'\r\n# the above is repeated many times, because I haven't pulled the full dvc repo down to this local cache.\r\n\r\n2021-01-25 08:53:20,080 ERROR: unexpected error - [WinError 6] The handle is invalid: 'Z:\\\\researchdata\\\\29\\\\b21cb5993462d6e33980e361652a6b'\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\main.py\", line 90, in main\r\n ret = cmd.run()\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\command\\data_sync.py\", line 50, in run\r\n processed_files_count = self.repo.push(\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\repo\\__init__.py\", line 54, in wrapper\r\n return f(repo, *args, **kwargs)\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\repo\\push.py\", line 35, in push\r\n return len(used_run_cache) + self.cloud.push(used, jobs, remote=remote)\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\data_cloud.py\", line 65, in push\r\n return remote.push(\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\remote\\base.py\", line 56, in wrapper\r\n return f(obj, *args, **kwargs)\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\remote\\base.py\", line 432, in push\r\n return self._process(\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\remote\\base.py\", line 327, in _process\r\n dir_status, file_status, dir_contents = self._status(\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\remote\\base.py\", line 175, in _status\r\n self.hashes_exist(\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\remote\\base.py\", line 132, in hashes_exist\r\n return indexed_hashes + self.cache.hashes_exist(list(hashes), **kwargs)\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\cache\\local.py\", line 47, in hashes_exist\r\n return [\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\cache\\local.py\", line 55, in \r\n if not self.changed_cache_file(\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\cache\\base.py\", line 368, in changed_cache_file\r\n self.tree.protect(cache_info)\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\tree\\local.py\", line 301, in protect\r\n self.chmod(path_info, self.CACHE_MODE)\r\n File \"g:\\temp\\venvs\\dvc_regression\\lib\\site-packages\\dvc\\tree\\local.py\", line 249, in chmod\r\n os.chmod(path, mode)\r\nOSError: [WinError 6] The handle is invalid: 'Z:\\\\researchdata\\\\29\\\\b21cb5993462d6e33980e361652a6b'\r\n------------------------------------------------------------\r\n2021-01-25 08:53:20,142 DEBUG: Version info for developers:\r\nDVC version: 1.11.12 (pip)\r\n---------------------------------\r\nPlatform: Python 3.8.6 on Windows-10-10.0.18362-SP0\r\nSupports: http, https\r\nCache types: hardlink, symlink\r\nCaches: local\r\nRemotes: local, ssh, ssh, local\r\nRepo: dvc, git\r\n\r\n```\n@gamis The move was successful, but `protect()`-ing wasn't, so this is a new issue. We were thinking of just ignoring protect() errors, as they are not fatal, we just try to make the file read-only for some additional safety. Still, pretty odd that chmod is failing like that, might be a filesystem limitation. Could you try `pip install psutil` and then run `dvc doctor` again and show us the output for it, please?\nThis is strange because a) it worked in 1.7.2, and b) If I check that cache file on Z, it is indeed read-only for user, group, and other.\n```bash\r\nDVC version: 1.11.12 (pip)\r\n---------------------------------\r\nPlatform: Python 3.8.6 on Windows-10-10.0.18362-SP0\r\nSupports: http, https\r\nCache types: hardlink, symlink\r\nCache directory: NTFS on H:\\\r\nCaches: local\r\nRemotes: local, ssh, ssh, local\r\nWorkspace directory: NTFS on H:\\\r\nRepo: dvc, git\r\n```\nThis happens on `dvc pull` as well. Not sure why dvc should be trying to modify anything on the remote cache on a pull.\n@gamis It checks if the cache file is intact and tries to set it read-only, so that it doesn't need to check it next time. We could also consider supporting `verify` config option for that, to make dvc skip the verification for those cache files for trusted local remotes (it skips it for most of other remote types by default).\nThat seems like a good option. If other remote types skip it, and people would generally expect a `pull` to not change remote in anyway, then skipping seems appropriate to me. \n@gamis Besides the question of chmoding on remote, I'm still surprised that the chmod error is so odd. It might affect people that have their cache on samba. Leaning towards ignoring such errors for `protect()`, as they are really not fatal.":2,"@ematvey Regarding the gitignore, it is already placed right beside the output file:\r\n```\r\nmkdir -p a/b\r\necho foo > a/b/foo\r\ndvc add a/b/foo\r\n```\r\nwill create a/b/.gitignore. Maybe I've misunderstood your question?\nAbout the wdir and paths in that case, you are right, it is incorrect. Looks like we haven't noticed this bug, but it used to be(and was intended for those precise reasons of portability) to make wdir . and path `foo`.\nI might be mistaken about `.gitignore`.":1,"`allow_none` attribute for field `a` remains `False`. Possible fix may be in pydantic/fields.py:566:\r\n```\r\nif is_union_origin(origin):\r\n types_ = []\r\n for type_ in get_args(self.type_):\r\n if type_ is NoneType or type_ is Any or type_ is object:\r\n if self.required is Undefined:\r\n self.required = False\r\n self.allow_none = True\r\n continue\r\n types_.append(type_)\r\n```\n@mykhailoleskiv That almost works, except it doesn't add `Any` or `object` to `types_` (which eventually becomes `ModelField.sub_fields`). So `M(a=\"abcd\")` or something like that no longer works. I'll open a PR":1,"@gthb Thanks for reporting this! Looks like we are not using post-checkout parameters that are being passed to us by git and are indicating if we are checking out a specific file or a branch/tag/etc. https://git-scm.com/docs/githooks#_post_checkout We need to adjust our hook accordingly.\n@gthb Looks like the third parameter is just what we need. You could try modifying your git hook by hand to see if that will do the job for you. If it does, please let us know, and maybe consider submitting a PR, we really appreciate your contributions :slightly_smiling_face: If it doesn't work, please ping us as well.":1,"Surely as simple as implementing `__getstate__` and `__setstate__` on `ValidatoinError`?\r\n\r\nIf it is that simple, PR welcome to implement it.\nI don't believe so. I did spend a few hours trying to do just that and couldn't get it to work. Iirc, the problem with that approach is that `__setstate__` isn't called until after the object's `__init__` is called. So the error has already occurred and you don't get a chance to fix things.\r\n\r\nReading the pickle docs, it seemed like `__getnewargs_ex__` might be a way to get that to work but i failed at getting that to work too. I'm not sure, though, if i just didn't/don't understand getnewargs or if it turned out that exceptions are specialcased already and that specialcasing was interfering (in one of the python bug reports, ncoghlan noted that there were specialcases in exceptions which perhaps no longer worked with the new ways that picked worked)\r\n\r\n\nMy python.org bug from earlier was a duplicate. I've closed it. This is the older bug: https://bugs.python.org/issue27015 There's a cpython PR which I confirmed would fix at least the mandatory keyword args. It is currently awaiting another review from a python core developer: https://github.com/python/cpython/pull/11580\nHi @abadger I just had a look at it and I think I have something working.\r\nI'm opening a PR! Please tell me if it works with your whole example":1,"Found the root cause of this. This happens because of the optimization that we do if the file is empty (i.e. 0 bytes size). We never create a hardlink for the file with 0 bytes size and therefore it fails when we try to verify if the hardlink was created.\r\n\r\nhttps://github.com/iterative/dvc/blob/a9bc65ee1f0446de766db59ad1b149de064c5360/dvc/remote/local.py#L170-L182\r\n\r\nI think, [`is_hardlink()`](https://github.com/iterative/dvc/blob/a9bc65ee1f0446de766db59ad1b149de064c5360/dvc/remote/local.py#L196) function should be aware of this optimization as well. I'll make a fix tomorrow. \r\n\r\nP.S. This only failed if the file was empty (not even a newline). So:\r\n```sh\r\necho > foo && dvc add foo # works because of extra-newline character added by `echo`\r\ntouch bar && dvc add bar # does not work as the file is empty\r\n```\r\n\r\nAlso, subsequent runs would not fail because before this error is thrown out, the file gets cached (however, deletes file from workspace), i.e.\r\n```sh\r\ndvc init --no-scm\r\ndvc config cache.type hardlink\r\ntouch foo && dvc add foo # fails and deletes foo from workspace\r\ntouch foo && dvc add foo # passes\r\n```\n@skshetry Great investigation! :pray: ":1,"Good idea, @mattlbeck!\r\n\r\nI'm curious if you see benefits beyond ease of use for doing this over inserting the output directly as a code block like:\r\n\r\n````\r\necho '```' >> report.md\r\ndvc exp show >> report.md\r\necho '```' >> report.md\r\n````\n@dberenbaum Honestly hadn't thought of placing inside of a code block. Presumably this only works with `--no-pager`?\r\n\r\nWithout having properly tested this, the only additional benefit of a `--show-md` I can think of is that it would look a bit nicer.\nGood to know. Would you be interested in either trying that workaround and letting us know how it works, or else contributing the `--show-md` option?\nNo problem, I will either submit a PR or close this ticket depending on the outcome.\nThank you! One more thought: the leftmost column condenses multiple items of info (compare to the csv output) that might be hard to show in the same way in markdown. \nAlso, no need to close the ticket. This is at least a pattern we should explicitly support for cml and other ci needs. \nRaw markdown would be more likely to benefit from some features of the platform where the table is being rendered (i.e. in Jupyter you would get rows highlighted on hover).\r\n\r\nTaking Github example below, I kind of like the `--show-md` format better.\r\n\r\nExample \"code block workaround\" (this renders awful on VSCode markdown extension preview, btw):\r\n\r\n```\r\n┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓\r\n┃ Experiment ┃ Created ┃ loss ┃ accuracy ┃ train.batch_size ┃ train.hidden_units ┃ train.dropout ┃ train.num_epochs ┃ train.lr ┃ train.conv_activation ┃\r\n┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩\r\n│ workspace │ - │ 0.26484 │ 0.9038 │ 128 │ 64 │ 0.4 │ 10 │ 0.001 │ relu │\r\n│ main │ Sep 14, 2021 │ 0.26484 │ 0.9038 │ 128 │ 64 │ 0.4 │ 10 │ 0.001 │ relu │\r\n│ 5bcd44f │ Sep 01, 2021 │ 0.25026 │ 0.9095 │ 128 │ 64 │ 0.4 │ 10 │ 0.001 │ relu │\r\n│ b06a6ba │ Aug 31, 2021 │ 0.25026 │ 0.9095 │ 128 │ 64 │ 0.4 │ 10 │ 0.001 │ relu │\r\n│ d34fd8c │ Aug 30, 2021 │ 0.30741 │ 0.8929 │ 128 │ 64 │ 0.4 │ 10 │ 0.01 │ relu │\r\n│ 02b68b7 │ Aug 29, 2021 │ 0.44604 │ 0.8483 │ 128 │ 64 │ 0.4 │ 10 │ 0.01 │ relu │\r\n│ 5337519 │ Aug 28, 2021 │ - │ - │ - │ - │ - │ - │ - │ - │\r\n└────────────┴──────────────┴─────────┴──────────┴──────────────────┴────────────────────┴───────────────┴──────────────────┴──────────┴───────────────────────┘\r\n```\r\n\r\nPure markdown:\r\n\r\n| Experiment | Created | loss | accuracy | train.batch_size | train.hidden_units | train.dropout | train.num_epochs | train.lr | train.conv_activation |\r\n|--------------|--------------|---------|------------|--------------------|----------------------|-----------------|--------------------|------------|-------------------------|\r\n| workspace | - | 0.26484 | 0.9038 | 128 | 64 | 0.4 | 10 | 0.001 | relu |\r\n| main | Sep 14, 2021 | 0.26484 | 0.9038 | 128 | 64 | 0.4 | 10 | 0.001 | relu |\r\n| 5bcd44f | Sep 01, 2021 | 0.25026 | 0.9095 | 128 | 64 | 0.4 | 10 | 0.001 | relu |\r\n| b06a6ba | Aug 31, 2021 | 0.25026 | 0.9095 | 128 | 64 | 0.4 | 10 | 0.001 | relu |\r\n| d34fd8c | Aug 30, 2021 | 0.30741 | 0.8929 | 128 | 64 | 0.4 | 10 | 0.01 | relu |\r\n| 02b68b7 | Aug 29, 2021 | 0.44604 | 0.8483 | 128 | 64 | 0.4 | 10 | 0.01 | relu |\r\n| 5337519 | Aug 28, 2021 | - | - | - | - | - | - | - | - |\r\n":1,"Currently for `cache_exists()` threaded traverse, there is a single progress bar that shows that tracks how many cache prefixes (out of 256) have been fetched. For large remotes, fetching each prefix will still be a slow operation, and it may appear to the user that dvc is hanging.\r\n\r\nSince we have an estimated size for the remote and know approximately how many objects/pages will be returned for each prefix, we can track progress for the total number of pages fetched rather than the number of prefixes fetched.":1,"Would it make sense for `type=\"dl\"` to generate:\r\n\r\n```yaml\r\nstages:\r\n train:\r\n cmd: python train.py\r\n deps:\r\n - train.py\r\n metrics:\r\n - dvclive.json:\r\n cache: false\r\n plots:\r\n - dvclive/scalars:\r\n cache: false\r\n```\r\n\r\nWhere `dvclive` would be replaced by the value passed to `--live` (if any) ?\ncc @dberenbaum @skshetry \n@daavoo Would you be up to contribute this change?\n> @daavoo Would you be up to contribute this change?\r\n\r\nIf you all agree on proposed workaround, yes\nSorry, I think I was too quick to approve this. Looking at it now, it seems more transparent to:\r\n\r\n* Make `--live` optional and off by default for both `default` and `dl`. This makes `--type dl` seem less useful since it only differs by including checkpoints, but I don't think it's bad that we are unifying and simplifying.\r\n* Make `--live` independent from whether to include `checkpoints: true`.\r\n* Make `--live` mutually exclusive with `metrics` and `plots`.\r\n* If `--live` and `--interactive` are both present, don't ask for `metrics` or `plots`.\r\n\r\nThis hopefully makes it a simple convenience to replace `metrics` and `plots` easily when using dvclive. \r\n\r\nAgain, sorry for not catching this earlier.\r\n\r\nThoughts?":1,"Your approach sounds very good to me, PR welcome.\r\n\r\nQuestions:\r\n* I assume this would not impact performance. Am I right?\r\n* Are there any situations where this could cause problems or backwards incompatibility? E.g. do we need to wait for v2? I assume not.\r\n* Are there any other places where this could help? E.g. ipython maybe? Presumably not pycharm, vscode or vim? \nWould it work to just set the `__signature__` attribute of the model `__init__`? This is enough to work with `inspect.signature`, which is used, for example, by FastAPI; I'm not sure how it relates to the other tools discussed above.\r\n\r\nIf you just need to update `__signature__`, I have a variety of examples doing this for various fastapi-related purposes that may be useful for reference.\r\n\r\nSee, for example, here:\r\nhttps://gist.github.com/dmontagu/87e9d3d7795b14b63388d4b16054f0ff#file-fastapi_cbv-py-L37-L47\r\n\r\nAlso, I have worked through most of the config-related logic for what the generated signature should look like in the mypy plugin. So that might provide a good reference, e.g., if `Config.allow_extra` is `False` or similar, and you want to be really careful.\n> I assume this would not impact performance. Am I right?\r\n\r\nThis has relatively small impact only on model creation.\r\n\r\n> Are there any situations where this could cause problems or backwards incompatibility? E.g. do we need to wait for v2? I assume not.\r\n\r\nIf we implement this right (e. g. respect `__code__` differences between python versions), no, I don't think so.\r\n\n### Question to think about: \r\nWhat if subclass of a model declares `__init__`? Would we merge it's signature, rewrite it or leave as is?\r\n```py\r\nclass MyModel(BaseModel):\r\n id: int\r\n name: str = 'John' \r\n\r\n def __init__(self, foo: int = 41, **data):\r\n self.foo = foo + 1 \r\n return super().__init__(**data)\r\n```\n> Would it work to just set the `__signature__` attribute of the model `__init__`? This is enough to work with `inspect.signature`, which is used, for example, by FastAPI; I'm not sure how it relates to the other tools discussed above.\r\n\r\n@dmontagu , just to be clear, do you suggest construct new signature from scratch and set it to `__init__`, or generate function and copy its signature to `__init__`?\nGenerate a new signature from scratch; it's not too hard using the objects from the `inspect` module.\nSounds good. Having `__signature__` is enough for FastAPI to understand right parameters. \r\n\r\nHowever, I'm not sure if it's gonna work everywhere, as `__signature__` is not present in function type by default and used only in `inspect` module.\r\n\r\nI think the best is to have all: valid `__wrapped__`, `__kwdefaults__`, `__annotations__` and `__signature__` together to make sure any kind of inspection will work properly.\r\n\nOh, looks like there's case when we can't use `__signature__`:\r\n```py\r\n if not name.isidentifier():\r\n> raise ValueError('{!r} is not a valid parameter name'.format(name))\r\nE ValueError: 'this-is-funky' is not a valid parameter name\r\n```\n@MrMrRobat \r\nI think `this-is-funky` is an invalid signature name.\r\nIs there a case to give a method an invalid name?\r\nIf I misread it, sorry. \nI'm assuming this is coming from an alias? Yeah, I'm generally against giving methods signatures based on aliases.\r\n\r\nIn general, aliases don't seem to play super nicely with existing python tools, a reason I heavily prefer having `allow_population_by_alias=True`.\r\n\r\nI think part of the problem is that `__init__` is used for *both* parsing, where JSON schema conventions are important, *and* initializing in non-parsing contexts, where python conventions are important (e.g., for type checking, auto-docs-generation, IDEs, etc.).\r\n\r\nThe `construct` method mitigates this to some extent, but I think there is still some awkwardness, e.g., if you need your validators to run on the inputs. I think I personally would prefer if `__init__` always expected non-aliases, and aliases were only relevant for the `parse` methods.\r\n\r\n(But I recognize obviously that's a huge breaking change from how things work now, so will probably never happen, and others may not feel the same way I do on this point anyway 😅.)\n> I think `this-is-funky` is an invalid signature name.\r\n> Is there a case to give a method an invalid name?\r\n\r\nNot a method, but field. \r\n\r\n>I'm assuming this is coming from an alias?\r\n\r\nIn comes from test_create_model:\r\nhttps://github.com/samuelcolvin/pydantic/blob/c71326d4a6898612597d3c647a4256f168818e30/tests/test_create_model.py#L127-L136\r\n\n@MrMrRobat \r\nWe should ignore an invalid field name to create `__signature__` when using `create_model()` with an invalid name.\r\nAlso, Someone may want to add valid field to `__sigunature___` on Model which is created by `create_model()`\r\n```python\r\nmodel = create_model('FooModel', **{'this-is-funky': (int, 1), 'bar': (int, ...)})\r\nm = model(**{'this-is-funky': '123', 'bar': '456'})\r\nassert m.dict() == {'this-is-funky': 123, 'bar': 456}\r\n\r\nprint(inspect.signature(model))\r\n# > (**data: Any) -> None\r\n\r\nprint(model(bar='789'))\r\n# > FooModel this-is-funky=1 bar=789\r\n\r\n```":1,"This sounds like a bug, but it might need to wait for v2 to be fixed since it could be considered a breaking change. \nI have a fix for this, duplicating the datetime's method of handling timedeltas. The rest of the module doesn't seem refactored to reduce duplicated code. I assume this change is is okay / better by keeping it that way?\r\n\r\nThis is my work so far: https://github.com/samuelcolvin/pydantic/commit/1d59058a2ae6f4b0f2f119b8eddc1529c5789d86":1,"please give more details on what you're asking for and how it would be useful.\r\n\r\nClosing but happy to reopen if more details are provided.":1,"thanks for reporting, agreed it should apply to all fields.\r\n\r\nCould you try on master or v1.0b2 and confirm if it's still broken?\nJust tested it on `master` and can confirm it is still broken.":1,"The reason is, by setting the `max_length` in `b` (`b: StrictBytes = Field(..., max_length=5)`), it won't be considered as `StrictBytes` anymore. then the provided value `123` will be converted to `bytes` during validation\r\n\r\n```python\r\nfrom pydantic import BaseModel, StrictBytes, Field\r\n\r\n\r\nclass Model(BaseModel):\r\n a: StrictBytes = Field(...)\r\n b: StrictBytes = Field(..., max_length=5)\r\n\r\nm = Model(a=b'arthur', b=123) # does not raise\r\nprint(m) # a=b'arthur' b=b'123'\r\n```\r\n\r\nI am not sure if is it an intended behavior or not. @samuelcolvin what is your opinion here?\nIt's an error, happy to review a PR if it's simple and submitted asap. Otherwise this is/will be fixed in V2.":1,"afaik `TypedDict` is supported by pydantic, see #760.\r\n\r\nAnything that does work will be flakey, will not perform proper validation and could break at any time.\r\n\r\nI think best to explicitly raise an error whenever a `TypedDict` is used, saying\r\n\r\n>`TypedDict` is not yet supported, see #760\r\n\nI think `TypedDict` fields were usable with pydantic==1.4, but, as far as I can tell, the above `TypeError` does occur in pydantic>=1.5. \r\n\r\nI think you're right, though, and it probably is best to raise an explicit error when `TypedDict` is used as a model field until `TypedDict` is fully supported. \nI confirm `TypedDict` were usable with pydantic 1.4\nThere's no logic for validating `TypedDict`, so while you might have been able to use them, I very much doubt it was a good idea.\r\n\r\nWe should either support them fully or raise a sensible exception explaining that they don't work.\r\n\r\nThis is an issue about raising a better error, #760 is about supporting them fully.\nI'm aware that no logic validation is currently performed by _pydantic_ and it is fine with me. \r\n\r\n> We should either support them fully or raise a sensible exception explaining that they don't work.\r\n\r\nThere is also the option of treating them as `Dict[Any, Any]` for validation (they are dict at run-time after all). That way, one could still get proper _mypy_ warnings during development and pydantic validation would just reduce to `isinstance(variable, dict)` \r\n\r\nv1.4 behaved like this:\r\n\r\n```\r\nIn []: from typing import TypedDict \r\nIn []: from pydantic import BaseModel\r\n \r\nIn []: class MyDict(TypedDict): \r\n ...: a: int\r\n \r\nIn []: class A(BaseModel): \r\n ...: d: MyDict \r\n\r\nIn []: A(d={}) \r\nOut[]: A(d={})\r\n\r\nIn []: A(d=12) \r\n---------------------------------------------------------------------------\r\nValidationError: 1 validation error for A\r\nd\r\n value is not a valid dict (type=type_error.dict)\r\n```\r\n\r\n\n> There's no logic for validating TypedDict\r\n\r\nActually TypedDict does contain validation instructions\r\n\r\n```\r\n>>> from typing import TypedDict\r\n\r\n>>> class Person(TypedDict):\r\n... name: str\r\n... age: int\r\n\r\n>>> Person.__annotations__\r\n{'name': , 'age': }\r\n```\r\n\r\nSo technically Pydantic should be able to validate dictionaries that are type hinted with `TypedDicts`\nYes, it's possible, now someone just needs to go and implement the feature. 😄 \nI currently have a working solution for this which does validation based on the TypedDict annotations as suggested. I'm currently writing test cases and will post an update when I've got full coverage.\nMy solution is here, but there are parts of it that I think could probably be done better:\r\n[https://github.com/kpberry/pydantic/commits/typed-dict-support](url)\r\n\r\n1. `TypedDict` does not support `issubclass` checks (which was the initial issue in this thread), so in order to check if a type is a subtype of `TypedDict` in `ModelField._type_analysis`, it seems like we need to check if it is a subclass of `TypedDict`'s metaclass. Unfortunately, `_TypedDictMeta` is protected in both the typing and typing_extensions module, so the mypy check won't allow it to be imported (probably for good reason). For now, I check `self.type_.__class__.__name__ == '_TypedDictMeta'`, but I think there must be a better way to do this.\r\n2. I'm not sure what the best spot is in `ModelField._type_analysis` to do the `TypedDict` subclass check. I put it before the origin checks to avoid the issue at the top of this thread, but there might be a better spot for it.\r\n3. I added each `TypedDict` value type to `self.sub_fields` in `ModelField._type_analysis`, since it fit the pattern for validating `Tuple`s, etc. However, since each value corresponds to a specific named key, I had to add a `key_name` attribute to `ModelField` with the key name in order to implement `ModelField._validate_typeddict`. I'm not sure if it's a good idea to add new attributes to `ModelField` for this, especially considering that it's a relatively niche feature.\r\n\r\nWould appreciate feedback/suggestions before trying to make a pull request.\r\n\r\nEdit: Forgot to mention, all tests are passing and all lines that I added should be covered.\r\nEdit 2: I noticed that there's an optional `total` parameter to `TypedDict`, so I added support and tests for that.":1,"Awesome! Please also check the latest state of the changes in iterative/dvc.org/pull/967 so both sides are in sync when this is addressed 🙂 \nAlso, should the file name be `list.py`? its `ls.py` now.\nAnd not sure about the `target` name for the last param... Why not `path` like in `dvc get/import`?\r\nUPDATE: Investigating in https://github.com/iterative/dvc.org/pull/1021#pullrequestreview-368410823\nAnother question here: is there an easy way to tell which files in the list are tracked by DVC and which ones by Git? This is kind of important for finding paths that can be sent to `dvc import/get`.\n@jorgeorpinel At this moment only coloring is supported as a way of the differentiating types of the files.\r\nYou may read more in [the comment](https://github.com/iterative/dvc/issues/3431#issuecomment-593560766)\n@JIoJIaJIu I think that link is just somewhat relevant. We have two things to colorize in a different way:\r\n\r\n1. DVC tracked files - e.g. we need to use a different color for a directory we did `dvc add data\\` with. Or the same for `dvc add data\\data.xml`. And we should have a flag (--outs?) to filter DVC \"outputs\" only.\r\n2. DVC-files - we can use a color scheme for links, for example (since they serve to some extent as a link)\r\n3. Git-traked .dvcignored files.\r\n\r\nThat link it probably relevant to 3 only, but most likely @jorgeorpinel was asking about 1.\r\n\r\n\n> That link it probably relevant to 3 only\r\n\r\nIn the comment I provided info about **1** and **2** scenarios, but the main issue relates to **3**, right\r\nLet me summarize it here then\r\n\r\nWe color output as `out=color`\r\n```\r\nLS_COLORS=\"out=01;37\" dvc list ...\r\n```\r\nwill color outs only\nOK thanks for the info guys, that's great to know. But since colorizing is not enabled by default (I assume) nor supported in every system, would it make sense to add another way to tell them apart? It can be as simple as a 2nd column in the output that has either `git` or `dvc`, or maybe only `out` for DVC outputs (stuff that you can `dvc import/get`). After all this was one of the main motivations for `dvc list` in the first place.\n@jorgeorpinel what is the motivation for you to distinguish DVC-tracked and Git-tracked files by default? The whole point is to make the whole experience with get/import/list the same for all files as much as possible.\nTrue, I forgot you can get/import Git-tracked files now!":1,"Hi @lefos99 !\r\n\r\nCould you show the contents of `annotator/Annotation_0_hotspot_0.json.dvc`, please? \nHi @efiop \r\n\r\nWhich file do you mean? I don't have such a file in this directory.\r\n\r\n**Update:**\r\nSorry now I saw the confusion, I fixed my initial comment. (Sorry I had to hide some sensitive information)\n@lefos99 Thanks! Makes sense now! Ok, I'm able to reproduce with:\r\n```\r\n#!/bin/bash\r\n\r\nset -e\r\nset -x\r\n\r\nrm -rf myrepo\r\nmkdir myrepo\r\ncd myrepo \r\ngit init\r\ndvc init\r\nDATA=/tmp/$(uuidgen)\r\necho foo > $DATA\r\nmkdir data\r\nln -s $DATA data/foo\r\ndvc add data/foo\r\ndvc add data/foo\r\n```\r\nlooks like our dag check broke for symlinks that are in a repo subdir :scream: Bumping up the priority...\n@efiop, I see it working as expected.\r\n\r\nThe first `dvc add data/foo` adds to the cache and links from the cache (by default, here with the `copy` replacing the symlink).\r\nAlso, when the path is symlink, dvc creates stage file in the working directory as `foo.dvc` instead of `data/foo.dvc`.\r\n\r\nThe second `dvc add data/foo` tries to create the `.dvc` file in `data/foo.dvc` as `data/foo` is no longer a symlink, and fails as there's already a different stage.\r\n\r\nWe could definitely fix this by making `contains_symlink_up_to` smarter (here :arrow_down:):\r\nhttps://github.com/iterative/dvc/blob/f0a729df4d6ae259f3b40be32ea2f9f6be0aa3db/dvc/utils/fs.py#L71\r\n\r\nBut, this whole different link strategy in symlink/non-symlink case will for sure bite us somewhere else. More research might be needed to fix this issue to fix repeated `dvc add` calls (if this even is an issue). \r\n\nShould note that the original issue (#1648) which `contains_symlink_up_to` was supposed to address appears to be broken again (for new git related reason) in current master:\r\n\r\n```\r\n...\r\n2020-10-23 17:29:45,759 ERROR: /Users/pmrowla/git/scratch/test-4654/dvc_example/my_repo/data/data_file\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/Users/pmrowla/git/dvc/dvc/command/add.py\", line 17, in run\r\n self.repo.add(\r\n File \"/Users/pmrowla/git/dvc/dvc/repo/__init__.py\", line 51, in wrapper\r\n return f(repo, *args, **kwargs)\r\n File \"/Users/pmrowla/git/dvc/dvc/repo/scm_context.py\", line 4, in run\r\n result = method(repo, *args, **kw)\r\n File \"/Users/pmrowla/git/dvc/dvc/repo/add.py\", line 90, in add\r\n stage.save()\r\n File \"/Users/pmrowla/git/dvc/dvc/stage/__init__.py\", line 386, in save\r\n self.save_outs(allow_missing=allow_missing)\r\n File \"/Users/pmrowla/git/dvc/dvc/stage/__init__.py\", line 398, in save_outs\r\n out.save()\r\n File \"/Users/pmrowla/git/dvc/dvc/output/base.py\", line 263, in save\r\n self.ignore()\r\n File \"/Users/pmrowla/git/dvc/dvc/output/base.py\", line 243, in ignore\r\n self.repo.scm.ignore(self.fspath)\r\n File \"/Users/pmrowla/git/dvc/dvc/scm/git.py\", line 195, in ignore\r\n entry, gitignore = self._get_gitignore(path)\r\n File \"/Users/pmrowla/git/dvc/dvc/scm/git.py\", line 181, in _get_gitignore\r\n raise FileNotInRepoError(path)\r\ndvc.scm.base.FileNotInRepoError: /Users/pmrowla/git/scratch/test-4654/dvc_example/my_repo/data/data_file\r\n------------------------------------------------------------\r\n```":1,"Hello @umesh-timalsina \r\nThis is a known \"issue\", which may or may not be changed in v2. It reminds me of #265, which could help you. I'm currently not with my computer sorry\nHello @PrettyWood. Thanks for the response. I will check the issue out.":1,"Can reproduce. Taking a look\nWhen serializing the ParamsDependency to the lock file (`dvc.lock`), the tuple gets converted to a list (because `dvc.lock` is in YAML format).\r\n\r\nOn the subsequent `dvc repro` execution, the value in the params file gets correctly loaded as a tuple but compared against the serialized list (https://github.com/iterative/dvc/blob/main/dvc/dependency/param.py#L127), always reporting it as modified.":1,"I'll take it. @efiop If you have any hints to make work more efficient, I'll grateful :)":1,"I don't think we want more config options unless absolutely required.\r\n\r\nThe solution here is to allow `schema_extra` to be a function which can mutate the schema, #889. Happy to accept a PR for that. ":1,"Looked at this and I see several issues with code, most of which also affect performance:\r\n\r\n1. `DvcIgnore*` classes designed around using them in `walk()`, which leads to excessively complex code handling check for full path.\r\n2. All the complexity above is added to `CleanTree`, which makes it go into `DvcIgnore*` domain. It should really look like:\r\n ```python\r\n def isfile(self, path):\r\n return self.tree.isfile(path) and not self.dvcignore.match_file(path)\r\n ```\r\n3. `DvcIgnore*` hierarchy forces sequential check against all rules and regexes. This might be optimized by constructing single structure, i.e. a big regex or a prefix tree. This is complicated by the fact that dvcignores may contain negations though.\r\n4. `pathspec` library also checks against all regexes sequentially, which adds to 3.\r\n5. **High level issue**. I suspect that we recheck files found via walk, so we run ignores twice. It needs to be estimated whether this is an issue.\nThank you @Suor \r\n\r\nIt seems consistent with what I experienced. The more I add lines in `.dvcignore`, the slowest `dvc status` is.\n> Looked at this and I see several issues with code, most of which also affect performance:\r\n> \r\n> 1. `DvcIgnore*` classes designed around using them in `walk()`, which leads to excessively complex code handling check for full path.\r\n> 2. All the complexity above is added to `CleanTree`, which makes it go into `DvcIgnore*` domain. It should really look like:\r\n> ```python\r\n> def isfile(self, path):\r\n> return self.tree.isfile(path) and not self.dvcignore.match_file(path)\r\n> ```\r\n> 3. `DvcIgnore*` hierarchy forces sequential check against all rules and regexes. This might be optimized by constructing single structure, i.e. a big regex or a prefix tree. This is complicated by the fact that dvcignores may contain negations though.\r\n> 4. `pathspec` library also checks against all regexes sequentially, which adds to 3.\r\n> 5. **High level issue**. I suspect that we recheck files found via walk, so we run ignores twice. It needs to be estimated whether this is an issue.\r\n\r\n@Suor \r\nI also have an interest in this issue and had looked into the code. And here is my solution to some of the issues above:\r\n1. For point (1), `walk()` should use information from `DvcIgnore`. For example, exiting the `.git` directory at the beginning of the iteration.\r\n2. For point (3), according to [this article](https://www.freecodecamp.org/news/regex-was-taking-5-days-flashtext-does-it-in-15-minutes-55f04411025f/), tried tree or automaton would only perform better if the number of ignored-expressions was greater than several hundred. \r\n3. For point (4), In `pathspec` \r\n```python\r\n\tmatched = False\r\n\tfor pattern in patterns:\r\n\t\tif pattern.include is not None:\r\n\t\t\tif file in pattern.match((file,)):\r\n\t\t\t\tmatched = pattern.include\r\n\treturn matched\r\n```\r\nIt should stop if any of the patterns matched the file. \r\n@courentin\r\nAnd I think this is the main reason that It gets slower as the ignore list grows. \r\n\r\nI'd like to try to solve these points this weekend.\n@karajan1001 \r\n1. There is no issue with walk, the ignored dir won't we traversed. The issue is when we need to check whether `some/path/abc/file.txt` is ignored we need to build all of its parents and test them in an unnatural way.\r\n\r\n> It should stop if any of the patterns matched the file.\r\n\r\nSo for very common case that file is not ignored it will match it against all of those.\n> 1. There is no issue with walk, the ignored dir won't we traversed. The issue is when we need to check whether `some/path/abc/file.txt` is ignored we need to build all of its parents and test them in an unnatural way.\r\n\r\nThank you\r\nDos `Dvcignore` support expressions like `../*.csv` which influences files outside the current path?\r\n\r\n\nHaha, underrated the difficulty of it. Only written the benchmark (https://github.com/iterative/dvc-bench/pull/30).\r\n\r\n\r\n\r\n> 5\\. **High level issue**. I suspect that we recheck files found via walk, so we run ignores twice. It needs to be estimated whether this is an issue.\r\n\r\n@Suor, According to @courentin 's call graph in #3867 it only runs once.\n@efiop @pared \r\nI have a question how could I test unmerged changes using `dvc-bench`.\n@karajan1001 \r\nPrepared PR explaining this in README, please take a look and review:\r\nhttps://github.com/iterative/dvc-bench/pull/41\n> Dos Dvcignore support expressions like ../*.csv which influences files outside the current path?\r\n\r\n@karajan1001 No, same as gitignore, it cannot look back in the tree.\n> README\r\n\r\nThank you\n@pared Should we keep this open or are we fully done here?\n@efiop sorry, autoclose. Seems to me we should leave it open. The issue potentially is still present in the case of multiple `dvcignore` files. Also, points noted by @Suor (https://github.com/iterative/dvc/issues/3869#issuecomment-635854916) still need to be addressed.\r\n\r\nEDIT:\r\n#3967 addresses 2 points out of 5 (addressed points are 3 and 4)":1,"Hello @kataev \r\nI agree the behaviour is not the expected one and for me this is a bug.":1,"Is there a nice RFC explicitly stating a limit for port? If so I guess we could add a check, but this is not a bug, just a limit of the current validation.\r\n\r\nYou can implement a much cleaner validator:\r\n\r\n```py\r\nclass Test(BaseModel):\r\n url: stricturl(allowed_schemes=['tcp'])\r\n\r\n @validator('url')\r\n def validate_port(cls, v):\r\n if int(v.port) > 2**16:\r\n raise ValueError('port overflow')\r\n return v\r\n```\nHi,\r\n[rfc 793](https://tools.ietf.org/html/rfc793#section-3.1) explain ports are 16 unsigned bits\r\n\r\nYour validator will fail if port is missing, with tcp protocol that doesn't really make sense but with http(s) protocol is does:\r\nexample\r\n\r\n`http://example.com` the validator will fail because v.port is None, int will raise a TypeError, but thanks for the tips I totally forgot port attribute 🤦\r\n\r\n\r\n```py\r\nclass Test(BaseModel):\r\n url: stricturl(allowed_schemes=['tcp'])\r\n\r\n @validator('url')\r\n def validate_port(cls, v):\r\n if v.port is None:\r\n return v\r\n elif int(v.port) > 2**16:\r\n raise ValueError('port overflow')\r\n return v\r\n```\r\n(the tcp protocol is a bad showcase about default port)\nGood point, `if v.port is not None and int(v.port) > 2**16` would be better. (is `port=0` allowed?)\r\n\r\n> rfc 793 explain ports are 16 unsigned bits\r\n\r\nPR welcome to add this check to all URL validation, I know technically it's a breaking chance, but I think a reasonable one, given that a higher port is invalid anyway.\nthe port 0 is tagged as reserved and should not be usable as such since most routers will reject it.\r\nIn a other hand in a lot of APIs port 0 mean \"use the first avaiable socket\" so I would say yes.\r\n\r\nI'll work on a PR within a few days (this weekend I hope if I have time)\nmakes sense to allow 0, also makes the check easier.\r\n\r\n> I'll work on a PR within a few days (this weekend I hope if I have time)\r\n\r\nThanks so much\nI'm currently working on a PR,\r\nlooking at the sources of networks I have few questions about some design choices\r\n\r\nShould I open news issues for each or asking here is fine ?\r\n":1,"Hi @art049 \r\nYes you're right. Since #1971 has been solved, this is the expected behaviour but it seems we update only the current field and not all. I can make a quick fix if you want should take couple of minutes":1,"I recently ran into this as well, and it certainly was unexpected to me. I ended up writing a small patch to the metaclass which removes overridden validators post-hoc. I post it below in case it is useful to others.\r\n\r\n```python\r\nfrom pydantic import BaseModel\r\nfrom pydantic.main import ModelMetaclass\r\n\r\ndef remove_overridden_validators(model: BaseModel) -> BaseModel:\r\n \"\"\"\r\n Currently a Pydantic bug prevents subclasses from overriding root validators.\r\n (see https://github.com/samuelcolvin/pydantic/issues/1895)\r\n This function inspects a Pydantic model and removes overriden\r\n root validators based of their `__name__`.\r\n Assumes that the latest entries in `__pre_root_validators__` and\r\n `__post_root_validators__` are earliest in the MRO, which seems to be\r\n the case.\r\n \"\"\"\r\n model.__pre_root_validators__ = list(\r\n {validator.__name__: validator\r\n for validator in model.__pre_root_validators__\r\n }.values())\r\n model.__post_root_validators__ = list(\r\n {validator.__name__: (skip_on_failure, validator)\r\n for skip_on_failure, validator in model.__post_root_validators__\r\n }.values())\r\n return model\r\n\r\nclass PatchedModelMetaclass(ModelMetaclass):\r\n def __new__(*args, **kwargs):\r\n model = ModelMetaclass.__new__(*args, **kwargs)\r\n return remove_overridden_validators(model)\r\n```\r\nThe following bit of code tests that it works as expected:\r\n```python\r\nfrom pydantic import BaseModel, root_validator\r\n\r\nclass A(BaseModel):\r\n# class A(BaseModel, metaclass=PatchedModelMetaclass):\r\n a: int\r\n @root_validator(pre=True)\r\n def pre_root(cls, values):\r\n print(\"pre rootA\")\r\n return values\r\n @root_validator(pre=False)\r\n def post_root(cls, values):\r\n print(\"post rootA\")\r\n return values\r\n\r\nclass B(A):\r\n @root_validator(pre=True)\r\n def pre_root(cls, values):\r\n print(\"pre rootB\")\r\n return values\r\n @root_validator(pre=False)\r\n def post_root(cls, values):\r\n print(\"post rootB\")\r\n return values\r\n\r\n# This prints only from the validators in B if PatchedModelMetaclass is used\r\nB(a=1)\r\n```":1,"Off the top of my head, I don't know.\r\n\r\nThat's a config setting called `copy_on_model_validation` try setting it to false and seeing if that avoids a copy that causes the problem.\n`copy_on_model_validation` didn't change things. I whittled the setup down to the minimum reproducible case:\r\n\r\n```py\r\nfrom uuid import UUID, uuid4\r\nimport sqlalchemy as sa\r\nimport sqlalchemy.dialects.postgresql as pg\r\nfrom pydantic import BaseModel as _BaseModel\r\nfrom sqlalchemy.ext.declarative import declarative_base\r\n\r\nclass CommonBase(_BaseModel):\r\n\r\n id: UUID = sa.Column(pg.UUID(as_uuid=True), primary_key=True, default=uuid4)\r\n\r\n class Config:\r\n copy_on_model_validation = False\r\n\r\nBaseModel = declarative_base(cls=CommonBase)\r\n```\r\n\r\nI also updated to pydantic 1.9.1 without change 🤔 \n1.9.1 hasn't changed `smart_deepcopy` so that wouldn't make a difference.\r\n\r\nReally I don't think it's correct for the sqlalchemy type to be raising an error on `__bool__`, but I get why they want to.\r\n\r\nI think the best solution would be to catch errors like this in `smart_deepcopy` and just fall back to using `deepcopy`, but I wonder whether even deep copy can cope with this type?\r\n\r\nSomething like \r\n\r\n```py\r\ndef smart_deepcopy(obj: Obj) -> Obj:\r\n \"\"\"\r\n Return type as is for immutable built-in types\r\n Use obj.copy() for built-in empty collections\r\n Use copy.deepcopy() for non-empty collections and unknown objects\r\n \"\"\"\r\n\r\n obj_type = obj.__class__\r\n if obj_type in IMMUTABLE_NON_COLLECTIONS_TYPES:\r\n return obj # fastest case: obj is immutable and not collection therefore will not be copied anyway\r\n try:\r\n if not obj and obj_type in BUILTIN_COLLECTIONS:\r\n # faster way for empty collections, no need to copy its members\r\n return obj if obj_type is tuple else obj.copy() # type: ignore # tuple doesn't have copy method\r\n except (TypeError, ValueError, RuntimeError):\r\n # do we really dare to catch ALL errors? Seems a bit risky\r\n pass\r\n return deepcopy(obj) # slowest way when we actually might need a deepcopy\r\n```\r\n\r\nWhat do you think?\r\n\r\nPR welcome, but I'm not sure when I'll get around to it - I'm currently flat out on pydantic-core getting ready for pydantic v2.\r\n\r\nBut at least your issue has made me think about how we deal with deepcopy in v2 😄 .\nper their [docs](https://docs.sqlalchemy.org/en/14/changelog/migration_06.html)\r\n\r\n> Code that wants to check for the presence of a ClauseElement expression should instead say:\r\n> `if expression is not None:`\r\n> ` print(\"the expression is:\", expression)`\r\n\r\nSo could this perhaps be fixed by changing the smart deep copy line to\r\n\r\n```py\r\nif obj is None and obj_type in BUILTIN_COLLECTIONS:\r\n```\r\n\r\nDoes this method expect other `obj` references that are falsy rather than `None`? if so, perhaps this:\r\n\r\n```py\r\nif (obj is None or not obj) and obj_type in BUILTIN_COLLECTIONS:\r\n```\nThat doesn't work, `None` is already covered by the `IMMUTABLE_NON_COLLECTIONS_TYPES` check above, and this line is using falsy as a proxy for empty on iterable types, so a `None` check is different.\nIsn't the immutable collection type check covering `None` on the `obj_type`, not `obj`? Isn't the raise happening on the `not obj` statement where sqlalchemy added the` __bool__` override?\nI'm not sure what you mean, but neither of your code suggestions will work. Best to try it and see what happens.\nYou were right, my train of thought would have required a quite ugly double negative to work. I was only trying to avoid catching errors, but I suspect your proposed approach is the cleanest and safest. Will submit a pr":1,"Since we will be releasing 1.0 soon, might even do that in a non-backward compatible way to simplify the logic.":1,"after a debate today with a fellow programmer I had my eyes opened to the fact that pascal case is a type of camel case... so the naming isn't wrong.\r\n\r\nI think ill submit a PR of the second option, as I'm sure I'm not the only one that would make use of the `to_lower_camel()` function.":1,"@gcoter Sounds good! Let's do that!\r\n\r\nUnfortunately, we don't have the capacity for this right now as well :slightly_frowning_face: So it might have to wait until someone has time for it. \n@gcoter Btw, i think you can already do that using your ssh config:\r\n```\r\nHost example.com\r\n ForwardAgent no\r\n```\nHi @efiop, thanks for your answer! I tried to modify `ForwardAgent` but it doesn't seem to work in my case...\nI will try to make the PR myself":1,"Hi @tommilligan \r\nThe thing is we can't do anything about it because python changes `Union[A]` into `A` at interpretation time.\r\n```py\r\nfrom typing import Union\r\nclass A: ...\r\nassert Union[A] is A\r\n```\r\nSo at runtime, pydantic has no way to know `A` was supposed to be a `Union`\nAh, that makes sense. I was about to start poking to see if the simplification was internal to `pydantic` or not, but if it's at a higher layer I'll give it up as a lost cause.\r\n\r\nWould you accept a PR noting this as an edge case in the documentation/error message?\r\n\r\nI suppose the workaround is to add a dummy second type into the union, or just to remove the discriminator until it is required.\r\n\r\nFor documentation forward compatibility, we were using [openapi-schema-pydantic](https://github.com/kuimono/openapi-schema-pydantic), which now collides with the `discriminator` Field property nicely. But that's not your problem! I'll file a bug over there now.\nDocumentation PRs are always welcome :)":1,"@pared\r\nHi, I'd like to look into it this weekend. And now are asking for some advice.\r\n\r\nAccording to [Git](https://git-scm.com/docs/git-check-ignore).\r\nIt has `-q`, `-v`, `--stdin`, `-z`, `-n`, `--no-index` arguments. \r\n\r\nAmong them, I consider `-v`(show details, the last pattern matches) `--no-index` (without this ignore-check would ignore those files already added in cache) most required.\r\n\r\nThen, `--stdin` is very helpful at debugging, `-q` is a default parameter. `-z` is used when we are about to output to files or use it in scripts. At last `-n` is an optional I think but is easy to implement.\r\n\r\nSo, at first, I'd like to implement `-v`, `-q`, `-n`, and `--no-index` first, and `--stdin` at the next step. \r\nBesides these, Maybe `-a, --all` which shows all of the patterns matches the files instead of only the last one are useful?\r\n\r\n\n@karajan1001 \r\nIll try to provide as much info as I am able:\r\n1. `-q` - I think that if we are to use the logger to display the information, the only important thing to implement is proper return code on command. `quiet` behavior is handled by [default parser](https://github.com/iterative/dvc/blob/5ed832077df8de445c1bac20fadc93b8d9d14d31/dvc/cli.py#L139)\r\n\r\n2. `--no-index` - I am not sure whether this one is vital. In my understanding, in git, it can be used to verify whether some element would be ignored if it was not already added to the index. So there is a kind of superiority of index over the workspace. In DVC however, if you put something into the `.dvcignore`, and run `dvc status`, DVC will pick it up and report your dir as changed, even if it is not. But from the point of view of DVC it did change because you \"deleted\" particular file. So I think in this point DVC differs from git in its operating principles and might not require `--no-index` option. Though if you have some other perspective, please share, I might be missing something here.\r\n\r\nSo, for now, if you want to implement a few features, I think `-v`, `-q`, `-n` will already be super-useful. Just a note though, please don't consider it as a must. It will be awesome even if you were able to do just `dvc check-ignore` returning `1/0`.\r\n\r\nIn next steps `--stdin` would be great and nicely matching other command-line helpful options, like `--show-vega` for `plots` command. `-a` also sounds awesome.\r\n\r\nPlease ping me if you want me to extend some parts.\n@pared \r\nThank you, \r\n1. For `-q`. I agreed, the only thing need to take care is the return number, non-zero return number may cause some report process. In Git\r\n> 0:One or more of the provided paths is ignored.\r\n> 1:None of the provided paths are ignored.\r\n\r\n2. For `--no-index`, I had tried on my computer, seems that it is vital for Git but not for DVC. The reason lies in the differences in treating ignored files between Git and DVC. In Git `.gitignore` changed didn't influence files existing in the cache, Ignore check skips them. But for DVC, ignored check on files whether added to cache or not. \r\n\r\n----- \r\nIn conclusion without `--no-index` Git can't check files already in the cache, while for DVC actually we are always in a `--no-index` mode.\r\n\r\nSo, two steps:\r\n\r\n- [ ] basic functions `-v`, `-q`, `-n`\r\n- [ ] advanced functions `-a`, `--stdin`\r\n\r\n":1,"@RomanVeretenov Does it happen after specific dvc command?\n@RomanVeretenov Could you show us `git check-ignore $(pwd)/.dvc/tmp` output, please? Is there anything special about your repo location?\n@efiop it happens on older repos. Try to clone our get started and run a few commands (pull, checkout, etc).\n> @RomanVeretenov Could you show us `git check-ignore $(pwd)/.dvc/tmp` output, please? Is there anything special about your repo location?\r\n\r\noutput is empty\n@shcheklein It does, but it only adds the missing `/tmp`(as it is supposed to be), not adding it multiple times as in this issue. I am not able to reproduce the issue.\r\n\r\n@RomanVeretenov Ok, that is bad, it means that git is not seeing that `/tmp` in `.gitignore` or not understanding it right. Is there anything special about the location that your repo is located in?\r\n\r\nCould you show:\r\n```\r\n$ python -c 'import os; print(os.getcwd())'\r\n$ python -c 'import os; print(os.path.realpath(os.getcwd()))'\r\n```\r\nplease?\r\n\r\nSo far this looks like there is something wrong with your environment or git specifically.\n```\r\n$ python -c 'import os; print(os.getcwd())'\r\n/home/ds\r\n$ python -c 'import os; print(os.path.realpath(os.getcwd()))'\r\n/home/ds\r\n```\r\n\r\nyep, the repo root is '/home/ds' =)\n@RomanVeretenov Using `/home/ds` is questionable, but nothing functionally wrong about it :slightly_smiling_face: Looks alright. Maybe you have `.dvc/` gitignored somewhere or something? \r\n\r\nNeed to check your `.gitignore`s in `/home` and `/home/ds` to see if they are sane(maybe they have something like `!!.dvc/tmp`, `!!tmp` or something that forces git to not ignore `.dvc/tmp`. Would also check global gitignore rules, if you have those.\nThere is no /home/.gitignore\r\nAlso I have no global gitignore\r\nAlso .git/info/exclude is empty\r\n\r\n```\r\n/home/ds % cat .gitignore\r\n\r\n*.jpg\r\n*.JPG\r\n*.png\r\n*.PNG\r\nextracted.csv\r\n```\r\n\r\nalso I have recursively listed all gitignore's \r\nNone of them but .dvc/.gitignore conatins tmp\r\n\r\n```\r\nfind . -name '.gitignore' -exec echo {} \\; -exec grep tmp {} \\;\r\n... here goes gitignores from all nested folders\r\n./.dvc/.gitignore\r\n/tmp\r\n/tmp\r\n./.gitignore\r\n...\r\n```\r\n\r\nEach `dvc status` call adds a /tmp to the end of .dvc/.gitignore\n@RomanVeretenov Could you try to reproduce it with a newly created dvc project? E.g.\r\n```\r\nmkdir myrepo\r\ncd myrepo\r\ngit init\r\ndvc init\r\ngit add .\r\ngit commit -m \"init\"\r\ndvc status\r\ngit status\r\n```\r\n\r\nI'm still not able to reproduce your issue, most likely there is something off with your environment, can't put my finger on anything yet.\n> @RomanVeretenov Could you try to reproduce it with a newly created dvc project? E.g.\r\n> \r\n> ```\r\n> mkdir myrepo\r\n> cd myrepo\r\n> git init\r\n> dvc init\r\n> git add .\r\n> git commit -m \"init\"\r\n> dvc status\r\n> git status\r\n> ```\r\n> \r\n> I'm still not able to reproduce your issue, most likely there is something off with your environment, can't put my finger on anything yet.\r\n\r\nIt works ok on a clean repo\r\n\r\n```\r\n ~/code/myrepo/.dvc\r\n % cat .gitignore\r\n/config.local\r\n/updater\r\n/lock\r\n/updater.lock\r\n/tmp\r\n/state-journal\r\n/state-wal\r\n/state\r\n/cache\r\n```\r\n\r\nafter executing all given commands\r\n\n> It works ok on a clean repo\r\n\r\n@RomanVeretenov So there is something with your .gitignores in the rest of the project. You'll have to take a look and see what is different between it and a clean one. It might be .gitignores somewhere, might be the fact that you are in `/home/ds` (esp if you are working as `ds` user), might be that you don't have permissions to access some files (again, if your git project is not owned by you and you don't have shared mode enabled in git config). But it is clear that this is not dvc-issue, it is your git repo and environment that are not quite right. Sorry, but you are pretty much on your own right now, as I can't put my finger on anything specific :slightly_frowning_face: \nPlease let us know how it goes. I'll close this issue for now.\n@RomanVeretenov \r\n\r\nWhen `/tmp` is already part of the `.dvc/.gitignore`:\r\n\r\nwhat does `git check-ignore \".dvc/tmp\"` return?\r\n\r\nalso, what will the script like this return in your case:\r\n\r\n```\r\nfrom dvc.scm.git import Git\r\ngit = Git()\r\ngit._ignored(\".dvc/tmp\")\r\n```\r\n\r\nI wonder if it's some yet-another-gitpython's-bug.\n@shcheklein Already checked: https://github.com/iterative/dvc/issues/3561#issuecomment-606737795 , it is not gitpython.\n@RomanVeretenov @efiop \r\n\r\n> @shcheklein Already checked: #3561 (comment) , it is not gitpython.\r\n\r\nsorry, missed that ... this is really weird!\r\n\r\n@RomanVeretenov can you just do it all manually. Save the old `.dvc/.gitignore` somewhere, create a new one with a single entry `/tmp` and and run from the project's root `git check-ignore .dvc/tmp`? \r\n\r\nDo you use submodules or any other advanced Git's stuff?\nWill re-check everything next week\nI was having this issue and discussed it with Ivan on Discord. Here is part of our conversation.\r\n\r\n> For some reason on my repo, running\r\n> echo \"test 1\" >> models/README.md\r\n> dvc add models\r\n> echo \"test 2\" >> models/README.md\r\n> dvc add models\r\n> \r\n> Is appending models to .gitignore twice. I tested it on a new repo and you're right it doesn't append it more than once. I'll try to reset dvc and see if that works\r\n\r\n\r\nI tried DVC destroy and started again. Didn't help. The only thing that worked is that I started with a clean models folder, added to dvc/git/.gitignore, and then added the data to it.\nFrom https://discordapp.com/channels/485586884165107732/563406153334128681/703247236368302091\r\n\r\n```\r\ngit init\r\nmkdir models\r\necho \"test 1\" >> models/README.md\r\ngit add models/README.md\r\ngit commit -m \"Add requirements file\"\r\ngit checkout -b b1\r\ndvc init\r\ndvc add models\r\ngit add models.dvc .gitignore\r\ncat .gitignore\r\necho \"test 2\" >> models/README.md\r\ndvc add models\r\ncat .gitignore\r\n```\r\n\r\nSo looks like we should handle it better on DVC side. I suppose @RomanVeretenov had something similar. Reopening\nOk, looks like we broke `is_tracked` during one of the rounds of optimizing dvc for a use case with many thousands of dvc-files.\r\n```\r\nPython 3.7.0 (default, Dec 26 2018, 22:48:20)\r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information\r\n>>> from dvc.scm.git import Git\r\n>>> g = Git(\".\")\r\n>>> g.is_tracked(\"tests/__init__.py\")\r\nTrue\r\n>>> g.is_tracked(\"tests\")\r\nFalse\r\n>>> g.is_tracked(\"tests/\")\r\nFalse\r\n>>> g.is_tracked(\"tests/func\")\r\nFalse\r\n>>> def func(path):\r\n... return bool(g.repo.git.ls_files(path))\r\n...\r\n>>> func(\"tests\")\r\nTrue\r\n>>> func(\"tests/\")\r\nTrue\r\n>>> func(\"tests/unit\")\r\nTrue\r\n```\r\n`func` represents an old implementation and works like charm. Need to look into it. And, more importantly, add better tests for it.\r\n\r\nEDIT: it was an older bug after all.\nBig thanks to @ammarasmro for investigating! :pray: \nGlad I could help :)":1,"@vdwees I've traced the source of this issue to [this line](https://github.com/samuelcolvin/pydantic/blob/5015a7e48bc869adf99b78eb38075951549e9ea7/pydantic/schema.py#L833).\r\n\r\nThe problem here is that `Optional[X]` is not considered by python to be a `type`. I definitely don't think the way it is currently handled is ideal, but it might be somewhat involved to fix this properly.\r\n\r\nFor now, here's a workaround:\r\n```python\r\nfrom pydantic import BaseModel, Schema, ConstrainedInt\r\nfrom typing import Optional\r\n\r\nclass ConInt3(ConstrainedInt):\r\n ge = 3\r\n\r\nclass MyModel(BaseModel):\r\n my_int: Optional[ConInt3] = Schema(...)\r\n\r\nprint(MyModel.schema())\r\n# {'title': 'MyModel', 'type': 'object', 'properties': {'my_int': {'title': 'My_Int', 'minimum': 3, 'type': 'integer'}}}\r\n\r\nprint(MyModel(my_int=2))\r\n\"\"\"\r\npydantic.error_wrappers.ValidationError: 2 validation errors for MyModel\r\nmy_int\r\n ensure this value is greater than or equal to 3 (type=value_error.number.not_ge; limit_value=3)\r\nmy_int\r\n value is not none (type=type_error.none.allowed)\r\n\"\"\"\r\n```\r\n\r\n-------\r\n\r\n@samuelcolvin I'm not sure whether it would be better to modify the `get_annotation_from_schema` function to deal with the whole `typing` introspection rigamarole (like in, e.g., `Field._populate_sub_fields`), or to just require more careful handling (like in the \"workaround\" I provided above). Thoughts?\nThanks for reporting. Definitely looks like a bug, I'll look into in and see what I can do.\nFirstly thank you both for your contributions to this library, and +1 for resolving this issue if possible.\r\n\r\n@dmontagu I have tried your workaround but I believe it has uncovered a second bug - if you look at your code snippet the schema now raises two issues when I would expect one (i.e. I assume it should not complain that the optional value is not none).\r\n\r\nFor a simpler example (using Pydantic 0.32.2)on MacOSX:\r\n\r\n```\r\n>>> from pydantic import BaseModel, PositiveInt\r\n>>> from typing import Optional\r\n>>> class ExampleModel(BaseModel):\r\n... foo: Optional[PositiveInt]\r\n...\r\n>>> ExampleModel(foo=0)\r\npydantic.error_wrappers.ValidationError: 2 validation errors for ExampleModel\r\nfoo\r\n ensure this value is greater than 0 (type=value_error.number.not_gt; limit_value=0)\r\nfoo\r\n value is not none (type=type_error.none.allowed)\r\n```\nhi @dannymilsom I'm not clear what second bug you've uncovered? Please can you elaborate.\n@dannymilsom what's happening is that it's showing you, for each option in the union, why it can't be that option. The first option was PositiveInt, which fails. The second is None, which also fails (obviously).\r\n\r\nThis is because `Optional[PositiveInt]` is basically the same as `Union[PositiveInt, None]`.\r\n\r\n@samuelcolvin I feel like maybe some of this stuff has changed in v1 anyway; not sure whether the error messages still look the same? Either way, I don't think this is a buggy error message, just a slightly confusing one.\n@dmontagu yes, you're right on all: it's not an error and it's clearer in v1.\r\n\r\nIt's also not related in anyway to this issue.":1,"Probably works because of some `Enum` class magic, but could be reworked to:\r\n```py\r\nif card_number.brand in (PaymentCardBrand.visa, PaymentCardBrand.mastercard)\r\n```\nI guess it depends if equality is acceptable compared to identity?\nI don't think magic works with is, and I'm kinda glad of that:\r\n```\r\n>>> from enum import Enum\r\n>>> class CardType(Enum):\r\n... Diners = 1\r\n... Visa = 2\r\n... MasterCard = 3\r\n...\r\n>>> CardType.Visa is (CardType.MasterCard or CardType.Visa)\r\nFalse\r\n```\nGood to confirm.\nAssuming this is resolved. \nSadly not resolved, code is still\r\n```python\r\nif card_number.brand is (PaymentCardBrand.visa or PaymentCardBrand.mastercard):\r\n```\r\n\r\nwhich won't work if `card_number.brand` is `PaymentCardBrand.mastercard`.\nYes, I was just confirming the logic is broken. @mike-hart you might want to submit a PR referencing this issue with the fix.\nThe use of `x is (y or z)` is just a mistake, it simplifies to `x is y` if y is truey or `x is z` otherwise. In this case since `PaymentCardBrand.visa` is truey we're effectively just doing `brand is PaymentCardBrand.visa`.\r\n\r\nI'll replace it with an `in` check.\r\n\r\nA bit more background on enums:\r\n* Identity checks work with the actual enum member but not with the enum value\r\n* equality checks work with both the enum member and the enum value, but **only if** the enum inherits from the correct type as well as `Enum`.\r\n\r\n```py\r\nclass PaymentCardBrand(str, Enum):\r\n amex = 'American Express'\r\n mastercard = 'Mastercard'\r\n visa = 'Visa'\r\n other = 'other'\r\n\r\nb = PaymentCardBrand.visa\r\nb is PaymentCardBrand.visa\r\n#> True\r\nb == PaymentCardBrand.visa\r\n#> True\r\n\r\nb = 'Visa'\r\nb is PaymentCardBrand.visa\r\n#> False\r\nb == PaymentCardBrand.visa\r\n#> True (This would not work with the current PaymentCardBrand which doesn't inherit from str)\r\n```\r\n\r\nI don't know if `card_number.brand` can be a string or only a `PaymentCardBrand`, but better to fix anyway.":1,"You wrote `Config.field` for model B instead of `Config.fields` hence the difference.\r\nNow you're right the output is not the expected one. I'll open a fix shortly":1,"We used to clone the repo if `rev` was provided before, nowadays we don't do that (and hope `scm` copes with that), but we don't properly initialize them. \r\nThe easy fix might be to rollback to that behaviour, or fix this initialization process. \nAnother user running into this https://discordapp.com/channels/485586884165107732/485596304961962003/831501540484972564\nIf we're counting the number of users running into this: +1 :stuck_out_tongue: ":1,"@bharathc346, we cannot have two `foreach` as it's not supported in YAML format.\r\nWe could support nested `foreach` as following:\r\n```yaml\r\n\r\nstages:\r\n build:\r\n foreach: ${models}\r\n do:\r\n foreach: ${features}\r\n do:\r\n cmd: >- \r\n python script.py\r\n --model ${item_0}\r\n --features ${item_1}\r\n```\r\n\r\nBut, as you can see, this requires too much indentation. Though, this is what we have in mind right now.\r\n\r\n@johnnychen94 suggested introducing something similar to Github's `matrix` or `travis`'s `env` in https://github.com/iterative/dvc/issues/3633#issuecomment-736508018. But, we are not sure if `matrix` is more intuitive to our users than the loops. With matrix, it'd look something like following:\r\n```yaml\r\n\r\nstages:\r\n build:\r\n matrix:\r\n - ${models}\r\n - ${features}\r\n do:\r\n cmd: >- \r\n python script.py\r\n --model ${item_0}\r\n --features ${item_1}\r\n```\r\n\r\nRegarding `item_0`/`item_1` naming, I think it might be better to do similar to what `Ansible` does.\r\nSo, `item_0` will be accessible as `item[0]` and `item_1` as `item[1]`. `item` will just be a list. Similarly, `key[0]`, `key[1]`, etc. This might make it confusing with `item` and `key` as they are now a list, but I don't have any good ideas for now.\r\n\r\nAnyway, @bharathc346 thanks for creating the issue. Let's hear some feedback for 2.0 change and given enough interests, we'll come to it. Thanks.\nI see. I like @johnnychen94 comment about introducing the github matrix. But I think any solution for nested interpolation would be great. Thanks for the detailed comment\n@bharathc346, ahh, looks like I used the wrong term. It should be nested `foreach`. \nUpvoted, that definitely something that would be nice to have.\r\n\r\nIntuitively I was doing something like that:\r\n```yaml\r\nstages:\r\n build:\r\n foreach:\r\n - fr:\r\n - a\r\n - b\r\n - en:\r\n - a\r\n - b\r\n do:\r\n ...\r\n```\r\n\r\nHowever, I feel the `matrix` is more readable. But I'd be ok with a nested `foreach` too.\nI am definitely interested in such a feature. If I were to vote on the naming convention, I would go for `matrix` as `foreach` means to me some sort of a parallelizable loop (for - each).\r\n\r\nFor the keys and items, I would stick with what is well done with the current foreach loop, meaning:\r\n\r\n - if a list is provided, then the var is `item` and since there is a matrix, `item` is a list:\r\n\r\n```yaml\r\nstages:\r\n build:\r\n matrix:\r\n - [\"us\", \"fr\", \"gb\"]\r\n - [128, 256, 512]\r\n do:\r\n cmd: >-\r\n python script.py\r\n --model ${item[0]}\r\n --features ${item[1]}\r\n```\r\n\r\n - if a dict is provided, `item` is a dict to access the values by their names (this second option improves readability and I could even vote to enforce it)\r\n\r\n```yaml\r\nstages:\r\n build:\r\n matrix:\r\n model: [\"us\", \"fr\", \"gb\"]\r\n features: [128, 256, 512]\r\n do:\r\n cmd: >-\r\n python script.py\r\n --model ${item.model}\r\n --features ${item.features}\r\n```\nHi, any updates on this feature? I would be really interested in this\nI am glad to work on it as well if we have a go from the dvc team \nThe matrix loop is a very trivial case in that it introduces a dense parameter grid. There will always be a need to run a \"filter\" on this grid to reduce unnecessary tasks.\r\n\r\nIt would be nice if DVC defines an interchangeable protocol for multiple parameterization tasks so that people can write their own task generation script when needed. The \"params.yaml\" example I provide in https://github.com/iterative/dvc/issues/3633#issuecomment-736508018 is a very naive version of the interchangeable protocol. It would be very trivial work to dump \"matrix\" definition into something like this, as long as the protocol is well-defined.\r\n\r\nhttps://github.com/iterative/dvc/issues/5795 should be also fixed to enable wide usage of this requested feature.\n### Overlap with dvc exp run \r\n\r\nThere is some overlap between this kind of parametrized loop and `dvc exp run --set-param`. The current `foreach` approach creates many separate stages, while `dvc exp run` creates many variations of the same stage. @johnnychen94 mentioned concurrent execution in another thread, which `dvc exp run` is already starting to tackle. If it were possible to programmatically define the set of experiments to run with `dvc exp run`, would people still need `matrix`?\r\n\r\n### Nested loop syntax\r\n\r\nThere seems to be consensus that the `matrix` syntax makes sense. Personally, I agree with @ClementWalter that enforcing a dict structure is most readable.\r\n\r\nAdding `matrix` might make it challenging to explain the difference between `foreach` and `matrix`. This also relates generally to how to balance complexity and flexibility in defining DAGs (see https://github.com/iterative/dvc/discussions/5646#discussioncomment-874316).\n@dberenbaum, For myself, the `foreach` is a useful functionality when I'm doing some text classifications. In this work, two billion items need to be classified into about 5000 child categories inside 50 parent categories. The project has two levels, in the first one I trained a model to classify items into parent categories, and in the second I use `foreach` to train a separate model for the classification of the child categories for each parent category. It could not be replaced by `exp` or other functionalities.\r\n\r\nSo, If we are facing a project with an even more level of structure, then a nested loop might be an unavoidable solution.\n@dberenbaum I think `matrix` or nested `foreach` still make sense, even with more dynamic parameter definitions in `dvc exp`. We use \"foreach stages\" to define different models (similar model but for different countries and products for example), while using dvc experiments for trying out different parameters for those models. There is a logical difference, and it would not be optimal to start mixing those concepts.\r\n\r\nAnd of course upvoting this feature request 👍🏻 \nHere is another upvote.\r\n\r\nEither `matrix` or nested `foreach` would help me align the structure of the pipeline to the natural structure of the experiment.\r\n\r\nAlthough I am fairly new to DVC, I keep running into limitations. I wonder if DVC might get more mileage by leveraging existing workflow systems (such as Snakemake and NextFlow) and focusing on what DVC does best, which is managing data versions. I guess I could trigger a whole Snakemake workflow as one stage of a DVC pipeline, and then specify which parameters, dependencies and outputs I want to track with DVC. I'm just wondering out loud if it might be more efficient to better integrate DVC with other workflow systems (such as automatically detecting parameters, dependencies and outputs) rather than focus on making DVC pipelines more flexible.\nThis is my most wanted feature. Just adding my perspective, as I have a collection of models I want to run on a collection of datasets. If I could nest the `foreach`, my config size would literally lose an order of magnitude in terms of line count.\n\nAs for whether we should use `matrix` instead, I think although it's visually cleaner it would have to be documented as a separate feature, which feels bloated. Whereas uses nested `foreach`, although less visually appealing, is a more intuitive extension of the concept (i.e. the first thing I tried after ready about DVC's looping). There would be no way someone would naturally change their `foreach` to `matrix`.\nJust to comment here, that the \"nested loop\" is just \"one way\", of tuning the hyper parameters automatically.\r\nThere are others (random search or sobol sequences) and ideally DVC should allow each of them.\r\n\r\nAllowing \"multiple param files\" in one go, might be a middle ground. (#7891 )\r\n\nChiming in with support for this feature! For context, see [this trickery](https://github.com/iterative/dvc/issues/7093#issuecomment-1581417045) I'm currently doing with YAML anchors to serve this same purpose, and which is at risk of being broken after 3.0 release.\r\n\r\nI think it's definitely possible that some version of `matrix` could serve my needs (what I'm trying to do is really not that complicated), thought it would depend on the details of the implementation. For example, would each \"coordinate\" in the constructed matrix need to be a simple scalar, or could one construct a matrix from e.g.\r\n\r\n```yaml\r\nmatrix:\r\n - [foo, bar]\r\n - - feature: a\r\n type: json\r\n - feature: b\r\n type: npz\r\n```\r\n\r\nI use maps like that a lot, so that feels pretty important.\r\n\r\nAs an aside, when combined with hydra, this could possibly allow for \"toggling\" certain pipeline stages. In the example below, you could set `is_enabled` to an empty list to disable that stage—assuming that `matrix` doesn't require non-empty lists—or a list of a single value to turn it on. I actually do this currently with `foreach` and empty lists/maps.\r\n```yaml\r\nstage_name:\r\n matrix:\r\n - ${features}\r\n - ${is_enabled}\r\n do:\r\n ...\r\n```\n> ```yaml\r\n> matrix:\r\n> - [foo, bar]\r\n> - - feature: a\r\n> type: json\r\n> - feature: b\r\n> type: npz\r\n> ```\r\n\r\nWould you mind continuing that example with the `do:` part? It's not entirely clear to me how it would look. I'd prefer to use the convention suggested by @ClementWalter above where each of those sections has a key to identify it, so maybe it would look more like:\r\n\r\n```yaml\r\nmatrix:\r\n - name: [foo, bar]\r\n - config: \r\n - feature: a\r\n type: json\r\n - feature: b\r\n type: npz\r\n```\r\n\r\nI _think_ that will also make it easier to reference those items in the `do:` section, but that's where I'd like to know how you think that would look.\n> Would you mind continuing that example with the `do:` part?\r\n\r\nI like the explicit naming, though you then don't need a list at the top level:\r\n```yaml\r\nstage_foo:\r\n matrix:\r\n name: [foo, bar]\r\n config:\r\n - feature: a\r\n type: json\r\n - feature: b\r\n type: npz\r\n do:\r\n cmd: echo ${item.name} ${item.config.feature}\r\n```\n@skshetry Do you think this could eventually replace `foreach` altogether, or do you think we would need to keep both?":1,"+1 for `dvc list` :slightly_smiling_face: \n@efiop @jorgeorpinel another option is to do `dvc ls` and it should behave exactly like a regular `ls` or `aws s3 ls`. Show _all_ the files (including hidden data) by specifying a Git url. This way you can control the scope to show (by not going into all directories by default) - also you can see your data in the context (other files) with an option to filter them out.\r\n\r\nOn the other hand it can be good to show just a list of all DVC outputs. It can be done with `dvc ls --recursive --outputs-only` for example.\r\n\r\nWhat do you think?\r\n\r\nIn general I'm +100 for `dvc list` or something similar :)\nClarification: `dvc list` should work on dvc repositories. E.g. `dvc list https://github.com/iterative/dvc` should list `scripts/innosetup/dvc.ico`, etc.\n@efiop can we change the priority to p1 as we discussed because it's part of the get/import epic story?\n@shcheklein Sure, forgot to adjust that. Thanks for the heads up!\n@shcheklein I'm not sure about the `dvc ls` name if all we want is to list data artifacts specifically (not all files as you propose).\r\n\r\n> ...you can see your data in the context (other files) with an option to filter them out.\r\n\r\nWill most users need this? The problem I think we need to solve is that it's hard to use `dvc get` and `dvc import` without a list of available data artifacts. Showing all Git-controlled files may or may not be useful but can be done by [existing means](https://git-scm.com/docs/git-ls-tree).\r\n\r\n> by specifying a Git url...\r\n\r\nWhat I get from that is: `dvc list [url]` where `url` can be a file system path or HTTP/SSH URL to a Git repo containing a DVC project (same as the `dvc get` and `dvc import` argument `url`) – and if omitted tries to default to the local repo (containing `pwd`).\r\n\r\n> This way you can control the scope to show (by not going into all directories by default)...\r\n\r\nI don't see how just the Git repo URL can control the scope to show. Would need a `path` argument for this I think (which could only accept directories) and/or a `--depth` option.\r\n\r\n> `dvc ls --recursive --outputs-only`\r\n\r\nToo complicated for users to remember if the main purpose of this new command is to list artifacts. Maybe `--recursive` though... (`aws s3 ls` for example isn't recursive by default, but has that option.)\r\n\r\nIn summary I think just `dvc list` or `dvc outs` is best.\r\nBut I agree we would need to consider the case where there are lots of outputs. Another solution (besides `path` arg and/or `--depth`, `--recursive` options) could be default pagination (and possibly interactive scrolling with ⬆️ ⬇️ – like a stdout pipe to `less`).\r\n[`aws s3 ls`](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html) on the other hand takes a simple approach: It has a hard limit of 1000 objects.\n> Clarification: `dvc list` should work on dvc repositories. E.g. `dvc list https://github.com/iterative/dvc` should list `scripts/innosetup/dvc.ico/etc`.\r\n\r\n@efiop yes, exactly. Extended example:\r\n\r\n```console\r\n$ dvc list https://github.com/iterative/dvc\r\nscripts/innosetup/dvc.ico\r\nscripts/innosetup/dvc_left.bmp\r\nscripts/innosetup/dvc_up.bmp\r\n```\r\n\r\nThis makes me think, maybe an output that combines the DVC-files (similar to `dvc pipeline list`) with their outputs could be most informative (providing some of the context Ivan was looking for). Something like:\r\n\r\n```console\r\n$ dvc list https://github.com/iterative/dvc\r\nscripts/innosetup/dvc.ico\t(from scripts/innosetup/dvc.ico.dvc)\r\nscripts/innosetup/dvc_left.bmp\t(from scripts/innosetup/dvc_left.bmp.dvc)\r\nscripts/innosetup/dvc_up.bmp\t(from scripts/innosetup/dvc_up.bmp.dvc)\r\n```\r\n\r\nUPDATE: Thought of yet another 😓 name for the command above: `dvc stage list --outs`\n@jorgeorpinel I think showing the full project in an ls-way is just more natural as opposed to creating our own type of output. There are few benefits:\r\n\r\n1. You don't have to use two interface to see the full picture - Github + `dvc list`. Instead you just use dvc and see the workspace. And can filter it if it's needed.\r\n2. It looks like it's beneficial for `dvc get` to handle regular Git files. Why not? It can be useful.\r\n3. Like I mentioned - single place in CLI, no need to go to Github to get the full picture.\r\n4. Just easier to understand since people are familiar with `ls` and intuitively can expect the result.\r\n\r\nThe idea is that by default it's not recursive, it's not limited to outputs only. You go down on your own if you need by clarifying path - the same way you do with ls, aws ls, etc.\r\n\r\n`ls`, `aws ls`, etc - they are all not recursive by default for a reason. In a real case the output can be huge and just won't make sense for you. People tend to go down level by level, or use recursive option when it's exactly clear that it's what they need.\r\n\r\nI really don't like making piping and less and complex interfaces part of the tool. You can always use `less` if it's needed. \n@shcheklein I concur only with benefit #1, so I can agree with showing all files but with an optional flag (which can be developed later, with less priority), not as the default behavior. Thus I wouldn't call the command `dvc ls` – since it's not that similar to GNU `ls`. I would vote for **`dvc outs`** or `dvc list`.\r\n\r\n> 2. It looks like it's beneficial for `dvc get` to handle regular Git files. Why not?\r\n\r\nBecause it can already be done with `git archive` (as explained in [this SO answer](https://stackoverflow.com/a/18331440/761963)).\r\n\r\n> 3... single place in CLI, no need to go to Github to get the full picture.\r\n\r\nThe \"full picture\" could still be achieved from CLI by separately using [git ls-tree](https://git-scm.com/docs/git-ls-tree).\r\n\r\n> The idea is that by default it's not recursive... You go down on your own if you need by clarifying path...\r\n\r\nI can also agree with this: Non-recursive by default sounds easier to implement. So, it would definitely also need an optional `dir` (path) argument and a `--recursive` option.\r\n\r\n> I really don't like making piping and less and complex interfaces part of the tool.\r\n\r\nAlso agree. I just listed it as an alternative.\r\n\r\n---\r\n\r\nAnyway, unless anyone else has relevant comments, I suggest Ivan decides the spec for this new command based on all the comments above, so we can get it onto a dev sprint.\np.s. my ~~(initial)~~ **updated** spec proposal is:\r\n\r\n```\r\nusage: dvc list [-h] [-q | -v] [--recursive] [--rev REV | --versions]\r\n url [target [target ...]]\r\n\r\npositional arguments:\r\n url URL of Git repository with DVC project to download from.\r\n target Paths to DVC-files or directories within the repository to list outputs\r\n for.\r\n```\nI think that this command (`dvc list`) should be able to get a revision, like `--rev tag1`.\r\nIts output should list the **checksums** of the output files on the given revision, as well as the **file path and name**. This would also allow an external tool to construct the right URL for downloading from a remote storage a certain revision of a datafile.\nThis may be related: https://github.com/iterative/dvc/issues/2067\n@jorgeorpinel @dashohoxha Few comments:\r\n\r\n1. `outs` is a terrible name. `output` is a super-confusing term when you want to list your `input` datasets :)\r\n2. `git-ls-tree`, other tools - that's the point you will need two or more tools to get the full picture, you will have mentally do this exercise of merging lists every time. Also, don't forget that we are dealing with a remote repo - `git-ls-tree` won't even solve this problem, unless you got clone the repo first, right?\r\n3. the same with `git-archive`. Github does not support it. And why would I want to know about it at all? Why don't we allow to download/import any files?\r\n4. Regarding `ls` similarity. In mind it's what `aws s3 ls` is doing, of `hadoop fs ls` and other similar systems. The utilize common and familiar names. And that's why it's important for it to behave the same way as ls. I don't see any reason why it can't be implemented this way.\r\n5. Re checksums - we don't want users to construct URLs because this way we expose internal logic (the way we store files) to them. Which is not a part of the public protocol (at least now). I totally understand the problem though. I think it can be part of the `dvc get` to generate an URL for you and part of the `dvc.api`.\n> (`dvc list`) should be able to get a revision, like `--rev tag1`...\r\n\r\nInteresting. Not sure its central to the command because you're not supposed to have knowledge about the repo when you use `dvc list`. But yes, it would definitely be a useful advance feature to have. Maybe also a flag to list available revisions for a specified output (e.g. `dvc list --versions model.pkl`).\r\n\r\n> 1. outs is a terrible name...\r\n\r\nYou're probably right but I'm just using our own terminology 😋 We would be listing outputs, basically. Anyway, I can agree with `dvc list` or even `dvc ls` if we go your route and list all files by default.\r\n\r\n> 3. the same with git-archive. Github does not support it...\r\n\r\nThere's special `https://raw.githubusercontent.com/user/repository/branch/filename` URLs you could construct and `wget` for GitHub. But yeah I guess it wouldn't hurt that `dvc get` can download regular files from any Git host. Opened #2515 separately.\n> Re checksums - we don't want users to construct URLs because this way we expose internal logic... I think it can be part of the `dvc get` to generate an URL for you and part of the `dvc.api`.\r\n\r\nDo you have in mind something like this?\r\n\r\n```console\r\n# return a list of data files, along with the corresponding `.dvc` file\r\ndvc get list\r\n\r\n# download a datafile\r\ndvc get \r\n\r\n# with option '-h, --show-hashes' display the corresponding hashes as well\r\ndvc get list --show-hashes\r\n\r\n# with option '-d, --show-download-url' display the corresponding URL\r\n# for direct download from the default remote storage\r\n# (maybe depends on the type of the remote storage)\r\ndvc get list --show-download-url\r\n\r\n# limit listing only to certain DVC-files\r\ndvc get list \r\n\r\n# etc.\r\n```\r\n\r\nThis would be useful too for an external tool.\r\n\r\nHowever when I think about a `dvc list` command I have in mind something like this:\r\n\r\n
\r\n\r\n1. It should work both for a local workspace and for a remote (GIT) workspace. So, it needs an option like `-g, --git-repo `, where `` may also be a local path, like `/path/to/git/repo/`. If this option is not given, default is the current workspace/repository.\r\n\r\n2. It should get some DVC-files as targets, and list the outputs of only these targets. If no targets are given, then all the DVC-files in the current directory are used as targets. If a given target is a directory, then all the DVC-files inside it are used as targets. If the option `-R, --recursive` is given as well, then the search for DVC-targets inside the given directory will be recursive.\r\n\r\n3. The output should normally include the name of the DVC-file and the path of the datafile/output. However with the option `-h, --show-hashes` it should display the corresponding hash as well.\r\n\r\n4. With the option `-d, --show-download-url` should display the URL for direct download from the default remote storage. (Maybe it should check as well whether this file is available on the remote storage, and return `NULL`/`not-available`, if it is not?). Maybe it should get the option `-r, --remote` to use another storage instead of the default one.\r\n\r\n5. With the option `--rev` it should show data from a certain revision/tag/branch. If this option is missing, than the current state of the project is used, and this should work with `--no-scm` as well.\r\n\r\n
\r\n\r\nIf you have something else in mind, please feel free to disregard these suggestions.\n> ```console\r\n> # ...list of data files, along with the corresponding `.dvc` file\r\n> dvc get list\r\n> ```\r\n\r\nThis would actually need to be `dvc get list [url]` to adhere to DVC syntax but no, we're talking about a new, separate command (`dvc list`), not a new subcommand for `dvc get`. (It also affects `dvc import`, for example.)\r\n\r\nAlso I think we've established we want the list to be of all the regular files along with \"data files\" (outputs and their DVC-files), not just the latter.\r\n\r\n> ```console\r\n> dvc get list --show-hashes\r\n> ...\r\n> dvc get list --show-download-url\r\n> ```\r\n\r\nPlease open a separate issue to decide on adding new options to `dvc get` @dashohoxha.\r\n\r\n> ```console\r\n> # limit listing only to certain DVC-files\r\n> dvc get list \r\n> ```\r\n\r\nThis could be useful actually. Optionally specifying target DVC-files to the `dvc list` command could be an alternative for limiting the number of results.\nOK I hid some resolved comments and updated the proposed command spec in https://github.com/iterative/dvc/issues/2509#issuecomment-533019513 based on all the good feedback. Does it look good to get started on?\n@jorgeorpinel sounds good to me, let's move the spec to the initial message? \r\n\r\nAlso, would be great to specify how does output look like in different cases. Again, I suggest it to be have the same way as if Git repo is a bucket you are listing files in with `aws s3 ls`. May be it would be helpful to come with a few different outputs and compare them.\nOK spec moved to initial issue comment.\r\n\r\n> specify how does output look like in different cases...\r\n\r\nI also think copying the `aws s3 ls` output is a good starting point.\r\n\r\n### Hypothetical example outputs\r\n\r\nUsing file system paths as `url`:\r\n> See full example outputs in [dvc list proposal: output examples](https://gist.github.com/jorgeorpinel/61719795628fc0fe64e04e4cc4c0ca1c) Gist.\r\n\r\n```console\r\n$ cd ~\r\n$ dvc list # Default url = .\r\nERROR: The current working directory doesn't seem to be part of a DVC project.\r\n\r\n$ git clone git@github.com:iterative/example-get-started.git\r\n$ cd example-get-started\r\n$ dvc list .\r\nINFO: Listing LOCAL DVC project files, directories, and data at\r\n /home/uname/example-get-started/\r\n\r\n 17B 2019-09-20 .gitignore\r\n6.0K 2019-09-20 README.md\r\n...\r\n339B 2019-09-20 train.dvc\r\n5.8M └ out: (model.pkl)\r\n...\r\n\r\n$ dvc pull featurize.dvc\r\n$ dvc list featurize.dvc # With target DVC-file\r\n...\r\nINFO: Limiting list to data outputs from featurize.dvc stage.\r\n\r\n367B 2019-09-20 featurize.dvc\r\n2.7M └ out: data/features/test.pkl\r\n 11M └ out: data/features/train.pkl\r\n```\r\n\r\n> NOTE: The latter case brings up several questions about how to display outputs located in a different dir as the `target` DVC-file, especially vs. using that location as the `target` instead. In this case I listed them even without `--recursive`.\r\n\r\nNote that the `.dvc/` dir is omitted from the output. Also, the dates above come from the file system, same as `ls`. In the following examples, they come from git history.\r\n\r\nUsing network Git `url`s:\r\n> See full example outputs in [dvc list proposal: output examples](https://gist.github.com/jorgeorpinel/61719795628fc0fe64e04e4cc4c0ca1c) Gist.\r\n\r\n```console\r\n$ dvc list git@github.com:iterative/example-get-started.git # SSH URL\r\n 17B 2019-09-03 .gitignore\r\n...\r\n339B 2019-09-03 train.dvc\r\n5.8M └ out: model.pkl\r\n\r\n$ dvc list https://github.com/iterative/dataset-registry # HTTP URL\r\n1.9K 2019-08-27 README.md\r\n160B 2019-08-27 get-started/\r\n128B 2019-08-27 tutorial/\r\n\r\n$ dvc list --recursive https://github.com/iterative/dataset-registry tutorial # Recursive inside target dir\r\n...\r\nINFO: Expanding list recursively.\r\n\r\n 29B 2019-08-29 tutorial/nlp/.gitignore\r\n178B 2019-08-29 tutorial/nlp/Posts.xml.zip.dvc\r\n 10M └ out: tutorial/nlp/Posts.xml.zip\r\n177B 2019-08-29 tutorial/nlp/pipeline.zip.dvc\r\n4.6K └ out: tutorial/nlp/pipeline.zip\r\n...\r\n```\r\n\r\n> NOTE: Another question is whether outputs having no date is OK (just associated to the date of their DVC-files) or whether we should also get that from the default remote. Or maybe we don't need dates at all...\r\n\r\nGoing through these made me realize this command can easily get very complicated so all feedback is welcomed to try and simplify it as much as possible to a point where it still revolves the main problem (listing project outputs in order to know what's available for `dvc get` and `dvc import`), but doesn't explode in complexity.\n@jorgeorpinel this is great! can you make it as a gist or a commit - so that we can leave some comments line by line?\nOK, added to https://gist.github.com/jorgeorpinel/61719795628fc0fe64e04e4cc4c0ca1c and updated https://github.com/iterative/dvc/issues/2509#issuecomment-533685061 above.\n@jorgeorpinel I can't comment on a specific line on them. I wonder if there is a better tools for this? create a temporary PR with these files to dvc.org?\nAs a first step, we could simply print lines with all outputs. E.g.\r\n\r\n```console\r\n$ dvc list https://github.com/iterative/dvc\r\nscripts/innosetup/dvc.ico\r\nscripts/innosetup/dvc_up.bmp\r\nscripts/innosetup/dvc_left.bmp\r\n```\r\n\r\nand then move on to polishing.\nWe should plan the polishing in advance a bit so that we won't be breaking the released stuff. Defaults, available options, etc.\r\n\r\nBtw, when we just print outputs this way, wouldn't it be the same as `dvc pipeline show --outs` 🤔 \nBy default, the output should be simple and consumable by tools. It will give us an ability to chain commands like \r\n```\r\ndvc list https://github.com/iterative/dvc | grep '\\.pth$' | xargs dvc get https://github.com/iterative/dvc\r\n```\r\n(yeah, I'm pretty sure the current `dvc get` does not support this yet. EDITED: actually it does through `xargs`.)\r\n\r\nFor the same reason headers like `INFO: Expanding list recursively.` are not needed.\r\n\r\nNice looking lists with hierarchies like `└ out: tutorial/nlp/pipeline.zip` should be supported under `--human-readable` options. And this can be postpone.\n@shcheklein good point re `dvc pipeline show --outs`. But this command does not support remotes (like `dvc pipeline show https://github.com/iterative/dvc --outs`).\r\n\r\nAlso, based on my understanding,`pipeline show` does some additional stuff - sorts outputs by order in the pipeline (the last output is always the last output in the pipeline). Please correct me if it is not correct.\n> But this command does not support remotes\r\n\r\n@dmpetrov yeah, it was just a note that we kinda have a command that outputs DVC-specific info (as opposed to general ls-like output). I have no idea how can it affect any decisions in this ticket. Was sharing in case someone else has something to say about it.\r\n\r\n> the output should be simple and consumable by tools\r\n\r\nif we follow unix tools - a lot of them (probably most of them?) are not made to be scripts consumable by default. You would need to run `ls -1` to achieve a simple one item per line format:\r\n\r\n```\r\nForce output to be one entry per line. This is the default when output is not to a terminal.\r\n```\r\n\r\n`aws s3 ls` is only somewhat consumable.\r\n`find` is not consumable as far as I remember.\r\n\r\nAlso, I believe it's wider topic (similar to #2498). Ideally we should have some agreed upon approach we are taking, something like this: https://devcenter.heroku.com/articles/cli-style-guide#human-readable-output-vs-machine-readable-output And this approach should be (reasonably) consistent across all commands.\r\n\r\n> should be supported under --human-readable\r\n\r\nSee above. I don't remember actually from the top of my head any of the tools that are taking this approach. The only one common convention is `-h` but it does not change layout of the output. It changes the details.\r\n\r\n> And this can be postponed.\r\n\r\nagreed that we can postpone adding nice details (file sizes, DVC-file per output, etc). But I think it's very important to decide on the default behavior:\r\n\r\n1. **Does it show _all_ files (and you need to specify a flag to show outputs only) vs only outputs.** I've outlined the reasons I think we should be taking ls-like approach and show all files, not DVC-specific only.\r\n2. **Does it go down recursively vs you need to provide a flag -r**, similar here - `ls` or `aws ls` vs DVC-specific interface focusing on all outputs. Again, I don't see a reason to do this - everyone is familiar with `ls` or `aws s3 ls` behavior. It'll be way easier to explain and easier to alter the behavior with an option.\r\n\r\n\r\n\nTo answer these behavior and UI questions we need to return back to the original motivation:\r\n\r\n> \"browsing\" external DVC projects on Git hosting before using `dvc get` or `dvc import`. Looking at the Git repo doesn't show the artifacts because they're only referenced in DVC-files (which can be found anywhere), not tracked by Git.\r\n\r\nFirst of all, I understand this pain of not seeing data files in my Git repo.\r\nTo my mind, a list of outputs will answer the question. The list of the other Git files (like `README.rst`) seems not relevant and even introduces noise to the answer. Do we need to go recursively - hm.. probably yes, since most of the data files are hidden under `input`, `data`, `models` directories. We can avoid the need in multiple `dvc list`s. More than that, we have to go recursively (technicaly), because some DVC-file from a subfolder can generate outputs into the root of the repo.\r\n\r\nNew questions that I just got:\r\n1. Do we have any other scenarios/questions in mind or this is the only one? I have a feeling that @shcheklein has some.\r\n2. Is `dvc list` a right command name? Why not `dvc find` or `dvc whereis`? Do we need both `dvc list` (show all files) & `dvc whereis` (I don't like this idea but still)\r\n3. Should it show Git committed outputs (including metrics files)? Is it important (not nice-to-have, but important) to mention in output (by default) that a certain file is committed to Git? If so, how to show it (a programmatically consumable list of files might be not enough).\r\n\r\n\r\nSome details regarding the output formatting... Yes, in some cases it makes sense to give preference to human-readable commands. Like your Heroku example where a complex/hierarchical output was produced. \r\nBut in this case, we are talking about a list of files which is human readable already. I see no benefits of introducing additional complexity (by default). If we decide that type of the file (Git\\Remote\\metrics) is important to see by default then it can be a reason to use a more complicated format. So, I'm not against this I just see no value.\r\n\r\n> if we follow unix tools - a lot of them (probably most of them?) are not made to be scripts consumable by default.\r\n\r\nI don't agree with this.\r\n\r\n> I don't remember actually from the top of my head any of the tools that are taking this approach.\r\n\r\n`aws s3 ls --human-readable`, `du -h *`\r\n\r\n> `aws s3 ls` is only somewhat consumable.\r\n> `find` is not consumable as far as I remember.\r\n\r\n`aws s3 ls` is actually not the best example. It has quite a terrible UX.\r\n`find` output is very consumable. As well as `ls` (it does the hack internaly) and most of the unix-commands.\n> if we follow unix tools - a lot of them (probably most of them?) are not made to be scripts consumable by default\r\n\r\nEven if it is so, most of the unix tools have an option to produce scriptable output. For example `git status` and `git status --porcelain`.\nRegarding the scriptable output. `--porcelain`, `--json`, etc - is exactly an approach that I was mentioning. The point is that most of the tools are not made to be scriptable by _default_. That's exactly the point of the link I shared: \r\n\r\n\r\n>Terse, machine-readable output formats can also be useful but shouldn’t get in the way of making beautiful CLI output. When needed, commands should offer a --json and/or a --terse flag when valuable to allow users to easily parse and script the CLI.\r\n\r\nBtw, Git porcelain explains it very well why it's better to provide an option for script consumable outputs:\r\n\r\n\r\n> Give the output in an easy-to-parse format for scripts. This is similar to the short output, but will remain stable across Git versions and regardless of user configuration. See below for details.\r\n\r\n_stable_ API - It'll be way easier for to maintain and evolve the output later if we use an option.\r\n\r\nSo, `ls` is doing some magic and by default does not sacrifice human UI, `heroku, `git` have an option, `find` is somewhat consumable - but usually you should be providing `print0`, `aws` is somewhat consumable - in a Heroku's guide sense.\n> I wonder if there is a better tools for this? create a temporary PR with these files to dvc.org?\r\n\r\n@shcheklein good idea. Here's one: https://github.com/iterative/dvc/pull/2569/files\r\n\r\n> For the same reason headers like `INFO: Expanding list recursively.` are not needed...\r\n\r\nRight, `INFO/WARNING` etc logs/messages. could be included in the `-v` (verbose) output though.\r\n\r\n> agreed that we can postpone adding nice details (file sizes, DVC-file per output, etc). But I think it's very important to decide on the default behavior...\r\n\r\nWe should definitely prioritize the features of this command but I also think a good initial design can save lots of time in the future. Let's not go overkill though because once we're actually implementing I'm sure a lot of new questions will come up.\r\n\r\nAs for the default behavior, I think it should list all files (Ivan convinced me here: https://github.com/iterative/dvc/issues/2509#issuecomment-532781756) and with level 2 or 3 recursion to catch _most_ buried DVC-files, but warn when there are DVC-files in even deeper levels. It also needs a length limit like 100 files though (especially if we decide full recursion as default) – not sure how to list more if the limit is reached.\r\n\r\n> DVC-file from a subfolder can generate outputs into the root of the repo.\r\n\r\nYes... This is confusing and hard to figure out how to output this in `dvc list` if we show the relationship between DVC-files and output files so perhaps that's one of the less features with less priority.\r\n\r\n> Should it show Git committed outputs?\r\n\r\nGood Q. I think it would be important to mark on any stage output that is Git-tracked in the output of `dvc list`.\nDiscussed briefly with @dmpetrov and I think we agreed on **no recursion**, **all files** mode by default. Then, `-o` and `-r` options to filter the list as you need. @dmpetrov could you chime in, please?\nI'd prefer `ls-files` to match `git` more closely\nfyi, for `git ls-files`:\r\n\r\n- recursively all files: (no args)\r\n- current dir: `.`\r\n- recursively all python files: `'*.py'`\r\n- tracked python files in current dir: `*.py` (assumes shell expansion, git will exclude untracked file arguments)\n@casperdcl interesting point.\r\n\r\nIn what cases do you use `git ls-files` vs regular `ls`? I think they serve a bit different purposes and to me intuitively `dvc list` or `dvc ls` (name does not matter that much) serves the role of `aws s3 ls` or `ls`. There are chances that default behavior for `ls-files` is done this way to augment `ls`.\r\n\r\n`git ls-files` sounds like an advanced Git-specific command. And if we go that path, we should probably indeed do recursive list of outputs (files that are under DVC control) - but that will create some disconnect with `dvc get` and `dvc import` (that now serve the roles of wget, cp, aws s3 cp, etc) that can deal with regular files now. Also, not sure how will it work for files inside outputs (when I want to see some specific images inside a specific output).\r\n\n- if there are many files (some untracked and some in `.gitignore`) and I only want to `ls` the tracked ones\r\n- immediately understand exactly what `git archive` would include\r\n- `-x` and `-X` patterns for systems without `grep`\r\n- `-t` flag: also marks file status\n@casperdcl so the cases you are describing are indeed very different from `ls`. They are specific to Git workflow. In case of this new command, my feeling that it's better to compare it with `ls` or even `aws s3 ls` since we kind-a take the ability to access buckets directly away.\nI'd argue that `dvc ls-files` listing only dvc-tracked files would be very valuable. I don't see much value in implementing something simpler which is a minor modification to `ls` \nso, that's the point the is no `ls` for us and a regular one does not work. It's exactly the same reason why `aws s3 ls` exists. Regular `ls` won't work for a remote Git repo, right? May be that's where some misunderstanding comes from. The idea is that it should enable some discoverability for DVC projects and you supposed to use it like:\r\n\r\n```\r\ndvc list https://github.com/iterative/example-get-started \r\n```\r\n\r\nand then:\r\n\r\n```\r\ndvc get https://github.com/iterative/example-get-started README.md\r\n```\r\n\r\nor \r\n\r\n```\r\ndvc list https://github.com/iterative/example-get-started data\r\ndvc import https://github.com/iterative/example-get-started data/data.xml\r\n```\r\n\nAh right. Remote listing pre-download is different.\n\nI was talking about local listing local files.\n\nI guess they'll have different implementations but would be nice to have them both called the same command (I'd prefer `ls-files`, but fine with `ls` or `list` too).\nI think `dvc list .` will work fine in this case. It'll show all files tracked by Git + expansion of DVC-files to show outputs. So it is DVC specific ls-file in this sense. I would still prefer to have default like in `ls` - no recursion, show all files - Git-tracked + DVC outs.\r\n\r\n`dvc list .` is an edge-case for this command. The main usage is the remote one and I would really like to see just a list of files the same way I do with `ls`. (+some options to refine and filter if needed).\nI'm going to work on the issue":1,"I'll pick this up and submit a PR soon.":1,"I saw this the other day on typed dicts, which python versions support this?\r\n\r\nI'm keen if the implementation isn't too complicated.\nIt's supported in 3.6+ for sure. \n\nGreat, I'll make a PR soon then!\n\nBTW, was this issue closed by accident?":1,"Thanks for reporting, if there's an easy fix, happy to accept a PR for 1.10.3.\r\n\r\nOtherwise, let's make sure we get this right in v2. Tbh, there are lots of types that currently break JsonSchema. ":1,"The error message should say to install `dvc[webdav]` and not `dvc[webdavs]`, but either way it should also be installed automatically with `[all]`.\r\n\r\nIs this a DVC installation that you've upgraded from earlier versions via pip? webdav support was added (relatively) recently, so it's possible that if you upgraded via pip it was not installed.\r\n\r\nRunning\r\n```\r\npip install --upgrade dvc\r\n```\r\nwill only upgrade core DVC, and not any of the remote dependencies.\r\n\r\nTo upgrade DVC and its remote dependencies, you need to run\r\n```\r\npip install --upgrade dvc[all]\r\n```\r\n(or substitute the specific remotes you wish to upgrade instead of `all`).\nYes, I upgraded from 1.10.1 via pip. But I had not installed any remote bevore and run\r\n```\r\npip3 install dvc[all]\r\n```\r\nwhich installed all remotes successfully. \r\n\r\nHence, from your answer, I assume that the trailing s in _webdavs_ was the only problem?\nYes, the issue was just the trailing s. Internally, the URL scheme for webdav over SSL/TLS is `webdavs`, but the extra to be installed is `[webdav]` (which covers both `webdav` and `webdavs` remote URL schemes). However, when we generate the error message for a missing remote dependency we incorrectly substitute the URL scheme rather than the proper pip extras name.\r\n\r\nThis bug would also apply to `http` remotes, for HTTPS URL's the error message will suggest installing `dvc[https]` when it should be `dvc[http]`.":1,"hi, @anjapago , could you try to see if this quick and dirty workaround helps:\r\n\r\npip install git+https://github.com/shcheklein/dvc@workaround-3473\r\n\r\n(use it this way `dvc get --rev=feb20gem https://github.com/iterative/blog yarn.lock` - always specify `--rev`)\nMore context from the discussion. Full clone takes ~20min which makes commands like `dvc get`, `dvc list`, `dvc import` as well as `dvc.api` effectively unusable.\nRight. Ping myself :)":1,"Not sure about this, as not mtime on data files nor mtime on cache files is really indicative of the usage or lack of it. Plus, I don't see a good way that we will not break a particular pipeline with older files. Need to reconsider it later. Moving to 0.9.8 milestone.\nMore organic way would be to use git commits as a limiter. I.e. gc --time month will save data for all pipelines up to a week.\nMoved to 0.11.0. \nA user mentioned this idea on Discord today.\nBTW, another \"natural way to reduce size\" could be **by size** e.g. `dvc gc --size-over 1G` - removes all cached files larger than a GB.\nBTW, this (`--time` feature) can be done currently via Git. It may be a little complicated e.g. requiring rewriting the repo history but we could write a guide or how-to about it instead of implementing the feature. If so we could move this issue over to dvc.org.\r\n\r\nAnd the `--size` feature is doable via regular GNU tools like `du ... | xargs rm ...` etc. which we could also document.\nWe had another user request this feature today, in the context of shared caches ([see chat](https://discord.com/channels/485586884165107732/563406153334128681/861989689891880971)).\n@jorgeorpinel I would also be interested in this feature. Could you elaborate on how the `--time` feature could be achieved with git?\n@lassepe sure, by deleting all the commits before that date (see [this Q](https://stackoverflow.com/questions/29042783)) and then using `dvc gc -A`.\nAh okay, this seems like a bit too aggressive then because I still need to keep the code history. I'll wait for the actual feature then.\nYou could do it in a temporary repo clone and then discard it (don't rewrite the upstream or your main working repo).\nAny updates? this feature would be very important to me.. I currently have some models that are retrained weekly, and I was using dvc to version the models and datasets, but the bucket size was getting too big and I don't need the old models (I need keep only the last 3 or 4 models in case rollback is needed).\r\nas currently this feature is not available it is an impediment for me to use dvc.\n> Any updates? this feature would be very important to me.. I currently have some models that are retrained weekly, and I was using dvc to version the models and datasets, but the bucket size was getting too big and I don't need the old models (I need keep only the last 3 or 4 models in case rollback is needed).\r\nas currently this feature is not available it is an impediment for me to use dvc.\r\n\r\n@rafaelsdellama Do you need the `datetime` feature specifically or would some alternative approach like `gc -n ` (https://github.com/iterative/dvc/issues/7695) work for your use case?\r\n\r\n":1,"It's happening to other users as well (discussion with Benjamin on Discord). Symptoms are very similar - zsh (a regular one), PATH is modified. Not `conda`, virtualenv is being used. OS - Mac.\n@shcheklein in their case, is DVC installed in the \"parent\" Python environment, or in a separate virtualenv? The problem might lie in something that has to do with the parent/child relationship.\r\n\r\nAlso, did they use Homebrew to install Python? Brew is another common factor here, and more likely to cause problems than Zsh, since Brew does its own layer of symlinking.\r\n\r\nMy admittedly convoluted setup:\r\n```\r\nLinuxbrew\r\n├─ Pyenv\r\n│  └─ Conda <- PATH is broken when DVC is installed here\r\n│ └─ Active conda environment <- PATH is OK when DVC is installed here\r\n└─ Python\r\n └─ Pipx-managed Virtualenv <- PATH is OK when DVC is installed here\r\n```\n@gwerbin Thanks! I've asked Benjamin to take a look and share more details. \r\n\n> @shcheklein in their case, is DVC installed in the \"parent\" Python environment, or in a separate virtualenv? The problem might lie in something that has to do with the parent/child relationship.\r\n\r\nI have tried two setups, and both fail the sense that:\r\n\r\n1. The error `ImportError: No module named pandas` is returned.\r\n2. `dvc run -o test 'which python > test'` outputs `/usr/local/bin/python` in the `test` file, where it should point to `python` in the virtualenv.\r\n\r\nSetup 1\r\n> ```\r\n> Homebrew\r\n> └─ DVC (`/usr/local/bin/dvc`)\r\n> └─ Virtualenv + Python (`/usr/local/bin/{python,virtualenv}`)\r\n> └─Active virtualenv environment\r\n> └─ Pandas\r\n> ```\r\n\r\nSetup 2\r\n> ```\r\n> Homebrew\r\n> └─ Virtualenv + Python (`/usr/local/bin/{python,virtualenv}`)\r\n> └─Active virtualenv environment\r\n> └─ DVC\r\n> └─ Pandas\r\n> ```\r\n\r\n> Also, did they use Homebrew to install Python? Brew is another common factor here, and more likely to cause problems than Zsh, since Brew does its own layer of symlinking.\r\n\r\nYes, Python was installed by Homebrew. (FYI: the Python interpreter that comes with the latest version of macOS (Mojave, version 10.14.6) is 2.7.10 and is 4.5 years old. I figure most people using Python on macOS will have shadowed this outdated version with a more recent one.)\r\n\r\n@shcheklein and @efiop asked me to share the output of a few commands on the Discord channels and perhaps it helps if I share it here as well.\r\n\r\n```\r\n> echo $SHELL\r\n> dvc run -f test.dvc 'echo $SHELL'\r\n> ls -la $SHELL\r\n> file $SHELL\r\n/bin/zsh\r\n'test.dvc' already exists. Do you wish to run the command and overwrite it? [y/n] y\r\nRunning command:\r\n echo $SHELL\r\n/bin/zsh\r\nSaving information to 'test.dvc'.\r\n\r\nTo track the changes with git, run:\r\n\r\n git add test.dvc\r\n-rwxr-xr-x 1 root wheel 610240 May 4 09:05 /bin/zsh\r\n/bin/zsh: Mach-O 64-bit executable x86_64\r\n> cat test.dvc\r\ncmd: echo $SHELL\r\nmd5: ee3b44e50705d557b7aa3eef74821f74\r\n```\r\n\r\nI wish I could help out more, but my knowledge of Python environments and DVC internals is very limited. However, let me know if I can help you out with further information and I'm happy to provide it.\nFor the record: I am able to reproduce https://github.com/iterative/dvc/issues/2506#issue-494639954 even on the linux machine.\r\n\r\nIn my case `which dvc` shows pyenv shim, which has something like:\r\n```\r\nexec \"/home/efiop/.pyenv/libexec/pyenv\" exec \"$program\" \"$@\"\r\n```\r\nin it, which is the thing that adds some stuff on top of the base env, as we can see:\r\n```\r\n➜ dvc-test git:(755) ✗ /home/efiop/.pyenv/libexec/pyenv exec --help\r\nUsage: pyenv exec [arg1 arg2...]\r\n\r\nRuns an executable by first preparing PATH so that the selected Python\r\nversion's `bin' directory is at the front.\r\n\r\nFor example, if the currently selected Python version is 2.7.6:\r\n pyenv exec pip install -rrequirements.txt\r\n\r\nis equivalent to:\r\n PATH=\"$PYENV_ROOT/versions/2.7.6/bin:$PATH\" pip install -rrequirements.txt\r\n```\r\n\r\nIt would be nice of pyenv would've left something like `PATH_ORIG` env var, so that we could use it later. This would be similar to how pyinstaller leaves VAR_ORIG if it changes it, e.g. LD_LIBRARY_PATH. Looking for possible and viable automatic workarounds. Might have to suggest this to pyenv later though, to make it straightforward for everyone.\nInteresting detail: our good friend @AlJohri has run into it before even using dvc: https://github.com/pyenv/pyenv/issues/985 🙂 ":1,"Hi @sremm !\r\n\r\nIndeed, looks like state backward compatibility issue 🙁 , probably caused by some locking adjustments we've done. The only solution for you right now is to install the newest version and delete `.dvc/state` , after that everything should start working. You will, however, lose all the cached hashes that are in that state db, but that will be recomputed once when you next use some particular file, so shouldn't be a big issue, even though inconvenient. Would that work for you? \nI couldn't reproduce it @sremm, @efiop :disappointed: \r\n\r\n```bash\r\npip install dvc==0.59.2\r\ndvc init --no-scm\r\ndvc run -o foo \"echo foo > foo\"\r\npip uninstall dvc && pip install dvc\r\nrm foo\r\ndvc checkout\r\ndvc run -o bar \"echo bar > bar\"\r\n```\r\n\r\nMaybe your state file got corrupted somehow.\r\n\r\nHow did you perform the update?\nGood catch @mroutis !\r\n\r\n@sremm Btw, could you please show `dvc version` output on both dvc versions? It contains some additional info, which might help us better understand what happened. \r\n\r\nWe have this new piece of code in https://github.com/iterative/dvc/blob/0.71.0/dvc/state.py#L491 , where we try to connect by the uri, if we can. Might be that we didn't quite get the sqlite requirements right. In which case, we should be able to recover from this issue without losing the state db.\r\n\r\nAlso, `python -c 'import sqlite3; print(str(sqlite3.sqlite_version_info))'` would be helpful as well :)\n@mroutis I used `pip install dvc --upgrade` doing `pip uninstall dvc && pip install dvc` like in your example did not change the behaviour\r\n\r\n@efiop I tried updating dvc and then removing `.dvc/state` but still experience the same behaviour.\r\n\r\nHere are the outputs:\r\nFor the working version\r\n```\r\ndvc version\r\n/home/mlburk/anaconda3/envs/tf-gpu/lib/python3.7/site-packages/git/repo/base.py:129: UserWarning: The use of environment variables in paths is deprecated\r\nfor security reasons and may be removed in the future!!\r\n warnings.warn(\"The use of environment variables in paths is deprecated\" +\r\nDVC version: 0.59.2\r\nPython version: 3.7.4\r\nPlatform: Linux-4.15.0-70-generic-x86_64-with-debian-buster-sid\r\nBinary: False\r\nCache: reflink - False, hardlink - False, symlink - True\r\nFilesystem type (cache directory): ('fuseblk', '/dev/sda2')\r\nFilesystem type (workspace): ('ext4', '/dev/sdb2')\r\n```\r\nand for the latest version\r\n```\r\ndvc version\r\n/home/mlburk/anaconda3/envs/tf-gpu/lib/python3.7/site-packages/git/repo/base.py:129: UserWarning: The use of environment variables in paths is deprecated\r\nfor security reasons and may be removed in the future!!\r\n warnings.warn(\"The use of environment variables in paths is deprecated\" +\r\nDVC version: 0.71.0\r\nPython version: 3.7.4\r\nPlatform: Linux-4.15.0-70-generic-x86_64-with-debian-buster-sid\r\nBinary: False\r\nPackage: pip\r\nCache: reflink - False, hardlink - False, symlink - True\r\nFilesystem type (cache directory): ('fuseblk', '/dev/sda2')\r\nFilesystem type (workspace): ('ext4', '/dev/sdb2')\r\n```\r\nOutput of `python -c 'import sqlite3; print(str(sqlite3.sqlite_version_info))'`\r\n(3, 29, 0)\r\n\r\nPerhaps there some other small example I could also try to reproduce?\n@sremm You've removed .dvc/state _after_ upgrading, right?\nDouble checked and yes, upgraded -> removed .dvc/state -> try to add a simple file\r\n\r\nsame behaviour.\r\n\r\nSince I also use a windows machine with the same repository, I thought I'd try to upgrade there as well. \r\n\r\nDVC version: 0.71.0\r\nPython version: 3.7.3\r\nPlatform: Windows-10-10.0.16299-SP0\r\nBinary: False\r\nPackage: pip\r\nCache: reflink - False, hardlink - True, symlink - False\r\nFilesystem type (cache directory): ('NTFS', 'F:\\\\')\r\nFilesystem type (workspace): ('NTFS', 'F:\\\\')\r\n\r\nfirst I upgraded (had a little older version than 0.59.2, but everything still worked with 0.59.2)\r\nthen tried to add a small text file and got a different error\r\n```\r\ndvc add test.txt -v\r\nc:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\git\\repo\\base.py:129: UserWarning: The use of environment variables in paths is deprecated\r\nfor security reasons and may be removed in the future!!\r\n warnings.warn(\"The use of environment variables in paths is deprecated\" +\r\nDEBUG: Trying to spawn '['c:\\\\users\\\\srg\\\\appdata\\\\local\\\\continuum\\\\anaconda3\\\\envs\\\\tf-gpu\\\\python.exe', 'C:\\\\Users\\\\srg\\\\AppData\\\\Local\\\\Continuum\\\\anaconda3\\\\envs\\\\tf-gpu\\\\Scripts\\\\dvc', 'daemon', '-q', 'updater']'\r\nDEBUG: Spawned '['c:\\\\users\\\\srg\\\\appdata\\\\local\\\\continuum\\\\anaconda3\\\\envs\\\\tf-gpu\\\\python.exe', 'C:\\\\Users\\\\srg\\\\AppData\\\\Local\\\\Continuum\\\\anaconda3\\\\envs\\\\tf-gpu\\\\Scripts\\\\dvc', 'daemon', '-q', 'updater']'\r\nERROR: unexpected error - [WinError 123] The filename, directory name, or volume label syntax is incorrect: ' 16232\\n'\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\main.py\", line 49, in main\r\n ret = cmd.run()\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\command\\add.py\", line 25, in run\r\n fname=self.args.file,\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\repo\\__init__.py\", line 35, in wrapper\r\n with repo.lock, repo.state:\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\flufl\\lock\\_lockfile.py\", line 334, in __enter__\r\n self.lock()\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\lock.py\", line 78, in lock\r\n super(Lock, self).lock(timedelta(seconds=DEFAULT_TIMEOUT))\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\flufl\\lock\\_lockfile.py\", line 267, in lock\r\n self._break()\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\flufl\\lock\\_lockfile.py\", line 527, in _break\r\n os.unlink(winner)\r\nOSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: ' 16232\\n'\r\n------------------------------------------------------------\r\n```\r\n\r\n\r\nBut after removing the .dvc\\state I got the same error as on Ubuntu\r\n\r\n```\r\ndvc add test.txt -v\r\nc:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\git\\repo\\base.py:129: UserWarning: The use of environment variables in paths is deprecated\r\nfor security reasons and may be removed in the future!!\r\n warnings.warn(\"The use of environment variables in paths is deprecated\" +\r\nDEBUG: Trying to spawn '['c:\\\\users\\\\srg\\\\appdata\\\\local\\\\continuum\\\\anaconda3\\\\envs\\\\tf-gpu\\\\python.exe', 'C:\\\\Users\\\\srg\\\\AppData\\\\Local\\\\Continuum\\\\anaconda3\\\\envs\\\\tf-gpu\\\\Scripts\\\\dvc', 'daemon', '-q', 'updater']'\r\nDEBUG: Spawned '['c:\\\\users\\\\srg\\\\appdata\\\\local\\\\continuum\\\\anaconda3\\\\envs\\\\tf-gpu\\\\python.exe', 'C:\\\\Users\\\\srg\\\\AppData\\\\Local\\\\Continuum\\\\anaconda3\\\\envs\\\\tf-gpu\\\\Scripts\\\\dvc', 'daemon', '-q', 'updater']'\r\nERROR: unexpected error - unable to open database file\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\main.py\", line 49, in main\r\n ret = cmd.run()\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\command\\add.py\", line 25, in run\r\n fname=self.args.file,\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\repo\\__init__.py\", line 35, in wrapper\r\n with repo.lock, repo.state:\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\state.py\", line 136, in __enter__\r\n self.load()\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\state.py\", line 228, in load\r\n self.database = _connect_sqlite(self.state_file, {\"nolock\": 1})\r\n File \"c:\\users\\srg\\appdata\\local\\continuum\\anaconda3\\envs\\tf-gpu\\lib\\site-packages\\dvc\\state.py\", line 495, in _connect_sqlite\r\n return sqlite3.connect(uri, uri=True)\r\nsqlite3.OperationalError: unable to open database file\r\n------------------------------------------------------------\r\n```\r\n\r\nRolling back to \r\n```\r\nDVC version: 0.59.2\r\nPython version: 3.7.3\r\nPlatform: Windows-10-10.0.16299-SP0\r\nBinary: False\r\nCache: reflink - False, hardlink - True, symlink - False\r\nFilesystem type (cache directory): ('NTFS', 'F:\\\\')\r\nFilesystem type (workspace): ('NTFS', 'F:\\\\')\r\n```\r\nAnd I can add the file without issues\n@sremm Oh, that is very interesting. I suspect problems in our by-url sqlite connection. Let's try to narrow it down. Please install the latest dvc and run this script in the root of your repo:\r\n```\r\nimport sqlite3\r\nfrom dvc.state import _build_sqlite_uri\r\n\r\nuri = _build_sqlite_uri(os.path.join(os.getcwd(), \".dvc\", \"state\"), {\"nolock\": 1})\r\nprint(uri)\r\nsqlite3.connect(uri, uri=True)\r\n```\r\n\r\nCC @Suor , maybe you can see better where uri might've broken 🙂 \nAlrighty, here are the results, it does result in the same error.\r\n\r\ntest_for_dvc_bug.py\r\n```\r\nimport sqlite3\r\nimport os\r\nfrom dvc.state import _build_sqlite_uri\r\n\r\nuri = _build_sqlite_uri(os.path.join(os.getcwd(), \".dvc\", \"state\"), {\"nolock\": 1})\r\nprint(uri)\r\nsqlite3.connect(uri, uri=True)\r\n```\r\nupdated to latest dvc, removed .dvc\\state again and ran the above script\r\nthis is the output\r\n```\r\nfile:///F:/repo_base/.dvc/state?nolock=1\r\nTraceback (most recent call last):\r\n File \"test_for_dvc_bug.py\", line 7, in \r\n sqlite3.connect(uri, uri=True)\r\nsqlite3.OperationalError: unable to open database file\r\n```\r\nThe above is from running on windows, but it was the same error on Ubuntu.\n@sremm Would be great to see output on ubuntu too, since we are more *nix guys than windows, so it is easier for us to grasp posix paths 🙂 But so far uri looks alright.\n@sremm And how about:\r\n```\r\nimport sqlite3\r\nimport os\r\n\r\nsqlite3.connect(os.path.join(os.getcwd(), \".dvc\", \"state\"))\r\n```\r\n? Just a sanity check to make sure that this is the uri that is broken.\n@efiop \r\nSo :) the prevous output from Ubuntu\r\n\r\nNote that in my previous comment I also had to change the actual repository name, I realised that special characters might also have an effect and that there is the \"%20\" in the folder name as seen below, everything else is just letters. (same name on widows for the actualy repository folder)\r\n```\r\npython test_for_dvc_bug.py \r\nfile:///home/mlburk/Repositories/Repo%20Base/.dvc/state?nolock=1\r\nTraceback (most recent call last):\r\n File \"test_for_dvc_bug.py\", line 7, in \r\n sqlite3.connect(uri, uri=True)\r\nsqlite3.OperationalError: unable to open database file\r\n```\r\n\r\nand the new test `test_for_dvc_bug_2.py`\r\n```\r\nimport sqlite3\r\nimport os\r\n\r\nsqlite3.connect(os.path.join(os.getcwd(), \".dvc\", \"state\"))\r\n```\r\nSo this runs without errors. I guess this is what you expected since the \"_build_sqlite_uri\" function is under suspect. The \"%20\" in the name does not seem to be an issue for the sqlite3.connect() call though.\r\n\r\n\n@sremm Thanks! Interestingly enough, I can reproduce if I feed a relative path to the _build_uri. How about this piece of code on Ubuntu:\r\n```\r\nimport sqlite3\r\n\r\nuri = \"file:/home/mlburk/Repositories/Repo%20Base/.dvc/state?nolock=1\"\r\nsqlite3.connect(uri, uri=True)\r\n```\r\nI suppose that `/home/mlburk/Repositories/Repo%20Base/.dvc/state` is a valid one for your machine, right? If so, please run the code above and show the output. I suspect that it might be \"//\" situation that is at fault.\r\n\n@efiop \r\ntest_for_dvc_bug_3.py contents\r\n```\r\nimport sqlite3\r\n\r\nuri = \"file:/home/mlburk/Repositories/Repo%20Base/.dvc/state?nolock=1\"\r\nsqlite3.connect(uri, uri=True)\r\n\r\n```\r\nOutput\r\n```\r\npython test_for_dvc_bug_3.py \r\nTraceback (most recent call last):\r\n File \"test_for_dvc_bug_3.py\", line 4, in \r\n sqlite3.connect(uri, uri=True)\r\nsqlite3.OperationalError: unable to open database file\r\n```\r\n\r\nAnd yes with `uri = \"/home/mlburk/Repositories/Repo%20Base/.dvc/state?nolock=1\"`, there is no error, so without \"file:\" otherwise the same.\r\n\r\nAlso tried with `uri = = \"///home/mlburk/Repositories/Repo%20Base/.dvc/state?nolock=1\"` no error, empty output.\n@sremm Without `file:` it most likely creates `state?nolock=1` file instead of `state`. And probably created `Repo%20Base` directory, though I'm not sure about this one. Pretty weird stuff. Let's try this one too:\r\n```\r\nimport sqlite3\r\n\r\nuri = \"file:.dvc/state?nolock=1\"\r\nsqlite3.connect(uri, uri=True)\r\n```\r\nIs there anything special about your python or your setup in general? \n> The \"%20\" in the name does not seem to be an issue for the sqlite3.connect() call though.\r\n\r\nBtw, how did you test that? I suppose you have a directory `Repo Base` with the space in the name, right?\n@efiop with the code contents as the below snipped, there is no error and a state file is created\r\n```\r\nimport sqlite3\r\n\r\nuri = \"file:.dvc/state?nolock=1\"\r\nsqlite3.connect(uri, uri=True)\r\n```\r\nHmm, the folder itself has the \"%20\" in the name, the repository in our internal server has a space though, but after forking with git it sets the \"%20\" to the actual folder name instead of the space.\r\nAnd I do suspect that there is something to do with the \"%20\" since i tried renaming the folder to have an actualy space instead of \"%20\" and the issue dissapears. I can add files with the latest dvc version.\r\n\r\nHere are some more cases where I get the error to help debug.\r\nTried to run with `uri = \"file:/home/mlburk/TestFolder/Base%20Repo/.dvc/state?nolock=1\"` where the \"TestFolder\" does not exist and get the same old error.\r\nAlso tired with `uri = \"file:/home/mlburk/Repositories/NewFolder/.dvc/state?nolock=1\"` where NewFolder exists, but \".dvc\" folder does not.\r\nSo when some folders are missing `sqlite3.OperationalError: unable to open database file` seems to pop up as well.\r\n\r\nI also tested creating a new folder with path `/home/mlburk/Repositories/Test%20Folder/.dvc` (no space the actual \"%20\" in the name) manually and tried running the code below, which also gave the error.\r\n\r\n```\r\nimport sqlite3\r\n\r\nuri = \"file:/home/mlburk/Repositories/Test%20Folder/.dvc/state?nolock=1\"\r\nsqlite3.connect(uri, uri=True)\r\n```\n> Hmm, the folder itself has the \"%20\" in the name\r\n\r\nAh, ok, now it all makes sense! The uri should be encoded, and `%20` means empty space, so sqlite tries to open \"Repo Base\" instead of \"Repo%20Base\", since it decodes it. So our uri builder should escape that properly. So most likely we need to \r\n```\r\nimport urllib.parse\r\nfilename = urllib.parse.quote_plus(filename)\r\n```\r\nin https://github.com/iterative/dvc/blob/master/dvc/state.py#L499 . Are you familiar with modifying our project and installing the dev version? If you are, could you do those modifications and give it a try? E.g.\r\n```\r\ngit clone https://github.com/iterative/dvc\r\ncd dvc\r\n# modify dvc/state.py\r\npip install .\r\n```\n@sremm To make sure that my conclusions are correct, could you try running:\r\n```\r\nimport sqlite3\r\n\r\nuri = \"file:///home/mlburk/Repositories/Repo%2520Base/.dvc/state?nolock=1\"\r\nsqlite3.connect(uri, uri=True)\r\n```\r\n\r\nNotice that \"Repo%20Base\" got encoded into \"Repo%2520Base\"\nRunning the snipped below\r\n```\r\nimport sqlite3\r\n\r\nuri = \"file:///home/mlburk/Repositories/Repo%2520Base/.dvc/state?nolock=1\"\r\nsqlite3.connect(uri, uri=True)\r\n```\r\nresults in no errors and the state file is created in `/home/mlburk/Repositories/Repo%20Base/.dvc/`\n@sremm Great news! So we've found the issue. Patch is coming soon, stay tuned! 🙂 Thanks for the feedback!":1,"Yeah this seems like a good feature request to me.\r\n\r\nI've never tried nesting BaseSettings classes but it seems totally reasonable to me and I don't see any reason it shouldn't work like this. (But maybe @samuelcolvin does?)\r\n\r\nCould you produce a failing test case or two as a starting point for implementation work?\nSounds good to me. \r\n\r\nDo you want to use dot notation e.g. `db.host`?\r\n\r\nAlso should this work for dictionaries too?\r\n\r\nThis isn't required for v1 since it would be entirely backwards compatible. \nIt's great to hear!\r\n\r\nI think the only logic that needs to change is merging of `self._build_environ()` and `init_kwargs` dictionaries, so I don't see why it wouldn't work for dictionaries.\r\n\r\nShould I start a merge request with the changes and tests or would you like to tackle this one yourselves?\n@idmitrievsky It would be great if you could start a pull request for this. Your approach to the logic sounds right to me (though admittedly there could be some edge cases I'm not considering).":1,"Guys, so maybe `--no-header` after all? Or do you see a different usecase for `--no-header` that will force us to create `--no-table-header` in the future to differentiate the use cases? @pared \n+1 for `--no-header`, at least `--no-csv-header` feels wrong since it's not CSV only\nHi, Can I work on this issue?\nI don't see a future use case for `--no-header` as of today, let's do it. I guess if there would be more important and viable use case for that option, we would know by now.":1,"Thanks for reporting.\r\n\r\nHappy to accept a PR to add a proper `__eq__` method to `NameEmail`":1,"The error, in this case, can be avoided. Do you think it would be more useful to return an empty dict or rather raise an exception like `No params found`?\nI think an error in this case? I doubt there's much reason to call `params_show` if you expect an empty dict.\r\n\r\nWhat would happen if there was a `params.yaml` but no `dvc.yaml` found? What about the inverse?\n> I think an error in this case? I doubt there's much reason to call params_show if you expect an empty dict.\r\n\r\nI was thinking about returning an empty dict instead of raising an exception because `dvc {X} show` commands don't fail when there is nothing to return.\r\n\r\n> What would happen if there was a params.yaml but no dvc.yaml found? \r\n\r\nThe contents of `params.yaml` will be returned.\r\n\r\n> What about the inverse?\r\n\r\nThe `KeyError` point will be reached.\r\n\r\n":1,"@piojanu I guess what we should do is check whether or not the data is already ignored by git and if it is then just not add it to yet another gitignore. Does that make sense for your scenario?\nHmm. You would have to check if e.g. whole directory, that the file is in, isn't ignored already. Then it would do fine.\n@piojanu Sure. There is actually a `git check-ignore` command, which we could utilize, unless there is a more straightforward way in GitPython.\n@piojanu Ok, there is actually a `repo.git.check_ignore()` supported already. So basically all we'll have to do is to add a check for it to `Git.ignore()` in https://github.com/iterative/dvc/blob/master/dvc/scm/git.py#L76 . Would you like to submit a PR for it? :slightly_smiling_face: \nI'd love too, but for now I have very little time to sit down to it.\n@piojanu No worries :) We'll try to get to it when we have time then. Thanks for the feedback! :slightly_smiling_face: \nHey @efiop and @piojanu. I can work on this after work tonight. Do you mind if I take this issue?\nHi @J0 ! Sure! Thanks a lot for looking into it! Let us know if you need any help and/or have any questions. :slightly_smiling_face: \nAnyone working on it ? or I can pick this up with some help \n@veera83372 Not sure if @J0 is still working on this. Let's see if he replies. \n@efiop I am still working on this issue. Sorry for the long delay — just opened a PR\nIs this still being worked on?\n@J0 Are you still working on this?\n@Aljo-Rovco I doubt. The PR (empty?) was closed a while ago. Feel free to pick it up and contribute and let us know if you'd need any help from us.\n@shcheklein was planning to give 1 day for @J0 to reply and then give this a go, but ok :)\n@Aljo-Rovco Looks like @J0 is not responding, so indeed, please feel free to take this one. Thanks a lot for looking into it! 🙂 \n@Aljo-Rovco Are you working on this?\nHey sorry, was swamped. Not sure how well I'm suited to contribute, but I can try. Would it just be a check here: https://github.com/iterative/dvc/blob/master/dvc/scm/git/__init__.py#L137 ?\n@Aljo-Rovco No worries. That would be the place to check, yes :slightly_smiling_face: ":1,"Ok, so discussed with @pmrowla and @pared that it makes sense to forbid it. If someone asks for this in the future, we could introduce `--subrepos` or smth like that for push/pull/etc.\nThe reason it does't throw an error right now is https://github.com/iterative/dvc/blob/1.0.0a1/dvc/repo/__init__.py#L210 . It opens dvcfile directly without consulting with the clean tree.\r\n\r\nAs discovered by @skshetry , repo.tree doesn't use dvcignore when checking if something exists (only uses it when walk()ing). So we need to revisit CleanTree to fix that.":1,"I don't understand the autocompletion issue with the `-c \"echo bar > bar\"` format. I'm able to autocomplete on Ubuntu in bash and zsh even inside of quotes. Is there an example of how this breaks autocompletion? Is it a specific issue with OSX? Maybe @dmpetrov can elaborate?\n@dberenbaum, that's strange. For me (on Linux), it never autocompletes, with or without DVC's autocompletion.\n@skshetry What are you trying to autocomplete? What shell are you using? Does autocomplete work for you without the `-c` flag and quotes?\n@dberenbaum Autocompletion issue - the default autocomplete suggests files that I have in a dir when I type `dvc run -n s1 python tr` (suggests `train.py`). It won't suggest inside a quoted string - `dvc run s1 -c \" python tr` (however, it suggests as 1st item in a string `dvc run s1 -c \"tr` which is not a usual corner case).\r\n\r\n@skshetry The decision about string VS reminder was made in the very first version of DVC. I considered a `-c/--command` option as the opposite of the remainder. The reminder looks more common and convenient. An analogy with a reminder - `xargs`.\r\n\r\nTo my mind, the issue with mixing up params is not a major one. By replacing the behavior we might have new issues (the autocomplete is one of them). I suggest not changing the behavior and keep cmd as a reminder.\nThanks, @dmpetrov! I was trying the equivalent of `dvc run s1 -c \"tr`, but now I understand and see the same on my end.\nI agree that `But, we have heard complaints from users that it's confusing, as trying to add certain flags to dvc run at the end is now assumed to be part of the user's script.` doesn't sound like a big issue, considering that people will have to double quote commands now (and escaping if needed) + they will have to remember/learn about `-c` + it breaks autocompletion (not matter DVC's one installed or not).\nYup, let's stick with the existing syntax (positional arguments) unless you have any concerns @skshetry .":1,"Thanks @mattseddon! Could you describe how it impacts VS Code and how important it is?\r\n\r\nI see that it's inconsistent with other commands, but both behaviors seem initially reasonable to me, and since I have never seen a user complaint about it, I would only prioritize it based on your needs.\nThis broke the workflow of getting setup with a new project/repository. That is something that we want to cover with the extension so it is fairly important to us. \r\n\r\nHere are a couple of thoughts that I've had:\r\n\r\n1. If the error is expected (which I think is reasonable) then we should not get an \"unexpected error\"\r\n2. If we have to have a commit for some commands to work then maybe `dvc init` should generate that commit.":1,"Yep it's a bug. Since for python `Union[int, float] == Union[float, int]`, the key in the cache is the same.\r\nI guess we could use `(cls, params, get_args(params))` as key of `_generic_types_cache` instead of `(cls, params)`\n> Yep it's a bug. Since for python `Union[int, float] == Union[float, int]`, the key in the cache is the same.\r\n\r\nGood to hear!\r\n\r\n> I guess we could use `(cls, params, get_args(params))` as key of `_generic_types_cache` instead of `(cls, params)`\r\n\r\nWhat about nested models, say `List[Union[float, int]]` and `List[Union[int, float]]`? Wouldn’t these still be considered equal? \nIs this new in v1.10?\n> Is this new in v1.10?\r\n\r\nNo, saw it first in 1.9.0 and then updated to check if it was still there in the newest version. It is probably as old as the current specification of the key in `_generic_types_cache`.\nHumm, can't remember if that was new in 1.9 or 1.8.\r\n\r\nThe question is whether we should fix this in a patch release of 1.10 or wait for V2?\n> The question is whether we should fix this in a patch release of 1.10 or wait for V2?\r\n\r\nThat’s obviously not for me to say, but personally I think it would have been nice with a patch of at least the basic (non-nested) case, as the bug is breaking several of my tests.\nThis gets more complicated, as the more-or-less exact same issue is present in the `typing` library itself, e.g.\r\n\r\n```python\r\nfrom typing import get_args, List, Union\r\n\r\nprint(get_args(Union[int, float]))\r\nprint(get_args(Union[float, int]))\r\nprint(get_args(List[Union[float, int]]))\r\nprint(get_args(List[Union[int, float]]))\r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(, )\r\n(, )\r\n(typing.Union[float, int],)\r\n(typing.Union[float, int],)\r\n```\r\n\r\nThis is discussed in [this CPython issue](https://github.com/python/cpython/issues/86483), which include comments by Guido, and which resulted in the following documentation change:\r\n\r\n> If X is a union or [Literal](https://docs.python.org/3/library/typing.html#typing.Literal) contained in another generic type, the order of (Y, Z, ...) may be different from the order of the original arguments [Y, Z, ...] due to type caching.\r\n\r\n(from the [current Python docs](https://docs.python.org/3/library/typing.html#typing.get_args))\r\n\r\nThis at least rules out a recursive solution using `get_args` for nested models (which I was toying around with). If I understand this correctly, I believe it also means that the difference between e.g. `List[Union[float, int]]` and `List[Union[int, float]]` would be inaccessible in Python itself, which is a problem.\r\n\r\nThinking from the perspective of a downstream library depending on `pydantic` I suppose one could implement a workaround for the nested problem(i.e. `List[Union[int, float]] == List[Union[float, int]]`) by replacing `Union` with a custom model, say `OrderedUnion`, which overrides `__eq__`. Such a model could even be included in `pydantic` if there is great enough need for this.\r\n\r\nHowever, all of this does not disqualify the simple `get_args` fix suggested by @PrettyWood. It would still work for the top level `Union` case, as in my example code, which is really the only thing I need for my code anyway. Also, I suppose the fact that a solution to the nested problem would be dependent on the above-mentioned \"feature\" in Python is really an argument for only solving the simple non-nested issue now. So, in a sense, I would argue that this complication might actually make this issue simpler to manage, at least for now.\nConfirmed, I think we should fix this in V1.10.\r\n\r\nPR welcome, it'll need to be pretty quick to make it into the next patch release #4472, but otherwise it can be included in the (inevitable) next patch release.\nI can give it a try.\ngreat, thanks.\nI'm getting the following mypy errors:\r\n```\r\npydantic/generics.py:65: error: Argument 1 to \"get_args\" has incompatible type \"Union[Type[Any], Tuple[Type[Any], ...]]\"; expected \"Type[Any]\" [arg-type]\r\npydantic/generics.py:131: error: Argument 1 to \"get_args\" has incompatible type \"Tuple[Type[Any], ...]\"; expected \"Type[Any]\" [arg-type]\r\n```\r\n\r\nWhich seems to be a bug in mypy: https://github.com/python/mypy/issues/4625\r\n\r\nAny thoughts on how to handle this?\nhard without seeing the change, create the PR so I can see it.\nBasically, the type of the `params` parameter specified in `GenericModel.__class_getitem__` is incompatible with the `Type[Any]` in the `get_args` method, according to `mypy`. As far as I can understand, `Type[Any]` is supposed to allow any type, including `Union` and `Tuple`. It makes sense to me that this is a mypy bug, but I might have misread the mypy issue, as the issue is not exactly the same.\r\n\r\nAnyway, I'll add a test and submit the PR for you to see.\nI'd love to include this fix in v1.10.2, any chance you can submit the PR asap so I can review and merge it?":1,"As a workaround, maybe you could set the attribute type to a set? Then when a list gets passed in, it gets deduplicated automatically. Then you can override `json()` and `dict()` to convert it back to a list on the way out.\n@mdavis-xyz The idea of this feature is to validate that sequences have only unique items.\r\n\r\nCode below will show the idea:\r\n```python\r\nfrom typing import Set\r\n\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass FooForm(BaseModel):\r\n tags: Set[int]\r\n\r\n\r\nfoo = FooForm(tags=[1, 1, 2]) # tags attribute will be {1, 2}\r\n\r\n\r\nclass BarForm(BaseModel):\r\n tags: Set[int] = Field(unique=True)\r\n\r\n\r\nbar = BarForm(tags=[1, 1, 2]) # will fail, because sequence has duplicate items\r\n```\r\n\r\nIf you have any questions, feel free to ask.\nThis is an interesting issue that clarifies some of the friction points with validation and casting within Pydantic. \r\n\r\nAs a workaround, it's possible to add a validator with 'pre=True' to get the desired behavior. \r\n\r\n```python\r\nfrom typing import Set, Sequence\r\nfrom pydantic import BaseModel, Field, validator\r\n\r\ndef custom_to_set(xs: Sequence[int]) -> Set[int]:\r\n items = set([])\r\n for item in xs:\r\n if item in items:\r\n raise ValueError(f\"Duplicate item {item}\")\r\n else:\r\n items.add(item)\r\n return items\r\n\r\n\r\nclass Record(BaseModel):\r\n tags: Set[int] = Field(...)\r\n\r\n @validator('tags', pre=True)\r\n def validate_unique_tags(cls, value):\r\n # raise ValueError or a set\r\n return custom_to_set(value)\r\n\r\ndef example() -> int:\r\n\r\n t1 = ['1', '2'] # Notice how liberal Pydantic is. This is not a validation error\r\n t2 = [1, 3, 1] # This now raises\r\n for t in [t1, t2]:\r\n print(Record(tags=t))\r\n return 0\r\n```\r\n\r\n\r\nIs this issue a bandaid for the fundamental loose casting/coercion model that Pydantic has adopted?\r\n\r\nhttps://github.com/samuelcolvin/pydantic/issues/1098\r\n\r\nYour proposal does seem reasonable and clarifies the default behavior of `Set[T]` in a \"strict\" mode should raise a core validation error if the input sequence has any duplicates. However, this is not how Pydantic has approached validation. Perhaps it would be better to wait until Pydantic has a coherent \"strict\" model?\r\n\r\nWith regards to the proposal, it's a bit counterintuitive to define a `Set[int]` and also set `Field(unique=True)` to get the validation to work \"correctly\". Similarly, `List[T] = Field(unique=True)` is counterintuitive because it's not clear why a `Set[T]` isn't being used. This case might be better solved by a custom validation/casting function with `pre=True`? \r\n\r\n\n@mpkocher This issue came up when I faced with an issue which I thought was a bug.\r\n\r\nI had a model and one of the fields had a type `Set[str]`, the model looked like:\r\n```python\r\nclass Form(BaseModel):\r\n tags: Set[int]\r\n```\r\nWhen one of the scripts tried to instantiate `Form` with duplicated items it simply cast sequence to a set, which was the wrong behavior for me script. After investigating I found that this is expected behavior, that why I create this PR)\r\n\r\nI agree with you that `List[T] = Field(unique=True)` can be a little bit confusing.\r\n\r\nRegarding #1098 it looks great and maybe it's better to return to this feature after `Strict Configuration` will be implemented.\nI'm running into similar friction points where the overly liberal casting yields surprising results. ":1,"I avoided adding `--jobs` until we decided on whether to use both `add`/`import-url` or one of them, but we will definitely need it for #5198! \nBy the way, `repo.imp_url()` already takes a `jobs=` parameter so only thing missing is a CLI arg":1,"Context: https://discordapp.com/channels/485586884165107732/485596304961962003/760249329637916682\nThis is quite an interesting bug. We do support reading only partial values, but it seems we are not making a distinction between `None` and 0/0.0 (falsy values). :slightly_smiling_face: \r\n\r\nhttps://github.com/iterative/dvc/blob/e3a7857b1352f1203504ed754482b9e308a48e2f/dvc/repo/metrics/show.py#L51\r\n\r\nThe fix is just to do `if m not in (None, {})` on that line.\r\nStill, there seems to be another bug when the metrics file contains a scalar value (i.e. json file with just `0` in it), so we might need similar checks there too.\r\n\r\nhttps://github.com/iterative/dvc/blob/e3a7857b1352f1203504ed754482b9e308a48e2f/dvc/repo/metrics/show.py#L83\r\n\r\n@rgasper, would you be interested in making a fix?\nIf you can direct me to instructions on how to setup the codebase, sure.\nSure, @rgasper, please [follow the contributing guide](https://dvc.org/doc/user-guide/contributing/core). If you have any problems, please don't hesitate to ask in the #dev-talk channel in the Discord.\r\n":1,"Initial discussion:\r\nhttps://discord.com/channels/485586884165107732/485596304961962003/933674011982983219\r\n\r\n@PietrassykFP has been setting up the credentials via environmental variables, so if the remote gets resolved properly, it should not raise issues. I created a local test aiming to verify if remote gets resolved properly and it seem to be working fine:\r\n\r\n```\r\ndef test_open_from_proper_remote(tmp_dir, erepo_dir, make_tmp_dir):\r\n default_remote = make_tmp_dir(\"default_storage\")\r\n other_remote = make_tmp_dir(\"other_storage\")\r\n\r\n with erepo_dir.chdir():\r\n with erepo_dir.dvc.config.edit() as conf:\r\n conf[\"remote\"][\"default\"] = {\"url\": str(default_remote)}\r\n conf[\"core\"][\"remote\"] = \"default\"\r\n conf[\"remote\"][\"other\"] = {\"url\": str(other_remote)}\r\n\r\n erepo_dir.dvc_gen({\"foo\": \"foo content\"}, commit=\"create file\")\r\n erepo_dir.dvc.push(remote=\"other\")\r\n\r\n (erepo_dir / \"foo\").unlink()\r\n shutil.rmtree((erepo_dir / \".dvc\" / \"cache\"))\r\n\r\n assert not (\r\n default_remote / \"6d\" / \"bda444875c24ec1bbdb433456be11f\"\r\n ).is_file()\r\n assert (other_remote / \"6d\" / \"bda444875c24ec1bbdb433456be11f\").is_file()\r\n\r\n with api.open(\"foo\", repo=os.fspath(erepo_dir)) as fd:\r\n result = fd.read()\r\n\r\n assert result == \"foo content\"\r\n```\r\n\r\nIt would be good to reproduce the problem and create a test, though that needs some more investigation.\nprobably related to https://github.com/iterative/dvc/issues/6930 (untracked files being removed on post-run apply)\nOnly occurs if `--set-param` modifies an untracked params file and `--temp` or `--queue` flag is passed. \r\n\r\nThe untracked params file gets added `scm.add` on [`_update_params`](https://github.com/iterative/dvc/blob/dbfe88510ff1185ba83c6b7d635275232f6ba086/dvc/repo/experiments/queue/base.py#L332) and stashed to [`stash_rev`](https://github.com/iterative/dvc/blob/dbfe88510ff1185ba83c6b7d635275232f6ba086/dvc/repo/experiments/queue/base.py#L406).\r\n\r\nWhen using `WorkspaceExecutor`, [`init_git`](https://github.com/iterative/dvc/blob/d0eda1d5385ba6a78247528327563b536a561b78/dvc/repo/experiments/executor/local.py#L166) applies a `merge` on the current git repo, \"restoring\" the params file in the user workspace.\r\n\r\nWhen using `TmpDirExecutor`, the `merge` is applied on the temp dir git repo, so the params file is never \"restored\" in the user workspace.\r\n\nIt seems like the behavior is expected then. However, maybe we can clarify in the docs that experiment files should be tracked by either Git or DVC?\nI stumbled over this today and found it at least \"unexpected\" and did not know how to fix it, until I came here.\nYes, I think I was incorrect to say it is expected behavior. In fact, I ran into it myself and was confused recently (see #8256). Should we be applying the merge on top of both the tmp dir and the workspace @pmrowla?\nApplying the merge on the workspace isn't the correct behavior, that would put `-S` modified params into the workspace and not the original params file.\r\n\r\nI think the fix for this should just be to use `stash_workspace(include_untracked=True)` here\r\nhttps://github.com/iterative/dvc/blob/55b16166c852a72e8dd47d4e32090063d27497e4/dvc/repo/experiments/queue/base.py#L320\r\n\r\nWhen we leave that context it is supposed to restore the workspace to the state before any DVC exp related changes/operations were done. `include_untracked` was omitted because we aren't supposed to be touching untracked files at all in exps, but the bug is we actually can modify untracked files in the params file case.":1,"Hello @snazzyfox \r\nUnfortunately we currently have some [custom logic](https://github.com/samuelcolvin/pydantic/blob/master/pydantic/schema.py#L820), which doesn't support `Frozenset` as the origin is `frozenset`.\r\nWe could probably just add a `if issubclass(origin, frozenset) ...` and add the immutability in `conset` or duplicate some logic to have a `confrozenset` \nThanks for taking a look! This totally makes sense. \r\n\r\nI think whether we create `confrozenset` or `conset(frozen=True)` depends on if we want to also duplicate the `ConstrainedSet` class. The code for frozen sets will most probably be almost identical, just with a different base class. If there's a way we can update/reuse the existing class, adding a immutable option to `conset` would make more sense.":1,"Great point! Our internal API function dvc/project.py:Project.status() actually returns a dict, which then gets printed in dvc/command/status.py, so it should be pretty easy to implement.\nJust need to add `--json` option in the dvc/cli.py for `dvc status` and then process it in `dvc/command/status.py` and json.dump() it, instead of printing as usual.\nYeah I took the Project.status() in mind, I'm starting to dive in the code 🤓\r\n\r\nI would additionally include the `callback`, `locked` and `checksum_changed` boolean values explicitly so that the status can be explained. This would also be useful in the human readable output IMO.\r\n\r\nAnd another thing, I'm thinking it would be useful to have separate `outs_changed` and `outs_missing` since the consequences are a bit different and should probably be reflected in the status icon - I would go for red with changed outputs and dark grey with missing outputs. I'm guessing users can pull someone else's repository and work with most DVC files without the outputs and I don't want the icons to scream in red. But when there's a mismatch in the output file's checksum, we should take that as a warning so red color makes sense.\nAlso, showing a status icon means that I have to turn these stage status properties into a single distinct status icon. Since it would be too much to have an icon for all the combinations, the way I see it is to process the properties by severity to produce something like this:\r\n\r\nIf locked -> locked (yellow lock icon overlay?)\r\nElse if any outputs changed -> outs_changed (red)\r\nElse if any outputs missing -> outs_missing (grey)\r\nElse if md5 changed -> checksum_changed (blue)\r\nElse if any dependencies (`--with-deps`) changed or missing -> deps_changed (orange)\r\nElse -> ok (original DVC colored icon)\r\n\r\nWe could also independently show `--always-reproduce` using some overlay, e.g. a yellow dot in bottom left.\r\n\r\nMaybe that logic should actually be done internally and shown in an additional field like `\"status\": \"deps_changed\"`. There could even be a `--simple` option that would show just this field in human readable / machine readable format.\r\n\n@prihoda Sounds amazing! But how would we handle a bunch of changed things? E.g. deps and outs changed. Black icon?\nPersonally I wouldn't go for those combinations, I would just process it from most severe to least severe and show the first one using the if-else approach. When the file is open, we can show a detailed description. If you consider all combinations of outputs (changed, missing, ok), dependencies (changed, ok) and checksum (changed, ok), it's 3x2x2 = 12 combinations.\r\n\r\nBut I'm definitely open for discussion on this. We could enumerate those combinations that might make sense to be shown explicitly and decide on each one based on the different use-cases. In my view, for a file with invalid outputs, I already know that it should be reproduced or pulled so I don't care that dependencies were changed as well.\r\n\r\nWe could each property with a different marker - like circles in bottom left and right corners, but that might start to become quite busy. I will try to experiment in Photoshop, it will be much easier to decide visually.\nMoving the discussion here: https://github.com/iterative/intellij-dvc-support-poc/issues/1 \nClosing this in favor of https://github.com/iterative/intellij-dvc/issues/1 .\nOops. sorry. This is still very relevant, reopening.\nSome implementation details here: https://github.com/iterative/dvc/issues/3975#issuecomment-640815774\r\n\r\n\nHello! So, I think this is a very useful feature to implement, because it gives the possibility to use it for integrations in IDEs, which is my case. Since I'm developing a plugin that makes use of dvc, it would be very interesting to have the possibility to run the dvc status and create a file with the output as you do when creating dvc pipelines. I also think that you should use YAML (like you do in .dvc files) or JSON so that it can be easily parsed in different languages (for example, I am making my java plugin). ":1,"We have already had a test about this.\r\n\r\nhttps://github.com/iterative/dvc/blob/68897aa0def4e509a27d75f2e78106495d08e0c4/tests/func/test_ls.py#L167-L173\r\n\r\nand it was introduced from #3246 at the beginning of `dvc list` command.":2,"@mroutis could you give more context please? \nSure, @shcheklein , let me edit the description :sweat_smile: . \n@mroutis @Suor Is this still relevant? There was some dedup optimization in brancher.\n@efiop , I'd say it is a priority 3 and more like an enhancement.\r\n\r\n```bash\r\ngit init\r\ndvc init\r\ndvc run -m foo 'echo 100 > foo'\r\ngit add -A\r\ngit commit -m \"metrics foo\"\r\n```\r\n\r\n```console\r\n$ dvc metrics show --all-branches\r\nworking tree:\r\n\tfoo: 100\r\nmaster:\r\n\tfoo: 100\r\n```\r\n\r\nSo, the problem is that `working tree` is being returned when you can tell that there's no difference between the current branch and the working tree (because _the HEAD is clean_ -- not sure if I'm using the correct terminology :sweat_smile: )\nOther approach is adding \"working tree\" with a comma, same as dup branches or tags. \nVotes split on priority and whether include it into the next sprint: +1s from @mroutis and @Suor, -1 from @efiop and @pared. \nGoes as bonus for next sprint if we go well. \nJust discovered that `--all-branches` and friends are noop for `dvc status` when neither `cloud` nor `remote` option is specified. So:\r\n- this issue is only about `metrics show`, other commands don't care\r\n- should fail or show a warning on a noop option use.\n@Suor Indeed, it only affects status if `-c` is also specified. So need a group of mutually required (or whatever it is called) flags there. ":1,"Inconsistency with dvc and git files. Will fix. Thanks for the report! :pray: ":1,"#4344 and this issue are related to staging in dvc-data.":1,"Would this be enough to simplify it?\r\n\r\n```\r\n$ dvc plots templates --help\r\nusage: dvc plots templates [-h] [-q | -v] [-l] [-o ] [