url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 48 51 | id int64 600M 3.43B | node_id stringlengths 18 24 | number int64 2 7.78k | title stringlengths 1 290 | user dict | labels listlengths 0 4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 4 | milestone dict | comments listlengths 0 30 | created_at stringdate 2020-04-14 18:18:51 2025-09-18 08:25:34 | updated_at stringdate 2020-04-29 09:23:05 2025-09-22 08:47:53 | closed_at stringlengths 20 20 ⌀ | author_association stringclasses 4 values | type null | active_lock_reason null | draft bool 0 classes | pull_request dict | body stringlengths 0 228k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | state_reason stringclasses 4 values | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool 1 class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7780/comments | https://api.github.com/repos/huggingface/datasets/issues/7780/events | https://github.com/huggingface/datasets/issues/7780 | 3,429,267,259 | I_kwDODunzps7MZnc7 | 7,780 | BIGPATENT dataset inaccessible (deprecated script loader) | {
"avatar_url": "https://avatars.githubusercontent.com/u/137755081?v=4",
"events_url": "https://api.github.com/users/ishmaifan/events{/privacy}",
"followers_url": "https://api.github.com/users/ishmaifan/followers",
"following_url": "https://api.github.com/users/ishmaifan/following{/other_user}",
"gists_url": "https://api.github.com/users/ishmaifan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ishmaifan",
"id": 137755081,
"login": "ishmaifan",
"node_id": "U_kgDOCDX5yQ",
"organizations_url": "https://api.github.com/users/ishmaifan/orgs",
"received_events_url": "https://api.github.com/users/ishmaifan/received_events",
"repos_url": "https://api.github.com/users/ishmaifan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ishmaifan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishmaifan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ishmaifan",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! I opened https://huggingface.co/datasets/NortheasternUniversity/big_patent/discussions/7 to update the dataset, hopefully it's merged soon !"
] | 2025-09-18T08:25:34Z | 2025-09-19T14:35:54Z | null | NONE | null | null | null | null | dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the datasets library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face Hub so that it can be accessed with datasets>=4.x?
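For reference, a minimal reproduction on my side (the config argument is omitted; the failure appears to occur before config resolution):
```python
from datasets import load_dataset

# Fails on datasets>=4.x because the repo still ships a script loader (big_patent.py)
ds = load_dataset("NortheasternUniversity/big_patent")
# RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
```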
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7780/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7780/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7777/comments | https://api.github.com/repos/huggingface/datasets/issues/7777/events | https://github.com/huggingface/datasets/issues/7777 | 3,424,462,082 | I_kwDODunzps7MHSUC | 7,777 | push_to_hub not overwriting but stuck in a loop when there are existing commits | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"HTTP 412 means a commit happened in the meantime, so `get_deletions_and_dataset_card` has to retry to get the latest version of the dataset card and what files to delete based on the latest version of the dataset repository\n\nAre you running other operations in the dataset repo for your push_to_hub ?",
"There w... | 2025-09-17T03:15:35Z | 2025-09-17T19:31:14Z | 2025-09-17T19:31:14Z | NONE | null | null | null | null | ### Describe the bug
`get_deletions_and_dataset_card` gets stuck on an "a commit has happened" error (HTTP 412) during `push_to_hub` with tag 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Write code that calls `push_to_hub` and run it twice, each time with different content for the `datasets.Dataset`.
The code gets stuck in the `time.sleep` loop of `get_deletions_and_dataset_card`. If the error is explicitly printed, it is HTTP 412.
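A minimal sketch of the reproduction (the repo id is a placeholder):
```python
from datasets import Dataset

repo_id = "user/demo-dataset"  # placeholder
Dataset.from_dict({"x": [1, 2, 3]}).push_to_hub(repo_id)
Dataset.from_dict({"x": [4, 5, 6]}).push_to_hub(repo_id)  # gets stuck retrying on HTTP 412
```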
### Expected behavior
New datasets overwrite existing one on repo.
### Environment info
datasets 4.1.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7777/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7772/comments | https://api.github.com/repos/huggingface/datasets/issues/7772/events | https://github.com/huggingface/datasets/issues/7772 | 3,417,353,751 | I_kwDODunzps7LsK4X | 7,772 | Error processing scalar columns using tensorflow. | {
"avatar_url": "https://avatars.githubusercontent.com/u/3871483?v=4",
"events_url": "https://api.github.com/users/khteh/events{/privacy}",
"followers_url": "https://api.github.com/users/khteh/followers",
"following_url": "https://api.github.com/users/khteh/following{/other_user}",
"gists_url": "https://api.github.com/users/khteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/khteh",
"id": 3871483,
"login": "khteh",
"node_id": "MDQ6VXNlcjM4NzE0ODM=",
"organizations_url": "https://api.github.com/users/khteh/orgs",
"received_events_url": "https://api.github.com/users/khteh/received_events",
"repos_url": "https://api.github.com/users/khteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/khteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/khteh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-09-15T10:36:31Z | 2025-09-15T10:49:17Z | null | NONE | null | null | null | null | `datasets==4.0.0`
```
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'end_idx', 'input_ids', 'attention_mask', 'start_positions', 'end_positions']
features:{'question': Value('string'), 'sentences': Value('string'), 'answer': Value('string'), 'str_idx': Value('int64'), 'end_idx': Value('int64'), 'input_ids': List(Value('int32')), 'attention_mask': List(Value('int8')), 'start_positions': Value('int64'), 'end_positions': Value('int64')}
```
`train_ds_tensor = train_ds['start_positions'].to_tensor(shape=(-1,1))` hits the following error:
```
AttributeError: 'Column' object has no attribute 'to_tensor'
```
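A possible workaround on my side (a sketch, assuming the `Column` can be materialized as a Python list) is to build the tensor explicitly before reshaping:
```python
import tensorflow as tf

# Materialize the Column into a list of scalars, then stack and reshape
start_positions = tf.stack(list(train_ds['start_positions']))
train_ds_tensor = tf.reshape(start_positions, shape=[-1, 1])
```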
`tf.reshape(train_ds['start_positions'], shape=[-1,1])` hits the following error:
```
TypeError: Scalar tensor has no `len()`
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7772/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7767/comments | https://api.github.com/repos/huggingface/datasets/issues/7767/events | https://github.com/huggingface/datasets/issues/7767 | 3,411,654,444 | I_kwDODunzps7LWbcs | 7,767 | Custom `dl_manager` in `load_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-09-12T19:06:23Z | 2025-09-12T19:07:52Z | null | NONE | null | null | null | null | ### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```
def load_dataset(
...
dl_manager: Optional[DownloadManager] = None, # add this new argument
**config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
...
# Create a dataset builder
builder_instance = load_dataset_builder(
path=path,
name=name,
data_dir=data_dir,
data_files=data_files,
cache_dir=cache_dir,
features=features,
download_config=download_config,
download_mode=download_mode,
revision=revision,
token=token,
storage_options=storage_options,
**config_kwargs,
)
# Return iterable dataset in case of streaming
if streaming:
return builder_instance.as_streaming_dataset(split=split)
# Note: This is the revised part
if dl_manager is None:
if download_config is None:
download_config = DownloadConfig(
cache_dir=builder_instance._cache_downloaded_dir,
force_download=download_mode == DownloadMode.FORCE_REDOWNLOAD,
force_extract=download_mode == DownloadMode.FORCE_REDOWNLOAD,
use_etag=False,
num_proc=num_proc,
token=builder_instance.token,
storage_options=builder_instance.storage_options,
) # We don't use etag for data files to speed up the process
dl_manager = DownloadManager(
dataset_name=builder_instance.dataset_name,
download_config=download_config,
data_dir=builder_instance.config.data_dir,
record_checksums=(
builder_instance._record_infos or verification_mode == VerificationMode.ALL_CHECKS
),
)
# Download and prepare data
builder_instance.download_and_prepare(
download_config=download_config,
download_mode=download_mode,
verification_mode=verification_mode,
dl_manager=dl_manager, # pass the new argument
num_proc=num_proc,
storage_options=storage_options,
)
...
```
### Motivation
In my case, I'm hoping to handle the downloading of cache files manually (e.g., not using hash filenames and saving to another location, or reusing existing local files).
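For illustration, a hypothetical call site once the argument exists (the repo id, path, and the `dl_manager` keyword itself are the proposed additions, not current API):
```python
from datasets import DownloadConfig, DownloadManager, load_dataset

# A manager configured to cache into a custom location instead of hash-named files
dl_manager = DownloadManager(
    dataset_name="my_dataset",
    download_config=DownloadConfig(cache_dir="/data/my_cache"),
)
ds = load_dataset("my_org/my_dataset", dl_manager=dl_manager)
```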
### Your contribution
It's already implemented above. If maintainers think this should be considered, I'll open a PR. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7767/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7766/comments | https://api.github.com/repos/huggingface/datasets/issues/7766/events | https://github.com/huggingface/datasets/issues/7766 | 3,411,611,165 | I_kwDODunzps7LWQ4d | 7,766 | cast columns to Image/Audio/Video with `storage_options` | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-09-12T18:51:01Z | 2025-09-12T18:51:01Z | null | NONE | null | null | null | null | ### Feature request
Allow `storage_options` to be passed in
1. `cast` related operations (e.g., `cast_columns, cast`)
2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
# dataset = dataset.cast_column("image_path", datasets.Image()) # now works without `storage_options`
# expected behavior
dataset = dataset.cast_column("image_path", datasets.Image(), storage_options={"anon": True})
```
### Motivation
I'm using my own registered fsspec filesystem (S3 with customized local cache support). I need to pass cache folder paths (`cache_dirs: list[str]`) to the filesystem when reading the remote images (cast from file paths).
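For context, a minimal sketch of the kind of filesystem involved (class name, protocol, and cache handling are placeholders; a real implementation would wrap `s3fs`):
```python
import fsspec

class CachedS3FileSystem(fsspec.AbstractFileSystem):
    protocol = "cached-s3"

    def __init__(self, cache_dirs=None, **kwargs):
        super().__init__(**kwargs)
        # Local folders consulted before hitting the remote store
        self.cache_dirs = cache_dirs or []

fsspec.register_implementation("cached-s3", CachedS3FileSystem)
```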
### Your contribution
Could help with a PR at weekends | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7766/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7765/comments | https://api.github.com/repos/huggingface/datasets/issues/7765/events | https://github.com/huggingface/datasets/issues/7765 | 3,411,556,378 | I_kwDODunzps7LWDga | 7,765 | polars dataset cannot cast column to Image/Audio/Video | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I fixed this with a combination of `to_dict` and `from_dict`:\n\n```py\ndatasets.Dataset.from_dict(df.to_dict(as_series=False))\n```",
"@samuelstevens Yeah, I'm using similar workaround as well. But it would be ideal if we can avoid the copy."
] | 2025-09-12T18:32:49Z | 2025-09-16T01:33:31Z | null | NONE | null | null | null | null | ### Describe the bug
A dataset created with `from_polars` cannot cast a column to Image/Audio/Video, while the same cast works with `from_pandas` and `from_dict`.
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # raises Error:
# pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
# pandas
df = pd.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_pandas(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
# {'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
# dict
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
# {'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
```
### Expected behavior
The `from_polars` case shouldn't raise an error and should produce the same outputs as `from_pandas` and `from_dict`.
### Environment info
```
# Name Version Build Channel
datasets 4.0.0 pypi_0 pypi
pandas 2.3.1 pypi_0 pypi
polars 1.32.3 pypi_0 pypi
``` | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7765/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7760/comments | https://api.github.com/repos/huggingface/datasets/issues/7760/events | https://github.com/huggingface/datasets/issues/7760 | 3,401,799,485 | I_kwDODunzps7Kw1c9 | 7,760 | Hugging Face Hub Dataset Upload CAS Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/142820182?v=4",
"events_url": "https://api.github.com/users/n-bkoe/events{/privacy}",
"followers_url": "https://api.github.com/users/n-bkoe/followers",
"following_url": "https://api.github.com/users/n-bkoe/following{/other_user}",
"gists_url": "https://api.github.com/users/n-bkoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/n-bkoe",
"id": 142820182,
"login": "n-bkoe",
"node_id": "U_kgDOCINDVg",
"organizations_url": "https://api.github.com/users/n-bkoe/orgs",
"received_events_url": "https://api.github.com/users/n-bkoe/received_events",
"repos_url": "https://api.github.com/users/n-bkoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/n-bkoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n-bkoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/n-bkoe",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"cc @jsulz maybe ?",
"Curious! I took a look at this and was unable to see why this would be occurring on our side. Tagging in @jgodlew and @bpronan since they might have insights. \n\n@n-bkoe just a few questions if you wouldn't mind: \n1. What kind of data are you uploading and what is the difference in file si... | 2025-09-10T10:01:19Z | 2025-09-16T20:01:36Z | null | NONE | null | null | null | null | ### Describe the bug
I am experiencing persistent 401 Unauthorized errors when attempting to upload datasets to the Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. I tried setting `HF_HUB_DISABLE_XET=1`. Uploads seem to work for smaller files.
Exact error message:
```
Processing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-10T09:44:35.657565Z ERROR Fatal Error: "cas::upload_xorb" api call failed (request id 01b[...]XXX): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX)
at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113
Processing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s
New Data Upload : 0%| | 0.00B / 184kB, 0.00B/s
❌ Failed to push some_dataset: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX
```
## Workaround Attempts
1. **Disabled XET**: Set `HF_HUB_DISABLE_XET=1` environment variable
2. **Updated hf-xet**: Used `hf-xet==1.1.9` rather than the latest
3. **Verified Authentication**: Confirmed HF token is valid and has write permissions
4. **Tested with Smaller Datasets**:
- 100 samples: ✅ **SUCCESS** (uploaded successfully)
- 10,000 samples: ❌ **FAILS** (401 Unauthorized)
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
# Create dataset (example with 10,000 samples)
dataset = Dataset.from_dict({
"question": questions,
"answer": answers,
# ... other fields
})
# Split into train/test
dataset_dict = dataset.train_test_split(test_size=0.1)
# Upload to Hub
dataset_dict.push_to_hub("Org/some-dataset")
```
### Expected behavior
## Expected Behavior
- Dataset should upload successfully to Hugging Face Hub
- Progress bars should complete without authentication errors
- Dataset should be accessible at the specified repository URL
## Actual Behavior
- Upload fails consistently with 401 Unauthorized error
- Error occurs specifically during CAS service interaction
- No progress is made on the upload (0% completion)
- Dataset is created on Hugging Face Hub with no data folder
### Environment info
- **Platform**: SageMaker (AWS)
- **Python Version**: 3.12
- **Libraries**:
- `datasets` library (latest version)
- `hf-xet==1.1.9` (attempted fix)
- **Authentication**: Hugging Face token configured
- **Dataset Size**: ~10,000 samples, works for smaller sizes (e.g. 100) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7760/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7759/comments | https://api.github.com/repos/huggingface/datasets/issues/7759/events | https://github.com/huggingface/datasets/issues/7759 | 3,398,099,513 | I_kwDODunzps7KiuI5 | 7,759 | Comment/feature request: Huggingface 502s from GHA | {
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Scott-Simmons",
"id": 52365471,
"login": "Scott-Simmons",
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Scott-Simmons",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-09-09T11:59:20Z | 2025-09-09T13:02:28Z | null | NONE | null | null | null | null | This is no longer a pressing issue, but for completeness I am reporting that in August 26th, GET requests to `https://datasets-server.huggingface.co/info\?dataset\=livebench/math` were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/48921123754) (that link will expire eventually, [here are the logs](https://github.com/user-attachments/files/22233578/logs_44225296943.zip)).
When invoked from actions, it appeared to be consistently failing for ~6 hours. However, these 502s never occurred when the request was invoked from my local machine in that same time period.
I suspect that this is related to how the requests are routed from GitHub Actions versus locally.
It's not clear to me whether the request even reached Hugging Face servers or whether it was the GitHub proxy that stopped it from going through, but I wanted to report it nonetheless in case this is helpful information. I'm curious whether Hugging Face can do anything on their end to confirm the cause.
And a feature request in case this happens in the future (assuming Hugging Face has visibility on it): a "datasets status" page highlighting whether 502s occur for specific individual datasets could be useful for people debugging on the other end of this!
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7759/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7759/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7758/comments | https://api.github.com/repos/huggingface/datasets/issues/7758/events | https://github.com/huggingface/datasets/issues/7758 | 3,395,590,783 | I_kwDODunzps7KZJp_ | 7,758 | Option for Anonymous Dataset link | {
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/egrace479",
"id": 38985481,
"login": "egrace479",
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"repos_url": "https://api.github.com/users/egrace479/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"type": "User",
"url": "https://api.github.com/users/egrace479",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-09-08T20:20:10Z | 2025-09-08T20:20:10Z | null | NONE | null | null | null | null | ### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!). However, we have an increasing challenge when it comes to sharing our datasets for paper (both conference and journal) submissions. Due to the need to share data anonymously, we can't use the Hugging Face URLs, but datasets tend to be too large for inclusion as a zip file. Being able to have an anonymous link would be great, since we can't double-publish the data.
### Your contribution
Sorry, I don't have a contribution to make to the implementation of this. Perhaps it would be possible to work off the [Anonymous GitHub](https://github.com/tdurieux/anonymous_github) code to generate something analogous with pointers to the data still on Hugging Face's servers (instead of the duplication of data required for the GitHub version)? | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7758/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7757/comments | https://api.github.com/repos/huggingface/datasets/issues/7757/events | https://github.com/huggingface/datasets/issues/7757 | 3,389,535,011 | I_kwDODunzps7KCDMj | 7,757 | Add support for `.conll` file format in datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/88763593?v=4",
"events_url": "https://api.github.com/users/namesarnav/events{/privacy}",
"followers_url": "https://api.github.com/users/namesarnav/followers",
"following_url": "https://api.github.com/users/namesarnav/following{/other_user}",
"gists_url": "https://api.github.com/users/namesarnav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/namesarnav",
"id": 88763593,
"login": "namesarnav",
"node_id": "MDQ6VXNlcjg4NzYzNTkz",
"organizations_url": "https://api.github.com/users/namesarnav/orgs",
"received_events_url": "https://api.github.com/users/namesarnav/received_events",
"repos_url": "https://api.github.com/users/namesarnav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/namesarnav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namesarnav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/namesarnav",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"That would be cool ! feel free to ping me if I can help reviewing a PR"
] | 2025-09-06T07:25:39Z | 2025-09-10T14:22:48Z | null | NONE | null | null | null | null | ### Feature request
I’d like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now, `.conll` datasets need to be manually parsed or preprocessed before being loaded into `datasets`. Having built-in support would save time and make workflows smoother for researchers and practitioners.
I propose:
Add a conll dataset builder or file parser to datasets that can:
- Read `.conll` files with customizable delimiters (space, tab).
- Handle sentence/document boundaries (typically indicated by empty lines).
- Support common CoNLL variants (e.g., CoNLL-2000 chunking, CoNLL-2003 NER).
- Output a dataset where each example contains:
- tokens: list of strings
- tags (or similar): list of labels aligned with tokens
Given a .conll snippet like:
```
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
```
The dataset should load as:
```
{
"tokens": ["EU", "rejects", "German", "call", "."],
"tags": ["B-ORG", "O", "B-MISC", "O", "O"]
}
```
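For illustration, a minimal sketch of the parsing logic such a builder would need (column indices and delimiter handling are simplified assumptions, not the final API):
```python
def read_conll(path, token_col=0, tag_col=-1):
    """Yield {"tokens": [...], "tags": [...]} examples from a CoNLL-style file."""
    tokens, tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line marks a sentence boundary
                if tokens:
                    yield {"tokens": tokens, "tags": tags}
                    tokens, tags = [], []
                continue
            parts = line.split()
            tokens.append(parts[token_col])
            tags.append(parts[tag_col])
    if tokens:  # flush the last sentence if the file has no trailing blank line
        yield {"tokens": tokens, "tags": tags}
```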
### Motivation
- CoNLL files are a standard benchmark format in NLP (e.g., CoNLL-2003, CoNLL-2000).
- Many users train NER or sequence labeling models (like BERT for token classification) directly on `.conll` files
- Right now you have to write your own parsing scripts. Built-in support would unify this process and be much more convenient
### Your contribution
I’d be happy to contribute by implementing this feature. My plan is to:
- Add a new dataset script (conll.py) to handle .conll files.
- Implement parsing logic that supports sentence/document boundaries and token-label alignment.
- Write unit tests with small `.conll` examples to ensure correctness.
- Add documentation and usage examples so new users can easily load `.conll` datasets.
This would be my first open source contribution, so I’ll follow the `CONTRIBUTING.md` guidelines closely and adjust based on feedback from the maintainers. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7757/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7757/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7756/comments | https://api.github.com/repos/huggingface/datasets/issues/7756/events | https://github.com/huggingface/datasets/issues/7756 | 3,387,076,693 | I_kwDODunzps7J4rBV | 7,756 | datasets.map(f, num_proc=N) hangs with N>1 when run on import | {
"avatar_url": "https://avatars.githubusercontent.com/u/20065?v=4",
"events_url": "https://api.github.com/users/arjunguha/events{/privacy}",
"followers_url": "https://api.github.com/users/arjunguha/followers",
"following_url": "https://api.github.com/users/arjunguha/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunguha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arjunguha",
"id": 20065,
"login": "arjunguha",
"node_id": "MDQ6VXNlcjIwMDY1",
"organizations_url": "https://api.github.com/users/arjunguha/orgs",
"received_events_url": "https://api.github.com/users/arjunguha/received_events",
"repos_url": "https://api.github.com/users/arjunguha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arjunguha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunguha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arjunguha",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-09-05T10:32:01Z | 2025-09-05T10:32:01Z | null | NONE | null | null | null | null | ### Describe the bug
If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top-level, Python hangs.
### Steps to reproduce the bug
1. Create a file that runs datasets.map at the top-level:
```bash
cat <<EOF > import_me.py
import datasets
the_dataset = datasets.load_dataset("openai/openai_humaneval")
the_dataset = the_dataset.map(lambda item: item, num_proc=2)
EOF
```
2. Start Python REPL:
```bash
uv run --python 3.12.3 --with "datasets==4.0.0" python3
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
3. Import the file:
```python
import import_me
```
Observe hang.
### Expected behavior
Ideally would not hang, or would fallback to num_proc=1 with a warning.
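In the meantime, a common workaround (a sketch, not official guidance) is to avoid running the map at import time, e.g. behind a main guard or a function:
```python
import datasets

def build_dataset():
    ds = datasets.load_dataset("openai/openai_humaneval")
    return ds.map(lambda item: item, num_proc=2)

if __name__ == "__main__":
    # Workers are only spawned when this file runs as a script;
    # importers call build_dataset() explicitly instead.
    the_dataset = build_dataset()
```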
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7756/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7756/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7753/comments | https://api.github.com/repos/huggingface/datasets/issues/7753/events | https://github.com/huggingface/datasets/issues/7753 | 3,381,831,487 | I_kwDODunzps7Jkqc_ | 7,753 | datasets massively slows data reads, even in memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/1191040?v=4",
"events_url": "https://api.github.com/users/lrast/events{/privacy}",
"followers_url": "https://api.github.com/users/lrast/followers",
"following_url": "https://api.github.com/users/lrast/following{/other_user}",
"gists_url": "https://api.github.com/users/lrast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lrast",
"id": 1191040,
"login": "lrast",
"node_id": "MDQ6VXNlcjExOTEwNDA=",
"organizations_url": "https://api.github.com/users/lrast/orgs",
"received_events_url": "https://api.github.com/users/lrast/received_events",
"repos_url": "https://api.github.com/users/lrast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lrast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lrast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lrast",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! you should try\n\n```python\nfrom datasets import Array3D, Dataset, Features, Value\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\n```\n\notherwise the type o... | 2025-09-04T01:45:24Z | 2025-09-18T22:08:51Z | null | NONE | null | null | null | null | ### Describe the bug
Loading image data in a Hugging Face dataset results in very slow read speeds, approximately 1000 times slower than reading the same data from a PyTorch dataset. This applies even when the dataset is loaded into RAM using the `keep_in_memory=True` flag.
The following script reproduces the result with random data, but it applies equally to datasets that are loaded from the hub.
### Steps to reproduce the bug
The following script should reproduce the behavior
```
import torch
import time
from datasets import Dataset
images = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, 200, (1000,), dtype=torch.uint8)
pt_dataset = torch.utils.data.TensorDataset(images, labels)
hf_dataset = Dataset.from_dict({'image': images, 'label':labels})
hf_dataset.set_format('torch', dtype=torch.uint8)
hf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)
# measure access speeds
def time_access(dataset, img_col):
start_time = time.time()
for i in range(1000):
_ = dataset[i][img_col].shape
end_time = time.time()
return end_time - start_time
print(f"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds")
print(f"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds")
print(f"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds")
```
### Expected behavior
For me, the above script produces
```
In-memory Tensor access: 0.0025 seconds
HF Dataset access: 2.9317 seconds
In-memory HF Dataset access: 2.8082 seconds
```
I think that this difference is larger than expected.
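As suggested in the comments, a sketch of the faster construction with explicit array features (based on the maintainer's snippet; `images`/`labels` are the tensors from the script above):
```python
from datasets import Array3D, Dataset, Features, Value

# Declaring the fixed-shape array type up front avoids per-item type inference
features = Features({"image": Array3D(shape=(3, 224, 224), dtype="uint8"), "label": Value("uint8")})
hf_dataset = Dataset.from_dict({"image": images, "label": labels}, features=features)
```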
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-14.7.7-arm64-arm-64bit
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7753/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7751/comments | https://api.github.com/repos/huggingface/datasets/issues/7751/events | https://github.com/huggingface/datasets/issues/7751 | 3,358,369,976 | I_kwDODunzps7ILKi4 | 7,751 | Dill version update | {
"avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4",
"events_url": "https://api.github.com/users/Navanit-git/events{/privacy}",
"followers_url": "https://api.github.com/users/Navanit-git/followers",
"following_url": "https://api.github.com/users/Navanit-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Navanit-git/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Navanit-git",
"id": 98005188,
"login": "Navanit-git",
"node_id": "U_kgDOBddwxA",
"organizations_url": "https://api.github.com/users/Navanit-git/orgs",
"received_events_url": "https://api.github.com/users/Navanit-git/received_events",
"repos_url": "https://api.github.com/users/Navanit-git/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Navanit-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Navanit-git/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Navanit-git",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"#7752 ",
"related: #7510 "
] | 2025-08-27T07:38:30Z | 2025-09-10T14:24:02Z | null | NONE | null | null | null | null | ### Describe the bug
Why is `datasets` not updating its `dill` version pin?
I just want to know what the repercussions would be if I updated the dill version.
For now, in multiple places I have to update libraries that require dill 0.4.0, so why not datasets?
I'm adding a PR too.
### Steps to reproduce the bug
.
### Expected behavior
.
### Environment info
. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7751/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7746/comments | https://api.github.com/repos/huggingface/datasets/issues/7746/events | https://github.com/huggingface/datasets/issues/7746 | 3,345,391,211 | I_kwDODunzps7HZp5r | 7,746 | Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version | {
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Awesome075",
"id": 187888489,
"login": "Awesome075",
"node_id": "U_kgDOCzLzaQ",
"organizations_url": "https://api.github.com/users/Awesome075/orgs",
"received_events_url": "https://api.github.com/users/Awesome075/received_events",
"repos_url": "https://api.github.com/users/Awesome075/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Awesome075",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@sayakpaul @a-r-r-o-w could you verify this issue then i can contribute to solve this issue!😊"
] | 2025-08-22T12:52:03Z | 2025-08-27T20:23:35Z | null | NONE | null | null | null | null | Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
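For reference, the Parquet version loads without a script:
```python
from datasets import load_dataset

ds = load_dataset("Awesome075/multi_news_parquet")
```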
Could the maintainers please either guide me through the process or update the official `multi_news` dataset themselves to use this working Parquet version? This would involve updating the canonical pointer for `multi_news` to resolve to the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you! | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7746/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7745/comments | https://api.github.com/repos/huggingface/datasets/issues/7745/events | https://github.com/huggingface/datasets/issues/7745 | 3,345,286,773 | I_kwDODunzps7HZQZ1 | 7,745 | Audio mono argument no longer supported, despite class documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jheitz",
"id": 5666041,
"login": "jheitz",
"node_id": "MDQ6VXNlcjU2NjYwNDE=",
"organizations_url": "https://api.github.com/users/jheitz/orgs",
"received_events_url": "https://api.github.com/users/jheitz/received_events",
"repos_url": "https://api.github.com/users/jheitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jheitz",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I want to solve this problem can you please assign it to me\nand also can you please guide whether the mono parameter is required to be re-added or the documentation needs an update?"
] | 2025-08-22T12:15:41Z | 2025-08-24T18:22:41Z | null | NONE | null | null | null | null | ### Describe the bug
Either update the documentation, or re-introduce the flag (and the corresponding logic to convert the audio to mono).
### Steps to reproduce the bug
`Audio(sampling_rate=16000, mono=True)` raises the error
`TypeError: Audio.__init__() got an unexpected keyword argument 'mono'`
However, in the class documentation, it says:
Args:
sampling_rate (`int`, *optional*):
Target sampling rate. If `None`, the native sampling rate is used.
mono (`bool`, defaults to `True`):
Whether to convert the audio signal to mono by averaging samples across
channels.
[...]
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
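In the meantime, a manual conversion sketch (assuming the decoded example exposes a NumPy array under `audio["array"]`, as in earlier `datasets` versions; key and shape conventions may differ in 4.x):
```python
import numpy as np

def to_mono(example):
    audio = example["audio"]
    array = np.asarray(audio["array"])
    if array.ndim > 1:
        # Average across the channel axis (assumed to be the smaller dimension)
        channel_axis = 0 if array.shape[0] < array.shape[1] else 1
        array = array.mean(axis=channel_axis)
    audio["array"] = array
    return example

# dataset = dataset.map(to_mono)
```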
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7745/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7744/comments | https://api.github.com/repos/huggingface/datasets/issues/7744/events | https://github.com/huggingface/datasets/issues/7744 | 3,343,510,686 | I_kwDODunzps7HSeye | 7,744 | dtype: ClassLabel is not parsed correctly in `features.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cmatKhan",
"id": 43553003,
"login": "cmatKhan",
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cmatKhan",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I think it's \"class_label\"",
"> I think it's \"class_label\"\n\nI see -- thank you. This works\n\n```yaml\nlicense: mit\nlanguage:\n- en\ntags:\n- genomics\n- yeast\n- transcription\n- perturbation\n- response\n- overexpression\npretty_name: Hackett, 2020 Overexpression\nsize_categories:\n- 1M<n<10M\ndataset_i... | 2025-08-21T23:28:50Z | 2025-09-10T15:23:41Z | 2025-09-10T15:23:41Z | NONE | null | null | null | null | `dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This YAML in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I changed `ClassLabel` to `string` to use a different dtype in order to avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
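For illustration, a minimal sketch of one possible fix (hypothetical, not an actual patch; `resolve_feature_type` is an invented helper name) would be a case-insensitive fallback when resolving feature type names:
```python
# Hypothetical sketch: fall back to a case-insensitive lookup so that
# "Classlabel" (produced by snakecase_to_camelcase) still resolves to ClassLabel.
def resolve_feature_type(_type: str, feature_types: dict):
    if _type in feature_types:
        return feature_types[_type]
    lowered = {name.lower(): cls for name, cls in feature_types.items()}
    if _type.lower() in lowered:
        return lowered[_type.lower()]
    raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(feature_types)}")
```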
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
    """Convert snake-case string to camel-case string."""
    name = _single_underscore_re.split(name)
    name = [_multiple_underscores_re.split(n) for n in name]
    return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")

snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cmatKhan",
"id": 43553003,
"login": "cmatKhan",
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cmatKhan",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7744/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7742/comments | https://api.github.com/repos/huggingface/datasets/issues/7742/events | https://github.com/huggingface/datasets/issues/7742 | 3,336,704,928 | I_kwDODunzps7G4hOg | 7,742 | module 'pyarrow' has no attribute 'PyExtensionType' | {
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mnedelko",
"id": 6106392,
"login": "mnedelko",
"node_id": "MDQ6VXNlcjYxMDYzOTI=",
"organizations_url": "https://api.github.com/users/mnedelko/orgs",
"received_events_url": "https://api.github.com/users/mnedelko/received_events",
"repos_url": "https://api.github.com/users/mnedelko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mnedelko",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Just checked out the files and thishad already been addressed",
"For others who find this issue: \n\n`pip install --upgrade \"datasets>=2.20.0\"` \n\nfrom https://github.com/explodinggradients/ragas/issues/2170#issuecomment-3204393672 can fix it."
] | 2025-08-20T06:14:33Z | 2025-09-09T02:51:46Z | null | NONE | null | null | null | null | ### Describe the bug
When importing certain libraries, users will encounter the following error, which can be traced back to the datasets library.
module 'pyarrow' has no attribute 'PyExtensionType'.
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will proceed to submit a PR with the below fix:
**Issue Reason**
The issue is that PyArrow version 21.0.0 doesn’t have PyExtensionType. This was changed in newer versions of PyArrow. The
PyExtensionType class was renamed to ExtensionType in PyArrow 13.0.0 and later versions.
**Issue Solution**
Making the following changes to the library files below should temporarily resolve the issue.
I will submit a PR to the datasets library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
> 521 self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
510 _type: str = field(default="Array5D", init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
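As an alternative to editing site-packages, a small compatibility shim (a sketch; it only assumes the public `pyarrow` attributes shown) could select the available base class at import time:
```python
# Sketch of a compatibility shim: use PyExtensionType where it still exists
# (older PyArrow releases) and fall back to ExtensionType on newer ones.
import pyarrow as pa

_ExtensionTypeBase = getattr(pa, "PyExtensionType", pa.ExtensionType)
print(_ExtensionTypeBase)  # shows which base class this PyArrow build provides
```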
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
!pip install ragas
from ragas import evaluate
### Expected behavior
The required package installs without issue.
### Environment info
In Jupyter Notebook.
venv | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7742/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7741/comments | https://api.github.com/repos/huggingface/datasets/issues/7741/events | https://github.com/huggingface/datasets/issues/7741 | 3,334,848,656 | I_kwDODunzps7GxcCQ | 7,741 | Preserve tree structure when loading HDF5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 2025-08-19T15:42:05Z | 2025-08-26T15:28:06Z | 2025-08-26T15:28:06Z | CONTRIBUTOR | null | null | null | null | ### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
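For instance, an HDF5 layout like `/meta/id` and `/meta/label` (group and dataset names here are assumptions for illustration) could map to nested features instead of flattened `meta/id` keys, a sketch:
```python
# Sketch: nested Features mirroring an HDF5 group hierarchy.
from datasets import Features, Value

features = Features(
    {
        "meta": {                      # HDF5 group
            "id": Value("int64"),      # /meta/id
            "label": Value("string"),  # /meta/label
        }
    }
)
print(features)
```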
### Your contribution
I'll open a PR (#7743) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7741/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7739/comments | https://api.github.com/repos/huggingface/datasets/issues/7739/events | https://github.com/huggingface/datasets/issues/7739 | 3,331,537,762 | I_kwDODunzps7Gkzti | 7,739 | Replacement of "Sequence" feature with "List" breaks backward compatibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4",
"events_url": "https://api.github.com/users/evmaki/events{/privacy}",
"followers_url": "https://api.github.com/users/evmaki/followers",
"following_url": "https://api.github.com/users/evmaki/following{/other_user}",
"gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/evmaki",
"id": 15764776,
"login": "evmaki",
"node_id": "MDQ6VXNlcjE1NzY0Nzc2",
"organizations_url": "https://api.github.com/users/evmaki/orgs",
"received_events_url": "https://api.github.com/users/evmaki/received_events",
"repos_url": "https://api.github.com/users/evmaki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evmaki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/evmaki",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Backward compatibility here means 4.0.0 can load datasets saved with older versions.\n\nYou will need 4.0.0 to load datasets saved with 4.0.0"
] | 2025-08-18T17:28:38Z | 2025-09-10T14:17:50Z | null | NONE | null | null | null | null | PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7739/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7738/comments | https://api.github.com/repos/huggingface/datasets/issues/7738/events | https://github.com/huggingface/datasets/issues/7738 | 3,328,948,690 | I_kwDODunzps7Ga7nS | 7,738 | Allow saving multi-dimensional ndarray with dynamic shapes | {
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ryan-minato",
"id": 82735346,
"login": "ryan-minato",
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"organizations_url": "https://api.github.com/users/ryan-minato/orgs",
"received_events_url": "https://api.github.com/users/ryan-minato/received_events",
"repos_url": "https://api.github.com/users/ryan-minato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ryan-minato",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"I agree this would be super valuable.\n\nIt looks like this was discussed a few years ago in https://github.com/huggingface/datasets/issues/5272#issuecomment-1550200824 but there were some issues. Those PRs are merged now and it looks like Arrow [officially supports](https://arrow.apache.org/docs/format/CanonicalE... | 2025-08-18T02:23:51Z | 2025-08-26T15:25:02Z | null | NONE | null | null | null | null | ### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data where the dimensions are not fixed.
A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example,
```python
{
    "shape": (5, 224, 224),
    "dtype": "uint8",
    "data": [...]
}
```
This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.
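Until such a feature exists, one workaround sketch (assuming reshaping on access is acceptable) is to store flattened data plus explicit shape/dtype columns:
```python
# Workaround sketch: emulate dynamic shapes with flat data + shape/dtype columns.
import numpy as np
from datasets import Dataset

arrays = [np.zeros((5, 16, 16), dtype=np.uint8), np.ones((3, 8, 8), dtype=np.uint16)]
ds = Dataset.from_dict(
    {
        "data": [a.ravel().tolist() for a in arrays],
        "shape": [list(a.shape) for a in arrays],
        "dtype": [str(a.dtype) for a in arrays],
    }
)

row = ds[0]
restored = np.asarray(row["data"], dtype=row["dtype"]).reshape(row["shape"])
```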
### Motivation
I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable.
The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.
https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614
A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).
### Your contribution
I am willing to create a PR to help implement this feature if the proposal is accepted. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7738/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7733/comments | https://api.github.com/repos/huggingface/datasets/issues/7733/events | https://github.com/huggingface/datasets/issues/7733 | 3,304,979,299 | I_kwDODunzps7E_ftj | 7,733 | Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path | {
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dennys246",
"id": 27898715,
"login": "dennys246",
"node_id": "MDQ6VXNlcjI3ODk4NzE1",
"organizations_url": "https://api.github.com/users/dennys246/orgs",
"received_events_url": "https://api.github.com/users/dennys246/received_events",
"repos_url": "https://api.github.com/users/dennys246/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dennys246",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"This is the download issues I come into, about ever other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />"
] | 2025-08-08T19:10:58Z | 2025-08-12T00:54:58Z | null | NONE | null | null | null | null | ### Describe the bug
I'm not sure if this is a bug or a feature, and I may not fully understand how dataset loading is supposed to work, but it appears there may be a bug in how locally stored Image() columns are accessed. I've uploaded a new dataset to Hugging Face (rmdig/rocky_mountain_snowpack), but I've run into a lot of trouble getting the images handled properly (at least in the way I'd expect them to be handled).
I find that I cannot use relative paths for loading images, whether remotely from the Hugging Face repo or from a local repository: the library always simply appends the relative path to my current working directory. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset root or abandon the dataset object structure, which I cannot imagine is intended. So I have to use URLs, since an absolute path on my system obviously wouldn't work for others. The URL works okay, but despite having the dataset downloaded locally, it appears to be re-downloaded every time I train my snowGAN model on it (and I often hit HTTPS errors for over-requesting the data).
Or maybe relative image paths aren't intended to be loaded directly through the datasets library as images, and should be kept as strings for the user to handle? If so, I feel like you're missing out on some pretty seamless functionality.
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
```yaml
---
dataset_info:
  features:
    - name: image
      dtype: Image
    - name: file_path
      dtype: Image
```
3. Initialize the dataset locally, make sure your working directory is not the dataset directory root
`dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')`
4. Access one of the samples and you'll get an error that the image was not found at current/working/directory/preprocessed/cores/image_1.png, showing that it's simply looking in the current working directory + relative path:
```
>>> dataset['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
image = PIL.Image.open(path)
^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```
### Expected behavior
I expect datasets and Image() to load the locally hosted data using the path/to/local/rocky_mountain_snowpack/ root (the path I pass to my datasets.load_dataset() call, or whatever you resolve on the backend) + the relative path.
Instead it appears to load from my current working directory + relative path.
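In other words (a sketch with illustrative paths):
```python
# Sketch of the two behaviors (paths are illustrative):
import os

dataset_root = "/path/to/local/rocky_mountain_snowpack"  # passed to load_dataset()
relative_path = "preprocessed/cores/image_1.png"         # stored in the file_path column

expected = os.path.join(dataset_root, relative_path)  # what I expect to be opened
observed = os.path.join(os.getcwd(), relative_path)   # what is actually opened
```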
### Environment info
Tested on…
Windows 11, Ubuntu Linux 22.04, and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7733/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7732/comments | https://api.github.com/repos/huggingface/datasets/issues/7732/events | https://github.com/huggingface/datasets/issues/7732 | 3,304,673,383 | I_kwDODunzps7E-VBn | 7,732 | webdataset: key errors when `field_name` has upper case characters | {
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YassineYousfi",
"id": 29985433,
"login": "YassineYousfi",
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YassineYousfi",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-08T16:56:42Z | 2025-08-08T16:56:42Z | null | CONTRIBUTOR | null | null | null | null | ### Describe the bug
When using a webdataset, each sample can be a collection of different "fields", like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
If the field_name contains uppercase characters, the HF webdataset integration throws a KeyError when trying to load the dataset,
e.g. from this dataset (now updated so that it doesn't throw this error):
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[1], line 2
1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1409 return builder_instance.as_streaming_dataset(split=split)
1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
1413 download_config=download_config,
1414 download_mode=download_mode,
1415 verification_mode=verification_mode,
1416 num_proc=num_proc,
1417 storage_options=storage_options,
1418 )
1420 # Build dataset for splits
1421 keep_in_memory = (
1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
892 if num_proc is not None:
893 prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
895 dl_manager=dl_manager,
896 verification_mode=verification_mode,
897 **prepare_split_kwargs,
898 **download_and_prepare_kwargs,
899 )
900 # Sync info
901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609 super()._download_and_prepare(
1610 dl_manager,
1611 verification_mode,
1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1613 or verification_mode == VerificationMode.ALL_CHECKS,
1614 **prepare_splits_kwargs,
1615 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
946 split_dict = SplitDict(dataset_name=self.dataset_name)
947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
950 # Checksums verification
951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
78 if not self.info.features:
79 # Get one example to get the feature types
80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
83 raise ValueError(
84 "The TAR archives of the dataset should be in WebDataset format, "
85 "but the files in the archive don't share the same prefix or the same types."
86 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
53 data_extension = field_name.split(".")[-1]
54 if data_extension in cls.DECODERS:
---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
56 if current_example:
57 yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
### Steps to reproduce the bug
unit test was added in: https://github.com/huggingface/datasets/pull/7726
it fails without the fix proposed in the same PR
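For illustration, my reading of the traceback above (an assumption, not verified against the source) is a case mismatch between how keys are stored and how they are looked up:
```python
# Sketch of the suspected mismatch: the field key is stored lowercased when
# reading the tar, but decoding looks it up with the original case.
field_name = "processed_log_IMU_magnetometer_value.npy"
current_example = {field_name.lower(): b"..."}  # stored as a lowercase key

current_example[field_name]  # raises KeyError, matching the traceback above
```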
### Expected behavior
Not throwing a key error.
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7732/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7731/comments | https://api.github.com/repos/huggingface/datasets/issues/7731/events | https://github.com/huggingface/datasets/issues/7731 | 3,303,637,075 | I_kwDODunzps7E6YBT | 7,731 | Add the possibility of a backend for audio decoding | {
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/intexcor",
"id": 142020129,
"login": "intexcor",
"node_id": "U_kgDOCHcOIQ",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"repos_url": "https://api.github.com/users/intexcor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/intexcor",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"is there a work around im stuck",
"never mind just downgraded"
] | 2025-08-08T11:08:56Z | 2025-08-20T16:29:33Z | null | NONE | null | null | null | null | ### Feature request
Add the possibility of a backend for audio decoding. Before version 4.0.0, soundfile was used; now torchcodec is used, but torchcodec requires ffmpeg, which is problematic to install on platforms like Colab. Therefore, I suggest adding a decoder selection when loading the dataset.
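For example (a hypothetical API sketch; the `backend` argument does not exist today):
```python
# Current API (real): decode audio at a given sampling rate.
from datasets import Audio, load_dataset

ds = load_dataset("some/audio-dataset", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Proposed (hypothetical): let the user pick the decoding backend.
# ds = ds.cast_column("audio", Audio(sampling_rate=16_000, backend="soundfile"))
```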
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7731/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7729/comments | https://api.github.com/repos/huggingface/datasets/issues/7729/events | https://github.com/huggingface/datasets/issues/7729 | 3,300,672,954 | I_kwDODunzps7EvEW6 | 7,729 | OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
"events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}",
"followers_url": "https://api.github.com/users/SaleemMalikAI/followers",
"following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}",
"gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SaleemMalikAI",
"id": 115183904,
"login": "SaleemMalikAI",
"node_id": "U_kgDOBt2RIA",
"organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs",
"received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events",
"repos_url": "https://api.github.com/users/SaleemMalikAI/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SaleemMalikAI",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-07T14:07:23Z | 2025-08-07T14:07:23Z | null | NONE | null | null | null | null | > Hi, is there any solution for that error? I tried to install this one:
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This is working fine, but tell me how to install a PyTorch version that is fit for GPU. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7729/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7728/comments | https://api.github.com/repos/huggingface/datasets/issues/7728/events | https://github.com/huggingface/datasets/issues/7728 | 3,298,854,904 | I_kwDODunzps7EoIf4 | 7,728 | NonMatchingSplitsSizesError and ExpectedMoreSplitsError | {
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/efsotr",
"id": 104755879,
"login": "efsotr",
"node_id": "U_kgDOBj5ypw",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"repos_url": "https://api.github.com/users/efsotr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/efsotr",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-07T04:04:50Z | 2025-08-07T07:31:47Z | null | NONE | null | null | null | null | ### Describe the bug
When loading a dataset, the info specified by `data_files` does not overwrite the original split info.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz",
                "validation": "en/c4-validation.00000-of-00008.json.gz"},
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train"
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
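A possible workaround sketch (assuming the errors come only from the split-metadata verification) is to disable those checks:
```python
# Workaround sketch: skip split-size/split-name verification when loading
# only a subset of the data files.
from datasets import load_dataset

traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    verification_mode="no_checks",
)
```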
### Expected behavior
No error
### Environment info
datasets 4.0.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7728/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7727/comments | https://api.github.com/repos/huggingface/datasets/issues/7727/events | https://github.com/huggingface/datasets/issues/7727 | 3,295,718,578 | I_kwDODunzps7EcKyy | 7,727 | config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally | {
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/doctorpangloss",
"id": 2229300,
"login": "doctorpangloss",
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"type": "User",
"url": "https://api.github.com/users/doctorpangloss",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-06T08:21:37Z | 2025-08-06T08:21:37Z | null | NONE | null | null | null | null | ### Describe the bug
```
- config_name: some_config
  data_files:
    - split: train
      path:
        - images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
  data_files:
    - split: train
      path:
        - ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same directory locally works fine.
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```
- config_name: some_config
  data_files:
    - split: train
      path:
        - ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
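A sketch of the normalization I would expect (illustrative only):
```python
# Sketch: normalizing the "./" prefix before building the hf:// URL.
import posixpath

pattern = "./images/xyz/*.jpg"
normalized = posixpath.normpath(pattern)  # -> "images/xyz/*.jpg"
url = f"hf://datasets/repoid/filesystem_path/{normalized}"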
### Environment info
datasets 4.0.0
datasets 3.4.0
Reproduced with both versions. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7727/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7724/comments | https://api.github.com/repos/huggingface/datasets/issues/7724/events | https://github.com/huggingface/datasets/issues/7724 | 3,292,315,241 | I_kwDODunzps7EPL5p | 7,724 | Can not stepinto load_dataset.py? | {
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/micklexqg",
"id": 13776012,
"login": "micklexqg",
"node_id": "MDQ6VXNlcjEzNzc2MDEy",
"organizations_url": "https://api.github.com/users/micklexqg/orgs",
"received_events_url": "https://api.github.com/users/micklexqg/received_events",
"repos_url": "https://api.github.com/users/micklexqg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/micklexqg",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-05T09:28:51Z | 2025-08-05T09:28:51Z | null | NONE | null | null | null | null | I set a breakpoint in "load_dataset.py" and tried to debug my data-loading code, but execution does not stop at any breakpoint, so "load_dataset.py" cannot be stepped into?
<!-- Failed to upload "截图 2025-08-05 17-25-18.png" --> | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7724/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7723/comments | https://api.github.com/repos/huggingface/datasets/issues/7723/events | https://github.com/huggingface/datasets/issues/7723 | 3,289,943,261 | I_kwDODunzps7EGIzd | 7,723 | Don't remove `trust_remote_code` arg!!! | {
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/autosquid",
"id": 758925,
"login": "autosquid",
"node_id": "MDQ6VXNlcjc1ODkyNQ==",
"organizations_url": "https://api.github.com/users/autosquid/orgs",
"received_events_url": "https://api.github.com/users/autosquid/received_events",
"repos_url": "https://api.github.com/users/autosquid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/autosquid",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-08-04T15:42:07Z | 2025-08-04T15:42:07Z | null | NONE | null | null | null | null | ### Feature request
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
Add `trust_remote_code` arg back please!
### Motivation
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
### Your contribution
Defaulting it to False is a nice balance; we need to manually set it to True in certain scenarios!
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7723/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7722/comments | https://api.github.com/repos/huggingface/datasets/issues/7722/events | https://github.com/huggingface/datasets/issues/7722 | 3,289,741,064 | I_kwDODunzps7EFXcI | 7,722 | Out of memory even though using load_dataset(..., streaming=True) | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-04T14:41:55Z | 2025-08-04T14:41:55Z | null | NONE | null | null | null | null | ### Describe the bug
I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time, and I eventually run into an OOM.
### Steps to reproduce the bug
```
ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i,sample in enumerate(tqdm(ds)):
target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')
try:
sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
except Exception as e:
print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect to have a small memory footprint and memory being freed after each iteration of the for loop. Instead the memory usage is increasing. I tried to remove the logic to write the sound file and just print the sample but the issue remains the same.
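For reference, a diagnostic sketch to confirm the growth (psutil is an assumed extra dependency):
```python
# Diagnostic sketch: print resident memory every 1000 samples while streaming.
import os

import psutil
from datasets import load_dataset

ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
proc = psutil.Process(os.getpid())
for i, sample in enumerate(ds):
    if i % 1000 == 0:
        print(f"sample {i}: rss = {proc.memory_info().rss / 1e6:.1f} MB")
```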
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7722/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7721/comments | https://api.github.com/repos/huggingface/datasets/issues/7721/events | https://github.com/huggingface/datasets/issues/7721 | 3,289,426,104 | I_kwDODunzps7EEKi4 | 7,721 | Bad split error message when using percentages | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I'd like to work on this: add clearer validation/messages for percent-based splits + tests",
"The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n raise ValueError(f'Unknown split \"{split}\". Should be one of {l... | 2025-08-04T13:20:25Z | 2025-08-14T14:42:24Z | null | NONE | null | null | null | null | ### Describe the bug
Hi, I'm trying to download a dataset. To not load the entire dataset in memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
Edit: Same happens with a split like _train[:90000]_
### Steps to reproduce the bug
```
for split in range(10):
    split_str = f"train[{split*10}%:{(split+1)*10}%]"
    print(f"Processing split {split_str}...")
    ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
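As a workable alternative sketch for streaming (percent slices need split sizes, which streaming cannot know up front), `skip`/`take` can emulate fixed-size chunks:
```python
# Alternative sketch: emulate 10 chunks with skip/take instead of percent slices.
from datasets import load_dataset

ds = load_dataset("user/dataset", split="train", streaming=True)
chunk_size = 10_000  # assumed chunk size; pick based on the dataset
for i in range(10):
    chunk = ds.skip(i * chunk_size).take(chunk_size)
    ...
```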
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7721/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7720/comments | https://api.github.com/repos/huggingface/datasets/issues/7720/events | https://github.com/huggingface/datasets/issues/7720 | 3,287,150,513 | I_kwDODunzps7D7e-x | 7,720 | Datasets 4.0 map function causing column not found | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The m... | 2025-08-03T12:52:34Z | 2025-08-07T19:23:34Z | null | NONE | null | null | null | null | ### Describe the bug
A column created by `map()` is not found in the new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction. After running `get_total_audio_length`, it errors out because `data` has no `duration` column.
```python
NUM_PROC = 4  # placeholder value; the original report does not show it

def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)
    print(data)
    durations = data["duration"]  # raises KeyError: 'duration' on datasets 4.0.0
    total_seconds = sum(durations)
    return total_seconds
```
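In case it helps triage, here is a self-contained sketch of the same pattern with dummy audio-like rows; the `NUM_PROC` value is an assumption, since the report doesn't show it:
```python
import numpy as np
from datasets import Dataset

NUM_PROC = 2  # assumption: any value > 1 exercises the multiprocessing path

ds = Dataset.from_dict(
    {"audio": [{"array": np.zeros(16000).tolist(), "sampling_rate": 16000}] * 4}
)

def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

data = ds.map(compute_duration, num_proc=NUM_PROC)
print(data.column_names)  # expected to contain "duration"
print(sum(data["duration"]))
```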
### Expected behavior
New datasets.Dataset instance should have new columns attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7720/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7719/comments | https://api.github.com/repos/huggingface/datasets/issues/7719/events | https://github.com/huggingface/datasets/issues/7719 | 3,285,928,491 | I_kwDODunzps7D20or | 7,719 | Specify dataset columns types in typehint | {
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Samoed",
"id": 36135455,
"login": "Samoed",
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"repos_url": "https://api.github.com/users/Samoed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Samoed",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-08-02T13:22:31Z | 2025-08-02T13:22:31Z | null | NONE | null | null | null | null | ### Feature request
Make `Dataset` optionally generic so column types can be expressed in annotations, as was done for `torch.utils.data.DataLoader`: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of `datasets` objects, but they're a bit poor in type hints. E.g. we can specify this for a `DataLoader`:
```python
from typing import TypedDict
from torch.utils.data import DataLoader
class CorpusInput(TypedDict):
title: list[str]
body: list[str]
class QueryInput(TypedDict):
query: list[str]
instruction: list[str]
def queries_loader() -> DataLoader[QueryInput]:
...
def corpus_loader() -> DataLoader[CorpusInput]:
...
```
But for `datasets` we can only describe the expected columns in a comment:
```python
from datasets import Dataset
QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str` """
```
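A minimal sketch of what the requested annotation could look like, mirroring the `DataLoader` pattern; all names here are illustrative, and no such generic exists in `datasets` today:
```python
from typing import Generic, TypedDict, TypeVar

T = TypeVar("T")

class TypedDataset(Generic[T]):  # hypothetical generic facade over datasets.Dataset
    def __getitem__(self, index: int) -> T:
        ...

class QueryRow(TypedDict):
    query: str
    instruction: str

def load_queries() -> TypedDataset[QueryRow]:
    ...
```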
### Your contribution
I can create draft implementation | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7719/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7717/comments | https://api.github.com/repos/huggingface/datasets/issues/7717/events | https://github.com/huggingface/datasets/issues/7717 | 3,282,855,127 | I_kwDODunzps7DrGTX | 7,717 | Cached dataset is not used when explicitly passing the cache_dir parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi, I've investigated this issue and can confirm the bug. Here are my findings:\n\n**1. Reproduction:**\nI was able to reproduce the issue on the latest `main` branch. Using the provided code snippet, `snapshot_download` correctly populates the custom `cache_dir`, but `load_dataset` with the same `cache_dir` trigg... | 2025-08-01T07:12:41Z | 2025-08-05T19:19:36Z | null | NONE | null | null | null | null | ### Describe the bug
Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset() the cached snapshot is not used. In both calls, I provide the cache_dir parameter.
### Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
from huggingface_hub import snapshot_download

def download_ds(name: str):
    snapshot_download(repo_id=name, repo_type="dataset", cache_dir="G:/Datasets/cache")

def prepare_ds():
    audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4, cache_dir="G:/Datasets/cache")
    print(audio_ds.features)  # fixed: the original snippet referenced an undefined `sfw_ds`

if __name__ == '__main__':
    download_ds("openslr/librispeech_asr")
    prepare_ds()
```
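A possible workaround sketch, assuming the root cause is that `snapshot_download` fills the hub cache while `load_dataset` resolves the repo id against its own cache: `snapshot_download` returns the local snapshot path, which can be passed to `load_dataset` directly.
```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="openslr/librispeech_asr",
    repo_type="dataset",
    cache_dir="G:/Datasets/cache",
)
# Load from the local snapshot instead of re-resolving the repo id remotely.
audio_ds = load_dataset(local_path, num_proc=4)
```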
### Expected behavior
I'd expect that the cached version of the dataset is used. Instead, the same dataset is downloaded again to the default cache directory.
### Environment info
Windows 11
datasets==4.0.0
Python 3.12.11 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7717/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7717/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7709/comments | https://api.github.com/repos/huggingface/datasets/issues/7709/events | https://github.com/huggingface/datasets/issues/7709 | 3,276,677,990 | I_kwDODunzps7DTiNm | 7,709 | Release 4.0.0 breaks usage patterns of with_format | {
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wittenator",
"id": 9154515,
"login": "wittenator",
"node_id": "MDQ6VXNlcjkxNTQ1MTU=",
"organizations_url": "https://api.github.com/users/wittenator/orgs",
"received_events_url": "https://api.github.com/users/wittenator/received_events",
"repos_url": "https://api.github.com/users/wittenator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wittenator",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"nump... | 2025-07-30T11:34:53Z | 2025-08-07T08:27:18Z | 2025-08-07T08:27:18Z | NONE | null | null | null | null | ### Describe the bug
Previously it was possible to access a whole column (e.g. in numpy format via `with_format`) by indexing the dataset with the column name. With the new `Column()` class this possibility seems to be gone. As far as I can see, this makes working on a whole column in memory more complex, e.g. normalizing an in-memory dataset for which per-example iteration would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new `Column` class yet.
### Steps to reproduce the bug
Steps to reproduce:
```python
from datasets import load_dataset

dataset = load_dataset("lhoestq/demo1")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```
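Per the comment in this thread, the new `Column` object still yields the underlying numpy data through indexing; a sketch, assuming the repo has a `train` split:
```python
from datasets import load_dataset

dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.with_format("numpy")

col = dataset["star"]  # Column object in datasets 4.x
arr = col[:]           # full column materialized as a numpy array
print(arr.ndim)
```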
### Expected behavior
Working on whole columns should be possible.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wittenator",
"id": 9154515,
"login": "wittenator",
"node_id": "MDQ6VXNlcjkxNTQ1MTU=",
"organizations_url": "https://api.github.com/users/wittenator/orgs",
"received_events_url": "https://api.github.com/users/wittenator/received_events",
"repos_url": "https://api.github.com/users/wittenator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wittenator",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7709/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7707/comments | https://api.github.com/repos/huggingface/datasets/issues/7707/events | https://github.com/huggingface/datasets/issues/7707 | 3,271,867,998 | I_kwDODunzps7DBL5e | 7,707 | load_dataset() in 4.0.0 failed when decoding audio | {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq . Would you please have a look at it? I use the official NV Docker ([NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`) on A100 and encountered this issue, but I don't know how to fix it.",
"Use !pip install -U datasets[audio]... | 2025-07-29T03:25:03Z | 2025-09-15T16:17:06Z | 2025-08-01T05:15:45Z | NONE | null | null | null | null | ### Describe the bug
Cannot decode audio data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```
On the first run, this raises:
```
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 172, in decode_example
raise ImportError("To support decoding audio data, please install 'torchcodec'.")
ImportError: To support decoding audio data, please install 'torchcodec'.
```
After `pip install torchcodec`, running again gives:
```
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/_metadata.py", line 16, in <module>
from torchcodec._core.ops import (
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 84, in <module>
load_torchcodec_shared_libraries()
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 69, in load_torchcodec_shared_libraries
raise RuntimeError(
RuntimeError: Could not load libtorchcodec. Likely causes:
1. FFmpeg is not properly installed in your environment. We support
versions 4, 5, 6 and 7.
2. The PyTorch version (2.8.0a0+5228986c39.nv25.06) is not compatible with
this version of TorchCodec. Refer to the version compatibility
table:
https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
3. Another runtime dependency; see exceptions below.
The following exceptions were raised as we tried to load libtorchcodec:
[start of libtorchcodec loading traceback]
FFmpeg version 7: libavutil.so.59: cannot open shared object file: No such file or directory
FFmpeg version 6: libavutil.so.58: cannot open shared object file: No such file or directory
FFmpeg version 5: libavutil.so.57: cannot open shared object file: No such file or directory
FFmpeg version 4: libavutil.so.56: cannot open shared object file: No such file or directory
[end of libtorchcodec loading traceback].
```
After `apt update && apt install ffmpeg -y`, it still fails with:
```
Traceback (most recent call last):
File "/workspace/jiqing/test_datasets.py", line 4, in <module>
print(dataset[0]["audio"]["array"])
~~~~~~~^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 198, in decode_example
audio = AudioDecoder(bytes, stream_index=self.stream_index, sample_rate=self.sampling_rate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_audio_decoder.py", line 62, in __init__
self._decoder = create_decoder(source=source, seek_mode="approximate")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_decoder_utils.py", line 33, in create_decoder
return core.create_from_bytes(source, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 144, in create_from_bytes
return create_from_tensor(buffer, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'torchcodec_ns::create_from_tensor' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchcodec_ns::create_from_tensor' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /__w/torchcodec/torchcodec/pytorch/torchcodec/src/torchcodec/_core/custom_ops.cpp:694 [kernel]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /opt/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMPS: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradXPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:108 [backend fallback]
AutogradLazy: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradMTIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMAIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMeta: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:99 [backend fallback]
Tracer: registered at /opt/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:542 [backend fallback]
AutocastMPS: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```
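For reference, the resolution suggested in the comments is to install the audio extra so a `torchcodec` build matched to `datasets` is pulled in; a sketch:
```python
# Suggested fix (run in a shell or notebook cell, then restart the runtime):
#   pip install -U "datasets[audio]"

from datasets import load_dataset

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = dataset[0]["audio"]  # decoded via torchcodec in datasets 4.x
```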
### Expected behavior
The result is
```
[0.00238037 0.0020752 0.00198364 ... 0.00042725 0.00057983 0.0010376 ]
```
on `datasets==3.6.0`
### Environment info
[NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`
```
- `datasets` version: 4.0.0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
} | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7707/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7707/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7705/comments | https://api.github.com/repos/huggingface/datasets/issues/7705/events | https://github.com/huggingface/datasets/issues/7705 | 3,269,070,499 | I_kwDODunzps7C2g6j | 7,705 | Can Not read installed dataset in dataset.load(.) | {
"avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4",
"events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangChiEn/followers",
"following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuangChiEn",
"id": 52521165,
"login": "HuangChiEn",
"node_id": "MDQ6VXNlcjUyNTIxMTY1",
"organizations_url": "https://api.github.com/users/HuangChiEn/orgs",
"received_events_url": "https://api.github.com/users/HuangChiEn/received_events",
"repos_url": "https://api.github.com/users/HuangChiEn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuangChiEn",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n\n```python\ndataset = load_dataset(local_directory_path)\n```",
"> You can download the dataset lo... | 2025-07-28T09:43:54Z | 2025-08-05T01:24:32Z | null | NONE | null | null | null | null | Hi, folks, I'm newbie in huggingface dataset api.
As title, i'm facing the issue that the dataset.load api can not connect to the installed dataset.
code snippet :
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />
Data path:
"/xxx/joseph/llava_ds/vlm_ds"
It contains all the video clips I want!
<img width="1398" height="261" alt="Image" src="https://github.com/user-attachments/assets/bf213b66-e344-4311-97e7-bc209677ae77" />
I run the Python script with:
<img width="1042" height="38" alt="Image" src="https://github.com/user-attachments/assets/8b3fcee4-e1a6-41b8-bee1-91567b00d9d2" />
But something bad happened: even though I provide the dataset path via "HF_HUB_CACHE", it still attempts to download the data from the remote side:
<img width="1697" height="813" alt="Image" src="https://github.com/user-attachments/assets/baa6cff1-a724-4710-a8c4-4805459deffb" />
Any suggestion will be appreciated!! | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7705/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7703/comments | https://api.github.com/repos/huggingface/datasets/issues/7703/events | https://github.com/huggingface/datasets/issues/7703 | 3,265,648,942 | I_kwDODunzps7Cpdku | 7,703 | [Docs] map() example uses undefined `tokenizer` — causes NameError | {
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sanjaykumar030",
"id": 183703408,
"login": "Sanjaykumar030",
"node_id": "U_kgDOCvMXcA",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sanjaykumar030",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I've submitted PR #7704 which adds documentation to clarify the behavior of `map()` when returning `None`."
] | 2025-07-26T13:35:11Z | 2025-07-27T09:44:35Z | null | CONTRIBUTOR | null | null | null | null | ## Description
The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time it's copied.
Here is the problematic line:
```python
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This assumes the user has already set up a tokenizer, which contradicts the goal of having self-contained, copy-paste-friendly examples.
## Problem
Users who copy and run the example as-is will encounter:
```python
NameError: name 'tokenizer' is not defined
```
This breaks the flow for users and violates HuggingFace's documentation principle that examples should "work as expected" when copied directly.
## Proposal
Update the example to include the required tokenizer setup using the Transformers library, like so:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This will help new users understand the workflow and apply the method correctly.
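For completeness, a fully self-contained variant; the toy `ds` here is a stand-in with a `text` column, since the docs define `ds` earlier on the page:
```python
from datasets import Dataset
from transformers import AutoTokenizer

ds = Dataset.from_dict({"text": ["hello world", "how are you"]})
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
print(ds_tokenized.column_names)
# ['text', 'input_ids', 'token_type_ids', 'attention_mask']
```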
## Note
This PR complements ongoing improvements like #7700, which clarifies multiprocessing in `.map()`. My change focuses on the undefined `tokenizer`, which causes a `NameError`.
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7703/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7700/comments | https://api.github.com/repos/huggingface/datasets/issues/7700/events | https://github.com/huggingface/datasets/issues/7700 | 3,263,922,255 | I_kwDODunzps7Ci4BP | 7,700 | [doc] map.num_proc needs clarification | {
"avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4",
"events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}",
"followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers",
"following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}",
"gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sfc-gh-sbekman",
"id": 196988264,
"login": "sfc-gh-sbekman",
"node_id": "U_kgDOC73NaA",
"organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs",
"received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events",
"repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sfc-gh-sbekman",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-07-25T17:35:09Z | 2025-07-25T17:39:36Z | null | NONE | null | null | null | null | https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached
shards are loaded sequentially.
```
for batch:
```
num_proc (int, optional, defaults to None): The number of processes to use for multiprocessing. If None, no
multiprocessing is used. This can significantly speed up batching for large datasets.
```
So what happens with `map.num_proc`: does it behave the same as `batch.num_proc`, i.e. multiprocessing is skipped only when `num_proc=None`?
Let's update the doc to be unambiguous.
**bonus**: we could make all of these behave like `DataLoader.num_workers`, where `num_workers == 0` implies no multiprocessing. That seems the most intuitive to me: with 0 workers the main process has to do all the work, and `None` could behave the same as `0`.
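To make the proposal concrete (illustrative only, not current behavior):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
fn = lambda ex: {"y": ex["x"] * 2}

ds.map(fn, num_proc=None)  # today: no multiprocessing; proposal: same as 0
# ds.map(fn, num_proc=0)   # proposal: main process does the work
#                          # (today this value is rejected)
ds.map(fn, num_proc=2)     # 2 worker processes, today and under the proposal
```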
context: debugging a failing `map`
Thank you! | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7700/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7699/comments | https://api.github.com/repos/huggingface/datasets/issues/7699/events | https://github.com/huggingface/datasets/issues/7699 | 3,261,053,171 | I_kwDODunzps7CX7jz | 7,699 | Broken link in documentation for "Create a video dataset" | {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The URL is ok but it seems the webdataset website is down. There seems to be a related issue here: https://github.com/webdataset/webdataset/issues/155\n\nFeel free to ask the authors there for an update. Otherwise happy to witch the link to the mirror shared in that issue"
] | 2025-07-24T19:46:28Z | 2025-07-25T15:27:47Z | null | NONE | null | null | null | null | The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" /> | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7699/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7698/comments | https://api.github.com/repos/huggingface/datasets/issues/7698/events | https://github.com/huggingface/datasets/issues/7698 | 3,255,350,916 | I_kwDODunzps7CCLaE | 7,698 | NotImplementedError when using streaming=True in Google Colab environment | {
"avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4",
"events_url": "https://api.github.com/users/Aniket17200/events{/privacy}",
"followers_url": "https://api.github.com/users/Aniket17200/followers",
"following_url": "https://api.github.com/users/Aniket17200/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniket17200/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Aniket17200",
"id": 100470741,
"login": "Aniket17200",
"node_id": "U_kgDOBf0P1Q",
"organizations_url": "https://api.github.com/users/Aniket17200/orgs",
"received_events_url": "https://api.github.com/users/Aniket17200/received_events",
"repos_url": "https://api.github.com/users/Aniket17200/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Aniket17200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniket17200/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Aniket17200",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi, @Aniket17200, try upgrading datasets using '!pip install -U datasets'. I hope this will resolve your issue.",
"Thank you @tanuj-rai, it's working great "
] | 2025-07-23T08:04:53Z | 2025-07-23T15:06:23Z | null | NONE | null | null | null | null | ### Describe the bug
When attempting to load a large dataset (like `tiiuae/falcon-refinedweb` or `allenai/c4`) using `streaming=True` in a standard Google Colab notebook, the process fails with `NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet`. This issue persists even after upgrading `datasets` and `huggingface_hub` and restarting the session.
### Steps to reproduce the bug
1. Open a new Google Colab notebook.
2. (Optional but recommended) Run `!pip install --upgrade datasets huggingface_hub` and restart the runtime.
3. Run the following code:
```python
from datasets import load_dataset

try:
    print("Attempting to load a stream...")
    streaming_dataset = load_dataset('tiiuae/falcon-refinedweb', streaming=True)
    print("Success!")
except Exception as e:
    print(e)
```
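Per the suggestion in the comments, upgrading `datasets` inside the Colab session appears to resolve the error; for completeness:
```python
# In a Colab cell, then restart the runtime:
#   !pip install -U datasets

from datasets import load_dataset

streaming_dataset = load_dataset("tiiuae/falcon-refinedweb", streaming=True)
print(next(iter(streaming_dataset["train"])))  # assumes the default "train" split
```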
### Expected behavior
The `load_dataset` command should return a streaming (`IterableDataset`) object without raising an error, allowing iteration over the dataset.
### Actual behavior
The code fails and prints the following error traceback:
[PASTE THE FULL ERROR TRACEBACK HERE]
(Note: Copy the entire error message you received, from Traceback... to the final error line, and paste it in this section.)
### Environment info
Platform: Google Colab
datasets version: [Run !pip show datasets in Colab and paste the version here]
huggingface_hub version: [Run !pip show huggingface_hub and paste the version here]
Python version: [Run !python --version and paste the version here] | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7698/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7697/comments | https://api.github.com/repos/huggingface/datasets/issues/7697/events | https://github.com/huggingface/datasets/issues/7697 | 3,254,526,399 | I_kwDODunzps7B_CG_ | 7,697 | - | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-07-23T01:30:32Z | 2025-07-25T15:21:39Z | 2025-07-25T15:21:39Z | NONE | null | null | null | null | - | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7697/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7696/comments | https://api.github.com/repos/huggingface/datasets/issues/7696/events | https://github.com/huggingface/datasets/issues/7696 | 3,253,433,350 | I_kwDODunzps7B63QG | 7,696 | load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Manalelaidouni",
"id": 25346345,
"login": "Manalelaidouni",
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Manalelaidouni",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! This is because `datasets` now uses the FFmpeg-based library `torchcodec` instead of the libsndfile-based library `soundfile` to decode audio data. Those two have different decoding implementations",
"I’m all for torchcodec, good luck with the migration!"
] | 2025-07-22T17:02:17Z | 2025-07-30T14:22:21Z | 2025-07-30T14:22:21Z | NONE | null | null | null | null | ### Describe the bug
In the datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions, which breaks integration tests that depend on consistent sample data across different environments (the two environments are specified below).
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(24000))
sample = ds[0]["audio"]["array"]
print(sample)
# sample in 3.6.0
[0.00231914 0.00245417 0.00187414 ... 0.00061956 0.00101157 0.00076325]
# sample in 4.0.0
array([0.00238037, 0.00220794, 0.00198703, ..., 0.00057983, 0.00085863,
0.00115309], dtype=float32)
```
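If the difference comes from the decoder swap rather than corrupted data, cross-version tests may need a tolerance-based comparison instead of exact equality; a sketch using the values above:
```python
import numpy as np

# first samples captured under 3.6.0 vs 4.0.0 (taken from the output above)
v36 = np.array([0.00231914, 0.00245417, 0.00187414])
v40 = np.array([0.00238037, 0.00220794, 0.00198703])

# assumption: a small absolute tolerance absorbs decoder-level differences
assert np.allclose(v36, v40, atol=5e-4)
```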
### Expected behavior
The same dataset should load identical samples across versions to maintain reproducibility.
### Environment info
First env:
- datasets version: 3.6.0
- Platform: Windows-10-10.0.26100-SP0
- Python: 3.11.0
Second env:
- datasets version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python: 3.11.13 | {
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Manalelaidouni",
"id": 25346345,
"login": "Manalelaidouni",
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Manalelaidouni",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7696/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7694/comments | https://api.github.com/repos/huggingface/datasets/issues/7694/events | https://github.com/huggingface/datasets/issues/7694 | 3,247,600,408 | I_kwDODunzps7BknMY | 7,694 | Dataset.to_json consumes excessive memory, appears to not be a streaming operation | {
"avatar_url": "https://avatars.githubusercontent.com/u/49603999?v=4",
"events_url": "https://api.github.com/users/ycq0125/events{/privacy}",
"followers_url": "https://api.github.com/users/ycq0125/followers",
"following_url": "https://api.github.com/users/ycq0125/following{/other_user}",
"gists_url": "https://api.github.com/users/ycq0125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ycq0125",
"id": 49603999,
"login": "ycq0125",
"node_id": "MDQ6VXNlcjQ5NjAzOTk5",
"organizations_url": "https://api.github.com/users/ycq0125/orgs",
"received_events_url": "https://api.github.com/users/ycq0125/received_events",
"repos_url": "https://api.github.com/users/ycq0125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ycq0125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ycq0125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ycq0125",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! to_json is memory efficient and writes the data by batch:\n\nhttps://github.com/huggingface/datasets/blob/d9861d86be222884dabbd534a2db770c70c9b558/src/datasets/io/json.py#L153-L159\n\nWhat memory are you mesuring ? If you are mesuring RSS, it is likely that it counts the memory mapped data of the dataset. Mem... | 2025-07-21T07:51:25Z | 2025-07-25T14:42:21Z | null | NONE | null | null | null | null | ### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation.
This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes. This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers.
<img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" />
### Steps to reproduce the bug
```
import os
from datasets import load_dataset, Dataset
from loguru import logger
# A public dataset to test with
REPO_ID = "adam89/TinyStoriesChinese"
SUBSET = "default"
SPLIT = "train"
NUM_ROWS_TO_LOAD = 10 # Use a reasonably large number to see the memory spike
def run_test():
"""Loads data into memory and then saves it, triggering the memory issue."""
logger.info("Step 1: Loading data into an in-memory Dataset object...")
# Create an in-memory Dataset object from a stream
# This simulates having a processed dataset ready to be saved
iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True)
limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD)
in_memory_dataset = Dataset.from_generator(limited_stream.__iter__)
logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.")
output_path = "./test_output.jsonl"
logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...")
logger.info("Please monitor memory usage during this step.")
# This is the step that causes the massive memory allocation
in_memory_dataset.to_json(output_path, force_ascii=False)
logger.info("Save operation complete.")
os.remove(output_path)
if __name__ == "__main__":
# To see the memory usage clearly, run this script with a memory profiler:
# python -m memray run your_script_name.py
# python -m memray tree xxx.bin
run_test()
```
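As a point of comparison, a manual streaming write that keeps memory bounded by the batch size, independent of how `.to_json()` behaves internally (`Dataset.iter` yields dicts of column lists):
```python
import json

def write_jsonl_streaming(dataset, path, batch_size=1000):
    """Write a datasets.Dataset to JSON Lines one batch at a time."""
    with open(path, "w", encoding="utf-8") as f:
        for batch in dataset.iter(batch_size=batch_size):
            keys = list(batch.keys())
            for row in zip(*(batch[k] for k in keys)):
                f.write(json.dumps(dict(zip(keys, row)), ensure_ascii=False) + "\n")
```
If `.to_json()` shows a memory spike proportional to the dataset size where this stays flat, that would point at buffering inside the export path.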
### Expected behavior
I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset.
### Environment info
datasets version:3.6.0
Python version:3.9.18
os:macOS 15.3.1 (arm64) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7694/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7694/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7693/comments | https://api.github.com/repos/huggingface/datasets/issues/7693/events | https://github.com/huggingface/datasets/issues/7693 | 3,246,369,678 | I_kwDODunzps7Bf6uO | 7,693 | Dataset scripts are no longer supported, but found superb.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/114297534?v=4",
"events_url": "https://api.github.com/users/edwinzajac/events{/privacy}",
"followers_url": "https://api.github.com/users/edwinzajac/followers",
"following_url": "https://api.github.com/users/edwinzajac/following{/other_user}",
"gists_url": "https://api.github.com/users/edwinzajac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/edwinzajac",
"id": 114297534,
"login": "edwinzajac",
"node_id": "U_kgDOBtAKvg",
"organizations_url": "https://api.github.com/users/edwinzajac/orgs",
"received_events_url": "https://api.github.com/users/edwinzajac/received_events",
"repos_url": "https://api.github.com/users/edwinzajac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/edwinzajac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwinzajac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/edwinzajac",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`",
"Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)... | 2025-07-20T13:48:06Z | 2025-09-04T10:32:12Z | null | NONE | null | null | null | null | ### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines), but the tutorial seems to work only with older `datasets` versions.
I then get the following error:
```
--------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[65], line 1
----> 1 dataset = datasets.load_dataset("superb", name="asr", split="test")
3 # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
4 # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
5 for out in tqdm(pipe(KeyDataset(dataset, "file"))):
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1387 verification_mode = VerificationMode(
1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
1389 )
1391 # Create a dataset builder
-> 1392 builder_instance = load_dataset_builder(
1393 path=path,
1394 name=name,
1395 data_dir=data_dir,
1396 data_files=data_files,
1397 cache_dir=cache_dir,
1398 features=features,
1399 download_config=download_config,
1400 download_mode=download_mode,
1401 revision=revision,
1402 token=token,
1403 storage_options=storage_options,
1404 **config_kwargs,
1405 )
1407 # Return iterable dataset in case of streaming
1408 if streaming:
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
1130 if features is not None:
1131 features = _fix_for_backward_compatible_features(features)
-> 1132 dataset_module = dataset_module_factory(
1133 path,
1134 revision=revision,
1135 download_config=download_config,
1136 download_mode=download_mode,
1137 data_dir=data_dir,
1138 data_files=data_files,
1139 cache_dir=cache_dir,
1140 )
1141 # Get dataset builder class
1142 builder_kwargs = dataset_module.builder_kwargs
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
1026 if isinstance(e1, FileNotFoundError):
1027 raise FileNotFoundError(
1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1030 ) from None
-> 1031 raise e1 from None
1032 else:
1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
981 try:
982 api.hf_hub_download(
983 repo_id=path,
984 filename=filename,
(...) 987 proxies=download_config.proxies,
988 )
--> 989 raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
990 except EntryNotFoundError:
991 # Use the infos from the parquet export except in some cases:
992 if data_dir or data_files or (revision and revision != "main"):
RuntimeError: Dataset scripts are no longer supported, but found superb.py
```
NB: I tried replacing "superb" with "anton-l/superb_demo", but then I get a 'torchcodec' import error. Maybe I misunderstood something.
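For anyone hitting this in the meantime, two hedged workarounds (neither verified for this dataset):
```python
# (1) Pin a datasets release that still executes dataset scripts:
#       pip install "datasets<4.0"
# (2) Read the Hub's auto-converted Parquet branch directly; the
#     refs%2Fconvert%2Fparquet revision and the glob below are assumptions
#     about this repo's layout, not verified.
from datasets import load_dataset

ds = load_dataset(
    "parquet",
    data_files="hf://datasets/superb@refs%2Fconvert%2Fparquet/asr/test/*.parquet",
    split="train",
)
```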
### Steps to reproduce the bug
```
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
### Expected behavior
Get the tutorial expected results
### Environment info
--- SYSTEM INFO ---
Operating System: Ubuntu 24.10
Kernel: Linux 6.11.0-29-generic
Architecture: x86-64
--- PYTHON ---
Python 3.11.13
--- VENV INFO ----
datasets=4.0.0
transformers=4.53
tqdm=4.67.1 | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7693/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7692/comments | https://api.github.com/repos/huggingface/datasets/issues/7692/events | https://github.com/huggingface/datasets/issues/7692 | 3,246,268,635 | I_kwDODunzps7BfiDb | 7,692 | xopen: invalid start byte for streaming dataset with trust_remote_code=True | {
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"events_url": "https://api.github.com/users/sedol1339/events{/privacy}",
"followers_url": "https://api.github.com/users/sedol1339/followers",
"following_url": "https://api.github.com/users/sedol1339/following{/other_user}",
"gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sedol1339",
"id": 5188731,
"login": "sedol1339",
"node_id": "MDQ6VXNlcjUxODg3MzE=",
"organizations_url": "https://api.github.com/users/sedol1339/orgs",
"received_events_url": "https://api.github.com/users/sedol1339/received_events",
"repos_url": "https://api.github.com/users/sedol1339/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sedol1339",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! it would be cool to convert this dataset to Parquet. This will make it work for `datasets>=4.0`, enable the Dataset Viewer and make it more reliable to load/stream (currently it uses a loading script in python and those are known for having issues sometimes)\n\nusing `datasets==3.6.0`, here is the command to ... | 2025-07-20T11:08:20Z | 2025-07-25T14:38:54Z | null | NONE | null | null | null | null | ### Describe the bug
I am trying to load the YODAS2 dataset with datasets==3.6.0:
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
This fails with `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte`.
The cause of the error is the following:
```
from datasets.utils.file_utils import xopen
filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
xopen(filepath, 'r').read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
And the cause of this is the following:
```
import fsspec
fsspec.open(
'hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json',
mode='r',
hf={'token': None, 'endpoint': 'https://huggingface.co'},
).open().read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
Is it true that streaming=True loading is not supported anymore for trust_remote_code=True, even with datasets==3.6.0? This breaks backward compatibility.
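A small diagnostic sketch (hedged): reading the first bytes in binary mode shows whether the file is actually compressed data rather than UTF-8 text, which would explain the decode failure:
```python
import fsspec

# A b"\x1f\x8b" prefix would mean the "json" file is gzip data being
# decoded as plain UTF-8 text; any other non-text prefix points at a
# similar encoding mismatch.
url = "hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json"
with fsspec.open(url, mode="rb") as f:
    print(f.read(4))
```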
### Steps to reproduce the bug
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True)))
```
### Expected behavior
No errors expected
### Environment info
datasets==3.6.0, ubuntu 24.04 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7692/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7691/comments | https://api.github.com/repos/huggingface/datasets/issues/7691/events | https://github.com/huggingface/datasets/issues/7691 | 3,245,547,170 | I_kwDODunzps7Bcx6i | 7,691 | Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90",
"It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to j... | 2025-07-19T18:40:27Z | 2025-07-25T08:51:10Z | null | NONE | null | null | null | null | ### Describe the bug
I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2 GB. The instant I hit a shard containing one of those videos, I get an ArrowCapacityError, even with streaming.
I made a config for the dataset that specifically includes just one problem shard, and the error triggers as soon as you run load_dataset(), even with streaming=True:
```
ds = load_dataset("bible-nlp/sign-bibles", "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard", streaming=True, split="train")
```
This gives:
```
File "/opt/home/cleong/projects/semantic_and_visual_similarity/sign-bibles-dataset/sign_bibles_dataset/tasks/test_iteration.py", line 13, in iterate_keys
ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/load.py", line 1409, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/builder.py", line 1225, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 88, in _split_generators
pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 2046, in pyarrow.lib._Tabular.from_pylist
File "pyarrow/table.pxi", line 6431, in pyarrow.lib._from_pylist
File "pyarrow/table.pxi", line 4893, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1607, in pyarrow.lib._sanitize_arrays
File "pyarrow/table.pxi", line 1588, in pyarrow.lib._schema_from_arrays
File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
File "pyarrow/array.pxi", line 45, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 3980158992
```
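A hedged mitigation sketch: passing explicit `features` may skip the type inference shown failing in the traceback, and `large_binary` uses 64-bit offsets (so no ~2 GiB cap per array). The column names below (`mp4` etc.) are assumptions about the shard contents, not verified:
```python
from datasets import Features, Value, load_dataset

# Declare the video column as large_binary instead of letting inference
# pick binary (32-bit offsets, capped at ~2 GiB).
features = Features({
    "__key__": Value("string"),
    "__url__": Value("string"),
    "mp4": Value("large_binary"),
})
ds = load_dataset(
    "bible-nlp/sign-bibles",
    "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard",
    streaming=True,
    split="train",
    features=features,
)
```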
### Steps to reproduce the bug
```python
#!/usr/bin/env python
import argparse
from datasets import get_dataset_config_names, load_dataset
from tqdm import tqdm
from pyarrow.lib import ArrowCapacityError, ArrowInvalid
def iterate_keys(language_subset: str) -> None:
"""Iterate over all samples in the Sign Bibles dataset and print idx and sample key."""
# https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/loading_methods#datasets.load_dataset
ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
print(f"\n==> Loaded dataset config '{language_subset}'")
idx = 0
estimated_shard_index = 0
samples_per_shard = 5
with tqdm(desc=f"{language_subset} samples") as pbar:
iterator = iter(ds)
while True:
try:
if idx % samples_per_shard == 0 and idx > 0: # 5 samples per shard: 0, 1, 2, 3, 4
print(f"Estimated Shard idx (starting at 0, {samples_per_shard}/shard): {estimated_shard_index}")
estimated_shard_index += 1
sample = next(iterator)
sample_key = sample.get("__key__", "missing-key")
print(f"[{language_subset}] idx={idx}, key={sample_key}")
idx += 1
pbar.update(1)
except StopIteration:
print(f"Finished iterating through {idx} samples of {language_subset}")
break
except (ArrowCapacityError, ArrowInvalid) as e:
print(f"PyArrow error on idx={idx}, config={language_subset}: {e}")
idx += 1
pbar.update(1)
continue
except KeyError as e:
print(f"Missing key error on idx={idx}, config={language_subset}: {e}")
idx += 1
pbar.update(1)
continue
def main():
configs = get_dataset_config_names("bible-nlp/sign-bibles")
print(f"Available configs: {configs}")
configs = [
"ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard"
]
for language_subset in configs:
print(f"TESTING CONFIG {language_subset}")
iterate_keys(language_subset)
# try:
# except (ArrowCapacityError, ArrowInvalid) as e:
# print(f"PyArrow error at config level for {language_subset}: {e}")
# continue
# except RuntimeError as e:
# print(f"RuntimeError at config level for {language_subset}: {e}")
# continue
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Iterate through Sign Bibles dataset and print sample keys.")
args = parser.parse_args()
main()
```
### Expected behavior
I expect that, when I load with streaming=True, no data is actually loaded up front.
I did expect to have some trouble with large files, but the streaming mode should not actually try to load them unless requested, e.g. with sample["mp4"].
https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset says that with streaming=True:
> In the streaming case:
> Don’t download or cache anything. Instead, the dataset is lazily loaded and will be streamed on-the-fly when iterating on it.
### Environment info
Local setup: Conda environment on Ubuntu, pip list includes the following
datasets 4.0.0
pyarrow 20.0.0
Verified on Colab: https://colab.research.google.com/drive/1HdN8stlROWrLSYXUoNeV0vQ9pClhIVM8?usp=sharing, though there it crashes by using up all available RAM
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7691/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7689/comments | https://api.github.com/repos/huggingface/datasets/issues/7689/events | https://github.com/huggingface/datasets/issues/7689 | 3,242,580,301 | I_kwDODunzps7BRdlN | 7,689 | BadRequestError for loading dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/45011687?v=4",
"events_url": "https://api.github.com/users/WPoelman/events{/privacy}",
"followers_url": "https://api.github.com/users/WPoelman/followers",
"following_url": "https://api.github.com/users/WPoelman/following{/other_user}",
"gists_url": "https://api.github.com/users/WPoelman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WPoelman",
"id": 45011687,
"login": "WPoelman",
"node_id": "MDQ6VXNlcjQ1MDExNjg3",
"organizations_url": "https://api.github.com/users/WPoelman/orgs",
"received_events_url": "https://api.github.com/users/WPoelman/received_events",
"repos_url": "https://api.github.com/users/WPoelman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WPoelman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WPoelman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WPoelman",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Same here, for `HuggingFaceFW/fineweb`. Code that worked with no issues for the last 2 months suddenly fails today. Tried updating `datasets`, `huggingface_hub`, `fsspec` to newest versions, but the same error occurs.",
"I'm also hitting this issue, with `mandarjoshi/trivia_qa`; My dataset loading was working su... | 2025-07-18T09:30:04Z | 2025-07-18T11:59:51Z | 2025-07-18T11:52:29Z | NONE | null | null | null | null | ### Describe the bug
Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand
✖ Invalid input: expected array, received string
→ at paths
✖ Invalid input: expected boolean, received string
→ at expand
```
I tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both.
What can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution.
### Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset("Helsinki-NLP/europarl", "en-fr", streaming=True, trust_remote_code=True)["train"]
```
### Expected behavior
That the dataset loads as it did a couple days ago.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sergiopaniego",
"id": 17179696,
"login": "sergiopaniego",
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sergiopaniego",
"user_view_type": "public"
} | {
"+1": 23,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7689/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7688/comments | https://api.github.com/repos/huggingface/datasets/issues/7688/events | https://github.com/huggingface/datasets/issues/7688 | 3,238,851,443 | I_kwDODunzps7BDPNz | 7,688 | No module named "distributed" | {
"avatar_url": "https://avatars.githubusercontent.com/u/45058324?v=4",
"events_url": "https://api.github.com/users/yingtongxiong/events{/privacy}",
"followers_url": "https://api.github.com/users/yingtongxiong/followers",
"following_url": "https://api.github.com/users/yingtongxiong/following{/other_user}",
"gists_url": "https://api.github.com/users/yingtongxiong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yingtongxiong",
"id": 45058324,
"login": "yingtongxiong",
"node_id": "MDQ6VXNlcjQ1MDU4MzI0",
"organizations_url": "https://api.github.com/users/yingtongxiong/orgs",
"received_events_url": "https://api.github.com/users/yingtongxiong/received_events",
"repos_url": "https://api.github.com/users/yingtongxiong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yingtongxiong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yingtongxiong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yingtongxiong",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The error ModuleNotFoundError: No module named 'datasets.distributed' means your installed datasets library is too old or incompatible with the version of Library you are using(in my case it was BEIR). The datasets.distributed module was removed in recent versions of the datasets library.\n\nDowngrade datasets to ... | 2025-07-17T09:32:35Z | 2025-07-25T15:14:19Z | null | NONE | null | null | null | null | ### Describe the bug
Hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always get the error "No module named 'datasets.distributed'" in different versions such as 4.0.0, 2.21.0, and so on. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.distributed import split_dataset_by_node
### Expected behavior
Expecting the command "from datasets.distributed import split_dataset_by_node" to run successfully.
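As a sanity check, a small hedged sketch to confirm which installation is actually being imported (a shadowing local `datasets` module or a stale environment is a common cause of this kind of ImportError):
```python
# Print the version and file path of the imported datasets package before
# attempting the import that fails.
import datasets

print(datasets.__version__)
print(datasets.__file__)

from datasets.distributed import split_dataset_by_node  # noqa: E402
```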
### Environment info
python: 3.12 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7688/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7687/comments | https://api.github.com/repos/huggingface/datasets/issues/7687/events | https://github.com/huggingface/datasets/issues/7687 | 3,238,760,301 | I_kwDODunzps7BC49t | 7,687 | Datasets keeps rebuilding the dataset every time i call the python script | {
"avatar_url": "https://avatars.githubusercontent.com/u/58883113?v=4",
"events_url": "https://api.github.com/users/CALEB789/events{/privacy}",
"followers_url": "https://api.github.com/users/CALEB789/followers",
"following_url": "https://api.github.com/users/CALEB789/following{/other_user}",
"gists_url": "https://api.github.com/users/CALEB789/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CALEB789",
"id": 58883113,
"login": "CALEB789",
"node_id": "MDQ6VXNlcjU4ODgzMTEz",
"organizations_url": "https://api.github.com/users/CALEB789/orgs",
"received_events_url": "https://api.github.com/users/CALEB789/received_events",
"repos_url": "https://api.github.com/users/CALEB789/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CALEB789/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CALEB789/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CALEB789",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"here is the code to load the dataset form the cache:\n\n```python\ns = load_dataset('databricks/databricks-dolly-15k')['train']\n```\n\nif you pass the location of a local directory it will create a new cache based on that directory content"
] | 2025-07-17T09:03:38Z | 2025-07-25T15:21:31Z | null | NONE | null | null | null | null | ### Describe the bug
Every time the script runs, the number of samples somehow increases.
This can cause a 12 MB dataset to accumulate additional built versions totalling 400 MB+.
<img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" />
### Steps to reproduce the bug
```python
from datasets import load_dataset
s = load_dataset('~/.cache/huggingface/datasets/databricks___databricks-dolly-15k')['train']
```
1. A dataset needs to be available in the .cache folder
2. Run the code multiple times, and every time it runs, more versions are created
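Mirroring the maintainer comment above, here is a hedged sketch of the loading pattern that reuses the cache instead of rebuilding:
```python
# Load by repo id and let load_dataset reuse the cached build, rather than
# passing the cache directory itself as `path` (which datasets treats as a
# local data directory and rebuilds on every run).
from datasets import load_dataset

s = load_dataset("databricks/databricks-dolly-15k")["train"]
```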
### Expected behavior
The number of samples increases every time the script runs
### Environment info
- `datasets` version: 3.6.0
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.13.3
- `huggingface_hub` version: 0.32.3
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
| null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7687/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7687/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7686/comments | https://api.github.com/repos/huggingface/datasets/issues/7686/events | https://github.com/huggingface/datasets/issues/7686 | 3,237,201,090 | I_kwDODunzps7A88TC | 7,686 | load_dataset does not check .no_exist files in the hub cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/3627235?v=4",
"events_url": "https://api.github.com/users/jmaccarl/events{/privacy}",
"followers_url": "https://api.github.com/users/jmaccarl/followers",
"following_url": "https://api.github.com/users/jmaccarl/following{/other_user}",
"gists_url": "https://api.github.com/users/jmaccarl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmaccarl",
"id": 3627235,
"login": "jmaccarl",
"node_id": "MDQ6VXNlcjM2MjcyMzU=",
"organizations_url": "https://api.github.com/users/jmaccarl/orgs",
"received_events_url": "https://api.github.com/users/jmaccarl/received_events",
"repos_url": "https://api.github.com/users/jmaccarl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmaccarl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmaccarl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmaccarl",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-07-16T20:04:00Z | 2025-07-16T20:04:00Z | null | NONE | null | null | null | null | ### Describe the bug
I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack.
The fundamental issue is that the `load_dataset` API doesn't use the `.no_exist` files in the hub cache, unlike other wrapper APIs that do. This is because `utils.file_utils.cached_path` directly calls `hf_hub_download` instead of using `file_download.try_to_load_from_cache` from `huggingface_hub` (see the `transformers` library's `utils.hub.cached_files` for one alternate example).
This results in unnecessary metadata HTTP requests occurring for files that don't exist on every call. It won't generate the .no_exist cache files, nor will it use them.
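For reference, a hedged sketch of the cache-first lookup pattern mentioned above (`try_to_load_from_cache` is public `huggingface_hub` API; `_CACHED_NO_EXIST` is a private sentinel, so its import path may change):
```python
from huggingface_hub import try_to_load_from_cache
from huggingface_hub.file_download import _CACHED_NO_EXIST

# Returns a local path if cached, None if unknown, or the _CACHED_NO_EXIST
# sentinel if a previous lookup recorded that the file does not exist.
resolved = try_to_load_from_cache(
    repo_id="Salesforce/wikitext",
    filename="wikitext.py",
    repo_type="dataset",
    revision="b08601e04326c79dfdd32d625aee71d232d685c3",
)
if resolved is _CACHED_NO_EXIST:
    print("cached non-existence: no HTTP request needed")
elif resolved is None:
    print("not in cache: fetch once, then record a .no_exist entry")
else:
    print("cached at:", resolved)
```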
### Steps to reproduce the bug
Run the following snippet as one example (setting cache dirs to clean paths for clarity)
`env HF_HOME=~/local_hf_hub python repro.py`
```
from datasets import load_dataset
import huggingface_hub
# monkeypatch to print out metadata requests being made
original_get_hf_file_metadata = huggingface_hub.file_download.get_hf_file_metadata
def get_hf_file_metadata_wrapper(*args, **kwargs):
print("File metadata request made (get_hf_file_metadata):", args, kwargs)
return original_get_hf_file_metadata(*args, **kwargs)
# Apply the patch
huggingface_hub.file_download.get_hf_file_metadata = get_hf_file_metadata_wrapper
dataset = load_dataset(
"Salesforce/wikitext",
"wikitext-2-v1",
split="test",
trust_remote_code=True,
cache_dir="~/local_datasets",
revision="b08601e04326c79dfdd32d625aee71d232d685c3",
)
```
This may be called over and over again, and you will see the same calls for files that don't exist:
```
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/wikitext.py', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/.huggingface.yaml', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/dataset_infos.json', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None}
```
And you can see that the .no_exist folder is never created
```
$ ls ~/local_hf_hub/hub/datasets--Salesforce--wikitext/
blobs refs snapshots
```
### Expected behavior
The expected behavior is for the "File metadata request made" print to stop after the first call, and for the .no_exist directory & files to be populated under ~/local_hf_hub/hub/datasets--Salesforce--wikitext/
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.5.13-65-650-4141-22041-coreweave-amd64-85c45edc-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2024.9.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7686/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7685/comments | https://api.github.com/repos/huggingface/datasets/issues/7685/events | https://github.com/huggingface/datasets/issues/7685 | 3,236,979,340 | I_kwDODunzps7A8GKM | 7,685 | Inconsistent range request behavior for parquet REST api | {
"avatar_url": "https://avatars.githubusercontent.com/u/21327470?v=4",
"events_url": "https://api.github.com/users/universalmind303/events{/privacy}",
"followers_url": "https://api.github.com/users/universalmind303/followers",
"following_url": "https://api.github.com/users/universalmind303/following{/other_user}",
"gists_url": "https://api.github.com/users/universalmind303/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/universalmind303",
"id": 21327470,
"login": "universalmind303",
"node_id": "MDQ6VXNlcjIxMzI3NDcw",
"organizations_url": "https://api.github.com/users/universalmind303/orgs",
"received_events_url": "https://api.github.com/users/universalmind303/received_events",
"repos_url": "https://api.github.com/users/universalmind303/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/universalmind303/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/universalmind303/subscriptions",
"type": "User",
"url": "https://api.github.com/users/universalmind303",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"This is a weird bug, is it a range that is supposed to be satisfiable ? I mean, is it on the boundraries ?\n\nLet me know if you'r e still having the issue, in case it was just a transient bug",
"@lhoestq yes the ranges are supposed to be satisfiable, and _sometimes_ they are. \n\nThe head requests show that it ... | 2025-07-16T18:39:44Z | 2025-08-11T08:16:54Z | null | NONE | null | null | null | null | ### Describe the bug
First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere.
The datasets rest api is inconsistently giving `416 Range Not Satisfiable` when using a range request to get portions of the parquet files. More often than not, I am seeing 416, but other times for an identical request, it gives me the data along with `206 Partial Content` as expected.
### Steps to reproduce the bug
repeating this request multiple times will return either 416 or 206.
```sh
$ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
```
Note: this is not limited to just the above file; I tried many different datasets and can consistently reproduce the issue across them.
when the 416 is returned, I get the following headers
```
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:43 GMT
< expires: Wed, 16 Jul 2025 14:58:43 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront)
<
```
This suggests to me that there is likely a CDN/caching/routing issue and that the request is not getting routed properly.
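As a stopgap on the client side, here is a hedged retry sketch; it only works around the intermittent behavior and does not address the suspected CDN issue:
```python
import time

import requests

URL = (
    "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/"
    "Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
)
for attempt in range(5):
    # Identical requests intermittently return 206, so retry on 416.
    r = requests.get(URL, headers={"Range": "bytes=217875070-218006142"}, allow_redirects=True)
    if r.status_code == 206:
        print(f"attempt {attempt}: got {len(r.content)} bytes")
        break
    print(f"attempt {attempt}: HTTP {r.status_code}, retrying")
    time.sleep(1)
```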
Full verbose output via curl.
<details>
❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:41 GMT
< expires: Wed, 16 Jul 2025 14:58:41 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 e2f1bed2f82641d6d5439eac20a790ba.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: Mo8hn-EZLJqE_hoBday8DdhmVXhV3v9-Wg-EEHI6gX_fNlkanVIUBA==
<
{ [49 bytes data]
100 49 100 49 0 0 2215 0 --:--:-- --:--:-- --:--:-- 2227
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:42 GMT
< expires: Wed, 16 Jul 2025 14:58:42 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 bb352451e1eacf85f8786ee3ecd07eca.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: 9xy-CX9KvlS8Ye4eFr8jXMDobZHFkvdyvkLJGmK_qiwZQywCCwfq7Q==
<
{ [49 bytes data]
100 49 100 49 0 0 2381 0 --:--:-- --:--:-- --:--:-- 2450
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 416
< content-type: text/html
< content-length: 49
< server: CloudFront
< date: Wed, 16 Jul 2025 14:58:43 GMT
< expires: Wed, 16 Jul 2025 14:58:43 GMT
< content-range: bytes */177
< x-cache: Error from cloudfront
< via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: wtBgwY4u4YJ2pD1ovM8UV770UiJoqWfs7i7VzschDyoLv5g7swGGmw==
<
{ [49 bytes data]
100 49 100 49 0 0 2273 0 --:--:-- --:--:-- --:--:-- 2333
* Connection #0 to host huggingface.co left intact
(.venv) Daft main** ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86
* Trying 18.160.102.96:443...
* Connected to huggingface.co (18.160.102.96) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [319 bytes data]
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3821 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 302
< content-type: text/plain; charset=utf-8
< content-length: 177
< location: https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet
< date: Wed, 16 Jul 2025 14:58:44 GMT
< x-powered-by: huggingface-moon
< cross-origin-opener-policy: same-origin
< referrer-policy: strict-origin-when-cross-origin
< x-request-id: Root=1-6877be24-476860f03849cb1a1570c9d8
< access-control-allow-origin: https://huggingface.co
< access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
< set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None
< set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax
< x-cache: Miss from cloudfront
< via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: xuSi0X5RpH1OZqQOM8gGQLQLU8eOM6Gbkk-bgIX_qBnTTaa1VNkExA==
<
* Ignoring the response-body
100 177 100 177 0 0 2021 0 --:--:-- --:--:-- --:--:-- 2034
* Connection #0 to host huggingface.co left intact
* Issue another request to this URL: 'https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet'
* Found bundle for host: 0x600002d54570 [can multiplex]
* Re-using existing connection with host huggingface.co
* [HTTP/2] [3] OPENED stream for https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet
* [HTTP/2] [3] [:method: GET]
* [HTTP/2] [3] [:scheme: https]
* [HTTP/2] [3] [:authority: huggingface.co]
* [HTTP/2] [3] [:path: /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet]
* [HTTP/2] [3] [user-agent: curl/8.7.1]
* [HTTP/2] [3] [accept: */*]
* [HTTP/2] [3] [range: bytes=217875070-218006142]
> GET /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet HTTP/2
> Host: huggingface.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 302
< content-type: text/plain; charset=utf-8
< content-length: 1317
< location: https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC
< date: Wed, 16 Jul 2025 14:58:44 GMT
< x-powered-by: huggingface-moon
< cross-origin-opener-policy: same-origin
< referrer-policy: strict-origin-when-cross-origin
< x-request-id: Root=1-6877be24-4f628b292dc8a7a5339c41d3
< access-control-allow-origin: https://huggingface.co
< vary: Origin, Accept
< access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
< set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None
< set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax
< x-repo-commit: 712df366ffbc959d9f4279bf2da579230b7ca5d8
< accept-ranges: bytes
< x-linked-size: 218006142
< x-linked-etag: "01736bf26d0046ddec4ab8900fba3f0dc6500b038314b44d0edb73a7c88dec07"
< x-xet-hash: cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9
< link: <https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/xet-read-token/712df366ffbc959d9f4279bf2da579230b7ca5d8>; rel="xet-auth", <https://cas-server.xethub.hf.co/reconstruction/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9>; rel="xet-reconstruction-info"
< x-cache: Miss from cloudfront
< via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P1
< x-amz-cf-id: 0qXw2sJGrWCLVt7c-Vtn09uE3nu6CrJw9RmAKvNr_flG75muclvlIg==
<
* Ignoring the response-body
100 1317 100 1317 0 0 9268 0 --:--:-- --:--:-- --:--:-- 9268
* Connection #0 to host huggingface.co left intact
* Issue another request to this URL: 'https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC'
* Host cas-bridge.xethub.hf.co:443 was resolved.
* IPv6: (none)
* IPv4: 18.160.181.55, 18.160.181.54, 18.160.181.52, 18.160.181.88
* Trying 18.160.181.55:443...
* Connected to cas-bridge.xethub.hf.co (18.160.181.55) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
} [328 bytes data]
* (304) (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* (304) (IN), TLS handshake, Unknown (8):
{ [19 bytes data]
* (304) (IN), TLS handshake, Certificate (11):
{ [3818 bytes data]
* (304) (IN), TLS handshake, CERT verify (15):
{ [264 bytes data]
* (304) (IN), TLS handshake, Finished (20):
{ [36 bytes data]
* (304) (OUT), TLS handshake, Finished (20):
} [36 bytes data]
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=cas-bridge.xethub.hf.co
* start date: Jun 4 00:00:00 2025 GMT
* expire date: Jul 3 23:59:59 2026 GMT
* subjectAltName: host "cas-bridge.xethub.hf.co" matched cert's "cas-bridge.xethub.hf.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M04
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: cas-bridge.xethub.hf.co]
* [HTTP/2] [1] [:path: /xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [range: bytes=217875070-218006142]
> GET /xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC HTTP/2
> Host: cas-bridge.xethub.hf.co
> User-Agent: curl/8.7.1
> Accept: */*
> Range: bytes=217875070-218006142
>
* Request completely sent off
< HTTP/2 206
< content-length: 131072
< date: Mon, 14 Jul 2025 08:40:28 GMT
< x-request-id: 01K041FDPVA03RR2PRXDZSN30G
< content-disposition: inline; filename*=UTF-8''0000.parquet; filename="0000.parquet";
< cache-control: public, max-age=31536000
< etag: "cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9"
< access-control-allow-origin: *
< access-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag
< access-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache
< x-cache: Hit from cloudfront
< via: 1.1 1c857e24a4dc84d2d9c78d5b3463bed6.cloudfront.net (CloudFront)
< x-amz-cf-pop: MSP50-P2
< x-amz-cf-id: 3SxFmQa5wLeeXbNiwaAo0_RwoR_n7-SivjsLjDLG-Pwn5UhG2oiEQA==
< age: 195496
< content-security-policy: default-src 'none'; sandbox
< content-range: bytes 217875070-218006141/218006142
<
{ [8192 bytes data]
100 128k 100 128k 0 0 769k 0 --:--:-- --:--:-- --:--:-- 769k
* Connection #1 to host cas-bridge.xethub.hf.co left intact
</details>
### Expected behavior
Always get back a `206` (Partial Content) response for the ranged request.
### Environment info
n/a | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7685/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7780/comments | https://api.github.com/repos/huggingface/datasets/issues/7780/events | https://github.com/huggingface/datasets/issues/7780 | 3,429,267,259 | I_kwDODunzps7MZnc7 | 7,780 | BIGPATENT dataset inaccessible (deprecated script loader) | {
"avatar_url": "https://avatars.githubusercontent.com/u/137755081?v=4",
"events_url": "https://api.github.com/users/ishmaifan/events{/privacy}",
"followers_url": "https://api.github.com/users/ishmaifan/followers",
"following_url": "https://api.github.com/users/ishmaifan/following{/other_user}",
"gists_url": "https://api.github.com/users/ishmaifan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ishmaifan",
"id": 137755081,
"login": "ishmaifan",
"node_id": "U_kgDOCDX5yQ",
"organizations_url": "https://api.github.com/users/ishmaifan/orgs",
"received_events_url": "https://api.github.com/users/ishmaifan/received_events",
"repos_url": "https://api.github.com/users/ishmaifan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ishmaifan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishmaifan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ishmaifan",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! I opened https://huggingface.co/datasets/NortheasternUniversity/big_patent/discussions/7 to update the dataset, hopefully it's merged soon !"
] | 2025-09-18T08:25:34Z | 2025-09-19T14:35:54Z | null | NONE | null | null | null | null | dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the datasets library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face Hub so that it can be accessed with `datasets>=4.x`?
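For reference, a minimal reproduction sketch (the `"all"` config name is an assumption):

```python
from datasets import load_dataset

# Fails on datasets>=4.x because the repo still ships a script loader:
ds = load_dataset("NortheasternUniversity/big_patent", "all")
# RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
```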
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7780/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7780/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7777/comments | https://api.github.com/repos/huggingface/datasets/issues/7777/events | https://github.com/huggingface/datasets/issues/7777 | 3,424,462,082 | I_kwDODunzps7MHSUC | 7,777 | push_to_hub not overwriting but stuck in a loop when there are existing commits | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"HTTP 412 means a commit happened in the meantime, so `get_deletions_and_dataset_card` has to retry to get the latest version of the dataset card and what files to delete based on the latest version of the dataset repository\n\nAre you running other operations in the dataset repo for your push_to_hub ?",
"There w... | 2025-09-17T03:15:35Z | 2025-09-17T19:31:14Z | 2025-09-17T19:31:14Z | NONE | null | null | null | null | ### Describe the bug
`get_deletions_and_dataset_card` gets stuck retrying on a "a commit has happened" error (HTTP 412) raised during `push_to_hub` in tag 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Write code that uses `push_to_hub` and run it twice, each time with different content for the `datasets.Dataset`.
The code gets stuck in a `time.sleep` retry loop inside `get_deletions_and_dataset_card`. If the error is explicitly printed, it is HTTP 412.
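A minimal sketch of the pattern that triggers it (the repo name below is hypothetical):

```python
from datasets import Dataset

# First push to a fresh repo succeeds:
Dataset.from_dict({"x": [1, 2, 3]}).push_to_hub("username/demo-dataset")

# Pushing again with different content hangs: push_to_hub keeps retrying
# get_deletions_and_dataset_card in a time.sleep loop after the HTTP 412.
Dataset.from_dict({"x": [4, 5, 6]}).push_to_hub("username/demo-dataset")
```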
### Expected behavior
The new dataset overwrites the existing one on the repo.
### Environment info
datasets 4.1.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7777/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7772/comments | https://api.github.com/repos/huggingface/datasets/issues/7772/events | https://github.com/huggingface/datasets/issues/7772 | 3,417,353,751 | I_kwDODunzps7LsK4X | 7,772 | Error processing scalar columns using tensorflow. | {
"avatar_url": "https://avatars.githubusercontent.com/u/3871483?v=4",
"events_url": "https://api.github.com/users/khteh/events{/privacy}",
"followers_url": "https://api.github.com/users/khteh/followers",
"following_url": "https://api.github.com/users/khteh/following{/other_user}",
"gists_url": "https://api.github.com/users/khteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/khteh",
"id": 3871483,
"login": "khteh",
"node_id": "MDQ6VXNlcjM4NzE0ODM=",
"organizations_url": "https://api.github.com/users/khteh/orgs",
"received_events_url": "https://api.github.com/users/khteh/received_events",
"repos_url": "https://api.github.com/users/khteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/khteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/khteh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-09-15T10:36:31Z | 2025-09-15T10:49:17Z | null | NONE | null | null | null | null | `datasets==4.0.0`
```
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'end_idx', 'input_ids', 'attention_mask', 'start_positions', 'end_positions']
features:{'question': Value('string'), 'sentences': Value('string'), 'answer': Value('string'), 'str_idx': Value('int64'), 'end_idx': Value('int64'), 'input_ids': List(Value('int32')), 'attention_mask': List(Value('int8')), 'start_positions': Value('int64'), 'end_positions': Value('int64')}
```
`train_ds_tensor = train_ds['start_positions'].to_tensor(shape=(-1,1))` hits the following error:
```
AttributeError: 'Column' object has no attribute 'to_tensor'
```
`tf.reshape(train_ds['start_positions'], shape=[-1,1])` hits the following error:
```
TypeError: Scalar tensor has no `len()`
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7772/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7767/comments | https://api.github.com/repos/huggingface/datasets/issues/7767/events | https://github.com/huggingface/datasets/issues/7767 | 3,411,654,444 | I_kwDODunzps7LWbcs | 7,767 | Custom `dl_manager` in `load_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-09-12T19:06:23Z | 2025-09-12T19:07:52Z | null | NONE | null | null | null | null | ### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```python
def load_dataset(
    ...
    dl_manager: Optional[DownloadManager] = None,  # add this new argument
    **config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
    ...
    # Create a dataset builder
    builder_instance = load_dataset_builder(
        path=path,
        name=name,
        data_dir=data_dir,
        data_files=data_files,
        cache_dir=cache_dir,
        features=features,
        download_config=download_config,
        download_mode=download_mode,
        revision=revision,
        token=token,
        storage_options=storage_options,
        **config_kwargs,
    )
    # Return iterable dataset in case of streaming
    if streaming:
        return builder_instance.as_streaming_dataset(split=split)
    # Note: This is the revised part
    if dl_manager is None:
        if download_config is None:
            download_config = DownloadConfig(
                cache_dir=builder_instance._cache_downloaded_dir,
                force_download=download_mode == DownloadMode.FORCE_REDOWNLOAD,
                force_extract=download_mode == DownloadMode.FORCE_REDOWNLOAD,
                use_etag=False,
                num_proc=num_proc,
                token=builder_instance.token,
                storage_options=builder_instance.storage_options,
            )  # We don't use etag for data files to speed up the process
        dl_manager = DownloadManager(
            dataset_name=builder_instance.dataset_name,
            download_config=download_config,
            data_dir=builder_instance.config.data_dir,
            record_checksums=(
                builder_instance._record_infos or verification_mode == VerificationMode.ALL_CHECKS
            ),
        )
    # Download and prepare data
    builder_instance.download_and_prepare(
        download_config=download_config,
        download_mode=download_mode,
        verification_mode=verification_mode,
        dl_manager=dl_manager,  # pass the new argument
        num_proc=num_proc,
        storage_options=storage_options,
    )
    ...
```
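If adopted, usage could look like this (hypothetical, since the argument doesn't exist yet; the constructor arguments mirror the defaults above):

```python
from datasets import DownloadConfig, DownloadManager, load_dataset

# Build a download manager that keeps downloaded files in a custom location:
dl_manager = DownloadManager(
    dataset_name="my_dataset",
    download_config=DownloadConfig(cache_dir="/data/my_cache"),
)
ds = load_dataset("user/my_dataset", dl_manager=dl_manager)
```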
### Motivation
In my case, I'm hoping to deal with the cache files downloading manually (not using hash filenames and save to another location, or using potential existing local files).
### Your contribution
It's already implemented above. If maintainers think this should be considered, I'll open a PR. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7767/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7766/comments | https://api.github.com/repos/huggingface/datasets/issues/7766/events | https://github.com/huggingface/datasets/issues/7766 | 3,411,611,165 | I_kwDODunzps7LWQ4d | 7,766 | cast columns to Image/Audio/Video with `storage_options` | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-09-12T18:51:01Z | 2025-09-12T18:51:01Z | null | NONE | null | null | null | null | ### Feature request
Allow `storage_options` to be passed in
1. `cast` related operations (e.g., `cast_columns, cast`)
2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
# dataset = dataset.cast_column("image_path", datasets.Image()) # now works without `storage_options`
# expected behavior
dataset = dataset.cast_column("image_path", datasets.Image(), storage_options={"anon": True})
```
### Motivation
I'm using my own registered fsspec filesystem (s3 with customized local cache support). I need to pass cache folder paths `cache_dirs: list[str]` to the filesystem when I read the remote images (cast from file_paths).
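For context, a sketch of the kind of registered filesystem this refers to (class name and protocol are hypothetical):

```python
import fsspec
from s3fs import S3FileSystem

class CachedS3FileSystem(S3FileSystem):
    """S3 filesystem that also consults a list of local cache folders."""
    protocol = "cs3"

    def __init__(self, *args, cache_dirs=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.cache_dirs = cache_dirs or []  # local folders checked before S3

fsspec.register_implementation("cs3", CachedS3FileSystem)
```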
### Your contribution
Could help with a PR at weekends | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7766/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7765/comments | https://api.github.com/repos/huggingface/datasets/issues/7765/events | https://github.com/huggingface/datasets/issues/7765 | 3,411,556,378 | I_kwDODunzps7LWDga | 7,765 | polars dataset cannot cast column to Image/Audio/Video | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I fixed this with a combination of `to_dict` and `from_dict`:\n\n```py\ndatasets.Dataset.from_dict(df.to_dict(as_series=False))\n```",
"@samuelstevens Yeah, I'm using similar workaround as well. But it would be ideal if we can avoid the copy."
] | 2025-09-12T18:32:49Z | 2025-09-16T01:33:31Z | null | NONE | null | null | null | null | ### Describe the bug
`from_polars` dataset cannot cast column to Image/Audio/Video, while it works on `from_pandas` and `from_dict`
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # raises Error
# pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
# pandas
df = pd.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_pandas(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
# {'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
# dict
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
# {'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
```
### Expected behavior
`from_polars` case shouldn't raise error and have the same outputs as `from_pandas` and `from_dict`
### Environment info
```
# Name Version Build Channel
datasets 4.0.0 pypi_0 pypi
pandas 2.3.1 pypi_0 pypi
polars 1.32.3 pypi_0 pypi
``` | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7765/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7760/comments | https://api.github.com/repos/huggingface/datasets/issues/7760/events | https://github.com/huggingface/datasets/issues/7760 | 3,401,799,485 | I_kwDODunzps7Kw1c9 | 7,760 | Hugging Face Hub Dataset Upload CAS Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/142820182?v=4",
"events_url": "https://api.github.com/users/n-bkoe/events{/privacy}",
"followers_url": "https://api.github.com/users/n-bkoe/followers",
"following_url": "https://api.github.com/users/n-bkoe/following{/other_user}",
"gists_url": "https://api.github.com/users/n-bkoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/n-bkoe",
"id": 142820182,
"login": "n-bkoe",
"node_id": "U_kgDOCINDVg",
"organizations_url": "https://api.github.com/users/n-bkoe/orgs",
"received_events_url": "https://api.github.com/users/n-bkoe/received_events",
"repos_url": "https://api.github.com/users/n-bkoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/n-bkoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n-bkoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/n-bkoe",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"cc @jsulz maybe ?",
"Curious! I took a look at this and was unable to see why this would be occurring on our side. Tagging in @jgodlew and @bpronan since they might have insights. \n\n@n-bkoe just a few questions if you wouldn't mind: \n1. What kind of data are you uploading and what is the difference in file si... | 2025-09-10T10:01:19Z | 2025-09-16T20:01:36Z | null | NONE | null | null | null | null | ### Describe the bug
Experiencing persistent 401 Unauthorized errors when attempting to upload datasets to Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. I tried setting HF_HUB_DISABLE_XET=1. Uploads seem to work for smaller files.
Exact error message:
```
Processing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-10T09:44:35.657565Z ERROR Fatal Error: "cas::upload_xorb" api call failed (request id 01b[...]XXX): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX)
at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113
Processing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s
New Data Upload : 0%| | 0.00B / 184kB, 0.00B/s
❌ Failed to push some_dataset: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX
```
Workaround Attempts
1. **Disabled XET**: Set `HF_HUB_DISABLE_XET=1` environment variable
2. **Updated hf-xet**: Use `hf-xet==1.1.9` rather than latest
3. **Verified Authentication**: Confirmed HF token is valid and has write permissions
4. **Tested with Smaller Datasets**:
- 100 samples: ✅ **SUCCESS** (uploaded successfully)
- 10,000 samples: ❌ **FAILS** (401 Unauthorized)
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
# Create dataset (example with 10,000 samples)
dataset = Dataset.from_dict({
    "question": questions,
    "answer": answers,
    # ... other fields
})
# Split into train/test
dataset_dict = dataset.train_test_split(test_size=0.1)
# Upload to Hub
dataset_dict.push_to_hub("Org/some-dataset")
```
### Expected behavior
## Expected Behavior
- Dataset should upload successfully to Hugging Face Hub
- Progress bars should complete without authentication errors
- Dataset should be accessible at the specified repository URL
## Actual Behavior
- Upload fails consistently with 401 Unauthorized error
- Error occurs specifically during CAS service interaction
- No progress is made on the upload (0% completion)
- Dataset is created on Hugging Face Hub with no data folder
### Environment info
- **Platform**: SageMaker (AWS)
- **Python Version**: 3.12
- **Libraries**:
- `datasets` library (latest version)
- `hf-xet==1.1.9` (attempted fix)
- **Authentication**: Hugging Face token configured
- **Dataset Size**: ~10,000 samples, works for smaller sizes (e.g. 100) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7760/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7759/comments | https://api.github.com/repos/huggingface/datasets/issues/7759/events | https://github.com/huggingface/datasets/issues/7759 | 3,398,099,513 | I_kwDODunzps7KiuI5 | 7,759 | Comment/feature request: Huggingface 502s from GHA | {
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Scott-Simmons",
"id": 52365471,
"login": "Scott-Simmons",
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Scott-Simmons",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-09-09T11:59:20Z | 2025-09-09T13:02:28Z | null | NONE | null | null | null | null | This is no longer a pressing issue, but for completeness I am reporting that in August 26th, GET requests to `https://datasets-server.huggingface.co/info\?dataset\=livebench/math` were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/48921123754) (that link will expire eventually, [here are the logs](https://github.com/user-attachments/files/22233578/logs_44225296943.zip)).
When invoked from actions, it appeared to be consistently failing for ~6 hours. However, these 502s never occurred when the request was invoked from my local machine in that same time period.
I suspect that this is related to how the requests are routed with github actions versus locally.
Its not clear to me if the request even reached huggingface servers or if its the github proxy that stopped it from going through, but I wanted to report it nonetheless in case this is helpful information. I'm curious if huggingface can do anything on their end to confirm cause.
And a feature request for if this happens in the future (assuming huggingface has visibilty on it): A "datasets status" page highlighting if 502s occur for specific individual datasets could be useful for people debugging on the other end of this!
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7759/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7759/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7758/comments | https://api.github.com/repos/huggingface/datasets/issues/7758/events | https://github.com/huggingface/datasets/issues/7758 | 3,395,590,783 | I_kwDODunzps7KZJp_ | 7,758 | Option for Anonymous Dataset link | {
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/egrace479",
"id": 38985481,
"login": "egrace479",
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"repos_url": "https://api.github.com/users/egrace479/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"type": "User",
"url": "https://api.github.com/users/egrace479",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-09-08T20:20:10Z | 2025-09-08T20:20:10Z | null | NONE | null | null | null | null | ### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!). However, we have an increasing challenge when it comes to sharing our datasets for paper (both conference and journal) submissions. Due to the need to share data anonymously, we can't use the Hugging Face URLs, but datasets tend to be too large for inclusion as a zip. Being able to have an anonymous link would be great since we can't be double-publishing the data.
### Your contribution
Sorry, I don't have a contribution to make to the implementation of this. Perhaps it would be possible to work off the [Anonymous GitHub](https://github.com/tdurieux/anonymous_github) code to generate something analogous with pointers to the data still on Hugging Face's servers (instead of the duplication of data required for the GitHub version)? | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7758/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7757/comments | https://api.github.com/repos/huggingface/datasets/issues/7757/events | https://github.com/huggingface/datasets/issues/7757 | 3,389,535,011 | I_kwDODunzps7KCDMj | 7,757 | Add support for `.conll` file format in datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/88763593?v=4",
"events_url": "https://api.github.com/users/namesarnav/events{/privacy}",
"followers_url": "https://api.github.com/users/namesarnav/followers",
"following_url": "https://api.github.com/users/namesarnav/following{/other_user}",
"gists_url": "https://api.github.com/users/namesarnav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/namesarnav",
"id": 88763593,
"login": "namesarnav",
"node_id": "MDQ6VXNlcjg4NzYzNTkz",
"organizations_url": "https://api.github.com/users/namesarnav/orgs",
"received_events_url": "https://api.github.com/users/namesarnav/received_events",
"repos_url": "https://api.github.com/users/namesarnav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/namesarnav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namesarnav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/namesarnav",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"That would be cool ! feel free to ping me if I can help reviewing a PR"
] | 2025-09-06T07:25:39Z | 2025-09-10T14:22:48Z | null | NONE | null | null | null | null | ### Feature request
I’d like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now `.conll` datasets need to be manually parsed or preprocessed before being loaded into datasets. Having built-in support would save time and make workflows smoother for researchers and practitioners.
I propose:
Add a conll dataset builder or file parser to datasets that can:
- Read `.conll` files with customizable delimiters (space, tab).
- Handle sentence/document boundaries (typically indicated by empty lines).
- Support common CoNLL variants (e.g., CoNLL-2000 chunking, CoNLL-2003 NER).
- Output a dataset where each example contains:
- tokens: list of strings
- tags (or similar): list of labels aligned with tokens
Given a .conll snippet like:
```
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
```
The dataset should load as:
```
{
    "tokens": ["EU", "rejects", "German", "call", "."],
    "tags": ["B-ORG", "O", "B-MISC", "O", "O"]
}
```
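As a rough sketch of the parsing logic (column positions are assumptions; real CoNLL variants differ):

```python
def read_conll(path, token_col=0, tag_col=-1):
    """Minimal .conll reader sketch: blank lines separate sentences."""
    examples, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # sentence/document boundary
                if tokens:
                    examples.append({"tokens": tokens, "tags": tags})
                    tokens, tags = [], []
                continue
            parts = line.split()
            tokens.append(parts[token_col])
            tags.append(parts[tag_col])
    if tokens:  # flush the last sentence if there is no trailing blank line
        examples.append({"tokens": tokens, "tags": tags})
    return examples
```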
### Motivation
- CoNLL files are a standard benchmark format in NLP (e.g., CoNLL-2003, CoNLL-2000).
- Many users train NER or sequence labeling models (like BERT for token classification) directly on `.conll` files.
- Right now you have to write your own parsing scripts. Built-in support would unify this process and would be much more convenient.
### Your contribution
I’d be happy to contribute by implementing this feature. My plan is to-
- Add a new dataset script (conll.py) to handle .conll files.
- Implement parsing logic that supports sentence/document boundaries and token-label alignment.
- Write unit tests with small `.conll` examples to ensure correctness.
- Add documentation and usage examples so new users can easily load `.conll` datasets.
This would be my first open source contribution, so I’ll follow the `CONTRIBUTING.md` guidelines closely and adjust based on feedback from the maintainers. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7757/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7757/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7756/comments | https://api.github.com/repos/huggingface/datasets/issues/7756/events | https://github.com/huggingface/datasets/issues/7756 | 3,387,076,693 | I_kwDODunzps7J4rBV | 7,756 | datasets.map(f, num_proc=N) hangs with N>1 when run on import | {
"avatar_url": "https://avatars.githubusercontent.com/u/20065?v=4",
"events_url": "https://api.github.com/users/arjunguha/events{/privacy}",
"followers_url": "https://api.github.com/users/arjunguha/followers",
"following_url": "https://api.github.com/users/arjunguha/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunguha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arjunguha",
"id": 20065,
"login": "arjunguha",
"node_id": "MDQ6VXNlcjIwMDY1",
"organizations_url": "https://api.github.com/users/arjunguha/orgs",
"received_events_url": "https://api.github.com/users/arjunguha/received_events",
"repos_url": "https://api.github.com/users/arjunguha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arjunguha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunguha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arjunguha",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-09-05T10:32:01Z | 2025-09-05T10:32:01Z | null | NONE | null | null | null | null | ### Describe the bug
If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top-level, Python hangs.
### Steps to reproduce the bug
1. Create a file that runs datasets.map at the top-level:
```bash
cat <<EOF > import_me.py
import datasets
the_dataset = datasets.load_dataset("openai/openai_humaneval")
the_dataset = the_dataset.map(lambda item: item, num_proc=2)
EOF
```
2. Start Python REPL:
```bash
uv run --python 3.12.3 --with "datasets==4.0.0" python3
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
3. Import the file:
```python
import import_me
```
Observe hang.
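For reference, a workaround sketch that avoids the hang by deferring the multiprocess `map` until the module is run directly (the mapping itself is unchanged):

```python
import datasets

def build_dataset():
    ds = datasets.load_dataset("openai/openai_humaneval")
    return ds.map(lambda item: item, num_proc=2)

if __name__ == "__main__":
    # Worker processes are only spawned when the module is executed,
    # never as a side effect of `import`.
    the_dataset = build_dataset()
```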
### Expected behavior
Ideally it would not hang, or it would fall back to `num_proc=1` with a warning.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7756/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7756/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7753/comments | https://api.github.com/repos/huggingface/datasets/issues/7753/events | https://github.com/huggingface/datasets/issues/7753 | 3,381,831,487 | I_kwDODunzps7Jkqc_ | 7,753 | datasets massively slows data reads, even in memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/1191040?v=4",
"events_url": "https://api.github.com/users/lrast/events{/privacy}",
"followers_url": "https://api.github.com/users/lrast/followers",
"following_url": "https://api.github.com/users/lrast/following{/other_user}",
"gists_url": "https://api.github.com/users/lrast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lrast",
"id": 1191040,
"login": "lrast",
"node_id": "MDQ6VXNlcjExOTEwNDA=",
"organizations_url": "https://api.github.com/users/lrast/orgs",
"received_events_url": "https://api.github.com/users/lrast/received_events",
"repos_url": "https://api.github.com/users/lrast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lrast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lrast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lrast",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! you should try\n\n```python\nfrom datasets import Array3D, Dataset, Features, Value\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\n```\n\notherwise the type o... | 2025-09-04T01:45:24Z | 2025-09-18T22:08:51Z | null | NONE | null | null | null | null | ### Describe the bug
Loading image data in a huggingface dataset results in very slow read speeds, approximately 1000 times longer than reading the same data from a pytorch dataset. This applies even when the dataset is loaded into RAM using a `keep_in_memory=True` flag.
The following script reproduces the result with random data, but it applies equally to datasets that are loaded from the hub.
### Steps to reproduce the bug
The following script should reproduce the behavior
```
import torch
import time
from datasets import Dataset
images = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, 200, (1000,), dtype=torch.uint8)
pt_dataset = torch.utils.data.TensorDataset(images, labels)
hf_dataset = Dataset.from_dict({'image': images, 'label':labels})
hf_dataset.set_format('torch', dtype=torch.uint8)
hf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)
# measure access speeds
def time_access(dataset, img_col):
    start_time = time.time()
    for i in range(1000):
        _ = dataset[i][img_col].shape
    end_time = time.time()
    return end_time - start_time
print(f"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds")
print(f"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds")
print(f"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds")
```
### Expected behavior
For me, the above script produces
```
In-memory Tensor access: 0.0025 seconds
HF Dataset access: 2.9317 seconds
In-memory HF Dataset access: 2.8082 seconds
```
I think that this difference is larger than expected.
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-14.7.7-arm64-arm-64bit
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7753/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7751/comments | https://api.github.com/repos/huggingface/datasets/issues/7751/events | https://github.com/huggingface/datasets/issues/7751 | 3,358,369,976 | I_kwDODunzps7ILKi4 | 7,751 | Dill version update | {
"avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4",
"events_url": "https://api.github.com/users/Navanit-git/events{/privacy}",
"followers_url": "https://api.github.com/users/Navanit-git/followers",
"following_url": "https://api.github.com/users/Navanit-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Navanit-git/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Navanit-git",
"id": 98005188,
"login": "Navanit-git",
"node_id": "U_kgDOBddwxA",
"organizations_url": "https://api.github.com/users/Navanit-git/orgs",
"received_events_url": "https://api.github.com/users/Navanit-git/received_events",
"repos_url": "https://api.github.com/users/Navanit-git/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Navanit-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Navanit-git/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Navanit-git",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"#7752 ",
"related: #7510 "
] | 2025-08-27T07:38:30Z | 2025-09-10T14:24:02Z | null | NONE | null | null | null | null | ### Describe the bug
Why is `datasets` not updating its `dill` pin?
I just want to know: if I update the `dill` version, what would the repercussions be?
For now, in multiple places I have to override the pin, since other packages require dill 0.4.0, so why not datasets?
Adding a PR too.
### Steps to reproduce the bug
.
### Expected behavior
.
### Environment info
. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7751/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7746/comments | https://api.github.com/repos/huggingface/datasets/issues/7746/events | https://github.com/huggingface/datasets/issues/7746 | 3,345,391,211 | I_kwDODunzps7HZp5r | 7,746 | Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version | {
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Awesome075",
"id": 187888489,
"login": "Awesome075",
"node_id": "U_kgDOCzLzaQ",
"organizations_url": "https://api.github.com/users/Awesome075/orgs",
"received_events_url": "https://api.github.com/users/Awesome075/received_events",
"repos_url": "https://api.github.com/users/Awesome075/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Awesome075",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@sayakpaul @a-r-r-o-w could you verify this issue then i can contribute to solve this issue!😊"
] | 2025-08-22T12:52:03Z | 2025-08-27T20:23:35Z | null | NONE | null | null | null | null | Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
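The Parquet version loads with a plain `load_dataset` call, for example:

```python
from datasets import load_dataset

ds = load_dataset("Awesome075/multi_news_parquet")
```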
Could the maintainers please either guide me through updating the official `multi_news` dataset or update it themselves to use this working Parquet version? This would involve updating the canonical pointer for "multi_news" to resolve to the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you! | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7746/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7745/comments | https://api.github.com/repos/huggingface/datasets/issues/7745/events | https://github.com/huggingface/datasets/issues/7745 | 3,345,286,773 | I_kwDODunzps7HZQZ1 | 7,745 | Audio mono argument no longer supported, despite class documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jheitz",
"id": 5666041,
"login": "jheitz",
"node_id": "MDQ6VXNlcjU2NjYwNDE=",
"organizations_url": "https://api.github.com/users/jheitz/orgs",
"received_events_url": "https://api.github.com/users/jheitz/received_events",
"repos_url": "https://api.github.com/users/jheitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jheitz",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I want to solve this problem can you please assign it to me\nand also can you please guide whether the mono parameter is required to be re-added or the documentation needs an update?"
] | 2025-08-22T12:15:41Z | 2025-08-24T18:22:41Z | null | NONE | null | null | null | null | ### Describe the bug
Either update the documentation, or re-introduce the flag (and corresponding logic to convert the audio to mono)
### Steps to reproduce the bug
Audio(sampling_rate=16000, mono=True) raises the error
TypeError: Audio.__init__() got an unexpected keyword argument 'mono'
However, in the class documentation, it says:
Args:
sampling_rate (`int`, *optional*):
Target sampling rate. If `None`, the native sampling rate is used.
mono (`bool`, defaults to `True`):
Whether to convert the audio signal to mono by averaging samples across
channels.
[...]
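In the meantime, a minimal downmix sketch that avoids the removed `mono=` argument (assuming the decoded audio is available as a NumPy array shaped `(channels, samples)`):
```python
import numpy as np

def downmix_to_mono(array: np.ndarray) -> np.ndarray:
    # Average across channels; 1-D input is already mono.
    return array if array.ndim == 1 else array.mean(axis=0)
```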
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7745/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7744/comments | https://api.github.com/repos/huggingface/datasets/issues/7744/events | https://github.com/huggingface/datasets/issues/7744 | 3,343,510,686 | I_kwDODunzps7HSeye | 7,744 | dtype: ClassLabel is not parsed correctly in `features.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cmatKhan",
"id": 43553003,
"login": "cmatKhan",
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cmatKhan",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I think it's \"class_label\"",
"> I think it's \"class_label\"\n\nI see -- thank you. This works\n\n```yaml\nlicense: mit\nlanguage:\n- en\ntags:\n- genomics\n- yeast\n- transcription\n- perturbation\n- response\n- overexpression\npretty_name: Hackett, 2020 Overexpression\nsize_categories:\n- 1M<n<10M\ndataset_i... | 2025-08-21T23:28:50Z | 2025-09-10T15:23:41Z | 2025-09-10T15:23:41Z | NONE | null | null | null | null | `dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This yaml in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I changed `ClassLabel` to `string` to use a different dtype in order to avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
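For context, the workaround mentioned in the comments (`class_label`, all lowercase) works because each underscore-separated part is capitalized; a self-contained sketch of the same logic:
```python
def _camel(name: str) -> str:
    # Simplified version of datasets.naming.snakecase_to_camelcase.
    return "".join(part.capitalize() for part in name.split("_") if part)

assert _camel("class_label") == "ClassLabel"  # round-trips correctly
assert _camel("ClassLabel") == "Classlabel"   # the reported failure
```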
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
"""Convert snake-case string to camel-case string."""
name = _single_underscore_re.split(name)
name = [_multiple_underscores_re.split(n) for n in name]
return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cmatKhan",
"id": 43553003,
"login": "cmatKhan",
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cmatKhan",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7744/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7742/comments | https://api.github.com/repos/huggingface/datasets/issues/7742/events | https://github.com/huggingface/datasets/issues/7742 | 3,336,704,928 | I_kwDODunzps7G4hOg | 7,742 | module 'pyarrow' has no attribute 'PyExtensionType' | {
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mnedelko",
"id": 6106392,
"login": "mnedelko",
"node_id": "MDQ6VXNlcjYxMDYzOTI=",
"organizations_url": "https://api.github.com/users/mnedelko/orgs",
"received_events_url": "https://api.github.com/users/mnedelko/received_events",
"repos_url": "https://api.github.com/users/mnedelko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mnedelko",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Just checked out the files and thishad already been addressed",
"For others who find this issue: \n\n`pip install --upgrade \"datasets>=2.20.0\"` \n\nfrom https://github.com/explodinggradients/ragas/issues/2170#issuecomment-3204393672 can fix it."
] | 2025-08-20T06:14:33Z | 2025-09-09T02:51:46Z | null | NONE | null | null | null | null | ### Describe the bug
When importing certain libraries, users will encounter the following error which can be traced back to the datasets library.
module 'pyarrow' has no attribute 'PyExtensionType'.
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will proceed to submit a PR with the below fix:
**Issue Reason**
The issue is that PyArrow version 21.0.0 doesn’t have PyExtensionType. This was changed in newer versions of PyArrow. The
PyExtensionType class was renamed to ExtensionType in PyArrow 13.0.0 and later versions.
**Issue Solution**
Making the following changes to these library files should temporarily resolve the issue.
I will submit a PR to the datasets library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
521 self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
510 _type: str = field(default="Array5D", init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
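Note that `pa.ExtensionType` is not a strict drop-in replacement for `pa.PyExtensionType` (it expects an extension name and serialization hooks), so a version-tolerant sketch might probe for the legacy class instead (illustrative only, not the actual datasets patch):
```python
import pyarrow as pa

# Newer PyArrow releases no longer ship PyExtensionType.
_LegacyBase = getattr(pa, "PyExtensionType", None)
Base = _LegacyBase if _LegacyBase is not None else pa.ExtensionType
# Subclasses of pa.ExtensionType must also define an extension name and
# __arrow_ext_serialize__ / __arrow_ext_deserialize__.
```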
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
!pip install ragas
from ragas import evaluate
### Expected behavior
The required package installs without issue.
### Environment info
In Jupyter Notebook.
venv | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7742/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7741/comments | https://api.github.com/repos/huggingface/datasets/issues/7741/events | https://github.com/huggingface/datasets/issues/7741 | 3,334,848,656 | I_kwDODunzps7GxcCQ | 7,741 | Preserve tree structure when loading HDF5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 2025-08-19T15:42:05Z | 2025-08-26T15:28:06Z | 2025-08-26T15:28:06Z | CONTRIBUTOR | null | null | null | null | ### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
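A minimal traversal sketch of the idea (assumes `h5py`; the file name is illustrative): recurse over groups so the nesting survives instead of being flattened into joined keys:
```python
import h5py

def tree(node):
    # Groups become nested dicts; datasets keep their dtype/shape.
    if isinstance(node, h5py.Group):
        return {name: tree(child) for name, child in node.items()}
    return {"dtype": str(node.dtype), "shape": node.shape}

with h5py.File("data.h5", "r") as f:
    print(tree(f))
```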
### Your contribution
I'll open a PR (#7743) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7741/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7739/comments | https://api.github.com/repos/huggingface/datasets/issues/7739/events | https://github.com/huggingface/datasets/issues/7739 | 3,331,537,762 | I_kwDODunzps7Gkzti | 7,739 | Replacement of "Sequence" feature with "List" breaks backward compatibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4",
"events_url": "https://api.github.com/users/evmaki/events{/privacy}",
"followers_url": "https://api.github.com/users/evmaki/followers",
"following_url": "https://api.github.com/users/evmaki/following{/other_user}",
"gists_url": "https://api.github.com/users/evmaki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/evmaki",
"id": 15764776,
"login": "evmaki",
"node_id": "MDQ6VXNlcjE1NzY0Nzc2",
"organizations_url": "https://api.github.com/users/evmaki/orgs",
"received_events_url": "https://api.github.com/users/evmaki/received_events",
"repos_url": "https://api.github.com/users/evmaki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/evmaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evmaki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/evmaki",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Backward compatibility here means 4.0.0 can load datasets saved with older versions.\n\nYou will need 4.0.0 to load datasets saved with 4.0.0"
] | 2025-08-18T17:28:38Z | 2025-09-10T14:17:50Z | null | NONE | null | null | null | null | PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
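One hedged workaround sketch (untested across these exact versions): export from the 4.0.0 environment to Parquet, which older releases can read without the new feature types:
```python
# Assumes `ds` is a Dataset already loaded in the datasets==4.0.0 environment.
ds.to_parquet("train.parquet")

# Then, in the datasets==3.6.0 environment:
# from datasets import load_dataset
# ds = load_dataset("parquet", data_files="train.parquet")
```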
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7739/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7738/comments | https://api.github.com/repos/huggingface/datasets/issues/7738/events | https://github.com/huggingface/datasets/issues/7738 | 3,328,948,690 | I_kwDODunzps7Ga7nS | 7,738 | Allow saving multi-dimensional ndarray with dynamic shapes | {
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ryan-minato",
"id": 82735346,
"login": "ryan-minato",
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"organizations_url": "https://api.github.com/users/ryan-minato/orgs",
"received_events_url": "https://api.github.com/users/ryan-minato/received_events",
"repos_url": "https://api.github.com/users/ryan-minato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ryan-minato",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"I agree this would be super valuable.\n\nIt looks like this was discussed a few years ago in https://github.com/huggingface/datasets/issues/5272#issuecomment-1550200824 but there were some issues. Those PRs are merged now and it looks like Arrow [officially supports](https://arrow.apache.org/docs/format/CanonicalE... | 2025-08-18T02:23:51Z | 2025-08-26T15:25:02Z | null | NONE | null | null | null | null | ### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data where the dimensions are not fixed.
A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example:
```python
{
"shape": (5, 224, 224),
"dtype": "uint8",
"data": [...]
}
```
This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.
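Until such a feature exists, a common workaround is a flat list plus a shape column, rebuilt on access; a minimal sketch (column names are assumptions):
```python
import numpy as np

from datasets import Dataset

arrays = [np.arange(24, dtype=np.float32).reshape(2, 3, 4), np.zeros((3, 5), dtype=np.float32)]
ds = Dataset.from_dict({
    "data": [a.ravel().tolist() for a in arrays],  # flattened values
    "shape": [list(a.shape) for a in arrays],      # per-row shape
})

def rebuild(example):
    return np.asarray(example["data"], dtype=np.float32).reshape(example["shape"])

print(rebuild(ds[0]).shape)  # (2, 3, 4)
```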
### Motivation
I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable.
The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.
https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614
A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).
### Your contribution
I am willing to create a PR to help implement this feature if the proposal is accepted. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7738/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7733/comments | https://api.github.com/repos/huggingface/datasets/issues/7733/events | https://github.com/huggingface/datasets/issues/7733 | 3,304,979,299 | I_kwDODunzps7E_ftj | 7,733 | Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path | {
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dennys246",
"id": 27898715,
"login": "dennys246",
"node_id": "MDQ6VXNlcjI3ODk4NzE1",
"organizations_url": "https://api.github.com/users/dennys246/orgs",
"received_events_url": "https://api.github.com/users/dennys246/received_events",
"repos_url": "https://api.github.com/users/dennys246/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dennys246",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"This is the download issues I come into, about ever other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />"
] | 2025-08-08T19:10:58Z | 2025-08-12T00:54:58Z | null | NONE | null | null | null | null | ### Describe the bug
I'm not sure if this is a bug or a feature and I just don't fully understand how dataset loading is supposed to work, but it appears there may be a bug with how locally stored Image() columns are being accessed. I've uploaded a new dataset to Hugging Face (rmdig/rocky_mountain_snowpack) but I've come into a ton of trouble trying to have the images handled properly (at least in the way I'd expect them to be handled).
I find that I cannot use relative paths for loading images remotely from the Hugging Face repo or from a local repository. Any time I do, the library simply appends the relative path to my current working directory. As a result, to use the datasets library with my dataset I have to change my working directory to the dataset root or abandon the dataset object structure, which I cannot imagine is intended. Consequently I have to use URLs, since an absolute path on my system obviously wouldn't work for others. The URLs work OK, but despite having the dataset downloaded locally, it appears to be re-downloaded every time I train my snowGAN model on it (and oftentimes I run into HTTPS errors for over-requesting the data).
Or maybe image relative paths aren't intended to be loaded directly through your datasets library as images and should be kept as strings for the user to handle? If so, I feel like you're missing out on some pretty seamless functionality.
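In the meantime, a hedged workaround sketch for local loading (keep `file_path` typed as string in the YAML, resolve it against the dataset root, then cast; the root path is an assumption):
```python
import os

from datasets import Image, load_dataset

root = "/path/to/local/rocky_mountain_snowpack"
ds = load_dataset(root)  # file_path stays a string column here

ds = ds.map(lambda ex: {"file_path": os.path.join(root, ex["file_path"])})
ds = ds.cast_column("file_path", Image())
```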
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
```yaml
---
dataset_info:
  features:
    - name: image
      dtype: Image
    - name: file_path
      dtype: Image
```
3. Initialize the dataset locally, make sure your working directory is not the dataset directory root
`dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')`
4. Access one of the samples and you'll get an error that the image was not found at current/working/directory/preprocessed/cores/image_1.png, showing that it's simply looking in the current working directory + relative path:
```
>>> dataset['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
image = PIL.Image.open(path)
^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```
### Expected behavior
I expect datasets and Image() to load the locally hosted data using the path/to/local/rocky_mountain_snowpack/ root (which I pass to datasets.load_dataset(), or which you handle on the backend) + the relative path.
Instead it appears to load from my current working directory + relative path.
### Environment info
Tested on…
Windows 11, Ubuntu Linux 22.04, and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7733/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7732/comments | https://api.github.com/repos/huggingface/datasets/issues/7732/events | https://github.com/huggingface/datasets/issues/7732 | 3,304,673,383 | I_kwDODunzps7E-VBn | 7,732 | webdataset: key errors when `field_name` has upper case characters | {
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YassineYousfi",
"id": 29985433,
"login": "YassineYousfi",
"node_id": "MDQ6VXNlcjI5OTg1NDMz",
"organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
"received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
"repos_url": "https://api.github.com/users/YassineYousfi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YassineYousfi",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-08T16:56:42Z | 2025-08-08T16:56:42Z | null | CONTRIBUTOR | null | null | null | null | ### Describe the bug
When using a webdataset, each sample can be a collection of different "fields"
like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
if the field_name contains upper case characters, the HF webdataset integration throws a key error when trying to load the dataset:
e.g. from a dataset (now updated so that it doesn't throw this error)
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[1], line 2
1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1409 return builder_instance.as_streaming_dataset(split=split)
1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
1413 download_config=download_config,
1414 download_mode=download_mode,
1415 verification_mode=verification_mode,
1416 num_proc=num_proc,
1417 storage_options=storage_options,
1418 )
1420 # Build dataset for splits
1421 keep_in_memory = (
1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
892 if num_proc is not None:
893 prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
895 dl_manager=dl_manager,
896 verification_mode=verification_mode,
897 **prepare_split_kwargs,
898 **download_and_prepare_kwargs,
899 )
900 # Sync info
901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609 super()._download_and_prepare(
1610 dl_manager,
1611 verification_mode,
1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1613 or verification_mode == VerificationMode.ALL_CHECKS,
1614 **prepare_splits_kwargs,
1615 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
946 split_dict = SplitDict(dataset_name=self.dataset_name)
947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
950 # Checksums verification
951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
78 if not self.info.features:
79 # Get one example to get the feature types
80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
83 raise ValueError(
84 "The TAR archives of the dataset should be in WebDataset format, "
85 "but the files in the archive don't share the same prefix or the same types."
86 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
53 data_extension = field_name.split(".")[-1]
54 if data_extension in cls.DECODERS:
---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
56 if current_example:
57 yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
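A self-contained illustration of the suspected mismatch (hypothetical names, not the actual `webdataset.py` code): keys are stored lowercased but looked up with their original case, so any uppercase character triggers the `KeyError`:
```python
DECODERS = {"npy": bytes}  # stand-in decoder table

current_example = {}
field_name = "processed_log_IMU_magnetometer_value.npy"
current_example[field_name.lower()] = b"\x00"  # stored with a lowercased key

data_extension = field_name.split(".")[-1]
if data_extension in DECODERS:
    try:
        current_example[field_name] = DECODERS[data_extension](current_example[field_name])
    except KeyError as e:
        print(f"KeyError: {e}")  # the original-case key was never stored
```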
### Steps to reproduce the bug
A unit test was added in: https://github.com/huggingface/datasets/pull/7726
It fails without the fix proposed in the same PR.
### Expected behavior
Not throwing a key error.
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7732/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7731/comments | https://api.github.com/repos/huggingface/datasets/issues/7731/events | https://github.com/huggingface/datasets/issues/7731 | 3,303,637,075 | I_kwDODunzps7E6YBT | 7,731 | Add the possibility of a backend for audio decoding | {
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/intexcor",
"id": 142020129,
"login": "intexcor",
"node_id": "U_kgDOCHcOIQ",
"organizations_url": "https://api.github.com/users/intexcor/orgs",
"received_events_url": "https://api.github.com/users/intexcor/received_events",
"repos_url": "https://api.github.com/users/intexcor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/intexcor",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"is there a work around im stuck",
"never mind just downgraded"
] | 2025-08-08T11:08:56Z | 2025-08-20T16:29:33Z | null | NONE | null | null | null | null | ### Feature request
Add the possibility of a backend for audio decoding. Before version 4.0.0, soundfile was used, and now torchcodec is used, but the problem is that torchcodec requires ffmpeg, which is problematic to install on, for example, Colab. Therefore, I suggest adding a decoder selection when loading the dataset.
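Until a backend option exists, one hedged workaround is to skip the built-in decoding entirely and decode with `soundfile` yourself (the dataset name is a placeholder):
```python
import io

import soundfile as sf
from datasets import Audio, load_dataset

ds = load_dataset("some/audio_dataset", split="train")
ds = ds.cast_column("audio", Audio(decode=False))  # keep raw bytes/paths

raw = ds[0]["audio"]  # {"bytes": ..., "path": ...}
array, sampling_rate = sf.read(io.BytesIO(raw["bytes"]))
```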
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7731/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7729/comments | https://api.github.com/repos/huggingface/datasets/issues/7729/events | https://github.com/huggingface/datasets/issues/7729 | 3,300,672,954 | I_kwDODunzps7EvEW6 | 7,729 | OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
"events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}",
"followers_url": "https://api.github.com/users/SaleemMalikAI/followers",
"following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}",
"gists_url": "https://api.github.com/users/SaleemMalikAI/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SaleemMalikAI",
"id": 115183904,
"login": "SaleemMalikAI",
"node_id": "U_kgDOBt2RIA",
"organizations_url": "https://api.github.com/users/SaleemMalikAI/orgs",
"received_events_url": "https://api.github.com/users/SaleemMalikAI/received_events",
"repos_url": "https://api.github.com/users/SaleemMalikAI/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SaleemMalikAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaleemMalikAI/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SaleemMalikAI",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-07T14:07:23Z | 2025-08-07T14:07:23Z | null | NONE | null | null | null | null | > Hi is there any solution for that eror i try to install this one
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but how do I install a PyTorch version that is built for GPU?
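For a CUDA 11.x runtime (matching `libcudart.so.11.0`), the same wheel index also hosts `+cu113` builds of this release; a hedged sketch, assuming a wheel exists for your platform:
pip install torch==1.12.1+cu113 torchaudio==0.12.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html | null | {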
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7729/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7728/comments | https://api.github.com/repos/huggingface/datasets/issues/7728/events | https://github.com/huggingface/datasets/issues/7728 | 3,298,854,904 | I_kwDODunzps7EoIf4 | 7,728 | NonMatchingSplitsSizesError and ExpectedMoreSplitsError | {
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/efsotr",
"id": 104755879,
"login": "efsotr",
"node_id": "U_kgDOBj5ypw",
"organizations_url": "https://api.github.com/users/efsotr/orgs",
"received_events_url": "https://api.github.com/users/efsotr/received_events",
"repos_url": "https://api.github.com/users/efsotr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/efsotr",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-07T04:04:50Z | 2025-08-07T07:31:47Z | null | NONE | null | null | null | null | ### Describe the bug
When loading a dataset, the info specified by `data_files` does not overwrite the original split info.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz",
"validation": "en/c4-validation.00000-of-00008.json.gz"},
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train"
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
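If the goal is just to load a subset of files, a hedged workaround is to disable the recorded-info verification (at the cost of skipping those checks):
```python
from datasets import load_dataset

traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    verification_mode="no_checks",  # skip split name/size verification
)
```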
### Expected behavior
No error
### Environment info
datasets 4.0.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7728/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7727/comments | https://api.github.com/repos/huggingface/datasets/issues/7727/events | https://github.com/huggingface/datasets/issues/7727 | 3,295,718,578 | I_kwDODunzps7EcKyy | 7,727 | config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally | {
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/doctorpangloss",
"id": 2229300,
"login": "doctorpangloss",
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"type": "User",
"url": "https://api.github.com/users/doctorpangloss",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-06T08:21:37Z | 2025-08-06T08:21:37Z | null | NONE | null | null | null | null | ### Describe the bug
```
- config_name: some_config
data_files:
- split: train
path:
- images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same directory locally works fine.
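For what it's worth, normalizing the pattern before joining removes the prefix; an illustrative sketch of the expected resolution (not the actual resolver code):
```python
import posixpath

pattern = "./images/xyz/*.jpg"
print(posixpath.normpath(pattern))  # images/xyz/*.jpg
```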
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
### Environment info
datasets 4.0.0
datasets 3.4.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7727/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7724/comments | https://api.github.com/repos/huggingface/datasets/issues/7724/events | https://github.com/huggingface/datasets/issues/7724 | 3,292,315,241 | I_kwDODunzps7EPL5p | 7,724 | Can not stepinto load_dataset.py? | {
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/micklexqg",
"id": 13776012,
"login": "micklexqg",
"node_id": "MDQ6VXNlcjEzNzc2MDEy",
"organizations_url": "https://api.github.com/users/micklexqg/orgs",
"received_events_url": "https://api.github.com/users/micklexqg/received_events",
"repos_url": "https://api.github.com/users/micklexqg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/micklexqg",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-05T09:28:51Z | 2025-08-05T09:28:51Z | null | NONE | null | null | null | null | I set a breakpoint in "load_dataset.py" and try to debug my data load codes, but it does not stop at any breakpoints, so "load_dataset.py" can not be stepped into ?
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7724/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7723/comments | https://api.github.com/repos/huggingface/datasets/issues/7723/events | https://github.com/huggingface/datasets/issues/7723 | 3,289,943,261 | I_kwDODunzps7EGIzd | 7,723 | Don't remove `trust_remote_code` arg!!! | {
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/autosquid",
"id": 758925,
"login": "autosquid",
"node_id": "MDQ6VXNlcjc1ODkyNQ==",
"organizations_url": "https://api.github.com/users/autosquid/orgs",
"received_events_url": "https://api.github.com/users/autosquid/received_events",
"repos_url": "https://api.github.com/users/autosquid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/autosquid",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-08-04T15:42:07Z | 2025-08-04T15:42:07Z | null | NONE | null | null | null | null | ### Feature request
Defaulting it to False is a nice balance, but we need to be able to manually set it to True in certain scenarios!
Add `trust_remote_code` arg back please!
### Motivation
Defaulting it to False is a nice balance, but we need to be able to manually set it to True in certain scenarios!
### Your contribution
Defaulting it to False is a nice balance, but we need to be able to manually set it to True in certain scenarios!
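For reference, a sketch of the pre-4.0 call being requested (the argument exists in `datasets` 3.x; the dataset name is a placeholder):
```python
from datasets import load_dataset

# Works with datasets < 4.0 only; the argument was removed in 4.0.0.
ds = load_dataset("some/script_based_dataset", trust_remote_code=True)
```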
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7723/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7722/comments | https://api.github.com/repos/huggingface/datasets/issues/7722/events | https://github.com/huggingface/datasets/issues/7722 | 3,289,741,064 | I_kwDODunzps7EFXcI | 7,722 | Out of memory even though using load_dataset(..., streaming=True) | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-08-04T14:41:55Z | 2025-08-04T14:41:55Z | null | NONE | null | null | null | null | ### Describe the bug
I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time, and I eventually run into an OOM.
### Steps to reproduce the bug
```
ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i,sample in enumerate(tqdm(ds)):
target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')
try:
sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
except Exception as e:
print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect a small memory footprint, with memory being freed after each iteration of the for loop. Instead, memory usage keeps increasing. I tried removing the logic that writes the sound file and just printed the sample, but the issue remains the same.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7722/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7721/comments | https://api.github.com/repos/huggingface/datasets/issues/7721/events | https://github.com/huggingface/datasets/issues/7721 | 3,289,426,104 | I_kwDODunzps7EEKi4 | 7,721 | Bad split error message when using percentages | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I'd like to work on this: add clearer validation/messages for percent-based splits + tests",
"The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n raise ValueError(f'Unknown split \"{split}\". Should be one of {l... | 2025-08-04T13:20:25Z | 2025-08-14T14:42:24Z | null | NONE | null | null | null | null | ### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
Edit: Same happens with a split like _train[:90000]_
### Steps to reproduce the bug
```
from datasets import load_dataset

for split in range(10):
    split_str = f"train[{split*10}%:{(split+1)*10}%]"
    print(f"Processing split {split_str}...")
    ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
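As a workaround sketch (my assumption, not confirmed in this thread): percent slicing is documented for non-streaming loads, so dropping `streaming=True` may avoid the error at the cost of downloading the data first:
```
from datasets import load_dataset

for i in range(10):
    split_str = f"train[{i*10}%:{(i+1)*10}%]"
    ds = load_dataset("user/dataset", split=split_str)  # no streaming=True
```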
### Environment info
python 3.12.11
ubuntu 24
dataset 4.0.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7721/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7720/comments | https://api.github.com/repos/huggingface/datasets/issues/7720/events | https://github.com/huggingface/datasets/issues/7720 | 3,287,150,513 | I_kwDODunzps7D7e-x | 7,720 | Datasets 4.0 map function causing column not found | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Darejkal",
"id": 55143337,
"login": "Darejkal",
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Darejkal",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The m... | 2025-08-03T12:52:34Z | 2025-08-07T19:23:34Z | null | NONE | null | null | null | null | ### Describe the bug
The column returned after mapping is not found in the new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction. After running get_total_audio_length, it errors out because `data` does not have a `duration` column:
```
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)  # NUM_PROC is defined elsewhere
    print(data)
    durations = data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```
### Expected behavior
New datasets.Dataset instance should have new columns attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7720/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7719/comments | https://api.github.com/repos/huggingface/datasets/issues/7719/events | https://github.com/huggingface/datasets/issues/7719 | 3,285,928,491 | I_kwDODunzps7D20or | 7,719 | Specify dataset columns types in typehint | {
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Samoed",
"id": 36135455,
"login": "Samoed",
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"repos_url": "https://api.github.com/users/Samoed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Samoed",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-08-02T13:22:31Z | 2025-08-02T13:22:31Z | null | NONE | null | null | null | null | ### Feature request
Make `Dataset` optionally generic for usage with type annotations, like it was done for `torch.DataLoader` https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of datasets objects, but they're a bit poor in typehints. E.g. we can specify this for a DataLoader:
```python
from typing import TypedDict
from torch.utils.data import DataLoader

class CorpusInput(TypedDict):
    title: list[str]
    body: list[str]

class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]

def queries_loader() -> DataLoader[QueryInput]:
    ...

def corpus_loader() -> DataLoader[CorpusInput]:
    ...
```
But for `datasets` we can only specify the expected columns in comments:
```python
from datasets import Dataset
QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str` """
```
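A sketch of the desired generic usage (hypothetical - `Dataset` is not generic today, so the annotation is quoted to keep the module importable):
```python
from typing import TypedDict
from datasets import Dataset

class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]

# Hypothetical API, mirroring the DataLoader example above
def queries_dataset() -> "Dataset[QueryInput]":
    ...
```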
### Your contribution
I can create draft implementation | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7719/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7717/comments | https://api.github.com/repos/huggingface/datasets/issues/7717/events | https://github.com/huggingface/datasets/issues/7717 | 3,282,855,127 | I_kwDODunzps7DrGTX | 7,717 | Cached dataset is not used when explicitly passing the cache_dir parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi, I've investigated this issue and can confirm the bug. Here are my findings:\n\n**1. Reproduction:**\nI was able to reproduce the issue on the latest `main` branch. Using the provided code snippet, `snapshot_download` correctly populates the custom `cache_dir`, but `load_dataset` with the same `cache_dir` trigg... | 2025-08-01T07:12:41Z | 2025-08-05T19:19:36Z | null | NONE | null | null | null | null | ### Describe the bug
Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset(), the cached snapshot is not used. In both calls, I provide the cache_dir parameter.
### Steps to reproduce the bug
```
from datasets import load_dataset, concatenate_datasets
from huggingface_hub import snapshot_download

def download_ds(name: str):
    snapshot_download(repo_id=name, repo_type="dataset", cache_dir="G:/Datasets/cache")

def prepare_ds():
    audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4, cache_dir="G:/Datasets/cache")
    print(audio_ds.features)  # the original snippet printed `sfw_ds.features`, an undefined name

if __name__ == '__main__':
    download_ds("openslr/librispeech_asr")
    prepare_ds()
```
### Expected behavior
I'd expect that the cached version of the dataset is used. Instead, the same dataset is downloaded again to the default cache directory.
### Environment info
Windows 11
datasets==4.0.0
Python 3.12.11 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7717/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7717/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7709/comments | https://api.github.com/repos/huggingface/datasets/issues/7709/events | https://github.com/huggingface/datasets/issues/7709 | 3,276,677,990 | I_kwDODunzps7DTiNm | 7,709 | Release 4.0.0 breaks usage patterns of with_format | {
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wittenator",
"id": 9154515,
"login": "wittenator",
"node_id": "MDQ6VXNlcjkxNTQ1MTU=",
"organizations_url": "https://api.github.com/users/wittenator/orgs",
"received_events_url": "https://api.github.com/users/wittenator/received_events",
"repos_url": "https://api.github.com/users/wittenator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wittenator",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"nump... | 2025-07-30T11:34:53Z | 2025-08-07T08:27:18Z | 2025-08-07T08:27:18Z | NONE | null | null | null | null | ### Describe the bug
Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new Column() class. As far as I see, this makes working on a whole column (in-memory) more complex, i.e. normalizing an in-memory dataset for which iterating would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new Column class yet.
### Steps to reproduce the bug
Steps to reproduce:
```
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```
### Expected behavior
Working on whole columns should be possible.
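Based on the maintainer's reply quoted above, a minimal sketch of the 4.0 pattern (assuming a split is selected so that indexing returns a `Column`):
```
from datasets import load_dataset

dataset = load_dataset("lhoestq/demo1", split="train").with_format("numpy")
col = dataset["star"]  # datasets 4.0 returns a lazy Column object
arr = col[:]           # materialize the whole column as a numpy array
print(arr.ndim)
```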
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wittenator",
"id": 9154515,
"login": "wittenator",
"node_id": "MDQ6VXNlcjkxNTQ1MTU=",
"organizations_url": "https://api.github.com/users/wittenator/orgs",
"received_events_url": "https://api.github.com/users/wittenator/received_events",
"repos_url": "https://api.github.com/users/wittenator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wittenator",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7709/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7707/comments | https://api.github.com/repos/huggingface/datasets/issues/7707/events | https://github.com/huggingface/datasets/issues/7707 | 3,271,867,998 | I_kwDODunzps7DBL5e | 7,707 | load_dataset() in 4.0.0 failed when decoding audio | {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq . Would you please have a look at it? I use the official NV Docker ([NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`) on A100 and encountered this issue, but I don't know how to fix it.",
"Use !pip install -U datasets[audio]... | 2025-07-29T03:25:03Z | 2025-09-15T16:17:06Z | 2025-08-01T05:15:45Z | NONE | null | null | null | null | ### Describe the bug
Cannot decode audio data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```
On the first run, I got:
```
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 172, in decode_example
raise ImportError("To support decoding audio data, please install 'torchcodec'.")
ImportError: To support decoding audio data, please install 'torchcodec'.
```
After `pip install torchcodec` and running again, I got:
```
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/_metadata.py", line 16, in <module>
from torchcodec._core.ops import (
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 84, in <module>
load_torchcodec_shared_libraries()
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 69, in load_torchcodec_shared_libraries
raise RuntimeError(
RuntimeError: Could not load libtorchcodec. Likely causes:
1. FFmpeg is not properly installed in your environment. We support
versions 4, 5, 6 and 7.
2. The PyTorch version (2.8.0a0+5228986c39.nv25.06) is not compatible with
this version of TorchCodec. Refer to the version compatibility
table:
https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
3. Another runtime dependency; see exceptions below.
The following exceptions were raised as we tried to load libtorchcodec:
[start of libtorchcodec loading traceback]
FFmpeg version 7: libavutil.so.59: cannot open shared object file: No such file or directory
FFmpeg version 6: libavutil.so.58: cannot open shared object file: No such file or directory
FFmpeg version 5: libavutil.so.57: cannot open shared object file: No such file or directory
FFmpeg version 4: libavutil.so.56: cannot open shared object file: No such file or directory
[end of libtorchcodec loading traceback].
```
After `apt update && apt install ffmpeg -y`, I got:
```
Traceback (most recent call last):
File "/workspace/jiqing/test_datasets.py", line 4, in <module>
print(dataset[0]["audio"]["array"])
~~~~~~~^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 198, in decode_example
audio = AudioDecoder(bytes, stream_index=self.stream_index, sample_rate=self.sampling_rate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_audio_decoder.py", line 62, in __init__
self._decoder = create_decoder(source=source, seek_mode="approximate")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_decoder_utils.py", line 33, in create_decoder
return core.create_from_bytes(source, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 144, in create_from_bytes
return create_from_tensor(buffer, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'torchcodec_ns::create_from_tensor' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchcodec_ns::create_from_tensor' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /__w/torchcodec/torchcodec/pytorch/torchcodec/src/torchcodec/_core/custom_ops.cpp:694 [kernel]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /opt/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMPS: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradXPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:108 [backend fallback]
AutogradLazy: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradMTIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMAIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMeta: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:99 [backend fallback]
Tracer: registered at /opt/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:542 [backend fallback]
AutocastMPS: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```
### Expected behavior
The result is
```
[0.00238037 0.0020752 0.00198364 ... 0.00042725 0.00057983 0.0010376 ]
```
on `datasets==3.6.0`
### Environment info
[NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`
```
- `datasets` version: 4.0.0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
} | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7707/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7707/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7705/comments | https://api.github.com/repos/huggingface/datasets/issues/7705/events | https://github.com/huggingface/datasets/issues/7705 | 3,269,070,499 | I_kwDODunzps7C2g6j | 7,705 | Can Not read installed dataset in dataset.load(.) | {
"avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4",
"events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangChiEn/followers",
"following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuangChiEn",
"id": 52521165,
"login": "HuangChiEn",
"node_id": "MDQ6VXNlcjUyNTIxMTY1",
"organizations_url": "https://api.github.com/users/HuangChiEn/orgs",
"received_events_url": "https://api.github.com/users/HuangChiEn/received_events",
"repos_url": "https://api.github.com/users/HuangChiEn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuangChiEn",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n\n```python\ndataset = load_dataset(local_directory_path)\n```",
"> You can download the dataset lo... | 2025-07-28T09:43:54Z | 2025-08-05T01:24:32Z | null | NONE | null | null | null | null | Hi, folks, I'm newbie in huggingface dataset api.
As the title says, I'm facing an issue where the `load_dataset` API cannot find the already-downloaded dataset.
code snippet :
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />
data path :
"/xxx/joseph/llava_ds/vlm_ds"
It contains all the video clips I want!
<img width="1398" height="261" alt="Image" src="https://github.com/user-attachments/assets/bf213b66-e344-4311-97e7-bc209677ae77" />
I run the py script with:
<img width="1042" height="38" alt="Image" src="https://github.com/user-attachments/assets/8b3fcee4-e1a6-41b8-bee1-91567b00d9d2" />
But something bad happened: even though I provide the dataset path via "HF_HUB_CACHE", it still attempts to download the data from the remote side:
<img width="1697" height="813" alt="Image" src="https://github.com/user-attachments/assets/baa6cff1-a724-4710-a8c4-4805459deffb" />
Any suggestion will be appreciated!! | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7705/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7703/comments | https://api.github.com/repos/huggingface/datasets/issues/7703/events | https://github.com/huggingface/datasets/issues/7703 | 3,265,648,942 | I_kwDODunzps7Cpdku | 7,703 | [Docs] map() example uses undefined `tokenizer` — causes NameError | {
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sanjaykumar030",
"id": 183703408,
"login": "Sanjaykumar030",
"node_id": "U_kgDOCvMXcA",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sanjaykumar030",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I've submitted PR #7704 which adds documentation to clarify the behavior of `map()` when returning `None`."
] | 2025-07-26T13:35:11Z | 2025-07-27T09:44:35Z | null | CONTRIBUTOR | null | null | null | null | ## Description
The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time it's copied.
Here is the problematic line:
```python
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This assumes the user has already set up a tokenizer, which contradicts the goal of having self-contained, copy-paste-friendly examples.
## Problem
Users who copy and run the example as-is will encounter:
```python
NameError: name 'tokenizer' is not defined
```
This breaks the flow for users and violates HuggingFace's documentation principle that examples should "work as expected" when copied directly.
## Proposal
Update the example to include the required tokenizer setup using the Transformers library, like so:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This will help new users understand the workflow and apply the method correctly.
## Note
This PR complements ongoing improvements like #7700, which clarifies multiprocessing in .map(). My change focuses on the undefined `tokenizer`, which causes the NameError.
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7703/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7700/comments | https://api.github.com/repos/huggingface/datasets/issues/7700/events | https://github.com/huggingface/datasets/issues/7700 | 3,263,922,255 | I_kwDODunzps7Ci4BP | 7,700 | [doc] map.num_proc needs clarification | {
"avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4",
"events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}",
"followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers",
"following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}",
"gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sfc-gh-sbekman",
"id": 196988264,
"login": "sfc-gh-sbekman",
"node_id": "U_kgDOC73NaA",
"organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs",
"received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events",
"repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sfc-gh-sbekman",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-07-25T17:35:09Z | 2025-07-25T17:39:36Z | null | NONE | null | null | null | null | https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached
shards are loaded sequentially.
```
for batch:
```
num_proc (int, optional, defaults to None): The number of processes to use for multiprocessing. If None, no
multiprocessing is used. This can significantly speed up batching for large datasets.
```
So what happens with `map.num_proc` - is the behavior the same as for `batch.num_proc`, i.e. no multiprocessing is used only when `num_proc=None`?
Let's update the doc to be unambiguous.
**bonus**: we could make all of these behave similarly to `DataLoader.num_workers` - where `num_workers==0` implies no multiprocessing. I think that's the most intuitive, IMHO. 0 workers - the main process has to do all the work. `None` could be the same as `0`.
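A sketch of that proposed convention (hypothetical semantics, not the current API):
```
# Hypothetical, DataLoader-style convention for Dataset.map / Dataset.batch:
#
#   ds.map(fn, num_proc=0)     -> main process does all the work, no multiprocessing
#   ds.map(fn, num_proc=None)  -> same as 0
#   ds.map(fn, num_proc=8)     -> 8 worker processes
```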
context: debugging a failing `map`
Thank you! | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7700/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7699/comments | https://api.github.com/repos/huggingface/datasets/issues/7699/events | https://github.com/huggingface/datasets/issues/7699 | 3,261,053,171 | I_kwDODunzps7CX7jz | 7,699 | Broken link in documentation for "Create a video dataset" | {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The URL is ok but it seems the webdataset website is down. There seems to be a related issue here: https://github.com/webdataset/webdataset/issues/155\n\nFeel free to ask the authors there for an update. Otherwise happy to witch the link to the mirror shared in that issue"
] | 2025-07-24T19:46:28Z | 2025-07-25T15:27:47Z | null | NONE | null | null | null | null | The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" /> | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7699/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7698/comments | https://api.github.com/repos/huggingface/datasets/issues/7698/events | https://github.com/huggingface/datasets/issues/7698 | 3,255,350,916 | I_kwDODunzps7CCLaE | 7,698 | NotImplementedError when using streaming=True in Google Colab environment | {
"avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4",
"events_url": "https://api.github.com/users/Aniket17200/events{/privacy}",
"followers_url": "https://api.github.com/users/Aniket17200/followers",
"following_url": "https://api.github.com/users/Aniket17200/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniket17200/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Aniket17200",
"id": 100470741,
"login": "Aniket17200",
"node_id": "U_kgDOBf0P1Q",
"organizations_url": "https://api.github.com/users/Aniket17200/orgs",
"received_events_url": "https://api.github.com/users/Aniket17200/received_events",
"repos_url": "https://api.github.com/users/Aniket17200/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Aniket17200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniket17200/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Aniket17200",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi, @Aniket17200, try upgrading datasets using '!pip install -U datasets'. I hope this will resolve your issue.",
"Thank you @tanuj-rai, it's working great "
] | 2025-07-23T08:04:53Z | 2025-07-23T15:06:23Z | null | NONE | null | null | null | null | ### Describe the bug
When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after upgrading datasets and huggingface_hub and restarting the session.
### Steps to reproduce the bug
1. Open a new Google Colab notebook.
2. (Optional but recommended) Run `!pip install --upgrade datasets huggingface_hub` and restart the runtime.
3. Run the following code:
```python
from datasets import load_dataset

try:
    print("Attempting to load a stream...")
    streaming_dataset = load_dataset('tiiuae/falcon-refinedweb', streaming=True)
    print("Success!")
except Exception as e:
    print(e)
```
### Expected behavior
The load_dataset command should return a StreamingDataset object without raising an error, allowing iteration over the dataset.
### Actual behavior
The code fails and prints the following error traceback:
[PASTE THE FULL ERROR TRACEBACK HERE]
(Note: Copy the entire error message you received, from Traceback... to the final error line, and paste it in this section.)
### Environment info
Platform: Google Colab
datasets version: [Run !pip show datasets in Colab and paste the version here]
huggingface_hub version: [Run !pip show huggingface_hub and paste the version here]
Python version: [Run !python --version and paste the version here] | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7698/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7697/comments | https://api.github.com/repos/huggingface/datasets/issues/7697/events | https://github.com/huggingface/datasets/issues/7697 | 3,254,526,399 | I_kwDODunzps7B_CG_ | 7,697 | - | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-07-23T01:30:32Z | 2025-07-25T15:21:39Z | 2025-07-25T15:21:39Z | NONE | null | null | null | null | - | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7697/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7696/comments | https://api.github.com/repos/huggingface/datasets/issues/7696/events | https://github.com/huggingface/datasets/issues/7696 | 3,253,433,350 | I_kwDODunzps7B63QG | 7,696 | load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Manalelaidouni",
"id": 25346345,
"login": "Manalelaidouni",
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Manalelaidouni",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! This is because `datasets` now uses the FFmpeg-based library `torchcodec` instead of the libsndfile-based library `soundfile` to decode audio data. Those two have different decoding implementations",
"I’m all for torchcodec, good luck with the migration!"
] | 2025-07-22T17:02:17Z | 2025-07-30T14:22:21Z | 2025-07-30T14:22:21Z | NONE | null | null | null | null | ### Describe the bug
In the datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions; this breaks integration tests that depend on consistent sample data across different environments (the first and second envs specified below).
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(24000))
sample = ds[0]["audio"]["array"]
print(sample)
# sample in 3.6.0
[0.00231914 0.00245417 0.00187414 ... 0.00061956 0.00101157 0.00076325]
# sample in 4.0.0
array([0.00238037, 0.00220794, 0.00198703, ..., 0.00057983, 0.00085863,
0.00115309], dtype=float32)
```
### Expected behavior
The same dataset should load identical samples across versions to maintain reproducibility.
### Environment info
First env:
- datasets version: 3.6.0
- Platform: Windows-10-10.0.26100-SP0
- Python: 3.11.0
Second env:
- datasets version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python: 3.11.13 | {
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Manalelaidouni",
"id": 25346345,
"login": "Manalelaidouni",
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Manalelaidouni",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7696/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7694/comments | https://api.github.com/repos/huggingface/datasets/issues/7694/events | https://github.com/huggingface/datasets/issues/7694 | 3,247,600,408 | I_kwDODunzps7BknMY | 7,694 | Dataset.to_json consumes excessive memory, appears to not be a streaming operation | {
"avatar_url": "https://avatars.githubusercontent.com/u/49603999?v=4",
"events_url": "https://api.github.com/users/ycq0125/events{/privacy}",
"followers_url": "https://api.github.com/users/ycq0125/followers",
"following_url": "https://api.github.com/users/ycq0125/following{/other_user}",
"gists_url": "https://api.github.com/users/ycq0125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ycq0125",
"id": 49603999,
"login": "ycq0125",
"node_id": "MDQ6VXNlcjQ5NjAzOTk5",
"organizations_url": "https://api.github.com/users/ycq0125/orgs",
"received_events_url": "https://api.github.com/users/ycq0125/received_events",
"repos_url": "https://api.github.com/users/ycq0125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ycq0125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ycq0125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ycq0125",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! to_json is memory efficient and writes the data by batch:\n\nhttps://github.com/huggingface/datasets/blob/d9861d86be222884dabbd534a2db770c70c9b558/src/datasets/io/json.py#L153-L159\n\nWhat memory are you mesuring ? If you are mesuring RSS, it is likely that it counts the memory mapped data of the dataset. Mem... | 2025-07-21T07:51:25Z | 2025-07-25T14:42:21Z | null | NONE | null | null | null | null | ### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage grows with the size of the entire Dataset object being saved, rather than staying low and constant as a streaming write would.
This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes. This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers.
<img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" />
### Steps to reproduce the bug
```
import os

from datasets import Dataset, load_dataset
from loguru import logger

# A public dataset to test with
REPO_ID = "adam89/TinyStoriesChinese"
SUBSET = "default"
SPLIT = "train"
NUM_ROWS_TO_LOAD = 10  # increase this to a large number to make the memory spike obvious


def run_test():
    """Loads data into memory and then saves it, triggering the memory issue."""
    logger.info("Step 1: Loading data into an in-memory Dataset object...")
    # Create an in-memory Dataset object from a stream
    # This simulates having a processed dataset ready to be saved
    iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True)
    limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD)
    in_memory_dataset = Dataset.from_generator(limited_stream.__iter__)
    logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.")

    output_path = "./test_output.jsonl"
    logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...")
    logger.info("Please monitor memory usage during this step.")
    # This is the step that causes the massive memory allocation
    in_memory_dataset.to_json(output_path, force_ascii=False)
    logger.info("Save operation complete.")
    os.remove(output_path)


if __name__ == "__main__":
    # To see the memory usage clearly, run this script with a memory profiler:
    # python -m memray run your_script_name.py
    # python -m memray tree xxx.bin
    run_test()
```
### Expected behavior
I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset.
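For comparison, a JSONL export can be written by hand with a flat memory profile. This is a minimal sketch, not the library's implementation, and `batch_size` here is an illustrative parameter:

```python
import json

def to_jsonl_streaming(dataset, path, batch_size=1000):
    """Write a datasets.Dataset to JSON Lines, materializing one batch at a time."""
    with open(path, "w", encoding="utf-8") as f:
        for start in range(0, len(dataset), batch_size):
            batch = dataset[start : start + batch_size]  # dict of column name -> list of values
            columns = list(batch.keys())
            for values in zip(*(batch[c] for c in columns)):
                f.write(json.dumps(dict(zip(columns, values)), ensure_ascii=False) + "\n")
```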
### Environment info
datasets version:3.6.0
Python version:3.9.18
os:macOS 15.3.1 (arm64) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7694/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7694/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7693/comments | https://api.github.com/repos/huggingface/datasets/issues/7693/events | https://github.com/huggingface/datasets/issues/7693 | 3,246,369,678 | I_kwDODunzps7Bf6uO | 7,693 | Dataset scripts are no longer supported, but found superb.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/114297534?v=4",
"events_url": "https://api.github.com/users/edwinzajac/events{/privacy}",
"followers_url": "https://api.github.com/users/edwinzajac/followers",
"following_url": "https://api.github.com/users/edwinzajac/following{/other_user}",
"gists_url": "https://api.github.com/users/edwinzajac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/edwinzajac",
"id": 114297534,
"login": "edwinzajac",
"node_id": "U_kgDOBtAKvg",
"organizations_url": "https://api.github.com/users/edwinzajac/orgs",
"received_events_url": "https://api.github.com/users/edwinzajac/received_events",
"repos_url": "https://api.github.com/users/edwinzajac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/edwinzajac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwinzajac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/edwinzajac",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`",
"Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)... | 2025-07-20T13:48:06Z | 2025-09-04T10:32:12Z | null | NONE | null | null | null | null | ### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines), but it seems to work only with older `datasets` versions.
I get the following error:
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[65], line 1
----> 1 dataset = datasets.load_dataset("superb", name="asr", split="test")
      3 # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
      4 # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
      5 for out in tqdm(pipe(KeyDataset(dataset, "file"))):

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
   1387 verification_mode = VerificationMode(
   1388     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   1389 )
   1391 # Create a dataset builder
-> 1392 builder_instance = load_dataset_builder(
   1393     path=path,
   1394     name=name,
   1395     data_dir=data_dir,
   1396     data_files=data_files,
   1397     cache_dir=cache_dir,
   1398     features=features,
   1399     download_config=download_config,
   1400     download_mode=download_mode,
   1401     revision=revision,
   1402     token=token,
   1403     storage_options=storage_options,
   1404     **config_kwargs,
   1405 )
   1407 # Return iterable dataset in case of streaming
   1408 if streaming:

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
   1130 if features is not None:
   1131     features = _fix_for_backward_compatible_features(features)
-> 1132 dataset_module = dataset_module_factory(
   1133     path,
   1134     revision=revision,
   1135     download_config=download_config,
   1136     download_mode=download_mode,
   1137     data_dir=data_dir,
   1138     data_files=data_files,
   1139     cache_dir=cache_dir,
   1140 )
   1141 # Get dataset builder class
   1142 builder_kwargs = dataset_module.builder_kwargs

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
   1026 if isinstance(e1, FileNotFoundError):
   1027     raise FileNotFoundError(
   1028         f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
   1029         f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
   1030     ) from None
-> 1031 raise e1 from None
   1032 else:
   1033     raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
    981 try:
    982     api.hf_hub_download(
    983         repo_id=path,
    984         filename=filename,
   (...)
    987         proxies=download_config.proxies,
    988     )
--> 989     raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
    990 except EntryNotFoundError:
    991     # Use the infos from the parquet export except in some cases:
    992     if data_dir or data_files or (revision and revision != "main"):

RuntimeError: Dataset scripts are no longer supported, but found superb.py
```
NB: I tried replacing "superb" with "anton-l/superb_demo", but I get a 'torchcodec' import error. Maybe I misunderstood something.
### Steps to reproduce the bug
```
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
### Expected behavior
Get the tutorial expected results
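A possible workaround, sketched under two assumptions: that the `anton-l/superb_demo` mirror mentioned above exposes the same `asr` config as plain data files, and that the 'torchcodec' import error simply means the audio decoder is missing on `datasets` 4.x:

```python
# Option 1: pin the last releases that still ran dataset scripts
#   pip install "datasets<4.0.0"
#   then: datasets.load_dataset("superb", name="asr", split="test", trust_remote_code=True)
# Option 2: stay on datasets 4.x, use the script-free mirror, and install the
#   FFmpeg-based decoder its audio column needs: pip install torchcodec
import datasets

dataset = datasets.load_dataset("anton-l/superb_demo", "asr", split="test")
```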
### Environment info
--- SYSTEM INFO ---
Operating System: Ubuntu 24.10
Kernel: Linux 6.11.0-29-generic
Architecture: x86-64
--- PYTHON ---
Python 3.11.13
--- VENV INFO ----
datasets=4.0.0
transformers=4.53
tqdm=4.67.1 | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7693/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7692/comments | https://api.github.com/repos/huggingface/datasets/issues/7692/events | https://github.com/huggingface/datasets/issues/7692 | 3,246,268,635 | I_kwDODunzps7BfiDb | 7,692 | xopen: invalid start byte for streaming dataset with trust_remote_code=True | {
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"events_url": "https://api.github.com/users/sedol1339/events{/privacy}",
"followers_url": "https://api.github.com/users/sedol1339/followers",
"following_url": "https://api.github.com/users/sedol1339/following{/other_user}",
"gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sedol1339",
"id": 5188731,
"login": "sedol1339",
"node_id": "MDQ6VXNlcjUxODg3MzE=",
"organizations_url": "https://api.github.com/users/sedol1339/orgs",
"received_events_url": "https://api.github.com/users/sedol1339/received_events",
"repos_url": "https://api.github.com/users/sedol1339/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sedol1339",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! it would be cool to convert this dataset to Parquet. This will make it work for `datasets>=4.0`, enable the Dataset Viewer and make it more reliable to load/stream (currently it uses a loading script in python and those are known for having issues sometimes)\n\nusing `datasets==3.6.0`, here is the command to ... | 2025-07-20T11:08:20Z | 2025-07-25T14:38:54Z | null | NONE | null | null | null | null | ### Describe the bug
I am trying to load the YODAS2 dataset with datasets==3.6.0:
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte`
The cause of the error is the following:
```
from datasets.utils.file_utils import xopen
filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
xopen(filepath, 'r').read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
And the cause of this is the following:
```
import fsspec
fsspec.open(
    'hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json',
    mode='r',
    hf={'token': None, 'endpoint': 'https://huggingface.co'},
).open().read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
Is it true that streaming=True loading is no longer supported for trust_remote_code=True datasets, even with datasets==3.6.0? This breaks backward compatibility.
### Steps to reproduce the bug
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
### Expected behavior
No errors expected
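A small diagnostic sketch (not specific to this dataset's loader): reading the first bytes in binary mode shows whether the file is really UTF-8 JSON or compressed/binary content that a text-mode open cannot decode:

```python
import fsspec  # the hf:// protocol requires huggingface_hub to be installed

url = (
    "hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c"
    "/data/ru000/text/00000000.json"
)
with fsspec.open(url, mode="rb") as f:  # binary mode: no UTF-8 decoding attempted
    head = f.read(16)
print(head.hex(" "))  # e.g. a leading "1f 8b" would mean gzip rather than plain JSON
```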
### Environment info
datasets==3.6.0, ubuntu 24.04 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7692/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7691/comments | https://api.github.com/repos/huggingface/datasets/issues/7691/events | https://github.com/huggingface/datasets/issues/7691 | 3,245,547,170 | I_kwDODunzps7Bcx6i | 7,691 | Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90",
"It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to j... | 2025-07-19T18:40:27Z | 2025-07-25T08:51:10Z | null | NONE | null | null | null | null | ### Describe the bug
I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2 GB. The instant I hit a shard containing one of those videos, I get an ArrowCapacityError, even with streaming.
I made a config for the dataset that specifically includes just one problem shard, and the error triggers as soon as load_dataset() runs, even with streaming=True:
```
ds = load_dataset("bible-nlp/sign-bibles", "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard", streaming=True, split="train")
```
This gives:
```
File "/opt/home/cleong/projects/semantic_and_visual_similarity/sign-bibles-dataset/sign_bibles_dataset/tasks/test_iteration.py", line 13, in iterate_keys
ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/load.py", line 1409, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/builder.py", line 1225, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 88, in _split_generators
pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 2046, in pyarrow.lib._Tabular.from_pylist
File "pyarrow/table.pxi", line 6431, in pyarrow.lib._from_pylist
File "pyarrow/table.pxi", line 4893, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1607, in pyarrow.lib._sanitize_arrays
File "pyarrow/table.pxi", line 1588, in pyarrow.lib._schema_from_arrays
File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
File "pyarrow/array.pxi", line 45, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 3980158992
```
### Steps to reproduce the bug
```python
#!/usr/bin/env python
import argparse

from datasets import get_dataset_config_names, load_dataset
from tqdm import tqdm
from pyarrow.lib import ArrowCapacityError, ArrowInvalid


def iterate_keys(language_subset: str) -> None:
    """Iterate over all samples in the Sign Bibles dataset and print idx and sample key."""
    # https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/loading_methods#datasets.load_dataset
    ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
    print(f"\n==> Loaded dataset config '{language_subset}'")
    idx = 0
    estimated_shard_index = 0
    samples_per_shard = 5
    with tqdm(desc=f"{language_subset} samples") as pbar:
        iterator = iter(ds)
        while True:
            try:
                if idx % samples_per_shard == 0 and idx > 0:  # 5 samples per shard: 0, 1, 2, 3, 4
                    print(f"Estimated Shard idx (starting at 0, {samples_per_shard}/shard): {estimated_shard_index}")
                    estimated_shard_index += 1
                sample = next(iterator)
                sample_key = sample.get("__key__", "missing-key")
                print(f"[{language_subset}] idx={idx}, key={sample_key}")
                idx += 1
                pbar.update(1)
            except StopIteration:
                print(f"Finished iterating through {idx} samples of {language_subset}")
                break
            except (ArrowCapacityError, ArrowInvalid) as e:
                print(f"PyArrow error on idx={idx}, config={language_subset}: {e}")
                idx += 1
                pbar.update(1)
                continue
            except KeyError as e:
                print(f"Missing key error on idx={idx}, config={language_subset}: {e}")
                idx += 1
                pbar.update(1)
                continue


def main():
    configs = get_dataset_config_names("bible-nlp/sign-bibles")
    print(f"Available configs: {configs}")
    configs = [
        "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard"
    ]
    for language_subset in configs:
        print(f"TESTING CONFIG {language_subset}")
        iterate_keys(language_subset)
        # Previously wrapped in a config-level try/except:
        # try:
        #     iterate_keys(language_subset)
        # except (ArrowCapacityError, ArrowInvalid) as e:
        #     print(f"PyArrow error at config level for {language_subset}: {e}")
        #     continue
        # except RuntimeError as e:
        #     print(f"RuntimeError at config level for {language_subset}: {e}")
        #     continue


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Iterate through Sign Bibles dataset and print sample keys.")
    args = parser.parse_args()
    main()
```
### Expected behavior
I expect that, with streaming=True, no data is actually loaded until it is requested.
https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset says that with streaming=True:

> In the streaming case:
> Don’t download or cache anything. Instead, the dataset is lazily loaded and will be streamed on-the-fly when iterating on it.

I did expect some trouble with large files, but not that streaming mode would try to load them before they are requested, e.g. with sample["mp4"].
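Given where the comments above locate the failure (the builder packs the first sample into a `pa.Table` only to infer the `Features`), a hedged workaround sketch is to pass explicit `Features` so that inference step is skipped. The column names and types below are assumptions about the shard contents, not the dataset's documented schema:

```python
from datasets import Features, Value, load_dataset

features = Features(
    {
        "__key__": Value("string"),
        "__url__": Value("string"),
        "mp4": Value("large_binary"),  # 64-bit offsets, so one element may exceed 2 GB
        "json": Value("string"),       # hypothetical sidecar metadata column
    }
)
ds = load_dataset(
    "bible-nlp/sign-bibles",
    "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard",
    streaming=True,
    split="train",
    features=features,
)
```

Even if this avoids the failure at load_dataset() time, a single multi-gigabyte element may still hit Arrow limits later in the pipeline.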
### Environment info
Local setup: Conda environment on Ubuntu, pip list includes the following
datasets 4.0.0
pyarrow 20.0.0
Verified on Colab: https://colab.research.google.com/drive/1HdN8stlROWrLSYXUoNeV0vQ9pClhIVM8?usp=sharing, though there it crashes by using up all available RAM
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7691/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7689/comments | https://api.github.com/repos/huggingface/datasets/issues/7689/events | https://github.com/huggingface/datasets/issues/7689 | 3,242,580,301 | I_kwDODunzps7BRdlN | 7,689 | BadRequestError for loading dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/45011687?v=4",
"events_url": "https://api.github.com/users/WPoelman/events{/privacy}",
"followers_url": "https://api.github.com/users/WPoelman/followers",
"following_url": "https://api.github.com/users/WPoelman/following{/other_user}",
"gists_url": "https://api.github.com/users/WPoelman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WPoelman",
"id": 45011687,
"login": "WPoelman",
"node_id": "MDQ6VXNlcjQ1MDExNjg3",
"organizations_url": "https://api.github.com/users/WPoelman/orgs",
"received_events_url": "https://api.github.com/users/WPoelman/received_events",
"repos_url": "https://api.github.com/users/WPoelman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WPoelman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WPoelman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WPoelman",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Same here, for `HuggingFaceFW/fineweb`. Code that worked with no issues for the last 2 months suddenly fails today. Tried updating `datasets`, `huggingface_hub`, `fsspec` to newest versions, but the same error occurs.",
"I'm also hitting this issue, with `mandarjoshi/trivia_qa`; My dataset loading was working su... | 2025-07-18T09:30:04Z | 2025-07-18T11:59:51Z | 2025-07-18T11:52:29Z | NONE | null | null | null | null | ### Describe the bug
Up until a couple of days ago I had no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now I suddenly get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand
✖ Invalid input: expected array, received string
→ at paths
✖ Invalid input: expected boolean, received string
→ at expand
```
I tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both.
What can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution.
### Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset("Helsinki-NLP/europarl", "en-fr", streaming=True, trust_remote_code=True)["train"]
```
### Expected behavior
That the dataset loads as it did a couple days ago.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sergiopaniego",
"id": 17179696,
"login": "sergiopaniego",
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sergiopaniego",
"user_view_type": "public"
} | {
"+1": 23,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7689/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7688/comments | https://api.github.com/repos/huggingface/datasets/issues/7688/events | https://github.com/huggingface/datasets/issues/7688 | 3,238,851,443 | I_kwDODunzps7BDPNz | 7,688 | No module named "distributed" | {
"avatar_url": "https://avatars.githubusercontent.com/u/45058324?v=4",
"events_url": "https://api.github.com/users/yingtongxiong/events{/privacy}",
"followers_url": "https://api.github.com/users/yingtongxiong/followers",
"following_url": "https://api.github.com/users/yingtongxiong/following{/other_user}",
"gists_url": "https://api.github.com/users/yingtongxiong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yingtongxiong",
"id": 45058324,
"login": "yingtongxiong",
"node_id": "MDQ6VXNlcjQ1MDU4MzI0",
"organizations_url": "https://api.github.com/users/yingtongxiong/orgs",
"received_events_url": "https://api.github.com/users/yingtongxiong/received_events",
"repos_url": "https://api.github.com/users/yingtongxiong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yingtongxiong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yingtongxiong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yingtongxiong",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The error ModuleNotFoundError: No module named 'datasets.distributed' means your installed datasets library is too old or incompatible with the version of Library you are using(in my case it was BEIR). The datasets.distributed module was removed in recent versions of the datasets library.\n\nDowngrade datasets to ... | 2025-07-17T09:32:35Z | 2025-07-25T15:14:19Z | null | NONE | null | null | null | null | ### Describe the bug
Hello, when I run the command `from datasets.distributed import split_dataset_by_node`, I always hit the error "No module named 'datasets.distributed'", with different versions such as 4.0.0 and 2.21.0. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.distributed import split_dataset_by_node
### Expected behavior
expecting the command `from datasets.distributed import split_dataset_by_node` to run successfully
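A quick diagnostic sketch, assuming nothing about the environment: a local file or folder named `datasets` on `sys.path` shadows the library and produces exactly this error, so it is worth checking what actually gets imported:

```python
import datasets

print(datasets.__version__)  # the installed release
print(datasets.__file__)     # should point into site-packages, not into your project
from datasets.distributed import split_dataset_by_node  # noqa: E402
```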
### Environment info
python: 3.12 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7688/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7687/comments | https://api.github.com/repos/huggingface/datasets/issues/7687/events | https://github.com/huggingface/datasets/issues/7687 | 3,238,760,301 | I_kwDODunzps7BC49t | 7,687 | Datasets keeps rebuilding the dataset every time i call the python script | {
"avatar_url": "https://avatars.githubusercontent.com/u/58883113?v=4",
"events_url": "https://api.github.com/users/CALEB789/events{/privacy}",
"followers_url": "https://api.github.com/users/CALEB789/followers",
"following_url": "https://api.github.com/users/CALEB789/following{/other_user}",
"gists_url": "https://api.github.com/users/CALEB789/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CALEB789",
"id": 58883113,
"login": "CALEB789",
"node_id": "MDQ6VXNlcjU4ODgzMTEz",
"organizations_url": "https://api.github.com/users/CALEB789/orgs",
"received_events_url": "https://api.github.com/users/CALEB789/received_events",
"repos_url": "https://api.github.com/users/CALEB789/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CALEB789/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CALEB789/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CALEB789",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"here is the code to load the dataset form the cache:\n\n```python\ns = load_dataset('databricks/databricks-dolly-15k')['train']\n```\n\nif you pass the location of a local directory it will create a new cache based on that directory content"
] | 2025-07-17T09:03:38Z | 2025-07-25T15:21:31Z | null | NONE | null | null | null | null | ### Describe the bug
Every time the script runs, the number of cached samples somehow increases.
This can cause a 12 MB dataset to accumulate rebuilt versions totaling 400 MB+.
<img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" />
### Steps to reproduce the bug
```python
from datasets import load_dataset

s = load_dataset('~/.cache/huggingface/datasets/databricks___databricks-dolly-15k')['train']
```
1. A dataset needs to be available in the .cache folder
2. Run the code multiple times, and every time it runs, more versions are created
### Expected behavior
The dataset should load from the existing cache without rebuilding; instead, the number of samples increases every time the script runs.
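As the comment above shows, loading by repo id reuses the existing cache; a short sketch of the intended pattern:

```python
from datasets import load_dataset

# Resolve the dataset by its Hub repo id; datasets finds the cached copy under
# ~/.cache/huggingface/datasets instead of treating the path as new local data
# and rebuilding a fresh version on every run.
s = load_dataset("databricks/databricks-dolly-15k")["train"]
```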
### Environment info
- `datasets` version: 3.6.0
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.13.3
- `huggingface_hub` version: 0.32.3
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
| null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7687/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7687/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false |