# Aevion Codebase RAG Benchmark

Verified structured-retrieval benchmark extracted from a real Python codebase (968 source files, 21,149 chunks) with cryptographically signed partition proofs.
## What's in this dataset

| File | Description |
|---|---|
| `codebase_corpus.jsonl` | 21,149 Python code chunks with 6-field structural metadata |
| `codebase_queries.jsonl` | 300 enterprise query-decomposition pairs |
| `partition_proofs.jsonl` | XGML-signed proof bundles per query |
| `benchmark_results.csv` | Precision/recall/F1 per retrieval method (60 eval queries) |
| `benchmark_summary.json` | Aggregate metrics and auto-tuning parameters |
| `tuning_summary.json` | Grid-search results across 10K synthetic docs |
## Corpus Schema

Each chunk in `codebase_corpus.jsonl` has:

```json
{
  "doc_id": "chunk_000000",
  "text": "module.ClassName (path/to/file.py:20)",
  "layer": "core",
  "module": "verification",
  "function_type": "class",
  "keyword": "hash",
  "complexity": "simple",
  "has_docstring": "yes",
  "source_path": "core/python/...",
  "source_line": 20
}
```
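As a sketch of how these chunks might be consumed, the snippet below parses a few JSONL lines matching the schema above and filters them by structural metadata. The `load_corpus`/`filter_chunks` helpers and the sample records are illustrative, not part of the dataset's tooling.

```python
import json

# Two in-memory lines standing in for codebase_corpus.jsonl
# (field names follow the card; the concrete values are illustrative).
corpus_jsonl = """\
{"doc_id": "chunk_000000", "text": "module.ClassName (path/to/file.py:20)", "layer": "core", "module": "verification", "function_type": "class", "keyword": "hash", "complexity": "simple", "has_docstring": "yes", "source_path": "core/python/example.py", "source_line": 20}
{"doc_id": "chunk_000001", "text": "module.helper (path/to/file.py:44)", "layer": "api", "module": "routing", "function_type": "function", "keyword": "route", "complexity": "simple", "has_docstring": "no", "source_path": "api/python/example.py", "source_line": 44}
"""

def load_corpus(lines):
    """Parse JSONL text into a list of chunk dicts."""
    return [json.loads(line) for line in lines.splitlines() if line.strip()]

def filter_chunks(chunks, **constraints):
    """Keep chunks whose metadata matches every key=value constraint."""
    return [c for c in chunks if all(c.get(k) == v for k, v in constraints.items())]

chunks = load_corpus(corpus_jsonl)
core_classes = filter_chunks(chunks, layer="core", function_type="class")
print([c["doc_id"] for c in core_classes])  # → ['chunk_000000']
```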
## Benchmark Results (60 eval queries)
| Method | Precision | Recall | F1 | Exact Match |
|---|---|---|---|---|
| naive | 0.516 | 0.657 | 0.425 | 11.7% |
| instructed | 1.000 | 0.385 | 0.463 | 23.3% |
| verified_structural | 1.000 | 0.385 | 0.463 | 23.3% |
| verified_consensus | 1.000 | 0.437 | 0.503 | 31.7% |
| verified_structural_ensemble | 1.000 | 0.459 | 0.527 | 33.3% |
**Key finding:** Structural + ensemble retrieval achieves 100% precision (zero irrelevant chunks) vs. 51.6% for naive keyword search.
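For clarity, the per-query metrics in the table above can be computed over retrieved vs. gold `doc_id` sets as follows. This is a minimal sketch of standard set-based precision/recall/F1 scoring; the benchmark's own scoring code is not published here, so treat this as an assumption about the metric definitions.

```python
def score_query(predicted, gold):
    """Set-based precision/recall/F1 and exact match for one query."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # chunks both retrieved and relevant
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1,
            "exact_match": predicted == gold}

# Example: retrieval returned two chunks, one of three gold chunks matched.
s = score_query(["chunk_0", "chunk_9"], ["chunk_0", "chunk_1", "chunk_2"])
print(round(s["precision"], 3), round(s["recall"], 3), round(s["f1"], 3))  # → 0.5 0.333 0.4
```

Per-method averages in `benchmark_results.csv` would then be the mean of these per-query scores over the 60 eval queries.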
## Method

- AST extraction: Python files parsed with the `ast` module → class/function/method chunks
- 6-field structural metadata: layer, module, function_type, keyword, complexity, has_docstring
- Constitutional Halt labeling: VarianceHaltMonitor (σ > 2.5× threshold) as automatic quality gate
- XGML proof bundles: Ed25519-signed proof chain on every partition plan
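The AST-extraction step above can be sketched with the standard-library `ast` module. This is a simplified stand-in, not the benchmark's actual chunker: it emits only the `doc_id`/`text`/`function_type`/`has_docstring`/`source_line` fields, collapses methods into `"function"`, and omits the layer/module/keyword/complexity labeling.

```python
import ast

source = '''
class Verifier:
    """Checks proofs."""
    def check(self, proof):
        return True

def hash_chunk(text):
    return hash(text)
'''

def extract_chunks(src, path="example.py"):
    """Walk the AST and emit one chunk per class/function definition,
    loosely mirroring the corpus schema fields."""
    chunks = []
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            kind = "class" if isinstance(node, ast.ClassDef) else "function"
            chunks.append({
                "doc_id": f"chunk_{len(chunks):06d}",
                "text": f"{node.name} ({path}:{node.lineno})",
                "function_type": kind,
                "has_docstring": "yes" if ast.get_docstring(node) else "no",
                "source_line": node.lineno,
            })
    return chunks

for c in extract_chunks(source):
    print(c["text"], c["function_type"], c["has_docstring"])
```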
## Related

- Aevion Verifiable AI — source codebase
- Patent US 63/896,282 — Variance Halt + Constitutional AI halts
## License

Apache 2.0 — free for research and commercial use.