This dataset contains ~220,000 open-access PDF documents from [govdocs1](https://digitalcorpora.org/corpora/file-corpora/files/). It wants to be OCR'd.

- the dataset is uploaded as `tar` file pieces of ~10 GiB each due to size/file count limits, with an [index.csv](data/index.csv) covering the details
- 5,000 randomly sampled PDFs are available unarchived in the `sample/` directory

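Since the index ships alongside the archives, it can be inspected before downloading the tar pieces. A minimal sketch — `summarize_index` is an illustrative helper, and nothing about the CSV's column schema is assumed:

```python
import csv

def summarize_index(path: str = "data/index.csv"):
    """Return (row_count, column_names) for the dataset's index CSV.

    The default path matches the index.csv referenced above; no particular
    column layout is assumed beyond a header row.
    """
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)       # first row is the column names
        rows = sum(1 for _ in reader)
    return rows, header
```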
## Recovering the data

Download the `data/` directory (with `huggingface-cli download` or similar) and extract the tar pieces:

```
cat data_pdfs_part.tar.* | tar -xf - && rm data_pdfs_part.tar.*
```
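The one-liner above works because the pieces are a byte-split of a single tar archive, so concatenating them in name order reconstitutes it. For environments without a shell, the same steps can be mirrored in Python — `extract_tar_pieces` is an illustrative helper, not part of the dataset:

```python
import glob
import tarfile
from pathlib import Path

def extract_tar_pieces(pattern: str, dest: str = ".") -> int:
    """Rejoin byte-split tar pieces and extract them into `dest`.

    Python equivalent of: cat data_pdfs_part.tar.* | tar -xf -
    Returns the number of archive members extracted.
    """
    pieces = sorted(glob.glob(pattern))  # name order == byte order
    if not pieces:
        return 0
    joined = Path(dest) / "_joined.tar"
    # Concatenate the pieces byte-for-byte, exactly as `cat` would.
    with open(joined, "wb") as out:
        for piece in pieces:
            with open(piece, "rb") as part:
                while chunk := part.read(1 << 20):
                    out.write(chunk)
    with tarfile.open(joined) as tar:
        count = len(tar.getmembers())
        tar.extractall(dest)
    joined.unlink()  # drop the rejoined archive, like the `rm` above
    return count
```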

---

## GovDocs1 PDF Dataset Analysis

Based on the [index.csv](data/index.csv):

| Date Field | Range | Issues |
|------------|-------|--------|
| **Modified Date** | 1979-12-31 to 2025-03-31 | Dates in 2023-2025 are incorrect (defaulted values) |
| **Created Date** | Various formats | 1,573 invalid "D:00000101000000Z" |

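Invalid created dates like `D:00000101000000Z` can be filtered out by parsing the PDF date-string prefix (`D:YYYYMMDD...`) and rejecting values that don't form a real calendar date. A minimal sketch — `parse_pdf_date` is a hypothetical helper, not something shipped with the dataset:

```python
import re
from datetime import datetime

# PDF date strings start with D: followed by at least a 4-digit year.
PDF_DATE = re.compile(r"^D:(\d{4})(\d{2})?(\d{2})?")

def parse_pdf_date(raw: str):
    """Parse a PDF 'D:YYYYMMDD...' date string to a datetime (date part only).

    Returns None for malformed values, including the dataset's 1,573
    'D:00000101000000Z' placeholders (year 0000 is not a valid date).
    """
    m = PDF_DATE.match(raw or "")
    if not m:
        return None
    year = int(m.group(1))
    month = int(m.group(2) or 1)
    day = int(m.group(3) or 1)
    try:
        return datetime(year, month, day)
    except ValueError:  # e.g. year 0, month 13, day 32
        return None
```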
### Critical Assessment

**Fatal Flaw**: This dataset has excellent technical extraction (99.96% success) but catastrophic intellectual organization. You're essentially working with 230K unlabeled documents.

**Bottom Line**: The structural data is solid, but without subject classification for 79% of documents, this is an unindexed digital landfill masquerading as an archive.