
Document OCR using DeepSeek-OCR

This dataset contains markdown-formatted OCR results for the images in johnlockejrr/hayyim, produced with DeepSeek-OCR via the vLLM API.

Processing Details

Configuration

  • Image Column: image
  • Output Column: markdown
  • Dataset Split: train
  • Batch Size: 8
  • Resolution Mode: base
  • Base Size: 1024
  • Image Size: 1024
  • Crop Mode: False
  • Max Output Tokens: 4,096
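The batch-oriented processing implied by the configuration above can be sketched as follows. This is an illustrative outline only, not the actual client code; the image filenames are placeholders, and only the batch size and token cap come from the configuration listed.

```python
from itertools import islice

# Values taken from the configuration above.
BATCH_SIZE = 8
MAX_OUTPUT_TOKENS = 4096

def batched(items, size=BATCH_SIZE):
    """Yield successive fixed-size batches from a list of items."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch

# Placeholder inputs standing in for the dataset's image column.
images = [f"page_{i}.png" for i in range(20)]
batches = list(batched(images))
# 20 images at batch size 8 -> batches of 8, 8, and 4
```

Each batch would then be sent to the vLLM server in a single request, with generation capped at 4,096 output tokens per image.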

Model Information

DeepSeek-OCR is a state-of-the-art document OCR model that excels at:

  • πŸ“ LaTeX equations - Mathematical formulas preserved in LaTeX format
  • πŸ“Š Tables - Extracted and formatted as HTML/markdown
  • πŸ“ Document structure - Headers, lists, and formatting maintained
  • πŸ–ΌοΈ Image grounding - Spatial layout and bounding box information
  • πŸ” Complex layouts - Multi-column and hierarchical structures
  • 🌍 Multilingual - Supports multiple languages

Resolution Modes

  • Tiny (512Γ—512): Fast processing, 64 vision tokens
  • Small (640Γ—640): Balanced speed/quality, 100 vision tokens
  • Base (1024Γ—1024): High quality, 256 vision tokens
  • Large (1280Γ—1280): Maximum quality, 400 vision tokens
  • Gundam (dynamic): Adaptive multi-tile processing for large documents
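The vision-token counts of the four fixed-size modes follow a simple pattern: each count equals the image side length divided by 64, squared, i.e. the image appears to be tiled into 64x64-pixel patches. The patch size here is inferred from the numbers above, not taken from DeepSeek-OCR documentation:

```python
# Patch size inferred from the mode table above (not documented behavior):
# 512/64 = 8 -> 64 tokens, 640/64 = 10 -> 100, 1024/64 = 16 -> 256, 1280/64 = 20 -> 400.
PATCH = 64

def vision_tokens(side: int) -> int:
    """Vision tokens for a square input of the given side length."""
    return (side // PATCH) ** 2

modes = {"Tiny": 512, "Small": 640, "Base": 1024, "Large": 1280}
for name, side in modes.items():
    print(f"{name}: {vision_tokens(side)} vision tokens")
```

The Gundam mode does not fit this formula because it tiles large documents adaptively rather than using a single fixed-size input.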

Dataset Structure

The dataset contains all original columns plus:

  • markdown: The extracted text in markdown format with preserved structure
  • inference_info: JSON list tracking all OCR models applied to this dataset
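An entry in inference_info might look like the sketch below. Only the column_name and model_id fields are confirmed by the usage example on this card; any other fields, and the concrete values shown, are assumptions for illustration.

```python
import json

# Hypothetical inference_info payload; field names beyond column_name
# and model_id are not confirmed by this card.
inference_info = json.dumps([
    {
        "column_name": "markdown",
        "model_id": "deepseek-ai/DeepSeek-OCR",
    }
])

# The column is stored as a JSON string, so it must be parsed before use.
for info in json.loads(inference_info):
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```

Because the field is a JSON-encoded string rather than a nested structure, it survives round-trips through formats that only support flat string columns.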

Usage

from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("{{output_dataset_id}}", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")

Performance

  • Processing Method: vLLM API client

Generated with πŸ€– DeepSeek-OCR vLLM API Client
