# token-classification

Here are 47 public repositories matching this topic...

The MERIT Dataset is a fully synthetic, labeled dataset created for training and benchmarking LLMs on Visually Rich Document Understanding tasks. It is also designed to help detect biases and improve interpretability in LLMs, areas we are actively working on. This repository is actively maintained, and new features are continuously being added.

  • Updated Sep 6, 2024
  • Python
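Token classification, the topic these repositories share, means assigning one label to every token in a sequence (as in named-entity recognition or part-of-speech tagging). As a minimal, library-free sketch of the task, the toy tagger below marks capitalized tokens as entities using the common BIO scheme (`B-` begins an entity span, `I-` continues it, `O` marks everything else); real systems learn these labels from data rather than applying a rule, and the `ENT` label name here is just an illustrative placeholder.

```python
# Toy rule-based token classifier illustrating BIO-style labeling.
# Each token receives exactly one label: B-ENT starts an entity span,
# I-ENT continues the current span, O marks non-entity tokens.

def bio_tag(tokens):
    labels = []
    in_entity = False  # whether the previous token was part of an entity
    for tok in tokens:
        if tok[:1].isupper():
            labels.append("I-ENT" if in_entity else "B-ENT")
            in_entity = True
        else:
            labels.append("O")
            in_entity = False
    return labels

tokens = ["Alice", "Smith", "visited", "Paris", "yesterday"]
print(list(zip(tokens, bio_tag(tokens))))
# → [('Alice', 'B-ENT'), ('Smith', 'I-ENT'), ('visited', 'O'),
#    ('Paris', 'B-ENT'), ('yesterday', 'O')]
```

The one-label-per-token output shape is what all token-classification models share, whatever labeling scheme or learning method they use.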
