---
license: cc-by-4.0
---

# MapPool - Bubbling up an extremely large corpus of maps for AI

This repository contains URLs, textual descriptions, and embeddings of 50 million potential maps. It has been derived from the [CommonPool dataset](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge) from [DataComp](https://www.datacomp.ai/). The MapPool dataset may help train resource-intensive architectures, such as Transformers or Diffusion Models, in order to establish foundation models specialized in maps.
## How is the data structured?
[...]
Merely averaging the embeddings of each class and assigning each image to the class with the nearest mean already matched the accuracy of the two classification networks in [Schnürer et al. 2021](https://doi.org/10.1080/00087041.2020.1738112). Training models from [scikit-learn](https://scikit-learn.org/) to distinguish maps from non-maps increased the validation accuracy even further. The highest accuracy was achieved with a Support Vector Machine (SVM) with a polynomial kernel.
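Both classifiers can be sketched in a few lines of scikit-learn. The randomly generated vectors below are stand-ins for the real image embeddings; the dimensionality and sample counts are illustrative assumptions, not the dataset's actual values:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
map_emb = rng.normal(0.5, 1.0, size=(100, 512))     # stand-in for map embeddings
other_emb = rng.normal(-0.5, 1.0, size=(100, 512))  # stand-in for non-map embeddings

# Baseline: average each class and assign a query to the nearest class mean.
centroids = np.stack([map_emb.mean(axis=0), other_emb.mean(axis=0)])

def nearest_centroid(query: np.ndarray) -> int:
    """Return 0 for 'map', 1 for 'non-map' by Euclidean distance to the class means."""
    return int(np.linalg.norm(centroids - query, axis=1).argmin())

# Stronger model: an SVM with a polynomial kernel, as described above.
X = np.vstack([map_emb, other_emb])
y = np.array([0] * len(map_emb) + [1] * len(other_emb))
svm = SVC(kernel="poly").fit(X, y)
```

The baseline needs no training beyond two mean vectors, which is why it is a useful sanity check before fitting heavier models.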
Overall, downloading the CommonPool dataset, separating out non-maps, and uploading the maps took about 50 hours on a machine with, on average, 10 CPUs and 120 GB RAM, and caused incoming network traffic of 500 MB/s. The SVM is computationally the most demanding of the examined models; luckily, its inference speed could be improved by using the [Intel Extension for Scikit-learn](https://intel.github.io/scikit-learn-intelex). Classifying 500,000 embeddings took about 10 seconds.
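Enabling the extension is a one-line patch that must run before the scikit-learn estimators are imported. The sketch below falls back to stock scikit-learn when the `scikit-learn-intelex` package is not installed, and uses synthetic data purely for illustration:

```python
import numpy as np

# Patch scikit-learn with the Intel Extension if it is available; supported
# estimators (including SVC) are then dispatched to the optimized backend.
# Without the package, stock scikit-learn is used and results are identical,
# just slower.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
except ImportError:
    pass  # scikit-learn-intelex not installed

from sklearn.svm import SVC  # import AFTER patching so the patch takes effect

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 512))   # stand-in for image embeddings
y = (X[:, 0] > 0).astype(int)      # synthetic map / non-map labels

clf = SVC(kernel="poly").fit(X, y)
pred = clf.predict(X)              # the inference step the extension accelerates
```

Because the extension only swaps the backend, the patched and unpatched code paths produce the same predictions, which makes the speedup a drop-in change.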
## What are the limitations?