Assessing the quality of individual data points is critical for improving model performance and mitigating biases. However, there has been no systematic way to benchmark different data valuation algorithms.
OpenDataVal is an open-source initiative that provides a diverse array of datasets and models (image, NLP, and tabular), data valuation algorithms, and evaluation tasks, all usable with just a few lines of code.
OpenDataVal also provides leaderboards for data valuation tasks. We've curated and added artificial noise to some datasets. Create your own DataEvaluator to top the leaderboards. OpenDataVal was accepted to the NeurIPS 2023 Datasets and Benchmarks track.
Feature | Status | Links | Notes |
---|---|---|---|
Datasets | Stable | Docs | Embeddings available for image/NLP datasets |
Models | Stable | Docs | Support available for sk-learn models |
Data Evaluators | Stable | Docs | |
Experiments | Stable | Docs | |
Examples | Stable | | |
CLI | Experimental | opendataval --help | No support for null values |
It is highly recommended to use a virtual environment for opendataval. Check out conda!
- Install with pip
  pip install opendataval
- Clone the repo and install
  git clone https://github.com/opendataval/opendataval.git
  make install
  a. Install optional dependencies if you're contributing:
     make install-dev
  b. If you want to pull in Kaggle datasets, I'd recommend looking into how to add a kaggle folder to the current directory. Tutorial here.
Here is how to set up an experiment on DataEvaluators. Feel free to change the source code as needed for your project.
import opendataval
from opendataval.experiment import ExperimentMediator
from opendataval.dataval import DataOob
from opendataval.experiment import discover_corrupted_sample, noisy_detection
exper_med = ExperimentMediator.model_factory_setup(
dataset_name='iris',
force_download=False,
train_count=50,
valid_count=50,
test_count=50,
model_name='ClassifierMLP',
train_kwargs={'epochs': 5, 'batch_size': 20},
)
list_of_data_evaluators = [DataOob()] # Define evaluators here
eval_med = exper_med.compute_data_values(list_of_data_evaluators)
# Runs the 'discover corrupted sample' experiment for each DataEvaluator and plots the results
data, fig = eval_med.plot(discover_corrupted_sample)
# Runs a non-plottable experiment
data = eval_med.evaluate(noisy_detection)
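Instead of a single evaluator, you can also pass several DataEvaluators at once and benchmark them side by side. Below is a minimal sketch reusing the setup above with the DataOob and AME evaluators mentioned elsewhere in this README; the num_models value is only an illustrative choice.

```python
from opendataval.dataval import DataOob
from opendataval.dataval.ame import AME

# Pass several DataEvaluators at once; each experiment then reports
# one result per evaluator, so they can be compared directly.
list_of_data_evaluators = [DataOob(), AME(num_models=100)]  # num_models=100 is illustrative
eval_med = exper_med.compute_data_values(list_of_data_evaluators)
data = eval_med.evaluate(noisy_detection)
```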
opendataval comes with a quick CLI tool. The tool is under development, and the template for a CSV input is found at cli.csv. Note that for kwarg arguments, the input must be valid JSON.
To use it, run the following command if installed with make install:
opendataval --file cli.csv -n [job_id] -o [path/to/output/]
To run the script without installing:
python opendataval --file cli.csv -n [job_id] -o [path/to/output/]
Here are the 4 interacting parts of opendataval:
- DataFetcher: loads data and holds metadata regarding splits
- Model: trainable prediction model
- DataEvaluator: measures the data values of input data points for a specified model
- ExperimentMediator: facilitates experiments regarding data values across several DataEvaluators
The DataFetcher takes the name of a Register dataset and loads, transforms, splits, and adds noise to the dataset.
from opendataval.dataloader import DataFetcher, mix_labels
DataFetcher.datasets_available() # ['dataset_name1', 'dataset_name2']
fetcher = DataFetcher(dataset_name='dataset_name1')
fetcher = fetcher.split_dataset_by_count(70, 20, 10)
fetcher = fetcher.noisify(mix_labels, noise_rate=.1)
x_train, y_train, x_valid, y_valid, x_test, y_test = fetcher.datapoints
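You can also register your own dataset so the DataFetcher can load it by name. The snippet below is only a rough sketch, assuming Register can be applied as a decorator to a function that returns covariates and labels; check the dataloader documentation for the exact signature and options.

```python
import numpy as np
from opendataval.dataloader import DataFetcher, Register

# Rough sketch: register a synthetic dataset under a name the DataFetcher understands.
# The decorator usage shown here is an assumption; see the dataloader docs for details.
@Register("my_synthetic_dataset")
def my_synthetic_dataset(n: int = 100, input_dim: int = 10):
    covariates = np.random.normal(size=(n, input_dim))
    labels = np.random.choice(2, size=(n,))
    return covariates, labels

fetcher = DataFetcher(dataset_name="my_synthetic_dataset")
```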
Model is the predictive model for Data Evaluators.
from opendataval.model import LogisticRegression
model = LogisticRegression(input_dim, output_dim)
model.fit(x, y)
model.predict(x)
>>> torch.Tensor(...)
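Here is a minimal sketch of how a Model typically plugs into data coming from a DataFetcher; it assumes tabular covariates and one-hot encoded labels so the dimensions can be read off the arrays (adjust for your dataset).

```python
from opendataval.model import LogisticRegression

# Infer dimensions from the fetcher's arrays (assumes 2-D covariates and one-hot labels).
x_train, y_train, x_valid, y_valid, x_test, y_test = fetcher.datapoints
model = LogisticRegression(x_train.shape[1], y_train.shape[1])  # (input_dim, output_dim)

model.fit(x_train, y_train)
predictions = model.predict(x_test)  # torch.Tensor of predictions
```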
We have a catalog of DataEvaluators to run experiments with. To do so, input the Model, DataFetcher, and an evaluation metric (such as accuracy).
from opendataval.dataval.ame import AME
dataval = (
AME(num_models=8000)
.train(fetcher=fetcher, pred_model=model, metric=metric)
)
data_values = dataval.data_values # Cached values
data_values = dataval.evaluate_data_values() # Recomputed values
>>> np.ndarray([.888, .132, ...])
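Once computed, the data values are a plain NumPy array, so downstream analysis is straightforward; for example, flagging the lowest-valued points as candidates for inspection (the cutoff of 10 is arbitrary):

```python
import numpy as np

# Rank data points by estimated value; low values often flag noisy or mislabeled points.
ranking = np.argsort(data_values)    # indices from lowest to highest value
lowest_valued_points = ranking[:10]  # 10 is an arbitrary cutoff for illustration
```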
ExperimentMediator helps make a cohesive and controlled experiment. NOTE: Warnings are raised if errors occur in a specific DataEvaluator.
expermed = ExperimentMediator(fetcher, model, train_kwargs, metric_name).compute_data_values(data_evaluators)
Run experiments by passing in an experiment function: (DataEvaluator, DataFetcher, ...) -> dict[str, Any]. There are 5 found in exper_methods.py, with three being plottable.
df = expermed.evaluate(noisy_detection)
df, figure = expermed.plot(discover_corrupted_sample)
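The returned objects can be persisted with the usual calls; a small sketch, assuming df is a pandas DataFrame and figure is a matplotlib Figure (the file names are just examples):

```python
# Persist experiment outputs (assumes pandas DataFrame / matplotlib Figure return types).
df.to_csv("noisy_detection.csv", index=False)               # example file name
figure.savefig("discover_corrupted_sample.png", dpi=200)    # example file name
```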
For more examples, please refer to the Documentation.
For datasets that start with the prefix challenge, we provide leaderboards. Compute the data values with an ExperimentMediator and use the save_dataval function to save a csv. Upload it here! Uploading will allow us to systematically compare your DataEvaluator against others in the field.
The available challenges are currently:
- challenge-iris
exper_med = ExperimentMediator.model_factory_setup(
dataset_name='challenge-...', model_name=model_name, train_kwargs={...}, metric_name=metric_name
)
exper_med.compute_data_values([custom_data_evaluator]).evaluate(save_dataval, save_output=True)
If you have a quick suggestion, recommendation, or bug fix, please open an issue. If you want to contribute to the project, whether through datasets, experiments, presets, or fixes, please see our Contribution page.
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
- clean, descriptive specification syntax -- based on modern object-oriented design principles for data science
- fair model assessment and benchmarking -- easily build and evaluate your Data Evaluators
- easily extensible -- easily add your own datasets, models, and Data Evaluators
Distributed under the MIT License. See LICENSE.txt for more information.
If you found the library or the paper useful, please cite us!
@inproceedings{
jiang2023opendataval,
title={OpenDataVal: a Unified Benchmark for Data Valuation},
author={Kevin Fu Jiang and Weixin Liang and James Zou and Yongchan Kwon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=eEK99egXeB}
}