Datasets fuel AI models the way gasoline (or electricity, as the case may be) fuels cars. Whether they're tasked with generating text, recognizing objects, or predicting a company's stock price, AI systems "learn" by sifting through countless examples to discern patterns in the data. For example, a computer vision system can be trained to recognize certain types of apparel, like coats and scarves, by looking at many different images of that clothing.
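To make that concrete, here is a minimal, purely illustrative sketch (not drawn from any system discussed in this article) of that kind of supervised training, using PyTorch and the public Fashion-MNIST clothing dataset, which happens to include a "Coat" class:

```python
# Illustrative only: a tiny image classifier learning clothing categories
# from labeled examples, the supervised-learning loop described above.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.FashionMNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
loader = DataLoader(train_data, batch_size=64, shuffle=True)

# A deliberately small model: the point is the workflow, not accuracy.
model = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:          # one pass over the labeled examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                    # adjust weights based on mistakes
    optimizer.step()
```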
Beyond developing models, datasets are used to test trained AI systems to ensure they remain stable, and to measure overall progress in the field. Models that top the leaderboards on certain open-source benchmarks are considered state-of-the-art (SOTA) for that particular task; in fact, benchmark performance is one of the major ways researchers gauge the predictive strength of a model.
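What "testing" means in practice is scoring the model on examples it never saw during training. Continuing the hypothetical sketch above, the held-out accuracy computed below is the kind of number a benchmark leaderboard would rank:

```python
# Illustrative only: evaluate the model from the previous sketch on a
# held-out test split it never trained on.
test_data = datasets.FashionMNIST(
    root="data", train=False, download=True, transform=transforms.ToTensor()
)
test_loader = DataLoader(test_data, batch_size=256)

correct, total = 0, 0
model.eval()
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print(f"Held-out accuracy: {correct / total:.3f}")
```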
But these AI and machine learning datasets, like the humans who designed them, aren't without flaws. Studies show that biases and mistakes color many of the libraries used to train, benchmark, and test models, highlighting the danger of placing too much trust in data that hasn't been thoroughly vetted, even when it comes from vaunted institutions.
1. The training dilemma
2. Issues with labeling
3. A benchmarking problem
Source: VentureBeat