Recent advances in Natural Language Processing (NLP) have largely established deep transformer-based models as the go-to state-of-the-art technique, with little regard for production and utilization cost. Companies planning to adopt these methods face difficulties because they lack the machine, data, and human resources to build them. We compare both the performance and the cost of classical learning algorithms and the latest deep neural models on common sequence and text labeling tasks. On our industrial datasets, we find that classical models often perform on par with deep neural ones despite their lower cost. We quantify the trade-off between performance gain and cost across the models to give businesses pivoting to AI more insight. Further, we call for more research into low-cost models, especially for under-resourced languages.
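The abstract does not specify which classical models are compared; purely as an illustration, a classical text-labeling baseline of the kind such comparisons typically include might look like the following sketch, where the corpus, labels, and hyperparameters are all hypothetical placeholders rather than the paper's actual setup:

```python
# Illustrative sketch of a classical text-classification baseline:
# sparse TF-IDF features with logistic regression, timed as a rough
# proxy for training cost. The toy data below is hypothetical.
import time

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus standing in for an industrial labeling dataset.
train_texts = [
    "refund my order please",
    "the delivery arrived late again",
    "great product, fast shipping",
    "love the quality of this item",
]
train_labels = ["complaint", "complaint", "praise", "praise"]

# Classical pipeline: TF-IDF n-gram features + a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

start = time.perf_counter()
clf.fit(train_texts, train_labels)
train_seconds = time.perf_counter() - start  # crude cost measure

print(f"training took {train_seconds:.4f}s on CPU")
print(clf.predict(["shipping was quick, very happy"]))
```

A pipeline like this trains in seconds on a CPU, which is what makes the performance-per-cost comparison against GPU-trained transformers meaningful.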