Abstract: While batch processing of data with LLMs offers significant cost and throughput advantages, a subset of LLM applications requires synchronous processing. This article presents a solution for synchronously processing data at scale using the OpenAI API.
Introduction The LLM “revolution” continues, and LLMs are increasingly being applied to tasks that were historically performed only by humans. While LLMs can successfully solve a large class of tasks, with more emerging in the near future, it is crucial to do the foundational work first, for example: evaluating the accuracy of model outputs, preparing the data properly, engineering prompts, fine-tuning the model (if needed), and so on.

Previously, we described a solution for processing massive amounts of data with the OpenAI Batch API. Batch processing is cheaper and easier to run when you have huge volumes of data. However, if your application cannot wait until a batch job finishes, the only option is to call the OpenAI inference endpoints directly, as in the sketch below.
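As a minimal sketch of what a direct synchronous call looks like, here is an example using the official openai Python SDK (v1.x). The model name, prompts, and the classification task are illustrative placeholders, not part of the original solution:

```python
# Minimal sketch: a synchronous call to the OpenAI Chat Completions endpoint.
# Assumes the official `openai` Python SDK (v1.x) and OPENAI_API_KEY set in
# the environment. Model and prompts below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a data-labeling assistant."},
        {"role": "user", "content": "Classify the sentiment of: 'Great product!'"},
    ],
)

# Unlike a batch job, the result is available as soon as the call returns.
print(response.choices[0].message.content)
```

Unlike the Batch API, where results arrive only after the whole job completes, each call here returns its result immediately, which is exactly what synchronous applications need.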