All Posts

PERFORM SYNCHRONOUS DATA PROCESSING USING LLM AT SCALE

Abstract: While batch processing data with an LLM has significant advantages, a subset of LLM applications requires synchronous processing. This article provides a solution for synchronously processing data at scale using the OpenAI API. Introduction Previously, we described a solution for processing massive amounts of data using the OpenAI Batch API. Batch processing is cheaper and easier to run when you have huge amounts of data. At the same time, if your application cannot wait until the batch process finishes, the only option is to call the OpenAI inference endpoints directly.
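As a rough illustration, a direct synchronous call with the official openai Python client looks like the sketch below; the model name and prompt are placeholders, not values from the article:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Synchronous call: the request blocks until the model returns a response,
# unlike the Batch API, where results arrive later.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Classify this record: ..."}],
)
print(response.choices[0].message.content)
```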

PERFORM BATCH DATA PROCESSING USING LLM AT SCALE

Abstract: LLM usage is expanding into new applications. This article provides a solution for processing data at scale using the OpenAI Batch API. Introduction The LLM “revolution” continues, and LLMs are starting to be used in applications that were historically performed by humans only. While LLMs can successfully solve a large class of tasks, with more coming in the near future, it is crucial to perform the foundational work first, for example: evaluating the accuracy of model outputs, proper data preparation, prompt engineering, model fine-tuning (if needed), etc.
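For context, submitting work to the OpenAI Batch API follows the pattern sketched below: upload a JSONL file of requests, then create a batch job against it. The file name is a placeholder; the details of preparing requests are covered in the article itself:

```python
from openai import OpenAI

client = OpenAI()

# Each line of the JSONL file is one request to be processed asynchronously.
batch_file = client.files.create(
    file=open("requests.jsonl", "rb"),  # placeholder file name
    purpose="batch",
)

# Submit the batch; results become available once it completes,
# within the stated completion window.
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```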

TEMPLATE FOR APPLICATION GROUP

Abstract: A natural evolution of a growing platform is splitting it into almost isolated pieces of infrastructure deployed from separate Terraform (TF) repositories. These TF repositories contain a lot of similar code, so having a template makes their creation easy and fast, and it provides additional benefits. Introduction An infrastructure monorepo for the whole platform works well in the early stages of its life but usually creates challenges as the platform grows.