Capabilities

WIP

Agents

WIP

Memory

WIP

Corpus “Reading”

WIP

Dynamic Personas

WIP

TextEvolve

TextEvolve is a suite of services that support the following use cases:

  • Supervised Prompt Optimization (SPO): Training data is curated, and an optimization algorithm uses feedback from Evaluate to optimize the input context, producing higher-quality responses [1][2].

  • Unsupervised Upscaling (UP): An input context is given to an LLM, and Evaluate scores the outputs. The input context is then iteratively refined to improve the scores in subsequent evaluations (see the sketch after this list).

  • Guided Reasoning (GR): The debate transcript produced by Evaluate serves as a Chain-of-Thought [3] component in another prompt. The configuration of the debater agents determines how reasoning unfolds, and retrieval-augmented generation (RAG) [4] applied at the agent level adds further control, letting retrievals be executed from each agent's unique perspective.
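The sketch below illustrates the refinement loop shared by SPO and UP, and how a debate transcript can seed a Guided Reasoning prompt. It is a minimal illustration under stated assumptions, not TextEvolve's actual API: the llm, evaluate, and refine callables, the round count, and the prompt wording are all hypothetical placeholders.

    # Minimal sketch of the Evaluate-driven refinement loop (UP), plus a
    # Guided Reasoning prompt built from a debate transcript. All names here
    # (llm, evaluate, refine) are illustrative, not TextEvolve's interfaces.
    from typing import Callable, Tuple

    def upscale(
        context: str,
        llm: Callable[[str], str],                     # context -> response
        evaluate: Callable[[str], Tuple[float, str]],  # response -> (score, debate transcript)
        refine: Callable[[str, str], str],             # (context, feedback) -> revised context
        rounds: int = 5,
    ) -> str:
        """Iteratively refine an input context to raise Evaluate scores (UP).

        SPO follows the same loop, except the feedback signal is grounded in
        curated training data rather than unsupervised scoring.
        """
        best_context, best_score = context, float("-inf")
        current = context
        for _ in range(rounds):
            response = llm(current)
            score, transcript = evaluate(response)
            if score > best_score:
                best_context, best_score = current, score
            # Fold the evaluator's feedback back into the context.
            current = refine(current, transcript)
        return best_context

    def guided_reasoning_prompt(task: str, debate_transcript: str) -> str:
        """Embed an Evaluate debate transcript as a Chain-of-Thought component (GR)."""
        return (
            "Consider the following debate between expert agents:\n"
            f"{debate_transcript}\n\n"
            "Using the reasoning above, answer the task:\n"
            f"{task}"
        )

A production loop would likely track a beam of candidate contexts rather than a single one, as in the beam-search variant of [1], but the overall structure is the same.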

Evaluate

WIP

Calibrate

WIP

Refine

WIP

Create

WIP

APIs

WIP

OpenAI-Compatible Endpoint

WIP

Resource Management

WIP


  1. Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. Automatic prompt optimization with “gradient descent” and beam search. 2023. URL: https://arxiv.org/abs/2305.03495, arXiv:2305.03495

  2. Mert Yuksekgonul, Federico Bianchi, Joseph Boen, Sheng Liu, Zhi Huang, Carlos Guestrin, and James Zou. TextGrad: automatic “differentiation” via text. 2024. URL: https://arxiv.org/abs/2406.07496, arXiv:2406.07496

  3. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. 2023. URL: https://arxiv.org/abs/2201.11903, arXiv:2201.11903

  4. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. 2021. URL: https://arxiv.org/abs/2005.11401, arXiv:2005.11401