xTuring: Homepage, Documentation and Downloads - LLM Personalized Fine-Tuning Tool

xTuring provides fast, efficient, and simple fine-tuning for LLMs such as LLaMA, GPT-J, GPT-2, OPT, Cerebras-GPT, and Galactica. It makes it easy to build and control LLMs through an easy-to-use interface, personalizing them to your own data and applications. The entire process can run on your own computer or in your private cloud, keeping your data private and secure.

With xTuring, you can:

  • Ingest data from different sources and preprocess it into a format the LLM can understand
  • Scale from a single GPU to multiple GPUs for faster fine-tuning
  • Use memory-efficient techniques (e.g., LoRA fine-tuning) to reduce hardware costs and cut fine-tuning time by up to 90%
  • Explore different fine-tuning methods and benchmark them to find the best-performing model
  • Evaluate fine-tuned models on well-defined metrics for in-depth analysis
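To give an intuition for why LoRA cuts memory costs so sharply: instead of updating a full weight matrix, it trains two small low-rank matrices whose product approximates the weight update. The sketch below illustrates the idea in plain NumPy with hypothetical layer shapes; it is not xTuring's internal implementation.

```python
import numpy as np

# Minimal LoRA sketch: the pretrained weight W stays frozen, and the update
# is parameterized as a low-rank product A @ B with rank r << d.
d_out, d_in, r = 4096, 4096, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = np.zeros((d_out, r))                # trainable, zero-initialized
B = rng.standard_normal((r, d_in))      # trainable

def forward(x):
    # Adapted layer: base output plus low-rank correction.
    # With A zero-initialized, the adapter starts as a no-op.
    return W @ x + A @ (B @ x)

full_params = d_out * d_in          # parameters in a full fine-tune of W
lora_params = d_out * r + r * d_in  # trainable parameters with LoRA
print(f"trainable params: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

At rank 8 the adapter trains well under 1% of the layer's parameters, which is where the hardware and time savings come from.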

Installation

pip install xturing

Quick start

from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Load the dataset
instruction_dataset = InstructionDataset("./alpaca_data")

# Initialize the model
model = BaseModel.create("llama_lora")

# Finetune the model
model.finetune(dataset=instruction_dataset)

# Perform inference
output = model.generate(texts=["Why LLM models are becoming so important?"])

print("Generated output by the model: {}".format(output))
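The quick start above assumes an Alpaca-style instruction dataset at ./alpaca_data. A minimal sketch of preparing such records follows; the field names (instruction/input/output) follow the Alpaca convention, but the exact schema and file layout InstructionDataset expects may differ, so check the xTuring documentation.

```python
import json
import os
import tempfile

# Hypothetical Alpaca-style instruction records (field names are an
# assumption based on the Alpaca format, not confirmed xTuring schema).
records = [
    {
        "instruction": "Summarize the sentence.",
        "input": "xTuring fine-tunes LLMs on your own data.",
        "output": "xTuring enables personalized LLM fine-tuning.",
    },
]

# Write the records to a JSON file in a temporary directory.
path = os.path.join(tempfile.mkdtemp(), "alpaca_data.json")
with open(path, "w") as f:
    json.dump(records, f, indent=2)

print(f"wrote {len(records)} instruction record(s) to {path}")
```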


