Learn Exporting and Deploying Models | Evaluation, Optimization, and Deployment
Fine-Tuning Transformers

Exporting and Deploying Models


When you are ready to move your fine-tuned transformer model from experimentation to real-world applications, you need to export the model and integrate it into your inference workflows. Exporting a model means saving its architecture and learned weights to disk so you can reload it later without retraining. Once exported, the model can serve batch inference (processing a large set of data at once) or real-time inference, such as responding instantly to user queries in a web application. The process involves saving the model, loading it in your deployment environment, and ensuring the inference pipeline matches your training setup, including preprocessing steps such as tokenization.
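The export step itself is typically a call to `save_pretrained`. Below is a minimal, self-contained sketch: it uses a tiny, randomly initialized classifier as a stand-in for your actual fine-tuned model (so it runs offline), and the save directory is a temporary placeholder rather than a real deployment path.

```python
import os
import tempfile
from transformers import BertConfig, BertForSequenceClassification

# Stand-in for a fine-tuned model: a tiny randomly initialized classifier.
# In practice, `model` is the model object your fine-tuning run produced.
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=1,
                    num_attention_heads=2, intermediate_size=64, num_labels=2)
model = BertForSequenceClassification(config)

save_dir = tempfile.mkdtemp()    # placeholder for your real save path
model.save_pretrained(save_dir)  # writes config.json plus the weight file
# tokenizer.save_pretrained(save_dir)  # do the same for your tokenizer

print(sorted(os.listdir(save_dir)))
```

Saving the tokenizer alongside the model (last commented line) keeps preprocessing and weights versioned together, which matters because the inference pipeline must tokenize exactly as training did.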

Note

Always test your exported models on a variety of sample inputs before deploying to production. This helps catch any issues with serialization, preprocessing, or environmental differences that might affect predictions.
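One concrete sanity check is to verify that the export round-trips: reload the saved model and confirm it produces the same logits as the in-memory model on identical inputs. A minimal sketch, again using a tiny randomly initialized classifier as a stand-in for a real fine-tuned model:

```python
import tempfile
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny stand-in model (assumption: in practice this is the model you trained)
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=1,
                    num_attention_heads=2, intermediate_size=64, num_labels=2)
model = BertForSequenceClassification(config)

save_dir = tempfile.mkdtemp()
model.save_pretrained(save_dir)
reloaded = BertForSequenceClassification.from_pretrained(save_dir)

# Feed the same token IDs through both models; eval mode disables dropout
batch = {
    "input_ids": torch.tensor([[1, 5, 9, 2]]),
    "attention_mask": torch.ones(1, 4, dtype=torch.long),
}
model.eval()
reloaded.eval()
with torch.no_grad():
    original_logits = model(**batch).logits
    reloaded_logits = reloaded(**batch).logits

assert torch.allclose(original_logits, reloaded_logits, atol=1e-6)
print("Export round-trip OK")
```

If this assertion fails, the problem is in serialization or environment setup, not in your model, which narrows debugging considerably.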

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the exported (saved) model and tokenizer
model_path = "path/to/your/saved-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Example texts for inference
texts = [
    "Transformers are revolutionizing natural language processing.",
    "Fine-tuning allows models to adapt to specific tasks."
]

# Tokenize the texts for the model
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Run inference (no gradients needed)
with torch.no_grad():
    outputs = model(**inputs)

predictions = torch.argmax(outputs.logits, dim=1)
print("Predicted classes:", predictions.tolist())
Note

Inference errors can occur if there are mismatches between the library or model versions used during training, export, and deployment. Always ensure your deployment environment matches your training environment as closely as possible.
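A lightweight guard against such mismatches is to record the library versions at training time and compare them at deployment time. A minimal sketch (you might store this dictionary as JSON next to the saved model):

```python
import sys
import torch
import transformers

# Capture the versions used for training/export; persist this alongside
# the saved model and compare against it when loading for inference.
env = {
    "python": sys.version.split()[0],
    "torch": torch.__version__,
    "transformers": transformers.__version__,
}
print(env)
```

At load time, a simple equality check against the stored versions (or at least matching major/minor versions) catches most environment drift before it silently changes predictions.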


What is the most important thing you should test before deploying a model to production?



Section 4. Chapter 3
