🚀 Deploy an ML Model on Hugging Face Spaces in 10 Minutes!
Hey there, data nerd! 🤗
Ever felt like your machine learning models are hiding away on your local machine, waiting for their big moment? Well, today's the day we give them a stage to shine: Hugging Face Spaces! In just 10 minutes, you'll have your model deployed and ready to impress. Let's dive right in!
💡 Prerequisites
- An ML model (duh!)
- A Hugging Face account (create one if you haven't already)
- Some basic Python knowledge
- A touch of patience (but not too much, we're fast!)
🛠️ The Tutorial
1. Prepare your model
First things first, make sure your model is saved in the right format. For this example, let's use a simple text classification model trained with Hugging Face's Transformers library.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load your tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = AutoModelForSequenceClassification.from_pretrained("path/to/your/model")
# Save the model in the format Hugging Face Spaces expects
torch.save(model.state_dict(), "model.pt")
💡 Tip: If you're using a different model, make sure it's saved as a .pt file containing the state dictionary (the output of model.state_dict()), or use model.save_pretrained() so the config travels with the weights.
2. Create your Space
Log into your Hugging Face account and go to Spaces (https://huggingface.co/spaces). Click "New Space" to create a new Space.
Name it something awesome (like "World-Dominating-Model"), pick the SDK your app will use (Gradio is a popular choice for quick demos), choose whether the Space is public or private, and click "Create Space".
3. Upload your model
In your new Space, open the "Files" tab. Click "Add file", choose "Upload files", and select the model.pt file you saved earlier.
🚨 Warning: Make sure your model's privacy settings match your intentions! You can switch the Space between public and private in the "Settings" tab.
4. Create an inference script
Now, let's create a Python script that Hugging Face Spaces will use to run inferences with our model. Create a new file called app.py and add the following:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = AutoModelForSequenceClassification.from_pretrained("path/to/your/model")
def predict(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():  # no gradients needed at inference time
        outputs = model(**inputs)
    return torch.argmax(outputs.logits).item()

if __name__ == "__main__":
    # Example usage:
    print(predict("I love this tutorial!"))
💡 Tip: You can customize the predict function to fit your specific use case.
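For instance, predict as written returns a bare class index. For a sentiment model like the one in this tutorial, you'd usually map that index to a label and report a confidence; here's a plain-Python sketch (the label names are assumed for illustration, not read from the model's config):

```python
import math

# Hypothetical label map for a binary sentiment model (assumption).
ID2LABEL = {0: "NEGATIVE", 1: "POSITIVE"}

def softmax(logits):
    """Convert raw logits into probabilities (max-subtracted for stability)."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def label_with_confidence(logits):
    """Return the argmax label plus its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[best], probs[best]

# Example: logits strongly favoring class 1.
label, score = label_with_confidence([-2.1, 3.4])
```

In a real app you'd pull the label map from model.config.id2label instead of hard-coding it.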
5. Upload and run your script
Upload app.py to your Space the same way you uploaded the model. Spaces looks for app.py at the root of the repository and rebuilds your app automatically whenever a file changes.
List your dependencies (transformers, torch) in a requirements.txt file alongside app.py; Hugging Face Spaces will install them for you during the build.
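For the dependency list, a minimal requirements.txt for this tutorial could look like the following (unpinned here for brevity; pin versions in production):

```
transformers
torch
```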
🚨 Warning: If your model needs API keys or other environment variables, add them under the Space's "Settings" tab (as variables or secrets) instead of hard-coding them in app.py.
6. Test your deployment
Now, let's test if everything works! Open your Space's page and wait for the build to finish, then feed your model some text. It should return a prediction that matches what it was trained for.
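If you'd rather script the check than click around, Gradio-backed Spaces expose an HTTP endpoint that accepts inputs in a "data" list. The URL below is a hypothetical placeholder and the exact endpoint path depends on your Space name and Gradio version; this sketch only shows how the request body is built:

```python
import json

# Hypothetical Space endpoint -- substitute your own username/space name.
SPACE_URL = "https://your-username-your-space.hf.space/run/predict"

def build_payload(text: str) -> str:
    """Gradio-style request body: inputs travel in a 'data' list."""
    return json.dumps({"data": [text]})

payload = build_payload("I love this tutorial!")
# Send with any HTTP client, e.g.:
# requests.post(SPACE_URL, data=payload,
#               headers={"Content-Type": "application/json"})
```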
🎉 Expected Result
You should now have a live, deployed machine learning model that's ready to accept inputs and make predictions! High-five your monitor (or cat, if you're feeling lonely): you just went from local hero to global superstar in 10 minutes!
📚 Going Further
Now that you've got the hang of it, why not try:
- 🤖 Deploying a model with custom data processing steps.
- 🎨 Creating an awesome UI for your Space using HTML, CSS, and JavaScript.
- ⚙️ Building a complete ML web app using Hugging Face's Inference API.
Happy coding (and showing off)! ✌️🤗