Why My Model Wouldn’t Deploy to Hugging Face Spaces (and What Git LFS Actually Does)

I trained a simple “is cat” image classifier using fastai and wanted to deploy a small demo on Hugging Face Spaces. I already had a working app.py and a trained model.pkl, so my plan felt straightforward: commit everything and push it to the remote Hugging Face repository.

At that point, I thought the hard part was already over. The model was trained, the demo worked locally, and deployment felt like it should be a routine step — commit, push, done.

That assumption didn’t last long.

What I Tried (and Why It Kept Failing)

When I tried to push the repository, Git rejected it with the following error:

remote: -------------------------------------------------------------------------
remote: Your push was rejected because it contains files larger than 10 MiB.
remote: Please use https://git-lfs.github.com/ to store large files.
remote: See also: https://hf.co/docs/hub/repositories-getting-started#terminal
remote:
remote: Offending files:
remote:   - model.pkl (ref: refs/heads/main)
remote: -------------------------------------------------------------------------
To https://huggingface.co/spaces/chloezhoudev/minima
 ! [remote rejected] main -> main (pre-receive hook declined)
error: failed to push some refs to 'https://huggingface.co/spaces/chloezhoudev/minima'

I could see what Git was complaining about — model.pkl was too large — but I didn’t really understand why this was a problem, or what “Git LFS” actually meant in practice. I had never used it before.

So I followed the instructions in the error message and tried to fix it step by step.

First, I installed Git LFS and set it up locally:

brew install git-lfs    # Install Git LFS (macOS via Homebrew)
git lfs install         # Initialize Git LFS and register Git hooks
git lfs version         # Confirm Git LFS installation

Then I told Git LFS to track my model file and committed it:

git lfs track "model.pkl"
git add model.pkl
git commit -m "Track model file with Git LFS"
git push

The push failed again — with the exact same error.

At this point, I was getting annoyed. Instead of really understanding how Git LFS worked, I tried to brute-force my way through the problem and asked ChatGPT for help. It suggested running the following command:

git rm --cached model.pkl

Then tracking the file with Git LFS again and committing:

git lfs track "model.pkl"
git commit -m "Store model file using Git LFS"
git push

Third attempt — same error. 😅

At this point, I seriously considered giving up. But since I knew exactly what I wanted to do, I felt I should at least understand why it wasn't working. So I went back to ChatGPT one more time, and this time it suggested something very different:

git checkout --orphan clean-main
git add .
git commit -m "Initial deployment with Git LFS model"

git branch -M main
git push -f origin main

This time, the push finally worked.

Now that everything was deployed successfully, it was time to stop guessing and actually understand what had happened:

Why did Git keep rejecting my pushes?

What does Git LFS actually do?

Why didn’t git rm --cached help?

And why did creating an orphan branch fix everything?

What I Eventually Learned About Git and Git LFS

The first problem was that I didn’t actually understand why my push was rejected. Reading the error alone wasn’t enough — I just saw a message telling me to use Git LFS. It wasn’t until later that I learned Hugging Face repos enforce a pre-receive hook on their side that scans all commits in your push and rejects the push if any commit contains a file larger than 10 MiB.

Once that clicked, I began to think about how Git actually stores files and what role Git LFS plays. The simplest way to think about it is this:

  • A normal Git repository stores file contents as blob objects in its history. Git compresses blobs, but binary artifacts like a pickled model barely compress, so the blob stays at roughly the file's full size in every commit that contains it.

  • Git LFS replaces large files with pointer files and stores the actual big file contents separately on a dedicated LFS server. When you git add a file while Git LFS is enabled, Git LFS generates a small pointer and hands that pointer file over to Git to store in the repository. The large file itself gets uploaded to the Git LFS store instead.
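To make "pointer file" concrete: what Git actually commits under LFS is a tiny three-line text file, something like the following (the oid hash and byte size here are made-up placeholders, not values from my repo):

```text
version https://git-lfs.github.com/spec/v1
oid sha256:4d7a2146a6...
size 12345678
```

That pointer is a few dozen bytes, which is why an LFS-tracked file sails under the 10 MiB limit no matter how large the real content is.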

💡 Key takeaway

So in my case, model.pkl was a large binary file.

Without Git LFS installed before it was ever added, Git simply stored it as a normal blob at full size.

That’s why my first push was rejected — the blob itself exceeded Hugging Face’s 10 MiB limit.
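This is also something you can check locally before pushing. A minimal sketch (assuming you run it inside the repository) that lists the largest objects anywhere in history:

```shell
# List every object reachable from any ref with its type and size,
# largest first. If model.pkl shows up here as a full-size blob,
# the pre-receive hook will reject the push.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectsize) %(rest)' \
  | sort -k2 -rn \
  | head -10
```

If the same command had been part of my workflow, the multi-megabyte blob would have been visible before any push attempt.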

The next piece of the puzzle was the .gitattributes file that Hugging Face includes automatically when you create a Space repository. By default it contains a line like this:

*.pkl filter=lfs diff=lfs merge=lfs -text

This line tells Git which files should be tracked by Git LFS instead of by normal Git. However, at the time of my first push I didn’t have Git LFS installed yet, so that rule had no effect, and Git still stored model.pkl as a normal blob — exactly what the pre-receive hook rejects.
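You can check whether that rule actually matches a given path with git check-attr; a quick sketch:

```shell
# Ask Git which attributes apply to model.pkl under the current .gitattributes.
# With the *.pkl rule in place, `filter` should come back as `lfs`:
git check-attr filter model.pkl
# model.pkl: filter: lfs
```

Note that the attribute resolving to lfs only means the rule matches; it does nothing by itself unless Git LFS is installed and has registered its filters.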

Which brings us to the next question: what happened on the second push?

Even after I installed Git LFS and tracked the model file, the push was rejected again.

💡 Key takeaway

Installing Git LFS does not fix files that were already committed.

The old commit containing model.pkl as a normal blob was still in the history, and Hugging Face rejected the push because of it.
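You can see this directly by inspecting what an older commit stores. A sketch, where HEAD~1 is a placeholder for whichever commit first added the file:

```shell
# Print the size and first bytes of model.pkl as stored in an older commit.
# A full-size blob means that commit will still be rejected on push.
git cat-file -s HEAD~1:model.pkl        # size in bytes of the stored blob
git cat-file -p HEAD~1:model.pkl | head -c 60
# A file stored via LFS would instead begin with:
#   version https://git-lfs.github.com/spec/v1
```

Tracking the file with LFS only changes what *new* commits store; every old commit keeps whatever blob it was made with.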

What about git rm --cached?

⚠️ Important

git rm --cached only affects future commits.
It does not remove files from your existing Git history.

The command removes the file from the staging area and stops tracking it in upcoming commits, while leaving the file intact in your working directory. However, it does nothing to delete the earlier commit where the large file was first added.

Because that original commit still contained model.pkl as a normal Git blob, the problematic file remained in the repository history — and Hugging Face continued to reject the push.
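A small local experiment (in a throwaway repo, not the real one) makes this concrete:

```shell
# In a scratch repo: commit a file, then remove it with --cached.
echo "big binary" > model.pkl
git add model.pkl && git commit -m "add model"
git rm --cached model.pkl
git commit -m "stop tracking model"

ls model.pkl                          # still on disk: working copy untouched
git cat-file -p HEAD~1:model.pkl      # still in history: the old blob remains
```

So --cached changes what the *next* commit contains, and nothing more; the rejected blob was never in the next commit to begin with.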

Finally, creating a new branch with no history (git checkout --orphan) fixed everything because it started from a clean slate with no commits at all.

Once I added the files with Git LFS already configured, committed them, renamed the branch to main, and force-pushed, the remote accepted it. There were no old blob objects in the history for the pre-receive hook to reject.
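You can verify the clean slate yourself; a sketch run right after the orphan commit:

```shell
# After committing on the orphan branch, history contains exactly one commit,
# so there is no older full-size blob left for the hook to find.
git rev-list --count HEAD     # prints 1
git log --oneline             # a single entry
```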

One more warning: using --orphan and especially git push -f is dangerous if multiple collaborators are using the same branch, because it replaces the branch's history. In my case, the Space repo was just for deployment, so this was fine, but it's something to be careful about in team settings.

💡 Bonus: Git LFS isn’t the only option anymore

Hugging Face now also recommends git-xet, a newer backend designed specifically for large machine learning artifacts.

Conceptually, Git LFS moves large files out of Git, while git-xet rethinks how Git stores large content altogether.

After all that debugging, the model is finally deployed and working as expected.

You can try the live demo here.

Thanks for reading. 😊
