This is a submission for the GitHub Copilot CLI Challenge
## What I Built
I built BoxFinder, an iOS app that helps track physical storage containers—those mystery boxes in closets, garages, or storage units that everyone forgets about.
The MVP focuses on three core questions:
- What does this box look like? → box photo
- Where is it stored? → location photo
- What’s inside? → item photos + tags
Users can:
- Create boxes with photos
- Add photos of the contents
- Manually tag items (auto-tagging later 😉)
- Search by keywords to quickly find which box contains something—and where that box lives
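Conceptually, that search is nothing fancy: a keyword either appears in a box's name, its location, or one of its item tags. A hypothetical sketch of the idea (not the app's actual code):

```swift
// Hypothetical model: a box matches when the query shows up in its
// name, its location, or any of its item tags.
struct BoxRecord {
    var name: String
    var location: String
    var tags: [String]
}

func matches(_ box: BoxRecord, query: String) -> Bool {
    let q = query.lowercased()
    return ([box.name, box.location] + box.tags)
        .contains { $0.lowercased().contains(q) }
}
```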
## Demo
👉 GitHub repo:
https://github.com/michellemashutian/BoxFinder-iOS
📸 Screenshots:
*Main tabs: Boxes, Search, and Settings. The Boxes screen lists stored containers with photos and locations, the Search screen lets users browse by box name or location, and the Settings screen shows app info and support options.*

*Screenshots of creating and managing a box in the BoxFinder app, including adding photos of the box and its location, viewing items inside, and deleting a box with a confirmation alert.*
🎥 Video walkthrough:
https://drive.google.com/file/d/1UAAATtcxgWCFupV-kKk1dx0wwrbgwEgX/view?usp=drive_link
## My Experience with GitHub Copilot CLI
### 🧭 Starting from a Spec
Before writing code, I created a simple product spec to guide development:
```
# BoxFinder MVP Spec (iOS 17+)

Goal:
- Track storage containers ("boxes") with:
  1) box photo (how the box looks)
  2) location photo (where the box is stored)
- Add item photos to a box (photo of contents).
- Each item photo has tags (auto later; manual for MVP).
- Search by keyword -> show matching boxes + where they are.

Tech:
- SwiftUI
- SwiftData
- Photos stored in app Documents directory; DB stores relative file paths.

Core screens:
- TabView:
  - Boxes: list, create box, box detail
  - Search: search by tags/name, show results
```
This spec became the backbone for how I collaborated with GitHub Copilot CLI.
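For reference, here is roughly what those two SwiftData models translate to. This is a minimal sketch based only on the spec; the property names beyond `Container` and `ItemPhoto` are my own placeholders, not necessarily what ended up in the repo:

```swift
import SwiftData

// A physical box: its own photo, a photo of where it lives, and its contents.
@Model
final class Container {
    var name: String
    var boxPhotoPath: String?      // relative path under Documents
    var locationPhotoPath: String? // relative path under Documents
    @Relationship(deleteRule: .cascade)
    var items: [ItemPhoto] = []

    init(name: String) {
        self.name = name
    }
}

// One photo of a box's contents, tagged manually for MVP search.
@Model
final class ItemPhoto {
    var photoPath: String          // relative path under Documents
    var tags: [String] = []
    var container: Container?

    init(photoPath: String, tags: [String] = []) {
        self.photoPath = photoPath
        self.tags = tags
    }
}
```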
I then began working entirely inside Copilot CLI, starting with this command:

```bash
copilot -i "Read SPEC.md. Propose the minimal set of Swift files to implement: SwiftData models (Container, ItemPhoto), a PhotoStore to save/load images to Documents and return relative paths, and a basic TabView with a Boxes list + Search screen. Output a step-by-step plan with file names and code blocks per file."
```
That worked surprisingly well: Copilot generated a reasonable file layout, SwiftData models, and a first pass at the UI structure.
From there, I loaded everything into Xcode and started iterating by pasting compiler errors and simulator issues back into Copilot.
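To show what that looked like in practice, here is the pattern behind the `PhotoStore` piece of the prompt: save images into Documents, hand back a relative path, and let the database store only that path. This is my reconstruction of the pattern, not the exact generated code:

```swift
import UIKit

// Stores photos in Documents and returns relative paths.
// The app's sandbox URL can change across updates and reinstalls,
// so persisting absolute paths would eventually break.
struct PhotoStore {
    private static var documentsURL: URL {
        FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    }

    static func save(_ image: UIImage) throws -> String {
        let relativePath = "photos/\(UUID().uuidString).jpg"
        let fileURL = documentsURL.appendingPathComponent(relativePath)
        try FileManager.default.createDirectory(
            at: fileURL.deletingLastPathComponent(),
            withIntermediateDirectories: true
        )
        guard let data = image.jpegData(compressionQuality: 0.8) else {
            throw CocoaError(.fileWriteUnknown)
        }
        try data.write(to: fileURL, options: .atomic)
        return relativePath
    }

    static func load(_ relativePath: String) -> UIImage? {
        UIImage(contentsOfFile: documentsURL.appendingPathComponent(relativePath).path)
    }
}
```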
### 🛠 Debugging & Platform Friction
A big real-world challenge: I wanted to test on my own phone, which only supports iOS 16. My original spec said iOS 17+, so a lot of generated code used newer APIs. I ran into errors like:
- Views only available in iOS 17: `AddItemView`, `BoxesListView`, `ItemPhotosGridView`, `SearchView`
I had to:
- Convert APIs back to iOS 16-compatible patterns
- Ask Copilot to downgrade features
- Eventually update the spec itself to say iOS 16+
That back-and-forth took more time than expected, but it was interesting to see how strongly the model followed the original spec—sometimes too strongly.
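One concrete example of the kind of change involved: the two-parameter `onChange(of:)` closure is iOS 17-only, so anything using it had to fall back to the older single-value form. Illustrative code, not lifted from the repo:

```swift
import SwiftUI

struct SearchBar: View {
    @State private var query = ""

    var body: some View {
        TextField("Search boxes…", text: $query)
            // iOS 17+ form:
            // .onChange(of: query) { oldValue, newValue in runSearch(newValue) }
            //
            // iOS 16-compatible fallback:
            .onChange(of: query) { newValue in
                runSearch(newValue)
            }
    }

    private func runSearch(_ text: String) {
        // filter boxes here
    }
}
```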
### 📷 Camera vs Photo Picker
Another round of iteration went into taking photos directly with the camera, not just selecting them from the photo library.
I had to explicitly ask:
“Why can I only select photos? Add the function for taking photos.”
This required several rounds of fixes across the image picker and permissions logic.
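For context, SwiftUI has no built-in camera view; taking a photo on iOS 16 means bridging UIKit's `UIImagePickerController` with `sourceType = .camera`. A sketch of that standard pattern (names are mine; it also needs an `NSCameraUsageDescription` entry in Info.plist):

```swift
import SwiftUI
import UIKit

// Wraps UIKit's camera UI for use in a SwiftUI sheet.
struct CameraPicker: UIViewControllerRepresentable {
    var onCapture: (UIImage) -> Void
    @Environment(\.dismiss) private var dismiss

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ picker: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        let parent: CameraPicker
        init(_ parent: CameraPicker) { self.parent = parent }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let image = info[.originalImage] as? UIImage {
                parent.onCapture(image)
            }
            parent.dismiss()
        }
    }
}
```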
### 🎨 UI Feedback (Where It Struggled)
I’ll be honest: UI polish was the weakest part of the experience.
Even after asking Copilot CLI (using the claude-haiku-4.5 model) to optimize the UI, the app still felt:
- visually dated
- very default-SwiftUI
- lacking modern spacing, hierarchy, and personality
Also, describing UI problems purely in text was tough. I often wanted to attach screenshots and say:
“This part looks weird—how do I fix it?”

That’s something ChatGPT does better today, since I can show visuals directly.
### 🌍 Multilingual Surprises
I’m a native Chinese speaker, so I tested asking some questions in Chinese mid-development.
It mostly worked… but at one point Copilot changed UI strings into Chinese inside the app 😅.
### 🤔 Overall Takeaways
Where GitHub Copilot CLI struggled:
- UI refinement
- Long debugging loops for certain functions
- No way to reason over screenshots
Compared to my recent experiments building three other iOS tools with ChatGPT, GitHub Copilot CLI required more manual fixes and cleanup. Of course, it's also possible that some of these problems happened because I don't know how to use the tool well yet 🥲. Still, it was impressive to drive an entire MVP mostly from the terminal 😘.
## Final Thoughts
BoxFinder is still rough, but it already answers a problem I actually have:
“Which box did I put that cable in… and where is it stored?”
This challenge pushed me to:
- write clearer specs
- rely on CLI-based AI workflows
- treat Copilot more like a junior engineer that needs careful direction
Thanks for the challenge! This was a fun way to stress-test an AI-first iOS workflow.