Local AI is having a moment, and we want you to be part of it!
Running through May 24, the Gemma 4 Challenge invites you to explore open models. W...
This challenge is make or break for me! I'll probably do both tracks, but we'll see.
@jess If we're doing both prompts, would it be appropriate to put them in one post, or would you prefer separate posts? I ask because I was thinking about writing about Gemma 4, but also showcasing what I built (not going to spoil it yet).
Regardless, hope to see what the community builds/writes :D
Hey @francistrdev! I would keep those as two separate submissions. The content should be different, though; please don't just duplicate one post into two.
Sounds good! Thanks! I also sent you an email on other stuff! Noticed you might be busy lol. lmk!
Interesting perspective. Curious how others are handling this.
This one is going to be great. I'm most excited about the IoT-related use cases, but it all seems cool.
I explored Gemma 4 through the lens of local AI ownership: how moving from cloud-only APIs to capable local models changes the experience for users. One of the most interesting parts for me was intentionally choosing the E4B model instead of defaulting to the largest variant, because accessibility matters just as much as raw capability. Feel free to check out my submission 😇
Excited to read what everyone else is building and writing!
love this, definitely joining in 😄
This is a yummy challenge!
Hope to see a lot of really useful projects, and at the same time some useless but still cool projects!
Love it 😍
Thanks to Google and the Dev team!
Rock & roll
This is a cool challenge. Gemma 4 should lead to some really interesting projects.
Nice! I've already done some good testing with it in my agent's memory.
This is exciting. Definitely joining this challenge. 🤙
I want to... I need to find something I can do with my potato laptop. It runs hot even compiling 20 lines of C++, and I'm still waiting 5+ minutes.
Ahh, there's a solution <3 Thank you Google & Ollama for the cloud <3 Amazing!
Nice challenge!!
Thank you for this amazing opportunity!❤️👊🏻
Right on time! I am building with it now, super excited for this opportunity. 🥳
can i download it from ollama?
Yeah, apparently you can; it's quite big though.
ollama.com/library/gemma4/tags
Ollama has it available on their cloud platform. I believe you get a pretty decent amount on the free plan for Gemma. If you run Open WebUI or equivalent, Google AI Studio has a pretty generous free tier for Gemma 4 models.
The model selection framing is what I find most interesting about this challenge. Most AI challenges just say "build something with X" and leave it there, but the emphasis on why you chose the model you did as a judging criterion is a different ask entirely. It pushes you to actually think about the tradeoffs between the E2B, E4B, and the 31B dense, not just grab whichever one produces the best demo output.
@ben's point about IoT use cases caught my attention too. The fact that the E2B model can run on a Raspberry Pi 5 opens up a genuinely interesting design space: local inference at the edge, no cloud dependency, no latency from a round trip. That's a different category of application than most AI challenges enable, and I'm curious to see what people build in that direction.
I'm currently in the middle of GSoC so bandwidth is tight, but the write track feels accessible even with a constrained schedule: a comparison piece breaking down when you'd actually reach for each of the three model variants would be genuinely useful to the community and doesn't require spinning up a full project. The OpenRouter free tier being available for the 31B is also a nice touch for people who want to experiment without setting up local hardware first.
One quick clarification question for the team: for the write track, is a post that includes a small working code example and a walkthrough treated as a write submission or does it cross into build territory? Trying to understand where that line sits before I decide on an angle.
Just submitted my entry. I built a local computer vision pipeline on a Raspberry Pi 5 using Gemma 4's native bounding box output. Replaced my entire YOLO + OpenCV setup with 50 lines of code. The zero-shot detection capability is honestly what sold me on this model, no retraining needed for new object categories. Excited to see what others are building.
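For anyone curious about the parsing side of a setup like this, here's a minimal sketch. It assumes the model returns JSON detections with box coordinates normalized to a 0-1000 grid in [ymin, xmin, ymax, xmax] order (the convention earlier Gemma-family vision models used); the `label`/`box_2d` field names and the response shape are assumptions, so adapt to whatever your model actually emits:

```python
import json

def scale_boxes(response_text, img_w, img_h):
    """Convert JSON detections with 0-1000 normalized [ymin, xmin, ymax, xmax]
    boxes into pixel-space (label, x1, y1, x2, y2) tuples."""
    detections = json.loads(response_text)
    boxes = []
    for det in detections:
        ymin, xmin, ymax, xmax = det["box_2d"]
        boxes.append((
            det["label"],
            int(xmin / 1000 * img_w),
            int(ymin / 1000 * img_h),
            int(xmax / 1000 * img_w),
            int(ymax / 1000 * img_h),
        ))
    return boxes

# Hypothetical model output for a 640x480 frame:
sample = '[{"label": "mug", "box_2d": [100, 200, 500, 600]}]'
print(scale_boxes(sample, 640, 480))  # → [('mug', 128, 48, 384, 240)]
```

From there the pixel boxes can go straight into an OpenCV `rectangle` call for overlay drawing.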
Thanks for sharing this challenge! I'm interested in participating. Could you clarify:
· What are the judging criteria (e.g., creativity, technical complexity, real-world use)?
· Are there any restrictions on which Gemma 4 model size we can use?
· Is fine-tuning allowed, or only prompt engineering?
· Where should we submit the final project (GitHub + Dev.to post)?
Appreciate the $3K prize pool — excited to build something useful with Gemma 4!
I would like to be part of the project.
Can I use Gemma 4 with React Native? I'm thinking of building an app with rn-executorch, but I'm not sure if Gemma is supported.
Interesting challenge.
What I’m curious about is how people evaluate these models beyond demos now.
A lot of projects look impressive in short workflows, but the real separation starts showing up with:
That’s where things usually get much harder.
Just finished posting about the Gemma 4 family; feel free to go through it.
Now I'm on to the next venture, the "Build with Gemma 4" project mentioned here, and I hope it stands out as well. I also can't wait to go through the projects created by the other builders. The challenge is great. Best of luck, everyone.
Excited for this one. I've been building an AI sales chatbot for Arabic-speaking merchants (Provia), and "intentional model selection" is a criterion I take personally — most multilingual benchmarks barely scratch Arabic generation quality in production contexts.
Planning to submit to the Write track: a head-to-head on Arabic e-commerce conversations, same prompts, same product catalog, controlling for everything except the model. Curious whether 140 languages means fluent or just supported.
Hoping to bring something useful to the community. Good luck everyone 🚀
Just published my submission for the Write About Gemma 4 track.
I built an ICS Tabletop Exercise Simulator -- a single Gemma 4 26B MoE model simultaneously simulating six Incident Command System positions so Emergency Operations Managers can run realistic training exercises alone, without coordinating a room full of people.
The write-up covers the architecture, a real token loop issue I hit with extended reasoning on complex multi-constraint prompts (and the fix), an honest look at what happened with RAG retrieval quality, and why the 26B MoE specifically was the right model for this workload.
[dev.to/kkierii/i-used-gemma-4-to-s...]
Thanks for the invitation, perfect timing...
Regarding the challenge: can we submit the GitHub repo for evaluation, or is it just writing the article and nothing more?
You can't trust AI too much.
I started a business in Pakistan; I formerly worked for a rice wholesale dealer.
Looks like a fun challenge 👀
Really curious to see what people build with local multimodal models.
This is great....
I will definitely be part of this 👌
Hi, I am Deepak!
I am new to this community, so can anyone tell me how to register for the Gemma 4 Challenge?
@jess Hi, I am Deepak, 20 years old. I completed and submitted my project... Can you check that it was submitted correctly?
Cool 😋
This is so great
I love this!!
I will say the Gemma API has been returning a lot of errors recently, so be careful and don't forget your rate limits.
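If you're running into those errors, a retry wrapper with exponential backoff goes a long way. This is a generic sketch, not tied to any particular SDK; the `client.generate` call in the commented usage line is a placeholder for whatever client you actually use:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a flaky API call with exponential backoff plus jitter.
    Re-raises the last exception once retries are exhausted."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Wait base_delay * (1, 2, 4, ...) seconds plus jitter,
            # so concurrent clients don't all retry in lockstep.
            time.sleep(base_delay * (2 ** attempt + random.random()))

# Usage with a hypothetical client:
# reply = with_backoff(lambda: client.generate(model="gemma", prompt=prompt))
```

The jitter matters more than it looks: without it, every client that hit a rate limit at the same moment retries at the same moment, too.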
Gemma 4's "128K context window" seems interesting, but I'm curious about its real-world scalability challenges. Has anyone tried deploying their 31B parameter dense model on AWS? For those working on the building side, I've been using prachub.com for system design mocks. Their follow-up questions on latency are really similar to what my interviewers have asked. What are you guys planning to build with Gemma 4?
On my way ❤️🔥
In this challenge, what are the criteria for selecting the 10 winners?
For example, should the project focus more on being useful for users or on having a beautiful design?
Wow, excellent opportunity
Just joined this platform today and landed here right after logging in.
I hadn't seen this challenge before, so I'll just participate. This will be fun. I'll definitely join!