Umitomo

Weekly Dev Log 2026-W03

AI security paths and SwiftUI unit tests

🗓️ This Week

  • Finally finished the Cyber Security 101 learning path and discovered the AI Security Learning Path on TryHackMe
  • Completed 2 rooms from the AI Security Learning Path this week
  • Decided to continue working on the SwiftUI tutorial (also explored React Native with Expo out of curiosity)

📱 iOS (SwiftUI)

  • Ran unit tests for badge unlocking logic and stepped through them using breakpoints
  • Researched the differences between SwiftUI and React Native (with Expo) to determine the best platform for my learning

🌐 Web Development

  • Posted my weekly learning and development log on Dev.to 📝

🔐 Security (TryHackMe)

  • Completed 2 rooms from the AI Security Learning Path on TryHackMe (AI Models & Data, Prompt Engineering)

💡 Key Takeaways

  • Learned how to use LLDB's po command together with Swift's map function in the debug console
  • Chose SwiftUI to focus on native iOS development (compared to React Native with Expo)
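As a concrete illustration of that first takeaway, here is roughly what inspecting a value with po and map can look like while paused at a breakpoint in Xcode's LLDB console (the badges array and its name property are hypothetical names for this sketch, not taken from the tutorial):

```
(lldb) po badges.count
3
(lldb) po badges.map { $0.name }
▿ 3 elements
  - 0 : "First Launch"
  - 1 : "Seven-Day Streak"
  - 2 : "Night Owl"
```

Mapping over a collection inside po is handy because it prints just the fields you care about instead of each element's full object description.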

TryHackMe Learning

AI Models & Data

  • Learned that most AI models rely heavily on Common Crawl, a large public dataset collected from the internet
  • Realized that unclear data provenance and hidden sensitive data can lead to security risks
  • Learned that training decisions can impact security, including potential data leakage
  • Understood that optimization techniques introduce trade-offs between efficiency and security
  • Learned that fine-tuning inherits risks from base models such as bias and unsafe behavior
  • Realized that models are black boxes and difficult to fully audit
  • Learned that model cards are important but often incomplete
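Since model cards come up in that room, a minimal sketch of one may help. The fields below are illustrative, loosely modeled on common model-card templates rather than any specific standard, and all values are made up:

```yaml
# Hypothetical minimal model card -- all values illustrative
model_name: example-review-classifier
base_model: some-open-llm-7b          # fine-tuned models inherit the base model's risks
training_data:
  - source: Common Crawl (subset)
    provenance: public web crawl       # unclear provenance is itself a risk signal
    known_issues: [possible PII, bias]
intended_use: sentiment analysis of product reviews
out_of_scope: medical, legal, or safety-critical decisions
limitations: not audited for memorized sensitive data
```

Even a stub like this makes gaps visible: if the training-data section cannot be filled in honestly, that is exactly the incomplete-model-card problem the room describes.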

Prompt Engineering

  • Learned that LLMs process text as tokens and generate probabilistic outputs
  • Learned how parameters like temperature and top-p affect responses
  • Learned that effective prompts require clear instructions, context, format, and constraints
  • Understood the difference between system prompts and user prompts
  • Practiced prompt techniques such as zero-shot, few-shot, and Chain-of-Thought
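To make the temperature and top-p bullets concrete, here is a small self-contained sketch of how those two knobs reshape a toy next-token distribution (the four-token logits are made up for illustration; real LLMs sample over vocabularies of tens of thousands of tokens):

```python
import math

def apply_temperature(logits, temperature):
    """Rescale logits by temperature, then softmax.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches p, then renormalize over that set."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# Toy logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, -1.0]

cold = apply_temperature(logits, 0.5)  # sharper: the top token dominates
hot = apply_temperature(logits, 2.0)   # flatter: output looks more random
print(max(cold) > max(hot))            # → True

nucleus = top_p_filter(apply_temperature(logits, 1.0), 0.9)
print(sorted(nucleus))                 # → [0, 1, 2]  (the weakest token is cut)
```

The two knobs compose: temperature reshapes the whole distribution first, then top-p trims the unlikely tail before the final random draw.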

🚀 Next Week

  • Continue working on the badge algorithm (Section 5) in the SwiftUI tutorial
  • Continue posting small articles on Dev.to
  • Continue working on the AI Security Learning Path

Top comments (12)

FrancisTRᴅᴇᴠ (っ◔◡◔)っ

Great work so far! I was wondering about your goals for this year, since it looks like you are doing a bit of Web Dev, Security, and iOS development?

Continue posting and good work Umitomo :D

Umitomo

Thank you so much! I really appreciate you following my posts 😊

That's a great point — I’ll start including my yearly goals from my next post.

For this year, my goals are:

  • iOS: Build a solid foundation in SwiftUI and create one iOS app
  • Web: Continue posting learning logs on Dev.to and eventually turn it into a portfolio site using React Router v7
  • Security: Keep learning on TryHackMe

Thanks for the suggestion! I’ll keep improving step by step.

Mykola Kondratiuk

TryHackMe's AI Security path is underrated. working through prompt injection stuff changed how I look at every LLM integration - you start noticing the attack surface everywhere.

Umitomo

Thanks for your comment — I really appreciate it!
Totally agree. Working through the prompt injection modules really changed how I think about LLM integrations too. I’ve started noticing potential attack surfaces everywhere.

Mykola Kondratiuk

once that lens clicks it's hard to look at any LLM call the same way. treating every external data source the model reads as a potential injection vector - slows you down at first but you ship more defensively

Umitomo

Thanks for your comment!

I completely agree. Since I started working through the AI Security path on TryHackMe, I’ve really felt my perspective on AI change compared to before. I’m starting to see things from multiple angles now.

I’ll definitely keep going little by little.

Mykola Kondratiuk

TryHackMe's AI Security path is solid for this - the injection labs build a mental model you can't unlearn. you'll start seeing it in places that aren't obvious: tool outputs, retrieval results, even 'safe' structured data your agent pulls in. the multi-angle thing compounds faster than you'd expect.

Umitomo

Thanks for your comment! It’s really interesting to hear real-world feedback about the AI Security path from someone working in the field.

I’ve been interested in building applications with AI integrations for a while, so I’m hoping to keep learning in a way that I can apply to real development as well.

Mykola Kondratiuk

honestly the transfer is pretty direct. once the injection patterns click in a lab you start noticing them in real app code without looking. that mental model builds fast.

PEACEBINFLOW

The point about model cards being important but often incomplete stuck with me. It's one of those things that sounds like a documentation problem on the surface, but I think it points to something deeper about how we're building the AI supply chain.

When you learned that most models rely on Common Crawl and that training decisions can introduce security risks, it connects back to the same issue—there's this long chain of dependencies where each link assumes the previous one did its due diligence. The base model inherits risks from the training data, the fine-tuned model inherits risks from the base model, and the application inherits risks from all of it. Model cards were supposed to make that chain traceable, but they're only as good as the weakest audit in the stack.

That parallel between your debugging practice (using po and stepping through breakpoints) and what you're learning about AI security is interesting, even if unintentional. You're learning to trace execution state in one context while discovering how hard it is to trace provenance in another. One has mature tooling, the other barely has conventions.

Are you finding that the AI security material is changing how you think about the apps you're building in SwiftUI, or do those still feel like separate learning tracks for now?

Umitomo

Thank you so much for your thoughtful comment — I really appreciate it!

That point about tracing execution state vs. tracing data provenance really stood out to me as well. I hadn’t thought about it that way at all, but it makes a lot of sense.

I actually started learning AI security because I’m exploring how to use AI in my work, and TryHackMe released content on it at the perfect time. As I’ve been learning, I feel like my understanding of AI — especially its risks — has become much clearer.

For SwiftUI, I’m trying not to overreach and instead focus on building things step by step within my current understanding. Because of what I’ve learned about AI security, I think I’ve become a bit more cautious about integrating AI features into applications.

For now, they still feel like separate learning tracks, but I feel like they might connect more over time.

Thanks again for sharing your perspective — it gave me a lot to think about!
