Perspectives ✨

Welcome to my first-ever weekly blog update. 📝 Feels a bit unusual to be writing this, but I’ve realized that documenting my journey—both in research 🔬 and entrepreneurship 🚀—is something I’d love to do. It’s not just about tracking my progress but also about sharing experiences that might help others along the way. So, here we go.
This week had a mix of things: some interesting discoveries 🧐, some research deep dives 📚, and a few moments that made me step back and rethink how I approach things. It's funny how some topics keep coming up even when you're not actively looking for them.
One thing that's been on my mind is decision fatigue 🤯: the idea that the more decisions we make in a day, the worse those decisions get. It's why some CEOs wear the same outfit every day 👔, cutting out unnecessary choices to save mental energy for decisions that actually matter.
For me, the challenge isn’t just about making too many decisions but figuring out which ones actually move the needle. 🎯 It’s easy to get caught up in what feels urgent rather than what’s actually important. I came across an article that explains this concept brilliantly:
🔗 Life is a Picture, But You Live in a Pixel – Wait But Why 🖼️
The article talks about how we often focus too much on the tiny details of our daily lives, forgetting to zoom out and see the bigger picture. That really hit home this week. Sometimes, the things that feel overwhelming in the moment don’t even matter in the long run. ⏳
Apart from that, I spent some time exploring DeepSeek R1 🤖. If you're on macOS, you can download it through LM Studio 💻 and run it locally, which makes experimenting with it really convenient (there's a small sketch of querying a locally served model below). What stood out to me was the way DeepSeek R1 uses reinforcement learning. I went through the research paper 📄, and it was easily one of the best-written papers I've read in a while. The way they've managed to cut compute requirements while getting nearly the same, if not better, results makes a strong case for more efficient AI models. With how computationally heavy most models are becoming, this kind of approach feels like the way forward. 🚀
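If you want to poke at it the same way, here's a minimal sketch of talking to a locally loaded model through LM Studio's OpenAI-compatible local server. The endpoint and port are LM Studio's defaults, and the model name is just a placeholder; use whatever identifier LM Studio shows for the R1 build you downloaded.

```python
# Minimal sketch: querying a DeepSeek R1 model served locally by LM Studio.
# Assumes LM Studio's local server is running on its default port (1234)
# and that an R1 model is loaded in the app.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
    api_key="lm-studio",                  # the local server ignores this, but the client needs a value
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # placeholder; match the name shown in LM Studio
    messages=[
        {"role": "user", "content": "Explain decision fatigue in two sentences."}
    ],
    temperature=0.6,
)

print(response.choices[0].message.content)
```

Nothing leaves your machine, which is half the appeal of running it locally in the first place.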
Outside of research, I caught up with a friend this week after almost a month. We ended up at a skywalk bridge 🌉, and it was one of those unplanned but great moments. Just walking, talking about a bunch of things, and realizing how much time has passed since we last met. It reminded me how easy it is to get caught up in work and schedules and forget to take a step back. Some of the best conversations happen when you’re not trying to force them. 🤝
I also noticed how often we try to measure progress 📊, whether in work, research, or even personal projects. And while tracking progress matters, not everything has to be structured or lead to a specific outcome. Some experiences are just meant to be lived as they come. 🌿 Maybe that's something I need to remind myself of more often.
That’s it for this week. If you’ve explored DeepSeek R1 🧠 or have thoughts on decision fatigue 🔄, I’d love to hear your perspective.
Until next time 👋
Om