Weekly ML for SWEs #56: Avoiding brain rot is the key to success
An AI reading list curated to make you a better engineer: 6-24-25
Hello all and welcome to Machine Learning for Software Engineers! Each week I share a thought and dump some resources and interesting articles/videos to help you become a better engineer. If you want this in your inbox each week, make sure to subscribe. You'll also get more detailed articles including case studies, explanations, build tutorials, job market analyses, and more!
We're very close to bestseller status! ML for SWEs will be $3/mo until we hit it, and then it goes up to $5/mo. If you want ALL the content each week, become a paid subscriber now to save.
Usually I use this article to share something valuable we learned about AI and/or software engineering from the past week, but this week I want to talk about something even more important and impactful. I want to talk to you about your time.
This is something I've been thinking about a lot recently. I have 5 young kids, a full-time job, and I write these articles. That's what I find takes up most of my time.
On top of that I'd like to spend more time cooking dinner for my family (my wife pulls the weight there), spend more time with my kids, spend more intentional time with my wife (i.e. plan things to do), get back into swimming (preferably join a team), and clear my backlog of Steam games and books I want to read.
I want to do a lot. Most of you reading this probably feel the same way; otherwise you wouldn't be spending your free time learning machine learning on top of the software engineering you already know.
I've been thinking a lot about how much time I can actually spend on each if I want to do them all, and whether or not it's even possible. Because I'm a nerd, I brought it back to numbers.
We've each got 168 hours in a week. Work takes up 40 of those. Sleep should take up 56 (at least). Let's say exercise is 10 hours. Let's devote about 25 hours to taking care of my house and/or focused time with my kids. Then let's take the weekend "work time" and spend it the same way: 18 more hours. I'd also set aside about 14 hours each week to "do nothing". These are hours to relax, to ensure the rest of your hours are productive.
Where does that leave me? With 5 hours left for all the items on my list if I don't sacrifice anywhere else. So the answer? No, doing everything isn't possible. But this exercise put into perspective how valuable each hour I have is and made me think deeper about how I'm spending them.
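If you want to run the numbers yourself, here's the same budget as a quick Python sketch. The category names and hours are just my estimates from above, not hard rules:

```python
# A back-of-the-envelope version of the weekly time budget above.
WEEK_HOURS = 168

budget = {
    "work": 40,
    "sleep": 56,                 # 8 hours a night, minimum
    "exercise": 10,
    "house and kids": 25,
    "weekend family time": 18,   # the reallocated weekend "work time"
    "doing nothing": 14,         # deliberate rest so the other hours count
}

remaining = WEEK_HOURS - sum(budget.values())
print(f"Hours left for everything else: {remaining}")  # -> 5
```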
This tweet hit home yesterday and made me come back to this exercise. 👇
Naturally, after examining how I spend my time, I turned to Screen Time on my phone and realized just how much time I had been spending on YouTube Shorts. It wasn't ridiculous by any means, but I've never finished scrolling YouTube Shorts and thought, "That was a good use of my time."
So I resolved to fix this and spend more time on things I actually want to do. Things that I can look back on and think, "That was a good use of my time."
My challenge for you is to track how you spend your time and do a retrospective. Think about what you're doing with your time and then think about what you could be doing with that time. Don't beat yourself up over not spending time wisely; just use it to gain some perspective.
The biggest thing: avoid brain rot. I can't think of a single social media platform without it. It's okay to spend time on social media and it's okay to spend time not doing things, but ensure the time you spend isn't working against you. There's a resource in the "Other interesting things" section below to help avoid brain rot that I found interesting. 👇
To close, here are a few more thoughts I've had recently about social media:
You don't have to make your work your entire life.
You don't always have to be "grinding".
You don't have to care what strangers think about anything you're doing.
The opposites of these three ideas are perpetuated constantly on social media, and they couldn't be further from the truth. But if you spend enough time reading them, it's only a matter of time before you start believing them.
I've written about social media many times before, especially from the angle of understanding the impact of recommendation algorithms. You can check them out here:
In case you missed ML for SWEs last week, you can learn the best way to keep up with AI research papers here. Enjoy the resources below and don't forget to subscribe to join us!
Resources to become a better engineer
This is a callback to last week's article, and I was incredibly grateful for this roundup this week. Limiting my exposure to social media caused me to miss most of these papers. No worries, because Elvis had my back.
Less time online was a worthwhile tradeoff for sure, though.
Tensor Manipulation Unit (TMU): Reconfigurable, Near-Memory, High-Throughput AI
This paper introduces the Tensor Manipulation Unit (TMU), a novel AI accelerator that tackles performance bottlenecks by being placed near memory and featuring a reconfigurable hardware fabric. This reconfigurable design allows the TMU to adapt its dataflow to different neural network layers, achieving up to 12.8x higher throughput and 7.4x better energy efficiency compared to state-of-the-art accelerators on models like Transformers.
This paper showed me I have a ton to learn about how TPUs work because I a) didn't know a tensor manipulation unit could have such an impact and b) can't fully grasp whether this is a huge deal or not important at all. Definitely worth checking out.
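For context on what "tensor manipulation" even means here: my reading is that these are the data-movement operations between layers (reshapes, transposes, padding) rather than the matrix math itself. Here's a minimal NumPy illustration of the kinds of ops I believe a unit like this targets; this is my interpretation, not an example from the paper:

```python
import numpy as np

# Hypothetical illustration: "tensor manipulation" ops move data around
# without doing arithmetic, so they stress memory bandwidth, not FLOPs.
x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4)

transposed = np.transpose(x, (0, 2, 1))        # layout change between layers
flattened = x.reshape(2, -1)                   # flatten for a dense layer
padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))   # pad before a convolution

# No math happens in any of these, yet every element still has to move
# through memory, which is exactly why doing them near memory could help.
```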
This is an interesting read about Google's Tensor Processing Units (TPUs), which I largely consider to be Google's primary advantage in the race toward "AGI" (however we want to define it). My daily work is close to TPUs, and they've been incredibly valuable for Google.
Google's TPUs are specialized ASICs that accelerate deep learning workloads through a domain-specific architecture centered on a systolic array. This key component enables massive parallel processing, performing thousands of matrix multiply-accumulate operations in a single clock cycle, often using reduced numerical precision (like INT8) for greater speed and efficiency.
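To make the systolic array idea concrete, here's a toy sketch of its accumulation pattern: each step below stands in for one clock cycle of operands flowing through the grid, with every cell performing one multiply-accumulate in parallel. This is a simplification for intuition, not Google's actual design:

```python
import numpy as np

# Toy sketch of an output-stationary systolic array computing C = A @ B.
# Each of the M*N "cells" accumulates one element of C over K steps.
def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    # Accumulate in int32, as INT8 hardware typically does, to avoid overflow.
    C = np.zeros((M, N), dtype=np.int32)
    for k in range(K):  # one step of operands flowing through the grid
        # Every (i, j) cell fires one multiply-accumulate in parallel.
        C += A[:, k:k + 1].astype(np.int32) @ B[k:k + 1, :].astype(np.int32)
    return C

A = np.random.randint(-128, 128, size=(4, 8), dtype=np.int8)
B = np.random.randint(-128, 128, size=(8, 4), dtype=np.int8)
assert np.array_equal(systolic_matmul(A, B),
                      A.astype(np.int32) @ B.astype(np.int32))
```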
Andrej Karpathy: Software Is Changing (Again)
If you're a frequent ML for SWEs reader, this information might not be new to you, as we've discussed it many times, but Andrej Karpathy explains the shift AI is creating in software development. It's worth a watch even if it's just a review.
Huang's Law
The biggest takeaway from this is that scaling hasn't hit a wall.
Huang's Law, as described in the article, is an observation by Nvidia CEO Jensen Huang stating that GPU performance for AI applications roughly doubles every year, driven by combined hardware and software advancements. This rapid, annual doubling significantly outpaces the traditional Moore's Law rate and is considered a key technical driver behind the acceleration of AI capabilities and the AI boom.
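To see why the doubling period matters so much, just compound the two rates. These are rough figures from the article (roughly 2x per year for Huang's Law versus the classic 2x every two years for Moore's Law), so treat this as illustrative:

```python
# Compounding Huang's Law (~2x/year) vs. Moore's Law (~2x every 2 years).
for years in (1, 5, 10):
    huang = 2 ** years
    moore = 2 ** (years / 2)
    print(f"{years:>2} yr: Huang {huang:>5.0f}x vs Moore {moore:>4.1f}x")
# After 10 years: ~1024x vs ~32x, a 32x gap from the doubling period alone.
```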
Other interesting things
Using Home Assistant, AdGuard Home, and an $8 smart outlet to avoid brain rot
A tweet about an article claiming that "learn to code" is now advice as bad as getting a face tattoo. Now is a great time to get into computer science, but my advice remains the same as it always has been: do it because you're interested in it, not for the money.
The long-anticipated Tesla robotaxi service has launched in Austin. Big news, and a serious competitor for Waymo. More competition is always better for consumers.
The lifeblood of the United States' advantage is high-skilled immigration (yours truly). Without it, the US's competitive edge shrinks significantly.
Steve Jobs' take on the meaning of life: Make something wonderful and put it out there. Check out the video below.
Thanks again for reading!
Always be (machine) learning,
Logan