7 Comments
ARCQ AI

This is a great explanation of why machine learning development feels so different from traditional software engineering. I like that you framed it in terms of experimentation rather than just “training models,” because that really is the core of the work. The reminder that multiple training runs per model are the norm and not the exception also helps explain why scaling ML is so costly. It is the kind of perspective that engineers coming from a pure software background really need to hear.

Logan Thorneloe

Realizing that production ML isn't actually about training was a huge turning point for me when it comes to working with AI. It's the reason scale matters so much when working with AI platforms, and it's the differentiating factor between building an ML system and a traditional software system.

Devansh

These are some of my favorite insights from you.

Once you throw a bunch of these together, you should come on AI Made Simple for a guest post combining them. It will be super useful to everyone.

Logan Thorneloe

Thanks! I've got two more planned that expand upon this. I'd love to bundle them into an article to share them on AI Made Simple once they're completed.

Devansh

Very excited for this

Terry Underwood

Nice work, Logan. Your enthusiasm and clarity make a great one-two punch. It's significant that machine learning improves not through engineers' rational deduction but through hypothesis testing. This approach would help in education, where theoretically grounded experiments integrating AI into writing tasks could be examined. There is a lot of hypothesizing but little scientific evidence. For example, I have a hypothesis about prompted writing assignments that could answer a core question about learning loss.

Logan Thorneloe

Thanks for the kind words! That sounds interesting.
