Reflective Piece: Final Machine Learning Module (following Rolfe et al.'s (2001) What? So What? Now What? framework)

WHAT

Throughout this module, I worked through both the fundamentals and the practical application of machine learning. Among the many artifacts I produced, two major milestones stood out: our Unit 6 group project on Airbnb data and my individual project in Unit 11 on model performance evaluation. Each unit challenged me to apply algorithms, handle datasets, and reflect on both collaborative and individual contributions.

In Unit 6, I collaborated with a team to explore Airbnb listing patterns in NYC. We proposed questions regarding pricing variation and neighbourhood clustering. I contributed extensively to data preprocessing and exploratory data analysis (EDA), and co-developed the K-Means clustering model. The collaborative nature of the project demanded negotiation, compromise, and clear communication. By contrast, Unit 11 shifted the responsibility entirely to me. I chose to assess model performance using classification metrics, learning how to balance accuracy with interpretability. I explored confusion matrices, precision, recall, and F1-scores in both balanced and imbalanced datasets.
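The shape of that clustering workflow can be sketched as below. This is a minimal, hypothetical illustration, not our project's actual code or data: the feature names and synthetic values stand in for listing attributes such as price and minimum nights.

```python
# Hypothetical sketch of a Unit-6-style workflow: scale numeric listing
# features, then group listings with K-Means. Data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Stand-ins for two listing features, e.g. nightly price and minimum nights.
listings = rng.normal(loc=[120, 3], scale=[40, 2], size=(200, 2))

# Scaling matters: K-Means uses Euclidean distance, so unscaled features
# with large ranges (like price) would dominate the clustering.
scaled = StandardScaler().fit_transform(listings)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(scaled)

print(kmeans.labels_[:10])   # one cluster assignment per listing
print(kmeans.inertia_)       # within-cluster sum of squares
```

In practice, the choice of `n_clusters` was itself a discussion point; techniques such as the elbow method on the inertia value help justify it.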

Across other units, I implemented perceptrons, gradient descent, CNNs for object recognition using CIFAR-10, and performed various EDA activities. I also participated in discussions and seminar reflections that challenged me to consider the legal, ethical, and professional responsibilities of AI practitioners.
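Of those earlier exercises, the perceptron is the most compact to illustrate. The sketch below, assuming the classic learning rule on the logical AND function (illustrative data, not the coursework's), shows the weight-update loop in a few lines:

```python
# Minimal perceptron learning rule: nudge the weights toward the target
# whenever the prediction is wrong. Trained here on the AND truth table.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND: true only when both inputs are 1

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):  # a handful of epochs suffices for separable data
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (target - pred) * xi  # update only when pred != target
        b += lr * (target - pred)

preds = [int(w @ xi + b > 0) for xi in X]
print(preds)  # → [0, 0, 0, 1]
```

Gradient descent generalizes this idea: instead of a fixed correction, each step follows the negative gradient of a loss function.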

SO WHAT

These experiences pushed me beyond theoretical understanding. I discovered that machine learning is rarely about finding the perfect model—it's about asking the right questions, understanding the data's context, and communicating insights clearly. In Unit 6, the balance between teamwork and technical work revealed my tendency to take on too much when under pressure. I often became the informal 'tech lead,' which left me stretched thin. This role gave me satisfaction but also highlighted the need for better delegation.

Unit 11 was both empowering and isolating. Working solo made every mistake more personal—but also every breakthrough more rewarding. I struggled initially with evaluation metrics, often misinterpreting what accuracy meant in imbalanced datasets. After reviewing literature and guidance, I understood the deeper value of metrics like F1-score when class imbalance is present (Saito & Rehmsmeier, 2015).
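That lesson about accuracy on imbalanced data is easy to demonstrate. In this small synthetic example (not my Unit 11 data), a "model" that always predicts the majority class scores high accuracy yet fails completely on the minority class, which the F1-score exposes:

```python
# Why accuracy misleads on imbalanced classes: predicting the majority
# class for every sample looks accurate but has zero recall on the
# minority class. Synthetic labels for illustration only.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

y_true = [0] * 95 + [1] * 5  # 95% negative, 5% positive
y_pred = [0] * 100           # always predict the majority class

print(accuracy_score(y_true, y_pred))            # 0.95 — looks strong
print(f1_score(y_true, y_pred, zero_division=0)) # 0.0 — reveals the failure
print(confusion_matrix(y_true, y_pred))          # all 5 positives missed
```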

My reflections across Units 7 to 9 made me aware of how my confidence in Python grew through hands-on application. Early in the course, even opening a Jupyter Notebook felt intimidating. By Unit 9, I was fine-tuning CNNs and interpreting performance charts.

On the legal and ethical front, our collaborative discussions forced me to think about AI not just as a tool, but as a responsibility. Debating fairness, bias, and accountability taught me that technical skill must always be balanced with ethical foresight.

Emotionally, I experienced everything from imposter syndrome to flow state. Sometimes I felt overwhelmed, especially during deadlines, but those emotions often led to growth. By talking to peers and revisiting reflections, I noticed patterns in how I react to pressure—and began developing healthier work rhythms.

NOW WHAT

This module fundamentally reshaped how I see myself as both a learner and a future AI practitioner. I no longer define success purely in terms of model performance, but in how well the solution fits the context and how clearly I can communicate that to others.

Moving forward, I plan to apply what I learned in my role at Red Hat, where AI and platform services are becoming central. I aim to introduce clearer evaluation standards and foster better team practices based on my Unit 6 collaboration learnings. I also see great potential in using CNNs and clustering techniques to support automation and customer experience improvements.

Personally, I've gained not only technical skills but emotional insights. I've learned that struggling doesn't mean failing—it often precedes a breakthrough. I want to mentor others in my team with this mindset and build an environment where learning from error is safe.

Finally, I will continue reflecting regularly, both informally and through structured formats like this one. As Rolfe et al. (2001) suggest, reflection is not a one-off event, but a continuous cycle of development.

References

Rolfe, G., Freshwater, D. & Jasper, M. (2001) Critical reflection in nursing and the helping professions: a user's guide. Basingstoke: Palgrave Macmillan.

Saito, T. & Rehmsmeier, M. (2015) 'The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets', PLOS ONE, 10(3), e0118432.
