This blog is still a work in progress; I’ll continue to update it. Last updated: 2025-05-20
1. Foundations of Machine Learning
- Core concepts: supervised vs. unsupervised learning, key algorithms
- Neural networks basics: perceptron → deep feed-forward → CNNs / RNNs
- Introduction to NLP & Transformers: the attention mechanism, encoder–decoder architectures
To grasp the basic concepts in about two hours, start with Understanding LLMs from Scratch Using Middle School Math. If you have more time, explore 3Blue1Brown’s series on neural networks.
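The attention mechanism mentioned above can be sketched in a few lines of NumPy. This is a minimal, illustrative implementation of scaled dot-product attention (the core of the Transformer), not production code; the shapes and random inputs are made up for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core Transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V                                    # weighted sum of values

# Toy example: 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per token
```

Each output row is a mixture of the value vectors, weighted by how strongly that token's query matches every key; stacking several of these "heads" and adding feed-forward layers gives a Transformer block.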
2. Prompt Engineering Practices
- Crafting effective prompts for large language models
- Data preprocessing & feature engineering: tokenization, normalization, embeddings
- Versioning & experiment tracking: MLflow, Weights & Biases, DVC
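The tokenization → normalization → embeddings pipeline above can be sketched with a toy whitespace tokenizer. Real systems use subword tokenizers (BPE, WordPiece) and learned embedding tables; the random embeddings and tiny corpus here are illustrative assumptions.

```python
import re
import numpy as np

def normalize(text):
    """Lowercase and strip punctuation -- a minimal normalization step."""
    return re.sub(r"[^\w\s]", "", text.lower())

def tokenize(text):
    """Whitespace tokenization; production pipelines use subword tokenizers."""
    return normalize(text).split()

corpus = ["The cat sat.", "The dog ran!"]
tokens = [tokenize(t) for t in corpus]

# Build a vocabulary mapping each unique token to an integer id
vocab = {w: i for i, w in enumerate(sorted({w for doc in tokens for w in doc}))}
ids = [[vocab[w] for w in doc] for doc in tokens]

# Random embedding table: one d-dimensional vector per vocabulary entry
d = 8
emb = np.random.default_rng(0).normal(size=(len(vocab), d))
vectors = emb[ids[0]]  # embedding matrix for the first sentence
print(ids[0], vectors.shape)
```

The same id → vector lookup is what an embedding layer does inside a neural network, except there the table is learned during training rather than drawn at random.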
3. Practical AI Applications
- Retrieval-Augmented Generation (RAG) systems
- Model Context Protocol (MCP): a standardized protocol for secure, pluggable context exchange between AI apps and external data/tools
- Autonomous agents & multi-agent workflows
- Common use cases: chatbots, recommendation engines, summarization tools
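The retrieval half of a RAG system can be sketched without any framework: embed the documents, score them against the query, and splice the best match into the prompt. This toy version uses bag-of-words vectors and cosine similarity as stand-ins for a real embedding model and vector store; the document texts are invented for the example.

```python
import numpy as np
from collections import Counter

docs = [
    "RAG combines retrieval with generation.",
    "LoRA is a parameter-efficient fine-tuning method.",
    "MLflow tracks experiments and models.",
]

def bow(text, vocab):
    """Bag-of-words vector -- a stand-in for a learned embedding."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

vocab = sorted({w for d in docs for w in d.lower().split()})
matrix = np.stack([bow(d, vocab) for d in docs])

def retrieve(query, k=1):
    """Return the k documents most similar to the query (cosine similarity)."""
    q = bow(query, vocab)
    sims = matrix @ q / (np.linalg.norm(matrix, axis=1) * (np.linalg.norm(q) + 1e-9))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

context = retrieve("What is parameter-efficient fine-tuning?")[0]
prompt = f"Answer using only the context.\nContext: {context}\nQuestion: ..."
print(context)
```

A production system would swap in dense embeddings, an approximate-nearest-neighbor index, and an LLM call on the assembled prompt, but the retrieve-then-generate shape is the same.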
4. Model Training & Fine-Tuning
- Transfer learning & domain adaptation
- Tuning techniques: full-model vs. parameter-efficient methods (e.g., LoRA)
- Evaluation & continuous improvement: cross-validation, A/B testing, monitoring
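The idea behind LoRA, mentioned above, is to freeze the pretrained weight matrix W and learn only a low-rank update B·A. A minimal NumPy sketch (the dimensions and rank are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 2               # rank r << d keeps the adapter small

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-initialized

def lora_forward(x, alpha=1.0):
    """y = W x + alpha * B (A x) -- only A and B are updated during fine-tuning."""
    return W @ x + alpha * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# B starts at zero, so the adapter initially leaves the model's output unchanged
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size                     # 256 parameters for full fine-tuning
lora_params = A.size + B.size            # 64 parameters for the adapter
print(full_params, lora_params)
```

Even in this toy setting the adapter trains 4x fewer parameters than full fine-tuning; at real model scales (d in the thousands, r around 4–64) the savings are far larger, which is why LoRA is the go-to parameter-efficient method.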