<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>ML Optimization Lab</title><link>https://ml-optimization-lab.maaihub.com</link><description>Artifact-backed ML optimization workflows.</description>
<item><title>Model Monitoring Signals for Solo ML Products</title><link>https://ml-optimization-lab.maaihub.com/articles/model-monitoring-signals-for-solo-ml-products.html</link><description>A reusable checklist for solo ML model monitoring with a frozen validation contract.</description></item>
<item><title>Tabular Baseline in 30 Minutes</title><link>https://ml-optimization-lab.maaihub.com/articles/tabular-baseline-in-30-minutes.html</link><description>A strict baseline recipe that gets from CSV to reliable validation quickly.</description></item>
<item><title>Time-Series Cross-Validation Without Future Leakage</title><link>https://ml-optimization-lab.maaihub.com/articles/time-series-cross-validation-without-future-leakage.html</link><description>A blocked CV template for demand, finance, and event datasets.</description></item>
<item><title>XGBoost vs LightGBM vs CatBoost for High-Cardinality Features</title><link>https://ml-optimization-lab.maaihub.com/articles/xgboost-vs-lightgbm-vs-catboost-for-high-cardinality-features.html</link><description>A decision table and benchmark notebook template for categorical-heavy data.</description></item>
<item><title>Ranking Metrics Explained for Recommendation Models</title><link>https://ml-optimization-lab.maaihub.com/articles/ranking-metrics-explained-for-recommendation-models.html</link><description>A practical guide to MAP@K, NDCG, Recall@K, and validation traps.</description></item>
<item><title>Prompt Pack for Data Science Agents</title><link>https://ml-optimization-lab.maaihub.com/articles/prompt-pack-for-data-science-agents.html</link><description>Prompts that force agents to produce tests, checks, and artifacts instead of vague advice.</description></item>
<item><title>Out-of-Fold Ensembling Recipe for Tabular Competitions</title><link>https://ml-optimization-lab.maaihub.com/articles/out-of-fold-ensembling-recipe-for-tabular-competitions.html</link><description>A minimal OOF stacking workflow with reproducible files.</description></item>
<item><title>Public Dataset Teardown Workflow for ML Content</title><link>https://ml-optimization-lab.maaihub.com/articles/public-dataset-teardown-workflow-for-ml-content.html</link><description>A repeatable process to turn public datasets into useful articles and notebooks.</description></item>
<item><title>Optuna Search Spaces for LightGBM on Tabular Data</title><link>https://ml-optimization-lab.maaihub.com/articles/optuna-search-spaces-for-lightgbm-on-tabular-data.html</link><description>A Kaggle-style search-space recipe that prevents wasted trials.</description></item>
<item><title>Leaderboard Overfitting Warning Signs</title><link>https://ml-optimization-lab.maaihub.com/articles/leaderboard-overfitting-warning-signs.html</link><description>A sanity checklist before trusting a public leaderboard jump.</description></item>
<item><title>LightGBM Early Stopping and Seed Stability Checklist</title><link>https://ml-optimization-lab.maaihub.com/articles/lightgbm-early-stopping-and-seed-stability-checklist.html</link><description>A reproducibility checklist for noisy leaderboard improvements.</description></item>
<item><title>ML Interview Optimization Problems for Data Scientists</title><link>https://ml-optimization-lab.maaihub.com/articles/ml-interview-optimization-problems-for-data-scientists.html</link><description>A set of realistic prompts about metrics, CV, and model selection.</description></item>
<item><title>Kaggle Notebook Template for Reproducible Experiments</title><link>https://ml-optimization-lab.maaihub.com/articles/kaggle-notebook-template-for-reproducible-experiments.html</link><description>A folder and notebook structure that makes experiments comparable.</description></item>
<item><title>How Kaggle Grandmasters Prevent Cross-Validation Leakage</title><link>https://ml-optimization-lab.maaihub.com/articles/how-kaggle-grandmasters-prevent-cross-validation-leakage.html</link><description>A leakage checklist for tabular, time series, grouped, and text data.</description></item>
<item><title>Feature Selection Workflow for Noisy Tabular Data</title><link>https://ml-optimization-lab.maaihub.com/articles/feature-selection-workflow-for-noisy-tabular-data.html</link><description>A practical recipe for permutation importance, adversarial validation, and ablation.</description></item>
<item><title>Calibration Checks Before Trusting a Classifier</title><link>https://ml-optimization-lab.maaihub.com/articles/calibration-checks-before-trusting-a-classifier.html</link><description>A quick calibration workflow for imbalanced binary classification.</description></item>
<item><title>Business KPI to ML Metric Translation Framework</title><link>https://ml-optimization-lab.maaihub.com/articles/business-kpi-to-ml-metric-translation-framework.html</link><description>A framework for choosing metrics that stakeholders can trust.</description></item>
<item><title>Ablation Study Template for Feature Engineering</title><link>https://ml-optimization-lab.maaihub.com/articles/ablation-study-template-for-feature-engineering.html</link><description>A controlled experiment template to decide which feature groups are worth keeping.</description></item>
<item><title>Adversarial Validation for Train-Test Shift</title><link>https://ml-optimization-lab.maaihub.com/articles/adversarial-validation-for-train-test-shift.html</link><description>A diagnostic workflow for detecting distribution shift before modeling.</description></item>
<item><title>Automated Error Analysis for Tabular Models</title><link>https://ml-optimization-lab.maaihub.com/articles/automated-error-analysis-for-tabular-models.html</link><description>A recipe for slicing errors by segment, feature bins, and prediction confidence.</description></item></channel></rss>