Conceptly

Amazon SageMaker

AI/ML · Fully Managed Machine Learning Platform

SageMaker is the platform layer that covers the ML lifecycle, from data preparation and training to model storage and inference deployment. It gives teams one place to reproduce experiments and productionize models.

Architecture Diagram

(Interactive process diagram; dashed-line animations indicate the flow direction of data or requests.)

Why do you need it?

If you want to train and deploy models, but notebooks, GPU training jobs, model storage, and inference endpoints all have to be assembled separately, infrastructure setup ends up taking longer than the experimentation itself. And once each team builds its own environment, even reproducing the same model becomes difficult.

Why did this approach emerge?

Early ML teams had to assemble training instances, data preparation, and model deployment pipelines independently. To reduce this complexity, SageMaker emerged as an integrated ML lifecycle platform.

How does it work inside?

SageMaker ties notebooks, training jobs, model artifacts, and inference endpoints into one workflow. It reads training data from S3, can pull custom containers from ECR, and lets teams pin their training and deployment environments to their own standards.
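That workflow can be sketched as the request payloads SageMaker's API expects: a training job reads data from S3 and a container from ECR, its artifact becomes a registered model, and an endpoint serves that model. This is a minimal sketch of the `create_training_job` / `create_model` / `create_endpoint_config` / `create_endpoint` shapes; all names, ARNs, URIs, and instance types below are placeholder assumptions.

```python
# Sketch of the SageMaker lifecycle as API request payloads. In practice each
# dict is passed to the corresponding boto3 SageMaker client call. All names,
# ARNs, image URIs, and instance types here are illustrative placeholders.

def training_job_request(job_name, image_uri, role_arn, s3_input, s3_output):
    """Payload shape for create_training_job."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,       # custom container pulled from ECR
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_input,            # training data read from S3
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": s3_output},  # model artifact lands here
        "ResourceConfig": {
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

def deployment_requests(model_name, image_uri, model_data_url, role_arn):
    """Payload shapes for create_model, create_endpoint_config, create_endpoint."""
    model = {
        "ModelName": model_name,
        "PrimaryContainer": {"Image": image_uri, "ModelDataUrl": model_data_url},
        "ExecutionRoleArn": role_arn,
    }
    config = {
        "EndpointConfigName": f"{model_name}-config",
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
        }],
    }
    endpoint = {
        "EndpointName": f"{model_name}-endpoint",
        "EndpointConfigName": config["EndpointConfigName"],
    }
    return model, config, endpoint
```

Because every stage is declared against the same model artifact and container image, a second team can rerun the identical training job and deployment rather than rebuilding its own environment.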

What is it often confused with?

SageMaker and Bedrock are both AI services, but the approach is different. SageMaker is a platform for training, tuning, and deploying your own models, while Bedrock is a service for consuming managed foundation model APIs. If you need to build and operate models with your own data, look at SageMaker; if the goal is to add features by calling prebuilt models, look at Bedrock.
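The difference is visible at call time. Below is a hedged sketch of the two inference request shapes, `invoke_endpoint` on the SageMaker runtime versus `invoke_model` on the Bedrock runtime; the endpoint name, model ID, and payload schemas are illustrative assumptions, since a SageMaker payload is whatever your own container expects and a Bedrock payload follows the chosen model provider's schema.

```python
import json

# SageMaker inference targets an endpoint YOU trained and deployed; Bedrock
# inference targets a managed foundation model selected by ID. Endpoint name,
# model ID, and body schemas below are placeholders.

def sagemaker_invoke_request(endpoint_name, features):
    """Call shape for sagemaker-runtime invoke_endpoint."""
    return {
        "EndpointName": endpoint_name,        # your deployed model
        "ContentType": "application/json",
        "Body": json.dumps({"instances": features}),  # schema set by your container
    }

def bedrock_invoke_request(model_id, prompt):
    """Call shape for bedrock-runtime invoke_model."""
    return {
        "modelId": model_id,                  # a managed foundation model
        "contentType": "application/json",
        "body": json.dumps({"prompt": prompt}),       # schema set by the provider
    }
```

In the first case you also own training, tuning, and the serving instance; in the second you only own the prompt.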

When should you use it?

Well-suited for custom model training, hyperparameter tuning, dedicated inference endpoints, and MLOps pipeline construction. Overkill if you only call foundation model APIs without training your own models.
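One of the use cases above, hyperparameter tuning, can be sketched as the search configuration SageMaker accepts: it launches many training jobs and searches the declared ranges for the best objective metric. This is a minimal sketch of the `HyperParameterTuningJobConfig` shape; the metric name, parameter range, and limits are placeholder assumptions.

```python
# Sketch of a hyperparameter tuning configuration (HyperParameterTuningJobConfig).
# SageMaker runs up to MaxNumberOfTrainingJobs training jobs, exploring the
# declared ranges to optimize the objective metric. Metric name, range, and
# limits are illustrative placeholders.

def tuning_job_config(max_jobs=20, max_parallel=2):
    return {
        "Strategy": "Bayesian",
        "HyperParameterTuningJobObjective": {
            "Type": "Minimize",
            "MetricName": "validation:loss",  # must match a metric your job emits
        },
        "ResourceLimits": {
            "MaxNumberOfTrainingJobs": max_jobs,
            "MaxParallelTrainingJobs": max_parallel,
        },
        "ParameterRanges": {
            "ContinuousParameterRanges": [{
                "Name": "learning_rate",
                "MinValue": "0.0001",
                "MaxValue": "0.1",
                "ScalingType": "Logarithmic",
            }],
        },
    }
```

If all you need is to call a prebuilt foundation model, none of this machinery applies, which is the "overkill" case above.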

Recommendation systems · Anomaly detection · Natural language processing · Computer vision