Best practices for endpoint security and health with Amazon SageMaker

To address the latest security issues, Amazon SageMaker automatically patches endpoints to the latest and most secure software. However, if you incorrectly modify your endpoint dependencies, Amazon SageMaker can't automatically patch your endpoints or replace your unhealthy instances. To ensure your endpoints remain eligible for automatic updates, apply the following best practices.

Don't delete resources while your endpoints use them

Avoid deleting any of the following resources if you have existing endpoints that use them:

* The model definition that you create with the CreateModel action in the Amazon SageMaker API.
* Any model artifacts that you specify for the ModelDataUrl parameter.
* The IAM role and permissions that you specify for the ExecutionRoleArn parameter.

  Reminder: In the model definition that your endpoint uses, ensure that the IAM role that you specified has the correct permissions. For more information about the required permissions for Amazon SageMaker endpoints, see CreateModel API: Execution Role Permissions.

* The inference images that you specify for the Image parameter, if you use your own inference code.

  Reminder: If you use the private registry feature, ensure that Amazon SageMaker can access the private registry as long as you're using the endpoint.

* The Amazon VPC subnets and security groups that you specify for the VpcConfig parameter.
* The endpoint configuration that you create with the CreateEndpointConfig action in the Amazon SageMaker API.
* Any KMS keys or Amazon S3 buckets that you specify in the endpoint configuration.

  Reminder: Ensure you don't disable these KMS keys.
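To audit which of these resources a live endpoint still depends on before you delete anything, you can walk the endpoint's dependency chain with the AWS SDK. The following is a minimal sketch using boto3; the endpoint name is a hypothetical placeholder. It surfaces the endpoint configuration, models, inference images, model artifact locations, execution role, and KMS key that must stay in place.

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical endpoint name; substitute your own.
endpoint = sm.describe_endpoint(EndpointName="my-endpoint")
config = sm.describe_endpoint_config(
    EndpointConfigName=endpoint["EndpointConfigName"]
)

print("Endpoint config in use:", endpoint["EndpointConfigName"])
print("KMS key (if any):", config.get("KmsKeyId"))

for variant in config["ProductionVariants"]:
    model = sm.describe_model(ModelName=variant["ModelName"])
    container = model.get("PrimaryContainer", {})
    print("Model:", variant["ModelName"])
    print("  Image:", container.get("Image"))                  # inference image still required
    print("  ModelDataUrl:", container.get("ModelDataUrl"))    # S3 artifacts still required
    print("  ExecutionRoleArn:", model.get("ExecutionRoleArn"))  # role must keep its permissions
```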
Follow these procedures to update your endpoints

When you update your Amazon SageMaker endpoints, use any of the following procedures that apply to your needs.

To update your model definition settings

1. Create a new model definition with your updated settings by using the CreateModel action in the Amazon SageMaker API.
2. Create a new endpoint configuration that uses the new model definition. To do this, use the CreateEndpointConfig action in the Amazon SageMaker API.
3. Update your endpoint with the new endpoint configuration so that your updated model definition settings take effect.
4. (Optional) Delete the old endpoint configuration if you're not using it with any other endpoints. You can also delete the resources that you specified in the model definition if you're not using them with any other endpoints. These resources include model artifacts in Amazon S3 and inference images.

To update your endpoint configuration

1. Create a new endpoint configuration with your updated settings.
2. Update your endpoint with the new configuration so that your updates take effect.
3. (Optional) Delete the old endpoint configuration if you're not using it with any other endpoints. You can also delete the resources that you specified in the model definition if you're not using them with any other endpoints. These resources include model artifacts in Amazon S3 and inference images.
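Both procedures map onto a short sequence of SageMaker API calls. The following is a minimal sketch of the new-name update flow using boto3; every name, the image URI, the Amazon S3 path, and the role ARN are placeholders, and the waiter step is an added safeguard (not part of the procedure above) to confirm the update finished before cleaning up.

```python
import boto3

sm = boto3.client("sagemaker")

# Step 1: a new model definition with updated settings (placeholder values).
sm.create_model(
    ModelName="my-model-v2",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:v2",
        "ModelDataUrl": "s3://my-bucket/model-v2/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
)

# Step 2: a new endpoint configuration that points at the new model.
sm.create_endpoint_config(
    EndpointConfigName="my-endpoint-config-v2",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-model-v2",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Step 3: point the existing endpoint at the new configuration.
sm.update_endpoint(
    EndpointName="my-endpoint",
    EndpointConfigName="my-endpoint-config-v2",
)

# Wait until the endpoint returns to InService before any cleanup.
sm.get_waiter("endpoint_in_service").wait(EndpointName="my-endpoint")

# Step 4 (optional): delete the old configuration if nothing else uses it.
sm.delete_endpoint_config(EndpointConfigName="my-endpoint-config-v1")
```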
Whenever you create a new model definition or endpoint configuration, we recommend that you use a unique name. If you want to update these resources and retain their original names, use the following procedures.

To update your model settings and retain the original model name

1. Delete the existing model definition. At this point, any endpoint that uses the model is broken, but you fix this in the following steps.
2. Create the model definition again with your updated settings, and use the same model name.
3. Create a new endpoint configuration that uses the updated model definition.
4. Update your endpoint with the new endpoint configuration so that your updates take effect.

To update your endpoint configuration and retain the original configuration name

1. Delete the existing endpoint configuration.
2. Create a new endpoint configuration with your updated settings, and use the original name.
3. Update your endpoint with the new configuration so that your updates take effect.
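As a companion to the earlier sketch, the following illustrates the retain-the-original-model-name procedure with boto3; all names, URIs, and ARNs are again placeholders. Because any endpoint that uses the model is broken between step 1 and step 4, run the steps together rather than pausing in between.

```python
import boto3

sm = boto3.client("sagemaker")

MODEL_NAME = "my-model"  # hypothetical; reused on purpose

# Step 1: delete the existing model definition.
sm.delete_model(ModelName=MODEL_NAME)

# Step 2: recreate the model under the same name with updated settings.
sm.create_model(
    ModelName=MODEL_NAME,
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:v2",
        "ModelDataUrl": "s3://my-bucket/model-v2/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
)

# Step 3: a new endpoint configuration that uses the recreated model.
sm.create_endpoint_config(
    EndpointConfigName="my-endpoint-config-v2",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": MODEL_NAME,
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Step 4: apply the new configuration so the updated model takes effect.
sm.update_endpoint(
    EndpointName="my-endpoint",
    EndpointConfigName="my-endpoint-config-v2",
)
```

The configuration-name variant follows the same pattern with delete_endpoint_config in place of delete_model: delete the existing configuration, recreate it under the original name with your updated settings, then call update_endpoint so the updates take effect.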