
Update DLAMI BASE AMI Logic to switch between OSS and Proprietary Nvidia Driver AMI #3760

Merged
merged 51 commits into from
Mar 19, 2024
Changes from 21 commits
Commits
51 commits
840aef6
Update DLAMI BASE AMI Logic to switch between OSS and Proprietary Nvi…
Mar 8, 2024
95f6e46
update gdrcopy to 2.4
Mar 8, 2024
2e4b09b
formatting
Mar 8, 2024
c28d32b
disable build and fix sm local test instance ami
Mar 11, 2024
61212b4
use proprietary driver dlami as default
Mar 11, 2024
a0f8f82
fix ul20 and aml2 dlami name logic and test only ec2
Mar 11, 2024
ce3d3da
allow test efa
Mar 11, 2024
00fba94
update oss dlami list
Mar 11, 2024
434fbdc
test curand
Mar 11, 2024
115c33c
ensure ec2 instance type fixture is run before ec2 instance ami
Mar 12, 2024
092b14b
alter ami pulling logic
Mar 12, 2024
b75b415
usefixtures
Mar 12, 2024
9dc8fec
use parametrize
Mar 12, 2024
95ddb86
use instance ami in parametrize
Mar 12, 2024
1d9347f
add instance ami as parametrize
Mar 12, 2024
a78962d
Merge branch 'master' into update-ami
sirutBuasai Mar 12, 2024
0a9504d
fix curand test
Mar 13, 2024
c66555e
correct ami name
Mar 13, 2024
e5716bc
correct ami format
Mar 13, 2024
66ce9fc
use proprietary dlami for curand
Mar 13, 2024
68273a4
rebuild
Mar 14, 2024
c70f0e9
logging debug
Mar 14, 2024
75f8e86
remove parametrize ami
Mar 14, 2024
5a99d36
flip logic
Mar 14, 2024
9040c77
formatting
Mar 14, 2024
2f16a5b
print instance ami
Mar 14, 2024
3b15a71
fix typo
Mar 14, 2024
b83eed1
remove parametrize logic and fix proprietary dlami name pattern
Mar 14, 2024
9f3a24d
Merge branch 'master' into update-ami
sirutBuasai Mar 14, 2024
3a78b32
revert gdr copy
Mar 14, 2024
f8af0b0
update test with gdrcopy 2.4
Mar 14, 2024
e32684a
build test pt ec2
Mar 15, 2024
c8b1ce0
build test pt sm
Mar 15, 2024
63d8a31
remove gdrcopy ami
Mar 15, 2024
e40298b
sanity and sm local testonly
Mar 15, 2024
9efd8da
build test pt sm
Mar 15, 2024
f8538bf
Merge branch 'master' into update-ami
sirutBuasai Mar 15, 2024
f9633d6
formatting
Mar 15, 2024
2682dac
test pt sm
Mar 16, 2024
b561099
build test pt sm
Mar 16, 2024
2b52804
disable build
Mar 16, 2024
dd1e2b2
build test pt sm
Mar 16, 2024
e5fe485
use get-login-password
Mar 18, 2024
ac78c1f
remove () from get-login
Mar 18, 2024
4013f89
test tensorflow
Mar 18, 2024
9f74eeb
use login_to_ecr_registry function
Mar 18, 2024
185d4a5
use dict for base dlami logic
Mar 18, 2024
a893035
use image uri instead
Mar 18, 2024
64d9afa
fix aml2 dlami logic
Mar 18, 2024
ef03578
revert toml file
Mar 19, 2024
feab20e
Merge branch 'master' into update-ami
sirutBuasai Mar 19, 2024
12 changes: 6 additions & 6 deletions dlc_developer_config.toml
@@ -34,11 +34,11 @@ deep_canary_mode = false
[build]
# Add in frameworks you would like to build. By default, builds are disabled unless you specify building an image.
# available frameworks - ["autogluon", "huggingface_tensorflow", "huggingface_pytorch", "huggingface_tensorflow_trcomp", "huggingface_pytorch_trcomp", "pytorch_trcomp", "tensorflow", "mxnet", "pytorch", "stabilityai_pytorch"]
build_frameworks = []
build_frameworks = ["pytorch"]

# By default we build both training and inference containers. Set true/false values to determine which to build.
build_training = true
build_inference = true
build_inference = false

# Set do_build to "false" to skip builds and test the latest image built by this PR
# Note: at least one build is required to set do_build to "false"
@@ -57,8 +57,8 @@ notify_test_failures = false
sanity_tests = true
safety_check_test = false
ecr_scan_allowlist_feature = false
ecs_tests = true
eks_tests = true
ecs_tests = false
eks_tests = false
ec2_tests = true
# Set it to true if you are preparing a Benchmark related PR
ec2_benchmark_tests = false
@@ -67,7 +67,7 @@ ec2_benchmark_tests = false
### default. If false, these types of tests will be skipped while other tests will run as usual.
### These tests are run in EC2 test jobs, so ec2_tests must be true if ec2_tests_on_heavy_instances is true.
### Off by default (set to false)
ec2_tests_on_heavy_instances = false
ec2_tests_on_heavy_instances = true

### SM specific tests
### Off by default
@@ -102,7 +102,7 @@ use_scheduler = false

# Standard Framework Training
dlc-pr-mxnet-training = ""
dlc-pr-pytorch-training = ""
dlc-pr-pytorch-training = "pytorch/training/buildspec-2-2-ec2.yml"
dlc-pr-tensorflow-2-training = ""
dlc-pr-autogluon-training = ""

2 changes: 1 addition & 1 deletion pytorch/training/docker/2.2/py3/cu121/Dockerfile.gpu
@@ -63,7 +63,7 @@ ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
ENV CUDNN_VERSION=8.9.2.26
ENV NCCL_VERSION=2.19.4
ENV EFA_VERSION=1.30.0
ENV GDRCOPY_VERSION=2.3.1
ENV GDRCOPY_VERSION=2.4.1

ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
ENV OPEN_MPI_PATH=/opt/amazon/openmpi
@@ -7,6 +7,7 @@
CONTAINER_TESTS_PREFIX,
PT_GPU_PY3_BENCHMARK_IMAGENET_AMI_US_WEST_2,
UBUNTU_18_HPU_DLAMI_US_WEST_2,
UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_WEST_2,
DEFAULT_REGION,
get_framework_and_version_from_tag,
is_pr_context,
@@ -54,6 +55,9 @@

@pytest.mark.model("resnet50")
@pytest.mark.parametrize("ec2_instance_type", [PT_EC2_GPU_SYNTHETIC_INSTANCE_TYPE], indirect=True)
@pytest.mark.parametrize(
"ec2_instance_ami", [UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_WEST_2], indirect=True
)
@pytest.mark.team("conda")
def test_performance_pytorch_gpu_synthetic(
pytorch_training, ec2_connection, gpu_only, py3_only, ec2_instance_type
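The hunk above pins the benchmark test to the proprietary-driver AMI via `@pytest.mark.parametrize(..., indirect=True)`, which hands the parametrized value to the `ec2_instance_ami` fixture through `request.param` instead of passing it to the test function. A minimal stand-alone sketch of that routing (hypothetical names, no pytest dependency) looks like this:

```python
# With indirect=True, pytest stores the parametrize value on request.param and
# invokes the fixture; the fixture chooses between the pinned value and a default.
class FakeRequest:
    def __init__(self, param=None):
        if param is not None:
            self.param = param  # only set when the test parametrized the fixture

DEFAULT_AMI = "ami-default-lookup"  # stands in for the fixture's fallback lookup

def ec2_instance_ami(request):
    # Mirrors the fixture pattern used in conftest.py: an explicit param wins.
    return request.param if hasattr(request, "param") else DEFAULT_AMI

assert ec2_instance_ami(FakeRequest("ami-proprietary-dlami")) == "ami-proprietary-dlami"
assert ec2_instance_ami(FakeRequest()) == "ami-default-lookup"
```

This is why tests that do not parametrize `ec2_instance_ami` silently pick up whatever default the fixture computes, while tests like this one can force a specific driver flavor.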
26 changes: 10 additions & 16 deletions test/dlc_tests/conftest.py
@@ -31,11 +31,7 @@
is_nightly_context,
DEFAULT_REGION,
P3DN_REGION,
UBUNTU_20_BASE_DLAMI_US_EAST_1,
UBUNTU_20_BASE_DLAMI_US_WEST_2,
PT_GPU_PY3_BENCHMARK_IMAGENET_AMI_US_EAST_1,
AML2_BASE_DLAMI_US_WEST_2,
AML2_BASE_DLAMI_US_EAST_1,
KEYS_TO_DESTROY_FILE,
are_efa_tests_disabled,
get_repository_and_tag_from_image_uri,
@@ -330,18 +326,11 @@ def ec2_instance_role_name(request):


@pytest.fixture(scope="function")
def ec2_instance_ami(request, region):
def ec2_instance_ami(request, region, ec2_instance_type):
return (
request.param
if hasattr(request, "param")
else UBUNTU_20_BASE_DLAMI_US_EAST_1
if region == "us-east-1"
else UBUNTU_20_BASE_DLAMI_US_WEST_2
if region == "us-west-2"
else test_utils.get_ami_id_boto3(
region_name=region,
ami_name_pattern="Deep Learning Base GPU AMI (Ubuntu 20.04) ????????",
)
else test_utils.get_instance_type_base_dlami(ec2_instance_type, region)
)


@@ -564,9 +553,14 @@ def ec2_instance(
)
if ec2_instance_ami != PT_GPU_PY3_BENCHMARK_IMAGENET_AMI_US_EAST_1:
ec2_instance_ami = (
AML2_BASE_DLAMI_US_EAST_1
if ec2_instance_ami == AML2_BASE_DLAMI_US_WEST_2
else UBUNTU_20_BASE_DLAMI_US_EAST_1
test_utils.get_instance_type_base_dlami(
ec2_instance_type, "us-east-1", linux_dist="AML2"
)
if ec2_instance_ami
== test_utils.get_instance_type_base_dlami(
ec2_instance_type, "us-west-2", linux_dist="AML2"
)
else test_utils.get_instance_type_base_dlami(ec2_instance_type, "us-east-1")
)

ec2_key_name = f"{ec2_key_name}-{str(uuid.uuid4())}"
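The conftest hunk above replaces hardcoded constants with a cross-region remap: if the launch falls back from us-west-2 to us-east-1, the AMI that was already resolved for us-west-2 must be swapped for its us-east-1 equivalent with the same Linux distribution. A sketch of that decision under hypothetical names (`resolver` stands in for `test_utils.get_instance_type_base_dlami`):

```python
# Illustrative only: reproduce the fixture's AML2-vs-Ubuntu remap logic when
# the instance is relaunched in us-east-1.
def remap_ami_for_fallback(ami, instance_type, resolver):
    # If the current AMI is the AML2 base DLAMI for us-west-2, keep AML2 in the
    # new region; otherwise fall back to the Ubuntu 20 base DLAMI there.
    if ami == resolver(instance_type, "us-west-2", "AML2"):
        return resolver(instance_type, "us-east-1", "AML2")
    return resolver(instance_type, "us-east-1", "UBUNTU_20")

def fake_resolver(instance_type, region, linux_dist="UBUNTU_20"):
    # Deterministic stub so the sketch runs without AWS access.
    return f"ami-{linux_dist.lower()}-{region}"

assert remap_ami_for_fallback("ami-aml2-us-west-2", "p3.2xlarge", fake_resolver) == "ami-aml2-us-east-1"
assert remap_ami_for_fallback("ami-ubuntu_20-us-west-2", "p3.2xlarge", fake_resolver) == "ami-ubuntu_20-us-east-1"
```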
19 changes: 15 additions & 4 deletions test/dlc_tests/ec2/pytorch/training/common_cases.py
@@ -7,13 +7,15 @@

from test.test_utils import (
CONTAINER_TESTS_PREFIX,
UBUNTU_20_BASE_OSS_DLAMI_US_WEST_2,
get_framework_and_version_from_tag,
get_cuda_version_from_tag,
)
from test.test_utils.ec2 import (
execute_ec2_training_test,
get_ec2_instance_type,
get_efa_ec2_instance_type,
get_efa_ec2_instance_ami,
)

# Test functions
@@ -44,6 +46,15 @@
default="g4dn.12xlarge", filter_function=ec2_utils.filter_non_g3_instance_type
)

# Instance AMI filters
PT_EC2_CPU_INSTANCE_AMI = [UBUNTU_20_BASE_OSS_DLAMI_US_WEST_2]

PT_EC2_GPU_INSTANCE_AMI = get_efa_ec2_instance_ami(PT_EC2_GPU_INSTANCE_TYPE_AND_REGION)

PT_EC2_GPU_INDUCTOR_INSTANCE_AMI = get_efa_ec2_instance_ami(
PT_EC2_GPU_INDUCTOR_INSTANCE_TYPE_AND_REGION
)


def pytorch_standalone(pytorch_training, ec2_connection):
execute_ec2_training_test(
@@ -219,6 +230,10 @@ def pytorch_cudnn_match_gpu(pytorch_training, ec2_connection, region):
), f"System CUDNN {system_cudnn} and torch cudnn {cudnn_from_torch} do not match. Please downgrade system CUDNN or recompile torch with correct CUDNN version."


def pytorch_curand_gpu(pytorch_training, ec2_connection):
execute_ec2_training_test(ec2_connection, pytorch_training, CURAND_CMD)


def pytorch_linear_regression_cpu(pytorch_training, ec2_connection):
execute_ec2_training_test(
ec2_connection, pytorch_training, PT_REGRESSION_CMD, container_name="pt_reg"
@@ -236,7 +251,3 @@ def pytorch_telemetry_cpu(pytorch_training, ec2_connection):
execute_ec2_training_test(
ec2_connection, pytorch_training, PT_TELEMETRY_CMD, timeout=900, container_name="telemetry"
)


def curand_gpu(training, ec2_connection):
execute_ec2_training_test(ec2_connection, training, CURAND_CMD)
@@ -15,6 +15,7 @@
@pytest.mark.parametrize(
"ec2_instance_type, region", common_cases.PT_EC2_GPU_INSTANCE_TYPE_AND_REGION, indirect=True
)
@pytest.mark.parametrize("ec2_instance_ami", common_cases.PT_EC2_GPU_INSTANCE_AMI, indirect=True)
def test_pytorch_2_2_gpu(
pytorch_training___2__2, ec2_connection, region, gpu_only, ec2_instance_type
):
@@ -34,6 +35,7 @@ def test_pytorch_2_2_gpu(
(common_cases.nvapex, (pytorch_training, ec2_connection)),
(common_cases.pytorch_training_torchaudio, (pytorch_training, ec2_connection)),
(common_cases.pytorch_cudnn_match_gpu, (pytorch_training, ec2_connection, region)),
(common_cases.pytorch_curand_gpu, (pytorch_training, ec2_connection)),
]

if "sagemaker" in pytorch_training:
@@ -57,6 +59,9 @@
common_cases.PT_EC2_GPU_INDUCTOR_INSTANCE_TYPE_AND_REGION,
indirect=True,
)
@pytest.mark.parametrize(
"ec2_instance_ami", common_cases.PT_EC2_GPU_INDUCTOR_INSTANCE_AMI, indirect=True
)
def test_pytorch_2_2_gpu_inductor(
pytorch_training___2__2, ec2_connection, region, gpu_only, ec2_instance_type
):
@@ -81,6 +86,7 @@ def test_pytorch_2_2_gpu_inductor(
@pytest.mark.model("N/A")
@pytest.mark.team("conda")
@pytest.mark.parametrize("ec2_instance_type", common_cases.PT_EC2_CPU_INSTANCE_TYPE, indirect=True)
@pytest.mark.parametrize("ec2_instance_ami", common_cases.PT_EC2_CPU_INSTANCE_AMI, indirect=True)
def test_pytorch_2_2_cpu(pytorch_training___2__2, ec2_connection, cpu_only):
pytorch_training = pytorch_training___2__2

3 changes: 3 additions & 0 deletions test/dlc_tests/ec2/test_curand.py
@@ -20,6 +20,9 @@
@pytest.mark.model("N/A")
@pytest.mark.team("frameworks")
@pytest.mark.parametrize("ec2_instance_type", CURAND_EC2_SINGLE_GPU_INSTANCE_TYPE, indirect=True)
@pytest.mark.parametrize(
"ec2_instance_ami", [test_utils.UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_WEST_2], indirect=True
)
def test_curand_gpu(training, ec2_connection, gpu_only, ec2_instance_type):
if test_utils.is_image_incompatible_with_instance_type(training, ec2_instance_type):
pytest.skip(f"Image {training} is incompatible with instance type {ec2_instance_type}")
2 changes: 1 addition & 1 deletion test/dlc_tests/ec2/test_gdrcopy.py
@@ -24,7 +24,7 @@
@pytest.mark.integration("gdrcopy")
@pytest.mark.parametrize("ec2_instance_type,region", EC2_EFA_GPU_INSTANCE_TYPE_AND_REGION)
@pytest.mark.parametrize(
"ec2_instance_ami", [test_utils.UBUNTU_20_BASE_DLAMI_US_WEST_2], indirect=True
"ec2_instance_ami", [test_utils.UBUNTU_20_BASE_OSS_DLAMI_US_WEST_2], indirect=True
)
@pytest.mark.skipif(
is_pr_context() and not are_heavy_instance_ec2_tests_enabled(),
134 changes: 123 additions & 11 deletions test/test_utils/__init__.py
@@ -78,18 +78,38 @@ def get_ami_id_ssm(region_name, parameter_path):
return ami_id


# The Ubuntu 20.04 AMI which adds GDRCopy is used only for GDRCopy feature that is supported on PT1.13 and PT2.0
UBUNTU_20_BASE_DLAMI_US_WEST_2 = get_ami_id_boto3(
region_name="us-west-2", ami_name_pattern="Deep Learning Base GPU AMI (Ubuntu 20.04) ????????"
# The Base DLAMI is split between an OSS Nvidia driver variant and a Proprietary Nvidia driver variant. See https://docs.aws.amazon.com/dlami/latest/devguide/important-changes.html
UBUNTU_20_BASE_OSS_DLAMI_US_WEST_2 = get_ami_id_boto3(
Contributor: looked at scope of removing these, and it will over-scope this PR. We can proceed with this for now

region_name="us-west-2",
ami_name_pattern="Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 20.04) ????????",
)
UBUNTU_20_BASE_DLAMI_US_EAST_1 = get_ami_id_boto3(
region_name="us-east-1", ami_name_pattern="Deep Learning Base GPU AMI (Ubuntu 20.04) ????????"
UBUNTU_20_BASE_OSS_DLAMI_US_EAST_1 = get_ami_id_boto3(
region_name="us-east-1",
ami_name_pattern="Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 20.04) ????????",
)
AML2_BASE_DLAMI_US_WEST_2 = get_ami_id_boto3(
region_name="us-west-2", ami_name_pattern="Deep Learning Base AMI (Amazon Linux 2) Version ??.?"
UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_WEST_2 = get_ami_id_boto3(
region_name="us-west-2",
ami_name_pattern="Deep Learning Base Proprietary Nvidia Driver GPU AMI (Ubuntu 20.04) ????????",
)
AML2_BASE_DLAMI_US_EAST_1 = get_ami_id_boto3(
region_name="us-east-1", ami_name_pattern="Deep Learning Base AMI (Amazon Linux 2) Version ??.?"
UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_EAST_1 = get_ami_id_boto3(
region_name="us-east-1",
ami_name_pattern="Deep Learning Base Proprietary Nvidia Driver GPU AMI (Ubuntu 20.04) ????????",
)
AML2_BASE_OSS_DLAMI_US_WEST_2 = get_ami_id_boto3(
region_name="us-west-2",
ami_name_pattern="Deep Learning Base OSS Nvidia Driver AMI (Amazon Linux 2) Version ??.?",
)
AML2_BASE_OSS_DLAMI_US_EAST_1 = get_ami_id_boto3(
region_name="us-east-1",
ami_name_pattern="Deep Learning Base OSS Nvidia Driver AMI (Amazon Linux 2) Version ??.?",
)
AML2_BASE_PROPRIETARY_DLAMI_US_WEST_2 = get_ami_id_boto3(
region_name="us-west-2",
ami_name_pattern="Deep Learning Base Proprietary Nvidia Driver AMI (Amazon Linux 2) Version ??.?",
)
AML2_BASE_PROPRIETARY_DLAMI_US_EAST_1 = get_ami_id_boto3(
region_name="us-east-1",
ami_name_pattern="Deep Learning Base Proprietary Nvidia Driver AMI (Amazon Linux 2) Version ??.?",
)
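Each of the constants above is resolved once at import time through `get_ami_id_boto3` with a name pattern. A hedged sketch of how such a name-pattern lookup can resolve the newest matching image; the helper name `newest_ami_id` is illustrative, and the client is injected so the sketch runs without AWS access:

```python
# Resolve the most recently created AMI whose name matches a wildcard pattern.
# describe_images with Owners/Filters is the standard boto3 EC2 call for this.
def newest_ami_id(ec2_client, name_pattern):
    resp = ec2_client.describe_images(
        Owners=["amazon"],
        Filters=[{"Name": "name", "Values": [name_pattern]}],
    )
    # Sort descending by creation timestamp so index 0 is the newest image.
    images = sorted(resp["Images"], key=lambda i: i["CreationDate"], reverse=True)
    return images[0]["ImageId"] if images else None

class FakeEC2:
    # Stub client returning two images out of order, for demonstration.
    def describe_images(self, **kwargs):
        return {"Images": [
            {"ImageId": "ami-old", "CreationDate": "2024-01-01T00:00:00Z"},
            {"ImageId": "ami-new", "CreationDate": "2024-03-01T00:00:00Z"},
        ]}

assert newest_ami_id(FakeEC2(), "Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 20.04) ????????") == "ami-new"
```

Sorting by `CreationDate` matters because the `????????` wildcard in the patterns above matches every dated release of the AMI, not just the latest one.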
# We use the following DLAMI for MXNet and TensorFlow tests as well, but this is ok since we use custom DLC Graviton containers on top. We just need an ARM base DLAMI.
UL20_CPU_ARM64_US_WEST_2 = get_ami_id_boto3(
@@ -145,8 +165,10 @@
UBUNTU_18_HPU_DLAMI_US_WEST_2 = "ami-03cdcfc91a96a8f92"
UBUNTU_18_HPU_DLAMI_US_EAST_1 = "ami-0d83d7487f322545a"
UL_AMI_LIST = [
UBUNTU_20_BASE_DLAMI_US_WEST_2,
UBUNTU_20_BASE_DLAMI_US_EAST_1,
UBUNTU_20_BASE_OSS_DLAMI_US_WEST_2,
UBUNTU_20_BASE_OSS_DLAMI_US_EAST_1,
UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_WEST_2,
UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_EAST_1,
UBUNTU_18_HPU_DLAMI_US_WEST_2,
UBUNTU_18_HPU_DLAMI_US_EAST_1,
PT_GPU_PY3_BENCHMARK_IMAGENET_AMI_US_EAST_1,
@@ -2370,3 +2392,93 @@ def get_image_spec_from_buildspec(image_uri, dlc_folder_path):
raise ValueError(f"No corresponding entry found for {image_uri} in {buildspec_path}")

return matched_image_spec


def get_instance_type_base_dlami(instance_type, region, linux_dist="UBUNTU_20"):
"""
Return the base DLAMI for a given EC2 instance type and region.
See https://docs.aws.amazon.com/dlami/latest/devguide/important-changes.html

The OSS Nvidia Driver DLAMI supports: g4dn.xlarge, g4dn.2xlarge, g4dn.4xlarge,
g4dn.8xlarge, g4dn.12xlarge, g4dn.16xlarge, g4dn.metal, g5.xlarge, g5.2xlarge,
g5.4xlarge, g5.8xlarge, g5.12xlarge, g5.16xlarge, g5.24xlarge, g5.48xlarge,
p4d.24xlarge, p4de.24xlarge, p5.48xlarge.

The Proprietary Nvidia Driver DLAMI supports: p3.2xlarge, p3.8xlarge,
p3.16xlarge, p3dn.24xlarge, g3s.xlarge, g3.4xlarge, g3.8xlarge, g3.16xlarge.

All other instance types default to the OSS Nvidia Driver DLAMI.
"""

base_proprietary_dlami_instances = [
"p3.2xlarge",
"p3.8xlarge",
"p3.16xlarge",
"p3dn.24xlarge",
"g3s.xlarge",
"g3.4xlarge",
"g3.8xlarge",
"g3.16xlarge",
]

# set defaults
if linux_dist == "AML2":
oss_dlami_us_east_1 = AML2_BASE_OSS_DLAMI_US_EAST_1
oss_dlami_us_west_2 = AML2_BASE_OSS_DLAMI_US_WEST_2
oss_dlami_name_pattern = (
"Deep Learning Base OSS Nvidia Driver AMI (Amazon Linux 2) Version ??.?"
)

proprietary_dlami_us_east_1 = AML2_BASE_PROPRIETARY_DLAMI_US_EAST_1
proprietary_dlami_us_west_2 = AML2_BASE_PROPRIETARY_DLAMI_US_WEST_2
proprietary_dlami_name_pattern = (
"Deep Learning Base Proprietary Nvidia Driver AMI (Amazon Linux 2) Version ??.?"
)
else:
oss_dlami_us_east_1 = UBUNTU_20_BASE_OSS_DLAMI_US_EAST_1
oss_dlami_us_west_2 = UBUNTU_20_BASE_OSS_DLAMI_US_WEST_2
oss_dlami_name_pattern = (
"Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 20.04) ????????"
)

proprietary_dlami_us_east_1 = UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_EAST_1
proprietary_dlami_us_west_2 = UBUNTU_20_BASE_PROPRIETARY_DLAMI_US_WEST_2
proprietary_dlami_name_pattern = (
"Deep Learning Base Proprietary Nvidia Driver GPU AMI (Ubuntu 20.04) ????????"
)

return (
proprietary_dlami_us_east_1
if region == "us-east-1" and instance_type in base_proprietary_dlami_instances
else proprietary_dlami_us_west_2
if region == "us-west-2" and instance_type in base_proprietary_dlami_instances
else get_ami_id_boto3(
region_name=region,
ami_name_pattern=proprietary_dlami_name_pattern,
)
if instance_type in base_proprietary_dlami_instances
else oss_dlami_us_east_1
if region == "us-east-1"
else oss_dlami_us_west_2
if region == "us-west-2"
else get_ami_id_boto3(region_name=region, ami_name_pattern=oss_dlami_name_pattern)
)
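The chained conditional expression at the end of `get_instance_type_base_dlami` encodes a two-step decision: pick the driver flavor from the instance type, then pick a cached constant for us-east-1/us-west-2 or fall back to a fresh lookup elsewhere. A hedged sketch of the same rules with a lookup table; `pick_base_dlami` and its parameters are illustrative names, not the repo's API:

```python
# The same instance list the function above hardcodes for the proprietary driver.
PROPRIETARY_DLAMI_INSTANCES = {
    "p3.2xlarge", "p3.8xlarge", "p3.16xlarge", "p3dn.24xlarge",
    "g3s.xlarge", "g3.4xlarge", "g3.8xlarge", "g3.16xlarge",
}

def pick_base_dlami(instance_type, region, cached_amis, name_patterns, lookup):
    # Step 1: driver flavor is determined solely by the instance type.
    driver = "proprietary" if instance_type in PROPRIETARY_DLAMI_INSTANCES else "oss"
    # Step 2: use the precomputed AMI id for known regions, else a fresh lookup.
    if (driver, region) in cached_amis:
        return cached_amis[(driver, region)]
    return lookup(region, name_patterns[driver])

cached = {
    ("oss", "us-west-2"): "ami-oss-w2", ("oss", "us-east-1"): "ami-oss-e1",
    ("proprietary", "us-west-2"): "ami-prop-w2", ("proprietary", "us-east-1"): "ami-prop-e1",
}
patterns = {"oss": "oss-pattern", "proprietary": "proprietary-pattern"}

assert pick_base_dlami("p3.2xlarge", "us-west-2", cached, patterns, None) == "ami-prop-w2"
assert pick_base_dlami("g5.xlarge", "us-east-1", cached, patterns, None) == "ami-oss-e1"
assert pick_base_dlami("g4dn.xlarge", "eu-west-1", cached, patterns,
                       lambda r, p: f"looked-up:{r}:{p}") == "looked-up:eu-west-1:oss-pattern"
```

The table form makes the precedence explicit and avoids the long `if/else` ternary chain, at the cost of building the key tuple up front.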