Error fetching EKS binaries from GovCloud #1536

Open
Sandeepsac opened this issue Dec 4, 2023 · 3 comments

Comments


Sandeepsac commented Dec 4, 2023

What happened:

Related to #762
@cartermckinnon
In GovCloud we are hitting an issue when using an instance profile: when connecting to the region-specific S3 endpoint we get a 403 Forbidden.

amazon-ebs: AWS cli missing - using wget to fetch binaries from s3. Note: This won't work for private bucket. 2023-12-04T14:25:23Z: amazon-ebs: --2023-12-04 14:25:23-- https://amazon-eks.s3.us-gov-west-1.amazonaws.com/1.27.1/2023-04-19/bin/linux/amd64/kubelet

EKS version 1.27

(screenshot attached)

donovanrost (Contributor) commented Dec 7, 2023

I also run into this with GovCloud. The last time I looked into it, the binaries are not available in a GovCloud bucket, and you can't reach into the commercial amazon-eks bucket from a GovCloud region. To solve this on my end, I pre-sync the binaries into my own bucket in a GovCloud region using the following script:

#!/usr/bin/env bash
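# Mirrors the EKS worker binaries for a given Kubernetes minor version from the
# public (commercial-partition) source bucket into a bucket you own in a GovCloud
# region, so that image builds running there can fetch them.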

LOCAL_SYNC_DIR="bucket"
REMOTE_SOURCE_BUCKET="amazon-eks"
REMOTE_TARGET_BUCKET="amazon-eks-binaries"
KUBERNETES_MINOR_VERSION="1.28"
OS="linux"
ARCH="amd64"
FORCE_SYNC="false"


POSITIONAL_ARGS=()

while [[ $# -gt 0 ]]; do
  case $1 in
    -sb|--remote-source-bucket)
      REMOTE_SOURCE_BUCKET="$2"
      shift 
      shift 
      ;;
    -tb|--remote-target-bucket)
      REMOTE_TARGET_BUCKET="$2"
      shift 
      shift
      ;;
    -l|--local-sync-dir)
      LOCAL_SYNC_DIR="$2"
      shift
      shift
      ;;
    -m|--kubernetes-minor-version)
      KUBERNETES_MINOR_VERSION="$2"
      shift
      shift
      ;;
    -d|--dry-run)
      DRY_RUN="true"
      shift # past argument
      ;;
    -sp|--source-profile)
      SOURCE_PROFILE="$2"
      shift
      shift
      ;;
    -tp|--target-profile)
      TARGET_PROFILE="$2"
      shift
      shift
      ;;
    -sr|--source-region)
      SOURCE_REGION="$2"
      shift
      shift
      ;;
    -tr|--target-region)
      TARGET_REGION="$2"
      shift
      shift
      ;;
    -f|--force-sync)
      FORCE_SYNC="true"
      shift
      ;;
    -*|--*)
      echo "Unknown option $1"
      exit 1
      ;;
    *)
      POSITIONAL_ARGS+=("$1") # save positional arg
      shift # past argument
      ;;
  esac
done

set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters

# Log to stderr so that functions whose stdout is captured (e.g. should_sync) stay clean.
log() {
  echo "$@" >&2
}

# Echoes 0 if the binaries already exist in the target bucket (no sync needed),
# non-zero if they are missing or a sync is being forced.
should_sync() {
  local path=$1
  local result=1

  if [[ "${FORCE_SYNC}" == "true" ]]
  then
    log "--force-sync was set. Forcing a sync..."
  else
    local cmd="aws s3 ls s3://${REMOTE_TARGET_BUCKET}/${path}/ --profile ${TARGET_PROFILE}"
    log "Checking whether the binaries already exist in the target bucket: ${REMOTE_TARGET_BUCKET}/${path} ..."

    local output; output=$(eval "${cmd}"); result="$?"

    log "${output}"

    if [[ ${result} -eq 0 ]]
    then
      log "Binaries found. No need to sync."
    else
      log "Binaries not found in target bucket. Need to sync."
    fi
  fi

  echo "${result}"
}

# Resolves the newest <kubernetes_version>/<build date> prefix for the given minor
# version in the source bucket (same logic as amazon-eks-ami/hack/latest-binaries.sh).
get_latest_binaries() {

  local minor_version=$1

  local profile_flag=""
  if [ -n "${SOURCE_PROFILE}" ]; then
    profile_flag="--profile ${SOURCE_PROFILE}"
  fi

  local cmd="aws s3api list-objects-v2 ${profile_flag} --bucket ${REMOTE_SOURCE_BUCKET} --prefix ${minor_version} --query 'Contents[*].[Key]' --output text | cut -d'/' -f-2 | sort -Vru | head -n1"

  local latest_binaries
  latest_binaries=$(eval "$cmd")

  if [ "${latest_binaries}" == "None" ]; then
    echo >&2 "No binaries available for minor version: ${minor_version}"
    exit 1
  fi

  local latest_version=$(echo "${latest_binaries}" | cut -d'/' -f1)
  local latest_build_date=$(echo "${latest_binaries}" | cut -d'/' -f2)

  echo "kubernetes_version=${latest_version} kubernetes_build_date=${latest_build_date}"
}

get_subpath() {
  local kubernetes_minor_version="$1"
  local os="$2"
  local arch="$3"
  
  local pairs=$(get_latest_binaries "${kubernetes_minor_version}")

  local kvs=($pairs)
  
  #Sets kubernetes_version and kubernetes_build_date as returned by latest-binaries.sh
  for kv in "${kvs[@]}"
  do
    declare -g "${kv}"
  done

  echo "${kubernetes_version}/${kubernetes_build_date}/bin/${os}/${arch}" 
}

sync() {
  local source=$1
  local target=$2
  local profile=$3
  local region=$4

  local cmd="aws s3 sync ${source} ${target}"

  if [[ -n "${profile}" ]]
  then
    cmd+=" --profile ${profile}"
  fi

  if [[ -n "${region}" ]]
  then
    cmd+=" --region ${region}"
  fi
  
  if [[ -n "${DRY_RUN}" ]]
  then
    cmd+=" --dry-run"
  fi
  echo "Running ${cmd}"
  eval "$cmd"

}

syncToLocal() {
  local subpath=$1

  local remote="${REMOTE_SOURCE_BUCKET}/${subpath}"
  local local_path="${LOCAL_SYNC_DIR}/${subpath}"

  sync "s3://${remote}" "${local_path}" "${SOURCE_PROFILE}" "${SOURCE_REGION}"
}

syncToRemote() {
  local subpath=$1

  local remote="${REMOTE_TARGET_BUCKET}/${subpath}"
  local local_path="${LOCAL_SYNC_DIR}/${subpath}"

  sync "${local_path}" "s3://${remote}" "${TARGET_PROFILE}" "${TARGET_REGION}"
}

main() {
  mkdir -p "${LOCAL_SYNC_DIR}"
  subpath=$(get_subpath "${KUBERNETES_MINOR_VERSION}" "${OS}" "${ARCH}")

  if [[ $(should_sync "${subpath}") -ne 0 ]] 
  then
    syncToLocal "${subpath}"
    syncToRemote "${subpath}"
  fi
}

if [[ -z "${TARGET_PROFILE}" ]]
then
  log "target profile must be set. use -tp or --target-profile"
  exit 1
fi

if [[ -z "${SOURCE_PROFILE}" ]]
then
  log "source profile must be set. use -sp or --source-profile"
  exit 1
fi

main
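
For reference, an invocation looks roughly like this (the script file name and the bucket/profile names below are placeholders for whatever you use; --dry-run lets you preview the copy first):

./sync-eks-binaries.sh \
  --remote-source-bucket amazon-eks \
  --remote-target-bucket amazon-eks-binaries \
  --kubernetes-minor-version 1.28 \
  --source-profile commercial \
  --source-region us-west-2 \
  --target-profile govcloud \
  --target-region us-gov-west-1 \
  --dry-run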

If #1483 is merged, you will then be able to specify which bucket and region to acquire the binaries from during the build.
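
As a rough sketch of what that build invocation could look like once the binaries are mirrored (the binary_bucket_name / binary_bucket_region variable names are assumptions carried over from the existing AL2 Packer template, the bucket name is a placeholder, and the exact plumbing depends on the Makefile version):

make 1.28 binary_bucket_name=amazon-eks-binaries binary_bucket_region=us-gov-west-1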

jtnord added a commit to jtnord/amazon-eks-ami that referenced this issue Feb 8, 2024
Fixes awslabs#1536

If a user only has access to an AWS account in GovCloud, then obtaining
the binary versions from S3 will fail.

As this bucket is public we do not need any authentication, so add
--no-sign-request and hardcode the region (which would never be the
default region for a GovCloud account).

`us-west-2` was chosen as the region to use, as this appears to be the
canonical location. As it is just metadata, using a region on the other
side of the world should not matter much once the overhead of the
Packer build is taken into account.

jtnord commented Feb 8, 2024

I created a simpler patch in #1641 which should (hopefully) be less controversial. Since the bucket is public, we don't need authentication: just hardcode the bucket's region and skip request signing.
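
In practice the metadata lookup then becomes something like this (a sketch of the approach, not the exact diff, and it assumes the host running the lookup has outbound access to the commercial S3 endpoint):

# Query the public amazon-eks bucket anonymously, pinned to us-west-2,
# so no GovCloud credentials or default region are involved.
aws s3api list-objects-v2 \
  --no-sign-request \
  --region us-west-2 \
  --bucket amazon-eks \
  --prefix "1.27" \
  --query 'Contents[*].[Key]' \
  --output text | cut -d'/' -f-2 | sort -Vru | head -n1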

@stefanSpectro

I am also running into this issue, and I would really like to avoid syncing binaries as a workaround.
