
terraform-ibm-rhel-ai

Status: Incubating (not yet consumable)

Deploys GPU instances on IBM Cloud to fine-tune large language models with Red Hat Enterprise Linux AI (RHEL AI) and serve inference through APIs.

Overview

terraform-ibm-rhel-ai
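
At its core, deploying RHEL AI on IBM Cloud means provisioning a VPC virtual server with a GPU profile. As a rough standalone sketch of that shape (the GPU profile name and every referenced ID below are placeholder assumptions for illustration, not values defined by this module):

# Illustrative sketch only: a GPU virtual server on IBM Cloud VPC.
# The profile and all IDs are assumptions; check availability in your region.
variable "vpc_id" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "ssh_key_id" {
  type = string
}

variable "image_id" {
  type = string # ID of a RHEL AI image available in your region
}

resource "ibm_is_instance" "gpu_vsi" {
  name    = "rhel-ai-gpu"
  vpc     = var.vpc_id
  zone    = "us-south-1"
  keys    = [var.ssh_key_id]
  image   = var.image_id
  profile = "gx3-24x120x1l40s" # assumed GPU profile (1x NVIDIA L40S); pick one that fits your workload

  primary_network_interface {
    subnet = var.subnet_id
  }
}

The module is expected to encapsulate details like these behind the inputs shown in the Usage section below.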

Usage

terraform {
  required_version = ">= 1.9.0"
  required_providers {
    ibm = {
      source  = "IBM-Cloud/ibm"
      version = "X.Y.Z"  # Lock into a provider version that satisfies the module constraints
    }
  }
}

locals {
  region = "us-south"
}

provider "ibm" {
  ibmcloud_api_key = "XXXXXXXXXX"  # replace with apikey value
  region           = local.region
}

module "module_template" {
  source            = "terraform-ibm-modules/<replace>/ibm"
  version           = "X.Y.Z" # Replace "X.Y.Z" with a release version to lock into a specific release
  region            = local.region
  name              = "instance-name"
  resource_group_id = "xxXXxxXXxXxXXXXxxXxxxXXXXxXXXXX" # Replace with the actual ID of resource group to use
}
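
To avoid committing the API key, it can instead be supplied as a sensitive variable; a minimal sketch of that standard Terraform pattern:

variable "ibmcloud_api_key" {
  description = "IBM Cloud API key used by the IBM provider"
  type        = string
  sensitive   = true
}

provider "ibm" {
  ibmcloud_api_key = var.ibmcloud_api_key
  region           = local.region
}

With this in place, set the key through the environment (for example, export TF_VAR_ibmcloud_api_key=...) so it never lands in version control.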

Required access policies
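
The exact roles this module requires are not yet documented here. For reference, IAM policies for IBM Cloud resources can themselves be managed in Terraform; a sketch granting an access group the Editor role on VPC Infrastructure Services, assuming that is the service the module targets:

# Hypothetical policy setup; the roles the module actually needs are TBD.
resource "ibm_iam_access_group" "rhel_ai_admins" {
  name = "rhel-ai-admins" # hypothetical access group name
}

resource "ibm_iam_access_group_policy" "vpc_editor" {
  access_group_id = ibm_iam_access_group.rhel_ai_admins.id
  roles           = ["Editor"]

  resources {
    service = "is" # VPC Infrastructure Services
  }
}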

Requirements

No requirements.

Modules

No modules.

Resources

No resources.

Inputs

No inputs.

Outputs

No outputs.

Contributing

You can report issues and request features for this module in GitHub issues in the module repo. See Report an issue or request a feature.

To set up your local development environment, see Local development setup in the project documentation.
