diff --git a/index.html b/index.html
index 027981f5..ba58f7d2 100644
--- a/index.html
+++ b/index.html
@@ -21,4 +21,4 @@

OKD.io

The Community Distribution of Kubernetes that powers Red Hat OpenShift


Latest

Help us improve OKD by completing the 2023 OKD user survey

Built around a core of OCI container packaging and Kubernetes container cluster management, OKD is also augmented by application lifecycle management functionality and DevOps tooling. OKD provides a complete open source container application platform.

OKD 4

$ openshift-install create cluster

Tons of amazing new features

Automatic updates not only for OKD but also for the host OS, Kubernetes Operators as first-class citizens, a fancy UI, and much, much more

CodeReady Containers for OKD: local OKD 4 cluster for development

CodeReady Containers brings a minimal OpenShift 4 cluster to your local laptop or desktop computer! Download it here: CodeReady Containers for OKD Images

What is OKD?

OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment

OKD embeds Kubernetes and extends it with security and other integrated concepts

OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams

OKD is also referred to as Origin in GitHub and in the documentation

OKD is a sibling Kubernetes distribution to Red Hat OpenShift. If you are looking for enterprise-level support, or information on partner certification, Red Hat also offers Red Hat OpenShift Container Platform

OKD web UI

OKD Community

We know you've got great ideas for improving OKD and its network of open source projects. So roll up your sleeves and come join us in the community!

Get Started

All contributions are welcome! OKD uses the Apache 2 license and does not require any contributor agreement to submit patches. Please open issues for any bugs or problems you encounter, ask questions in the #openshift-users channel on the Kubernetes Slack, or get involved in the OKD-WG by joining the OKD-WG Google group.

Connect to the community

Join the OKD Working Group

Talk to Us

Standardization through Containerization

Standards are powerful forces in the software industry. They can drive technology forward by bringing together the combined efforts of multiple developers, different communities, and even competing vendors.


Kubernetes

Open source container orchestration and cluster management at scale


Podman

Standardized Linux container packaging for applications and their dependencies


Fedora CoreOS

A container-focused OS that's designed for painless management in large clusters


Operator Framework

An open source project that provides developer and runtime Kubernetes tools, enabling you to accelerate the development of an Operator


cri-o

A lightweight container runtime for Kubernetes


Prometheus

Prometheus is a systems and service monitoring toolkit that collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true


OKD End User Community

There is a large, vibrant end user community

Become a part of something bigger

OpenShift Commons is open to all community participants: users, operators, enterprises, non-profits, educational institutions, partners, and service providers as well as other open source technology initiatives utilized under the hood or to extend the OpenShift platform

  • If you are an OpenShift Online or an OpenShift Container Platform customer or have deployed OKD on premise or on a public cloud
  • If you have contributed to the OKD project and want to connect with your peers and end users
  • If you simply want to stay up-to-date on the roadmap and best practices for using, deploying and operating OpenShift

... then OpenShift Commons is the right place for you

Ready to join

diff --git a/search/search_index.json b/search/search_index.json
index 55fc6686..1cbdceed 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"OKD.io","text":"

Latest

Announcing OKD Streams: Building the Next Generation of OKD together - see blog for details

Built around a core of OCI container packaging and Kubernetes container cluster management, OKD is also augmented by application lifecycle management functionality and DevOps tooling. OKD provides a complete open source container application platform.

"},{"location":"#okd-4","title":"OKD 4","text":"

$ openshift-install create cluster

Tons of amazing new features

Automatic updates not only for OKD but also for the host OS, Kubernetes Operators as first-class citizens, a fancy UI, and much, much more

CodeReady Containers for OKD: local OKD 4 cluster for development

CodeReady Containers brings a minimal OpenShift 4 cluster to your local laptop or desktop computer! Download it here: CodeReady Containers for OKD Images

"},{"location":"#what-is-okd","title":"What is OKD?","text":"

OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment

OKD embeds Kubernetes and extends it with security and other integrated concepts

OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams

OKD is also referred to as Origin in GitHub and in the documentation

OKD is a sibling Kubernetes distribution to Red Hat OpenShift. If you are looking for enterprise-level support, or information on partner certification, Red Hat also offers Red Hat OpenShift Container Platform

"},{"location":"#okd-community","title":"OKD Community","text":"

We know you've got great ideas for improving OKD and its network of open source projects. So roll up your sleeves and come join us in the community!

"},{"location":"#get-started","title":"Get Started","text":"

All contributions are welcome! OKD uses the Apache 2 license and does not require any contributor agreement to submit patches. Please open issues for any bugs or problems you encounter, ask questions in the #openshift-users channel on the Kubernetes Slack, or get involved in the OKD-WG by joining the OKD-WG Google group.

"},{"location":"#connect-to-the-community","title":"Connect to the community","text":"

Join the OKD Working Group

"},{"location":"#talk-to-us","title":"Talk to Us","text":""},{"location":"#standardization-through-containerization","title":"Standardization through Containerization","text":"

Standards are powerful forces in the software industry. They can drive technology forward by bringing together the combined efforts of multiple developers, different communities, and even competing vendors.

Open source container orchestration and cluster management at scale

Standardized Linux container packaging for applications and their dependencies

A container-focused OS that's designed for painless management in large clusters

An open source project that provides developer and runtime Kubernetes tools, enabling you to accelerate the development of an Operator

A lightweight container runtime for Kubernetes

Prometheus is a systems and service monitoring toolkit that collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true

"},{"location":"#okd-end-user-community","title":"OKD End User Community","text":"

There is a large, vibrant end user community

"},{"location":"#become-a-part-of-something-bigger","title":"Become a part of something bigger","text":"

OpenShift Commons is open to all community participants: users, operators, enterprises, non-profits, educational institutions, partners, and service providers as well as other open source technology initiatives utilized under the hood or to extend the OpenShift platform

... then OpenShift Commons is the right place for you

"},{"location":"about/","title":"About OKD","text":"

OKD is the community distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is also referred to as Origin in GitHub and in the documentation. OKD makes launching Kubernetes on any cloud or bare metal a snap, simplifies running and updating clusters, and provides all of the tools to make your containerized-applications succeed.

"},{"location":"about/#features","title":"Features","text":""},{"location":"about/#what-can-i-run-on-okd","title":"What can I run on OKD?","text":"

OKD is designed to run any Kubernetes workload. It also assists in building and developing containerized applications through the developer console.

For an easier experience running your source code, Source-to-Image (S2I) allows developers to simply provide an application source repository containing code to build and run. It works by combining an existing S2I-enabled container image with application source to produce a new runnable image for your application.
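For example, a source build can be started with oc new-app, which combines an S2I builder image with a Git repository (a minimal sketch; the builder tag, sample repository, and application name are illustrative):

# Build and deploy an application from source using the python S2I builder
# (image tag and repository are examples only)
oc new-app python:3.9~https://github.com/sclorg/django-ex.git --name=my-app

# Expose the resulting service outside the cluster
oc expose service/my-app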

You can see the full list of Source-to-Image builder images and it's straightforward to create your own. Some of our available images include:

"},{"location":"about/#what-sorts-of-security-controls-does-openshift-provide-for-containers","title":"What sorts of security controls does OpenShift provide for containers?","text":"

OKD runs with the following security policy by default:

Many containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:

If you are running your own cluster and want to run a container as root, you can grant that permission to the containers in your current project with the following command:

# Gives the default service account in the current project access to run as UID 0 (root)\noc adm policy add-scc-to-user anyuid -z default\n

See the security documentation for more on confining applications.
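To see which security context constraint (SCC) was actually applied to a running workload, you can check the openshift.io/scc annotation that admission adds to each pod (a quick sketch; the pod name is illustrative):

# Show the SCC that was assigned to a pod (pod name is an example)
oc get pod my-app-1-abcde -o yaml | grep 'openshift.io/scc'

# List the SCCs available in the cluster
oc get scc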

"},{"location":"blog/","title":"okd.io Blog","text":"

We look forward to sharing news and useful information about OKD in this blog.

You are also invited to participate: share your experiences and tips with the community by creating your own blog articles for okd.io.

"},{"location":"blog/#blogs","title":"Blogs","text":""},{"location":"blog/#2023","title":"2023","text":"Date Title 2023-07-18 State of affairs in OKD CI/CD"},{"location":"blog/#2022","title":"2022","text":"Date Title 2022-12-12 Building the OKD payload 2022-10-25 OKD Streams: Building the Next Generation of OKD together 2022-10-20 OKD at KubeCon + CloudNativeCon North America 2022 2022-09-09 An Introduction to Debugging OKD Release Artifacts"},{"location":"blog/#2021","title":"2021","text":"Date Title 2021-05-06 OKD Working Group Office Hours at KubeconEU on OpenShift.tv 2021-05-04 Rohde & Schwarz's Journey to OpenShift 4 From OKD to Azure Red Hat OpenShift 2021-03-22 Recap OKD Testing and Deployment Workshop - Videos and Additional Resources 2021-04-19 Please avoid using FCOS 33.20210301.3.1 for new OKD installs 2021-03-16 Save The Date! OKD Testing and Deployment Workshop (March 20) Register Now! 2021-03-07 okd.io now has a blog"},{"location":"charter/","title":"OKD Working Group Charter","text":"

v1.1

2019-09-21

"},{"location":"charter/#introduction","title":"Introduction","text":"

The charter describes the operations of the OKD Working Group (OKD WG).

OKD is the Origin Community Distribution of Kubernetes that is upstream to Red Hat\u2019s OpenShift Container Platform. It is built around a core of OCI containers and Kubernetes container cluster management. OKD is augmented by application lifecycle management functionality and DevOps tooling.

The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group will also include the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group will produce supporting materials and best practices for end-users and will provide guidance and coordination for CNCF projects working within the SIG's scope.

The OKD Working Group is independent of both Fedora and the Cloud Native Computing Foundation (CNCF). The OKD Working Group is a community sponsored by Red Hat.

"},{"location":"charter/#mission","title":"Mission","text":"

The mission of the OKD Working Group is:

"},{"location":"charter/#areas-considered-in-scope","title":"Areas considered in Scope","text":"

The OKD Working Group focuses on the following end-user related topics of the lifecycle of cloud-native applications:

The Working Group will work on developing best practices, fostering collaboration between related projects, and working on improving tool interoperability. Additionally, the Working Group will propose new initiatives and projects when capability gaps in the current ecosystem are defined.

The following, non-exhaustive, sample list of activities and deliverables are in-scope for the Working Group:

"},{"location":"charter/#areas-considered-out-of-scope","title":"Areas considered out of Scope","text":"

Anything not explicitly considered in the scope above. Examples include:

"},{"location":"charter/#governance","title":"Governance","text":""},{"location":"charter/#operations","title":"Operations","text":"

The OKD Working Group is run and managed by the following chairs:

Note

The referenced names and chair positions will be edited in-place as chairs are added, removed, or replaced. See the roles of chairs section for more information.

A dedicated git repository will be the authoritative archive for membership list, code, documentation, and decisions made. The repository, along with this charter, will be hosted at github.com/openshift/community.

The mailing list at groups.google.com/forum/#!forum/okd-wg will be used as a place to call for and publish group decisions, and to hold discussions in general.

"},{"location":"charter/#working-group-membership","title":"Working Group Membership","text":"

All active members of the Working Group are listed in the MEMBERS.md file with their name.

New members can apply for membership by creating an Issue or Pull Request on the repository on GitHub indicating their desire to join.

Membership can be surrendered by creating an Issue stating this desire, or by creating a Pull Request to remove one's own name from the members list.

"},{"location":"charter/#decision-process","title":"Decision Process","text":"

This group will seek consensus decisions. After public discussion and consideration of different opinions, the Chair and/or Co-Chair will record a decision and summarize any objections.

All WG members who have joined the GitHub group at least 21 days prior to the vote are eligible to vote. This is to prevent people from rallying outside supporters for their desired outcome.

When the group comes to a decision in a meeting, the decision is tentative. Any group participant may object to a decision reached at a meeting within 7 days of publication of the decision on the GitHub Issue and/or mailing list. That decision must then be confirmed on the GitHub Issue via a Call for Agreement.

The Call for Agreement, when a decision is required, will be posted as a GitHub Issue or Pull Request and must be announced on the mailing list. It is an instrument to reach a time-bounded lazy consensus approval and requires a voting period of no less than 7 days to be defined (including a specific date and time in UTC).

Each Call for Agreement will be considered independently, except for elections of Chairs.

The Chairs will submit all Calls for Agreement that are not vague, unprofessional, off-topic, or lacking sufficient detail to determine what is being agreed.

In the event that a Call for Agreement falls under the delegated authority or within a chartered Sub-Working Group, the Call for Agreement must be passed through the Sub-Working Group before receiving Working Group consideration.

A Call for Agreement may require quorum of Chairs under the circumstances outlined in the Charter and Governing Documents section.

A Call for Agreement is mandatory when:

Once the Call for Agreement voting period has elapsed, all votes are counted, with at least a 51% majority of votes needed for consensus. A Chair will then declare the agreement \u201caccepted\u201d or \u201cdeclined\u201d, ending the Call for Agreement.

Once rejected, a Call for Agreement must be revised before re-submission for a subsequent vote. All rejected Calls for Agreement will be reported to the Working Group as rejected.

"},{"location":"charter/#charter-and-governing-documents","title":"Charter and Governing Documents","text":"

The Working Group may, from time to time, adopt or amend its Governing Documents and Charter, using a modified Call for Agreement process:

For initial approval of this Charter via Call for Agreement all members are eligible to vote, even those that have been a member for less than 21 days. This Charter will be approved if there is a majority of positive votes.

"},{"location":"charter/#organizational-roles","title":"Organizational Roles","text":""},{"location":"charter/#role-of-chairs","title":"Role of Chairs","text":"

The primary role of Chairs is to run operations and the governance of the group. The Chairs are responsible for:

The terms for founding Chairs start on the approval of this charter.

When no candidate has submitted their name for consideration, the current Chairs may appoint an acting Chair until a candidate comes forward.

Chairs must be active members. Any inactivity, disability, or ineligibility results in immediate removal.

Chairs may be removed by petition to the Working Group through the Call for Agreement process outlined above.

Additional Chairs may be added so long as the existing number of Chairs is odd. These Chairs are added using a Call for Agreement. Extra Chairs enjoy the same rights, responsibilities, and obligations of a Chartered Chair. Upon vacancy of an Extra Chair, it may be filled by appointment by the remaining Chairs, or a majority vote of the Working Group until the term naturally expires.

In the event that an even number of Chairs exists and a voting situation arises, the Chairs will randomly select one Chair to abstain.

"},{"location":"charter/#role-of-sub-working-groups","title":"Role of Sub-Working Groups","text":"

Each Sub-Working Group (SWG) must have a Chair working as an active sponsor. Under the mandate of the Working Group, each SWG will have the autonomy to establish their own charter, membership rules, meeting times, and management processes. Each SWG will also have the authority to make in-scope decisions as delegated by the Working Group.

SWGs are required to submit their agreed Charter to the Working Group for information and archival. The Chairs can petition for dissolution of an inactive or hostile SWG by Call for Agreement. Once dissolved the SWG\u2019s delegated Charter and outstanding authority to make decisions is immediately revoked. The Chairs may then take any required action to restrict access to Working Group Resources.

No SWG will have authority with regards to this Charter or other OKD Working Group Governing Documents.

"},{"location":"communications/","title":"OKD Working Group Communications","text":"

The working group issues regular communications through several different methods. There are also a few ways to contact the working group depending on the type of communication needed. This page will help you navigate the various communication channels that the working group utilizes.

"},{"location":"communications/#e-mail","title":"E-Mail","text":"

The working group maintains a mailing list as well as several email addresses.

Mailing List

okd-wg mailing list

The purpose of this list is to discuss, give guidance & enable collaboration on current development efforts for OKD4, Fedora CoreOS (FCOS) & Kubernetes. Please note that the focus of this list is the active development of OKD and the processes of this community; it is not intended as a forum for reporting bugs or requesting help with operating OKD.

Reporting Addresses

The working group uses several e-mail addresses to receive communications from the community based on the intent of the message.

chairs@okd.io

The chairs address is for messages that are related to the working group and its processes. It is intended for communications that will go directly to the working group chairs and not the wider community.

security@okd.io

The security address is intended for any reporting of sensitive or confidential security related bugs and findings about OKD.

info@okd.io

The info address is for requesting general information about the working group and its processes.

"},{"location":"communications/#social-media","title":"Social Media","text":"

The working group uses social media to broadcast updates about new releases, working group meetings, and community events.

Twitter

@okd_io

"},{"location":"communications/#slack","title":"Slack","text":"

The working group maintains a presence on the Kubernetes community Slack instance in the #openshift-users channel. This channel is a good place to come for OKD-specific help with operations and usage.

"},{"location":"communications/#github","title":"GitHub","text":"

The working group maintains several repositories on GitHub in the OKD-Project organization. These repositories contain information and discussions about OKD and the working group's future plans.

okd-project/okd discussions

The okd repository discussions board is a good place to visit for researching or raising specific operational issues with OKD.

okd-project/planning project board

The planning repository contains a kanban board which records the current state of the working group and its related projects.

"},{"location":"community/","title":"End User Community","text":"

OKD has an active community of end-users with many different use-cases, from enterprises and academic institutions to home hobbyists. In addition to the end-user community there is a smaller community of volunteers that contribute to the OKD project by helping other users resolve issues or by participating in one of the OKD working groups to enhance the OKD project.

"},{"location":"community/#code-of-conduct","title":"Code of Conduct","text":"

We want the OKD community to be a welcoming community, where everyone is treated with respect, so the link to the code of conduct should be made visible at all events

Red Hat supports the Inclusive Naming Initiative and the OKD project follows the guidelines and recommendations from that project. All contributions to OKD must also follow their guidelines

"},{"location":"community/#end-user-community_1","title":"End-User community","text":"

The community of OKD users is a self-supporting community. There is no official support for OKD; all help is community provided.

The Help section provides details on how to get help for any issues you may be experiencing.

We encourage all users to participate in discussions and to help fellow users where they can.

"},{"location":"community/#contributing-to-okd","title":"Contributing to OKD","text":"

The OKD project has a charter, setting out how the project is run.

If you want to join the team of volunteers working on the OKD project then details of how to become a contributor are set out here.

"},{"location":"conduct/","title":"OKD Community Code of Conduct","text":"

Every community can be strengthened by a diverse variety of viewpoints, insights, opinions, skill sets, and skill levels. However, with diversity comes the potential for disagreement and miscommunication. The purpose of this Code of Conduct is to ensure that disagreements and differences of opinion are conducted respectfully and on their own merits, without personal attacks or other behavior that might create an unsafe or unwelcoming environment.

These policies are not designed to be a comprehensive set of things you cannot do. We ask that you treat your fellow community members with respect and courtesy. This Code of Conduct should be followed in spirit as much as in letter and is not exhaustive.

All OKD events and community members are governed by this Code of Conduct and anti-harassment policy. We expect working group chairs and organizers to enforce these guidelines throughout all events, and we expect attendees, speakers, sponsors, and volunteers to help ensure a safe environment for our whole community.

For the purposes of this Code of Conduct:

"},{"location":"conduct/#anti-harassment-policy","title":"Anti-harassment policy","text":"

Harassment includes (but is not limited to) the following behaviors:

Community members asked to stop any harassing behavior are expected to comply immediately. In particular, community members should not use sexual images, activities, or other material. Community members should not use sexual attire or otherwise create a sexualized environment at community events.

In addition to the behaviors outlined above, continuing to behave in a certain way after you have been asked to stop also constitutes harassment, even if that behavior is not specifically outlined in this policy. It is considerate and respectful to stop doing something after you have been asked to stop, and all community members are expected to comply with such requests immediately.

"},{"location":"conduct/#policy-violations","title":"Policy violations","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting codeofconduct@okd.io.

If a community member engages in harassing behavior, organizers or working group chairs may take any action deemed appropriate. These actions may include but are not limited to warning the offender and expelling the offender from an event. The OKD working group leaders might determine that the offender should be barred from participating in the community.

Event organizers and working group leaders will be happy to help community members contact security or local law enforcement, provide escorts to an alternate location, or otherwise assist those experiencing harassment to feel safe for the duration of an event. We value the safety and well-being of our community members and want everyone to feel welcome at our events, both online and in-person.

We expect all community members to follow these policies during all of our events.

The OKD Community Code of Conduct is licensed under the Creative Commons Attribution-Share Alike 3.0 license. Our Code of Conduct was adapted from Codes of Conduct of other open source projects, including:

"},{"location":"contributor/","title":"Contributor Community","text":"

OKD is built from many different open source projects - Fedora CoreOS, the CentOS Stream and UBI RPM ecosystems, cri-o, Kubernetes, and many different extensions to Kubernetes. The openshift organization on GitHub holds active development of components on top of Kubernetes and references projects built elsewhere. Generally, you'll want to find the component that interests you and review their README.md for the processes for contributing.

Community process and questions can be raised in our community repo and issues opened in this repository (Bugzilla locations coming soon).

Our unified continuous integration system tests pull requests to the ecosystem and core images, then builds and promotes them after merge. To see the latest development releases of OKD visit our continuous release page. These releases are built continuously and expire after a few days. Long lived versions are pinned and then listed on our stable release page.

All contributions are welcome - OKD uses the Apache 2 license and does not require any contributor agreement to submit patches. Please open issues for any bugs or problems you encounter, ask questions in the OKD discussion forum, or get involved in the Kubernetes project at the container runtime layer.

"},{"location":"contributor/#becoming-a-contributor","title":"Becoming a contributor","text":"

The easiest way to get involved in the community is to:

The OKD project has a charter, setting out how the project is run.

"},{"location":"contributor/#working-groups","title":"Working Groups","text":"

The project is managed by a bi-weekly working group video call:

The main working group is where all the major project decisions are made, but when a specific work item needs to be completed a sub-group may be formed, so a focused set of volunteers can work on a specific area.

"},{"location":"crc/","title":"CodeReady Containers for OKD","text":"

CodeReady Containers brings a minimal, single node OKD 4 cluster to your local computer. This cluster provides a minimal environment for development and testing purposes. CodeReady Containers is mainly targeted at running on developers' laptops and desktops. Note that arm64 OKD payload is not yet available.

"},{"location":"crc/#download-codeready-containers-for-okd","title":"Download CodeReady Containers for OKD","text":"

Run a developer instance of OKD4 on your local workstation with CodeReady Containers built for OKD - No Pull Secret Required! The Getting Started Guide explains how to install and use CodeReady Containers.

You can fetch crc binaries without a Red Hat subscription here

$ crc config set preset okd\nChanges to configuration property 'preset' are only applied when the CRC instance is created.\nIf you already have a running CRC instance with different preset, then for this configuration change to take effect, delete the CRC instance with 'crc delete', setup it with `crc setup` and start it with 'crc start'.\n\n$ crc config view\n- consent-telemetry                     : yes\n- preset                                : okd\n
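After switching the preset, a typical first run looks something like the following (a sketch; it assumes the crc binary is already on your PATH):

# One-time host setup (virtualization, networking, bundle download)
crc setup

# Create and start the single-node OKD cluster
crc start

# Put the bundled oc client on your PATH for this shell
eval $(crc oc-env)

# Verify the cluster is up
oc get nodes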

If you encounter any problems, please open a discussion item in the OKD GitHub Community!

"},{"location":"crc/#crc-working-group","title":"CRC Working group","text":"

There is a working group looking at automating the OKD CRC build process. If you want technical details on how to build OKD CRC see the working group section of this site

"},{"location":"docs/","title":"Documentation","text":"

There are 2 primary sources of information for OKD:

"},{"location":"docs/#updates-and-issues","title":"Updates and Issues","text":"

If you encounter an issue with the documentation or have an idea to improve the content or add new content then please follow the directions below to learn how you can get changes made.

The source for the documentation is managed in GitHub. There are different processes for requesting changes in the community and product documentation:

"},{"location":"docs/#community-documentation","title":"Community documentation","text":"

The OKD Documentation subgroup is responsible for the community documentation. The process for making changes is set out in the working group section of the documentation

"},{"location":"docs/#product-documentation","title":"Product documentation","text":"

The OKD docs are built off the openshift/openshift-docs repo. If you notice any problems in the OKD docs that need to be addressed, you can either create a pull request with those changes against the openshift/openshift-docs repo or create an issue to suggest the changes.

Among the changes you could suggest are:

If you create an issue, please do the following:

"},{"location":"faq/","title":"Frequently Asked Questions (FAQ)","text":"

Below are answers to common questions regarding OKD installation and administration. If you have a suggested question or a suggested improvement to an answer, please feel free to reach out.

"},{"location":"faq/#what-are-the-relations-with-ocp-project-is-okd4-an-upstream-of-ocp","title":"What are the relations with OCP project? Is OKD4 an upstream of OCP?","text":"

During the 3.x releases, OKD was used as an upstream project for OpenShift Container Platform. OKD could be installed on Fedora/CentOS/RHEL and used CentOS-based images to install the cluster. OCP, however, could be installed only on RHEL and its images were rebuilt to be RHEL-based.

The Universal Base Image project has enabled us to run RHEL-based images on any platform, so the full image rebuild is no longer necessary, allowing the OKD4 project to reuse most images from OCP4. There is another critical part of OCP - Red Hat Enterprise Linux CoreOS. Although RHCOS is an open source project (much like RHEL8) it's not a community-driven project. As a result, the OKD working group made a decision to use Fedora CoreOS - an open source and community-driven project - as a base for OKD4. This decision allows end-users to modify all parts of the cluster using prepared instructions.

It should be noted that OKD4 is automatically built from the OCP4 CI stream, so most of the tests happen in OCP CI and are mirrored to OKD. As a result, OKD4 CI doesn't have to run a lot of tests to ensure the release is valid.

These relationships are more complex than "upstream/downstream", so we use "sibling distributions" to describe them.

"},{"location":"faq/#how-stable-is-okd4","title":"How stable is OKD4?","text":"

OKD4 builds are automatically tested by the release-controller. A release is rejected if installation, an upgrade from the previous version, or a conformance test fails. Test results determine the upgrade graph, so for instance, if upgrade tests passed for the beta5->rc edge, clusters on beta5 can be directly updated to the rc release, bypassing beta6.

The OKD stable version is released bi-weekly, following the Fedora CoreOS schedule; client tools are uploaded to GitHub and images are mirrored to Quay.

"},{"location":"faq/#can-i-run-a-single-node-cluster","title":"Can I run a single node cluster?","text":"

Currently, single-node cluster installations cannot be deployed directly by the 4.7 installer. This is a known issue. Single-node cluster installations do work with the 4.8 nightly installer builds.

As an alternative, if OKD version 4.7 is needed, you may have luck with Charro Gruver's OKD 4 Single Node Cluster instructions. You can also use Code Ready Containers (CRC) to run a single-node cluster on your desktop.

"},{"location":"faq/#what-to-do-in-case-of-errors","title":"What to do in case of errors?","text":"

If you experience problems during installation you must collect the bootstrap log bundle; see the instructions

If you experience problems post installation, collect data of your cluster with:

oc adm must-gather\n

See documentation for more information.

Upload it to a file host and send the link to the developers (Slack channel, ...)
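A minimal sketch of gathering the data into a known directory and packaging it for upload (the directory and archive names are illustrative):

# Write the gathered data to a local directory instead of a temporary one
oc adm must-gather --dest-dir=./must-gather

# Compress it before uploading
tar czf must-gather.tar.gz must-gather/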

During installation an SSH key is required. It can be used later to SSH onto the nodes - ssh core@<node ip>
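For example (a sketch; the key path, node IP, and node name are illustrative):

# SSH to a node using the key provided at install time
ssh -i ~/.ssh/id_ed25519 core@192.168.1.10

# Alternatively, open a debug shell on a node through the API;
# run 'chroot /host' inside the debug pod to use the node's binaries
oc debug node/worker-0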

"},{"location":"faq/#where-do-i-seek-support","title":"Where do I seek support?","text":"

OKD is a community-supported distribution; Red Hat does not provide commercial support for OKD installations.

Contact us on Slack:

See https://openshift.tips/ for useful OpenShift tips

"},{"location":"faq/#where-can-i-find-upgrades","title":"Where can I find upgrades?","text":"

https://amd64.origin.releases.ci.openshift.org/

Warning

Nightly builds (from 4.x.0-0.okd) are pruned every 72 hours.

If your cluster uses these images, consider mirroring these files to a local registry.

Builds from the stable-4 stream are not removed.
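A sketch of mirroring a release into a local registry with oc adm release mirror (the registry host is illustrative; the release tag is only an example):

# Mirror a specific OKD release to a local registry (example values)
oc adm release mirror \
  --from=quay.io/openshift/okd:4.13.0-0.okd-2023-06-24-145750 \
  --to=registry.example.com:5000/okd \
  --to-release-image=registry.example.com:5000/okd:4.13.0-0.okd-2023-06-24-145750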

"},{"location":"faq/#how-can-i-upgrade-my-cluster-to-a-new-version","title":"How can I upgrade my cluster to a new version?","text":"

Find a version for which a tested upgrade path is available from your version on

https://amd64.origin.releases.ci.openshift.org/

Upgrade options:

Preferred ways:

oc adm upgrade\n
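To move to a specific version that is present in the update graph, the target can be named explicitly (a sketch; the version shown is illustrative):

# List the updates currently available to this cluster
oc adm upgrade

# Upgrade to a specific available version (example version)
oc adm upgrade --to=4.13.0-0.okd-2023-06-24-145750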

Last resort:

Upgrade to a certain version (will ignore the update graph!)

oc adm upgrade --force --allow-explicit-upgrade=true --to-image=registry.ci.openshift.org/origin/release:4.4.0-0.okd-2020-03-16-105308\n

This will take a while; the upgrade may take several hours. Throughout the upgrade, the Kubernetes API remains accessible and user workloads are evicted and rescheduled as nodes are updated.

"},{"location":"faq/#interesting-commands-while-an-upgrade-runs","title":"Interesting commands while an upgrade runs","text":"

Check overall upgrade status:

oc get clusterversion\n

Check the status of your cluster operators:

oc get co\n

Check the status of your nodes (cluster upgrades may include base OS updates):

oc get nodes\n
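These checks can be combined into a single periodically refreshing view while the upgrade runs (a convenience sketch):

# Refresh cluster version, operator, and node status every 30 seconds
watch -n 30 "oc get clusterversion; oc get co; oc get nodes"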
"},{"location":"faq/#how-can-i-find-out-whats-inside-of-a-ci-release-and-which-commit-id-each-component-has","title":"How can I find out what's inside of a (CI) release and which commit id each component has?","text":"

This one is very helpful if you want to know if a certain commit has landed in your current version:

oc adm release info registry.ci.openshift.org/origin/release:4.4  --commit-urls\n
Name:      4.4.0-0.okd-2020-04-10-020541\nDigest:    sha256:79b82f237aad0c38b5cdaf386ce893ff86060a476a39a067b5178bb6451e713c\nCreated:   2020-04-10T02:14:15Z\nOS/Arch:   linux/amd64\nManifests: 413\n\nPull From: registry.ci.openshift.org/origin/release@sha256:79b82f237aad0c38b5cdaf386ce893ff86060a476a39a067b5178bb6451e713c\n\nRelease Metadata:\n  Version:  4.4.0-0.okd-2020-04-10-020541\n  Upgrades: <none>\n\nComponent Versions:\n  kubernetes 1.17.1\n  machine-os 31.20200407.20 Fedora CoreOS\n\nImages:\n  NAME                                           URL\n  aws-machine-controllers                        https://github.com/openshift/cluster-api-provider-aws/commit/5fa82204468e71b44f65a5f24e2675dbfa0f5c29\n  azure-machine-controllers                      https://github.com/openshift/cluster-api-provider-azure/commit/832a43a30d7f00cd6774c1f5cd117aeebbe1b730\n  baremetal-installer                            https://github.com/openshift/installer/commit/a58f24b0df7e3699b39d4ae1d23c45672706934d\n  baremetal-machine-controllers\n  baremetal-operator\n  baremetal-runtimecfg                           https://github.com/openshift/baremetal-runtimecfg/commit/09850a724d9290ffb05db3dd7f4f4c748b982759\n  branding                                       https://github.com/openshift/origin-branding/commit/068fa1eac9f31ffe13089dd3de2ec49c153b2a14\n  cli                                            https://github.com/openshift/oc/commit/2576e482bf003e34e67ba3d69edcf5d411cfd6f3\n  cli-artifacts                                  https://github.com/openshift/oc/commit/2576e482bf003e34e67ba3d69edcf5d411cfd6f3\n  cloud-credential-operator                      https://github.com/openshift/cloud-credential-operator/commit/446680ed10ac938e11626409acb0c076edd3fd52\n  ...\n
"},{"location":"faq/#how-can-i-find-out-the-version-of-a-particular-package-within-an-okd-release","title":"How can I find out the version of a particular package within an OKD release?","text":"
# Download and enter the machine-os-content container.\npodman run --rm -ti `oc adm release info quay.io/openshift/okd:4.13.0-0.okd-2023-06-24-145750 --image-for=machine-os-content`\n\n# Query the particular rpm. For example, to get the version of the cri-o package in the release, use the following:\nrpm -qa cri-o\n
"},{"location":"faq/#how-to-use-the-official-installation-container","title":"How to use the official installation container?","text":"

The official installer container is part of every release.

# Find out the installer image.\noc adm release info quay.io/openshift/okd:4.7.0-0.okd-2021-04-24-103438 --image-for=installer\n\n# Example output\n# quay.io/openshift/okd-content@sha256:521cd3ac7d826749a085418f753f1f909579e1aedfda704dca939c5ea7e5b105\n\n# Run the container via Podman or Docker to perform tasks. e.g. create ignition configurations\ndocker run -v $(pwd):/output -ti quay.io/openshift/okd-content@sha256:521cd3ac7d826749a085418f753f1f909579e1aedfda704dca939c5ea7e5b105 create ignition-configs\n
"},{"location":"help/","title":"Help","text":"

There is no official product support for OKD as it is a community project. All assistance is provided by volunteers from the user community.

"},{"location":"help/#how-to-ask-for-help","title":"How to ask for help","text":"

For questions or feedback, start a discussion on the discussion forum or reach us on Kubernetes Slack on #openshift-users

"},{"location":"help/#community-etiquette","title":"Community Etiquette","text":"

As all assistance is provided by the community, you are reminded of the code-of-conduct when asking a question or replying to a question.

Before starting a new discussion topic, do a search on the discussion forum to see if anyone else has already raised the same issue - then contribute to the existing discussion topic rather than starting a new topic.

When seeking help you should provide all the information a community volunteer may need to assist you. The easier it is for a volunteer to understand your issue, the more likely they are to provide assistance.

This information should include:

Please do not tag people you see answering other questions to try to get a faster answer as it is anti-social. We have an active community and it is up to individuals which questions they feel they want to respond to.

"},{"location":"help/#raising-bugs","title":"Raising bugs","text":"

We are trying to do all the diagnostic work in the discussion forum rather than using issues for the OKD project. If you are certain you have discovered a bug, then please raise an issue, but if you are not sure if you have found a bug then use the discussion forum to discuss it. If it turns out to be a bug, then the discussion topic can be converted to an issue.

"},{"location":"installation/","title":"Install OKD","text":""},{"location":"installation/#plan-your-installation","title":"Plan your installation","text":"

OKD supports two types of cluster install:

IPI is a largely automated install process, where the installer is responsible for setting up the infrastructure, whereas UPI requires you to set up the base infrastructure. You can find further details in the documentation

OKD supports installation on bare metal hardware, a number of virtualization platforms and a number of cloud platforms, so you need to decide where you want to install OKD and ensure that your environment has sufficient resources for the cluster to operate. The documentation has more information to help you plan your installation.

If you want to install on a typical developer workstation, then Code-Ready Containers may be a better option, as that is a cut-down installation designed to run on limited compute and memory resources.

You can find examples of OKD installations, set up by OKD community members, in the guides section.

"},{"location":"installation/#getting-started","title":"Getting Started","text":"

To obtain the openshift installer and client, visit releases for stable versions or https://amd64.origin.releases.ci.openshift.org/ for nightlies.

You can verify the downloads using:

curl https://www.okd.io/vrutkovs.pub | gpg --import\n

Output

    gpg: key 3D54B6723B20C69F: public key \"Vadim Rutkovsky <vadim@vrutkovs.eu>\" imported\n    gpg: Total number processed: 1\n    gpg:               imported: 1\n
gpg --verify sha256sum.txt.asc sha256sum.txt\n

Output

gpg: Signature made Mon May 25 18:48:22 2020 CEST\ngpg:                using RSA key DB861D01D4D1138A993ADC1A3D54B6723B20C69F\ngpg: Good signature from \"Vadim Rutkovsky <vadim@vrutkovs.eu>\" [ultimate]\ngpg:                 aka \"Vadim Rutkovsky <vrutkovs@redhat.com>\" [ultimate]\ngpg: WARNING: This key is not certified with a trusted signature!\ngpg:          There is no indication that the signature belongs to the owner.\nPrimary key fingerprint: DB86 1D01 D4D1 138A 993A  DC1A 3D54 B672 3B20 C69F\n
sha256sum -c sha256sum.txt\n

Output

release.txt: OK\nopenshift-client-linux-4.4.0-0.okd-2020-05-23-055148-beta5.tar.gz: OK\nopenshift-client-mac-4.4.0-0.okd-2020-05-23-055148-beta5.tar.gz: OK\nopenshift-client-windows-4.4.0-0.okd-2020-05-23-055148-beta5.zip: OK\nopenshift-install-linux-4.4.0-0.okd-2020-05-23-055148-beta5.tar.gz: OK\nopenshift-install-mac-4.4.0-0.okd-2020-05-23-055148-beta5.tar.gz: OK\n

Please note that each nightly release is pruned after 72 hours. If the nightly that you installed was pruned, the cluster may be unable to pull necessary images and may show errors for various functionality (including updates).

Alternatively, if you have the openshift client oc already installed, you can use it to download and extract the openshift installer and client from our container image:

oc adm release extract --tools quay.io/openshift/okd:4.5.0-0.okd-2020-07-14-153706-ga\n

Note

You need a 4.x version of oc to extract the installer and the latest client. You can initially use the official Openshift client (mirror)

There are full instructions in the OKD documentation for each supported platform, but the main steps for an IPI install are:

  1. extract the downloaded tarballs and copy the binaries into your PATH.
  2. run the following from an empty directory:
    openshift-install create cluster\n
  3. follow the prompts to create the install config (a non-interactive variant is sketched below)
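If you prefer to review or customize the configuration before any infrastructure is created, the install config can be generated as a separate step (a sketch; the directory name is illustrative):

# Generate install-config.yaml into a working directory and edit it as needed
openshift-install create install-config --dir=okd-cluster

# Then create the cluster from that directory
openshift-install create cluster --dir=okd-cluster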

Once the install completes successfully the console URL and an admin username and password will be printed. If your DNS records were correct, you should be able to log in to your new OKD4 cluster!

To undo the installation and delete any cloud resources created by the installer, run

openshift-install destroy cluster\n

Note

The OpenShift client tools for your cluster can be downloaded from the help drop down menu at the top of the web console.

"},{"location":"working-groups/","title":"Working Groups","text":"

OKD is governed by working groups as set out in the OKD Working Group Charter

There is a primary working group, where all the main decisions are made regarding the project.

Where an area of the project needs more time or is of interest to a subset of the working group membership, a sub-group will be formed for that specific area.

The current sub groups are:

"},{"location":"working-groups/#okd-primary-working-group","title":"OKD Primary Working Group","text":"

The OKD group meets virtually every other week.

You don't need an invitation to join a working group -- simply join the video call. You may also want to join other online discussions as set out in the contributor section

"},{"location":"blog/2021-03-07-new-blog.html/","title":"okd.io now has a blog","text":"

Todo

This content is for the current Middleman based OKD.io site

"},{"location":"blog/2021-03-07-new-blog.html/#lets-share-news-and-useful-information-with-each-other","title":"Let's share news and useful information with each other","text":"

We look forward to sharing news and useful information about OKD in this blog in the future.

You are also invited to participate: share your experiences and tips with the community by creating your own blog articles for okd.io.

Here's how to do it:

"},{"location":"blog/2021-03-16-save-the-date-okd-testing-deployment-workshop.html/","title":"Save The Date! OKD Testing and Deployment Workshop (March 20) Register Now!","text":""},{"location":"blog/2021-03-16-save-the-date-okd-testing-deployment-workshop.html/#the-okd-working-group-is-hosting-a-virtual-workshop-on-testing-and-deploying-okd4","title":"The OKD Working Group is hosting a virtual workshop on testing and deploying OKD4","text":"

On March 20th, the OKD Working Group is hosting a one-day event to bring together people from the OKD and related Open Source project communities to collaborate on testing and documentation of the OKD 4 install and upgrade processes for the various platforms that people are deploying OKD 4 on, as well as to identify any issues with the current documentation for these processes and triage them together.

The day will start with all attendees together in the \u2018main stage\u2019 area for 2 hours where we will give a short welcome and describe the logistics for the day, give a brief introduction to OKD4 itself, then walk through an install deployment to vSphere using the UPI approach along with a few other more universal best practices (such as DNS/DHCP server configuration) that apply to all deployment targets.

Then we will break into tracks specific to the deployment target platforms for deep dive demos with Q/A, try and answer any questions you have about your specific deployment target's configurations, identify any missing pieces in the documentation and triage the documentation as we go.

There will be 4 track break-out rooms set up for 3 hours of deployment walk-throughs and Q/A with session leads:

Our goal is to triage our existing community documentation, identify any shortcomings and encourage your participation in the OKD-Working Group's testing of the installation and upgrade processes for each OKD release.

This is a community event, NOT meant as a substitute for Red Hat technical support.

There is no admission or ticket charge for OKD-Working Group events. However, you are required to complete a free hopin.to platform registration and watch the hopin site for updates about registration and schedule updates.

We are committed to fostering an open and welcoming environment at our working group meetings and events. We set expectations for inclusive behavior through our code of conduct and media policies, and are prepared to enforce these.

You can Register for the workshop here:

https://hopin.com/events/okd-testing-and-deployment-workshop

"},{"location":"blog/2021-03-19-please-avoid-using-fcos-33.20210301.3.1.html/","title":"Please avoid using FCOS 33.20210301.3.1 for new OKD installs","text":"

Hi,

Due to several issues ([1] and [2]) fresh installations using FCOS 33.20210301.3.1 would fail. The fix is coming in Podman 3.1.0.

Please use an older stable release - 33.20210217.3.0 - as a starting point instead. See download links at https://builds.coreos.fedoraproject.org/browser?stream=stable (might need some scrolling),

Note that only fresh installs are affected. Also, you won't be left with outdated packages, as OKD updates itself to the latest stable FCOS content during installation/update.

  1. https://bugzilla.redhat.com/show_bug.cgi?id=1936927
  2. https://github.com/openshift/okd/issues/566

-- Cheers, Vadim

"},{"location":"blog/2021-03-22-recap-okd-testing-deployment-workshop.html/","title":"Recap OKD Testing and Deployment Workshop - Videos and Additional Resources","text":""},{"location":"blog/2021-03-22-recap-okd-testing-deployment-workshop.html/#the-okd-working-group-held-a-virtual-community-hosted-workshop-on-testing-and-deploying-okd4-on-march-20th","title":"The OKD Working Group held a virtual community-hosted workshop on testing and deploying OKD4 on March 20th","text":"

On March 20th, the OKD Working Group hosted a day-long event to bring together people from the OKD and related Open Source project communities to collaborate on testing and documentation of the OKD 4 install and upgrade processes for the various platforms that people are deploying OKD 4 on, as well as to identify any issues with the current documentation for these processes and triage them together.

The day started with all attendees together in the \u2018main stage\u2019 area for 2 hours where community members gave a short welcome along with the following four presentations:

Attendees then broke into track sessions specific to the deployment target platforms for deep dive demos with live Q/A, answered as many questions as possible about that specific deployment target's configurations, attempted to identify any missing pieces in the documentation and triaged the documentation as we went along.

The 4 track break-out rooms were set up for 2.5 hours of deployment walk-throughs and Q/A with session leads:

Our goal was to triage our existing community documentation, identify any shortcomings and encourage your participation in the OKD-Working Group's testing of the installation and upgrade processes for each OKD release.

"},{"location":"blog/2021-03-22-recap-okd-testing-deployment-workshop.html/#resources","title":"Resources:","text":""},{"location":"blog/2021-05-04-From-OKD-to-OpenShift-in-3-Years.html/","title":"Rohde & Schwarz's Journey to OpenShift 4 From OKD to Azure Red Hat OpenShift","text":""},{"location":"blog/2021-05-04-From-OKD-to-OpenShift-in-3-Years.html/#from-okd-to-openshift-in-3-years-talk-by-josef-meier-rohde-schwarz-from-openshift-commons-gathering-at-kubecon","title":"From OKD to OpenShift in 3 Years - talk by Josef Meier (Rohde & Schwarz) from OpenShift Commons Gathering at Kubecon","text":"

On May 4th 2021, OKD Working Group member Josef Meier gave a wonderful talk about Rohde & Schwarz's journey to OpenShift 4, from OKD to ARO (Azure Red Hat OpenShift), and discussed the benefits of participating in the OKD Working Group!

Join the OKD-Working Group and add your voice to the conversation!

"},{"location":"blog/2021-05-06-OKD-Office-Hours-at-KubeconEU-on-OpenShiftTV.html/","title":"OKD Working Group Office Hours at KubeconEU on OpenShift.tv","text":""},{"location":"blog/2021-05-06-OKD-Office-Hours-at-KubeconEU-on-OpenShiftTV.html/#video-from-okd-working-group-office-hours-at-kubeconeu-on-openshifttv","title":"Video from OKD Working Group Office Hours at KubeconEU on OpenShift.tv","text":"

On May 6th 2021, OKD Working Group members hosted an hour-long, community-led Office Hour with a brief introduction to the latest release by Red Hat's Charro Gruver, followed by live Q&A!

Join the OKD-Working Group and add your voice to the conversation!

"},{"location":"blog/2022-09-09-an-introduction-to-debugging-okd-release-artifacts.html/","title":"An Introduction to Debugging OKD Release Artifacts","text":"

by Denis Moiseev and Michael McCune

During the course of installing, operating, and maintaining an OKD cluster it is natural for users to come across strange behaviors and failures that are difficult to understand. As Red Hat engineers working on OpenShift, we have many tools at our disposal to research cluster failures and to report our findings to our colleagues. We would like to share some of our experiences, techniques, and tools with the wider OKD community in the hopes of inspiring others to investigate these areas.

As part of our daily activities we spend a significant amount of time investigating bugs, as well as failures in our release images and testing systems. As you might imagine, to accomplish this task we use many tools and pieces of tribal knowledge to understand not only the failures themselves, but the complexity of the build and testing infrastructures. As Kubernetes and OpenShift have grown, there has always been an organic growth of tooling and testing that helps to support and drive the development process forward. Fully understanding these processes means actively following the development cycle. This is not always easy for users who are also focused on delivering high-quality service through their clusters.

On 2 September, 2022, we had the opportunity to record a video of ourselves diving into the OKD release artifacts to show how we investigate failures in the continuous integration release pipeline. In this video we walk through the process of finding a failing release test, examining the Prow console, and then exploring the results that we find. We explain what these artifacts mean, how to further research failures that are found, and share some other web-based tools that you can use to find similar failures, understand the testing workflow, and ultimately share your findings through a bug report.

To accompany the video, here are some of the links that we explore and related content:

Finally, if you do find bugs or would like to report strange behavior in your clusters, remember to visit issues.redhat.com and use the project OCPBUGS.

"},{"location":"blog/2022-10-20-OKD-at-Kubecon-NA-Detroit/","title":"OKD at KubeCon + CloudNativeCon North America 2022","text":"

by Diane Mueller

date: 2022-10-20

Are you heading to Kubecon/NA October 24, 2022 - October 28, 2022 in Detroit at KubeCon + CloudNativeCon North America 2022?

If so, here's where you'll find the members of the OKD Working Group and the Red Hat engineers who are working on delivering the latest releases of OKD at Kubecon!

"},{"location":"blog/2022-10-20-OKD-at-Kubecon-NA-Detroit/#october-25th","title":"October 25th","text":"

At the OpenShift Commons Gathering on Tuesday, October 25, 2022 | 9:00 a.m. - 6:00 p.m. EDT, we're hosting an in-person OKD Working Group Lunch & Learn meetup from 12 noon to 3 pm, led by co-chairs Jaime Magiera (ICPSR at University of Michigan Institute for Social Research) and Diane Mueller (Red Hat) with special guests including Michael McCune (Red Hat), in break-out room D at the Westin Book Cadillac, a 10-minute walk from the conference venue. It will be followed by a Lightning Talk: OKD Working Group Update & Road Map on the OpenShift Commons main stage at 3:45 pm. The main stage event will be live streamed via Hopin, so if you are NOT attending in person, you'll be able to join us online.

Registration for OpenShift Commons Gathering is FREE and OPEN to ALL for both in-person and virtual attendance - https://commons.openshift.org/gatherings/kubecon-22-oct-25/

"},{"location":"blog/2022-10-20-OKD-at-Kubecon-NA-Detroit/#october-27th","title":"October 27th","text":"

At 11:30 am EDT, the OKD Working Group will hold a Kubecon Virtual Office Hour on the OKD Streams initiative and the latest release, led by OKD Working Group members Vadim Rutkovsky, Luigi Mario Zuccarelli, Christian Glombek and Michelle Krejci!

Registration for the virtual Kubecon/NA event is required to join the Kubecon Virtual Office Hour

If you're attending in person and just want to grab a cup of coffee and have a chat with us, please ping either of the OKD Working Group co-chairs, Jaime Magiera (ICPSR at University of Michigan Institute for Social Research) or Diane Mueller (Red Hat).

Come connect with us to discuss the OKD Road Map, OKD Streams initiative, MVP Release of OKD on CentOS Streams and the latest use cases for OKD, and talk all things open with our team.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/","title":"OKD Streams: Building the Next Generation of OKD together","text":"

by Diane Mueller

date: 2022-10-25

OKD is the community distribution of Kubernetes that powers Red Hat OpenShift. The OKD community has created reusable Tekton build pipelines on a shared Kubernetes cluster so that it can manage the build and release processes for OKD in the open. With operate-first.cloud, hosted at the massopen.cloud, the OKD community has launched a fully open source release pipeline that the community can participate in to help support and manage the release cycle ourselves. The OKD community is now able to build and release stable builds of OKD 4.12 on both Fedora CoreOS and the newly introduced CentOS Stream CoreOS. We are calling these OKD Streams.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#new-patterns-new-cicd-pipelines-and-a-new-coreos","title":"New Patterns, New CI/CD Pipelines and a new CoreOS","text":"

Today we invite you into our OKD Streams initiative. An OKD Stream refers to a build, test, and release pipeline for any configuration of OKD, the open source Kubernetes distribution that powers OpenShift. The OKD working group is pleased to announce the availability of tooling and processes that will enable building and testing many configurations, or "streams". The OKD Working Group and Red Hat Engineering are now testing one such stream that runs an upstream version of RHEL9 via CentOS Stream CoreOS ('SCOS' for short) to improve our RHEL9 readiness signal for Red Hat OpenShift. It is the first of many OKD Streams that will enable developers inside and outside of Red Hat to easily experiment with and explore Cloud Native technologies. You can check out our MVP OKD on SCOS release here.

With this initiative, the OKD working group has embraced new patterns and built new partnerships. We have leveraged the concepts of the open source managed service 'Operate First' pattern, worked with the CentOS and CoreOS communities to build a pipeline for building SCOS, and applied new CI/CD technologies (Tekton) to build a new OKD release build pipeline service. The MVP of OKD Streams, for example, is an SCOS-backed version of OKD built with a Tekton pipeline managed by the OKD working group that runs on AWS infrastructure managed by Operate First. Together we are unlocking these innovations to get better (and earlier) release signals for Kubernetes, OCP and RHEL, and to enable the OKD community to get more deeply involved with the OKD build processes.

The OKD Working group wanted to make participation in all of these activities easier for all Cloud Native developers and this has been the motivating force behind the OKD Streams initiative.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#from-the-one-size-fits-all-to-built-to-order","title":"From the \u2018One Size Fits All\u2019 to \u2018Built to Order\u2019","text":"

There are three main problems that both the OKD working group and Red Hat Engineering teams spend a lot of time thinking about:

  1. How do we improve our release signals for OpenShift, RHEL and CoreOS?
  2. How do we get features into the hands of our customers and partners faster?
  3. How do we enable engineers to experiment and innovate?

Previously, what we referred to as an 'OKD' release was built on the most recent release of OKD running on the latest stable release of Fedora CoreOS (FCOS for short). In actuality, we had a single release pipeline that built a release of OKD with a bespoke version of FCOS. These releases of OKD gave us early signals about the impact of new operating system features that would eventually land in RHEL, where they surface in RHEL CoreOS (RHCOS). It was (and still is) a very good way for developers to experiment with OKD and explore its functionality.

The OKD community wanted to empower wider use of OKD for experimentation in more use cases: some require layering on additional resources, while others call for reducing the footprint for edge and local deployments. OKD has been stable enough for some to run production deployments. CERN's OKD deployment on OpenStack, for example, is assembled with custom OKD build pipelines. The feedback from these OKD builds has been a source of inspiration for the OKD Streams initiative to enable more such use cases.

The OKD Streams initiative brings community input and feedback into the project more quickly, without interrupting the productized builds for OpenShift and OpenShift customers. We can experiment with new features that can then be pushed upstream into Kubernetes or downstream into the OpenShift product. We can reuse the Tekton build pipelines to build streams specific to HPC, OpenStack, bare metal, or whatever the payload customization needs to be for an organization.

Our goal is to make it simple for others to experiment.

We are experimenting too. The first OKD Streams 'experiment', built with the new Tekton build pipeline running on an Operate First AWS cluster, is OKD running on SCOS: a future version of OpenShift running on a near-future version of RHEL, leveraging CentOS Stream CoreOS. This will improve our RHEL9 readiness signal for OCP. Improved RHEL9 readiness signals, with input from the community, will showcase our work as we explore what the new OKD build service is going to mean for all of us.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#tekton-pipelines-as-the-building-blocks","title":"Tekton Pipelines as the Building Blocks","text":"

Our new OKD Streams are built using Tekton pipelines, which makes it easier for us to explore building many different kinds of pipelines.

Tekton is a Continuous Deployment (CD) system that enables us to run tasks and pipelines in a composable and flexible manner. This fits in nicely with our OKD Streams initiative, where the focus is less on the artifacts that are produced than on the pipelines that build them.

While OKD as a payload remains the core focus of the OKD Working Group, we are also collaborating with the Operate First community to ensure that anyone is able to take the work we have done and lift and shift it to any cloud, enabling OKD to run on Kubernetes-based infrastructure anywhere. Now anybody can experiment and build their own 'stream' of OKD with the Tekton pipeline.

This new pipeline approach enables builds that can be customized via parameters, even the tasks within the pipeline can be exchanged or moved around. Add your own tasks. They are reusable templates for creating your own testable stream of OKD. Run the pipelines on any infrastructure, including locally in Kubernetes using podman, for example, or you can run them on a vanilla Kubernetes cluster. We are enabling access to the Operate First managed OKD Build Service to deploy more of these builds and pipelines to get some ideas that we have at Red Hat out into the community for early feedback AND to let other community members test their ideas.

As an open source community, we're always evolving and learning together. Our goal is to make OKD the go-to place to experiment and innovate for the entire OpenShift ecosystem and beyond, to showcase new features and functionality, and to fail fast and often without impacting product releases or incurring more technical debt.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#the-ask","title":"THE ASK","text":"

Help drive faster innovation into OCP, OKD, Kubernetes and RHEL along with the multitude of other Cloud Native open source projects that are part of the OpenShift and the cloud native ecosystem.

This project is a game changer for lots of open source communities internally and externally. We know there are folks out there in the OKD working group and in the periphery that haven't spoken up and we'd love to hear from you, especially if you are currently doing bespoke OKD builds. Will this unblock your innovation the way we think it will?

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#additional-resources","title":"Additional Resources","text":""},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#kudos-and-thank-you","title":"Kudos and Thank you","text":"

Operate First's Infrastructure Team: Thorsten Schwesig, Humair Khan, Tom Coufal, Marcel Hild
Red Hat's CFE Team: Luigi Zuccarelli, Sherine Khoury
OKD Working Group: Vadim Rutkovsky, Alessandro Di Stefano, Jaime Magiera, Brian Innes
CentOS Cloud and HPC SIGs: Amy Marrich, Christian Glombek, Neal Gompa

"},{"location":"blog/2022-12-12-Building-OKD-payload/","title":"Building the OKD payload","text":"

Over the last couple of months, we've been busy building a new OKD release on CentOS Stream CoreOS (SCOS), and were able to present it at OpenShift Commons Detroit 2022.

While some of us created a Tekton pipeline that could build SCOS on a Kind cluster, others were tediously building the OKD payload with Prow, but also creating a Tekton pipeline for building that payload on any OpenShift or OKD cluster.

The goal of this effort is to enable and facilitate community collaboration and contributions, giving anybody the ability to do their own payload builds and run tests themselves.

This process has been difficult because OpenShift's Prow CI instance is not open to the public, and changes could thus not easily be tested before PR submission. Even after opening a PR, a non-Red Hatter will require a Red Hat engineer to add the /ok-to-test label in order to start Prow testing.

With the new Tekton pipelines, we are now providing a straightforward way for anybody to build and test their own changes first (or even create their own Stream entirely), and then present the results to the OKD Working Group, which will then expedite the review process on the PR.

In this article, I will shed some light on the building blocks of the OKD on SCOS payload and how it is built, both the Prow way and the Tekton way:

"},{"location":"blog/2022-12-12-Building-OKD-payload/#whats-the-payload","title":"What's the payload?","text":"

Until now, the OKD payload, like the OpenShift payload, was built by the ReleaseController in Prow.

The release-controller automatically builds OpenShift release images when new images are created for a given OpenShift release. It detects changes to an image stream, launches a job to build and push the release payload image using oc adm release new, and then runs zero or more ProwJobs against the artifacts generated by the payload.

A release image is nothing more than a ClusterVersionOperator (CVO) image with an extra layer containing the release-manifests folder. This folder contains:

* image-references: a list of all known images with their SHA digests
* YAML manifest files for each operator controlled by the CVO

The list of images that is included in the release-manifests is calculated from the release image stream, taking:

* all images with the label io.openshift.release.operator=true in that image stream
* plus any images referenced in the /manifests/image-references file within each of the images carrying this label

As you can imagine, the list of images in a release can change from one release to the next, depending on:

* new operators being delivered within the OpenShift release
* existing operators adding or removing an operand image
* operators previously included that are removed from the payload to be delivered independently, through OLM instead

In order to list the images contained in a release payload, run this command:

oc adm release info ${RELEASE_IMAGE_URL}

For example:

oc adm release info quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740 
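If you want to go a step further and look at the manifests themselves, oc adm release extract can pull them out of a release image. A minimal sketch, reusing the example pullspec above (adjust it to the release you are inspecting):

# Print just the image-references file from the release image
oc adm release extract --file=image-references quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740

# Or dump the whole release-manifests folder to a local directory
oc adm release extract --to=release-manifests/ quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740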

Now that we've established what needs to be built, let's take a deeper look at how the OKD on SCOS payload is built.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#building-okdscos-the-prow-way-railway_track","title":"Building OKD/SCOS the Prow way :railway_track:","text":"

The obvious way to build OKD on SCOS is to use Prow - THE Kubernetes-based CI/CD system, which is what builds OCP and OKD on FCOS already today. This is what Kubernetes uses upstream as well. :shrug:

For a new OKD release to land in the releases page, there's a whole bunch of Prow jobs that run. Hang on! It's a long story...

"},{"location":"blog/2022-12-12-Building-OKD-payload/#imagestreams","title":"ImageStreams","text":"

Let's start at the end :wink:, and prepare a new image stream for OKD on SCOS images. This ImageStream (IS) is a placeholder for all images that form the OKD/SCOS payload.

For OKD on Fedora CoreOS (OKD/FCOS) it's named okd. For OKD/SCOS, this ImageStream is named okd-scos.

This ImageStream includes all payload images contained in the specific OKD release based on CentOS Stream CoreOS (SCOS).

Among these payload images, we distinguish:

* Images that can be shared between OCP and OKD. These are built in Prow and mirrored into the okd-scos ImageStream.
* Images that have to be specifically built for OKD/SCOS, which are directly tagged into the okd-scos ImageStream. This is the case for images that are specific to the underlying operating system, or contain RHEL packages. These are: the installer images, the machine-config-operator image, the machine-os-content image that includes the base operating system OSTree, as well as the ironic image for provisioning bare-metal nodes, and a few other images.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#triggers-for-building-most-payload-images","title":"Triggers for building most payload images","text":"

Now that we've got the recipient Image Stream for the OKD payload images, let's start building some payloads!

Take the Cluster Network Operator for example: For this operator, the same image can be used on OCP CI and OKD releases. Most payload images fit into this case.

For such an image, the build is pretty straightforward. When a PR is filed for a GitHub repository that is part of a release payload:

* The pre-submit jobs run. They essentially build the image and store it in an ImageStream in an ephemeral namespace to run tests against several platforms (AWS, GCP, bare metal, Azure, etc.).
* Once the tests are green and the PR is approved and merged, the post-submit jobs run. They essentially promote the built image to the appropriate release-specific ImageStream:
    * if the PR is for master, images are pushed to the ${next-release} ImageStream
    * if the PR is for release-${MAJOR}.${MINOR}, images are pushed to the ${MAJOR}.${MINOR} ImageStream

Next, the OCP release controller, which runs on every change to the ImageStream, will mirror all images from the ${MAJOR}.${MINOR} ImageStream to the scos-${MAJOR}.${MINOR} ImageStream.

As mentioned before, some of the images are not mirrored, and that brings us to the next section, on building those images that have content (whether code or manifests) specific to OKD.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#trigger-for-building-the-okd-specific-payload-images","title":"Trigger for building the OKD-specific payload images","text":"

For the OKD-specific images, the CI process is a bit different, as the image is built in the PostSubmit job and then directly promoted to the okd-scos IS, without going through the OCP CI to OKD mirroring step. This is called a variant configuration. You can see this for MachineConfigOperator for example.

The built images land directly in the scos-${MAJOR}-${MINOR} ImageStream.

That is why there's no need for OCP's CI release controller to mirror these images from the CI ImageStream: during the post-submit phase, images are already built in parallel for OCP, OKD/FCOS and OKD/SCOS and pushed, respectively, to ocp/$MAJOR.$MINOR, origin/$MAJOR.$MINOR and origin/scos-$MAJOR.$MINOR.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#okd-release-builds","title":"OKD release builds","text":"

Now the ImageStream scos-$MAJOR.$MINOR is getting populated by payload images. With every new image tag, the release controller for OKD/SCOS will build a release image.

The ReleaseController ensures that OpenShift update payload images (aka release images) are created whenever an ImageStream representing the images in a release is updated.

Thanks to the annotation release.openshift.io/config on the scos-${MAJOR}.${MINOR} ImageStream, the controller will:

  1. Create a tag in the scos-${MAJOR}.${MINOR} ImageStream that uses the release name + current timestamp.
  2. Mirror all of the tags in the input ImageStream so that they can't be pruned.
  3. Launch a job in the job namespace to invoke oc adm release new from the mirror, pointing to the release tag created in step 1.
  4. If the job succeeds in pushing the tag, it sets the annotation release.openshift.io/phase = "Ready" on that tag, indicating that the release can be used by other steps. And that's how a new release appears in https://origin-release.ci.openshift.org/#4.13.0-0.okd-scos
  5. The release state switches to "Verified" when the verification end-to-end test job succeeds.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#building-the-tekton-way-motorway","title":"Building the Tekton way :motorway:","text":"

Building with Prow has the advantage of being driven by new code being pushed to payload components, thus building fresh releases as the code of github.com/openshift evolves.

The problem is that Prow, along with all the clusters involved with it, the ImageStreams, etc., is not accessible to the OKD community outside of Red Hat. Also, users might be interested in building a custom OKD payload in their own environment, to experiment with exchanging components for example.

To remove this impediment, the OKD team has been working on the OKD Payload pipeline based on Tekton.

Building OKD payloads with Tekton can be done by cloning the okd-payload-pipeline repository. One extra advantage of this repository is the ability to see the list of components that form the OKD payload: the list under buildconfigs corresponds to the images in the final OKD payload. This list is currently synced manually with the list of OCP images on each release.

The pipeline is fairly simple. Take build-from-scratch.yaml for example. It has 3 main tasks:

* Build the base image and the builder image, with which all the payload images will be built:
    * The builder image is a CentOS Stream 9 container image that includes all the dependencies needed to build payload components and is used as the build environment for them.
    * The built binaries are then layered onto a CentOS Stream 9 base image, creating a payload component image.
    * The base image is shared across all the images in the release payload.
* Build payload images in batches (starting with the ones that don't have any dependencies).
* Finally, once all OKD payload component images are in the image stream, the OKD release image is in turn built, using the oc adm release new command.
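As a rough illustration, once the pipeline from the repository has been deployed to a cluster, a run could be kicked off with the Tekton CLI along these lines. The pipeline name, namespace and the idea of relying on parameter defaults are assumptions here; check the repository for the actual definitions and required workspaces:

# Hypothetical invocation; pipeline and namespace names are illustrative,
# and the pipeline may also require --param/--workspace arguments
tkn pipeline start build-from-scratch \
  --namespace okd-payload \
  --use-param-defaults \
  --showlog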

"},{"location":"blog/2022-12-12-Building-OKD-payload/#triggers","title":"Triggers","text":"

For the moment, this pipeline has no triggers. It can be executed manually when needed. We are planning to automatically trigger the pipeline on a daily cadence.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#batch-build-task","title":"Batch Build Task","text":"

With a set of BuildConfigs passed in as parameters, this task relies on an openshift oc image containing the client binary, loops over the list of build configs with oc start-build, and waits for all the builds to complete.
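Conceptually, the batch step behaves roughly like the following shell sketch. The build config names and namespace are illustrative, not the actual task definition:

# Start every build in the batch, then wait for all of them to finish
for bc in cluster-network-operator machine-config-operator; do   # the real list comes from a pipeline parameter
  oc start-build "${bc}" -n okd-payload --wait &
done
wait   # the next batch only starts once every build in this one has completed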

"},{"location":"blog/2022-12-12-Building-OKD-payload/#new-release-task","title":"New Release Task","text":"

This task simply uses an OpenShift client image to call oc adm release new, which creates the release image from the release image stream (on the OKD/OpenShift cluster where this Tekton pipeline is running), and then mirrors the release image and all the payload component images to a registry configured in its parameters.
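In terms of plain oc commands, the task does roughly the following. The registry locations, image stream name and release version are placeholders standing in for the values passed as pipeline parameters:

# Assemble a release image from the local "release" image stream ...
oc adm release new \
  --from-image-stream=release \
  -n okd-payload \
  --to-image=quay.io/my-org/okd-release:4.13.0-okd-scos.custom

# ... then mirror the release image and its referenced payload images to the target registry
oc adm release mirror \
  --from=quay.io/my-org/okd-release:4.13.0-okd-scos.custom \
  --to=quay.io/my-org/okd-payload \
  --to-release-image=quay.io/my-org/okd-release:4.13.0-okd-scos.custom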

"},{"location":"blog/2022-12-12-Building-OKD-payload/#buildconfigs","title":"BuildConfigs","text":"

As explained above, the OKD payload Tekton pipeline heavily relies on the buildconfigs. This folder contains one buildconfig yaml file for each image included in the release payload.

Each build config simply uses a builder image to build the operator binary, invoking the correct Dockerfile in the operator repository. Then, the binary is copied as a layer on top of an OKD base image, which is built in the preparatory task of the pipeline.
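The resulting image is conceptually similar to a multi-stage container build like the sketch below. The image names, repository path and make target are purely illustrative; the real BuildConfigs reference the builder and base images from the local image stream and the component's own Dockerfile:

# Stage 1: compile the operator binary inside the shared builder image
FROM okd-builder:latest AS builder
WORKDIR /go/src/github.com/openshift/example-operator
COPY . .
RUN make build

# Stage 2: layer the binary onto the shared OKD base image
FROM okd-base:latest
COPY --from=builder /go/src/github.com/openshift/example-operator/example-operator /usr/bin/example-operator
ENTRYPOINT ["/usr/bin/example-operator"]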

This process currently uses the OpenShift Builds API. We are planning to move these builds to the Shipwright Builds API in order to enable builds outside of OCP or OKD clusters.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#updating-build-configs","title":"Updating build configs","text":"

Upon deploying the Tekton OKD payload pipeline on an OKD (or OpenShift) cluster, Kustomize is used in order to:

* patch the BuildConfig files, adding TAGS to the build arguments according to the type of payload we want to build (based on FCOS, SCOS or any other custom stream)
* patch the BuildConfig files, replacing the builder image references to the non-public registry.ci.openshift.org/ocp/builder in the payload components' Dockerfiles with the builder image reference from the local image stream
* set resource requests and limits if needed

A minimal sketch of such a kustomization is shown below.
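The patch target, the TAGS build argument and its value come from the description above; everything else in this sketch is illustrative and may differ from the actual overlays in the repository:

# kustomization.yaml (sketch)
resources:
  - buildconfigs/

patches:
  - target:
      kind: BuildConfig
    patch: |-
      # assumes each BuildConfig already defines a buildArgs list
      - op: add
        path: /spec/strategy/dockerStrategy/buildArgs/-
        value:
          name: TAGS
          value: scos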

"},{"location":"blog/2022-12-12-Building-OKD-payload/#preparing-for-a-new-release","title":"Preparing for a new release","text":"

The procedure to prepare a new release is still a work in progress at the time of writing.

To build a new release, each BuildConfig file should be updated with the git branch corresponding to that release. In the future, the branch can be passed along as a kustomization, or in the parameters of the pipeline.

The list of images from a new OCP release (obtained through oc adm release info) must then be synced with the BuildConfigs present here:

* For any new image, a new BuildConfig file must be added.
* For any image removed from the OCP release, the corresponding BuildConfig file must be removed.

A rough sketch of how that comparison could be scripted follows below.
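This sketch assumes jq is available and that the BuildConfig file names match the payload image names, which may not hold for every component:

# List the payload image names in an OCP release
oc adm release info "${OCP_RELEASE}" -o json | jq -r '.references.spec.tags[].name' | sort > ocp-images.txt

# List the BuildConfigs currently in the repository
ls buildconfigs/ | sed 's/\.yaml$//' | sort > okd-buildconfigs.txt

# Anything only on the left needs a new BuildConfig; anything only on the right should be removed
diff ocp-images.txt okd-buildconfigs.txt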

"},{"location":"blog/2022-12-12-Building-OKD-payload/#take-away","title":"Take away","text":""},{"location":"blog/2022-12-12-Building-OKD-payload/#what-are-our-next-steps","title":"What are our next steps?","text":"

In the coming weeks and months, you can expect lots of changes, especially as the OKD community picks up usage of OKD/SCOS and does its own Tekton pipeline runs:

* Work to automate the OKD release procedure is in progress, by automatically verifying payload image signatures, signing the release, and tagging it on GitHub.

The goal is to deliver a new OKD/SCOS on a sprint (3-weekly) basis, and to provide both the OCP teams and the OKD community with a fresh release to test much earlier than was previously possible with the OCP release cadence.

* For the moment, OKD/SCOS releases are only verified on AWS. To gain more confidence in our release payloads, we will expand the test matrix to other platforms such as GCP, vSphere and bare metal.
* Enable GitOps on the Tekton pipeline repository, so that changes to the pipeline are automatically deployed on Operate First for the community to use the latest and greatest.
* The OKD Working Group will be collaborating with the Mass Open Cloud to allow for deployments of test clusters on their bare-metal infrastructure.
* The OKD Working Group will be publishing the Tekton Tasks and Pipelines used to build the SCOS operating system as well as the OKD payload to Tekton Hub and Artifact Hub.
* The OKD operators Tekton pipeline will be used for community builds of optional OLM operators. A first OKD operator has already been built with it, and other operators are to follow, starting with the Pipelines operator, which has long been an ask from the community.
* Additionally, we are working on multi-arch releases for both OKD/SCOS and OKD/FCOS.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#opened-perspectives","title":"Opened perspectives","text":"

Although in the near future the OKD team will still rely on Prow to build the payload images, the Tekton pipeline will start being used to finalize the release.

In addition, this Tekton pipeline has opened up new perspectives, even for OCP teams.

One such example is the OpenShift API team, who would like to use the Tekton pipeline to test API changes by building all components that depend on the OpenShift API from a given PR, creating an OKD release, and testing it, thus getting very quick feedback on the impact of the API changes on OKD (and later OCP) releases.

Another example is the possibility to build images on platforms other than OpenShift or OKD, replacing BuildConfigs with Shipwright, or even plain docker build...

Whatever your favorite flavor is, we are looking forward to seeing the pipelines in action, increasing collaboration and improving our community feedback loop.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/","title":"State of affairs in OKD CI/CD","text":"

by Jakob Meng

date: 2023-07-18

OKD is a community distribution of Kubernetes which is built from Red Hat OpenShift components on top of Fedora CoreOS (FCOS) and recently also CentOS Stream CoreOS (SCOS). The OKD variant based on Fedora CoreOS is called OKD or OKD/FCOS. The SCOS variant is often referred to as OKD/SCOS.

The previous blog posts introduced OKD Streams and its new Tekton pipelines for building OKD/FCOS and OKD/SCOS releases. This blog post gives an overview of the current build and release processes for FCOS, SCOS and OKD. It outlines OKD's dependency on OpenShift, a remnant from the past when its Origin predecessor was a downstream rebuild of OpenShift 3, and concludes with an outlook on how OKD Streams will help users, developers and partners experiment with future OpenShift.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#fedora-coreos-and-centos-stream-coreos","title":"Fedora CoreOS and CentOS Stream CoreOS","text":"

Fedora CoreOS is built with a Jenkins pipeline running in Fedora's infrastructure and is being maintained by the Fedora CoreOS team.

CentOS Stream CoreOS is built with a Tekton pipeline running in an OpenShift cluster on MOC's infrastructure and pushed to quay.io/okd/centos-stream-coreos-9. The SCOS build pipeline is owned and maintained by the OpenShift OKD Streams team, and SCOS builds are imported from quay.io into OpenShift CI as ImageStreams.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#openshift-payload-components","title":"OpenShift payload components","text":"

At the time of writing, most payload components for OKD/FCOS and OKD/SCOS are mirrored from OCP CI releases. OpenShift CI (Prow and ci-operator) periodically builds OCP images, e.g. for OVN-Kubernetes. OpenShift's release-controller detects changes to image streams caused by recently built images, then builds and tests an OCP release image. When such a release image passes all non-optional tests (also see the release gating docs), the release image and other payload components are mirrored to origin namespaces on quay.io (release gating is subject to change). For example, at most every 3 hours an OCP 4.14 release image will be deployed (and upgraded) on AWS and GCP and afterwards tested with OpenShift's conformance test suite. When it passes the non-optional tests, the release image and its dependencies will be mirrored to quay.io/origin (except for rhel-coreos*, *-installer and some other images). These OCP CI releases are listed with a ci tag at amd64.ocp.releases.ci.openshift.org. Builds and promotions of nightly and stable OCP releases are handled differently (i.e. outside of Prow) by the Automated Release Tooling (ART) team.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#okd-payload-components","title":"OKD payload components","text":"

A few payload components are built specifically for OKD though, for example OKD/FCOS' okd-machine-os. Unlike RHCOS and SCOS, okd-machine-os, the operating system running on OKD/FCOS nodes, is layered on top of FCOS (also see CoreOS Layering, OpenShift Layered CoreOS).

Note that some payload components have OKD-specific configuration in OpenShift CI although the resulting images are not incorporated into OKD release images. For example, OVN-Kubernetes images are built and tested in OpenShift CI to ensure OVN changes do not break OKD.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#okd-releases","title":"OKD releases","text":"

When OpenShift's release-controller detects changes to OKD-related image streams, either due to updates of FCOS/SCOS or an OKD payload component, or due to OCP payload components being mirrored after an OCP CI release promotion, it builds and tests a new OKD release image. When such an OKD release image passes all non-optional tests, the image is tagged as registry.ci.openshift.org/origin/release:4.14 etc. This CI release process is similar for OKD/FCOS and OKD/SCOS, e.g. compare these examples for OKD/FCOS 4.14 and for OKD/SCOS 4.14. OKD/FCOS's and OKD/SCOS's CI releases are listed at amd64.origin.releases.ci.openshift.org.

Promotions for OKD/FCOS to quay.io/openshift/okd (published at github.com/okd-project/okd) and for OKD/SCOS to quay.io/okd/scos-release (published at github.com/okd-project/okd-scos) are done roughly every 2 to 3 weeks. For OKD/SCOS, OKD's release pipeline is triggered manually once a sprint to promote CI releases to 4-scos-{next,stable}.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#okd-streams-and-customizable-tekton-pipelines","title":"OKD Streams and customizable Tekton pipelines","text":"

However, the OKD project is currently shifting its focus from doing downstream rebuilds of OCP to OKD Streams. As part of this strategic repositioning, OKD offers Argo CD workflows and Tekton pipelines to build CentOS Stream CoreOS (SCOS) (with okd-coreos-pipeline), to build OKD/SCOS (with okd-payload-pipeline) and to build operators (with okd-operator-pipeline). The OKD Streams pipelines were created to improve the RHEL9 readiness signal for Red Hat OpenShift. They allow developers to build and compose different tasks and pipelines to easily experiment with OpenShift and related technologies. Both okd-coreos-pipeline and okd-operator-pipeline are already used in OKD's CI/CD, and in the future okd-payload-pipeline might supersede OCP CI for building OKD payload components and mirroring OCP payload components.

"},{"location":"guides/automated-vsphere-upi/","title":"Implementing an Automated Installation Solution for OKD on vSphere with User Provisioned Infrastructure (UPI)","text":""},{"location":"guides/automated-vsphere-upi/#introduction","title":"Introduction","text":"

It's possible to completely automate the process of installing OpenShift/OKD on vSphere with User Provisioned Infrastructure by chaining together the various functions of OCT via a wrapper script.

"},{"location":"guides/automated-vsphere-upi/#steps","title":"Steps","text":"
  1. Deploy the DNS, DHCP, and load balancer infrastructure outlined in the Prerequisites section.
  2. Create an install-config.yaml.template file based on the format outlined in the section Sample install-config.yaml file for VMware vSphere of the OKD docs. Do not add a pull secret. The script will query you for one, or it will insert a default one if you use the --auto-secret flag.
  3. Create a wrapper script that:
"},{"location":"guides/automated-vsphere-upi/#prerequisites","title":"Prerequisites","text":""},{"location":"guides/automated-vsphere-upi/#dns","title":"DNS","text":"

* 1 entry for the bootstrap node of the format bootstrap.[cluster].domain.tld
* 3 entries for the master nodes of the form master-[n].[cluster].domain.tld
* An entry for each of the desired worker nodes in the form worker-[n].[cluster].domain.tld
* 1 entry for the API endpoint in the form api.[cluster].domain.tld
* 1 entry for the API internal endpoint in the form api-int.[cluster].domain.tld
* 1 wildcard entry for the Ingress endpoint in the form *.apps.[cluster].domain.tld

"},{"location":"guides/automated-vsphere-upi/#dhcp","title":"DHCP","text":""},{"location":"guides/automated-vsphere-upi/#load-balancer","title":"Load Balancer","text":"

vSphere UPI requires the use of a load balancer. There need to be two pools.

"},{"location":"guides/automated-vsphere-upi/#proxy-optional","title":"Proxy (Optional)","text":"

If the cluster will sit on a private network, you'll need a proxy for outgoing traffic, both for the install process and for regular operation. In the case of the former, the installer needs to pull containers from external registries. In the case of the latter, the proxy is needed when application containers need access to the outside world (e.g. yum installs, external code repositories like GitLab, etc.).

The proxy should be configured to accept connections from the IP subnet of your cluster. A simple proxy to use for this purpose is Squid.
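As a sketch, a minimal squid.conf for this purpose only needs a few lines; the subnet below is illustrative and should be replaced with your cluster subnet:

# /etc/squid/squid.conf (minimal sketch)
acl okd_cluster src 192.168.100.0/24   # replace with your cluster subnet
http_access allow okd_cluster
http_access deny all
http_port 3128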

"},{"location":"guides/automated-vsphere-upi/#wrapper-script","title":"Wrapper Script","text":"
#!/bin/bash

masters_count=3
workers_count=2
template_url="https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/33.20210314.2.0/x86_64/fedora-coreos-33.20210314.2.0-vmware.x86_64.ova"
template_name="fedora-coreos-33.20210201.2.1-vmware.x86_64"
library="Linux ISOs"
cluster_name="mycluster"
cluster_folder="/MyVSPHERE/vm/Linux/OKD/mycluster"
network_name="VM Network"
install_folder=`pwd`

# Import the template
./oct.sh --import-template --library "${library}" --template-url "${template_url}"

# Install the desired OKD tools
oct.sh --install-tools --release 4.6

# Launch the prerun to generate and modify the ignition files
oct.sh --prerun --auto-secret

# Deploy the nodes for the cluster with the appropriate ignition data
oct.sh --build --template-name "${template_name}" --library "${library}" --cluster-name "${cluster_name}" --cluster-folder "${cluster_folder}" --network-name "${network_name}" --installation-folder "${install_folder}" --master-node-count ${masters_count} --worker-node-count ${workers_count}

# Turn on the cluster nodes
oct.sh --cluster-power on --cluster-name "${cluster_name}" --master-node-count ${masters_count} --worker-node-count ${workers_count}

# Run the OpenShift installer
bin/openshift-install --dir=$(pwd) wait-for bootstrap-complete --log-level=info
"},{"location":"guides/automated-vsphere-upi/#future-updates","title":"Future Updates","text":""},{"location":"guides/aws-ipi/","title":"AWS IPI Default Deployment","text":"

This describes the resources used by OpenShift after performing an installation using the default options for the installer.

"},{"location":"guides/aws-ipi/#infrastructure","title":"Infrastructure","text":""},{"location":"guides/aws-ipi/#compute","title":"Compute","text":""},{"location":"guides/aws-ipi/#networking","title":"Networking","text":""},{"location":"guides/aws-ipi/#deployment","title":"Deployment","text":"

See the OKD documentation to proceed with deployment

"},{"location":"guides/azure-ipi/","title":"Azure IPI Default Deployment","text":"

This describes the resources used by OpenShift after performing an installation using the default options for the installer.

"},{"location":"guides/azure-ipi/#infrastructure","title":"Infrastructure","text":""},{"location":"guides/azure-ipi/#compute","title":"Compute","text":""},{"location":"guides/azure-ipi/#networking","title":"Networking","text":""},{"location":"guides/azure-ipi/#deployment","title":"Deployment","text":"

See the OKD documentation to proceed with deployment

"},{"location":"guides/gcp-ipi/","title":"GCP IPI Default Deployment","text":"

This describes the resources used by OpenShift after performing an installation using the default options for the installer.

"},{"location":"guides/gcp-ipi/#infrastructure","title":"Infrastructure","text":""},{"location":"guides/gcp-ipi/#compute","title":"Compute","text":""},{"location":"guides/gcp-ipi/#networking","title":"Networking","text":""},{"location":"guides/gcp-ipi/#platform","title":"Platform","text":""},{"location":"guides/gcp-ipi/#deployment","title":"Deployment","text":"

See the OKD documentation to proceed with deployment

"},{"location":"guides/overview/","title":"Deployment Guides","text":"

The guides linked below provide some examples of how community members are using OKD and provide details of the underlying hardware and platform configurations they are using.

"},{"location":"guides/sno/","title":"Single Node OKD Installation","text":"

This document outlines how to deploy a single node OKD cluster using virt.

"},{"location":"guides/sno/#requirements","title":"Requirements","text":""},{"location":"guides/sno/#procedure","title":"Procedure","text":"

For the complete procedure, please see Building an OKD4 single node cluster with minimal resources

"},{"location":"guides/sri/","title":"Sri's Overkill Homelab Setup","text":"

This document lays out the resources used to create my completely-overkill homelab. This cluster provides all the compute and storage I think I'll need for the foreseeable future, and the CPU, RAM, and storage can all be scaled vertically independently of each other. Not that I think I'll need to do that for a while.

More detail into the deployment and my homelab's Terraform configuration can be found here.

"},{"location":"guides/sri/#hardware","title":"Hardware","text":""},{"location":"guides/sri/#main-cluster","title":"Main cluster","text":"

My hypervisors each host an identical workload. The total size of this cluster is 3 control plane nodes, and 9 worker nodes. So it splits very nicely three ways. Each hypervisor hosts 1 control plane VM and 3 worker VMs.

"},{"location":"guides/sri/#supporting-infrastructure","title":"Supporting infrastructure","text":""},{"location":"guides/sri/#networking","title":"Networking","text":"

OKD, and especially baremetal UPI OKD, requires a very specific network setup. You will most likely need something more flexible than your ISP's router to get everything fully configured. The documentation is very clear on the various DNS records and DHCP static allocations you will need to make, so I won't go into them here.

However, there are a couple of extra things that you may want to set for best results. In particular, I make sure that I have PTR records set up for all my cluster nodes. This is extremely important, as the nodes need a correct PTR record to auto-discover their hostname. Clusters typically do not set themselves up properly if there are hostname collisions!

"},{"location":"guides/sri/#api-load-balancer","title":"API load balancer","text":"

I run a separate smaller VM on the NUC as a single-purpose load balancer appliance, running HAProxy.

The HAProxy config is straightforward. I adapted mine from the example config file created by the ocp4-helpernode playbook.

"},{"location":"guides/sri/#deployment","title":"Deployment","text":"

I create the VMs on the hypervisors using Terraform. The Terraform Libvirt provider is very, very cool. It's also used by openshift-install for its Libvirt-based deployments, so it supports everything needed to deploy OKD nodes. Most importantly, I can use Terraform to supply the VMs with their Ignition configs, which means I don't have to worry about passing kernel args manually or setting up a PXE server to get things going like the official OKD docs would have you do. Terraform also makes it easy to tear down the cluster and reset in case something goes wrong.
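As a sketch of that approach, a domain definition with the dmacvicar/libvirt Terraform provider can take its Ignition config directly; the resource names, sizes and file path below are illustrative, and the volume is assumed to be defined elsewhere:

# Feed the Ignition file to the VM via the libvirt provider
resource "libvirt_ignition" "worker" {
  name    = "worker.ign"
  content = file("${path.module}/ignition/worker.ign")
}

resource "libvirt_domain" "okd_worker" {
  name            = "okd-worker-1"
  memory          = 16384
  vcpu            = 4
  coreos_ignition = libvirt_ignition.worker.id   # no PXE server or manual kernel args needed

  disk {
    volume_id = libvirt_volume.worker.id   # volume assumed to be defined elsewhere
  }

  network_interface {
    network_name = "okd"
    hostname     = "okd-worker-1"
  }
}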

"},{"location":"guides/sri/#post-bootstrap-one-time-setup","title":"Post-Bootstrap One-Time Setup","text":""},{"location":"guides/sri/#storage-with-rook-and-ceph","title":"Storage with Rook and Ceph","text":"

I deploy a Ceph cluster into OKD using Rook. The Rook configuration deploys OSDs on top of the 4TiB HDDs assigned to each worker. I deploy an erasure-coded CephFS pool (6+2) for RWX workloads and a 3x replica block pool for RWO workloads.
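For reference, the two pools described above map onto Rook custom resources roughly as follows; the names and namespace are illustrative, and failure domains, device selection and the rest of the CephCluster spec are omitted:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: rwo-replicated
  namespace: rook-ceph
spec:
  replicated:
    size: 3            # 3x replica pool backing RWO volumes
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: rwx-cephfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - erasureCoded:
        dataChunks: 6    # 6+2 erasure-coded data pool for RWX volumes
        codingChunks: 2
  metadataServer:
    activeCount: 1
    activeStandby: true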

"},{"location":"guides/sri/#monitoring-and-alerting","title":"Monitoring and Alerting","text":"

OKD comes with a very comprehensive monitoring and alerting suite, and it would be a shame not to take advantage of it. I set up an Alertmanager webhook to send any alerts to a small program I wrote that posts the alerts to Discord.
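The Alertmanager side of that is just a webhook receiver; a minimal sketch of the relevant configuration fragment, where the service name and port of the relay program are hypothetical:

# Fragment of alertmanager.yaml (sketch)
route:
  receiver: discord-relay
receivers:
  - name: discord-relay
    webhook_configs:
      - url: http://alert-discord-relay.discord.svc:8080/alerts   # small relay that forwards alerts to a Discord webhook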

I also deploy a Prometheus + Grafana setup into the cluster that collects metrics from the various hypervisors and supporting infrastructure VMs. I use Grafana's built-in Discord alerting mechanism to post those alerts.

"},{"location":"guides/sri/#loadbalancer-with-metallb","title":"LoadBalancer with MetalLB","text":"

MetalLB is a fantastic piece of software that allows on-prem or otherwise non-public-cloud Kubernetes clusters to enjoy the luxury of LoadBalancer-type services. It's dead simple to set up and makes you feel like you're in a real datacenter. I deploy several workloads that don't use standard HTTP and so can't be deployed behind a Route. Without MetalLB, I wouldn't be able to deploy these workloads on OKD at all, but with it, I can!
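With the current MetalLB CRDs, a layer-2 setup boils down to an address pool plus an advertisement; a minimal sketch, where the address range is illustrative and must come from a free range on your LAN:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # replace with a free range on your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool

Any Service of type LoadBalancer then gets an address from that pool automatically.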

"},{"location":"guides/sri/#software-i-run","title":"Software I Run","text":"

I maintain an ansible playbook that handles deploying my workloads into the cluster. I prefer Ansible over other tools like Helm because it has more robust capabilities to store secrets, I find its templating capabilities more flexible and powerful than Helm's (especially when it comes to inlining config files into config maps or creating templated Dockerfiles for BuildConfigs), and because I am already familiar with Ansible and know how it works.

"},{"location":"guides/upi-sno/","title":"Single Node UPI OKD Installation","text":"

This document outlines how to deploy a single node UPI OKD cluster (the real hard way) on bare metal or virtual machines.

"},{"location":"guides/upi-sno/#overview","title":"Overview","text":"

User provisioned infrastructure (UPI) installation of an OKD 4.x single node cluster on bare metal or virtual machines.

N.B. Installer provisioned infrastructure (IPI) is the preferred method as it is much simpler: it automatically provisions and maintains the install for you. However, it is targeted towards cloud and on-prem services, i.e. AWS, GCP and Azure, as well as OpenStack, IBM and vSphere.

If your install falls within these supported options then use IPI; if not, you will more than likely have to fall back on the UPI install method.

At the end of this document I have supplied a link to my repository. It includes some useful scripts and an example install-config.yaml

"},{"location":"guides/upi-sno/#requirements","title":"Requirements","text":"

The base installation should have 7 VMs (for a full production setup), but for our home lab SNO we will use 2 VMs (one for the bootstrap and one for the master/worker node) with the following specs:

N.B. - firewall services are disabled for this installation process

"},{"location":"guides/upi-sno/#architecture-this-refers-to-a-full-high-availability-cluster","title":"Architecture (this refers to a full high availability cluster)","text":"

The diagram below shows an install for a highly available, scalable solution. For our single node install we only need a bootstrap node and a master/worker node (2 bare metal servers or 2 VMs).

"},{"location":"guides/upi-sno/#software","title":"Software","text":"

For the UPI SNO I made use of FCOS (Fedora CoreOS).

FCOS

OC Client & Installer

"},{"location":"guides/upi-sno/#procedure","title":"Procedure","text":"

The following is a manual process of installing and configuring the infrastructure needed.

"},{"location":"guides/upi-sno/#provision-vms-optional-skip-this-step-if-you-using-bare-metal-servers","title":"Provision VM\u2019s (Optional) - Skip this step if you using bare metal servers","text":"

The use of VMs is optional; each node could be a bare metal server. As I did not have several servers at my disposal, I used a NUC (Ryzen 9 with 32G of RAM) and created 2 VMs (bootstrap and master/worker).

I used Cockpit (Fedora) to validate the network and VM setup (from the scripts). Use the virtualization software that you prefer. For the okd-svc machine I used the bare metal server and installed Fedora 37 (this hosted my 2 VMs).

The bootstrap server can be shut down once the master/worker has been fully set up.

Install virtualization

sudo dnf install @virtualization
"},{"location":"guides/upi-sno/#setup-ips-and-mac-addreses","title":"Setup IP's and MAC addreses","text":"

Refer to the "Architecture Diagram" above to set up each VM.

Obviously the IP addresses will change according to your preferred setup (i.e. 192.168.122.x). I have listed all servers, as it will be fairly easy to change the single node cluster to a fully fledged HA cluster by changing the install-config.yaml.

As a useful example, this is what I set up.

Hard-code the MAC addresses (I created a text file to include in the VM network settings):

MAC: 52:54:00:3f:de:37, IP: 192.168.122.253
MAC: 52:54:00:f5:9d:d4, IP: 192.168.122.2
MAC: 52:54:00:70:b9:af, IP: 192.168.122.3
MAC: 52:54:00:fd:6a:ca, IP: 192.168.122.4
MAC: 52:54:00:bc:56:ff, IP: 192.168.122.5
MAC: 52:54:00:4f:06:97, IP: 192.168.122.6
"},{"location":"guides/upi-sno/#install-configure-dependency-software","title":"Install & Configure Dependency Software","text":""},{"location":"guides/upi-sno/#install-configure-apache-web-server","title":"Install & configure Apache Web Server","text":"
dnf install httpd -y

Change default listen port to 8080 in httpd.conf

sed -i 's/Listen 80/Listen 0.0.0.0:8080/' /etc/httpd/conf/httpd.conf

Enable and start the service

systemctl enable httpd
systemctl start httpd
systemctl status httpd

Making a GET request to localhost on port 8080 should now return the default Apache webpage

curl localhost:8080
"},{"location":"guides/upi-sno/#install-haproxy-and-update-the-haproxycfg-as-follows","title":"Install HAProxy and update the haproxy.cfg as follows","text":"
dnf install haproxy -y

Copy HAProxy config

cp ~/openshift-vm-install/haproxy.cfg /etc/haproxy/haproxy.cfg

Update Config

# Global settings
#---------------------------------------------------------------------
global
    maxconn     20000
    log         /dev/log local0 info
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    log                     global
    mode                    http
    option                  httplog
    option                  dontlognull
    option http-server-close
    option redispatch
    option forwardfor       except 127.0.0.0/8
    retries                 3
    maxconn                 20000
    timeout http-request    10000ms
    timeout http-keep-alive 10000ms
    timeout check           10000ms
    timeout connect         40000ms
    timeout client          300000ms
    timeout server          300000ms
    timeout queue           50000ms

# Enable HAProxy stats
listen stats
    bind :9000
    stats uri /stats
    stats refresh 10000ms

# Kube API Server
frontend k8s_api_frontend
    bind :6443
    default_backend k8s_api_backend
    mode tcp

backend k8s_api_backend
    mode tcp
    balance source
    server      bootstrap 192.168.122.253:6443 check
    server      okd-cp-1 192.168.122.2:6443 check
    server      okd-cp-2 192.168.122.3:6443 check
    server      okd-cp-3 192.168.122.4:6443 check

# OCP Machine Config Server
frontend ocp_machine_config_server_frontend
    mode tcp
    bind :22623
    default_backend ocp_machine_config_server_backend

backend ocp_machine_config_server_backend
    mode tcp
    balance source
    server      bootstrap 192.168.122.253:22623 check
    server      okd-cp-1 192.168.122.2:22623 check
    server      okd-cp-2 192.168.122.3:22623 check
    server      okd-cp-3 192.168.122.4:22623 check

# OCP Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.
frontend ocp_http_ingress_frontend
    bind :80
    default_backend ocp_http_ingress_backend
    mode tcp

backend ocp_http_ingress_backend
    balance source
    mode tcp
    server      okd-cp-1 192.168.122.2:80 check
    server      okd-cp-2 192.168.122.3:80 check
    server      okd-cp-3 192.168.122.4:80 check
    server      okd-w-1 192.168.122.5:80 check
    server      okd-w-2 192.168.122.6:80 check

frontend ocp_https_ingress_frontend
    bind *:443
    default_backend ocp_https_ingress_backend
    mode tcp

backend ocp_https_ingress_backend
    mode tcp
    balance source
    server      okd-cp-1 192.168.122.2:443 check
    server      okd-cp-2 192.168.122.3:443 check
    server      okd-cp-3 192.168.122.4:443 check
    server      okd-w-1 192.168.122.5:443 check
    server      okd-w-2 192.168.122.6:443 check

Start the HAProxy service

sudo systemctl start haproxy

Install dnsmasq and set the dnsmasq.conf file as follows

# Configuration file for dnsmasq.

port=53

# The following two options make you a better netizen, since they
# tell dnsmasq to filter out queries which the public DNS cannot
# answer, and which load the servers (especially the root servers)
# unnecessarily. If you have a dial-on-demand link they also stop
# these requests from bringing up the link unnecessarily.

# Never forward plain names (without a dot or domain part)
#domain-needed
# Never forward addresses in the non-routed address spaces.
bogus-priv

no-poll

user=dnsmasq
group=dnsmasq

bind-interfaces

no-hosts
# Include all files in /etc/dnsmasq.d except RPM backup files
conf-dir=/etc/dnsmasq.d,.rpmnew,.rpmsave,.rpmorig

# If a DHCP client claims that its name is "wpad", ignore that.
# This fixes a security hole. see CERT Vulnerability VU#598349
#dhcp-name-match=set:wpad-ignore,wpad
#dhcp-ignore-names=tag:wpad-ignore


interface=eno1
domain=okd.lan

expand-hosts

address=/bootstrap.lab.okd.lan/192.168.122.253
host-record=bootstrap.lab.okd.lan,192.168.122.253

address=/okd-cp-1.lab.okd.lan/192.168.122.2
host-record=okd-cp-1.lab.okd.lan,192.168.122.2

address=/okd-cp-2.lab.okd.lan/192.168.122.3
host-record=okd-cp-2.lab.okd.lan,192.168.122.3

address=/okd-cp-3.lab.okd.lan/192.168.122.4
host-record=okd-cp-3.lab.okd.lan,192.168.122.4

address=/okd-w-1.lab.okd.lan/192.168.122.5
host-record=okd-w-1.lab.okd.lan,192.168.122.5

address=/okd-w-2.lab.okd.lan/192.168.122.6
host-record=okd-w-2.lab.okd.lan,192.168.122.6

address=/okd-w-3.lab.okd.lan/192.168.122.7
host-record=okd-w-3.lab.okd.lan,192.168.122.7

address=/api.lab.okd.lan/192.168.122.1
host-record=api.lab.okd.lan,192.168.122.1
address=/api-int.lab.okd.lan/192.168.122.1
host-record=api-int.lab.okd.lan,192.168.122.1

address=/etcd-0.lab.okd.lan/192.168.122.2
address=/etcd-1.lab.okd.lan/192.168.122.3
address=/etcd-2.lab.okd.lan/192.168.122.4
address=/.apps.lab.okd.lan/192.168.122.1

srv-host=_etcd-server-ssl._tcp,etcd-0.lab.okd.lan,2380
srv-host=_etcd-server-ssl._tcp,etcd-1.lab.okd.lan,2380
srv-host=_etcd-server-ssl._tcp,etcd-2.lab.okd.lan,2380

address=/oauth-openshift.apps.lab.okd.lan/192.168.122.1
address=/console-openshift-console.apps.lab.okd.lan/192.168.122.1

Start the dnsmasq service

sudo /usr/sbin/dnsmasq --conf-file=/etc/dnsmasq.conf

Test that your DNS setup is working correctly

N.B. It's important to verify that DNS works. For example, I found that if api-int.lab.okd.lan didn't resolve (including the reverse lookup) the bootstrap process failed.

# test & results\n$ dig +noall +answer @192.168.122.1 api.lab.okd.lan\napi.lab.okd.lan.    0    IN    A    192.168.122.1\n\n$ dig +noall +answer @192.168.122.1 api-int.lab.okd.lan\napi-int.lab.okd.lan.    0    IN    A    192.168.122.1\n\n$ dig +noall +answer @192.168.122.1 random.apps.lab.okd.lan\nrandom.apps.lab.okd.lan. 0    IN    A    192.168.122.1\n\n$ dig +noall +answer @192.168.122.1 console-openshift-console.apps.lab.okd.lan\nconsole-openshift-console.apps.lab.okd.lan. 0 IN A 192.168.122.1\n\n$ dig +noall +answer @192.168.122.1 okd-bootstrap.lab.okd.lan\nokd-bootstrap.lab.okd.lan. 0    IN    A    192.168.122.253\n\n$ dig +noall +answer @192.168.122.1 okd-cp1.lab.okd.lan\nokd-cp1.lab.okd.lan.    0    IN    A    192.168.122.2\n\n$ dig +noall +answer @192.168.122.1 okd-cp2.lab.okd.lan\nokd-cp2.lab.okd.lan.    0    IN    A    192.168.122.3\n\n\n$ dig +noall +answer @192.168.122.1 okd-cp3.lab.okd.lan\nokd-cp3.lab.okd.lan.    0    IN    A    192.168.122.4\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.1\n1.122.168.192.in-addr.arpa. 0    IN    PTR    okd-svc.okd-dev.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.2\n2.122.168.192.in-addr.arpa. 0    IN    PTR    okd-cp1.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.3\n3.122.168.192.in-addr.arpa. 0    IN    PTR    okd-cp2.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.4\n4.122.168.192.in-addr.arpa. 0    IN    PTR    okd-cp3.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.5\n5.122.168.192.in-addr.arpa. 0    IN    PTR    okd-w1.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.6\n6.122.168.192.in-addr.arpa. 0    IN    PTR    okd-w2.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.7\n7.122.168.192.in-addr.arpa. 0    IN    PTR    okd-w3.lab.okd.lan.\n

Install and configure NFS for the OKD registry. Providing storage for the registry is a requirement, although emptyDir can be specified if necessary.

sudo dnf install nfs-utils -y

Create the share

mkdir -p /shares/registry
chown -R nobody:nobody /shares/registry
chmod -R 777 /shares/registry

Export the share; this allows any host in the 192.168.122.0/24 range to access the NFS export

echo \"/shares/registry  192.168.122.0/24(rw,sync,root_squash,no_subtree_check,no_wdelay)\" > /etc/exports\n\nexportfs -rv\n

Enable and start the NFS related services

sudo systemctl enable nfs-server rpcbind
sudo systemctl start nfs-server rpcbind nfs-mountd

Create an install directory

mkdir ~/okd-install

Copy the install-config.yaml included in the cloned repository (see link at end of the document) to the install directory

cp ~/openshift-vm-install/install-config.yaml ~/okd-install

Where install-config.yaml is as follows

apiVersion: v1
baseDomain: okd.lan
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0 # Must be set to 0 for User Provisioned Installation as worker nodes will be manually deployed.
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: lab # Cluster name
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: 'add your pull secret here'
sshKey: 'add your ssh public key here'

Update the install-config.yaml with your own pull-secret and ssh key.

vim ~/okd-install/install-config.yaml

If needed, create a public/private key pair using OpenSSH
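For example, a key pair can be generated with ssh-keygen; the file path and comment below are only suggestions, not values from this guide:

ssh-keygen -t ed25519 -N '' -f ~/.ssh/okd_lab -C "okd@lab"
# paste the contents of ~/.ssh/okd_lab.pub into the sshKey field of install-config.yaml
cat ~/.ssh/okd_lab.pub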

Generate Kubernetes manifest files

~/openshift-install create manifests --dir ~/okd-install

A warning is shown about making the control plane nodes schedulable.

For the SNO it's mandatory to run workloads on the Control Plane nodes.

If you don't want this (in case you move to the full HA install) you can disable it with:

`sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' ~/okd-install/manifests/cluster-scheduler-02-config.yml`.

Make any other custom changes you like to the core Kubernetes manifest files.

Generate the Ignition config and Kubernetes auth files

~/openshift-install create ignition-configs --dir ~/okd-install

Create a hosting directory to serve the configuration files for the OKD booting process

mkdir /var/www/html/okd4

Copy all generated install files to the new web server directory

cp -R ~/okd-install/* /var/www/html/okd4

Move the CoreOS image to the web server directory (you will need to type this path several times later, so it is a good idea to shorten the name)

mv ~/fhcos-X.X.X-x86_64-metal.x86_64.raw.gz /var/www/html/okd4/fhcos

Change ownership and permissions of the web server directory

chcon -R -t httpd_sys_content_t /var/www/html/okd4/
chown -R apache: /var/www/html/okd4/
chmod 755 /var/www/html/okd4/

Confirm you can see all files added to /var/www/html/okd4/ through Apache

curl localhost:8080/okd4/

Start the VMs/bare metal servers

Execute the appropriate coreos-installer command for each VM type

Change the --ignition-url for each type, i.e.

N.B. For our SNO install we are only going to use bootstrap and master ignition files (ignore worker.ign)

Bootstrap Node

--ignition-url http://192.168.122.1:8080/okd4/bootstrap.ign

Master Node

--ignition-url http://192.168.122.1:8080/okd4/master.ign

Worker Node

--ignition-url http://192.168.122.1:8080/okd4/worker.ign

A typical CLI invocation for CoreOS (using master.ign) would look like this:

$ sudo coreos-installer install /dev/sda --ignition-url http://192.168.122.1:8080/okd4/master.ign --image-url http://192.168.122.1:8080/okd4/fhcos --insecure-ignition --insecure

N.B. If using Fedora CoreOS the device would need to change, i.e. /dev/vda

Once the VMs are running with the relevant ignition files, issue the following commands.

This first command waits for the bootstrap process to complete:

openshift-install --dir ~/$INSTALL_DIR wait-for bootstrap-complete --log-level=debug

Once the bootstrap has completed, issue this command:

openshift-install --dir ~/$INSTALL_DIR wait-for install-complete --log-level=debug

This will take about 40 minutes (or longer). After a successful install you will need to approve certificates and set up the persistent volume for the internal registry.

"},{"location":"guides/upi-sno/#post-install","title":"Post Install","text":"

At this point you can shut down the bootstrap server

Approve certificate signing requests

# Export the KUBECONFIG environment variable (to gain access to the cluster)
export KUBECONFIG=$INSTALL_DIR/auth/kubeconfig

# View CSRs
oc get csr
# Approve all pending CSRs
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
# Wait for kubelet-serving CSRs and approve them too with the same command
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

Configure Registry

oc edit configs.imageregistry.operator.openshift.io

# update the yaml
managementState: Managed

storage:
  pvc:
    claim: # leave the claim blank

# save the changes and execute the following commands

# check for 'pending' state
oc get pvc -n openshift-image-registry

oc create -f registry-pv.yaml
# After a short wait the 'image-registry-storage' pvc should now be bound
oc get pvc -n openshift-image-registry
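The registry-pv.yaml file is not shown above; a minimal sketch of such a PersistentVolume, assuming the NFS export created earlier on this host (192.168.122.1) and an arbitrary 100Gi size, could look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /shares/registry
    server: 192.168.122.1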

Remote Access

As haproxy has been set up as a load balancer for the cluster, add the following to your /etc/hosts file. The IP address will change according to where you set up your haproxy instance.

192.168.8.122 okd-svc api.lab.okd.lan api-int.lab.okd.lan console-openshift-console.apps.lab.okd.lan oauth-openshift.apps.lab.okd.lan downloads-openshift-console.apps.lab.okd.lan alertmanager-main-openshift-monitoring.apps.lab.okd.lan grafana-openshift-monitoring.apps.lab.okd.lan prometheus-k8s-openshift-monitoring.apps.lab.okd.lan thanos-querier-openshift-monitoring.apps.lab.okd.lan

Helper Script

I have included a work-in-progress script to help with setting up the virtual network, the machines, and the utilities used to configure the OKD install: it applies the haproxy config, applies the DNS config, and sets up NFS and the firewall.

Dependencies

As mentioned, it's still a work in progress, but fairly helpful (IMHO) for now.

A typical flow would be (once all the dependencies have been installed)

./virt-env-install.sh config # configures install-config.yaml
./virt-env-install.sh dnsconfig

# before continuing manually test your dns setup

./virt-env-install.sh haproxy
./virt-env-install.sh firewall # can be ignored as firewalld has been disabled
./virt-env-install.sh network
./virt-env-install.sh manifests
./virt-env-install.sh ignition
./virt-env-install.sh copy
./virt-env-install.sh vm bootstrap ok (repeat this for each vm needed)
./virt-env-install.sh vm cp-1 ok
./virt-env-install.sh okd-install bootstrap
./virt-env-install.sh okd-install install

N.B. If there are any discrepancies or improvements please make a note of them. PRs are most welcome!

Screenshot of final OKD install

"},{"location":"guides/upi-sno/#acknowledgement-links","title":"Acknowledgement & Links","text":"

github repo https://github.com/lmzuccarelli/okd-baremetal-install

Thanks and acknowledgement to Ryan Hay

Reference : https://github.com/ryanhay/ocp4-metal-install

"},{"location":"guides/vadim/","title":"Vadim's homelab","text":"

This describes the resources used by OpenShift after performing an installation to make it similar to my homelab setup.

"},{"location":"guides/vadim/#compute","title":"Compute","text":"
  1. Ubiquiti EdgeRouter ER-X

  2. NAS/Bastion host

  3. control plane

  4. compute nodes

"},{"location":"guides/vadim/#router-setup","title":"Router setup","text":"

Once the nodes have booted, assign static IPs using MAC pinning.

The EdgeRouter has dnsmasq to support custom DNS entries, but I wanted network-wide ad filtering and DNS-over-TLS for free, so I followed this guide to install AdGuard Home on the router.

This gives a fancy UI for DNS rewrites and useful stats about the nodes on the network.

"},{"location":"guides/vadim/#nasbastion-setup","title":"NAS/Bastion setup","text":"

HAProxy setup is fairly standard - see ocp4-helpernode for idea.

Along with a (fairly standard) NFS server I also run a single node Ceph cluster, so that I can benefit from CSI / auto-provisioning / snapshots etc.

"},{"location":"guides/vadim/#installation","title":"Installation","text":"

Currently \"single node install\" requires a dedicated throwaway bootstrap node, so I used future compute node (x220 laptop) as a bootstrap node. Once master was installed, the laptop was re-provisioned to become a compute node.

"},{"location":"guides/vadim/#upgrading","title":"Upgrading","text":"

Since I use a single master install, upgrades are a bit complicated. Both nodes are labelled as workers, so upgrading those is not an issue.

Upgrading the single master is tricky, so I use this script to pivot the node into the expected master ignition content, which runs rpm-ostree rebase <new content>. The script needs to be cancelled before it starts installing OS extensions (NetworkManager-ovs etc.).

This class of issue should be addressed in 4.8.

"},{"location":"guides/vadim/#useful-software","title":"Useful software","text":"

The Grafana operator is incredibly useful for setting up monitoring.

This operator helps me to define a configuration for various datasources (i.e. Promtail+Loki) and control dashboard source code using CRs.

SnapScheduler makes periodic snapshots of some PVs so that risky changes can be reverted.

The Tekton operator helps me run a few clean-up jobs in the cluster periodically.

The most useful pipeline I've been using runs oc adm must-gather on this cluster, unpacks it and stores it in Git. This helps me keep track of changes to the cluster in a git repo - and, unlike a GitOps solution like ArgoCD, I can still tinker with things in the console.
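A rough sketch of the kind of commands such a pipeline task runs (an illustration only, not the actual Tekton task; the destination repository is a placeholder):

oc adm must-gather --dest-dir=./must-gather
cd must-gather
git init -b main
git add -A
git commit -m "must-gather $(date -u +%Y-%m-%d)"
git remote add origin <your-git-remote-url>
git push origin main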

Other useful software running in my cluster:

"},{"location":"guides/vsphere-ipi/","title":"vSphere IPI Deployment","text":"

This describes the resources used by OpenShift after performing an installation using the required options for the installer.

"},{"location":"guides/vsphere-ipi/#infrastructure","title":"Infrastructure","text":""},{"location":"guides/vsphere-ipi/#compute","title":"Compute","text":"

All VMs are stored within the folder described above and tagged with the tag created by the installer.

"},{"location":"guides/vsphere-ipi/#networking","title":"Networking","text":"

Networking should be set up by the user; the installer doesn't create anything there. The network name should be provided as an installer argument.

"},{"location":"guides/vsphere-ipi/#miscellaneous","title":"Miscellaneous","text":""},{"location":"guides/vsphere-ipi/#deployment","title":"Deployment","text":"

See the OKD documentation to proceed with deployment

"},{"location":"guides/vsphere-prereqs/","title":"Prerequisites for vSphere UPI","text":"

In this example I describe the setup of a DNS/DHCP server and a Load Balancer on a Raspberry Pi microcomputer. The instructions will almost certainly also work in other environments.

I use Raspberry Pi OS (debian based).

"},{"location":"guides/vsphere-prereqs/#ip-addresses-of-components-in-this-example","title":"IP Addresses of components in this example","text":""},{"location":"guides/vsphere-prereqs/#upgrade-raspberry-pi","title":"Upgrade Raspberry Pi","text":"
sudo apt-get update
sudo apt-get upgrade
sudo reboot
"},{"location":"guides/vsphere-prereqs/#set-static-ip-address-on-raspberry-pi","title":"Set static IP address on Raspberry Pi","text":"

Add this:

interface eth0
static ip_address=192.168.178.5/24
static routers=192.168.178.1
static domain_name_servers=192.168.178.5 8.8.8.8

to /etc/dhcpcd.conf

"},{"location":"guides/vsphere-prereqs/#dhcp","title":"DHCP","text":"

Ensure that no other DHCP servers are activated in the network of your homelab e.g. in your internet router.

The DHCP server in this example is set up with DDNS (Dynamic DNS) enabled.

"},{"location":"guides/vsphere-prereqs/#install","title":"Install","text":"

sudo apt-get install isc-dhcp-server

"},{"location":"guides/vsphere-prereqs/#configure","title":"Configure","text":"

Enable DHCP server for IPv4 on eth0:

/etc/default/isc-dhcp-server

INTERFACESv4="eth0"
INTERFACESv6=""

/etc/dhcp/dhcpd.conf

# dhcpd.conf\n#\n\n####################################################################################\n# Configuration for Dynamic DNS (DDNS) updates                                     #\n# Clients requesting an IP and sending their hostname for domain *.homelab.net     #\n# will be auto registered in the DNS server.                                       #\n####################################################################################\nddns-updates on;\nddns-update-style standard;\n\n# This option points to the copy rndc.key we created for bind9.\ninclude \"/etc/bind/rndc.key\";\n\nallow unknown-clients;\nuse-host-decl-names on;\ndefault-lease-time 300; # 5 minutes\nmax-lease-time 300;     # 5 minutes\n\n# homelab.net DNS zones\nzone homelab.net. {\n  primary 192.168.178.5; # This server is the primary DNS server for the zone\n  key rndc-key;       # Use the key we defined earlier for dynamic updates\n}\nzone 178.168.192.in-addr.arpa. {\n  primary 192.168.178.5; # This server is the primary reverse DNS for the zone\n  key rndc-key;       # Use the key we defined earlier for dynamic updates\n}\n\nddns-domainname \"homelab.net.\";\nddns-rev-domainname \"in-addr.arpa.\";\n####################################################################################\n\n\n####################################################################################\n# Basic configuration                                                              #\n####################################################################################\n# option definitions common to all supported networks...\ndefault-lease-time 300;\nmax-lease-time     300;\n\n# If this DHCP server is the official DHCP server for the local\n# network, the authoritative directive should be uncommented.\nauthoritative;\n\n# Parts of this section will be put in the /etc/resolv.conf of your hosts later\noption domain-name \"homelab.net\";\noption routers 192.168.178.1;\noption subnet-mask 255.255.255.0;\noption domain-name-servers 192.168.178.5;\n\nsubnet 192.168.178.0 netmask 255.255.255.0 {\n  range 192.168.178.40 192.168.178.199;\n}\n####################################################################################\n\n\n####################################################################################\n# Static IP addresses                                                              #\n# (Replace the MAC addresses here with the ones you set in vsphere for your vms)   #\n####################################################################################\ngroup {\n  host bootstrap {\n      hardware ethernet 00:1c:00:00:00:00;\n      fixed-address 192.168.178.200;\n  }\n\n  host master0 {\n      hardware ethernet 00:1c:00:00:00:10;\n      fixed-address 192.168.178.210;\n  }\n\n  host master1 {\n      hardware ethernet 00:1c:00:00:00:11;\n      fixed-address 192.168.178.211;\n  }\n\n  host master2 {\n      hardware ethernet 00:1c:00:00:00:12;\n      fixed-address 192.168.178.212;\n  }\n\n  host worker0 {\n      hardware ethernet 00:1c:00:00:00:20;\n      fixed-address 192.168.178.220;\n  }\n\n  host worker1 {\n      hardware ethernet 00:1c:00:00:00:21;\n      fixed-address 192.168.178.221;\n  }\n\n  host worker2 {\n      hardware ethernet 00:1c:00:00:00:22;\n      fixed-address 192.168.178.222;\n  }  \n}\n
"},{"location":"guides/vsphere-prereqs/#dns","title":"DNS","text":""},{"location":"guides/vsphere-prereqs/#install_1","title":"Install","text":"
sudo apt install bind9 dnsutils
"},{"location":"guides/vsphere-prereqs/#basic-configuration","title":"Basic configuration","text":"

/etc/bind/named.conf.options

include \"/etc/bind/rndc.key\";\n\nacl internals {\n    // lo adapter\n    127.0.0.1;\n\n    // CIDR for your homelab network\n    192.168.178.0/24;\n};\n\noptions {\n        directory \"/var/cache/bind\";\n\n        // If there is a firewall between you and nameservers you want\n        // to talk to, you may need to fix the firewall to allow multiple\n        // ports to talk.  See http://www.kb.cert.org/vuls/id/800113\n\n        // If your ISP provided one or more IP addresses for stable\n        // nameservers, you probably want to use them as forwarders.\n        // Uncomment the following block, and insert the addresses replacing\n        // the all-0's placeholder.\n\n        forwarders {\n          8.8.8.8;\n          8.8.4.4;\n        };\n        forward only;\n\n        //========================================================================\n        // If BIND logs error messages about the root key being expired,\n        // you will need to update your keys.  See https://www.isc.org/bind-keys\n        //========================================================================\n        dnssec-validation no;\n\n        listen-on-v6 { none; };\n        auth-nxdomain no;\n        listen-on port 53 { any; };\n\n        // Allow queries from my Homelab and also from Wireguard Clients.\n        allow-query { internals; };\n        allow-query-cache { internals; };\n        allow-update { internals; };\n        recursion yes;\n        allow-recursion { internals; };\n        allow-transfer { internals; };\n\n        dnssec-enable no;\n\n        check-names master ignore;\n        check-names slave ignore;\n        check-names response ignore;\n};\n

/etc/bind/named.conf.local

#include \"/etc/bind/rndc.key\";\n\n//\n// Do any local configuration here\n//\n\n// Consider adding the 1918 zones here, if they are not used in your\n// organization\n//include \"/etc/bind/zones.rfc1918\";\n\n# All devices that don't belong to the OKD cluster will be maintained here.\nzone \"homelab.net\" {\n   type master;\n   file \"/etc/bind/forward.homelab.net\";\n   allow-update { key rndc-key; };\n};\n\nzone \"c1.homelab.net\" {\n   type master;\n   file \"/etc/bind/forward.c1.homelab.net\";\n   allow-update { key rndc-key; };\n};\n\nzone \"178.168.192.in-addr.arpa\" {\n   type master;\n   notify no;\n   file \"/etc/bind/178.168.192.in-addr.arpa\";\n   allow-update { key rndc-key; };\n};\n

Zone file for homelab.net: /etc/bind/forward.homelab.net

;\n; BIND data file for local loopback interface\n;\n$TTL    604800\n@       IN      SOA     homelab.net. root.homelab.net. (\n                              2         ; Serial\n                         604800         ; Refresh\n                          86400         ; Retry\n                        2419200         ; Expire\n                         604800 )       ; Negative Cache TTL\n;\n@       IN      NS      homelab.net.\n@       IN      A       192.168.178.5\n@       IN      AAAA    ::1\n

The name of the next file depends on the subnet that is used:

/etc/bind/178.168.192.in-addr.arpa

$TTL 1W\n@ IN SOA ns1.homelab.net. root.homelab.net. (\n                                2019070742 ; serial\n                                10800      ; refresh (3 hours)\n                                1800       ; retry (30 minutes)\n                                1209600    ; expire (2 weeks)\n                                604800     ; minimum (1 week)\n                                )\n                        NS      ns1.homelab.net.\n\n200                     PTR     bootstrap.c1.homelab.net.\n\n210                     PTR     master0.c1.homelab.net.\n211                     PTR     master1.c1.homelab.net.\n212                     PTR     master2.c1.homelab.net.\n\n220                     PTR     worker0.c1.homelab.net.\n221                     PTR     worker1.c1.homelab.net.\n222                     PTR     worker2.c1.homelab.net.\n\n5                       PTR     api.c1.homelab.net.\n5                       PTR     api-int.c1.homelab.net.\n
"},{"location":"guides/vsphere-prereqs/#dns-records-for-okd-4","title":"DNS records for OKD 4","text":"

Zone file for c1.homelab.net (our OKD 4 cluster will be in this domain):

/etc/bind/forward.c1.homelab.net

;\n; BIND data file for local loopback interface\n;\n$TTL    604800\n@       IN      SOA     c1.homelab.net. root.c1.homelab.net. (\n                              2         ; Serial\n                         604800         ; Refresh\n                          86400         ; Retry\n                        2419200         ; Expire\n                         604800 )       ; Negative Cache TTL\n;\n@       IN      NS      c1.homelab.net.\n@       IN      A       192.168.178.5\n@       IN      AAAA    ::1\n\nload-balancer IN A      192.168.178.5\n\nbootstrap IN    A       192.168.178.200\n\nmaster0 IN      A       192.168.178.210\nmaster1 IN      A       192.168.178.211\nmaster2 IN      A       192.168.178.212\n\nworker0 IN      A       192.168.178.220\nworker1 IN      A       192.168.178.221\nworker2 IN      A       192.168.178.222\nworker3 IN      A       192.168.178.223\n\n*.apps.c1.homelab.net.  IN CNAME load-balancer.c1.homelab.net.\napi-int.c1.homelab.net. IN CNAME load-balancer.c1.homelab.net.\napi.c1.homelab.net.     IN CNAME load-balancer.c1.homelab.net.\n
"},{"location":"guides/vsphere-prereqs/#set-file-permissions","title":"Set file permissions","text":"

For dynamic DNS (ddns) to work you should do this:

sudo chown -R bind:bind /etc/bind
"},{"location":"guides/vsphere-prereqs/#load-balancer","title":"Load Balancer","text":""},{"location":"guides/vsphere-prereqs/#install_2","title":"Install","text":"
sudo apt-get install haproxy
"},{"location":"guides/vsphere-prereqs/#configure_1","title":"Configure","text":"

/etc/haproxy/haproxy.cfg

global\n        log /dev/log    local0\n        log /dev/log    local1 notice\n        chroot /var/lib/haproxy\n        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners\n        stats timeout 30s\n        user haproxy\n        group haproxy\n        daemon\n\n        # Default SSL material locations\n        ca-base /etc/ssl/certs\n        crt-base /etc/ssl/private\n\n        # Default ciphers to use on SSL-enabled listening sockets.\n        # For more information, see ciphers(1SSL). This list is from:\n        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/\n        # An alternative list with additional directives can be obtained from\n        #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy\n        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS\n        ssl-default-bind-options no-sslv3\n\ndefaults\n        log     global\n        mode    http\n        option  httplog\n        option  dontlognull\n        timeout connect 20000\n        timeout client  10000\n        timeout server  10000\n        errorfile 400 /etc/haproxy/errors/400.http\n        errorfile 403 /etc/haproxy/errors/403.http\n        errorfile 408 /etc/haproxy/errors/408.http\n        errorfile 500 /etc/haproxy/errors/500.http\n        errorfile 502 /etc/haproxy/errors/502.http\n        errorfile 503 /etc/haproxy/errors/503.http\n        errorfile 504 /etc/haproxy/errors/504.http\n\n\n# You can see the stats and observe OKD's bootstrap process by opening\n# http://<IP>:4321/haproxy?stats\nlisten stats\n    bind :4321\n    mode            http\n    log             global\n    maxconn 10\n\n    timeout client  100s\n    timeout server  100s\n    timeout connect 100s\n    timeout queue   100s\n\n    stats enable\n    stats hide-version\n    stats refresh 30s\n    stats show-node\n    stats auth admin:password\n    stats uri  /haproxy?stats\n\n\nfrontend openshift-api-server\n    bind *:6443\n    default_backend openshift-api-server\n    mode tcp\n    option tcplog\n\nbackend openshift-api-server\n    balance source\n    mode tcp\n    server bootstrap bootstrap.c1.homelab.net:6443 check\n    server master0 master0.c1.homelab.net:6443 check\n    server master1 master1.c1.homelab.net:6443 check\n    server master2 master2.c1.homelab.net:6443 check\n\n\nfrontend machine-config-server\n    bind *:22623\n    default_backend machine-config-server\n    mode tcp\n    option tcplog\n\nbackend machine-config-server\n    balance source\n    mode tcp\n    server bootstrap bootstrap.c1.homelab.net:22623 check\n    server master0 master0.c1.homelab.net:22623 check\n    server master1 master1.c1.homelab.net:22623 check\n    server master2 master2.c1.homelab.net:22623 check\n\n\nfrontend ingress-http\n    bind *:80\n    default_backend ingress-http\n    mode tcp\n    option tcplog\n\nbackend ingress-http\n    balance source\n    mode tcp\n    server master0 master0.c1.homelab.net:80 check\n    server master1 master1.c1.homelab.net:80 check\n    server master2 master2.c1.homelab.net:80 check\n\n    server worker0 worker0.c1.homelab.net:80 check\n    server worker1 worker1.c1.homelab.net:80 check\n    server worker2 worker2.c1.homelab.net:80 check\n    server worker3 worker3.c1.homelab.net:80 check\n\n\nfrontend ingress-https\n    bind *:443\n    default_backend ingress-https\n    mode tcp\n    option tcplog\n\nbackend ingress-https\n    balance source\n    mode tcp\n\n    server 
master0 master0.c1.homelab.net:443 check\n    server master1 master1.c1.homelab.net:443 check\n    server master2 master2.c1.homelab.net:443 check\n\n    server worker0 worker0.c1.homelab.net:443 check\n    server worker1 worker1.c1.homelab.net:443 check\n    server worker2 worker2.c1.homelab.net:443 check\n    server worker3 worker3.c1.homelab.net:443 check\n
"},{"location":"guides/vsphere-prereqs/#reboot-and-check-status","title":"Reboot and check status","text":"

Reboot Raspberry Pi:

sudo reboot

Check status of DNS/DHCP server and Load Balancer:

sudo systemctl status haproxy.service
sudo systemctl status isc-dhcp-server.service
sudo systemctl status bind9
"},{"location":"guides/vsphere-prereqs/#proxy-if-on-a-private-network","title":"Proxy (if on a private network)","text":"

If the cluster will sit on a private network, you'll need a proxy for outgoing traffic, both for the install process and for regular operation. In the case of the former, the installer needs to pull containers from external registries. In the case of the latter, the proxy is needed when application containers need access to the outside world (e.g. yum installs, external code repositories like GitLab, etc.).

The proxy should be configured to accept connections from the IP subnet for your cluster. A simple proxy to use for this purpose is Squid.
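A minimal Squid setup on the Raspberry Pi might look like the following; the subnet matches the examples in this guide and Squid's default port is 3128 (adjust both to your environment):

sudo apt-get install squid
# in /etc/squid/squid.conf, add these lines before the final "http_access deny all" rule:
#   acl okd_net src 192.168.178.0/24
#   http_access allow okd_net
sudo systemctl restart squid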

"},{"location":"guides/virt-baremetal-upi/","title":"OKD Virtualization on user provided infrastructure","text":""},{"location":"guides/virt-baremetal-upi/#preparing-the-hardware","title":"Preparing the hardware","text":"

As a first step for providing an infrastructure for OKD Virtualization, you need to prepare the hardware:

"},{"location":"guides/virt-baremetal-upi/#preparing-the-infrastructure","title":"Preparing the infrastructure","text":"

Once your hardware is ready and connected to the network you need to configure your services, your network and your DNS for allowing the OKD installer to deploy the software. You may also need to prepare in advance a few services you'll need during the deployment. Carefully read the Preparing the user-provisioned infrastructure section and ensure all the requirements are met.

"},{"location":"guides/virt-baremetal-upi/#provision-your-hosts","title":"Provision your hosts","text":"

For the bastion / service host you can use CentOS Stream 8. You can follow the CentOS 8 installation documentation but we recommend using the latest CentOS Stream 8 ISO.

For the OKD nodes you'll need Fedora CoreOS. You can get it from the Get Fedora! website; choose the Bare Metal ISO.

"},{"location":"guides/virt-baremetal-upi/#configure-the-bastion-to-host-needed-services","title":"Configure the bastion to host needed services","text":"

Configure Apache to serve on ports 8080/8443, as the standard http/https ports will be used by the haproxy service. Apache is needed to provide the ignition configuration for the OKD nodes.

dnf install -y httpd
sed -i 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf
sed -i 's/Listen 443/Listen 8443/' /etc/httpd/conf.d/ssl.conf
setsebool -P httpd_read_user_content 1
systemctl enable --now httpd.service
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --reload
# Verify it's up:
curl localhost:8080

Configure haproxy:

dnf install haproxy -y
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=22623/tcp
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
setsebool -P haproxy_connect_any 1
systemctl enable --now haproxy.service
"},{"location":"guides/virt-baremetal-upi/#installing-okd","title":"Installing OKD","text":"

OKD's current stable-4 branch delivers OKD 4.8. If you're using an older version we recommend updating to OKD 4.8.

At this point you should have all OKD nodes ready to be installed with Fedora CoreOS and the bastion with all the needed services. Check that all nodes and the bastion have the correct IP addresses and FQDNs and that they are resolvable via DNS.
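For example, a quick loop like the following can be used to confirm forward resolution; the host names and domain here are placeholders for your own environment:

for node in bastion bootstrap master-0 master-1 master-2 worker-0 worker-1; do
    dig +short ${node}.example.lan
done
# spot-check a reverse lookup as well
dig +short -x 192.168.1.10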

As we are going to use the bare metal UPI installation, you'll need to create an install-config.yaml following the example for installing on bare metal

Remember to configure your proxy settings if you have a proxy

"},{"location":"guides/virt-baremetal-upi/#apply-the-workarounds","title":"Apply the workarounds","text":"

You can work around this by adding a custom policy:

echo '(allow virt_qemu_ga_t container_var_lib_t (dir (search)))' >local_virtqemu_ga.cil
semodule -i local_virtqemu_ga.cil

You can work around this by adding a custom policy:

echo '(allow iptables_t cgroup_t (dir (ioctl)))' >local_iptables.cil
semodule -i local_iptables.cil
echo '(allow rpcbind_t unreserved_port_t (udp_socket (name_bind)))' >local_rpcbind.cil
semodule -i local_rpcbind.cil

While the master node is booting, edit the grub config and add console=null to the kernel command line.

echo '(allow openvswitch_t init_var_run_t (capability (fsetid)))' >local_openvswitch.cil
semodule -i local_openvswitch.cil
"},{"location":"guides/virt-baremetal-upi/#installing-hco-and-kubevirt","title":"Installing HCO and KubeVirt","text":"

Once the OKD console is up, connect to it. Go to Operators -> OperatorHub, look for KubeVirt HyperConverged Cluster Operator and install it.

Click on the Create Hyperconverged button; all the defaults should be fine.

"},{"location":"guides/virt-baremetal-upi/#providing-storage","title":"Providing storage","text":"

Shared storage is not mandatory for OKD Virtualization, but it provides many advantages over a configuration based on local storage, which is considered suboptimal.

Among the advantages enabled by shared storage it is worth mentioning:

- Live migration of Virtual Machines
- A founding pillar for HA
- Seamless cluster upgrades without the need to shut down and restart all the VMs on each upgrade
- Centralized storage management enabling elastic scalability
- Centralized backup

"},{"location":"guides/virt-baremetal-upi/#shared-storage","title":"Shared storage","text":"

TBD: rook.io deployment

"},{"location":"guides/virt-baremetal-upi/#local-storage","title":"Local storage","text":"

You can configure local storage for your virtual machines by using the OKD Virtualization hostpath provisioner feature.

When you install OKD Virtualization, the hostpath provisioner Operator is automatically installed. To use it, you must:

- Configure SELinux on your worker nodes via a MachineConfig object.
- Create a HostPathProvisioner custom resource.
- Create a StorageClass object for the hostpath provisioner.

"},{"location":"guides/virt-baremetal-upi/#configuring-selinux-for-the-hostpath-provisioner-on-okd-worker-nodes","title":"Configuring SELinux for the hostpath provisioner on OKD worker nodes","text":"

You can configure SELinux for your OKD Worker nodes using a MachineConfig.
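The OKD Virtualization documentation contains the exact MachineConfig to apply; as a sketch of its general shape only (assuming the /var/hpvolumes path used below, and an ignition version that may differ on your release), it is a worker MachineConfig that runs chcon over the hostpath directory:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-set-selinux-for-hostpath-provisioner
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: hostpath-provisioner.service
        enabled: true
        contents: |
          [Unit]
          Description=Set SELinux context for the hostpath provisioner directory
          Before=kubelet.service

          [Service]
          Type=oneshot
          ExecStart=/usr/bin/chcon -Rt container_file_t /var/hpvolumes

          [Install]
          WantedBy=multi-user.target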

"},{"location":"guides/virt-baremetal-upi/#creating-a-custom-resource-cr-for-the-hostpathprovisioner-operator","title":"Creating a custom resource (CR) for the HostPathProvisioner operator","text":"
  1. Create the HostPathProvisioner custom resource file. For example:

    $ touch hostpathprovisioner_cr.yaml
  2. Edit that file. For example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      pathConfig:
        path: "/var/hpvolumes" # The path of the directory on the node
        useNamingPrefix: false # Use the name of the PVC bound to the created PV as part of the directory name.
  3. Create the CR in the kubevirt-hyperconverged namespace:

    $ oc create -n kubevirt-hyperconverged -f hostpathprovisioner_cr.yaml
"},{"location":"guides/virt-baremetal-upi/#creating-a-storageclass-for-the-hostpathprovisioner-operator","title":"Creating a StorageClass for the HostPathProvisioner operator","text":"
  1. Create the YAML file for the storage class. For example:

    $ touch hppstorageclass.yaml
  2. Edit that file. For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-provisioner
    provisioner: kubevirt.io/hostpath-provisioner
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
  3. Create the StorageClass object:

    $ oc create -f hppstorageclass.yaml
"},{"location":"okd_tech_docs/","title":"OKD Technical Documentation","text":"

Warning

This section is under construction

This section of the documentation is for developers that want to customize OKD.

The section will cover:

The above section will allow you to work on fixes and enhancements to core OKD operators and be able to run them locally.

In addition, this section will also look at the Red Hat build and test setup: how OpenShift and OKD operators are built and tested, and how releases are created.

"},{"location":"okd_tech_docs/#okd-releases","title":"OKD Releases","text":"

OKD is a Kubernetes based platform that delivers a fully managed platform from the core operating system to the Kubernetes platform and the services running on it. All aspects of OKD are managed by a collection of operators.

OKD shares most of the same source code as Red Hat OpenShift. One of the primary differences is that OKD uses Fedora CoreOS where OpenShift uses Red Hat Enterprise Linux CoreOS as the base platform for cluster nodes.

An OKD release is a strictly defined set of software. A release is defined by a release payload, which contains an operator (Cluster Version Operator), a list of manifests to apply and a reference file. You can get information about a release using the oc command line utility, oc adm release info <release name>.

You can find the latest available release here.

You can get the current version of your cluster using the oc get clusterversion command, or from the Cluster Settings page in the Administration section of the OKD web console.

For the OKD 4.10 release named 4.10.0-0.okd-2022-03-07-131213 the command would be oc adm release info 4.10.0-0.okd-2022-03-07-131213

You can add additional command line options to get more specific information about a release:
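For example, two commonly used options (using the release named above):

# show the source commit URL for each component image
oc adm release info 4.10.0-0.okd-2022-03-07-131213 --commit-urls

# show the full pull spec for each component image
oc adm release info 4.10.0-0.okd-2022-03-07-131213 --pullspecs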

"},{"location":"okd_tech_docs/modifying_okd/","title":"Making changes to OKD","text":"

Warning

This section is under construction

The source code for OKD is available on github. OKD is made up of many components bundled into a release. You can find the exact commit for each component included in a release using the oc adm release info command with the --commit-urls option, as outlined in the overview section.

To make a change to OKD you need to:

  1. Identify the component(s) that needs to be changed
  2. Clone/fork the git repository (you can choose to fork the exact commit used to create the image referenced by the OKD release or a newer version of the source)
  3. Make the change
  4. Build the image and push to a container registry that the OKD cluster will be able to access
  5. Run the modified container on a cluster
"},{"location":"okd_tech_docs/modifying_okd/#building-images","title":"Building images","text":"

Most component repositories contain a Dockerfile, so building the image is as simple as podman build or docker build depending on your container tool of choice.

Some component repositories contain a Makefile, so building the image can be done using the Makefile, typically with make build
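For example (the image name and tag here are placeholders, not a project convention):

# repositories that provide a Dockerfile
podman build -f Dockerfile -t quay.io/<username>/console-operator:test .

# repositories that provide a Makefile
make build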

The first thing to do is to replace the FROM images in Dockerfile.rhel7. You may want to copy it to Dockerfile first and then make the changes.

    FROM registry.ci.openshift.org/openshift/release:golang-1.17 AS builder
and
    FROM registry.ci.openshift.org/origin/4.10:base

Note

The original and replacement image may change as golang version and release requirements change.

Question

Is there a way to find the correct base image for an OKD release?

The original images are unavailable to the public. There is an effort to update the Dockerfiles with publicly available images.

"},{"location":"okd_tech_docs/modifying_okd/#example-scenario","title":"Example Scenario","text":"

To complete the scenario the following steps need to be performed:

  1. Fork the console-operator repository
  2. Clone the new fork locally: git clone https://github.com/<username>/console-operator.git
  3. create new branch from master (or main): git switch -c <branch name>
  4. Make needed modifications. Commit/squash as needed. Maintainers like to see 1 commit rather than several.
  5. Create the image: podman build -f <Dockerfile file> -t <target repo>/<username>/console-operator:4.11-<some additional identifier>
  6. Push image to external repository: podman push <target repo>/<username>/console-operator:4.11-<some additional identifier>
  7. Create new release to test with. This requires the oc command to be available. I use the following script (make_payload.sh). It can be modified as needed, such as adding the correct container registry and username:

    server=https://api.ci.openshift.org

    from_release=registry.ci.openshift.org/origin/release:4.11.0-0.okd-2022-04-12-000907
    release_name=4.11.0-0.jef-2022-04-12-0
    to_image=quay.io/fortinj66/origin-release:v4.11-console-operator

    oc adm release new --from-release ${from_release} \
    --name ${release_name} \
    --to-image ${to_image} \
    console-operator=<target repo>/<username>/console-operator:4.11-<some additional identifier>

    from_release, release_name, to_image will need to be updated as needed

  8. Pull installer for cluster release: oc adm release extract --tools <to_image from above> (Make sure image is publicly available)

Warning

When working with some Go projects you may need to be on Go v1.17 or later, as some projects use language features not supported before v1.17. Even though some of the project README.md files may specify v1.15, these README files are out of date.

If it is not clear how to build a component you can look in the release repository at https://github.com/openshift/release/tree/master/ci-operator/config/openshift/<operator repo name>; this is used by the Red Hat build system to build components, so it can be used to determine how to build a component.

You should also check the repo README.md file or any documentation, typically in a doc folder, as there may be some repo specific details

Question

Are there any special repos unique to OKD that need specific mention here, such as machine config?

"},{"location":"okd_tech_docs/modifying_okd/#running-the-modified-image-on-a-cluster","title":"Running the modified image on a cluster","text":"

An OKD release contains a specific set of images, and there are operators that ensure that only the correct set of images is running on a cluster, so you need to take some specific actions to be able to run your modified image on a cluster. You can do this by:

  1. configuring an existing cluster to run a modified image
  2. creating a new installer containing your image, then creating a new cluster with the modified installer
"},{"location":"okd_tech_docs/modifying_okd/#running-on-an-existing-cluster","title":"Running on an existing cluster","text":"

The Cluster Version Operator watches the deployments and images related to the core OKD services to ensure that only valid images are running in the core. This prevents you from changing any of the core images. If you want to replace an image you need to scale the Cluster Version Operator down to 0 replicas:

oc scale --replicas=0 deployment/cluster-version-operator -n openshift-cluster-version

Some images, such as the Cluster Cloud Controller Manager Operator and the Machine API Operator, need additional steps to be able to make changes, but these typically have a docs folder containing additional information about how to make changes to these images.

"},{"location":"okd_tech_docs/modifying_okd/#create-custom-release","title":"Create custom release","text":""},{"location":"okd_tech_docs/operators/","title":"Operator Hub Catalogs","text":"

Warning

This section is under construction

OKD contains many operators which deliver the base platform; however, there are also additional capabilities delivered as operators available via the Operator Hub.

The Operator Hub story for OKD isn't ideal currently (as of OKD 4.10), as OKD shares source with OpenShift, its commercial sibling. OpenShift has additional Operator Hub catalogs provided by Red Hat, which deliver additional capabilities as part of the supported OpenShift product. These additional capabilities are not currently provided to OKD.

OpenShift and OKD share a community catalog of operators, which are a subset of the operators available in the OperatorHub. The operators in the community catalog should run on OKD/OpenShift and will include any additional configuration, such as security context configuration.

However, where an operator in the community catalog has a dependency that Red Hat supports and delivers as part of the additional OpenShift operator catalog, then the community catalog operator will specify the dependency from the supported OpenShift catalog. This results in missing dependency errors when attempting to install on OKD.

Question

Todo

Some useful repo links - do we need to create instructions for specific operators?

"},{"location":"okd_tech_docs/release/","title":"OKD Development Resources","text":"

Warning

This section is under construction

Question

What is the end-to-end process to build an OKD release? Is it possible outside Red Hat CI infrastructure?

"},{"location":"okd_tech_docs/troubleshoot/","title":"Troubleshooting OKD","text":"

Warning

This section is under construction

Todo

Complete this section from comments in discussion thread

"},{"location":"wg_crc/overview/","title":"CRC Build Subgroup","text":"

CodeReady Containers is a cut-down version of OKD, designed to run on a developer's machine, which would not have sufficient resources for a full installation of OKD.

The working group was established after a live session where Red Hat's Charro Gruver walked through the build process for OKD CRC

The build process is currently manual, so the working group was established to automate the process and investigate options for creating a continuous integration setup to build and test OKD CRC.

"},{"location":"wg_docs/content/","title":"Content guidelines","text":""},{"location":"wg_docs/content/#site-content-maintainability","title":"Site content maintainability","text":"

The site has adopted Markdown as the standard way to create content for the site. Previously the site used an HTML based framework, which resulted in content not being frequently updated as there was a steep learning curve.

All content on the site should be created using Markdown. To ensure content is maintainable going forward only markdown features outlined below should be used to create site content. If you wish to use additional components on a page then please contact the documentation working group to discuss your requirements before creating a pull request containing additional components.

MkDocs includes the ability to create custom page templates. This facility has been used to create a customized home page for the site. If any other pages require a custom layout or custom features, then a page template should be used so the content can remain in Markdown. Creation of custom page templates should be discussed with the documentation working group.

"},{"location":"wg_docs/content/#changing-content","title":"Changing content","text":"

MkDocs supports standard Markdown syntax and a set of Markdown extensions provided by plugins. The exact Markdown syntax supported is based on the Python implementation.

MkDocs is configured using the mkdocs.yml file in the root of the git repository.

The mkdocs.yml file defines the top level navigation for the site. The level of indentation is configurable (this requires the theme to support this feature), with Markdown headings at levels 2 (##) and 3 (###) being used for the in-page navigation on the right of the page.
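As an illustration only (these entries are not the site's actual navigation), the nav section of mkdocs.yml has this shape:

nav:
  - Home: index.md
  - Guides:
      - Overview: guides/index.md
      - vSphere IPI: guides/vsphere-ipi.md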

"},{"location":"wg_docs/content/#standard-markdown-features","title":"Standard Markdown features","text":"

The following Markdown syntax is used within the documentation:

| Syntax | Result |
| --- | --- |
| # | Title heading - you can create up to 6 levels of headings by adding additional # characters, so ### is a level 3 heading |
| **text** | will display the word text in bold |
| *text* | will display the word text in italic |
| `code` | inline code block |
| ```shell ... ``` | multi-line (Fenced) code block |
| 1. list item | ordered list |
| - unordered list item | unordered list |
| --- | horizontal break |

HTML can be embedded in Markdown, but embedded HTML should not be used in the documentation. All content should use Markdown with the permitted extensions.

"},{"location":"wg_docs/content/#indentation","title":"Indentation","text":"

MkDocs uses 4 spaces for tabs, so when indenting code ensure you are working with tabs set to 4 spaces rather than 2, which is commonly used.

When using some features of Markdown indentation is used to identify blocks.

1. Ubiquity EdgeRouter ER-X
    - runs DHCP (embedded), custom DNS server via AdGuard

    ![pic](./img/erx.jpg){width=80%}

In the code block above you will see the unordered list item is indented, so it aligns with the content of the ordered list (rather than aligning with the number of the ordered list). The image is also indented so it too aligns with the ordered list text.

Many of the Markdown elements can be nested and indentation is used to define the nesting relationship. If you look down on this page at the Information boxes section, the example shows an example of nesting elements and the Markdown tab shows how indentation is being used to identify the nesting relationships.

"},{"location":"wg_docs/content/#links-within-mkdocs-generated-content","title":"Links within MkDocs generated content","text":"

MkDocs will warn of any internal broken links, so it is important that links within the documentation are recognized as internal links.

Information

Internal links should be to the Markdown file (with .md extension). When the site is generated the filename will be automatically converted to the correct URL

As part of the build process a linkchecker application will check the generated html site for any broken links. You can run this linkchecker locally using the instructions. If any links in the documentation should be excluded from the link checker, such as links to localhost, then they should be added as a regex to the linkcheckerrc file, located in the root folder of the project - see linkchecker documentation for additional information

"},{"location":"wg_docs/content/#markdown-extensions-used-in-okdio","title":"Markdown Extensions used in OKD.io","text":"

There are a number of Markdown extensions being used to create the site. See the mkdocs.yml file to see which extensions are configured. The documentation for the extensions can be found here

"},{"location":"wg_docs/content/#link-configuration","title":"Link configuration","text":"

Links on the page or embedded images can be annotated to control the links and also the appearance of the links:

"},{"location":"wg_docs/content/#image","title":"Image","text":"

Images are embedded in a page using the standard Markdown syntax ![description](URL), but the image can be formatted with Attribute Lists. This is most commonly used to scale an image or center an image, e.g.

![GitHub repo url](images/github-repo-url.png){style="width: 80%" .center }
"},{"location":"wg_docs/content/#external-links","title":"External Links","text":"

External links can also use attribute lists to control behaviors, such as open in new tab or add a css class attribute to the generated HTML, such as external in the example below:

[MkDocs](http://mkdocs.org){: target="_blank" .external }

Info

You can embed an image as the description of a link to create clickable images that launch to another site: [![Image description](Image URL)](target URL "hover text"){: target=_blank}

"},{"location":"wg_docs/content/#youtube-videos","title":"YouTube videos","text":"

It is not possible to embed a YouTube video and have it play in place using pure markdown. You can use HTML within the markdown file to embed a video:

<iframe width="100%" height="500" src="https://www.youtube.com/watch?v=qh1zYW7BLxE&t=431s" title="Building an OKD 4 Home Lab with special guest Craig Robinson" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
"},{"location":"wg_docs/content/#tabs","title":"Tabs","text":"

Content can be organized into a set of horizontal tabs.

=== \"Tab 1\"\n    Hello\n\n=== \"Tab 2\"\n    World\n

produces :

Tab 1

Hello

Tab 2

World

"},{"location":"wg_docs/content/#information-boxes","title":"Information boxes","text":"

The Admonition extension allows you to add themed information boxes using the !!! and ??? syntax:

!!! note
    This is a note

produces:

Note

This is a note

and

??? note
    This is a collapsible note

    You can add a `+` character to force the box to be initially open `???+`

produces a collapsible box:

Note

This is a collapsible note

You can add a + character to force the box to be initially open ???+

You can override the title of the box by providing a title after the Admonition type.

Example

You can also nest different components as required

note

Note

This is a note

collapsible note Note

This is a note

custom title note

Sample Title

This is a note

Markdown
!!!Example
    You can also nest different components as required

    === "note"
        !!!note
            This is a note

    === "collapsible note"
        ???+note
            This is a note

    === "custom title note"
        !!!note "Sample Title"
            This is a note
"},{"location":"wg_docs/content/#supported-admonition-classes","title":"Supported Admonition Classes","text":"

The Admonitions supported by the Material theme are :

Note

This is a note

Abstract

This is an abstract

Info

This is an info

Tip

This is a tip

Success

This is a success

Question

This is a question

Warning

This is a warning

Failure

This is a failure

Danger

This is a danger

Bug

This is a bug

Example

This is an example

Quote

This is a quote

"},{"location":"wg_docs/content/#code-blocks","title":"Code blocks","text":"

Code blocks allow you to insert code or blocks of text in line or as a block.

To use inline you simply enclose the text using a single back quote ` character. So a command can be included using `oc get pods` and will create oc get pods

When you want to include a block of code you use a fence, which is 3 back quote characters at the start and end of the block. After the opening quotes you should also specify the content type contained in the block.

```shell
oc get pods
```

which will produce:

oc get pods

Notice that the block automatically gets the copy to clipboard link to allow easy copy and paste.

Every code block needs to identify its content. Where there is no content type, then text should be used to identify the content as plain text. Some of the common content types are shown in the table below. However, a full list of supported content types can be found here, where the short name in the documentation should be used.

| type | Content |
| --- | --- |
| shell | Shell script content |
| powershell | Windows Power Shell content |
| bat | Windows batch file (.bat or .cmd files) |
| json | JSON content |
| yaml | YAML content |
| markdown or md | Markdown content |
| java | Java programming language |
| javascript or js | JavaScript programming language |
| typescript or ts | TypeScript programming language |
| text | Plain text content |

Advanced highlighting of code blocks

There are some additional features available due to the highlight plugin installed in MkDocs. Full details can be found in the MkDocs Materials documentation.

"},{"location":"wg_docs/content/#line-numbers","title":"Line numbers","text":"

You can add line numbers to a code block with the linenums directive. You must specify the starting line number, 1 in the example below:

``` javascript linenums="1"
<script>
document.getElementById("demo").innerHTML = "My First JavaScript";
</script>
```

creates

<script>
document.getElementById("demo").innerHTML = "My First JavaScript";
</script>

Info

The line numbers do not get included when the copy to clipboard link is selected

"},{"location":"wg_docs/content/#spell-checking","title":"Spell checking","text":"

This project uses cSpell to check spelling within the markdown. The configuration included in the project automatically excludes content in a code block, enclosed in triple back quotes ```.

The configuration file also specifies that US English is the language used in the documentation, so only US English spellings should be used for words where alternate international English spellings exist.

You can add words to be considered valid either within a markdown document or within the cspell configuration file, cspell.json, in the root folder of the documentation repository.

Words defined within a page only apply to that page, but words added to the configuration file apply to the entire project.

"},{"location":"wg_docs/content/#adding-local-words","title":"Adding local words","text":"

You can add a list of words to be considered valid for spell checking purposes as a comment in a Markdown file.

The comment has a specific format to be picked up by the cSpell tool:

<!--- cSpell:ignore linkchecker linkcheckerrc mkdocs mkdoc -->

Here the words linkchecker, linkcheckerrc, mkdocs, and mkdoc are specified as words to be accepted by the spell checker within the file containing the comment.

"},{"location":"wg_docs/content/#adding-global-words","title":"Adding global words","text":"

The cSpell configuration file cspell.json contains a list of words that should always be considered valid when spell checking. The list of words applies to all files being checked.
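
As an illustration only (not part of the documented workflow), a global word can also be appended from the command line with jq, assuming cspell.json keeps its word list in a top-level words array:

# append a word to the global list (assumes a top-level \"words\" array in cspell.json)\njq '.words += [\"linkchecker\"]' cspell.json > cspell.json.tmp && mv cspell.json.tmp cspell.json\n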

"},{"location":"wg_docs/doc-env/","title":"Setup environment","text":""},{"location":"wg_docs/doc-env/#setting-up-a-documentation-environment","title":"Setting up a documentation environment","text":"

To work on documentation and be able to view the rendered web site you need to create an environment, which consists of:

You can create the environment by:

Tooling within a container

You can use a container to run MkDocs so no local installation is required; however, you do need to have Docker Desktop installed if using macOS or Windows. If running on Linux you can use Docker or Podman.

If you have a Node.js environment installed that includes the npm command, then you can make use of the run scripts provided in the project to run the docker or podman commands.

The following commands all assume you are working in the root directory of your local git clone of your forked copy of the okd.io git repo (your working directory should contain the mkdocs.yml and package.json files).

Warning

If you are using Linux with SELinux enabled, then you need to configure your system to allow the local directory containing the cloned git repo to be mounted inside a container. The following commands will configure SELinux to allow this:

(change the path to the location of your okd.io directory)

sudo semanage fcontext -a -t container_file_t '/home/brian/Documents/projects/okd.io(/.*)?'\nsudo restorecon -Rv /home/brian/Documents/projects/okd.io\n
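
If you prefer not to hard-code the path, the same two commands can be run from the root of the cloned repository using the current working directory; this is just a convenience variant of the commands shown above:

# same as above, but using the current directory instead of a hard-coded path\nsudo semanage fcontext -a -t container_file_t \"$(pwd)(/.*)?\"\nsudo restorecon -Rv \"$(pwd)\"\n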
Editing on cluster

There is a community operator available in the OperatorHub on OKD to install Eclipse Che, the upstream project for Red Hat CodeReady Workspaces.

You can use Che to modify site content through your browser, with your OKD cluster hosting the workspace and developer environment.

You need to have access to an OKD cluster and have the Che operator installed and a Che instance deployed and running.

In your OKD console, you should have an applications link in the top toolbar. Open the Applications menu (3x3 grid icon) and select Che. This will open the Che application - Google Chrome is the supported browser and will give the best user experience.

In the Che console side menu, select Create Workspace, then in the Import from Git section add the URL of your fork of the okd.io git repository (it should be similar to https://github.com/<user or org name>/okd.io.git), then press Create & Open to start the workspace.

After a short while the workspace will open (the cluster has to download and start a number of containers, so the first run may take a few minutes depending on your cluster network access). When the workspace is displayed you may have to wait a few seconds for the workspace to initialize and clone your git repo into the workspace. You may also be asked if you trust the author of the git repository; answer yes to this question. Your environment should then be ready to start work.

The web based developer environment uses the same code base as Microsoft Visual Studio Code, so provides a similar user experience, but within your browser.

Local MkDocs and Python tooling installation

You can install MkDocs and associated plugins on your development system and run the tools locally:

Note

The sudo command may be needed to install globally, depending on your system configuration
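
A minimal sketch of a local install using pip is shown below; it assumes Python and pip are already available, and the full list of plugins the site uses is defined in the repository, so check there for the complete set:

# core tooling only; the site may require additional MkDocs plugins\npip install mkdocs mkdocs-material\n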

You now have all the tools installed to be able to create the static HTML site from the markdown documents. The documentation for MkDocs provides full instructions for using MkDocs, but the important commands are:
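
The two commands you are likely to use most often are shown below; mkdocs serve runs a local live-reloading preview and mkdocs build generates the static site (see the MkDocs documentation for the full command reference):

# start a local live-reloading preview\nmkdocs serve\n\n# generate the static HTML site\nmkdocs build\n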

There is also a convenience script ./build.sh in the root of the repository that will check spelling, build the site then run the link checker.

You should verify there are no spelling mistakes, by finding the last line of the CSpell output:

CSpell: Files checked: 31, Issues found: 0 in 0 files\n

Similarly, the link checker creates a summary after checking the site:

That's it. 662 links in 695 URLs checked. 0 warnings found. 0 errors found\n

Any issues reported should be fixed before submitting a pull request to add your changes to the okd.io site.

"},{"location":"wg_docs/doc-env/#creating-the-container","title":"Creating the container","text":"

To create the container image on your local system choose the appropriate command from the list:

This will build a local container image named mkdocs-build
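
As a sketch, and assuming the repository provides a container build file in its root (the project's npm run scripts wrap the real command, so prefer those if you have Node.js installed), the build looks something like this:

# assumes a Dockerfile/Containerfile in the repository root\npodman build -t mkdocs-build .\n# or, using Docker\ndocker build -t mkdocs-build .\n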

"},{"location":"wg_docs/doc-env/#live-editing-of-the-content","title":"Live editing of the content","text":"

To change the content of the web site you can use your preferred editing application. To see the changes you can run a live local copy of okd.io that will automatically update as you save local changes.

Ensure you have the local container image, built in the previous step, available on your system then choose the appropriate command from the list:
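
A hedged sketch of what the live-edit container run can look like is shown below; the mapping to port 8000 matches the next step, but the mount target inside the container and the image entrypoint are assumptions - the project's run scripts encode the real values:

# /mkdocs is an assumed mount target; check the project's run scripts for the real one\npodman run --rm -it -p 8000:8000 -v \"$(pwd)\":/mkdocs mkdocs-build\n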

You can now open a browser to localhost:8000. You should see the okd.io web site in the browser. As you change files on your local system the web pages will automatically update.

When you have completed editing the site use Ctrl-c (hold down the control key then press c) to quit the site.

"},{"location":"wg_docs/doc-env/#build-and-validate-the-site","title":"Build and validate the site","text":"

Before you submit any changes to the site in a pull request please check there are no spelling mistakes or broken links, by running the build script and checking the output.

The build script will create or update the static web site in the public directory - this is what will be created and published as the live site if you submit a pull request with your modifications.
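
Running the script from the repository root is enough; it chains the spell check, the site build, and the link check whose outputs are described below:

./build.sh\n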

You should verify there are no spelling mistakes, by finding the last line of the CSpell output:

CSpell: Files checked: 31, Issues found: 0 in 0 files\n

Further down in the console output will be the summary of the link checker:

That's it. 662 links in 695 URLs checked. 0 warnings found. 0 errors found\n

Any issues reported should be fixed before submitting a pull request to add your changes to the okd.io site.

"},{"location":"wg_docs/doc-env/#live-editing-of-the-content_1","title":"Live editing of the content","text":"

To change the content of the web site you can use the in browser editor provided by Che. To see the changes you can run a live local copy of okd.io that will automatically update as you save local changes.

On the right side of the workspace window you should see 3 icons; hovering over them should reveal they are the Outline, Endpoints and Workspace views. Clicking into the Workspace view, you should see a User Runtimes section with the option to open a new terminal, then 2 commands (Live edit and Build), and finally a link to launch the MkDocs web site (initially this link will not work).

To allow you to see your changes in a live site (where any change you save will automatically be updated on the site) click on the 1. Live edit link. This will launch a new terminal window where the mkdocs serve command will run, which provides a local live site. However, as you are running the development site on a cluster, the Che runtime automatically makes this site available to you. The MkDocs link now points to the site, but you will be asked if you want to open the site in a new tab or in Preview.

Preview will add a 4th icon to the side toolbar and open the web site in the side panel. You can drag the side of the window to resize the browser view to allow you to edit on the left and view the results on the right of your browser window.

If you have multiple monitors you may want to select to open the website in a new Tab or use the MkDocs link, then drag the browser tab on to a different monitor.

By default, the Che environment auto-saves any file modification after half a second of no activity. You can alter this in the preferences section. Whenever a file is saved the live site will update in the browser.

When you have finished editing, simply close the terminal window running the Live edit script. This will stop the web server running the preview site.

"},{"location":"wg_docs/doc-env/#build-and-validate-the-site_1","title":"Build and validate the site","text":"

The build script will create or update the static web site in the public directory - this is what will be created and published as the live site if you submit a pull request with your modifications.

To run the build script simply click the 2. Build link in the Workspace panel.

You should verify there are no spelling mistakes, by finding the last line of the CSpell output:

CSpell: Files checked: 31, Issues found: 0 in 0 files\n

Further down in the console output will be the summary of the link checker:

That's it. 662 links in 695 URLs checked. 0 warnings found. 0 errors found\n

Any issues reported should be fixed before submitting a pull request to add your changes to the okd.io site.

"},{"location":"wg_docs/okd-io/","title":"Contributing to okd.io","text":"

The source for okd.io is in a github repository.

The site is created using MkDocs, which takes Markdown documents and turns them into a static website that can be accessed from a filesystem or served from a web server.

To update or add new content to the site you need to:

The site is created using MkDocs with the Material theme.

"},{"location":"wg_docs/okd-io/#updating-the-site","title":"Updating the site","text":"

To make changes to the site, create a pull request to deliver the changes from your fork of the repo to the main branch of the okd.io repo. Before creating a pull request you should run the build script and verify there are no spelling mistakes or broken links. Details on how to do this can be found at the end of the instructions for setting up a documentation environment.

GitHub automation is used to generate the site and then publish it to GitHub Pages, which serves the site. If your changes contain spelling issues or broken links, the automation will fail and the GitHub Pages site will not be updated, so please do a local test using the build.sh script before creating the pull request.

"},{"location":"wg_docs/overview/","title":"Documentation Subgroup","text":"

The Documentation working group is responsible for improving the OKD documentation: both the community documentation (this site) and the product documentation.

"},{"location":"wg_docs/overview/#joining-the-group","title":"Joining the group","text":"

The Documentation Subgroup is open to all. You don't need to be invited to join; just attend one of the bi-weekly video calls:

"},{"location":"wg_docs/overview/#product-documentation","title":"Product Documentation","text":"

The OKD product documentation is maintained in the same git repository as Red Hat OpenShift product documentation, as they are sibling projects and largely share the same source code.

The process for making changes to the documentation is outlined in the documentation section

"},{"location":"wg_docs/overview/#community-documentation","title":"Community Documentation","text":"

This site is the community documentation. It is hosted on GitHub and uses a static site generator to convert the Markdown documents in the git repo into this website.

Details of how to modify the site content are contained on the page Modifying OKD.io.

"},{"location":"wg_virt/community/","title":"Get involved!","text":"

The OKD Virtualization SIG is a group of people just like you who are aiming to promote the adoption of the virtualization components on OKD.

"},{"location":"wg_virt/community/#social-media","title":"Social Media","text":"

Reddit : r/OKD Virtualization

YouTube : OKD Workgroup meeting

Twitter : Follow @OKD_Virt_SIG

"},{"location":"wg_virt/community/#getting-started-as-a-user-future-contributor","title":"Getting started as a user (future contributor!)","text":"

Before getting started, please read OKD community etiquette guidelines.

Feel free to dive into OKD documentation following the installation guide for setting up your initial OKD deployment on your bare metal datacenter. Once it's up, please follow the OKD documentation regarding Virtualization installation.

If you find difficulties during the process let us know! Please report issues in our GitHub tracker.

TODO: we may switch to the okd organization once it is ready

"},{"location":"wg_virt/community/#getting-started-as-contributor","title":"Getting started as contributor","text":"

The OKD Virtualization SIG is a group of multidisciplinary individuals who are contributing code, writing documentation, reporting bugs, contributing UX and design expertise, and engaging with the community.

Before getting started, we recommend that you:

The OKD Virtualization SIG is a community project, and we welcome contributions from everyone! If you'd like to write code, report bugs, contribute designs, or enhance the documentation, we would love your help!

"},{"location":"wg_virt/community/#testing","title":"Testing","text":"

We're always eager for new contributors to join in improving the quality of OKD Virtualization, no matter your experience level. Please try to deploy and use OKD Virtualization and report issues in our GitHub tracker.

TODO: we may switch to the okd organization once it is ready

"},{"location":"wg_virt/community/#documentation","title":"Documentation","text":"

OKD Virtualization documentation is mostly included in the GitHub openshift-docs repository, and we are working on getting it published on the OKD documentation website.

Some additional documentation may be available within this SubGroup space.

"},{"location":"wg_virt/community/#supporters-sponsors-and-providers","title":"Supporters, Sponsors, and Providers","text":"

OKD Virtualization SIG is still in its early days.

If you are using, supporting or providing services with OKD Virtualization we would like to share your story here!

"},{"location":"wg_virt/overview/","title":"OKD Virtualization Subgroup","text":"

The goal of the OKD Virtualization Subgroup is to provide an integrated solution for classical virtualization users based on OKD, HCO and KubeVirt, including a graphical user interface and deployed using a method suited to bare metal.

Meet our community!

"},{"location":"wg_virt/overview/#documentation","title":"Documentation","text":""},{"location":"wg_virt/overview/#projects","title":"Projects","text":"

The OKD Virtualization Subgroup is monitoring and integrating the following projects in a user consumable virtualization solution:

"},{"location":"wg_virt/overview/#deployment","title":"Deployment","text":""},{"location":"wg_virt/overview/#mailing-list-slack","title":"Mailing List & Slack","text":"

OKD Workgroup Google Group: https://groups.google.com/forum/#!forum/okd-wg

Slack Channel: https://kubernetes.slack.com/messages/openshift-dev

"},{"location":"wg_virt/overview/#todo","title":"TODO","text":""},{"location":"wg_virt/overview/#sig-membership","title":"SIG Membership","text":""},{"location":"wg_virt/overview/#resources-for-the-sig","title":"Resources for the SIG","text":""},{"location":"wg_virt/overview/#automation-in-place","title":"Automation in place:","text":"

HCO main branch gets tested against OKD 4.9: https://github.com/openshift/release/blob/master/ci-operator/config/kubevirt/hyperconverged-cluster-operator/kubevirt-hyperconverged-cluster-operator-main__okd.yaml

HCO precondition job: https://prow.ci.openshift.org/job-history/gs/origin-ci-test/pr-logs/directory/pull-ci-kubevirt-hyperconverged-cluster-operator-main-okd-hco-e2e-image-index-gcp

KubeVirt is uploaded to operatorhub and on community-operators: https://github.com/redhat-openshift-ecosystem/community-operators-prod/tree/main/operators/community-kubevirt-hyperconverged

"},{"location":"working-group/minutes/minutes/","title":"OKD Working Group Meeting Minutes","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/","title":"OKD Working Group Meeting Notes","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/#april-12-2022","title":"April 12, 2022","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/#attendees","title":"Attendees:","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/#agenda","title":"Agenda","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/","title":"OKD Working Group Meeting Notes","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/#may-24-2022","title":"May 24, 2022","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/#attendees","title":"Attendees:","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/#agenda","title":"Agenda","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"OKD.io","text":"

Latest

Help us improve OKD by completing the 2023 OKD user survey

Built around a core of OCI container packaging and Kubernetes container cluster management, OKD is also augmented by application lifecycle management functionality and DevOps tooling. OKD provides a complete open source container application platform.

"},{"location":"#okd-4","title":"OKD 4","text":"

$ openshift-install create cluster

Tons of amazing new features

Automatic updates not only for OKD but also for the host OS, k8s Operators are first class citizens, a fancy UI, and much much more

CodeReady Containers for OKD: local OKD 4 cluster for development

CodeReady Containers brings a minimal OpenShift 4 cluster to your local laptop or desktop computer! Download it here: CodeReady Containers for OKD Images

"},{"location":"#what-is-okd","title":"What is OKD?","text":"

OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment

OKD embeds Kubernetes and extends it with security and other integrated concepts

OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams

OKD is also referred to as Origin in GitHub and in the documentation

OKD is a sibling Kubernetes distribution to Red Hat OpenShift | If you are looking for enterprise-level support, or information on partner certification, Red Hat also offers Red Hat OpenShift Container Platform

"},{"location":"#okd-community","title":"OKD Community","text":"

We know you've got great ideas for improving OKD and its network of open source projects. So roll up your sleeves and come join us in the community!

"},{"location":"#get-started","title":"Get Started","text":"

All contributions are welcome! OKD uses the Apache 2 license and does not require any contributor agreement to submit patches. Please open issues for any bugs or problems you encounter, ask questions in the #openshift-users on Kubernetes Slack Channel, or get involved in the OKD-WG by joining the OKD-WG google group.

"},{"location":"#connect-to-the-community","title":"Connect to the community","text":"

Join the OKD Working Group

"},{"location":"#talk-to-us","title":"Talk to Us","text":""},{"location":"#standardization-through-containerization","title":"Standardization through Containerization","text":"

Standards are powerful forces in the software industry. They can drive technology forward by bringing together the combined efforts of multiple developers, different communities, and even competing vendors.

Open source container orchestration and cluster management at scale

Standardized Linux container packaging for applications and their dependencies

A container-focused OS that's designed for painless management in large clusters

An open source project that provides developer and runtime Kubernetes tools, enabling you to accelerate the development of an Operator

A lightweight container runtime for Kubernetes

Prometheus is a systems and service monitoring toolkit that collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true

"},{"location":"#okd-end-user-community","title":"OKD End User Community","text":"

There is a large, vibrant end user community

"},{"location":"#become-a-part-of-something-bigger","title":"Become a part of something bigger","text":"

OpenShift Commons is open to all community participants: users, operators, enterprises, non-profits, educational institutions, partners, and service providers as well as other open source technology initiatives utilized under the hood or to extend the OpenShift platform

... then OpenShift Commons is the right place for you

"},{"location":"about/","title":"About OKD","text":"

OKD is the community distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is also referred to as Origin in GitHub and in the documentation. OKD makes launching Kubernetes on any cloud or bare metal a snap, simplifies running and updating clusters, and provides all of the tools to make your containerized applications succeed.

"},{"location":"about/#features","title":"Features","text":""},{"location":"about/#what-can-i-run-on-okd","title":"What can I run on OKD?","text":"

OKD is designed to run any Kubernetes workload. It also assists in building and developing containerized applications through the developer console.

For an easier experience running your source code, Source-to-Image (S2I) allows developers to simply provide an application source repository containing code to build and run. It works by combining an existing S2I-enabled container image with application source to produce a new runnable image for your application.

You can see the full list of Source-to-Image builder images and it's straightforward to create your own. Some of our available images include:

"},{"location":"about/#what-sorts-of-security-controls-does-openshift-provide-for-containers","title":"What sorts of security controls does OpenShift provide for containers?","text":"

OKD runs with the following security policy by default:

Many containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:

If you are running your own cluster and want to run a container as root, you can grant that permission to the containers in your current project with the following command:

# Gives the default service account in the current project access to run as UID 0 (root)\noc adm policy add-scc-to-user anyuid -z default\n

See the security documentation for more on confining applications.

"},{"location":"blog/","title":"okd.io Blog","text":"

We look forward to sharing news and useful information about OKD in this blog.

You are also invited to participate: share your experiences and tips with the community by creating your own blog articles for okd.io.

"},{"location":"blog/#blogs","title":"Blogs","text":""},{"location":"blog/#2023","title":"2023","text":"Date Title 2023-07-18 State of affairs in OKD CI/CD"},{"location":"blog/#2022","title":"2022","text":"Date Title 2022-12-12 Building the OKD payload 2022-10-25 OKD Streams: Building the Next Generation of OKD together 2022-10-20 OKD at KubeCon + CloudNativeCon North America 2022 2022-09-09 An Introduction to Debugging OKD Release Artifacts"},{"location":"blog/#2021","title":"2021","text":"Date Title 2021-05-06 OKD Working Group Office Hours at KubeconEU on OpenShift.tv 2021-05-04 Rohde & Schwarz's Journey to OpenShift 4 From OKD to Azure Red Hat OpenShift 2021-03-22 Recap OKD Testing and Deployment Workshop - Videos and Additional Resources 2021-04-19 Please avoid using FCOS 33.20210301.3.1 for new OKD installs 2021-03-16 Save The Date! OKD Testing and Deployment Workshop (March 20) Register Now! 2021-03-07 okd.io now has a blog"},{"location":"charter/","title":"OKD Working Group Charter","text":"

v1.1

2019-09-21

"},{"location":"charter/#introduction","title":"Introduction","text":"

The charter describes the operations of the OKD Working Group (OKD WG).

OKD is the Origin Community Distribution of Kubernetes that is upstream to Red Hat\u2019s OpenShift Container Platform. It is built around a core of OCI containers and Kubernetes container cluster management. OKD is augmented by application lifecycle management functionality and DevOps tooling.

The OKD Working Group's purpose is to discuss, give guidance to, and enable collaboration on current development efforts for OKD, Kubernetes, and related CNCF projects. The OKD Working Group will also include the discussion of shared community goals for OKD 4 and beyond. Additionally, the Working Group will produce supporting materials and best practices for end-users and will provide guidance and coordination for CNCF projects working within the SIG's scope.

The OKD Working Group is independent of both Fedora and the Cloud Native Computing Foundation (CNCF). The OKD Working Group is a community sponsored by Red Hat.

"},{"location":"charter/#mission","title":"Mission","text":"

The mission of the OKD Working Group is:

"},{"location":"charter/#areas-considered-in-scope","title":"Areas considered in Scope","text":"

The OKD Working Group focuses on the following end-user related topics of the lifecycle of cloud-native applications:

The Working Group will work on developing best practices, fostering collaboration between related projects, and working on improving tool interoperability. Additionally, the Working Group will propose new initiatives and projects when capability gaps in the current ecosystem are defined.

The following, non-exhaustive, sample list of activities and deliverables are in-scope for the Working Group:

"},{"location":"charter/#areas-considered-out-of-scope","title":"Areas considered out of Scope","text":"

Anything not explicitly considered in the scope above. Examples include:

"},{"location":"charter/#governance","title":"Governance","text":""},{"location":"charter/#operations","title":"Operations","text":"

The OKD Working Group is run and managed by the following chairs:

Note

The referenced names and chair positions will be edited in-place as chairs are added, removed, or replaced. See the roles of chairs section for more information.

A dedicated git repository will be the authoritative archive for membership list, code, documentation, and decisions made. The repository, along with this charter, will be hosted at github.com/openshift/community.

The mailing list at groups.google.com/forum/#!forum/okd-wg will be used as a place to call for and publish group decisions, and to hold discussions in general.

"},{"location":"charter/#working-group-membership","title":"Working Group Membership","text":"

All active members of the Working Group are listed in the MEMBERS.md file with their name.

New members can apply for membership by creating an Issue or Pull Request on the repository on GitHub indicating their desire to join.

Membership can be surrendered by creating an Issue stating this desire, or by creating a Pull Request to remove one's own name from the members list.

"},{"location":"charter/#decision-process","title":"Decision Process","text":"

This group will seek consensus decisions. After public discussion and consideration of different opinions, the Chair and/or Co-Chair will record a decision and summarize any objections.

All WG members who have joined the GitHub group at least 21 days prior to the vote are eligible to vote. This is to prevent people from rallying outside supporters for their desired outcome.

When the group comes to a decision in a meeting, the decision is tentative. Any group participant may object to a decision reached at a meeting within 7 days of publication of the decision on the GitHub Issue and/or mailing list. That decision must then be confirmed on the GitHub Issue via a Call for Agreement.

The Call for Agreement, when a decision is required, will be posted as a GitHub Issue or Pull Request and must be announced on the mailing list. It is an instrument to reach a time-bounded lazy consensus approval and requires a voting period of no less than 7 days to be defined (including a specific date and time in UTC).

Each Call for Agreement will be considered independently, except for elections of Chairs.

The Chairs will submit all Calls for Agreement that are not vague, unprofessional, off-topic, or lacking sufficient detail to determine what is being agreed.

In the event that a Call for Agreement falls under the delegated authority or within a chartered Sub-Working Group, the Call for Agreement must be passed through the Sub-Working Group before receiving Working Group consideration.

A Call for Agreement may require quorum of Chairs under the circumstances outlined in the Charter and Governing Documents section.

A Call for Agreement is mandatory when:

Once the Call for Agreement voting period has elapsed, all votes are counted, with at least a 51% majority of votes needed for consensus. A Chair will then declare the agreement \u201caccepted\u201d or \u201cdeclined\u201d, ending the Call for Agreement.

Once rejected, a Call for Agreement must be revised before re-submission for a subsequent vote. All rejected Calls for Agreement will be reported to the Working Group as rejected.

"},{"location":"charter/#charter-and-governing-documents","title":"Charter and Governing Documents","text":"

The Working Group may, from time to time, adopt or amend its Governing Documents and Charter, using a modified Call for Agreement process:

For initial approval of this Charter via Call for Agreement all members are eligible to vote, even those that have been a member for less than 21 days. This Charter will be approved if there is a majority of positive votes.

"},{"location":"charter/#organizational-roles","title":"Organizational Roles","text":""},{"location":"charter/#role-of-chairs","title":"Role of Chairs","text":"

The primary role of Chairs is to run operations and the governance of the group. The Chairs are responsible for:

The terms for founding Chairs start on the approval of this charter.

When no candidate has submitted their name for consideration, the current Chairs may appoint an acting Chair until a candidate comes forward.

Chairs must be active members. Any inactivity, disability, or ineligibility results in immediate removal.

Chairs may be removed by petition to the Working Group through the Call for Agreement process outlined above.

Additional Chairs may be added so long as the existing number of Chairs is odd. These Chairs are added using a Call for Agreement. Extra Chairs enjoy the same rights, responsibilities, and obligations of a Chartered Chair. Upon vacancy of an Extra Chair, it may be filled by appointment by the remaining Chairs, or a majority vote of the Working Group until the term naturally expires.

In the event that an even number of Chairs exist and a vote arises, the Chairs will randomly select one Chair to abstain.

"},{"location":"charter/#role-of-sub-working-groups","title":"Role of Sub-Working Groups","text":"

Each Sub-Working Group (SWG) must have a Chair working as an active sponsor. Under the mandate of the Working Group, each SWG will have the autonomy to establish their own charter, membership rules, meeting times, and management processes. Each SWG will also have the authority to make in-scope decisions as delegated by the Working Group.

SWGs are required to submit their agreed Charter to the Working Group for information and archival. The Chairs can petition for dissolution of an inactive or hostile SWG by Call for Agreement. Once dissolved the SWG\u2019s delegated Charter and outstanding authority to make decisions is immediately revoked. The Chairs may then take any required action to restrict access to Working Group Resources.

No SWG will have authority with regards to this Charter or other OKD Working Group Governing Documents.

"},{"location":"communications/","title":"OKD Working Group Communications","text":"

The working group issues regular communications through several different methods. There are also a few ways to contact the working group depending on the type of communication needed. This page will help you navigate the various communication channels that the working group utilizes.

"},{"location":"communications/#e-mail","title":"E-Mail","text":"

The working group maintains a mailing list as well as several email addresses.

Mailing List

okd-wg mailing list

The purpose of this list is to discuss, give guidance, and enable collaboration on current development efforts for OKD4, Fedora CoreOS (FCOS), and Kubernetes. Please note that the focus of this list is the active development of OKD and the processes of this community; it is not intended as a forum for reporting bugs or requesting help with operating OKD.

Reporting Addresses

The working group uses several e-mail addresses to receive communications from the community based on the intent of the message.

chairs@okd.io

The chairs address is for messages that are related to the working group and its processes. It is intended for communications that will go directly to the working group chairs and not the wider community.

security@okd.io

The security address is intended for any reporting of sensitive or confidential security related bugs and findings about OKD.

info@okd.io

The info address is for requesting general information about the working group and its processes.

"},{"location":"communications/#social-media","title":"Social Media","text":"

The working group uses social media to broadcast updates about new releases, working group meetings, and community events.

Twitter

@okd_io

"},{"location":"communications/#slack","title":"Slack","text":"

The working group maintains a presence on the Kubernetes community Slack instance in the #openshift-users channel. This channel is a good place to come for OKD-specific help with operations and usage.

"},{"location":"communications/#github","title":"GitHub","text":"

The working group maintains several repositories on GitHub in the OKD-Project organization. These repositories contain information and discussions about OKD and the working group's future plans.

okd-project/okd discussions

The okd repository discussions board is a good place to visit for researching or raising specific operational issues with OKD.

okd-project/planning project board

The planning repository contains a kanban board which records the current state of the working group and its related projects.

"},{"location":"community/","title":"End User Community","text":"

OKD has an active community of end-users with many different use cases, from enterprises and academic institutions to home hobbyists. In addition to the end-user community there is a smaller community of volunteers that contribute to the OKD project by helping other users resolve issues or by participating in one of the OKD working groups to enhance the OKD project.

"},{"location":"community/#code-of-conduct","title":"Code of Conduct","text":"

We want the OKD community to be a welcoming community, where everyone is treated with respect, so the link to the code of conduct should be made visible at all events.

Red Hat supports the Inclusive Naming Initiative and the OKD project follows the guidelines and recommendations from that project. All contributions to OKD must also follow their guidelines.

"},{"location":"community/#end-user-community_1","title":"End-User community","text":"

The community of OKD users is a self-supporting community. There is no official support for OKD; all help is provided by the community.

The Help section provides details on how to get help for any issues you may be experiencing.

We encourage all users to participate in discussions and to help fellow users where they can.

"},{"location":"community/#contributing-to-okd","title":"Contributing to OKD","text":"

The OKD project has a charter, setting out how the project is run.

If you want to join the team of volunteers working on the OKD project then details of how to become a contributor are set out here.

"},{"location":"conduct/","title":"OKD Community Code of Conduct","text":"

Every community can be strengthened by a diverse variety of viewpoints, insights, opinions, skill sets, and skill levels. However, with diversity comes the potential for disagreement and miscommunication. The purpose of this Code of Conduct is to ensure that disagreements and differences of opinion are conducted respectfully and on their own merits, without personal attacks or other behavior that might create an unsafe or unwelcoming environment.

These policies are not designed to be a comprehensive set of things you cannot do. We ask that you treat your fellow community members with respect and courtesy. This Code of Conduct should be followed in spirit as much as in letter and is not exhaustive.

All okd events and community members are governed by this Code of Conduct and anti-harassment policy. We expect working group chairs and organizers to enforce these guidelines throughout all events, and we expect attendees, speakers, sponsors, and volunteers to help ensure a safe environment for our whole community.

For the purposes of this Code of Conduct:

"},{"location":"conduct/#anti-harassment-policy","title":"Anti-harassment policy","text":"

Harassment includes (but is not limited to) the following behaviors:

Community members asked to stop any harassing behavior are expected to comply immediately. In particular, community members should not use sexual images, activities, or other material. Community members should not use sexual attire or otherwise create a sexualized environment at community events.

In addition to the behaviors outlined above, continuing to behave in a certain way after you have been asked to stop also constitutes harassment, even if that behavior is not specifically outlined in this policy. It is considerate and respectful to stop doing something after you have been asked to stop, and all community members are expected to comply with such requests immediately.

"},{"location":"conduct/#policy-violations","title":"Policy violations","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting codeofconduct@okd.io.

If a community member engages in harassing behavior, organizers or working group chairs may take any action deemed appropriate. These actions may include but are not limited to warning the offender and expelling the offender from an event. The OKD working group leaders might determine that the offender should be barred from participating in the community.

Event organizers and working group leaders will be happy to help community members contact security or local law enforcement, provide escorts to an alternate location, or otherwise assist those experiencing harassment to feel safe for the duration of an event. We value the safety and well-being of our community members and want everyone to feel welcome at our events, both online and in-person.

We expect all community members to follow these policies during all of our events.

The okd Community Code of Conduct is licensed under the Creative Commons Attribution-Share Alike 3.0 license. Our Code of Conduct was adapted from Codes of Conduct of other open source projects, including:

"},{"location":"contributor/","title":"Contributor Community","text":"

OKD is built from many different open source projects - Fedora CoreOS, the CentOS Stream and UBI RPM ecosystems, cri-o, Kubernetes, and many different extensions to Kubernetes. The openshift organization on GitHub holds active development of components on top of Kubernetes and references projects built elsewhere. Generally, you'll want to find the component that interests you and review their README.md for the processes for contributing.

Community process and questions can be raised in our community repo and issues opened in this repository (Bugzilla locations coming soon).

Our unified continuous integration system tests pull requests to the ecosystem and core images, then builds and promotes them after merge. To see the latest development releases of OKD visit our continuous release page. These releases are built continuously and expire after a few days. Long lived versions are pinned and then listed on our stable release page.

All contributions are welcome - OKD uses the Apache 2 license and does not require any contributor agreement to submit patches. Please open issues for any bugs or problems you encounter, ask questions in the OKD discussion forum, or get involved in the Kubernetes project at the container runtime layer.

"},{"location":"contributor/#becoming-a-contributor","title":"Becoming a contributor","text":"

The easiest way to get involved in the community is to:

The OKD project has a charter, setting out how the project is run.

"},{"location":"contributor/#working-groups","title":"Working Groups","text":"

The project is managed by a bi-weekly working group video call:

The main working group is where the major project decisions are made, but when a specific work item needs to be completed a sub-group may be formed, so a focused set of volunteers can work on a specific area.

"},{"location":"crc/","title":"CodeReady Containers for OKD","text":"

CodeReady Containers brings a minimal, single node OKD 4 cluster to your local computer. This cluster provides a minimal environment for development and testing purposes. CodeReady Containers is mainly targeted at running on developers' laptops and desktops. Note that arm64 OKD payload is not yet available.

"},{"location":"crc/#download-codeready-containers-for-okd","title":"Download CodeReady Containers for OKD","text":"

Run a developer instance of OKD4 on your local workstation with CodeReady Containers built for OKD - No Pull Secret Required! The Getting Started Guide explains how to install and use CodeReady Containers.

You can fetch crc binaries without Red Hat subscription here

$ crc config set preset okd\nChanges to configuration property 'preset' are only applied when the CRC instance is created.\nIf you already have a running CRC instance with different preset, then for this configuration change to take effect, delete the CRC instance with 'crc delete', setup it with `crc setup` and start it with 'crc start'.\n\n$ crc config view\n- consent-telemetry                     : yes\n- preset                                : okd\n

If you encounter any problems, please open a discussion item in the OKD GitHub Community!

"},{"location":"crc/#crc-working-group","title":"CRC Working group","text":"

There is a working group looking at automating the OKD CRC build process. If you want technical details on how to build OKD CRC, see the working group section of this site.

"},{"location":"docs/","title":"Documentation","text":"

There are 2 primary sources of information for OKD:

"},{"location":"docs/#updates-and-issues","title":"Updates and Issues","text":"

If you encounter an issue with the documentation or have an idea to improve the content or add new content then please follow the directions below to learn how you can get changes made.

The source for the documentation is managed in GitHub. There are different processes for requesting changes in the community and product documentation:

"},{"location":"docs/#community-documentation","title":"Community documentation","text":"

The OKD Documentation subgroup is responsible for the community documentation. The process for making changes is set out in the working group section of the documentation

"},{"location":"docs/#product-documentation","title":"Product documentation","text":"

The OKD docs are built off the openshift/openshift-docs repo. If you notice any problems in the OKD docs that need to be addressed, you can either create a pull request with those changes against the openshift/openshift-docs repo or create an issue to suggest the changes.

Among the changes you could suggest are:

If you create an issue, please do the following:

"},{"location":"faq/","title":"Frequently Asked Questions (FAQ)","text":"

Below are answers to common questions regarding OKD installation and administration. If you have a suggested question or a suggested improvement to an answer, please feel free to reach out.

"},{"location":"faq/#what-are-the-relations-with-ocp-project-is-okd4-an-upstream-of-ocp","title":"What are the relations with OCP project? Is OKD4 an upstream of OCP?","text":"

In the 3.x release timeframe, OKD was used as an upstream project for OpenShift Container Platform. OKD could be installed on Fedora/CentOS/RHEL and used CentOS-based images to install the cluster. OCP, however, could be installed only on RHEL and its images were rebuilt to be RHEL-based.

The Universal Base Image project has enabled us to run RHEL-based images on any platform, so the full image rebuild is no longer necessary, allowing the OKD4 project to reuse most images from OCP4. There is another critical part of OCP - Red Hat Enterprise Linux CoreOS. Although RHCOS is an open source project (much like RHEL8), it is not a community-driven project. As a result, the OKD workgroup made the decision to use Fedora CoreOS - an open source and community-driven project - as the base for OKD4. This decision allows end-users to modify all parts of the cluster using prepared instructions.

It should be noted that OKD4 is automatically built from the OCP4 CI stream, so most of the tests happen in OCP CI and are mirrored to OKD. As a result, OKD4 CI doesn't have to run a lot of tests to ensure the release is valid.

These relationships are more complex than \"upstream/downstream\", so we use \"sibling distributions\" to describe them.

"},{"location":"faq/#how-stable-is-okd4","title":"How stable is OKD4?","text":"

OKD4 builds are automatically tested by the release-controller. A release is rejected if installation, the upgrade from the previous version, or the conformance test fails. Test results determine the upgrade graph, so for instance, if upgrade tests passed for the beta5->rc edge, clusters on beta5 can be directly updated to the rc release, bypassing beta6.

The OKD stable version is released bi-weekly, following the Fedora CoreOS schedule; client tools are uploaded to GitHub and images are mirrored to Quay.

"},{"location":"faq/#can-i-run-a-single-node-cluster","title":"Can I run a single node cluster?","text":"

Currently, single-node cluster installations cannot be deployed directly by the 4.7 installer. This is a known issue. Single-node cluster installations do work with the 4.8 nightly installer builds.

As an alternative, if OKD version 4.7 is needed, you may have luck with Charro Gruver's OKD 4 Single Node Cluster instructions. You can also use Code Ready Containers (CRC) to run a single-node cluster on your desktop.

"},{"location":"faq/#what-to-do-in-case-of-errors","title":"What to do in case of errors?","text":"

If you experience problems during installation you must collect the bootstrap log bundle; see the instructions.

If you experience problems post installation, collect data of your cluster with:

oc adm must-gather\n

See documentation for more information.

Upload it to a file hosting service and send the link to the developers (Slack channel, ...)

During installation an SSH key is required. It can be used to SSH onto the nodes later on - ssh core@<node ip>

"},{"location":"faq/#where-do-i-seek-support","title":"Where do I seek support?","text":"

OKD is a community-supported distribution; Red Hat does not provide commercial support for OKD installations.

Contact us on Slack:

See https://openshift.tips/ for useful OpenShift tips

"},{"location":"faq/#where-can-i-find-upgrades","title":"Where can I find upgrades?","text":"

https://amd64.origin.releases.ci.openshift.org/

Warning

Nightly builds (from 4.x.0-0.okd) are pruned every 72 hours.

If your cluster uses these images, consider mirroring these files to a local registry.

Builds from the stable-4 stream are not removed.

"},{"location":"faq/#how-can-i-upgrade-my-cluster-to-a-new-version","title":"How can I upgrade my cluster to a new version?","text":"

Find a version for which a tested upgrade path is available from your version on:

https://amd64.origin.releases.ci.openshift.org/

Upgrade options:

Preferred ways:

oc adm upgrade\n
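
If you want to pick a specific target version from the update graph, oc adm upgrade also accepts a target; the version string below is only a placeholder:

# replace 4.x.y with a version reported as available by 'oc adm upgrade'\noc adm upgrade --to=4.x.y\n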

Last resort:

Upgrade to a certain version (will ignore the update graph!)

oc adm upgrade --force --allow-explicit-upgrade=true --to-image=registry.ci.openshift.org/origin/release:4.4.0-0.okd-2020-03-16-105308\n

This will take a while; the upgrade may take several hours. Throughout the upgrade, the Kubernetes API remains accessible, and user workloads are evicted and rescheduled as nodes are updated.

"},{"location":"faq/#interesting-commands-while-an-upgrade-runs","title":"Interesting commands while an upgrade runs","text":"

Check overall upgrade status:

oc get clusterversion\n

Check the status of your cluster operators:

oc get co\n

Check the status of your nodes (cluster upgrades may include base OS updates):

oc get nodes\n
"},{"location":"faq/#how-can-i-find-out-whats-inside-of-a-ci-release-and-which-commit-id-each-component-has","title":"How can I find out what's inside of a (CI) release and which commit id each component has?","text":"

This one is very helpful if you want to know if a certain commit has landed in your current version:

oc adm release info registry.ci.openshift.org/origin/release:4.4  --commit-urls\n
Name:      4.4.0-0.okd-2020-04-10-020541\nDigest:    sha256:79b82f237aad0c38b5cdaf386ce893ff86060a476a39a067b5178bb6451e713c\nCreated:   2020-04-10T02:14:15Z\nOS/Arch:   linux/amd64\nManifests: 413\n\nPull From: registry.ci.openshift.org/origin/release@sha256:79b82f237aad0c38b5cdaf386ce893ff86060a476a39a067b5178bb6451e713c\n\nRelease Metadata:\n  Version:  4.4.0-0.okd-2020-04-10-020541\n  Upgrades: <none>\n\nComponent Versions:\n  kubernetes 1.17.1\n  machine-os 31.20200407.20 Fedora CoreOS\n\nImages:\n  NAME                                           URL\n  aws-machine-controllers                        https://github.com/openshift/cluster-api-provider-aws/commit/5fa82204468e71b44f65a5f24e2675dbfa0f5c29\n  azure-machine-controllers                      https://github.com/openshift/cluster-api-provider-azure/commit/832a43a30d7f00cd6774c1f5cd117aeebbe1b730\n  baremetal-installer                            https://github.com/openshift/installer/commit/a58f24b0df7e3699b39d4ae1d23c45672706934d\n  baremetal-machine-controllers\n  baremetal-operator\n  baremetal-runtimecfg                           https://github.com/openshift/baremetal-runtimecfg/commit/09850a724d9290ffb05db3dd7f4f4c748b982759\n  branding                                       https://github.com/openshift/origin-branding/commit/068fa1eac9f31ffe13089dd3de2ec49c153b2a14\n  cli                                            https://github.com/openshift/oc/commit/2576e482bf003e34e67ba3d69edcf5d411cfd6f3\n  cli-artifacts                                  https://github.com/openshift/oc/commit/2576e482bf003e34e67ba3d69edcf5d411cfd6f3\n  cloud-credential-operator                      https://github.com/openshift/cloud-credential-operator/commit/446680ed10ac938e11626409acb0c076edd3fd52\n  ...\n
"},{"location":"faq/#how-can-i-find-out-the-version-of-a-particular-package-within-an-okd-release","title":"How can I find out the version of a particular package within an OKD release?","text":"
# Download and enter the machine-os-content container.\npodman run --rm -ti `oc adm release info quay.io/openshift/okd:4.13.0-0.okd-2023-06-24-145750 --image-for=machine-os-content`\n\n# Query the particular rpm. For example, to get the version of the cri-o package in the release, use the following:\nrpm -qa cri-o\n
"},{"location":"faq/#how-to-use-the-official-installation-container","title":"How to use the official installation container?","text":"

The official installer container is part of every release.

# Find out the installer image.\noc adm release info quay.io/openshift/okd:4.7.0-0.okd-2021-04-24-103438 --image-for=installer\n\n# Example output\n# quay.io/openshift/okd-content@sha256:521cd3ac7d826749a085418f753f1f909579e1aedfda704dca939c5ea7e5b105\n\n# Run the container via Podman or Docker to perform tasks. e.g. create ignition configurations\ndocker run -v $(pwd):/output -ti quay.io/openshift/okd-content@sha256:521cd3ac7d826749a085418f753f1f909579e1aedfda704dca939c5ea7e5b105 create ignition-configs\n
"},{"location":"help/","title":"Help","text":"

There is no official product support for OKD as it is a community project. All assistance is provided by volunteers from the user community.

"},{"location":"help/#how-to-ask-for-help","title":"How to ask for help","text":"

For questions or feedback, start a discussion on the discussion forum or reach us on Kubernetes Slack on #openshift-users

"},{"location":"help/#community-etiquette","title":"Community Etiquette","text":"

As all assistance is provided by the community, you are reminded of the code-of-conduct when asking a question or replying to a question.

Before starting a new discussion topic, do a search on the discussion forum to see if anyone else has already raised the same issue - then contribute to the existing discussion topic rather than starting a new topic.

When seeking help you should provide all the information a community volunteer may need to assist you. The easier it is for a volunteer to understand your issue, the more likely they are to provide assistance.

This information should include:

Please do not tag people you see answering other questions to try to get a faster answer as it is anti-social. We have an active community and it is up to individuals which questions they feel they want to respond to.

"},{"location":"help/#raising-bugs","title":"Raising bugs","text":"

We are trying to do all the diagnostic work in the discussion forum rather than using issues for the OKD project. If you are certain you have discovered a bug, then please raise an issue, but if you are not sure if you have found a bug then use the discussion forum to discuss it. If it turns out to be a bug, then the discussion topic can be converted to an issue.

"},{"location":"installation/","title":"Install OKD","text":""},{"location":"installation/#plan-your-installation","title":"Plan your installation","text":"

OKD supports two types of cluster install:

IPI is a largely automated install process, where the installer is responsible for setting up the infrastructure, whereas UPI requires you to set up the base infrastructure. You can find further details in the documentation.

OKD supports installation on bare metal hardware, a number of virtualization platforms, and a number of cloud platforms, so you need to decide where you want to install OKD and check that your environment has sufficient resources for the cluster to operate. The documentation has more information to help you plan your installation.

If you want to install on a typical developer workstation, then CodeReady Containers may be a better option, as that is a cut-down installation designed to run on limited compute and memory resources.

You can find examples of OKD installations, setup by OKD community members in the guides section.

"},{"location":"installation/#getting-started","title":"Getting Started","text":"

To obtain the openshift installer and client, visit releases for stable versions or https://amd64.origin.releases.ci.openshift.org/ for nightlies.

You can verify the downloads using:

curl https://www.okd.io/vrutkovs.pub | gpg --import\n

Output

    gpg: key 3D54B6723B20C69F: public key \"Vadim Rutkovsky <vadim@vrutkovs.eu>\" imported\n    gpg: Total number processed: 1\n    gpg:               imported: 1\n
gpg --verify sha256sum.txt.asc sha256sum.txt\n

Output

gpg: Signature made Mon May 25 18:48:22 2020 CEST\ngpg:                using RSA key DB861D01D4D1138A993ADC1A3D54B6723B20C69F\ngpg: Good signature from \"Vadim Rutkovsky <vadim@vrutkovs.eu>\" [ultimate]\ngpg:                 aka \"Vadim Rutkovsky <vrutkovs@redhat.com>\" [ultimate]\ngpg: WARNING: This key is not certified with a trusted signature!\ngpg:          There is no indication that the signature belongs to the owner.\nPrimary key fingerprint: DB86 1D01 D4D1 138A 993A  DC1A 3D54 B672 3B20 C69F\n
sha256sum -c sha256sum.txt\n

Output

release.txt: OK\nopenshift-client-linux-4.4.0-0.okd-2020-05-23-055148-beta5.tar.gz: OK\nopenshift-client-mac-4.4.0-0.okd-2020-05-23-055148-beta5.tar.gz: OK\nopenshift-client-windows-4.4.0-0.okd-2020-05-23-055148-beta5.zip: OK\nopenshift-install-linux-4.4.0-0.okd-2020-05-23-055148-beta5.tar.gz: OK\nopenshift-install-mac-4.4.0-0.okd-2020-05-23-055148-beta5.tar.gz: OK\n

Please note that each nightly release is pruned after 72 hours. If the nightly that you installed was pruned, the cluster may be unable to pull necessary images and may show errors for various functionality (including updates).

Alternatively, if you have the openshift client oc already installed, you can use it to download and extract the openshift installer and client from our container image:

oc adm release extract --tools quay.io/openshift/okd:4.5.0-0.okd-2020-07-14-153706-ga\n

Note

You need a 4.x version of oc to extract the installer and the latest client. You can initially use the official OpenShift client (mirror)

There are full instructions in the OKD documentation for each supported platform, but the main steps for an IPI install are:

  1. extract the downloaded tarballs and copy the binaries into your PATH.
  2. run the following from an empty directory:
    openshift-install create cluster\n
  3. follow the prompts to create the install config (a non-interactive variant is sketched below)
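
If you prefer to review or edit the install config before any infrastructure is created, the same flow can be split into two commands. This is a minimal sketch; the directory name is just an example:

# Generate the install config first so it can be reviewed or edited\nopenshift-install create install-config --dir=mycluster\n\n# Then create the cluster from the same directory\nopenshift-install create cluster --dir=mycluster --log-level=info\n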

Once the install completes successfully the console URL and an admin username and password will be printed. If your DNS records were correct, you should be able to log in to your new OKD4 cluster!

To undo the installation and delete any cloud resources created by the installer, run

openshift-install destroy cluster\n

Note

The OpenShift client tools for your cluster can be downloaded from the help drop down menu at the top of the web console.

"},{"location":"working-groups/","title":"Working Groups","text":"

OKD is governed by working groups as set out in the OKD Working Group Charter

There is a primary working group, where all the main decisions are made regarding the project.

Where an area of the project needs more time or is of interest to a subset of the working group membership, a sub-group is formed for that specific area.

The current sub-groups are:

"},{"location":"working-groups/#okd-primary-working-group","title":"OKD Primary Working Group","text":"

The OKD primary working group meets virtually every other week.

You don't need an invitation to join a working group -- simply join the video call. You may also want to join other online discussions as set out in the contributor section

"},{"location":"blog/2021-03-07-new-blog.html/","title":"okd.io now has a blog","text":"

Todo

This content is for the current Middleman-based OKD.io site

"},{"location":"blog/2021-03-07-new-blog.html/#lets-share-news-and-useful-information-with-each-other","title":"Let's share news and useful information with each other","text":"

We look forward to sharing news and useful information about OKD in this blog in the future.

You are also invited to participate: share your experiences and tips with the community by creating your own blog articles for okd.io.

Here's how to do it:

"},{"location":"blog/2021-03-16-save-the-date-okd-testing-deployment-workshop.html/","title":"Save The Date! OKD Testing and Deployment Workshop (March 20) Register Now!","text":""},{"location":"blog/2021-03-16-save-the-date-okd-testing-deployment-workshop.html/#the-okd-working-group-is-hosting-a-virtual-workshop-on-testing-and-deploying-okd4","title":"The OKD Working Group is hosting a virtual workshop on testing and deploying OKD4","text":"

On March 20th, the OKD Working Group is hosting a one-day event to bring together people from the OKD and related open source project communities to collaborate on testing and documenting the OKD 4 install and upgrade processes for the various platforms people are deploying OKD 4 on, to identify any issues with the current documentation for these processes, and to triage them together.

The day will start with all attendees together in the \u2018main stage\u2019 area for 2 hours, where we will give a short welcome and describe the logistics for the day, give a brief introduction to OKD 4 itself, then walk through an install deployment to vSphere using the UPI approach, along with a few other more universal best practices (such as DNS/DHCP server configuration) that apply to all deployment targets.

Then we will break into tracks specific to the deployment target platforms for deep-dive demos with Q&A, try to answer any questions you have about your specific deployment target's configuration, identify any missing pieces in the documentation, and triage the documentation as we go.

There will be 4 track break-out rooms set up for 3 hours of deployment walk-throughs and Q&A with session leads:

Our goal is to triage our existing community documentation, identify any shortcomings, and encourage your participation in the OKD Working Group's testing of the installation and upgrade processes for each OKD release.

This is a community event and is NOT meant as a substitute for Red Hat technical support.

There is no admission or ticket charge for OKD Working Group events. However, you are required to complete a free hopin.to platform registration; watch the Hopin site for registration and schedule updates.

We are committed to fostering an open and welcoming environment at our working group meetings and events. We set expectations for inclusive behavior through our code of conduct and media policies, and are prepared to enforce these.

You can Register for the workshop here:

https://hopin.com/events/okd-testing-and-deployment-workshop

"},{"location":"blog/2021-03-19-please-avoid-using-fcos-33.20210301.3.1.html/","title":"Please avoid using FCOS 33.20210301.3.1 for new OKD installs","text":"

Hi,

Due to several issues ([1] and [2]), fresh installations using FCOS 33.20210301.3.1 will fail. The fix is coming in Podman 3.1.0.

Please use an older stable release - 33.20210217.3.0 - as a starting point instead. See download links at https://builds.coreos.fedoraproject.org/browser?stream=stable (might need some scrolling).

Note that only fresh installs are affected. Also, you won't be left with outdated packages, as OKD updates itself to the latest stable FCOS content during installation/update.

  1. https://bugzilla.redhat.com/show_bug.cgi?id=1936927
  2. https://github.com/openshift/okd/issues/566

-- Cheers, Vadim

"},{"location":"blog/2021-03-22-recap-okd-testing-deployment-workshop.html/","title":"Recap OKD Testing and Deployment Workshop - Videos and Additional Resources","text":""},{"location":"blog/2021-03-22-recap-okd-testing-deployment-workshop.html/#the-okd-working-group-held-a-virtual-community-hosted-workshop-on-testing-and-deploying-okd4-on-march-20th","title":"The OKD Working Group held a virtual community-hosted workshop on testing and deploying OKD4 on March 20th","text":"

On March 20th, the OKD Working Group hosted a day-long event to bring together people from the OKD and related open source project communities to collaborate on testing and documenting the OKD 4 install and upgrade processes for the various platforms people are deploying OKD 4 on, to identify any issues with the current documentation for these processes, and to triage them together.

The day started with all attendees together in the \u2018main stage\u2019 area for 2 hours, where community members gave a short welcome along with the following four presentations:

Attendees then broke into track sessions specific to the deployment target platforms for deep-dive demos with live Q&A, answered as many questions as possible about each deployment target's configuration, attempted to identify any missing pieces in the documentation, and triaged the documentation as we went along.

There were 4 track break-out rooms set up for 2.5 hours of deployment walk-throughs and Q&A with session leads:

Our goal was to triage our existing community documentation, identify any shortcomings, and encourage your participation in the OKD Working Group's testing of the installation and upgrade processes for each OKD release.

"},{"location":"blog/2021-03-22-recap-okd-testing-deployment-workshop.html/#resources","title":"Resources:","text":""},{"location":"blog/2021-05-04-From-OKD-to-OpenShift-in-3-Years.html/","title":"Rohde & Schwarz's Journey to OpenShift 4 From OKD to Azure Red Hat OpenShift","text":""},{"location":"blog/2021-05-04-From-OKD-to-OpenShift-in-3-Years.html/#from-okd-to-openshift-in-3-years-talk-by-josef-meier-rohde-schwarz-from-openshift-commons-gathering-at-kubecon","title":"From OKD to OpenShift in 3 Years - talk by Josef Meier (Rohde & Schwarz) from OpenShift Commons Gathering at Kubecon","text":"

On May 4th 2021, OKD Working Group member Josef Meier gave a wonderful talk about Rohde & Schwarz's journey to OpenShift 4, from OKD to ARO (Azure Red Hat OpenShift), and discussed the benefits of participating in the OKD Working Group!

Join the OKD-Working Group and add your voice to the conversation!

"},{"location":"blog/2021-05-06-OKD-Office-Hours-at-KubeconEU-on-OpenShiftTV.html/","title":"OKD Working Group Office Hours at KubeconEU on OpenShift.tv","text":""},{"location":"blog/2021-05-06-OKD-Office-Hours-at-KubeconEU-on-OpenShiftTV.html/#video-from-okd-working-group-office-hours-at-kubeconeu-on-openshifttv","title":"Video from OKD Working Group Office Hours at KubeconEU on OpenShift.tv","text":"

On May 6th 2021, OKD Working Group members hosted an hour-long, community-led Office Hour with a brief introduction to the latest release by Red Hat's Charro Gruver, followed by live Q&A!

Join the OKD-Working Group and add your voice to the conversation!

"},{"location":"blog/2022-09-09-an-introduction-to-debugging-okd-release-artifacts.html/","title":"An Introduction to Debugging OKD Release Artifacts","text":"

by Denis Moiseev and Michael McCune

During the course of installing, operating, and maintaining an OKD cluster it is natural for users to come across strange behaviors and failures that are difficult to understand. As Red Hat engineers working on OpenShift, we have many tools at our disposal to research cluster failures and to report our findings to our colleagues. We would like to share some of our experiences, techniques, and tools with the wider OKD community in the hopes of inspiring others to investigate these areas.

As part of our daily activities we spend a significant amount of time investigating bugs, as well as failures in our release images and testing systems. As you might imagine, to accomplish this we use many tools and pieces of tribal knowledge to understand not only the failures themselves, but also the complexity of the build and testing infrastructures. As Kubernetes and OpenShift have grown, the tooling and testing that supports and drives the development process has grown organically alongside them. Fully understanding these processes requires actively following the development cycle, which is not always easy for users who are also focused on delivering a high quality service through their clusters.

On 2 September, 2022, we had the opportunity to record a video of ourselves diving into the OKD release artifacts to show how we investigate failures in the continuous integration release pipeline. In this video we walk through the process of finding a failing release test, examining the Prow console, and then exploring the results that we find. We explain what these artifacts mean, how to further research failures that are found, and share some other web-based tools that you can use to find similar failures, understand the testing workflow, and ultimately share your findings through a bug report.

To accompany the video, here are some of the links that we explore and related content:

Finally, if you do find bugs or would like to report strange behavior in your clusters, remember to visit issues.redhat.com and use the project OCPBUGS.

"},{"location":"blog/2022-10-20-OKD-at-Kubecon-NA-Detroit/","title":"OKD at KubeCon + CloudNativeCon North America 2022","text":"

by Diane Mueller

date: 2022-10-20

Are you heading to KubeCon + CloudNativeCon North America 2022 in Detroit, October 24-28, 2022?

If so, here's where you'll find the members of the OKD Working Group and the Red Hat engineers working on delivering the latest releases of OKD at KubeCon!

"},{"location":"blog/2022-10-20-OKD-at-Kubecon-NA-Detroit/#october-25th","title":"October 25th","text":"

At the OpenShift Commons Gathering on Tuesday, October 25, 2022 | 9:00 a.m. - 6:00 p.m. EDT, we're hosting an in-person OKD Working Group Lunch & Learn Meetup from 12 noon to 3 pm, led by co-chairs Jaime Magiera (ICPSR at University of Michigan Institute for Social Research) and Diane Mueller (Red Hat) with special guests including Michael McCune (Red Hat), in break-out room D at the Westin Book Cadillac, a 10 minute walk from the conference venue. It is followed by a Lightning Talk: OKD Working Group Update & Road Map on the OpenShift Commons main stage at 3:45 pm. The main stage event will be live streamed via Hopin, so if you are NOT attending in person, you'll be able to join us online.

Registration for OpenShift Commons Gathering is FREE and OPEN to ALL for both in-person and virtual attendance - https://commons.openshift.org/gatherings/kubecon-22-oct-25/

"},{"location":"blog/2022-10-20-OKD-at-Kubecon-NA-Detroit/#october-27th","title":"October 27th","text":"

At 11:30 am EDT, the OKD Working Group will hold a KubeCon Virtual Office Hour on the OKD Streams initiative and the latest release, led by OKD Working Group members Vadim Rutkovsky, Luigi Mario Zuccarelli, Christian Glombek and Michelle Krejci!

Registration for the virtual Kubecon/NA event is required to join the Kubecon Virtual Office Hour

If you're attending in person and just want to grab a coffee and have a chat with us, please ping either of the OKD Working Group co-chairs: Jaime Magiera (ICPSR at University of Michigan Institute for Social Research) or Diane Mueller (Red Hat).

Come connect with us to discuss the OKD Road Map, the OKD Streams initiative, the MVP release of OKD on CentOS Streams, and the latest use cases for OKD, and talk all things open with our team.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/","title":"OKD Streams: Building the Next Generation of OKD together","text":"

by Diane Mueller

date: 2022-10-25

OKD is the community distribution of Kubernetes that powers Red Hat OpenShift. The OKD community has created reusable Tekton build pipelines on a shared Kubernetes cluster so that it can manage the build and release processes for OKD in the open. With operate-first.cloud, hosted at the massopen.cloud, the OKD community has launched a fully open source release pipeline that the community can participate in to help support and manage the release cycle itself. The OKD community is now able to build and release stable builds of OKD 4.12 on both Fedora CoreOS and the newly introduced CentOS Stream CoreOS. We are calling it OKD Streams.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#new-patterns-new-cicd-pipelines-and-a-new-coreos","title":"New Patterns, New CI/CD Pipelines and a new CoreOS","text":"

Today we invite you into our OKD Streams initiative. An OKD Stream refers to a build, test, and release pipeline for any configuration of OKD, the open source kubernetes distribution that powers OpenShift. The OKD working group is pleased to announce the availability of tooling and processes that will enable building and testing many configurations, or \"streams\". The OKD Working Group and Red Hat Engineering are now testing one such stream that runs an upstream version of RHEL9 via CentOS Streams CoreOS (\u2018SCOS\u2019 for short) to improve our RHEL9 readiness signal for Red Hat OpenShift. It is the first of many OKD Streams that will enable developers inside and outside of Red Hat to easily experiment with and explore Cloud Native technologies. You can check out our MVP OKD on SCOS release here.

With this initiative, the OKD working group has embraced new patterns and built new partnerships. We have leveraged the concepts in the open source managed service \u2018Operate First\u2019 pattern, worked with the CentOS and CoreOS communities to build a pipeline for building SCOS, and applied new CI/CD technologies (Tekton) to build a new OKD release build pipeline service. The MVP of OKD Streams, for example, is an SCOS-backed version of OKD built with a Tekton pipeline managed by the OKD working group that runs on AWS infrastructure managed by Operate First. Together we are unlocking innovations to get better (and earlier) release signals for Kubernetes, OCP and RHEL and to enable the OKD community to get more deeply involved with the OKD build processes.

The OKD Working Group wanted to make participation in all of these activities easier for all Cloud Native developers, and this has been the motivating force behind the OKD Streams initiative.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#from-the-one-size-fits-all-to-built-to-order","title":"From the \u2018One Size Fits All\u2019 to \u2018Built to Order\u2019","text":"

There are three main problems that both the OKD working group and Red Hat Engineering teams spend a lot of time thinking about:

  1. how do we improve our release signals for OpenShift, RHEL, CoreOS
  2. how do we get features into the hands of our customer and partners faster
  3. how do we enable engineers to experiment and innovate

Previously, what we referred to as an \u2018OKD\u2019 release was built from the most recent release of OKD running on the latest stable release of Fedora CoreOS (FCOS for short). In actuality, we had a single release pipeline that built a release of OKD with a bespoke version of FCOS. These releases of OKD gave us early signals for the impact of new operating system features that would eventually land in RHEL, where they surface in RHEL CoreOS (RHCOS). It was (and still is) a very good way for developers to experiment with OKD and explore its functionality.

The OKD community wanted to empower wider use of OKD for experimentation: some use cases require layering on additional resources, while others call for reducing the footprint for edge and local deployments. OKD has been stable enough for some to run production deployments. CERN\u2019s OKD deployment on OpenStack, for example, is assembled with custom OKD build pipelines. The feedback from these OKD builds has been a source of inspiration for the OKD Streams initiative to enable more such use cases.

The OKD Streams initiative brings community input and feedback into the project quickly, without interrupting the productized builds for OpenShift and OpenShift customers. We can experiment with new features that can then be pushed upstream into Kubernetes or downstream into the OpenShift product. Organizations can reuse the Tekton build pipelines for building streams specific to HPC, OpenStack, bare metal, or whatever their payload customization needs to be.

Our goal is to make it simple for others to experiment.

We are experimenting too. The first OKD Streams \u2018experiment\u2019 built with the new Tekton build pipeline running on an Operate First AWS cluster is OKD running on SCOS: a future version of OpenShift running on a near-future version of RHEL, leveraging CentOS Stream CoreOS. This will improve our RHEL9 readiness signal for OCP. Improved RHEL9 readiness signals, with input from the community, will showcase our work as we explore what the new OKD build service is going to mean for all of us.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#tekton-pipelines-as-the-building-blocks","title":"Tekton Pipelines as the Building Blocks","text":"

Our new OKD Streams are built using Tekton pipelines, which makes it easier for us to explore building many different kinds of pipelines.

Tekton is a Continuous Deployment (CD) system that enables us to run tasks and pipelines in a composable and flexible manner. This fits in nicely with our OKD Streams initiative, where the focus is less on the artifacts that are produced than on the pipelines that build them.

While OKD as a payload remains the core focus of the OKD Working Group, we are also collaborating with the Operate First community to ensure that anyone is able to take the work we have done and lift and shift it to any cloud, enabling OKD to run on any Kubernetes-based infrastructure anywhere. Now anybody can experiment and build their own \u2018stream\u2019 of OKD with the Tekton pipeline.

This new pipeline approach enables builds that can be customized via parameters; even the tasks within the pipeline can be exchanged or moved around, and you can add your own tasks. The pipelines are reusable templates for creating your own testable stream of OKD. Run them on any infrastructure, including locally in Kubernetes using podman, for example, or on a vanilla Kubernetes cluster. We are enabling access to the Operate First managed OKD Build Service to deploy more of these builds and pipelines, both to get some of the ideas we have at Red Hat out into the community for early feedback AND to let other community members test their ideas.

As an open source community, we\u2019re always evolving and learning together. Our goal is to make OKD the goto place to experiment and innovate for the entire OpenShift ecosystem and beyond, to showcase new features and functionalities, and to fail fast and often without impacting product releases or incurring more technical debt.

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#the-ask","title":"THE ASK","text":"

Help drive faster innovation into OCP, OKD, Kubernetes and RHEL along with the multitude of other Cloud Native open source projects that are part of the OpenShift and the cloud native ecosystem.

This project is a game changer for lots of open source communities internally and externally. We know there are folks out there in the OKD working group and in the periphery that haven't spoken up and we'd love to hear from you, especially if you are currently doing bespoke OKD builds. Will this unblock your innovation the way we think it will?

"},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#additional-resources","title":"Additional Resources","text":""},{"location":"blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/#kudos-and-thank-you","title":"Kudos and Thank you","text":"

Operate First\u2019s Infrastructure Team: Thorsten Schwesig, Humair Khan, Tom Coufal, Marcel Hild
Red Hat\u2019s CFE Team: Luigi Zuccarelli, Sherine Khoury
OKD Working Group: Vadim Rutkovsky, Alessandro Di Stefano, Jaime Magiera, Brian Innes
CentOS Cloud and HPC SIGs: Amy Marrich, Christian Glombek, Neal Gompa

"},{"location":"blog/2022-12-12-Building-OKD-payload/","title":"Building the OKD payload","text":"

Over the last couple of months, we've been busy building a new OKD release on CentOS Stream CoreOS (SCOS), and were able to present it at OpenShift Commons Detroit 2022.

While some of us created a Tekton pipeline that could build SCOS on a Kind cluster, others were tediously building the OKD payload with Prow, but also creating a Tekton pipeline for building that payload on any OpenShift or OKD cluster.

The goal of this effort is to enable and facilitate community collaboration and contributions, giving anybody the ability to do their own payload builds and run tests themselves.

This process has been difficult because OpenShift's Prow CI instance is not open to the public, so changes could not easily be tested before PR submission. Even after opening a PR, a non-Red Hatter needs a Red Hat engineer to add the /ok-to-test label in order to start Prow testing.

With the new Tekton pipelines, we are now providing a straightforward way for anybody to build and test their own changes first (or even create their own stream entirely), and then present the results to the OKD Working Group, which will then expedite the review process on the PR.

In this article, I will shed some light on the building blocks of the OKD on SCOS payload and how it is built, both the Prow way and the Tekton way:

"},{"location":"blog/2022-12-12-Building-OKD-payload/#whats-the-payload","title":"What's the payload?","text":"

Until now, the OKD payload, like the OpenShift payload, was built by the ReleaseController in Prow.

The release-controller automatically builds OpenShift release images when new images are created for a given OpenShift release. It detects changes to an image stream, launches a job to build and push the release payload image using oc adm release new, and then runs zero or more ProwJobs against the artifacts generated by the payload.

A release image is nothing more than a ClusterVersionOperator image (CVO), with an extra layer containing the release-manifests folder. This folder contains:

  * image-references: a list of all known images with their SHA digest
  * yaml manifest files for each operator controlled by the CVO
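
If you want to look at these manifests yourself, you can extract them from any release image with oc adm release extract. This is a rough sketch using a release tag that appears earlier on this site; the output directory name is just an example:

# Extract the release-manifests content of a release image to a local directory\noc adm release extract --to=./release-manifests quay.io/openshift/okd:4.7.0-0.okd-2021-04-24-103438\n\n# image-references lists every payload image by name and SHA digest\nhead ./release-manifests/image-references\n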

The list of images that is included in the release-manifests is calculated from the release image stream, taking:

  * all images with the label io.openshift.release.operator=true in that image stream
  * plus any images referenced in the /manifests/image-references file within each of the images with this label.

As you can imagine, the list of images in a release can change from one release to the next, depending on:

  * new operators being delivered within the OpenShift release
  * existing operators adding or removing an operand image
  * operators previously included that are removed from the payload to be delivered independently, through OLM instead.

In order to list the images contained in a release payload, run this command:

oc adm release info ${RELEASE_IMAGE_URL}\n

For example:

oc adm release info quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740\n

Now that we've established what needs to be built, let's take a deeper look at how the OKD on SCOS payload is built.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#building-okdscos-the-prow-way-railway_track","title":"Building OKD/SCOS the Prow way :railway_track:","text":"

The obvious way to build OKD on SCOS is to use Prow - THE Kubernetes-based CI/CD system, which is what builds OCP and OKD on FCOS already today. This is what Kubernetes uses upstream as well. :shrug:

For a new OKD release to land in the releases page, there's a whole bunch of Prow jobs that run. Hang on! It's a long story...

"},{"location":"blog/2022-12-12-Building-OKD-payload/#imagestreams","title":"ImageStreams","text":"

Let's start at the end :wink:, and prepare a new image stream for OKD on SCOS images. This ImageStream (IS) is a placeholder for all images that form the OKD/SCOS payload.

For OKD on Fedora CoreOS (OKD/FCOS) it's named okd. For OKD/SCOS, this ImageStream is named okd-scos.

This ImageStream includes all payload images contained in the specific OKD release based on CentOS Stream CoreOS (SCOS).

Among these payload images, we distinguish:

  * Images that can be shared between OCP and OKD. These are built in Prow and mirrored into the okd-scos ImageStream.
  * Images that have to be specifically built for OKD/SCOS, which are directly tagged into the okd-scos ImageStream. This is the case for images that are specific to the underlying operating system, or contain RHEL packages. These are: the installer images, the machine-config-operator image, the machine-os-content that includes the base operating system OSTree, as well as the ironic image for provisioning bare-metal nodes, and a few other images.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#triggers-for-building-most-payload-images","title":"Triggers for building most payload images","text":"

Now that we've got the recipient Image Stream for the OKD payload images, let's start building some payloads!

Take the Cluster Network Operator for example: For this operator, the same image can be used on OCP CI and OKD releases. Most payload images fit into this case.

For such an image, the build is pretty straightforward. When a PR is filed for a GitHub repository that is part of a release payload:

  * The pre-submit jobs run. They essentially build the image and store it in an ImageStream in an ephemeral namespace to run tests against several platforms (AWS, GCP, BareMetal, Azure, etc.)
  * Once the tests are green and the PR is approved and merged, the post-submit jobs run. They essentially promote the built image to the appropriate release-specific ImageStream:
    * If the PR is for master, images are pushed to the ${next-release} ImageStream
    * If the PR is for release-${MAJOR}.${MINOR}, images are pushed to the ${MAJOR}.${MINOR} ImageStream

Next, the OCP release controller, which runs at every change to the ImageStream, will mirror all images from the ${MAJOR}.${MINOR} ImageStream to the scos-${MAJOR}.${MINOR} ImageStream.

As mentioned before, some of the images are not mirrored, and that brings us to the next section, on building those images that have content (whether code or manifests) specific to OKD.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#trigger-for-building-the-okd-specific-payload-images","title":"Trigger for building the OKD-specific payload images","text":"

For the OKD-specific images, the CI process is a bit different, as the image is built in the PostSubmit job and then directly promoted to the okd-scos IS, without going through the OCP CI to OKD mirroring step. This is called a variant configuration. You can see this for MachineConfigOperator for example.

The built images land directly in the scos-${MAJOR}-${MINOR} ImageStream.

That is why there's no need for OCP's CI release controller to mirror these images from the CI ImageStream: during the PostSubmit phase, images are already getting built in parallel for OCP, OKD/FCOS and OKD/SCOS and pushed, respectively, to ocp/$MAJOR.$MINOR, origin/$MAJOR.$MINOR and origin/scos-$MAJOR.$MINOR.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#okd-release-builds","title":"OKD release builds","text":"

Now the ImageStream scos-$MAJOR.$MINOR is getting populated by payload images. With every new image tag, the release controller for OKD/SCOS will build a release image.

The ReleaseController ensures that OpenShift update payload images (aka release images) are created whenever an ImageStream representing the images in a release is updated.

Thanks to the annotation release.openshift.io/config on the scos-${MAJOR}-{MINOR} ImageStream, the controller will:

  1. Create a tag in the scos-${MAJOR}-{MINOR} ImageStream that uses the release name + current timestamp.
  2. Mirror all of the tags in the input ImageStream so that they can't be pruned.
  3. Launch a job in the job namespace to invoke oc adm release new from the mirror pointing to the release tag we created in step 1.
  4. If the job succeeds in pushing the tag, it sets an annotation on that tag release.openshift.io/phase = \"Ready\", indicating that the release can be used by other steps. And that's how a new release appears in https://origin-release.ci.openshift.org/#4.13.0-0.okd-scos
  5. The release state switches to \"Verified\" when the verification end-to-end test job succeeds.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#building-the-tekton-way-motorway","title":"Building the Tekton way :motorway:","text":"

Building with Prow has the advantage of being driven by new code being pushed to payload components, thus building fresh releases as the code of github.com/openshift evolves.

The problem is that Prow, along with all the clusters involved with it, the ImageStreams, etc., is not accessible to the OKD community outside of Red Hat. Also, users might be interested in building a custom OKD payload in their own environment, to experiment with exchanging components for example.

To remove this impediment, the OKD team has been working on the OKD Payload pipeline based on Tekton.

Building OKD payloads with Tekton can be done by cloning the okd-payload-pipeline repository. One extra advantage of this repository is the ability to see the list of components that form the OKD payload: in fact, the list under buildconfigs corresponds to the images in the final OKD payload. This list is currently manually synced with the list of OCP images on each release.
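
As a quick way to explore this, you can clone the repository and list the build configs it carries. A small sketch, assuming the repository lives under the okd-project organization on GitHub:

# Clone the pipeline repository and list the payload components it builds\ngit clone https://github.com/okd-project/okd-payload-pipeline.git\nls okd-payload-pipeline/buildconfigs\n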

The pipeline is fairly simple. Take the build-from-scratch.yaml for example. It has 3 main tasks:

  * Build the base image and the builder image, with which all the payload images will be built
    * The builder image is a CentOS Stream 9 container image that includes all the dependencies needed to build payload components and is used as the build environment for them
    * The built binaries are then layered onto a CentOS Stream 9 base image, creating a payload component image.
    * The base image is shared across all the images in the release payload
  * Build payload images in batches (starting with the ones that don't have any dependencies)
  * Finally, as all OKD payload component images are in the image stream, the OKD release image is in turn built, using the oc adm release new command.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#triggers","title":"Triggers","text":"

For the moment, this pipeline has no triggers. It can be executed manually when needed. We are planning to automatically trigger the pipeline on a daily cadence.
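
For reference, a manual run with the Tekton CLI might look like the sketch below; the pipeline name and namespace are assumptions and will depend on how the pipeline was deployed:

# List the pipelines that were deployed, then start the payload build by hand\ntkn pipeline list -n okd-payload\ntkn pipeline start build-from-scratch -n okd-payload --use-param-defaults --showlog\n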

"},{"location":"blog/2022-12-12-Building-OKD-payload/#batch-build-task","title":"Batch Build Task","text":"

With a set of BuildConfigs passed in its parameters, this task relies on an openshift oc image containing the client binary, loops over the list of build configs with oc start-build, and waits for all the builds to complete.
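
Stripped of the Tekton plumbing, the core of the task boils down to something like the following sketch; the namespace is an assumption, and the real task starts the builds in parallel rather than sequentially:

#!/bin/bash\n# Hypothetical stand-in for the batch build task: start each BuildConfig passed as an\n# argument and block until it completes.\nset -euo pipefail\n\nnamespace=\"okd-payload\"   # assumed namespace containing the BuildConfigs\n\nfor bc in \"$@\"; do\n  # --wait blocks until the build finishes and exits non-zero on failure\n  oc -n \"${namespace}\" start-build \"${bc}\" --wait\ndone\n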

"},{"location":"blog/2022-12-12-Building-OKD-payload/#new-release-task","title":"New Release Task","text":"

This task simply uses an OpenShift client image to call oc adm release new, which creates the release image from the release image stream (on the OKD/OpenShift cluster where this Tekton pipeline is running), and then mirrors the release image and all the payload component images to a registry configured in its parameters.
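
Roughly, the two oc invocations behind this task look like the sketch below; the namespace, release name and registry are placeholders, not the values used by the actual pipeline:

# Build a release image from the local \"release\" image stream and push it to a registry\noc adm release new --from-image-stream=release --namespace=okd-payload --name=4.13.0-0.okd-scos-custom --to-image=quay.io/example/okd-scos-release:4.13.0-0.okd-scos-custom\n\n# Mirror the release image and all referenced payload images to the target registry\noc adm release mirror --from=quay.io/example/okd-scos-release:4.13.0-0.okd-scos-custom --to=quay.io/example/okd-scos\n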

"},{"location":"blog/2022-12-12-Building-OKD-payload/#buildconfigs","title":"BuildConfigs","text":"

As explained above, the OKD payload Tekton pipeline relies heavily on the buildconfigs folder. This folder contains one BuildConfig YAML file for each image included in the release payload.

Each build config simply uses a builder image to build the operator binary, invoking the correct Dockerfile in the operator repository. Then, the binary is copied as a layer on top of an OKD base image, which is built in the preparatory task of the pipeline.

This process currently uses the OpenShift Builds API. We are planning to move these builds to the Shipwright Builds API in order to enable builds outside of OCP or OKD clusters.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#updating-build-configs","title":"Updating build configs","text":"

Upon deploying the Tekton OKD payload pipeline on an OKD (or OpenShift) cluster, Kustomize is used in order to:

  * patch the BuildConfig files, adding TAGS to the build arguments according to the type of payload we want to build (based on FCOS, SCOS or any other custom stream)
  * patch the BuildConfig files, replacing the builder image references to the non-public registry.ci.openshift.org/ocp/builder in the payload component's Dockerfiles with the builder image reference from the local image stream
  * set resource requests and limits if needed
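
Because these patches are plain Kustomize overlays, deploying a particular variant is a single apply of the chosen overlay. A minimal sketch, where the overlay path is hypothetical:

# Apply the pipeline manifests with the overlay matching the stream you want to build\noc apply -k okd-payload-pipeline/<your-overlay-directory>\n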

"},{"location":"blog/2022-12-12-Building-OKD-payload/#preparing-for-a-new-release","title":"Preparing for a new release","text":"

The procedure to prepare a new release is still a work in progress at the time of writing.

To build a new release, each BuildConfig file should be updated with the git branch corresponding to that release. In the future, the branch can be passed along as a kustomization, or in the parameters of the pipeline.

The list of images from a new OCP release (obtained through oc adm release info) must now be synced with the BuildConfigs present here (a rough comparison is sketched below):

  * For any new image, a new BuildConfig file must be added
  * For any image removed from the OCP release, the corresponding BuildConfig file must be removed.
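
One way to eyeball that sync is to compare the tag names in the OCP release with the BuildConfig file names. This is only a rough sketch: the release pullspec is an example, the jq path is assumed from the release-info JSON, and BuildConfig file names don't necessarily match tag names one to one:

# List the image tags contained in an OCP release\noc adm release info quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64 -o json | jq -r '.references.spec.tags[].name' | sort\n\n# List the BuildConfig files in this repository for comparison\nfor f in buildconfigs/*.yaml; do basename \"${f}\" .yaml; done | sort\n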

"},{"location":"blog/2022-12-12-Building-OKD-payload/#take-away","title":"Take away","text":""},{"location":"blog/2022-12-12-Building-OKD-payload/#what-are-our-next-steps","title":"What are our next steps?","text":"

In the coming weeks and months, you can expect lots of changes, especially as the OKD community picks up usage of OKD/SCOS and does its own Tekton pipeline runs:

  * Work to automate the OKD release procedure is in progress, by automatically verifying payload image signatures, signing the release, and tagging it on GitHub. The goal is to deliver a new OKD/SCOS on a sprint (3-weekly) basis, and to provide both the OCP teams and the OKD community with a fresh release to test much earlier than previously with the OCP release cadence.
  * For the moment, OKD/SCOS releases are only verified on AWS. To gain more confidence in our release payloads, we will expand the test matrix to other platforms such as GCP, vSphere and Baremetal.
  * Enable GitOps on the Tekton pipeline repository, so that changes to the pipeline are automatically deployed on OperateFirst for the community to use the latest and greatest.
  * The OKD Working Group will be collaborating with the Mass Open Cloud to allow for deployments of test clusters on their baremetal infrastructure.
  * The OKD Working Group will be publishing the Tekton Tasks and Pipelines used to build the SCOS Operating System as well as the OKD payload to Tekton Hub and Artifact Hub.
  * The OKD operators Tekton pipeline will be used for community builds of optional OLM operators. A first OKD operator has already been built with it, and other operators are to follow, starting with the Pipelines operator, which has long been an ask by the community.
  * Additionally, we are working on multi-arch releases for both OKD/SCOS and OKD/FCOS.

"},{"location":"blog/2022-12-12-Building-OKD-payload/#opened-perspectives","title":"Opened perspectives","text":"

Although in the near future the OKD team will still rely on Prow to build the payload images, the Tekton pipeline will start getting used to finalize the release.

In addition, this Tekton pipeline has opened up new perspectives, even for OCP teams.

One such example is the OpenShift API team, who would like to use the Tekton pipeline to test API changes by building all components that depend on the OpenShift API from a given PR, creating an OKD release and testing it, thus getting very quick feedback on the impact of the API changes on the OKD (and later OCP) releases.

Another example is the possibility of building images on platforms other than OpenShift or OKD, replacing build configs with Shipwright or, why not, docker build...

Whatever your favorite flavor is, we are looking forward to seeing the pipelines in action, increasing collaboration and improving our community feedback loop.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/","title":"State of affairs in OKD CI/CD","text":"

by Jakob Meng

date: 2023-07-18

OKD is a community distribution of Kubernetes which is built from Red Hat OpenShift components on top of Fedora CoreOS (FCOS) and recently also CentOS Stream CoreOS (SCOS). The OKD variant based on Fedora CoreOS is called OKD or OKD/FCOS. The SCOS variant is often referred to as OKD/SCOS.

The previous blog posts introduced OKD Streams and its new Tekton pipelines for building OKD/FCOS and OKD/SCOS releases. This blog post gives an overview of the current build and release processes for FCOS, SCOS and OKD. It outlines OKD's dependency on OpenShift, a remnant from the past when its Origin predecessor was a downstream rebuild of OpenShift 3, and concludes with an outlook on how OKD Streams will help users, developers and partners to experiment with future OpenShift releases.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#fedora-coreos-and-centos-stream-coreos","title":"Fedora CoreOS and CentOS Stream CoreOS","text":"

Fedora CoreOS is built with a Jenkins pipeline running in Fedora's infrastructure and is being maintained by the Fedora CoreOS team.

CentOS Stream CoreOS is built with a Tekton pipeline running in an OpenShift cluster on MOC's infrastructure and pushed to quay.io/okd/centos-stream-coreos-9. The SCOS build pipeline is owned and maintained by the OpenShift OKD Streams team, and SCOS builds are imported from quay.io into OpenShift CI as ImageStreams.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#openshift-payload-components","title":"OpenShift payload components","text":"

At the time of writing, most payload components for OKD/FCOS and OKD/SCOS get mirrored from OCP CI releases. OpenShift CI (Prow and ci-operator) periodically builds OCP images, e.g. for OVN-Kubernetes. OpenShift's release-controller detects changes to image streams, caused by recently built images, then builds and tests an OCP release image. When such a release image passes all non-optional tests (also see release gating docs), the release image and other payload components are mirrored to origin namespaces on quay.io (release gating is subject to change). For example, at most every 3 hours an OCP 4.14 release image will be deployed (and upgraded) on AWS and GCP and afterwards tested with OpenShift's conformance test suite. When it passes the non-optional tests, the release image and its dependencies will be mirrored to quay.io/origin (except for rhel-coreos*, *-installer and some other images). These OCP CI releases are listed with a ci tag at amd64.ocp.releases.ci.openshift.org. Builds and promotions of nightly and stable OCP releases are handled differently (i.e. outside of Prow) by the Automated Release Tooling (ART) team.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#okd-payload-components","title":"OKD payload components","text":"

A few payload components are built specifically for OKD though, for example OKD/FCOS' okd-machine-os. Unlike RHCOS and SCOS, okd-machine-os, the operating system running on OKD/FCOS nodes, is layered on top of FCOS (also see CoreOS Layering, OpenShift Layered CoreOS).

Note that some payload components have OKD-specific configuration in OpenShift CI although the resulting images are not incorporated into OKD release images. For example, OVN-Kubernetes images are built and tested in OpenShift CI to ensure OVN changes do not break OKD.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#okd-releases","title":"OKD releases","text":"

When OpenShift's release-controller detects changes to OKD-related image streams, either due to updates of FCOS/SCOS or an OKD payload component, or due to OCP payload components being mirrored after an OCP CI release promotion, it builds and tests a new OKD release image. When such an OKD release image passes all non-optional tests, the image is tagged as registry.ci.openshift.org/origin/release:4.14 etc. This CI release process is similar for OKD/FCOS and OKD/SCOS, e.g. compare these examples for OKD/FCOS 4.14 and OKD/SCOS 4.14. OKD/FCOS's and OKD/SCOS's CI releases are listed at amd64.origin.releases.ci.openshift.org.

Promotions for OKD/FCOS to quay.io/openshift/okd (published at github.com/okd-project/okd) and for OKD/SCOS to quay.io/okd/scos-release (published at github.com/okd-project/okd-scos) are done roughly every 2 to 3 weeks. For OKD/SCOS, OKD's release pipeline is triggered manually once a sprint to promote CI releases to 4-scos-{next,stable}.

"},{"location":"blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/#okd-streams-and-customizable-tekton-pipelines","title":"OKD Streams and customizable Tekton pipelines","text":"

However, the OKD project is currently shifting its focus from doing downstream rebuilds of OCP to OKD Streams. As part of this strategic repositioning, OKD offers Argo CD workflows and Tekton pipelines to build CentOS Stream CoreOS (SCOS) (with okd-coreos-pipeline), to build OKD/SCOS (with okd-payload-pipeline) and to build operators (with okd-operator-pipeline). The OKD Streams pipelines have been created to improve the RHEL9 readiness signal for Red Hat OpenShift. They allow developers to build and compose different tasks and pipelines to easily experiment with OpenShift and related technologies. Both okd-coreos-pipeline and okd-operator-pipeline are already used in OKD's CI/CD, and in the future okd-payload-pipeline might supersede OCP CI for building OKD payload components and mirroring OCP payload components.

"},{"location":"guides/automated-vsphere-upi/","title":"Implementing an Automated Installation Solution for OKD on vSphere with User Provisioned Infrastructure (UPI)","text":""},{"location":"guides/automated-vsphere-upi/#introduction","title":"Introduction","text":"

It\u2019s possible to completely automate the process of installing OpenShift/OKD on vSphere with User Provisioned Infrastructure by chaining together the various functions of OCT via a wrapper script.

"},{"location":"guides/automated-vsphere-upi/#steps","title":"Steps","text":"
  1. Deploy the DNS, DHCP, and load balancer infrastructure outlined in the Prerequisites section.
  2. Create an install-config.yaml.template file based on the format outlined in the section Sample install-config.yaml file for VMware vSphere of the OKD docs. Do not add a pull secret. The script will query you for one, or it will insert a default one if you use the --auto-secret flag.
  3. Create a wrapper script that:
"},{"location":"guides/automated-vsphere-upi/#prerequisites","title":"Prerequisites","text":""},{"location":"guides/automated-vsphere-upi/#dns","title":"DNS","text":"

  * 1 entry for the bootstrap node of the format bootstrap.[cluster].domain.tld
  * 3 entries for the master nodes of the form master-[n].[cluster].domain.tld
  * An entry for each of the desired worker nodes in the form worker-[n].[cluster].domain.tld
  * 1 entry for the API endpoint in the form api.[cluster].domain.tld
  * 1 entry for the API internal endpoint in the form api-int.[cluster].domain.tld
  * 1 wildcard entry for the Ingress endpoint in the form *.apps.[cluster].domain.tld
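
Before kicking off the install, it's worth spot-checking that these records resolve. A small sketch using dig; the cluster name and domain are placeholders:

# Check the forward records (hostnames are examples)\nfor host in api api-int bootstrap master-0 worker-0; do dig +short \"${host}.mycluster.domain.tld\"; done\n\n# The wildcard Ingress record should resolve for any application hostname\ndig +short console-openshift-console.apps.mycluster.domain.tld\n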

"},{"location":"guides/automated-vsphere-upi/#dhcp","title":"DHCP","text":""},{"location":"guides/automated-vsphere-upi/#load-balancer","title":"Load Balancer","text":"

vSphere UPI requires the use of a load balancer. There need to be two pools.

"},{"location":"guides/automated-vsphere-upi/#proxy-optional","title":"Proxy (Optional)","text":"

If the cluster will sit on a private network, you\u2019ll need a proxy for outgoing traffic, both for the install process and for regular operation. In the case of the former, the installer needs to pull containers from the external registries. In the case of the latter, the proxy is needed when application containers need access to the outside world (e.g. yum installs, external code repositories like gitlab, etc.)

The proxy should be configured to accept connections from the IP subnet for your cluster. A simple proxy to use for this purpose is Squid.
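
On a Fedora-based proxy host, getting a basic Squid instance running is only a couple of commands; allowing your cluster subnet is then a small acl change in /etc/squid/squid.conf. A minimal sketch:

# Install and start squid on the proxy host\nsudo dnf install -y squid\nsudo systemctl enable --now squid\n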

"},{"location":"guides/automated-vsphere-upi/#wrapper-script","title":"Wrapper Script","text":"
#!/bin/bash\n\nmasters_count=3\nworkers_count=2\ntemplate_url=\"https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/33.20210314.2.0/x86_64/fedora-coreos-33.20210314.2.0-vmware.x86_64.ova\"\ntemplate_name=\"fedora-coreos-33.20210201.2.1-vmware.x86_64\"\nlibrary=\"Linux ISOs\"\ncluster_name=\"mycluster\"\ncluster_folder=\"/MyVSPHERE/vm/Linux/OKD/mycluster\"\nnetwork_name=\"VM Network\"\ninstall_folder=`pwd`\n\n# Import the template\n./oct.sh --import-template --library \"${library}\" --template-url \"${template_url}\"\n\n# Install the desired OKD tools\noct.sh --install-tools --release 4.6\n\n# Launch the prerun to generate and modify the ignition files\noct.sh --prerun --auto-secret\n\n# Deploy the nodes for the cluster with the appropriate ignition data\noct.sh --build --template-name \"${template_name}\" --library \"${library}\" --cluster-name \"${cluster_name}\" --cluster-folder \"${cluster_folder}\" --network-name \"${network_name}\" --installation-folder \"${install_folder}\" --master-node-count ${masters_count} --worker-node-count ${workers_count}\n\n# Turn on the cluster nodes\noct.sh --cluster-power on --cluster-name \"${cluster_name}\"  --master-node-count ${masters_count} --worker-node-count ${workers_count}\n\n# Run the OpenShift installer \nbin/openshift-install --dir=$(pwd) wait-for bootstrap-complete  --log-level=info\n
"},{"location":"guides/automated-vsphere-upi/#future-updates","title":"Future Updates","text":""},{"location":"guides/aws-ipi/","title":"AWS IPI Default Deployment","text":"

This describes the resources used by OpenShift after performing an installation using the default options for the installer.

"},{"location":"guides/aws-ipi/#infrastructure","title":"Infrastructure","text":""},{"location":"guides/aws-ipi/#compute","title":"Compute","text":""},{"location":"guides/aws-ipi/#networking","title":"Networking","text":""},{"location":"guides/aws-ipi/#deployment","title":"Deployment","text":"

See the OKD documentation to proceed with deployment

"},{"location":"guides/azure-ipi/","title":"Azure IPI Default Deployment","text":"

This describes the resources used by OpenShift after performing an installation using the default options for the installer.

"},{"location":"guides/azure-ipi/#infrastructure","title":"Infrastructure","text":""},{"location":"guides/azure-ipi/#compute","title":"Compute","text":""},{"location":"guides/azure-ipi/#networking","title":"Networking","text":""},{"location":"guides/azure-ipi/#deployment","title":"Deployment","text":"

See the OKD documentation to proceed with deployment

"},{"location":"guides/gcp-ipi/","title":"GCP IPI Default Deployment","text":"

This describes the resources used by OpenShift after performing an installation using the default options for the installer.

"},{"location":"guides/gcp-ipi/#infrastructure","title":"Infrastructure","text":""},{"location":"guides/gcp-ipi/#compute","title":"Compute","text":""},{"location":"guides/gcp-ipi/#networking","title":"Networking","text":""},{"location":"guides/gcp-ipi/#platform","title":"Platform","text":""},{"location":"guides/gcp-ipi/#deployment","title":"Deployment","text":"

See the OKD documentation to proceed with deployment

"},{"location":"guides/overview/","title":"Deployment Guides","text":"

The guides linked below provide some examples of how community members are using OKD and provide details of the underlying hardware and platform configurations they are using.

"},{"location":"guides/sno/","title":"Single Node OKD Installation","text":"

This document outlines how to deploy a single node OKD cluster using virt.

"},{"location":"guides/sno/#requirements","title":"Requirements","text":""},{"location":"guides/sno/#procedure","title":"Procedure","text":"

For the complete procedure, please see Building an OKD4 single node cluster with minimal resources

"},{"location":"guides/sri/","title":"Sri's Overkill Homelab Setup","text":"

This document lays out the resources used to create my completely-overkill homelab. This cluster provides all the compute and storage I think I'll need for the foreseeable future, and the CPU, RAM, and storage can all be scaled vertically independently of each other. Not that I think I'll need to do that for a while.

More detail into the deployment and my homelab's Terraform configuration can be found here.

"},{"location":"guides/sri/#hardware","title":"Hardware","text":""},{"location":"guides/sri/#main-cluster","title":"Main cluster","text":"

My hypervisors each host an identical workload. The total size of this cluster is 3 control plane nodes, and 9 worker nodes. So it splits very nicely three ways. Each hypervisor hosts 1 control plane VM and 3 worker VMs.

"},{"location":"guides/sri/#supporting-infrastructure","title":"Supporting infrastructure","text":""},{"location":"guides/sri/#networking","title":"Networking","text":"

OKD, and especially baremetal UPI OKD, requires a very specific network setup. You will most likely need something more flexible than your ISP's router to get everything fully configured. The documentation is very clear on the various DNS records and DHCP static allocations you will need to make, so I won't go into them here.

However, there are a couple extra things that you may want to set for best results. In particular, I make sure that I have PTR records set up for all my cluster nodes. This is extremely important as the nodes need a correct PTR record set up for them to auto-discover their hostname. Clusters typically do not set themselves up properly if there are hostname collisions!
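
A quick way to check this is a reverse lookup for each node IP; the address below is just an example:

# The reverse lookup should return the node's fully qualified hostname\ndig +short -x 192.168.1.20\n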

"},{"location":"guides/sri/#api-load-balancer","title":"API load balancer","text":"

I run a separate smaller VM on the NUC as a single-purpose load balancer appliance, running HAProxy.

The HAProxy config is straightforward. I adapted mine from the example config file created by the ocp4-helpernode playbook.

"},{"location":"guides/sri/#deployment","title":"Deployment","text":"

I create the VMs on the hypervisors using Terraform. The Terraform Libvirt provider is very, very cool. It's also used by openshift-install for its Libvirt-based deployments, so it supports everything needed to deploy OKD nodes. Most importantly, I can use Terraform to supply the VMs with their Ignition configs, which means I don't have to worry about passing kernel args manually or setting up a PXE server to get things going like the official OKD docs would have you do. Terraform also makes it easy to tear down the cluster and reset in case something goes wrong.

"},{"location":"guides/sri/#post-bootstrap-one-time-setup","title":"Post-Bootstrap One-Time Setup","text":""},{"location":"guides/sri/#storage-with-rook-and-ceph","title":"Storage with Rook and Ceph","text":"

I deploy a Ceph cluster into OKD using Rook. The Rook configuration deploys OSDs on top of the 4TiB HDDs assigned to each worker. I deploy an erasure-coded CephFS pool (6+2) for RWX workloads and a 3x replica block pool for RWO workloads.

"},{"location":"guides/sri/#monitoring-and-alerting","title":"Monitoring and Alerting","text":"

OKD comes with a very comprehensive monitoring and alerting suite, and it would be a shame not to take advantage of it. I set up an Alertmanager webhook to send any alerts to a small program I wrote that posts the alerts to Discord.

I also deploy a Prometheus + Grafana set up into the cluster that collects metrics from the various hypervisors and supporting infrastructure VMs. I use Grafana's built-in Discord alerting mechanism to post those alerts.

"},{"location":"guides/sri/#loadbalancer-with-metallb","title":"LoadBalancer with MetalLB","text":"

MetalLB is a fantastic piece of software that allows on-prem or otherwise non-public-cloud Kubernetes clusters to enjoy the luxury of LoadBalancer-type services. It's dead simple to set up and makes you feel like you're in a real datacenter. I deploy several workloads that don't use standard HTTP and so can't be deployed behind a Route. Without MetalLB, I wouldn't be able to deploy these workloads on OKD at all, but with it, I can!
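
For a workload like that, exposing it is just a LoadBalancer service; MetalLB fills in the external IP from its address pool. A small sketch with hypothetical names and ports:

# Create a LoadBalancer service for a non-HTTP workload (selects pods labelled app=my-postgres)\noc create service loadbalancer my-postgres --tcp=5432:5432\n\n# The EXTERNAL-IP column is assigned from the MetalLB address pool\noc get service my-postgres\n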

"},{"location":"guides/sri/#software-i-run","title":"Software I Run","text":"

I maintain an ansible playbook that handles deploying my workloads into the cluster. I prefer Ansible over other tools like Helm because it has more robust capabilities to store secrets, I find its templating capabilities more flexible and powerful than Helm's (especially when it comes to inlining config files into config maps or creating templated Dockerfiles for BuildConfigs), and because I am already familiar with Ansible and know how it works.

"},{"location":"guides/upi-sno/","title":"Single Node UPI OKD Installation","text":"

This document outlines how to deploy a single-node UPI OKD cluster (the real hard way) on bare metal or virtual machines.

"},{"location":"guides/upi-sno/#overview","title":"Overview","text":"

User-provisioned infrastructure (UPI) installation of an OKD 4.x single node cluster on bare metal or virtual machines

N.B. Installer-provisioned infrastructure (IPI) is the preferred method as it is much simpler: it automatically provisions and maintains the install for you. However, it is targeted towards cloud and on-prem services, i.e. AWS, GCP, Azure, as well as OpenStack, IBM, and vSphere.

If your install falls within these supported options then use IPI; if not, you will more than likely have to fall back on the UPI install method.

At the end of this document I have supplied a link to my repository. It includes some useful scripts and an example install-config.yaml

"},{"location":"guides/upi-sno/#requirements","title":"Requirements","text":"

The base installation would have 7 VMs (for a full production setup), but for our home lab SNO we will use 2 VMs (one for the bootstrap and one for the master/worker node) with the following specs:

N.B. - firewall services are disabled for this installation process
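For reference, disabling the firewall on a Fedora-based host is a one-liner (a lab-only shortcut; for anything more exposed you would open the individual ports instead):

sudo systemctl disable --now firewalld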

"},{"location":"guides/upi-sno/#architecture-this-refers-to-a-full-high-availability-cluster","title":"Architecture (this refers to a full high availability cluster)","text":"

The diagram below shows an install for a highly available, scalable solution. For our single node install we only need a bootstrap node and a master/worker node (2 bare metal servers or 2 VMs).

"},{"location":"guides/upi-sno/#software","title":"Software","text":"

For the UPI SNO I made use of FHCOS (Fedora CoreOS)

FHCOS

OC Client & Installer

"},{"location":"guides/upi-sno/#procedure","title":"Procedure","text":"

The following is a manual process of installing and configuring the infrastructure needed.

"},{"location":"guides/upi-sno/#provision-vms-optional-skip-this-step-if-you-using-bare-metal-servers","title":"Provision VM\u2019s (Optional) - Skip this step if you using bare metal servers","text":"

The use of VMs is optional; each node could be a bare metal server. As I did not have several servers at my disposal, I used a NUC (Ryzen 9 with 32G of RAM) and created 2 VMs (bootstrap and master/worker).

I used Cockpit (Fedora) to validate the network and VM setup (from the scripts). Use the virtualization software that you prefer. For the okd-svc machine I used the bare metal server and installed Fedora 37 (this hosted my 2 VMs).

The bootstrap server can be shut down once the master/worker has been fully set up.

Install virtualization

sudo dnf install @virtualization\n
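If you prefer the command line over Cockpit, something along these lines should create a node VM; the name, sizes, MAC and ISO path are placeholders, and you would repeat it for the bootstrap and the master/worker:

sudo virt-install \
  --name okd-master \
  --memory 16384 --vcpus 4 \          # adjust to what your host can spare
  --disk size=120 \
  --network network=default,mac=52:54:00:f5:9d:d4 \
  --os-variant generic \
  --cdrom /var/lib/libvirt/images/fedora-coreos-live.x86_64.iso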
"},{"location":"guides/upi-sno/#setup-ips-and-mac-addreses","title":"Setup IP's and MAC addreses","text":"

Refer to the "Architecture Diagram" above to set up each VM.

Obviously the IP addresses will change according to your preferred setup (i.e. 192.168.122.x). I have listed all servers, as it will be fairly easy to change the single node cluster to a fully fledged HA cluster by changing the install-config.yaml.

As a useful example, this is what I set up

Hard-code the MAC addresses (I created a text file to include in the VM network settings)

MAC: 52:54:00:3f:de:37, IP: 192.168.122.253\nMAC: 52:54:00:f5:9d:d4, IP: 192.168.122.2\nMAC: 52:54:00:70:b9:af, IP: 192.168.122.3\nMAC: 52:54:00:fd:6a:ca, IP: 192.168.122.4\nMAC: 52:54:00:bc:56:ff, IP: 192.168.122.5\nMAC: 52:54:00:4f:06:97, IP: 192.168.122.6\n
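If you are using the default libvirt network, one way to pin these MAC/IP pairs is virsh net-update; shown here for the bootstrap entry only (repeat per host):

sudo virsh net-update default add ip-dhcp-host \
  "<host mac='52:54:00:3f:de:37' ip='192.168.122.253'/>" \
  --live --config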
"},{"location":"guides/upi-sno/#install-configure-dependency-software","title":"Install & Configure Dependency Software","text":""},{"location":"guides/upi-sno/#install-configure-apache-web-server","title":"Install & configure Apache Web Server","text":"
dnf install httpd -y\n

Change default listen port to 8080 in httpd.conf

sed -i 's/Listen 80/Listen 0.0.0.0:8080/' /etc/httpd/conf/httpd.conf\n

Enable and start the service

 systemctl enable httpd\n systemctl start httpd\n systemctl status httpd\n

Making a GET request to localhost on port 8080 should now return the default Apache webpage

curl localhost:8080\n
"},{"location":"guides/upi-sno/#install-haproxy-and-update-the-haproxycfg-as-follows","title":"Install HAProxy and update the haproxy.cfg as follows","text":"
dnf install haproxy -y\n

Copy HAProxy config

cp ~/openshift-vm-install/haproxy.cfg /etc/haproxy/haproxy.cfg\n

Update Config

# Global settings\n#---------------------------------------------------------------------\nglobal\n    maxconn     20000\nlog         /dev/log local0 info\n    chroot      /var/lib/haproxy\n    pidfile     /var/run/haproxy.pid\n    user        haproxy\n    group       haproxy\n    daemon\n\n    # turn on stats unix socket\nstats socket /var/lib/haproxy/stats\n\n#---------------------------------------------------------------------\n# common defaults that all the 'listen' and 'backend' sections will\n# use if not designated in their block\n#---------------------------------------------------------------------\ndefaults\n    log                     global\n    mode                    http\n    option                  httplog\n    option                  dontlognull\n    option http-server-close\n    option redispatch\n    option forwardfor       except 127.0.0.0/8\n    retries                 3\nmaxconn                 20000\ntimeout http-request    10000ms\n    timeout http-keep-alive 10000ms\n    timeout check           10000ms\n    timeout connect         40000ms\n    timeout client          300000ms\n    timeout server          300000ms\n    timeout queue           50000ms\n\n# Enable HAProxy stats\nlisten stats\n    bind :9000\n    stats uri /stats\n    stats refresh 10000ms\n\n# Kube API Server\nfrontend k8s_api_frontend\n    bind :6443\n    default_backend k8s_api_backend\n    mode tcp\n\nbackend k8s_api_backend\n    mode tcp\n    balance source\nserver      bootstrap 192.168.122.253:6443 check\n    server      okd-cp-1 192.168.122.2:6443 check\n    server      okd-cp-2 192.168.122.3:6443 check\n    server      okd-cp-3 192.168.122.4:6443 check\n\n# OCP Machine Config Server\nfrontend ocp_machine_config_server_frontend\n    mode tcp\n    bind :22623\n    default_backend ocp_machine_config_server_backend\n\nbackend ocp_machine_config_server_backend\n    mode tcp\n    balance source\nserver      bootstrap 192.168.122.253:22623 check\n    server      okd-cp-1 192.168.122.2:22623 check\n    server      okd-cp-2 192.168.122.3:22623 check\n    server      okd-cp-3 192.168.122.4:22623 check\n\n# OCP Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.\nfrontend ocp_http_ingress_frontend\n    bind :80\n    default_backend ocp_http_ingress_backend\n    mode tcp\n\nbackend ocp_http_ingress_backend\n    balance source\nmode tcp\n    server      okd-cp-1 192.168.122.2:80 check\n    server      okd-cp-2 192.168.122.3:80 check\n    server      okd-cp-3 192.168.122.4:80 check\n    server      okd-w-1 192.168.122.5:80 check\n    server      okd-w-2 192.168.122.6:80 check\n\nfrontend ocp_https_ingress_frontend\n    bind *:443\n    default_backend ocp_https_ingress_backend\n    mode tcp\n\nbackend ocp_https_ingress_backend\n    mode tcp\n    balance source\nserver      okd-cp-1 192.168.122.2:443 check\n    server      okd-cp-2 192.168.122.3:443 check\n    server      okd-cp-3 192.168.122.4:443 check\n    server      okd-w-1 192.168.122.5:443 check\n    server      okd-w-2 192.168.122.6:443 check\n

Start the HAProxy service

sudo systemctl start haproxy\n

Install dnsmasq and set the dnsmasq.conf file as follows

# Configuration file for dnsmasq.\n\nport=53\n\n# The following two options make you a better netizen, since they\n# tell dnsmasq to filter out queries which the public DNS cannot\n# answer, and which load the servers (especially the root servers)\n# unnecessarily. If you have a dial-on-demand link they also stop\n# these requests from bringing up the link unnecessarily.\n\n# Never forward plain names (without a dot or domain part)\n#domain-needed\n# Never forward addresses in the non-routed address spaces.\nbogus-priv\n\nno-poll\n\nuser=dnsmasq\ngroup=dnsmasq\n\nbind-interfaces\n\nno-hosts\n# Include all files in /etc/dnsmasq.d except RPM backup files\nconf-dir=/etc/dnsmasq.d,.rpmnew,.rpmsave,.rpmorig\n\n# If a DHCP client claims that its name is \"wpad\", ignore that.\n# This fixes a security hole. see CERT Vulnerability VU#598349\n#dhcp-name-match=set:wpad-ignore,wpad\n#dhcp-ignore-names=tag:wpad-ignore\n\n\ninterface=eno1\ndomain=okd.lan\n\nexpand-hosts\n\naddress=/bootstrap.lab.okd.lan/192.168.122.253\nhost-record=bootstrap.lab.okd.lan,192.168.122.253\n\naddress=/okd-cp-1.lab.okd.lan/192.168.122.2\nhost-record=okd-cp-1.lab.okd.lan,192.168.122.2\n\naddress=/okd-cp-2.lab.okd.lan/192.168.122.3\nhost-record=okd-cp-2.lab.okd.lan,192.168.122.3\n\naddress=/okd-cp-3.lab.okd.lan/192.168.122.4\nhost-record=okd-cp-3.lab.okd.lan,192.168.122.4\n\naddress=/okd-w-1.lab.okd.lan/192.168.122.5\nhost-record=okd-w-1.lab.okd.lan,192.168.122.5\n\naddress=/okd-w-2.lab.okd.lan/192.168.122.6\nhost-record=okd-w-2.lab.okd.lan,192.168.122.6\n\naddress=/okd-w-3.lab.okd.lan/192.168.122.7\nhost-record=okd-w-3.lab.okd.lan,192.168.122.7\n\naddress=/api.lab.okd.lan/192.168.122.1\nhost-record=api.lab.okd.lan,192.168.122.1\naddress=/api-int.lab.okd.lan/192.168.122.1\nhost-record=api-int.lab.okd.lan,192.168.122.1\n\naddress=/etcd-0.lab.okd.lan/192.168.122.2\naddress=/etcd-1.lab.okd.lan/192.168.122.3\naddress=/etcd-2.lab.okd.lan/192.168.122.4\naddress=/.apps.lab.okd.lan/192.168.122.1\n\nsrv-host=_etcd-server-ssl._tcp,etcd-0.lab.okd.lan,2380\nsrv-host=_etcd-server-ssl._tcp,etcd-1.lab.okd.lan,2380\nsrv-host=_etcd-server-ssl._tcp,etcd-2.lab.okd.lan,2380\n\naddress=/oauth-openshift.apps.lab.okd.lan/192.168.122.1\naddress=/console-openshift-console.apps.lab.okd.lan/192.168.122.1\n

Start the dnsmasq service

sudo /usr/sbin/dnsmasq --conf-file=/etc/dnsmasq.conf\n

Test that your DNS setup is working correctly

N.B. It's important to verify that DNS works. I found, for example, that if api-int.lab.okd.lan didn't resolve (also with reverse lookup) I had problems with the bootstrap failing.

# test & results\n$ dig +noall +answer @192.168.122.1 api.lab.okd.lan\napi.lab.okd.lan.    0    IN    A    192.168.122.1\n\n$ dig +noall +answer @192.168.122.1 api-int.lab.okd.lan\napi-int.lab.okd.lan.    0    IN    A    192.168.122.1\n\n$ dig +noall +answer @192.168.122.1 random.apps.lab.okd.lan\nrandom.apps.lab.okd.lan. 0    IN    A    192.168.122.1\n\n$ dig +noall +answer @192.168.122.1 console-openshift-console.apps.lab.okd.lan\nconsole-openshift-console.apps.lab.okd.lan. 0 IN A 192.168.122.1\n\n$ dig +noall +answer @192.168.122.1 okd-bootstrap.lab.okd.lan\nokd-bootstrap.lab.okd.lan. 0    IN    A    192.168.122.253\n\n$ dig +noall +answer @192.168.122.1 okd-cp1.lab.okd.lan\nokd-cp1.lab.okd.lan.    0    IN    A    192.168.122.2\n\n$ dig +noall +answer @192.168.122.1 okd-cp2.lab.okd.lan\nokd-cp2.lab.okd.lan.    0    IN    A    192.168.122.3\n\n\n$ dig +noall +answer @192.168.122.1 okd-cp3.lab.okd.lan\nokd-cp3.lab.okd.lan.    0    IN    A    192.168.122.4\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.1\n1.122.168.192.in-addr.arpa. 0    IN    PTR    okd-svc.okd-dev.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.2\n2.122.168.192.in-addr.arpa. 0    IN    PTR    okd-cp1.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.3\n3.122.168.192.in-addr.arpa. 0    IN    PTR    okd-cp2.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.4\n4.122.168.192.in-addr.arpa. 0    IN    PTR    okd-cp3.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.5\n5.122.168.192.in-addr.arpa. 0    IN    PTR    okd-w1.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.6\n6.122.168.192.in-addr.arpa. 0    IN    PTR    okd-w2.lab.okd.lan.\n\n$ dig +noall +answer @192.168.122.1 -x 192.168.122.7\n7.122.168.192.in-addr.arpa. 0    IN    PTR    okd-w3.lab.okd.lan.\n

Install and configure NFS for the OKD Registry. It is a requirement to provide storage for the Registry; emptyDir can be specified if necessary.

sudo dnf install nfs-utils -y\n

Create the share

mkdir -p /shares/registry\nchown -R nobody:nobody /shares/registry\nchmod -R 777 /shares/registry\n

Export the share; this allows any host in the 192.168.122.x range to access NFS

echo \"/shares/registry  192.168.122.0/24(rw,sync,root_squash,no_subtree_check,no_wdelay)\" > /etc/exports\n\nexportfs -rv\n

Enable and start the NFS related services

sudo systemctl enable nfs-server rpcbind\nsudo systemctl start nfs-server rpcbind nfs-mountd\n

Create an install directory

 mkdir ~/okd-install\n

Copy the install-config.yaml included in the cloned repository (see link at end of the document) to the install directory

cp ~/openshift-vm-install/install-config.yaml ~/okd-install\n

Where install-config.yaml is as follows

apiVersion: v1
baseDomain: okd.lan
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0 # Must be set to 0 for User Provisioned Installation as worker nodes will be manually deployed.
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: lab # Cluster name
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: 'add your pull secret here'
sshKey: 'add your ssh public key here'

Update the install-config.yaml with your own pull-secret and ssh key.

vim ~/okd-install/install-config.yaml\n

If needed, create a public/private key pair using OpenSSH
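For example (the key path and type are up to you):

ssh-keygen -t ed25519 -N '' -f ~/.ssh/okd_ed25519
cat ~/.ssh/okd_ed25519.pub   # paste this value into sshKey in install-config.yaml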

Generate Kubernetes manifest files

~/openshift-install create manifests --dir ~/okd-install\n

A warning is shown about making the control plane nodes schedulable.

For the SNO it's mandatory to run workloads on the Control Plane nodes.

If you don't want this (in case you move to the full HA install), you can disable it with:

sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' ~/okd-install/manifests/cluster-scheduler-02-config.yml\n

Make any other custom changes you like to the core Kubernetes manifest files.

Generate the Ignition config and Kubernetes auth files

~/openshift-install create ignition-configs --dir ~/okd-install\n

Create a hosting directory to serve the configuration files for the OKD booting process

mkdir /var/www/html/okd4\n

Copy all generated install files to the new web server directory

cp -R ~/okd-install/* /var/www/html/okd4\n

Move the CoreOS image to the web server directory (you will need to type this path multiple times later, so it is a good idea to shorten the name)

mv ~/fhcos-X.X.X-x86_64-metal.x86_64.raw.gz /var/www/html/okd4/fhcos\n

Change ownership and permissions of the web server directory

chcon -R -t httpd_sys_content_t /var/www/html/okd4/\nchown -R apache: /var/www/html/okd4/\nchmod 755 /var/www/html/okd4/\n

Confirm you can see all the files added to /var/www/html/okd4/ through Apache

curl localhost:8080/okd4/\n

Start the VMs/bare metal servers

For each VM type, execute the appropriate coreos-installer command

Change the --ignition-url for each type, i.e.

N.B. For our SNO install we are only going to use bootstrap and master ignition files (ignore worker.ign)

Bootstrap Node

--ignition-url http://192.168.122.1:8080/okd4/bootstrap.ign\n

Master Node

--ignition-url http://192.168.122.1:8080/okd4/master.ign\n

Worker Node

--ignition-url http://192.168.122.1:8080/okd4/worker.ign\n

A typical CLI invocation for CoreOS (using master.ign) would look like this:

$ sudo coreos-installer install /dev/sda --ignition-url http://192.168.122.1:8080/okd4/master.ign --image-url http://192.168.122.1:8080/okd4/fhcos --insecure-ignition --insecure

N.B. If using Fedora CoreOS in a VM with virtio disks, the device name would need to change, i.e. /dev/vda

Once the VMs are running with the relevant ignition files,

Issue the following commands

This will install and wait for the bootstrap service to complete

openshift-install --dir ~/$INSTALL_DIR wait-for bootstrap-complete --log-level=debug\n

Once the bootstrap has installed then issue this command

openshift-install --dir ~/$INSTALL_DIR wait-for install-complete --log-level=debug\n

This will take about 40 minutes (or longer). After a successful install you will need to approve certificates and set up the persistent volume for the internal registry.

"},{"location":"guides/upi-sno/#post-install","title":"Post Install","text":"

At this point you can shut down the bootstrap server

Approve certificate signing requests

# Export the KUBECONFIG environment variable (to gain access to the cluster)\nexport KUBECONFIG=$INSTALL_DIR/auth/kubeconfig\n\n# View CSRs\noc get csr\n# Approve all pending CSRs\noc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve\n# Wait for kubelet-serving CSRs and approve them too with the same command\noc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve\n

Configure Registry

oc edit configs.imageregistry.operator.openshift.io\n\n# update the yaml\nmanagementState: Managed\n\nstorage:\n  pvc:\n    claim: # leave the claim blank\n\n# save the changes and execute the following commands\n\n# check for \u2018pending\u2019 state\noc get pvc -n openshift-image-registry\n\noc create -f registry-pv.yaml\n# After a short wait the 'image-registry-storage' pvc should now be bound\noc get pvc -n openshift-image-registry\n
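The registry-pv.yaml used above is not shown in this document; a minimal NFS-backed PersistentVolume along these lines should satisfy the image-registry-storage claim. The size matches the registry's default 100Gi request, and the server/path are assumed to match the NFS export created earlier:

cat <<'EOF' > registry-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.122.1
    path: /shares/registry
EOF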

Remote Access

As haproxy has been set up as a load balancer for the cluster, add the following to your /etc/hosts file. Obviously the IP address will change according to where you set up your haproxy.

192.168.8.122 okd-svc api.lab.okd.lan api-int.lab.okd.lan console-openshift-console.apps.lab.okd.lan oauth-openshift.apps.lab.okd.lan downloads-openshift-console.apps.lab.okd.lan alertmanager-main-openshift-monitoring.apps.lab.okd.lan grafana-openshift-monitoring.apps.lab.okd.lan prometheus-k8s-openshift-monitoring.apps.lab.okd.lan thanos-querier-openshift-monitoring.apps.lab.okd.lan\n

Helper Script

I have included a WIP script to help with setting up the virtual network, machines, and utilities to configure the OKD install, apply the haproxy config, apply the DNS config, set up NFS, and configure the firewall.

Dependencies

As mentioned, it's still a work in progress, but fairly helpful (imho) for now.

A typical flow would be (once all the dependencies have been installed)

./virt-env-install.sh config # configures install-config.yaml\n./virt-env-install.sh dnsconfig\n\n# before continuing manually test your dns setup\n\n./virt-env-install.sh haproxy\n./virt-env-install.sh firewall # can be ignored as firewalld has been disabled\n./virt-env-install.sh network\n./virt-env-install.sh manifests\n./virt-env-install.sh ignition\n./virt-env-install.sh copy\n./virt-env-install.sh vm bootstrap ok (repeat this for each vm needed)\n./virt-env-install.sh vm cp-1 ok \n./virt-env-install.sh okd-install bootstrap\n./virt-env-install.sh okd-install install\n

N.B. If there are any discrepancies or improvements please make a note. PRs are most welcome!

Screenshot of final OKD install

"},{"location":"guides/upi-sno/#acknowledgement-links","title":"Acknowledgement & Links","text":"

github repo https://github.com/lmzuccarelli/okd-baremetal-install

Thanks and acknowledgement to Ryan Hay

Reference : https://github.com/ryanhay/ocp4-metal-install

"},{"location":"guides/vadim/","title":"Vadim's homelab","text":"

This describes the resources used by OpenShift after performing an installation to make it similar to my homelab setup.

"},{"location":"guides/vadim/#compute","title":"Compute","text":"
  1. Ubiquiti EdgeRouter ER-X

  2. NAS/Bastion host

  3. control plane

  4. compute nodes

"},{"location":"guides/vadim/#router-setup","title":"Router setup","text":"

Once the nodes have booted, assign static IPs using MAC pinning.

The EdgeRouter has dnsmasq to support custom DNS entries, but I wanted network-wide ad filtering and DNS-over-TLS for free, so I followed this guide to install AdGuard Home on the router.

This gives a fancy UI for DNS rewrites and useful stats about the nodes on the network.

"},{"location":"guides/vadim/#nasbastion-setup","title":"NAS/Bastion setup","text":"

The HAProxy setup is fairly standard - see ocp4-helpernode for the idea.

Along with a (fairly standard) NFS server I also run a single-node Ceph cluster, so that I can benefit from CSI / autoprovisioning / snapshots etc.

"},{"location":"guides/vadim/#installation","title":"Installation","text":"

Currently a "single node install" requires a dedicated throwaway bootstrap node, so I used the future compute node (an x220 laptop) as the bootstrap node. Once the master was installed, the laptop was re-provisioned to become a compute node.

"},{"location":"guides/vadim/#upgrading","title":"Upgrading","text":"

Since I use a single-master install, upgrades are a bit complicated. Both nodes are labelled as workers, so upgrading those is not an issue.

Upgrading the single master is tricky, so I use this script to pivot the node onto the expected master ignition content, which runs rpm-ostree rebase <new content>. The script needs to be cancelled before it starts installing OS extensions (NetworkManager-ovs etc.), as that step is not needed here.

This class of issue should be addressed in 4.8.

"},{"location":"guides/vadim/#useful-software","title":"Useful software","text":"

The Grafana operator is incredibly useful for setting up monitoring.

This operator helps me define the configuration for various datasources (e.g. Promtail+Loki) and manage dashboard source code using CRs.

SnapScheduler makes periodic snapshots of some PVs so that risky changes can be reverted.

The Tekton operator helps me run a few clean-up jobs in the cluster periodically.

The most useful pipeline I've been using runs oc adm must-gather on this cluster, unpacks it, and stores it in Git. This helps me keep track of changes in the cluster in a git repo, and, unlike a GitOps solution such as ArgoCD, I can still tinker with things in the console.
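The core of that pipeline is roughly the following (paths and the git working tree are placeholders; the real version runs as a Tekton task rather than by hand):

oc adm must-gather --dest-dir=cluster-state/must-gather
cd cluster-state
git add -A && git commit -m "cluster snapshot $(date -I)"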

Other useful software running in my cluster:

"},{"location":"guides/vsphere-ipi/","title":"vSphere IPI Deployment","text":"

This describes the resources used by OpenShift after performing an installation using the required options for the installer.

"},{"location":"guides/vsphere-ipi/#infrastructure","title":"Infrastructure","text":""},{"location":"guides/vsphere-ipi/#compute","title":"Compute","text":"

All VMs are stored within the folder described above and tagged with the tag created by the installer.

"},{"location":"guides/vsphere-ipi/#networking","title":"Networking","text":"

Networking should be set up by the user; the installer doesn't create anything there. The network name should be provided as an installer argument.

"},{"location":"guides/vsphere-ipi/#miscellaneous","title":"Miscellaneous","text":""},{"location":"guides/vsphere-ipi/#deployment","title":"Deployment","text":"

See the OKD documentation to proceed with deployment

"},{"location":"guides/vsphere-prereqs/","title":"Prerequisites for vSphere UPI","text":"

In this example I describe the setup of a DNS/DHCP server and a Load Balancer on a Raspberry Pi microcomputer. The instructions will almost certainly also work in other environments.

I use Raspberry Pi OS (debian based).

"},{"location":"guides/vsphere-prereqs/#ip-addresses-of-components-in-this-example","title":"IP Addresses of components in this example","text":""},{"location":"guides/vsphere-prereqs/#upgrade-raspberry-pi","title":"Upgrade Raspberry Pi","text":"
sudo apt-get update\nsudo apt-get upgrade\nsudo reboot\n
"},{"location":"guides/vsphere-prereqs/#set-static-ip-address-on-raspberry-pi","title":"Set static IP address on Raspberry Pi","text":"

Add this:

interface eth0 \nstatic ip_address=192.168.178.5/24 \nstatic routers=192.168.178.1 \nstatic domain_name_servers=192.168.178.5 8.8.8.8\n

to /etc/dhcpcd.conf

"},{"location":"guides/vsphere-prereqs/#dhcp","title":"DHCP","text":"

Ensure that no other DHCP servers are activated in the network of your homelab e.g. in your internet router.

The DHCP server in this example is set up with DDNS (Dynamic DNS) enabled.

"},{"location":"guides/vsphere-prereqs/#install","title":"Install","text":"

sudo apt-get install isc-dhcp-server

"},{"location":"guides/vsphere-prereqs/#configure","title":"Configure","text":"

Enable DHCP server for IPv4 on eth0:

/etc/default/isc-dhcp-server

INTERFACESv4=\"eth0\" \nINTERFACESv6=\"\"\n

/etc/dhcp/dhcpd.conf

# dhcpd.conf\n#\n\n####################################################################################\n# Configuration for Dynamic DNS (DDNS) updates                                     #\n# Clients requesting an IP and sending their hostname for domain *.homelab.net     #\n# will be auto registered in the DNS server.                                       #\n####################################################################################\nddns-updates on;\nddns-update-style standard;\n\n# This option points to the copy rndc.key we created for bind9.\ninclude \"/etc/bind/rndc.key\";\n\nallow unknown-clients;\nuse-host-decl-names on;\ndefault-lease-time 300; # 5 minutes\nmax-lease-time 300;     # 5 minutes\n\n# homelab.net DNS zones\nzone homelab.net. {\n  primary 192.168.178.5; # This server is the primary DNS server for the zone\n  key rndc-key;       # Use the key we defined earlier for dynamic updates\n}\nzone 178.168.192.in-addr.arpa. {\n  primary 192.168.178.5; # This server is the primary reverse DNS for the zone\n  key rndc-key;       # Use the key we defined earlier for dynamic updates\n}\n\nddns-domainname \"homelab.net.\";\nddns-rev-domainname \"in-addr.arpa.\";\n####################################################################################\n\n\n####################################################################################\n# Basic configuration                                                              #\n####################################################################################\n# option definitions common to all supported networks...\ndefault-lease-time 300;\nmax-lease-time     300;\n\n# If this DHCP server is the official DHCP server for the local\n# network, the authoritative directive should be uncommented.\nauthoritative;\n\n# Parts of this section will be put in the /etc/resolv.conf of your hosts later\noption domain-name \"homelab.net\";\noption routers 192.168.178.1;\noption subnet-mask 255.255.255.0;\noption domain-name-servers 192.168.178.5;\n\nsubnet 192.168.178.0 netmask 255.255.255.0 {\n  range 192.168.178.40 192.168.178.199;\n}\n####################################################################################\n\n\n####################################################################################\n# Static IP addresses                                                              #\n# (Replace the MAC addresses here with the ones you set in vsphere for your vms)   #\n####################################################################################\ngroup {\n  host bootstrap {\n      hardware ethernet 00:1c:00:00:00:00;\n      fixed-address 192.168.178.200;\n  }\n\n  host master0 {\n      hardware ethernet 00:1c:00:00:00:10;\n      fixed-address 192.168.178.210;\n  }\n\n  host master1 {\n      hardware ethernet 00:1c:00:00:00:11;\n      fixed-address 192.168.178.211;\n  }\n\n  host master2 {\n      hardware ethernet 00:1c:00:00:00:12;\n      fixed-address 192.168.178.212;\n  }\n\n  host worker0 {\n      hardware ethernet 00:1c:00:00:00:20;\n      fixed-address 192.168.178.220;\n  }\n\n  host worker1 {\n      hardware ethernet 00:1c:00:00:00:21;\n      fixed-address 192.168.178.221;\n  }\n\n  host worker2 {\n      hardware ethernet 00:1c:00:00:00:22;\n      fixed-address 192.168.178.222;\n  }  \n}\n
"},{"location":"guides/vsphere-prereqs/#dns","title":"DNS","text":""},{"location":"guides/vsphere-prereqs/#install_1","title":"Install","text":"
sudo apt install bind9 dnsutils\n
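The rndc.key referenced by the DHCP configuration above and the bind configuration below is normally generated by the Debian bind9 package at install time. If it is missing you can create it yourself; the default key name, rndc-key, matches what the configs reference:

sudo rndc-confgen -a -c /etc/bind/rndc.key
sudo chown bind:bind /etc/bind/rndc.key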
"},{"location":"guides/vsphere-prereqs/#basic-configuration","title":"Basic configuration","text":"

/etc/bind/named.conf.options

include \"/etc/bind/rndc.key\";\n\nacl internals {\n    // lo adapter\n    127.0.0.1;\n\n    // CIDR for your homelab network\n    192.168.178.0/24;\n};\n\noptions {\n        directory \"/var/cache/bind\";\n\n        // If there is a firewall between you and nameservers you want\n        // to talk to, you may need to fix the firewall to allow multiple\n        // ports to talk.  See http://www.kb.cert.org/vuls/id/800113\n\n        // If your ISP provided one or more IP addresses for stable\n        // nameservers, you probably want to use them as forwarders.\n        // Uncomment the following block, and insert the addresses replacing\n        // the all-0's placeholder.\n\n        forwarders {\n          8.8.8.8;\n          8.8.4.4;\n        };\n        forward only;\n\n        //========================================================================\n        // If BIND logs error messages about the root key being expired,\n        // you will need to update your keys.  See https://www.isc.org/bind-keys\n        //========================================================================\n        dnssec-validation no;\n\n        listen-on-v6 { none; };\n        auth-nxdomain no;\n        listen-on port 53 { any; };\n\n        // Allow queries from my Homelab and also from Wireguard Clients.\n        allow-query { internals; };\n        allow-query-cache { internals; };\n        allow-update { internals; };\n        recursion yes;\n        allow-recursion { internals; };\n        allow-transfer { internals; };\n\n        dnssec-enable no;\n\n        check-names master ignore;\n        check-names slave ignore;\n        check-names response ignore;\n};\n

/etc/bind/named.conf.local

#include \"/etc/bind/rndc.key\";\n\n//\n// Do any local configuration here\n//\n\n// Consider adding the 1918 zones here, if they are not used in your\n// organization\n//include \"/etc/bind/zones.rfc1918\";\n\n# All devices that don't belong to the OKD cluster will be maintained here.\nzone \"homelab.net\" {\n   type master;\n   file \"/etc/bind/forward.homelab.net\";\n   allow-update { key rndc-key; };\n};\n\nzone \"c1.homelab.net\" {\n   type master;\n   file \"/etc/bind/forward.c1.homelab.net\";\n   allow-update { key rndc-key; };\n};\n\nzone \"178.168.192.in-addr.arpa\" {\n   type master;\n   notify no;\n   file \"/etc/bind/178.168.192.in-addr.arpa\";\n   allow-update { key rndc-key; };\n};\n

Zone file for homelab.net: /etc/bind/forward.homelab.net

;\n; BIND data file for local loopback interface\n;\n$TTL    604800\n@       IN      SOA     homelab.net. root.homelab.net. (\n                              2         ; Serial\n                         604800         ; Refresh\n                          86400         ; Retry\n                        2419200         ; Expire\n                         604800 )       ; Negative Cache TTL\n;\n@       IN      NS      homelab.net.\n@       IN      A       192.168.178.5\n@       IN      AAAA    ::1\n

The name of the next file depends on the subnet that is used:

/etc/bind/178.168.192.in-addr.arpa

$TTL 1W\n@ IN SOA ns1.homelab.net. root.homelab.net. (\n                                2019070742 ; serial\n                                10800      ; refresh (3 hours)\n                                1800       ; retry (30 minutes)\n                                1209600    ; expire (2 weeks)\n                                604800     ; minimum (1 week)\n                                )\n                        NS      ns1.homelab.net.\n\n200                     PTR     bootstrap.c1.homelab.net.\n\n210                     PTR     master0.c1.homelab.net.\n211                     PTR     master1.c1.homelab.net.\n212                     PTR     master2.c1.homelab.net.\n\n220                     PTR     worker0.c1.homelab.net.\n221                     PTR     worker1.c1.homelab.net.\n222                     PTR     worker2.c1.homelab.net.\n\n5                       PTR     api.c1.homelab.net.\n5                       PTR     api-int.c1.homelab.net.\n
"},{"location":"guides/vsphere-prereqs/#dns-records-for-okd-4","title":"DNS records for OKD 4","text":"

Zone file for c1.homelab.net (our OKD 4 cluster will be in this domain):

/etc/bind/forward.c1.homelab.net

;\n; BIND data file for local loopback interface\n;\n$TTL    604800\n@       IN      SOA     c1.homelab.net. root.c1.homelab.net. (\n                              2         ; Serial\n                         604800         ; Refresh\n                          86400         ; Retry\n                        2419200         ; Expire\n                         604800 )       ; Negative Cache TTL\n;\n@       IN      NS      c1.homelab.net.\n@       IN      A       192.168.178.5\n@       IN      AAAA    ::1\n\nload-balancer IN A      192.168.178.5\n\nbootstrap IN    A       192.168.178.200\n\nmaster0 IN      A       192.168.178.210\nmaster1 IN      A       192.168.178.211\nmaster2 IN      A       192.168.178.212\n\nworker0 IN      A       192.168.178.220\nworker1 IN      A       192.168.178.221\nworker2 IN      A       192.168.178.222\nworker3 IN      A       192.168.178.223\n\n*.apps.c1.homelab.net.  IN CNAME load-balancer.c1.homelab.net.\napi-int.c1.homelab.net. IN CNAME load-balancer.c1.homelab.net.\napi.c1.homelab.net.     IN CNAME load-balancer.c1.homelab.net.\n
"},{"location":"guides/vsphere-prereqs/#set-file-permissions","title":"Set file permissions","text":"

For dynamic DNS (ddns) to work you should do this:

sudo chown -R bind:bind /etc/bind\n
"},{"location":"guides/vsphere-prereqs/#load-balancer","title":"Load Balancer","text":""},{"location":"guides/vsphere-prereqs/#install_2","title":"Install","text":"
sudo apt-get install haproxy\n
"},{"location":"guides/vsphere-prereqs/#configure_1","title":"Configure","text":"

/etc/haproxy/haproxy.cfg

global\n        log /dev/log    local0\n        log /dev/log    local1 notice\n        chroot /var/lib/haproxy\n        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners\n        stats timeout 30s\n        user haproxy\n        group haproxy\n        daemon\n\n        # Default SSL material locations\n        ca-base /etc/ssl/certs\n        crt-base /etc/ssl/private\n\n        # Default ciphers to use on SSL-enabled listening sockets.\n        # For more information, see ciphers(1SSL). This list is from:\n        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/\n        # An alternative list with additional directives can be obtained from\n        #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy\n        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS\n        ssl-default-bind-options no-sslv3\n\ndefaults\n        log     global\n        mode    http\n        option  httplog\n        option  dontlognull\n        timeout connect 20000\n        timeout client  10000\n        timeout server  10000\n        errorfile 400 /etc/haproxy/errors/400.http\n        errorfile 403 /etc/haproxy/errors/403.http\n        errorfile 408 /etc/haproxy/errors/408.http\n        errorfile 500 /etc/haproxy/errors/500.http\n        errorfile 502 /etc/haproxy/errors/502.http\n        errorfile 503 /etc/haproxy/errors/503.http\n        errorfile 504 /etc/haproxy/errors/504.http\n\n\n# You can see the stats and observe OKD's bootstrap process by opening\n# http://<IP>:4321/haproxy?stats\nlisten stats\n    bind :4321\n    mode            http\n    log             global\n    maxconn 10\n\n    timeout client  100s\n    timeout server  100s\n    timeout connect 100s\n    timeout queue   100s\n\n    stats enable\n    stats hide-version\n    stats refresh 30s\n    stats show-node\n    stats auth admin:password\n    stats uri  /haproxy?stats\n\n\nfrontend openshift-api-server\n    bind *:6443\n    default_backend openshift-api-server\n    mode tcp\n    option tcplog\n\nbackend openshift-api-server\n    balance source\n    mode tcp\n    server bootstrap bootstrap.c1.homelab.net:6443 check\n    server master0 master0.c1.homelab.net:6443 check\n    server master1 master1.c1.homelab.net:6443 check\n    server master2 master2.c1.homelab.net:6443 check\n\n\nfrontend machine-config-server\n    bind *:22623\n    default_backend machine-config-server\n    mode tcp\n    option tcplog\n\nbackend machine-config-server\n    balance source\n    mode tcp\n    server bootstrap bootstrap.c1.homelab.net:22623 check\n    server master0 master0.c1.homelab.net:22623 check\n    server master1 master1.c1.homelab.net:22623 check\n    server master2 master2.c1.homelab.net:22623 check\n\n\nfrontend ingress-http\n    bind *:80\n    default_backend ingress-http\n    mode tcp\n    option tcplog\n\nbackend ingress-http\n    balance source\n    mode tcp\n    server master0 master0.c1.homelab.net:80 check\n    server master1 master1.c1.homelab.net:80 check\n    server master2 master2.c1.homelab.net:80 check\n\n    server worker0 worker0.c1.homelab.net:80 check\n    server worker1 worker1.c1.homelab.net:80 check\n    server worker2 worker2.c1.homelab.net:80 check\n    server worker3 worker3.c1.homelab.net:80 check\n\n\nfrontend ingress-https\n    bind *:443\n    default_backend ingress-https\n    mode tcp\n    option tcplog\n\nbackend ingress-https\n    balance source\n    mode tcp\n\n    server 
master0 master0.c1.homelab.net:443 check\n    server master1 master1.c1.homelab.net:443 check\n    server master2 master2.c1.homelab.net:443 check\n\n    server worker0 worker0.c1.homelab.net:443 check\n    server worker1 worker1.c1.homelab.net:443 check\n    server worker2 worker2.c1.homelab.net:443 check\n    server worker3 worker3.c1.homelab.net:443 check\n
"},{"location":"guides/vsphere-prereqs/#reboot-and-check-status","title":"Reboot and check status","text":"

Reboot Raspberry Pi:

sudo reboot\n

Check status of DNS/DHCP server and Load Balancer:

sudo systemctl status haproxy.service \nsudo systemctl status isc-dhcp-server.service \nsudo systemctl status bind9\n
"},{"location":"guides/vsphere-prereqs/#proxy-if-on-a-private-network","title":"Proxy (if on a private network)","text":"

If the cluster will sit on a private network, you'll need a proxy for outgoing traffic, both for the install process and for regular operation. In the case of the former, the installer needs to pull containers from external registries. In the case of the latter, the proxy is needed when application containers need access to the outside world (e.g. yum installs, external code repositories like GitLab, etc.).

The proxy should be configured to accept connections from the IP subnet of your cluster. A simple proxy to use for this purpose is Squid.
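A minimal Squid setup, for example on the same Raspberry Pi, might look like the following; the subnet matches this example's network, and Squid listens on port 3128 by default:

sudo apt-get install squid
# In /etc/squid/squid.conf, above the final "http_access deny all" line, add:
#   acl okd_net src 192.168.178.0/24
#   http_access allow okd_net
sudo systemctl enable --now squid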

"},{"location":"guides/virt-baremetal-upi/","title":"OKD Virtualization on user provided infrastructure","text":""},{"location":"guides/virt-baremetal-upi/#preparing-the-hardware","title":"Preparing the hardware","text":"

As a first step for providing an infrastructure for OKD Virtualization, you need to prepare the hardware:

"},{"location":"guides/virt-baremetal-upi/#preparing-the-infrastructure","title":"Preparing the infrastructure","text":"

Once your hardware is ready and connected to the network, you need to configure your services, your network and your DNS to allow the OKD installer to deploy the software. You may also need to prepare in advance a few services you'll need during the deployment. Carefully read the Preparing the user-provisioned infrastructure section and ensure all the requirements are met.

"},{"location":"guides/virt-baremetal-upi/#provision-your-hosts","title":"Provision your hosts","text":"

For the bastion / service host you can use CentOS Stream 8. You can follow the CentOS 8 installation documentation but we recommend using the latest CentOS Stream 8 ISO.

For the OKD nodes you'll need Fedora CoreOS. You can get it from the Get Fedora! website; choose the Bare Metal ISO.

"},{"location":"guides/virt-baremetal-upi/#configure-the-bastion-to-host-needed-services","title":"Configure the bastion to host needed services","text":"

Configure Apache to serve on ports 8080/8443, as the standard http/https ports will be used by the haproxy service. Apache will be needed to provide the ignition configuration for the OKD nodes.

dnf install -y httpd\nsed -i 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf\nsed -i 's/Listen 443/Listen 8443/' /etc/httpd/conf.d/ssl.conf\nsetsebool -P httpd_read_user_content 1\nsystemctl enable --now httpd.service\nfirewall-cmd --permanent --add-port=8080/tcp\nfirewall-cmd --permanent --add-port=8443/tcp\nfirewall-cmd --reload\n# Verify it\u2019s up:\ncurl localhost:8080\n

Configure haproxy:

dnf install haproxy -y\nfirewall-cmd --permanent --add-port=6443/tcp\nfirewall-cmd --permanent --add-port=22623/tcp\nfirewall-cmd --permanent --add-service=http\nfirewall-cmd --permanent --add-service=https\nfirewall-cmd --reload\nsetsebool -P haproxy_connect_any 1\nsystemctl enable --now haproxy.service\n
"},{"location":"guides/virt-baremetal-upi/#installing-okd","title":"Installing OKD","text":"

OKD's current stable-4 branch is delivering OKD 4.8. If you're using an older version we recommend updating to OKD 4.8.

At this point you should have all OKD nodes ready to be installed with Fedora CoreOS and the bastion with all the needed services. Check that all nodes and the bastion have the correct IP addresses and FQDNs and that they are resolvable via DNS.

As we are going to use the bare metal UPI installation, you'll need to create an install-config.yaml following the example for installing on bare metal.

Remember to configure your proxy settings if you have a proxy

"},{"location":"guides/virt-baremetal-upi/#apply-the-workarounds","title":"Apply the workarounds","text":"

You can work around this by adding a custom policy:

echo '(allow virt_qemu_ga_t container_var_lib_t (dir (search)))' >local_virtqemu_ga.cil\nsemodule -i local_virtqemu_ga.cil\n

You can work around this by adding a custom policy:

echo '(allow iptables_t cgroup_t (dir (ioctl)))' >local_iptables.cil\nsemodule -i local_iptables.cil\n
echo '(allow rpcbind_t unreserved_port_t (udp_socket (name_bind)))' >local_rpcbind.cil\nsemodule -i local_rpcbind.cil\n

While the master node is booting, edit the GRUB config, adding console=null to the kernel command line.

echo '(allow openvswitch_t init_var_run_t (capability (fsetid)))' >local_openvswitch.cil\nsemodule -i local_openvswitch.cil\n
"},{"location":"guides/virt-baremetal-upi/#installing-hco-and-kubevirt","title":"Installing HCO and KubeVirt","text":"

Once the OKD console is up, connect to it. Go to Operators -> OperatorHub, look for KubeVirt HyperConverged Cluster Operator and install it.

Click on the Create Hyperconverged button; all the defaults should be fine.

"},{"location":"guides/virt-baremetal-upi/#providing-storage","title":"Providing storage","text":"

Shared storage is not mandatory for OKD Virtualization, but it undoubtedly provides many advantages over a configuration based on local storage, which is considered suboptimal.

Among the advantages enabled by shared storage it is worth mentioning:

- Live migration of Virtual Machines
- A founding pillar for HA
- Seamless cluster upgrades without the need to shut down and restart all the VMs on each upgrade
- Centralized storage management enabling elastic scalability
- Centralized backup

"},{"location":"guides/virt-baremetal-upi/#shared-storage","title":"Shared storage","text":"

TBD: rook.io deployment

"},{"location":"guides/virt-baremetal-upi/#local-storage","title":"Local storage","text":"

You can configure local storage for your virtual machines by using the OKD Virtualization hostpath provisioner feature.

When you install OKD Virtualization, the hostpath provisioner Operator is automatically installed. To use it, you must:

- Configure SELinux on your worker nodes via a Machine Config object.
- Create a HostPathProvisioner custom resource.
- Create a StorageClass object for the hostpath provisioner.

"},{"location":"guides/virt-baremetal-upi/#configuring-selinux-for-the-hostpath-provisioner-on-okd-worker-nodes","title":"Configuring SELinux for the hostpath provisioner on OKD worker nodes","text":"

You can configure SELinux for your OKD Worker nodes using a MachineConfig.

"},{"location":"guides/virt-baremetal-upi/#creating-a-custom-resource-cr-for-the-hostpathprovisioner-operator","title":"Creating a custom resource (CR) for the HostPathProvisioner operator","text":"
  1. Create the HostPathProvisioner custom resource file. For example:

    $ touch hostpathprovisioner_cr.yaml\n
  2. Edit that file. For example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      pathConfig:
        path: "/var/hpvolumes" # The path of the directory on the node
      useNamingPrefix: false # Use the name of the PVC bound to the created PV as part of the directory name.
  3. Create the CR in the kubevirt-hyperconverged namespace:

    $ oc create -n kubevirt-hyperconverged -f hostpathprovisioner_cr.yaml\n
"},{"location":"guides/virt-baremetal-upi/#creating-a-storageclass-for-the-hostpathprovisioner-operator","title":"Creating a StorageClass for the HostPathProvisioner operator","text":"
  1. Create the YAML file for the storage class. For example:

    $ touch hppstorageclass.yaml\n
  2. Edit that file. For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-provisioner
    provisioner: kubevirt.io/hostpath-provisioner
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
  3. Create the StorageClass object:

    $ oc create -f hppstorageclass.yaml\n
"},{"location":"okd_tech_docs/","title":"OKD Technical Documentation","text":"

Warning

This section is under construction

This section of the documentation is for developers that want to customize OKD.

The section will cover:

The above section will allow you to work on fixes and enhancements to core OKD operators and be able to run them locally.

In addition, this section will also look at the Red Hat build and test setup: how OpenShift and OKD operators are built and tested, and how releases are created.

"},{"location":"okd_tech_docs/#okd-releases","title":"OKD Releases","text":"

OKD is a Kubernetes based platform that delivers a fully managed platform from the core operating system to the Kubernetes platform and the services running on it. All aspects of OKD are managed by a collection of operators.

OKD shares most of the same source code as Red Hat OpenShift. One of the primary differences is that OKD uses Fedora CoreOS where OpenShift uses Red Hat Enterprise Linux CoreOS as the base platform for cluster nodes.

An OKD release is a strictly defined set of software. A release is defined by a release payload, which contains an operator (Cluster Version Operator), a list of manifests to apply and a reference file. You can get information about a release using the oc command line utility, oc adm release info <release name>.

You can find the latest available release here.

You can get the current version of your cluster using the oc get clusterversion command, or from the Cluster Settings page in the Administration section of the OKD web console.

For the OKD 4.10 release named 4.10.0-0.okd-2022-03-07-131213 the command would be oc adm release info 4.10.0-0.okd-2022-03-07-131213

You can add additional command line options to get more specific information about a release:
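For example, using the release named above:

# Show the source commit used to build each component image
oc adm release info 4.10.0-0.okd-2022-03-07-131213 --commit-urls
# Show the pull spec of every image in the release payload
oc adm release info 4.10.0-0.okd-2022-03-07-131213 --pullspecs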

"},{"location":"okd_tech_docs/modifying_okd/","title":"Making changes to OKD","text":"

Warning

This section is under construction

The source code for OKD is available on github. OKD is made up of many components bundled into a release. You can find the exact commit for each component included in a release using the oc adm release info command with the --commit-urls option, as outlined in the overview section.

To make a change to OKD you need to:

  1. Identify the component(s) that needs to be changed
  2. Clone/fork the git repository (you can choose to fork the exact commit used to create the image referenced by the OKD release or a newer version of the source)
  3. Make the change
  4. Build the image and push to a container registry that the OKD cluster will be able to access
  5. Run the modified container on a cluster
"},{"location":"okd_tech_docs/modifying_okd/#building-images","title":"Building images","text":"

Most component repositories contain a Dockerfile, so building the image is as simple as podman build or docker build depending on your container tool of choice.

Some component repositories contain a Makefile, so building the image can be done using the Makefile, typically with make build

The first thing to do is to replace the FROM images in Dockerfile.rhel7. You may want to copy it to Dockerfile first and then make the changes.

    FROM registry.ci.openshift.org/openshift/release:golang-1.17 AS builder\n
and
    FROM registry.ci.openshift.org/origin/4.10:base\n
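As a hypothetical sequence (the replacement images below are placeholders you would choose yourself, not official substitutes):

cp Dockerfile.rhel7 Dockerfile
# Swap the FROM lines for images you can actually pull
sed -i 's|registry.ci.openshift.org/openshift/release:golang-1.17|docker.io/library/golang:1.17|' Dockerfile
sed -i 's|registry.ci.openshift.org/origin/4.10:base|quay.io/centos/centos:stream8|' Dockerfile
podman build -f Dockerfile -t <target repo>/<username>/<component>:4.10-test .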

Note

The original and replacement image may change as golang version and release requirements change.

Question

Is there a way to find the correct base image for an OKD release?

The original images are unavailable to the public. There is an effort to update the Dockerfiles with publicly available images.

"},{"location":"okd_tech_docs/modifying_okd/#example-scenario","title":"Example Scenario","text":"

To complete the scenario the following steps need to be performed:

  1. Fork the console-operator repository
  2. Clone the new fork locally: git clone https://github.com/<username>/console-operator.git
  3. create new branch from master (or main): git switch -c <branch name>
  4. Make needed modifications. Commit/squash as needed. Maintainers like to see 1 commit rather than several.
  5. Create the image: podman build -f <Dockerfile file> -t <target repo>/<username>/console-operator:4.11-<some additional identifier>
  6. Push image to external repository: podman push <target repo>/<username>/console-operator:4.11-<some additional identifier>
  7. Create new release to test with. This requires the oc command to be available. I use the following script (make_payload.sh). It can be modified as needed, such as adding the correct container registry and username:

    server=https://api.ci.openshift.org\n\nfrom_release=registry.ci.openshift.org/origin/release:4.11.0-0.okd-2022-04-12-000907\nrelease_name=4.11.0-0.jef-2022-04-12-0\nto_image=quay.io/fortinj66/origin-release:v4.11-console-operator\n\noc adm release new --from-release ${from_release} \\\n--name ${release_name} \\\n--to-image ${to_image} \\\nconsole-operator=<target repo>/<username>/console-operator:4.11-<some additional identifier>\n

    from_release, release_name, to_image will need to be updated as needed

  8. Pull installer for cluster release: oc adm release extract --tools <to_image from above> (Make sure image is publicly available)

Warning

When working with some Go projects you may need to be on Go v1.17 or better, as some projects use language features not supported before v1.17. Even though some of the project README.md files may specify v1.15, these README files are out of date.

If it is not clear how to build a component you can look in the release repository at https://github.com/openshift/release/tree/master/ci-operator/config/openshift/<operator repo name>; this is used by the Red Hat build system to build components, so it can be used to determine how to build a component.

You should also check the repo README.md file or any documentation, typically in a doc folder, as there may be some repo specific details

Question

Are there any special repos unique to OKD that need specific mention here, such as machine config?

"},{"location":"okd_tech_docs/modifying_okd/#running-the-modified-image-on-a-cluster","title":"Running the modified image on a cluster","text":"

An OKD release contains a specific set of images, and there are operators that ensure that only the correct set of images is running on a cluster, so you need to take some specific actions to be able to run your modified image on a cluster. You can do this by:

  1. configuring an existing cluster to run a modified image
  2. creating a new installer containing your image, then creating a new cluster with the modified installer
"},{"location":"okd_tech_docs/modifying_okd/#running-on-an-existing-cluster","title":"Running on an existing cluster","text":"

The Cluster Version Operator watches the deployments and images related to the core OKD services to ensure that only valid images are running in the core. This prevents you from changing any of the core images. If you want to replace an image you need to scale the Cluster Version Operator down to 0 replicas:

oc scale --replicas=0 deployment/cluster-version-operator -n openshift-cluster-version\n
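With the CVO scaled down you can point the component's deployment at your image. For example, for the console operator (the namespace, deployment and container names here are assumptions; check them with oc get deployment first):

oc -n openshift-console-operator set image deployment/console-operator \
  console-operator=<target repo>/<username>/console-operator:4.11-test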

Some images, such as the Cluster Cloud Controller Manager Operator and the Machine API Operator, need additional steps before you can make changes, but these repositories typically have a docs folder containing additional information about how to make changes to these images.

"},{"location":"okd_tech_docs/modifying_okd/#create-custom-release","title":"Create custom release","text":""},{"location":"okd_tech_docs/operators/","title":"Operator Hub Catalogs","text":"

Warning

This section is under construction

OKD contains many operators which deliver the base platform; however, there are also additional capabilities delivered as operators via the Operator Hub.

The operator hub story for OKD isn't ideal currently (as of OKD 4.10), as OKD shares source with OpenShift, the commercial sibling to OKD. OpenShift has additional operator hub catalogs provided by Red Hat, which deliver additional capabilities as part of the supported OpenShift product. These additional capabilities are not currently provided to OKD.

OpenShift and OKD share a community catalog of operators, which are a subset of the operators available in the OperatorHub. The operators in the community catalog should run on OKD/OpenShift and will include any additional configuration, such as security context configuration.

However, where an operator in the community catalog has a dependency that Red Hat supports and delivers as part of the additional OpenShift operator catalog, the community catalog operator will specify the dependency from the supported OpenShift catalog. This results in missing dependency errors when attempting to install on OKD.

Question

Todo

Some useful repo links - do we need to create instructions for specific operators?

"},{"location":"okd_tech_docs/release/","title":"OKD Development Resources","text":"

Warning

This section is under construction

Question

What is the end-to-end process to build an OKD release? Is it possible outside Red Hat CI infrastructure?

"},{"location":"okd_tech_docs/troubleshoot/","title":"Troubleshooting OKD","text":"

Warning

This section is under construction

Todo

Complete this section from comments in discussion thread

"},{"location":"wg_crc/overview/","title":"CRC Build Subgroup","text":"

CodeReady Containers is a cut-down version of OKD, designed to run on a developer's machine, which would not have sufficient resources for a full installation of OKD.

The working group was established after a live session where Red Hat's Charro Gruver walked through the build process for OKD CRC

The build process is currently manual, so the working group was established to automate the process and investigate options for creating a continuous integration setup to build and test OKD CRC.

"},{"location":"wg_docs/content/","title":"Content guidelines","text":""},{"location":"wg_docs/content/#site-content-maintainability","title":"Site content maintainability","text":"

The site has adopted Markdown as the standard way to create content for the site. Previously the site used an HTML based framework, which resulted in content not being frequently updated as there was a steep learning curve.

All content on the site should be created using Markdown. To ensure content is maintainable going forward, only the Markdown features outlined below should be used to create site content. If you wish to use additional components on a page, please contact the documentation working group to discuss your requirements before creating a pull request containing additional components.

MkDocs includes the ability to create custom page templates. This facility has been used to create a customized home page for the site. If any other pages require a custom layout or custom features, then a page template should be used so the content can remain in Markdown. Creation of custom page templates should be discussed with the documentation working group.

"},{"location":"wg_docs/content/#changing-content","title":"Changing content","text":"

MkDocs supports standard Markdown syntax and a set of Markdown extensions provided by plugins. The exact Markdown syntax supported is based on the Python implementation.

MkDocs is configured using the mkdocs.yml file in the root of the git repository.

The mkdocs.yml file defines the top-level navigation for the site. The navigation depth is configurable (this requires the theme to support the feature), with Markdown headings at levels 2 (##) and 3 (###) being used for the in-page navigation on the right of the page.

"},{"location":"wg_docs/content/#standard-markdown-features","title":"Standard Markdown features","text":"

The following Markdown syntax is used within the documentation:

| Syntax | Result |
| --- | --- |
| # | Title heading - you can create up to 6 levels of headings by adding additional # characters, so ### is a level 3 heading |
| **text** | will display the word text in bold |
| *text* | will display the word text in italic |
| `code` | inline code block |
| ```shell ... ``` | multi-line (fenced) code block |
| 1. list item | ordered list |
| - unordered list item | unordered list |
| --- | horizontal break |

HTML can be embedded in Markdown, but embedded HTML should not be used in the documentation. All content should use Markdown with the permitted extensions.

"},{"location":"wg_docs/content/#indentation","title":"Indentation","text":"

MkDocs uses 4 spaces for indentation, so when indenting content make sure your editor is set to 4 spaces per tab rather than 2, which is a common default.

When using some features of Markdown, indentation is used to identify blocks.

1. Ubiquity EdgeRouter ER-X\n    - runs DHCP (embedded), custom DNS server via AdGuard\n\n    ![pic](./img/erx.jpg){width=80%}\n

In the code block above you will see that the unordered list item is indented so it aligns with the content of the ordered list item (rather than with the number of the ordered list item). The image is also indented so it too aligns with the ordered list text.

Many of the Markdown elements can be nested, and indentation is used to define the nesting relationship. If you look further down this page at the Information boxes section, you will find an example of nesting elements, and its Markdown tab shows how indentation is used to identify the nesting relationships.

"},{"location":"wg_docs/content/#links-within-mkdocs-generated-content","title":"Links within MkDocs generated content","text":"

MkDocs will warn of any internal broken links, so it is important that links within the documentation are recognized as internal links.

Information

Internal links should point to the Markdown file (with the .md extension). When the site is generated, the filename will be automatically converted to the correct URL.

As part of the build process, a linkchecker application will check the generated HTML site for any broken links. You can run this linkchecker locally using the instructions. If any links in the documentation should be excluded from the link checker, such as links to localhost, they should be added as a regex to the linkcheckerrc file, located in the root folder of the project - see the linkchecker documentation for additional information.
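
For reference, a sketch of a local link-check run against the generated site, assuming the site has been built into the public directory and using the linkcheckerrc file mentioned above (the exact invocation used by build.sh may differ):

# check the generated site for broken links using the project configuration\nlinkchecker -f ./linkcheckerrc ./public/index.html\n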

"},{"location":"wg_docs/content/#markdown-extensions-used-in-okdio","title":"Markdown Extensions used in OKD.io","text":"

There are a number of Markdown extensions used to create the site. See the mkdocs.yml file to see which extensions are configured. The documentation for the extensions can be found here.

"},{"location":"wg_docs/content/#link-configuration","title":"Link configuration","text":"

Links on a page and embedded images can be annotated to control their behavior and appearance:

"},{"location":"wg_docs/content/#image","title":"Image","text":"

Images are embedded in a page using the standard Markdown syntax ![description](URL), but the image can be formatted with Attribute Lists. This is most commonly used to scale or center an image, e.g.

![GitHub repo url](images/github-repo-url.png){style=\"width: 80%\" .center }\n
"},{"location":"wg_docs/content/#external-links","title":"External Links","text":"

External links can also use attribute lists to control behavior, such as opening in a new tab, or to add a CSS class attribute to the generated HTML, such as external in the example below:

[MkDocs](http://mkdocs.org){: target=\"_blank\" .external }\n

Info

You can embed an image as the description of a link to create a clickable image that links to another site: [![Image description](Image URL)](target URL \"hover text\"){: target=_blank}

"},{"location":"wg_docs/content/#youtube-videos","title":"YouTube videos","text":"

It is not possible to embed a YouTube video and have it play in place using pure Markdown. You can use HTML within the Markdown file to embed a video:

<iframe width=\"100%\" height=\"500\" src=\"https://www.youtube.com/embed/qh1zYW7BLxE?start=431\" title=\"Building an OKD 4 Home Lab with special guest Craig Robinson\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n
"},{"location":"wg_docs/content/#tabs","title":"Tabs","text":"

Content can be organized into a set of horizontal tabs.

=== \"Tab 1\"\n    Hello\n\n=== \"Tab 2\"\n    World\n

produces :

Tab 1

Hello

Tab 2

World

"},{"location":"wg_docs/content/#information-boxes","title":"Information boxes","text":"

The Admonition extension allows you to add themed information boxes using the !!! and ??? syntax:

!!! note\n    This is a note\n

produces:

Note

This is a note

and

??? note\n    This is a collapsible note\n\n    You can add a `+` character to force the box to be initially open `???+`\n

produces a collapsible box:

Note

This is a collapsible note

You can add a + character to force the box to be initially open ???+

You can override the title of the box by providing a title after the Admonition type.

Example

You can also nest different components as required

note

Note

This is a note

collapsible note Note

This is a note

custom title note

Sample Title

This is a note

Markdown
!!!Example\n    You can also nest different components as required\n\n    === \"note\"\n        !!!note\n            This is a note\n\n    === \"collapsible note\"\n        ???+note\n            This is a note\n\n    === \"custom title note\"\n        !!!note \"Sample Title\"\n            This is a note\n
"},{"location":"wg_docs/content/#supported-admonition-classes","title":"Supported Admonition Classes","text":"

The Admonitions supported by the Material theme are :

Note

This is a note

Abstract

This is an abstract

Info

This is an info

Tip

This is a tip

Success

This is a success

Question

This is a question

Warning

This is a warning

Failure

This is a failure

Danger

This is a danger

Bug

This is a bug

Example

This is an example

Quote

This is a quote

"},{"location":"wg_docs/content/#code-blocks","title":"Code blocks","text":"

Code blocks allow you to insert code or blocks of text in line or as a block.

To use an inline code block you simply enclose the text in single back quote ` characters. So a command can be included using `oc get pods`, which will render as oc get pods.

When you want to include a block of code you use a fence, which is three back quote characters at the start and end of the block. After the opening fence you should also specify the content type contained in the block.

```shell\noc get pods\n```\n

which will produce:

oc get pods\n

Notice that the block automatically gets a copy-to-clipboard link to allow easy copy and paste.

Every code block needs to identify its content. Where there is no specific content type, text should be used to identify the content as plain text. Some of the common content types are shown in the table below; a full list of supported content types can be found here, where the short name in the documentation should be used.

| type | Content |
| --- | --- |
| shell | Shell script content |
| powershell | Windows PowerShell content |
| bat | Windows batch file (.bat or .cmd files) |
| json | JSON content |
| yaml | YAML content |
| markdown or md | Markdown content |
| java | Java programming language |
| javascript or js | JavaScript programming language |
| typescript or ts | TypeScript programming language |
| text | Plain text content |

"},{"location":"wg_docs/content/#advanced-highlighting-of-code-blocks","title":"Advanced highlighting of code blocks","text":"

There are some additional features available due to the highlight plugin installed in MkDocs. Full details can be found in the MkDocs Material documentation.

"},{"location":"wg_docs/content/#line-numbers","title":"Line numbers","text":"

You can add line numbers to a code block with the linenums directive. You must specify the starting line number, which is 1 in the example below:

``` javascript linenums=\"1\"\n<script>\ndocument.getElementById(\"demo\").innerHTML = \"My First JavaScript\";\n</script>\n```\n

creates

<script>\ndocument.getElementById(\"demo\").innerHTML = \"My First JavaScript\";\n</script>\n

Info

The line numbers are not included when the copy-to-clipboard link is used.

"},{"location":"wg_docs/content/#spell-checking","title":"Spell checking","text":"

This project uses cSpell to check spelling within the Markdown. The configuration included in the project automatically excludes content in code blocks enclosed in triple back quotes ```.

The configuration file also specifies that US English is the language used in the documentation, so only US English spellings should be used for words where alternate international English spellings exist.

You can add words to be considered valid either within a Markdown document or within the cSpell configuration file, cspell.json, in the root folder of the documentation repository.

Words defined within a page only apply to that page, but words added to the configuration file apply to the entire project.
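
You can also run the spell checker on its own. A minimal sketch, assuming the cSpell CLI is available through npx and that cspell.json is picked up from the repository root:

# spell check all Markdown files using the project's cspell.json configuration\nnpx cspell \"**/*.md\"\n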

"},{"location":"wg_docs/content/#adding-local-words","title":"Adding local words","text":"

You can add a list of words to be considered valid for spell checking purposes as a comment in a Markdown file.

The comment has a specific format to be picked up by the cSpell tool:

<!--- cSpell:ignore linkchecker linkcheckerrc mkdocs mkdoc -->

Here the words linkchecker, linkcheckerrc, mkdocs and mkdoc are specified as words to be accepted by the spell checker within the file containing the comment.

"},{"location":"wg_docs/content/#adding-global-words","title":"Adding global words","text":"

The cSpell configuration file cspell.json contains a list of words that should always be considered valid when spell checking. The list of words applies to all files being checked.

"},{"location":"wg_docs/doc-env/","title":"Setup environment","text":""},{"location":"wg_docs/doc-env/#setting-up-a-documentation-environment","title":"Setting up a documentation environment","text":"

To work on documentation and be able to view the rendered web site you need to create an environment, which consists of:

You can create the environment by:

Tooling within a container

You can use a container to run MkDocs so no local installation is required; however, you do need to have Docker Desktop installed if using macOS or Windows. If running on Linux you can use Docker or Podman.

If you have a Node.js environment installed that includes the npm command, then you can make use of the run scripts provided in the project to run the docker or podman commands.

The following commands all assume you are working in the root directory of your local git clone of your forked copy of the okd.io git repo (your working directory should contain the mkdocs.yml and package.json files).
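
The run script names themselves live in package.json; a generic npm sketch for discovering and invoking them (not project-specific):

# with no script name, npm lists the scripts defined in package.json\nnpm run\n# then invoke the one you want, e.g. a script that builds or serves the site\nnpm run <script-name>\n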

Warning

If you are using Linux with SELinux enabled, then you need to configure your system to allow the local directory containing the cloned git repo to be mounted inside a container. The following commands will configure SELinux to allow this:

(change the path to the location of your okd.io directory)

sudo semanage fcontext -a -t container_file_t '/home/brian/Documents/projects/okd.io(/.*)?'\nsudo restorecon -Rv /home/brian/Documents/projects/okd.io\n
Editing on cluster

There is a community operator available in the OperatorHub on OKD to install Eclipse Che, the upstream project for Red Hat CodeReady Workspaces.

You can use Che to modify site content through your browser, with your OKD cluster hosting the workspace and developer environment.

You need to have access to an OKD cluster, with the Che operator installed and a Che instance deployed and running.

In your OKD console, you should have an applications link in the top toolbar. Open the Applications menu (3x3 grid icon) and select Che. This will open the Che application - Google Chrome is the supported browser and will give the best user experience.

In the Che console side menu, select Create Workspace, then in the Import from Git section add the URL of your fork of the okd.io git repository (it should be similar to https://github.com/<user or org name>/okd.io.git), then press Create & Open to start the workspace.

After a short while the workspace will open (the cluster has to download and start a number of containers, so the first run may take a few minutes depending on your cluster network access). When the workspace is displayed you may have to wait a few seconds for the workspace to initialize and clone your git repo into the workspace. You may also be asked if you trust the author of the git repository; answer yes to this question. Your environment should then be ready to start work.

The web-based developer environment uses the same code base as Microsoft Visual Studio Code, so it provides a similar user experience, but within your browser.

Local MkDocs and Python tooling installation

You can install MkDocs and associated plugins on your development system and run the tools locally:
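
A minimal sketch of a local installation using pip; the exact set of plugins used by okd.io is defined in the repository, so treat anything beyond MkDocs and the Material theme as illustrative:

# install MkDocs and the Material theme (add the project's other plugins as required)\npip install mkdocs mkdocs-material\n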

Note

The sudo command may be needed to install globally, depending on your system configuration.

You now have all the tools installed to create the static HTML site from the Markdown documents. The documentation for MkDocs provides full instructions for using MkDocs, but the important commands are:
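
The two commands you will use most often are shown below; see the MkDocs documentation for the full set of options:

# run a local live-reloading preview of the site\nmkdocs serve\n# generate the static HTML site\nmkdocs build\n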

There is also a convenience script ./build.sh in the root of the repository that will check spelling, build the site, then run the link checker.

You should verify there are no spelling mistakes by finding the last line of the CSpell output:

CSpell: Files checked: 31, Issues found: 0 in 0 files\n

Similarly, the link checker creates a summary after checking the site:

That's it. 662 links in 695 URLs checked. 0 warnings found. 0 errors found\n

Any issues reported should be fixed before submitting a pull request to add your changes to the okd.io site.

"},{"location":"wg_docs/doc-env/#creating-the-container","title":"Creating the container","text":"

To create the container image on your local system, choose the appropriate command from the list:
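
A sketch of the container build, assuming the Dockerfile (or Containerfile) is in the repository root; the exact command provided by the project's run scripts may differ:

# build the documentation tooling image with Docker\ndocker build -t mkdocs-build .\n# or with Podman\npodman build -t mkdocs-build .\n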

This will build a local container image named mkdocs-build

"},{"location":"wg_docs/doc-env/#live-editing-of-the-content","title":"Live editing of the content","text":"

To change the content of the web site you can use your preferred editing application. To see the changes you can run a live local copy of okd.io that will automatically update as you save local changes.

Ensure you have the local container image, built in the previous step, available on your system, then choose the appropriate command from the list:
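
A hypothetical invocation is sketched below; the port matches the next paragraph, but the mount point the image expects for the documentation sources is an assumption, so check the project's run scripts for the real command:

# serve a live-reloading preview on port 8000 from the current checkout (the /docs mount path is an assumption)\ndocker run --rm -it -p 8000:8000 -v \"$(pwd):/docs\" mkdocs-build\n# Podman equivalent; append :Z to the volume on SELinux-enabled systems\npodman run --rm -it -p 8000:8000 -v \"$(pwd):/docs:Z\" mkdocs-build\n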

You can now open a browser to localhost:8000. You should see the okd.io web site in the browser. As you change files on your local system the web pages will automatically update.

When you have completed editing the site, use Ctrl-C (hold down the Control key, then press c) to stop the local web server.

"},{"location":"wg_docs/doc-env/#build-and-validate-the-site","title":"Build and validate the site","text":"

Before you submit any changes to the site in a pull request please check there are no spelling mistakes or broken links, by running the build script and checking the output.

The build script will create or update the static web site in the public directory - this is what will be created and published as the live site if you submit a pull request with your modifications.

You should verify there are no spelling mistakes by finding the last line of the CSpell output:

CSpell: Files checked: 31, Issues found: 0 in 0 files\n

Further down in the console output will be the summary of the link checker:

That's it. 662 links in 695 URLs checked. 0 warnings found. 0 errors found\n

Any issues reported should be fixed before submitting a pull request to add your changes to the okd.io site.

"},{"location":"wg_docs/doc-env/#live-editing-of-the-content_1","title":"Live editing of the content","text":"

To change the content of the web site you can use the in-browser editor provided by Che. To see the changes you can run a live copy of okd.io that will automatically update as you save changes.

On the right side of the workspace window you should see 3 icons; hovering over them should reveal they are the Outline, Endpoints and Workspace views. Clicking into the Workspace view, you should see a User Runtimes section with the option to open a new terminal, then 2 commands (Live edit and Build), and finally a link to launch the MkDocs web site (initially this link will not work).

To allow you to see your changes in a live site (where any change you save will automatically be updated on the site) click on the 1. Live edit link. This will launch a new terminal window where the mkdocs serve command will run, which provides a local live site. However, as you are running the development site on a cluster, the Che runtime automatically makes this site available to you. The MkDocs link now points to the site, but you will be asked if you want to open the site in a new tab or in Preview.

Preview will add a 4th icon to the side toolbar and open the web site in the side panel. You can drag the side of the window to resize the browser view to allow you to edit on the left and view the results on the right of your browser window.

If you have multiple monitors you may want to choose to open the website in a new tab, or use the MkDocs link, then drag the browser tab onto a different monitor.

By default, the Che environment auto-saves any file modification after half a second of inactivity. You can alter this in the preferences section. Whenever a file is saved, the live site will update in the browser.

When you have finished editing, simply close the terminal window running the Live edit script. This will stop the web server running the preview site.

"},{"location":"wg_docs/doc-env/#build-and-validate-the-site_1","title":"Build and validate the site","text":"

The build script will create or update the static web site in the public directory - this is what will be created and published as the live site if you submit a pull request with your modifications.

To run the build script simply click the 2. Build link in the Workspace panel.

You should verify there are no spelling mistakes by finding the last line of the CSpell output:

CSpell: Files checked: 31, Issues found: 0 in 0 files\n

Further down in the console output will be the summary of the link checker:

That's it. 662 links in 695 URLs checked. 0 warnings found. 0 errors found\n

Any issues reported should be fixed before submitting a pull request to add your changes to the okd.io site.

"},{"location":"wg_docs/okd-io/","title":"Contributing to okd.io","text":"

The source for okd.io is in a GitHub repository.

The site is created using MkDocs, which takes Markdown documents and turns them into a static website that can be accessed from a filesystem or served from a web server.

To update or add new content to the site you need to

The site is created using MkDocs with the Material theme.

"},{"location":"wg_docs/okd-io/#updating-the-site","title":"Updating the site","text":"

To make changes to the site, create a pull request to deliver the changes from your fork of the repo to the main branch of the okd.io repo. Before creating a pull request you should run the build script and verify there are no spelling mistakes or broken links. Details on how to do this can be found at the end of the instructions for setting up a documentation environment.

GitHub automation is used to generate the site and then publish it to GitHub Pages, which serves the site. If your changes contain spelling issues or broken links, the automation will fail and the GitHub Pages site will not be updated, so please do a local test using the build.sh script before creating the pull request.
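
For example, from the root of your local clone:

# check spelling, build the site and run the link checker before raising the pull request\n./build.sh\n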

"},{"location":"wg_docs/overview/","title":"Documentation Subgroup","text":"

The Documentation working group is responsible for improving the OKD documentation: both the community documentation (this site) and the product documentation.

"},{"location":"wg_docs/overview/#joining-the-group","title":"Joining the group","text":"

The Documentation Subgroup is open to all. You don't need to be invited to join; just attend one of the bi-weekly video calls:

"},{"location":"wg_docs/overview/#product-documentation","title":"Product Documentation","text":"

The OKD product documentation is maintained in the same git repository as Red Hat OpenShift product documentation, as they are sibling projects and largely share the same source code.

The process for making changes to the documentation is outlined in the documentation section.

"},{"location":"wg_docs/overview/#community-documentation","title":"Community Documentation","text":"

This site is the community documentation. It is hosted on GitHub and uses a static site generator to convert the Markdown documents in the git repo into this website.

Details of how to modify the site content are contained on the Modifying OKD.io page.

"},{"location":"wg_virt/community/","title":"Get involved!","text":"

The OKD Virtualization SIG is a group of people just like you who are aiming to promote the adoption of the virtualization components on OKD.

"},{"location":"wg_virt/community/#social-media","title":"Social Media","text":"

Reddit : r/OKD Virtualization

YouTube : OKD Workgroup meeting

Twitter : Follow @OKD_Virt_SIG

"},{"location":"wg_virt/community/#getting-started-as-a-user-future-contributor","title":"Getting started as a user (future contributor!)","text":"

Before getting started, please read the OKD community etiquette guidelines.

Feel free to dive into the OKD documentation, following the installation guide to set up your initial OKD deployment in your bare-metal datacenter. Once it's up, please follow the OKD documentation regarding Virtualization installation.

If you run into difficulties during the process, let us know! Please report issues in our GitHub tracker.

TODO: we may switch to the okd organization once it is ready

"},{"location":"wg_virt/community/#getting-started-as-contributor","title":"Getting started as contributor","text":"

The OKD Virtualization SIG is a group of multidisciplinary individuals who are contributing code, writing documentation, reporting bugs, contributing UX and design expertise, and engaging with the community.

Before getting started, we recommend that you:

The OKD Virtualization SIG is a community project, and we welcome contributions from everyone! If you'd like to write code, report bugs, contribute designs, or enhance the documentation, we would love your help!

"},{"location":"wg_virt/community/#testing","title":"Testing","text":"

We're always eager for new contributors to join in improving OKD Virtualization quality, no matter your experience level. Please try to deploy and use OKD Virtualization and report issues in our GitHub tracker.

TODO: we may switch to the okd organization once it is ready

"},{"location":"wg_virt/community/#documentation","title":"Documentation","text":"

OKD Virtualization documentation is mostly included in the GitHub openshift-docs repository, and we are working on getting it published on the OKD documentation website.

Some additional documentation may be available within this SubGroup space.

"},{"location":"wg_virt/community/#supporters-sponsors-and-providers","title":"Supporters, Sponsors, and Providers","text":"

OKD Virtualization SIG is still in its early days.

If you are using, supporting, or providing services with OKD Virtualization, we would like to share your story here!

"},{"location":"wg_virt/overview/","title":"OKD Virtualization Subgroup","text":"

The goal of the OKD Virtualization Subgroup is to provide an integrated solution for classical virtualization users based on OKD, HCO and KubeVirt, including a graphical user interface, and deployed using a method suited to bare metal.

Meet our community!

"},{"location":"wg_virt/overview/#documentation","title":"Documentation","text":""},{"location":"wg_virt/overview/#projects","title":"Projects","text":"

The OKD Virtualization Subgroup is monitoring and integrating the following projects into a user-consumable virtualization solution:

"},{"location":"wg_virt/overview/#deployment","title":"Deployment","text":""},{"location":"wg_virt/overview/#mailing-list-slack","title":"Mailing List & Slack","text":"

OKD Workgroup Google Group: https://groups.google.com/forum/#!forum/okd-wg

Slack Channel: https://kubernetes.slack.com/messages/openshift-dev

"},{"location":"wg_virt/overview/#todo","title":"TODO","text":""},{"location":"wg_virt/overview/#sig-membership","title":"SIG Membership","text":""},{"location":"wg_virt/overview/#resources-for-the-sig","title":"Resources for the SIG","text":""},{"location":"wg_virt/overview/#automation-in-place","title":"Automation in place:","text":"

HCO main branch gets tested against OKD 4.9: https://github.com/openshift/release/blob/master/ci-operator/config/kubevirt/hyperconverged-cluster-operator/kubevirt-hyperconverged-cluster-operator-main__okd.yaml

HCO precondition job: https://prow.ci.openshift.org/job-history/gs/origin-ci-test/pr-logs/directory/pull-ci-kubevirt-hyperconverged-cluster-operator-main-okd-hco-e2e-image-index-gcp

KubeVirt is uploaded to OperatorHub via the community-operators catalog: https://github.com/redhat-openshift-ecosystem/community-operators-prod/tree/main/operators/community-kubevirt-hyperconverged

"},{"location":"working-group/minutes/minutes/","title":"OKD Working Group Meeting Minutes","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/","title":"OKD Working Group Meeting Notes","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/#april-12-2022","title":"April 12, 2022","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/#attendees","title":"Attendees:","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/#agenda","title":"Agenda","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/","title":"OKD Working Group Meeting Notes","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/#may-24-2022","title":"May 24, 2022","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/#attendees","title":"Attendees:","text":""},{"location":"working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/#agenda","title":"Agenda","text":""}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index f24e6b32..35077da6 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,262 +2,262 @@ https://openshift-cs.github.io/okd.io/index.html/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/about/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/charter/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/communications/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/community/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/conduct/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/contributor/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/crc/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/docs/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/faq/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/help/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/installation/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/working-groups/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2021-03-07-new-blog.html/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2021-03-16-save-the-date-okd-testing-deployment-workshop.html/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2021-03-19-please-avoid-using-fcos-33.20210301.3.1.html/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2021-03-22-recap-okd-testing-deployment-workshop.html/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2021-05-04-From-OKD-to-OpenShift-in-3-Years.html/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2021-05-06-OKD-Office-Hours-at-KubeconEU-on-OpenShiftTV.html/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2022-09-09-an-introduction-to-debugging-okd-release-artifacts.html/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2022-10-20-OKD-at-Kubecon-NA-Detroit/ - 2023-07-19 + 2023-09-20 
daily https://openshift-cs.github.io/okd.io/index.html/blog/2022-10-25-OKD-Streams-Building-the-Next-Generation-of-OKD-together/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2022-12-12-Building-OKD-payload/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/blog/2023-07-18-State-of-Affairs-in-OKD-CI-CD/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/automated-vsphere-upi/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/aws-ipi/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/azure-ipi/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/gcp-ipi/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/overview/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/sno/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/sri/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/upi-sno/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/vadim/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/vsphere-ipi/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/vsphere-prereqs/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/guides/virt-baremetal-upi/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/okd_tech_docs/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/okd_tech_docs/modifying_okd/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/okd_tech_docs/operators/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/okd_tech_docs/release/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/okd_tech_docs/troubleshoot/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/wg_crc/overview/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/wg_docs/content/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/wg_docs/doc-env/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/wg_docs/okd-io/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/wg_docs/overview/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/wg_virt/community/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/wg_virt/overview/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/working-group/minutes/minutes/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/working-group/minutes/2022/WG-Meeting-Minutes-04-12-2022/ - 2023-07-19 + 2023-09-20 daily https://openshift-cs.github.io/okd.io/index.html/working-group/minutes/2022/WG-Meeting-Minutes-05-24-2022/ - 2023-07-19 + 2023-09-20 daily \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 872fb3e0..d10c1cac 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ