CI: Add automated container build
This commit adds a container which holds CCS and the SDK so they can be
used to build the various firmwares.

The pipeline can later be expanded to also build the firmware(s).

Signed-off-by: Olliver Schinagl <[email protected]>
oliv3r committed Aug 30, 2024
1 parent c2c4f2e commit 67f35ba
Showing 5 changed files with 236 additions and 0 deletions.
2 changes: 2 additions & 0 deletions .dockerignore
@@ -0,0 +1,2 @@
*
!container-entrypoint.sh
78 changes: 78 additions & 0 deletions .github/workflows/container-build.yaml
@@ -0,0 +1,78 @@
name: Create and publish Container image

on:
  push:
    branches:
      - master
    tags:
      - 'v*'
  pull_request:
    branches:
      - master

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Free Disk Space (Ubuntu)
        uses: jlumbroso/free-disk-space@main
        with:
          android: true
          docker-images: true
          dotnet: true
          haskell: true
          large-packages: true
          swap-storage: true
          tool-cache: true

      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=edge
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
          flavor: |
            latest=auto

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          file: Containerfile
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          load: true
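
As a hedged illustration (not the output of an actual run): with the metadata configuration above, pushing a git tag such as `v1.2.3` would be expected to produce image tags along these lines, given that `docker/metadata-action` lowercases the repository name:

```text
ghcr.io/koenkk/z-stack-firmware:1.2.3
ghcr.io/koenkk/z-stack-firmware:1.2
ghcr.io/koenkk/z-stack-firmware:1
ghcr.io/koenkk/z-stack-firmware:latest
```

A push to `master` would instead tag the image as `master` (branch ref) and `edge`.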
68 changes: 68 additions & 0 deletions Containerfile
@@ -0,0 +1,68 @@
# SPDX-License-Identifier: MIT
#
# Copyright (C) 2024 Olliver Schinagl <[email protected]>

ARG UBUNTU_VERSION="22.04"
ARG TARGET_ARCH="library"

FROM index.docker.io/${TARGET_ARCH}/ubuntu:${UBUNTU_VERSION}

ENV HOME="/build"

ARG SLF2_COMPONENTS="PF_WCONN"
ENV SLF2_COMPONENTS="${SLF2_COMPONENTS}"

ARG CCS_VERSION="12.5.0"
ARG CCS_RELEASE="00007"
ADD "https://dr-download.ti.com/software-development/ide-configuration-compiler-or-debugger/MD-J1VdearkvK/${CCS_VERSION}/CCS${CCS_VERSION}.${CCS_RELEASE}_linux-x64.tar.gz" "/tmp/ccs_install/"
ENV CCS_VERSION=${CCS_VERSION}.${CCS_RELEASE}

RUN apt-get update && apt-get install --yes \
        'tini' \
        'build-essential' \
        'cmake' \
        'git' \
        'libc6-i386' \
        'libgconf-2-4' \
        'libncurses5' \
        'libtinfo5' \
        'libusb-0.1-4' \
        'python3' \
        'unzip' \
    && \
    rm -f -r '/var/cache/apt' && \
    rm -f -r '/var/lib/apt' && \
    echo 'Extracting CCS ...' && \
    tar -xvf "/tmp/ccs_install/CCS${CCS_VERSION:?}_linux-x64.tar.gz" -C '/tmp/ccs_install' && \
    echo 'Installing CCS ...' && \
    "/tmp/ccs_install/CCS${CCS_VERSION}_linux-x64/ccs_setup_${CCS_VERSION}.run" \
        --enable-components "${SLF2_COMPONENTS:?}" \
        --mode unattended \
        --prefix '/opt/ti/' \
    && \
    echo 'Wrapping things up' && \
    rm -f -r '/tmp/ccs_install' && \
    ln -f -s \
        '/opt/ti/xdctools_'*'_core' \
        '/opt/ti/xdctools_core' \
    && \
    ln -f -s \
        '/opt/ti/ccs/utils/sysconfig_'* \
        '/opt/ti/ccs/utils/sysconfig' \
    && \
    ln -f -s \
        '/opt/ti/ccs/tools/compiler/ti-cgt-armllvm_'* \
        '/opt/ti/ccs/tools/compiler/ti-cgt-armllvm' \
    && \
    echo 'Installation complete'

ENV PATH="/opt/ti/ccs/eclipse:/opt/ti/ccs/utils/sysconfig/:${PATH}"
ENV XDC_INSTALL_DIR="/opt/ti/xdctools_core"
ENV SYSCONFIG_TOOL="sysconfig_cli.sh"
ENV CMAKE="cmake"
ENV PYTHON="python3"
ENV TICLANG_ARMCOMPILER="/opt/ti/ccs/tools/compiler/ti-cgt-armllvm"

COPY "container-entrypoint.sh" "/init"

ENTRYPOINT [ "/usr/bin/tini", "--", "/init" ]
27 changes: 27 additions & 0 deletions container-entrypoint.sh
@@ -0,0 +1,27 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0-or-later
#
# Copyright (C) 2024 Olliver Schinagl <[email protected]>
#
# A beginning user should be able to docker run image bash (or sh) without
# needing to learn about --entrypoint
# https://github.com/docker-library/official-images#consistency

set -eu
if [ -n "${DEBUG_TRACE_SH:-}" ] && \
   [ "${DEBUG_TRACE_SH:-}" != "${DEBUG_TRACE_SH#*"$(basename "${0}")"*}" ] || \
   [ "${DEBUG_TRACE_SH:-}" = 'all' ]; then
    set -x
fi

bin='eclipse'

# Prefix args with $bin if $1 is not a valid command
if ! command -v -- "${1:-}" > '/dev/null' 2>&1; then
    # Register the SDK before handing the arguments over to $bin
    eclipse -noSplash -application com.ti.common.core.initialize -ccs.productDiscoveryPath "${SLF2_SDK}"
    set -- "${bin:?}" "${@}"
fi
exec "${@}"

exit 0
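
The argument-prefixing pattern used by this entrypoint is generic; a minimal standalone sketch, with `echo` standing in for `eclipse` and the SDK-registration step omitted:

```shell
#!/bin/sh
# Sketch of the entrypoint's argument handling: when the first argument
# is not an executable command, prepend the default binary.
set -eu

bin='echo'                       # stand-in for 'eclipse'
set -- 'hello from the sketch'   # simulated container arguments

if ! command -v -- "${1:-}" > '/dev/null' 2>&1; then
    set -- "${bin:?}" "${@}"
fi

result="$("${@}")"   # the real script uses 'exec "${@}"' instead
echo "${result}"
```

Passing a real command name (e.g. `ls`) as the first argument would run it unmodified, since `command -v` then succeeds and no prefixing happens.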
61 changes: 61 additions & 0 deletions coordinator/Z-Stack_3.x.0/COMPILE.md
@@ -35,3 +35,64 @@ These changes are required to fix the coordinator crashing when the TX power is
In SDK 7.10.02.23 TI made changes to `macSetTxPowerVal` which introduced this issue.
The patch reverts these changes.
The patched `maclib_*.a` files were provided by a TI employee and will only work for the 7.41.00.17 SDK.


## Docker build environment

This repo includes a `Containerfile` to help set up a build environment without needing to download and install things manually. [Docker](https://docker.com) or [Podman](https://podman.io) can be used; the following examples use Docker.

1. Build the container locally. This step may be skipped if a [released container](https://github.com/Koenkk/pkgs/container/Z-Stack-firmware) from this repository is used instead.
```console
$ docker build \
    --file '../../Containerfile' \
    --rm \
    --tag 'z-stack:dev' \
    '../../'
```

> __Note:__ The above instruction assumes the current directory is `coordinator/Z-Stack_3.x.0`, which is why `../../` refers to the repo root. Replace `../../` with `./` when running from the root.

> __Warning:__ Building the container downloads CCS. While Docker keeps a cached copy for subsequent builds, the download can take a while, and it is not immediately obvious that it is happening.
1. Enter the container so that the firmware can be built.
```console
$ docker run \
    --interactive \
    --rm \
    --tty \
    --volume './:/src' \
    --volume '../../simplelink_f2_examples_sdk:/sdk' \
    --volume './workspace:/build/workspace' \
    --workdir '/build/workspace' \
    'z-stack:dev' \
    '/bin/bash'
```

> __Warning:__ This repository must have been cloned recursively, with the SimpleLink SDK volume-mounted to `/sdk` as described above. The various `znp.syscfg` files are volume-mounted to `/src`. This last mount is not strictly needed; the `znp.syscfg` files could instead be copied in directly from the host at the point where they are used. This is mostly a matter of preference.

> __Note:__ The local directory `./workspace` is volume-mounted into the container's `/build/workspace` directory so that files produced in the container are kept; it can be freely removed when done.

Within the container, the steps are similar to those above; however, the SDK needs to be compiled and registered first.

1. First, the SDK libraries will need to be compiled.
`# make --directory='/sdk/cc13xx_cc26xx_sdk' --jobs=$(($(nproc) - 1)) build-ticlang`

1. Next, register the SDK to CCS
`# eclipse -noSplash -application com.ti.common.core.initialize -ccs.productDiscoveryPath '/sdk/cc13xx_cc26xx_sdk/'`

> __Tip:__ Entering the container is convenient, but not required. Instead of using `/bin/bash` to enter a shell, commands can be passed directly, and the word `eclipse` is not even needed in that case; e.g. `docker run ... --help` already works. The SDK is registered automatically in this case as well. An alias could be set for the long docker command, so that e.g. `zstack --help` can be used (assuming `zstack` is the name of the alias). This is convenient when developing on the stack and rebuilding after a change.

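
As a sketch of such an alias (the `zstack` name is hypothetical; the `z-stack:dev` tag and mount paths are taken from the examples above and assume the `coordinator/Z-Stack_3.x.0` working directory):

```shell
# Hypothetical 'zstack' alias wrapping the long docker invocation.
alias zstack="docker run \
    --interactive \
    --rm \
    --tty \
    --volume './:/src' \
    --volume '../../simplelink_f2_examples_sdk:/sdk' \
    --volume './workspace:/build/workspace' \
    --workdir '/build/workspace' \
    'z-stack:dev'"

# Usage: 'zstack --help' now runs eclipse's --help inside the container.
```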
### Launching eclipse in the container
Eclipse can be launched in the container to test or check things out. This requires an X11-compatible host; add the following options to the `docker run` command:
```
--env DISPLAY="${DISPLAY}" --volume '/tmp/.X11-unix' --network='host'
```
> __Note:__ The network mapping is required for local X11 forwarding to work, because the `DISPLAY` variable assumes `localhost`.

Within the container, eclipse requires `libswt-gtk-4-java` and `epiphany-browser` to be installed (CCS/eclipse does come with chromium, but epiphany is an easy way to satisfy the missing dependencies).

Almost certainly some additional security mechanisms need to be bypassed; a quick (and dangerous) hack is to run `xhost +` on the host before launching the container.

Finally, start `eclipse -data '/build/workspace'` from within the container; a UI window will then pop up on the host.

> __Note:__ Optionally, a VNC-in-docker solution or X11 forwarding over SSH can also be used, but that is out of scope here.
