making build -D working with podman #3796

Merged
merged 30 commits on Jun 25, 2024

Conversation

judovana
Contributor

@judovana judovana commented May 7, 2024

This resolves the small differences between podman and docker. On systems with both, it allows selecting a specific one.

@judovana judovana marked this pull request as draft May 7, 2024 16:58
@github-actions github-actions bot added the docker (Issues related to our docker files and docker scripts), documentation (Issues that request updates to our documentation) and security labels May 7, 2024
@judovana
Copy link
Contributor Author

judovana commented May 7, 2024

Currently the build fails in mk-ca-bundle.pl - https://github.com/adoptium/temurin-build/blob/master/security/mk-ca-bundle.pl#L615 . It prints just "Couldn't open file: ". Can somebody please shed some light on the mysteries of this file, especially around line 615?

Member

@sxa sxa left a comment

Some initial thoughts - I haven't gone through all of the functionality yet but I'll do that as a second pass :-)

@@ -45,7 +45,7 @@ as we can generate valid dockerfile for it):

```bash
./makejdk-any-platform.sh --docker --clean-docker-build jdk8u
./makejdk-any-platform.sh --docker --clean-docker-build --build-variant openj9 jdk11u
./makejdk-any-platform.sh --podman --clean-docker-build --build-variant openj9 jdk11u
```

Member

Any reason not to make this -D now?

Suggested change
./makejdk-any-platform.sh --podman --clean-docker-build --build-variant openj9 jdk11u
./makejdk-any-platform.sh -D --clean-docker-build --build-variant openj9 jdk11u

Contributor Author

I had left -D to autodetect. If there is podman, it will be used. If not, docker will be used. The reason for this fallback is that if you have podman, you also have docker aliases. But if you have docker, you do not have podman aliases.
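The autodetect behaviour described here could look something like the following sketch - illustrative only, not the actual makejdk-any-platform.sh code: prefer podman when present, otherwise fall back to docker.

```bash
#!/bin/bash
# Hypothetical sketch of the "-D" autodetection described above:
# prefer podman when present, otherwise fall back to docker.
detect_container_command() {
  if command -v podman > /dev/null 2>&1; then
    echo "podman"
  elif command -v docker > /dev/null 2>&1; then
    echo "docker"
  else
    echo "false"   # no container engine available
  fi
}

detect_container_command
```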

Member

Yep understood - I just wasn't sure why this example was explicitly switched to --podman instead of showing the "autodetect" version

Contributor Author

I just wanted to keep all three there. If they should be reduced, just say.

Member

I have a slight preference for using the generic -D, although I quite like the multiple examples too, so no real objection given your explanation :-)

docker-build.sh (outdated, resolved)
@@ -270,8 +270,14 @@ function parseConfigurationArguments() {
"--destination" | "-d" )
BUILD_CONFIG[TARGET_DIR]="$1"; shift;;

"--docker" | "-D" )
BUILD_CONFIG[USE_DOCKER]="true";;
"-D" )
Member

What is the reason for using this USE_DOCKER variable instead of the existing DOCKER one? It means we now have lines elsewhere like:

${BUILD_CONFIG[DOCKER]} ${BUILD_CONFIG[USE_DOCKER]}

which aren't as clear as they could be when you look at those lines in isolation.

Contributor Author

@judovana judovana May 8, 2024

Yes, they are absolutely misleading, but they already were before this PR. Before this goes in, I would like to rename them both so they fit at least a bit with what they are doing. I have left them in for readability (although, after all, it does not help).

Originally BUILD_CONFIG[USE_DOCKER] was true/false and ${BUILD_CONFIG[DOCKER]} was "docker" or "sudo docker". Now BUILD_CONFIG[USE_DOCKER] is false/docker/podman and ${BUILD_CONFIG[DOCKER]} is sudo or nothing.

The rename should be BUILD_CONFIG[CONTAINER_COMMAND] and ${BUILD_CONFIG[SUDO_CONTAINER]}.
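To make the split described here concrete, a simplified sketch (not the real script) of how the two settings combine - the container command in BUILD_CONFIG[USE_DOCKER] and the optional sudo prefix in BUILD_CONFIG[DOCKER]:

```bash
#!/bin/bash
# Illustrative sketch of the variable semantics discussed above.
declare -A BUILD_CONFIG
BUILD_CONFIG[USE_DOCKER]="podman"   # false, docker, or podman
BUILD_CONFIG[DOCKER]=""             # "sudo" or empty string

if [ "${BUILD_CONFIG[USE_DOCKER]}" != "false" ]; then
  # The sudo prefix is deliberately unquoted so an empty value vanishes
  # instead of becoming an empty argument.
  echo ${BUILD_CONFIG[DOCKER]} "${BUILD_CONFIG[USE_DOCKER]}" build ...
fi
```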

Contributor Author

Actually I was long hesitating whether to keep using USE_DOCKER as a true/false flag and keep the command in DOCKER with docker/podman/sudo docker/sudo podman, and decided by a slight 55:45 margin to use USE_DOCKER as the command and DOCKER as the sudo prefix, with the intention to rename at the end.

Member

@andrew-m-leonard Am I correct in saying that these parameters are not defined at any point in the jenkins jobs, so changing them here should be "safe"?

Contributor Author

Actually - the main reason to switch was to add more flexibility for possible "sudo" fixes, e.g. to allow a choice of sudo/run0/doas ....

Contributor Author

@andrew-m-leonard Am I correct in saying that these parameters are not defined at any point in the jenkins jobs, so changing them here should be "safe"?

Great remark. No idea. Then the rename will go as a separate PR once.. if... these changes ever go in. TYVM!

Member

Actually I was long hesitating whether to keep using USE_DOCKER as a true/false flag and keep the command in DOCKER with docker/podman/sudo docker/sudo podman, and decided by a slight 55:45 margin to use USE_DOCKER as the command and DOCKER as the sudo prefix, with the intention to rename at the end.

Yeah, makes sense - I think we should do the renames so that it's a little more comprehensible to new people looking at the scripts.

${BUILD_CONFIG[DOCKER]} "${BUILD_CONFIG[USE_DOCKER]}"

Looks a bit odd in isolation so something like

${BUILD_CONFIG[SUDO_CONTAINER]} ${BUILD_CONFIG[CONTAINER_COMMAND]}

would be a lot clearer, so if we can get some of these renames in (and hope this PR rebases on master nicely!) then I'm personally good with putting this in if Andrew agrees.

Contributor Author

Right. Renaming to something like that will be necessary. Currently it is terribly unreadable.
Yes, the renaming is missing due to rebases :) .. no, it does not rebase smoothly, but I will elaborate!
CONTAINER_COMMAND sounds like the clear choice. CONTAINER_AS_ROOT may be better if we want to prevent another renaming in the future (and PRIVILEGED_CONTAINER is just wrong). But currently it is indeed just sudo, so SUDO_CONTAINER would do.

makejdk-any-platform.sh (outdated, resolved)
docker-build.sh (outdated, resolved)
security/mk-cacerts.sh (outdated, resolved)
sbin/common/config_init.sh (outdated, resolved)
@judovana

This comment was marked as off-topic.

@judovana
Contributor Author

judovana commented May 9, 2024

Configuring command and using the pre-built config params...
/openjdk/sbin/build.sh: line 620: cd: /home/jvanek/git/temurin-build/workspace/./build//src: No such file or directory

It seems I'm on the wrong track.

@judovana
Contributor Author

Configuring command and using the pre-built config params...
/openjdk/sbin/build.sh: line 620: cd: /home/jvanek/git/temurin-build/workspace/./build//src: No such file or directory

It seems I'm on the wrong track.

@sxa hi!

Sorry for the dummy question, but I have to be missing something. The docker build scripts, as I read them, nowhere clone/copy/mount the jdk sources. Do you recall how it was designed to obtain them? (the -l would need some handling there, but that's not part of this question) Thanx!

@judovana
Contributor Author

I added a few more dirs.. but that gets back to my weird question:

./makejdk-any-platform.sh -c --podman jdk21u
...

Completed configuring the version string parameter, config args are now:  --with-vendor-name="Undefined Vendor" --with-vendor-url=file:///dev/null --with-vendor-bug-url=file:///dev/null --with-vendor-vm-bug-url=file:///dev/null --with-version-opt=202405151419 --with-version-pre=beta
Building up the configure command...
Adjust configure for reproducible build
Configuring jvm variants if provided
Configure custom cacerts src security/certs
setting freetype dir to bundled
Completed configuring the version string parameter, config args are now:  --with-vendor-name="Undefined Vendor" --with-vendor-url=file:///dev/null --with-vendor-bug-url=file:///dev/null --with-vendor-vm-bug-url=file:///dev/null --with-version-opt=202405151419 --with-version-pre=beta --with-boot-jdk=/usr/lib/jvm/jdk20 --with-debug-level=release --with-native-debug-symbols=none  --with-alsa=/home/jvanek/git/temurin-build/workspace/./build//installedalsa --with-source-date=1715782781 --with-hotspot-build-time='2024-05-15 14:19:41' --disable-ccache --with-build-user=admin --with-extra-cflags='-fdebug-prefix-map=/home/jvanek/git/temurin-build/workspace/build/src/build/linux-x86_64-server-release/=' --with-extra-cxxflags='-fdebug-prefix-map=/home/jvanek/git/temurin-build/workspace/build/src/build/linux-x86_64-server-release/=' --with-jvm-variants=server --with-cacerts-src=/openjdk/sbin/../security/certs  --with-freetype=bundled --with-zlib=bundled
Configuring command and using the pre-built config params...
Should be in the openjdk build root directory, I'm at /home/jvanek/git/temurin-build/workspace/build/src
Currently at '/home/jvanek/git/temurin-build/workspace/build/src'
Skipping configure because we're assembling an exploded image
Should be in the openjdk build root directory, I'm at /home/jvanek/git/temurin-build/workspace/build/src
Currently at '/home/jvanek/git/temurin-build/workspace/build/src'
Skipping configure because we're assembling an exploded image
make: *** No targets specified and no makefile found.  Stop.
OpenJDK make failed, archiving make failed logs
/openjdk/sbin/build.sh: line 781: cd: build/*: No such file or directory
Archiving and compressing with gzip

real	0m0.006s
user	0m0.003s
sys	0m0.003s
Your archive was created as /home/jvanek/git/temurin-build/workspace/build/src/OpenJDK.tar.gz
Moving the artifact to location /home/jvanek/git/temurin-build/workspace/target//OpenJDK-makefailurelogs.tar.gz
archive done.
Failed to make the JDK, exiting

So it does not obtain the sources...

@judovana
Contributor Author

Just a nit: except for not working, this PR is doing its job. Now both podman and docker are working/failing the same. Only the wrapper does not work as expected. Maybe for a pretty long time. I may try to continue, but I need to know if there is any interest in that.

@judovana judovana force-pushed the podman branch 7 times, most recently from 7a33d32 to d8a44d6, May 24, 2024 13:27
@judovana
Contributor Author

@sxa @andrew-m-leonard @karianna

Skipping: Telekom Security SMIME ECC Root 2021
Parsing: Telekom Security TLS ECC Root 2020
Skipping: Telekom Security SMIME RSA Root 2023
Parsing: Telekom Security TLS RSA Root 2023
Done (147 CA certs processed, 24 skipped).
mk-ca-bundle.pl generates 147 certificates
Subject: CN=GlobalSign_Root_CA,OU=Root_CA,O=GlobalSign_nv-sa,C=BE
Generated alias: CN=GlobalSign_Root_CA,OU=Root_CA,O=GlobalSign_nv-sa,C=BE
Renaming certs/cert.crt to certs/cn_globalsign_root_ca,ou_root_ca,o_globalsign_nvsa,c_be
ERROR: Certificate alias file already exists certs/cn_globalsign_root_ca,ou_root_ca,o_globalsign_nvsa,c_be
security/mk-cacerts.sh needs ALIAS_FILENAME filter updating to make unique

Any idea?
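One possible way to make the generated alias filenames unique, as the log suggests - purely a hypothetical sketch, not the actual security/mk-cacerts.sh code - is to append a counter whenever the sanitised name collides:

```bash
#!/bin/bash
# Hypothetical sketch: return a filename in $dir based on $base,
# appending _2, _3, ... if a file with that name already exists.
unique_alias_file() {
  local dir="$1" base="$2"
  local candidate="$base" n=1
  while [ -e "$dir/$candidate" ]; do
    n=$((n + 1))
    candidate="${base}_${n}"
  done
  echo "$dir/$candidate"
}

demo=$(mktemp -d)
touch "$demo/cn_globalsign_root_ca"
unique_alias_file "$demo" cn_globalsign_root_ca   # prints .../cn_globalsign_root_ca_2
```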

@sxa
Member

sxa commented Jun 17, 2024

Just a nit: except for not working, this PR is doing its job. Now both podman and docker are working/failing the same. Only the wrapper does not work as expected. Maybe for a pretty long time. I may try to continue, but I need to know if there is any interest in that.

I think it's useful to have that operational (especially since we mention this in the top level README.md in this repository), although perhaps we should just direct people to our docker images, such as adoptopenjdk/centos7_build_image now. I do feel that it's "nicer" to have a separate dockerfile, but I'm not sure how many people are trying to use it, and we haven't been actively keeping it tested and updated with anything that's needed.

@judovana
Contributor Author

Pls, note #3855 for my ongoing flow of thoughts. If there is any better place to put podman and fedora/rhel support, let me know.

@sxa
Member

sxa commented Jun 17, 2024

Pls, note #3855 for my ongoing flow of thoughts. If there is any better place to put podman and fedora/rhel support, let me know.

No better suggestions - that issue LGTM.

@judovana
Contributor Author

Note for myself for tomorrow:

diff --git a/sbin/common/common.sh b/sbin/common/common.sh
index 6f544a8..c559944 100755
--- a/sbin/common/common.sh
+++ b/sbin/common/common.sh
@@ -233,7 +233,7 @@ createOpenJDKArchive()
 
 function setBootJdk() {
   # Stops setting the bootJDK on the host machine when running docker-build
-  if [ "${BUILD_CONFIG[DOCKER]}" != "docker" ] || { [ "${BUILD_CONFIG[DOCKER]}" == "docker" ] && [ "${BUILD_CONFIG[DOCKER_FILE_PATH]}" != "" ]; } ; then
+  if [ "${BUILD_CONFIG[DOCKER]}" != "docker" -a "${BUILD_CONFIG[DOCKER]}" != "sudo docker" ] || { [ "${BUILD_CONFIG[DOCKER]}" == "docker" ] && [ "${BUILD_CONFIG[DOCKER_FILE_PATH]}" != "" ]; } ; then
     if [ -z "${BUILD_CONFIG[JDK_BOOT_DIR]}" ] ; then
       echo "Searching for JDK_BOOT_DIR"

This is missing to make docker "pass"; it currently dies on:

Skipping: Telekom Security SMIME RSA Root 2023
Parsing: Telekom Security TLS RSA Root 2023
Done (147 CA certs processed, 24 skipped).
Couldn't open file: No such file or directory at ./mk-ca-bundle.pl line 615.

Which was also resolved in this PR. I keep forgetting to add that condition (as it changed in this PR).
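As an aside on the diff in this note: POSIX marks the `-a` and `-o` operators inside `test`/`[ ]` as obsolescent, and the same condition is usually written as two separate tests joined with `&&`. A purely illustrative sketch:

```bash
#!/bin/bash
# Equivalent of: [ "$d" != "docker" -a "$d" != "sudo docker" ]
# written with the POSIX-preferred two-test form.
d="podman"
if [ "$d" != "docker" ] && [ "$d" != "sudo docker" ]; then
  echo "not plain docker"
fi
```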

@judovana
Contributor Author

judovana commented Jun 17, 2024

both

sh makejdk-any-platform.sh  -c --sudo --docker jdk21u
sh makejdk-any-platform.sh  -c --custom-cacerts  --podman jdk21u

are now passing on my docker (first) and podman (second) fedora40 vm \o/

Ok to rename and merge? :-)

@@ -385,5 +385,7 @@ configure_build() {
setWorkingDirectory
configureMacFreeFont
setMakeArgs
setBootJdk
if [ "${BUILD_CONFIG[USE_DOCKER]}" == false ] ; then
Contributor

I think from looking at the other changes, USE_DOCKER is either "docker" or "podman", and not "false"?
Which leads to the point that the variable name is confusing...?
Maybe we should add a new variable, say "CONTAINER_CMD" (docker|podman), and if we need a boolean, call that "USE_CONTAINER" (true|false) ??

Contributor Author

Thanx!

both variables must be renamed as the last stage of this PR (I had not yet done it to avoid broken rebases).

It is not visible in the changeset, but the default value of BUILD_CONFIG[USE_DOCKER] really is false:

BUILD_CONFIG[USE_DOCKER]=${BUILD_CONFIG[USE_DOCKER]:-false}


judovana added 11 commits June 24, 2024 10:41
…${BUILD_CONFIG[USE_DOCKER]}"

Originally, this patch started to properly fix quoting for safety (thanx linter); I found that in some places the original ${BUILD_CONFIG[DOCKER]} was not replaced by the new tandem. ${BUILD_CONFIG[DOCKER]} was 'docker'
or 'sudo docker'. I have split it, so ${BUILD_CONFIG[DOCKER]} is sudo or
nothing and ${BUILD_CONFIG[USE_DOCKER]} is docker or podman. The
variables have to be renamed at the end to adhere more to their purposes.
all sub dirs should then be created by the following prepare-workspace
@judovana
Contributor Author

Ok, renaming vars!

judovana added 2 commits June 24, 2024 12:00
BUILD_CONFIG[USE_DOCKER]-> BUILD_CONFIG[CONTAINER_COMMAND]
BUILD_CONFIG[DOCKER] -> BUILD_CONFIG[CONTAINER_AS_ROOT]

BUILD_CONFIG[CONTAINER_COMMAND] values: false, podman, docker
BUILD_CONFIG[CONTAINER_AS_ROOT] values: sudo, empty string

Other docker based variables which are globally container bound remained
intact (CLEAN_DOCKER_BUILD, DEBUG_DOCKER, DOCKER_FILE_PATH...)
@judovana
Contributor Author

renamed, local docker and podman builds running

@judovana judovana marked this pull request as ready for review June 24, 2024 12:42
@judovana
Contributor Author

Podman build (sh makejdk-any-platform.sh -c --custom-cacerts --podman jdk21u) passed. Now trying 8, looks fine too.
Pure poor docker still runs (vm in vm in vm), but it is deep in the build already.

Contributor

@andrew-m-leonard andrew-m-leonard left a comment

I think it looks good now

@judovana
Contributor Author

tyvm!

@judovana
Contributor Author

Podman build (sh makejdk-any-platform.sh -c --custom-cacerts --podman jdk21u) passed. Now trying 8, looks fine too. Pure poor docker still runs (vm in vm in vm), but it is deep in the build already.

Also the docker build passed. Now trying 8u too.

@judovana
Contributor Author

judovana commented Jun 24, 2024

hm. Both jdk8u builds failed in freetype.

Podman: the freetype build passed, jdk configure fails:

configure: Found freetype include files at /home/jvanek/git/temurin-build/workspace/./build//installedfreetype/include using --with-freetype
checking for freetype includes... /home/jvanek/git/temurin-build/workspace/build/installedfreetype/include
checking for freetype libraries... /home/jvanek/git/temurin-build/workspace/build/installedfreetype/lib
checking if we can compile and link with freetype... no
configure: Could not compile and link with freetype. This might be a 32/64-bit mismatch.
configure: Using FREETYPE_CFLAGS=-I/home/jvanek/git/temurin-build/workspace/build/installedfreetype/include/freetype2 -I/home/jvanek/git/temurin-build/workspace/build/installedfreetype/include and FREETYPE_LIBS=-L/home/jvanek/git/temurin-build/workspace/build/installedfreetype/lib -lfreetype
configure: error: Can not continue without freetype. You might be able to fix this by running 'sudo yum install freetype-devel'.
No configurations found for /home/jvanek/git/temurin-build/workspace/build/src/! Please run configure to create a configuration.
Makefile:55: *** Cannot continue.  Stop.

docker: configure of freetype fails:

Cloning into 'freetype'...
Note: checking out '86bc8a95056c97a810986434a3f268cbe67f2902'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 86bc8a950 * Version 2.9.1 released. =========================
./autogen.sh: line 102: aclocal: command not found
./autogen.sh: line 55: test: 1: unary operator expected
./autogen.sh: line 59: test: 1: unary operator expected
./autogen.sh: line 67: test: 10: unary operator expected
./autogen.sh: line 71: test: 10: unary operator expected
./autogen.sh: line 102: libtoolize: command not found
./autogen.sh: line 55: test: 2: unary operator expected
./autogen.sh: line 59: test: 2: unary operator expected
./autogen.sh: line 67: test: 2: unary operator expected
./autogen.sh: line 71: test: 2: unary operator expected
generating `configure.ac'
running `aclocal -I . --force'
./autogen.sh: line 15: aclocal: command not found
error while running `aclocal -I . --force'

However it seems this also happens on master (verified). So the changeset seems not to be guilty. The only "what?" is the different failure in docker/podman.

@sxa
Member

sxa commented Jun 24, 2024

test: 1: unary operator expected

Hmmm, was that using the same OS image between podman and docker? That error looks to me like it was run with a Bourne shell instead of bash (for example Ubuntu's default sh, which typically points to dash)
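Whatever shell ran it, the quoted "unary operator expected" messages are the classic symptom of an unquoted empty variable in a `test` expression - plausible here, since aclocal is missing and any version variable derived from it would be empty. A minimal, purely illustrative repro:

```bash
#!/bin/bash
# With $ver empty, the unquoted test collapses to `test -ge 1`,
# which the shell rejects with "unary operator expected" (status 2).
ver=""                       # e.g. parsed from a missing aclocal --version
if test $ver -ge 1 2>/dev/null; then
  echo "new enough"
else
  echo "comparison failed or too old"
fi

# Quoting with a default sidesteps the syntax error entirely:
if test "${ver:-0}" -ge 1; then
  echo "new enough"
else
  echo "too old"
fi
```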

@judovana
Contributor Author

judovana commented Jun 24, 2024

Indeed. I had not looked into it more, as it fails the same both before and after this PR. But it remains on my todo: #3863

@judovana
Contributor Author

Thanx! Before acting, highlighting: the need for --custom-cacerts with podman is definitely making some hidden issue visible. I had not yet figured out why #3796 (comment) happens. The --custom-cacerts is working around it, and it does not occur with docker.

Hmmm are you saying that the ERROR: Certificate alias file already exists error only occurs when using podman and not docker? That seems very strange.

Gosh. While trying all other jdks (11-22) overnight, where they all passed (both with cloning and with -l), I realised I forgot to add --custom-cacerts to the cmdline... and they all passed.....

Member

@sxa sxa left a comment

LGTM now - thanks for this :-)

@sxa sxa merged commit 310734f into adoptium:master Jun 25, 2024
29 checks passed