Describe the bug
I'm trying to run an npm build, publish build info, and execute an Xray scan while running inside a withDockerContainer block. However, after I publish the build info, the command that kicks off the Xray scan fails.
Current behavior
The jf 'npm i' works, then jf 'rt bp' runs without error, then jf 'bs' blows up saying it can't find the build info. When I check our Artifactory, a build was pushed, but the build info doesn't have any of the metadata/dependencies from the npm install. From inspecting the job logs, I have a feeling the JFrog plugin/CLI might use tmp dirs to store build info, but they don't carry across the different commands.
Reproduction steps
Run a pipeline job with steps similar to the following scripted pipeline:
node {
    withDockerContainer(
        image: 'node:18.17.1',
        args: "-v <tools_dir>:<tools_dir>:rw,z" // this was necessary so that the container could use the globally configured jfrog-cli tool
    ) {
        withEnv(["JFROG_BINARY_PATH=${tool 'jfrog-cli'}"]) {
            stage('Build and scan') {
                // checkout some npm project here
                jf 'npm-config --repo-resolve <resolve_repo> --repo-deploy <resolve_repo>'
                jf 'npm i'
                jf 'rt bp'
                jf 'bs' // 404 returned here
            }
        }
    }
}
Expected behavior
The build info should be populated and pushed correctly, allowing an xray scan to run against that build's dependencies.
JFrog plugin version
1.4.0
JFrog CLI version
2.45.0
Operating system type and version
Amazon Linux / UBI
JFrog Artifactory version
7.49.6
JFrog Xray version
3.66.6
Hi @five-iron, I was encountering something slightly different but related.
tl;dr try setting this environment variable:
JFROG_CLI_TEMP_DIR="${WORKSPACE_TMP}/jfrog"
By default, the CLI keeps track of build info under /tmp, so you lose any build-info state when the build container is destroyed. You can override that location with the JFROG_CLI_TEMP_DIR variable. The situation I'm describing doesn't quite match the single-node example you provide, but hopefully this information helps you debug a bit further.
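For reference, here is a minimal sketch of how that could be applied to the pipeline above. It assumes WORKSPACE_TMP is set on your Jenkins version and mounted into the container (withDockerContainer normally mounts the workspace's @tmp directory alongside the workspace), and it creates the directory up front in case the CLI expects it to exist; the <tools_dir> and <resolve_repo> placeholders are kept from the original example.

node {
    withDockerContainer(
        image: 'node:18.17.1',
        args: "-v <tools_dir>:<tools_dir>:rw,z"
    ) {
        // Create the temp dir up front in case the CLI does not create it itself.
        sh 'mkdir -p "$WORKSPACE_TMP/jfrog"'
        // Point the CLI's temp/build-info location at a workspace-backed path so
        // whatever 'jf rt bp' writes locally is still there when 'jf bs' runs.
        withEnv([
            "JFROG_BINARY_PATH=${tool 'jfrog-cli'}",
            "JFROG_CLI_TEMP_DIR=${env.WORKSPACE_TMP}/jfrog"
        ]) {
            stage('Build and scan') {
                // checkout some npm project here
                jf 'npm-config --repo-resolve <resolve_repo> --repo-deploy <resolve_repo>'
                jf 'npm i'
                jf 'rt bp'
                jf 'bs'
            }
        }
    }
}

The only changes from the original example are the mkdir and the extra JFROG_CLI_TEMP_DIR entry in withEnv, so the build-info state should land on a path that persists across the individual jf commands.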