We take vulnerabilities very seriously, so that's where I'd start. I would look through the security advisories in each changelog and find the issues that have been fixed. There will be links to the related CVEs, which could have a real impact if not addressed (changelogs here: https://jenkins.io/changelog-stable).
There's a lot more to that conversation than just security, but maybe your management responds better to that than to new features? It may at least open the door for more discussion.
How close is Jenkins Evergreen to GA?
I just finished a project upgrading Jenkins from 2.89.4 to 2.150.3, and upgraded more than 100 plugins at the same time. It was incredibly painful, and I'd love to have a "stable" set of plugins that are continuously updated.
Ideally, a repo/project should be able to build on its own via the Jenkinsfile, downloading already-built versions of its dependencies from a repository like Artifactory or Nexus.
Most projects should be multibranch pipelines nowadays. Combined with build-on-push via hooks, this allows anyone on the team to build their changes in different branches.
If you really need to run several jobs, you can orchestrate them with the <code>build</code> step, for example, to rebuild the root project after building a new version of a dependency.
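For example, a rough sketch (the job name and parameter are hypothetical):

build job: 'root-project', parameters: [
    string(name: 'DEP_VERSION', value: '1.2.3')
]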
Assuming Jenkins 2 and a Jenkinsfile is how your shell steps are defined, use node labels to determine which stages and steps run on each node.
https://jenkins.io/doc/book/pipeline/syntax/
Search for Common Options
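As a rough declarative sketch (the 'linux' and 'windows' labels are placeholders for whatever labels your nodes actually have):

pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'linux' }   // this stage only runs on nodes labeled 'linux'
            steps {
                sh 'make'
            }
        }
        stage('Windows Tests') {
            agent { label 'windows' }
            steps {
                bat 'run-tests.cmd'
            }
        }
    }
}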
Thanks everyone for all of your questions! This was a lot of fun ;-)
Since it's now 3 PM ET, we've completed the live version of this AMA, but we'll continue to monitor for any new questions and we'll continue to answer as best we can.
If you have more questions that we haven't answered, please feel free to reach out to us here for more help: https://info.cloudbees.com/jenkins-support-ama/
First off... if you allow
jenkins.model.Jenkins.getInstance()
then it effectively grants every user who has access to your repositories Jenkins admin rights. They don’t even need access to your Jenkins instance to become an admin, because of how the Jenkinsfile works.
It is the equivalent of granting everyone script console access. I recommend reading the security warning at the top of this wiki page:
https://wiki.jenkins.io/plugins/servlet/mobile?contentId=42470125#content/view/42470125
However, you can still use it yourself as an admin by exposing advanced steps via shared pipeline libraries.
https://jenkins.io/doc/book/pipeline/shared-libraries/
It runs in the same runtime as the script console. Don’t return the Jenkins instance to users, but you can use it to do more advanced pipeline things, exposed as one-line steps to users.
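For example, a hypothetical one-line step in a shared library's vars directory (the step name and logic are made up):

// vars/queuedJobs.groovy -- runs with script-console-level access
def call() {
    // Return plain data to the caller; never return the Jenkins instance itself
    return jenkins.model.Jenkins.getInstance().queue.items.collect { it.task.fullDisplayName }
}

Usage in a user's Jenkinsfile is then just: echo queuedJobs().join(', ')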
As of Declarative Pipeline 1.2 you can nest stages in a Declarative pipeline -- though it all needs to be in a "pipeline" block.
But for what you want to do, probably using a scripted pipeline is better, since you can use for loops.
node('Jenkins-Slave-1') { // You need to keep the node block since you're doing parallel on the node
    stage('Code Checkout') {
        git branch: 'newFeatures', credentialsId: 'abc123', url: 'https://some-stash/scm/ip/some.git'
    }
    withCredentials([usernamePassword(credentialsId: 'abc345', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
        def DeploymentType = "stage"
        def parallelBranches = [:]
        for (int i = 1; i <= 2; i++) { // Modify range as needed
            def machine = i // Local copy so each closure captures its own value of the loop counter
            parallelBranches["Upgrade CDS Machine m${machine}"] = {
                // Weird concatenation to avoid issues with groovy vs. env variable substitution
                sh 'PYTHONPATH=${pwd} python2.7 ${pwd}/prod/bounce.py -e -c -z us-west-2 ' +
                   "-fc m${machine} -go -dt ${DeploymentType} " +
                   '-k ${USERNAME} -s ${PASSWORD}'
            }
        }
        // Keep the parallel stage inside withCredentials so USERNAME/PASSWORD are still bound
        stage("Upgrade CDS") {
            parallel parallelBranches
        }
    }
}
It's early in the morning here, so the above probably has some misplaced quotes or braces, but FWIW you happened to catch one of the maintainers of the pipeline plugin suite.
Why didn't you take a look at their API and find the answer yourself? It says right there in the docs:
https://webhookrelay.com/api-reference/#tag/Logs
>Webhook logs hold the status of received and forwarded webhooks. You can use this API group to resend webhooks to one or more destinations
I'd recommend reaching out to the company if you have further questions.
You're going in the wrong direction, I think: don't push from TFS, poll from Jenkins. Just use the regular git plugin with a multibranch project, and if you want to use a unified Jenkinsfile, put the Jenkinsfile in its own repo and use a git submodule to include it in all the other repos: https://git-scm.com/docs/git-submodule
I think Docker (https://www.docker.com/) would be a good generic addition... although I think they are installing Jenkins in order to use the Mac as a build machine for iOS projects. Either PhoneGap/Cordova or native.
The Jenkins Pipeline shell step allows calling scripts, e.g. ones that execute Python programs. Following that approach, you get the benefits of both Jenkins and Python.
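A minimal sketch (the script path and arguments are made up):

node {
    // The shell step hands off to Python; collect the results back however you like
    sh 'python3 scripts/analyze.py --input data.csv'
}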
Unfortunately not. We’re running our agents in ECS, which makes them ephemeral and single-use, so we don’t have to worry about workspaces and dirty build environments... but that doesn’t get around the fact that the pipeline itself runs on the master :(
Here’s a decent article about scaling: https://jenkins.io/blog/2017/02/01/pipeline-scalability-best-practice/
We build on one set of Windows machines and test on a different set of Windows machines. Both types of machines are configured as agents in Jenkins. We have employed different types of artifact transports.
The machines being agents, Jenkins takes care of the transport and execution, and we only have to make sure the Jenkins agent is running on the nodes.
Has the pluggable storage work been abandoned? https://jenkins.io/sigs/cloud-native/pluggable-storage/
The only part of it that was fully delivered was the artifacts feature.
Without the features above, HA Jenkins and zero-downtime upgrades are very difficult, and that drives a lot of people to other CI systems that are more mature in this respect.
I don't have any experience with a sshPut, as I use Ansible for bigger deployments. However, I do have one job where the results of a successful build are moved to a web server where they can be downloaded. I accomplished this by adding the web server as a node, and then using stash() and unstash() to easily transfer the files when complete. https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#stash-stash-some-files-to-be-used-later-in-the-build
Nope, you can't directly; the best you can do is run a Python script from a shell step and pass it the params. If this is for a Jenkins scripted pipeline:
def myVar = params.paramName
This can now be accessed in the script, no need for env variables.
If you have a strict requirement for env variables:
env.envVarName = params.paramName
Another option is that you can also wrap your job using a withEnv closure. This page has examples: https://jenkins.io/doc/pipeline/examples/
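Putting those together, a rough sketch (the parameter and variable names are hypothetical):

// Direct access, no env variable needed
def myVar = params.paramName

// Or expose it to shell steps as an environment variable
withEnv(["DEPLOY_TARGET=${params.paramName}"]) {
    sh 'python deploy.py "$DEPLOY_TARGET"'
}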
Not sure about the part about triggering the jobs from TFS as I don't have experience with it, but for executing the same job in hundreds of pipelines and managing its configuration from a central place, you should definitely look at using Jenkins shared libraries: https://jenkins.io/doc/book/pipeline/shared-libraries/
It will make your life a lot easier at the expense of a little bit of groovy code.
Still not sure I understand your use case, but a tool that might help you would be shared pipeline libraries. Then you can encapsulate and parameterize the strategy into a few methods.
https://jenkins.io/doc/book/pipeline/shared-libraries/
Combined with a per-project Jenkinsfile and a parameterized build, you may be able to get what you are after.
Not unless you compile PR-55 yourself. I'm surprised it hasn't been merged yet.
Another workaround would be to generate your coverage report as HTML and use an HTML publishing plugin, e.g.
If you're using pipeline, there's a built-in, cross-platform zip step: https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#code-zip-code-create-zip-file
If you're not using pipeline, I believe you can use the jar executable from the JDK (if you have that installed on your nodes) to create zip files in a cross-platform manner.
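Rough sketches of both options (paths are hypothetical, and I haven't tested these exact lines):

// Pipeline Utility Steps zip step
zip zipFile: 'output.zip', dir: 'build'

// JDK jar tool: -c create, -M skip the manifest, -f output file
sh 'jar -cMf output.zip -C build .'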
So I'm a bit late here, but if you haven't checked out the new Pipeline DSL stuff, it's perfect for this:
//This lives in your project repo in a file called Jenkinsfile
try {
    stage("Build") {
        //Assuming your build machines are labeled 'linux'
        node('linux') {
            //Checkout the project as specified in the git config
            checkout scm
            //do build stuff
            sh "put your build command here"
            //save files for later
            stash includes: '**', name: 'build'
        }
        //Requires email plugin of course
        mail(to: '', subject: "${currentBuild.fullDisplayName} is ready for deployment", body: "URL: ${env.BUILD_URL}")
    }

    //Not required, but sensible - this will automatically abort the build if you wait too long (here, a week)
    timeout(time: 7, unit: 'DAYS') {
        input "Approve/deny deployment to production system"
    }

    //Optional - prevents older waiting builds from deploying if a newer build was already approved and got past this
    milestone label: "production"

    stage("Production Deploy") {
        node('linux') {
            //restore previously saved files
            unstash 'build'
            //do deploy stuff
            sh("deploy shell command might go here")
        }
    }
} catch (Error | Exception e) {
    mail(to: '', subject: "${currentBuild.fullDisplayName} failed!", body: "URL: ${env.BUILD_URL}, Error: ${e}")
    //Finish failing the build after telling someone about it
    throw e
}
I'm totally shilling for my book, but I wrote extensively about exactly how that works in Jenkins Administrator's Guide. If you have an O'Reilly subscription you can read it for free. It's in Chapter 4, Docker-outside-of-Docker in Jenkins.
The issue is with the certificate on the Jenkins side of things, a situation you won't be able to alter.
You should be able to add it either to Java or (preferably) the system. The certificates are here
>Let’s say my unit test suite relies on Java 8.2 but my performance testing suite still needs Java 7.
This has nothing to do with Jenkins, maybe the author needs to improve their systems.
>The number 1 reason we get owned is because of rogue Jenkins instances.
Once again, if the developers and backend teams cannot secure Jenkins, it has nothing to do with Jenkins. One can hack into any system if idiots are building it.
>One of the reasons we see so many engineering teams switching from Jenkins to Codefresh is because Codefresh uses container-based pipelines.
Not sure who is actually switching, but the first result on google which is not a Codefresh link and compares Codefresh with Jenkins is this:
https://www.slant.co/versus/2477/20332/~jenkins_vs_codefresh.
The numbers are clear. Also, I have never heard of any of the companies on the list of CodeFresh users, at least in the link above.
>Admins Are the Only Ones Who Can Really Change Things
Has the author ever heard of DevOps? As soon as you move away from operations silos and create a DevOps mentality, these problems will go away.
tl;dr I don't know, but try the mailing list; here are some random ideas.
There is a Prometheus plugin that exposes some stats. The REST API exposes a lot more.
I don't know anything about Prometheus though. Can it pull in data?
I bet it would be easier to make a Jenkins data source plugin for Grafana. Though looking at the example SimpleJson data source, it looks like data sources can mostly only do time-series data. https://grafana.com/grafana/plugins/grafana-simple-json-datasource
There's an influxdb plugin and data source. I think with influxdb you can do arbitrary data so maybe that's an option.
You might try the jenkins-users mailing list, or gitter channels, they might have more eyes.
Well, there's no big magic in using a Jenkinsfile: call docker-compose with the appropriate commands and you have a similar flow. We've set this up multiple times; there's no magic to it. There's nothing that replicates drone.io's pipelines in Jenkins, AFAIK.
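As a sketch of that flow (service names and commands are made up; assumes a docker-compose.yml in the repo):

node {
    checkout scm
    try {
        sh 'docker-compose up -d --build'
        sh 'docker-compose exec -T app ./run-tests.sh'
    } finally {
        sh 'docker-compose down -v'   // always tear the containers down
    }
}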
I recently wrote about "How-To: Setup a unit-testable Jenkins shared pipeline library" on dev.to.
The general approach is:
This way you should be able to:
You can have a separate agent per stage; if the stages run in sequence, you can experiment to see whether the workspace is shared. You can, however, mount a dir on the slave into the container, maybe something from tmp, and use that as a go-between.
Check this for stage syntax https://jenkins.io/blog/2018/07/02/whats-new-declarative-piepline-13x-sequential-stages/
You can do that, but it will result in a bunch of stages being created and not run.
Have you looked at this blog post? https://jenkins.io/blog/2019/11/22/welcome-to-the-matrix/
It talks about how to use excludes vs when.
It’s not running a binary. It’s running the jenkins pipeline shell step.
https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#sh-shell-script
You just want to put the keys in the credentials manager in Jenkins. This will encrypt them at rest and allow you to use them in scripts.
You can then just wrap steps that require ssh with "withCredentials"
https://jenkins.io/blog/2019/02/06/ssh-steps-for-jenkins-pipeline/
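A minimal sketch, assuming you stored an SSH private key under the (hypothetical) credentials ID 'deploy-key':

withCredentials([sshUserPrivateKey(credentialsId: 'deploy-key', keyFileVariable: 'SSH_KEY')]) {
    // The key is written to a temp file for the duration of the block
    sh 'ssh -i "$SSH_KEY" user@example.com ./deploy.sh'
}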
If you're using a Jenkinsfile, you might be able to accomplish what you want with an "Input Step" + alert step.
https://jenkins.io/doc/pipeline/steps/pipeline-input-step/
You could send an email notification at the end of that particular stage in the pipeline with a link to the Jenkins job, and then the user can click the button to have it go forward, or abort to have it start the rollback.
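Something like this, roughly (addresses and wording are placeholders):

stage('Approval') {
    mail to: 'team@example.com',
         subject: "${currentBuild.fullDisplayName} awaiting approval",
         body: "Approve or abort: ${env.BUILD_URL}"
    // Pauses the pipeline; choosing Abort can then trigger your rollback logic
    input message: 'Deploy this build?'
}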
Sure there is, with both scripted and declarative pipelines.
https://jenkins.io/security/advisories/
https://groups.google.com/forum/m/?fromgroups#%21forum/jenkinsci-advisories (sorry for mobile link I'm on my phone)
The UI pulls down a JSON file from the update center, so you could regularly parse that if you wanted to set up your own system.
I see, I don't think build parameters will help with this. Instead of defining your triggers in job B (build when job A finishes), you could let job A notify job B to start at a certain point, using the pipeline build step: https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
Add this to the end of stage 3. Just adding build job: 'JOB_B' should do the trick.
In this case don't forget to remove the triggers from job B though, otherwise your build will get triggered twice.
This is a **VERY** bad idea. You're giving unfettered admin access to _all users_ who can write pipeline code. See my talk on mastering the Jenkins script console which directly applies to what you're approving here.
Or read the security warnings on the top of the wiki. As an admin I would not approve this request.
If you have experience using the GUI, you'll already know the syntax for pipelines in a way. All the fields you enter in the GUI will be the same, just expressed as key/value pairs. Check out the Jenkins Snippet Generator and this link for shared libraries.
https://jenkins.io/doc/book/pipeline/shared-libraries/
You might not need shared libraries, but that page teaches syntax and plugin usage pretty well.
This is a good place to start.
https://jenkins.io/doc/book/pipeline/shared-libraries
That and check out the groovy documentation, their docs are actually pretty good and they have some tutorials as well.
So if you can't allow external communications into Jenkins, then you need to use polling. You can poll for changes, or just run builds every so often.
Or you can use a middle proxy and let GitHub notify you when there are commits and such.
https://jenkins.io/blog/2019/01/07/webhook-firewalls/
After that it depends on what you want to do
I ended up writing a scripted pipeline, following the guide here:
https://jenkins.io/blog/2019/12/02/matrix-building-with-scripted-pipeline/
I have to say my client didn't seem all that impressed with the user experience when compared with a "traditional" freestyle matrix job. For example, you can no longer see the build history for each specific combination of axes, or even the table of which combinations are passing and which are failing.
Are you using Jenkins Pipelines with shared libraries? The easiest thing would probably be to make the code that runs the process for Job B re-usable, and then execute that code as part of Job A.
You can definitely have a job kick off another job, wait for it, then retry, but you're likely to have a rough time trying to schedule something to run on the same node while competing with other builds. You'd probably need to wipe the agent labels, add a unique one, run the job, and then restore the labels at the end (and hope nobody force kills the job before the labels can be restored).
Shared Libraries: https://jenkins.io/doc/book/pipeline/shared-libraries/
Jenkins is an open source project and not a particular person or entity so there's not a particular person you can contact. There are many forms of community contact for general questions.
https://jenkins.io/participate/
See also the Community menu at the top of https://jenkins.io/participate/
Your description of the differences between scripted and declarative pipelines is flat out wrong.
Scripted pipelines support all Groovy syntax. Declarative pipelines are a simplified version.
Both can be stored either in a Jenkinsfile in your project or saved on the Jenkins server.
Both use the exact same pipeline system.
https://jenkins.io/doc/book/pipeline/syntax/#compare
If you're going to post thinly veiled advertisements for your training services, you should probably get things right.
Ah, just read the following in the docs:
> The axis and exclude directives define the static set of cells that make up the matrix. That set of combinations is generated before the start of the pipeline run. The "per-cell" directives, on the other hand, are evaluated at runtime.
I guess I need to rethink how I'm going to implement this.
You don't want to do all of that on the same node. For instance, windows, mac, and Linux aren't the same node.
I recently wrote a blog post about how to matrix build across different platforms and nodes in scripted pipeline.
https://jenkins.io/blog/2019/12/02/matrix-building-with-scripted-pipeline/
You’d think so, right? That's the assumption I started with too... But in reality any Groovy instructions (loops, conditionals, templates, JSON parsing, etc.) are done on the master even if you have a node or agent step; only the Jenkins DSL steps (withMaven, sh, etc.) run on the agent. Because of this, you'd want to move most instructions into scripts, or use CLI tools like jq, wherever possible.
https://jenkins.io/blog/2017/02/01/pipeline-scalability-best-practice/
If I get the time I can scrub mine of sensitive stuff and try to get an example up, but in the meantime this is a basic example that's a bit easier to understand than the official docs-
https://github.com/AndreyVMarkelov/jenkins-pipeline-shared-lib-sample
The src directory contains standard groovy classes for functions, and the vars directory contains groovy closures that Jenkins interprets as steps that can be called in your stages. In these closures, you can call the classes/functions you've defined in the src directory. Finally, your Jenkinsfile in the repo will contain your scripted or declarative pipeline.
Alternatively, you can have your entire pipeline defined in a closure in the vars directory, and the Jenkinsfile can be something as simple as:
@Library('jenkinsRepo') _
appDeploy {}
In Jenkins there is a DSL called Pipeline (a plugin) that helps solve this problem.
https://jenkins.io/doc/book/pipeline/syntax/#when
Use changeset to control this kind of flow: the stage will run only when the changeset matches the given regex condition.
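A minimal declarative sketch (the stage name and paths are made up):

stage('Lint') {
    when { changeset "**/*.py" }   // run only if a .py file changed
    steps {
        sh 'pylint src/'
    }
}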
In "pure" mkdkr there is not flow control yet but this code can filter to you.
pylint:
	... python:3.6-buster
	.. pip install pylint
	.. 'git diff-tree --no-commit-id --name-only HEAD~1 | grep *.py | xargs -t pylint'
	.
The job always runs, but the filter checks whether the last change altered a .py file.
Ah, so you wouldn't have to enter any parameters in the UI again if required? That makes more sense.
Rerun is also easily confused with replay, which was a different feature in the old UI- https://jenkins.io/doc/book/pipeline/development/#replay
It doesn't look like replay is even available in blue ocean.
I’m not entirely sure I follow, maybe you can make your question clearer. What is your repo for? Which repo should cause the build to run? What type of cheddar do you have in your fridge?
The first plugin does have a poll option which takes a Boolean. May be worth starting there.
https://jenkins.io/doc/pipeline/steps/git/#-git-git
As I recall, you need to have that set to false from the first time that project interacts with that repo, otherwise it doesn’t pick up the new poll value -- though that bug may have been fixed.
You would use the checkout scm step. See here.
That documentation is a little bit hard to follow. I'll see if I can get an example when I'm at my computer.
You have a couple of options.
You can create a shared library of Jenkins-specific steps and scripts that can be called from other build pipelines.
If you just want to stash build scripts that are shared between projects in a git repo, you can check it out directly from your repo in your pipeline (perhaps into a subfolder) or add it as a git submodule in your projects.
You can also have different projects on different versions of your build scripts using submodules, which can be handy when updating.
Jenkins shared library: https://jenkins.io/doc/book/pipeline/shared-libraries/
The page shows the various ways you can check it out from SCM. You’re most interested in creating custom callable steps within the vars subdirectory of the repo.
There's another possibility. If you're running this from a pipeline you can use the findFiles() step provided by Jenkins. This will list all the files that match the glob pattern.
def files = findFiles(glob: '**/*') // match everything in the workspace; adjust the glob as needed
files.each { target_file ->
    sh "echo ${target_file.name}"
}
Find more details in here: https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#findfiles-find-files-in-the-workspace
Please note: I didn't test the above code but it should be something very similar to this.
It's hudson.model.Item.MOVE.
You can confirm with the Snippet Generator: set up Project-based Matrix Authorization there and generate the code.
https://jenkins.io/doc/book/pipeline/getting-started/#snippet-generator
For example, I get the following Jenkinsfile code generated:
> properties([authorizationMatrix(['com.cloudbees.plugins.credentials.CredentialsProvider.Create:epipolar_gineer', 'com.cloudbees.plugins.credentials.CredentialsProvider.Delete:epipolar_gineer', 'com.cloudbees.plugins.credentials.CredentialsProvider.ManageDomains:epipolar_gineer', 'com.cloudbees.plugins.credentials.CredentialsProvider.Update:epipolar_gineer', 'com.cloudbees.plugins.credentials.CredentialsProvider.View:epipolar_gineer', 'hudson.model.Item.Build:authenticated', 'hudson.model.Item.Build:epipolar_gineer', 'hudson.model.Item.Cancel:authenticated', 'hudson.model.Item.Cancel:epipolar_gineer', 'hudson.model.Item.Configure:epipolar_gineer', 'hudson.model.Item.Delete:epipolar_gineer', 'hudson.model.Item.Discover:authenticated', 'hudson.model.Item.Discover:epipolar_gineer', 'hudson.model.Item.Move:epipolar_gineer', 'hudson.model.Item.Read:epipolar_gineer', 'hudson.model.Item.Workspace:epipolar_gineer', 'hudson.model.Run.Delete:epipolar_gineer', 'hudson.model.Run.Replay:epipolar_gineer', 'hudson.model.Run.Update:epipolar_gineer', 'hudson.scm.SCM.Tag:epipolar_gineer']), [$class: 'RebuildSettings', autoRebuild: false, rebuildDisabled: false], pipelineTriggers([cron('H 11 * * 1-5')])])
I don't believe this is possible for scripted pipelines. It is possible for declarative-
https://issues.jenkins-ci.org/plugins/servlet/mobile#issue/JENKINS-33846
https://jenkins.io/doc/book/pipeline/running-pipelines/#restart-from-a-stage
If you're running the build on the same pipeline, stash/unstash would be the preferred usage: https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#stash-stash-some-files-to-be-used-later-in-the-build
Artifact copying is intended to pass artifacts between pipelines/jobs.
The Java versions need to be the same between the master and the agents due to serialization differences.
> All agents must be running on the same JVM version as the master (because of how masters and agents communicate).
https://jenkins.io/doc/administration/requirements/upgrade-java-guidelines/
I think you mean a single executor on the slave. The issue OP is seeing can be caused by cases other than allowing concurrent builds -- for example, if your pipeline has multiple stages running in parallel on the same node.
If you really want to use the same workspace (although not recommended) you can use:

agent {
    node {
        label "some label"
        customWorkspace "path/to/workspace"
    }
}
Have a look here https://jenkins.io/doc/book/pipeline/syntax/ in the "Common Options" section
Oops; no, I misread sorry!
For that you need archive and CopyArtifact. It's best to tar up the artifacts first, because Jenkins sometimes messes up the executable flags on some files on Linux.
Don't do this; you're hacking at it. Use stash; that's exactly what it is for. And it will even work automatically for you if your next stage is on another node (dream big - some day you might need it!).
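A rough sketch of the stash flow across two nodes (the labels are hypothetical):

node('builder') {
    sh 'make'
    stash name: 'bin', includes: 'build/**'
}
node('deployer') {
    unstash 'bin'   // restores build/** into this node's workspace
}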
https://jenkins.io/doc/book/pipeline/docker/
Like this:
docker.image('centos:7').inside() {
    /*
     * Run some tests which require MySQL, and assume that it is
     * available on the host name `db`
     */
    sh 'make check'
}
Have you tried creating the containers via their Dockerfile? That way it would build the image, run the container, and also run the commands inside the container.
https://jenkins.io/doc/book/pipeline/docker/#dockerfile
not quite sure exactly what you're trying to do but you can also run pipeline steps in parallel: https://jenkins.io/doc/book/pipeline/syntax/#parallel
so it could pause for the CR approval, but other steps could be running (builds, tests, etc.?)
or maybe it just needs a simpler approach like checking the "concurrent" box on the job?
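As a rough scripted sketch of the first idea (the commands are placeholders):

parallel(
    'build-and-test': {
        node {
            sh 'make build test'
        }
    },
    'cr-approval': {
        // Pauses only this branch; the other branch keeps running
        input message: 'CR approved?'
    }
)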
I don't want to specify a single node in the pipeline code; perhaps I should have spelled that out more clearly. We have a pool of basically identical machines, any of which can service a build request. I'm expecting the node statement to allocate one node (any one, any matching a label expression, if I were to use one, which I'm not), then use that node (and ONLY that node) for all operations within the node{} block. Seems pretty straightforward to me.
From the Jenkins documentation (at https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#node-allocate-node): "Allocates an executor on a node (typically a slave) and runs further code in the context of a workspace on that slave."
Given that braces {} have been used since the beginning of time to denote scoping, and the fact that the Jenkins documentation doesn't say diddly otherwise, one might assume "further code" to be the code defined within the scope of the open and closing braces...
You could also have two 'entry' Jenkinsfiles that have their own sets of properties which then use the 'load' step to read in the rest of the logic.
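A minimal sketch of the load approach (the file names are made up; the loaded script should end with "return this" so you can call methods on it):

// Jenkinsfile.deploy -- one of the 'entry' files
node {
    checkout scm
    def common = load 'pipeline/common.groovy'   // returns the loaded script object
    common.runPipeline(env: 'production')
}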
Yes. The jenkins pipeline format has two input "styles".
https://jenkins.io/doc/book/pipeline/syntax/
The original format of their early plugin is called "scripted pipeline" now and is pretty loose, allowing you to freely use elements of the DSL along with groovy stuff. You can import fairly arbitrary java/groovy and do whatever with it.
Some time later the jenkins team started cleaning up the DSL, making a more 'beginner friendly' version that had a lot more DSL defined and didn't require as much of a 'programming language' feel. You can still use groovy in this style, but you have to wrap it in a 'script {}' block.
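For example, a declarative pipeline with some arbitrary Groovy tucked inside a script block (a toy sketch):

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    // Plain Groovy is fine inside script {}
                    ['alpha', 'beta'].each { echo "building ${it}" }
                }
            }
        }
    }
}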
I have done some stuff to interact with different services that lack good jenkins plugins by doing the http api calls directly using this: https://jenkins.io/doc/pipeline/steps/http_request/
You might use that plugin with the github api directly
https://developer.github.com/v3/pulls/#merge-a-pull-request-merge-button
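A rough sketch of merging a PR that way (the repo, PR number, and credentials ID are all hypothetical):

withCredentials([string(credentialsId: 'github-token', variable: 'GITHUB_TOKEN')]) {
    def response = httpRequest(
        url: 'https://api.github.com/repos/someorg/somerepo/pulls/42/merge',
        httpMode: 'PUT',
        customHeaders: [[name: 'Authorization', value: "token ${env.GITHUB_TOKEN}"]]
    )
    echo "Merge returned HTTP ${response.status}"
}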
The project page hopefully has some more explanation. My impression is that the goal is to build a UX oriented around the concept of a delivery pipeline since that represents an important use-case for many teams building software.
If you look at the world through a lens of delivery pipelines, freestyle jobs don't quite compute (see also: Jenkins as web-based cron). I grok their decision to skip it for now, but I don't think that precludes anybody else from extending Blue Ocean to support more job types in the future.