My procedure is:
When doing updates:
1. Grab the latest copy of the database and pull it down to my local install. I'll often also grab the wp-content folder if I know there have been changes that need looking at.
2. Update the core and the plugins as required on my local install.
3. Push that back out to the dev server for testing (including my local, now up-to-date, database).
4. If everything is good, push to production.
This works pretty well, but I still have to move the database back and forth manually, and I may have to put a busy site into maintenance mode while moving it around. It does usually ensure that broken changes don't go out to the public, though.
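If you wanted to script away the manual database step, WP-CLI can do the pull in one go. A rough sketch driven from Node, assuming a "@prod" SSH alias is configured in wp-cli.yml and that the URLs below are placeholders:

const { execSync } = require('child_process');

// Pull the production database straight into the local install
// (wp db export - writes to STDOUT, wp db import - reads from STDIN)
execSync('wp @prod db export - | wp db import -', { stdio: 'inherit' });

// Fix up URLs so the local install doesn't redirect to production
execSync("wp search-replace 'https://example.com' 'http://example.test'", { stdio: 'inherit' });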
Try to keep the minified/built files out of the repository if possible. I typically stick them in .gitignore, and use a deployment service to compile them. I use Buddy.works, but others are available. You could have this deploy the finalised/built files onto a server for example, and give a link to the client.
You could maybe also configure Gulp to compress the final files into a single ZIP and email it off whenever you run Gulp Build. This would also keep the built files out of the repo.
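For the ZIP-and-email idea, a minimal gulpfile sketch could look like the following. It assumes the gulp-zip and nodemailer packages, a ./dist build output, and made-up SMTP details:

const gulp = require('gulp');
const zip = require('gulp-zip');
const nodemailer = require('nodemailer');

// Bundle the built files into a single archive
function archive() {
  return gulp.src('dist/**/*')
    .pipe(zip('build.zip'))
    .pipe(gulp.dest('.'));
}

// Mail the archive off (SMTP host and addresses are placeholders)
async function email() {
  const transporter = nodemailer.createTransport({ host: 'smtp.example.com', port: 587 });
  await transporter.sendMail({
    from: 'builds@example.com',
    to: 'client@example.com',
    subject: 'Latest build',
    text: 'Zipped build attached.',
    attachments: [{ path: 'build.zip' }],
  });
}

exports.build = gulp.series(archive, email);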
Chrome can do it.
Puppeteer can do it:
const puppeteer = require('puppeteer');

(async () => {
  // The sandbox flags are typically only needed inside locked-down CI containers
  const browser = await puppeteer.launch({ args: ['--no-sandbox', '--disable-setuid-sandbox'] });
  const page = await browser.newPage();
  await page.goto('https://buddy.works');
  await page.screenshot({ path: 'buddy-screenshot.png' });
  await browser.close();
})();
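If you save that as, say, screenshot.js, then npm install puppeteer && node screenshot.js should drop the PNG into the working directory (Puppeteer downloads its own bundled Chromium on install).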
I'm using Buddy.Works for Android and Bitrise for iOS.
Buddy has amazing customer service and the UI is super slick.
Bitrise is slower on chat, I think because they're dealing with people moving from buddybuild, which got bought by Apple (no relation to Buddy.Works!).
Apparently Buddy.Works have iOS deployment in the pipeline.
Bitrise support are lovely but much slower and not very Ionic focused. But it gets the job done.
I prefer real build and deploy to an actual app, rather than Ionic View.
(Code saved on bitbucket or github and built on every commit)
buddy.works is excellent. It's free for one project, and you can construct complicated pipelines with integrations to pretty much everything. We use it in combination with AWS CodeDeploy.
Bitbucket -> Buddy.works -> AWS CodeDeploy <- Instances
Which brings me to my next question: how do these other services handle deploying to a load-balanced server group? In our case, these instances are basically ephemeral, so we use CodeDeploy to poll AWS for new deployments and roll them out.
We use Buddy to host our code and deploy to servers via Pipelines.
Pipelines are just actions that happen during deployment. Typically we first set up command execution to run "npm install" and "npm run production" to install dependencies and build the assets. Then, we just have an SFTP action to push the code to the server.
The commands are executed in an isolated Docker container in the context of your repository, which is removed afterwards, so there's no need to push node_modules to the server or the repo.
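If you ever want to reproduce that last SFTP step outside of Buddy, here's a small sketch using the ssh2-sftp-client package (host, user, and paths are placeholders):

const Client = require('ssh2-sftp-client');

(async () => {
  const sftp = new Client();
  await sftp.connect({
    host: 'example.com',
    username: 'deploy',
    password: process.env.SFTP_PASSWORD, // keep credentials out of the repo
  });
  // Mirror the built assets up to the web root
  await sftp.uploadDir('./public', '/var/www/site/public');
  await sftp.end();
})();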
Sure thing. It doesn't need to go to GitHub. Whatever you're deploying to (say, Heroku for this example) gives you a way to include environment variables that doesn't involve pushing the .env file. Frankly, your local .env variables should not match your production variables when it comes to secrets, passwords, anything hashed, etc.
If GitHub users are pulling this code and trying it out themselves, then part of their setup needs to be supplying their very own config.env file.
But wait, isn't that annoying for teams?
Yes. There are ways to get around this if that's your situation, but only do it if you're actually building this out for a team *and* if telling that team to just put in their own secrets is too onerous. It gets onerous if, say, you have to actually connect to a service while developing locally and need to actually share one of those secrets. Git-crypt to the rescue: https://buddy.works/guides/git-crypt
But typically, whatever you're deploying with or to should provide a way to handle env vars. If it's your own Linux host or something, well, you'll have to just go in there and add it yourself. But don't make it the same as your local secrets.
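To make that concrete, here's a minimal sketch of the usual pattern: load a local config.env via dotenv in development, and rely on the host's real environment variables in production (the variable names are just examples):

// Only read the local env file outside of production;
// on Heroku etc. the variables come from the platform's config.
if (process.env.NODE_ENV !== 'production') {
  require('dotenv').config({ path: 'config.env' });
}

// Same code path everywhere; only the values differ per environment
const dbPassword = process.env.DB_PASSWORD;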
Tried all kinds of things: manual uploads to a DigitalOcean droplet, custom CI pipelines via https://buddy.works, Forge, Ploi. I'd say choose whatever is easiest to maintain (although there's a good learning experience in building a pipeline yourself). Currently using https://ploi.io to manage the servers and deployments for my Laravel apps, running them on cheap $5 DigitalOcean droplets.
You may want to look into a CI tool like Buddy (they have a free version).
CI basically syncs your local dev environment with your server, with GitHub sitting in between for files that can be hosted there. You set up conditions that trigger commands you'd otherwise type into your local CLI and server CLI.
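The hand-rolled version of that idea is just a webhook listener that runs the same commands you'd type yourself. A sketch with Express (the branch name and commands are examples):

const express = require('express');
const { execSync } = require('child_process');

const app = express();
app.use(express.json());

// GitHub calls this URL on every push; only deploy for the main branch
app.post('/webhook', (req, res) => {
  if (req.body.ref === 'refs/heads/master') {
    execSync('git pull && npm install && npm run build', { stdio: 'inherit' });
  }
  res.sendStatus(200);
});

app.listen(3000);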
Yeah, I quickly found a solution that worked, but I don't think it's the most appropriate one, so I wouldn't trust it with anything that makes money for someone else. In any case, it looked like this:
the code was on GitHub -> https://buddy.works/ built it -> it pushed the built site over SFTP into one of the public folders
and the whole thing could be triggered somehow via a webhook, so the process kicked off on every new post.
But there's surely a somewhat more serious solution than this, and I never used it in production because I realized it simply isn't worth the risk with clients :( For my own website, though, it's perfect.
https://media.makeameme.org/created/jenkins-working.jpg
Jenkins still gives me nightmares. Subject8133 might be right, this looks like some kind of Jenkins security "feature". Why torture yourself tho? There are so many better alternatives right now (one I'm using: https://buddy.works/vs/jenkins), but you could also switch to GitHub Actions or CircleCI. I understand that Jenkins is a company standard, but perhaps it's time to suggest moving into the 21st century? Good luck with Jenkins tho, you poor soul.
I won't tell you which tutorials are good; in my experience they're never as good as checking out the documentation from the CI/CD platform providers and going from there. For example, https://buddy.works/guides/how-dockerize-node-application from Buddy.Works describes exactly that: deploying a dockerized Node app. Check it out!
Bazel is an amazing tool. Currently I'm using https://buddy.works for my CI/CD. Unfortunately they don't have a dedicated Bazel action (I've already asked for this feature, since they currently have Maven and Gradle). But I agree with @1ewish that you "are going to pull exit codes from bazel command you run to report error codes": that's how I handled it in Buddy, and I've also set up an action that notifies me on Slack whenever an error occurs.
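In case it helps anyone wiring up the same thing, the exit-code approach boils down to something like this (the Slack webhook URL is a placeholder, and the global fetch needs Node 18+):

const { spawnSync } = require('child_process');

// Run the Bazel command and capture its exit code
const result = spawnSync('bazel', ['test', '//...'], { stdio: 'inherit' });

if (result.status !== 0) {
  // Report the failure to Slack before failing the build
  fetch('https://hooks.slack.com/services/XXX/YYY/ZZZ', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: `Bazel failed with exit code ${result.status}` }),
  }).finally(() => process.exit(result.status));
}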
Monitoring CI/CD is usually problematic. We had that issue with Jenkins, and we were forced to look for alternatives. My team moved to Buddy and Helm since it's way easier to manage this process. What's even better, you can easily build around the base pipeline and have a constant preview of what's going on inside. Buddy added Helm some time ago, so I think you should be able to handle this using their Dockerfile linter, K8s, and Helm. They even have K8s run and Helm commands for YAML in their docs: https://buddy.works/docs/yaml/yaml-actions/kubernetes-run-helm-cmds.
You don't have to use Heroku (although it's pretty great and I would recommend it!)
There are services such as DeployHQ or Buddy out there that can take the pain out of the deployments through automating the pushing and pulling and running tests and things to make sure your code is valid and isn't going to break anything when it hits production hardware.
Basically you shouldn't be deploying via FTP in 2020.
Hello, Alex from Buddy here.
I strongly recommend trying https://buddy.works – we built the tool specifically to lower the entry threshold to DevOps as everything is visualized in the GUI. This means that basically everybody in the team can create their own delivery pipeline with little to no documentation support as there's no scripting involved (although you can export the config to YAML if required).
Bear in mind, however, that no matter what the company size and profile is, it's not possible to automate the whole process at once. Start small with one thing, e.g. staging deployment, check that everything works, then add another step or create another environment.
What kind of development are you into (stacks, frameworks, etc.)? I might be able to send you some resources that will show you how to start.
A super easy and user-friendly tool is https://buddy.works; the free tier should be all you need. This is what I'm currently using, and it's set up to run unit tests, check code coverage, build and deploy to S3, clear the CloudFront cache, and then report to Slack on success (or failure).
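For reference, the cache-clearing step is a small call against the CloudFront API if you ever script it yourself. A sketch with the AWS SDK v2 (the distribution ID is a placeholder):

const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();

cloudfront.createInvalidation({
  DistributionId: 'E1234567890ABC',
  InvalidationBatch: {
    CallerReference: Date.now().toString(), // must be unique per request
    Paths: { Quantity: 1, Items: ['/*'] },  // invalidate everything
  },
}).promise()
  .then(() => console.log('CloudFront invalidation requested'));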
I think your problem can be easily solved with a pipeline from Jenkins, Buddy.works, or DeployHQ. I would create three pipelines (DEV, STAGE, PROD) and assign each of them to the corresponding branch (DEV, STAGE, PROD).
Every pipeline could contain build, test, deploy, and notify actions, and once they all run correctly the deployment can be pushed to the site.
When you're done with the development phase, you merge dev -> stage and show it (for example) to your client on your server. Once that's ready, you merge stage to master and it gets pushed to production (client side).
Judging by your needs, such a workflow can easily be set up with buddy.works.
So last week I was doing a big migration and compared a lot of deployment offerings. The reason I basically had to stay with Envoyer, even though it feels like the unloved child of the Laravel ecosystem, is that basically all other options force a deployment workflow on you. With some you can't select a branch/commit to deploy (a deal breaker for us). Some had annoying pricing. And a lot didn't handle configuration files very nicely (Envoyer doesn't either, though). The last thing is I don't want a damn Docker image; I want to run commands on my own build server. A lot force a Docker image on you and are therefore slower.
I would say if you have just a normal Laravel app and always deploy from a specific branch then using one of the free ones would work.
Buddy was pretty impressive to me; it had good options, including no Docker image, but the configuration files they have only work when doing their deploys, not when just calling your own scripts. Buddy was also set up per branch, so you were forced to push to a specific branch to deploy. If those things are fine for you, then I think Buddy would be the way to go; it was really nice (I can't remember the pricing). https://buddy.works/
Ok a quick update from my side:
Soon I plan to move the code (themes + plugins) to separate repos and use Composer for building it. I want to keep the main git repo as light as possible.
For orchestration I want to use buddy.works.
It's been developed by the same team behind Buddy, which is by far the best CI service, deployment & automation tool I have come across.
I'd highly recommend checking it out if you haven't heard of it yet.
If you'd like to keep it as simple as possible without wasting time I recommend Buddy > https://buddy.works/blog/aws-ecs. You can literally set up the whole process in 10 minutes in the GUI and export to YAML later. Top stuff from a very underrated tool.
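And if you ever need to trigger the ECS side by hand rather than from the pipeline, the core call is small. A sketch with the AWS SDK v2 (region, cluster, and service names are placeholders):

const AWS = require('aws-sdk');
const ecs = new AWS.ECS({ region: 'us-east-1' });

// Force a new deployment so the service pulls the freshly pushed image
ecs.updateService({
  cluster: 'my-cluster',
  service: 'my-service',
  forceNewDeployment: true,
}).promise()
  .then(() => console.log('New ECS deployment triggered'));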
Haven't seen buddy.works before, looks nice. I generally use the Serverless Framework to do my deployments; it supports CloudFormation templates to deploy S3 buckets, CloudFront, DNS records, etc., which makes setting up test environments very painless.
Nice! I've been using Buddy.works lately (which was super easy to set up), but my app recently hit the file storage limit (500 MB). It was fun to have auto-deployment. I'll try this approach ASAP.
Not too long ago I found out about buddy.works through a consulting agency, for two different types of PHP applications (not using the Laravel framework). It seems to work for them.
If you are looking for a DIY model, I've got DroneCI + Gitea/Bitbucket/Gogs to work in a similar fashion. An "easier" DIY model would be GitLab CI.
A few things.
First: Use Git. You'll get file history so you're no longer maintaining file-based backups of code at points in time.
Second: Deploy from Git. You can use something like buddy.works, which will wait for you to push code to Git (essentially changing the site's code in the repo/database) and then update the actual site. You don't touch the actual code on the production server.
Third: For backups you'll probably be fine using R1Soft. I imagine you're using a host with cPanel. If not, you can use zip + rsync, a program like Restic, or pay for something such as Veeam (expensive but amazing) or Cloudberry (cheap but bad).
An ideal setup would be like this:
So the process would be: you set up Git and put the current production site in there (minus config files, secrets, cached files, and anything sensitive or regenerable). Then you clone that on your local VM. You work on new changes in a separate branch, and when you're done you merge and push to the remote Git repository.
From here you could skip the test server, but it's better if you don't. For testing, you'd push to, say, a "develop" branch, which an auto-deployer (I use buddy.works) pushes to your testing VM, running any setup needed. You test there, and if it looks good you merge develop into master and again use the auto-deployer to set it up on production.
This is a game changer - buddy.works
Set it up with SFTP details and push git branches to it. You can set it up to be automatic, or, as I have, require you to push to it and then go to its dashboard to push specific branches to specific places. It makes deployments really simple.
Commit your source code. Generated files should not be under version control, since they are not actually versioned; they are generated from the files that are. The difference here is that the build folder will have different file names every time you run npm run build. This is not what a VCS would expect.
Also, if you're thinking of committing the dist folder just to easily deploy it to a server, take a look at some deployment automation services. I recently started using buddy.works (not affiliated) and it is just so easy to get started with.
Choosing a tool that's used in big companies doesn't necessarily translate into results. If you'd like to keep it simple I recommend https://buddy.works. Docker-based builds (preconfigured) + you can basically click up the whole pipeline from their GUI + there's a dedicated integration with DO and DO Spaces.
If you are using GitLab you could check their CI/CD pipelines, it may take some time to get it working tho.
You can also check Buddy, you can have up to 5 projects for free and it integrates with your git repo and your server easily.
Buddy is a platform designed to provide solutions on the blockchain. The platform provides developers with a seamless and intuitive interface for developing and deploying automation operations.
You are looking for something like https://deploybot.com/ or https://buddy.works/, which you can connect to your GitHub/Bitbucket account and configure to push your code to your server when you commit changes.
Here is an article with different types of workflows. I think at least knowing the feature branch one is a good start since it's pretty straightforward. A lot of times student developers will only do everything on one repo, so I think going over other git workflows could be valuable.
Check out http://buddy.works! There's a self-hosted version that is super easy to set up. It's called "buddy go". I use it, and in my opinion it's better than deploybot.