Can you clarify "when the job competes kill the create"?
It sounds like you want the pod to do some work, then terminate when that work is complete. If this is what you are after, you should look at creating a bare Pod or a Job instead of a Deployment. https://kubernetes.io/docs/concepts/workloads/controllers/job/
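If it helps, here is a minimal sketch of a Job that runs one container to completion and then stops (the name, image, and command are all placeholders):

# Create a Job that runs once and then terminates
oc apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
      - name: task
        image: registry.example.com/my-task:latest
        command: ["/bin/sh", "-c", "do-the-work"]
      restartPolicy: Never
EOF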
OpenShift by default runs containers with an arbitrary (not truly random) user ID https://www.openshift.com/blog/a-guide-to-openshift-and-uids . This isn't the nginx user, and therefore you cannot create files in a directory owned by the nginx user. To solve this issue in your case, why not simply call gzip in the Dockerfile (at build time) instead of in the entrypoint (at runtime)?
OpenShift is Kubernetes under the covers. As a result, it isn't particularly opinionated about how you go about deploying your applications. You can use any of the methods you've listed there - they'll all work (although with Helm there are some security considerations around Tiller you may need to care about, depending on the version of Helm you use).
The other option to consider is packaging your application as an Operator. This is probably something to consider when you're more comfortable with things like Ansible or Helm, but it is becoming more commonplace and will be the primary method for vendors or ISVs to deliver applications onto OpenShift when 4.x is released later this year.
Re-read the docs and realised I needed to set kubernetes.io/cluster/openshift: owned
Thanks for the reply; I was clueless about setting the tags on the nodes themselves and didn't realise it was required.
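For anyone else who hits this, assuming your nodes are EC2 instances, tagging them looks roughly like this (the instance ID and the cluster name "openshift" are placeholders for whatever matches your cluster):

# Tag an EC2 instance so the cloud provider integration recognises it as part of the cluster
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/openshift,Value=owned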
Not sure what you mean by this. OpenShift is basically K8s, with a whole bunch of extra stuff to make it more Dev / Sec / Ops friendly. And given that the whole Federation effort has gone back to the drawing board, there's nothing in OpenShift that would either be supported or do it more reliably (although, as it is K8s under the covers, you can still poke around with the v1alpha API; I couldn't say how successful you might be with it).
Once the SIG pins down exactly what it wants cluster federation to be, and it starts edging towards something approaching GA, that's when you'll start to see it appear as a supported part of OpenShift.
I don't think it is possible. Maybe you can use a hostPath, but that would require using a different SCC and assigning it to the pod's serviceAccount.
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
Use labels to make sure the pod is scheduled to the node where the extra disk is.
Or if that doesn't work, maybe the Local Storage Operator can help you.
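A rough sketch of the pieces involved, assuming a node called node-with-disk and a service account called hostpath-sa (all names here are placeholders):

# Label the node that has the extra disk, and let the service account use an SCC
# that allows host paths (hostaccess is one of the built-in SCCs)
oc label node node-with-disk disk=extra
oc adm policy add-scc-to-user hostaccess -z hostpath-sa

# Pin the pod to that node and mount the host directory
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: uses-extra-disk
spec:
  serviceAccountName: hostpath-sa
  nodeSelector:
    disk: extra
  containers:
  - name: app
    image: registry.example.com/my-app:latest
    volumeMounts:
    - name: extra-disk
      mountPath: /data
  volumes:
  - name: extra-disk
    hostPath:
      path: /mnt/extra-disk
      type: Directory
EOF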
Ah ok, I understand you now. Actually I believe you can't share the volume between pods at all, regardless of which node they run on. This is a limitation of vSphere. You can see which storage providers support RWX in this table in the Kubernetes docs:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
Okay, I see that you want to run more than one container in a pod.
This really only works for some use cases: the pod gets a cluster-internal IP address, the individual containers do not, so this is usually reserved for "sidecar" use cases.
Taken from the k8s docs:
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
> The primary reason that Pods can have multiple containers is to support helper applications that assist a primary application.
While it is possible in k8s and OpenShift, I really wouldn't recommend it. If your pods have multiple containers you won't be able to scale efficiently, since OpenShift scales pods, not containers.
Another good reference: http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html, scroll down to "Example #1: Sidecar containers".
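If you do go the sidecar route anyway, a minimal sketch (names and images are made up) with the two containers sharing an emptyDir volume, as in the doc above:

# Two containers in one pod sharing a volume; they also share the pod's network
# namespace, so they can reach each other on localhost
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: main-app
    image: registry.example.com/main-app:latest
    volumeMounts:
    - name: shared-data
      mountPath: /var/www/html
  - name: sidecar
    image: registry.example.com/log-shipper:latest
    volumeMounts:
    - name: shared-data
      mountPath: /input
EOF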
https://www.openshift.com/blog/federated-prometheus-with-thanos-receive
I don't know the current status of Thanos receiver with OpenShift. After the team wrote this blog post, we went in another direction in the engineering group.
ROSA is OpenShift whereas EKS is vanilla Kubernetes. So I think the question you're asking is what's the difference between OpenShift and Kubernetes. Hopefully this page will help illustrate the differences https://www.openshift.com/learn/topics/kubernetes/
Hi
I see. Check this out: restart the pod and make it fail on purpose, then oc rsh <app pod instance>
and try curl -v servicename:5432
(not sure if you need to specify the whole address like <service>.<namespace>.svc.cluster.local:5432) and compare the output with the table on the link above. It’s weird that you get intermittent errors.
Is Postgres a single instance?
You could create a super simple image based on alpine or busybox where you install the Postgres client tools, start it with the command tail -f /dev/null, then oc rsh into it and troubleshoot from there.
What do your service and the corresponding Postgres pod look like over time? Can you try connecting to it with the Postgres client tools once it has been running for a while? oc logs -f <pod instance> is handy for following a pod's logs. You could also use stern to tail multiple instances' logs simultaneously; it even picks up new pod logs on restarts.
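Putting that together, a rough sketch of the debug steps (the pod name and the exact client package name are assumptions on my part):

# Start a throwaway pod that just sleeps, so you can exec into it
# (on newer oc versions this creates a bare pod)
oc run pg-debug --image=alpine:3 --command -- tail -f /dev/null

# Open a shell in it, install a Postgres client, and test the service DNS name
oc rsh pg-debug
# inside the pod:
apk add --no-cache postgresql-client
psql -h <service>.<namespace>.svc.cluster.local -p 5432 -U <user> -c 'select 1'

# Meanwhile, follow the Postgres pod's logs in another terminal
oc logs -f <postgres pod instance>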
Good luck
I'm planning on replacing my Proxmox cluster with OpenShift and using KubeVirt as victor brought up in the post above. I currently have a Ceph cluster and would love to switch to containerized Ceph with either OCS or upstream Rook. I'm not hip to the licensing on OCS and whether it's usable in a non-production homelab setup, and poking through GitHub I still wasn't able to figure out how to build the meta-operator to work with upstream Rook. I'd be really interested if there's any guide or documentation on the process; the recent blog here was a great start but doesn't touch on Ceph. Got any hints?
Tech Preview features like this are in the standard codebase but aren't supported like GA features. So you can do this for free/unsupported today from https://www.openshift.com/try.
I feel the same way about Code Ready Containers.
Yeah, there was a lot about small clusters discussed at Red Hat Summit last week. Lots of work being done there.
Given that this UserVoice suggestion is directed at OpenShift Online, I rather suspect it won't be acted upon. This is because OpenShift Online is currently in the process of being updated to OpenShift Container Platform, based on Docker and Kubernetes.
As this update will give you, as a user, a great deal more control about what goes into your containers, this issue does rather become a moot point.
I'm not in a position to give you the exact structure, but Custom Columns using either oc or kubectl would give you what you want.
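Something like this as a starting point (adjust the JSONPath expressions to the columns you actually need):

# Print selected fields as columns; works with either oc or kubectl
oc get pods -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase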
They changed the platform to containers or something like that, and I just want a good old VPS with SSH access to try out a few things every now and then, so I'm searching for some other alternative. If you're interested, try http://codenvy.com
Yes, that's OpenShift behaviour: each container will get a unique UID. Someone already mentioned https://www.openshift.com/blog/a-guide-to-openshift-and-uids, which describes it in detail. However, you should keep the USER <uid> statement; if there's no USER in your Dockerfile, the (possible) USER statement from the base image is taken into account during image instantiation, which might lead to a failure if that USER requests to be privileged.
>https://kubernetes.io/docs/concepts/workloads/controllers/job/
Yes, that's correct: "want the pod to do some work, then terminate when that work is complete". Will try and see if a Pod or a Job works instead of a Deployment.
Hey, I had a quick look into this for the same reason. It seems Red Hat did indeed create three profiles for audit logging. The rules are, I believe, in a ConfigMap you can edit, but that probably means running the responsible operator in Unmanaged mode, and I'm pretty sure that costs you your support while you're still paying for it.
Maybe file a feature request for a custom audit policy for administrators. It's kind of overwhelming at the moment.
It's controlled by k8s audit policies: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
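If I remember right, the predefined profile is selected on the cluster's APIServer resource; roughly something like this (profile names from memory, check the docs for your exact version since this has changed across releases):

# Switch the cluster-wide audit policy to one of the predefined profiles
oc patch apiserver cluster --type=merge -p '{"spec":{"audit":{"profile":"WriteRequestBodies"}}}'

# Check what is currently set
oc get apiserver cluster -o jsonpath='{.spec.audit.profile}'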
Based on this https://coreos.com/validate/
> On May 26, 2020, CoreOS Container Linux reached its end of life. CoreOS Container Linux is no longer maintained or updated, and all users should migrate to another operating system.
I thought the only place to find docs was on Red Hat's pages.
But after your comment I started to look around and found this one: https://coreos.github.io/ignition/
Found this little gem thanks to the support team, https://coreos.com/validate/
We have standard support and it took a couple of weeks for this to be caught. Not blaming support, but perhaps this should be right up there with the Ignition config section in the docs.
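For anyone else hunting for it, the validator can also be run locally; a quick sketch, assuming you have podman and a config called bootstrap.ign:

# Validate an Ignition config before feeding it to the installer
podman run --rm -i quay.io/coreos/ignition-validate:release - < bootstrap.ign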
Yes, you get a 60-day evaluation when you install a new cluster.
As for OKD, which is the new name for what was formerly known as Origin, you will need to build it on Fedora CoreOS https://getfedora.org/en/coreos/ should you choose to go down this rabbit hole. https://www.okd.io/download.html
This might help: https://kubernetes.io/docs/tutorials/services/source-ip/
Under that section, check externalTrafficPolicy: if you set it to Local, the nodePort will only be exposed on the nodes where the pod runs, as opposed to Cluster, where the nodePort is exposed on all nodes.
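As an example (the service name is a placeholder), you can flip an existing NodePort or LoadBalancer service over like this:

# Only expose the nodePort on nodes that actually run a backing pod
# (this also preserves the client source IP)
oc patch svc my-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'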
Sorry if this is not what you were looking for and also about the formatting (on mobile rn)
Oh, that's a good thought! I have used ConfigMaps in several projects, but always in "providing environment variables" mode, never as writing out a file. I wonder if the ConfigMap file emitter will provide enough format flexibility to satisfy the MySQL config parser -- i.e. I'm not sure I can create INI sections. But I'll give it a whirl! Thanks!
EDIT: That totally worked. I had no idea ConfigMaps could emit arbitrary data using the literal YAML pipe syntax (a guide for future Googlers: https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/). I wrote a ConfigMap, mounted it, then adjusted MYSQL_DEFAULTS_FILE to point at it, and it's working great. Thank you so much!
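In case it helps future Googlers, roughly the shape I ended up with (the names and the specific MySQL settings here are just placeholders):

# ConfigMap holding an INI-style defaults file, mounted into the MySQL pod
oc apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-defaults
data:
  my-extra.cnf: |
    [mysqld]
    max_connections = 250
    innodb_buffer_pool_size = 512M
EOF
# Then mount the ConfigMap as a volume in the deployment and point
# MYSQL_DEFAULTS_FILE at the mounted file, e.g. /etc/mysql-defaults/my-extra.cnf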
You probably should upgrade to a later version of OCP first of all.
I've not used filebeat myself as we've been down this road and settled on EFK. However, we do consume filebeat. Have you read this article?
https://www.elastic.co/blog/docker-and-kubernetes-hints-based-autodiscover-with-beats
Rob Szumski did a really good write-up of the upgrade process and the criteria / thresholds employed when releasing or blocking upgrades from the edges. There's also a video on the same topic that's worth a watch.
As far as the availability of upgrades goes, I'm actually not that surprised a path has been pulled this late in the day. As more folks install / upgrade, there's naturally going to be a greater volume of telemetry to indicate success vs failure between particular releases so it may well be the case that an upgrade path that was previously thought to be stable has presented enough issues across the install base to be temporarily pulled as more data comes in.
Let's take a rule of thumb: everything deployed directly on OpenShift is a container.
KubeVirt deploys a VM in a container (https://www.openshift.com/learn/topics/virtualization/)
OpenShift works very well with Ansible, including but not limited to deployment of the cluster. I would suggest the OpenShift blog for good articles about its inner workings, and the OpenShift Origin repo on GitHub.
I’ve deployed my lab cluster with this
https://www.openshift.com/blog/openshift-upi-using-static-ips
Would recommend familiarizing yourself with 'oc explain'
One of the most useful commands out there when learning OpenShift.
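For example:

# Show the documented fields for any resource, drilling down into the spec
oc explain pod.spec.containers
oc explain deployment.spec.strategy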
Hey, great to hear you are doing your bachelor thesis on OpenShift. Are you part of the Red Hat Graduate Program? Otherwise you can use the Free Tier of OpenShift Online (https://www.openshift.com/products/online/) to get a little bit of compute without running CRC.
If you need any help, please feel free to contact me. I'm happy to help!
Disclosure: I'm working for Red Hat :)
This is a pretty fundamental part of OpenShift 4 cluster design, which is another way of saying OpenShift isn't designed with this use case in mind. When something is automated it necessarily has to make assumptions about usage patterns. The assumption OpenShift makes about the typical deployment scenario is that the control-plane is running 100% of the time.
You might submit an RFE to Red Hat so that they know there is demand for this functionality (but if you're not a large client, I wouldn't expect it to get prioritized).
My suggestion would be to either:
Take a look at Migrating OpenShift Apps across OCP version gaps with CAM.
The CAM tool can be used to migrate applications between OpenShift clusters, including persistent volumes.
Another way to do this would be to use specific NFS exports or iSCSI targets and create the persistent volumes by hand in each cluster.
Currently OCS is converged storage, where your Ceph OSDs and MONs run in containers, preferably on dedicated infra worker nodes.
There is a rumoured version of the OCS operator in the works that can instead use an external Ceph cluster for storage, which could possibly do what you're hinting at. ETA unknown.
But without knowing more details of how you're intending to use your persistent volumes it's really hard to estimate.
OCS comes with the Multi-Cloud Object Gateway (NooBaa), so perhaps that could be of some use to you.
https://www.openshift.com/blog/introducing-multi-cloud-object-gateway-for-openshift
Hi! I will assume that you want to try this on OpenShift 4.x; if so, Minishift is out, as that is based on OpenShift 3.x. CRC (CodeReady Containers) is a single-node install for your local machine, for developers and so on. With any install of OpenShift 4.x you get a 60-day evaluation. So it is totally possible for you to deploy OCP on AWS or some other cloud provider completely free for 60 days, with no support of course. The cost would be the VMs and infra components. If you have access to bare-metal servers, that could bring your total cost down to $0. Hope this helps. https://www.openshift.com/try
Red Hat Service Mesh (Istio sidecar proxy) is huge. It will help devs separate some concerns such as SSL certs, load balancing, auth, tracing http requests visually (amazing), et cetera
https://www.openshift.com/blog/red-hat-openshift-service-mesh-is-now-available-what-you-should-know
More on Istio
This might help. Please bear in mind, though, that it's instructions for what was effectively a beta release. You'll still need to cross-reference with the docs for the actual release you want to deploy (I assume 4.3).
One thing to note: unless it says otherwise, none of the prerequisites for the version you want to deploy are optional. Just because they may be difficult doesn't mean you don't have to do them.