Look at the WAF rules you've enabled, likely some flavor of the OWASP Top 10.
If you want to test them, you can run a penetration testing tool like OWASP ZAP or similar.
Depending on how your WAF is configured, you'll be able to see the offending requests in the logs, and if it's in prevention mode, those calls will be blocked.
This is actually an Apache question. What you are looking for is virtual hosts; they allow you to have more than one domain/subdomain on the server, each in its own folder, like that: https://httpd.apache.org/docs/2.4/vhosts/examples.html. On Ubuntu they sit in the /etc/apache2/sites-available/ directory as config files.
For your SSL issue: enable SSL with "sudo a2enmod ssl" and you can use something like Let's Encrypt (free SSL certs) with this guide from DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-16-04
With this you will still need to set up DNS for each subdomain pointing to your server, like you have already, but it will allow your web server to respond to the requests.
We have this setup with a Powerapp (https://powerapps.microsoft.com/en-us/).
It makes a webhook call to fetch status upon load, and allows the user to turn on/off the VM. The buttons are dynamic depending on the status returned on the first call.
The webhook calls are received by an Azure Function; it spits back the data and the PowerApp parses the JSON.
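For anyone curious what that function can look like: here's a minimal sketch of the status call in a Python HTTP-triggered function, assuming the azure-mgmt-compute SDK (the resource group, VM name, and SUBSCRIPTION_ID setting are placeholders, not details from our actual setup):

```python
import json
import os

import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient


def main(req: func.HttpRequest) -> func.HttpResponse:
    client = ComputeManagementClient(
        DefaultAzureCredential(), os.environ["SUBSCRIPTION_ID"]
    )
    # instance_view exposes power state as status codes like "PowerState/running"
    view = client.virtual_machines.instance_view("RESOURCE_GROUP", "VM_NAME")
    power = next(
        (s.code for s in view.statuses if s.code.startswith("PowerState/")),
        "unknown",
    )
    return func.HttpResponse(
        json.dumps({"status": power}), mimetype="application/json"
    )
```

The PowerApp just parses the returned JSON and shows/hides the start and stop buttons based on that status field.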
This guy doesn’t mention how expensive it is for what it is. Last time I priced out all the little features I’d need to do a VPN, it was like $30/month. That’s bare-minimum, bottom-tier everything.
Azure changes all the time, so who knows how much it costs today, but it's very unlikely to be in the price range of, say, a NordVPN or PIA subscription.
> Why don't I need a hidden .git file when I clone a DevOps repo?
You do.
> Like in other cases?
You do.
> What exactly does that do then?
You seem to be mistaken.
> So you can see who made updates and what not outside of Azure Repos in DevOps, but its not needed?
I don't understand this question.
> And maybe Once you clone a repo VS code sort of uses git under the hood if you have git installed?
Having Git installed locally is a requirement for using DevOps Repos.
> And you don't need to initialize a repo or anything?
You do. But cloning an existing repo from DevOps is a perfectly valid way to initialize a local copy of that repo.
Your questions seem mostly related to not understanding Git. There are a lot of ways to address that gap... e.g. https://git-scm.com/book/en/v2 or https://learngitbranching.js.org/.
On the flip side of that coin, you could spin up DCs as VMs in Azure and connect your on prem network to your vnet via VPN. Then you could get rid of your physical boxes. There are some risks involved with this that you should research first.
Source - I have a multi site environment that has DCs on prem and in Azure.
Haven't used it myself, but when I was researching why local printers can't be installed using Intune, most people recommended that one, even over the hybrid print solution from Microsoft.
No, just enable the external API for it and use the Azure plugin: https://grafana.com/plugins/grafana-azure-monitor-datasource/installation
I actually learned about Grafana through AWS guides and implementing it with that. All the AWS guides are amazing, but the Azure documentation is lacking. Maybe I should start a blog.
Well, since I'm on the team that writes it, I'll plug the Azure documentation site. We take great pains to ensure that this content is relevant, up-to-date, and easy to follow. Personally, I've been involved lately in adding how-to videos to the existing written content, so it should be easier than ever to follow. You're right about there being some arcane knowledge out there, though. For instance, I had no idea about that multi-NIC VM restriction you mentioned.
If you have a Premier Support contract, get with your TAM and see if you can get in the remote-hosted Azure IaaS workshop. I co-maintained that one and developed the online delivery of it when I was a Premier Field Engineer.
Prepare for both AZ-104 and AZ-500 - one hour each - and do make sure to go over all the GitHub labs for AZ-104 and AZ-500. If you want, you can try my Android app for mock exams on AZ-104 and AZ-500; this way you will be able to assess your preparation. DM me if you want to hop onto my Telegram for an Azure study group.
App link : https://play.google.com/store/apps/details?id=com.azure_quiz
Shameless self-promotion here:
All three courses, 70-532, 533, and 534, are available at http://www.cbtnuggets.com recorded by yours truly. :)
Our video series are significantly cheaper than in-person or online instructor-led training. They're self-paced, easy to digest in a variety of formats, and small enough to train on in small chunks of time. I don't know if we qualify as "instructor led" but I figured I'd let you know.
I really value our training; you should consider it too!
You could use /28 or /29 depending on how many VMs will be in the subnet. I like using this calculator here to get a good idea: https://mxtoolbox.com/subnetcalculator.aspx
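If you'd rather check the math locally, Python's stdlib ipaddress module does the same job; just remember Azure reserves 5 addresses in every subnet (network, broadcast, and 3 platform addresses):

```python
import ipaddress

for prefix in ("10.0.0.0/29", "10.0.0.0/28"):
    net = ipaddress.ip_network(prefix)
    print(f"{prefix}: {net.num_addresses} addresses, "
          f"{net.num_addresses - 5} usable in Azure")
# 10.0.0.0/29: 8 addresses, 3 usable in Azure
# 10.0.0.0/28: 16 addresses, 11 usable in Azure
```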
You could set your router up to use a dynamic DNS service like https://freedns.afraid.org and then run a Function App or automation script to watch for changes. A simple nslookup can detect the change and get the new IP to add to the NSG; a sketch of that loop follows.
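A minimal sketch of that watch loop in Python, assuming a hypothetical hostname and state file; the NSG update itself is left as a comment since it depends on how you manage the rule:

```python
import socket
from pathlib import Path

HOSTNAME = "myhome.example.afraid.org"  # your dynamic DNS name (placeholder)
STATE = Path("/tmp/last_ip.txt")        # where the last seen IP is stashed

current = socket.gethostbyname(HOSTNAME)
last = STATE.read_text().strip() if STATE.exists() else None

if current != last:
    print(f"IP changed: {last} -> {current}")
    # Here you'd update the NSG rule via the SDK or CLI, e.g.
    # az network nsg rule update ... --source-address-prefixes <current>
    STATE.write_text(current)
```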
The other option is to use https://docs.microsoft.com/en-us/azure/security-center/security-center-just-in-time which will detect your current external IP when you submit an access request.
Initial response - Degradation in Multiple Services in West Europe and North Europe
Hello,
Recently you opened case number x.
We determined that the issue was related to problems that we are investigating.
The current status is:
An alert for Visual Studio Team Services, Virtual Machines, Cloud Services, App Service \ Web Apps and SQL Database in West Europe and North Europe is being investigated.
We are working on it with the highest priority and we will keep updating you regarding the status of this service interruption until it is mitigated.
As always, if you would like to get the current status of the issue, please see the Microsoft Azure dashboard: http://azure.microsoft.com/en-us/support/service-dashboard/
Thank you for contacting Microsoft and reporting the issue. Your time and patience have been greatly appreciated, and again, apologies for the inconvenience this outage has caused.
Your Microsoft Azure Technical Support Team
Been working with Azure since about 2009. It's come a long way but holy shit does it still have its issues. Clients don't want to hear that it is Azure shitting the bed. They just blame the end product. Getting people to trust the cloud is difficult already without these random service interruptions.
From personal experience you can't fully trust http://azure.microsoft.com/en-us/status/ either.
Are you 100% sure you have a premium storage account? I've noticed it's really easy to use a standard account if I'm not paying attention.
Edit: I double-checked to make sure I wasn't crazy. If you want to use SSD-backed persistent storage, you have to use the DS-series VMs. Additionally, you can only access the new premium storage drives through the new portal. Regular VHDs on regular storage accounts are kind of horrendously slow; they're useful for bulk storage.
It's actually quite confusing with the wording on the site, but when you switch to standard you get a dedicated VM which you can host multiple Azure Websites on. This page probably explains it the best.
Just to be clear, it's not one website per VM.
Personally, I run an https://owncloud.org/ server to supplement my personal backups. It's slowly becoming my go-to. It's just a small Ubuntu VM, so I can run other things as needed.
I also host the MP3s for my podcast in a public blob storage container.
If that's your only need for a local installation, consider PowerApps (which is included in O365 - https://powerapps.microsoft.com/en-us/) as a replacement, and use RDS as an interim. You can run with cheaper O365 licenses and wouldn't need any kind of Azure infrastructure.
You could consider using Service Bus, since Functions can bind to it (a small sketch follows). Here is the same question on Stack Overflow: https://stackoverflow.com/questions/39198998/can-an-azure-function-app-access-an-on-premises-resource-via-a-vpn-or-hybrid-con
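If it helps, here's a minimal sketch of the Functions side with a Python Service Bus trigger (the queue name and connection string live in function.json and the app settings, which I've omitted):

```python
import logging

import azure.functions as func


def main(msg: func.ServiceBusMessage) -> None:
    # The binding hands you the queue message; decode and act on it.
    body = msg.get_body().decode("utf-8")
    logging.info("Received Service Bus message: %s", body)
    # An on-prem agent can consume the same queue from the other side,
    # which is the pattern discussed in the linked Stack Overflow answer.
```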
I like the Workspace feature when using it in a team environment. It helps maintain better visibility into Git source control by tracking who changes what, right in the VS Code editor.
It is possible. Of course it depends on your desired workload. What do you want to do in the cloud that you can't do at home? You can set up an Azure subscription and run a machine of your choice in the cloud. Microsoft is also releasing Windows 365, which starts at 31 USD: https://www.microsoft.com/en-us/windows-365/business/compare-plans-pricing
I would recommend having a look at this if you don't have much Linux experience. It will install all the things you need for running PHP websites, as well as give you an easy control panel to manage multiple sites:
Don't just use a redirect. If an attacker can MITM your users with e.g. a rogue AP at a coffee shop, he can proxy the user's connection over plain HTTP and strip TLS. You need to set the HSTS header to explicitly force the browser to connect over HTTPS (and consider submitting your domain for STS preloading so they are protected before their first visit as well).
The NWebSec package makes this fairly straightforward.
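NWebSec is ASP.NET-specific, but just to illustrate the header itself, here's a minimal sketch in Flask (purely illustrative, not the NWebSec API):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # max-age is one year; only send this on HTTPS responses in production.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains; preload"
    )
    return response
```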
You may be able to use YUMI or something like it:
https://www.pendrivelinux.com/yumi-multiboot-usb-creator/#YUMI-UEFI
Here’s how it (could) generally work:
The benefit here is keeping the image small and easy updating of contents (by just dropping one or more ISO files into the image). It may not work with whatever your backup solution is, but it might be worth a shot.
If you can pass these practice tests, you can pass the actual exam easily. I finished in around 8 minutes with a 950. Granted, it's definitely not as helpful as actually learning the material and understanding how Azure works, but you will definitely not have trouble with the exam.
If you are still looking, we at Site24x7 give you the capability to check transactions of critical workflows. The response time of individual steps and the total transaction time can be obtained. You can also reuse your existing Selenium IDE test cases to set up production monitoring of your web applications.
Based on these requirements, I'd recommend Cosmos DB using the MongoDB mode. You will be able to query and structure data in a similar manner as a SQL store, and you'll likely have a bunch of Mongo tools you could leverage when building the app: object-document mappers (ODMs, e.g. Mongoid https://github.com/mongodb/mongoid), tooling (Robomongo https://robomongo.org, etc.), and other related Mongo tools.
Beyond development experience and tooling, administration and scaling are much simpler with Cosmos DB. You can replicate to multiple regions (read replicas) and have an ordered list of failover regions.
Using Tables would be like writing assembly, whereas Cosmos DB is like using a high-level language.
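To make the tooling point concrete: pymongo (or any Mongo driver) talks to Cosmos DB's Mongo API like a normal Mongo server. A rough sketch, with the connection string as a placeholder you'd copy from the portal:

```python
from pymongo import MongoClient

# Placeholder: copy the real string from the Cosmos DB account's
# "Connection String" blade.
client = MongoClient("<cosmos-mongodb-connection-string>")
db = client["appdb"]

db.items.insert_one({"name": "widget", "qty": 5})
print(db.items.find_one({"name": "widget"}))
```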
You do. Quoting https://www.cloudflare.com/integrations/microsoft-azure/#cdn-interconnect-program:
> Microsoft Azure is working on its own CDN Interconnect program.
It's been like this for quite a while now, and as of at least a couple of months ago, I couldn't find more info about that.
Yes. Node pools are provisioned in an availability set, which specifies two fault domains and three update domains. Each node receives labels for its region and zone, which the Kubernetes scheduler takes into account when scheduling pods.
For more information on how these labels work in the scheduler check out these k8s docs on well-known labels.
VPN Gateway will do what you want it to do, via point-to-site connections. However, VPN gateway would be more expensive (even in the basic tier) than just setting up wireguard or OpenVPN on your existing IaaS Linux VM.
> From personal experience you can't fully trust http://azure.microsoft.com/en-us/status/[1] either.
Like the outage that happened in the latter part of 2014, where every major Azure service was down in 80% of regions and Microsoft was happily reporting "all services working perfectly!" for the first 2 hours, then "investigating some potential issues" for the remainder of the outage? That was fun!
Typically people use the support options -> http://azure.microsoft.com/en-us/support/options/. @AzureSupport is new, and they are good at connecting you to the right resources for help.
In regards to the issue: sometimes it is a caching issue, so trying a private window or clearing your cache may work.
I do exactly this and I'm not really a "newbie". It comes in real handy when you don't want to shell out $ for Windows licenses on a VM sitting on your laptop. Spin up a VM running SQL Server (not an Azure SQL Database), shut it down from within the VM, and deallocate it in the portal when you're done each day. You'll only pay for what you use.
If high performance isn't a concern (and that's likely the case), you can do just fine with an A2 or A3 machine running Standard Edition. You may want to attach a second disk for storage/backups.
http://azure.microsoft.com/en-us/pricing/details/virtual-machines/#Sql
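If you want to script the shutdown instead of clicking through the portal, here's a minimal sketch with the azure-mgmt-compute SDK (subscription, resource group, and VM name are placeholders). Deallocating, not just stopping, is what stops the compute meter:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# begin_deallocate releases the compute resources so billing stops;
# a plain power-off from inside the guest keeps the VM allocated.
poller = client.virtual_machines.begin_deallocate("my-rg", "sql-dev-vm")
poller.result()  # block until the VM is fully deallocated
```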
The above linked Github repo is the way to go for the Resource Manager templates. There's also a searchable index maintained here - http://azure.microsoft.com/en-us/documentation/templates/
> This is a smidge away but not quite there.
Can you explain what's lacking?
What region are you in? There was a service management advisory for West US today here, though the dashboard says it's resolved (take with a large portion of salt).
My personal experience - it's not. For ~$15/month you can get Windows Server 2012 R2 with 1 core, 2GB RAM, 75GB HDD, and 1TB of bandwidth. Sure, you don't get automatic backups, easy scaling, (geo) redundancy, CDN and stuff, but that's overkill for most small projects anyway. There are options, though: for example, if you already have an MSDN subscription or are a BizSpark member, Microsoft will compensate for something like $150 each month in Azure services. Check the offers: http://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits/
Three weeks, but I didn't study on the weekends. I have access to the paid version of Linux Academy. It's nice to be provided with real hands-on labs on Azure and to also have the possibility to play around a bit on Azure using their playground.
Stacksocial has a lifetime membership of Whizlabs on sale right now (https://stacksocial.com/sales/whizlabs-lifetime-membership) and they also have an AZ-103 course. While I can't speak from experience myself yet, I read that their sets of questions should be good for preparation. I'll use them in my preparation for the AZ-400 exam. They don't provide you with hands-on lab environments, though.
We have an S3-tier SQL database running in the North Europe data centre.
We're experiencing frequent connection errors of the form TCP Provider: Timeout error [258].
This issue appears to have been acknowledged as a problem.
The resolution is stated as:
> To resolve this issue, try to apply the following Windows Server 2012 update rollup in Microsoft Knowledge Base first:
> 2779768 Windows 8 and Windows Server 2012 update rollup: December 2012
I'm assuming that this patch has already been applied to all Azure database servers.
Is anyone else having this problem? The usability of our web app has dropped to zero on account of this.
Any suggestions on resolving this?
Something like this?
And then, on a server that you manage, do something (or have a process) to create a user's STS key to mount a network share to the Azure Files storage instance.
https://docs.microsoft.com/en-us/azure/storage/storage-file-how-to-use-files-windows
Based on what you are describing, I would call you an Azure Architect. However, as you touched on, there are many areas of Azure you haven't covered, and that's okay. All I'll say is that where I work, we all specialize within areas of Azure, because of how much depth each subject area requires before you'd be classified as an expert. You will never know everything, and that's the truth. Now, in terms of your length of time/experience, that's where you can add Jr./Assoc., Sr., Prin., etc. to your title, and that's important to showcase. If you're only 2 years into Azure, I'd place you as a Jr. or Assoc.
You can find a simple example of auto start and shutdown using Azure Functions and PowerShell in my GitHub repo: https://github.com/FreddyAyala/AzureCFMStarterKit. This is part of a cloud financial management ebook I wrote, if you are interested in checking it out: https://www.amazon.com/Azure-Cloud-Financial-Management-Handbook-ebook/dp/B0B4KF2QDF
If you can somehow use Astrill to create the connection, it will dramatically help with your situation. We have many systems and users in China and tried everything to get them reliable connectivity to US Azure servers, and Astrill works really well for us. The users each have dedicated static US IPs which can be whitelisted, but somehow Astrill is able to remove a huge amount of latency and hops for connections from mainland China. I know it's a user-based solution, but I wonder whether you can architect it into what you are doing?
Merge to main triggers the pipeline, the automated tasks in the pipeline execute, and the application is deployed. Sounds like you're winning to me.
As far as editing the pipeline file and triggering the pipeline: make a new branch, modify as needed, and when finished merge into main. Azure DevOps has the ability to manually trigger a pipeline from any branch.
For expanding the pipeline, read about continuous integration (CI). The purpose of CI is to automate testing. Write your tests and have the pipeline run them. Add security scanning so vulnerabilities can be addressed before the code is deployed. Add any task that your team does manually to ensure quality code. The goal is to automate for consistent results every time. I personally like David Farley’s philosophy on software development.
https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912
For more learning use the below URL. Azure DevOps has a generous free tier. Go provision a new organization with a personal account and just use the product.
This just came out so might help fill in the gap - https://www.amazon.com/Microsoft-Security-Operations-Analyst-Certification-dp-1803231890/dp/1803231890/
Aside from the learning path, I'd recommend making your way through the relevant Ninja trainings for the products covered by the exam. Good collection of links here - https://azurecloudai.blog/2021/05/12/all-the-microsoft-ninja-training-i-know-about/
Definitely make sure you get as much hands-on as you can, though, to reinforce the reading.
Cool. That is pretty much my same story. Been doing this since NT 3.51 days and finally decided to get certified in Azure since I already dabble in it for several customers. I found MS Learn to be pretty effective training. If you do not mind videos, there is a series on Udemy.com by Scott Duffy that is good for low $ and John Savill on YouTube has plenty of great videos at no charge. Used a mixture of all of that and passed my AZ-104.
Everything I've read points to the opposite.
In fact, a Microsoft employee and several S/O users seem to say it's fine: https://stackoverflow.com/questions/36096720/move-azure-vhd-from-premium-to-standard-storage
Edit: I've also seen another post that confirms what you're saying. Still, the VM boots fine; the issue is only with connecting to it. I'll try creating the VM using the original premium VHD.
[I work at Elastic]
Just wanted to add that, not long ago, we added a high-level guide on our blog to what the integration and the Elastic Stack service on Azure offer: https://www.elastic.co/blog/experience-elasticsearch-microsoft-azure-portal
If you are also looking for reliable dumps material, then I suggest the AZ-103 dumps, because they helped me pass my IT certification. I downloaded the AZ-103 study material from Exam4Help and got a satisfactory result in the final. They provide 100% valid and up-to-date exam dumps, give a full money-back guarantee, and provide useful, handy study material.
I could not pass my AZ-103 exam on the first attempt because I was not well prepared and did not have proper study material. But this time I downloaded the AZ-103 questions and answers from Exam4Help and aced my certification. When I read these exam dumps, I could easily memorize and understand the study material. The AZ-103 exam is a very tough exam, but the Exam4Help dumps made it easy; I suggest you try them.
I am not sure how PHP does it, but for C# I leverage ChainedTokenCredential. I like the below article because it shows you the steps DefaultAzureCredential takes to obtain a token.
https://www.nuget.org/packages/Azure.Identity/
The only difference between DefaultAzureCredential and ChainedTokenCredential is that I can specify which methods to use with ChainedTokenCredential. Sorry, I couldn't find the PHP equivalent, but hope the article illustrates the process for you.
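I can't vouch for a PHP equivalent either, but for illustration, the same pattern in Python's azure-identity looks like this:

```python
from azure.identity import (
    AzureCliCredential,
    ChainedTokenCredential,
    ManagedIdentityCredential,
)

# Try each credential in order until one succeeds.
credential = ChainedTokenCredential(
    ManagedIdentityCredential(),  # works when running inside Azure
    AzureCliCredential(),         # falls back to `az login` locally
)
token = credential.get_token("https://management.azure.com/.default")
print(token.expires_on)
```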
404 is an HTTP error meaning the URL can't be found. Enter the https://fhi-storage.azurewebsites.net/...... blah blah and see if it loads. If it doesn't, that's likely your issue: wrong URL or missing files. I'd also consider using fonts.google.com, as Roboto is available via their CDN.
The following should get you in the right direction. https://docs.microsoft.com/en-us/azure/aks/custom-node-configuration for unsafe sysctl and https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ if you need it.
You could use an initContainer that runs the sysctl command you'll need, considering ACI is compatible with init containers.
Otherwise you could try specifying sysctls in spec.securityContext.sysctls in the deployment YAML (see the sketch below), although that's very Kubernetes-specific and I couldn't find whether ACI supports it.
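For what it's worth, this is what that field looks like when built with the official Kubernetes Python client; whether ACI honors it is still the open question (the sysctl name/value are examples only):

```python
from kubernetes import client

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sysctl-demo"),
    spec=client.V1PodSpec(
        # Equivalent to spec.securityContext.sysctls in the YAML form.
        security_context=client.V1PodSecurityContext(
            sysctls=[client.V1Sysctl(name="net.core.somaxconn", value="1024")]
        ),
        containers=[client.V1Container(name="app", image="nginx")],
    ),
)
```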
If you are looking for a simple command-line option that isn’t dedicated to Azure, rclone works across many platforms.
Here is the pip documentation.
It's a worker role. It's basically like a Windows service that runs in Azure. I think most people just build it as a never-ending loop to do work, and if it crashes it gets automatically restarted (the general shape is sketched below).
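The loop pattern itself is language-agnostic; a rough sketch (worker roles are .NET, so this Python is purely illustrative, and fetch_work is a hypothetical stand-in for whatever queue or source you poll):

```python
import logging
import time


def fetch_work():
    """Hypothetical: pull the next item from a queue, or None if empty."""
    return None


while True:
    try:
        item = fetch_work()
        if item is None:
            time.sleep(5)  # nothing to do; back off briefly
            continue
        # ... process item ...
    except Exception:
        logging.exception("work item failed; looping on")
        # The host restarts the whole process if it ever dies outright.
```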
My advice: be sure to be able to run your build locally. This ensures that you don't tie yourself too much with either solution, and then you won't be too sorry down the road if you realize you made the wrong choice.
Just had a look - there's an Azure Event Hubs Logstash plugin (https://www.elastic.co/guide/en/logstash/current/plugins-inputs-azure_event_hubs.html) - so the answer would be yes. No need to write an Azure Function.
Can I suggest Gitlab?
We use their self-hosted offering, running on a VM inside Azure. Licensing is pay-per-seat. It has a very robust CI/CD system and offers integrations with all the major cloud providers' managed Kubernetes services (AKS, EKS, GKE).
At the 'Premium' tier, they offer a service called Gitlab GEO, which is intended to synchronize multiple instances, so you can have one instance running in Azure, and one instance in AWS, and any changes, from code to administration, will propagate between the two.
Great write-up, thanks. I am planning to sit for AZ-305 soon. By the way, anyone looking for Azure certification mock exams, please check out my Android app; it comes in handy.
https://play.google.com/store/apps/details?id=com.azure_quiz
You can potentially register a domain here for cheap. Why would you be required to register with Azure?
Pair your newly owned domain with a free/cheap DNS service (Cloudflare has a free tier, for instance) and point it to your Azure VM.
Thanks for the suggestion, I can't find any info about spot instance eviction frequency for Azure, but AWS has a nice page about it: https://aws.amazon.com/ec2/spot/instance-advisor/
My assumption here is that these numbers are similar for Azure. It looks like it really helps that my VM is small and is typically not operated during business hours.
I went off this
https://aws.amazon.com/ec2/pricing/reserved-instances/pricing
m4.xlarge, if you pay upfront, is $2,347, but I see that doesn't include the Windows license.
>The Azure Compute pre-purchase plan offers you discounts up to 63 percent off standard prices on virtual machines in
Oh, that's interesting. I wonder if this applies to their App Service plans as well.
I think the most up-to-date reference for 535 is the Exam Ref book published a few months ago. You can even read it for free for 7 days with a free Safari trial: https://www.safaribooksonline.com/library/view/exam-ref-70-535/9781509304769/
If you're not experienced in moving data between platforms, it may be risky to do it without any professional help. If you don't want to damage or lose any of your records, it's better to use a specialized service that will help you transfer the data from Zendesk to Azure. It's definitely safer, faster, and cheaper to use a professional data migration service in this case.
You can try the Help Desk Migration service for this. It has great customer reviews and was also named a High Performer in Winter 2020 by G2 Crowd.
I'd have a look and see if there are any open-source web apps that let you do what you're doing - something with a MySQL backend. If you can find such a web app, you could host it in Azure App Service relatively cheaply and use the "MySQL in App" feature to keep costs down; as you're the sole user, there wouldn't be a need for huge performance.
Something like https://firefly-iii.org/ would be a good web app to use, and it would help you learn about App Service in Azure, and potentially some other features.
We use Server Density with the Tether agent, with a bunch of custom plugins to monitor ~200 servers we have up in Azure.
Full Disclosure - I don't work for Server Density, but I did write the Tether agent (which I have open sourced)
Happy to help! Let us know how you shake out, I think you can get a pretty solid off-site backup strategy that won't cost an arm-and-a-leg.
For the record, if this is all you're using Azure for... I think you're probably not taking advantage of all it has to offer. That may be why it seems expensive. The solutions listed in an article like this one are focused much more closely on what you're doing: http://www.tomsitpro.com/articles/online-backup-services-business,2-918.html
That said, the additional features that Azure offers can't be beat. If you're going to be migrating workloads in Azure then you're making a great start.
[EDIT] - Also, as a shameless self-promotion, check out http://www.cbtnuggets.com where I've got three MS Azure courses available. All of this is covered in those videos. They are certification/exam-focused, but you can pick and choose your way through each video if you want.
>I plan to use IdentityServer4 for the authentication of the users.
Why? What's wrong with Azure AD?
Authorize requests to Azure Storage (REST API) | Microsoft Docs
Well for starters:
> Azure WebJobs 2.0 came out March 2017. No further development since.
And here is v3.0:
https://www.nuget.org/packages/Microsoft.Azure.WebJobs/3.0.0
Adding to that, the classic idiocy that you should use <new stuff released a month ago> to accomplish a microservices architecture is classic consultancy bullshit. It's a clear indication of someone with little actual experience creating and maintaining large distributed systems.
So this chart in particular is loaded with Prometheus rules specific to monitoring k8s. Not that you can't load your own on top of it.
Personally I went with this over the Azure-native monitoring solution for cost reasons. Log Analytics was costing a ton of money compared to our compute cost, which didn't make sense to us. We use Prometheus strictly for monitoring the cluster, not the apps in it.
For an IaaS VM (I assume you're using Linux) you'd want to look at something like this: https://prometheus.io/docs/guides/node-exporter/
I have no experience with this personally as 99% of our apps in Azure are deployed in AKS, so the helm chart above covers almost all of our hardware and cluster monitoring needs.
I wonder if Folding@Home's Coronavirus therapeutic antibody candidate search will be hosted on available Azure VMs?
Always happy to see a helpful comment that starts with criticism. :)
Yes, I've read that exact article and a few others and couldn't understand them (maybe I couldn't concentrate at work); I'm planning to visit pluralsight.com so I can watch some video tutorials.
Running it as a web app might be viable as well? Securing it will be easier than setting up a jumpbox, I guess.
I would agree with your tutor. Enroll in a training course for AZ-103 (the Skylines courses on Udemy.com are a great resource) and start getting your feet wet. They'll train you from the ground up, with minor assumptions that you understand the basics of what virtual computing is, networking, etc.
I took this journey this year, moving from CCNA R&S -> CCNA Cloud -> Azure Administrator Associate -> Azure Solutions Architect Expert. I'm about to start Azure Security Engineer now to round it out, then perhaps (painfully) start into the devops side.
I would recommend diving in and moving from associate to expert without taking a break, as much of the material repeats/mirrors through the expert level.
You can try installing some third-party monitoring agents to get the process-level details. One good agent is Site24x7 Windows Server Monitoring. You can get individual process-level CPU usage for free.
I wish I could write more on this, but I'm on my mobile at the mall. Check out Shadow Tech. It's $30/month and I use it for different setups and scenarios all the time. My kids use it to play Fortnite. It has excellent graphics and the CPU is plenty fast.
I asked this same question in other subreddits, and it seems like I found two solutions; we are starting to use Apache Guacamole. It's free and allows both SSH and RDP.
Flat-file searches are fast as hell, as long as the file isn't too big. SQLite and other embedded options are fine if you perform many reads but limited writes. If you start writing to the same database from multiple places at the same time, you might run into problems (one mitigation is sketched below). When it comes to images, I know that Cloudflare handles this for you: you give it one big master image, and based on how it gets displayed in the page, the CDN generates an appropriately sized image for you.
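On the multiple-writers point, one mitigation (my suggestion, not something covered above) is SQLite's WAL journal mode plus a lock timeout, which lets readers keep working while a single writer writes:

```python
import sqlite3

conn = sqlite3.connect("app.db", timeout=10)  # wait up to 10s on a lock
conn.execute("PRAGMA journal_mode=WAL;")      # readers don't block the writer
conn.execute("CREATE TABLE IF NOT EXISTS hits (path TEXT, ts REAL)")
conn.execute("INSERT INTO hits VALUES (?, strftime('%s','now'))", ("/index",))
conn.commit()
```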
Kubernetes basically manages that for you, and third-party tools aren't recommended for this, although you can set variables for the age of unused containers that should be garbage-collected.
You can read about it here: https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/
Also good to reconfirm that AKS is 1% about learning to use Azure and 99% about using Kubernetes. Nearly everything you see on kubernetes.io will apply to your AKS cluster.
Bonus easy explanation of the difference between AKS/ACS (besides the focus on Kubernetes):
- ACS deploys virtual machines to run the kubernetes master nodes
- AKS does not, and the control plane is abstracted as a service
Couple weeks late to the party, but hopefully you guys still check inboxes occasionally!
To combo off this question a bit, does that mean AKS is using the self-hosting feature? And since that's a new experiment in 1.8, does that mean 1.7 works differently?
Thanks!
The system pods are controlled from the manifests in /etc/kubernetes/manifests
There is a guide on them here: https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
They are just normal pods so you can view the logs to find out what caused them to restart.
Grafana is an open-source graphing tool that can take data from multiple sources. Can't live without it these days. The learning curve isn't too bad. Plus, since it's just a consumer of data, you can do lots with it.
Grafana.com has all the setup documents, and it's really straightforward (install repo, install Grafana).
Then you enable Application Insights in Azure and use this plugin:
https://grafana.com/plugins/grafana-azure-monitor-datasource/installation
I routinely connect to my Windows VMs from Kubuntu 18.04 using either Remmina or KRDC (both are RDP clients for Linux).
Obvious caveat is to make sure your security is set up properly, for example, use the networking tab in Azure (VM) to only allow RDP access from specific IP addresses. Your needs may vary, but make sure you're restricting access.
Here are some useful readings for you: http://www.itprotoday.com/microsoft-azure/azure-active-directory-vs-premises-active-directory and https://jumpcloud.com/blog/active-directory-azure-active-directory/
With regard to the provided information: if you do not plan to open more branch offices (BOs), use a third DC.
You can join the Azure file shares to AAD DS or your own domain, then use something like MyWorkDrive to make them available to users regardless of whether their machine is domain-joined - https://www.myworkdrive.com/online-file-storage/azure-file-shares/
It's great; just no built-in backup or snapshots yet. Here's a comparison of Azure File Shares vs Azure NetApp Files: https://www.myworkdrive.com/online-file-storage/cloud-file-storage/
Titan SFTP in Azure could be a good consideration. Strong security and very easy to set up and use. The PAYG option is a good way to keep costs low (at least in the beginning, as you assess your storage needs).
Take a look at Mover.io. It's a free Microsoft service. I'm doing exactly that kind of copy with it. It's currently manually triggered, but a schedule is also possible. It's more of a migration tool, but hope it fits.
The original goal behind Azure File Sync was to synchronize files from on-premises to Azure File Shares to allow for backups, tiering (expanding storage space), and replication between sites. It was not intended for end users in an office to have direct access via Azure.
However, recently, they added the ability to enable a File Share to use Azure AD, but the permissions with on prem (NTFS) are not compatible so you have to redo any permissions. Users accessing it isn't too difficult if users are logging into an Azure AD joined machine, but gets a little odd if they're not.
The best Microsoft experience for users in the cloud is SharePoint Online, using OneDrive sync to give them a classic folder structure on the local system. Also, MS just recently acquired Mover.io to make the move to SharePoint easier.
Could use something like https://www.carbonite.com/blog/article/2017/06/what-is-carbonite-move/
I think the company recently changed its name. The product was originally DoubleTake Move. We actually considered it for this specific purpose.
I would suggest checking out Veeam’s data protection solution. They have free / community editions for on-premises and cloud based workloads. Also enables much easier / greater recovery options.
https://www.veeam.com/virtual-machine-backup-solution-free.html
At a quick guess, it looks like you didn't set routes in your on-premises gateway for your P2S IP addresses. I described this briefly here: http://azure.microsoft.com/documentation/articles/web-sites-integrate-with-vnet/#accessing-on-premise-resources but do a better job talking about it in this blog: http://azure.microsoft.com/blog/using-vnet-or-hybrid-conn-with-websites/. When you add a point-to-site IP block, the VNet can't push down an update to the routes, so they have to be set by hand.
Usually by browsing their docs or just googling your scenario.
E.g. "VPN from your site to a DC", or "disaster recovery with Azure replication".
I have two thoughts... First: your last paragraph makes me think you want to replicate data across a web server farm, replicate it to the cloud, and then serve up the cloud data as a CDN... right?
Second: I'd recommend looking at the MSDN forums for Azure CDN and Cache. There may be some knowledge to be gained there regarding replicating the whole site and then, when a file changes, pushing the change. I assume there would need to be some level of coding/automation, but I don't know enough details to answer in depth.
Another interesting thread is the one about "Best Practices for CDN", which may give insight into how you can make sure your content is fresh.
Personally, I would assume that everything will fail - there is a 9-hour video series on "FailSafe" that talks about the correct way to architect your solutions to be resilient to failures. There will always be a failure at some point (network, hardware, etc.).
Licensing is a beast of its own and has all sorts of ways it can play out. I don't know the details, but I'd start here, which covers Windows Server / SQL Server / BizTalk.
Be aware that SQL Server in a VM behaves differently than Azure SQL Database. There are certain limitations, such as CLR and a few others I can't think of right now. You can't shut down Azure SQL Databases to save on costs, only delete them.
I suppose you could delete them when you're done with them, then redeploy a new one each time, but I don't suggest you do that, since it's going to be configured differently each time and the startup takes many minutes.
The costs are fairly minimal per month: a 100MB single-instance SQL Database is around $5 a month, not including bandwidth usage (from the Azure pricing calculator). If you're firing up a VM for several hours per month, you will probably match this cost.
> DS vms have a hard disk bandwidth caps ??
Check out this link. The DS4 VM has a max disk bandwidth of 128 MB/s.
> Bit of a problem, added our USA DS4 into an availability group and it rebooted, after restart the disk attached disk I/O is super slow again
Ouch. I was hoping that all of your performance problems were fixed >.< My first thought would be to double-check that the VM and the storage account are in the same datacenter. The next thing I'd do is double-check the hard performance numbers with a tool like CrystalDiskMark or SQLIO; you should be able to hit the max performance of the VM - for the DS4 that's 128 MB/s and >5k IOPS. Unfortunately I'm less familiar with Linux disk-testing tools. If we're unable to hit those numbers, my guess would be that the VM is misconfigured in some way (the VM or the VHDs).