What do Docker containers have to do with Infrastructure as Code (IaC)? In a word, everything. Let me explain. When you compare monolithic applications to microservices, there are a number of trade-offs. On the one hand, moving from a monolithic model to a microservices model allows the processing to be separated into distinct units of work. This lets developers focus on a single function at a time, and facilitates testing and scalability.
Prometheus is a modern and popular monitoring and alerting system, built at SoundCloud and eventually open sourced in 2012. It handles multi-dimensional time series data really well, and our friends at InfinityWorks have already developed a Rancher template to deploy Prometheus at the click of a button.
In hybrid cloud environments, it is likely that you will be using multiple orchestration engines, such as Kubernetes and Mesos, in which case it helps to have the stack or application portable across environments.
In my last blog post, I detailed how to quickly and easily get the Rancher Server up and running with GitHub authentication and persistent storage to facilitate easy upgrades. In this post, I will step through the creation of a private, password-protected Docker registry and show how to integrate it with Rancher. We will then tag and push an image to this registry. Finally, we will use the Rancher Server to deploy this image onto a server.
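For readers who want a concrete picture before the walkthrough, a password-protected registry can be sketched as a Compose service using the official `registry:2` image with htpasswd authentication. The service name and file paths below are illustrative assumptions, not necessarily the exact configuration used later in the post.

```yaml
# docker-compose.yml -- hypothetical sketch of a password-protected registry.
# The ./auth/htpasswd file would be generated beforehand, e.g. with the
# htpasswd utility from apache2-utils.
registry:
  image: registry:2
  ports:
    - "5000:5000"
  environment:
    REGISTRY_AUTH: htpasswd
    REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
    REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
  volumes:
    - ./auth:/auth
```

Once such a registry is up, publishing an image is a matter of tagging it with the registry host and pushing, e.g. `docker tag myapp registry.example.com:5000/myapp`, then `docker login` and `docker push registry.example.com:5000/myapp` (the hostname here is a placeholder).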
I have already talked about several ways to monitor Docker containers, as well as using Prometheus to monitor Rancher deployments. However, until now it has been a manual process of launching monitoring agents on our various hosts. With the release of the Rancher beta with scheduling and support for Docker Compose, we can begin to make monitoring a lot more automated. In today's post we will look at using Rancher's new "Rancher Compose" tool to bring up our deployment with a single command, using scheduling to make sure we have a monitoring agent running on every host, and using labels to isolate and present our metrics.
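To make the "agent on every host" idea concrete: Rancher's scheduler supports a global-service label, so a Compose entry along these lines would place one agent container on each host in the environment. The cAdvisor image and mounts are illustrative assumptions, not the exact agent used in the post.

```yaml
# docker-compose.yml fragment -- hypothetical monitoring agent scheduled on every host
monitoring-agent:
  image: google/cadvisor:latest
  labels:
    io.rancher.scheduler.global: 'true'
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /sys:/sys:ro
    - /var/lib/docker:/var/lib/docker:ro
```

Launching the stack with `rancher-compose up` would then keep one agent per host, including hosts added to the environment later.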
[Usman is a server and infrastructure engineer with experience building large-scale distributed services on top of various cloud platforms. You can read more of his work at techtraits.com, or follow him on Twitter @usman_ismail or on GitHub.]
Magento is an open-source content management system (CMS) offering a powerful toolset for managing eCommerce websites. Magento is used by thousands of companies, including Nike and OfficeMax. Today we are going to walk through the process of setting up a Magento cluster using Docker and Rancher on Amazon Elastic Compute Cloud (EC2).
One of the key features of the Kubernetes integration in Rancher is the application catalog. Rancher lets you create Kubernetes templates that allow users to launch sophisticated multi-node applications with the click of a button. Rancher also adds support for application services to Kubernetes, which leverage Rancher's metadata service, DNS, and load balancers. All of this comes with a consistent and easy-to-use UI.
Thanks to Docker, Orange and Blumberg Capital for hosting a great meetup last night in San Francisco. Darren Shepherd, Chief Architect of Rancher Labs, introduced RancherOS for the first time and answered questions from the audience. Learn more about RancherOS, or download it from GitHub. Darren will also be presenting RancherOS at an online meetup on March 31st, 2015.
RancherOS Demo at Docker Meetup from Rancher Labs on Vimeo.
Recently Rancher provided a disk image for deploying RancherOS v0.3 on Google Compute Engine (GCE). The image supports RancherOS cloud-config functionality. Additionally, it merges the SSH keys from the project, instance, and cloud-config, and adds them to the rancher user.
Building the Setup
In this post, I will cover how to use the RancherOS image on GCE to set up a MongoDB replica set. I will also cover how to use one of the recent features of the Rancher platform: the load balancer.
Raul Sanchez is a microservices and DevOps architect in the innovation department at BBVA, exploring new technologies and bringing them into the company and the production lifecycle. In his spare time, he is a developer who collaborates on open source projects. He has spent more than 20 years working on GNU/Linux and Unix systems in different areas and sectors.
Introduction
GoCD is a Java-based open source continuous delivery system from ThoughtWorks.
Note: you can read Part 1 and Part 2 of this series, which describe how to deploy service stacks from a private Docker registry with Rancher. This is my third and final blog post; it follows Part 2, where I stepped through the creation of a private, password-protected Docker registry and integrated it with Rancher. In this post, we will be putting this registry to work (although for speed, I will use public images).
Rancher Server has recently added Docker Machine support, enabling us to easily deploy new Docker hosts on multiple cloud providers via Rancher's UI/API and have those hosts automatically registered with Rancher. For now, Rancher supports the DigitalOcean and Amazon EC2 clouds, and more providers will be supported in the future. Another significant feature of Rancher is its networking implementation, which enhances and simplifies the way you connect Docker containers and the services running on them.
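As a rough illustration of the workflow Rancher drives for you, provisioning a cloud host by hand with Docker Machine looks something like this. The token variable and host name are placeholders, and the exact flags depend on your Docker Machine version:

```shell
# Hypothetical example: provision a Docker host on DigitalOcean with Docker Machine.
# DO_TOKEN must hold a valid DigitalOcean API token.
docker-machine create \
  --driver digitalocean \
  --digitalocean-access-token "$DO_TOKEN" \
  rancher-host-1

# Point the local Docker client at the new host.
eval "$(docker-machine env rancher-host-1)"
```

Rancher performs the equivalent of these steps through its UI/API, and additionally registers the new host with the Rancher server.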
Containerization brings several benefits to traditional CI platforms where builds share hosts: build dependencies can be isolated; applications can be tested against multiple environments (for example, testing a Java app against multiple versions of the JVM); on-demand build environments can be created with minimal stickiness to ensure test fidelity; and Docker Compose can be used to quickly bring up environments that mirror development environments. Lastly, the inherent isolation offered by Docker Compose-based stacks allows for concurrent builds -- a sticking point for traditional build environments with shared components.
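The multi-environment testing point can be sketched with a simple loop; the `openjdk` image tags and the Gradle invocation are assumptions about the project's build tooling, not a prescription:

```shell
# Hypothetical: run the same test suite against several JVM versions,
# each in its own throwaway container.
for tag in 8-jdk 11-jdk; do
  docker run --rm -v "$PWD":/app -w /app "openjdk:${tag}" ./gradlew test
done
```

Because each run starts from a clean image, there is no build-dependency bleed-through between the two JVM versions, which is exactly the isolation benefit described above.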
At 360pi, we deliver commerce analytics that enable retailers to make sense of retail and shopper big data and use it to improve their commerce strategy. Our infrastructure is all in Amazon Web Services and, up until now, was simply EC2 instances built with our own AMIs. We used to maintain the traditional dev/test/master branch hierarchy in GitHub for our monolithic Python application, and we deployed those branches with Jenkins and Ansible scripts.
Hello, my name is Alena Prokharchyk and I am a part of the software development team at Rancher Labs. In this article I'm going to give an overview of a new feature I've been working on, which was released this week with Rancher 0.16: a Docker load balancing service. One of the most frequently requested Rancher features, load balancers are used to distribute traffic between Docker containers. Now Rancher users can configure, update, and scale an integrated load balancing service to meet their application needs, using either Rancher's UI or API.
I recently compared several Docker monitoring tools and services. Since the article went live, we have gotten feedback about additional tools that should be included in our survey. I would like to highlight two such tools: Prometheus and Sysdig Cloud. Prometheus is a capable self-hosted solution which is easier to manage than Sensu. Sysdig Cloud, on the other hand, provides another hosted service, much like Scout and Datadog.
So far in this series of articles we have looked at creating continuous integration pipelines using Jenkins and continuously deploying to integration environments. We also looked at using Rancher Compose to run deployments, as well as Route53 integration for basic DNS management. Today we will cover production deployment strategies and circle back to DNS management to show how we can run multi-region and/or multi-data-center deployments with automatic failover. We will also look at some rudimentary auto-scaling so that we can automatically respond to request surges and scale back when the request rate drops again.
In previous articles we have seen how to set up a Jenkins CI system on top of Docker and leverage Docker to create a continuous integration pipeline. As part of that, we used Docker to create a centrally managed build environment which can be rolled out to any number of machines. We then set up the environment in Jenkins CI and automated the continuous building, packaging, and testing of the source.
Over the last year we have written about getting several application stacks running on top of Docker, e.g. Magento, Jenkins, Prometheus, and so forth. However, containerized deployment can be useful for more than just defining application stacks. In this series of articles, we would like to cover an end-to-end development pipeline and discuss how to leverage Docker and Rancher at its various stages. Specifically, we're going to cover building code, running tests, packaging artifacts, continuous integration and deployment, and managing an application stack in production.
We just came back from DockerCon 2016, the biggest and most exciting DockerCon yet. Rancher had a large and well-trafficked presence there - our developers even skipped attending breakout sessions in favor of staffing the booth, just to talk with all the people who were interested in Rancher. In only two days, over a thousand people stopped by to talk to us!
Docker-Native Orchestration
Without a doubt, the biggest news out of DockerCon this year is the new built-in container orchestration capabilities in the upcoming Docker 1.12 release.
We’ve just returned from DockerCon 2017, which was a fantastic experience. I thought I’d share some of my thoughts and impressions of the event, including my perspective on some of the key announcements, while they are still fresh in my mind.
New open source projects
Container adoption for production environments is very real. The keynotes on both days included some exciting announcements that should further accelerate adoption in the enterprise as well as foster innovation in the open source community.
I just came back from DockerCon EU. I have not met a more friendly and helpful group of people than the users, vendors, and Docker employees at DockerCon. It was a well-organized event and a fun experience.
I went into the event with some questions about where Docker was headed. Solomon Hykes addressed these questions in his keynote, which was the highlight of the entire show. Docker embracing Kubernetes is clearly the single biggest piece of news coming out of DockerCon.
Our team just spent the last four days in San Francisco attending the DockerCon conference and participating in the Hackathon. We decided to send the entire Rancher Labs engineering team to the conference, and I'm so glad we did. There was big news and great new Docker capabilities, and it gave us a chance to meet so many Rancher friends and users at one time. First there's the city, the venue, the party, and the food.
Containers may be super cool, but at the end of the day, they’re just another kind of infrastructure. A seasoned developer is probably already familiar with several other kinds of infrastructure and approaches to deploying applications. Another is not really that big of a deal. However, when the infrastructure creates new possibilities with the way an application is architected—as containers do—that’s a huge deal. That is why the services in a microservice application are far more important than the containerized infrastructure they run on.
This year we sponsored the DockerCon Hackathon and had an amazing 24 hours working with people hacking on Docker, Rancher, RancherOS, and more. Two of our team, Darren Shepherd and Alena Prokharchyk, were judges, so we didn't think it would be fair to enter the contest. That said, we wanted to be involved in the hacking anyway, so we built a little tool called SherDock, a simple image management tool for garbage collection, identifying orphaned volumes, and more.
Monitoring your container-based infrastructure is crucial to ensure good performance, identify issues early and gain the insight necessary to maximize its efficiency. When you are dealing with a large number of often short-lived containers spread over multiple hosts and even data centers, understanding the operational health of your infrastructure implies the need to aggregate performance data from both physical hosts as well as the container cluster running on top of it.