NTT TechnoCross becomes Canonical Certified Support Partner in Japan

Ubuntu Insights, Cloud and Server - Tue, 30/01/2018 - 14:01

NTT TechnoCross Corporation has signed a partnership agreement with Canonical to provide strengthened OSS support to its customers in Japan, including OpenStack deployments.

NTT TechnoCross will provide Japanese-language support for domestic customers and will act as the first point of contact for customer enquiries and for the fault isolation and resolution phases.

NTT TechnoCross has extensive experience with OSS including OpenStack and provides a wide range of support options to customers from OS to middleware.

Working with Canonical, NTT TechnoCross will increase its presence in the cloud platform market with a combination of technical support for OSS and OpenStack, and will expand its presence in IoT/edge computing.

“In partnership with NTT TechnoCross, Canonical greatly expands its front line support services in Japan, extending our capacity for 24/7 support and fault resolution as part of our OpenStack and Ubuntu Advantage services” said Dustin Kirkland, VP of Product and Development, Canonical.

“NTT TechnoCross Corporation is one of the main NTT Group companies participating in the OpenStack community and working closely with the NTT Software Innovation Center. By partnering with Canonical and providing support for Ubuntu, which has a high market share among OSS operating systems, we will provide high quality support to customers in Japan,” said Hikaru Suzuki, General Manager, Cloud and Security Business Department, NTT TechnoCross.

Categories: Canonical

LXD Weekly Status #32

Ubuntu Insights, Cloud and Server - Mon, 29/01/2018 - 19:15

The focus of this week has been preparing for our trip to Brussels where we’ll be spending 3 days all working together on LXD before attending and presenting at FOSDEM.

@brauner is making good progress on preparing for the liblxc 3.0 release, moving all the various language bindings and tools out of the main tree and into separate repositories.

@monstermunchkin has also been busy working on our refreshed tooling for generating LXC and LXD images which will be replacing the dated template shell scripts in 3.0.

On the LXD side, @freeekanayaka is hard at work fixing a number of remaining rough edges around the clustering branch and working with @stgraber to get things going through CI and in shape so it can be merged very soon.

We’ve also been fixing a number of issues here and there to keep CI going and have been pushing a number of bugfixes and improvements to the LXD snap.

We’re all looking forward to seeing some of you in Brussels later this week!

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

  • Nothing to report
Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

  • Nothing to report
  • Include all the needed XFS tools
  • Harden snap configuration handling
  • Properly set LXD_EXEC_PATH when running with a debug binary
  • Include criu in the snap, protected by a new snap property (criu.enable)
  • Fix the snap configuration option names in info text
  • Bump Go to 1.9.3

LXD: 5 Easy Pieces

Ubuntu Insights, Cloud and Server - Fri, 26/01/2018 - 15:44

This is a guest blog by Michael Iatrou

Machine containers, such as those managed by LXD, are proliferating in datacenters: they provide a native control plane for OpenStack and a lightweight hypervisor for its tenants. LXD optimizes resource allocation and utilization for Kubernetes clusters, modernizes workload management in HPC infrastructure, and streamlines lift-and-shift for legacy applications running on virtual machines or bare metal.

On the other end of the spectrum, LXD provides a great experience running on developers’ laptops, for anything from maintaining traditional monolithic applications and spinning up lightweight disposable testing environments, to developing microservices encapsulated in process containers. If you are taking your first steps with LXD, here are 5 simple things worth knowing:

1. Install the latest LXD

Ubuntu Xenial comes with LXD installed by default, and it is fully supported for production environments for at least 5 years as part of the LTS release. In parallel, the LXD team is quickly adding new features. If you want to see the latest and greatest, make sure you use the up-to-date xenial backports:

$ sudo apt install -t xenial-backports lxd lxd-client
$ dpkg -l | grep lxd
ii  lxd         2.21-0ubuntu  amd64  Container hypervisor based on LXC
ii  lxd-client  2.21-0ubuntu  amd64  Container hypervisor based on LXC

With the packages from backports you will be able to follow closely the upstream development of LXD.

2. Use ZFS

Containers see significant benefits from a copy-on-write filesystem like ZFS. If you have a separate disk or partition, use ZFS for the LXD storage pool. Make sure you install zfsutils-linux before you initialize LXD:

$ sudo apt install zfsutils-linux
$ lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]?
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, btrfs, lvm, zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Would you like to use an existing block device (yes/no) [default=no]? yes
Path to the existing block device: /dev/vdb1

Especially for launch, snapshot, restore and delete LXD operations, ZFS as a storage pool performs much better than the alternatives.

3. Get a public cloud instance on your laptop

Whether you are using AWS, Azure or GCP, named instance types are the lingua franca for resource allocation. Do you want to get a feel for how the application you develop on your laptop would behave on a cloud instance? Quickly spin up an LXD container that matches the desired cloud instance type and try it:

$ lxc launch -t m3.large ubuntu:16.04 aws-m3large
Creating aws-m3large
Starting aws-m3large
$ lxc exec aws-m3large -- grep ^processor /proc/cpuinfo | wc -l
2
$ lxc exec aws-m3large -- free -m
              total        used        free      shared  buff/cache   available
Mem:           7680         121        7546         209          12        7546

LXD applied resource limits to the container to restrict CPU and RAM, providing an experience analogous to the requested instance type. Does it look like your application could benefit from additional CPU cores? LXD allows you to modify the available resources on the fly:

$ lxc config set aws-m3large limits.cpu 3
$ lxc exec aws-m3large -- grep ^processor /proc/cpuinfo | wc -l
3

Quickly testing and adjusting container resources helps with accurate capacity planning and avoids overprovisioning.

4. Customize containers with profiles

LXD profiles store any configuration that is associated with a container in the form of key/value pairs, or device designation. Each container can have one or more profiles applied to it. When multiple profiles are applied to a container, existing options are overridden in the order they are specified. A default profile is created automatically and it’s used implicitly for every container, unless a different one is specified.

A good example for customizing the default LXD profile is enabling public key SSH authentication. The LXD images have SSH enabled by default, but they do not have a password or a key. Let’s add one:

First, export the current profile:

$ lxc profile show default > lxd-profile-default.yaml

Modify the config mapping of the profile to include user.user-data as follows:

config:
  user.user-data: |
    #cloud-config
    ssh_authorized_keys:
      - @@SSHPUB@@
  environment.http_proxy: ""
  user.network_mode: ""
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []

Make sure that you have generated an SSH key pair for the current user, and then run:

$ sed -ri "s'@@SSHPUB@@'$(cat ~/.ssh/id_rsa.pub)'" lxd-profile-default.yaml

And finally update the default profile:

$ lxc profile edit default < lxd-profile-default.yaml

You should now be able to SSH into your LXD container, as you would do with a VM. And yes, adding SSH keys barely scratches the surface of just in time configuration possibilities using cloud-init.
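The placeholder-substitution technique above is easy to sanity-check outside the profile workflow. The sketch below runs the same sed pattern against a scratch file; the file paths and stand-in key are our own inventions for illustration, not part of the LXD workflow:

```shell
# Stand-in profile snippet containing the placeholder (scratch paths are ours)
printf 'ssh_authorized_keys:\n  - @@SSHPUB@@\n' > /tmp/profile-test.yaml
# Stand-in public key; in real use this is the contents of your ~/.ssh/*.pub
echo 'ssh-rsa AAAAB3Example user@host' > /tmp/fake-key.pub
# Same technique: using ' as the sed delimiter means the key material,
# which may contain '/' characters, needs no escaping
sed -ri "s'@@SSHPUB@@'$(cat /tmp/fake-key.pub)'" /tmp/profile-test.yaml
cat /tmp/profile-test.yaml
```

Running this prints the snippet with the placeholder replaced by the stand-in key, confirming the delimiter trick works before you touch the real profile.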

The LXD autostart behavior is another option to consider tuning in the default profile: out of the box, when the host machine boots, all the LXD containers are started. That is exactly the behavior that you want for your servers. For your laptop though, you might want to start containers only as needed. You can configure autostart directly for the default profile, without using an intermediate file for editing:

$ lxc profile set default boot.autostart 0

All existing and future containers with the default profile will be affected by this change. You can see the profiles applied to each existing container using:

$ lxc list -c ns46tSP

Familiarize yourself with the functionality profiles offer; it’s a force multiplier for day-to-day activities and invaluable for structured repeatability.

5. Access the containers without port-forwarding

LXD creates an “internal” Linux bridge during its initialization (lxd init). The bridge enables an isolated layer 2 segment for the containers and the connectivity with external networks takes place on layer 3 using NATing and port-forwarding. Such behavior facilitates isolation and minimizes external exposure -- both of them desirable characteristics. But because LXD containers offer machine-like operational semantics, for some use cases it’s appropriate to have LXD guests share the same network segment with their host.

Let’s see how this is easily configurable on a per container basis, using profiles. It’s assumed that on the host you have configured a bridge, named br0, with a port associated to an Ethernet interface.
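If you haven’t set up such a bridge yet, on Xenial it can be declared with ifupdown and the bridge-utils package; a minimal sketch, assuming eth0 is the host’s physical NIC and DHCP is available on that network:

```
# /etc/network/interfaces fragment (requires the bridge-utils package)
auto br0
iface br0 inet dhcp
    bridge_ports eth0
```

Bring it up with ifup br0 (or a reboot) before pointing container NICs at it.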

We create a new LXD profile:

$ lxc profile create bridged

And then we map the eth0 NIC of the container to the host bridge br0:

$ cat lxd-profile-bridged.yaml
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
$ lxc profile edit bridged < lxd-profile-bridged.yaml

Finally, we launch a new container with both the default and the bridged profile.

$ lxc launch ubuntu:x c1 -p default -p bridged

The container will be using DHCP to retrieve an IP address, on the same subnet as the host:

$ ip a s br0
3: br0: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether AA:BB:CC:DD:EE:FF brd ff:ff:ff:ff:ff:ff
    inet brd scope global br0
       valid_lft forever preferred_lft forever
$ lxc exec c1 -- ip a s eth0
101: eth0@if102: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:AA:BB:CC:DD:EE brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet brd scope global eth0
       valid_lft forever preferred_lft forever

Bridged networking is the simplest way to develop and test applications with LXD in more production-like network environments. Another alternative is macvlan, but be aware of its caveats, mainly host-guest communication and hardware support limitations.

These 5 easy pieces are just the tip of the iceberg of what one can do with LXD. Join the Linux Containers community to discuss your use cases and questions. Delve into the documentation for inspiration and send us PRs with your favorite enhancements.

LXD is the kind of tool that, once you get to know it, feels indispensable to use every day. Give it a try!

Want to know more?

On February 7th, technical lead Stephane Graber will be presenting a webinar for Ubuntu Product Month that will dive into how LXD works, what it does, how it can be used in the enterprise, and even provide an opportunity for Q&A.

Register For Webinar


Meltdown, Spectre and Ubuntu: What you need to know

Ubuntu Insights, Cloud and Server - Wed, 24/01/2018 - 17:45

As details of the Meltdown and Spectre vulnerabilities [1] have become clearer, a number of statements have been published by the multiple vendors affected; Canonical has issued advisories and updates on fixes and mitigations, the latest of which mitigate known Spectre attacks. However, most of these statements focus on the mechanics of applying fixes and corresponding damage control, not on explaining what the problems are, how the mitigations work, and how they may affect you.

Because the vulnerabilities and their fixes are CPU-dependent and involve a major performance-security tradeoff, understanding their general model is important to every system administrator and developer. In the spirit of Ubuntu, which is known for providing easily accessible computing infrastructure — we are Linux for Human Beings, after all — this post attempts to provide an accessible description of the impact of these vulnerabilities and their mitigations.

What are the vulnerabilities?

The essence of both vulnerabilities is that a program running on a computer can read memory it is not supposed to. This is not an arbitrary code execution issue, but rather that the CPU can be tricked by a malicious program into exposing memory that it wouldn’t otherwise have access to. Moreover, due to the way the CPU is being tricked, the exposed memory can only be retrieved relatively slowly [2]. In summary:

  1. A CPU needs to be running untrusted code in order to be attackable.
  2. The direct consequence of an attack is unrestricted, read-only access to all system memory.
  3. Memory is not exposed at a very fast rate.
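To put the third point in perspective, the read rates measured in footnote 2 imply that exfiltrating a machine's full memory takes hours, not seconds. A back-of-the-envelope sketch, assuming 8GiB of RAM and the Meltdown paper's ~500KB/s figure:

```shell
# 8 GiB expressed in KB, divided by a ~500 KB/s Meltdown read rate
seconds=$(( 8 * 1024 * 1024 / 500 ))
echo "$seconds seconds (~$(( seconds / 3600 )) hours)"
# prints: 16777 seconds (~4 hours)
```

This is why the vulnerabilities favor targeted reads of small secrets (keys, passwords) over wholesale memory dumps.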

Although an attacker can’t directly use Meltdown or Spectre to change anything on a system — for instance, changing a password or writing a file — the problem is that passwords, keys and other secrets are usually stored unencrypted in memory. Once stolen, these can then be used to escalate privileges, which can then be used to modify the system, and generally obtain sensitive information.

Proof of concept code exploiting these vulnerabilities has been published that demonstrates:

  • Reading passwords typed into web browsers, and reading images displayed on a web page from a separate, malicious application running on the same machine
  • Stealing hypervisor credentials, and reading secrets inside any hosted virtual machines by a malicious application running inside a virtualized guest operating system, such as a public cloud instance
  • Generally reading any kernel and user-space memory from userspace and VM-hosted applications, including memory belonging to other VMs and the hypervisor.

Other exploits following the patterns above will almost certainly emerge over time.

Who is most affected?

These issues affect practically every computer in server and end-user contexts, but due to the nature of the possible attack vectors, different use cases are exposed to different degrees. We have therefore provided the following risk grading:

How do the vulnerabilities work?

In order to understand what mitigations are being implemented, it’s critical to grasp what the underlying issues are, which in turn requires an understanding of operating system and processor concepts.

There are two good “kindergarten-class” analogies that may serve as useful introductions, which we have nicknamed the Helpful Grandson Analogy and the Book Voyeur Analogy. A quick read through William Cohen’s excellent post on Red Hat’s developer blog will also provide basic knowledge on CPU pipelines and cache behavior, and an optional hour-long read of Dan Luu’s branch prediction treatise will complete the necessary background for those unfamiliar with that aspect of processor design.

And that’s where our explanation starts. Summarizing the root cause:

  1. All modern CPUs pre-fetch data from system memory in advance of code being executed.
  2. Pre-fetched data is placed in internal registers and the CPU caches, which can be read much faster than system memory.
  3. Nearly all CPUs released since 1995 [3] pre-fetch data out of order, and in addition, most pre-fetch speculatively [4].
  4. Now, the fundamental issue behind the Meltdown and Spectre vulnerabilities: it is possible to trick a CPU into pre-fetching arbitrary data into internal registers and the cache, something that CPU and operating system protections would otherwise prevent.
  5. Through sophisticated techniques called cache timing side-channel attacks, the pre-fetched data can be exposed.
  6. Attacks can apply a targeted brute-force approach using these vulnerabilities to progressively read system memory regardless of any protections.

Meltdown and Spectre are caused by slightly different processor design choices, and expose system memory in different ways:

Neither Meltdown nor Spectre can be directly addressed by CPU manufacturers in existing hardware: they are a consequence of fundamental hardware design which cannot be modified in field. Mitigations involve a combination of software and CPU microcode updates to hypervisor and guests, which we will discuss next.

What mitigations are available?

Mitigations are specific to each attack, and have different performance implications. Existing mitigations are summarized in the following table:

It’s worth calling out that some of the mitigations for Spectre are not yet fully mature, which implies multiple iterations will be implemented and rolled out before the situation is fully addressed.

How will mitigations be deployed?

In order to benefit from the mitigations being provided, the following must hold true:

  1. Operating system and affected application code must be patched.
  2. For full protection against Spectre, CPU microcode or system firmware will need to be updated.
  3. The operating system’s protections must be active.
  4. In virtualized environments, all of the above must be true for both the hypervisor and the guest in order to protect against all known attacks.

Ubuntu provides security updates free of charge to all Ubuntu users. Updates will automatically install all necessary code and — where available — CPU microcode. Ubuntu will also, by default, enable all protections that are stable and safely implemented.

However, not all vulnerabilities identified have protections available covering their full extent. To further complicate matters, the protections that are available have significant performance impact for a number of workloads. The next section will discuss this in more detail.

How do the mitigations affect performance?

Performance impact from the mitigations has been a top concern, and at the highest level, the answer is that performance regressions are workload-dependent, with the slowdown varying from 0 to 50%. This makes communicating useful information somewhat challenging, so we will focus on practical advice first. The information in this section assumes all protections published by Ubuntu for Spectre and Meltdown have been enabled.

Performance degradation is highly processor-dependent: the more advanced its branch predictor, the greater the impact of the features being used to protect from attack. Conversely, Meltdown impact is reduced on platforms that have the PCID feature available, and some Spectre mitigation features are less impactful on Skylake-class hardware.
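As a practical aside, whether a given machine has PCID can be checked from the CPU flags Linux exposes; a minimal sketch (flag name as reported in /proc/cpuinfo):

```shell
# PCID shows up as a word in the "flags" line of /proc/cpuinfo when supported
if grep -qw pcid /proc/cpuinfo; then
    echo "PCID available"
else
    echo "PCID not available"
fi
```

Machines without the flag can expect the heavier end of the Meltdown mitigation cost.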

For Ubuntu Desktop users, including users running official flavors derived from Ubuntu:

For server workloads, impact is described in more detail in the following table.

Where necessary, offsetting performance impact will involve a combination of scaling out workloads, increasing compute power (by choosing a larger cloud instance type, for instance) or selectively disabling mitigations in contexts where the tradeoffs justify it. We are maintaining a Mitigation Controls page which describes the relevant knobs available on Ubuntu.
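On kernels recent enough to carry the mitigation work (4.15 and distribution backports of it), the kernel's own view of which protections are active can be read from sysfs; a minimal sketch, assuming a kernel that provides these files:

```shell
# Each file names one vulnerability and reports the mitigation state chosen
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null \
  || echo "sysfs vulnerability reporting not available on this kernel"
```

This is a quick way to confirm, per host, whether a knob you toggled actually changed the active mitigation.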

What performance data is available?

Canonical are in the process of finalizing a set of performance runs across private and cloud environments and applications. We aim to have our performance findings, based on our internal experience applying mitigations, published by February 12th. At a high level, with the currently proposed Meltdown and Spectre mitigations for Ubuntu, on our pre-Skylake build farm, we have calculated that, on average:

  • Kernel build times have increased by 50%; prior to the mitigations build times averaged at around 3.5 hours, whereas they are now averaging 5 hours.
  • Package build times, across the board, have increased by 30%.

In the meantime, we’re providing a Published Application Performance summary page which collects per-application performance descriptions published by third parties. That page will be updated as new information gets published and can assist administrators in evaluating trade-offs in risk and performance to determine their own mitigation plans.

The main issue with obtaining performance numbers generally has been the immature nature of the mitigations, made worse by an evolving understanding of which mitigation strategies should be active in what contexts — kernel/userspace, hypervisor/guest, and specific code paths. For instance, over the course of the past 15 days, we have noticed significant changes in the way public clouds have deployed mitigations. We are maintaining a Public Cloud Status page which summarizes up to date information as we collect it.

Further Reading

We have built this document with the intention of putting forward a practical framework to support decision making in this unusual situation. The evolving nature of the industry’s collective understanding and mitigation of the vulnerabilities has led to an excess of public information, in part incomplete and in part contradictory, and we have selected here a set of links that are coherent, well-written and expand detail on what we have presented above:

We will issue updates to this post and additional information as the situation evolves. We encourage Ubuntu users who seek more information to contact an Ubuntu Advantage support representative for an in-depth discussion relative to your use cases.

  1. These vulnerabilities are tracked as CVE-2017-5754, CVE-2017-5753 and CVE-2017-5715.
  2. These are specific to CPU and method of attack, but the Meltdown paper measured 500KB/s reads, and the Spectre paper measured 10KB/s read. Two independent runs (1, 2) of a simple Spectre PoC on Intel Core i5 based laptops averaged 8.5KB/s read.
  3. The Pentium Pro (1995) is probably the first CPU affected by Meltdown, which doesn’t strictly require speculative execution, just pipelining and probably out of order execution. Spectre requires speculative execution.
  4. “Prefetch speculatively” means that, when a CPU sees an if clause (a condition) or a for/while block (a loop), it will try and guess what will happen, and in the process fetch data it speculates it will need.
  5. It is likely that every Intel CPU since the Pentium Pro is affected. Atom-based processors released prior to 2013, Quark and Itanium are in-order and are not affected.
  6. Technically, only CPUs with speculative execution are affected, but it is practical to assume that every CPU used in server, desktop and mobile environments is affected by this issue, as the non-exhaustive list of affected vendors include AMD, Apple, ARM, IBM, Intel, Marvell, Nvidia, Qualcomm and Samsung.
  7. Meltdown cannot break out from a non-PV VM due to the fact that hypervisor memory is generally not mapped when executing in guest context. However, the hypervisor has all memory mapped.
  8. PCID is a feature present in many post-2010 CPUs which has been enabled in the 4.14 kernel and included with the KPTI patchset Ubuntu backported; see this forum post by Gil Tene for more details.
  9. This paste outlines a simple, but very repeatable demonstration of a 50% slowdown by enabling just the Spectre mitigations.
  10. “HTTP application servers” and “Load balancer impact” assume userspace implementations.

Mobile World Congress 2018

Ubuntu Insights, Cloud and Server - Wed, 24/01/2018 - 17:14
Why wait to build tomorrow’s cloud to edge infrastructure? Get your infrastructure 5G ready… now!

AI, robotics, self-driving cars, Industrie 4.0… the applications for 5G are just around the corner. Be prepared for 5G by getting your network ready with cloud. Canonical offers agile telco infrastructure based on OpenStack, Kubernetes and Ubuntu, allowing you to run NFV today and be ready for the workloads of the future.

Become operational in weeks!

Stop the analysis paralysis. Let Canonical build you a proven Telco cloud that can be delivered within weeks and operated with our Managed Service. You can reduce your time to market and TCO while getting an open, vendor-neutral cloud.

Run today’s NFVI and tomorrow’s applications

Today Canonical operates telco networks worldwide in partnership with leading hardware and NFV vendors. We’re also collaborating with institutions worldwide to build tomorrow’s applications on Ubuntu (blockchain, machine learning, robotics, autonomous vehicles) to make sure that your infrastructure will be future-proof.

Want a 5G head start? Book a meeting with our executive team.


Kernel Team Summary: January 24, 2018

Ubuntu Insights, Cloud and Server - Wed, 24/01/2018 - 16:31
January 09 through January 23

The Kernel Team is completely focused on addressing any Spectre and Meltdown issues as they arise. A secure Ubuntu is our top priority. No new Livepatches are being produced and our regular SRU cycles are suspended while we address Spectre and Meltdown.

Spectre mitigation kernels are available. The kernels in the following post have been promoted to the updates repository:

The most up to date information available regarding Meltdown and Spectre is being published to:

If you would like to reach the kernel team, you can find us at the #ubuntu-kernel channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing list at:


LXD Weekly Status #31

Ubuntu Insights, Cloud and Server - Wed, 24/01/2018 - 16:23


Nothing too major happened this past week. Part of the time was spent in an internal planning meeting and the rest on clustering, preparation for 3.0 and fixing a variety of bugs.

Next week the entire LXD team will be traveling to Brussels to attend a small team sprint followed by FOSDEM! If you’re in town at some point between the 31st of January and 4th of February, let us know and we can meet!

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

  • Nothing to report
Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

  • Nothing to report
  • New builds have been published following the build farm being brought back online.
  • Pushed more fixes related to xfs handling.

Ubuntu Server development summary – 23 January 2018

Ubuntu Insights, Cloud and Server - Tue, 23/01/2018 - 18:40

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

  • MAAS datasource will avoid re-crawling MAAS metadata across reboots when oauth credentials haven't changed.
  • Do not log warning on config files that represent None (LP: 1742479)
  • integration tests: pull in pylxd via git hash due to infrequent formal releases
  • SRU version 17.1
  • vmtests: switch to MAAS v3 streams for images and kernels
  • vmtests: initialize logger with class names for easy parsing
  • Standardize all license headers and file footers
Bug Work and Triage

Contact the Ubuntu Server team

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 8

Uploads released to the Supported Releases

Total: 17

Uploads to the Development Release

Total: 4


Spectre Mitigation Updates Available for Testing in Ubuntu Proposed

Ubuntu Insights, Cloud and Server - Wed, 17/01/2018 - 12:50

Canonical holds Ubuntu to the highest standards of security and quality. This week we published candidate Ubuntu kernels providing mitigation for CVE-2017-5715 and CVE-2017-5753 (i.e., Spectre, Variants 1 & 2) to their respective -proposed pockets for Ubuntu 17.10 (Artful), 16.04 LTS (Xenial), and 14.04 LTS (Trusty). We have also expanded mitigation to cover s390x and ppc64el.

You are invited to test and provide feedback for the following updated Linux kernels.  We have also rebased all derivative kernels such as the public cloud kernels (Amazon, Google, Microsoft, etc) and the Hardware Enablement (HWE) kernels.

Updates for Ubuntu 12.04 ESM are in progress, and will be available for Canonical’s Ubuntu Advantage customers.  UA customers should reach out to Canonical support for access to candidate kernels.

We intend to promote the candidate kernels to the -security/-updates pocket for General Availability (GA) on Monday, January 22, 2018.

There is a corresponding intel-microcode update for many Intel CPUs, as well as an eventual amd64-microcode update, that will also need to be applied in order to fully mitigate Spectre.  In the interest of full disclosure, we understand from Intel that there are currently known issues with the intel-microcode binary:

Canonical QA and Hardware Certification teams are engaged in extensive automated and manual testing of these kernels and the Intel microcode updates on Ubuntu certified hardware and Ubuntu certified public clouds. The primary focus is on regression testing and security effectiveness. We are also actively investigating Google’s “Retpoline” toolchain-based approach, which requires rebuilding Ubuntu binaries but reduces the performance impact of the mitigation.

For your reference, the following links explain how to enable Ubuntu’s Proposed repositories, and how to file Linux kernel bugs:

The most current information will continue to be available at:



LXD Weekly Status #30

Ubuntu Insights, Cloud and Server - Tue, 16/01/2018 - 22:28

The main highlight for this week was the inclusion of the new proxy device in LXD, thanks to the hard work of some University of Texas students!

The rest of the time was spent fixing a number of bugs, working on various bits of kernel work, getting the upcoming clustering work to go through our CI process and preparing for a number of planning meetings that are going on this week.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

  • Nothing to report
Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

  • Nothing to report (build farm was offline)
  • Nothing to report (build farm was offline)
Categories: Canonical

Introduction To Juju: Automating Cloud Operations

Ubuntu Insights, Cloud and Server - Tue, 16/01/2018 - 18:54

Speaker: Tim Penhey
Date/Time: February 28, 2018 at 12PM EST / 5PM GMT

Modern software is getting complicated, and we’re simply not able to hire or train people fast enough to operate it due to the complexity of micro-services running across many machines. Juju from Canonical allows you to deploy open source operations code and model-driven operations at any scale on any cloud. Join the Juju team to find out:

  • The basics of Juju from Charms to Bundles
  • How to easily deploy solutions and scale up from one simple dashboard
  • How Juju can be used for deep learning, container orchestration, real-time big data, or stream processing
  • How you can use Juju to simplify your operations!

Register For Webinar

Categories: Canonical

From VMWare To Canonical OpenStack

Ubuntu Insights, Cloud and Server - Tue, 16/01/2018 - 18:52

Speaker: Stephan Fabel, Arturo Suarez
Date/Time: February 21, 2018 at 12PM EST / 5PM GMT

OpenStack has often been positioned as an alternative to traditional proprietary virtualization environments. Join Arturo Suarez and Stephan Fabel for:

  • A breakdown of the differences between OpenStack and traditional proprietary virtualization, plus guidance on when it makes sense to use one or the other
  • Analysis on TCO, best practices and risks
  • A demo on how to migrate from a traditional proprietary environment into an OpenStack cloud

Register For Webinar

Categories: Canonical

A Technical Look At Snaps

Ubuntu Insights, Cloud and Server - Tue, 16/01/2018 - 18:48

Speaker: Evan Dandrea
Date/Time: February 15, 2018 at 12PM EST / 5PM GMT

Want to know even more about Snaps? In our second webinar on the topic we’ll be taking a technical look at Snaps themselves, our writing and publishing tool Snapcraft, how you can convert your existing applications, and the insight you’ll need to start building your first snaps! Ready to start building? You won’t want to miss this one.

This webinar is Part 2 in our series on Snaps during Ubuntu Product Month!

Register For Webinar

Categories: Canonical

An Introduction To Snaps

Ubuntu Insights, Cloud and Server - Tue, 16/01/2018 - 18:43

Speaker: Evan Dandrea
Date/Time: February 13, 2018 at 12PM EST / 5PM GMT

What if you could package, distribute, and update any application for Linux Desktops, Servers, Clouds, and IoT devices? Snaps are containerised software packages that are simple to create and install, safe to run, and can work on all major Linux systems without modification. Whether you’re a developer, desktop user, or even a device manufacturer, you won’t want to miss this!

Want to know even more? We’ll be hosting a second webinar that goes deeper into the technology behind snaps on February 15th! Find out about it here

Register For Webinar

Categories: Canonical

Introduction To LXD: The Pure-container Hypervisor

Ubuntu Insights, Cloud and Server - Tue, 16/01/2018 - 18:20

Speaker: Stephane Graber
Date/Time: February 7, 2018 at 12PM EST / 5PM GMT

What if you could move your Linux Virtual Machines straight to containers, easily, without modifying the apps or administration processes? LXD from Canonical is a pure-container hypervisor that takes the speed and latency of containers and brings them to the hypervisor world. LXD runs machine containers, meaning they behave just like traditional physical and virtual machines. Join our webinar on February 7th with Technical Lead Stephane Graber to learn…

  • The difference between application & machine containers
  • How pure-container hypervisors such as LXD can reduce overhead
  • How to use LXD in practice from deployment to operations

Register For Webinar

Categories: Canonical

Monitor your Kubernetes Cluster

Ubuntu Insights, Cloud and Server - Tue, 16/01/2018 - 14:56

This article originally appeared on Kevin Monroe’s blog

Keeping an eye on logs and metrics is a necessary evil for cluster admins. The benefits are clear: metrics help you set reasonable performance goals, while log analysis can uncover issues that impact your workloads. The hard part, however, is getting a slew of applications to work together in a useful monitoring solution.

In this post, I’ll cover monitoring a Kubernetes cluster with Graylog (for logging) and Prometheus (for metrics). Of course, that’s not just a matter of wiring 3 things together. In fact, it’ll end up looking like this:

As you know, Kubernetes isn’t just one thing — it’s a system of masters, workers, networking bits, etc(d). Similarly, Graylog comes with a supporting cast (apache2, mongodb, etc), as does Prometheus (telegraf, grafana, etc). Connecting the dots in a deployment like this may seem daunting, but the right tools can make all the difference.

I’ll walk through this using conjure-up and the Canonical Distribution of Kubernetes (CDK). I find the conjure-up interface really helpful for deploying big software, but I know some of you hate GUIs and TUIs and probably other UIs too. For those folks, I’ll do the same deployment again from the command line.

Before we jump in, note that Graylog and Prometheus will be deployed alongside Kubernetes and not in the cluster itself. Things like the Kubernetes Dashboard and Heapster are excellent sources of information from within a running cluster, but my objective is to provide a mechanism for log/metric analysis whether the cluster is running or not.

The Walk Through

First things first, install conjure-up if you don’t already have it. On Linux, that’s simply:

sudo snap install conjure-up --classic

There’s also a brew package for macOS users:

brew install conjure-up

You’ll need at least version 2.5.2 to take advantage of the recent CDK spell additions, so be sure to sudo snap refresh conjure-up or brew update && brew upgrade conjure-up if you have an older version installed.
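If you’re not sure whether your installed snap meets that minimum, a quick check like the following can tell you (a sketch of my own; the assumption here is that `snap list` prints the version in the second column of its second output line):

```shell
# Sketch: verify the installed conjure-up snap meets the 2.5.2 minimum.
required="2.5.2"
installed="$(snap list conjure-up 2>/dev/null | awk 'NR==2 {print $2}')"

# sort -V orders version strings numerically; if the required version
# sorts first (or equal), the installed one is new enough.
lowest="$(printf '%s\n%s\n' "$required" "${installed:-0}" | sort -V | head -n1)"
if [ "$lowest" = "$required" ] && [ -n "$installed" ]; then
    echo "conjure-up $installed is new enough"
else
    echo "refresh needed (have: ${installed:-none}, need: $required)"
fi
```

The same `sort -V` comparison works for the brew-installed version too.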

Once installed, run it:

conjure-up

You’ll be presented with a list of various spells. Select CDK and press Enter.

At this point, you’ll see additional components that are available for the CDK spell. We’re interested in Graylog and Prometheus, so check both of those and hit Continue.

You’ll be guided through various cloud choices to determine where you want your cluster to live. After that, you’ll see options for post-deployment steps, followed by a review screen that lets you see what is about to be deployed:

In addition to the typical K8s-related applications (etcd, flannel, load-balancer, master, and workers), you’ll see additional applications related to our logging and metric selections.

The Graylog stack includes the following:

  • apache2: reverse proxy for the graylog web interface
  • elasticsearch: document database for the logs
  • filebeat: forwards logs from K8s master/workers to graylog
  • graylog: provides an api for log collection and an interface for analysis
  • mongodb: database for graylog metadata

The Prometheus stack includes the following:

  • grafana: web interface for metric-related dashboards
  • prometheus: metric collector and time series database
  • telegraf: sends host metrics to prometheus

You can fine-tune the deployment from this review screen, but the defaults will suit our needs. Click Deploy all Remaining Applications to get things going.

The deployment will take a few minutes to settle as machines are brought online and applications are configured in your cloud. Once complete, conjure-up will show a summary screen that includes links to various interesting endpoints for you to browse:

Exploring Logs

Now that Graylog has been deployed and configured, let’s take a look at some of the data we’re gathering. By default, the filebeat application will send both syslog and container log events to graylog (that’s /var/log/*.log and /var/log/containers/*.log from the kubernetes master and workers).

Grab the apache2 address and graylog admin password as follows:

juju status --format yaml apache2/0 | grep public-address
public-address: <your-apache2-ip>

juju run-action --wait graylog/0 show-admin-password
admin-password: <your-graylog-password>

Browse to http://<your-apache2-ip> and login with admin as the username and <your-graylog-password> as the password. Note: if the interface is not immediately available, please wait as the reverse proxy configuration may take up to 5 minutes to complete.
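If you’d rather script that wait than keep refreshing the browser, a small helper along these lines (my own sketch, not part of the deployment) polls the proxy until it answers:

```shell
# Sketch: poll a URL until it responds, since the reverse proxy
# configuration can take up to ~5 minutes to complete.
wait_for_url() {
    url="$1"
    tries="${2:-30}"   # number of attempts
    delay="${3:-10}"   # seconds between attempts
    i=0
    while [ "$i" -lt "$tries" ]; do
        # -f: fail on HTTP errors; -sS: quiet, but keep real errors
        if curl -fsS -o /dev/null --max-time 5 "$url"; then
            echo "up: $url"
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    echo "gave up on: $url"
    return 1
}

# usage: wait_for_url "http://<your-apache2-ip>"
```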

Once logged in, head to the Sources tab to get an overview of the logs collected from our K8s master and workers:

Drill into those logs by clicking the System / Inputs tab and selecting Show received messages for the filebeat input:

From here, you may want to play around with various filters or set up Graylog dashboards to help identify the events that are most important to you. Check out the Graylog Dashboard docs for details on customizing your view.

Exploring Metrics

Our deployment exposes two types of metrics through our grafana dashboards: system metrics include things like cpu/memory/disk utilization for the K8s master and worker machines, and cluster metrics include container-level data scraped from the K8s cAdvisor endpoints.

Grab the grafana address and admin password as follows:

juju status --format yaml grafana/0 | grep public-address
public-address: <your-grafana-ip>

juju run-action --wait grafana/0 get-admin-password
password: <your-grafana-password>

Browse to http://<your-grafana-ip>:3000 and login with admin as the username and <your-grafana-password> as the password. Once logged in, check out the cluster metric dashboard by clicking the Home drop-down box and selecting Kubernetes Metrics (via Prometheus):

We can also check out the system metrics of our K8s host machines by switching the drop-down box to Node Metrics (via Telegraf)

The Other Way

As alluded to in the intro, I prefer the wizard-y feel of conjure-up to guide me through complex software deployments like Kubernetes. Now that we’ve seen the conjure-up way, some of you may want to see a command line approach to achieve the same results. Still others may have deployed CDK previously and want to extend it with the Graylog/Prometheus components described above. Regardless of why you’ve read this far, I’ve got you covered.

The tool that underpins conjure-up is Juju. Everything that the CDK spell did behind the scenes can be done on the command line with Juju. Let’s step through how that works.

Starting From Scratch

If you’re on Linux, install Juju like this:

sudo snap install juju --classic

For macOS, Juju is available from brew:

brew install juju

Now setup a controller for your preferred cloud. You may be prompted for any required cloud credentials:

juju bootstrap

We then need to deploy the base CDK bundle:

juju deploy canonical-kubernetes
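The bundle takes a while to settle. A simple loop like this (my own sketch, not from the original post; it assumes a configured juju client) waits until juju stops reporting busy units:

```shell
# Sketch: wait for the deployment to settle by polling `juju status`
# until no unit reports a busy state.
while juju status 2>/dev/null | grep -Eq 'waiting|allocating|maintenance'; do
    echo "cluster still settling..."
    sleep 30
done
echo "no busy units reported"
```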

Starting From CDK

With our Kubernetes cluster deployed, we need to add all the applications required for Graylog and Prometheus:

## deploy graylog-related applications
juju deploy xenial/apache2
juju deploy xenial/elasticsearch
juju deploy xenial/filebeat
juju deploy xenial/graylog
juju deploy xenial/mongodb

## deploy prometheus-related applications
juju deploy xenial/grafana
juju deploy xenial/prometheus
juju deploy xenial/telegraf

Now that the software is deployed, connect them together so they can communicate:

## relate graylog applications
juju relate apache2:reverseproxy graylog:website
juju relate graylog:elasticsearch elasticsearch:client
juju relate graylog:mongodb mongodb:database
juju relate filebeat:beats-host kubernetes-master:juju-info
juju relate filebeat:beats-host kubernetes-worker:juju-info

## relate prometheus applications
juju relate prometheus:grafana-source grafana:grafana-source
juju relate telegraf:prometheus-client prometheus:target
juju relate kubernetes-master:juju-info telegraf:juju-info
juju relate kubernetes-worker:juju-info telegraf:juju-info

At this point, all the applications can communicate with each other, but we have a bit more configuration to do (e.g., setting up the apache2 reverse proxy, telling prometheus how to scrape k8s, importing our grafana dashboards, etc):

## configure graylog applications
juju config apache2 enable_modules="headers proxy_html proxy_http"
juju config apache2 vhost_http_template="$(base64 <vhost-tmpl>)"
juju config elasticsearch firewall_enabled="false"
juju config filebeat logpath="/var/log/*.log /var/log/containers/*.log"
juju config filebeat logstash_hosts="<graylog-ip>:5044"
juju config graylog elasticsearch_cluster_name="<es-cluster>"

## configure prometheus applications
juju config prometheus scrape-jobs="<scraper-yaml>"
juju run-action --wait grafana/0 import-dashboard dashboard="$(base64 <dashboard-json>)"

Some of the above steps need values specific to your deployment. You can get these in the same way that conjure-up does:

  • <vhost-tmpl>: fetch our sample template from github
  • <graylog-ip>: juju run --unit graylog/0 'unit-get private-address'
  • <es-cluster>: juju config elasticsearch cluster-name
  • <scraper-yaml>: fetch our sample scraper from github; substitute appropriate values for K8S_PASSWORD and K8S_API_ENDPOINT
  • <dashboard-json>: fetch our host and k8s dashboards from github
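Pulled together, fetching those values and feeding them back into `juju config` might look like the following sketch (the file names are placeholders for whatever you fetched from github, and it assumes a working juju client pointed at this deployment):

```shell
# Sketch: gather deployment-specific values and apply them.
# File names below are placeholders, not real paths from the post.
if command -v juju >/dev/null 2>&1; then
    GRAYLOG_IP="$(juju run --unit graylog/0 'unit-get private-address')"
    ES_CLUSTER="$(juju config elasticsearch cluster-name)"

    juju config filebeat logstash_hosts="${GRAYLOG_IP}:5044"
    juju config graylog elasticsearch_cluster_name="${ES_CLUSTER}"

    # templates and dashboards are passed base64-encoded
    juju config apache2 vhost_http_template="$(base64 vhost-tmpl.conf)"
    juju run-action --wait grafana/0 import-dashboard \
        dashboard="$(base64 dashboard.json)"
fi
```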

Finally, you’ll want to expose the apache2 and grafana applications to make their web interfaces accessible:

## expose relevant endpoints
juju expose apache2
juju expose grafana

Now that we have everything deployed, related, configured, and exposed, you can login and poke around using the same steps from the Exploring Logs and Exploring Metrics sections above.

The Wrap Up

My goal here was to show you how to deploy a Kubernetes cluster with rich monitoring capabilities for logs and metrics. Whether you prefer a guided approach or command line steps, I hope it’s clear that monitoring complex deployments doesn’t have to be a pipe dream. The trick is to figure out how all the moving parts work, make them work together repeatably, and then break/fix/repeat for a while until everyone can use it.

This is where tools like conjure-up and Juju really shine. Leveraging the expertise of contributors to this ecosystem makes it easy to manage big software. Start with a solid set of apps, customize as needed, and get back to work!

Give these bits a try and let me know how it goes. You can find enthusiasts like me on Freenode IRC in #conjure-up and #juju. Thanks for reading!


Categories: Canonical

Meltdown and Spectre Status Update

Ubuntu Insights, Cloud and Server - Fri, 12/01/2018 - 00:31

On Tuesday, January 9, 2018 we released Ubuntu kernel updates for mitigation of CVE-2017-5754 (aka Meltdown / Variant 3) for the x86-64 architecture. Releases were made for the following supported Ubuntu series:

  • 12.04 ESM Precise (kernel v3.2)
  • 14.04 LTS Trusty (kernel v3.13)
  • 16.04 LTS Xenial (kernel v4.4)
  • 17.10 Artful (kernel v4.13)

Optimized kernels based on any of the above series were also released, including linux-aws, linux-azure, linux-gcp, and hardware enablement kernels. Updated cloud images have also been built and published to ensure a consistent Ubuntu experience. In our testing of the released Meltdown mitigations, we are observing that reductions in performance vary depending on the workload.
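After rebooting into an updated kernel, one way to spot-check the Meltdown mitigation (a heuristic of my own, not an official Canonical check) is to look for the page-table isolation message that KPTI-patched kernels log at boot:

```shell
# Sketch: look for the KPTI boot message on a patched kernel.
# (dmesg may require root; an empty result is inconclusive.)
if dmesg 2>/dev/null | grep -qi 'page table isolation'; then
    echo "KPTI appears to be enabled"
else
    echo "no KPTI message found -- kernel may predate the update"
fi
uname -r    # compare against the kernel series listed above
```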

Ubuntu 17.04 (Zesty) will reach end of life on Saturday, January 13, 2018. As such, there will be no updates to kernel v4.10 to mitigate Meltdown or Spectre, and users of 17.04 will need to upgrade. As Precise 12.04 LTS has reached end-of-life, only Ubuntu Advantage customers with Extended Security Maintenance for Precise 12.04 will receive updated kernels.

Our focus has now shifted to the mitigation of CVE-2017-5753 and CVE-2017-5715 (aka Spectre / Variants 1 & 2). Microcode has been released for Intel processors (see USN-3531-1). Kernel updates will begin with releasing v4.13 for Artful 17.10 on Monday, January 15, 2018, with 16.04 to follow shortly.

In addition to releasing fixes for Spectre we will be expanding the Meltdown mitigation to other supported architectures.

The industry response to this unprecedented security vulnerability continues to evolve on a daily basis. The Ubuntu Engineering team is committed to delivering high-quality, proven fixes for these issues as they become available to ensure the Ubuntu experience remains as secure and consistent as possible.

Categories: Canonical

Ubuntu Server Development Summary – 09 Jan 2018

Ubuntu Insights, Cloud and Server - Tue, 09/01/2018 - 21:03

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: cloud-init ec2 testing

The cloud-init team has extended their integration tests to allow coverage of the ec2 datasource. This is accomplished via the use of the boto3 python library. Weekly jobs run the tests against supported releases of Ubuntu as well as for proposed testing.

  • Fix cloud-init clean subcommand to unlink symlinks instead of calling del_dir (LP: #1741093)
  • Fixed traceback when attempting to bounce the network after hostname resets on Azure (LP: #1722668)
  • Moved Launchpad repo from bzr to git
Bug Work and Triage

Contact the Ubuntu Server team

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 7

cloud-initramfs-tools, xenial, 0.27ubuntu1.5, smoser
cloud-initramfs-tools, zesty, 0.35ubuntu2.1, smoser
cloud-initramfs-tools, artful, 0.39ubuntu1.1, smoser
sosreport, trusty, 3.5-1~ubuntu14.04.1, slashd
sosreport, xenial, 3.5-1~ubuntu16.04.1, slashd
sosreport, zesty, 3.5-1~ubuntu17.04.1, slashd
sosreport, artful, 3.5-1~ubuntu17.10.1, slashd

Uploads released to the Supported Releases

Total: 21

awstats, zesty, 7.6+dfsg-1ubuntu0.17.04.1, mdeslaur
awstats, artful, 7.6+dfsg-1ubuntu0.17.10.1, mdeslaur
awstats, xenial, 7.4+dfsg-1ubuntu0.2, mdeslaur
awstats, trusty, 7.2+dfsg-1ubuntu0.1, mdeslaur
corosync, trusty, 2.3.3-1ubuntu4, vtapia
corosync, xenial, 2.3.5-3ubuntu2, vtapia
libseccomp, xenial, 2.3.1-2.1ubuntu2~16.04.1, xnox
ntp, artful, 1:4.2.8p10+dfsg-5ubuntu3.1, paelzer
ruby1.9.1, trusty,, leosilvab
ruby2.3, artful, 2.3.3-1ubuntu1.1, leosilvab
ruby2.3, zesty, 2.3.3-1ubuntu0.3, leosilvab
ruby2.3, xenial, 2.3.1-2~16.04.4, leosilvab
slof, zesty, 20161019+dfsg-1ubuntu0.1, paelzer
slof, xenial, 20151103+dfsg-1ubuntu1.1, paelzer
slof, artful, 20170724+dfsg-1ubuntu0.1, paelzer
strongswan, xenial, 5.3.5-1ubuntu3.5, paelzer
strongswan, zesty, 5.5.1-1ubuntu3.3, paelzer
strongswan, artful, 5.5.1-4ubuntu2.2, paelzer
tomcat7, trusty, 7.0.52-1ubuntu0.13, mdeslaur
tomcat8, zesty, 8.0.38-2ubuntu2.2, mdeslaur
tomcat8, xenial, 8.0.32-1ubuntu1.5, mdeslaur

Uploads to the Development Release

Total: 10

awstats, 7.6+dfsg-1ubuntu2, ahasenack
awstats, 7.6+dfsg-1ubuntu1, mdeslaur
cloud-utils, 0.30-0ubuntu3, smoser
dpdk, 17.11-3, pkg-dpdk-devel
heartbeat, 1:3.0.6-7, debian-ha-maintainers
libpcap, 1.8.1-6ubuntu1, costamagnagianfranco
ntp, 1:4.2.8p10+dfsg-5ubuntu5, paelzer
ocfs2-tools, 1.8.5-3ubuntu1, ahasenack
python-django, 1:1.11.9-1ubuntu1, vorlon
sysstat, 11.6.1-1, robert-debian

Categories: Canonical

LXD Weekly Status #29

Ubuntu Insights, Cloud and Server - Mon, 08/01/2018 - 23:20


And we’re back from the holidays!
This “weekly” summary is covering everything that happened the past 3 weeks.

The big highlight was the release of LXD 2.21 on the 19th of December.

During the holidays, we merged quite a number of bugfixes and smaller features in LXC and LXD with the bigger feature development only resuming now.

The end of the year was also the deadline for our users to migrate off of the LXD PPAs.
Those have now been fully deleted, and users looking for newer builds of LXD should use the official backport packages or the LXD snap.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

  • Nothing to report
Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

  • Uploaded LXD 2.21 to Ubuntu 18.04.
  • Backported LXD 2.21 to Ubuntu 16.04, 17.04 and 17.10.
  • Uploaded some bugfixes on top of LXD 2.21 to Ubuntu 18.04 and backported to 16.04, 17.04 and 17.10.
  • Updated to LXD 2.21
  • Fixed a bug related to LD_LIBRARY_PATH handling on Debian
  • Cherry-picked a number of upstream bugfixes
Categories: Canonical

Dustin Kirkland: Ubuntu Updates for the Meltdown / Spectre Vulnerabilities

Ubuntu Insights, Cloud and Server - Thu, 04/01/2018 - 21:41
For up-to-date patch, package, and USN links, please refer to:

Unfortunately, you’ve probably already read about one of the most widespread security issues in modern computing history -- colloquially known as “Meltdown” (CVE-2017-5754) and “Spectre” (CVE-2017-5753 and CVE-2017-5715) -- affecting practically every computer built in the last 10 years, running any operating system. That includes Ubuntu.

I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry-standard best practice, which has broken down in this case.

At its heart, this vulnerability is a CPU hardware architecture design issue. But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable. As a result, operating system kernels -- Windows, MacOS, Linux, and many others -- are being patched to mitigate the critical security vulnerability.

Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Years holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures.

Ubuntu users of the 64-bit x86 architecture (aka amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible. Updates will be available for:
  • Ubuntu 17.10 (Artful) -- Linux 4.13 HWE
  • Ubuntu 16.04 LTS (Xenial) -- Linux 4.4 (and 4.4 HWE)
  • Ubuntu 14.04 LTS (Trusty) -- Linux 3.13
  • Ubuntu 12.04 ESM** (Precise) -- Linux 3.2
    • Note that an Ubuntu Advantage license is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life
Ubuntu 18.04 LTS (Bionic) will release in April of 2018 and will ship a 4.15 kernel, which includes the KPTI patchset as integrated upstream.

Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonical's Certified Public Clouds including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data. Important note: there are several other public clouds not listed here, which modify the Ubuntu image and/or Linux kernel, and your Ubuntu security experience there is compromised.

These kernel fixes will not be Livepatch-able. The source code changes required to address this problem comprise hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. An update and a reboot will be required to activate this update.

Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC, and QEMU, in the coming days.

Thanks,
@DustinKirkland
VP of Product
Canonical / Ubuntu
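Since CPU microcode updates are among those coming packages, it can be useful to note your current microcode revision now so you can confirm the update landed later. This check is my own suggestion (Linux on x86 only), not from the advisory:

```shell
# Sketch: print the current CPU microcode revision (Linux x86 only);
# compare this value before and after installing the microcode update.
grep -m1 '^microcode' /proc/cpuinfo \
    || echo "no microcode field (non-x86 CPU or non-Linux system)"
```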
Categories: Canonical


Subscribe to Stack Evolution aggregator - Canonical