
Security Team Weekly Summary: December 7, 2017

Ubuntu Insights, Cloud and Server - Thu, 07/12/2017 - 15:11

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

Due to the holiday last week, there was no weekly report, so this report covers the previous two weeks. During the last two weeks, the Ubuntu Security team:

  • Triaged 379 public security vulnerability reports, retaining the 74 that applied to Ubuntu.
  • Published 32 Ubuntu Security Notices which fixed 70 security issues (CVEs) across 34 supported packages.
Ubuntu Security Notices

 

Bug Triage

 

Mainline Inclusion Requests

 

Development

 

  • add max compressed size check to the review tools
  • adjust review-tools runtime errors output for store (final)
  • adjust review-tools for redflagged base snap overrides
  • adjust review-tools for resquashing with fakeroot
  • upload a couple of bad snaps to test r945 of the review tools in the store. The store is correctly not auto-approving, but is also not handling them right. Filed LP: #1733699
  • investigate SNAPCRAFT_BUILD_INFO=1 with snapcraft cleanbuild and attempt rebuilds
  • respond to feedback in PR 4245, close and resubmit as PR 4255 (interfaces/screen-inhibit-control: fix case in screen inhibit control)
  • investigate reported godot issue. Send up PR 4257 (interfaces/opengl: also allow ‘revision’ on /sys/devices/pci…)
  • investigation of potential biometrics-observe interface
  • snapd reviews
    • PR 4258: fix unmounting on systems without rshared
    • PR 4170: cmd/snap-update-ns: add planWritableMimic
    • PR 4306 (use #include instead of bare ‘include’)
    • PR 4224 – cmd/snap-update-ns: teach update logic to handle synthetic changes
    • PR 4312 – create mount target for lib32, vulkan on demand
    • PR 4323 – interfaces: add gpio-memory-control interface
    • PR 4325 (add test for netlink-connector interface) and investigate NETLINK_CONNECTOR denials
    • review design of PR 4329 – discard stale mountspaces (v2)
  • finalized squashfs fix for LP: #1555305 and submitted it upstream (https://sourceforge.net/p/squashfs/mailman/message/36140758/)

  • investigation into users’ 16.04 AppArmor issues with tomcat
What the Security Team is Reading This Week

 

Weekly Meeting

 

More Info

 


Kernel Team Summary – December 6, 2017

Ubuntu Insights, Cloud and Server - Wed, 06/12/2017 - 20:14
November 21 through December 04 Development (18.04)

Every 6 months the Ubuntu Kernel Team is tasked to pick the kernel to be used in the next release. This is a difficult thing to do because we don’t definitively know what will be going into the upstream kernel over the next 6 months nor the quality of that kernel. We look at the Ubuntu release schedule and how that will line up with the upstream kernel releases. We talk to hardware vendors about when they will be landing their changes upstream and what they would prefer as the Ubuntu kernel version. We talk to major cloud vendors and ask them what they would like. We speak to large consumers of Ubuntu to solicit their opinion. We look at what will be the next upstream stable kernel. We get input from members of the Canonical product strategy team. Taking all of that into account we are tentatively planning to converge on 4.15 for the Bionic Beaver 18.04 LTS release.

On the road to 18.04 we have a 4.14 based kernel in the Bionic -proposed repository.

Stable (Released & Supported)
  • The kernels for the current SRU cycle are being respun to include fixes for CVE-2017-16939 and CVE-2017-1000405.

  • Kernel versions in -proposed:

    trusty                   3.13.0-137.186
    trusty/linux-lts-xenial  4.4.0-103.126~14.04.1
    xenial                   4.4.0-103.126
    xenial/linux-hwe         4.10.0-42.46~16.04.1
    xenial/linux-hwe-edge    4.13.0-19.22~16.04.1
    zesty                    4.10.0-42.46
    artful                   4.13.0-19.22
  • Current cycle: 17-Nov through 09-Dec

    17-Nov           Last day for kernel commits for this cycle.
    20-Nov - 25-Nov  Kernel prep week.
    26-Nov - 08-Dec  Bug verification & Regression testing.
    11-Dec           Release to -updates.
  • Next cycle: 08-Dec through 30-Dec (this cycle will only contain CVE fixes)

    08-Dec           Last day for kernel commits for this cycle.
    11-Dec - 16-Dec  Kernel prep week.
    17-Dec - 29-Dec  Bug verification & Regression testing.
    01-Jan           Release to -updates.
Misc
  • The current CVE status
  • If you would like to reach the kernel team, you can find us at the #ubuntu-kernel
    channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing
    list at: kernel-team@lists.ubuntu.com.

Commercetools uses Ubuntu on its next-generation ecommerce platform

Ubuntu Insights, Cloud and Server - Wed, 06/12/2017 - 19:13

Today’s shoppers are looking for a consistent experience, no matter which channels they use, whether smartphone, tablet, wearable, digital point of sale (POS), or others. Commercetools helps enterprises to digitally transform their entire sales operations across all channels. Commercetools’ Software-as-a-Service approach, open source philosophy, and strong support for an API and microservices architecture enable the company’s customers to rapidly build highly individual shopping experiences for their own markets, without having to change their whole IT ecosystem in the process.

Highlights

  • Learn how Commercetools uses Ubuntu Server to help enterprises to digitally transform their entire sales operations across all channels.
  • The commercetools SaaS/PaaS platform with APIs and microservices enables enterprises to easily build ecommerce applications.
  • Cloud native solution running on Ubuntu Servers for reliability, performance and ease of upgrade.
  • Commercetools gives retailers the tools to offer unique and engaging digital commerce experiences, saving them time and effort, and increasing their profitability.

 



Ubuntu Server Development Summary – 05 Dec 2017

Ubuntu Insights, Cloud and Server - Tue, 05/12/2017 - 20:49

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

cloud-init
  • Queued upstream for merge into Bionic
  • Queued 17.1-46 SRU for Xenial, Zesty, and Artful
  • Fix EC2 race on sandboxed dhclient’s pidfile during tempdir teardown (LP: #1735331)
  • Enable Bionic in Integration Tests
  • Create LXD and KVM Integration Tests in Jenkins
curtin
  • Added mount ‘options’ parameter to the mount storage configuration structure
  • curthooks.write_files legacy support (LP: #1731709)
  • Merged control of curtin install unmounting
  • Added rootfs on lvm test to vmtests (LP: #1731490)
  • Fix Bionic netdeps
  • Enable Bionic in vmtests
Postgres
  • Bionic now has postgresql version 10 (LP: #1733527)
Bug Work and Triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Bionic)

  • byobu, 5.124-0ubuntu1, kirkland
  • byobu, 5.123-0ubuntu2, doko
  • cloud-init, 17.1-46-g7acc9e68-0ubuntu1, smoser
  • cluster-glue, 1.0.12-7, None
  • crmsh, 3.0.1-2ubuntu2, vorlon
  • crmsh, 3.0.1-2ubuntu1, vorlon
  • dovecot, 1:2.2.33.2-1ubuntu1, paelzer
  • exim4, 4.89-9ubuntu3, mdeslaur
  • iproute2, 4.9.0-1ubuntu3, paelzer
  • nmap, 7.60-1ubuntu2, doko
  • rrdtool, 1.7.0-0ubuntu3, doko
  • rrdtool, 1.7.0-0ubuntu2, doko
  • rrdtool, 1.7.0-0ubuntu1, doko

Total: 13

Uploads to Supported Releases (Trusty, Xenial, Zesty, Artful)

  • cloud-init, xenial, 17.1-46-g7acc9e68-0ubuntu1~16.04.1, smoser
  • cloud-init, zesty, 17.1-46-g7acc9e68-0ubuntu1~17.04.1, smoser
  • cloud-init, artful, 17.1-46-g7acc9e68-0ubuntu1~17.10.1, smoser
  • exim4, artful, 4.89-5ubuntu1.2, mdeslaur
  • exim4, zesty, 4.88-5ubuntu1.3, mdeslaur
  • ipxe, zesty, 1.0.0+git-20150424.a25a16d-1ubuntu2.2, andreserl
  • ipxe, xenial, 1.0.0+git-20150424.a25a16d-1ubuntu1.2, andreserl
  • libvirt-python, xenial, 1.3.1-1ubuntu1.1, paelzer
  • lxd, xenial, 2.0.11-0ubuntu1~16.04.2, stgraber
  • maas, xenial, 2.3.0-6434-gd354690-0ubuntu1~16.04.1, andreserl
  • maas, zesty, 2.3.0-6434-gd354690-0ubuntu1~17.04.1, andreserl
  • maas, artful, 2.3.0-6434-gd354690-0ubuntu1~17.10.1, andreserl
  • mailman, trusty, 1:2.1.16-2ubuntu0.3, paelzer
  • php7.0, xenial, 7.0.25-0ubuntu0.16.04.1, nacc
  • php7.1, artful, 7.1.11-0ubuntu0.17.10.1, nacc
  • strongswan, artful, 5.5.1-4ubuntu2.1, paelzer

Total: 16

Contact the Ubuntu Server team

Canonical and Rancher Labs announce Kubernetes Cloud Native Platform

Ubuntu Insights, Cloud and Server - Tue, 05/12/2017 - 14:01

Canonical and Rancher Labs announce joint Kubernetes Cloud Native Platform offering

Kubecon, Austin, Texas – 5th Dec: Canonical, in partnership with Rancher Labs, today announced a turn-key application delivery platform built on Ubuntu, Kubernetes, and Rancher 2.0.

The new Cloud Native Platform will make it easy for users to deploy, manage, and operate containers on Kubernetes through a single workflow management portal, from dev-and-test to production environments. Users leverage a rich application catalog of Docker containers and Helm charts, streamlining deployments and increasing developer velocity.

Built on Canonical’s distribution of Kubernetes and Rancher 2.0, the Cloud Native Platform will simplify enterprise usage of Kubernetes with seamless user management, access control and cluster administration.

Rancher 2.0, which will be generally available early next year, includes everything you need to manage multiple Kubernetes clusters in production.  The Rancher 2.0 user experience makes it easy to harness the full power of Kubernetes. Centralized management of user authentication, health checks and monitoring provides increased visibility and control. Users can stand up and manage new Kubernetes clusters using Canonical’s Kubernetes distribution or a cloud-hosted Kubernetes service such as Amazon EKS, Azure ACS or Google GKE.

“Our partnership with Rancher provides end-to-end workflow automation for the enterprise development and operations team on Canonical’s distribution of Kubernetes,” said Mark Shuttleworth, CEO of Canonical. “Ubuntu has long been the platform of choice for developers driving innovation with containers. Canonical’s Kubernetes offerings include consulting, integration and fully-managed Kubernetes services on-prem and on-cloud.”

“We’re thrilled to be partnering with Canonical to build a truly open cloud-native development platform,” said Sheng Liang, CEO and Co-founder of Rancher Labs. “By integrating Rancher, Kubernetes, and Ubuntu, we can provide users with a complete end-to-end Kubernetes management solution that both accelerates Kubernetes adoption in the enterprise and simplifies operations.”

Canonical’s Ubuntu is the leading OS for cloud operations – public and private – and Canonical works with AWS, Azure, Google and Oracle to optimise Ubuntu guests for containers on those clouds. Canonical also works with Google GKE to enable hybrid operations between enterprise deployments of Kubernetes and the Google SaaS offering.

About Canonical

Canonical is the company behind Ubuntu, the leading OS for container, cloud, scale-out and hyperscale computing.  Canonical provides enterprise support and services for commercial users of Ubuntu.

About Rancher Labs

Rancher Labs builds innovative, open source software for enterprises leveraging containers to accelerate software development and improve IT operations. The flagship Rancher container management platform makes it easy to adopt, run and manage containers across multiple Kubernetes clusters.  It includes everything you need to run containers in production, on any infrastructure. For additional information, please visit www.rancher.com

 


LXD Weekly Status #26

Ubuntu Insights, Cloud and Server - Mon, 04/12/2017 - 21:29

Introduction

Focus this week has been on infiniband support and more clustering-related work, with a number of bugfixes, cleanups and refactoring on the side.

We’ve been doing some small tweaks and bugfixes on the LXD snap based on user feedback as more and more users are migrating to it. We’re also getting ready to push LXD 2.0.11 to a lot of our users, fixing a lot of bugs in the process and bringing some small usability tweaks too.

The FOSDEM CFP is now closed and we’re reviewing the 45 proposals we received and carefully checking how we can fit those in the schedule. We expect to send notifications to potential speakers by the end of the week.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

  • No change to report this week
Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu
  • LXD 2.0.11 has spent the week in xenial-proposed and will be released to all Ubuntu 16.04 users very soon.
Snap
  • Added support for using the system’s CA rather than the certificates in the core snap.
  • Better handling of LXD crashes and restarts.

Ubuntu Bionic: Netplan

Ubuntu Insights, Cloud and Server - Fri, 01/12/2017 - 00:00
Netplan

For this week’s Bionic test blitz I am looking at Netplan! Netplan enables easily configuring networking on a system via YAML files. Netplan processes the YAML and generates the required configuration for either NetworkManager or systemd-networkd, the system’s renderer.

Netplan replaced ifupdown as the default configuration utility starting with Ubuntu 17.10 Artful.

Configuration

Initial Setup in Bionic

When you install Bionic or use a cloud image of Bionic a file will appear in /etc/netplan depending on the renderer in use. Here is a breakdown of the various types:

Install Type   Renderer          File
Server ISO     systemd-networkd  /etc/netplan/01-netcfg.yaml
Cloud Image    systemd-networkd  /etc/netplan/50-cloud-init.yaml
Desktop ISO    NetworkManager    /etc/netplan/01-network-manager-all.yaml

Do note that configuration files can exist in three different locations, with the precedence from most important to least as follows:

  • /run/netplan/*.yaml
  • /etc/netplan/*.yaml
  • /lib/netplan/*.yaml

Alphabetically later files, no matter what directory they are in, will amend keys if the key does not already exist and override previous keys if they do.
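As an illustration, here is a minimal sketch of overriding a key from an earlier file (the file names, interface name and addresses are hypothetical):

# Suppose /lib/netplan/00-default.yaml sets dhcp4: true for enp3s0.
# This file sorts later alphabetically, so its keys win:
cat <<'EOF' | sudo tee /etc/netplan/99-static-override.yaml
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: false
      addresses:
        - 192.168.0.10/24
EOF
sudo netplan generate   # merge every *.yaml file and write the renderer configuration
sudo netplan apply      # apply the merged configuration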

Examples

The best method for demonstrating what netplan can do is by showing some examples. Keep in mind that these are very simple examples that do not demonstrate complex situations that netplan can handle.

Static and DHCP Addressing

The following configures four devices:

  • enp3s0 setup with IPv4 DHCP
  • enp4s0 setup with IPv4 static with custom MTU
  • IPv6 static tied to a specific MAC address
  • IPv4 and IPv6 DHCP with jumbo frames tied to a specific MAC address
ethernets:
  enp3s0:
    dhcp4: true
  enp4s0:
    addresses:
      - 192.168.0.10/24
    gateway4: 192.168.0.1
    mtu: 1480
    nameservers:
      addresses:
        - 8.8.8.8
        - 9.9.9.9
  net1:
    addresses:
      - fe80::a00:10a/120
    gateway6: fe80::a00:101
    match:
      macaddress: 52:54:00:12:34:06
  net2:
    dhcp4: true
    dhcp6: true
    match:
      macaddress: 52:54:00:12:34:07
    mtu: 9000

Bonding

Bonding can easily be configured with the required interfaces list and by specifying the mode. The mode can be any of the valid types: balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb, balance-alb. See the bonding wiki page for more details.

bonds:
  bond0:
    dhcp4: yes
    interfaces:
      - enp3s0
      - enp4s0
    parameters:
      mode: active-backup
      primary: enp3s0

Bridges

Here is a very simple example of a bridge using DHCP:

bridges:
  br0:
    dhcp4: yes
    interfaces:
      - enp3s0

Additional parameters can be passed in to turn off STP for example or set priorities.
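For example, a minimal sketch (parameter values illustrative, not taken from a tested deployment) that disables STP and sets an explicit bridge priority:

bridges:
  br0:
    dhcp4: yes
    interfaces:
      - enp3s0
    parameters:
      stp: false
      priority: 32768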

Vlans

Similarly, vlans only require a name as the key and then an id and link to use for the vlan:

vlans:
  vdev:
    id: 101
    link: net1
    addresses:
      - 10.0.1.10/24
  vprod:
    id: 102
    link: net2
    addresses:
      - 10.0.2.10/24
  vtest:
    id: 103
    link: net3
    addresses:
      - 10.0.3.10/24
  vmgmt:
    id: 104
    link: net4
    addresses:
      - 10.0.4.10/24

Issues Hit

While my configurations were not overly complex and only touched the surface of what netplan and networking can do, I ran across a few issues and made a recommendation:

First, either the top level global routes are not supported (LP: #1720418) or the documentation is not correct (LP: #1726695). The README shows routes as a high-level key, but they are currently only supported at the interface level.

Second, there are numerous articles, examples, guides, and use-cases where an interfaces file is modified to include pre-up, pre-down, post-up, or post-down commands. Netplan does not currently have support for this (LP: #1664818). Here is a snip from the readme: “While the netplan configuration does not include support for hook scripts, you can add systemd unit jobs with the appropriate Requires: and After: fields to run arbitrary commands once the network is up.” I could only find one example of creating a systemd unit job to meet this need.

When attempting to use the ifupdown-migrate command I found it was unable to understand any static set interfaces (LP: #1709668).

Next, there is invalid YAML in the docs (LP: #1735317). In YAML the colon ‘:’ is a reserved character, and when used with the in-line array syntax it can cause errors. For example:

# Correct, valid YAML
addresses:
  - 2001:1::1/64

# Invalid YAML
test: [2001:1::1/64]

If you must use the in-line array, then put quotes around your strings containing special characters like the colon. I also suggest using a YAML syntax validator or linter to check your configuration files.
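For example, quoting the invalid snippet above makes it parse correctly:

# Valid YAML: the quotes protect the colons inside the in-line array
test: ["2001:1::1/64"]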

Finally, I requested a dry run or config test option (LP: #1735318). After writing numerous configurations to try different settings, I got frustrated needing to move the configurations to a different system or into a container to test them without touching my local system.

Next Steps

I was left with an overall very positive impression of netplan. Having the ability to write YAML configuration files and not have to worry about how the actual configuration was generated or what commands need to be used depending on the backend simplifies the process. I would like to continue to attempt some more complex configurations that I can find as well as attempt additional test cases with the ifupdown-migrate subcommand.

Links & References

LXD Weekly Status #25

Ubuntu Insights, Cloud and Server - Mon, 27/11/2017 - 21:33

Introduction

This week has been split between some upcoming feature work (infiniband and clustering), helping some new contributors get started with contributing to LXD and doing a lot of backports to the stable branches.

Our stable branch backlog is now empty on all 3 projects and @brauner is now handling this for LXC with @stgraber handling the stable branches of LXD and LXCFS.

Following the PPA deprecation warning we issued last week, we’ve also been busy with dealing with a number of issues related to the snap package as more users upgrade to it.

On the LXC side of things, we got quite a few contributions to various parts of the code, which we’ve reviewed and merged.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

  • No change to report this week
Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu

Snap
  • Fixed a number of runtime warnings related to os-release.
  • Included some required ebtables modules.
  • Updated a number of scripts to use /snap/lxd/current rather than the versioned path, fixing a number of issues on upgrade.
  • Fixed cgroup handling for systems that have cgroupv2.

Making NTP best practices easy with Juju charms

Ubuntu Insights, Cloud and Server - Fri, 24/11/2017 - 10:13
NTP: a behind-the-scenes protocol

Network Time Protocol is one of those oft-ignored-but-nonetheless-essential subsystems which is largely unknown, except to a select few. Those who know it well generally fall into the following categories:

  1. time geeks who work on protocol standardisation and implementation,
  2. enthusiasts who tinker with GPS receivers or run servers in the NTP pool, or
  3. sysadmins who have dealt with the consequences of inaccurate time in operational systems. (I fall mostly into this third category, with some brief forays into the others.)

One of the consequences of NTP’s low profile is that many important best practices aren’t widely known and implemented, and in some cases, myths are perpetuated.

Fortunately, Ubuntu & other major Linux distributions come out of the box with a best-practice-informed NTP configuration which works pretty well. So sometimes taking a hands-off approach to NTP is justified, because it mostly “just works” without any special care and attention. However, some environments require tuning the NTP configuration to meet operational requirements.
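For reference, the stock /etc/ntp.conf on recent Ubuntu releases points at the Ubuntu vendor zone of the NTP pool, spreading queries across several pool servers (excerpt from memory; exact contents vary by release):

pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
pool 2.ubuntu.pool.ntp.org iburst
pool 3.ubuntu.pool.ntp.org iburst
pool ntp.ubuntu.com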

When best practices require more

One such environment is Canonical’s managed OpenStack service, BootStack. A primary service provided in BootStack is the distributed storage system, Ceph. Ceph’s distributed architecture requires the system time on all nodes to be synchronised to within 50 milliseconds of each other. Ordinarily NTP has no problem achieving synchronisation an order of magnitude better than this, but some of our customers run their private clouds in far-flung parts of the world, where reliable Internet bandwidth is limited, and high-quality local time sources are not available. This has sometimes resulted in time offsets larger than Ceph will tolerate.

A technique for dealing with this problem is to select several local hosts to act as a service stratum between the global NTP pool and the other hosts in the environment. The Juju ntp charms have supported this configuration for some time, and historically in BootStack we’ve achieved this by configuring two NTP services: one containing the manually-selected service stratum hosts, and one for all the remaining hosts.

We select hosts for the service stratum using a combination of the following factors:

  • Reasonable upstream Internet connectivity is needed. It doesn’t have to be perfect – NTP can achieve less than 5 milliseconds offset over an ADSL line, and most of our customer private clouds have better than that.
  • Bare metal systems are preferred over VMs (but the latter are still workable). Containers are not viable as NTP servers because the system clock is not virtualised; time synchronisation for containers should be provided by their host.
  • There should be no “choke points” in the NTP strata – these are bad for both accuracy and availability. A minimum of 3 (but preferably 4-6) servers should be included in each stratum, and these should point to a similar number of higher-stratum NTP servers.
  • Because consistent time for Ceph is our primary goal, the Ceph hosts themselves should be clients rather than part of the service stratum, so that they get a consistent set of servers offering reliable response at local LAN latencies.
A manual service stratum deployment

Here’s a diagram depicting what a typical NTP deployment with a manual service stratum might look like.


To deploy this in an existing BootStack environment, the sequence of commands might look something like this (application names are examples only):

# Create the two ntp applications:
$ juju deploy cs:ntp ntp-service                              # ntp-service will use the default pools configuration
$ juju deploy cs:ntp ntp-client
$ juju add-relation ntp-service:ntpmaster ntp-client:master   # ntp-client uses ntp-service as its upstream stratum

# Deploy them to the cloud nodes:
$ juju add-relation infra-node ntp-service    # deploys ntp-service to the existing infra-node service
$ juju add-relation compute-node ntp-client   # deploys ntp-client to the existing compute-node service

Updating the ntp charm

It’s been my desire for some time to see this process made easier, more accurate, and less manual. Our customers come to us wanting their private clouds to “just work”, and we can’t expect them to provide the ideal environment for Ceph.

One of my co-workers, Stuart Bishop, started me thinking with this quote:

[O]ne of the original goals of charms [was to] encode best practice so software can be deployed by non-experts.

That seemed like a worthy goal, so I set out to update the ntp charm to automate the service stratum host selection process.

Design criteria

My goals for this update to the charm were to:

  • provide a stable NTP service for the local cloud and avoid constantly changing upstream servers,
  • ensure that we don’t impact the NTP pool adversely, even if the charm is widely deployed to very large environments,
  • provide useful feedback in juju status which is sufficient to explain its choices,
  • use only functionality available in stock Ubuntu, Juju, and charm helpers, and
  • improve testability of the charm code and increase test suite coverage.
What it does
  • This functionality is enabled using the auto_peers configuration option; this option was previously deprecated, because it could be better achieved through juju relations.
  • On initial configuration of auto_peers, each host tests its latency to the configured time sources.
  • The charm inspects the machine type and the software running on the system, using this knowledge to reduce the likelihood of a Ceph, Swift, or Nova compute host being selected, and to increase the likelihood that bare metal hosts are used. (This usually means that the Neutron gateways and infrastructure/monitoring hosts are more likely to be selected.)
  • The above factors are then combined into an overall suitability score for the host. Each host compares its score to the other hosts in the same juju service to determine whether it should be part of the service stratum.
  • The results of the scoring process are used to provide feedback in the charm status message, visible in the output of juju status.
  • If the charm detects that it’s running in a container, it sets the charm state to blocked and adds a status message indicating that NTP should be configured on the host rather than in the container.
  • The charm makes every effort to restrict load on the configured NTP servers by testing connectivity a maximum of once per day if configuration changes are made, or once a month if running from the update-status hook.

All this means that you can deploy a single ntp charm across a large number of OpenStack hosts, and be confident that the most appropriate hosts will be selected as the NTP service stratum.

Here’s a diagram showing the resulting architecture:

How it works
  • The new code uses ntpdate in test mode to test the latency to each configured source. This results in a delay in seconds for each IP address responding to the configured DNS name.
  • The delays for responses are combined using a root mean square, then converted to a score using the negative of the natural logarithm, so that delays approaching zero result in a higher score, and larger delays result in a lower score.
  • The scores for all host names are added together. If the charm is running on a bare metal machine, the overall score is given a 25% increase in weighting. If the charm is running in a VM, no weight adjustment is made. If the charm is running in a container, the above scoring is skipped entirely and the weighting is set to zero. (A worked example follows this list.)
  • The weight is then reduced by between 10% and 25% based on the presence of the following running processes: ceph, ceph-osd, nova-compute, or swift.
  • Each unit sends its calculated scores to its peer units on the peer relation. When the peer relation is updated, each unit calculates its position in the overall scoring results, and determines whether it is in the top 6 hosts (by default – this value is tunable). If so, it updates /etc/ntp.conf to use the configured NTP servers and flags itself as connecting to the upstream stratum. If the host is not in the top 6, it configures those 6 hosts as its own servers and flags itself as a client.
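As a worked illustration of the scoring above (numbers invented): delays of 0.010s and 0.020s to two configured sources give a root mean square of sqrt((0.010² + 0.020²) / 2) ≈ 0.0158, and -ln(0.0158) ≈ 4.15. On a bare metal machine the 25% weighting increase lifts this to roughly 5.19, while a unit found to be running ceph-osd would instead see its weight reduced by 10-25%.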
How to use it

This updated ntp charm has been tested successfully with production customer workloads. It’s available now in the charm store. Those interested in the details of the code change can review the merge proposal – if you’d like to test and comment on your experiences with this feature, that would be the best place to do so.

Here’s how to deploy it:

# Create a single ntp service:
$ juju deploy --channel=candidate cs:ntp ntp   # ntp service still uses default pools configuration
$ juju config ntp auto_peers=true

# Deploy to existing nodes:
$ juju add-relation infra-node ntp
$ juju add-relation compute-node ntp

You can see an abbreviated example of the juju status output for the above deployment at http://pastebin.ubuntu.com/25901069/.


Introduction to MAAS: building the agile data centre

Ubuntu Insights, Cloud and Server - Tue, 21/11/2017 - 19:44

This webinar is part of our Ubuntu Enterprise Summit, running from December 5-6

Speakers: Mark Shuttleworth & Andres Rodriguez

Time: December 6th, 11AM-12PM PST / 2PM-3PM EST / 7PM-8PM BST

In his introduction keynote to the Ubuntu Enterprise Summit, Mark Shuttleworth, founder and CEO of Canonical, will be explaining his vision for the future of enterprise architecture.

Register Now


Special Kubernetes Announcement from KubeCon

Ubuntu Insights, Cloud and Server - Tue, 21/11/2017 - 19:42

This webinar is part of our Ubuntu Enterprise Summit, running from December 5-6

Speaker: Marco Ceppi

Time: December 5th, 9-10AM PST / 12PM-1PM EST / 5-6PM BST

As KubeCon starts in Austin, we will have some Kubernetes-related news!

This is the opportunity to hear the news (and see demos) before the show starts!

Expect to see some container automation in action with Ubuntu and Kubernetes and more, with Marco Ceppi at the keyboard.

Register Now


Building 21st Century Infrastructure

Ubuntu Insights, Cloud and Server - Tue, 21/11/2017 - 19:39

This webinar is part of our Ubuntu Enterprise Summit, running from December 5-6

Speaker: Mark Shuttleworth

Time: December 5th, 8-9AM PST / 11AM-12PM EST / 4-5PM BST

In his introduction keynote to the Ubuntu Enterprise Summit, Mark Shuttleworth, founder and CEO of Canonical, will be explaining his vision for the future of enterprise architecture.

Register Now


Join us at the Ubuntu Enterprise Summit!

Ubuntu Insights, Cloud and Server - Tue, 21/11/2017 - 19:16

Bloomberg, Walmart, eBay, Samsung, Dell. Ever wonder how some of the world’s largest enterprises run on Ubuntu? This December, we are hosting our first ever Ubuntu Enterprise Summit to tell you how, and to help guide your own organisation, whether it is running the cloud in a large telco or deriving revenue from your next IoT initiative.

The Ubuntu Enterprise Summit is a two-day event of webinars on December 5th and 6th where you can join Canonical’s product managers, technical leads, partners and customers to get an inside look at why some of the world’s largest companies have chosen Ubuntu. Whether you are focused on the cloud or are living life at the edge, the webinars will also look at trends and the considerations for your organisation when implementing such technologies.

To kick off the event on December 5th, Canonical CEO and founder Mark Shuttleworth will deliver a keynote talk on 21st Century Infrastructure. Following Mark’s opening, there will be a series of other events, and you can register now for those that spark your interest by clicking on the links below.

Tuesday, December 5th

 

Building 21st Century Infrastructure

Speaker- Mark Shuttleworth, CEO and Founder, Canonical and Ubuntu

Time- 8-9AM PST / 11AM-12PM EST / 4-5PM BST

More Info

Special Kubernetes Announcement from KubeCon

Speaker- Marco Ceppi

Time- 9-10AM PST / 12PM-1PM EST / 5-6PM BST

More Info

Get ready for multi-cloud

Speaker- Mark Baker, Field Product Manager, Canonical

Time- 10-11AM PST / 1-2PM EST / 6-7PM BST

More Info

Hybrid cloud & financial services – how to compete with cloud native new entrants

Speaker- Chris Kenyon, SVP, Worldwide Sales & Business Development, Canonical

Time- 11AM-12PM PST / 2-3PM EST / 7-8PM BST

More Info

 

Wednesday, December 6th

 

Ubuntu: What’s the security story?

Speaker- Dustin Kirkland, VP, Product Development

Time- 7-8AM PST / 10-11AM EST / 3PM-4PM BST

More Info

How City Network solves the challenges for the modern financial company

Speaker- Johan Christenson, CEO of City Network

Time- 8-9AM PST / 11AM-12PM EST / 4PM-5PM BST

More Info

Appstores: The path to IoT revenue post-sale

Speaker- Mike Bell, EVP, Devices & IoT

Time- 9-10AM PST / 12-1PM EST / 5PM-6PM BST

More Info

Cloud to edge: Building the software defined telco infrastructure

Speaker- Nathan Rader, Director of NFV Strategy

Time- 10-11AM PST / 1-2PM EST / 6PM-7PM BST

More Info

Introduction to MAAS: building the agile data centre

Speakers- Mark Shuttleworth, CEO & Founder, and Andres Rodriguez, MAAS Product Manager

Time- 11AM-12PM PST / 2PM-3PM EST / 7PM-8PM BST

More Info

If you can’t make the webinar of your choice, the sessions will also be available to view post-event so you don’t miss out.

We look forward to seeing you at the Ubuntu Enterprise Summit!


Ubuntu Server Development Summary – 21 Nov 2017

Ubuntu Insights, Cloud and Server - Tue, 21/11/2017 - 18:44
Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: cloud-init IPv6 Support for EC2

Chad Smith, a cloud-init team member, wrote up a post on the automatic configuration of IPv6 on EC2 instances. He details the steps required to add in the support and some additional improvements that could be made.

cloud-init
  • Released stable release update (SRU) of 17.1-27-geb292c18 (LP: #1721847)
  • Cleanup dhclient background process after EC2 network discovery.
  • ntp: fix configuration template rendering for openSUSE and SLES (Robert Schweikert) LP: #1726572
  • fix manually running cloud-init after upgrade (LP: #1732917)
Bug Work and Triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Bionic)

  • amavisd-new, 1:2.11.0-1ubuntu1, paelzer
  • cloud-init, 17.1-41-g76243487-0ubuntu1, smoser
  • docker.io, 17.03.2-0ubuntu1, mwhudson
  • exim4, 4.89-9ubuntu1, paelzer
  • golang-context, 1.1-2ubuntu2, mwhudson
  • golang-dbus, 4.1.0-1, None
  • golang-github-mattn-go-colorable, 0.0.6-2, None
  • golang-github-mattn-go-runewidth, 0.0.2+git20170510.3.97311d9-1ubuntu2, mwhudson
  • golang-github-olekukonko-tablewriter, 0.0~git20170719.0.be5337e-1ubuntu2, mwhudson
  • golang-github-pborman-uuid, 0.0+git20150824.0.cccd189-1ubuntu10, mwhudson
  • golang-gocapability-dev, 0.0~git20160928.0.e7cb7fa-1ubuntu3, mwhudson
  • golang-golang-x-net-dev, 1:0.0+git20170629.c81e7f2+dfsg-1ubuntu2, mwhudson
  • golang-gopkg-flosch-pongo2.v3, 3.0+git20141028.0.5e81b81-0ubuntu10, mwhudson
  • golang-gopkg-lxc-go-lxc.v2, 0.0~git20161126.1.82a07a6-0ubuntu8, mwhudson
  • golang-gopkg-tomb.v2, 0.0~git20161208.0.d5d1b58-1ubuntu3, mwhudson
  • golang-goprotobuf, 0.0~git20170808.0.1909bc2-1ubuntu2, mwhudson
  • golang-petname, 2.8-0ubuntu2, mwhudson
  • golang-x-text, 0.0~git20170627.0.6353ef0-1ubuntu2, mwhudson
  • golang-x-text, 0.0~git20170627.0.6353ef0-1ubuntu2, mwhudson
  • irqbalance, 1.2.0-0.2, None
  • ldns, 1.7.0-3ubuntu1, timo-jyrinki
  • libmemcached, 1.0.18-4.2, None
  • lxd, 2.20-0ubuntu4, stgraber
  • lxd, 2.20-0ubuntu3, stgraber
  • lxd, 2.20-0ubuntu2, stgraber
  • lxd, 2.20-0ubuntu1, stgraber
  • maas, 2.3.0-6434-gd354690-0ubuntu1, andreserl
  • nut, 2.7.4-5.1ubuntu2, paelzer
  • procmail, 3.22-26, None
  • qemu, 1:2.10+dfsg-0ubuntu5, paelzer
  • ruby2.3, 2.3.5-1ubuntu4, doko
  • ruby2.3, 2.3.5-1ubuntu3, doko
  • ruby2.3, 2.3.5-1ubuntu2, doko
  • ruby2.3, 2.3.5-1ubuntu1, doko
  • ruby2.3, 2.3.5-1, None
  • sssd, 1.15.3-3ubuntu1, paelzer
  • sysstat, 11.6.0-1ubuntu1, paelzer
  • tgt, 1:1.0.72-1ubuntu1, paelzer

Total: 38

Uploads to Supported Releases (Trusty, Xenial, Zesty, Artful)

  • bind9, zesty, 1:9.10.3.dfsg.P4-10.1ubuntu5.3, paelzer
  • bind9, xenial, 1:9.10.3.dfsg.P4-8ubuntu1.9, paelzer
  • cloud-init, xenial, 17.1-27-geb292c18-0ubuntu1~16.04.1, smoser
  • cloud-init, zesty, 17.1-27-geb292c18-0ubuntu1~17.04.1, smoser
  • cloud-init, artful, 17.1-27-geb292c18-0ubuntu1~17.10.1, smoser
  • docker.io, xenial, 17.03.2-0ubuntu1~16.04.1, mwhudson
  • docker.io, zesty, 17.03.2-0ubuntu1~17.04.1, mwhudson
  • docker.io, artful, 17.03.2-0ubuntu1~17.10.1, mwhudson
  • docker.io, xenial, 1.13.1-0ubuntu1~16.04.2, mwhudson
  • docker.io, zesty, 1.13.1-0ubuntu1~17.04.1, mwhudson
  • juju-core, xenial, 2.2.6-0ubuntu0.16.04.3, mwhudson
  • juju-core, zesty, 2.2.6-0ubuntu0.17.04.3, mwhudson
  • juju-core, xenial, 2.2.6-0ubuntu0.16.04.2, mwhudson
  • juju-core, zesty, 2.2.6-0ubuntu0.17.04.2, mwhudson
  • libvirt-python, xenial, 1.3.1-1ubuntu1.1, paelzer
  • lxd, xenial, 2.20-0ubuntu4~16.04.1, stgraber
  • lxd, zesty, 2.20-0ubuntu4~17.04.1, stgraber
  • lxd, artful, 2.20-0ubuntu4~17.10.1, stgraber
  • procmail, artful, 3.22-25ubuntu0.17.10.1, mdeslaur
  • procmail, zesty, 3.22-25ubuntu0.17.04.1, mdeslaur
  • procmail, xenial, 3.22-25ubuntu0.16.04.1, mdeslaur
  • procmail, trusty, 3.22-21ubuntu0.2, mdeslaur
  • qemu, artful, 1:2.10+dfsg-0ubuntu3.1, paelzer
  • qemu, zesty, 1:2.8+dfsg-3ubuntu2.8, dannf
  • samba, artful, 2:4.6.7+dfsg-1ubuntu3.1, mdeslaur
  • samba, zesty, 2:4.5.8+dfsg-0ubuntu0.17.04.8, mdeslaur
  • samba, xenial, 2:4.3.11+dfsg-0ubuntu0.16.04.12, mdeslaur
  • samba, trusty, 2:4.3.11+dfsg-0ubuntu0.14.04.13, mdeslaur
  • sssd, xenial, 1.13.4-1ubuntu1.9, paelzer
  • strongswan, artful, 5.5.1-4ubuntu2.1, paelzer
  • websockify, xenial, 0.6.1+dfsg1-1ubuntu1, corey.bryant

Total: 31

Contact the Ubuntu Server team

MAAS 2.3.0 (final) Released!

Ubuntu Insights, Cloud and Server - Tue, 21/11/2017 - 16:19

This article originally appeared on Andres Rodriguez’s blog

I’m happy to announce that MAAS 2.3.0 (final) is now available!

This new MAAS release introduces a set of exciting features and improvements to the overall user experience. It now becomes the focus of maintenance, as it fully replaces MAAS 2.2.

In order to provide sufficient notice, please be aware that 2.3.0 will replace MAAS 2.2 in the Ubuntu Archive in the coming weeks. In the meantime, MAAS 2.3 is available in a PPA and as a snap.

PPA Availability

MAAS 2.3.0 is currently available in ppa:maas/next for the coming week.

sudo add-apt-repository ppa:maas/next
sudo apt-get update
sudo apt-get install maas

Please be aware that MAAS 2.3 will replace MAAS 2.2 in ppa:maas/stable within a week.

Snap Availability

For those wanting to use the snap, you can obtain it from the stable channel:

sudo snap install maas --devmode --stable

MAAS 2.3.0 (final)

Important announcements

Machine network configuration now deferred to cloud-init

Starting from MAAS 2.3, machine network configuration is now handled by cloud-init. In previous MAAS (and curtin) releases, the network configuration was performed by curtin during the installation process. In an effort to improve robustness, network configuration has now been consolidated in cloud-init. MAAS will continue to pass network configuration to curtin, which in turn, will delegate the configuration to cloud-init.

Ephemeral images over HTTP

As part of the effort to reduce dependencies and improve reliability, MAAS ephemeral (network boot) images are no longer loaded using iSCSI (tgt). By default, the ephemeral images are now obtained using HTTP requests to the rack controller. After upgrading to MAAS 2.3, please ensure you have the latest available images. For more information please refer to the section below (New features & improvements).

Advanced network configuration for CentOS & Windows

MAAS 2.3 now supports the ability to perform network configuration for CentOS and Windows. The network configuration is performed via cloud-init. MAAS CentOS images now use the latest available version of cloud-init that includes these features.

New features & improvements

CentOS network configuration

MAAS can now perform machine network configuration for CentOS 6 and 7, providing networking feature parity with Ubuntu for those operating systems. The following can now be configured for MAAS deployed CentOS images:

  • Bonds, VLAN and bridge interfaces.
  • Static network configuration.

Our thanks to the cloud-init team for improving the network configuration support for CentOS.

Windows network configuration

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information (https://maas.io/contact-us).

Improved Hardware Testing

MAAS 2.3 introduces a new and improved hardware testing framework that significantly improves the granularity and provision of hardware testing feedback. These improvements include:

  • An improved testing framework that allows MAAS to run each component individually. This allows MAAS to run tests against storage devices for example, and capture results individually.
  • The ability to describe custom hardware tests with a YAML definition (see the sketch after this list):
    • This provides MAAS with information about the tests themselves, such as script name, description, required packages, and other metadata about what information the script will gather. All of which will be used by MAAS to render in the UI.
    • Determines whether the test supports a parameter, such as storage, allowing the test to be run against individual storage devices.
    • Provides the ability to run tests in parallel by setting this in the YAML definition.
  • Capture performance metrics for tests that can provide it.
    • CPU performance tests now offer a new ‘7z’ test, providing metrics.
    • Storage performance tests now include a new ‘fio’ test providing metrics.
    • Storage test ‘badblocks’ has been improved to provide the number of badblocks found as a metric.
  • The ability to override a machine that has been marked ‘Failed testing’. This allows administrators to acknowledge that a machine is usable despite it having failed testing.
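As a flavour of the YAML definition described above, here is a hedged sketch of the metadata block embedded at the top of a test script (field names follow our reading of the MAAS 2.3 hardware testing docs; the script name and body are invented for illustration):

#!/bin/bash
# --- Start MAAS 1.0 script metadata ---
# name: storage-selftest
# title: Storage self-test (illustrative)
# description: Example of a per-device storage test definition.
# script_type: test
# hardware_type: storage
# parallel: instance          # run one instance per storage device
# parameters:
#   storage: {type: storage}  # tells MAAS to run the test against each storage device
# --- End MAAS 1.0 script metadata ---
# ... the actual test body would go here ...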

Hardware testing improvements include the following UI changes:

  • Machine Listing page
    • Displays whether a test is pending, running or failed for the machine components (CPU, Memory or Storage.)
    • Displays whether a test not related to CPU, Memory or Storage has failed.
    • Displays a warning when the machine has been overridden and has failed tests, but is in a ‘Ready’ or ‘Deployed’ state.
  • Machine Details page
    • Summary tab – Provides hardware testing information about the different components (CPU, Memory, Storage).
    • Hardware Tests / Commission tab – Provides an improved view of the latest test run, its runtime as well as an improved view of previous results. It also adds more detailed information about specific tests, such as status, exit code, tags, runtime and logs/output (such as stdout and stderr).
    • Storage tab – Displays the status of specific disks, including whether a test is OK or failed after running hardware tests.

For more information please refer to https://docs.ubuntu.com/maas/2.3/en/nodes-hw-testing.

Network discovery & beaconing

In order to confirm network connectivity and aid with the discovery of VLANs, fabrics and subnets, MAAS 2.3 introduces network beaconing. MAAS now sends out encrypted beacons, facilitating network discovery and monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to UDP port 5240. When registering a new controller, MAAS uses the information gathered from the beaconing protocol to ensure that newly registered interfaces on each controller are associated with existing known networks in MAAS. This aids MAAS by providing better information on determining the network topology. Using network beaconing, MAAS can better correlate which networks are connected to its controllers, even if interfaces on those controllers are not configured with IP addresses. Future uses for beaconing could include validation of networks from commissioning nodes, MTU verification, and a better user experience for registering new controllers.

Upstream Proxy

MAAS 2.3 now enables an upstream HTTP proxy to be used while allowing MAAS deployed machines to continue to use the caching proxy for the repositories. Doing so provides greater flexibility for closed environments, including:

  • Enabling MAAS itself to use a corporate proxy while allowing machines to continue to use the MAAS proxy.
  • Allowing machines that don’t have access to a corporate proxy to gain network access using the MAAS proxy.

Adding upstream proxy support also includes an improved configuration on the settings page. Please refer to Settings > Proxy for more details.

Ephemeral Images over HTTP

Historically, MAAS has used ‘tgt’ to provide images over iSCSI for the ephemeral environments (e.g. commissioning, deployment environment, rescue mode, etc). MAAS 2.3 changes the default behaviour by now providing images over HTTP. These images are now downloaded directly by the initrd. The change means that the initrd loaded on PXE will contact the rack controller to download the image to load in the ephemeral environment. Support for using ‘tgt’ is being phased out in MAAS 2.3, and will no longer be supported from MAAS 2.4 onwards. Users who would like to continue to use and load their ephemeral images via ‘tgt’ can disable HTTP boot with the following command:

maas <user> maas set-config name=http_boot value=False

UI Improvements

MACHINES, DEVICES, CONTROLLERS

MAAS 2.3 introduces an improved design for the machines, devices and controllers detail pages that include the following changes.

  • “Summary” tab now only provides information about the specific node (machine, device or controller), organised across cards.
  • “Configuration” has been introduced, which includes all editable settings for the specific node (machine, device or controllers).
  • “Logs” consolidates the commissioning output and the installation log output.
OTHER UI IMPROVEMENTS

Other UI improvements that have been made for MAAS 2.3 include:

  • Added DHCP status column on the ‘Subnets’ tab.
  • Added architecture filters.
  • Updated VLAN and Space details page to no longer allow inline editing.
  • Updated VLAN page to include the IP ranges tables.
  • Zones page converted to AngularJS (away from YUI).
  • Added warnings when changing a Subnet’s mode (Unmanaged or Managed).
  • Renamed “Device Discovery” to “Network Discovery”.
  • Discovered devices where MAAS cannot determine the hostname now show the hostname as “unknown” and greyed out instead of using the MAC address manufacturer as the hostname.
Rack Controller Deployment

MAAS 2.3 can now automatically deploy rack controllers when deploying a machine. This is done by providing cloud-init user data, and once a machine is deployed, cloud-init will install and configure the rack controller. Upon rack controller registration, MAAS will automatically detect the machine is now a rack controller and transition it automatically. To deploy a rack controller, users can do so via the API (or CLI), e.g.:

maas <user> machine deploy <system_id> install_rackd=True

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, this will require the machine to have access to the internet to be able to install the MAAS snap.

Controller Versions & Notifications

MAAS now surfaces the version of each running controller and notifies the users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading their MAAS on a multi-node MAAS cluster, such as within a HA setup.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

API Improvements

The machines API endpoint now provides more information on the configured storage and provides additional output that includes volume_groups, raids, cache_sets, and bcaches fields.
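For example, a minimal sketch of pulling the new fields out of that endpoint with the CLI (the ‘admin’ profile name and the jq filter are illustrative, not part of MAAS itself):

# Read all machines and show only the new storage fields of the first one
maas admin machines read | jq '.[0] | {volume_groups, raids, cache_sets, bcaches}'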

Django 1.11 support

MAAS 2.3 now supports the latest Django LTS version, Django 1.11. This allows MAAS to work with the newer Django version in Ubuntu Artful, which serves as a preparation for the next Ubuntu LTS release.

  • Users running MAAS in Ubuntu Artful will use Django 1.11.
  • Users running MAAS in Ubuntu Xenial will continue to use Django 1.9.


LXD Weekly Status #24: LXD 2.20

Ubuntu Insights, Cloud and Server - Mon, 20/11/2017 - 19:10

The highlight of this week was the release of LXD 2.20 which introduces a number of exciting new features.

LXD 2.20 should now be available everywhere through both native packages and snap.
We also started the process of deprecating the various LXD PPAs, see below for details.

Our next milestone is LXD 2.21 in about a month which will be the last LXD release of the year and the last LXD 2.x release. Once 2.21 is out, we’ll focus on LXD 3.0 with the first alpha expected in January.

Deprecation of the LXD PPAs

At the end of this year, we are going to stop delivering LXD through our PPAs.
This is done to cut down the amount of time we have to spend on packaging, tooling maintenance and package version tracking when dealing with issues.

The two recommended ways of getting the latest LXD feature release are:

  • Using our snap package
    snap install lxd
  • Using our official backports in the Ubuntu archive
    apt install -t xenial-backports lxd lxd-client

PPA users have until the end of December to transition over to one of those two. A warning message has been added to the LXD package in our PPAs, including instructions on moving to either the snap or the backport package while keeping all your data in place.
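For the deb-to-snap path, the move at the time looked roughly like the following sketch (hedged; the warning message shipped in the PPA package carries the authoritative instructions):

snap install lxd      # install the snap alongside the existing package
sudo lxd.migrate      # interactively moves containers, images and configuration to the snap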

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

  • No change to report this week
Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu
  • LXD 2.20-0ubuntu1 was uploaded to Ubuntu 18.04.
  • LXD 2.20-0ubuntu2 was uploaded to Ubuntu 18.04, fixing a build issue.
  • LXD 2.20-0ubuntu3 was uploaded to Ubuntu 18.04, fixing a test failure.
    This was then automatically picked up by our PPAs.
  • LXD 2.20-0ubuntu4 was uploaded to Ubuntu 18.04, cherry-picking a couple of bugfixes.
    This was then automatically picked up by our PPAs and manually tested and uploaded to Ubuntu 16.04, 17.04 and 17.10 through backports.
Snap
  • Updated the stable channel to LXD 2.20.
  • Cherry-picked bugfix: Unsetting core.https_address was broken
  • Cherry-picked bugfix: Hardlink remapping

Cloud to Edge: Building the software defined telco infrastructure

Ubuntu Insights, Cloud and Server - Mon, 20/11/2017 - 17:51

This webinar is part of our Ubuntu Enterprise Summit, running from December 5-6

Speaker: Nathan Rader

Time: December 6th, 1-2PM EST

With the explosive growth of data, telcos are facing new revenue challenges. Canonical currently provides some of the world’s leading service providers with an automated, repeatable, tried-and-tested telco solution. In this webinar, join Nathan Rader to learn how Ubuntu OpenStack can help telcos:

  • Reduce time to market
  • Save money through automation as well as virtualisation
  • Increase infrastructural efficiency in order to drive towards the future

Register Now


Ubuntu: What’s the security story?

Ubuntu Insights, Cloud and Server - Mon, 20/11/2017 - 17:40

This webinar is part of our Ubuntu Enterprise Summit, running from December 5-6

Speaker: Dustin Kirkland

Time: December 6th, 11AM-12PM EST

Of course you know Ubuntu. Your developers use it everywhere. But you’re from an enterprise, where the IT Security team has the final say. As they should. Before your app built on Ubuntu can go into production, you need their signoff. So what’s the security story with Ubuntu? How is it hardened? Are there best practices around patch management? What about standards and compliance? How can Canonical help? In this webinar, join Dustin Kirkland to learn:

  • How Ubuntu is secured out of the box
  • How we ensure that we are compliant
  • How Canonical ensures customers get a first-class support experience
  • Why Ubuntu has the leading edge in security

Register Now


How City Network solves challenges for the modern financial company

Ubuntu Insights, Cloud and Server - Mon, 20/11/2017 - 17:35

This webinar is part of our Ubuntu Enterprise Summit, running from December 5-6

Speaker: Johan Christenson

Time: December 5th, 3-4PM EST

Learn how City Network builds their regulatory-compliant cloud services, fit for some of the largest banks, insurance companies and large corporations in the world. In this talk, Johan Christenson, CEO of City Network, will detail:

  • Requirements and challenges for industries who are subject to regulatory compliance
  • How we build our clouds and our organisation to ensure regulatory compliance

Register Now


Hybrid cloud & financial services- how to compete with new entrants

Ubuntu Insights, Cloud and Server - Mon, 20/11/2017 - 17:08

This webinar is part of our Ubuntu Enterprise Summit, running from December 5-6

Speaker: Chris Kenyon

Time: December 5th, 2-3PM EST

Finance is facing an avalanche of change and cloud-native new entrants. To keep up with the pace of change, financial organizations need to adopt new development methodologies, new technologies (AI & Blockchain) and ultimately new infrastructure (hybrid cloud & Kubernetes). Tight regulation, legacy systems and inflexible software stacks leave many tied down and unable to respond to the pressure to reduce costs and change at pace.

In this presentation, Chris Kenyon will talk about how financial institutions are transforming into technology companies:

  • Making internal developers engaged & productive
  • Navigating the challenges of hybrid cloud in highly regulated environments.
  • Accelerating proof of value projects in AI / ML and Blockchain

Register Now

