The Clear Linux Project for Intel Architecture is Intel’s distribution of the Linux operating system, written with both the cloud and Intel silicon foremost in mind. Although it is clearly not aimed at a universal market, the Clear Linux Project’s interesting blend of performance, manageability, and security optimization could be ideal for organizations that use their own Linux flavors and want to get more out of their Intel architecture investments. Such benefits are not limited to on-premises deployments: Microsoft recently announced the availability of the Clear Linux Project in the Microsoft Azure Marketplace.

Clear Manageability

The Clear Linux Project’s two big contributions to manageability are its statelessness and its mixer capabilities. Statelessness in the Clear Linux Project is similar to the Fedora Project’s Stateless Linux in that the OS can operate without any custom configuration (in other words, it can run as a generic host with an empty /etc directory). Administrators manage machines running the Clear Linux Project not by tweaking settings on each machine but by making changes to the operating system’s metadata; because the managed machines are in effect all running the same image, those changes propagate everywhere, and machines can still be customized for specific workloads. Where the Clear Linux Project parts company with efforts such as Stateless Linux is that administrators can still apply machine-specific configurations for those scenarios in which a fully stateless system won’t suffice: admin-supplied configuration files override the stateless defaults.
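
To make the override behavior concrete, the following minimal sketch shows the lookup pattern a stateless service can follow: check /etc for an administrator-supplied configuration first, then fall back to a read-only default shipped with the OS. The service name and file paths here are purely illustrative and are not the Clear Linux Project’s actual file layout.

/* Sketch of the stateless override pattern: prefer the administrator's
 * file in /etc, fall back to the distribution default under /usr, and
 * run with built-in settings if neither exists. Paths are hypothetical. */
#include <stdio.h>

static const char *config_paths[] = {
    "/etc/example/example.conf",                /* admin override (usually absent) */
    "/usr/share/defaults/example/example.conf", /* stateless default shipped with the OS */
};

int main(void)
{
    for (size_t i = 0; i < sizeof(config_paths) / sizeof(config_paths[0]); i++) {
        FILE *f = fopen(config_paths[i], "r");
        if (f) {
            printf("using configuration from %s\n", config_paths[i]);
            fclose(f);
            return 0;
        }
    }
    /* An empty /etc is a valid state: the machine simply runs as a generic host. */
    printf("no configuration found; running with built-in defaults\n");
    return 0;
}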

OS upgrades and distribution also work slightly differently in the Clear Linux Project. Most Linux distributions are shipped to the public as packages of compiled binaries from various open-source projects, with the packages providing and resolving all the dependencies needed to install the distro. The Clear Linux Project approaches this differently: it still compiles its binaries from packages, but it deploys them through functionality-specific units that the project calls “bundles.”

The Clear Linux Project’s bundle approach has a couple of subtle effects. First, it speeds up software updates. When administrators update the Clear Linux Project, the OS updates only those bundles that contain changed files, which is faster than installing a completely new set of packages. The bundle paradigm also enforces another change to the software-deployment methodology: unlike in other Linux distributions, it is not possible to update individual packages or bundles in isolation. Each update to the Clear Linux Project represents an entirely new OS version. This move is clearly intended to help administrators of cloud environments: such a methodology enforces a helpful homogeneity among VMs spawned by the thousand and makes version tracking much easier.
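
As a rough sketch of that update model (a simplified illustration, not the project’s actual update client), imagine each bundle recording the OS version in which it last changed; moving the whole OS from one version number to the next then requires fetching only the bundles whose recorded version is newer than the installed one. The version numbers and bundle names below are invented for illustration.

/* Simplified model of whole-OS, bundle-granular updates: the installed
 * system is identified by a single version number, and only bundles
 * that changed after that version need to be downloaded. */
#include <stdio.h>

struct bundle {
    const char *name;
    int last_changed; /* OS version in which this bundle last changed */
};

int main(void)
{
    const int installed = 10980; /* hypothetical installed OS version */
    const int latest    = 11010; /* hypothetical latest OS version */

    const struct bundle bundles[] = {
        { "os-core",          10950 },
        { "kernel-native",    11010 },
        { "web-server-basic", 10990 },
    };

    if (latest <= installed) {
        printf("already at version %d\n", installed);
        return 0;
    }

    printf("updating %d -> %d\n", installed, latest);
    for (size_t i = 0; i < sizeof(bundles) / sizeof(bundles[0]); i++) {
        if (bundles[i].last_changed > installed)
            printf("  fetch bundle %s (changed in %d)\n",
                   bundles[i].name, bundles[i].last_changed);
        else
            printf("  skip bundle %s (unchanged since %d)\n",
                   bundles[i].name, bundles[i].last_changed);
    }
    return 0;
}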

The second effect of the Clear Linux Project’s bundle approach is modularity. Because each bundle composes all of the required upstream open-source projects into one logical unit, system administrators can install different bundles for different server capabilities. Beyond mixing and matching specific bundles, the Clear Linux Project also allows administrators to overlay packages from other Linux distributions on top of it. Such an arrangement lets administrators keep the Clear Linux Project’s update stream while changing small things for testing. Alternatively, administrators can truly “roll their own” OS by combining components from other distributions with Clear Linux Project bundles.

Clear Performance

To a certain extent, modularity plays a part in the performance strategy for the Clear Linux Project. Current software-development practices force developers either to produce separate versions or binaries of their software targeted at different platforms or to settle for a lowest-common-denominator build that can run on everything. However, the Clear Linux Project clearly envisions developers being able to write a single application that can make use of the hardware capabilities of different platforms but that will not crash if particular hardware optimizations are not present. In the Clear Linux Project, Intel refers to this compiler capability of optimizing the same code for multiple architectures and automatically selecting the correct architecture-specific version of the code at runtime as Function Multiversioning (FMV).

For an example of FMV in action, consider an application written to use the Intel Advanced Vector Extensions 2 (Intel AVX2) instruction set extension, introduced in the fourth-generation Intel Core processor family. Normally, instructing the compiler to use Intel AVX2 instructions would limit the application binaries to fourth-generation and newer Intel Core processors. With FMV, however, the compiler can generate both an Intel AVX2-optimized version and a generic version of the code and can automatically ensure at runtime that the appropriate version is used. So when the application runs on a fourth-generation or newer Intel Core processor, it uses Intel AVX2, whereas the same binary run on an earlier-generation processor uses the standard instructions that the older processor supports. Organizations running the Clear Linux Project can thus make more effective use of all of their investments in different Intel processors without making software development overly complex.
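
What FMV looks like to a developer can be shown with a short example, assuming GCC 6 or later and its target_clones attribute (one mechanism a compiler can use to implement FMV). The compiler emits both an Intel AVX2 clone and a generic clone of the function and installs a resolver that picks the right one at runtime, so a single binary runs correctly on both newer and older processors.

#include <stdio.h>

/* GCC generates an AVX2-optimized clone and a default clone of this
 * function and selects between them at runtime based on the CPU. */
__attribute__((target_clones("avx2", "default")))
void add_arrays(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++) /* auto-vectorized with AVX2 where available */
        out[i] = a[i] + b[i];
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];

    add_arrays(a, b, out, 8);
    printf("out[0] = %.1f, out[7] = %.1f\n", out[0], out[7]);
    return 0;
}

Compiled once (for example, with gcc -O3), the same binary takes the Intel AVX2 path on fourth-generation and newer Intel Core processors and the generic path everywhere else.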

The Clear Linux Project also tackles cloud orchestration with its Cloud Integrated Advanced Orchestrator (ciao). Ciao is a new workload scheduler designed to address limitations in current cloud OS projects, such as OpenStack Heat. Ciao is lightweight and fully based on Transport Layer Security (TLS). It is also workload agnostic and easily updateable, and it is optimized for speed in task scheduling on OpenStack deployments.

Clear Containers

The biggest innovation in the Clear Linux Project—and the one most directly tied to cloud use cases—is that of Clear Containers. In a Linux context, containers constitute a version of OS-level virtualization that provides self-contained execution environments—including distinct, isolated processing, memory, input/output (I/O), and network resources—that share the kernel of the host operating system. Containers are popular because they afford a quick creation, update, and uninstall cycle that is easy to use and simple to manage, particularly when contrasted with traditional VMs. They also consume fewer resources on the host computer, allowing the same host hardware to run more containers than VMs.
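
The isolation that containers provide is built on kernel namespaces. The short sketch below is a teaching illustration only, not how any particular container runtime is implemented: it starts a child process in its own hostname and PID namespaces, so the child sees itself as PID 1 and can change its hostname without affecting the host, yet both processes share the same kernel. It must be run as root.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024]; /* stack for the cloned child */

static int child(void *arg)
{
    (void)arg;
    /* Visible only inside the new UTS namespace; the host keeps its hostname. */
    sethostname("container-demo", 14);
    printf("child: PID inside the namespace = %d\n", (int)getpid()); /* prints 1 */
    return 0;
}

int main(void)
{
    /* New hostname (UTS) and PID namespaces for the child. The kernel itself
     * is still shared with the host, which is what makes this a container-style
     * boundary rather than a VM. */
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid == -1) {
        perror("clone");
        return 1;
    }
    printf("parent: child PID as seen from the host = %d\n", (int)pid);
    waitpid(pid, NULL, 0);
    return 0;
}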

Where containers fall down compared to VMs is security: because every container shares the host’s kernel, a kernel vulnerability exploited from inside one container can expose the host and every other container, whereas VMs keep workloads behind hardware-enforced isolation boundaries.

The Clear Linux Project aims to thread a middle path, capturing most of the speed and simplicity of containers while providing the security of full-blown VMs through the performance boosts available in Intel Virtualization Technology (Intel VT). The Clear Linux Project optimizes the Linux kernel, systemd, the Quick Emulator (QEMU) hypervisor, and user-space memory consumption. It also makes use of new kernel features that permit faster filesystem access and secure memory-page sharing. All of this speeds Intel Clear Container performance and lightens containers’ draw on host-computer resources. As the Clear Linux Project website (https://clearlinux.org/container) reports, Intel testing shows Clear Container creation speeds to be slightly slower than those for the fastest Docker startup using kernel namespaces, but fast enough for most applications and with greater security than traditional containers provide.

Intel Clear Containers do not need to supersede other container tooling running on the Clear Linux Project. Instead, Clear Containers add a new runtime for Docker (specifically, a plugin that replaces runc with cor, Intel’s Open Container Initiative [OCI] runtime). Clear Containers are OCI-spec compatible and provide a switchable runtime in Docker 1.12. Clear Containers also integrate with container software-defined networking solutions such as FD.io’s Vector Packet Processing (VPP) project and Cisco’s Project Contiv.

Clear Linux Project: An Additional Option for Your Cloud Infrastructure

The Clear Linux Project is not for everyone. Like a high-performance racecar, this new Linux distribution was designed by Intel specifically for speed rather than for general use. The Clear Linux Project’s sights are definitely set on the cloud, particularly with its built-in (and highly optimized) orchestration and container features. For organizations looking for better performance and security in their open-source cloud deployments, however, the Clear Linux Project provides additional options for their IT infrastructures. Such benefits can also flow to hybrid-cloud deployments now that Microsoft has announced the availability of the Clear Linux Project in the Azure Marketplace (https://azure.microsoft.com/en-us/blog/announcing-the-availability-of-clear-linux-os-in-azure-marketplace/). Find out more about the Clear Linux Project on the project’s website at https://clearlinux.org/ or on GitHub at https://github.com/clearlinux.
