Posts for: #2023

Matrix 2.0: A Glimpse into the Future of Matrix

Matrix has been providing an open standard for secure, decentralized communication for the open Web for over 9 years. With over 111 million Matrix IDs, 17 million rooms, and 64,000 servers, Matrix has become a crucial tool for organizations looking for secure, self-sovereign communication. The importance of decentralization is becoming increasingly evident in today’s world, with the risks of centralized Internet services becoming more apparent. Matrix 2.0 aims to provide the missing communication layer for the open Web, with improvements in usability, performance, and stability.

Matrix 2.0 introduces several new features:

  1. Sliding Sync: This new sync API allows for instant login, launch, and sync, ensuring that the data essential for rendering the user interface is loaded instantly, regardless of the number or size of rooms (see the sketch after this list).

  2. Native OIDC: Matrix 2.0 replaces its existing authentication APIs with industry-standard OpenID Connect (OIDC), improving the security and maintainability of Matrix’s authentication.

  3. Native Group VoIP: Matrix 2.0 introduces native group voice and video calling. This feature allows for end-to-end encrypted, scalable group calling and is built on top of matrix-js-sdk.
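To make the Sliding Sync idea (item 1 above) concrete, here is a minimal, self-contained sketch of the underlying concept: the client asks the server for a window over its room list, ordered by recency, instead of syncing every room up front. All of the types and names below are hypothetical illustrations, not the actual Sliding Sync or matrix-rust-sdk API.

```rust
// Conceptual illustration of Sliding Sync: the client requests only a
// window (range) of its room list, ordered by most-recent activity,
// instead of downloading state for every room at login.
// All types here are hypothetical, for illustration only.

#[derive(Debug, Clone)]
struct Room {
    name: String,
    last_activity_ts: u64, // timestamp of the latest event in the room
}

/// Stand-in for the server side, which keeps the canonical room list.
struct Server {
    rooms: Vec<Room>,
}

impl Server {
    /// Return only the rooms inside the requested window, newest first.
    /// The key property: response size depends on the window, not on
    /// the total number of rooms the user is in.
    fn sliding_window(&self, start: usize, end: usize) -> Vec<Room> {
        let mut sorted = self.rooms.clone();
        sorted.sort_by(|a, b| b.last_activity_ts.cmp(&a.last_activity_ts));
        sorted.into_iter().skip(start).take(end.saturating_sub(start)).collect()
    }
}

fn main() {
    let server = Server {
        rooms: vec![
            Room { name: "#rust".into(), last_activity_ts: 300 },
            Room { name: "#matrix".into(), last_activity_ts: 500 },
            Room { name: "#random".into(), last_activity_ts: 100 },
        ],
    };

    // Render the first screenful instantly: ask only for rooms 0..2.
    for room in server.sliding_window(0, 2) {
        println!("{} (last active at {})", room.name, room.last_activity_ts);
    }
}
```

In the real protocol the server maintains the sorted list and streams updates for the requested ranges, but the effect is the same: the cost of login and launch is proportional to what is on screen, not to the size of the account.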

Matrix 2.0 has been implemented primarily by Element, using its Element X client as a test-bed. The implementation has been driven by the matrix-rust-sdk codebase, with Element X showcasing the new features. The matrix-rust-sdk provides high-level APIs for efficiently managing room lists, room timelines, and UI components, letting developers focus on building UI rather than handling Matrix internals.

The Sliding Sync feature in matrix-rust-sdk has undergone significant development, addressing challenges related to room ordering and user experience on mobile. The current implementation maintains an ‘all rooms’ list, syncing room details in the background to enable instant room search and responsive UI.

The Native Group VoIP implementation has evolved from full-mesh conferencing to a selective forwarding unit (SFU) approach, enabling scalable group voice and video calling with support for hundreds of users per call. The implementation combines elements of MSC3401 with LiveKit’s existing signaling.

Matrix 2.0 also introduces Native OIDC support, with matrix-authentication-service providing OIDC support for Synapse. Element X implements account registration, login, and management using Native OIDC.

While Matrix 2.0 is now available for developers to explore and implement, work is still ongoing. This includes refining Sliding Sync based on lessons learned, stabilizing and maturing the Matrix 2.0 MSCs, adding encrypted backups to matrix-rust-sdk, and reintroducing full-mesh support for Native Matrix Group VoIP calling.

Matrix 2.0 marks a significant milestone in the development of Matrix, with improvements in usability, performance, and security. The Matrix team is excited about the future of Matrix and the potential it holds for decentralized communication.

Source: Matrix.

Introducing the ZX05: A Compact PC with Intel Alder Lake-N Starting at $150

Liliputing reports on the ZX05 mini PC, a compact computer that measures just 145 x 62 x 20mm (5.7″ x 2.4″ x 0.8″), making it small enough to fit in a pocket. Despite its size, it is a full-fledged computer with a 6-watt Intel Alder Lake-N processor and 12GB of RAM. It is currently available from AliExpress for $150 and up.

The starting price includes a model with 12GB of LPDDR5 RAM soldered to the mainboard, but no storage or operating system. However, users can provide their own storage through the computer’s M.2 2280 slot with support for a PCIe NVMe SSD. Alternatively, there are options to purchase models that come with an SSD, with prices ranging from $160 for a 128GB SSD to $200 for a 1TB model. While a spec sheet on the product page suggests that the computer may be available with optional Intel Processor N200, Core i3-N300, or Core i3-N305 Alder Lake-N processors, and up to 16GB of RAM, these configurations are not currently available for purchase.

In terms of ports, all versions of the ZX05 mini PC include 2 x HDMI, 1 x Gigabit Ethernet, 3 x USB 3.2 Gen 1 Type-A, 1 x 3.5mm audio, and 1 x USB Type-C for power input. The system also features an Intel AX201 wireless card with support for WiFi 6 and Bluetooth 5.2, as well as a CR2032 battery for a real-time clock.

Paired with an NVMe SSD, this compact computer could make an interesting small server, for example as an x86_64 alternative to the Raspberry Pi, albeit without GPIO and similar features.

Source: Liliputing.

Cloud Hypervisor Releases Version v35.0 of Open Source Virtual Machine Monitor

Cloud Hypervisor, an open-source Virtual Machine Monitor (VMM), has announced the release of version v35.0. This VMM runs on top of the KVM hypervisor and the Microsoft Hypervisor (MSHV). The Cloud Hypervisor project focuses on running modern cloud workloads on specific, common hardware architectures. Cloud workloads, in this context, are those run by customers within a Cloud Service Provider: modern operating systems with most I/O handled by paravirtualized devices (such as virtio), no requirement for legacy devices, and 64-bit CPUs.

Implemented in Rust and based on the Rust VMM crates, Cloud Hypervisor offers several user-visible changes and improvements in this release. Some of the notable updates include:

  • virtio-vsock Support for Linux Guest Kernel v6.3+: With guest kernels 6.3 and newer, a vsock packet can be carried in a single descriptor rather than split across two. The virtio-vsock implementation in Cloud Hypervisor now supports both layouts.

  • User Specified Serial Number for virtio-block: A new serial option has been added to the --disk argument, allowing users to specify a serial number for block devices that will be visible to the guest (see the sketch after this list).

  • vCPU TSC Frequency Included in Migration State: This enhancement ensures successful migration between hosts with different TSC frequencies when the guest is running with TSC as the source of timekeeping.
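As a concrete illustration of the new serial option, the sketch below launches Cloud Hypervisor from Rust with a block device carrying a user-specified serial number. The file paths are placeholders, and the serial= key-value form is an assumption based on Cloud Hypervisor’s usual comma-separated disk syntax; consult the v35.0 documentation for the authoritative form.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Launch a guest with a virtio-block device that exposes a
    // user-specified serial number to the guest (new in v35.0).
    // Paths are placeholders; `serial=` follows the comma-separated
    // key=value style of Cloud Hypervisor's disk options (assumption).
    let status = Command::new("cloud-hypervisor")
        .args([
            "--kernel", "/path/to/vmlinux",
            "--disk", "path=/path/to/disk.img,serial=CLOUDHV001",
            "--cpus", "boot=2",
            "--memory", "size=1024M",
        ])
        .status()?;

    // Inside the guest, the serial should then be visible, e.g. at
    // /sys/block/vda/serial for a virtio-block device.
    println!("cloud-hypervisor exited with: {status}");
    Ok(())
}
```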

In addition to these improvements, the release also includes several bug fixes, addressing issues like concurrent CPU resizing, handling of APIC EOI messages for MSHV, memory offset calculations, spell check, block device alignment, and latency counter for block devices.

The release of version v35.0 of Cloud Hypervisor is the result of contributions from various contributors, including Alyssa Ross, Anatol Belski, Bo Chen, Christian Blichmann, Jianyong Wu, Jinank Jain, Julian Stecklina, Omer Faruk Bayram, Philipp Schuster, Rob Bradford, Ruslan Mstoi, Thomas Barrett, Wei Liu, Yi Wang, and zhongbingnan.

For more details about the release and the Cloud Hypervisor project, visit the Cloud Hypervisor v35.0 release page.

README Highlight Friday #38, 2023: K3s

In this week’s issue of README Highlight Friday, we are taking a look at K3s, a lightweight Kubernetes distribution that is production-ready, easy to install, and consumes half the memory of upstream Kubernetes. The binary size of K3s is less than 100 MB, making it a great choice for edge computing, IoT, CI/CD, development, ARM-based systems, and situations where a deep understanding of Kubernetes is not feasible.

K3s is fully conformant with Kubernetes and includes several changes to improve its performance and simplicity. It is packaged as a single binary and supports sqlite3 as the default storage backend, with options for etcd, MySQL, and PostgreSQL as well. K3s wraps Kubernetes and other components in a single launcher, making it secure by default with reasonable defaults for lightweight environments. It has minimal OS dependencies, requiring only a sane kernel and cgroup mounts.
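For example, switching from the embedded sqlite default to an external datastore is a single server flag. Here is a minimal sketch, in Rust for consistency with the other examples here, passing the documented --datastore-endpoint option; the MySQL connection string is a placeholder.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // A default K3s install uses embedded sqlite; pointing it at an
    // external datastore is one flag. The DSN below is a placeholder.
    let status = Command::new("k3s")
        .args([
            "server",
            "--datastore-endpoint",
            "mysql://user:pass@tcp(db.example.com:3306)/k3s",
        ])
        .status()?;
    println!("k3s server exited with: {status}");
    Ok(())
}
```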

The distribution bundles several technologies together, including Containerd and runc as container runtimes, Flannel for CNI, CoreDNS for DNS, Metrics Server for resource monitoring, Traefik for ingress, klipper-lb as an embedded service load balancer provider, kube-router for network policy, helm-controller for deploying helm manifests, Kine as a datastore shim, and local-path-provisioner for provisioning volumes using local storage. In addition, K3s includes host utilities such as iptables/nftables, ebtables, ethtool, and socat.

K3s simplifies Kubernetes operations by managing TLS certificates, the connection between worker and server nodes, and auto-deploying Kubernetes resources from local manifests in real-time. It also has plans to manage an embedded etcd cluster in the future.
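Since K3s watches a manifests directory on server nodes and applies anything placed there, deploying a resource can be as simple as writing a file. The sketch below drops a trivial manifest into the standard auto-deploy path of a default install; the namespace name is just an example.

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // K3s auto-applies any manifest placed in this directory on server
    // nodes, and re-applies it when the file changes (default install path).
    let manifest_dir = "/var/lib/rancher/k3s/server/manifests";

    // A trivial example resource.
    let manifest = r#"apiVersion: v1
kind: Namespace
metadata:
  name: demo
"#;

    // Writing the file is enough; no explicit `kubectl apply` is needed.
    fs::write(format!("{manifest_dir}/demo-namespace.yaml"), manifest)?;
    println!("manifest written; K3s will apply it automatically");
    Ok(())
}
```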

K3s is not a fork of Kubernetes but a distribution that aims to remain as close to upstream Kubernetes as possible. It maintains a small set of patches, important to its use case and deployment model, while contributing changes back to upstream projects whenever possible.

K3s achieves its small footprint by running many components inside a single process, reducing memory overhead. The binary size is further reduced by removing third-party storage drivers and cloud providers that can be replaced with out-of-tree alternatives like CSI and CCM.

K3s follows the release cadence of upstream Kubernetes, shipping patch releases within one week and new minor releases within 30 days. K3s version numbers mirror the upstream Kubernetes version being packaged, with an added postfix (such as +k3s1) so that multiple K3s releases can be cut against the same upstream version while remaining semver compliant.

Complete documentation is available on the official docs site. K3s can be installed with the install.sh script, which downloads K3s and sets it up as a service; the script also installs utilities such as kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh (see the sketch below). Alternatively, users can download the K3s binary manually and run the server.
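As a quick illustration of that flow, the sketch below shells out from Rust to run the documented install command and then lists nodes with the bundled kubectl. The commands are the ones from the K3s docs; the Rust wrapper is only for consistency with the other examples here.

```rust
use std::process::Command;

fn run(cmd: &str) -> std::io::Result<()> {
    // Run each step through `sh -c` so the pipe in the install
    // command behaves as it would in a terminal.
    let status = Command::new("sh").args(["-c", cmd]).status()?;
    assert!(status.success(), "command failed: {cmd}");
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Documented one-liner: downloads K3s and sets it up as a service.
    run("curl -sfL https://get.k3s.io | sh -")?;

    // The install script also sets up a bundled kubectl wrapper.
    run("k3s kubectl get nodes")?;
    Ok(())
}
```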

Contributions to K3s are welcome, and interested individuals can check out the contributing guide for more information. Security issues in K3s can be reported by sending an email to security@k3s.io.

Improved Performance and Power Efficiency with Linux 6.5 and AMD P-State EPP Default for Ryzen Servers

Linux 6.5 defaults to the AMD P-State EPP driver on Zen 2 and newer Ryzen systems that support ACPI CPPC, while AMD EPYC server processors continue to use ACPI CPUFreq by default. Given the growing interest in the AMD Ryzen 7000 series for budget and small-to-medium-sized business (SMB) servers, Phoronix analyzed the performance impact of Linux 6.5 under server workloads.

Testing compared Linux 6.4 against Linux 6.5 out-of-the-box, using the Ubuntu Mainline Kernel PPA for easy reproducibility. The default change moves from ACPI CPUFreq with the Schedutil governor to AMD P-State EPP with the powersave governor; additional runs used the performance governor to gauge maximum performance. AMD P-State is available in earlier kernels but is not used out-of-the-box until Linux 6.5 and later. The test system was the ASRock Rack 1U4LW-B650/2L2T, a 1U Ryzen AM5 server platform supporting Ryzen 7000 series processors and ECC memory. Nothing else was changed during testing beyond swapping the kernel and running the secondary performance-governor tests; the minor CPU clock frequency differences shown in the automated system table did not affect the results.
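For readers who want to check what their own machine is using, here is a minimal sketch that reads the standard Linux cpufreq sysfs attributes. On a Ryzen system running Linux 6.5 with the new default, the driver should report amd-pstate-epp with the powersave governor; older kernels typically report acpi-cpufreq.

```rust
use std::fs;

fn read_attr(name: &str) -> String {
    // Generic Linux cpufreq sysfs attributes, read for CPU 0.
    let path = format!("/sys/devices/system/cpu/cpu0/cpufreq/{name}");
    fs::read_to_string(&path)
        .map(|s| s.trim().to_string())
        .unwrap_or_else(|_| "<not available>".into())
}

fn main() {
    println!("driver:   {}", read_attr("scaling_driver"));
    println!("governor: {}", read_attr("scaling_governor"));
    // Only present with the EPP driver; sets the performance/power bias.
    println!("EPP:      {}", read_attr("energy_performance_preference"));
}
```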

The full article provides useful data points for anyone considering Ryzen processors for server applications.

Source: Phoronix.

PuzzleFS: Aims to be Top File-System Choice for Containers

PuzzleFS has been quietly making progress as a new file-system designed specifically for containers, writes Phoronix. Developed by Cisco engineers, PuzzleFS aims to address the limitations of the OCI (Open Container Initiative) and is written in the Rust programming language.

The kernel driver for PuzzleFS, also written in Rust, is currently developed outside the mainline Linux kernel, as the necessary Rust abstractions have not yet landed in mainline. PuzzleFS pursues several design goals, including immutability, reduced duplication, reproducible image builds, direct mounting support, data integrity, and memory safety guarantees. The file-system also supports optional Zstd compression.
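The reduced-duplication goal comes from storing file data as content-addressed chunks, so identical data shared between images is stored only once. The sketch below illustrates that general idea in self-contained Rust; it uses fixed-size chunks and the standard library hasher purely for illustration, whereas PuzzleFS itself uses content-defined chunking with a proper cryptographic hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Store a blob as chunks keyed by the hash of their contents, so that
/// identical chunks (within a file or across images) are stored once.
/// Returns the "manifest": the ordered list of chunk ids for this blob.
fn add_blob(store: &mut HashMap<u64, Vec<u8>>, data: &[u8], chunk_size: usize) -> Vec<u64> {
    data.chunks(chunk_size)
        .map(|chunk| {
            // DefaultHasher stands in for a cryptographic hash here.
            let mut h = DefaultHasher::new();
            chunk.hash(&mut h);
            let id = h.finish();
            store.entry(id).or_insert_with(|| chunk.to_vec());
            id
        })
        .collect()
}

fn main() {
    let mut store = HashMap::new();
    // Two "images" that share most of their content...
    let a = add_blob(&mut store, b"shared base layer | app v1", 8);
    let b = add_blob(&mut store, b"shared base layer | app v2", 8);
    // ...end up referencing mostly the same chunks.
    println!(
        "manifests: {} + {} chunks, store holds {} unique chunks",
        a.len(), b.len(), store.len()
    );
}
```

With fixed-size chunks, a one-byte insertion would shift every later boundary and defeat deduplication, which is exactly why PuzzleFS opts for content-defined chunking instead.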

Source: Phoronix.