Portainer: Embracing GitOps for a Streamlined Workflow

Portainer has published an article titled “GitOps - The Path Forward” that explores the concept of GitOps and how it can be implemented using the Portainer platform. The article begins by discussing the importance of adhering to compliance standards such as GDPR and the need for secure cloud environments. GitOps is presented as a recommended operational framework for running infrastructure and development workflows in a way that ensures compliance and keeps infrastructure effectively managed.

The article goes on to explain the fundamental concepts of GitOps, including automation, version control, continuous integration/continuous delivery, auditing, compliance, version rollback, and collaboration. It highlights the requirements for implementing GitOps, such as Infrastructure as Code (IaC), pull request reviews, CI/CD pipelines, automation, version control, auditability, rollback and forward capabilities, and collaboration.

The article then focuses on how Portainer facilitates the implementation of GitOps, noting that the platform offers a suite of tools designed for GitOps that covers access control, automation, and visibility. Portainer’s role-based access control (RBAC) provides fine-grained access to Kubernetes platforms and container runtime environments, and it integrates with authentication providers such as LDAP and Microsoft Active Directory. The article further explains how Portainer enables GitOps automation by connecting to Git repositories and automating application deployment to Kubernetes clusters and container environments, and how it supports update and monitoring workflows for GitOps operations through container logs, authentication logs, and event lists.
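
To make the pull-based model concrete, the sketch below is a minimal, conceptual GitOps loop written in Go: it polls a Git repository, and whenever the HEAD commit changes it applies the manifests in that repository to the cluster with kubectl. This is not Portainer’s implementation; the repository path, manifest directory, and polling interval are illustrative assumptions, and the program expects git and kubectl to be on the PATH.

```go
package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

// run executes a command and returns its combined output as a trimmed string.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	repoDir := "/opt/gitops/app-config" // local clone of the config repo (assumption)
	lastApplied := ""

	for {
		// Pull the latest desired state from Git.
		if _, err := run("git", "-C", repoDir, "pull", "--ff-only"); err != nil {
			log.Printf("git pull failed: %v", err)
		}

		head, err := run("git", "-C", repoDir, "rev-parse", "HEAD")
		if err != nil {
			log.Printf("cannot read HEAD: %v", err)
		} else if head != lastApplied {
			// Desired state changed in Git: converge the cluster towards it.
			if out, err := run("kubectl", "apply", "-f", repoDir+"/manifests"); err != nil {
				log.Printf("kubectl apply failed: %v\n%s", err, out)
			} else {
				log.Printf("applied commit %s", head)
				lastApplied = head // the commit hash doubles as an audit/rollback marker
			}
		}

		time.Sleep(30 * time.Second) // polling interval (assumption)
	}
}
```

Rollback then amounts to reverting or checking out an earlier commit: the loop converges the cluster to whatever Git says, which is the audit and rollback property the article emphasizes.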

In conclusion, the article emphasizes that GitOps is a contemporary methodology for managing infrastructure and applications, and leveraging GitOps strategies like auditing, rollback, and roll forward can enhance operational agility, reliability, and compliance. The article highlights the benefits of using the Portainer platform for implementing GitOps, including RBAC, automation, and monitoring capabilities.

Argo CD Releases Version v2.9.0 - Streamlined Continuous Delivery for Kubernetes

Argo CD, a declarative, GitOps continuous delivery tool for Kubernetes, has announced the release of version v2.9.0. This release includes a total of 368 contributions from 144 contributors, with 73 new features and 59 bug fixes.

Features

  • Retry logic for Kubernetes client
  • Grace period for repository errors
  • Examples added to help output for admin.go file
  • Examples added to help output for argocd_server.go file
  • ‘Both’ option added for ui.bannerposition
  • PKCE authentication flow for web logins
  • Example added for generate-allow-list command
  • Examples added to help output for get KEYID command
  • Examples added to help output for “gpg list” command
  • Examples added to help output for remaining “create PROJECT ROLE-NAME” commands
  • Examples added to argocd proj role cli family
  • Admin app example added to CLI
  • Project flag added to avoid permission denied errors on 404
  • Notification secrets exposed for request payload templating
  • Git requests made configurable
  • Write back added to application informer
  • Print stderr output from command even on success
  • Examples added to help output for “list PROJECT” command
  • Examples added to help output for “gpg add” command
  • Recursive Helm Values files detection in UI
  • Rate limited queue implemented (see the workqueue sketch after this list)
  • Examples added to help output for remaining “get APPNAME” commands
  • Repocreds list example added to CLI
  • Cluster list example added to CLI
  • Examples added to help output for “delete PROJECT ROLE-NAME” command
  • Examples added to projectwindows.go
  • Health check added for iammanager.keikoproj.io/Iamrole
  • Examples added to help output for “generate-spec PROJECT” command
  • Repo example added to CLI
  • Examples added to help output for “set APPNAME” command
  • Examples added to help output for “logs APPNAME” command
  • Example added to argocd relogin command
  • Examples added to help output for all “argocd proj” commands
  • Example added to help output for bcrypt command
  • fromYaml, fromYamlArray, and toYaml functions added to appset
  • Example added to help output for app actions command
  • Examples added to help output for remaining “argocd account” commands
  • Examples added to help output for remaining “argocd repocreds” commands
  • Example added to help output for context command
  • Individual e2e tests retried in CI
  • ignoreApplicationDifferences added to appset
  • PushSecret health status and force-sync action implemented
  • AnsibleJob CRD health checks implemented
  • Patches field added to Kustomize
  • Support for Azure DevOps webhooks added to appset
  • Dynamic rebalancing of clusters across shards implemented
  • Tree option added to output flag for app sync, app wait, and app rollback commands
  • Automatically apply extension configs without restarting the API server
  • patch_ms and setop_ms timings added to reconciliation logs
  • Button added for wrapping lines in pod logs viewer
  • Option added to output flag for app get and app resources commands for tree view
  • Appset preserve labels and global preserve fields added
  • HAProxy metrics enabled through the Helm chart
  • Shorthand flags added for follow and container in app logs command
  • ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE environment variable added
  • RBAC validation command now takes either namespace or policy-file
  • Timezone added to projectwindows list
  • Dark theme improvements in UI
  • Auto-sync now handles ‘another operation is already in progress’ error
  • ApplicationSet now deletes Application status
  • Various bug fixes and improvements implemented
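
As a rough illustration of the rate-limited queue idea flagged in the list above, the sketch below uses the standard client-go workqueue that Kubernetes controllers typically rely on: failed items are requeued with exponential backoff instead of being retried in a hot loop. This is not Argo CD’s actual code; the application name and the failure simulation are made up, and the program assumes a Go module that depends on k8s.io/client-go.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

var attempts int

// fakeSync simulates a reconcile attempt that fails twice before succeeding.
func fakeSync(app string) error {
	attempts++
	if attempts < 3 {
		return fmt.Errorf("transient error syncing %s", app)
	}
	return nil
}

func main() {
	// The default limiter combines per-item exponential backoff with an
	// overall token bucket, so misbehaving items back off progressively.
	q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	defer q.ShutDown()

	q.Add("guestbook") // hypothetical application to reconcile

	for {
		item, shutdown := q.Get()
		if shutdown {
			return
		}

		if err := fakeSync(item.(string)); err != nil {
			// Requeue with backoff rather than retrying immediately.
			q.AddRateLimited(item)
			fmt.Printf("retry #%d for %v: %v\n", q.NumRequeues(item), item, err)
			q.Done(item)
			continue
		}

		q.Forget(item) // success: reset this item's backoff counter
		q.Done(item)
		fmt.Printf("synced %v\n", item)
		return
	}
}
```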

For the full changelog and more information, please visit the release-2.8…v2.9.0 comparison.

K3s Unveils Latest Version v1.28.3+k3s2

K3s, a lightweight and highly available certified Kubernetes distribution, has released version v1.28.3+k3s2. This version is designed for production workloads in resource-constrained and remote locations, as well as inside IoT appliances. K3s comes as a single binary that is less than 70MB in size, making it easy to install, run, and auto-update a production Kubernetes cluster.

The latest release updates Kubernetes to version v1.28.3 and addresses several issues. Some of the changes since v1.28.3+k3s1 include:

  • Restoration of the SELinux context for the systemd unit file
  • Update of channel to v1.27.7+k3s1
  • Bump of Sonobuoy version
  • Bump of Trivy version
  • Fix for accessing outer scope .SystemdCgroup, which resolves issues with starting with nvidia-container-runtime
  • Upgrade of traefik chart to v25.0.0
  • Update of traefik to fix registry value
  • Improvement to not use iptables-save/iptables-restore if it will corrupt rules

The components and versions included in this release are as follows:

  • Kubernetes v1.28.3
  • Kine v0.10.3
  • SQLite 3.42.0
  • Etcd v3.5.9-k3s1
  • Containerd v1.7.7-k3s1
  • Runc v1.1.8
  • Flannel v0.22.2
  • Metrics-server v0.6.3
  • Traefik v2.10.5
  • CoreDNS v1.10.1
  • Helm-controller v0.15.4
  • Local-path-provisioner v0.0.24

For more information on the release and its features, refer to the Kubernetes release notes.

Overall, this new release of K3s brings important updates and fixes to enhance the performance and reliability of Kubernetes clusters in production environments.
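
For anyone who wants to confirm that an upgraded cluster is actually serving v1.28.3, a small client-go program along the lines of the hedged sketch below can query the API server and the kubelet versions. The kubeconfig path shown is the usual K3s default but may differ on your system, and the program assumes a Go module that depends on k8s.io/client-go.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// K3s typically writes its kubeconfig here; adjust if yours differs.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/rancher/k3s/k3s.yaml")
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("create client: %v", err)
	}

	// API server version, e.g. something like v1.28.3+k3s2.
	info, err := clientset.Discovery().ServerVersion()
	if err != nil {
		log.Fatalf("server version: %v", err)
	}
	fmt.Println("API server:", info.GitVersion)

	// Kubelet version reported by each node.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list nodes: %v", err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: %s\n", n.Name, n.Status.NodeInfo.KubeletVersion)
	}
}
```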

Source: K3s

Longhorn Unveils Latest Update: Longhorn v1.4.4 Release

Longhorn, a distributed block storage system for Kubernetes, has released version v1.4.4. This release includes various enhancements, improvements, bug fixes, and stability and resilience updates. Notable improvements include the addition of disk status Prometheus metrics, improved log levels for resource update failures, and support for both NFS hard and soft mounts with custom timeo and retrans options for RWX volumes. Bugs related to volume synchronization, attaching/detaching loops, and volume mounting have also been addressed. The release aims to provide a more stable and reliable storage solution for Kubernetes environments. For more information, you can visit the Longhorn v1.4.4 release page.

OpenFaaS Releases Version 0.27.3 Update

OpenFaaS has released version 0.27.3, an update that makes it even easier for developers to deploy event-driven functions and microservices to Kubernetes. With OpenFaaS, developers can package their code or an existing binary in an OCI-compatible image, resulting in a highly scalable endpoint with auto-scaling and metrics.
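
For context on what such a function looks like, here is a minimal handler roughly as it would appear with OpenFaaS’s golang-middleware template (typically scaffolded with faas-cli new --lang golang-middleware); the greeting logic is purely illustrative, and other templates use different handler signatures.

```go
// handler.go, in the layout used by the golang-middleware template.
package function

import (
	"fmt"
	"io"
	"net/http"
)

// Handle is invoked for every incoming request; the function is packaged
// into an OCI-compatible image and scaled by the platform.
func Handle(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()

	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "could not read request body", http.StatusBadRequest)
		return
	}

	fmt.Fprintf(w, "Hello from OpenFaaS, you said: %q\n", string(body))
}
```

Once built, pushed, and deployed (for example with faas-cli up), the function is served behind the gateway with the auto-scaling and metrics mentioned above.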

The changelog for version 0.27.3 includes the following updates:

  • PR #1816 removes duplicates and fixes the order of adopters. This contribution was made by @nitishkumar71.
  • PR #1810 updates the contributing guide by removing references to the deprecated io/ioutil. This contribution was made by @testwill.

The update also includes several other commits from various contributors.

To see a detailed list of changes between versions 0.27.2 and 0.27.3, you can visit the comparison page.

OpenFaaS version 0.27.3 is another step forward in providing developers with a powerful and easy-to-use platform for deploying serverless functions and microservices. With its focus on Kubernetes and its extensive list of features and updates, OpenFaaS continues to be a popular choice for those interested in servers, Linux, DevOps, and home labs.

Mixtile Cluster Box: Unleash the Power of Four Rockchip RK3588 SBCs over PCIe

The Mixtile Cluster Box is a server enclosure designed for small business applications and edge computing. It consists of four Mixtile Blade 3 Pico-ITX single board computers (SBCs), each powered by a Rockchip RK3588 processor. The SBCs are connected to the enclosure via a 4-lane PCIe Gen3 interface through a U.2 to PCIe/SATA breakout board.

Mixtile has recently released the Cluster Box after finalizing the software and technical details. It is available for purchase on Mixtile’s website for $339, excluding the SBCs.

The specifications of the Mixtile Cluster Box include support for up to four Mixtile Blade 3 SBCs, each with up to 32GB LPDDR4 RAM and up to 256GB eMMC flash storage. The enclosure also features a control board running OpenWrt 22.03, with a MediaTek MT7620A MIPS processor, 256MB DDR2 system memory, and 16MB SPI flash storage.

The Cluster Box includes an ASMedia ASM2824 PCIe switch with four PCIe 3.0 4-lane ports. It also provides storage interfaces through four U.2 breakout boards, with four NVMe M.2 M-Key slots (PCIe 3.0 x2 each) and four SATA 3.0 ports. Networking capabilities are offered through a Gigabit Ethernet port.

The enclosure is equipped with two 60mm fans for cooling and a power button with a blue LED indicator. It is powered by a 19 to 19.5V/4.74A power supply through a DC jack. The Cluster Box measures 213 x 190 x 129 mm and has a metal case made of SGCC steel. It has an operating temperature range of 0°C to 80°C and a storage temperature range of -20°C to 85°C, with relative humidity of 10% to 90% during operation and 5% to 95% during storage.

Users can access the Mixtile Cluster Box through OpenWrt using SSH or a web interface. The Rockchip RK3588 boards come preloaded with a customized Linux system with Kubernetes. Each Mixtile Blade 3 can be controlled from OpenWrt using a command called “nodectl,” which lets users list active nodes, rescan nodes, power nodes on and off, reboot them, flash firmware, and open the console of a specific node.
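
As a rough sketch of that access path, the Go program below opens an SSH session to the enclosure’s OpenWrt control board and runs a command over it. The address and credentials are placeholders, the program assumes a Go module that depends on golang.org/x/crypto, and the same pattern would be used to drive the nodectl commands described above (whose exact syntax is documented by Mixtile and not reproduced here).

```go
package main

import (
	"fmt"
	"log"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder address and credentials for the OpenWrt control board.
	config := &ssh.ClientConfig{
		User:            "root",
		Auth:            []ssh.AuthMethod{ssh.Password("your-password")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a lab box, not for production
		Timeout:         5 * time.Second,
	}

	client, err := ssh.Dial("tcp", "192.168.1.1:22", config)
	if err != nil {
		log.Fatalf("ssh dial: %v", err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatalf("new session: %v", err)
	}
	defer session.Close()

	// Any shell command works here; Mixtile's nodectl tool would be invoked
	// the same way to manage the individual Blade 3 nodes.
	out, err := session.CombinedOutput("uname -a")
	if err != nil {
		log.Fatalf("run command: %v", err)
	}
	fmt.Printf("%s", out)
}
```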

For more technical details and a getting started guide, users can refer to the documentation website provided by Mixtile.

Overall, the Mixtile Cluster Box offers a compact and powerful solution for building a four-node server cluster with Rockchip RK3588 SBCs. With its PCIe connectivity, storage options, and OpenWrt software, it provides a versatile platform for various server, Linux, DevOps, and home lab applications.

Source: CNX Software – Embedded Systems News.