Getting Started with Kubernetes on CentOS 7
Kubernetes is an open-source platform developed by Google for managing containerized applications across a cluster of servers. It builds on Google's decade and a half of experience running containers at scale, and gives developers Google-style infrastructure by leveraging best-of-breed open-source projects, such as:
- Docker: an application container technology.
- Etcd: a distributed key-value datastore that manages cluster-wide information and provides service discovery.
- Flannel: an overlay network fabric enabling container connectivity across multiple servers.
Kubernetes lets developers define their application infrastructure declaratively through YAML files and abstractions such as Pods, RCs, and Services (more on these later), and ensures that the underlying cluster matches the user-defined state at all times.
Some of its features include:
- Automatic placement of application containers across the cluster based on resource requirements.
- Scaling applications on the fly with a single command.
- Rolling updates with zero downtime.
- Self-healing: automatic rescheduling of applications when a server fails, automatic container restarts, and health checks.
Skip ahead to Installation if you’re already familiar with Kubernetes.
Basic concepts
Kubernetes offers the following abstractions (logical units) to developers:
- Pods.
- Replication controllers.
- Labels.
- Services.
Pods
The pod is the basic unit of a Kubernetes workload. A pod models an application-specific "logical host" in a containerized environment. In layman's terms, it models a group of applications or services that used to run on the same server in the pre-container world. Containers inside a pod share the same network namespace and can also share data volumes.
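As a quick illustration (a minimal sketch using the public nginx image, not part of the Selenium deployment later in this guide), a bare pod manifest looks like this:

# nginx-pod.yaml -- illustrative example only
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80

It would be created with kubectl create -f nginx-pod.yaml. In practice you will rarely create bare pods like this; the replication controllers described next manage them for you.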
Replication controllers
Pods are great for grouping multiple containers into logical application units, but they don’t offer replication or rescheduling in case of server failure.
This is where a replication controller, or RC, comes in handy. An RC ensures that a specified number of pods of a given service is always running across the cluster.
Labels
Labels are key-value metadata that can be attached to any Kubernetes resource (pods, RCs, services, nodes, …).
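For example, once the Selenium pods later in this guide are running, their labels can be used to filter kubectl output (a small usage sketch; kubectl label can also tag existing resources):

# List only the pods that carry the label name=selenium-hub
kubectl get pods -l name=selenium-hub

# Attach an extra label to an existing resource, e.g. a node
kubectl label node kube-node1 role=testing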
Services
Pods and replication controllers are great for deploying and distributing applications across a cluster, but pods have ephemeral IPs that change upon rescheduling or container restart.
A Kubernetes service provides a stable endpoint (fixed virtual IP + port binding to the host servers) for a group of pods managed by a replication controller.
Kubernetes cluster
In its simplest form, a Kubernetes cluster is composed of two types of nodes:
- 1 Kubernetes master.
- N Kubernetes nodes.
Kubernetes master
The Kubernetes master is the control unit of the entire cluster.
The main components of the master are:
- Etcd: a globally available datastore that stores information about the cluster and the services and applications running on the cluster.
- Kube API server: the main management hub of the Kubernetes cluster; it exposes a RESTful interface.
- Controller manager: handles the replication of applications managed by replication controllers.
- Scheduler: tracks resource utilization across the cluster and assigns workloads accordingly.
Kubernetes node
Kubernetes nodes are worker servers that are responsible for running pods.
The main components of a node are:
- Docker: a daemon that runs application containers defined in pods.
- Kubelet: the agent that manages the pods running on the local node.
- Kube-proxy: a network proxy that ensures correct routing for Kubernetes services.
Installation
In this guide, we will create a 3-node cluster using CentOS 7 servers:
- 1 Kubernetes master (kube-master)
- 2 Kubernetes nodes (kube-node1, kube-node2)
You can add as many extra nodes as you want later on following the same installation procedure for Kubernetes nodes.
All nodes
Configure hostnames and /etc/hosts on every server:
# /etc/hostname
kube-master
# or kube-node1, kube-node2

# append to /etc/hosts
replace-with-master-server-ip kube-master
replace-with-node1-ip kube-node1
replace-with-node2-ip kube-node2
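A quick sanity check from each server confirms that the names resolve to the right addresses (adjust to your own IPs):

# Run on every server; each hostname should answer from its configured IP
ping -c 1 kube-master
ping -c 1 kube-node1
ping -c 1 kube-node2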
Disable firewalld:
systemctl disable firewalld
systemctl stop firewalld
Kubernetes master
Install Kubernetes master packages:
yum install etcd kubernetes-master
Configuration:
# /etc/etcd/etcd.conf
# leave rest of the lines unchanged
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

# /etc/kubernetes/config
# leave rest of the lines unchanged
KUBE_MASTER="--master=http://kube-master:8080"

# /etc/kubernetes/apiserver
# leave rest of the lines unchanged
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://kube-master:2379"
Start Etcd:
systemctl start etcd
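Before continuing, you can verify that Etcd is reachable (a minimal check using the etcd v2 command-line client and HTTP API shipped with the CentOS package):

# Both commands should succeed and report a healthy member on port 2379
etcdctl cluster-health
curl http://kube-master:2379/version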
Install and configure Flannel overlay network fabric (this is needed so that containers running on different servers can see each other):
yum install flannel
Create a Flannel configuration file (flannel-config.json):
{ "Network": "10.20.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan", "VNI": 1 } }
Set the Flannel configuration in the Etcd server:
etcdctl set coreos.com/network/config < flannel-config.json
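To confirm the configuration was stored, you can read the key back:

# Should print the JSON document we just pushed
etcdctl get coreos.com/network/config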
Point Flannel to the Etcd server:
# /etc/sysconfig/flanneld
FLANNEL_ETCD="http://kube-master:2379"
Enable services so that they start on boot:
systemctl enable etcd
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable flanneld
Reboot server.
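After the reboot, a quick loop over the services enabled above confirms that everything on the master came back up (a simple sketch using systemctl):

# Each service should report "active"
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    echo -n "$svc: "
    systemctl is-active "$svc"
done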
Kubernetes node
Install Kubernetes node packages:
yum install docker kubernetes-node
The next two steps will configure Docker to use the overlayfs storage driver for better performance.
Delete the current docker storage directory:
systemctl stop docker
rm -rf /var/lib/docker
Change configuration files:
# /etc/sysconfig/docker
# leave rest of lines unchanged
OPTIONS='--selinux-enabled=false'

# /etc/sysconfig/docker-storage
# leave rest of lines unchanged
DOCKER_STORAGE_OPTIONS=-s overlay
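After Docker starts again (either now or after the reboot at the end of this section), you can confirm that the overlay driver is in use (an illustrative check):

systemctl start docker
docker info | grep -i 'storage driver'
# Expected output: Storage Driver: overlay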
Configure kube-node1 to use our previously configured master:
# /etc/kubernetes/config
# leave rest of lines unchanged
KUBE_MASTER="--master=http://kube-master:8080"

# /etc/kubernetes/kubelet
# leave rest of the lines unchanged
KUBELET_ADDRESS="--address=0.0.0.0"
# comment this line, so that the actual hostname is used to register the node
# KUBELET_HOSTNAME="--hostname_override=127.0.0.1"
KUBELET_API_SERVER="--api_servers=http://kube-master:8080"
Install and configure Flannel overlay network fabric (again – this is needed so that containers running on different servers can see each other):
yum install flannel
Point Flannel to the Etcd server:
# /etc/sysconfig/flanneld
FLANNEL_ETCD="http://kube-master:2379"
Enable services:
systemctl enable docker
systemctl enable flanneld
systemctl enable kubelet
systemctl enable kube-proxy
Reboot the server.
Test your Kubernetes server
After all of the servers have rebooted, check if your Kubernetes cluster is operational:
[root@kube-master ~]# kubectl get nodes
NAME         LABELS                              STATUS
kube-node1   kubernetes.io/hostname=kube-node1   Ready
kube-node2   kubernetes.io/hostname=kube-node2   Ready
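You can also verify that Flannel has allocated a subnet to each node by listing the reservations stored in Etcd (assuming the key prefix configured earlier):

# One subnet entry should appear per node
etcdctl ls coreos.com/network/subnets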
Example: Deploying a Selenium grid using Kubernetes
Selenium is a framework for automating browsers for testing purposes. It's a powerful tool in any web developer's arsenal.
Selenium grid enables scalable and parallel remote execution of tests across a cluster of Selenium nodes that are connected to a central Selenium hub.
Since Selenium nodes are themselves stateless, and the number of nodes we run is flexible depending on our testing workloads, this is a perfect candidate application to deploy on a Kubernetes cluster.
In the next section, we’ll deploy a grid consisting of 5 application containers:
- 1 central Selenium hub that will be the remote endpoint to which our tests will connect.
- 2 Selenium nodes running Firefox.
- 2 Selenium nodes running Chrome.
Deployment strategy
To automatically manage replication and self-healing, we’ll create a Kubernetes replication controller for each type of application container we listed above.
To provide developers who are running tests with a stable Selenium hub endpoint, we’ll create a Kubernetes service connected to the hub replication controller.
Selenium hub
Replication controller
# selenium-hub-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-hub
spec:
  replicas: 1
  selector:
    name: selenium-hub
  template:
    metadata:
      labels:
        name: selenium-hub
    spec:
      containers:
        - name: selenium-hub
          image: selenium/hub
          ports:
            - containerPort: 4444
Deployment:
[root@kube-master ~]# kubectl create -f selenium-hub-rc.yaml
replicationcontrollers/selenium-hub

[root@kube-master ~]# kubectl get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)       SELECTOR            REPLICAS
selenium-hub   selenium-hub   selenium/hub   name=selenium-hub   1

[root@kube-master ~]# kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
selenium-hub-pilc8   1/1       Running   0          50s

[root@kube-master ~]# kubectl describe pod selenium-hub-pilc8
Name:                           selenium-hub-pilc8
Namespace:                      default
Image(s):                       selenium/hub
Node:                           kube-node2/45.63.16.92
Labels:                         name=selenium-hub
Status:                         Running
Reason:
Message:
IP:                             10.20.101.2
Replication Controllers:        selenium-hub (1/1 replicas created)
Containers:
  selenium-hub:
    Image:              selenium/hub
    State:              Running
      Started:          Sat, 24 Oct 2015 16:01:39 +0000
    Ready:              True
    Restart Count:      0
Conditions:
  Type          Status
  Ready         True
Events:
  FirstSeen                         LastSeen                          Count  From                  SubobjectPath                      Reason     Message
  Sat, 24 Oct 2015 16:01:02 +0000   Sat, 24 Oct 2015 16:01:02 +0000   1      {scheduler }                                             scheduled  Successfully assigned selenium-hub-pilc8 to kube-node2
  Sat, 24 Oct 2015 16:01:05 +0000   Sat, 24 Oct 2015 16:01:05 +0000   1      {kubelet kube-node2}  implicitly required container POD  pulled     Successfully pulled Pod container image "gcr.io/google_containers/pause:0.8.0"
  Sat, 24 Oct 2015 16:01:05 +0000   Sat, 24 Oct 2015 16:01:05 +0000   1      {kubelet kube-node2}  implicitly required container POD  created    Created with docker id 6de00106b19c
  Sat, 24 Oct 2015 16:01:05 +0000   Sat, 24 Oct 2015 16:01:05 +0000   1      {kubelet kube-node2}  implicitly required container POD  started    Started with docker id 6de00106b19c
  Sat, 24 Oct 2015 16:01:39 +0000   Sat, 24 Oct 2015 16:01:39 +0000   1      {kubelet kube-node2}  spec.containers                    pulled     Successfully pulled image "selenium/hub"
  Sat, 24 Oct 2015 16:01:39 +0000   Sat, 24 Oct 2015 16:01:39 +0000   1      {kubelet kube-node2}  spec.containers                    created    Created with docker id 7583cc09268c
  Sat, 24 Oct 2015 16:01:39 +0000   Sat, 24 Oct 2015 16:01:39 +0000   1      {kubelet kube-node2}  spec.containers                    started    Started with docker id 7583cc09268c
Here we can see that Kubernetes has placed my selenium-hub container on kube-node2.
Service
# selenium-hub-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
spec:
  type: NodePort
  ports:
    - port: 4444
      protocol: TCP
      nodePort: 30000
  selector:
    name: selenium-hub
Deployment:
[root@kube-master ~]# kubectl create -f selenium-hub-service.yaml
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30000) to serve traffic.

See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
services/selenium-hub

[root@kube-master ~]# kubectl get services
NAME           LABELS                                    SELECTOR            IP(S)           PORT(S)
kubernetes     component=apiserver,provider=kubernetes   <none>              10.254.0.1      443/TCP
selenium-hub   <none>                                    name=selenium-hub   10.254.124.73   4444/TCP
After deploying the service, it’ll be reachable from:
- Any Kubernetes node, via the virtual IP 10.254.124.73 and the port 4444.
- External networks, via any Kubernetes nodes’ public IPs, on the port 30000.
You can verify this by opening the Selenium Grid console in a browser using either endpoint, for example via any node's public IP on port 30000.
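A quick way to check both paths from the command line is curl against the grid console page served by the hub (replace the IPs with your own values):

# From any Kubernetes node, via the service virtual IP
curl -s http://10.254.124.73:4444/grid/console | head -n 5

# From outside the cluster, via a node's public IP and the NodePort
curl -s http://replace-with-node1-public-ip:30000/grid/console | head -n 5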
Selenium nodes
Firefox node replication controller:
# selenium-node-firefox-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-node-firefox
spec:
  replicas: 2
  selector:
    name: selenium-node-firefox
  template:
    metadata:
      labels:
        name: selenium-node-firefox
    spec:
      containers:
        - name: selenium-node-firefox
          image: selenium/node-firefox
          ports:
            - containerPort: 5900
          env:
            - name: HUB_PORT_4444_TCP_ADDR
              value: "replace_with_service_ip"
            - name: HUB_PORT_4444_TCP_PORT
              value: "4444"
Deployment:
Replace replace_with_service_ip in selenium-node-firefox-rc.yaml with the actual Selenium hub service IP, in this case 10.254.124.73.
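One way to make the substitution is with sed (you can just as well edit the file by hand):

sed -i 's/replace_with_service_ip/10.254.124.73/' selenium-node-firefox-rc.yaml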
[root@kube-master ~]# kubectl create -f selenium-node-firefox-rc.yaml
replicationcontrollers/selenium-node-firefox

[root@kube-master ~]# kubectl get rc
CONTROLLER              CONTAINER(S)            IMAGE(S)                SELECTOR                     REPLICAS
selenium-hub            selenium-hub            selenium/hub            name=selenium-hub            1
selenium-node-firefox   selenium-node-firefox   selenium/node-firefox   name=selenium-node-firefox   2

[root@kube-master ~]# kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
selenium-hub-pilc8            1/1       Running   1          1h
selenium-node-firefox-lc6qt   1/1       Running   0          2m
selenium-node-firefox-y9qjp   1/1       Running   0          2m

[root@kube-master ~]# kubectl describe pod selenium-node-firefox-lc6qt
Name:                           selenium-node-firefox-lc6qt
Namespace:                      default
Image(s):                       selenium/node-firefox
Node:                           kube-node2/45.63.16.92
Labels:                         name=selenium-node-firefox
Status:                         Running
Reason:
Message:
IP:                             10.20.101.3
Replication Controllers:        selenium-node-firefox (2/2 replicas created)
Containers:
  selenium-node-firefox:
    Image:              selenium/node-firefox
    State:              Running
      Started:          Sat, 24 Oct 2015 17:08:37 +0000
    Ready:              True
    Restart Count:      0
Conditions:
  Type          Status
  Ready         True
Events:
  FirstSeen                         LastSeen                          Count  From                  SubobjectPath                      Reason     Message
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {scheduler }                                             scheduled  Successfully assigned selenium-node-firefox-lc6qt to kube-node2
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {kubelet kube-node2}  implicitly required container POD  pulled     Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {kubelet kube-node2}  implicitly required container POD  created    Created with docker id cdcb027c6548
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {kubelet kube-node2}  implicitly required container POD  started    Started with docker id cdcb027c6548
  Sat, 24 Oct 2015 17:08:36 +0000   Sat, 24 Oct 2015 17:08:36 +0000   1      {kubelet kube-node2}  spec.containers                    pulled     Successfully pulled image "selenium/node-firefox"
  Sat, 24 Oct 2015 17:08:36 +0000   Sat, 24 Oct 2015 17:08:36 +0000   1      {kubelet kube-node2}  spec.containers                    created    Created with docker id 8931b7f7a818
  Sat, 24 Oct 2015 17:08:37 +0000   Sat, 24 Oct 2015 17:08:37 +0000   1      {kubelet kube-node2}  spec.containers                    started    Started with docker id 8931b7f7a818

[root@kube-master ~]# kubectl describe pod selenium-node-firefox-y9qjp
Name:                           selenium-node-firefox-y9qjp
Namespace:                      default
Image(s):                       selenium/node-firefox
Node:                           kube-node1/185.92.221.67
Labels:                         name=selenium-node-firefox
Status:                         Running
Reason:
Message:
IP:                             10.20.92.3
Replication Controllers:        selenium-node-firefox (2/2 replicas created)
Containers:
  selenium-node-firefox:
    Image:              selenium/node-firefox
    State:              Running
      Started:          Sat, 24 Oct 2015 17:08:13 +0000
    Ready:              True
    Restart Count:      0
Conditions:
  Type          Status
  Ready         True
Events:
  FirstSeen                         LastSeen                          Count  From                  SubobjectPath                      Reason     Message
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {scheduler }                                             scheduled  Successfully assigned selenium-node-firefox-y9qjp to kube-node1
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {kubelet kube-node1}  implicitly required container POD  pulled     Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {kubelet kube-node1}  implicitly required container POD  created    Created with docker id ea272dd36bd5
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {kubelet kube-node1}  implicitly required container POD  started    Started with docker id ea272dd36bd5
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {kubelet kube-node1}  spec.containers                    created    Created with docker id 6edbd6b9861d
  Sat, 24 Oct 2015 17:08:13 +0000   Sat, 24 Oct 2015 17:08:13 +0000   1      {kubelet kube-node1}  spec.containers                    started    Started with docker id 6edbd6b9861d
As we can see, Kubernetes has created 2 replicas of selenium-node-firefox and distributed them across the cluster: pod selenium-node-firefox-lc6qt is on kube-node2, while pod selenium-node-firefox-y9qjp is on kube-node1.
We repeat the same process for our Selenium Chrome nodes.
Chrome node replication controller:
# selenium-node-chrome-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: selenium-node-chrome
  labels:
    app: selenium-node-chrome
spec:
  replicas: 2
  selector:
    app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      containers:
        - name: selenium-node-chrome
          image: selenium/node-chrome
          ports:
            - containerPort: 5900
          env:
            - name: HUB_PORT_4444_TCP_ADDR
              value: "replace_with_service_ip"
            - name: HUB_PORT_4444_TCP_PORT
              value: "4444"
Deployment (as before, replace replace_with_service_ip in selenium-node-chrome-rc.yaml with the actual hub service IP first):
[root@kube-master ~]# kubectl create -f selenium-node-chrome-rc.yaml
replicationcontrollers/selenium-node-chrome

[root@kube-master ~]# kubectl get rc
CONTROLLER              CONTAINER(S)            IMAGE(S)                SELECTOR                     REPLICAS
selenium-hub            selenium-hub            selenium/hub            name=selenium-hub            1
selenium-node-chrome    selenium-node-chrome    selenium/node-chrome    app=selenium-node-chrome     2
selenium-node-firefox   selenium-node-firefox   selenium/node-firefox   name=selenium-node-firefox   2

[root@kube-master ~]# kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
selenium-hub-pilc8            1/1       Running   1          1h
selenium-node-chrome-9u1ld    1/1       Running   0          1m
selenium-node-chrome-mgi52    1/1       Running   0          1m
selenium-node-firefox-lc6qt   1/1       Running   0          11m
selenium-node-firefox-y9qjp   1/1       Running   0          11m
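At this point the hub should have four browser nodes registered. A rough command-line check is to query the hub's status endpoint (the exact JSON fields may vary between Selenium versions):

# slotCounts in the response reflects the registered browser slots
curl -s http://10.254.124.73:4444/grid/api/hub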
Wrapping up
In this guide, we’ve set up a small Kubernetes cluster of 3 servers (1 master controller + 2 workers).
Using pods, RCs and a service, we've successfully deployed a Selenium Grid consisting of a central hub and 4 nodes, enabling developers to run 4 concurrent Selenium tests on the cluster.
Kubernetes automatically scheduled the containers across the entire cluster.
Self-healing
Kubernetes automatically reschedules pods to healthy servers if one or more of our servers go down.
In my example, kube-node2 is currently running the Selenium hub pod and 1 Selenium Firefox node pod.
[root@kube-node2 ~]# docker ps
CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS   NAMES
5617399f146c        selenium/node-firefox                  "/opt/bin/entry_poin   5 minutes ago       Up 5 minutes                k8s_selenium-node-firefox.46e635d8_selenium-node-firefox-zmj1r_default_31c89517-7a75-11e5-8648-5600001611e0_baae8e00
185230a3b431        gcr.io/google_containers/pause:0.8.0   "/pause"               5 minutes ago       Up 5 minutes                k8s_POD.3805e8b7_selenium-node-firefox-zmj1r_default_31c89517-7a75-11e5-8648-5600001611e0_40f809df
fdd5834c249d        selenium/hub                           "/opt/bin/entry_poin   About an hour ago   Up About an hour            k8s_selenium-hub.cb8bf0ed_selenium-hub-pilc8_default_6c98c1ff-7a68-11e5-8648-5600001611e0_5765e2c9
00e4ccb0bda8        gcr.io/google_containers/pause:0.8.0   "/pause"               About an hour ago   Up About an hour            k8s_POD.3b3ee8b9_selenium-hub-pilc8_default_6c98c1ff-7a68-11e5-8648-5600001611e0_8398ac33
We’ll simulate server failure by shutting down kube-node2. After a couple of minutes, you should see that the containers which were running on kube-node2 have been rescheduled to kube-node1, ensuring minimal disruption of service.
[root@kube-node1 ~]# docker ps
CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS   NAMES
5bad5f582698        selenium/hub                           "/opt/bin/entry_poin   19 minutes ago      Up 19 minutes               k8s_selenium-hub.cb8bf0ed_selenium-hub-hycf2_default_fe9057cf-7a76-11e5-8648-5600001611e0_ccaad50a
dd1565a94919        selenium/node-firefox                  "/opt/bin/entry_poin   20 minutes ago      Up 20 minutes               k8s_selenium-node-firefox.46e635d8_selenium-node-firefox-g28z5_default_fe932673-7a76-11e5-8648-5600001611e0_fc79f977
2be1a316aa47        gcr.io/google_containers/pause:0.8.0   "/pause"               20 minutes ago      Up 20 minutes               k8s_POD.3805e8b7_selenium-node-firefox-g28z5_default_fe932673-7a76-11e5-8648-5600001611e0_dc204ad2
da75a0242a9e        gcr.io/google_containers/pause:0.8.0   "/pause"               20 minutes ago      Up 20 minutes               k8s_POD.3b3ee8b9_selenium-hub-hycf2_default_fe9057cf-7a76-11e5-8648-5600001611e0_1b10c0e7
c611b68330de        selenium/node-firefox                  "/opt/bin/entry_poin   33 minutes ago      Up 33 minutes               k8s_selenium-node-firefox.46e635d8_selenium-node-firefox-8ylo2_default_31c8a8f3-7a75-11e5-8648-5600001611e0_922af821
828031da6b3c        gcr.io/google_containers/pause:0.8.0   "/pause"               33 minutes ago      Up 33 minutes               k8s_POD.3805e8b7_selenium-node-firefox-8ylo2_default_31c8a8f3-7a75-11e5-8648-5600001611e0_289cd555
caf4e725512e        selenium/node-chrome                   "/opt/bin/entry_poin   46 minutes ago      Up 46 minutes               k8s_selenium-node-chrome.362a34ee_selenium-node-chrome-mgi52_default_392a2647-7a73-11e5-8648-5600001611e0_3c6e855a
409a20770787        selenium/node-chrome                   "/opt/bin/entry_poin   46 minutes ago      Up 46 minutes               k8s_selenium-node-chrome.362a34ee_selenium-node-chrome-9u1ld_default_392a15a4-7a73-11e5-8648-5600001611e0_ac3f0191
7e2d942422a5        gcr.io/google_containers/pause:0.8.0   "/pause"               47 minutes ago      Up 47 minutes               k8s_POD.3805e8b7_selenium-node-chrome-9u1ld_default_392a15a4-7a73-11e5-8648-5600001611e0_f5858b73
a3a65ea99a99        gcr.io/google_containers/pause:0.8.0   "/pause"               47 minutes ago      Up 47 minutes               k8s_POD.3805e8b7_selenium-node-chrome-mgi52_default_392a2647-7a73-11e5-8648-5600001611e0_20a70ab6
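From the master, the same rescheduling can be observed at the pod and node level (pod names will differ in your cluster):

# Replacement pods appear with new names, scheduled on kube-node1
kubectl get pods

# The failed server is eventually reported as NotReady
kubectl get nodes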
Scaling your Selenium Grid
Scaling your Selenium Grid is super easy with Kubernetes. Imagine that instead of 2 Firefox nodes, I would like to run 4. The upscaling can be done with a single command:
[root@kube-master ~]# kubectl scale rc selenium-node-firefox --replicas=4
scaled

[root@kube-master ~]# kubectl get rc
CONTROLLER              CONTAINER(S)            IMAGE(S)                SELECTOR                     REPLICAS
selenium-hub            selenium-hub            selenium/hub            name=selenium-hub            1
selenium-node-chrome    selenium-node-chrome    selenium/node-chrome    app=selenium-node-chrome     2
selenium-node-firefox   selenium-node-firefox   selenium/node-firefox   name=selenium-node-firefox   4

[root@kube-master ~]# kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
selenium-hub-pilc8            1/1       Running   1          1h
selenium-node-chrome-9u1ld    1/1       Running   0          14m
selenium-node-chrome-mgi52    1/1       Running   0          14m
selenium-node-firefox-8ylo2   1/1       Running   0          40s
selenium-node-firefox-lc6qt   1/1       Running   0          24m
selenium-node-firefox-y9qjp   1/1       Running   0          24m
selenium-node-firefox-zmj1r   1/1       Running   0          40s
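Scaling down works the same way; for example, to return to 2 Firefox nodes:

kubectl scale rc selenium-node-firefox --replicas=2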