
Wednesday, July 19, 2017

OpenShift Commands


Samples of OpenShift commands:

# List the environment variables defined on all pods
oc env pods --all --list

# Show an introduction to OpenShift concepts and object types
oc types
# Switch to a project
oc project <project_name>
# Show a high-level overview of the current project
oc status

# List services and pods in the current project
oc get svc
oc get po

# Show details of a specific object
oc describe <object_type> <object_id>

# Follow the logs of a pod, a container within it, or a build
oc logs -f <pod_name> -c <container_name>
oc logs -f pod/pod_name -c container_name
oc logs -f --tail=100 pod/pod_name -c container_name
oc logs -f build/build_name

# Edit an object in place
oc edit bc/bc_name
oc edit dc/dc_name
oc edit pod/pod_name



Object Type             Abbreviated Version
build
buildConfig             bc
deploymentConfig        dc
imageStream             is
imageStreamTag          istag
imageStreamImage        isimage
event                   ev
node
pod                     po
replicationController   rc
service                 svc
persistentVolume        pv
persistentVolumeClaim   pvc
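
The abbreviated versions can be used anywhere the full object type name is accepted. A few examples (the resource names below are placeholders):

# These two commands are equivalent
oc get deploymentconfig my-app
oc get dc my-app

# Abbreviations also work with describe, edit, delete, and so on
oc describe is my-image
oc get pvc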



OpenShift Concepts:

Components are contained by a project. They can be flexibly linked together in any combination you can imagine, and optionally labeled to provide whatever grouping or structure you need.
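
For example, labels can be attached to the objects that make up a component and then used to select them as a group (the label and object names below are only illustrative):

# Tag a deployment configuration and its service with a common label
oc label dc/frontend tier=web
oc label svc/frontend tier=web

# List everything carrying that label
oc get all -l tier=web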

Images in OpenShift v3 are mapped 1:1 with containers. Containers use pods as their collocation mechanism.

Source code:
With OpenShift v3, you can choose which images are built from source and that source can be located outside of OpenShift itself.
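
For example, a build can be created straight from a Git repository hosted outside OpenShift (the URL and builder image below are placeholders):

# Create an application from source in an external Git repository
oc new-app https://github.com/<user>/<repo>.git

# Optionally name the builder image to use for that source
oc new-app <builder_image>~https://github.com/<user>/<repo>.git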

Build:
In v3, build results are first committed as an immutable image and published to an internal registry. That image is then available to launch on any of the nodes in the cluster, or to roll back to at a future date.
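
For example, a build can be started manually and, because every build result is an immutable image, a deployment can later be rolled back to an earlier one (the names below are placeholders):

# Start a new build from an existing build configuration and follow its log
oc start-build bc_name --follow

# Roll a deployment configuration back to its previous deployment
oc rollback dc_name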

Routing:
With v3, you can use templates to set up 0-N routes for any image. These routes let you modify the scheme, host, and paths exposed as desired, and there is no distinction between system routes and user aliases.
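
For example, a route can be created for a service with a chosen host and path (the service name and hostname below are placeholders):

# Expose a service as a route with a custom host and path
oc expose service/frontend --hostname=www.example.com --path=/app

# List the routes in the current project
oc get routes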

OpenShift Architecture





OpenShift Container Platform Architecture Overview

OpenShift Container Platform 3.3 uses Kubernetes 1.3 and Docker 1.10.


Kubernetes

A Kubernetes cluster consists of one or more masters and a set of nodes. You can optionally configure your masters for high availability (HA) to ensure that the cluster has no single point of failure.



Master Components

API Server
  The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also assigns pods to nodes and synchronizes pod information with service configuration. Can be run as a standalone process.

etcd
  etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the desired state. etcd can be optionally configured for high availability, typically deployed with 2n+1 peer services.

Controller Manager Server
  The controller manager server watches etcd for changes to replication controller objects and then uses the API to enforce the desired state. Can be run as a standalone process. Several such processes create a cluster with one active leader at a time.

HAProxy
  Optional, used when configuring highly available masters with the native method to balance load between API master endpoints.
  The advanced installation method can configure HAProxy for you with the native method. Alternatively, you can use the native method but pre-configure your own load balancer of choice.

When using the native HA method with HAProxy, master components have the following availability:


Availability Matrix with HAProxy
Role                        Style           Notes
etcd                        Active-active   Fully redundant deployment with load balancing
API Server                  Active-active   Managed by HAProxy
Controller Manager Server   Active-passive  One instance is elected as a cluster leader at a time
HAProxy                     Active-passive  Balances load between API master endpoints
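
As a rough sketch of how the native HA method is requested, the advanced installation's Ansible inventory names a load balancer host and sets the cluster method to native (the fragment below is illustrative only, not a complete inventory, and the hostnames are placeholders):

# Append an illustrative fragment to the default inventory location
cat >> /etc/ansible/hosts <<'EOF'
[OSEv3:vars]
openshift_master_cluster_method=native
openshift_master_cluster_hostname=internal-master.example.com
openshift_master_cluster_public_hostname=master.example.com

[lb]
lb.example.com
EOF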

Nodes

A node provides the runtime environments for containers. Each node in a Kubernetes cluster has the required services to be managed by the master. Nodes also have the required services to run pods, including the Docker service, a kubelet, and a service proxy.
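
The nodes registered with the master can be inspected with the CLI (the node name is a placeholder):

# List the nodes in the cluster and show the details of one of them
oc get nodes
oc describe node <node_name>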

Kubelet

Each node has a kubelet that updates the node as specified by a container manifest, which is a YAML file that describes a pod. The kubelet uses a set of manifests to ensure that its containers are started and that they continue to run. 
A container manifest can be provided to a kubelet by:
  • A file path on the command line that is checked every 20 seconds.
  • An HTTP endpoint passed on the command line that is checked every 20 seconds.
  • The kubelet watching an etcd server, such as /registry/hosts/$(hostname -f), and acting on any changes.
  • The kubelet listening for HTTP and responding to a simple API to submit a new manifest.
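
A container manifest is simply a pod definition in YAML. As an illustrative sketch (the pod name is arbitrary and the image is the hello-openshift sample), such a manifest could be written to a file and supplied through one of the mechanisms above, or created through the API server:

# Write a minimal pod manifest (illustrative only)
cat > hello-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: openshift/hello-openshift
    ports:
    - containerPort: 8080
EOF

# Create the pod through the API server
oc create -f hello-pod.yaml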

Service Proxy

Each node also runs a simple network proxy that reflects the services defined in the API on that node. This allows the node to do simple TCP and UDP stream forwarding across a set of back ends.
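
The set of back ends the proxy forwards to for a service can be seen by comparing the service with its endpoints (the service name is a placeholder):

# Show a service and the pod endpoints its traffic is forwarded to
oc get svc <service_name>
oc get endpoints <service_name>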