What the heart holds on to
will surely echo back

Hello, world!

admin · 270 views

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Using firewall-cmd on CentOS

admin · 15 views

On CentOS 6 the usual firewall was iptables, but since CentOS 7 the system ships firewalld instead, managed through the firewall-cmd command.

Command overview

firewall-cmd [options ...]

Its common options:

-h, --help    # show help information;
-V, --version # show version information (cannot be combined with other options);
-q, --quiet   # do not print status messages;

--state                # show the running state of firewalld;
--reload               # reload rules without interrupting connections;
--complete-reload      # full reload, dropping all connections;
--runtime-to-permanent # save the current runtime rules as permanent;
--check-config         # check the configuration for correctness;

Examples

Expose a specific port

firewall-cmd --permanent --add-port=8080/tcp

Here --permanent makes the rule persistent; without it, the rule lives only in the runtime configuration and is lost after a reboot. Note that a permanent rule does not affect the running firewall until you run firewall-cmd --reload.

Restrict a port to specific source IPs

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.10.1.1/24" port protocol="tcp" port="8080" accept'

firewall-cmd --reload

Port 8080 can now be reached only from the 10.10.1.0/24 subnet.

Add a network interface to a zone

firewall-cmd --zone=public --add-interface=eth0

View active zones

firewall-cmd --get-active-zones

Check the firewall state

firewall-cmd --state

Allow / deny the FTP service

firewall-cmd --add-service=ftp

firewall-cmd --remove-service=ftp

Similar predefined services include ssh and http.

View the full firewall configuration

firewall-cmd --list-all

Port forwarding

firewall-cmd --add-forward-port=port=80:proto=tcp:toport=8080

Forwards port 80 to port 8080.

Starting and stopping the service

systemctl start firewalld   # start
systemctl stop firewalld    # stop
systemctl enable firewalld  # start at boot
systemctl disable firewalld # do not start at boot
systemctl status firewalld  # check status (or: firewall-cmd --state)


A gotcha with Java's Comparator.comparing()

admin · 17 views

Sorting comes up constantly in Java development; in data visualization work, almost every table-related endpoint needs it, and Comparator.comparing() is the usual, convenient tool. But as soon as a descending order or multiple sort keys are involved, there is one thing to watch out for.

Common usage

Typically we sort like this:

// with a stream
resultList = dataList.stream()
        .sorted(Comparator.comparing(o -> Integer.parseInt(String.valueOf(o.get("id")))))
        .collect(Collectors.toList());
// or simply sort the list in place
dataList.sort(Comparator.comparing(o -> String.valueOf(o.get("id"))));

This usually works fine. But once you need a reversed order or multiple sort rules, naively writing it as follows produces a strange error:

dataList.sort(Comparator.comparing(o -> String.valueOf(o.get("id"))).reversed());

At this point IDEA reports a compile error on o.get("id").

Solutions

There are two ways to fix this.

Specify the type

With lambda expressions we normally omit the parameter type; here, we simply need to state it explicitly:

dataList.sort(Comparator.comparing((Map<String,Object> o) -> String.valueOf(o.get("id"))).reversed());

Now it compiles.

Functional Interface

Assigning the key extractor to an explicitly typed functional interface also resolves the error:

Function<Map<String, Object>, Integer> orderFunc = o -> Integer.parseInt(String.valueOf(o.get("id")));
dataList.sort(Comparator.comparing(orderFunc).reversed());
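Both fixes can be seen end to end in a small self-contained sketch (the class name and the data shape are illustrative, mirroring the snippets above rather than reproducing the original code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ReversedSortDemo {

    // Descending sort by the "id" value. The explicit (Map<String, Object> o)
    // parameter type is exactly what lets .reversed() compile here.
    static List<Integer> idsDescending(List<Map<String, Object>> dataList) {
        dataList.sort(Comparator.comparing(
                (Map<String, Object> o) -> Integer.parseInt(String.valueOf(o.get("id"))))
                .reversed());
        return dataList.stream()
                .map(o -> Integer.parseInt(String.valueOf(o.get("id"))))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, Object>> dataList = new ArrayList<>();
        for (int id : new int[]{2, 3, 1}) {
            Map<String, Object> row = new HashMap<>();
            row.put("id", id);
            dataList.add(row);
        }
        System.out.println(idsDescending(dataList)); // [3, 2, 1]
    }
}
```

Removing the explicit parameter type from the lambda above reproduces the compile error described earlier.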

Why

By rights, the untyped lambda ought to be enough, and indeed it works for a single sort rule; only chained sort rules fail, which feels like a small wart in Java's lambda type inference. The behavior can be explained as follows:

Lambdas are either implicitly typed (no parameter types given) or explicitly typed; method references are either exact (no overloads) or inexact. When a generic method call in a receiver position has a lambda argument, and the type parameters cannot be fully inferred from the other arguments, you must supply an explicitly typed lambda, an exact method reference, a target-type cast, or an explicit type witness on the generic method call to provide the extra type information needed to proceed.

Addendum:

Arrays.sort(array, Comparator.<int[]>comparingInt(arr -> arr[0]).thenComparingInt(arr -> arr[1]));
  • In-place sorting needs no stream.
  • Use comparingInt instead of comparing.
  • Declare the generic parameter <int[]> explicitly on comparingInt instead of casting inside the lambda.
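The addendum can be exercised the same way; this sketch (array contents are made up for illustration) shows the type-witness form compiling and sorting pairs by both elements:

```java
import java.util.Arrays;
import java.util.Comparator;

public class PairSortDemo {

    // Sort int[] pairs by first element, then by second. The explicit
    // <int[]> type witness on comparingInt replaces a cast inside the lambda.
    static void sortPairs(int[][] array) {
        Arrays.sort(array, Comparator.<int[]>comparingInt(arr -> arr[0])
                .thenComparingInt(arr -> arr[1]));
    }

    public static void main(String[] args) {
        int[][] pairs = {{2, 1}, {1, 5}, {1, 2}};
        sortPairs(pairs);
        System.out.println(Arrays.deepToString(pairs)); // [[1, 2], [1, 5], [2, 1]]
    }
}
```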

Setting up a k8s cluster and installing Kubernetes-Dashboard

admin · 18 views

Environment

Base environment

  • CentOS Linux release 7.5.1804 (Core)
  • JDK1.8.0_161
  • Kubernetes v1.5.2
  • yum mirror: Tsinghua University

Deployment plan

Master:

  • ip: 10.10.202.158
  • hostname: apm-slave02
  • Components:
    • docker
    • etcd
    • flannel
    • kube-apiserver
    • kube-scheduler
    • kube-controller-manager

Node:

  • ip: 10.10.202.159
  • hostname: apm-slave03
  • Components:
    • docker
    • flannel
    • kubelet
    • kube-proxy

Firewall

systemctl disable firewalld.service
systemctl stop firewalld.service

Deploying the Master node

Install Docker

yum install docker

Start docker and enable it at boot:

systemctl start docker
systemctl enable docker

Install etcd

yum install etcd -y

Configure etcd by editing /etc/etcd/etcd.conf:

vim /etc/etcd/etcd.conf
- ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379"
+ ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
- ETCD_NAME="default"
+ ETCD_NAME="master"
- ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379"
+ ETCD_ADVERTISE_CLIENT_URLS="http://apm-slave02:2379,http://apm-slave02:4001"

Start etcd:

systemctl start etcd

Check that the service is active:

systemctl is-active etcd

active

Check etcd's health:

etcdctl -C http://apm-slave02:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://apm-slave02:2379
cluster is healthy

Enable it at boot:

systemctl enable etcd

Install kubernetes

yum install kubernetes

Configure kubernetes by editing the apiserver, config, and scheduler files under /etc/kubernetes/.

apiserver

vim /etc/kubernetes/apiserver
- KUBE_API_ADDRESS="--address=127.0.0.1"
+ KUBE_API_ADDRESS="--address=0.0.0.0"
- KUBE_ETCD_SERVERS="--etcd-servers=http://localhost:2379"
+ KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.202.158:2379"
- KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota"
+ KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

config

vim /etc/kubernetes/config
- KUBE_MASTER="--master=http://127.0.0.1:8080"
+ KUBE_MASTER="--master=http://10.10.202.158:8080"

Start the Master components:

systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service

Enable them at boot:

systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service

Verify:

systemctl list-unit-files |grep kube
kube-apiserver.service                        enabled 
kube-controller-manager.service               enabled 
kube-proxy.service                            disabled
kube-scheduler.service                        enabled 
kubelet.service                               disabled

Install flannel

yum install flannel

Configure flannel:

vim /etc/sysconfig/flanneld
-FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
+FLANNEL_ETCD_ENDPOINTS="http://10.10.202.158:2379"

Set flannel's network key in etcd:

etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

Start flannel:

systemctl start flanneld.service

Enable it at boot:

systemctl enable flanneld.service

Check the services:

systemctl is-active  kube-apiserver.service kube-controller-manager.service kube-scheduler.service etcd flanneld.service

active
active
active
active
active

Note the startup order: etcd -> kubernetes.

Deploying the Node

Install Docker

yum install docker

Start docker and enable it at boot:

systemctl start docker
systemctl enable docker

Install flannel

yum install flannel

Configure flannel:

vim /etc/sysconfig/flanneld
-FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
+FLANNEL_ETCD_ENDPOINTS="http://10.10.202.158:2379"

Set flannel's network key in etcd:

etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

Start flannel:

systemctl start flanneld.service

Enable it at boot:

systemctl enable flanneld.service

Install kubernetes

yum install kubernetes

The Node runs the following components:

  • kubelet
  • kube-proxy

Edit /etc/kubernetes/config:

-KUBE_MASTER="--master=http://127.0.0.1:8080"
+KUBE_MASTER="--master=http://10.10.202.158:8080"

Edit /etc/kubernetes/kubelet:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=apm-slave03"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.10.202.158:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Note KUBELET_POD_INFRA_CONTAINER here: its value is registry.access.redhat.com/rhel7/pod-infrastructure:latest, and pulling it fails if rhsm is not installed; see "Fixing k8s pods stuck in ContainerCreating" for details.

Start the kubernetes services:

systemctl start kubelet.service
systemctl start kube-proxy.service

Enable them at boot:

systemctl enable kubelet.service
systemctl enable kube-proxy.service

Check the Node's services:

systemctl is-active kube-proxy.service kubelet.service flanneld.service

active
active
active

On the Master (10.10.202.158), run:

kubectl get endpoints
NAME         ENDPOINTS            AGE
kubernetes   10.10.202.158:6443   2d
kubectl get nodes
NAME          STATUS    AGE
apm-slave03   Ready     22h

At this point, the k8s cluster installation is complete.

Installing Kubernetes-Dashboard

On the Master node, create two files, dashboard-controller.yaml and dashboard-service.yaml, with the following contents:

dashboard-controller.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.4.2
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://10.10.202.158:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

Note the - --apiserver-host=http://10.10.202.158:8080 argument; change it to your own apiserver address.
dashboard-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

Then run:

kubectl create -f .

which prints:

deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

Check the deployment:

kubectl get deployments --all-namespaces                             
NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kubernetes-dashboard   1         1         1            1           5s

One replica is available.

Check the pod:

kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
kubernetes-dashboard-2620295069-12qfj   1/1       Running   0          3h

Open http://10.10.202.158:8080/ui/ to access the Dashboard.

Fixing k8s endlessly recreating a pod after deletion

admin · 21 views

This was my first contact with K8S, and it left me thoroughly confused.
After setting up the k8s cluster, I deployed my first add-on, Kubernetes-Dashboard, but it stayed stuck in CrashLoopBackOff. Below is how I resolved it (a rather roundabout process; I am a newbie, and a veteran would probably spot the cause at a glance).

First, check the node's log in /var/log/messages:

Jun  1 11:32:34 apm-slave03 dockerd-current: time="2018-06-01T11:32:34.830329738+08:00" level=error msg="Handler for GET /containers/b532d65bd2ff380035560a33e435414b66ccfbfbbf6f3c9d51cb2f0add57b2d2/json returned error: No such container: b532d65bd2ff380035560a33e435414b66ccfbfbbf6f3c9d51cb2f0add57b2d2"
Jun  1 11:32:44 apm-slave03 kubelet: I0601 11:32:44.160859   20744 docker_manager.go:2495] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-latest-4167338039-95kb9"
Jun  1 11:32:44 apm-slave03 kubelet: I0601 11:32:44.161188   20744 docker_manager.go:2509] Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-latest-4167338039-95kb9_kube-system(c2097a18-654a-11e8-8d29-005056bc2ad1)
Jun  1 11:32:44 apm-slave03 kubelet: E0601 11:32:44.161302   20744 pod_workers.go:184] Error syncing pod c2097a18-654a-11e8-8d29-005056bc2ad1, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-latest-4167338039-95kb9_kube-system(c2097a18-654a-11e8-8d29-005056bc2ad1)"

Nothing obvious there, so check the pods next:

kubectl get pods -n kube-system 
NAME                                           READY     STATUS              RESTARTS   AGE
kubernetes-dashboard-latest-4167338039-bbwp4   0/1       ContainerCreating   0          47s

Delete it:

kubectl delete pod kubernetes-dashboard-latest-4167338039-bbwp4 -n kube-system

Check again:

kubectl get pods -n kube-system | grep -v Running                             
NAME                                           READY     STATUS    RESTARTS   AGE
kubernetes-dashboard-latest-4167338039-95kb9   0/1       Error     0          4s

Another one had already been started in its place. I deleted it five times in a row and it kept coming back, which seemed inexplicable.
Look at one pod's description:

 kubectl describe pod kubernetes-dashboard-latest-4167338039-95kb9 -n kube-system                                             
Name:           kubernetes-dashboard-latest-4167338039-95kb9
Namespace:      kube-system
Node:           apm-slave03/10.10.202.159
Start Time:     Fri, 01 Jun 2018 11:20:39 +0800
Labels:         k8s-app=kubernetes-dashboard
                kubernetes.io/cluster-service=true
                pod-template-hash=4167338039
                version=latest
Status:         Running
IP:             10.0.48.2
Controllers:    ReplicaSet/kubernetes-dashboard-latest-4167338039
Containers:
  kubernetes-dashboard:
    Container ID:       docker://fe222c62c496d1348b9da4d17da474721d941279c7bd476596a0e041353ccd55
    Image:              registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.4.2
    Image ID:           docker-pullable://registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64@sha256:8c9cafe41e0846c589a28ee337270d4e97d486058c17982314354556492f2c69
    Port:               9090/TCP
    Args:
      --apiserver-host=http://apm-slave02:8080
    Limits:
      cpu:      100m
      memory:   50Mi
    Requests:
      cpu:                      100m
      memory:                   50Mi
    State:                      Terminated
      Reason:                   Error
      Exit Code:                1
      Started:                  Fri, 01 Jun 2018 11:21:04 +0800
      Finished:                 Fri, 01 Jun 2018 11:21:06 +0800
    Last State:                 Terminated
      Reason:                   Error
      Exit Code:                1
      Started:                  Fri, 01 Jun 2018 11:20:43 +0800
      Finished:                 Fri, 01 Jun 2018 11:20:45 +0800
    Ready:                      False
    Restart Count:              2
    Liveness:                   http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Volume Mounts:              
    Environment Variables:      
Conditions:
  Type          Status
  Initialized   True 
  Ready         False 
  PodScheduled  True 
No volumes.
QoS Class:      Guaranteed
Tolerations:    
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath                           Type            Reason          Message
  ---------     --------        -----   ----                    -------------                           --------        ------          -------
  28s           28s             1       {default-scheduler }                                            Normal          Scheduled       Successfully assigned kubernetes-dashboard-latest-4167338039-95kb9 to apm-slave03
  28s           28s             1       {kubelet apm-slave03}   spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id f880c9e76de7; Security:[seccomp=unconfined]
  27s           27s             1       {kubelet apm-slave03}   spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id f880c9e76de7
  24s           24s             1       {kubelet apm-slave03}   spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id 16f258977612; Security:[seccomp=unconfined]
  24s           24s             1       {kubelet apm-slave03}   spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id 16f258977612
  22s           18s             2       {kubelet apm-slave03}                                           Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-latest-4167338039-95kb9_kube-system(c2097a18-654a-11e8-8d29-005056bc2ad1)"

  28s   3s      4       {kubelet apm-slave03}                                           Warning MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  28s   3s      3       {kubelet apm-slave03}   spec.containers{kubernetes-dashboard}   Normal  Pulled                  Container image "registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.4.2" already present on machine
  3s    3s      1       {kubelet apm-slave03}   spec.containers{kubernetes-dashboard}   Normal  Created                 Created container with docker id fe222c62c496; Security:[seccomp=unconfined]
  3s    3s      1       {kubelet apm-slave03}   spec.containers{kubernetes-dashboard}   Normal  Started                 Started container with docker id fe222c62c496
  22s   0s      3       {kubelet apm-slave03}   spec.containers{kubernetes-dashboard}   Warning BackOff                 Back-off restarting failed docker container
  0s    0s      1       {kubelet apm-slave03}                                           Warning FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-latest-4167338039-95kb9_kube-system(c2097a18-654a-11e8-8d29-005056bc2ad1)"

Still nothing conclusive, so I turned to Google. The verdict: the pod is managed by a deployment, and only deleting the deployment removes the pod for good. First, see what it looks like:

kubectl get deployments --all-namespaces
NAMESPACE     NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kubernetes-dashboard-latest   1         1         1            0           2d

The Name turned out to be the deployment from my very first attempt, not the one I had just created, so (through gritted teeth) I deleted it:

kubectl delete deployments kubernetes-dashboard-latest -n kube-system
deployment "kubernetes-dashboard-latest" deleted

kubectl get deployments --all-namespaces
No resources found.

kubectl get pods -n kube-system | grep -v Running                              
No resources found.

That finally ended the endless pod restarts!

Recreate the deployment:

kubectl get deployments --all-namespaces                             
NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kubernetes-dashboard   1         1         1            1           5s

Finally AVAILABLE, and the dashboard is running at last ✿✿ヽ(°▽°)ノ✿

Fixing k8s pods stuck in ContainerCreating

admin · 17 views

While creating the Dashboard, the pod status stayed at ContainerCreating:

kubectl get pods -n kube-system 
NAME                                           READY     STATUS              RESTARTS   AGE
kubernetes-dashboard-latest-4167338039-cjh13   0/1       ContainerCreating   0          47s

Check the log /var/log/messages, which contains:

May 31 17:29:19 apm-slave03 kubelet: E0531 17:29:19.526941   24919 docker_manager.go:2159] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kubernetes-dashboard-latest-4167338039-cjh13_kube-system(0ba4a71f-64ae-11e8-9a31-005056bc2ad1)": Back-off pulling image "registry.access.redhat.com/rhel7/pod-infrastructure:latest"

It was trying to pull the image registry.access.redhat.com/rhel7/pod-infrastructure:latest; pulling it manually failed with:

docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ... 
open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory

Searching around suggested installing rhsm:

yum install *rhsm*
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirror.lzu.edu.cn
 * extras: mirror.lzu.edu.cn
 * updates: ftp.sjtu.edu.cn
base                                                                                                                                                                                  | 3.6 kB  00:00:00     
extras                                                                                                                                                                                | 3.4 kB  00:00:00     
updates                                                                                                                                                                               | 3.4 kB  00:00:00     
Package python-rhsm-1.19.10-1.el7_4.x86_64 is obsoleted by the installed subscription-manager-rhsm-1.20.11-1.el7.centos.x86_64
Package subscription-manager-rhsm-1.20.11-1.el7.centos.x86_64 is already installed and is the latest version
Package python-rhsm-certificates-1.19.10-1.el7_4.x86_64 is obsoleted by the installed subscription-manager-rhsm-certificates-1.20.11-1.el7.centos.x86_64
Package subscription-manager-rhsm-certificates-1.20.11-1.el7.centos.x86_64 is already installed and is the latest version

Yet the /etc/rhsm/ca/ directory still contained no certificate, and repeatedly uninstalling and reinstalling got nowhere. It turns out the widely recommended yum install *rhsm* is actually meant to install python-rhsm-1.19.10-1.el7_4.x86_64 and python-rhsm-certificates-1.19.10-1.el7_4.x86_64, but the install prints:

Package python-rhsm-1.19.10-1.el7_4.x86_64 is obsoleted by the installed subscription-manager-rhsm-1.20.11-1.el7.centos.x86_64
Package subscription-manager-rhsm-1.20.11-1.el7.centos.x86_64 is already installed and is the latest version
Package python-rhsm-certificates-1.19.10-1.el7_4.x86_64 is obsoleted by the installed subscription-manager-rhsm-certificates-1.20.11-1.el7.centos.x86_64
Package subscription-manager-rhsm-certificates-1.20.11-1.el7.centos.x86_64 is already installed and is the latest version

Here is the culprit: the rpm packages we wanted were obsoleted by their replacements, and the replacement packages only create the directory without shipping the certificate file redhat-uep.pem. So download the two original packages manually:

wget ftp://ftp.icm.edu.pl/vol/rzm6/linux-scientificlinux/7.4/x86_64/os/Packages/python-rhsm-certificates-1.19.9-1.el7.x86_64.rpm
wget ftp://ftp.icm.edu.pl/vol/rzm6/linux-scientificlinux/7.4/x86_64/os/Packages/python-rhsm-1.19.9-1.el7.x86_64.rpm

Make sure the versions match, then remove the wrongly installed packages:

yum remove *rhsm*

Then install them:

rpm -ivh *.rpm
warning: python-rhsm-1.19.9-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 192a7d7d: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:python-rhsm-certificates-1.19.9-1################################# [ 50%]
   2:python-rhsm-1.19.9-1.el7         ################################# [100%]

Then verify by pulling the image manually:

docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ... 
latest: Pulling from registry.access.redhat.com/rhel7/pod-infrastructure
26e5ed6899db: Pull complete 
66dbe984a319: Pull complete 
9138e7863e08: Pull complete 
Digest: sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931
Status: Downloaded newer image for registry.access.redhat.com/rhel7/pod-infrastructure:latest

Only then was the problem truly solved.

Of course, the same fix for the same problem may work for some people and not others; my guess is it depends on the distro release. I was on CentOS Linux release 7.5.1804 (Core), while theirs were presumably slightly older.

Changing the default JDK on CentOS 7

admin · 20 views

CentOS 7 ships OpenJDK 1.8.* by default; even with your own JDK configured, java -version still prints:

[root@apm-master ~]# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)

At first glance 1.8.0_161 looks right, but it is OpenJDK, while our applications usually run on the Oracle JDK. This is where the alternatives command comes in:

alternatives --install /usr/bin/java java /usr/local/java/jdk1.8.0_161/bin/java 3
alternatives --install /usr/bin/javac javac /usr/local/java/jdk1.8.0_161/bin/javac 3

Then run the following and pick the new entry:

alternatives --config java
There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
   1           java-1.7.0-openjdk.x86_64 (/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.171-2.6.13.2.el7.x86_64/jre/bin/java)
*+ 2           java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64/jre/bin/java)
   3           /usr/local/java/jdk1.8.0_161/bin/java

Enter to keep the current selection[+], or type selection number: 3

Changing the DNS configuration on CentOS 7

admin · 114 views

After using the CentOS 6 series for ages, my company suddenly switched en masse to CentOS 7, which takes some getting used to. Most commands and paths are unchanged, but there are differences; this post shows how to change the DNS settings on CentOS 7.

First, edit /etc/NetworkManager/NetworkManager.conf:

vim /etc/NetworkManager/NetworkManager.conf 

Find the [main] section and change it to:

[main]
plugins=ifcfg-rh
dns=none

Save and close.
Then edit /etc/resolv.conf:

vim  /etc/resolv.conf

Add the following:

nameserver 8.8.8.8
nameserver 1.1.1.1

Save and close.
Restart the network service:

systemctl restart NetworkManager.service

Changing the hostname on CentOS 7

admin · 111 views

CentOS distinguishes three kinds of hostnames: static, transient, and pretty. The static hostname, also called the kernel hostname, is initialized at boot from /etc/hostname. The transient hostname is assigned temporarily at runtime, for example by a DHCP or mDNS server. Static and transient hostnames follow the same character restrictions as internet domain names. The pretty hostname, on the other hand, is a free-form name (special and whitespace characters allowed) meant for display to end users (e.g. Linuxidc).

CentOS 7 provides a command-line tool called hostnamectl for viewing and changing hostname-related settings.

To view the hostname settings:

hostnamectl
[root@localhost ~]# hostnamectl  
   Static hostname: apm-slave01
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e13059d324bc44c699cb44fd01af48df
           Boot ID: f1efa859613549008d87b0507f534d26
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-862.el7.x86_64
      Architecture: x86-64

or:

hostnamectl status
[root@localhost ~]# hostnamectl status
   Static hostname: apm-slave01
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e13059d324bc44c699cb44fd01af48df
           Boot ID: f1efa859613549008d87b0507f534d26
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-862.el7.x86_64
      Architecture: x86-64

To view only the static, transient, or pretty hostname:

hostnamectl --static
hostnamectl --transient
hostnamectl --pretty

To set all three hostnames (static, transient, and pretty) at once:

hostnamectl set-hostname apm-slave01

As shown above, when changing the static/transient hostname, any special or whitespace characters are removed, and uppercase letters in the argument are automatically lowercased. Once the static hostname is changed, /etc/hostname is updated automatically; /etc/hosts is not, so you must update /etc/hosts by hand after every hostname change and then reboot CentOS 7, otherwise the system will boot very slowly.

vim /etc/hosts

[root@localhost ~]# vim /etc/hosts
127.0.0.1   localhost apm-slave01
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

If you only want to change a particular hostname (static, transient, or pretty), use the --static, --transient, or --pretty option.

For example, to change the hostname permanently, change the static hostname:

hostnamectl --static set-hostname apm-slave01

Reboot:

reboot 

In fact, you do not need to reboot for a permanent hostname change to take effect; the command above changes the kernel hostname immediately. Log out and back in and you will see the new static hostname in the shell prompt.

Turning Ubuntu 18.04 into a developer workstation

admin · 114 views

Ubuntu is a desktop-oriented Linux distribution published by Canonical, which offers commercial support. It is built on free software; its name comes from the word "ubuntu" in the Zulu and Xhosa languages of southern Africa, roughly meaning "humanity" or "I am because we are", a traditional African value.

Installation

Recommended applications

gnome-tweak-tool

A GNOME tweaking tool for changing the theme, icons, fonts, and more:

sudo apt install gnome-tweak-tool

uget

A graphical download manager that pairs well with aria2:

sudo add-apt-repository ppa:plushuang-tw/uget-stable
sudo apt update
sudo apt install uget

aria2

Aria2 is a lightweight command-line download tool supporting multiple protocols and sources (HTTP/HTTPS, FTP, BitTorrent, Metalink), with built-in XML-RPC and JSON-RPC interfaces.

sudo apt-get install aria2

gnome-shell-extensions

Official extensions developed to enhance GNOME Shell:

sudo apt install gnome-shell-extensions

chrome-gnome-shell

Despite the chrome in its name, this has little to do with Chrome itself. A must-have for theming:

sudo apt install chrome-gnome-shell

WizNote

The new WizNote releases ship in AppImage format; download from https://url.wiz.cn/u/Wizlinux
Once downloaded, right-click WizNote-2.5.9-x86_64.AppImage and grant execute permission, or run:

chmod +x WizNote-2.5.9-x86_64.AppImage

Moving or deleting this file will leave WizNote unable to start.

WPS

Download: http://wps-community.org/download.html
After installation it warns about missing fonts; download the font package (e.g. from a Baidu netdisk share), extract it, and double-click each font to install.

Sogou input method

Download it from the official site; before installing, run:

sudo apt install --fix-broken

Otherwise the install may fail. This is not deterministic: of my three Ubuntu 18.04 installs, two succeeded straight away, and one failed until I ran the command above and retried.

Albert

Users familiar with the Mac will know Alfred; Albert offers similar functionality:

sudo add-apt-repository ppa:nilarimogard/webupd8
sudo apt-get update
sudo apt-get install albert -y

System load indicator

sudo apt-get install -y indicator-multiload

Nvidia drivers

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update

VS Code

Download the package from the official site and install it.

Vim

An indispensable tool:

sudo apt-get install vim

Set the default editor:

update-alternatives --config editor

Then choose vim.basic; do not pick vim.tiny, whose feature set is far too small.

git

An indispensable tool:

sudo apt-get install git

Chrome

sudo wget http://www.linuxidc.com/files/repo/google-chrome.list -P /etc/apt/sources.list.d/

wget -q -O - https://dl.google.com/linux/linux_signing_key.pub  | sudo apt-key add

sudo apt update

sudo apt install google-chrome-stable

shadowsocks-qt5

Essential for getting over the firewall:

sudo add-apt-repository ppa:hzwhuang/ss-qt5
sudo apt-get update
sudo apt-get install shadowsocks-qt5

At this point it errors out:

liyang@liyang:~$ sudo add-apt-repository ppa:hzwhuang/ss-qt5
 Shadowsocks-Qt5 is a cross-platform Shadowsocks GUI client.

Shadowsocks is a lightweight tool that helps you bypass firewall(s).

This PPA mainly includes packages for Shadowsocks-Qt5, which means it also includes libQtShadowsocks packages.
 More info: https://launchpad.net/~hzwhuang/+archive/ubuntu/ss-qt5
Press [ENTER] to continue or Ctrl-c to cancel.

Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease
Ign:3 http://ppa.launchpad.net/hzwhuang/ss-qt5/ubuntu bionic InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu bionic-security InRelease
Err:6 http://ppa.launchpad.net/hzwhuang/ss-qt5/ubuntu bionic Release
  404  Not Found [IP: 91.189.95.83 80]
Reading package lists... Done
E: The repository "http://ppa.launchpad.net/hzwhuang/ss-qt5/ubuntu bionic Release" does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

The ss-qt5 PPA has no package for 18.04 (bionic), so point the source at the 17.10 (artful) packages instead:

sudo gedit /etc/apt/sources.list.d/hzwhuang-ubuntu-ss-qt5-bionic.list  # or vim

Change its contents to:

deb http://ppa.launchpad.net/hzwhuang/ss-qt5/ubuntu artful main 

After installing, fill in your ss server details and click connect; if no timeout is reported and a latency figure appears, you are over the wall.
To set the proxy, go to Settings -> Network, choose manual proxy, and set the socks5 proxy to:

127.0.0.1:1080

This is still the crude global mode, though, so let's set up a PAC file:

sudo apt-get install python-pip
sudo pip install genpac

Create a folder under the home directory (/home/liyang):

mkdir ~/shadowsocks
cd ~/shadowsocks

Generate the PAC file:

genpac --pac-proxy "SOCKS5 127.0.0.1:1080" --gfwlist-proxy="SOCKS5 127.0.0.1:1080" --gfwlist-url=https://raw.githubusercontent.com/gfwlist/gfwlist/master/gfwlist.txt --output="autoproxy.pac"

If the gfwlist address is reported as unreachable, your ss-qt5 is not running or your tunnel is broken.
Then set the proxy mode to automatic and fill in:

file:///home/liyang/shadowsocks/autoproxy.pac

JDK, IDEA

mkdir /usr/local/java
tar -zxvf jdk1.8.0_171.tar.gz -C /usr/local/java
sudo vim /etc/profile
## append at the end:
export JAVA_HOME=/usr/local/java/jdk1.8.0_171
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH  
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH 

IDEA

mkdir /usr/local/jetbrains
tar -zxvf idea-IU-181.5087.20.tar.gz -C /usr/local/jetbrains/
cd /usr/local/jetbrains/idea-IU-181.5087.20/bin
./idea.sh

After the first run of idea.sh, recent IDEA versions create a desktop entry in /usr/share/applications automatically; no manual shortcut is needed.

Deepin screenshot

sudo apt-get install deepin-screenshot

Then bind it to a keyboard shortcut.

SecureCRT

This may be the only shell session manager that works well across Windows, macOS, and Linux alike. It is not cheap either; those not yet financially free may choose to crack it.
Download scrt-8.3.3-1646.ubuntu17-64.x86_64.deb (download link).
Once downloaded, double-click the package to install it. After installation, those who have paid can enter their LicenseData directly; the rest of us, poor as I am, do the following.
Download the crack script:

wget http://download.boll.me/securecrt_linux_crack.pl

Run it to apply the crack:

sudo perl securecrt_linux_crack.pl /usr/bin/SecureCRT
[sudo] password for liyang: 
crack successful

License:

  Name:    xiaobo_l
  Company:  www.boll.me
  Serial Number:  03-94-294583
  License Key:  ABJ11G 85V1F9 NENFBK RBWB5W ABH23Q 8XBZAC 324TJJ KXRE5D
  Issue Date:  04-20-2017

Enter the LicenseData above.

Telegram

For a long time there was no truly satisfying chat tool on Ubuntu (Linux). I often need to pass messages and files between different systems, and wineqq has plenty of problems; Telegram fills that need nicely, though you have to be over the wall to use it.

Theming

Recommended themes

Sierra-gtk-theme

Project: https://github.com/vinceliuice/Sierra-gtk-theme
Theme page: https://www.gnome-look.org/p/1013714/
Install:

sudo apt-get install gtk2-engines-murrine gtk2-engines-pixbuf
sudo add-apt-repository ppa:dyatlov-igor/sierra-theme
sudo apt install sierra-gtk-theme

numix-gtk-theme

sudo add-apt-repository ppa:numix/ppa
sudo apt-get update
sudo apt-get install numix-gtk-theme numix-icon-theme-circle

Docky

sudo apt-get install docky

To remove unwanted icons, left-click and drag them upward off the dock. The effect is quite nice, but Docky cannot get rid of the system's built-in dock, so anyone with a taste for tidiness may prefer the Dash to Dock extension.

Dash to dock

With gnome-shell-extensions and chrome-gnome-shell installed, open the GNOME extensions site, search for dash to dock, and install it.
Its configuration is shown in the screenshots below.

 

Results

Docky (screenshot)

Dash to dock (screenshot)