Failed to import OCP 3.10.83 images from registry.redhat.io
Asked 11 January 2019

I have an OCP 3.10.83 cluster using registry.redhat.io as the registry. I can log in manually from all masters (3) and infra nodes (3) with `docker login registry.redhat.io`, copying the resulting config.json into /var/lib/oshift/.docker and restarting atomic-openshift-node on every node.

I still have problems with image stream imports. They do not seem to be able to pull from registry.redhat.io. In addition, I am getting some Docker errors.
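For context, the credential file those manual steps produce can be reconstructed locally. A minimal sketch, with placeholder credentials (`RHN_USER`/`RHN_PASS` are names I chose for illustration, not anything from the cluster):

```shell
# Sketch: the auth entry that `docker login registry.redhat.io` writes into
# ~/.docker/config.json is just base64("user:password").
# Placeholder credentials below, not real Customer Portal ones.
RHN_USER=myuser
RHN_PASS=mypass
AUTH=$(printf '%s:%s' "$RHN_USER" "$RHN_PASS" | base64)
printf '{"auths":{"registry.redhat.io":{"auth":"%s"}}}\n' "$AUTH"
# This is the file that is then copied to the node credential directory
# (here /var/lib/oshift/.docker/config.json) before restarting
# atomic-openshift-node.
```

Note that this file only authenticates the Docker daemon on the node; image stream imports are performed by the master API process, which does not read these node-level credentials.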


    oc describe is/php -n openshift
        Name:                   php
        Namespace:              openshift
        Created:                9 hours ago
        Labels:                 <none>
        Annotations:            openshift.io/display-name=PHP
                                openshift.io/image.dockerRepositoryCheck=2019-01-10T09:33:27Z
        Docker Pull Spec:       docker-registry.default.svc:5000/openshift/php
        Image Lookup:           local=false
        Unique Images:          0
        Tags:                   5

        7.1 (latest)
          tagged from registry.redhat.io/rhscl/php-71-rhel7:latest
            prefer registry pullthrough when referencing this tag

          Build and run PHP 7.1 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.1/README.md.
          Tags: builder, php
          Supports: php:7.1, php
          Example Repo: https://github.com/openshift/cakephp-ex.git

          ! error: Import failed (InternalError): Internal error occurred: Get https://registry.redhat.io/v2/rhscl/php-71-rhel7/manifests/latest: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/articles/3399531
              9 hours ago

        7.0
          tagged from registry.redhat.io/rhscl/php-70-rhel7:latest
            prefer registry pullthrough when referencing this tag

          Build and run PHP 7.0 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-php-container/blob/master/7.0/README.md.
          Tags: builder, php
          Supports: php:7.0, php
          Example Repo: https://github.com/openshift/cakephp-ex.git

          ! error: Import failed (InternalError): Internal error occurred: Get https://registry.redhat.io/v2/rhscl/php-70-rhel7/manifests/latest: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/articles/3399531
              9 hours ago

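Once registry credentials are available to the importing namespace, the failed tag imports can be retried. A sketch, assuming the `openshift` namespace as in the output above:

```shell
# Re-trigger the image stream import that failed above (all tags).
oc import-image php --all -n openshift
```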


    [root@oshift-m1 ~]# journalctl -fu docker
    -- Logs begin at Fri 2018-12-28 03:30:23 EST. --
    Jan 10 18:36:12 oshift-m1.dcademo.com dockerd-current[4581]: time="2019-01-10T18:36:12.424348292-05:00" level=error msg="Handler for GET /v1.26/images/registry.redhat.io/openshift3/ose-web-console:v3.10.83/json returned error: No such image: registry.redhat.io/openshift3/ose-web-console:v3.10.83"
    Jan 10 18:36:12 oshift-m1.dcademo.com dockerd-current[4581]: time="2019-01-10T18:36:12.428803683-05:00" level=error msg="Handler for GET /v1.26/images/registry.redhat.io/openshift3/ose-web-console:v3.10.83/json returned error: No such image: registry.redhat.io/openshift3/ose-web-console:v3.10.83"
    Jan 10 18:36:19 oshift-m1.dcademo.com dockerd-current[4581]: time="2019-01-10T18:36:19.428158738-05:00" level=error msg="Handler for GET /v1.26/images/registry.redhat.io/openshift3/ose-service-catalog:v3.10.83/json returned error: No such image: registry.redhat.io/openshift3/ose-service-catalog:v3.10.83"

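The `No such image` lines above only mean the daemon did not find the image locally; whether the pull itself succeeds with the node's credentials can be checked directly on a node, for example:

```shell
# Manual pull test on a node, using the daemon's configured credentials.
docker pull registry.redhat.io/openshift3/ose-web-console:v3.10.83
```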

    [root@oshift-m1 ~]# oc get pods -n default
    NAME                        READY     STATUS             RESTARTS   AGE
    docker-registry-1-xzpbx     1/1       Running            0          7h
    registry-console-1-deploy   0/1       DeadlineExceeded   0          7h
    router-1-477q2              1/1       Running            0          7h
    router-1-lh472              1/1       Running            0          7h
    router-1-rqfl5              1/1       Running            0          7h
    [root@oshift-m1 ~]# oc get secret -n openshift
    NAME                       TYPE                                  DATA      AGE
    builder-dockercfg-2zmjw    kubernetes.io/dockercfg               1         14h
    builder-token-kdknv        kubernetes.io/service-account-token   4         14h
    builder-token-vpdnl        kubernetes.io/service-account-token   4         14h
    default-dockercfg-nlp68    kubernetes.io/dockercfg               1         14h
    default-token-2wt2z        kubernetes.io/service-account-token   4         14h
    default-token-nsm9l        kubernetes.io/service-account-token   4         14h
    deployer-dockercfg-bz4wl   kubernetes.io/dockercfg               1         14h
    deployer-token-p86wj       kubernetes.io/service-account-token   4         14h
    deployer-token-v7rhn       kubernetes.io/service-account-token   4         14h

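The secrets listed above are only the auto-generated service-account secrets; none of them carries registry.redhat.io credentials. A common approach is to create a pull secret in the namespace and link it to the service accounts. A sketch, with the hypothetical secret name `redhat-io-secret` and `$RHN_USER`/`$RHN_PASS` standing in for Customer Portal credentials:

```shell
# Create a docker-registry pull secret for registry.redhat.io.
oc create secret docker-registry redhat-io-secret \
  --docker-server=registry.redhat.io \
  --docker-username="$RHN_USER" \
  --docker-password="$RHN_PASS" \
  -n openshift

# Allow image pulls and builds in the namespace to use it.
oc secrets link default redhat-io-secret --for=pull -n openshift
oc secrets link builder redhat-io-secret --for=pull -n openshift
```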



    [root@oshift-m1 ~]# journalctl -fu atomic-openshift-node
    -- Logs begin at Fri 2018-12-28 03:30:23 EST. --
    Jan 10 20:07:41 oshift-m1.dcademo.com atomic-openshift-node[34635]: E0110 20:07:41.426939   34635 pod_workers.go:186] Error syncing pod 67fbdc51-14f4-11e9-bba9-566ff7b20000 ("apiserver-6k6qb_openshift-template-service-broker(67fbdc51-14f4-11e9-bba9-566ff7b20000)"), skipping: failed to "StartContainer" for "c" with ImagePullBackOff: "Back-off pulling image \"registry.redhat.io/openshift3/ose-template-service-broker:v3.10.83\""
    Jan 10 20:07:42 oshift-m1.dcademo.com atomic-openshift-node[34635]: I0110 20:07:42.426138   34635 kuberuntime_manager.go:513] Container {Name:webconsole Image:registry.redhat.io/openshift3/ose-web-console:v3.10.83 Command:[/usr/bin/origin-web-console --audit-log-path=- -v=0 --config=/var/webconsole-config/webconsole-config.yaml] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:8443 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:serving-cert ReadOnly:false MountPath:/var/serving-cert SubPath: MountPropagation:<nil>} {Name:webconsole-config ReadOnly:false MountPath:/var/webconsole-config SubPath: MountPropagation:<nil>} {Name:webconsole-token-96c6z ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -c if [[ ! -f /tmp/webconsole-config.hash ]]; then \
    Jan 10 20:07:42 oshift-m1.dcademo.com atomic-openshift-node[34635]: md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
    Jan 10 20:07:42 oshift-m1.dcademo.com atomic-openshift-node[34635]: elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
    Jan 10 20:07:42 oshift-m1.dcademo.com atomic-openshift-node[34635]: echo 'webconsole-config.yaml has changed.'; \
    Jan 10 20:07:42 oshift-m1.dcademo.com atomic-openshift-node[34635]: exit 1; \
    Jan 10 20:07:42 oshift-m1.dcademo.com atomic-openshift-node[34635]: fi && curl -k -f https://0.0.0.0:8443/console/],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000090000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
    Jan 10 20:07:42 oshift-m1.dcademo.com atomic-openshift-node[34635]: E0110 20:07:42.429496   34635 pod_workers.go:186] Error syncing pod 1c6ebfec-14f4-11e9-bba9-566ff7b20000 ("webconsole-988998dbf-b6q96_openshift-web-console(1c6ebfec-14f4-11e9-bba9-566ff7b20000)"), skipping: failed to "StartContainer" for "webconsole" with ImagePullBackOff: "Back-off pulling image \"registry.redhat.io/openshift3/ose-web-console:v3.10.83\""
    Jan 10 20:07:47 oshift-m1.dcademo.com atomic-openshift-node[34635]: I0110 20:07:47.422570   34635 kuberuntime_manager.go:513] Container {Name:controller-manager Image:registry.redhat.io/openshift3/ose-service-catalog:v3.10.83 Command:[/usr/bin/service-catalog] Args:[controller-manager --secure-port 6443 -v 3 --leader-election-namespace kube-service-catalog --leader-elect-resource-lock configmaps --cluster-id-configmap-namespace=kube-service-catalog --broker-relist-interval 5m --feature-gates OriginatingIdentity=true --feature-gates AsyncBindingOperations=true] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:6443 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:K8S_NAMESPACE Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:service-catalog-ssl ReadOnly:true MountPath:/var/run/kubernetes-service-catalog SubPath: MountPropagation:<nil>} {Name:service-catalog-controller-token-2jrfm ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:1,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000100000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
    Jan 10 20:07:47 oshift-m1.dcademo.com atomic-openshift-node[34635]: E0110 20:07:47.424452   34635 pod_workers.go:186] Error syncing pod 3ce0b769-14f4-11e9-bba9-566ff7b20000 ("controller-manager-5t65k_kube-service-catalog(3ce0b769-14f4-11e9-bba9-566ff7b20000)"), skipping: failed to "StartContainer" for "controller-manager" with ImagePullBackOff: "Back-off pulling image \"registry.redhat.io/openshift3/ose-service-catalog:v3.10.83\""
    Jan 10 20:07:51 oshift-m1.dcademo.com atomic-openshift-node[34635]: I0110 20:07:51.422471   34635 kuberuntime_manager.go:513] Container {Name:apiserver Image:registry.redhat.io/openshift3/ose-service-catalog:v3.10.83 Command:[/usr/bin/service-catalog] Args:[apiserver --storage-type etcd --secure-port 6443 --etcd-servers https://oshift-etcd1.dcademo.com:2379,https://oshift-etcd2.dcademo.com:2379,https://oshift-etcd3.dcademo.com:2379 --etcd-cafile /etc/origin/master/master.etcd-ca.crt --etcd-certfile /etc/origin/master/master.etcd-client.crt --etcd-keyfile /etc/origin/master/master.etcd-client.key -v 3 --cors-allowed-origins localhost --enable-admission-plugins KubernetesNamespaceLifecycle,DefaultServicePlan,ServiceBindingsLifecycle,ServicePlanChangeValidator,BrokerAuthSarCheck --feature-gates OriginatingIdentity=true] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:6443 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:apiserver-ssl ReadOnly:true MountPath:/var/run/kubernetes-service-catalog SubPath: MountPropagation:<nil>} {Name:etcd-host-cert ReadOnly:true MountPath:/etc/origin/master SubPath: MountPropagation:<nil>} {Name:service-catalog-apiserver-token-5vffl ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:1,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
    Jan 10 20:07:51 oshift-m1.dcademo.com atomic-openshift-node[34635]: E0110 20:07:51.423758   34635 pod_workers.go:186] Error syncing pod 3a1ca9bc-14f4-11e9-bba9-566ff7b20000 ("apiserver-js7cn_kube-service-catalog(3a1ca9bc-14f4-11e9-bba9-566ff7b20000)"), skipping: failed to "StartContainer" for "apiserver" with ImagePullBackOff: "Back-off pulling image \"registry.redhat.io/openshift3/ose-service-catalog:v3.10.83\""
    ^C

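The ImagePullBackOff errors above are for cluster components themselves (web console, service catalog, template service broker), whose pods pull via the node credentials. If this cluster was installed with openshift-ansible, the installer can manage those registry credentials on every node; a sketch of the relevant inventory variables as documented for 3.10/3.11, with placeholder values:

```ini
; [OSEv3:vars] fragment; placeholder credentials.
; ${component}/${version} are literal openshift-ansible substitutions.
oreg_url=registry.redhat.io/openshift3/ose-${component}:${version}
oreg_auth_user=customer_portal_user
oreg_auth_password=customer_portal_password
```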



