
Image digest is not updated in Deployment for the main tag #803

@mariuss97

Description


I have a Deployment like the one below, and Keel detects the change (see the log). The Pod restarts, but it still runs the old image digest; the new one is never used. Is this intended, and should I use imagePullPolicy: Always? I wanted to avoid that to save some traffic for the other tags.

time="2025-03-13T18:04:58Z" level=debug msg="secrets.defaultGetter: secret looked up successfully" image=cr.mycr.io/mylab/my-backend namespace=default provider=kubernetes registry=cr.mycr.io
time="2025-03-13T18:04:58Z" level=debug msg="registry.manifest.head url=https://cr.mycr.io/v2/mylab/my-backend/manifests/main repository=mylab/my-backend reference=main"
time="2025-03-13T18:04:58Z" level=debug msg="trigger.poll.WatchTagJob: checking digest" current_digest="sha256:9b2c283d6645a454237edc684edadda246273ede366ae322bce6cf90b81068df" image="mylab/my-backend:main" new_digest="sha256:9d0a31dbd92277ec7399fa27714d3e67a51b6febc0051b89e889d572eed751e9" registry_url="https://cr.mycr.io"
time="2025-03-13T18:04:58Z" level=info msg="trigger.poll.WatchTagJob: digest change detected, submiting event to providers" image="mylab/my-backend:main" new_digest="sha256:9d0a31dbd92277ec7399fa27714d3e67a51b6febc0051b89e889d572eed751e9"
time="2025-03-13T18:04:58Z" level=debug msg="provider.kubernetes.checkVersionedDeployment: keel policy found, checking resource..." kind=deployment name=my-backend-depl namespace=default policy=force
time="2025-03-13T18:04:58Z" level=debug msg="provider.kubernetes: checking image" image="cr.mycr.io/mylab/my-backend:main" kind=deployment name=my-backend-depl namespace=default parsed_image_name="cr.mycr.io/mylab/my-backend:main" policy=force target_image_name=cr.mycr.io/mylab/my-backend target_tag=main
time="2025-03-13T18:04:58Z" level=warning msg="provider.kubernetes: got error while archiving approvals counter after successful update" error="approval not found: record not found" kind=deployment name=my-backend-depl namespace=default
time="2025-03-13T18:04:58Z" level=info msg="provider.kubernetes: resource updated" kind=deployment name=my-backend-depl namespace=default new=main previous=main
time="2025-03-13T18:04:58Z" level=debug msg="updated deployment my-backend-depl" context=translator
time="2025-03-13T18:04:58Z" level=debug msg="updated deployment my-backend-depl" context=translator

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    keel.sh/match-tag: digest
    keel.sh/policy: force
    keel.sh/pollSchedule: '@every 15m'
    keel.sh/track-digest: "true"
    keel.sh/trigger: poll
    meta.helm.sh/release-name: my-backend
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2025-03-11T10:23:40Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: my-backend
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: my-backend
    app.service: my-backend
    helm.sh/chart: base-backend-1.0.0
  name: my-backend-depl
  namespace: default
  resourceVersion: "904165"
  uid: e4770685-540d-4037-a4a4-ef2ca2bbe810
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: my-backend
      app.kubernetes.io/name: my-backend
      app.service: my-backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: my-backend
        app.kubernetes.io/name: my-backend
        app.service: my-backend
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: my-backend-env-service
            optional: true
        - configMapRef:
            name: my-backend-env
        image: cr.mycr.io/mylab/my-server:main
        imagePullPolicy: IfNotPresent
        name: adm
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: "2"
            memory: 2000Mi
          requests:
            cpu: 250m
            memory: 250Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /logs
          name: my-backend-pvc
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: crmylab
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: my-backend-pvc
        persistentVolumeClaim:
          claimName: my-backend-pvc
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2025-03-11T10:23:40Z"
    lastUpdateTime: "2025-03-11T10:23:51Z"
    message: ReplicaSet "my-backend-depl-d9cdfdb58" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2025-03-11T14:16:33Z"
    lastUpdateTime: "2025-03-11T14:16:33Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 3
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
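
Below is a minimal sketch of the imagePullPolicy option raised in the question, reusing the names and image from the manifest above and showing only the fields relevant to pull behaviour (it is not a full copy of the Deployment). With a moving tag such as :main and imagePullPolicy: IfNotPresent, the kubelet keeps using whatever image is already cached under that tag, so a restarted Pod can come back with the old digest even after Keel triggers the rollout. With imagePullPolicy: Always, the runtime re-resolves the tag on every container start; if the digest has not changed, this is essentially just a manifest lookup, and if it has changed, only the new or changed layers are downloaded, so the extra traffic should stay small.

# Sketch only, assuming the same image, names and registry secret as above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend-depl
  namespace: default
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
    keel.sh/pollSchedule: '@every 15m'
spec:
  replicas: 1
  selector:
    matchLabels:
      app.service: my-backend
  template:
    metadata:
      labels:
        app.service: my-backend
    spec:
      containers:
      - name: adm
        image: cr.mycr.io/mylab/my-server:main
        # IfNotPresent lets the node keep using its cached "main" image.
        # Always re-resolves the tag on every container start: an unchanged
        # digest costs only a manifest lookup, a changed digest pulls only
        # the layers that differ.
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
      imagePullSecrets:
      - name: crmylab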
