Commit c63c5a56 authored by loganhz, committed by Denise

Remove Istio 0.0.1 and rename 0.0.2 to 0.1.0

parent b7fd40d0
apiVersion: v1
name: rancher-istio
version: 0.0.1
appVersion: 1.2.0
tillerVersion: ">=2.7.2-0"
description: Helm chart for all istio components
home: https://istio.io/
keywords:
- istio
- security
- sidecarInjectorWebhook
- mixer
- pilot
- galley
sources:
- http://github.com/istio/istio
engine: gotpl
icon: https://istio.io/favicons/android-192x192.png
maintainers:
- name: istio
# Istio
[Istio](https://istio.io/) is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data.
The documentation here is for developers only; please follow the installation instructions from [istio.io](https://istio.io/docs/setup/kubernetes/install/helm/) for all other uses.
## Introduction
This chart bootstraps the deployment of all Istio [components](https://istio.io/docs/concepts/what-is-istio/overview.html) on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Chart Details
This chart can install multiple Istio components as subcharts:
- ingressgateway
- egressgateway
- sidecarInjectorWebhook
- galley
- mixer
- pilot
- security (citadel)
- grafana
- prometheus
- tracing (jaeger)
- kiali
To enable or disable each component, change the corresponding `enabled` flag.
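For example, to enable Grafana and disable the egress gateway during `helm install` (a sketch; the value paths `grafana.enabled` and `gateways.istio-egressgateway.enabled` follow this chart's own values files):
```
$ helm install istio --name istio --namespace istio-system \
    --set grafana.enabled=true \
    --set gateways.istio-egressgateway.enabled=false
```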
## Prerequisites
- A Kubernetes 1.9 or newer cluster with RBAC (Role-Based Access Control) enabled is required
- Helm 2.7.2 or newer, or alternatively the ability to modify RBAC rules, is also required
- If you want to enable automatic sidecar injection, Kubernetes 1.9+ with the `admissionregistration` API is required, and the `kube-apiserver` process must have the `admission-control` flag set with the `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` admission controllers added and listed in the correct order (see the example command after this list).
- The `istio-init` chart must be run to completion prior to installing the `istio` chart.
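For reference, a `kube-apiserver` invocation satisfying the webhook requirement might look like the following sketch; the `...` stands for your cluster's other flags, the exact plugin list varies per cluster, and only the presence and relative order of the two webhook plugins matters here:
```
$ kube-apiserver ... --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
```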
## Resources Required
The chart deploys pods that consume the minimum resources specified in the `resources` configuration parameter.
## Installing the Chart
1. If a service account has not already been installed for Tiller, install one:
```
$ kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
```
1. Install Tiller on your cluster with the service account:
```
$ helm init --service-account tiller
```
1. Set and create the namespace where Istio will be installed:
```
$ NAMESPACE=istio-system
$ kubectl create ns $NAMESPACE
```
1. If you are enabling `kiali`, you need to create the secret that contains the username and passphrase for the `kiali` dashboard:
```
$ echo -n 'admin' | base64
YWRtaW4=
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
name: kiali
namespace: $NAMESPACE
labels:
app: kiali
type: Opaque
data:
username: YWRtaW4=
passphrase: MWYyZDFlMmU2N2Rm
EOF
```
1. If you are using security mode for Grafana, create the secret first as follows:
- Encode the username; you can change it to any username you want:
```
$ echo -n 'admin' | base64
YWRtaW4=
```
- Encode the passphrase; you can change it to any passphrase you want:
```
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
```
- Create the secret for Grafana:
```
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
name: grafana
namespace: $NAMESPACE
labels:
app: grafana
type: Opaque
data:
username: YWRtaW4=
passphrase: MWYyZDFlMmU2N2Rm
EOF
```
1. To install the chart with the release name `istio` in the namespace `$NAMESPACE` you defined above:
- With [automatic sidecar injection](https://istio.io/docs/setup/kubernetes/sidecar-injection/#automatic-sidecar-injection) (requires Kubernetes >=1.9.0):
```
$ helm install istio --name istio --namespace $NAMESPACE
```
- Without the sidecar injection webhook:
```
$ helm install istio --name istio --namespace $NAMESPACE --set sidecarInjectorWebhook.enabled=false
```
## Configuration
The Helm chart ships with reasonable defaults. There may be circumstances in which defaults require overrides.
To override Helm values, use `--set key=value` argument during the `helm install` command. Multiple `--set` operations may be used in the same Helm operation.
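For example, overriding two options in a single install (a sketch using options referenced elsewhere in this chart):
```
$ helm install istio --name istio --namespace $NAMESPACE \
    --set grafana.enabled=true \
    --set global.controlPlaneSecurityEnabled=true
```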
Helm charts expose configuration options which are currently in alpha. The currently exposed options can be found [here](https://istio.io/docs/reference/config/installation-options/).
## Uninstalling the Chart
To uninstall/delete the `istio` release but continue to track the release:
```
$ helm delete istio
```
To uninstall/delete the `istio` release completely and make its name free for later use:
```
$ helm delete istio --purge
```
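To check which state a release is in after deletion, Helm 2 can list deleted-but-tracked releases as well (a sketch; output format varies by Helm 2.x version):
```
$ helm ls --all istio
```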
apiVersion: apps/v1
kind: Deployment
metadata:
name: certmanager
namespace: {{ .Release.Namespace }}
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: certmanager
template:
metadata:
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | indent 8 }}
{{- end }}
annotations:
sidecar.istio.io/inject: "false"
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
serviceAccountName: certmanager
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: certmanager
{{- if .Values.global.systemDefaultRegistry }}
image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}"
{{- else }}
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
{{- end }}
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
args:
- --cluster-resource-namespace=$(POD_NAMESPACE)
- --leader-election-namespace=$(POD_NAMESPACE)
{{- if .Values.extraArgs }}
{{ toYaml .Values.extraArgs | indent 8 }}
{{- end }}
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- if .Values.podDnsPolicy }}
dnsPolicy: {{ .Values.podDnsPolicy }}
{{- end }}
{{- if .Values.podDnsConfig }}
dnsConfig:
{{ toYaml .Values.podDnsConfig | indent 8 }}
{{- end }}
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: certmanager
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["certmanager.k8s.io"]
resources: ["certificates", "certificates/finalizers", "issuers", "clusterissuers", "orders", "orders/finalizers", "challenges"]
verbs: ["*"]
- apiGroups: [""]
resources: ["configmaps", "secrets", "events", "services", "pods"]
verbs: ["*"]
- apiGroups: ["extensions"]
resources: ["ingresses"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: certmanager
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: certmanager
subjects:
- name: certmanager
namespace: {{ .Release.Namespace }}
kind: ServiceAccount
# Certmanager uses ACME to sign certificates. Since Istio gateways mount
# the TLS secrets, the Certificate CRDs must be created in the
# istio-system namespace. Once the certificate has been created, the
# gateway must be updated by adding 'secretVolumes'. After the gateway
# restarts, DestinationRules can be created using the ACME-signed certificates.
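# For illustration only, a Certificate for the ingress gateway might look like
# the sketch below. This assumes the cert-manager v0.6 certmanager.k8s.io/v1alpha1
# API shipped with this chart; the issuer name, kind, and domains are
# placeholders, and ACME issuers may additionally require solver configuration:
#
# apiVersion: certmanager.k8s.io/v1alpha1
# kind: Certificate
# metadata:
#   name: istio-ingressgateway-certs
#   namespace: istio-system
# spec:
#   secretName: istio-ingressgateway-certs
#   issuerRef:
#     name: letsencrypt
#     kind: ClusterIssuer
#   commonName: example.com
#   dnsNames:
#   - example.com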
enabled: false
replicaCount: 1
image:
repository: rancher/jetstack-cert-manager-controller
tag: v0.6.2
resources: {}
nodeSelector: {}
tolerations: []
# Specify pod anti-affinity, which lets you constrain the nodes your pod is
# eligible to be scheduled on based on the labels of pods already running on
# those nodes, rather than on the labels of the nodes themselves.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-galley-{{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["validatingwebhookconfigurations"]
verbs: ["*"]
- apiGroups: ["config.istio.io"] # istio mixer CRD watcher
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.istio.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["authentication.istio.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["rbac.istio.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions","apps"]
resources: ["deployments"]
resourceNames: ["istio-galley"]
verbs: ["get"]
- apiGroups: [""]
resources: ["pods", "nodes", "services", "endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources: ["deployments/finalizers"]
resourceNames: ["istio-galley"]
verbs: ["update"]
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-galley-configuration
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
data:
validatingwebhookconfiguration.yaml: |-
{{- include "validatingwebhookconfiguration.yaml.tpl" . | indent 4}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-galley
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
istio: galley
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istio-galley-service-account
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: galley
image: "{{ template "system_default_registry" . }}{{ .Values.repository }}:{{ .Values.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
ports:
- containerPort: 443
- containerPort: {{ .Values.global.monitoringPort }}
- containerPort: 9901
command:
- /usr/local/bin/galley
- server
- --meshConfigFile=/etc/mesh-config/mesh
- --livenessProbeInterval=1s
- --livenessProbePath=/healthliveness
- --readinessProbePath=/healthready
- --readinessProbeInterval=1s
- --deployment-namespace={{ .Release.Namespace }}
{{- if $.Values.global.controlPlaneSecurityEnabled}}
- --insecure=false
{{- else }}
- --insecure=true
{{- end }}
{{- if not $.Values.global.useMCP }}
- --enable-server=false
{{- end }}
- --validation-webhook-config-file
- /etc/config/validatingwebhookconfiguration.yaml
- --monitoringPort={{ .Values.global.monitoringPort }}
{{- if $.Values.global.logging.level }}
- --log_output_level={{ $.Values.global.logging.level }}
{{- end}}
volumeMounts:
- name: certs
mountPath: /etc/certs
readOnly: true
- name: config
mountPath: /etc/config
readOnly: true
- name: mesh-config
mountPath: /etc/mesh-config
readOnly: true
livenessProbe:
exec:
command:
- /usr/local/bin/galley
- probe
- --probe-path=/healthliveness
- --interval=10s
initialDelaySeconds: 5
periodSeconds: 5
readinessProbe:
exec:
command:
- /usr/local/bin/galley
- probe
- --probe-path=/healthready
- --interval=10s
initialDelaySeconds: 5
periodSeconds: 5
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 12 }}
{{- end }}
volumes:
- name: certs
secret:
secretName: istio.istio-galley-service-account
- name: config
configMap:
name: istio-galley-configuration
- name: mesh-config
configMap:
name: istio
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
{{ define "validatingwebhookconfiguration.yaml.tpl" }}
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: istio-galley
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
webhooks:
{{- if .Values.global.configValidation }}
- name: pilot.validation.istio.io
clientConfig:
service:
name: istio-galley
namespace: {{ .Release.Namespace }}
path: "/admitpilot"
caBundle: ""
rules:
- operations:
- CREATE
- UPDATE
apiGroups:
- config.istio.io
apiVersions:
- v1alpha2
resources:
- httpapispecs
- httpapispecbindings
- quotaspecs
- quotaspecbindings
- operations:
- CREATE
- UPDATE
apiGroups:
- rbac.istio.io
apiVersions:
- "*"
resources:
- "*"
- operations:
- CREATE
- UPDATE
apiGroups:
- authentication.istio.io
apiVersions:
- "*"
resources:
- "*"
- operations:
- CREATE
- UPDATE
apiGroups:
- networking.istio.io
apiVersions:
- "*"
resources:
- destinationrules
- envoyfilters
- gateways
- serviceentries
- sidecars
- virtualservices
failurePolicy: Fail
sideEffects: None
- name: mixer.validation.istio.io
clientConfig:
service:
name: istio-galley
namespace: {{ .Release.Namespace }}
path: "/admitmixer"
caBundle: ""
rules:
- operations:
- CREATE
- UPDATE
apiGroups:
- config.istio.io
apiVersions:
- v1alpha2
resources:
- rules
- attributemanifests
- circonuses
- deniers
- fluentds
- kubernetesenvs
- listcheckers
- memquotas
- noops
- opas
- prometheuses
- rbacs
- solarwindses
- stackdrivers
- cloudwatches
- dogstatsds
- statsds
- stdios
- apikeys
- authorizations
- checknothings
# - kuberneteses
- listentries
- logentries
- metrics
- quotas
- reportnothings
- tracespans
- adapters
- handlers
- instances
- templates
- zipkins
failurePolicy: Fail
sideEffects: None
{{- end }}
{{- end }}
#
# galley configuration
#
enabled: true
replicaCount: 1
nodeSelector: {}
tolerations: []
# Specify pod anti-affinity, which lets you constrain the nodes your pod is
# eligible to be scheduled on based on the labels of pods already running on
# those nodes, rather than on the labels of the nodes themselves.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
{{/* affinity - https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ */}}
{{- define "gatewaynodeaffinity" }}
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
{{- include "gatewayNodeAffinityRequiredDuringScheduling" . }}
preferredDuringSchedulingIgnoredDuringExecution:
{{- include "gatewayNodeAffinityPreferredDuringScheduling" . }}
{{- end }}
{{- define "gatewayNodeAffinityRequiredDuringScheduling" }}
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
{{- range $key, $val := .root.Values.global.arch }}
{{- if gt ($val | int) 0 }}
- {{ $key }}
{{- end }}
{{- end }}
{{- $nodeSelector := default .root.Values.global.defaultNodeSelector .nodeSelector -}}
{{- range $key, $val := $nodeSelector }}
- key: {{ $key }}
operator: In
values:
- {{ $val }}
{{- end }}
{{- end }}
{{- define "gatewayNodeAffinityPreferredDuringScheduling" }}
{{- range $key, $val := .root.Values.global.arch }}
{{- if gt ($val | int) 0 }}
- weight: {{ $val | int }}
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- {{ $key }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gatewaypodAntiAffinity" }}
{{- if or .podAntiAffinityLabelSelector .podAntiAffinityTermLabelSelector}}
podAntiAffinity:
{{- if .podAntiAffinityLabelSelector }}
requiredDuringSchedulingIgnoredDuringExecution:
{{- include "gatewaypodAntiAffinityRequiredDuringScheduling" . }}
{{- end }}
{{- if .podAntiAffinityTermLabelSelector }}
preferredDuringSchedulingIgnoredDuringExecution:
{{- include "gatewaypodAntiAffinityPreferredDuringScheduling" . }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gatewaypodAntiAffinityRequiredDuringScheduling" }}
{{- range $index, $item := .podAntiAffinityLabelSelector }}
- labelSelector:
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
- {{ $v }}
{{- end }}
{{- end }}
topologyKey: {{ $item.topologyKey }}
{{- end }}
{{- end }}
{{- define "gatewaypodAntiAffinityPreferredDuringScheduling" }}
{{- range $index, $item := .podAntiAffinityTermLabelSelector }}
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
- {{ $v }}
{{- end }}
{{- end }}
topologyKey: {{ $item.topologyKey }}
weight: 100
{{- end }}
{{- end }}
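{{/*
For illustration (a hypothetical values entry, not shipped with this chart):

  podAntiAffinityLabelSelector:
  - key: security
    operator: In
    values: S1,S2
    topologyKey: "kubernetes.io/hostname"

would be rendered by "gatewaypodAntiAffinityRequiredDuringScheduling" above
(splitting the comma-separated values string into list items) as:

  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: security
        operator: In
        values:
        - S1
        - S2
    topologyKey: kubernetes.io/hostname
*/}}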
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $key }}
namespace: {{ $spec.namespace | default $.Release.Namespace }}
labels:
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
spec:
{{- if not $spec.autoscaleEnabled }}
{{- if $spec.replicaCount }}
replicas: {{ $spec.replicaCount }}
{{- else }}
replicas: 1
{{- end }}
{{- end }}
selector:
matchLabels:
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
template:
metadata:
labels:
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
annotations:
sidecar.istio.io/inject: "false"
{{- if $spec.podAnnotations }}
{{ toYaml $spec.podAnnotations | indent 8 }}
{{ end }}
spec:
serviceAccountName: {{ $key }}-service-account
{{- if $.Values.global.priorityClassName }}
priorityClassName: "{{ $.Values.global.priorityClassName }}"
{{- end }}
{{- if $.Values.global.proxy.enableCoreDump }}
initContainers:
- name: enable-core-dump
image: "{{ template "system_default_registry" $ }}{{ $.Values.global.proxy_init.repository }}:{{ $.Values.global.proxy_init.tag }}"
imagePullPolicy: {{ $.Values.global.imagePullPolicy }}
command:
- /bin/sh
args:
- -c
- sysctl -w kernel.core_pattern=/var/lib/istio/core.proxy && ulimit -c unlimited
securityContext:
privileged: true
{{- end }}
containers:
{{- if $spec.sds }}
{{- if $spec.sds.enabled }}
- name: ingress-sds
image: "{{ template "system_default_registry" $ }}{{ $.Values.global.nodeAgent.repository }}:{{ $.Values.global.nodeAgent.tag }}"
imagePullPolicy: {{ $.Values.global.imagePullPolicy }}
resources:
{{- if $spec.sds.resources }}
{{ toYaml $spec.sds.resources | indent 12 }}
{{- else }}
{{ toYaml $.Values.global.defaultResources | indent 12 }}
{{- end }}
env:
- name: "ENABLE_WORKLOAD_SDS"
value: "false"
- name: "ENABLE_INGRESS_GATEWAY_SDS"
value: "true"
- name: "INGRESS_GATEWAY_NAMESPACE"
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
volumeMounts:
- name: ingressgatewaysdsudspath
mountPath: /var/run/ingress_gateway
{{- end }}
{{- end }}
- name: istio-proxy
image: "{{ template "system_default_registry" $ }}{{ $.Values.global.proxy.repository }}:{{ $.Values.global.proxy.tag }}"
imagePullPolicy: {{ $.Values.global.imagePullPolicy }}
ports:
{{- range $key, $val := $spec.ports }}
- containerPort: {{ $val.port }}
{{- end }}
- containerPort: 15090
protocol: TCP
name: http-envoy-prom
args:
- proxy
- router
- --domain
- $(POD_NAMESPACE).svc.{{ $.Values.global.proxy.clusterDomain }}
{{- if $.Values.global.proxy.logLevel }}
- --proxyLogLevel={{ $.Values.global.proxy.logLevel }}
{{- end}}
{{- if $.Values.global.proxy.componentLogLevel }}
- --proxyComponentLogLevel={{ $.Values.global.proxy.componentLogLevel }}
{{- end}}
{{- if $.Values.global.logging.level }}
- --log_output_level={{ $.Values.global.logging.level }}
{{- end}}
- --drainDuration
- '45s' #drainDuration
- --parentShutdownDuration
- '1m0s' #parentShutdownDuration
- --connectTimeout
- '10s' #connectTimeout
- --serviceCluster
- {{ $key }}
- --zipkinAddress
{{- if $.Values.global.tracer.zipkin.address }}
- {{ $.Values.global.tracer.zipkin.address }}
{{- else if $.Values.global.istioNamespace }}
- zipkin.{{ $.Values.global.istioNamespace }}:9411
{{- else }}
- zipkin:9411
{{- end }}
{{- if $.Values.global.proxy.envoyStatsd.enabled }}
- --statsdUdpAddress
- {{ $.Values.global.proxy.envoyStatsd.host }}:{{ $.Values.global.proxy.envoyStatsd.port }}
{{- end }}
{{- if $.Values.global.proxy.envoyMetricsService.enabled }}
- --envoyMetricsServiceAddress
- {{ $.Values.global.proxy.envoyMetricsService.host }}:{{ $.Values.global.proxy.envoyMetricsService.port }}
{{- end }}
- --proxyAdminPort
- "15000"
- --statusPort
- "15020"
{{- if $.Values.global.controlPlaneSecurityEnabled }}
- --controlPlaneAuthPolicy
- MUTUAL_TLS
- --discoveryAddress
{{- if $.Values.global.istioNamespace }}
- istio-pilot.{{ $.Values.global.istioNamespace }}:15011
{{- else }}
- istio-pilot:15011
{{- end }}
{{- else }}
- --controlPlaneAuthPolicy
- NONE
- --discoveryAddress
{{- if $.Values.global.istioNamespace }}
- istio-pilot.{{ $.Values.global.istioNamespace }}:15010
{{- else }}
- istio-pilot:15010
{{- end }}
{{- end }}
{{- if $.Values.global.trustDomain }}
- --trust-domain={{ $.Values.global.trustDomain }}
{{- end }}
readinessProbe:
failureThreshold: 30
httpGet:
path: /healthz/ready
port: 15020
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources:
{{- if $spec.resources }}
{{ toYaml $spec.resources | indent 12 }}
{{- else }}
{{ toYaml $.Values.global.defaultResources | indent 12 }}
{{- end }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ISTIO_META_CONFIG_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if $spec.sds }}
{{- if $spec.sds.enabled }}
- name: ISTIO_META_USER_SDS
value: "true"
{{- end }}
{{- end }}
{{- if $spec.env }}
{{- range $key, $val := $spec.env }}
- name: {{ $key }}
value: {{ $val }}
{{- end }}
{{- end }}
volumeMounts:
{{- if $.Values.global.sds.enabled }}
- name: sdsudspath
mountPath: /var/run/sds
readOnly: true
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
mountPath: /var/run/secrets/tokens
{{- end }}
{{- end }}
{{- if $spec.sds }}
{{- if $spec.sds.enabled }}
- name: ingressgatewaysdsudspath
mountPath: /var/run/ingress_gateway
{{- end }}
{{- end }}
- name: istio-certs
mountPath: /etc/certs
readOnly: true
{{- range $spec.secretVolumes }}
- name: {{ .name }}
mountPath: {{ .mountPath | quote }}
readOnly: true
{{- end }}
{{- if $spec.additionalContainers }}
{{ toYaml $spec.additionalContainers | indent 8 }}
{{- end }}
volumes:
{{- if $spec.sds }}
{{- if $spec.sds.enabled }}
- name: ingressgatewaysdsudspath
emptyDir: {}
{{- end }}
{{- end }}
{{- if $.Values.global.sds.enabled }}
- name: sdsudspath
hostPath:
path: /var/run/sds
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
projected:
sources:
- serviceAccountToken:
path: istio-token
expirationSeconds: 43200
audience: {{ $.Values.global.trustDomain }}
{{- end }}
{{- end }}
- name: istio-certs
secret:
secretName: istio.{{ $key }}-service-account
optional: true
{{- range $spec.secretVolumes }}
- name: {{ .name }}
secret:
secretName: {{ .secretName | quote }}
optional: true
{{- end }}
{{- range $spec.configVolumes }}
- name: {{ .name }}
configMap:
name: {{ .configMapName | quote }}
optional: true
{{- end }}
affinity:
{{- include "gatewaynodeaffinity" (dict "root" $ "nodeSelector" $spec.nodeSelector) | indent 6 }}
{{- include "gatewaypodAntiAffinity" (dict "podAntiAffinityLabelSelector" $spec.podAntiAffinityLabelSelector "podAntiAffinityTermLabelSelector" $spec.podAntiAffinityTermLabelSelector) | indent 6 }}
{{- if $spec.tolerations }}
tolerations:
{{ toYaml $spec.tolerations | indent 6 }}
{{- end }}
---
{{- end }}
{{- end }}
{{- end }}
#
# Gateways Configuration
# By default (if enabled) a pair of Ingress and Egress Gateways will be created for the mesh.
# You can add more gateways in addition to the defaults but make sure those are uniquely named
# and that NodePorts are not conflicting.
# Disable a specific gateway by setting its `enabled` flag to false.
#
enabled: true
istio-ingressgateway:
enabled: true
#
# Secret Discovery Service (SDS) configuration for ingress gateway.
#
sds:
# If true, the ingress gateway fetches credentials from the SDS server to handle TLS connections.
enabled: false
# The SDS server watches Kubernetes secrets and provisions credentials to the
# ingress gateway. It runs in the same pod as the ingress gateway.
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 1024Mi
labels:
app: istio-ingressgateway
istio: ingressgateway
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
# specify replicaCount when autoscaleEnabled: false
# replicaCount: 1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 1024Mi
cpu:
targetAverageUtilization: 80
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalIPs: []
serviceAnnotations: {}
podAnnotations: {}
type: LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be
#externalTrafficPolicy: Local # set to Local to preserve the source IP, set to Cluster for the default behaviour, or leave commented out
ports:
## You can add custom gateway ports
# Note that AWS ELB will by default perform health checks on the first port
# on this list. Setting this to the health check port will ensure that health
# checks always work. https://github.com/istio/istio/issues/12503
- port: 80
targetPort: 80
name: http2
nodePort: 31380
- port: 443
name: https
nodePort: 31390
### PORTS FOR UI/metrics #####
## Disable if not needed
addOnPorts:
# - port: 15029
# targetPort: 15029
# name: https-kiali
# - port: 15030
# targetPort: 15030
# name: https-prometheus
# - port: 15031
# targetPort: 15031
# name: https-grafana
# - port: 15032
# targetPort: 15032
# name: https-tracing
# # This is the port where sni routing happens
# - port: 15443
# targetPort: 15443
# name: tls
# - port: 15020
# targetPort: 15020
# name: status-port
#### MESH EXPANSION PORTS ########
# Pilot and Citadel MTLS ports are enabled in the gateway, but will only
# redirect to pilot/citadel if global.meshExpansion settings are enabled.
# You can delete these ports if mesh expansion is not enabled, to avoid
# exposing unnecessary ports on the web.
meshExpansionPorts:
- port: 15011
targetPort: 15011
name: tcp-pilot-grpc-tls
- port: 15004
targetPort: 15004
name: tcp-mixer-grpc-tls
- port: 8060
targetPort: 8060
name: tcp-citadel-grpc-tls
- port: 853
targetPort: 853
name: tcp-dns-tls
####### end MESH EXPANSION PORTS ######
##############
secretVolumes:
- name: ingressgateway-certs
secretName: istio-ingressgateway-certs
mountPath: /etc/istio/ingressgateway-certs
- name: ingressgateway-ca-certs
secretName: istio-ingressgateway-ca-certs
mountPath: /etc/istio/ingressgateway-ca-certs
### Advanced options ############
# Ports to explicitly check for readiness. If configured, the readiness check will expect a
# listener on these ports. A comma-separated list is expected, such as "80,443".
#
# Warning: If you do not have a gateway configured for the ports provided, this check will always
# fail. This is intended for use cases where you always expect to have a listener on the port,
# such as 80 or 443 in typical setups.
applicationPorts: ""
env:
# A gateway with this mode ensures that pilot generates an additional
# set of clusters for internal services but without Istio mTLS, to
# enable cross cluster routing.
ISTIO_META_ROUTER_MODE: "sni-dnat"
nodeSelector: {}
tolerations: []
# Specify pod anti-affinity, which lets you constrain the nodes your pod is
# eligible to be scheduled on based on the labels of pods already running on
# those nodes, rather than on the labels of the nodes themselves.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
istio-egressgateway:
enabled: false
labels:
app: istio-egressgateway
istio: egressgateway
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
# specify replicaCount when autoscaleEnabled: false
# replicaCount: 1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 256Mi
cpu:
targetAverageUtilization: 80
serviceAnnotations: {}
podAnnotations: {}
type: ClusterIP #change to NodePort or LoadBalancer if need be
ports:
- port: 80
name: http2
- port: 443
name: https
# This is the port where sni routing happens
- port: 15443
targetPort: 15443
name: tls
secretVolumes:
- name: egressgateway-certs
secretName: istio-egressgateway-certs
mountPath: /etc/istio/egressgateway-certs
- name: egressgateway-ca-certs
secretName: istio-egressgateway-ca-certs
mountPath: /etc/istio/egressgateway-ca-certs
#### Advanced options ########
env:
# Set this to "external" if and only if you want the egress gateway to
# act as a transparent SNI gateway that routes mTLS/TLS traffic to
# external services defined using service entries, where the service
# entry has resolution set to DNS, has one or more endpoints with
# network field set to "external". By default its set to "" so that
# the egress gateway sees the same set of endpoints as the sidecars
# preserving backward compatibility
# ISTIO_META_REQUESTED_NETWORK_VIEW: ""
# A gateway with this mode ensures that pilot generates an additional
# set of clusters for internal services but without Istio mTLS, to
# enable cross cluster routing.
ISTIO_META_ROUTER_MODE: "sni-dnat"
nodeSelector: {}
tolerations: []
# Specify pod anti-affinity, which lets you constrain the nodes your pod is
# eligible to be scheduled on based on the labels of pods already running on
# those nodes, rather than on the labels of the nodes themselves.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
# The mesh ILB gateway creates a gateway of type InternalLoadBalancer
# for mesh expansion. It exposes the mTLS ports for Pilot and Citadel (CA), as
# well as non-mTLS ports to support upgrades and a gradual transition.
istio-ilbgateway:
enabled: false
labels:
app: istio-ilbgateway
istio: ilbgateway
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
# specify replicaCount when autoscaleEnabled: false
# replicaCount: 1
cpu:
targetAverageUtilization: 80
resources:
requests:
cpu: 800m
memory: 512Mi
#limits:
# cpu: 1800m
# memory: 256Mi
loadBalancerIP: ""
serviceAnnotations:
cloud.google.com/load-balancer-type: "internal"
podAnnotations: {}
type: LoadBalancer
ports:
## You can add custom gateway ports. The Google ILB default quota is 5 ports.
- port: 15011
name: grpc-pilot-mtls
# Insecure port - only for migration from 0.8. Will be removed in 1.1
- port: 15010
name: grpc-pilot
- port: 8060
targetPort: 8060
name: tcp-citadel-grpc-tls
# Port 5353 is forwarded to kube-dns
- port: 5353
name: tcp-dns
secretVolumes:
- name: ilbgateway-certs
secretName: istio-ilbgateway-certs
mountPath: /etc/istio/ilbgateway-certs
- name: ilbgateway-ca-certs
secretName: istio-ilbgateway-ca-certs
mountPath: /etc/istio/ilbgateway-ca-certs
nodeSelector: {}
tolerations: []
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-grafana-custom-resources
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: grafana
data:
custom-resources.yaml: |-
{{- include "grafana-default.yaml.tpl" . | indent 4}}
run.sh: |-
{{- include "install-custom-resources.sh.tpl" . | indent 4}}
{{- $files := .Files }}
{{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
{{- $filename := trimSuffix (ext $path) (base $path) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-grafana-configuration-dashboards-{{ $filename }}
namespace: {{ $.Release.Namespace }}
labels:
app: {{ template "grafana.name" $ }}
chart: {{ template "grafana.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
istio: grafana
data:
{{ base $path }}: '{{ $files.Get $path }}'
---
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-grafana
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: grafana
data:
{{- if .Values.datasources }}
{{- range $key, $value := .Values.datasources }}
{{ $key }}: |
{{ toYaml $value | indent 4 }}
{{- end -}}
{{- end -}}
{{- if .Values.dashboardProviders }}
{{- range $key, $value := .Values.dashboardProviders }}
{{ $key }}: |
{{ toYaml $value | indent 4 }}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-grafana-post-install-account
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-grafana-post-install-{{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["authentication.istio.io"] # needed to create default authn policy
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-grafana-post-install-role-binding-{{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-grafana-post-install-{{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: istio-grafana-post-install-account
namespace: {{ .Release.Namespace }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: istio-grafana-post-install-{{ .Values.global.tag | printf "%v" | trunc 32 }}
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
template:
metadata:
name: istio-grafana-post-install
labels:
app: istio-grafana
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
serviceAccountName: istio-grafana-post-install-account
containers:
- name: kubectl
image: "{{ template "system_default_registry" . }}{{ .Values.global.kubectl.repository }}:{{ .Values.global.kubectl.tag }}"
command: [ "/bin/bash", "/tmp/grafana/run.sh", "/tmp/grafana/custom-resources.yaml" ]
volumeMounts:
- mountPath: "/tmp/grafana"
name: tmp-configmap-grafana
volumes:
- name: tmp-configmap-grafana
configMap:
name: istio-grafana-custom-resources
restartPolicy: OnFailure
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
prometheus.io/scrape: "true"
spec:
securityContext:
fsGroup: 472
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ template "system_default_registry" . }}{{ .Values.repository }}:{{ .Values.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
ports:
- containerPort: 3000
readinessProbe:
httpGet:
path: /login
port: 3000
env:
- name: GRAFANA_PORT
value: "3000"
{{- if .Values.security.enabled }}
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
name: {{ .Values.security.secretName }}
key: {{ .Values.security.usernameKey }}
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.security.secretName }}
key: {{ .Values.security.passphraseKey }}
- name: GF_AUTH_BASIC_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "false"
- name: GF_AUTH_DISABLE_LOGIN_FORM
value: "false"
{{- else }}
- name: GF_AUTH_BASIC_ENABLED
value: "false"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Admin
{{- end }}
- name: GF_PATHS_DATA
value: /data/grafana
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 12 }}
{{- end }}
volumeMounts:
- name: data
mountPath: /data/grafana
{{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
{{- $filename := trimSuffix (ext $path) (base $path) }}
- name: dashboards-istio-{{ $filename }}
mountPath: "/var/lib/grafana/dashboards/istio/{{ base $path }}"
subPath: {{ base $path }}
readOnly: true
{{- end }}
- name: config
mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
subPath: datasources.yaml
- name: config
mountPath: "/etc/grafana/provisioning/dashboards/dashboardproviders.yaml"
subPath: dashboardproviders.yaml
- name: grafana-proxy
image: "{{ template "system_default_registry" . }}{{ .Values.global.nginxProxy.repository }}:{{ .Values.global.nginxProxy.tag }}"
args:
- nginx
- -g
- daemon off;
- -c
- /nginx/nginx.conf
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /nginx/
name: grafana-nginx
{{- if and .Values.resources .Values.resources.proxy }}
resources:
{{ toYaml .Values.resources.proxy | indent 10 }}
{{- end }}
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
volumes:
- name: config
configMap:
name: istio-grafana
- name: grafana-nginx
configMap:
name: grafana-nginx
items:
- key: nginx.conf
mode: 438
path: nginx.conf
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim | default ("istio-grafana-pvc") }}
{{- else }}
emptyDir: {}
{{- end }}
{{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
{{- $filename := trimSuffix (ext $path) (base $path) }}
- name: dashboards-istio-{{ $filename }}
configMap:
name: istio-grafana-configuration-dashboards-{{ $filename }}
{{- end }}
{{- if .Values.ingress.enabled -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grafana
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
{{- if .Values.ingress.hosts }}
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host }}
http:
paths:
- path: {{ if $.Values.contextPath }} {{ $.Values.contextPath }} {{ else }} / {{ end }}
backend:
serviceName: grafana
servicePort: 3000
{{- end -}}
{{- else }}
- http:
paths:
- path: {{ if .Values.contextPath }} {{ .Values.contextPath }} {{ else }} / {{ end }}
backend:
serviceName: grafana
servicePort: 3000
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-nginx
labels:
app: grafana-nginx
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
data:
nginx.conf: |-
user nginx;
worker_processes auto;
error_log /dev/stdout warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
log_format main '[$time_local - $status] $remote_addr - $remote_user $request ($http_referer)';
proxy_connect_timeout 10;
proxy_read_timeout 180;
proxy_send_timeout 5;
proxy_buffering off;
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:100m inactive=1d max_size=10g;
server {
listen 80;
access_log off;
gzip on;
gzip_min_length 1k;
gzip_comp_level 2;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript image/jpeg image/gif image/png;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
proxy_set_header Host $host;
location /api/dashboards {
proxy_pass http://localhost:3000;
}
location /api/search {
proxy_pass http://localhost:3000;
sub_filter_types application/json;
sub_filter_once off;
sub_filter '"url":"/d' '"url":"d';
}
location / {
proxy_cache my_zone;
proxy_cache_valid 200 302 1d;
proxy_cache_valid 301 30d;
proxy_cache_valid any 5m;
proxy_cache_bypass $http_cache_control;
add_header X-Proxy-Cache $upstream_cache_status;
add_header Cache-Control "public";
proxy_pass http://localhost:3000/;
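# The sub_filter rules below rewrite absolute URLs in Grafana's HTML
# responses into relative ones, so the dashboard works when served
# behind this reverse proxy under a sub-path.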
sub_filter_types text/html;
sub_filter_once off;
sub_filter '"appSubUrl":""' '"appSubUrl":"."';
sub_filter '"url":"/' '"url":"./';
sub_filter ':"/avatar/' ':"avatar/';
if ($request_filename ~ .*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm)$) {
expires 90d;
}
}
}
}
{{- if .Values.persistence.enabled }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: istio-grafana-pvc
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
storageClassName: {{ .Values.persistence.storageClass }}
accessModes:
- {{ .Values.persistence.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: {{ .Release.Namespace }}
annotations:
{{- range $key, $val := .Values.service.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
kubernetes.io/cluster-service: "true"
spec:
type: {{ .Values.service.type }}
ports:
- name: http-access-grafana
protocol: TCP
targetPort: 80
port: 80
selector:
app: grafana
{{- if .Values.global.enableHelmTest }}
apiVersion: v1
kind: Pod
metadata:
name: {{ template "grafana.fullname" . }}-test
namespace: {{ .Release.Namespace }}
labels:
app: grafana-test
chart: {{ template "grafana.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
istio: grafana
annotations:
sidecar.istio.io/inject: "false"
helm.sh/hook: test-success
spec:
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: "{{ template "grafana.fullname" . }}-test"
image: "{{ template "system_default_registry" . }}{{ .Values.global.proxy.repository }}:{{ .Values.global.proxy.tag }}"
imagePullPolicy: "{{ .Values.global.imagePullPolicy }}"
command: ['curl']
args: ['http://grafana:{{ .Values.grafana.service.externalPort }}']
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- include "podAntiAffinity" . | indent 4 }}
{{- end }}
#
# addon grafana configuration
#
enabled: false
replicaCount: 1
ingress:
enabled: false
## Used to create an Ingress record.
hosts:
- grafana.local
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: grafana-tls
# hosts:
# - grafana.local
persistence:
enabled: false
storageClass: ""
accessMode: ReadWriteOnce
existingClaim: ""
size: 5Gi
security:
enabled: false
secretName: grafana
usernameKey: username
passphraseKey: passphrase
nodeSelector: {}
tolerations: []
# Specify pod anti-affinity, which lets you constrain the nodes your pod is
# eligible to be scheduled on based on the labels of pods already running on
# those nodes, rather than on the labels of the nodes themselves.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
contextPath: /grafana
service:
annotations: {}
type: ClusterIP
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
orgId: 1
url: http://prometheus:9090
access: proxy
isDefault: true
jsonData:
timeInterval: 5s
editable: true
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'istio'
orgId: 1
folder: 'istio'
type: file
disableDeletion: false
options:
path: /var/lib/grafana/dashboards/istio
resources: {}
apiVersion: apps/v1
kind: Deployment
metadata:
name: istiocoredns
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "istiocoredns.name" . }}
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: istiocoredns
template:
metadata:
name: istiocoredns
labels:
app: istiocoredns
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istiocoredns-service-account
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: coredns
image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 10 }}
{{- end }}
- name: istio-coredns-plugin
command:
- /usr/local/bin/plugin
image: "{{ template "system_default_registry" . }}{{ .Values.pluginImage.repository }}:{{ .Values.pluginImage.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
ports:
- containerPort: 8053
name: dns-grpc
protocol: TCP
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 10 }}
{{- end }}
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
#
# addon istiocoredns configuration
#
enabled: false
replicaCount: 1
# Source code for the plugin can be found at
# https://github.com/istio-ecosystem/istio-coredns-plugin
# The plugin listens for DNS requests from the coredns server at 127.0.0.1:8053
nodeSelector: {}
tolerations: []
# Specify pod anti-affinity, which lets you constrain the nodes your pod is
# eligible to be scheduled on based on the labels of pods already running on
# those nodes, rather than on the labels of the nodes themselves.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
apiVersion: v1
description: Kiali is an open source project for service mesh observability; refer to https://www.kiali.io for details.
name: kiali
version: 1.1.0
appVersion: 0.20
tillerVersion: ">=2.7.2"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-kiali-admin-role-binding-{{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kiali{{- if .Values.dashboard.viewOnlyMode }}-viewer{{- end }}
subjects:
- kind: ServiceAccount
name: kiali-service-account
namespace: {{ .Release.Namespace }}
apiVersion: v1
kind: ConfigMap
metadata:
name: kiali
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
data:
config.yaml: |
istio_namespace: {{ .Release.Namespace }}
server:
port: 20001
external_services:
tracing:
service: "tracing/jaeger"
{{- if and .Values.global.rancher (and .Values.global.rancher.domain .Values.global.rancher.clusterId) }}
{{- if not .Values.dashboard.jaegerURL }}
url: 'https://{{ .Values.global.rancher.domain }}/k8s/clusters/{{ .Values.global.rancher.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/http:tracing:80/proxy/jaeger'
{{- end }}
{{- end }}
grafana:
custom_metrics_url: "http://prometheus.{{ .Release.Namespace }}:9090"
{{- if .Values.dashboard.grafanaURL }}
url: {{ .Values.dashboard.grafanaURL }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: kiali
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: kiali
template:
metadata:
name: kiali
labels:
app: kiali
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
spec:
serviceAccountName: kiali-service-account
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- image: "{{ template "system_default_registry" . }}{{ .Values.repository }}:{{ .Values.tag }}"
name: kiali
command:
- "/opt/kiali/kiali"
- "-config"
- "/kiali-configuration/config.yaml"
- "-v"
- "4"
env:
{{- if and .Values.global.rancher (and .Values.global.rancher.domain .Values.global.rancher.clusterId) }}
{{- if not .Values.dashboard.grafanaURL }}
- name: GRAFANA_URL
value: 'https://{{ .Values.global.rancher.domain }}/k8s/clusters/{{ .Values.global.rancher.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/http:grafana:80/proxy/'
{{- end }}
{{- end }}
- name: ACTIVE_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: AUTH_STRATEGY
value: {{ .Values.dashboard.authStrategy }}
- name: SERVER_CREDENTIALS_USERNAME
valueFrom:
secretKeyRef:
name: {{ .Values.dashboard.secretName }}
key: username
optional: true
- name: SERVER_CREDENTIALS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.dashboard.secretName }}
key: passphrase
optional: true
- name: PROMETHEUS_SERVICE_URL
value: {{ .Values.prometheusAddr }}
{{- if .Values.contextPath }}
- name: SERVER_WEB_ROOT
value: {{ .Values.contextPath }}
{{- end }}
volumeMounts:
- name: kiali-configuration
mountPath: "/kiali-configuration"
- name: kiali-secret
mountPath: "/kiali-secret"
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 10 }}
{{- end }}
- name: kiali-proxy
image: "{{ template "system_default_registry" . }}{{ .Values.global.nginxProxy.repository }}:{{ .Values.global.nginxProxy.tag }}"
args:
- nginx
- -g
- daemon off;
- -c
- /nginx/nginx.conf
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /nginx/
name: kiali-nginx
{{- if and .Values.resources .Values.resources.proxy }}
resources:
{{ toYaml .Values.resources.proxy | indent 10 }}
{{- end }}
volumes:
- name: kiali-configuration
configMap:
name: kiali
- name: kiali-nginx
configMap:
name: kiali-nginx
items:
- key: nginx.conf
mode: 438
path: nginx.conf
- name: kiali-secret
secret:
secretName: {{ .Values.dashboard.secretName }}
optional: true
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.global.enableHelmTest }}
apiVersion: v1
kind: Pod
metadata:
name: {{ template "kiali.fullname" . }}-test
namespace: {{ .Release.Namespace }}
labels:
app: kiali-test
chart: {{ template "kiali.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
istio: kiali
annotations:
sidecar.istio.io/inject: "false"
helm.sh/hook: test-success
spec:
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: "{{ template "kiali.fullname" . }}-test"
image: "{{ template "system_default_registry" . }}{{ .Values.global.proxy.repository }}:{{ .Values.global.proxy.tag }}"
imagePullPolicy: "{{ .Values.global.imagePullPolicy }}"
command: ['curl']
args: ['http://kiali:20001']
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- include "podAntiAffinity" . | indent 4 }}
{{- end }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus-{{ .Release.Namespace }}
labels:
app: prometheus
chart: {{ template "prometheus.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]