{"cells":[{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["# CER103 - Configure Cluster with externally signed certificates\n","\n","## Description\n","\n","The purpose of this notebook is to rotate the endpoint certificates with\n","the ones generated and signed outside of Big Data Cluster. It’s expected\n","that the user generates and signs the certificates for the following\n","endpoints: Management Proxy, Gateway, App-Proxy, Master and Controller.\n","Please refer to individual certificate sections below to check the\n","certificate requirements imposed by the notebook. Note that for each pod\n","in master pool in a HA environment we require a different certificate.\n","Certificates will be read from the paths specified in the `Parameters`\n","cell below. This notebook performs the following steps below:\n","\n","1. Upload and install external root CA used for signing the\n"," certificates.\n","2. Upload generated endpoint certificate to controller pod.\n","3. Validate and install each endpoint certificate into the Big Data\n"," Cluster.\n","\n","All certificates will be stored temporarily in the controller pod (at\n","the `controller_cert_store_root` location).\n","\n","Please note that it can take up to 30 minutes for the notebook to be\n","executed.\n","\n","Upon completion of this notebook, https:// access to the Big Data\n","Cluster endpoints from any machine that installs the external CA will\n","show as being secure.\n","\n","### Parameters\n","\n","The parameters set here will override the default parameters set in each\n","individual notebook (`azdata notebook run` injects a `Parameters` cell\n","at runtime with the values passed in from the `-a` argument). Values of\n","these parameters can be modified to point to the paths where\n","user-generated certificates are located. 
Please note that backslash\n","characters in Windows file paths need to be escaped with another\n","backslash."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["parameters"]},"outputs":[],"source":["root_ca_local_certificate_dir = \"/var/opt/secrets/mssql-cluster-certificates\"\n","root_ca_certificate_file_name = \"cacert.pem\"\n","\n","mgmtproxy_local_certificate_dir = \"/var/opt/secrets/mssql-cluster-certificates/mgmtproxy\"\n","mgmtproxy_certificate_file_name = \"service-proxy-certificate.pem\"\n","mgmtproxy_private_key_file_name = \"service-proxy-privatekey.pem\"\n","\n","knox_local_certificate_dir = \"/var/opt/secrets/mssql-cluster-certificates/knox\"\n","knox_certificate_file_name = \"knox-certificate.pem\"\n","knox_private_key_file_name = \"knox-privatekey.pem\"\n","\n","appproxy_local_certificate_dir = \"/var/opt/secrets/mssql-cluster-certificates/appproxy\"\n","appproxy_certificate_file_name = \"service-proxy-certificate.pem\"\n","appproxy_private_key_file_name = \"service-proxy-privatekey.pem\"\n","\n","master_local_certificate_dir = \"/var/opt/secrets/mssql-cluster-certificates/master\"\n","master_certificate_file_names = [\"master-0-certificate.pem\", \"master-1-certificate.pem\", \"master-2-certificate.pem\"]\n","master_private_key_file_names = [\"master-0-privatekey.pem\", \"master-1-privatekey.pem\", \"master-2-privatekey.pem\"]\n","\n","controller_local_certificate_dir = \"/var/opt/secrets/mssql-cluster-certificates/controller\"\n","controller_certificate_file_name = \"controller-certificate.pem\"\n","controller_private_key_file_name = \"controller-privatekey.pem\"\n","controller_pfx_file_name = \"controller-certificate.p12\"\n","\n","controller_cert_store_root = \"/var/opt/secrets/externally-signed-certificates\"\n","local_certificate_dir = \"mssql-cluster-certificates\""]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Common functions\n","\n","Define helper functions used in this notebook."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["hide_input"]},"outputs":[],"source":["# Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows\n","import sys\n","import os\n","import re\n","import platform\n","import shlex\n","import shutil\n","import datetime\n","\n","from subprocess import Popen, PIPE\n","from IPython.display import Markdown\n","\n","retry_hints = {} # Output in stderr known to be transient, therefore automatically retry\n","error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help\n","install_hint = {} # The SOP to help install the executable if it cannot be found\n","\n","def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):\n"," \"\"\"Run shell command, stream stdout, print stderr and optionally return output\n","\n"," NOTES:\n","\n"," 1. 
Commands that need this kind of ' quoting on Windows e.g.:\n","\n"," kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}\n","\n"," Need to actually pass in as '\"':\n","\n"," kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='\"'data-pool'\"')].metadata.name}\n","\n"," The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:\n"," \n"," `iter(p.stdout.readline, b'')`\n","\n"," The shlex.split call does the right thing for each platform, just use the '\"' pattern for a '\n"," \"\"\"\n"," MAX_RETRIES = 5\n"," output = \"\"\n"," retry = False\n","\n"," # When running `azdata sql query` on Windows, replace any \\n in \"\"\" strings, with \" \", otherwise we see:\n"," #\n"," # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')\n"," #\n"," if platform.system() == \"Windows\" and cmd.startswith(\"azdata sql query\"):\n"," cmd = cmd.replace(\"\\n\", \" \")\n","\n"," # shlex.split is required on bash and for Windows paths with spaces\n"," #\n"," cmd_actual = shlex.split(cmd)\n","\n"," # Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries\n"," #\n"," user_provided_exe_name = cmd_actual[0].lower()\n","\n"," # When running python, use the python in the ADS sandbox ({sys.executable})\n"," #\n"," if cmd.startswith(\"python \"):\n"," cmd_actual[0] = cmd_actual[0].replace(\"python\", sys.executable)\n","\n"," # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail\n"," # with:\n"," #\n"," # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)\n"," #\n"," # Setting it to a default value of \"en_US.UTF-8\" enables pip install to complete\n"," #\n"," if platform.system() == \"Darwin\" and \"LC_ALL\" not in os.environ:\n"," os.environ[\"LC_ALL\"] = \"en_US.UTF-8\"\n","\n"," # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`\n"," #\n"," if cmd.startswith(\"kubectl \") and \"AZDATA_OPENSHIFT\" in os.environ:\n"," cmd_actual[0] = cmd_actual[0].replace(\"kubectl\", \"oc\")\n","\n"," # To aid supportability, determine which binary file will actually be executed on the machine\n"," #\n"," which_binary = None\n","\n"," # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to\n"," # get JWT tokens, it returns \"(56) Failure when receiving data from the peer\". If another instance\n"," # of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost\n"," # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we\n"," # look for the 2nd installation of CURL in the path)\n"," if platform.system() == \"Windows\" and cmd.startswith(\"curl \"):\n"," path = os.getenv('PATH')\n"," for p in path.split(os.path.pathsep):\n"," p = os.path.join(p, \"curl.exe\")\n"," if os.path.exists(p) and os.access(p, os.X_OK):\n"," if p.lower().find(\"system32\") == -1:\n"," cmd_actual[0] = p\n"," which_binary = p\n"," break\n","\n"," # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this\n"," # seems to be required for .msi installs of azdata.cmd/az.cmd. 
(otherwise Popen returns FileNotFound)\n","    #\n","    # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.\n","    #\n","    if which_binary == None:\n","        which_binary = shutil.which(cmd_actual[0])\n","\n","    # Display an install HINT, so the user can click on a SOP to install the missing binary\n","    #\n","    if which_binary == None:\n","        print(f\"The path used to search for '{cmd_actual[0]}' was:\")\n","        print(sys.path)\n","\n","        if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:\n","            display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))\n","\n","        raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\")\n","    else:\n","        cmd_actual[0] = which_binary\n","\n","    start_time = datetime.datetime.now().replace(microsecond=0)\n","\n","    print(f\"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)\")\n","    print(f\"       using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})\")\n","    print(f\"       cwd: {os.getcwd()}\")\n","\n","    # Command-line tools such as CURL and AZDATA HDFS commands output scrolling progress\n","    # bars, which cause Jupyter to hang forever. To work around this, use no_output=True.\n","    #\n","\n","    # Work around an infinite hang when a notebook generates a non-zero return code: break out, and do not wait\n","    #\n","    wait = True\n","\n","    try:\n","        if no_output:\n","            p = Popen(cmd_actual)\n","        else:\n","            p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)\n","            with p.stdout:\n","                for line in iter(p.stdout.readline, b''):\n","                    line = line.decode()\n","                    if return_output:\n","                        output = output + line\n","                    else:\n","                        if cmd.startswith(\"azdata notebook run\"): # Hyperlink the .ipynb file\n","                            regex = re.compile('  \"(.*)\"\\\\: \"(.*)\"')\n","                            match = regex.match(line)\n","                            if match:\n","                                if match.group(1).find(\"HTML\") != -1:\n","                                    display(Markdown(f' - \"{match.group(1)}\": \"{match.group(2)}\"'))\n","                                else:\n","                                    display(Markdown(f' - \"{match.group(1)}\": \"[{match.group(2)}]({match.group(2)})\"'))\n","\n","                                    wait = False\n","                                    break # otherwise infinite hang, have not worked out why yet.\n","                        else:\n","                            print(line, end='')\n","\n","        if wait:\n","            p.wait()\n","    except FileNotFoundError as e:\n","        if install_hint is not None:\n","            display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))\n","\n","        raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\") from e\n","\n","    exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()\n","\n","    if not no_output:\n","        for line in iter(p.stderr.readline, b''):\n","            try:\n","                line_decoded = line.decode()\n","            except UnicodeDecodeError:\n","                # NOTE: Sometimes we get characters back that cannot be decoded(), e.g.\n","                #\n","                #   \\xa0\n","                #\n","                # For example see this in the response from `az group create`:\n","                #\n","                # ERROR: Get Token request returned http error: 400 and server\n","                # response: {\"error\":\"invalid_grant\",# \"error_description\":\"AADSTS700082:\n","                # The refresh token has expired due to inactivity.\\xa0The token was\n","                # issued on 2018-10-25T23:35:11.9832872Z\n","                #\n","                # which generates the exception:\n","                #\n","                # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte\n","                #\n","                print(\"WARNING: Unable to decode stderr line, printing raw 
bytes:\")\n"," print(line)\n"," line_decoded = \"\"\n"," pass\n"," else:\n","\n"," # azdata emits a single empty line to stderr when doing an hdfs cp, don't\n"," # print this empty \"ERR:\" as it confuses.\n"," #\n"," if line_decoded == \"\":\n"," continue\n"," \n"," print(f\"STDERR: {line_decoded}\", end='')\n","\n"," if line_decoded.startswith(\"An exception has occurred\") or line_decoded.startswith(\"ERROR: An error occurred while executing the following cell\"):\n"," exit_code_workaround = 1\n","\n"," # inject HINTs to next TSG/SOP based on output in stderr\n"," #\n"," if user_provided_exe_name in error_hints:\n"," for error_hint in error_hints[user_provided_exe_name]:\n"," if line_decoded.find(error_hint[0]) != -1:\n"," display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))\n","\n"," # Verify if a transient error, if so automatically retry (recursive)\n"," #\n"," if user_provided_exe_name in retry_hints:\n"," for retry_hint in retry_hints[user_provided_exe_name]:\n"," if line_decoded.find(retry_hint) != -1:\n"," if retry_count \u003c MAX_RETRIES:\n"," print(f\"RETRY: {retry_count} (due to: {retry_hint})\")\n"," retry_count = retry_count + 1\n"," output = run(cmd, return_output=return_output, retry_count=retry_count)\n","\n"," if return_output:\n"," if base64_decode:\n"," import base64\n"," return base64.b64decode(output).decode('utf-8')\n"," else:\n"," return output\n","\n"," elapsed = datetime.datetime.now().replace(microsecond=0) - start_time\n","\n"," # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so\n"," # don't wait here, if success known above\n"," #\n"," if wait: \n"," if p.returncode != 0:\n"," raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(p.returncode)}.\\n')\n"," else:\n"," if exit_code_workaround !=0 :\n"," raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(exit_code_workaround)}.\\n')\n","\n"," print(f'\\nSUCCESS: {elapsed}s elapsed.\\n')\n","\n"," if return_output:\n"," if base64_decode:\n"," import base64\n"," return base64.b64decode(output).decode('utf-8')\n"," else:\n"," return output\n","\n","\n","\n","# Hints for tool retry (on transient fault), known errors and install guide\n","#\n","retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', ], 'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', ], 'python': [ ], }\n","error_hints = {'azdata': [['Please run \\'azdata login\\' to first authenticate', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], 
['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Can\\'t open lib \\'ODBC Driver 17 for SQL Server', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ['NameError: name \\'azdata_login_secret_name\\' is not defined', 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', 'TSG124 - \\'No credentials were supplied\\' error from azdata login', '../repair/tsg124-no-credentials-were-supplied.ipynb'], ['Please accept the license terms to use this product through', 'TSG126 - azdata fails with \\'accept the license terms to use this product\\'', '../repair/tsg126-accept-license-terms.ipynb'], ], 'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb'], ], 'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP012 - Install unixodbc for Mac', '../install/sop012-brew-install-odbc-for-sql-server.ipynb'], ['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb'], ], }\n","install_hint = {'azdata': [ 'SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb' ], 'kubectl': [ 'SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb' ], }\n","\n","\n","print('Common functions defined successfully.')"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Create a temporary directory to stage files"]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["hide_input"]},"outputs":[],"source":["# Create a temporary directory to hold configuration files\n","\n","import tempfile\n","\n","temp_dir = os.path.join(tempfile.gettempdir(), local_certificate_dir)\n","if not os.path.exists(temp_dir):\n"," os.mkdir(temp_dir)\n"," print(f\"Temporary directory created: {temp_dir}\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Helper function for running notebooks with `azdata notebook run`\n","\n","To pass ‘list’ types to `azdata notebook run --arguments`, flatten to\n","string"]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["hide_input"]},"outputs":[],"source":["# Define helper function 'run_notebook'\n","\n","def run_notebook(name, arguments):\n"," for key, value in 
arguments.items():\n","        if isinstance(value, list):\n","            arguments[key] = str(value).replace(\"'\", \"\") # Remove the quotes, to enable passing to azdata notebook run --arguments\n","        elif isinstance(value, bool):\n","            arguments[key] = '\"' + str(value) + '\"' # Add quotes, to enable passing to azdata notebook run --arguments, use bool(arg) to parse in target notebooks\n","\n","    # --arguments have to be passed as \\\" \\\" quoted strings on Windows cmd line\n","    #\n","    arguments = str(arguments).replace(\"'\", '\\\\\"')\n","\n","    # The notebooks run here can take a long time, so pass in a 30 minute cell timeout\n","    #\n","    # The cwd for the azdata process about to be launched becomes the --output-path (or the auto generated one\n","    # if it isn't specified), but these notebooks go on to run the notebooks in the notebook-o16n\n","    # directory, using a relative link, so here we set the --output-path to the cwd. This isn't great because\n","    # then the output-* notebooks also go into this directory (which is the location of the book)\n","    #\n","    run(f'azdata notebook run -p \"{os.path.join(\"..\", \"notebook-o16n\", name)}\" --arguments \"{arguments}\" --output-path \"{os.getcwd()}\" --output-html --timeout 1800')\n","\n","print(\"Function 'run_notebook' defined\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Login\n","\n","Perform Big Data Cluster login."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["run_notebook(os.path.join(\"..\", \"common\", \"sop028-azdata-login.ipynb\"), {})\n","\n","print(\"Notebook ran successfully.\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Install root CA certificate\n","\n","This section installs the external root CA certificate. The system needs the public key of the external root CA certificate in PEM format. After installation, all pods are restarted so that the new root CA certificate is picked up."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["import os\n","import tempfile\n","from shutil import copy\n","\n","# Copy certificate to temporary directory\n","#\n","cert_file = os.path.join(root_ca_local_certificate_dir, root_ca_certificate_file_name)\n","copy(cert_file, temp_dir)\n","\n","cer05_args = {\"test_cert_store_root\": controller_cert_store_root, \"local_certificate_dir\": local_certificate_dir, \"ca_certificate_file_name\": root_ca_certificate_file_name }\n","\n","notebooks = [\n","    [ os.path.join(\"..\", \"cert-management\", \"cer005-install-existing-root-ca.ipynb\"), cer05_args ]\n","]\n","\n","for notebook in notebooks:\n","    run_notebook(notebook[0], notebook[1])\n","\n","print(\"Notebooks ran successfully.\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Rotate management proxy certificate\n","\n","This section uploads, validates, and installs the management proxy certificate. The mgmtproxy pod is restarted at the end so that the new certificate is picked up. The Big Data Cluster requires the public and private key in PEM format. 
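As a quick sanity check before rotation, you can print a certificate's subject, issuer, and SAN entries from this notebook, e.g. (a hypothetical pre-check using the `run` helper defined above; it assumes `openssl` is available on the machine running this notebook):\n","\n","```python\n","# Show the subject and X509v3 Subject Alternative Name entries of the\n","# management proxy certificate, to confirm the requirements listed below.\n","run(f'openssl x509 -in {os.path.join(mgmtproxy_local_certificate_dir, mgmtproxy_certificate_file_name)} -noout -subject -text')\n","```\n","\n","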
The common name should be set to “mgmtproxy-svc”. The following DNS names should be set in the subject alternative name of the certificate:\n","\n","- mgmtproxy-svc\n","- mgmtproxy-svc.{kubernetes_cluster_namespace}.{kubernetes_cluster_dns_suffix}. Example: mgmtproxy-svc.mssql-cluster.svc.cluster.local.\n","- mgmtproxy-svc.{ad_subdomain_name}.{ad_domain_dns_name}, or mgmtproxy-svc.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: mgmtproxy-svc.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- {mgmtproxy_endpoint_dns_name}. Example: monitor.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["import os\n","import tempfile\n","from shutil import copy\n","\n","# Copy certificate and private key to temporary directory\n","#\n","cert_file = os.path.join(mgmtproxy_local_certificate_dir, mgmtproxy_certificate_file_name)\n","key_file = os.path.join(mgmtproxy_local_certificate_dir, mgmtproxy_private_key_file_name)\n","copy(cert_file, temp_dir)\n","copy(key_file, temp_dir)\n","\n","cer025_args = {\"test_cert_store_root\": controller_cert_store_root, \"local_certificate_dir\": local_certificate_dir, \"certificate_file_name\": mgmtproxy_certificate_file_name, \"private_key_file_name\": mgmtproxy_private_key_file_name }\n","cer04_args = { \"test_cert_store_root\": controller_cert_store_root }\n","\n","notebooks = [\n","    [ os.path.join(\"..\", \"cert-management\", \"cer025-upload-management-service-proxy-cert.ipynb\"), cer025_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer040-install-service-proxy-cert.ipynb\"), cer04_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer050-wait-cluster-healthy.ipynb\"), {} ]\n","]\n","\n","for notebook in notebooks:\n","    run_notebook(notebook[0], notebook[1])\n","\n","print(\"Notebooks ran successfully.\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Rotate knox certificate\n","\n","This section uploads, validates, and installs the Knox certificate. The gateway pod is restarted at the end so that the new certificate is picked up. The Big Data Cluster requires the public and private key in PEM format. The common name should be set to “gateway-svc”. The following DNS names should be set in the subject alternative name of the certificate:\n","\n","- gateway-svc\n","- gateway-svc.{kubernetes_cluster_namespace}.{kubernetes_cluster_dns_suffix}. Example: gateway-svc.mssql-cluster.svc.cluster.local.\n","- gateway-svc.{ad_subdomain_name}.{ad_domain_dns_name}, or gateway-svc.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: gateway-svc.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- {gateway_endpoint_dns_name}. Example: knox.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- gateway-0\n","- gateway-0.{cluster_kubernetes_cluster_namespace}.{kubernetes_cluster_dns_suffix}. Example: gateway-0.mssql-cluster.svc.cluster.local.\n","- gateway-0.{ad_subdomain_name}.{ad_domain_dns_name}, or gateway-0.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: gateway-0.bdc.contoso.local. 
Required only in an AD-enabled Big Data Cluster."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["import os\n","import tempfile\n","from shutil import copy\n","\n","# Copy certificate and private key to temporary directory\n","#\n","cert_file = os.path.join(knox_local_certificate_dir, knox_certificate_file_name)\n","key_file = os.path.join(knox_local_certificate_dir, knox_private_key_file_name)\n","copy(cert_file, temp_dir)\n","copy(key_file, temp_dir)\n","\n","cer026_args = {\"test_cert_store_root\": controller_cert_store_root, \"local_certificate_dir\": local_certificate_dir, \"certificate_file_name\": knox_certificate_file_name, \"private_key_file_name\": knox_private_key_file_name }\n","cer04_args = { \"test_cert_store_root\": controller_cert_store_root }\n","\n","notebooks = [\n","    [ os.path.join(\"..\", \"cert-management\", \"cer026-upload-knox-cert.ipynb\"), cer026_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer041-install-knox-cert.ipynb\"), cer04_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer050-wait-cluster-healthy.ipynb\"), {} ]\n","]\n","\n","for notebook in notebooks:\n","    run_notebook(notebook[0], notebook[1])\n","\n","print(\"Notebooks ran successfully.\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Rotate app proxy certificate\n","\n","This section uploads, validates, and installs the app proxy certificate. The appproxy pod is restarted at the end so that the new certificate is picked up. The Big Data Cluster requires the public and private key in PEM format. The common name should be set to “appproxy-svc”. The following DNS names should be set in the subject alternative name of the certificate:\n","\n","- appproxy-svc\n","- appproxy-svc.{kubernetes_cluster_namespace}.{kubernetes_cluster_dns_suffix}. Example: appproxy-svc.mssql-cluster.svc.cluster.local.\n","- appproxy-svc.{ad_subdomain_name}.{ad_domain_dns_name}, or appproxy-svc.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: appproxy-svc.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- {appproxy_endpoint_dns_name}. Example: application.bdc.contoso.local. 
Required only in an AD-enabled Big Data Cluster."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["import os\n","import tempfile\n","from shutil import copy\n","\n","# Copy certificate and private key to temporary directory\n","#\n","cert_file = os.path.join(appproxy_local_certificate_dir, appproxy_certificate_file_name)\n","key_file = os.path.join(appproxy_local_certificate_dir, appproxy_private_key_file_name)\n","copy(cert_file, temp_dir)\n","copy(key_file, temp_dir)\n","\n","cer027_args = {\"test_cert_store_root\": controller_cert_store_root, \"local_certificate_dir\": local_certificate_dir, \"certificate_file_name\": appproxy_certificate_file_name, \"private_key_file_name\": appproxy_private_key_file_name }\n","cer04_args = { \"test_cert_store_root\": controller_cert_store_root }\n","\n","notebooks = [\n","    [ os.path.join(\"..\", \"cert-management\", \"cer027-upload-app-proxy-cert.ipynb\"), cer027_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer042-install-app-proxy-cert.ipynb\"), cer04_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer050-wait-cluster-healthy.ipynb\"), {} ]\n","]\n","\n","for notebook in notebooks:\n","    run_notebook(notebook[0], notebook[1])\n","\n","print(\"Notebooks ran successfully.\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Get the Kubernetes namespace for the big data cluster\n","\n","Get the namespace of the Big Data Cluster using the `kubectl` command line interface.\n","\n","**NOTE:**\n","\n","If there is more than one Big Data Cluster in the target Kubernetes cluster, then either:\n","\n","- set \\[0\\] to the correct value for the big data cluster.\n","- set the environment variable AZDATA_NAMESPACE before starting Azure Data Studio."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["hide_input"]},"outputs":[],"source":["# Place Kubernetes namespace name for BDC into 'namespace' variable\n","\n","if \"AZDATA_NAMESPACE\" in os.environ:\n","    namespace = os.environ[\"AZDATA_NAMESPACE\"]\n","else:\n","    try:\n","        namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)\n","    except:\n","        from IPython.display import Markdown\n","        print(f\"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. 
SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.\")\n","        display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))\n","        display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))\n","        display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))\n","        raise\n","\n","print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Get the names of the `master` pods"]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["hide_input"]},"outputs":[],"source":["# Place the name of the master pods in variable `pods`\n","\n","podNames = run(f'kubectl get pod --selector=app=master -n {namespace} -o jsonpath={{.items[*].metadata.name}}', return_output=True)\n","pods = podNames.split(\" \")\n","\n","print(f\"Master pod names: {pods}\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Rotate master certificates\n","\n","This section uploads, validates, and installs the master certificates. The master pods are restarted at the end so that the new certificates are picked up. In an HA environment, the manual failover API is invoked to make sure the restart is performed in a safe manner. The Big Data Cluster requires a public and private key in PEM format for each master pod. The common name should be set to “master-svc”. The following DNS names should be set in the subject alternative name of each certificate:\n","\n","- master-svc\n","- master-svc.{kubernetes_cluster_namespace}.{kubernetes_cluster_dns_suffix}. Example: master-svc.mssql-cluster.svc.cluster.local.\n","- master-svc.{ad_subdomain_name}.{ad_domain_dns_name}, or master-svc.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: master-svc.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- {master_endpoint_dns_name}. Example: master.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- {master_readable_secondary_endpoint_dns_name}. Example: master-secondary.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- master-{pod_id}. Example: master-0.\n","- master-{pod_id}.{cluster_kubernetes_cluster_namespace}.{kubernetes_cluster_dns_suffix}. Example: master-0.mssql-cluster.svc.cluster.local.\n","- master-{pod_id}.{ad_subdomain_name}.{ad_domain_dns_name}, or master-{pod_id}.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: master-0.bdc.contoso.local. 
Required only in an AD-enabled Big Data Cluster."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["import os\n","import tempfile\n","from shutil import copy\n","\n","# Copy each master pod's certificate and private key to the temporary directory.\n","# Pods are sorted so that pod order matches the file name lists in the Parameters cell.\n","#\n","\n","pods.sort()\n","\n","for i in range(len(pods)):\n","    pod = pods[i]\n","    cert_file = os.path.join(master_local_certificate_dir, master_certificate_file_names[i])\n","    key_file = os.path.join(master_local_certificate_dir, master_private_key_file_names[i])\n","    copy(cert_file, f'{temp_dir}/{pod}-certificate.pem')\n","    copy(key_file, f'{temp_dir}/{pod}-privatekey.pem')\n","\n","cer028_args = {\"test_cert_store_root\": controller_cert_store_root, \"local_certificate_dir\": local_certificate_dir}\n","cer04_args = { \"test_cert_store_root\": controller_cert_store_root }\n","\n","notebooks = [\n","    [ os.path.join(\"..\", \"cert-management\", \"cer028-upload-master-certs.ipynb\"), cer028_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer043-install-master-certs.ipynb\"), cer04_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer050-wait-cluster-healthy.ipynb\"), {} ]\n","]\n","\n","for notebook in notebooks:\n","    run_notebook(notebook[0], notebook[1])\n","\n","print(\"Notebooks ran successfully.\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["### Rotate controller certificate\n","\n","This section uploads, validates, and installs the controller certificate. The controller pod is restarted at the end so that the new certificate is picked up. The Big Data Cluster requires the public and private key in PEM format. A certificate in PKCS \\#12 format is also required. The common name should be set to “controller-svc”. The following DNS names should be set in the subject alternative name of the certificate:\n","\n","- controller-svc\n","- controller-svc.{kubernetes_cluster_namespace}.{kubernetes_cluster_dns_suffix}. Example: controller-svc.mssql-cluster.svc.cluster.local.\n","- controller-svc.{ad_subdomain_name}.{ad_domain_dns_name}, or controller-svc.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: controller-svc.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- {controller_endpoint_dns_name}. Example: control.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- localhost\n","- hdfsvault-svc\n","- hdfsvault-svc.{ad_subdomain_name}.{ad_domain_dns_name}, or hdfsvault-svc.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: hdfsvault-svc.bdc.contoso.local. Required only in an AD-enabled Big Data Cluster.\n","- mssqlvault-svc\n","- mssqlvault-svc.{ad_subdomain_name}.{ad_domain_dns_name}, or mssqlvault-svc.{cluster_kubernetes_cluster_namespace}.{ad_domain_dns_name} if the subdomain is not set. Example: mssqlvault-svc.bdc.contoso.local. 
Required only in an AD-enabled Big Data Cluster."]},{"cell_type":"code","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["import os\n","import tempfile\n","from shutil import copy\n","\n","# Copy certificate, private key and PFX file to temporary directory\n","#\n","cert_file = os.path.join(controller_local_certificate_dir, controller_certificate_file_name)\n","key_file = os.path.join(controller_local_certificate_dir, controller_private_key_file_name)\n","pfx_file = os.path.join(controller_local_certificate_dir, controller_pfx_file_name)\n","copy(cert_file, temp_dir)\n","copy(key_file, temp_dir)\n","copy(pfx_file, temp_dir)\n","\n","cer029_args = {\"test_cert_store_root\": controller_cert_store_root, \"local_certificate_dir\": local_certificate_dir, \"certificate_file_name\": controller_certificate_file_name, \"private_key_file_name\": controller_private_key_file_name, \"pfx_file_name\": controller_pfx_file_name }\n","cer04_args = { \"test_cert_store_root\": controller_cert_store_root }\n","\n","notebooks = [\n","    [ os.path.join(\"..\", \"cert-management\", \"cer029-upload-controller-cert.ipynb\"), cer029_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer044-install-controller-cert.ipynb\"), cer04_args ],\n","    [ os.path.join(\"..\", \"cert-management\", \"cer050-wait-cluster-healthy.ipynb\"), {} ]\n","]\n","\n","for notebook in notebooks:\n","    run_notebook(notebook[0], notebook[1])\n","\n","print(\"Notebooks ran successfully.\")"]},{"cell_type":"code","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["print(\"Notebook execution is complete.\")"]},{"cell_type":"markdown","execution_count":null,"metadata":{"tags":[]},"outputs":[],"source":["Related\n","-------\n","\n","- [CER005 - Install new Root CA certificate](../cert-management/cer005-install-existing-root-ca.ipynb)\n","- [CER025 - Upload existing Management Proxy certificate](../cert-management/cer025-upload-management-service-proxy-cert.ipynb)\n","- [CER026 - Upload existing Gateway certificate](../cert-management/cer026-upload-knox-cert.ipynb)\n","- [CER027 - Upload existing App Service Proxy certificate](../cert-management/cer027-upload-app-proxy-cert.ipynb)\n","- [CER028 - Upload existing Master certificates](../cert-management/cer028-upload-master-certs.ipynb)\n","- [CER029 - Upload existing Controller certificate](../cert-management/cer029-upload-controller-cert.ipynb)\n","- [CER040 - Install signed Management Proxy certificate](../cert-management/cer040-install-service-proxy-cert.ipynb)\n","- [CER041 - Install signed Knox certificate](../cert-management/cer041-install-knox-cert.ipynb)\n","- [CER042 - Install signed App-Proxy certificate](../cert-management/cer042-install-app-proxy-cert.ipynb)\n","- [CER043 - Install signed Master certificates](../cert-management/cer043-install-master-certs.ipynb)\n","- [CER044 - Install signed Controller certificate](../cert-management/cer044-install-controller-cert.ipynb)\n"]}],"nbformat":4,"nbformat_minor":5,"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3"},"pansop":{"related":"CER005, CER025, CER026, CER027, CER028, CER029, CER040, CER041, CER042,\nCER043, CER044","test":{"strategy":"","types":null,"disable":{"reason":"","workitems":null,"types":null}},"target":{"current":"","final":""},"internal":{"parameters":null,"symlink":false},"timeout":"0"},"language_info":{"codemirror_mode":"{ Name: \"\", Version: \"\"}","file_extension":"","mimetype":"","name":"","nbconvert_exporter":"","pygments_lexer":"","version":""},"widgets":[]}}