This Jupyter Book of executable notebooks (.ipynb) is a companion for SQL Server 2019, to assist in operating and supporting Big Data Clusters.
Each notebook is designed to check for its own dependencies. Pressing the 'run cells' button will either complete successfully or raise an exception with a hyperlinked 'HINT' to another notebook that resolves the missing dependency. Follow the 'HINT' hyperlink to that notebook, press 'run cells', and on success return to the original notebook and press 'run cells' again.
Once all dependencies are installed, if 'run cells' still fails, each notebook analyzes its results and, where possible, produces a hyperlinked 'HINT' to another notebook to further aid in resolving the issue.
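The dependency-check and 'HINT' pattern described above can be sketched as follows. This is a minimal illustration, not code from the book: the function name and the notebook path in the example call are hypothetical.

```python
# Minimal sketch of the dependency-check / 'HINT' pattern: verify a
# prerequisite tool exists and, on failure, raise an exception whose message
# carries a hyperlinked 'HINT' to the notebook that resolves it.
import shutil
import sys

def check_dependency(tool: str, hint_notebook: str) -> None:
    """Raise with a hyperlinked 'HINT' if `tool` is not available."""
    if shutil.which(tool) is None:
        raise RuntimeError(
            f"'{tool}' not found. "
            f"HINT: run [{hint_notebook}]({hint_notebook}) to resolve this, "
            "then return here and press 'run cells' again."
        )

# The current Python interpreter is always present, so this check passes.
# The notebook path is a made-up example.
check_dependency(sys.executable, "install/ntb-install-tool.ipynb")
```

On success the function returns silently; on failure the raised message surfaces in the notebook output with the clickable 'HINT' link.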
The notebooks in this book are designed to abstract away environmental aspects:
1. Running outside or inside the Big Data Cluster - when a notebook runs inside the cluster, the overlay network addresses are used; when it runs outside the cluster, the addresses returned from `azdata bdc endpoint list` are used.
2. AZDATA_OPENSHIFT: Using OpenShift - set the environment variable AZDATA_OPENSHIFT to ensure the `oc` command is used instead of `kubectl`; this also automatically works around other compatibility issues.
3. AZDATA_NAMESPACE: Using multiple Big Data Clusters in the same Kubernetes cluster - set AZDATA_NAMESPACE to target the correct cluster. By default, these notebooks target the cluster whose Kubernetes namespace comes first alphabetically.
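For example, a first notebook cell might set these environment variables before any `azdata` or `kubectl` calls. This is a sketch: the namespace value is an assumption, and the source does not specify which value AZDATA_OPENSHIFT must hold, so substitute your own.

```python
import os

# Use `oc` instead of `kubectl` (and apply other OpenShift workarounds).
# The source only says to "set" the variable; "1" is an assumed value.
os.environ["AZDATA_OPENSHIFT"] = "1"

# Target a specific Big Data Cluster when several share one Kubernetes
# cluster; "mssql-cluster" is an example namespace, substitute your own.
os.environ["AZDATA_NAMESPACE"] = "mssql-cluster"
```

Setting these in the environment of the process that launches Jupyter or Azure Data Studio achieves the same effect for all notebooks in a session.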
Some chapters are effectively self-contained applications. These chapters use the following numbering convention for the contained notebooks.
The '100' notebook, i.e. NTB100, is usually the 'go to' notebook to run in a chapter.
| Chapter | Description |
|---|---|
| troubleshooters | Troubleshooter notebooks, linked to from the Big Data Cluster Dashboard in Azure Data Studio. |
| monitor-bdc | Monitor the Big Data Cluster using the `azdata` command line tool. |
| monitor-k8s | Monitor the Kubernetes cluster hosting a Big Data Cluster. |
| sample | Sample notebooks demonstrating Big Data Cluster features and functionality. |
| cert-management | Manage certificates on Big Data Cluster endpoints. |
| common | Notebooks used by other notebooks, such as `azdata login / logout`. |