[DATALAD RUNCMD] run codespell throughout fixing typo automagically
=== Do not change lines below ===
{
 "chain": [],
 "cmd": "codespell -w",
 "exit": 0,
 "extra_inputs": [],
 "inputs": [],
 "outputs": [],
 "pwd": "."
}
^^^ Do not change lines above ^^^
yarikoptic committed Oct 26, 2023
1 parent 1117042 commit ef2e1d4
Showing 64 changed files with 93 additions and 93 deletions.
2 changes: 1 addition & 1 deletion community/examples/AMD/hpc-amd-slurm.yaml
@@ -169,7 +169,7 @@ deployment_groups:
disable_public_ips: true
instance_image:
# these images must match the images used by Slurm modules below because
- # we are building OpenMPI with PMI support in libaries contained in
+ # we are building OpenMPI with PMI support in libraries contained in
# Slurm installation
family: slurm-gcp-5-9-hpc-centos-7
project: schedmd-slurm-public
2 changes: 1 addition & 1 deletion community/examples/flux-framework/README.md
@@ -12,7 +12,7 @@ The cluster includes
> **_NOTE:_** prior to running this HPC Toolkit example the [Flux Framework GCP Images](https://github.com/GoogleCloudPlatform/scientific-computing-examples/tree/main/fluxfw-gcp/img#flux-framework-gcp-images)
> must be created in your project.
- ### Intial Setup for flux-framework Cluster
+ ### Initial Setup for flux-framework Cluster

Before provisioning any infrastructure in this project you should follow the
Toolkit guidance to enable [APIs][apis] and establish minimum resource
2 changes: 1 addition & 1 deletion community/examples/intel/README.md
@@ -272,7 +272,7 @@ Both daos-server instances should show a state of *Joined*.
#### About the DAOS Command Line Tools
- The DAOS Management tool `dmg` is used by System Administrators to manange the DAOS storage [system](https://docs.daos.io/v2.2/overview/architecture/#daos-system) and DAOS [pools](https://docs.daos.io/v2.2/overview/storage/#daos-pool). Therefore, `sudo` must be used when running `dmg`.
+ The DAOS Management tool `dmg` is used by System Administrators to manage the DAOS storage [system](https://docs.daos.io/v2.2/overview/architecture/#daos-system) and DAOS [pools](https://docs.daos.io/v2.2/overview/storage/#daos-pool). Therefore, `sudo` must be used when running `dmg`.
The DAOS CLI `daos` is used by both users and System Administrators to create and manage [containers](https://docs.daos.io/v2.2/overview/storage/#daos-container). It is not necessary to use `sudo` with the `daos` command.
4 changes: 2 additions & 2 deletions community/front-end/ofe/deploy.sh
@@ -178,7 +178,7 @@ error() {
# Capture user entry.
# - Has an option to hide the response, which is useful for passwords.
# - Accepts a default that is used when no user entry.
- # - Note: this function is used in command substition, i.e. foo=$(ask "bar")
+ # - Note: this function is used in command substitution, i.e. foo=$(ask "bar")
# so no echo commands can be used
#
# Usage:
@@ -439,7 +439,7 @@ create_service_account() {
getcred=1
;;
*)
verbose "assuming re-use of account"
verbose "assuming reuse of account"
echo ""
echo " Using existing service account: ${service_account}"
case $(ask " Do you want to regenerate a credential? [y/N] ") in
2 changes: 1 addition & 1 deletion community/front-end/ofe/docs/ClusterCommandControl.md
@@ -1,5 +1,5 @@
# Command and Control of Clusters
- Previous incarnations of this Frontend relied on the frontend webserver instance being able to SSH directly to clusters in order to performance command and control (C2) operations. When clusters were created, an admin user was set that would accept a public ssh key for which the webserver owned the private key. This was largely straightfoward, and worked quite well. The clusters were also able to make HTTP API queries to the webserver.
+ Previous incarnations of this Frontend relied on the frontend webserver instance being able to SSH directly to clusters in order to performance command and control (C2) operations. When clusters were created, an admin user was set that would accept a public ssh key for which the webserver owned the private key. This was largely straightforward, and worked quite well. The clusters were also able to make HTTP API queries to the webserver.

This works well in the case where webserver and clusters all have public IP addresses, and are able to receive inbound requests, but it breaks down in the case where a user may wish to have the compute clusters not be directly exposed to the public internet.

2 changes: 1 addition & 1 deletion community/front-end/ofe/docs/WorkbenchUser.md
@@ -91,7 +91,7 @@ on the workbench page.

It is important to remember that all data stored on the workbench instance will
be deleted unless it has been saved in another place such as a shared
- filesystem or transferred elsewhere in another way. Once the destory button is
+ filesystem or transferred elsewhere in another way. Once the destroy button is
clicked a confirmation page will be displayed.

![Destroy confirmation](images/Workbench_userguide/destroy_confirm.png)
10 changes: 5 additions & 5 deletions community/front-end/ofe/docs/admin_guide.md
@@ -12,7 +12,7 @@ applications. and manage user access. Normal HPC users should refer to the
[User Guide](user_guide.md) for guidance on how to prepare and run jobs on
clusters that have been set up by administrators.

- Basic administrator knowledge of the Google Cloud Plaform is needed in order to
+ Basic administrator knowledge of the Google Cloud Platform is needed in order to
create projects and user accounts, but all other low-level administration tasks
are handled by the portal.

@@ -75,7 +75,7 @@ All further deployment actions must be performed from this directory.

#### Google Cloud Platform

- Your organisation must already have access to the Google Cloud Plaform (GCP)
+ Your organisation must already have access to the Google Cloud Platform (GCP)
and be able to create projects and users. A project and a user account with
enabled APIs and roles/permissions need to be created. The user account must
also be authenticated on the client machine to allow it to provision GCP
@@ -299,7 +299,7 @@ be reachable by the VPC subnets intended to be used for clusters.

An internal address can be used if the cluster shares the same VPC with the
imported filesystem. Alternatively, system administrators can set up hybrid
- connectivity (such as extablishing network peering) beforing mounting the
+ connectivity (such as extablishing network peering) before mounting the
external filesystem located elsewhere on GCP.

## Cluster Management
@@ -352,7 +352,7 @@ A typical workflow for creating a new cluster is as follows:
cluster can be specified.
1. In the *Create a new cluster* form, give the new cluster a name. Cloud
resource names are subject to naming constraints and will be validated by the
- system. In general, lower-case alpha-numeric names with hyphens are
+ system. In general, lower-case alphanumeric names with hyphens are
accepted.
1. From the *Subnet* dropdown list, select the subnet within which the cluster
resides.
@@ -554,5 +554,5 @@ back-end logic is handled, which can also help with certain issues.
`terraform destroy` there for clean up cloud resources.
- Certain database records might get corrupted and need to be removed for
failed clusters or network/filesystem components. This can be done from the
- Django Admin site, although adminstrators need to exercise caution while
+ Django Admin site, although administrators need to exercise caution while
modifying the raw data in Django database.
2 changes: 1 addition & 1 deletion community/front-end/ofe/docs/developer_guide.md
@@ -141,7 +141,7 @@ The home directory of the *gcluster* account is at `/opt/gcluster`. For a new de
- `supvisor.log` -  Django application server log. Python `print` from
Django source files will appear in this file for debugging purposes.
- `django.log` - additional debugging information generated by the Python
- logging module is writen here.
+ logging module is written here.

### Run-time data

(file path not shown)
@@ -692,7 +692,7 @@ def _make_run_script(job_dir, uid, gid, orig_run_script):
)
elif script_url.scheme in ["http", "https"]:
if recursive_fetch:
- logger.error("Not Implemented recursive HTTP/HTTPS fetchs")
+ logger.error("Not Implemented recursive HTTP/HTTPS fetches")
return None
fetch = f"curl --silent -O '{text}'"

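The hunk above comes from the code that turns a job's run-script URL into a fetch command. A rough sketch of that dispatch, assuming only the behavior visible in the hunk (`build_fetch_command` and the `gs` branch are illustrative, not the Toolkit's actual API):

```python
from urllib.parse import urlparse

def build_fetch_command(text: str, recursive: bool = False):
    """Return a shell command to fetch a run script, or None if unsupported."""
    scheme = urlparse(text).scheme
    if scheme in ("http", "https"):
        if recursive:
            # Matches the hunk: recursive HTTP/HTTPS fetches are not implemented
            return None
        return f"curl --silent -O '{text}'"
    if scheme == "gs":
        # Illustrative: Cloud Storage URLs are typically fetched with gsutil
        return f"gsutil cp {'-r ' if recursive else ''}'{text}' ."
    return None

print(build_fetch_command("https://example.com/run.sh"))
```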
(file path not shown)
@@ -13,7 +13,7 @@
# limitations under the License.

---
- - name: Add Enviornment Modules
+ - name: Add Environment Modules
ansible.builtin.yum:
name:
- environment-modules
(file path not shown)
@@ -45,7 +45,7 @@
# If no 'target' (aka, coming from the clusters:
# * source={cluster_id} - Who sent it?

- # Command with reponse callback
+ # Command with response callback
#
# Commands that require a response should encode a unique key as a message
# field ('ackid').
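The comment block above documents the message shape: commands that need a reply carry a unique 'ackid' that the cluster echoes back so the response can be matched to a pending callback. A minimal sketch of that request/response matching (names and transport are illustrative, not the frontend's actual code):

```python
import uuid
from typing import Callable, Dict

_pending_acks: Dict[str, Callable[[dict], None]] = {}

def build_command(cluster_id: int, command: str, on_response=None) -> dict:
    """Build a C2 message; register a callback under a fresh ackid if a
    response is expected."""
    message = {"target": cluster_id, "cmd": command}
    if on_response is not None:
        ackid = uuid.uuid4().hex
        message["ackid"] = ackid
        _pending_acks[ackid] = on_response
    return message

def handle_response(response: dict) -> None:
    """Dispatch a cluster's reply to the callback registered for its ackid."""
    callback = _pending_acks.pop(response.get("ackid", ""), None)
    if callback is not None:
        callback(response)

msg = build_command(42, "SYNC", on_response=lambda r: print("acked:", r))
handle_response({"source": 42, "ackid": msg["ackid"], "status": "ok"})
```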
(file path not shown)
@@ -246,7 +246,7 @@ def get_region_zone_info(cloud_provider, credentials):
if cloud_provider == "GCP":
return _get_gcp_region_zone_info(credentials, ttl_hash=_get_ttl_hash())
else:
- raise Exception("Unsupport Cloud Provider")
+ raise Exception("Unsupported Cloud Provider")


def _get_gcp_subnets(credentials):
@@ -274,7 +274,7 @@ def get_subnets(cloud_provider, credentials):
if cloud_provider == "GCP":
return _get_gcp_subnets(credentials)
else:
- raise Exception("Unsupport Cloud Provider")
+ raise Exception("Unsupported Cloud Provider")


_gcp_services_list = None
@@ -585,7 +585,7 @@ def get_gcp_workbench_region_zone_info(


def get_gcp_filestores(credentials):
"""Returns an array of Filestore instance informations
"""Returns an array of Filestore instance information
E.g.
[
{'createTime': ...,
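The first hunk in this file passes `ttl_hash=_get_ttl_hash()` into a cached lookup. That is the usual trick for giving `functools.lru_cache` a time-to-live: an extra argument that changes once per interval, so stale entries fall out of the cache key. A sketch of the idiom, assuming the helper follows the standard pattern (the lookup body here is a stand-in):

```python
import time
from functools import lru_cache

def _get_ttl_hash(seconds: int = 3600) -> int:
    # Changes value once per `seconds`; including it in the call arguments
    # makes lru_cache treat each time bucket as a distinct cache entry.
    return round(time.time() / seconds)

@lru_cache(maxsize=8)
def get_region_zone_info(cloud_provider: str, ttl_hash: int = 0) -> dict:
    del ttl_hash  # unused in the body; it only participates in the cache key
    if cloud_provider != "GCP":
        raise Exception("Unsupported Cloud Provider")
    return {"us-central1": ["us-central1-a", "us-central1-b"]}  # stand-in

info = get_region_zone_info("GCP", ttl_hash=_get_ttl_hash(seconds=300))
```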
(file path not shown)
@@ -623,7 +623,7 @@ def _apply_terraform(self):
)
if len(mgmt_nodes) != 1:
logger.warning(
"Found %d contoller nodes, there should be only 1",
"Found %d controller nodes, there should be only 1",
len(mgmt_nodes),
)
if len(mgmt_nodes):
(file path not shown)
@@ -13,7 +13,7 @@
# limitations under the License.

'''
- This is a backend part of custom image creation fuctionality.
+ This is a backend part of custom image creation functionality.
Frontend views will talk with functions here to perform real actions.
'''

(file path not shown)
@@ -168,7 +168,7 @@ def rsync_dir(
rsync_cmd.extend([src_dir, tgt_dir])

new_env = os.environ.copy()
- # Don't have terraform try to re-use any existing SSH agent
+ # Don't have terraform try to reuse any existing SSH agent
# It has its own keys
if "SSH_AUTH_SOCK" in new_env:
del new_env["SSH_AUTH_SOCK"]
@@ -205,7 +205,7 @@ def run_terraform(target_dir, command, arguments=None, extra_env=None):
log_err_fn = Path(target_dir) / f"terraform_{command}_log.stderr"

new_env = os.environ.copy()
- # Don't have terraform try to re-use any existing SSH agent
+ # Don't have terraform try to reuse any existing SSH agent
# It has its own keys
if "SSH_AUTH_SOCK" in new_env:
del new_env["SSH_AUTH_SOCK"]
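Both hunks in this file scrub `SSH_AUTH_SOCK` before shelling out, so terraform (and the rsync it drives) cannot pick up the operator's SSH agent and instead uses its own keys. A sketch of the pattern (the terraform arguments are illustrative):

```python
import os
import subprocess

def run_without_ssh_agent(cmd: list, cwd: str) -> subprocess.CompletedProcess:
    env = os.environ.copy()
    # Don't let the child process reuse any existing SSH agent;
    # it has its own keys.
    env.pop("SSH_AUTH_SOCK", None)
    return subprocess.run(cmd, cwd=cwd, env=env, capture_output=True,
                          text=True, check=True)

result = run_without_ssh_agent(["terraform", "plan", "-no-color"], cwd=".")
```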
4 changes: 2 additions & 2 deletions community/front-end/ofe/website/ghpcfe/cluster_manager/vpc.py
@@ -157,7 +157,7 @@ def generate_vpc_tf_datablock(vpc: VirtualNetwork, target_dir: Path) -> Path:
key = "name"
else:
raise NotImplementedError(
f"Cloud Provider {vpc.cloud_provider} not yet implmeneted"
f"Cloud Provider {vpc.cloud_provider} not yet implemented"
)
with output_file.open("w") as fp:
fp.write(
@@ -180,7 +180,7 @@ def generate_subnet_tf_datablock(
key = "name"
else:
raise NotImplementedError(
f"Cloud Provider {subnet.cloud_provider} not yet implmeneted"
f"Cloud Provider {subnet.cloud_provider} not yet implemented"
)
with output_file.open("w") as fp:
fp.write(
(file path not shown)
@@ -176,7 +176,7 @@ def copy_startup_script(self):
with startup_script.open("w") as f:
f.write(
f"""#!/bin/bash
echo "starting starup script at `date`" | tee -a /tmp/startup.log
echo "starting startup script at `date`" | tee -a /tmp/startup.log
echo "Getting username..." | tee -a /tmp/startup.log
{startup_script_vars}
4 changes: 2 additions & 2 deletions community/front-end/ofe/website/ghpcfe/forms.py
@@ -133,7 +133,7 @@ def __init__(self, *args, **kwargs):
)
]

- # For machine types, will use JS to get valid types dependant on
+ # For machine types, will use JS to get valid types dependent on
# cloud zone. So bypass cleaning and choices
def prep_dynamic_select(field, value):
self.fields[field].widget.choices = [
@@ -233,7 +233,7 @@ def __init__(self, *args, **kwargs):


class ClusterPartitionForm(forms.ModelForm):
"""Form for Cluster Paritions"""
"""Form for Cluster Partitions"""

machine_type = forms.ChoiceField(widget=forms.Select())
GPU_type = forms.ChoiceField(widget=forms.Select()) # pylint: disable=invalid-name
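The first forms.py hunk skips Django's normal choices validation because the valid machine types are only known client-side (JavaScript queries them per cloud zone). One way that bypass is commonly written, as a sketch; this is not the OFE form itself:

```python
from django import forms

class PartitionForm(forms.Form):  # illustrative stand-in
    machine_type = forms.ChoiceField(widget=forms.Select())

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Valid machine types depend on the zone chosen in the browser and
        # are filled in by JavaScript, so accept whatever value came back
        # rather than validating against a fixed server-side list.
        submitted = self.data.get("machine_type")
        if submitted:
            self.fields["machine_type"].choices = [(submitted, submitted)]
```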
2 changes: 1 addition & 1 deletion community/front-end/ofe/website/ghpcfe/grafana.py
@@ -327,7 +327,7 @@ def create_cluster_dashboard(cluster):
"uid": None,
"title": f"Cluster {cluster.name}",
"panels": panels,
"verison": 0,
"version": 0,
},
"filderId": 0,
"overwrite": True,
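For context, a dashboard dictionary like the one above is normally posted to Grafana's HTTP API, which is where the nested "version" key fixed here matters. A hedged sketch of that call (the /api/dashboards/db endpoint and payload shape are standard Grafana; the URL and token are placeholders):

```python
import json
import urllib.request

def post_dashboard(grafana_url: str, token: str, title: str, panels: list) -> dict:
    payload = {
        "dashboard": {
            "id": None,
            "uid": None,
            "title": title,
            "panels": panels,
            "version": 0,
        },
        "folderId": 0,
        "overwrite": True,
    }
    req = urllib.request.Request(
        f"{grafana_url}/api/dashboards/db",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```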
(file path not shown)
@@ -82,4 +82,4 @@ def handle(self, *args, **kwargs):
socialapp.save()
socialapp.sites.add(site)
except Exception as err:
- raise CommandError("Initalization failed.") from err
+ raise CommandError("Initialization failed.") from err
12 changes: 6 additions & 6 deletions community/front-end/ofe/website/ghpcfe/templates/document.html
@@ -59,7 +59,7 @@ <h2>Documentation</h2>
<a href="https://cloud.google.com/iam/docs/service-accounts" target="_blank">Service Account</a>.
To create a service account: your account must have sufficient permissions.
If you are not the <strong>Owner</strong> or <strong>Editor</strong> of the
- GCP project, follow the instrcutions below. When certain permissions are
+ GCP project, follow the instructions below. When certain permissions are
missing, GCP will give clear error messages. Note the permissions required,
locate them in <a href="https://cloud.google.com/iam/docs/understanding-roles" target="_blank">this page</a>, and identify suitable
roles that provide them. Ask the project Owner to assign those extra roles
@@ -89,7 +89,7 @@ <h2>Documentation</h2>
<p><strong>From command line</strong></p>

<p>It is assumed that you have the gcloud command-line tool installed on your
- development syste, or use GCP cloud shell which has this tool pre-installed. </p>
+ development system, or use GCP cloud shell which has this tool pre-installed. </p>


<hr>
@@ -100,15 +100,15 @@ <h2>Documentation</h2>
platform. This is because one cluster can support multiple machine types. There
are, of course, many good reasons to create multiple clusters on the same
platform, e.g. for project management purpose, or to map cloud usages to
- organisational strucutres.</p>
+ organisational structures.</p>
<p>For each cluster, admin users can choose the suitable machine types for the
organisation's workloads and impose resource limits, e.g. the maximum number
of compute nodes for each machine types.</p>
<p>At any time, each cluster is in one of the following status:</p>
<ul>
<li><strong>New</strong>: Cluster is being newly configured by a user through the web interface.</li>
<li><strong>Creating</strong>: Cluster is being created (i.e. hardware is being brought up online).</li>
- <li><strong>Initialising</strong>: Cluster is being initialised (i.e. software is being installed and enviroment is being prepared). By default, these clusters use a CentOS 7 based operating system with software preconfigured to support distributed MPI and hybrid jobs.</li>
+ <li><strong>Initialising</strong>: Cluster is being initialised (i.e. software is being installed and environment is being prepared). By default, these clusters use a CentOS 7 based operating system with software preconfigured to support distributed MPI and hybrid jobs.</li>
<li><strong>Ready</strong>: Cluster is ready for jobs.</li>
<li><strong>Terminating</strong>: Cluster is being terminated.</li>
<li><strong>Stopped</strong>: Cluster is stopped (can be restarted).</li>
@@ -129,12 +129,12 @@ <h2>Documentation</h2>
means. Spack, an established package management system for HPC, contains
build recipes of most widely used applications, including almost all popular
open-source packages. For applications not yet covered by the Spack
- package repository, e.g. codes developped in-house, or commercial packages
+ package repository, e.g. codes developed in-house, or commercial packages
that require complex set-up, a script-based approach can be used register them
to the system.</p>
<p>Note that an application in this system refers to a unique binary
installation of a package, as identified by a software version, and a specific
- target architecuture. Spack has intrinsic support on both of these factors,
+ target architecture. Spack has intrinsic support on both of these factors,
generating multiple binaries as required. Spack also builds variants of the same software, e.g. packages with optional features switched on. Experienced users should be able to specify such using Spack specifiers.</p>

<a name="spack-applications"><h5>Spack applications</h5></a>
(file path not shown)
@@ -32,7 +32,7 @@ <h2>Job List</h2>
<tr>
<th scope="col">#</th>
<th scope="col">Name</th>
<th scope="col">Submited at</th>
<th scope="col">Submitted at</th>
<th scope="col">Cluster</th>
<th scope="col">Application</th>
<th scope="col">Instance Type</th>
2 changes: 1 addition & 1 deletion community/front-end/ofe/website/ghpcfe/views/asyncview.py
@@ -93,7 +93,7 @@ def set_cluster_status(self, cluster_id, status):
c.save()

def get_task_record_data(self, request):
"""Called from a syncronous context"""
"""Called from a synchronous context"""
return {}

async def _cmd(self, *args, **kwargs):
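The asyncview hunk mixes a synchronous helper (the docstring fixed here) with async methods. In Django, that boundary is typically crossed with asgiref's sync_to_async, along these lines (a sketch under that assumption):

```python
from asgiref.sync import sync_to_async

def get_task_record_data(request) -> dict:
    """Called from a synchronous context (a safe place for blocking ORM calls)."""
    return {}

async def _cmd(request):
    # Run the sync helper in a worker thread instead of blocking the event loop.
    data = await sync_to_async(get_task_record_data)(request)
    return data
```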
4 changes: 2 additions & 2 deletions community/front-end/ofe/website/ghpcfe/views/clusters.py
@@ -445,7 +445,7 @@ def form_valid(self, form):



- # Verify formset validity (suprised there's no method to do this)
+ # Verify formset validity (surprised there's no method to do this)
for formset, formset_name in [
(mountpoints, "mountpoints"),
(partitions, "partitions"),
@@ -646,7 +646,7 @@ def get_file_info(self):


class ClusterLogView(LoginRequiredMixin, generic.DetailView):
"""View to diplay cluster log files"""
"""View to display cluster log files"""

model = Cluster
template_name = "cluster/log.html"
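The first clusters.py hunk validates several formsets by hand (the fixed comment itself notes Django offers no single method for this). The loop reduces to a pattern like the following sketch (names are illustrative):

```python
def all_formsets_valid(named_formsets) -> bool:
    """Validate every formset, collecting errors instead of stopping early."""
    ok = True
    for formset, name in named_formsets:
        if not formset.is_valid():
            ok = False
            for index, errors in enumerate(formset.errors):
                if errors:
                    print(f"{name}[{index}]: {errors}")  # surface to the user
    return ok

# e.g. all_formsets_valid([(mountpoints, "mountpoints"), (partitions, "partitions")])
```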
(file path not shown)
@@ -155,7 +155,7 @@ def create(self, request):


class CredentialValidateAPIView(APIView):
"""Validte credential against cloud platform"""
"""Validate credential against cloud platform"""

def post(self, request):
credential = request.data.__getitem__("detail").rstrip()
2 changes: 1 addition & 1 deletion community/front-end/ofe/website/ghpcfe/views/workbench.py
@@ -265,7 +265,7 @@ def form_valid(self, form):
context = self.get_context_data()
workbenchmountpoints = context["mountpoints_formset"]

- # Verify formset validity (suprised there's no method to do this)
+ # Verify formset validity (surprised there's no method to do this)
for formset in workbenchmountpoints:
if not formset.is_valid():
for error in formset.errors:
(file path not shown)
@@ -85,7 +85,7 @@ No resources.
| <a name="input_partition_name"></a> [partition\_name](#input\_partition\_name) | The name of the slurm partition | `string` | n/a | yes |
| <a name="input_preemptible_bursting"></a> [preemptible\_bursting](#input\_preemptible\_bursting) | Should use preemptibles to burst | `string` | `false` | no |
| <a name="input_regional_capacity"></a> [regional\_capacity](#input\_regional\_capacity) | If True, then create instances in the region that has available capacity. Specify the region in the zone field. | `bool` | `false` | no |
| <a name="input_regional_policy"></a> [regional\_policy](#input\_regional\_policy) | locationPolicy defintion for regional bulkInsert() | `any` | `{}` | no |
| <a name="input_regional_policy"></a> [regional\_policy](#input\_regional\_policy) | locationPolicy definition for regional bulkInsert() | `any` | `{}` | no |
| <a name="input_static_node_count"></a> [static\_node\_count](#input\_static\_node\_count) | Number of nodes to be statically created | `number` | `0` | no |
| <a name="input_subnetwork_name"></a> [subnetwork\_name](#input\_subnetwork\_name) | The name of the pre-defined VPC subnet you want the nodes to attach to based on Region. | `string` | n/a | yes |
| <a name="input_zone"></a> [zone](#input\_zone) | Compute Platform zone where the notebook server will be located | `string` | n/a | yes |
(file path not shown)
@@ -153,7 +153,7 @@ variable "regional_capacity" {
}

variable "regional_policy" {
description = "locationPolicy defintion for regional bulkInsert()"
description = "locationPolicy definition for regional bulkInsert()"
type = any
default = {}
}
(diffs for the remaining changed files are not shown)
