
Commit 0b3b3c9

Merge pull request newcontext-oss#17 from newcontext-oss/refactor
standardized, improved documentation
cvoid-newcontext authored Mar 9, 2021
2 parents c1eac23 + d7f0459 commit 0b3b3c9
Showing 18 changed files with 86 additions and 111 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -15,21 +15,23 @@ cd aws/
Before you get going, there are some variables you will probably want to set. All of these can be found in `aws/terraform.tfvars` (see the example after this list):
- `allowed_ips_application`: Array containing each of the IPs that are allowed to access the web application. Default `0.0.0.0/0` all IPs.
- `availability_zone`: The AWS availability zone. Default `us-east-1a`.
- `instance_type`: The AWS instance type to use. Default `t3.2xlarge` (8 vCPU, 32 GB RAM).
- `login_email`: The e-mail address used to log in to the application. Default `[email protected]`.
- `region`: The AWS region used. Default `us-east-1`. **NOTE:** if you change this, you will need to change the remote state region in `aws/main.tf`. Variable interpolation is not allowed in that block, so it has to be hardcoded.
- `root_volume_size`: The root volume size for the EC2 instance. Without this, the volume is 7.7GB and fills up in a day. Default `32` (GB). Note that this will incur costs.
- `storage_bucket`: The name of the S3 bucket to store scripts and remote state in. Default `opencti-storage`.
- `subnet_id`: The AWS subnet to use. No default specified.
- `vpc_id`: The VPC to use. No default specified.
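
For illustration, a filled-in `aws/terraform.tfvars` might look like the sketch below. Every value is an example only; the subnet ID, VPC ID, and e-mail address in particular are placeholders you would replace with your own:

```hcl
# aws/terraform.tfvars (example values only)
allowed_ips_application = ["203.0.113.0/24"]          # restrict to your own address range
availability_zone       = "us-east-1a"
instance_type           = "t3.2xlarge"
login_email             = "admin@example.com"          # placeholder e-mail
region                  = "us-east-1"
root_volume_size        = 32
storage_bucket          = "opencti-storage"
subnet_id               = "subnet-0123456789abcdef0"   # placeholder
vpc_id                  = "vpc-0123456789abcdef0"      # placeholder
```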

If your AWS credentials are not stored in `~/.aws/credentials`, you will need to edit that line in `aws/main.tf`.

#### Remote state
The remote state is defined in `aws/main.tf`. Variable interpolation is not allowed in that block and the easiest choice (both for writing the code and for you using the code) was to pick sensible defaults and hardcode them. The variables are:
- `bucket`: The name of the S3 bucket to store the state file in. Default `opencti-storage`.
- `key`: The name of the state file. Default `terraform.tfstate`.
- `region`: The region to use. Default `us-east-1`.
- `storage_bucket`: The name of the S3 bucket to store the state file in. Default `opencti-storage`.

This is mentioned as an FYI for the end user, but if you change the region in `aws/terraform.tfvars`, you will want to change the region here, too. If you want to change the S3 bucket name (defined as a local variable in `aws/main.tf`), you will also want to change it here.
**Important:** If you change the region in `aws/terraform.tfvars`, you will want to change the region here, too. If you want to change the S3 bucket name (defined in `aws/terraform.tfvars`), you will also want to change it here.
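
As a sketch of what keeping the two in sync looks like (the bucket name and region below are hypothetical), changing the region and bucket means editing both files so they agree:

```hcl
# aws/terraform.tfvars (hypothetical overrides)
region         = "us-west-2"
storage_bucket = "my-opencti-state"

# aws/main.tf: the backend block cannot interpolate variables, so repeat the values by hand
terraform {
  backend "s3" {
    bucket = "my-opencti-state"    # must match storage_bucket above
    key    = "terraform.tfstate"
    region = "us-west-2"           # must match region above
  }
}
```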

### Azure
First, change into the `azure/` directory:
@@ -45,6 +47,7 @@ Before you deploy, you may wish to change some of the settings. These are all in
- `location`: The Azure region to deploy in. Default `eastus`.
- `login_email`: The e-mail address used to log in to the OpenCTI web frontend. Default `[email protected]`.
- `os_disk_size`: The VM's disk size (in GB). Default `32` (the [minimum recommended spec](https://github.com/OpenCTI-Platform/opencti/blob/5ede2579ee3c09c248d2111b483560f07d2f2c18/opencti-documentation/docs/getting-started/requirements.md)).
- `storage_bucket`: Name of the storage bucket for storing scripts. Default `opencti` (see the example after this list).
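
For illustration, a filled-in `azure/terraform.tfvars` might look like the sketch below; the e-mail address is a placeholder and the remaining values mirror the defaults shipped in this repository:

```hcl
# azure/terraform.tfvars (example values only)
account_name   = "Pay-As-You-Go"
admin_user     = "azureuser"
location       = "eastus"
login_email    = "admin@example.com"   # placeholder e-mail
os_disk_size   = 32
storage_bucket = "opencti"             # Azure storage account names allow only lowercase letters and digits
```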

### GCP
Change into the `gcp/` directory:
27 changes: 0 additions & 27 deletions aws/ec2.tf

This file was deleted.

4 changes: 1 addition & 3 deletions aws/main.tf
@@ -7,7 +7,7 @@ provider "aws" {
# Store Terraform state in S3
terraform {
backend "s3" {
# The bucket name is a variable defined below (as `s3_bucket`), but variables are not allowed in this block. If you change this, you will probably also want to change that.
# The bucket name is a variable defined in `terraform.tfvars` (as `storage_bucket`), but variables are not allowed in this block. If you change this, you will need to change that.
bucket = "opencti-storage"
key = "terraform.tfstate"
# Again, no variable interpolation in this block so make sure this matches the region defined in `terraform.tfvars`. Default `us-east-1`.
@@ -18,9 +18,7 @@ terraform {
# These variables aren't meant to be changed by the end user.
locals {
ami_id = "ami-0074ee617a234808d" # Ubuntu 20.04 LTS
instance_type = "t3.2xlarge" # 8x32 with EBS-backed storage
opencti_dir = "/opt/opencti"
opencti_install_script_name = "opencti-installer.sh"
opencti_connectors_script_name = "opencti-connectors.sh"
s3_bucket = "opencti-storage"
}
42 changes: 0 additions & 42 deletions aws/s3.tf

This file was deleted.

6 changes: 3 additions & 3 deletions aws/storage.tf
@@ -1,6 +1,6 @@
# S3 bucket to store install and connectors scripts.
resource "aws_s3_bucket" "opencti_bucket" {
bucket = local.s3_bucket
bucket = var.storage_bucket
acl = "private"

# Turn on bucket versioning. We'll be storing the Terraform state in S3 and versioning will help protect against human error.
@@ -17,8 +17,8 @@ data "aws_iam_policy_document" "opencti_s3" {
]

resources = [
"arn:aws:s3:::${local.s3_bucket}",
"arn:aws:s3:::${local.s3_bucket}/*",
"arn:aws:s3:::${var.storage_bucket}",
"arn:aws:s3:::${var.storage_bucket}/*",
]
}
}
2 changes: 2 additions & 0 deletions aws/terraform.tfvars
@@ -1,7 +1,9 @@
# allowed_ips_application = ["0.0.0.0/0"]
# availability_zone = "us-east-1a"
# instance_type = "t3.2xlarge"
# login_email = "[email protected]"
# region = "us-east-1"
# root_volume_size = 32
# storage_bucket = "opencti-storage"
subnet_id = ""
vpc_id = ""
12 changes: 12 additions & 0 deletions aws/variables.tf
@@ -10,6 +10,12 @@ variable "availability_zone" {
default = "us-east-1a"
}

variable "instance_type" {
description = "Instance type to use. Default is 8x32."
type = string
default = "t3.2xlarge"
}

variable "login_email" {
description = "The e-mail address to use for logging into the OpenCTI instance."
type = string
@@ -28,6 +34,12 @@ variable "root_volume_size" {
default = 32
}

variable "storage_bucket" {
description = "The name of the S3 storage bucket to store scripts and remote state in."
type = string
default = "opencti-storage"
}

variable "subnet_id" {
description = "The subnet ID to use."
type = string
29 changes: 29 additions & 0 deletions aws/vm.tf
@@ -0,0 +1,29 @@
# EC2 Instance
resource "aws_instance" "opencti_instance" {
ami = local.ami_id
instance_type = var.instance_type

associate_public_ip_address = true
iam_instance_profile = aws_iam_instance_profile.opencti_profile.name
root_block_device {
volume_size = var.root_volume_size
}
subnet_id = var.subnet_id

# The wrapper script is used by each of the providers and each variable has to be filled out in order to run. Unfortunately, this means that if you change something in one provider, you have to change it in each of the others. It's not ideal, but FYI.
user_data = templatefile("../userdata/installation-wrapper-script.sh", {
account_name = "only for azure",
cloud = "aws",
connection_string = "only for azure",
connectors_script_name = local.opencti_connectors_script_name,
install_script_name = local.opencti_install_script_name,
login_email = var.login_email,
storage_bucket = var.storage_bucket
})

vpc_security_group_ids = [aws_security_group.opencti_sg.id]

tags = {
Name = "opencti"
}
}
15 changes: 2 additions & 13 deletions azure/storage.tf
@@ -1,17 +1,6 @@
# Storage for boot diagnostics.
# Each storage account must have a unique name:
resource "random_id" "randomId" {
keepers = {
# Generate a new ID only when a new resource group is defined
resource_group = azurerm_resource_group.opencti_rg.name
}

byte_length = 8
}

# Create storage account (the name is based on the above randomId). The default storage type is StorageV2: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview
# Create storage account. The default storage type is StorageV2: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview
resource "azurerm_storage_account" "opencti_storage" {
name = "opencti${random_id.randomId.hex}"
name = var.storage_bucket
resource_group_name = azurerm_resource_group.opencti_rg.name
location = var.location
account_replication_type = "LRS"
3 changes: 2 additions & 1 deletion azure/terraform.tfvars
@@ -1,5 +1,6 @@
account_name = ""
account_name = "Pay-As-You-Go"
# admin_user = "azureuser"
# location = "eastus"
login_email = "[email protected]"
# os_disk_size = 32
# storage_bucket = "opencti"
6 changes: 6 additions & 0 deletions azure/variables.tf
@@ -26,3 +26,9 @@ variable "os_disk_size" {
type = number
default = 32
}

variable "storage_bucket" {
description = "Name of the storage bucket."
type = string
default = "opencti"
}
7 changes: 3 additions & 4 deletions azure/vm.tf
@@ -9,16 +9,15 @@ output "tls_private_key" { value = tls_private_key.ssh_private_key.private_key_p
# Bootstrap deployment with wrapper script.
data "template_file" "wrapper_script" {
template = file("../userdata/installation-wrapper-script.sh")
# The wrapper script is used by each of the providers and each variable has to be filled out in order to run. Unfortunately, this means that if you change something in one provider, you have to change it in each of the others. It's not ideal, but FYI.
vars = {
"account_name" = var.account_name
"bucket_name" = "only for aws"
# The wrapper script is used for multiple clouds. This defines this cloud.
"account_name" = var.account_name
"cloud" = "azure"
"connection_string" = azurerm_storage_account.opencti_storage.primary_connection_string
"connectors_script_name" = "opencti-connectors.sh"
"container_name" = azurerm_storage_container.opencti-storage-container.name
"install_script_name" = "opencti-installer.sh"
"login_email" = var.login_email
"storage_bucket" = azurerm_storage_container.opencti-storage-container.name
}
}

1 change: 0 additions & 1 deletion gcp/main.tf
@@ -25,7 +25,6 @@ resource "google_project_service" "iam_api" {
}

locals {
bucket_name = "opencti"
connectors_script_name = "opencti-connectors.sh"
install_script_name = "opencti-installer.sh"
}
2 changes: 1 addition & 1 deletion gcp/storage.tf
@@ -1,6 +1,6 @@
# Create storage bucket and add install and connectors script to it.
resource "google_storage_bucket" "opencti_storage" {
name = local.bucket_name
name = var.storage_bucket
location = var.region
force_destroy = true
}
1 change: 1 addition & 0 deletions gcp/terraform.tfvars
@@ -4,4 +4,5 @@
# machine_type = "e2-standard-8"
project_id = ""
# region = "us-east1"
# storage_bucket = "opencti-storage"
# zone = "us-east1-b"
6 changes: 6 additions & 0 deletions gcp/variables.tf
@@ -32,6 +32,12 @@ variable "region" {
default = "us-east1"
}

variable "storage_bucket" {
description = "Name of the storage bucket."
type = string
default = "opencti-storage"
}

variable "zone" {
description = "The Google Cloud zone to run the instance in."
type = string
9 changes: 4 additions & 5 deletions gcp/vm.tf
@@ -1,16 +1,15 @@
# Startup script template
data "template_file" "startup_script" {
template = file("../userdata/installation-wrapper-script.sh")
# The wrapper script is used by each of the providers and each variable has to be filled out in order to run. Unfortunately, this means that if you change something in one provider, you have to change it in each of the others. It's not ideal, but FYI.
vars = {
"account_name" = "only for azure"
"bucket_name" = local.bucket_name
# The wrapper script is used for multiple clouds. This defines this cloud.
"account_name" = "only for azure"
"cloud" = "gcp"
"connection_string" = "only for azure"
"connectors_script_name" = "opencti-connectors.sh"
"container_name" = "only for azure"
"install_script_name" = "opencti-installer.sh"
"login_email" = var.login_email
"storage_bucket" = var.storage_bucket
}
}

@@ -41,6 +40,6 @@ resource "google_compute_instance" "opencti_instance" {
service_account {
email = google_service_account.storage.email
# Scopes are outlined here: https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instances/set-scopes#--scopes
scopes = [ "cloud-platform", "storage-ro" ]
scopes = ["cloud-platform", "storage-ro"]
}
}
18 changes: 9 additions & 9 deletions userdata/installation-wrapper-script.sh
@@ -1,5 +1,6 @@
#!/bin/bash -e
# Why a wrapper script? Terraform doesn't support massively long scripts (the limit is 16k characters). The install script is much longer than that (even without the banner). This wrapper script sets up the base OS, installs the necessary tools to run the installation and connector scripts, and then pulls those in from an S3 bucket (as defined in s3.tf).
# The wrapper script is used by each of the providers (in `*/vm.tf`) and each variable has to be filled out in order to run. Unfortunately, this means that if you change something in one provider (or in this script), you have to change it in each of the others. It's not ideal, but FYI.

# Print all output to the specified logfile, the system log (-t: as opencti-install), and STDERR (-s).
exec > >(tee /var/log/opencti-install.log|logger -t opencti-install -s 2>/dev/console) 2>&1
@@ -8,28 +9,29 @@ echo "Update base OS"
apt-get update
apt-get upgrade -y

# Copy install and connectors script down from cloud storage.
if [[ ${cloud} == "aws" ]]
then
echi "Install AWS CLI"
echo "Install AWS CLI"
apt-get install -y awscli
echo "Copy the opencti installer script to /opt"
aws s3 cp s3://${bucket_name}/${install_script_name} /opt/${install_script_name}
aws s3 cp s3://${storage_bucket}/${install_script_name} /opt/${install_script_name}
echo "Copy opencti connectors script to /opt"
aws s3 cp s3://${bucket_name}/${connectors_script_name} /opt/${connectors_script_name}
aws s3 cp s3://${storage_bucket}/${connectors_script_name} /opt/${connectors_script_name}
elif [[ ${cloud} == "azure" ]]
then
echo "Install Azure CLI (this can take several minutes)"
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
echo "Copy the opencti installer script to /opt"
az storage blob download --account-name "${account_name}" --container-name "${container_name}" --name "${install_script_name}" --file /opt/"${install_script_name}" --connection-string "${connection_string}"
az storage blob download --account-name "${account_name}" --container-name "${storage_bucket}" --name "${install_script_name}" --file /opt/"${install_script_name}" --connection-string "${connection_string}"
echo "Copy opencti connectors script to /opt"
az storage blob download --account-name "${account_name}" --container-name "${container_name}" --name "${connectors_script_name}" --file /opt/"${connectors_script_name}" --connection-string "${connection_string}"
az storage blob download --account-name "${account_name}" --container-name "${storage_bucket}" --name "${connectors_script_name}" --file /opt/"${connectors_script_name}" --connection-string "${connection_string}"
elif [[ ${cloud} == "gcp" ]]
then
echo "Copy the opencti installer script to /opt"
gsutil cp gs://${bucket_name}/${install_script_name} /opt/${install_script_name}
gsutil cp gs://${storage_bucket}/${install_script_name} /opt/${install_script_name}
echo "Copy opencti connectors script to /opt"
gsutil cp gs://${bucket_name}/${connectors_script_name} /opt/${connectors_script_name}
gsutil cp gs://${storage_bucket}/${connectors_script_name} /opt/${connectors_script_name}
fi

echo "Make scripts executable"
@@ -40,11 +42,9 @@ echo "Starting OpenCTI installation script"
# Run the install script with the provided e-mail address.
# AWS automatically runs the script as root, Azure doesn't.
sudo /opt/${install_script_name} -e "${login_email}"

echo "OpenCTI installation script complete."

echo "Starting OpenCTI connectors script."
# Run the script without prompting the user (the default, `-p 0`, will prompt if the user wants to apply; this is less than ideal for an automated script).
sudo /opt/${connectors_script_name} -p 1

echo "OpenCTI wrapper script complete."
