Merge pull request #582 from nasa/release-1.5.0
Release 1.5.0
laurenfrederick authored Aug 26, 2019
2 parents f331c8c + 3be9ad9 commit a6a0e24
Showing 49 changed files with 2,094 additions and 1,837 deletions.
21 changes: 20 additions & 1 deletion CHANGELOG.md
@@ -7,6 +7,24 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.

## [Unreleased]

## [v1.5.0] - 2019-08-26

### BREAKING CHANGES

- You must be using Cumulus API version v1.14.0 or above in order to use the new distribution metrics functionality.

### Added

- **CUMULUS-1337**
- You must use Cumulus API version v1.14.0 or above in order to use the new distribution metrics functionality.
- Distribution metrics are no longer served from the Cumulus API, but are computed from the logs in an ELK stack.
- If you want to display distribution metrics using a Kibana instance (ELK stack), you need to set the environment variable `KIBANAROOT` to point to the base URL of an accessible Kibana instance, as well as `ESROOT` to the Elasticsearch endpoint holding your metrics.
- The `KIBANAROOT` is used to generate links to the Kibana Discover page to interrogate errors/successes further.
- The `ESROOT` is used to query Elasticsearch directly to retrieve the displayed counts.
- For information on setting up the Cumulus Distribution API Logs and S3 Server Access see the [Cumulus distribution metrics documentation](https://nasa.github.io/cumulus/docs/features/distribution-metrics).
- See this project's `README.md` for instructions on setting up development access for Kibana and Elasticsearch.


## [v1.4.0] - 2019-04-19

### BREAKING CHANGES
@@ -87,7 +105,8 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.

- Versioning and changelog [CUMULUS-197] by @kkelly51

[Unreleased]: https://github.com/nasa/cumulus-dashboard/compare/v1.4.0...HEAD
[Unreleased]: https://github.com/nasa/cumulus-dashboard/compare/v1.5.0...HEAD
[v1.5.0]: https://github.com/nasa/cumulus-dashboard/compare/v1.4.0...v1.5.0
[v1.4.0]: https://github.com/nasa/cumulus-dashboard/compare/v1.3.0...v1.4.0
[v1.3.0]: https://github.com/nasa/cumulus-dashboard/compare/v1.2.0...v1.3.0
[v1.2.0]: https://github.com/nasa/cumulus-dashboard/compare/v1.1.0...v1.2.0
64 changes: 56 additions & 8 deletions README.md
@@ -23,13 +23,21 @@ The information needed to configure the dashboard is stored at `app/scripts/conf

The following environment variables override the default values in `config.js`:

| Env Name | Description
| -------- | -----------
| HIDE_PDR | whether to hide the PDR menu, default to true
| DAAC\_NAME | e.g. LPDAAC, default to Local
| STAGE | e.g. UAT, default to development
| LABELS | gitc or daac localization (defaults to daac)
| APIROOT | the API URL. This must be set as it defaults to example.com
| Env Name | Description |
| -------- | ----------- |
| HIDE_PDR | whether to hide the PDR menu, default to true |
| DAAC\_NAME | e.g. LPDAAC, default to Local |
| STAGE | e.g. UAT, default to development |
| LABELS | gitc or daac localization (defaults to daac) |
| APIROOT | the API URL. This must be set by the user as it defaults to example.com |
| ENABLE\_RECOVERY | If true, adds recovery options to the granule and collection pages. default: false |
| KIBANAROOT | \<optional\> Should point to a Kibana endpoint. Must be set to examine distribution metrics details. |
| SHOW\_TEA\_METRICS | \<optional\> display metrics from Thin Egress Application (TEA). default: true |
| SHOW\_DISTRIBUTION\_API\_METRICS | \<optional\> Display metrics from Cumulus Distribution API. default: false |
| ESROOT | \<optional\> Should point to an Elasticsearch endpoint. Must be set for distribution metrics to be displayed. |
| ES\_USER | \<optional\> Elasticsearch username, needed when protected by basic authorization |
| ES\_PASSWORD | \<optional\> Elasticsearch password, needed when protected by basic authorization |
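
Since these variables override defaults in `app/scripts/config.js`, the pattern can be sketched roughly as follows (the object shape and default values here are illustrative, not the file's actual contents):

```javascript
// Rough sketch of the env-override pattern described above.
// Shape and defaults are illustrative; see app/scripts/config.js for the real values.
const config = {
  apiRoot: process.env.APIROOT || 'https://example.com/',
  hidePdr: process.env.HIDE_PDR !== 'false',
  kibanaRoot: process.env.KIBANAROOT || '',
  esRoot: process.env.ESROOT || ''
};
```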


## Building or running locally

@@ -105,6 +113,46 @@ For development and testing purposes, you can use a fake API server provided wit
$ APIROOT=http://localhost:5001 yarn run serve
```

#### NGAP Sandbox Metrics Development

##### Kibana and Elasticsearch access

In order to develop features that interact with Kibana or Elasticsearch in the NGAP sandbox, you need to set up tunnels through the metrics team's bastion host. First, you must get access to the metrics host. This will require a [NASD ticket](https://bugs.earthdata.nasa.gov/servicedesk/customer/portal/7/create/79) and permission from the metrics team. Once you have access to the metrics bastion host, you can get the IP addresses for the bastion, Kibana, and Elasticsearch machines from the metrics team and configure your `.ssh/config` file to create your local tunnels. This configuration opens traffic to the Kibana endpoint on localhost:5601 and to Elasticsearch on localhost:9201, tunneling traffic through the bastion and Kibana machines.

```
Host metrics-bastion-host
Hostname "Bastion.Host.Ip.Address"
User ec2-user
IdentitiesOnly yes
IdentityFile ~/.ssh/your_private_bastion_key
Host metrics-elk-tunnels
Hostname "Kibana.Host.IP.Address"
IdentitiesOnly yes
ProxyCommand ssh metrics-bastion-host -W %h:%p
User ec2-user
IdentityFile ~/.ssh/your_private_bastion_key
# kibana
LocalForward 5601 "Kibana.Host.IP.Address":5601
# elastic search
LocalForward 9201 "Elasticsearch.Host.IP.Address":9201
```

Now you can configure your sandbox environment with these variables.

```sh
export ESROOT=http://localhost:9201
export KIBANAROOT=http://localhost:5601
```

If the Elasticsearch machine is protected by basic authorization, the following two variables should also be set.

```sh
export ES_USER=<username>
export ES_PASSWORD=<password>
```
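
When set, these credentials are typically sent as an HTTP basic-auth header on requests to Elasticsearch; a minimal sketch of the encoding (illustrative only, not the dashboard's actual request code):

```javascript
// Sketch: encoding ES_USER/ES_PASSWORD as a basic-auth Authorization header.
// Illustrative only; the fallback values stand in for unset environment variables.
const user = process.env.ES_USER || 'username';
const password = process.env.ES_PASSWORD || 'password';
const authHeader = 'Basic ' + Buffer.from(`${user}:${password}`).toString('base64');
```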



### Running locally in Docker

There is a script called `bin/build_docker_image.sh` which will build a Docker image
@@ -219,4 +267,4 @@ Follow the [Github documentation to create a new release](https://help.github.co

The updates to the CHANGELOG and the version number still need to be merged back to the `develop` branch.

Create a PR for the `release-vX.X.X` branch against the `develop` branch. Verify that the Circle CI build for the PR succeeds and then merge to `develop`.
2 changes: 1 addition & 1 deletion TABLES.md
@@ -12,7 +12,7 @@ A basic table component that supports row selection and dumb sorting (see below)
- **data**: Array of data items. Items can be objects or arrays, depending on the accessor functions defined in `row`.
- **header**: Array of strings representing the header row.
- **row**: Array of items representing columns in each row. Items can be accessor functions with the arguments `data[k], k, data` (where `k` is the index of the current loop), or string values, e.g. `"collectionName"`.
- **props**: Array of property names to send to elasticsearch for a re-ordering query.
- **props**: Array of property names to send to Elasticsearch for a re-ordering query.
- **sortIdx**: The current index of the `props` array to sort by.
- **order**: Either 'desc' or 'asc', corresponding to sort order.
- **changeSortProps**: Callback when a new sort order is defined, passed an object with the properties `{ sortIdx, order }`.
4 changes: 4 additions & 0 deletions USAGE.md
@@ -53,3 +53,7 @@ The forms do not yet support undo or redo actions, so if you accidentally make a
## On successful save

When you successfully save a record, you will be redirected to the overview for that section. If you do not see your changes immediately represented in the tables, check back in 15 or 30 seconds, as it sometimes takes a bit for the changes to propagate through the Cumulus system.

## Recovery options

If the dashboard is started with a truthy value for the environment variable `ENABLE_RECOVERY`, "Recover Granule" and "Recover Collection" buttons are added to the granule and collection dashboard pages, respectively. The workflow run by these buttons is configured in a collection under `meta.granuleRecoveryWorkflow`. If a user attempts to recover a granule from a collection that does not have this value configured, an error will show up on the dashboard and no workflow will be started.
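
The lookup behind those buttons can be sketched as follows (the collection record is illustrative; `meta.granuleRecoveryWorkflow` is the documented field):

```javascript
// Illustrative collection record; only meta.granuleRecoveryWorkflow is documented above.
const collection = {
  name: 'MOD09GQ',
  version: '006',
  meta: { granuleRecoveryWorkflow: 'DrRecoveryWorkflow' }
};

// When the field is missing, the dashboard shows an error instead of starting a workflow.
const workflowName = collection.meta && collection.meta.granuleRecoveryWorkflow;
if (!workflowName) {
  throw new Error('Recovery workflow is not configured for this collection');
}
```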
89 changes: 89 additions & 0 deletions app/scripts/actions/action-config/apiGatewaySearch.js
@@ -0,0 +1,89 @@
'use strict';

export const apiGatewaySearchTemplate = (prefix, startTimeEpochMilli, endTimeEpochMilli) => `{
  "aggs": {
    "2": {
      "filters": {
        "filters": {
          "ApiExecutionErrors": {
            "query_string": {
              "query": "+\\"Method completed with status:\\" +(4?? 5??)",
              "analyze_wildcard": true,
              "default_field": "*"
            }
          },
          "ApiExecutionSuccesses": {
            "query_string": {
              "query": "+\\"Method completed with status:\\" +(2?? 3??)",
              "analyze_wildcard": true,
              "default_field": "*"
            }
          },
          "ApiAccessErrors": {
            "query_string": {
              "query": "status:[400 TO 599]",
              "analyze_wildcard": true,
              "default_field": "*"
            }
          },
          "ApiAccessSuccesses": {
            "query_string": {
              "query": "status:[200 TO 399]",
              "analyze_wildcard": true,
              "default_field": "*"
            }
          }
        }
      }
    }
  },
  "size": 0,
  "_source": {
    "excludes": []
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {},
  "docvalue_fields": [
    {
      "field": "@timestamp",
      "format": "date_time"
    }
  ],
  "query": {
    "bool": {
      "must": [
        {
          "match_all": {}
        },
        {
          "range": {
            "@timestamp": {
              "gte": ${startTimeEpochMilli},
              "lte": ${endTimeEpochMilli},
              "format": "epoch_millis"
            }
          }
        },
        {
          "match_phrase": {
            "_index": {
              "query": "${prefix}-cloudwatch*"
            }
          }
        },
        {
          "match_phrase": {
            "logGroup": {
              "query": "\\"API\\\\-Gateway\\\\-Execution*\\""
            }
          }
        }
      ],
      "filter": [],
      "should": [],
      "must_not": []
    }
  }
}`;
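
Each of these new files follows the same pattern: a function of `(prefix, startTimeEpochMilli, endTimeEpochMilli)` that interpolates its arguments into a JSON string for an Elasticsearch `_search` request. A reduced sketch of that pattern (the function name here is illustrative, not part of the dashboard codebase):

```javascript
// Reduced sketch of the template pattern used by the new search files.
// makeSearchBody is illustrative, not part of the dashboard codebase.
const makeSearchBody = (prefix, startTimeEpochMilli, endTimeEpochMilli) => `{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        { "range": { "@timestamp": {
          "gte": ${startTimeEpochMilli},
          "lte": ${endTimeEpochMilli},
          "format": "epoch_millis"
        } } },
        { "match_phrase": { "_index": { "query": "${prefix}-cloudwatch*" } } }
      ]
    }
  }
}`;

// The result must be valid JSON so it can be sent as an Elasticsearch query body.
const body = JSON.parse(makeSearchBody('cumulus', 1566518400000, 1566604800000));
```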
75 changes: 75 additions & 0 deletions app/scripts/actions/action-config/apiLambdaSearch.js
@@ -0,0 +1,75 @@
'use strict';

export const apiLambdaSearchTemplate = (prefix, startTimeEpochMilli, endTimeEpochMilli) => `{
  "aggs": {
    "2": {
      "filters": {
        "filters": {
          "LambdaAPIErrors": {
            "query_string": {
              "query": "message:(+GET +HTTP +(4?? 5??) -(200 307))",
              "analyze_wildcard": true,
              "default_field": "*"
            }
          },
          "LambdaAPISuccesses": {
            "query_string": {
              "query": "message:(+GET +HTTP +(2?? 3??))",
              "analyze_wildcard": true,
              "default_field": "*"
            }
          }
        }
      }
    }
  },
  "size": 0,
  "_source": {
    "excludes": []
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {},
  "docvalue_fields": [
    {
      "field": "@timestamp",
      "format": "date_time"
    }
  ],
  "query": {
    "bool": {
      "must": [
        {
          "match_all": {}
        },
        {
          "range": {
            "@timestamp": {
              "gte": ${startTimeEpochMilli},
              "lte": ${endTimeEpochMilli},
              "format": "epoch_millis"
            }
          }
        },
        {
          "match_phrase": {
            "_index": {
              "query": "${prefix}-cloudwatch*"
            }
          }
        },
        {
          "match_phrase": {
            "logGroup": {
              "query": "/aws/lambda/${prefix}-ApiDistribution"
            }
          }
        }
      ],
      "filter": [],
      "should": [],
      "must_not": []
    }
  }
}`;
75 changes: 75 additions & 0 deletions app/scripts/actions/action-config/s3AccessSearch.js
@@ -0,0 +1,75 @@
'use strict';

export const s3AccessSearchTemplate = (prefix, startTimeEpochMilli, endTimeEpochMilli) => `{
  "aggs": {
    "2": {
      "filters": {
        "filters": {
          "s3AccessSuccesses": {
            "query_string": {
              "query": "response:200",
              "analyze_wildcard": true,
              "default_field": "*"
            }
          },
          "s3AccessFailures": {
            "query_string": {
              "query": "NOT response:200",
              "analyze_wildcard": true,
              "default_field": "*"
            }
          }
        }
      }
    }
  },
  "size": 0,
  "_source": {
    "excludes": []
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {},
  "docvalue_fields": [
    {
      "field": "@timestamp",
      "format": "date_time"
    }
  ],
  "query": {
    "bool": {
      "must": [
        {
          "match_all": {}
        },
        {
          "range": {
            "@timestamp": {
              "gte": ${startTimeEpochMilli},
              "lte": ${endTimeEpochMilli},
              "format": "epoch_millis"
            }
          }
        },
        {
          "match_phrase": {
            "_index": {
              "query": "${prefix}-s3*"
            }
          }
        },
        {
          "match_phrase": {
            "operation": {
              "query": "REST.GET.OBJECT"
            }
          }
        }
      ],
      "filter": [],
      "should": [],
      "must_not": []
    }
  }
}`;