diff --git a/examples/10_Prognostics Server.ipynb b/examples/10_Prognostics Server.ipynb index f3b3637..448bb1c 100644 --- a/examples/10_Prognostics Server.ipynb +++ b/examples/10_Prognostics Server.ipynb @@ -5,24 +5,534 @@ "metadata": {}, "source": [ "# Prognostics Server (prog_server)\n", - "**A version of this notebook will be added in release v1.8**" + "\n", + "The ProgPy Server (prog_server) is a simplified implementation of a Service-Oriented Architecture (SOA) for performing prognostics (estimation of time until events and future system states) of engineering systems. prog_server is a wrapper around the ProgPy package, allowing one or more users to access its features through a REST API. The package is intended to be used as a research tool to prototype and benchmark Prognostics As-A-Service (PaaS) architectures and to work on the challenges facing such architectures, including Generality, Communication, Security, Environmental Complexity, Utility, and Trust.\n", + "\n", + "The ProgPy Server actually consists of two packages: prog_server and prog_client. The prog_server package is a prognostics server that provides the REST API. The prog_client package is a Python client that provides functions to interact with the server via the REST API.\n", + "\n", + "**TODO(CT): IMAGE- server with clients**\n", + "\n", + "## Installing\n", + "\n", + "prog_server can be installed using pip:\n", + "\n", + "```console\n", + "$ pip install prog_server\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Starting prog_server" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "prog_server can be started in two ways: through the command line or programmatically (i.e., in a Python script). Once the server is started, it will take a short time to initialize; then it will start receiving requests for sessions from clients using prog_client or from clients interacting directly through the REST interface."
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Starting prog_server from the command line\n", + "Generally, you can start prog_server by running the module, like this:\n", + "\n", + "```console\n", + "$ python -m prog_server\n", + "```\n", + "\n", + "Note that you can force the server to start in debug mode using the `--debug` flag. For example, `python -m prog_server --debug`" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Starting prog_server programmatically\n", + "There are two methods to start prog_server in Python. The first, shown below, is non-blocking, allowing users to perform other functions while the server is running." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import prog_server\n", + "prog_server.start()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When starting a server, users can also provide arguments to customize the way the server runs. Here are the main arguments:\n", + "\n", + "* host (str): Server host address. Defaults to ‘127.0.0.1’\n", + "* port (int): Server port address. Defaults to 8555\n", + "* debug (bool): Whether the server is to be started in debug mode\n", + "\n", + "Now prog_server is ready to start receiving session requests from users. 
The server can also be stopped using the `stop()` function:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "prog_server.stop()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "prog_server can also be started in blocking mode using the following command:\n", + "\n", + "```python\n", + ">>> prog_server.run()\n", + "```\n", + "\n", + "We will not execute it here, because it would block execution in this notebook until it is force quit.\n", + "\n", + "For details on all supported arguments, see the [API Doc](https://nasa.github.io/progpy/api_ref/prog_server/prog_server.html#prog_server.start)\n", + "\n", + "The basis of prog_server is the session. Each user creates one or more sessions. These sessions are each a request for prognostic services. The user can then interact with each open session. You'll see examples of this in the following sections." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's restart the server (so it can be used with the examples below)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "prog_server.start()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Using prog_server with prog_client\n", + "For Python users, prog_server can be interacted with using the prog_client package distributed with progpy. This section describes a few examples using prog_client and prog_server together.\n", + "\n", + "Before using prog_client, import the package:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import prog_client" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example: Online Prognostics\n", + "This example creates a session with the server to run prognostics for a Thrown Object, a simplified model of an object thrown into the air. 
Data is then sent to the server and a prediction is requested. The prediction is then displayed.\n", + "\n", + "**Note: before running this example, make sure prog_server is running**\n", + "\n", + "The first step is to open a session with the server. This starts a session for prognostics with the ThrownObject model, with default parameters. The prediction configuration is updated to have a save frequency of 1 second." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "session = prog_client.Session('ThrownObject', pred_cfg={'save_freq': 1})\n", + "print(session) # Printing the Session Information" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you were to re-run the lines above, it would start a new session with a new number.\n", + "\n", + "Next, we need to prepare the data we will use for this example. The data is a list of tuples in the format (time, measurements), where measurements is a dictionary whose keys are the names of the inputs and outputs in the model.\n", + "\n", + "Note: in an actual application, the data would be received from a sensor or other source. The structure below is used to emulate the sensor."
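Since the (time, measurements) structure is easy to get wrong when emulating a sensor, a quick sanity check can help before sending anything. Below is a minimal sketch; the `validate_stream` helper is hypothetical, written just for this notebook, and is not part of prog_client:

```python
def validate_stream(data):
    """Check that emulated sensor data is a list of (time, reading) tuples,
    where each reading is a dict and times are strictly increasing.
    Hypothetical helper for this notebook; not part of prog_client."""
    last_t = float('-inf')
    for t, reading in data:
        assert isinstance(reading, dict), 'each reading must be a dict'
        assert t > last_t, 'times must be strictly increasing'
        last_t = t
    return True

validate_stream([(0, {'x': 1.83}), (0.1, {'x': 5.81}), (0.2, {'x': 9.75})])
```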
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "example_data = [\n", + " (0, {'x': 1.83}), \n", + " (0.1, {'x': 5.81}), \n", + " (0.2, {'x': 9.75}), \n", + " (0.3, {'x': 13.51}), \n", + " (0.4, {'x': 17.20}), \n", + " (0.5, {'x': 20.87}), \n", + " (0.6, {'x': 24.37}), \n", + " (0.7, {'x': 27.75}), \n", + " (0.8, {'x': 31.09}), \n", + " (0.9, {'x': 34.30}), \n", + " (1.0, {'x': 37.42}),\n", + " (1.1, {'x': 40.43}),\n", + " (1.2, {'x': 43.35}),\n", + " (1.3, {'x': 46.17}),\n", + " (1.4, {'x': 48.91}),\n", + " (1.5, {'x': 51.53}),\n", + " (1.6, {'x': 54.05}),\n", + " (1.7, {'x': 56.50}),\n", + " (1.8, {'x': 58.82}),\n", + " (1.9, {'x': 61.05}),\n", + " (2.0, {'x': 63.20}),\n", + " (2.1, {'x': 65.23}),\n", + " (2.2, {'x': 67.17}),\n", + " (2.3, {'x': 69.02}),\n", + " (2.4, {'x': 70.75}),\n", + " (2.5, {'x': 72.40})\n", + "] " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can start sending the data to the server, checking periodically to see if there is a completed prediction." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from time import sleep\n", + "\n", + "LAST_PREDICTION_TIME = None\n", + "for i in range(len(example_data)):\n", + " # Send data to server\n", + " print(f'{example_data[i][0]}s: Sending data to server... 
', end='')\n", + "    session.send_data(time=example_data[i][0], **example_data[i][1])\n", + "\n", + "    # Check for a prediction result\n", + "    status = session.get_prediction_status()\n", + "    if LAST_PREDICTION_TIME != status[\"last prediction\"]: \n", + "        # New prediction result\n", + "        LAST_PREDICTION_TIME = status[\"last prediction\"]\n", + "        print('Prediction Completed')\n", + "        \n", + "        # Get prediction\n", + "        # Prediction is returned as an UncertainData type, so you can manipulate it like that datatype.\n", + "        # See https://nasa.github.io/prog_algs/uncertain_data.html\n", + "        t, prediction = session.get_predicted_toe()\n", + "        print(f'Predicted ToE (using state from {t}s): ')\n", + "        print(prediction.mean)\n", + "\n", + "        # Get predicted event states\n", + "        # You can also get the predicted event states of the model.\n", + "        # States are saved according to the prediction configuration parameter 'save_freq' or 'save_pts'\n", + "        # In this example we have it set up to save every 1 second.\n", + "        # Return type is UnweightedSamplesPrediction (since we're using the Monte Carlo predictor)\n", + "        # See https://nasa.github.io/prog_algs\n", + "        t, event_states = session.get_predicted_event_state()\n", + "        print(f'Predicted Event States (using state from {t}s): ')\n", + "        es_means = [(event_states.times[i], event_states.snapshot(i).mean) for i in range(len(event_states.times))]\n", + "        for time, es_mean in es_means:\n", + "            print(f\"\\t{time}s: {es_mean}\")\n", + "\n", + "        # Note: you can also get the predicted future states of the model (see get_predicted_states()) or performance parameters (see get_predicted_performance_metrics())\n", + "\n", + "    else:\n", + "        print('No prediction yet')\n", + "        # No updated prediction, send more data and check again later.\n", + "        sleep(0.1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice that the prediction wasn't updated every time step. 
It takes a bit of time to perform a prediction.\n", + "\n", + "*Note*: You can also get the model from prog_server to work with directly." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model = session.get_model()\n", + "print(model)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example: Option Scoring\n", + "This example creates a session with the server to run prognostics for a BatteryCircuit. Three options with different loading profiles are compared by creating a session for each option and comparing the resulting prediction metrics.\n", + "\n", + "The first step is to prepare the load profiles to compare. Each load profile is a dictionary in the format {TIME: LOAD}, where TIME is the start of that loading (in seconds) and LOAD is a dictionary with keys corresponding to model.inputs. The profiles to compare are collected into an array.\n", + "\n", + "Note: the dictionary keys must be in order of increasing time\n", + "\n", + "Here we introduce 3 load profiles to be used with simulation:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plan0 = {\n", + "    0: {'i': 2},\n", + "    600: {'i': 1},\n", + "    900: {'i': 4},\n", + "    1800: {'i': 2},\n", + "    3000: {'i': 3}\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plan1 = {\n", + "    0: {'i': 3},\n", + "    900: {'i': 2},\n", + "    1000: {'i': 3.5},\n", + "    2000: {'i': 2.5},\n", + "    2300: {'i': 3}\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plan2 = {\n", + "    0: {'i': 1.25},\n", + "    800: {'i': 2},\n", + "    1100: {'i': 2.5},\n", + "    2200: {'i': 6},\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "LOAD_PROFILES = [plan0, plan1, plan2]" + ] + }, + { + "cell_type": "markdown", + "metadata": {},
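To make the piecewise-constant interpretation of these profiles concrete, here is a small sketch of how the load active at a given time could be looked up. The `load_at` helper is hypothetical, for illustration only; it is not part of prog_server, which applies such profiles through its 'Variable' load estimator:

```python
def load_at(profile, t):
    """Return the load active at time t (seconds) for a profile in the
    {start_time: load} format described above, with keys in increasing order.
    Hypothetical helper, for illustration only; not part of prog_server."""
    current = None
    for start, load in profile.items():
        if t >= start:
            current = load  # this loading segment has started
        else:
            break  # later segments have not started yet
    return current

plan0 = {0: {'i': 2}, 600: {'i': 1}, 900: {'i': 4}, 1800: {'i': 2}, 3000: {'i': 3}}
load_at(plan0, 700)  # → {'i': 1}: the load that started at 600s is still active
```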
"source": [ + "The next step is to open a session with the battery circuit model for each of the 3 plans. We are specifying a time of interest of 2000 seconds (for the sake of a demo). This could be the end of a mission/session, or some inspection time." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "sessions = [\n", + " prog_client.Session(\n", + " 'BatteryCircuit',\n", + " pred_cfg = {\n", + " 'save_pts': [2000],\n", + " 'save_freq': 1e99, 'n_samples':15},\n", + " load_est = 'Variable',\n", + " load_est_cfg = LOAD_PROFILES[i]) \n", + " for i in range(len(LOAD_PROFILES))]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's wait for prognostics to complete" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "for session in sessions:\n", + " sessions_in_progress = True\n", + " while sessions_in_progress:\n", + " sessions_in_progress = False\n", + " status = session.get_prediction_status()\n", + " if status['in progress'] != 0:\n", + " print(f'\\tSession {session.session_id} is still in progress')\n", + " sessions_in_progress = True\n", + " time.sleep(STEP)\n", + " print(f'\\tSession {session.session_id} complete')\n", + "print('All sessions complete')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now that the sessions are complete, we can get the results." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "results = [session.get_predicted_toe()[1] for session in sessions]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's compare results. 
First let's look at the mean Time of Event (ToE):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print('Mean ToE:')\n", + "best_toe = 0\n", + "best_plan = None\n", + "for i in range(len(results)):\n", + "    mean_toe = results[i].mean['EOD']\n", + "    print(f'\\tOption {i}: {mean_toe:0.2f}s')\n", + "    if mean_toe > best_toe:\n", + "        best_toe = mean_toe\n", + "        best_plan = i\n", + "print(f'Best option using method 1: Option {best_plan}')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As a second metric, let's look at the SOC at our point of interest (2000 seconds):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_soc = 0\n", + "best_plan = None\n", + "soc = [session.get_predicted_event_state()[1] for session in sessions]\n", + "for i in range(len(soc)):\n", + "    mean_soc = soc[i].snapshot(-1).mean['EOD']\n", + "    print(f'\\tOption {i}: {mean_soc:0.3f} SOC')\n", + "    if mean_soc > best_soc:\n", + "        best_soc = mean_soc\n", + "        best_plan = i\n", + "print(f'Best option using method 2: Option {best_plan}')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Other metrics can be used as well, like the probability of mission success given a certain mission time, uncertainty in the ToE estimate, final state at end of mission, etc. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Using prog_server - REST Interface\n", + "\n", + "Communication with prog_server is through a REST interface. The REST API is described here: [Rest API](https://app.swaggerhub.com/apis-docs/teubert/prog_server/).\n", + "\n", + "Most programming languages have a way of interacting with REST APIs (either native or through a package/library). `curl` requests can also be made from the command line or with apps like Postman."
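For example, a session could be requested directly over HTTP using only Python's standard library. This is a sketch: the endpoint path (`/api/v1/session`), the HTTP verb, and the payload shape are assumptions based on the Swagger description linked above, so verify them against the API docs for your server version.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

HOST = 'http://127.0.0.1:8555'  # prog_server default host/port

def new_session(model='ThrownObject'):
    # PUT /api/v1/session with a form-encoded model name is an assumption
    # based on the published Swagger description; check your server version.
    data = urllib.parse.urlencode({'model': model}).encode()
    req = urllib.request.Request(f'{HOST}/api/v1/session', data=data, method='PUT')
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.load(resp)  # session details returned by the server
    except urllib.error.URLError:
        return None  # server not reachable (e.g., not started)

print(new_session())
```

With the server from the earlier cells still running, this would print the new session's details; with no server running, it prints `None`.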
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Custom Models\n", + "**A version of this section will be added in release v1.8** " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Closing\n", + "When you're done using prog_server, make sure you turn off the server." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "prog_server.stop()" ] } ], "metadata": { "kernelspec": { - "display_name": "Python 3.10.7 64-bit", + "display_name": "Python 3.11.0 ('env': venv)", "language": "python", "name": "python3" }, "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", "name": "python", - "version": "3.10.7" + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.0" }, "orig_nbformat": 4, "vscode": { "interpreter": { - "hash": "610c699f0cd8c4f129acd9140687fff6866bed0eb8e82f249fc8848b827b628c" + "hash": "71ccad9e81d0b15f7bb5ef75e2d2ca570011b457fb5a41421e3ae9c0e4c33dfc" } } }, diff --git a/sphinx-config/prog_server_guide.rst b/sphinx-config/prog_server_guide.rst index b807e55..0215608 100644 --- a/sphinx-config/prog_server_guide.rst +++ b/sphinx-config/prog_server_guide.rst @@ -84,11 +84,7 @@ There are two methods for starting the prog_server, described below: >>> import prog_server >>> prog_server.run() # Starts the server- blocking. - Both run and start accept the following optional keyword arguments: - - * **host** (str): Server host address. Defaults to '127.0.0.1' - * **port** (int): Server port address. Defaults to 5000 - * **debug** (bool): If the server is to be started in debug mode + See :py:func:`prog_server.start` and :py:func:`prog_server.run` for details on accepted command line arguments Examples ------------