|
14 | 14 | "**In This Document**\n", |
15 | 15 | "\n", |
16 | 16 | "- [Overview](#overview)\n", |
17 | | - "- [Stock Trading](#stocks-demo)\n", |
| 17 | + "- [Image Classification](#image-classification-demo)\n", |
18 | 18 | "- [Predictive Infrastructure Monitoring](#netops-demo)\n", |
19 | | - "- [Image Recognition](#image-classification-demo)\n", |
20 | 19 | "- [Natural Language Processing (NLP)](#nlp-demo)\n", |
21 | | - "- [Stream Enrichment](#stream-enrich-demo)" |
| 20 | + "- [Stream Enrichment](#stream-enrich-demo)\n", |
| 21 | + "- [Stock Trading](#stocks-demo)" |
22 | 22 | ] |
23 | 23 | }, |
24 | 24 | { |
|
35 | 35 | "cell_type": "markdown", |
36 | 36 | "metadata": {}, |
37 | 37 | "source": [ |
38 | | - "<a id=\"stocks-demo\"></a>\n", |
39 | | - "## Smart Stock Trading\n", |
| 38 | + "<a id=\"image-classification-demo\"></a>\n", |
| 39 | + "## Image Classification\n", |
40 | 40 | "\n", |
41 | | - "The [**stocks**](stocks/01-gen-demo-data.ipynb) demo demonstrates a smart stock-trading application: \n", |
42 | | - "the application reads stock-exchange data from an internet service into a time-series database (TSDB); uses Twitter to analyze the market sentiment on specific stocks, in real time; and saves the data to a platform NoSQL table that is used for generating reports and analyzing and visualizing the data on a Grafana dashboard.\n", |
| 41 | + "The [**image-classification**](image-classification/01-image-classification.ipynb) demo demonstrates image recognition: the application builds and trains an ML model that identifies (recognizes) and classifies images.\n", |
43 | 42 | "\n", |
44 | | - "- The stock data is read from Twitter by using the [TwythonStreamer](https://twython.readthedocs.io/en/latest/usage/streaming_api.html) Python wrapper to the Twitter Streaming API, and saved to TSDB and NoSQL tables in the platform.\n", |
45 | | - "- Sentiment analysis is done by using the [TextBlob](https://textblob.readthedocs.io/) Python library for natural language processing (NLP).\n", |
46 | | - "- The analyzed data is visualized as graphs on a [Grafana](https://grafana.com/grafana) dashboard, which is created from the Jupyter notebook code.\n", |
47 | | - " The data is read from both the TSDB and NoSQL stock tables." |
| 43 | + "This example is using TensorFlow, Horovod, and Nuclio demonstrating end to end solution for image classification, \n", |
| 44 | + "it consists of 4 MLRun and Nuclio functions:\n", |
| 45 | + "\n", |
| 46 | + "1. import an image archive from S3 to the cluster file system\n", |
| 47 | + "2. Tag the images based on their name structure \n", |
| 48 | + "3. Distrubuted training using TF, Keras and Horovod\n", |
| 49 | + "4. Automated deployment of Nuclio model serving function (form [Notebook](nuclio-serving-tf-images.ipynb) and from [Dockerfile](./inference-docker))\n", |
| 50 | + "\n", |
| 51 | + "The Example also demonstrate an [automated pipeline](mlrun_mpijob_pipe.ipynb) using MLRun and KubeFlow pipelines " |
48 | 52 | ] |
49 | 53 | }, |
50 | 54 | { |
|
67 | 71 | "cell_type": "markdown", |
68 | 72 | "metadata": {}, |
69 | 73 | "source": [ |
70 | | - "<a id=\"image-classification-demo\"></a>\n", |
71 | | - "## Image Recognition\n", |
| 74 | + "<a id=\"nlp-demo\"></a>\n", |
| 75 | + "## Natural Language Processing (NLP)\n", |
72 | 76 | "\n", |
73 | | - "The [**image-classification**](image-classification/keras-cnn-dog-or-cat-classification.ipynb) demo demonstrates image recognition: the application builds and trains an ML model that identifies (recognizes) and classifies images.\n", |
| 77 | + "The [**nlp**](nlp/nlp-example.ipynb) demo demonstrates natural language processing (NLP): the application processes natural-language textual data — including spelling correction and sentiment analysis — and generates a Nuclio serverless function that translates any given text string to another (configurable) language.\n", |
74 | 78 | "\n", |
75 | | - "- The data is collected by downloading images of dogs and cats from the Iguazio sample data-set AWS bucket.\n", |
76 | | - "- The training data for the ML model is prepared by using [pandas](https://pandas.pydata.org/) DataFrames to build a predecition map.\n", |
77 | | - " The data is visualized by using the [Matplotlib](https://matplotlib.org/) Python library.\n", |
78 | | - "- An image recognition and classification ML model that identifies the animal type is built and trained by using [Keras](https://keras.io/), [TensorFlow](https://www.tensorflow.org/), and [scikit-learn](https://scikit-learn.org) (a.k.a. sklearn)." |
| 79 | + "- The textual data is collected and processed by using the [TextBlob](https://textblob.readthedocs.io/) Python NLP library. The processing includes spelling correction and sentiment analysis.\n", |
| 80 | + "- A serverless function that translates text to another language, which is configured in an environment variable, is generated by using the [Nuclio](https://nuclio.io/) framework." |
79 | 81 | ] |
80 | 82 | }, |
81 | 83 | { |
82 | 84 | "cell_type": "markdown", |
83 | 85 | "metadata": {}, |
84 | 86 | "source": [ |
85 | | - "<a id=\"nlp-demo\"></a>\n", |
86 | | - "## Natural Language Processing (NLP)\n", |
| 87 | + "<a id=\"stocks-demo\"></a>\n", |
| 88 | + "## Smart Stock Trading\n", |
87 | 89 | "\n", |
88 | | - "The [**nlp**](nlp/nlp-example.ipynb) demo demonstrates natural language processing (NLP): the application processes natural-language textual data — including spelling correction and sentiment analysis — and generates a Nuclio serverless function that translates any given text string to another (configurable) language.\n", |
| 90 | + "The [**stocks**](stocks/01-gen-demo-data.ipynb) demo demonstrates a smart stock-trading application: \n", |
| 91 | + "the application reads stock-exchange data from an internet service into a time-series database (TSDB); uses Twitter to analyze the market sentiment on specific stocks, in real time; and saves the data to a platform NoSQL table that is used for generating reports and analyzing and visualizing the data on a Grafana dashboard.\n", |
89 | 92 | "\n", |
90 | | - "- The textual data is collected and processed by using the [TextBlob](https://textblob.readthedocs.io/) Python NLP library. The processing includes spelling correction and sentiment analysis.\n", |
91 | | - "- A serverless function that translates text to another language, which is configured in an environment variable, is generated by using the [Nuclio](https://nuclio.io/) framework." |
| 93 | + "- The stock data is read from Twitter by using the [TwythonStreamer](https://twython.readthedocs.io/en/latest/usage/streaming_api.html) Python wrapper to the Twitter Streaming API, and saved to TSDB and NoSQL tables in the platform.\n", |
| 94 | + "- Sentiment analysis is done by using the [TextBlob](https://textblob.readthedocs.io/) Python library for natural language processing (NLP).\n", |
| 95 | + "- The analyzed data is visualized as graphs on a [Grafana](https://grafana.com/grafana) dashboard, which is created from the Jupyter notebook code.\n", |
| 96 | + " The data is read from both the TSDB and NoSQL stock tables." |
92 | 97 | ] |
93 | 98 | }, |
94 | 99 | { |
|
128 | 133 | } |
129 | 134 | }, |
130 | 135 | "nbformat": 4, |
131 | | - "nbformat_minor": 2 |
| 136 | + "nbformat_minor": 4 |
132 | 137 | } |