Dataset columns:
repo_name: string (lengths 6-77)
path: string (lengths 8-215)
license: string (15 classes)
cells: list (notebook cell sources)
types: list (cell types, "markdown" or "code")
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/structured/labs/3b_bqml_linear_transform_babyweight.ipynb
apache-2.0
[ "LAB 3b: BigQuery ML Model Linear Feature Engineering/Transform.\nLearning Objectives\n\nCreate and evaluate linear model with BigQuery's ML.FEATURE_CROSS\nCreate and evaluate linear model with BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE\nCreate and evaluate linear model with ML.TRANSFORM\n\nIntroduction\nIn this notebook, we will create multiple linear models to predict the weight of a baby before it is born, using increasing levels of feature engineering using BigQuery ML. If you need a refresher, you can go back and look how we made a baseline model in the previous notebook BQML Baseline Model.\nWe will create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS, create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and create and evaluate a linear model using BigQuery's ML.TRANSFORM.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.\nLoad necessary libraries\nCheck that the Google BigQuery library is installed and if not, install it.", "%%bash\npip freeze | grep google-cloud-bigquery==1.6.1 || \\\npip install google-cloud-bigquery==1.6.1", "Verify tables exist\nRun the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.", "%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM babyweight.babyweight_data_train\nLIMIT 0\n\n%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM babyweight.babyweight_data_eval\nLIMIT 0", "Lab Task #1: Model 1: Apply the ML.FEATURE_CROSS clause to categorical features\nBigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross with syntax ML.FEATURE_CROSS(STRUCT(features), degree) where features are comma-separated categorical columns and degree is highest degree of all combinations.\nCreate model with feature cross.", "%%bigquery\nCREATE OR REPLACE MODEL\n babyweight.model_1\n\nOPTIONS (\n MODEL_TYPE=\"LINEAR_REG\",\n INPUT_LABEL_COLS=[\"weight_pounds\"],\n L2_REG=0.1,\n DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n # TODO: Add base features and label\n ML.FEATURE_CROSS(\n # TODO: Cross categorical features\n ) AS gender_plurality_cross\nFROM\n babyweight.babyweight_data_train", "Create two SQL statements to evaluate the model.", "%%bigquery\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL babyweight.model_1,\n (\n SELECT\n # TODO: Add same features and label as training\n FROM\n babyweight.babyweight_data_eval\n ))\n\n%%bigquery\nSELECT\n # TODO: Select just the calculated RMSE\nFROM\n ML.EVALUATE(MODEL babyweight.model_1,\n (\n SELECT\n # TODO: Add same features and label as training\n FROM\n babyweight.babyweight_data_eval\n ))", "Lab Task #2: Model 2: Apply the BUCKETIZE Function\nBucketize is a pre-processing function that creates \"buckets\" (e.g bins) - e.g. 
it bucketizes a continuous numerical feature into a string feature with bucket names as the value with syntax ML.BUCKETIZE(feature, split_points) with split_points being an array of numerical points to determine bucket bounds.\nApply the BUCKETIZE function within FEATURE_CROSS.\n\nHint: Create a model_2.", "%%bigquery\nCREATE OR REPLACE MODEL\n babyweight.model_2\n\nOPTIONS (\n MODEL_TYPE=\"LINEAR_REG\",\n INPUT_LABEL_COLS=[\"weight_pounds\"],\n L2_REG=0.1,\n DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n ML.FEATURE_CROSS(\n STRUCT(\n is_male,\n ML.BUCKETIZE(\n # TODO: Bucketize mother_age\n ) AS bucketed_mothers_age,\n plurality,\n ML.BUCKETIZE(\n # TODO: Bucketize gestation_weeks\n ) AS bucketed_gestation_weeks\n )\n ) AS crossed\nFROM\n babyweight.babyweight_data_train", "Create three SQL statements to EVALUATE the model.\nLet's now retrieve the training statistics and evaluate the model.", "%%bigquery\nSELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2)", "We now evaluate our model on our eval dataset:", "%%bigquery\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL babyweight.model_2,\n (\n SELECT\n # TODO: Add same features and label as training\n FROM\n babyweight.babyweight_data_eval))", "Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL babyweight.model_2,\n (\n SELECT\n # TODO: Add same features and label as training\n FROM\n babyweight.babyweight_data_eval))", "Lab Task #3: Model 3: Apply the TRANSFORM clause\nBefore we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. This way we can have the same transformations applied for training and prediction without modifying the queries.\nLet's apply the TRANSFORM clause to the model_3 and run the query.", "%%bigquery\nCREATE OR REPLACE MODEL\n babyweight.model_3\n\nTRANSFORM(\n # TODO: Add base features and label as you would in select\n # TODO: Add transformed features as you would in select\n)\n\nOPTIONS (\n MODEL_TYPE=\"LINEAR_REG\",\n INPUT_LABEL_COLS=[\"weight_pounds\"],\n L2_REG=0.1,\n DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n *\nFROM\n babyweight.babyweight_data_train", "Let's retrieve the training statistics:", "%%bigquery\nSELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3)", "We now evaluate our model on our eval dataset:", "%%bigquery\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL babyweight.model_3,\n (\n SELECT\n *\n FROM\n babyweight.babyweight_data_eval\n ))", "Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL babyweight.model_3,\n (\n SELECT\n *\n FROM\n babyweight.babyweight_data_eval\n ))", "Lab Summary:\nIn this lab, we created and evaluated a linear model using BigQuery's ML.FEATURE_CROSS, created and evaluated a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and created and evaluated a linear model using BigQuery's ML.TRANSFORM and L2 regularization.\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
WomensCodingCircle/CodingCirclePython
Lesson12_TabularData/Tabular Data.ipynb
mit
[ "Using Tabular Data in Python\ncsv module\nPython has a csv reader/writer as part of its built in library. It is called csv. This is the simplest way to read tabular data (data in table format). The type of data you used to use excel to process (hopefully you will try out python now). It must be in text format to use the csv module, so .csv (comma separated) or .tsv (tab separated)\nHere is the documentation: https://docs.python.org/2/library/csv.html\nTo use it, first you must import it\nimport csv", "import csv", "Next we create a csv reader. You give it a file handle and optionally the dialect, the separator (usually commas or tabs), and the quote character.\nwith open(filename, 'r') as fh:\n reader = csv.reader(fh, delimiter='\\t', quotechar='\"')", "with open('walks.csv', 'r') as fh:\n reader = csv.reader(fh, delimiter=',')", "The reader doesn't do anything yet. It is a generator that allows you to loop through the data (it is very similar to a file handle).\nTo loop through the data you just write a simple for loop\nfor row in reader:\n #process row\n\nThe each row will be a list with each element corresponding to a single column.", "with open('walks.csv', 'r') as fh:\n reader = csv.reader(fh, delimiter=',')\n for row in reader:\n print(row) ", "TRY IT\nOpen up the file workout.txt (tab delimited, tab='\\t') with the csv reader and print out each row.\nDoesn't that look nice?\nWell there are a few problems that I can see. First the header, how do we deal with that?\nHeaders\nThe easiest way I have found is to use the next method (that is available with any generator) before the for loop and to store that in a header variable. That reads the first line and stores it (so that you can use it later) and then advances the pointer to the next line so when you run the for loop it is only on the data.\nheader = reader.next()\nfor row in reader:\n #process data", "with open('walks.csv', 'r') as fh:\n reader = csv.reader(fh, delimiter=',')\n header = next(reader)\n for row in reader:\n print(row) \n print(\"Header\", header)", "Values are Strings\nNotice that each item is a string. You'll need to remember that and convert things that actually should be numbers using the float() or int() functions.", "with open('walks.csv', 'r') as fh:\n reader = csv.reader(fh, delimiter=',')\n header = next(reader)\n for row in reader:\n float_row = [float(row[0]), float(row[1])]\n print(float_row) ", "TRY IT\nOpen workouts with a csv reader. Save the header line to a variable called header. Convert each value in the data rows to ints and print them out.\nAnalyzing our data\nYou can use just about everything we have learned up until this point to analyze your data: if statements, regexs, math, data structures. Let's look at some examples.", "# Let's find the average distance for all walks. 
\n\nwith open('walks.csv', 'r') as fh:\n reader = csv.reader(fh, delimiter=',')\n header = next(reader)\n # Empty list for storing all distances\n walks = []\n for row in reader:\n #distance is in the first column\n dist = row[0]\n # Convert to float so we can do math\n dist = float(dist)\n # Append to our list\n walks.append(dist)\n \n # Use list aggregation methods to get average distance\n ave_dist = sum(walks) / len(walks)\n print(\"Average distance walked: {0:.1f}\".format(ave_dist))\n\n# Let's see our pace for each walk\nwith open('walks.csv', 'r') as fh:\n reader = csv.reader(fh, delimiter=',')\n header = next(reader)\n for row in reader:\n #distance is in the first column\n dist = row[0]\n # Convert to float so we can do math\n dist = float(dist)\n #time in minutes is in the second column\n time_minutes = row[1]\n # Convert to float so we can do math\n time_minutes = float(time_minutes)\n # calculate pace as minutes / kilometer\n pace = time_minutes /dist\n print(\"Pace: {0:.1f} min/km\".format(pace))\n \n# If you want a challenge, try to make this seconds/mile\n\n# We can filter data. Let's get the ave pace only for walks longer than\n# 3 km\n\n# Let's see our pace for each walk\nwith open('walks.csv', 'r') as fh:\n reader = csv.reader(fh, delimiter=',')\n header = next(reader)\n paces = []\n for row in reader:\n #distance is in the first column\n dist = row[0]\n # Convert to float so we can do math\n dist = float(dist)\n # Don't count short walks\n if dist >= 3.0:\n #time in minutes is in the second column\n time_minutes = row[1]\n # Convert to float so we can do math\n time_minutes = float(time_minutes)\n pace = time_minutes /dist\n paces.append(pace)\n \nave_pace = sum(paces) / len(paces)\nprint(\"Average walking pace: {0:.1f} min/km\".format(ave_pace))", "Here is something I do all the time. It is a little more complicated than the above examples, so take your time trying to understand it. What I like to do is to read the csv data and transform it to a dictionary of lists. This allows me to use it in many different ways later in the code. It is most useful with larger dataset that I will be analyzing and using many different times. (You can even print it out as JSON!)", "# Lets see our pace for each walk\nwith open('walks.csv', 'r') as fh:\n reader = csv.reader(fh, delimiter=',')\n header = next(reader)\n # This is the dictionary we will put our data from the csv into\n # The key's are the column headers and the values is a list of\n # all the data in that column (transformed into floats)\n data = {}\n # Initialize our dictionary with keys from header and values\n # as empty lists\n for column in header:\n data[column] = []\n for row in reader:\n # Enumerate give us the index and the value so \n # we don't have to use a count variable\n for index, column in enumerate(header):\n # convert data point to float\n data_point = float(row[index])\n # append data to dictionary's list for that column\n data[column].append(data_point)\n # look at that beautiful data. You can do anything with that!\n print(data)\n ", "TRY IT\nFind the average number of squats done from the workouts.txt file. 
Feel free to copy the code for opening from the previous TRY IT.\nWriting CSVs\nThe csv module also contains code for writing CSVs.\nTo write, you create a writer using the writer method and give it a filehandle and optionally delimiter and quotechar.\nwith open('my_file.csv', 'w') as fh:\n    writer = csv.writer(fh, delimiter=',', quotechar='\"')\n\nThen use the writerow method with a list to write as its argument.\nwriter.writerow([item1, item2])", "import random\nwith open('sleep.csv', 'w') as fh:\n    writer = csv.writer(fh, delimiter='\\t', quotechar='\"')\n    header = ['day', 'sleep (hr)']\n    writer.writerow(header)\n    for i in range(1,11):\n        hr_sleep = random.randint(4,10)\n        writer.writerow([i, hr_sleep])\n    \n#open the file to prove you wrote it. (Open in excel for best results)", "TRY IT\nWrite the following data to a file called age_walking_pace.csv:\nage | pace\n----------\n5 | 12.5\n15 | 9.4\n18 | 7.8\n25 | 8.1\n48 | 9.2\n91 | 105.1\n\nSeparator is comma, quote char is \"\nProject\nThe file fiveK.csv contains data from the top 100 female finishers for the Firecracker 5k held in Reston on the 4th of July 2016 http://www.prraces.com/firecracker/\n\nLook at the data. \nThen use your csv reading skills to read in the data using the csv reader.\nParse out the header and store it in a variable called header.\nChoose one of the following to calculate:\nEasy: Calculate the pace for each runner and print it out.\nMedium Easy: Calculate the average time of all runners\nMedium: Calculate the average pace for those under 40 and those over 40\nMedium Hard: Calculate the average time for those from VA vs. those who traveled to attend the race\nHard: Did people who registered early have faster times than those who registered late? Calculate the average time for the registration numbers in 500-increment chunks, i.e. average time for registration numbers 1-500, 501-1000, etc.\nHardest: Do people who put their city in all caps do better or worse than those who put their city in mixed case or lower case? (HINT: use regexes)\n\n\nIf you picked medium or above, print out the category and the result into a csv called results.csv" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
adamsteer/nci-notebooks
training/py3/Python3_Siphon_II.ipynb
apache-2.0
[ "<img src=\"http://nci.org.au/wp-content/themes/nci/img/img-logo-large.png\", width=400>\n\nProgrammatically accessing data through THREDDS and the VDI\n...using Python 3\nIn this notebook:\n\n<a href='#part1'>Using the Siphon Python package to programmatically access THREDDS data service endpoints</a>\n<a href='#part2'>Programmatically accessing files from the VDI</a>\n\nThe following material uses CSIRO IMOS TERN-AusCover MODIS Data Collection. For more information on the collection and licensing, please click here.\nPrerequisites:\n\n\nA python 3 virtual environment with the following Python modules loaded:\n\nmatplotlib\nnetcdf4\nsiphon\nshapely\nrequests\n\n\n\nSome knowledge of navigating the NCI data catalogues to find a dataset. Screenshots at the start of this notebook are a useful example, although using a different dataset.\n\n\nSetup instructions for python 3 virtual environments can be found here.\n\n<br>\nImport python packages", "from netCDF4 import Dataset\nimport matplotlib.pyplot as plt \nfrom siphon import catalog, ncss\nimport datetime\n%matplotlib inline", "Start by defining the parent catalog URL from NCI's THREDDS Data Server\nNote: Switch the '.html' ending on the URL to '.xml'", "url = 'http://dapds00.nci.org.au/thredds/catalog/u39/public/data/modis/fractionalcover-clw/v2.2/netcdf/catalog.xml'", "<a id='part1'></a> \nUsing Siphon\nSiphon is a collection of Python utilities for downloading data from Unidata data technologies. More information on installing and using Unidata's Siphon can be found: \nhttps://github.com/Unidata/siphon\nOnce selecting a parent dataset directory, Siphon can be used to search and use the data access methods and services provided by THREDDS. For example, Siphon will return a list of data endpoints for the OPeNDAP data URL, NetCDF Subset Service (NCSS), Web Map Service (WMS), Web Coverage Service (WCS), and the HTTP link for direct download. 
\nIn this Notebook, we'll be demonstrating the Netcdf Subset Service (NCSS).", "tds = catalog.TDSCatalog(url)\ndatasets = list(tds.datasets)\nendpts = list(tds.datasets.values())\n\nlist(tds.datasets.keys())", "The possible data services end points through NCI's THREDDS includes: OPeNDAP, Netcdf Subset Service (NCSS), HTTP download, Web Map Service (WMS), Web Coverage Service (WCS), NetCDF Markup Language (NcML), and a few metadata services (ISO, UDDC).", "for key, value in endpts[0].access_urls.items():\n print('{}, {}'.format(key, value))", "We can create a small function that uses Siphon's Netcdf Subset Service (NCSS) to extract a spatial request (defined by a lat/lon box)", "def get_data(dataset, bbox): \n nc = ncss.NCSS(dataset.access_urls['NetcdfSubset'])\n query = nc.query()\n query.lonlat_box(north=bbox[3],south=bbox[2],east=bbox[1],west=bbox[0])\n query.variables('bs')\n \n data = nc.get_data(query)\n \n lon = data['longitude'][:]\n lat = data['latitude'][:]\n bs = data['bs'][0,:,:]\n t = data['time'][:]\n \n time_base = datetime.date(year=1800, month=1, day=1)\n time = time_base + datetime.timedelta(t[0])\n \n return lon, lat, bs, time", "Query a single file and view result", "bbox = (135, 140, -31, -27)\nlon, lat, bs, t = get_data(endpts[0], bbox)\n\nplt.figure(figsize=(10,10))\nplt.imshow(bs, extent=bbox, cmap='gist_earth', origin='upper')\n\nplt.xlabel('longitude (degrees)', fontsize=14)\nplt.ylabel('latitude (degrees)', fontsize=14)\nprint(\"Date: {}\".format(t))", "Loop and query over the collection", "bbox = (135, 140, -31, -27)\nplt.figure(figsize=(10,10))\n\nfor endpt in endpts[:15]:\n try:\n lon, lat, bs, t = get_data(endpt, bbox)\n\n plt.imshow(bs, extent=bbox, cmap='gist_earth', origin='upper')\n plt.clim(vmin=-2, vmax=100)\n\n plt.tick_params(labelsize=14)\n plt.xlabel('longitude (degrees)', fontsize=14)\n plt.ylabel('latitude (degrees)', fontsize=14)\n\n plt.title(\"Date: \"+str(t), fontsize=16, weight='bold')\n plt.savefig(\"./images/\"+endpt.name+\".png\")\n plt.cla()\n except:\n pass\n\nplt.close()", "Can make an animation of the temporal evolution (this example is by converting the series of *.png files above into a GIF)\n<img src=\"./images/animated.gif\">\nCan also use Siphon to extract a single point", "def get_point(dataset, lat, lon):\n nc = ncss.NCSS(dataset.access_urls['NetcdfSubset'])\n query = nc.query()\n query.lonlat_point(lon, lat)\n query.variables('bs')\n \n data = nc.get_data(query)\n bs = data['bs'][0]\n date = data['date'][0]\n \n return bs, date\n\nbs, date = get_point(endpts[4], -27.75, 137)\nprint(\"{}, {}\".format(bs, date))", "Time series example", "data = []\nfor endpt in endpts[::20]:\n bs, date = get_point(endpt, -27.75, 137)\n data.append([date, bs])\n\nimport numpy as np\n\nBS = np.array(data)[:,1]\nDate = np.array(data)[:,0]\n\nplt.figure(figsize=(12,6))\nplt.plot(Date, BS, '-o', linewidth=2, markersize=8)\n\nplt.tick_params(labelsize=14)\nplt.xlabel('date', fontsize=14)\nplt.ylabel('fractional cover of bare soil (%)', fontsize=14)\nplt.title('Lat, Lon: -27.75, 137', fontsize=16)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
iRipVanWinkle/ml
Data Science UA - September 2017/Lecture 07 - Simulation Modeling/Portfolio_Simulation_Modeling.ipynb
mit
[ "Portfolio Simulation Modeling\nDownload Data\nDownload S&P 500 index data from Yahoo Finance", "import pandas as pd\nimport pandas.io.data as web\nimport matplotlib.pyplot as plt\nimport datetime as datetime\nfrom numpy import *\n%matplotlib inline", "Set start date, end date and data source ('Yahoo Finance', 'Google Finance', etc.). Download S&P 500 index data from Yahoo Finance.", "start_date = datetime.date(1976,1,1)\nend_date = datetime.date(2017,1,1)\n# Download S&P 500 index data\ntry:\n SnP500_Ddata = web.DataReader('^GSPC','yahoo',start_date,end_date)\nexcept:\n SnP500_Ddata = pd.read_csv(\"http://analytics.romanko.ca/data/SP500_hist.csv\")\n SnP500_Ddata.index = pd.to_datetime(SnP500_Ddata.Date)\nSnP500_Ddata.head()", "Transform daily data into annual data:", "# Create a time-series of annual data points from daily data\nSnP500_Adata = SnP500_Ddata.resample('A').last()\nSnP500_Adata[['Volume','Adj Close']].tail()", "Compute annual return of S&P 500 index:", "SnP500_Adata[['Adj Close']] = SnP500_Adata[['Adj Close']].apply(pd.to_numeric, errors='ignore')\nSnP500_Adata['returns'] = SnP500_Adata['Adj Close'] / SnP500_Adata['Adj Close'].shift(1) -1\nSnP500_Adata = SnP500_Adata.dropna()\nprint SnP500_Adata['returns']", "Compute average annual return and standard deviation of return for S&P 500 index:", "SnP500_mean_ret = float(SnP500_Adata[['returns']].mean())\nSnP500_std_ret = float(SnP500_Adata[['returns']].std())\nprint (\"S&P 500 average return = %g%%, st. dev = %g%%\") % (100*SnP500_mean_ret, 100*SnP500_std_ret)", "Simulation Example 1\nWe want to invest \\$1000 in the US stock market for 1 year: $v_0 = 1000$", "v0 = 1000 # Initial capital", "In our example we assume that the return of the market over the next year follow Normal distribution.\nBetween 1977 and 2014, S&P 500 returned 9.38% per year on average with a standard deviation of 16.15%.\nGenerate 100 scenarios for the market return over the next year (draw 100 random numbers from a Normal distribution with mean 9.38% and standard deviation of 16.15%):", "Ns = 100 # Number of scenarios\nr01 = random.normal(SnP500_mean_ret, SnP500_std_ret, Ns)\nr01", "Value of investment at the end of year 1:\n$v_1 = v_0 + r_{0,1}\\cdot v_0 = (1 + r_{0,1})\\cdot v_0$", "# Distribution of value at the end of year 1\nv1 = (r01 + 1) * v0\nv1", "Mean:", "mean(v1)", "Standard deviation:", "std(v1)", "Minimum, maximum:", "min(v1), max(v1)", "Persentiles\n5th percentile, median, 95th percentile:", "percentile(v1, [5, 50,95])", "Alternative way to compute percentiles\n5th percentile, median, 95th percentile:", "sortedScen = sorted(v1) # Sort scenarios\nsortedScen[5-1], sortedScen[50-1], sortedScen[95-1]", "Plot a histogram of the distribution of outcomes for v1:", "hist, bins = histogram(v1)\npositions = (bins[:-1] + bins[1:]) / 2\nplt.bar(positions, hist, width=60)\nplt.xlabel('portfolio value after 1 year')\nplt.ylabel('frequency')\nplt.show()", "Simulated paths over time:", "# Plot simulated paths over time\nfor res in v1:\n plt.plot((0,1), (v0, res))\nplt.xlabel('time step')\nplt.ylabel('portfolio value')\nplt.show()", "Simulation Example 2\nYou are planning for retirement and decide to invest in the market for the next 30 years (instead of only the next year as in example 1).\nAssume that every year your investment returns from investing into the S&P 500 will follow a Normal distribution with the mean and standard deviation as in example 1.\nYour initial capital is still \\$1000", "v0 = 1000 # Initial capital", "Between 1977 and 2014, S&P 500 
returned 9.38% per year on average with a standard deviation of 16.15%\nSimulate 30 columns of 100 observations each of single period returns:", "r_speriod30 = random.normal(SnP500_mean_ret, SnP500_std_ret, (Ns, 30))\nr_speriod30", "Compute and plot $v_{30}$", "v30 = prod(1 + r_speriod30 , 1) * v0\n\nhist, bins = histogram(v30)\npositions = (bins[:-1] + bins[1:]) / 2\nwidth = (bins[1] - bins[0]) * 0.8\nplt.bar(positions, hist, width=width)\nplt.xlabel('portfolio value after 30 years')\nplt.ylabel('frequency')\nplt.show()", "Simulated paths over time:", "for scenario in r_speriod30:\n y = [prod(1 + scenario[0:i]) * v0 for i in range(0,31)]\n plt.plot(range(0,31), y)\nplt.xlabel('time step')\nplt.ylabel('portfolio value')\nplt.show()", "Simulation Example 3\nDownload US Treasury bill data from Federal Reserve:", "# Download 3-month T-bill rates from Federal Reserve\nstart_date_b = datetime.date(1977,1,1)\nend_date_b = datetime.date(2017,1,1)\nTBill_Ddata = web.DataReader('DTB3','fred',start_date_b,end_date_b)\nTBill_Ddata.head()", "Transform daily data into annual data:", "# Create a time-series of annual data points from daily data\nTBill_Adata = TBill_Ddata.resample('A').last()\nTBill_Adata[['DTB3']].tail()", "Compute annual return for bonds:", "TBill_Adata['returns'] = TBill_Adata['DTB3'] / 100\nTBill_Adata = TBill_Adata.dropna()\nprint TBill_Adata['returns']", "Compute average annual return and standard deviation of return for bonds:", "TBill_mean_ret = float(TBill_Adata[['returns']].mean())\nTBill_std_ret = float(TBill_Adata[['returns']].std())\nprint (\"T-bill average return = %g%%, st. dev = %g%%\") % (100*TBill_mean_ret, 100*TBill_std_ret)", "Compute covariance matrix:", "covMat = cov(array(SnP500_Adata[['returns']]),array(TBill_Adata[['returns']]),rowvar=0)\ncovMat", "Simulate portfolio:", "v0 = 1000 # Initial capital\nNs = 5000 # Number of scenarios\n\nmu = [SnP500_mean_ret, TBill_mean_ret] # Expected return\nmu\n\nstockRet = ones(Ns)\nbondsRet = ones(Ns)\n\nscenarios = random.multivariate_normal(mu, covMat, Ns)\nfor year in range(1, 31):\n scenarios = random.multivariate_normal(mu, covMat, Ns)\n stockRet *= (1 + scenarios[:,0])\n bondsRet *= (1 + scenarios[:,1])\n\nv30 = 0.5 * v0 * stockRet + 0.5 * v0 * bondsRet\n\nhist, bins = histogram(v30, bins = 100)\npositions = (bins[:-1] + bins[1:]) / 2\nwidth = (bins[1] - bins[0]) * 0.8\nplt.bar(positions, hist, width=width)\nplt.xlabel('portfolio value after 30 years')\nplt.ylabel('frequency')\nplt.show()", "Simulation Example 4\nCompare two portfolios", "# Compute portfolios by iterating through different combinations of weights\nv30comp = []\nfor w in arange(0.2, 1.01, 0.2):\n v30comp += [w * v0 * stockRet + (1 - w) * v0 * bondsRet]\n\n# Plot a histogram of the distribution of\n# differences in outcomes for v30\n# (Stratery 4 - Strategy 2)\nv30d = v30comp[3] - v30comp[1]\n\nhist, bins = histogram(v30d, bins = 50)\npositions = (bins[:-1] + bins[1:]) / 2\nwidth = (bins[1] - bins[0]) * 0.8\nplt.bar(positions, hist, width=width)\nplt.show()\n\n# Compute number of elements in v30d that are > 0 and < 0 and compare\npos_count = (v30d > 0).sum()\nneg_count = (v30d <= 0).sum()\n\nprint u\"\"\"Strategy 4 was better in %d cases \nStrategy 2 was better in %d cases\nDifference = %d\"\"\" % (pos_count, neg_count, pos_count - neg_count)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gabicfa/RedesSociais
encontro04/encontro04.ipynb
gpl-3.0
[ "Encontro 04: Suporte para Análise Espectral de Grafos\nEste guia foi escrito para ajudar você a atingir os seguintes objetivos:\n\nlembrar conceitos básicos de geometria analítica e álgebra linear;\nexplicar conceitos básicos de matriz de adjacência.\n\nAs seguintes bibliotecas serão usadas:", "import numpy as np\nimport socnet as sn\nimport easyplot as ep", "Terminologia e notação\n\nUm escalar $\\alpha \\in \\mathbb{R}$ é denotado por uma letra grega minúscula.\nUm vetor $a \\in \\mathbb{R}^n$ é denotado por uma letra romana minúscula.\nUma matriz $A \\in \\mathbb{R}^{n \\times m}$ é denotada por uma letra romana maiúscula.\n\nGeometria analítica\n\n\nConsidere dois vetores, $a = (\\alpha_0, \\ldots, \\alpha_{n-1})$ e $b = (\\beta_0, \\ldots, \\beta_{n-1})$. O produto interno desses vetores é denotado por $a \\cdot b$ e definido como\n$\\sum^{n-1}_{i = 0} \\alpha_i \\beta_i$.\n\n\nDizemos que $a$ e $b$ são ortogonais se $a \\cdot b = 0$.\n\n\nA norma de $a$ é denotada por $\\|a\\|$ e definida como $\\sqrt{a \\cdot a}$, ou seja, $\\sqrt{\\sum^{n-1}_{i = 0} \\alpha^2_i}$.\n\n\nDizemos que $a$ é um versor se $\\|a\\| = 1$.\n\n\nNormalizar $a$ significa considerar o versor $\\frac{a}{\\|a\\|}$.\n\n\nÁlgebra linear\n\n\nConsidere um conjunto de vetores $a_0, \\ldots, a_{m-1}$. Uma combinação linear desses vetores é uma soma $\\gamma_0 a_0 + \\cdots + \\gamma_{m-1} a_{m-1}$.\n\n\nDizemos que $a_0, \\ldots, a_{m-1}$ é uma base se todo vetor em $\\mathbb{R}^n$ é uma combinação linear desses vetores.\n\n\nConsidere uma matriz $A$. Sua transposta é denotada por $A^t$ e definida como uma matriz tal que, para todo $i$ e $j$, o elemento na linha $i$ e coluna $j$ de $A$ é o elemento na linha $j$ e coluna $i$ de $A^t$.\n\n\nEm multiplicações, um vetor é por padrão \"de pé\", ou seja, uma matriz com uma única coluna. Disso segue que o produto $Ab$ é uma combinação linear das colunas de $A$.\n\n\nComo consequência, a transposta de um vetor é por padrão \"deitada\", ou seja, uma matriz com uma única linha. Disse segue que o produto $b^t A$ é a transposta de uma combinação linear das linhas de $A$.\n\n\nAutovetores e autovalores\nConsidere um vetor $b$ e uma matriz $A$. Dizemos que $b$ é um autovetor de $A$ se existe $\\lambda$ tal que\n$$Ab = \\lambda b.$$\nNesse caso, dizemos que $\\lambda$ é o autovalor de $A$ correspondente a $b$.\nNote que a multiplicação pela matriz pode mudar o módulo de um autovetor, mas não pode mudar sua direção. 
Essa interpretação geométrica permite visualizar um algoritmo surpreendentemente simples para obter um autovetor.", "from random import randint, uniform\nfrom math import pi, cos, sin\n\n\nNUM_PAIRS = 10\nNUM_FRAMES = 10\n\n\n# devolve um versor positivo aleatório\ndef random_pair():\n angle = uniform(0, pi / 2)\n\n return np.array([cos(angle), sin(angle)])\n\n\n# devolve uma cor aleatória\ndef random_color():\n r = randint(0, 255)\n g = randint(0, 255)\n b = randint(0, 255)\n\n return (r, g, b)\n\n\n# matriz da qual queremos descobrir um autovetor\nA = np.array([\n [ 2, -1],\n [-1, 2]\n])\n\n# versores positivos e cores aleatórias\npairs = []\ncolors = []\nfor i in range(NUM_PAIRS):\n pairs.append(random_pair())\n colors.append(random_color())\n\nframes = []\n\nfor i in range(NUM_FRAMES):\n frames.append(ep.frame_vectors(pairs, colors))\n\n # multiplica cada vetor por A\n pairs = [A.dot(pair) for pair in pairs]\n\nep.show_animation(frames, xrange=[-5, 5], yrange=[-5, 5])", "Note que as multiplicações por $A$ fazem o módulo dos vetores aumentar indefinidamente, mas a direção converge. Para deixar isso mais claro, vamos normalizar depois de multiplicar.", "# normaliza um vetor\ndef normalize(a):\n return a / np.linalg.norm(a)\n\n\n# versores positivos e cores aleatórias\npairs = []\ncolors = []\nfor i in range(NUM_PAIRS):\n pairs.append(random_pair())\n colors.append(random_color())\n\nframes = []\n\nfor i in range(NUM_FRAMES):\n frames.append(ep.frame_vectors(pairs, colors))\n\n # multiplica cada vetor por A e normaliza\n pairs = [normalize(A.dot(pair)) for pair in pairs]\n\nep.show_animation(frames, xrange=[-1, 1], yrange=[-1, 1])", "Portanto, o algoritmo converge para uma direção que a multiplicação por $A$ não pode mudar. Isso corresponde à definição de autovetor dada acima!\nCabe enfatizar, porém, que nem toda matriz garante convergência.\nMatriz de adjacência\nConsidere um grafo $(N, E)$ e uma matriz $A \\in {0, 1}^{|N| \\times |N|}$. Denotando por $\\alpha_{ij}$ o elemento na linha $i$ e coluna $j$, dizemos que $A$ é a matriz de adjacência do grafo $(N, E)$ se:\n$$\\textrm{supondo } (N, E) \\textrm{ não-dirigido}, \\alpha_{ij} = 1 \\textrm{ se } {i, j} \\in E \\textrm{ e } \\alpha_{ij} = 0 \\textrm{ caso contrário};$$\n$$\\textrm{supondo } (N, E) \\textrm{ dirigido}, \\alpha_{ij} = 1 \\textrm{ se } (i, j) \\in E \\textrm{ e } \\alpha_{ij} = 0 \\textrm{ caso contrário}.$$\nVamos construir a matriz de adjacência de um grafo dos encontros anteriores.", "sn.graph_width = 320\nsn.graph_height = 180\n\ng = sn.load_graph('encontro02/3-bellman.gml', has_pos=True)\n\nfor n in g.nodes():\n g.node[n]['label'] = str(n)\n\nsn.show_graph(g, nlab=True)\n\nmatrix = sn.build_matrix(g)\n\nprint(matrix)", "Exercício 1\nConsiderando um grafo não-dirigido, como encontrar vizinhos na matriz de adjacência? Há mais de uma maneira.\n\nPara encontrar vizinhos em uma matriz adjacente é preciso encontrar o valor \"1\" e checar em que coluna j e linha i esse valor se encontra. Ao fazer isso conclue-se que o nó i é vizinho do nó j, portanto a linha j e a coluna i da matriz adjacente também deve apresentar o valor \"1\". 
\n\nExercício 2\nCite uma propriedade que matrizes de grafos não-dirigidos possuem, mas matrizes de grafos dirigidos não possuem.\n\nEm uma matriz de grafo não-dirigido se o valor de αij=1, o valor de αji também deve ser 1, enquanto em uma matriz de grafo não dirigido isso não necessáriamente é verdade \n\nExercício 3\nConsiderando um grafo dirigido, como encontrar sucessores na matriz de adjacência?\n\nPara encontrar sucessores em uma matriz adjacente é preciso encontrar o valor \"1\" e checar qm qual coluna j e linha i esse valor se encontra. Ao fazer isso pode-se concluir que o nó j é sucessor do noó i.\n\nExercício 4\nConsiderando um grafo dirigido, como encontrar predecessores na matriz de adjacência?\n\nPara encontrar predecessores em uma matriz adjacente é preciso encontrar o valor \"1\" e checar qm qual coluna j e linha i esse valor se encontra. Ao fazer isso pode-se concluir que o nó i é predecessores do nó j.\n\nExercício 5\nSe $A$ é a matriz de adjacência de um grafo não-dirigido, o que o produto $AA$ representa?\n\nO valor presente na linha i e na coluna j na matriz AA representa o números de nós os quais o nó j está ligado, que também estão ligados ao nó i\n\nExercício 6\nSe $A$ é a matriz de adjacência de um grafo não-dirigido, o que o produto $AA^t$ representa?\n\nO valor presente na linha i e na coluna j na matriz AAt representa o números de nós os quais o nó j está ligado, que também estão ligados ao nó i\n\nExercício 7\nSe $A$ é a matriz de adjacência de um grafo dirigido, o que o produto $AA$ representa?\n\n\n\nExercício 8\nSe $A$ é a matriz de adjacência de um grafo dirigido, o que o produto $AA^t$ representa?\n(sua resposta)\nEm encontros futuros, veremos como alguns dos conceitos das primeiras seções podem ser usados em análise de redes sociais." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BrownDwarf/ApJdataFrames
notebooks/Scholz2012.ipynb
mit
[ "ApJdataFrames Scholz2012\nTitle: SUBSTELLAR OBJECTS IN NEARBY YOUNG CLUSTERS (SONYC). VI. THE PLANETARY-MASS DOMAIN OF NGC 1333\nAuthors: Alexander Scholz et al.\nData is from this paper:\nhttp://iopscience.iop.org/0004-637X/702/1/805/", "%pylab inline\n\nimport seaborn as sns\nsns.set_context(\"notebook\", font_scale=1.5)\n\n#import warnings\n#warnings.filterwarnings(\"ignore\")\n\nimport pandas as pd", "Table 1 - New Very Low Mass Members of NGC 1333\nInternet is still not working", "tbl1 = pd.read_clipboard(#\"http://iopscience.iop.org/0004-637X/756/1/24/suppdata/apj437811t1_ascii.txt\",\n sep='\\t', skiprows=[0,1,2,4], skipfooter=3, engine='python', usecols=range(10))\ntbl1\n\n! mkdir ../data/Scholz2012\n\ntbl1.to_csv(\"../data/Scholz2012/tbl1.csv\", index=False)", "Script finished." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
ernestyalumni/MLgrabbag
CNN/CNN_tf.ipynb
mit
[ "Convolution Neural Networks with TensorFlow", "%matplotlib inline\n\nimport os,sys\n\nsys.path.append( os.getcwd() + '/ML' )\n\nimport numpy\nimport numpy as np\n\nimport tensorflow\nimport tensorflow as tf", "MNIST Data\ncf. Found here http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz or https://www-labs.iro.umontreal.ca/~lisa/deep/data/mnist/?C=M;O=A", "import gzip\nimport six.moves.cPickle as pickle \n\n# find where `mnist.pkl.gz` is on your own computer \nf=gzip.open(\"../Data/mnist.pkl.gz\",'rb')\ntry:\n train_set,valid_set,test_set = pickle.load(f,encoding='latin1')\nexcept:\n train_set,valid_set,test_set = pickle.load(f)\nf.close()\n\ntrain_set_x,train_set_y=train_set\nvalid_set_x,valid_set_y=valid_set\ntest_set_x,test_set_y=test_set\ntrain_set_x = train_set_x.astype(np.float32)\ntrain_set_y = train_set_y.astype(np.float32)\nvalid_set_x = valid_set_x.astype(np.float32)\nvalid_set_y = valid_set_y.astype(np.float32)\ntest_set_x = test_set_x.astype(np.float32)\ntest_set_y = test_set_y.astype(np.float32)\nprint(train_set_x.shape,train_set_y.shape) # observe the value for (m,d), number of training examples x number of features\nprint(valid_set_x.shape,valid_set_y.shape)\nprint(test_set_x.shape,test_set_y.shape)\n\n# this is for reshaping the MNIST data into something for convolution neural networks\ntrain_set_x = train_set_x.reshape((train_set_x.shape[0],28,28,1))\nvalid_set_x = valid_set_x.reshape((valid_set_x.shape[0],28,28,1))\ntest_set_x = test_set_x.reshape((test_set_x.shape[0],28,28,1))\nprint(train_set_x.shape)\nprint(valid_set_x.shape)\nprint(test_set_x.shape)\n\nprint(train_set_y.min())\nprint(train_set_y.max())\nprint(valid_set_y.min())\nprint(valid_set_y.max())\nprint(test_set_y.min())\nprint(test_set_y.max())", "Turn this into a so-called \"one-hot vector representation.\" \nRecall that whereas the original labels (in the variable y) were 0,1, ..., 9 for 10 different (single) digits, for the purpose of training a neural network, we need to recode these labels as vectors containing only values 0 or 1.", "K=10 \nm_train = train_set_y.shape[0]\nm_valid = valid_set_y.shape[0]\nm_test = test_set_y.shape[0]\ny_train = [np.zeros(K) for row in train_set_y] # list of m_train numpy arrays of size dims. (10,)\ny_valid = [np.zeros(K) for row in valid_set_y] # list of m_valid numpy arrays of size dims. (10,)\ny_test = [np.zeros(K) for row in test_set_y] # list of m_test numpy arrays of size dims. 
(10,)\nfor i in range(m_train):\n y_train[i][ int(train_set_y[i]) ] = 1.\nfor i in range(m_valid):\n y_valid[i][ int(valid_set_y[i]) ] = 1.\nfor i in range(m_test):\n y_test[i][ int(test_set_y[i]) ] = 1.\ny_train = np.array(y_train).astype(np.float32)\ny_valid = np.array(y_valid).astype(np.float32)\ny_test = np.array(y_test).astype(np.float32) \nprint(y_train.shape)\nprint(y_valid.shape)\nprint(y_test.shape)\n\nprint(y_train.min())\nprint(y_train.max())\nprint(y_valid.min())\nprint(y_valid.max())\nprint(y_test.min())\nprint(y_test.max())", "Convolution operator $*$ and related filter (stencil) $c$ example", "X = tf.placeholder(tf.float32, shape=[None,None,None,3],name=\"X\")\n\nfilter_size = (9,9,3,2) # (W_1,W_2,C_lm1,C_l)\nc_bound = np.sqrt(3*9*9)\nc = tf.Variable( tf.random_uniform(filter_size,\n minval=-1.0/c_bound,\n maxval=1.0/c_bound))\n\nb=tf.Variable(tf.random_uniform([2,],minval=-.5,maxval=.5), dtype=tf.float32)\n\nprint(c.shape)\nprint(b.shape)\n\nconv_out=tf.nn.conv2d(X,\n c,\n strides=[1,1,1,1],\n padding=\"VALID\",\n use_cudnn_on_gpu=True,name=None)\n\nzl = tf.nn.sigmoid( conv_out + b )", "Let's have a little bit of fun with this:", "import pylab\nfrom PIL import Image\n\n# open example image of dimensions 639x516, HxW \nimg = Image.open(open('../Data/3wolfmoon.jpg'))\nprint(img.size) # WxH\nprint(np.asarray(img).max())\nprint(np.asarray(img).min())\n\n# dimensions are (height,width,channel)\nimg_np=np.asarray(img,dtype=np.float32)/256.\nprint(img_np.shape)\nprint(img_np.max())\nprint(img_np.min())\n\n# put image in 4D tensor of shape (1,height,width,3)\nimg_ = img_np.reshape(1,639,516,3)\nprint(img_.shape)\nprint(img_.max())\nprint(img_.min())\n\nsess = tf.Session()\ninit_op=tf.global_variables_initializer()\nsess.run(init_op) \n\nfiltered_img = sess.run(zl,feed_dict={X:img_})\n\nprint(type(filtered_img))\nprint(filtered_img.shape)\n\n# plot original image and first and second components of output\npylab.subplot(1, 3, 1); pylab.axis('off'); pylab.imshow(img)\npylab.gray();\n# recall that the convOp output (filtered image) is actually a \"minibatch\",\n# of size 1 here, so we take index 0 in the first dimension:\npylab.subplot(1, 3, 2); pylab.axis('off'); pylab.imshow(filtered_img[0, :, :,0])\npylab.subplot(1, 3, 3); pylab.axis('off'); pylab.imshow(filtered_img[0, :, :,1])\npylab.show()\n\nprint(type(img_))\nprint(img_.shape)\nprint(img_.max())\nprint(img_.min())\nprint(type(filtered_img))\nprint(filtered_img.shape)\nprint(filtered_img.max())\nprint(filtered_img.min())", "Empirically, we seen that the image \"shrank\" in size dimensions. We can infer that this convolution operation doesn't assume anything about the boundary conditions, and so the filter (stencil), requiring a, in this case, 9x9 \"block\" or 9x9 values, will only, near the boundaries, output values for the \"inside\" cells/grid points. 
\nMaxPooling", "input = tf.placeholder(tf.float32, shape=[None,None,None,None],name=\"input\")\nmaxpool_shape=(2,2)\nwindow_size = (1,) + maxpool_shape + (1,)\npool_out = tf.nn.max_pool(input,\n ksize=window_size,\n strides=window_size,padding=\"VALID\")\n\ntf.reset_default_graph() \n\nsess = tf.Session()\ninit_op=tf.global_variables_initializer()\nsess.run(init_op) \n\ninvals=np.random.RandomState(1).rand(3,5,5,2)\nprint(invals.shape)\ninvals_max = sess.run(pool_out,feed_dict={input:invals})\nprint(invals_max.shape)\n\ninvals2 = np.random.RandomState(1).rand(3,256,64,2)\nprint(invals2.shape)\ninvals2_max = sess.run(pool_out,feed_dict={input:invals2})\nprint(invals2_max.shape)\ninvals3 = np.random.RandomState(1).rand(3,257,65,2)\nprint(invals3.shape)\ninvals3_max = sess.run(pool_out,feed_dict={input:invals3})\nprint(invals3_max.shape)\n\n\ntf.reset_default_graph() \n\ninput = tf.placeholder(tf.float32, shape=[None,None,None,None],name=\"input\")\nmaxpool_shape2=(4,4)\nwindow_size2 = (1,) + maxpool_shape2 + (1,)\npool_out2 = tf.nn.max_pool(input,\n ksize=window_size2,\n strides=window_size2,padding=\"VALID\")\n\nsess = tf.Session()\ninit_op=tf.global_variables_initializer()\nsess.run(init_op) \n\ninvals2_max = sess.run(pool_out2,feed_dict={input:invals2})\nprint(invals2_max.shape)", "Convolution axon test", "sys.path.append( \"../ML/\")\n\nimport CNN_tf\nfrom CNN_tf import Axon_CNN\n\ntf.reset_default_graph() \n\nfilter_size = (9,9,3,2) # (W_1,W_2,C_lm1,C_l)\nc_bound = np.sqrt(3*9*9)\nc = tf.Variable( tf.random_uniform(filter_size,\n minval=-1.0/c_bound,\n maxval=1.0/c_bound))\n\nb=tf.Variable(tf.random_uniform([2,],minval=-.5,maxval=.5), dtype=tf.float32)\n\nConv_axon_test=Axon_CNN(1,(3,2),(9,9),Pl=None,c=c,b=b,activation=None)\n\nConv_axon_test.connect_through()\n\nsess = tf.Session()\ninit_op=tf.global_variables_initializer()\nsess.run(init_op) \n\nfiltered_img_Conv_axon_test = sess.run(Conv_axon_test.al,feed_dict={Conv_axon_test.alm1:img_})\n\n# plot original image and first and second components of output\npylab.subplot(1, 3, 1); pylab.axis('off'); pylab.imshow(img)\npylab.gray();\n# recall that the convOp output (filtered image) is actually a \"minibatch\",\n# of size 1 here, so we take index 0 in the first dimension:\npylab.subplot(1, 3, 2); pylab.axis('off'); pylab.imshow(filtered_img_Conv_axon_test[0, :, :,0])\npylab.subplot(1, 3, 3); pylab.axis('off'); pylab.imshow(filtered_img_Conv_axon_test[0, :, :,1])\npylab.show()\n\nprint(type(img_))\nprint(img_.shape)\nprint(img_.max())\nprint(img_.min())\nprint(type(filtered_img_Conv_axon_test))\nprint(filtered_img_Conv_axon_test.shape)\nprint(filtered_img_Conv_axon_test.max())\nprint(filtered_img_Conv_axon_test.min())\n\nrand_unif_init_vals *= np.float32(4)\n\n(1,) + (2,2) + (1,)\n\n[1,] + [2,2] + [1,]\n\n[1,] + (2,2) + [1,]\n\nlist( [1,2,3])\n\ntestplacehold = tf.placeholder(tf.float32, shape=[None, 100,100,2])\n\ntestplacehold.shape\n\ntf.reshape( testplacehold, shape=[-1,int( testplacehold.shape[1]*testplacehold.shape[2]*testplacehold.shape[3])]).shape\n\nlen(testplacehold.shape)", "CNN Feedforward test for Convolution Neural Networks", "# sanity check\nCNNFF_test = CNN_tf.Feedforward(2,('C','C'),\n [{\"C_ls\":(1,20) ,\"Wl\":(5,5),\"Pl\":(2,2),\"Ll\":(28,28)},\n {\"C_ls\":(20,50),\"Wl\":(5,5),\"Pl\":(2,2),\"Ll\":(12,12)}],\n psi_L=tf.tanh )\n\ns_l = (50*4*4,500)\ntf.reshape( CNNFF_test.Axons[-1].al, shape=[-1,s_l[0] ]).shape\n\n# sanity check\nCNNFF_test = CNN_tf.Feedforward(3,('C','C','D'),\n [{\"C_ls\":(1,20) 
,\"Wl\":(5,5),\"Pl\":(2,2),\"Ll\":(28,28)},\n {\"C_ls\":(20,50),\"Wl\":(5,5),\"Pl\":(2,2),\"Ll\":(12,12)},\n (50*4*4,500)],\n psi_L=tf.tanh )", "CNN class test as a Deep Neural Network (i.e. Artificial neural network, i.e. no convolution)", "L=4\nCorD=('D','D','D','D')\ndims_data_test=[(784,392), (392,196),(196,98),(98,10)]\n\nCNNFF_test=CNN_tf.Feedforward(L,CorD,dims_data_test,psi_L=tf.nn.softmax)\n\nCNN_test=CNN_tf.CNN(CNNFF_test)\n\nCNN_test.connect_through()\n\nCNN_test.build_J_L2norm_w_reg(lambda_val=0.01)\n\nCNN_test.build_optimizer()\n\nCNN_test.train_model(max_iters=10000, X_data=train_set_x[:20000],y_data=y_train[:20000])\n\nyhat_test = CNN_test.predict()\n\nprint(type(yhat_test))\nprint(yhat_test.shape)\nprint(np.argmax(yhat_test,axis=1).shape)\nnp.mean( train_set_y[:20000]==np.argmax(yhat_test,axis=1) )", "CNN class test for Convolution Neural Networks", "sys.path.append( \"../ML/\")\n\nimport CNN_tf\nfrom CNN_tf import Axon_CNN\n\nL=4\nCorD=('C','C','D','D')\ndims_data_test=[{\"C_ls\":(1,20) ,\"Wl\":(5,5),\"Pl\":(2,2),\"Ll\":(28,28)},\n {\"C_ls\":(20,50) ,\"Wl\":(5,5),\"Pl\":(2,2),\"Ll\":(12,12)},\n (50*4*4,500),(500,10)] \nCNNFF_test = CNN_tf.Feedforward(L,CorD,dims_data_test,psi_L=tf.nn.softmax)\n\nCNN_test = CNN_tf.CNN(CNNFF_test)\n\nCNN_test.connect_through()\n\nCNN_test.build_J_logistic_w_reg(lambda_val=0.01)\n\nCNN_test.build_optimizer(alpha=0.005)\n\nCNN_test.train_model(max_iters=10, X_data=train_set_x[:30000],y_data=y_train[:30000])\n\nCNN_test._CNN_model.Axons[0].alm1\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ianabc/vitale
vitale.ipynb
gpl-2.0
[ "The Vitale Property\nThis program calculates the Polydivisible numbers. It is inspired by a\nblog post on (republicofmath.com) which highlighted a result of Ben Vitale\nfor a 25 digit polydivisible number. More information on polydivisible number can be found on the wikipedia page, or the Online Encyclopedia of Integer Sequences.\nPolydivisible Numbers\n3608528850368400786036725 is the only 25 digit number which satisfies\nthe polydivisibilty (or Vitale) property. It is divisible by 25, it's\nfirst 24 digits are divisible by 24, it's first 23 digits are divisible\nby 23 etc. all the way down to 2. There are NO 26 digit numbers which\nextend this property.", "import math\n\ndef vitaleProperty(n):\n if n == 2:\n return range(10, 99, 2)\n else:\n vnums = []\n for vnum in vitaleProperty(n-1):\n vnum = vnum * 10\n for j in range(10):\n if ((vnum + j) % n == 0):\n vnums.append(vnum + j)\n\n return vnums\n\nn = 2\nnvnums = []\n\nwhile True:\n vitale_n = vitaleProperty(n)\n if (len(vitale_n) > 0):\n nvnums.append(len(vitale_n))\n n = n + 1\n else:\n break\n\ntry:\n import matplotlib.pyplot as plt\n %matplotlib inline\n plt.plot(nvnums, 'ro', markersize=5)\n plt.ylim([0, max(nvnums) * 1.1])\n plt.show()\nexcept ImportError:\n print('\\n'.join('{:3}: {}'.format(*k) for k in enumerate(nvnums)))", "As discussed in the wikipedia article, a polydivisible number with n-1 digits can be extended to a polydivisible number with n digits in 10/n different ways. So we can estimate the number of n-digit polydivisible numbers\n$$ F(n) \\approx \\frac{9\\times 10^{n-1}}{n!} $$\nThe value tends to zero as $n\\to \\infty$ and we can sum this over the values of n to get an estimate of the total number of polydivisible numbers\n$$\\frac{9(e^{10}-1)}{10}$$", "S_f = (9 * math.e**(10))/10\n\nint(S_f)\n\n\nsum(nvnums)", "So the estimate is off by only", "(sum(nvnums) - S_f) / sum(nvnums)", "i.e. $3\\%$" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/sandbox-1/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: SANDBOX-1\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:24\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'sandbox-1', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. 
Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. 
Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
iRipVanWinkle/ml
Data Science UA - September 2017/Lecture 01 - Introduction/US_Baby_Names-2010.ipynb
mit
[ "MIE1624H Introductory Example\nUS Baby Names 2010", "%pwd", "http://www.ssa.gov/oact/babynames/limits.html\nLoad file into a DataFrame", "import pandas as pd\n\nnames2010 = pd.read_csv('/resources/yob2010.txt', names=['name', 'sex', 'births'])\nnames2010", "Total number of birth in year 2010 by sex", "names2010.groupby('sex').births.sum()", "Insert prop column for each group", "def add_prop(group):\n # Integer division floors\n births = group.births.astype(float)\n\n group['prop'] = births / births.sum()\n return group\nnames2010 = names2010.groupby(['sex']).apply(add_prop)\n\nnames2010", "Verify that the prop clumn sums to 1 within all the groups", "import numpy as np\n\nnp.allclose(names2010.groupby(['sex']).prop.sum(), 1)", "Extract a subset of the data with the top 10 names for each sex", "def get_top10(group):\n return group.sort_index(by='births', ascending=False)[:10]\ngrouped = names2010.groupby(['sex'])\ntop10 = grouped.apply(get_top10)\n\ntop10.index = np.arange(len(top10))\n\ntop10", "Aggregate all birth by the first latter from name column", "# extract first letter from name column\nget_first_letter = lambda x: x[0]\nfirst_letters = names2010.name.map(get_first_letter)\nfirst_letters.name = 'first_letter'\n\ntable = names2010.pivot_table('births', index=first_letters,\n columns=['sex'], aggfunc=sum)\n\ntable.head()", "Normalize the table", "table.sum()\n\nletter_prop = table / table.sum().astype(float)", "Plot proportion of boys and girls names starting in each letter", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(2, 1, figsize=(10, 8))\nletter_prop['M'].plot(kind='bar', rot=0, ax=axes[0], title='Male')\nletter_prop['F'].plot(kind='bar', rot=0, ax=axes[1], title='Female',\n legend=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
darkomen/TFG
medidas/12082015/.ipynb_checkpoints/Análisis de datos Ensayo 2-checkpoint.ipynb
cc0-1.0
[ "Análisis de los datos obtenidos\nUso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 12 de Agosto del 2015\nLos datos del experimento:\n* Hora de inicio: 11:05\n* Hora final : 11:35\n* Filamento extruido: 435cm\n* $T: 150ºC$\n* $V_{min} tractora: 1.5 mm/s$\n* $V_{max} tractora: 3.4 mm/s$\n* Los incrementos de velocidades en las reglas del sistema experto son distintas:\n * En el caso 5 se pasa de un incremento de velocidad de +1 a un incremento de +2.", "#Importamos las librerías utilizadas\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n#Mostramos las versiones usadas de cada librerías\nprint (\"Numpy v{}\".format(np.__version__))\nprint (\"Pandas v{}\".format(pd.__version__))\nprint (\"Seaborn v{}\".format(sns.__version__))\n\n#Abrimos el fichero csv con los datos de la muestra\ndatos = pd.read_csv('ensayo1.CSV')\n\n%pylab inline\n\n#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar\ncolumns = ['Diametro X','Diametro Y', 'RPM TRAC']\n\n#Mostramos un resumen de los datos obtenidoss\ndatos[columns].describe()\n#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]", "Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica", "datos.ix[:, \"Diametro X\":\"Diametro Y\"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')\n#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')\n\ndatos.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')", "Con esta segunda aproximación se ha conseguido estabilizar los datos. Se va a tratar de bajar ese porcentaje. Como segunda aproximación, vamos a modificar los incrementos en los que el diámetro se encuentra entre $1.80mm$ y $1.70 mm$, en ambos sentidos. (casos 3 a 6)\nComparativa de Diametro X frente a Diametro Y para ver el ratio del filamento", "plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')", "Filtrado de datos\nLas muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.", "datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]\n\n#datos_filtrados.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')", "Representación de X/Y", "plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')", "Analizamos datos del ratio", "ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']\nratio.describe()\n\nrolling_mean = pd.rolling_mean(ratio, 50)\nrolling_std = pd.rolling_std(ratio, 50)\nrolling_mean.plot(figsize=(12,6))\n# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)\nratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))", "Límites de calidad\nCalculamos el número de veces que traspasamos unos límites de calidad. \n$Th^+ = 1.85$ and $Th^- = 1.65$", "Th_u = 1.85\nTh_d = 1.65\n\ndata_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |\n (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]\n\ndata_violations.describe()\n\ndata_violations.plot(subplots=True, figsize=(12,12))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Kaggle/learntools
notebooks/sql_advanced/raw/tut2.ipynb
apache-2.0
[ "Introduction\nIn the Intro to SQL micro-course, you learned how to use aggregate functions, which perform calculations based on sets of rows. In this tutorial, you'll learn how to define analytic functions, which also operate on a set of rows. However, unlike aggregate functions, analytic functions return a (potentially different) value for each row in the original table.\nAnalytic functions allow us to perform complex calculations with relatively straightforward syntax. For instance, we can quickly calculate moving averages and running totals, among other quantities.\nSyntax\nTo understand how to write analytic functions, we'll work with a small table containing data from two different people who are training for a race. The id column identifies each runner, the date column holds the day of the training session, and time shows the time (in minutes) that the runner dedicated to training. Say we'd like to calculate a moving average of the training times for each runner, where we always take the average of the current and previous training sessions. We can do this with the following query:\n\nAll analytic functions have an OVER clause, which defines the sets of rows used in each calculation. The OVER clause has three (optional) parts:\n- The PARTITION BY clause divides the rows of the table into different groups. In the query above, we divide by id so that the calculations are separated by runner.\n- The ORDER BY clause defines an ordering within each partition. In the sample query, ordering by the date column ensures that earlier training sessions appear first.\n- The final clause (ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) is known as a window frame clause. It identifies the set of rows used in each calculation. We can refer to this group of rows as a window. (Actually, analytic functions are sometimes referred to as analytic window functions or simply window functions!) \n\n(More on) window frame clauses\nThere are many ways to write window frame clauses:\n- ROWS BETWEEN 1 PRECEDING AND CURRENT ROW - the previous row and the current row.\n- ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING - the 3 previous rows, the current row, and the following row.\n- ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING - all rows in the partition.\nOf course, this is not an exhaustive list, and you can imagine that there are many more options! In the code below, you'll see some of these clauses in action.\nThree types of analytic functions\nThe example above uses only one of many analytic functions. BigQuery supports a wide variety of analytic functions, and we'll explore a few here. For a complete listing, you can take a look at the documentation.\n1) Analytic aggregate functions\nAs you might recall, AVG() (from the example above) is an aggregate function. The OVER clause is what ensures that it's treated as an analytic (aggregate) function. 
Aggregate functions take all of the values within the window as input and return a single value.\n\nMIN() (or MAX()) - Returns the minimum (or maximum) of input values\nAVG() (or SUM()) - Returns the average (or sum) of input values \nCOUNT() - Returns the number of rows in the input\n\n2) Analytic navigation functions\nNavigation functions assign a value based on the value in a (usually) different row than the current row.\n- FIRST_VALUE() (or LAST_VALUE()) - Returns the first (or last) value in the input\n- LEAD() (and LAG()) - Returns the value on a subsequent (or preceding) row\n3) Analytic numbering functions\nNumbering functions assign integer values to each row based on the ordering.\n- ROW_NUMBER() - Returns the order in which rows appear in the input (starting with 1)\n- RANK() - All rows with the same value in the ordering column receive the same rank value, where the next row receives a rank value which increments by the number of rows with the previous rank value.\nExample\nWe'll work with the San Francisco Open Data dataset. We begin by reviewing the first several rows of the bikeshare_trips table. (The corresponding code is hidden, but you can un-hide it by clicking on the \"Code\" button below.)", "#$HIDE_INPUT$\nfrom google.cloud import bigquery\n\n# Create a \"Client\" object\nclient = bigquery.Client()\n\n# Construct a reference to the \"san_francisco\" dataset\ndataset_ref = client.dataset(\"san_francisco\", project=\"bigquery-public-data\")\n\n# API request - fetch the dataset\ndataset = client.get_dataset(dataset_ref)\n\n# Construct a reference to the \"bikeshare_trips\" table\ntable_ref = dataset_ref.table(\"bikeshare_trips\")\n\n# API request - fetch the table\ntable = client.get_table(table_ref)\n\n# Preview the first five lines of the table\nclient.list_rows(table, max_results=5).to_dataframe()", "Each row of the table corresponds to a different bike trip, and we can use an analytic function to calculate the cumulative number of trips for each date in 2015.", "# Query to count the (cumulative) number of trips per day\nnum_trips_query = \"\"\"\n WITH trips_by_day AS\n (\n SELECT DATE(start_date) AS trip_date,\n COUNT(*) as num_trips\n FROM `bigquery-public-data.san_francisco.bikeshare_trips`\n WHERE EXTRACT(YEAR FROM start_date) = 2015\n GROUP BY trip_date\n )\n SELECT *,\n SUM(num_trips) \n OVER (\n ORDER BY trip_date\n ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW\n ) AS cumulative_trips\n FROM trips_by_day\n \"\"\"\n\n# Run the query, and return a pandas DataFrame\nnum_trips_result = client.query(num_trips_query).result().to_dataframe()\nnum_trips_result.head()", "The query uses a common table expression (CTE) to first calculate the daily number of trips. Then, we use SUM() as an aggregate function.\n- Since there is no PARTITION BY clause, the entire table is treated as a single partition.\n- The ORDER BY clause orders the rows by date, where earlier dates appear first. \n- By setting the window frame clause to ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, we ensure that all rows up to and including the current date are used to calculate the (cumulative) sum. 
(Note: If you read the documentation, you'll see that this is the default behavior, and so the query would return the same result if we left out this window frame clause.)\nThe next query tracks the stations where each bike began (in start_station_id) and ended (in end_station_id) the day on October 25, 2015.", "# Query to track beginning and ending stations on October 25, 2015, for each bike\nstart_end_query = \"\"\"\n SELECT bike_number,\n TIME(start_date) AS trip_time,\n FIRST_VALUE(start_station_id)\n OVER (\n PARTITION BY bike_number\n ORDER BY start_date\n ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING\n ) AS first_station_id,\n LAST_VALUE(end_station_id)\n OVER (\n PARTITION BY bike_number\n ORDER BY start_date\n ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING\n ) AS last_station_id,\n start_station_id,\n end_station_id\n FROM `bigquery-public-data.san_francisco.bikeshare_trips`\n WHERE DATE(start_date) = '2015-10-25' \n \"\"\"\n\n# Run the query, and return a pandas DataFrame\nstart_end_result = client.query(start_end_query).result().to_dataframe()\nstart_end_result.head()", "The query uses both FIRST_VALUE() and LAST_VALUE() as analytic functions.\n- The PARTITION BY clause breaks the data into partitions based on the bike_number column. Since this column holds unique identifiers for the bikes, this ensures the calculations are performed separately for each bike.\n- The ORDER BY clause puts the rows within each partition in chronological order.\n- Since the window frame clause is ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING, for each row, its entire partition is used to perform the calculation. (This ensures the calculated values for rows in the same partition are identical.)\nYour turn\nWrite your own analytic functions!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gaufung/Data_Analytics_Learning_Note
python-statatics-tutorial/basic-theme/python-language/Regex.ipynb
mit
[ "正则表达式\n1 基础部分\n\n\n管道符号(|)匹配多个正则表达式:\nat | home 匹配 at,home\n\n\n匹配任意单一字符(.): \nt.o 匹配 tao,tzo\n\n\n字符串和单词开始和结尾位置匹配:\n\n\n(^) 匹配字符串开始位置:^From 匹配 From 开始的字符串\n\n\n(\\$) 匹配字符串结尾的位置: /bin/tsch\\$ 匹配/bin/tsch结束的字符串\n\n\n\n\n(\\b) 匹配单词的边界:\\bthe 匹配the开头的单词\n\n(\\B) 与\\b 相反\n([]) 创建匹配字符集合:b[aeiu]t 匹配 bat,bet,bit,but\n(-) 指定范围匹配: [a-z]匹配a到z的字符\n(^)否定:[^aeiou]匹配非元音\n*: 出现一次;+: 出现1次和多次;?:出现1次和0次\n\\d: 匹配数字,\\D: 相反;\\w 整个字符数字的字符集,\\W 相反;\\s 空白字符,\\S 相反。\n(()):进行分组匹配\n\n2 Re模块\n2.1 常用函数\n\ncomple(pattern, flags=0) \n对正则表达式进行编译,返回regex对象\nmatch(pattern, string, flags=0) \n尝试用一个正则表达式模式pattern对一个字符串进行匹配,如果匹配成功,返回匹配的对象 \nsearch(pattern, string, flags=0) \n在字符串中搜索pattern的第一次出现\nfindall(pattern, string[,flags])和finditer(pattern, string[,flags]) \n返回字符串中模式所有的出现,返回分别为列表和迭代对象 \nsplit(pattern, string, max=0) \n根据正则表达式将字符串分割成一个列表 \nsub(pattern, repl, string, max=0)\n把字符串的中符合pattern的部分用repl替换掉\ngroup(num=0) \n返回全部匹配对象(或者指定编号是num的子组)\ngroup()\n包含全部数组的子组的字符串\n\n2.2 match", "import re\nm = re.match('foo', 'foo')\nif m is not None: m.group()\n\nm\n\nm = re.match('foo', 'bar')\nif m is not None: m.group()\n\nre.match('foo', 'foo on the table').group()\n\n# raise attributeError\nre.match('bar', 'foo on the table').group()", "2.3 search\nmatch 从字符串开始位置进行匹配,但是模式出现在字符串的中间的位置比开始位置的概率大得多", "m = re.match('foo','seafood')\nif m is not None: m.group()", "search 函数将返回字符串开始模式首次出现的位置", "re.search('foo', 'seafood').group()", "2.4 匹配多个字符串", "bt = 'bat|bet|bit'\n\nre.match(bt,'bat').group()\n\nre.match(bt, 'blt').group()\n\nre.match(bt, 'He bit me!').group()\n\nre.search(bt, 'He bit me!').group()", "2.5 匹配任意单个字符(.)\n句点不能匹配换行符或者匹配非字符串(空字符串)", "anyend='.end'\n\nre.match(anyend, 'bend').group()\n\nre.match(anyend, 'end').group()\n\nre.search(anyend, '\\nend').group()", "2.6 创建字符集合([ ])", "pattern = '[cr][23][dp][o2]'\n\nre.match(pattern, 'c3po').group()\n\nre.match(pattern, 'c3do').group()\n\nre.match('r2d2|c3po', 'c2do').group()\n\nre.match('r2d2|c3po', 'r2d2').group()", "2.7 分组\n2.7.1匹配邮箱", "patt = '\\w+@(\\w+\\.)?\\w+\\.com'\nre.match(patt, 'nobady@xxx.com').group()\n\nre.match(patt, 'nobody@www.xxx.com').group()\n\n# 匹配多个子域名\npatt = '\\w+@(\\w+\\.)*\\w+\\.com'\nre.match(patt, 'nobody@www.xxx.yyy.zzz.com').group()", "2.7.2 分组表示", "patt = '(\\w\\w\\w)-(\\d\\d\\d)'\nm = re.match(patt, 'abc-123')\n\nm.group()\n\nm.group(1)\n\nm.group(2)\n\nm.groups()\n\nm = re.match('ab', 'ab')\nm.group()\n\nm.groups()\n\nm = re.match('(ab)','ab')\nm.groups()\n\nm.group(1)\n\nm = re.match('(a(b))', 'ab')\nm.group()\n\nm.group(1)\n\nm.group(2)\n\nm.groups()", "2.8 字符串开头或者单词边界\n2.8.1 字符串开头或者结尾", "re.match('^The', 'The end.').group()\n\n# raise attributeError\nre.match('^The', 'end. The').group()", "2.8.2 单词边界", "re.search(r'\\bthe', 'bite the dog').group()\n\nre.search(r'\\bthe', 'bitethe dog').group()\n\nre.search(r'\\Bthe', 'bitthe dog').group()", "2.9 find 模块", "re.findall('car', 'car')\n\nre.findall('car', 'scary')\n\nre.findall('car', 'carry, the barcardi to the car')", "2.10 sub()和subn()函数", "(re.sub('X', 'Mr. Smith', 'attn: X\\n\\nDear X, \\n'))\n\nprint re.subn('X', 'Mr. 
Smith', 'attn: X\\n\\nDear X, \\n')\n\nre.sub('[ae]', 'X', 'abcdedf')", "2.11 split分割", "re.split(':','str1:str2:str3')\n\nfrom os import popen\nfrom re import split\nf = popen('who', 'r')\nfor eachLine in f.readlines():\n print split('\\s\\s+|\\t', eachLine.strip())\nf.close()", "3 搜索和匹配的比较,“贪婪”匹配", "string = 'Thu Feb 15 17:46:04 2007::gaufung@cumt.edu.cn::1171590364-6-8'\npatt = '.+\\d+-\\d+-\\d+'\nre.match(patt, string).group()\n\npatt = '.+(\\d+-\\d+-\\d+)'\nre.match(patt, string).group(1)", "由于通配符“.”默认贪心的,所以'.+'将会匹配尽可能多的字符,所以\n\nThu Feb 15 17:46:04 2007::gaufung@cumt.edu.cn::117159036 \n\n将匹配'.+',而分组匹配的内容则是“4-6-8”,非贪婪算法则通过'?'解决", "patt = '.+?(\\d+-\\d+-\\d+)'\nre.match(patt, string).group(1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cs207-project/TimeSeries
docs/Web_service_demo.ipynb
mit
[ "import os, sys, inspect\ncurrentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\nparentdir = os.path.dirname(currentdir)\nsys.path.insert(0, parentdir)\n\nsys.path.append('/Users/Elena/Desktop/TimeSeries/')\n\nimport time\nimport signal\nimport subprocess\nimport numpy as np\nfrom scipy.stats import norm\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nsns.set_style('white')\nsns.set_context('notebook')\n\nfrom web.web_for_coverage import WebInterface\nfrom timeseries.TimeSeries import TimeSeries", "TIMESERIES WEB API\ninsert_ts(self, pk, ts):\n \"\"\"\n Insert a timeseries into the database by sending a request to the server.\n\n Parameters\n ----------\n primary_key: int\n a unique identifier for the timeseries\n\n ts: a TimeSeries object\n the timeseries object intended to be inserted to database\n \"\"\"\n\ndelete_ts(self, pk):\n\"\"\"\nDelete a timeseries from the database by sending a request to the server.\n\nParameters\n----------\nprimary_key: int\n a unique identifier for the timeseries\n\"\"\"\n\nupsert_meta(self, pk, md):\n\"\"\"\nUpserting metadata into the timeseries in the database designated by the promary key by sending the server a request.\n\nParameters\n----------\nprimary_key: int\n a unique identifier for the timeseries\n\nmetadata_dict: dict\n the metadata to upserted into the timeseries\n\"\"\"\n\nselect(self, md={}, fields=None, additional=None):\n\"\"\"\nSelecting timeseries elements in the database that match the criteria\nset in metadata_dict and return corresponding fields with additional\nfeatures.\n\nParameters\n----------\nmetadata_dict: dict\n the selection criteria (filters)\n (Options : 'blarg', 'order')\n\nfields: dict\n If not `None`, only these fields of the timeseries are returned.\n Otherwise, the timeseries are returned.\n\nadditional: dict\n additional computation to perform on the query matches before they're\n returned. Currently provide \"sort_by\" and \"limit\" functionality\n\n\"\"\"\n\naugmented_select(self, proc, target, arg=None, md={}, additional=None):\n\"\"\"\nParameters\n----------\nproc : enum\n which of the modules in procs,\n or name of module in procs with coroutine main.\n (Options: 'corr', 'junk', 'stats')\ntarget : array of fieldnames\n will be mapped to the array of results from the coroutine.\n If the target is None rather than a list of fields, we'll assume no upserting\narg : additional argument\n (ex : Timeseries object)\nmetadata_dict : dict\n store info for TimeSeries other than TimeSeries object itself\n (ex. 
vantage point is metadata_dict['ts-14']['vp']\nadditional : dict\n (Options: {\"sort_by\":\"-order\"})\n\nReturns\n-------\ntsdb status &amp; payload\n\"\"\"\n\nadd_trigger(self, proc, onwhat, target, arg=None):\n\"\"\"\nSend the server a request to add a trigger.\n\nParameters\n----------\n`proc` : enum\n which of the modules in procs,\n or name of module in procs with coroutine main.\n (Options: 'corr', 'junk', 'stats')\n`onwhat` :\n which op is this trigger running on\n (ex : \"insert_ts\")\n`target` : array of fieldnames\n will be mapped to the array of results from the coroutine.\n If the target is None rather than a list of fields, we'll assume no upserting\n`arg` :\n additional argument\n (ex : Timeseries object)\n\"\"\"\n\nremove_trigger(self, proc, onwhat, target=None):\n\"\"\"\nSend the server a request to REMOVE a trigger.\n\nParameters\n----------\n`proc` : enum\n which of the modules in procs,\n or name of module in procs with coroutine main.\n (Options: 'corr', 'junk', 'stats')\n`onwhat` :\n which op is this trigger running on\n (ex : \"insert_ts\")\n`target` : array of fieldnames\n will be mapped to the array of results from the coroutine.\n If the target is None rather than a list of fields, we'll assume no upserting\n\"\"\"\n\nHere in the notebook we use subprocess to launch our server, to see the output better, we strongly suggest to launch server directly in terminal whenever possible!\nBelow queries are just to offer you an sense of how our web api works, feel free to try more with the api description we proveded above!", "# server = subprocess.Popen(['python', '../go_persistent_server.py'])\n# time.sleep(3)\n\n# web = subprocess.Popen(['python', '../go_web.py'])\n# time.sleep(3)\n\nweb_interface = WebInterface()\n\nresults = web_interface.add_trigger(\n 'junk', 'insert_ts', None, 'db:one:ts')\nassert results[0] == 200\nprint(results)\n\nresults = web_interface.add_trigger(\n 'stats', 'insert_ts', ['mean', 'std'], None)\nassert results[0] == 200\nprint(results[1])\n\ndef tsmaker(m, s, j):\n '''\n Helper function: randomly generates a time series for testing.\n\n Parameters\n ----------\n m : float\n Mean value for generating time series data\n s : float\n Standard deviation value for generating time series data\n j : float\n Quantifies the \"jitter\" to add to the time series data\n\n Returns\n -------\n A time series and associated meta data.\n '''\n\n # generate metadata\n meta = {}\n meta['order'] = int(np.random.choice(\n [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]))\n meta['blarg'] = int(np.random.choice([1, 2]))\n\n # generate time series data\n t = np.arange(0.0, 1.0, 0.01)\n v = norm.pdf(t, m, s) + j * np.random.randn(ts_length)\n\n # return time series and metadata\n return meta, TimeSeries(t, v)\n\nmus = np.random.uniform(low=0.0, high=1.0, size=50)\nsigs = np.random.uniform(low=0.05, high=0.4, size=50)\njits = np.random.uniform(low=0.05, high=0.2, size=50)\nts_length = 100\n\n# initialize dictionaries for time series and their metadata\ntsdict = {}\nmetadict = {}\nfor i, m, s, j in zip(range(50), mus, sigs, jits):\n meta, tsrs = tsmaker(m, s, j)\n # the primary key format is ts-1, ts-2, etc\n pk = \"ts-{}\".format(i)\n tsdict[pk] = tsrs\n meta['vp'] = False # augment metadata with a boolean asking if this is a VP.\n metadict[pk] = meta\n\n\nvpkeys = [\"ts-{}\".format(i) for i in np.random.choice(range(50), size=5, replace=False)]\nfor i in range(5):\n # add 5 triggers to upsert distances to these vantage points\n # data = json.dumps({'proc':'corr', 
'onwhat':'insert_ts', 'target':[\"d_vp-{}\".format(i)], 'arg':tsdict[vpkeys[i]].to_json()})\n # r = requests.post(self.web_url+'/add_trigger', data)\n\n r = web_interface.add_trigger('corr', 'insert_ts', [\"d_vp-{}\".format(i)], tsdict[vpkeys[i]].to_json())\n assert(r[0] == 200)\n # change the metadata for the vantage points to have meta['vp']=True\n metadict[vpkeys[i]]['vp'] = True", "Having set up the triggers, now insert the time series, and upsert the metadata\n==========================================\nWhen it's first time to insert these keys in TSDB_server,\ninsert_ts will work and return TSDBStatus.OK\n==========================================", "for k in tsdict:\n results = web_interface.insert_ts(k, tsdict[k])\n assert results[0] == 200\n # upsert meta\n results = web_interface.upsert_meta(k, metadict[k])\n assert results[0] == 200\n\nresults = web_interface.add_trigger(\n 'junk', 'insert_ts', None, 'db:one:ts')\n\nresults\n\n# ==========================================\n# However if it's not first time to insert these keys,\n# insert_ts will return TSDBStatus.INVALID_KEY\n# ==========================================\n# pick a random pk\nidx = np.random.choice(list(tsdict.keys()))\n\n# check that the time series is there now\nresults = web_interface.select({\"primary_key\": idx})\nassert results[0] == 200\n\n# delete an existing time series\nresults = web_interface.delete_ts(idx)\nassert results[0] == 200\n\n# check that the time series is no longer there\nresults = web_interface.select({\"md\":{\"order\": 1}, \"fields\":[\"ts\"], \"additional\":{\"sort_by\":\"-order\"}})\nassert results[0] == 200\n\n# add the time series back in\nresults = web_interface.insert_ts(idx, tsdict[idx])\nassert results[0] == 200" ]
[ "code", "markdown", "code", "markdown", "code" ]
TheOregonian/long-term-care-db
notebooks/analysis/washington-gardens.ipynb
mit
[ "Data were munged here.", "import pandas as pd\nimport numpy as np\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\ndf = pd.read_csv('../../data/processed/complaints-3-25-scrape.csv')", "<h3>How many substantiated complaints occured by the time Marian Ewins moved to Washington Gardens?</h3>\n\nMarian Ewins moved in to Washington Gardens in May, 2015.", "move_in_date = '2015-05-01'", "The facility_id for Washington Gardens is 50R382.", "df[(df['facility_id']=='50R382') & (df['incident_date']<move_in_date)].count()[0]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
facaiy/book_notes
machine_learning/logistic_regression/demo.ipynb
cc0-1.0
[ "# %load /Users/facai/Study/book_notes/preconfig.py\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport numpy as np\nimport scipy as sp\n\nimport pandas as pd\npd.options.display.max_rows = 20", "逻辑回归算法简介和Python实现\n0. 实验数据", "names = [(\"x\", k) for k in range(8)] + [(\"y\", 8)]\ndf = pd.read_csv(\"./res/dataset/pima-indians-diabetes.data\", names=names)\ndf.head(3)", "1. 二分类\n1.0 基本原理\nref: http://www.robots.ox.ac.uk/~az/lectures/ml/2011/lect4.pdf\n逻辑回归,是对线性分类 $f(x) = w^T x + b$ 结果,额外做了sigmoid变换$\\sigma(x) = \\frac1{1 + e^x}$。加了sigmoid变换,更适合于做分类器,效果见下图。", "x = np.linspace(-1.5, 1.5, 1000)\n\ny1 = 0.5 * x + 0.5\ny2 = sp.special.expit(5 * x)\n\npd.DataFrame({'linear': y1, 'logistic regression': y2}).plot()", "所以,逻辑回归的定义式是\n\\begin{equation}\n g(x) = \\sigma(f(x)) = \\frac1{1 + e^{-(w^T x + b)}}\n\\end{equation}\n那么问题来了,如何找到参数$w$呢?\n1.0.0 损失函数\n在回答如何找之前,我们得先定义找什么:即什么参数是好的?\n观察到,对于二分类 $y \\in {-1, 1}$,有\n\\begin{align}\n g(x) & \\to 1 \\implies y = 1 \\\n 1 - g(x) & \\to 1 \\implies y = 0\n\\end{align}\n可以将$g(x)$看作是$y = 1$的概率值,$1 - g(x)$看作是$y = 0$的概率值,整理为:\n\\begin{align}\n P(y = 1 | x, w) &= g(x) & &= \\frac1{1 + e^{-z}} &= \\frac1{1 + e^{-y z}} \\\n P(y = 0 | x, w) &= 1 - g(x) &= 1 - \\frac1{1 + e^{-z}} &= \\frac1{1 + e^z} &= \\frac1{1 + e^{-y z}} \\\n\\end{align}\n即,可合并为 $P(y|x, w) = \\frac1{1 + e^{-y z}}$,其中$z = w^T x + b$。\n好了,常理而言,我们可以认为,对于给定的$x$,有$w$,使其对应标签$y$的概率值越大,则此$w$参数越好。\n在训练数据中是有许多样本的,即有多个$x$,如何全部利用起来呢?\n假定样本间是独立的,最直接的想法是将它们的预测概率值累乘,又称Maximum likelihood estimation\n\\begin{equation}\n{\\mathcal {L}}(\\theta \\,;\\,x_{1},\\ldots ,x_{n})=f(x_{1},x_{2},\\ldots ,x_{n}\\mid \\theta )=\\prod {i=1}^{n}f(x{i}\\mid \\theta )\n\\end{equation}\n好,我们用这个方法来描述最合适的参数$w$是\n\\begin{align}\n w &= \\operatorname{arg \\ max} \\prod_i^n P(y_i | x_i, w) \\\n &= \\operatorname{arg \\ min} - \\log \\left ( \\prod_i^n P(y_i | x_i, w) \\right ) \\quad \\text{用negative log likelihood转成极小值} \\\n & = \\operatorname{arg \\ min} - \\log \\left ( \\prod_i^n \\frac1{1 + e^{-y z}} \\right ) \\\n & = \\operatorname{arg \\ min} \\sum_i^n - \\log \\left ( \\frac1{1 + e^{-y z}} \\right ) \\\n & = \\operatorname{arg \\ min} \\sum_i^n \\log ( 1 + e^{-y z} ) \\\n\\end{align}\n于是,我们可以定义损失函数\n\\begin{equation}\n L(w) = \\log (1 + e^{-y z}) = \\log \\left ( 1 + e^{-y (w^T x + b)} \\right )\n\\end{equation}\n最好旳$w$值是$\\operatorname{arg \\ min} L(w)$。\n1.0.1 一阶导数\n知道找什么这个目标后,怎么找就比较套路了。因为定义了损失函数,很自然地可以用数值寻优方法来寻找。很多数值寻优方法需要用到一阶导数信息,所以简单对损失函数求导,可得:\n\\begin{align}\n \\frac{\\partial L}{\\partial w} &= \\frac1{1 + e^{-y (w^T x + b)}} \\cdot e^{-yb} \\cdot -yx e^{-y w^T x} \\\n &= \\frac{e^{-y (w^T x + b)}}{1 + e^{-y (w^T x + b)}} \\cdot -y x \\\n &= -y \\left ( 1 - \\frac1{1 + e^{-y (w^T x + b)}} \\right ) x\n\\end{align}\n有了损失函数和导数,用数值寻优方法就可找到合适的$w$值,具体见下一节的演示。\n1.1 实现演示\n上一节,我们得到了损失函数和导数:\n\\begin{align}\n L(w) &= \\log \\left ( 1 + e^{-y (w^T x + b)} \\right ) \\\n \\frac{\\partial L}{\\partial w} &= -y \\left ( 1 - \\frac1{1 + e^{-y (w^T x + b)}} \\right ) x\n\\end{align}\n对于$n$个训练样本,需要加总起来:\n\\begin{align}\n L(w) &= \\sum_i^n \\log \\left ( 1 + e^{-y (w^T x + b)} \\right ) \\\n \\frac{\\partial L}{\\partial w} &= \\sum_i^n -y \\left ( 1 - \\frac1{1 + e^{-y (w^T x + b)}} \\right ) x\n\\end{align}\n注意:导数是个向量。\n我们可以用矩阵运算,来替换掉上面的向量运算和累加。\n令有训练集$\\mathbf{X} = [x_0; x_1; \\dots; x_n]^T$,其中毎个样本长度为$m$,即有$x_0 = [x_0^0, x_0^1, \\dots, x_0^m]$。对应有标签集$y = [y_0, y_1, \\dots, y_n]^T$。\n令参数值$w$,矩阵乘法记为$\\times$,向量按元素相乘记为$\\cdot$,则可改写前面公式为:\n\\begin{align}\n z &= \\exp \\left ( -y \\cdot (X \\times w) 
\\right ) \\\n L(w) &= \\sum \\log (1 + z) \\\n \\frac{\\partial L}{\\partial w} &= (-y \\cdot (1 - \\frac1{1 + z}))^T \\times X\n\\end{align}\n依上式写函数如下:", "def logit_loss_and_grad(w, X, y):\n w = w[:, None] if len(w.shape) == 1 else w\n \n z = np.exp(np.multiply(-y, np.dot(X, w)))\n loss = np.sum(np.log1p(z))\n grad = np.dot((np.multiply(-y, (1 - 1 / (1 + z)))).T, X)\n \n return loss, grad.flatten()\n\n# 测试数据\nX = df[\"x\"].as_matrix()\n# 标签\ny = df[\"y\"].as_matrix()\n\n# 初始权重值\nw0 = np.zeros(X.shape[1])\n\n# 演示一轮损失函数和导数值\nlogit_loss_and_grad(w0, X, y)\n\n# 调过数值寻优方法,求解得到w \n\n(w, loss, info) = sp.optimize.fmin_l_bfgs_b(logit_loss_and_grad, w0, args=(X, y))\n\nw\n\n# 预测概率值\ny_pred_probability = 1 / (1 + np.exp(np.multiply(-y, np.dot(X, w[:, None]))))\n# 预测结果\ny_pred = (y_pred_probability >= 0.5).astype(int)\n\nfrom sklearn.metrics import accuracy_score, auc, precision_score\n\n# 预测准确度\naccuracy_score(y, y_pred)\n\nauc(y, y_pred_probability, reorder=True)", "好了,到此,就完成了二分类的演示。\n2. 多分类\n2.0 基本原理\n前面讲到二分类借助的是logistic function,当问题推广到多分类时,自然而然想到就可借助softmax functioin。\n\nIn mathematics, the softmax function, or normalized exponential function, is a generalization of the logistic function)$\\sigma (\\mathbf {z} ){j}={\\frac {e^{z{j}}}{\\sum {k=1}^{K}e^{z{k}}}}$\n\n可以看到softmax本身是很简单的形式,它要求给出模型预测值$e^{z_k}$,归一化作为概率值输出。所以只要指定这个$e^{z_k}$是线性模型$\\beta_0 + \\beta x$,问题就解决了。具体见下面。\n对于K分类问题,假设有样本$x$,标签$y \\in {0, 1, 2, ..., K - 1}$。我们将$y = 0$选为标杆,得到K-1个线性模型:\n\\begin{align}\n \\log \\frac{P(y = 1 | x)}{P(y = 0 | x)} &= \\beta_{10} + \\beta_1 x \\\n \\cdots \\\n \\log \\frac{P(y = K - 1 | x)}{P(y = 0 | x)} &= \\beta_{(K-1)0} + \\beta_{K-1} x \\\n\\end{align}\n特别地,用同样的式子列写$y = 0$,可得到\n\\begin{align}\n \\log \\frac{P(y = 0 | x)}{P(y = 0 | x)} &= \\log(1) \\\n & = 0 \\\n & = 0 + [0, 0, \\dots, 0] x \\\n & = \\beta_{00} + \\beta_0 x \\quad \\text{令$\\beta_0$是零阵} \\\n\\end{align}\n也就是说,令$\\beta_0$是零阵,则可以将上面式子统一写为\n\\begin{equation}\n \\log \\frac{P(y = k | x)}{P(y = 0 | x)} = \\beta_{k0} + \\beta_k x\n\\end{equation}\n即有 $P(y = k | x) = e^{\\beta_{k0} + \\beta_k} x P(y = 0 | x)$, \n又概率相加总为1,$\\sum_k P(y = k | x) = 1$,两式联立可得:\n\\begin{equation}\n P(y = k | x) = \\frac{e^{\\beta_{k0} + \\beta_k x}}{\\sum_i e^{\\beta_{i0} + \\beta_i x}}\n\\end{equation}\n我们希望模型参数$\\beta$使对应的$P(y | x, \\beta)$时越大越好,所以,可定义损失函数为\n\\begin{align}\n L(\\beta) &= -log P(y = k | x, \\beta) \\\n &= \\log(\\sum_i e^{\\beta_{i0} + \\beta_i x)}) - (\\beta_{k0} + \\beta_k x)\n\\end{align}\n再求得一阶导数\n\\begin{align}\n \\frac{\\partial L}{\\partial \\beta} &= \\frac1{\\sum_i e^{\\beta_{i0} + \\beta_i x} x} e^{\\beta_{k0} + \\beta_k x} - x I(y = k) \\\n &= x \\left ( \\frac{e^{\\beta_{k0} + \\beta_k x}}{\\sum_i e^{\\beta_{i0} + \\beta_i x}} - I(y = k) \\right ) \\\n\\end{align}\n有了损失函数和一阶导数,同样地,可以使用数值优化法来寻优。上面的式子容易溢出,相应的变形见逻辑回归在spark中的实现简介第2.2节。\n2.1 特例\n令$K=2$,代入多分类式子:\n\\begin{align}\n P(y = 1 | x) &= \\frac{e^{\\beta_{k0} + \\beta_k x}}{\\sum_i e^{\\beta_{i0} + \\beta_i x}} |{k = 1} \\\n &= \\frac{e^{\\beta{10} + \\beta_1 x}}{e^{\\beta_{00} + \\beta_0 x} + e^{\\beta_{10} + \\beta_1 x}} \\\n &= \\frac{e^{\\beta_{10} + \\beta_1 x}}{1 + e^{\\beta_{10} + \\beta_1 x}} \\\n &= \\frac1{1 + e^{- (\\beta_{10} + \\beta_1 x)}} \\\n &= \\frac1{1 + e^{- (w^T x + b)}} \\\n\\end{align}\n可以看到,二分类逻辑回归只是多分类的特例。\n3.0 小结\n本文简要介绍了逻辑回归二分类和多分类的理论表达式,并对二分类做了代码演示。" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
landmanbester/fundamentals_of_interferometry
6_Deconvolution/6_1_sky_models.ipynb
gpl-2.0
[ "Outline\nGlossary\n6. Deconvolution in Imaging \nPrevious: 6. Introduction \nNext: 6.2 Interative Deconvolution with Point Sources (CLEAN)\n\n\n\n\nImport standard modules:", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS", "Import section specific modules:", "import matplotlib.image as mpimg\nfrom IPython.display import Image\nfrom astropy.io import fits\nimport aplpy\n\n#Disable astropy/aplpy logging\nimport logging\nlogger0 = logging.getLogger('astropy')\nlogger0.setLevel(logging.CRITICAL)\nlogger1 = logging.getLogger('aplpy')\nlogger1.setLevel(logging.CRITICAL)\n\nfrom IPython.display import HTML\nHTML('../style/code_toggle.html')", "6.1 Sky Models<a id='deconv:sec:skymodels'></a>\nBefore we dive into deconvolution methods we need to introduce the concept of a sky model. Since we are making an incomplete sampling of the visibilities with limited resolution we do not recover the 'true' sky from an observation. The dirty image is the 'true' sky convolved (effectively blurred) out by the array point spread function (PSF). As discussed in the previous chapter, the PSF acts as a type of low-pass spatial filter limiting our resolution of the sky. We would like to some how recover a model for the true sky. At the end of deconvolution one of the outputs is the sky model.\nWe can look at the deconvolution process backwards by taking an ideal sky, shown in the left figure below (it may be difficult to see but there are various pixels with different intensities), and convolving it with the PSF response of the KAT-7 array, shown on the right, this is a point-source based sky model which will be discussed below. The sky model looks like a mostly empty image with a few non-zero pixels. The PSF is the same as the one shown in the previous chapter.", "fig = plt.figure(figsize=(16, 7))\n\ngc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits', \\\n figure=fig, subplot=[0.0,0.1,0.35,0.8])\ngc1.show_colorscale(vmin=-0.1, vmax=1.0, cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('Sky Model')\ngc1.add_colorbar()\n\ngc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits', \\\n figure=fig, subplot=[0.5,0.1,0.35,0.8])\ngc2.show_colorscale(cmap='viridis')\ngc2.hide_axis_labels()\ngc2.hide_tick_labels()\nplt.title('KAT-7 PSF')\ngc2.add_colorbar()\n\nfig.canvas.draw()", "Left: a point-source sky model of a field of sources with various intensities. Right: PSF response of KAT-7 for a 6 hour observation at a declination of $-30^{\\circ}$.\nBy convolving the ideal sky with the array PSF we effectively are recreating the dirty image. The figure on the left below shows the sky model convolved with the KAT-7 PSF. The centre image is the original dirty image created using uniform weighting in the previous chapter. The figure on the right is the difference between the two figures. The negative offset is an effect of the imager producing an absolute value PSF image. 
The main point to note is that the difference image shows the bright sources removed resulting in a fairly noise-like image.", "fig = plt.figure(figsize=(16, 5))\n\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits')\nskyModel = fh[0].data\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits')\npsf = fh[0].data\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits')\ndirtyImg = fh[0].data\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits')\nresidualImg = fh[0].data\n\n#convolve the sky model with the PSF\nsampFunc = np.fft.fft2(psf) #sampling function\nskyModelVis = np.fft.fft2(skyModel) #sky model visibilities\nsampModelVis = sampFunc * skyModelVis #sampled sky model visibilities\nconvImg = np.fft.fftshift(np.fft.ifft2(sampModelVis)).real + residualImg #sky model convolved with PSF\n\ngc1 = aplpy.FITSFigure(convImg, figure=fig, subplot=[0,0.0,0.30,1])\ngc1.show_colorscale(vmin=-1., vmax=3.0, cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('PSF convolved with Sky Model')\ngc1.add_colorbar()\n\ngc2 = aplpy.FITSFigure(dirtyImg, figure=fig, subplot=[0.33,0.0,0.30,1])\ngc2.show_colorscale(vmin=-1., vmax=3.0, cmap='viridis')\ngc2.hide_axis_labels()\ngc2.hide_tick_labels()\nplt.title('Dirty')\ngc2.add_colorbar()\n\ngc3 = aplpy.FITSFigure(dirtyImg - convImg, figure=fig, subplot=[0.67,0.0,0.30,1])\ngc3.show_colorscale(cmap='viridis')\ngc3.hide_axis_labels()\ngc3.hide_tick_labels()\nplt.title('Difference')\ngc3.add_colorbar()\n\nfig.canvas.draw()", "Left: the point-source sky model convolved with the KAT-7 PSF with the residual image added. Centre: the original dirty image. Right: the difference between the PSF-convoled sky model and the dirty image.\nNow that we see we can recreate the dirty image from a sky model and the array PSF, we just need to learn how to do the opposite operation, deconvolution. In order to simplify the process we incorporate knowledge about the sky primarily by assuming a simple model for the sources.\n6.1.1 The Point Source Assumption\nWe can use some prior information about the sources in the sky and the array as a priori information in our deconvolution attempt. The array and observation configuration results in a PSF which has a primary lobe of a particular scale, this is the effective resolution of the array. As we have seen in $\\S$ 5.4 &#10142; the choice of weighting functions can result in different PSF resolutions. But, no matter the array there is a limit to the resolution, so any source which has a smaller angular scale than the PSF resolution appears to be a point source. A point source is an idealized source which has no angular scale and is represented by a spatial Dirac $\\delta$-function. 
Though all sources in the sky have an angular scale, many are much smaller than the angular resolution of the array PSF, so they can be approximated as simple point sources.\nA nice feature of the point-source model of a source is that the Fourier transform of a Dirac $\delta$-function, by the Fourier shift theorem, is a simple complex phase function times the constant flux\n$$\n\mathscr{F} \{ C(\nu) \cdot \delta\,(l - l_0, m - m_0)\}(u, v) = C(\nu) \cdot\iint \limits_{-\infty}^{\infty} \delta\,(l - l_0, m - m_0) \, e^{-2 \pi i (ul + vm)}\,dl\,dm = C(\nu) \cdot e^{-2 \pi i (ul_0 + vm_0)} \quad (6.1.1)\n$$\nwhere $C(\nu)$ is the flux of the source (which can include a dependence on observing frequency $\nu$) and $(l_0, m_0)$ is the position of the source. For a source at the phase centre, i.e. $(l_0, m_0) = (0, 0)$, $\mathscr{F} \{ C \cdot \delta\,(0, 0)\} = C$. A $\delta$-function based sky model leads to a nice, and computationally fast, method to generate visibilities, which is useful for deconvolution methods as we will see later in this chapter.\nOf course we need to consider whether using a collection of $\delta$-functions for a sky model is actually a good idea. The short answer is 'yes', the long answer is 'yes, up to a limit', and current research is focused on using more advanced techniques to improve deconvolution. This is because the 2D Dirac $\delta$-function can be used as a complete orthogonal basis set to describe any 2D function. Most sources in the sky are unresolved, that is, they have a much smaller angular scale than the resolution of the array PSF. Sky sources which are resolved are a bit trickier; we will consider these sources later in the section. \nA $\delta$-function basis set is also good for astronomical images because these images are generally sparse. That is, out of the thousands or millions of pixels in the image, only a few of the pixels contain sky sources above the observed noise floor (i.e. these pixels contain the information we desire), while the rest of the pixels contain mostly noise (i.e. contain no information). This differs from a natural image, for example a photograph of a duck, which is simply a collection of $\delta$-functions with different constant scale factors, one for each pixel in the image. Every pixel in a natural image generally contains information, so in the $\delta$-function basis set a natural image is not sparse. As a side note, a natural image is usually sparse in wavelet space, which is why image compression uses wavelet transforms.\nSince an astronomical image is sparse, we should be able to reduce the image down to separate out the noise from the sources and produce a small model which represents the true sky; this is the idea behind deconvolution. Looked at in a different way, deconvolution is a process of filtering the true sky flux out of the instrument-induced noise in each pixel. This idea of sparseness and information in aperture synthesis images is related to the field of compressed sensing. Much of the current research in radio interferometric deconvolution is framed in the compressed sensing context.\nAt the end of a deconvolution process we end up with two products: the sky model and a set of residuals. 
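To make the visibility-prediction point concrete, the following is a small NumPy sketch, not part of the original notebook, of how model visibilities can be computed directly from a $\delta$-function sky model using equation 6.1.1; the arrays `uv`, `lm` and `flux` are illustrative placeholders for the sampled uv-coordinates (in wavelengths), the source positions (direction cosines) and the source fluxes.

```python
import numpy as np

# hypothetical inputs: sampled (u, v) points, point-source positions (l_0, m_0) and fluxes C
uv = np.random.uniform(-1000.0, 1000.0, size=(1000, 2))
lm = np.array([[0.001, -0.002], [0.0005, 0.0015]])
flux = np.array([3.55, 2.29])

# equation 6.1.1: each delta-function contributes C * exp(-2*pi*i*(u*l_0 + v*m_0))
phase = -2.0j * np.pi * (uv[:, 0, None] * lm[None, :, 0] + uv[:, 1, None] * lm[None, :, 1])
model_vis = (flux[None, :] * np.exp(phase)).sum(axis=1)  # one complex visibility per (u, v) sample
```

This direct evaluation is what makes $\delta$-function sky models computationally convenient during deconvolution.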
A sky model can be as simple as a list of positions (either pixel coordinates or sky positions) and a flux value (and maybe a spectral index), e.g.\n| Source ID | RA (H) | Dec (deg) | Flux (Jy) | SPI |\n| --------- | ----------- | ------------ | --------- | ----- |\n| 0 | 00:02:18.81 | -29:47:17.82 | 3.55 | -0.73 |\n| 1 | 00:01:01.84 | -30:06:27.53 | 2.29 | -0.52 |\n| 2 | 00:03:05.54 | -30:00:22.57 | 1.01 | -0.60 |\nTable: A simple sky model of three unpolarized sources with different spectral indices.\nIn this simple sky model there are three sources near right ascension 0 hours and declination $-30^{\circ}$; each source has an unpolarized flux in Janskys and a spectral index.\nThe residuals are generally an image which results from the subtraction of the sky model from the original image. Additionally, a restored image is often produced; this is constructed from the sky model and residual image. This will be discussed further in the next few sections. Examples of residual and restored images are shown below.", "fig = plt.figure(figsize=(16, 7))\n\ngc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', \\\n figure=fig, subplot=[0.1,0.1,0.35,0.8])\ngc1.show_colorscale(vmin=-0.8, vmax=3., cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('Residual')\ngc1.add_colorbar()\n\ngc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits', \\\n figure=fig, subplot=[0.5,0.1,0.35,0.8])\ngc2.show_colorscale(vmin=-0.8, vmax=3., cmap='viridis')\ngc2.hide_axis_labels()\ngc2.hide_tick_labels()\nplt.title('Restored')\ngc2.add_colorbar()\n\nfig.canvas.draw()", "Left: residual image after running a CLEAN deconvolution. Right: restored image constructed by convolving the point-source sky model with an 'ideal' PSF and adding the residual image.\nDeconvolution, as can be seen in the figure on the left, builds a sky model by subtracting sources from the dirty image and adding them to the sky model. The resulting image shows the residual flux which was not added to the sky model. The restored image is a reconstruction of the field made by convolving the sky model with an 'ideal' PSF and adding the residual image. This process will be discussed in the sections that follow.\nThe assumption that most sources are point source-like, or can be represented by a set of point sources, is the basis for the standard deconvolution process in radio interferometry, CLEAN. Though there are many new methods to perform deconvolution, CLEAN is the standard method and continues to dominate the field.\n6.1.2 Resolved Sources\nWe need to consider what it means for a source to be 'resolved'. Each baseline in an array measures a particular spatial resolution. If the angular scale of a source is smaller than the spatial resolution of the longest baseline then the source is unresolved on every baseline. If the angular scale of a source is larger than the spatial resolution of the shortest baseline then the source is resolved on all baselines. In between these two extremes a source is resolved on the longer baselines and unresolved on the shorter baselines, and thus the source is said to be partially resolved. The term extended source is often used as a synonym for fully- and partially-resolved sources.\nSimple Gaussian extended sources are shown in the figure below. All sources have been normalized to have the same integrated flux. 
On the left is a very extended source, moving right is progressively smaller extended sources until the right most source which is a nearly point-source like object. Transforming each source into the visibility space, via Fourier transform, we plot the flux of each source as a function of baseline length. Baseline direction does not matter in these simple examples because the sources are circular Gaussians. For a very extended source (blue) the flux drops off quickly as a function of baseline length. In the limit the Gassuain size is decreased to that of a delta function the flux distribution (dashed black) is flat across all baseline lengths (this is the ideal case).", "def gauss2d(sigma):\n \"\"\"Return a normalized 2d Gaussian function, sigma: size in pixels\"\"\"\n return lambda x,y: (1./(2.*np.pi*(sigma**2.))) * np.exp(-1. * ((xpos**2. + ypos**2.) / (2. * sigma**2.)))\n\nimgSize = 512\nxpos, ypos = np.mgrid[0:imgSize, 0:imgSize].astype(float)\nxpos -= imgSize/2.\nypos -= imgSize/2.\nsigmas = [64., 16., 4., 1.]\n\nfig = plt.figure(figsize=(16, 7))\n\n#Gaussian image-domain source\nax1 = plt.subplot2grid((2, 4), (0, 0))\ngauss1 = gauss2d(sigmas[0])\nax1.imshow(gauss1(xpos, ypos))\nax1.axis('off')\nplt.title('Sigma: %i'%int(sigmas[0]))\n\n#Gaussian image-domain source\nax2 = plt.subplot2grid((2, 4), (0, 1))\ngauss2 = gauss2d(sigmas[1])\nax2.imshow(gauss2(xpos, ypos))\nax2.axis('off')\nplt.title('Sigma: %i'%int(sigmas[1]))\n\n#Gaussian image-domain source\nax3 = plt.subplot2grid((2, 4), (0, 2))\ngauss3 = gauss2d(sigmas[2])\nax3.imshow(gauss3(xpos, ypos))\nax3.axis('off')\nplt.title('Sigma: %i'%int(sigmas[2]))\n \n#Gaussian image-domain source\nax4 = plt.subplot2grid((2, 4), (0, 3))\ngauss4 = gauss2d(sigmas[3])\nax4.imshow(gauss4(xpos, ypos))\nax4.axis('off')\nplt.title('Sigma: %i'%int(sigmas[3]))\n\n#plot the visibility flux distribution as a function of baseline length\nax5 = plt.subplot2grid((2, 4), (1, 0), colspan=4)\nvisGauss1 = np.abs( np.fft.fftshift( np.fft.fft2(gauss1(xpos, ypos))))\nvisGauss2 = np.abs( np.fft.fftshift( np.fft.fft2(gauss2(xpos, ypos))))\nvisGauss3 = np.abs( np.fft.fftshift( np.fft.fft2(gauss3(xpos, ypos))))\nvisGauss4 = np.abs( np.fft.fftshift( np.fft.fft2(gauss4(xpos, ypos))))\nax5.plot(visGauss1[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[0]))\nax5.plot(visGauss2[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[1]))\nax5.plot(visGauss3[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[2]))\nax5.plot(visGauss4[int(imgSize/2),int(imgSize/2):], label='%i'%int(sigmas[3]))\nax5.hlines(1., xmin=0, xmax=int(imgSize/2)-1, linestyles='dashed')\nplt.legend()\nplt.ylabel('Flux')\nplt.xlabel('Baseline Length')\nplt.xlim(0, int(imgSize/8)-1)\nax5.set_xticks([])\nax5.set_yticks([])", "Top Figures: simple Gaussian extented sources, all sources are normalized to the same integrated flux. Bottom figure: the baseline length-dependent flux distribution in the visibility space for each source (labels indicate relavive width of source compared to the smallest source). The dashed line represents an idealized point source with the same flux as the Gaussian sources.\nThe smaller the extended source the better a $\\delta$-function is at approximating the source. But, a $\\delta$-function does not work well for extended sources. A simple solution is to use a set of point sources to model an extended object. This can work well to an extent, but there is a trade off to be made. 
Because the $\\delta$-function has a flat baseline-dependent response in the visibility domain the curves shown in the plot above can not be modeled with $\\delta$-functions, the curves can only be approximated. As the limits of using a $\\delta$-function sky model for extended sources is reached new sky models need to be introduced. Though it is beyond the scope of this work there has been siginificant work done on using Gaussian <cite data-cite='2008ISTSP...2..793C'>[1]</cite> &#10548; and Shapelet <cite data-cite='2002ApJ...570..447C'>[2]</cite> &#10548; <cite data-cite='python_shapelets'>[3]</cite> &#10548; functions to model extended sources.\nAn additional issue that arises is that an array PSF size varies depending on observing frequency during an observation. An observation is made over a set frequency bandwidth, and the baseline length is measured in number of wavelengths and not metres. We used this to our advantage to fill the uv-plane in $\\S$ 5.2 &#10142;. The highest frequency observation will have a better resolution then the lowest frequency observation. In the past this was not an issue because the difference in fractional bandwidth between the top and bottom of the observing band was usually small and the resolution could be approximated as constant across the band. But, modern telescopes have large fractional bandwidths, resulting in significant resolution difference between the lowest and highest frequencies. This leads to issues where, on a single baselines, a source can be resolved at high frequency but is unresolved at a lower frequency. The issue imaging and deconvolution with wide bandwidth systems is a topic of active research.\nNow that we have justified the use of $\\delta$-functions to describe a sky model, even as approximations to extended sources, we are now ready to see how a sky model is constructed from the dirty image via deconvolution.\n\nNext: 6.2 Interative Deconvolution with Point Sources (CLEAN)\n<div class=warn><b>Future Additions:</b></div>\n\n\nexample: extended object, cygnus a?\nexample: point source, gaussian, shapelet source models of the same source" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/formulas.ipynb
bsd-3-clause
[ "Formulas: Fitting models using R-style formulas\nSince version 0.5.0, statsmodels allows users to fit statistical models using R-style formulas. Internally, statsmodels uses the patsy package to convert formulas and data to the matrices that are used in model fitting. The formula framework is quite powerful; this tutorial only scratches the surface. A full description of the formula language can be found in the patsy docs: \n\nPatsy formula language description\n\nLoading modules and functions", "import numpy as np # noqa:F401 needed in namespace for patsy\nimport statsmodels.api as sm", "Import convention\nYou can import explicitly from statsmodels.formula.api", "from statsmodels.formula.api import ols", "Alternatively, you can just use the formula namespace of the main statsmodels.api.", "sm.formula.ols", "Or you can use the following convention", "import statsmodels.formula.api as smf", "These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance", "sm.OLS.from_formula", "All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a structured array or a dictionary of variables. \ndir(sm.formula) will print a list of available models. \nFormula-compatible models have the following generic call signature: (formula, data, subset=None, *args, **kwargs)\nOLS regression using formulas\nTo begin, we fit the linear model described on the Getting Started page. Download the data, subset columns, and list-wise delete to remove missing observations:", "dta = sm.datasets.get_rdataset(\"Guerry\", \"HistData\", cache=True)\n\ndf = dta.data[[\"Lottery\", \"Literacy\", \"Wealth\", \"Region\"]].dropna()\ndf.head()", "Fit the model:", "mod = ols(formula=\"Lottery ~ Literacy + Wealth + Region\", data=df)\nres = mod.fit()\nprint(res.summary())", "Categorical variables\nLooking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories.\nIf Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C() operator:", "res = ols(formula=\"Lottery ~ Literacy + Wealth + C(Region)\", data=df).fit()\nprint(res.params)", "Patsy's mode advanced features for categorical variables are discussed in: Patsy: Contrast Coding Systems for categorical variables\nOperators\nWe have already seen that \"~\" separates the left-hand side of the model from the right-hand side, and that \"+\" adds new columns to the design matrix. \nRemoving variables\nThe \"-\" sign can be used to remove columns/variables. For instance, we can remove the intercept from a model by:", "res = ols(formula=\"Lottery ~ Literacy + Wealth + C(Region) -1 \", data=df).fit()\nprint(res.params)", "Multiplicative interactions\n\":\" adds a new column to the design matrix with the interaction of the other two columns. 
\"*\" will also include the individual columns that were multiplied together:", "res1 = ols(formula=\"Lottery ~ Literacy : Wealth - 1\", data=df).fit()\nres2 = ols(formula=\"Lottery ~ Literacy * Wealth - 1\", data=df).fit()\nprint(res1.params, \"\\n\")\nprint(res2.params)", "Many other things are possible with operators. Please consult the patsy docs to learn more.\nFunctions\nYou can apply vectorized functions to the variables in your model:", "res = smf.ols(formula=\"Lottery ~ np.log(Literacy)\", data=df).fit()\nprint(res.params)", "Define a custom function:", "def log_plus_1(x):\n return np.log(x) + 1.0\n\n\nres = smf.ols(formula=\"Lottery ~ log_plus_1(Literacy)\", data=df).fit()\nprint(res.params)", "Any function that is in the calling namespace is available to the formula.\nUsing formulas with models that do not (yet) support them\nEven if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices \ncan then be fed to the fitting function as endog and exog arguments. \nTo generate numpy arrays:", "import patsy\n\nf = \"Lottery ~ Literacy * Wealth\"\ny, X = patsy.dmatrices(f, df, return_type=\"matrix\")\nprint(y[:5])\nprint(X[:5])", "To generate pandas data frames:", "f = \"Lottery ~ Literacy * Wealth\"\ny, X = patsy.dmatrices(f, df, return_type=\"dataframe\")\nprint(y[:5])\nprint(X[:5])\n\nprint(sm.OLS(y, X).fit().summary())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kubeflow/kfserving-lts
docs/samples/explanation/aix/mnist/query_explain.ipynb
apache-2.0
[ "KFServing Model Explainability\nInstall the necessary libraries", "import os\nimport sys\nimport requests\nimport json\nfrom matplotlib import pyplot as plt\nimport numpy as np\nfrom aix360.datasets import MNISTDataset\nfrom keras.applications import inception_v3 as inc_net\nfrom keras.preprocessing import image\nfrom keras.applications.imagenet_utils import decode_predictions\nimport time\nfrom skimage.color import gray2rgb, rgb2gray, label2rgb # since the code wants color images", "Get endpoint, host headers, and load the image from a file or from the MNIST dataset.", "print('************************************************************')\nprint('************************************************************')\nprint('************************************************************')\nprint(\"starting query\")\n\nif len(sys.argv) < 3:\n raise Exception(\"No endpoint specified. \")\n\nendpoint = sys.argv[1]\nheaders = {\n 'Host': sys.argv[2]\n}\ntest_num = 1002\nis_file = False\nif len(sys.argv) > 3:\n try:\n test_num = int(sys.argv[2])\n except:\n is_file = True\n\nif is_file:\n inputs = open(sys.argv[2])\n inputs = json.load(inputs)\n actual = \"unk\"\nelse:\n data = MNISTDataset()\n inputs = data.test_data[test_num]\n labels = data.test_labels[test_num]\n actual = 0\n for x in range(1, len(labels)):\n if labels[x] != 0:\n actual = x\n inputs = gray2rgb(inputs.reshape((-1, 28, 28)))\n inputs = np.reshape(inputs, (28,28,3))\ninput_image = {\"instances\": [inputs.tolist()]}", "Display the input image to be used.", "fig0 = (inputs[:,:,0] + 0.5)*255\nf, axarr = plt.subplots(1, 1, figsize=(10,10))\naxarr.set_title(\"Original Image\")\naxarr.imshow(fig0, cmap=\"gray\")\nplt.show()", "Send the image to the inferenceservice.", "print(\"Sending Explain Query\")\n\nx = time.time()\n\nres = requests.post(endpoint, json=input_image, headers=headers)\n\nprint(\"TIME TAKEN: \", time.time() - x)", "Unwrap the response from the inferenceservice and display the explanations.", "print(res)\nif not res.ok:\n res.raise_for_status()\nres_json = res.json()\ntemp = np.array(res_json[\"explanations\"][\"temp\"])\nmasks = np.array(res_json[\"explanations\"][\"masks\"])\ntop_labels = np.array(res_json[\"explanations\"][\"top_labels\"])\n\nfig, m_axs = plt.subplots(2,5, figsize = (12,6))\nfor i, c_ax in enumerate(m_axs.flatten()):\n mask = masks[i]\n c_ax.imshow(label2rgb(mask, temp, bg_label = 0), interpolation = 'nearest')\n c_ax.set_title('Positive for {}\\nActual {}'.format(top_labels[i], actual))\n c_ax.axis('off')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pysal/spaghetti
notebooks/transportation-problem.ipynb
bsd-3-clause
[ "If any part of this notebook is used in your research, please cite with the reference found in README.md.\n\nThe Transportation Problem\nIntegrating pysal/spaghetti and python-mip for optimal shipping\nAuthor: James D. Gaboardi &#106;&#103;&#97;&#98;&#111;&#97;&#114;&#100;&#105;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;\nThis notebook provides a use case for:\n\nIntroducing the Transportation Problem\nDeclaration of a solution class and model parameters\nSolving the Transportation Problem for an optimal shipment plan", "%config InlineBackend.figure_format = \"retina\"\n\n%load_ext watermark\n%watermark\n\nimport geopandas\nfrom libpysal import examples\nimport matplotlib\nimport mip\nimport numpy\nimport os\nimport spaghetti\nimport matplotlib_scalebar\nfrom matplotlib_scalebar.scalebar import ScaleBar\n\n%matplotlib inline\n%watermark -w\n%watermark -iv", "1 Introduction\nScenario\nThere are 8 schools in Neighborhood Y of City X and a total of 100 microscopes for the biology classes at the 8 schools, though the microscopes are not evenly distributed across the locations. Since last academic year there has been a significant enrollment shift in the neighborhood, and at 4 of the schools there is a surplus whereas the remaining 4 schools require additional microscopes. Dr. Rachel Carson, the head of the biology department at City X's School Board decides to utilize a mathematical programming model to solve the microscope discrepency. After consideration, she selects the Transportation Problem.\nThe Transportation Problem seeks to allocate supply to demand while minimizing transportation costs and was formally described by Hitchcock (1941). Supply ($\\textit{n}$) and demand ($\\textit{m}$) are generally represented as unit weights of decision variables at facilities along a network with the time or distance between nodes representing the cost of transporting one unit from a supply node to a demand node. These costs are stored in an $\\textit{n x m}$ cost matrix.\n\nInteger Linear Programming Formulation based on Daskin (2013, Ch. 2).\n$\\begin{array}\n\\displaystyle \\normalsize \\textrm{Minimize} & \\displaystyle \\normalsize \\sum_{i \\in I} \\sum_{j \\in J} c_{ij}x_{ij} & & & & \\normalsize (1) \\\n\\normalsize \\textrm{Subject To} & \\displaystyle \\normalsize \\sum_{j \\in J} x_{ij} \\leq S_i & \\normalsize \\forall i \\in I; & & &\\normalsize (2)\\\n & \\displaystyle \\normalsize \\sum_{i \\in I} x_{ij} \\geq D_j & \\normalsize \\forall j \\in J; & & &\\normalsize (3)\\\n& \\displaystyle \\normalsize x_{ij} \\geq 0 & \\displaystyle \\normalsize \\forall i \\in I & \\displaystyle \\normalsize \\normalsize \\forall j \\in j. 
& &\\normalsize (4)\\\n\\end{array}$\n$\\begin{array}\n\\displaystyle \\normalsize \\textrm{Where} & \\small i & \\small = & \\small \\textrm{each potential origin node} &&&&\\\n& \\small I & \\small = & \\small \\textrm{the complete set of potential origin nodes} &&&&\\\n& \\small j & \\small = & \\small \\textrm{each potential destination node} &&&&\\\n& \\small J & \\small = & \\small \\textrm{the complete set of potential destination nodes} &&&&\\\n& \\small x_{ij} & \\small = & \\small \\textrm{amount to be shipped from } i \\in I \\textrm{ to } j \\in J &&&&\\\n& \\small c_{ij} & \\small = & \\small \\textrm{per unit shipping costs between all } i,j \\textrm{ pairs} &&&& \\\n& \\small S_i & \\small = & \\small \\textrm{node } i \\textrm{ supply for } i \\in I &&&&\\\n& \\small D_j & \\small = & \\small \\textrm{node } j \\textrm{ demand for } j \\in J &&&&\\\n\\end{array}$\n\nReferences\n\n\nChurch, Richard L. and Murray, Alan T. (2009) Business Site Selection, Locational Analysis, and GIS. Hoboken. John Wiley & Sons, Inc.\n\n\nDaskin, M. (2013) Network and Discrete Location: Models, Algorithms, and Applications. New York: John Wiley & Sons, Inc.\n\n\nGass, S. I. and Assad, A. A. (2005) An Annotated Timeline of Operations Research: An Informal History. Springer US.\n\n\nHitchcock, Frank L. (1941) The Distribution of a Product from Several Sources to Numerous Localities. Journal of Mathematics and Physics. 20(1):224-230.\n\n\nKoopmans, Tjalling C. (1949) Optimum Utilization of the Transportation System. Econometrica. 17:136-146.\n\n\nMiller, H. J. and Shaw, S.-L. (2001) Geographic Information Systems for Transportation: Principles and Applications. New York. Oxford University Press.\n\n\nPhillips, Don T. and Garcia‐Diaz, Alberto. (1981) Fundamentals of Network Analysis. Englewood Cliffs. Prentice Hall. \n\n\n\n2. A model, data, and parameters\nSchools labeled as either 'supply' or 'demand' locations", "supply_schools = [1, 6, 7, 8]\ndemand_schools = [2, 3, 4, 5]", "Amount of supply and demand at each location (indexed by supply_schools and demand_schools)", "amount_supply = [20, 30, 15, 35]\namount_demand = [5, 45, 10, 40]", "Solution class", "class TransportationProblem:\n def __init__(\n self,\n supply_nodes,\n demand_nodes,\n cij,\n si,\n dj,\n xij_tag=\"x_%s,%s\",\n supply_constr_tag=\"supply(%s)\",\n demand_constr_tag=\"demand(%s)\",\n solver=\"cbc\",\n display=True,\n ):\n \"\"\"Instantiate and solve the Primal Transportation Problem\n based the formulation from Daskin (2013, Ch. 2).\n \n Parameters\n ----------\n supply_nodes : geopandas.GeoSeries\n Supply node decision variables.\n demand_nodes : geopandas.GeoSeries\n Demand node decision variables.\n cij : numpy.array\n Supply-to-demand distance matrix for nodes.\n si : geopandas.GeoSeries\n Amount that can be supplied by each supply node.\n dj : geopandas.GeoSeries\n Amount that can be received by each demand node.\n xij_tag : str\n Shipping decision variable names within the model. Default is\n 'x_%s,%s' where %s indicates string formatting.\n supply_constr_tag : str\n Supply constraint labels. Default is 'supply(%s)'.\n demand_constr_tag : str\n Demand constraint labels. Default is 'demand(%s)'.\n solver : str\n Default is 'cbc' (coin-branch-cut). Can be set\n to 'gurobi' (if Gurobi is installed).\n display : bool\n Print out solution results.\n \n Attributes\n ----------\n supply_nodes : See description in above. 
\n demand_nodes : See description in above.\n cij : See description in above.\n si : See description in above.\n dj : See description in above.\n xij_tag : See description in above.\n supply_constr_tag : See description in above.\n demand_constr_tag : See description in above.\n rows : int\n The number of supply nodes.\n rrows : range\n The index of supply nodes.\n cols : int\n The number of demand nodes.\n rcols : range\n The index of demand nodes.\n model : mip.model.Model\n Integer Linear Programming problem instance.\n xij : numpy.array\n Shipping decision variables (``mip.entities.Var``).\n \"\"\"\n\n # all nodes to be visited\n self.supply_nodes, self.demand_nodes = supply_nodes, demand_nodes\n # shipping costs (distance matrix) and amounts\n self.cij, self.si, self.dj = cij, si.values, dj.values\n self.ensure_float()\n # alpha tag for decision variables\n self.xij_tag = xij_tag\n # alpha tag for supply and demand constraints\n self.supply_constr_tag = supply_constr_tag\n self.demand_constr_tag = demand_constr_tag\n \n # instantiate a model\n self.model = mip.Model(\" TransportationProblem\", solver_name=solver)\n # define row and column indices\n self.rows, self.cols = self.si.shape[0], self.dj.shape[0]\n self.rrows, self.rcols = range(self.rows), range(self.cols)\n # create and set the decision variables\n self.shipping_dvs()\n # set the objective function\n self.objective_func()\n # add supply constraints\n self.add_supply_constrs()\n # add demand constraints\n self.add_demand_constrs()\n # solve\n self.solve(display=display)\n # shipping decisions lookup\n self.get_decisions(display=display)\n\n def ensure_float(self):\n \"\"\"Convert integers to floats (rough edge in mip.LinExpr)\"\"\"\n self.cij = self.cij.astype(float)\n self.si = self.si.astype(float)\n self.dj = self.dj.astype(float)\n\n def shipping_dvs(self):\n \"\"\"Create the shipping decision variables - eq (4).\"\"\"\n\n def _s(_x):\n \"\"\"Helper for naming variables\"\"\"\n return self.supply_nodes[_x].split(\"_\")[-1]\n\n def _d(_x):\n \"\"\"Helper for naming variables\"\"\"\n return self.demand_nodes[_x].split(\"_\")[-1]\n\n xij = numpy.array(\n [\n [self.model.add_var(self.xij_tag % (_s(i), _d(j))) for j in self.rcols]\n for i in self.rrows\n ]\n )\n self.xij = xij\n\n def objective_func(self):\n \"\"\"Add the objective function - eq (1).\"\"\"\n self.model.objective = mip.minimize(\n mip.xsum(\n self.cij[i, j] * self.xij[i, j] for i in self.rrows for j in self.rcols\n )\n )\n\n def add_supply_constrs(self):\n \"\"\"Add supply contraints to the model - eq (2).\"\"\"\n for i in self.rrows:\n rhs, label = self.si[i], self.supply_constr_tag % i\n self.model += mip.xsum(self.xij[i, j] for j in self.rcols) <= rhs, label\n\n def add_demand_constrs(self):\n \"\"\"Add demand contraints to the model - eq (3).\"\"\"\n for j in self.rcols:\n rhs, label = self.dj[j], self.demand_constr_tag % j\n self.model += mip.xsum(self.xij[i, j] for i in self.rrows) >= rhs, label\n\n def solve(self, display=True):\n \"\"\"Solve the model\"\"\"\n self.model.optimize()\n if display:\n obj = round(self.model.objective_value, 4)\n print(\"Minimized shipping costs: %s\" % obj)\n\n def get_decisions(self, display=True):\n \"\"\"Fetch the selected decision variables.\"\"\"\n shipping_decisions = {}\n if display:\n print(\"\\nShipping decisions:\")\n for i in self.rrows:\n for j in self.rcols:\n v, vx = self.xij[i, j], self.xij[i, j].x\n if vx > 0:\n if display:\n print(\"\\t\", v, vx)\n shipping_decisions[v.name] = vx\n self.shipping_decisions = 
shipping_decisions\n\n def print_lp(self, name=None):\n \"\"\"Save LP file in order to read in and print.\"\"\"\n if not name:\n name = self.model.name\n lp_file_name = \"%s.lp\" % name\n self.model.write(lp_file_name)\n lp_file = open(lp_file_name, \"r\")\n lp = lp_file.read()\n print(\"\\n\", lp)\n lp_file.close()\n os.remove(lp_file_name)\n\n def extract_shipments(self, paths, id_col, ship=\"ship\"):\n \"\"\"Extract the supply to demand shipments as a \n ``geopandas.GeoDataFrame`` of ``shapely.geometry.LineString`` objects.\n \n Parameters\n ----------\n paths : geopandas.GeoDataFrame\n Shortest-path routes between all ``self.supply_nodes``\n and ``self.demand_nodes``.\n id_col : str\n ID column name.\n ship : str\n Column name for the amount of good shipped.\n Default is 'ship'.\n \n Returns\n -------\n shipments : geopandas.GeoDataFrame\n Optimal shipments from ``self.supply_nodes`` to\n ``self.demand_nodes``.\n \"\"\"\n\n def _id(sp):\n \"\"\"ID label helper\"\"\"\n return tuple([int(i) for i in sp.split(\"_\")[-1].split(\",\")])\n\n paths[ship] = int\n # set label of the shipping path for each OD pair.\n for ship_path, shipment in self.shipping_decisions.items():\n paths.loc[(paths[id_col] == _id(ship_path)), ship] = shipment\n # extract only shiiping paths\n shipments = paths[paths[ship] != int].copy()\n shipments[ship] = shipments[ship].astype(int)\n\n return shipments", "Plotting helper functions and constants\nNote: originating shipments", "shipping_colors = [\"maroon\", \"cyan\", \"magenta\", \"orange\"]\n\ndef obs_labels(o, b, s, col=\"id\", **kwargs):\n \"\"\"Label each point pattern observation.\"\"\"\n\n def _lab_loc(_x):\n \"\"\"Helper for labeling observations.\"\"\"\n return _x.geometry.coords[0]\n\n if o.index.name != \"schools\":\n X = o.index.name[0]\n else:\n X = \"\"\n kws = {\"size\": s, \"ha\": \"left\", \"va\": \"bottom\", \"style\": \"oblique\"}\n kws.update(kwargs)\n o.apply(lambda x: b.annotate(text=X+str(x[col]), xy=_lab_loc(x), **kws), axis=1)\n\ndef make_patches(objects):\n \"\"\"Create patches for legend\"\"\"\n patches = []\n for _object in objects:\n try:\n oname = _object.index.name\n except AttributeError:\n oname = \"shipping\"\n if oname.split(\" \")[0] in [\"schools\", \"supply\", \"demand\"]:\n ovalue = _object.shape[0]\n if oname == \"schools\":\n ms, m, c, a = 3, \"o\", \"k\", 1\n elif oname.startswith(\"supply\"):\n ms, m, c, a = 10, \"o\", \"b\", 0.25\n elif oname.startswith(\"demand\"):\n ms, m, c, a = 10, \"o\", \"g\", 0.25\n if oname.endswith(\"snapped\"):\n ms, m, a = float(ms) / 2.0, \"x\", 1\n _kws = {\"lw\": 0, \"c\": c, \"marker\": m, \"ms\": ms, \"alpha\": a}\n label = \"%s — %s\" % (oname.capitalize(), int(ovalue))\n p = matplotlib.lines.Line2D([], [], label=label, **_kws)\n patches.append(p)\n else:\n patch_info = plot_shipments(_object, \"\", for_legend=True)\n for c, lw, lwsc, (i, j) in patch_info:\n label = \"s%s$\\\\rightarrow$d%s — %s microscopes\" % (i, j, lw)\n _kws = {\"alpha\": 0.75, \"c\": c, \"lw\": lwsc, \"label\": label}\n p = matplotlib.lines.Line2D([], [], solid_capstyle=\"round\", **_kws)\n patches.append(p)\n return patches\n\ndef legend(objects, anchor=(1.005, 1.016)):\n \"\"\"Add a legend to a plot\"\"\"\n patches = make_patches(objects)\n kws = {\"fancybox\": True, \"framealpha\": 0.85, \"fontsize\": \"x-large\"}\n kws.update({\"bbox_to_anchor\":anchor, \"labelspacing\":2., \"borderpad\":2.})\n legend = matplotlib.pyplot.legend(handles=patches, **kws)\n legend.get_frame().set_facecolor(\"white\")\n\ndef 
plot_shipments(sd, b, scaled=0.75, for_legend=False):\n \"\"\"Helper for plotting shipments based on OD and magnitude\"\"\"\n _patches = []\n _plot_kws = {\"alpha\":0.75, \"zorder\":0, \"capstyle\":\"round\"}\n for c, (g, gdf) in zip(shipping_colors, sd):\n lw, lw_scaled, ids = gdf[\"ship\"], gdf[\"ship\"] * scaled, gdf[\"id\"]\n if for_legend:\n for _lw, _lwsc, _id in zip(lw, lw_scaled, ids):\n _patches.append([c, _lw, _lwsc, _id])\n else:\n gdf.plot(ax=b, color=c, lw=lw_scaled, **_plot_kws)\n if for_legend:\n return _patches", "Streets", "streets = geopandas.read_file(examples.get_path(\"streets.shp\"))\nstreets.crs = \"esri:102649\"\nstreets = streets.to_crs(\"epsg:2762\")", "Schools", "schools = geopandas.read_file(examples.get_path(\"schools.shp\"))\nschools.index.name = \"schools\"\nschools.crs = \"esri:102649\"\nschools = schools.to_crs(\"epsg:2762\")", "Schools - supply nodes", "schools_supply = schools[schools[\"POLYID\"].isin(supply_schools)]\nschools_supply.index.name = \"supply\"\nschools_supply", "Schools - demand nodes", "schools_demand = schools[schools[\"POLYID\"].isin(demand_schools)]\nschools_demand.index.name = \"demand\"\nschools_demand", "Instantiate a network object", "ntw = spaghetti.Network(in_data=streets)\nvertices, arcs = spaghetti.element_as_gdf(ntw, vertices=True, arcs=True)", "Plot", "# plot network\nbase = arcs.plot(linewidth=3, alpha=0.25, color=\"k\", zorder=0, figsize=(10, 10))\nvertices.plot(ax=base, markersize=2, color=\"red\", zorder=1)\n# plot observations\nschools.plot(ax=base, markersize=5, color=\"k\", zorder=2)\nschools_supply.plot(ax=base, markersize=100, alpha=0.25, color=\"b\", zorder=2)\nschools_demand.plot(ax=base, markersize=100, alpha=0.25, color=\"g\", zorder=2)\n# add labels\nobs_labels(schools, base, 14, col=\"POLYID\", c=\"k\", weight=\"bold\")\n# add legend\nelements = [schools, schools_supply, schools_demand]\nlegend(elements)\n# add scale bar\nscalebar = ScaleBar(1, units=\"m\", location=\"lower left\")\nbase.add_artist(scalebar);", "Associate both the supply and demand schools with the network and plot", "ntw.snapobservations(schools_supply, \"supply\")\nsupply = spaghetti.element_as_gdf(ntw, pp_name=\"supply\")\nsupply.index.name = \"supply\"\nsupply_snapped = spaghetti.element_as_gdf(ntw, pp_name=\"supply\", snapped=True)\nsupply_snapped.index.name = \"supply snapped\"\nsupply_snapped\n\nntw.snapobservations(schools_demand, \"demand\")\ndemand = spaghetti.element_as_gdf(ntw, pp_name=\"demand\")\ndemand.index.name = \"demand\"\ndemand_snapped = spaghetti.element_as_gdf(ntw, pp_name=\"demand\", snapped=True)\ndemand_snapped.index.name = \"demand snapped\"\ndemand_snapped\n\n# plot network\nbase = arcs.plot(linewidth=3, alpha=0.25, color=\"k\", zorder=0, figsize=(10, 10))\nvertices.plot(ax=base, markersize=5, color=\"r\", zorder=1)\n# plot observations\nschools.plot(ax=base, markersize=5, color=\"k\", zorder=2)\nsupply.plot(ax=base, markersize=100, alpha=0.25, color=\"b\", zorder=3)\nsupply_snapped.plot(ax=base, markersize=20, marker=\"x\", color=\"b\", zorder=3)\ndemand.plot(ax=base, markersize=100, alpha=0.25, color=\"g\", zorder=2)\ndemand_snapped.plot(ax=base, markersize=20, marker=\"x\", color=\"g\", zorder=3)\n# add labels\nobs_labels(supply, base, 14, c=\"b\")\nobs_labels(demand, base, 14, c=\"g\")\n# add legend\nelements += [supply_snapped, demand_snapped]\nlegend(elements)\n# add scale bar\nscalebar = ScaleBar(1, units=\"m\", location=\"lower left\")\nbase.add_artist(scalebar);", "Calculate distance matrix while generating 
shortest path trees", "s2d, tree = ntw.allneighbordistances(\"supply\", \"demand\", gen_tree=True)\ns2d[:3, :3]\n\nlist(tree.items())[:4], list(tree.items())[-4:]", "3. The Transportation Problem\nCreate decision variables for the supply locations and amount to be supplied", "supply[\"dv\"] = supply[\"id\"].apply(lambda _id: \"s_%s\" % _id)\nsupply[\"s_i\"] = amount_supply\nsupply", "Create decision variables for the demand locations and amount to be received", "demand[\"dv\"] = demand[\"id\"].apply(lambda _id: \"d_%s\" % _id)\ndemand[\"d_j\"] = amount_demand\ndemand", "Solve the Transportation Problem\nNote: shipping costs are in meters per microscope", "s, d, s_i, d_j = supply[\"dv\"], demand[\"dv\"], supply[\"s_i\"], demand[\"d_j\"]\ntrans_prob = TransportationProblem(s, d, s2d, s_i, d_j)", "Linear program (compare to its formulation in the Introduction)", "trans_prob.print_lp()", "Extract all network shortest paths", "paths = ntw.shortest_paths(tree, \"supply\", \"demand\")\npaths_gdf = spaghetti.element_as_gdf(ntw, routes=paths)\npaths_gdf.head()", "Extract the shipping paths", "shipments = trans_prob.extract_shipments(paths_gdf, \"id\")\nshipments", "Plot optimal shipping schedule", "# plot network\nbase = arcs.plot(alpha=0.2, linewidth=1, color=\"k\", figsize=(10, 10), zorder=0)\nvertices.plot(ax=base, markersize=1, color=\"r\", zorder=2)\n# plot observations\nschools.plot(ax=base, markersize=5, color=\"k\", zorder=2)\nsupply.plot(ax=base, markersize=100, alpha=0.25, color=\"b\", zorder=3)\nsupply_snapped.plot(ax=base, markersize=20, marker=\"x\", color=\"b\", zorder=3)\ndemand.plot(ax=base, markersize=100, alpha=0.25, color=\"g\", zorder=2)\ndemand_snapped.plot(ax=base, markersize=20, marker=\"x\", color=\"g\", zorder=3)\n# plot shipments\nplot_shipments(shipments.groupby(\"O\"), base)\n# add labels\nobs_labels(supply, base, 14, c=\"b\")\nobs_labels(demand, base, 14, c=\"g\")\n# add legend\nelements += [shipments.groupby(\"O\")]\nlegend(elements)\n# add scale bar\nscalebar = ScaleBar(1, units=\"m\", location=\"lower left\")\nbase.add_artist(scalebar);", "By utilizing the Transportation Problem, Dr. Carson has been able to minimize shipping costs and redistribute the microscopes to the schools in need!" ]
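For readers who want to see the formulation in equations (1)-(4) in isolation from the solution class above, here is a minimal, self-contained python-mip sketch of a tiny hypothetical 2x2 problem; the supplies, demands and costs are made-up numbers used purely for illustration.

```python
import mip

si = [20.0, 30.0]        # S_i: supply available at two supply nodes
dj = [25.0, 25.0]        # D_j: demand required at two demand nodes
cij = [[4.0, 6.0],       # c_ij: per-unit shipping costs
       [5.0, 3.0]]

m = mip.Model("tiny_transportation", solver_name="cbc")
x = [[m.add_var("x_%s,%s" % (i, j)) for j in range(2)] for i in range(2)]  # eq (4): x_ij >= 0

# eq (1): minimize total shipping cost
m.objective = mip.minimize(mip.xsum(cij[i][j] * x[i][j] for i in range(2) for j in range(2)))
# eq (2): ship no more than each supply node holds
for i in range(2):
    m += mip.xsum(x[i][j] for j in range(2)) <= si[i]
# eq (3): meet the demand at every demand node
for j in range(2):
    m += mip.xsum(x[i][j] for i in range(2)) >= dj[j]

m.optimize()
print(m.objective_value, [[x[i][j].x for j in range(2)] for i in range(2)])
```

The same pattern, with the network shortest-path distances used as `cij`, is what the `TransportationProblem` class builds internally.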
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
marcinofulus/teaching
ML_SS2017/Numpy_cwiczenia.ipynb
gpl-3.0
[ "import numpy as np",
"1. Create a vector of zeros of size 10\npython\n np.zeros\n2. How much memory does the array occupy?\n3. Create a vector of 10 zeros, except for the 5th element, which equals 4\n4. Create a vector of consecutive numbers from 111 to 144.\nnp.arange\n\n5. Reverse the order of the elements of the vector.\n6. Create a 4x4 matrix with values from 0 to 15\nreshape\n\n7. Find the indices of the non-zero elements of a vector\nnp.nonzero\n\n8. Find the zeros of a function using np.nonzero.\n\nfind the interval on which the function changes sign\n\nperform a linear interpolation of the zero\n\n\nThe algorithm should contain only vectorized operations. \n\nThe function is given as arrays of arguments and values.",
"import numpy as np \nx = np.linspace(0,10,23)\nf = np.sin(x)\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nplt.plot(x,f,'o-')\nplt.plot(4,0,'ro')\n\n# f1 = f[1:-1] * f[:]\nprint(np.shape(f[:-1]))\nprint(np.shape(f[1:]))\nff = f[:-1] * f[1:]\nprint(ff.shape)\n\nx_zero = x[np.where(ff < 0)]\nx_zero2 = x[np.where(ff < 0)[0] + 1]\nf_zero = f[np.where(ff < 0)]\nf_zero2 = f[np.where(ff < 0)[0] + 1]\nprint(x_zero)\nprint(f_zero)\n\nDx = x_zero2 - x_zero\ndf = np.abs(f_zero)\nDf = np.abs(f_zero - f_zero2)\nprint(Dx)\nprint(df)\nprint(Df)\n\nxz = x_zero + (df * Dx) / Df\nxz\n\nplt.plot(x,f,'o-')\nplt.plot(x_zero,f_zero,'ro')\nplt.plot(x_zero2,f_zero2,'go')\nplt.plot(xz,np.zeros_like(xz),'yo-')\n\nnp.where(ff < 0)[0] + 1",
"9. Create a 3x3 matrix:\n\nan identity matrix np.eye\na random one with the values 0, 1, 2\n\n10. Find the minimum value of the matrix and its index\n11. Find the mean deviation from the mean value for a vector",
"Z = np.random.random(30)\n",
"12. 2D grid.\nCreate index arrays of the x and y coordinate values for the region $(-2,1)\\times(-1,3)$.\n * Compute the values of the function $sin(x^2+y^2)$ on it\n * plot the result using imshow and contour",
"x = np.linspace(0,3,64)\ny = np.linspace(0,3,64)\n\nX,Y = np.meshgrid(x,y)\n\n\nX\n\nY\n\nnp.sin(X**2+Y**2)\n\nplt.contourf(X,Y,np.sin(X**2+Y**2))\n",
"13. The Laplace operator\nCompute the numerical Laplace operator for the function sampled in the previous exercise.\n\nCompare the values obtained this way with the sampled analytic Laplacian.\n\n14. The logistic map\nImplement in numpy an algorithm that produces the bifurcation diagram for the logistic map.\n\nhttps://pl.wikipedia.org/wiki/Odwzorowanie_logistyczne" ]
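For exercise 13, one possible vectorized finite-difference sketch (one solution among many; the grid size below is an arbitrary choice) is:

```python
import numpy as np

x = np.linspace(-2, 1, 128)
y = np.linspace(-1, 3, 128)
X, Y = np.meshgrid(x, y)
F = np.sin(X**2 + Y**2)
hx, hy = x[1] - x[0], y[1] - y[0]

# 5-point finite-difference Laplacian on the interior points only
lap_num = ((F[1:-1, :-2] - 2 * F[1:-1, 1:-1] + F[1:-1, 2:]) / hx**2
           + (F[:-2, 1:-1] - 2 * F[1:-1, 1:-1] + F[2:, 1:-1]) / hy**2)

# analytic Laplacian of sin(x^2 + y^2), sampled on the same interior grid
R2 = (X**2 + Y**2)[1:-1, 1:-1]
lap_exact = 4 * np.cos(R2) - 4 * R2 * np.sin(R2)
print(np.abs(lap_num - lap_exact).max())
```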
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Chiroptera/QCThesis
notebooks/Horn accuracy.ipynb
mit
[ "author: Diogo Silva\nThe commentaries made on this notebook were made before the correction of the accuracy algorithm. The accuracies before this correction were significantly worse.", "%pylab inline\n#%qtconsole\n\nimport seaborn as sns\nimport sklearn\nfrom sklearn import preprocessing,decomposition,datasets\n\n%cd /home/chiroptera/workspace/QCThesis/\nimport haddock.cluster.Horn as HornAlg\nreload(HornAlg)\n\n%cd /home/chiroptera/workspace/QCThesis/cluster\\ consistency\nimport determine_ci\nreload(determine_ci)\n\n# These are the \"Tableau 20\" colors as RGB. \ntableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120), \n (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150), \n (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148), \n (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199), \n (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]\n# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts. \nfor i in range(len(tableau20)): \n r, g, b = tableau20[i] \n tableau20[i] = (r / 255., g / 255., b / 255.)\n\ndef save_subfig(filename,subfig,format=\"eps\",dpi=500):\n fig=subfig.get_figure()\n allformats=list()\n \n if type(format) == str:\n allformats.append(format)\n elif type(format) == list:\n allformats=format\n else:\n raise Exception(\"incorrect format type\")\n\n for f in allformats:\n \n # Save just the portion _inside_ the second axis's boundaries\n extent = subfig.get_window_extent().transformed(fig.dpi_scale_trans.inverted())\n\n # Pad the saved area by 10% in the x-direction and 20% in the y-direction\n fig.savefig(filename+\".\"+f, bbox_inches=extent.expanded(1.2, 1.25),format=f, dpi=dpi)\n \ndef save_fig(filename,fig=None,format=\"eps\",dpi=500):\n allformats=list()\n \n if type(format) == str:\n allformats.append(format)\n elif type(format) == list:\n allformats=format\n else:\n raise Exception(\"incorrect format type\")\n\n for f in allformats:\n if fig != None:\n fig.savefig(filename+\".\"+f, format=f, dpi=dpi)\n else:\n plt.savefig(filename+\".\"+f, format=f, dpi=dpi)", "Quantum Clustering with Schrödinger's equation\nBackground\nThis method starts off by creating a Parzen-window density estimation of the input data by associating a Gaussian with each point, such that\n$$ \\psi (\\mathbf{x}) = \\sum ^N _{i=1} e^{- \\frac{\\left \\| \\mathbf{x}-\\mathbf{x}_i \\right \\| ^2}{2 \\sigma ^2}} $$\nwhere $N$ is the total number of points in the dataset, $\\sigma$ is the variance and $\\psi$ is the probability density estimation. $\\psi$ is chosen to be the wave function in Schrödinger's equation. The details of why this is are better described in [1-4]. Schrödinger's equation is solved in order of the potential function $V(x)$, whose minima will be the centers of the clusters of our data:\n$$\nV(\\mathbf{x}) = E + \\frac {\\frac{\\sigma^2}{2}\\nabla^2 \\psi }{\\psi}\n= E - \\frac{d}{2} + \\frac {1}{2 \\sigma^2 \\psi} \\sum ^N _{i=1} \\left \\| \\mathbf{x}-\\mathbf{x}_i \\right \\| ^2 e^{- \\frac{\\left \\| \\mathbf{x}-\\mathbf{x}_i \\right \\| ^2}{2 \\sigma ^2}}\n$$\nAnd since the energy should be chosen such that $\\psi$ is the groundstate (i.e. eigenstate corresponding to minimum eigenvalue) of the Hamiltonian operator associated with Schrödinger's equation (not represented above), the following is true\n$$\nE = - min \\frac {\\frac{\\sigma^2}{2}\\nabla^2 \\psi }{\\psi}\n$$\nWith all of this, $V(x)$ can be computed. 
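For reference, a rough NumPy sketch of evaluating $\psi$ and $V$ of the expressions above at the sample points themselves (this is an illustration only and is not the `HornAlg.graddesc` implementation used below):

```python
import numpy as np

def parzen_psi_and_potential(data, sigma):
    # data: (N, d) array of points, sigma: Parzen window width
    N, d = data.shape
    diff = data[:, None, :] - data[None, :, :]      # pairwise differences x - x_i
    sq = np.sum(diff**2, axis=-1)                   # squared distances ||x - x_i||^2
    gauss = np.exp(-sq / (2.0 * sigma**2))
    psi = gauss.sum(axis=1)                         # Parzen-window estimate at each point
    # (sigma^2/2) * laplacian(psi) / psi = -d/2 + sum_i ||x - x_i||^2 gauss_i / (2 sigma^2 psi)
    ratio = -d / 2.0 + (sq * gauss).sum(axis=1) / (2.0 * sigma**2 * psi)
    E = -ratio.min()                                # choose E so that min V = 0 (psi is the ground state)
    return psi, E + ratio                           # V evaluated at the data points
```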
However, it's very computationally intensive to compute V(x) to the whole space, so we only compute the value of this function close to the datapoints. This should not be problematic since clusters' centers are generally close to the datapoints themselves. Even so, the minima may not lie on the datapoints themselves, so what we do is compute the potential at all datapoints and then apply the gradient descent method to move them to regions in space with lower potential.\nThere is another method to evolve the system other then by gradient descent which is explained in [4] and complements this on the Dynamic Quantum Clustering algorithm.\nThe code for this algorithm is available in Matlab in one of the authur's webpage (David Horn). That code has been ported to Python and the version used in this notebook can be found here.\nReferences\n[1] D. Horn and A. Gottlieb, “The Method of Quantum Clustering.,” NIPS, no. 1, 2001.\n[2] D. Horn, T. Aviv, A. Gottlieb, H. HaSharon, I. Axel, and R. Gan, “Method and Apparatus for Quantum Clustring,” 2010.\n[3] D. Horn and A. Gottlieb, “Algorithm for Data Clustering in Pattern Recognition Problems Based on Quantum Mechanics,” Phys. Rev. Lett., vol. 88, no. 1, pp. 1–4, 2001.\n[4] M. Weinstein and D. Horn, “Dynamic quantum clustering: a method for visual exploration of structures in data,” pp. 1–15.", "def fineCluster2(xyData,pV,minD):\n\t\n\tn = xyData.shape[0]\n\tclust = np.zeros(n)\n \n\t# index of points sorted by potential\n\tsortedUnclust=pV.argsort()\n\n\t# index of unclestered point with lowest potential\n\ti=sortedUnclust[0]\n\n\t# fist cluster index is 1\n\tclustInd=1\n\n\twhile np.min(clust)==0:\n\t\tx=xyData[i]\n\n\t\t# euclidean distance form 1 point to others\n\t\tD = np.sum((x-xyData)**2,axis=1)\n\t\tD = D**0.5\n\n\t\tclust = np.where(D<minD,clustInd,clust)\n\t\t\n\t\t# index of non clustered points\n\t\t# unclust=[x for x in clust if x == 0]\n\t\tclusted= clust.nonzero()[0]\n\n\t\t# sorted index of non clustered points\n\t\tsortedUnclust=[x for x in sortedUnclust if x not in clusted]\n\n\t\tif len(sortedUnclust) == 0:\n\t\t\tbreak\n\n\t\t#index of unclestered point with lowest potential\n\t\ti=sortedUnclust[0]\n\n\t\tclustInd += 1\n\n\treturn clust", "Iris\nThe iris dataset (available at the UCI ML repository) has 3 classes each with 50 datapoints each. There are 4 features. The data is preprocessed using PCA.", "# load data\n#dataset='/home/chiroptera/workspace/datasets/iris/iris.csv'\ndataset='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'\nirisPCA=True\nnormalize=False\n\nirisData=np.genfromtxt(dataset,delimiter=',')\nirisData_o=irisData[:,:-1] # remove classification column\n\niN,iDims=irisData_o.shape\n\n# PCA of data\nif irisPCA:\n irisData_c,iComps,iEigs=HornAlg.pcaFun(irisData_o,whiten=True,center=True,method='eig',type='corr',normalize=normalize)\n \n#print irisData, nirisData\n#iris true assignment\nirisAssign=np.ones(150)\nirisAssign[50:99]=2\nirisAssign[100:149]=3\n\n#nirisData=sklearn.preprocessing.normalize(nirisData,axis=0)\n\niFig1=plt.figure()\nplt.title('PCA of iris')\nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\nfor i in range(150):\n plt.plot(irisData_c[i,0],irisData_c[i,1],marker='.',c=tableau20[int(irisAssign[i]-1)*2])\n\nprint 'Energy of PC (in percentage):'\nprint np.cumsum(iEigs)*100/np.sum(iEigs)", "I choose $\\sigma=\\frac{1}{4}$ to reproduce the experiments in [3]. We use only the first two PC here. 
For more complete results the algorithm is also executed using all PC.", "#%%timeit\nsigma=0.25\nsteps=80\n\nirisD1,iV1,iE=HornAlg.graddesc(irisData_c[:,0:2],sigma=sigma,steps=steps)\n\n#%%timeit\nsigma=0.9\nsteps=80\n\nirisD2,iV2,iE=HornAlg.graddesc(irisData_c,sigma=sigma,steps=steps)", "Comments\nThe results shown above distinguish cluster assignment by colour. However, the colours might not be consistent throughout all figures. They serve only as a visual way to see how similar clusters are. This is due to the cluster assignment algorithm being used. Two methods may be used and differ only in the order on which they pick points to cluster. They both pick a point from the clustered data and compute the distance of that point to all the other points. All the points to which the corresponding distance is below a certain threshold belong to the same cluster. This process is repeated until all points are clustered. In one method the point picked to compute the distance is the first unclustered one. In the other method, the point picked is the one unassigned point that has the lowest potential value. Both methods suffer from assigning clusters to outliers.\nBefore analysing the results, some general comments may be made about the algorithm. A big advantage of the algorithm is that is does not make assumptions about the data (number of clusters, shape of clusters, intra-cluster distribution, etc.). A big disadvantage is that it doesn't assign points to clusters, if operated in an efficient way, i.e. not computing potential value on all points but only on datapoints and the direction they take. This is because the algorithm will converge points toward potential minima (akin to cluster centers) but will not tell which are these centers, which is the reason that the assignment methods described above are needed.\nResults\nPC 1 & 2", "dist=1.8\nirisClustering=HornAlg.fineCluster(irisD1,dist)#,potential=iV)\n\nprint 'Number of clusters:',max(irisClustering)\n\niFig2=plt.figure(figsize=(16,12))\niAx1=iFig2.add_subplot(2,2,1)\niAx2=iFig2.add_subplot(2,2,2)\niAx3=iFig2.add_subplot(2,2,4)\n\niAx1.set_title('Final quantum system')\niAx1.set_xlabel('PC1')\niAx1.set_ylabel('PC2')\n\nfor i in range(iN):\n if max(irisClustering) >=10:\n c=0\n else:\n c=int(irisClustering[i]-1)*2\n iAx1.plot(irisD1[i,0],irisD1[i,1],marker='.',c=tableau20[c])\n\niAx2.set_title('Final clustering')\niAx2.set_xlabel('PC1')\niAx2.set_ylabel('PC2')\n\nfor i in range(iN):\n if max(irisClustering) > 10:\n break\n iAx2.plot(irisData_c[i,0],irisData_c[i,1],marker='.',c=tableau20[int(irisClustering[i]-1)*2])\n\niAx3.set_title('Original clustering')\niAx3.set_xlabel('PC1')\niAx3.set_ylabel('PC2')\n\nfor i in range(iN):\n if max(irisClustering) > 10:\n break\n iAx3.plot(irisData_c[i,0],irisData_c[i,1],marker='.',c=tableau20[int(irisAssign[i]-1)*2])\n \n\nirisCI=determine_ci.ConsistencyIndex(N=150)\nirisAccuracy=irisCI.score(irisClustering,irisAssign,format='array')\n\nprint 'Accuracy:\\t',irisAccuracy\nprint 'Errors:\\t\\t',irisCI.unmatch_count", "Turning to the results, in the first case (clustering on the 2 first PC), the results show the clustering algorithm was able to cluster well one of the clusters (the one that is linearly seperable from the other two) but struggled with outliers present in the space of the other 2 clusters. Furthermore, the separation between the yellow and green clusters is hard, which not happens on the natural clusters. 
Observing the final quantum system, it's clear that all points converged to some minima, as they are concentrated around some point and well separated from other groups of points (other minima). If we were to take each minimum as an independent cluster we would have 11 different clusters, which is considerably more than the natural 3. This means that some of the minima might represent micro clusters inside the natural clusters or outliers.\nAll PC",
"dist=4.5\nirisClustering=HornAlg.fineCluster(irisD2,dist,potential=iV2)\n\nprint 'Number of clusters:',max(irisClustering)\n\niFig2=plt.figure(figsize=(16,6))\niAx1=iFig2.add_subplot(1,2,1)\niAx2=iFig2.add_subplot(1,2,2)\n#iAx3=iFig2.add_subplot(2,2,4)\n\niAx1.set_title('Final quantum system')\niAx1.set_xlabel('PC1')\niAx1.set_ylabel('PC2')\n\nfor i in range(iN):\n    if max(irisClustering) >=10:\n        c=0\n    else:\n        c=int(irisClustering[i]-1)*2\n    iAx1.plot(irisD2[i,0],irisD2[i,1],marker='.',c=tableau20[c])\n\niAx2.set_title('Final clustering')\niAx2.set_xlabel('PC1')\niAx2.set_ylabel('PC2')\n\nfor i in range(iN):\n    if max(irisClustering) > 10:\n        break\n    iAx2.plot(irisData_c[i,0],irisData_c[i,1],marker='.',c=tableau20[int(irisClustering[i]-1)*2])\n\"\"\"\niAx3.set_title('Original clustering')\niAx3.set_xlabel('PC1')\niAx3.set_ylabel('PC2')\n\nfor i in range(iN):\n    iAx3.plot(irisData_c[i,0],irisData_c[i,1],marker='.',c=tableau20[int(irisAssign[i]-1)*2])\n\"\"\"\nirisCI=determine_ci.ConsistencyIndex(N=150)\nirisAccuracy=irisCI.score(irisClustering,irisAssign,format='array')\n\nprint 'Accuracy:\\t',irisAccuracy\nprint 'Errors:\\t\\t',irisCI.unmatch_count",
"In this case, we use all PC. In the final quantum system, the number of minima is the same. However, some of the minima are very close to others and have fewer datapoints assigned, which suggests that they might be local minima and should probably be merged with the bigger minima close by. Once again the outliers were not correctly classified. In this case, though, there is no hard boundary between the green and yellow clusters. This is due to the fact that we're now clustering on all PC, which brings a greater amount of information to the problem (the first 2 PC only amounted to around 95% of the energy).\nCrab\nPreparing dataset\nHere we're loading the crab dataset and preprocessing it.",
"crabsPCA=True\ncrabsNormalize=False\n\ncrabs=np.genfromtxt('/home/chiroptera/workspace/datasets/crabs/crabs.dat')\ncrabsData=crabs[1:,3:]\n\n# PCA\nif crabsPCA:\n    ncrabsData1, cComps,cEigs=HornAlg.pcaFun(crabsData,whiten=True,center=False,\n                                             method='eig',type='cov',normalize=crabsNormalize)\n    ncrabsData2, cComps,cEigs=HornAlg.pcaFun(crabsData,whiten=True,center=True,\n                                             method='eig',type='corr',normalize=crabsNormalize)\n    ncrabsData3, cComps,cEigs=HornAlg.pcaFun(crabsData,whiten=True,center=True,\n                                             method='eig',type='cov',normalize=crabsNormalize)\n\n    # real assignment\ncrabsAssign=np.ones(200)\ncrabsAssign[50:100]=2\ncrabsAssign[100:150]=3\ncrabsAssign[150:200]=4",
"We're visualizing the data projected on the second and third principal components to replicate the results presented in [3]. They use PCA with the correlation matrix. Below we can see the data in different representations. The representation closest to the original data is obtained using the covariance matrix with uncentered data (an unconventional practice). Using the correlation matrix we get a representation similar to the unprocessed data. 
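In practice the only difference between these variants is whether the data is centred and standardised before the eigendecomposition. The sketch below is only an illustration of that idea, not the HornAlg.pcaFun code, whose options and conventions may differ:\n\n```python\nimport numpy as np\n\ndef pca_project(X, center=True, use_correlation=True):\n    Xp = X - X.mean(axis=0) if center else X.astype(float)\n    if use_correlation:\n        Xp = Xp / X.std(axis=0)            # correlation PCA = PCA of standardised data\n    C = np.dot(Xp.T, Xp) / (len(Xp) - 1)   # covariance (or correlation) matrix\n    eigvals, eigvecs = np.linalg.eigh(C)\n    order = np.argsort(eigvals)[::-1]      # strongest components first\n    return np.dot(Xp, eigvecs[:, order]), eigvals[order]\n```\n\n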
Although unconventional, the uncentered data plot suggests that the data is more separated than with centered data when using the covariance matrix.",
"cFig1=plt.figure(figsize=(16,12))\ncF1Ax1=cFig1.add_subplot(2,2,1)\ncF1Ax2=cFig1.add_subplot(2,2,2)\ncF1Ax3=cFig1.add_subplot(2,2,3)\ncF1Ax4=cFig1.add_subplot(2,2,4)\n\ncF1Ax1.set_title('Original crab data')\nfor i in range(len(crabsAssign)):\n    cF1Ax1.plot(crabsData[i,2],crabsData[i,1],marker='.',c=tableau20[int(crabsAssign[i]-1)*2])\n\ncF1Ax2.set_title('Crab projected on PC, Covariance, Uncentered')\ncF1Ax2.set_xlabel('PC3')\ncF1Ax2.set_ylabel('PC2')\nfor i in range(len(crabsAssign)):\n    cF1Ax2.plot(ncrabsData1[i,2],ncrabsData1[i,1],marker='.',c=tableau20[int(crabsAssign[i]-1)*2])\n    \ncF1Ax3.set_title('Crab projected on PC, Correlation, Centered')\nfor i in range(len(crabsAssign)):\n    cF1Ax3.plot(ncrabsData2[i,2],ncrabsData2[i,1],marker='.',c=tableau20[int(crabsAssign[i]-1)*2])\n    \ncF1Ax4.set_title('Crab projected on PC, Covariance, Centered')\nfor i in range(len(crabsAssign)):\n    cF1Ax4.plot(ncrabsData3[i,2],ncrabsData3[i,1],marker='.',c=tableau20[int(crabsAssign[i]-1)*2])",
"Cluster\nWe're clustering according to the second and third PC to try to replicate [3], using the same $\sigma$.",
"#%%timeit\n\nsigma=1.0/sqrt(2)\nsteps=80\ncrab2cluster=ncrabsData1\ncrabD,V,E=HornAlg.graddesc(crab2cluster[:,1:3],sigma=sigma,steps=steps)\n\n\ndist=1\ncrabClustering=HornAlg.fineCluster(crabD,dist,potential=V)\n\nprint 'Number of clusters:',max(crabClustering)\nprint 'Unclustered points:', np.count_nonzero(crabClustering==0)\n\ncFig2=plt.figure(figsize=(16,12))\ncAx1=cFig2.add_subplot(2,2,1)\ncAx2=cFig2.add_subplot(2,2,2)\n#cAx3=cFig2.add_subplot(2,2,4)\n#cFig2,(cAx1,cAx2)=plt.subplots(nrows=1, ncols=2, )\n\ncAx1.set_title('Final quantum system')\ncAx1.set_xlabel(\"PC3\")\ncAx1.set_ylabel(\"PC2\")\nfor i in range(len(crabsAssign)):\n    if max(crabClustering) >= 10:\n        c=0\n    else:\n        c=int(crabClustering[i]-1)*2\n    cAx1.plot(crabD[i,0],crabD[i,1],marker='.',c=tableau20[c])\n\ncAx2.set_title('Final clustering')\ncAx2.set_xlabel(\"PC3\")\ncAx2.set_ylabel(\"PC2\")\nfor i in range(len(crabsAssign)):\n    if max(crabClustering) > 10:\n        break\n    cAx2.plot(crab2cluster[i,2],crab2cluster[i,1],marker='.',c=tableau20[int(crabClustering[i]-1)*2])\n\"\"\"\ncAx3.set_title('Original clustering')\ncAx3.set_xlabel(\"PC3\")\ncAx3.set_ylabel(\"PC2\")\nfor i in range(len(crabsAssign)):\n    cAx3.plot(crab2cluster[i,2],crab2cluster[i,1],marker='.',c=tableau20[int(crabsAssign[i]-1)*2])\n\"\"\"\n# Consistency index accuracy\ncrabCI=determine_ci.ConsistencyIndex(N=200)\ncrabAccuracy=crabCI.score(crabClustering,crabsAssign,format='array')\n\n# Hungarian accuracy\nhungAcc = determine_ci.HungarianAccuracy(nsamples=200)\nhungAcc.score(crabClustering,crabsAssign,format='array')\nprint 'Hungarian Accuracy:\\t',hungAcc.accuracy\n\nprint 'CI Accuracy:\\t',crabAccuracy\n\ncrabCI.clusts1_.shape",
"The 'Final quantum system' shows how the points evolved in 80 steps. We can see that they all converged to 4 minima of the potential for $\sigma=\frac{1}{\sqrt{2}}$, making it easy to identify the number of clusters to choose. However, this is only clear when observing the results. The distance used to actually assign the points to the clusters needs tuning on a per-problem basis. We can see that outliers were usually incorrectly clustered. In addition, a considerable portion of the data was also wrongly clustered. 
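The 'Hungarian Accuracy' printed above first matches each predicted cluster to a true class and only then counts agreements; since determine_ci is a local helper module, the snippet below is just an illustration of that idea with scipy, not the code that produced the numbers:\n\n```python\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef hungarian_accuracy(pred, truth):\n    pred, truth = np.asarray(pred, dtype=int), np.asarray(truth, dtype=int)\n    p_ids, t_ids = np.unique(pred), np.unique(truth)\n    # contingency table: rows are predicted clusters, columns are true classes\n    table = np.zeros((p_ids.size, t_ids.size))\n    for i, p in enumerate(p_ids):\n        for j, t in enumerate(t_ids):\n            table[i, j] = np.sum((pred == p) & (truth == t))\n    rows, cols = linear_sum_assignment(-table)   # best one-to-one matching\n    return table[rows, cols].sum() / float(pred.size)\n```\n\n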
The accuracy of the clustering was the following:\nConventional PCA\nNow we'll cluster with the conventional PCA, with centered data.", "sigma=1.0/sqrt(2)\nsteps=80\ncrab2cluster=ncrabsData3\ncrabD,V,E=HornAlg.graddesc(crab2cluster[:,1:3],sigma=sigma,steps=steps)\n\n#%%debug\ndist=1\ncrabClustering=HornAlg.fineCluster(crabD,dist,potential=V)\n\nprint 'Number of clusters:',max(crabClustering)\nprint 'Unclestered points:', np.count_nonzero(crabClustering==0)\n\ncFig2=plt.figure(figsize=(16,12))\ncAx1=cFig2.add_subplot(2,2,1)\ncAx2=cFig2.add_subplot(2,2,2)\ncAx3=cFig2.add_subplot(2,2,4)\n#cFig2,(cAx1,cAx2)=plt.subplots(nrows=1, ncols=2, )\n\ncAx1.set_title('Final quantum system')\nfor i in range(len(crabsAssign)):\n if max(crabClustering) >= 10:\n c=0\n else:\n c=int(crabClustering[i]-1)*2\n cAx1.plot(crabD[i,0],crabD[i,1],marker='.',c=tableau20[c])\n\ncAx2.set_title('Final clustering')\nfor i in range(len(crabsAssign)):\n if max(crabClustering) > 10:\n break\n cAx2.plot(crab2cluster[i,2],crab2cluster[i,1],marker='.',c=tableau20[int(crabClustering[i]-1)*2])\n \ncAx3.set_title('Original clustering')\nfor i in range(len(crabsAssign)):\n cAx3.plot(crab2cluster[i,2],crab2cluster[i,1],marker='.',c=tableau20[int(crabsAssign[i]-1)*2])\n \ncrabCI=determine_ci.ConsistencyIndex(N=200)\ncrabAccuracy=crabCI.score(crabClustering,crabsAssign,format='array')\nprint \"consistency index: {}\".format(crabAccuracy)\n\n# Hungarian accuracy\nhungAcc = determine_ci.HungarianAccuracy(nsamples=200)\nhungAcc.score(crabClustering,crabsAssign,format='array')\nprint 'Hungarian Accuracy:\\t',hungAcc.accuracy", "Using conventional PCA, clustering results are better.\nOther preprocessing\nLet's now consider clustering on data projected on all principal components (with centered data) and on original data.", "#1.0/np.sqrt(2)\nsigma_allpc=0.5\nsteps_allpc=200\ncrabD_allpc,V_allpc,E=HornAlg.graddesc(ncrabsData1[:,:3],sigma=sigma_allpc,steps=steps_allpc)\n\nsigma_origin=1.0/sqrt(2)\nsteps_origin=80\ncrabD_origin,V_origin,E=HornAlg.graddesc(crabsData,sigma=sigma_origin,steps=steps_origin)\n\ndist_allpc=12\ndist_origin=15\n\ncrabClustering_allpc=HornAlg.fineCluster(crabD_allpc,dist_allpc,potential=V_allpc)\ncrabClustering_origin=HornAlg.fineCluster(crabD_origin,dist_origin,potential=V_origin)\n\ncrabCI=determine_ci.ConsistencyIndex(N=200)\ncrabAccuracy=crabCI.score(crabClustering_allpc,crabsAssign,format='array')\n\n# Hungarian accuracy\nhungAcc = determine_ci.HungarianAccuracy(nsamples=200)\nhungAcc.score(crabClustering_allpc,crabsAssign,format='array')\ncrabHA = hungAcc.accuracy\n\ncrabCI2=determine_ci.ConsistencyIndex(N=200)\ncrabAccuracy2=crabCI2.score(crabClustering_origin,crabsAssign,format='array')\n\n# Hungarian accuracy\nhungAcc = determine_ci.HungarianAccuracy(nsamples=200)\nhungAcc.score(crabClustering_origin,crabsAssign,format='array')\ncrabHA2 = hungAcc.accuracy\n\nprint 'All PC\\t\\tNumber of clusters:',max(crabClustering_allpc)\nprint 'All PC\\t\\tUnclestered points:', np.count_nonzero(crabClustering_allpc==0)\nprint 'All PC\\t\\tConsistency index:',crabAccuracy\nprint 'All PC\\t\\tHungarian Accuracy:',crabHA\nprint 'Original data\\tNumber of clusters:',max(crabClustering_origin)\nprint 'Original data\\tUnclestered points:', np.count_nonzero(crabClustering_origin==0)\nprint 'Original data\\tConsistency index:',crabAccuracy2\nprint 'Original data\\t\\tHungarian 
Accuracy:',crabHA2\n\ncFig2=plt.figure(figsize=(16,12))\ncAx1=cFig2.add_subplot(3,2,1)\ncAx2=cFig2.add_subplot(3,2,3)\ncAx3=cFig2.add_subplot(3,2,5)\n\ncAx4=cFig2.add_subplot(3,2,2)\ncAx5=cFig2.add_subplot(3,2,4)\ncAx6=cFig2.add_subplot(3,2,6)\n\ncAx1.set_title('Final quantum system, All PC')\nfor i in range(len(crabsAssign)):\n    if max(crabClustering_allpc) >= 10:\n        c=0\n    else:\n        c=int(crabClustering_allpc[i]-1)*2\n    cAx1.plot(crabD_allpc[i,2],crabD_allpc[i,1],marker='.',c=tableau20[c])\n\ncAx2.set_title('Final clustering, All PC')\nfor i in range(len(crabsAssign)):\n    if max(crabClustering_allpc) > 10:\n        break\n    cAx2.plot(ncrabsData1[i,2],ncrabsData1[i,1],marker='.',c=tableau20[int(crabClustering_allpc[i]-1)*2])\n    \ncAx3.set_title('Original clustering, All PC')\nfor i in range(len(crabsAssign)):\n    cAx3.plot(ncrabsData1[i,2],ncrabsData1[i,1],marker='.',c=tableau20[int(crabsAssign[i]-1)*2])\n\n    #--------------------------------------------------------------#\n    \ncAx4.set_title('Final quantum system, Original data')\nfor i in range(len(crabsAssign)):\n    if max(crabClustering_origin) >= 10:\n        c=0\n    else:\n        c=int(crabClustering_origin[i]-1)*2\n    cAx4.plot(crabD_origin[i,0],crabD_origin[i,1],marker='.',c=tableau20[c])\n\ncAx5.set_title('Final clustering, Original Data')\nfor i in range(len(crabsAssign)):\n    if max(crabClustering_origin) > 10:\n        break\n    cAx5.plot(crabsData[i,0],crabsData[i,1],marker='.',c=tableau20[int(crabClustering_origin[i]-1)*2])\n    \ncAx6.set_title('Original clustering, Original Data')\nfor i in range(len(crabsAssign)):\n    cAx6.plot(crabsData[i,0],crabsData[i,1],marker='.',c=tableau20[int(crabsAssign[i]-1)*2])",
"The last experiments show considerably worse results. The final quantum system suggests a great amount of minima and a bigger variance in the final convergence of the points. Furthermore, the distribution of the minima doesn't suggest any natural clustering to the user, contrary to what happened before.\nThe clustering on raw data is very bad. 
This was to be expected considering the distribution and shape of the original data across all dimensions and clusters.\nGaussian blobs\nOriginal Mix",
"n_samples=400\nn_features=5\ncenters=4\n\nx_Gauss,x_assign=sklearn.datasets.make_blobs(n_samples=n_samples,n_features=n_features,centers=centers)\n#nX=sklearn.preprocessing.normalize(x_Gauss,axis=0)\nx_2cluster=x_Gauss\n\ngMix_fig=plt.figure()\nplt.title('Gaussian Mix, '+str(n_features)+' features')\nfor i in range(x_Gauss.shape[0]):\n    plt.plot(x_2cluster[i,0],x_2cluster[i,1],marker='.',c=tableau20[int(x_assign[i])*2])\n\nsigma=2.\nsteps=200\ngaussD,V,E=HornAlg.graddesc(x_2cluster,sigma=sigma,steps=steps)\n\ndist=6\nnX_clustering=HornAlg.fineCluster(gaussD,dist,potential=V)\nprint 'number of clusters=',max(nX_clustering)\n\ngRes_fig=plt.figure(figsize=(16,12))\ngRes_ax1=gRes_fig.add_subplot(2,2,1)\ngRes_ax2=gRes_fig.add_subplot(2,2,2)\ngRes_ax3=gRes_fig.add_subplot(2,2,4)\n\n\ngRes_ax1.set_title('Final quantum system')\nfor i in range(x_Gauss.shape[0]):\n    if max(nX_clustering) > 10:\n        c=0\n    else:\n        c=int(nX_clustering[i]-1)*2\n    gRes_ax1.plot(gaussD[i,0],gaussD[i,1],marker='.',c=tableau20[c])\n\ngRes_ax2.set_title('Final clustering')\nfor i in range(len(nX_clustering)):\n    if max(nX_clustering) >10 :\n        break\n    gRes_ax2.plot(x_2cluster[i,0],x_2cluster[i,1],marker='.',c=tableau20[int(nX_clustering[i]-1)*2])\n    \ngRes_ax3.set_title('Correct clustering')\nfor i in range(x_Gauss.shape[0]):\n    gRes_ax3.plot(x_2cluster[i,0],x_2cluster[i,1],marker='.',c=tableau20[int(x_assign[i])*2])",
"PCA Mix",
"pcaX,gaussComps,gaussEigs=HornAlg.pcaFun(x_Gauss,whiten=True,center=True,\n                                         method='eig',type='cov',normalize=False)\ngPCAf=plt.figure()\nplt.title('PCA')\nfor i in range(x_Gauss.shape[0]):\n    plt.plot(pcaX[i,0],pcaX[i,1],marker='.',c=tableau20[int(x_assign[i])*2])\n\nsigma=2.\nsteps=400\npcaGaussD,V,E,eta=HornAlg.graddesc(pcaX,sigma=sigma,steps=steps,return_eta=True)\n\ndist=28\npcaX_clustering=HornAlg.fineCluster(pcaGaussD,dist,potential=V)\nprint 'number of clusters=',max(pcaX_clustering)\n\ngPCARes_fig,(gPCARes_ax1,gPCARes_ax2)=plt.subplots(nrows=1, ncols=2, figsize=(16,6))\n\ngPCARes_ax1.set_title('Final quantum system')\nfor i in range(x_Gauss.shape[0]):\n    if max(pcaX_clustering) > 10:\n        c=0\n    else:\n        c=int(pcaX_clustering[i]-1)*2\n    gPCARes_ax1.plot(pcaGaussD[i,0],pcaGaussD[i,2],marker='.',c=tableau20[c])\n\ngPCARes_ax2.set_title('Final clustering')\nfor i in range(len(pcaX_clustering)):\n    if max(pcaX_clustering) >10 :\n        break\n    gPCARes_ax2.plot(pcaX[i,0],pcaX[i,1],marker='.',c=tableau20[int(pcaX_clustering[i]-1)*2])",
"Comments\nThe algorithm performed very poorly on unprocessed data. Even with a high $\sigma$ and a big number of steps for the gradient descent, the final quantum system had points scattered all over, not even resembling the original data, i.e. the points diverged. The performance on the projected data was significantly better. The final quantum system suggests some natural clustering to a user, showing 4 separate clusters. However, the assignment algorithm did a very poor job and the final clustering is all off. Paying close attention to the colours, though, we can see that the leftmost cluster only has two colours in both plots, which suggests a correspondence. The same can be done for the other clusters. A user of this algorithm would be able to produce a better clustering by selecting which points should go together from the final quantum system plot. 
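A hedged alternative to the threshold-based assignment, not attempted in this notebook, would be to hand the converged points to an off-the-shelf density-based clusterer, since after the gradient descent they are already packed tightly around the potential minima; for example (eps and min_samples are illustrative values that would need tuning):\n\n```python\nfrom sklearn.cluster import DBSCAN\n\n# hypothetical post-processing of the converged points, in place of fineCluster\nlabels = DBSCAN(eps=1.0, min_samples=5).fit_predict(pcaGaussD)\nn_clusters = labels.max() + 1   # DBSCAN labels noise points as -1\n```\n\n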
The assignment algorithm probably performs worse here because of the remaining dimensions, which are not visible in this two-dimensional projection." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/lucid
notebooks/feature-visualization/any_number_channels.ipynb
apache-2.0
[ "Arbitrary number of channels parametrization\nThis notebook uses the new param.image parametrization that takes any number of channels.", "import numpy as np\nimport tensorflow as tf\n\nimport lucid.modelzoo.vision_models as models\nfrom lucid.misc.io import show\nimport lucid.optvis.objectives as objectives\nimport lucid.optvis.param as param\nimport lucid.optvis.render as render\nimport lucid.optvis.transform as transform\n\nmodel = models.InceptionV1()\nmodel.load_graphdef()", "Testing params\nThe following params are introduced to test the new param.imag parametrization by going back to three channels for the existing modelzoo models", "def arbitrary_channels_to_rgb(*args, channels=None, **kwargs):\n channels = channels or 10\n full_im = param.image(*args, channels=channels, **kwargs)\n r = tf.reduce_mean(full_im[...,:channels//3]**2, axis=-1)\n g = tf.reduce_mean(full_im[...,channels//3:2*channels//3]**2, axis=-1)\n b = tf.reduce_mean(full_im[...,2*channels//3:]**2, axis=-1)\n return tf.stack([r,g,b], axis=-1)\n\ndef grayscale_image_to_rgb(*args, **kwargs):\n \"\"\"Takes same arguments as image\"\"\"\n output = param.image(*args, channels=1, **kwargs)\n return tf.tile(output, (1,1,1,3))", "Arbitrary channels parametrization\nparam.arbitrary_channels calls param.image and then reduces the arbitrary number of channels to 3 for visualizing with modelzoo models.", "_ = render.render_vis(model, \"mixed4a_pre_relu:476\", param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))", "Grayscale parametrization\nparam.grayscale_image creates param.image with a single channel and then tiles them 3 times for visualizing with modelzoo models.", "_ = render.render_vis(model, \"mixed4a_pre_relu:476\", param_f=lambda:grayscale_image_to_rgb(128))", "Testing different objectives\nDifferent objectives applied to both parametrizations.", "_ = render.render_vis(model, objectives.deepdream(\"mixed4a_pre_relu\"), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))\n_ = render.render_vis(model, objectives.channel(\"mixed4a_pre_relu\", 360), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))\n_ = render.render_vis(model, objectives.neuron(\"mixed4a_pre_relu\", 476), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))\n\n_ = render.render_vis(model, objectives.deepdream(\"mixed4a_pre_relu\"), param_f=lambda:grayscale_image_to_rgb(128))\n_ = render.render_vis(model, objectives.channel(\"mixed4a_pre_relu\", 360), param_f=lambda:grayscale_image_to_rgb(128))\n_ = render.render_vis(model, objectives.neuron(\"mixed4a_pre_relu\", 476), param_f=lambda:grayscale_image_to_rgb(128))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jacobdein/alpine-soundscapes
figures/Figure 3 - noise removal.ipynb
mit
[ "Figure 3 - noise removal\nThe figure in this notebook illustrates: <br />\na) an example of a spectrogram from a section of an original recording <br />\nb) Spectrogram of biophony after we applied the modified-ALE technique to the original spectrogram <br />\nc) Spectrogram from a section of an original recording <br />\nd) Spectrogram of biophony after we applied the modified-ALE technique to the original spectrogram <br />\nimport statements", "from matplotlib import pyplot\n%matplotlib inline\nfrom matplotlib.patches import Rectangle\nfrom matplotlib.lines import Line2D\n\nimport numpy\nfrom scipy.io import wavfile\n\nfrom os import path\nfrom datetime import timedelta\n\nfrom django.db import connection\nfrom database.models import Sound\nfrom database.models import Site\n\nfrom nacoustik import Wave\nfrom nacoustik.spectrum import psd\nfrom nacoustik.noise import remove_background_noise, remove_anthrophony\n\nfrom figutils import query, style", "variable definitions\nfigure directory", "figure_directory = \"\"", "example recording 1", "site1 = Site.objects.get(name='Höttinger Rain')\nsound_db1 = Sound.objects.get(id=147)", "example recording 2", "site2 = Site.objects.get(name='Pfaffensteig')\nsound_db2 = Sound.objects.get(id=158)", "formating", "style.set_font()", "remove noise\nremove noise from example recordings 1 and 2 using the adaptive level equalization algorithm", "# example recording 1\nwave1 = Wave(sound_db1.get_filepath())\nwave1.read()\nwave1.normalize()\nsamples1 = wave1.samples[(100 * wave1.rate):(160 * wave1.rate)]\nduration = 60\nf, t, a_pass = psd(samples1, rate=wave1.rate, window_length=512)\nale_pass = remove_background_noise(a_pass, N=0.18, iterations=3)\nb_pass = remove_anthrophony(ale_pass, time_delta=t[1]-t[0], freq_delta=f[1]-f[0])\nb_pass = numpy.ma.masked_equal(b_pass, value=0)\n\n# example recording 2\nwave2 = Wave(sound_db2.get_filepath())\nwave2.read()\nwave2.normalize()\nsamples2 = wave2.samples[(0 * wave2.rate):(60 * wave2.rate)]\nduration = 60\nf, t, a_fail = psd(samples2, rate=wave2.rate, window_length=512)\nale_fail = remove_background_noise(a_fail, N=0.18, iterations=3)\nb_fail = remove_anthrophony(ale_fail, time_delta=t[1]-t[0], freq_delta=f[1]-f[0])\nb_fail = numpy.ma.masked_equal(b_fail, value=0)", "plot", "# create figure\nfigure3 = pyplot.figure()\n#figure3.subplots_adjust(left=0.04, bottom=0.12, right=0.96, top=0.97, wspace=0, hspace=0)\nfigure3.subplots_adjust(left=0.04, bottom=0.04, right=0.96, top=0.99, wspace=0, hspace=0)\nfigure3.set_figwidth(6.85)\nfigure3.set_figheight(9.21)\n\n# specify frequency bins (width of 1 kiloherz)\nbins = numpy.arange(0, (wave1.rate / 2), 1000)\n\n# axes\nax_a = pyplot.subplot2grid((21, 1), (0, 0), rowspan=5, colspan=1)\nax_b = pyplot.subplot2grid((21, 1), (5, 0), rowspan=5, colspan=1, sharex=ax_a, sharey=ax_a)\nax_c = pyplot.subplot2grid((21, 1), (11, 0), rowspan=5, colspan=1, sharey=ax_a)\nax_d = pyplot.subplot2grid((21, 1), (16, 0), rowspan=5, colspan=1, sharex=ax_c, sharey=ax_a)\n\n# compute xlabels\nstart_time = sound_db1.get_datetime() + timedelta(seconds=100)\ntime_delta = 10\nn = int((duration / time_delta) + 1)\nxlabels_pass = [(start_time + timedelta(seconds=i*time_delta)).strftime(\"%H:%M:%S\") for i in range(n)]\nstart_time = sound_db1.get_datetime()\nxlabels_fail = [(start_time + timedelta(seconds=i*time_delta)).strftime(\"%H:%M:%S\") for i in range(n)]\nylabels = [\"\", \"2\", \"\", \"4\", \"\", \"6\", \"\", \"8\", \"\", \"10\", \"\", \"\"]\n\n# original - example 1\nspec_1 = 
ax_a.pcolormesh(t, f, a_pass[0], cmap='Greys', vmin=-150, vmax=-80)\nax_a.set(ylim=([0, wave1.rate / 2]),\n yticks = bins.astype(numpy.int) + 1000)\nax_a.set_yticklabels(ylabels)\nax_a.set_ylabel(\"frequency (kilohertz)\")\nax_a.tick_params(length=6, color='black', direction='in',\n bottom=True, labelbottom=False,\n top=False, labeltop=False,\n left=True, labelleft=False,\n right=True, labelright=True)\nax_a.set_frame_on(False)\n\n# after adaptive level equalization - example 1\nspec_2 = ax_b.pcolormesh(t, f, b_pass[0], cmap='Greys', vmin=-150, vmax=-80)\nax_b.set(ylim=([0, wave1.rate / 2]),\n yticks = bins.astype(numpy.int) + 1000)\nax_b.set_xticklabels(xlabels_pass)\nax_b.set_ylabel(\"frequency (kilohertz)\")\nax_b.tick_params(length=6, color='black', direction='in',\n bottom=True, labelbottom=True,\n top=False, labeltop=False,\n left=True, labelleft=False,\n right=True, labelright=True)\nax_b.set_frame_on(False)\n\n# original - example 2\nspec_3 = ax_c.pcolormesh(t, f, a_fail[1], cmap='Greys', vmin=-150, vmax=-80)\nax_c.set(ylim=([0, wave2.rate / 2]),\n yticks = bins.astype(numpy.int) + 1000)\nax_c.set_ylabel(\"frequency (kilohertz)\")\nax_c.tick_params(length=6, color='black', direction='in',\n bottom=True, labelbottom=False,\n top=False, labeltop=False,\n left=True, labelleft=False,\n right=True, labelright=True)\nax_c.set_frame_on(False)\n\n# after adaptive level equalization - example 2\nspec_4 = ax_d.pcolormesh(t, f, b_fail[1], cmap='Greys', vmin=-150, vmax=-80)\nax_d.set(ylim=([0, wave2.rate / 2]),\n yticks = bins.astype(numpy.int) + 1000)\nax_d.set_xticklabels(xlabels_fail)\nax_d.set_ylabel(\"frequency (kilohertz)\")\nax_d.tick_params(length=6, color='black', direction='in',\n bottom=True, labelbottom=True,\n top=False, labeltop=False,\n left=True, labelleft=False,\n right=True, labelright=True)\nax_d.set_frame_on(False)\nax_d.set_xlabel(\"time of day (hours:minutes:seconds)\")\n\n# axes borders\nax_a.add_line(Line2D([t[0], t[-1:]], [1, 1], color='black', linewidth=1))\nax_a.add_line(Line2D([t[0], t[0]], [0, 12000], color='black', linewidth=1))\nax_a.add_line(Line2D([t[-1:], t[-1:]], [0, 12000], color='black', linewidth=1))\nax_b.add_line(Line2D([t[0], t[-1:]], [1, 1], color='black', linewidth=1))\nax_b.add_line(Line2D([t[0], t[0]], [0, 12000], color='black', linewidth=1))\nax_b.add_line(Line2D([t[-1:], t[-1:]], [0, 12000], color='black', linewidth=1))\nax_c.add_line(Line2D([t[0], t[-1:]], [1, 1], color='black', linewidth=1))\nax_c.add_line(Line2D([t[0], t[0]], [0, 12000], color='black', linewidth=1))\nax_c.add_line(Line2D([t[-1:], t[-1:]], [0, 12000], color='black', linewidth=1))\nax_d.add_line(Line2D([t[0], t[-1:]], [1, 1], color='black', linewidth=1))\nax_d.add_line(Line2D([t[0], t[0]], [0, 12000], color='black', linewidth=1))\nax_d.add_line(Line2D([t[-1:], t[-1:]], [0, 12000], color='black', linewidth=1))\n\n# annotation\nax_a.add_line(Line2D([t[0], t[-1:]], [2000, 2000], color='black', linewidth=1, linestyle='--'))\nt1 = ax_a.text(14, 2100, '2 kilohertz', color='black', ha='left', va='bottom')\nb1 = ax_a.add_patch(Rectangle((23, 0), 9, 6100, facecolor='none', edgecolor='black', linestyle='--'))\nb2 = ax_a.add_patch(Rectangle((30, 0), 9, 11500, facecolor='none', edgecolor='black', linestyle='--'))\nap = dict(arrowstyle='-',\n connectionstyle='arc3,rad=0.2')\na1 = ax_a.annotate('plane landing', (23, 4000), xytext=(21, 6000), ha='right', va='center', arrowprops=ap)\na2 = ax_a.annotate('car passing', (39, 9000), xytext=(42, 10000), ha='left', va='center', 
arrowprops=ap)\nstyle.multi_annotate(ax_a, 'birds calling', xy_list=[(53.5, 8500), (45.5, 4000)], xytext=(50, 6300), \n ha='center', va='center',\n arrowprops=dict(arrowstyle='->',\n connectionstyle='arc3,rad=0.2'))\n\n# title formatting\ntitle_font = {\n 'size': 12.0,\n 'weight': 'bold'\n}\nax_a2 = pyplot.axes([0.005, 0, 1, 0.99], facecolor=(1, 1, 1, 0), frameon=False)\nax_a2.tick_params(bottom=False, labelbottom=False,\n top=False, labeltop=False,\n left=False, labelleft=False,\n right=False, labelright=False)\nax_b2 = pyplot.axes([0.005, 0, 1, 0.76], facecolor=(1, 1, 1, 0), frameon=False)\nax_b2.tick_params(bottom=False, labelbottom=False,\n top=False, labeltop=False,\n left=False, labelleft=False,\n right=False, labelright=False)\nax_c2 = pyplot.axes([0.005, 0, 1, 0.49], facecolor=(1, 1, 1, 0), frameon=False)\nax_c2.tick_params(bottom=False, labelbottom=False,\n top=False, labeltop=False,\n left=False, labelleft=False,\n right=False, labelright=False)\nax_d2 = pyplot.axes([0.005, 0, 1, 0.26], facecolor=(1, 1, 1, 0), frameon=False)\nax_d2.tick_params(bottom=False, labelbottom=False,\n top=False, labeltop=False,\n left=False, labelleft=False,\n right=False, labelright=False)\nt1 = ax_a2.text(0, 1, 'a', horizontalalignment='left', verticalalignment='top', \n fontdict=title_font)\nt2 = ax_b2.text(0, 1, 'b', horizontalalignment='left', verticalalignment='top', \n fontdict=title_font)\nt3 = ax_c2.text(0, 1, 'c', horizontalalignment='left', verticalalignment='top', \n fontdict=title_font)\nt4 = ax_d2.text(0, 1, 'd', horizontalalignment='left', verticalalignment='top', \n fontdict=title_font)", "save figure", "#figure3.savefig(path.join(figure_directory, \"figure3.png\"), dpi=300)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AdrianoValdesGomez/Master-Thesis
3D_Plots_01.ipynb
cc0-1.0
[ "import numpy as np\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n%matplotlib inline", "Line plot", "ts = np.linspace(0,16*np.pi,1000)\nxs = np.sin(ts)\nys = np.cos(ts)\nzs = ts\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection ='3d')\nax.plot(xs,ys,zs, zdir = 'z')", "Esta es la manera \"canónica\" de generar una gráfica en la que habrá una curva en $\\mathbb{R}^3$. Las xs, ys, zs son las coordenadas de la curva, en este caso están dadas por arreglos de numpy. zdir hace alusión a la dirección que se considerará como la dirección z en caso de introducir una gráfica 2D en esta misma.\nScatter Plot\nDe igual forma podemos generar una gráfica constituída por puntos; se le denomina \"scatter\"", "ts = np.linspace(0,8*np.pi,1000)\nxs = np.sin(ts)\nys = np.cos(ts)\nzs = ts\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection ='3d')\nax.scatter(xs,ys,zs, zdir = 'z', alpha = 0.3)", "Wireframe Plot\nEn este caso necesitamos arreglos bidimensionales para las xs y las ys, para ello usamos la función meshgrid, de la siguiente forma", "x = np.linspace(-1.5,1.5,100)\ny = np.linspace(-1.5,1.5,100)\nXs, Ys = np.meshgrid(x,y)\n\nZs = np.sin(2*Xs)*np.sin(2*Ys)\nfig = plt.figure(figsize=(5.9,5.9))\nax = fig.add_subplot(111, projection ='3d')\nax.plot_wireframe(Xs,Ys,Zs, rstride=3, cstride=3, alpha = 0.4)\n\n#plt.figure?", "Quiver Plot", "pts_x_ini = np.array([0])\npts_y_ini = np.array([0])\npts_z_ini = np.array([0])\npts_x_fin = np.array([0])\npts_y_fin = np.array([0])\npts_z_fin = np.array([1])\nfig = plt.figure()\nax = fig.add_subplot(111, projection = '3d')\nax.quiver(0,0,0,0,0,10,length=1.0, arrow_length_ratio = .1)\n\nax.set_xlim(-1,1)\nax.set_ylim(-1,1)\nax.set_zlim(-1,1)\n\nax.quiver?", "Vector FIeld", "xc, yc, zc = np.meshgrid(np.arange(-0.8, 1, 0.2),\n np.arange(-0.8, 1, 0.2),\n np.arange(-0.8, 1, 0.8))\n\nu = np.sin(np.pi * xc) * np.cos(np.pi * yc) * np.cos(np.pi * zc)\nv = -np.cos(np.pi * xc) * np.sin(np.pi * yc) * np.cos(np.pi * zc)\nw = (np.sqrt(2.0 / 3.0) * np.cos(np.pi * xc) * np.cos(np.pi * yc) *\n np.sin(np.pi * zc))\nfig = plt.figure()\nax = fig.add_subplot(111, projection = '3d')\nax.quiver(xc, yc, zc, u, v, w, length=0.1, color = 'g')\n\nplt.show()", "Campo vectorial Eléctroestático", "xr,yr,zr = np.meshgrid(np.arange(-1,1,.1),np.arange(-1,1,.1),np.arange(-1,1,.1))\ntheta = np.linspace(0,np.pi,100)\nphi = np.linspace(0,2*np.pi,100)\nr = 1/np.sqrt(xr**2+yr**2+zr**2)\nfig = plt.figure()\nU,V,W = np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)\nax = fig.add_subplot(111,projection = '3d')\nax.quiver(xr,yr,zr, U,V,W, length=0.2, color = 'b')", "2D plots inside 3D plots", "fig = plt.figure()\nax = fig.gca(projection='3d')\n\nEx = np.linspace(0, 2*np.pi, 100)\nEy = np.sin(Ex * 2 * np.pi) / 2 + 0.5\nax.plot(Ex, Ey, zs=0, zdir='z', label='zs=0, zdir=z')\n\n\nBx = np.linspace(0, 2*np.pi, 100)\nBy = np.sin(Bx * 2 * np.pi) / 2 + 0.5\nax.plot(Bx, By, zs=0, zdir='y', label='zs=0, zdir=z')\n\n\n\n#colors = ('r', 'g', 'b', 'k')\n#for c in colors:\n# x = np.random.sample(200)\n# y = np.random.sample(200)\n# ax.scatter(x, y, 0, zdir='y', c=c, alpha = 0.2)\n\nax.legend()\nax.set_xlim3d(0, 2*np.pi)\nax.set_ylim3d(-1.1, 1.1)\nax.set_zlim3d(-1.1, 1.1)\n\nplt.show()\n\nfig.gca?", "Fill_Between in 3D plots", "import math as mt\nimport matplotlib.pyplot as pl\nimport numpy as np\nimport random as rd\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom mpl_toolkits.mplot3d.art3d import Poly3DCollection\n\n\n# Parameter (reference height)\nh = 
0.0\n\n# Code to generate the data\nn = 200\nalpha = 0.75 * mt.pi\ntheta = [alpha + 2.0 * mt.pi * (float(k) / float(n)) for k in range(0, n + 1)]\nxs = [1.0 * mt.cos(k) for k in theta]\nys = [1.0 * mt.sin(k) for k in theta]\nzs = [abs(k - alpha - mt.pi) * rd.random() for k in theta]\n\n# Code to convert data in 3D polygons\nv = []\nfor k in range(0, len(xs) - 1):\n x = [xs[k], xs[k+1], xs[k+1], xs[k]]\n y = [ys[k], ys[k+1], ys[k+1], ys[k]]\n z = [zs[k], zs[k+1], h, h]\n v.append(zip(x, y, z))\npoly3dCollection = Poly3DCollection(v)\n\n# Code to plot the 3D polygons\nfig = pl.figure()\nax = Axes3D(fig)\nax.add_collection3d(poly3dCollection)\nax.set_xlim([min(xs), max(xs)])\nax.set_ylim([min(ys), max(ys)])\nax.set_zlim([min(zs), max(zs)])\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\nax.set_zlabel(\"z\")\n\npl.show()", "Putting text inside the plots", "fig = plt.figure()\nax = fig.gca(projection='3d')\nplt.rc('text', usetex=True)\nzdirs = (None, 'x', 'y', 'z', (1, 1, 0), (1, 1, 1))\nxs = (1, 4, 4, 9, 4, 1)\nys = (2, 5, 8, 10, 1, 2)\nzs = (10, 3, 8, 9, 1, 8)\n\nfor zdir, x, y, z in zip(zdirs, xs, ys, zs):\n label = '(%d, %d, %d), dir=%s' % (x, y, z, zdir)\n ax.text(x, y, z, label, zdir)\n\nplt.rc('text', usetex=True)\nax.text(9, 0, 0, \"red\", color='red')\nax.text2D(0.05, 0.95, r\"2D Text $\\frac{n!}{2\\pi i}\\oint \\frac{f(\\xi)}{\\xi-z_0}\\,d\\xi$\", transform=ax.transAxes)\n\nax.set_xlim3d(0, 10)\nax.set_ylim3d(0, 10)\nax.set_zlim3d(0, 10)\n\nax.set_xlabel('X axis')\nax.set_ylabel('Y axis')\nax.set_zlabel('Z axis')\n\nplt.show()\n\nax.transAxes?\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib.collections import PolyCollection\nfrom matplotlib.colors import colorConverter\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\n\ndef cc(arg):\n return colorConverter.to_rgba(arg, alpha=0.6)\n\nxs = np.arange(0, 10, 0.4)\nverts = []\nzs = [0.0, 1.0, 2.0, 3.0]\nfor z in zs:\n ys = np.random.rand(len(xs))\n ys[0], ys[-1] = 0, 0\n verts.append(list(zip(xs, ys)))\n\npoly = PolyCollection(verts, facecolors=[cc('r'), cc('g'), cc('b'),\n cc('y')])\npoly.set_alpha(0.7)\nax.add_collection3d(poly, zs=zs, zdir='y')\n\nax.set_xlabel('X')\nax.set_xlim3d(0, 10)\nax.set_ylabel('Y')\nax.set_ylim3d(-1, 4)\nax.set_zlabel('Z')\nax.set_zlim3d(0, 1)\n\nplt.show()\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CondensedOtters/PHYSIX_Utils
Projects/Moog_2016-2019/CO2/CO2_NN/forces.ipynb
gpl-3.0
[ "Matching Atomic Forces using Neural Nets and Gaussians Overlaps\nLoading Tech Stuff\nFirst we load the python path to LibAtomicSim, which will give us some useful functions", "import sys\nsys.path.append(\"/Users/mathieumoog/Documents/LibAtomicSim/Python/\")", "Then we load the modules, some issues remain with matplotlib on Jupyter, but we'll fix them later.", "# NN\nimport keras\n# Descriptor (unused)\nimport dscribe\n# Custom Libs\nimport cpmd\nimport filexyz\n# Maths\nimport numpy as np\nfrom scipy.spatial.distance import cdist\n# Plots\nimport matplotlib\nmatplotlib.use('nbAgg')\nimport matplotlib.pyplot as plt\n# Scalers\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\nfrom sklearn.decomposition import PCA, KernelPCA\nfrom keras.regularizers import l2\n#['GTK3Agg', 'GTK3Cairo', 'MacOSX', 'nbAgg', 'Qt4Agg', 'Qt4Cairo', 'Qt5Agg', 'Qt5Cairo', 'TkAgg', 'TkCairo', 'WebAgg', 'WX', 'WXAgg', 'WXCairo', 'agg', 'cairo', 'pdf', 'pgf', 'ps', 'svg', 'template']\n# Need to install some of those stuff at some point", "Then we write some functions that are not yet on LibAtomicSim, but should be soon(ish)", "def getDistance1Dsq( position1, position2, length):\n dist = position1-position2\n half_length = length*0.5\n if dist > half_length :\n dist -= length\n elif dist < -half_length:\n dist += length\n return dist*dist\ndef getDistanceOrtho( positions, index1, index2, cell_lengths ):\n dist=0\n for i in range(3):\n dist += getDistance1Dsq( positions[index1,i], positions[index2,i], cell_lengths[i] )\n return np.sqrt(dist)\ndef getDistance( position1, position2, cell_length ):\n dist=0\n for i in range(3):\n dist += getDistance1Dsq( position1[i], position2[i], cell_lengths[i] )\n return np.sqrt(dist)\ndef computeDistanceMatrix( positions, cell_lengths):\n nb_atoms = len(positions[:,0])\n matrix = np.zeros(( nb_atoms, nb_atoms ))\n for atom in range(nb_atoms):\n for atom2 in range(atom+1,nb_atoms):\n dist = getDistanceOrtho( positions, atom, atom2, cell_lengths )\n matrix[atom,atom2] = dist\n matrix[atom2,atom] = dist\n return matrix", "Data Parameters", "volume=8.82\ntemperature=3000\nnb_type=2\nnbC=32\nnbO=64\nrun_nb=1\nnb_atoms=nbC+nbO\npath_sim = str( \"/Users/mathieumoog/Documents/CO2/\" + str(volume) + \"/\" + str(temperature) + \"K/\" + str(run_nb) + \"-run/\")", "Loading Trajectory\nHere we load the trajectory, including forces and velocities, and convert the positions back into angstroms, while the forces are still in a.u (although we could do everything in a.u.).", "cell_lengths = np.ones(3)*volume\nftraj_path = str( path_sim + \"FTRAJECTORY\" )\npositions, velocities, forces = cpmd.readFtraj( ftraj_path, True )\nnb_step = positions.shape[0]\nang2bohr = 0.529177\npositions = positions*ang2bohr\nfor i in range(3):\n positions[:,:,i] = positions[:,:,i] % cell_lengths[i]", "Data parametrization\nSetting up the parameters for the data construction.", "sigma_C = 0.9\nsigma_O = 0.9\nsize_data = nb_step*nbC\ndx = 0.1\npositions_offset = np.zeros( (6,3), dtype=float )\nsize_off = 6\nn_features=int(2*(size_off+1))\nfor i,ival in enumerate(np.arange(0,size_off,2)): \n positions_offset[ ival , i ] += dx\n positions_offset[ ival+1 , i ] -= dx", "Building complete data set, with the small caveat that we don't seek to load all of the positions for time constraints (for now at least).", "max_step = 1000\nstart_step = 1000\nstride = 10\nsize_data = max_step*nbC\ndata = np.zeros( (max_step*nbC, size_off+1, nb_type ), dtype=float )\nfor step in 
np.arange(start_step,stride*max_step+start_step,stride):\n # Distance from all atoms (saves time?)\n matrix = computeDistanceMatrix( positions[step,:,:], cell_lengths)\n for carbon in range(nbC):\n # Data Adress\n add_data = int((step-start_step)/stride)*nbC + carbon\n # C-C\n for carbon2 in range(nbC):\n # Gaussians at atomic site\n data[ add_data, 0, 0 ] += np.exp( -(matrix[carbon,carbon2]*matrix[carbon,carbon2])/(2*sigma_C*sigma_C) )\n # Gaussians with small displacement from site\n if carbon != carbon2:\n for i in range(size_off):\n dist = getDistance( positions[step, carbon2, :], (positions[step,carbon,:]+positions_offset[i,:])%cell_lengths[0], cell_lengths )\n data[ add_data, i+1, 0 ] += np.exp( -(dist*dist)/(2*sigma_C*sigma_C) )\n # C-O\n for oxygen in range(nbC,nb_atoms):\n # Gaussians at atomic site\n data[ add_data, 0, 1 ] += np.exp( -(matrix[carbon,oxygen]*matrix[carbon,oxygen])/(2*sigma_O*sigma_O) )\n # Gaussians with small displacement from site\n for i in range(size_off):\n dist = getDistance( positions[step, oxygen,:], (positions[step,carbon,:]+positions_offset[i,:])%cell_lengths[0], cell_lengths )\n data[ add_data, i+1, 1 ] += np.exp( -(dist*dist)/(2*sigma_O*sigma_O) )", "Creating test and train set\nHere we focus on the carbon atoms, and we create the input and output shape of the data. The input is created by reshaping the positions array, while the output is simply the forces reshaped. Once this is done, we chose the train et test set by making sure that there is no overlap between them.", "nb_data_train = 30000\nnb_data_test = 1000\nsize_data = max_step*nbC\nif nb_data_train + nb_data_test > data.shape[0]:\n print(\"Datasets larger than amount of available data\")\ndata = data.reshape( size_data, int(2*(size_off+1)) )\nchoice = np.random.choice( size_data, nb_data_train+nb_data_test, replace=False)\nchoice_train = choice[0:nb_data_train]\nchoice_test = choice[nb_data_train:nb_data_train+nb_data_test]", "Here we reshape the data and choose the point for the train and test set making sure that they do not overlap", "input_train = data[ choice_train ]\ninput_test = data[ choice_test ]\noutput_total = forces[start_step:start_step+max_step*stride:stride,0:nbC,0].reshape(size_data,1)\noutput_train = output_total[ choice_train ]\noutput_test = output_total[ choice_test ]", "Scaling input and output for the Neural Net", "# Creating Scalers\nscaler = [] \nscaler.append( StandardScaler() )\nscaler.append( StandardScaler() )\n# Fitting Scalers\nscaler[0].fit( input_train ) \nscaler[1].fit( output_train ) \n# Scaling input and output\ninput_train_scale = scaler[0].transform( input_train )\ninput_test_scale = scaler[0].transform( input_test)\noutput_train_scale = scaler[1].transform( output_train )\noutput_test_scale = scaler[1].transform( output_test )", "Neural Net Structure\nHere we set the NN parameters", "# Iteration parameters\nloss_fct = 'mean_squared_error' # Loss function in the NN\noptimizer = 'Adam' # Choice of optimizers for training of the NN weights \nlearning_rate = 0.001\nn_epochs = 5000 # Number of epoch for optimization?\npatience = 100 # Patience for convergence\nrestore_weights = True\nbatch_size = 16\nearly_stop_metric=['mse']\n\n# Subnetorks structure\nactivation_fct = 'tanh' # Activation function in the dense hidden layers\nnodes = [15,15,15]\n\n# Dropout rates\ndropout_rate_init = 0.2\ndropout_rate_within = 0.5 ", "Here we create the neural net structure and compile it", "# Individual net structure\nforce_net = 
keras.Sequential(name='force_net')\n#force_net.add( keras.layers.Dropout( dropout_rate_init ) )\nfor node in nodes:\n force_net.add( keras.layers.Dense( node, activation=activation_fct, kernel_constraint=keras.constraints.maxnorm(3)))\n #force_net.add( keras.layers.Dropout( dropout_rate_within ) )\nforce_net.add( keras.layers.Dense( 1, activation='linear') )\n\ninput_layer = keras.layers.Input(shape=(n_features,), name=\"gauss_input\") \noutput_layer = force_net( input_layer )\nmodel = keras.models.Model(inputs=input_layer ,outputs=output_layer ) \nmodel.compile(loss=loss_fct, optimizer=optimizer, metrics=['mse'])\n\n keras.utils.plot_model(model,to_file=\"/Users/mathieumoog/network.png\", show_shapes=True, show_layer_names=True )\n\nearly_stop = keras.callbacks.EarlyStopping( monitor='val_loss', mode='min', verbose=2, patience=patience, restore_best_weights=True)\n\nhistory = model.fit( input_train_scale, output_train_scale, validation_data=( input_test_scale, output_test_scale ), epochs=n_epochs, verbose=2, callbacks=[early_stop])\n\npredictions = model.predict( input_test_scale )\nplt.plot( output_test_scale, predictions,\"r.\" )\nplt.plot( output_train_scale, output_train_scale,\"g.\" )\nplt.plot( output_test_scale, output_test_scale,\"b.\" )\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cornhundred/ipywidgets
docs/source/examples/Widget Styling.ipynb
bsd-3-clause
[ "Layout and Styling of Jupyter widgets\nThis notebook presents how to layout and style Jupyter interactive widgets to build rich and reactive widget-based applications.\nThe layout attribute.\nJupyter interactive widgets have a layout attribute exposing a number of CSS properties that impact how widgets are laid out.\nExposed CSS properties\n<div class=\"alert alert-info\" style=\"margin: 20px\">\nThe following properties map to the values of the CSS properties of the same name (underscores being replaced with dashes), applied to the top DOM elements of the corresponding widget.\n</div>\n\n Sizes \n- height\n- width\n- max_height\n- max_width\n- min_height\n- min_width\n Display \n\nvisibility\ndisplay\noverflow\noverflow_x\noverflow_y\n\n Box model \n- border \n- margin\n- padding\n Positioning \n- top\n- left\n- bottom\n- right\n Flexbox \n- order\n- flex_flow\n- align_items\n- flex\n- align_self\n- align_content\n- justify_content\nShorthand CSS properties\nYou may have noticed that certain CSS properties such as margin-[top/right/bottom/left] seem to be missing. The same holds for padding-[top/right/bottom/left] etc.\nIn fact, you can atomically specify [top/right/bottom/left] margins via the margin attribute alone by passing the string\nmargin: 100px 150px 100px 80px;\nfor a respectively top, right, bottom and left margins of 100, 150, 100 and 80 pixels.\nSimilarly, the flex attribute can hold values for flex-grow, flex-shrink and flex-basis. The border attribute is a shorthand property for border-width, border-style (required), and border-color.\nSimple examples\nThe following example shows how to resize a Button so that its views have a height of 80px and a width of 50% of the available space:", "from ipywidgets import Button, Layout\n\nb = Button(description='(50% width, 80px height) button',\n layout=Layout(width='50%', height='80px'))\nb", "The layout property can be shared between multiple widgets and assigned directly.", "Button(description='Another button with the same layout', layout=b.layout)", "Description\nYou may have noticed that the widget's length is shorter in presence of a description. This because the description is added inside of the widget's total length. You cannot change the width of the internal description field. If you need more flexibility to layout widgets and captions, you should use a combination with the Label widgets arranged in a layout.", "from ipywidgets import HBox, Label, IntSlider\n\nHBox([Label('A too long description'), IntSlider()])", "Natural sizes, and arrangements using HBox and VBox\nMost of the core-widgets have \n- a natural width that is a multiple of 148 pixels\n- a natural height of 32 pixels or a multiple of that number.\n- a default margin of 2 pixels\nwhich will be the ones used when it is not specified in the layout attribute.\nThis allows simple layouts based on the HBox and VBox helper functions to align naturally:", "from ipywidgets import Button, HBox, VBox\n\nwords = ['correct', 'horse', 'battery', 'staple']\nitems = [Button(description=w) for w in words]\n\nHBox([VBox([items[0], items[1]]), VBox([items[2], items[3]])])", "Latex\nWidgets such as sliders and text inputs have a description attribute that can render Latex Equations. The Label widget also renders Latex equations.", "from ipywidgets import IntSlider, Label\n\nIntSlider(description='$\\int_0^t f$')\n\nLabel(value='$e=mc^2$')", "Number formatting\nSliders have a readout field which can be formatted using Python's Format Specification Mini-Language. 
If the space available for the readout is too narrow for the string representation of the slider value, a different styling is applied to show that not all digits are visible.\nThe Flexbox layout\nIn fact, the HBox and VBox helpers used above are functions returning instances of the Box widget with specific options.\nThe Box widgets enables the entire CSS Flexbox spec, enabling rich reactive layouts in the Jupyter notebook. It aims at providing an efficient way to lay out, align and distribute space among items in a container.\nAgain, the whole Flexbox spec is exposed via the layout attribute of the container widget (Box) and the contained items. One may share the same layout attribute among all the contained items.\nAcknowledgement\nThe following tutorial on the Flexbox layout follows the lines of the article A Complete Guide to Flexbox by Chris Coyier.\nBasics and terminology\nSince flexbox is a whole module and not a single property, it involves a lot of things including its whole set of properties. Some of them are meant to be set on the container (parent element, known as \"flex container\") whereas the others are meant to be set on the children (said \"flex items\").\nIf regular layout is based on both block and inline flow directions, the flex layout is based on \"flex-flow directions\". Please have a look at this figure from the specification, explaining the main idea behind the flex layout.\n\nBasically, items will be laid out following either the main axis (from main-start to main-end) or the cross axis (from cross-start to cross-end).\n\nmain axis - The main axis of a flex container is the primary axis along which flex items are laid out. Beware, it is not necessarily horizontal; it depends on the flex-direction property (see below).\nmain-start | main-end - The flex items are placed within the container starting from main-start and going to main-end.\nmain size - A flex item's width or height, whichever is in the main dimension, is the item's main size. The flex item's main size property is either the ‘width’ or ‘height’ property, whichever is in the main dimension.\ncross axis - The axis perpendicular to the main axis is called the cross axis. Its direction depends on the main axis direction.\ncross-start | cross-end - Flex lines are filled with items and placed into the container starting on the cross-start side of the flex container and going toward the cross-end side.\ncross size - The width or height of a flex item, whichever is in the cross dimension, is the item's cross size. The cross size property is whichever of ‘width’ or ‘height’ that is in the cross dimension.\n\nProperties of the parent\n\n\ndisplay (must be equal to 'flex' or 'inline-flex')\n\nThis defines a flex container (inline or block).\n- flex-flow (shorthand for two properties)\nThis is a shorthand flex-direction and flex-wrap properties, which together define the flex container's main and cross axes. Default is row nowrap.\n- `flex-direction` (row | row-reverse | column | column-reverse)\n\n This establishes the main-axis, thus defining the direction flex items are placed in the flex container. Flexbox is (aside from optional wrapping) a single-direction layout concept. Think of flex items as primarily laying out either in horizontal rows or vertical columns.\n ![Direction](./images/flex-direction1.svg)\n\n- `flex-wrap` (nowrap | wrap | wrap-reverse)\n\n By default, flex items will all try to fit onto one line. You can change that and allow the items to wrap as needed with this property. 
Direction also plays a role here, determining the direction new lines are stacked in.\n ![Wrap](./images/flex-wrap.svg)\n\n\njustify-content (flex-start | flex-end | center | space-between | space-around)\n\nThis defines the alignment along the main axis. It helps distribute extra free space left over when either all the flex items on a line are inflexible, or are flexible but have reached their maximum size. It also exerts some control over the alignment of items when they overflow the line.\n \n\nalign-items (flex-start | flex-end | center | baseline | stretch)\n\nThis defines the default behaviour for how flex items are laid out along the cross axis on the current line. Think of it as the justify-content version for the cross-axis (perpendicular to the main-axis).\n \n\nalign-content (flex-start | flex-end | center | baseline | stretch)\n\nThis aligns a flex container's lines within when there is extra space in the cross-axis, similar to how justify-content aligns individual items within the main-axis.\n \nNote: this property has no effect when there is only one line of flex items.\nProperties of the items\n\nThe flexbox-related CSS properties of the items have no impact if the parent element is not a flexbox container (i.e. has a display attribute equal to flex or inline-flex).\n\norder\n\nBy default, flex items are laid out in the source order. However, the order property controls the order in which they appear in the flex container.\n <img src=\"./images/order-2.svg\" alt=\"Order\" style=\"width: 500px;\"/>\n\n\nflex (shorthand for three properties)\n This is the shorthand for flex-grow, flex-shrink and flex-basis combined. The second and third parameters (flex-shrink and flex-basis) are optional. Default is 0 1 auto.\n\nflex-grow\n\nThis defines the ability for a flex item to grow if necessary. It accepts a unitless value that serves as a proportion. It dictates what amount of the available space inside the flex container the item should take up.\nIf all items have flex-grow set to 1, the remaining space in the container will be distributed equally to all children. If one of the children a value of 2, the remaining space would take up twice as much space as the others (or it will try to, at least).\n \n\nflex-shrink\n\nThis defines the ability for a flex item to shrink if necessary.\n\nflex-basis\n\nThis defines the default size of an element before the remaining space is distributed. It can be a length (e.g. 20%, 5rem, etc.) or a keyword. The auto keyword means \"look at my width or height property\".\n\n\nalign-self\n\n\nThis allows the default alignment (or the one specified by align-items) to be overridden for individual flex items.\n\nThe VBox and HBox helpers\nThe VBox and HBox helper provide simple defaults to arrange child widgets in Vertical and Horizontal boxes.\n```Python\ndef VBox(pargs, kwargs):\n \"\"\"Displays multiple widgets vertically using the flexible box model.\"\"\"\n box = Box(pargs, **kwargs)\n box.layout.display = 'flex'\n box.layout.flex_flow = 'column'\n box.layout.align_items = 'stretch'\n return box\ndef HBox(pargs, kwargs):\n \"\"\"Displays multiple widgets horizontally using the flexible box model.\"\"\"\n box = Box(pargs, **kwargs)\n box.layout.display = 'flex'\n box.layout.align_items = 'stretch'\n return box\n```\nExamples\nFour buttons in a VBox. 
Items stretch to the maximum width, in a vertical box taking 50% of the available space.", "from ipywidgets import Layout, Button, Box\n\nitems_layout = Layout(flex='1 1 auto',\n width='auto') # override the default width of the button to 'auto' to let the button grow\n\nbox_layout = Layout(display='flex',\n flex_flow='column', \n align_items='stretch', \n border='solid',\n width='50%')\n\nwords = ['correct', 'horse', 'battery', 'staple']\nitems = [Button(description=w, layout=items_layout, button_style='danger') for w in words]\nbox = Box(children=items, layout=box_layout)\nbox", "Three buttons in an HBox. Items flex proportionaly to their weight.", "from ipywidgets import Layout, Button, Box\n\nitems = [\n Button(description='weight=1'),\n Button(description='weight=2', layout=Layout(flex='2 1 auto', width='auto')),\n Button(description='weight=1'),\n ]\n\nbox_layout = Layout(display='flex',\n flex_flow='row', \n align_items='stretch', \n border='solid',\n width='50%')\nbox = Box(children=items, layout=box_layout)\nbox", "A more advanced example: a reactive form.\nThe form is a VBox of width '50%'. Each row in the VBox is an HBox, that justifies the content with space between..", "from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider\n\nform_item_layout = Layout(\n display='flex',\n flex_flow='row',\n justify_content='space-between'\n)\n\nform_items = [\n Box([Label(value='Age of the captain'), IntSlider(min=40, max=60)], layout=form_item_layout),\n Box([Label(value='Egg style'), \n Dropdown(options=['Scrambled', 'Sunny side up', 'Over easy'])], layout=form_item_layout),\n Box([Label(value='Ship size'), \n FloatText()], layout=form_item_layout),\n Box([Label(value='Information'), \n Textarea()], layout=form_item_layout)\n]\n\nform = Box(form_items, layout=Layout(\n display='flex',\n flex_flow='column',\n border='solid 2px',\n align_items='stretch',\n width='50%'\n))\nform", "A more advanced example: a carousel.", "from ipywidgets import Layout, Button, Box\n\nitem_layout = Layout(height='100px', min_width='40px')\nitems = [Button(layout=item_layout, description=str(i), button_style='warning') for i in range(40)]\nbox_layout = Layout(overflow_x='scroll',\n border='3px solid black',\n width='500px',\n height='',\n flex_direction='row',\n display='flex')\ncarousel = Box(children=items, layout=box_layout)\nVBox([Label('Scroll horizontally:'), carousel])", "Predefined styles\nIf you wish the styling of widgets to make use of colors and styles defined by the environment (to be consistent with e.g. 
a notebook theme), many widgets enable choosing in a list of pre-defined styles.\nFor example, the Button widget has a button_style attribute that may take 5 different values:\n\n'primary'\n'success'\n'info'\n'warning'\n'danger'\n\nbesides the default empty string ''.", "from ipywidgets import Button\n\nButton(description='Danger Button', button_style='danger')", "The style attribute\nWhile the layout attribute only exposes layout-related CSS properties for the top-level DOM element of widgets, the\nstyle attribute is used to expose non-layout related styling attributes of widgets.\nHowever, the properties of the style atribute are specific to each widget type.", "b1 = Button(description='Custom color')\nb1.style.button_color = 'lightgreen'\nb1", "Just like the layout attribute, widget styles can be assigned to other widgets.", "b2 = Button()\nb2.style = b1.style\nb2", "Widget styling attributes are specific to each widget type.", "s1 = IntSlider(description='Blue handle')\ns1.style.handle_color = 'lightblue'\ns1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Kaggle/learntools
notebooks/deep_learning/raw/ex5_data_augmentation.ipynb
apache-2.0
[ "Exercise Introduction\nWe will return to the automatic rotation problem you worked on in the previous exercise. But we'll add data augmentation to improve your model.\nThe model specification and compilation steps don't change when you start using data augmentation. The code you've already worked with for specifying and compiling a model is in the cell below. Run it so you'll be ready to work on data augmentation.", "from tensorflow.keras.applications import ResNet50\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Flatten, GlobalAveragePooling2D\n\nnum_classes = 2\nresnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'\n\nmy_new_model = Sequential()\nmy_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))\nmy_new_model.add(Dense(num_classes, activation='softmax'))\n\nmy_new_model.layers[0].trainable = False\n\nmy_new_model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])\n\n# Set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.deep_learning.exercise_5 import *\nprint(\"Setup Complete\")", "1) Fit the Model Using Data Augmentation\nHere is some code to set up some ImageDataGenerators. Run it, and then answer the questions below about it.", "from tensorflow.keras.applications.resnet50 import preprocess_input\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\nimage_size = 224\n\n# Specify the values for all arguments to data_generator_with_aug.\ndata_generator_with_aug = ImageDataGenerator(preprocessing_function=preprocess_input,\n horizontal_flip = True,\n width_shift_range = 0.1,\n height_shift_range = 0.1)\n \ndata_generator_no_aug = ImageDataGenerator(preprocessing_function=preprocess_input)\n", "Why do we need both a generator with augmentation and a generator without augmentation? After thinking about it, check out the solution below.", "# Check your answer (Run this code cell to receive credit!)\nq_1.solution()", "2) Choosing Augmentation Types\nImageDataGenerator offers many types of data augmentation. For example, one argument is rotation_range. This rotates each image by a random amount that can be up to whatever value you specify.\nWould it be sensible to use automatic rotation for this problem? Why or why not?", "# Check your answer (Run this code cell to receive credit!)\nq_2.solution()", "3) Code\nFill in the missing pieces in the following code. We've supplied some boilerplate. 
You need to think about what ImageDataGenerator is used for each data source.", "# Specify which type of ImageDataGenerator above is to load in training data\ntrain_generator = data_generator_with_aug.flow_from_directory(\n directory = '../input/dogs-gone-sideways/images/train',\n target_size=(image_size, image_size),\n batch_size=12,\n class_mode='categorical')\n\n# Specify which type of ImageDataGenerator above is to load in validation data\nvalidation_generator = data_generator_no_aug.flow_from_directory(\n directory = '../input/dogs-gone-sideways/images/val',\n target_size=(image_size, image_size),\n class_mode='categorical')\n\nmy_new_model.fit_generator(\n ____, # if you don't know what argument goes first, try the hint\n epochs = 3,\n steps_per_epoch=19,\n validation_data=____)\n\n# Check your answer\nq_3.check()\n\n# q_3.hint()\n# q_3.solution()\n\n#%%RM_IF(PROD)%%\n\ntrain_generator = data_generator_with_aug.flow_from_directory(\n directory = '../input/dogs-gone-sideways/images/train',\n target_size=(image_size, image_size),\n batch_size=12,\n class_mode='categorical')\n\n# Specify which type of ImageDataGenerator above is to load in validation data\nvalidation_generator = data_generator_no_aug.flow_from_directory(\n directory = '../input/dogs-gone-sideways/images/val',\n target_size=(image_size, image_size),\n class_mode='categorical')\n\nmy_new_model.fit_generator(\n train_generator,\n epochs = 3,\n steps_per_epoch=19,\n validation_data=validation_generator)\n\nq_3.assert_check_passed()", "4) Did Data Augmentation Help?\nHow could you test whether data augmentation improved your model accuracy?", "# Check your answer (Run this code cell to receive credit!)\nq_4.solution()", "Keep Going\nYou are ready for a deeper understanding of deep learning." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gfeiden/Notebook
Daily/20150729_young_magnetic_models.ipynb
mit
[ "Young Magnetic Models – A Brief Exploration\nPreliminary comparison of magnetic stellar models against non-magnetic, standard stellar models.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom scipy.interpolate import interp1d\nimport numpy as np", "Magnetic isochrones were computed earlier. Details can be found in this notebook entry on a small magnetic stellar grid. I'll focus on those computed with the Grevesse, Asplund, & Sauval (2007; henceforth GAS07) solar abundance distribution. Three ages will be examined: 5 Myr, 12 Myr, and 30 Myr, in line with the previous magnetic model diagnostic figures.", "std_iso_05 = np.genfromtxt('files/dmestar_00005.0myr_z+0.00_a+0.00_gas07_t010.iso')\nstd_iso_12 = np.genfromtxt('files/dmestar_00012.0myr_z+0.00_a+0.00_gas07_t010.iso')\nstd_iso_30 = np.genfromtxt('files/dmestar_00030.0myr_z+0.00_a+0.00_gas07_t010.iso')\n\nmag_iso_05 = np.genfromtxt('files/dmestar_00005.0myr_gas07_z+0.00_a+0.00_mag25kG.iso')\nmag_iso_12 = np.genfromtxt('files/dmestar_00012.0myr_gas07_z+0.00_a+0.00_mag25kG.iso')\nmag_iso_30 = np.genfromtxt('files/dmestar_00030.0myr_gas07_z+0.00_a+0.00_mag25kG.iso')", "Examining, first, the mass-radius, mass-Teff, and mass-luminosity relationships as a function of age.", "fig, ax = plt.subplots(3, 1, figsize=(8, 12), sharex=True)\n\nax[2].set_xlabel('Mass ($M_{\\\\odot}$)', fontsize=20.)\nax[0].set_ylabel('Radius ($R_{\\\\odot}$)', fontsize=20.)\nax[1].set_ylabel('Temperature (K)', fontsize=20.)\nax[2].set_ylabel('Luminosity ($L_{\\\\odot}$)', fontsize=20.)\nfor axis in ax:\n axis.tick_params(which='major', axis='both', length=10., labelsize=16.)\n\n# Standard model Mass-Radius\nax[0].plot(std_iso_05[:, 0], 10**std_iso_05[:, 4], '-', lw=2, color='#555555')\nax[0].plot(std_iso_12[:, 0], 10**std_iso_12[:, 4], '-', lw=2, color='#1e90ff')\nax[0].plot(std_iso_30[:, 0], 10**std_iso_30[:, 4], '-', lw=2, color='#800000')\n\n# Magnetic model Mass-Radius\nax[0].plot(mag_iso_05[:, 0], 10**mag_iso_05[:, 4], '--', lw=2, color='#555555', dashes=(20.,10.))\nax[0].plot(mag_iso_12[:, 0], 10**mag_iso_12[:, 4], '--', lw=2, color='#1e90ff', dashes=(20.,10.))\nax[0].plot(mag_iso_30[:, 0], 10**mag_iso_30[:, 4], '--', lw=2, color='#800000', dashes=(20.,10.))\n\n# Standard model Mass-Teff\nax[1].plot(std_iso_05[:, 0], 10**std_iso_05[:, 1], '-', lw=2, color='#555555')\nax[1].plot(std_iso_12[:, 0], 10**std_iso_12[:, 1], '-', lw=2, color='#1e90ff')\nax[1].plot(std_iso_30[:, 0], 10**std_iso_30[:, 1], '-', lw=2, color='#800000')\n\n# Magnetic model Mass-Teff\nax[1].plot(mag_iso_05[:, 0], 10**mag_iso_05[:, 1], '--', lw=2, color='#555555', dashes=(20.,10.))\nax[1].plot(mag_iso_12[:, 0], 10**mag_iso_12[:, 1], '--', lw=2, color='#1e90ff', dashes=(20.,10.))\nax[1].plot(mag_iso_30[:, 0], 10**mag_iso_30[:, 1], '--', lw=2, color='#800000', dashes=(20.,10.))\n\n# Standard model Mass-Luminosity\nax[2].plot(std_iso_05[:, 0], 10**std_iso_05[:, 3], '-', lw=2, color='#555555')\nax[2].plot(std_iso_12[:, 0], 10**std_iso_12[:, 3], '-', lw=2, color='#1e90ff')\nax[2].plot(std_iso_30[:, 0], 10**std_iso_30[:, 3], '-', lw=2, color='#800000')\n\n# Magnetic model Mass-Luminosity\nax[2].plot(mag_iso_05[:, 0], 10**mag_iso_05[:, 3], '--', lw=2, color='#555555', dashes=(20.,10.))\nax[2].plot(mag_iso_12[:, 0], 10**mag_iso_12[:, 3], '--', lw=2, color='#1e90ff', dashes=(20.,10.))\nax[2].plot(mag_iso_30[:, 0], 10**mag_iso_30[:, 3], '--', lw=2, color='#800000', dashes=(20.,10.))\n\nfig.tight_layout()", "Note that, in the figure above, standard stellar evolution models are shown as solid 
lines and magnetic stellar evolution models as dashed lines. Ages are indicated by color: grey = 5 Myr, blue = 12 Myr, red = 30 Myr.\nHR Diagram comparison:", "fig, ax = plt.subplots(1, 1, figsize=(8.0, 8.0))\n\nax.set_xlabel('Effective Temperature (K)', fontsize=20.)\nax.set_ylabel('$\\\\log_{10} (L / L_{\\\\odot})$', fontsize=20.)\nax.set_xlim(5000., 2500.)\nax.tick_params(which='major', axis='both', length=10., labelsize=16.)\n\n# Standard models\nax.plot(10**std_iso_05[:, 1], std_iso_05[:, 3], '-', lw=2, color='#555555')\nax.plot(10**std_iso_12[:, 1], std_iso_12[:, 3], '-', lw=2, color='#1e90ff')\nax.plot(10**std_iso_30[:, 1], std_iso_30[:, 3], '-', lw=2, color='#800000')\n\n# Magnetic models\nax.plot(10**mag_iso_05[:, 1], mag_iso_05[:, 3], '--', lw=2, color='#555555', dashes=(20.,10.))\nax.plot(10**mag_iso_12[:, 1], mag_iso_12[:, 3], '--', lw=2, color='#1e90ff', dashes=(20.,10.))\nax.plot(10**mag_iso_30[:, 1], mag_iso_30[:, 3], '--', lw=2, color='#800000', dashes=(20.,10.))", "Line styles and colors represent the same model combinations, as before.\nLithium abundance as a function of mass, temperature, and luminosity:", "fig, ax = plt.subplots(1, 3, figsize=(15, 5), sharey=True)\n\nax[0].set_xlabel('Mass ($M_{\\\\odot}$)', fontsize=20.)\nax[1].set_xlabel('Temperature (K)', fontsize=20.)\nax[2].set_xlabel('$\\\\log_{10}(L/L_{\\\\odot})$', fontsize=20.)\nax[0].set_ylabel('A(Li)', fontsize=20.)\nfor axis in ax:\n axis.set_ylim(1.5, 3.5)\n axis.tick_params(which='major', axis='both', length=10., labelsize=16.)\n\n# x-axis limits\nax[0].set_xlim(1.0, 0.1)\nax[1].set_xlim(4500., 2500.)\nax[2].set_xlim(0.0, -2.5)\n\n# Standard model Mass-A(Li)\nax[0].plot(std_iso_05[:, 0], std_iso_05[:, 5], '-', lw=2, color='#555555')\nax[0].plot(std_iso_12[:, 0], std_iso_12[:, 5], '-', lw=2, color='#1e90ff')\nax[0].plot(std_iso_30[:, 0], std_iso_30[:, 5], '-', lw=2, color='#800000')\n\n# Magnetic model Mass-A(Li)\nax[0].plot(mag_iso_05[:, 0], mag_iso_05[:, 5], '--', lw=2, color='#555555', dashes=(20.,10.))\nax[0].plot(mag_iso_12[:, 0], mag_iso_12[:, 5], '--', lw=2, color='#1e90ff', dashes=(20.,10.))\nax[0].plot(mag_iso_30[:, 0], mag_iso_30[:, 5], '--', lw=2, color='#800000', dashes=(20.,10.))\n\n# Standard model Teff-A(Li)\nax[1].plot(10**std_iso_05[:, 1], std_iso_05[:, 5], '-', lw=2, color='#555555')\nax[1].plot(10**std_iso_12[:, 1], std_iso_12[:, 5], '-', lw=2, color='#1e90ff')\nax[1].plot(10**std_iso_30[:, 1], std_iso_30[:, 5], '-', lw=2, color='#800000')\n\n# Magnetic model Teff-A(Li)\nax[1].plot(10**mag_iso_05[:, 1], mag_iso_05[:, 5], '--', lw=2, color='#555555', dashes=(20.,10.))\nax[1].plot(10**mag_iso_12[:, 1], mag_iso_12[:, 5], '--', lw=2, color='#1e90ff', dashes=(20.,10.))\nax[1].plot(10**mag_iso_30[:, 1], mag_iso_30[:, 5], '--', lw=2, color='#800000', dashes=(20.,10.))\n\n# Standard model Luminosity-A(Li)\nax[2].plot(std_iso_05[:, 3], std_iso_05[:, 5], '-', lw=2, color='#555555')\nax[2].plot(std_iso_12[:, 3], std_iso_12[:, 5], '-', lw=2, color='#1e90ff')\nax[2].plot(std_iso_30[:, 3], std_iso_30[:, 5], '-', lw=2, color='#800000')\n\n# Magnetic model Luminosity-A(Li)\nax[2].plot(mag_iso_05[:, 3], mag_iso_05[:, 5], '--', lw=2, color='#555555', dashes=(20.,10.))\nax[2].plot(mag_iso_12[:, 3], mag_iso_12[:, 5], '--', lw=2, color='#1e90ff', dashes=(20.,10.))\nax[2].plot(mag_iso_30[:, 3], mag_iso_30[:, 5], '--', lw=2, color='#800000', dashes=(20.,10.))\n\nfig.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
joshnsolomon/phys202-2015-work
assignments/assignment10/ODEsEx01.ipynb
mit
[ "Ordinary Differential Equations Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed", "Euler's method\nEuler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation\n$$ \\frac{dy}{dx} = f(y(x), x) $$\nwith the initial condition:\n$$ y(x_0)=y_0 $$\nEuler's method performs updates using the equations:\n$$ y_{n+1} = y_n + h f(y_n,x_n) $$\n$$ h = x_{n+1} - x_n $$\nWrite a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:", "def solve_euler(derivs, y0, x):\n \"\"\"Solve a 1d ODE using Euler's method.\n \n Parameters\n ----------\n derivs : function\n The derivative of the diff-eq with the signature deriv(y,x) where\n y and x are floats.\n y0 : float\n The initial condition y[0] = y(x[0]).\n x : np.ndarray, list, tuple\n The array of times at which of solve the diff-eq.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions y[i] = y(x[i])\n \"\"\"\n y = np.zeros(len(x))\n y[0] = y0\n h = (x[1]-x[0])\n for a in range(1,len(x)):\n y[a]=y[a-1]+h*derivs(y[a-1],x[a-1])\n return y\n \n \n \n\nassert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])", "The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:\n$$ y_{n+1} = y_n + h f\\left(y_n+\\frac{h}{2}f(y_n,x_n),x_n+\\frac{h}{2}\\right) $$\nWrite a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:", "def solve_midpoint(derivs, y0, x):\n \"\"\"Solve a 1d ODE using the Midpoint method.\n \n Parameters\n ----------\n derivs : function\n The derivative of the diff-eq with the signature deriv(y,x) where y\n and x are floats.\n y0 : float\n The initial condition y[0] = y(x[0]).\n x : np.ndarray, list, tuple\n The array of times at which of solve the diff-eq.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions y[i] = y(x[i])\n \"\"\"\n y = np.zeros(len(x))\n y[0] = y0\n for a in range(1,len(x)):\n h = x[a]-x[a-1]\n y[a]=y[a-1]+h*derivs(y[a-1]+(h/2)*derivs(y[a-1],x[a-1]),x[a-1]+(h/2))\n return y\n\nassert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])", "You are now going to solve the following differential equation:\n$$\n\\frac{dy}{dx} = x + 2y\n$$\nwhich has the analytical solution:\n$$\ny(x) = 0.25 e^{2x} - 0.5 x - 0.25\n$$\nFirst, write a solve_exact function that compute the exact solution and follows the specification described in the docstring:", "def solve_exact(x):\n \"\"\"compute the exact solution to dy/dx = x + 2y.\n \n Parameters\n ----------\n x : np.ndarray\n Array of x values to compute the solution at.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions at y[i] = y(x[i]).\n \"\"\"\n y = (.25)*np.exp(2*x) - (.5*x)-(.25)\n return y\n\nassert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))", "In the following cell you are going to solve the above ODE using four different algorithms:\n\nEuler's method\nMidpoint method\nodeint\nExact\n\nHere are the details:\n\nGenerate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).\nDefine the derivs function for the above differential equation.\nUsing the solve_euler, solve_midpoint, odeint and solve_exact 
functions to compute\n the solutions using the 4 approaches.\n\nVisualize the solutions on a sigle figure with two subplots:\n\nPlot the $y(x)$ versus $x$ for each of the 4 approaches.\nPlot $\\left|y(x)-y_{exact}(x)\\right|$ versus $x$ for each of the 3 numerical approaches.\n\nYour visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.\nWhile your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.", "x = np.linspace(0,1,11)\ndef derivs(y,x):\n return x + 2*y\n \n\ny1 = solve_euler(derivs, 0, x)\ny2 = solve_midpoint(derivs, 0, x)\ny3 = odeint(derivs,0,x)\ny4 = solve_exact(x)\n\nplt.subplot(2,2,1) # 2 rows x 1 col, plot 1\nplt.plot(x,y1) \nplt.ylabel('Euler\\'s Method')\n\nplt.subplot(2,2,2) \nplt.plot(x,y2) \nplt.ylabel('Midpoint Method')\n\nplt.subplot(2,2,3) \nplt.plot(x,y3) \nplt.ylabel('ODE int')\nplt.xlabel('x')\n\nplt.subplot(2,2,4) \nplt.plot(x,y4) \nplt.ylabel('Analytic Solution')\nplt.xlabel('x')\n\nplt.tight_layout()\n\nwhy3 = []\nfor a in y3:\n why3.append(a[0])\n \n\nplt.subplot(3,1,1) \nplt.plot(x,np.abs(y1-y4)) \nplt.ylabel('Euler Error')\n\nplt.subplot(3,1,2) \nplt.plot(x,np.abs(y2-y4)) \nplt.ylabel('Midpoint Error')\n\nplt.subplot(3,1,3) \nplt.plot(x,np.abs(why3-y4)) \nplt.ylabel('ODE Error')\n\n\nplt.tight_layout()\n\nassert True # leave this for grading the plots" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tritemio/multispot_paper
Multi-spot Gamma Fitting.ipynb
mit
[ "Multi-spot Gamma Fitting", "from fretbursts import fretmath\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom cycler import cycler\nimport seaborn as sns\n%matplotlib inline\n%config InlineBackend.figure_format='retina' # for hi-dpi displays\n\nimport matplotlib as mpl\nfrom cycler import cycler\n\nbmap = sns.color_palette(\"Set1\", 9)\ncolors = np.array(bmap)[(1,0,2,3,4,8,6,7), :]\nmpl.rcParams['axes.prop_cycle'] = cycler('color', colors)\ncolors_labels = ['blue', 'red', 'green', 'violet', 'orange', 'gray', 'brown', 'pink', ]\nfor c, cl in zip(colors, colors_labels):\n locals()[cl] = tuple(c) # assign variables with color names\nsns.palplot(colors)\n\nsns.set_style('whitegrid')", "Load Data\nMultispot\nLoad the leakage coefficient from disk (computed in Multi-spot 5-Samples analyis - Leakage coefficient fit):", "leakage_coeff_fname = 'results/Multi-spot - leakage coefficient KDE wmean DexDem.csv'\nleakageM = float(np.loadtxt(leakage_coeff_fname, ndmin=1))\n\nprint('Multispot Leakage Coefficient:', leakageM)", "Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter):", "dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_t beta.csv'\ndir_ex_t = float(np.loadtxt(dir_ex_coeff_fname, ndmin=1))\n\nprint('Direct excitation coefficient (dir_ex_t):', dir_ex_t)", "Multispot PR for FRET population:", "mspot_filename = 'results/Multi-spot - dsDNA - PR - all_samples all_ch.csv'\n\nE_pr_fret = pd.read_csv(mspot_filename, index_col=0)\nE_pr_fret", "usALEX\nCorrected $E$ from μs-ALEX data:", "data_file = 'results/usALEX-5samples-E-corrected-all-ph.csv'\ndata_alex = pd.read_csv(data_file).set_index('sample')#[['E_pr_fret_kde']]\ndata_alex.round(6)\n\nE_alex = data_alex.E_gauss_w\nE_alex", "Multi-spot gamma fitting", "import lmfit\n\ndef residuals(params, E_raw, E_ref):\n gamma = params['gamma'].value\n # NOTE: leakageM and dir_ex_t are globals\n return E_ref - fretmath.correct_E_gamma_leak_dir(E_raw, leakage=leakageM, gamma=gamma, dir_ex_t=dir_ex_t)\n\nparams = lmfit.Parameters()\nparams.add('gamma', value=0.5) \n\nE_pr_fret_mean = E_pr_fret.mean(1)\nE_pr_fret_mean\n\nm = lmfit.minimize(residuals, params, args=(E_pr_fret_mean, E_alex))\nlmfit.report_fit(m.params, show_correl=False)\n\nE_alex['12d'], E_pr_fret_mean['12d']\n\nm = lmfit.minimize(residuals, params, args=(np.array([E_pr_fret_mean['12d']]), np.array([E_alex['12d']])))\nlmfit.report_fit(m.params, show_correl=False)\n\nprint('Fitted gamma(multispot):', m.params['gamma'].value)\n\nmultispot_gamma = m.params['gamma'].value\nmultispot_gamma\n\nE_fret_mch = fretmath.correct_E_gamma_leak_dir(E_pr_fret, leakage=leakageM, dir_ex_t=dir_ex_t, \n gamma=multispot_gamma)\nE_fret_mch = E_fret_mch.round(6)\nE_fret_mch\n\nE_fret_mch.to_csv('results/Multi-spot - dsDNA - Corrected E - all_samples all_ch.csv')\n\n'%.5f' % multispot_gamma\n\nwith open('results/Multi-spot - gamma factor.csv', 'wt') as f:\n f.write('%.5f' % multispot_gamma)\n\nnorm = (E_fret_mch.T - E_fret_mch.mean(1))#/E_pr_fret.mean(1)\nnorm_rel = (E_fret_mch.T - E_fret_mch.mean(1))/E_fret_mch.mean(1)\nnorm.plot()\nnorm_rel.plot()", "Plot FRET vs distance", "sns.set_style('whitegrid')\n\nCH = np.arange(8)\nCH_labels = ['CH%d' % i for i in CH]\ndist_s_bp = [7, 12, 17, 22, 27]\n\nfontsize = 16\n\nfig, ax = plt.subplots(figsize=(8, 5))\n\nax.plot(dist_s_bp, E_fret_mch, '+', lw=2, mew=1.2, ms=10, zorder=4)\nax.plot(dist_s_bp, E_alex, '-', 
lw=3, mew=0, alpha=0.5, color='k', zorder=3)\n\nplt.title('Multi-spot smFRET dsDNA, Gamma = %.2f' % multispot_gamma)\nplt.xlabel('Distance in base-pairs', fontsize=fontsize); \nplt.ylabel('E', fontsize=fontsize)\nplt.ylim(0, 1); plt.xlim(0, 30)\nplt.grid(True)\nplt.legend(['CH1','CH2','CH3','CH4','CH5','CH6','CH7','CH8', u'μsALEX'], \n fancybox=True, prop={'size':fontsize-1},\n loc='best');", "NOTE The fact the we fit the 27d with a single Gaussian may account for the slight shift of the FRET efficiency compared to the us-ALEX measurements. The shift is bigger when using an asymmetric-gaussian model for multi-spot fitting. Probably, for consistency with us-ALEX fitting we should stick to the plain gaussian." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
McIntyre-Lab/ipython-demo
interactive_plotting.ipynb
gpl-2.0
[ "Example of Interactive Plotting\nThe IPython notebook excels at interactive science. By its very nature you can easily create a bit of code, run it, look at the output, and adjust the code. This allows you to do very rapid development and work you way through a problem while documenting you thought process. As an example, I am going simulate some data and do some interactive plotting.", "# Import Module\nimport numpy as np\nimport scipy as sp\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport pandas as pd", "Simulate data\nI am going to simulate data using the various density functions available in scipy. During QC, we typically are trying to identify either samples or values (e.g. genes, exons, compounds) that do not behave as expected. We use various plots to help identify outliers and remove them from the dataset.\nFor this example I am going to simulate a value $\\theta$. Here I expect $\\theta$ to be normally distributed around 0.5, however I include some bad values that are normally distributed around 0.2. I am relating these bad values to another variable coverage. When coverage is low then we will not accurately capture $\\theta$ causing a shift in the distribution of values.", "# Simulate $\\theta$\nsp.random.seed(42)\ntheta1 = sp.random.normal(loc=0.5, scale=0.1, size=1000)\ntheta2 = sp.random.normal(loc=0.2, scale=0.1, size=360)\n\n# Simulate coverage\ncvg1 = sp.random.poisson(20, size=1000)\ncvg2 = sp.random.poisson(4, size=360)\n\n## I can't have a coverage of 0, so replace 0's with 1\ncvg1[cvg1 == 0] = 1\ncvg2[cvg2 == 0] = 1\n\n## Create joint of theta1 and theat2\ntheta = np.concatenate((theta1, theta2))\n\n## Create joint of cvg1 and cvg2\ncvg = np.concatenate((cvg1, cvg2))\n\n# Density of Plot $\\theta$ 1 and 2\n## Get x coordinates from 0 to 1\nxs = np.linspace(0, 1, num=100)\n\n## Get Density functions\ndensity1 = stats.gaussian_kde(theta1)\ndensity2 = stats.gaussian_kde(theta2)\ndensity = stats.gaussian_kde(theta)\n\n## Plot\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))\nax1.plot(xs, density1(xs), label=r'$\\theta$1')\nax1.plot(xs, density2(xs), label=r'$\\theta$2')\nax1.set_title(r'Distribution of $\\theta$1 and $\\theta$2', fontsize=12)\nax1.legend()\n\nax2.plot(xs, density(xs), color='k', label=r'$\\theta$1 + $\\theta$2')\nax2.set_title(r'Joint Distribution of $\\theta$1 and $\\theta2$2', fontsize=12)\nax2.legend()", "Now lets look at the distribution of our coverage counts", "# Plot Distribution of Coverage\n## Figure out the x limits\nxs = np.linspace(0, cvg.max(), num=100)\n\n## Get Density functions\ndensity1 = stats.gaussian_kde(cvg1)\ndensity2 = stats.gaussian_kde(cvg2)\n\n## Plot\nplt.plot(xs, density1(xs), label='High Coverage')\nplt.plot(xs, density2(xs), label='Low Coverage')\nplt.title('Distribution of Coverage')\nplt.legend()", "Combine everything into a single dataset.", "# Create Data Frame\ndat = pd.DataFrame({'theta': theta, 'cvg': cvg})\ndat.head(3)\n\n# Plotting Desnsities is a lot easier with data frames\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))\ndat['theta'].plot(kind='kde', ax=ax1, title=r'Distribution of $\\theta$')\ndat['cvg'].plot(kind='kde', ax=ax2, title='Distribution of Coverage')", "QC Time\nNow that we have our simulated data, lets do some QC. 
Lets see what happens if we filter low coverage reads.\nFirst we will create a plotting function that takes a cutoff value.", "def pltLow(dat, cutoff):\n \"\"\" Function to plot density after filtering\"\"\"\n clean = dat[dat['cvg'] >= cutoff]\n clean['theta'].plot(kind='kde', title=r'Distribution of $\\theta${}Coverage Count Cutoff $\\geq$ {}'.format('\\n',cutoff), xlim=(-0.2, 1.2))\n\n# Test plot function\npltLow(dat, 1)", "Interactive Plotting\nIpython offers a simple way to create interactive plots. You import a function called interact, and use that to call your plotting function.", "from IPython.html.widgets import interact, interact_manual, IntSlider, fixed\n\ninteract(pltLow, dat=fixed(dat), cutoff=IntSlider(min=0, max=20))", "If you have a lot of data, then interact can be slow because at each step along the slider it tries to calculate the filter. There is a noter interactive widget interact_manual that only runs calculations when you hit the run button.", "interact_manual(pltLow, dat=fixed(dat), cutoff=IntSlider(min=0, max=20))", "Other types of interactivity\nWhile there are a number of IPython widgets that may be useful, there are other packages that offer interactivity. One I have been playing with is a module that translates matplotlib plots into D3.js plots. I will demonstrate that here.", "# Import the mpld3 library\nimport mpld3\n\n# Plain Scatter plot showing relationship between coverage and theta\ndat.plot(kind='scatter', x='cvg', y='theta', figsize=(10, 10))\n\n# Plot figure with mpld3\nfig, ax = plt.subplots(figsize=(10, 10))\nscatter = ax.scatter(dat['cvg'], dat['theta'])\nlabels = ['row {}'.format(i) for i in dat.index.tolist()]\ntooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)\nmpld3.plugins.connect(fig, tooltip)\nmpld3.display()", "Now lets mess with a point and see if it changes.", "dat.ix[262, 'theta'] = -0.1\n\n# Plot figure with mpld3\nfig, ax = plt.subplots(figsize=(10, 10))\nscatter = ax.scatter(dat['cvg'], dat['theta'])\nlabels = ['row {}'.format(i) for i in dat.index.tolist()]\ntooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)\nmpld3.plugins.connect(fig, tooltip)\nmpld3.display()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yashdeeph709/Algorithms
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/List Comprehensions-checkpoint.ipynb
apache-2.0
[ "Comprehensions\nIn addition to sequence operations and list methods, Python includes a more advanced operation called a list comprehension.\nList comprehensions allow us to build out lists using a different notation. You can think of it as essentially a one line for loop built inside of brackets. For a simple example:\nExample 1", "# Grab every letter in string\nlst = [x for x in 'word']\n\n# Check\nlst", "This is the basic idea of a list comprehension. If you're familiar with mathematical notation this format should feel familiar for example: x^2 : x in { 0,1,2...10} \nLets see a few more example of list comprehensions in Python:\nExample 2", "# Square numbers in range and turn into list\nlst = [x**2 for x in range(0,11)]\n\nlst", "Example 3\nLets see how to add in if statements:", "# Check for even numbers in a range\nlst = [x for x in range(11) if x % 2 == 0]\n\nlst", "Example 4\nCan also do more complicated arithmetic:", "# Convert Celsius to Fahrenheit\ncelsius = [0,10,20.1,34.5]\n\nfahrenheit = [ ((float(9)/5)*temp + 32) for temp in celsius ]\n\nfahrenheit", "Example 5\nWe can also perform nested list comprehensions, for example:", "lst = [ x**2 for x in [x**2 for x in range(11)]]\nlst", "Later on in the course we will learn about generator comprehensions. After this lecture you should feel comfortable reading and writing basic list comprehensions." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
stable/_downloads/8b7a85d4b98927c93b7d9ca1da8d2ab2/compute_mne_inverse_volume.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute MNE-dSPM inverse solution on evoked data in volume source space\nCompute dSPM inverse solution on MNE evoked dataset in a volume source\nspace and stores the solution in a nifti file for visualisation.", "# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD-3-Clause\n\nfrom nilearn.plotting import plot_stat_map\nfrom nilearn.image import index_img\n\nfrom mne.datasets import sample\nfrom mne import read_evokeds\nfrom mne.minimum_norm import apply_inverse, read_inverse_operator\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nmeg_path = data_path / 'MEG' / 'sample'\nfname_inv = meg_path / 'sample_audvis-meg-vol-7-meg-inv.fif'\nfname_evoked = meg_path / 'sample_audvis-ave.fif'\n\nsnr = 3.0\nlambda2 = 1.0 / snr ** 2\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\n\n# Load data\nevoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))\ninverse_operator = read_inverse_operator(fname_inv)\nsrc = inverse_operator['src']\n\n# Compute inverse solution\nstc = apply_inverse(evoked, inverse_operator, lambda2, method)\nstc.crop(0.0, 0.2)\n\n# Export result as a 4D nifti object\nimg = stc.as_volume(src,\n mri_resolution=False) # set True for full MRI resolution\n\n# Save it as a nifti file\n# nib.save(img, 'mne_%s_inverse.nii.gz' % method)\n\nt1_fname = data_path / 'subjects' / 'sample' / 'mri' / 'T1.mgz'", "Plot with nilearn:", "plot_stat_map(index_img(img, 61), str(t1_fname), threshold=8.,\n title='%s (t=%.1f s.)' % (method, stc.times[61]))" ]
[ "code", "markdown", "code", "markdown", "code" ]
isb-cgc/examples-Python
notebooks/isb_cgc_bam_slicing_with_pysam.ipynb
apache-2.0
[ "Bam slicing in a cloud hosted jupyter notebook.\nWelcome to the ‘Query of the Month’. This is part of our collection of new and interesting queries to demonstrate the powerful combination of BigData from the NCI cancer programs like TCGA, and BigQuery from Google.\nPlease let us know if you have an idea or a suggestion for our next QotM!\nQuery of the Month is produced by:\nDavid L Gibbs (david.gibbs ( ~ at ~ ) systemsbiology ( ~ dot ~ ) org)\nKawther Abdilleh (kawther.abdilleh ( ~ at ~ ) gdit (~ dot ~) com)\nSheila M Reynolds (sheila.reynolds ( ~ at ~ ) systemsbiology ( ~ dot ~ ) org)\nIn this notebook, using the Pysam package, we demonstrate how to slice bam files stored in GCS buckets.\nHere, you'll learn how to: \n\nHow to invoke bash commands within a Jupyter environment. \nHow to install packages/programs within a Jupyter environment\nHow to use available BigQuery tables within ISB-CGC to query and identify Google Cloud Storage bucket locations for BAM files of interest\nHow to use PySam to slice BAM files\nHow to save slices in your bucket and retrieve them\nBrief example of working with reads\n\nLet's authenticate ourselves", "from google.colab import auth\nauth.authenticate_user()\nprint('Authorized')", "First, let's prep to install Pysam and the HTSlib\nPysam is a python wrapper around samtools, and samtools uses the HTSlib (http://www.htslib.org/). So we need to make sure we have the necessary libraries to compile HTSlib and samtools. The compilation is needed to activate the ability to read from google cloud buckets.", "import os\nos.environ['HTSLIB_CONFIGURE_OPTIONS'] = \"--enable-gcs\"", "We can invoke bash commands to see what was downloaded into our current working directory. Bash commands can be invoked by putting an exclamation point (!) before the command.", "\n!ls -lha\n\n!sudo apt-get install autoconf automake make gcc perl zlib1g-dev libbz2-dev liblzma-dev libcurl4-openssl-dev libssl-dev\n\n!pip3 install pysam -v --force-reinstall --no-binary :all:\n\n# Without forcing the compilation, we get error \n# [Errno 93] could not open alignment file '...': Protocol not supported\n\nimport pysam", "First, we need to set our project. Replace the assignment below with your project ID.", "# First, we need to set our project. Replace the assignment below\n# with your project ID.\n# project_id = 'isb-cgc-02-0001'\n#!gcloud config set project {project_id}\n\n#import os\n#os.environ['GCS_OAUTH_TOKEN'] = \"gcloud auth application-default print-access-token\"\n", "Now that we have Pysam installed, let's write an SQL query to locate BAM files in Google Cloud Storage Buckets. \nIn the query below, we are looking to identify the Google Cloud Storage bucket locations for TCGA Ovarian Cancer BAMs obtained via whole genome sequencing (WGS) generated using the SOLiD sequencing system", "%%bigquery --project isb-cgc-02-0001 df\nSELECT * FROM `isb-cgc.TCGA_hg19_data_v0.tcga_metadata_data_hg19_18jul`\nwhere \ndata_format = 'BAM' \nAND disease_code = 'OV' \nAND experimental_strategy = \"WGS\" \nAND platform = 'ABI SOLiD'\nLIMIT 5\n", "Now using the following Pysam command, let's read a bam file from GCS and slice out a section of the bam using the fetch function. For the purposes of the BAM slicing exercise, we will use an open-access CCLE BAM File open-access BAM file. 
CCLE open access BAM files are stored here", "\nsamfile = pysam.AlignmentFile('gs://isb-ccle-open/gdc/0a109993-2d5b-4251-bcab-9da4a611f2b1/C836.Calu-3.2.bam', \"rb\")\nfor read in samfile.fetch('7', 140453130, 140453135):\n print(read)\n\nsamfile.close()", "The output from the above command is a tab-delimited human readable table of a slice of the BAM file. This table gives us information on reads that mapped to the region that we \"extracted\" from chromosome 7 between the coordinates of 140453130 and 140453135.\nNow, let's suppose you would like to save those reads to your own Google cloud storage bucket...", "samfile = pysam.AlignmentFile('gs://isb-ccle-open/gdc/0a109993-2d5b-4251-bcab-9da4a611f2b1/C836.Calu-3.2.bam', \"rb\")\nfetchedreads = pysam.AlignmentFile(\"test.bam\", \"wb\", template=samfile)\nfor read in samfile.fetch('7', 140453130, 140453135):\n fetchedreads.write(read)\n\nfetchedreads.close()\nsamfile.close()", "Let's see if we saved it?", "!ls -lha\n\n#if you don't already have a google cloud storage bucket, you can make one using the following command:\n#The mb command creates a new bucket. \n#gsutil mb gs://your_bucket\n\n#to see what's in the bucket..\n#!gsutil ls gs://your_bucket/\n\n\n\n# then we can copy over the file\n\n!gsutil cp gs://bam_bucket_1/test.bam test_dl.bam\n\n# and it made it?\n\n!gsutil ls gs://bam_bucket_1/", "Now, can we read it back?!?", "newsamfile = pysam.AlignmentFile('gs://bam_bucket_1/test.bam', 'rb')\nfor r in newsamfile.fetch(until_eof=True):\n print(r)\n# \n# \n# No. But maybe soon. #\n ", "Let's move our slice back to this instance.", "!gsutil ls gs://bam_bucket_1/\n\n!gsutil cp gs://bam_bucket_1/test.bam test_dl.bam", "Now we're ready to work with our bam-slice!\nVery brief examples of working with reads.\nThe Alignment file is read as a pysam.AlignedSegment, which is a python class. The methods and class variables can be found here: https://pysam.readthedocs.io/en/latest/api.html#pysam.AlignedSegment", "import numpy as np\n\n# first we'll open our bam-slice\ndlsamfile = pysam.AlignmentFile('test_dl.bam', 'rb')\n\n# and we'll save the read quality scores in a list\nquality = []\nfor read in dlsamfile:\n quality.append(read.mapping_quality)\n \n# then we can compute statistics on them\nprint(\"Average quality score\")\nprint(np.mean(quality))\n\n# again open our bam-slice\ndlsamfile = pysam.AlignmentFile('test_dl.bam', 'rb')\n\n# here, we'll extract the sequences to process\nseqs = []\nfor read in dlsamfile:\n seqs.append(read.query_sequence)\n \n\n# let's count the number of times nucleotide bases are read.\nbaseCount = dict()\nbaseCount['A'] = 0\nbaseCount['C'] = 0\nbaseCount['G'] = 0\nbaseCount['T'] = 0\nbaseCount['N'] = 0\ntotalCount = 0\n\nfor si in seqs:\n for base in si:\n baseCount[base] += 1\n totalCount +=1\n\nfor bi in ['A', 'G', 'C', 'T']:\n baseCount[bi] /= totalCount\n \nprint(\"Percent bases\")\nprint(baseCount)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PythonFreeCourse/Notebooks
week02/7_Summary.ipynb
mit
[ "<img src=\"images/logo.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" alt=\"לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.\">\n<p style=\"text-align:right;direction:rtl\">תרגילים</p>\n<p style=\"text-align:right;direction:rtl;\">מערכת בנק</p>\n<p style=\"text-align:right; direction:rtl;\">\nבתרגיל הזה נכתוב מערכת מעט חכמה יותר לבנק.<br>\n נבנה אותה בשלבים באמצעות חלוקה לפונקציות.<br>\nבשלב הראשון נבנה פונקציה למחולל סיסמאות, שמקבלת שם משתמש ומחזירה סיסמה.<br>\nהסיסמה תורכב כך עבור שם משתמש נתון:<br>\n\n<ul style=\"text-align:right; direction:rtl;\"><li>שם המשתמש באותיות קטנות ומייד אחריו שם המשתמש באותיות גדולות.\n </li><li>הוספת האות <em>\"X\"</em> מימין לשם המשתמש, כמספר התווים בשם המשתמש.</li></ul>\n</p>\n<p style=\"text-align:right;direction:rtl;\">שימו לב כי שם המשתמש יכול לכלול ספרות וסימנים, נוסף על אותיות.<br>\n כמו כן, לא נאפשר שם משתמש שהוא מחרוזת ריקה. במקרה זה החזירו מחרוזת ריקה.<br>\n <br> לדוגמה, עבור שם המשתמש <em>'stam'</em> המחולל יחזיר את הסיסמה: <samp>'stamSTAMXXXX'</samp><br>הריצו את הפונקציה על 4 דוגמאות נוספות לפחות, באורכים שונים, ובדקו שקיבלתם פלט כמצופה.</p>", "# כתבו את הפונקציה שלכם כאן", "<p style=\"text-align:right;direction:rtl;\">הריצו את הפונקציה כך: <code>password_generator('stam')</code>\n\n<p style=\"text-align:right;direction:rtl;\">כעת נקבל שם משתמש וסיסמה, ונבדוק אם השילוב הוא נכון. בדקו באמצעות מחולל הסיסמאות מהסעיף הקודם אם הסיסמה של המשתמש תואמת את הסיסמה שיוצר המחולל. הדפיסו \"ברוך הבא\" אם הסיסמה נכונה, אחרת הדפיסו \"סיסמה שגויה\".</p>\n<code>login('stam', 'stamSTAMXXXX')</code><br><samp>Welcome stam!</samp><br><br><code>login('stam', 'mats')</code><br><samp>Wrong Password!</samp>", "# כתבו את הפונקציה שלכם כאן", "<p style=\"text-align:right;direction:rtl;\">כתבו פונקציה שמחזירה <code>True</code>\n אם בוצע חיבור מוצלח, אחרת החזירו <code>False</code>.<br>\nפונקציה זו דומה מאוד לפונקציה הקודמת שכתבתם, רק שהיא אינה מדפיסה דבר.<br> במקום ההדפסה יוחזר ערך בוליאני מתאים.<br>לדוגמה:<br></p>\n<br><code>login('stam', 'stamSTAMXXXX')</code><br><samp>True</samp><br><code>login('stam', 'mats')</code><br><samp>False</samp><br>", "# כתבו את הפונקציה שלכם כאן", "<p style=\"text-align:right;direction:rtl;\">\n כעת ענו על השאלה הקודמת באמצעות הפונקציה שכתבתם בסעיף זה, כלומר כתבו פונקציה שמשתמשת בפונקציה המחזירה ערך בוליאני ומדפיסה בהתאם להוראת מהסעיף הקודם.<br>רמז: <span style=\"direction: rtl; background: #000; text: #000\">השתמשו בערך ההחזרה של הפונקציה מהסעיף הקודם, בתוך if.</span></p>", "# כתבו את הפונקציה שלכם כאן", "<p style=\"text-align:right;direction:rtl;\">כעת נרחיב את מערכת הבנק שלנו.<br>\n נניח כי לכל לקוח יש בחשבון הבנק 500 ש\"ח.<br>\n באמצעות הפוקנציות הקודמות שכתבנו נממש את התוכנית הבאה:<br>\n<ul style=\"text-align:right; direction:rtl;\">\n <li>נבקש מהמשתמש שם משתמש וסיסמה.</li>\n <li>נאמת את שם המשתמש והסיסמה בעזרת מחולל הסיסמאות. אם האימות הצליח נדפיס: \n <samp> ?Login succeeded. How much you'd like to withdraw </samp><br>\n אחרת נדפיס: <samp>.Login failed</samp></li>\n <li>כיון שלמשתמש יש 500 ש\"ח בחשבון, עלינו לוודא שסכום המשיכה אינו גבוה מ־500 ש\"ח. אם הסכום אכן גבוה מ־500 ש\"ח נדפיס: \n <samp>.Withdrawal denied</samp></li>\n <li>כמו כן, עלינו לוודא כי הסכום אינו שלילי או 0. במקרה זה נדפיס: \n <samp>.Invalid amount</samp></li>\n <li>אם הסכום חוקי, נדפיס: \n <samp>.(Please take your money (amount asked). 
Your balance is: (amount left in balance</samp></li>\n</ul>\n</p>\n\n<p style=\"text-align:right;direction:rtl\">ציירו תחילה על דף תרשים זרימה של הקוד – תרשים מעויינים ללא קוד.\nזה יעזור לכם במימוש.</p\n\n### <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nדוגמאות:</p>\n<p style=\"text-align: left; direction: ltr; float: left; clear: both;\">\n Insert username: <samp>stam</samp><br>\n Insert password: <samp>stamSTAMXXXX</samp><br>\n Login succeeded. How much you'd like to withdraw? <samp>200</samp><br>\n Please take your money (200NIS). Your balance is: 300NIS.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n דוגמה לסיסמה שגויה:\n</p>\n<p style=\"text-align: left; direction: ltr; float: left; clear: both;\">\n Insert username: <samp>stam</samp><br>\n Insert password: <samp>mats</samp><br>\n Login failed.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n דוגמה למשיכה לא חוקית:\n</p>\n\n<p style=\"text-align: left; direction: ltr; float: left; clear: both;\">\n Insert username: <samp>stam</samp><br>\n Insert password: <samp>stamSTAMXXXX</samp><br>\n Login succeeded. How much you'd like to withdraw? <samp>600</samp><br>\n Withdrawal denied.\n</p>\n\n<p style=\"text-align:right;direction:rtl;\">דוגמה למשיכה של 0 ש\"ח:</p>\n\n<p style=\"text-align: left; direction: ltr; float: left; clear: both;\">\n Insert username: <samp>stam</samp><br>\n Insert password: <samp>stamSTAMXXXX</samp><br>\n Login succeeded. How much you'd like to withdraw? <samp>0</samp><br>\n Invalid amount.\n</p>", "# כתבו את הפונקציה שלכם כאן", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לאחרונה עלתה דרישה לשכלל את הבנק שלנו, כך שרק מספר מצומצם של לקוחות יוכלו לגשת לבנק.<br>\n הגדירו רשימה של שמות של לקוחות שעבורם יתאפשר החיבור.<br>\n עבור לקוחות שאינם ברשימה יכתוב הבנק <samp>You are not a customer of the bank</samp>.\n</p>", "# כתבו את הפונקציה שלכם כאן", "<p style=\"text-align:right;direction:rtl;\">מתודות של מחרוזות</p>\n<p style=\"text-align:right;direction:rtl;\"> ניזכר בכמה פעולות של מחרוזות:<br>לכל אחד מהתרגילים הבאים הריצו את הדוגמה וכתבו בעצמכם 3 דוגמאות נוספות. הסבירו לעצמכם מה עושה כל מתודה למחרוזת שהיא מקבלת.<br>\nאם תרצו להיזכר מה עושה מתודה מסוימת תוכלו להריץ אותה כך:<br></p>\n<code>str.method-name?</code>\n<p style=\"text-align:right;direction:rtl;\"> לדוגמה:<br></p>\n<code>str.split?</code><br>\n<p style=\"text-align:right;direction:rtl;\"> הריצו את התא הבא לקבלת מידע על המתודה <em>split</em>:<br></p>", "str.split?\n\n\"abcdef:ghijk:xyz\".split(\":\")\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n\"543\".zfill(4)\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n\"now i am a lowercase string, one day i will be upper\".upper()\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n\"I AM AN UPPERCASE STRING. 
I AM AFRAID TO BE LOWERED!\".lower()\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n\"wow!_I_am_using_underscore_as_space!\".replace(\"_\", \" \")\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n\" i need some spaceeeeeeee \".strip()\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n\"i^wonder&if@all##characters$in%%%this string are alpha-numeric???\".isalnum()\n\n\"onlylettersandnumbers2342343\".isalnum()\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n\"lettersonly\".isalpha()\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n\"4535354353\".isdecimal()\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו\n\n# כתבו דוגמה למתודה זו", "<p style=\"text-align:right;direction:rtl;\">שעון עולמי</p>\n<p style=\"text-align:right;direction:rtl;\">בשאלה זו נכתוב גרסה של שעון עולמי התומך ב־4 אזורי זמן:<br>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>תל אביב – TLV</li>\n <li>לונדון – LDN</li>\n <li>ניו יורק – NYC</li>\n <li>טוקיו – TYO</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nאם נקבל תל אביב, נחזיר את השעה בתוספת 3 שעות.<br>\nאם נקבל ניו יורק נחזיר את השעה פחות 4.<br>\nאם נקבל טוקיו נחזיר את השעה ועוד 9.<br>\nבכל מקרה אחר נחזיר את השעה כמו שקיבלנו אותה.\n</p>\n\n\n<p style=\"text-align:right;direction:rtl;\">השעון שלנו בתרגיל זה הוא שעון 24 שעות בפורמט HH:MM.</p>\n<p style=\"text-align:right;direction:rtl;\">תחילה, כתבו פונקציה המקבלת את השעה בפורמט HH:MM, ומקבלת את מספר השעות שיש להוסיף או להוריד מהשעה הנתונה, ומחזירה את השעה המעודכנת.</p><br>\n<code>time_shift(\"08:44\", 1)</code><br>\n<samp style=\"text-align:left;direction:ltr;\">\"09:44\"</samp><br>\n<code>time_shift(\"07:31\", -2)</code><br>\n<samp>\"05:31\"</samp>\n<p style=\"text-align:right;direction:rtl;\">כמו כן עליכם לוודא שמוכנסת שעה חוקית, ולהדפיס שגיאה אם לא:</p><br>\n<code>time_shift(\"32:12\", 4)</code><br>\n<samp>\"Invalid time inserted\"</samp>\n<p style=\"text-align:right;direction:rtl;\">נוסף לכך, עליכם לתמוך במעברים בין יממות. 
כלומר עליכם לבצע נכון פעולות מסוג זה:</p><br>\n<code>time_shift(\"23:30\", 2)</code><br>\n<samp>\"01:30\"</samp><br>\n<code>time_shift(\"04:13\", -5)</code><br>\n<samp>\"23:13\"</samp>", "# כתבו את הפונקציה שלכם כאן", "<p style=\"text-align:right;direction:right;\">רמזים</p>\n<p style=\"text-align:right;direction:rtl;\">פונקציות שימושיות:\n <span style=\"direction: rtl; background: #000; text: #000\"><br><em>split</em> – מתודה של <em>string</em>.<br>\n האופרטור % (מודולו) – חשבו עם איזה מספר צריך לעשות מודולו.<br>\n <em>zfill</em> – השתמשו בה במקרה שהשעה חד־ספרתית (לדוגמה 1:05 תהפוך ל־01:05) </span>\n\n<p style=\"text-align:right;direction:rtl;\">רמזים נוספים:<br>\n <span style=\"direction: rtl; background: #000; text: #000\">מומלץ להמיר את השעה מ־<em>string</em> ל־<em>int</em> ואז לבצע את פעולות החשבון, ולבסוף להמיר חזרה ל־<em>string</em>\n </span>\n</p>\n\n<p style=\"text-align:right;direction:rtl;\"> כעת כתבו פונקציה המקבלת שני פרמטרים – שעה ואזור זמן ובאמצעות הפונקציה מהתרגיל הקודם מחזירה את השעה באזור הזמן המבוקש.\n<br>\n לדוגמה:\n</p>\n<p style=\"text-align:left;direction:ltr;\">\n <code>convert_to_timezone(\"10:34\", \"TLV\")</code>\n <br>\n <samp>\"13:34\"</samp>\n</p>", "# כתבו את הפונקציה שלכם כאן", "<span style=\"text-align: right; direction:rtl; float: right; clear: both;\">אורכי רשימות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כתבו תוכנית שמקבלת 2 רשימות שונות, ומדפיסה:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>\"<samp>Same length</samp>\" אם הן באותו אורך.</li>\n <li>\"<samp>Not same length</samp>\" אם הן באורך שונה.</li>\n <li>\"<samp>Got empty list</samp>\" אם רשימה שקיבלנו היא באורך 0.</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לדוגמה – אם קיבלנו 2 רשימות ריקות, נדפיס:<br>\n <samp style=\"text-align: left; direction: ltr; float: left; clear: both;\">\n \"Got empty list\"\n \"Got empty list\"\n \"Same length\"\n </samp>\n</p>", "# כתבו את הפונקציה שלכם כאן", "<span style=\"text-align: right; direction:rtl; float: right; clear: both;\">מיקומים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nכתבו פונקציה שמקבלת רשימה של רשימות.<br>\nאם הרשימה החיצונית לא באורך 6, הפונקציה תדפיס <samp>Only lists of length 6 are allowed</samp>.<br>\nהפונקציה תדפיס \"<samp dir=\"ltr\">Yes!</samp>\" אם אחד מהבאים מתקיים:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>אורך הרשימה במקום ה־0 שווה לאורך הרשימה במקום ה־4</li>\n <li>אורך הרשימה במקום ה־3 שווה לאורך הרשימה במקום ה־2 וה־1</li>\n <li>אורך הרשימה במקום ה־5 שווה לאורך הרשימה במקום ה־3</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nהפונקציה תדפיס \"<samp dir=\"ltr\">Yes!</samp>\" עבור כל תנאי שמתקיים, גם אם קיים יותר מאחד כזה.<br>\nלדוגמה, עבור:<br>\n<code dir=\"ltr\" style=\"direction: ltr;\">multi = [[0], [1], [2], [3], [4], [5]]</code><br>\nכל התנאים מתקיימים, ולכן נדפיס \"<samp dir=\"ltr\">Yes!</samp>\" 3 פעמים.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nכתבו לפחות 3 דוגמאות שונות שמדגימות:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>אפס הדפסות</li>\n <li>הדפסה אחת</li>\n <li>שתי הדפסות</li>\n <li>שלוש הדפסות</li>\n</ul>", "# כתבו את הפונקציה שלכם כאן" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
steinam/teacher
jup_notebooks/data-science-ipython-notebooks-master/misc/Algorithmia.ipynb
mit
[ "This notebook was prepared by Algorithmia. Source and license info is on GitHub.\nAlgorithmia\nReference: Algorithmia Documentation\nTable of Contents:\n1. Installation\n2. Authentication\n3. Face Detection\n4. Content Summarizer\n5. Latent Dirichlet Allocation\n6. Optical Character Recognition\n1. Installation\nYou need to have the algorithmia package (version 0.9.3) installed for this notebook.\nYou can install the package using the pip package manager:", "pip install algorithmia==0.9.3\n\nimport Algorithmia\nimport pprint\n\npp = pprint.PrettyPrinter(indent=2)", "2. Authentication\nYou only need your Algorithmia API Key to run the following commands.", "API_KEY = 'YOUR_API_KEY'\n# Create a client instance\nclient = Algorithmia.client(API_KEY)", "3. Face Detection\nUses a pretrained model to detect faces in a given image.\nRead more about Face Detection here", "from IPython.display import Image\n\nface_url = 'https://s3.amazonaws.com/algorithmia-assets/data-science-ipython-notebooks/face.jpg'\n\n# Sample Face Image\nImage(url=face_url)\n\nAlgorithmia.apiKey = 'Simple ' + API_KEY\n\ninput = [face_url, \"data://.algo/temp/face_result.jpg\"]\n\nalgo = client.algo('opencv/FaceDetection/0.1.8')\nalgo.pipe(input)\n\n# Result Image is in under another algorithm name because FaceDetection calls ObjectDetectionWithModels\nresult_image_data_api_path = '.algo/opencv/ObjectDetectionWithModels/temp/face_result.jpg'\n\n# Result Image with coordinates for the detected face region\nresult_coord_data_api_path = '.algo/opencv/ObjectDetectionWithModels/temp/face_result.jpgrects.txt'\n\nresult_file = Algorithmia.file(result_image_data_api_path).getBytes()\n\nresult_coord = Algorithmia.file(result_coord_data_api_path).getString()\n\n# Show Result Image\nImage(data=result_file)\n\n# Show detected face region coordinates\nprint 'Detected face region coordinates: ' + result_coord", "4. Content Summarizer\nSummarAI is an advanced content summarizer with the option of generating context-controlled summaries. It is based on award-winning patented methods related to artificial intelligence and vector space developed at Lawrence Berkeley National Laboratory.", "# Get a Wikipedia article as content\nwiki_article_name = 'Technological Singularity'\nclient = Algorithmia.client(API_KEY)\nalgo = client.algo('web/WikipediaParser/0.1.0')\nwiki_page_content = algo.pipe(wiki_article_name)['content']\nprint 'Wikipedia article length: ' + str(len(wiki_page_content))\n\n# Summarize the Wikipedia article\nclient = Algorithmia.client(API_KEY)\nalgo = client.algo('SummarAI/Summarizer/0.1.2')\nsummary = algo.pipe(wiki_page_content.encode('utf-8'))\nprint 'Wikipedia generated summary length: ' + str(len(summary['summarized_data']))\nprint summary['summarized_data']", "5. 
Latent Dirichlet Allocation\nThis algorithm takes a group of documents (anything that is made of up text), and returns a number of topics (which are made up of a number of words) most relevant to these documents.\nRead more about Latent Dirichlet Allocation here", "# Get up to 20 random Wikipedia articles\nclient = Algorithmia.client(API_KEY)\nalgo = client.algo('web/WikipediaParser/0.1.0')\nrandom_wiki_article_names = algo.pipe({\"random\":20})\n\nrandom_wiki_articles = []\n\nfor article_name in random_wiki_article_names:\n try:\n article_content = algo.pipe(article_name)['content']\n random_wiki_articles.append(article_content)\n except:\n pass\nprint 'Number of Wikipedia articles scraped: ' + str(len(random_wiki_articles))\n\n# Find topics from 20 random Wikipedia articles\nalgo = client.algo('nlp/LDA/0.1.0')\n\ninput = {\"docsList\": random_wiki_articles, \"mode\": \"quality\"}\n\ntopics = algo.pipe(input)\n\npp.pprint(topics)", "6. Optical Character Recognition\nRecognize text in your images.\nRead more about Optical Character Recognition here", "from IPython.display import Image\n\nbusinesscard_url = 'https://s3.amazonaws.com/algorithmia-assets/data-science-ipython-notebooks/businesscard.jpg'\n\n# Sample Image\nImage(url=businesscard_url)\n\ninput = {\"src\": businesscard_url,\n\"hocr\":{\n\"tessedit_create_hocr\":1,\n\"tessedit_pageseg_mode\":1,\n\"tessedit_char_whitelist\":\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-@/.,:()\"}}\n\nalgo = client.algo('tesseractocr/OCR/0.1.0')\npp.pprint(algo.pipe(input))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karlstroetmann/Artificial-Intelligence
Python/6 Classification/Gender-Estimation.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open (\"../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)", "Gender Estimation for First Names\nThis notebook gives a simple example for a naive Bayes classifier. We try to predict the gender of a first name. In order to train our classifier, we need a training set of names that are marked as being either male. We happen to have two text files, names-female.txt and names-male.txt containing female and male first names. We start by defining the function read_names. This function reads a file of strings and returns a list of all the names given in the file. Care is taken that the newline character at the end of each line is discarded.", "def read_names(file_name):\n Result = []\n with open(file_name, 'r') as file:\n for name in file:\n Result.append(name[:-1]) # discard newline\n return Result\n\nFemaleNames = read_names('names-female.txt')\nMaleNames = read_names('names-male.txt')", "Let us compute the prior probabilities $P(\\texttt{Female})$ and $P(\\texttt{Male})$ for the classes $\\texttt{Female}$ and $\\texttt{Male}$. In the lecture it was shown that the prior probability of a class $C$ in a training set $T$ is given as:\n$$ P(C) \\approx \n \\frac{\\mathtt{card}\\bigl({t \\in T \\;|\\; \\mathtt{class}(t) = C }\\bigr)}{\\mathtt{card}(T)}\n$$\nTherefore, these probabilities are computed as follows.", "pFemale = len(FemaleNames) / (len(FemaleNames) + len(MaleNames))\npMale = len(MaleNames) / (len(FemaleNames) + len(MaleNames))\npFemale", "As a first attempt to solve the problem we will use the last character of a name as its feature. We have to compute the conditional probability for every possible letter that occurs as the last letter of a name. The general formula to compute the conditional probability of a feature $f$ given a class $C$ is the following:\n$$ P(f\\;|\\;C) \\approx \n \\frac{\\mathtt{card}\\bigl({t \\in T \\;|\\; \\mathtt{class}(t) = C \\wedge \\mathtt{has}(t, f) }\\bigr)}{\n \\mathtt{card}\\bigl({t \\in T \\;|\\; \\mathtt{class}(t) = C }\\bigr)} \n$$\nThe function conditional_prop takes a character $c$ and a gender $g$ and determines the conditional probability of seeing $c$ as a last character of a name that has the gender $g$.", "def conditional_prop(c, g):\n if g == 'f':\n return len([n for n in FemaleNames if n[-1] == c]) / len(FemaleNames)\n else:\n return len([n for n in MaleNames if n[-1] == c]) / len(MaleNames)", "Next, we define a dictionary Conditional_Probability. 
For every character $c$ and every gender $g \\in {\\texttt{'f'}, \\texttt{'m'}}$, the entry $\\texttt{Conditional_Probability}[(c,g)]$ is the conditional probability of observing the last character $c$ if the gender is known to be $g$.", "Conditional_Probability = {}\nfor c in 'abcdefghijklmnopqrstuvwxyz':\n for g in ['f', 'm']:\n Conditional_Probability[c, g] = conditional_prop(c, g)", "Now that have both the prior probabilities $P(\\texttt{'f'})$ and $P(\\texttt{'m'})$ and also all the conditional probabilities $P(c|g)$, we are ready to implement our naive Bayes classifier.", "def classify(name):\n last = name[-1]\n female = Conditional_Probability[(last, 'f')] * pFemale\n male = Conditional_Probability[(last, 'm')] * pMale\n if female >= male:\n return 'f'\n else:\n return 'm'", "We test our classifier with two common names.", "classify('Christian')\n\nclassify('Elena')", "Let us check the overall accuracy of our classifier with respect to the training set.", "total = 0\ncorrect = 0\nfor n in FemaleNames:\n if classify(n) == 'f':\n correct += 1\n total += 1\nfor n in MaleNames:\n if classify(n) == 'm':\n correct += 1\n total += 1\naccuracy = correct / total\naccuracy", "An accuracy of 76% is not too bad for a first attempt, but we can do better by using more sophisticated features." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
zzsza/Datascience_School
24. PCA/01. 고유분해와 특이값 분해.ipynb
mit
[ "고유분해와 특이값 분해\n정방 행렬 $A$에 대해 다음 식을 만족하는 단위 벡터 $v$, 스칼라 $\\lambda$을 여러 개 찾을 수 있다.\n$$ Av = \\lambda v $$\n\n\n$ A \\in \\mathbf{R}^{M \\times M} $\n\n\n$ \\lambda \\in \\mathbf{R} $\n\n\n$ v \\in \\mathbf{R}^{M} $\n\n\n이러한 실수 $\\lambda$를 고유값(eigenvalue), 단위 벡터 $v$ 를 고유벡터(eigenvector) 라고 하며 고유값과 고유벡터를 찾는 작업을 고유분해(eigen-decomposition)라고 한다.\n$ A \\in \\mathbf{R}^{M \\times M} $ 에 대해 최대 $M$개의 고유값-고유벡터 쌍이 존재할 수 있다.\n예를 들어 다음 행렬 $A$\n$$\nA=\n\\begin{bmatrix}\n1 & -2 \\\n2 & -3\n\\end{bmatrix}\n$$\n에 대해 다음 단위 벡터와 스칼라 값은 고유벡터-고유값이 된다.\n$$\\lambda = -1$$\n$$\nv=\n\\begin{bmatrix}\n\\dfrac{1}{\\sqrt{2}} \\\n\\dfrac{1}{\\sqrt{2}}\n\\end{bmatrix}\n$$\n복수 개의 고유 벡터가 존재하는 경우에는 다음과 같이 고유벡터 행렬 $V$와 고유값 행렬 $\\Lambda$로 표기할 수 있다.\n$$ \nA \\left[ v_1 \\cdots v_M \\right] =\n\\left[ \\lambda_1 v_1 \\cdots \\lambda_M v_M \\right] =\n\\left[ v_1 \\cdots v_M \\right] \n\\begin{bmatrix}\n\\lambda_{1} & 0 & \\cdots & 0 \\\n0 & \\lambda_{2} & \\cdots & 0 \\\n\\vdots & \\vdots & \\ddots & \\vdots \\\n0 & 0 & \\cdots & \\lambda_{M} \\\n\\end{bmatrix}\n$$\n$$ AV = V\\Lambda $$\n여기에서 \n$$\nV = \\left[ v_1 \\cdots v_M \\right]\n$$\n$$\n\\Lambda =\n\\begin{bmatrix}\n\\lambda_{1} & 0 & \\cdots & 0 \\\n0 & \\lambda_{2} & \\cdots & 0 \\\n\\vdots & \\vdots & \\ddots & \\vdots \\\n0 & 0 & \\cdots & \\lambda_{M} \\\n\\end{bmatrix}\n$$\nnumpy linalg 서브패키지에서는 고유값과 고유벡터를 구할 수 있는 eig 명령을 제공한다.", "w, V = np.linalg.eig(np.array([[1, -2], [2, -3]]))\n\nw\n\nV", "대칭 행렬의 고유 분해\n행렬 $A$가 대칭(symmetric) 행렬이면 고유값 벡터 행렬 $V$는 다음과 같이 전치 행렬이 역행렬과 같아진다.\n$$ V^T V = V V^T = I$$\n이 때는 고유 분해가 다음과 같이 표시된다.\n$$ A = V\\Lambda V^T = \\sum_{i=1}^{M} {\\lambda_i} v_i v_i^T$$\n$$ A^{-1} = V \\Lambda^{-1} V^T = \\sum_{i=1}^{M} \\dfrac{1}{\\lambda_i} v_i v_i^T$$\n확률 변수의 좌표 변환\n확률 변수의 공분산 행렬 $\\Sigma$ 은 대칭 행렬이므로 위의 관계식이 성립한다. \n따라서 다변수 가우시안 정규 분포의 확률 밀도 함수는 다음과 같이 표시할 수 있다.\n$$\n\\begin{eqnarray}\n\\mathcal{N}(x \\mid \\mu, \\Sigma) \n&=& \\dfrac{1}{(2\\pi)^{D/2} |\\Sigma|^{1/2}} \\exp \\left( -\\dfrac{1}{2} (x-\\mu)^T \\Sigma^{-1} (x-\\mu) \\right) \\\n&=& \\dfrac{1}{(2\\pi)^{D/2} |\\Sigma|^{1/2}} \\exp \\left( -\\dfrac{1}{2} (x-\\mu)^T V \\Lambda^{-1} V^T (x-\\mu) \\right) \\\n&=& \\dfrac{1}{(2\\pi)^{D/2} |\\Sigma|^{1/2}} \\exp \\left( -\\dfrac{1}{2} (V^T(x-\\mu))^T \\Lambda^{-1} (V^T (x-\\mu)) \\right) \\\n\\end{eqnarray}\n$$\n즉 변환 행렬 $V^T$로 좌표 변환하면 서로 독립인 성분들로 나누어진다.", "mu = [2, 3]\ncov = [[2, 3],[3, 7]]\nrv = sp.stats.multivariate_normal(mu, cov)\nxx = np.linspace(0, 4, 120)\nyy = np.linspace(1, 5, 150)\nXX, YY = np.meshgrid(xx, yy)\nplt.grid(False)\nplt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))\n\nx1 = np.array([0, 2])\nx1_mu = x1 - mu\nx2 = np.array([3, 4])\nx2_mu = x2 - mu\nplt.plot(x1_mu[0] + mu[0], x1_mu[1] + mu[1], 'bo', ms=20)\nplt.plot(x2_mu[0] + mu[0], x2_mu[1] + mu[1], 'ro', ms=20)\n\nplt.axis(\"equal\")\nplt.show()\n\nw, V = np.linalg.eig(cov)\n\nw\n\nV\n\nrv = sp.stats.multivariate_normal(mu, w)\nxx = np.linspace(0, 4, 120)\nyy = np.linspace(1, 5, 150)\nXX, YY = np.meshgrid(xx, yy)\nplt.grid(False)\nplt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))\n\nx1 = np.array([0, 2])\nx1_mu = x1 - mu\nx2 = np.array([3, 4])\nx2_mu = x2 - mu\n\nx1t_mu = V.T.dot(x1_mu) # 좌표 변환\nx2t_mu = V.T.dot(x2_mu) # 좌표 변환\n\nplt.plot(x1t_mu[0] + mu[0], x1t_mu[1] + mu[1], 'bo', ms=20)\nplt.plot(x2t_mu[0] + mu[0], x2t_mu[1] + mu[1], 'ro', ms=20)\n\nplt.axis(\"equal\")\nplt.show()", "특이값 분해\n정방 행렬이 아닌 행렬 $M$에 대해서도 고유 분해와 유사한 분해가 가능하다. 
이를 특이값 분해(singular value decomposition)이라고 한다.\n\n$M \\in \\mathbf{R}^{m \\times n}$ \n\n$$M = U \\Sigma V^T$$ \n여기에서 \n* $U \\in \\mathbf{R}^{m \\times m}$ \n* $\\Sigma \\in \\mathbf{R}^{m \\times n}$\n* $V \\in \\mathbf{R}^{n \\times n}$ \n이고 행렬 $U$와 $V$는 다음 관계를 만족한다.\n$$ U^T U = UU^T = I $$\n$$ V^T V = VV^T = I $$\n예를 들어\n$$\\mathbf{M} = \\begin{bmatrix}\n 1 & 0 & 0 & 0 & 2 \\\n 0 & 0 & 3 & 0 & 0 \\\n 0 & 0 & 0 & 0 & 0 \\\n 0 & 2 & 0 & 0 & 0\n \\end{bmatrix}\n$$\n에 대한 특이값 분해 결과는 다음과 같다.\n$$\n\\begin{align}\n\\mathbf{U} &= \\begin{bmatrix}\n 0 & 0 & 1 & 0 \\\n 1 & 0 & 0 & 0 \\\n 0 & 0 & 0 & -1 \\\n 0 & 1 & 0 & 0 \\\n \\end{bmatrix} \\\n\\boldsymbol{\\Sigma} &= \\begin{bmatrix}\n \\sqrt{5} & 0 & 0 & 0 & 0 \\\n 0 & 2 & 0 & 0 & 0 \\\n 0 & 0 & 1 & 0 & 0 \\\n 0 & 0 & 0 & 0 & 0\n \\end{bmatrix} \\\n\\mathbf{V}^T &= \\begin{bmatrix}\n 0 & 0 & \\sqrt{0.2} & 0 & \\sqrt{0.8} \\\n 0 & 1 & 0 & 0 & 0 \\\n 1 & 0 & 0 & 0 & 0 \\\n 0 & 0 & -\\sqrt{0.8} & 0 & \\sqrt{0.2} \\\n 0 & 0 & 0 & 1 & 0 \\\n \\end{bmatrix}\n\\end{align}$$\n이는 다음과 같이 확인 할 수 있다.\n$$\\begin{align}\n\\mathbf{U} \\mathbf{U^T} &=\n \\begin{bmatrix}\n 0 & 0 & 1 & 0 \\\n 1 & 0 & 0 & 0 \\\n 0 & 0 & 0 & -1 \\\n 0 & 1 & 0 & 0 \\\n \\end{bmatrix}\n\\cdot\n \\begin{bmatrix}\n 0 & 1 & 0 & 0 \\\n 0 & 0 & 0 & 1 \\\n 1 & 0 & 0 & 0 \\\n 0 & 0 & -1 & 0 \\\n \\end{bmatrix}\n = \n \\begin{bmatrix}\n 1 & 0 & 0 & 0 \\\n 0 & 1 & 0 & 0 \\\n 0 & 0 & 1 & 0 \\\n 0 & 0 & 0 & 1\n \\end{bmatrix} \n = \\mathbf{I}_4 \\\n\\mathbf{V} \\mathbf{V^T} &=\n \\begin{bmatrix}\n 0 & 0 & \\sqrt{0.2} & 0 & \\sqrt{0.8} \\\n 0 & 1 & 0 & 0 & 0 \\\n 1 & 0 & 0 & 0 & 0 \\\n 0 & 0 & -\\sqrt{0.8} & 0 & \\sqrt{0.2} \\\n 0 & 0 & 0 & 1 & 0 \\\n \\end{bmatrix}\n \\cdot\n \\begin{bmatrix}\n 0 & 0 & 1 & 0 & 0 \\\n 0 & 1 & 0 & 0 & 0 \\\n \\sqrt{0.2} & 0 & 0 & -\\sqrt{0.8} & 0\\\n 0 & 0 & 0 & 0 & 1 \\\n \\sqrt{0.8} & 0 & 0 & \\sqrt{0.2} & 0 \\\n \\end{bmatrix} \n =\n \\begin{bmatrix}\n 1 & 0 & 0 & 0 & 0 \\\n 0 & 1 & 0 & 0 & 0 \\\n 0 & 0 & 1 & 0 & 0 \\\n 0 & 0 & 0 & 1 & 0 \\\n 0 & 0 & 0 & 0 & 1\n \\end{bmatrix} \n = \\mathbf{I}_5\n\\end{align}$$", "from pprint import pprint\nM = np.array([[1,0,0,0,0],[0,0,2,0,3],[0,0,0,0,0],[0,2,0,0,0]])\nprint(\"\\nM:\"); pprint(M)\nU, S0, V0 = np.linalg.svd(M, full_matrices=True)\nprint(\"\\nU:\"); pprint(U)\nS = np.hstack([np.diag(S0), np.zeros(M.shape[0])[:, np.newaxis]])\nprint(\"\\nS:\"); pprint(S)\nprint(\"\\nV:\"); pprint(V)\nV = V0.T\nprint(\"\\nU.dot(U.T):\"); pprint(U.dot(U.T))\nprint(\"\\nV.dot(V.T):\"); pprint(V.dot(V.T))\nprint(\"\\nU.dot(S).dot(V.T):\"); pprint(U.dot(S).dot(V.T))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb
apache-2.0
[ "Introducing the Keras Functional API\nLearning Objectives\n 1. Understand embeddings and how to create them with the feature column API\n 1. Understand Deep and Wide models and when to use them\n 1. Understand the Keras functional API and how to build a deep and wide model with it\nIntroduction\nIn the last notebook, we learned about the Keras Sequential API. The Keras Functional API provides an alternate way of building models which is more flexible. With the Functional API, we can build models with more complex topologies, multiple input or output layers, shared layers or non-sequential data flows (e.g. residual layers).\nIn this notebook we'll use what we learned about feature columns to build a Wide & Deep model. Recall, that the idea behind Wide & Deep models is to join the two methods of learning through memorization and generalization by making a wide linear model and a deep learning model to accommodate both. You can have a look at the original research paper here: Wide & Deep Learning for Recommender Systems.\n<img src='assets/wide_deep.png' width='80%'>\n<sup>(image: https://ai.googleblog.com/2016/06/wide-deep-learning-better-together-with.html)</sup>\nThe Wide part of the model is associated with the memory element. In this case, we train a linear model with a wide set of crossed features and learn the correlation of this related data with the assigned label. The Deep part of the model is associated with the generalization element where we use embedding vectors for features. The best embeddings are then learned through the training process. While both of these methods can work well alone, Wide & Deep models excel by combining these techniques together. \nStart by importing the necessary libraries for this lab.", "import datetime\nimport os\nimport shutil\n\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom matplotlib import pyplot as plt\nfrom tensorflow import feature_column as fc\nfrom tensorflow import keras\nfrom tensorflow.keras import Model\nfrom tensorflow.keras.callbacks import TensorBoard\nfrom tensorflow.keras.layers import Dense, DenseFeatures, Input, concatenate\n\nprint(tf.__version__)\n\n%matplotlib inline", "Load raw data\nWe will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.", "!ls -l ../data/*.csv", "Use tf.data to read the CSV files\nWe wrote these functions for reading data from the csv files above in the previous notebook. For this lab we will also include some additional engineered features in our model. In particular, we will compute the difference in latitude and longitude, as well as the Euclidean distance between the pick-up and drop-off locations. We can accomplish this by adding these new features to the features dictionary with the function add_engineered_features below. 
\nNote that we include a call to this function when collecting our features dict and labels in the features_and_labels function below as well.", "CSV_COLUMNS = [\n \"fare_amount\",\n \"pickup_datetime\",\n \"pickup_longitude\",\n \"pickup_latitude\",\n \"dropoff_longitude\",\n \"dropoff_latitude\",\n \"passenger_count\",\n \"key\",\n]\nLABEL_COLUMN = \"fare_amount\"\nDEFAULTS = [[0.0], [\"na\"], [0.0], [0.0], [0.0], [0.0], [0.0], [\"na\"]]\nUNWANTED_COLS = [\"pickup_datetime\", \"key\"]\n\n\ndef features_and_labels(row_data):\n label = row_data.pop(LABEL_COLUMN)\n features = row_data\n\n for unwanted_col in UNWANTED_COLS:\n features.pop(unwanted_col)\n\n return features, label\n\n\ndef create_dataset(pattern, batch_size=1, mode=\"eval\"):\n dataset = tf.data.experimental.make_csv_dataset(\n pattern, batch_size, CSV_COLUMNS, DEFAULTS\n )\n\n dataset = dataset.map(features_and_labels)\n\n if mode == \"train\":\n dataset = dataset.shuffle(buffer_size=1000).repeat()\n\n # take advantage of multi-threading; 1=AUTOTUNE\n dataset = dataset.prefetch(1)\n return dataset", "Feature columns for Wide and Deep model\nFor the Wide columns, we will create feature columns of crossed features. To do this, we'll create a collection of Tensorflow feature columns to pass to the tf.feature_column.crossed_column constructor. The Deep columns will consist of numeric columns and the embedding columns we want to create. \nExercise. In the cell below, create feature columns for our wide-and-deep model. You'll need to build \n1. bucketized columns using tf.feature_column.bucketized_column for the pickup and dropoff latitude and longitude,\n2. crossed columns using tf.feature_column.crossed_column for those bucketized columns, and \n3. embedding columns using tf.feature_column.embedding_column for the crossed columns.", "# 1. Bucketize latitudes and longitudes\nNBUCKETS = 16\nlatbuckets = np.linspace(start=38.0, stop=42.0, num=NBUCKETS).tolist()\nlonbuckets = np.linspace(start=-76.0, stop=-72.0, num=NBUCKETS).tolist()\n\nfc_bucketized_plat = # TODO: Your code goes here.\nfc_bucketized_plon = # TODO: Your code goes here.\nfc_bucketized_dlat = # TODO: Your code goes here.\nfc_bucketized_dlon = # TODO: Your code goes here.\n\n# 2. Cross features for locations\nfc_crossed_dloc = # TODO: Your code goes here.\nfc_crossed_ploc = # TODO: Your code goes here.\nfc_crossed_pd_pair = # TODO: Your code goes here.\n\n# 3. Create embedding columns for the crossed columns\nfc_pd_pair = # TODO: Your code goes here.\nfc_dloc = # TODO: Your code goes here.\nfc_ploc = # TODO: Your code goes here.", "Gather list of feature columns\nNext we gather the list of wide and deep feature columns we'll pass to our Wide & Deep model in Tensorflow. Recall, wide columns are sparse, have linear relationship with the output while continuous columns are deep, have a complex relationship with the output. We will use our previously bucketized columns to collect crossed feature columns and sparse feature columns for our wide columns, and embedding feature columns and numeric features columns for the deep columns.\nExercise. Collect the wide and deep columns into two separate lists. 
You'll have two lists: One called wide_columns containing the one-hot encoded features from the crossed features and one called deep_columns which contains numeric and embedding feature columns.", "# TODO 2\nwide_columns = [\n # One-hot encoded feature crosses\n # TODO: Your code goes here.\n]\n\ndeep_columns = [\n # Embedding_columns\n # TODO: Your code goes here.\n # Numeric columns\n # TODO: Your code goes here.\n]", "Build a Wide and Deep model in Keras\nTo build a wide-and-deep network, we connect the sparse (i.e. wide) features directly to the output node, but pass the dense (i.e. deep) features through a set of fully connected layers. Here’s that model architecture looks using the Functional API.\nFirst, we'll create our input columns using tf.keras.layers.Input.", "INPUT_COLS = [\n \"pickup_longitude\",\n \"pickup_latitude\",\n \"dropoff_longitude\",\n \"dropoff_latitude\",\n \"passenger_count\",\n]\n\ninputs = {\n colname: Input(name=colname, shape=(), dtype=\"float32\")\n for colname in INPUT_COLS\n}", "Then, we'll define our custom RMSE evaluation metric and build our wide and deep model.\nExercise. Complete the code in the function build_model below so that it returns a compiled Keras model. The argument dnn_hidden_units should represent the number of units in each layer of your network. Use the Functional API to build a wide-and-deep model. Use the deep_columns you created above to build the deep layers and the wide_columns to create the wide layers. Once you have the wide and deep components, you will combine them to feed to a final fully connected layer.", "def rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n\ndef build_model(dnn_hidden_units):\n # Create the deep part of model\n deep = # TODO: Your code goes here.\n \n # Create the wide part of model\n wide = # TODO: Your code goes here.\n\n # Combine deep and wide parts of the model\n combined = # TODO: Your code goes here.\n\n # Map the combined outputs into a single prediction value\n output = # TODO: Your code goes here.\n \n # Finalize the model\n model = # TODO: Your code goes here.\n\n # Compile the keras model\n model.compile(\n # TODO: Your code goes here.\n )\n return model", "Next, we can call the build_model to create the model. Here we'll have two hidden layers, each with 10 neurons, for the deep part of our model. 
We can also use plot_model to see a diagram of the model we've created.", "HIDDEN_UNITS = [10, 10]\n\nmodel = build_model(dnn_hidden_units=HIDDEN_UNITS)\n\ntf.keras.utils.plot_model(model, show_shapes=False, rankdir=\"LR\")", "Next, we'll set up our training variables, create our datasets for training and validation, and train our model.\n(We refer you the the blog post ML Design Pattern #3: Virtual Epochs for further details on why express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.)", "BATCH_SIZE = 1000\nNUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around\nNUM_EVALS = 50 # how many times to evaluate\nNUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample\n\ntrainds = create_dataset(\n pattern=\"../data/taxi-train*\", batch_size=BATCH_SIZE, mode=\"train\"\n)\n\nevalds = create_dataset(\n pattern=\"../data/taxi-valid*\", batch_size=BATCH_SIZE, mode=\"eval\"\n).take(NUM_EVAL_EXAMPLES // 1000)\n\n%%time\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)\n\nOUTDIR = \"./taxi_trained\"\nshutil.rmtree(path=OUTDIR, ignore_errors=True) # start fresh each time\n\nhistory = model.fit(\n x=trainds,\n steps_per_epoch=steps_per_epoch,\n epochs=NUM_EVALS,\n validation_data=evalds,\n callbacks=[TensorBoard(OUTDIR)],\n)", "Just as before, we can examine the history to see how the RMSE changes through training on the train set and validation set.", "RMSE_COLS = [\"rmse\", \"val_rmse\"]\n\npd.DataFrame(history.history)[RMSE_COLS].plot()", "Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mediagit2016/workcamp-maschinelles-lernen-grundlagen
17-12-11-workcamp-ml/2017-12-11-arbeiten-mit-dictionaries-10.ipynb
gpl-3.0
[ "<h1>Dictionaries</h1>\n<li>d={}\n<li>d.values()\n<li>d.keys()\n<li>d.items()\n<li>d.clear()\n<li>d.copy()\n<li>d.get(k,x)\n<li>k in d\n<li>d.setdefault(k[ ,x])\n<li>d1.update(d2)", "mktcaps = {'AAPL':538.7,'GOOG':68.7,'IONS':4.6}# Dictionary wird initialisiert\nprint(type(mktcaps))\nprint(mktcaps)\nprint(mktcaps.values())\nprint(mktcaps.keys())\nprint(mktcaps.items())\nc=mktcaps.items()\nprint c[0]\n\n\nmktcaps['AAPL'] #Gibt den Wert zurück der mit dem Schlüssel \"AAPL\" verknüpft ist\n\nmktcaps['GS'] #Fehler weil GS nicht in mktcaps enthalten ist\n\nmktcaps.get('GS') #Ergibt keinen Wert, da GS nicht in mktcaps enthalten ist\n\nmktcaps['GS'] = 88.65 #Fügt GS in das dictionary ein\nprint(mktcaps) \n\ndel(mktcaps['GOOG']) #ENtfernt GOOG von mktcaps\nprint(mktcaps)\n\nmktcaps.keys() #gibt alle keys zurück\n\nmktcaps.values() #gibt alle Werte zurück\n\nimport hashlib\nl=('AAA','BBB','CCC','DDD','EEE')\nprint(l)\nprint(len(l))\nhshdict={'AAA':hashlib.sha256('AAA)')}\nhshdict.values()\nv=hshdict['AAA']\nm=v.hexdigest()\nprint(m)\n", "<h3>Beispiel: Alter</h3>", "alter = {'Peter':45,'Julia':23,'Mathias':36} #Erzeugen eines Dictionaries\nprint(alter)\n\nalter['Julia']=27 #Ändern des Alters\nalter['Monika']=33 #Hinzufügen von Monika - die Reihenfolge der Schlüssel spielt keine Rolle\nprint(alter)\nif 'Monika' in alter:\n print (alter['Monika'])\n", "<h3>Beispiel: Temperaturen in Staedten</h3>", "temperatur={'stuttgart':32.9,'muenchen':29.8,'hamburg':24.4}# Erzeugen eines dictionaries mit Temperaturen in verschiedenen Städten\ntemperatur['koeln']=29.7 #hinzufuegen der temperatur in koeln\nprint(temperatur) #ausgabe der temperaturen\n\nfor stadt in temperatur:\n print('Die Temperatur in %s ist %g °C' % (stadt,temperatur[stadt]))\n\nif 'Berlin' in temperatur:\n print ('Berlin:', temperatur['Berlin'])\nelse:\n print ('Keine Daten für Berlin gefunden')\n\n'stuttgart' in temperatur #überprüfen ob Schlüssel in temperatur enthalten ist\n\ntemperatur.keys() #Ausgabe der Schlüssel im Dictionary\n\ntemperatur.values()#ausgabe der Werte im Dictionary\n\nfor stadt in sorted(temperatur):\n print(stadt)\n\ntemperatur_kopie=temperatur.copy() #erstellt eine KOpie des dictonaries\nprint (temperatur_kopie)\n\ntemperatur2={'stuttgart':22.9,'muenchen':23.8,'hamburg':21.4} #ein 2-tes dictionary\ntemperatur.update(temperatur2)\nfor stadt in temperatur:\n print('Die Temperatur in %s ist %g °C' % (stadt,temperatur[stadt]))\nprint('Anzahl enthaltene Staedte: %g'% len(temperatur))\n\n\ntemperatur2={'stuttgart':22.9,'muenchen':23.8,'hamburg':21.4,'koeln':18.6,'frankfurt':20.6, 'weimar':18.8} #ein 2-tes dictionary\ntemperatur.update(temperatur2)\nfor stadt in temperatur:\n print('Die Temperatur in %s ist %g °C' % (stadt,temperatur[stadt]))\nprint('Anzahl enthaltene Staedte: %g'% len(temperatur))", "<h2>Beispiel Studenten - mit dictionary</h2>", "st={}#Erzeugen des leeren dictionarys\nst['100100'] = {'Mathe':1.0, 'Bwl':2.5}\nst['100200'] = {'Mathe':2.3, 'Bwl':1.8}\nprint(st.items())\nprint(type(st))\nprint(st.values())\nprint(st.keys())\n\n", "<h2>Schrittweiser Aufbau eines Studentenverezichnisses</h2>", "def stud_verz():\n stud={}#erzeugen eines leeren dictionaries\n student=input('Matrikel-Nr als string eingeben:')\n while student:\n Mathe = input('Mathe Note eingeben:')\n Bwl = input('Bwl Note eingeben:')\n stud[student]={\"Mathematik\":Mathe,\"BWL\":Bwl}\n student=input('Matrikel-Nr als string eingeben:')\n return stud\n\nprint (stud_verz())\n\n \n", "<h2>Ein Dictionary aus anderen zusammensetzen\n<li>d2.update(d1)", 
"d1={'hans':1.8,'peter':1.73,'rainer':1.74}\nd2={'petra':1.8,'hannes':1.73,'rainer':1.78}\nd1.update(d2)\nprint(d1)", "<h2>Datenzugriff in einem dictionary", "deutsch = {'key':['Schluessel','Taste'],'slice':['Scheibe','Schnitte','Stueck'],'value':['Wert']}\nprint(deutsch)\n######Abfangen von Abfragefehlern\ndef uebersetze(wort,d):\n if wort in d:\n return d[wort]\n else:\n return 'unbekannt'\n\nprint(uebersetze('slice',deutsch))\nuebersetze('search',deutsch)", "<h1>Vokabeltrainer entwickeln", "#Vokabeltrainer entwickeln\nimport random\n#Definition der Funktionen\ndef dict_laden(pfad):\n d={}\n try:\n datei = open(pfad)\n liste = datei.readlines()\n for eintrag in liste:\n l_eintrag = eintrag.split()\n d[l_eintrag[0]]=l_eintrag[1:]\n datei.close()\n except:\n pass\n return d\n\n#def aufgabe(d):\n zufall = random.randint(0, len(d.keys())-1)\n vokabel = list(d.keys())[zufall]\n #print(vokabel +'?')\n#Datei liegt auf dem Pfad\n#c:\\\\Benutzer\\\\ramon\\\\Dokumente\\\\Python Scripts\\\\python-edx-07-07-17\\\\woerterbuch.txt'\n#woerterbuch liste von einträgen mit leerzeichen getrennt\n\nd={}\ndatei=open('woerterbuch.txt')\nliste = datei.readlines()\nprint(liste)\nfor eintrag in liste:\n l_eintrag = eintrag.split()#trennung an leerzeichen\n #print(l_eintrag[0])\n #print(l_eintrag[1])\n d[l_eintrag[0]]=l_eintrag[1:]\ndatei.close()\nprint(d)\nzufall = random.randint(0, len(d.keys())-1)\nvokabel = list(d.keys())[zufall]\nprint(vokabel+' ?') \nantwort=input()\n\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sriharshams/mlnd
boston_housing/boston_housing.ipynb
apache-2.0
[ "Machine Learning Engineer Nanodegree\nModel Evaluation & Validation\nProject: Predicting Boston Housing Prices\nWelcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.\nThe dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:\n- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.\n- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.\n- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.\n- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.\nRun the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. 
You will know the dataset loaded successfully if the size of the dataset is reported.", "# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom sklearn.cross_validation import ShuffleSplit\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the Boston housing dataset\ndata = pd.read_csv('housing.csv')\nprices = data['MEDV']\nfeatures = data.drop('MEDV', axis = 1)\n \n# Success\nprint \"Boston housing dataset has {} data points with {} variables each.\".format(*data.shape)", "Data Exploration\nIn this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.\nSince the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.\nImplementation: Calculate Statistics\nFor your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.\nIn the code cell below, you will need to implement the following:\n- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.\n - Store each calculation in their respective variable.", "# TODO: Minimum price of the data\nminimum_price = np.min(prices)\n\n# TODO: Maximum price of the data\nmaximum_price = np.max(prices)\n\n# TODO: Mean price of the data\nmean_price = np.mean(prices)\n\n# TODO: Median price of the data\nmedian_price = np.median(prices)\n\n# TODO: Standard deviation of prices of the data\nstd_price = np.std(prices)\n\n# Show the calculated statistics\nprint \"Statistics for Boston housing dataset:\\n\"\nprint \"Minimum price: ${:,.2f}\".format(minimum_price)\nprint \"Maximum price: ${:,.2f}\".format(maximum_price)\nprint \"Mean price: ${:,.2f}\".format(mean_price)\nprint \"Median price ${:,.2f}\".format(median_price)\nprint \"Standard deviation of prices: ${:,.2f}\".format(std_price)", "Question 1 - Feature Observation\nAs a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):\n- 'RM' is the average number of rooms among homes in the neighborhood.\n- 'LSTAT' is the percentage of homeowners in the neighborhood considered \"lower class\" (working poor).\n- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.\nUsing your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? 
Justify your answer for each.\nHint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?\nAnswer: \nBased on my intution, each of the below features explained thier effect on the value of 'MEDV' (label or target). This is further supported with Linear regression plots.\n- 'RM' : Typically increase in 'RM' would lead to an increase in 'MEDV'. The 'RM' is also a good indication of the size of the homes, bigger house, bigger value.\n- 'LSTAT' : Typically decrease in 'LSTAT' would lead to an increse in 'MEDV'. As 'LSTAT' is the indication of working class, \"lower\" class reduces the value of home.\n- 'PTRATIO' : Typically decrease in 'PTRATIO' (students) would lead to an increase in 'MEDV'. As 'PTRATIO' is an indication that schools are good and suffeciently staffed and funded. \nRM", "import matplotlib.pyplot as plt\n%matplotlib inline\nfrom sklearn.linear_model import LinearRegression\nreg = LinearRegression()\npt_ratio = data[\"RM\"].reshape(-1,1)\nreg.fit(pt_ratio, prices)\n\n# Create the figure window\nplt.plot(pt_ratio, reg.predict(pt_ratio), color='red', lw=1)\nplt.scatter(pt_ratio, prices, alpha=0.5, c=prices)\nplt.xlabel('RM')\nplt.ylabel('Prices')\nplt.show()", "LSTAT", "import matplotlib.pyplot as plt\n%matplotlib inline\nfrom sklearn.linear_model import LinearRegression\nreg = LinearRegression()\npt_ratio = data[\"LSTAT\"].reshape(-1,1)\nreg.fit(pt_ratio, prices)\n\n# Create the figure window\nplt.plot(pt_ratio, reg.predict(pt_ratio), color='red', lw=1)\nplt.scatter(pt_ratio, prices, alpha=0.5, c=prices)\nplt.xlabel('LSTAT')\nplt.ylabel('Prices')\nplt.show()", "PTRATIO", "import matplotlib.pyplot as plt\n%matplotlib inline\nfrom sklearn.linear_model import LinearRegression\nreg = LinearRegression()\npt_ratio = data[\"PTRATIO\"].reshape(-1,1)\nreg.fit(pt_ratio, prices)\n\n# Create the figure window\nplt.plot(pt_ratio, reg.predict(pt_ratio), color='red', lw=1)\nplt.scatter(pt_ratio, prices, alpha=0.5, c=prices)\nplt.xlabel('PTRATIO')\nplt.ylabel('Prices')\nplt.show()\n", "Developing a Model\nIn this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.\nImplementation: Define a Performance Metric\nIt is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how \"good\" that model is at making predictions. \nThe values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. 
A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable.\nFor the performance_metric function in the code cell below, you will need to implement the following:\n- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.\n- Assign the performance score to the score variable.", "# TODO: Import 'r2_score'\nfrom sklearn.metrics import r2_score\n\ndef performance_metric(y_true, y_predict):\n \"\"\" Calculates and returns the performance score between \n true and predicted values based on the metric chosen. \"\"\"\n \n # TODO: Calculate the performance score between 'y_true' and 'y_predict'\n score = r2_score(y_true, y_predict)\n \n # Return the score\n return score", "Question 2 - Goodness of Fit\nAssume that a dataset contains five data points and a model made the following predictions for the target variable:\n| True Value | Prediction |\n| :-------------: | :--------: |\n| 3.0 | 2.5 |\n| -0.5 | 0.0 |\n| 2.0 | 2.1 |\n| 7.0 | 7.8 |\n| 4.2 | 5.3 |\nWould you consider this model to have successfully captured the variation of the target variable? Why or why not? \nRun the code cell below to use the performance_metric function and calculate this model's coefficient of determination.", "# Calculate the performance of this model\nscore = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])\nprint \"Model has a coefficient of determination, R^2, of {:.3f}.\".format(score)", "Answer:\n- Yes I would consider this model to have successfully captured the variation of the target variable. \n- R2 is 0.923 which is very close to 1, means the True Value is 92.3% predicted from Prediction\n- As shown below it is possible to plot the values to get the visual representation in this scenario", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ntrue, pred = [3.0, -0.5, 2.0, 7.0, 4.2],[2.5, 0.0, 2.1, 7.8, 5.3]\n#plot true values\ntrue_handle = plt.scatter(true, true, alpha=0.6, color='blue', label = 'True' )\n\n#reference line\nfit = np.poly1d(np.polyfit(true, true, 1))\nlims = np.linspace(min(true)-1, max(true)+1)\nplt.plot(lims, fit(lims), alpha = 0.3, color = \"black\")\n\n#plot predicted values\npred_handle = plt.scatter(true, pred, alpha=0.6, color='red', label = 'Pred')\n\n#legend & show\nplt.legend(handles=[true_handle, pred_handle], loc=\"upper left\")\nplt.show()\n", "Implementation: Shuffle and Split Data\nYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.\nFor the code cell below, you will need to implement the following:\n- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.\n - Split the data into 80% training and 20% testing.\n - Set the random_state for train_test_split to a value of your choice. 
This ensures results are consistent.\n- Assign the train and testing splits to X_train, X_test, y_train, and y_test.", "# TODO: Import 'train_test_split'\nfrom sklearn.cross_validation import train_test_split\n# TODO: Shuffle and split the data into training and testing subsets\nX_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0)\n\n# Success\nprint \"Training and testing split was successful.\"", "Question 3 - Training and Testing\nWhat is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?\nHint: What could go wrong with not having a way to test your model?\nAnswer: \n- Learning algorithm used for prediction or inference of datasets. We do not need learning algorithm to predict known response lables, we do want learning algorithm to predict response label from unkown dataset. That is why it is benefitial to hold out some ratio of dataset as test dataset not known to learning algorithm. Learning algorithm is fitted against traning subset, which then can be used to predict response label from test dataset to measure performance of learning algorithm.\n- Splitting dataset into some ratio of training and testing subsets, we can provide only training subset data to learning algorithm and learn behavior of response label against features. We can then provide testing subset not known to learning algorithm and have learning algorithm predict label. Predicted label can be compared with actuals of testing subset to find test error. Test error is a better metric to measure the performance of a learning algorithm compared to training error.\n- Using training and testing subsets we can tune the learning algorithm to reduce bias and variance.\n - If we do not have a way to test with testing subsets, approximation of traning error is used as a performance metric for learning algorithm, in some cases learning algorithm could have high variance and might not be the right algorithm for the dataset.\n\nAnalyzing Model Performance\nIn this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.\nLearning Curves\nThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. \nRun the code cell below and use these graphs to answer the following question.", "# Produce learning curves for varying training set sizes and maximum depths\nvs.ModelLearning(features, prices)", "Question 4 - Learning the Data\nChoose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? 
Would having more training points benefit the model?\nHint: Are the learning curves converging to particular scores?\nAnswer: \n- Maximum depth for the model is max_depth = 1\n- Score of the training curve is decreased for more training points added\n- Score of the testing cureve is increased for more training points added\n- Both training and testing curve are platoed or have very minimal gain in score for more traning points added after around 300 training points, so more traning points wont benefit the model. Learning curves of both training and testing curves seem to converge around score 0.4\n - traning curve seems to be detoriating and indicates high bias\nComplexity Curves\nThe following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function. \nRun the code cell below and use this graph to answer the following two questions.", "vs.ModelComplexity(X_train, y_train)", "Question 5 - Bias-Variance Tradeoff\nWhen the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?\nHint: How do you know when a model is suffering from high bias or high variance?\nAnswer: \n- Yes when models is trained with a maximum depth of 1, model suffer from high bias and low variance\n- When model is trained with a maxmum depth of 10, model suffer from high variance and low bais\n- If training and validation score are close to each other it shows that there is low variance in the model. In the graph as both training and testing score are less, it could be that model is not using sufficient data so it could be biased or underfitting the data. When there is a huge difference in training score and validation score there is a high variance, this could be because, model has learnt very well, and fitted to training data, indicates that model is overfitting or has high variance. At about Maximum depth 4, model seem to be performing optimal for both traning and validations core having the right trade of bias and variance.\nQuestion 6 - Best-Guess Optimal Model\nWhich maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?\nAnswer: \n- At about Maximum depth 4, model seem to be performing optimal for both traning and validations core having the right trade of bias and variance. Afyer Maximum depth 4, validation score starts detoriting and doesnt show any improvement where as traning score is increasing is a sign of overfitting or high variance being introduced by model. \n\nEvaluating Model Performance\nIn this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.\nQuestion 7 - Grid Search\nWhat is the grid search technique and how it can be applied to optimize a learning algorithm?\nAnswer: \nGrid search technique is a way of performing hyperparameter optimization. 
This is simply an exhaustive seraching through a manually specified subset of hyperparameter of a learning algorithm.\nGrid serach will methodically build and evaluate a model for each combination of learning algorithm parameters specified in a grid. A Grid search algorithm is guided by performance metric like typically measured by cross-validation on the training set or evaluation on a held-out validation set, and best combination is retained. \nQuestion 8 - Cross-Validation\nWhat is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?\nHint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?\nAnswer: \n- In k-fold cross-validation training technique, the original dataset is randomly partitioned into k equal sized subsets. Of the k subsets, a single subset is retained as the validation data (or test data) for testing the model, and the remaining k − 1 subsets are used as training data. The cross-validation process is then repeated k times (the folds), with each of the k subsets used exactly once as the validation data. The k results from the folds can then be averaged to produce a single estimation. 10-fold cross-validation is commonly used, but in general k remains an unfixed parameter.\n- A grid search algorithm must be guided by a performance metric, typically measured by cross-validation. The advantage is that all observations are used for both training and validation, and each observation is used for validation exactly once. By doing this there will be low variance and no overfitting of the data by the optimized model.\nImplementation: Fitting a Model\nYour final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.\nIn addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful!. The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. 
While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.\nPlease note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.\nFor the fit_model function in the code cell below, you will need to implement the following:\n- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.\n - Assign this object to the 'regressor' variable.\n- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.\n- Use make_scorer from sklearn.metrics to create a scoring function object.\n - Pass the performance_metric function as a parameter to the object.\n - Assign this scoring function to the 'scoring_fnc' variable.\n- Use GridSearchCV from sklearn.grid_search to create a grid search object.\n - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object. \n - Assign the GridSearchCV object to the 'grid' variable.", "# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.metrics import make_scorer\nfrom sklearn.grid_search import GridSearchCV\n\ndef fit_model(X, y):\n \"\"\" Performs grid search over the 'max_depth' parameter for a \n decision tree regressor trained on the input data [X, y]. \"\"\"\n \n # Create cross-validation sets from the training data\n cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)\n\n # TODO: Create a decision tree regressor object\n regressor = DecisionTreeRegressor()\n\n # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10\n params = {'max_depth': range(1,11)}\n\n # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer' \n scoring_fnc = make_scorer(performance_metric)\n\n # TODO: Create the grid search object\n grid = GridSearchCV(regressor, params, scoring = scoring_fnc, cv = cv_sets)\n\n # Fit the grid search object to the data to compute the optimal model\n grid = grid.fit(X, y)\n\n # Return the optimal model after fitting the data\n return grid.best_estimator_", "Making Predictions\nOnce a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.\nQuestion 9 - Optimal Model\nWhat maximum depth does the optimal model have? How does this result compare to your guess in Question 6? \nRun the code block below to fit the decision tree regressor to the training data and produce an optimal model.", "# Fit the training data to the model using grid search\nreg = fit_model(X_train, y_train)\n\n# Produce the value for 'max_depth'\nprint \"Parameter 'max_depth' is {} for the optimal model.\".format(reg.get_params()['max_depth'])", "Answer: \n- 4, this is same as the result of my guess in Question 6\nQuestion 10 - Predicting Selling Prices\nImagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. 
You have collected the following information from three of your clients:\n| Feature | Client 1 | Client 2 | Client 3 |\n| :---: | :---: | :---: | :---: |\n| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |\n| Neighborhood poverty level (as %) | 17% | 32% | 3% |\n| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |\nWhat price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?\nHint: Use the statistics you calculated in the Data Exploration section to help justify your response. \nRun the code block below to have your optimized model make predictions for each client's home.", "# Produce a matrix for client data\nclient_data = [[5, 17, 15], # Client 1\n [4, 32, 22], # Client 2\n [8, 3, 12]] # Client 3\n\n# Show predictions\nfor i, price in enumerate(reg.predict(client_data)):\n print \"Predicted selling price for Client {}'s home: ${:,.2f}\".format(i+1, price)", "Answer: \nThe predicted selling prices are \\$391,183.33, \\$189,123.53 and \\$942,666.67 for Client 1's home, Client 2's home and Client 3's home respectively.\nFacts from the descriptive statistics:\n- Distribution: \n Statistics for Boston housing dataset:\n Minimum price: \\$105,000.00\n Maximum price: \\$1,024,800.00\n Mean price: \\$454,342.94\n Median price \\$438,900.00\n Standard deviation of prices: \\$165,171.13\n- Effects of features: \n Based on my intution, each of the below features explained thier effect on the value of 'MEDV' (label or target).\n - 'RM' : Typically increase in 'RM' would lead to an increase in 'MEDV'. The 'RM' is also a good indication of the size of the homes, bigger house, bigger value.\n - 'LSTAT' : Typically decrease in 'LSTAT' would lead to an increse in 'MEDV'. As 'LSTAT' is the indication of working class, \"lower\" class reduces the value of home.\n - 'PTRATIO' : Typically decrease in 'PTRATIO' (students) would lead to an increase in 'MEDV'. As 'PTRATIO' is an indication that schools are good and suffeciently staffed and funded.\nAre the estimates reasonable:\n- Client 1's home (\\$391,183.33):\n - Distribution: The estimate is inside the normal range of prices we have (closer than one standard deviation to mean and median).\n - Feature effects: The feature values all are in between those for the other clients. Thus, it seems reasonable that the estimated price is also in between.\n - Conclusion: reasonable estimate\n- Client 2's home (\\$189,123.53)\n - Distribution: The estimate is more than one standard deviation below the mean but less than two. Thus, it is not really a typical value for me still ok.\n - Feature effects: Of the 3 clients' houses, this one has lowest RM, highest LSTAT, and highest PTRATIO. All this should decrease the price, which is in line with it being the lowest of all prices.\n - Conclusion: it is reasonable that the price is low, but my confidence in the exact value of the estimate is lower than for client 1. Still, you would say you could use the model for client 2.\n- Client 3's home (\\$942,666.67):\n - Distribution: The estimate is more than 3 standard deviations above the mean (bigger than \\$906,930.78) and very close to the maximum of \\$1,024,800.00. Thus, this value is very atypical for this dataset and should be viewed with scepticism.\n - Feature effects: This is the house with highest RM, lowest LSTAT, and lowest PTRATIO of all 3 clients. 
Thus, it seems theoretically ok that is has the highest price too.\n - Conclusion: The price should indeed be high, but I would not trust an estimate that far off the mean. Hence, my confidence in this prediction is lowest. I would not recommend using the model for estimates in this range.\nSide note: arguing with summary statistics like mean and standard deviations relies on house prices being at least somewhat normally distributed. \nWe can alos see from below plots, the client features against datset.", "from matplotlib import pyplot as plt\n\nclients = np.transpose(client_data)\npred = reg.predict(client_data)\nfor i, feat in enumerate(['RM', 'LSTAT', 'PTRATIO']):\n plt.scatter(features[feat], prices, alpha=0.25, c=prices)\n plt.scatter(clients[i], pred, color='black', marker='x', linewidths=2)\n plt.xlabel(feat)\n plt.ylabel('MEDV')\n plt.show()", "Sensitivity\nAn optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.", "vs.PredictTrials(features, prices, fit_model, client_data)", "Question 11 - Applicability\nIn a few sentences, discuss whether the constructed model should or should not be used in a real-world setting.\nHint: Some questions to answering:\n- How relevant today is data that was collected from 1978?\n- Are the features present in the data sufficient to describe a home?\n- Is the model robust enough to make consistent predictions?\n- Would data collected in an urban city like Boston be applicable in a rural city?\nAnswer: \nI am a data sceptic and don't recommend using the model in production.\nReasons:\n- Age of dataset: The age of the dataset may render some features of the model useless for estimations today. For instance, the imporatance of the pupil/teacher ratio may decrease in time. The theory is: the pupil/teacher ratio in the neighborhood drives demand for houses in the area (and thereby house prices) if families rely on local schools for their kid's education. Let's say this has been so in 1978. However, if people nowadays are more flexible when choosing schools, this effect could vanish. 3 minutes of googling delivered this researcher (http://sites.duke.edu/urbaneconomics/?p=863) who claims Charter schools are getting more popular in the US. The effect supposedly is that pupils are not limited to local schools anymore. If this is true, our model may overemphasize the effect of PTRATIO today.\n- Number of features: we have only used 3 of 14 available features. It may be good to explore others as well. For instance, there is another feature (called DIS) in the dataset which gives a weighted distance to one of 5 employment centers in Boston (https://archive.ics.uci.edu/ml/datasets/Housing). Using simple intution we can sys this feature to have a large impact on price. AS reserched, it was found higher prices to be close to the city center where most people work. 
Typically houses closer to employment centers are more in demand.\n- Robustness: Looking at the sensitivity analysis above, we can see that the 10 trials delivered estimates ranging from \\$351,577.61 to \\$420,622.22, two values more than ~\\$70k apart. With an average price of little more than \\$450k and a such a large variance in estimates, the model is hardly usable. For people buying or selling a house, it will make a huge difference whether the price is \\$300k or \\$400k.\n- Generelizability to other cities/areas: House prices in Boston are likely not the same as in rural areas. For San Francisco Bay Area, we can tell for sure there is a large difference between urban and rural house prices. We should expect the same to be true for the Boston. Thus, the price level will be completely different, suggesting that urban vs. rural should be a feature for itself. Features may also have different effects in rural areas. For instance, the number of rooms probably correlates strongly with the size of a house. If the cost per square meter in urban areas is larger than in rural areas, the positive effect of number of rooms should also be larger in urban areas. Moreover, some features in our model may not even make sense in rural areas. For instance, PTRATIO may not be defined in a very rural area if people go to a school in a different town (no school -> no pupil / teacher ratio).\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fangohr/polygon-finite-difference-mesh-tools
notebooks/example.ipynb
bsd-2-clause
[ "import polygonmeshtools as pmt\n%matplotlib inline\n\nfor obj in dir(pmt):\n if hasattr(eval(\"pmt.\" + obj), '__call__'):\n print obj", "CartesianCoords and PolarCoords are classes that were designed to be used in-house for the conversion between Cartesian and Polar coordinates. You just need to initialise the object with some coordinates, and then it is easy to extract the relevant information.\n3D coordinates are possible, but the z-coordinate has a default value of 0.", "cc = pmt.CartesianCoords(5,5)\n\nprint(\"2D\\n\")\n\nprint(\"x-coordinate: {}\".format(cc.x))\nprint(\"y-coordinate: {}\".format(cc.y))\nprint(\"radial: {}\".format(cc.r))\nprint(\"azimuth: {}\".format(cc.a))\n\ncc3D = pmt.CartesianCoords(1,2,3)\nprint(\"\\n3D\\n\")\n\nprint(\"x-coordinate: {}\".format(cc3D.x))\nprint(\"y-coordinate: {}\".format(cc3D.y))\nprint(\"z-coordinate: {}\".format(cc3D.z))\nprint(\"radial: {}\".format(cc3D.r))\nprint(\"azimuth: {}\".format(cc3D.a))\nprint(\"height: {}\".format(cc3D.h))", "pmt.PolarCoords works in exactly the same way, but instead you initialise it with polar coordinates (radius, azimuth and height (optional), respectively) and the cartesian ones can be extracted as above.\nFunction 1: in_poly", "print(pmt.in_poly.__doc__)", "Takes three arguments by default:\n\nx, specifying the x-coordinate of the point you would like to test\ny, specifying the y-coordinate of the point you would like to test\nn, the number of sides of the polygon\n\nOptional arguments are:\n\nr, the radius of the circumscribed circle (equal to the distance from the circumcentre to one of the vertices). Default r=1\nrotation, the anti-clockwise rotation of the shape in radians. Default rotation=0\ntranslate, specifies the coordinates of the circumcentre, given as a tuple (x,y). Default translate=(0,0)\nplot, a boolean value to determine whether or not the plot is shown. Default plot=False\n\nExamples below:", "pmt.in_poly(x=5, y=30, n=3, r=40, plot=True)\n\npmt.in_poly(x=5, y=30, n=3, r=40) # No graph will be generated, more useful for use within other functions\n\npmt.in_poly(x=0, y=10, n=6, r=20, plot=True) # Dot changes colour to green when inside the polygon\n\nimport numpy as np\n\npmt.in_poly(x=-10, y=-25, n=6, r=20, rotation=np.pi/6, translate=(5,-20), plot=True) # Rotation and translation", "And of course, as n becomes large, the polygon tends to a circle:", "pmt.in_poly(x=3, y=5, n=100, r=10, plot=True)", "Function 2: plot_circular_fidi_mesh", "print(pmt.plot_circular_fidi_mesh.__doc__)", "Only has one default argument:\n\ndiameter, the diameter of the circle you would like to plot\n\nOptional arguments:\n\nx_spacing, the width of the mesh elements. Default x_spacing=2\n\ny_spacing, the height of the mesh elements. Default y_spacing=2 (only integers are currently supported for x- and y-spacing.)\n\n\ncentre_mesh, outlined in the documentation above. Default centre_mesh='auto'\n\nshow_axes, boolean, self-explanatory. Default show_axes=True\nshow_title, boolean, self-explanatory. Default show_title=True", "pmt.plot_circular_fidi_mesh(diameter=60)\n\npmt.plot_circular_fidi_mesh(diameter=60, x_spacing=2, y_spacing=2, centre_mesh=True)\n\n# Note the effect of centre_mesh=True. 
In the previous plot, the element boundaries are aligned with 0 on the x- and y-axes.\n# In this case, centring the mesh has the effect of producing a mesh that is slightly wider than desired, shown below.\n\npmt.plot_circular_fidi_mesh(diameter=30, x_spacing=1, y_spacing=2, show_axes=False, show_title=False)\n\n# Flexible element sizes. Toggling axes and title can make for prettier (albeit less informative) pictures.", "Function 3: plot_poly_fidi_mesh", "print(pmt.plot_poly_fidi_mesh.__doc__)", "Requires two arguments:\n\ndiameter, the diameter of the circumscribed circle\nn, the number of sides the polygon should have\n\nOptional arguments:\n\nx_spacing\ny_spacing\ncentre_mesh\nshow_axes\nshow_title\n\n(All of the above have the same function as in plot_circular_fidi_mesh, and below, like in_poly)\n\nrotation\ntranslate", "pmt.plot_poly_fidi_mesh(diameter=50, n=5, x_spacing=1, y_spacing=1, rotation=np.pi/10)", "Function 4: find_circumradius", "print(pmt.find_circumradius.__doc__)", "If you need to specify the side length, or the distance from the circumcentre to the middle of one of the faces, this function will convert that value to the circumradius (not diameter!) that would give the correct side length or apothem.", "pmt.find_circumradius(n=3, side=10)", "Using this in combination with plot_poly_fidi_mesh:", "d1 = 2*pmt.find_circumradius(n=3, side=40)\n\npmt.plot_poly_fidi_mesh(diameter=d1, n=3, x_spacing=1, y_spacing=1)\n\n# It can be seen on the y-axis that the side has a length of 40, as desired.\n\nd2 = 2*pmt.find_circumradius(n=5, apothem=20)\n\npmt.plot_poly_fidi_mesh(diameter=d2, n=5, x_spacing=1, y_spacing=1)\n\n# The circumcentre lies at (0,0), and the leftmost side is in line with x=-20" ]
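For reference, the conversions behind find_circumradius follow from elementary regular-polygon geometry: side $= 2r\sin(\pi/n)$ and apothem $= r\cos(\pi/n)$. The sketch below re-derives the same numbers independently (it assumes, rather than inspects, that the package uses these standard formulas):

```python
# Stand-alone re-derivation of the circumradius conversions, for comparison
# with pmt.find_circumradius (assumed to use the same regular-polygon geometry).
from math import sin, cos, pi

def circumradius_from_side(n, side):
    """Circumradius of a regular n-gon with the given side length."""
    return side / (2 * sin(pi / n))

def circumradius_from_apothem(n, apothem):
    """Circumradius of a regular n-gon with the given apothem."""
    return apothem / cos(pi / n)

print(circumradius_from_side(3, 40))     # ~23.09, cf. pmt.find_circumradius(n=3, side=40)
print(circumradius_from_apothem(5, 20))  # ~24.72, cf. pmt.find_circumradius(n=5, apothem=20)
```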
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CalPolyPat/Python-Workshop
Python Workshop/NumPy.ipynb
mit
[ "NumPy\nNumPy is a library with various math and numerical functions. A library is a bunch of functions that someone else wrote and you would like to use. To use these functions, you must first import them. It is typically good practice to place all imports in the first cell.", "import numpy as np", "The line above tells the computer to make the functions available and nickname the master object np. To call a function from the master object, we use the syntax np.function(). To find out what functions numpy has for you to use, go to their documentation at https://docs.scipy.org/doc/numpy-dev/user/quickstart.html. Learning to use documentation and proper googling are very important skills for any programmer. We will cover some effective googling techniques and how to find and use documentation later. \nThe main thing that NumPy gives us is arrays. They are more flexible than lists and have better functionality as we will see. To create a new array we have a couple of methods:\n#First make a list and then copy it into a new array\nlist = [1,2,3,4]\narray = np.array(list) # This tells the computer to make a copy of list and turn it into an array\n\n#Use one of NumPy's special array types\n\narrayofZeros = np.zeros((x-dim, y-dim, etc...)) #creates an x by y by etc. array of all zeros\n\narrayofOnes = np.ones((x-dim, y-dim, etc...)) #creates an x by y by etc. array of all ones\n\nemptyArray = np.empty((x-dim, y-dim, etc...)) #creates an x by y by etc. array of whatever happened to be in memory at the time of instantiation.\n\nrangeArray = np.arange(start, stop, step) # Works just like range(), starting at **start** and ending at **stop**, returning values in steps of **step**\n\nlinearspaceArray = np.linspace(start, stop, # of vals) # Creates a linearly spaced array between start and stop with a # of vals in the array.\n\ndiagonalArray = np.diagflat(#input list, set, array, etc. goes here) #creates a 2-d matrix with the input list, set, array, etc. as the main diagonal.\n\nNumPy arrays act a little differently than lists and other containers. The first major difference between arrays and other containers is that arrays may only contain one type of thing. That is to say, we may no longer be sloppy about placing an int and a string in the same array. Further, NumPy arrays treat operators differently than other containers. Operations are carried out only between arrays of the same size and are computed elementwise. For example, the sum of two 3-by-1 arrays (which could be vectors in $\mathbb{R}^3$) would be the sum of the 1st, 2nd, and 3rd components added individually.
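As a minimal illustration of that elementwise rule, here are two 3-element arrays (vectors in $\mathbb{R}^3$):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([10, 20, 30])

print(u + v)  # [11 22 33] -- each component is summed individually
print(u * v)  # [10 40 90] -- elementwise product, *not* a dot product
```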
Let's solidify these ideas with some examples.", "print(\"Just an Array: \\n\",np.array([0,1,2,34,5]))\n\nprint(\"An Array of Zeros: \\n\",np.zeros((2,3)))\n\nprint(\"An Array of Ones: \\n\",np.ones((2,)))\n\nprint(\"A Clever Way to Build an Array: \\n\",np.pi*np.ones((4,3)))\n\nprint(\"A Bunch of Random Junk: \\n\",np.empty((2,2)))\n\nprint(\"A Range of Values: \\n\",np.arange(0,100, 3))\n\nprint(\"A Linearly-Spaced Range of Values: \\n\",np.linspace(0,100, 33))\n\nprint(\"A Diagonal Array: \\n\",np.diagflat(np.linspace(1,10,10)))\n\n#Let's check out some NumPy operations\na = np.array([[1,2],[3,4]])\nprint(\"First Array: \\n\",a)\nb = np.array([[4,3],[2,1]])\nprint(\"Second Array: \\n\",b)\nprint(\"Sum:\\n\",a+b)\nprint(\"Product:\\n\",a*b)\nprint(\"Power:\\n\",a**b)", "Now that we have seen how NumPy sees operations, let's practice a bit.\nExercises\n\n\nCreate at least 2 arrays with each different method.\n\n\nIn the next cell are two arrays of measurements, you happen to know that their sum over their product squared is a quantity of interest, calculate this quantity for every pair of elements in the arrays.\n\n\nnp.random.normal(mean, std, (x-dim, y-dim, etc...)) creates an x by y by etc... array of normally distributed random numbers with mean mean, and standard deviation std. Using this function, create an appropriatly sized array of random \"noise\" to multiply with the data from 2. Compute the interesting quantity with the \"noisy\" data.\n\n\nnp.mean(#some array here) computes the mean value of all the elements of the array provided. Compute the average difference between your \"noisy\" interesting result from 3. and your original interesting result from 2. You have just modeled a system with simulated noise!", "p = np.array([1,2,3,5,1,2,3,1,2,2,6,3,1,1,5,1,1,3,2,1])\nl = 100*np.array([-0.06878584, -0.13865453, -1.61421586, 1.02892411, 0.31529163, -0.06186747, -0.15273951, 1.67466332, -1.88215846, 0.67427142, 1.2747444, -0.0391945, -0.81211282, -0.38412292, -1.01116052, 0.25611357, 0.3126883, 0.8011353, 0.64691918, 0.34564225])", "Some Other Interesting NumPy Functions\n\nnp.dot(array1, array2) #Computes the dot product of array1 and array2.\nnp.cross(array1, array2) #Computes the cross product of array1 and array2.\nnp.eye(size) #Creates a size by size identity matrix/array\n\nNumPy Array Slicing\nNumPy slicing is only slightly different than list slicing. With NumPy arrays, to access a single element you perform the typical\narray = np.array([1,2,3,4])\n\narray[index]\narray[0] #Returns 1\n\nWhen you want to take a range of elements, you use the following syntax\narray[start:stop+1]\narray[1:3] #Returns [2,3]\n\nOr if you would like to move in steps.\narray[start:stop+1:step]\narray[0:4:2] #Returns [1,3]", "array = np.array([1,2,3,4])\nprint(array[0])\nprint(array[1:4])\nprint(array[0:4:2])", "Masking\nMasking is a special type of slicing which uses boolean values to decide whether to show or hide the values in another array. A mask must be a boolean array of the same size as the original array. 
To apply a mask to an array, you use the following syntax:\nmask = np.array([True, False])\narray = np.array([25, 30])\narray[mask] #Returns [25]\n\nLet's check this masking action out", "mask = np.array([[1,1,1,1,0,0,1],[1,0,0,0,0,1,0]], dtype=bool)\n#^^^This converts the ones and zeros into trues and falses because I'm lazy^^^\narray = np.array([[5,7,3,4,5,7,1],np.random.randn(7)])\nprint(array)\nprint(mask)\n\nprint(array[mask])", "Let's say that we have measured some quantity with a computer and generated a really long numpy array, like, really long. It just so happens that we are interested in how many of these numbers are greater than zero. We could try to make a mask with the methods used above, but the people who made masks gave us a tool to do it without ever making a mask. (That's what it does behind the scenes, but we'll just ignore that.) I have the data in the next cell, let's check it out.", "data = np.random.normal(0,3,10000) #Wow, I made 10,000 measurements, wouldn't Mastoridis be proud.\n\ndata[data>0].size #This counts how many elements of data are greater than 0", "This is a powerful tool to keep in the back of your head; it can often greatly simplify problems.\nUniversal Functions\nUniversal functions are NumPy functions that help in applying functions to every element in an array. sin(), cos(), and exp() are all universal functions, and when applied to an array, they take the sin(), cos(), or exp() of each element within the array.", "import matplotlib.pyplot as plt\n%matplotlib inline\nx = np.linspace(0,2*np.pi,1000)\ny = np.sin(x)\nplt.subplot(211)\nplt.plot(x)\nplt.subplot(212)\nplt.plot(x,y)", "A list of all the universal functions is included at the end of this notebook.\nExercises\n\n\nCreate a couple of arrays of various types and sizes and play with them until you feel comfortable moving on.\n\n\nYou know that a certain quantity can be calculated using the following formula:\nf(x)=x^e^sin(x^2)-sin(x*ln(x))\nGiven that you measured x in the cell below, calculate f(x)\n\n\nUsing the same x as above, write a function to transform x.
Then create a mask that will keep any value of the resulting array that is greater than 2*$\pi^2$", "x = np.random.rand(1000)*np.linspace(0,10,1000)", "Universal Functions\n| Function | Description |\n|:--------:|:-----------:|\n|add(a,b), + |Addition|\n|subtract(a,b), - | Subtraction|\n|multiply(a,b), * | Multiplication|\n|divide(a,b), / | Division|\n|power(a,b), **|Power|\n|mod(a,b), % |Modulo/Remainder|\n|abs(a) | Absolute Value|\n|sqrt(a) | Square Root|\n|conj(a) | Complex Conjugate|\n|exp(a) | Exponential|\n|log(a) | Natural Log|\n|log2(a) |Log base 2|\n|log10(a) |Log base 10|\n|sin(a) |Sine|\n|cos(a) |Cosine|\n|tan(a) | Tangent|\n|minimum(a,b) | Minimum|\n|maximum(a,b) | Maximum|\n|isreal(a) | Tests for zero complex component|\n|iscomplex(a) | Tests for zero real component|\n|isfinite(a) | Tests for finiteness|\n|isinf(a) | Tests for infiniteness|\n|isnan(a) | Tests for Not a Number|\n|floor(a) | Rounds down to next integer value|\n|ceil(a) | Rounds up to next integer value|\n|trunc(a) | Truncate all noninteger bits|\nOther Valuable NumPy Functions\n|Function | Description|\n|:-------:|:----------:|\n|sum(a) | Sums all elements of a|\n|prod(a) |Takes the product of all elements of a|\n|min(a) |Finds the minimum value in a|\n|max(a) |Finds the maximum value in a|\n|argmin(a) |Returns the index or location of the minimum value in a|\n|argmax(a) |Returns the index or location of the maximum value in a|\n|dot(a,b) |Takes the dot product of a and b|\n|cross(a,b) |Takes the cross product of a and b|\n|einsum(subs, arrs) |Takes the Einstein sum over subscripts and a list of arrays|\n|mean(a) | Computes the average value of all the elements in a|\n|median(a) | Finds the median value in a|\n|average(a, weights) |Computes the weighted average of a|\n|std(a) |Computes the standard deviation of a|\n|var(a) |Computes the variance of a|\n|unique(a) |Returns the unique elements of a in a sorted manner|\n|asarray(a, dtype) | Converts the input to an array with elements of type dtype|\n|atleast_1d(a) |Views the input as an array with at least one dimension|\n|atleast_2d(a) |\"\"|\n|atleast_3d(a) |\"\"|\n|append(a,b) |Appends b to the end of a|\n|save(file, a) |Saves an array to a file|\n|load(file) |Loads an array saved as a file|\nChallenge Exercise\nA prime number sieve is an algorithm that will find prime numbers. Your challenge is to recreate the Sieve of Eratosthenes. Use all you have learned about NumPy and loops to create a function that takes a max value, as in the sieve, and returns a list of primes. Good Luck!" ]
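One possible solution sketch for the challenge above (try it yourself before reading on; this version leans on boolean masking, just like the masking section earlier in the notebook):

```python
import numpy as np

def sieve_of_eratosthenes(max_val):
    """Return a list of all primes up to and including max_val."""
    if max_val < 2:
        return []
    is_prime = np.ones(max_val + 1, dtype=bool)
    is_prime[:2] = False                    # 0 and 1 are not prime
    for p in range(2, int(max_val ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = False      # cross out multiples of p
    return list(np.nonzero(is_prime)[0])

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```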
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.21/_downloads/80342e62fc31882c2b53e38ec1ed14a6/plot_background_filtering.ipynb
bsd-3-clause
[ "%matplotlib inline", "Background information on filtering\nHere we give some background information on filtering in general, and\nhow it is done in MNE-Python in particular.\nRecommended reading for practical applications of digital\nfilter design can be found in Parks & Burrus (1987) [1] and\nIfeachor & Jervis (2002) [2], and for filtering in an\nM/EEG context we recommend reading Widmann et al. (2015) [7]_.\nTo see how to use the default filters in MNE-Python on actual data, see\nthe tut-filter-resample tutorial.\nProblem statement\nPractical issues with filtering electrophysiological data are covered\nin Widmann et al. (2012) [6]_, where they conclude with this statement:\nFiltering can result in considerable distortions of the time course\n(and amplitude) of a signal as demonstrated by VanRullen (2011) [[3]_].\nThus, filtering should not be used lightly. However, if effects of\nfiltering are cautiously considered and filter artifacts are minimized,\na valid interpretation of the temporal dynamics of filtered\nelectrophysiological data is possible and signals missed otherwise\ncan be detected with filtering.\n\nIn other words, filtering can increase signal-to-noise ratio (SNR), but if it\nis not used carefully, it can distort data. Here we hope to cover some\nfiltering basics so users can better understand filtering trade-offs and why\nMNE-Python has chosen particular defaults.\nFiltering basics\nLet's get some of the basic math down. In the frequency domain, digital\nfilters have a transfer function that is given by:\n\\begin{align}H(z) &= \\frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \\ldots + b_M z^{-M}}\n {1 + a_1 z^{-1} + a_2 z^{-2} + \\ldots + a_N z^{-N}} \\\n &= \\frac{\\sum_{k=0}^M b_k z^{-k}}{1 + \\sum_{k=1}^N a_k z^{-k}}\\end{align}\nIn the time domain, the numerator coefficients $b_k$ and denominator\ncoefficients $a_k$ can be used to obtain our output data\n$y(n)$ in terms of our input data $x(n)$ as:\n\\begin{align}:label: summations\ny(n) &= b_0 x(n) + b_1 x(n-1) + \\ldots + b_M x(n-M)\n - a_1 y(n-1) - a_2 y(n - 2) - \\ldots - a_N y(n - N)\\\\\n &= \\sum_{k=0}^M b_k x(n-k) - \\sum_{k=1}^N a_k y(n-k)\\end{align}\n\nIn other words, the output at time $n$ is determined by a sum over\n1. the numerator coefficients $b_k$, which get multiplied by\n the previous input values $x(n-k)$, and\n2. the denominator coefficients $a_k$, which get multiplied by\n the previous output values $y(n-k)$.\n\nNote that these summations correspond to (1) a weighted moving average and\n(2) an autoregression (a small numerical check of this recursion is sketched\nbelow).\nFilters are broken into two classes: FIR_ (finite impulse response) and\nIIR_ (infinite impulse response) based on these coefficients.\nFIR filters use a finite number of numerator\ncoefficients $b_k$ ($a_k=0$ for all $k \\ge 1$), and thus each output\nvalue of $y(n)$ depends only on the current and $M$ previous input values.\nIIR filters depend on the previous input and output values, and thus can have\neffectively infinite impulse responses.\nAs outlined in Parks & Burrus (1987) [1]_, FIR and IIR have different\ntrade-offs:\n* A causal FIR filter can be linear-phase -- i.e., the same time delay\n across all frequencies -- whereas a causal IIR filter cannot. The phase\n and group delay characteristics are also usually better for FIR filters.\n* IIR filters can generally have a steeper cutoff than an FIR filter of\n equivalent order.\n* IIR filters are generally less numerically stable, in part due to\n accumulating error (due to their recursive calculations).\n\nIn MNE-Python we default to using FIR filtering.
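As a quick numerical sanity check of the difference equation above, here is a minimal sketch (assuming SciPy is available; scipy.signal.lfilter evaluates exactly this recursion when $a_0 = 1$) comparing lfilter against a hand-written loop:

```python
import numpy as np
from scipy import signal

rng = np.random.RandomState(0)
x = rng.randn(5)

b = np.array([0.5, 0.5])   # numerator (feed-forward / FIR part)
a = np.array([1.0, -0.3])  # denominator (feedback / IIR part), a[0] == 1

y_scipy = signal.lfilter(b, a, x)

# Manual evaluation of the two summations for comparison
y_manual = np.zeros_like(x)
for n in range(len(x)):
    acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
    acc -= sum(a[k] * y_manual[n - k] for k in range(1, len(a)) if n - k >= 0)
    y_manual[n] = acc

print(np.allclose(y_scipy, y_manual))  # True
```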
As noted in Widmann et al.\n(2015) [7]_:\nDespite IIR filters often being considered as computationally more\nefficient, they are recommended only when high throughput and sharp\ncutoffs are required (Ifeachor and Jervis, 2002 [[2]_], p. 321)...\nFIR filters are easier to control, are always stable, have a\nwell-defined passband, can be corrected to zero-phase without\nadditional computations, and can be converted to minimum-phase.\nWe therefore recommend FIR filters for most purposes in\nelectrophysiological data analysis.\n\nWhen designing a filter (FIR or IIR), there are always trade-offs that\nneed to be considered, including but not limited to:\n1. Ripple in the pass-band\n2. Attenuation of the stop-band\n3. Steepness of roll-off\n4. Filter order (i.e., length for FIR filters)\n5. Time-domain ringing\n\nIn general, the sharper something is in frequency, the broader it is in time,\nand vice-versa. This is a fundamental time-frequency trade-off, and it will\nshow up below.\nFIR Filters\nFirst, we will focus on FIR filters, which are the default filters used by\nMNE-Python.\nDesigning FIR filters\nHere we'll try to design a low-pass filter and look at trade-offs in terms\nof time- and frequency-domain filter characteristics. Later, in\ntut_effect_on_signals, we'll look at how such filters can affect\nsignals when they are used.\nFirst let's import some useful tools for filtering, and set some default\nvalues for our data that are reasonable for M/EEG.", "import numpy as np\nfrom numpy.fft import fft, fftfreq\nfrom scipy import signal\nimport matplotlib.pyplot as plt\n\nfrom mne.time_frequency.tfr import morlet\nfrom mne.viz import plot_filter, plot_ideal_filter\n\nimport mne\n\nsfreq = 1000.\nf_p = 40.\nflim = (1., sfreq / 2.) # limits for plotting", "Take for example an ideal low-pass filter, which would give a magnitude\nresponse of 1 in the pass-band (up to frequency $f_p$) and a magnitude\nresponse of 0 in the stop-band (down to frequency $f_s$) such that\n$f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity):", "nyq = sfreq / 2. # the Nyquist frequency is half our sample rate\nfreq = [0, f_p, f_p, nyq]\ngain = [1, 1, 0, 0]\n\nthird_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.]\nax = plt.subplots(1, figsize=third_height)[1]\nplot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim)", "This filter hypothetically achieves zero ripple in the frequency domain,\nperfect attenuation, and perfect steepness. However, due to the discontinuity\nin the frequency response, the filter would require infinite ringing in the\ntime domain (i.e., infinite order) to be realized. Another way to think of\nthis is that a rectangular window in the frequency domain is actually a sinc_\nfunction in the time domain, which requires an infinite number of samples\n(and thus infinite time) to represent. So although this filter has ideal\nfrequency suppression, it has poor time-domain characteristics.\nLet's try to naïvely make a brick-wall filter of length 0.1 s, and look\nat the filter itself in the time domain and the frequency domain:", "n = int(round(0.1 * sfreq))\nn -= n % 2 - 1 # make it odd\nt = np.arange(-(n // 2), n // 2 + 1) / sfreq # center our sinc\nh = np.sinc(2 * f_p * t) / (4 * np.pi)\nplot_filter(h, sfreq, freq, gain, 'Sinc (0.1 s)', flim=flim, compensate=True)", "This is not so good! Making the filter 10 times longer (1 s) gets us a\nslightly better stop-band suppression, but still has a lot of ringing in\nthe time domain. 
Note the x-axis is an order of magnitude longer here,\nand the filter has a correspondingly much longer group delay (again equal\nto half the filter length, or 0.5 seconds):", "n = int(round(1. * sfreq))\nn -= n % 2 - 1 # make it odd\nt = np.arange(-(n // 2), n // 2 + 1) / sfreq\nh = np.sinc(2 * f_p * t) / (4 * np.pi)\nplot_filter(h, sfreq, freq, gain, 'Sinc (1.0 s)', flim=flim, compensate=True)", "Let's make the stop-band tighter still with a longer filter (10 s),\nwith a resulting larger x-axis:", "n = int(round(10. * sfreq))\nn -= n % 2 - 1 # make it odd\nt = np.arange(-(n // 2), n // 2 + 1) / sfreq\nh = np.sinc(2 * f_p * t) / (4 * np.pi)\nplot_filter(h, sfreq, freq, gain, 'Sinc (10.0 s)', flim=flim, compensate=True)", "Now we have very sharp frequency suppression, but our filter rings for the\nentire 10 seconds. So this naïve method is probably not a good way to build\nour low-pass filter.\nFortunately, there are multiple established methods to design FIR filters\nbased on desired response characteristics. These include:\n1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)\n2. Windowed FIR design (:func:`scipy.signal.firwin2`,\n :func:`scipy.signal.firwin`, and `MATLAB fir2`_)\n3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)\n4. Frequency-domain design (construct filter in Fourier\n domain and use an :func:`IFFT &lt;numpy.fft.ifft&gt;` to invert it)\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>Remez and least squares designs have advantages when there are\n \"do not care\" regions in our frequency response. However, we want\n well controlled responses in all frequency regions.\n Frequency-domain construction is good when an arbitrary response\n is desired, but generally less clean (due to sampling issues) than\n a windowed approach for more straightforward filter applications.\n Since our filters (low-pass, high-pass, band-pass, band-stop)\n are fairly simple and we require precise control of all frequency\n regions, we will primarily use and explore windowed FIR design.</p></div>\n\nIf we relax our frequency-domain filter requirements a little bit, we can\nuse these functions to construct a lowpass filter that instead has a\ntransition band, or a region between the pass frequency $f_p$\nand stop frequency $f_s$, e.g.:", "trans_bandwidth = 10 # 10 Hz transition band\nf_s = f_p + trans_bandwidth # = 50 Hz\n\nfreq = [0., f_p, f_s, nyq]\ngain = [1., 1., 0., 0.]\nax = plt.subplots(1, figsize=third_height)[1]\ntitle = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)\nplot_ideal_filter(freq, gain, ax, title=title, flim=flim)", "Accepting a shallower roll-off of the filter in the frequency domain makes\nour time-domain response potentially much better. We end up with a more\ngradual slope through the transition region, but a much cleaner time\ndomain signal. 
Here again for the 1 s filter:", "h = signal.firwin2(n, freq, gain, nyq=nyq)\nplot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (1.0 s)',\n flim=flim, compensate=True)", "Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually\nuse a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable\nstop-band attenuation:", "n = int(round(sfreq * 0.5)) + 1\nh = signal.firwin2(n, freq, gain, nyq=nyq)\nplot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.5 s)',\n flim=flim, compensate=True)", "But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s),\nour effective stop frequency gets pushed out past 60 Hz:", "n = int(round(sfreq * 0.2)) + 1\nh = signal.firwin2(n, freq, gain, nyq=nyq)\nplot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.2 s)',\n flim=flim, compensate=True)", "If we want a filter that is only 0.1 seconds long, we should probably use\nsomething more like a 25 Hz transition band (0.2 s = 5 cycles @ 25 Hz):", "trans_bandwidth = 25\nf_s = f_p + trans_bandwidth\nfreq = [0, f_p, f_s, nyq]\nh = signal.firwin2(n, freq, gain, nyq=nyq)\nplot_filter(h, sfreq, freq, gain, 'Windowed 50 Hz transition (0.2 s)',\n flim=flim, compensate=True)", "So far, we have only discussed non-causal filtering, which means that each\nsample at each time point $t$ is filtered using samples that come\nafter ($t + \\Delta t$) and before ($t - \\Delta t$) the current\ntime point $t$.\nIn this sense, each sample is influenced by samples that come both before\nand after it. This is useful in many cases, especially because it does not\ndelay the timing of events.\nHowever, sometimes it can be beneficial to use causal filtering,\nwhereby each sample $t$ is filtered only using time points that came\nafter it.\nNote that the delay is variable (whereas for linear/zero-phase filters it\nis constant) but small in the pass-band. Unlike zero-phase filters, which\nrequire time-shifting backward the output of a linear-phase filtering stage\n(and thus becoming non-causal), minimum-phase filters do not require any\ncompensation to achieve small delays in the pass-band. Note that as an\nartifact of the minimum phase filter construction step, the filter does\nnot end up being as steep as the linear/zero-phase version.\nWe can construct a minimum-phase filter from our existing linear-phase\nfilter with the :func:scipy.signal.minimum_phase function, and note\nthat the falloff is not as steep:", "h_min = signal.minimum_phase(h)\nplot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim)", "Applying FIR filters\nNow lets look at some practical effects of these filters by applying\nthem to some data.\nLet's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)\nplus noise (random and line). Note that the original clean signal contains\nfrequency content in both the pass band and transition bands of our\nlow-pass filter.", "dur = 10.\ncenter = 2.\nmorlet_freq = f_p\ntlim = [center - 0.2, center + 0.2]\ntticks = [tlim[0], center, tlim[1]]\nflim = [20, 70]\n\nx = np.zeros(int(sfreq * dur) + 1)\nblip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.\nn_onset = int(center * sfreq) - len(blip) // 2\nx[n_onset:n_onset + len(blip)] += blip\nx_orig = x.copy()\n\nrng = np.random.RandomState(0)\nx += rng.randn(len(x)) / 1000.\nx += np.sin(2. * np.pi * 60. 
* np.arange(len(x)) / sfreq) / 2000.", "Filter it with a shallow cutoff, linear-phase FIR (which allows us to\ncompensate for the constant filter delay):", "transition_band = 0.25 * f_p\nf_s = f_p + transition_band\nfreq = [0., f_p, f_s, sfreq / 2.]\ngain = [1., 1., 0., 0.]\n# This would be equivalent:\nh = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,\n fir_design='firwin', verbose=True)\nx_v16 = np.convolve(h, x)\n# this is the linear->zero phase, causal-to-non-causal conversion / shift\nx_v16 = x_v16[len(h) // 2:]\n\nplot_filter(h, sfreq, freq, gain, 'MNE-Python 0.16 default', flim=flim,\n compensate=True)", "Filter it with a different design method fir_design=\"firwin2\", and also\ncompensate for the constant filter delay. This method does not produce\nquite as sharp a transition compared to fir_design=\"firwin\", despite\nbeing twice as long:", "transition_band = 0.25 * f_p\nf_s = f_p + transition_band\nfreq = [0., f_p, f_s, sfreq / 2.]\ngain = [1., 1., 0., 0.]\n# This would be equivalent:\n# filter_dur = 6.6 / transition_band # sec\n# n = int(sfreq * filter_dur)\n# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)\nh = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,\n fir_design='firwin2', verbose=True)\nx_v14 = np.convolve(h, x)[len(h) // 2:]\n\nplot_filter(h, sfreq, freq, gain, 'MNE-Python 0.14 default', flim=flim,\n compensate=True)", "Let's also filter with the MNE-Python 0.13 default, which is a\nlong-duration, steep cutoff FIR that gets applied twice:", "transition_band = 0.5 # Hz\nf_s = f_p + transition_band\nfilter_dur = 10. # sec\nfreq = [0., f_p, f_s, sfreq / 2.]\ngain = [1., 1., 0., 0.]\n# This would be equivalent\n# n = int(sfreq * filter_dur)\n# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)\nh = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,\n h_trans_bandwidth=transition_band,\n filter_length='%ss' % filter_dur,\n fir_design='firwin2', verbose=True)\nx_v13 = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]\n# the effective h is one that is applied to the time-reversed version of itself\nh_eff = np.convolve(h, h[::-1])\nplot_filter(h_eff, sfreq, freq, gain, 'MNE-Python 0.13 default', flim=flim,\n compensate=True)", "Let's also filter it with the MNE-C default, which is a long-duration\nsteep-slope FIR filter designed using frequency-domain techniques:", "h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)\nx_mne_c = np.convolve(h, x)[len(h) // 2:]\n\ntransition_band = 5 # Hz (default in MNE-C)\nf_s = f_p + transition_band\nfreq = [0., f_p, f_s, sfreq / 2.]\ngain = [1., 1., 0., 0.]\nplot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim, compensate=True)", "And now an example of a minimum-phase filter:", "h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,\n phase='minimum', fir_design='firwin',\n verbose=True)\nx_min = np.convolve(h, x)\ntransition_band = 0.25 * f_p\nf_s = f_p + transition_band\nfilter_dur = 6.6 / transition_band # sec\nn = int(sfreq * filter_dur)\nfreq = [0., f_p, f_s, sfreq / 2.]\ngain = [1., 1., 0., 0.]\nplot_filter(h, sfreq, freq, gain, 'Minimum-phase filter', flim=flim)", "Both the MNE-Python 0.13 and MNE-C filters have excellent frequency\nattenuation, but it comes at a cost of potential\nringing (long-lasting ripples) in the time domain. Ringing can occur with\nsteep filters, especially in signals with frequency content around the\ntransition band. 
Our Morlet wavelet signal has power in our transition band,\nand the time-domain ringing is thus more pronounced for the steep-slope,\nlong-duration filter than the shorter, shallower-slope filter:", "axes = plt.subplots(1, 2)[1]\n\n\ndef plot_signal(x, offset):\n \"\"\"Plot a signal.\"\"\"\n t = np.arange(len(x)) / sfreq\n axes[0].plot(t, x + offset)\n axes[0].set(xlabel='Time (s)', xlim=t[[0, -1]])\n X = fft(x)\n freqs = fftfreq(len(x), 1. / sfreq)\n mask = freqs >= 0\n X = X[mask]\n freqs = freqs[mask]\n axes[1].plot(freqs, 20 * np.log10(np.maximum(np.abs(X), 1e-16)))\n axes[1].set(xlim=flim)\n\n\nyscale = 30\nyticklabels = ['Original', 'Noisy', 'FIR-firwin (0.16)', 'FIR-firwin2 (0.14)',\n 'FIR-steep (0.13)', 'FIR-steep (MNE-C)', 'Minimum-phase']\nyticks = -np.arange(len(yticklabels)) / yscale\nplot_signal(x_orig, offset=yticks[0])\nplot_signal(x, offset=yticks[1])\nplot_signal(x_v16, offset=yticks[2])\nplot_signal(x_v14, offset=yticks[3])\nplot_signal(x_v13, offset=yticks[4])\nplot_signal(x_mne_c, offset=yticks[5])\nplot_signal(x_min, offset=yticks[6])\naxes[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,\n ylim=[-len(yticks) / yscale, 1. / yscale],\n yticks=yticks, yticklabels=yticklabels)\nfor text in axes[0].get_yticklabels():\n text.set(rotation=45, size=8)\naxes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',\n ylabel='Magnitude (dB)')\nmne.viz.tight_layout()\nplt.show()", "IIR filters\nMNE-Python also offers IIR filtering functionality that is based on the\nmethods from :mod:scipy.signal. Specifically, we use the general-purpose\nfunctions :func:scipy.signal.iirfilter and :func:scipy.signal.iirdesign,\nwhich provide unified interfaces to IIR filter design.\nDesigning IIR filters\nLet's continue with our design of a 40 Hz low-pass filter and look at\nsome trade-offs of different IIR filters.\nOften the default IIR filter is a Butterworth filter_, which is designed\nto have a maximally flat pass-band. Let's look at a few filter orders,\ni.e., a few different number of coefficients used and therefore steepness\nof the filter:\n<div class=\"alert alert-info\"><h4>Note</h4><p>Notice that the group delay (which is related to the phase) of\n the IIR filters below are not constant. In the FIR case, we can\n design so-called linear-phase filters that have a constant group\n delay, and thus compensate for the delay (making the filter\n non-causal) if necessary. This cannot be done with IIR filters, as\n they have a non-linear phase (non-constant group delay). As the\n filter order increases, the phase distortion near and in the\n transition band worsens. However, if non-causal (forward-backward)\n filtering can be used, e.g. with :func:`scipy.signal.filtfilt`,\n these phase issues can theoretically be mitigated.</p></div>", "sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')\nplot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim,\n compensate=True)\nx_shallow = signal.sosfiltfilt(sos, x)\ndel sos", "The falloff of this filter is not very steep.\n<div class=\"alert alert-info\"><h4>Note</h4><p>Here we have made use of second-order sections (SOS)\n by using :func:`scipy.signal.sosfilt` and, under the\n hood, :func:`scipy.signal.zpk2sos` when passing the\n ``output='sos'`` keyword argument to\n :func:`scipy.signal.iirfilter`. 
The filter definitions\n given `above <tut_filtering_basics>` use the polynomial\n numerator/denominator (sometimes called \"tf\") form ``(b, a)``,\n which are theoretically equivalent to the SOS form used here.\n In practice, however, the SOS form can give much better results\n due to issues with numerical precision (see\n :func:`scipy.signal.sosfilt` for an example), so SOS should be\n used whenever possible.</p></div>\n\nLet's increase the order, and note that now we have better attenuation,\nwith a longer impulse response. Let's also switch to using the MNE filter\ndesign function, which simplifies a few things and gives us some information\nabout the resulting filter:", "iir_params = dict(order=8, ftype='butter')\nfilt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,\n method='iir', iir_params=iir_params,\n verbose=True)\nplot_filter(filt, sfreq, freq, gain, 'Butterworth order=8', flim=flim,\n compensate=True)\nx_steep = signal.sosfiltfilt(filt['sos'], x)", "There are other types of IIR filters that we can use. For a complete list,\ncheck out the documentation for :func:scipy.signal.iirdesign. Let's\ntry a Chebychev (type I) filter, which trades off ripple in the pass-band\nto get better attenuation in the stop-band:", "iir_params.update(ftype='cheby1',\n rp=1., # dB of acceptable pass-band ripple\n )\nfilt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,\n method='iir', iir_params=iir_params,\n verbose=True)\nplot_filter(filt, sfreq, freq, gain,\n 'Chebychev-1 order=8, ripple=1 dB', flim=flim, compensate=True)", "If we can live with even more ripple, we can get it slightly steeper,\nbut the impulse response begins to ring substantially longer (note the\ndifferent x-axis scale):", "iir_params['rp'] = 6.\nfilt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,\n method='iir', iir_params=iir_params,\n verbose=True)\nplot_filter(filt, sfreq, freq, gain,\n 'Chebychev-1 order=8, ripple=6 dB', flim=flim,\n compensate=True)", "Applying IIR filters\nNow let's look at how our shallow and steep Butterworth IIR filters\nperform on our Morlet signal from before:", "axes = plt.subplots(1, 2)[1]\nyticks = np.arange(4) / -30.\nyticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']\nplot_signal(x_orig, offset=yticks[0])\nplot_signal(x, offset=yticks[1])\nplot_signal(x_shallow, offset=yticks[2])\nplot_signal(x_steep, offset=yticks[3])\naxes[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,\n ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)\nfor text in axes[0].get_yticklabels():\n text.set(rotation=45, size=8)\naxes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',\n ylabel='Magnitude (dB)')\nmne.viz.adjust_axes(axes)\nmne.viz.tight_layout()\nplt.show()", "Some pitfalls of filtering\nMultiple recent papers have noted potential risks of drawing\nerrant inferences due to misapplication of filters.\nLow-pass problems\nFilters in general, especially those that are non-causal (zero-phase), can\nmake activity appear to occur earlier or later than it truly did. As\nmentioned in VanRullen (2011) [3], investigations of commonly (at the time)\nused low-pass filters created artifacts when they were applied to simulated\ndata. 
However, such deleterious effects were minimal in many real-world\nexamples in Rousselet (2012) [5].\nPerhaps more revealing, it was noted in Widmann & Schröger (2012) [6] that\nthe problematic low-pass filters from VanRullen (2011) [3]:\n\nUsed a least-squares design (like :func:scipy.signal.firls) that\n included \"do-not-care\" transition regions, which can lead to\n uncontrolled behavior.\nHad a filter length that was independent of the transition bandwidth,\n which can cause excessive ringing and signal distortion.\n\nHigh-pass problems\nWhen it comes to high-pass filtering, using corner frequencies above 0.1 Hz\nwere found in Acunzo et al. (2012) [4]_ to:\n\"... generate a systematic bias easily leading to misinterpretations of\n neural activity.”\nIn a related paper, Widmann et al. (2015) [7] also came to suggest a\n0.1 Hz highpass. More evidence followed in Tanner et al. (2015) [8] of\nsuch distortions. Using data from language ERP studies of semantic and\nsyntactic processing (i.e., N400 and P600), using a high-pass above 0.3 Hz\ncaused significant effects to be introduced implausibly early when compared\nto the unfiltered data. From this, the authors suggested the optimal\nhigh-pass value for language processing to be 0.1 Hz.\nWe can recreate a problematic simulation from Tanner et al. (2015) [8]_:\n\"The simulated component is a single-cycle cosine wave with an amplitude\n of 5µV [sic], onset of 500 ms poststimulus, and duration of 800 ms. The\n simulated component was embedded in 20 s of zero values to avoid\n filtering edge effects... Distortions [were] caused by 2 Hz low-pass\n and high-pass filters... No visible distortion to the original\n waveform [occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters...\n Filter frequencies correspond to the half-amplitude (-6 dB) cutoff\n (12 dB/octave roll-off).\"\n<div class=\"alert alert-info\"><h4>Note</h4><p>This simulated signal contains energy not just within the\n pass-band, but also within the transition and stop-bands -- perhaps\n most easily understood because the signal has a non-zero DC value,\n but also because it is a shifted cosine that has been\n *windowed* (here multiplied by a rectangular window), which\n makes the cosine and DC frequencies spread to other frequencies\n (multiplication in time is convolution in frequency, so multiplying\n by a rectangular window in the time domain means convolving a sinc\n function with the impulses at DC and the cosine frequency in the\n frequency domain).</p></div>", "x = np.zeros(int(2 * sfreq))\nt = np.arange(0, len(x)) / sfreq - 0.2\nonset = np.where(t >= 0.5)[0][0]\ncos_t = np.arange(0, int(sfreq * 0.8)) / sfreq\nsig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)\nx[onset:onset + len(sig)] = sig\n\niir_lp_30 = signal.iirfilter(2, 30. / sfreq, btype='lowpass')\niir_hp_p1 = signal.iirfilter(2, 0.1 / sfreq, btype='highpass')\niir_lp_2 = signal.iirfilter(2, 2. / sfreq, btype='lowpass')\niir_hp_2 = signal.iirfilter(2, 2. 
/ sfreq, btype='highpass')\nx_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)\nx_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)\nx_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)\nx_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)\n\nxlim = t[[0, -1]]\nylim = [-2, 6]\nxlabel = 'Time (sec)'\nylabel = r'Amplitude ($\\mu$V)'\ntticks = [0, 0.5, 1.3, t[-1]]\naxes = plt.subplots(2, 2)[1].ravel()\nfor ax, x_f, title in zip(axes, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],\n ['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'LP$_{0.1}$']):\n ax.plot(t, x, color='0.5')\n ax.plot(t, x_f, color='k', linestyle='--')\n ax.set(ylim=ylim, xlim=xlim, xticks=tticks,\n title=title, xlabel=xlabel, ylabel=ylabel)\nmne.viz.adjust_axes(axes)\nmne.viz.tight_layout()\nplt.show()", "Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) [12]_,\nthey found that applying a 1 Hz high-pass decreased the probability of\nfinding a significant difference in the N100 response, likely because\nthe P300 response was smeared (and inverted) in time by the high-pass\nfilter such that it tended to cancel out the increased N100. However,\nthey nonetheless note that some high-passing can still be useful to deal\nwith drifts in the data.\nEven though these papers generally advise a 0.1 Hz or lower frequency for\na high-pass, it is important to keep in mind (as most authors note) that\nfiltering choices should depend on the frequency content of both the\nsignal(s) of interest and the noise to be suppressed. For example, in\nsome of the MNE-Python examples involving sample-data,\nhigh-pass values of around 1 Hz are used when looking at auditory\nor visual N100 responses, because we analyze standard (not deviant) trials\nand thus expect that contamination by later or slower components will\nbe limited.\nBaseline problems (or solutions?)\nIn an evolving discussion, Tanner et al. (2015) [8] suggest using baseline\ncorrection to remove slow drifts in data. However, Maess et al. (2016) [9]\nsuggest that baseline correction, which is a form of high-passing, does\nnot offer substantial advantages over standard high-pass filtering.\nTanner et al. (2016) [10]_ rebutted that baseline correction can correct\nfor problems with filtering.\nTo see what they mean, consider again our old simulated signal x from\nbefore:", "def baseline_plot(x):\n all_axes = plt.subplots(3, 2)[1]\n for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])):\n for ci, ax in enumerate(axes):\n if ci == 0:\n iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass',\n output='sos')\n x_hp = signal.sosfiltfilt(iir_hp, x, padlen=0)\n else:\n x_hp -= x_hp[t < 0].mean()\n ax.plot(t, x, color='0.5')\n ax.plot(t, x_hp, color='k', linestyle='--')\n if ri == 0:\n ax.set(title=('No ' if ci == 0 else '') +\n 'Baseline Correction')\n ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)\n ax.set_ylabel('%0.1f Hz' % freq, rotation=0,\n horizontalalignment='right')\n mne.viz.adjust_axes(axes)\n mne.viz.tight_layout()\n plt.suptitle(title)\n plt.show()\n\n\nbaseline_plot(x)", "In response, Maess et al. (2016) [11]_ note that these simulations do not\naddress cases of pre-stimulus activity that is shared across conditions, as\napplying baseline correction will effectively copy the topology outside the\nbaseline period. 
We can see this if we give our signal x with some\nconsistent pre-stimulus activity, which makes everything look bad.\n<div class=\"alert alert-info\"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they\n are for a single simulated sensor. In multi-electrode recordings\n the topology (i.e., spatial pattern) of the pre-stimulus activity\n will leak into the post-stimulus period. This will likely create a\n spatially varying distortion of the time-domain signals, as the\n averaged pre-stimulus spatial pattern gets subtracted from the\n sensor time courses.</p></div>\n\nPutting some activity in the baseline period:", "n_pre = (t < 0).sum()\nsig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))\nx[:n_pre] += sig_pre\nbaseline_plot(x)", "Both groups seem to acknowledge that the choices of filtering cutoffs, and\nperhaps even the application of baseline correction, depend on the\ncharacteristics of the data being investigated, especially when it comes to:\n\nThe frequency content of the underlying evoked activity relative\n to the filtering parameters.\nThe validity of the assumption of no consistent evoked activity\n in the baseline period.\n\nWe thus recommend carefully applying baseline correction and/or high-pass\nvalues based on the characteristics of the data to be analyzed.\nFiltering defaults\nDefaults in MNE-Python\nMost often, filtering in MNE-Python is done at the :class:mne.io.Raw level,\nand thus :func:mne.io.Raw.filter is used. This function under the hood\n(among other things) calls :func:mne.filter.filter_data to actually\nfilter the data, which by default applies a zero-phase FIR filter designed\nusing :func:scipy.signal.firwin. In Widmann et al. (2015) [7]_, they\nsuggest a specific set of parameters to use for high-pass filtering,\nincluding:\n\"... 
providing a transition bandwidth of 25% of the lower passband\nedge but, where possible, not lower than 2 Hz and otherwise the\ndistance from the passband edge to the critical frequency.”\n\nIn practice, this means that for each high-pass value l_freq or\nlow-pass value h_freq below, you would get this corresponding\nl_trans_bandwidth or h_trans_bandwidth, respectively,\nif the sample rate were 100 Hz (i.e., Nyquist frequency of 50 Hz):\n+------------------+-------------------+-------------------+\n| l_freq or h_freq | l_trans_bandwidth | h_trans_bandwidth |\n+==================+===================+===================+\n| 0.01 | 0.01 | 2.0 |\n+------------------+-------------------+-------------------+\n| 0.1 | 0.1 | 2.0 |\n+------------------+-------------------+-------------------+\n| 1.0 | 1.0 | 2.0 |\n+------------------+-------------------+-------------------+\n| 2.0 | 2.0 | 2.0 |\n+------------------+-------------------+-------------------+\n| 4.0 | 2.0 | 2.0 |\n+------------------+-------------------+-------------------+\n| 8.0 | 2.0 | 2.0 |\n+------------------+-------------------+-------------------+\n| 10.0 | 2.5 | 2.5 |\n+------------------+-------------------+-------------------+\n| 20.0 | 5.0 | 5.0 |\n+------------------+-------------------+-------------------+\n| 40.0 | 10.0 | 10.0 |\n+------------------+-------------------+-------------------+\n| 50.0 | 12.5 | 12.5 |\n+------------------+-------------------+-------------------+\nMNE-Python has adopted this definition for its high-pass (and low-pass)\ntransition bandwidth choices when using l_trans_bandwidth='auto' and\nh_trans_bandwidth='auto'.\nTo choose the filter length automatically with filter_length='auto',\nthe reciprocal of the shortest transition bandwidth is used to ensure\ndecent attenuation at the stop frequency. Specifically, the reciprocal\n(in samples) is multiplied by 3.1, 3.3, or 5.0 for the Hann, Hamming,\nor Blackman windows, respectively, as selected by the fir_window\nargument for fir_design='firwin', and double these for\nfir_design='firwin2' mode.\n<div class=\"alert alert-info\"><h4>Note</h4><p>For ``fir_design='firwin2'``, the multiplicative factors are\n doubled compared to what is given in Ifeachor & Jervis (2002) [2]_\n (p. 357), as :func:`scipy.signal.firwin2` has a smearing effect\n on the frequency response, which we compensate for by\n increasing the filter length. This is why\n ``fir_desgin='firwin'`` is preferred to ``fir_design='firwin2'``.</p></div>\n\nIn 0.14, we default to using a Hamming window in filter design, as it\nprovides up to 53 dB of stop-band attenuation with small pass-band ripple.\n<div class=\"alert alert-info\"><h4>Note</h4><p>In band-pass applications, often a low-pass filter can operate\n effectively with fewer samples than the high-pass filter, so\n it is advisable to apply the high-pass and low-pass separately\n when using ``fir_design='firwin2'``. 
For design mode\n ``fir_design='firwin'``, there is no need to separate the\n operations, as the lowpass and highpass elements are constructed\n separately to meet the transition band requirements.</p></div>\n\nFor more information on how to use the\nMNE-Python filtering functions with real data, consult the preprocessing\ntutorial on tut-filter-resample.\nDefaults in MNE-C\nMNE-C by default uses:\n\n5 Hz transition band for low-pass filters.\n3-sample transition band for high-pass filters.\nFilter length of 8197 samples.\n\nThe filter is designed in the frequency domain, creating a linear-phase\nfilter such that the delay is compensated for as is done with the MNE-Python\nphase='zero' filtering option.\nSquared-cosine ramps are used in the transition regions. Because these\nare used in place of more gradual (e.g., linear) transitions,\na given transition width will result in more temporal ringing but also more\nrapid attenuation than the same transition width in windowed FIR designs.\nThe default filter length will generally have excellent attenuation\nbut long ringing for the sample rates typically encountered in M/EEG data\n(e.g. 500-2000 Hz).\nDefaults in other software\nA good but possibly outdated comparison of filtering in various software\npackages is available in Widmann et al. (2015) [7]_. Briefly:\n\nEEGLAB\n MNE-Python 0.14 defaults to behavior very similar to that of EEGLAB\n (see the EEGLAB filtering FAQ_ for more information).\nFieldTrip\n By default FieldTrip applies a forward-backward Butterworth IIR filter\n of order 4 (band-pass and band-stop filters) or 2 (for low-pass and\n high-pass filters). Similar filters can be achieved in MNE-Python when\n filtering with :meth:raw.filter(..., method='iir') &lt;mne.io.Raw.filter&gt;\n (see also :func:mne.filter.construct_iir_filter for options).\n For more information, see e.g. the\n FieldTrip band-pass documentation &lt;ftbp_&gt;_.\n\nReporting Filters\nOn page 45 in Widmann et al. (2015) [7]_, there is a convenient list of\nimportant filter parameters that should be reported with each publication:\n\nFilter type (high-pass, low-pass, band-pass, band-stop, FIR, IIR)\nCutoff frequency (including definition)\nFilter order (or length)\nRoll-off or transition bandwidth\nPassband ripple and stopband attenuation\nFilter delay (zero-phase, linear-phase, non-linear phase) and causality\nDirection of computation (one-pass forward/reverse, or two-pass forward\n and reverse)\n\nIn the following, we will address how to deal with these parameters in MNE:\nFilter type\nDepending on the function or method used, the filter type can be specified.\nTo name an example, in :func:mne.filter.create_filter, the relevant\narguments would be l_freq, h_freq, method, and if the method is\nFIR fir_window and fir_design.\nCutoff frequency\nThe cutoff of FIR filters in MNE is defined as half-amplitude cutoff in the\nmiddle of the transition band. That is, if you construct a lowpass FIR filter\nwith h_freq = 40, the filter function will provide a transition\nbandwidth that depends on the h_trans_bandwidth argument. 
The desired\nhalf-amplitude cutoff of the lowpass FIR filter is then at\nh_freq + transition_bandwidth/2..\nFilter length (order) and transition bandwidth (roll-off)\nIn the tut_filtering_in_python section, we have already talked about\nthe default filter lengths and transition bandwidths that are used when no\ncustom values are specified using the respective filter function's arguments.\nIf you want to find out about the filter length and transition bandwidth that\nwere used through the 'auto' setting, you can use\n:func:mne.filter.create_filter to print out the settings once more:", "# Use the same settings as when calling e.g., `raw.filter()`\nfir_coefs = mne.filter.create_filter(\n data=None, # data is only used for sanity checking, not strictly needed\n sfreq=1000., # sfreq of your data in Hz\n l_freq=None,\n h_freq=40., # assuming a lowpass of 40 Hz\n method='fir',\n fir_window='hamming',\n fir_design='firwin',\n verbose=True)\n\n# See the printed log for the transition bandwidth and filter length.\n# Alternatively, get the filter length through:\nfilter_length = fir_coefs.shape[0]", "<div class=\"alert alert-info\"><h4>Note</h4><p>If you are using an IIR filter, :func:`mne.filter.create_filter`\n will not print a filter length and transition bandwidth to the log.\n Instead, you can specify the roll-off with the ``iir_params``\n argument or stay with the default, which is a fourth order\n (Butterworth) filter.</p></div>\n\nPassband ripple and stopband attenuation\nWhen use standard :func:scipy.signal.firwin design (as for FIR filters in\nMNE), the passband ripple and stopband attenuation are dependent upon the\nwindow used in design. For standard windows the values are listed in this\ntable (see Ifeachor & Jervis (2002) [2]_, p. 357):\n+-------------------------+-----------------+----------------------+\n| Name of window function | Passband ripple | Stopband attenuation |\n+=========================+=================+======================+\n| Hann | 0.0545 dB | 44 dB |\n+-------------------------+-----------------+----------------------+\n| Hamming | 0.0194 dB | 53 dB |\n+-------------------------+-----------------+----------------------+\n| Blackman | 0.0017 dB | 74 dB |\n+-------------------------+-----------------+----------------------+\nFilter delay and direction of computation\nFor reporting this information, it might be sufficient to read the docstring\nof the filter function or method that you apply. For example in the\ndocstring of mne.filter.create_filter, for the phase parameter it says:\nPhase of the filter, only used if method='fir'.\n By default, a symmetric linear-phase FIR filter is constructed.\n If phase='zero' (default), the delay of this filter\n is compensated for. If phase=='zero-double', then this filter\n is applied twice, once forward, and once backward. If 'minimum',\n then a minimum-phase, causal filter will be used.\nSummary\nWhen filtering, there are always trade-offs that should be considered.\nOne important trade-off is between time-domain characteristics (like ringing)\nand frequency-domain attenuation characteristics (like effective transition\nbandwidth). Filters with sharp frequency cutoffs can produce outputs that\nring for a long time when they operate on signals with frequency content\nin the transition band. In general, therefore, the wider a transition band\nthat can be tolerated, the better behaved the filter will be in the time\ndomain.\nReferences\n.. [1] Parks TW, Burrus CS (1987). Digital Filter Design.\n New York: Wiley-Interscience.\n.. 
[2] Ifeachor, E. C., & Jervis, B. W. (2002). Digital Signal Processing:\n A Practical Approach. Prentice Hall.\n.. [3] Vanrullen, R. (2011). Four common conceptual fallacies in mapping\n the time course of recognition. Perception Science, 2, 365.\n.. [4] Acunzo, D. J., MacKenzie, G., & van Rossum, M. C. W. (2012).\n Systematic biases in early ERP and ERF components as a result\n of high-pass filtering. Journal of Neuroscience Methods,\n 209(1), 212–218. https://doi.org/10.1016/j.jneumeth.2012.06.011\n.. [5] Rousselet, G. A. (2012). Does filtering preclude us from studying\n ERP time-courses? Frontiers in Psychology, 3(131)\n.. [6] Widmann, A., & Schröger, E. (2012). Filter effects and filter\n artifacts in the analysis of electrophysiological data.\n Perception Science, 233.\n.. [7] Widmann, A., Schröger, E., & Maess, B. (2015). Digital filter\n design for electrophysiological data – a practical approach.\n Journal of Neuroscience Methods, 250, 34–46.\n https://doi.org/10.1016/j.jneumeth.2014.08.002\n.. [8] Tanner, D., Morgan-Short, K., & Luck, S. J. (2015).\n How inappropriate high-pass filters can produce artifactual effects\n and incorrect conclusions in ERP studies of language and cognition.\n Psychophysiology, 52(8), 997–1009. https://doi.org/10.1111/psyp.12437\n.. [9] Maess, B., Schröger, E., & Widmann, A. (2016).\n High-pass filters and baseline correction in M/EEG analysis.\n Commentary on: “How inappropriate high-pass filters can produce\n artifacts and incorrect conclusions in ERP studies of language\n and cognition.” Journal of Neuroscience Methods, 266, 164–165.\n.. [10] Tanner, D., Norton, J. J. S., Morgan-Short, K., & Luck, S. J. (2016).\n On high-pass filter artifacts (they’re real) and baseline correction\n (it’s a good idea) in ERP/ERMF analysis.\n.. [11] Maess, B., Schröger, E., & Widmann, A. (2016).\n High-pass filters and baseline correction in M/EEG analysis-continued\n discussion. Journal of Neuroscience Methods, 266, 171–172.\n Journal of Neuroscience Methods, 266, 166–170.\n.. [12] Kappenman E. & Luck, S. (2010). The effects of impedance on data\n quality and statistical significance in ERP recordings.\n Psychophysiology, 47, 888-904." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
akseshina/dl_course
seminar_3/.ipynb_checkpoints/classwork_2-checkpoint.ipynb
gpl-3.0
[ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport time\nfrom datetime import timedelta\nimport math\nimport os\n\n# Use PrettyTensor to simplify Neural Network construction.\nimport prettytensor as pt", "Load Data", "import cifar10", "Set the path for storing the data-set on your computer.\nThe CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.", "cifar10.maybe_download_and_extract()", "Load the class-names.", "class_names = cifar10.load_class_names()\nclass_names", "Load the training-set. This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.", "images_train, cls_train, labels_train = cifar10.load_training_data()", "Load the test-set.", "images_test, cls_test, labels_test = cifar10.load_test_data()", "The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(images_train)))\nprint(\"- Test-set:\\t\\t{}\".format(len(images_test)))", "The data dimensions are used in several places in the source-code below. They have already been defined in the cifar10 module, so we just need to import them.", "from cifar10 import img_size, num_channels, num_classes", "The images are 32 x 32 pixels, but we will crop the images to 24 x 24 pixels.", "img_size_cropped = 24", "Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None, smooth=True):\n\n assert len(images) == len(cls_true) == 9\n\n # Create figure with sub-plots.\n fig, axes = plt.subplots(3, 3)\n\n # Adjust vertical spacing if we need to print ensemble and best-net.\n if cls_pred is None:\n hspace = 0.3\n else:\n hspace = 0.6\n fig.subplots_adjust(hspace=hspace, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Interpolation type.\n if smooth:\n interpolation = 'spline16'\n else:\n interpolation = 'nearest'\n\n # Plot image.\n ax.imshow(images[i, :, :, :],\n interpolation=interpolation)\n \n # Name of the true class.\n cls_true_name = class_names[cls_true[i]]\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true_name)\n else:\n # Name of the predicted class.\n cls_pred_name = class_names[cls_pred[i]]\n\n xlabel = \"True: {0}\\nPred: {1}\".format(cls_true_name, cls_pred_name)\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Plot a few images to see if data is correct", "# Get the first images from the test-set.\nimages = images_test[0:9]\n\n# Get the true classes for those images.\ncls_true = cls_test[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true, smooth=False)", "The pixelated images above are what the neural network will get as input. 
The images might be a bit easier for the human eye to recognize if we smoothen the pixels.", "plot_images(images=images, cls_true=cls_true, smooth=True)\n\nx = tf.placeholder(tf.float32, shape=[None, img_size, img_size, num_channels], name='x')\n\ny_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')\n\ny_true_cls = tf.argmax(y_true, dimension=1)", "Data augmentation for images\nThe following helper-functions create the part of the TensorFlow computational graph that pre-processes the input images. Nothing is actually calculated at this point, the function merely adds nodes to the computational graph for TensorFlow.\nThe pre-processing is different for training and testing of the neural network:\n* For training, the input images are randomly cropped, randomly flipped horizontally, and the hue, contrast and saturation is adjusted with random values. This artificially inflates the size of the training-set by creating random variations of the original input images. Examples of distorted images are shown further below.\n\nFor testing, the input images are cropped around the centre and nothing else is adjusted.", "def pre_process_image(image, training):\n # This function takes a single image as input,\n # and a boolean whether to build the training or testing graph.\n \n if training:\n # For training, add the following to the TensorFlow graph.\n\n # Randomly crop the input image.\n image = tf.random_crop(image, size=[img_size_cropped, img_size_cropped, num_channels])\n\n # Randomly flip the image horizontally.\n image = tf.image.random_flip_left_right(image)\n \n # Randomly adjust hue, contrast and saturation.\n image = tf.image.random_hue(image, max_delta=0.05)\n image = tf.image.random_contrast(image, lower=0.3, upper=1.0)\n image = tf.image.random_brightness(image, max_delta=0.2)\n image = tf.image.random_saturation(image, lower=0.0, upper=2.0)\n\n # Some of these functions may overflow and result in pixel\n # values beyond the [0, 1] range. It is unclear from the\n # documentation of TensorFlow whether this is\n # intended. 
A simple solution is to limit the range.\n\n # Limit the image pixels between [0, 1] in case of overflow.\n image = tf.minimum(image, 1.0)\n image = tf.maximum(image, 0.0)\n else:\n # For training, add the following to the TensorFlow graph.\n\n # Crop the input image around the centre so it is the same\n # size as images that are randomly cropped during training.\n image = tf.image.resize_image_with_crop_or_pad(image,\n target_height=img_size_cropped,\n target_width=img_size_cropped)\n\n return image", "The function above is called for each image in the input batch using the following function.", "def pre_process(images, training):\n # Use TensorFlow to loop over all the input images and call\n # the function above which takes a single image as input.\n images = tf.map_fn(lambda image: pre_process_image(image, training), images)\n\n return images", "In order to plot the distorted images, we create the pre-processing graph for TensorFlow, so we may execute it later.", "distorted_images = pre_process(images=x, training=True)", "Creating Main Processing", "def main_network(images, training):\n # Wrap the input images as a Pretty Tensor object.\n x_pretty = pt.wrap(images)\n\n # Pretty Tensor uses special numbers to distinguish between\n # the training and testing phases.\n if training:\n phase = pt.Phase.train\n else:\n phase = pt.Phase.infer\n\n # Create the convolutional neural network using Pretty Tensor.\n with pt.defaults_scope(activation_fn=tf.nn.relu, phase=phase):\n y_pred, loss = x_pretty.\\\n conv2d(kernel=5, depth=64, name='layer_conv1', batch_normalize=True).\\\n max_pool(kernel=2, stride=2).\\\n conv2d(kernel=5, depth=64, name='layer_conv2').\\\n max_pool(kernel=2, stride=2).\\\n flatten().\\\n fully_connected(size=256, name='layer_fc1').\\\n fully_connected(size=128, name='layer_fc2').\\\n softmax_classifier(num_classes=num_classes, labels=y_true)\n\n return y_pred, loss", "Creating Neural Network\nNote that the neural network is enclosed in the variable-scope named 'network'. This is because we are actually creating two neural networks in the TensorFlow graph. By assigning a variable-scope like this, we can re-use the variables for the two neural networks, so the variables that are optimized for the training-network are re-used for the other network that is used for testing.", "def create_network(training):\n # Wrap the neural network in the scope named 'network'.\n # Create new variables during training, and re-use during testing.\n with tf.variable_scope('network', reuse=not training):\n # Just rename the input placeholder variable for convenience.\n images = x\n\n # Create TensorFlow graph for pre-processing.\n images = pre_process(images=images, training=training)\n\n # Create TensorFlow graph for the main processing.\n y_pred, loss = main_network(images=images, training=training)\n\n return y_pred, loss", "Create Neural Network for Training Phase\nNote that trainable=False which means that TensorFlow will not try to optimize this variable.", "global_step = tf.Variable(initial_value=0,\n name='global_step', trainable=False)", "Create the neural network to be used for training. The create_network() function returns both y_pred and loss, but we only need the loss-function during training.", "_, loss = create_network(training=True)", "Create an optimizer which will minimize the loss-function. 
Also pass the global_step variable to the optimizer so it will be increased by one after each iteration.", "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step=global_step)", "Create Neural Network for Test Phase / Inference\nNow create the neural network for the test-phase. Once again the create_network() function returns the predicted class-labels y_pred for the input images, as well as the loss-function to be used during optimization. During testing we only need y_pred.", "y_pred, _ = create_network(training=False)", "We then calculate the predicted class number as an integer. The output of the network y_pred is an array with 10 elements. The class number is the index of the largest element in the array.", "y_pred_cls = tf.argmax(y_pred, dimension=1)", "Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.", "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "Saver\nIn order to save the variables of the neural network, so they can be reloaded quickly without having to train the network again, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below.", "saver = tf.train.Saver()", "Getting the Weights\nFurther below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.\nWe used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes. Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.\nThe implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.", "def get_weights_variable(layer_name):\n # Retrieve an existing variable named 'weights' in the scope\n # with the given layer_name.\n # This is awkward because the TensorFlow function was\n # really intended for another purpose.\n\n with tf.variable_scope(\"network/\" + layer_name, reuse=True):\n variable = tf.get_variable('weights')\n\n return variable", "Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.", "weights_conv1 = get_weights_variable(layer_name='layer_conv1')\nweights_conv2 = get_weights_variable(layer_name='layer_conv2')\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n print(sess.run(weights_conv1).shape)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n print(sess.run(weights_conv2).shape)", "Getting the Layer Outputs\nSimilarly we also need to retrieve the outputs of the convolutional layers. 
The function for doing this is slightly different than the function above for getting the weights. Here we instead retrieve the last tensor that is output by the convolutional layer.", "def get_layer_output(layer_name):\n # The name of the last operation of the convolutional layer.\n # This assumes you are using Relu as the activation-function.\n tensor_name = \"network/\" + layer_name + \"/Relu:0\"\n\n # Get the tensor with this name.\n tensor = tf.get_default_graph().get_tensor_by_name(tensor_name)\n\n return tensor", "Get the output of the convoluational layers so we can plot them later.", "output_conv1 = get_layer_output(layer_name='layer_conv1')\noutput_conv2 = get_layer_output(layer_name='layer_conv2')", "TensorFlow Run\nCreate TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "session = tf.Session()", "Restore or initialize variables\nTraining this neural network may take a long time, especially if you do not have a GPU. We therefore save checkpoints during training so we can continue training at another time (e.g. during the night), and also for performing analysis later without having to train the neural network every time we want to use it.\nIf you want to restart the training of the neural network, you have to delete the checkpoints first.\nThis is the directory used for the checkpoints.", "save_dir = 'checkpoints/'", "Create the directory if it does not exist.", "if not os.path.exists(save_dir):\n os.makedirs(save_dir)", "This is the base-filename for the checkpoints, TensorFlow will append the iteration number, etc.", "save_path = os.path.join(save_dir, 'cifar10_cnn')", "First try to restore the latest checkpoint. This may fail and raise an exception e.g. if such a checkpoint does not exist, or if you have changed the TensorFlow graph.", "try:\n print(\"Trying to restore last checkpoint ...\")\n\n # Use TensorFlow to find the latest checkpoint - if any.\n last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=save_dir)\n\n # Try and load the data in the checkpoint.\n saver.restore(session, save_path=last_chk_path)\n\n # If we get to this point, the checkpoint was successfully loaded.\n print(\"Restored checkpoint from:\", last_chk_path)\nexcept:\n # If the above failed for some reason, simply\n # initialize all the variables for the TensorFlow graph.\n print(\"Failed to restore checkpoint. Initializing variables instead.\")\n session.run(tf.global_variables_initializer())", "Helper-function to get a random training-batch\nThere are 50,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.", "train_batch_size = 64", "Function for selecting a random batch of images from the training-set.", "def random_batch():\n # Number of images in the training-set.\n num_images = len(images_train)\n\n # Create a random index.\n idx = np.random.choice(num_images,\n size=train_batch_size,\n replace=False)\n\n # Use the random index to select random images and labels.\n x_batch = images_train[idx, :, :, :]\n y_batch = labels_train[idx, :]\n\n return x_batch, y_batch", "Optimization\nThe progress is printed every 100 iterations. 
A checkpoint is saved every 1000 iterations and also after the last iteration.", "def optimize(num_iterations):\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(num_iterations):\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = random_batch()\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n # We also want to retrieve the global_step counter.\n i_global, _ = session.run([global_step, optimizer],\n feed_dict=feed_dict_train)\n\n # Print status to screen every 100 iterations (and last).\n if (i_global % 100 == 0) or (i == num_iterations - 1):\n # Calculate the accuracy on the training-batch.\n batch_acc = session.run(accuracy,\n feed_dict=feed_dict_train)\n\n # Print status.\n msg = \"Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}\"\n print(msg.format(i_global, batch_acc))\n\n # Save a checkpoint to disk every 1000 iterations (and last).\n if (i_global % 1000 == 0) or (i == num_iterations - 1):\n # Save all variables of the TensorFlow graph to a\n # checkpoint. Append the global_step counter\n # to the filename so we save the last several checkpoints.\n saver.save(session,\n save_path=save_path,\n global_step=global_step)\n\n print(\"Saved checkpoint.\")\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))", "Plot example errors\nFunction for plotting examples of images from the test-set that have been mis-classified.", "def plot_example_errors(cls_pred, correct):\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the images from the test-set that have been\n # incorrectly classified.\n images = images_test[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = cls_test[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])", "Plot confusion matrix", "def plot_confusion_matrix(cls_pred):\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_test, # True class for test-set.\n y_pred=cls_pred) # Predicted class.\n\n # Print the confusion matrix as text.\n for i in range(num_classes):\n # Append the class-name to each line.\n class_name = \"({}) {}\".format(i, class_names[i])\n print(cm[i, :], class_name)\n\n # Print the class-numbers for easy reference.\n class_numbers = [\" ({0})\".format(i) for i in range(num_classes)]\n print(\"\".join(class_numbers))", "Calculating classifications\nThis function calculates the predicted classes 
of images and also returns a boolean array whether the classification of each image is correct.\nThe calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.", "# Split the data-set in batches of this size to limit RAM usage.\nbatch_size = 256\n\ndef predict_cls(images, labels, cls_true):\n # Number of images.\n num_images = len(images)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_images, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_images:\n # The ending index for the next batch is denoted j.\n j = min(i + batch_size, num_images)\n\n # Create a feed-dict with the images and labels\n # between index i and j.\n feed_dict = {x: images[i:j, :],\n y_true: labels[i:j, :]}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n return correct, cls_pred", "Calculate the predicted class for the test-set.", "def predict_cls_test():\n return predict_cls(images = images_test,\n labels = labels_test,\n cls_true = cls_test)", "Helper-functions for the classification accuracy\nThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4. 
The function also returns the number of correct classifications.", "def classification_accuracy(correct):\n # When averaging a boolean array, False means 0 and True means 1.\n # So we are calculating: number of True / len(correct) which is\n # the same as the classification accuracy.\n \n # Return the classification accuracy\n # and the number of correct classifications.\n return correct.mean(), correct.sum()", "Helper-function for showing the performance\nFunction for printing the classification accuracy on the test-set.\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.", "def print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # For all the images in the test-set,\n # calculate the predicted classes and whether they are correct.\n correct, cls_pred = predict_cls_test()\n \n # Classification accuracy and the number of correct classifications.\n acc, num_correct = classification_accuracy(correct)\n \n # Number of images being classified.\n num_images = len(correct)\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n print(msg.format(acc, num_correct, num_images))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)", "Helper-function for plotting convolutional weights", "def plot_conv_weights(weights, input_channel=0):\n # Assume weights are TensorFlow ops for 4-dim variables\n # e.g. weights_conv1 or weights_conv2.\n\n # Retrieve the values of the weight-variables from TensorFlow.\n # A feed-dict is not necessary because nothing is calculated.\n w = session.run(weights)\n\n # Print statistics for the weights.\n print(\"Min: {0:.5f}, Max: {1:.5f}\".format(w.min(), w.max()))\n print(\"Mean: {0:.5f}, Stdev: {1:.5f}\".format(w.mean(), w.std()))\n \n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(w)\n w_max = np.max(w)\n abs_max = max(abs(w_min), abs(w_max))\n\n # Number of filters used in the conv. layer.\n num_filters = w.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # The format of this 4-dim tensor is determined by the\n img = w[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=-abs_max, vmax=abs_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Helper-function for plotting the output of convolutional layers", "def plot_layer_output(layer_output, image):\n # Assume layer_output is a 4-dim tensor\n # e.g. 
output_conv1 or output_conv2.\n\n # Create a feed-dict which holds the single input image.\n # Note that TensorFlow needs a list of images,\n # so we just create a list with this one image.\n feed_dict = {x: [image]}\n \n # Retrieve the output of the layer after inputting this image.\n values = session.run(layer_output, feed_dict=feed_dict)\n\n # Get the lowest and highest values.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n values_min = np.min(values)\n values_max = np.max(values)\n\n # Number of image channels output by the conv. layer.\n num_images = values.shape[3]\n\n # Number of grid-cells to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_images))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid image-channels.\n if i<num_images:\n # Get the images for the i'th output channel.\n img = values[0, :, :, i]\n\n # Plot image.\n ax.imshow(img, vmin=values_min, vmax=values_max,\n interpolation='nearest', cmap='binary')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Examples of distorted input images\nIn order to artificially inflate the number of images available for training, the neural network uses pre-processing with random distortions of the input images. This should hopefully make the neural network more flexible at recognizing and classifying images.\nThis is a helper-function for plotting distorted input images.", "def plot_distorted_image(image, cls_true):\n # Repeat the input image 9 times.\n image_duplicates = np.repeat(image[np.newaxis, :, :, :], 9, axis=0)\n\n # Create a feed-dict for TensorFlow.\n feed_dict = {x: image_duplicates}\n\n # Calculate only the pre-processing of the TensorFlow graph\n # which distorts the images in the feed-dict.\n result = session.run(distorted_images, feed_dict=feed_dict)\n\n # Plot the images.\n plot_images(images=result, cls_true=np.repeat(cls_true, 9))", "Helper-function for getting an image and its class-number from the test-set.", "def get_test_image(i):\n return images_test[i, :, :, :], cls_test[i]", "Get an image and its true class from the test-set.", "img, cls = get_test_image(16)", "Plot 9 random distortions of the image. If you re-run this code you will get slightly different results.", "plot_distorted_image(img, cls)", "Perform optimization", "# if False:\noptimize(num_iterations=1000)", "Results\nExamples of mis-classifications are plotted below. Some of these are difficult to recognize even for humans and others are reasonable mistakes e.g. between a large car and a truck, or between a cat and a dog, while other mistakes seem a bit strange.", "print_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "Convolutional Weights\nThe following shows some of the weights (or filters) for the first convolutional layer. There are 3 input channels so there are 3 of these sets, which you may plot by changing the input_channel.\nNote that positive weights are red and negative weights are blue.", "plot_conv_weights(weights=weights_conv1, input_channel=0)", "Plot some of the weights (or filters) for the second convolutional layer. 
These are apparently closer to zero than the weights for the first convolutional layers, see the lower standard deviation.", "plot_conv_weights(weights=weights_conv2, input_channel=1)", "Output of convolutional layers\nHelper-function for plotting an image.", "def plot_image(image):\n # Create figure with sub-plots.\n fig, axes = plt.subplots(1, 2)\n\n # References to the sub-plots.\n ax0 = axes.flat[0]\n ax1 = axes.flat[1]\n\n # Show raw and smoothened images in sub-plots.\n ax0.imshow(image, interpolation='nearest')\n ax1.imshow(image, interpolation='spline16')\n\n # Set labels.\n ax0.set_xlabel('Raw')\n ax1.set_xlabel('Smooth')\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Plot an image from the test-set. The raw pixelated image is used as input to the neural network.", "img, cls = get_test_image(16)\nplot_image(img)", "Use the raw image as input to the neural network and plot the output of the first convolutional layer.", "plot_layer_output(output_conv1, image=img)", "Using the same image as input to the neural network, now plot the output of the second convolutional layer.", "plot_layer_output(output_conv2, image=img)", "Predicted class-labels\nGet the predicted class-label and class-number for this image.", "label_pred, cls_pred = session.run([y_pred, y_pred_cls],\n feed_dict={x: [img]})", "Print the predicted class-label.", "# Set the rounding options for numpy.\nnp.set_printoptions(precision=3, suppress=True)\n\n# Print the predicted label.\nprint(label_pred[0])", "The predicted class-label is an array of length 10, with each element indicating how confident the neural network is that the image is the given class.\nIn this case the element with index 3 has a value of 0.493, while the element with index 5 has a value of 0.490. This means the neural network believes the image either shows a class 3 or class 5, which is a cat or a dog, respectively.", "class_names[3]\n\nclass_names[5]", "Close TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.", "# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to restart it.\n# session.close()", "Homework\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nRun the optimization for 100,000 iterations and see what the classification accuracy is. This will create a checkpoint that saves all the variables of the TensorFlow graph.\nTry changing the structure of the neural network to AlexNet. How does it affect the training time and the classification accuracy? Note that the checkpoints cannot be reloaded when you change the structure of the neural network." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
afeiguin/comp-phys
08_01_Schroedinger.ipynb
mit
[ "Time dependent Schrödinger equation\nWe want to describe an electron wavefunction by a wavepacket\n$\\psi (x,y)$ that is a function of position $x$ and time $t$. We assume\nthat the electron is initially localized around $x_0$, and model this by\na Gaussian multiplying a plane wave:\n$$\\psi(x,t=0)=\\exp{\\left[-\\frac{1}{2}\\left(\\frac{x-x_0}{\\sigma _0}\n\\right)^2\\right ]} e^{ik_0x}\n$$\nThis wave function does not correspond to an electron with a well\ndefined momentum. However, if the width of the Gaussian $\\sigma _0$ is\nmade very large, the electron gets spread over a sufficiently large\nregion of space and can be considered as a plane wave with momentum\n$k_0$ with a slowly varying amplitude.\nThe behavior of this wave packet as a function of time is described by\nthe time-dependent Schröedinger equation (here in 1d):\n$$i\\frac{\\partial \\psi}{\\partial t}=H\\psi(x,t).$$ \n$H$ is the Hamiltonian operator:\n$$H=-\\frac{1}{2m}\\frac{\\partial ^2}{\\partial x^2}+V(x),$$ \nwhere $V(x)$ is a time independent potential. The\nHamiltonian is chosen to be real. we have picked teh energy units such\nthat $\\hbar=1$, and from now on, we will pick mass units such that\n$2m=1$ to make equations simpler.\nScrhödinger’s equation is obviously a PDE, and we can use\ngeneralizations of the techniques learned in previous sections to solve\nit. The main observation is that this time we have to deal with complex\nnumbers, and the function $\\psi (x,y)$ has real and imaginary parts:\n$$\\psi (x,t) = R(x,t)+iI(x,t).$$ However, is this section we will\npresent an alternative method that makes the quantum mechanical nature\nof this problem more transparent.\nThe time-evolution operator\nThe Scrödinger equation ([time]) can be integrated in a formal sense\nto obtain: \n$$\\psi(x,t)=U(t)\\psi(x,t=0)=e^{-iHt}\\psi(x,t).$$ \nFrom here we deduce that the wave function can be\nevolved forward in time by applying the time-evolution operator\n$U(t)=\\exp{-iHt}$: $$\\psi(t+\\Delta t)= e^{-iH\\delta t}\\psi(t).$$\nLikewise, the inverse of the time-evolution operator moves the wave\nfunction back in time: $$\\psi(t-\\Delta t)=e^{iH\\Delta t}\\psi(t),$$ where\nwe have use the property $$U^{-1}(t)=U(-t).$$ Although it would be nice\nto have an algorithm based on the direct application of $U$, it has been\nshown that this is not stable. Hence, we apply the following relation:\n$$\\psi(t+\\Delta t)=\\psi(t-\\Delta t)+\\left[e^{-iH\\Delta t}-e^{iH\\Delta\nt}\\right]\\psi(t).$$ Now, the derivatives with recpect to $x$ can be\napproximated by \n$$\\begin{aligned}\n\\frac{\\partial \\psi}{\\partial t}\n&\\sim& \\frac{\\psi(x,t+\\Delta t)-\\psi(x, t)}{\\Delta t}, \\\n\\frac{\\partial ^2 \\psi}{\\partial x^2} &\\sim& \\frac{\\psi(x+\\Delta %\nx,t)+\\psi(x-\\Delta x,t)-2\\psi(x,t)}{(\\Delta x)^2}.\n\\end{aligned}$$ \nThe time evolution operator is\napproximated by: $$U(\\Delta t)=e^{-iH\\Delta t} \\sim 1+iH\\Delta t.$$\nReplacing the expression ([hami]) for $H$, we obtain:\n$$\\psi(x,t+\\Delta t)=\\psi(x,t)-i[(2\\alpha+\\Delta t\nV(x))\\psi(x,t)-\\alpha(\\psi(x+\\Delta x,t)+\\psi(x-\\Delta x,t))],\n$$ \nwith $\\alpha=\\frac{\\Delta t}{(\\Delta x)^2}$. The\nprobability of finding an electron at $(x,t)$ is given by\n$|\\psi(x,t)|^2$. This equations do no conserve this probability exactly,\nbut the error is of the order of $(\\Delta t)^2$. 
The convergence can be\ndetermined by using smaller steps.\nWe can write this expression explicitly for the real and imaginary parts, becoming:\n$$\\begin{aligned}\n\\mathrm{Im} \\psi(x, t + \\Delta t) = \\mathrm{Im} \\psi(x, t) + \\alpha \\mathrm{Re} \\psi (x + \\Delta x, t) + \\alpha \\mathrm{Re}\\psi(x − \\Delta x, t) − (2\\alpha + \\Delta t V (x)) \\mathrm{Re} \\psi(x, t) \\\n\\mathrm{Re} \\psi(x, t + \\Delta t) = \\mathrm{Re} \\psi(x, t) − \\alpha \\mathrm{Im} \\psi (x + \\Delta x, t) − \\alpha \\mathrm{Im} \\psi (x − \\Delta x, t) + (2\\alpha + \\Delta tV (x)) \\mathrm{Im} \\psi(x, t)\n\\end{aligned}$$\nNotice the symmetry between these equations a: while the calculation of the imaginary part of the wave function at the later time involves a weighted average of the real part of the wave function at di erent positions from the earlier time, the calculation of the real part involves a weighted average of the imaginary part for di erent positions at the earlier time. This intermixing of the real and imaginary parts of the wave function may seem a bit strange, but remember that this situation is a direct result of our breaking up the wave function into its real and imaginary parts in the first place.\nExercise 8.1: Harmonic Potential\nSimulate a Gaussian wave-packet moving along the $x$ axis in a harmonic potential", "%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot\nimport math\nimport matplotlib.animation as animation\nfrom IPython.display import HTML\n\n\nlx=20\ndx = 0.04\nnx = int(lx/dx)\ndt = dx**2/20.\nV0 = 60\nalpha = dt/dx**2\n\nfig = pyplot.figure()\nax = pyplot.axes(xlim=(0, lx), ylim=(0, 2), xlabel='x', ylabel='|Psi|^2')\npoints, = ax.plot([], [], marker='', linestyle='-', lw=3)\n\npsi0_r = np.zeros(nx+1)\npsi0_i = np.zeros(nx+1)\n\nx = np.arange(0, lx+dx, dx)\n\n#Define your potential \nV = np.zeros(nx+1)\nV = V0*(x-lx/2)**2\n\n#Initial conditions: wave packet\nsigma2 = 0.5**2\nk0 = np.pi*10.5\nx0 = lx/2\nfor ix in range(0,nx):\n psi0_r[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*math.cos(k0*ix*dx)\n psi0_i[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*math.sin(k0*ix*dx)\n \ndef solve(i):\n global psi0_r, psi0_i\n\n for ix in range(1,nx-2):\n psi0_i[ix]=psi0_i[ix]+alpha*psi0_r[ix+1]+alpha*psi0_r[ix-1]-(2.*alpha+dt*V[ix])*psi0_r[ix]\n for ix in range(1,nx-2):\n psi0_r[ix]=psi0_r[ix]-alpha*psi0_i[ix+1]-alpha*psi0_i[ix-1]+(2.*alpha+dt*V[ix])*psi0_i[ix]\n \n points.set_data(x,psi0_r**2 + psi0_i**2)\n return (points,)\n\n\n#for i in range(2000):\n# solve(i)\n \n#pyplot.plot(x,psi0_r**2+psi0_i**2);\n\n\nanim = animation.FuncAnimation(fig, solve, frames = 8000, interval=50)\n\nHTML(anim.to_jshtml()) \n\n", "Exercise 8.2: Potential barrier\nSimulate a Gaussian wave-packet moving along the x axis passing through a potential barrier", "import matplotlib.animation as animation\nfrom IPython.display import HTML\n\n\nfig = pyplot.figure()\nax = pyplot.axes(xlim=(0, lx), ylim=(0, 2), xlabel='x', ylabel='$|\\Psi|^2$')\npoints, = ax.plot([], [], marker='', linestyle='-', lw=3)\n\nx0=6\n\nfor ix in range(0,nx):\n psi0_r[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*math.cos(k0*ix*dx)\n psi0_i[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*math.sin(k0*ix*dx)\n\nx = np.arange(0, lx+dx, dx)\nV = np.zeros(nx+1)\n\nfor ix in range(nx//2-20,nx//2+20):\n V[ix]=2000.\n \ndef solve(i):\n global psi0_r, psi0_i\n\n psi0_r[1:-1] = psi0_r[1:-1]- alpha*(psi0_i[2:]+psi0_i[:-2]-2*psi0_i[1:-1])+dt*V[1:-1]*psi0_i[1:-1] \n psi0_i[1:-1] = psi0_i[1:-1]+ 
alpha*(psi0_r[2:]+psi0_r[:-2]-2*psi0_r[1:-1])-dt*V[1:-1]*psi0_r[1:-1] \n\n points.set_data(x,psi0_r**2 + psi0_i**2)\n return points\n\n\n#for i in range(2000):\n# solve(i)\n \n#pyplot.plot(x,psi0_r**2+psi0_i**2);\n\n\nanim = animation.FuncAnimation(fig, solve, frames = 2000, interval=10)\n\nHTML(anim.to_jshtml()) \n\n\n", "Exercise 8.2: Single-slit diffraction\nYoung’s single-slit experiment consists of a wave passing though a small\nslit, which causes the emerging wavelets to intefere with eachother\nforming a diffraction pattern. In quantum mechanics, where particles are\nrepresented by probabilities, and probabilities by wave packets, it\nmeans that the same phenomenon should occur when a particle (electron,\nneutron) passes though a small slit. Consider a wave packet of initial\nwidth 3 incident on a slit of width 5, and plot the probability density\n$|\\psi ^2|$ as the packet crosses the slit. Generalize the\ntime-evolution equation ([time_diff]) for 2 dimensions. Model the\nslit with a potential wall:\n$$V(x,y)=100 \\,\\,\\,\\,\\,\\ \\mathrm{for}\\,\\,x=10,|y|\\geq 2.5.$$", "%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot\nimport math\n\nlx = 20 #Box length in x\nly = 20 #Box length in y\ndx = 0.25 #Incremental step size in x (Increased this to decrease the time of the sim)\ndy = dx #Incremental step size in y\nnx = int(lx/dx) #Number of steps in x\nny = int(ly/dy) #Number of steps in y\ndt = dx**2/20. #Incremental step size in time\nsigma2 = 0.5**2 #Sigma2 Value\nk0 = np.pi*10.5 #K0 value\namp = math.pow(1./2., 64) #Amplitude (to avoid large values out of range. This was one issue)\n\nalpha = (dt/2.)/dx**2 #Alpha\n\npsi0_r = np.zeros(shape=(ny+1,nx+1)) #Initialize real part of psi\npsi0_i = np.zeros(shape=(ny+1,nx+1)) #Initialize imaginary part of psi\nV = np.zeros(shape=(ny+1,nx+1)) #Initialize Potential\n\n#Define your potential wall\nV = np.zeros(shape=(ny+1,nx+1))\nfor ix in range(nx//2-20,nx//2+20):\n for iy in range(0,ny):\n if(abs(iy*dy-ly/2)>2.5):\n V[iy,ix] = 200.\n\n#Initial conditions: wave packet\nx0 = 6.\ny0 = ly/2.\nfor x in range(0,nx):\n for y in range(0,ny):\n psi0_r[y,x] = math.exp(-0.5*((x*dx-x0)**2+(y*dy-y0)**2)/sigma2)*math.cos(k0*x*dx)\n psi0_i[y,x] = math.exp(-0.5*((x*dx-x0)**2+(y*dy-y0)**2)/sigma2)*math.sin(k0*x*dx)\n \nx = np.arange(0, lx+dx, dx)\ny = np.arange(0, ly+dy, dy)\nX, Y = np.meshgrid(x, y)\n\npyplot.contourf(X,Y,psi0_r**2+psi0_i**2)\npyplot.contour(X,Y,V)\n\n\n#Function to solve incremental changes in psi\ndef solve():\n #Grab psi lists\n global psi0_r, psi0_i\n\n #Calculate Imaginary Part in all points except for last 2 (because of indice notation)\n for x in range(1,nx-2):\n for y in range(1,ny-2):\n psi0_i[y,x] = psi0_i[y,x] + alpha*psi0_r[y,x+1] + alpha*psi0_r[y,x-1] - (2*alpha + dt*V[y,x])*psi0_r[y,x] + alpha*psi0_r[y+1,x] + alpha*psi0_r[y-1,x] - (2*alpha+dt*V[y,x])*psi0_r[y,x]\n\n #Calculate Real Part in all points except for last 2 (because of indice notation)\n for x in range(1,nx-2):\n for y in range(1,ny-2):\n psi0_r[y,x] = psi0_r[y,x] - alpha*psi0_i[y,x+1] - alpha*psi0_i[y,x-1] + (2*alpha + dt*V[y,x])*psi0_i[y,x] - alpha*psi0_i[y+1,x] - alpha*psi0_i[y-1,x] + (2*alpha+dt*V[y,x])*psi0_i[y,x]\n\n\nfor i in range(1000):\n solve()\n \npyplot.contourf(X,Y,psi0_r**2+psi0_i**2);\n#pyplot.contour(X,Y,V);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
lrayle/rental-listings-census
src/rental_listings_modeling.ipynb
bsd-3-clause
[ "import pandas as pd\nimport paramiko\nimport os\nimport numpy as np\nimport math\nimport json\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "This notebook explores merged craigslist listings/census data and fits some initial models\nRemote connection parameters\nIf data is stored remotely", "# TODO: add putty connection too. \n\n#read SSH connection parameters\nwith open('ssh_settings.json') as settings_file: \n settings = json.load(settings_file)\n\nhostname = settings['hostname']\nusername = settings['username']\npassword = settings['password']\nlocal_key_dir = settings['local_key_dir']\n\ncensus_dir = 'synthetic_population/'\n\"\"\"Remote directory with census data\"\"\"\n\nresults_dir = 'craigslist_census/'\n\"\"\"Remote directory for results\"\"\"\n\n# estbalish SSH connection\nssh = paramiko.SSHClient() \nssh.load_host_keys(local_key_dir)\nssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\nssh.connect(hostname,username=username, password=password)\nsftp = ssh.open_sftp()", "Data Preparation", "def read_listings_file(fname):\n \"\"\"Read csv file via SFTP and return as dataframe.\"\"\"\n with sftp.open(os.path.join(listings_dir,fname)) as f:\n df = pd.read_csv(f, delimiter=',', dtype={'fips_block':str,'state':str,'mpo_id':str}, date_parser=['date'])\n # TODO: parse dates. \n return df\n\ndef log_var(x):\n \"\"\"Return log of x, but NaN if zero.\"\"\"\n if x==0:\n return np.nan\n else:\n return np.log(x)\n \ndef create_census_vars(df):\n \"\"\"Make meaningful variables and return the dataframe.\"\"\"\n df['pct_white'] = df['race_of_head_1']/df['hhs_tot']\n df['pct_black'] = df['race_of_head_2']/df['hhs_tot']\n df['pct_amer_native'] = df['race_of_head_3']/df['hhs_tot']\n df['pct_alaska_native'] = df['race_of_head_4']/df['hhs_tot']\n df['pct_any_native'] = df['race_of_head_5']/df['hhs_tot']\n df['pct_asian'] = df['race_of_head_6']/df['hhs_tot']\n df['pct_pacific'] = df['race_of_head_7']/df['hhs_tot']\n df['pct_other_race'] = df['race_of_head_8']/df['hhs_tot']\n df['pct_mixed_race'] = df['race_of_head_9']/df['hhs_tot']\n df['pct_mover'] = df['recent_mover_1']/df['hhs_tot']\n df['pct_owner'] = df['tenure_1']/df['hhs_tot']\n df['avg_hh_size'] = df['persons_tot']/df['hhs_tot']\n df['cars_per_hh'] = df['cars_tot']/df['hhs_tot']\n \n df['ln_rent'] = df['rent'].apply(log_var)\n df['ln_income'] = df.income_med.apply(log_var)\n return df\n\ndef filter_outliers(df, rent_range=(100,10000),sqft_range=(10,5000)):\n \"\"\"Drop outliers from listings dataframe. For now, only need to filter out rent and sq ft. \n Args: \n df: Dataframe with listings. Cols names include ['rent','sqft']\n rent_range (tuple): min and max rent\n sqft_range (tuple): min and max sqft\n Returns: \n DataFrame: listings data without outliers. \n \"\"\"\n n0=len(df)\n df=df[(df.rent>=rent_range[0])&(df.rent<rent_range[1])]\n n1=len(df)\n print('Dropped {} outside rent range ${}-${}'.format(n0-n1,rent_range[0],rent_range[1]))\n df=df[(df.sqft>=sqft_range[0])&(df.sqft<sqft_range[1])]\n n2=len(df)\n print('Dropped {} outside sqft range {}-{} sqft. {} rows remaining'.format(n1-n2,sqft_range[0],sqft_range[1],len(df)))\n return(df)\n\n# get list of files and load. \n# for remotely stored data by state (just do one state for now)\nstate='CA'\ninfile='cl_census_{}.csv'.format(state)\n#data = read_listings_file(infile) # uncomment to get remote data. 
\n\n# for local data: \ndata_dir = '../data/'\ndata_file = 'sfbay_listings_03032017.csv'\n\ndata = pd.read_csv(os.path.join(data_dir,data_file),parse_dates=[1],dtype={'listing_id':str, 'rent':float, 'bedrooms':float, 'bathrooms':float, 'sqft':float,\n 'rent_sqft':float, 'fips_block':str, 'state':str, 'region':str, 'mpo_id':str, 'lng':float, 'lat':float,\n 'cars_tot':float, 'children_tot':float, 'persons_tot':float, 'workers_tot':float,\n 'age_of_head_med':float, 'income_med':float, 'hhs_tot':float, 'race_of_head_1':float,\n 'race_of_head_2':float, 'race_of_head_3':float, 'race_of_head_4':float, 'race_of_head_5':float,\n 'race_of_head_6':float, 'race_of_head_7':float, 'race_of_head_8':float, 'race_of_head_9':float,\n 'recent_mover_0':float, 'recent_mover_1':float, 'tenure_1':float, 'tenure_2':float})\n\nprint(len(data))\ndata.head()\n\n# for census vars, NA really means 0...\ncensus_cols = ['cars_tot', 'children_tot','persons_tot', 'workers_tot', 'age_of_head_med', 'income_med','hhs_tot', 'race_of_head_1', 'race_of_head_2', 'race_of_head_3','race_of_head_4', 'race_of_head_5', 'race_of_head_6', 'race_of_head_7','race_of_head_8', 'race_of_head_9', 'recent_mover_0', 'recent_mover_1','tenure_1', 'tenure_2']\nfor col in census_cols:\n data[col] = data[col].fillna(0)", "create variables\nvariable codes\nRace codes (from PUMS)\n1 .White alone\n 2 .Black or African American alone\n 3 .American Indian alone\n 4 .Alaska Native alone\n 5 .American Indian and Alaska Native tribes specified; or American\n .Indian or Alaska native, not specified and no other races\n 6 .Asian alone\n 7 .Native Hawaiian and Other Pacific Islander alone\n 8 .Some other race alone\n 9 .Two or more major race groups\ntenure_1 = owner (based on my guess; didn't match the PUMS codes)\nmover_1 = moved past year (based on my guess)", "# create useful variables \ndata = create_census_vars(data)\n\n# define some feature to include in the model. \nfeatures_to_examine = ['rent','ln_rent', 'bedrooms','bathrooms','sqft','pct_white', 'pct_black','pct_asian','pct_mover','pct_owner','income_med','age_of_head_med','avg_hh_size','cars_per_hh']\ndata[features_to_examine].describe()", "Filter outliers", "# I've already identified these ranges as good at exluding outliers\nrent_range=(100,10000)\nsqft_range=(10,5000)\ndata = filter_outliers(data, rent_range=rent_range, sqft_range=sqft_range)\n\n# Use this to explore outliers yourself. \ng=sns.distplot(data['rent'], kde=False)\ng.set_xlim(0,10000)\n\ng=sns.distplot(data['sqft'], kde=False)\ng.set_xlim(0,10000)", "Examine missing data", "# examine NA's\nprint('Total rows:',len(data))\nprint('Rows with any NA:',len(data[pd.isnull(data).any(axis=1)]))\nprint('Rows with bathroom NA:',len(data[pd.isnull(data.bathrooms)]))\nprint('% rows missing bathroom col:',len(data[pd.isnull(data.bathrooms)])/len(data))", "uh oh, 74% are missing bathrooms feature. Might have to omit that one. Only 0.02% of rows have other missing values, so that should be ok.", "#for d in range(1,31):\n# print(d,'% rows missing bathroom col:',len(data[pd.isnull(data.bathrooms)&((data.date.dt.month==12)&(data.date.dt.day==d))])/len(data[(data.date.dt.month==12)&(data.date.dt.day==d)]))", "Bathrooms were added on Dec 21. After that, if bathrooms aren't in the listing, the listing is thrown out. Let's try to find the date when the bathrooms column was added. So if need to use bathrooms feature, can use listings Dec 22 and after.", "# uncommon to only use data after Dec 21. 
\n#data=data[(data.date.dt.month>=12)&(data.date.dt.day>=22)]\n#data.shape\n\n# Uncomment to drop NA's\n#data = data.dropna()\n#print('Dropped {} rows with NAs'.format(n0-len(data)))\n\n", "Look at distributions\nSince rent has a more or less logarithmic distribution, use ln_rent instead", "p=sns.distplot(data.rent, kde=False)\np.set_title('rent')\n\np=sns.distplot(data.ln_rent, kde=False)\np.set_title('ln rent')\n\nplot_rows = math.ceil(len(features_to_examine)/2)\n\nf, axes = plt.subplots(plot_rows,2, figsize=(8,15))\nsns.despine(left=True)\n\nfor i,col in enumerate(features_to_examine):\n row_position = math.floor(i/2)\n col_position = i%2\n data_notnull = data[pd.notnull(data[col])] # exclude NA values from plot\n sns.distplot(data_notnull[col], ax=axes[row_position, col_position],kde=False)\n axes[row_position, col_position].set_title('{}'.format(col)) \n\nplt.tight_layout()\nplt.show()\n\ndata_notnull = data[pd.notnull(data['ln_income'])]\np=sns.distplot(data_notnull['ln_income'],kde=False)\np.set_title('ln med income')\n# ln med income is not more normal.. use med income instead. ", "look at correlations", "# correlation heatmap\ncorrmat=data[features_to_examine].corr()\ncorrmat.head()\n\nf, ax = plt.subplots(figsize=(12, 9))\nsns.heatmap(corrmat, vmax=.8, square=True)\n\nf.tight_layout()", "The correlations appear as expected, except for cars_per_hh. Maybe this is because cars_per_hh is reflecting the size of the household more than income. Might want to try cars per adult instead..", "print(data.columns)\n#'pct_amer_native','pct_alaska_native',\nx_cols = ['bedrooms','bathrooms', 'sqft','age_of_head_med', 'income_med','pct_white', 'pct_black', 'pct_any_native', 'pct_asian', 'pct_pacific',\n 'pct_other_race', 'pct_mixed_race', 'pct_mover', 'pct_owner', 'avg_hh_size', 'cars_per_hh']\ny_col = 'ln_rent'\n\nprint(len(data))\n\n# exclude missing values\ndata_notnull= data[(pd.notnull(data[x_cols])).all(axis=1)]\ndata_notnull= data_notnull[(pd.notnull(data_notnull[y_col]))]\nprint('using {} rows of {} total'.format(len(data_notnull),len(data)))", "Comparison of models\nTry a linear model\nWe'll start with a linear model to use as the baseline.", "from sklearn import linear_model, cross_validation\n\n# create training and testing datasets. \n# this creates a test set that is 30% of total obs. \nX_train, X_test, y_train, y_test = cross_validation.train_test_split(data_notnull[x_cols],data_notnull[y_col], test_size = .3, random_state = 201)\n\nregr = linear_model.LinearRegression()\nregr.fit(X_train, y_train)\n\n# Intercept\nprint('Intercept:', regr.intercept_)\n\n# The coefficients\nprint('Coefficients:')\npd.Series(regr.coef_, index=x_cols)\n\n# See mean square error, using test data\nprint(\"Mean squared error: %.2f\" % np.mean((regr.predict(X_test) - y_test) ** 2))\nprint(\"RMSE:\", np.sqrt(np.mean((regr.predict(X_test) - y_test) ** 2)))\n# Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % regr.score(X_test, y_test))\n\n# Plot predicted values vs. observed\nplt.scatter(regr.predict(X_train),y_train, color='blue',s=1, alpha=.5)\nplt.show()\n\n# plot residuals vs predicted values\nplt.scatter(regr.predict(X_train), regr.predict(X_train)- y_train, color='blue',s=1, alpha=.5)\nplt.scatter(regr.predict(X_test), regr.predict(X_test)- y_test, color='green',s=1, alpha=.5)\nplt.show()", "The residuals look pretty normally distributed.\nI wonder if inclusion of all these race variables is leading to overfitting. 
If so, we'd have small error on training set and large error on test set.", "print(\"Training set. Mean squared error: %.5f\" % np.mean((regr.predict(X_train) - y_train) ** 2), '| Variance score: %.5f' % regr.score(X_train, y_train))\nprint(\"Test set. Mean squared error: %.5f\" % np.mean((regr.predict(X_test) - y_test) ** 2), '| Variance score: %.5f' % regr.score(X_test, y_test))", "Try Ridge Regression (linear regression with regularization )\nSince the training error and test error are about the same, and since we're using few features, overfitting probably isn't a problem. If it were a problem, we would want to try a regression with regularization. \nLet's try it just for the sake of demonstration.", "from sklearn.linear_model import Ridge\n\n# try a range of different regularization terms.\nfor a in [10,1,0.1,.01,.001,.00001]:\n ridgereg = Ridge(alpha=a)\n ridgereg.fit(X_train, y_train)\n \n print('\\n alpha:',a)\n print(\"Mean squared error: %.5f\" % np.mean((ridgereg.predict(X_test) - y_test) ** 2),'| Variance score: %.5f' % ridgereg.score(X_test, y_test))\n\n# Intercept\nprint('Intercept:', ridgereg.intercept_)\n\n# The coefficients\nprint('Coefficients:')\npd.Series(ridgereg.coef_, index=x_cols)", "As expected, Ridge regression doesn't help much. \nThe best way to improve the model at this point is probably to add more features. \nRandom Forest", "from sklearn.ensemble import RandomForestRegressor\nfrom sklearn.model_selection import KFold\nfrom sklearn.metrics import mean_squared_error\n\ndef RMSE(y_actual, y_predicted):\n return np.sqrt(mean_squared_error(y_actual, y_predicted))\n\ndef cross_val_rf(X, y,max_f='auto', n_trees = 50, cv_method='kfold', k=5):\n \"\"\"Estimate a random forest model using cross-validation and return the average error across the folds.\n Args: \n X (DataFrame): features data\n y (Series): target data\n max_f (str or int): how to select max features to consider for the best split. \n If “auto”, then max_features=n_features.\n If “sqrt”, then max_features=sqrt(n_features)\n If “log2”, then max_features=log2(n_features)\n If int, then consider max_features features at each split\n n_trees (number of trees to build)\n cv_method (str): how to split the data ('kfold' (default) or 'timeseries')\n k (int): number of folds (default=5)\n Returns: \n float: mean error (RMSE) across all training/test sets.\n \"\"\"\n if cv_method == 'kfold':\n kf = KFold(n_splits=k, shuffle=True, random_state=2012016) # use random seed for reproducibility. \n \n E = np.ones(k) # this array will hold the errors. \n i=0\n for train, test in kf.split(X, y): \n train_data_x = X.iloc[train]\n train_data_y = y.iloc[train] \n test_data_x = X.iloc[test]\n test_data_y = y.iloc[test]\n\n # n_estimators is number of trees to build. \n # max_features = 'auto' means the max_features = n_features. This is a parameter we should tune. 
\n random_forest = RandomForestRegressor(n_estimators=n_trees, max_features=max_f, criterion='mse', max_depth=None)\n random_forest.fit(train_data_x,train_data_y)\n predict_y=random_forest.predict(test_data_x)\n E[i] = RMSE(test_data_y, predict_y)\n i+=1\n return np.mean(E)\n\n\ndef optimize_rf(df_X, df_y, max_n_trees=100, n_step = 20, cv_method='kfold', k=5): \n \"\"\"Optimize hyperparameters for a random forest regressor.\n Args: \n df_X (DataFrame): features data\n df_y (Series): target data\n max_n_trees (int): max number of trees to generate\n n_step (int): intervals to use for max_n_trees\n cv_method (str): how to split the data ('kfold' (default) or 'timeseries')\n k (int): number of folds (default=5)\n \"\"\"\n max_features_methods = ['auto','sqrt','log2'] # methods of defining max_features to try.\n \n # create a place to store the results, for easy plotting later. \n results = pd.DataFrame(columns=max_features_methods, index=[x for x in range(10,max_n_trees+n_step,n_step)])\n \n for m in max_features_methods:\n print('max_features:',m)\n for n in results.index:\n error = cross_val_rf(df_X, df_y,max_f=m, n_trees=n)\n print('n_trees:',n,' error:',error)\n results.ix[n,m] = error\n return results\n\n# data to use - exclude nulls\ndf_X = data_notnull[x_cols]\ndf_y = data_notnull[y_col]\nprint(df_X.shape, df_y.shape)\n#df_all = pd.concat([data_notnull[x_cols],data_notnull[y_col]], axis=1)\n#df_all.shape\n\n# basic model to make sure it workds\nrandom_forest = RandomForestRegressor(n_estimators=10, criterion='mse', max_depth=None)\nrandom_forest.fit(df_X,df_y)\ny_predict = random_forest.predict(df_X)\nRMSE(df_y,y_predict)", "We can use k-fold validation if we believe the samples are independently and identically distributed. That's probably fine right now because we have only 1.5 months of data, but later we may have some time-dependent processes in these timeseries data. If we do use k-fold, I think we should shuffle the samples, because they do not come in a non-random sequence.", "# without parameter tuning\ncross_val_rf(df_X,df_y)\n\n# tune the parameters\nrf_results = optimize_rf(df_X,df_y, max_n_trees = 100, n_step = 20) # this is sufficient; very little improvement after n_trees=100. \n#rf_results2 = optimize_rf(df_X,df_y, max_n_trees = 500, n_step=100)\nrf_results\n\nax = rf_results.plot()\nax.set_xlabel('number of trees')\nax.set_ylabel('RMSE')\n#rf_results2.plot()", "Using m=sqrt(n_features) and log2(n_features) gives similar performance, and a slight improvement over m = n_features. After about 100 trees the error levels off. One of the nice things about random forest is that using additional trees doesn't lead to overfitting, so we could use more, but it's not necessary. Now we can fit the model using n_trees = 100 and m = sqrt.", "random_forest = RandomForestRegressor(n_estimators=100, max_features='sqrt', criterion='mse', max_depth=None)\nrandom_forest.fit(df_X,df_y)\npredict_y=random_forest.predict(df_X)", "The 'importance' score provides an ordered qualitative ranking of the importance of each feature. 
It is calculated from the improvement in MSE provided by each feature when it is used to split the tree.", "# plot the importances\nrf_o = pd.DataFrame({'features':x_cols,'importance':random_forest.feature_importances_})\nrf_o= rf_o.sort_values(by='importance',ascending=False)\n\n\nplt.figure(1,figsize=(12, 6))\nplt.xticks(range(len(rf_o)), rf_o.features,rotation=45)\nplt.plot(range(len(rf_o)),rf_o.importance,\"o\")\nplt.title('Feature importances')\nplt.show()", "It's not surprising sqft is the most important predictor, although it is strange cars_per_hh is the second most important. I would have expected incometo be higher in the list. \nIf we don't think the samples are i.i.d., it's better to use time series CV.", "from sklearn.model_selection import TimeSeriesSplit\ntscv = TimeSeriesSplit(n_splits=5)", "Try Boosted Forest", "from sklearn.ensemble import GradientBoostingRegressor\n\ndef cross_val_gb(X,y,cv_method='kfold',k=5, **params):\n \"\"\"Estimate gradient boosting regressor using cross validation.\n \n Args: \n X (DataFrame): features data\n y (Series): target data\n cv_method (str): how to split the data ('kfold' (default) or 'timeseries')\n k (int): number of folds (default=5)\n **params: keyword arguments for regressor\n Returns: \n float: mean error (RMSE) across all training/test sets.\n \"\"\"\n if cv_method == 'kfold':\n kf = KFold(n_splits=k, shuffle=True, random_state=2012016) # use random seed for reproducibility. \n \n E = np.ones(k) # this array will hold the errors. \n i=0\n for train, test in kf.split(X, y): \n train_data_x = X.iloc[train]\n train_data_y = y.iloc[train] \n test_data_x = X.iloc[test]\n test_data_y = y.iloc[test]\n\n # n_estimators is number of trees to build. \n grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)\n grad_boost.fit(train_data_x,train_data_y)\n predict_y=grad_boost.predict(test_data_x)\n E[i] = RMSE(test_data_y, predict_y)\n i+=1\n return np.mean(E)\n\n\nparams = {'n_estimators':100,\n 'learning_rate':0.1,\n 'max_depth':1,\n 'min_samples_leaf':4\n }\ngrad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)\ngrad_boost.fit(df_X,df_y)\ncross_val_gb(df_X,df_y, **params)\n\nn_trees = 100\nl_rate = 0.1\nmax_d = 1\ncross_val_gb(df_X,df_y, l_rate,max_d)", "tune parameters\nThis time we'll use Grid Search in scikit-learn. This conducts an exhaustive search through the given parameters to find the best for the given estimator.", "from sklearn.model_selection import GridSearchCV\nparam_grid = {'learning_rate':[.1, .05, .02, .01],\n 'max_depth':[2,4,6],\n 'min_samples_leaf': [3,5,9,17],\n 'max_features': [1, .3, .1]\n }\n\nest= GradientBoostingRegressor(n_estimators = 1000)\ngs_cv = GridSearchCV(est,param_grid).fit(df_X,df_y)\n\nprint(gs_cv.best_params_)\nprint(gs_cv.best_score_)\n\n# best parameters\nparams = {'n_estimators':1000,\n 'learning_rate':0.05,\n 'max_depth':6,\n 'min_samples_leaf':3\n }\ngrad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)\ngrad_boost.fit(df_X,df_y)\ncross_val_gb(df_X,df_y, **params)\n\n# plot the importances\ngb_o = pd.DataFrame({'features':x_cols,'importance':grad_boost.feature_importances_})\ngb_o= gb_o.sort_values(by='importance',ascending=False)\n\n\nplt.figure(1,figsize=(12, 6))\nplt.xticks(range(len(gb_o)), gb_o.features,rotation=45)\nplt.plot(range(len(gb_o)),gb_o.importance,\"o\")\nplt.title('Feature importances')\nplt.show()", "Let's use partial_dependence to look at feature interactions. 
Look at the four most important features.", "from sklearn.ensemble.partial_dependence import plot_partial_dependence\nfrom sklearn.ensemble.partial_dependence import partial_dependence\n\ndf_X.columns\n\nfeatures = [0,1,2, 15, 4,5,14, 12]\nnames = df_X.columns\nfig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (10,8))\nfig.suptitle('Partial dependence of rental price features')\nplt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle\nplt.show()", "The partial dependence plots show how predicted values vary with the given covariate, \"controlling for\" the influence of other covariates (Friedman, 2001). In the top three plots above, we can see non-linear relationships between the features and predicted values. Bedrooms generally has a positive influence on rent, but it peaks at 5-6 bedrooms and then falls. With bedrooms, there is an interesting dip at 3 bedrooms. Perhaps this reflects lower rents for large shared houses that have many rooms that might be in older, poor-conditions buildings. \nThe most important feature, sqft, has a generally positive influence on rent, except for a dip at the high end. Also interesting is the plot for income_med; rent increases with income, but levels off at the high end. Put another way, income of the neigborhood matters, up to a limit.", "features = [(0,1),(0,2),(4,2), (4,15),(14,15)]\nnames = df_X.columns\nfig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (9,6))\nfig.suptitle('Partial dependence of rental price features')\nplt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.21/_downloads/a2556d0f8dd65be2930ab61c812863c7/plot_read_epochs.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading epochs from a raw FIF file\nThis script shows how to read the epochs from a raw file given\na list of events. For illustration, we compute the evoked responses\nfor both MEG and EEG data by averaging all the epochs.", "# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Matti Hämäläinen <msh@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))\n\nevoked = epochs.average() # average epochs to get the evoked response", "Show result", "evoked.plot(time_unit='s')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
secimTools/SECIMTools
notebook/pca_testing.ipynb
mit
[ "PCA Python vs R\nOriginally, R was used to calculate PCA using both princomp and prcomp. However, rpy2 stopped was intorducing some issues on the galaxy server. I decided to switch the calculation over to a pure python solution. scikit-learn has a PCA package which we can used, but it only does SVD and matches the output of prcomp with its default values. \nHere I am testing and figuing out how to output the different values.", "import pandas as pd\nimport numpy as np\nfrom sklearn.decomposition import PCA", "Import Example Data", "dat = pd.read_table('../example_data/ST000015_log.tsv')\ndat.set_index('Name', inplace=True)\n\ndat[:3]", "Use R to calculate PCA\nMi has looked at this already, but wanted to put the R example here to be complete. Here are the two R methods to output PCA", "%%R -i dat\n# First method uses princomp to calulate PCA using eigenvalues and eigenvectors\npr = princomp(dat)\n#str(pr)\nloadings = pr$loadings\nscores = pr$scores\n#summary(pr)\n\n%%R -i dat\npr = prcomp(dat)\n#str(pr)\nloadings = pr$rotation\nscores = pr$x\nsd = pr$sdev\n#summary(pr)", "Use Python to calculate PCA\nscikit-learn has a PCA package that we will use. It uses the SVD method, so results match the prcomp from R. \nhttp://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html\nGenerate PCA with default settings", "# Initiate PCA class\npca = PCA()\n\n# Fit the model and transform data\nscores = pca.fit_transform(dat)\n\n# Get loadings\nloadings = pca.components_\n\n# R also outputs the following in their summaries\nsd = loadings.std(axis=0)\npropVar = pca.explained_variance_ratio_\ncumPropVar = propVar.cumsum()", "I compared these results with prcomp and they are identical, note that the python version formats the data in scientific notation.\nBuild output tables that match the original PCA script\nBuild comment block\nAt the top of each output file, the original R version includes the standard deviation and the proportion of variance explained. I want to first build this block.", "# Labels used for the comment block\nlabels = np.array(['#Std. deviation', '#Proportion of variance explained', '#Cumulative proportion of variance explained'])\n\n# Stack the data into a matrix\ndata = np.vstack([sd, propVar, cumPropVar])\n\n# Add the labels to the first position in the matrix\nblock = np.column_stack([labels, data])\n\n# Create header\nheader = np.array(['Comp{}'.format(x+1) for x in range(loadings.shape[1])])\ncompoundIndex = np.hstack([dat.index.name, dat.index])\nsampleIndex = np.hstack(['sampleID', dat.columns])\n\n# Create loadings output\nloadHead = np.vstack([header, loadings])\nloadIndex = np.column_stack([sampleIndex, loadHead])\nloadOut = np.vstack([block, loadIndex])\n\n# Create scores output\nscoreHead = np.vstack([header, scores])\nscoreIndex = np.column_stack([compoundIndex, scoreHead])\nscoreOut = np.vstack([block, scoreIndex])\n\nnp.savetxt('/home/jfear/tmp/dan.tsv', loadOut, fmt=\"%s\", delimiter='\\t')\n\nbob = pd.DataFrame(loadOut)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Evfro/polara
examples/Hyper-parameter tuning and cross-validation tutorial.ipynb
mit
[ "Introduction\n<div class=\"alert alert-block alert-info\">In this tutorial we will go through a full cycle of model tuning and evaluation to perform a fair comparison of recommendation algorithms with Polara.</div>\n\nThis will include 2 phases: grid-search for finding (almost) optimal values of hyper-parameters and verification of results via 5-fold cross-validation. We will compare a popular ALS-based matrix factorization (MF) model called Weighted Regularized Matrix Factorization (WRMF a.k.a. iALS) [Hu, 2008] with much simpler SVD-based models.\nWe will use a standard sparse implementation of SVD from Scipy for the latter and a great library called implicit for iALS. Both are wrapped by Polara and can be accessed via the corresponding recommender model classes. Due to its practicality the implicit library is often recommended to beginners and sometimes even serves as a default tool in production. On the other hand, there are some important yet often overlooked features, which make SVD-based models stand out. Ignoring them in my opinion leads to certain misconceptions and myths, not to say that it also overcomplicates things quite a bit.\nNote that by saying SVD I really mean Singular Value Decomposition, not just an arbitrary matrix factorization. In that sense, methods like FunkSVD, SVD++, SVDFeature, etc., are not SVD-based at all, even though historically they use SVD acronym in their names and are often referenced as if they are real substitutes for SVD. These methods utilize another optimization algorithm, typically based on stochastic gradient descent, and do not preserve the algebraic properties of SVD. This is really an important distinction, especially in the view of the following remarks:\n\nSVD-based approach has a number of unique and beneficial properties. To name a few, it produces stable and determenistic output with global guarantees (can be critical for non-regression testing). It admits the same prediction formula for both known and previously unseen users as long as at least one user rating is known (this is especially handy for online regime). It requires minimal tuning and allows to compute and store a single latent feature matrix - either for users or for items - instead of computing and storing both of them. This luxury is not available in the majority of other matrix factorization approaches, definitely not in popular ones. \nThanks to the Lanczos procedure, computational complexity of truncated SVD scales linearly with the number of known observations and with the number of users/items. It scales quadratically only with the rank of decomposition. There are open source implementations, allowing to handle nearly billion-scale problems with one of its efficient randomized versions. \nAt least in some cases the simplest possible model called PureSVD outperforms other more sophisticated matrix factorization methods [Cremonesi, 2010].\nMoreover, PureSVD can be quite easily tuned to perform much better [Nikolakopoulos, 2019].\nFinally, it can take a hybrid form to include side information via the generalized formulation (see Chapter 6 of my thesis). Even without hybridization it can be quite successfully applied in the cold start regime [Frolov, 2019]. 
\n\nDespite that impresisve list, SVD-based models rarely get into the list of baselines to compare or to start with.\n<div class=\"alert alert-block alert-info\">Hence, this tutorial also aims at performing an assessment of the default choice of many practitioners to see whether it really stays advantageous over the simpler SVD-based approach after a thorough tuning of both models.</div>\n\nReferences\n\n[Hu, 2008] Hu Y., Koren, Y. and Volinsky, C., 2008. Collaborative Filtering for Implicit Feedback Datasets. In ICDM (Vol. 8, pp. 263-272). Link. \n[Cremonesi, 2010] Cremonesi, P., Koren, Y. and Turrin, R., 2010. Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the fourth ACM conference on Recommender systems (pp. 39-46). ACM. Link. \n[Nikolakopoulos, 2019] Nikolakopoulos, A.N., Kalantzis, V., Gallopoulos, E. and Garofalakis, J.D., 2019. EigenRec: generalizing PureSVD for effective and efficient top-N recommendations. Knowledge and Information Systems, 58(1), pp.59-81. Link.\n[Frolov, 2019] Frolov, E. and Oseledets, I., 2019. HybridSVD: When Collaborative Information is Not Enough. To appear in Proceedings of the Thirteenth ACM Conference on Recommender Systems. ACM. Link\n\nDownloading data\nWe will use the Movielens-10M dataset for our experiments. It's large enough to perform reliable evaluation; however, not that large that you would spend too many hours waiting for results. If you don't plan to play with the code, you may want to run all cells with a single command and leave the notebook running in the background, while reading the text. It takes around 1.5h to complete this notebook on a modern laptop.\n<div class=\"alert alert-block alert-warning\">Note that you'll need an internet connection in order to run the cell below.</div>\n\nIt will automatically download data, store it in a temporary location, and convert into a pandas dataframe. Alternatively, if you have already downloaded the dataset, you can use its local path as an input for the get_movielens_data function instead of tmp_file.", "import urllib\nfrom polara import (get_movielens_data, # returns data in the pandas dataframe format\n RecommenderData) # provides common interface to access data \n\nurl = 'http://files.grouplens.org/datasets/movielens/ml-10m.zip'\ntmp_file, _ = urllib.request.urlretrieve(url) # this may take some time depending on your internet connection\n\ndata = get_movielens_data(tmp_file, include_time=True)\ndata.head()", "The resulting dataframe has a bit more than 10M ratings, as expected.", "data.shape", "Basic data stats:", "data.apply('nunique')", "Preparing data\nAs always, you need to firstly define a data model that will provide a common interface for all recommendation algorithms used in experiments:", "data_model = RecommenderData(data, 'userid', 'movieid', 'rating', custom_order='timestamp', seed=0)\ndata_model.fields", "Setting seed=0 ensures controllable randomization when sampling test data, which enhances reproducibility; custom_order allows to select observations for evaluation based on their timestamp, rather than on rating value (more on that later). \nLet's look at the default configuration of data model:", "data_model.get_configuration()", "By default, Polara samples 20% of users and marks them for test (test_ratio attribute). These users would be excluded from the training dataset, if warm_start attribute remained set to True (strong generalization test). 
However, in the iALS case such setting would require running additional half-step optimization (folding-in) for each test user in order to obtain their latent representation. To avoid that we turn the \"warm start\" setting off and perform standard evaluation (weak generalization test). In that case test users are part of the training (except for the ratings that were held out for evaluation) and one can directly invoke scalar products of latent factors. Note that SVD recommendations do not depend on this setting due to uniform projection formula applicable to both known and \"warm\" users: $r = VV^\\top p$, where $r$ is a vector of predicted relevance scores, $p$ is a vector of any known preferences and $V$ is an item latent features matrix.", "data_model.holdout_size = 1 # hold out 1 item from every test user\ndata_model.random_holdout = False # take items with the latest timstamp\ndata_model.warm_start = False # standard case\ndata_model.prepare()", "The holdout_size attribute controls how many user preferences will be used for evaluation. Current configuration instructs data model to holdout one item from every test user. The random_holdout=False setting along with custom_order input argumment of data model make sure that only the latest rated item is taken for evaluation, allowing to avoid \"recommendations from future\". All these items are available via data_model.test.holdout.\nGeneral configuration\ntechnical settings\nSometimes due to the size of dataset evaluation make take considerably longer than actual training time. If that's the case, you may want to give Polara more resources to perform evaluation, which is mostly controlled by changing memory_hard_limit and max_test_workers settings from their defaults. The former defines how much memory is allowed to use when generating predictions for test users. Essentially, Polara avoids running an inefficient by-user-loop for that task and makes calculation in bulk, which allos to invoke linear algebra kernels and speed up calculations. This, however, generates dense $M \\times N$ matrix, where $M$ is the number of test users and $N$ is the number of all items seen during training. If this matrix doesn't fit into the memory constraint defined by memory_hard_limit (1Gb by default), calculations will be perfomed on a sequence of groups of $m<M$ users so that each group respects the constraint. If you have enough resources it can be a good idea to increase this memory limit. However, depending on hardware specs, manipulating huge amount of test data can be also slow. In that case it can be useful to still split test users into smaller group, however run calculations on each group in parallel. This can be achieved by setting max_test_workers integer to some number above 0, which will spawn the corresponding number of parallel threads. For example, instead of generating 60Gb matrix in a single run, one can define memory_hard_limit=15 and max_test_workers=4, which may help complete calculations faster.\nIn our case simply increasing the memory limit to 2Gb is sufficient for optimal performance.", "from polara.recommender import defaults\n\ndefaults.memory_hard_limit = 2\ndefaults.max_test_workers = None # None is the default value", "common config\nevaluation\nIn order to avoid undesired effects, related to the positivity bias, models will be trained only on interactions with ratings not lower than 4. This can be achieved by setting feedback_threshold attribute of models to 4. 
Due to the same reason, during the evaluation only items rated with ratings $\\geq 4$ will be counted as true positive and used to calculate evaluation scores. This is controlled by the switch_positive setting.", "defaults.switch_positive = 4\ninit_config = {'feedback_threshold': 4} # alternatively could set defaults.feedback_threshold", "The default metric for tuning hyper-parameters and selecting the best model will be Mean Reciprocal Rank (MRR).", "target_metric = 'mrr'", "model tuning\nAll MF models will be tested on the following grid of rank values (number of latent features):", "max_rank = 150\nrank_grid = range(10, max_rank+1, 10)", "Creating and tuning models\nWe will start from a simple PureSVD model and use it as a baseline in comparison with its own scaling modification and iALS algorithm.\nPureSVD\nOn one hand, tuning SVD is limited due to its strict least squares formulation, which doesn't leave too much freedom comparing to more general matrix factorization approaches. On the other hand, this means less parameters to tune. Moreover, for whatever configuration of hyper-parameters, once you compute some SVD-based model of rank $r$ you can immediately obtain a model of rank $r' < r$ by a simple truncation procedure, which doesn't require recomputing the model. This makes grid-search with SVD very efficient and allows to explore a broader hyper-parameter space with less time.", "try: # import package to track grid search progress\n from ipypb import track # lightweight progressbar, doesn't depend on widgets\nexcept ImportError:\n from tqdm import tqdm_notebook as track\n\nfrom polara import SVDModel\nfrom polara.evaluation.pipelines import find_optimal_svd_rank\n\n%matplotlib inline\n\npsvd = SVDModel(data_model)\n\n# the model will be computed only once for the max value of rank\npsvd_best_rank, psvd_rank_scores = find_optimal_svd_rank(psvd,\n rank_grid,\n target_metric,\n config=init_config,\n return_scores=True,\n iterator=track)", "Note that in this case the most of the time is spent on evaluation, rather than on model computation. The model was computed only once. You can verify it by calling psvd.training_time list attribute and seeing that it contains only one entry:", "psvd.training_time", "Let's see how quality of recommendations changes with rank (number of latent features).", "ax = psvd_rank_scores.plot(ylim=(0, None))\nax.set_xlabel('# of latent factors')\nax.set_ylabel(target_metric.upper());", "Scaled PureSVD model\nWe will employ a simple scaling trick over the rating matrix $R$ that was proposed by the authors of the EIGENREC model [Nikolakopoulos2019]: $R \\rightarrow RD^{f-1},$\nwhere $D$ is a diagonal scaling matrix with elements corresponding to the norm of the matrix columns (or square root of the number of nonzero elements in each column for the binary case). Parameter $f$ controls the effect of scaling and typically lies in the range [0, 1]. Finding the optimal value is an experimental task and will be performed via grid-search. We will use built-in support for such model in Polara. If you're interested in technical aspects of this implementation, see Reproducing EIGENREC results tutorial.\nGrid search", "from polara.recommender.models import ScaledSVD\nfrom polara.evaluation.pipelines import find_optimal_config # generic routine for grid-search", "Now we have to compute SVD model for every value of $f$. 
However, we can still avoid computing the model for each rank value by the virtue of rank truncation.", "def fine_tune_scaledsvd(model, ranks, scale_params, target_metric, config=None):\n rev_ranks = sorted(ranks, key=lambda x: -x) # descending order helps avoiding model recomputation\n param_grid = [(s1, r) for s1 in scale_params for r in rev_ranks]\n param_names = ('col_scaling', 'rank')\n return find_optimal_config(model,\n param_grid,\n param_names,\n target_metric,\n init_config=config,\n return_scores=True,\n force_build=False, # avoid recomputing the model\n iterator=track)", "We already know an approximate range of values for the scaling factor. You may also want to play with other values, especially when working with a different dataset.", "ssvd = ScaledSVD(data_model) # create model\nscaling = [0.2, 0.4, 0.6, 0.8]\n\nssvd_best_config, ssvd_scores = fine_tune_scaledsvd(ssvd,\n rank_grid,\n scaling,\n target_metric,\n config=init_config)", "Note that during this grid search the model was computed only len(scaling)=4 number of times, other points were found via rank truncation. Let's see how quality changes with different values of scaling parameter $f$.", "for cs in scaling:\n cs_scores = ssvd_scores.xs(cs, level='col_scaling')\n ax = cs_scores.plot(label=f'col_scaling: {cs}')\nax.set_title(f'Recommendations quality for {ssvd.method} model')\nax.set_ylim(0, None)\nax.set_ylabel(target_metric.upper())\nax.legend();\n\nssvd_rank_scores = ssvd_scores.xs(ssvd_best_config['col_scaling'], level='col_scaling')", "The optimal set of hyper-parameters:", "ssvd_best_config", "iALS\nUsing implicit library in Polara is almost as simple as using SVD-based models. Make sure you have it installed in your python environment (follow instructions at https://github.com/benfred/implicit ).", "import os; os.environ[\"MKL_NUM_THREADS\"] = \"1\" # as required by implicit\nimport numpy as np\n\nfrom polara.recommender.external.implicit.ialswrapper import ImplicitALS\nfrom polara.evaluation.pipelines import random_grid, set_config", "defining hyper-parameter grid\nHyper-parameter space in that case is much broader. We will start by adjusting all hyper-parameters expect the rank value and then, once an optimal config is found, we will perform full grid-search over the range of rank values defined by rank_grid.", "als_params = dict(alpha = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100],\n epsilon = [0.01, 0.03, 0.1, 0.3, 1],\n weight_func = [None, np.sign, np.sqrt, np.log2, np.log10],\n regularization = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3],\n rank = [40] # enforce rank value for quick exploration of other parameters\n )", "In order to avoid too long computation time, grid-search is performed over 60 random points, which is enough to get within 5% of the optimum with 95% confidence. 
The grid is generated with the built-in random_grid function.", "als_param_grid, als_param_names = random_grid(als_params, n=60)", "random grid search", "ials = ImplicitALS(data_model) # create model\n\nials_best_config, ials_grid_scores = find_optimal_config(ials,\n als_param_grid, # hyper-parameters grid\n als_param_names, # hyper-parameters' names\n target_metric,\n init_config=init_config,\n return_scores=True,\n iterator=track)", "rank tuning\nIn contrast to the case of SVD-based algorithms, iALS requires recomputing the model for every new rank value, therefore in addition to the previous 60 times, the model will be computed len(rank_grid) more times for all rank values.", "ials_best_rank, ials_rank_scores = find_optimal_config(ials,\n rank_grid,\n 'rank',\n target_metric,\n # configs are applied in the order they're provided\n init_config=[init_config,\n ials_best_config],\n return_scores=True,\n iterator=track)", "Let's combine the best rank value with other optimal parameters:", "ials_best_config.update(ials_best_rank)\nials_best_config", "visualizing rank tuning results\nWe can now see how all three algorithms compare to each other.", "def plot_rank_scores(scores):\n ax = None\n for sc in scores:\n ax = sc.sort_index().plot(label=sc.name, ax=ax)\n ax.set_ylim(0, None)\n ax.set_title('Recommendations quality')\n ax.set_xlabel('# of latent factors')\n ax.set_ylabel(target_metric.upper());\n ax.legend()\n return ax\n\nplot_rank_scores([ssvd_rank_scores,\n ials_rank_scores,\n psvd_rank_scores]);", "It can be seen that scaling (PureSVD-s line) has a significant impact on the quality of recommendations. This is, however, a preliminary result, which is yet to be verified via cross-validation.\nModels comparison\nThe results above were computed only with a single split into train-test corresponding to a single fold. In order to verify the obtained results, perform a full CV with optimal parameters fixed. It can be achieved with the built-in run_cv_experiment function from Polara's evaluation engine as shown below.\ncross-validation experiment", "from polara.evaluation import evaluation_engine as ee", "Fixing optimal configurations:", "set_config(psvd, {'rank': psvd_best_rank})\nset_config(ssvd, ssvd_best_config)\nset_config(ials, ials_best_config)", "Performing 5-fold CV:", "models = [psvd, ssvd, ials]\nmetrics = ['ranking', 'relevance', 'experience']\n\n# run experiments silently\ndata_model.verbose = False\nfor model in models:\n model.verbose = False\n\n# perform cross-validation on models, report scores according to metrics\ncv_results = ee.run_cv_experiment(models,\n metrics=metrics,\n iterator=track)", "The output contains results for all folds:", "cv_results.head()", "plotting results\nWe will plot average scores and confidence intervals for them. 
The following function will do this based on raw input from CV:", "def plot_cv_results(scores, subplot_size=(6, 3.5)):\n scores_mean = scores.mean(level='model')\n scores_errs = ee.sample_ci(scores, level='model')\n # remove top-level columns with classes of metrics (for convenience)\n scores_mean.columns = scores_mean.columns.get_level_values(1)\n scores_errs.columns = scores_errs.columns.get_level_values(1)\n # plot results\n n = len(scores_mean.columns)\n return scores_mean.plot.bar(yerr=scores_errs, rot=0,\n subplots=True, layout=(1, n),\n figsize=(subplot_size[0]*n, subplot_size[1]),\n legend=False);\n\nplot_cv_results(cv_results);", "The difference between PureSVD and iALS is not significant.\n<div class=\"alert alert-block alert-success\">In contrast, the advantage of the scaled version of PureSVD denoted as `PureSVD-s` over the other models is much more pronounced making it a clear favorite.</div>\nInterestingly, the difference is especially pronounced in terms of the coverage metric, which is defined as the ratio of unique recommendations generated for all test users to the total number of items in the training data. This indicates that generated recommendations are not only more relevant but also are significantly more diverse. \ncomparing training time\nAnother important practical aspect is how long does it take to compute a model? Sometimes the best model in terms of quality of recommendations can be the slowest to compute. You can check each model's training time by accessing the training_time list attribute. It holds the history of trainings, hence, if you have just performed 5-fold CV experiment, the last 5 entries in this list will correspond to the training time on each fold. This information can be used to get average time with some error bounds as shown below.", "import pandas as pd\n\ntimings = {}\nfor model in models:\n timings[f'{model.method} rank {model.rank}'] = model.training_time[-5:]\n\ntime_df = pd.DataFrame(timings)\ntime_df.mean().plot.bar(yerr=time_df.std(), rot=0, title='Computation time for optimal config, s');", "PureSVD-s compares favoribly to the iALS, even though it requires higher rank value, which results in a longer training time comparing to PureSVD. Another interesting measure is what time does it take to achieve approximately the same quality by all models. \nNote that all models give approximately the same quality at the optimal rank of iALS. Let's compare training time for this value of rank.", "fixed_rank_timings = {}\nfor model in models:\n model.rank = ials_best_config['rank']\n model.build()\n fixed_rank_timings[model.method] = model.training_time[-1]\n\npd.Series(fixed_rank_timings).plot.bar(rot=0, title=f'Rank {ials.rank} computation time, s')", "By all means computing SVD on this dataset is much faster than ALS. This may, however, vary on other datasets due to a different sparsity structure. Nevertheless, you can still expect, that SVD-based models will be perfroming well due to the usage of highly optimized BLAS and LAPACK routines.\nBonus: scaling for iALS\nYou may reasonably question whether that scaling trick also works for non SVD-based models. Let's verify its applicability for iALS. 
We will reuse Polara's built-in scaling functions in order to create a new class of the scaled iALS-based model.", "from polara.recommender.models import ScaledMatrixMixin\n\nclass ScaledIALS(ScaledMatrixMixin, ImplicitALS): pass # similarly to how PureSVD is extended to its scaled version\n\nsals = ScaledIALS(data_model)", "In order to save time, we will utilize the optimal configuration for scaling, found by tuning scaled version of PureSVD. Alternatively, you could include scaling parameters into the grid search step by extending als_param_grid and als_param_names variables. However, taking configuration of PureSVD-s should be a good enough approximation at least for verifying the effect of scaling. The tuning itself has to be repeated from the beginning.\nhyper-parameter tuning", "sals_best_config, sals_param_scores = find_optimal_config(sals,\n als_param_grid,\n als_param_names,\n target_metric,\n init_config=[init_config,\n ssvd_best_config], # the rank value will be overriden\n return_scores=True,\n iterator=track)\n\nsals_best_rank, sals_rank_scores = find_optimal_config(sals,\n rank_grid,\n 'rank',\n target_metric,\n init_config=[init_config,\n ssvd_best_config,\n sals_best_config],\n return_scores=True,\n iterator=track)", "visualizing rank tuning results", "plot_rank_scores([ssvd_rank_scores,\n sals_rank_scores,\n ials_rank_scores,\n psvd_rank_scores]);", "There seem to be no difference between the original and scaled versions of iALS. Let's verify this with CV experiment.\ncross-validation\nYou only need to perform CV computations for the new model. Configuration of data will be the same as previously, as the data_model instance ensures reproducible data state.", "sals_best_config.update(sals_best_rank)\nsals_best_config\n\nset_config(sals, sals_best_config)\n\nsals.verbose = False\nsals_cv_results = ee.run_cv_experiment([sals],\n metrics=metrics,\n iterator=track)\n\nplot_cv_results(cv_results.append(sals_cv_results));", "<div class=\"alert alert-block alert-warning\">Surprisingly, the iALS model remains largely insensitive to the scaling trick. At least in the current settings and for the current dataset.</div>\n\nRemark: You may want to repeat all experiments in a different setting with random_holdout set to True. My own results indicate that in this case iALS performs slightly better than PureSVD and also becomes more responsive to scaling, giving the same result as the scaled version of SVD. However, the scaled version of SVD is still easier to compute and tune.\nConclusion\nWith a proper tuning the quality of recommendations of one of the simplest SVD-based models can be substantially improved. Despite common beliefs, it turns out that PureSVD with simple scaling trick compares favorably to a much more popular iALS algorithm. The former not only generates more relevant recommendations, but also makes them more diverse and potentially more interesting. In addition to that, it has a number of unique advantages and merely requires to do from scipy.sparse.linalg import svds to get started. 
Of course, the obtained results may not necessarily hold on all other datasets and require further verification.\n<div class=\"alert alert-block alert-success\">However, in the view of all its features and advantages, the scaled SVD-based model certainly deserves to be included into the list of the default baselines.</div>\n\nThe result obtained here resembles situation in the Natural Language Processing field, where simple SVD-based model with proper tuning turns out to be competitive on a variety of downstream tasks even in comparison with Neural Network-based models. In the end it's all about adequate tuning and fair comparison.\nAs a final remark, this tutorial is a part of a series of tutorials demonstrating usage scenarios for the Polara framework. Polara is designed to support openness and reproducibility of research. It provides controlled environment and rich functionality with high level abstractions, which allows conducting thorough experiments with minimal efforts and can be used to either quickly test known ideas or implement something new." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jastarex/DeepLearningCourseCodes
05_Image_recognition_and_classification/cnn.ipynb
apache-2.0
[ "卷积神经网络 - 图像分类与识别 - TensorFlow实现\n需要Anaconda3-1.3.0 (Python 3)环境运行", "import time\nimport math\nimport random\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport dataset\nimport cv2\n\nfrom sklearn.metrics import confusion_matrix\nfrom datetime import timedelta\n\n%matplotlib inline", "超参配置", "# Convolutional Layer 1.\nfilter_size1 = 3 \nnum_filters1 = 32\n\n# Convolutional Layer 2.\nfilter_size2 = 3\nnum_filters2 = 32\n\n# Convolutional Layer 3.\nfilter_size3 = 3\nnum_filters3 = 64\n\n# Fully-connected layer.\nfc_size = 128 # Number of neurons in fully-connected layer.\n\n# Number of color channels for the images: 1 channel for gray-scale.\nnum_channels = 3\n\n# image dimensions (only squares for now)\nimg_size = 128\n\n# Size of image when flattened to a single dimension\nimg_size_flat = img_size * img_size * num_channels\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# class info\nclasses = ['dogs', 'cats']\nnum_classes = len(classes)\n\n# batch size\nbatch_size = 32\n\n# validation split\nvalidation_size = .16\n\n# how long to wait after validation loss stops improving before terminating training\nearly_stopping = None # use None if you don't want to implement early stoping\n\ntrain_path = 'data/train/'\ntest_path = 'data/test/'\ncheckpoint_dir = \"models/\"", "数据载入", "data = dataset.read_train_sets(train_path, img_size, classes, validation_size=validation_size)\ntest_images, test_ids = dataset.read_test_set(test_path, img_size)\n\nprint(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(test_images)))\nprint(\"- Validation-set:\\t{}\".format(len(data.valid.labels)))", "绘图函数\nFunction used to plot 9 images in a 3x3 grid (or fewer, depending on how many images are passed), and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None):\n \n if len(images) == 0:\n print(\"no images to show\")\n return \n else:\n random_indices = random.sample(range(len(images)), min(len(images), 9))\n \n \n images, cls_true = zip(*[(images[i], cls_true[i]) for i in random_indices])\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_size, img_size, num_channels))\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "随机显示图像,检查是否载入正确", "# Get some random images and their labels from the train set.\n\nimages, cls_true = data.train.images, data.train.cls\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)", "TensorFlow 图模型\n主要包含以下部分:\n* Placeholder variables used for inputting data to the graph.\n* Variables that are going to be optimized so as to make the convolutional network perform better.\n* The mathematical formulas for the convolutional network.\n* A cost measure that can be used to guide the optimization of the variables.\n* An optimization method which updates the 
variables.\n协助创建新参数的函数\nFunctions for creating new TensorFlow variables in the given shape and initializing them with random values. Note that the initialization is not actually done at this point, it is merely being defined in the TensorFlow graph.", "def new_weights(shape):\n return tf.Variable(tf.truncated_normal(shape, stddev=0.05))\n\ndef new_biases(length):\n return tf.Variable(tf.constant(0.05, shape=[length]))", "协助创建新卷积层的函数\n此函数用于定义模型,输入维度假设为4维:\n1. Image number. 图片数量\n2. Y-axis of each image. 图片高度\n3. X-axis of each image. 图片宽度\n4. Channels of each image. 图片通道数\n通道数可能为原始图片色彩通道数,也可能为之前所生成特征图的通道数\n输出同样为4维:\n1. Image number, same as input. 图片数量\n2. Y-axis of each image. 图片高度,如经过2x2最大池化,则减半\n3. X-axis of each image. 同上\n4. Channels produced by the convolutional filters. 输出特征图通道数,由本层卷积核数目决定", "def new_conv_layer(input, # The previous layer.\n num_input_channels, # Num. channels in prev. layer.\n filter_size, # Width and height of each filter.\n num_filters, # Number of filters.\n use_pooling=True): # Use 2x2 max-pooling.\n\n # Shape of the filter-weights for the convolution.\n # This format is determined by the TensorFlow API.\n shape = [filter_size, filter_size, num_input_channels, num_filters]\n\n # Create new weights aka. filters with the given shape.\n weights = new_weights(shape=shape)\n\n # Create new biases, one for each filter.\n biases = new_biases(length=num_filters)\n\n # Create the TensorFlow operation for convolution.\n # Note the strides are set to 1 in all dimensions.\n # The first and last stride must always be 1,\n # because the first is for the image-number and\n # the last is for the input-channel.\n # But e.g. strides=[1, 2, 2, 1] would mean that the filter\n # is moved 2 pixels across the x- and y-axis of the image.\n # The padding is set to 'SAME' which means the input image\n # is padded with zeroes so the size of the output is the same.\n layer = tf.nn.conv2d(input=input,\n filter=weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n\n # Add the biases to the results of the convolution.\n # A bias-value is added to each filter-channel.\n layer += biases\n\n # Use pooling to down-sample the image resolution?\n if use_pooling:\n # This is 2x2 max-pooling, which means that we\n # consider 2x2 windows and select the largest value\n # in each window. Then we move 2 pixels to the next window.\n layer = tf.nn.max_pool(value=layer,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n\n # Rectified Linear Unit (ReLU).\n # It calculates max(x, 0) for each input pixel x.\n # This adds some non-linearity to the formula and allows us\n # to learn more complicated functions.\n layer = tf.nn.relu(layer)\n\n # Note that ReLU is normally executed before the pooling,\n # but since relu(max_pool(x)) == max_pool(relu(x)) we can\n # save 75% of the relu-operations by max-pooling first.\n\n # We return both the resulting layer and the filter-weights\n # because we will plot the weights later.\n return layer, weights", "协助展开(一维化)卷积层的函数\nA convolutional layer produces an output tensor with 4 dimensions. 
We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.", "def flatten_layer(layer):\n # Get the shape of the input layer.\n layer_shape = layer.get_shape()\n\n # The shape of the input layer is assumed to be:\n # layer_shape == [num_images, img_height, img_width, num_channels]\n\n # The number of features is: img_height * img_width * num_channels\n # We can use a function from TensorFlow to calculate this.\n num_features = layer_shape[1:4].num_elements()\n \n # Reshape the layer to [num_images, num_features].\n # Note that we just set the size of the second dimension\n # to num_features and the size of the first dimension to -1\n # which means the size in that dimension is calculated\n # so the total size of the tensor is unchanged from the reshaping.\n layer_flat = tf.reshape(layer, [-1, num_features])\n\n # The shape of the flattened layer is now:\n # [num_images, img_height * img_width * num_channels]\n\n # Return both the flattened layer and the number of features.\n return layer_flat, num_features", "协助创建全连接层的函数\nThis function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.\nIt is assumed that the input is a 2-dim tensor of shape [num_images, num_inputs]. The output is a 2-dim tensor of shape [num_images, num_outputs].", "def new_fc_layer(input, # The previous layer.\n num_inputs, # Num. inputs from prev. layer.\n num_outputs, # Num. outputs.\n use_relu=True): # Use Rectified Linear Unit (ReLU)?\n\n # Create new weights and biases.\n weights = new_weights(shape=[num_inputs, num_outputs])\n biases = new_biases(length=num_outputs)\n\n # Calculate the layer as the matrix multiplication of\n # the input and weights, and then add the bias-values.\n layer = tf.matmul(input, weights) + biases\n\n # Use ReLU?\n if use_relu:\n layer = tf.nn.relu(layer)\n\n return layer", "Placeholder 参数定义\nPlaceholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.", "x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')", "The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:", "x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])", "Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. 
The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes.", "y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')", "We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.", "y_true_cls = tf.argmax(y_true, dimension=1)", "卷积层 1\nCreate the first convolutional layer. It takes x_image as input and creates num_filters1 different filters, each having width and height equal to filter_size1. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.", "layer_conv1, weights_conv1 = \\\n new_conv_layer(input=x_image,\n num_input_channels=num_channels,\n filter_size=filter_size1,\n num_filters=num_filters1,\n use_pooling=True)\n\nlayer_conv1", "卷积层 2 和 3\nCreate the second and third convolutional layers, which take as input the output from the first and second convolutional layer respectively. The number of input channels corresponds to the number of filters in the previous convolutional layer.", "layer_conv2, weights_conv2 = \\\n new_conv_layer(input=layer_conv1,\n num_input_channels=num_filters1,\n filter_size=filter_size2,\n num_filters=num_filters2,\n use_pooling=True)\n\nlayer_conv2\n\nlayer_conv3, weights_conv3 = \\\n new_conv_layer(input=layer_conv2,\n num_input_channels=num_filters2,\n filter_size=filter_size3,\n num_filters=num_filters3,\n use_pooling=True)\n\nlayer_conv3", "展开层\nThe convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires for the tensors to be reshaped or flattened to 2-dim tensors.", "layer_flat, num_features = flatten_layer(layer_conv3)\n\nlayer_flat\n\nnum_features", "全连接层 1\nAdd a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is fc_size. ReLU is used so we can learn non-linear relations.", "layer_fc1 = new_fc_layer(input=layer_flat,\n num_inputs=num_features,\n num_outputs=fc_size,\n use_relu=True)", "Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.", "layer_fc1", "全连接层 2\nAdd another fully-connected layer that outputs vectors of length num_classes for determining which of the classes the input image belongs to. Note that ReLU is not used in this layer.", "layer_fc2 = new_fc_layer(input=layer_fc1,\n num_inputs=fc_size,\n num_outputs=num_classes,\n use_relu=False)\n\nlayer_fc2", "所预测类\nThe second fully-connected layer estimates how likely it is that the input image belongs to each of the 2 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the all the elements sum to one. This is calculated using the so-called softmax function and the result is stored in y_pred.", "y_pred = tf.nn.softmax(layer_fc2)", "The class-number is the index of the largest element.", "y_pred_cls = tf.argmax(y_pred, dimension=1)", "将要优化的损失函数\nTo make the model better at classifying the input images, we must somehow change the variables for all the network layers. 
To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.\nThe cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.\nTensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally so we must use the output of layer_fc2 directly rather than y_pred which has already had the softmax applied.", "cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,\n labels=y_true)", "We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.", "cost = tf.reduce_mean(cross_entropy)", "优化方法\nNow that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the AdamOptimizer which is an advanced form of Gradient Descent.\nNote that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.", "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)", "评判手段\nWe need a few more performance measures to display the progress to the user.\nThis is a vector of booleans whether the predicted class equals the true class of each image.", "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "TensorFlow图模型的编译与运行\n创建 TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "session = tf.Session()", "初始化参数\nThe variables for weights and biases must be initialized before we start optimizing them.", "session.run(tf.initialize_all_variables())", "协助优化迭代的函数\nIt takes a long time to calculate the gradient of the model using the entirety of a large dataset\n. We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.", "train_batch_size = batch_size\n\ndef print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss):\n # Calculate the accuracy on the training-set.\n acc = session.run(accuracy, feed_dict=feed_dict_train)\n val_acc = session.run(accuracy, feed_dict=feed_dict_validate)\n msg = \"Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}\"\n print(msg.format(epoch + 1, acc, val_acc, val_loss))", "Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. 
In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.", "# Counter for total number of iterations performed so far.\ntotal_iterations = 0\n\ndef optimize(num_iterations):\n # Ensure we update the global variable rather than a local copy.\n global total_iterations\n\n # Start-time used for printing time-usage below.\n start_time = time.time()\n \n best_val_loss = float(\"inf\")\n patience = 0\n\n for i in range(total_iterations,\n total_iterations + num_iterations):\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch, _, cls_batch = data.train.next_batch(train_batch_size)\n x_valid_batch, y_valid_batch, _, valid_cls_batch = data.valid.next_batch(train_batch_size)\n\n # Convert shape from [num examples, rows, columns, depth]\n # to [num examples, flattened image shape]\n\n x_batch = x_batch.reshape(train_batch_size, img_size_flat)\n x_valid_batch = x_valid_batch.reshape(train_batch_size, img_size_flat)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n \n feed_dict_validate = {x: x_valid_batch,\n y_true: y_valid_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n \n\n # Print status at end of each epoch (defined as full pass through training dataset).\n if i % int(data.train.num_examples/batch_size) == 0: \n val_loss = session.run(cost, feed_dict=feed_dict_validate)\n epoch = int(i / int(data.train.num_examples/batch_size))\n \n print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss)\n \n if early_stopping: \n if val_loss < best_val_loss:\n best_val_loss = val_loss\n patience = 0\n else:\n patience += 1\n\n if patience == early_stopping:\n break\n\n # Update the total number of iterations performed.\n total_iterations += num_iterations\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time elapsed: \" + str(timedelta(seconds=int(round(time_dif)))))", "协助绘制错误结果的函数\nFunction for plotting examples of images from the test-set that have been mis-classified.", "def plot_example_errors(cls_pred, correct):\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the images from the test-set that have been\n # incorrectly classified.\n images = data.valid.images[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data.valid.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])", "协助绘制混淆矩阵的函数", "def plot_confusion_matrix(cls_pred):\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the true classifications for the test-set.\n cls_true = data.valid.cls\n \n # Get the confusion 
matrix using sklearn.\n cm = confusion_matrix(y_true=cls_true,\n y_pred=cls_pred)\n\n # Print the confusion matrix as text.\n print(cm)\n\n # Plot the confusion matrix as an image.\n plt.matshow(cm)\n\n # Make various adjustments to the plot.\n plt.colorbar()\n tick_marks = np.arange(num_classes)\n plt.xticks(tick_marks, range(num_classes))\n plt.yticks(tick_marks, range(num_classes))\n plt.xlabel('Predicted')\n plt.ylabel('True')\n\n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "协助展示实验结果与模型性能的函数\nFunction for printing the classification accuracy on the test-set.\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.\nNote that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.", "def print_validation_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # Number of images in the test-set.\n num_test = len(data.valid.images)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_test, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_test:\n # The ending index for the next batch is denoted j.\n j = min(i + batch_size, num_test)\n\n # Get the images from the test-set between index i and j.\n images = data.valid.images[i:j, :].reshape(batch_size, img_size_flat)\n \n\n # Get the associated labels.\n labels = data.valid.labels[i:j, :]\n\n # Create a feed-dict with these images and labels.\n feed_dict = {x: images,\n y_true: labels}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n cls_true = np.array(data.valid.cls)\n cls_pred = np.array([classes[x] for x in cls_pred]) \n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n # Calculate the number of correctly classified images.\n # When summing a boolean array, False means 0 and True means 1.\n correct_sum = correct.sum()\n\n # Classification accuracy is the number of correctly classified\n # images divided by the total number of images in the test-set.\n acc = float(correct_sum) / num_test\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n print(msg.format(acc, correct_sum, num_test))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)", "1次优化迭代后的结果", "optimize(num_iterations=1)\nprint_validation_accuracy()", "100次优化迭代后的结果\nAfter 100 optimization iterations, the model should have significantly improved its classification accuracy.", "optimize(num_iterations=99) # We already performed 1 iteration 
above.\n\nprint_validation_accuracy(show_example_errors=True)", "1000次优化迭代后的结果", "optimize(num_iterations=900) # We performed 100 iterations above.\n\nprint_validation_accuracy(show_example_errors=True)", "10000次优化迭代后的结果", "optimize(num_iterations=9000) # We performed 1000 iterations above.\n\nprint_validation_accuracy(show_example_errors=True, show_confusion_matrix=True)", "权值与卷积层的可视化\nIn trying to understand why the convolutional neural network can recognize images, we will now visualize the weights of the convolutional filters and the resulting output images.\n协助绘制卷积核权值的函数", "def plot_conv_weights(weights, input_channel=0):\n # Assume weights are TensorFlow ops for 4-dim variables\n # e.g. weights_conv1 or weights_conv2.\n \n # Retrieve the values of the weight-variables from TensorFlow.\n # A feed-dict is not necessary because nothing is calculated.\n w = session.run(weights)\n\n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(w)\n w_max = np.max(w)\n\n # Number of filters used in the conv. layer.\n num_filters = w.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = w[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=w_min, vmax=w_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "协助绘制卷积层输出的函数", "def plot_conv_layer(layer, image):\n # Assume layer is a TensorFlow op that outputs a 4-dim tensor\n # which is the output of a convolutional layer,\n # e.g. layer_conv1 or layer_conv2.\n \n image = image.reshape(img_size_flat)\n\n # Create a feed-dict containing just one image.\n # Note that we don't need to feed y_true because it is\n # not used in this calculation.\n feed_dict = {x: [image]}\n\n # Calculate and retrieve the output values of the layer\n # when inputting that image.\n values = session.run(layer, feed_dict=feed_dict)\n\n # Number of filters used in the conv. 
layer.\n num_filters = values.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot the output images of all the filters.\n for i, ax in enumerate(axes.flat):\n # Only plot the images for valid filters.\n if i<num_filters:\n # Get the output image of using the i'th filter.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = values[0, :, :, i]\n\n # Plot image.\n ax.imshow(img, interpolation='nearest', cmap='binary')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "输入图像\nHelper-function for plotting an image.", "def plot_image(image):\n plt.imshow(image.reshape(img_size, img_size, num_channels),\n interpolation='nearest')\n plt.show()", "Plot an image from the test-set which will be used as an example below.", "image1 = test_images[0]\nplot_image(image1)", "Plot another example image from the test-set.", "image2 = test_images[13]\nplot_image(image2)", "卷积层 1\nNow plot the filter-weights for the first convolutional layer.\nNote that positive weights are red and negative weights are blue.", "plot_conv_weights(weights=weights_conv1)", "Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to about half the resolution of the original input image.", "plot_conv_layer(layer=layer_conv1, image=image1)", "The following images are the results of applying the convolutional filters to the second image.", "plot_conv_layer(layer=layer_conv1, image=image2)", "卷积层 2\nNow plot the filter-weights for the second convolutional layer.\nThere are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weigths for the first channel.\nNote again that positive weights are red and negative weights are blue.", "plot_conv_weights(weights=weights_conv2, input_channel=0)", "There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. 
We just make one more with the filter-weights for the second channel.", "plot_conv_weights(weights=weights_conv2, input_channel=1)", "It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.\nApplying these convolutional filters to the images that were ouput from the first conv-layer gives the following images.\nNote that these are down-sampled yet again to half the resolution of the images from the first conv-layer.", "plot_conv_layer(layer=layer_conv2, image=image1)", "And these are the results of applying the filter-weights to the second image.", "plot_conv_layer(layer=layer_conv2, image=image2)", "将测试结果写入 CSV 文件", "# def write_predictions(ims, ids):\n# ims = ims.reshape(ims.shape[0], img_size_flat)\n# preds = session.run(y_pred, feed_dict={x: ims})\n# result = pd.DataFrame(preds, columns=classes)\n# result.loc[:, 'id'] = pd.Series(ids, index=result.index)\n# pred_file = 'predictions.csv'\n# result.to_csv(pred_file, index=False)\n\n# write_predictions(test_images, test_ids)", "关闭 TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.", "session.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yuvrajsingh86/DeepLearning_Udacity
first-neural-network/Your_first_neural_network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. 
Implement the forward pass in the run method.", "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.\n \n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n #def sigmoid(x):\n # return 0 # Replace 0 with your sigmoid calculation here\n #self.activation_function = sigmoid\n \n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n hidden_inputs = np.dot(X,self.weights_input_to_hidden) # signals into hidden layer\n \n \n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n # TODO: Output layer - Replace these values with your calculations.\n \n final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y-final_outputs # Output layer error is the difference between desired target and actual output.\n \n # TODO: Calculate the backpropagated error term (delta) for the output \n output_error_term = error \n \n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = np.dot(self.weights_hidden_to_output,output_error_term)\n \n \n # TODO: Calculate the backpropagated error term (delta) for the hidden layer\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n \n # Weight step (input to hidden)\n delta_weights_i_h += hidden_error_term*X[:,None]\n # Weight step (hidden to output)\n \n delta_weights_h_o += output_error_term * hidden_outputs[:,None]\n \n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += self.lr*delta_weights_h_o/n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array 
of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = np.dot(features,self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\n\n\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)\n", "Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. 
You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. \nTry a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.", "import sys\n\n\n### Set the hyperparameters here ###\niterations = 5000\nlearning_rate = 0.5\nhidden_nodes =26\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. 
If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below" ]
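One detail worth sanity-checking in the backward pass above is the hidden-layer error term, which multiplies by hidden_outputs * (1 - hidden_outputs). That factor is the derivative of the sigmoid activation, and the identity can be verified numerically with a short, self-contained sketch (the sample points below are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-3, 3, 7)          # arbitrary sample points
s = sigmoid(x)

# Analytic derivative used in the backpropagation step: s * (1 - s).
analytic = s * (1 - s)

# Central finite-difference estimate of the same derivative.
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)

print(np.allclose(analytic, numeric))  # True: the identity holds
```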
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dsacademybr/PythonFundamentos
Cap14/DSA-Python-Cap14-01-WebScraping.ipynb
gpl-3.0
[ "<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 14</font>\nDownload: http://github.com/dsacademybr", "# Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())", "Web Scraping", "# Biblioteca usada para requisitar uma página de um web site\nimport urllib.request\n\n# Definimos a url\n# Verifique as permissões em https://www.python.org/robots.txt\nwith urllib.request.urlopen(\"https://www.python.org\") as url:\n page = url.read()\n\n# Imprime o conteúdo\nprint(page)\n\nfrom bs4 import BeautifulSoup\n\n# Analise o html na variável 'page' e armazene-o no formato Beautiful Soup\nsoup = BeautifulSoup(page, \"html.parser\")\n\nsoup.title\n\nsoup.title.string\n\nsoup.a \n\nsoup.find_all(\"a\")\n\ntables = soup.find('table')\n\nprint(tables)", "Fim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
NervanaSystems/coach
tutorials/1. Implementing an Algorithm.ipynb
apache-2.0
[ "Implementing an Algorithm\nIn this tutorial we'll build a new agent that implements the Categorical Deep Q Network (C51) algorithm (https://arxiv.org/pdf/1707.06887.pdf), and a preset that runs the agent on the 'Breakout' game of the Atari environment.\nImplementing an algorithm typically consists of 3 main parts:\n\nImplementing the agent object\nImplementing the network head (optional)\nImplementing a preset to run the agent on some environment\n\nThe entire agent can be defined outside of the Coach framework, but in Coach you can find multiple predefined agents under the agents directory, network heads under the architecure/tensorflow_components/heads directory, and presets under the presets directory, for you to reuse.\nFor more information, we recommend going over the following page in the documentation: https://nervanasystems.github.io/coach/contributing/add_agent/\nThe Network Head\nWe'll start by defining a new head for the neural network used by this algorithm - CategoricalQHead. \nA head is the final part of the network. It takes the embedding from the middleware embedder and passes it through a neural network to produce the output of the network. There can be multiple heads in a network, and each one has an assigned loss function. The heads are algorithm dependent.\nThe rest of the network can be reused from the predefined parts, and the input embedder and middleware structure can also be modified, but we won't go into that in this tutorial.\nThe head will typically be defined in a new file - architectures/tensorflow_components/heads/categorical_dqn_head.py.\nFirst - some imports.", "import os\nimport sys\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path)\n\nimport tensorflow as tf\nfrom rl_coach.architectures.tensorflow_components.heads.head import Head\nfrom rl_coach.architectures.head_parameters import HeadParameters\nfrom rl_coach.base_parameters import AgentParameters\nfrom rl_coach.core_types import QActionStateValue\nfrom rl_coach.spaces import SpacesDefinition", "Now let's define a class - CategoricalQHead class. Each class in Coach has a complementary Parameters class which defines its constructor parameters. So we will additionally define the CategoricalQHeadParameters class. The network structure should be defined in the _build_module function, which gets the previous layer output as an argument. In this function there are several variables that should be defined:\n* self.input - (optional) a list of any additional input to the head\n* self.output - the output of the head, which is also one of the outputs of the network\n* self.target - a placeholder for the targets that will be used to train the network\n* self.regularizations - (optional) any additional regularization losses that will be applied to the network\n* self.loss - the loss that will be used to train the network\nCategorical DQN uses the same network as DQN, and only changes the last layer to output #actions x #atoms elements with a softmax function. 
Additionally, we update the loss function to cross entropy.", "class CategoricalQHeadParameters(HeadParameters):\n def __init__(self, activation_function: str ='relu', name: str='categorical_q_head_params'):\n super().__init__(parameterized_class=CategoricalQHead, activation_function=activation_function, name=name)\n\nclass CategoricalQHead(Head):\n def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,\n head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str ='relu'):\n super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function)\n self.name = 'categorical_dqn_head'\n self.num_actions = len(self.spaces.action.actions)\n self.num_atoms = agent_parameters.algorithm.atoms\n self.return_type = QActionStateValue\n\n def _build_module(self, input_layer):\n self.actions = tf.placeholder(tf.int32, [None], name=\"actions\")\n self.input = [self.actions]\n\n values_distribution = tf.layers.dense(input_layer, self.num_actions * self.num_atoms, name='output')\n values_distribution = tf.reshape(values_distribution, (tf.shape(values_distribution)[0], self.num_actions,\n self.num_atoms))\n # softmax on atoms dimension\n self.output = tf.nn.softmax(values_distribution)\n\n # calculate cross entropy loss\n self.distributions = tf.placeholder(tf.float32, shape=(None, self.num_actions, self.num_atoms),\n name=\"distributions\")\n self.target = self.distributions\n self.loss = tf.nn.softmax_cross_entropy_with_logits(labels=self.target, logits=values_distribution)\n tf.losses.add_loss(self.loss)", "The Agent\nThe agent will implement the Categorical DQN algorithm. Each agent has a complementary AgentParameters class, which allows selecting the parameters of the agent sub modules: \n* the algorithm\n* the exploration policy\n* the memory\n* the networks\nNow let's go ahead and define the network parameters - it will reuse the DQN network parameters but the head parameters will be our CategoricalQHeadParameters. 
The network parameters allows selecting any number of heads for the network by defining them in a list, but in this case we only have a single head, so we will point to its parameters class.", "from rl_coach.agents.dqn_agent import DQNNetworkParameters\n\n\nclass CategoricalDQNNetworkParameters(DQNNetworkParameters):\n def __init__(self):\n super().__init__()\n self.heads_parameters = [CategoricalQHeadParameters()]", "Next we'll define the algorithm parameters, which are the same as the DQN algorithm parameters, with the addition of the Categorical DQN specific v_min, v_max and number of atoms.\nWe'll also define the parameters of the exploration policy, which is epsilon greedy with epsilon starting at a value of 1.0 and decaying to 0.01 throughout 1,000,000 steps.", "from rl_coach.agents.dqn_agent import DQNAlgorithmParameters\nfrom rl_coach.exploration_policies.e_greedy import EGreedyParameters\nfrom rl_coach.schedules import LinearSchedule\n\n\nclass CategoricalDQNAlgorithmParameters(DQNAlgorithmParameters):\n def __init__(self):\n super().__init__()\n self.v_min = -10.0\n self.v_max = 10.0\n self.atoms = 51\n\n\nclass CategoricalDQNExplorationParameters(EGreedyParameters):\n def __init__(self):\n super().__init__()\n self.epsilon_schedule = LinearSchedule(1, 0.01, 1000000)\n self.evaluation_epsilon = 0.001 ", "Now let's define the agent parameters class which contains all the parameters to be used by the agent - the network, algorithm and exploration parameters that we defined above, and also the parameters of the memory module to be used, which is the default experience replay buffer in this case. \nNotice that the networks are defined as a dictionary, where the key is the name of the network and the value is the network parameters. This will allow us to later access each of the networks through self.networks[network_name].\nThe path property connects the parameters class to its corresponding class that is parameterized. In this case, it is the CategoricalDQNAgent class that we'll define in a moment.", "from rl_coach.agents.value_optimization_agent import ValueOptimizationAgent\nfrom rl_coach.base_parameters import AgentParameters\nfrom rl_coach.core_types import StateType\nfrom rl_coach.memories.non_episodic.experience_replay import ExperienceReplayParameters\n\n\nclass CategoricalDQNAgentParameters(AgentParameters):\n def __init__(self):\n super().__init__(algorithm=CategoricalDQNAlgorithmParameters(),\n exploration=CategoricalDQNExplorationParameters(),\n memory=ExperienceReplayParameters(),\n networks={\"main\": CategoricalDQNNetworkParameters()})\n\n @property\n def path(self):\n return 'agents.categorical_dqn_agent:CategoricalDQNAgent'", "The last step is to define the agent itself - CategoricalDQNAgent - which is a type of value optimization agent so it will inherit the ValueOptimizationAgent class. It could have also inheritted DQNAgent, which would result in the same functionality. Our agent will implement the learn_from_batch function which updates the agent's networks according to an input batch of transitions.\nAgents typically need to implement the training function - learn_from_batch, and a function that defines which actions to select given a state - choose_action. 
In our case, we will reuse the choose_action function implemented by the generic ValueOptimizationAgent, and just update the internal function for fetching q values for each of the actions - get_all_q_values_for_states.\nThis code may look intimidating at first glance, but basically it is just following the algorithm description in the Distributional DQN paper:\n<img src=\"files/categorical_dqn.png\" width=400>", "from typing import Union\n\n\n# Categorical Deep Q Network - https://arxiv.org/pdf/1707.06887.pdf\nclass CategoricalDQNAgent(ValueOptimizationAgent):\n def __init__(self, agent_parameters, parent: Union['LevelManager', 'CompositeAgent']=None):\n super().__init__(agent_parameters, parent)\n self.z_values = np.linspace(self.ap.algorithm.v_min, self.ap.algorithm.v_max, self.ap.algorithm.atoms)\n\n def distribution_prediction_to_q_values(self, prediction):\n return np.dot(prediction, self.z_values)\n\n # prediction's format is (batch,actions,atoms)\n def get_all_q_values_for_states(self, states: StateType):\n prediction = self.get_prediction(states)\n return self.distribution_prediction_to_q_values(prediction)\n\n def learn_from_batch(self, batch):\n network_keys = self.ap.network_wrappers['main'].input_embedders_parameters.keys()\n\n # for the action we actually took, the error is calculated by the atoms distribution\n # for all other actions, the error is 0\n distributed_q_st_plus_1, TD_targets = self.networks['main'].parallel_prediction([\n (self.networks['main'].target_network, batch.next_states(network_keys)),\n (self.networks['main'].online_network, batch.states(network_keys))\n ])\n\n # only update the action that we have actually done in this transition\n target_actions = np.argmax(self.distribution_prediction_to_q_values(distributed_q_st_plus_1), axis=1)\n m = np.zeros((self.ap.network_wrappers['main'].batch_size, self.z_values.size))\n\n batches = np.arange(self.ap.network_wrappers['main'].batch_size)\n for j in range(self.z_values.size):\n tzj = np.fmax(np.fmin(batch.rewards() +\n (1.0 - batch.game_overs()) * self.ap.algorithm.discount * self.z_values[j],\n self.z_values[self.z_values.size - 1]),\n self.z_values[0])\n bj = (tzj - self.z_values[0])/(self.z_values[1] - self.z_values[0])\n u = (np.ceil(bj)).astype(int)\n l = (np.floor(bj)).astype(int)\n m[batches, l] = m[batches, l] + (distributed_q_st_plus_1[batches, target_actions, j] * (u - bj))\n m[batches, u] = m[batches, u] + (distributed_q_st_plus_1[batches, target_actions, j] * (bj - l))\n # total_loss = cross entropy between actual result above and predicted result for the given action\n TD_targets[batches, batch.actions()] = m\n\n result = self.networks['main'].train_and_sync_networks(batch.states(network_keys), TD_targets)\n total_loss, losses, unclipped_grads = result[:3]\n\n return total_loss, losses, unclipped_grads", "Some important things to notice here:\n* self.networks['main'] is a NetworkWrapper object. It holds all the copies of the 'main' network: \n - a global network which is shared between all the workers in distributed training\n - an online network which is a local copy of the network intended to keep the weights static between training steps\n - a target network which is a local slow updating copy of the network, and is intended to keep the targets of the training process more stable\n In this case, we have the online network and the target network. The global network will only be created if we run the algorithm with multiple workers. The A3C agent would be one kind of example. 
\n* There are two network prediction functions available - predict and parallel_prediction. predict is quite straightforward - get some inputs, forward them through the network and return the output. parallel_prediction is an optimized variant of predict, which allows running a prediction on the online and target network in parallel, instead of running them sequentially.\n* The network train_and_sync_networks function makes a single training step - running a forward pass of the online network, calculating the losses, running a backward pass to calculate the gradients and applying the gradients to the network weights. If multiple workers are used, instead of applying the gradients to the online network weights, they are applied to the global (shared) network weights, and then the weights are copied back to the online network.\nThe Preset\nThe final part is the preset, which will run our agent on some existing environment with any custom parameters.\nThe new preset will typically be defined in a new file - presets/atari_categorical_dqn.py.\nFirst - let's select the agent parameters we defined above. \nIt is possible to modify internal parameters such as the learning rate.", "from rl_coach.agents.categorical_dqn_agent import CategoricalDQNAgentParameters\n\n\nagent_params = CategoricalDQNAgentParameters()\nagent_params.network_wrappers['main'].learning_rate = 0.00025", "Now, let's define the environment parameters. We will use the default Atari parameters (frame skip of 4, taking the max over subsequent frames, etc.), and we will select the 'Breakout' game level.", "from rl_coach.environments.gym_environment import Atari, atari_deterministic_v4\n\n\nenv_params = Atari(level='BreakoutDeterministic-v4')", "Connecting all the dots together - we'll define a graph manager with the Categorical DQN agent parameters, the Atari environment parameters, and the scheduling and visualization parameters", "from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager\nfrom rl_coach.base_parameters import VisualizationParameters\nfrom rl_coach.environments.gym_environment import atari_schedule\n\ngraph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params,\n                                    schedule_params=atari_schedule, vis_params=VisualizationParameters())\ngraph_manager.visualization_parameters.render = True", "Running the Preset\n(this is normally done from the command line by running coach -p Atari_C51 -lvl breakout)", "# let the adventure begin\ngraph_manager.improve()" ]
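The categorical projection inside learn_from_batch is easier to follow with concrete numbers. Below is a small, self-contained NumPy sketch that reuses the same v_min, v_max and atom count as the algorithm parameters above; the probability distribution is made up purely for illustration and nothing here calls into Coach itself.

```python
import numpy as np

# Support of the value distribution: 51 atoms between v_min and v_max,
# matching CategoricalDQNAlgorithmParameters above.
v_min, v_max, atoms = -10.0, 10.0, 51
z_values = np.linspace(v_min, v_max, atoms)

# A made-up softmax output for one state/action pair: probability mass
# over the atoms, loosely centred around a value of 3.
probs = np.exp(-0.5 * ((z_values - 3.0) / 1.5) ** 2)
probs /= probs.sum()

# Action selection uses the expectation of the distribution over its
# support, i.e. the np.dot(prediction, self.z_values) in the agent.
q_value = np.dot(probs, z_values)
print(round(q_value, 3))  # roughly 3.0 for this made-up distribution

# One atom's share of the projection step: shift by the Bellman target,
# clip to [v_min, v_max], then split its mass between the neighbouring
# atoms l and u, mirroring the loop over j in learn_from_batch.
reward, discount, j = 1.0, 0.99, 40
tzj = np.clip(reward + discount * z_values[j], v_min, v_max)
bj = (tzj - z_values[0]) / (z_values[1] - z_values[0])
l, u = int(np.floor(bj)), int(np.ceil(bj))
print(l, u, round(bj - l, 3))  # the fraction bj - l of the mass goes to atom u
```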
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JamesSample/icpw
check_core_icpw.ipynb
mit
[ "%matplotlib inline\nimport pandas as pd\nimport nivapy3 as nivapy\nimport matplotlib.pyplot as plt\n\nplt.style.use('ggplot')", "Explore \"core\" ICPW data\nPrior to updating the \"core\" ICPW datasets in RESA, I need to get an overview of what's already in the database and what isn't.", "# Connect to db\neng = nivapy.da.connect()", "1. Query ICPW projects\nThe are 18 projects (one for each country) currently in RESA. We also have data for some countries that do not yet have a project defined (e.g. the Netherlands).", "# Query projects\nprj_grid = nivapy.da.select_resa_projects(eng)\nprj_grid\n\nprj_df = prj_grid.get_selected_df()\nprint(len(prj_df))\nprj_df", "2. Get station list\nThere are 262 stations currently associated with the projects in RESA.", "# Get stations\nstn_df = nivapy.da.select_resa_project_stations(prj_df, eng)\nprint(len(stn_df))\nstn_df.head()\n\n# Map\nnivapy.spatial.quickmap(stn_df, popup='station_code')", "3. Get parameters\nGet a list of parameters available at these stations. I assume that all data submissions to ICPW will report pH, so extracting pH data should be a good way to get an indication of which stations actually have data.", "# Select parameters\npar_grid = nivapy.da.select_resa_station_parameters(stn_df,\n '1970-01-01',\n '2019-01-01',\n eng)\npar_grid\n\n# Get selected pars\npar_df = par_grid.get_selected_df()\npar_df", "4. Get chemistry data", "# Get data\nwc_df, dup_df = nivapy.da.select_resa_water_chemistry(stn_df,\n par_df,\n '1970-01-01',\n '2019-01-01',\n eng,\n lod_flags=False,\n drop_dups=True)\n\nwc_df.head()\n\n# How many stations have pH data\nlen(wc_df['station_code'].unique())\n\n# Which stations do not have pH data?\nall_stns = set(stn_df['station_code'].unique())\nno_ph = list(all_stns - set(wc_df['station_code'].unique()))\nno_ph_stns = stn_df.query('station_code in @no_ph').reset_index()\nprint(len(no_ph_stns))\nno_ph_stns\n\n# What data do these stations have?\npar_grid2 = nivapy.da.select_resa_station_parameters(no_ph_stns,\n '1970-01-01',\n '2019-01-01',\n eng)\npar_grid2", "So, there are 262 stations within the \"core\" ICPW projects, but 24 of these have no data whatsoever associated with them (listed above).\n5. Date for last sample by country\nThe code below gets the most recent pH sample in the database for each country.", "# Most recent datab\nfor idx, row in prj_df.iterrows():\n # Get stations\n cnt_stns = nivapy.da.select_resa_project_stations([row['project_id'],], eng)\n \n # Get pH data\n wc, dups = nivapy.da.select_resa_water_chemistry(cnt_stns,\n [1,], # pH\n '1970-01-01',\n '2019-01-01',\n eng,\n lod_flags=False,\n drop_dups=True)\n \n # Print results\n print(row['project_name'], '\\t', len(cnt_stns), '\\t', wc['sample_date'].max())", "These results are largely as expected:\n\n\nUS. Complete up to 2012. John sent an entirely new dataset for the 2018 trends work, which can be used to replace the existing data series for the \"core\" stations as well\n\n\nNorway. Transferred automatically\n\n\nCanada. Partly processed for the trends work\n\n\nUK. Partly processed for the trends work\n\n\nFinland. Jussi sent an entirely new dataset (with changes suggested to site selections) in 2018\n\n\nSweden. Update using data collected via API. Also need to modify site selections\n\n\nBelarus. Data up to 2010 are already in the database. Completed templates are available for 2012 to 2014 (2011 data are missing). No response to calls for data since 2014\n\n\nCzech Republic. 2016 data needs to be uploaded\n\n\nItaly. 
Data submitted during 2017 and 2018 need adding. Several new sites need creating\n\n\nPoland. Recent data submissions seem very complex! Need to check through e-mails\n\n\nSwitzerland. Templates need combining and adding\n\n\nLatvia. Data for 2016 and 2017 need combining and tidying\n\n\nEstonia. No data for 2012 or 2014. Data from 2013, 2015, 2016 and 2017 need merging and adding\n\n\nIreland. An entirely new set of stations and codes has been proposed. Some overlap with old stations, but need checking\n\n\nMontenegro. Data from 2006 to 2009 are available in Excel. Need transferring to template and adding to database. Nothing since 2009\n\n\nArmenia. Data from 2004 to 2008 are in Excel. Nothing since and no replies to annual Calls for Data\n\n\nGermany. Excel templates cover 2015 to 2017. Nothing for 2013 or 2014? Some site codes have changed and some are no longer monitored. Also some old errors to fix - see e-mails?\n\n\nAustria. Data up to 2012 are in Excel. No data since then\n\n\nMoldova. Data from 2014 to 2017 in Excel. No project or stations in database\n\n\nNetherlands. Excel data supplied in 2016. No idea what to make of these spreadsheets! No stations or project in RESA\n\n\nRussia. Data provided from 2009 to 2014 in Excel. No stations or projects in database\n\n\nSlovakia. Not currently in the \"core\" ICPW project, but data supplied by Jiri as part of the trends work. Fits well with Polish data" ]
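The same "date of last sample" question can also be answered per station rather than per country with a plain pandas groupby on the chemistry frame. The sketch below runs on a tiny made-up frame that only mimics the station_code and sample_date columns of wc_df, since the real data live in the RESA database; the station codes and dates are invented.

```python
import pandas as pd

# Stand-in for wc_df: same two columns, invented values.
wc_demo = pd.DataFrame({
    "station_code": ["ST01", "ST01", "ST02", "ST02", "ST03"],
    "sample_date": pd.to_datetime(
        ["2016-05-01", "2018-09-12", "2012-03-30", "2013-06-15", "2016-11-02"]
    ),
})

# Most recent sample per station, most recently sampled stations first.
last_sample = (
    wc_demo.groupby("station_code")["sample_date"]
    .max()
    .sort_values(ascending=False)
)
print(last_sample)
```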
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_mixed_source_space_inverse.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute MNE inverse solution on evoked data in a mixed source space\nCreate a mixed source space and compute MNE inverse solution on evoked dataset", "# Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport matplotlib.pyplot as plt\nimport mne\n\nfrom mne.datasets import sample\nfrom mne import setup_volume_source_space\nfrom mne import make_forward_solution\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\n\nfrom nilearn import plotting\n\n# Set dir\ndata_path = sample.data_path()\nsubject = 'sample'\ndata_dir = op.join(data_path, 'MEG', subject)\nsubjects_dir = op.join(data_path, 'subjects')\nbem_dir = op.join(subjects_dir, subject, 'bem')\n\n# Set file names\nfname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)\nfname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')\n\nfname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)\nfname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)\n\nfname_evoked = data_dir + '/sample_audvis-ave.fif'\nfname_trans = data_dir + '/sample_audvis_raw-trans.fif'\nfname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'\nfname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'", "Set up our source space.", "# List substructures we are interested in. We select only the\n# sub structures we want to include in the source space\nlabels_vol = ['Left-Amygdala',\n 'Left-Thalamus-Proper',\n 'Left-Cerebellum-Cortex',\n 'Brain-Stem',\n 'Right-Amygdala',\n 'Right-Thalamus-Proper',\n 'Right-Cerebellum-Cortex']\n\n# Get a surface-based source space. We could set one up like this::\n#\n# >>> src = setup_source_space(subject, fname=None, spacing='oct6',\n# add_dist=False, subjects_dir=subjects_dir)\n#\n# But we already have one saved:\n\nsrc = mne.read_source_spaces(op.join(bem_dir, 'sample-oct-6-src.fif'))\n\n# Now we create a mixed src space by adding the volume regions specified in the\n# list labels_vol. 
First, read the aseg file and the source space bounds\n# using the inner skull surface (here using 10mm spacing to save time):\n\nvol_src = setup_volume_source_space(\n subject, mri=fname_aseg, pos=7.0, bem=fname_model,\n volume_label=labels_vol, subjects_dir=subjects_dir, verbose=True)\n\n# Generate the mixed source space\nsrc += vol_src\n\n# Visualize the source space.\nsrc.plot(subjects_dir=subjects_dir)\n\nn = sum(src[i]['nuse'] for i in range(len(src)))\nprint('the src space contains %d spaces and %d points' % (len(src), n))\n\n# We could write the mixed source space with::\n#\n# >>> write_source_spaces(fname_mixed_src, src, overwrite=True)\n#", "Export source positions to nift file:", "nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)\nsrc.export_volume(nii_fname, mri_resolution=True)\n\nplotting.plot_img(nii_fname, cmap=plt.cm.spectral)\nplt.show()\n\n# Compute the fwd matrix\nfwd = make_forward_solution(fname_evoked, fname_trans, src, fname_bem,\n mindist=5.0, # ignore sources<=5mm from innerskull\n meg=True, eeg=False, n_jobs=1)\n\nleadfield = fwd['sol']['data']\nprint(\"Leadfield size : %d sensors x %d dipoles\" % leadfield.shape)\n\nsrc_fwd = fwd['src']\nn = sum(src_fwd[i]['nuse'] for i in range(len(src_fwd)))\nprint('the fwd src space contains %d spaces and %d points' % (len(src_fwd), n))\n\n# Load data\ncondition = 'Left Auditory'\nevoked = mne.read_evokeds(fname_evoked, condition=condition,\n baseline=(None, 0))\nnoise_cov = mne.read_cov(fname_cov)\n\n# Compute inverse solution and for each epoch\nsnr = 3.0 # use smaller SNR for raw data\ninv_method = 'MNE' # sLORETA, MNE, dSPM\nparc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'\n\nlambda2 = 1.0 / snr ** 2\n\n# Compute inverse operator\ninverse_operator = make_inverse_operator(evoked.info, fwd, noise_cov,\n loose=None, depth=None,\n fixed=False)\n\nstcs = apply_inverse(evoked, inverse_operator, lambda2, inv_method,\n pick_ori=None)\n\n# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi\nlabels_parc = mne.read_labels_from_annot(subject, parc=parc,\n subjects_dir=subjects_dir)\n\n# Average the source estimates within each label of the cortical parcellation\n# and each sub structure contained in the src space\n# If mode = 'mean_flip' this option is used only for the surface cortical label\nsrc = inverse_operator['src']\n\nlabel_ts = mne.extract_label_time_course([stcs], labels_parc, src,\n mode='mean',\n allow_empty=True,\n return_generator=False)\n\n# plot the times series of 2 labels\nfig, axes = plt.subplots(1)\naxes.plot(1e3 * stcs.times, label_ts[0][0, :], 'k', label='bankssts-lh')\naxes.plot(1e3 * stcs.times, label_ts[0][71, :].T, 'r',\n label='Brain-stem')\naxes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')\naxes.legend()\nmne.viz.tight_layout()\nplt.show()" ]
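The hard-coded index 71 in the plotting cell above can be derived instead of memorised, assuming the extracted time courses keep the source-space ordering: the 68 cortical 'aparc' labels first, then the volume structures in the order of labels_vol. That assumption is consistent with the Brain-stem trace plotted at row 71 above, but treat the sketch below as illustrative rather than authoritative.

```python
# Volume structures included in the mixed source space, as listed above.
labels_vol = ['Left-Amygdala',
              'Left-Thalamus-Proper',
              'Left-Cerebellum-Cortex',
              'Brain-Stem',
              'Right-Amygdala',
              'Right-Thalamus-Proper',
              'Right-Cerebellum-Cortex']

n_cortical = 68  # 34 'aparc' labels per hemisphere, i.e. len(labels_parc)
row = n_cortical + labels_vol.index('Brain-Stem')
print(row)  # 71 -- the row plotted as 'Brain-stem' in the cell above
```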
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Unidata/unidata-python-workshop
notebooks/XArray/XArray and CF.ipynb
mit
[ "<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>XArray & CF Introduction</h1>\n<h3>Unidata Sustainable Science Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\n<div style=\"float:right; width:250 px\"><img src=\"http://xarray.pydata.org/en/stable/_static/dataset-diagram-logo.png\" alt=\"NumPy Logo\" style=\"height: 250px;\"></div>\n\nOverview:\n\nTeaching: 25 minutes\nExercises: 20 minutes\n\nQuestions\n\nWhat is XArray?\nHow does XArray fit in with Numpy and Pandas?\nWhat is the CF convention and how do we use it with Xarray?\n\nObjectives\n\nCreate a DataArray.\nOpen netCDF data using XArray\nSubset the data.\nWrite a CF-compliant netCDF file\n\nXArray\nXArray expands on the capabilities on NumPy arrays, providing a lot of streamlined data manipulation. It is similar in that respect to Pandas, but whereas Pandas excels at working with tabular data, XArray is focused on N-dimensional arrays of data (i.e. grids). Its interface is based largely on the netCDF data model (variables, attributes, and dimensions), but it goes beyond the traditional netCDF interfaces to provide functionality similar to netCDF-java's Common Data Model (CDM). \nDataArray\nThe DataArray is one of the basic building blocks of XArray. It provides a NumPy ndarray-like object that expands to provide two critical pieces of functionality:\n\nCoordinate names and values are stored with the data, making slicing and indexing much more powerful\nIt has a built-in container for attributes", "# Convention for import to get shortened namespace\nimport numpy as np\nimport xarray as xr\n\n# Create some sample \"temperature\" data\ndata = 283 + 5 * np.random.randn(5, 3, 4)\ndata", "Here we create a basic DataArray by passing it just a numpy array of random data. Note that XArray generates some basic dimension names for us.", "temp = xr.DataArray(data)\ntemp", "We can also pass in our own dimension names:", "temp = xr.DataArray(data, dims=['time', 'lat', 'lon'])\ntemp", "This is already improved upon from a numpy array, because we have names for each of the dimensions (or axes in NumPy parlance). Even better, we can take arrays representing the values for the coordinates for each of these dimensions and associate them with the data when we create the DataArray.", "# Use pandas to create an array of datetimes\nimport pandas as pd\ntimes = pd.date_range('2018-01-01', periods=5)\ntimes\n\n# Sample lon/lats\nlons = np.linspace(-120, -60, 4)\nlats = np.linspace(25, 55, 3)", "When we create the DataArray instance, we pass in the arrays we just created:", "temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])\ntemp", "...and we can also set some attribute metadata:", "temp.attrs['units'] = 'kelvin'\ntemp.attrs['standard_name'] = 'air_temperature'\n\ntemp", "Notice what happens if we perform a mathematical operaton with the DataArray: the coordinate values persist, but the attributes are lost. 
This is done because it is very challenging to know if the attribute metadata is still correct or appropriate after arbitrary arithmetic operations.", "# For example, convert Kelvin to Celsius\ntemp - 273.15", "Selection\nWe can use the .sel method to select portions of our data based on these coordinate values, rather than using indices (this is similar to the CDM).", "temp.sel(time='2018-01-02')", ".sel has the flexibility to also perform nearest neighbor sampling, taking an optional tolerance:", "from datetime import timedelta\ntemp.sel(time='2018-01-07', method='nearest', tolerance=timedelta(days=2))", "Exercise\n.interp() works similarly to .sel(). Using .interp(), get an interpolated time series \"forecast\" for Boulder (40°N, 105°W) or your favorite latitude/longitude location. (Documentation for <a href=\"http://xarray.pydata.org/en/stable/interpolation.html\">interp</a>).", "# Your code goes here\n", "Solution", "# %load solutions/interp_solution.py\n", "Slicing with Selection", "temp.sel(time=slice('2018-01-01', '2018-01-03'), lon=slice(-110, -70), lat=slice(25, 45))", ".loc\nAll of these operations can also be done within square brackets on the .loc attribute of the DataArray. This permits a much more numpy-looking syntax, though you lose the ability to specify the names of the various dimensions. Instead, the slicing must be done in the correct order.", "# As done above\ntemp.loc['2018-01-02']\n\ntemp.loc['2018-01-01':'2018-01-03', 25:45, -110:-70]\n\n# This *doesn't* work however:\n#temp.loc[-110:-70, 25:45,'2018-01-01':'2018-01-03']", "Opening netCDF data\nWith its close ties to the netCDF data model, XArray also supports netCDF as a first-class file format. This means it has easy support for opening netCDF datasets, so long as they conform to some of XArray's limitations (such as 1-dimensional coordinates).", "# Open sample North American Reanalysis data in netCDF format\nds = xr.open_dataset('../../data/NARR_19930313_0000.nc')\nds", "This returns a Dataset object, which is a container that contains one or more DataArrays, which can also optionally share coordinates. We can then pull out individual fields:", "ds.isobaric1", "or", "ds['isobaric1']", "Datasets also support much of the same subsetting operations as DataArray, but will perform the operation on all data:", "ds_1000 = ds.sel(isobaric1=1000.0)\nds_1000\n\nds_1000.Temperature_isobaric", "Aggregation operations\nNot only can you use the named dimensions for manual slicing and indexing of data, but you can also use it to control aggregation operations, like sum:", "u_winds = ds['u-component_of_wind_isobaric']\nu_winds.std(dim=['x', 'y'])", "Exercise\nUsing the sample dataset, calculate the mean temperature profile (temperature as a function of pressure) over Colorado within this dataset. For this exercise, consider the bounds of Colorado to be:\n* x: -182km to 424km\n* y: -1450km to -990km\n(37°N to 41°N and 102°W to 109°W projected to Lambert Conformal projection coordinates)\nSolution", "# %load solutions/mean_profile.py\n", "Resources\nThere is much more in the XArray library. To learn more, visit the XArray Documentation\nIntroduction to Climate and Forecasting Metadata Conventions\nIn order to better enable reproducible data and research, the Climate and Forecasting (CF) metadata convention was created to have proper metadata in atmospheric data files. 
In the remainder of this notebook, we will introduce the CF data model and discuss some netCDF implementation details to consider when deciding how to write data with CF and netCDF. We will cover gridded data in this notebook, with more in depth examples provided in the full CF notebook. Xarray makes the creation of netCDFs with proper metadata simple and straightforward, so we will use that, instead of the netCDF-Python library.\nThis assumes a basic understanding of netCDF.\n<a name=\"gridded\"></a>\nGridded Data\nLet's say we're working with some numerical weather forecast model output. Let's walk through the steps necessary to store this data in netCDF, using the Climate and Forecasting metadata conventions to ensure that our data are available to as many tools as possible.\nTo start, let's assume the following about our data:\n* It corresponds to forecast three dimensional temperature at several times\n* The native coordinate system of the model is on a regular grid that represents the Earth on a Lambert conformal projection.\nWe'll also go ahead and generate some arrays of data below to get started:", "# Import some useful Python tools\nfrom datetime import datetime\n\n# Twelve hours of hourly output starting at 22Z today\nstart = datetime.utcnow().replace(hour=22, minute=0, second=0, microsecond=0)\ntimes = np.array([start + timedelta(hours=h) for h in range(13)])\n\n# 3km spacing in x and y\nx = np.arange(-150, 153, 3)\ny = np.arange(-100, 100, 3)\n\n# Standard pressure levels in hPa\npress = np.array([1000, 925, 850, 700, 500, 300, 250])\n\ntemps = np.random.randn(times.size, press.size, y.size, x.size)", "Time coordinates must contain a units attribute with a string value with a form similar to 'seconds since 2019-01-06 12:00:00.00'. 'seconds', 'minutes', 'hours', and 'days' are the most commonly used units for time. Due to the variable length of months and years, they are not recommended.\nBefore we can write data, we need to first need to convert our list of Python datetime instances to numeric values. We can use the cftime library to make this easy to convert using the unit string as defined above.", "from cftime import date2num\ntime_units = 'hours since {:%Y-%m-%d 00:00}'.format(times[0])\ntime_vals = date2num(times, time_units)\ntime_vals", "Now we can create the forecast_time variable just as we did before for the other coordinate variables:\nConvert arrays into Xarray Dataset", "ds = xr.Dataset({'temperature': (['time', 'z', 'y', 'x'], temps, {'units':'Kelvin'})},\n coords={'x_dist': (['x'], x, {'units':'km'}),\n 'y_dist': (['y'], y, {'units':'km'}),\n 'pressure': (['z'], press, {'units':'hPa'}),\n 'forecast_time': (['time'], times)\n })\nds", "Due to how xarray handles time units, we need to encode the units in the forecast_time coordinate.", "ds.forecast_time.encoding['units'] = time_units", "If we look at our data variable, we can see the units printed out, so they were attached properly!", "ds.temperature", "We're going to start by adding some global attribute metadata. These are recommendations from the standard (not required), but they're easy to add and help users keep the data straight, so let's go ahead and do it.", "ds.attrs['Conventions'] = 'CF-1.7'\nds.attrs['title'] = 'Forecast model run'\nds.attrs['nc.institution'] = 'Unidata'\nds.attrs['source'] = 'WRF-1.5'\nds.attrs['history'] = str(datetime.utcnow()) + ' Python'\nds.attrs['references'] = ''\nds.attrs['comment'] = ''\nds", "We can also add attributes to this variable to define metadata. 
The CF conventions require a units attribute to be set for all variables that represent a dimensional quantity. The value of this attribute needs to be parsable by the UDUNITS library. Here we have already set it to a value of 'Kelvin'. We also set the standard (optional) attributes of long_name and standard_name. The former contains a longer description of the variable, while the latter comes from a controlled vocabulary in the CF conventions. This allows users of data to understand, in a standard fashion, what a variable represents. If we had missing values, we could also set the missing_value attribute to an appropriate value.\n\nNASA Dataset Interoperability Recommendations:\nSection 2.2 - Include Basic CF Attributes\nInclude where applicable: units, long_name, standard_name, valid_min / valid_max, scale_factor / add_offset and others.", "ds.temperature.attrs['standard_name'] = 'air_temperature'\nds.temperature.attrs['long_name'] = 'Forecast air temperature'\nds.temperature.attrs['missing_value'] = -9999\nds.temperature", "Coordinate variables\nTo properly orient our data in time and space, we need to go beyond dimensions (which define common sizes and alignment) and include values along these dimensions, which are called \"Coordinate Variables\". Generally, these are defined by creating a one dimensional variable with the same name as the respective dimension.\nTo start, we define variables which define our x and y coordinate values. These variables include standard_names which allow associating them with projections (more on this later) as well as an optional axis attribute to make clear what standard direction this coordinate refers to.", "ds.x.attrs['axis'] = 'X' # Optional\nds.x.attrs['standard_name'] = 'projection_x_coordinate'\nds.x.attrs['long_name'] = 'x-coordinate in projected coordinate system'\n\nds.y.attrs['axis'] = 'Y' # Optional\nds.y.attrs['standard_name'] = 'projection_y_coordinate'\nds.y.attrs['long_name'] = 'y-coordinate in projected coordinate system'", "We also define a coordinate variable pressure to reference our data in the vertical dimension. The standard_name of 'air_pressure' is sufficient to identify this coordinate variable as the vertical axis, but let's go ahead and specify the axis as well. We also specify the attribute positive to indicate whether the variable increases when going up or down. In the case of pressure, this is technically optional.", "ds.pressure.attrs['axis'] = 'Z' # Optional\nds.pressure.attrs['standard_name'] = 'air_pressure'\nds.pressure.attrs['positive'] = 'down' # Optional\n\nds.forecast_time['axis'] = 'T' # Optional\nds.forecast_time['standard_name'] = 'time' # Optional\nds.forecast_time['long_name'] = 'time'", "Auxilliary Coordinates\nOur data are still not CF-compliant because they do not contain latitude and longitude information, which is needed to properly locate the data. To solve this, we need to add variables with latitude and longitude. These are called \"auxillary coordinate variables\", not because they are extra, but because they are not simple one dimensional variables.\nBelow, we first generate longitude and latitude values from our projected coordinates using the pyproj library.", "from pyproj import Proj\nX, Y = np.meshgrid(x, y)\nlcc = Proj({'proj':'lcc', 'lon_0':-105, 'lat_0':40, 'a':6371000.,\n 'lat_1':25})\nlon, lat = lcc(X * 1000, Y * 1000, inverse=True)", "Now we can create the needed variables. Both are dimensioned on y and x and are two-dimensional. 
The longitude variable is identified as actually containing such information by its required units of 'degrees_east', as well as the optional 'longitude' standard_name attribute. The case is the same for latitude, except the units are 'degrees_north' and the standard_name is 'latitude'.", "ds = ds.assign_coords(lon = (['y', 'x'], lon))\nds = ds.assign_coords(lat = (['y', 'x'], lat))\nds\n\nds.lon.attrs['units'] = 'degrees_east'\nds.lon.attrs['standard_name'] = 'longitude' # Optional\nds.lon.attrs['long_name'] = 'longitude'\n\nds.lat.attrs['units'] = 'degrees_north'\nds.lat.attrs['standard_name'] = 'latitude' # Optional\nds.lat.attrs['long_name'] = 'latitude'", "With the variables created, we identify these variables as containing coordinates for the Temperature variable by setting the coordinates value to a space-separated list of the names of the auxilliary coordinate variables:", "ds", "Coordinate System Information\nWith our data specified on a Lambert conformal projected grid, it would be good to include this information in our metadata. We can do this using a \"grid mapping\" variable. This uses a dummy scalar variable as a namespace for holding all of the required information. Relevant variables then reference the dummy variable with their grid_mapping attribute.\nBelow we create a variable and set it up for a Lambert conformal conic projection on a spherical earth. The grid_mapping_name attribute describes which of the CF-supported grid mappings we are specifying. The names of additional attributes vary between the mappings.", "ds['lambert_projection'] = int()\nds.lambert_projection.attrs['grid_mapping_name'] = 'lambert_conformal_conic'\nds.lambert_projection.attrs['standard_parallel'] = 25.\nds.lambert_projection.attrs['latitude_of_projection_origin'] = 40.\nds.lambert_projection.attrs['longitude_of_central_meridian'] = -105.\nds.lambert_projection.attrs['semi_major_axis'] = 6371000.0\nds.lambert_projection", "Now that we created the variable, all that's left is to set the grid_mapping attribute on our Temperature variable to the name of our dummy variable:", "ds.temperature.attrs['grid_mapping'] = 'lambert_projection' # or proj_var.name\nds", "Write to NetCDF\nXarray has built-in support for a few flavors of netCDF. Here we'll write a netCDF4 file from our Dataset.", "ds.to_netcdf('test_netcdf.nc', format='NETCDF4')\n\n!ncdump test_netcdf.nc" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ericmjl/Network-Analysis-Made-Simple
notebooks/01-introduction/02-networkx-intro.ipynb
mit
[ "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'", "Introduction", "from IPython.display import YouTubeVideo\n\nYouTubeVideo(id='sdF0uJo2KdU', width=\"100%\")", "In this chapter, we will introduce you to the NetworkX API.\nThis will allow you to create and manipulate graphs in your computer memory,\nthus giving you a language \nto more concretely explore graph theory ideas.\nThroughout the book, we will be using different graph datasets\nto help us anchor ideas.\nIn this section, we will work with a social network of seventh graders.\nHere, nodes are individual students,\nand edges represent their relationships.\nEdges between individuals show how often\nthe seventh graders indicated other seventh graders as their favourite.\nThe data are taken from the Konect graph data repository\nData Model\nIn NetworkX, graph data are stored in a dictionary-like fashion.\nThey are placed under a Graph object,\ncanonically instantiated with the variable G as follows:\npython\nG = nx.Graph()\nOf course, you are free to name the graph anything you want!\nNodes are part of the attribute G.nodes.\nThere, the node data are housed in a dictionary-like container,\nwhere the key is the node itself\nand the values are a dictionary of attributes. \nNode data are accessible using syntax that looks like:\npython\nG.nodes[node1]\nEdges are part of the attribute G.edges,\nwhich is also stored in a dictionary-like container.\nEdge data are accessible using syntax that looks like: \npython\nG.edges[node1, node2]\nBecause of the dictionary-like implementation of the graph,\nany hashable object can be a node.\nThis means strings and tuples, but not lists and sets.\nLoad Data\nLet's load some real network data to get a feel for the NetworkX API. 
This dataset comes from a study of 7th grade students.\n\nThis directed network contains proximity ratings between students\nfrom 29 seventh grade students from a school in Victoria.\nAmong other questions the students were asked\nto nominate their preferred classmates for three different activities.\nA node represents a student.\nAn edge between two nodes shows that\nthe left student picked the right student as his or her answer.\nThe edge weights are between 1 and 3 \nand show how often the left student chose the right student as his/her favourite.\n\nIn the original dataset, students were from an all-boys school.\nHowever, I have modified the dataset to instead be a mixed-gender school.", "import networkx as nx\nfrom datetime import datetime\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport warnings\nfrom nams import load_data as cf\n\nwarnings.filterwarnings('ignore')\n\nG = cf.load_seventh_grader_network()", "Understanding a graph's basic statistics\nWhen you get graph data,\none of the first things you'll want to do is to\ncheck its basic graph statistics:\nthe number of nodes\nand the number of edges\nthat are represented in the graph.\nThis is a basic sanity-check on your data\nthat you don't want to skip out on.\nQuerying graph type\nThe first thing you need to know is the type of the graph:", "type(G)", "Because the graph is a DiGraph,\nthis tells us that the graph is a directed one.\nIf it were undirected, the type would change:", "H = nx.Graph()\ntype(H)", "Querying node information\nLet's now query for the nodeset:", "list(G.nodes())[0:5]", "G.nodes() returns a \"view\" on the nodes.\nWe can't actually slice into the view and grab out a sub-selection,\nbut we can at least see what nodes are present.\nFor brevity, we have sliced into G.nodes() passed into a list() constructor,\nso that we don't pollute the output.\nBecause a NodeView is iterable, though,\nwe can query it for its length:", "len(G.nodes())", "If our nodes have metadata attached to them,\nwe can view the metadata at the same time\nby passing in data=True:", "list(G.nodes(data=True))[0:5]", "G.nodes(data=True) returns a NodeDataView,\nwhich you can see is dictionary-like.\nAdditionally, we can select out individual nodes:", "G.nodes[1]", "Now, because a NodeDataView is dictionary-like,\nlooping over G.nodes(data=True)\nis very much like looping over key-value pairs of a dictionary.\nAs such, we can write things like:\npython\nfor n, d in G.nodes(data=True):\n # n is the node\n # d is the metadata dictionary\n ...\nThis is analogous to how we would loop over a dictionary:\npython\nfor k, v in dictionary.items():\n # do stuff in the loop\nNaturally, this leads us to our first exercise.\nExercise: Summarizing node metadata\n\nCan you count how many males and females are represented in the graph?", "from nams.solutions.intro import node_metadata\n\n#### REPLACE THE NEXT LINE WITH YOUR ANSWER\nmf_counts = node_metadata(G)", "Test your implementation by checking it against the test_answer function below.", "from typing import Dict\n\ndef test_answer(mf_counts: Dict):\n assert mf_counts['female'] == 17\n assert mf_counts['male'] == 12\n \ntest_answer(mf_counts)", "With this dictionary-like syntax,\nwe can query back the metadata that's associated with any node.\nQuerying edge information\nNow that you've learned how to query for node information,\nlet's now see how to query for all of the edges in the graph:", "list(G.edges())[0:5]", "Similar to the NodeView, G.edges() returns an EdgeView that is also 
iterable.\nAs with above, we have abbreviated the output inside a sliced list\nto keep things readable.\nBecause G.edges() is iterable, we can get its length to see the number of edges\nthat are present in a graph.", "len(G.edges())", "Likewise, we can also query for all of the edge's metadata:", "list(G.edges(data=True))[0:5]", "Additionally, it is possible for us to select out individual edges, as long as they exist in the graph:", "G.edges[15, 10]", "This yields the metadata dictionary for that edge.\nIf the edge does not exist, then we get an error:\n```python\n\n\n\nG.edges[15, 16]\n```\n\n\n\n```python\nKeyError Traceback (most recent call last)\n<ipython-input-21-ce014cab875a> in <module>\n----> 1 G.edges[15, 16]\n~/anaconda/envs/nams/lib/python3.7/site-packages/networkx/classes/reportviews.py in getitem(self, e)\n 928 def getitem(self, e):\n 929 u, v = e\n--> 930 return self._adjdict[u][v]\n 931 \n 932 # EdgeDataView methods\nKeyError: 16\n```\nAs with the NodeDataView, the EdgeDataView is dictionary-like,\nwith the difference being that the keys are 2-tuple-like\ninstead of being single hashable objects.\nThus, we can write syntax like the following to loop over the edgelist:\npython\nfor n1, n2, d in G.edges(data=True):\n # n1, n2 are the nodes\n # d is the metadata dictionary\n ...\nNaturally, this leads us to our next exercise.\nExercise: Summarizing edge metadata\n\nCan you write code to verify\nthat the maximum times any student rated another student as their favourite\nis 3 times?", "from nams.solutions.intro import edge_metadata\n\n#### REPLACE THE NEXT LINE WITH YOUR ANSWER\nmaxcount = edge_metadata(G)", "Likewise, you can test your answer using the test function below:", "def test_maxcount(maxcount):\n assert maxcount == 3\n \ntest_maxcount(maxcount)", "Manipulating the graph\nGreat stuff! You now know how to query a graph for:\n\nits node set, optionally including metadata\nindividual node metadata\nits edge set, optionally including metadata, and \nindividual edges' metadata\n\nNow, let's learn how to manipulate the graph.\nSpecifically, we'll learn how to add nodes and edges to a graph.\nAdding Nodes\nThe NetworkX graph API lets you add a node easily:\npython\nG.add_node(node, node_data1=some_value, node_data2=some_value)\nAdding Edges\nIt also allows you to add an edge easily:\npython\nG.add_edge(node1, node2, edge_data1=some_value, edge_data2=some_value)\nMetadata by Keyword Arguments\nIn both cases, the keyword arguments that are passed into .add_node()\nare automatically collected into the metadata dictionary.\nKnowing this gives you enough knowledge to tackle the next exercise.\nExercise: adding students to the graph\n\nWe found out that there are two students that we left out of the network,\nstudent no. 30 and 31. \nThey are one male (30) and one female (31), \nand they are a pair that just love hanging out with one another \nand with individual 7 (i.e. count=3), in both directions per pair. 
\nAdd this information to the graph.", "from nams.solutions.intro import adding_students\n\n#### REPLACE THE NEXT LINE WITH YOUR ANSWER\nG = adding_students(G)", "You can verify that the graph has been correctly created\nby executing the test function below.", "def test_graph_integrity(G):\n assert 30 in G.nodes()\n assert 31 in G.nodes()\n assert G.nodes[30]['gender'] == 'male'\n assert G.nodes[31]['gender'] == 'female'\n assert G.has_edge(30, 31)\n assert G.has_edge(30, 7)\n assert G.has_edge(31, 7)\n assert G.edges[30, 7]['count'] == 3\n assert G.edges[7, 30]['count'] == 3\n assert G.edges[31, 7]['count'] == 3\n assert G.edges[7, 31]['count'] == 3\n assert G.edges[30, 31]['count'] == 3\n assert G.edges[31, 30]['count'] == 3\n print('All tests passed.')\n \ntest_graph_integrity(G)", "Coding Patterns\nThese are some recommended coding patterns when doing network analysis using NetworkX,\nwhich stem from my personal experience with the package.\nIterating using List Comprehensions\nI would recommend that you use the following for compactness: \npython\n[d['attr'] for n, d in G.nodes(data=True)]\nAnd if the node is unimportant, you can do:\npython\n[d['attr'] for _, d in G.nodes(data=True)]\nIterating over Edges using List Comprehensions\nA similar pattern can be used for edges:\npython\n[n2 for n1, n2, d in G.edges(data=True)]\nor\npython\n[n2 for _, n2, d in G.edges(data=True)]\nIf the graph you are constructing is a directed graph,\nwith a \"source\" and \"sink\" available,\nthen I would recommend the following naming of variables instead:\npython\n[(sc, sk) for sc, sk, d in G.edges(data=True)]\nor \npython\n[d['attr'] for sc, sk, d in G.edges(data=True)]\nFurther Reading\nFor a deeper look at the NetworkX API,\nbe sure to check out the NetworkX docs.\nFurther Exercises\nHere's some further exercises that you can use to get some practice.\nExercise: Unrequited Friendships\n\nTry figuring out which students have \"unrequited\" friendships, that is, \nthey have rated another student as their favourite at least once, \nbut that other student has not rated them as their favourite at least once.\n\nHint: the goal here is to get a list of edges for which the reverse edge is not present.\nHint: You may need the class method G.has_edge(n1, n2). This returns whether a graph has an edge between the nodes n1 and n2.", "from nams.solutions.intro import unrequitted_friendships_v1\n#### REPLACE THE NEXT LINE WITH YOUR ANSWER\nunrequitted_friendships = unrequitted_friendships_v1(G)\nassert len(unrequitted_friendships) == 124", "In a previous session at ODSC East 2018, a few other class participants provided the following solutions,\nwhich you can take a look at by uncommenting the following cells.\nThis first one by @schwanne is the list comprehension version of the above solution:", "from nams.solutions.intro import unrequitted_friendships_v2\n# unrequitted_friendships_v2??", "This one by @end0 is a unique one involving sets.", "from nams.solutions.intro import unrequitted_friendships_v3\n# unrequitted_friendships_v3??", "Solution Answers\nHere are the answers to the exercises above.", "import nams.solutions.intro as solutions\nimport inspect\n\nprint(inspect.getsource(solutions))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
quantopian/research_public
notebooks/lectures/Ranking_Universes_by_Factors/notebook.ipynb
apache-2.0
[ "Ranking Universes by Factors\nBy Delaney Granizo-Mackenzie and Gilbert Wassermann\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\nhttps://github.com/quantopian/research_public\n\nOne common technique in quantitative finance is that of ranking stocks in some way. This ranking can be whatever you come up with, but will often be a combination of fundamental factors and price-based signals. One example could be the following\n\nScore stocks based on 0.5 x the PE Ratio of that stock + 0.5 x the 30 day price momentum\nRank stocks based on that score\n\nThese ranking systems can be used to construct long-short equity strategies. The Long-Short Equity Lecture is recommended reading before this Lecture.\nIn order to develop a good ranking system, we need to first understand how to evaluate ranking systems. We will show a demo here.\nWARNING:\nThis notebook does analysis over thousands of equities and hundreds of timepoints. The resulting memory usage can crash the research server if you are running other notebooks. Please shut down other notebooks in the main research menu before running this notebook. You can tell if other notebooks are running by checking the color of the notebook symbol. Green indicates running, grey indicates not.", "import numpy as np\nimport statsmodels.api as sm\nimport scipy.stats as stats\nimport scipy\nfrom statsmodels import regression\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd", "Getting Data\nThe first thing we're gonna do is get monthly values for the Market Cap, P/E Ratio and Monthly Returns for every equity. Monthly Returns is a metric that takes the returns accrued over an entire month of trading by dividing the last close price by the first close price and subtracting 1.", "from quantopian.pipeline import Pipeline\nfrom quantopian.pipeline.data import morningstar\nfrom quantopian.pipeline.data.builtin import USEquityPricing\nfrom quantopian.pipeline.factors import CustomFactor, Returns\n \ndef make_pipeline():\n \"\"\"\n Create and return our pipeline.\n \n We break this piece of logic out into its own function to make it easier to\n test and modify in isolation.\n \"\"\"\n \n pipe = Pipeline(\n columns = {\n 'Market Cap' : morningstar.valuation.market_cap.latest,\n 'PE Ratio' : morningstar.valuation_ratios.pe_ratio.latest,\n 'Monthly Returns': Returns(window_length=21),\n })\n \n return pipe\n\npipe = make_pipeline()", "Let's take a look at the data to get a quick sense of what we have. This may take a while.", "from quantopian.research import run_pipeline\n\nstart_date = '2013-01-01'\nend_date = '2015-02-01'\n\ndata = run_pipeline(pipe, start_date, end_date)\n\n# remove NaN values\ndata = data.dropna()\n\n# show data\ndata", "Now, we need to take each of these individual factors, clean them to remove NaN values and aggregate them for each month.", "cap_data = data['Market Cap'].transpose().unstack() # extract series of data\ncap_data = cap_data.T.dropna().T # remove NaN values\ncap_data = cap_data.resample('M', how='last') # use last instance in month to aggregate\n\npe_data = data['PE Ratio'].transpose().unstack()\npe_data = pe_data.T.dropna().T\npe_data = pe_data.resample('M', how='last')\n\nmonth_data = data['Monthly Returns'].transpose().unstack()\nmonth_data = month_data.T.dropna().T\nmonth_data = month_data.resample('M', how='last')", "The next step is to figure out which equities we have data for. 
Data sources are never perfect, and stocks go in and out of existence with Mergers, Acquisitions, and Bankruptcies. We'll make a list of the stocks common to all three sources (our factor data sets) and then filter each down to just those stocks.", "common_equities = cap_data.T.index.intersection(pe_data.T.index).intersection(month_data.T.index)", "Now, we will make sure that each time series is being run over an identical set of securities.", "cap_data_filtered = cap_data[common_equities][:-1]\nmonth_forward_returns = month_data[common_equities][1:]\npe_data_filtered = pe_data[common_equities][:-1]", "Here is the filtered data for market cap over all equities for the first 5 months, as an example.", "cap_data_filtered.head()", "Because we're dealing with ranking systems, at several points we're going to want to rank our data. Let's check how our data looks when ranked to get a sense for this.", "cap_data_filtered.rank().head()", "Looking at Correlations Over Time\nNow that we have the data, let's do something with it. Our first analysis will be to measure the monthly Spearman rank correlation coefficient between Market Cap and month-forward returns. In other words, how predictive of 30-day returns is ranking your universe by market cap.", "scores = np.zeros(24)\npvalues = np.zeros(24)\nfor i in range(24):\n score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], month_forward_returns.iloc[i])\n pvalues[i] = pvalue\n scores[i] = score\n \nplt.bar(range(1,25),scores)\nplt.hlines(np.mean(scores), 1, 25, colors='r', linestyles='dashed')\nplt.xlabel('Month')\nplt.xlim((1, 25))\nplt.legend(['Mean Correlation over All Months', 'Monthly Rank Correlation'])\nplt.ylabel('Rank correlation between Market Cap and 30-day forward returns');", "We can see that the average correlation is positive, but varies a lot from month to month.\nLet's look at the same analysis, but with PE Ratio.", "scores = np.zeros(24)\npvalues = np.zeros(24)\nfor i in range(24):\n score, pvalue = stats.spearmanr(pe_data_filtered.iloc[i], month_forward_returns.iloc[i])\n pvalues[i] = pvalue\n scores[i] = score\n \nplt.bar(range(1,25),scores)\nplt.hlines(np.mean(scores), 1, 25, colors='r', linestyles='dashed')\nplt.xlabel('Month')\nplt.xlim((1, 25))\nplt.legend(['Mean Correlation over All Months', 'Monthly Rank Correlation'])\nplt.ylabel('Rank correlation between PE Ratio and 30-day forward returns');", "The correlation of PE Ratio and 30-day returns seems to be near 0 on average. It's important to note that this is monthly data between 2012 and 2015. Different factors are predictive on different timeframes and frequencies, so the fact that PE Ratio doesn't appear predictive here is not necessarily a reason to throw it out as a useful factor. Beyond its usefulness in predicting returns, it can be used for risk exposure analysis as discussed in the Factor Risk Exposure Lecture.\nBasket Returns\nThe next step is to compute the returns of baskets taken out of our ranking. If we rank all equities and then split them into $n$ groups, what would the mean return be of each group? We can answer this question in the following way. 
The first step is to create a function that will give us the mean return in each basket for a given month and ranking factor.", "def compute_basket_returns(factor_data, forward_returns, number_of_baskets, month):\n\n data = pd.concat([factor_data.iloc[month-1],forward_returns.iloc[month-1]], axis=1)\n # Rank the equities on the factor values\n data.columns = ['Factor Value', 'Month Forward Returns']\n data.sort('Factor Value', inplace=True)\n \n # How many equities per basket\n equities_per_basket = np.floor(len(data.index) / number_of_baskets)\n\n basket_returns = np.zeros(number_of_baskets)\n\n # Compute the returns of each basket\n for i in range(number_of_baskets):\n start = i * equities_per_basket\n if i == number_of_baskets - 1:\n # Handle having a few extra in the last basket when our number of equities doesn't divide well\n end = len(data.index) - 1\n else:\n end = i * equities_per_basket + equities_per_basket\n # Actually compute the mean returns for each basket\n basket_returns[i] = data.iloc[start:end]['Month Forward Returns'].mean()\n \n return basket_returns", "The first thing we'll do with this function is compute this for each month and then average. This should give us a sense of the relationship over a long timeframe.", "number_of_baskets = 10\nmean_basket_returns = np.zeros(number_of_baskets)\nfor m in range(1, 25):\n basket_returns = compute_basket_returns(cap_data_filtered, month_forward_returns, number_of_baskets, m)\n mean_basket_returns += basket_returns\n\nmean_basket_returns /= 24 \n\n# Plot the returns of each basket\nplt.bar(range(number_of_baskets), mean_basket_returns)\nplt.ylabel('Returns')\nplt.xlabel('Basket')\nplt.legend(['Returns of Each Basket']);", "Spread Consistency\nOf course, that's just the average relationship. To get a sense of how consistent this is, and whether or not we would want to trade on it, we should look at it over time. Here we'll look at the monthly spreads for the first year. 
We can see a lot of variation, and further analysis should be done to determine whether Market Cap is tradeable.", "f, axarr = plt.subplots(3, 4)\nfor month in range(1, 13):\n basket_returns = compute_basket_returns(cap_data_filtered, month_forward_returns, 10, month)\n\n r = np.floor((month-1) / 4)\n c = (month-1) % 4\n axarr[r, c].bar(range(number_of_baskets), basket_returns)\n axarr[r, c].xaxis.set_visible(False) # Hide the axis lables so the plots aren't super messy\n axarr[r, c].set_title('Month ' + str(month))", "We'll repeat the same analysis for PE Ratio.", "number_of_baskets = 10\nmean_basket_returns = np.zeros(number_of_baskets)\nfor m in range(1, 25):\n basket_returns = compute_basket_returns(pe_data_filtered, month_forward_returns, number_of_baskets, m)\n mean_basket_returns += basket_returns\n\nmean_basket_returns /= 24 \n\n# Plot the returns of each basket\nplt.bar(range(number_of_baskets), mean_basket_returns)\nplt.ylabel('Returns')\nplt.xlabel('Basket')\nplt.legend(['Returns of Each Basket']);\n\nf, axarr = plt.subplots(3, 4)\nfor month in range(1, 13):\n basket_returns = compute_basket_returns(pe_data_filtered, month_forward_returns, 10, month)\n\n r = np.floor((month-1) / 4)\n c = (month-1) % 4\n axarr[r, c].bar(range(10), basket_returns)\n axarr[r, c].xaxis.set_visible(False) # Hide the axis lables so the plots aren't super messy\n axarr[r, c].set_title('Month ' + str(month))", "Sometimes Factors are Just Other Factors\nOften times a new factor will be discovered that seems to induce spread, but it turns out that it is just a new and potentially more complicated way to compute a well known factor. Consider for instance the case in which you have poured tons of resources into developing a new factor, it looks great, but how do you know it's not just another factor in disguise?\nTo check for this, there are many analyses that can be done.\nCorrelation Analysis\nOne of the most intuitive ways is to check what the correlation of the factors is over time. 
We'll plot that here.", "scores = np.zeros(24)\npvalues = np.zeros(24)\nfor i in range(24):\n score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], pe_data_filtered.iloc[i])\n pvalues[i] = pvalue\n scores[i] = score\n \nplt.bar(range(1,25),scores)\nplt.hlines(np.mean(scores), 1, 25, colors='r', linestyles='dashed')\nplt.xlabel('Month')\nplt.xlim((1, 25))\nplt.legend(['Mean Correlation over All Months', 'Monthly Rank Correlation'])\nplt.ylabel('Rank correlation between Market Cap and PE Ratio');", "And also the p-values because the correlations may not be that meaningful by themselves.", "scores = np.zeros(24)\npvalues = np.zeros(24)\nfor i in range(24):\n score, pvalue = stats.spearmanr(cap_data_filtered.iloc[i], pe_data_filtered.iloc[i])\n pvalues[i] = pvalue\n scores[i] = score\n \nplt.bar(range(1,25),pvalues)\nplt.xlabel('Month')\nplt.xlim((1, 25))\nplt.legend(['Mean Correlation over All Months', 'Monthly Rank Correlation'])\nplt.ylabel('Rank correlation between Market Cap and PE Ratio');", "There is interesting behavior, and further analysis would be needed to determine whether a relationship existed.", "pe_dataframe = pd.DataFrame(pe_data_filtered.iloc[0])\npe_dataframe.columns = ['F1']\ncap_dataframe = pd.DataFrame(cap_data_filtered.iloc[0])\ncap_dataframe.columns = ['F2']\nreturns_dataframe = pd.DataFrame(month_forward_returns.iloc[0])\nreturns_dataframe.columns = ['Returns']\n\ndata = pe_dataframe.join(cap_dataframe).join(returns_dataframe)\n\ndata = data.rank(method='first')\n\nheat = np.zeros((len(data), len(data)))\n\nfor e in data.index:\n F1 = data.loc[e]['F1']\n F2 = data.loc[e]['F2']\n R = data.loc[e]['Returns']\n heat[F1-1, F2-1] += R\n \nheat = scipy.signal.decimate(heat, 40)\nheat = scipy.signal.decimate(heat.T, 40).T\n\np = sns.heatmap(heat, xticklabels=[], yticklabels=[])\n# p.xaxis.set_ticks([])\n# p.yaxis.set_ticks([])\np.xaxis.set_label_text('F1 Rank')\np.yaxis.set_label_text('F2 Rank')\np.set_title('Sum Rank of Returns vs Factor Ranking');", "How to Choose Ranking System\nThe ranking system is the secret sauce of many strategies. Choosing a good ranking system, or factor, is not easy and the subject of much research. We'll discuss a few starting points here.\nClone and Tweak\nChoose one that is commonly discussed and see if you can modify it slightly to gain back an edge. Often times factors that are public will have no signal left as they have been completely arbitraged out of the market. However, sometimes they lead you in the right direction of where to go.\nPricing Models\nAny model that predicts future returns can be a factor. The future return predicted is now that factor, and can be used to rank your universe. You can take any complicated pricing model and transform it into a ranking.\nPrice Based Factors (Technical Indicators)\nPrice based factors take information about the historical price of each equity and use it to generate the factor value. Examples could be 30-day momentum, or volatility measures.\nReversion vs. Momentum\nIt's important to note that some factors bet that prices, once moving in a direction, will continue to do so. Some factors bet the opposite. Both are valid models on different time horizons and assets, and it's important to investigate whether the underlying behavior is momentum or reversion based.\nFundamental Factors (Value Based)\nThis is using combinations of fundamental values as we discussed today. 
Fundamental values contain information that is tied to real world facts about a company, so in many ways can be more robust than prices.\nThe Arms Race\nUltimately, developing predictive factors is an arms race in which you are trying to stay one step ahead. Factors get arbitraged out of markets and have a lifespan, so it's important that you are constantly doing work to determine how much decay your factors are experiencing, and what new factors might be used to take their place.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google-coral/tutorials
fix_conversion_issues_ptq_tf2.ipynb
apache-2.0
[ "Copyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\")", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Fix top issues when converting a TF model for Edge TPU\nThis page shows how to fix some known issues when converting TensorFlow 2 models for the Edge TPU. \n<a href=\"https://colab.research.google.com/github/google-coral/tutorials/blob/master/fix_conversion_issues_ptq_tf2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\"></a>\n&nbsp;&nbsp;&nbsp;&nbsp;\n<a href=\"https://github.com/google-coral/tutorials/blob/master/fix_conversion_issues_ptq_tf2.ipynb\" target=\"_parent\"><img src=\"https://img.shields.io/static/v1?logo=GitHub&label=&color=333333&style=flat&message=View%20on%20GitHub\" alt=\"View in GitHub\"></a>\nTo run all the code in this tutorial, select Runtime > Run all in the Colab toolbar.\nSet up the environment\nImport the Python libraries:", "import tensorflow as tf\nassert float(tf.__version__[:3]) >= 2.3\n\nimport os\nimport numpy as np\nimport matplotlib.pyplot as plt", "Install the Edge TPU Compiler:", "! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -\n\n! echo \"deb https://packages.cloud.google.com/apt coral-edgetpu-stable main\" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list\n\n! sudo apt-get update\n\n! sudo apt-get install edgetpu-compiler\t", "Create quantization function", "def quantize_model(converter):\n # This generator provides a junk representative dataset\n # (It creates a poor model but is only for demo purposes)\n def representative_data_gen():\n for i in range(10):\n image = tf.random.uniform([1, 224, 224, 3])\n yield [image]\n \n converter.optimizations = [tf.lite.Optimize.DEFAULT]\n converter.representative_dataset = representative_data_gen\n converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]\n converter.inference_input_type = tf.uint8\n converter.inference_output_type = tf.uint8\n\n return converter.convert()", "Can't compile due to dynamic batch size\nThe Edge TPU Compiler fails for some models such as MobileNetV1 if the input shape batch size is not set to 1, although this isn't exactly obvious from the compiler's output:\nInvalid model: mobilenet_quant.tflite\nModel not quantized\nThat error might be caused by something else, but you should try the following solution because although it's not required for all models, it shouldn't hurt.\nSolution for a Keras model object", "model = tf.keras.applications.MobileNet()", "The following creates a TFLite file that will fail in the Edge TPU Compiler:", "converter = tf.lite.TFLiteConverter.from_keras_model(model)\ntflite_model = quantize_model(converter)\n\nwith open('mobilenet_quant_before.tflite', 'wb') as f:\n f.write(tflite_model)\n\n! 
edgetpu_compiler mobilenet_quant_before.tflite", "It won't compile because the model has a dynamic batch size as shown here (None means it's dynamic):", "model.input.shape", "So to fix it, we need to set that to 1:", "model.input.set_shape((1,) + model.input.shape[1:])\nmodel.input.shape", "Now we can convert it again and it will compile:", "converter = tf.lite.TFLiteConverter.from_keras_model(model)\ntflite_model = quantize_model(converter)\n\nwith open('mobilenet_quant_after.tflite', 'wb') as f:\n f.write(tflite_model)\n\n! edgetpu_compiler mobilenet_quant_after.tflite", "Solution for a SavedModel\nIf you're loading a SavedModel file, then the fix looks a little different.\nSo let's say you saved a model like this:", "model = tf.keras.applications.MobileNet()\nsave_path = os.path.join(\"mobilenet/1/\")\ntf.saved_model.save(model, save_path)", "Ideally, you could later load the model like this:", "converter = tf.lite.TFLiteConverter.from_saved_model(save_path)", "But the saved model's input still has a dynamic batch size, so you need to instead load the model with saved_model.load() and modify the input's concrete function so it has a batch size of 1. Then load it into TFLiteConverter using the concrete function:", "imported = tf.saved_model.load(save_path)\nconcrete_func = imported.signatures[\"serving_default\"]\nconcrete_func.inputs[0].set_shape([1, 224, 224, 3])\nconverter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], imported)", "Now you can convert to TFLite and it will compile for the Edge TPU:", "tflite_model = quantize_model(converter)\nwith open('mobilenet_imported_quant.tflite', 'wb') as f:\n f.write(tflite_model)\n\n! edgetpu_compiler mobilenet_imported_quant.tflite", "Can't import a SavedModel without a signature\nSometimes, a SavedModel does not include a signature (such as when the model was built with a custom tf.Module), making it impossible to load using TFLiteConverter. So you can instead add the batch size as follows.\nNote: If you created the model yourself, see how to specify the signature during export so this isn't a problem.", "# First get the Inception SavedModel, which is lacking a signature\n!wget -O imagenet_inception_v2_classification_4.tar.gz https://tfhub.dev/google/imagenet/inception_v2/classification/4?tf-hub-format=compressed\n!mkdir -p imagenet_inception_v2_classification_4\n!tar -xvzf imagenet_inception_v2_classification_4.tar.gz --directory imagenet_inception_v2_classification_4", "For example, this fails because the model has no signature:", "#converter = tf.lite.TFLiteConverter.from_saved_model(\"imagenet_inception_v2_classification_4\")", "Whereas other code above loads the input's concrete function by calling upon its \"serving_default\" signature, we can't do that if the model has no signature. So we instead get the concrete function by specifying its known input tensor shape:", "imported = tf.saved_model.load(\"imagenet_inception_v2_classification_4\")\nconcrete_func = imported.__call__.get_concrete_function(\n tf.TensorSpec([1, 224, 224, 3]))\nconverter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], imported)", "Now we can convert and compile:", "tflite_model = quantize_model(converter)\nwith open('inceptionv2_imported_quant.tflite', 'wb') as f:\n f.write(tflite_model)\n\n! edgetpu_compiler inceptionv2_imported_quant.tflite" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ehthiede/PyEDGAR
examples/Delay_Embedding/.ipynb_checkpoints/Delay_Embedding-checkpoint.ipynb
mit
[ "Delay Embedding and the MFPT\nHere, we give an example script, showing the effect of Delay Embedding on a Brownian motion on the Muller-Brown potential, projeted onto its y-axis. This script may take a long time to run, as considerable data is required to accurately reconstruct the hidden degrees of freedom.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pyedgar\nfrom pyedgar.data_manipulation import tlist_to_flat, flat_to_tlist, delay_embed, lift_function\n\n%matplotlib inline", "Load Data and set Hyperparameters\nWe first load in the pre-sampled data. The data consists of 400 short trajectories, each with 30 datapoints. The precise sampling procedure is described in \"Galerkin Approximation of Dynamical Quantities using Trajectory Data\" by Thiede et al. Note that this is a smaller dataset than in the paper. We use a smallar dataset to ensure the diffusion map basis construction runs in a reasonably short time.\nSet Hyperparameters\nHere we specify a few hyperparameters. Thes can be varied to study the behavior of the scheme in various limits by the user.", "ntraj = 700\ntrajectory_length = 40\nlag_values = np.arange(1, 37, 2)\nembedding_values = lag_values[1:] - 1", "Load and format the data", "trajs_2d = np.load('data/muller_brown_trajs.npy')[:ntraj, :trajectory_length] # Raw trajectory\ntrajs = trajs_2d[:, :, 1] # Only keep y coordinate\nstateA = (trajs > 1.15).astype('float')\nstateB = (trajs < 0.15).astype('float')\n\n# Convert to list of trajectories format\ntrajs = [traj_i.reshape(-1, 1) for traj_i in trajs]\nstateA = [A_i for A_i in stateA]\nstateB = [B_i for B_i in stateB]\n\n# Load the true results\ntrue_mfpt = np.load('data/htAB_1_0_0_1.npy')", "We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the traj_edges array.", "flattened_trajs, traj_edges = tlist_to_flat(trajs)\nflattened_stateA = np.hstack(stateA)\nflattened_stateB = np.hstack(stateB)\nprint(\"Flattened Shapes are: \", flattened_trajs.shape, flattened_stateA.shape, flattened_stateB.shape,)", "Construct DGA MFPT by increasing lag times\nWe first construct the MFPT with increasing lag times.", "# Build the basis set\ndiff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d', epsilon='bgh_generous')\ndiff_atlas.fit(flattened_trajs)\nflat_basis = diff_atlas.make_dirichlet_basis(200, in_domain=(1. - flattened_stateA))\nbasis = flat_to_tlist(flat_basis, traj_edges)\nflat_basis_no_boundaries = diff_atlas.make_dirichlet_basis(200)\nbasis_no_boundaries = flat_to_tlist(flat_basis_no_boundaries, traj_edges)\n\n# Perform DGA calculation\nmfpt_BA_lags = []\nfor lag in lag_values:\n mfpt = pyedgar.galerkin.compute_mfpt(basis, stateA, lag=lag)\n pi = pyedgar.galerkin.compute_change_of_measure(basis_no_boundaries, lag=lag)\n flat_pi = np.array(pi).ravel()\n flat_mfpt = np.array(mfpt).ravel()\n mfpt_BA = np.mean(flat_mfpt * flat_pi * np.array(stateB).ravel()) / np.mean(flat_pi * np.array(stateB).ravel())\n mfpt_BA_lags.append(mfpt_BA)", "Construct DGA MFPT with increasing Delay Embedding\nWe now construct the MFPT using delay embedding. 
To accelerate the process, we will only use every fifth value of the delay length.", "mfpt_BA_embeddings = []\nfor lag in embedding_values:\n # Perform delay embedding\n debbed_traj = delay_embed(trajs, n_embed=lag)\n lifted_A = lift_function(stateA, n_embed=lag)\n lifted_B = lift_function(stateB, n_embed=lag)\n \n flat_debbed_traj, embed_edges = tlist_to_flat(debbed_traj)\n flat_lifted_A = np.hstack(lifted_A)\n \n # Build the basis \n diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d',\n epsilon='bgh_generous', neighbor_params={'algorithm':'brute'})\n diff_atlas.fit(flat_debbed_traj)\n flat_deb_basis = diff_atlas.make_dirichlet_basis(200, in_domain=(1. - flat_lifted_A))\n deb_basis = flat_to_tlist(flat_deb_basis, embed_edges)\n \n flat_pi_basis = diff_atlas.make_dirichlet_basis(200)\n pi_basis = flat_to_tlist(flat_deb_basis, embed_edges)\n \n \n # Construct the Estimate\n deb_mfpt = pyedgar.galerkin.compute_mfpt(deb_basis, lifted_A, lag=1)\n pi = pyedgar.galerkin.compute_change_of_measure(pi_basis)\n flat_pi = np.array(pi).ravel()\n flat_mfpt = np.array(deb_mfpt).ravel()\n deb_mfpt_BA = np.mean(flat_mfpt * flat_pi * np.array(lifted_B).ravel()) / np.mean(flat_pi * np.array(lifted_B).ravel())\n mfpt_BA_embeddings.append(deb_mfpt_BA)", "Plot the Results\nWe plot the results of our calculation, against the true value (black line, with the standard deviation in stateB given by the dotted lines). We see that increasing the lag time causes the mean-first-passage time to grow unboundedly. In contrast, with delay embedding the mean-first-passage time converges. We do, however, see one bad fluctuation at a delay length of 16, and that as the delay length gets sufficiently long, the calculation blows up.", "plt.plot(embedding_values, mfpt_BA_embeddings, label=\"Delay Embedding\")\nplt.plot(lag_values, mfpt_BA_lags, label=\"Lags\")\nplt.axhline(true_mfpt[0] * 10, color='k', label='True')\nplt.axhline((true_mfpt[0] + true_mfpt[1]) * 10., color='k', linestyle=':')\nplt.axhline((true_mfpt[0] - true_mfpt[1]) * 10., color='k', linestyle=':')\n\nplt.legend()\nplt.ylim(0, 100)\n\nplt.xlabel(\"Lag / Delay Length\")\nplt.ylabel(\"Estimated MFPT\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DB2-Samples/db2odata
Notebooks/DB2 Jupyter Extensions Tutorial.ipynb
apache-2.0
[ "DB2 Jupyter Notebook Extensions Tutorial\nThe SQL code tutorials for DB2 rely on a Jupyter notebook extension, commonly refer to as a \"magic\" command. The beginning of all of the notebooks begin with the following command which will load the extension and allow the remainder of the notebook to use the %sql magic command.\n<pre>\n&#37;run db2.ipynb\n</pre>\nThe cell below will load the db2 extension. Note that it will take a few seconds for the extension to load, so you should generally wait until the \"DB2 Extensions Loaded\" message is displayed in your notebook.", "%run db2.ipynb", "Connections to DB2\nBefore any SQL commands can be issued, a connection needs to be made to the DB2 database that you will be using. The connection can be done manually (through the use of the CONNECT command), or automatically when the first %sql command is issued.\nThe DB2 magic command tracks whether or not a connection has occured in the past and saves this information between notebooks and sessions. When you start up a notebook and issue a command, the program will reconnect to the database using your credentials from the last session. In the event that you have not connected before, the system will prompt you for all the information it needs to connect. This information includes:\n\nDatabase name (SAMPLE) \nHostname - localhost (enter an IP address if you need to connect to a remote server) \nPORT - 50000 (this is the default but it could be different) \nUserid - DB2INST1 \nPassword - No password is provided so you have to enter a value \nMaximum Rows - 10 lines of output are displayed when a result set is returned \n\nThere will be default values presented in the panels that you can accept, or enter your own values. All of the information will be stored in the directory that the notebooks are stored on. Once you have entered the information, the system will attempt to connect to the database for you and then you can run all of the SQL scripts. More details on the CONNECT syntax will be found in a section below.\nThe next statement will force a CONNECT to occur with the default values. If you have not connected before, it will prompt you for the information.", "%sql CONNECT", "Line versus Cell Command\nThe DB2 extension is made up of one magic command that works either at the LINE level (%sql) or at the CELL level (%%sql). If you only want to execute a SQL command on one line in your script, use the %sql form of the command. If you want to run a larger block of SQL, then use the %%sql form. Note that when you use the %%sql form of the command, the entire contents of the cell is considered part of the command, so you cannot mix other commands in the cell.\nThe following is an example of a line command:", "%sql VALUES 'HELLO THERE'", "The %sql syntax allows you to pass local variables to the script. There are 5 predefined variables defined in the program:\n\ndb2_database - The name of the database you are connected to\ndb2_uid - The userid that you connected with\ndb2_host = The IP address of the host system\ndb2_port - The port number of the host system\ndb2_max - The maximum number of rows to return in an answer set\n\nTo pass a value to a LINE script, use the braces {} to surround the name of the variable:\n<pre>\n {db2_database}\n</pre>\n\nThe next line will display the currently connected database.", "%sql VALUES '{db2_database}'", "You cannot use variable substitution with the CELL version of the %sql command. 
If your SQL statement extends beyond one line, and you want to use variable substitution, you can use a couple of techniques to make it look like one line. The simplest way is to add the backslash character (\\) at the end of every line. The following\nexample illustrates the technique.", "%sql VALUES \\\n '{db2_database}'", "If you have SQL that requires multiple lines, of if you need to execute many lines of SQL, then you should \nbe using the CELL version of the %sql command. To start a block of SQL, start the cell with %%sql and do not place\nany SQL following the command. Subsequent lines can contain SQL code, with each SQL statement delimited with the\nsemicolon (;). You can change the delimiter if required for procedures, etc... More details on this later.", "%%sql\nVALUES\n 1,\n 2,\n 3", "If you are using a single statement then there is no need to use a delimiter. However, if you are combining a number of commands then you must use the semicolon.", "%%sql\nDROP TABLE STUFF;\nCREATE TABLE STUFF (A INT);\nINSERT INTO STUFF VALUES\n 1,2,3;\nSELECT * FROM STUFF;", "The script will generate messages and output as it executes. Each SQL statement that generates results will have a table displayed with the result set. If a command is executed, the results of the execution get listed as well. The script you just ran probably generated an error on the DROP table command.\nOptions\nBoth forms of the %sql command have options that can be used to change the behavior of the code. For both forms of the command (%sql, %%sql), the options must be on the same line as the command:\n<pre>\n%sql -t ...\n%%sql -t\n</pre>\n\nThe only difference is that the %sql command can have SQL following the parameters, while the %%sql requires the\nSQL to be placed on subsequent lines.\nThere are a number of parameters that you can specify as part of the %sql statement. \n\n-d - Use alternative delimiter\n-t - Time the statement execution\n-n - Run all statements as commands (no answer sets)\n-s - Run all statements as SQL\n-q - Suppress messages \n-qq - Suppress messages and any SQL output\n-j - JSON formatting of a column\n-a - Show all output\n-pb - Bar chart of results\n-pp - Pie chart of results \n-pl - Line chart of results\n-sampledata Load the database with the sample EMPLOYEE and DEPARTMENT tables\n-r - Return the results into a variable (list of rows)\n\nMultiple parameters are allowed on a command line. Each option should be separated by a space:\n<pre>\n%sql -a -j ...\n</pre>\n\nThe sections below will explain the options in more detail.\nDelimiters\nThe default delimiter for all SQL statements is the semicolon. However, this becomes a problem when you try to create a trigger, function, or procedure that uses SQLPL (or PL/SQL). Use the -d option to turn the SQL delimiter into the at (@) sign. The semi-colon is then ignored as a delimiter.\nFor example, the following SQL will use the @ sign as the delimiter.", "%%sql -d\nDROP TABLE STUFF\n@\nCREATE TABLE STUFF (A INT)\n@\nINSERT INTO STUFF VALUES\n 1,2,3\n@\nSELECT * FROM STUFF\n@", "The delimiter change will only take place for the statements following the %%sql command. Subsequent cells\nin the notebook will still use the semicolon. You must use the -d option for every cell that needs to use the\nsemicolon in the script.\nLimiting Result Sets\nThe default number of rows displayed for any result set is 10. You have the option of changing this option when initially connecting to the database. 
If you want to override the number of rows display you can either update\nthe control variable, or use the -a option. The -a option will display all of the rows in the answer set. For instance, the following SQL will only show 10 rows even though we inserted 15 values:", "%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15", "You will notice that the displayed result will split the visible rows to the first 5 rows and the last 5 rows.\nUsing the -a option will display all values:", "%sql -a values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15", "To change the default value of rows displayed, you can either do a CONNECT RESET (discussed later) or set the\ndb2 control variable db2_max to a different value. A value of -1 will display all rows.", "# Save previous version of maximum rows\nlast_max = db2_max\ndb2_max = 5\n%sql values 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\n# Set the maximum back\ndb2_max = last_max", "Quiet Mode\nEvery SQL statement will result in some output. You will either get an answer set (SELECT), or an indication if\nthe command worked. For instance, the following set of SQL will generate some error messages since the tables \nwill probably not exist:", "%%sql\nDROP TABLE TABLE_NOT_FOUND;\nDROP TABLE TABLE_SPELLED_WRONG;", "If you know that these errors may occur you can silence them with the -q option.", "%%sql -q\nDROP TABLE TABLE_NOT_FOUND;\nDROP TABLE TABLE_SPELLED_WRONG;", "SQL output will not be suppressed, so the following command will still show the results.", "%%sql -q\nDROP TABLE TABLE_NOT_FOUND;\nDROP TABLE TABLE_SPELLED_WRONG;\nVALUES 1,2,3;", "To have the messages returned as text only, you must set the db2_error_highlight variable to False. This is a change that will affect all messages in the notebook.", "db2_error_highlight = False\n%sql DROP TABLE TABLE_NOT_FOUND;", "To set the messages back to being formatted, set db2_error_highlight to True.", "db2_error_highlight = True\n%sql DROP TABLE TABLE_NOT_FOUND;", "SQL and Command Mode\nThe %sql looks at the beginning of your SQL to determine whether or not to run it as a SELECT statement, or\nas a command. There are three statements that trigger SELECT mode: SELECT, WITH, and VALUES. Aside from \nCONNECT, all other statements will be considered to be an SQL statement that does not return a result set. In\nmost cases this will work fine. However, it is possible that you may have a statement that is actually a command and does not return a result set. However, because of the simplistic nature of determining the statement \ntype, the statement will not be executed properly. To treat all statements as commands, use the -n flag (no output) and the -s flag for output.\nTiming SQL Statements\nSometimes you want to see how the execution of a statement changes with the addition of indexes or other\noptimization changes. The -t option will run the statement on the LINE or one SQL statement in the CELL for \nexactly one second. The results will be displayed and optionally placed into a variable. The syntax of the\ncommand is:\n<pre>\nsql_time = %sql -t SELECT * FROM EMPLOYEE\n</pre>\nFor instance, the following SQL will time the VALUES clause.", "%sql -t VALUES 1,2,3,4,5,6,7,8,9", "When timing a statement, no output will be displayed. If your SQL statement takes longer than one second you\nwill need to modify the db2_runtime variable. 
This variable must be set to the number of seconds that you\nwant to run the statement.", "db2_runtime = 5\n%sql -t VALUES 1,2,3,4,5,6,7,8,9", "JSON Formatting\nDB2 supports querying JSON that is stored in a column within a table. Standard output would just display the \nJSON as a string. For instance, the following statement would just return a large string of output.", "%%sql\nVALUES \n '{\n \"empno\":\"000010\",\n \"firstnme\":\"CHRISTINE\",\n \"midinit\":\"I\",\n \"lastname\":\"HAAS\",\n \"workdept\":\"A00\",\n \"phoneno\":[3978],\n \"hiredate\":\"01/01/1995\",\n \"job\":\"PRES\",\n \"edlevel\":18,\n \"sex\":\"F\",\n \"birthdate\":\"08/24/1963\",\n \"pay\" : {\n \"salary\":152750.00,\n \"bonus\":1000.00,\n \"comm\":4220.00}\n }'", "Adding the -j option to the %sql (or %%sql) command will format the first column of a return set to better\ndisplay the structure of the document. Note that if your answer set has additional columns associated with it, they will not be displayed in this format.", "%%sql -j\nVALUES \n '{\n \"empno\":\"000010\",\n \"firstnme\":\"CHRISTINE\",\n \"midinit\":\"I\",\n \"lastname\":\"HAAS\",\n \"workdept\":\"A00\",\n \"phoneno\":[3978],\n \"hiredate\":\"01/01/1995\",\n \"job\":\"PRES\",\n \"edlevel\":18,\n \"sex\":\"F\",\n \"birthdate\":\"08/24/1963\",\n \"pay\" : {\n \"salary\":152750.00,\n \"bonus\":1000.00,\n \"comm\":4220.00}\n }'", "Plotting\nSometimes it would be useful to display a result set as either a bar, pie, or line chart. The first one or two\ncolumns of a result set need to contain the values need to plot the information.\nThe three possible plot options are:\n\n-pb - bar chart (x,y)\n-pp - pie chart (y)\n-pl - line chart (x,y)\n\nThe following data will be used to demonstrate the different charting options.", "%sql values 1,2,3,4,5", "Since the results only have one column, the pie, line, and bar charts will not have any labels associated with\nthem. The first example is a bar chart.", "%sql -pb values 1,2,3,4,5", "The same data as a pie chart.", "%sql -pp values 1,2,3,4,5", "And finally a line chart.", "%sql -pl values 1,2,3,4,5", "If you retrieve two columns of information, the first column is used for the labels (X axis or pie slices) and \nthe second column contains the data.", "%sql -pb values ('A',1),('B',2),('C',3),('D',4),('E',5)", "For a pie chart, the first column is used to label the slices, while the data comes from the second column.", "%sql -pp values ('A',1),('B',2),('C',3),('D',4),('E',5)", "Finally, for a line chart, the x contains the labels and the y values are used.", "%sql -pl values ('A',1),('B',2),('C',3),('D',4),('E',5)", "The following SQL will plot the number of employees per department.", "%%sql -pb\nSELECT WORKDEPT, COUNT(*) \n FROM EMPLOYEE\nGROUP BY WORKDEPT", "Sample Data\nMany of the DB2 notebooks depend on two of the tables that are found in the SAMPLE database. Rather than\nhaving to create the entire SAMPLE database, this option will create and populate the EMPLOYEE and \nDEPARTMENT tables in your database. Note that if you already have these tables defined, they will not be dropped.", "%sql -sampledata", "Result Set\nBy default, any %sql block will return the contents of a result set as a table that is displayed in the notebook. If you want to capture the results from the SQL into a variable, you would use the -r option:\n<pre>\nvar = %sql -r select * from employee\n</pre>\nThe result is a list of rows. Each row is a list itself. 
The rows and columns all start at zero (0), so\nto access the first column of the first row, you would use var[0][0] to access it.", "rows = %sql -r select * from employee\nprint(rows[0][0])", "The number of rows in the result set can be determined by using the length function.", "print(len(rows))", "If you want to iterate over all of the rows and columns, you could use the following Python syntax instead of\ncreating a for loop that goes from 0 to 41.", "for row in rows:\n line = \"\"\n for col in row:\n line = line + str(col) + \",\"\n print(line)", "Since the data may be returned in different formats (like integers), you should use the str() function\nto convert the values to strings. Otherwise, the concatenation function used in the above example will fail. For\ninstance, the 6th field is a birthdate field. If you retrieve it as an individual value and try and concatenate a string to it, you get the following error.", "print(\"Birth Date=\"+rows[0][6])", "You can fix this problem by adding the str function to convert the date.", "print(\"Birth Date=\"+str(rows[0][6]))", "DB2 CONNECT Statement\nAs mentioned at the beginning of this notebook, connecting to DB2 is automatically done when you issue your first\n%sql statement. Usually the program will prompt you with what options you want when connecting to a database. The\nother option is to use the CONNECT statement directly. The CONNECT statement is similar to the native DB2\nCONNECT command, but includes some options that allow you to connect to databases that has not been\ncatalogued locally.\nThe CONNECT command has the following format:\n<pre>\n%sql CONNECT TO &lt;database&gt; USER &lt;userid&gt; USING &lt;password | ?&gt; HOST &lt;ip address&gt; PORT &lt;port number&gt;\n</pre>\nIf you use a \"?\" for the password field, the system will prompt you for a password. This avoids typing the \npassword as clear text on the screen. If a connection is not successful, the system will print the error\nmessage associated with the connect request.\nIf the connection is successful, the parameters are saved on your system and will be used the next time you\nrun a SQL statement, or when you issue the %sql CONNECT command with no parameters.\nIf you want to force the program to connect to a different database (with prompting), use the CONNECT RESET command. The next time you run a SQL statement, the program will prompt you for the the connection\nand will force the program to reconnect the next time a SQL statement is executed.", "%sql CONNECT RESET\n\n%sql CONNECT", "Credits: IBM 2017, George Baklarz [baklarz@ca.ibm.com]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Xilinx/meta-petalinux
recipes-multimedia/gstreamer/gstreamer-vcu-notebooks/vcu-demo-camera-encode-file.ipynb
mit
[ "Video Codec Unit (VCU) Demo Example: CAMERA->ENCODE ->FILE\nIntroduction\nVideo Codec Unit (VCU) in ZynqMP SOC is capable of encoding and decoding AVC/HEVC compressed video streams in real time.\nThis notebook example shows video audio (AV) recording usecase – the process of capturing raw video and audio(optional), encode using VCU and then store content in file. The file stored is recorded compressed file. \nImplementation Details\n<img src=\"pictures/block-diagram-camera-encode-file.png\" align=\"center\" alt=\"Drawing\" style=\"width: 400px; height: 200px\"/>\nBoard Setup\n\nConnect Ethernet cable.\nConnect serial cable to monitor logs on serial console.\nConnect USB camera(preferably Logitech HD camera, C920) with board.\nIf Board is connected to private network, then export proxy settings in /home/root/.bashrc file as below, \ncreate/open a bashrc file using \"vi ~/.bashrc\" \nInsert below line to bashrc file\nexport http_proxy=\"< private network proxy address >\"\nexport https_proxy=\"< private network proxy address >\"\n\n\nSave and close bashrc file.\n\n\n\n\nDetermine Audio input device names based on requirements. Please refer Determine AUDIO Device Names section.\n\nDetermine Audio Device Names\nThe audio device name of audio source(Input device) and playback device(output device) need to be determined using arecord and aplay utilities installed on platform.\nAudio Input\nALSA sound device names for capture devices\n- Run below command to get ALSA sound device names for capture devices\nroot@zcu106-zynqmp:~#arecord -l\nIt shows list of Audio Capture Hardware Devices. For e.g\n - card 1: C920 [HD Pro Webcam C920], device 0: USB Audio [USB Audio]\n - Subdevices: 1/1\n - Subdevice #0: subdevice #0\nHere card number of capture device is 1 and device id is 0. Hence \" hw:1,0 \" to be passed as auido input device.\nPulse sound device names for capture devices\n- Run below command to get PULSE sound device names for capture devices\nroot@zcu106-zynqmp:~#pactl list short sources\nIt shows list of Audio Capture Hardware Devices. For e.g\n - 0 alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo ...\nHere \"alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo\" is the name of audio capture device. Hence it can be passed as auido input device.\nUSB Camera Capabilities\nResolutions for this example need to set based on USB Camera Capabilities\n- Capabilities can be found by executing below command on board\nroot@zcu106-zynqmp:~#\"v4l2-ctl -d < dev-id > --list-formats-ext\".\n< dev-id >:- It can be found using dmesg logs. 
Mostly it would be like \"/dev/video0\"\n\nV4lutils if not installed in the pre-built image, need to install using dnf or rebuild petalinux image including v4lutils", "from IPython.display import HTML\n\nHTML('''<script>\ncode_show=true; \nfunction code_toggle() {\n if (code_show){\n $('div.input').hide();\n } else {\n $('div.input').show();\n }\n code_show = !code_show\n} \n$( document ).ready(code_toggle);\n</script>\n<form action=\"javascript:code_toggle()\"><input type=\"submit\" value=\"Click here to toggle on/off the raw code.\"></form>''')", "Run the Demo", "from ipywidgets import interact\nimport ipywidgets as widgets\nfrom common import common_vcu_demo_camera_encode_file\nimport os\nfrom ipywidgets import HBox, VBox, Text, Layout", "Video", "video_capture_device=widgets.Text(value='',\n placeholder='\"/dev/video1\"',\n description='Camera Dev Id:',\n style={'description_width': 'initial'},\n #layout=Layout(width='35%', height='30px'), \n disabled=False)\nvideo_capture_device\n\ncodec_type=widgets.RadioButtons(\n options=['avc', 'hevc'],\n description='Codec Type:',\n disabled=False)\nsink_name=widgets.RadioButtons(\n options=['none', 'fakevideosink'],\n description='Video Sink:',\n disabled=False)\nvideo_size=widgets.RadioButtons(\n options=['640x480', '1280x720', '1920x1080', '3840x2160'],\n description='Resolution:',\n description_tooltip='To select the values, please refer USB Camera Capabilities section',\n disabled=False)\nHBox([codec_type, video_size, sink_name])", "Audio", "device_id=Text(value='',\n placeholder='(optional) \"hw:1\"',\n description='Input Dev:',\n description_tooltip='To select the values, please refer Determine Audio Device Names section',\n disabled=False)\ndevice_id\n\naudio_sink={'none':['none'], 'aac':['auto','alsasink','pulsesink'],'vorbis':['auto','alsasink','pulsesink']}\naudio_src={'none':['none'], 'aac':['auto','alsasrc','pulseaudiosrc'],'vorbis':['auto','alsasrc','pulseaudiosrc']}\n\n#val=sorted(audio_sink, key = lambda k: (-len(audio_sink[k]), k))\ndef print_audio_sink(AudioSink):\n pass\n \ndef print_audio_src(AudioSrc):\n pass\n\ndef select_audio_sink(AudioCodec):\n audio_sinkW.options = audio_sink[AudioCodec]\n audio_srcW.options = audio_src[AudioCodec]\n\naudio_codecW = widgets.RadioButtons(options=sorted(audio_sink.keys(), key=lambda k: len(audio_sink[k])), description='Audio Codec:')\n\ninit = audio_codecW.value\n\naudio_sinkW = widgets.RadioButtons(options=audio_sink[init], description='Audio Sink:')\naudio_srcW = widgets.RadioButtons(options=audio_src[init], description='Audio Src:')\n#j = widgets.interactive(print_audio_sink, AudioSink=audio_sinkW)\nk = widgets.interactive(print_audio_src, AudioSrc=audio_srcW)\ni = widgets.interactive(select_audio_sink, AudioCodec=audio_codecW)\n\nHBox([i, k])", "Advanced options:", "frame_rate=widgets.Text(value='',\n placeholder='(optional) 15, 30, 60',\n description='Frame Rate:',\n disabled=False)\nbit_rate=widgets.Text(value='',\n placeholder='(optional) 1000, 20000',\n description='Bit Rate(Kbps):',\n style={'description_width': 'initial'},\n disabled=False)\ngop_length=widgets.Text(value='',\n placeholder='(optional) 30, 60',\n description='Gop Length',\n disabled=False)\n\ndisplay(HBox([bit_rate, frame_rate, gop_length]))\n\nno_of_frames=Text(value='',\n placeholder='(optional) 1000, 2000',\n description=r'<p>Frame Nos:</p>',\n #layout=Layout(width='25%', height='30px'),\n disabled=False)\noutput_path=widgets.Text(value='',\n placeholder='(optional) /mnt/sata/op.ts',\n description='Output 
Path:',\n disabled=False)\nentropy_buffers=widgets.Dropdown(\n options=['2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15'],\n value='5',\n description='Entropy Buffers Nos:',\n style={'description_width': 'initial'},\n disabled=False,)\n#entropy_buffers\n#output_path\n#gop_length\nHBox([entropy_buffers, no_of_frames, output_path])\n\n\n#entropy_buffers\nshow_fps=widgets.Checkbox(\n value=False,\n description='show-fps',\n #style={'description_width': 'initial'},\n disabled=False)\ncompressed_mode=widgets.Checkbox(\n value=False,\n description='compressed-mode',\n disabled=False)\nHBox([compressed_mode, show_fps])\n\nfrom IPython.display import clear_output\nfrom IPython.display import Javascript\n\ndef run_all(ev):\n display(Javascript('IPython.notebook.execute_cells_below()'))\n\ndef clear_op(event):\n clear_output(wait=True)\n return\n\nbutton1 = widgets.Button(\n description='Clear Output',\n style= {'button_color':'lightgreen'},\n #style= {'button_color':'lightgreen', 'description_width': 'initial'},\n layout={'width': '300px'}\n)\nbutton2 = widgets.Button(\n description='',\n style= {'button_color':'white'},\n #style= {'button_color':'lightgreen', 'description_width': 'initial'},\n layout={'width': '83px'}\n)\nbutton1.on_click(run_all)\nbutton1.on_click(clear_op)\n\ndef start_demo(event):\n #clear_output(wait=True)\n arg = [];\n arg = common_vcu_demo_camera_encode_file.cmd_line_args_generator(device_id.value, video_capture_device.value, video_size.value, codec_type.value, audio_codecW.value, frame_rate.value, output_path.value, no_of_frames.value, bit_rate.value, entropy_buffers.value, show_fps.value, audio_srcW.value, compressed_mode.value, gop_length.value, sink_name.value);\n #!sh vcu-demo-camera-encode-decode-display.sh $arg > logs.txt 2>&1\n !sh vcu-demo-camera-encode-file.sh $arg\n return\n\nbutton = widgets.Button(\n description='click to start camera-encode-file demo',\n style= {'button_color':'lightgreen'},\n #style= {'button_color':'lightgreen', 'description_width': 'initial'},\n layout={'width': '300px'}\n)\nbutton.on_click(start_demo)\nHBox([button, button2, button1])", "References\n[1] https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18842546/Xilinx+Video+Codec+Unit\n[2] https://www.xilinx.com/support.html#documentation (Refer to PG252)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JQGoh/jqlearning
posts/Pair-Distances.ipynb
gpl-3.0
[ "Pair Distances\nIt is a common practice that we try to make use of the available data and combine data sets from different sources for further analysis. For example, the following stations1.csv and stations2.csv files have different sets of stations with latitude, longitude information. \nAssuming that stations1.csv has air quality information while stations2.csv has stations has weather information measured by those stations respectively. \nOne might assume that the air quality measured by a station in the first data set has a strong correlation with the weather condition registered by its closest station. To combine the two data sets, we need to determine the stations of stations2.csv that are closest to those stations of stations1.csv.", "# Import functions\nimport sys\nsys.path.append(\"../\")\nimport pandas as pd\n\ndf1 = pd.read_csv(\"stations1.csv\")\ndf2 = pd.read_csv(\"stations2.csv\")\n\ndf1.head(10)\n\ndf2.head(10)", "Before checking for the closest stations, we can understand the composition of stations found in the different data sets. The designed sets_grps function from preprocess.py file is useful for this purpose.", "from script.preprocess import sets_grps\n\nsets_grps(df1.station_id, df2.station_id)", "The summary, as shown above, suggests that both data sets have 9 common stations. \nTo evaluate the distances between the selected features of two data sets, the designed pair_dist function from preprocess.py file can be handy.\nOne can provide the dataframes he/she is interested to work with, and the selected features in key-value pair (Python dict). The key-value pair specifies the group label (station_id in this example), and the features (latitude and longitude) we will use for distance evaluation. The distance calculation is based on the cdist function from scipy package.\nThe pair_dist function expects the provided key-value pairs are of the same size as the distance calculation will refer to a consistent set of features. The returned dataframe will have the group labels of first data set as its index, while the group labels of the second data set as its columns.", "from script.preprocess import pair_dist\n\npair_dist(df1, \n df2, \n {\"station_id\": [\"latitude\", \"longitude\"]}, \n {\"station_id\": [\"latitude\", \"longitude\"]})", "It is straightforward to find the closest stations by using the idxmin function.\nThis jupyter notebook is available at my Github page: Pair-Distances.ipynb, and it is part of the repository jqlearning.", "station_pairs_df = pair_dist(df1, \n df2, \n {\"station_id\": [\"latitude\", \"longitude\"]}, \n {\"station_id\": [\"latitude\", \"longitude\"]})\nstation_pairs_df.idxmin(axis=1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/tfx_pipelines/guided_projects/guided_project_3_nlp_starter/tfx_starter.ipynb
apache-2.0
[ "Text Classification TFX Pipeline Starter\nObjective: In this notebook, we show you how to put a text classification model implemented in model.py, preprocessing.py, and config.py into an interactive TFX pipeline. Using these files and the code snippets in this notebook, you'll configure a TFX pipeline generated by the tfx template tool as in the previous guided project so that the text classification can be run on a CAIP Pipelines Kubeflow cluster. The dataset itself consists of article titles along with their source, and the goal is to predict the source from the title. (This dataset can be re-generated by running either the keras_for_text_classification.ipynb notebook or the reusable_embeddings.ipynb notebook, which contain different models to solve this problem.) The solution we propose here is fairly simple and you can build on it by inspecting these notebooks.", "import os\nimport tempfile\nimport time\nfrom pprint import pprint\n\nimport absl\nimport pandas as pd\nimport tensorflow as tf\nimport tensorflow_data_validation as tfdv\nimport tensorflow_model_analysis as tfma\nimport tensorflow_transform as tft\nimport tfx\nfrom tensorflow_metadata.proto.v0 import (\n anomalies_pb2,\n schema_pb2,\n statistics_pb2,\n)\nfrom tensorflow_transform.tf_metadata import schema_utils\nfrom tfx.components import (\n CsvExampleGen,\n Evaluator,\n ExampleValidator,\n InfraValidator,\n Pusher,\n ResolverNode,\n SchemaGen,\n StatisticsGen,\n Trainer,\n Transform,\n Tuner,\n)\nfrom tfx.components.common_nodes.importer_node import ImporterNode\nfrom tfx.components.trainer import executor as trainer_executor\nfrom tfx.dsl.components.base import executor_spec\nfrom tfx.dsl.experimental import latest_blessed_model_resolver\nfrom tfx.orchestration import metadata, pipeline\nfrom tfx.orchestration.experimental.interactive.interactive_context import (\n InteractiveContext,\n)\nfrom tfx.proto import (\n evaluator_pb2,\n example_gen_pb2,\n infra_validator_pb2,\n pusher_pb2,\n trainer_pb2,\n)\nfrom tfx.proto.evaluator_pb2 import SingleSlicingSpec\nfrom tfx.types import Channel\nfrom tfx.types.standard_artifacts import (\n HyperParameters,\n InfraBlessing,\n Model,\n ModelBlessing,\n)", "Note: this lab was developed and tested with the following TF ecosystem package versions:\nTensorflow Version: 2.3.1\nTFX Version: 0.25.0\nTFDV Version: 0.25.0\nTFMA Version: 0.25.0\nIf you encounter errors with the above imports (e.g. TFX component not found), check your package versions in the cell below.", "print(\"Tensorflow Version:\", tf.__version__)\nprint(\"TFX Version:\", tfx.__version__)\nprint(\"TFDV Version:\", tfdv.__version__)\nprint(\"TFMA Version:\", tfma.VERSION_STRING)\n\nabsl.logging.set_verbosity(absl.logging.INFO)", "If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. 
Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.", "os.environ[\"PATH\"] += os.pathsep + \"/home/jupyter/.local/bin\"", "Configure lab settings\nSet constants, location paths and other environment settings.", "ARTIFACT_STORE = os.path.join(os.sep, \"home\", \"jupyter\", \"artifact-store\")\nSERVING_MODEL_DIR = os.path.join(os.sep, \"home\", \"jupyter\", \"serving_model\")\nDATA_ROOT = \"./data\"\n\nDATA_ROOT = f\"{ARTIFACT_STORE}/data\"\n!mkdir -p $DATA_ROOT", "Preparing the dataset", "data = pd.read_csv(\"./data/titles_sample.csv\")\ndata.head()\n\nLABEL_MAPPING = {\"github\": 0, \"nytimes\": 1, \"techcrunch\": 2}\ndata[\"source\"] = data[\"source\"].apply(lambda label: LABEL_MAPPING[label])\ndata.head()\n\ndata.to_csv(f\"{DATA_ROOT}/dataset.csv\", index=None)\n!head $DATA_ROOT/*.csv", "Interactive Context\nTFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specifiy the following parameters:\n- pipeline_name - Optional name of the pipeline for ML Metadata tracking purposes. If not specified, a name will be generated for you.\n- pipeline_root - Optional path to the root of the pipeline's outputs. If not specified, an ephemeral temporary directory will be created and used.\n- metadata_connection_config - Optional metadata_store_pb2.ConnectionConfig instance used to configure connection to a ML Metadata connection. If not specified, an ephemeral SQLite MLMD connection contained in the pipeline_root directory with file name \"metadata.sqlite\" will be used.", "PIPELINE_NAME = \"tfx-title-classifier\"\nPIPELINE_ROOT = os.path.join(\n ARTIFACT_STORE, PIPELINE_NAME, time.strftime(\"%Y%m%d_%H%M%S\")\n)\nos.makedirs(PIPELINE_ROOT, exist_ok=True)\n\ncontext = InteractiveContext(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n metadata_connection_config=None,\n)", "Ingesting data using ExampleGen\nIn any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set file files in the TFRecord format, which will be used by other TFX components. 
It can also shuffle the data and split into an arbitrary number of partitions.\n<img src=../../images/ExampleGen.png width=\"300\">\nConfigure and run CsvExampleGen", "output_config = example_gen_pb2.Output(\n split_config=example_gen_pb2.SplitConfig(\n splits=[\n example_gen_pb2.SplitConfig.Split(name=\"train\", hash_buckets=4),\n example_gen_pb2.SplitConfig.Split(name=\"eval\", hash_buckets=1),\n ]\n )\n)\n\nexample_gen = tfx.components.CsvExampleGen(\n input_base=DATA_ROOT, output_config=output_config\n)\n\ncontext.run(example_gen)", "Examine the ingested data", "examples_uri = example_gen.outputs[\"examples\"].get()[0].uri\n\ntfrecord_filenames = [\n os.path.join(examples_uri, \"train\", name)\n for name in os.listdir(os.path.join(examples_uri, \"train\"))\n]\n\ndataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type=\"GZIP\")\n\nfor tfrecord in dataset.take(2):\n example = tf.train.Example()\n example.ParseFromString(tfrecord.numpy())\n\n for name, feature in example.features.feature.items():\n if feature.HasField(\"bytes_list\"):\n value = feature.bytes_list.value\n if feature.HasField(\"float_list\"):\n value = feature.float_list.value\n if feature.HasField(\"int64_list\"):\n value = feature.int64_list.value\n print(f\"{name}: {value}\")\n print(\"******\")", "Generating statistics using StatisticsGen\nThe StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generate statistics for each split in the ExampleGen component's output. In our case there two splits: train and eval.\n<img src=../../images/StatisticsGen.png width=\"200\">\nConfigure and run the StatisticsGen component", "statistics_gen = tfx.components.StatisticsGen(\n examples=example_gen.outputs[\"examples\"]\n)\n\ncontext.run(statistics_gen)", "Visualize statistics\nThe generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext.", "context.show(statistics_gen.outputs[\"statistics\"])", "Infering data schema using SchemaGen\nSome TFX components use a description input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generates the schema by inferring types, categories, and ranges from data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. SchemaGen uses TensorFlow Data Validation.\nThe SchemaGen component generates the schema using the statistics for the train split. The statistics for other splits are ignored.\n<img src=../../images/SchemaGen.png width=\"200\">\nConfigure and run the SchemaGen components", "schema_gen = SchemaGen(\n statistics=statistics_gen.outputs[\"statistics\"], infer_feature_shape=False\n)\n\ncontext.run(schema_gen)", "Visualize the inferred schema", "context.show(schema_gen.outputs[\"schema\"])", "Updating the auto-generated schema\nIn most cases the auto-generated schemas must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. 
For example, you know that in the covertype dataset there are seven types of forest cover (coded using 1-7 range) and that the value of the Slope feature should be in the 0-90 range. You can manually add these constraints to the auto-generated schema by setting the feature domain.\nLoad the auto-generated schema proto file", "schema_proto_path = \"{}/{}\".format(\n schema_gen.outputs[\"schema\"].get()[0].uri, \"schema.pbtxt\"\n)\nschema = tfdv.load_schema_text(schema_proto_path)", "Modify the schema\nYou can use the protocol buffer APIs to modify the schema using tfdv.set_somain.\nReview the TFDV library API documentation on setting a feature's domain. You can use the protocol buffer APIs to modify the schema. Review the Tensorflow Metadata proto definition for configuration options.\nSave the updated schema", "schema_dir = os.path.join(ARTIFACT_STORE, \"schema\")\ntf.io.gfile.makedirs(schema_dir)\nschema_file = os.path.join(schema_dir, \"schema.pbtxt\")\n\ntfdv.write_schema_text(schema, schema_file)\n\n!cat {schema_file}", "Importing the updated schema using ImporterNode\nThe ImporterNode component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow. \nConfigure and run the ImporterNode component", "schema_importer = ImporterNode(\n instance_name=\"Schema_Importer\",\n source_uri=schema_dir,\n artifact_type=tfx.types.standard_artifacts.Schema,\n reimport=False,\n)\n\ncontext.run(schema_importer)", "Visualize the imported schema", "context.show(schema_importer.outputs[\"result\"])", "Validating data with ExampleValidator\nThe ExampleValidator component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by ImporterNode.\nExampleValidator can detect different classes of anomalies. For example it can:\n\nperform validity checks by comparing data statistics against a schema \ndetect training-serving skew by comparing training and serving data.\ndetect data drift by looking at a series of data.\n\nThe ExampleValidator component validates the data in the eval split only. Other splits are ignored. \n<img src=../../images/ExampleValidator.png width=\"350\">\nConfigure and run the ExampleValidator component", "example_validator = ExampleValidator(\n instance_name=\"Data_Validation\",\n statistics=statistics_gen.outputs[\"statistics\"],\n schema=schema_importer.outputs[\"result\"],\n)\n\ncontext.run(example_validator)", "Examine the output of ExampleValidator\nThe output artifact of the ExampleValidator is the anomalies.pbtxt file describing an anomalies_pb2.Anomalies protobuf.", "train_uri = example_validator.outputs[\"anomalies\"].get()[0].uri\ntrain_anomalies_filename = os.path.join(train_uri, \"train/anomalies.pbtxt\")\n!cat $train_anomalies_filename", "Visualize validation results\nThe file anomalies.pbtxt can be visualized using context.show.", "context.show(example_validator.outputs[\"output\"])", "In our case no anomalies were detected in the eval split.\nFor a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab.\nPreprocessing data with Transform\nThe Transform component performs data transformation and feature engineering. The Transform component consumes tf.Examples emitted from the ExampleGen component and emits the transformed feature data and the SavedModel graph that was used to process the data. 
The emitted SavedModel can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving.\nThe Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.\n<img src=../../images/Transform.png width=\"400\">\nDefine the pre-processing module\nTo configure Transform, you need to encapsulate your pre-processing code in the Python preprocessing_fn function and save it to a python module that is then provided to the Transform component as an input. This module will be loaded by transform and the preprocessing_fn function will be called when the Transform component runs.\nIn most cases, your implementation of the preprocessing_fn makes extensive use of TensorFlow Transform for performing feature engineering on your dataset.", "%%writefile config.py\nFEATURE_KEY = \"title\"\nLABEL_KEY = \"source\"\nN_CLASSES = 3\nHUB_URL = \"https://tfhub.dev/google/nnlm-en-dim50/2\"\nHUB_DIM = 50\nN_NEURONS = 16\nTRAIN_BATCH_SIZE = 5\nEVAL_BATCH_SIZE = 5\nMODEL_NAME = \"tfx_title_classifier\"\n\n\ndef transformed_name(key):\n return key + \"_xf\"\n\n%%writefile preprocessing.py\nimport tensorflow as tf\nfrom config import FEATURE_KEY, LABEL_KEY, N_CLASSES, transformed_name\n\n\ndef _fill_in_missing(x):\n default_value = \"\" if x.dtype == tf.string else 0\n return tf.squeeze(\n tf.sparse.to_dense(\n tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),\n default_value,\n ),\n axis=1,\n )\n\n\ndef preprocessing_fn(inputs):\n features = _fill_in_missing(inputs[FEATURE_KEY])\n labels = _fill_in_missing(inputs[LABEL_KEY])\n return {\n transformed_name(FEATURE_KEY): features,\n transformed_name(LABEL_KEY): labels,\n }\n\nTRANSFORM_MODULE = \"preprocessing.py\"", "Configure and run the Transform component.", "transform = Transform(\n examples=example_gen.outputs[\"examples\"],\n schema=schema_importer.outputs[\"result\"],\n module_file=TRANSFORM_MODULE,\n)\n\ncontext.run(transform)", "Examine the Transform component's outputs\nThe Transform component has 2 outputs:\n\ntransform_graph - contains the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).\ntransformed_examples - contains the preprocessed training and evaluation data.\n\nTake a peek at the transform_graph artifact: it points to a directory containing 3 subdirectories:", "os.listdir(transform.outputs[\"transform_graph\"].get()[0].uri)", "And the transform.examples artifact", "os.listdir(transform.outputs[\"transformed_examples\"].get()[0].uri)\n\ntransform_uri = transform.outputs[\"transformed_examples\"].get()[0].uri\n\ntfrecord_filenames = [\n os.path.join(transform_uri, \"train\", name)\n for name in os.listdir(os.path.join(transform_uri, \"train\"))\n]\n\ndataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type=\"GZIP\")\n\nfor tfrecord in dataset.take(4):\n example = tf.train.Example()\n example.ParseFromString(tfrecord.numpy())\n for name, feature in example.features.feature.items():\n if feature.HasField(\"bytes_list\"):\n value = feature.bytes_list.value\n if feature.HasField(\"float_list\"):\n value = feature.float_list.value\n if feature.HasField(\"int64_list\"):\n value = feature.int64_list.value\n print(f\"{name}: {value}\")\n print(\"******\")", "Train your TensorFlow model with the 
Trainer component\nThe Trainer component trains a model using TensorFlow.\nTrainer takes:\n\ntf.Examples used for training and eval.\nA user provided module file that defines the trainer logic.\nA data schema created by SchemaGen or imported by ImporterNode.\nA proto definition of train args and eval args.\nAn optional transform graph produced by upstream Transform component.\nAn optional base models used for scenarios such as warmstarting training.\n\n<img src=../../images/Trainer.png width=\"400\">\nDefine the trainer module\nTo configure Trainer, you need to encapsulate your training code in a Python module that is then provided to the Trainer as an input.", "%%writefile model.py\nimport tensorflow as tf\nimport tensorflow_transform as tft\nfrom config import (\n EVAL_BATCH_SIZE,\n HUB_DIM,\n HUB_URL,\n LABEL_KEY,\n MODEL_NAME,\n N_CLASSES,\n N_NEURONS,\n TRAIN_BATCH_SIZE,\n transformed_name,\n)\nfrom tensorflow.keras.callbacks import TensorBoard\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow_hub import KerasLayer\nfrom tfx_bsl.tfxio import dataset_options\n\n\ndef _get_serve_tf_examples_fn(model, tf_transform_output):\n model.tft_layer = tf_transform_output.transform_features_layer()\n\n @tf.function\n def serve_tf_examples_fn(serialized_tf_examples):\n \"\"\"Returns the output to be used in the serving signature.\"\"\"\n feature_spec = tf_transform_output.raw_feature_spec()\n feature_spec.pop(LABEL_KEY)\n parsed_features = tf.io.parse_example(\n serialized_tf_examples, feature_spec\n )\n transformed_features = model.tft_layer(parsed_features)\n return model(transformed_features)\n\n return serve_tf_examples_fn\n\n\ndef _input_fn(file_pattern, data_accessor, tf_transform_output, batch_size=200):\n return data_accessor.tf_dataset_factory(\n file_pattern,\n dataset_options.TensorFlowDatasetOptions(\n batch_size=batch_size, label_key=transformed_name(LABEL_KEY)\n ),\n tf_transform_output.transformed_metadata.schema,\n )\n\n\ndef _load_hub_module_layer():\n hub_module = KerasLayer(\n HUB_URL,\n output_shape=[HUB_DIM],\n input_shape=[],\n dtype=tf.string,\n trainable=True,\n )\n return hub_module\n\n\ndef _build_keras_model():\n hub_module = _load_hub_module_layer()\n model = Sequential(\n [\n hub_module,\n Dense(N_NEURONS, activation=\"relu\"),\n Dense(N_CLASSES, activation=\"softmax\"),\n ]\n )\n model.compile(\n optimizer=\"adam\",\n loss=\"sparse_categorical_crossentropy\",\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],\n )\n return model\n\n\ndef run_fn(fn_args):\n\n tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)\n\n train_dataset = _input_fn(\n fn_args.train_files,\n fn_args.data_accessor,\n tf_transform_output,\n TRAIN_BATCH_SIZE,\n )\n\n eval_dataset = _input_fn(\n fn_args.eval_files,\n fn_args.data_accessor,\n tf_transform_output,\n EVAL_BATCH_SIZE,\n )\n\n mirrored_strategy = tf.distribute.MirroredStrategy()\n\n with mirrored_strategy.scope():\n model = _build_keras_model()\n\n tensorboard_callback = tf.keras.callbacks.TensorBoard(\n log_dir=fn_args.model_run_dir, update_freq=\"batch\"\n )\n\n model.fit(\n train_dataset,\n steps_per_epoch=fn_args.train_steps,\n validation_data=eval_dataset,\n validation_steps=fn_args.eval_steps,\n callbacks=[tensorboard_callback],\n )\n\n signatures = {\n \"serving_default\": _get_serve_tf_examples_fn(\n model, tf_transform_output\n ).get_concrete_function(\n tf.TensorSpec(shape=[None], dtype=tf.string, name=\"examples\")\n ),\n }\n model.save(\n 
fn_args.serving_model_dir, save_format=\"tf\", signatures=signatures\n )\n\nTRAINER_MODULE_FILE = \"model.py\"", "Create and run the Trainer component\nAs of the 0.25.0 release of TFX, the Trainer component only supports passing a single field - num_steps - through the train_args and eval_args arguments.", "trainer = Trainer(\n custom_executor_spec=executor_spec.ExecutorClassSpec(\n trainer_executor.GenericExecutor\n ),\n module_file=TRAINER_MODULE_FILE,\n transformed_examples=transform.outputs.transformed_examples,\n schema=schema_importer.outputs.result,\n transform_graph=transform.outputs.transform_graph,\n train_args=trainer_pb2.TrainArgs(splits=[\"train\"], num_steps=20),\n eval_args=trainer_pb2.EvalArgs(splits=[\"eval\"], num_steps=5),\n)\n\ncontext.run(trainer)", "Analyzing training runs with TensorBoard\nIn this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments.\nRetrieve the location of TensorBoard logs\nEach model run's train and eval metric logs are written to the model_run directory by the Tensorboard callback defined in model.py.", "logs_path = trainer.outputs[\"model_run\"].get()[0].uri\nprint(logs_path)", "Upload the logs and start TensorBoard.dev\n\n\nOpen a new JupyterLab terminal window\n\n\nFrom the terminal window, execute the following command\ntensorboard dev upload --logdir [YOUR_LOGDIR]\n\n\nWhere [YOUR_LOGDIR] is an URI retrieved by the previous cell.\nYou will be asked to authorize TensorBoard.dev using your Google account. If you don't have a Google account or you don't want to authorize TensorBoard.dev you can skip this exercise.\nAfter the authorization process completes, follow the link provided to view your experiment.\nEvaluating trained models with Evaluator\nThe Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on which slices are defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain. \nThe Evaluator can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the Evaluator automatically will label the model as \"blessed\".\n<img src=../../images/Evaluator.png width=\"400\">\nConfigure and run the Evaluator component\nUse the ResolverNode to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. 
If no model has been blessed before (as in this case) the evaluator will make our candidate the first blessed model.", "model_resolver = ResolverNode(\n instance_name=\"latest_blessed_model_resolver\",\n resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,\n model=Channel(type=Model),\n model_blessing=Channel(type=ModelBlessing),\n)\n\ncontext.run(model_resolver)", "Configure evaluation metrics and slices.", "accuracy_threshold = tfma.MetricThreshold(\n value_threshold=tfma.GenericValueThreshold(\n lower_bound={\"value\": 0.30}, upper_bound={\"value\": 0.99}\n )\n)\n\nmetrics_specs = tfma.MetricsSpec(\n metrics=[\n tfma.MetricConfig(\n class_name=\"SparseCategoricalAccuracy\", threshold=accuracy_threshold\n ),\n tfma.MetricConfig(class_name=\"ExampleCount\"),\n ]\n)\n\neval_config = tfma.EvalConfig(\n model_specs=[tfma.ModelSpec(label_key=\"source\")],\n metrics_specs=[metrics_specs],\n)\neval_config\n\nmodel_analyzer = Evaluator(\n examples=example_gen.outputs.examples,\n model=trainer.outputs.model,\n baseline_model=model_resolver.outputs.model,\n eval_config=eval_config,\n)\n\ncontext.run(model_analyzer, enable_cache=False)", "Check the model performance validation status", "model_blessing_uri = model_analyzer.outputs.blessing.get()[0].uri\n!ls -l {model_blessing_uri}", "Deploying models with Pusher\nThe Pusher component checks whether a model has been \"blessed\", and if so, deploys it by pushing the model to a well known file destination.\n<img src=../../images/Pusher.png width=\"400\">\nConfigure and run the Pusher component", "trainer.outputs[\"model\"]\n\npusher = Pusher(\n model=trainer.outputs[\"model\"],\n model_blessing=model_analyzer.outputs[\"blessing\"],\n push_destination=pusher_pb2.PushDestination(\n filesystem=pusher_pb2.PushDestination.Filesystem(\n base_directory=SERVING_MODEL_DIR\n )\n ),\n)\ncontext.run(pusher)", "Examine the output of Pusher", "pusher.outputs\n\n# Set `PATH` to include a directory containing `saved_model_cli.\nPATH = get_ipython().run_line_magic(\"env\", \"PATH\")\n%env PATH=/opt/conda/envs/tfx/bin:{PATH}\n\nlatest_pushed_model = os.path.join(\n SERVING_MODEL_DIR, max(os.listdir(SERVING_MODEL_DIR))\n)\n!saved_model_cli show --dir {latest_pushed_model} --all", "License\n<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
joverbee/electromagnetism_course
multipole.ipynb
gpl-3.0
[ "Visualisation of a generic multipole\nThis notebook shows how to numerically calculate and visualise the fields around an electrostatic multipole.\nQuestions:\n\ndo you see local minima or maxima in the potential? (would a 3D generalisation be better?)\nwhat happens if npoles get higher and higher? inside and outside?\n\nload the required libraries for calculation and plotting", "import numpy as np\nimport matplotlib.pyplot as plt", "create grid to plot (choose 2D plane for visualisation cutting through charge centers , but calculation is correct for 3D)", "xpoints=512 #nr of grid points in 1 direction\nxmax=1 #extension of grid [m]\npref=9e9 # 1/(4pi eps0)\nx=np.linspace(-xmax,xmax,xpoints)\ny=x\n[x2d,y2d]=np.meshgrid(x,y,indexing='ij') #2D matrices holding x or y coordinate for each point on the grid\n\n#define multipole\nnpoles=6 #number of poles, needs to be even\nfradius=0.5*xmax #field radius\nsradius=0.1*xmax #radius of spheres making up the multipole\nvamp=1 #voltage amplitude on multipole (half of the poles have +vamp, other half has -vamp)", "calculate the potential of the set of spheres (use a function that we can reuse later)", "def multipolepotential(x,y,z,npoles,v,fradius,sradius):\n #assume a set of n conducting spheres of radius on a circle of radius fradius (field radius)\n #npoles is number of poles and needs to be even >0\n #the spheres are positioned in the xy plane and have a potential of V for the even spheres and -V for the odd spheres\n out=np.zeros(x.shape)\n potentialin=np.zeros(x.shape)\n potential=np.zeros(x.shape)\n theta=np.linspace(0,2*np.pi,npoles+1)\n if(npoles % 2) == 0:\n for nid in range(npoles):\n #make a superposition of the potential for each of the spheres\n vin=v*(-1.0)**nid \n xn=fradius*np.cos(theta[nid])\n yn=fradius*np.sin(theta[nid])\n r=np.sqrt(np.square(x-xn)+np.square(y-yn)+np.square(z)) #distance to sphere n\n in1=r<sradius #logical function 1 if inside sphere, 0 if outside\n out1=r>=sradius #logical function 0 if inside sphere, 1 if outside\n potential=potential+vin*sradius*np.multiply(np.power(r,-1),out1) \n out=out+out1\n potentialin=potentialin+vin*in1 \n #do a rescaling to match potential as the superposition changes the actual potential on the spheres slighlty\n idin=np.where(potentialin)\n idout=np.where(out)\n potential[idin]=potentialin[idin]\n potential[idout]=v*(potential[idout]/np.max(potential[idout]))\n else:\n potential=None\n #undefined\n return potential\n\nv=multipolepotential(x2d,y2d,np.zeros(x2d.shape),npoles,vamp,fradius,sradius)\nex,ey=np.gradient(-v,x,y) #strange ordering due to meshgrid\ne=np.sqrt(ex**2+ey**2)", "And now its showtime!", "#show vector plot, but limit number of points to keep the number of vector reasonable\nskippts=20\nskip=(slice(None,None,skippts),slice(None,None,skippts)) #dont plot all points in a quiver as this becomes unreadable\nplt.quiver(x2d[skip],y2d[skip],ex[skip],ey[skip])\nplt.title('electric field')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.axis('square')", "Note how the field emanates from the positive charge sinks into the negative charge", "plt.imshow(e,extent=[-xmax, xmax, -xmax, xmax])\nplt.title('electric field and fieldlines')\nplt.xlabel('x');\nplt.ylabel('y');\nplt.streamplot(x2d,x2d,ey,ex)\nplt.axis('square')\nplt.colorbar\nplt.show()", "Note the interesting npoles/2 fold symmetry of the field", "plt.imshow(v,extent=[-xmax, xmax, -xmax, xmax]) \nplt.title('electrostatic potential 
V')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.axis('square')\nplt.colorbar()\nplt.show()\n\nnlines=50;\nplt.contour(x2d,y2d,v,nlines)\nplt.title('equipotential surfaces')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.axis('square')\nplt.colorbar\nplt.show()", "Equipotential lines are always perpendicular to the fieldlines." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Danghor/Formal-Languages
ANTLR4-Python/SLR-Parser-Generator/Shift-Reduce-Parser-AST.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open (\"../../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)", "<b>Exercise</b>: Extending a Shift-Reduce Parser\nIn this exercise your task is to extend the shift-reduce parser\nthat has been discussed in the lecture so that it returns an abstract syntax tree. You should test it with the program sum-for.sl that is given the directory Examples.", "cat Examples/sum-for.sl", "The grammar that should be used to parse this program is given in the file\nExamples/simple.g. It is very similar to the grammar that we have developed previously for our interpreter. I have simplified this grammar at various places to make it more suitable\nfor the current task.", "cat Examples/simple.g", "Exercise 1: Generate both the action-table and the goto table for this grammar using the notebook SLR-Table-Generator.ipynb. \nImplementing a Scanner", "import re", "Exercise 2: The function tokenize(s) transforms the string s into a list of tokens. \nGiven the program sum-for.sl it should produce the list of tokens shown further below. Note that a number n is stored as a pairs of the form \n('NUMBER', n)\nand an identifier v is stored as the pair\n('ID', v).\nYou have to take care of keywords like for or while: Syntactically, they are equal to identifiers, but the scanner should <u>not</u> turn them into pairs but rather return them as strings so that the parser does not mistake them for identifiers. \nBelow is the token list that should be produced from scanning the file sum-for.sl:\n['function',\n ('ID', 'sum'),\n '(',\n ('ID', 'n'),\n ')',\n '{',\n ('ID', 's'),\n ':=',\n ('NUMBER', 0),\n ';',\n 'for',\n '(',\n ('ID', 'i'),\n ':=',\n ('NUMBER', 1),\n ';',\n ('ID', 'i'),\n '≤',\n ('ID', 'n'),\n '*',\n ('ID', 'n'),\n ';',\n ('ID', 'i'),\n ':=',\n ('ID', 'i'),\n '+',\n ('NUMBER', 1),\n ')',\n '{',\n ('ID', 's'),\n ':=',\n ('ID', 's'),\n '+',\n ('ID', 'i'),\n ';',\n '}',\n 'return',\n ('ID', 's'),\n ';',\n '}',\n ('ID', 'print'),\n '(',\n ('ID', 'sum'),\n '(',\n ('NUMBER', 6),\n ')',\n ')',\n ';']\nFor reference, I have given the old implementation of the function tokenize that has been used in the notebook Shift-Reduce-Parser-Pure.ipynb. You have to edit this function so that it works with the grammar simple.g.", "def tokenize(s):\n '''Transform the string s into a list of tokens. The string s\n is supposed to represent an arithmetic expression.\n '''\n \"Edit the code below!\"\n lexSpec = r'''([ \\t\\n]+) | # blanks and tabs\n ([1-9][0-9]*|0) | # number\n ([()]) | # parentheses \n ([-+*/]) | # arithmetical operators\n (.) # unrecognized character\n '''\n tokenList = re.findall(lexSpec, s, re.VERBOSE)\n result = []\n for ws, number, parenthesis, operator, error in tokenList:\n if ws: # skip blanks and tabs\n continue\n elif number:\n result += [ 'NUMBER' ]\n elif parenthesis:\n result += [ parenthesis ]\n elif operator:\n result += [ operator ]\n elif error:\n result += [ f'ERROR({error})']\n return result", "The cell below tests your tokenizer. Your task is to compare the output with the output shown above.", "with open('Examples/sum-for.sl', 'r', encoding='utf-8') as file:\n program = file.read()\ntokenize(program) \n\nclass ShiftReduceParser():\n def __init__(self, actionTable, gotoTable):\n self.mActionTable = actionTable\n self.mGotoTable = gotoTable", "The function parse(self, TL) is called with two arguments:\n- self ia an object of class ShiftReduceParser that maintain both an action table \n and a goto table.\n- TL is a list of tokens. 
Tokens are either \n - literals, i.e. strings enclosed in single quote characters, \n - pairs of the form ('NUMBER', n) where n is a natural number, or \n - the symbol $ denoting the end of input.\nBelow, it is assumed that parse-table.py is the file that you have created in \nExercise 1.", "%run parse-table.py", "Exercise 3:\nThe function parse given below is the from the notebook Shift-Reduce-Parser.ipynb. Adapt this function so that it does not just return Trueor False\nbut rather returns a parse tree as a nested list. The key idea is that the list Symbols\nshould now be a list of parse trees and tokens instead of just syntactical variables and tokens, i.e. the syntactical variables should be replaced by their parse trees. \nIt might be useful to implement an auxilliary function combine_trees that takes a \nlist of parse trees and combines the into a new parse tree.", "def parse(self, TL):\n \"\"\"\n Edit this code so that it returns a parse tree.\n Make use of the auxiliary function combine_trees that you have to\n implement in Exercise 4.\n \"\"\"\n index = 0 # points to next token\n Symbols = [] # stack of symbols\n States = ['s0'] # stack of states, s0 is start state\n TL += ['$']\n while True:\n q = States[-1]\n t = TL[index]\n print('Symbols:', ' '.join(Symbols + ['|'] + TL[index:]).strip())\n p = self.mActionTable.get((q, t), 'error')\n if p == 'error': \n return False\n elif p == 'accept':\n return True\n elif p[0] == 'shift':\n s = p[1]\n Symbols += [t]\n States += [s]\n index += 1\n elif p[0] == 'reduce':\n head, body = p[1]\n n = len(body)\n if n > 0:\n Symbols = Symbols[:-n]\n States = States [:-n]\n Symbols = Symbols + [head]\n state = States[-1]\n States += [ self.mGotoTable[state, head] ]\n\nShiftReduceParser.parse = parse\ndel parse", "Exercise 4: \nGiven a list of tokens and parse trees TL the function combine_trees combines these trees into a new parse tree. The parse trees are represented as nested tuples. The data type of a nested tuple is defined recursively:\n- A nested tuple is a tuple of the form (Head,) + Body where\n * Head is a string and\n * Body is a tuple of strings, integers, and nested tuples. \nWhen the nested tuple (Head,) + Body is displayed as a tree, Head is used as the label at the root of the tree. If len(Body) = n, then the root has n children. These n children are obtained by displaying Body[0], $\\cdots$, Body[n-1] as trees.\nIn order to convert the list of tokens and parse trees into a nested tuple we need a string that can serve as the Head of the parse tree. The easiest way to to this is to take the first element of TL that is a string because the strings in TL are keywords like for or while or they are operator symbols. The remaining strings after the first in TL can be discarded.\nIf there is no string in TL, you can define Head as the empty string. \nI suggest a recursive implementation of this function.\nThe file sum-st.pdf shows the parse tree of the program that is stored in the file sum-for.sl.", "def combine_trees(TL):\n if len(TL) == 0:\n return ()\n if isinstance(TL, str):\n return (str(TL),)\n Literals = [t for t in TL if isinstance(t, str)]\n Trees = [t for t in TL if not isinstance(t, str)]\n if len(Literals) > 0:\n label = Literals[0]\n else:\n label = ''\n result = (label,) + tuple(Trees)\n return result\n\nVoidKeys = { '', '(', ';', 'NUMBER', 'ID' }", "Exercise 5: \nThe function simplfy_tree(tree) transforms the parse tree tree into an abstract syntax tree. 
The parse tree tree is represented as a nested tuple of the form\ntree = (head,) + body\nThe function should simplify the tree as follows:\n- If head == '' and body is a tuple of length 2 that starts with an empty string,\n then this tree should be simplified to body[1].\n- If head does not contain useful information, for example if head is the empty string\n or an opening parenthesis and, furthermore, body is a tuple of length 1,\n then this tree should be simplified to body[0].\n- By convention, remaining empty Head labels should be replaced by the label '.'\n as this label is traditionally used to construct lists.\nI suggest a recursive implementation of this function.\nThe file sum-ast.pdf shows the abstract syntax tree of the program that is stored in the file sum-for.sl.", "def simplify_tree(tree):\n if isinstance(tree, int) or isinstance(tree, str):\n return tree\n head, *body = tree\n if body == []:\n return tree\n if head == '' and len(body) == 2 and body[0] == ('',):\n return simplify_tree(body[1])\n if head in VoidKeys and len(body) == 1:\n return simplify_tree(body[0])\n body_simplified = simplify_tree_list(body)\n if head == '(' and len(body) == 2:\n return (body_simplified[0],) + body_simplified[1:]\n if head == '':\n head = '.'\n return (head,) + body_simplified\n\ndef simplify_tree_list(TL):\n if TL == []:\n return ()\n tree, *Rest = TL\n return (simplify_tree(tree),) + simplify_tree_list(Rest)", "Testing\nThe notebook ../AST-2-Dot.ipynb implements the function tuple2dot(nt) that displays the nested tuple nt as a tree via graphvis.", "%run ../AST-2-Dot.ipynb\n\ncat -n Examples/sum-for.sl\n\ndef test(file):\n with open(file, 'r', encoding='utf-8') as file:\n program = file.read() \n parser = ShiftReduceParser(actionTable, gotoTable)\n TL = tokenize(program)\n st = parser.parse(TL)\n ast = simplify_tree(st)\n return st, ast", "Calling the function test below should produce the following nested tuple as parse tree:\n('', ('', ('', ('function', ('ID', 'sum'), ('', ('ID', 'n')), ('', ('', ('', ('',), (';', (':=', ('ID', 's'), ('', ('', ('', ('NUMBER', 0))))))), ('for', (':=', ('ID', 'i'), ('', ('', ('', ('NUMBER', 1))))), ('', ('', ('', ('≤', ('', ('', ('', ('ID', 'i')))), ('', ('*', ('', ('', ('ID', 'n'))), ('', ('ID', 'n')))))))), (':=', ('ID', 'i'), ('+', ('', ('', ('', ('ID', 'i')))), ('', ('', ('NUMBER', 1))))), ('', ('',), (';', (':=', ('ID', 's'), ('+', ('', ('', ('', ('ID', 's')))), ('', ('', ('ID', 'i'))))))))), ('return', ('', ('', ('', ('ID', 's')))))))), (';', ('', ('', ('(', ('ID', 'print'), ('', ('', ('', ('(', ('ID', 'sum'), ('', ('', ('', ('', ('NUMBER', 6)))))))))))))))\nThe file sum-st.pdf shows this nested tuple as a tree.\nTransforming the parse tree into an abstract syntax tree should yield the following nested tuple:\n('.', ('function', 'sum', 'n', ('.', ('.', (':=', 's', 0), ('for', (':=', 'i', 1), ('≤', 'i', ('*', 'n', 'n')), (':=', 'i', ('+', 'i', 1)), (':=', 's', ('+', 's', 'i')))), ('return', 's'))), ('print', ('sum', 6)))\nThe file sum-ast.pdf shows this nested tuple as a tree.", "st, ast = test('Examples/sum-for.sl')\nprint(st)\nprint(ast)\ndisplay(tuple2dot(st))\ndisplay(tuple2dot(ast))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aschaffn/phys202-2015-work
assignments/assignment03/NumpyEx01.ipynb
mit
[ "Numpy Exercise 1\nImports", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport antipackage\nimport github.ellisonbg.misc.vizarray as va", "Checkerboard\nWrite a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:\n\nYour function should work for both odd and even size.\nThe 0,0 element should be 1.0.\nThe dtype should be float.", "# there's got to be a more efficient way using some sort \n# of list comprehension\n\ndef checkerboard(size):\n cb = np.ones((size,size), dtype = float)\n for i in range(size):\n for j in range(size):\n if(i+j) % 2 == 1:\n cb[i,j] = 0.0\n return cb\n \ncheckerboard(4)\n\na = checkerboard(4)\nassert a[0,0]==1.0\nassert a.sum()==8.0\nassert a.dtype==np.dtype(float)\nassert np.all(a[0,0:5:2]==1.0)\nassert np.all(a[1,0:5:2]==0.0)\n\nb = checkerboard(5)\nassert b[0,0]==1.0\nassert b.sum()==13.0\nassert np.all(b.ravel()[0:26:2]==1.0)\nassert np.all(b.ravel()[1:25:2]==0.0)", "Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.", "va.set_block_size(10)\nva.enable()\ncheckerboard(20)\n\n\nassert True", "Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.", "va.set_block_size(5)\nva.enable()\ncheckerboard(27)\n\nassert True" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
texib/spark_tutorial
3.AnalysisArticle_HTML.ipynb
gpl-2.0
[ "Parse Json", "def parseRaw(json_map):\n url = json_map['url']\n content = json_map['html']\n return (url,content)", "載入原始 RAW Data", "import json\nimport pprint\npp = pprint.PrettyPrinter(indent=2)\npath = \"./pixnet.txt\"\nall_content = sc.textFile(path).map(json.loads).map(parseRaw)", "利用 LXML Parser 來分析文章結構\n\nlxml.html urlparse 需在涵式內被import,以供RDD運算時使用\n其他import python package的方法 Submitting Applications\nUse spark-submit --py-files to add .py, .zip or .egg files to be distributed with your application. \n\n\nlxml.html.fromstring 的input為HTML string,回傳為可供 xpath 處理的物件\nXPath syntax Ref_1, Ref_2\n/ Selects from the root node\n// Selects all nodes in the document from the current node\n@ Selects attributes\n//@lang Selects all attributes that are named lang\n//title[@lang] Selects all the title elements that have an attribute named lang\n\n\nXPath usful Chrome plugin XPath Helper", "def parseImgSrc(x):\n try:\n urls = list()\n import lxml.html\n from urlparse import urlparse\n node = lxml.html.fromstring(x)\n root = node.getroottree()\n for src in root.xpath('//img/@src'):\n try :\n host = urlparse(src).netloc\n if '.' not in host : continue\n if host.count('.') == 1 : \n pass\n else: \n host = host[host.index('.')+1:]\n urls.append('imgsrc_'+host)\n except :\n print \"Error Parse At:\" , src\n \n for src in root.xpath('//input[@src]/@src'):\n try :\n host = urlparse(src).netloc\n if '.' not in host : continue\n if host.count('.') == 1 : \n pass\n else: \n host = host[host.index('.')+1:]\n urls.append('imgsrc_'+host)\n except :\n print \"Error parseImgSrc At:\" , src\n \n except :\n print \"Unexpected error:\", sys.exc_info()\n return urls\n\nall_content.map(lambda x: x[1]).first()[:100]", "取出 Image Src 的列表", "image_list = all_content.map(lambda x :parseImgSrc(x[1]))\npp.pprint(image_list.first()[:10])", "統計 Image Src 的列表", "img_src_count = all_content.map(\n lambda x :parseImgSrc(x[1])).flatMap(\n lambda x: x).countByValue()\nfor i in img_src_count:\n print i , ':' , img_src_count[i]", "<span style=\"color: blue\">請使用 reduceByKey , sortBy 來計算出 img src 排行榜</span>\n請參照以下文件\n[http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD]\n幾種RDD sorting的方式\n\n針對key值排序\n使用 sortByKey\nsc.parallelize(tmp).sortByKey(True, 1).collect()\n\n\n使用 sortBy\nsc.parallelize(tmp).sortBy(lambda x: x[0]).collect()\n\n\n使用 takeOrdered\nsc.parallelize(tmp).takeOrdered(10, lambda s: -1 * s[0])\n\n\n\n\n針對value排序\n使用 sortBy\nsc.parallelize(tmp).sortBy(lambda x: x[1]).collect()\n\n\n使用 takeOrdered\nsc.parallelize(tmp).takeOrdered(10, lambda s: -1 * s[1])\n\n\n\n\n\ntakeOrdered()使用方式\n\nsort by keys (ascending): RDD.takeOrdered(num, key = lambda x: x[0])\nsort by keys (descending): RDD.takeOrdered(num, key = lambda x: -x[0])\nsort by values (ascending): RDD.takeOrdered(num, key = lambda x: x[1])\nsort by values (descending): RDD.takeOrdered(num, key = lambda x: -x[1])", "from operator import add\nall_content.map(\n lambda x :parseImgSrc(x[1])).flatMap(lambda x: x).map(lambda x: (x,1)).reduceByKey(add).sortBy(\n lambda x: x[1], ascending=False).collect()", "正確的排行如下:\n<span style=\"color: red\">[說明]</span> 由於是實際網頁資料,結果多少會有變動出入,大致上符合或無明顯異常即可。\n<code> \n[('imgsrc_pixfs.net', 219),\n ('imgsrc_agoda.net', 103),\n ('imgsrc_static.flickr.com', 53),\n ('imgsrc_staticflickr.com', 28),\n ('imgsrc_pimg.tw', 19),\n ('imgsrc_facebook.com', 12),\n ('imgsrc_sitebro.com', 10),\n ('imgsrc_linkwithin.com', 5),\n ('imgsrc_cloudfront.net', 5),\n ('imgsrc_prchecker.info', 5),\n ('imgsrc_visit-japan.jp', 5),\n 
('imgsrc_yimg.com', 2),\n ('imgsrc_zenfs.com', 2),\n ('imgsrc_googleusercontent.com', 1)]\n</code>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/image_classification/labs/4_tpu_training.ipynb
apache-2.0
[ "Transfer Learning on TPUs\nIn the <a href=\"3_tf_hub_transfer_learning.ipynb\">previous notebook</a>, we learned how to do transfer learning with TensorFlow Hub. In this notebook, we're going to kick up our training speed with TPUs.\nLearning Objectives\n\nKnow how to set up a TPU strategy for training\nKnow how to use a TensorFlow Hub Module when training on a TPU\nKnow how to create and specify a TPU for training\n\nFirst things first. Configure the parameters below to match your own Google Cloud project details.\nEach learning objective will correspond to a #TODO in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the solution notebook)for reference.", "import os\nos.environ[\"BUCKET\"] = \"your-bucket-here\"", "Packaging the Model\nIn order to train on a TPU, we'll need to set up a python module for training. The skeleton for this has already been built out in tpu_models with the data processing functions from the pevious lab copied into <a href=\"tpu_models/trainer/util.py\">util.py</a>.\nSimilarly, the model building and training functions are pulled into <a href=\"tpu_models/trainer/model.py\">model.py</a>. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new task.py file.\nWe've added five command line arguments which are standard for cloud training of a TensorFlow model: epochs, steps_per_epoch, train_path, eval_path, and job-dir. There are two new arguments for TPU training: tpu_address and hub_path\ntpu_address is going to be our TPU name as it appears in Compute Engine Instances. We can specify this name with the ctpu up command.\nhub_path is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.\nThe other big difference is some code to deploy our model on a TPU. To begin, we'll set up a TPU Cluster Resolver, which will help tensorflow communicate with the hardware to set up workers for training (more on TensorFlow Cluster Resolvers). Once the resolver connects to and initializes the TPU system, our Tensorflow Graphs can be initialized within a TPU distribution strategy, allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.\nTODO: Complete the code below to setup the resolver and define the TPU training strategy.", "%%writefile tpu_models/trainer/task.py\nimport argparse\nimport json\nimport os\nimport sys\n\nimport tensorflow as tf\n\nfrom . import model\nfrom . 
import util\n\n\ndef _parse_arguments(argv):\n \"\"\"Parses command-line arguments.\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument(\n '--epochs',\n help='The number of epochs to train',\n type=int, default=5)\n parser.add_argument(\n '--steps_per_epoch',\n help='The number of steps per epoch to train',\n type=int, default=500)\n parser.add_argument(\n '--train_path',\n help='The path to the training data',\n type=str, default=\"gs://cloud-ml-data/img/flower_photos/train_set.csv\")\n parser.add_argument(\n '--eval_path',\n help='The path to the evaluation data',\n type=str, default=\"gs://cloud-ml-data/img/flower_photos/eval_set.csv\")\n parser.add_argument(\n '--tpu_address',\n help='The path to the evaluation data',\n type=str, required=True)\n parser.add_argument(\n '--hub_path',\n help='The path to TF Hub module to use in GCS',\n type=str, required=True)\n parser.add_argument(\n '--job-dir',\n help='Directory where to save the given model',\n type=str, required=True)\n return parser.parse_known_args(argv)\n\n\ndef main():\n \"\"\"Parses command line arguments and kicks off model training.\"\"\"\n args = _parse_arguments(sys.argv[1:])[0]\n \n # TODO: define a TPU strategy\n resolver = # TODO: Your code goes here\n tf.config.experimental_connect_to_cluster(resolver)\n tf.tpu.experimental.initialize_tpu_system(resolver)\n strategy = # TODO: Your code goes here\n \n with strategy.scope():\n train_data = util.load_dataset(args.train_path)\n eval_data = util.load_dataset(args.eval_path, training=False)\n image_model = model.build_model(args.job_dir, args.hub_path)\n\n model_history = model.train_and_evaluate(\n image_model, args.epochs, args.steps_per_epoch,\n train_data, eval_data, args.job_dir)\n\n\nif __name__ == '__main__':\n main()\n", "The TPU server\nBefore we can start training with this code, we need a way to pull in MobileNet. When working with TPUs in the cloud, the TPU will not have access to the VM's local file directory since the TPU worker acts as a server. Because of this all data used by our model must be hosted on an outside storage system such as Google Cloud Storage. This makes caching our dataset especially critical in order to speed up training time.\nTo access MobileNet with these restrictions, we can download a compressed saved version of the model by using the wget command. Adding ?tf-hub-format=compressed at the end of our module handle gives us a download URL.", "!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed", "This model is still compressed, so lets uncompress it with the tar command below and place it in our tpu_models directory.", "%%bash\nrm -r tpu_models/hub\nmkdir tpu_models/hub\ntar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/", "Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using gsutil cp to copy everything.", "!gsutil rm -r gs://$BUCKET/tpu_models\n!gsutil cp -r tpu_models gs://$BUCKET/tpu_models", "Spinning up a TPU\nTime to wake up a TPU! Open the Google Cloud Shell and copy the gcloud compute command below. Say 'Yes' to the prompts to spin up the TPU.\ngcloud compute tpus execution-groups create \\\n --name=my-tpu \\\n --zone=us-central1-b \\\n --tf-version=2.3.2 \\\n --machine-type=n1-standard-1 \\\n --accelerator-type=v3-8\nIt will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively Compute Engine Interface can be used to SSH in. 
You'll know you're running on a TPU when the command line starts with your-username@your-tpu-name.\nThis is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the . at the end as it tells gsutil to copy data into the currect directory.", "!echo \"gsutil cp -r gs://$BUCKET/tpu_models .\"", "Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the Tensorflow graph has been built out.\nTODO: Complete the code below by adding flags for tpu_address and the hub_path. Have another look at task.py to see how these flags are used. The tpu_address denotes the TPU you created above and hub_path should denote the location of the TFHub module. (Note that the training code requires a TPU_NAME environment variable, set in the first two lines below -- you may reuse it in your code.)", "%%bash\nexport TPU_NAME=my-tpu\necho \"export TPU_NAME=\"$TPU_NAME\necho \"python3 -m tpu_models.trainer.task \\\n # TODO: Your code goes here \\\n # TODO: Your code goes here \\\n --job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)\"", "How did it go? In the previous lab, it took about 2-3 minutes to get through 25 images. On the TPU, it took 5-6 minutes to get through 2500. That's more than 40x faster! And now our accuracy is over 90%! Congratulations!\nTime to pack up shop. Run exit in the TPU terminal to close the SSH connection, and gcloud compute tpus execution-groups delete my-tpu --zone=us-central1-b in the Cloud Shell to delete the Cloud TPU and Compute Engine instances. Alternatively, they can be deleted through the Compute Engine Interface, but don't forget to separately delete the TPU too!\nCopyright 2022 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
grigorisg9gr/menpo-notebooks
menpo/Shape/Graph.ipynb
bsd-3-clause
[ "Graph\nThis notebook presents the basic concepts behind Menpo's Graph class and subclasses. The aim of this notebook is to introduce the user to Graph very basic functionality; a preliminary step before moving on to the PointGraph notebook. For higher level functionality along with visualization methods, the user is encouraged to go through the PointGraph notebook.\nThe basic Graph subclasses are:\n* UndirectedGraph: graphs with undirected edge connections\n* DirectedGraph: graphs with directed edge connections\n* Tree: directed graph in which any two vertices are connected with exactly one path\nThe corresponding subclasses of PointGraph are basically Graphs with geometry (PointCloud). This means that apart from the edge connections between vertices, they also define spatial location coordinates for each vertex. The PointGraph subclasses are:\n* PointUndirectedGraph: graphs with undirected edge connections\n* PointDirectedGraph: graphs with directed edge connections\n* PointTree: directed graph in which any two vertices are connected with exactly one path\nBelow, we split the presentation for Graph only in the following sections:\n\nMathematical notation\nGraphs representation\nUndirected graph\nIsolated vertices\nDirected graph\nTree\nBasic graph properties\nBasic tree properties\n\n1. Mathematical notation\nA graph is mathematically defined as \n$$G = (V, E)$$, \nwhere $V$ is the set of vertices and $E$ is the set of edges. In Menpo, we assume that the vertices in $V$ are numbered with consecutive positive integer numbers starting from 0, i.e. $V={0, 1, \\ldots, |V|}$. By defining an edge between two vertices $v_i, v_j\\in V$ as $e_{ij}=(v_i,v_j)$, the set of edges can be represented as $E={e_{ij}}, \\forall i,j:(v_i,v_j)~\\text{is edge}$.\nIn Menpo, a graph is represented using the adjacency matrix, which stores which vertices are adjacent to which other vertices. This is mathematically expressed as the $|V|\\times|V|$ sparse matrix\n$$A=\\left\\lbrace\\begin{array}{rl}w_{ij}, & \\text{for}~i,j:e_{ij}\\in E\\ 0, & \\text{otherwise}\\end{array}\\right.$$\nwhere $w_{ij}$ is the weight of edge $e_{ij}$.\nNote the following:\n* The adjacency matrix $A$ of an undirected graph must be symmetric.\n* An edge $e_{ij}$ stored in the adjacency matrix $A$ of a directed graph or tree denote the edge that starts from vertex $v_i$ and ends to vertex $v_j$. So the rows of matrix $A$ are the parent vertices and the columns are the children vertices.\n2. Graphs representation\nBased on the above, the adjacency matrix of size $|V|\\times|V|$ that gets passed to a Graph constructor can be defined either as a numpy.ndarray or a scipy.sparse.csr_matrix. Then it is internally stored as a scipy.sparse.csr_matrix for memory efficiency reasons.\nSo let's make the necessary imports.", "import numpy as np\nfrom scipy.sparse import csr_matrix\n\nfrom menpo.shape import UndirectedGraph, DirectedGraph, Tree", "3. 
Undirected graph\nThe following undirected graph:\n|---0---|\n | |\n | |\n 1-------2\n | |\n | |\n 3-------4\n |\n |\n 5\ncan be defined as:", "adj_undirected = np.array([[0, 1, 1, 0, 0, 0],\n [1, 0, 1, 1, 0, 0], \n [1, 1, 0, 0, 1, 0],\n [0, 1, 0, 0, 1, 1],\n [0, 0, 1, 1, 0, 0],\n [0, 0, 0, 1, 0, 0]])\nundirected_graph = UndirectedGraph(adj_undirected)\nprint(undirected_graph)", "or", "adj_undirected = csr_matrix(([1] * 14, ([0, 1, 0, 2, 1, 2, 1, 3, 2, 4, 3, 4, 3, 5], \n [1, 0, 2, 0, 2, 1, 3, 1, 4, 2, 4, 3, 5, 3])), \n shape=(6, 6))\nundirected_graph = UndirectedGraph(adj_undirected)\nprint(undirected_graph)", "4. Isolated vertices\nNote that any directed or undirected graph (not a tree) can have isolated vertices, i.e. vertices with no edge connections. For example the following undirected graph:\n```\n 0---|\n |\n |\n 1 2\n |\n |\n 3-------4\n 5\n\n```\ncan be defined as:", "adj_isolated = np.array([[0, 0, 1, 0, 0, 0],\n [0, 0, 0, 0, 0, 0],\n [1, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 1, 0],\n [0, 0, 1, 1, 0, 0],\n [0, 0, 0, 0, 0, 0]])\nisolated_graph = UndirectedGraph(adj_isolated)\nprint(isolated_graph)", "or", "adj_isolated = csr_matrix(([1] * 6, ([0, 2, 2, 4, 3, 4], [2, 0, 4, 2, 4, 3])), shape=(6, 6))\nisolated_graph = UndirectedGraph(adj_isolated)\nprint(isolated_graph)", "5. Directed graph\nThe following directed graph:\n|--&gt;0&lt;--|\n | |\n | |\n 1&lt;-----&gt;2\n | |\n v v\n 3------&gt;4\n |\n v\n 5\ncan be defined as:", "adj_directed = np.array([[0, 0, 0, 0, 0, 0],\n [1, 0, 1, 1, 0, 0],\n [1, 1, 0, 0, 1, 0],\n [0, 0, 0, 0, 1, 1],\n [0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0]])\ndirected_graph = DirectedGraph(adj_directed)\nprint(directed_graph)", "or", "adj_directed = csr_matrix(([1] * 8, ([1, 2, 1, 2, 1, 2, 3, 3], [0, 0, 2, 1, 3, 4, 4, 5])), shape=(6, 6))\ndirected_graph = DirectedGraph(adj_directed)\nprint(directed_graph)", "6. Tree\nA Tree in Menpo is defined as a directed graph, thus Tree is a subclass of DirectedGraph. The following tree:\n0\n |\n ___|___\n 1 2\n | |\n _|_ |\n 3 4 5\n | | |\n | | |\n 6 7 8\ncan be defined as:", "adj_tree = np.array([[0, 1, 1, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 1, 1, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 1, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 1, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 1],\n [0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0]])\ntree = Tree(adj_tree, root_vertex=0)\nprint(tree)", "or", "adj_tree = csr_matrix(([1] * 8, ([0, 0, 1, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6, 7, 8])), shape=(9, 9))\ntree = Tree(adj_tree, root_vertex=0)\nprint(tree)", "7. Basic graph properties\nBelow we show how to retrieve basic properties from all the previously defined graphs, i.e. undirected_graph, isolated_graph, directed_graph and tree of Sections 3, 4, 5 and 6 repsectively.\nNumber of vertices and edges\nFor all the above defined graphs, we can get the number of vertices $|V|$ and number of edges $|E|$ as:", "print(\"The undirected_graph has {} vertices and {} edges.\".format(undirected_graph.n_vertices, undirected_graph.n_edges))\nprint(\"The isolated_graph has {} vertices and {} edges.\".format(isolated_graph.n_vertices, isolated_graph.n_edges))\nprint(\"The directed_graph has {} vertices and {} edges.\".format(directed_graph.n_vertices, directed_graph.n_edges))\nprint(\"The tree has {} vertices and {} edges.\".format(tree.n_vertices, tree.n_edges))", "Sets of vertices and edges\nWe can also get the sets of vertices and edges, i.e. 
$V$ and $E$ respectively, as:", "print(\"undirected_graph: The set of vertices $V$ is\")\nprint(undirected_graph.vertices)\nprint(\"and the set of edges $E$ is\")\nprint(undirected_graph.edges)", "Adjacency list\nWe can also retrieve the adjacency list, i.e. a list that for each vertex stores the list of its neighbours (or children in the case of directed graphs). For example:", "print(\"The adjacency list of the undirected_graph is {}.\".format(undirected_graph.get_adjacency_list()))\nprint(\"The adjacency list of the directed_graph is {}.\".format(directed_graph.get_adjacency_list()))", "Isolated vertices\nThere are methods to check and retrieve isolated vertices, for example:", "print(\"Has the undirected_graph any isolated vertices? {}.\".format(undirected_graph.has_isolated_vertices()))\nprint(\"Has the isolated_graph any isolated vertices? {}, it has {}.\".format(isolated_graph.has_isolated_vertices(), \n isolated_graph.isolated_vertices()))", "Neighbours and is_edge\nWe can check if a pair of vertices are connected with an edge:", "i = 4\nj = 7\nprint(\"Are vertices {} and {} of the tree connected? {}.\".format(i, j, tree.is_edge(i, j)))\n\ni = 5\nj = 1\nprint(\"Are vertices {} and {} of the directed_graph connected? {}.\".format(i, j, directed_graph.is_edge(i, j)))", "We can also retrieve whether a vertex has neighbours (or children) and who are they, as:", "v = 1\nprint(\"How many neighbours does vertex {} of the isolated_graph have? {}.\".format(v, isolated_graph.n_neighbours(v)))\nprint(\"How many children does vertex {} of the directed_graph have? {}, they are {}.\".format(v, directed_graph.n_children(v), \n directed_graph.children(v)))\nprint(\"Who is the parent of vertex {} of the tree? Vertex {}.\".format(v, tree.parent(v)))", "Cycles and trees\nWe can check whether a graph has cycles", "print(\"Does the undirected_graph have cycles? {}.\".format(undirected_graph.has_cycles()))\nprint(\"Does the isolated_graph have cycles? {}.\".format(isolated_graph.has_cycles()))\nprint(\"Does the directed_graph have cycles? {}.\".format(directed_graph.has_cycles()))\nprint(\"Does the tree have cycles? {}.\".format(tree.has_cycles()))", "and, of course whether a graph is a tree", "print(\"Is the undirected_graph a tree? {}.\".format(undirected_graph.is_tree()))\nprint(\"Is the directed_graph a tree? {}.\".format(directed_graph.is_tree()))\nprint(\"Is the tree a tree? {}.\".format(tree.is_tree()))", "8. Basic tree properties\nMenpo's Tree instance has additional basic properties.\nPredecessors list\nApart from the adjacency list mentioned above, a tree can also be represented by a predecessors list, i.e. a list that stores the parent for each vertex. None denotes the root vertex. For example", "print(tree.predecessors_list)", "Depth\nWe can find the maximum depth of a tree", "print(tree.maximum_depth)", "as well as the depth of a specific vertex", "print(\"The depth of vertex 4 is {}.\".format(tree.depth_of_vertex(4)))\nprint(\"The depth of vertex 0 is {}.\".format(tree.depth_of_vertex(0)))", "Leaves\nFinally, we can get the number of leaves as well as whether a specific vertex is a leaf (has no children):", "print(\"The tree has {} leaves.\".format(tree.n_leaves))\nprint(\"Is vertex 7 a leaf? {}.\".format(tree.is_leaf(7)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/tensorflow
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
apache-2.0
[ "Copyright 2022 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Text Searcher with TensorFlow Lite Model Maker\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/models/modify/model_maker/text_searcher\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/universal-sentence-encoder-lite/2\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>\n\nIn this colab notebook, you can learn how to use the TensorFlow Lite Model Maker library to create a TFLite Searcher model. You can use a text Searcher model to build Sematic Search or Smart Reply for your app. This type of model lets you take a text query and search for the most related entries in a text dataset, such as a database of web pages. The model returns a list of the smallest distance scoring entries in the dataset, including metadata you specify, such as URL, page title, or other text entry identifiers. After building this, you can deploy it onto devices (e.g. Android) using Task Library Searcher API to run inference with just a few lines of code.\nThis tutorial leverages CNN/DailyMail dataset as an instance to create the TFLite Searcher model. You can try with your own dataset with the compatible input comma separated value (CSV) format.\nText search using Scalable Nearest Neighbor\nThis tutorial uses the publicly available CNN/DailyMail non-anonymized summarization dataset, which was produced from the GitHub repo. This dataset contains over 300k news articles, which makes it a good dataset to build the Searcher model, and return various related news during model inference for a text query.\nThe text Searcher model in this example uses a ScaNN (Scalable Nearest Neighbors) index file that can search for similar items from a predefined database. 
ScaNN achieves state-of-the-art performance for efficient vector similarity search at scale.\nHighlights and urls in this dataset are used in this colab to create the model:\n\nHighlights are the text for generating the embedding feature vectors and then used for search.\nUrls are the returned result shown to users after searching the related highlights.\n\nThis tutorial saves these data into the CSV file and then uses the CSV file to build the model. Here are several examples from the dataset.\n| Highlights | Urls\n| ---------- |----------\n|Hawaiian Airlines again lands at No. 1 in on-time performance. The Airline Quality Rankings Report looks at the 14 largest U.S. airlines. ExpressJet <br> and American Airlines had the worst on-time performance. Virgin America had the best baggage handling; Southwest had lowest complaint rate. | http://www.cnn.com/2013/04/08/travel/airline-quality-report\n| European football's governing body reveals list of countries bidding to host 2020 finals. The 60th anniversary edition of the finals will be hosted by 13 <br> countries. Thirty-two countries are considering bids to host 2020 matches. UEFA will announce host cities on September 25. | http://edition.cnn.com:80/2013/09/20/sport/football/football-euro-2020-bid-countries/index.html?\n| Once octopus-hunter Dylan Mayer has now also signed a petition of 5,000 divers banning their hunt at Seacrest Park. Decision by Washington <br> Department of Fish and Wildlife could take months. | http://www.dailymail.co.uk:80/news/article-2238423/Dylan-Mayer-Washington-considers-ban-Octopus-hunting-diver-caught-ate-Puget-Sound.html?\n| Galaxy was observed 420 million years after the Big Bang. found by NASA’s Hubble Space Telescope, Spitzer Space Telescope, and one of nature’s <br> own natural 'zoom lenses' in space. | http://www.dailymail.co.uk/sciencetech/article-2233883/The-furthest-object-seen-Record-breaking-image-shows-galaxy-13-3-BILLION-light-years-Earth.html\nSetup\nStart by installing the required packages, including the Model Maker package from the GitHub repo.", "!sudo apt -y install libportaudio2\n!pip install -q tflite-model-maker\n!pip install gdown", "Import the required packages.", "from tflite_model_maker import searcher", "Prepare the dataset\nThis tutorial uses the dataset CNN / Daily Mail summarization dataset from the GitHub repo.\nFirst, download the text and urls of cnn and dailymail and unzip them. If it\nfailed to download from google drive, please wait a few minutes to try it again or download it manually and then upload it to the colab.", "!gdown https://drive.google.com/uc?id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\n!gdown https://drive.google.com/uc?id=0BwmD_VLjROrfM1BxdkxVaTY2bWs\n\n!wget -O all_train.txt https://raw.githubusercontent.com/abisee/cnn-dailymail/master/url_lists/all_train.txt\n!tar xzf cnn_stories.tgz\n!tar xzf dailymail_stories.tgz", "Then, save the data into the CSV file that can be loaded into tflite_model_maker library. The code is based on the logic used to load this data in tensorflow_datasets. We can't use tensorflow_dataset directly since it doesn't contain urls which are used in this colab.\nSince it takes a long time to process the data into embedding feature vectors\nfor the whole dataset. Only first 5% stories of CNN and Daily Mail dataset are\nselected by default for demo purpose. 
You can adjust the\nfraction or try with the pre-built TFLite model with 50% stories of CNN and Daily Mail dataset to search as well.", "#@title Save the highlights and urls to the CSV file\n#@markdown Load the highlights from the stories of CNN / Daily Mail, map urls with highlights, and save them to the CSV file.\n\nCNN_FRACTION = 0.05 #@param {type:\"number\"}\nDAILYMAIL_FRACTION = 0.05 #@param {type:\"number\"}\n\nimport csv\nimport hashlib\nimport os\nimport tensorflow as tf\n\ndm_single_close_quote = u\"\\u2019\" # unicode\ndm_double_close_quote = u\"\\u201d\"\nEND_TOKENS = [\n \".\", \"!\", \"?\", \"...\", \"'\", \"`\", '\"', dm_single_close_quote,\n dm_double_close_quote, \")\"\n] # acceptable ways to end a sentence\n\n\ndef read_file(file_path):\n \"\"\"Reads lines in the file.\"\"\"\n lines = []\n with tf.io.gfile.GFile(file_path, \"r\") as f:\n for line in f:\n lines.append(line.strip())\n return lines\n\n\ndef url_hash(url):\n \"\"\"Gets the hash value of the url.\"\"\"\n h = hashlib.sha1()\n url = url.encode(\"utf-8\")\n h.update(url)\n return h.hexdigest()\n\n\ndef get_url_hashes_dict(urls_path):\n \"\"\"Gets hashes dict that maps the hash value to the original url in file.\"\"\"\n urls = read_file(urls_path)\n return {url_hash(url): url[url.find(\"id_/\") + 4:] for url in urls}\n\n\ndef find_files(folder, url_dict):\n \"\"\"Finds files corresponding to the urls in the folder.\"\"\"\n all_files = tf.io.gfile.listdir(folder)\n ret_files = []\n for file in all_files:\n # Gets the file name without extension.\n filename = os.path.splitext(os.path.basename(file))[0]\n if filename in url_dict:\n ret_files.append(os.path.join(folder, file))\n return ret_files\n\n\ndef fix_missing_period(line):\n \"\"\"Adds a period to a line that is missing a period.\"\"\"\n if \"@highlight\" in line:\n return line\n if not line:\n return line\n if line[-1] in END_TOKENS:\n return line\n return line + \".\"\n\n\ndef get_highlights(story_file):\n \"\"\"Gets highlights from a story file path.\"\"\"\n lines = read_file(story_file)\n\n # Put periods on the ends of lines that are missing them\n # (this is a problem in the dataset because many image captions don't end in\n # periods; consequently they end up in the body of the article as run-on\n # sentences)\n lines = [fix_missing_period(line) for line in lines]\n\n # Separate out article and abstract sentences\n highlight_list = []\n next_is_highlight = False\n for line in lines:\n if not line:\n continue # empty line\n elif line.startswith(\"@highlight\"):\n next_is_highlight = True\n elif next_is_highlight:\n highlight_list.append(line)\n\n # Make highlights into a single string.\n highlights = \"\\n\".join(highlight_list)\n\n return highlights\n\nurl_hashes_dict = get_url_hashes_dict(\"all_train.txt\")\ncnn_files = find_files(\"cnn/stories\", url_hashes_dict)\ndailymail_files = find_files(\"dailymail/stories\", url_hashes_dict)\n\n# The size to be selected.\ncnn_size = int(CNN_FRACTION * len(cnn_files))\ndailymail_size = int(DAILYMAIL_FRACTION * len(dailymail_files))\nprint(\"CNN size: %d\"%cnn_size)\nprint(\"Daily Mail size: %d\"%dailymail_size)\n\nwith open(\"cnn_dailymail.csv\", \"w\") as csvfile:\n writer = csv.DictWriter(csvfile, fieldnames=[\"highlights\", \"urls\"])\n writer.writeheader()\n\n for file in cnn_files[:cnn_size] + dailymail_files[:dailymail_size]:\n highlights = get_highlights(file)\n # Gets the filename which is the hash value of the url.\n filename = os.path.splitext(os.path.basename(file))[0]\n url = 
url_hashes_dict[filename]\n writer.writerow({\"highlights\": highlights, \"urls\": url})\n", "Build the text Searcher model\nCreate a text Searcher model by loading a dataset, creating a model with the data and exporting the TFLite model.\nStep 1. Load the dataset\nModel Maker takes the text dataset and the corresponding metadata of each text string (such as urls in this example) in the CSV format. It embeds the text strings into feature vectors using the user-specified embedder model.\nIn this demo, we build the Searcher model using Universal Sentence Encoder, a state-of-the-art sentence embedding model which is already retrained from colab. The model is optimized for on-device inference performance, and only takes 6ms to embed a query string (measured on Pixel 6). Alternatively, you can use this quantized version, which is smaller but takes 38ms for each embedding.", "!wget -O universal_sentence_encoder.tflite https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/searcher/text_to_image_blogpost/text_embedder.tflite", "Create a searcher.TextDataLoader instance and use data_loader.load_from_csv method to load the dataset. It takes ~10 minutes for this\nstep since it generates the embedding feature vector for each text one by one. You can try to upload your own CSV file and load it to build the customized model as well.\nSpecify the name of text column and metadata column in the CSV file.\n* Text is used to generate the embedding feature vectors.\n* Metadata is the content to be shown when you search the certain text.\nHere are the first 4 lines of the CNN-DailyMail CSV file generated above.\n| highlights| urls\n| ---------- |----------\n|Syrian official: Obama climbed to the top of the tree, doesn't know how to get down. Obama sends a letter to the heads of the House and Senate. Obama <br> to seek congressional approval on military action against Syria. Aim is to determine whether CW were used, not by whom, says U.N. spokesman.|http://www.cnn.com/2013/08/31/world/meast/syria-civil-war/\n|Usain Bolt wins third gold of world championship. Anchors Jamaica to 4x100m relay victory. Eighth gold at the championships for Bolt. Jamaica double <br> up in women's 4x100m relay.|http://edition.cnn.com/2013/08/18/sport/athletics-bolt-jamaica-gold\n|The employee in agency's Kansas City office is among hundreds of \"virtual\" workers. The employee's travel to and from the mainland U.S. last year cost <br> more than $24,000. The telecommuting program, like all GSA practices, is under review.|http://www.cnn.com:80/2012/08/23/politics/gsa-hawaii-teleworking\n|NEW: A Canadian doctor says she was part of a team examining Harry Burkhart in 2010. NEW: Diagnosis: \"autism, severe anxiety, post-traumatic stress <br> disorder and depression\" Burkhart is also suspected in a German arson probe, officials say. Prosecutors believe the German national set a string of fires <br> in Los Angeles.|http://edition.cnn.com:80/2012/01/05/justice/california-arson/index.html?", "data_loader = searcher.TextDataLoader.create(\"universal_sentence_encoder.tflite\", l2_normalize=True)\ndata_loader.load_from_csv(\"cnn_dailymail.csv\", text_column=\"highlights\", metadata_column=\"urls\")", "For image use cases, you can create a searcher.ImageDataLoader instance and then use data_loader.load_from_folder to load images from the folder. 
The searcher.ImageDataLoader instance needs to be created by a TFLite embedder model because it will be leveraged to encode queries to feature vectors and be exported with the TFLite Searcher model. For instance:\npython\ndata_loader = searcher.ImageDataLoader.create(\"mobilenet_v2_035_96_embedder_with_metadata.tflite\")\ndata_loader.load_from_folder(\"food/\")\nStep 2. Create the Searcher model\n\nConfigure ScaNN options. See api doc for more details.\nCreate the Searcher model from data and ScaNN options. You can see the in-depth examination to learn more about the ScaNN algorithm.", "scann_options = searcher.ScaNNOptions(\n distance_measure=\"dot_product\",\n tree=searcher.Tree(num_leaves=140, num_leaves_to_search=4),\n score_ah=searcher.ScoreAH(dimensions_per_block=1, anisotropic_quantization_threshold=0.2))\nmodel = searcher.Searcher.create_from_data(data_loader, scann_options)", "In the above example, we define the following options:\n* distance_measure: we use \"dot_product\" to measure the distance between two embedding vectors. Note that we actually compute the negative dot product value to preserve the notion that \"smaller is closer\".\n\n\ntree: the dataset is divided the dataset into 140 partitions (roughly the square root of the data size), and 4 of them are searched during retrieval, which is roughly 3% of the dataset.\n\n\nscore_ah: we quantize the float embeddings to int8 values with the same dimension to save space.\n\n\nStep 3. Export the TFLite model\nThen you can export the TFLite Searcher model.", "model.export(\n export_filename=\"searcher.tflite\",\n userinfo=\"\",\n export_format=searcher.ExportFormat.TFLITE)", "Test the TFLite model on your query\nYou can test the exported TFLite model using custom query text. To query text using the Searcher model, initialize the model and run a search with text phrase, as follows:", "from tflite_support.task import text\n\n# Initializes a TextSearcher object.\nsearcher = text.TextSearcher.create_from_file(\"searcher.tflite\")\n\n# Searches the input query.\nresults = searcher.search(\"The Airline Quality Rankings Report looks at the 14 largest U.S. airlines.\")\nprint(results)", "See the Task Library documentation for more information about how to integrate the model to various platforms.\nRead more\nFor more information, please refer to:\n\n\nTensorFlow Lite Model Maker guide and API reference.\n\n\nTask Library: TextSearcher for deployment.\n\nThe end-to-end reference apps: Android." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
steinam/teacher
jup_notebooks/data-science-ipython-notebooks-master/python-data/files.ipynb
mit
[ "This notebook was prepared by Donne Martin. Source and license info is on GitHub.\nFiles\n\nRead a File\nWrite a File\nRead and Write UTF-8\n\nRead a File\nOpen a file in read-only mode.<br>\nIterate over the file lines. rstrip removes the EOL markers.<br>", "old_file_path = 'type_util.py'\nwith open(old_file_path, 'r') as old_file:\n for line in old_file:\n print(line.rstrip())", "Write to a file\nCreate a new file overwriting any previous file with the same name, write text, then close the file:", "new_file_path = 'hello_world.txt'\nwith open(new_file_path, 'w') as new_file:\n new_file.write('hello world!')", "Read and Write UTF-8", "import codecs\nwith codecs.open(\"hello_world_new.txt\", \"a\", \"utf-8\") as new_file:\n with codecs.open(\"hello_world.txt\", \"r\", \"utf-8\") as old_file: \n for line in old_file:\n new_file.write(line + '\\n')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
antoniomezzacapo/qiskit-tutorial
community/aqua/chemistry/h2o.ipynb
apache-2.0
[ "Qiskit Aqua Chemistry, H2O ground state computation\nThis notebook demonstrates how to use Qiskit Aqua Chemistry to compute the ground state energy of a water (H2O) molecule using VQE and UCCSD.\nThis notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.", "from qiskit_aqua_chemistry import AquaChemistry\n\n# Input dictionary to configure Qiskit Aqua Chemistry for the chemistry problem.\naqua_chemistry_dict = {\n 'problem': {'random_seed': 50},\n 'driver': {'name': 'PYSCF'},\n 'PYSCF': {'atom': 'O 0.0 0.0 0.0; H 0.757 0.586 0.0; H -0.757 0.586 0.0', 'basis': 'sto-3g'},\n 'operator': {'name': 'hamiltonian', 'freeze_core': True},\n 'algorithm': {'name': 'ExactEigensolver'}\n}", "With the above input problem dictionary for water we now create an AquaChemistry object and call run on it passing in the dictionary to get a result. We use ExactEigensolver first as a reference.", "solver = AquaChemistry()\nresult = solver.run(aqua_chemistry_dict)", "The run method returns a result dictionary. Some notable fields include 'energy' which is the computed ground state energy.", "print('Ground state energy: {}'.format(result['energy']))", "There is also a 'printable' field containing a complete ready to print readable result", "for line in result['printable']:\n print(line)", "We update the dictionary, for VQE with UCCSD, and run the computation again.", "aqua_chemistry_dict['algorithm']['name'] = 'VQE'\naqua_chemistry_dict['optimizer'] = {'name': 'COBYLA', 'maxiter': 25000}\naqua_chemistry_dict['variational_form'] = {'name': 'UCCSD'}\naqua_chemistry_dict['initial_state'] = {'name': 'HartreeFock'}\n\nsolver = AquaChemistry()\nresult = solver.run(aqua_chemistry_dict)\n\nprint('Ground state energy: {}'.format(result['energy']))\n\nfor line in result['printable']:\n print(line)\n\nprint('Actual VQE evaluations taken: {}'.format(result['algorithm_retvals']['eval_count']))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ProbablyOverthinkingIt
shuffle_pairs.ipynb
mit
[ "Solution to a problem posted here:\nhttp://stackoverflow.com/questions/36455104/create-a-random-order-of-x-y-pairs-without-repeating-subsequent-xs#\n\nSay I have a list of valid X = [1, 2, 3, 4, 5] and a list of valid Y = [1, 2, 3, 4, 5].\nI need to generate all combinations of every element in X and every element in Y (in this case, 25) and get those combinations in random order.\nThis in itself would be simple, but there is an additional requirement: In this random order, there cannot be a repetition of the same x in succession. For example, this is okay:\n\n[1, 3]\n[2, 5]\n[1, 2]\n...\n[1, 4]\n\n\nThis is not:\n\n[1, 3]\n[1, 2] &lt;== the \"1\" cannot repeat, because there was already one before\n[2, 5]\n...\n[1, 4]\n\n\nNow, the least efficient idea would be to simply randomize the full set as long as there are no more repetitions. My approach was a bit different, repeatedly creating a shuffled variant of X, and a list of all Y * X, then picking a random next one from that. So far, I've come up with this:\n\n[...]\n\n\nBut I'm sure this can be done even more efficiently or in a more succinct way?\nAlso, my solution first goes through all X values before continuing with the full set again, which is not perfectly random. I can live with that for my particular application case.\n\nMy solution uses NumPy", "import numpy as np", "Here are some example values for x and y. I assume that there are no repeated values in x.", "xs = np.arange(10, 14)\nys = np.arange(20, 25)\nprint(xs, ys)", "indices is the list of indices I'll choose from at random:", "n = len(xs)\nm = len(ys)\nindices = np.arange(n)", "Now I'll make an array to hold the values of y:", "array = np.tile(ys, (n, 1))\nprint(array)", "And shuffle the rows independently", "[np.random.shuffle(array[i]) for i in range(n)] \nprint(array)", "I'll keep track of how many unused ys there are in each row", "counts = np.full_like(xs, m)\nprint(counts)", "Now I'll choose a row, using the counts as weights", "weights = np.array(counts, dtype=float)\nweights /= np.sum(weights)\nprint(weights)", "i is the row I chose, which corresponds to a value of x.", "i = np.random.choice(indices, p=weights)\nprint(i)", "Now I decrement the counter associated with i, assemble a pair by choosing a value of x and a value of y.\nI also clobber the array value I used, which is not necessary, but helps with visualization.", "counts[i] -= 1\npair = xs[i], array[i, counts[i]]\narray[i, counts[i]] = -1\nprint(pair)", "We can check that the counts got decremented", "print(counts)", "And one of the values in array got used", "print(array)", "The next time through is almost the same, except that when we assemble the weights, we give zero weight to the index we just used.", "weights = np.array(counts, dtype=float)\nweights[i] = 0\nweights /= np.sum(weights)\nprint(weights)", "Everything else is the same", "i = np.random.choice(indices, p=weights)\ncounts[i] -= 1\npair = xs[i], array[i, counts[i]]\narray[i, counts[i]] = -1\nprint(pair)\n\nprint(counts)\n\nprint(array)", "Now we can wrap all that up in a function, using a special value for i during the first iteration.", "def generate_pairs(xs, ys):\n n = len(xs)\n m = len(ys)\n indices = np.arange(n)\n \n array = np.tile(ys, (n, 1))\n [np.random.shuffle(array[i]) for i in range(n)]\n \n counts = np.full_like(xs, m)\n i = -1\n\n for _ in range(n * m):\n weights = np.array(counts, dtype=float)\n if i != -1:\n weights[i] = 0\n weights /= np.sum(weights)\n\n i = np.random.choice(indices, p=weights)\n counts[i] -= 1\n pair = 
xs[i], array[i, counts[i]]\n        array[i, counts[i]] = -1\n        \n        yield pair", "And here's how it works:", "for pair in generate_pairs(xs, ys):\n    print(pair)", "Inside the loop, we have to copy the weights, normalize them, and choose a random index using the weights; each of these steps is linear in n. Since the loop runs n * m times, the overall complexity to generate all pairs is O(n^2 m)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
lukasmerten/CRPropa3
doc/pages/example_notebooks/basics/basics.v4.ipynb
gpl-3.0
[ "Introduction to Python Steering\nThe following is a tour of the basic layout of CRPropa 3, showing how to setup and run a 1D simulation of the extragalactic propagation of UHECR protons from a Python shell.\nSimulation setup\nWe start with a ModuleList, which is a container for simulation modules, and represents the simulation.\nThe first module in a simulation should be a propagation module, which will move the cosmic rays. In a 1D simulation magnetic deflections of charged particles are not considered, thus we can use the SimplePropagation module for rectalinear propagation.\nNext we add modules for photo-pion and electron-pair production with the cosmic microwave background and a module for neutron and nuclear decay. Finally we add a minimum energy requirement: Cosmic rays are stopped once they reach the minimum energy.\nIn general the order of modules doesn't matter much for sufficiently small propagation steps. For good practice, we recommend the order: Propagator --> Interactions -> Break conditions -> Observer / Output.\nPlease note that all input, output and internal calculations are done using SI-units to enforce expressive statements such as E = 1 * EeV or D = 100 * Mpc.", "from crpropa import *\n\n# simulation: a sequence of simulation modules\nsim = ModuleList()\n\n# add propagator for rectalinear propagation\nsim.add(SimplePropagation())\n\n# add interaction modules\nsim.add(PhotoPionProduction(CMB()))\nsim.add(ElectronPairProduction(CMB()))\nsim.add(NuclearDecay())\nsim.add(MinimumEnergy(1 * EeV))", "Propagating a single particle\nThe simulation can now be used to propagate a cosmic ray, which is called candidate. We create a 100 EeV proton and propagate it using the simulation. The propagation stops when the energy drops below the minimum energy requirement that was specified. The possible propagation distances are rather long since we are neglecting cosmology in this example.", "cosmicray = Candidate(nucleusId(1, 1), 200 * EeV, Vector3d(100 * Mpc, 0, 0))\n\nsim.run(cosmicray)\nprint(cosmicray)\nprint('Propagated distance', cosmicray.getTrajectoryLength() / Mpc, 'Mpc')", "Defining an observer\nTo define an observer within the simulation we create a Observer object.\nThe convention of 1D simulations is that cosmic rays, starting from positive coordinates, propagate in the negative direction until the reach the observer at 0. Only the x-coordinate is used in the three-vectors that represent position and momentum.", "# add an observer\nobs = Observer()\nobs.add(ObserverPoint()) # observer at x = 0\nsim.add(obs)\nprint(obs)", "Defining the output file\nWe want to save the propagated cosmic rays to an output file.\nPlain text output is provided by the TextOutput module.\nFor the type of information being stored we can use one of five presets: Event1D, Event3D, Trajectory1D, Trajectory3D and Everything.\nWe can also fine tune with enable(XXXColumn) and disable(XXXColumn)", "# trajectory output\noutput1 = TextOutput('trajectories.txt', Output.Trajectory1D)\n#sim.add(output1) # generates a lot of output\n\n#output1.disable(Output.RedshiftColumn) # don't save the current redshift\n#output1.disableAll() # disable everything to start from scratch\n#output1.enable(Output.CurrentEnergyColumn) # current energy\n#output1.enable(Output.CurrentIdColumn) # current particle type\n# ...\n", "If in the example above output1 is added to the module list, it is called on every propagation step to write out the cosmic ray information. 
\nTo save only cosmic rays that reach our observer, we add an output to the observer that we previously defined.\nThis time we are satisfied with the output type Event1D.", "# event output\noutput2 = TextOutput('events.txt', Output.Event1D)\nobs.onDetection(output2)\n\n#sim.run(cosmicray)\n#output2.close()", "Similary, the output could be linked to the MinimumEnergy module to save those cosmic rays that fall below the minimum energy, and so on.\nNote: If we want to use the CRPropa output file from within the same script that runs the simulation, the output module should be explicitly closed after the simulation run in order to get all events flushed to the file.\nDefining the source\nTo avoid setting each individual cosmic ray by hand we defince a cosmic ray source.\nThe source is located at a distance of 100 Mpc and accelerates protons with a power law spectrum and energies between 1 - 200 EeV.", "# cosmic ray source\nsource = Source()\nsource.add(SourcePosition(100 * Mpc))\nsource.add(SourceParticleType(nucleusId(1, 1)))\nsource.add(SourcePowerLawSpectrum(1 * EeV, 200 * EeV, -1))\nprint(source)", "Running the simulation\nFinally we run the simulation to inject and propagate 10000 cosmic rays. An optional progress bar can show the progress of the simulation.", "sim.setShowProgress(True) # switch on the progress bar\nsim.run(source, 10000)", "(Optional) Plotting\nThis is not part of CRPropa, but since we're at it we can plot the energy spectrum of detected particles to observe the GZK suppression.\nThe plotting is done here using matplotlib, but of course you can use whatever plotting tool you prefer.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\noutput2.close() # close output file before loading\ndata = np.genfromtxt('events.txt', names=True)\nprint('Number of events', len(data))\n\nlogE0 = np.log10(data['E0']) + 18\nlogE = np.log10(data['E']) + 18\n\nplt.figure(figsize=(10, 7))\nh1 = plt.hist(logE0, bins=25, range=(18, 20.5), histtype='stepfilled', alpha=0.5, label='At source')\nh2 = plt.hist(logE, bins=25, range=(18, 20.5), histtype='stepfilled', alpha=0.5, label='Observed')\nplt.xlabel('log(E/eV)')\nplt.ylabel('N(E)')\nplt.legend(loc = 'upper left', fontsize=20)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aleph314/K2
EDA/EDA_MTA_Exercises.ipynb
gpl-3.0
[ "Exploratory Data Analysis with Python\nWe will explore the NYC MTA turnstile data set. These data files are from the New York Subway. It tracks the hourly entries and exits to turnstiles (UNIT) by day in the subway system.\nHere is an example of what you could do with the data. James Kao investigates how subway ridership is affected by incidence of rain.\nExercise 1\n\nDownload at least 2 weeks worth of MTA turnstile data (You can do this manually or via Python)\nOpen up a file, use csv reader to read it, make a python dict where there is a key for each (C/A, UNIT, SCP, STATION). These are the first four columns. The value for this key should be a list of lists. Each list in the list is the rest of the columns in a row. For example, one key-value pair should look like{ ('A002','R051','02-00-00','LEXINGTON AVE'): \n [\n ['NQR456', 'BMT', '01/03/2015', '03:00:00', 'REGULAR', '0004945474', '0001675324'], \n ['NQR456', 'BMT', '01/03/2015', '07:00:00', 'REGULAR', '0004945478', '0001675333'], \n ['NQR456', 'BMT', '01/03/2015', '11:00:00', 'REGULAR', '0004945515', '0001675364'],\n ... \n ] \n}\n\n\n\nStore all the weeks in a data structure of your choosing", "import csv\nimport os", "Field Description\n\nC/A = Control Area (A002)\nUNIT = Remote Unit for a station (R051)\nSCP = Subunit Channel Position represents an specific address for a device (02-00-00)\nSTATION = Represents the station name the device is located at\nLINENAME = Represents all train lines that can be boarded at this station. Normally lines are represented by one character. LINENAME 456NQR repersents train server for 4, 5, 6, N, Q, and R trains.\nDIVISION = Represents the Line originally the station belonged to BMT, IRT, or IND \nDATE = Represents the date (MM-DD-YY)\nTIME = Represents the time (hh:mm:ss) for a scheduled audit event\nDESC = Represent the \"REGULAR\" scheduled audit event (Normally occurs every 4 hours)\nAudits may occur more that 4 hours due to planning, or troubleshooting activities. \nAdditionally, there may be a \"RECOVR AUD\" entry: This refers to a missed audit that was recovered. \n\n\nENTRIES = The comulative entry register value for a device\nEXIST = The cumulative exit register value for a device", "turnstile = {}\n\n# looping through all files in data dir starting with MTA_Turnstile\nfor filename in os.listdir('data'):\n if filename.startswith('MTA_Turnstile'): \n # reading file and writing each row in a dict\n with open(os.path.join('data', filename), newline='') as csvfile:\n mtareader = csv.reader(csvfile, delimiter=',')\n next(mtareader)\n for row in mtareader:\n key = (row[0], row[1], row[2], row[3])\n value = [row[4], row[5], row[6], row[7], row[8], row[9], row[10].rstrip()]\n if key in turnstile:\n turnstile[key].append(value)\n else:\n turnstile[key] = [value]\n\n# test value for dict\ntest = ('A002','R051','02-00-00','59 ST')\n\nturnstile[test]#[:2]", "Exercise 2\n\nLet's turn this into a time series.\n\nFor each key (basically the control area, unit, device address and station of a specific turnstile), have a list again, but let the list be comprised of just the point in time and the cumulative count of entries.\nThis basically means keeping only the date, time, and entries fields in each list. You can convert the date and time into datetime objects -- That is a python class that represents a point in time. 
You can combine the date and time fields into a string and use the dateutil module to convert it into a datetime object.\nYour new dict should look something like\n{ ('A002','R051','02-00-00','LEXINGTON AVE'): \n [\n [datetime.datetime(2013, 3, 2, 3, 0), 3788],\n [datetime.datetime(2013, 3, 2, 7, 0), 2585],\n [datetime.datetime(2013, 3, 2, 12, 0), 10653],\n [datetime.datetime(2013, 3, 2, 17, 0), 11016],\n [datetime.datetime(2013, 3, 2, 23, 0), 10666],\n [datetime.datetime(2013, 3, 3, 3, 0), 10814],\n [datetime.datetime(2013, 3, 3, 7, 0), 10229],\n ...\n ],\n ....\n }", "import numpy as np\nimport datetime\nfrom dateutil.parser import parse\n\n# With respect to the solutions I converted the cumulative entries in the number of entries in the period\n# That's ok I think since it is required below to do so...\n\nturnstile_timeseries = {}\n\n# looping through each key in dict, parsing the date and calculating the difference between previous and current count\nfor key in turnstile:\n prev = np.nan\n value = []\n for el in turnstile[key]:\n value.append([parse(el[2] + ' ' + el[3]), int(el[5]) - prev])\n prev = int(el[5])\n if key in turnstile_timeseries:\n turnstile_timeseries[key].append(value)\n else:\n turnstile_timeseries[key] = value\n\nturnstile_timeseries[test]#[:5]\n# ('R305', 'R206', '01-00-00','125 ST')", "Exercise 3\n\nThese counts are cumulative every n hours. We want total daily entries. \n\nNow make it that we again have the same keys, but now we have a single value for a single day, which is not cumulative counts but the total number of passengers that entered through this turnstile on this day.", "# In the solutions there's a check for abnormal values, I added it in the exercises below\n# because I found out about the problem later in the analysis\n\nturnstile_daily = {}\n\n# looping through each key in the timeseries, tracking if the date change while cumulating partial counts\nfor key in turnstile_timeseries:\n value = []\n prev_date = ''\n daily_entries = 0\n for el in turnstile_timeseries[key]:\n curr_date = el[0].date()\n daily_entries += el[1]\n # if the current date differs from the previous I write the value in the dict and reset the other data\n # I check that the date isn't empty to avoid writing the initial values for each key\n if prev_date != curr_date:\n if prev_date != '':\n value.append([prev_date, daily_entries])\n daily_entries = 0\n prev_date = curr_date\n # I write the last value of the loop in each case, this is the closing value of the period\n value.append([prev_date, daily_entries])\n if key in turnstile_daily:\n turnstile_daily[key].append(value)\n else:\n turnstile_daily[key] = value\n\nturnstile_daily[test]", "Exercise 4\n\nWe will plot the daily time series for a turnstile.\n\nIn ipython notebook, add this to the beginning of your next cell: \n%matplotlib inline\n\nThis will make your matplotlib graphs integrate nicely with the notebook.\nTo plot the time series, import matplotlib with \nimport matplotlib.pyplot as plt\n\nTake the list of [(date1, count1), (date2, count2), ...], for the turnstile and turn it into two lists:\ndates and counts. 
This should plot it:\nplt.figure(figsize=(10,3))\nplt.plot(dates,counts)", "import matplotlib.pyplot as plt\n%matplotlib inline\n\n# using list comprehension, there are other ways such as dict.keys() and dict.items()\ndates = [el[0] for el in turnstile_daily[test]]\ncounts = [el[1] for el in turnstile_daily[test]]\n\nfig = plt.figure(figsize=(14, 5))\nax = plt.axes()\nax.plot(dates, counts)\nplt.grid('on');", "Exercise 5\n\nSo far we've been operating on a single turnstile level, let's combine turnstiles in the same ControlArea/Unit/Station combo. There are some ControlArea/Unit/Station groups that have a single turnstile, but most have multiple turnstilea-- same value for the C/A, UNIT and STATION columns, different values for the SCP column.\n\nWe want to combine the numbers together -- for each ControlArea/UNIT/STATION combo, for each day, add the counts from each turnstile belonging to that combo.", "temp = {}\n\n# for each key I form the new key and check if it's already in the new dict\n# I append the date in this temp dict to make it easier to sum the values\n# then I create a new dict with the required keys\nfor key in turnstile_daily:\n new_key = list(key[0:2]) + list(key[-1:])\n for el in turnstile_daily[key]:\n # setting single negative values to 0:\n # possible causes:\n # strange things in data such as totals that lessen each hour going forward\n # also setting single values over 10.000.000 to 0 to avoid integer overflow:\n # possible causes:\n # data recovery\n value = np.int64(el[1])\n if value < 0 or value > 10000000:\n value = 0 # Maybe nan is a better choice...\n if tuple(new_key + [el[0]]) in temp:\n temp[tuple(new_key + [el[0]])] += value\n else:\n temp[tuple(new_key + [el[0]])] = value\n\nca_unit_station = {}\n\nfor key in temp:\n new_key = key[0:3]\n date = key[-1]\n if new_key in ca_unit_station:\n ca_unit_station[new_key].append([date, temp[key]])\n else:\n ca_unit_station[new_key] = [[date, temp[key]]]\n\nca_unit_station[('R305', 'R206', '125 ST')]", "Exercise 6\n\nSimilarly, combine everything in each station, and come up with a time series of [(date1, count1),(date2,count2),...] type of time series for each STATION, by adding up all the turnstiles in a station.", "temp = {}\n\n# for each key I form the new key and check if it's already in the new dict\n# I append the date in this temp dict to make it easier to sum the values\n# then I create a new dict with the required keys\nfor key in turnstile_daily:\n new_key = key[-1]\n for el in turnstile_daily[key]:\n # setting single negative values to 0:\n # possible causes:\n # strange things in data such as totals that lessen each hour going forward\n # also setting single values over 10.000.000 to 0 to avoid integer overflow:\n # possible causes:\n # data recovery\n value = np.int64(el[1])\n if value < 0 or value > 10000000:\n value = 0\n if (new_key, el[0]) in temp:\n temp[(new_key, el[0])] += value\n else:\n temp[(new_key, el[0])] = value\n\nstation = {}\n\nfor key in temp:\n new_key = key[0]\n date = key[-1]\n if new_key in station:\n station[new_key].append([date, temp[key]])\n else:\n station[new_key] = [[date, temp[key]]]\n\nstation['59 ST']", "Exercise 7\n\nPlot the time series for a station", "test_station = '59 ST'\n\ndates = [el[0] for el in station[test_station]]\ncounts = [el[1] for el in station[test_station]]\n\nfig = plt.figure(figsize=(14, 5))\nax = plt.axes()\nax.plot(dates, counts)\nplt.grid('on');", "Exercise 8\n\nMake one list of counts for one week for one station. 
Monday's count, Tuesday's count, etc. so it's a list of 7 counts.\nMake the same list for another week, and another week, and another week.\nplt.plot(week_count_list) for every week_count_list you created this way. You should get a rainbow plot of weekly commute numbers on top of each other.", "fig = plt.figure(figsize=(16, 6))\nax = plt.axes()\nn = len(station[test_station])\n# creating a list with all the counts for the station\nall_counts = [el[1] for el in station[test_station]]\n# splitting counts every 7 values to get weekly data\nfor i in range(int(np.floor(n/7))):\n ax.plot(all_counts[i*7: 7 + i*7])\nax.set_xticklabels(['', 'Saturday', 'Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'])\nplt.grid('on');", "Exercise 9\n\nOver multiple weeks, sum total ridership for each station and sort them, so you can find out the stations with the highest traffic during the time you investigate", "total_ridership = {}\n\n# just looping through keys and summing all elements inside the dict\nfor key in station:\n for el in station[key]:\n if key in total_ridership:\n total_ridership[key] += el[1]\n else:\n total_ridership[key] = el[1]\n\nimport operator\nsorted(total_ridership.items(), key=operator.itemgetter(1), reverse=True)", "Exercise 10\n\nMake a single list of these total ridership values and plot it with plt.hist(total_ridership_counts) to get an idea about the distribution of total ridership among different stations. \nThis should show you that most stations have a small traffic, and the histogram bins for large traffic volumes have small bars.\n\nAdditional Hint: \nIf you want to see which stations take the meat of the traffic, you can sort the total ridership counts and make a plt.bar graph. For this, you want to have two lists: the indices of each bar, and the values. The indices can just be 0,1,2,3,..., so you can do \nindices = range(len(total_ridership_values))\nplt.bar(indices, total_ridership_values)", "fig = plt.figure(figsize=(16, 10))\nax = plt.axes()\nax.hist(list(total_ridership.values()));\n\nfig = plt.figure(figsize=(16, 10))\nax = plt.axes()\nax.bar(range(len(total_ridership)), sorted(list(total_ridership.values())));" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NYUDataBootcamp/Projects
MBA_S16/Jonathan Broch - Demographics & War.ipynb
mit
[ "Demographics and War\nAuthored by: Jonathan Broch jb5365.\nWar has been around since the dawn of time. As long as humans have been around they have bee fighting eachother over resources and even ideas. My project strives to take the first step in answering if there are demographic enablers for war. Traditionally, in order to fight civil wars, populations need a pool of young to be used as fighters. No youth, no war? The other parameter that must be satisfied in order for armed conflict to occur is that young people have to choose violence over the alternative. If all the youth are already employed in careers of their choice, why would they fight? My project strives to look for patterns in the population growth and youth unemployment data to see if countries that are currently experiencing civil war can be grouped together by this data.\n\nI got population data from the UN website and datbase. I averaged the population growth rates from 1985 - 2005 because people born in those years would be considered \"youth\", between 15 and 25 years old by 2010. 2010 is the year I chose because it is the year before civil war broke out in Syria and other armed conflicts in the middle east.\n\nI used the youth unemployment data from the World Bank. I only used youth unemployment data for 2010 because that is the year relevent to my analysis and the average population growth rate calculated with the UN data.", "# import packages \nimport pandas as pd # data management\nimport matplotlib.pyplot as plt # graphics\nimport urllib\nimport numpy as np\n\n# IPython command, puts plots in notebook \n%matplotlib inline\n\n# check Python version \nimport datetime as dt \nimport sys\nprint('Today is', dt.date.today())\nprint('What version of Python are we running? \\n', sys.version, sep='')\n\n#I am downloading and saving the UN data on population growth by year\n\nimport requests\n\nurl1 = \"http://esa.un.org/unpd/wpp/DVD/Files/\"\nurl2 = \"1_Indicators%20(Standard)/EXCEL_FILES/\"\nurl3 = \"1_Population/WPP2015_POP_F02_POPULATION_GROWTH_RATE.XLS\"\nUNdata = url1 + url2 + url3\n\nresp = requests.get(UNdata)\nwith open('UNdata.xls', 'wb') as output:\n output.write(resp.content)\n\n#I am downloading and saving the World Bank data on youth unemployment\n\nurl4 = \"http://api.worldbank.org/v2/en/indicator/sl.uem.1524.zs?downloadformat=excel\"\nurl5 = ''\nurl6 = ''\nWBdata = url4 + url5 + url6\n\nresp = requests.get(WBdata)\nwith open('WBdata.xls', 'wb') as output:\n output.write(resp.content)", "DATA\nAfter downloading both files, I created a consolidated sheet on excel. This was difficult because the UN and World Banks did not use the same names for the same country. For example, \"PDR Korea\" in the UN file was classified as \"the people's republic of Korea\" in the World Bank. There were around 25 instances of this occuring. Next, I used formulas in excel to extract data from the UN and WB files to populate a table that had \n\"Country name\" in the first column, \n\"Country code\" in the second, \n\"Population Growth\" in the third, \n\"Youth Unemployment\" in the 4th column, and \n\"Group\" in the fifth. The group designation will be explained later.", "#after combining both files, I uploaded the excel document to my dropbox at the link below\n\nxls_file = pd.ExcelFile('https://dl.dropboxusercontent.com/u/16846867/UN-WB%202010%20PG%20vs%20YU.xlsx')\nxls_file\n\n#Here is the data I will analyze. 
Population growth is an average from 1985-2005 and youth unemployment data is from 2010.\n\nDataurl = 'https://dl.dropboxusercontent.com/u/16846867/UN-WB%202010%20PG%20vs%20YU.xlsx'\n\nFP1 = pd.read_excel(Dataurl, sheetname=1, skiprows=0, na_values=['…'])\n\nData = FP1[list(range(5))]\n\nData\n\n#here is all the data in a scatter plot\nData.plot.scatter(x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(10, 5), alpha=1)\n\n#Here are the values I am using the split the data into 4 groups\n\nData.median(axis=None, skipna=True, level=None, numeric_only=None)\n\n#Here is the same scatter plot with lines dividing the data into groups.\n\n#Group A: Low pop growth, Low youth unemployment. Lower left\n#Group B: High pop growth, Low youth unemployment. Lower right\n#Group C: Low pop growth, High youth unemployment. Upper left\n#Group D: High pop growth, High youth unemployment Upper right\n\nData.plot.scatter(x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20, 10), alpha=.9, s=50, color =\"cyan\")\n\n#I used the numbers for median pop growth and youth unemployment to divide the data into 4 groups.\n#Below is the code i used to intput the dividing lines\n\nPM1 = [1.770625, 1.770625]\nPM2 = [0,70]\nplt.plot(PM1, PM2)\n\nYE1 = [-2, 7]\nYE2 = [16.326897,16.326897]\nplt.plot(YE1, YE2)\nplt.show()\n\n#I am most interested by countries with high population growth and high youth unemployment, Group D.\n#My prediction is that countries in this group are more likely to be countries currently undergoing or about to undergo civil unrest.\n#As you can see, some of the most obvious countries currently undergoing violent unrest are in the top 12 of Group D.\n\nData[(Data.Group == \"D\")].sort_values(\"Population Growth\", axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last').head(12)\n\n#This table shows countries in group A, sorted by lowest youth unemployment. \n#Most of these countries are traditionally known as being extremely stable in 2010.\n\nData[(Data.Group == \"A\")].sort_values(\"Youth Unemployment\", axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last').head(12)\n\n#This table shows countries with the lowest Youth Unemployment. At a glance, they seem to be countries that have\n#undergone violent civil strife in the past 50 years. Rwanda, Cambodia, Liberia, Sierra Leone\n\nData.sort_values(\"Youth Unemployment\", axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last').head(10)\n\n#This table shows countries with the lowest populaiton growth. Coincidentally, there are many countries in eastern Europe.\n\nData.sort_values(\"Population Growth\", axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last').head(12)\n\n#This table shows countries that have the highest population growth rates.\n\nData.sort_values(\"Population Growth\", axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last').head(12)", "The Grand Finale\nAnd now back to what I thought was interesting. 
Here is the scatter plot with countries that have had thousands of deaths in recent years due to armed conflict, as well as some of the largest economies in the world, highlighted:\nin RED: Afghanistan, Iraq, Yemen, Sudan, Syria, and Libya\nin BLUE: The United States\nin YELLOW: China\nin GREEN: India", "fig, ax = plt.subplots()\nData.plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.5, color = 'white')\nData.iloc[199:201].plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.9, color = 'red', s=100)\nData.iloc[172:173].plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.9, color = 'red', s=100)\nData.iloc[164:165].plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.9, color = 'red', s=100)\nData.iloc[188:189].plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.9, color = 'red', s=100)\nData.iloc[129:130].plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.9, color = 'red', s=100)\nData.iloc[64:65].plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.9, color = 'blue', s=100)\nData.iloc[66:67].plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.9, color = 'Yellow', s=100)\nData.iloc[109:110].plot.scatter(ax=ax, x=\"Population Growth\", y=\"Youth Unemployment\", figsize=(20,10), alpha=.9, color = 'green', s=100)\n\nPM1 = [1.770625, 1.770625]\nPM2 = [0,70]\nplt.plot(PM1, PM2)\n\nYE1 = [-2, 7]\nYE2 = [16.326897,16.326897]\nplt.plot(YE1, YE2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
lily-tian/fanfictionstatistics
jupyter_notebooks/story_performance.ipynb
mit
[ "# imports libraries\nimport pickle\t\t\t\t\t\t\t\t\t\t# import/export lists\nimport math\t\t\t\t\t\t\t\t\t\t\t# mathematical functions\nimport datetime\t\t\t\t\t\t\t\t\t\t# dates\nimport string\nimport re \t\t\t\t\t\t\t\t\t\t\t# regular expression\nimport pandas as pd\t\t\t\t\t\t\t\t\t# dataframes\nimport numpy as np\t\t\t\t\t\t\t\t\t# numerical computation\nimport matplotlib.pyplot as plt\t\t\t\t\t\t# plot graphics\nimport seaborn as sns\t\t\t\t\t\t\t\t# graphics supplemental\nimport statsmodels.formula.api as smf\t\t\t\t# statistical models\nfrom statsmodels.stats.outliers_influence import (\n variance_inflation_factor as vif)\t\t\t\t# vif\nfrom nltk.corpus import stopwords\n\n# opens cleaned data\nwith open ('../clean_data/df_story', 'rb') as fp:\n df = pickle.load(fp)\n\n# creates subset of data of online stories\ndf_online = df.loc[df.state == 'online', ].copy()\n\n# sets current year\ncyear = datetime.datetime.now().year\n\n# sets stop word list for text parsing\nstop_word_list = stopwords.words('english')", "Fanfiction Story Analysis\nPerformance benchmarking and prediction\nThe success of a story is typically judged by the number of reviews, favorites, or followers it recieves. Here, we will try to predict how successful a story will be given select observable features, as well as develop a way to benchmark existing stories. That is, if we were given a story's features, we can determine whether that story is overperforming or underperforming relative to its peers.\nReviews, favorites, and follows\nFirst and foremost, let us examine the distribution of each of these \"success\" metrics.", "# examines distribution of number of words\n\ndf_online['reviews'].fillna(0).plot.hist(normed=True, \n bins=np.arange(0, 50, 1), alpha=0.5, histtype='step', linewidth='2')\ndf_online['favs'].fillna(0).plot.hist(normed=True, \n bins=np.arange(0, 50, 1), alpha=0.5, histtype='step', linewidth='2')\ndf_online['follows'].fillna(0).plot.hist(normed=True, \n bins=np.arange(0, 50, 1), alpha=0.5, histtype='step', linewidth='2')\nplt.xlim(0,50)\nplt.legend().set_visible(True)\n\nplt.show()", "As expected, reviews, favorites, and follows all have heavily right-skewing distributions. However, there are also differences. A story is mostly likely to have 1 or 2 reviews, not 0. A story is mostly likely to have 0 favorites, but otherwise the favorites distribution looks very similar to reviews. Follows is the one that deviates the most. About one-fourth of stories have 0 or 1 follows. \nWe assumed authors prefer having reviews first and foremost, then favorites, then follows. The data reveals that it is actually follows that is the most \"rare\" out of the three metrics, then favorites, and finally review. \nThis is accordance with intuition. Anyone can sign a review, with or without an account. Only users with accounts can increase a story's favorite counter. Finally, follows are the most \"hassling\", as they send update messages to a follower's email inbox. 
Consequently, they are the least common.", "df_online.columns.values\n\n# creates regressand variables\ndf_online['ratedM'] = [row == 'M' for row in df_online['rated']]\ndf_online['age'] = [cyear - int(row) for row in df_online['pub_year']]\ndf_online['fansize'] = [fandom[row] for row in df_online['fandom']]\ndf_online['complete'] = [row == 'Complete' for row in df_online['status']]\ndf_online['lnchapters'] = np.log(df_online['chapters'])\n\n# creates independent variables\ndf_online['lnreviews'] = np.log(df_online['reviews']+1)\ndf_online['lnfavs'] = np.log(df_online['favs']+1)\ndf_online['lnfollows'] = np.log(df_online['follows']+1)\n\ndf_online['lnfavs'] = np.log(df_online['favs']+1)\n\nsns.pairplot(data=df_online, y_vars=['lnfavs'], x_vars=['lnchapters', 'lnwords1k', 'age'])\nsns.pairplot(data=df_online, y_vars=['favs'], x_vars=['chapters', 'words', 'age'])\n\nplt.show()\n\n\n\nsns.pairplot(data=df_online, y_vars=['lnreviews'], x_vars=['lnchapters', 'lnwords1k', 'age'])\nsns.pairplot(data=df_online, y_vars=['reviews'], x_vars=['chapters', 'words', 'age'])\n\nplt.show()\n\n# runs OLS regression\nformula = 'reviews ~ chapters + words1k + ratedM + age + fansize + complete'\nreg = smf.ols(data=df_online, formula=formula).fit()\nprint(reg.summary())\n\n# runs OLS regression\nformula = 'lnreviews ~ lnchapters + lnwords1k + ratedM + age + fansize + complete'\nreg = smf.ols(data=df_online, formula=formula).fit()\nprint(reg.summary())\n\n# runs OLS regression\nformula = 'lnfavs ~ lnchapters + lnwords1k + ratedM + age + fansize'\nreg = smf.ols(data=df_online, formula=formula).fit()\nprint(reg.summary())\n\n# creates copy of only active users\ndf_active = df_profile.loc[df_profile.status != 'inactive', ].copy()\n\n# creates age variable\ndf_active['age'] = 17 - pd.to_numeric(df_active['join_year'])\ndf_active.loc[df_active.age < 0, 'age'] = df_active.loc[df_active.age < 0, 'age'] + 100\ndf_active = df_active[['st', 'fa', 'fs', 'cc', 'age']]\n\n# turns cc into binary\ndf_active.loc[df_active['cc'] > 0, 'cc'] = 1", "Multicollinearity", "# displays correlation matrix\ndf_active.corr()\n\n# creates design_matrix \nX = df_active\nX['intercept'] = 1\n\n# displays variance inflation factor\nvif_results = pd.DataFrame()\nvif_results['VIF Factor'] = [vif(X.values, i) for i in range(X.shape[1])]\nvif_results['features'] = X.columns\nvif_results", "Results indicate there is some correlation between two of the independent variables: 'fa' and 'fs', implying one of them may not be necessary in the model.\nNonlinearity\nWe know from earlier distributions that some of the variables are heavily right-skewed. We created some scatter plots to confirm that the assumption of linearity holds.\nThe data is clustered around the zeros. Let's try a log transformation.\nRegression Model", "# runs OLS regression\nformula = 'st ~ fa + fs + cc + age'\nreg = smf.ols(data=df_active, formula=formula).fit()\nprint(reg.summary())", "The log transformations helped increase the fit from and R-squared of ~0.05 to ~0.20.\nFrom these results, we can see that:\n\nA 1% change in number of authors favorited is associated with a ~15% change in the number of stories written.\nA 1% change in number of stories favorited is associated with a ~4% change in the number of stories written.\nBeing in a community is associated with a ~0.7 increase in the number of stories written.\nOne more year on the site is associated with a ~3% change in the number of stories written.\n\nWe noted earlier that 'fa' and 'fs' had a correlation of ~0.7. 
As such, we reran the regression without 'fa' first, then again without 'fs'. The model without 'fs' yielded a better fit (R-squared) as well as better AIC and BIC values.", "# runs OLS regression\nformula = 'st ~ fa + cc + age'\nreg = smf.ols(data=df_active, formula=formula).fit()\nprint(reg.summary())", "Without 'fs', we lost some information but not much:\n\nA 1% change in number of authors favorited is associated with a ~20% change in the number of stories written.\nBeing in a community is associated with a ~0.7 increase in the number of stories written.\nOne more year on the site is associated with a ~3% change in the number of stories written.\n\nAll these results seem to confirm a basic intuition: the more actively a user reads (as measured by favoriting authors and stories), the more likely it is that the user will write more stories. Having been on the site longer and being part of a community are also correlated with publications.\nTo get a sense of the actual magnitude of these effects, let's attempt some plots:", "def graph(formula, x_range): \n    y = np.array(x_range)\n    x = formula(y)\n    plt.plot(y,x) \n\ngraph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1)))), \n      range(2,100,1))\ngraph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1))+reg.params[2])), \n      range(2,100,1))\n\nplt.show() \n\nages = [0, 1, 5, 10, 15]\nfor age in ages:\n    graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1))+reg.params[3]*age)), \n          range(2,100,1))\n\nplt.show() " ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bgroveben/python3_machine_learning_projects
introduction_to_ml_with_python/1_Introduction.ipynb
mit
[ "Chapter 1: Introduction", "import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport mglearn\nfrom IPython.display import display\n%matplotlib inline", "Essential Libraries and Tools\nNumPy", "import numpy as np\n\nx = np.array([[1,2,3],[4,5,6]])\nprint(\"x:\\n{}\".format(x))", "SciPy", "from scipy import sparse\n\n# Create a 2D NumPy array with a diagonal of ones, and zeros everywhere else (aka an identity matrix).\neye = np.eye(4)\nprint(\"NumPy array:\\n{}\".format(eye))\n\n# Convert the NumPy array to a SciPy sparse matrix in CSR format. \n# The CSR format stores a sparse m × n matrix M in row form using three (one-dimensional) arrays (A, IA, JA). \n# Only the nonzero entries are stored. \n# http://www.scipy-lectures.org/advanced/scipy_sparse/csr_matrix.html \nsparse_matrix = sparse.csr_matrix(eye)\nprint(\"\\nSciPy sparse CSR matrix:\\n{}\".format(sparse_matrix))", "Usually it isn't possible to create dense representations of sparse data (they won't fit in memory), so we need to create sparse representations directly.\nHere is a way to create the same sparse matrix as before using the COO format:", "data = np.ones(4)\nrow_indices = np.arange(4)\ncol_indices = np.arange(4)\neye_coo = sparse.coo_matrix((data, (row_indices, col_indices)))\nprint(\"COO representation:\\n{}\".format(eye_coo))", "More details on SciPy sparse matrices can be found in the SciPy Lecture Notes.\nmatplotlib", "# %matplotlib inline -- the default, just displays the plot in the browser.\n# %matplotlib notebook -- provides an interactive environment for the plot.\nimport matplotlib.pyplot as plt\n\n# Generate a sequnce of numbers from -10 to 10 with 100 steps (points) in between.\nx = np.linspace(-10, 10, 100)\n# Create a second array using sine.\ny = np.sin(x)\n# The plot function makes a line chart of one array against another.\nplt.plot(x, y, marker=\"x\")\nplt.title(\"Simple line plot of a sine function using matplotlib\")\nplt.show()", "pandas\nHere is a small example of creating a pandas DataFrame using a Python dictionary.", "import pandas as pd\nfrom IPython.display import display\n\n# Create a simple dataset of people\ndata = {'Name': [\"John\", \"Anna\", \"Peter\", \"Linda\"],\n 'Location' : [\"New York\", \"Paris\", \"Berlin\", \"London\"],\n 'Age' : [24, 13, 53, 33]\n }\n\ndata_pandas = pd.DataFrame(data)\n# IPython.display allows for \"pretty printing\" of dataframes in the Jupyter notebooks.\ndisplay(data_pandas)", "There are several possible ways to query this table.\nHere is one example:", "# Select all rows that have an age column greater than 30:\ndisplay(data_pandas[data_pandas.Age > 30])", "mglearn\nThe mglearn package is a library of utility functions written specifically for this book, so that the code listings don't become too cluttered with details of plotting and data loading.\nThe mglearn library can be found at the author's Github repository, and can be installed with the command pip install mglearn.\nNote\nAll of the code in this book will assume the following imports: \nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport mglearn\nfrom IPython.display import display\nWe also assume that you will run the code in a Jupyter Notebook with the %matplotlib notebook or %matplotlib inline magic enabled to show plots.\nIf you are not using the notebook or these magic commands, you will have to call plt.show to actually show any of the figures.", "# Make sure your dependencies are similar to the ones in the book.\n\nimport sys\nprint(\"Python 
version: {}\".format(sys.version))\n\nimport pandas as pd\nprint(\"pandas version: {}\".format(pd.__version__))\n\nimport matplotlib\nprint(\"matplotlib version: {}\".format(matplotlib.__version__))\n\nimport numpy as np\nprint(\"NumPy version: {}\".format(np.__version__))\n\nimport scipy as sp\nprint(\"SciPy version: {}\".format(sp.__version__))\n\nimport IPython\nprint(\"IPython version: {}\".format(IPython.__version__))\n\nimport sklearn\nprint(\"scikit-learn version: {}\".format(sklearn.__version__))", "A First Application: Classifying Iris Species\nMeet the Data\nThe data we will use for this example is the Iris dataset, which is a commonly used dataset in machine learning and statistics tutorials.\nThe Iris dataset is included in scikit-learn in the datasets module.\nWe can load it by calling the load_iris function.", "from sklearn.datasets import load_iris\niris_dataset = load_iris()\niris_dataset", "The iris object that is returned by load_iris is a Bunch object, which is very similar to a dictionary.\nIt contains keys and values:", "print(\"Keys of iris_dataset: \\n{}\".format(iris_dataset.keys()))", "The value of the key DESCR is a short description of the dataset.", "print(iris_dataset['DESCR'][:193] + \"\\n...\")", "The value of the key target_names is an array of strings, containing the species of flower that we want to predict.", "print(\"Target names: {}\".format(iris_dataset['target_names']))", "The value of feature_names is a list of strings, giving the description of each feature:", "print(\"Feature names: \\n{}\".format(iris_dataset['feature_names']))", "The data itself is contained in the target and data fields.\ndata contains the numeric measurements of sepal length, sepal width, petal length, and petal width in a NumPy array:", "print(\"Type of data: {}\".format(type(iris_dataset['data'])))", "The rows in the data array correspond to flowers, while the columns represent the four measurements that were taken for each flower.", "print(\"Shape of data: {}\".format(iris_dataset['data'].shape))", "The shape of the data array is the number of samples (flowers) multiplied by the number of features (properties, e.g. 
sepal width).\nHere are the feature values for the first five samples:", "print(\"First five rows of data:\\n{}\".format(iris_dataset['data'][:5]))", "The data tells us that all of the first five flowers have a petal width of 0.2 cm and that the first flower has the longest sepal (5.1 cm)\nThe target array contains the species of each of the flowers that were measured, also as a NumPy array:", "print(\"Type of target: {}\".format(type(iris_dataset['target'])))", "target is a one-dimensional array, with one entry per flower:", "print(\"Shape of target: {}\".format(iris_dataset['target'].shape))", "The species are encoded as integers from 0 to 2:", "print(\"Target:\\n{}\".format(iris_dataset['target']))", "The meanings of the numbers are given by the iris['target_names'] array:\n0 means setosa, 1 means versicolor, and 2 means virginica.\nMeasuring Success: Training and Testing Data\nWe want to build a machine learning model from this data that can predict the species of iris for a new set of measurements.\nTo assess the model's performance, we show it new data for which we have labels.\nThis is usually done by splitting the labeled data into training data and test data.\nscikit-learn contains a function called train_test_split that shuffles the data and splits it for you (the default is 75% train and 25% test).\nIn scikit-learn, data is usually denoted with a capital X, while labels are denoted by a lowercase y.\nThis is inspired by the standard formulation f(x)=y in mathematics, where x is the input to a function and y is the output.\nFollowing more conventions from mathematics, we use a capital X because the data is a two-dimensional array (a matrix) and a lowercase y because the target is a one-dimensional array (a vector).\nLet's call train_test_split on our data and assign the outputs using this nomenclature:", "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n iris_dataset['data'], iris_dataset['target'], random_state=0)\n# The random_state parameter gives the pseudorandom number generator a fixed (set) seed.\n# Setting the seed allows us to obtain reproducible results from randomized procedures.\n\nprint(\"X_train: \\n{}\".format(X_train))\nprint(\"X_test: \\n{}\".format(X_test))\nprint(\"y_train: \\n{}\".format(y_train))\nprint(\"y_test: \\n{}\".format(y_test))", "The output of the train_test_split function is X_train, X_test, y_train, and y_test, which are all NumPy arrays.\nX_train contains 75% of the rows in the dataset, and X_test contains the remaining 25%.", "print(\"X_train shape: \\n{}\".format(X_train.shape))\nprint(\"y_train shape: \\n{}\".format(y_train.shape))\n\nprint(\"X_test shape: \\n{}\".format(X_test.shape))\nprint(\"y_test shape: \\n{}\".format(y_test.shape))", "First Things First: Look at Your Data\nBefore building a machine learning model, it is often a good idea to inspect the data for several reasons:\n- so you can see if the task can be solved without machine learning.\n- so you can see if the desired information is contained in the data or not.\n- so you can detect abnormalities or peculiarities in the data (inconsistent measurements, etc).\nOne of the best ways to inspect data is to visualize it.\nIn the example below we will be building a type of scatter plot known as a pair plot.\nThe data points are colored according to the species the iris belongs to.\nTo create the plot, we first convert the NumPy array into a pandas DataFrame.\npandas has a function to create pair plots called 
scatter_matrix.\nThe diagonal of this matrix is filled with histograms of each feature.", "# Create dataframe from data in X_train.\n# Label the columns using the strings in iris_dataset.feature_names.\niris_dataframe = pd.DataFrame(X_train, columns=iris_dataset.feature_names)\n# Create a scatter matrix from the dataframe, color by y_train.\npd.plotting.scatter_matrix(iris_dataframe, c=y_train, figsize=(15, 15), marker='o',\n hist_kwds={'bins': 20}, s=60, alpha=.8, cmap=mglearn.cm3)", "From the plots, we can see that the three classes seem to be relatively well separated using the sepal and petal measurements.\nThis means that a machine learning model will likely be able to learn to separate them.\nBuilding Your First Model: k-Nearest Neighbors\nThere are many classification algorithms in scikit-learn that we can use; here we're going to implement the k-nearest neighbors classifier.\nThe k in k-nearest neighbors refers to the number of nearest neighbors that will be used to predict the new data point.\nWe can consider any fixed number k of neighbors; the default for sklearn.neighbors.KNeighborsClassifier is 5, we're going to keep things simple and use 1 for k.\nAll machine learning models in scikit-learn are implemented in their own classes, which are called Estimator classes.\nThe k-nearest neighbors classification algorithm is implemented in the KNeighborsClassifier class in the neighbors module.\nMore information about the Nearest Neighbors Classification can be found here, and an example can be found here.\nBefore we can use the model, we need to instantiate the class into an object.\nThis is when we will set any parameters of the model, the most important of which is the number of neighbors, which we will set to 1:", "from sklearn.neighbors import KNeighborsClassifier\nknn = KNeighborsClassifier(n_neighbors=1)", "The knn object encapsulates the algorithm that will be used to build the model from the training data, as well as the algorithm to make predictions on new data points.\nIt will also hold the information that the algorithm has extracted from the training data.\nIn the case of KNeighborsClassifier, it will just store the training set.\nTo build the model on the training set, we call the fit method of the knn object, which takes as arguments the NumPy array X_train containing the training data and the NumPy array y_train of the corresponding training labels:", "knn.fit(X_train, y_train)", "The fit method returns the knn object itself (and modifies it in place), so we get a string representation of our classifier.\nThe representation shows us which parameters were used in creating the model. 
\nNearly all of them are the default values, but you can also find n_neighbors=1, which is the parameter that we passed.\nMost models in scikit-learn have many parameters, but the majority of them are either speed optimizations or for very special use cases.\nThe important parameters will be covered in Chapter 2.\nMaking Predictions\nNow we can make predictions using this model on new data which isn't labeled.\nLet's use an example iris with a sepal length of 5cm, sepal width of 2.9cm, petal length of 1cm, and petal width of 0.2cm.\nWe can put this data into a NumPy array by calculating the shape, which is the number of samples(1) multiplied by the number of features(4):", "X_new = np.array([[5, 2.9, 1, 0.2]])\nprint(\"X_new.shape: \\n{}\".format(X_new.shape))", "Note that we made the measurements of this single flower into a row in a two-dimensional NumPy array.\nscikit-learn always expects two-dimensional arrays for the data.\nNow, to make a prediction, we call the predict method of the knn object:", "prediction = knn.predict(X_new)\nprint(\"Prediction: \\n{}\".format(prediction))\nprint(\"Predicted target name: \\n{}\".format(\n iris_dataset['target_names'][prediction]))", "Our model predicts that this new iris belongs to the class 0, meaning its species is setosa.\nHow do we know whether we can trust our model?\nWe don't know the correct species of this sample, which is the whole point of building the model.\nEvaluating the Model\nThis is where the test set that we created earlier comes into play.\nThe test data wasn't used to build the model, but we do know what the correct species is for each iris in the test set.\nTherefore, thus, hence, ergo, we can make a prediction for each iris in the test data and compare it against its label (the known species).\nWe can measure how well the model works by computing the accuracy, which is the fraction of flowers for which the correct species was predicted:", "y_pred = knn.predict(X_test)\nprint(\"Test set predictions: \\n{}\".format(y_pred))\n\nprint(\"Test set score: \\n{:.2f}\".format(np.mean(y_pred == y_test)))", "We can also use the score method of the knn object, which will compute the test set accuracy for us:", "print(\"Test set score: \\n{:.2f}\".format(knn.score(X_test, y_test)))", "For this model, the test set accuracy is about 0.97, which means that we made the correct prediction for 97% of the irises in the test set.\nIn later chapters we will discuss how we can improve performance, and what caveats there are in tuning a model.\nSummary and Outlook\nHere is a summary of the code needed for the whole training and evaluation procedure:", "X_train, X_test, y_train, y_test = train_test_split(\n iris_dataset['data'], iris_dataset['target'], random_state=0)\n\nknn = KNeighborsClassifier(n_neighbors=1)\nknn.fit(X_train, y_train)\n\nprint(\"Test set score: \\n{:.2f}\".format(knn.score(X_test, y_test)))", "This snippet contains the core code for applying any machine learning algorithm using scikit-learn.\nThe fit, predict, and score methods are the common interface to supervised models in scikit-learn, and with the concepts introduced in this chapter, you can apply these models to many machine learning tasks.\nIn the next chapter, we will go more into depth about the different kinds of supervised models in scikit-learn and how to apply them successfully." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NEONInc/NEON-Data-Skills
code/Python/uncertainty/hyperspectral-validation.ipynb
gpl-2.0
[ "Spectrometer accuracy assesment using validation tarps\nBackground\nIn this lesson we will be examing the accuracy of the Neon Imaging Spectrometer (NIS) against targets with known reflectance. The targets consist of two 10 x 10 m tarps which have been specially designed to have 3% reflectance (black tarp) and 48% reflectance (white tarp) across all of the wavelengths collected by the NIS (see images below). During the Sept. 12 2016 flight over the Chequamegon-Nicolet National Forest, an area in D05 which is part of Steigerwaldt (STEI) site, these tarps were deployed in a gravel pit. During the airborne overflight, observations were also taken over the tarps with an ASD field spectrometer. The ASD measurments provide a validation source against the the airborne measurements. \n\n\n\nTo test the accuracy, we will utilize reflectance curves from the tarps as well as from the associated flight line and execute absolute and relative comparisons. The major error sources in the NIS can be generally categorized into the following sources\n1) Calibration of the sensor\n2) Quality of ortho-rectification\n3) Accuracy of radiative transfer code and subsequent ATCOR interpolation\n4) Selection of atmospheric input parameters\n5) Terrain relief\n6) Terrain cover\nNote that the manual for ATCOR, the atmospheric correction software used by AOP, specifies the accuracy of reflectance retrievals to be between 3 and 5% of total reflectance. The tarps are located in a flat area, therefore, influences by terrain releif should be minimal. We will ahve to keep the remining errors in mind as we analyze the data. \nObjective\nIn this lesson we will learn how to retrieve relfectance curves from a pre-specified coordainte in a NEON AOP HDF 5 file, learn how to read a tab delimited text file, retrive bad bad window indexes and mask portions of a reflectance curve, plot reflectance curves on a graph and save the file, gain an understanding of some sources of uncertainty in NIS data.\nSuggested pre-requisites\nWorking with NEON AOP Hyperspectral Data in Python Jupyter Notebooks\nLearn to Efficiently Process NEON Hyperspectral Data\nWe'll start by adding all of the necessary libraries to our python script", "import h5py\nimport csv\nimport numpy as np\nimport os\nimport gdal\nimport matplotlib.pyplot as plt\nimport sys\nfrom math import floor\nimport time\nimport warnings\nwarnings.filterwarnings('ignore')\n%matplotlib inline", "As well as our function to read the hdf5 reflectance files and associated metadata", "def h5refl2array(h5_filename):\n hdf5_file = h5py.File(h5_filename,'r')\n\n #Get the site name\n file_attrs_string = str(list(hdf5_file.items()))\n file_attrs_string_split = file_attrs_string.split(\"'\")\n sitename = file_attrs_string_split[1]\n refl = hdf5_file[sitename]['Reflectance']\n reflArray = refl['Reflectance_Data']\n refl_shape = reflArray.shape\n wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']\n #Create dictionary containing relevant metadata information\n metadata = {}\n metadata['shape'] = reflArray.shape\n metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']\n #Extract no data value & set no data value to NaN\\n\",\n metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])\n metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])\n metadata['bad_band_window1'] = (refl.attrs['Band_Window_1_Nanometers'])\n metadata['bad_band_window2'] = (refl.attrs['Band_Window_2_Nanometers'])\n metadata['projection'] = 
refl['Metadata']['Coordinate_System']['Proj4'].value\n metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)\n mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value\n mapInfo_string = str(mapInfo); #print('Map Info:',mapInfo_string)\\n\",\n mapInfo_split = mapInfo_string.split(\",\")\n #Extract the resolution & convert to floating decimal number\n metadata['res'] = {}\n metadata['res']['pixelWidth'] = mapInfo_split[5]\n metadata['res']['pixelHeight'] = mapInfo_split[6]\n #Extract the upper left-hand corner coordinates from mapInfo\\n\",\n xMin = float(mapInfo_split[3]) #convert from string to floating point number\\n\",\n yMax = float(mapInfo_split[4])\n #Calculate the xMax and yMin values from the dimensions\\n\",\n xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)\\n\",\n yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)\\n\",\n metadata['extent'] = (xMin,xMax,yMin,yMax),\n metadata['ext_dict'] = {}\n metadata['ext_dict']['xMin'] = xMin\n metadata['ext_dict']['xMax'] = xMax\n metadata['ext_dict']['yMin'] = yMin\n metadata['ext_dict']['yMax'] = yMax\n hdf5_file.close \n return reflArray, metadata, wavelengths", "Define the location where you are holding the data for the data institute. The h5_filename will be the flightline which contains the tarps, and the tarp_48_filename and tarp_03_filename contain the field validated spectra for the white and black tarp respectively, organized by wavelength and reflectance.", "print('Start CHEQ tarp uncertainty script')\n\nh5_filename = 'C:/RSDI_2017/data/CHEQ/H5/NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5'\ntarp_48_filename = 'C:/RSDI_2017/data/CHEQ/H5/CHEQ_Tarp_48_01_refl_bavg.txt'\ntarp_03_filename = 'C:/RSDI_2017/data/CHEQ/H5/CHEQ_Tarp_03_02_refl_bavg.txt'\n", "We want to pull the spectra from the airborne data from the center of the tarp to minimize any errors introduced by infiltrating light in adjecent pixels, or through errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp which are as follows:\n48% reflectance tarp UTMx: 727487, UTMy: 5078970\n3% reflectance tarp UTMx: 727497, UTMy: 5078970\n\nLet's define these coordaintes", "tarp_48_center = np.array([727487,5078970])\ntarp_03_center = np.array([727497,5078970])", "Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data", "[reflArray,metadata,wavelengths] = h5refl2array(h5_filename)", "Within the reflectance curves there are areas with noisey data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas as they obscure detailes in the plots due to their anamolous values. The meta data assocaited with these band locations is contained in the metadata gatherd by our function. 
We will pull out these areas as 'bad band windows' and determine which indexes in the reflectance curves contain the bad bands", "bad_band_window1 = (metadata['bad_band_window1'])\nbad_band_window2 = (metadata['bad_band_window2'])\n\nindex_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]\nindex_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]\n", "Now join the list of indexes together into a single variable", "index_bad_windows = index_bad_window1+index_bad_window2", "The reflectance data is saved in files which are 'tab delimited.' We will use a numpy function (genfromtxt) to quickly import the tarp reflectance curves observed with the ASD using the '\\t' delimeter to indicate tabs are used.", "tarp_48_data = np.genfromtxt(tarp_48_filename, delimiter = '\\t')\ntarp_03_data = np.genfromtxt(tarp_03_filename, delimiter = '\\t')", "Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots", "tarp_48_data[index_bad_windows] = np.nan\ntarp_03_data[index_bad_windows] = np.nan", "The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we will subtract the tarp center pixel location from the upper left corner pixels specified in the map info of the H5 file. This information is saved in the metadata dictionary output from our function that reads NEON AOP HDF5 files. The difference between these coordaintes gives us the x and y index of the reflectance curve.", "x_tarp_48_index = int((tarp_48_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))\ny_tarp_48_index = int((metadata['ext_dict']['yMax'] - tarp_48_center[1])/float(metadata['res']['pixelHeight']))\n\nx_tarp_03_index = int((tarp_03_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))\ny_tarp_03_index = int((metadata['ext_dict']['yMax'] - tarp_03_center[1])/float(metadata['res']['pixelHeight']))", "Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualisation of thier consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.", "plt.figure(1)\ntarp_48_reflectance = np.asarray(reflArray[y_tarp_48_index,x_tarp_48_index,:], dtype=np.float32)/metadata['scaleFactor']\ntarp_48_reflectance[index_bad_windows] = np.nan\nplt.plot(wavelengths,tarp_48_reflectance,label = 'Airborne Reflectance')\nplt.plot(wavelengths,tarp_48_data[:,1], label = 'ASD Reflectance')\nplt.title('CHEQ 20160912 48% tarp')\nplt.xlabel('Wavelength (nm)'); plt.ylabel('Refelctance (%)')\nplt.legend()\nplt.savefig('CHEQ_20160912_48_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)\n\nplt.figure(2)\ntarp_03_reflectance = np.asarray(reflArray[y_tarp_03_index,x_tarp_03_index,:], dtype=np.float32)/ metadata['scaleFactor']\ntarp_03_reflectance[index_bad_windows] = np.nan\nplt.plot(wavelengths,tarp_03_reflectance,label = 'Airborne Reflectance')\nplt.plot(wavelengths,tarp_03_data[:,1],label = 'ASD Reflectance')\nplt.title('CHEQ 20160912 3% tarp')\nplt.xlabel('Wavelength (nm)'); plt.ylabel('Refelctance (%)')\nplt.legend()\nplt.savefig('CHEQ_20160912_3_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)\n\n", "This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. 
Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (source 4). For example, the user must input the local visibility, which is related to aerosal optical thickness (AOT). We don't measure this at every site, therefore input a standard parameter for all sites. \nGiven the 3% reflectance tarp has much lower overall reflactance, it may be more informative to determine what the absolute difference between the two curves are and plot that as well.", "plt.figure(3)\nplt.plot(wavelengths,tarp_48_reflectance-tarp_48_data[:,1])\nplt.title('CHEQ 20160912 48% tarp absolute difference')\nplt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Refelctance Difference (%)')\nplt.savefig('CHEQ_20160912_48_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)\n\nplt.figure(4)\nplt.plot(wavelengths,tarp_03_reflectance-tarp_03_data[:,1])\nplt.title('CHEQ 20160912 3% tarp absolute difference')\nplt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Refelctance Difference (%)')\nplt.savefig('CHEQ_20160912_3_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)\n", "From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest waveleghts as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1).\nLet's now determine the result of the percent difference, which is the metric used by ATCOR to report accuracy. We can do this by calculating the ratio of the absolute difference between curves to the total reflectance", "\nplt.figure(5)\nplt.plot(wavelengths,100*np.divide(tarp_48_reflectance-tarp_48_data[:,1],tarp_48_data[:,1]))\nplt.title('CHEQ 20160912 48% tarp percent difference')\nplt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Refelctance Difference')\nplt.ylim((-100,100))\nplt.savefig('CHEQ_20160912_48_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)\n\nplt.figure(6)\nplt.plot(wavelengths,100*np.divide(tarp_03_reflectance-tarp_03_data[:,1],tarp_03_data[:,1]))\nplt.title('CHEQ 20160912 3% tarp percent difference')\nplt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Refelctance Difference')\nplt.ylim((-100,150))\nplt.savefig('CHEQ_20160912_3_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)\n", "From these plots we can see that even though the absolute error on the 48% tarp was larger, the relative error on the 48% tarp is generally much smaller. The 3% tarp can have errors exceeding 50% for most of the tarp. This indicates that targets with low reflectance values may have higher relative errors." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
smalladi78/SEF
notebooks/42_ExtractDonor_Data.ipynb
unlicense
[ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nimport matplotlib.ticker as ticker\nimport calendar\n\nmkdir -p viz\n\ndf = pd.read_pickle('out/21/donations.pkl')\nevents = pd.read_pickle('out/41/events.pkl')\n\ndf.columns\n\ndf = df[['donor_id','activity_year', 'amount', 'is_service','channel', 'city','county','state','activity_date','activity_ym','activity_month','appeal','campaign_location_id','campaign_month_id']]", "Nomenclature:\n1. Donation - is a charitable contribution\n2. Contribution - is not a charitable contribution", "def get_data(rows):\n '''\n input: rows from dataframe for a specific donor\n output: money donated and contributed over the years\n '''\n return rows\\\n .groupby(['activity_year', 'activity_month', 'is_service', 'state'])\\\n .agg({'amount': sum}).reset_index()\n\ndf[(df.donor_id=='_1D50SWTKX') & (df.activity_year == 2014)].reset_index()\n\ndonor_data = df\\\n.groupby('donor_id')\\\n.apply(get_data)\n\ndonor_data.index = donor_data.index.droplevel(1)\ndonor_data = donor_data.reset_index()\n\nyear_of_donation = donor_data[(donor_data.is_service==False)]\\\n.groupby('donor_id')['activity_year']\\\n.rank(method='dense')\n\nyear_of_donation.name = 'year_of_donation'\n\nyear_of_contribution = donor_data[(donor_data.is_service==True)]\\\n.groupby('donor_id')['activity_year']\\\n.rank(method='dense')\n\nyear_of_contribution.name = 'year_of_contribution'\n\ndonor_data = pd.merge(donor_data, pd.DataFrame(year_of_donation), how='left', left_index=True, right_index=True)\ndonor_data = pd.merge(donor_data, pd.DataFrame(year_of_contribution), how='left', left_index=True, right_index=True)\n\n# First forward fill the data and then replace with zeros if there are still any nulls lying around\ndonor_data.year_of_contribution = donor_data.year_of_contribution.fillna(method='ffill')\ndonor_data.year_of_donation = donor_data.year_of_donation.fillna(method='ffill')\n\ndonor_data.year_of_contribution = donor_data.year_of_contribution.fillna(0)\ndonor_data.year_of_donation = donor_data.year_of_donation.fillna(0)\n\ndonor_data[donor_data.donor_id=='_1D50SWTKX']\n\ndonor_data[donor_data['donor_id'] == '-0Q51CZR36']\n\ndef get_repeat_years(years):\n '''\n input: years of activity for donor\n output: list of boolean representing if the year was a repeat year donation\n '''\n #years = rows.activity_year.unique()\n repeat_years = [y for y in years.values if y-1 in years.values]\n return years.isin(repeat_years)\n\ndonor_data['is_repeat_year'] = donor_data[(donor_data.is_service==False)]\\\n .groupby('donor_id')['activity_year']\\\n .apply(get_repeat_years)\n\n!mkdir -p out/42\ndonor_data.to_pickle('out/42/donors.pkl')", "Plots! Plots! 
Plots!", "donor_data = pd.read_pickle('out/42/donors.pkl')\ndonations = df\n\nimport locale\n\ncolor1 = '#67a9cf'\ncolor2 = '#fc8d59'\ncolors = [color1, color2]\n\n_ = locale.setlocale(locale.LC_ALL, '')\nthousands_sep = lambda x: locale.format(\"%.2f\", x, grouping=True)", "Do people tend to give more money along the years?\nCalculate the amount donated in the Nth year of donation.", "yearly_donors = donor_data[donor_data.is_service==True]\\\n.groupby(['year_of_donation', 'donor_id'])\\\n.amount.sum()\\\n.to_frame()\n\nyearly_donors.index = yearly_donors.index.droplevel(1)\ndata = yearly_donors.reset_index().groupby('year_of_donation').amount.median().reset_index()\n\ndata.columns = ['year_of_donation', 'amount']\n\nfig, ax = plt.subplots(figsize=(12,6))\nplt.bar(data[:-2].year_of_donation, data[:-2].amount, color=color1)\nplt.xlabel('Nth year of donation', fontsize=16)\nplt.ylabel('Median amount donated (in dollars) in that year', fontsize=16)\nax.xaxis.set_major_formatter(ticker.NullFormatter())\n\nax.set_ylim([0,400])\n\nplt.tick_params(labelsize=16)\nax.xaxis.set_minor_locator(ticker.FixedLocator(data.index.values+0.5))\nax.xaxis.set_minor_formatter(ticker.FixedFormatter(data.index.values.astype('int')))\nax.tick_params(axis='x', labelsize=16)\n#_ = fig.suptitle('Average amount donated vs year of donation', fontsize=16)\nplt.savefig('viz/Median_Amount_Donated_In_Nth_Year.png')", "New donors vs repeat donors", "data1 = donor_data[(donor_data.is_service==False)].groupby(['activity_year', 'is_repeat_year']).donor_id.nunique().unstack().fillna(0)\ndata1 = pd.DataFrame(data1.values, columns=['New','Repeat'], index=np.sort(data1.index.unique()))\ndata1 = data1.apply(lambda x: x/x.sum(), axis=1)\n\ndata2 = donor_data[(donor_data.is_service==False)].groupby(['activity_year', 'is_repeat_year']).amount.sum().unstack().fillna(0)\ndata2 = pd.DataFrame(data2.values, columns=['New','Repeat'], index=np.sort(data2.index.unique()))\ndata2 = data2.apply(lambda x: x/x.sum(), axis=1)\n\nfig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(10,10))\nax1.bar(data1.index.values, data1.Repeat, color = color2)\nax1.bar(data1.index.values, data1.New, color = color1, bottom=data1.Repeat)\nax1.tick_params(labelsize=16)\n\nax2.bar(data2.index.values, data2.Repeat, color = color2)\nax2.bar(data2.index.values, data2.New, color = color1, bottom=data2.Repeat)\nax2.tick_params(labelsize=16)\n\nlocs, labels = plt.xticks()\nplt.setp(labels, rotation=90)\nplt.xticks(np.sort(donor_data.activity_year.unique()))\n\nplt.savefig('viz/NewVsRepeatDonors.png')", "What proportion of money is coming in through various marketing channels", "x = donations\\\n .groupby(['activity_year', 'channel']).amount.sum().to_frame().unstack().fillna(0)\nx.columns = x.columns.droplevel(0)\n\nx = x/1000000\n\nplot = x.plot(kind='line', colormap=plt.cm.jet,\n fontsize=12,\n figsize=(12,18),\n )\n\n#plt.legend().set_visible(False)\nplt.legend(prop={'size':16}, loc='upper center')\n#plot.set_title('Donations flowing through different marketing channels',fontsize=16)\nplt.xlabel('Year of donation', fontsize=16)\nplt.ylabel('Donation amount (in millions of dollars)', fontsize=16)\nplt.tick_params(labelsize=16)\n\nfor idx, x_value in enumerate(x[x.index==2014].values[0]):\n plt.text(2014, x_value, x.columns[idx], fontsize=14)\nplt.savefig('viz/DonationsFromDifferentMarketingChannels.png')\n\nx = donations[(donations.activity_year==2014) & (donations.channel=='Other')]\\\n 
.groupby(['appeal']).amount.sum().to_frame().unstack().fillna(0).sort_values(ascending=False)[:15].to_frame()\nx.index = x.index.droplevel(0)\nx.columns = ['Total Donation Amount']\n\nplot = x.plot(kind='bar',\n fontsize=12,\n color=color1,\n figsize=(12,12))\n\nplt.legend().set_visible(False)\nplot.set_title('Donations flowing through different marketing channels (2014)',fontsize=16)\nplt.xlabel('Donations in 2014', fontsize=14)\n\nplt.savefig('viz/OtherIn2014.png')\n\ncumulative_years = np.cumsum(\n df[(df.activity_year > 2010) & (df.is_service==False)]\\\n .groupby(['activity_year', 'activity_month'])['amount', ]\\\n .sum()\\\n .unstack()\\\n .fillna(0)\n , axis=1, dtype='int64').T\n\ncumulative_years.index = cumulative_years.index.droplevel(0)\ncumulative_years.index = calendar.month_abbr[1:]\ncumulative_years = cumulative_years/1000000\n\nplot = cumulative_years.plot(kind='line',\n fontsize=12,\n figsize=(12,12))\n\nplt.xlabel('Month of donation', fontsize=16)\nplt.ylabel('Cumulative Donation amount (in millions of dollars)', fontsize=16)\nplt.tick_params(labelsize=16)\n#plot.set_title('Cumulative donation amounts over the years',fontsize=16)\nplt.legend(prop={'size':16}, loc='upper center')\n\nvals = cumulative_years.ffill(axis=0)[-1:].columns.values\nheights = cumulative_years.ffill(axis=0)[-1:].values[0]\n\n[plt.text(11, height, val, fontsize=14) for (val, height) in zip(vals, heights)]\n\nplt.savefig('viz/CumulativeDonationsOverTheYears.png')\n\nymdata = df[(df.activity_year > 2010) & (df.is_service==False)].groupby(['activity_year', 'activity_month'])['amount', ]\\\n .sum()\\\n .unstack()\\\n .fillna(0).T\n\nymdata.index = ymdata.index.droplevel(0)\nymdata.index = calendar.month_abbr[1:13]\nymdata = ymdata/1000000\nplot = ymdata.plot(kind='line',\n fontsize=12,\n figsize=(12,12))\n\nplt.xlabel('Month of donation', fontsize=16)\nplt.ylabel('Donation amount (in millions of dollars)', fontsize=16)\n#plot.set_title('Monthly donation amounts over the years',fontsize=16)\n\nplt.legend().set_visible(False)\nplt.tick_params(labelsize=16)\n\nvals = ymdata.ffill(axis=0)[-1:].columns.values\nheights = ymdata.ffill(axis=0)[-1:].values[0]\n\n[plt.text(11, height, val, fontsize=14) for (val, height) in zip(vals, heights)]\n\nplt.savefig('viz/DonationsOverTheYears.png')", "Churn of donors", "donor_data.head()\n\ndef get_churn(year):\n return len(set(\n donor_data[(donor_data.activity_year==year) & (donor_data.is_service==False)].donor_id.unique())\\\n.difference(set(donor_data[(donor_data.activity_year>year) & (donor_data.is_service==False)].donor_id.unique())))\n\nchurn = pd.Series(\n [-get_churn(year)\n for year\n in np.sort(donor_data.activity_year.unique()[:-1])],\n name='Churn',\n index=np.sort(donor_data.activity_year.unique()[:-1]))\nnew_donors = donor_data[donor_data.year_of_donation==1].groupby('activity_year').donor_id.nunique()[:-1]\nnew_donors.name = 'New'\n\n# We drop the last row since it does not make sense to predict yearly churn until the year has passed\nchurn = churn.drop(churn.tail(1).index)\nnew_donors = new_donors.drop(new_donors.tail(1).index)\n\nx = churn.index.values\nfig = plt.figure(figsize=(10,10))\n#plt.title('Churn vs New donors for every year', fontsize=16)\nax = plt.subplot(111)\nax.bar(x, new_donors, width=0.5, color=color1, align='center', label='New donors')\nax.bar(x, churn, width=0.5, color=color2, align='center', label='Donors churned')\nplt.legend(prop={'size':16}, loc=(1,0.33))\nplt.tick_params(labelsize=16)\nplt.xlabel('Year of donation', 
fontsize=16)\nplt.ylabel('Number of donors', fontsize=16)\nplt.savefig('viz/ChurnVsNewDonors.png', bbox_inches='tight')", "How do fund-raisers impact donation dollars?", "from itertools import cycle\n\ndef plot_event_donation_activity(state, years):\n\n ymdata = np.cumsum(\n donations[(donations.state==state)].groupby(['activity_year','activity_month'])['amount', ]\\\n .sum()\\\n .unstack()\\\n .fillna(0),\n axis=1, dtype='int64')\n \n state_events = events[(events.state==state)][['event_name', 'amount', 'activity_month', 'activity_year']]\\\n .sort_values(by=['activity_year', 'activity_month']).reset_index(drop=True)\n \n ymdata.columns = ymdata.columns.droplevel(0)\n \n fig, ax1 = plt.subplots()\n ax1.set_xlabel('Month')\n ax1.set_ylabel('Donation amount')\n\n vals = ymdata.index.values\n heights = ymdata.ffill(axis=1)[-1:].values[0]\n #[plt.text(12, height, val, fontsize=14) for (val, height) in zip(vals, heights)]\n\n ax2 = ax1.twinx()\n ax2.set_ylabel('Event contributions')\n\n colors = cycle([\"r\", \"b\", \"g\"])\n for year in years:\n color = next(colors)\n s1 = ymdata[ymdata.index==year].values[0]\n t = range(1,13)\n ax1.plot(t, s1, color=color, label=year)\n\n evs = state_events[state_events.activity_year==year]\n for ev in evs.iterrows():\n bar = ax2.bar(ev[1].activity_month, ev[1].amount, width=-0.4, alpha=0.2, color=color)\n label = ev[1].event_name\n \n # Put event_name on top of the bars\n rect = bar.patches[0]\n height = rect.get_height()\n ax2.text(rect.get_x() + rect.get_width()/2,\n height + 5,\n label,\n ha='center',\n va='bottom',\n rotation='vertical',\n fontsize=12)\n ax1.legend(prop={'size':16}, loc='upper left')\n plt.savefig('viz/Events_vs_Donations_{0}.png'.format(state))\n return ymdata\n\nymdata = plot_event_donation_activity('WA', [2011])\n\nymdata.ffill(axis=1)[:-1].index.values\nymdata.ffill(axis=1)[-1:].values[0]\n\ncumulative_years = np.cumsum(\n donations[(donations.activity_year > 2010)]\\\n .groupby(['activity_year', 'activity_month'])['amount', ]\\\n .sum()\\\n .unstack()\\\n .fillna(0)\n , axis=1, dtype='int64').T\n\ncumulative_years" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jhconning/Dev-II
notebooks/DataAPIs.ipynb
bsd-3-clause
[ "Data APIs and pandas operations\nSeveral of the notebooks we've already explored loaded datasets into a python pandas dataframe for analysis. Local copies of some of these datasets had been previously saved to disk in a few cases we read in the data directly from an online sources via a data API. This section explains how that is done in a bit more depth. Some of the possible advantages of reading in the data this way is that it allows would-be users to modify and extend the analysis, perhaps focusing on different time-periods or adding in other variables of interest. \nEasy to use python wrappers for data APIs have been written for the World Bank and several other online data providers (including FRED, Eurostat, and many many others). The pandas-datareader library allows access to several databases from the World Bank's datasets and other sources. \nIf you haven't already installed the pandas-datareader library you can do so directly from a jupyter notebook code cell:\n!pip install pandas-datareader\nOnce the library is in installed we can load it as:", "%matplotlib inline\nimport seaborn as sns\nimport warnings\nimport numpy as np\nimport statsmodels.formula.api as smf\nimport datetime as dt\n\nfrom pandas_datareader import wb", "Data on urban bias\nOur earlier analysis of the Harris_Todaro migration model suggested that policies designed to favor certain sectors or labor groups \nLet's search for indicators (and their identification codes) relating to GDP per capita and urban population share. We could look these up in a book or from the website http://data.worldbank.org/ but we can also search for keywords directly.\nFirst lets search for series having to do with gdp per capita", "wb.search('gdp.*capita.*const')[['id','name']]", "We will use NY.GDP.PCAP.KD for GDP per capita (constant 2010 US$).\nYou can also first browse and search for data series from the World Bank's DataBank page at http://databank.worldbank.org/data/. Then find the 'id' for the series that you are interested in in the 'metadata' section from the webpage\nNow let's look for data on urban population share:", "wb.search('Urban Population')[['id','name']].tail()", "Let's use the ones we like but use a python dictionary to rename these to shorter variable names when we load the data into a python dataframe:", "indicators = ['NY.GDP.PCAP.KD', 'SP.URB.TOTL.IN.ZS']", "Since we are interested in exploring the extent of 'urban bias' in some countries, let's load data from 1980 which was toward the end of the era of import-substituting industrialization when urban-biased policies were claimed to be most pronounced.", "dat = wb.download(indicator=indicators, country = 'all', start=1980, end=1980)\n\ndat.columns", "Let's rename the columns to something shorter and then plot and regress log gdp per capita against urban extent we get a pretty tight fit:", "dat.columns = [['gdppc', 'urbpct']]\ndat['lngpc'] = np.log(dat.gdppc)\n\ng = sns.jointplot(\"lngpc\", \"urbpct\", data=dat, kind=\"reg\",\n color =\"b\", size=7)", "That is a pretty tight fit: urbanization rises with income per-capita, but there are several middle income country outliersthat have considerably higher urbanization than would be predicted. Let's look at the regression line.", "mod = smf.ols(\"urbpct ~ lngpc\", dat).fit()\n\nprint(mod.summary())", "Now let's just look at a list of countries sorted by the size of their residuals in this regression line. 
Countries with the largest residuals had urbanization in excess of what the model predicts from their 1980 level of income per capita.\nHere is the sorted list of top 15 outliers.", "mod.resid.sort_values(ascending=False).head(15)", "This is of course only suggestive but (leaving aside the island states like Singapore and Hong-Kong) the list is dominated by southern cone countries such as Chile, Argentina and Peru which in addition to having legacies of heavy political centralization also pursued ISI policies in the 60s and 70s that many would associate with urban biased policies. \nPanel data\nVery often we want data on several indicators and a whole group of countries over a number of years. we could also have used datetime format dates:", "countries = ['CHL', 'USA', 'ARG']\nstart, end = dt.datetime(1950, 1, 1), dt.datetime(2016, 1, 1)\ndat = wb.download(\n indicator=indicators, \n country = countries, \n start=start, \n end=end).dropna()", "Lets use shorter column names", "dat.columns\n\ndat.columns = [['gdppc', 'urb']]\n\ndat.head()", "Notice this has a two-level multi-index. The outer level is named 'country' and the inner level is 'year'\nWe can pull out group data for a single country like this using the .xs or cross section method.", "dat.xs('Chile',level='country').head(3)", "(Note we could have also used dat.loc['Chile'].head())\nAnd we can pull a 'year' level cross section like this:", "dat.xs('2007', level='year').head()", "Note that what was returned was a dataframe with the data just for our selected country. We can in turn further specify what column(s) from this we want:", "dat.loc['Chile']['gdppc'].head()", "Unstack data\nThe unstack method turns index values into column names while stack method converts column names to index values. Here we apply unstack.", "datyr = dat.unstack(level='country')\ndatyr.head()", "We can now easily index a 2015 cross-section of GDP per capita like so:", "datyr.xs('1962')['gdppc']", "We'd get same result from datyr.loc['2015']['gdppc']\nWe can also easily plot all countries:", "datyr['urb'].plot(kind='line');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mbakker7/ttim
pumpingtest_benchmarks/4_test_of_gridley.ipynb
mit
[ "Confined Aquifer Test\nThis test is taken from AQTESOLV examples.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom ttim import *", "Set basic parameters:", "b = -5.4846 #aquifer thickness in m\nQ = 1199.218 #constant discharge in m^3/d\nr = 251.1552 #distance between observation well to test well in m\nrw = 0.1524 #screen radius of test well in m", "Load dataset:", "data1 = np.loadtxt('data/gridley_well_1.txt')\nt1 = data1[:, 0]\nh1 = data1[:, 1]\ndata2 = np.loadtxt('data/gridley_well_3.txt')\nt2 = data2[:, 0]\nh2 = data2[:, 1]", "Create conceptual model:", "ml = ModelMaq(kaq=10, z=[0, b], Saq=0.001, tmin=0.001, tmax=1, topboundary='conf')\nw = Well(ml, xw=0, yw=0, rw=rw, tsandQ=[(0, Q)], layers=0)\nml.solve()", "Calibrate with two datasets simultaneously:", "#unknown parameters: kaq, Saq\nca_0 = Calibrate(ml)\nca_0.set_parameter(name='kaq0', initial=10)\nca_0.set_parameter(name='Saq0', initial=1e-4)\nca_0.series(name='obs1', x=r, y=0, t=t1, h=h1, layer=0)\nca_0.fit(report=True)\n\ndisplay(ca_0.parameters)\nprint('rmse:', ca_0.rmse())\n\nhm_0 = ml.head(r, 0, t1)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t1, hm_0[0], label = 'ttim')\nplt.semilogx(t1, h1, '.', label = 'obs1')\nplt.xlabel('time(d)')\nplt.ylabel('drawdown(m)')\nplt.legend();\n\n#unknown parameters: kaq, Saq\nca_1 = Calibrate(ml)\nca_1.set_parameter(name='kaq0', initial=10)\nca_1.set_parameter(name='Saq0', initial=1e-4)\nca_1.series(name='obs2', x=0, y=0, t=t2, h=h2, layer=0)\nca_1.fit(report=True)\n\ndisplay(ca_1.parameters)\nprint('rmse:', ca_1.rmse())\n\nhm_1 = ml.head(0, 0, t2)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t2, hm_1[0], label = 'ttim')\nplt.semilogx(t2, h2, '.', label = 'obs2')\nplt.xlabel('time(d)')\nplt.ylabel('drawdown(m)')\nplt.legend();", "Calibrate with two datasets simultaneously:", "ml_1 = ModelMaq(kaq=10, z=[0, b], Saq=0.001, tmin=0.001, tmax=1, topboundary='conf')\nw_1 = Well(ml_1, xw=0, yw=0, rw=rw, tsandQ=[(0, Q)], layers=0)\nml_1.solve()\n\nca_2 = Calibrate(ml_1)\nca_2.set_parameter(name='kaq0', initial=10)\nca_2.set_parameter(name='Saq0', initial=1e-4, pmin=0)\nca_2.series(name='obs1', x=r, y=0, t=t1, h=h1, layer=0)\nca_2.series(name='obs2', x=0, y=0, t=t2, h=h2, layer=0)\nca_2.fit(report=True)\n\ndisplay(ca_2.parameters)\nprint('rmse:', ca_2.rmse())\n\nhm1_2 = ml.head(r, 0, t1)\nhm2_2 = ml.head(0, 0, t2)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t1, hm1_2[0], label = 'ttim1')\nplt.semilogx(t1, h1, '.', label = 'obs1')\nplt.semilogx(t2, hm2_2[0], label = 'ttim3')\nplt.semilogx(t2, h2, '.', label = 'obs3')\nplt.xlabel('time(d)')\nplt.ylabel('drawdown(m)')\nplt.legend();", "Ty adding well skin resistance and wellbore storage:", "ml_2 = ModelMaq(kaq=10, z=[0, b], Saq=0.001, tmin=0.001, tmax=1, topboundary='conf')\nw_2 = Well(ml_2, xw=0, yw=0, rw=rw, rc=0.2, res=0.2, tsandQ=[(0, Q)], layers=0)\nml_2.solve()", "If adding wellbore sotrage to the parameters to be optimized, the fit gives extremely large values of each parameter which is imposiible. However, when remove rc from well function, the fit cannot be completed with uncertainties. 
Thus, the rc value is determined as 0.2 by trial-and-error procedure.", "ca_3 = Calibrate(ml_2)\nca_3.set_parameter(name = 'kaq0', initial = 10)\nca_3.set_parameter(name = 'Saq0', initial = 1e-4, pmin=0)\nca_3.set_parameter_by_reference(name='res', parameter=w_2.res, initial =0.2)\nca_3.series(name='obs1', x=r, y=0, t=t1, h=h1, layer=0)\nca_3.series(name='obs3', x=0, y=0, t=t2, h=h2, layer=0)\nca_3.fit(report=True)\n\ndisplay(ca_3.parameters)\nprint('rmse:', ca_3.rmse())\n\nhw1 = ml_2.head(r, 0, t1)\nhw2 = ml_2.head(0, 0, t2)\nplt.figure(figsize = (8, 5))\nplt.semilogx(t1, hw1[0], label = 'ttim1')\nplt.semilogx(t1, h1, '.', label = 'obs1')\nplt.semilogx(t2, hw2[0], label = 'ttim3')\nplt.semilogx(t2, h2, '.', label = 'obs3')\nplt.xlabel('time(d)')\nplt.ylabel('drawdown(m)')\nplt.legend();", "Summary of values simulated by AQTESOLV and MLU\nThe results simulated by different methods with two datasets simultaneously are presented below. In the example of AQTESOLV, result simulated with only observation well is presented. The comparision of results when only observation well is included can be found in the report related to this test.", "t = pd.DataFrame(columns=['k [m/d]', 'Ss [1/m]', 'res'], \\\n index=['MLU', 'AQTESOLV', 'ttim', 'ttim-res&rc'])\nt.loc['MLU'] = [38.094, 1.193E-06, '-']\nt.loc['AQTESOLV'] = [37.803, 1.356E-06, '-']\nt.loc['ttim'] = np.append(ca_2.parameters['optimal'].values, '-')\nt.loc['ttim-res&rc'] = ca_3.parameters['optimal'].values \nt['rc'] = ['-', '-', '-', 0.2]\nt['RMSE'] = [0.259, 0.270, 0.272, 0.192]\nt" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.21/_downloads/f094864c4eeae2b4353a90789dd18b2b/plot_mixed_source_space_inverse.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute MNE inverse solution on evoked data in a mixed source space\nCreate a mixed source space and compute an MNE inverse solution on an\nevoked dataset.", "# Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport matplotlib.pyplot as plt\n\nfrom nilearn import plotting\n\nimport mne\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\n\n# Set dir\ndata_path = mne.datasets.sample.data_path()\nsubject = 'sample'\ndata_dir = op.join(data_path, 'MEG', subject)\nsubjects_dir = op.join(data_path, 'subjects')\nbem_dir = op.join(subjects_dir, subject, 'bem')\n\n# Set file names\nfname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)\nfname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')\n\nfname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)\nfname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)\n\nfname_evoked = data_dir + '/sample_audvis-ave.fif'\nfname_trans = data_dir + '/sample_audvis_raw-trans.fif'\nfname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'\nfname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'", "Set up our source space\nList substructures we are interested in. We select only the\nsub structures we want to include in the source space:", "labels_vol = ['Left-Amygdala',\n 'Left-Thalamus-Proper',\n 'Left-Cerebellum-Cortex',\n 'Brain-Stem',\n 'Right-Amygdala',\n 'Right-Thalamus-Proper',\n 'Right-Cerebellum-Cortex']", "Get a surface-based source space, here with few source points for speed\nin this demonstration, in general you should use oct6 spacing!", "src = mne.setup_source_space(subject, spacing='oct5',\n add_dist=False, subjects_dir=subjects_dir)", "Now we create a mixed src space by adding the volume regions specified in the\nlist labels_vol. 
First, read the aseg file and the source space bounds\nusing the inner skull surface (here using 10mm spacing to save time,\nwe recommend something smaller like 5.0 in actual analyses):", "vol_src = mne.setup_volume_source_space(\n subject, mri=fname_aseg, pos=10.0, bem=fname_model,\n volume_label=labels_vol, subjects_dir=subjects_dir,\n add_interpolator=False, # just for speed, usually this should be True\n verbose=True)\n\n# Generate the mixed source space\nsrc += vol_src\n\n# Visualize the source space.\nsrc.plot(subjects_dir=subjects_dir)\n\nn = sum(src[i]['nuse'] for i in range(len(src)))\nprint('the src space contains %d spaces and %d points' % (len(src), n))", "Viewing the source space\nWe could write the mixed source space with::\n\n\n\nwrite_source_spaces(fname_mixed_src, src, overwrite=True)\n\n\n\nWe can also export source positions to nifti file and visualize it again:", "nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)\nsrc.export_volume(nii_fname, mri_resolution=True, overwrite=True)\nplotting.plot_img(nii_fname, cmap='nipy_spectral')", "Compute the fwd matrix", "fwd = mne.make_forward_solution(\n fname_evoked, fname_trans, src, fname_bem,\n mindist=5.0, # ignore sources<=5mm from innerskull\n meg=True, eeg=False, n_jobs=1)\n\nleadfield = fwd['sol']['data']\nprint(\"Leadfield size : %d sensors x %d dipoles\" % leadfield.shape)\n\nsrc_fwd = fwd['src']\nn = sum(src_fwd[i]['nuse'] for i in range(len(src_fwd)))\nprint('the fwd src space contains %d spaces and %d points' % (len(src_fwd), n))\n\n# Load data\ncondition = 'Left Auditory'\nevoked = mne.read_evokeds(fname_evoked, condition=condition,\n baseline=(None, 0))\nnoise_cov = mne.read_cov(fname_cov)", "Compute inverse solution", "snr = 3.0 # use smaller SNR for raw data\ninv_method = 'dSPM' # sLORETA, MNE, dSPM\nparc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'\nloose = dict(surface=0.2, volume=1.)\n\nlambda2 = 1.0 / snr ** 2\n\ninverse_operator = make_inverse_operator(\n evoked.info, fwd, noise_cov, depth=None, loose=loose, verbose=True)\n\nstc = apply_inverse(evoked, inverse_operator, lambda2, inv_method,\n pick_ori=None)\nsrc = inverse_operator['src']", "Plot the mixed source estimate", "initial_time = 0.1\nstc_vec = apply_inverse(evoked, inverse_operator, lambda2, inv_method,\n pick_ori='vector')\nbrain = stc_vec.plot(\n hemi='both', src=inverse_operator['src'], views='coronal',\n initial_time=initial_time, subjects_dir=subjects_dir)", "Plot the surface", "brain = stc.surface().plot(initial_time=initial_time,\n subjects_dir=subjects_dir)", "Plot the volume", "fig = stc.volume().plot(initial_time=initial_time, src=src,\n subjects_dir=subjects_dir)", "Process labels\nAverage the source estimates within each label of the cortical parcellation\nand each sub structure contained in the src space", "# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi\nlabels_parc = mne.read_labels_from_annot(\n subject, parc=parc, subjects_dir=subjects_dir)\n\nlabel_ts = mne.extract_label_time_course(\n [stc], labels_parc, src, mode='mean', allow_empty=True)\n\n# plot the times series of 2 labels\nfig, axes = plt.subplots(1)\naxes.plot(1e3 * stc.times, label_ts[0][0, :], 'k', label='bankssts-lh')\naxes.plot(1e3 * stc.times, label_ts[0][71, :].T, 'r', label='Brain-stem')\naxes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')\naxes.legend()\nmne.viz.tight_layout()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ecabreragranado/OpticaFisicaII
Experimento de Young/.ipynb_checkpoints/Trenes de Onda-checkpoint.ipynb
gpl-3.0
[ "Experimento de Young con trenes de ondas\nConsideraciones iniciales", "from IPython.display import Image\nImage(filename=\"EsquemaYoung.png\")", "Cuando estudiamos el experimento de Young, asumimos que iluminábamos con radiación monocromática. En este caso, la posición de los máximos y mínimos de irradiancia venían dados por, \n<div class=\"alert alert-error\">\nMáximos de irradiancia. $\\delta = 2 m \\pi \\implies \\frac{a x}{D} = m \\implies$\n\n$$x_{max} = \\frac{m \\lambda D}{a}$$\n</div>\n<div class=\"alert alert-error\">\nMínimos de irradiancia. $\\delta = (2 m + 1)\\pi \\implies \\frac{a x}{D} = (m +1/2) \\implies$\n\n$$x_{min} = \\frac{(m + 1/2) \\lambda D}{a}$$\n</div>\n\nComo vemos en las fórmulas anteriores, la posición de los máximos y mínimos dependen de la longitud de onda $\\lambda$. Cuando iluminamos el mismo experimento con radiación no monocromática, podemos considerar que cada longitud de onda que compone el espectro de la radiación forma su patrón de interferencias. Pero cada patrón de interferencias tendrá los máximos en posiciones ligeramente distintas. Esto va a llevar a una reducción del constraste y finalmente, a la desaparición de las franjas de interferencia. Vamos a estudiar este proceso con más detalle, viendo primero una mejor aproximación que una onda monocromática a la radiación que emiten la fuentes de luz reales, y posteriormente, cómo afecta este tipo de radiación a las interferencias en un experimento de Young\nTrenes de onda\nAunque la abstracción de tratar una onda monocromática es extremadamente útil, las fuentes de luz reales no emiten tal radiación. La razón es sencilla: una onda monocromática pura (es decir, un seno o un coseno) no tiene ni principio ni final, por lo que para emitir una onda de este tipo se necesitaría energía infinita.\nLo más próximo que podemos obtener a una onda monocromática es una sucesión de trenes de ondas armónicos separados unos de otros por saltos aleatorios en la fase de la onda. \nEl siguiente código muestra un ejemplo de este tipo de trenes de onda.", "import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\nplt.style.use('fivethirtyeight')\n#import ipywidgets as widg\n#from IPython.display import display\n\n#####\n#PARÁMETROS QUE SE PUEDEN MODIFICAR\n#####\nLambda = 5e-7\nc0 = 3e8\nomega = 2*np.pi*c0/Lambda\nT = 2*np.pi/omega\ntau = 2*T\n###########\n\n\ntime = np.linspace(0,18*T,600)\n\ndef campo(t,w,tau0):\n numsaltos = (int)(np.floor(t[-1]/tau0))\n phi = (np.random.random(numsaltos)-0.5)*4*np.pi\n phi_aux = np.array([np.ones(np.size(t)/numsaltos)*phi[i] for i in range(numsaltos)])\n phi_t = np.reshape(phi_aux,np.shape(phi_aux)[0]*np.shape(phi_aux)[1])\n phi_t = np.pad(phi_t,(np.size(t)-np.size(phi_t),0),mode='edge')\n e1 = np.cos(omega*t + phi_t)\n fig,ax = plt.subplots(1,1,figsize=(8,4))\n ax.plot(t,e1)\n ax.set_xlabel('Tiempo (s)')\n ax.set_ylabel('Campo (u.a.)')\n return None\n\ncampo(time,omega,tau)", "Longitud de coherencia\nEl tiempo en el que la fase de la onda permanece constante (tiempo entre saltos consecutivos) se llama tiempo de coherencia y nosotros lo denominaremos $t_c$. \nSi observamos el tren de ondas espacialmente, veremos una figura similar a la anterior, es decir, una figura sinusoidal con un periodo igual a la longitud de onda $\\lambda$ y con saltos de fase cada cierta distancia. A esta distancia se le denomina longitud de coherencia ($l_c$) y se relaciona con el tiempo de coherencia mediante la relación, \n$$l_c = c t_c$$\ndonde $c$ es la velocidad de la luz. 
\nAnchura espectral\nUn tren de ondas deja de ser una radiación completamente monocromática, es decir, con una única longitud de onda o frecuencia, pasando a tener una cierta anchura espectral. Lo podemos entender observando que un tren de ondas deja de ser un coseno o un seno debido a esos saltos de fase aleatorios, pasando a tener una evolución temporal más compleja. \nLa anchura en frecuencias (o longitudes de onda) de un tren de ondas la podemos hallar mediante la transformada de Fourier. Este análisis queda fuera del objeto de este curso pero sí nos vas a resultar útil un resultado que emerge de esta transformada: la relación entre anchura espectral (rango de frecuencias presentes en la radiación $\\Delta \\nu$) y tiempo de coherencia. Esta relación es, \n$$t_c \\simeq \\frac{1}{\\Delta \\nu}$$\nTeniendo en cuenta que $\\nu = c/\\lambda$ podemos llegar a la relación entre la longitud de coherencia y la anchura espectral expresada en longitudes de onda, \n$$l_c \\simeq \\frac{\\lambda^2}{\\Delta \\lambda}$$\nLa anterior relación nos dice que a mayor longitud de coherencia, menor anchura espectral de la radiación, o lo que es lo mismo, más monocromática será.", "import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\nplt.style.use('fivethirtyeight')\nimport ipywidgets as widg\nfrom IPython.display import display\n\n#####\n#PARÁMETROS QUE SE PUEDEN MODIFICAR\n#####\nLambda = 5e-7\nc0 = 3e8\nomega = 2*np.pi*c0/Lambda\nT = 2*np.pi/omega\ntime = np.linspace(0,30*T,1500)\ntau = 2*T\n###########\ndef campofft(t,w,tau0):\n numsaltos = (int)(np.floor(t[-1]/tau0))\n phi = (np.random.random(numsaltos)-0.5)*4*np.pi\n phi_aux = np.array([np.ones(np.size(t)/numsaltos)*phi[i] for i in range(numsaltos)])\n phi_t = np.reshape(phi_aux,np.shape(phi_aux)[0]*np.shape(phi_aux)[1])\n phi_t = np.pad(phi_t,(np.size(t)-np.size(phi_t),0),mode='edge')\n e1 = np.cos(omega*t + phi_t)\n fig1,ax1 = plt.subplots(1,2,figsize=(10,4))\n ax1[0].plot(t,e1)\n ax1[0].set_title('Campo')\n ax1[0].set_xlabel('tiempo (s)')\n ax1[1].set_ylabel('E(t)')\n freq = np.fft.fftfreq(t.shape[0],t[1]-t[0])\n e1fft = np.fft.fft(e1)\n ax1[1].plot(freq,np.abs(e1fft)**2)\n ax1[1].set_xlim(0,0.1e16)\n ax1[1].set_title('Espectro del campo')\n ax1[1].vlines(omega/(2*np.pi),0,np.max(np.abs(e1fft)**2),'k')\n return \n\ncampofft(time,omega,tau)\n", "¿Qué ocurre si iluminamos el experimento de Young con este tipo de radiación?\nSi iluminamos una doble rendija con un tren de ondas como el representado anteriormente, tendremos dos ondas llegando a un cierto punto de la pantalla con la misma evolución temporal pero una de ellas retrasada con respecto a la otra. Esto es debido a la diferencia de camino óptico recorrido por cada tren de onda. \nCuando superponemos ambos trenes (uno con un cierto retraso con respecto al otro), la diferencia entre las fases iniciales de cada onda dependerá del tiempo. Además, como los saltos de fase en el tren de ondas son aleatorios, esa diferencia de fase cambiara a su vez aleatoriamente. El siguiente codigo muestra esta diferencia.\nEsta diferencia aleatoria tiene un gran efecto en la irradiancia total del patron de interferencias.\nRecordemos que la intensidad total viene dada por:\n$$ I_t = I_1 + I_2 + \\epsilon_0 c n < E_1 E_2>_{\\tau}$$\nEn la anterior expresion hemos dejado explícitamente en el término interferencial el promedio sobre el producto escalar de los campos que interfieren. 
Este producto escalar nos da lugar a, \n$$\\int_0^\\tau \\cos(k_1 r - \\omega t + \\phi_1) \\cos(k_2 r - \\omega t + \\phi_2) dt$$\nque podemos escribir en función de la diferencia de fases iniciales $\\phi_1 - \\phi_2$. Si esta diferencia varía aleatoriamente durante el intervalo de tiempo $\\tau$, su promedio sera nulo y el termino interferencial tambien lo sera. Por tanto la irradiancia total sera, \n$$I_t = I_1 + I_2$$\nEs decir, se pierden las franjas de interferencia. Esta situación ocurrirá cuando la diferencia de camino sea suficiente como para que no se solapen las zonas de los trenes de ondas que interfieren con la misma fase. Desde el centro de la pantalla (diferencia de fase igual a cero entre las ondas que interfieren) veremos entonces cómo las franjas se van perdiendo gradualmente a medida que nos alejamos a puntos exteriores (el contraste disminuye progresivamente) hasta que se pierden por completo (contraste igual a cero). En este punto, la irradiancia total será simplemente la suma de las irradiancias de los haces que interfieren.\nEl punto en el que las franjas se pierden por completo será, como se ha comentado, aquel que haga que no haya solapamiento entre las zonas de los trenes de ondas con la misma fase. Es decir, la diferencia de camino ha de ser mayor que la distancia característica de cada una de estas zonas. Esta distancia es simplemente la longitud de coherencia. Por tanto, perderemos la interferencia si, \n$$\\Delta > l_c$$\ndonde $\\Delta$ denota la diferencia de camino entre los haces.\nEl siguiente código muestra el patrón de interferencias cuando iluminamos el experimento de Young con un tren de ondas.", "from matplotlib.pyplot import *\nfrom numpy import *\n%matplotlib inline\nstyle.use('fivethirtyeight')\n\n#####\n#PARÁMETROS QUE SE PUEDEN MODIFICAR\n#####\nLambda = 5e-7 # longitud de onda de la radiación de 500 nm\nk = 2.0*pi/Lambda\nD = 3.5# en metros\na = 0.003 # separación entre fuentes de 3 mm\nDeltaLambda = 7e-8 # anchura espectral\n###########\n\nlc = (Lambda**2)/DeltaLambda\ninterfranja = Lambda*D/a\nprint (\"interfranja\",interfranja*1e3, \"(mm)\") # muestra el valor de la interfranja en mm\nprint( \"longitud de coherencia\", lc*1e6, \"(um)\") #muestra el valor de la long. de coherencia en um\n\nx = linspace(-10*interfranja,10*interfranja,500)\nI1 = 1 # Consideramos irradiancias normalizadas a un cierto valor.\nI2 = 1\n\nX,Y = meshgrid(x,x)\nDelta = a*X/D\ndelta = k*Delta\n\n\n\ngamma12 = (1 - np.abs(Delta)/lc)*(np.abs(Delta)<lc)\nItotal = I1 + I2 + 2.0*sqrt(I1*I2)*gamma12*cos(delta)\n\nfigure(figsize=(14,5))\nsubplot(121)\npcolormesh(x*1e3,x*1e3,Itotal,cmap = 'gray',vmin=0,vmax=4)\nxlabel(\"x (mm)\")\nylabel(\"y (mm)\")\nsubplot(122)\nplot(x*1e3,Itotal[(int)(x.shape[0]/2),:])\nxlim(-5,5)\nylim(0,4)\nxlabel(\"x (mm)\")\nylabel(\"Irradiancia total normalizada\");" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AnyBody-Research-Group/AnyPyTools
docs/Tutorial/01_Getting_started_with_anypytools.ipynb
mit
[ "Getting Started with AnyPyTools\nRunning a simple macro\n<img src=\"Tutorial_files/knee.gif\" alt=\"Drawing\" align=\"Right\" width=120 />\nFor the sake of the tutorial we will use a small 'toy' model of a simplified knee joint (see the figure.) The model is defined in the file Knee.any, which is placed in the current working directory.\nNext, let us run the model from python. First, we import the AnyPyProcess class and create an instance of the class.", "from anypytools import AnyPyProcess \napp = AnyPyProcess()", "Next, we need to instruct the AnyBody Modelling System to load the and run the model. We do this using AnyScript macro commands. These are short commands that can automate operations in the AnyBody Modeling System (AMS). Operation that are normally done by pointing and clicking in the AMS graphical user interface. \nYou can read more on AnyScript macros in the \"User Interface Features\" tutorial that accompanies the AnyBody Modeling System.\nNow we define an AnyScript macro that we want to run on the model.\nload \"Knee.any\"\noperation Main.MyStudy.Kinematics\nrun\nThe macro will command AnyBody to load the model and run the Kinematics operation. \nThe macro is executed by parsing it to the start_macro() method of the AnyPyProcess object.", "macrolist = [\n 'load \"Knee.any\"',\n 'operation Main.MyStudy.Kinematics',\n 'run',\n]\n\napp.start_macro(macrolist);", "Running multiple macros\nIt is easy to run multiple macros by adding an extra set of macro commands to the macro list.", "macrolist = [\n ['load \"Knee.any\"',\n 'operation Main.MyStudy.Kinematics',\n 'run'],\n ['load \"Knee.any\"',\n 'operation Main.MyStudy.InverseDynamics',\n 'run'],\n]\napp.start_macro(macrolist);", "Parallel execution\nNotice that AnyPyProcess will run the anyscript macros in parallel. Modern computers have multiple cores, but a single AnyBody instance can only utilize a single core, leaving us with a great potential for speeding things up through parallelization.\nTo test this, let us create ten macros in a for-loop.", "macrolist = []\nfor i in range(40):\n macro = [\n 'load \"Knee.any\"', \n 'operation Main.MyStudy.InverseDynamics',\n 'run',\n ]\n macrolist.append(macro)", "AnyPyProcess has a parameter 'num_processes' that controls the number of parallel processes. Let us try a small example to see the difference in speed:", "# First sequentially\napp = AnyPyProcess(num_processes = 1)\napp.start_macro(macrolist);\n\n# Then with parallization\napp = AnyPyProcess(num_processes = 4)\napp.start_macro(macrolist);", "Note: In general you should not user a num_processes larger than the number of cores in your computer.\nGetting data from the AnyBody Model\nIn the following macro, we have added a new class operation to 'Dump' the result of the maximum muscle activity. The start_macro method will return all the dumped variables:", "import numpy as np\nmacrolist = [\n 'load \"Knee.any\"',\n 'operation Main.MyStudy.InverseDynamics',\n 'run',\n 'classoperation Main.MyStudy.Output.MaxMuscleActivity \"Dump\"',\n] \n\nresults = app.start_macro(macrolist)", "We can export more variables by adding more classoperation. But there is a better way of doing this, as we shall see in the next tutorials. \nFinally, to make a plot we import the matplotlib library, and enable inline figures.", "max_muscle_act = results[0]['Main.MyStudy.Output.MaxMuscleActivity']\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(max_muscle_act);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mairas/delta_calibration
delta_calibration.ipynb
mit
[ "Delta printer geometry calibration using bed auto-leveling\nCopyright (c) 2015 Matti Airas\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\nInitial setup", "from mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator, FormatStrFormatter\nimport matplotlib.pyplot as plt\n\nimport numpy as np\nfrom scipy.optimize import leastsq, minimize\n\n%matplotlib inline", "Import calibration data. The calibration data should be acquired by first running commands \"G28\" and then \"G29 V3\" on the printer. To increase the amount of points, you can set AUTO_BED_LEVELING_GRID_POINTS to some larger value. I used 25.", "in_data = \"\"\"\n< 21:10:41: Bed X: 0.000 Y: -70.000 Z: 1.370\n< 21:10:42: Bed X: 25.000 Y: -65.000 Z: 1.280\n< 21:10:42: Bed X: 20.000 Y: -65.000 Z: 1.390\n< 21:10:43: Bed X: 15.000 Y: -65.000 Z: 1.430\n< 21:10:43: Bed X: 10.000 Y: -65.000 Z: 1.500\n< 21:10:44: Bed X: 5.000 Y: -65.000 Z: 1.530\n< 21:10:44: Bed X: 0.000 Y: -65.000 Z: 1.590\n< 21:10:45: Bed X: -5.000 Y: -65.000 Z: 1.610\n< 21:10:45: Bed X: -10.000 Y: -65.000 Z: 1.640\n< 21:10:46: Bed X: -15.000 Y: -65.000 Z: 1.610\n< 21:10:46: Bed X: -20.000 Y: -65.000 Z: 1.580\n< 21:10:47: Bed X: -25.000 Y: -65.000 Z: 1.520\n< 21:10:48: Bed X: -35.000 Y: -60.000 Z: 1.420\n< 21:10:48: Bed X: -30.000 Y: -60.000 Z: 1.510\n< 21:10:49: Bed X: -25.000 Y: -60.000 Z: 1.590\n< 21:10:49: Bed X: -20.000 Y: -60.000 Z: 1.620\n< 21:10:50: Bed X: -15.000 Y: -60.000 Z: 1.660\n< 21:10:50: Bed X: -10.000 Y: -60.000 Z: 1.680\n< 21:10:51: Bed X: -5.000 Y: -60.000 Z: 1.660\n< 21:10:51: Bed X: 0.000 Y: -60.000 Z: 1.660\n< 21:10:52: Bed X: 5.000 Y: -60.000 Z: 1.660\n< 21:10:52: Bed X: 10.000 Y: -60.000 Z: 1.620\n< 21:10:53: Bed X: 15.000 Y: -60.000 Z: 1.570\n< 21:10:53: Bed X: 20.000 Y: -60.000 Z: 1.510\n< 21:10:54: Bed X: 25.000 Y: -60.000 Z: 1.430\n< 21:10:54: Bed X: 30.000 Y: -60.000 Z: 1.350\n< 21:10:55: Bed X: 35.000 Y: -60.000 Z: 1.260\n< 21:10:56: Bed X: 40.000 Y: -55.000 Z: 1.350\n< 21:10:56: Bed X: 35.000 Y: -55.000 Z: 1.360\n< 21:10:57: Bed X: 30.000 Y: -55.000 Z: 1.450\n< 21:10:57: Bed X: 25.000 Y: -55.000 Z: 1.560\n< 21:10:58: Bed X: 20.000 Y: -55.000 Z: 1.620\n< 21:10:58: Bed X: 15.000 Y: -55.000 Z: 1.670\n< 21:10:59: Bed X: 10.000 Y: -55.000 Z: 1.710\n< 21:10:59: Bed X: 5.000 Y: -55.000 Z: 1.730\n< 21:11:00: Bed X: 0.000 Y: -55.000 Z: 1.740\n< 21:11:00: Bed X: -5.000 Y: -55.000 Z: 1.740\n< 21:11:01: Bed X: -10.000 Y: -55.000 Z: 1.740\n< 21:11:01: Bed X: -15.000 Y: -55.000 Z: 1.730\n< 21:11:02: Bed X: -20.000 Y: -55.000 Z: 1.700\n< 
21:11:02: Bed X: -25.000 Y: -55.000 Z: 1.650\n< 21:11:03: Bed X: -30.000 Y: -55.000 Z: 1.580\n< 21:11:04: Bed X: -35.000 Y: -55.000 Z: 1.510\n< 21:11:04: Bed X: -40.000 Y: -55.000 Z: 1.430\n< 21:11:05: Bed X: -45.000 Y: -50.000 Z: 1.430\n< 21:11:05: Bed X: -40.000 Y: -50.000 Z: 1.480\n< 21:11:06: Bed X: -35.000 Y: -50.000 Z: 1.540\n< 21:11:06: Bed X: -30.000 Y: -50.000 Z: 1.620\n< 21:11:07: Bed X: -25.000 Y: -50.000 Z: 1.690\n< 21:11:07: Bed X: -20.000 Y: -50.000 Z: 1.740\n< 21:11:08: Bed X: -15.000 Y: -50.000 Z: 1.780\n< 21:11:08: Bed X: -10.000 Y: -50.000 Z: 1.790\n< 21:11:09: Bed X: -5.000 Y: -50.000 Z: 1.830\n< 21:11:09: Bed X: 0.000 Y: -50.000 Z: 1.840\n< 21:11:10: Bed X: 5.000 Y: -50.000 Z: 1.840\n< 21:11:10: Bed X: 10.000 Y: -50.000 Z: 1.820\n< 21:11:11: Bed X: 15.000 Y: -50.000 Z: 1.780\n< 21:11:12: Bed X: 20.000 Y: -50.000 Z: 1.720\n< 21:11:12: Bed X: 25.000 Y: -50.000 Z: 1.680\n< 21:11:13: Bed X: 30.000 Y: -50.000 Z: 1.590\n< 21:11:13: Bed X: 35.000 Y: -50.000 Z: 1.500\n< 21:11:14: Bed X: 40.000 Y: -50.000 Z: 1.440\n< 21:11:14: Bed X: 45.000 Y: -50.000 Z: 1.370\n< 21:11:15: Bed X: 50.000 Y: -45.000 Z: 1.490\n< 21:11:15: Bed X: 45.000 Y: -45.000 Z: 1.500\n< 21:11:16: Bed X: 40.000 Y: -45.000 Z: 1.550\n< 21:11:16: Bed X: 35.000 Y: -45.000 Z: 1.630\n< 21:11:17: Bed X: 30.000 Y: -45.000 Z: 1.700\n< 21:11:18: Bed X: 25.000 Y: -45.000 Z: 1.790\n< 21:11:18: Bed X: 20.000 Y: -45.000 Z: 1.840\n< 21:11:19: Bed X: 15.000 Y: -45.000 Z: 1.900\n< 21:11:19: Bed X: 10.000 Y: -45.000 Z: 1.910\n< 21:11:20: Bed X: 5.000 Y: -45.000 Z: 1.930\n< 21:11:20: Bed X: 0.000 Y: -45.000 Z: 1.910\n< 21:11:21: Bed X: -5.000 Y: -45.000 Z: 1.920\n< 21:11:21: Bed X: -10.000 Y: -45.000 Z: 1.880\n< 21:11:22: Bed X: -15.000 Y: -45.000 Z: 1.870\n< 21:11:22: Bed X: -20.000 Y: -45.000 Z: 1.810\n< 21:11:23: Bed X: -25.000 Y: -45.000 Z: 1.750\n< 21:11:23: Bed X: -30.000 Y: -45.000 Z: 1.690\n< 21:11:24: Bed X: -35.000 Y: -45.000 Z: 1.620\n< 21:11:24: Bed X: -40.000 Y: -45.000 Z: 1.520\n< 21:11:25: Bed X: -45.000 Y: -45.000 Z: 1.460\n< 21:11:26: Bed X: -50.000 Y: -45.000 Z: 1.410\n< 21:11:26: Bed X: -55.000 Y: -40.000 Z: 1.360\n< 21:11:27: Bed X: -50.000 Y: -40.000 Z: 1.440\n< 21:11:27: Bed X: -45.000 Y: -40.000 Z: 1.530\n< 21:11:28: Bed X: -40.000 Y: -40.000 Z: 1.580\n< 21:11:28: Bed X: -35.000 Y: -40.000 Z: 1.670\n< 21:11:29: Bed X: -30.000 Y: -40.000 Z: 1.750\n< 21:11:29: Bed X: -25.000 Y: -40.000 Z: 1.820\n< 21:11:30: Bed X: -20.000 Y: -40.000 Z: 1.880\n< 21:11:30: Bed X: -15.000 Y: -40.000 Z: 1.920\n< 21:11:31: Bed X: -10.000 Y: -40.000 Z: 1.950\n< 21:11:31: Bed X: -5.000 Y: -40.000 Z: 2.010\n< 21:11:32: Bed X: 0.000 Y: -40.000 Z: 2.010\n< 21:11:32: Bed X: 5.000 Y: -40.000 Z: 2.030\n< 21:11:33: Bed X: 10.000 Y: -40.000 Z: 2.010\n< 21:11:34: Bed X: 15.000 Y: -40.000 Z: 2.010\n< 21:11:34: Bed X: 20.000 Y: -40.000 Z: 1.960\n< 21:11:35: Bed X: 25.000 Y: -40.000 Z: 1.920\n< 21:11:35: Bed X: 30.000 Y: -40.000 Z: 1.820\n< 21:11:36: Bed X: 35.000 Y: -40.000 Z: 1.770\n< 21:11:36: Bed X: 40.000 Y: -40.000 Z: 1.670\n< 21:11:37: Bed X: 45.000 Y: -40.000 Z: 1.610\n< 21:11:37: Bed X: 50.000 Y: -40.000 Z: 1.640\n< 21:11:38: Bed X: 50.000 Y: -35.000 Z: 1.740\n< 21:11:38: Bed X: 45.000 Y: -35.000 Z: 1.750\n< 21:11:39: Bed X: 40.000 Y: -35.000 Z: 1.810\n< 21:11:39: Bed X: 35.000 Y: -35.000 Z: 1.870\n< 21:11:40: Bed X: 30.000 Y: -35.000 Z: 1.940\n< 21:11:41: Bed X: 25.000 Y: -35.000 Z: 2.010\n< 21:11:41: Bed X: 20.000 Y: -35.000 Z: 2.060\n< 21:11:42: Bed X: 15.000 Y: -35.000 Z: 2.100\n< 21:11:42: Bed X: 10.000 Y: -35.000 Z: 2.110\n< 
21:11:43: Bed X: 5.000 Y: -35.000 Z: 2.100\n< 21:11:43: Bed X: 0.000 Y: -35.000 Z: 2.090\n< 21:11:44: Bed X: -5.000 Y: -35.000 Z: 2.090\n< 21:11:44: Bed X: -10.000 Y: -35.000 Z: 2.030\n< 21:11:45: Bed X: -15.000 Y: -35.000 Z: 2.020\n< 21:11:45: Bed X: -20.000 Y: -35.000 Z: 1.950\n< 21:11:46: Bed X: -25.000 Y: -35.000 Z: 1.910\n< 21:11:46: Bed X: -30.000 Y: -35.000 Z: 1.830\n< 21:11:47: Bed X: -35.000 Y: -35.000 Z: 1.760\n< 21:11:47: Bed X: -40.000 Y: -35.000 Z: 1.660\n< 21:11:48: Bed X: -45.000 Y: -35.000 Z: 1.570\n< 21:11:49: Bed X: -50.000 Y: -35.000 Z: 1.510\n< 21:11:49: Bed X: -55.000 Y: -35.000 Z: 1.450\n< 21:11:50: Bed X: -60.000 Y: -35.000 Z: 1.350\n< 21:11:50: Bed X: -60.000 Y: -30.000 Z: 1.380\n< 21:11:51: Bed X: -55.000 Y: -30.000 Z: 1.500\n< 21:11:51: Bed X: -50.000 Y: -30.000 Z: 1.540\n< 21:11:52: Bed X: -45.000 Y: -30.000 Z: 1.640\n< 21:11:52: Bed X: -40.000 Y: -30.000 Z: 1.720\n< 21:11:53: Bed X: -35.000 Y: -30.000 Z: 1.830\n< 21:11:53: Bed X: -30.000 Y: -30.000 Z: 1.920\n< 21:11:54: Bed X: -25.000 Y: -30.000 Z: 2.000\n< 21:11:55: Bed X: -20.000 Y: -30.000 Z: 2.050\n< 21:11:55: Bed X: -15.000 Y: -30.000 Z: 2.110\n< 21:11:56: Bed X: -10.000 Y: -30.000 Z: 2.150\n< 21:11:56: Bed X: -5.000 Y: -30.000 Z: 2.210\n< 21:11:57: Bed X: 0.000 Y: -30.000 Z: 2.200\n< 21:11:57: Bed X: 5.000 Y: -30.000 Z: 2.220\n< 21:11:58: Bed X: 10.000 Y: -30.000 Z: 2.230\n< 21:11:58: Bed X: 15.000 Y: -30.000 Z: 2.230\n< 21:11:59: Bed X: 20.000 Y: -30.000 Z: 2.200\n< 21:11:59: Bed X: 25.000 Y: -30.000 Z: 2.120\n< 21:12:00: Bed X: 30.000 Y: -30.000 Z: 2.080\n< 21:12:00: Bed X: 35.000 Y: -30.000 Z: 1.990\n< 21:12:01: Bed X: 40.000 Y: -30.000 Z: 1.930\n< 21:12:01: Bed X: 45.000 Y: -30.000 Z: 1.890\n< 21:12:02: Bed X: 50.000 Y: -30.000 Z: 1.870\n< 21:12:03: Bed X: 50.000 Y: -25.000 Z: 2.000\n< 21:12:03: Bed X: 45.000 Y: -25.000 Z: 1.990\n< 21:12:04: Bed X: 40.000 Y: -25.000 Z: 2.040\n< 21:12:04: Bed X: 35.000 Y: -25.000 Z: 2.100\n< 21:12:05: Bed X: 30.000 Y: -25.000 Z: 2.170\n< 21:12:05: Bed X: 25.000 Y: -25.000 Z: 2.240\n< 21:12:06: Bed X: 20.000 Y: -25.000 Z: 2.280\n< 21:12:06: Bed X: 15.000 Y: -25.000 Z: 2.280\n< 21:12:07: Bed X: 10.000 Y: -25.000 Z: 2.310\n< 21:12:07: Bed X: 5.000 Y: -25.000 Z: 2.300\n< 21:12:08: Bed X: 0.000 Y: -25.000 Z: 2.280\n< 21:12:08: Bed X: -5.000 Y: -25.000 Z: 2.260\n< 21:12:09: Bed X: -10.000 Y: -25.000 Z: 2.210\n< 21:12:09: Bed X: -15.000 Y: -25.000 Z: 2.170\n< 21:12:10: Bed X: -20.000 Y: -25.000 Z: 2.120\n< 21:12:11: Bed X: -25.000 Y: -25.000 Z: 2.060\n< 21:12:11: Bed X: -30.000 Y: -25.000 Z: 1.990\n< 21:12:12: Bed X: -35.000 Y: -25.000 Z: 1.900\n< 21:12:12: Bed X: -40.000 Y: -25.000 Z: 1.810\n< 21:12:13: Bed X: -45.000 Y: -25.000 Z: 1.720\n< 21:12:13: Bed X: -50.000 Y: -25.000 Z: 1.640\n< 21:12:14: Bed X: -55.000 Y: -25.000 Z: 1.560\n< 21:12:14: Bed X: -60.000 Y: -25.000 Z: 1.450\n< 21:12:15: Bed X: -65.000 Y: -25.000 Z: 1.350\n< 21:12:16: Bed X: -65.000 Y: -20.000 Z: 1.420\n< 21:12:16: Bed X: -60.000 Y: -20.000 Z: 1.520\n< 21:12:17: Bed X: -55.000 Y: -20.000 Z: 1.630\n< 21:12:17: Bed X: -50.000 Y: -20.000 Z: 1.710\n< 21:12:18: Bed X: -45.000 Y: -20.000 Z: 1.770\n< 21:12:18: Bed X: -40.000 Y: -20.000 Z: 1.880\n< 21:12:19: Bed X: -35.000 Y: -20.000 Z: 2.000\n< 21:12:19: Bed X: -30.000 Y: -20.000 Z: 2.070\n< 21:12:20: Bed X: -25.000 Y: -20.000 Z: 2.130\n< 21:12:20: Bed X: -20.000 Y: -20.000 Z: 2.200\n< 21:12:21: Bed X: -15.000 Y: -20.000 Z: 2.260\n< 21:12:21: Bed X: -10.000 Y: -20.000 Z: 2.300\n< 21:12:22: Bed X: -5.000 Y: -20.000 Z: 2.350\n< 21:12:22: Bed X: 0.000 Y: -20.000 Z: 
2.380\n< 21:12:23: Bed X: 5.000 Y: -20.000 Z: 2.400\n< 21:12:24: Bed X: 10.000 Y: -20.000 Z: 2.390\n< 21:12:24: Bed X: 15.000 Y: -20.000 Z: 2.400\n< 21:12:25: Bed X: 20.000 Y: -20.000 Z: 2.370\n< 21:12:25: Bed X: 25.000 Y: -20.000 Z: 2.330\n< 21:12:26: Bed X: 30.000 Y: -20.000 Z: 2.280\n< 21:12:26: Bed X: 35.000 Y: -20.000 Z: 2.220\n< 21:12:27: Bed X: 40.000 Y: -20.000 Z: 2.170\n< 21:12:27: Bed X: 45.000 Y: -20.000 Z: 2.140\n< 21:12:28: Bed X: 50.000 Y: -20.000 Z: 2.130\n< 21:12:28: Bed X: 50.000 Y: -15.000 Z: 2.220\n< 21:12:29: Bed X: 45.000 Y: -15.000 Z: 2.270\n< 21:12:30: Bed X: 40.000 Y: -15.000 Z: 2.250\n< 21:12:30: Bed X: 35.000 Y: -15.000 Z: 2.310\n< 21:12:31: Bed X: 30.000 Y: -15.000 Z: 2.380\n< 21:12:31: Bed X: 25.000 Y: -15.000 Z: 2.430\n< 21:12:32: Bed X: 20.000 Y: -15.000 Z: 2.470\n< 21:12:32: Bed X: 15.000 Y: -15.000 Z: 2.500\n< 21:12:33: Bed X: 10.000 Y: -15.000 Z: 2.490\n< 21:12:33: Bed X: 5.000 Y: -15.000 Z: 2.480\n< 21:12:34: Bed X: 0.000 Y: -15.000 Z: 2.440\n< 21:12:34: Bed X: -5.000 Y: -15.000 Z: 2.380\n< 21:12:35: Bed X: -10.000 Y: -15.000 Z: 2.380\n< 21:12:35: Bed X: -15.000 Y: -15.000 Z: 2.330\n< 21:12:36: Bed X: -20.000 Y: -15.000 Z: 2.290\n< 21:12:36: Bed X: -25.000 Y: -15.000 Z: 2.220\n< 21:12:37: Bed X: -30.000 Y: -15.000 Z: 2.150\n< 21:12:38: Bed X: -35.000 Y: -15.000 Z: 2.060\n< 21:12:38: Bed X: -40.000 Y: -15.000 Z: 1.980\n< 21:12:39: Bed X: -45.000 Y: -15.000 Z: 1.880\n< 21:12:39: Bed X: -50.000 Y: -15.000 Z: 1.790\n< 21:12:40: Bed X: -55.000 Y: -15.000 Z: 1.730\n< 21:12:40: Bed X: -60.000 Y: -15.000 Z: 1.630\n< 21:12:41: Bed X: -65.000 Y: -15.000 Z: 1.520\n< 21:12:41: Bed X: -65.000 Y: -10.000 Z: 1.570\n< 21:12:42: Bed X: -60.000 Y: -10.000 Z: 1.670\n< 21:12:43: Bed X: -55.000 Y: -10.000 Z: 1.770\n< 21:12:43: Bed X: -50.000 Y: -10.000 Z: 1.870\n< 21:12:44: Bed X: -45.000 Y: -10.000 Z: 1.940\n< 21:12:44: Bed X: -40.000 Y: -10.000 Z: 2.040\n< 21:12:45: Bed X: -35.000 Y: -10.000 Z: 2.130\n< 21:12:45: Bed X: -30.000 Y: -10.000 Z: 2.220\n< 21:12:46: Bed X: -25.000 Y: -10.000 Z: 2.300\n< 21:12:46: Bed X: -20.000 Y: -10.000 Z: 2.370\n< 21:12:47: Bed X: -15.000 Y: -10.000 Z: 2.410\n< 21:12:47: Bed X: -10.000 Y: -10.000 Z: 2.470\n< 21:12:48: Bed X: -5.000 Y: -10.000 Z: 2.530\n< 21:12:48: Bed X: 0.000 Y: -10.000 Z: 2.540\n< 21:12:49: Bed X: 5.000 Y: -10.000 Z: 2.560\n< 21:12:49: Bed X: 10.000 Y: -10.000 Z: 2.590\n< 21:12:50: Bed X: 15.000 Y: -10.000 Z: 2.600\n< 21:12:50: Bed X: 20.000 Y: -10.000 Z: 2.570\n< 21:12:51: Bed X: 25.000 Y: -10.000 Z: 2.530\n< 21:12:52: Bed X: 30.000 Y: -10.000 Z: 2.480\n< 21:12:52: Bed X: 35.000 Y: -10.000 Z: 2.410\n< 21:12:53: Bed X: 40.000 Y: -10.000 Z: 2.350\n< 21:12:53: Bed X: 45.000 Y: -10.000 Z: 2.340\n< 21:12:54: Bed X: 50.000 Y: -10.000 Z: 2.310\n< 21:12:54: Bed X: 50.000 Y: -5.000 Z: 2.430\n< 21:12:55: Bed X: 45.000 Y: -5.000 Z: 2.410\n< 21:12:55: Bed X: 40.000 Y: -5.000 Z: 2.450\n< 21:12:56: Bed X: 35.000 Y: -5.000 Z: 2.490\n< 21:12:56: Bed X: 30.000 Y: -5.000 Z: 2.560\n< 21:12:57: Bed X: 25.000 Y: -5.000 Z: 2.600\n< 21:12:57: Bed X: 20.000 Y: -5.000 Z: 2.650\n< 21:12:58: Bed X: 15.000 Y: -5.000 Z: 2.660\n< 21:12:59: Bed X: 10.000 Y: -5.000 Z: 2.670\n< 21:12:59: Bed X: 5.000 Y: -5.000 Z: 2.650\n< 21:13:00: Bed X: 0.000 Y: -5.000 Z: 2.630\n< 21:13:00: Bed X: -5.000 Y: -5.000 Z: 2.590\n< 21:13:01: Bed X: -10.000 Y: -5.000 Z: 2.550\n< 21:13:01: Bed X: -15.000 Y: -5.000 Z: 2.490\n< 21:13:02: Bed X: -20.000 Y: -5.000 Z: 2.440\n< 21:13:02: Bed X: -25.000 Y: -5.000 Z: 2.370\n< 21:13:03: Bed X: -30.000 Y: -5.000 Z: 2.300\n< 21:13:03: Bed 
X: -35.000 Y: -5.000 Z: 2.210\n< 21:13:04: Bed X: -40.000 Y: -5.000 Z: 2.110\n< 21:13:04: Bed X: -45.000 Y: -5.000 Z: 2.030\n< 21:13:05: Bed X: -50.000 Y: -5.000 Z: 1.960\n< 21:13:06: Bed X: -55.000 Y: -5.000 Z: 1.870\n< 21:13:06: Bed X: -60.000 Y: -5.000 Z: 1.760\n< 21:13:07: Bed X: -65.000 Y: -5.000 Z: 1.650\n< 21:13:07: Bed X: -70.000 Y: 0.000 Z: 1.610\n< 21:13:08: Bed X: -65.000 Y: 0.000 Z: 1.730\n< 21:13:08: Bed X: -60.000 Y: 0.000 Z: 1.820\n< 21:13:09: Bed X: -55.000 Y: 0.000 Z: 1.920\n< 21:13:09: Bed X: -50.000 Y: 0.000 Z: 2.000\n< 21:13:10: Bed X: -45.000 Y: 0.000 Z: 2.090\n< 21:13:11: Bed X: -40.000 Y: 0.000 Z: 2.180\n< 21:13:11: Bed X: -35.000 Y: 0.000 Z: 2.280\n< 21:13:12: Bed X: -30.000 Y: 0.000 Z: 2.360\n< 21:13:12: Bed X: -25.000 Y: 0.000 Z: 2.460\n< 21:13:13: Bed X: -20.000 Y: 0.000 Z: 2.530\n< 21:13:13: Bed X: -15.000 Y: 0.000 Z: 2.570\n< 21:13:14: Bed X: -10.000 Y: 0.000 Z: 2.650\n< 21:13:14: Bed X: -5.000 Y: 0.000 Z: 2.670\n< 21:13:15: Bed X: 0.000 Y: 0.000 Z: 2.710\n< 21:13:15: Bed X: 5.000 Y: 0.000 Z: 2.740\n< 21:13:16: Bed X: 10.000 Y: 0.000 Z: 2.750\n< 21:13:16: Bed X: 15.000 Y: 0.000 Z: 2.750\n< 21:13:17: Bed X: 20.000 Y: 0.000 Z: 2.750\n< 21:13:17: Bed X: 25.000 Y: 0.000 Z: 2.700\n< 21:13:18: Bed X: 30.000 Y: 0.000 Z: 2.660\n< 21:13:18: Bed X: 35.000 Y: 0.000 Z: 2.590\n< 21:13:19: Bed X: 40.000 Y: 0.000 Z: 2.540\n< 21:13:20: Bed X: 45.000 Y: 0.000 Z: 2.540\n< 21:13:20: Bed X: 50.000 Y: 0.000 Z: 2.530\n< 21:13:21: Bed X: 50.000 Y: 5.000 Z: 2.610\n< 21:13:21: Bed X: 45.000 Y: 5.000 Z: 2.620\n< 21:13:22: Bed X: 40.000 Y: 5.000 Z: 2.640\n< 21:13:22: Bed X: 35.000 Y: 5.000 Z: 2.690\n< 21:13:23: Bed X: 30.000 Y: 5.000 Z: 2.730\n< 21:13:23: Bed X: 25.000 Y: 5.000 Z: 2.790\n< 21:13:24: Bed X: 20.000 Y: 5.000 Z: 2.810\n< 21:13:24: Bed X: 15.000 Y: 5.000 Z: 2.820\n< 21:13:25: Bed X: 10.000 Y: 5.000 Z: 2.830\n< 21:13:25: Bed X: 5.000 Y: 5.000 Z: 2.810\n< 21:13:26: Bed X: 0.000 Y: 5.000 Z: 2.780\n< 21:13:27: Bed X: -5.000 Y: 5.000 Z: 2.750\n< 21:13:27: Bed X: -10.000 Y: 5.000 Z: 2.700\n< 21:13:28: Bed X: -15.000 Y: 5.000 Z: 2.660\n< 21:13:28: Bed X: -20.000 Y: 5.000 Z: 2.600\n< 21:13:29: Bed X: -25.000 Y: 5.000 Z: 2.520\n< 21:13:29: Bed X: -30.000 Y: 5.000 Z: 2.450\n< 21:13:30: Bed X: -35.000 Y: 5.000 Z: 2.360\n< 21:13:30: Bed X: -40.000 Y: 5.000 Z: 2.260\n< 21:13:31: Bed X: -45.000 Y: 5.000 Z: 2.180\n< 21:13:31: Bed X: -50.000 Y: 5.000 Z: 2.090\n< 21:13:32: Bed X: -55.000 Y: 5.000 Z: 2.000\n< 21:13:33: Bed X: -60.000 Y: 5.000 Z: 1.890\n< 21:13:33: Bed X: -65.000 Y: 5.000 Z: 1.800\n< 21:13:34: Bed X: -65.000 Y: 10.000 Z: 1.840\n< 21:13:34: Bed X: -60.000 Y: 10.000 Z: 1.940\n< 21:13:35: Bed X: -55.000 Y: 10.000 Z: 2.050\n< 21:13:35: Bed X: -50.000 Y: 10.000 Z: 2.150\n< 21:13:36: Bed X: -45.000 Y: 10.000 Z: 2.230\n< 21:13:36: Bed X: -40.000 Y: 10.000 Z: 2.310\n< 21:13:37: Bed X: -35.000 Y: 10.000 Z: 2.410\n< 21:13:37: Bed X: -30.000 Y: 10.000 Z: 2.510\n< 21:13:38: Bed X: -25.000 Y: 10.000 Z: 2.600\n< 21:13:38: Bed X: -20.000 Y: 10.000 Z: 2.660\n< 21:13:39: Bed X: -15.000 Y: 10.000 Z: 2.720\n< 21:13:40: Bed X: -10.000 Y: 10.000 Z: 2.780\n< 21:13:40: Bed X: -5.000 Y: 10.000 Z: 2.840\n< 21:13:41: Bed X: 0.000 Y: 10.000 Z: 2.860\n< 21:13:41: Bed X: 5.000 Y: 10.000 Z: 2.910\n< 21:13:42: Bed X: 10.000 Y: 10.000 Z: 2.920\n< 21:13:42: Bed X: 15.000 Y: 10.000 Z: 2.920\n< 21:13:43: Bed X: 20.000 Y: 10.000 Z: 2.900\n< 21:13:43: Bed X: 25.000 Y: 10.000 Z: 2.870\n< 21:13:44: Bed X: 30.000 Y: 10.000 Z: 2.830\n< 21:13:44: Bed X: 35.000 Y: 10.000 Z: 2.790\n< 21:13:45: Bed X: 40.000 Y: 10.000 Z: 
2.750\n< 21:13:45: Bed X: 45.000 Y: 10.000 Z: 2.740\n< 21:13:46: Bed X: 50.000 Y: 10.000 Z: 2.750\n< 21:13:47: Bed X: 50.000 Y: 15.000 Z: 2.860\n< 21:13:47: Bed X: 45.000 Y: 15.000 Z: 2.840\n< 21:13:48: Bed X: 40.000 Y: 15.000 Z: 2.830\n< 21:13:48: Bed X: 35.000 Y: 15.000 Z: 2.860\n< 21:13:49: Bed X: 30.000 Y: 15.000 Z: 2.880\n< 21:13:49: Bed X: 25.000 Y: 15.000 Z: 2.920\n< 21:13:50: Bed X: 20.000 Y: 15.000 Z: 2.950\n< 21:13:50: Bed X: 15.000 Y: 15.000 Z: 2.970\n< 21:13:51: Bed X: 10.000 Y: 15.000 Z: 2.970\n< 21:13:51: Bed X: 5.000 Y: 15.000 Z: 2.970\n< 21:13:52: Bed X: 0.000 Y: 15.000 Z: 2.930\n< 21:13:52: Bed X: -5.000 Y: 15.000 Z: 2.880\n< 21:13:53: Bed X: -10.000 Y: 15.000 Z: 2.840\n< 21:13:53: Bed X: -15.000 Y: 15.000 Z: 2.790\n< 21:13:54: Bed X: -20.000 Y: 15.000 Z: 2.740\n< 21:13:55: Bed X: -25.000 Y: 15.000 Z: 2.650\n< 21:13:55: Bed X: -30.000 Y: 15.000 Z: 2.570\n< 21:13:56: Bed X: -35.000 Y: 15.000 Z: 2.490\n< 21:13:56: Bed X: -40.000 Y: 15.000 Z: 2.380\n< 21:13:57: Bed X: -45.000 Y: 15.000 Z: 2.310\n< 21:13:57: Bed X: -50.000 Y: 15.000 Z: 2.220\n< 21:13:58: Bed X: -55.000 Y: 15.000 Z: 2.110\n< 21:13:58: Bed X: -60.000 Y: 15.000 Z: 2.000\n< 21:13:59: Bed X: -65.000 Y: 15.000 Z: 1.910\n< 21:13:59: Bed X: -65.000 Y: 20.000 Z: 1.920\n< 21:14:00: Bed X: -60.000 Y: 20.000 Z: 2.040\n< 21:14:01: Bed X: -55.000 Y: 20.000 Z: 2.160\n< 21:14:01: Bed X: -50.000 Y: 20.000 Z: 2.280\n< 21:14:02: Bed X: -45.000 Y: 20.000 Z: 2.370\n< 21:14:02: Bed X: -40.000 Y: 20.000 Z: 2.450\n< 21:14:03: Bed X: -35.000 Y: 20.000 Z: 2.530\n< 21:14:03: Bed X: -30.000 Y: 20.000 Z: 2.630\n< 21:14:04: Bed X: -25.000 Y: 20.000 Z: 2.720\n< 21:14:04: Bed X: -20.000 Y: 20.000 Z: 2.780\n< 21:14:05: Bed X: -15.000 Y: 20.000 Z: 2.860\n< 21:14:05: Bed X: -10.000 Y: 20.000 Z: 2.920\n< 21:14:06: Bed X: -5.000 Y: 20.000 Z: 2.970\n< 21:14:06: Bed X: 0.000 Y: 20.000 Z: 3.010\n< 21:14:07: Bed X: 5.000 Y: 20.000 Z: 3.050\n< 21:14:07: Bed X: 10.000 Y: 20.000 Z: 3.050\n< 21:14:08: Bed X: 15.000 Y: 20.000 Z: 3.070\n< 21:14:09: Bed X: 20.000 Y: 20.000 Z: 3.050\n< 21:14:09: Bed X: 25.000 Y: 20.000 Z: 3.020\n< 21:14:10: Bed X: 30.000 Y: 20.000 Z: 2.980\n< 21:14:10: Bed X: 35.000 Y: 20.000 Z: 2.960\n< 21:14:11: Bed X: 40.000 Y: 20.000 Z: 2.920\n< 21:14:11: Bed X: 45.000 Y: 20.000 Z: 2.930\n< 21:14:12: Bed X: 50.000 Y: 20.000 Z: 2.950\n< 21:14:12: Bed X: 50.000 Y: 25.000 Z: 3.080\n< 21:14:13: Bed X: 45.000 Y: 25.000 Z: 3.010\n< 21:14:13: Bed X: 40.000 Y: 25.000 Z: 3.010\n< 21:14:14: Bed X: 35.000 Y: 25.000 Z: 3.020\n< 21:14:15: Bed X: 30.000 Y: 25.000 Z: 3.040\n< 21:14:15: Bed X: 25.000 Y: 25.000 Z: 3.060\n< 21:14:16: Bed X: 20.000 Y: 25.000 Z: 3.090\n< 21:14:16: Bed X: 15.000 Y: 25.000 Z: 3.100\n< 21:14:17: Bed X: 10.000 Y: 25.000 Z: 3.100\n< 21:14:17: Bed X: 5.000 Y: 25.000 Z: 3.080\n< 21:14:18: Bed X: 0.000 Y: 25.000 Z: 3.050\n< 21:14:18: Bed X: -5.000 Y: 25.000 Z: 3.010\n< 21:14:19: Bed X: -10.000 Y: 25.000 Z: 3.030\n< 21:14:19: Bed X: -15.000 Y: 25.000 Z: 2.900\n< 21:14:20: Bed X: -20.000 Y: 25.000 Z: 2.840\n< 21:14:20: Bed X: -25.000 Y: 25.000 Z: 2.780\n< 21:14:21: Bed X: -30.000 Y: 25.000 Z: 2.680\n< 21:14:22: Bed X: -35.000 Y: 25.000 Z: 2.580\n< 21:14:22: Bed X: -40.000 Y: 25.000 Z: 2.500\n< 21:14:23: Bed X: -45.000 Y: 25.000 Z: 2.410\n< 21:14:23: Bed X: -50.000 Y: 25.000 Z: 2.310\n< 21:14:24: Bed X: -55.000 Y: 25.000 Z: 2.230\n< 21:14:24: Bed X: -60.000 Y: 25.000 Z: 2.120\n< 21:14:25: Bed X: -65.000 Y: 25.000 Z: 2.010\n< 21:14:25: Bed X: -60.000 Y: 30.000 Z: 2.150\n< 21:14:26: Bed X: -55.000 Y: 30.000 Z: 2.260\n< 21:14:27: Bed X: 
-50.000 Y: 30.000 Z: 2.360\n< 21:14:27: Bed X: -45.000 Y: 30.000 Z: 2.460\n< 21:14:28: Bed X: -40.000 Y: 30.000 Z: 2.550\n< 21:14:28: Bed X: -35.000 Y: 30.000 Z: 2.640\n< 21:14:29: Bed X: -30.000 Y: 30.000 Z: 2.720\n< 21:14:29: Bed X: -25.000 Y: 30.000 Z: 2.810\n< 21:14:30: Bed X: -20.000 Y: 30.000 Z: 2.890\n< 21:14:30: Bed X: -15.000 Y: 30.000 Z: 2.960\n< 21:14:31: Bed X: -10.000 Y: 30.000 Z: 3.030\n< 21:14:31: Bed X: -5.000 Y: 30.000 Z: 3.070\n< 21:14:32: Bed X: 0.000 Y: 30.000 Z: 3.110\n< 21:14:32: Bed X: 5.000 Y: 30.000 Z: 3.150\n< 21:14:33: Bed X: 10.000 Y: 30.000 Z: 3.150\n< 21:14:33: Bed X: 15.000 Y: 30.000 Z: 3.160\n< 21:14:34: Bed X: 20.000 Y: 30.000 Z: 3.150\n< 21:14:34: Bed X: 25.000 Y: 30.000 Z: 3.130\n< 21:14:35: Bed X: 30.000 Y: 30.000 Z: 3.110\n< 21:14:36: Bed X: 35.000 Y: 30.000 Z: 3.100\n< 21:14:36: Bed X: 40.000 Y: 30.000 Z: 3.090\n< 21:14:37: Bed X: 45.000 Y: 30.000 Z: 3.120\n< 21:14:37: Bed X: 50.000 Y: 30.000 Z: 3.140\n< 21:14:38: Bed X: 50.000 Y: 35.000 Z: 3.210\n< 21:14:38: Bed X: 45.000 Y: 35.000 Z: 3.190\n< 21:14:39: Bed X: 40.000 Y: 35.000 Z: 3.160\n< 21:14:39: Bed X: 35.000 Y: 35.000 Z: 3.140\n< 21:14:40: Bed X: 30.000 Y: 35.000 Z: 3.160\n< 21:14:40: Bed X: 25.000 Y: 35.000 Z: 3.170\n< 21:14:41: Bed X: 20.000 Y: 35.000 Z: 3.180\n< 21:14:41: Bed X: 15.000 Y: 35.000 Z: 3.190\n< 21:14:42: Bed X: 10.000 Y: 35.000 Z: 3.190\n< 21:14:43: Bed X: 5.000 Y: 35.000 Z: 3.170\n< 21:14:43: Bed X: 0.000 Y: 35.000 Z: 3.140\n< 21:14:44: Bed X: -5.000 Y: 35.000 Z: 3.100\n< 21:14:44: Bed X: -10.000 Y: 35.000 Z: 3.060\n< 21:14:45: Bed X: -15.000 Y: 35.000 Z: 3.010\n< 21:14:45: Bed X: -20.000 Y: 35.000 Z: 2.930\n< 21:14:46: Bed X: -25.000 Y: 35.000 Z: 2.850\n< 21:14:46: Bed X: -30.000 Y: 35.000 Z: 2.770\n< 21:14:47: Bed X: -35.000 Y: 35.000 Z: 2.680\n< 21:14:47: Bed X: -40.000 Y: 35.000 Z: 2.600\n< 21:14:48: Bed X: -45.000 Y: 35.000 Z: 2.520\n< 21:14:49: Bed X: -50.000 Y: 35.000 Z: 2.410\n< 21:14:49: Bed X: -55.000 Y: 35.000 Z: 2.310\n< 21:14:50: Bed X: -60.000 Y: 35.000 Z: 2.200\n< 21:14:50: Bed X: -55.000 Y: 40.000 Z: 2.330\n< 21:14:51: Bed X: -50.000 Y: 40.000 Z: 2.450\n< 21:14:51: Bed X: -45.000 Y: 40.000 Z: 2.530\n< 21:14:52: Bed X: -40.000 Y: 40.000 Z: 2.650\n< 21:14:52: Bed X: -35.000 Y: 40.000 Z: 2.720\n< 21:14:53: Bed X: -30.000 Y: 40.000 Z: 2.810\n< 21:14:53: Bed X: -25.000 Y: 40.000 Z: 2.900\n< 21:14:54: Bed X: -20.000 Y: 40.000 Z: 2.980\n< 21:14:54: Bed X: -15.000 Y: 40.000 Z: 3.040\n< 21:14:55: Bed X: -10.000 Y: 40.000 Z: 3.100\n< 21:14:56: Bed X: -5.000 Y: 40.000 Z: 3.150\n< 21:14:56: Bed X: 0.000 Y: 40.000 Z: 3.180\n< 21:14:57: Bed X: 5.000 Y: 40.000 Z: 3.200\n< 21:14:57: Bed X: 10.000 Y: 40.000 Z: 3.230\n< 21:14:58: Bed X: 15.000 Y: 40.000 Z: 3.240\n< 21:14:58: Bed X: 20.000 Y: 40.000 Z: 3.230\n< 21:14:59: Bed X: 25.000 Y: 40.000 Z: 3.230\n< 21:14:59: Bed X: 30.000 Y: 40.000 Z: 3.230\n< 21:15:00: Bed X: 35.000 Y: 40.000 Z: 3.220\n< 21:15:00: Bed X: 40.000 Y: 40.000 Z: 3.250\n< 21:15:01: Bed X: 45.000 Y: 40.000 Z: 3.260\n< 21:15:01: Bed X: 50.000 Y: 40.000 Z: 3.280\n< 21:15:02: Bed X: 50.000 Y: 45.000 Z: 3.330\n< 21:15:03: Bed X: 45.000 Y: 45.000 Z: 3.310\n< 21:15:03: Bed X: 40.000 Y: 45.000 Z: 3.290\n< 21:15:04: Bed X: 35.000 Y: 45.000 Z: 3.270\n< 21:15:04: Bed X: 30.000 Y: 45.000 Z: 3.250\n< 21:15:05: Bed X: 25.000 Y: 45.000 Z: 3.260\n< 21:15:05: Bed X: 20.000 Y: 45.000 Z: 3.260\n< 21:15:06: Bed X: 15.000 Y: 45.000 Z: 3.260\n< 21:15:06: Bed X: 10.000 Y: 45.000 Z: 3.240\n< 21:15:07: Bed X: 5.000 Y: 45.000 Z: 3.240\n< 21:15:07: Bed X: 0.000 Y: 45.000 Z: 3.210\n< 
21:15:08: Bed X: -5.000 Y: 45.000 Z: 3.190\n< 21:15:08: Bed X: -10.000 Y: 45.000 Z: 3.130\n< 21:15:09: Bed X: -15.000 Y: 45.000 Z: 3.080\n< 21:15:10: Bed X: -20.000 Y: 45.000 Z: 3.010\n< 21:15:10: Bed X: -25.000 Y: 45.000 Z: 2.930\n< 21:15:11: Bed X: -30.000 Y: 45.000 Z: 2.860\n< 21:15:11: Bed X: -35.000 Y: 45.000 Z: 2.790\n< 21:15:12: Bed X: -40.000 Y: 45.000 Z: 2.670\n< 21:15:12: Bed X: -45.000 Y: 45.000 Z: 2.580\n< 21:15:13: Bed X: -50.000 Y: 45.000 Z: 2.470\n< 21:15:13: Bed X: -45.000 Y: 50.000 Z: 2.620\n< 21:15:14: Bed X: -40.000 Y: 50.000 Z: 2.720\n< 21:15:14: Bed X: -35.000 Y: 50.000 Z: 2.810\n< 21:15:15: Bed X: -30.000 Y: 50.000 Z: 2.880\n< 21:15:16: Bed X: -25.000 Y: 50.000 Z: 2.960\n< 21:15:16: Bed X: -20.000 Y: 50.000 Z: 3.030\n< 21:15:17: Bed X: -15.000 Y: 50.000 Z: 3.090\n< 21:15:17: Bed X: -10.000 Y: 50.000 Z: 3.150\n< 21:15:18: Bed X: -5.000 Y: 50.000 Z: 3.200\n< 21:15:18: Bed X: 0.000 Y: 50.000 Z: 3.230\n< 21:15:19: Bed X: 5.000 Y: 50.000 Z: 3.260\n< 21:15:19: Bed X: 10.000 Y: 50.000 Z: 3.260\n< 21:15:20: Bed X: 15.000 Y: 50.000 Z: 3.270\n< 21:15:20: Bed X: 20.000 Y: 50.000 Z: 3.290\n< 21:15:21: Bed X: 25.000 Y: 50.000 Z: 3.290\n< 21:15:21: Bed X: 30.000 Y: 50.000 Z: 3.290\n< 21:15:22: Bed X: 35.000 Y: 50.000 Z: 3.330\n< 21:15:22: Bed X: 40.000 Y: 50.000 Z: 3.350\n< 21:15:23: Bed X: 45.000 Y: 50.000 Z: 3.370\n\"\"\"", "Format the raw data as x, y, z vectors.", "lines = in_data.strip().splitlines()\n\nx = []\ny = []\nz = []\nfor line in lines:\n cols = line.split()\n x.append(float(cols[4]))\n y.append(float(cols[6]))\n z.append(float(cols[8]))\nx = np.array(x)\ny = np.array(y)\nz = np.array(z)", "Set x and y axis offsets (X_PROBE_OFFSET_FROM_EXTRUDER and Y_PROBE_OFFSET_FROM_EXTRUDER)", "# note that the offset values actually aren't used in delta leveling, but\n# we want to use them to improve calibration accuracy\n\nx_offset = 0.\ny_offset = -17.5\n\nx = x + x_offset\ny = y + y_offset\n\n# limit the radius of the analysis to bed centre\nd = np.sqrt(x**2 + y**2)\n\nx = x[d < 50]\ny = y[d < 50]\nz = z[d < 50]", "z values acquired:", "z", "Plane calibration\nFirst, let's level the bed. Optimally, you should adjust physical bed leveling, but we can also do it in the software. Note that this is precisely what the \"G29\" command does.\nPlot the raw values. Nice!", "fig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_trisurf(x, y, z, cmap=cm.coolwarm, linewidth=0)", "Solve (OK, we're lazy - optimize) the plane equation.", "def f(k_x, k_y, b):\n return z + k_x * x + k_y * y + b\n\ndef fopt(k):\n k_x, k_y, b = k\n return f(k_x, k_y, b)\n\nres = leastsq(fopt, [0, 0, 0])\n\nres\n\nk_x, k_y, b = res[0]", "Once we have flattened the print bed, this is the residual curve:", "z2 = f(k_x, k_y, b)\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_trisurf(x, y, z2, cmap=cm.coolwarm, linewidth=0)", "Cool, eh? Verify that the center point (highest or lowest point of the dome or the cup) is really close to 0, 0 - if not, adjust the offset values.\nIn my example, the curve has a distinct saddle-like shape. This is most likely due to some skew in printer frame geometry - I'm still finishing the printer construction and haven't had time to fix everything. I'll ignore that for the moment.\nGeometry calibration\nNext, let's flatten the residual curve. We'll assume that the diagonal rod length is known and accurate (add the carbon rod length to the steel ball diameter, in case of the magnetic arms). 
Hence, we'll only have to solve the delta radius error.\nThe delta printer kinematics are described here:\nhttp://reprap.org/wiki/File:Rostock_Delta_Kinematics_3.pdf", "# Known variables\n# DELTA_DIAGONAL_ROD\nL = 210.0\n# DELTA_RADIUS\nDR = 105.3", "Inverse kinematics equations:", "def inv_kin(L, DR, x, y):\n # tower A sits on the +y axis; towers B and C sit 30 degrees below the x axis\n Avx = 0\n Avy = DR\n\n # convert the 30 degree tower offset to radians before calling numpy trig functions\n Bvx = DR * np.cos(np.radians(30.0))\n Bvy = -DR * np.sin(np.radians(30.0))\n\n Cvx = -DR * np.cos(np.radians(30.0))\n Cvy = -DR * np.sin(np.radians(30.0))\n\n Acz = np.sqrt(L**2 - (x - Avx)**2 - (y - Avy)**2)\n Bcz = np.sqrt(L**2 - (x - Bvx)**2 - (y - Bvy)**2)\n Ccz = np.sqrt(L**2 - (x - Cvx)**2 - (y - Cvy)**2)\n \n return Acz, Bcz, Ccz\n\nAcz, Bcz, Ccz = inv_kin(L, DR, x, y)", "For true L and DR, these return the zero height locations.\nFor our imperfect dimensions, we need to calculate the actual x,y,z coordinates with forward kinematics. Since we're lazy, we solve the forward kinematic equations numerically from the inverse ones.", "def fwd_kin_scalar(L, DR, Az, Bz, Cz):\n \n # now, solve the inv_kin equation:\n # Acz, Bcz, Ccz = inv_kin(L, DR, x, y)\n # for x, y, z\n \n def fopt(x_):\n x, y, z = x_\n Aczg, Bczg, Cczg = inv_kin(L, DR, x, y)\n #print(\"F: \", Aczg, Bczg, Cczg)\n return [Aczg+z-Az, Bczg+z-Bz, Cczg+z-Cz]\n res = leastsq(fopt, [0, 0, 1])\n x, y, z = res[0]\n \n return x, y, z\nfwd_kin = np.vectorize(fwd_kin_scalar)", "Test fwd_kin correctness", "def test_fwd_kin_reciprocity():\n xs = np.linspace(-100, 70, 51)\n ys = np.linspace(60, -50, 51)\n zs = np.linspace(0, 50, 51)\n for i, (x, y, z) in enumerate(zip(xs, ys, zs)):\n Acz, Bcz, Ccz = inv_kin(L, DR, x, y)\n xi, yi, zi = fwd_kin(L, DR, Acz+z, Bcz+z, Ccz+z)\n assert np.abs(x-xi) < 1e-5\n assert np.abs(y-yi) < 1e-5\n assert np.abs(z-zi) < 1e-5\ntest_fwd_kin_reciprocity()", "Now for the beef. Define the error incurred by an incorrect delta radius value.", "def Z_err(e):\n # idealized inverse kinematics (this is what the firmware calculates)\n Acz, Bcz, Ccz = inv_kin(L, DR, x, y)\n # and this is where the extruder ends up in real world\n x_e, y_e, z_e = fwd_kin(L, DR+e, Acz, Bcz, Ccz)\n \n return z_e", "Define the optimization function.", "def fopt(x):\n e, = x\n zerr = Z_err(e)\n # ignore any constant offset\n zerr = zerr - np.mean(zerr)\n return np.sum((z2-zerr)**2)", "Run the optimization. This might take a few moments.", "res2 = minimize(fopt, \n np.array([-2]), \n method='COBYLA', \n options={\"disp\": True})\nres2\n\ne, = res2.x\n\ne\n\nzerr = Z_err(e)\nzerr = zerr - np.mean(zerr)\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_trisurf(x, y, z2-zerr, cmap=cm.coolwarm, linewidth=0)", "Note the z axis values. The residual has flattened quite a bit. Take the e value and add that to DELTA_RADIUS and upload the new firmware. You still need to re-run calibration and most likely also readjust Z_PROBE_OFFSET_FROM_EXTRUDER to ensure you get a perfect first layer height." ]
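A quick geometry check to go with `inv_kin`: the sketch below (plain NumPy, no printer required) assumes the three towers sit at 90, -30 and 210 degrees around the bed centre, which is how the Avx/Bvx/Cvx expressions above are laid out, and verifies that they form an equilateral triangle on a circle of radius DR. The angle list and the asserts are illustrative choices, not values taken from the notebook.

```python
import numpy as np

# Assumed tower layout: A at 90 deg, B at -30 deg, C at 210 deg on a circle of radius DR.
DR = 105.3
angles = np.radians([90.0, -30.0, 210.0])
towers = DR * np.column_stack([np.cos(angles), np.sin(angles)])

# Every tower should be exactly DR from the centre...
assert np.allclose(np.hypot(towers[:, 0], towers[:, 1]), DR)

# ...and the three towers should form an equilateral triangle with side sqrt(3) * DR.
sides = [np.hypot(*(towers[i] - towers[(i + 1) % 3])) for i in range(3)]
assert np.allclose(sides, np.sqrt(3) * DR)

print(towers)  # rows A, B, C correspond to (Avx, Avy), (Bvx, Bvy), (Cvx, Cvy) in inv_kin
```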
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sthuggins/phys202-2015-work
assignments/assignment03/NumpyEx03.ipynb
mit
[ "Numpy Exercise 3\nImports", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport antipackage\nimport github.ellisonbg.misc.vizarray as va", "Geometric Brownian motion\nHere is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.", "def brownian(maxt, n):\n \"\"\"Return one realization of a Brownian (Wiener) process with n steps and a max time of maxt.\"\"\"\n t = np.linspace(0.0,maxt,n)\n h = t[1]-t[0]\n Z = np.random.normal(0.0,1.0,n-1)\n dW = np.sqrt(h)*Z\n W = np.zeros(n)\n W[1:] = dW.cumsum()\n return t, W", "Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.", "t, W = brownian(1.0, 1000)\n\nassert isinstance(t, np.ndarray)\nassert isinstance(W, np.ndarray)\nassert t.dtype==np.dtype(float)\nassert W.dtype==np.dtype(float)\nassert len(t)==len(W)==1000", "Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.", "plt.plot(t, W)\nplt.xlabel('t')\nplt.ylabel('W(t)')\n\nassert True # this is for grading", "Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.", "dW = np.diff(W)\nprint(dW.mean(), dW.std())\n\nassert len(dW)==len(W)-1\nassert dW.dtype==np.dtype(float)", "Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:\n$$\nX(t) = X_0 e^{((\\mu - \\sigma^2/2)t + \\sigma W(t))}\n$$\nUse Numpy ufuncs and no loops in your function.", "def geo_brownian(t, W, X0, mu, sigma):\n \"\"\"Return X(t) for geometric brownian motion with drift mu, volatility sigma.\"\"\"\n return X0*np.exp((mu - sigma**2/2)*t + sigma*W)\n\nassert True # leave this for grading", "Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\\mu=0.5$ and $\\sigma=0.3$ with the Wiener process you computed above.\nVisualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.", "X = geo_brownian(t, W, X0=1.0, mu=0.5, sigma=0.3)\nplt.plot(t, X)\nplt.xlabel('t')\nplt.ylabel('X(t)')\n\nassert True # leave this for grading" ]
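As a follow-up to the exercise above, here is a small self-contained Monte Carlo sketch that checks the geometric Brownian motion formula against the analytic expectation E[X(t)] = X_0 e^{mu t} for the same parameters X0=1.0, mu=0.5, sigma=0.3. It re-simulates its own Wiener increments rather than reusing the notebook's t and W, and the path and step counts are arbitrary choices.

```python
import numpy as np

# Simulate many geometric Brownian paths and compare the sample mean at t = 1
# with the analytic expectation E[X(t)] = X0 * exp(mu * t).
n_paths, n_steps, maxt = 2000, 1000, 1.0
X0, mu, sigma = 1.0, 0.5, 0.3

t = np.linspace(0.0, maxt, n_steps)
h = t[1] - t[0]
dW = np.sqrt(h) * np.random.normal(size=(n_paths, n_steps - 1))
W = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)

X = X0 * np.exp((mu - sigma**2 / 2) * t + sigma * W)  # same formula as geo_brownian

print(X[:, -1].mean())         # sample mean of X(1) over all paths
print(X0 * np.exp(mu * maxt))  # analytic expectation, roughly 1.649
```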
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gdsfactory/gdsfactory
docs/notebooks/041_routing_electical.ipynb
mit
[ "Routing electrical\nFor routing low speed DC electrical ports you can use sharp corners instead of smooth bends.\nYou can also define port.orientation = None to ignore the port orientation for low speed DC ports.\nSingle route functions\nget_route_electrical\nGet route_electrical bend = wire_corner defaults to 90 degrees bend.", "import gdsfactory as gf\n\nc = gf.Component(\"pads\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((70, 200))\nc\n\nc = gf.Component(\"pads_with_routes_with_bends\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((70, 200))\nroute = gf.routing.get_route_electrical(\n pt.ports[\"e11\"], pb.ports[\"e11\"], bend=\"bend_euler\", radius=30\n)\nc.add(route.references)\nc\n\nc = gf.Component(\"pads_with_routes_with_wire_corners\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((70, 200))\nroute = gf.routing.get_route_electrical(\n pt.ports[\"e11\"], pb.ports[\"e11\"], bend=\"wire_corner\"\n)\nc.add(route.references)\nc\n\nc = gf.Component(\"pads_with_routes_with_wire_corners_no_orientation\")\npt = c << gf.components.pad_array(orientation=None, columns=3)\npb = c << gf.components.pad_array(orientation=None, columns=3)\npt.move((70, 200))\nroute = gf.routing.get_route_electrical(\n pt.ports[\"e11\"], pb.ports[\"e11\"], bend=\"wire_corner\"\n)\nc.add(route.references)\nc", "route_quad", "c = gf.Component(\"pads_route_quad\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((100, 200))\nroute = gf.routing.route_quad(pt.ports[\"e11\"], pb.ports[\"e11\"], layer=(49, 0))\nc.add(route)\nc", "get_route_from_steps", "c = gf.Component(\"pads_route_from_steps\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((100, 200))\nroute = gf.routing.get_route_from_steps(\n pb.ports[\"e11\"],\n pt.ports[\"e11\"],\n steps=[\n {\"y\": 200},\n ],\n cross_section=gf.cross_section.metal3,\n bend=gf.components.wire_corner,\n)\nc.add(route.references)\nc\n\nc = gf.Component(\"pads_route_from_steps_None_orientation\")\npt = c << gf.components.pad_array(orientation=None, columns=3)\npb = c << gf.components.pad_array(orientation=None, columns=3)\npt.move((100, 200))\nroute = gf.routing.get_route_from_steps(\n pb.ports[\"e11\"],\n pt.ports[\"e11\"],\n steps=[\n {\"y\": 200},\n ],\n cross_section=gf.cross_section.metal3,\n bend=gf.components.wire_corner,\n)\nc.add(route.references)\nc", "Bundle of routes (get_bundle_electrical)", "import gdsfactory as gf\n\nc = gf.Component(\"pads_bundle\")\npt = c << gf.components.pad_array(orientation=270, columns=3)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((100, 200))\n\nroutes = gf.routing.get_bundle_electrical(\n pb.ports, pt.ports, end_straight_length=60, separation=30\n)\n\nfor route in routes:\n c.add(route.references)\nc", "get bundle from steps", "c = gf.Component(\"pads_bundle_steps\")\npt = c << gf.components.pad_array(\n gf.partial(gf.components.pad, size=(30, 30)),\n orientation=270,\n columns=3,\n spacing=(50, 0),\n)\npb = c << gf.components.pad_array(orientation=90, columns=3)\npt.move((300, 500))\n\nroutes = gf.routing.get_bundle_from_steps_electrical(\n pb.ports, pt.ports, 
end_straight_length=60, separation=30, steps=[{\"dy\": 100}]\n)\n\nfor route in routes:\n c.add(route.references)\n\nc" ]
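One more variation on `get_route_from_steps`, sketched using only the calls already shown above; the pad positions and step values are made-up numbers for illustration, not gdsfactory defaults. It chains two waypoints, an absolute `y` followed by a relative `dx`, to draw an L-shaped DC route with sharp corners.

```python
import gdsfactory as gf

c = gf.Component("pads_route_two_steps")
pt = c << gf.components.pad_array(orientation=270, columns=3)
pb = c << gf.components.pad_array(orientation=90, columns=3)
pt.move((200, 300))

# Walk up to y=150, slide 200 units to the right, then finish at the destination pad.
route = gf.routing.get_route_from_steps(
    pb.ports["e11"],
    pt.ports["e11"],
    steps=[{"y": 150}, {"dx": 200}],
    cross_section=gf.cross_section.metal3,
    bend=gf.components.wire_corner,
)
c.add(route.references)
c
```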
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
turbomanage/training-data-analyst
blogs/bigquery_datascience/bigquery_datascience.ipynb
apache-2.0
[ "How data scientists use BigQuery\nThis notebook accompanies the presentation \n\"Machine Learning and Bayesian Statistics in minutes: How data scientists use BigQuery\"\nBayesian Statistics in minutes\nLet's say that we want to find the probability $\\theta$ of a flight arriving on time (less than 15 minutes late) given a specific departure delay $\\textbf{D}$. \nBayes' Law tells us that this can be obtained for any specific departure delay using the formula:\n<center><font size=\"+5\">\n$P(\\theta|\\textbf{D}) = P(\\theta) \\frac{P(\\textbf{D}|\\theta)}{P(\\textbf{D})} $\n</font></center>\nOnce you have large datasets, the probabilities above are just exercises in counting, and so applying Bayesian statistics is super-easy in BigQuery. \nFor example, let's find the probability that a flight will arrive less than 15 minutes late:", "%%bigquery df\nWITH rawnumbers AS (\nSELECT\n departure_delay,\n COUNT(1) AS num_flights,\n COUNTIF(arrival_delay < 15) AS num_ontime\nFROM\n `bigquery-samples.airline_ontime_data.flights`\nGROUP BY\n departure_delay\nHAVING\n num_flights > 100\n),\n\ntotals AS (\nSELECT\n SUM(num_flights) AS tot_flights,\n SUM(num_ontime) AS tot_ontime\nFROM rawnumbers\n),\n\nbayes AS (\nSELECT\n departure_delay,\n num_flights / tot_flights AS prob_D,\n num_ontime / tot_ontime AS prob_D_theta,\n tot_ontime / tot_flights AS prob_theta\nFROM\n rawnumbers, totals\nWHERE\n num_ontime > 0\n)\n\nSELECT\n *, (prob_theta * prob_D_theta / prob_D) AS prob_ontime\nFROM\n bayes\nORDER BY\n departure_delay ASC\n\ndf.plot(x='departure_delay', y='prob_ontime');", "But is it right, though? What's with the weird hump for early departures (departure_delay less than zero)?\nFirst, we should verify that we can apply Bayes' Law. Grouping by the departure delay is incorrect if the departure delay is a chaotic input variable. We have to do exploratory analysis to validate that:\n\nIf a flight departs late, will it arrive late?\nIs the relationship between the two variables non-chaotic?\nDoes the linearity hold even for extreme values of departure delays?\n\nThis, too, is straightforward in BigQuery", "%%bigquery df\nSELECT\n departure_delay,\n COUNT(1) AS num_flights,\n APPROX_QUANTILES(arrival_delay, 10) AS arrival_delay_deciles\nFROM\n `bigquery-samples.airline_ontime_data.flights`\nGROUP BY\n departure_delay\nHAVING\n num_flights > 100\nORDER BY\n departure_delay ASC\n\nimport pandas as pd\npercentiles = df['arrival_delay_deciles'].apply(pd.Series)\npercentiles = percentiles.rename(columns = lambda x : str(x*10) + \"%\")\ndf = pd.concat([df['departure_delay'], percentiles], axis=1)\ndf.head()\n\nwithout_extremes = df.drop(['0%', '100%'], 1)\nwithout_extremes.plot(x='departure_delay', xlim=(-30,50), ylim=(-50,50));", "Note the crazy non-linearity for the top half of the flights that leave more than 20 minutes early. Most likely, these are planes that try to beat some weather situation. About half of such flights succeed (the linear bottom) and the other half don't (the non-linear top). The average is what we saw as the weird hump in the probability plot. So yes, the hump is real. 
The rest of the distribution is clear-cut and the Bayes probabilities are quite valid.\nSolving the flights problem using GCP tools end-to-end (from ingest to machine learning) is covered in this book:\n<img src=\"https://aisoftwarellc.weebly.com/uploads/5/1/0/0/51003227/published/data-science-on-gcp_2.jpg?1563508887\"></img>\nMachine Learning in BigQuery\nHere, we will use BigQuery ML to create a deep neural network that predicts the duration of bicycle rentals in London.", "%%bigquery\nCREATE OR REPLACE MODEL ch09eu.bicycle_model_dnn\nOPTIONS(input_label_cols=['duration'], \n model_type='dnn_regressor', hidden_units=[32, 4])\nTRANSFORM(\n duration\n , start_station_name\n , CAST(EXTRACT(dayofweek from start_date) AS STRING)\n as dayofweek\n , CAST(EXTRACT(hour from start_date) AS STRING)\n as hourofday \n)\nAS\nSELECT \n duration, start_station_name, start_date\nFROM \n `bigquery-public-data`.london_bicycles.cycle_hire\n\n%%bigquery\nSELECT * FROM ML.EVALUATE(MODEL ch09eu.bicycle_model_dnn)\n\n%%bigquery\nSELECT * FROM ML.PREDICT(MODEL ch09eu.bicycle_model_dnn,(\n SELECT \n 'Park Street, Bankside' AS start_station_name\n ,CURRENT_TIMESTAMP() AS start_date\n))", "BigQuery and TensorFlow\nBatch predictions of a TensorFlow model from BigQuery!", "%%bigquery\nCREATE OR REPLACE MODEL advdata.txtclass_tf\nOPTIONS (model_type='tensorflow',\n model_path='gs://cloud-training-demos/txtclass/export/exporter/1549825580/*')\n\n%%bigquery\nSELECT\n input,\n (SELECT AS STRUCT(p, ['github', 'nytimes', 'techcrunch'][ORDINAL(s)]) prediction FROM\n (SELECT p, ROW_NUMBER() OVER() AS s FROM\n (SELECT * FROM UNNEST(dense_1) AS p)) \n ORDER BY p DESC LIMIT 1).*\nFROM ML.PREDICT(MODEL advdata.txtclass_tf,\n(\nSELECT 'Unlikely Partnership in House Gives Lawmakers Hope for Border Deal' AS input\nUNION ALL SELECT \"Fitbit\\'s newest fitness tracker is just for employees and health insurance members\"\nUNION ALL SELECT \"Show HN: Hello, a CLI tool for managing social media\"\n))", "We use the bicycle rentals problem as a way to illustrate lots of BigQuery features in\n<img src=\"https://aisoftwarellc.weebly.com/uploads/5/1/0/0/51003227/published/bigquery-the-definitive-guide.jpg?1563508864\"></img>\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
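To make the counting argument from the Bayesian section above concrete without touching BigQuery, here is a small pandas sketch on invented numbers (the column names mirror the SQL; the data itself is made up purely for illustration). It also shows why the result looks like a plain frequency: within each departure_delay group, P(theta) * P(D|theta) / P(D) algebraically reduces to num_ontime / num_flights.

```python
import pandas as pd

# Toy flight records (invented values, standing in for the BigQuery table).
df = pd.DataFrame({
    "departure_delay": [-5, -5, 0, 0, 10, 10, 30, 30],
    "arrival_delay":   [-8,  3, 1, 20,  5, 25, 40, 12],
})
df["ontime"] = df["arrival_delay"] < 15

grouped = df.groupby("departure_delay")["ontime"].agg(num_flights="count", num_ontime="sum")
prob_D = grouped["num_flights"] / grouped["num_flights"].sum()
prob_D_theta = grouped["num_ontime"] / grouped["num_ontime"].sum()
prob_theta = grouped["num_ontime"].sum() / grouped["num_flights"].sum()

bayes = prob_theta * prob_D_theta / prob_D               # Bayes' Law, as in the SQL
direct = grouped["num_ontime"] / grouped["num_flights"]  # plain conditional frequency
print(pd.DataFrame({"bayes": bayes, "direct": direct}))  # the two columns agree
```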
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/nuist/cmip6/models/sandbox-1/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: NUIST\nSource ID: SANDBOX-1\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:34\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nuist', 'sandbox-1', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Reprenstation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tbphu/fachkurs_bachelor
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
mit
[ "import tellurium as te\nimport urllib2\n%matplotlib inline", "Roadrunner Methoden\nAntimony Modell aus Modell-Datenbank abfragen:\nLade mithilfe von urllib2 das Antimony-Modell des \"Repressilator\" herunter. Benutze dazu die urllib2 Methoden urlopen() und read()\nDie URL für den Repressilator lautet:\nhttp://antimony.sourceforge.net/examples/biomodels/BIOMD0000000012.txt\nElowitz, M. B., & Leibler, S. (2000). A synthetic oscillatory network of transcriptional regulators. Nature, 403(6767), 335-338.", "Repressilator = urllib2.urlopen('http://antimony.sourceforge.net/examples/biomodels/BIOMD0000000012.txt').read()", "Erstelle eine Instanz von roadrunner, indem du gleichzeitig den Repressilator als Modell lädst. Benutze dazu loada() von tellurium.", "rr = te.loada(Repressilator)", "Im folgenden Teil wollen wir einige der Methoden von telluriums roadrunner ausprobieren.\nLass dir dazu das Modell als Antimony oder SBML anzeigen. Dies erreichst du mit getAntimony() oder getSBML().", "print rr.getAntimony()\n\nprint rr.getSBML()", "Solver Methoden\nAchtung: Obwohl resetToOrigin() das Modell in den ursprünglichen Zustand zurück setzt, bleiben Solver-spezifische Einstellungen erhalten. Daher benutze am besten immer te.loada() als vollständigen Reset!\nMit getIntegrator() ist es möglich, den Solver und dessen gesetzte Einstellungen anzeigen zu lassen.", "rr = te.loada(Repressilator)\nprint rr.getIntegrator()", "Ändere den verwendeten Solver von 'CVODE' auf Runge-Kutta 'rk45' und lass dir die Settings nochmals anzeigen.\nVerwende dazu setIntegrator() und getIntegrator().\nWas fällt auf?", "rr = te.loada(Repressilator)\nrr.setIntegrator('rk45')\nprint rr.getIntegrator()", "Simuliere den Repressilator von 0s bis 1000s und plotte die Ergebnisse für verschiedene steps-Werte (z.b. steps = 10 oder 10000) in der simulate-Methode. Was macht das Argument steps?", "rr = te.loada(Repressilator)\nrr.simulate(0,1000,1000)\nrr.plot()", "Benutze weiterhin 'CVODE' und verändere den Paramter 'relative_tolerance' des Solvers (z.b. 1 oder 10).\nVerwendete dabei steps = 10000 in simulate(). \nWas fällt auf?\nHinweis - die nötige Methode lautet roadrunner.getIntegrator().setValue().", "rr = te.loada(Repressilator)\nrr.getIntegrator().setValue('relative_tolerance',0.0000001)\nrr.getIntegrator().setValue('relative_tolerance',1)\nrr.simulate(0,1000,1000)\nrr.plot()", "ODE-Modell als Objekt in Python\nOben haben wir gesehen, dass tellurium eine Instanz von RoadRunner erzeugt, wenn ein Modell eingelesen wird.\nAußerdem ist der Zugriff auf das eigentliche Modell möglich. Unter Verwendung von .model gibt es zusätzliche Methoden um das eigentliche Modell zu manipulieren:", "rr = te.loada(Repressilator)\nprint type(rr)\nprint type(rr.model)", "Aufgabe 1 - Parameterscan:\nA) Sieh dir die Implementierung des Modells 'Repressilator' an, welche Paramter gibt es?\nB) Erstelle einen Parameterscan, welcher den Wert des Paramters mit der Bezeichnung 'n' im Repressilator ändert.\n(Beispielsweise für n=1,n=2,n=3,...)\nLasse das Modell für jedes gewählte 'n' simulieren.\nBeantworte dazu folgende Fragen:\n\na) Welchen Zweck erfüllt 'n' im Modell im Hinblick auf die Reaktion, in der 'n' auftaucht?\nb) Im Gegensatz dazu, welchen Effekt hat 'n' auf das Modellverhalten?\nc) Kannst du einen Wert für 'n' angeben, bei dem sich das Verhalten des Modells qualitativ ändert?\n\nC) Visualisiere die Simulationen. Welche Art von Plot ist günstig, um die Modellsimulation darzustellen? 
Es gibt mehrere geeignete Varianten, aber beschränke die Anzahl der Graphen im Plot(z.b. wähle eine Spezies und plotte).\nHinweise:\nNutze die \"Autovervollständigung\" des Python-Notebook und außerdem die offizielle Dokumentation von RoadRunner, um die Methoden zu finden, die für die Implementierung eines Parameterscans notwendig sind. Natürlich kannst du auch das Notebook von der Tellurium Einführung als Hilfe benutzen.\nZiehe in Erwägung, dass das Modell einen oder mehrere Resets benötigt. Überlege, an welcher Stelle deiner implementierung und welche Reset-Methode du idealerweise einsetzen solltest.", "import matplotlib.pyplot as plt\nimport numpy as np\n\nfig_phase = plt.figure(figsize=(5,5))\n\nrr = te.loada(Repressilator)\nfor l,i in enumerate([1.0,1.7,3.0,10.]):\n \n fig_phase.add_subplot(2,2,l+1)\n \n rr.n = i\n rr.reset()\n result = rr.simulate(0,500,500,selections=['time','X','PX'])\n\n plt.plot(result['X'],result['PX'],label='n = %s' %i)\n \n plt.xlabel('X')\n plt.ylabel('PX')\n plt.legend() \n\nplt.tight_layout()\n\nfig_timecourse= plt.figure(figsize=(5,5))\n\nrr = te.loada(Repressilator)\nfor l,i in enumerate([1.0,1.7,3.0,10.]):\n \n rr.n = i\n rr.reset()\n result = rr.simulate(0,500,500,selections=['time','X','PX'])\n\n plt.plot(result['time'],result['PX'],label='PX; n = %s' %i)\n \n plt.xlabel('time')\n plt.ylabel('Species amounts')\n plt.legend() \n \n \n \nplt.tight_layout()", "Aufgabe 2 - (Initial value)-scan:\nErstelle einen \"Scan\", der den Anfwangswert von der Spezies Y ändert.\nDas Modellverhalten ist hierbei weniger interessant.\nAchte vielmehr darauf, die Resets so zu setzen, dass 'Y' bei der Simulation tatsächlich beim gesetzten Wert startet.", "import matplotlib.pyplot as plt\nimport numpy as np\n\nrr = te.loada(Repressilator)\nprint rr.model.getFloatingSpeciesInitAmountIds()\nprint rr.model.getFloatingSpeciesInitAmounts()\n\nfor l,i in enumerate([1,5,10,20]):\n \n # Auswahl einiger Varianten (es gibt noch mehr Möglichkeiten...)\n #Variante1 - Falsch \n #rr.Y=i\n\n #Variante2 - Falsch\n #rr.Y=i\n #rr.reset()\n \n #Variante3 - Richtig\n rr.model[\"init(Y)\"] = i\n rr.reset() \n \n \n result = rr.simulate(0,10,1000,selections=['Y','PY'])\n \n #plt.plot(result[:,0],result['PY'],label='n = %s' %i)\n plt.plot(result['Y'],label='initial Y = %s' %i)\n plt.xlabel('time')\n plt.ylabel('Species in amounts')\n plt.axhline(y=i,linestyle = ':',color='black')\n plt.legend()\n " ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
uber/ludwig
examples/kfold_cv/regression_example.ipynb
apache-2.0
[ "K-fold cross validation - Regression Model\nBased on the Ludwig regression example \nData set\nThis example demonstrates teh following:\n\nDownload a data set and create a pandas dataframe\nCreate a training and hold-out test data sets\nCreate a Ludwig config data structure from the pandas dataframe\nRun a 5-fold cross validation analysis with the training data\nUse Ludwig APIs to train and assess model performance on hold-out test data set", "import logging\nimport os\nimport os.path\nimport shutil\nimport tempfile\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport requests\nimport scipy.stats as stats\nimport seaborn as sns\nfrom sklearn.model_selection import train_test_split\n\nfrom ludwig.api import kfold_cross_validate, LudwigModel", "Contstants", "DATA_SET_URL = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'\nDATA_SET = 'auto_mpg.data'\nRESULTS_DIR = 'results'", "Clean out previous results", "if os.path.isfile(DATA_SET):\n os.remove(DATA_SET)\n \nshutil.rmtree(RESULTS_DIR, ignore_errors=True)", "Retrieve data from UCI Machine Learning Repository\nDownload required data", "r = requests.get(DATA_SET_URL)\nif r.status_code == 200:\n with open(DATA_SET,'w') as f:\n f.write(r.content.decode(\"utf-8\"))", "Create Pandas DataFrame from downloaded data", "raw_df = pd.read_csv(DATA_SET,\n header=None,\n na_values = \"?\", comment='\\t',\n sep=\" \", skipinitialspace=True)\n\n\nraw_df.columns = ['MPG','Cylinders','Displacement','Horsepower','Weight',\n 'Acceleration', 'ModelYear', 'Origin']\nraw_df.shape\n\nraw_df.head()", "Create train/test split", "train_df, test_df = train_test_split(raw_df, train_size=0.8, random_state=17)\nprint(train_df.shape)\nprint(test_df.shape)", "Setup Ludwig config", "num_features = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'ModelYear']\ncat_features = ['Origin']", "Create Ludwig input_features", "input_features = []\n# setup input features for numerical variables\nfor p in num_features:\n a_feature = {'name': p, 'type': 'numerical', \n 'preprocessing': {'missing_value_strategy': 'fill_with_mean', 'normalization': 'zscore'}}\n input_features.append(a_feature)\n\n# setkup input features for categorical variables\nfor p in cat_features:\n a_feature = {'name': p, 'type': 'category'}", "Create Ludwig output features", "output_features =[\n {\n 'name': 'MPG',\n 'type': 'numerical',\n 'num_fc_layers': 2,\n 'fc_size': 64\n }\n]\n\nconfig = {\n 'input_features' : input_features,\n 'output_features': output_features,\n 'training' :{\n 'epochs': 100,\n 'batch_size': 32\n }\n}\nconfig", "Perform K-fold Cross Validation analysis", "%%time\nwith tempfile.TemporaryDirectory() as tmpdir:\n data_csv_fp = os.path.join(tmpdir,'train.csv')\n train_df.to_csv(data_csv_fp, index=False)\n\n (\n kfold_cv_stats, \n kfold_split_indices \n ) = kfold_cross_validate(\n num_folds=5,\n config=config,\n dataset=data_csv_fp,\n output_directory=tmpdir,\n logging_level=logging.ERROR\n )\n\n\nkfold_cv_stats['overall']['MPG']", "Train model and assess model performance", "model = LudwigModel(\n config=config,\n logging_level=logging.ERROR\n)\n\n%%time\ntraining_stats = model.train(\n training_set=train_df,\n output_directory=RESULTS_DIR,\n)\n\ntest_stats, mpg_hat_df, _ = model.evaluate(dataset=test_df, collect_predictions=True, collect_overall_stats=True)\n\ntest_stats\n\na = plt.axes(aspect='equal')\nsns.scatterplot(test_df['MPG'].values, mpg_hat_df['MPG_predictions'].values,\n s=50)\nplt.xlabel('True 
Values [MPG]')\nplt.ylabel('Predictions [MPG]')\nlims = [0, 50]\nplt.xlim(lims)\nplt.ylim(lims)\n_ = plt.plot(lims, lims)\n", "Compare K-fold Cross Validation metrics against hold-out test metrics\nHold-out Test Metrics", "test_stats['MPG']", "K-fold Cross Validation Metrics", "kfold_cv_stats['overall']['MPG']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LorenzoBi/courses
UQ/assignment_2/Untitled.ipynb
mit
[ "Assignment 2\nLorenzo Biasi and Michael Aichmüller", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sympy import *\n%matplotlib inline\n\ninit_printing() ", "Exercise 1\nWe proceed building the alogrithm for testing the accuracy of the numerical derivative", "def f(x):\n return np.exp(np.sin(x))\n\ndef df(x):\n return f(x) * np.cos(x)\n\ndef absolute_err(f, df, h):\n g = (f(h) - f(0)) / h\n return np.abs(df(0) - g)\n\nhs = 10. ** -np.arange(15)\nepsilons = np.empty(15)\nfor i, h in enumerate(hs):\n epsilons[i] = absolute_err(f, df, h) ", "a)", "plt.plot(hs, epsilons, 'o')\nplt.yscale('log')\nplt.xscale('log')\nplt.xlabel(r'h')\nplt.ylabel(r'$\\epsilon(h)$')\nplt.grid(linestyle='dotted')", "We can see that until $h = 10^7$ the trend is that the absolute error diminishes, but after that it goes back up. This is due to the fact that when we compute $f(h) - f(0)$ we are using an ill-conditioned operation. In fact these two values are really close to each other.\nExercise 2\na.\nWe can easily see that when $\\|x\\| \\ll 1$ we have that both $\\frac{1 - x }{x + 1}$ and $\\frac{1}{2 x + 1}$ are almost equal to 1, so the subtraction is ill-conditined.", "x_1 = symbols('x_1')\nfun1 = 1 / (1 + 2*x_1) - (1 - x_1) / (1 + x_1)\nfun1", "We can modify the previous expression for being well conditioned around 0. This is well conditoned.", "fun2 = simplify(fun1)\nfun2", "A comparison between the two ways of computing this value. we can clearly see that if we are far from 1 the methods are nearly identical, but the closer you get to 0 the two methods diverge", "def f1(x):\n return 1 / (1 + 2*x) - (1 - x) / (1 + x)\n\ndef f2(x):\n return 2*x**2/((1 + 2*x)*(1 + x))\n\nhs = 2. ** - np.arange(64)\nplt.plot(hs, np.abs(f1(hs) - f2(hs)))\nplt.yscale('log')\nplt.xscale('log')\nplt.xlabel(r'h')\nplt.ylabel('differences')\nplt.grid(linestyle='dotted')", "b.\nAs before we have the subtraction of two really close values, so it is going to be ill conditioned for $x$ really big.\n$ \\sqrt{x + \\frac{1}{x}} - \\sqrt{x - \\frac{1}{x}} = \\sqrt{x + \\frac{1}{x}} - \\sqrt{x - \\frac{1}{x}} \\frac{\\sqrt{x + \\frac{1}{x}} + \\sqrt{x - \\frac{1}{x}} }{\\sqrt{x + \\frac{1}{x}} + \\sqrt{x - \\frac{1}{x}}} = \\frac{2}{x(\\sqrt{x + \\frac{1}{x}} + \\sqrt{x - \\frac{1}{x}})}$", "def f3(x):\n return np.sqrt(x + 1/x) - np.sqrt(x - 1 / x)\n\ndef f4(x):\n return 2 / (np.sqrt(x + 1/x) + np.sqrt(x - 1 / x)) / x\n\nhs = 2 ** np.arange(64)\n\nplt.plot(hs, np.abs(f3(hs) - f4(hs)), 'o')\n#plt.plot(hs, , 'o')\nplt.yscale('log')\nplt.xscale('log')\nplt.xlabel(r'h')\nplt.ylabel('differences$')\nplt.grid(linestyle='dotted')", "Exercise 3.\na.\nIf we assume we posess a 6 faced dice, we have at each throw three possible outcome. So we have to take all the combination of 6 numbers repeating 3 times. It is intuitive that our $\\Omega$ will be composed by $6^3 = 216$ samples, and will be of type:\n$(1, 1, 1), (1, 1, 2), (1, 1, 3), ... (6, 6, 5), (6, 6, 6)$", "import itertools\nx = [1, 2, 3, 4, 5, 6]\nomega = set([p for p in itertools.product(x, repeat=3)])\nprint(r'Omega has', len(omega), 'elements and they are:')\nprint(omega)", "Concerning the $\\sigma$-algebra we need to state that there does not exist only a $\\sigma$-algebra for a given $\\Omega$, but it this case a reasonable choice would be the powerset of $\\Omega$.\nb.\nIn case of fairness of dice we will have the discrete uniform distribution. 
To compute the value of $\rho(\omega)$ we just take the reciprocal of the size of the sample space: $\rho(\omega) = \frac{1}{6^3}$", "1/(6**3)", "c.\nIf we want to determine the set $A$ we can consider its complement $A^c = {\text{Not even one throw is 6}}$. This event is analogous to the sample space of a 5-faced die, so its size will be $5^3$. To compute the size of $A$ we can simply compute $6^3 - 5^3$, and for the event itself we just need $\Omega \setminus A^c = A$.", "print('Size of A^c:', 5**3)\nprint('Size of A:  ', 6 ** 3 - 5 ** 3)\n\n# sanity check: counting triples whose first 6 appears at throw 1, 2 or 3: 36 + 30 + 25 = 91\n36 + 5 * 6 + 5 * 5\n\nx = [1, 2, 3, 4, 5]\nA_c = set([p for p in itertools.product(x, repeat=3)])\nprint('A^c has ', len(A_c), 'elements.\nA^c =', A_c)\nprint('A has ', len(omega - A_c), 'elements.\nA =', omega - A_c)", "P(A) will be $\frac{91}{216}$.", "91 / 216" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ModSim
python/soln/chap22.ipynb
gpl-2.0
[ "Chapter 22\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International", "# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *", "In the previous chapter we modeled objects moving in one dimension, with and without drag. Now let's move on to two dimensions, and baseball!\nIn this chapter we model the flight of a baseball including the effect\nof air resistance. In the next chapter we use this model to solve an\noptimization problem.\nBaseball\nTo model the flight of a baseball, we have to make some\ndecisions. To get started, we'll ignore any spin that might be on the ball, and the resulting Magnus force (see http://modsimpy.com/magnus). Under this assumption, the ball travels in a vertical plane, so we'll run simulations in two dimensions, rather than three.\nAir resistance has a substantial effect on most projectiles in air, so\nwe will include a drag force.\nTo model air resistance, we'll need the mass, frontal area, and drag\ncoefficient of a baseball. Mass and diameter are easy to find (see\nhttp://modsimpy.com/baseball). Drag coefficient is only a little\nharder; according to The Physics of Baseball, the drag coefficient of a baseball is approximately 0.33 (with no units).\nHowever, this value does depend on velocity. At low velocities it\nmight be as high as 0.5, and at high velocities as low as 0.28.\nFurthermore, the transition between these regimes typically happens\nexactly in the range of velocities we are interested in, between 20 m/s and 40 m/s.\nNevertheless, we'll start with a simple model where the drag coefficient does not depend on velocity; as an exercise at the end of the chapter, you will have a chance to implement a more detailed model and see what effect is has on the results.\nBut first we need a new computational tool, the Vector object.\nVectors\nNow that we are working in two dimensions, it will be useful to\nwork with vector quantities, that is, quantities that represent both a magnitude and a direction. We will use vectors to represent positions, velocities, accelerations, and forces in two and three dimensions.\nModSim provides a function called Vector the creates a Pandas Series that contains the components of the vector.\nIn a Vector that represents a position in space, the components are the $x$ and $y$ coordinates in 2-D, plus a $z$ coordinate if the Vector is in 3-D.\nYou can create a Vector by specifying its components. 
The following\nVector represents a point 3 units to the right (or east) and 4 units up (or north) from an implicit origin:", "A = Vector(3, 4)\nA", "You can access the components of a Vector by name using the dot\noperator, like this:", "A.x, A.y", "You can also access them by index using brackets, like this:", "A[0], A[1]", "Vector objects support most mathematical operations, including\naddition and subtraction:", "B = Vector(1, 2)\nB\n\nA + B\n\nA - B", "For the definition and graphical interpretation of these operations, see http://modsimpy.com/vecops.\nWe can specify a Vector with coordinates x and y, as in the previous examples.\nEquivalently, we can specify a Vector with a magnitude and angle.\nMagnitude is the length of the vector: if the Vector represents a position, magnitude is the distance from the origin; if it represents a velocity, magnitude is speed, that is, how fast the object is moving, regardless of direction.\nThe angle of a Vector is its direction, expressed as an angle in radians from the positive $x$ axis. In the Cartesian plane, the angle 0 rad is due east, and the angle $\\pi$ rad is due west.\nModSim provides functions to compute the magnitude and angle of a Vector. For example, here are the magnitude and angle of A:", "\n\nmag = vector_mag(A)\ntheta = vector_angle(A)\nmag, theta", "The magnitude is 5 because the length of A is the hypotenuse of a 3-4-5 triangle.\nThe result from vector_angle is in radians, and most Python functions, like sin and cos, work with radians. \nBut many people think more naturally in degrees. \nFortunately, NumPy provides a function to convert radians to degrees:", "from numpy import rad2deg\n\nangle = rad2deg(theta)\nangle", "And a function to convert degrees to radians:", "from numpy import deg2rad\n\ntheta = deg2rad(angle)\ntheta", "Following convention, I'll use angle for a value in degrees and theta for a value in radians.\nIf you are given an angle and velocity, you can make a Vector using\npol2cart, which converts from polar to Cartesian coordinates. For example, here's a new Vector with the same angle and magnitude of A:", "x, y = pol2cart(theta, mag)\nVector(x, y)", "Another way to represent the direction of A is a unit vector,\nwhich is a vector with magnitude 1 that points in the same direction as\nA. You can compute a unit vector by dividing a vector by its\nmagnitude:", "A / vector_mag(A)", "We can do the same thing using the vector_hat function, so named because unit vectors are conventionally decorated with a hat, like this: $\\hat{A}$.", "vector_hat(A)", "Now let's get back to the game.\nSimulating baseball flight\nLet's simulate the flight of a baseball that is batted from home plate\nat an angle of 45° and initial speed 40 m/s. We'll use the center of home plate as the origin, a horizontal x-axis (parallel to the ground), and vertical y-axis (perpendicular to the ground). The initial height is about 1 m.", "params = Params(\n x = 0, # m\n y = 1, # m\n angle = 45, # degree\n velocity = 40, # m / s\n\n mass = 145e-3, # kg \n diameter = 73e-3, # m \n C_d = 0.33, # dimensionless\n\n rho = 1.2, # kg/m**3\n g = 9.8, # m/s**2\n t_end = 10, # s\n)", "I got the mass and diameter of the baseball from Wikipedia and the coefficient of drag is from The Physics of Baseball:\nThe density of air, rho, is based on a temperature of 20 °C at sea level (see http://modsimpy.com/tempress). 
\nAnd we'll need the acceleration of gravity, g.\nThe following function uses these quantities to make a System object.", "from numpy import pi, deg2rad\n\ndef make_system(params):\n \n # convert angle to degrees\n theta = deg2rad(params.angle)\n \n # compute x and y components of velocity\n vx, vy = pol2cart(theta, params.velocity)\n \n # make the initial state\n init = State(x=params.x, y=params.y, vx=vx, vy=vy)\n \n # compute the frontal area\n area = pi * (params.diameter/2)**2\n\n return System(params,\n init = init,\n area = area,\n )", "make_system uses deg2rad to convert angle to radians and\npol2cart to compute the $x$ and $y$ components of the initial\nvelocity.\ninit is a State object with four state variables:\n\n\nx and y are the components of position.\n\n\nvx and vy are the components of velocity.\n\n\nThe System object also contains t_end, which is 10 seconds, long enough for the ball to land on the ground.\nHere's the System object.", "system = make_system(params)", "And here's the initial State:", "system.init", "Next we need a function to compute drag force:", "def drag_force(V, system):\n rho, C_d, area = system.rho, system.C_d, system.area\n \n mag = rho * vector_mag(V)**2 * C_d * area / 2\n direction = -vector_hat(V)\n f_drag = mag * direction\n return f_drag", "This function takes V as a Vector and returns f_drag as a\nVector. \n\n\nIt uses vector_mag to compute the magnitude of V, \nand the drag equation to compute the magnitude of the drag force, mag.\n\n\nThen it uses vector_hat to compute direction, which is a unit vector in the opposite direction of V.\n\n\nFinally, it computes the drag force vector by multiplying mag and direction.\n\n\nWe can test it like this:", "vx, vy = system.init.vx, system.init.vy\nV_test = Vector(vx, vy)\ndrag_force(V_test, system)", "The result is a Vector that represents the drag force on the baseball, in Newtons, under the initial conditions.\nNow we're ready for a slope function:", "def slope_func(t, state, system):\n x, y, vx, vy = state\n mass, g = system.mass, system.g\n \n V = Vector(vx, vy)\n a_drag = drag_force(V, system) / mass\n a_grav = g * Vector(0, -1)\n \n A = a_grav + a_drag\n \n return V.x, V.y, A.x, A.y", "As usual, the parameters of the slope function are a time, a State object, and a System object. \nIn this example, we don't use t, but we can't leave it out because when run_solve_ivp calls the slope function, it always provides the same arguments, whether they are needed or not.\nslope_func unpacks the State object into variables x, y, vx, and vy.\nThen it packs vx and vy into a Vector, which it uses to compute drag force and acceleration due to drag, a_drag.\nTo represent acceleration due to gravity, it makes a Vector with magnitude g in the negative $y$ direction.\nThe total acceleration of the baseball, A, is the sum of accelerations due to gravity and drag.\nThe return value is a sequence that contains:\n\n\nThe components of velocity, V.x and V.y.\n\n\nThe components of acceleration, A.x and A.y.\n\n\nTogether, these components represent the slope of the state variables, because V is the derivative of position and A is the derivative of velocity.\nAs always, we can test the slope function by running it with the initial conditions:", "slope_func(0, system.init, system)", "Using vectors to represent forces and accelerations makes the code\nconcise, readable, and less error-prone. 
In particular, when we add\na_grav and a_drag, the directions are likely to be correct, because they are encoded in the Vector objects.\nWe're almost ready to run the simulation. The last thing we need is an event function that stops when the ball hits the ground.", "def event_func(t, state, system):\n x, y, vx, vy = state\n return y", "The event function takes the same parameters as the slope function, and returns the $y$ coordinate of position. When the $y$ coordinate passes through 0, the simulation stops.\nAs we did with slope_func, we can test event_func with the initial conditions.", "event_func(0, system.init, system)", "Now we're ready to run the simulation:", "\n\nresults, details = run_solve_ivp(system, slope_func,\n events=event_func)\ndetails.message", "details contains information about the simulation, including a message that indicates that a \"termination event\" occurred; that is, the simulated ball reached the ground.\nresults is a TimeFrame with one column for each of the state variables:", "results.tail()", "We can get the flight time like this:", "flight_time = results.index[-1]\nflight_time", "And the final state like this:", "final_state = results.iloc[-1]\nfinal_state", "The final value of y is close to 0, as it should be. The final value of x tells us how far the ball flew, in meters.", "x_dist = final_state.x\nx_dist", "We can also get the final velocity, like this:", "final_V = Vector(final_state.vx, final_state.vy)\nfinal_V", "The speed of the ball on impact is about 26 m/s, which is substantially slower than the initial velocity, 40 m/s.", "vector_mag(final_V)", "Trajectories\nTo visualize the results, we can plot the $x$ and $y$ components of position like this:", "results.x.plot(color='C4')\nresults.y.plot(color='C2', style='--')\n\ndecorate(xlabel='Time (s)',\n ylabel='Position (m)')", "As expected, the $x$ component increases as the ball moves away from home plate. The $y$ position climbs initially and then descends, falling to 0 m near 5.0 s.\nAnother way to view the results is to plot the $x$ component on the\nx-axis and the $y$ component on the y-axis, so the plotted line follows the trajectory of the ball through the plane:", "def plot_trajectory(results):\n x = results.x\n y = results.y\n make_series(x, y).plot(label='trajectory')\n\n decorate(xlabel='x position (m)',\n ylabel='y position (m)')\n\nplot_trajectory(results)", "This way of visualizing the results is called a trajectory plot (see http://modsimpy.com/trajec).\nA trajectory plot can be easier to interpret than a time series plot,\nbecause it shows what the motion of the projectile would look like (at\nleast from one point of view). Both plots can be useful, but don't get\nthem mixed up! If you are looking at a time series plot and interpreting it as a trajectory, you will be very confused.\nAnimation\nOne of the best ways to visualize the results of a physical model is animation. If there are problems with the model, animation can make them apparent.\nThe ModSimPy library provides animate, which takes as parameters a TimeSeries and a draw function.\nThe draw function should take as parameters a State object and the time. It should draw a single frame of the animation.\nInside the draw function, you almost always have to call set_xlim and set_ylim. 
Otherwise matplotlib auto-scales the axes, which is usually not what you want.", "from matplotlib.pyplot import plot\n\nxlim = results.x.min(), results.x.max()\nylim = results.y.min(), results.y.max()\n\ndef draw_func(t, state):\n plot(state.x, state.y, 'bo')\n decorate(xlabel='x position (m)',\n ylabel='y position (m)',\n xlim=xlim,\n ylim=ylim)\n\n# animate(results, draw_func)", "Exercises\nExercise: Run the simulation with and without air resistance. How wrong would we be if we ignored drag?", "# Hint\n\nsystem2 = make_system(params.set(C_d=0))\n\n# Solution\n\nresults2, details2 = run_solve_ivp(system2, slope_func, \n events=event_func)\ndetails.message\n\n# Solution\n\nplot_trajectory(results)\nplot_trajectory(results2)\n\n# Solution\n\nx_dist2 = results2.iloc[-1].x\nx_dist2\n\n# Solution\n\nx_dist2 - x_dist", "Exercise: The baseball stadium in Denver, Colorado is 1,580 meters above sea level, where the density of air is about 1.0 kg / meter$^3$. How much farther would a ball hit with the same velocity and launch angle travel?", "# Hint\n\nsystem3 = make_system(params.set(rho=1.0))\n\n# Solution\n\nresults3, details3 = run_solve_ivp(system3, slope_func, \n events=event_func)\n\nx_dist3 = results3.iloc[-1].x\nx_dist3\n\n# Solution\n\nx_dist3 - x_dist", "Exercise: The model so far is based on the assumption that coefficient of drag does not depend on velocity, but in reality it does. The following figure, from Adair, The Physics of Baseball, shows coefficient of drag as a function of velocity.\n<img src=\"https://github.com/AllenDowney/ModSimPy/raw/master/figs/baseball_drag.png\" width=\"400\">\nI used an online graph digitizer to extract the data and save it in a CSV file. Here's how we can read it:", "import os\n\nfilename = 'baseball_drag.csv'\n\nif not os.path.exists(filename):\n !wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/baseball_drag.csv\n\nfrom pandas import read_csv\n\n\nbaseball_drag = read_csv(filename)\nmph = Quantity(baseball_drag['Velocity in mph'], units.mph)\nmps = mph.to(units.meter / units.second)\nbaseball_drag.index = mps.magnitude\nbaseball_drag.index.name = 'Velocity in meters per second'\nbaseball_drag.head()\n\ndrag_interp = interpolate(baseball_drag['Drag coefficient'])\nvs = linspace(0, 60)\ncds = drag_interp(vs)\nmake_series(vs, cds).plot()\ndecorate(xlabel='Velocity (m/s)', ylabel='C_d')", "Modify the model to include the dependence of C_d on velocity, and see how much it affects the results.", "# Solution\n\ndef drag_force2(V, system):\n \"\"\"Computes drag force in the opposite direction of `v`.\n \n v: velocity\n system: System object with rho, C_d, area\n \n returns: Vector drag force\n \"\"\"\n rho, area = system.rho, system.area\n \n C_d = drag_interp(vector_mag(V))\n mag = -rho * vector_mag(V)**2 * C_d * area / 2\n direction = vector_hat(V)\n f_drag = direction * mag\n return f_drag\n\n# Solution\n\ndef slope_func2(t, state, system):\n x, y, vx, vy = state\n mass, g = system.mass, system.g\n \n V = Vector(vx, vy)\n a_drag = drag_force2(V, system) / mass\n a_grav = g * Vector(0, -1)\n \n A = a_grav + a_drag\n \n return V.x, V.y, A.x, A.y\n\n# Solution\n\nsystem4 = make_system(params)\n\n# Solution\n\nV = Vector(30, 30)\nf_drag = drag_force(V, system4)\nf_drag\n\n# Solution\n\nslope_func(0, system4.init, system4)\n\n# Solution\n\nresults4, details4 = run_solve_ivp(system4, slope_func2, \n events=event_func)\ndetails4.message\n\n# Solution\n\nresults4.tail()\n\n# Solution\n\nx_dist4 = results4.iloc[-1].x\nx_dist4\n\n# 
Solution\n\nx_dist4 - x_dist", "Under the hood\nVector" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Raag079/self-driving-car
Term01-Computer-Vision-and-Deep-Learning/P1-LaneLines/P1.ipynb
mit
[ "Finding Lane Lines on the Road\n\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\n\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\nNote If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".\n\nThe tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.\n\n<figure>\n <img src=\"line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n<p></p>\n<figure>\n <img src=\"laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>", "#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline\n\n#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimesions:', image.shape)\nplt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image", "Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:\ncv2.inRange() for color selection\ncv2.fillPoly() for regions selection\ncv2.line() to draw lines on an image given endpoints\ncv2.addWeighted() to coadd / overlay two images\ncv2.cvtColor() to grayscale or change color\ncv2.imwrite() to output images to file\ncv2.bitwise_and() to apply a mask to an image\nCheck out the OpenCV documentation to learn about these and discover even more awesome functionality!\nBelow are some helper functions to help get you started. 
They should look familiar from the lesson!", "import math\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n\ndef draw_lines(img, lines, color=[255, 0, 0], thickness=2):\n \"\"\"\n NOTE: this is the function you might want to use as a starting point once you want to \n average/extrapolate the line segments you detect to map out the full\n extent of the lane (going from the result shown in raw-lines-example.mp4\n to that shown in P1_example.mp4). \n \n Think about things like separating line segments by their \n slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n line vs. the right line. Then, you can average the position of each of \n the lines and extrapolate to the top and bottom of the lane.\n \n This function draws `lines` with `color` and `thickness`. 
\n Lines are drawn on the image inplace (mutates the image).\n If you want to make the lines semi-transparent, think about combining\n this function with the weighted_img() function below\n \"\"\"\n for line in lines:\n for x1,y1,x2,y2 in line:\n cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., λ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + λ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, λ)", "Test on Images\nNow you should build your pipeline to work on the images in the directory \"test_images\"\nYou should make sure your pipeline works well on these images before you try the videos.", "import os\nimages = os.listdir(\"test_images/\")\n\n# Create a directory to save processed images\nprocessed_directory_name = \"processed_images\"\n\nif not os.path.exists(processed_directory_name):\n os.mkdir(processed_directory_name)", "run your solution on all test_images and make copies into the test_images directory).", "# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images directory.\n\n# kernel_size for gaussian blur\nkernel_size = 5\n\n# thresholds for canny edge\nlow_threshold = 60\nhigh_threshold = 140\n\n# constants for Hough transformation \nrho = 1 # distance resolution in pixels of the Hough grid\ntheta = np.pi/180 # angular resolution in radians of the Hough grid\nthreshold = 20 # minimum number of votes (intersections in Hough grid cell)\nmin_line_len = 30 #minimum number of pixels making up a line\nmax_line_gap = 150 # maximum gap in pixels between connectable line segments\n\n# vertices for polygon with area of interest \nleft_bottom = [50, 539]\nright_bottom = [900, 539]\napex = [470, 320]\n\nvertices = [left_bottom, right_bottom, apex]\n\ndef image_process_pipeline(image):\n \n # Convert GrayScale\n image_grayscale = grayscale(image)\n #plt.imshow(image_grayscale)\n\n # Apply Gaussian Blur\n image_gaussianBlur = gaussian_blur(image_grayscale, kernel_size)\n #plt.imshow(image_gaussianBlur)\n\n # Detect Edges\n image_cannyEdge = canny(image_gaussianBlur, low_threshold, high_threshold)\n #plt.imshow(image_cannyEdge)\n\n # Mask edges to area of interest\n imshape = image.shape\n vertices = np.array([[(0,imshape[0]),(450, 320), (490, 320), (imshape[1],imshape[0])]], dtype=np.int32)\n\n image_Mask = region_of_interest(image_cannyEdge, vertices)\n #plt.imshow(image_Mask)\n\n # Detect Hough Lines and draw lines on blank image\n image_houghLines = hough_lines(image_Mask, rho, theta, threshold, min_line_len, max_line_gap)\n #plt.imshow(image_houghLines)\n\n # Draw lines on original image\n image_linesAndEdges = weighted_img(image_houghLines, image)\n \n return image_linesAndEdges\n\n\nfor raw_image in 
images:\n    \n    # Read Image as Matrix\n    #image = mpimg.imread(\"test_images/\"+images[0])\n    image = mpimg.imread(\"test_images/\"+raw_image)\n\n    result = image_process_pipeline(image)\n    \n    # Show processed image\n    plt.imshow(result)\n\n    # Save the image\n    mpimg.imsave(os.path.join(processed_directory_name, \"processed\"+raw_image), result)\n    ", "Test on Videos\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\nWe can test our solution on two provided videos:\nsolidWhiteRight.mp4\nsolidYellowLeft.mp4", "# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\n\ndef process_image(image):\n    # NOTE: The output you return should be a color image (3 channel) for processing video below\n    # TODO: put your pipeline here,\n    # you should return the final output (image with lines are drawn on lanes)\n    result = image_process_pipeline(image)\n    return result", "Let's try the one with the solid white lane on the right first ...", "white_output = 'white.mp4'\nclip1 = VideoFileClip(\"solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)", "Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.", "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n  <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))", "At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.\nNow for the one with the solid yellow lane on the left. This one's more tricky!", "yellow_output = 'yellow.mp4'\nclip2 = VideoFileClip('solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n  <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))", "Reflections\nCongratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?\nPlease add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!\nReflection about Project\nSo far I have applied the knowledge that I gained through the lectures to implement the lane finding project. I couldn't solve the optional challenge. My algorithm could be made better if it detected and removed spurious dots adjacent to the lanes. It could be made more robust by tweaking the Canny edge detection and Hough transform parameters to avoid drawing lines on trees and other objects that are wrongly detected as lanes.\nSubmission\nIf you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.\nOptional Challenge\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? 
If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!", "challenge_output = 'extra.mp4'\nclip2 = VideoFileClip('challenge.mp4')\nchallenge_clip = clip2.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wireservice/agate
example.py.ipynb
mit
[ "Using agate in a Jupyter notebook\nFirst we import agate. Then we create an agate Table by loading data from a CSV file.", "import agate\n\ntable = agate.Table.from_csv('examples/realdata/ks_1033_data.csv')\n\ntable", "Question 1: What was the total cost to Kansas City area counties?\nTo answer this question, we first must filter the table to only those rows which refer to a Kansas City area county.", "kansas_city = table.where(lambda r: r['county'] in ('JACKSON', 'CLAY', 'CASS', 'PLATTE'))\n\nprint(len(table.rows))\nprint(len(kansas_city.rows))", "We can then print the Sum of the costs of all those rows. (The cost column is named total_cost.)", "print('$%d' % kansas_city.aggregate(agate.Sum('total_cost')))", "Question 2: Which counties spent the most?\nThis question is more complicated. First we group the data by county, which gives us a TableSet named counties. A TableSet is a group of tables with the same columns.", "# Group by county\ncounties = table.group_by('county')\n\nprint(counties.keys())", "We then use the aggregate function to sum the total_cost column for each table in the group. The resulting values are collapsed into a new table, totals, which has a row for each county and a column named total_cost_sum containing the new total.", "# Aggregate totals for all counties\ntotals = counties.aggregate([\n ('total_cost_sum', agate.Sum('total_cost'),)\n])\n\nprint(totals.column_names)", "Finally, we sort the counties by their total cost, limit the results to the top 10 and then print the results as a text bar chart.", "totals.order_by('total_cost_sum', reverse=True).limit(20).print_bars('county', 'total_cost_sum', width=100)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rdhyee/ipython-spark
clustering.ipynb
apache-2.0
[ "local mode", "from pyspark import (SparkContext, SparkConf)\n\n# http://spark.apache.org/docs/1.2.0/configuration.html\nconf = SparkConf()\n\n# https://spark.apache.org/faq.html\n# local[N] or local[*]\nconf.setMaster(\"local[10]\").setAppName(\"Simple App\")\n#conf.set(\"spark.cores.max\", \"10\")\n\nsc = SparkContext(conf=conf)\n\n#sc = SparkContext(master=\"local\", appName=\"Simple App\")\nr = sc.parallelize(range(10000))\n\nfrom math import factorial, log10\n\nfact_sum = r.map(factorial).sum()\nlog10(fact_sum)", "stand-alone mode\nhttp://spark.apache.org/docs/1.2.0/spark-standalone.html", "!ls /spark/sbin\n\n!ls ./sbin/start-master.sh", "I will move to mesos, because mesos support docker\nmesos\nhttp://spark.apache.org/docs/1.2.0/running-on-mesos.html\nMaybe try https://elastic.mesosphere.io/ --> but is there a script for this?\nhttp://mesos.apache.org/documentation/latest/ec2-scripts/\nhttps://digitalocean.mesosphere.com/clusters/new\n17 steps in https://mesosphere.com/docs/tutorials/run-spark-on-mesos/ ???" ]
[ "markdown", "code", "markdown", "code", "markdown" ]