| repo_name (stringlengths 6-77) | path (stringlengths 8-215) | license (stringclasses, 15 values) | content (stringlengths 335-154k) |
|---|---|---|---|
| jamesfolberth/NGC_STEM_camp_AWS | notebooks/data8_notebooks/lab07/lab07.ipynb | bsd-3-clause |
# Run this cell, but please don't change it.
# These lines import the Numpy and Datascience modules.
import numpy as np
from datascience import *
# These lines do some fancy plotting magic.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
# These lines load the tests.
from client.api.assignment import load_assignment
tests = load_assignment('lab07.ok')
"""
Explanation: Lab 7: Regression
Welcome to Lab 7!
Today we will get some hands-on practice with linear regression. You can find more information about this topic in
section 13.2.
Administrative details
Lab submissions are due by Friday, November 4 at 7:00 PM. Remember to submit your lab by running all the tests and then running the final cell in the lab.
End of explanation
"""
# For the curious: this is how to display a YouTube video in a
# Jupyter notebook. The argument to YouTubeVideo is the part
# of the URL (called a "query parameter") that identifies the
# video. For example, the full URL for this video is:
# https://www.youtube.com/watch?v=wE8NDuzt8eg
from IPython.display import YouTubeVideo
YouTubeVideo("wE8NDuzt8eg")
"""
Explanation: 1. How Faithful is Old Faithful?
(Note: clever title comes from here.)
Old Faithful is a geyser in Yellowstone National Park in the central United States. It's famous for erupting on a fairly regular schedule. You can see a video below.
End of explanation
"""
faithful = Table.read_table("faithful.csv")
faithful
"""
Explanation: Some of Old Faithful's eruptions last longer than others. When it has a long eruption, there's generally a longer wait until the next eruption.
If you visit Yellowstone, you might want to predict when the next eruption will happen, so you can see the rest of the park and come to see the geyser when it happens. Today, we will use a dataset on eruption durations and waiting times to see if we can make such predictions accurately with linear regression.
The dataset has one row for each observed eruption. It includes the following columns:
- duration: Eruption duration, in minutes
- wait: Time between this eruption and the next, also in minutes
Run the next cell to load the dataset.
End of explanation
"""
...
"""
Explanation: We would like to use linear regression to make predictions, but that won't work well if the data aren't roughly linearly related. To check that, we should look at the data.
Question 1
Make a scatter plot of the data. It's conventional to put the column we will try to predict on the vertical axis and the other column on the horizontal axis.
End of explanation
"""
duration_mean = ...
duration_std = ...
wait_mean = ...
wait_std = ...
faithful_standard = Table().with_columns(
    "duration (standard units)", ...,
    "wait (standard units)", ...)
faithful_standard
_ = tests.grade('q1_3')
"""
Explanation: Question 2
Look at the scatter plot. Are eruption duration and waiting time roughly linearly related? Is the relationship positive, as we claimed earlier? You may want to consult the textbook chapter 13 for the definition of "linearly related."
Write your answer here, replacing this text.
We're going to continue with the provisional assumption that they are linearly related, so it's reasonable to use linear regression to analyze this data.
We'd next like to plot the data in standard units. Recall that, if nums is an array of numbers, then
(nums - np.mean(nums)) / np.std(nums)
...is an array of those numbers in standard units.
Question 3
Compute the mean and standard deviation of the eruption durations and waiting times. Then create a table called faithful_standard containing the eruption durations and waiting times in standard units. (The columns should be named "duration (standard units)" and "wait (standard units)".)
End of explanation
"""
...
"""
Explanation: Question 4
Plot the data again, but this time in standard units.
End of explanation
"""
r = ...
r
_ = tests.grade('q1_6')
"""
Explanation: You'll notice that this plot looks exactly the same as the last one! The data really are different, but the axes are scaled differently. (The method scatter scales the axes so the data fill up the available space.) So it's important to read the ticks on the axes.
Question 5
Among the following numbers, which would you guess is closest to the correlation between eruption duration and waiting time in this dataset?
-1
0
1
Write your answer here, replacing this text.
Question 6
Compute the correlation r. Hint: Use faithful_standard. Section 13.1 explains how to do this.
End of explanation
"""
def plot_data_and_line(dataset, x, y, point_0, point_1):
    """Makes a scatter plot of the dataset, along with a line passing through two points."""
    dataset.scatter(x, y, label="data")
    plt.plot(make_array(point_0.item(0), point_1.item(0)), make_array(point_0.item(1), point_1.item(1)), label="regression line")
    plt.legend(bbox_to_anchor=(1.5,.8))
plot_data_and_line(faithful_standard, "duration (standard units)", "wait (standard units)", make_array(-2, -2*r), make_array(2, 2*r))
"""
Explanation: 2. The regression line
Recall that the correlation is the slope of the regression line when the data are put in standard units.
The next cell plots the regression line in standard units:
$$\text{waiting time (standard units)} = r \times \text{eruption duration (standard units)}.$$
Then, it plots the original data again, for comparison.
End of explanation
"""
slope = ...
slope
"""
Explanation: How would you take a point in standard units and convert it back to original units? We'd have to "stretch" its horizontal position by duration_std and its vertical position by wait_std.
That means the same thing would happen to the slope of the line.
Stretching a line horizontally makes it less steep, so we divide the slope by the stretching factor. Stretching a line vertically makes it more steep, so we multiply the slope by the stretching factor.
Question 1
What is the slope of the regression line in original units?
(If the "stretching" explanation is unintuitive, consult section 13.2 in the textbook.)
End of explanation
"""
intercept = slope*(-duration_mean) + wait_mean
intercept
_ = tests.grade('q2_1')
"""
Explanation: We know that the regression line passes through the point (duration_mean, wait_mean). You might recall from high-school algebra that the equation for the line is therefore:
$$\text{waiting time} - \texttt{wait\_mean} = \texttt{slope} \times (\text{eruption duration} - \texttt{duration\_mean})$$
After rearranging that equation slightly, the intercept turns out to be:
End of explanation
"""
two_minute_predicted_waiting_time = ...
five_minute_predicted_waiting_time = ...
# Here is a helper function to print out your predictions
# (you don't need to modify it):
def print_prediction(duration, predicted_waiting_time):
    print("After an eruption lasting", duration,
          "minutes, we predict you'll wait", predicted_waiting_time,
          "minutes until the next eruption.")
print_prediction(2, two_minute_predicted_waiting_time)
print_prediction(5, five_minute_predicted_waiting_time)
"""
Explanation: 3. Investigating the regression line
The slope and intercept tell you exactly what the regression line looks like. To predict the waiting time for an eruption, multiply the eruption's duration by slope and then add intercept.
Question 1
Compute the predicted waiting time for an eruption that lasts 2 minutes, and for an eruption that lasts 5 minutes.
End of explanation
"""
plot_data_and_line(faithful, "duration", "wait", make_array(2, two_minute_predicted_waiting_time), make_array(5, five_minute_predicted_waiting_time))
"""
Explanation: The next cell plots the line that goes between those two points, which is (a segment of) the regression line.
End of explanation
"""
faithful_predictions = ...
faithful_predictions
_ = tests.grade("q3_2")
"""
Explanation: Question 2
Make predictions for the waiting time after each eruption in the faithful table. (Of course, we know exactly what the waiting times were! We are doing this so we can see how accurate our predictions are.) Put these numbers into a column in a new table called faithful_predictions. Its first row should look like this:
|duration|wait|predicted wait|
|-|-|-|
|3.6|79|72.1011|
Hint: Your answer can be just one line. There is no need for a for loop; use array arithmetic instead.
End of explanation
"""
faithful_residuals = ...
faithful_residuals
_ = tests.grade("q3_3")
"""
Explanation: Question 3
How close were we? Compute the residual for each eruption in the dataset. The residual is the difference (not the absolute difference) between the actual waiting time and the predicted waiting time. Add the residuals to faithful_predictions as a new column called "residual", naming the resulting table faithful_residuals.
Hint: Again, your code will be much simpler if you don't use a for loop.
End of explanation
"""
faithful_residuals.scatter("duration", "residual", color="r")
"""
Explanation: Here is a plot of the residuals you computed. Each point corresponds to one eruption. It shows how much our prediction over- or under-estimated the waiting time.
End of explanation
"""
faithful_residuals.scatter("duration", "wait", label="actual waiting time", color="blue")
plt.scatter(faithful_residuals.column("duration"), faithful_residuals.column("residual"), label="residual", color="r")
plt.plot(make_array(2, 5), make_array(two_minute_predicted_waiting_time, five_minute_predicted_waiting_time), label="regression line")
plt.legend(bbox_to_anchor=(1.7,.8));
"""
Explanation: There isn't really a pattern in the residuals, which confirms that it was reasonable to try linear regression. It's true that there are two separate clouds; the eruption durations seemed to fall into two distinct clusters. But that's just a pattern in the eruption durations, not a pattern in the relationship between eruption durations and waiting times.
4. How accurate are different predictions?
Earlier, you should have found that the correlation is fairly close to 1, so the line fits fairly well on the training data. That means the residuals are overall small (close to 0) in comparison to the waiting times.
We can see that visually by plotting the waiting times and residuals together:
End of explanation
"""
zero_minute_predicted_waiting_time = ...
two_point_five_minute_predicted_waiting_time = ...
hour_predicted_waiting_time = ...
print_prediction(0, zero_minute_predicted_waiting_time)
print_prediction(2.5, two_point_five_minute_predicted_waiting_time)
print_prediction(60, hour_predicted_waiting_time)
_ = tests.grade('q4_1')
"""
Explanation: However, unless you have a strong reason to believe that the linear regression model is true, you should be wary of applying your prediction model to data that are very different from the training data.
Question 1
In faithful, no eruption lasted exactly 0, 2.5, or 60 minutes. Using this line, what is the predicted waiting time for an eruption that lasts 0 minutes? 2.5 minutes? An hour?
End of explanation
"""
# For your convenience, you can run this cell to run all the tests at once!
import os
print("Running all tests...")
_ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
print("Finished running all tests.")
# Run this cell to submit your work *after* you have passed all of the test cells.
# It's ok to run this cell multiple times. Only your final submission will be scored.
!TZ=America/Los_Angeles jupyter nbconvert --output=".lab07_$(date +%m%d_%H%M)_submission.html" lab07.ipynb && echo "Submitted successfully."
"""
Explanation: Question 2. Do you believe any of these values are reliable predictions? If you don't believe some of them, say why.
Write your answer here, replacing this text.
End of explanation
"""
|
| Danghor/Formal-Languages | ANTLR4-Python/Calculator/Calculator.ipynb | gpl-2.0 |
!cat -n Program.g4
"""
Explanation: Embedded Actions in <span style="font-variant:small-caps;">Antlr</span> Grammars
The pure grammar is stored in the file Grammar.g4.
End of explanation
"""
!cat -n Calculator.g4
"""
Explanation: The grammar shown above has no semantic actions (with the exception of the skip action).
We now extend this grammar with semantic actions so that we can actually compute something.
The extended grammar is stored in the file Calculator.g4. It describes a language for a
symbolic calculator: this calculator is able to evaluate arithmetic expressions and, furthermore,
to store the results of our computations in variables.
End of explanation
"""
!antlr4 -Dlanguage=Python3 Calculator.g4
"""
Explanation: First, we have to generate both the scanner and the parser.
End of explanation
"""
!ls -l
"""
Explanation: We can use the system command ls to see which files have been generated by <span style="font-variant:small-caps;">Antlr</span>.
If you are using a Windows system, you have to use the command dir instead.
End of explanation
"""
from CalculatorLexer import CalculatorLexer
from CalculatorParser import CalculatorParser
import antlr4
"""
Explanation: The files CalculatorLexer.py and CalculatorParser.py contain the generated scanner and parser, respectively. We have to import these files. Furthermore, the runtime of
<span style="font-variant:small-caps;">Antlr</span>
needs to be imported.
End of explanation
"""
def main():
    parser = CalculatorParser(None)  # generate parser without lexer
    parser.Values = {}
    line = input('> ')
    while line != '':
        input_stream = antlr4.InputStream(line)
        lexer = CalculatorLexer(input_stream)
        token_stream = antlr4.CommonTokenStream(lexer)
        parser.setInputStream(token_stream)
        parser.start()
        line = input('> ')
    return parser.Values
main()
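# A non-interactive sketch (not part of the original notebook): push a single
# string through the same lexer/parser pipeline that main() uses above. The
# input below is only an assumed example; its syntax has to match whatever
# Calculator.g4 actually accepts.
def parse_string(text):
    input_stream = antlr4.InputStream(text)
    lexer = CalculatorLexer(input_stream)
    token_stream = antlr4.CommonTokenStream(lexer)
    parser = CalculatorParser(token_stream)
    parser.Values = {}      # same variable store that main() attaches
    parser.start()          # same start rule that main() invokes
    return parser.Values

parse_string('x := 1 + 2 * 3;')  # assumed syntax, adjust to the grammar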
!rm *.py *.tokens *.interp
!rm -r __pycache__/
!ls -l
"""
Explanation: Let us parse and evaluate the input that we read from a prompt.
End of explanation
"""
|
| phoebe-project/phoebe2-docs | development/tutorials/undo_redo.ipynb | gpl-3.0 |
!pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Advanced: Undo/Redo
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger(clevel='INFO')
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.enable_history()
"""
Explanation: Enabling/Disabling Logging History
Undo and redo support is built directly into the bundle. Every time you make a change to a parameter or call a method, a new Parameter is created with the 'history' context. These parameters then know how to undo or redo that action. Of course, this can result in a large list of Parameters that you may not want - see the tutorial on Settings for more details on how to change the log size or enable/disable history entirely.
By default history logging is off, so let's first enable it.
End of explanation
"""
b['ra@system']
b['ra@system'] = 10
b['ra@system']
"""
Explanation: Undoing
First let's set a value so we know what we're expecting to undo
End of explanation
"""
b.get_history(-1)
print(b.get_history(-1)['redo_func'], b.get_history(-1)['redo_kwargs'])
"""
Explanation: The history context contains a list of parameters, all numbered and ugly. But there is a convenience method which allows you to get history items by index - including reverse indexing. This is probably the most common way to get a history item... and you'll most likely want the LATEST item.
End of explanation
"""
print(b.get_history(-1)['undo_func'], b.get_history(-1)['undo_kwargs'])
"""
Explanation: Here you can see that redo_func and redo_kwargs show exactly the last call we made to the bundle that actually changed something (we did b['ra@system'] = 10).
We can also look at what will be called when we undo this item
End of explanation
"""
b.undo()
"""
Explanation: If we want, we can then automatically call that undo method (note that you can also pass the index to undo, but it does assume -1 by default)
End of explanation
"""
b['ra@system']
"""
Explanation: And we can see that it did exactly what we expected.
End of explanation
"""
|
| ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/sandbox-3/atmos.ipynb | gpl-3.0 |
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
sf-wind/caffe2 | caffe2/python/tutorials/Basics.ipynb | apache-2.0
# We'll also import a few standard python libraries
from matplotlib import pyplot
import numpy as np
import time
# These are the droids you are looking for.
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2
# Let's show all plots inline.
%matplotlib inline
"""
Explanation: Caffe2 Basic Concepts - Operators & Nets
In this tutorial we will go through a set of Caffe2 basics: the core concepts, including how operators and nets are written.
First, let's import caffe2. core and workspace are usually the two that you need most. If you want to manipulate protocol buffers generated by caffe2, you probably also want to import caffe2_pb2 from caffe2.proto.
End of explanation
"""
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
print("Workspace has blob 'X'? {}".format(workspace.HasBlob("X")))
"""
Explanation: You might see a warning saying that caffe2 does not have GPU support. That means you are running a CPU-only build. Don't be alarmed - anything CPU is still runnable without problem.
Workspaces
Let's cover workspaces first, where all the data reside.
If you are familiar with Matlab, the workspace consists of blobs you create and store in memory. For now, consider a blob to be an N-dimensional Tensor similar to numpy's ndarray, but contiguous. Down the road, we will show you that a blob is actually a typed pointer that can store any type of C++ object, but Tensor is the most common type stored in a blob. Let's show what the interface looks like.
Blobs() prints out all existing blobs in the workspace.
HasBlob() queries if a blob exists in the workspace. For now, we don't have anything yet.
End of explanation
"""
X = np.random.randn(2, 3).astype(np.float32)
print("Generated X from numpy:\n{}".format(X))
workspace.FeedBlob("X", X)
"""
Explanation: We can feed blobs into the workspace using FeedBlob().
End of explanation
"""
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
print("Workspace has blob 'X'? {}".format(workspace.HasBlob("X")))
print("Fetched X:\n{}".format(workspace.FetchBlob("X")))
"""
Explanation: Now, let's take a look what blobs there are in the workspace.
End of explanation
"""
np.testing.assert_array_equal(X, workspace.FetchBlob("X"))
"""
Explanation: Let's verify that the arrays are equal.
End of explanation
"""
try:
workspace.FetchBlob("invincible_pink_unicorn")
except RuntimeError as err:
print(err)
"""
Explanation: Also, if you are trying to access a blob that does not exist, an error will be thrown:
End of explanation
"""
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
# Switch the workspace. The second argument "True" means creating
# the workspace if it is missing.
workspace.SwitchWorkspace("gutentag", True)
# Let's print the current workspace. Note that there is nothing in the
# workspace yet.
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
"""
Explanation: One thing that you might not use immediately: you can have multiple workspaces in Python using different names, and switch between them. Blobs in different workspaces are separate from each other. You can query the current workspace using CurrentWorkspace. Let's try switching the workspace by name (gutentag) and creating a new one if it doesn't exist.
End of explanation
"""
workspace.SwitchWorkspace("default")
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
"""
Explanation: Let's switch back to the default workspace.
End of explanation
"""
workspace.ResetWorkspace()
"""
Explanation: Finally, ResetWorkspace() clears anything that is in the current workspace.
End of explanation
"""
# Create an operator.
op = core.CreateOperator(
"Relu", # The type of operator that we want to run
["X"], # A list of input blobs by their names
["Y"], # A list of output blobs by their names
)
# and we are done!
"""
Explanation: Operators
Operators in Caffe2 are kind of like functions. From the C++ side, they all derive from a common interface, and are registered by type, so that we can call different operators during runtime. The interface of operators is defined in caffe2/proto/caffe2.proto. Basically, it takes in a bunch of inputs, and produces a bunch of outputs.
Remember, when we say "create an operator" in Caffe2 Python, nothing gets run yet. All it does is to create the protocol buffere that specifies what the operator should be. At a later time it will be sent to the C++ backend for execution. If you are not familiar with protobuf, it is a json-like serialization tool for structured data. Find more about protocol buffers here.
Let's see an actual example.
End of explanation
"""
print("Type of the created op is: {}".format(type(op)))
print("Content:\n")
print(str(op))
"""
Explanation: As we mentioned, the created op is actually a protobuf object. Let's show the content.
End of explanation
"""
workspace.FeedBlob("X", np.random.randn(2, 3).astype(np.float32))
workspace.RunOperatorOnce(op)
"""
Explanation: OK, let's run the operator. We first feed in the input X to the workspace.
Then the simplest way to run an operator is to do workspace.RunOperatorOnce(operator)
End of explanation
"""
print("Current blobs in the workspace: {}\n".format(workspace.Blobs()))
print("X:\n{}\n".format(workspace.FetchBlob("X")))
print("Y:\n{}\n".format(workspace.FetchBlob("Y")))
print("Expected:\n{}\n".format(np.maximum(workspace.FetchBlob("X"), 0)))
"""
Explanation: After execution, let's see if the operator is doing the right thing, which is our neural network's activation function (Relu) in this case.
End of explanation
"""
op = core.CreateOperator(
"GaussianFill",
[], # GaussianFill does not need any parameters.
["Z"],
shape=[100, 100], # shape argument as a list of ints.
mean=1.0, # mean as a single float
std=1.0, # std as a single float
)
print("Content of op:\n")
print(str(op))
"""
Explanation: This is working if your Expected output matches your Y output in this example.
Operators also take optional arguments if needed. They are specified as key-value pairs. Let's take a look at one simple example, which takes a tensor and fills it with Gaussian random variables.
End of explanation
"""
workspace.RunOperatorOnce(op)
temp = workspace.FetchBlob("Z")
pyplot.hist(temp.flatten(), bins=50)
pyplot.title("Distribution of Z")
"""
Explanation: Let's run it and see if things are as intended.
End of explanation
"""
net = core.Net("my_first_net")
print("Current network proto:\n\n{}".format(net.Proto()))
"""
Explanation: If you see a bell shaped curve then it worked!
Nets
Nets are essentially computation graphs. We keep the name Net for backward consistency (and also to pay tribute to neural nets). A Net is composed of multiple operators just like a program written as a sequence of commands. Let's take a look.
When we talk about nets, we will also talk about BlobReference, which is an object that wraps around a string so we can do easy chaining of operators.
Let's create a network that is essentially the equivalent of the following python math:
X = np.random.randn(2, 3)
W = np.random.randn(5, 3)
b = np.ones(5)
Y = X * W^T + b
We'll show the progress step by step. Caffe2's core.Net is a wrapper class around a NetDef protocol buffer.
When creating a network, its underlying protocol buffer is essentially empty other than the network name. Let's create the net and then show the proto content.
End of explanation
"""
X = net.GaussianFill([], ["X"], mean=0.0, std=1.0, shape=[2, 3], run_once=0)
print("New network proto:\n\n{}".format(net.Proto()))
"""
Explanation: Let's create a blob called X, and use GaussianFill to fill it with some random data.
End of explanation
"""
print("Type of X is: {}".format(type(X)))
print("The blob name is: {}".format(str(X)))
"""
Explanation: You might have observed a few differences from the earlier core.CreateOperator call. Basically, when we have a net, you can directly create an operator and add it to the net at the same time using Python tricks: essentially, if you call net.SomeOp where SomeOp is a registered type string of an operator, it gets translated to
op = core.CreateOperator("SomeOp", ...)
net.Proto().op.append(op)
Also, you might be wondering what X is. X is a BlobReference which basically records two things:
- what its name is. You can access the name by str(X)
- which net it gets created from. It is recorded by an internal variable _from_net, but most likely
you won't need that.
Let's verify it. Also, remember, we are not actually running anything yet, so X contains nothing but a symbol. Don't expect to get any numerical values out of it right now :)
End of explanation
"""
W = net.GaussianFill([], ["W"], mean=0.0, std=1.0, shape=[5, 3], run_once=0)
b = net.ConstantFill([], ["b"], shape=[5,], value=1.0, run_once=0)
"""
Explanation: Let's continue to create W and b.
End of explanation
"""
Y = X.FC([W, b], ["Y"])
"""
Explanation: Now, one simple code sugar: since the BlobReference objects know what net it is generated from, in addition to creating operators from net, you can also create operators from BlobReferences. Let's create the FC operator in this way.
End of explanation
"""
print("Current network proto:\n\n{}".format(net.Proto()))
"""
Explanation: Under the hood, X.FC(...) simply delegates to net.FC by inserting X as the first input of the corresponding operator, so what we did above is equivalent to
Y = net.FC([X, W, b], ["Y"])
Let's take a look at the current network.
End of explanation
"""
from caffe2.python import net_drawer
from IPython import display
graph = net_drawer.GetPydotGraph(net, rankdir="LR")
display.Image(graph.create_png(), width=800)
"""
Explanation: Too verbose huh? Let's try to visualize it as a graph. Caffe2 ships with a very minimal graph visualization tool for this purpose. Let's show that in ipython.
End of explanation
"""
workspace.ResetWorkspace()
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.RunNetOnce(net)
print("Blobs in the workspace after execution: {}".format(workspace.Blobs()))
# Let's dump the contents of the blobs
for name in workspace.Blobs():
print("{}:\n{}".format(name, workspace.FetchBlob(name)))
"""
Explanation: So we have defined a Net, but nothing gets executed yet. Remember that the net above is essentially a protobuf that holds the definition of the network. When we actually want to run the network, what happens under the hood is:
- Instantiate a C++ net object from the protobuf;
- Call the instantiated net's Run() function.
Before we do anything, we should clear any earlier workspace variables with ResetWorkspace().
Then there are two ways to run a net from Python. We will do the first option in the example below.
1. Using workspace.RunNetOnce(), which instantiates, runs and immediately destructs the network.
2. A little bit more complex, involving two steps:
   (a) call workspace.CreateNet() to create the C++ net object owned by the workspace, and
   (b) use workspace.RunNet() by passing the name of the network to it.
End of explanation
"""
workspace.ResetWorkspace()
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.CreateNet(net)
workspace.RunNet(net.Proto().name)
print("Blobs in the workspace after execution: {}".format(workspace.Blobs()))
for name in workspace.Blobs():
print("{}:\n{}".format(name, workspace.FetchBlob(name)))
"""
Explanation: Now let's try the second way to create the net, and run it. First clear the variables with ResetWorkspace(), then create the C++ net object owned by the workspace with CreateNet(net_object), passing the net you defined earlier, and finally run the net by name with RunNet(net_name).
End of explanation
"""
# It seems that %timeit magic does not work well with
# C++ extensions so we'll basically do for loops
start = time.time()
for i in range(1000):
workspace.RunNetOnce(net)
end = time.time()
print('Run time per RunNetOnce: {}'.format((end - start) / 1000))
start = time.time()
for i in range(1000):
workspace.RunNet(net.Proto().name)
end = time.time()
print('Run time per RunNet: {}'.format((end - start) / 1000))
"""
Explanation: There are a few differences between RunNetOnce and RunNet, but probably the main difference is the computation time overhead. Since RunNetOnce involves serializing the protobuf to pass between Python and C++ and instantiating the network, it may take longer to run. Let's see what the overhead is in this case.
End of explanation
"""
fluxcapacitor/source.ml | jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/Conferences/ODSC/MasterClass/Mar-01-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb | apache-2.0
import numpy as np
import os
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
import time
# make things wide
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def=None, width=1200, height=800, max_const_size=32, ungroup_gradients=False):
if not graph_def:
graph_def = tf.get_default_graph().as_graph_def()
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
data = str(strip_def)
if ungroup_gradients:
data = data.replace('"gradients/', '"b_')
#print(data)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(data), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:{}px;height:{}px;border:0" srcdoc="{}"></iframe>
""".format(width, height, code.replace('"', '"'))
display(HTML(iframe))
# If this errors out, increment the `export_version` variable, restart the Kernel, and re-run
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_integer("batch_size", 10, "The batch size to train")
flags.DEFINE_integer("epoch_number", 10, "Number of epochs to run trainer")
flags.DEFINE_integer("steps_to_validate", 1,
"Steps to validate and print loss")
flags.DEFINE_string("checkpoint_dir", "./checkpoint/",
"indicates the checkpoint directory")
#flags.DEFINE_string("model_path", "./model/", "The export path of the model")
flags.DEFINE_string("model_path", "/root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/", "The export path of the model")
flags.DEFINE_integer("export_version", 27, "The version number of the model")
# If this errors out, increment the `export_version` variable, restart the Kernel, and re-run
def main():
# Define training data
x = np.ones(FLAGS.batch_size)
y = np.ones(FLAGS.batch_size)
# Define the model
X = tf.placeholder(tf.float32, shape=[None], name="X")
Y = tf.placeholder(tf.float32, shape=[None], name="yhat")
w = tf.Variable(1.0, name="weight")
b = tf.Variable(1.0, name="bias")
loss = tf.square(Y - tf.mul(X, w) - b)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
predict_op = tf.mul(X, w) + b
saver = tf.train.Saver()
checkpoint_dir = FLAGS.checkpoint_dir
checkpoint_file = checkpoint_dir + "/checkpoint.ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
# Start the session
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
print("Continue training from the model {}".format(ckpt.model_checkpoint_path))
saver.restore(sess, ckpt.model_checkpoint_path)
saver_def = saver.as_saver_def()
print(saver_def.filename_tensor_name)
print(saver_def.restore_op_name)
# Start training
start_time = time.time()
for epoch in range(FLAGS.epoch_number):
sess.run(train_op, feed_dict={X: x, Y: y})
# Start validating
if epoch % FLAGS.steps_to_validate == 0:
end_time = time.time()
print("[{}] Epoch: {}".format(end_time - start_time, epoch))
saver.save(sess, checkpoint_file)
tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.pb', as_text=False)
tf.train.write_graph(sess.graph_def, checkpoint_dir, 'trained_model.txt', as_text=True)
start_time = end_time
# Print model variables
w_value, b_value = sess.run([w, b])
print("The model of w: {}, b: {}".format(w_value, b_value))
# Export the model
print("Exporting trained model to {}".format(FLAGS.model_path))
model_exporter = exporter.Exporter(saver)
model_exporter.init(
sess.graph.as_graph_def(),
named_graph_signatures={
'inputs': exporter.generic_signature({"features": X}),
'outputs': exporter.generic_signature({"prediction": predict_op})
})
model_exporter.export(FLAGS.model_path, tf.constant(FLAGS.export_version), sess)
print('Done exporting!')
if __name__ == "__main__":
main()
show_graph()
"""
Explanation: Where Am I?
ODSC Masterclass Summit - San Francisco - Mar 01, 2017
Who Am I?
Chris Fregly
Research Scientist @ PipelineIO
Video Series Author "High Performance Tensorflow in Production" @ OReilly (Coming Soon)
Founder @ Advanced Spark and Tensorflow Meetup
Github Repo
DockerHub Repo
Slideshare
YouTube
Who Was I?
Software Engineer @ Netflix, Databricks, IBM Spark Tech Center
1. Infrastructure and Tools
Docker
Images, Containers
Useful Docker Image: AWS + GPU + Docker + Tensorflow + Spark
Kubernetes
Container Orchestration Across Clusters
Weavescope
Kubernetes Cluster Visualization
Jupyter Notebooks
What We're Using Here for Everything!
Airflow
Invoke Any Type of Workflow on Any Type of Schedule
Github
Commit New Model to Github, Airflow Workflow Triggered for Continuous Deployment
DockerHub
Maintains Docker Images
Continuous Deployment
Not Just for Code, Also for ML/AI Models!
Canary Release
Deploy and Compare New Model Alongside Existing
Metrics and Dashboards
Not Just System Metrics, ML/AI Model Prediction Metrics
NetflixOSS-based
Prometheus
Grafana
Elasticsearch
Separate Cluster Concerns
Training/Admin Cluster
Prediction Cluster
Hybrid Cloud Deployment for eXtreme High Availability (XHA)
AWS and Google Cloud
Apache Spark
Tensorflow + Tensorflow Serving
2. Model Deployment Bundles
KeyValue
ie. Recommendations
In-memory: Redis, Memcache
On-disk: Cassandra, RocksDB
First-class Servable in Tensorflow Serving
PMML
It's Useful and Well-Supported
Apple, Cisco, Airbnb, HomeAway, etc
Please Don't Re-build It - Reduce Your Technical Debt!
Native Code
Hand-coded (Python + Pickling)
Generate Java Code from PMML?
Tensorflow Model Exports
freeze_graph.py: Combine Tensorflow Graph (Static) with Trained Weights (Checkpoints) into Single Deployable Model (see the sketch after this outline)
3. Model Deployments and Rollbacks
Mutable
Each New Model is Deployed to Live, Running Container
Immutable
Each New Model is a New Docker Image
4. Optimizing Tensorflow Models for Serving
Python Scripts
optimize_graph_for_inference.py
Pete Warden's Blog
Graph Transform Tool
Compile (Tensorflow 1.0+)
XLA Compiler
Compiles 3 graph operations (input, operation, output) into 1 operation
Removes need for Tensorflow Runtime (20 MB is significant on tiny devices)
Allows new backends for hardware-specific optimizations (better portability)
tfcompile
Convert Graph into executable code
Compress/Distill Ensemble Models
Convert ensembles or other complex models into smaller models
Re-score training data with output of model being distilled
Train smaller model to produce same output
Output of smaller model learns more information than original label
5. Optimizing Serving Runtime Environment
Throughput
Option 1: Add more Tensorflow Serving servers behind load balancer
Option 2: Enable request batching in each Tensorflow Serving
Option Trade-offs: Higher Latency (bad) for Higher Throughput (good)
$TENSORFLOW_SERVING_HOME/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server
--port=9000
--model_name=tensorflow_minimal
--model_base_path=/root/models/tensorflow_minimal/export
--enable_batching=true
--max_batch_size=1000000
--batch_timeout_micros=10000
--max_enqueued_batches=1000000
Latency
The deeper the model, the longer the latency
Start inference in parallel where possible (ie. user inference in parallel with item inference)
Pre-load common inputs from database (ie. user attributes, item attributes)
Pre-compute/partial-compute common inputs (ie. popular word embeddings)
Memory
Word embeddings are huge!
Use hashId for each word
Off-load embedding matrices to parameter server and share between serving servers
6. Demos!!
Train and Deploy Tensorflow AI Model (Simple Model, Immutable Deploy)
Train Tensorflow AI Model
End of explanation
"""
!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export
!ls -l /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027
!git status
!git add --all /root/pipeline/prediction.ml/tensorflow/models/tensorflow_minimal/export/00000027/
!git status
!git commit -m "updated tensorflow model"
!git status
# If this fails with "Permission denied", use terminal within jupyter to manually `git push`
!git push
"""
Explanation: Commit and Deploy New Tensorflow AI Model
Commit Model to Github
End of explanation
"""
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=100% height=500px src="http://demo.pipeline.io:8080/admin">'
display(HTML(html))
"""
Explanation: Airflow Workflow Deploys New Model, Triggered by Github Post-Commit Webhook
End of explanation
"""
!kubectl scale --context=awsdemo --replicas=2 rc spark-worker-2-0-1
!kubectl get pod --context=awsdemo
"""
Explanation: Train and Deploy Spark ML Model (Airbnb Model, Mutable Deploy)
Scale Out Spark Training Cluster
Kubernetes CLI
End of explanation
"""
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=100% height=500px src="http://kubernetes-aws.demo.pipeline.io">'
display(HTML(html))
"""
Explanation: Weavescope Kubernetes AWS Cluster Visualization
End of explanation
"""
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.feature import OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.regression import LinearRegression
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
"""
Explanation: Generate PMML from Spark ML Model
End of explanation
"""
df = spark.read.format("csv") \
.option("inferSchema", "true").option("header", "true") \
.load("s3a://datapalooza/airbnb/airbnb.csv.bz2")
df.registerTempTable("df")
print(df.head())
print(df.count())
"""
Explanation: Step 0: Load Libraries and Data
End of explanation
"""
df_filtered = df.filter("price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null")
df_filtered.registerTempTable("df_filtered")
df_final = spark.sql("""
select
id,
city,
case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 'DC', 'WA')
then state
else 'Other'
end as state,
space,
cast(price as double) as price,
cast(bathrooms as double) as bathrooms,
cast(bedrooms as double) as bedrooms,
room_type,
host_is_super_host,
cancellation_policy,
cast(case when security_deposit is null
then 0.0
else security_deposit
end as double) as security_deposit,
price_per_bedroom,
cast(case when number_of_reviews is null
then 0.0
else number_of_reviews
end as double) as number_of_reviews,
cast(case when extra_people is null
then 0.0
else extra_people
end as double) as extra_people,
instant_bookable,
cast(case when cleaning_fee is null
then 0.0
else cleaning_fee
end as double) as cleaning_fee,
cast(case when review_scores_rating is null
then 80.0
else review_scores_rating
end as double) as review_scores_rating,
cast(case when square_feet is not null and square_feet > 100
then square_feet
when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0)
then 350.0
else 380 * bedrooms
end as double) as square_feet
from df_filtered
""").persist()
df_final.registerTempTable("df_final")
df_final.select("square_feet", "price", "bedrooms", "bathrooms", "cleaning_fee").describe().show()
print(df_final.count())
print(df_final.schema)
# Most popular states
spark.sql("""
select
state,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by state
order by count(*) desc
""").show()
# Most expensive popular cities
spark.sql("""
select
city,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by city
order by avg(price) desc
""").filter("ct > 25").show()
"""
Explanation: Step 1: Clean, Filter, and Summarize the Data
End of explanation
"""
continuous_features = ["bathrooms", \
"bedrooms", \
"security_deposit", \
"cleaning_fee", \
"extra_people", \
"number_of_reviews", \
"square_feet", \
"review_scores_rating"]
categorical_features = ["room_type", \
"host_is_super_host", \
"cancellation_policy", \
"instant_bookable", \
"state"]
"""
Explanation: Step 2: Define Continuous and Categorical Features
End of explanation
"""
[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])
"""
Explanation: Step 3: Split Data into Training and Validation
End of explanation
"""
continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol="unscaled_continuous_features")
continuous_feature_scaler = StandardScaler(inputCol="unscaled_continuous_features", outputCol="scaled_continuous_features", \
withStd=True, withMean=False)
"""
Explanation: Step 4: Continuous Feature Pipeline
End of explanation
"""
categorical_feature_indexers = [StringIndexer(inputCol=x, \
outputCol="{}_index".format(x)) \
for x in categorical_features]
categorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \
outputCol="oh_encoder_{}".format(x.getOutputCol() )) \
for x in categorical_feature_indexers]
"""
Explanation: Step 5: Categorical Feature Pipeline
End of explanation
"""
feature_cols_lr = [x.getOutputCol() \
for x in categorical_feature_one_hot_encoders]
feature_cols_lr.append("scaled_continuous_features")
feature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \
outputCol="features_lr")
"""
Explanation: Step 6: Assemble our Features and Feature Pipeline
End of explanation
"""
linear_regression = LinearRegression(featuresCol="features_lr", \
labelCol="price", \
predictionCol="price_prediction", \
maxIter=10, \
regParam=0.3, \
elasticNetParam=0.8)
estimators_lr = \
[continuous_feature_assembler, continuous_feature_scaler] \
+ categorical_feature_indexers + categorical_feature_one_hot_encoders \
+ [feature_assembler_lr] + [linear_regression]
pipeline = Pipeline(stages=estimators_lr)
pipeline_model = pipeline.fit(training_dataset)
print(pipeline_model)
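# Hypothetical follow-up, not part of the original demo: score the held-out
# validation split from Step 3 with the fitted pipeline and report RMSE.
from pyspark.ml.evaluation import RegressionEvaluator
validation_predictions = pipeline_model.transform(validation_dataset)
evaluator = RegressionEvaluator(labelCol="price",
                                predictionCol="price_prediction",
                                metricName="rmse")
print("Validation RMSE: {}".format(evaluator.evaluate(validation_predictions)))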
"""
Explanation: Step 7: Train a Linear Regression Model
End of explanation
"""
from jpmml import toPMMLBytes
pmmlBytes = toPMMLBytes(spark, training_dataset, pipeline_model)
print(pmmlBytes.decode("utf-8"))
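# Optional step (an assumption, not shown in the original workflow): persist the
# PMML document locally before pushing it to the model servers below. The file
# path here is arbitrary.
with open('/tmp/pmml_airbnb.xml', 'wb') as pmml_file:
    pmml_file.write(pmmlBytes)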
"""
Explanation: Step 8: Convert PipelineModel to PMML
End of explanation
"""
import urllib.request
update_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml/pmml_airbnb'
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
headers=update_headers, \
data=pmmlBytes)
resp = urllib.request.urlopen(req)
print(resp.status) # Should return Http Status 200
import urllib.request
update_url = 'http://prediction-pmml-gcp.demo.pipeline.io/update-pmml/pmml_airbnb'
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
headers=update_headers, \
data=pmmlBytes)
resp = urllib.request.urlopen(req)
print(resp.status) # Should return Http Status 200
import urllib.parse
import json
evaluate_url = 'http://prediction-pmml-aws.demo.pipeline.io/evaluate-pmml/pmml_airbnb'
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"bathrooms":2.0, \
"bedrooms":2.0, \
"security_deposit":175.00, \
"cleaning_fee":25.0, \
"extra_people":1.0, \
"number_of_reviews": 2.0, \
"square_feet": 250.0, \
"review_scores_rating": 2.0, \
"room_type": "Entire home/apt", \
"host_is_super_host": "0.0", \
"cancellation_policy": "flexible", \
"instant_bookable": "1.0", \
"state": "CA"}'
encoded_input_params = input_params.encode('utf-8')
req = urllib.request.Request(evaluate_url, \
headers=evaluate_headers, \
data=encoded_input_params)
resp = urllib.request.urlopen(req)
print(resp.read())
import urllib.parse
import json
evaluate_url = 'http://prediction-pmml-gcp.demo.pipeline.io/evaluate-pmml/pmml_airbnb'
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"bathrooms":2.0, \
"bedrooms":2.0, \
"security_deposit":175.00, \
"cleaning_fee":25.0, \
"extra_people":1.0, \
"number_of_reviews": 2.0, \
"square_feet": 250.0, \
"review_scores_rating": 2.0, \
"room_type": "Entire home/apt", \
"host_is_super_host": "0.0", \
"cancellation_policy": "flexible", \
"instant_bookable": "1.0", \
"state": "CA"}'
encoded_input_params = input_params.encode('utf-8')
req = urllib.request.Request(evaluate_url, \
headers=evaluate_headers, \
data=encoded_input_params)
resp = urllib.request.urlopen(req)
print(resp.read())
"""
Explanation: Push PMML to Live, Running Spark ML Model Server (Mutable)
End of explanation
"""
from urllib import request
sourceBytes = ' \n\
private String str; \n\
\n\
public void initialize(Map<String, Object> args) { \n\
} \n\
\n\
public Object predict(Map<String, Object> inputs) { \n\
String id = (String)inputs.get("id"); \n\
\n\
return id.equals("21619"); \n\
} \n\
'.encode('utf-8')
from urllib import request
name = 'codegen_equals'
update_url = 'http://prediction-codegen-aws.demo.pipeline.io/update-codegen/%s/' % name
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
name = 'codegen_equals'
update_url = 'http://prediction-codegen-gcp.demo.pipeline.io/update-codegen/%s/' % name
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
name = 'codegen_equals'
evaluate_url = 'http://prediction-codegen-aws.demo.pipeline.io/evaluate-codegen/%s' % name
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"id":"21618"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return false, since "21618" does not match "21619"
from urllib import request
name = 'codegen_equals'
evaluate_url = 'http://prediction-codegen-aws.demo.pipeline.io/evaluate-codegen/%s' % name
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"id":"21619"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return true, since "21619" matches the generated predictor
"""
Explanation: Deploy Java-based Model (Simple Model, Mutable Deploy)
End of explanation
"""
from urllib import request
sourceBytes = ' \n\
public Map<String, Object> data = new HashMap<String, Object>(); \n\
\n\
public void initialize(Map<String, Object> args) { \n\
data.put("url", "http://demo.pipeline.io:9040/prediction/"); \n\
} \n\
\n\
public Object predict(Map<String, Object> inputs) { \n\
try { \n\
String userId = (String)inputs.get("userId"); \n\
String itemId = (String)inputs.get("itemId"); \n\
String url = data.get("url") + "/" + userId + "/" + itemId; \n\
\n\
return org.apache.http.client.fluent.Request \n\
.Get(url) \n\
.execute() \n\
.returnContent(); \n\
\n\
} catch(Exception exc) { \n\
System.out.println(exc); \n\
throw exc; \n\
} \n\
} \n\
'.encode('utf-8')
from urllib import request
name = 'codegen_httpclient'
# Note: Must have trailing '/'
update_url = 'http://prediction-codegen-aws.demo.pipeline.io/update-codegen/%s/' % name
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
name = 'codegen_httpclient'
# Note: Must have trailing '/'
update_url = 'http://prediction-codegen-gcp.demo.pipeline.io/update-codegen/%s/' % name
update_headers = {}
update_headers['Content-type'] = 'text/plain'
req = request.Request("%s" % update_url, headers=update_headers, data=sourceBytes)
resp = request.urlopen(req)
generated_code = resp.read()
print(generated_code.decode('utf-8'))
from urllib import request
name = 'codegen_httpclient'
evaluate_url = 'http://prediction-codegen-aws.demo.pipeline.io/evaluate-codegen/%s' % name
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"userId":"21619", "itemId":"10006"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return float
from urllib import request
name = 'codegen_httpclient'
evaluate_url = 'http://prediction-codegen-gcp.demo.pipeline.io/evaluate-codegen/%s' % name
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"userId":"21619", "itemId":"10006"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return float
"""
Explanation: Deploy Java Model (HttpClient Model, Mutable Deploy)
End of explanation
"""
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML
html = '<iframe width=100% height=500px src="http://hystrix.demo.pipeline.io/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22Predictions%20-%20AWS%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-aws.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22Predictions%20-%20GCP%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-gcp.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D">'
display(HTML(html))
"""
Explanation: Load Test and Compare Cloud Providers (AWS and Google)
Monitor Performance Across Cloud Providers
NetflixOSS Services Dashboard (Hystrix)
End of explanation
"""
# Spark ML - PMML - Airbnb
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml
# Codegen - Java - Simple
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml
# Tensorflow AI - Tensorflow Serving - Simple
!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml
!kubectl create --context=gcpdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml
"""
Explanation: Start Load Tests
Run JMeter Tests from Local Laptop (Limited by Laptop)
Run Headless JMeter Tests from Training Clusters in Cloud
End of explanation
"""
!kubectl delete --context=awsdemo rc loadtest-aws-airbnb
!kubectl delete --context=gcpdemo rc loadtest-aws-airbnb
!kubectl delete --context=awsdemo rc loadtest-aws-equals
!kubectl delete --context=gcpdemo rc loadtest-aws-equals
!kubectl delete --context=awsdemo rc loadtest-aws-minimal
!kubectl delete --context=gcpdemo rc loadtest-aws-minimal
"""
Explanation: End Load Tests
End of explanation
"""
!kubectl rolling-update prediction-tensorflow --context=awsdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow
!kubectl get pod --context=awsdemo
!kubectl rolling-update prediction-tensorflow --context=gcpdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow
!kubectl get pod --context=gcpdemo
"""
Explanation: Rolling Deploy Tensorflow AI (Simple Model, Immutable Deploy)
Kubernetes CLI
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
|
bsd-3-clause
|
import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
noise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')
subject = 'sample'
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
"""
Explanation: The role of dipole orientations in distributed source localization
When performing source localization in a distributed manner
(MNE/dSPM/sLORETA/eLORETA),
the source space is defined as a grid of dipoles that spans a large portion of
the cortex. These dipoles have both a position and an orientation. In this
tutorial, we will look at the various options available to restrict the
orientation of the dipoles and the impact on the resulting source estimate.
See inverse_orientation_constraints
Loading data
Load everything we need to perform source localization on the sample dataset.
End of explanation
"""
lh = fwd['src'][0] # Visualize the left hemisphere
verts = lh['rr'] # The vertices of the source space
tris = lh['tris'] # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']] # The position of the dipoles
dip_ori = lh['nn'][lh['vertno']]
dip_len = len(dip_pos)
dip_times = [0]
white = (1.0, 1.0, 1.0) # RGB values for a white color
actual_amp = np.ones(dip_len) # misc amp to create Dipole instance
actual_gof = np.ones(dip_len) # misc GOF to create Dipole instance
dipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)
trans = mne.read_trans(trans_fname)
fig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)
coord_frame = 'mri'
# Plot the cortex
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
trans=trans, surfaces='white',
coord_frame=coord_frame, fig=fig)
# Mark the position of the dipoles with small red dots
fig = mne.viz.plot_dipole_locations(dipoles=dipoles, trans=trans,
mode='sphere', subject=subject,
subjects_dir=subjects_dir,
coord_frame=coord_frame,
scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)
"""
Explanation: The source space
Let's start by examining the source space as constructed by the
:func:mne.setup_source_space function. Dipoles are placed along fixed
intervals on the cortex, determined by the spacing parameter. The source
space does not define the orientation for these dipoles.
End of explanation
"""
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the dipoles as arrows pointing along the surface normal
fig = mne.viz.plot_dipole_locations(dipoles=dipoles, trans=trans,
mode='arrow', subject=subject,
subjects_dir=subjects_dir,
coord_frame='head',
scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
"""
Explanation: Fixed dipole orientations
While the source space defines the position of the dipoles, the inverse
operator defines the possible orientations of them. One of the options is to
assign a fixed orientation. Since the neural currents from which MEG and EEG
signals originate flow mostly perpendicular to the cortex [1]_, restricting
the orientation of the dipoles accordingly places a useful restriction on the
source estimate.
By specifying fixed=True when calling
:func:mne.minimum_norm.make_inverse_operator, the dipole orientations are
fixed to be orthogonal to the surface of the cortex, pointing outwards. Let's
visualize this:
End of explanation
"""
# Compute the source estimate for the 'left - auditory' condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
"""
Explanation: Restricting the dipole orientations in this manner leads to the following
source estimate for the sample data:
End of explanation
"""
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the three dipoles defined at each location in the source space
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
trans=trans, fwd=fwd,
surfaces='white', coord_frame='head', fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
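# A small sanity check (a hypothetical addition to this tutorial): with free
# orientations, the forward (gain) matrix keeps three columns per source
# location, one for each dipole of the local coordinate system.
n_sources = fwd['nsource']
n_channels, n_columns = fwd['sol']['data'].shape
print(n_channels, n_columns, 3 * n_sources)  # n_columns should equal 3 * n_sources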
"""
Explanation: The direction of the estimated current is now restricted to two directions:
inward and outward. In the plot, blue areas indicate current flowing inwards
and red areas indicate current flowing outwards. Given the curvature of the
cortex, groups of dipoles tend to point in the same direction: the direction
of the electromagnetic field picked up by the sensors.
Loose dipole orientations
Forcing the source dipoles to be strictly orthogonal to the cortex makes the
source estimate sensitive to the spacing of the dipoles along the cortex,
since the curvature of the cortex changes within each ~10 square mm patch.
Furthermore, misalignment of the MEG/EEG and MRI coordinate frames is more
critical when the source dipole orientations are strictly constrained [2]_.
To lift the restriction on the orientation of the dipoles, the inverse
operator has the ability to place not one, but three dipoles at each
location defined by the source space. These three dipoles are placed
orthogonally to form a Cartesian coordinate system. Let's visualize this:
End of explanation
"""
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=1.0)
# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
"""
Explanation: When computing the source estimate, the activity at each of the three dipoles
is collapsed into the XYZ components of a single vector, which leads to the
following source estimate for the sample data:
End of explanation
"""
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
"""
Explanation: Limiting orientations, but not fixing them
Often, the best results will be obtained by allowing the dipoles to have
somewhat free orientation, but not stray too far from an orientation that is
perpendicular to the cortex. The loose parameter of the
:func:mne.minimum_norm.make_inverse_operator allows you to specify a value
between 0 (fixed) and 1 (unrestricted or "free") to indicate the amount the
orientation is allowed to deviate from the surface normal.
End of explanation
"""
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
"""
Explanation: Discarding dipole orientation information
Often, further analysis of the data does not need information about the
orientation of the dipoles, but rather their magnitudes. The pick_ori
parameter of the :func:mne.minimum_norm.apply_inverse function allows you
to specify whether to return the full vector solution ('vector') or
rather the magnitude of the vectors (None, the default) or only the
activity in the direction perpendicular to the cortex ('normal').
End of explanation
"""
|
Kaggle/learntools
|
notebooks/computer_vision/raw/tut1.ipynb
|
apache-2.0
|
#$HIDE_INPUT$
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducibility
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed(31415)
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Load training and validation sets
ds_train_ = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
'../input/car-or-truck/valid',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=False,
)
# Data Pipeline
def convert_to_float(image, label):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
return image, label
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
ds_train_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
ds_valid_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
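# A quick optional check (not in the original tutorial): pull one batch from the
# pipeline to confirm the image and label shapes before training.
images, labels = next(iter(ds_train))
print(images.shape, labels.shape)  # expected: (64, 128, 128, 3) and (64, 1)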
"""
Explanation: Welcome to Computer Vision!
Have you ever wanted to teach a computer to see? In this course, that's exactly what you'll do!
In this course, you'll:
- Use modern deep-learning networks to build an image classifier with Keras
- Design your own custom convnet with reusable blocks
- Learn the fundamental ideas behind visual feature extraction
- Master the art of transfer learning to boost your models
- Utilize data augmentation to extend your dataset
If you've taken the Introduction to Deep Learning course, you'll know everything you need to be successful.
Now let's get started!
Introduction
This course will introduce you to the fundamental ideas of computer vision. Our goal is to learn how a neural network can "understand" a natural image well enough to solve the same kinds of problems the human visual system can solve.
The neural networks that are best at this task are called convolutional neural networks. (Sometimes we say convnet or CNN instead.) Convolution is the mathematical operation that gives the layers of a convnet their unique structure. In future lessons, you'll learn why this structure is so effective at solving computer vision problems.
We will apply these ideas to the problem of image classification: given a picture, can we train a computer to tell us what it's a picture of? You may have seen apps that can identify a species of plant from a photograph. That's an image classifier! In this course, you'll learn how to build image classifiers just as powerful as those used in professional applications.
While our focus will be on image classification, what you'll learn in this course is relevant to every kind of computer vision problem. At the end, you'll be ready to move on to more advanced applications like generative adversarial networks and image segmentation.
The Convolutional Classifier
A convnet used for image classification consists of two parts: a convolutional base and a dense head.
<center>
<!-- <img src="./images/1-parts-of-a-convnet.png" width="600" alt="The parts of a convnet: image, base, head, class; input, extract, classify, output.">-->
<img src="https://i.imgur.com/U0n5xjU.png" width="600" alt="The parts of a convnet: image, base, head, class; input, extract, classify, output.">
</center>
The base is used to extract the features from an image. It is formed primarily of layers performing the convolution operation, but often includes other kinds of layers as well. (You'll learn about these in the next lesson.)
The head is used to determine the class of the image. It is formed primarily of dense layers, but might include other layers like dropout.
What do we mean by visual feature? A feature could be a line, a color, a texture, a shape, a pattern -- or some complicated combination.
The whole process goes something like this:
<center>
<!-- <img src="./images/1-extract-classify.png" width="600" alt="The idea of feature extraction."> -->
<img src="https://i.imgur.com/UUAafkn.png" width="600" alt="The idea of feature extraction.">
</center>
The features actually extracted look a bit different, but it gives the idea.
Training the Classifier
The goal of the network during training is to learn two things:
1. which features to extract from an image (base),
2. which class goes with what features (head).
These days, convnets are rarely trained from scratch. More often, we reuse the base of a pretrained model. To the pretrained base we then attach an untrained head. In other words, we reuse the part of a network that has already learned to do step 1 (extract features) and attach to it some fresh layers that learn step 2 (classify).
<center>
<!-- <img src="./images/1-attach-head-to-base.png" width="400" alt="Attaching a new head to a trained base."> -->
<img src="https://imgur.com/E49fsmV.png" width="400" alt="Attaching a new head to a trained base.">
</center>
Because the head usually consists of only a few dense layers, very accurate classifiers can be created from relatively little data.
Reusing a pretrained model is a technique known as transfer learning. It is so effective, that almost every image classifier these days will make use of it.
Example - Train a Convnet Classifier
Throughout this course, we're going to be creating classifiers that attempt to solve the following problem: is this a picture of a Car or of a Truck? Our dataset is about 10,000 pictures of various automobiles, around half cars and half trucks.
Step 1 - Load Data
This next hidden cell will import some libraries and set up our data pipeline. We have a training split called ds_train and a validation split called ds_valid.
End of explanation
"""
#$HIDE_INPUT$
import matplotlib.pyplot as plt
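# The display code for this step was hidden in the original notebook; this is a
# minimal sketch of how a few training images could be shown. The class-name
# order ['Car', 'Truck'] is an assumption based on the dataset's directory names.
class_names = ['Car', 'Truck']
images, labels = next(iter(ds_train_))
plt.figure(figsize=(10, 5))
for i in range(8):
    ax = plt.subplot(2, 4, i + 1)
    plt.imshow(images[i].numpy().astype("uint8"))
    plt.title(class_names[int(labels[i].numpy().item())])
    plt.axis('off')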
"""
Explanation: Let's take a look at a few examples from the training set.
End of explanation
"""
pretrained_base = tf.keras.models.load_model(
'../input/cv-course-models/cv-course-models/vgg16-pretrained-base',
)
pretrained_base.trainable = False
"""
Explanation: Step 2 - Define Pretrained Base
The most commonly used dataset for pretraining is ImageNet, a large dataset of many kinds of natural images. Keras includes a variety of models pretrained on ImageNet in its applications module. The pretrained model we'll use is called VGG16.
End of explanation
"""
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
pretrained_base,
layers.Flatten(),
layers.Dense(6, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
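# Optional inspection (a hypothetical addition): the summary makes the Flatten
# step visible, showing how the 2D feature maps from the VGG16 base become a
# single long vector before the two Dense layers of the head.
model.summary()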
"""
Explanation: Step 3 - Attach Head
Next, we attach the classifier head. For this example, we'll use a layer of hidden units (the first Dense layer) followed by a layer to transform the outputs to a probability score for class 1, Truck. The Flatten layer transforms the two dimensional outputs of the base into the one dimensional inputs needed by the head.
End of explanation
"""
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=['binary_accuracy'],
)
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=30,
verbose=0,
)
"""
Explanation: Step 4 - Train
Finally, let's train the model. Since this is a two-class problem, we'll use the binary versions of crossentropy and accuracy. The adam optimizer generally performs well, so we'll choose it as well.
End of explanation
"""
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
"""
Explanation: When training a neural network, it's always a good idea to examine the loss and metric plots. The history object contains this information in a dictionary history.history. We can use Pandas to convert this dictionary to a dataframe and plot it with a built-in method.
End of explanation
"""
|
kaushik94/sympy
|
examples/notebooks/Sylvester_resultant.ipynb
|
bsd-3-clause
|
import sympy as sym
from sympy.polys import subresultants_qq_zz
x = sym.symbols('x')
"""
Explanation: Resultant
If $p$ and $q$ are two polynomials over a commutative ring with identity which can be factored into linear factors,
$$p(x)= a_0 (x - r_1) (x- r_2) \dots (x - r_m) $$
$$q(x)=b_0 (x - s_1)(x - s_2) \dots (x - s_n)$$
then the resultant $R(p,q)$ of $p$ and $q$ is defined as:
$$R(p,q)=a^n_{0}b^m_{0}\prod_{i=1}^{m}\prod_{j=1}^{n}(r_i - s_j)$$
Since the resultant is a symmetric function of the roots of the polynomials $p$ and $q$, it can be expressed as a polynomial in the coefficients of $p$ and $q$.
From the definition, it is clear that the resultant will equal zero if and only if $p$ and $q$ have at least one common root. Thus, the resultant becomes very useful in identifying whether common roots exist.
Sylvester's Resultant
It was proven that the determinant of the Sylvester's matrix is equal to the resultant. Assume the two polynomials:
$$p(x) = a_0 x^m + a_1 x^{m-1}+\dots+a_{m-1}x+a_m$$
$$q(x)=b_0 x^n + b_1 x^{n-1}+\dots+b_{n-1}x+b_n$$
Then the Sylvester matrix is the $(m+n)\times(m+n)$ matrix:
$$
\left|
\begin{array}{cccccccc}
a_{0} & a_{1} & a_{2} & \ldots & a_{m} & 0 & \ldots & 0 \\
0 & a_{0} & a_{1} & \ldots & a_{m-1} & a_{m} & \ldots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \ddots & \ddots & \ddots & \ddots & \ddots & a_{m} \\
b_{0} & b_{1} & b_{2} & \ldots & b_{n} & 0 & \ldots & 0 \\
0 & b_{0} & b_{1} & \ldots & b_{n-1} & b_{n} & \ldots & 0 \\
\ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\
0 & 0 & \ldots & \ldots & \ldots & \ldots & \ldots & b_{n} \\
\end{array}
\right| = \Delta $$
Thus $\Delta$ is equal to $R(p, q)$.
Example: Existence of common roots
Two examples are considered here. Note that if the system has a common root, we expect the resultant/determinant to equal zero.
End of explanation
"""
f = x ** 2 - 5 * x + 6
g = x ** 2 - 3 * x + 2
f, g
subresultants_qq_zz.sylvester(f, g, x)
subresultants_qq_zz.sylvester(f, g, x).det()
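# Cross-check (a hypothetical addition): SymPy's built-in resultant should agree
# that the two polynomials share a root, i.e. it should also evaluate to zero.
sym.resultant(f, g, x)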
"""
Explanation: A common root exists.
End of explanation
"""
z = x ** 2 - 7 * x + 12
h = x ** 2 - x
z, h
matrix = subresultants_qq_zz.sylvester(z, h, x)
matrix
matrix.det()
"""
Explanation: A common root does not exist.
End of explanation
"""
y = sym.symbols('y')
f = x ** 2 + x * y + 2 * x + y -1
g = x ** 2 + 3 * x - y ** 2 + 2 * y - 1
f, g
matrix = subresultants_qq_zz.sylvester(f, g, y)
matrix
matrix.det().factor()
"""
Explanation: Example: Two variables, eliminator
When we have a system of two variables, we solve for one variable and keep the second as a coefficient. Thus we can find the roots of the equations; that is why the resultant is often referred to as the eliminator.
End of explanation
"""
f.subs({x:-3}).factor(), g.subs({x:-3}).factor()
f.subs({x:-3, y:1}), g.subs({x:-3, y:1})
"""
Explanation: Three roots for $x \in \{-3, 0, 1\}$.
For $x=-3$, $y=1$.
End of explanation
"""
f.subs({x:0}).factor(), g.subs({x:0}).factor()
f.subs({x:0, y:1}), g.subs({x:0, y:1})
"""
Explanation: For $x=0$, $y=1$.
End of explanation
"""
f.subs({x:1}).factor(), g.subs({x:1}).factor()
f.subs({x:1, y:-1}), g.subs({x:1, y:-1})
f.subs({x:1, y:3}), g.subs({x:1, y:3})
"""
Explanation: For $x=1$, $y=-1$ is the common root.
End of explanation
"""
a = sym.IndexedBase("a")
b = sym.IndexedBase("b")
f = a[1] * x + a[0]
g = b[2] * x ** 2 + b[1] * x + b[0]
matrix = subresultants_qq_zz.sylvester(f, g, x)
matrix.det()
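# A hedged verification (the overall sign may differ depending on the row
# ordering used by this sylvester implementation): for p = a1*x + a0 with the
# single root x = -a0/a1, the resultant is lc(p)**deg(g) * g(-a0/a1), i.e.
# a1**2*b0 - a0*a1*b1 + a0**2*b2. One of the two differences below should be 0.
expected = a[1]**2 * b[0] - a[0]*a[1]*b[1] + a[0]**2 * b[2]
sym.simplify(matrix.det() - expected), sym.simplify(matrix.det() + expected)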
"""
Explanation: Example: Generic Case
End of explanation
"""
|
gVallverdu/cookbook
|
intro_folium.ipynb
|
gpl-2.0
|
import folium
"""
Explanation: Folium examples
Germain Salvato Vallverdu germain.vallverdu@gmail.com
This notebook shows simple examples of the Folium package for drawing markers on a map.
Colors from flatui colors.
End of explanation
"""
carte = folium.Map(location=[45.5236, -122.6750], zoom_start=12)
marker = folium.Marker([45.5, -122.7], popup='Un marker')
marker.add_to(carte)
carte
"""
Explanation: Basic marker
Basic marker with only position and popup message
End of explanation
"""
carte = folium.Map(location=[45.5236, -122.6750], zoom_start=12)
circle = folium.CircleMarker(
[45.5, -122.7],
radius=1000,
popup='Un cercle',
color="#e74c3c", # rouge
fill_color="#27ae60", # vert
fill_opacity=0.9
)
circle.add_to(carte)
carte
"""
Explanation: Circle Marker
Help on CircleMarker object :
Init signature: folium.CircleMarker(location, radius=500, color='black', fill_color='black', fill_opacity=0.6, popup=None)
Docstring:
Creates a CircleMarker object for plotting on a Map.
Parameters
----------
location: tuple or list, default None
Latitude and Longitude of Marker (Northing, Easting)
radius: int
The radius of the circle in pixels.
color: str, default 'black'
The color of the marker's edge in a HTML-compatible format.
fill_color: str, default 'black'
The fill color of the marker in a HTML-compatible format.
fill_opacity: float, default 0.6
The fill opacity of the marker, between 0. and 1.
popup: string or folium.Popup, default None
Input text or visualization for object.
End of explanation
"""
carte = folium.Map(location=[45.5236, -122.6750], zoom_start=12)
# add first marker with bootstrap icon
icone1 = folium.Icon(icon="asterisk", icon_color="#9b59b6", color="lightblue")
marker1 = folium.Marker([45.5, -122.7], popup='Un icone', icon=icone1)
marker1.add_to(carte)
# add second marker with font-awesome icon
icone1 = folium.Icon(icon="globe", icon_color="#e67e22", color="lightgreen", prefix="fa")
marker1 = folium.Marker([45.5, -122.6], popup='Un icone', icon=icone1)
marker1.add_to(carte)
carte
"""
Explanation: Icon Marker
Help on Icon object :
Init signature: folium.Icon(color='blue', icon_color='white', icon='info-sign', angle=0, prefix='glyphicon')
Docstring:
Creates an Icon object that will be rendered
using Leaflet.awesome-markers.
Parameters
----------
color : str, default 'blue'
The color of the marker. You can use:
['red', 'blue', 'green', 'purple', 'orange', 'darkred',
'lightred', 'beige', 'darkblue', 'darkgreen', 'cadetblue',
'darkpurple', 'white', 'pink', 'lightblue', 'lightgreen',
'gray', 'black', 'lightgray']
icon_color : str, default 'white'
The color of the drawing on the marker. You can use colors above,
or an html color code.
icon : str, default 'info-sign'
The name of the marker sign.
See Font-Awesome website to choose yours.
Warning : depending on the icon you choose you may need to adapt
the `prefix` as well.
angle : int, default 0
The icon will be rotated by this amount of degrees.
prefix : str, default 'glyphicon'
The prefix states the source of the icon. 'fa' for font-awesome or
'glyphicon' for bootstrap 3.
Icons can be drawn from bootstrap or font-awesome glyphs. You can choose a custom color for the icon, but the marker color must be chosen from the provided list (see doc above).
Web sites for glyphs:
bootstrap icons : bootstrap glyphicons
font-awesome icons : font-awesome
End of explanation
"""
import matplotlib
import matplotlib.pyplot as plt
color = plt.cm.winter(22)
color
matplotlib.colors.rgb2hex(color)
"""
Explanation: RGB(A) to HEX colors
How can I convert RGB or RGBA colors to hex colors?
You can use matplotlib's colors module to convert an RGB or RGBA color to a HEX color.
End of explanation
"""
|
dipanjanS/text-analytics-with-python
|
New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb
|
apache-2.0
|
!pip install tensorflow-hub
"""
Explanation: Sentiment Analysis - Text Classification with Universal Embeddings
Textual data in spite of being highly unstructured, can be classified into two major types of documents.
- Factual documents which typically depict some form of statements or facts with no specific feelings or emotion attached to them. These are also known as objective documents.
- Subjective documents on the other hand have text which expresses feelings, mood, emotions and opinion.
Sentiment Analysis is also popularly known as opinion analysis or opinion mining. The key idea is to use techniques from text analytics, NLP, machine learning and linguistics to extract important information or data points from unstructured text. This in turn can help us derive the sentiment from text data.
Here we will be looking at building supervised sentiment analysis classification models thanks to the advantage of labeled data! The dataset we will be working with is the IMDB Large Movie Review Dataset having 50000 reviews classified into positive and negative sentiment. I have provided a compressed version of the dataset in this repository itself for your benefit!
Do remember that the focus here is not sentiment analysis but text classification by leveraging universal sentence embeddings.
We will leverage the following sentence encoders here for demonstration from TensorFlow Hub:
Neural-Net Language Model (nnlm-en-dim128)
Universal Sentence Encoder (universal-sentence-encoder)
Developed by Dipanjan (DJ) Sarkar
Install Tensorflow Hub
End of explanation
"""
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
import pandas as pd
"""
Explanation: Load up Dependencies
End of explanation
"""
tf.test.is_gpu_available()
tf.test.gpu_device_name()
"""
Explanation: Check if GPU is available for use!
End of explanation
"""
dataset = pd.read_csv('movie_reviews.csv.bz2', compression='bz2')
dataset.info()
dataset['sentiment'] = [1 if sentiment == 'positive' else 0 for sentiment in dataset['sentiment'].values]
dataset.head()
"""
Explanation: Load and View Dataset
End of explanation
"""
reviews = dataset['review'].values
sentiments = dataset['sentiment'].values
train_reviews = reviews[:30000]
train_sentiments = sentiments[:30000]
val_reviews = reviews[30000:35000]
val_sentiments = sentiments[30000:35000]
test_reviews = reviews[35000:]
test_sentiments = sentiments[35000:]
train_reviews.shape, val_reviews.shape, test_reviews.shape
"""
Explanation: Build train, validation and test datasets
End of explanation
"""
!pip install contractions
!pip install beautifulsoup4
import contractions
from bs4 import BeautifulSoup
import unicodedata
import re
def strip_html_tags(text):
soup = BeautifulSoup(text, "html.parser")
[s.extract() for s in soup(['iframe', 'script'])]
stripped_text = soup.get_text()
stripped_text = re.sub(r'[\r|\n|\r\n]+', '\n', stripped_text)
return stripped_text
def remove_accented_chars(text):
text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8', 'ignore')
return text
def expand_contractions(text):
return contractions.fix(text)
def remove_special_characters(text, remove_digits=False):
pattern = r'[^a-zA-Z0-9\s]' if not remove_digits else r'[^a-zA-Z\s]'
text = re.sub(pattern, '', text)
return text
def pre_process_document(document):
# strip HTML
document = strip_html_tags(document)
# lower case
document = document.lower()
# remove extra newlines (often might be present in really noisy text)
document = document.translate(document.maketrans("\n\t\r", "   "))  # equal-length mapping: each of \n, \t, \r becomes a space
# remove accented characters
document = remove_accented_chars(document)
# expand contractions
document = expand_contractions(document)
# remove special characters and\or digits
# insert spaces between special characters to isolate them
special_char_pattern = re.compile(r'([{.(-)!}])')
document = special_char_pattern.sub(" \\1 ", document)
document = remove_special_characters(document, remove_digits=True)
# remove extra whitespace
document = re.sub(' +', ' ', document)
document = document.strip()
return document
pre_process_corpus = np.vectorize(pre_process_document)
train_reviews = pre_process_corpus(train_reviews)
val_reviews = pre_process_corpus(val_reviews)
test_reviews = pre_process_corpus(test_reviews)
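# A tiny illustrative call (not part of the original notebook): see what the
# cleaning function does to one noisy, HTML-laden review snippet.
print(pre_process_document("<br />I LOVED this movie!!! It's a 10/10, isn't it?"))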
"""
Explanation: Basic Text Wrangling
End of explanation
"""
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
{'sentence': train_reviews}, train_sentiments,
batch_size=256, num_epochs=None, shuffle=True)
# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.numpy_input_fn(
{'sentence': train_reviews}, train_sentiments, shuffle=False)
# Prediction on the whole validation set.
predict_val_input_fn = tf.estimator.inputs.numpy_input_fn(
{'sentence': val_reviews}, val_sentiments, shuffle=False)
# Prediction on the test set.
predict_test_input_fn = tf.estimator.inputs.numpy_input_fn(
{'sentence': test_reviews}, test_sentiments, shuffle=False)
"""
Explanation: Build Data Ingestion Functions
End of explanation
"""
embedding_feature = hub.text_embedding_column(
key='sentence',
module_spec="https://tfhub.dev/google/universal-sentence-encoder/2",
trainable=False)
dnn = tf.estimator.DNNClassifier(
hidden_units=[512, 128],
feature_columns=[embedding_feature],
n_classes=2,
activation_fn=tf.nn.relu,
dropout=0.1,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.005))
"""
Explanation: Build Deep Learning Model with Universal Sentence Encoder
End of explanation
"""
256 * 1500 / 30000  # batch_size * total_steps / training_examples = approx. number of epochs (~12.8)
"""
Explanation: Train for approx 12 epochs
End of explanation
"""
tf.logging.set_verbosity(tf.logging.ERROR)
import time
TOTAL_STEPS = 1500
STEP_SIZE = 100
for step in range(0, TOTAL_STEPS+1, STEP_SIZE):
print()
print('-'*100)
print('Training for step =', step)
start_time = time.time()
dnn.train(input_fn=train_input_fn, steps=STEP_SIZE)
elapsed_time = time.time() - start_time
print('Train Time (s):', elapsed_time)
print('Eval Metrics (Train):', dnn.evaluate(input_fn=predict_train_input_fn))
print('Eval Metrics (Validation):', dnn.evaluate(input_fn=predict_val_input_fn))
"""
Explanation: Model Training
End of explanation
"""
dnn.evaluate(input_fn=predict_train_input_fn)
dnn.evaluate(input_fn=predict_test_input_fn)
"""
Explanation: Model Evaluation
End of explanation
"""
import time
TOTAL_STEPS = 1500
STEP_SIZE = 500
my_checkpointing_config = tf.estimator.RunConfig(
keep_checkpoint_max = 2, # Retain the 2 most recent checkpoints.
)
def train_and_evaluate_with_sentence_encoder(hub_module, train_module=False, path=''):
embedding_feature = hub.text_embedding_column(
key='sentence', module_spec=hub_module, trainable=train_module)
print()
print('='*100)
print('Training with', hub_module)
print('Trainable is:', train_module)
print('='*100)
dnn = tf.estimator.DNNClassifier(
hidden_units=[512, 128],
feature_columns=[embedding_feature],
n_classes=2,
activation_fn=tf.nn.relu,
dropout=0.1,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.005),
model_dir=path,
config=my_checkpointing_config)
for step in range(0, TOTAL_STEPS+1, STEP_SIZE):
print('-'*100)
print('Training for step =', step)
start_time = time.time()
dnn.train(input_fn=train_input_fn, steps=STEP_SIZE)
elapsed_time = time.time() - start_time
print('Train Time (s):', elapsed_time)
print('Eval Metrics (Train):', dnn.evaluate(input_fn=predict_train_input_fn))
print('Eval Metrics (Validation):', dnn.evaluate(input_fn=predict_val_input_fn))
train_eval_result = dnn.evaluate(input_fn=predict_train_input_fn)
test_eval_result = dnn.evaluate(input_fn=predict_test_input_fn)
return {
"Model Dir": dnn.model_dir,
"Training Accuracy": train_eval_result["accuracy"],
"Test Accuracy": test_eval_result["accuracy"],
"Training AUC": train_eval_result["auc"],
"Test AUC": test_eval_result["auc"],
"Training Precision": train_eval_result["precision"],
"Test Precision": test_eval_result["precision"],
"Training Recall": train_eval_result["recall"],
"Test Recall": test_eval_result["recall"]
}
"""
Explanation: Build a Generic Model Trainer on any Input Sentence Encoder
End of explanation
"""
tf.logging.set_verbosity(tf.logging.ERROR)
results = {}
results["nnlm-en-dim128"] = train_and_evaluate_with_sentence_encoder(
"https://tfhub.dev/google/nnlm-en-dim128/1", path='/storage/models/nnlm-en-dim128_f/')
results["nnlm-en-dim128-with-training"] = train_and_evaluate_with_sentence_encoder(
"https://tfhub.dev/google/nnlm-en-dim128/1", train_module=True, path='/storage/models/nnlm-en-dim128_t/')
results["use-512"] = train_and_evaluate_with_sentence_encoder(
"https://tfhub.dev/google/universal-sentence-encoder/2", path='/storage/models/use-512_f/')
results["use-512-with-training"] = train_and_evaluate_with_sentence_encoder(
"https://tfhub.dev/google/universal-sentence-encoder/2", train_module=True, path='/storage/models/use-512_t/')
"""
Explanation: Train Deep Learning Models on different Sentence Encoders
NNLM - pre-trained and fine-tuning
USE - pre-trained and fine-tuning
End of explanation
"""
results_df = pd.DataFrame.from_dict(results, orient="index")
results_df
best_model_dir = results_df[results_df['Test Accuracy'] == results_df['Test Accuracy'].max()]['Model Dir'].values[0]
best_model_dir
embedding_feature = hub.text_embedding_column(
key='sentence', module_spec="https://tfhub.dev/google/universal-sentence-encoder/2", trainable=True)
dnn = tf.estimator.DNNClassifier(
hidden_units=[512, 128],
feature_columns=[embedding_feature],
n_classes=2,
activation_fn=tf.nn.relu,
dropout=0.1,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.005),
model_dir=best_model_dir)
dnn
def get_predictions(estimator, input_fn):
return [x["class_ids"][0] for x in estimator.predict(input_fn=input_fn)]
predictions = get_predictions(estimator=dnn, input_fn=predict_test_input_fn)
predictions[:10]
!pip install seaborn
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
with tf.Session() as session:
cm = tf.confusion_matrix(test_sentiments, predictions).eval()
LABELS = ['negative', 'positive']
sns.heatmap(cm, annot=True, xticklabels=LABELS, yticklabels=LABELS, fmt='g')
xl = plt.xlabel("Predicted")
yl = plt.ylabel("Actuals")
from sklearn.metrics import classification_report
print(classification_report(y_true=test_sentiments, y_pred=predictions, target_names=LABELS))
"""
Explanation: Model Evaluations
End of explanation
"""
|
smharper/openmc
|
examples/jupyter/mgxs-part-ii.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
import openmoc
import openmc
import openmc.mgxs as mgxs
import openmc.data
from openmc.openmoc_compatible import get_openmoc_geometry
%matplotlib inline
"""
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs module to calculate multi-group cross sections for a heterogeneous fuel pin cell geometry. In particular, this Notebook illustrates the following features:
Creation of multi-group cross sections on a heterogeneous geometry
Calculation of cross sections on a nuclide-by-nuclide basis
The use of tally precision triggers with multi-group cross sections
Built-in features for energy condensation in downstream data processing
The use of the openmc.data module to plot continuous-energy vs. multi-group cross sections
Validation of multi-group cross sections with OpenMOC
Note: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. You must install OpenMOC on your system in order to run this Notebook in its entirety. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.
Generate Input Files
End of explanation
"""
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
"""
Explanation: First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.
End of explanation
"""
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
"""
Explanation: With our materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create box to surround the geometry
box = openmc.model.rectangular_prism(1.26, 1.26, boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and a reflective rectangular prism that bounds the pin cell.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & box
pin_cell_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry(pin_cell_universe)
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
"""
Explanation: We now must create a geometry with the pin cell universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches, each with 10,000 particles. We also activate tally precision triggers and allow the run to extend to at most 4 times the nominal number of batches if the trigger criteria have not yet been satisfied.
End of explanation
"""
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups([0., 0.625, 20.0e6])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups([0., 0.058, 0.14, 0.28,
0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])
"""
Explanation: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
End of explanation
"""
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells().values()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
xs_library[cell.id] = {}
xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
xs_library[cell.id]['nu-fission'] = mgxs.FissionXS(groups=fine_groups, nu=True)
xs_library[cell.id]['nu-scatter'] = mgxs.ScatterMatrixXS(groups=fine_groups, nu=True)
xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
"""
Explanation: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
End of explanation
"""
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1e-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
for mgxs_type in xs_library[cell.id]:
xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
"""
Explanation: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
End of explanation
"""
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
# Set the cross sections domain to the cell
xs_library[cell.id][rxn_type].domain = cell
# Tally cross sections by nuclide
xs_library[cell.id][rxn_type].by_nuclide = True
# Add OpenMC tallies to the tallies file for XML generation
for tally in xs_library[cell.id][rxn_type].tallies.values():
tallies_file.append(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
End of explanation
"""
# Run OpenMC
openmc.run()
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.082.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Because tally triggers were enabled, the total number of batches (and hence the statepoint file name) can vary from run to run; here the final statepoint was written after batch 82. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
"""
Explanation: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])
"""
Explanation: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Extracting and Storing MGXS Data
Let's first inspect one of our cross sections by printing it to the screen as a microscopic cross section in units of barns.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
"""
Explanation: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
End of explanation
"""
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
"""
Explanation: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
End of explanation
"""
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
"""
Explanation: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
End of explanation
"""
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
"""
Explanation: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
End of explanation
"""
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
"""
Explanation: Verification with OpenMOC
Now, let's verify our cross sections using OpenMOC. First, we construct an equivalent OpenMOC geometry.
End of explanation
"""
# Get all OpenMOC cells in the geometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
"""
Explanation: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
End of explanation
"""
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
"""
Explanation: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
End of explanation
"""
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.n
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
End of explanation
"""
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.n
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
End of explanation
"""
# Create a figure of the U-235 continuous-energy fission cross section
fig = openmc.plot_xs('U235', ['fission'])
# Get the axis to use for plotting the MGXS
ax = fig.gca()
# Extract energy group bounds and MGXS values to plot
fission = xs_library[fuel_cell.id]['fission']
energy_groups = fission.energy_groups
x = energy_groups.group_edges
y = fission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')
y = np.squeeze(y)
# Fix low energy bound
x[0] = 1.e-5
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
ax.plot(x, y, drawstyle='steps', color='r', linewidth=3)
ax.set_title('U-235 Fission Cross Section')
ax.legend(['Continuous', 'Multi-Group'])
ax.set_xlim((x.min(), x.max()))
"""
Explanation: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Visualizing MGXS Data
It is often insightful to generate visual depictions of multi-group cross sections. There are many different types of plots which may be useful for multi-group cross section visualization, only a few of which will be shown here for enrichment and inspiration.
One particularly useful visualization is a comparison of the continuous-energy and multi-group cross sections for a particular nuclide and reaction type. We illustrate one option for generating such plots with the use of the openmc.plotter module to plot continuous-energy cross sections from the openly available cross section library distributed by NNDC.
The MGXS data can also be plotted using the openmc.plot_xs command; however, we will do this manually here to show how the openmc.Mgxs.get_xs method can be used to obtain data.
End of explanation
"""
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H1']['mean']
o16 = df[df['nuclide'] == 'O16']['mean']
# Cast DataFrames as NumPy arrays
h1 = h1.values
o16 = o16.values
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
"""
Explanation: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
End of explanation
"""
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Show the plot on screen
plt.show()
"""
Explanation: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
End of explanation
"""
|
bbglab/adventofcode
|
2018/ferran/day12/subterranean_sustainability.ipynb
|
mit
|
initial = '.##..#.#..##..##..##...#####.#.....#..#..##.###.#.####......#.......#..###.#.#.##.#.#.###...##.###.#'
r = ! cat input.txt | tr '\n' ';'
r = dict(list(map(lambda x: tuple(x.split(' => ')), r[0].split(';')[:-1])))
def evolve(state, rules, time):
s = state
for t in range(1, time + 1):
n = len(s)
s = '....' + s + '....'
new = []
for i in range(n + 5):
k = s[i: i + 5]
new.append(rules.get(k, '.'))
s = ''.join(new)
yield list(filter(lambda x: s[x + 2 * t] == '#', range(-2 * t, len(state) + 2 * t)))
def sumofpots(state, rules, time):
final = None
for l in evolve(state, rules, time):
final = l
return sum(final)
"""
Explanation: Part 1
End of explanation
"""
initial_test = '#..#.#..##......###...###'
rules_test = ! cat input_test.txt | tr '\n' ';'
rules_test = dict(list(map(lambda x: tuple(x.split(' => ')), rules_test[0].split(';')[:-1])))
sumofpots(initial_test, rules_test, 20)
"""
Explanation: Test
End of explanation
"""
sumofpots(initial, r, 20)
"""
Explanation: Solution
End of explanation
"""
forever = 50000000000
def pattern_repetition(state, rules, time):
hashes = {}
period = None
for i, l in enumerate(evolve(state, rules, time)):
sig = hash(tuple([c - l[0] for c in l]))
if sig in hashes:
period = i - hashes[sig]
break
        hashes[sig] = i  # sig is already a hash value; use it directly as the key
return i + 1, period, l
generation, period, l = pattern_repetition(initial, r, forever)
generation, period
len(l) * (forever - generation) + sum(l)
"""
Explanation: Part 2
A pattern is a configuration of plants modulo shifting. Every pattern has the following signature: the sequence of pot positions occupied by plants, measured relative to the leftmost occupied pot, which we take as 0. Once a signature repeats, the configuration only shifts from one generation to the next; the final extrapolation assumes a shift of one pot per generation, so the sum of pot numbers grows by the number of plants for each remaining generation.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/cas/cmip6/models/fgoals-f3-h/seaice.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-h', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CAS
Source ID: FGOALS-F3-H
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, some models parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but there is an assumed distribution and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/launching_into_ml/labs/2_first_model.ipynb
|
apache-2.0
|
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
"""
Explanation: First BigQuery ML models for Taxifare Prediction
Learning Objectives
* Choose the correct BigQuery ML model type and specify options
* Evaluate the performance of your ML model
* Improve model performance through data quality cleanup
* Create a Deep Neural Network (DNN) using SQL
Overview
In this notebook, we will use BigQuery ML to build our first models for taxifare prediction. BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets. We'll start by creating a dataset to hold all the models we create in BigQuery.
Set environment variables
End of explanation
"""
%%bash
# Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: serverlessml"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:serverlessml
echo "\nHere are your current datasets:"
bq ls
fi
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
"""
Explanation: Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
serverlessml.model1_rawdata
# TODO 1: Choose the correct ML model_type for forecasting:
# i.e. Linear Regression (linear_reg) or Logistic Regression (logistic_reg)
# Enter in the appropriate ML OPTIONS() in the line below:
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
"""
Explanation: Model 1: Raw data
Let's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this.
The model will take a minute or so to train. When it comes to ML, this is blazing fast.
End of explanation
"""
%%bigquery
# TODO 2: Specify the command to evaluate your newly trained model
SELECT * FROM
"""
Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data:
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model1_rawdata)
"""
Explanation: Let's report just the error we care about, the Root Mean Squared Error (RMSE)
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model1_rawdata, (
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers # treat as decimal
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2
# Placeholder for additional filters as part of TODO 3 later
))
"""
Explanation: We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6.
Note that the error is going to depend on the dataset that we evaluate it on.
We can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this: we want to keep the benchmark dataset for the final evaluation, not make decisions with it all along the way. If we did that, our test dataset would no longer be truly independent.
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE
serverlessml.cleaned_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
%%bigquery
-- LIMIT 0 is a free query, this allows us to check that the table exists.
SELECT * FROM serverlessml.cleaned_training_data
LIMIT 0
%%bigquery
CREATE OR REPLACE MODEL
serverlessml.model2_cleanup
OPTIONS(input_label_cols=['fare_amount'],
model_type='linear_reg') AS
SELECT
*
FROM
serverlessml.cleaned_training_data
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model2_cleanup)
"""
Explanation: What was the RMSE from the above?
TODO 3: Now apply the below filters to the previous query inside the WHERE clause. Does the performance improve? Why or why not?
sql
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
Model 2: Apply data cleanup
Recall that we did some data cleanup in the previous lab. Let's do those before training.
This is a dataset that we will need quite frequently in this notebook, so let's extract it first.
End of explanation
"""
%%bigquery
-- This training takes on the order of 15 minutes.
CREATE OR REPLACE MODEL
serverlessml.model3b_dnn
# TODO 4a: Choose correct BigQuery ML model type for DNN and label field
# Options: dnn_regressor, linear_reg, logistic_reg
OPTIONS() AS
SELECT
*
FROM
serverlessml.cleaned_training_data
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL serverlessml.model3b_dnn)
"""
Explanation: Model 3: More sophisticated models
What if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery:
DNN
To create a DNN, simply specify dnn_regressor for the model_type and add your hidden layers.
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
# TODO 4b: What is the command to see how well a
# ML model performed? ML.What?
FROM
ML.WHATCOMMAND(MODEL serverlessml.model3b_dnn, (
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers,
'unused' AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
))
"""
Explanation: Nice!
Evaluate DNN on benchmark dataset
Let's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. You cannot compare two models unless you have run them on the same withheld data.
End of explanation
"""
|
BrownDwarf/ApJdataFrames
|
notebooks/Devor2008.ipynb
|
mit
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from astropy.io import ascii, votable, misc
"""
Explanation: ApJdataFrames
Devor et al. 2008
Title: IDENTIFICATION, CLASSIFICATIONS, AND ABSOLUTE PROPERTIES OF 773 ECLIPSING BINARIES FOUND IN THE TRANS-ATLANTIC EXOPLANET SURVEY
Authors: Jonathan Devor, David Charbonneau, Francis T O'Donovan, Georgi Mandushev, and Guillermo Torres
Data is from this paper:
http://iopscience.iop.org/article/10.1088/0004-6256/135/3/850/
End of explanation
"""
#! mkdir ../data/Devor2008
#! curl http://iopscience.iop.org/1538-3881/135/3/850/suppdata/aj259648_mrt7.txt >> ../data/Devor2008/aj259648_mrt7.txt
! du -hs ../data/Devor2008/aj259648_mrt7.txt
"""
Explanation: Download Data
End of explanation
"""
dat = ascii.read('../data/Devor2008/aj259648_mrt7.txt')
! head ../data/Devor2008/aj259648_mrt7.txt
dat.info
df = dat.to_pandas()
df.head()
df.columns
sns.distplot(df.Per, norm_hist=False, kde=False)
"""
Explanation: Not too big at all.
Data wrangle-- read in the data
End of explanation
"""
gi = (df.RAh == 4) & (df.RAm == 16) & (df.DEd == 28) & (df.DEm == 7)
gi.sum()
df[gi].T
"""
Explanation: Look for LkCa 4
End of explanation
"""
! head ../data/Devor2008/T-Tau0-01262.lc
cols = ['HJD-2400000', 'r_band', 'r_unc']
lc_raw = pd.read_csv('../data/Devor2008/T-Tau0-01262.lc', names=cols, delim_whitespace=True)
lc_raw.head()
lc_raw.count()
sns.set_context('talk')
plt.plot(lc_raw['HJD-2400000'], lc_raw.r_band, '.')
plt.ylim(0.6, -0.6)
plt.plot(np.mod(lc_raw['HJD-2400000'], 3.375)/3.375, lc_raw.r_band, '.', alpha=0.5)
plt.xlabel('phase')
plt.ylabel(r'$\Delta \;\; r$')
plt.ylim(0.6, -0.6)
plt.plot(np.mod(lc_raw['HJD-2400000'], 6.74215), lc_raw.r_band, '.')
plt.ylim(0.6, -0.6)
"""
Explanation: The source is named T-Tau0-01262
Get the raw lightcurve
http://jdevor.droppages.com/Catalog.html
The light curve files have the following 3-column format:
Column 1 - the Heliocentric Julian date (HJD), minus 2400000
Column 2 - normalized r-band magnitude
Column 3 - magnitude uncertainty
End of explanation
"""
! ls /Users/gully/Downloads/catalog/T-Tau0-* | head -n 10
lc2 = pd.read_csv('/Users/gully/Downloads/catalog/T-Tau0-00397.lc', names=cols, delim_whitespace=True)
plt.plot(lc2['HJD-2400000'], lc2.r_band, '.')
plt.ylim(0.6, -0.6)
this_p = df.Per[df.Name == 'T-Tau0-00397'].values[0]  # extract the scalar period in days
plt.plot(np.mod(lc2['HJD-2400000'], this_p), lc2.r_band, '.', alpha=0.5)
plt.xlabel('phase')
plt.ylabel(r'$\Delta \;\; r$')
plt.ylim(0.6, -0.6)
"""
Explanation: The Devor et al. period is just twice the photometric period of 3.375 days.
Are those large vertical drops flares?
End of explanation
"""
|
4dsolutions/Python5
|
S_Train.ipynb
|
mit
|
from IPython.display import YouTubeVideo
YouTubeVideo("1VXDejQcAWY")
"""
Explanation: Oregon Curriculum Network <br />
Discovering Math with Python
All Aboard the S Train!
Those of us exploring the geometry of thinking laid out in Synergetics (subtitled explorations in the geometry of thinking) will be familiar with the Jitterbug Transformation, popularized in this Youtube introduction to the International Mathematical Union logo:
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/46320625832/in/dateposted-public/" title="imu_logo_u2be"><img src="https://farm5.staticflickr.com/4815/46320625832_7c33a06f9e.jpg" width="500" height="461" alt="imu_logo_u2be"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
End of explanation
"""
import gmpy2
gmpy2.get_context().precision=200
root2 = gmpy2.sqrt(2)
root7 = gmpy2.sqrt(7)
root5 = gmpy2.sqrt(5)
root3 = gmpy2.sqrt(3)
# phi
𝜙 = (gmpy2.sqrt(5) + 1)/2
# Synergetics modules
Smod = (𝜙 **-5)/2
Emod = (root2/8) * (𝜙 ** -3)
sfactor = Smod/Emod
print("sfactor: {:60.57}".format(sfactor))
"""
Explanation: The cuboctahedron and icosahedron are related by having the same edge length. The ratio of the two, in terms of volume, is: $20 : 5 \sqrt{2} \phi^2$.
Let's call this the "S factor". It also happens to be the Smod/Emod volume ratio.
End of explanation
"""
sfactor = 2 * root2 * 𝜙 ** -2 # 2 * (7 - 3 * root5).sqrt()
print("sfactor: {:60.57}".format(sfactor))
# sfactor in terms of phi-scaled emods
e3 = Emod * 𝜙 ** -3
print("sfactor: {:60.57}".format(24*Emod + 8*e3))
# length of skew icosa edge EF Fig 988.13A below, embedded in
# octa of edge a=2
EF = 2 * gmpy2.sqrt(7 - 3 * root5)
print("sfactor: {:60.57}".format(EF))
"""
Explanation: Icosa * sfactor = Cubocta.
End of explanation
"""
icosatet = 1/sfactor
icosatet
JB_icosa = 20 * icosatet
print("Icosahedron: {:60.57}".format(JB_icosa)) # for volume of JB icosahedron
"""
Explanation: The cuboctahedron that jitterbugs into an icosahedron takes twenty regular tetrahedrons -- in volume, eight of them so formed (the other twelve paired in six half-octahedra) -- into twenty irregular tetrahedrons in the corresponding regular icosahedron (same surface edge lengths).
Each of those 20 irregular tetrahedrons we may refer to as an "icosatet" (IcosaTet).
The computation below shows the icosatet (1/sfactor) times 20, giving the same volume as the "Jitterbug icosa" (edges 2R).
End of explanation
"""
icosa_within = 2.5 * sfactor * sfactor
icosa_within
"""
Explanation: From Figure 988.00 in Synergetics:
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/46319721212/in/dateposted-public/" title="Jitterbug Relation"><img src="https://farm5.staticflickr.com/4908/46319721212_5144721a96.jpg" width="500" height="295" alt="Jitterbug Relation"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">Jitterbug Relationship</div>
The S Train is Leaving the Station...
However there's another twinning or pairing of the cubocta and icosa in Synergetics that arises when we fit both into a contextualizing octahedron.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/46009103944/in/dateposted-public/" title="Phi Scaled S Module"><img src="https://farm5.staticflickr.com/4847/46009103944_bda5a5f0c3.jpg" width="500" height="500" alt="Phi Scaled S Module"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Consider the canonical octahedron of volume 4, with a cuboctahedron inside, its triangular faces flush with the octahedron's. Its volume is 2.5.
Now consider an icosahedron with eight of its twenty faces flush to the same octahedron, but skewed (tilted) relative to the cuboctahedron's.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/31432640247/in/dateposted-public/" title="icosa_within"><img src="https://farm5.staticflickr.com/4876/31432640247_14b56cdc4b.jpg" width="500" height="409" alt="icosa_within"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">From Figure 988.12 in Synergetics by RBF</div>
The relationship between this pair is different than in the Jitterbug Transformation. For one thing, the edges are no longer the same length, and for another, the icosahedron's edges are longer, and its volume is greater.
However, despite these differences, the S-Factor is still involved.
For one thing: the longer edge of the icosahedron is the S-factor, given that the edges and radii of the cuboctahedron of volume 2.5 are all R = 1 = the radius of one CCP sphere -- each encased by the volume 6 RD (see below).
From Figure 988.00 in Synergetics:
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/46319721512/in/dateposted-public/" title="Skew Relationship"><img src="https://farm5.staticflickr.com/4827/46319721512_e1f04c3ca2.jpg" width="500" height="272" alt="Skew Relationship"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">Cuboctahedron and Icosahedron<br /> both with faces flush to Octahedron of volume 4</div>
For another: the cuboctahedron's volume, times S-Factor to the 2nd power, gives the icosahedron's volume.
End of explanation
"""
smod = (4 - icosa_within)/24
print("smod: {:60.57}".format(smod))
(𝜙**-5)/2
print("smod: {:60.57}".format(smod))
"""
Explanation: Verifying S Module Volume
The "skew icosahedron" inside the volume 4 octahedron is what we use to derive the 24 S modules, which make up the difference in volume between the two. The S module's volume may also be expressed in terms of φ.
End of explanation
"""
import tetvols
# assume a = 1 D
a = 1
# common apex is F
FH = 1/𝜙
FE = sfactor/2
FG = root3 * FE/2
# connecting the base (same order, i.e. H, E, G)
HE = (3 - root5)/2
EG = FE/2
GH = EG
Smod = tetvols.ivm_volume((FH, FE, FG, HE, EG, GH))
print("smod: {:60.57}".format(Smod))
print("Octa Edge = 1")
print("FH: {:60.57}".format(FH))
print("FE: {:60.57}".format(FE))
print("FG: {:60.57}".format(FG))
print("HE: {:60.57}".format(HE))
print("EG: {:60.57}".format(EG))
print("GH: {:60.57}".format(GH))
"""
Explanation: Lets look at the S module in more detail, and compute its volume from scratch, using a Python formula.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/45589318711/in/dateposted-public/" title="dejong"><img src="https://farm2.staticflickr.com/1935/45589318711_677d272397.jpg" width="417" height="136" alt="dejong"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<br />
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/32732893998/in/dateposted-public/" title="smod_dimensions"><img src="https://farm5.staticflickr.com/4892/32732893998_cd5f725f3d.jpg" width="500" height="484" alt="smod_dimensions"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Picking a common apex for three lengths (radials), and then connecting the dots around the base so defined, is step one in using our algorithm. We'll use gmpy2 for its extended precision capabilities.
The Tetrahedron class in tetravolume module is set by default to work in D units (D = 2R) i.e. the canonical tetrahedron, octahedron, icosahedron, all have edges 1.
End of explanation
"""
print("Octa Edge = 2")
print("FH: {:60.57}".format(FH * 2))
print("FE: {:60.57}".format(FE * 2))
print("FG: {:60.57}".format(FG * 2))
print("HE: {:60.57}".format(HE * 2))
print("EG: {:60.57}".format(EG * 2))
print("GH: {:60.57}".format(GH * 2))
"""
Explanation: Setting a = 2 give us the following edges table:
End of explanation
"""
SmallGuy = 20 * (1/sfactor) ** 3
SmallGuy
print("SmallGuy: {:60.57}".format(SmallGuy))
"""
Explanation: The S Train
The fact that the cuboctahedron and icosahedron relate in two ways via a common S-factor suggests the metaphor of a train or subway route.
Start at the cuboctahedron and follow the Jitterbug Pathway (one stop, one application of the S-factor, but as a reciprocal, since we're dropping in volume).
We've arrived at the Jitterbug icosahedron. Applying 1/S twice more will take us to another cuboctahedron (dubbed "SmallGuy" in some writings). Its triangular faces overlap those of the Jitterbug icosahedron.
End of explanation
"""
print("SmallGuy Edge: {:56.54}".format(2 * (1/sfactor))) # SmallGuy edge
print("Icosahedron: {:56.53}".format(JB_icosa)) # for volume of JB icosahedron
"""
Explanation: SmallGuy's edges are 2R times 1/sfactor, since linear change is a 3rd root of volumetric change (when shape is held constant).
Interestingly, this result is one tenth the JB_icosahedron's volume, but a linear measure in this instance.
End of explanation
"""
Syn3 = gmpy2.sqrt(gmpy2.mpq(9,8))
JB_icosa = SmallGuy * sfactor * sfactor
print("JB Icosa: {:60.57}".format(JB_icosa))
JB_cubocta = JB_icosa * sfactor
print("JB Cubocta: {:60.57}".format(JB_cubocta))
SuperRT = JB_cubocta * Syn3
SuperRT # 20*S3
print("SuperRT: {:60.57}".format(SuperRT))
"""
Explanation: When going in the other direction (smaller to bigger), apply the S factor directly (not the reciprocal) since the volumes increase.
For example start at the cuboctahedron of volume 2.5, apply the S factor twice to get the corresponding skew icosahedron ("Icosahedron Within"), its faces embedded in the same volume 4 octahedron (see above).
S is for "Skew"...
However, we might also say "S" is for "Sesame Street" and for "spine" as the Concentric Hierarchy forms the backbone of Synergetics and becomes the familiar neighborhood, what we keep coming back to.
... and for "Subway"
The idea of scale factors taking us from one "station stop" to another within the Concentric Hierarchy jibes with the "hypertoon" concept: smooth transformations terminating in "switch points" from which other transformations also branch (a nodes and edges construct, like the polyhedrons themselves).
Successive applications of both S and Syn3 take us to "station stops" along the "S train" e.g.
$$SmallGuy \rightarrow S^2 \rightarrow icosa \rightarrow S \rightarrow cubocta \rightarrow Syn3 \rightarrow RT$$
and so on. Bigger and bigger (or other way).
Remember Syn3? That's also our $IVM \Leftrightarrow XYZ$ conversion constant. Yet here we're not using it that way, as we're staying in tetravolumes the whole time.
However, it is also the case that the ratio between the volume of the cube of edge R and the volume of the tetrahedron of edge D (D = 2R) is the same as that between the RT and the volume 20 cuboctahedron, where the long diagonals of the RT = the edges of the cubocta.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/21077777642/in/photolist-FRd2LJ-y7z7Xm-frqefo-8thDyL-6zKk1y-5KBFWR-5KFVMm-5uinM4" title="Conversion Constant"><img src="https://farm1.staticflickr.com/702/21077777642_9803ddb65e.jpg" width="500" height="375" alt="Conversion Constant"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">Cube edges = 1/2 x Tetrahedron edges;<br /> Cube:Tetrahedron volume ratio = S3</div>
End of explanation
"""
volume1 = SuperRT - JB_icosa
volume2 = (4 - 24*Smod) * (1/sfactor)
print("volume1: {:60.57}".format(volume1))
print("volume2: {:60.57}".format(volume2))
# one more application of the 1/sfactor gives the 2.5 cubocta
print("Edged 1 Cubocta: {:60.57}".format(volume2 * (1/sfactor)))
"""
Explanation: The SuperRT is the RT defined by the Jitterbug icosa (JB_icosa) and its dual, the Pentagonal Dodecahedron of tetravolume $3\sqrt{2}(\phi^2 + 1)$.
The S train through the 2.5 cubocta, which stops at "Icosa Within", does not meet up with the S train through the 20 cubocta, which runs to SmallGuy.
The 20 and 2.5 cubocta stations are linked by "Double D express" (halve or double all edge lengths).
$$Cubocta 20 \rightarrow DoubleD \rightarrow Cubocta 2.5 \rightarrow S^2 \rightarrow Icosa Within \rightarrow + 24 Smods \rightarrow Octa4$$
The Phi Commuter does a lot of the heavy lifting, multiplying all edges by phi or 1/phi, as in the ...e6, e3, E, E3, E6... progression.
Multiplying edges by x entails multiplying volume by $x^3$.
Take Phi Commuter from SuperRT to the 120 E Mods RT (with radius R), get off and transfer to the T Mods RT (mind the gap of ~0.9994), then take the local to the 7.5 RT.
The space-filling RD6 will be at the same corner (they share vertexes).
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/4178618670/in/photolist-28MC8r3-27PVk6E-27PVjh5-27PVkvN-27PViQ3-27PVjC5-KgsYkX-KgsXRk-KgsZ2B-27KsgFG-27xwi3K-9WvZwa-97TTvV-7nfvKu" title="The 6 and the 7.5"><img src="https://farm3.staticflickr.com/2767/4178618670_1b4729e527.jpg" width="500" height="456" alt="The 6 and the 7.5"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">RT of volume 7.5 and RD of volume 6<br /> with shared vertexes (by David Koski using vZome)</div>
The RD6's long diagonals make Octa4, your bridge to Icosa Within and the S line to the 2.5 cubocta.
$$SuperRT \rightarrow \phi Commuter \rightarrow Emod RT \rightarrow Tmod RT \rightarrow 3/2 \rightarrow 7.5 RT \rightarrow RD6 \rightarrow Octa4$$
This kind of touring by scale factor and switching pathways is called "taking subways around the neighborhood" (i.e. Sesame Street).
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/31433920137/in/dateposted-public/" title="Sesame Street Subway"><img src="https://farm5.staticflickr.com/4812/31433920137_ecb829e3bd.jpg" width="500" height="375" alt="Sesame Street Subway"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Here's another one, derived by David Koski in early March, 2021:
Icosa of (Octa4 - 24 S modules) $\rightarrow$ S-factor down $\rightarrow$ the volumetric difference between SuperRT and the Jitterbug Icosa (which latter inscribes in the former as long face diagonals).
End of explanation
"""
|
steven-murray/halomod
|
devel/einasto_profile.ipynb
|
mit
|
%pylab inline
from halomod import HaloModel
from scipy.interpolate import InterpolatedUnivariateSpline as spline
hm = HaloModel(profile_model="Einasto")
"""
Explanation: Einasto Profile
In this notebook we visually test the Einasto profile (and do some timing etc.)
End of explanation
"""
_ = hm.profile.rho(hm.r,hm.m)
"""
Explanation: Density Profile
First, to check for blatant errors, we run with full vectors for both $r$ and $m$:
End of explanation
"""
hm.update(profile_model="Einasto")
plot(hm.r, hm.profile.rho(hm.r,1e12),label="m=12",color="b")
plot(hm.r, hm.profile.rho(hm.r,1e14),label="m=14",color="r")
plot(hm.r, hm.profile.rho(hm.r,1e16),label="m=16",color="g")
hm.update(profile_model="NFW")
plot(hm.r, hm.profile.rho(hm.r,1e12),label="m=12",ls="--",color="b")
plot(hm.r, hm.profile.rho(hm.r,1e14),label="m=14",ls="--",color="r")
plot(hm.r, hm.profile.rho(hm.r,1e16),label="m=16",ls="--",color="g")
legend(loc=0)
xscale('log')
yscale('log')
show()
"""
Explanation: Now plot versus $r$:
End of explanation
"""
hm.update(profile_model="Einasto")
plot(hm.m, hm.profile.rho(0.01,hm.m),label="r=0.01",color="b")
plot(hm.m, hm.profile.rho(0.1,hm.m),label="r=0.1",color="r")
plot(hm.m, hm.profile.rho(1.0,hm.m),label="r=1",color="g")
hm.update(profile_model="NFW")
plot(hm.m, hm.profile.rho(0.01,hm.m),ls="--",color="b")
plot(hm.m, hm.profile.rho(0.1,hm.m),ls="--",color="r")
plot(hm.m, hm.profile.rho(1.0,hm.m),ls="--",color="g")
legend(loc=0)
xscale('log')
yscale('log')
show()
"""
Explanation: Now plot versus $m$:
End of explanation
"""
hm.update(profile_model="Einasto")
#plot(hm.k, hm.profile.u(hm.k,1e12),label="m=12",color="b")
#print hm.profile.u(hm.k,1e12)
#plot(hm.k, hm.profile.u(hm.k,1e14),label="m=14",color="r")
#plot(hm.k, hm.profile.u(hm.k,1e16),label="m=16",color="g")
plot(hm.k, hm.profile.u(hm.k,1e12),label="m=12",color="b")
#print hm.profile.u(hm.k,1e12)
plot(hm.k, hm.profile.u(hm.k,1e14),label="m=14",color="r")
plot(hm.k, hm.profile.u(hm.k,1e16),label="m=16",color="g")
hm.update(profile_model="NFW")
plot(hm.k, hm.profile.u(hm.k,1e12),label="m=12",ls="--",color="b")
plot(hm.k, hm.profile.u(hm.k,1e14),label="m=14",ls="--",color="r")
plot(hm.k, hm.profile.u(hm.k,1e16),label="m=16",ls="--",color="g")
hm.update(profile_model="Einasto")
legend(loc=0)
xscale('log')
yscale('log')
show()
"""
Explanation: Fourier Transform
First plot against $k$:
End of explanation
"""
hm.update(profile_model="Einasto")
plot(hm.m, hm.profile.u(0.01,hm.m),label="k=0.01",color="b")
plot(hm.m, hm.profile.u(5,hm.m),label="k=5",color="r")
plot(hm.m, hm.profile.u(1000,hm.m),label="k=1000",color="g")
hm.update(profile_model="NFW")
plot(hm.m, hm.profile.u(0.01,hm.m),ls="--",color="b")
plot(hm.m, hm.profile.u(5,hm.m),ls="--",color="r")
plot(hm.m, hm.profile.u(1000,hm.m),ls="--",color="g")
legend(loc=0)
xscale('log')
yscale('log')
show()
"""
Explanation: Now plot against $m$:
End of explanation
"""
hm.update(profile_model="Einasto")
%timeit hm.profile.u(hm.k,hm.m)
hm.update(profile_model="NFW")
%timeit hm.profile.u(hm.k,hm.m)
"""
Explanation: We may have to be a bit wary of the high-mass, high-k tail, but other than that we should be okay. Finally, to make sure things run properly, try full matrix:
End of explanation
"""
def f(x,a=0.18):
return np.exp((-2/a)*(x**a-1))
def _p(K, c):
minsteps = 1000
res = np.zeros((len(K),len(c)))
for ik, kappa in enumerate(K):
smallest_period = np.pi / kappa
dx = smallest_period / 8
nsteps = max(int(np.ceil(c.max() / dx)),minsteps)
x, dx = np.linspace(0, c.max(), nsteps, retstep=True)
spl = spline(x, x*f(x)*np.sin(kappa*x)/kappa)
intg = spl.antiderivative()
res[ik,:] = intg(c) - intg(0)
return np.clip(res,0,None)
K = np.logspace(-4,4,500)
c = np.logspace(0,2,1000)
pk = _p(K,c)
#plot(K,pk)
#xscale('log')
#yscale('log')
np.savez("uKc_einasto.npz",pk=pk,K=K,c=c)
from scipy.interpolate import RectBivariateSpline
def _newp(K,c):
    # Interpolate the pre-computed u(K,c) table instead of re-doing the integral.
    data = np.load("uKc_einasto.npz")
    pk = data['pk']
    _k = data['K']
    _c = data['c']
    c = np.atleast_1d(c)
    if np.isscalar(K):
        K = np.atleast_2d(K)
    if K.ndim < 2:
        if len(K)!=len(c):
            K = np.atleast_2d(K).T # should be len(rs) x len(k)
        else:
            K = np.atleast_2d(K)
    # Floor non-positive entries so the log-log spline below is well defined.
    pk[pk<=0] = 1e-8
    spl = RectBivariateSpline(np.log(_k),np.log(_c),np.log(pk))
    # Evaluate the spline at the requested (K, c) combinations, reshape to K's shape.
    cc = np.repeat(c,K.shape[0])
    return np.exp(hm.profile._reduce(spl.ev(np.log(K.flatten()),np.log(cc)).reshape(K.shape)))
c,K = hm.profile._get_k_variables(hm.k,hm.m)
%timeit _newp(K,c)
plot(np.logspace(-4,4,500),pk[:,0]/ _newp(np.logspace(-4,4,500),1))
plot(np.logspace(-4,4,500),_p(np.logspace(-4,4,500),np.atleast_1d(1)).flatten()/_newp(np.logspace(-4,4,500),1))
plot(np.logspace(-4,4,500),pk[:,0]/ _p(np.logspace(-4,4,500),np.atleast_1d(1)).flatten())
plot()
xscale('log')
"""
Explanation: Perhaps it's better to pre-cache results.
Attempt to Cache Results
End of explanation
"""
|
dennys-bd/Coursera-Machine-Learning-Specialization
|
Course 2 - ML, Regression/week-2-multiple-regression-assignment-2-blank.ipynb
|
mit
|
import graphlab
"""
Explanation: Regression Week 2: Multiple Regression (gradient descent)
In the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.
In this notebook we will cover estimating multiple regression weights via gradient descent. You will:
* Add a constant column of 1's to a graphlab SFrame to account for the intercept
* Convert an SFrame into a Numpy array
* Write a predict_output() function using Numpy
* Write a numpy function to compute the derivative of the regression weights with respect to a single feature
* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.
* Use the gradient descent function to estimate regression weights for multiple features
Fire up graphlab create
Make sure you have the latest version of graphlab (>= 1.7)
End of explanation
"""
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
import numpy as np # note this allows us to refer to numpy as np instead
"""
Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Convert to Numpy Array
Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional "array").
Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the "feature matrix" by the "weight vector".
First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). One route is graphlab's built-in .to_dataframe(), which converts the SFrame into a Pandas (another python library) dataframe, followed by Pandas' .as_matrix(); in the function below we use the SFrame's .to_numpy() method to go straight to a numpy matrix.
End of explanation
"""
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
"""
Explanation: Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and an target feature e.g. ('price') and will return two things:
* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')
* A numpy array containing the values of the output
With this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates)
Please note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!
End of explanation
"""
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'
print example_output[0] # and the corresponding output
"""
Explanation: For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:
End of explanation
"""
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
"""
Explanation: Predicting output given regression weights
Suppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0. This is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this:
End of explanation
"""
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
"""
Explanation: np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations is just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:
End of explanation
"""
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
"""
Explanation: If you want to test your code run the following cell:
End of explanation
"""
def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = np.dot(errors, feature)*2
return(derivative)
"""
Explanation: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.
Since the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:
(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)^2
Where we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:
2*(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)* [feature_i]
The term inside the parentheses is just the error (difference between prediction and output). So we can re-write this as:
2*error*[feature_i]
That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant feature, this is just twice the sum of the errors!
Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors.
With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).
End of explanation
"""
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
"""
Explanation: To test your feature derivartive run the following:
End of explanation
"""
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
while not converged:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
gradient_sum_squares = 0 # initialize the gradient sum of squares
# while we haven't reached the tolerance yet, update each feature's weight
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
derivative = feature_derivative(errors, feature_matrix[:, i])
# add the squared value of the derivative to the gradient sum of squares (for assessing convergence)
gradient_sum_squares = gradient_sum_squares + derivative * derivative
# subtract the step size times the derivative from the current weight
weights[i] = weights[i] - derivative*step_size
# compute the square-root of the gradient sum of squares to get the gradient magnitude:
gradient_magnitude = sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
"""
Explanation: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.
With this in mind, complete the gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criteria.
End of explanation
"""
train_data,test_data = sales.random_split(.8,seed=0)
"""
Explanation: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features.
For similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.
Running the Gradient Descent as Simple Regression
First let's split the data into training and test data.
End of explanation
"""
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
"""
Explanation: Although the gradient descent is designed for multiple regression since the constant is now a feature we can use the gradient descent function to estimat the parameters in the simple regression on squarefeet. The folowing cell sets up the feature_matrix, output, initial weights and step size for the first model:
End of explanation
"""
weights = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)
print weights
"""
Explanation: Next run your gradient descent with the above parameters.
End of explanation
"""
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
"""
Explanation: How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
Quiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?
Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first):
End of explanation
"""
predictions = predict_output(test_simple_feature_matrix, weights)
print predictions
"""
Explanation: Now compute your predictions using test_simple_feature_matrix and your weights from above.
End of explanation
"""
round(predictions[0], 2)
"""
Explanation: Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?
End of explanation
"""
residuals = test_output - predictions
RSS = sum(residuals*residuals)
print RSS
"""
Explanation: Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
End of explanation
"""
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
"""
Explanation: Running a multiple regression
Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:
End of explanation
"""
weights_more_features = regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance)
"""
Explanation: Use the above parameters to estimate the model weights. Record these values for your quiz.
End of explanation
"""
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
predicted = predict_output(test_feature_matrix, weights_more_features)
"""
Explanation: Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
End of explanation
"""
round(predicted[0], 2)
"""
Explanation: Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?
End of explanation
"""
test_data[0]['price']
"""
Explanation: What is the actual price for the 1st house in the test data set?
End of explanation
"""
residuals_2 = test_output - predicted
RSS_2 = sum(residuals_2**2)
print RSS_2
"""
Explanation: Quiz Question: Which estimate was closer to the true price for the 1st house on the TEST data set, model 1 or model 2?
Now use your predictions and the output to compute the RSS for model 2 on TEST data.
End of explanation
"""
print 'Residual 1: %f e Residual 2: %f' % (RSS, RSS_2)
"""
Explanation: Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data?
End of explanation
"""
|
KaiSzuttor/espresso
|
doc/tutorials/08-visualization/08-visualization.ipynb
|
gpl-3.0
|
from matplotlib import pyplot
import espressomd
import numpy
espressomd.assert_features("LENNARD_JONES")
# system parameters (10000 particles)
box_l = 10.7437
density = 0.7
# interaction parameters (repulsive Lennard-Jones)
lj_eps = 1.0
lj_sig = 1.0
lj_cut = 1.12246
lj_cap = 20
# integration parameters
system = espressomd.System(box_l=[box_l, box_l, box_l])
system.time_step = 0.0001
system.cell_system.skin = 0.4
system.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=42)
# warmup integration (with capped LJ potential)
warm_steps = 100
warm_n_times = 30
# do the warmup until the particles have at least the distance min_dist
min_dist = 0.9
# integration
int_steps = 1000
int_n_times = 100
#############################################################
# Setup System #
#############################################################
# interaction setup
system.non_bonded_inter[0, 0].lennard_jones.set_params(
epsilon=lj_eps, sigma=lj_sig,
cutoff=lj_cut, shift="auto")
system.force_cap = lj_cap
# particle setup
volume = box_l * box_l * box_l
n_part = int(volume * density)
for i in range(n_part):
system.part.add(id=i, pos=numpy.random.random(3) * system.box_l)
act_min_dist = system.analysis.min_dist()
#############################################################
# Warmup Integration #
#############################################################
# set LJ cap
lj_cap = 20
system.force_cap = lj_cap
# warmup integration loop
i = 0
while (i < warm_n_times and act_min_dist < min_dist):
system.integrator.run(warm_steps)
# warmup criterion
act_min_dist = system.analysis.min_dist()
i += 1
# increase LJ cap
lj_cap = lj_cap + 10
system.force_cap = lj_cap
#############################################################
# Integration #
#############################################################
# remove force capping
lj_cap = 0
system.force_cap = lj_cap
def main():
for i in range(int_n_times):
print("\rrun %d at time=%.0f " % (i, system.time), end='')
system.integrator.run(int_steps)
print('\rSimulation complete')
main()
"""
Explanation: Tutorial 8: Visualization
Introduction
When you are running a simulation, it is often useful to see what is going on
by visualizing particles in a 3D view or by plotting observables over time.
That way, you can easily determine things like whether your choice of parameters
has led to a stable simulation or whether your system has equilibrated. You may
even be able to do your complete data analysis in real time as the simulation progresses.
Thanks to ESPResSo's Python interface, we can make use of standard libraries
like Mayavi or OpenGL (for interactive 3D views) and Matplotlib (for line graphs)
for this purpose. We will also use NumPy, which both of these libraries depend on,
to store data and perform some basic analysis.
Simulation
First, we need to set up a simulation.
We will simulate a simple Lennard-Jones liquid.
End of explanation
"""
matplotlib_notebook = True # toggle this off when outside IPython/Jupyter
# setup matplotlib canvas
pyplot.xlabel("Time")
pyplot.ylabel("Energy")
plot, = pyplot.plot([0], [0])
if matplotlib_notebook:
from IPython import display
else:
pyplot.show(block=False)
# setup matplotlib update function
current_time = -1
def update_plot():
i = current_time
if i < 3:
return None
plot.set_xdata(energies[:i + 1, 0])
plot.set_ydata(energies[:i + 1, 1])
pyplot.xlim(0, energies[i, 0])
pyplot.ylim(energies[:i + 1, 1].min(), energies[:i + 1, 1].max())
# refresh matplotlib GUI
if matplotlib_notebook:
display.clear_output(wait=True)
display.display(pyplot.gcf())
else:
pyplot.draw()
pyplot.pause(0.01)
# re-define the main() function
def main():
global current_time
for i in range(int_n_times):
system.integrator.run(int_steps)
energies[i] = (system.time, system.analysis.energy()['total'])
current_time = i
update_plot()
if matplotlib_notebook:
display.clear_output(wait=True)
system.time = 0 # reset system timer
energies = numpy.zeros((int_n_times, 2))
main()
if not matplotlib_notebook:
pyplot.close()
"""
Explanation: Live plotting
Let's have a look at the total energy of the simulation. We can determine the
individual energies in the system using <tt>system.analysis.energy()</tt>.
We will adapt the <tt>main()</tt> function to store the total energy at each
integration run into a NumPy array. We will also create a function to draw a
plot after each integration run.
End of explanation
"""
from espressomd import visualization
from threading import Thread
visualizer = visualization.openGLLive(system)
# alternative: visualization.mayaviLive(system)
"""
Explanation: Live visualization and plotting
To interact with a live visualization, we need to move the main integration loop into a secondary thread and run the visualizer in the main thread (note that visualization or plotting cannot be run in secondary threads). First, choose a visualizer:
End of explanation
"""
def main():
global current_time
for i in range(int_n_times):
system.integrator.run(int_steps)
energies[i] = (system.time, system.analysis.energy()['total'])
current_time = i
visualizer.update()
system.time = 0 # reset system timer
"""
Explanation: Then, re-define the <tt>main()</tt> function to run the visualizer:
End of explanation
"""
# setup new matplotlib canvas
if matplotlib_notebook:
pyplot.xlabel("Time")
pyplot.ylabel("Energy")
plot, = pyplot.plot([0], [0])
# execute main() in a secondary thread
t = Thread(target=main)
t.daemon = True
t.start()
# execute the visualizer in the main thread
visualizer.register_callback(update_plot, interval=int_steps // 2)
visualizer.start()
"""
Explanation: Next, create a secondary thread for the <tt>main()</tt> function. However,
as we now have multiple threads, and the first thread is already used by
the visualizer, we cannot call <tt>update_plot()</tt> from
the <tt>main()</tt> anymore.
The solution is to register the <tt>update_plot()</tt> function as a
callback of the visualizer:
End of explanation
"""
|
tuanavu/coursera-university-of-washington
|
machine_learning/4_clustering_and_retrieval/assigment/week2/1_nearest-neighbors-lsh-implementation_graphlab.ipynb
|
mit
|
import numpy as np
import graphlab
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import norm
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
End of explanation
"""
# !conda upgrade -y scipy
"""
Explanation: Upgrading to Scipy 0.16.0 or later. This assignment requires SciPy 0.16.0 or later. To upgrade, uncomment and run the following cell:
End of explanation
"""
wiki = graphlab.SFrame('people_wiki.gl/')
"""
Explanation: Load in the Wikipedia dataset
End of explanation
"""
wiki = wiki.add_row_number()
wiki
"""
Explanation: For this assignment, let us assign a unique ID to each document.
End of explanation
"""
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
wiki
"""
Explanation: Extract TF-IDF matrix
We first use GraphLab Create to compute a TF-IDF representation for each document.
End of explanation
"""
def sframe_to_scipy(column):
"""
Convert a dict-typed SArray into a SciPy sparse matrix.
Returns
-------
mat : a SciPy sparse matrix where mat[i, j] is the value of word j for document i.
mapping : a dictionary where mapping[j] is the word whose values are in column j.
"""
# Create triples of (row_id, feature_id, count).
x = graphlab.SFrame({'X1':column})
# 1. Add a row number.
x = x.add_row_number()
# 2. Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack('X1', ['feature', 'value'])
# Map words into integers using a OneHotEncoder feature transformation.
f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])
# We first fit the transformer using the above data.
f.fit(x)
# The transform method will add a new column that is the transformed version
# of the 'word' column.
x = f.transform(x)
# Get the feature mapping.
mapping = f['feature_encoding']
# Get the actual word id.
x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])
# Create numpy arrays that contain the data for the sparse matrix.
i = np.array(x['id'])
j = np.array(x['feature_id'])
v = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((v, (i, j)), shape=(width, height))
return mat, mapping
"""
Explanation: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https://en.wikipedia.org/wiki/Matrix_(mathematics%29 ) that have a small number of nonzero entries. A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices.
We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format.
End of explanation
"""
start=time.time()
corpus, mapping = sframe_to_scipy(wiki['tf_idf'])
end=time.time()
print end-start
"""
Explanation: The conversion should take a few minutes to complete.
End of explanation
"""
assert corpus.shape == (59071, 547979)
print 'Check passed correctly!'
"""
Explanation: Checkpoint: The following code block should return 'Check passed correctly', indicating that your matrix contains TF-IDF values for 59071 documents and 547979 unique words. Otherwise, it will return Error.
End of explanation
"""
def generate_random_vectors(num_vector, dim):
return np.random.randn(dim, num_vector)
"""
Explanation: Train an LSH model
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
End of explanation
"""
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
"""
Explanation: To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
End of explanation
"""
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
"""
Explanation: We now generate random vectors of the same dimensionality as our vocubulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
End of explanation
"""
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
"""
Explanation: Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide which bin document 0 should go. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
End of explanation
"""
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
"""
Explanation: Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
End of explanation
"""
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
"""
Explanation: We can compute all of the bin index bits at once as follows. Note the absence of the explicit for loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficent manner, unlike the for loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
End of explanation
"""
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
"""
Explanation: All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
End of explanation
"""
np.arange(15, -1, -1)
"""
Explanation: We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer:
Bin index integer
[0,0,0,0,0,0,0,0,0,0,0,0] => 0
[0,0,0,0,0,0,0,0,0,0,0,1] => 1
[0,0,0,0,0,0,0,0,0,0,1,0] => 2
[0,0,0,0,0,0,0,0,0,0,1,1] => 3
...
[1,1,1,1,1,1,1,1,1,1,0,0] => 65532
[1,1,1,1,1,1,1,1,1,1,0,1] => 65533
[1,1,1,1,1,1,1,1,1,1,1,0] => 65534
[1,1,1,1,1,1,1,1,1,1,1,1] => 65535 (= 2^16-1)
By the rules of binary number representation, we just need to compute the dot product between the document vector and the vector consisting of powers of 2:
Notes
End of explanation
"""
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print index_bits
print powers_of_two
print index_bits.dot(powers_of_two)
"""
Explanation: The Operators:
x << y
- Returns x with the bits shifted to the left by y places (and new bits on the right-hand-side are zeros). This is the same as multiplying x by 2**y.
End of explanation
"""
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
"""
Explanation: Since it's the dot product again, we batch it with a matrix operation:
End of explanation
"""
def train_lsh(data, num_vector=16, seed=None):
dim = corpus.shape[1]
if seed is not None:
np.random.seed(seed)
random_vectors = generate_random_vectors(num_vector, dim)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
table = {}
# Partition data points into bins
bin_index_bits = (data.dot(random_vectors) >= 0)
# Encode bin index bits into integers
bin_indices = bin_index_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices):
if bin_index not in table:
# If no list yet exists for this bin, assign the bin an empty list.
table[bin_index] = [] # YOUR CODE HERE
# Fetch the list of document ids associated with the bin and add the document id to the end.
# data_index: document ids
# append() will add a list of document ids to table dict() with key as bin_index
table[bin_index].append(data_index) # YOUR CODE HERE
model = {'data': data,
'bin_index_bits': bin_index_bits,
'bin_indices': bin_indices,
'table': table,
'random_vectors': random_vectors,
'num_vector': num_vector}
return model
"""
Explanation: This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
Compute the integer bin indices. This step is already completed.
For each document in the dataset, do the following:
Get the integer bin index for the document.
Fetch the list of document ids associated with the bin; if no list yet exists for this bin, assign the bin an empty list.
Add the document id to the end of the list.
End of explanation
"""
model = train_lsh(corpus, num_vector=16, seed=143)
table = model['table']
if 0 in table and table[0] == [39583] and \
143 in table and table[143] == [19693, 28277, 29776, 30399]:
print 'Passed!'
else:
print 'Check your code.'
"""
Explanation: Checkpoint.
End of explanation
"""
wiki[wiki['name'] == 'Barack Obama']
"""
Explanation: Note. We will be using the model trained here in the following sections, unless otherwise indicated.
Inspect bins
Let us look at some documents and see which bins they fall into.
End of explanation
"""
model
# document id of Barack Obama
wiki[wiki['name'] == 'Barack Obama']['id'][0]
# bin_index contains Barack Obama's article
print model['bin_indices'][35817] # integer format
"""
Explanation: Quiz Question. What is the document id of Barack Obama's article?
Quiz Question. Which bin contains Barack Obama's article? Enter its integer index.
End of explanation
"""
wiki[wiki['name'] == 'Joe Biden']
"""
Explanation: Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
End of explanation
"""
# document id of Joe Biden
wiki[wiki['name'] == 'Joe Biden']['id'][0]
# bin_index of Joe Biden
print np.array(model['bin_index_bits'][24478], dtype=int) # list of 0/1's
# bit representations of the bins containing Joe Biden
print model['bin_indices'][24478] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][24478]
# sum of bits agree between Barack Obama and Joe Biden
sum(model['bin_index_bits'][35817] == model['bin_index_bits'][24478])
"""
Explanation: Quiz Question. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
14 out of 16 places
12 out of 16 places
10 out of 16 places
8 out of 16 places
End of explanation
"""
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print np.array(model['bin_index_bits'][22745], dtype=int) # list of 0/1's
print model['bin_indices'][22745] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
"""
Explanation: Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
End of explanation
"""
model['table'][model['bin_indices'][35817]]
"""
Explanation: How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
End of explanation
"""
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column
docs
"""
Explanation: There are four other documents that belong to the same bin. Which documents are they?
End of explanation
"""
def cosine_distance(x, y):
xy = x.dot(y.T)
dist = xy/(norm(x)*norm(y))
return 1-dist[0,0]
obama_tf_idf = corpus[35817,:]
biden_tf_idf = corpus[24478,:]
print '================= Cosine distance from Barack Obama'
print 'Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',
cosine_distance(obama_tf_idf, biden_tf_idf))
for doc_id in doc_ids:
doc_tf_idf = corpus[doc_id,:]
print 'Barack Obama - {0:24s}: {1:f}'.format(wiki[doc_id]['name'],
cosine_distance(obama_tf_idf, doc_tf_idf))
"""
Explanation: It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
End of explanation
"""
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print diff
"""
Explanation: Moral of the story. Similar data points will in general tend to fall into nearby bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.
Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this:
1. Let L be the bit representation of the bin that contains the query documents.
2. Consider all documents in bin L.
3. Consider documents in the bins whose bit representation differs from L by 1 bit.
4. Consider documents in the bins whose bit representation differs from L by 2 bits.
...
To obtain candidate bins that differ from the query bin by some number of bits, we use itertools.combinations, which produces all subsets of a given size from a list. See this documentation for details.
1. Decide on the search radius r. This will determine the number of different bits between the two vectors.
2. For each subset (n_1, n_2, ..., n_r) of the list [0, 1, 2, ..., num_vector-1], do the following:
* Flip the bits (n_1, n_2, ..., n_r) of the query bin to produce a new bit vector.
* Fetch the list of documents belonging to the bin indexed by the new bit vector.
* Add those documents to the candidate set.
Each line of output from the following cell is a 3-tuple indicating where the candidate bin would differ from the query bin. For instance,
(0, 1, 3)
indicates that the candidate bin differs from the query bin in the first, second, and fourth bits.
End of explanation
"""
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
"""
For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
>>> candidates = search_nearby_bins(q, model['table'])
"""
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = copy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = copy(query_bin_bits)
for i in different_bits:
# Flip the bits
alternate_bits[i] = ~alternate_bits[i] # YOUR CODE HERE
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
more_docs = table[nearby_bin] # Get all document_ids of the bin
candidate_set.update(more_docs) # YOUR CODE HERE: Update candidate_set with the documents in this bin.
return candidate_set
"""
Explanation: With this output in mind, implement the logic for nearby bin search:
End of explanation
"""
obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set([35817, 21426, 53937, 39426, 50261]):
print 'Passed test'
else:
print 'Check your code'
print 'List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261'
"""
Explanation: Checkpoint. Running the function with search_radius=0 should yield the list of documents belonging to the same bin as the query.
End of explanation
"""
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,
23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,
19699, 2804, 20347]):
print 'Passed test'
else:
print 'Check your code'
"""
Explanation: Checkpoint. Running the function with search_radius=1 brings additional documents into the candidate set.
End of explanation
"""
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in xrange(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = graphlab.SFrame({'id':candidate_set})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set)
"""
Explanation: Note. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
End of explanation
"""
query(corpus[35817,:], model, k=10, max_search_radius=3)
"""
Explanation: Let's try it out with Obama:
End of explanation
"""
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance')
"""
Explanation: To identify the documents, it's helpful to join this table with the Wikipedia table:
End of explanation
"""
wiki[wiki['name']=='Barack Obama']
num_candidates_history = []
query_time_history = []
max_distance_from_query_history = []
min_distance_from_query_history = []
average_distance_from_query_history = []
for max_search_radius in xrange(17):
start=time.time()
result, num_candidates = query(corpus[35817,:], model, k=10,
max_search_radius=max_search_radius)
end=time.time()
query_time = end-start
print 'Radius:', max_search_radius
print result.join(wiki[['id', 'name']], on='id').sort('distance')
average_distance_from_query = result['distance'][1:].mean()
max_distance_from_query = result['distance'][1:].max()
min_distance_from_query = result['distance'][1:].min()
num_candidates_history.append(num_candidates)
query_time_history.append(query_time)
average_distance_from_query_history.append(average_distance_from_query)
max_distance_from_query_history.append(max_distance_from_query)
min_distance_from_query_history.append(min_distance_from_query)
"""
Explanation: We have shown that we have a working LSH implementation!
Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius:
* Number of candidate documents considered
* Query time
* Distance of approximate neighbors from the query
Let us run LSH multiple times, each with different radii for nearby bin search. We will measure the three variables as discussed above.
End of explanation
"""
plt.figure(figsize=(7,4.5))
plt.plot(num_candidates_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('# of documents searched')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(query_time_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors')
plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors')
plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance of neighbors')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables:
End of explanation
"""
for i, v in enumerate(average_distance_from_query_history):
if v <= 0.78:
print i, v
"""
Explanation: Some observations:
* As we increase the search radius, we find more neighbors that are a smaller distance away.
* With an increased search radius comes a greater number of documents that have to be searched. Query time is higher as a consequence.
* With sufficiently high search radius, the results of LSH begin to resemble the results of brute-force search.
Quiz Question. What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden?
Quiz Question. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the true 10 nearest neighbors are on average about 0.77. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better?
Answer: What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden?
Based on the result table, the answer is: Radius: 2
Answer. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the true 10 nearest neighbors are on average about 0.77. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better?
- Clearly, the smallest search radius is 7
End of explanation
"""
def brute_force_query(vec, data, k):
num_data_points = data.shape[0]
# Compute distances for ALL data points in training set
nearest_neighbors = graphlab.SFrame({'id':range(num_data_points)})
nearest_neighbors['distance'] = pairwise_distances(data, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True)
"""
Explanation: Quality metrics for neighbors
The above analysis is limited by the fact that it was run with a single query, namely Barack Obama. We should repeat the analysis for the entire dataset. Iterating over all documents would take a long time, so let us randomly choose 10 documents for our analysis.
For each document, we first compute the true 25 nearest neighbors, and then run LSH multiple times. We look at two metrics:
Precision@10: How many of the 10 neighbors given by LSH are among the true 25 nearest neighbors?
Average cosine distance of the neighbors from the query
Then we run LSH multiple times with different search radii.
End of explanation
"""
max_radius = 17
precision = {i:[] for i in xrange(max_radius)}
average_distance = {i:[] for i in xrange(max_radius)}
query_time = {i:[] for i in xrange(max_radius)}
np.random.seed(0)
num_queries = 10
for i, ix in enumerate(np.random.choice(corpus.shape[0], num_queries, replace=False)):
print('%s / %s' % (i, num_queries))
ground_truth = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for r in xrange(1,max_radius):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=r)
end = time.time()
query_time[r].append(end-start)
# precision = (# of neighbors both in result and ground_truth)/10.0
precision[r].append(len(set(result['id']) & ground_truth)/10.0)
average_distance[r].append(result['distance'][1:].mean())
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(average_distance[i]) for i in xrange(1,17)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(precision[i]) for i in xrange(1,17)], linewidth=4, label='Precison@10')
plt.xlabel('Search radius')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(query_time[i]) for i in xrange(1,17)], linewidth=4, label='Query time')
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
End of explanation
"""
precision = {i:[] for i in xrange(5,20)}
average_distance = {i:[] for i in xrange(5,20)}
query_time = {i:[] for i in xrange(5,20)}
num_candidates_history = {i:[] for i in xrange(5,20)}
ground_truth = {}
np.random.seed(0)
num_queries = 10
docs = np.random.choice(corpus.shape[0], num_queries, replace=False)
for i, ix in enumerate(docs):
ground_truth[ix] = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for num_vector in xrange(5,20):
print('num_vector = %s' % (num_vector))
model = train_lsh(corpus, num_vector, seed=143)
for i, ix in enumerate(docs):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=3)
end = time.time()
query_time[num_vector].append(end-start)
precision[num_vector].append(len(set(result['id']) & ground_truth[ix])/10.0)
average_distance[num_vector].append(result['distance'][1:].mean())
num_candidates_history[num_vector].append(num_candidates)
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(average_distance[i]) for i in xrange(5,20)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('# of random vectors')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(precision[i]) for i in xrange(5,20)], linewidth=4, label='Precison@10')
plt.xlabel('# of random vectors')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(query_time[i]) for i in xrange(5,20)], linewidth=4, label='Query time (seconds)')
plt.xlabel('# of random vectors')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(num_candidates_history[i]) for i in xrange(5,20)], linewidth=4,
label='# of documents searched')
plt.xlabel('# of random vectors')
plt.ylabel('# of documents searched')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
"""
Explanation: The observations for Barack Obama generalize to the entire dataset.
Effect of number of random vectors
Let us now turn our focus to the remaining parameter: the number of random vectors. We run LSH with different numbers of random vectors, ranging from 5 to 20. We fix the search radius to 3.
Allow a few minutes for the following cell to complete.
End of explanation
"""
|
landlab/landlab
|
notebooks/tutorials/flow_direction_and_accumulation/PriorityFlood_realDEMs.ipynb
|
mit
|
import sys, time, os
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
from landlab.components import FlowAccumulator, PriorityFloodFlowRouter, ChannelProfiler
from landlab.io.netcdf import read_netcdf
from landlab.utils import get_watershed_mask
from landlab import imshowhs_grid, imshow_grid
from landlab.io import read_esri_ascii, write_esri_ascii
from bmi_topography import Topography
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Introduction to priority flood component
<hr>
The priority flood flow director is designed to calculate flow properties over large scale grids.
In the following notebook we illustrate how flow accumulation can be calculated for a real DEM downloaded with the BMI_topography data component. Moreover, we demonstrate how shaded relief can be plotted using the imshowhs_grid function.
First we will import all the modules we need.
End of explanation
"""
def get_topo(buffer, north=40.16, south=40.14, east=-105.49, west=-105.51):
params = Topography.DEFAULT.copy()
params["south"] = south - buffer
params["north"] = north + buffer
params["west"] = -105.51 - buffer
params["east"] = -105.49 + buffer
params["output_format"] = "AAIGrid"
params["cache_dir"] = Path.cwd()
dem = Topography(**params)
name = dem.fetch()
props = dem.load()
dim_x = props.sizes["x"]
dim_y = props.sizes["y"]
cells = props.sizes["x"] * props.sizes["y"]
grid, z = read_esri_ascii(name, name="topographic__elevation")
return dim_x, dim_y, cells, grid, z, dem
"""
Explanation: Create a function to download and save SRTM images using BMI_topography.
End of explanation
"""
def plotting(
grid, topo=True, DA=True, hill_DA=False, flow_metric="D8", hill_flow_metric="Quinn"
):
    # Hillshade parameters are shared by all branches below, so define them before
    # the `if topo:` block (otherwise the DA/hill_DA plots fail when topo=False).
    azdeg = 200
    altdeg = 20
    ve = 1
    if topo:
        plt.figure()
plot_type = "DEM"
ax = imshowhs_grid(
grid,
"topographic__elevation",
grid_units=("deg", "deg"),
var_name="Topo, m",
cmap="terrain",
plot_type=plot_type,
vertical_exa=ve,
azdeg=azdeg,
altdeg=altdeg,
default_fontsize=12,
cbar_tick_size=10,
cbar_width="100%",
cbar_or="vertical",
bbox_to_anchor=[1.03, 0.3, 0.075, 14],
colorbar_label_y=-15,
colorbar_label_x=0.5,
ticks_km=False,
)
if DA:
# %% Plot first instance of drainage_area
grid.at_node["drainage_area"][grid.at_node["drainage_area"] == 0] = (
grid.dx * grid.dx
)
plot_DA = np.log10(grid.at_node["drainage_area"] * 111e3 * 111e3)
plt.figure()
plot_type = "Drape1"
drape1 = plot_DA
thres_drape1 = None
alpha = 0.5
myfile1 = "temperature.cpt"
cmap1 = "terrain"
ax = imshowhs_grid(
grid,
"topographic__elevation",
grid_units=("deg", "deg"),
cmap=cmap1,
plot_type=plot_type,
drape1=drape1,
vertical_exa=ve,
azdeg=azdeg,
altdeg=altdeg,
thres_drape1=thres_drape1,
alpha=alpha,
default_fontsize=12,
cbar_tick_size=10,
var_name="$log^{10}DA, m^2$",
cbar_width="100%",
cbar_or="vertical",
bbox_to_anchor=[1.03, 0.3, 0.075, 14],
colorbar_label_y=-15,
colorbar_label_x=0.5,
ticks_km=False,
)
props = dict(boxstyle="round", facecolor="white", alpha=0.6)
textstr = flow_metric
ax.text(
0.05,
0.95,
textstr,
transform=ax.transAxes,
fontsize=10,
verticalalignment="top",
bbox=props,
)
if hill_DA:
# Plot second instance of drainage_area (hill_drainage_area)
grid.at_node["hill_drainage_area"][grid.at_node["hill_drainage_area"] == 0] = (
grid.dx * grid.dx
)
plotDA = np.log10(grid.at_node["hill_drainage_area"] * 111e3 * 111e3)
# plt.figure()
# imshow_grid(grid, plotDA,grid_units=("m", "m"), var_name="Elevation (m)", cmap='terrain')
plt.figure()
plot_type = "Drape1"
# plot_type='Drape2'
drape1 = np.log10(grid.at_node["hill_drainage_area"])
thres_drape1 = None
alpha = 0.5
myfile1 = "temperature.cpt"
cmap1 = "terrain"
ax = imshowhs_grid(
grid,
"topographic__elevation",
grid_units=("deg", "deg"),
cmap=cmap1,
plot_type=plot_type,
drape1=drape1,
vertical_exa=ve,
azdeg=azdeg,
altdeg=altdeg,
thres_drape1=thres_drape1,
alpha=alpha,
default_fontsize=10,
cbar_tick_size=10,
var_name="$log^{10}DA, m^2$",
cbar_width="100%",
cbar_or="vertical",
bbox_to_anchor=[1.03, 0.3, 0.075, 14],
colorbar_label_y=-15,
colorbar_label_x=0.5,
ticks_km=False,
)
props = dict(boxstyle="round", facecolor="white", alpha=0.6)
textstr = hill_flow_metric
ax.text(
0.05,
0.95,
textstr,
transform=ax.transAxes,
fontsize=10,
verticalalignment="top",
bbox=props,
)
"""
Explanation: Make function to plot DEMs and drainage accumulation with shaded relief.
End of explanation
"""
# Download or reload topo data with given buffer
dim_x, dim_y, cells, grid_LL, z_LL, dem = get_topo(0.05)
fa_LL = FlowAccumulator(
grid_LL, flow_director="D8", depression_finder="DepressionFinderAndRouter"
)
fa_LL.run_one_step()
# Plot output products
plotting(grid_LL)
"""
Explanation: Compare default Landlab flow accumulator with priority flood flow accumulator
For small DEMs (small buffer size, in degrees), the default flow accumulator is slightly faster than the priority flood flow accumulator. For large DEMs, the priority flood flow accumulator outperforms the default flow accumulator by several orders of magnitude. To test the performance for larger DEMs, increase the buffer size (e.g., by 1 degree, which is roughly 111 km).
Default flow director/accumulator
End of explanation
"""
# Download or reload topo data with given buffer
dim_x, dim_y, cells, grid_PF, z_PF, dem = get_topo(0.05)
# Here, we only calculate flow directions using the first instance of the flow accumulator
flow_metric = "D8"
fa_PF = PriorityFloodFlowRouter(
grid_PF,
surface="topographic__elevation",
flow_metric=flow_metric,
suppress_out=False,
depression_handler="fill",
accumulate_flow=True,
)
fa_PF.run_one_step()
# Plot output products
plotting(grid_PF)
"""
Explanation: Priority flood flow director/accumulator
Calculate flow directions/flow accumulation using the first instance of the flow accumulator
End of explanation
"""
# 3. Priority flood flow director/accumulator
# Download or reload topo data with given buffer
dim_x, dim_y, cells, grid_PF, z_PF, dem = get_topo(0.05)
# For timing compare only single flow
flow_metric = "D8"
hill_flow_metric = "Quinn"
fa_PF = PriorityFloodFlowRouter(
grid_PF,
surface="topographic__elevation",
flow_metric=flow_metric,
suppress_out=False,
depression_handler="fill",
accumulate_flow=True,
separate_hill_flow=True,
accumulate_flow_hill=True,
update_hill_flow_instantaneous=False,
hill_flow_metric=hill_flow_metric,
)
fa_PF.run_one_step()
fa_PF.update_hill_fdfa()
# 4. Plot output products
plotting(grid_PF, hill_DA=True, flow_metric="D8", hill_flow_metric="Quinn")
# Remove downloaded DEM. Uncomment to remove DEM.
# os.remove(dem.fetch())
"""
Explanation: Priority flood flow director/accumulator
Calculate flow directions/flow accumulation using the second instance of the flow accumulator
End of explanation
"""
|
keoghdata/bradlib
|
install.ipynb
|
gpl-3.0
|
module_name = 'bradlib'
"""
Explanation: Script to copy files to the Anaconda paths so you can import and use the scripts
End of explanation
"""
from distutils.sysconfig import get_python_lib #; print(get_python_lib())
path_main = get_python_lib()
path_main
path_main.split('Anaconda3')
"""
Explanation: find main path for install
End of explanation
"""
dest_paths_list = []
dest_paths_list.append(path_main + '\\' + module_name)
dest_paths_list
"""
Explanation: make a list of paths we need to copy files to & add the main path
End of explanation
"""
x = !conda env list
#x[2:-2]
print('------------------------------------------------')
print('Conda environments found which we will install to:')
for i in x[2:-2]:
y = i.split(' ')
print(y[0])
new_path = path_main.split('Anaconda3')[0] +'Anaconda3\\envs\\'+y[0]+'\\Lib\\site-packages\\' + module_name
#print(new_path)
dest_paths_list.append(new_path)
#dest_paths_list
"""
Explanation: add paths to the list for the conda environments currently available
MAKE SURE EACH PATH HAS THE 'module name' AT THE END!!
End of explanation
"""
import os
def copy_to_paths(source,dest):
"""
Function takes source and destination folders and copies files.
    #Source and dest are needed in the format below:
#source = ".\\bradlib"
#dest = "C:\\Users\\bjk1y13\\dev\\garbage\\bradlib"
"""
#### Remove __pycache__ folder as is not required
pycache_loc = source + "\\__pycache__"
if os.path.isdir(pycache_loc) == True:
print("__pycache__ found in source and being deleted...")
!rmdir $pycache_loc /S /Q
#### Copy files to new destination
print('------------------------')
print('Destination: ', dest)
print('---------')
folder_exists = os.path.isdir(dest)
if folder_exists == True:
print('Folder exists')
### delete older version folder
print('Deleting old folder...')
!rmdir $dest /S /Q
print('Copying new folder...')
!xcopy $source $dest /E /I
elif folder_exists == False:
print('Folder does not exist')
print('Copying new folder...')
!xcopy $source $dest /E /I
else:
print('Something has gone wrong!!')
print('COMPLETE')
print('------------------------')
return
source = ".\\" + module_name
"""
Explanation: function to copy files to list of paths
End of explanation
"""
for destination in dest_paths_list:
print(destination)
copy_to_paths(source, destination)
print('INSTALL SUCCESSFUL')
"""
Explanation: Run code for each location
End of explanation
"""
|
wmvanvliet/neuroscience_tutorials
|
eeg-erp/adept.ipynb
|
bsd-2-clause
|
from mne.io import read_raw_bdf
raw = read_raw_bdf('data/magic-trick-raw.bdf', preload=True)
print(raw)
"""
Explanation: <img src="images/charmeleon.png" alt="Adept" width="200">
Data preprocessing
Welcome to the next level!
In the previous level, you have learned some programming basics, culminating in you successfully visualizing some continuous EEG data.
Now, we're going to use some more functions of the MNE-Python module to perform frequency filtering and cutting up the signal into epochs.
The experiment
The EEG data was recorded while a brave volunteer was sitting in front of a computer screen, looking at pictures of playing cards.
At the beginning of the experiment, the volunteer chose one of the cards.
Then, all cards started flashing across the screen in a random order.
The volunteer was silently counting how many times his/her chosen card was presented.
<center>
<img src="images/cards.png" width="400" style="margin-top: 1ex">
</center>
By analyzing the EEG signal, we should be able to see a change in the ERP whenever the chosen card appeared on the screen.
Loading the data
Below is some code that will load the EEG data.
It should look familiar to you if you have completed the last level.
However, I've added a bit. Can you spot what I have added?
End of explanation
"""
%matplotlib notebook
print('From now on, all graphics will be sent to your browser.')
"""
Explanation: Dissecting the program above
The first line of the program above imports the read_raw_bdf function from the mne.io module.
The second line of the program is the most complicated. A lot of stuff is going on there:
<img src="images/function_call_explanation.png">
The read_raw_bdf function is called with two parameters. The first parameter is a piece of text (a "string") containing the name of the BDF file to load. Literal text (strings) must always be enclosed in ' quotes. The second parameter is a "named" parameter, which is something I added since last level. We will use named parameters a lot during this session (see below). This parameter is set to the special value True. Python has three special values: True, False, and None, which are often used to indicate "yes", "no", and "I don't know/care" respectively. Finally, the result is stored in a variable called raw.
The last line of the program calls the print function, which is used to display things. Here, it is called with the raw variable as parameter, so it displays the data contained in this variable, namely the data we loaded with read_raw_bdf.
Named parameters
Many functions of MNE-Python take dozens of parameters that fine-tune exactly how to perform some operation. If you had to specify them all every time you want to call a function, you'd spend ages worrying about little details and get nothing done. Luckily, Python allows us to specify default values for parameters, which means these parameters may be omitted when calling a function, and the default will be used. In MNE-Python, most parameters have a default value, so while a function may have 20 parameters, you only have to specify one or two. The rest of the parameters are like little knobs and buttons you can use to fine tune things, or just leave alone. This allows MNE-Python to keep simple things simple, while making complicated things possible.
Parameters with default values are called "named" parameters, and you specify them with name=value. The preload parameter that you saw in the program above is such a named parameter. It controls whether to load all of the data in memory, or only read the "metadata" of the file, i.e., when it was recorded, how long it is, how many sensors the EEG machine had, etc. By default, preload=False, meaning only the metadata is read. In the example above, we set it to True, indicating we wish to really load all the data in memory.
Visualizing the data
As we have seen in the last level, raw data can be visualized (or "plotted") by the plot_raw function that is kept inside the mne.viz module.
It needs one parameter: the variable containing the data you wish to plot. (It also has a lot of named parameters, but you can leave them alone for now.)
As a quick refresher, I'm going to let you write the visualization code.
But first, there is a little housekeeping that we need to do.
We need to tell the visualization engine to send the results to your browser and not attempt to open a window on the server where this code is running. Please run the cell below:
End of explanation
"""
fig = plot_raw(raw, events=events)
"""
Explanation: Now, it's your turn! Write the Python code that will visualize the raw EEG data we just loaded.
Keep the following things in mind:
1. The function is called plot_raw and is kept inside the mne.viz module. Remember to import the function first!
2. Call the function with one parameter, namely the raw variable we created above that contains the EEG data.
3. Assign the result of the plot_raw function to a variable (pick any name you want), otherwise the figure will show twice.
Use the cell below to write your code:
If you wrote the code correctly, you should be looking at a little interface that shows the data collected on all the EEG sensors. Click inside the scrollbars or use the arrow keys to explore the data.
Events, and how to read the documentation
Browsing through the sensors, you will notice there are two types:
8 EEG sensors, named Fz-P2
1 STIM sensor, which is not really a sensor, so we'll call it a "channel". Its name is "Status"
Take a close look at the STIM channel.
On this channel, the computer that is presenting the stimuli was sending timing information to the EEG equipment.
Whenever a stimulus (one of the 9 playing cards) was presented, the signal at this channel jumps briefly from 0 to 1-9, indicating which playing card was being shown.
We can use this channel to create an "events" matrix: a table listing all the times a stimulus was presented, along with the time of the event and the type of stimulus.
The function to do this is called find_events, and is kept inside the mne module.
In this document, all the function names are links to their documentation. Click on find_events to pull up its documentation. It will open a new browser tab. It should look like this:
<img src="images/doc_with_explanation.png" alt="Documentation for find_events"/>
Looking at the function "signature" reveals that many of the parameters have default values associated with them. This means these are named parameters and we can ignore them if we want. There is only a single required parameter, named raw. Looking at the parameter list, it seems we need to set it to the raw data we just loaded with the read_raw_bdf function. If we called the function correctly, it should provide us with an "array" (don't worry about what an array is for now) with all the events.
Now, call the function and find some events! Keep the following things in mind:
The function is called find_events and is kept inside the mne module. Remember to import the function first!
Call the function. Use the documentation to find out what parameters it needs.
Assign the result to a variable called events.
If you called the function correctly, running the cell below should display the found events on top of the raw data. It should show as cyan lines, with a number on top indicating the type of event.
These numbers are referred to as "event codes".
End of explanation
"""
event_id = { 'Ace of spades': 1, 'Jack of clubs': 2, 'Queen of hearts': 3, 'King of diamonds': 4, '10 of spades': 5, '3 of clubs': 6, '10 of hearts': 7, '3 of diamonds': 8, 'King of spades': 9 }
"""
Explanation: Frequency filtering, or, working with objects
Throughout this exercise, we have created many variables, such as raw and events.
Up to now, we've treated these as simple boxes that hold some data, or rather "objects" as they are called in Python.
However, the box/object metaphor is not really a good one.
Variables are more like little machines of their own.
They can do things!
The raw variable is a very powerful object.
If you want, you can look at the documentation for Raw to see the long list of things it can do.
One useful thing is that a raw object knows how to visualize (i.e. "plot") itself.
You already know that modules hold functions, but objects can hold functions too.
Functions that are kept inside objects are called "methods", to distinguish them from "functions" that are kept inside modules.
Instead of using the plot_raw function, we can use the plotting method of the object itself, like this:
python
fig = raw.plot()
Notice how the .plot() method call doesn't need any parameters: it already knows which object it needs to visualize, namely the object it belongs to.
In MNE-Python, many objects have such a plot method.
Another method of the raw object is called filter, which applies a frequency filter to the data.
A frequency filter gets rid of some of the waves in the data that are either too slow or too fast to be of any interest to us.
The raw.filter() method takes two parameters: a lower bound and upper bound, expressed in Hz.
From this, we can deduce it applies a "bandpass" filter: keeping only waves within a certain frequency range.
Here is an example of a bandpass filter that keeps only waves between 5 to 50 Hz:
python
raw.filter(5, 50)
Notice how the result of the method is not assigned to any variable.
In this case, the raw.filter method operated on the raw variable directly, overwriting the data contained in it.
In this experiment, we're hunting for the P300 "oddball" effect, which is a relatively slow wave, but not extremely slow.
A good choice for us would be to get rid of all waves slower than 0.5 Hz and all waves faster than 20 Hz.
In the cell below, write the code to apply the desired bandpass filter to the data.
Note that the example given above used a different frequency range (5-50 Hz) than the one we actually want (0.5-20 Hz), so you will have to make some changes.
After you have applied the frequency filter, plot the data using the raw.plot() method so you can see the result of your hard work.
Epochs, or, how to create a dictionary
<img src="images/dictionary.jpg" width="200" style="float: right; margin-left: 10px">
Now that we have the information on what stimulus was presented at what time, we can extract "epochs". Epochs are little snippets of signal surrounding an event. These epochs can then be averaged to produce the "evoked" signal.
In order to create epochs, we need a way to translate the event codes (1, 2, 3, ...) into something more descriptive.
This can be done using a new type of variable called a "dictionary".
A Python dictionary allows us (or rather the computer) to "look things up".
The following piece of code creates a dictionary called event_id. Take a look and run it:
End of explanation
"""
event_id = {
'Ace of spades': 1,
'Jack of clubs': 2,
'Queen of hearts': 3,
'King of diamonds': 4,
'10 of spades': 5,
'3 of clubs': 6,
'10 of hearts': 7,
'3 of diamonds': 8,
'King of spades': 9,
}
"""
Explanation: A dictionary is created by using curly braces { } and colons :. I've spread out the code over multiple lines to make things a little clearer. The way you create a dictionary is to say {this: means that} and you use commas if you want to put more than one thing in the dictionary.
Finally, you should know that Python allows you to spread out lists across multiple lines, so we can write our dictionary like this, which is much nicer to read:
End of explanation
"""
epochs.plot()
"""
Explanation: Armed with our event_id dictionary, we can move on to creating epochs.
For each event, let's cut a snippet of signal from 0.2 seconds before the moment the stimulus was presented, up until 0.8 seconds after it was presented. If we take the moment the stimulus was presented as time 0, we will cut epochs from -0.2 until 0.8 seconds.
The function to do this is called Epochs (with a capital E).
Click on the function name to open its documentation and look at the parameters it needs.
<div style="margin-left: 50px; margin-top: 1ex"><img src="images/OMG.png" width="20" style="display: inline"> That's a lot of parameters!</div>
<div style="margin-left: 50px"><img src="images/thinking.png" width="20" style="display: inline"> Ok, how many are optional?</div>
<div style="margin-left: 50px"><img src="images/phew.png" width="20" style="display: inline"> Almost all of them, phew!</div>
Here are the points of the documentation that are most relevant to us right now:
* There are two required arguments, the raw data and the events array we created earlier.
* The next optional parameter is the event_id dictionary we just created.
* The next two optional parameters, tmin and tmax, specify the time range to cut epochs for. They are set to -0.2 to 0.5 seconds by default. We want a little more than that. Set them to cut epochs from -0.2 to 0.8 seconds.
* We can also leave the rest of the parameters alone.
Go ahead and import the Epochs function from the mne module.
Then call it with the correct parameters and store the result in a variable called epochs (small e):
Most MNE-Python objects have a .plot() method to visualize themselves. The cell below will create an interactive visualization of your epochs. Click inside the scrollbars or use the arrow keys to scroll through the data.
End of explanation
"""
|
frankbearzou/Data-analysis
|
Star Wars survey/Star Wars survey.ipynb
|
mit
|
star_wars = pd.read_csv('star_wars.csv', encoding="ISO-8859-1")
star_wars.head()
star_wars.columns
"""
Explanation: Data Exploration
End of explanation
"""
star_wars = star_wars.dropna(subset=['RespondentID'])
"""
Explanation: Data Cleaning
Remove invalid rows where the first column, RespondentID, is NaN.
End of explanation
"""
star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].isnull().value_counts()
star_wars['Have you seen any of the 6 films in the Star Wars franchise?'].value_counts()
"""
Explanation: Change the second and third columns.
End of explanation
"""
star_wars['Have you seen any of the 6 films in the Star Wars franchise?'] = star_wars['Have you seen any of the 6 films in the Star Wars franchise?'].map({'Yes': True, 'No': False})
star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'] = star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].map({'Yes': True, 'No': False})
"""
Explanation: The values for the second and third columns, Have you seen any of the 6 films in the Star Wars franchise? and Do you consider yourself to be a fan of the Star Wars film franchise?, are Yes, No, and NaN. We want to map them to True or False.
End of explanation
"""
for col in star_wars.columns[3:9]:
star_wars[col] = star_wars[col].apply(lambda x: False if pd.isnull(x) else True)
"""
Explanation: Cleaning the columns from index 3 to 9.
From the fourth column to ninth columns are checkbox questions:
If values are the movie names: they have seen the movies.
If values are NaN: they have not seen the movies.
We are going to convert the values of these columns to bool type.
End of explanation
"""
star_wars.rename(columns={'Which of the following Star Wars films have you seen? Please select all that apply.': 'seen_1', \
'Unnamed: 4': 'seen_2', \
'Unnamed: 5': 'seen_3', \
'Unnamed: 6': 'seen_4', \
'Unnamed: 7': 'seen_5', \
'Unnamed: 8': 'seen_6'}, inplace=True)
"""
Explanation: Rename the columns from index 3 to 9 for better readability.
seen_1 means Star Wars Episode I, and so on.
End of explanation
"""
star_wars[star_wars.columns[9:15]] = star_wars[star_wars.columns[9:15]].astype(float)
"""
Explanation: Cleaning the columns from index 9 to 15.
Changing data type to float.
End of explanation
"""
star_wars.rename(columns={'Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film.': 'ranking_1', \
'Unnamed: 10': 'ranking_2', \
'Unnamed: 11': 'ranking_3', \
'Unnamed: 12': 'ranking_4', \
'Unnamed: 13': 'ranking_5', \
'Unnamed: 14': 'ranking_6'}, inplace=True)
"""
Explanation: Renaming column names.
End of explanation
"""
star_wars.rename(columns={'Please state whether you view the following characters favorably, unfavorably, or are unfamiliar with him/her.': 'Luck Skywalker', \
'Unnamed: 16': 'Han Solo', \
'Unnamed: 17': 'Princess Leia Oragana', \
'Unnamed: 18': 'Obi Wan Kenobi', \
'Unnamed: 19': 'Yoda', \
'Unnamed: 20': 'R2-D2', \
'Unnamed: 21': 'C-3P0', \
'Unnamed: 22': 'Anakin Skywalker', \
'Unnamed: 23': 'Darth Vader', \
'Unnamed: 24': 'Lando Calrissian', \
'Unnamed: 25': 'Padme Amidala', \
'Unnamed: 26': 'Boba Fett', \
'Unnamed: 27': 'Emperor Palpatine', \
'Unnamed: 28': 'Jar Jar Binks'}, inplace=True)
"""
Explanation: Cleaning the columns from index 15 to 29.
End of explanation
"""
seen_sum = star_wars[['seen_1', 'seen_2', 'seen_3', 'seen_4', 'seen_5', 'seen_6']].sum()
seen_sum
seen_sum.idxmax()
"""
Explanation: Data Analysis
Finding The Most Seen Movie
End of explanation
"""
ax = seen_sum.plot(kind='bar')
for p in ax.patches:
ax.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.01))
plt.show()
"""
Explanation: From the data above, we can see that the most seen movie is Episode V.
End of explanation
"""
ranking_mean = star_wars[['ranking_1', 'ranking_2', 'ranking_3', 'ranking_4', 'ranking_5', 'ranking_6']].mean()
ranking_mean
ranking_mean.idxmin()
"""
Explanation: Finding The Highest Ranked Movie.
End of explanation
"""
ranking_mean.plot(kind='bar')
plt.show()
"""
Explanation: The highest ranked movie is ranking_5, which corresponds to Episode V.
End of explanation
"""
males = star_wars[star_wars['Gender'] == 'Male']
females = star_wars[star_wars['Gender'] == 'Female']
"""
Explanation: Let's break down data by Gender.
End of explanation
"""
males[males.columns[3:9]].sum().plot(kind='bar', title='male seen')
plt.show()
females[females.columns[3:9]].sum().plot(kind='bar', title='female seen')
plt.show()
"""
Explanation: The number of movies seen.
End of explanation
"""
males[males.columns[9:15]].mean().plot(kind='bar', title='Male Ranking')
plt.show()
females[females.columns[9:15]].mean().plot(kind='bar', title='Female Ranking')
plt.show()
"""
Explanation: The ranking of movies.
End of explanation
"""
star_wars['Luck Skywalker'].value_counts()
star_wars[star_wars.columns[15:29]].head()
fav = star_wars[star_wars.columns[15:29]].dropna()
fav.head()
"""
Explanation: From the charts above, we do not find a significant difference between genders.
Star Wars Character Favorability Ratings
End of explanation
"""
fav_df_list = []
for col in fav.columns.tolist():
row = fav[col].value_counts()
d1 = pd.DataFrame(data={'favorably': row[0] + row[1], \
'neutral': row[2], \
'unfavorably': row[4] + row[5], \
'Unfamiliar': row[3]}, \
index=[col], \
columns=['favorably', 'neutral', 'unfavorably', 'Unfamiliar'])
fav_df_list.append(d1)
fav_pivot = pd.concat(fav_df_list)
fav_pivot
fig = plt.figure()
ax = plt.subplot(111)
fav_pivot.plot(kind='barh', stacked=True, figsize=(10,10), ax=ax)
# Shrink current axis's height by 10% on the bottom
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1,
box.width, box.height * 0.9])
# Put a legend below current axis
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
fancybox=True, shadow=True, ncol=5)
plt.show()
"""
Explanation: Convert fav to pivot table.
End of explanation
"""
shot_first = star_wars['Which character shot first?'].value_counts()
shot_first
shot_sum = shot_first.sum()
shot_first = shot_first.apply(lambda x: x / shot_sum * 100)
shot_first
ax = shot_first.plot(kind='barh')
for p in ax.patches:
ax.annotate(str("{0:.2f}%".format(round(p.get_width(),2))), (p.get_width() * 1.005, p.get_y() + p.get_height() * 0.5))
plt.show()
"""
Explanation: Who Shot First?
End of explanation
"""
|
slundberg/shap
|
notebooks/overviews/Be careful when interpreting predictive models in search of causal insights.ipynb
|
mit
|
# This cell defines the functions we use to generate the data in our scenario
import numpy as np
import pandas as pd
import scipy.stats
import sklearn
import xgboost
class FixableDataFrame(pd.DataFrame):
""" Helper class for manipulating generative models.
"""
def __init__(self, *args, fixed={}, **kwargs):
self.__dict__["__fixed_var_dictionary"] = fixed
super(FixableDataFrame, self).__init__(*args, **kwargs)
def __setitem__(self, key, value):
out = super(FixableDataFrame, self).__setitem__(key, value)
if isinstance(key, str) and key in self.__dict__["__fixed_var_dictionary"]:
out = super(FixableDataFrame, self).__setitem__(key, self.__dict__["__fixed_var_dictionary"][key])
return out
# generate the data
def generator(n, fixed={}, seed=0):
""" The generative model for our subscriber retention example.
"""
if seed is not None:
np.random.seed(seed)
X = FixableDataFrame(fixed=fixed)
# the number of sales calls made to this customer
X["Sales calls"] = np.random.uniform(0, 4, size=(n,)).round()
    # the number of interactions with this customer (sales calls plus a few extra contacts)
X["Interactions"] = X["Sales calls"] + np.random.poisson(0.2, size=(n,))
# the health of the regional economy this customer is a part of
X["Economy"] = np.random.uniform(0, 1, size=(n,))
# the time since the last product upgrade when this customer came up for renewal
X["Last upgrade"] = np.random.uniform(0, 20, size=(n,))
# how much the user perceives that they need the product
X["Product need"] = (X["Sales calls"] * 0.1 + np.random.normal(0, 1, size=(n,)))
# the fractional discount offered to this customer upon renewal
X["Discount"] = ((1-scipy.special.expit(X["Product need"])) * 0.5 + 0.5 * np.random.uniform(0, 1, size=(n,))) / 2
# What percent of the days in the last period was the user actively using the product
X["Monthly usage"] = scipy.special.expit(X["Product need"] * 0.3 + np.random.normal(0, 1, size=(n,)))
# how much ad money we spent per user targeted at this user (or a group this user is in)
X["Ad spend"] = X["Monthly usage"] * np.random.uniform(0.99, 0.9, size=(n,)) + (X["Last upgrade"] < 1) + (X["Last upgrade"] < 2)
    # how many bugs did this user encounter since their last renewal
X["Bugs faced"] = np.array([np.random.poisson(v*2) for v in X["Monthly usage"]])
# how many bugs did the user report?
X["Bugs reported"] = (X["Bugs faced"] * scipy.special.expit(X["Product need"])).round()
# did the user renew?
X["Did renew"] = scipy.special.expit(7 * (
0.18 * X["Product need"] \
+ 0.08 * X["Monthly usage"] \
+ 0.1 * X["Economy"] \
+ 0.05 * X["Discount"] \
+ 0.05 * np.random.normal(0, 1, size=(n,)) \
+ 0.05 * (1 - X['Bugs faced'] / 20) \
+ 0.005 * X["Sales calls"] \
+ 0.015 * X["Interactions"] \
+ 0.1 / (X["Last upgrade"]/4 + 0.25)
+ X["Ad spend"] * 0.0 - 0.45
))
    # in real life each customer either renews or not, so we draw a 0/1 label from the
    # renewal probability. Comment out the next line to keep the label as the probability
    # itself, which gives less noisy causal-effect plots but the same basic results.
X["Did renew"] = scipy.stats.bernoulli.rvs(X["Did renew"])
return X
def user_retention_dataset():
""" The observed data for model training.
"""
n = 10000
X_full = generator(n)
y = X_full["Did renew"]
X = X_full.drop(["Did renew", "Product need", "Bugs faced"], axis=1)
return X, y
def fit_xgboost(X, y):
""" Train an XGBoost model with early stopping.
"""
X_train,X_test,y_train,y_test = sklearn.model_selection.train_test_split(X, y)
dtrain = xgboost.DMatrix(X_train, label=y_train)
dtest = xgboost.DMatrix(X_test, label=y_test)
model = xgboost.train(
{ "eta": 0.001, "subsample": 0.5, "max_depth": 2, "objective": "reg:logistic"}, dtrain, num_boost_round=200000,
evals=((dtest, "test"),), early_stopping_rounds=20, verbose_eval=False
)
return model
X, y = user_retention_dataset()
model = fit_xgboost(X, y)
"""
Explanation: Be careful when interpreting predictive models in search of causal insights
A joint article about causality and interpretable machine learning with Eleanor Dillon, Jacob LaRiviere, Scott Lundberg, Jonathan Roth, and Vasilis Syrgkanis from Microsoft.
Predictive machine learning models like XGBoost become even more powerful when paired with interpretability tools like SHAP. These tools identify the most informative relationships between the input features and the predicted outcome, which is useful for explaining what the model is doing, getting stakeholder buy-in, and diagnosing potential problems. It is tempting to take this analysis one step further and assume that interpretation tools can also identify what features decision makers should manipulate if they want to change outcomes in the future. However, in this article, we discuss how using predictive models to guide this kind of policy choice can often be misleading.
The reason relates to the fundamental difference between correlation and causation. SHAP makes transparent the correlations picked up by predictive ML models. But making correlations transparent does not make them causal! All predictive models implicitly assume that everyone will keep behaving the same way in the future, and therefore correlation patterns will stay constant. To understand what happens if someone starts behaving differently, we need to build causal models, which requires making assumptions and using the tools of causal analysis.
A subscriber retention example
Imagine we are tasked with building a model that predicts whether a customer will renew their product subscription. Let's assume that after a bit of digging we manage to get eight features which are important for predicting churn: customer discount, ad spending, customer's monthly usage, last upgrade, bugs reported by a customer, interactions with a customer, sales calls with a customer, and macroeconomic activity. We then use those features to train a basic XGBoost model to predict if a customer will renew their subscription when it expires:
End of explanation
"""
import shap
explainer = shap.Explainer(model)
shap_values = explainer(X)
clust = shap.utils.hclust(X, y, linkage="single")
shap.plots.bar(shap_values, clustering=clust, clustering_cutoff=1)
"""
Explanation: Once we have our XGBoost customer retention model in hand, we can begin exploring what it has learned with an interpretability tool like SHAP. We start by plotting the global importance of each feature in the model:
End of explanation
"""
shap.plots.scatter(shap_values, ylabel="SHAP value\n(higher means more likely to renew)")
"""
Explanation: This bar plot shows that the discount offered, ad spend, and number of bugs reported are the top three factors driving the model's prediction of customer retention. This is interesting and at first glance looks reasonable. The bar plot also includes a feature redundancy clustering which we will use later.
However, when we dig deeper and look at how changing the value of each feature impacts the model's prediction, we find some unintuitive patterns. SHAP scatter plots show how changing the value of a feature impacts the model's prediction of renewal probabilities. If the blue dots follow an increasing pattern, this means that the larger the feature's value, the higher the model's predicted renewal probability.
End of explanation
"""
import graphviz
names = [
"Bugs reported", "Monthly usage", "Sales calls", "Economy",
"Discount", "Last upgrade", "Ad spend", "Interactions"
]
g = graphviz.Digraph()
for name in names:
g.node(name, fontsize="10")
g.node("Product need", style="dashed", fontsize="10")
g.node("Bugs faced", style="dashed", fontsize="10")
g.node("Did renew", style="filled", fontsize="10")
g.edge("Product need", "Did renew")
g.edge("Product need", "Discount")
g.edge("Product need", "Bugs reported")
g.edge("Product need", "Monthly usage")
g.edge("Discount", "Did renew")
g.edge("Monthly usage", "Bugs faced")
g.edge("Monthly usage", "Did renew")
g.edge("Monthly usage", "Ad spend")
g.edge("Economy", "Did renew")
g.edge("Sales calls", "Did renew")
g.edge("Sales calls", "Product need")
g.edge("Sales calls", "Interactions")
g.edge("Interactions", "Did renew")
g.edge("Bugs faced", "Did renew")
g.edge("Bugs faced", "Bugs reported")
g.edge("Last upgrade", "Did renew")
g.edge("Last upgrade", "Ad spend")
g
"""
Explanation: Prediction tasks versus causal tasks
The scatter plots show some surprising findings:
- Users who report more bugs are more likely to renew!
- Users with larger discounts are less likely to renew!
We triple-check our code and data pipelines to rule out a bug, then talk to some business partners who offer an intuitive explanation:
- Users with high usage who value the product are more likely to report bugs and to renew their subscriptions.
- The sales force tends to give high discounts to customers they think are less likely to be interested in the product, and these customers have higher churn.
Are these at-first counter-intuitive relationships in the model a problem? That depends on what our goal is!
Our original goal for this model was to predict customer retention, which is useful for projects like estimating future revenue for financial planning. Since users reporting more bugs are in fact more likely to renew, capturing this relationship in the model is helpful for prediction. As long as our model has good fit out-of-sample, we should be able to provide finance with a good prediction, and therefore shouldn't worry about the direction of this relationship in the model.
This is an example of a class of tasks called prediction tasks. In a prediction task, the goal is to predict an outcome Y (e.g. renewals) given a set of features X. A key component of a prediction exercise is that we only care that the prediction model(X) is close to Y in data distributions similar to our training set. A simple correlation between X and Y can be helpful for these types of predictions.
However, suppose a second team picks up our prediction model with the new goal of determining what actions our company can take to retain more customers. This team cares a lot about how each X feature relates to Y, not just in our training distribution, but the counterfactual scenario produced when the world changes. In that use case, it is no longer sufficient to identify a stable correlation between variables; this team wants to know whether manipulating feature X will cause a change in Y. Picture the face of the chief of engineering when you tell him that you want him to introduce new bugs to increase customer renewals!
This is an example of a class of tasks called causal tasks. In a causal task, we want to know how changing an aspect of the world X (e.g. bugs reported) affects an outcome Y (renewals). In this case, it's critical to know whether changing X causes an increase in Y, or whether the relationship in the data is merely correlational.
The challenges of estimating causal effects
A useful tool for understanding causal relationships is writing down a causal graph of the data-generating process we're interested in. A causal graph of our example illustrates why the robust predictive relationships picked up by our XGBoost customer retention model differ from the causal relationships of interest to the team that wants to plan interventions to increase retention. This graph is just a summary of the true data generating mechanism (which is defined above). Solid ovals represent features that we observe, while dashed ovals represent hidden features that we don't measure. Each feature is a function of all the features with an arrow to it, plus some random effects.
In our example we know the causal graph because we simulate the data. In practice the true causal graph will not be known, but we may be able to use context-specific domain knowledge about how the world works to infer which relationships can or cannot exist.
End of explanation
"""
def marginal_effects(generative_model, num_samples=100, columns=None, max_points=20, logit=True, seed=0):
""" Helper function to compute the true marginal causal effects.
"""
X = generative_model(num_samples)
if columns is None:
columns = X.columns
ys = [[] for _ in columns]
xs = [X[c].values for c in columns]
xs = np.sort(xs, axis=1)
xs = [xs[i] for i in range(len(xs))]
for i,c in enumerate(columns):
xs[i] = np.unique([np.nanpercentile(xs[i], v, interpolation='nearest') for v in np.linspace(0, 100, max_points)])
for x in xs[i]:
Xnew = generative_model(num_samples, fixed={c: x}, seed=seed)
val = Xnew["Did renew"].mean()
if logit:
val = scipy.special.logit(val)
ys[i].append(val)
ys[i] = np.array(ys[i])
ys = [ys[i] - ys[i].mean() for i in range(len(ys))]
return list(zip(xs, ys))
shap.plots.scatter(shap_values, ylabel="SHAP value\n(higher means more likely to renew)", overlay={
"True causal effects": marginal_effects(generator, 10000, X.columns)
})
"""
Explanation: There are lots of relationships in this graph, but the first important concern is that some of the features we can measure are influenced by unmeasured confounding features like product need and bugs faced. For example, users who report more bugs are encountering more bugs because they use the product more, and they are also more likely to report those bugs because they need the product more. Product need has its own direct causal effect on renewal. Because we can't directly measure product need, the correlation we end up capturing in predictive models between bugs reported and renewal combines a small negative direct effect of bugs faced and a large positive confounding effect from product need. The figure below plots the SHAP values in our example against the true causal effect of each feature (known in this example since we generated the data).
End of explanation
"""
# Economy is independent of other measured features.
shap.plots.bar(shap_values, clustering=clust, clustering_cutoff=1)
"""
Explanation: The predictive model captures an overall positive effect of bugs reported on retention (as shown with SHAP), even though the causal effect of reporting a bug is zero, and the effect of encountering a bug is negative.
We see a similar problem with Discounts, which are also driven by unobserved customer need for the product. Our predictive model finds a negative relationship between discounts and retention, driven by this correlation with the unobserved feature, Product Need, even though there is actually a small positive causal effect of discounts on renewal! Put another way, if two customers have the same Product Need and are otherwise similar, then the customer with the larger discount is more likely to renew.
This plot also reveals a second, sneakier problem when we start to interpret predictive models as if they were causal. Notice that Ad Spend has a similar problem - it has no causal effect on retention (the black line is flat), but the predictive model is picking up a positive effect!
In this case, Ad Spend is only driven by Last Upgrade and Monthly Usage, so we don't have an unobserved confounding problem, instead we have an observed confounding problem. There is statistical redundancy between Ad Spend and features that influence Ad Spend. When we have the same information captured by several features, predictive models can use any of those features for prediction, even though they are not all causal. While Ad Spend has no causal effect on renewal itself, it is strongly correlated with several features that do drive renewal. Our regularized model identifies Ad Spend as a useful predictor because it summarizes multiple causal drivers (so leading to a sparser model), but that becomes seriously misleading if we start to interpret it as a causal effect.
We will now tackle each piece of our example in turn to illustrate when predictive models can accurately measure causal effects, and when they cannot. We will also introduce some causal tools that can sometimes estimate causal effects in cases where predictive models fail.
When predictive models can answer causal questions
Let's start with the successes in our example. Notice that our predictive model does a good job of capturing the real causal effect of the Economy feature (a better economy has a positive effect on retention). So when can we expect predictive models to capture true causal effects?
The important ingredient that allowed XGBoost to get a good causal effect estimate for Economy is the feature's strong independent component (in this simulation); its predictive power for retention is not strongly redundant with any other measured features, or with any unmeasured confounders. In consequence, it is not subject to bias from either unmeasured confounders or feature redundancy.
End of explanation
"""
# Ad spend is very redundant with Monthly usage and Last upgrade.
shap.plots.bar(shap_values, clustering=clust, clustering_cutoff=1)
"""
Explanation: Since we have added clustering to the right side of the SHAP bar plot we can see the redundancy structure of our data as a dendrogram. When features merge together at the bottom (left) of the dendrogram it means that the information those features contain about the outcome (renewal) is very redundant and the model could have used either feature. When features merge together at the top (right) of the dendrogram it means the information they contain about the outcome is independent from each other.
We can see that Economy is independent from all the other measured features by noting that Economy does not merge with any other features until the very top of the clustering dendrogram. This tells us that Economy does not suffer from observed confounding. But to trust that the Economy effect is causal we also need to check for unobserved confounding. Checking for unmeasured confounders is harder and requires using domain knowledge (provided by the business partners in our example above).
For classic predictive ML models to deliver causal results the features need to be independent not only of other features in the model, but also of unobserved confounders. It's not common to find examples of drivers of interest that exhibit this level of independence naturally, but we can often find examples of independent features when our data contains some experiments.
When predictive models cannot answer causal questions but causal inference methods can help
In most real-world datasets features are not independent and unconfounded, so standard predictive models will not learn the true causal effects. As a result, explaining them with SHAP will not reveal causal effects. But all is not lost, sometimes we can fix or at least minimize this problem using the tools of observational causal inference.
Observed confounding
The first scenario where causal inference can help is observed confounding. A feature is "confounded" when there is another feature that causally affects both the original feature and the outcome we are predicting. If we can measure that other feature it is called an observed confounder.
End of explanation
"""
from econml.dml import LinearDML
from sklearn.base import BaseEstimator, clone
import matplotlib.pyplot as plt
class RegressionWrapper(BaseEstimator):
""" Turns a classifier into a 'regressor'.
    We use the regression formulation of double ML, so we need to approximate the classifier
as a regression model. This treats the probabilities as just quantitative value targets
for least squares regression, but it turns out to be a reasonable approximation.
"""
def __init__(self, clf):
self.clf = clf
def fit(self, X, y, **kwargs):
self.clf_ = clone(self.clf)
self.clf_.fit(X, y, **kwargs)
return self
def predict(self, X):
return self.clf_.predict_proba(X)[:, 1]
# Run Double ML, controlling for all the other features
def double_ml(y, causal_feature, control_features):
""" Use doubleML from econML to estimate the slope of the causal effect of a feature.
"""
xgb_model = xgboost.XGBClassifier(objective="binary:logistic", random_state=42)
est = LinearDML(model_y=RegressionWrapper(xgb_model))
est.fit(y, causal_feature, W=control_features)
return est.effect_inference()
def plot_effect(effect, xs, true_ys, ylim=None):
""" Plot a double ML effect estimate from econML as a line.
Note that the effect estimate from double ML is an average effect *slope* not a full
function. So we arbitrarily draw the slope of the line as passing through the origin.
"""
plt.figure(figsize=(5, 3))
pred_xs = [xs.min(), xs.max()]
mid = (xs.min() + xs.max())/2
pred_ys = [effect.pred[0]*(xs.min() - mid), effect.pred[0]*(xs.max() - mid)]
plt.plot(xs, true_ys - true_ys[0], label='True causal effect', color="black", linewidth=3)
point_pred = effect.point_estimate * pred_xs
pred_stderr = effect.stderr * np.abs(pred_xs)
plt.plot(pred_xs, point_pred - point_pred[0], label='Double ML slope', color=shap.plots.colors.blue_rgb, linewidth=3)
# 99.9% CI
plt.fill_between(pred_xs, point_pred - point_pred[0] - 3.291 * pred_stderr,
point_pred - point_pred[0] + 3.291 * pred_stderr, alpha=.2, color=shap.plots.colors.blue_rgb)
plt.legend()
plt.xlabel("Ad spend", fontsize=13)
plt.ylabel("Zero centered effect")
if ylim is not None:
plt.ylim(*ylim)
plt.gca().xaxis.set_ticks_position('bottom')
plt.gca().yaxis.set_ticks_position('left')
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.show()
# estimate the causal effect of Ad spend controlling for all the other features
causal_feature = "Ad spend"
control_features = [
"Sales calls", "Interactions", "Economy", "Last upgrade", "Discount",
"Monthly usage", "Bugs reported"
]
effect = double_ml(y, X[causal_feature], X.loc[:,control_features])
# plot the estimated slope against the true effect
xs, true_ys = marginal_effects(generator, 10000, X[["Ad spend"]], logit=False)[0]
plot_effect(effect, xs, true_ys, ylim=(-0.2, 0.2))
"""
Explanation: An example of this in our scenario is the Ad Spend feature. Even though Ad Spend has no direct causal effect on retention, it is correlated with the Last Upgrade and Monthly Usage features, which do drive retention. Our predictive model identifies Ad Spend as one of the best single predictors of retention because it captures so many of the true causal drivers through correlations. XGBoost imposes regularization, which is a fancy way of saying that it tries to choose the simplest possible model that still predicts well. If it could predict equally well using one feature rather than three, it will tend to do that to avoid overfitting. But this means that if Ad Spend is highly correlated with both Last Upgrade and Monthly Usage, XGBoost may use Ad Spend instead of the causal features! This property of XGBoost (or any other machine learning model with regularization) is very useful for generating robust predictions of future retention, but not good for understanding which features we should manipulate if we want to increase retention.
This highlights the importance of matching the right modeling tools to each question. Unlike the bug reporting example, there is nothing intuitively wrong with the conclusion that increasing ad spend increases retention. Without proper attention to what our predictive model is, and is not, measuring, we could easily have proceeded with this finding and only learned our mistake after increasing spending on advertising and not getting the renewal results we expected.
Observational Causal Inference
The good news for Ad Spend is that we can measure all the features that could confound it (those features with arrows into Ad Spend in our causal graph above). Therefore, this is an example of observed confounding, and we should be able to disentangle the correlation patterns using only the data we've already collected; we just need to use the right tools from observational causal inference. These tools allow us to specify what features could confound Ad Spend and then adjust for those features, to get an unconfounded estimate of the causal effect of Ad Spend on product renewal.
One particularly flexible tool for observational causal inference is double/debiased machine learning. It uses any machine learning model you want to first deconfound the feature of interest (i.e. Ad Spend) and then estimate the average causal effect of changing that feature (i.e. the average slope of the causal effect).
Double ML works as follows:
1. Train a model to predict a feature of interest (i.e. Ad Spend) using a set of possible confounders (i.e. any features not caused by Ad Spend).
2. Train a model to predict the outcome (i.e. Did Renew) using the same set of possible confounders.
3. Train a model to predict the residual variation of the outcome (the variation left after subtracting our prediction) using the residual variation of the causal feature of interest.
The intuition is that if Ad Spend causes renewal, then the part of Ad Spend that can't be predicted by other confounding features should be correlated with the part of renewal that can't be predicted by other confounding features. Stated another way, double ML assumes that there is an independent (unobserved) noise feature that impacts Ad Spend (since Ad Spend is not perfectly determined by the other features), so we can impute the value of this independent noise feature and then train a model on this independent feature to predict the output.
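To make those three steps concrete, here is a minimal hand-rolled sketch of the residual-on-residual idea using scikit-learn. This is a toy version of our own making: it fits in-sample and skips the cross-fitting and statistical inference that a proper double ML implementation handles for us.
```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

confounders = X.drop(columns=["Ad spend"])

# Step 1: residualize the feature of interest against the possible confounders
t_hat = GradientBoostingRegressor().fit(confounders, X["Ad spend"]).predict(confounders)
t_res = X["Ad spend"] - t_hat

# Step 2: residualize the outcome against the same confounders
y_hat = GradientBoostingRegressor().fit(confounders, y).predict(confounders)
y_res = y - y_hat

# Step 3: regress the outcome residuals on the feature residuals; the slope is a
# rough estimate of the average causal effect of Ad spend
slope = LinearRegression().fit(t_res.values.reshape(-1, 1), y_res).coef_[0]
print(slope)
```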
While we could do all the double ML steps manually, it is easier to use a causal inference package like econML or CausalML. Here we use econML's LinearDML model. This returns a P-value of whether that treatment has a non-zero causal effect, and works beautifully in our scenario, correctly identifying that there is no evidence for a causal effect of ad spending on renewal (P-value = 0.85):
End of explanation
"""
# Interactions and sales calls are very redundant with one another.
shap.plots.bar(shap_values, clustering=clust, clustering_cutoff=1)
"""
Explanation: Remember, double ML (or any other observational causal inference method) only works when you can measure and identify all the possible confounders of the feature for which you want to estimate causal effects. Here we know the causal graph and can see that Monthly Usage and Last Upgrade are the two direct confounders we need to control for. But if we didn't know the causal graph we could still look at the redundancy in the SHAP bar plot and see that Monthly Usage and Last Upgrade are the most redundant features and so are good candidates to control for (as are Discounts and Bugs Reported).
Non-confounding redundancy
The second scenario where causal inference can help is non-confounding redundancy. This occurs when the feature we want causal effects for causally drives, or is driven by, another feature included in the model, but that other feature is not a confounder of our feature of interest.
End of explanation
"""
# Fit, explain, and plot a univariate model with just Sales calls
# Note how this model does not have to split credit between Sales calls and
# Interactions, so we get better agreement with the true causal effect.
sales_calls_model = fit_xgboost(X[["Sales calls"]], y)
sales_calls_shap_values = shap.Explainer(sales_calls_model)(X[["Sales calls"]])
shap.plots.scatter(sales_calls_shap_values, overlay={
"True causal effects": marginal_effects(generator, 10000, ["Sales calls"])
})
"""
Explanation: An example of this is the Sales Calls feature. Sales Calls directly impact retention, but also have an indirect effect on retention through Interactions. When we include both the Interactions and Sales Calls features in the model the causal effect shared by both features is forced to spread out between them. We can see this in the SHAP scatter plots above, which show how XGBoost underestimates the true causal effect of Sales Calls because most of that effect got put onto the Interactions feature.
Non-confounding redundancy can be fixed in principle by removing the redundant variables from the model (see below). For example, if we removed Interactions from the model then we will capture the full effect of making a sales call on renewal probability. This removal is also important for double ML, since double ML will fail to capture indirect causal effects if you control for downstream features caused by the feature of interest. In this case double ML will only measure the "direct" effect that does not pass through the other feature. Double ML is however robust to controlling for upstream non-confounding redundancy (where the redundant feature causes the feature of interest), though this will reduce your statistical power to detect true effects.
Unfortunately, we often don't know the true causal graph so it can be hard to know when another feature is redundant with our feature of interest because of observed confounding vs. non-confounding redundancy. If it is because of confounding then we should control for that feature using a method like double ML, whereas if it is a downstream consequence then we should drop the feature from our model if we want full causal effects rather than only direct effects. Controlling for a feature we shouldn't tends to hide or split up causal effects, while failing to control for a feature we should have controlled for tends to infer causal effects that do not exist. This generally makes controlling for a feature the safer option when you are uncertain.
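As a rough illustration of that choice (reusing the double_ml helper defined above, and leaning on the causal graph that we happen to know in this simulation), we could compare estimates for Sales calls with and without the downstream Interactions feature in the control set. The control lists below are our own choice for this sketch, not something prescribed by double ML itself:
```python
# Sales calls has no causal parents in our graph, so controlling only for the
# other exogenous features targets its *total* effect...
upstream_controls = ["Economy", "Last upgrade"]
total_effect = double_ml(y, X["Sales calls"], X.loc[:, upstream_controls])

# ...while additionally controlling for the downstream Interactions feature
# blocks the indirect path and leaves only the *direct* effect.
direct_effect = double_ml(y, X["Sales calls"], X.loc[:, upstream_controls + ["Interactions"]])

print(total_effect.point_estimate, direct_effect.point_estimate)
```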
End of explanation
"""
# Discount and Bugs reported are fairly independent of the other features we can
# measure, but they are not independent of Product need, which is an unobserved confounder.
shap.plots.bar(shap_values, clustering=clust, clustering_cutoff=1)
"""
Explanation: When neither predictive models nor unconfounding methods can answer causal questions
Double ML (or any other causal inference method that assumes unconfoundedness) only works when you can measure and identify all the possible confounders of the feature for which you want to estimate causal effects. If you can't measure all the confounders then you are in the hardest possible scenario: unobserved confounding.
End of explanation
"""
# estimate the causal effect of Discount controlling for all the other features
causal_feature = "Discount"
control_features = [
"Sales calls", "Interactions", "Economy", "Last upgrade",
"Monthly usage", "Ad spend", "Bugs reported"
]
effect = double_ml(y, X[causal_feature], X.loc[:,control_features])
# plot the estimated slope against the true effect
xs, true_ys = marginal_effects(generator, 10000, X[[causal_feature]], logit=False)[0]
plot_effect(effect, xs, true_ys, ylim=(-0.5, 0.2))
"""
Explanation: The Discount and Bugs Reported features both suffer from unobserved confounding because not all important variables (e.g., Product Need and Bugs Faced) are measured in the data. Even though both features are relatively independent of all the other features in the model, there are important drivers that are unmeasured. In this case, both predictive models and causal models that require confounders to be observed, like double ML, will fail. This is why double ML estimates a large negative causal effect for the Discount feature even when controlling for all other observed features:
End of explanation
"""
|
anandha2017/udacity
|
nd101 Deep Learning Nanodegree Foundation/DockerImages/24_embeddings_and_word2vec/notebooks/01-embeddings/Skip-Grams-Solution.ipynb
|
mit
|
import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
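As a tiny sanity check of that claim, here is a toy example with made-up sizes (not the network we build below):
```python
import numpy as np

vocab_size, embed_dim = 5, 3
W = np.random.rand(vocab_size, embed_dim)  # embedding weight matrix

word_idx = 2                    # integer-encoded word
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1

# The one-hot matrix multiplication just selects a row of W
assert np.allclose(one_hot @ W, W[word_idx])
```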
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
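For example, with $t = 10^{-5}$, a word that makes up 1% of the corpus ($f(w_i) = 10^{-2}$) is discarded with probability $1 - \sqrt{10^{-5}/10^{-2}} \approx 0.97$, while any word with frequency at or below the threshold is always kept.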
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
landlab/landlab
|
notebooks/tutorials/component_tutorial/component_tutorial.ipynb
|
mit
|
from landlab.components import LinearDiffuser
from landlab.plot import imshow_grid
from landlab import RasterModelGrid
import matplotlib as mpl
import matplotlib.cm as cm
from matplotlib.pyplot import figure, show, plot, xlabel, ylabel, title
import numpy as np
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Getting to know the Landlab component library
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This notebook walks you through the stages of creating and running a Landlab model using the Landlab component library.
We are going to create three models: firstly, a single-component driver implementing just linear diffusion; then a three-component driver implementing linear diffusion, flow routing, and stream power incision; and finally a similar model, but implementing a storm-interstorm sequence.
The basics: one component
Let's begin with the one-component diffusion model.
Firstly, import the library elements we'll need. The component classes can all be imported from the landlab.components library. They're all formatted in CamelCaseLikeThis. Anything you see in that folder that isn't formatted like this isn't a component!
End of explanation
"""
mg = RasterModelGrid((80, 80), xy_spacing=5.0)
z = mg.add_zeros("topographic__elevation", at="node")
"""
Explanation: Let's start by creating the grid that we'll do the first part of this exercise with, and putting some data into its fields. Note that you need to create the fields that a component takes as inputs before instantiating a component - though you can put values into the arrays later if you need to (as illustrated below). For more info on working with fields, see the fields tutorial.
End of explanation
"""
LinearDiffuser.input_var_names
"""
Explanation: How did we know this was a field we needed as an input? Well, firstly because we read the component documentation (always do this!), but secondly we can get a reminder using the Landlab Component Standard Interface:
End of explanation
"""
LinearDiffuser.var_help("topographic__elevation")
"""
Explanation: Note we didn't have to instantiate the component to be able to do this! Other standard properties are output_var_names and optional_var_names; pass an input or output name to var_loc, var_type, var_units, and var_definition to get the centering ('node', 'link', etc.), array dtype (float, int), units (meters, etc.), and a descriptive string, respectively. var_help will give you a lot of this information at once:
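For example, a few of the individual queries look something like this (a quick sketch; the exact strings returned will depend on your Landlab version):
```python
LinearDiffuser.output_var_names
LinearDiffuser.var_units("topographic__elevation")   # e.g. 'm'
LinearDiffuser.var_loc("topographic__elevation")     # e.g. 'node'
```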
End of explanation
"""
for edge in (mg.nodes_at_left_edge, mg.nodes_at_right_edge):
mg.status_at_node[edge] = mg.BC_NODE_IS_CLOSED
for edge in (mg.nodes_at_top_edge, mg.nodes_at_bottom_edge):
mg.status_at_node[edge] = mg.BC_NODE_IS_FIXED_VALUE
"""
Explanation: It's also a good idea to set the grid boundary conditions before component instantiation. Let's have fixed value top and bottom and closed left and right (see the boundary conditions tutorial):
End of explanation
"""
lin_diffuse = LinearDiffuser(mg, linear_diffusivity=0.2)
"""
Explanation: You will find that all components within landlab share a similar interface. We'll examine how it looks first on the diffusion component.
Landlab components have a standardised instantiation signature. Inputs to the component can be fed in as arguments to the constructor (i.e., the function that gets called when you create a new instances of a component), rather than being fed in as strings from a text input file (though note, you an still do this, see below). This has two major advantages: firstly, components now have plainly declared default values, which are visible just as they would be in, say, a numpy function; secondly, because the inputs are now Python objects, it's a lot easier to work with spatially variable inputs that need to be passed in as arrays, and also to feed dynamically changing inputs into a component.
The standard signature to instantiate a component looks like this:
python
MyComponent(grid, input1=default1, input2=default2, input3=default3, ...)
Because defaults are provided, you can instantiate a component with default values very simply. The diffuser, for example, requires only that a linear_diffusivity be supplied:
End of explanation
"""
total_t = 200000.0
dt = 1000.0
uplift_rate = 0.001
nt = int(total_t // dt)
# ^note if we didn't know a priori that there are a round number of steps dt in the
# total time, we'd have to take care to account for the "extra" time (see example below)
for i in range(nt):
lin_diffuse.run_one_step(dt)
z[mg.core_nodes] += uplift_rate * dt # add the uplift
# add some output to let us see we aren't hanging:
if i % 50 == 0:
print(i * dt)
"""
Explanation: We'll see some other ways of initializing (e.g., from an input file) below.
Now we're ready to run the component! Run methods are also standardized. Most Landlab components have a standard run method named run_one_step, and it looks like this:
python
my_comp.run_one_step(dt)
If the component is time-dependent, dt, the timestep, will be the first argument. (In Landlab 1.x, some components have subsequent keywords, which will typically be flags that control the way the component runs, and usually can be left as their default values; these extra keywords are absent in Landlab 2.x). Note that nothing is returned from a run method like this, but that nonetheless the grid fields are updated.
This dt is properly thought of as the external model timestep; it controls essentially the frequency at which the various Landlab components you're implementing can exchange information with each other and with the driver (e.g., frequency at which uplift steps are added to the grid). If your model has a stability condition that demands a shorter timestep, the external timestep will be subdivided internally down to this shorter timescale.
So let's do it. It's up to you as the component designer to make sure your driver script accounts properly for the total time the model runs. Here, we want to run for 200000 years with a timestep of 1000 years, with an uplift rate of 0.001 m/y. So:
End of explanation
"""
# the following line makes figures show up correctly in this document (only needed for Jupyter notebook)
%matplotlib inline
# Create a figure and plot the elevations
figure(1)
im = imshow_grid(
mg, "topographic__elevation", grid_units=["m", "m"], var_name="Elevation (m)"
)
figure(2)
elev_rast = mg.node_vector_to_raster(z)
ycoord_rast = mg.node_vector_to_raster(mg.node_y)
ncols = mg.number_of_node_columns
im = plot(ycoord_rast[:, int(ncols // 2)], elev_rast[:, int(ncols // 2)])
xlabel("horizontal distance (m)")
ylabel("vertical distance (m)")
title("topographic__elevation cross section")
"""
Explanation: Note that we're using z to input the uplift here, which we already bound to the Landlab field mg.at_node['topographic__elevation'] when we instantiated that field. This works great, but always be careful to update the values inside the array, not to reset the variable as equal to something else, i.e., to put new values in the field do:
python
z[:] = new_values # values copied into the existing field
not
python
z = new_values # z is now "new_values", not the field!
Now plot the output!
End of explanation
"""
z[:] = 0.0 # reset the elevations to zero
k_diff = mg.zeros("node", dtype=float)
k_diff.fill(1.0)
k_diff *= (mg.node_x.max() - 0.9 * mg.x_of_node) / mg.x_of_node.max()
k_field = mg.add_field("linear_diffusivity", k_diff, at="node", clobber=True)
imshow_grid(mg, k_diff, var_name="k_diff", cmap="winter") # check it looks good
"""
Explanation: Now let's repeat this exercise, but this time illustrating how we can pass fields in as parameters to components. We're going to make the diffusivity spatially variable, falling by a factor of ten as we move across the grid.
End of explanation
"""
lin_diffuse = LinearDiffuser(mg, linear_diffusivity="linear_diffusivity")
# we could also have passed in `k_diff` in place of the string
"""
Explanation: Now we re-initialize the component instance to bind the k_diff field to the component:
End of explanation
"""
for i in range(nt):
lin_diffuse.run_one_step(dt)
z[mg.core_nodes] += uplift_rate * dt # add the uplift
# add some output to let us see we aren't hanging:
if i % 50 == 0:
print(i * dt)
figure(3)
im = imshow_grid(
mg, "topographic__elevation", grid_units=["m", "m"], var_name="Elevation (m)"
)
"""
Explanation: ...and run just as before. Note this will be slower than before; the internal timestep is shorter because we've modified the diffusivities.
End of explanation
"""
from landlab.components import FlowAccumulator, FastscapeEroder
from landlab import load_params
"""
Explanation: Running two or more components
Now we're going to take a similar approach but this time combine the outputs of three distinct Landlab components: the diffuser, the monodirectional flow router, and the stream power incisor. For clarity, we're going to repeat the whole process from the start.
So first, let's import everything we don't already have:
End of explanation
"""
input_file = "./coupled_params.txt"
inputs = load_params(input_file) # load the data into a dictionary
nrows = inputs["nrows"]
ncols = inputs["ncols"]
dx = inputs["dx"]
uplift_rate = inputs["uplift_rate"]
total_t = inputs["total_time"]
dt = inputs["dt"]
nt = int(total_t // dt) # this is how many loops we'll need
uplift_per_step = uplift_rate * dt
# illustrate what the MPD looks like:
print(inputs)
"""
Explanation: More components means more input parameters. So this time, we're going to make our lives easier by instantiating our components from an input file. Note also that we've now switched length units to km from m.
We're going to handle our input file using the very powerful load_params Landlab function. This function can read input text files formatted in a variety of different ways, including the yaml standard. It automatically types the values it finds in the input file (i.e., makes them int, float, string, etc.), and returns them as a Python dictionary. This dictionary is the model parameter dictionary (MPD). However, feel free to use your own way of reading in a text file. The important thing is that you end up with a dictionary that contains 'input_parameter_name': parameter_value pairs. Note that the file format has subsets of parameters grouped, using indentation:
yaml
stream_power:
K_sp: 0.3
m_sp: 0.5
linear_diffuser:
linear_diffusivity: 0.0001
When read into a dictionary, this forms two sub-dictionaries, with the keys stream_power and linear_diffuser. We will pass these two sub-dictionaries as **kwargs arguments to the FastscapeEroder and LinearDiffuser components, respectively.
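For illustration, unpacking one of those sub-dictionaries is equivalent to spelling the keywords out by hand (the values here are the ones from the snippet above; we use the real input file in the cells that follow):
```python
stream_power_params = {"K_sp": 0.3, "m_sp": 0.5}
sp_example = FastscapeEroder(mg, **stream_power_params)  # same as FastscapeEroder(mg, K_sp=0.3, m_sp=0.5)
```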
End of explanation
"""
mg = RasterModelGrid((nrows, ncols), dx)
z = mg.add_zeros("topographic__elevation", at="node")
# add some roughness, as this lets "natural" channel planforms arise
initial_roughness = np.random.rand(z.size) / 100000.0
z += initial_roughness
for edge in (mg.nodes_at_left_edge, mg.nodes_at_right_edge):
mg.status_at_node[edge] = mg.BC_NODE_IS_CLOSED
for edge in (mg.nodes_at_top_edge, mg.nodes_at_bottom_edge):
mg.status_at_node[edge] = mg.BC_NODE_IS_FIXED_VALUE
"""
Explanation: Now instantiate the grid, set the initial conditions, and set the boundary conditions:
End of explanation
"""
fr = FlowAccumulator(mg)
sp = FastscapeEroder(mg, **inputs["stream_power"])
lin_diffuse = LinearDiffuser(mg, **inputs["linear_diffuser"])
"""
Explanation: So far, so familiar.
Now we're going to instantiate all our components, using the MPD. We can do this using a bit of Python magic that lets you pass dictionaries into functions as sets of keywords. We do this by passing the dictionary as the final input, with to asterisks - ** in front of it:
End of explanation
"""
for i in range(nt):
# lin_diffuse.run_one_step(dt) no diffusion this time
fr.run_one_step() # run_one_step isn't time sensitive, so it doesn't take dt as input
sp.run_one_step(dt)
mg.at_node["topographic__elevation"][
mg.core_nodes
] += uplift_per_step # add the uplift
if i % 20 == 0:
print("Completed loop %d" % i)
"""
Explanation: What's happening here is that the component is looking inside the dictionary for any keys that match its keywords, and using them. Values in the dictionary will override component defaults, but note that you cannot provide a keyword manually that is also defined in a supplied dictionary, i.e., this would result in a TypeError:
```python
lin_diffuse = LinearDiffuser(mg, linear_diffusivity=1.,
**{'linear_diffusivity': 1.})
TypeError
```
A note on the FlowAccumulator. This component provides a variety of options for the flow direction method used (e.g., D4/SteepestDescent, D8, MFD etc.). By default it uses D4 flow routing and does not deal with depression finding and routing.
In order to use the DepressionFinderAndRouter inside the FlowAccumulator, specify depression_finder = 'DepressionFinderAndRouter'.
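For example (a quick sketch only; we don't use it in this notebook), that would look like:
```python
fr_with_pits = FlowAccumulator(mg, depression_finder="DepressionFinderAndRouter")
```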
If you are using the FlowAccumulator in additional projects or using this notebook as a starting place for additional work, work through the three tutorials on the FlowDirectors and the FlowAccumulator first.
And now we run! We're going to run once with the diffusion and once without.
End of explanation
"""
figure("topo without diffusion")
imshow_grid(
mg, "topographic__elevation", grid_units=["km", "km"], var_name="Elevation (km)"
)
"""
Explanation: You'll need to give the above code a few seconds to run.
End of explanation
"""
z[:] = initial_roughness
for i in range(nt):
    lin_diffuse.run_one_step(dt)  # diffusion is on this time
fr.run_one_step() # run_one_step isn't time sensitive, so it doesn't take dt as input
sp.run_one_step(dt)
mg.at_node["topographic__elevation"][
mg.core_nodes
] += uplift_per_step # add the uplift
if i % 20 == 0:
print("Completed loop %d" % i)
figure("topo with diffusion")
imshow_grid(
mg, "topographic__elevation", grid_units=["km", "km"], var_name="Elevation (km)"
)
"""
Explanation: And now let's reset the grid elevations and do everything again, but this time, with the diffusion turned on:
End of explanation
"""
from landlab.components import ChannelProfiler, PrecipitationDistribution
from matplotlib.pyplot import loglog
z[:] = initial_roughness
"""
Explanation: Beautiful! We've smoothed away the fine-scale channel roughness, as expected, and produced some lovely convex-up hillslopes in its place. Note that even though the initial conditions were identical in both cases, including the roughness, the channel positions have been moved significantly by the hillslope diffusion into the channel.
As a final step, we're going to show off some of Landlab's fancier functionality. We're going to repeat the above coupled model run, but this time we're going to plot some evolving channel profiles, and we're going to drive the simulation with a sequence of storms, not just a fixed timestep. We'll also produce a slope-area plot for the final conditions.
Working with timesteps of varying length requires a bit more bookkeeping, but the principle is the same as what we've seen before.
So, load the new landlab objects we'll need, then reset the initial conditions:
End of explanation
"""
dt = 0.1
total_t = 250.0
storm_inputs = load_params("./coupled_params_storms.txt")
precip = PrecipitationDistribution(total_t=total_t, delta_t=dt, **storm_inputs)
print(storm_inputs)
# make a color mapping appropriate for our time duration
norm = mpl.colors.Normalize(vmin=0, vmax=total_t)
map_color = cm.ScalarMappable(norm=norm, cmap="viridis")
"""
Explanation: Instantiate the storm generator. This time, we're going to mix an input file for some components with manual definition of others (that we already defined above).
End of explanation
"""
out_interval = 20.0
last_trunc = total_t # we use this to trigger taking an output plot
for (
interval_duration,
rainfall_rate,
) in precip.yield_storm_interstorm_duration_intensity():
if rainfall_rate > 0.0:
# note diffusion also only happens when it's raining...
fr.run_one_step()
sp.run_one_step(interval_duration)
lin_diffuse.run_one_step(interval_duration)
z[mg.core_nodes] += uplift_rate * interval_duration
this_trunc = precip.elapsed_time // out_interval
if this_trunc != last_trunc: # time to plot a new profile!
print("made it to time %d" % (out_interval * this_trunc))
last_trunc = this_trunc
figure("long_profiles")
# get and plot the longest profile
cp = ChannelProfiler(mg)
cp.run_one_step()
cp.plot_profiles(color=map_color.to_rgba(precip.elapsed_time))
# no need to track elapsed time, as the generator will stop automatically
# make the figure look nicer:
figure("long_profiles")
xlabel("Distance upstream (km)")
ylabel("Elevation (km)")
title("Long profiles evolving through time")
mpl.pyplot.colorbar(map_color)
"""
Explanation: Now run:
End of explanation
"""
figure("topo with diffusion and storms")
imshow_grid(
mg, "topographic__elevation", grid_units=["km", "km"], var_name="Elevation (km)"
)
"""
Explanation: Note that the "wobbles" in the long profile here are being created by the stochastic storm sequence. We could reduce their impact by reducing the storm-interstorm timescales, or allowing diffusion while it's not raining, but we've chosen not to here to show that the storms are having an effect.
End of explanation
"""
cp = ChannelProfiler(
mg, number_of_watersheds=7, minimum_channel_threshold=0.01, main_channel_only=False
)
cp.run_one_step()
cp.plot_profiles_in_map_view()
"""
Explanation: We can also plot the location of the channels in map view.
Here we plot all channel segments with drainage area greater than 0.01 square kilometers in the seven biggest drainage basins.
End of explanation
"""
figure("final slope-area plot")
loglog(mg.at_node["drainage_area"], mg.at_node["topographic__steepest_slope"], ".")
xlabel("Drainage area (km**2)")
ylabel("Local slope")
title("Slope-Area plot for whole landscape")
"""
Explanation: Next we make a slope area plot.
End of explanation
"""
|
fantasycheng/udacity-deep-learning-project
|
tutorials/dcgan-svhn/DCGAN.ipynb
|
mit
|
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
"""
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
"""
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
"""
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
"""
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
"""
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
"""
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
"""
Explanation: Here we need to do a bit of preprocessing to get the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images, which could be used if we were trying to identify the numbers in the images.
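As a quick sanity check (an added example, not part of the original notebook), the scale function defined above should map a raw SVHN image into the (-1, 1) range:
```python
sample = trainset['X'][:, :, :, 0]   # one raw image, uint8 values in 0-255
scaled = scale(sample)
print(sample.min(), sample.max())    # roughly 0 and 255
print(scaled.min(), scaled.max())    # roughly -1.0 and 1.0
```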
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
"""
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
"""
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers like this until you get to the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
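As an added illustration (a sketch, not the notebook's implementation), the repeated transposed convolution > batch norm > leaky ReLU scheme can be factored into a small helper using the same tf.layers calls as above; the name deconv_block is hypothetical:
```python
def deconv_block(x, filters, alpha=0.2, training=True):
    # transposed convolution > batch norm > leaky ReLU
    x = tf.layers.conv2d_transpose(x, filters, 5, strides=2, padding='same')
    x = tf.layers.batch_normalization(x, training=training)
    return tf.maximum(alpha * x, x)
```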
End of explanation
"""
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
End of explanation
"""
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
"""
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
"""
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=0.2)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
"""
Explanation: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
"""
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
"""
Explanation: Here is a function for displaying generated images.
End of explanation
"""
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# Every print_every steps, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
"""
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
"""
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
"""
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
End of explanation
"""
|
sdpython/ensae_teaching_cs
|
_doc/notebooks/td2a_ml/td2a_pipeline_tree_selection_correction.ipynb
|
mit
|
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
"""
Explanation: 2A.ml - Pipeline for reducing a random forest - correction
The Lasso model can select variables, while a random forest produces its prediction as the average of regression trees. This topic was covered in the notebook Reduction d'une forêt aléatoire. Here we want to automate the process.
End of explanation
"""
from sklearn.datasets import load_boston
data = load_boston()
X, y = data.data, data.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
"""
Explanation: Datasets
Since we always need data, we use the Boston housing dataset.
End of explanation
"""
import numpy
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
# Fit a random forest
clr = RandomForestRegressor()
clr.fit(X_train, y_train)
# Collect the prediction of each tree
X_train_2 = numpy.zeros((X_train.shape[0], len(clr.estimators_)))
estimators = numpy.array(clr.estimators_).ravel()
for i, est in enumerate(estimators):
pred = est.predict(X_train)
X_train_2[:, i] = pred
# Fit a Lasso regression
lrs = Lasso(max_iter=10000)
lrs.fit(X_train_2, y_train)
lrs.coef_
"""
Explanation: Random forest followed by Lasso
The method consists of fitting a random forest and then running a regression on the predictions of each of its estimators.
End of explanation
"""
from sklearn.pipeline import Pipeline
try:
pipe = Pipeline(steps=[
('rf', RandomForestRegressor()),
("une fonction qui n'existe pas encore", fct),
("lasso", Lasso()),
])
except Exception as e:
print(e)
"""
Explanation: We managed to reproduce the whole process. It is not always easy to remember all the steps, which is why it is simpler to bundle everything into a pipeline.
First pipeline
The idea is to end up with something that looks like the following.
End of explanation
"""
from sklearn.preprocessing import FunctionTransformer
def random_forest_tree_prediction(rf, X):
preds = numpy.zeros((X.shape[0], len(rf.estimators_)))
estimators = numpy.array(rf.estimators_).ravel()
for i, est in enumerate(estimators):
pred = est.predict(X)
preds[:, i] = pred
return preds
random_forest_tree_prediction(clr, X)
fct = FunctionTransformer(lambda X, rf=clr: random_forest_tree_prediction(rf, X) )
fct.transform(X_train)
"""
Explanation: A pipeline can only contain predictive models (classifiers, regressors) or transformers (e.g., scalers). The function that extracts the tree predictions therefore has to be wrapped in a transformer. That is the role of a FunctionTransformer.
End of explanation
"""
try:
pipe = Pipeline(steps=[
('rf', RandomForestRegressor()),
("tree_pred", fct),
("lasso", Lasso()),
])
except Exception as e:
print(e)
"""
Explanation: Everything works fine. We just need to insert it into the pipeline.
End of explanation
"""
hasattr(clr, 'transform')
from jyquickhelper import RenderJsDot
RenderJsDot("""digraph {
A [label="RandomForestRegressor pipeline"];
A2 [label="RandomForestRegressor - pretrained"];
B [label="FunctionTransformer"]; C [label="Lasso"];
A -> B [label="X"]; B -> C [label="X2"]; A2 -> B [label="rf"]; }""")
"""
Explanation: It still does not work, because a pipeline requires every step except the last one to be a transformer and to implement the transform method, which is not the case here. It also raises another problem: the function only works if it receives the random forest as an argument, and we passed in the already-trained forest, which would not be the one trained inside the pipeline.
End of explanation
"""
class RandomForestRegressorAsTransformer:
def __init__(self, **kwargs):
self.rf = RandomForestRegressor(**kwargs)
def fit(self, X, y):
self.rf.fit(X, y)
return self
def transform(self, X):
preds = numpy.zeros((X.shape[0], len(self.rf.estimators_)))
estimators = numpy.array(self.rf.estimators_).ravel()
for i, est in enumerate(estimators):
pred = est.predict(X)
preds[:, i] = pred
return preds
trrf = RandomForestRegressorAsTransformer()
trrf.fit(X_train, y_train)
trrf.transform(X_train)
"""
Explanation: Since that does not work, we move on to a second idea.
Second pipeline
We disguise the random forest as a transformer.
End of explanation
"""
pipe = Pipeline(steps=[('trrf', RandomForestRegressorAsTransformer()),
("lasso", Lasso())])
pipe.fit(X_train, y_train)
"""
Explanation: Everything works. We rebuild the pipeline.
End of explanation
"""
pipe.steps[1][1].coef_
"""
Explanation: We retrieve the coefficients.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
param_grid = {'trrf__n_estimators': [30, 50, 80, 100],
'lasso__alpha': [0.5, 1.0, 1.5]}
try:
grid = GridSearchCV(pipe, cv=5, verbose=1, param_grid=param_grid)
grid.fit(X_train, y_train)
except Exception as e:
print(e)
"""
Explanation: What is it for: GridSearchCV
Since all the processing steps are now in a single pipeline, which scikit-learn treats like any other model, we can search for the model's best hyperparameters, such as the initial number of trees, the alpha parameter, the tree depth...
End of explanation
"""
class RandomForestRegressorAsTransformer:
def __init__(self, **kwargs):
self.rf = RandomForestRegressor(**kwargs)
def fit(self, X, y):
self.rf.fit(X, y)
return self
def transform(self, X):
preds = numpy.zeros((X.shape[0], len(self.rf.estimators_)))
estimators = numpy.array(self.rf.estimators_).ravel()
for i, est in enumerate(estimators):
pred = est.predict(X)
preds[:, i] = pred
return preds
def set_params(self, **params):
self.rf.set_params(**params)
import warnings
from sklearn.exceptions import ConvergenceWarning
pipe = Pipeline(steps=[('trrf', RandomForestRegressorAsTransformer()),
("lasso", Lasso())])
param_grid = {'trrf__n_estimators': [50, 100],
'lasso__alpha': [0.5, 1.0, 1.5]}
grid = GridSearchCV(pipe, cv=5, verbose=2, param_grid=param_grid)
with warnings.catch_warnings(record=False) as w:
# Ignore the convergence warnings because there are a lot of them.
warnings.simplefilter("ignore", ConvergenceWarning)
grid.fit(X_train, y_train)
grid.best_params_
grid.best_estimator_.steps[1][1].coef_
grid.best_score_
"""
Explanation: The RandomForestRegressorAsTransformer class needs a set_params method... No problem.
End of explanation
"""
grid.score(X_test, y_test)
"""
Explanation: We try it on the test set.
End of explanation
"""
coef = grid.best_estimator_.steps[1][1].coef_
coef.shape, sum(coef != 0)
"""
Explanation: And how many nonzero coefficients are there, exactly...
End of explanation
"""
|
okkhoy/pyDataAnalysis
|
ml-foundation/recommendation/Song recommender.ipynb
|
mit
|
import graphlab
"""
Explanation: Building a song recommender
Fire up GraphLab Create
End of explanation
"""
song_data = graphlab.SFrame('song_data.gl/')
"""
Explanation: Load music data
End of explanation
"""
song_data.head()
"""
Explanation: Explore data
Music data shows how many times a user listened to a song, as well as the details of the song.
End of explanation
"""
graphlab.canvas.set_target('ipynb')
song_data['song'].show()
len(song_data)
"""
Explanation: Showing the most popular songs in the dataset
End of explanation
"""
users = song_data['user_id'].unique()
len(users)
"""
Explanation: Count number of unique users in the dataset
End of explanation
"""
train_data,test_data = song_data.random_split(.8,seed=0)
"""
Explanation: Create a song recommender
End of explanation
"""
popularity_model = graphlab.popularity_recommender.create(train_data,
user_id='user_id',
item_id='song')
"""
Explanation: Simple popularity-based recommender
End of explanation
"""
popularity_model.recommend(users=[users[0]])
popularity_model.recommend(users=[users[1]])
"""
Explanation: Use the popularity model to make some predictions
A popularity model makes the same prediction for all users, so provides no personalization.
End of explanation
"""
personalized_model = graphlab.item_similarity_recommender.create(train_data,
user_id='user_id',
item_id='song')
"""
Explanation: Build a song recommender with personalization
We now create a model that allows us to make personalized recommendations to each user.
End of explanation
"""
personalized_model.recommend(users=[users[0]])
personalized_model.recommend(users=[users[1]])
"""
Explanation: Applying the personalized model to make song recommendations
As you can see, different users get different recommendations now.
End of explanation
"""
personalized_model.get_similar_items(['With Or Without You - U2'])
personalized_model.get_similar_items(['Chan Chan (Live) - Buena Vista Social Club'])
"""
Explanation: We can also apply the model to find similar songs to any song in the dataset
End of explanation
"""
if graphlab.version[:3] >= "1.6":
model_performance = graphlab.compare(test_data, [popularity_model, personalized_model], user_sample=0.05)
graphlab.show_comparison(model_performance,[popularity_model, personalized_model])
else:
%matplotlib inline
model_performance = graphlab.recommender.util.compare_models(test_data, [popularity_model, personalized_model], user_sample=.05)
"""
Explanation: Quantitative comparison between the models
We now formally compare the popularity and the personalized models using precision-recall curves.
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization
|
Regression/Assignment_four/week-4-ridge-regression-assignment-1-blank.ipynb
|
mit
|
import graphlab
"""
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
"""
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature**power
return poly_sframe
"""
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/kc_house_data.gl')
len(sales[0])
"""
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
"""
sales = sales.sort(['sqft_living','price'])
"""
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
"""
l2_small_penalty = 1e-5
"""
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
"""
poly1_data = polynomial_sframe(sales['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = sales['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
"""
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
End of explanation
"""
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
"""
Explanation: QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the data into split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use .random_split function and make sure you set seed=0.
End of explanation
"""
poly1_data = polynomial_sframe(set_1['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_1['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_2['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_2['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_3['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_3['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_4['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_4['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=l2_small_penalty)
model1.get("coefficients")
"""
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
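One possible way to make the plot (an added sketch, consistent with the data and model objects above but not prescribed by the assignment) is to overlay each model's predictions on the data, for example right after fitting on set_1:
```python
plt.plot(poly1_data['power_1'], poly1_data['price'], '.',
         poly1_data['power_1'], model1.predict(poly1_data), '-')
```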
End of explanation
"""
poly1_data = polynomial_sframe(set_1['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_1['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1e5)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_2['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_2['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1e5)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_3['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_3['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1e5)
model1.get("coefficients")
poly1_data = polynomial_sframe(set_4['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = set_4['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1e5)
model1.get("coefficients")
"""
Explanation: The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
"""
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
"""
Explanation: These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
End of explanation
"""
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
"""
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
"""
train_valid_shuffled[0:10] # rows 0 to 9
"""
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
"""
validation4 = train_valid_shuffled[5818:7758]  # segment 3 (rows 5818 to 7757)
"""
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
End of explanation
"""
print int(round(validation4['price'].mean(), 0))
"""
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
"""
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
"""
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
"""
train4=train_valid_shuffled[0:5818].append(train_valid_shuffled[7758:19396])
"""
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
End of explanation
"""
print int(round(train4['price'].mean(), 0))
"""
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
"""
def get_RSS(prediction, output):
residual = output - prediction
# square the residuals and add them up
RS = residual*residual
RSS = RS.sum()
return(RSS)
def k_fold_cross_validation(k, l2_penalty, data, features_list):
n = len(data)
RSS = 0
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
validation=data[start:end+1]
train=data[0:start].append(data[end+1:n])
model = graphlab.linear_regression.create(train, target='price', features = features_list, l2_penalty=l2_penalty,validation_set=None,verbose = False)
predictions=model.predict(validation)
A =get_RSS(predictions,validation['price'])
RSS = RSS + A
Val_err = RSS/k
return Val_err
"""
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
"""
import numpy as np
poly_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features = poly_data.column_names()
poly_data['price'] = train_valid_shuffled['price']
for l2_penalty in np.logspace(1, 7, num=13):
Val_err = k_fold_cross_validation(10, l2_penalty, poly_data, my_features)
print l2_penalty
print Val_err
"""
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
End of explanation
"""
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
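# One possible sketch (an added example): recompute the k-fold errors while storing them,
# rather than only printing them in the loop above, then plot them against the penalty.
l2_penalties = np.logspace(1, 7, num=13)
cv_errors = [k_fold_cross_validation(10, l2, poly_data, my_features) for l2 in l2_penalties]
plt.plot(l2_penalties, cv_errors, 'k.-')
plt.xscale('log')
plt.xlabel('l2_penalty')
plt.ylabel('10-fold cross-validation error')
plt.show()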
"""
Explanation: QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation
"""
poly1_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15) # use equivalent of `polynomial_sframe`
my_features = poly1_data.column_names()
poly1_data['price'] = train_valid_shuffled['price']
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = my_features, verbose = False, validation_set = None, l2_penalty=1000)
Val_err = k_fold_cross_validation(10, 1000, poly1_data, my_features)
print Val_err
"""
Explanation: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
End of explanation
"""
|
wzxiong/DAVIS-Machine-Learning
|
homeworks/HW2.ipynb
|
mit
|
import numpy as np
import pandas as pd
# dataset path
data_dir = "."
"""
Explanation: STA 208: Homework 2
This is based on the material in Chapters 3, 4.4 of 'Elements of Statistical Learning' (ESL), in addition to lectures 4-6. Chunzhe Zhang came up with the dataset and the analysis in the second section.
Instructions
We use a script that extracts your answers by looking for cells in between the cells containing the exercise statements (beginning with Exercise X.X). So you
MUST add cells in between the exercise statements and add answers within them and
MUST NOT modify the existing cells, particularly not the problem statement
To make markdown, please switch the cell type to markdown (from code) - you can hit 'm' when you are in command mode - and use the markdown language. For a brief tutorial see: https://daringfireball.net/projects/markdown/syntax
In the conceptual exercises you should provide an explanation, with math when necessary, for any answers. When answering with math you should use basic LaTeX, as in
$$E(Y|X=x) = \int_{\mathcal{Y}} f_{Y|X}(y|x) dy = \int_{\mathcal{Y}} \frac{f_{Y,X}(y,x)}{f_{X}(x)} dy$$
for displayed equations, and $R_{i,j} = 2^{-|i-j|}$ for inline equations. (To see the contents of this cell in markdown, double click on it or hit Enter in escape mode.) To see a list of latex math symbols see here: http://web.ift.uib.no/Teori/KURS/WRK/TeX/symALL.html
1. Conceptual Exercises
Exercise 1.1. (5 pts) Ex. 3.29 in ESL
Firstly, we set the variable $X$ to be the $N\times 1$ column vector $\begin{bmatrix}x_1 & \cdots & x_N\end{bmatrix}^T$. Then, according to the formula for ridge regression,
$$\hat\beta^{ridge}=[X^TX+\lambda I]^{-1}X^TY$$
$$\alpha = \frac{X^TY}{X^TX+\lambda}$$
Let $Z=X^TX$ (a scalar here), so $\alpha = \frac{X^TY}{Z+\lambda}$. Now include an exact copy $X^*=X$ to form the new design matrix $X_{new}=[X\; X]_{N \times 2}$, and refit the ridge regression:
$$\begin{align}
\hat\beta^{ridge}_{new} &= \left[ \begin{bmatrix}X^T\\ X^T \end{bmatrix}[X\; X]+\lambda I_2 \right]^{-1}\begin{bmatrix}X^T\\ X^T \end{bmatrix}Y\\
& = \left[ \begin{bmatrix} X^TX & X^TX\\ X^TX & X^TX\end{bmatrix}+\lambda I_2 \right]^{-1}\begin{bmatrix}X^TY\\ X^TY \end{bmatrix}\\
& = \begin{bmatrix} Z+\lambda & Z\\ Z & Z+\lambda \end{bmatrix}^{-1}\begin{bmatrix}X^TY\\ X^TY \end{bmatrix}\\
& = \frac{1}{(Z+\lambda)^2-Z^2}\begin{bmatrix} Z+\lambda & -Z\\ -Z & Z+\lambda \end{bmatrix}\begin{bmatrix}X^TY\\ X^TY \end{bmatrix}\\
& = \frac{1}{2\lambda Z+\lambda^2}\begin{bmatrix} Z+\lambda & -Z\\ -Z & Z+\lambda \end{bmatrix}\begin{bmatrix}X^TY\\ X^TY \end{bmatrix}\\
& = \frac{1}{2\lambda Z+\lambda^2}\begin{bmatrix}\lambda X^TY\\ \lambda X^TY \end{bmatrix}
\end{align}$$
Accordingly, both coefficients are identical and equal to $\frac{\lambda X^TY}{2\lambda Z+\lambda^2} = \frac{X^TY}{2Z+\lambda}$.
Now consider $m$ copies of the variable $X_j$, giving the new design matrix $X_{new}=[X_1,\ldots,X_i,\ldots,X_j,\ldots,X_m]_{n \times m}$ with $X_i=X_j$ for every $i,j \in \{1,2,\ldots,m\}$, $i\neq j$, and let the corresponding coefficients be $[\beta_1,\ldots,\beta_i,\ldots,\beta_j,\ldots,\beta_m]$. Because the columns are identical, exchanging $X_i$ and $X_j$ leaves the ridge objective unchanged, and since the $\ell_2$ penalty makes the objective strictly convex its minimizer is unique, so $[\beta_1,\ldots,\beta_i,\ldots,\beta_j,\ldots,\beta_m]=[\beta_1,\ldots,\beta_j,\ldots,\beta_i,\ldots,\beta_m]$, which means $\beta_i=\beta_j$. The same argument applies to any pair of copies. In general, therefore, if $m$ copies of a variable $X_j$ are included in a ridge regression, their coefficients are all the same.
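As a quick numerical sanity check (added here; it is not part of the original answer), a minimal sketch with scikit-learn's Ridge on simulated data shows the duplicated column receiving identical coefficients:
```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
x = rng.randn(50, 1)
y = 3 * x.ravel() + rng.randn(50)

X2 = np.hstack([x, x])  # two identical copies of the same column
ridge = Ridge(alpha=1.0, fit_intercept=False)
print(ridge.fit(x, y).coef_)   # single-copy coefficient
print(ridge.fit(X2, y).coef_)  # the two copies get equal coefficients, summing to roughly the value above
```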
Exercise 1.2 (5 pts) Ex. 3.30 in ESL
Firstly, let $X, y$ denote the elastic net data and $X', y'$ the lasso data, where we augment $X, y$ as
$$X'=\begin{pmatrix}X \\ \sqrt{\lambda}\, I_p \end{pmatrix}, \qquad y'=\begin{pmatrix}y \\ 0\end{pmatrix}$$
Then we apply the lasso criterion to the augmented data:
$$\begin{align}
\hat\beta&= \text{argmin}_{\beta}\left(\left\|y'-X'\beta\right\|^2+\gamma \left\| \beta \right\|_1 \right)\\
& = \text{argmin}_{\beta}\left((y'-X'\beta)^T(y'-X'\beta)+\gamma \left\| \beta \right\|_1\right)\\
& = \text{argmin}_{\beta}\left(y'^Ty'-2\beta^T X'^Ty'+\beta^T X'^TX'\beta+\gamma \left\| \beta \right\|_1\right)
\end{align}$$
In order to evaluate this we work out the three pieces separately:
$$y'^Ty'= \begin{pmatrix}y^T & 0\end{pmatrix}\begin{pmatrix}y \\ 0\end{pmatrix}=y^Ty$$
$$\beta^TX'^Ty' = \beta^T\begin{pmatrix}X^T & \sqrt{\lambda}\, I_p\end{pmatrix}\begin{pmatrix}y \\ 0\end{pmatrix}=\beta^TX^Ty$$
$$\begin{align}
\beta^TX'^TX'\beta& = \beta^T\begin{pmatrix}X^T & \sqrt{\lambda}\, I_p\end{pmatrix}\begin{pmatrix}X \\ \sqrt{\lambda}\, I_p\end{pmatrix}\beta\\
&=\beta^T(X^TX+\lambda I_p)\beta\\
&=\beta^TX^TX\beta+\lambda \beta^T\beta
\end{align}$$
Combining the three pieces, the objective becomes
$$\begin{align}
\hat\beta&= \text{argmin}_{\beta}\left(y'^Ty'-2\beta^T X'^Ty'+\beta^T X'^TX'\beta+\gamma \left\| \beta \right\|_1\right)\\
&=\text{argmin}_{\beta}\left((y-X\beta)^T(y-X\beta)+\lambda \left\| \beta \right\|_2^2+\gamma\left\| \beta \right\|_1\right)\\
&=\text{argmin}_{\beta}\left((y-X\beta)^T(y-X\beta)+\tilde{\lambda}\left(\alpha \left\| \beta \right\|_2^2+(1-\alpha)\left\| \beta \right\|_1\right)\right)
\end{align}$$
where $\lambda=\tilde{\lambda}\alpha$ and $\gamma = \tilde{\lambda}(1-\alpha)$. Accordingly, the elastic-net optimization problem can be turned into a lasso problem using an augmented version of $X$ and $y$.
Exercise 1.3 (5 pts) $Y \in \{0,1\}$ follows an exponential family model with natural parameter $\eta$ if
$$P(Y=y) = \exp\left( y \eta - \psi(\eta) \right).$$
Show that when $\eta = x^\top \beta$ then $Y$ follows a logistic regression model.
Since $Y \in \{0,1\}$, we can view this as a classification problem, and $Y\mid X$ is Bernoulli. The logistic model assumes that
$$\log\frac{\mathbb{P}(Y=1\mid X=x)}{\mathbb{P}(Y=0\mid X=x)}=x^T\beta$$
so we can use this as the criterion for whether the model is logistic. Using the condition $\eta = x^\top \beta$,
$$\mathbb{P}(Y=0\mid X=x)=\exp\left( 0\times x^T \beta - \psi(x^T \beta) \right)=\exp\left(- \psi(x^T \beta)\right)$$
$$\mathbb{P}(Y=1\mid X=x)=\exp\left( 1\times x^T \beta - \psi(x^T \beta) \right)=\exp\left(x^T \beta\right)\exp\left(- \psi(x^T \beta)\right)$$
Substituting these two into the log-odds,
$$\log\frac{\mathbb{P}(Y=1\mid X=x)}{\mathbb{P}(Y=0\mid X=x)}=\log\frac{\exp\left(x^T \beta\right)\exp\left(- \psi(x^T \beta)\right)}{\exp\left(- \psi(x^T \beta)\right)}=x^T\beta$$
Accordingly, we have shown that if $\eta = x^\top \beta$ then $Y$ follows a logistic regression model.
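Not required for the answer, but for completeness: since the two probabilities must sum to one,
$$e^{-\psi(\eta)} + e^{\eta-\psi(\eta)} = 1 \;\Longrightarrow\; \psi(\eta)=\log\left(1+e^{\eta}\right),$$
so that
$$\mathbb{P}(Y=1\mid X=x)=\frac{e^{x^T\beta}}{1+e^{x^T\beta}},$$
which is exactly the logistic (sigmoid) form of the regression function.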
2. Data Analysis
End of explanation
"""
sample_data = pd.read_csv(data_dir+"/hw2.csv", delimiter=',')
sample_data.head()
sample_data.V1 = sample_data.V1.eq('Yes').mul(1)
"""
Explanation: Load the following medical dataset with 750 patients. The response variable is survival dates (Y), the predictors are 104 measurements measured at a specific time (numerical variables have been standardized).
End of explanation
"""
X = np.array(sample_data.iloc[:,range(2,104)])
y = np.array(sample_data.iloc[:,0])
z = np.array(sample_data.iloc[:,1])
"""
Explanation: The response variable is Y for 2.1-2.3 and Z for 2.4.
End of explanation
"""
from sklearn.preprocessing import scale
from sklearn.linear_model import LinearRegression, Ridge,lars_path, RidgeCV, Lasso, LassoCV,lasso_path
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
alphas = 10**np.linspace(10,-2,100)*0.5
ridge = Ridge()
coefs = []
for a in alphas:
ridge.set_params(alpha=a)
ridge.fit(scale(X), y)
coefs.append(ridge.coef_)
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis
plt.axis('tight')
plt.xlabel('lambda')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization')
plt.show()
alphas = np.linspace(100,0.05,1000)
rcv = RidgeCV(alphas = alphas, store_cv_values=True,normalize=False)
# RidgeCV performs Generalized Cross-Validation, which is a form of efficient Leave-One-Out cross-validation
rcv.fit(X,y)
cv_vals = rcv.cv_values_
LOOr = cv_vals.mean(axis=0)
plt.plot(alphas,LOOr)
plt.xlabel('lambda')
plt.ylabel('Risk')
plt.title('LOO Risk for Ridge');
plt.show()
LOOr[-1]
r10cv = RidgeCV(alphas = alphas, cv = 10,normalize=False)
r10cv.fit(X,y)
r10cv.alpha_
"""
Explanation: Exercise 2.1 (10 pts) Perform ridge regression on the method and cross-validate to find the best ridge parameter.
The plot shows that, using the leave-one-out method, the lower the lambda the lower the risk; the minimum occurs at the lower boundary of the grid I used. Accordingly, ridge regression does not perform better than OLS for this dataset. However, 10-fold cross-validation gives a best parameter of 3.45, and the lasso gives 21.19 with 10-fold.
End of explanation
"""
lasso = lasso_path(X,y)
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
lasso = lasso_path(X,y)
fig, ax = plt.subplots(figsize=(10,7))
for j in range(102):
ax.plot(lasso[0],lasso[1][j,:],'r',linewidth=.5)
plt.title('Lasso path for simulated data')
plt.xlabel('lambda')
plt.ylabel('Coef')
axins = zoomed_inset_axes(ax, 2.8, loc=1)
for j in range(102):
axins.plot(lasso[0],lasso[1][j,:],'r')
x1, x2, y1, y2 = 0, 250, -250, 250 # specify the limits
axins.set_xlim(x1, x2) # apply the x-limits
axins.set_ylim(y1, y2) # apply the y-limits
plt.yticks(visible=False)
plt.xticks(visible=False)
mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5")
plt.show()
lar = lars_path(X,y,method="lar")
fig, ax = plt.subplots(figsize=(10,7))
for j in range(102):
ax.plot(lar[0],lar[2][j,:],'r',linewidth=.5)
plt.title('Lar path for simulated data')
plt.xlabel('lambda')
plt.ylabel('Coef')
axins = zoomed_inset_axes(ax, 2.8, loc=1)
for j in range(102):
axins.plot(lar[0],lar[2][j,:],'r')
x1, x2, y1, y2 = 0, 250, -250, 250 # specify the limits
axins.set_xlim(x1, x2) # apply the x-limits
axins.set_ylim(y1, y2) # apply the y-limits
plt.yticks(visible=False)
plt.xticks(visible=False)
mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5")
plt.show()
leave_para=[]
coeff = []
for i in range(len(lar[2])):
z,x=-2,-2
for j in lar[2][i]:
if j > 0:
z=1
if j<0:
x=-1
if z+x==0:
leave_para.append('V'+str(i+2))
coeff.append(lar[2][i])
print set(leave_para)
print coeff[0]
leave_para=[]
coeff = []
for i in range(len(lasso[1])):
z,x=-2,-2
for j in lasso[1][i]:
if j > 0:
z=1
if j<0:
x=-1
if z+x==0:
leave_para.append('V'+str(i+2))
coeff.append(lasso[1][i])
print set(leave_para)
print coeff
"""
Explanation: Exercise 2.2 (10 pts) Plot the lasso and lars path for each of the coefficients. All coefficients for a given method should be on the same plot, you should get 2 plots. What are the major differences, if any? Are there any 'leaving' events in the lasso path?
We find that most of the coefficients go to zero quickly, so they are not important, and there is no apparent difference between the lasso and LARS paths for the coefficients. In addition, no leaving events occur in the lasso path, but a leaving event occurs at variable 'V35' in the LAR path.
End of explanation
"""
from sklearn.model_selection import LeaveOneOut
loo = LeaveOneOut()
looiter = loo.split(X)
hitlasso = LassoCV(cv=looiter)
hitlasso.fit(X,y)
print("The selected lambda value is {:.2f}".format(hitlasso.alpha_))
hitlasso.coef_
np.mean(hitlasso.mse_path_[hitlasso.alphas_ == hitlasso.alpha_])
la10 = LassoCV(cv=10)
la10.fit(X,y)
la10.alpha_
"""
Explanation: Exercise 2.3 (10 pts) Cross-validate the Lasso and compare the results to the answer to 2.1.
With the leave-one-out method, the best lambda is 19.77 and the MSE is 344419. This performs better than ridge regression, which has a risk of 365349. With 10-fold cross-validation, ridge gives a best parameter of 3.45 and the lasso gives 21.19.
End of explanation
"""
filted_index = np.where(hitlasso.coef_ != 0)[0]
filted_X = X[:,filted_index]
filted_X.shape
from sklearn.linear_model import LogisticRegression
logis = LogisticRegression()
logis.fit(filted_X, z)
logis.coef_
logis.score(filted_X, z)
print('active set')
active = ['V'+str(i+2) for i in filted_index]
active
"""
Explanation: Exercise 2.4 (15 pts) Obtain the 'best' active set from 2.3, and create a new design matrix with only these variables. Use this to predict the categorical variable $z$ with logistic regression.
Using the active set from 2.3, I predict the categorical variable z with logistic regression; the accuracy is 93 percent.
End of explanation
"""
|
laserson/phip-stat
|
notebooks/phip_modeling/phip-kinetic-computations.ipynb
|
apache-2.0
|
df = pd.read_csv('/Users/laserson/lasersonlab/larman/libraries/T7-Pep_InputCountsComplete46M.csv', header=None, index_col=0)
counts = df.values.ravel()
sns.distplot(counts)
"""
Explanation: PhIP-Seq kinetics computations
Reaction summary
IP reaction (1 mL)
* IgG
* MW of IgG = 150 kDa
* 2 µg IgG = 13.3 pmol = 8.03e12 molecules
* 13.3 nM in the reaction
* Phage
* 100k particles per clone on average
* Add ~1e10 total particles per mL reaction
* 5k - 50k of each clone per reaction
* Equiv to per clone concentration of 0.0083 fM to 0.083 fM
* Protein A/Protein G Beads
* 40 µL total => 1.2 mg beads => capture 9.6 µg Ab according to manual
* Should capture all Ab in reaction so will ignore in calculation
* Kd maybe ~10 nM
Ab in reaction
Kd = [Ab] [L] / [AbL]
Inputs:
Desired Kd ability to resolve
Total Ab and L (e.g., [Ab] + [AbL])
requires overwhelming Protein A/G binding sites?
Input library
End of explanation
"""
iles = (counts.min(), sp.stats.scoreatpercentile(counts, 10), sp.stats.scoreatpercentile(counts, 50), sp.stats.scoreatpercentile(counts, 90), counts.max())
iles
cov = sum(counts)
cov
"""
Explanation: (min, 10%ile, 50%ile, 90%ile, max)
End of explanation
"""
tuple([float(val) / cov for val in iles])
counts.mean(), counts.std()
(18. / cov) * 1e10
(229. / cov) * 1e10
(counts > 0).sum()
counts.shape
def equil_conc(total_antibody, total_phage, Kd):
    # Solve Kd = [Ab][P] / [AbP] together with the mass balances
    # total_antibody = [Ab] + [AbP] and total_phage = [P] + [AbP];
    # the bound complex [AbP] is the smaller root of the resulting quadratic.
    s = total_antibody + total_phage + Kd
    bound = 0.5 * (s - np.sqrt(s * s - 4 * total_antibody * total_phage))
    equil_antibody = total_antibody - bound
    equil_phage = total_phage - bound
    return (equil_antibody, equil_phage, bound)
equil_conc(13e-15, 8.302889405513118e-17, 1e-9)
np.logspace?
antibody_concentrations = np.logspace(-15, -3, num=25)
phage_concentrations = np.logspace(-18, -12, num=13)
antibody_labels = ['{:.1e}'.format(c) for c in antibody_concentrations]
phage_labels = ['{:.1e}'.format(c) for c in phage_concentrations]
Kd = 1e-8
frac_antibody_bound = np.zeros((len(antibody_concentrations), len(phage_concentrations)))
frac_phage_bound = np.zeros((len(antibody_concentrations), len(phage_concentrations)))
for (i, a) in enumerate(antibody_concentrations):
for (j, p) in enumerate(phage_concentrations):
bound = equil_conc(a, p, Kd)[2]
frac_antibody_bound[i, j] = bound / a
frac_phage_bound[i, j] = bound / p
fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot(121)
sns.heatmap(frac_antibody_bound, xticklabels=phage_labels, yticklabels=antibody_labels, square=True, ax=ax)
ax.set_title('Fraction Antibody Bound')
ax.set_ylabel('total antibody clone conc')
ax.set_xlabel('total phage clone conc')
ax = fig.add_subplot(122)
sns.heatmap(frac_phage_bound, xticklabels=phage_labels, yticklabels=antibody_labels, square=True, ax=ax)
ax.set_title('Fraction Phage Bound')
ax.set_ylabel('total antibody clone conc')
ax.set_xlabel('total phage clone conc')
"""
Explanation: And the same values as frequencies
End of explanation
"""
antibody_concentrations = np.logspace(-15, -3, num=25)
Kds = np.logspace(-15, -6, num=19)
antibody_labels = ['{:.1e}'.format(c) for c in antibody_concentrations]
Kd_labels = ['{:.1e}'.format(c) for c in Kds]
phage_concentration = 2e-15
frac_antibody_bound = np.zeros((len(antibody_concentrations), len(Kds)))
frac_phage_bound = np.zeros((len(antibody_concentrations), len(Kds)))
for (i, a) in enumerate(antibody_concentrations):
for (j, Kd) in enumerate(Kds):
bound = equil_conc(a, phage_concentration, Kd)[2]
frac_antibody_bound[i, j] = bound / a
frac_phage_bound[i, j] = bound / phage_concentration
fig = plt.figure(figsize=(9, 9))
# ax = fig.add_subplot(121)
# sns.heatmap(frac_antibody_bound, xticklabels=Kd_labels, yticklabels=antibody_labels, square=True, ax=ax)
# ax.set_title('Fraction Antibody Bound')
# ax.set_ylabel('total antibody clone conc')
# ax.set_xlabel('Kd')
ax = fig.add_subplot(111)
sns.heatmap(frac_phage_bound, xticklabels=Kd_labels, yticklabels=antibody_labels, square=True, ax=ax)
ax.set_title('Fraction Phage Bound')
ax.set_ylabel('total antibody clone conc')
ax.set_xlabel('Kd')
"""
Explanation: It's most important to ensure we get maximal phage capture, and this seems to be independent of the total phage concentration. Let's instead explore the fraction phage bound as a function of the antibody concentration and Kd
End of explanation
"""
|
anachlas/w210_vendor_recommendor
|
vendor recommender - EDA.ipynb
|
gpl-3.0
|
import google.datalab.bigquery as bq
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn import cross_validation as cv
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.metrics import mean_squared_error
from math import sqrt
"""
Explanation: Vendor Recommender - EDA
@olibolly
Open TO-DO
Link the notebook with github using ungit - DONE
Provide access to the project if we go for Big query - DONE
Re-pull EDA using updated 2016-2017 data - DONE
Further EDA on collaborative filtering - DONE
Run first regression to understand what features matter - DONE
Join tables FAPIIS and USA spending
Useful links
https://github.com/antontarasenko/gpq
Dataset
USASpending.gov available on BigQuery dataset (17 years of data, 45mn transactions, $6.7tn worth of goods and services): gpqueries:contracts
Past Performance Information Retrieval System (PPIRS) -> review - not public data
System for Award Management (SAM)
FAPIIS
Are there any other datasets we should be considering?
Table gpqueries:contracts.raw
Table gpqueries:contracts.raw contains the unmodified data from the USASpending.gov archives. It's constructed from <year>_All_Contracts_Full_20160515.csv.zip files and includes contracts from 2000 to May 15, 2016.
Table gpqueries:contracts.raw contains 45M rows and 225 columns.
Each row refers to a transaction (a purchase or refund) made by a federal agency. It may be a pizza or an airplane.
The columns are grouped into categories:
Transaction: unique_transaction_id-baseandalloptionsvalue
Buyer (government agency): maj_agency_cat-fundedbyforeignentity
Dates: signeddate-lastdatetoorder, last_modified_date
Contract: contractactiontype-programacronym
Contractor (supplier, vendor): vendorname-statecode
Place of performance: PlaceofPerformanceCity-placeofperformancecongressionaldistrict
Product or service bought: psc_cat-manufacturingorganizationtype
General contract information: agencyid-idvmodificationnumber
Competitive procedure: solicitationid-statutoryexceptiontofairopportunity
Contractor details: organizationaltype-otherstatutoryauthority
Contractor's executives: prime_awardee_executive1-interagencycontractingauthority
Detailed description for each variable is available in the official codebook:
USAspending.govDownloadsDataDictionary.pdf
End of explanation
"""
%%sql
select * from [fiery-set-171213:vrec.sam_exclusions] limit 5
%%sql
select Exclusion_Type from [fiery-set-171213:vrec.sam_exclusions] group by 1;
%%sql
select Classification from [fiery-set-171213:vrec.sam_exclusions] group by 1;
%%sql
select
count(*)
from [fiery-set-171213:vrec.sam_exclusions]
where Classification in ('Firm')
;
"""
Explanation: SAM (System for Award Management) - exclusions
https://www.sam.gov/sam/transcript/SAM_Exclusions_Public_Extract_Layout.pdf
End of explanation
"""
%%bq query -n df_query
select
EXTRACT(YEAR FROM Active_Date) as year,
count(*) as count
from `fiery-set-171213.vrec.sam_exclusions`
where Classification in ('Firm')
and Active_Date is not NULL
group by 1
order by 1;
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.plot(kind='bar', x='year', title='Excluded Firms per year', figsize=(15,8))
ax.set_xlabel('Year')
ax.set_ylabel('count')
%%bq query
select
#Name,
SAM_Number,
count(*) as count
from `fiery-set-171213.vrec.sam_exclusions`
where Classification in ('Firm')
#and Active_Date is not NULL
group by 1
order by 2 DESC
limit 5;
%%bq query
select
NPI,
count(*) as count
from `fiery-set-171213.vrec.sam_exclusions`
where Classification in ('Firm')
#and CAGE is not NULL
group by 1
order by 2 DESC
limit 5;
%%bq query
select
CAGE,
count(*) as count
from `fiery-set-171213.vrec.sam_exclusions`
where Classification in ('Firm')
#and CAGE is not NULL
group by 1
order by 2 DESC
limit 5;
"""
Explanation: There are 8,659 firms on the SAM exclusion list
End of explanation
"""
%%bq query
select *
from `fiery-set-171213.vrec.fapiis`
limit 5
%%bq query -n df_query
select
EXTRACT(YEAR FROM RECORD_DATE) as year,
count(*) as count
from `fiery-set-171213.vrec.fapiis`
group by 1
order by 1;
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.plot(kind='bar', x='year', title='Firms by Record date', figsize=(10,5))
ax.set_xlabel('Year')
ax.set_ylabel('count')
%%bq query -n df_query
select
EXTRACT(YEAR FROM TERMINATION_DATE) as year,
count(*) as count
from `fiery-set-171213.vrec.fapiis`
group by 1
order by 1;
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.plot(kind='bar', x='year', title='Firms by termination date', figsize=(10,5))
ax.set_xlabel('Year')
ax.set_ylabel('count')
%%bq query
select
AWARDEE_NAME,
DUNS,
count(*) as count
from `fiery-set-171213.vrec.fapiis`
group by 1,2
order by 3 DESC
limit 5;
%%bq query
select
*
from `fiery-set-171213.vrec.fapiis`
where AWARDEE_NAME in ('ALPHA RAPID ENGINEERING SOLUTIONS')
limit 5;
%%bq query
select
RECORD_TYPE,
count(*) as count
from `fiery-set-171213.vrec.fapiis`
group by 1
order by 2 DESC
"""
Explanation: NPI and CAGE don't seem to be great keys to join the data - ideally we can use SAM
Federal Awardee Performance and Integrity Information System (FAPIIS)
This is the contractor's fault - you can still do business with these contractors, whereas with those on the SAM exclusion list one cannot
Only 5 years by design
End of explanation
"""
%%bq query -n df_query
select count(*) as transactions
from `fiery-set-171213.vrec.usa_spending_all`
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
%%bq query
select *
from `fiery-set-171213.vrec.usa_spending_all`
where mod_agency in ('1700: DEPT OF THE NAVY')
limit 5
%%bq query -n df_query
select
#substr(signeddate, 1, 2) month,
fiscal_year as year,
count(*) transactions,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
group by year
order by year asc
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.set_index('year')['dollarsobligated'].plot(kind='bar', title='Government purchases by years')
ax.set_ylabel('dollars obligated')
%%bq query -n df_query
select
fiscal_year as year,
sum(dollarsobligated)/count(*) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
group by year
order by year asc
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
ax = df.set_index('year')['dollarsobligated'].plot(kind='bar', title='avg. transaction size by years')
ax.set_ylabel('dollars obligated')
"""
Explanation: FAPIIS is not bad, with 3,002 DUNS codes, but its time range only covers 2012 to 2017
USA Spending
Link to collaborative filtering
https://docs.google.com/presentation/d/1x5g-wIoSUGRSwDqHC6MhZBZD5d2LQ19WKFlRneN2TyU/edit#slide=id.p121
https://www.usaspending.gov/DownloadCenter/Documents/USAspending.govDownloadsDataDictionary.pdf
End of explanation
"""
%%bq query
select
maj_agency_cat,
mod_agency,
count(*)
from `fiery-set-171213.vrec.usa_spending_all`
group by 1,2
order by 3 DESC
limit 20
%%bq query
select
mod_parent,
vendorname,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
group by 1,2
order by 3 DESC
limit 20
"""
Explanation: This means we're dealing with 49.5M transactions totalling 6.7 trillion dollars. These purchases came from 622k vendors that won 2.2mn solicitations issued by government agencies.
End of explanation
"""
%%bq query
select
productorservicecode,
systemequipmentcode,
claimantprogramcode,
principalnaicscode,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where vendorname in ('LOCKHEED MARTIN CORPORATION')
group by 1,2,3,4
order by 5 DESC
limit 20
%%bq query
select
#mod_parent,
vendorname,
systemequipmentcode,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where productorservicecode in ('1510: AIRCRAFT, FIXED WING')
group by 1,2
order by 3 DESC
limit 20
%%bq query
select
vendorname,
systemequipmentcode,
claimantprogramcode,
principalnaicscode,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where productorservicecode in ('1510: AIRCRAFT, FIXED WING')
and contractingofficerbusinesssizedetermination in ('S: SMALL BUSINESS')
group by 1,2,3,4
order by dollarsobligated DESC
limit 20
%%bq query
select
*
from `gpqueries.contracts.raw`
where productorservicecode in ('1510: AIRCRAFT, FIXED WING')
and contractingofficerbusinesssizedetermination in ('S: SMALL BUSINESS')
limit 1
%%bq query
select
claimantprogramcode,
principalnaicscode,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
group by 1,2
order by dollarsobligated DESC
limit 10
"""
Explanation: Understanding where the budget is spent
End of explanation
"""
%%bq query -n df_query
select
fiscal_year,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
group by 1
order by 1
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
plt = df.set_index('fiscal_year')['dollarsobligated'].plot(kind='bar', title='transactions amount for SMBs')
%%bq query -n df_query
#%%sql
select
smb.fiscal_year,
sum(smb.transaction) as smb,
sum(total.transaction) as total,
sum(smb.transaction)/sum(total.transaction) as percentage
from
(select
fiscal_year,
sum(dollarsobligated) as transaction
from `fiery-set-171213.vrec.usa_spending_all`
where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
group by 1) as smb
join
(select
fiscal_year,
sum(dollarsobligated) as transaction
from `fiery-set-171213.vrec.usa_spending_all`
group by 1) as total
on smb.fiscal_year = total.fiscal_year
group by 1
order by 1
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
plt = df.set_index('fiscal_year')['percentage'].plot(kind='bar', title='dollars % for SMBs')
"""
Explanation: Looking at SMBs by year
End of explanation
"""
%%bq query
select
smb.principalnaicscode as principalnaicscode,
sum(total.count) as count,
sum(smb.dollarsobligated) as dollarsobligated_smb,
sum(total.dollarsobligated) as dollarsobligated_total,
sum(smb.dollarsobligated)/sum(total.dollarsobligated) as smb_percentage
from
(select
principalnaicscode,
count(*) as count,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
group by 1) as smb
join
(select
principalnaicscode,
count(*) as count,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
group by 1
having dollarsobligated > 0
) as total
on smb.principalnaicscode = total.principalnaicscode
group by 1
order by 5 DESC
limit 10
"""
Explanation: SMB contracts by government agency & by NAICS code
End of explanation
"""
%%bq query -n df_query
select
maj_agency_cat,
#mod_agency,
#contractactiontype,
#typeofcontractpricing,
#performancebasedservicecontract,
state,
#vendorcountrycode,
#principalnaicscode,
contractingofficerbusinesssizedetermination,
#sum(dollarsobligated) as dollarsobligated
dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where vendorcountrycode in ('UNITED STATES', 'USA: UNITED STATES OF AMERICA')
and contractingofficerbusinesssizedetermination in ('O: OTHER THAN SMALL BUSINESS', 'S: SMALL BUSINESS')
and dollarsobligated > 0
#group by 1,2,3
limit 20000
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
# Create dummy variable using pandas function get_dummies
df1 = df.join(pd.get_dummies(df['maj_agency_cat']))
df1 = df1.join(pd.get_dummies(df['state']))
df1 = df1.join(pd.get_dummies(df['contractingofficerbusinesssizedetermination']))
df1 = df1.drop('maj_agency_cat', axis = 1)
df1 = df1.drop('state', axis = 1)
df1 = df1.drop('contractingofficerbusinesssizedetermination', axis = 1)
df1.head()
train_data = df1.iloc[:,1:]
train_labels = df[['dollarsobligated']]
lm = LinearRegression()
lm.fit(train_data, train_labels)
# The coefficients
print('Coefficients: \n', lm.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((lm.predict(train_data) - train_labels) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % lm.score(train_data, train_labels))
"""
Explanation: Simple Linear Regression (LR)
LR: predict the size of the contract
There are many categorical features -> they need to be binarized -> this creates a very sparse matrix -> poor performance for LR
R-squared of only 2%
Not ideal for the problem we are tackling here
End of explanation
"""
%%bq query -n df_query
select
vendorname,
maj_agency_cat,
state,
contractingofficerbusinesssizedetermination,
count(*) as count,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
where vendorcountrycode in ('UNITED STATES', 'USA: UNITED STATES OF AMERICA')
and contractingofficerbusinesssizedetermination in ('O: OTHER THAN SMALL BUSINESS', 'S: SMALL BUSINESS')
and dollarsobligated > 0
group by 1,2,3,4
limit 20000
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
#Create dummy variable using pandas function get_dummies
df1 = df.join(pd.get_dummies(df['maj_agency_cat']))
df1 = df1.join(pd.get_dummies(df['state']))
df1 = df1.join(pd.get_dummies(df['contractingofficerbusinesssizedetermination']))
df1 = df1.drop('maj_agency_cat', axis = 1)
df1 = df1.drop('state', axis = 1)
df1 = df1.drop('contractingofficerbusinesssizedetermination', axis = 1)
df1 = df1.drop('vendorname', axis = 1)
df1 = df1.drop('dollarsobligated', axis = 1)
train_data = df1.iloc[:,1:]
train_labels = df[['count']]
lm = LinearRegression()
lm.fit(train_data, train_labels)
# The coefficients
print('Coefficients: \n', lm.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((lm.predict(train_data) - train_labels) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % lm.score(train_data, train_labels))
"""
Explanation: LR: Predict the number of contracts (popularity)
Same issue as previously
R-squared of only 1%
Not ideal for the problem we are tackling here
End of explanation
"""
%%bq query
select
#principalnaicscode,
fiscal_year,
maj_agency_cat,
#contractingofficerbusinesssizedetermination,
#vendorname,
productorservicecode,
count(*) as count,
sum(dollarsobligated) as dollarsobligated
from `fiery-set-171213.vrec.usa_spending_all`
#where contractingofficerbusinesssizedetermination in ("S: SMALL BUSINESS")
#where regexp_contains(principalnaicscode, "CONSTRUCTION")
#and regexp_contains(maj_agency_cat, "AGRICULTURE")
where regexp_contains(productorservicecode, "MEAT")
#and fiscal_year = 2016
group by 1,2,3
order by dollarsobligated DESC
limit 10
"""
Explanation: MVP
MVP 1 - The most popular vendor
Search query = 'construction'
Enter your department name - e.g. 'agriculture'
Ranking is based on the count of contracts that occurred
TO-DO: make the REGEX case-insensitive (handle uppercase and lowercase)
Do we want to add more parameters, such as Geo, size of the contract? To be discussed
End of explanation
"""
%%bq query -n df_query
select
contractingofficerbusinesssizedetermination,
mod_agency,
vendorname,
count(*) as count
from `fiery-set-171213.vrec.usa_spending_all`
where vendorcountrycode in ('UNITED STATES', 'USA: UNITED STATES OF AMERICA')
and contractingofficerbusinesssizedetermination in ('O: OTHER THAN SMALL BUSINESS', 'S: SMALL BUSINESS')
and mod_agency not in ("")
group by 1,2,3
order by count DESC
limit 20000
df = df_query.execute(output_options=bq.QueryOutput.dataframe()).result()
df.head()
df1 = df.drop('contractingofficerbusinesssizedetermination', axis = 1)
n_agency = df1.mod_agency.unique().shape[0]
n_vendors = df1.vendorname.unique().shape[0]
print('Number of gov agencies = ' + str(n_agency) + ' | Number of vendors = ' + str(n_vendors))
# Convert categorial value with label encoding
le_agency = LabelEncoder()
label_agency = le_agency.fit_transform(df1['mod_agency'])
le_vendor = LabelEncoder()
label_vendor = le_vendor.fit_transform(df1['vendorname'])
df_agency = pd.DataFrame(label_agency)
df_vendor = pd.DataFrame(label_vendor)
df2 = pd.concat([df_agency, df_vendor], axis = 1)
df2 = pd.concat([df2, df1['count']], axis = 1)
df2.columns = ['mod_agency', 'vendorname', 'count']
df2.head(5)
# To ge the right label back
# le_agency.inverse_transform([173, 100])
# Split into training and test data set
train_data, test_data = cv.train_test_split(df2, test_size=0.25)
#Build the matrix
train_data_matrix = np.zeros((n_agency, n_vendors))
for line in train_data.itertuples():
train_data_matrix[line[1]-1, line[2]-1] = line[3]
test_data_matrix = np.zeros((n_agency, n_vendors))
for line in test_data.itertuples():
test_data_matrix[line[1]-1, line[2]-1] = line[3]
#Compute cosine distance
user_similarity = pairwise_distances(train_data_matrix, metric='cosine')
item_similarity = pairwise_distances(train_data_matrix.T, metric='cosine')
def predict(ratings, similarity, type='user'):
if type == 'user':
mean_user_rating = ratings.mean(axis=1)
#You use np.newaxis so that mean_user_rating has same format as ratings
ratings_diff = (ratings - mean_user_rating[:, np.newaxis])
pred = mean_user_rating[:, np.newaxis] + similarity.dot(ratings_diff) / np.array([np.abs(similarity).sum(axis=1)]).T
elif type == 'item':
pred = ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
return pred
item_prediction = predict(train_data_matrix, item_similarity, type='item')
user_prediction = predict(train_data_matrix, user_similarity, type='user')
# Evaluation
def rmse(prediction, ground_truth):
prediction = prediction[ground_truth.nonzero()].flatten()
ground_truth = ground_truth[ground_truth.nonzero()].flatten() #filter out all items with no 0 as we only want to predict in the test set
return sqrt(mean_squared_error(prediction, ground_truth))
print('User-based CF RMSE: ' + str(rmse(user_prediction, test_data_matrix)))
print('Item-based CF RMSE: ' + str(rmse(item_prediction, test_data_matrix)))
"""
Explanation: MVP 2 - Collaborative filtering
If person A likes items 1, 2, 3 and person B likes items 2, 3, 4, then they have similar interests, so A should like item 4 and B should like item 1.
Looking at the match between gov mod_agency (275) & vendors (770526)
See: https://cambridgespark.com/content/tutorials/implementing-your-own-recommender-systems-in-Python/index.html
TO DO - TRAINING (1.9M rows): the kernel crashed above 20K rows -> need to Map/Reduce, use a higher-performance machine, or use another algorithm (matrix factorization; a sketch follows this explanation)?
TO DO - Think about scaling or binarizing the count data -> to improve results
TO DO - Look at match between product service code (5833) & vendors (770526)
TO DO - Add Geo filter?
TO DO - Already done business with a company?
End of explanation
"""
print('Workflow 1')
print('=' * 100)
print('Select your agency:')
agency = df1['mod_agency'][10]
print(agency)
print('=' * 100)
print('1. Have you considered working with these SMB companies (user prediction)?')
agency = le_agency.transform([agency])[0]  # transform expects an array-like; take the single encoded value
vendor_reco = pd.DataFrame(user_prediction[agency, :])
labels = pd.DataFrame(le_vendor.inverse_transform(range(0, len(vendor_reco))))
df_reco = pd.concat([vendor_reco, labels], axis = 1)
df_reco.columns = ['reco_score', 'vendorname']
#Join to get the SMB list
df_smb = df.drop(['mod_agency', 'count'], axis = 1)
df_reco = df_reco.set_index('vendorname').join(df_smb.set_index('vendorname'))
df_reco = df_reco.sort_values(['reco_score'], ascending = [0])
df_reco[df_reco['contractingofficerbusinesssizedetermination'] == 'S: SMALL BUSINESS'].head(10)
"""
Explanation: Workflow 1
<br>
a. Collaborative Filtering - user-item prediction
End of explanation
"""
print('=' * 100)
print('2. Have you considered working with these SMB companies (item-item prediction)?')
vendor_reco = pd.DataFrame(item_prediction[agency, :])
df_reco = pd.concat([vendor_reco, labels], axis = 1)
df_reco.columns = ['reco_score', 'vendorname']
df_reco = df_reco.set_index('vendorname').join(df_smb.set_index('vendorname'))
df_reco = df_reco.sort_values(['reco_score'], ascending = [0])
df_reco[df_reco['contractingofficerbusinesssizedetermination'] == 'S: SMALL BUSINESS'].head(10)
print('Workflow 2')
print('=' * 100)
print('Select a vendor:')
# Workflow 2 - WIP
# Select a vendor
# Other similar vendor
"""
Explanation: b. Collaborative Filtering - item-item prediction
End of explanation
"""
%%sql
select
substr(productorservicecode, 1, 4) product_id,
first(substr(productorservicecode, 7)) product_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
product_id
order by
sum_dollarsobligated desc
limit 10
"""
Explanation: OTHERS - FROM TUTORIAL - Anton Tarasenko
Data Mining Government Clients
Suppose you want to start selling to the government. While FBO.gov publishes government RFPs and you can apply there, government agencies often issue requests when they've already chosen the supplier. Agencies go through FBO.gov because it's a mandatory step for deals north of $25K. But winning at this stage is unlikely if an RFP is already tailored for another supplier.
Reaching warm leads in advance would increase chances of winning a government contract. The contracts data helps identify the warm leads by looking at purchases in the previous years.
There're several ways of searching through those years.
Who Buys What You Make
The goods and services bought in each transaction are encoded in the variable productorservicecode. Top ten product categories according to this variable:
End of explanation
"""
%%sql
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
where
productorservicecode contains 'software'
group by
agency_id
order by
sum_dollarsobligated desc
ignore case
"""
Explanation: You can find agencies that buy products like yours. If it's "software":
End of explanation
"""
%%sql
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
substr(principalnaicscode, 1, 6) naics_id,
first(substr(principalnaicscode, 9)) naics_name,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
where
principalnaicscode contains 'software' and
fiscal_year = 2015
group by
agency_id, naics_id
order by
sum_dollarsobligated desc
ignore case
"""
Explanation: What Firms in Your Industry Sell to the Government
Another way to find customers is the variable principalnaicscode that encodes the industry in which the vendor does business.
The list of NAICS codes is available at Census.gov, but you can do text search in the table. Let's find who bought software from distributors in 2015:
End of explanation
"""
%%sql
select
fiscal_year,
dollarsobligated,
vendorname, city, state, annualrevenue, numberofemployees,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
agencyid contains 'transportation security administration' and
principalnaicscode contains 'computer and software stores'
ignore case
"""
Explanation: Inspecting Specific Transactions
You can learn details from looking at transactions for a specific (agency, NAICS) pair. For example, what software does TSA buy?
End of explanation
"""
%%sql
select
agencyid,
dollarsobligated,
vendorname,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
vendorname contains 'tableau' or
vendorname contains 'socrata' or
vendorname contains 'palantir' or
vendorname contains 'revolution analytics' or
vendorname contains 'mathworks' or
vendorname contains 'statacorp' or
vendorname contains 'mathworks'
order by
dollarsobligated desc
limit
100
ignore case
"""
Explanation: Alternatively, specify vendors your product relates to and check how the government uses it. Top deals in data analytics:
End of explanation
"""
%%sql
select
agencyid,
dollarsobligated,
descriptionofcontractrequirement
from
gpqueries:contracts.raw
where
descriptionofcontractrequirement contains 'body camera'
limit
100
ignore case
"""
Explanation: Searching Through Descriptions
Full-text search and regular expressions for the variable descriptionofcontractrequirement narrow results for relevant product groups:
End of explanation
"""
%%sql
select
substr(pop_state_code, 1, 2) state_code,
first(substr(pop_state_code, 4)) state_name,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
state_code
order by
sum_dollarsobligated desc
"""
Explanation: Some rows of descriptionofcontractrequirement contain codes like "IGF::CT::IGF". These codes classify the purchase into three groups of "Inherently Governmental Functions" (IGF):
IGF::CT::IGF for Critical Functions
IGF::CL::IGF for Closely Associated
IGF::OT::IGF for Other Functions
Narrowing Your Geography
You can find local opportunities using variables for vendors (city, state) and services sold (PlaceofPerformanceCity, pop_state_code). The states where most contracts are delivered in:
End of explanation
"""
%%sql --module gpq
define query vendor_size_by_agency
select
substr(agencyid, 1, 4) agency_id,
first(substr(agencyid, 7)) agency_name,
nth(11, quantiles(annualrevenue, 21)) vendor_median_annualrevenue,
nth(11, quantiles(numberofemployees, 21)) vendor_median_numberofemployees,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
agency_id
having
transactions > 1000 and
sum_dollarsobligated > 10e6
order by
vendor_median_annualrevenue asc
bq.Query(gpq.vendor_size_by_agency).to_dataframe()
"""
Explanation: Facts about Government Contracting
Let's check some popular statements about government contracting.
Small Businesses Win Most Contracts
Contractors had to report their revenue and number of employees, which makes it easy to check whether small businesses are welcome in government contracting:
End of explanation
"""
%%sql
select
womenownedflag,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
womenownedflag
"""
Explanation: The median shows the most likely supplier. Agencies on the top of the table actively employ vendors whose annual revenue is less than $1mn.
The Department of Defence, the largest buyer with $4.5tn worth of goods and services bought over these 17 years, has the median vendor with $2.5mn in revenue and 20 employees. It means that half of the DoD's vendors have less than $2.5mn in revenue.
Set-Aside Deals Take a Small Share
Set-aside purchases are reserved for special categories of suppliers, like women-, minority-, and veteran-owned businesses. There's a lot of confusion about their share in transactions. We can settle this confusion with data:
End of explanation
"""
%%sql
select
womenownedflag, veteranownedflag, minorityownedbusinessflag,
count(*) transactions,
sum(dollarsobligated) sum_dollarsobligated
from
gpqueries:contracts.raw
group by
womenownedflag, veteranownedflag, minorityownedbusinessflag
order by
womenownedflag, veteranownedflag, minorityownedbusinessflag desc
"""
Explanation: Women-owned businesses make about one tenth of the transactions, but their share in terms of sales is only 3.7%.
A cross-tabulation for major set-aside categories:
End of explanation
"""
%%sql
select
sum(if(before2015.dunsnumber is null, 1, 0)) new_vendors,
sum(if(before2015.dunsnumber is null, 0, 1)) old_vendors
from
flatten((select unique(dunsnumber) dunsnumber from gpqueries:contracts.raw where fiscal_year = 2015), dunsnumber) in2015
left join
flatten((select unique(dunsnumber) dunsnumber from gpqueries:contracts.raw where fiscal_year < 2015), dunsnumber) before2015
on before2015.dunsnumber = in2015.dunsnumber
"""
Explanation: For example, firms owned by women, veterans, and minorities (all represented at the same time) sell $5bn in goods and services. That's 0.07% of all government purchases.
New Vendors Emerge Each Year
Becoming a government contractor may seem difficult at first, but let's see how many new contractors the government had in 2015.
End of explanation
"""
|
bmeaut/python_nlp_2017_fall
|
course_material/09_Morphology_lab/09_Morphology_lab.ipynb
|
mit
|
import os
# Note that the actual output of `ls` is not printed!
print('Exit code:', os.system('ls -a'))
files = os.listdir('.')
print('Should have printed:\n\n{}'.format('\n'.join(files if len(files) <= 3 else files[:3] + ['...'])))
"""
Explanation: 9. Morphology — Lab exercises
XFST / foma
XFST provides two formalisms for creating FSA / FST for morphology and related fields:
- regular expressions: similar to Python's (e.g. {reg}?*({expr}) $\equiv$ reg.*(expr)?)
- lexc: a much simpler formalism for lexicographers
In this lab, we shall learn the latter via the open-source reimplementation of XFST: foma. We shall also acquaint ourselves with the Hungarian HFST morphology. We are not going into details of how foma works; for that, see the
- https://fomafst.github.io/
- https://github.com/mhulden/foma/
- the XFST book (Kenneth R. Beesley and Lauri Karttunen: Finite State Morphology)
But first...
Command-line access from Python
In some cases, we need to interface with command-line applications from our script. There are two ways to do this in Python, and an additional method in Jupyter.
1. os.system()
The os.system(cmd) call executes cmd, sends its output to the stdout of the interpreter, and returns the exit code of the process. As such, there is no way to capture the output in the script, so this method is only useful if we are interested solely in the exit code.
End of explanation
"""
import subprocess
p = subprocess.Popen(['ls', '-a'], # manual cmd split; see next example
stdout=subprocess.PIPE) # we need the output
ret = p.communicate()
print('Exit code: {}\nOutput:\n\n{}'.format(p.returncode, ret[0].decode('utf-8')))
"""
Explanation: 2. subprocess
The subprocess module provides full access to the command line. The basic method of usage is to create a Popen object and call its methods:
End of explanation
"""
p = subprocess.Popen('cat -', shell=True, # automatic cmd split -> ['cat', '-']
stdin=subprocess.PIPE, # we shall use stdin
stdout=subprocess.PIPE)
ret = p.communicate('hello\nbello'.encode('utf-8'))
print(ret[0].decode('utf-8'))
"""
Explanation: It is also possible to send input to a program started by Popen:
End of explanation
"""
# From Python 3.5
ret = subprocess.run('ls -a', shell=True, stdout=subprocess.PIPE)
print('run():\n{}'.format(
ret.stdout.decode('utf-8')))
# Even easier
print('check_output()\n{}'.format(
subprocess.check_output('ls -a', shell=True).decode('utf-8')))
"""
Explanation: From Python 3.6, Popen supports the encoding parameter, which alleviates the need for encode/decode.
There are also functions that cover the basic cases:
End of explanation
"""
directory = '.'
s = !ls -a {directory}
print(s)
"""
Explanation: 3. Jupyter!
Jupyter also has a shorthand for executing commands: !. It is a bit more convenient, as it does string encoding behind the scenes and parses the output into a list. However, it is not available in regular Python scripts.
End of explanation
"""
# Utility functions
from functools import partial
import os
import subprocess
import tempfile
from IPython.display import display, Image
def execute_commands(*cmds, fancy=True):
"""
Starts foma and executes the specified commands.
Might not work if there are too many...
"""
if fancy:
print('Executing commands...\n=====================\n')
args = ' '.join('-e "{}"'.format(cmd) for cmd in cmds)
output = subprocess.check_output('foma {} -s'.format(args),
stderr=subprocess.STDOUT,
shell=True).decode('utf-8')
print(output)
if fancy:
print('=====================\n')
def compile_lexc(lexc_string, fst_file):
"""
Compiles a string describing a lexc lexicon with foma. The FST
is written to fst_file.
"""
with tempfile.NamedTemporaryFile(mode='wt', encoding='utf-8', delete=False) as outf:
outf.write(lexc_string)
try:
execute_commands('read lexc {}'.format(outf.name),
'save stack {}'.format(fst_file), fancy=False)
#!foma -e "read lexc {outf.name}" -e "save stack {fst_file}" -s
finally:
os.remove(outf.name)
def apply(fst_file, words, up=True):
"""
Applies the FST in fst_file on the supplied words. The default direction
is up.
"""
if isinstance(words, list):
words = '\n'.join(map(str, words))
elif not isinstance(words, str):
raise ValueError('words must be a str or list')
header = 'Applying {} {}...'.format(fst_file, 'up' if up else 'down')
print('{}\n{}\n'.format(header, '=' * len(header)))
invert = '-i' if not up else ''
result = subprocess.check_output('flookup {} {}'.format(invert, fst_file),
stderr=subprocess.STDOUT, shell=True,
input=words.encode('utf-8'))
print(result.decode('utf-8')[:-1]) # Skip last newline
print('=' * len(header), '\n')
apply_up = partial(apply, up=True)
apply_down = partial(apply, up=False)
def draw_net(fst_file, inline=True):
"""
Displays a compiled network inline or in a separate window.
The package imagemagic must be installed for this function to work.
"""
!foma -e "load stack {fst_file}" -e "print dot >{fst_file}.dot" -s
if inline:
png_data = subprocess.check_output(
'cat {}.dot | dot -Tpng'.format(fst_file), shell=True)
display(Image(data=png_data, format='png'))
else:
!cat {fst_file}.dot | dot -Tpng | display
!rm {fst_file}.dot
"""
Explanation: Morphology
Take a few minutes to make yourself familiar with the code below. It defines the functions we use to communicate with foma via the command line.
End of explanation
"""
grammar = """
LEXICON Root
pack # ;
talk # ;
walk # ;
"""
compile_lexc(grammar, 'warm_up.fst')
draw_net('warm_up.fst')
"""
Explanation: Warm-up
First a few warm-up exercises. This will teach you how to use the functions defined above and give you a general idea of how a lexical transducer looks like. We shall cover a tiny subset of the English verb morphology.
Task W1.
A lexc grammar consists of LEXICONs, which corresponds to continuation classes. One lexicon, Root must always be present. Let's add the two words pack and talk to it. We shall build the grammar in a Python string and use the compile_lexc() function to compile it to binary format, and draw_net() to display the resulting automaton.
End of explanation
"""
grammar = """
LEXICON Root
! see how the continuation changes to the new LEXICON
! BTW this is a comment
pack Infl ;
talk Infl ;
walk Infl ;
LEXICON Infl
! add the endings here, without the hyphens
"""
compile_lexc(grammar, 'warm_up.fst')
draw_net('warm_up.fst')
"""
Explanation: There are several points to observe here:
- the format of a word (morpheme) definition line is: morpheme next_lexicon ;
- the next_lexicon can be the word end mark #
- word definitions must end with a semicolon (;); LEXICON lines must not
- the basic unit in the FSA is the character, not the whole word
- the FSA is determinized and minimized to save space: see how the states (3) and (5) and the arc -k-> are re-used
Task W2.
Let's add some inflection to the grammar. These are all regular verbs, so they all can receive -s, -ed, and -ing to form the third person singular, past and gerund forms, respectively. Add these to the second lexicon, and
compile the network again.
End of explanation
"""
apply_up('warm_up.fst', ['walked', 'talking', 'packs', 'walk'])
execute_commands('load stack warm_up.fst', 'print words')
"""
Explanation: Now, we can test what words the automaton can recognize in two ways:
- call the apply_up or apply_down functions with the word form
- use the print words foma command
End of explanation
"""
grammar = """
"""
compile_lexc(grammar, 'warm_up.fst')
draw_net('warm_up.fst')
"""
Explanation: Uh-oh. Something's wrong: the automaton didn't recognize walk. What happened?
The explanation is very simple: now all words in Root continue to Infl, which requires one of the inflectional endings. See how state (6) ceased to be an accepting state.
The solution: replicate the code from above, but also add the "zero morpheme" ending # ; to Infl! Make sure that state (6) is accepting again and that the recognized words now include the basic form.
Task W3.
Here we change our automaton to a transducer that lemmatizes words it receives on its bottom tape. Transduction in lexc is denoted by the colon (:). Again, copy your grammar below, but replace the contents of LEXICON Infl with
# ;
0:s # ;
0:ed # ;
0:ing # ;
Note that
- only a single colon is allowed on a line
- everything left of it is "up", right is "down"
- the $\varepsilon$ (empty character) is denoted by 0
- deletion happens on the top, "output" side
End of explanation
"""
# apply_up('warm_up.fst', ['walked', 'talking', 'packs', 'walk'])
# execute_commands('load stack warm_up.fst', 'print words')
"""
Explanation: Experiment again with apply_up and apply_down. How do they behave differently?
See how the output of the print words command changed. It is also useful to print just the upper or lower tape with print upper-words and print lower-words.
End of explanation
"""
adjectives_1 = """
csendes ! quiet
egészséges ! healthy
idős ! old
kék ! blue
mély ! deep
öntelt ! conceited
szeles ! windy
terhes ! pregnant; arduous
zsémbes ! shrewish
"""
grammar = """
"""
compile_lexc(grammar, 'h1.fst')
"""
Explanation: Lexc Intuition
While the ideas behind lexc are very logical, one might need some time to wrap their heads around it. In this notebook, I try to give some advice on how to "think lexc". Do not hesitate to check it out if the tasks below seem too hard. I also provide the solution to task H1 in there, though you are encouraged to come up with your own.
Hungarian Adjectives
In this exercise, we shall model a subset of the Hungarian nominal paradigm:
- regular adjectives
- vowel harmony
- plurals
- the accusative case
- comparative and superlative forms
The goal is to replicate the output of the Hungarian HFST morphology. We shall learn the following techniques:
- defining lexical automata and transducers with lexc
- multi-character symbols
- flag diacritics
Task H1.
We start small with a tiny lexical FSA.
- define a LEXICON for the adjectives in the code cell below
- add continuation classes to handle:
- the plural form (-ek)
- accusative (-et)
A little help for the latter two: in Hungarian, adjectives (and numerals) are inflected the same way as nouns; this is called the nominal paradigm. A simplified schematic would be
Root (Plur)? (Case)?
Plural is marked by -k, and accusative by -t. However, if the previous morpheme ends with a consonant (as is the case here), a link vowel is inserted before the k or t. Which vowel gets inserted is decided by complicated vowel harmony rules. The adjectives below all contain front vowels only, so the link vowel is e.
End of explanation
"""
grammar = """
"""
compile_lexc(grammar, 'h2.fst')
# apply_up('h2.fst', [])
"""
Explanation: Task H2.
What we have now is a simple (lexical) FSA. In this task, we modify it to have a proper lexical FST that can parse (apply_up) surface forms to morphological features and vice versa (apply_down).
Run the words through HFST:
Start a new docker bash shell by running docker exec -it <container name or id> bash
Start HFST by typing hfst-lookup --cascade=composition /nlp/hfst/hu.hfstol into the shell
Type the words in our FSA (don't forget plural / accusative, e.g. nagyok, finomat) into hfst-lookup one-by-one. See what features appear on the upper side (limit yourself to the correct parse, i.e. the one with [/Adj]).
Add the same features to our lexc grammar:
remember that you want to keep the surface form in the upper side as well, so e.g. [/Pl]:ek won't do. You must
either repeat it twice: ek[/Pl]:ek
or use two lexicons e.g. Plur and PlurTag, and have ek in the first and [/Pl]:0 in the second
all tags, such as [/Pl] must be defined in the Multichar_Symbols header:
```
Multichar_Symbols Symb1 Symb2 ...
LEXICON Root
...
``
Play around withapply_upandapply_down. Make sure you covered all tags in the HFST output. (Note: HFST tags color names as[/Adj|col]`. You don't need to make this distinction in this exercise.)
End of explanation
"""
adjectives_2 = """
abszurd ! absurd
bájos ! charming
finom ! delicious
gyanús ! suspicious
okos ! clever
piros ! red
száraz ! dry
zord ! grim
"""
grammar = """
"""
compile_lexc(grammar, 'h3.fst')
# apply_up('h3.fst', [])
"""
Explanation: Task H2b*.
Copy the apply functions and create a hfst_apply version of each, which calls hfst instead of foma. Note that hfst-lookup does not support generation. You will probably need communicate() to implement this function.
Important: do not start this exercise until you finish all foma-related ones!
Task H3.
In the next few exercises, we are going to delve deeper into vowel harmony and techniques to handle it. For now, add the adjectives below to the grammar. In these words, back vowels dominate, so the link vowel for plural and accusative is a. Create LEXICON structures that mirror what you have for the front adjectives to handle the new words.
End of explanation
"""
grammar = """
"""
compile_lexc(grammar, 'h4.fst')
# apply_up('h4.fst', [])
"""
Explanation: Task H4.
The previous solution works, but implementing one distinction (a/e) required us to double the number of lexicons; this clearly doesn't scale. Here, we introduce a more flexible solution: flag diacritics.
Flag diacritics are (multichar!) symbol with a few special properties:
- they have the form @COMMAND.FEATURE_NAME.FEATURE_VALUE@, where command is
- P: set
- R: require
- D: disallow (the opposite of R)
- C: clear (removes the flag)
- U: unification (first P, then R)
- they must appear on both tapes (upper and lower) to have any effect (e.g. @P.FEAT.VALUE@:0 won't work, but @P.FEAT.VALUE@xxx will)
- even so, flag diacritics never appear in the final upper / lower strings -- they can be considered an "implementation detail"
- flag diacritics incur some performance overhead, but decrease the size of the FSTs
Add flag diacritics to your grammar. You will want to keep the two adjective types in separate lexicons, e.g.
LEXICON Root
@U.HARM.FRONT@ AdjFront ;
@U.HARM.BACK@ AdjBack ;
However, the two plural / accusative lexicons can be merged, like so:
LEXICON Plur
@U.HARM.FRONT@ek PlurTag ;
@U.HARM.BACK@ak PlurTag ;
Compile your grammar to see that the network became smaller. Check and see if the new FST accepts the same language as the old one.
End of explanation
"""
grammar = """
"""
compile_lexc(grammar, 'h5.fst')
# apply_up('h5.fst', [])
"""
Explanation: Task H5.
We round up the exercise by adding adjective comparison. Incorporate the following rules into your grammar:
- Comparative forms are marked by -bb, which takes a link vowel similarly to plural
- The superlative form is marked by the leg- prefix and -bb, i.e. a circumfix
- The exaggerated form is the same as the superlative, with any number of leges coming before leg
The full simplified paradigm thus becomes:
((leges)* leg)? Root (-bb)? (Plur)? (Case)?
Again, the circumfix is best handled with flag diacritics. However, the U command probably won't work because its main use is for agreement. Try to implement an if-else structure with the other commands!
End of explanation
"""
|
tensorflow/docs-l10n
|
site/pt-br/tutorials/images/transfer_learning_with_hub.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import matplotlib.pylab as plt
try:
# %tensorflow_version only exists in Colab.
!pip install tf-nightly
except Exception:
pass
import tensorflow as tf
!pip install -U tf-hub-nightly
!pip install tfds-nightly
import tensorflow_hub as hub
from tensorflow.keras import layers
"""
Explanation: Transfer learning with TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
[TensorFlow Hub](http://tensorflow.org/hub) is a way to share pretrained model components. See the [TensorFlow Module Hub](https://tfhub.dev/) for a searchable list of pretrained models. This tutorial demonstrates:
How to use TensorFlow Hub with tf.keras.
How to do image classification using TensorFlow Hub.
How to do simple transfer learning.
Setup
End of explanation
"""
classifier_url ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2" #@param {type:"string"}
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,))
])
"""
Explanation: An ImageNet classifier
Download the classifier
Use hub.KerasLayer to load a MobileNet and wrap it as a Keras layer. Any [TensorFlow 2-compatible image classifier URL](https://tfhub.dev/s?q=tf2&module-type=image-classification) from tfhub.dev will work here.
End of explanation
"""
import numpy as np
import PIL.Image as Image
grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper
grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.shape
"""
Explanation: Run it on a single image
Download a single image to try the model on.
End of explanation
"""
result = classifier.predict(grace_hopper[np.newaxis, ...])
result.shape
"""
Explanation: Add a batch dimension and pass the image to the model.
End of explanation
"""
predicted_class = np.argmax(result[0], axis=-1)
predicted_class
"""
Explanation: The result is a 1001-element vector of logits, rating the probability of each class for the image.
So the top class ID can be found with argmax:
End of explanation
"""
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
plt.imshow(grace_hopper)
plt.axis('off')
predicted_class_name = imagenet_labels[predicted_class]
_ = plt.title("Prediction: " + predicted_class_name.title())
"""
Explanation: Decode the predictions
We have the predicted class ID.
Fetch the ImageNet labels and decode the predictions.
End of explanation
"""
data_root = tf.keras.utils.get_file(
'flower_photos','https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
"""
Explanation: Simple transfer learning
Using TF Hub, it is simple to retrain the top layer of the model to recognize the classes in our dataset.
Dataset
For this example, you will use the TensorFlow flowers dataset:
End of explanation
"""
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
image_data = image_generator.flow_from_directory(str(data_root), target_size=IMAGE_SHAPE)
"""
Explanation: The simplest way to load this data into our model is using tf.keras.preprocessing.image.ImageDataGenerator.
All of TensorFlow Hub's image modules expect float inputs in the [0,1] range. Use the ImageDataGenerator's rescale parameter to achieve this.
The image size will be handled later.
End of explanation
"""
for image_batch, label_batch in image_data:
print("Image batch shape: ", image_batch.shape)
print("Label batch shape: ", label_batch.shape)
break
"""
Explanation: The resulting object is an iterator that returns image_batch, label_batch pairs.
End of explanation
"""
result_batch = classifier.predict(image_batch)
result_batch.shape
predicted_class_names = imagenet_labels[np.argmax(result_batch, axis=-1)]
predicted_class_names
"""
Explanation: Run the classifier on a batch of images
Now run the classifier on a batch of images.
End of explanation
"""
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_class_names[n])
plt.axis('off')
_ = plt.suptitle("ImageNet predictions")
"""
Explanation: Now check how these predictions line up with the images:
End of explanation
"""
feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2" #@param {type:"string"}
"""
Explanation: See the LICENSE.txt file for image attributions.
The results are far from perfect, but reasonable considering that these are not the classes the model was trained for (except "daisy").
Download the headless model
TensorFlow Hub also distributes models without the top classification layer. These can be used to easily do transfer learning.
Any [TensorFlow 2-compatible image feature vector URL](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) from tfhub.dev will work here.
End of explanation
"""
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
input_shape=(224,224,3))
"""
Explanation: Create the feature extractor.
End of explanation
"""
feature_batch = feature_extractor_layer(image_batch)
print(feature_batch.shape)
"""
Explanation: It returns a 1280-element vector for each image:
End of explanation
"""
feature_extractor_layer.trainable = False
"""
Explanation: Freeze the variables in the feature extractor layer, so that the training only modifies the new classifier layer.
End of explanation
"""
model = tf.keras.Sequential([
feature_extractor_layer,
layers.Dense(image_data.num_classes, activation='softmax')
])
model.summary()
predictions = model(image_batch)
predictions.shape
"""
Explanation: Attach a classification head
Now wrap the hub layer in a tf.keras.Sequential model, and add a new classification layer.
End of explanation
"""
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['acc'])
"""
Explanation: Train the model
Use compile to configure the training process:
End of explanation
"""
class CollectBatchStats(tf.keras.callbacks.Callback):
def __init__(self):
self.batch_losses = []
self.batch_acc = []
def on_train_batch_end(self, batch, logs=None):
self.batch_losses.append(logs['loss'])
self.batch_acc.append(logs['acc'])
self.model.reset_metrics()
steps_per_epoch = np.ceil(image_data.samples/image_data.batch_size)
batch_stats_callback = CollectBatchStats()
history = model.fit_generator(image_data, epochs=2,
steps_per_epoch=steps_per_epoch,
callbacks = [batch_stats_callback])
"""
Explanation: Now use the .fit method to train the model.
To keep this example short, train for only two epochs. To visualize the training progress, use a custom callback to record the loss and accuracy of each batch individually, rather than the epoch average.
End of explanation
"""
plt.figure()
plt.ylabel("Loss")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(batch_stats_callback.batch_losses)
plt.figure()
plt.ylabel("Accuracy")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(batch_stats_callback.batch_acc)
"""
Explanation: Now, after even just a few training iterations, we can already see that the model is making progress on the task.
End of explanation
"""
class_names = sorted(image_data.class_indices.items(), key=lambda pair:pair[1])
class_names = np.array([key.title() for key, value in class_names])
class_names
"""
Explanation: Check the predictions
To redo the plot from before, first get the ordered list of class names:
End of explanation
"""
predicted_batch = model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
"""
Explanation: Run the image batch through the model and convert the indices to class names.
End of explanation
"""
label_id = np.argmax(label_batch, axis=-1)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "green" if predicted_id[n] == label_id[n] else "red"
plt.title(predicted_label_batch[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (green: correct, red: incorrect)")
"""
Explanation: Plot the result
End of explanation
"""
import time
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
model.save(export_path, save_format='tf')
export_path
"""
Explanation: Export your model
Now that you've trained the model, export it as a SavedModel:
End of explanation
"""
reloaded = tf.keras.models.load_model(export_path)
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
abs(reloaded_result_batch - result_batch).max()
"""
Explanation: Now confirm that we can reload it, and that it still gives the same results:
End of explanation
"""
|
martinjrobins/hobo
|
examples/sampling/slice-rank-shrinking-mcmc.ipynb
|
bsd-3-clause
|
import matplotlib.pyplot as plt
import numpy as np
import pints
import pints.toy
# Define target
log_pdf = pints.toy.MultimodalGaussianLogPDF(modes = [[0, 0], [10, 10], [10, 0]])
# Plot target
levels = np.linspace(-3,12,20)
num_points = 100
x = np.linspace(-5, 15, num_points)
y = np.linspace(-5, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Slice sampling with rank shrinking
This notebook describes how to use a method of slice sampling introduced in [1] (see slice rank shrinking) to generate MCMC samples from a given target distribution. Unlike most other variants of slice sampling, this approach uses sensitivities to guide the sampler back towards a slice (an area of parameter space where the target density exceeds a given value).
[1] "Covariance-adaptive slice sampling", 2010. Thompson, M and Neal, RM, arXiv preprint arXiv:1003.3201.
Problem 1: Multimodal Distribution
In experimenting, we found that this type of slice sampling was surprisingly effective at sampling from multimodal distributions. Here, we demonstrate this. First, we plot the multimodal target.
End of explanation
"""
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.SliceRankShrinkingMCMC)
for sampler in mcmc.samplers():
sampler.set_sigma_c(5)
# Set maximum number of iterations
mcmc.set_max_iterations(1000)
# Disable logging
mcmc.set_log_to_screen(True)
mcmc.set_log_interval(200)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
"""
Explanation: Next we use this method of slice sampling to generate MCMC samples from this distribution.
End of explanation
"""
stacked = np.vstack(chains)
plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.scatter(stacked[:,0], stacked[:,1], marker='.', alpha=0.2)
plt.xlim(-5, 15)
plt.ylim(-5, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Overlaying the samples on the modes, we see good coverage across them.
End of explanation
"""
print('KL divergence by mode: ' + str(log_pdf.kl_divergence(stacked)))
"""
Explanation: And a low KL divergence from the target at each mode.
End of explanation
"""
import pints.noise
model = pints.toy.SimpleHarmonicOscillatorModel()
times = np.linspace(0, 30, 500)
parameters = model.suggested_parameters()
values = model.simulate(parameters, times)
values += pints.noise.independent(0.1, values.shape)
plt.figure(figsize=(15,2))
plt.xlabel('t')
plt.ylabel(r'$y$ (Displacement)')
plt.plot(times, values)
plt.show()
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[-4, -4, 0, 0],
[4, 4, 3, 3],
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for mcmc chains
num_chains = 4
xs = log_prior.sample(num_chains)
# Create mcmc routine
mcmc = pints.MCMCController(
log_posterior, num_chains, xs, method=pints.SliceRankShrinkingMCMC)
for sampler in mcmc.samplers():
sampler.set_sigma_c(1)
# Add stopping criterion
mcmc.set_max_iterations(800)
# Set up modest logging
mcmc.set_log_to_screen(True)
mcmc.set_log_interval(200)
# Run!
print('Running...')
full_chains = mcmc.run()
print('Done!')
import pints.plot
# Show traces and histograms
pints.plot.trace(full_chains)
# Discard warm up and stack
full_chains_filtered = full_chains[:, 400:, :]
stacked = np.vstack(full_chains_filtered)
# Examine sampling distribution
pints.plot.pairwise(stacked, kde=False, ref_parameters = parameters.tolist() + [0.1])
# Show graphs
plt.show()
"""
Explanation: Problem 2: Simple harmonic oscillator
We now try the same method on a more realistic time-series problem, using the simple harmonic oscillator model.
Plot the model solutions with additive noise.
End of explanation
"""
results = pints.MCMCSummary(
chains=full_chains_filtered,
time=mcmc.time(),
parameter_names=['y(0)', 'dy/dt(0)', 'theta', 'sigma']
)
print(results)
"""
Explanation: Tabulate results.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/inm/cmip6/models/sandbox-2/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-2', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
cjcardinale/climlab
|
docs/source/courseware/PolarAmplification.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
from climlab import constants as const
"""
Explanation: Polar amplification in simple models
End of explanation
"""
ebm = climlab.GreyRadiationModel(num_lev=1, num_lat=90)
insolation = climlab.radiation.AnnualMeanInsolation(domains=ebm.Ts.domain)
ebm.add_subprocess('insolation', insolation)
ebm.subprocess.SW.flux_from_space = ebm.subprocess.insolation.insolation
print(ebm)
# add a fixed relative humidity process
# (will only affect surface evaporation)
h2o = climlab.radiation.ManabeWaterVapor(state=ebm.state, **ebm.param)
ebm.add_subprocess('H2O', h2o)
# Add surface heat fluxes
shf = climlab.surface.SensibleHeatFlux(state=ebm.state, Cd=3E-4)
lhf = climlab.surface.LatentHeatFlux(state=ebm.state, Cd=3E-4)
# couple water vapor to latent heat flux process
lhf.q = h2o.q
ebm.add_subprocess('SHF', shf)
ebm.add_subprocess('LHF', lhf)
ebm.integrate_years(1)
plt.plot(ebm.lat, ebm.Ts)
plt.plot(ebm.lat, ebm.Tatm)
co2ebm = climlab.process_like(ebm)
co2ebm.subprocess['LW'].absorptivity = ebm.subprocess['LW'].absorptivity*1.1
co2ebm.integrate_years(3.)
# no heat transport but with evaporation -- no polar amplification
plt.plot(ebm.lat, co2ebm.Ts - ebm.Ts)
plt.plot(ebm.lat, co2ebm.Tatm - ebm.Tatm)
"""
Explanation: EBM with surface and atm layers
End of explanation
"""
diffebm = climlab.process_like(ebm)
# thermal diffusivity in W/m**2/degC
D = 0.6
# meridional diffusivity in m**2/s
K = D / diffebm.Tatm.domain.heat_capacity * const.a**2
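# Unit check: K [m2/s] = D [W/(m2 K)] * a**2 [m2] / C [J/(m2 K)],
# where a is the Earth radius and C the heat capacity per unit area of the Tatm domain.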
d = climlab.dynamics.MeridionalDiffusion(K=K, state={'Tatm': diffebm.Tatm}, **diffebm.param)
diffebm.add_subprocess('diffusion', d)
print(diffebm)
diffebm.integrate_years(3)
plt.plot(diffebm.lat, diffebm.Ts)
plt.plot(diffebm.lat, diffebm.Tatm)
def inferred_heat_transport( energy_in, lat_deg ):
'''Returns the inferred heat transport (in PW) by integrating the net energy imbalance from pole to pole.'''
from scipy import integrate
from climlab import constants as const
lat_rad = np.deg2rad( lat_deg )
    return ( 1E-15 * 2 * np.pi * const.a**2 * integrate.cumtrapz( np.cos(lat_rad)*energy_in,
x=lat_rad, initial=0. ) )
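# (The integral above is H(lat) = 2*pi*a**2 * int_{south}^{lat} Rtoa * cos(lat') dlat',
# converted from W to PW by the 1E-15 factor.)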
# Plot the northward heat transport in this model
Rtoa = np.squeeze(diffebm.timeave['ASR'] - diffebm.timeave['OLR'])
plt.plot(diffebm.lat, inferred_heat_transport(Rtoa, diffebm.lat))
## Now warm it up!
co2diffebm = climlab.process_like(diffebm)
co2diffebm.subprocess['LW'].absorptivity = diffebm.subprocess['LW'].absorptivity*1.1
co2diffebm.integrate_years(5)
# with heat transport and evaporation
# Get some modest polar amplification of surface warming
# but larger equatorial amplification of atmospheric warming
# Increased atmospheric gradient = increased poleward flux.
plt.plot(diffebm.lat, co2diffebm.Ts - diffebm.Ts, label='Ts')
plt.plot(diffebm.lat, co2diffebm.Tatm - diffebm.Tatm, label='Tatm')
plt.legend()
Rtoa = np.squeeze(diffebm.timeave['ASR'] - diffebm.timeave['OLR'])
Rtoa_co2 = np.squeeze(co2diffebm.timeave['ASR'] - co2diffebm.timeave['OLR'])
plt.plot(diffebm.lat, inferred_heat_transport(Rtoa, diffebm.lat), label='1xCO2')
plt.plot(diffebm.lat, inferred_heat_transport(Rtoa_co2, diffebm.lat), label='2xCO2')
plt.legend()
"""
Explanation: Now with meridional heat transport
End of explanation
"""
diffebm2 = climlab.process_like(diffebm)
diffebm2.remove_subprocess('LHF')
diffebm2.integrate_years(3)
co2diffebm2 = climlab.process_like(co2diffebm)
co2diffebm2.remove_subprocess('LHF')
co2diffebm2.integrate_years(3)
# With transport and no evaporation...
# No polar amplification, either of surface or air temperature!
plt.plot(diffebm2.lat, co2diffebm2.Ts - diffebm2.Ts, label='Ts')
plt.plot(diffebm2.lat, co2diffebm2.Tatm[:,0] - diffebm2.Tatm[:,0], label='Tatm')
plt.legend()
plt.figure()
# And in this case, the lack of polar amplification is DESPITE an increase in the poleward heat transport.
Rtoa = np.squeeze(diffebm2.timeave['ASR'] - diffebm2.timeave['OLR'])
Rtoa_co2 = np.squeeze(co2diffebm2.timeave['ASR'] - co2diffebm2.timeave['OLR'])
plt.plot(diffebm2.lat, inferred_heat_transport(Rtoa, diffebm2.lat), label='1xCO2')
plt.plot(diffebm2.lat, inferred_heat_transport(Rtoa_co2, diffebm2.lat), label='2xCO2')
plt.legend()
"""
Explanation: Same thing but with NO EVAPORATION
End of explanation
"""
model = climlab.GreyRadiationModel(num_lev=30, num_lat=90, abs_coeff=1.6E-4)
insolation = climlab.radiation.AnnualMeanInsolation(domains=model.Ts.domain)
model.add_subprocess('insolation', insolation)
model.subprocess.SW.flux_from_space = model.subprocess.insolation.insolation
print(model)
# Convective adjustment for atmosphere only
conv = climlab.convection.ConvectiveAdjustment(state={'Tatm':model.Tatm}, adj_lapse_rate=6.5,
**model.param)
model.add_subprocess('convective adjustment', conv)
# add a fixed relative humidity process
# (will only affect surface evaporation)
h2o = climlab.radiation.water_vapor.ManabeWaterVapor(state=model.state, **model.param)
model.add_subprocess('H2O', h2o)
# Add surface heat fluxes
shf = climlab.surface.SensibleHeatFlux(state=model.state, Cd=1E-3)
lhf = climlab.surface.LatentHeatFlux(state=model.state, Cd=1E-3)
lhf.q = model.subprocess.H2O.q
model.add_subprocess('SHF', shf)
model.add_subprocess('LHF', lhf)
model.integrate_years(3.)
def plot_temp_section(model, timeave=True):
fig = plt.figure()
ax = fig.add_subplot(111)
if timeave:
field = model.timeave['Tatm'].transpose()
else:
field = model.Tatm.transpose()
cax = ax.contourf(model.lat, model.lev, field)
ax.invert_yaxis()
ax.set_xlim(-90,90)
ax.set_xticks([-90, -60, -30, 0, 30, 60, 90])
fig.colorbar(cax)
plot_temp_section(model, timeave=False)
co2model = climlab.process_like(model)
co2model.subprocess['LW'].absorptivity = model.subprocess['LW'].absorptivity*1.1
co2model.integrate_years(3)
plot_temp_section(co2model, timeave=False)
# Without transport, get equatorial amplification
plt.plot(model.lat, co2model.Ts - model.Ts, label='Ts')
plt.plot(model.lat, co2model.Tatm[:,0] - model.Tatm[:,0], label='Tatm')
plt.legend()
"""
Explanation: A column model approach
End of explanation
"""
diffmodel = climlab.process_like(model)
# thermal diffusivity in W/m**2/degC
D = 0.05
# meridional diffusivity in m**2/s
K = D / diffmodel.Tatm.domain.heat_capacity[0] * const.a**2
print(K)
d = climlab.dynamics.MeridionalDiffusion(K=K, state={'Tatm':diffmodel.Tatm}, **diffmodel.param)
diffmodel.add_subprocess('diffusion', d)
print(diffmodel)
diffmodel.integrate_years(3)
plot_temp_section(diffmodel)
# Plot the northward heat transport in this model
Rtoa = np.squeeze(diffmodel.timeave['ASR'] - diffmodel.timeave['OLR'])
plt.plot(diffmodel.lat, inferred_heat_transport(Rtoa, diffmodel.lat))
## Now warm it up!
co2diffmodel = climlab.process_like(diffmodel)
co2diffmodel.subprocess['LW'].absorptivity = diffmodel.subprocess['LW'].absorptivity*1.1
co2diffmodel.integrate_years(3)
# With transport, get polar amplification...
# of surface temperature, but not of air temperature!
plt.plot(diffmodel.lat, co2diffmodel.Ts - diffmodel.Ts, label='Ts')
plt.plot(diffmodel.lat, co2diffmodel.Tatm[:,0] - diffmodel.Tatm[:,0], label='Tatm')
plt.legend()
Rtoa = np.squeeze(diffmodel.timeave['ASR'] - diffmodel.timeave['OLR'])
Rtoa_co2 = np.squeeze(co2diffmodel.timeave['ASR'] - co2diffmodel.timeave['OLR'])
plt.plot(diffmodel.lat, inferred_heat_transport(Rtoa, diffmodel.lat), label='1xCO2')
plt.plot(diffmodel.lat, inferred_heat_transport(Rtoa_co2, diffmodel.lat), label='2xCO2')
"""
Explanation: Now with meridional heat transport!
End of explanation
"""
diffmodel2 = climlab.process_like(diffmodel)
diffmodel2.remove_subprocess('LHF')
print(diffmodel2)
diffmodel2.integrate_years(3)
co2diffmodel2 = climlab.process_like(co2diffmodel)
co2diffmodel2.remove_subprocess('LHF')
co2diffmodel2.integrate_years(3)
# With transport and no evaporation...
# No polar amplification, either of surface or air temperature!
plt.plot(diffmodel2.lat, co2diffmodel2.Ts - diffmodel2.Ts, label='Ts')
plt.plot(diffmodel2.lat, co2diffmodel2.Tatm[:,0] - diffmodel2.Tatm[:,0], label='Tatm')
plt.legend()
Rtoa = np.squeeze(diffmodel2.timeave['ASR'] - diffmodel2.timeave['OLR'])
Rtoa_co2 = np.squeeze(co2diffmodel2.timeave['ASR'] - co2diffmodel2.timeave['OLR'])
plt.plot(diffmodel2.lat, inferred_heat_transport(Rtoa, diffmodel2.lat), label='1xCO2')
plt.plot(diffmodel2.lat, inferred_heat_transport(Rtoa_co2, diffmodel2.lat), label='2xCO2')
"""
Explanation: Same thing but with NO EVAPORATION
End of explanation
"""
diffmodel3 = climlab.process_like(diffmodel)
diffmodel3.subprocess['LHF'].Cd *= 0.5
diffmodel3.integrate_years(5.)
# Reduced evaporation gives equatorially enhanced warming of surface
# and cooling of near-surface air temperature
plt.plot(diffmodel.lat, diffmodel3.Ts - diffmodel.Ts, label='Ts')
plt.plot(diffmodel.lat, diffmodel3.Tatm[:,0] - diffmodel.Tatm[:,0], label='Tatm')
plt.legend()
"""
Explanation: Warming effect of a DECREASE IN EVAPORATION EFFICIENCY
Take a column model that includes evaporation and heat transport, and reduce the drag coefficient by a factor of 2.
How does the surface temperature change?
End of explanation
"""
diffebm3 = climlab.process_like(diffebm)
diffebm3.subprocess['LHF'].Cd *= 0.5
diffebm3.integrate_years(5.)
# Reduced evaporation gives equatorially enhanced warming of surface
# and cooling of near-surface air temperature
plt.plot(diffebm.lat, diffebm3.Ts - diffebm.Ts, label='Ts')
plt.plot(diffebm.lat, diffebm3.Tatm[:,0] - diffebm.Tatm[:,0], label='Tatm')
"""
Explanation: Same calculation in a two-layer EBM
End of explanation
"""
# Put in some ozone
import xarray as xr
ozonepath = "http://thredds.atmos.albany.edu:8080/thredds/dodsC/CLIMLAB/ozone/apeozone_cam3_5_54.nc"
ozone = xr.open_dataset(ozonepath)
# Dimensions of the ozone file
lat = ozone.lat
lon = ozone.lon
lev = ozone.lev
# Taking annual, zonal average of the ozone data
O3_zon = ozone.OZONE.mean(dim=("time","lon"))
# make a model on the same grid as the ozone
model1 = climlab.BandRCModel(lev=lev, lat=lat)
insolation = climlab.radiation.AnnualMeanInsolation(domains=model1.Ts.domain)
model1.add_subprocess('insolation', insolation)
model1.subprocess.SW.flux_from_space = model1.subprocess.insolation.insolation
print(model1)
# Set the ozone mixing ratio
O3_trans = O3_zon.transpose()
# Put in the ozone
model1.absorber_vmr['O3'] = O3_trans
model1.param
# Convective adjustment for atmosphere only
model1.remove_subprocess('convective adjustment')
conv = climlab.convection.ConvectiveAdjustment(state={'Tatm':model1.Tatm}, **model1.param)
model1.add_subprocess('convective adjustment', conv)
# Add surface heat fluxes
shf = climlab.surface.SensibleHeatFlux(state=model1.state, Cd=0.5E-3)
lhf = climlab.surface.LatentHeatFlux(state=model1.state, Cd=0.5E-3)
# set the water vapor input field for LHF process
lhf.q = model1.q
model1.add_subprocess('SHF', shf)
model1.add_subprocess('LHF', lhf)
model1.step_forward()
model1.integrate_years(1.)
model1.integrate_years(1.)
plot_temp_section(model1, timeave=False)
co2model1 = climlab.process_like(model1)
co2model1.absorber_vmr['CO2'] *= 2
co2model1.integrate_years(3.)
plot_temp_section(co2model1, timeave=False)
"""
Explanation: Pretty much the same result.
Some stuff with Band models
End of explanation
"""
diffmodel1 = climlab.process_like(model1)
# thermal diffusivity in W/m**2/degC
D = 0.01
# meridional diffusivity in m**2/s
K = D / diffmodel1.Tatm.domain.heat_capacity[0] * const.a**2
d = climlab.dynamics.MeridionalDiffusion(K=K, state={'Tatm': diffmodel1.Tatm}, **diffmodel1.param)
diffmodel1.add_subprocess('diffusion', d)
diffmodel1.absorber_vmr['CO2'] *= 4.
print(diffmodel1)
diffmodel1.integrate_years(3.)
plot_temp_section(diffmodel1, timeave=False)
Rtoa = np.squeeze(diffmodel1.timeave['ASR'] - diffmodel1.timeave['OLR'])
plt.plot(diffmodel1.lat, inferred_heat_transport(Rtoa, diffmodel1.lat))
plt.plot(diffmodel1.lat, diffmodel1.Ts-273.15)
# Now double CO2
co2diffmodel1 = climlab.process_like(diffmodel1)
co2diffmodel1.absorber_vmr['CO2'] *= 2.
co2diffmodel1.integrate_years(5)
# No polar amplification in this model!
plt.plot(diffmodel1.lat, co2diffmodel1.Ts - diffmodel1.Ts, label='Ts')
plt.plot(diffmodel1.lat, co2diffmodel1.Tatm[:,0] - diffmodel1.Tatm[:,0], label='Tatm')
plt.legend()
plt.figure()
Rtoa = np.squeeze(diffmodel1.timeave['ASR'] - diffmodel1.timeave['OLR'])
Rtoa_co2 = np.squeeze(co2diffmodel1.timeave['ASR'] - co2diffmodel1.timeave['OLR'])
plt.plot(diffmodel1.lat, inferred_heat_transport(Rtoa, diffmodel1.lat), label='1xCO2')
plt.plot(diffmodel1.lat, inferred_heat_transport(Rtoa_co2, diffmodel1.lat), label='2xCO2')
plt.legend()
"""
Explanation: Model gets very very hot near equator. Very large equator-to-pole gradient.
Band model with heat transport and evaporation
End of explanation
"""
|
yhat/ggplot
|
docs/how-to/Customizing Colors.ipynb
|
bsd-2-clause
|
ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) +\
geom_point() +\
scale_color_brewer(type='qual')
ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \
geom_point() + \
scale_color_brewer(type='seq')
ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \
geom_point() + \
scale_color_brewer(type='seq', palette=4)
ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \
geom_point() + \
scale_color_brewer(type='div', palette=5)
"""
Explanation: Colors
ggplot comes with a variety of "scales" that allow you to theme your plots and make them easier to interpret. In addition to the default color schemes that ggplot provides, there are also several color scales which allow you to specify more targeted "palettes" of colors to use in your plots.
scale_color_brewer
scale_color_brewer provides sets of colors that are optimized for displaying data on maps. It comes from Cynthia Brewer's aptly named Color Brewer. Lucky for us, these palettes also look great on plots that aren't maps.
End of explanation
"""
import pandas as pd
temperature = pd.DataFrame({"celsius": range(-88, 58)})
temperature['farenheit'] = temperature.celsius*1.8 + 32
temperature['kelvin'] = temperature.celsius + 273.15
ggplot(temperature, aes(x='celsius', y='farenheit', color='kelvin')) + \
geom_point() + \
scale_color_gradient(low='blue', high='red')
ggplot(aes(x='x', y='y', color='z'), data=diamonds.head(1000)) +\
geom_point() +\
scale_color_gradient(low='red', high='white')
ggplot(aes(x='x', y='y', color='z'), data=diamonds.head(1000)) +\
geom_point() +\
scale_color_gradient(low='#05D9F6', high='#5011D1')
ggplot(aes(x='x', y='y', color='z'), data=diamonds.head(1000)) +\
geom_point() +\
scale_color_gradient(low='#E1FA72', high='#F46FEE')
"""
Explanation: scale_color_gradient
scale_color_gradient allows you to create gradients of colors that can represent a spectrum of values. For instance, if you're displaying temperature data, you might want to have lower values be blue, hotter values be red, and middle values be somewhere in between. scale_color_gradient will calculate the colors each point should be--even those in between colors.
End of explanation
"""
my_colors = [
"#ff7f50",
"#ff8b61",
"#ff9872",
"#ffa584",
"#ffb296",
"#ffbfa7",
"#ffcbb9",
"#ffd8ca",
"#ffe5dc",
"#fff2ed"
]
ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \
geom_point() + \
scale_color_manual(values=my_colors)
# https://coolors.co/app/69a2b0-659157-a1c084-edb999-e05263
ggplot(aes(x='carat', y='price', color='cut'), data=diamonds) + \
geom_point() + \
scale_color_manual(values=['#69A2B0', '#659157', '#A1C084', '#EDB999', '#E05263'])
"""
Explanation: scale_color_manual
Want to just specify the colors yourself? No problem, just use scale_color_manual. Add it to your plot as a layer and specify the colors you'd like using a list.
End of explanation
"""
|
ffmmjj/intro_to_data_science_workshop
|
04-Exemplo - Análise de sobreviventes do Titanic.ipynb
|
apache-2.0
|
import pandas as pd
raw_data = pd.read_csv('datasets/titanic.csv')
raw_data.head()
raw_data.info()
"""
Explanation: Titanic survivor analysis
The Titanic survivors dataset is widely used as a didactic example to illustrate data cleaning and exploration concepts.
Let's start by importing the data into a pandas DataFrame from a CSV file:
End of explanation
"""
# Percentage of missing values in each column
(raw_data.isnull().sum() / len(raw_data)) * 100.0
"""
Explanation: The information above shows that this dataset contains information about 891 passengers: their names, gender, age, etc. (for a complete description of the meaning of each column, check this link).
Missing values
Before starting the analysis itself, we need to check the "health" of the data by verifying how much information is actually present in each column.
End of explanation
"""
raw_data.drop('Cabin', axis='columns', inplace=True)
raw_data.info()
"""
Explanation: We can see that 77% of the passengers have no information about which cabin they were staying in. This information could be useful for later analysis but, for now, let's drop this column:
End of explanation
"""
raw_data.dropna(subset=['Embarked'], inplace=True)
(raw_data.isnull().sum() / len(raw_data)) * 100.0
"""
Explanation: The Embarked column, which records the port where each passenger boarded, has only a few blank rows. Since the number of passengers missing this information is small, it is reasonable to assume that they can be dropped from the dataset without much loss:
End of explanation
"""
raw_data.fillna({'Age': raw_data.Age.median()}, inplace=True)
(raw_data.isnull().sum() / len(raw_data)) * 100.0
"""
Explanation: Finally, about 20% of the passengers have no age information. It does not seem reasonable to exclude all of them, nor to discard the whole column, so one possible solution is to fill the blank values in this column with its median value in the dataset:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
overall_fig = raw_data.Survived.value_counts().plot(kind='bar')
overall_fig.set_xlabel('Survived')
overall_fig.set_ylabel('Amount')
"""
Explanation: Why use the median instead of the mean?
The median is a robust statistic. A statistic is a number that summarizes a set of values, and a statistic is considered robust if it is not significantly affected by variations in the data.
Suppose, for example, that we have a group of people whose ages are [15, 16, 14, 15, 15, 19, 14, 17]. The mean age of this group is 15.625. If an 80-year-old person is added to the group, the mean age becomes 22.77, which no longer seems to represent the group's prevailing age profile well.
The median in these two cases, on the other hand, is 15 years - that is, the median is not affected by the presence of an outlier in the data, which makes it a robust statistic for the ages of this group.
Now that all the passenger information in the dataset has been "cleaned", we can start analyzing the data.
Exploratory analysis
Let's start by exploring how many people in this dataset survived the Titanic:
End of explanation
"""
survived_sex = raw_data[raw_data['Survived']==1]['Sex'].value_counts()
dead_sex = raw_data[raw_data['Survived']==0]['Sex'].value_counts()
df = pd.DataFrame([survived_sex,dead_sex])
df.index = ['Survivors','Non-survivors']
df.plot(kind='bar',stacked=True, figsize=(15,8));
"""
Explanation: Overall, 38% of the passengers survived.
Let's now break down the proportion of survivors across different cuts of the data (the code used to generate the plots below was taken from this link).
Breakdown by gender
End of explanation
"""
figure = plt.figure(figsize=(15,8))
plt.hist([raw_data[raw_data['Survived']==1]['Age'], raw_data[raw_data['Survived']==0]['Age']],
stacked=True, color=['g','r'],
bins=30, label=['Survivors','Non-survivors'])
plt.xlabel('Age')
plt.ylabel('No. passengers')
plt.legend();
"""
Explanation: Breakdown by age
End of explanation
"""
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(15,8))
plt.hist([raw_data[raw_data['Survived']==1]['Fare'], raw_data[raw_data['Survived']==0]['Fare']],
stacked=True, color=['g','r'],
bins=50, label=['Survivors','Non-survivors'])
plt.xlabel('Fare')
plt.ylabel('No. passengers')
plt.legend();
"""
Explanation: Breakdown by fare
End of explanation
"""
data_for_prediction = raw_data[['Name', 'Sex', 'Age', 'Fare', 'Survived']]
data_for_prediction.is_copy = False
data_for_prediction.info()
"""
Explanation: The plots above indicate that passengers who were female, under 20 years old, and who paid higher fares had a better chance of surviving.
How can we use this information to try to predict whether an arbitrary passenger would have survived the accident?
Predicting the chances of survival
Let's start by keeping only the information we want to use - the passenger names will also be kept for later inspection:
End of explanation
"""
data_for_prediction['Sex'] = data_for_prediction.Sex.map({'male': 0, 'female': 1})
data_for_prediction.info()
"""
Explanation: Numeric encoding of strings
Some information is encoded as strings: the passenger's gender, for example, is represented by the strings male and female. To use this information in our future predictive model, we must convert these values to a numeric format:
End of explanation
"""
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(data_for_prediction, test_size=0.25, random_state=254)
len(train_data), len(test_data)
"""
Explanation: Train/test split
To evaluate the model's predictive ability, part of the data (in this case, 25%) must be set aside as a test set.
A test set is a dataset for which the values to be predicted are known but which is not used to train the model; it is therefore used to assess how many correct predictions the model can make on examples it never saw during training. This lets us estimate, in an unbiased way, how well the model should perform when applied to real data.
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier().fit(train_data[['Sex', 'Age', 'Fare']], train_data.Survived)
tree.score(test_data[['Sex', 'Age', 'Fare']], test_data.Survived)
"""
Explanation: Predicting survival chances with decision trees
We will use a simple Decision Tree model to predict whether a passenger would have survived the Titanic based on their gender, age, and ticket fare.
End of explanation
"""
test_data.is_copy = False
test_data['Predicted'] = tree.predict(test_data[['Sex', 'Age', 'Fare']])
test_data[test_data.Predicted != test_data.Survived]
"""
Explanation: With a simple decision tree, the result above indicates that it would be possible to correctly predict survival for about 80% of the passengers.
An interesting exercise after training a model is to take a look at the cases where it gets things wrong:
End of explanation
"""
|
google/starthinker
|
colabs/dbm.ipynb
|
apache-2.0
|
!pip install git+https://github.com/google/starthinker
"""
Explanation: DV360 Report
Create a DV360 report.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'report':'{}', # Report body and filters.
'delete':False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter DV360 Report Recipe Parameters
Reference field values from the DV360 API to build a report.
Copy and paste the JSON definition of a report, sample for reference.
The report is only created; a separate script is required to move the data.
To reset a report, delete it from DV360 reporting.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dbm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'report':{'field':{'name':'report','kind':'json','order':1,'default':'{}','description':'Report body and filters.'}},
'delete':{'field':{'name':'delete','kind':'boolean','order':2,'default':False,'description':'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
"""
Explanation: 4. Execute DV360 Report
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
turbomanage/training-data-analyst
|
courses/machine_learning/deepdive2/time_series_prediction/solutions/3_modeling_bqml.ipynb
|
apache-2.0
|
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
%env
PROJECT = PROJECT
REGION = REGION
%%bash
sudo python3 -m pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo python3 -m pip install google-cloud-bigquery==1.6.1
"""
Explanation: Time Series Prediction with BQML and AutoML
Objectives
1. Learn how to use BQML to create a classification time-series model using CREATE MODEL.
2. Learn how to use BQML to create a linear regression time-series model.
3. Learn how to use AutoML Tables to build a time series model from data in BigQuery.
Set up environment variables and load necessary libraries
End of explanation
"""
from google.cloud import bigquery
from IPython import get_ipython
bq = bigquery.Client(project=PROJECT)
def create_dataset():
dataset = bigquery.Dataset(bq.dataset("stock_market"))
try:
bq.create_dataset(dataset) # Will fail if dataset already exists.
print("Dataset created")
except:
print("Dataset already exists")
def create_features_table():
error = None
try:
bq.query('''
CREATE TABLE stock_market.eps_percent_change_sp500
AS
SELECT *
FROM `asl-ml-immersion.stock_market.eps_percent_change_sp500`
''').to_dataframe()
except Exception as e:
error = str(e)
if error is None:
print('Table created')
elif 'Already Exists' in error:
print('Table already exists.')
else:
raise Exception('Table was not created.')
create_dataset()
create_features_table()
"""
Explanation: Create the dataset
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
stock_market.eps_percent_change_sp500
LIMIT
10
"""
Explanation: Review the dataset
In the previous lab we created the data we will use for modeling and saved it as tables in BigQuery. Let's examine that table again to see that everything is as we expect. Then we will build a model on this table using BigQuery ML.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
stock_market.direction_model OPTIONS(model_type = "logistic_reg",
input_label_cols = ["direction"]) AS
-- query to fetch training data
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 70
"""
Explanation: Using BQML
Create classification model for direction
To create a model
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
We'll start with creating a classification model to predict the direction of each stock.
We'll take a random split using the symbol value. With about 500 different values, using ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1 will give 30 distinct symbol values which corresponds to about 171,000 training examples. After taking 70% for training, we will be building a model on about 110,000 training examples.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.EVALUATE(MODEL `stock_market.direction_model`,
(
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85))
"""
Explanation: Get training statistics and examine training info
After creating our model, we can evaluate the performance using the ML.EVALUATE function. With this command, we can find the precision, recall, accuracy, F1-score, and AUC of our classification model.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `stock_market.direction_model`)
ORDER BY iteration
"""
Explanation: We can also examine the training statistics collected by BigQuery. To view training results, we use the ML.TRAINING_INFO function.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
WITH
eval_data AS (
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85)
SELECT
direction,
(COUNT(direction)* 100 / (
SELECT
COUNT(*)
FROM
eval_data)) AS percentage
FROM
eval_data
GROUP BY
direction
"""
Explanation: Compare to simple benchmark
Another way to assess the performance of our model is to compare it with a simple benchmark. We can do this by seeing what kind of accuracy we would get using the naive strategy of just predicting the majority class. For the training dataset, the majority class is 'STAY'. With the following query we can see how this naive strategy would perform on the eval set.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
stock_market.price_model OPTIONS(model_type = "linear_reg",
input_label_cols = ["normalized_change"]) AS
-- query to fetch training data
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
normalized_change
FROM
`stock_market.eps_percent_change_sp500`
WHERE
normalized_change IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 70
"""
Explanation: So, the naive strategy of just guessing the majority class would have an accuracy of 0.5509 on the eval dataset, just below our BQML model.
Create regression model for normalized change
We can also use BigQuery to train a regression model to predict the normalized change for each stock. To do this in BigQuery we need only change the OPTIONS when calling CREATE OR REPLACE MODEL. This will give us a more precise prediction rather than just predicting if the stock will go up, down, or stay the same. Thus, we can treat this problem as either a regression problem or a classification problem, depending on the business needs.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.EVALUATE(MODEL `stock_market.price_model`,
(
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
normalized_change
FROM
`stock_market.eps_percent_change_sp500`
WHERE
normalized_change IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85))
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `stock_market.price_model`)
ORDER BY iteration
"""
Explanation: Just as before, we can examine the evaluation metrics for our regression model and the training statistics in BigQuery.
End of explanation
"""
|
bjornaa/roppy
|
examples/flux_feie_shetland.ipynb
|
mit
|
# Imports
# The class depends on `numpy` and is part of `roppy`. To read the data `netCDF4` is needed.
# The graphic package `matplotlib` is not required for `FluxSection` but is used for visualisation in this notebook.
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
import roppy
%matplotlib inline
"""
Explanation: Examples on the use of roppy's FluxSection class
The FluxSection class implements a staircase approximation to a section,
starting and ending in psi-points and following U- and V-edges.
No interpolation is needed to estimate the flux, giving good conservation
properties. On the other hand, this limits the flexibility of the approach.
As distances get distorted, depending on the stair shape, it is not suited
for plotting normal current and other properties along the section.
End of explanation
"""
# Settings
# Data
romsfile = './data/ocean_avg_example.nc'
tstep = 2 # Third time frame in the file
# Section end points
lon0, lat0 = 4.72, 60.75 # Section start - Feie
lon1, lat1 = -0.67, 60.75 # Section stop - Shetland
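# Going from Feie (east) towards Shetland (west), "right of the section" is northwards,
# so positive transports computed below correspond to northward flow.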
"""
Explanation: User settings
First the ROMS dataset and the section must be described. The section is described by its end points.
By convention the flux is considered positive if the direction is to the right of the section
going from the first to the second end point.
End of explanation
"""
# Make SGrid and FluxSection objects
fid = Dataset(romsfile)
grid = roppy.SGrid(fid)
# End points in grid coordinates
x0, y0 = grid.ll2xy(lon0, lat0)
x1, y1 = grid.ll2xy(lon1, lat1)
# Find nearest psi-points
i0, i1, j0, j1 = [int(np.ceil(v)) for v in [x0, x1, y0, y1]]
# The staircase flux section
I, J = roppy.staircase_from_line(i0, i1, j0, j1)
sec = roppy.FluxSection(grid, I, J)
"""
Explanation: Make SGrid and FluxSection objects
This datafile contains enough horizontal and vertical information to determine
an SGrid object.
The SGrid class has a method ll2xy to convert from lon/lat to grid coordinates.
Thereafter the nearest $\psi$-points are found and a staircase curve joining
the two $\psi$-points is constructed. Then a FluxSection object can be created.
End of explanation
"""
# Make a quick and dirty horizontal plot of the section
# Read topography
H = fid.variables['h'][:,:]
Levels = (0, 100, 300, 1000, 3000, 5000)
plt.contourf(H, levels=Levels, cmap=plt.get_cmap('Blues'))
plt.colorbar()
# Poor man's coastline
plt.contour(H, levels=[10], colors='black')
# Plot the staircase section
# NOTE: subtract 0.5 to go from psi-index to grid coordinate
plt.plot(sec.I - 0.5, sec.J - 0.5, lw=2, color='red') # Staircase
"""
Explanation: Visual check
To check the section specification plot it in a simple map.
End of explanation
"""
# Zoom in on the staircase
# Plot blue line between end points
plt.plot([sec.I[0]-0.5, sec.I[-1]-0.5], [sec.J[0]-0.5, sec.J[-1]-0.5])
# Plot red staircase curve
plt.plot(sec.I-0.5, sec.J-0.5, lw=2, color='red')
plt.grid(True)
_ = plt.axis('equal')
"""
Explanation: Staircase approximation
The next plot is just an illustration of how the function staircase_from_line works, approximating the straight line in the grid plane as closely as possible.
End of explanation
"""
# Read the velocity
U = fid.variables['u'][tstep, :, :, :]
V = fid.variables['v'][tstep, :, :, :]
"""
Explanation: Read the velocity
To compute the fluxes, we need the 3D velocity components
End of explanation
"""
# Compute volume flux through the section
# ----------------------------------------
netflux,posflux = sec.transport(U, V)
print("Net flux = {:6.2f} Sv".format(netflux * 1e-6))
print("Total northwards flux = {:6.2f} Sv".format(posflux * 1e-6))
print("Total southwards flux = {:6.2f} Sv".format((posflux-netflux)*1e-6))
"""
Explanation: Total volume flux
Obtaining the total volume flux is easy, there is a convenient method transport for this purpose returning the net and positive transport to the right of the section (northwards in this case).
End of explanation
"""
# Flux of specific water mass
# --------------------------------
# Read hydrography
S = fid.variables['salt'][tstep, :, :]
T = fid.variables['temp'][tstep, :, :]
# Compute section arrays
Flux = sec.flux_array(U, V)
S = sec.sample3D(S)
T = sec.sample3D(T)
# Compute Atlantic flux where S > 34.9 and T > 5
S_lim = 34.9
T_lim = 5.0
cond = (S > S_lim) & (T > T_lim)
net_flux = np.sum(Flux[cond]) * 1e-6
# Northwards component
cond1 = (cond) & (Flux > 0)
north_flux = np.sum(Flux[cond1]) * 1e-6
print("Net flux, S > {:4.1f}, T > {:4.1f} = {:6.2f} Sv".format(S_lim, T_lim, net_flux))
print("Northwards flux, S > {:4.1f}, T > {:4.1f} = {:6.2f} Sv".format(S_lim, T_lim, north_flux))
print("Southwards flux, S > {:4.1f}, T > {:4.1f} = {:6.2f} Sv".format(S_lim, T_lim, north_flux - net_flux))
"""
Explanation: Flux limited by watermass
The class is flexible enough that more complicated flux calculations can be done.
The method flux_array returns a 2D array of flux through the cells along the section.
Using numpy's advanced logical indexing, different conditions can be prescribed.
For instance a specific water mass can be given by inequalities in salinity and temperature.
NOTE: Different conditions must be parenthesized before using logical operators.
The 3D hydrographic fields must be sampled to the section cells, this is done by the method sample3D.
End of explanation
"""
# Salt flux
# ---------
rho = 1025.0 # Density, could compute this from hydrography
salt_flux = rho * np.sum(Flux * S)
# unit Gg/s = kt/s
print "Net salt flux = {:5.2f} Gg/s".format(salt_flux * 1e-9)
"""
Explanation: Property flux
The flux of properties can be determined. Different definitions and/or reference levels may be applied.
As an example, the code below computes the total transport of salt by the net flux through the section
End of explanation
"""
# Flux in a depth range
# ----------------------
depth_lim = 100.0
# Have not sampled the depth of the rho-points,
# instead approximate by the average from w-depths
z_r = 0.5*(sec.z_w[:-1,:] + sec.z_w[1:,:])
# Shallow flux
cond = z_r > -depth_lim
net_flux = np.sum(Flux[cond]) * 1e-6
cond1 = (cond) & (Flux > 0)
north_flux = np.sum(Flux[cond1]) * 1e-6
print("Net flux, depth < {:4.0f} = {:6.3f} Sv".format(depth_lim, net_flux))
print("Northwards flux, depth < {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux))
print("Southwards flux, depth < {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux - net_flux))
# Deep flux
cond = z_r < -depth_lim
net_flux = np.sum(Flux[cond]) * 1e-6
cond1 = (cond) & (Flux > 0)
north_flux = np.sum(Flux[cond1]) * 1e-6
print("")
print("Net flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, net_flux))
print("Northwards flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux))
print("Southwards flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux - net_flux))
"""
Explanation: Flux in a depth range
The simplest way to compute the flux in a depth range is to use only
flux cells where the $\rho$-point is in the depth range. This can be
done by logical indexing.
End of explanation
"""
depth_lim = 100
# Make an integration kernel
K = (sec.z_w[1:,:] + depth_lim) / sec.dZ # Fraction of cell above limit
np.clip(K, 0.0, 1.0, out=K)
net_flux = np.sum(K*Flux) * 1e-6
north_flux = np.sum((K*Flux)[Flux>0]) *1e-6
print("Net flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, net_flux))
print("Northwards flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux))
"""
Explanation: Alternative algorithm
A more accurate algorithm is to include the fraction of the grid cell
above the depth limit. This can be done by an integrating kernel,
that is a 2D array K where the entries are zero if the cell is totally
below the limit, one if totally above the limit and the fraction above the
limit if the flux cell contains the limit. The total flux above the limit is found
by multiplying the flux array with K and summing.
This algorithm is not more complicated than above. In our example, the
estimated flux values are almost equal; we had to include the third decimal to
notice the difference.
End of explanation
"""
# Examine the staircase
# ------------------------
# Flux in X-direction (mostly east)
cond = sec.Eu # Only use U-edges
# Extend the array in the vertical
cond = np.logical_and.outer(sec.N*[True], cond)
net_flux = np.sum(Flux[cond]) * 1e-6
# Positive component
cond1 = (cond) & (Flux > 0)
pos_flux = np.sum(Flux[cond1]) * 1e-6
print("net X flux = {:6.2f} Sv".format(net_flux))
print("pos X flux = {:6.2f} Sv".format(pos_flux))
print("neg X flux = {:6.2f} Sv".format(pos_flux-net_flux))
# Flux in Y-direction (mostly north)
cond = np.logical_and.outer(sec.N*[True], sec.Ev) # Only V-edges
net_flux = np.sum(Flux[cond]) * 1e-6
# Positive component
cond1 = (cond) & (Flux > 0)
pos_flux = np.sum(Flux[cond1]) * 1e-6
print("")
print("net Y flux = {:6.2f} Sv".format(net_flux))
print("pos Y flux = {:6.2f} Sv".format(pos_flux))
print("neg Y flux = {:6.2f} Sv".format(pos_flux-net_flux))
"""
Explanation: Componentwise fluxes
It may be instructional to examine the staircase behaviour of the flux.
We may separate the flux across U- and V-edges respectively. The
FluxSection class has 1D horizontal logical arrays Eu and Ev
pointing to the respective edge types.
To use the logical indexing pattern
from the other examples, this has to be extended vertically so that we get
a condition on the flux cell indicating whether it is part of a U- or V-edge.
The numpy function logical_and.outer with a True argument may be used
for this. [Better ways?]
End of explanation
"""
# Print the limits of the section
## print I[0], I[-1], J[0], J[-1]
# Specify a subgrid
i0, i1, j0, j1 = 94, 131, 114, 130 # Minimal subgrid
# Check that the section is contained in the subgrid
assert i0 < I[0] < i1 and i0 < I[-1] < i1
assert j0 < J[0] < j1 and j0 < J[-1] < j1
# Make a SGrid object for the subgrid
grd1 = roppy.SGrid(fid, subgrid=(i0,i1,j0,j1))
# Make a FluxSection object
sec1 = roppy.FluxSection(grd1, I, J)
# Read velocity for the subgrid only
U1 = fid.variables['u'][tstep, :, grd1.Ju, grd1.Iu]
V1 = fid.variables['v'][tstep, :, grd1.Jv, grd1.Iv]
# Compute net and positive fluxes
netflux1, posflux1 = sec1.transport(U1, V1)
# Control that the values have not changed from the computations for the whole grid
print(" whole grid subgrid")
print("Net flux : {:6.3f} {:6.3f} Sv".format(netflux * 1e-6, netflux1 * 1e-6))
print("Total northwards flux : {:6.3f} {:6.3f} Sv".format(posflux * 1e-6, posflux1 * 1e-6))
"""
Explanation: Flux calculations on a subgrid
It may save memory and I/O time to work on a subgrid. Just specify the subgrid using
the SGrid subgrid convention and use the staircase function unchanged. The SGrid object
is responsible for handling any offsets.
End of explanation
"""
|
RyanAlberts/Springbaord-Capstone-Project
|
Statistics_Exercises/sliderule_dsi_inferential_statistics_exercise_3.ipynb
|
mit
|
%matplotlib inline
import pandas as pd
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import bokeh.plotting as bkp
from mpl_toolkits.axes_grid1 import make_axes_locatable
# read in readmissions data provided
hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')
"""
Explanation: Hospital Readmissions Data Analysis and Recommendations for Reduction
Background
In October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio, by dividing a hospital’s number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions.
Exercise Directions
In this exercise, you will:
+ critique a preliminary analysis of readmissions data and recommendations (provided below) for reducing the readmissions rate
+ construct a statistically sound analysis and make recommendations of your own
More instructions provided below. Include your work in this notebook and submit to your Github account.
Resources
Data source: https://data.medicare.gov/Hospital-Compare/Hospital-Readmission-Reduction/9n3s-kdb3
More information: http://www.cms.gov/Medicare/medicare-fee-for-service-payment/acuteinpatientPPS/readmissions-reduction-program.html
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
"""
# deal with missing and inconvenient portions of data
clean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']
clean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)
clean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')
# generate a scatterplot for number of discharges vs. excess rate of readmissions
# lists work better with matplotlib scatterplot function
x = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]
y = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x, y,alpha=0.2)
ax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)
ax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)
ax.set_xlim([0, max(x)])
ax.set_xlabel('Number of discharges', fontsize=12)
ax.set_ylabel('Excess rate of readmissions', fontsize=12)
ax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14)
ax.grid(True)
fig.tight_layout()
"""
Explanation: Preliminary Analysis
End of explanation
"""
# A. Do you agree with the above analysis and recommendations? Why or why not?
import seaborn as sns
relevant_columns = clean_hospital_read_df[['Excess Readmission Ratio', 'Number of Discharges']][81:-3]
sns.regplot(relevant_columns['Number of Discharges'], relevant_columns['Excess Readmission Ratio'])
"""
Explanation: Preliminary Report
Read the following results/report. While you are reading it, think about if the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform.
A. Initial observations based on the plot above
+ Overall, rate of readmissions is trending down with increasing number of discharges
+ With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)
+ With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green)
B. Statistics
+ In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1
+ In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1
C. Conclusions
+ There is a significant correlation between hospital capacity (number of discharges) and readmission rates.
+ Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions.
D. Regulatory policy recommendations
+ Hospitals/facilties with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation.
+ Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges.
End of explanation
"""
rv =relevant_columns
print rv[rv['Number of Discharges'] < 100][['Excess Readmission Ratio']].mean()
print '\nPercent of subset with excess readmission rate > 1: ', len(rv[(rv['Number of Discharges'] < 100) & (rv['Excess Readmission Ratio'] > 1)]) / len(rv[relevant_columns['Number of Discharges'] < 100])
print '\n', rv[rv['Number of Discharges'] > 1000][['Excess Readmission Ratio']].mean()
print '\nPercent of subset with excess readmission rate > 1: ', len(rv[(rv['Number of Discharges'] > 1000) & (rv['Excess Readmission Ratio'] > 1)]) / len(rv[relevant_columns['Number of Discharges'] > 1000])
"""
Explanation: <div class="span5 alert alert-info">
### Exercise
Include your work on the following **in this notebook and submit to your Github account**.
A. Do you agree with the above analysis and recommendations? Why or why not?
B. Provide support for your arguments and your own recommendations with a statistically sound analysis:
1. Setup an appropriate hypothesis test.
2. Compute and report the observed significance value (or p-value).
3. Report statistical significance for $\alpha$ = .01.
4. Discuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client?
5. Look at the scatterplot above.
- What are the advantages and disadvantages of using this plot to convey information?
- Construct another plot that conveys the same information in a more direct manner.
You can compose in notebook cells using Markdown:
+ In the control panel at the top, choose Cell > Cell Type > Markdown
+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
</div>
Overall, rate of readmissions is trending down with increasing number of discharges
Agree, according to regression trend line shown above
With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)
Agree
With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green)
Agree
End of explanation
"""
np.corrcoef(rv['Number of Discharges'], rv['Excess Readmission Ratio'])
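# (Added sketch, not part of the original solution.) A quick check of the
# significance of the correlation above and a two-sample test between small
# and large hospitals; assumes scipy is available in this environment.
from scipy import stats
rv_clean = rv.dropna()
r, r_pval = stats.pearsonr(rv_clean['Number of Discharges'], rv_clean['Excess Readmission Ratio'])
print('Pearson r = {:.3f}, p-value = {:.2e}'.format(r, r_pval))
small = rv_clean[rv_clean['Number of Discharges'] < 100]['Excess Readmission Ratio']
large = rv_clean[rv_clean['Number of Discharges'] > 1000]['Excess Readmission Ratio']
t_stat, t_pval = stats.ttest_ind(small, large, equal_var=False)
print('Welch t-test (small vs. large): t = {:.2f}, p-value = {:.2e}'.format(t_stat, t_pval))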
"""
Explanation: In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1
Accurate
In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1
Correction: mean excess readmission rate is 0.979, and 44.565% have excess readmission rate > 1
End of explanation
"""
|
asharel/ml
|
LAB2/Recursos/Red_Dimension.ipynb
|
gpl-3.0
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn.feature_selection as FS
data = pd.read_csv("./wine_dataset.csv", delimiter=";")
data.head()
"""
Explanation: Dimensionality Reduction Lab
Filter methods
Wrapper methods
Extraction methods:
LDA
PCA
End of explanation
"""
data["Type"] = pd.Categorical.from_array(data["Type"]).codes
data["Type"].replace("A",0)
data["Type"].replace("B",1)
data["Type"].replace("C",2)
data.head()
data.describe()
"""
Explanation: The Type column is replaced with integer category codes
End of explanation
"""
data_y = data["Type"]
data_X = data.drop("Type", 1)
data_X.head()
"""
Explanation: We separate the target column from the rest of the predictor variables
End of explanation
"""
mi = FS.mutual_info_classif(data_X, data_y)
print(mi)
data_X.head(0)
names=data_X.axes[1]
names
indice=np.argsort(mi)[::-1]
print(indice)
print(names[indice])
plt.figure(figsize=(8,6))
plt.subplot(121)
plt.scatter(data[data.Type==1].Flavanoids,data[data.Type==1].Color_Intensity, color='red')
plt.scatter(data[data.Type==2].Flavanoids,data[data.Type==2].Color_Intensity, color='blue')
plt.scatter(data[data.Type==0].Flavanoids,data[data.Type==0].Color_Intensity, color='green')
plt.title('Good Predictor Variables \n Flavanoids vs Color_Intensity')
plt.xlabel('Flavanoids')
plt.ylabel('Color_Intensity')
plt.legend(['A','B','C'])
plt.subplot(122)
plt.scatter(data[data.Type==1].Ash,data[data.Type==1].Nonflavanoid_Phenols, color='red')
plt.scatter(data[data.Type==2].Ash,data[data.Type==2].Nonflavanoid_Phenols, color='blue')
plt.scatter(data[data.Type==0].Ash,data[data.Type==0].Nonflavanoid_Phenols, color='green')
plt.title('Ash vs Nonflavanoid_Phenols')
plt.xlabel('Ash')
plt.ylabel('Nonflavanoid_Phenols')
plt.legend(['A','B','C'])
plt.show()
"""
Explanation: Mutual Information
End of explanation
"""
chi = FS.chi2(X = data_X, y = data["Type"])[0]
print(chi)
indice_chi=np.argsort(chi)[::-1]
print(indice_chi)
print(names[indice_chi])
plt.figure()
plt.scatter(data[data.Type==1].Proline,data[data.Type==1].Color_Intensity, color='red')
plt.scatter(data[data.Type==2].Proline,data[data.Type==2].Color_Intensity, color='blue')
plt.scatter(data[data.Type==0].Proline,data[data.Type==0].Color_Intensity, color='green')
plt.title('Good Predictor Variables Chi-Square \n Proline vs Color_Intensity')
plt.xlabel('Proline')
plt.ylabel('Color_Intensity')
plt.legend(['A','B','C'])
plt.show()
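# (Added sketch.) A filter method in practice: SelectKBest with the chi2 score
# keeps only the k best-scoring variables, reproducing the ranking above.
selector = FS.SelectKBest(FS.chi2, k=5)
X_top5 = selector.fit_transform(data_X, data_y)
print(names[selector.get_support()])
print(X_top5.shape)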
"""
Explanation: Chi-Square
Now we apply Chi-Square to select the informative variables
End of explanation
"""
from sklearn.decomposition.pca import PCA
"""
Explanation: Principal Component Analysis (PCA)
End of explanation
"""
pca = PCA()
pca.fit(data_X)
plt.plot(pca.explained_variance_)
plt.ylabel("eigenvalues")
plt.xlabel("position")
plt.show()
print ("Eigenvalues\n",pca.explained_variance_)
# Percentage of variance explained for each components
print('\nExplained variance ratio (first two components):\n %s'
% str(pca.explained_variance_ratio_))
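# (Added sketch.) The cumulative explained variance is a common way to decide
# how many components to keep.
print("Cumulative explained variance ratio:\n", np.cumsum(pca.explained_variance_ratio_))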
"""
Explanation: PCA without normalization
End of explanation
"""
pca = PCA(n_components=2)
X_pca = pd.DataFrame(pca.fit_transform(data_X))
pca_A = X_pca[data_y == 0]
pca_B = X_pca[data_y == 1]
pca_C = X_pca[data_y == 2]
#plot
plt.scatter(x = pca_A[0], y = pca_A[1], c="blue")
plt.scatter(x = pca_B[0], y = pca_B[1], c="turquoise")
plt.scatter(x = pca_C[0], y = pca_C[1], c="darkorange")
plt.xlabel("First Component")
plt.ylabel("Second Component")
plt.legend(["A","B","C"])
plt.show()
"""
Explanation: We plot the projection onto the first two principal components
End of explanation
"""
from sklearn import preprocessing
X_scaled = preprocessing.scale(data_X)
pca = PCA()
pca.fit(X_scaled)
plt.plot(pca.explained_variance_)
plt.ylabel("eigenvalues")
plt.xlabel("position")
plt.show()
print ("Eigenvalues\n",pca.explained_variance_)
# Percentage of variance explained for each components
print('\nExplained variance ratio (first two components):\n %s'
% str(pca.explained_variance_ratio_))
pca = PCA(n_components=2)
X_pca = pd.DataFrame(pca.fit_transform(X_scaled))
# Note: data_y holds integer codes (0/1/2), not the original letters
pca_A = X_pca[data_y == 0]
pca_B = X_pca[data_y == 1]
pca_C = X_pca[data_y == 2]
#plot
plt.scatter(x = pca_A[0], y = pca_A[1], c="blue")
plt.scatter(x = pca_B[0], y = pca_B[1], c="turquoise")
plt.scatter(x = pca_C[0], y = pca_C[1], c="darkorange")
plt.xlabel("First Component")
plt.ylabel("Second Component")
plt.legend(["A","B","C"])
plt.show()
"""
Explanation: PCA with Normalization
End of explanation
"""
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
"""
Explanation: Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis
A classifier with a linear decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule.
The model fits a Gaussian density to each class.
The fitted model can also be used to reduce the dimensionality of the input by projecting it to the most discriminative directions.
End of explanation
"""
lda = LDA()
lda.fit(data_X,data_y)
print("Porcentaje explicado:", lda.explained_variance_ratio_)
X_lda = pd.DataFrame(lda.fit_transform(data_X, data_y))
# Split into the 3 classes to give them different colors
lda_A = X_lda[data_y == 0]
lda_B = X_lda[data_y == 1]
lda_C = X_lda[data_y == 2]
#plot
plt.scatter(x = lda_A[0], y = lda_A[1], c="blue")
plt.scatter(x = lda_B[0], y = lda_B[1], c="turquoise")
plt.scatter(x = lda_C[0], y = lda_C[1], c="darkorange")
plt.title("LDA without normalization")
plt.xlabel("First LDA Component")
plt.ylabel("Second LDA Component")
plt.legend((["A","B","C"]), loc="lower right")
plt.show()
"""
Explanation: LDA without normalization
End of explanation
"""
lda = LDA(n_components=2)
lda.fit(X_scaled, data_y)
print("Explained variance ratio:", lda.explained_variance_ratio_)
# Transform the scaled data as well, so the plot actually reflects the normalized LDA
X_lda = pd.DataFrame(lda.fit_transform(X_scaled, data_y))
# Split into the 3 classes to give them different colors
lda_A = X_lda[data_y == 0]
lda_B = X_lda[data_y == 1]
lda_C = X_lda[data_y == 2]
#plot
plt.scatter(x = lda_A[0], y = lda_A[1], c="blue")
plt.scatter(x = lda_B[0], y = lda_B[1], c="turquoise")
plt.scatter(x = lda_C[0], y = lda_C[1], c="darkorange")
plt.xlabel("First LDA Component")
plt.ylabel("Second LDA Component")
plt.legend(["A","B","C"],loc="lower right")
plt.title("LDA with normalization")
plt.show()
"""
Explanation: LDA with normalization
End of explanation
"""
|
giacomov/astromodels
|
examples/Priors_for_Bayesian_analysis.ipynb
|
bsd-3-clause
|
from astromodels import *
# Create a point source named "pts1"
pts1 = PointSource('pts1',ra=125.23, dec=17.98, spectral_shape=powerlaw())
# Create the model
my_model = Model(pts1)
"""
Explanation: Priors for Bayesian analysis
Astromodels supports the definition of priors for all parameters in your model. You can use any function as a prior (although of course not every function makes a sensible prior; the choice is up to you).
First let's define a simple model containing one point source (see the "Model tutorial" for more info):
End of explanation
"""
uniform_prior.info()
"""
Explanation: Now let's assign uniform priors to the parameters of the powerlaw function. The function uniform_prior is defined like this:
End of explanation
"""
# Set 'lower_bound' to 1e-15, 'upper_bound' to 1e-7, and leave the 'value' parameter
# at its default value
pts1.spectrum.main.powerlaw.K.prior = log_uniform_prior(lower_bound = 1e-15, upper_bound=1e-7)
# Display it
pts1.spectrum.main.powerlaw.K.display()
# Set 'lower_bound' to -10, 'upper bound' to 0, and leave the 'value' parameter
# to the default value
pts1.spectrum.main.powerlaw.index.prior = uniform_prior(lower_bound = -10, upper_bound=0)
pts1.spectrum.main.powerlaw.index.display()
"""
Explanation: We can use it as such:
End of explanation
"""
# Create a short cut to avoid writing too much
po = pts1.spectrum.main.powerlaw
# Evaluate the prior at point = 2.3e-21
point = 2.3e-21
prior_value1 = po.K.prior(point * po.K.unit)
# Equivalently we can use the fast call with no units
prior_value2 = po.K.prior.fast_call(point)
assert prior_value1 == prior_value2
print("The prior for logK evaluate to %s in %s" % (prior_value1, point))
"""
Explanation: Now we can evaluate the prior simply as:
End of explanation
"""
# You need matplotlib installed for this
import matplotlib.pyplot as plt
# This is for the IPython notebook
%matplotlib inline
# Let's get 50 points logarithmically spaced between 1e-30 and 1e2
random_points = np.logspace(-30,2,50)
plt.loglog(random_points,pts1.spectrum.main.powerlaw.K.prior.fast_call(random_points), '.' )
#plt.xscale("log")
#plt.ylim([-0.1,1.2])
plt.xlabel("value of K")
plt.ylabel("Prior")
"""
Explanation: Let's plot the value of the prior at some random locations:
End of explanation
"""
|
Serulab/Py4Bio
|
notebooks/Chapter 12 - Python and Databases.ipynb
|
mit
|
!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2
!mkdir samples
!tar xvfj samples.tar.bz2 -C samples
!wget https://raw.githubusercontent.com/Serulab/Py4Bio/master/code/ch12/PythonU.sql
!apt-get -y install mysql-server
!/etc/init.d/mysql start
!mysql -e 'create database PythonU;'
!mysql PythonU < PythonU.sql
!mysql -e "UPDATE mysql.user SET authentication_string=password('mypassword'),host='%',plugin='mysql_native_password' WHERE user='root';flush privileges;"
"""
Explanation: Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics.
Note: The code in this chapter requires database servers (MySQL and MongoDB) to run, so you should provide them and then change the appropriate parameters in the connection string. The SQLite example can run directly in this Jupyter Notebook.
End of explanation
"""
!pip install PyMySQL
import pymysql
db = pymysql.connect(host="localhost", user="root", passwd="mypassword", db="PythonU")
cursor = db.cursor()
cursor.execute("SELECT * FROM Students")
cursor.fetchone()
cursor.fetchone()
cursor.fetchone()
cursor.fetchall()
"""
Explanation: Chapter 12: Python and Databases
End of explanation
"""
!/etc/init.d/mysql stop
get_ipython().system_raw('mysqld_safe --skip-grant-tables &')
!mysql -e "UPDATE mysql.user SET authentication_string=password('secret'),host='%',plugin='mysql_native_password' WHERE user='root';flush privileges;"
import pymysql
db = pymysql.connect(host='localhost',
user='root', passwd='secret', db='PythonU')
cursor = db.cursor()
recs = cursor.execute('SELECT * FROM Students')
for x in range(recs):
print(cursor.fetchone())
"""
Explanation: Listing 12.1: pymysql1.py: Reading results one at a time
End of explanation
"""
import pymysql
db = pymysql.connect(host='localhost',
user='root', passwd='secret', db='PythonU')
cursor = db.cursor()
cursor.execute('SELECT * FROM Students')
for row in cursor:
print(row)
"""
Explanation: Listing 12.2: pymysql2.py: Iterating directly over the DB cursor
End of explanation
"""
import sqlite3
db = sqlite3.connect('samples/PythonU.db')
cursor = db.cursor()
cursor.execute('Select * from Students')
for row in cursor:
print(row)
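# (Added sketch, not from the book.) DB-API placeholders avoid building SQL by
# string concatenation; sqlite3 uses '?'. This assumes the Students table in
# PythonU.db has a 'Name' column.
cursor.execute('SELECT * FROM Students WHERE Name = ?', ('Harry',))
print(cursor.fetchall())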
!apt install mongodb
!/etc/init.d/mongodb start
from pymongo import MongoClient
from pymongo import MongoClient
client = MongoClient('localhost:27017')
client.list_database_names()
db = client.PythonU
client.list_database_names()
client.drop_database('Employee')
students = db.Students
student_1 = {'Name':'Harry', 'LastName':'Wilkinson',
'DateJoined':'2016-02-10', 'OutstandingBalance':False,
'Courses':[('Python 101', 7, '2016/1'), ('Mathematics for CS',
8, '2016/1')]}
student_2 = {'Name':'Jonathan', 'LastName':'Hunt',
'DateJoined':'2014-02-16', 'OutstandingBalance':False,
'Courses':[('Python 101', 6, '2016/1'), ('Mathematics for CS',
9, '2015/2')]}
students.count()
students.insert(student_1)
students.insert(student_2)
students.count()
from bson.objectid import ObjectId
search_id = {'_id':ObjectId('5ed902d980378228f849a40d')}
my_student = students.find_one(search_id)
my_student['LastName']
my_student['_id'].generation_time
for student in students.find():
print(student['Name'], student['LastName'])
list(students.find())
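# (Added sketch.) find() also accepts a filter document; this reuses the
# 'OutstandingBalance' field of the documents inserted above.
for student in students.find({'OutstandingBalance': False}):
    print(student['Name'], student['LastName'])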
"""
Explanation: Listing 12.3: sqlite1.py: Same script as 12.2, but with SQLite
End of explanation
"""
|
sdss/marvin
|
docs/sphinx/jupyter/saving_and_restoring.ipynb
|
bsd-3-clause
|
# let's grab the H-alpha emission line flux map
from marvin.tools.maps import Maps
mapfile = '/Users/Brian/Work/Manga/analysis/v2_0_1/2.0.2/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz'
maps = Maps(filename=mapfile)
haflux = maps.getMap('emline_gflux', channel='ha_6564')
print(haflux)
"""
Explanation: Saving and Restoring Marvin objects
With all Marvin Tools, you can save the object you are working with locally to your filesystem, and restore it later on. This works using the Python pickle package. The objects are pickled (i.e. formatted and compressed) into a pickle file object. All Marvin Tools, Queries, and Results can be saved and restored.
We can save a map...
End of explanation
"""
haflux.save('my_haflux_map')
"""
Explanation: We can save any Marvin object with the save method. This method accepts a string filename+path as the name of the pickled file. If a full file path is not specified, it defaults to the current directory. save also accepts an overwrite boolean keyword in case you want to overwrite an existing file.
End of explanation
"""
# import the individual Map class
from marvin.tools.quantities import Map
# restore the Halpha flux map into a new variable
filename = '/Users/Brian/Work/github_projects/Marvin/docs/sphinx/jupyter/my_haflux_map'
newflux = Map.restore(filename)
print(newflux)
"""
Explanation: Now we have a saved map. We can restore it anytime we want using the restore class method. A class method means you call it from the imported class itself, and not on the instance. restore accepts a string filename as input and returns the instantiated object.
End of explanation
"""
from marvin.tools.query import Query, Results
# let's make a query
f = 'nsa.z < 0.1'
q = Query(search_filter=f)
print(q)
# and run it
r = q.run()
print(r)
"""
Explanation: We can also save and restore Marvin Queries and Results. First let's create and run a simple query...
End of explanation
"""
q.save()
r.save()
"""
Explanation: Let's save both the query and results for later use. Without specifying a filename, by default Marvin will name the query or results using your provided search filter.
End of explanation
"""
newquery = Query.restore('/Users/Brian/marvin_query_nsa.z<0.1.mpf')
print('query', newquery)
print('filter', newquery.search_filter)
myresults = Results.restore('/Users/Brian/marvin_results_nsa.z<0.1.mpf')
print(myresults.results)
"""
Explanation: By default, if you don't specify a filename for the pickled file, Marvin will auto assign one for you with extension .mpf (MaNGA Pickle File).
Now let's restore...
End of explanation
"""
|
rcurrie/tumornormal
|
treehouse.ipynb
|
apache-2.0
|
import os
import json
import numpy as np
import pandas as pd
import tensorflow as tf
import keras
import matplotlib.pyplot as pyplot
# fix random seed for reproducibility
np.random.seed(42)
# See https://github.com/h5py/h5py/issues/712
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
"""
Explanation: Classify Treehouse
Load models trained in other notebooks and see how they do on the Treehouse samples
End of explanation
"""
%%time
X = pd.read_hdf("data/tcga_target_gtex.h5", "expression")
Y = pd.read_hdf("data/tcga_target_gtex.h5", "labels")
X_treehouse = pd.read_hdf("data/treehouse.h5", "expression")
Y_treehouse = pd.read_hdf("data/treehouse.h5", "labels")
"""
Explanation: Load Datasets
End of explanation
"""
# Load the model
model = keras.models.model_from_json(open("models/primary_site.model.json").read())
model.load_weights("models/primary_site.weights.h5")
params = json.loads(open("models/primary_site.params.json").read())
# Let's run it on the training set just to make sure we haven't lost something...
from sklearn import preprocessing
encoder = preprocessing.LabelBinarizer()
y_onehot = encoder.fit_transform(Y.primary_site.values)
# Prune X to only include genes in the gene sets
X_pruned = X.drop(labels=(set(X.columns) - set(params["genes"])), axis=1, errors="ignore")
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.evaluate(X_pruned, y_onehot)
# Now let's try on Treehouse
# Prune X to only include genes in the gene sets
X_treehouse_pruned = X_treehouse.drop(labels=(set(X.columns) - set(params["genes"])), axis=1, errors="ignore")
Y_treehouse["primary_site_predicted"] = [", ".join(["{}({:0.2f})".format(params["labels"][i], p[i])
for i in p.argsort()[-3:][::-1]])
for p in model.predict(X_treehouse_pruned)]
Y_treehouse.primary_site_predicted[0:3]
Y_treehouse.to_csv("models/treehouse_predictions.tsv", sep="\t")
"""
Explanation: Primary Site Classifier
End of explanation
"""
Y = pd.read_csv("models/Y_treehouse_predictions.tsv", sep="\t", )
Y.head()
import glob
import json
id = "TH01_0051_S01"
conf_path = glob.glob(
"/treehouse/archive/downstream/{}/tertiary/treehouse-protocol*/compendium*/conf.json".format(y.id))
# if conf_path:
# with open(conf_path[0]) as f:
# conf = json.loads(f.read())
# if "disease" in conf["info"]:
# print(conf["info"]["disease"])
# clinical.head()  # 'clinical' is not defined anywhere in this notebook
# conf             # only defined if the commented-out block above is run
"""
Explanation: Treehouse Pathways
Load predictions from pathway model, enrich with pathways and disease from tertiary protocol and analyze
End of explanation
"""
|
conversationai/unintended-ml-bias-analysis
|
archive/unintended_ml_bias/Bias_fuzzed_test_set.ipynb
|
apache-2.0
|
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pandas as pd
import urllib
import matplotlib.pyplot as plt
%matplotlib inline
COMMENTS = '../data/toxicity_annotated_comments.tsv'
ANNOTATIONS = '../data/toxicity_annotations.tsv'
comments = pd.read_csv(COMMENTS, sep='\t')
annotations = pd.read_csv(ANNOTATIONS, sep='\t')
# convert rev_id from float to int
comments['rev_id'] = comments['rev_id'].astype(int)
annotations['rev_id'] = annotations['rev_id'].astype(int)
# remove newline and tab tokens
comments['comment'] = comments['comment'].apply(lambda x: x.replace("NEWLINE_TOKEN", " "))
comments['comment'] = comments['comment'].apply(lambda x: x.replace("TAB_TOKEN", " "))
comments.head()
# label a comment as toxic if the majority of annotators did so
comments.set_index('rev_id', inplace=True)
comments['toxic'] = annotations.groupby('rev_id')['toxicity'].mean() > 0.5
"""
Explanation: Fuzzing a test set for model bias analysis
This notebook creates a test set "fuzzed" over a set of identity terms. This fuzzed test set can be used for analyzing bias in a model.
The idea is that, for the most part, the specific identity term used should not be the key feature determining whether a comment is toxic or non-toxic. For example, the sentence "I had a <x> friend growing up" should be considered non-toxic, and "All <x> people must be wiped off the earth" should be considered toxic for all values of x in our terms set.
Given a set of terms, this code finds comments that mention those terms and replaces each instance with a random other term in the set. This fuzzed test set can be used to evaluate a model for bias. If the model performs worse on the fuzzed test set than on the non-fuzzed test set, that suggests the model relies on the specific identity terms rather than the surrounding context, i.e. it is likely biased with respect to those terms.
Data prep
End of explanation
"""
import re
def word_bound(pat):
"""Adds word boundary matchers to pattern."""
return r'\b{}\b'.format(pat)
IDENTITY_TERMS = ['christian', 'catholic', 'protestant', 'muslim', 'sikh', 'jewish', 'jew',
'lesbian', 'gay', 'transgender', 'queer', 'homosexual', 'heterosexual']
%%time
test_comments = comments[comments.split == 'test']
IDENTITY_TERMS_BOUNDED = [word_bound(term) for term in IDENTITY_TERMS]
identity_comments = test_comments[test_comments.comment.str.contains('|'.join(IDENTITY_TERMS_BOUNDED), case=False)]
identity_comments[identity_comments.comment.str.len() < 30].comment
import random
def fuzz_comment(text, identity_terms):
terms_present = [term for term in identity_terms
if re.search(word_bound(term), text, flags=re.IGNORECASE)]
# TODO(jetpack): earlier replacements may be "overwritten" by later replacements.
# not sure if there's a non-random effect from iterating this list.
# since each choice is random, i don't think so?
for term in terms_present:
# Replace text with random other term.
text, _count = re.subn(word_bound(term), random.choice(identity_terms), text, flags=re.IGNORECASE)
return text
fuzz_comment("Gay is a term that primarily refers to a homosexual person or the trait of being homosexual", IDENTITY_TERMS)
identity_comments[identity_comments.comment.str.len() < 30].comment.apply(lambda s: fuzz_comment(s, IDENTITY_TERMS))
"""
Explanation: Identity term fuzzing
End of explanation
"""
len(test_comments)
len(identity_comments)
_non = test_comments.drop(identity_comments.index)
def build_fuzzed_testset(comments, identity_terms=IDENTITY_TERMS):
"""Builds a test sets 'fuzzed' over the given identity terms.
Returns both a fuzzed and non-fuzzed test set. Each are comprised
of the same comments. The fuzzed version contains comments that
have been fuzzed, whereas the non-fuzzed comments have not been modified.
"""
any_terms_pat = '|'.join(word_bound(term) for term in identity_terms)
test_comments = comments[comments.split == 'test'][['comment', 'toxic']].copy()
identity_comments = test_comments[test_comments.comment.str.contains(any_terms_pat, case=False)]
non_identity_comments = test_comments.drop(identity_comments.index).sample(len(identity_comments))
fuzzed_identity_comments = identity_comments.copy()
fuzzed_identity_comments.loc[:, 'comment'] = fuzzed_identity_comments['comment'].apply(lambda s: fuzz_comment(s, IDENTITY_TERMS))
nonfuzzed_testset = pd.concat([identity_comments, non_identity_comments]).sort_index()
fuzzed_testset = pd.concat([fuzzed_identity_comments, non_identity_comments]).sort_index()
return {'fuzzed': fuzzed_testset, 'nonfuzzed': nonfuzzed_testset}
testsets = build_fuzzed_testset(comments)
testsets['fuzzed'].query('comment.str.len() < 50').sample(15)
testsets['fuzzed'].to_csv('../eval_datasets/toxicity_fuzzed_testset.csv')
testsets['nonfuzzed'].to_csv('../eval_datasets/toxicity_nonfuzzed_testset.csv')
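# (Illustrative sketch, not part of the original notebook; assumes scikit-learn
# is installed.) Why non-identity comments are mixed in: AUC is rank-based, so
# uniformly elevated scores on identity comments alone can still give a perfect
# AUC. Adding comments whose scores are not elevated exposes the shift.
from sklearn.metrics import roc_auc_score
identity_labels = [0, 0, 1, 1]                # 1 = toxic
identity_scores = [0.6, 0.7, 0.8, 0.9]        # shifted upwards, but well ranked
print(roc_auc_score(identity_labels, identity_scores))  # 1.0 despite the shift
mixed_labels = identity_labels + [0, 0, 1, 1]
mixed_scores = identity_scores + [0.1, 0.2, 0.4, 0.5]
print(roc_auc_score(mixed_labels, mixed_scores))        # drops to 0.75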
"""
Explanation: Write new fuzzed test set
We also randomly sample comments that don't mention identity terms. This is because the absolute score ranges are important. For example, AUC can still be high even if all identity term comments have elevated scores relative to other comments. Including non-identity term comments will cause AUC to drop if this is the case.
End of explanation
"""
|
carthach/essentia
|
src/examples/tutorial/example_truepeakdetector.ipynb
|
agpl-3.0
|
import essentia.standard as es
import numpy as np
import matplotlib
matplotlib.use('nbagg')
import matplotlib.pyplot as plt
import ipywidgets as wg
from IPython.display import Audio
from essentia import array as esarr
plt.rcParams["figure.figsize"] =(9, 5)
"""
Explanation: TruePeakDetector use example
This algorithm implements the “true-peak” level meter as described in the second annex of the ITU-R BS.1770-2[1] or the ITU-R BS.1770-4[2] (default).
Note: the parameters 'blockDC' and 'emphatise' work only when 'version' is set to 2.
References:
[1] Series, B. S. (2011). Recommendation ITU-R BS.1770-2. Algorithms to
measure audio programme loudness and true-peak audio level,
https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-2-201103-S!!PDF-E.pdf
[2] Series, B. S. (2015). Recommendation ITU-R BS.1770-4. Algorithms to
measure audio programme loudness and true-peak audio level,
https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-4-201510-I!!PDF-E.pdf
End of explanation
"""
# Parameters
duration = 10 # s
fs = 1 # hz
k = 1. # amplitude
oversamplingFactor = 4 # factor of oversampling for the real signal
nSamples = fs * duration
time = np.arange(-nSamples/2, nSamples/2,
2 ** -oversamplingFactor, dtype='float')
samplingPoints = time[::2 ** oversamplingFactor]
def shifted_sinc(x, k, offset):
xShifted = x - offset
y = np.zeros(len(xShifted))
for idx, i in enumerate(xShifted):
if not i:
y[idx] = k
else:
y[idx] = (k * np.sin(np.pi * i) / (np.pi * i))
return y
def resampleStrategy(y, fs, quality=0, oversampling=4):
yResample = es.Resample(inputSampleRate=fs,
outputSampleRate=fs*oversampling,
quality=quality)(y.astype(np.float32))
tResample = np.arange(np.min(samplingPoints), np.max(samplingPoints)
+ 1, 1. / (fs * oversampling))
tResample = tResample[:len(yResample)]
# getting the stimated peaks
yResMax = np.max(yResample)
tResMax = tResample[np.argmax(yResample)]
return yResample, tResample, yResMax, tResMax
def parabolicInterpolation(y, threshold=.6):
# todo plot the parabol maybe
positions, amplitudes = es.PeakDetection(threshold=threshold)\
(y.astype(np.float32))
pos = int(positions[0] * (len(y-1)))
a = y[pos - 1]
b = y[pos]
c = y[pos + 1]
tIntMax = samplingPoints[pos] + (a - c) / (2 * (a - 2 * b + c))
yIntMax = b - ((a - b) ** 2) / (8 * (a - 2 * b + c))
return tIntMax, yIntMax
def process():
## Processing
# "real" sinc
yReal = shifted_sinc(time, k, offset.value)
# sampled sinc
y = shifted_sinc(samplingPoints, k, offset.value)
# Resample strategy
yResample, tResample, yResMax, tResMax = \
resampleStrategy(y, fs, quality=0, oversampling=4)
# Parabolic Interpolation extrategy
tIntMax, yIntMax = parabolicInterpolation(y)
## Plotting
ax.clear()
plt.title('Interpeak detection estrategies')
ax.grid(True)
ax.grid(xdata=samplingPoints)
ax.plot(time, yReal, label='real signal')
yRealMax = np.max(yReal)
sampledLabel = 'sampled signal. Error:{:.3f}'\
.format(np.abs(np.max(y) - yRealMax))
ax.plot(samplingPoints, y, label=sampledLabel, ls='-.',
color='r', marker='x', markersize=6, alpha=.7)
ax.plot(tResample, yResample, ls='-.',
color='y', marker='x', alpha=.7)
resMaxLabel = 'Resample Peak. Error:{:.3f}'\
.format(np.abs(yResMax - yRealMax))
ax.plot(tResMax, yResMax, label= resMaxLabel,
color='y', marker = 'x', markersize=12)
intMaxLabel = 'Interpolation Peak. Error:{:.3f}'\
.format(np.abs(yIntMax - yRealMax))
ax.plot(tIntMax, yIntMax, label= intMaxLabel,
marker = 'x', markersize=12)
fig.legend()
fig.show()
# matplotlib.use('TkAgg')
offset = wg.FloatSlider()
offset.max = 1
offset.min = -1
offset.step = .1
display(offset)
fig, ax = plt.subplots()
process()
def on_value_change(change):
process()
offset.observe(on_value_change, names='value')
"""
Explanation: The problem of true peak estimation
The following widget demonstrates two inter-sample peak detection techniques:
- Signal upsampling.
- Parabolic interpolation.
The accuracy of both methods can be assessed in real time by shifting the sampling points of a sinc function and evaluating the error produced by each approach.
End of explanation
"""
fs = 44100.
eps = np.finfo(np.float32).eps
audio_dir = '../../audio/'
audio = es.MonoLoader(filename='{}/{}'.format(audio_dir,
'recorded/distorted.wav'),
sampleRate=fs)()
times = np.linspace(0, len(audio) / fs, len(audio))
peakLocations, output = es.TruePeakDetector(version=2)(audio)
oversampledtimes = np.linspace(0, len(output) / (fs*4), len(output))
random_indexes = [1, 300, 1000, 3000]
figu, axes = plt.subplots(len(random_indexes))
plt.subplots_adjust(hspace=.9)
for idx, ridx in enumerate(random_indexes):
l0 = axes[idx].axhline(0, color='r', alpha=.7, ls = '--')
l1 = axes[idx].plot(times, 20 * np.log10(np.abs(audio + eps)))
l2 = axes[idx].plot(oversampledtimes, 20 * np.log10(output + eps), alpha=.8)
axes[idx].set_xlim([peakLocations[ridx] / fs - .0002, peakLocations[ridx] / fs + .0002])
axes[idx].set_ylim([-.15, 0.15])
axes[idx].set_title('Clipping peak located at {:.2f}s'.format(peakLocations[ridx] / (fs*4)))
axes[idx].set_ylabel('dB')
figu.legend([l0, l1[-1], l2[-1]], ['Dynamic range limit', 'Original signal', 'Resampled signal'])
plt.show()
"""
Explanation: As can be seen from the widget, the oversampling strategy produces a smaller error in most cases.
The ITU-R BS.1770 approach
The ITU-R BS.1770 recommendation proposes the following signal chain based on the oversampling strategy:
-12.04dB --> x4 oversample --> LowPass --> abs() --> 20 * log10() --> +12.04dB
In our implementation, the gain control is dropped from the chain, as it is not required when working with floating-point values, and the result is returned in natural units, since it can be converted to dB as a postprocessing step. Here we can see an example.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
stable/_downloads/7ba58cd4e9bc2622d60527d21fc13577/decoding_spatio_temporal_source.ipynb
|
bsd-3-clause
|
# Author: Denis A. Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Jean-Remi King <jeanremi.king@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
import mne
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.decoding import (cross_val_multiscore, LinearModel, SlidingEstimator,
get_coef)
print(__doc__)
data_path = mne.datasets.sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
fname_fwd = meg_path / 'sample_audvis-meg-oct-6-fwd.fif'
fname_evoked = meg_path / 'sample_audvis-ave.fif'
subjects_dir = data_path / 'subjects'
"""
Explanation: Decoding source space data
Decoding applied to MEG data in source space on the left cortical surface. Here
univariate feature selection is employed for speed purposes to confine the
classification to a small number of potentially relevant features. The
classifier is then trained on the selected features of epochs in source space.
End of explanation
"""
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
fname_cov = meg_path / 'sample_audvis-cov.fif'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax = -0.2, 0.8
event_id = dict(aud_r=2, vis_r=4) # load contra-lateral conditions
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 10., fir_design='firwin')
events = mne.read_events(event_fname)
# Set up pick list: MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443'] # mark bads
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6),
decim=5) # decimate to save memory and increase speed
"""
Explanation: Set parameters
End of explanation
"""
snr = 3.0
noise_cov = mne.read_cov(fname_cov)
inverse_operator = read_inverse_operator(fname_inv)
stcs = apply_inverse_epochs(epochs, inverse_operator,
lambda2=1.0 / snr ** 2, verbose=False,
method="dSPM", pick_ori="normal")
"""
Explanation: Compute inverse solution
End of explanation
"""
# Retrieve source space data into an array
X = np.array([stc.lh_data for stc in stcs]) # only keep left hemisphere
y = epochs.events[:, 2]
# prepare a series of classifier applied at each time sample
clf = make_pipeline(StandardScaler(), # z-score normalization
SelectKBest(f_classif, k=500), # select features for speed
LinearModel(LogisticRegression(C=1, solver='liblinear')))
time_decod = SlidingEstimator(clf, scoring='roc_auc')
# Run cross-validated decoding analyses:
scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)
# Plot average decoding scores of 5 splits
fig, ax = plt.subplots(1)
ax.plot(epochs.times, scores.mean(0), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.axvline(0, color='k')
plt.legend()
"""
Explanation: Decoding in source space using a logistic regression
End of explanation
"""
# The fitting need not be cross-validated because the weights are based on
# the training sets
time_decod.fit(X, y)
# Retrieve patterns after inverting the z-score normalization step:
patterns = get_coef(time_decod, 'patterns_', inverse_transform=True)
stc = stcs[0] # for convenience, lookup parameters from first stc
vertices = [stc.lh_vertno, np.array([], int)] # empty array for right hemi
stc_feat = mne.SourceEstimate(np.abs(patterns), vertices=vertices,
tmin=stc.tmin, tstep=stc.tstep, subject='sample')
brain = stc_feat.plot(views=['lat'], transparent=True,
initial_time=0.1, time_unit='s',
subjects_dir=subjects_dir)
"""
Explanation: To investigate weights, we need to retrieve the patterns of a fitted model
End of explanation
"""
|
bkimo/discrete-math-with-python
|
lab2-bubble-sort.ipynb
|
mit
|
def bubbleSort(alist):
for i in range(0, len(alist)-1):
for j in range(0, len(alist)-1-i):
if alist[j] > alist[j+1]:
alist[j], alist[j+1] = alist[j+1], alist[j]
alist = [54,26,93,17,77,31,44,55,20]
bubbleSort(alist)
print(alist)
"""
Explanation: Algorithm Complexity: Array and Bubble Sort
An algorithm is a list of instructions for doing something, and algorithm design is essential to computer science. Here we will study simple algorithms of sorting an array of numbers.
An array is a sequence of variables $x_1, x_2, x_3, ..., x_n$; e.g., $[54, 26, 93, 17, 77, 31, 44, 55, 20]$.
Notice that the order of the elements in an array matters, and an array can have duplicate entries.
A sort is an algorithm that guarantees that
$$ x_1\leq x_2\leq x_3\leq \cdots \leq x_n $$
after the algorithm finishes.
Bubble sort
Let $x_1, x_2, ..., x_n$ be an array whose elements can be compared by $\leq $. The following algorithm is called a bubble sort.
The bubble sort makes multiple passes through an array. It compares adjacent items and exchanges those that are out of order. Each pass through the array places the next largest value in its proper place. In essence, each item “bubbles” up to the location where it belongs.
The following figure shows the first pass of a bubble sort. The shaded items are being compared to see if they are out of order. If there are $n$ items in the array, then there are $n-1$ pairs of items that need to be compared on the first pass. It is important to note that once the largest value in the array is part of a pair, it will continually be moved along until the pass is complete.
At the start of the second pass, the largest value is now in place. There are $n-1$ items left to sort, meaning that there will be $n-2$ pairs. Since each pass places the next largest value in place, the total number of passes necessary will be $n-1$. After completing the $n-1$ passes, the smallest item must be in the correct position with no further processing required.
The exchange operation, sometimes called a "swap" as in the algorithm, is slightly different in Python than in most other programming languages. Typically, swapping two elements in an array requires a temporary storage location (an additional memory location). A code fragment such as temp = alist[i]; alist[i] = alist[j]; alist[j] = temp
will exchange the $i$th and $j$th items in the array. Without the temporary storage, one of the values would be overwritten.
In Python, it is possible to perform simultaneous assignment. The statement a,b=b,a will result in two assignment statements being done at the same time. Using simultaneous assignment, the exchange operation can be done in one statement: alist[i], alist[j] = alist[j], alist[i].
The following example shows the complete bubbleSort function working on the array shown above.
End of explanation
"""
def shortBubbleSort(alist):
exchanges = True
passnum = len(alist)-1
while passnum > 0 and exchanges:
exchanges = False
for i in range(passnum):
# print(i)
if alist[i]>alist[i+1]:
exchanges = True
alist[i], alist[i+1] = alist[i+1], alist[i]
passnum = passnum-1
# print('passnum = ', passnum)
alist = [54,26,93,17,77,31,44,55,20]
#alist = [17, 20, 26, 31, 44, 54, 55, 77, 93]
shortBubbleSort(alist)
print(alist)
"""
Explanation: To analyze the bubble sort, we should note that regardless of how the items are arranged in the initial array, $n-1$ passes will be made to sort an array of size $n$. The table below shows the number of comparisons for each pass. The total number of comparisons is the sum of the first $n-1$ integers, which is $\frac{n(n-1)}{2}$. This is still $\mathcal{O}(n^2)$ comparisons. In the best case, if the list is already ordered, no exchanges will be made. However, in the worst case, every comparison will cause an exchange. On average, we exchange half of the time.
Remark A bubble sort is often considered the most inefficient sorting method since it must exchange items before the final location is known. These “wasted” exchange operations are very costly. However, because the bubble sort makes passes through the entire unsorted portion of the list, it has the capability to do something most sorting algorithms cannot. In particular, if during a pass there are no exchanges, then we know that the list must have been sorted already. A bubble sort can be modified to stop early if it finds that the list has become sorted. This means that for lists that require just a few passes, a bubble sort may have an advantage in that it will recognize the sorted list and stop. The following shows this modification, which is often referred to as the short bubble.
End of explanation
"""
from matplotlib import pyplot
import numpy as np
import timeit
from functools import partial
import random
def fconst(N):
"""
O(1) function
"""
x = 1
def flinear(N):
"""
O(n) function
"""
x = [i for i in range(N)]
def fsquare(N):
"""
O(n^2) function
"""
for i in range(N):
for j in range(N):
x = i*j
def fshuffle(N):
# O(N)
random.shuffle(list(range(N)))
def fsort(N):
x = list(range(N))
random.shuffle(x)
x.sort()
def plotTC(fn, nMin, nMax, nInc, nTests):
"""
Run timer and plot time complexity
"""
x = []
y = []
for i in range(nMin, nMax, nInc):
N = i
testNTimer = timeit.Timer(partial(fn, N))
t = testNTimer.timeit(number=nTests)
x.append(i)
y.append(t)
p1 = pyplot.plot(x, y, 'o')
#pyplot.legend([p1,], [fn.__name__, ])
# main() function
def main():
print('Analyzing Algorithms...')
#plotTC(fconst, 10, 1000, 10, 10)
#plotTC(flinear, 10, 1000, 10, 10)
plotTC(fsquare, 10, 1000, 10, 10)
#plotTC(fshuffle, 10, 1000, 1000, 10)
#plotTC(fsort, 10, 1000, 10, 10)
# enable this in case you want to set y axis limits
#pyplot.ylim((-0.1, 0.5))
# show plot
pyplot.show()
# call main
if __name__ == '__main__':
main()
"""
Explanation: Plotting Algorithmic Time Complexity of a Function using Python
We can use Python's Timer and timeit facilities together with matplotlib to create a simple scheme for plotting empirical running times.
Here is the code. The code is quite simple; perhaps the only interesting thing is the use of partial to pass the function and the $N$ parameter into Timer. You can add your own function here and plot its time complexity.
End of explanation
"""
|
anugrah-saxena/pycroscopy
|
jupyter_notebooks/BE_Processing.ipynb
|
mit
|
!pip install -U numpy matplotlib Ipython ipywidgets pycroscopy
# Ensure python 3 compatibility
from __future__ import division, print_function, absolute_import
# Import necessary libraries:
# General utilities:
import sys
import os
# Computation:
import numpy as np
import h5py
# Visualization:
import matplotlib.pyplot as plt
from IPython.display import display
import ipywidgets as widgets
# Finally, pycroscopy itself
import pycroscopy as px
# set up notebook to show plots within the notebook
% matplotlib inline
"""
Explanation: Band Excitation data procesing using pycroscopy
Suhas Somnath, Chris R. Smith, Stephen Jesse
The Center for Nanophase Materials Science and The Institute for Functional Imaging for Materials <br>
Oak Ridge National Laboratory<br>
2/10/2017
Configure the notebook
End of explanation
"""
max_mem = 1024*8 # Maximum memory to use, in Mbs. Default = 1024
max_cores = None # Number of logical cores to use in fitting. None uses all but 2 available cores.
"""
Explanation: Set some basic parameters for computation
This notebook performs some functional fitting whose duration can be substantially decreased by using more memory and CPU cores. We have provided default values below but you may choose to change them if necessary.
End of explanation
"""
input_file_path = px.io_utils.uiGetFile(caption='Select translated .h5 file or raw experiment data',
filter='Parameters for raw BE data (*.txt *.mat *xls *.xlsx);; \
Translated file (*.h5)')
(data_dir, data_name) = os.path.split(input_file_path)
if input_file_path.endswith('.h5'):
# No translation here
h5_path = input_file_path
force = False # Set this to true to force patching of the datafile.
tl = px.LabViewH5Patcher()
hdf = tl.translate(h5_path, force_patch=force)
else:
# Set the data to be translated
data_path = input_file_path
(junk, base_name) = os.path.split(data_dir)
# Check if the data is in the new or old format. Initialize the correct translator for the format.
if base_name == 'newdataformat':
(junk, base_name) = os.path.split(junk)
translator = px.BEPSndfTranslator(max_mem_mb=max_mem)
else:
translator = px.BEodfTranslator(max_mem_mb=max_mem)
if base_name.endswith('_d'):
base_name = base_name[:-2]
# Translate the data
h5_path = translator.translate(data_path, show_plots=True, save_plots=False)
hdf = px.ioHDF5(h5_path)
print('Working on:\n' + h5_path)
h5_main = px.hdf_utils.getDataSet(hdf.file, 'Raw_Data')[0]
"""
Explanation: Make the data pycroscopy compatible
Converting the raw data into a pycroscopy compatible hierarchical data format (HDF or .h5) file gives you access to the fast fitting algorithms and powerful analysis functions within pycroscopy
H5 files:
are like smart containers that can store matrices with data, folders to organize these datasets, images, metadata like experimental parameters, links or shortcuts to datasets, etc.
are readily compatible with high-performance computing facilities
scale very efficiently from few kilobytes to several terabytes
can be read and modified using any language including Python, Matlab, C/C++, Java, Fortran, Igor Pro, etc.
You can load either of the following:
Any .mat or .txt parameter file from the original experiment
A .h5 file generated from the raw data using pycroscopy - skips translation
You can select desired file type by choosing the second option in the pull down menu on the bottom right of the file window
End of explanation
"""
print('Datasets and datagroups within the file:\n------------------------------------')
px.io.hdf_utils.print_tree(hdf.file)
print('\nThe main dataset:\n------------------------------------')
print(h5_main)
print('\nThe ancillary datasets:\n------------------------------------')
print(hdf.file['/Measurement_000/Channel_000/Position_Indices'])
print(hdf.file['/Measurement_000/Channel_000/Position_Values'])
print(hdf.file['/Measurement_000/Channel_000/Spectroscopic_Indices'])
print(hdf.file['/Measurement_000/Channel_000/Spectroscopic_Values'])
print('\nMetadata or attributes in a datagroup\n------------------------------------')
for key in hdf.file['/Measurement_000'].attrs:
print('{} : {}'.format(key, hdf.file['/Measurement_000'].attrs[key]))
"""
Explanation: Inspect the contents of this h5 data file
The file contents are stored in a tree structure, just like files on a conventional computer.
The data is stored as a 2D matrix (position, spectroscopic value) regardless of the dimensionality of the data. Thus, the positions will be arranged as row0-col0, row0-col1.... row0-colN, row1-col0.... and the data for each position is stored as it was chronologically collected
The main dataset is always accompanied by four ancillary datasets that explain the position and spectroscopic value of any given element in the dataset.
End of explanation
"""
h5_pos_inds = px.hdf_utils.getAuxData(h5_main, auxDataName='Position_Indices')[-1]
pos_sort = px.hdf_utils.get_sort_order(np.transpose(h5_pos_inds))
pos_dims = px.hdf_utils.get_dimensionality(np.transpose(h5_pos_inds), pos_sort)
pos_labels = np.array(px.hdf_utils.get_attr(h5_pos_inds, 'labels'))[pos_sort]
print(pos_labels, pos_dims)
parm_dict = hdf.file['/Measurement_000'].attrs
is_ckpfm = hdf.file.attrs['data_type'] == 'cKPFMData'
if is_ckpfm:
num_write_steps = parm_dict['VS_num_DC_write_steps']
num_read_steps = parm_dict['VS_num_read_steps']
num_fields = 2
"""
Explanation: Get some basic parameters from the H5 file
This information will be vital for futher analysis and visualization of the data
End of explanation
"""
px.be_viz_utils.jupyter_visualize_be_spectrograms(h5_main)
"""
Explanation: Visualize the raw data
Use the sliders below to visualize spatial maps (2D only for now), and spectrograms.
For simplicity, all the spectroscopic dimensions such as frequency, excitation bias, cycle, field, etc. have been collapsed to a single slider.
End of explanation
"""
sho_fit_points = 5 # The number of data points at each step to use when fitting
h5_sho_group = px.hdf_utils.findH5group(h5_main, 'SHO_Fit')
sho_fitter = px.BESHOmodel(h5_main, parallel=True)
if len(h5_sho_group) == 0:
print('No SHO fit found. Doing SHO Fitting now')
h5_sho_guess = sho_fitter.do_guess(strategy='complex_gaussian', processors=max_cores, options={'num_points':sho_fit_points})
h5_sho_fit = sho_fitter.do_fit(processors=max_cores)
else:
print('Taking previous SHO results already present in file')
h5_sho_guess = h5_sho_group[-1]['Guess']
try:
h5_sho_fit = h5_sho_group[-1]['Fit']
except KeyError:
print('Previously computed guess found. Now computing fit')
h5_sho_fit = sho_fitter.do_fit(processors=max_cores, h5_guess=h5_sho_guess)
"""
Explanation: Fit the Band Excitation (BE) spectra
Fit each of the acquired spectra to a simple harmonic oscillator (SHO) model to extract the following information regarding the response:
* Oscillation amplitude
* Phase
* Resonance frequency
* Quality factor
By default, the cell below will take any previous result instead of re-computing the SHO fit
End of explanation
"""
h5_sho_spec_inds = px.hdf_utils.getAuxData(h5_sho_fit, auxDataName='Spectroscopic_Indices')[0]
sho_spec_labels = px.io.hdf_utils.get_attr(h5_sho_spec_inds,'labels')
if is_ckpfm:
# It turns out that the read voltage index starts from 1 instead of 0
# Also the VDC indices are NOT repeating. They are just rising monotonically
write_volt_index = np.argwhere(sho_spec_labels == 'write_bias')[0][0]
read_volt_index = np.argwhere(sho_spec_labels == 'read_bias')[0][0]
h5_sho_spec_inds[read_volt_index, :] -= 1
h5_sho_spec_inds[write_volt_index, :] = np.tile(np.repeat(np.arange(num_write_steps), num_fields), num_read_steps)
(Nd_mat, success, nd_labels) = px.io.hdf_utils.reshape_to_Ndims(h5_sho_fit, get_labels=True)
print('Reshape Success: ' + str(success))
print(nd_labels)
print(Nd_mat.shape)
use_sho_guess = False
use_static_viz_func = False
if use_sho_guess:
sho_dset = h5_sho_guess
else:
sho_dset = h5_sho_fit
data_type = px.io.hdf_utils.get_attr(hdf.file, 'data_type')
if data_type == 'BELineData' or len(pos_dims) != 2:
use_static_viz_func = True
step_chan = None
else:
vs_mode = px.io.hdf_utils.get_attr(h5_main.parent.parent, 'VS_mode')
if vs_mode not in ['AC modulation mode with time reversal',
'DC modulation mode']:
use_static_viz_func = True
else:
if vs_mode == 'DC modulation mode':
step_chan = 'DC_Offset'
else:
step_chan = 'AC_Amplitude'
if not use_static_viz_func:
try:
# use interactive visualization
px.be_viz_utils.jupyter_visualize_beps_sho(sho_dset, step_chan)
except:
raise
print('There was a problem with the interactive visualizer')
use_static_viz_func = True
if use_static_viz_func:
# show plots of SHO results vs. applied bias
px.be_viz_utils.visualize_sho_results(sho_dset, show_plots=True,
save_plots=False)
"""
Explanation: Visualize the SHO results
Here, we visualize the parameters for the SHO fits. BE-line (3D) data is visualized via simple spatial maps of the SHO parameters while more complex BEPS datasets (4+ dimensions) can be visualized using a simple interactive visualizer below.
You can choose to visualize the guesses for SHO function or the final fit values from the first line of the cell below.
Use the sliders below to inspect the BE response at any given location.
End of explanation
"""
# Do the Loop Fitting on the SHO Fit dataset
loop_success = False
h5_loop_group = px.hdf_utils.findH5group(h5_sho_fit, 'Loop_Fit')
if len(h5_loop_group) == 0:
try:
loop_fitter = px.BELoopModel(h5_sho_fit, parallel=True)
print('No loop fits found. Fitting now....')
h5_loop_guess = loop_fitter.do_guess(processors=max_cores, max_mem=max_mem)
h5_loop_fit = loop_fitter.do_fit(processors=max_cores, max_mem=max_mem)
loop_success = True
except ValueError:
print('Loop fitting is applicable only to DC spectroscopy datasets!')
else:
loop_success = True
print('Taking previously computed loop fits')
h5_loop_guess = h5_loop_group[-1]['Guess']
h5_loop_fit = h5_loop_group[-1]['Fit']
"""
Explanation: Fit loops to a function
This is applicable only to DC voltage spectroscopy datasets from BEPS. The PFM hysteresis loops in this dataset will be projected to maximize the loop area and then fitted to a function.
Note: This computation generally takes a while for reasonably sized datasets.
End of explanation
"""
# Prepare some variables for plotting loops fits and guesses
# Plot the Loop Guess and Fit Results
if loop_success:
h5_projected_loops = h5_loop_guess.parent['Projected_Loops']
h5_proj_spec_inds = px.hdf_utils.getAuxData(h5_projected_loops,
auxDataName='Spectroscopic_Indices')[-1]
h5_proj_spec_vals = px.hdf_utils.getAuxData(h5_projected_loops,
auxDataName='Spectroscopic_Values')[-1]
# reshape the vdc_vec into DC_step by Loop
sort_order = px.hdf_utils.get_sort_order(h5_proj_spec_inds)
dims = px.hdf_utils.get_dimensionality(h5_proj_spec_inds[()],
sort_order[::-1])
vdc_vec = np.reshape(h5_proj_spec_vals[h5_proj_spec_vals.attrs['DC_Offset']], dims).T
# Also reshape the projected loops to Positions-DC_Step-Loop
proj_nd, _ = px.hdf_utils.reshape_to_Ndims(h5_projected_loops)
proj_3d = np.reshape(proj_nd, [h5_projected_loops.shape[0],
proj_nd.shape[2], -1])
"""
Explanation: Prepare datasets for visualization
End of explanation
"""
use_static_plots = False
if loop_success:
if not use_static_plots:
try:
px.be_viz_utils.jupyter_visualize_beps_loops(h5_projected_loops, h5_loop_guess, h5_loop_fit)
except:
print('There was a problem with the interactive visualizer')
use_static_plots = True
if use_static_plots:
for iloop in range(h5_loop_guess.shape[1]):
fig, ax = px.be_viz_utils.plot_loop_guess_fit(vdc_vec[:, iloop], proj_3d[:, :, iloop],
h5_loop_guess[:, iloop], h5_loop_fit[:, iloop],
title='Loop {} - All Positions'.format(iloop))
"""
Explanation: Visualize Loop fits
End of explanation
"""
# hdf.close()
"""
Explanation: Save and close
Save the .h5 file that we are working on by closing it. <br>
Also, consider exporting this notebook as a notebook or an html file. <br> To do this, go to File >> Download as >> HTML
Finally, consider saving this notebook if necessary.
End of explanation
"""
|
keras-team/keras-io
|
examples/nlp/ipynb/active_learning_review_classification.ipynb
|
apache-2.0
|
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import re
import string
tfds.disable_progress_bar()
"""
Explanation: Review Classification using Active Learning
Author: Darshan Deshpande<br>
Date created: 2021/10/29<br>
Last modified: 2021/10/29<br>
Description: Demonstrating the advantages of active learning through review classification.
Introduction
With the growth of data-centric Machine Learning, Active Learning has grown in popularity
amongst businesses and researchers. Active Learning seeks to progressively
train ML models so that the resulting model requires less training data to
achieve competitive scores.
The structure of an Active Learning pipeline involves a classifier and an oracle. The
oracle is an annotator that cleans, selects, labels the data, and feeds it to the model
when required. The oracle is a trained individual or a group of individuals that
ensure consistency in labeling of new data.
The process starts with annotating a small subset of the full dataset and training an
initial model. The best model checkpoint is saved and then tested on a balanced test
set. The test set must be carefully sampled because the full training process will be
dependent on it. Once we have the initial evaluation scores, the oracle is tasked with
labeling more samples; the number of data points to be sampled is usually determined by
the business requirements. After that, the newly sampled data is added to the training
set, and the training procedure repeats. This cycle continues until either an
acceptable score is reached or some other business metric is met.
This tutorial provides a basic demonstration of how Active Learning works by
demonstrating a ratio-based (least confidence) sampling strategy that results in lower
overall false positive and negative rates when compared to a model trained on the entire
dataset. This sampling falls under the domain of uncertainty sampling, in which new
data points are sampled based on the uncertainty that the model outputs for the
corresponding label. In our example, we compare our model's false positive and false
negative rates and annotate the new data based on their ratio.
Some other sampling techniques include:
Committee sampling:
Using multiple models to vote for the best data points to be sampled
Entropy reduction:
Sampling according to an entropy threshold, selecting more of the samples that produce the highest entropy score.
Minimum margin based sampling:
Selects data points closest to the decision boundary
Importing required libraries
End of explanation
"""
dataset = tfds.load(
"imdb_reviews",
split="train + test",
as_supervised=True,
batch_size=-1,
shuffle_files=False,
)
reviews, labels = tfds.as_numpy(dataset)
print("Total examples:", reviews.shape[0])
"""
Explanation: Loading and preprocessing the data
We will be using the IMDB reviews dataset for our experiments. This dataset has 50,000
reviews in total, including training and testing splits. We will merge these splits and
sample our own, balanced training, validation and testing sets.
End of explanation
"""
val_split = 2500
test_split = 2500
train_split = 7500
# Separating the negative and positive samples for manual stratification
x_positives, y_positives = reviews[labels == 1], labels[labels == 1]
x_negatives, y_negatives = reviews[labels == 0], labels[labels == 0]
# Creating training, validation and testing splits
x_val, y_val = (
tf.concat((x_positives[:val_split], x_negatives[:val_split]), 0),
tf.concat((y_positives[:val_split], y_negatives[:val_split]), 0),
)
x_test, y_test = (
tf.concat(
(
x_positives[val_split : val_split + test_split],
x_negatives[val_split : val_split + test_split],
),
0,
),
tf.concat(
(
y_positives[val_split : val_split + test_split],
y_negatives[val_split : val_split + test_split],
),
0,
),
)
x_train, y_train = (
tf.concat(
(
x_positives[val_split + test_split : val_split + test_split + train_split],
x_negatives[val_split + test_split : val_split + test_split + train_split],
),
0,
),
tf.concat(
(
y_positives[val_split + test_split : val_split + test_split + train_split],
y_negatives[val_split + test_split : val_split + test_split + train_split],
),
0,
),
)
# Remaining pool of samples are stored separately. These are only labeled as and when required
x_pool_positives, y_pool_positives = (
x_positives[val_split + test_split + train_split :],
y_positives[val_split + test_split + train_split :],
)
x_pool_negatives, y_pool_negatives = (
x_negatives[val_split + test_split + train_split :],
y_negatives[val_split + test_split + train_split :],
)
# Creating TF Datasets for faster prefetching and parallelization
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
pool_negatives = tf.data.Dataset.from_tensor_slices(
(x_pool_negatives, y_pool_negatives)
)
pool_positives = tf.data.Dataset.from_tensor_slices(
(x_pool_positives, y_pool_positives)
)
print(f"Initial training set size: {len(train_dataset)}")
print(f"Validation set size: {len(val_dataset)}")
print(f"Testing set size: {len(test_dataset)}")
print(f"Unlabeled negative pool: {len(pool_negatives)}")
print(f"Unlabeled positive pool: {len(pool_positives)}")
"""
Explanation: Active learning starts with labeling a subset of data.
For the ratio sampling technique that we will be using, we will need well-balanced training,
validation and testing splits.
End of explanation
"""
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
return tf.strings.regex_replace(
stripped_html, f"[{re.escape(string.punctuation)}]", ""
)
vectorizer = layers.TextVectorization(
3000, standardize=custom_standardization, output_sequence_length=150
)
# Adapting the dataset
vectorizer.adapt(
train_dataset.map(lambda x, y: x, num_parallel_calls=tf.data.AUTOTUNE).batch(256)
)
def vectorize_text(text, label):
text = vectorizer(text)
return text, label
train_dataset = train_dataset.map(
vectorize_text, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
pool_negatives = pool_negatives.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)
pool_positives = pool_positives.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)
val_dataset = val_dataset.batch(256).map(
vectorize_text, num_parallel_calls=tf.data.AUTOTUNE
)
test_dataset = test_dataset.batch(256).map(
vectorize_text, num_parallel_calls=tf.data.AUTOTUNE
)
"""
Explanation: Fitting the TextVectorization layer
Since we are working with text data, we will need to encode the text strings as vectors which
would then be passed through an Embedding layer. To make this tokenization process
faster, we use the map() function with its parallelization functionality.
End of explanation
"""
# Helper function for merging new history objects with older ones
def append_history(losses, val_losses, accuracy, val_accuracy, history):
losses = losses + history.history["loss"]
val_losses = val_losses + history.history["val_loss"]
accuracy = accuracy + history.history["binary_accuracy"]
val_accuracy = val_accuracy + history.history["val_binary_accuracy"]
return losses, val_losses, accuracy, val_accuracy
# Plotter function
def plot_history(losses, val_losses, accuracies, val_accuracies):
plt.plot(losses)
plt.plot(val_losses)
plt.legend(["train_loss", "val_loss"])
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show()
plt.plot(accuracies)
plt.plot(val_accuracies)
plt.legend(["train_accuracy", "val_accuracy"])
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.show()
"""
Explanation: Creating Helper Functions
End of explanation
"""
def create_model():
model = keras.models.Sequential(
[
layers.Input(shape=(150,)),
layers.Embedding(input_dim=3000, output_dim=128),
layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
layers.GlobalMaxPool1D(),
layers.Dense(20, activation="relu"),
layers.Dropout(0.5),
layers.Dense(1, activation="sigmoid"),
]
)
model.summary()
return model
"""
Explanation: Creating the Model
We create a small bidirectional LSTM model. When using Active Learning, you should make sure
that the model architecture is capable of overfitting to the initial data.
Overfitting gives a strong hint that the model will have enough capacity for
future, unseen data.
End of explanation
"""
def train_full_model(full_train_dataset, val_dataset, test_dataset):
model = create_model()
model.compile(
loss="binary_crossentropy",
optimizer="rmsprop",
metrics=[
keras.metrics.BinaryAccuracy(),
keras.metrics.FalseNegatives(),
keras.metrics.FalsePositives(),
],
)
# We will save the best model at every epoch and load the best one for evaluation on the test set
history = model.fit(
full_train_dataset.batch(256),
epochs=20,
validation_data=val_dataset,
callbacks=[
keras.callbacks.EarlyStopping(patience=4, verbose=1),
keras.callbacks.ModelCheckpoint(
"FullModelCheckpoint.h5", verbose=1, save_best_only=True
),
],
)
# Plot history
plot_history(
history.history["loss"],
history.history["val_loss"],
history.history["binary_accuracy"],
history.history["val_binary_accuracy"],
)
# Loading the best checkpoint
model = keras.models.load_model("FullModelCheckpoint.h5")
print("-" * 100)
print(
"Test set evaluation: ",
model.evaluate(test_dataset, verbose=0, return_dict=True),
)
print("-" * 100)
return model
# Sampling the full train dataset to train on
full_train_dataset = (
train_dataset.concatenate(pool_positives)
.concatenate(pool_negatives)
.cache()
.shuffle(20000)
)
# Training the full model
full_dataset_model = train_full_model(full_train_dataset, val_dataset, test_dataset)
"""
Explanation: Training on the entire dataset
To show the effectiveness of Active Learning, we will first train the model on the entire
dataset containing 40,000 labeled samples. This model will be used for comparison later.
End of explanation
"""
def train_active_learning_models(
train_dataset,
pool_negatives,
pool_positives,
val_dataset,
test_dataset,
num_iterations=3,
sampling_size=5000,
):
# Creating lists for storing metrics
losses, val_losses, accuracies, val_accuracies = [], [], [], []
model = create_model()
# We will monitor the false positives and false negatives predicted by our model
# These will decide the subsequent sampling ratio for every Active Learning loop
model.compile(
loss="binary_crossentropy",
optimizer="rmsprop",
metrics=[
keras.metrics.BinaryAccuracy(),
keras.metrics.FalseNegatives(),
keras.metrics.FalsePositives(),
],
)
# Defining checkpoints.
# The checkpoint callback is reused throughout the training since it only saves the best overall model.
checkpoint = keras.callbacks.ModelCheckpoint(
"AL_Model.h5", save_best_only=True, verbose=1
)
# Here, patience is set to 4. This can be set higher if desired.
early_stopping = keras.callbacks.EarlyStopping(patience=4, verbose=1)
print(f"Starting to train with {len(train_dataset)} samples")
# Initial fit with a small subset of the training set
history = model.fit(
train_dataset.cache().shuffle(20000).batch(256),
epochs=20,
validation_data=val_dataset,
callbacks=[checkpoint, early_stopping],
)
# Appending history
losses, val_losses, accuracies, val_accuracies = append_history(
losses, val_losses, accuracies, val_accuracies, history
)
for iteration in range(num_iterations):
# Getting predictions from previously trained model
predictions = model.predict(test_dataset)
# Generating labels from the output probabilities
rounded = tf.where(tf.greater(predictions, 0.5), 1, 0)
# Evaluating the number of zeros and ones incorrectly classified
_, _, false_negatives, false_positives = model.evaluate(test_dataset, verbose=0)
print("-" * 100)
print(
f"Number of zeros incorrectly classified: {false_negatives}, Number of ones incorrectly classified: {false_positives}"
)
# This technique of Active Learning demonstrates ratio based sampling where
# Number of ones/zeros to sample = Number of ones/zeros incorrectly classified / Total incorrectly classified
if false_negatives != 0 and false_positives != 0:
total = false_negatives + false_positives
sample_ratio_ones, sample_ratio_zeros = (
false_positives / total,
false_negatives / total,
)
# In the case where all samples are correctly predicted, we can sample both classes equally
else:
sample_ratio_ones, sample_ratio_zeros = 0.5, 0.5
print(
f"Sample ratio for positives: {sample_ratio_ones}, Sample ratio for negatives:{sample_ratio_zeros}"
)
# Sample the required number of ones and zeros
sampled_dataset = pool_negatives.take(
int(sample_ratio_zeros * sampling_size)
).concatenate(pool_positives.take(int(sample_ratio_ones * sampling_size)))
# Skip the sampled data points to avoid repetition of sample
pool_negatives = pool_negatives.skip(int(sample_ratio_zeros * sampling_size))
pool_positives = pool_positives.skip(int(sample_ratio_ones * sampling_size))
# Concatenating the train_dataset with the sampled_dataset
train_dataset = train_dataset.concatenate(sampled_dataset).prefetch(
tf.data.AUTOTUNE
)
print(f"Starting training with {len(train_dataset)} samples")
print("-" * 100)
# We recompile the model to reset the optimizer states and retrain the model
model.compile(
loss="binary_crossentropy",
optimizer="rmsprop",
metrics=[
keras.metrics.BinaryAccuracy(),
keras.metrics.FalseNegatives(),
keras.metrics.FalsePositives(),
],
)
history = model.fit(
train_dataset.cache().shuffle(20000).batch(256),
validation_data=val_dataset,
epochs=20,
callbacks=[
checkpoint,
keras.callbacks.EarlyStopping(patience=4, verbose=1),
],
)
# Appending the history
losses, val_losses, accuracies, val_accuracies = append_history(
losses, val_losses, accuracies, val_accuracies, history
)
# Loading the best model from this training loop
model = keras.models.load_model("AL_Model.h5")
# Plotting the overall history and evaluating the final model
plot_history(losses, val_losses, accuracies, val_accuracies)
print("-" * 100)
print(
"Test set evaluation: ",
model.evaluate(test_dataset, verbose=0, return_dict=True),
)
print("-" * 100)
return model
active_learning_model = train_active_learning_models(
train_dataset, pool_negatives, pool_positives, val_dataset, test_dataset
)
"""
Explanation: Training via Active Learning
The general process we follow when performing Active Learning is demonstrated below:
The pipeline can be summarized in five parts:
Sample and annotate a small, balanced training dataset
Train the model on this small subset
Evaluate the model on a balanced testing set
If the model satisfies the business criteria, deploy it in a real time setting
If it doesn't pass the criteria, sample a few more samples according to the ratio of
false positives and negatives, add them to the training set and repeat from step 2 till
the model passes the tests or till all available data is exhausted.
For the code below, we will perform sampling using the following formula:<br/>
number of samples of a class to label = (misclassified samples of that class / total misclassified samples) * sampling_size
Active Learning techniques use callbacks extensively for progress tracking. We will be
using model checkpointing and early stopping for this example. The patience parameter
for Early Stopping can help minimize overfitting and the time required. We have set
patience=4 for now, but since the model is robust, we can increase the patience level if
desired.
Note: We are not loading the checkpoint after the first training iteration. In my
experience working on Active Learning techniques, this helps the model probe the
newly formed loss landscape. Even if the model fails to improve in the second iteration,
we will still gain insight about the possible future false positive and negative rates.
This will help us sample a better set in the next iteration where the model will have a
greater chance to improve.
End of explanation
"""
|
Kaggle/learntools
|
notebooks/sql_advanced/raw/tut1.ipynb
|
apache-2.0
|
#$HIDE_INPUT$
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "hacker_news" dataset
dataset_ref = client.dataset("hacker_news", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "comments" table
table_ref = dataset_ref.table("comments")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
"""
Explanation: Introduction
In the Intro to SQL micro-course, you learned how to use INNER JOIN to consolidate information from two different tables. Now you'll learn about a few more types of JOIN, along with how to use UNIONs to pull information from multiple tables.
Along the way, we'll work with two imaginary tables, called owners and pets.
Each row of the owners table identifies a different pet owner, where the ID column is a unique identifier. The Pet_ID column (in the owners table) contains the ID for the pet that belongs to the owner (this number matches the ID for the pet from the pets table).
For example,
- the pets table shows that Dr. Harris Bonkers is the pet with ID 1.
- The owners table shows that Aubrey Little is the owner of the pet with ID 1.
Putting these two facts together, Dr. Harris Bonkers is owned by Aubrey Little. Likewise, since Veronica Dunn does not have a corresponding Pet_ID, she does not have a pet. And, since 5 does not appear in the Pet_ID column, Maisie does not have an owner.
JOINs
Recall that we can use an INNER JOIN to pull rows from both tables where the value in the Pet_ID column in the owners table has a match in the ID column of the pets table.
In this case, Veronica Dunn and Maisie are not included in the results. But what if we instead want to create a table containing all pets, regardless of whether they have owners? Or, what if we want to combine all of the rows in both tables? In these cases, we need only use a different type of JOIN.
For instance, to create a table containing all rows from the owners table, we use a LEFT JOIN. In this case, "left" refers to the table that appears before the JOIN in the query. ("Right" refers to the table that is after the JOIN.)
Replacing INNER JOIN in the query above with LEFT JOIN returns all rows where the two tables have matching entries, along with all of the rows in the left table (whether there is a match or not).
If we instead use a RIGHT JOIN, we get the matching rows, along with all rows in the right table (whether there is a match or not).
Finally, a FULL JOIN returns all rows from both tables. Note that in general, any row that does not have a match in both tables will have NULL entries for the missing values. You can see this in the image below.
UNIONs
As you've seen, JOINs horizontally combine results from different tables. If you instead would like to vertically concatenate columns, you can do so with a UNION. The example query below combines the Age columns from both tables.
Note that with a UNION, the data types of both columns must be the same, but the column names can be different. (So, for instance, we cannot take the UNION of the Age column from the owners table and the Pet_Name column from the pets table.)
We use UNION ALL to include duplicate values - you'll notice that 9 appears in both the owners table and the pets table, and shows up twice in the concatenated results. If you'd like to drop duplicate values, you need only change UNION ALL in the query to UNION DISTINCT.
Example
We'll work with the Hacker News dataset. We begin by reviewing the first several rows of the comments table. (The corresponding code is hidden, but you can un-hide it by clicking on the "Code" button below.)
End of explanation
"""
# Construct a reference to the "stories" table
table_ref = dataset_ref.table("stories")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
"""
Explanation: You'll also work with the stories table.
End of explanation
"""
# Query to select all stories posted on January 1, 2012, with number of comments
join_query = """
WITH c AS
(
SELECT parent, COUNT(*) as num_comments
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
)
SELECT s.id as story_id, s.by, s.title, c.num_comments
FROM `bigquery-public-data.hacker_news.stories` AS s
LEFT JOIN c
ON s.id = c.parent
WHERE EXTRACT(DATE FROM s.time_ts) = '2012-01-01'
ORDER BY c.num_comments DESC
"""
# Run the query, and return a pandas DataFrame
join_result = client.query(join_query).result().to_dataframe()
join_result.head()
"""
Explanation: Since you are already familiar with JOINs from the Intro to SQL micro-course, we'll work with a relatively complex example of a JOIN that uses a common table expression (CTE).
The query below pulls information from the stories and comments tables to create a table showing all stories posted on January 1, 2012, along with the corresponding number of comments. We use a LEFT JOIN so that the results include stories that didn't receive any comments.
End of explanation
"""
# None of these stories received any comments
join_result.tail()
"""
Explanation: Since the results are ordered by the num_comments column, stories without comments appear at the end of the DataFrame. (Remember that NaN stands for "not a number".)
End of explanation
"""
# Query to select all users who posted stories or comments on January 1, 2014
union_query = """
SELECT c.by
FROM `bigquery-public-data.hacker_news.comments` AS c
WHERE EXTRACT(DATE FROM c.time_ts) = '2014-01-01'
UNION DISTINCT
SELECT s.by
FROM `bigquery-public-data.hacker_news.stories` AS s
WHERE EXTRACT(DATE FROM s.time_ts) = '2014-01-01'
"""
# Run the query, and return a pandas DataFrame
union_result = client.query(union_query).result().to_dataframe()
union_result.head()
"""
Explanation: Next, we write a query to select all usernames corresponding to users who wrote stories or comments on January 1, 2014. We use UNION DISTINCT (instead of UNION ALL) to ensure that each user appears in the table at most once.
End of explanation
"""
# Number of users who posted stories or comments on January 1, 2014
len(union_result)
"""
Explanation: To get the number of users who posted on January 1, 2014, we need only take the length of the DataFrame.
End of explanation
"""
|
Who8MyLunch/ipynb_widget_canvas
|
notebooks/02 - Canvas Widget Example.ipynb
|
mit
|
ll ../widget_canvas/
import os
fname = '../widget_canvas/widget_canvas.js'
f = os.path.abspath(fname)
js = IPython.display.Javascript(filename=f) # data=None, url=None, filename=None, lib=None
print('inject!')
IPython.display.display(js)
from __future__ import print_function, unicode_literals, division, absolute_import
import IPython
from widget_canvas import CanvasImage
"""
Explanation: Canvas Widget Example
End of explanation
"""
from widget_canvas.image import read
data_image = read('images/Whippet.jpg')
data_image.shape
"""
Explanation: Load some image data
Load test data using my own image file reader helper function based on PIL/Pillow.
End of explanation
"""
wid_canvas = CanvasImage(data_image)
wid_canvas.border_color = 'black'
wid_canvas.border_width = 2
wid_canvas
"""
Explanation: My New Canvas Widget
My new canvas widget is simpler to use than IPython's built-in image display widget since it takes a Numpy array as input. Behind the scenes it takes care of compressing and encoding the data and then feeding it into the canvas element in a manner similar to the example just above.
End of explanation
"""
data_image_2 = read('images/Doberman.jpg')
wid_canvas.data = data_image_2
"""
Explanation: Changing the displayed image is as easy as setting the data property to a new Numpy array.
End of explanation
"""
# Build an event handler function.
def simple_handler(wid, info):
msg = 'Click: {:3d}, {:3d}'.format(info['canvasX'], info['canvasY'])
print(msg)
# Attach the handler to widget's `on_click` events.
wid_canvas.on_mouse_click(simple_handler)
"""
Explanation: Mouse events
End of explanation
"""
|
zerothi/ts-tbt-sisl-tutorial
|
TB_03/run.ipynb
|
gpl-3.0
|
graphene = sisl.geom.graphene().tile(2, axis=0)
H = sisl.Hamiltonian(graphene)
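# Within a radius of 0.1 Ang (the atom itself) assign the on-site energy 0. eV;
# within 1.43 Ang (the nearest neighbours) assign the hopping element -2.7 eV.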
H.construct([[0.1, 1.43], [0., -2.7]])
"""
Explanation: This example will setup the required electronic structures for usage in TBtrans.
We will continue with the graphene nearest neighbour tight-binding model and perform simple transport calculations using TBtrans.
Again we require the graphene unit-cell and the construction of the Hamiltonian object:
End of explanation
"""
print(H)
"""
Explanation: Note that the above call of the graphene lattice is different from TB 2, and similar to TB 1. In this example we will create a non-orthogonal graphene lattice, i.e. the lattice vectors are the minimal lattice vectors of graphene.
The minimal graphene lattice consists of 2 Carbon atoms.
We tile the Geometry to make it slightly bigger.
You are encouraged to draw the graphene lattice vectors, and draw an arrow in the direction of the transport (along the 2nd lattice vector). Note that one can calculate transport along non-orthogonal directions (also in TranSiesta).
Assert that we have 16 non zero elements:
End of explanation
"""
H.write('ELEC.nc')
"""
Explanation: The Hamiltonian we have thus far created will be our electrode. Lets write it to a TBtrans readable file:
End of explanation
"""
H_device = H.tile(3, axis=1)
print(H_device)
"""
Explanation: Now a file ELEC.nc exists in the folder and it contains all the information (and more) that TBtrans requires to construct the self-energies for the electrode.
Creating the device, Hamiltonian $\to$ Hamiltonian
The Geometry.tile function is an explicit method to create bigger lattices from a smaller reference lattice. However, the tile routine is also available to the Hamiltonian object. Not only is it much easier to use, it also presents these advantages:
It guarantees that the matrix elements are the same as the reference Hamiltonian, i.e. you need not specify the parameters to construct twice,
It is much faster when creating systems of $>500,000$ atoms/orbitals from smaller reference systems,
It also requires less code which increases readability and is less prone to errors.
End of explanation
"""
H_device.write('DEVICE.nc')
"""
Explanation: For more information you may execute the following lines to view the documentation:
help(Geometry.tile)
help(Hamiltonian.tile)
Now we have created the device electronic structure. The final step is to store it in a TBtrans readable format:
End of explanation
"""
tbt = sisl.get_sile('siesta.TBT.nc')
"""
Explanation: Now run tbtrans:
tbtrans RUN.fdf
End of explanation
"""
plt.plot(tbt.E, tbt.transmission(), label='k-averaged');
plt.plot(tbt.E, tbt.transmission(kavg=tbt.kindex([0, 0, 0])), label=r'$\Gamma$');
plt.xlabel('Energy [eV]'); plt.ylabel('Transmission'); plt.ylim([0, None]) ; plt.legend();
"""
Explanation: After calculating the transport properties of the transport problem you may also use sisl to interact with the TBtrans output (in the *.TBT.nc file). Please repeat the same convergence tests you performed in example 02.
What is the required k-point sampling, compared to example 02, for a similar transmission function?
End of explanation
"""
|
google/jax
|
docs/notebooks/Writing_custom_interpreters_in_Jax.ipynb
|
apache-2.0
|
import numpy as np
import jax
import jax.numpy as jnp
from jax import jit, grad, vmap
from jax import random
"""
Explanation: Writing custom Jaxpr interpreters in JAX
JAX offers several composable function transformations (jit, grad, vmap,
etc.) that enable writing concise, accelerated code.
Here we show how to add your own function transformations to the system, by writing a custom Jaxpr interpreter. And we'll get composability with all the other transformations for free.
This example uses internal JAX APIs, which may break at any time. Anything not in the API Documentation should be assumed internal.
End of explanation
"""
x = random.normal(random.PRNGKey(0), (5000, 5000))
def f(w, b, x):
return jnp.tanh(jnp.dot(x, w) + b)
fast_f = jit(f)
"""
Explanation: What is JAX doing?
JAX provides a NumPy-like API for numerical computing which can be used as is, but JAX's true power comes from composable function transformations. Take the jit function transformation, which takes in a function and returns a semantically identical function but is lazily compiled by XLA for accelerators.
End of explanation
"""
def examine_jaxpr(closed_jaxpr):
jaxpr = closed_jaxpr.jaxpr
print("invars:", jaxpr.invars)
print("outvars:", jaxpr.outvars)
print("constvars:", jaxpr.constvars)
for eqn in jaxpr.eqns:
print("equation:", eqn.invars, eqn.primitive, eqn.outvars, eqn.params)
print()
print("jaxpr:", jaxpr)
def foo(x):
return x + 1
print("foo")
print("=====")
examine_jaxpr(jax.make_jaxpr(foo)(5))
print()
def bar(w, b, x):
return jnp.dot(w, x) + b + jnp.ones(5), x
print("bar")
print("=====")
examine_jaxpr(jax.make_jaxpr(bar)(jnp.ones((5, 10)), jnp.ones(5), jnp.ones(10)))
"""
Explanation: When we call fast_f, what happens? JAX traces the function and constructs an XLA computation graph. The graph is then JIT-compiled and executed. Other transformations work similarly in that they first trace the function and handle the output trace in some way. To learn more about Jax's tracing machinery, you can refer to the "How it works" section in the README.
Jaxpr tracer
A tracer of special importance in Jax is the Jaxpr tracer, which records ops into a Jaxpr (Jax expression). A Jaxpr is a data structure that can be evaluated like a mini functional programming language and
thus Jaxprs are a useful intermediate representation
for function transformation.
To get a first look at Jaxprs, consider the make_jaxpr transformation. make_jaxpr is essentially a "pretty-printing" transformation:
it transforms a function into one that, given example arguments, produces a Jaxpr representation of its computation.
make_jaxpr is useful for debugging and introspection.
Let's use it to look at how some example Jaxprs are structured.
End of explanation
"""
# Importing Jax functions useful for tracing/interpreting.
import numpy as np
from functools import wraps
from jax import core
from jax import lax
from jax._src.util import safe_map
"""
Explanation: jaxpr.invars - the invars of a Jaxpr are a list of the input variables to Jaxpr, analogous to arguments in Python functions.
jaxpr.outvars - the outvars of a Jaxpr are the variables that are returned by the Jaxpr. Every Jaxpr has multiple outputs.
jaxpr.constvars - the constvars are a list of variables that are also inputs to the Jaxpr, but correspond to constants from the trace (we'll go over these in more detail later).
jaxpr.eqns - a list of equations, which are essentially let-bindings. Each equation is a list of input variables, a list of output variables, and a primitive, which is used to evaluate inputs to produce outputs. Each equation also has a params, a dictionary of parameters.
Altogether, a Jaxpr encapsulates a simple program that can be evaluated with inputs to produce an output. We'll go over how exactly to do this later. The important thing to note now is that a Jaxpr is a data structure that can be manipulated and evaluated in whatever way we want.
Why are Jaxprs useful?
Jaxprs are simple program representations that are easy to transform. And because Jax lets us stage out Jaxprs from Python functions, it gives us a way to transform numerical programs written in Python.
Your first interpreter: invert
Let's try to implement a simple function "inverter", which takes in the output of the original function and returns the inputs that produced those outputs. For now, let's focus on simple, unary functions which are composed of other invertible unary functions.
Goal:
python
def f(x):
return jnp.exp(jnp.tanh(x))
f_inv = inverse(f)
assert jnp.allclose(f_inv(f(1.0)), 1.0)
The way we'll implement this is by (1) tracing f into a Jaxpr, then (2) interpreting the Jaxpr backwards. While interpreting the Jaxpr backwards, for each equation we'll look up the primitive's inverse in a table and apply it.
1. Tracing a function
Let's use make_jaxpr to trace a function into a Jaxpr.
End of explanation
"""
def f(x):
return jnp.exp(jnp.tanh(x))
closed_jaxpr = jax.make_jaxpr(f)(jnp.ones(5))
print(closed_jaxpr.jaxpr)
print(closed_jaxpr.literals)
"""
Explanation: jax.make_jaxpr returns a closed Jaxpr, which is a Jaxpr that has been bundled with
the constants (literals) from the trace.
End of explanation
"""
def eval_jaxpr(jaxpr, consts, *args):
# Mapping from variable -> value
env = {}
def read(var):
# Literals are values baked into the Jaxpr
if type(var) is core.Literal:
return var.val
return env[var]
def write(var, val):
env[var] = val
# Bind args and consts to environment
safe_map(write, jaxpr.invars, args)
safe_map(write, jaxpr.constvars, consts)
# Loop through equations and evaluate primitives using `bind`
for eqn in jaxpr.eqns:
# Read inputs to equation from environment
invals = safe_map(read, eqn.invars)
# `bind` is how a primitive is called
outvals = eqn.primitive.bind(*invals, **eqn.params)
# Primitives may return multiple outputs or not
if not eqn.primitive.multiple_results:
outvals = [outvals]
# Write the results of the primitive into the environment
safe_map(write, eqn.outvars, outvals)
# Read the final result of the Jaxpr from the environment
return safe_map(read, jaxpr.outvars)
closed_jaxpr = jax.make_jaxpr(f)(jnp.ones(5))
eval_jaxpr(closed_jaxpr.jaxpr, closed_jaxpr.literals, jnp.ones(5))
"""
Explanation: 2. Evaluating a Jaxpr
Before we write a custom Jaxpr interpreter, let's first implement the "default" interpreter, eval_jaxpr, which evaluates the Jaxpr as-is, computing the same values that the original, un-transformed Python function would.
To do this, we first create an environment to store the values for each of the variables, and update the environment with each equation we evaluate in the Jaxpr.
End of explanation
"""
inverse_registry = {}
"""
Explanation: Notice that eval_jaxpr will always return a flat list even if the original function does not.
Furthermore, this interpreter does not handle higher-order primitives (like jit and pmap), which we will not cover in this guide. You can refer to core.eval_jaxpr (link) to see the edge cases that this interpreter does not cover.
Custom inverse Jaxpr interpreter
An inverse interpreter doesn't look too different from eval_jaxpr. We'll first set up the registry which will map primitives to their inverses. We'll then write a custom interpreter that looks up primitives in the registry.
It turns out that this interpreter will also look similar to the "transpose" interpreter used in reverse-mode autodifferentiation found here.
End of explanation
"""
inverse_registry[lax.exp_p] = jnp.log
inverse_registry[lax.tanh_p] = jnp.arctanh
"""
Explanation: We'll now register inverses for some of the primitives. By convention, primitives in Jax end in _p and a lot of the popular ones live in lax.
End of explanation
"""
def inverse(fun):
@wraps(fun)
def wrapped(*args, **kwargs):
# Since we assume unary functions, we won't worry about flattening and
# unflattening arguments.
closed_jaxpr = jax.make_jaxpr(fun)(*args, **kwargs)
out = inverse_jaxpr(closed_jaxpr.jaxpr, closed_jaxpr.literals, *args)
return out[0]
return wrapped
"""
Explanation: inverse will first trace the function, then custom-interpret the Jaxpr. Let's set up a simple skeleton.
End of explanation
"""
def inverse_jaxpr(jaxpr, consts, *args):
env = {}
def read(var):
if type(var) is core.Literal:
return var.val
return env[var]
def write(var, val):
env[var] = val
# Args now correspond to Jaxpr outvars
safe_map(write, jaxpr.outvars, args)
safe_map(write, jaxpr.constvars, consts)
# Looping backward
for eqn in jaxpr.eqns[::-1]:
# outvars are now invars
invals = safe_map(read, eqn.outvars)
if eqn.primitive not in inverse_registry:
raise NotImplementedError(
f"{eqn.primitive} does not have registered inverse.")
# Assuming a unary function
outval = inverse_registry[eqn.primitive](*invals)
safe_map(write, eqn.invars, [outval])
return safe_map(read, jaxpr.invars)
"""
Explanation: Now we just need to define inverse_jaxpr, which will walk through the Jaxpr backward and invert primitives when it can.
End of explanation
"""
def f(x):
return jnp.exp(jnp.tanh(x))
f_inv = inverse(f)
assert jnp.allclose(f_inv(f(1.0)), 1.0)
"""
Explanation: That's it!
End of explanation
"""
jax.make_jaxpr(inverse(f))(f(1.))
"""
Explanation: Importantly, you can trace through a Jaxpr interpreter.
End of explanation
"""
jit(vmap(grad(inverse(f))))((jnp.arange(5) + 1.) / 5.)
"""
Explanation: That's all it takes to add a new transformation to a system, and you get composition with all the others for free! For example, we can use jit, vmap, and grad with inverse!
End of explanation
"""
|
AstroHackWeek/AstroHackWeek2016
|
day2-machine-learning/machine-learning-on-SDSS.ipynb
|
mit
|
## get the data locally ... I put this on a gist
!curl -k -O https://gist.githubusercontent.com/anonymous/53781fe86383c435ff10/raw/4cc80a638e8e083775caec3005ae2feaf92b8d5b/qso10000.csv
!curl -k -O https://gist.githubusercontent.com/anonymous/2984cf01a2485afd2c3e/raw/964d4f52c989428628d42eb6faad5e212e79b665/star1000.csv
!curl -k -O https://gist.githubusercontent.com/anonymous/2984cf01a2485afd2c3e/raw/335cd1953e72f6c7cafa9ebb81b43c47cb757a9d/galaxy1000.csv
## Python 2 backward compatibility
from __future__ import absolute_import, division, print_function, unicode_literals
# For pretty plotting, pandas, sklearn
!conda install pandas seaborn matplotlib scikit-learn==0.17.1 -y
import copy
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['axes.labelsize'] = 20
import pandas as pd
pd.set_option('display.max_columns', None)
import seaborn as sns
sns.set()
pd.read_csv("qso10000.csv",index_col=0).head()
"""
Explanation: <h1>Worked machine learning examples using SDSS data</h1>
[AstroHackWeek 2014, 2016- J. S. Bloom @profjsb]
<hr>
Here we'll see some worked ML examples using scikit-learn on Sloan Digital Sky Survey Data (SDSS). This should work in both Python 2 and Python 3.
It's easiest to grab data from the <a href="http://skyserver.sdss3.org/public/en/tools/search/sql.aspx">SDSS skyserver SQL</a> server.
For example to do a basic query to get two types of photometry (aperature and petrosian), corrected for extinction, for 1000 QSO sources with redshifts:
<font color="blue">
<pre>SELECT *,dered_u - mag_u AS diff_u, dered_g - mag_g AS diff_g, dered_r - mag_r AS diff_g, dered_i - mag_i AS diff_i, dered_z - mag_z AS diff_z from
(SELECT top 1000
objid, ra, dec, dered_u,dered_g,dered_r,dered_i,dered_z,psfmag_u-extinction_u AS mag_u,
psfmag_g-extinction_g AS mag_g, psfmag_r-extinction_r AS mag_r, psfmag_i-extinction_i AS mag_i,psfmag_z-extinction_z AS mag_z,z AS spec_z,dered_u - dered_g AS u_g_color,
dered_g - dered_r AS g_r_color,dered_r - dered_i AS r_i_color,dered_i - dered_z AS i_z_color,class
FROM SpecPhoto
WHERE
(class = 'QSO')
) as sp
</pre>
</font>
Saving this and others like it as a csv we can then start to make our data set for classification/regression.
End of explanation
"""
usecols = [str(x) for x in ["objid","dered_r","spec_z","u_g_color","g_r_color","r_i_color",
"i_z_color","diff_u",\
"diff_g1","diff_i","diff_z"]]
qsos = pd.read_csv("qso10000.csv",index_col=0,
usecols=usecols)
qso_features = copy.copy(qsos)
qso_redshifts = qsos["spec_z"]
del qso_features["spec_z"]
qso_features.head()
f, ax = plt.subplots()
bins = ax.hist(qso_redshifts.values)
ax.set_xlabel("redshift", fontsize=18)
ax.set_ylabel("N",fontsize=18)
"""
Explanation: Notice that there are several things about this dataset. First, RA and DEC are probably not something we want to use in making predictions: it's the location of the object on the sky. Second, the magnitudes are highly covariant with the colors. So dumping all but one of the magnitudes might be a good idea to avoid overfitting.
End of explanation
"""
import matplotlib as mpl
import matplotlib.cm as cm
## truncate the color at z=2.5 just to keep some contrast.
norm = mpl.colors.Normalize(vmin=min(qso_redshifts.values), vmax=2.5)
cmap = cm.jet
m = cm.ScalarMappable(norm=norm, cmap=cmap)
rez = pd.scatter_matrix(qso_features[0:2000],
alpha=0.2,figsize=[15,15],color=m.to_rgba(qso_redshifts.values))
"""
Explanation: Pretty clearly a big cut at around $z=2$.
End of explanation
"""
min(qso_features["dered_r"].values)
"""
Explanation: Egad. Some pretty crazy values for dered_r and g_r_color. Let's figure out why.
End of explanation
"""
qsos = pd.read_csv("qso10000.csv",index_col=0,
usecols=usecols)
qsos = qsos[(qsos["dered_r"] > -9999) & (qsos["g_r_color"] > -10) & (qsos["g_r_color"] < 10)]
qso_features = copy.copy(qsos)
qso_redshifts = qsos["spec_z"]
del qso_features["spec_z"]
rez = pd.scatter_matrix(qso_features[0:2000], alpha=0.2,figsize=[15,15],\
color=m.to_rgba(qso_redshifts.values))
"""
Explanation: Looks like there are some missing values in the catalog which are set at -9999. Let's zoink those from the dataset for now.
End of explanation
"""
qsos.to_csv("qsos.clean.csv")
"""
Explanation: Ok. This looks pretty clean. Let's save this for future use.
End of explanation
"""
X = qso_features.values # 9-d feature space
Y = qso_redshifts.values # redshifts
print("feature vector shape=", X.shape)
print("class shape=", Y.shape)
# half of data
import math
half = math.floor(len(Y)/2)
train_X = X[:half]
train_Y = Y[:half]
test_X = X[half:]
test_Y = Y[half:]
"""
Explanation: Data Munging done. Let's do some ML!
Basic Model Fitting
We need to create a training set and a testing set.
End of explanation
"""
from sklearn import linear_model
clf = linear_model.LinearRegression()
# fit the model
clf.fit(train_X, train_Y)
# now do the prediction
Y_lr_pred = clf.predict(test_X)
# how well did we do?
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(test_Y,Y_lr_pred) ; print("MSE",mse)
plt.plot(test_Y,Y_lr_pred - test_Y,'o',alpha=0.1)
plt.title("Linear Regression Residuals - MSE = %.1f" % mse)
plt.xlabel("Spectroscopic Redshift")
plt.ylabel("Residual")
plt.hlines(0,min(test_Y),max(test_Y),color="red")
# here's the MSE guessing the AVERAGE value
print("naive mse", ((1./len(train_Y))*(train_Y - train_Y.mean())**2).sum())
mean_squared_error?
"""
Explanation: Linear Regression
http://scikit-learn.org/stable/modules/linear_model.html
End of explanation
"""
from sklearn import neighbors
from sklearn import preprocessing
X_scaled = preprocessing.scale(X) # many methods work better on scaled X
clf1 = neighbors.KNeighborsRegressor(10)
train_X = X_scaled[:half]
test_X = X_scaled[half:]
clf1.fit(train_X,train_Y)
Y_knn_pred = clf1.predict(test_X)
mse = mean_squared_error(test_Y,Y_knn_pred) ; print("MSE (KNN)", mse)
plt.plot(test_Y, Y_knn_pred - test_Y,'o',alpha=0.2)
plt.title("k-NN Residuals - MSE = %.1f" % mse)
plt.xlabel("Spectroscopic Redshift")
plt.ylabel("Residual")
plt.hlines(0,min(test_Y),max(test_Y),color="red")
from sklearn import neighbors
from sklearn import preprocessing
X_scaled = preprocessing.scale(X) # many methods work better on scaled X
train_X = X_scaled[:half]
train_Y = Y[:half]
test_X = X_scaled[half:]
test_Y = Y[half:]
clf1 = neighbors.KNeighborsRegressor(5)
clf1.fit(train_X,train_Y)
Y_knn_pred = clf1.predict(test_X)
mse = mean_squared_error(test_Y,Y_knn_pred) ; print("MSE=",mse)
plt.scatter(test_Y, Y_knn_pred - test_Y,alpha=0.2)
plt.title("k-NN Residuals - MSE = %.1f" % mse)
plt.xlabel("Spectroscopic Redshift")
plt.ylabel("Residual")
plt.hlines(0,min(test_Y),max(test_Y),color="red")
"""
Explanation: k-Nearest Neighbor (KNN) Regression
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
clf2 = RandomForestRegressor(n_estimators=100,
criterion='mse', max_depth=None,
min_samples_split=2, min_samples_leaf=1,
max_features='auto', max_leaf_nodes=None,
bootstrap=True, oob_score=False, n_jobs=1,
random_state=None, verbose=0, warm_start=False)
clf2.fit(train_X,train_Y)
Y_rf_pred = clf2.predict(test_X)
mse = mean_squared_error(test_Y,Y_rf_pred) ; print("MSE",mse)
plt.scatter(test_Y, Y_rf_pred - test_Y,alpha=0.2)
plt.title("RF Residuals - MSE = %.1f" % mse)
plt.xlabel("Spectroscopic Redshift")
plt.ylabel("Residual")
plt.hlines(0,min(test_Y),max(test_Y),color="red")
"""
Explanation: Random Forests
Pretty good intro
http://blog.yhathq.com/posts/random-forests-in-python.html
End of explanation
"""
from sklearn import cross_validation
from sklearn import linear_model
clf = linear_model.LinearRegression()
from sklearn.cross_validation import cross_val_score
def print_cv_score_summary(model, xx, yy, cv):
scores = cross_val_score(model, xx, yy, cv=cv, n_jobs=1)
print("mean: {:3f}, stdev: {:3f}".format(
np.mean(scores), np.std(scores)))
print_cv_score_summary(clf,X,Y,cv=cross_validation.KFold(len(Y), 5))
print_cv_score_summary(clf,X,Y,
cv=cross_validation.KFold(len(Y),10,shuffle=True,random_state=1))
print_cv_score_summary(clf2,X,Y,
cv=cross_validation.KFold(len(Y),3,shuffle=True,random_state=1))
"""
Explanation: model selection: cross-validation
End of explanation
"""
usecols = [str(x) for x in ["objid","dered_r","u_g_color","g_r_color","r_i_color","i_z_color","diff_u",\
"diff_g1","diff_i","diff_z","class"]]
all_sources = pd.read_csv("qso10000.csv",index_col=0,usecols=usecols)[:1000]
all_sources = all_sources.append(pd.read_csv("star1000.csv",index_col=0,usecols=usecols))
all_sources = all_sources.append(pd.read_csv("galaxy1000.csv",index_col=0,usecols=usecols))
all_sources = all_sources[(all_sources["dered_r"] > -9999) & (all_sources["g_r_color"] > -10) & (all_sources["g_r_color"] < 10)]
all_features = copy.copy(all_sources)
all_label = all_sources["class"]
del all_features["class"]
X = copy.copy(all_features.values)
Y = copy.copy(all_label.values)
all_sources.tail()
print("feature vector shape=", X.shape)
print("class shape=", Y.shape)
Y[Y=="QSO"] = 0
Y[Y=="STAR"] = 1
Y[Y=="GALAXY"] = 2
Y = list(Y)
"""
Explanation: Classification
Let's do a 3-class classification problem: star, galaxy, or QSO
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=200,oob_score=True)
clf.fit(X,Y)
"""
Explanation: Let's look at random forest
End of explanation
"""
sorted(zip(all_sources.columns.values,clf.feature_importances_),key=lambda q: q[1],reverse=True)
clf.oob_score_ ## "Out of Bag" Error
import numpy as np
from sklearn import svm, datasets
cmap = cm.jet_r
# import some data to play with
plt.figure(figsize=(10,10))
X = all_features.values[:, 1:3] # use only two features for training and plotting purposes
h = 0.02 # step size in the mesh
# we create an instance of SVM and fit out data. We do not scale our
# data since we want to plot the support vectors
C = 1.0 # SVM regularization parameter
svc = svm.SVC(kernel=str('linear'), C=C).fit(X, Y)
rbf_svc = svm.SVC(kernel=str('rbf'), gamma=0.7, C=C).fit(X, Y)
poly_svc = svm.SVC(kernel=str('poly'), degree=3, C=C).fit(X, Y)
lin_svc = svm.LinearSVC(C=C).fit(X, Y)
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# title for the plots
titles = ['SVC with linear kernel',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel',
'LinearSVC (linear kernel)']
norm = mpl.colors.Normalize(vmin=min(Y), vmax=max(Y))
m = cm.ScalarMappable(norm=norm, cmap=cmap)
for i, clf in enumerate((svc, rbf_svc, poly_svc, lin_svc)):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(2, 2, i + 1)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z,cmap=cm.Paired)
plt.axis('off')
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=m.to_rgba(Y),cmap=cm.Paired)
plt.title(titles[i])
"""
Explanation: what are the important features in the data?
End of explanation
"""
# fit a support vector machine classifier
from sklearn import grid_search
from sklearn import svm
from sklearn import metrics
import logging
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
# instantiate the SVM object
sdss_svm = svm.SVC()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'kernel':(str('linear'), str('rbf')), \
'gamma':[0.5, 0.3, 0.1, 0.01],
'C':[0.1, 2, 4, 5, 10, 20,30]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
svm_tune = grid_search.GridSearchCV(sdss_svm, parameters,\
n_jobs = -1, cv = 3,verbose=1)
svm_opt = svm_tune.fit(X, Y)
# print the best score and estimator
print(svm_opt.best_score_)
print(svm_opt.best_estimator_)
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state=0)
classifier = svm.SVC(**svm_opt.best_estimator_.get_params())
y_pred = classifier.fit(X_train, y_train).predict(X_test)
# Compute confusion matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
# Show confusion matrix in a separate window
plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(10,50,200),"max_features": ["auto",3,5],
'criterion':[str("gini"),str("entropy")],"min_samples_leaf": [1,2]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
rf_tune = grid_search.GridSearchCV(sdss_rf, parameters,\
n_jobs = -1, cv = 3,verbose=1)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
clf.get_params()
svm_opt.best_estimator_.get_params()
grid_search.GridSearchCV?
"""
Explanation: model improvement with GridSearchCV
Hyperparameter optimization. Parallel: makes use of joblib
End of explanation
"""
import time
start = time.time()
## this takes about 30 seconds
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(10,50,200),"max_features": ["auto",3,5],
'criterion':["gini","entropy"],"min_samples_leaf": [1,2]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
rf_tune = grid_search.GridSearchCV(sdss_rf, parameters,\
n_jobs = -1, cv = 3,verbose=1)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
print("total time in seconds",time.time()- start)
"""
Explanation: Parallelism & Hyperparameter Fitting
GridSearchCV exhaustively evaluates every parameter combination, which is expensive in both compute and RAM. An exhaustive grid is also not obviously the optimal search strategy.
End of explanation
"""
import time
start = time.time()
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(10,50,200),"max_features": ["auto",3,5],
'criterion':["gini","entropy"],"min_samples_leaf": [1,2]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
rf_tune = grid_search.RandomizedSearchCV(sdss_rf, parameters,\
n_jobs = -1, cv = 3,verbose=1)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
print("total time in seconds",time.time()- start)
!conda install dask distributed -y
import os
import os
myhome = os.getcwd()
os.environ["PYTHONPATH"] = myhome + "/dask-learn"
myhome = !pwd
!git clone https://github.com/dask/dask-learn.git
%cd dask-learn
!git pull
!python setup.py install
from dklearn.grid_search import GridSearchCV as DaskGridSearchCV
import time
start = time.time()
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(10,50,200),"max_features": ["auto",3,5],
'criterion':["gini","entropy"],"min_samples_leaf": [1,2]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
rf_tune = DaskGridSearchCV(sdss_rf, parameters,\
cv = 3)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
print("total time in seconds",time.time()- start)
#To do distributed:
#from distributed import Executor
#executor = Executor()
#executor
"""
Explanation: Let's do this without a full search...
End of explanation
"""
usecols = [str(x) for x in ["objid","dered_r","u_g_color","g_r_color","r_i_color","i_z_color","diff_u",\
"diff_g1","diff_i","diff_z","class"]]
all_sources = pd.read_csv("qso10000.csv",index_col=0,usecols=usecols)[:1000]
all_sources = all_sources.append(pd.read_csv("star1000.csv",index_col=0,usecols=usecols))
all_sources = all_sources.append(pd.read_csv("galaxy1000.csv",index_col=0,usecols=usecols))
all_sources = all_sources[(all_sources["dered_r"] > -9999) & (all_sources["g_r_color"] > -10) & (all_sources["g_r_color"] < 10)]
all_features = copy.copy(all_sources)
all_label = all_sources["class"]
del all_features["class"]
X = copy.copy(all_features.values)
Y = copy.copy(all_label.values)
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(100,),"max_features": ["auto",3,4],
'criterion':["entropy"],"min_samples_leaf": [1,2]}
# do a grid search to find the highest 5-fold CV zero-one score
rf_tune = grid_search.GridSearchCV(sdss_rf, parameters,\
n_jobs = -1, cv = 5,verbose=1)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
probs = rf_opt.best_estimator_.predict_proba(X)
print(rf_opt.best_estimator_.classes_)
for i in range(probs.shape[0]):
if rf_opt.best_estimator_.classes_[np.argmax(probs[i,:])] != Y[i]:
print("Label={0:6s}".format(Y[i]), end=" ")
print("Pgal={0:0.3f} Pqso={1:0.3f} Pstar={2:0.3f}".format(probs[i,0],probs[i,1],probs[i,2]),end=" ")
print("http://skyserver.sdss.org/dr12/en/tools/quicklook/summary.aspx?id=" + str(all_sources.index[i]))
"""
Explanation: Clustering, Unsupervised Learning & Anomaly Detection
It's often of interest to find patterns in the data that you didn't know were there, either as an end in itself or as a starting point for exploration.
One approach is to look at individual sources that are mis-classified.
End of explanation
"""
from sklearn import (manifold, datasets, decomposition, ensemble,
discriminant_analysis, random_projection)
rp = random_projection.SparseRandomProjection(n_components=2, density=0.3, random_state=1)
X_projected = rp.fit_transform(X)
Y[Y=="QSO"] = 0
Y[Y=="STAR"] = 1
Y[Y=="GALAXY"] = 2
Yi = Y.astype(np.int64)
plt.title("Manifold Sparse Random Projection")
plt.scatter(X_projected[:, 0], X_projected[:, 1],c=plt.cm.Set1(Yi / 3.),alpha=0.2,
edgecolor='none',s=5*(X[:,0] - np.min(X[:,0])))
clf = manifold.MDS(n_components=2, n_init=1, max_iter=100)
X_mds = clf.fit_transform(X)
plt.title("MDS Projection")
plt.scatter(X_mds[:, 0], X_mds[:, 1],c=plt.cm.Set1(Yi / 3.),alpha=0.3,
s=5*(X[:,0] - np.min(X[:,0])))
"""
Explanation: We can also use manifold learning to project structure into lower dimensions.
End of explanation
"""
|
mbeyeler/opencv-machine-learning
|
notebooks/09.01-Understanding-perceptrons.ipynb
|
mit
|
import numpy as np
class Perceptron(object):
def __init__(self, lr=0.01, n_iter=10):
"""Constructor
Parameters
----------
lr : float
Learning rate.
n_iter : int
Number of iterations after which the algorithm should
terminate.
"""
self.lr = lr
self.n_iter = n_iter
def predict(self, X):
"""Predict target labels
Parameters
----------
X : array-like
Feature matrix, <n_samples x n_features>
Returns
-------
Predicted target labels, +1 or -1.
Notes
-----
Must run `fit` first.
"""
# Whenever the term (X * weights + bias) >= 0, we return
# label +1, else we return label -1
return np.where(np.dot(X, self.weights) + self.bias >= 0.0,
1, -1)
def fit(self, X, y):
"""Fit the model to data
Parameters
----------
X : array-like
Feature matrix, <n_samples x n_features>
y : array-like
Vector of target labels, <n_samples x 1>
"""
self.weights = np.zeros(X.shape[1])
self.bias = 0.0
for _ in range(self.n_iter):
for xi, yi in zip(X, y):
delta = self.lr * (yi - self.predict(xi))
self.weights += delta * xi
self.bias += delta
"""
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< 9. Using Deep Learning to Classify Handwritten Digits | Contents | Implementing a Multi-Layer Perceptron (MLP) in OpenCV >
Understanding Perceptrons
In the 1950s, American psychologist and artificial intelligence researcher Frank Rosenblatt invented an algorithm that would automatically learn the optimal weight coefficients $w_0$ and $w_1$ needed to perform an accurate binary classification: the perceptron learning rule.
Rosenblatt's original perceptron algorithm can be summed up as follows:
Initialize the weights to zero or some small random numbers.
For each training sample $s_i$, perform the following steps:
Compute the predicted target value $ŷ_i$.
Compare $ŷ_i$ to the ground truth $y_i$, and update the weights accordingly:
If the two are the same (correct prediction), skip ahead.
If the two are different (wrong prediction), push the weight coefficients $w_0$ and $w_1$ towards the positive or negative target class respectively.
Implementing our first perceptron
Perceptrons are easy enough to be implemented from scratch. We can mimic the typical OpenCV or scikit-learn implementation of a classifier by creating a Perceptron object. This will allow us to initialize new perceptron objects that can learn from data via a fit method and make predictions via a separate predict method.
When we initialize a new perceptron object, we want to pass a learning rate (lr) and the number of iterations after which the algorithm should terminate (n_iter):
End of explanation
"""
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=100, centers=2,
cluster_std=2.2, random_state=42)
"""
Explanation: Generating a toy dataset
To test our perceptron classifier, we need to create some mock data. Let's keep things simple for now and generate 100 data samples (n_samples) belonging to one of two blobs (centers), again relying on scikit-learn's make_blobs function:
End of explanation
"""
y = 2 * y - 1
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], s=100, c=y);
plt.xlabel('x1')
plt.ylabel('x2')
plt.savefig('perceptron-data.png')
"""
Explanation: Adjust the labels so they're either +1 or -1:
End of explanation
"""
p = Perceptron(lr=0.1, n_iter=10)
p.fit(X, y)
"""
Explanation: Fitting the perceptron to data
We can instantiate our perceptron object similar to other classifiers we encountered with
OpenCV:
End of explanation
"""
p.weights
p.bias
"""
Explanation: Let's have a look at the learned weights:
End of explanation
"""
from sklearn.metrics import accuracy_score
accuracy_score(p.predict(X), y)
def plot_decision_boundary(classifier, X_test, y_test):
# create a mesh to plot in
h = 0.02 # step size in mesh
x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
X_hypo = np.c_[xx.ravel().astype(np.float32),
yy.ravel().astype(np.float32)]
zz = classifier.predict(X_hypo)
zz = zz.reshape(xx.shape)
plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)
plt.figure(figsize=(10, 6))
plot_decision_boundary(p, X, y)
plt.xlabel('x1')
plt.ylabel('x2')
"""
Explanation: If we plug these values into our equation for $ϕ$, it becomes clear that the perceptron learned
a decision boundary of the form $2.2 x_1 - 0.48 x_2 + 0.2 >= 0$.
Evaluating the perceptron classifier
End of explanation
"""
X, y = make_blobs(n_samples=100, centers=2,
cluster_std=5.2, random_state=42)
y = 2 * y - 1
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], s=100, c=y);
plt.xlabel('x1')
plt.ylabel('x2')
"""
Explanation: Applying the perceptron to data that is not linearly separable
Since the perceptron is a linear classifier, you can imagine that it would have trouble trying
to classify data that is not linearly separable. We can test this by increasing the spread
(cluster_std) of the two blobs in our toy dataset so that the two blobs start overlapping:
End of explanation
"""
p = Perceptron(lr=0.1, n_iter=10)
p.fit(X, y)
accuracy_score(p.predict(X), y)
plt.figure(figsize=(10, 6))
plot_decision_boundary(p, X, y)
plt.xlabel('x1')
plt.ylabel('x2')
"""
Explanation: So what would happen if we applied the perceptron classifier to this dataset?
End of explanation
"""
|
sebastianmarkow/san-francisco-crime-kaggle
|
prediction.ipynb
|
mit
|
import datetime
import gc
import zipfile
import matplotlib as mpl
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn as sk
from pandas.tseries.holiday import USFederalHolidayCalendar
from sklearn.cross_validation import KFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.grid_search import GridSearchCV
from sklearn.kernel_approximation import RBFSampler
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, LabelEncoder, LabelBinarizer
from sklearn_pandas import DataFrameMapper
%matplotlib inline
DATADIR = "./data/"
"""
Explanation: Predicting crime in San Francisco with machine learning (Kaggle 2015)
Notice: sklearn-pandas should be checked out from latest master branch
~~~
$ pip2 install --upgrade git+https://github.com/paulgb/sklearn-pandas.git@master
~~~
Requirements
End of explanation
"""
train, test = pd.DataFrame(), pd.DataFrame()
with zipfile.ZipFile(DATADIR + "train.csv.zip", "r") as zf:
train = pd.read_csv(zf.open("train.csv"), parse_dates=['Dates'])
with zipfile.ZipFile(DATADIR + "test.csv.zip", "r") as zf:
test = pd.read_csv(zf.open("test.csv"), parse_dates=['Dates'])
train.head()
test.head()
"""
Explanation: Raw data
End of explanation
"""
train_view, test_view = pd.DataFrame(), pd.DataFrame()
train_view["category"] = train["Category"]
for (view, data) in [(train_view, train), (test_view, test)]:
view["district"] = data["PdDistrict"]
view["hour"] = data["Dates"].map(lambda x: x.hour)
view["weekday"] = data["Dates"].map(lambda x: x.weekday())
view["day"] = data["Dates"].map(lambda x: x.day)
view["dayofyear"] = data["Dates"].map(lambda x: x.dayofyear)
view["month"] = data["Dates"].map(lambda x: x.month)
view["year"] = data["Dates"].map(lambda x: x.year)
view["lon"] = data["X"]
view["lat"] = data["Y"]
view["address"] = data["Address"].map(lambda x: x.split(" ", 1)[1] if x.split(" ", 1)[0].isdigit() else x)
view["corner"] = data["Address"].map(lambda x: "/" in x)
days_off = USFederalHolidayCalendar().holidays(start='2003-01-01', end='2015-05-31').to_pydatetime()
for (view, data) in [(train_view, train), (test_view, test)]:
view["holiday"] = data["Dates"].map(lambda x: datetime.datetime(x.year,x.month,x.day) in days_off)
view["workhour"] = data["Dates"].map(lambda x: x.hour in range(9,17))
view["sunlight"] = data["Dates"].map(lambda x: x.hour in range(7,19))
train_view.head()
test_view.head()
"""
Explanation: View composition & feature engineering
End of explanation
"""
del train
del test
gc.collect()
target_mapper = DataFrameMapper([
("category", LabelEncoder()),
])
y = target_mapper.fit_transform(train_view.copy())
print "sample:", y[0]
print "shape:", y.shape
data_mapper = DataFrameMapper([
("district", LabelBinarizer()),
("hour", StandardScaler()),
("weekday", StandardScaler()),
("day", StandardScaler()),
("dayofyear", StandardScaler()),
("month", StandardScaler()),
("year", StandardScaler()),
("lon", StandardScaler()),
("lat", StandardScaler()),
("address", [LabelEncoder(), StandardScaler()]),
("corner", LabelEncoder()),
("holiday", LabelEncoder()),
("workhour", LabelEncoder()),
("sunlight", LabelEncoder()),
])
data_mapper.fit(pd.concat([train_view.copy(), test_view.copy()]))
X = data_mapper.transform(train_view.copy())
X_test = data_mapper.transform(test_view.copy())
print "sample:", X[0]
print "shape:", X.shape
"""
Explanation: Garbage Collection
End of explanation
"""
samples = np.random.permutation(np.arange(X.shape[0]))[:100000]
X_sample = X[samples]
y_sample = y[samples]
y_sample = np.reshape(y_sample, -1)
y = np.reshape(y, -1)
"""
Explanation: Draw samples
End of explanation
"""
sgd_rbf = make_pipeline(RBFSampler(gamma=0.1, random_state=1), SGDClassifier())
"""
Explanation: Stochastic Gradient Descent (SGD)
SGD with Kernel Approximation
RBF Kernel by Monte Carlo approximation of its Fourier transform
End of explanation
"""
alpha_range = 10.0**-np.arange(1,7)
loss_function = ["hinge", "log", "modified_huber", "squared_hinge", "perceptron"]
params = dict(
sgdclassifier__alpha=alpha_range,
sgdclassifier__loss=loss_function
)
%%time
grid = GridSearchCV(sgd_rbf, params, cv=10, scoring="accuracy", n_jobs=1)
grid.fit(X_sample, y_sample)
print "best score:", grid.best_score_
print "parameter:", grid.best_params_
"""
Explanation: Parameter Search
End of explanation
"""
%%time
rbf = RBFSampler(gamma=0.1, random_state=1)
rbf.fit(np.concatenate((X, X_test), axis=0))
X_rbf = rbf.transform(X)
X_test_rbf = rbf.transform(X_test)
%%time
sgd = SGDClassifier(loss="log", alpha=0.001, n_iter=1000, n_jobs=-1)
sgd.fit(X_rbf, y)
"""
Explanation: Training
End of explanation
"""
results = sgd.predict_proba(X_test_rbf)
"""
Explanation: Classification
End of explanation
"""
%%time
rfc = RandomForestClassifier(max_depth=16,n_estimators=1024)
cross_val_score(rfc, X, y, cv=10, scoring="accuracy", n_jobs=-1).mean()
"""
Explanation: Random Forest Classifier
End of explanation
"""
|
andrzejkrawczyk/python-course
|
part_1/08.Funkcje.ipynb
|
apache-2.0
|
def foo():
pass
def suma(a, b):
return a + b
print(foo())
print(foo)
print(suma(5, 10))
def suma(a, b, c=5, d=10):
return a + b + c + d
print(suma(1, 2))
def suma(a, b, c=5, d=10):
return a + b + c + d
print(suma(1, 2, 3))
def suma(a, b, c=5, d=10):
return a + b + c + d
print(suma(1, 2, d=5, c=5))
def suma(a, b, c=5, d=10):
return a + b + c + d
print(suma(1, d=5, c=5))
def suma(a, b, c=5, d=10):
return a + b + c + d
print(suma(1, d=5, e=5))
def suma(a, b, c=5, d=10):
return a + b + c + d
liczby = {
"c": 5,
"d": 10
}
print(suma(1, 2, **liczby))
def suma(a, b, c=5, d=10):
return a + b + c + d
liczby = {
"c": 5,
"e": 10
}
print(suma(1, 2, **liczby))
def suma(a, b, **kwargs):
print(kwargs)
print(type(kwargs))
return a + b + sum(kwargs.values())
liczby = {
"c": 5,
"e": 10
}
print(suma(1, 2, **liczby))
def suma(a, b, **kwargs):
print(kwargs)
return a + b + sum(kwargs.values())
liczby = {
"c": 5,
"e": 10
}
print(suma(1, 2, a=5))
def suma(a, b, **kwargs):
print(kwargs)
return a + b + sum(kwargs.values())
liczby = {
"c": 5,
"e": 10
}
print(suma(1, 2, liczba1=1, liczba2=2, liczba3=3))
def suma(a, b, *args, **kwargs):
print(kwargs)
print(args)
return a + b + sum(args) + sum(kwargs.values())
liczby = {
"c": 5,
"e": 10
}
print(suma(1, 2, 3, liczba1=1, liczba2=2, liczba3=3))
def suma(a, b, *args, **kwargs):
print(kwargs)
print(args)
return a + b + sum(args) + sum(kwargs.values())
liczby = {
"c": 5,
"e": 10
}
print(suma(1, 2))
"""
Explanation: Functions
defined with the def keyword
functions are objects, Callable
a function name is a reference to the function object
any number of arguments
any number of returned values
executed at runtime
None is returned by default
End of explanation
"""
def akumulator(liczby=[]):
liczby.append(1)
return liczby
akumulator()
akumulator()
print(akumulator())
print(akumulator([]))
print(akumulator())
def akumulator(liczby=[]):
liczby.append(1)
return liczby
akumulator()
akumulator()
akumulator()
akumulator([])
akumulator()
print(akumulator.__defaults__)
def akumulator(liczby=None):
if liczby is None:
liczby = []
liczby.append(1)
return liczby
akumulator()
akumulator()
print(akumulator())
print(akumulator([]))
print(akumulator())
"""
Explanation: <center><h1>?</h1></center>
End of explanation
"""
_no_value = object()
def akumulator(liczby=_no_value):
if liczby is _no_value:
liczby = []
liczby.append(1)
return liczby
akumulator()
akumulator()
print(akumulator())
print(akumulator([]))
print(akumulator())
def foo():
return 1, 2, 3, 4, 5
a, b, c, *d = foo()
print(a)
print(b)
print(c)
print(d)
def foo():
return 1, 2, 3, 4, 5
_, *d, _ = foo()
print(d)
def foo():
lista = [1, 2, 3, 4, 5]
return lista
a, b, c, *d = foo()
print(a)
print(b)
print(c)
print(d)
x = lambda x: x + 5
a = x(10)
print(a)
x = lambda x, y: x + 5 + y
a = x(10, 5)
print(a)
x = lambda: 10
a = x()
print(a)
"""
Explanation: <center><h1>?</h1></center>
End of explanation
"""
x = 10
def foo():
print(x)
def bar():
x = 5
print(x)
def foobar():
global x
x = 1
print(x)
foo()
bar()
print(x)
foobar()
print(x)
x = 10
class Point():
a = x
print(":)")
def foo(self):
print(self.a)
self.a = 5
print(self.a)
def bar(self):
print(x)
print(x)
Point().foo()
print(x)
x = 7
Point().bar()
"""
Explanation: Variable scope
Local
Enclosing (a short sketch of this case is added below)
Global
Built-in
End of explanation
"""
|
jmhsi/justin_tinker
|
data_science/courses/deeplearning1/nbs/lesson1.ipynb
|
apache-2.0
|
%matplotlib inline
"""
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using a Jupyter notebook:
End of explanation
"""
path = "data/dogscats/"
# path = "data/dogscats/sample/"
"""
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
"""
from __future__ import division,print_function
from importlib import reload
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
"""
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
"""
import theano
import utils; reload(utils)
from utils import plots
"""
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
"""
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
"""
vgg = Vgg16()
"""
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
"""
batches = vgg.get_batches(path+'train', batch_size=4)
"""
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
"""
imgs,labels = next(batches)
"""
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
"""
plots(imgs, titles=labels)
"""
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where the array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1. (A small hand-rolled sketch of this encoding is added in the cell below.)
End of explanation
"""
vgg.predict(imgs, True)
"""
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
"""
vgg.classes[:4]
"""
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
End of explanation
"""
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
"""
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
"""
vgg.finetune(batches)
"""
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
"""
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
"""
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
"""
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
"""
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
"""
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
"""
classes[:5]
"""
Explanation: Here's a few examples of the categories we just imported:
End of explanation
"""
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
"""
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
"""
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
"""
Explanation: ...and here's the fully-connected definition.
End of explanation
"""
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
return x[:, ::-1] # reverse axis bgr->rgb
"""
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
"""
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
"""
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
"""
model = VGG_16()
"""
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
"""
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
"""
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
"""
batch_size = 4
"""
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
"""
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
"""
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
"""
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
"""
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
"""
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
"""
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation
"""
|
ToqueWillot/M2DAC
|
FDMS/TME2/TME2_Paul_Willot.ipynb
|
gpl-2.0
|
%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import copy
from sklearn.datasets import fetch_mldata
from sklearn import cross_validation
from sklearn import base
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
#mnist = fetch_mldata('iris')
import matplotlib.pyplot as plt
"""
Explanation: FDMS TME2
Paul Willot
End of explanation
"""
ds = sklearn.datasets.make_classification(n_samples=20000,
n_features=30, # 30 features
n_informative=5, # only 5 informatives ones
n_redundant=0,
n_repeated=3, # and 3 duplicate
n_classes=2,
n_clusters_per_class=1,
weights=None,
flip_y=0.03,
class_sep=0.8,
hypercube=True,
shift=0.0,
scale=1.0,
shuffle=True,
random_state=None)
X= ds[0]
y= ds[1]
# labels: [0,1] -> [-1,1]
for idx,i in enumerate(y):
if (i==0):
y[idx]=-1
print(X[0])
print(y[0])
"""
Explanation: Data generation
End of explanation
"""
class GradientDescent(base.BaseEstimator):
def __init__(self,theta,lamb,eps):
self.theta=theta
self.eps=eps
self.lamb=lamb
self.used_features=len(theta)
def fit(self,X,y,nbIt=1000,printevery=-1):
l=len(X)
xTrans = X.transpose()
for i in xrange(0,nbIt):
#index = np.random.randint(l)
loss = np.dot(X, self.theta) - y
cost = np.sum(loss ** 2) * (1 / l) + (self.lamb*np.linalg.norm(self.theta))
gradient = np.dot(xTrans,(np.dot(self.theta,xTrans)-y))
if i%(nbIt/100)==0:
thetaprime = self.theta - self.eps * (np.sign(theta)*self.lamb)
else:
thetaprime = self.theta - self.eps * gradient
for k in xrange(0,len(theta)):
self.theta[k] = 0 if thetaprime[k]*theta[k]<0 else thetaprime[k]
if printevery!=-1 and i%printevery==0:
print("Iteration %s | Cost: %f | Score: %.03f" % (str(i).ljust(6), cost,self.score(X,y)))
ttt = self.nb_used_features()
print("%d features used"%(ttt))
self.used_features=ttt
elif i%1000==0:
ttt = self.nb_used_features()
self.used_features=ttt
def predict(self,x):
ret=[]
for i in x:
ret.append(1 if np.dot(i,self.theta)>0 else -1)
return ret
def score(self,X,y):
cpt=0.0
allpred = self.predict(X)
for idx,i in enumerate(allpred):
cpt += 1 if i==y[idx] else 0
return cpt/len(X)
def nb_used_features(self):
cpt=0
for ii in self.theta:
if ii==0:
cpt+=1
return len(self.theta)-cpt
theta = copy.deepcopy(X[0])
lamb=500
eps=0.00001
gd = GradientDescent(theta,lamb,eps)
nbIterations = 5000
gd.fit(X,y,nbIterations,printevery=nbIterations/10)
scores = cross_validation.cross_val_score(gd, X, y, cv=5,scoring="accuracy")
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
"""
Explanation: L1
Advantage: good feature selection
L1 gradient pseudocode (a short sketch of the update step is given in the cell below)
End of explanation
"""
eps=0.00001
la = []
cross_sc = []
used_features = []
for lamb in np.arange(0,4000,200):
theta = copy.deepcopy(X[0])
gd = GradientDescent(theta,lamb,eps)
nbIterations = 4000
gd.fit(X,y,nbIterations)
scoresSvm = cross_validation.cross_val_score(gd, X, y, cv=5,scoring="accuracy")
print("Lamda: %s | Cross val mean: %.03f | Features: %d"%(str(lamb).ljust(5),np.mean(scoresSvm),gd.used_features))
#print("Lamda: %.02f | Cross val mean: %.02f | Features: %d"%(lamb,gd.score(X,y),gd.used_features))
cross_sc.append(np.mean(scoresSvm))
la.append(lamb)
used_features.append(gd.used_features)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(la, cross_sc, '#6DC433')
ax2.plot(la, used_features, '#5AC8ED')
ax1.set_xlabel('lambda')
ax1.set_ylabel('Cross val score', color='#6DC433')
ax2.set_ylabel('Nb features used', color='#5AC8ED')
ax1.yaxis.grid(False)
ax2.grid(False)
plt.show()
"""
Explanation: Selecting lambda
We have only 5 informative features out of 30, plus 3 duplicated ones.
We strive to reach this number of features while keeping a good classification score.
End of explanation
"""
class GradientDescentL2(base.BaseEstimator):
def __init__(self,theta,lamb,eps):
self.theta=theta
self.eps=eps
self.lamb=lamb
self.used_features=len(theta)
def fit(self,X,y,nbIt=1000,printevery=-1):
l=len(X)
xTrans = X.transpose()
for i in xrange(0,nbIt):
index = np.random.randint(l)
loss = np.dot(X, self.theta) - y
cost = np.sum(loss ** 2) * (1 / l) + (self.lamb*np.linalg.norm(self.theta))**2
gradient = np.dot(xTrans,(np.dot(self.theta,xTrans)-y))
if i%(nbIt/100)==0:
thetaprime = self.theta - self.eps * (np.sign(theta)*self.lamb)
else:
thetaprime = self.theta - self.eps * gradient
for k in xrange(0,len(theta)):
self.theta[k] = 0 if thetaprime[k]*theta[k]<0 else thetaprime[k]
if printevery!=-1 and i%printevery==0:
print("Iteration %s | Cost: %f | Score: %.03f" % (str(i).ljust(6), cost,self.score(X,y)))
ttt = self.nb_used_features()
print("%d features used"%(ttt))
self.used_features=ttt
elif i%1000==0:
ttt = self.nb_used_features()
self.used_features=ttt
def predict(self,x):
ret=[]
for i in x:
ret.append(1 if np.dot(i,self.theta)>0 else -1)
return ret
def score(self,X,y):
cpt=0.0
allpred = self.predict(X)
for idx,i in enumerate(allpred):
cpt += 1 if i==y[idx] else 0
return cpt/len(X)
def nb_used_features(self):
cpt=0
for ii in self.theta:
if ii==0:
cpt+=1
return len(self.theta)-cpt
"""
Explanation: L2
The difference between L1 and L2 regularization is that L1 penalizes the sum of the absolute values of the weights, while L2 penalizes the sum of the squared weights and is therefore more sensitive to large weights/outliers. (A small numeric comparison is given in the cell below.)
Advantage: good predictions under significant constraints
End of explanation
"""
ds = sklearn.datasets.make_classification(n_samples=200,
n_features=30, # 30 features
n_informative=5, # only 5 informatives ones
n_redundant=0,
n_repeated=3, # and 3 duplicate
n_classes=2,
n_clusters_per_class=1,
weights=None,
flip_y=0.01,
class_sep=0.8,
hypercube=True,
shift=0.0,
scale=1.0,
shuffle=True,
random_state=None)
X= ds[0]
y= ds[1]
# labels: [0,1] -> [-1,1]
for idx,i in enumerate(y):
if (i==0):
y[idx]=-1
theta = copy.deepcopy(X[0])
lamb=2000
eps=0.00001
gd = GradientDescentL2(theta,lamb,eps)
#gd.tmp
nbIterations = 5000
gd.fit(X,y,nbIterations,printevery=nbIterations/10)
scores = cross_validation.cross_val_score(gd, X, y, cv=5,scoring="accuracy")
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
"""
Explanation: Test with only 200 samples
End of explanation
"""
eps=0.00001
la = []
cross_sc = []
used_features = []
for lamb in np.arange(0,4000,200):
theta = copy.deepcopy(X[0])
gd = GradientDescentL2(theta,lamb,eps)
nbIterations = 5000
gd.fit(X,y,nbIterations)
scoresSvm = cross_validation.cross_val_score(gd, X, y, cv=5,scoring="accuracy")
print("Lamda: %s | Cross val mean: %.03f | Features: %d"%(str(lamb).ljust(5),np.mean(scoresSvm),gd.used_features))
cross_sc.append(np.mean(scoresSvm))
la.append(lamb)
used_features.append(gd.used_features)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(la, cross_sc, '#6DC433')
ax2.plot(la, used_features, '#5AC8ED')
ax1.set_xlabel('lambda')
ax1.set_ylabel('Cross val score', color='#6DC433')
ax2.set_ylabel('Nb features used', color='#5AC8ED')
ax1.yaxis.grid(False)
ax2.grid(False)
plt.show()
"""
Explanation: Selecting lambda
Similar to L1
End of explanation
"""
#used to cross-val on lasso and elastic-net
def scorer(estimator, X, y):
pred = estimator.predict(X)
cpt=0.0
for idx,i in enumerate(pred):
if i<0:
cpt += 1 if y[idx]==-1 else 0
else:
cpt += 1 if y[idx]==1 else 0
return cpt/len(y)
lass = Lasso(alpha = 0.2)
lass.fit(X,y)
scores = cross_validation.cross_val_score(lass, X, y, cv=5,scoring=scorer)
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
print(lass.coef_)
print("Feature used: %d"%np.count_nonzero(lass.coef_))
eps=0.00001
la = []
cross_sc = []
used_features = []
for lamb in np.arange(0.05,1.05,0.05):
theta = copy.deepcopy(X[0])
gd = Lasso(alpha = lamb)
nbIterations = 4000
gd.fit(X,y)
scoresSvm = cross_validation.cross_val_score(gd, X, y, cv=5,scoring=scorer)
print("Lamda: %s | Cross val mean: %.03f | Features: %d"%(str(lamb).ljust(5),np.mean(scoresSvm),np.count_nonzero(gd.coef_)))
#print("Lamda: %.02f | Cross val mean: %.02f | Features: %d"%(lamb,gd.score(X,y),gd.used_features))
cross_sc.append(np.mean(scoresSvm))
la.append(lamb)
used_features.append(np.count_nonzero(gd.coef_))
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(la, cross_sc, '#6DC433')
ax2.plot(la, used_features, '#5AC8ED')
ax1.set_xlabel('lambda')
ax1.set_ylabel('Cross val score', color='#6DC433')
ax2.set_ylabel('Nb features used', color='#5AC8ED')
ax1.yaxis.grid(False)
ax2.grid(False)
plt.show()
"""
Explanation: Evaluation using sklearn Lasso
Sklearn's Lasso works in the same way, although much faster, and a regularization strength 0 < λ < 1 is more practical
End of explanation
"""
lass = ElasticNet(alpha = 0.2, l1_ratio=0)
lass.fit(X,y)
scores = cross_validation.cross_val_score(lass, X, y, cv=5,scoring=scorer)
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
print("Feature used: %d"%np.count_nonzero(lass.coef_))
lass = ElasticNet(alpha = 0.2, l1_ratio=0.5)
lass.fit(X,y)
scores = cross_validation.cross_val_score(lass, X, y, cv=5,scoring=scorer)
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
print("Feature used: %d"%np.count_nonzero(lass.coef_))
lass = ElasticNet(alpha = 0.2, l1_ratio=1)
lass.fit(X,y)
scores = cross_validation.cross_val_score(lass, X, y, cv=5,scoring=scorer)
print("Cross validation scores: %s, mean: %.02f"%(scores,np.mean(scores)))
print("Feature used: %d"%np.count_nonzero(lass.coef_))
"""
Explanation: Comparison of L1 and L2 using sklearn ElasticNet
End of explanation
"""
eps=0.00001
la = []
cross_sc = []
used_features = []
for lamb in np.arange(0.05,1.05,0.05):
theta = copy.deepcopy(X[0])
gd = ElasticNet(alpha = 0.2, l1_ratio=lamb)
nbIterations = 4000
gd.fit(X,y)
scoresSvm = cross_validation.cross_val_score(gd, X, y, cv=5,scoring=scorer)
print("Lamda: %s | Cross val mean: %.03f | Features: %d"%(str(lamb).ljust(5),np.mean(scoresSvm),np.count_nonzero(gd.coef_)))
#print("Lamda: %.02f | Cross val mean: %.02f | Features: %d"%(lamb,gd.score(X,y),gd.used_features))
cross_sc.append(np.mean(scoresSvm))
la.append(lamb)
used_features.append(np.count_nonzero(gd.coef_))
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(la, cross_sc, '#FF9900')
ax2.plot(la, used_features, '#9933FF')
ax1.set_xlabel('L1 L2 ratio')
ax1.set_ylabel('Cross val score', color='#FF9900')
ax2.set_ylabel('Nb features used', color='#9933FF')
ax1.yaxis.grid(False)
ax2.grid(False)
plt.show()
"""
Explanation: We observe that, as expected, the more weight we give to the L1 term, the fewer features are used.
End of explanation
"""
|
aoool/behavioral-cloning
|
data.ipynb
|
mit
|
import os
import zipfile
if not (os.path.isdir("data_raw") and os.path.exists("data_raw.csv")):
zip_ref = zipfile.ZipFile("data_raw.zip", 'r')
zip_ref.extractall(".")
zip_ref.close()
"""
Explanation: Behavioral Cloning
Here the driving data collected using the driving simulator will be explored and augmented.
Extract Raw Data Archive
End of explanation
"""
import os
import pandas as pd
data = pd.read_csv("data_raw.csv").to_dict(orient='list')
n_records = len(data['STEERING_ANGLE'])
n_images = n_records * 3 # center, left and right images per record
print("Number of samples:", n_records)
print("Number of images:", n_images)
# validate that directory number of images in csv file
# and the number of images in the directory are equal
assert (len(os.listdir("data_raw")) == n_images)
"""
Explanation: Read CSV File for the Recently Extracted Data
End of explanation
"""
import matplotlib.pyplot as plt
# visualizations will be shown in the notebook
%matplotlib inline
plt.hist(data['STEERING_ANGLE'], bins=100)
plt.show()
"""
Explanation: Draw Histogram for Steering Angle for Raw Data
End of explanation
"""
import cv2
import os
if os.path.isdir("data_augmented") and os.path.exists("data_augmented.csv"):
print("data_augmented directory or data_augmented.csv file exists")
else:
os.mkdir("data_augmented")
with open("data_augmented.csv", "w") as csv_file:
csv_file.write("CENTER_IMAGE,LEFT_IMAGE,RIGHT_IMAGE,STEERING_ANGLE,THROTTLE,BRAKE,SPEED\n")
for i in range(n_records):
# center image names (old, new)
center_im_nm = data['CENTER_IMAGE'][i]
center_im_nm_new = center_im_nm.replace("data_raw", "data_augmented")
center_im_nm_new_flipped = center_im_nm_new.replace("center", "center_flipped")
# left image names (old,new)
left_im_nm = data['LEFT_IMAGE'][i]
left_im_nm_new = left_im_nm.replace("data_raw", "data_augmented")
left_im_nm_new_flipped = left_im_nm_new.replace("left", "left_flipped")
# right image names (old, new)
right_im_nm = data['RIGHT_IMAGE'][i]
right_im_nm_new = right_im_nm.replace("data_raw", "data_augmented")
right_im_nm_new_flipped = right_im_nm_new.replace("right", "right_flipped")
# steering angle (old, flipped)
steering_angle = data['STEERING_ANGLE'][i]
steering_angle_flipped = -1.0 * steering_angle
# create hard links to the original images in new directory
os.link(center_im_nm, center_im_nm_new)
os.link(left_im_nm, left_im_nm_new)
os.link(right_im_nm, right_im_nm_new)
# write info about old images to new csv file
csv_file.write("{c_im},{l_im},{r_im},{st_ang},{thr},{br},{sp}\n".format(
c_im=center_im_nm_new,
l_im=left_im_nm_new,
r_im=right_im_nm_new,
st_ang=data['STEERING_ANGLE'][i],
thr=data['THROTTLE'][i],
br=data['BRAKE'][i],
sp=data['SPEED'][i]))
# flip center image and save
flipped_center_im = cv2.flip(cv2.imread(center_im_nm), flipCode=1)
cv2.imwrite(center_im_nm_new_flipped, flipped_center_im)
# flip left image and save
flipped_left_im = cv2.flip(cv2.imread(left_im_nm), flipCode=1)
cv2.imwrite(left_im_nm_new_flipped, flipped_left_im)
# flip right image and save
flipped_right_im = cv2.flip(cv2.imread(right_im_nm), flipCode=1)
cv2.imwrite(right_im_nm_new_flipped, flipped_right_im)
# write info about flipped images to new csv file
csv_file.write("{c_im},{l_im},{r_im},{st_ang},{thr},{br},{sp}\n".format(
c_im=center_im_nm_new_flipped,
l_im=left_im_nm_new_flipped,
r_im=right_im_nm_new_flipped,
st_ang=steering_angle_flipped,
thr=data['THROTTLE'][i],
br=data['BRAKE'][i],
sp=data['SPEED'][i]))
"""
Explanation: Since the steering was performed using a mouse instead of buttons in the simulator during data collection, there are entries in every bin of the histogram. The results are as expected: most of the entries are around zero, and there are also many entries at the extreme left and right angles of +/- 25 degrees.
Augment the Data via Images and Steering Measurements Flipping
End of explanation
"""
import os
import pandas as pd
data_augmented = pd.read_csv("data_augmented.csv").to_dict(orient='list')
n_records_augmented = len(data_augmented['STEERING_ANGLE'])
n_images_augmented = n_records_augmented * 3 # center, left and right images per record
print("Number of samples:", n_records_augmented)
print("Number of images:", n_images_augmented)
# validate that directory number of images in csv file
# and the number of images in the directory are equal
assert (len(os.listdir("data_augmented")) == n_images_augmented)
"""
Explanation: Read CSV File for the Augmented Data
End of explanation
"""
import matplotlib.pyplot as plt
# visualizations will be shown in the notebook
%matplotlib inline
plt.hist(data_augmented['STEERING_ANGLE'], bins=100)
plt.show()
"""
Explanation: Draw Histogram for Steering Angle for Augmented Data
End of explanation
"""
import os
import zipfile
def zip_dir(path, zip_ref):
for root, dirs, files in os.walk(path):
for file in files:
zip_ref.write(os.path.join(root, file))
if (os.path.isdir("data_augmented") and os.path.exists("data_augmented.csv")):
zip_ref = zipfile.ZipFile("data_augmented.zip", 'w', zipfile.ZIP_DEFLATED)
zip_ref.write("data_augmented.csv")
zip_dir("data_augmented", zip_ref)
zip_ref.close()
"""
Explanation: The histogram for the augmented data is symmetric. This is the expected and desired state for the data. The augmented data will be used to train the neural network that predicts the steering angle.
Compress Augmented Data
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
quests/data-science-on-gcp-edition1_tf2/07_sparkml_and_bqml/logistic_regression.ipynb
|
apache-2.0
|
BUCKET='cs358-bucket' # CHANGE ME
import os
os.environ['BUCKET'] = BUCKET
# Create spark session
from __future__ import print_function
from pyspark.sql import SparkSession
from pyspark import SparkContext
sc = SparkContext('local', 'logistic')
spark = SparkSession \
.builder \
.appName("Logistic regression w/ Spark ML") \
.getOrCreate()
print(spark)
print(sc)
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint
"""
Explanation: <h1> Logistic Regression using Spark ML </h1>
Set up bucket
End of explanation
"""
traindays = spark.read \
.option("header", "true") \
.csv('gs://{}/flights/trainday.csv'.format(BUCKET))
traindays.createOrReplaceTempView('traindays')
spark.sql("SELECT * from traindays LIMIT 5").show()
from pyspark.sql.types import StringType, FloatType, StructType, StructField
header = 'FL_DATE,UNIQUE_CARRIER,AIRLINE_ID,CARRIER,FL_NUM,ORIGIN_AIRPORT_ID,ORIGIN_AIRPORT_SEQ_ID,ORIGIN_CITY_MARKET_ID,ORIGIN,DEST_AIRPORT_ID,DEST_AIRPORT_SEQ_ID,DEST_CITY_MARKET_ID,DEST,CRS_DEP_TIME,DEP_TIME,DEP_DELAY,TAXI_OUT,WHEELS_OFF,WHEELS_ON,TAXI_IN,CRS_ARR_TIME,ARR_TIME,ARR_DELAY,CANCELLED,CANCELLATION_CODE,DIVERTED,DISTANCE,DEP_AIRPORT_LAT,DEP_AIRPORT_LON,DEP_AIRPORT_TZOFFSET,ARR_AIRPORT_LAT,ARR_AIRPORT_LON,ARR_AIRPORT_TZOFFSET,EVENT,NOTIFY_TIME'
def get_structfield(colname):
if colname in ['ARR_DELAY', 'DEP_DELAY', 'DISTANCE', 'TAXI_OUT']:
return StructField(colname, FloatType(), True)
else:
return StructField(colname, StringType(), True)
schema = StructType([get_structfield(colname) for colname in header.split(',')])
inputs = 'gs://{}/flights/tzcorr/all_flights-00000-*'.format(BUCKET) # 1/30th; you may have to change this to find a shard that has training data
#inputs = 'gs://{}/flights/tzcorr/all_flights-*'.format(BUCKET) # FULL
flights = spark.read\
.schema(schema)\
.csv(inputs)
# this view can now be queried ...
flights.createOrReplaceTempView('flights')
"""
Explanation: <h2> Read dataset </h2>
End of explanation
"""
trainquery = """
SELECT
f.*
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True'
"""
traindata = spark.sql(trainquery)
print(traindata.head(2)) # if this is empty, try changing the shard you are using.
traindata.describe().show()
"""
Explanation: <h2> Clean up </h2>
End of explanation
"""
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.dep_delay IS NOT NULL AND
f.arr_delay IS NOT NULL
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.CANCELLED == '0.00' AND
f.DIVERTED == '0.00'
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
def to_example(fields):
return LabeledPoint(\
float(fields['ARR_DELAY'] < 15), #ontime? \
[ \
fields['DEP_DELAY'], \
fields['TAXI_OUT'], \
fields['DISTANCE'], \
])
examples = traindata.rdd.map(to_example)
lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True)
print(lrmodel.weights,lrmodel.intercept)
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
lrmodel.clearThreshold()
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
lrmodel.setThreshold(0.7) # cancel if prob-of-ontime < 0.7
print(lrmodel.predict([6.0,12.0,594.0]))
print(lrmodel.predict([36.0,12.0,594.0]))
"""
Explanation: Note that the counts for the various columns are all different; we have to remove NULLs in the delay variables (these correspond to canceled or diverted flights).
<h2> Logistic regression </h2>
End of explanation
"""
!gsutil -m rm -r gs://$BUCKET/flights/sparkmloutput/model
MODEL_FILE='gs://' + BUCKET + '/flights/sparkmloutput/model'
lrmodel.save(sc, MODEL_FILE)
print('{} saved'.format(MODEL_FILE))
lrmodel = 0
print(lrmodel)
"""
Explanation: <h2> Predict with the model </h2>
First save the model
End of explanation
"""
from pyspark.mllib.classification import LogisticRegressionModel
lrmodel = LogisticRegressionModel.load(sc, MODEL_FILE)
lrmodel.setThreshold(0.7)
print(lrmodel.predict([36.0,12.0,594.0]))
print(lrmodel.predict([8.0,4.0,594.0]))
"""
Explanation: Now retrieve the model
End of explanation
"""
lrmodel.clearThreshold() # to make the model produce probabilities
print(lrmodel.predict([20, 10, 500]))
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
dist = np.arange(10, 2000, 10)
prob = [lrmodel.predict([20, 10, d]) for d in dist]
sns.set_style("whitegrid")
ax = plt.plot(dist, prob)
plt.xlabel('distance (miles)')
plt.ylabel('probability of ontime arrival')
delay = np.arange(-20, 60, 1)
prob = [lrmodel.predict([d, 10, 500]) for d in delay]
ax = plt.plot(delay, prob)
plt.xlabel('departure delay (minutes)')
plt.ylabel('probability of ontime arrival')
"""
Explanation: <h2> Examine the model behavior </h2>
For dep_delay=20 and taxiout=10, how does the distance affect prediction?
End of explanation
"""
inputs = 'gs://{}/flights/tzcorr/all_flights-00001-*'.format(BUCKET) # you may have to change this to find a shard that has test data
flights = spark.read\
.schema(schema)\
.csv(inputs)
flights.createOrReplaceTempView('flights')
testquery = trainquery.replace("t.is_train_day == 'True'","t.is_train_day == 'False'")
print(testquery)
testdata = spark.sql(testquery)
examples = testdata.rdd.map(to_example)
testdata.describe().show() # if this is empty, change the shard you are using
def eval(labelpred):
'''
data = (label, pred)
data[0] = label
data[1] = pred
'''
cancel = labelpred.filter(lambda data: data[1] < 0.7)
nocancel = labelpred.filter(lambda data: data[1] >= 0.7)
corr_cancel = cancel.filter(lambda data: data[0] == int(data[1] >= 0.7)).count()
corr_nocancel = nocancel.filter(lambda data: data[0] == int(data[1] >= 0.7)).count()
cancel_denom = cancel.count()
nocancel_denom = nocancel.count()
if cancel_denom == 0:
cancel_denom = 1
if nocancel_denom == 0:
nocancel_denom = 1
return {'total_cancel': cancel.count(), \
'correct_cancel': float(corr_cancel)/cancel_denom, \
'total_noncancel': nocancel.count(), \
'correct_noncancel': float(corr_nocancel)/nocancel_denom \
}
# Evaluate model
lrmodel.clearThreshold() # so it returns probabilities
labelpred = examples.map(lambda p: (p.label, lrmodel.predict(p.features)))
print('All flights:')
print(eval(labelpred))
# keep only those examples near the decision threshold
print('Flights near decision threshold:')
labelpred = labelpred.filter(lambda data: data[1] > 0.65 and data[1] < 0.75)
print(eval(labelpred))
"""
Explanation: <h2> Evaluate model </h2>
Evaluate on the test data
End of explanation
"""
|
kubeflow/kfp-tekton-backend
|
components/gcp/dataproc/submit_hadoop_job/sample.ipynb
|
apache-2.0
|
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
"""
Explanation: Name
Data preparation using Hadoop MapReduce on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, Hadoop, YARN, Apache, MapReduce
Summary
A Kubeflow Pipeline component to prepare data by submitting an Apache Hadoop MapReduce job on Apache Hadoop YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache Hadoop MapReduce job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |
| region | The Dataproc region to handle the request. | No | GCPRegion | | |
| cluster_name | The name of the cluster to run the job. | No | String | | |
| main_jar_file_uri | The Hadoop Compatible Filesystem (HCFS) URI of the JAR file containing the main class to execute. | No | List | | |
| main_class | The name of the driver's main class. The JAR file that contains the class must be either in the default CLASSPATH or specified in hadoop_job.jarFileUris. | No | String | | |
| args | The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None |
| hadoop_job | The payload of a HadoopJob. | Yes | Dict | | None |
| job | The payload of a Dataproc job. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 |
Note:
main_jar_file_uri: Examples of valid JAR file URIs are:
- gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar
- hdfs:/tmp/test-samples/custom-wordcount.jar
- file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this guide.
* Create a new cluster.
* The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
* Grant the Kubeflow user service account the role roles/dataproc.editor on the project.
Detailed description
This component creates a Hadoop job from Dataproc submit job REST API.
Follow these steps to use the component in a pipeline:
Install the Kubeflow Pipeline SDK:
End of explanation
"""
import kfp.components as comp
dataproc_submit_hadoop_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataproc/submit_hadoop_job/component.yaml')
help(dataproc_submit_hadoop_job_op)
"""
Explanation: Load the component using KFP SDK
End of explanation
"""
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
OUTPUT_GCS_PATH = '<Please put your output GCS path here>'
REGION = 'us-central1'
MAIN_CLASS = 'org.apache.hadoop.examples.WordCount'
INPUT_GCS_PATH = 'gs://ml-pipeline-playground/shakespeare1.txt'
EXPERIMENT_NAME = 'Dataproc - Submit Hadoop Job'
"""
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
Setup a Dataproc cluster
Create a new Dataproc cluster (or reuse an existing one) before running the sample code.
Prepare a Hadoop job
Upload your Hadoop JAR file to a Cloud Storage bucket. In the sample, we will use a JAR file that is preinstalled in the main cluster, so there is no need to provide main_jar_file_uri.
Here is the WordCount example source code.
To package a self-contained Hadoop MapReduce application from the source code, follow the MapReduce Tutorial.
Set sample parameters
End of explanation
"""
!gsutil cat $INPUT_GCS_PATH
"""
Explanation: Inspect Input Data
The input file is a simple text file:
End of explanation
"""
!gsutil rm $OUTPUT_GCS_PATH/**
"""
Explanation: Clean up the existing output files (optional)
This is needed because the sample code requires the output folder to be a clean folder. To continue to run the sample, make sure that the service account of the notebook server has access to the OUTPUT_GCS_PATH.
CAUTION: This will remove all blob files under OUTPUT_GCS_PATH.
End of explanation
"""
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Hadoop job pipeline',
description='Dataproc submit Hadoop job pipeline'
)
def dataproc_submit_hadoop_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
main_jar_file_uri = '',
main_class = MAIN_CLASS,
args = json.dumps([
        INPUT_GCS_PATH,
OUTPUT_GCS_PATH
]),
hadoop_job='',
job='{}',
wait_interval='30'
):
dataproc_submit_hadoop_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
main_jar_file_uri=main_jar_file_uri,
main_class=main_class,
args=args,
hadoop_job=hadoop_job,
job=job,
wait_interval=wait_interval)
"""
Explanation: Example pipeline that uses the component
End of explanation
"""
pipeline_func = dataproc_submit_hadoop_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
"""
Explanation: Compile the pipeline
End of explanation
"""
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
"""
Explanation: Submit the pipeline for execution
End of explanation
"""
!gsutil cat $OUTPUT_GCS_PATH/*
"""
Explanation: Inspect the output
The sample in the notebook will count the words in the input text and save them in sharded files. The command to inspect the output is:
End of explanation
"""
|
GoogleCloudPlatform/tensorflow-without-a-phd
|
tensorflow-rnn-tutorial/01_Keras_stateful_RNN_playground.ipynb
|
apache-2.0
|
# using Tensorflow 2
%tensorflow_version 2.x
import math
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print("Tensorflow version: " + tf.__version__)
#@title Data formatting and display utilites [RUN ME]
def dumb_minibatch_sequencer(data, batch_size, sequence_size, nb_epochs):
"""
Divides the data into batches of sequences in the simplest way: sequentially.
:param data: the training sequence
:param batch_size: the size of a training minibatch
:param sequence_size: the unroll size of the RNN
:param nb_epochs: number of epochs to train on
:return:
x: one batch of training sequences
y: one batch of target sequences, i.e. training sequences shifted by 1
"""
data_len = data.shape[0]
nb_batches = data_len // (batch_size * sequence_size)
rounded_size = nb_batches * batch_size * sequence_size
xdata = data[:rounded_size]
ydata = np.roll(data, -1)[:rounded_size]
xdata = np.reshape(xdata, [nb_batches, batch_size, sequence_size])
ydata = np.reshape(ydata, [nb_batches, batch_size, sequence_size])
for epoch in range(nb_epochs):
for batch in range(nb_batches):
yield xdata[batch,:,:], ydata[batch,:,:]
def rnn_minibatch_sequencer(data, batch_size, sequence_size, nb_epochs):
"""
Divides the data into batches of sequences so that all the sequences in one batch
continue in the next batch. This is a generator that will keep returning batches
until the input data has been seen nb_epochs times. Sequences are continued even
between epochs, apart from one, the one corresponding to the end of data.
    The remainder at the end of data that does not fit in a full batch is ignored.
:param data: the training sequence
:param batch_size: the size of a training minibatch
:param sequence_size: the unroll size of the RNN
:param nb_epochs: number of epochs to train on
:return:
x: one batch of training sequences
y: one batch of target sequences, i.e. training sequences shifted by 1
"""
data_len = data.shape[0]
# using (data_len-1) because we must provide for the sequence shifted by 1 too
nb_batches = (data_len - 1) // (batch_size * sequence_size)
assert nb_batches > 0, "Not enough data, even for a single batch. Try using a smaller batch_size."
rounded_data_len = nb_batches * batch_size * sequence_size
xdata = np.reshape(data[0:rounded_data_len], [batch_size, nb_batches * sequence_size])
ydata = np.reshape(data[1:rounded_data_len + 1], [batch_size, nb_batches * sequence_size])
whole_epochs = math.floor(nb_epochs)
frac_epoch = nb_epochs - whole_epochs
last_nb_batch = math.floor(frac_epoch * nb_batches)
for epoch in range(whole_epochs+1):
for batch in range(nb_batches if epoch < whole_epochs else last_nb_batch):
x = xdata[:, batch * sequence_size:(batch + 1) * sequence_size]
y = ydata[:, batch * sequence_size:(batch + 1) * sequence_size]
x = np.roll(x, -epoch, axis=0) # to continue the sequence from epoch to epoch (do not reset rnn state!)
y = np.roll(y, -epoch, axis=0)
yield x, y
plt.rcParams['figure.figsize']=(16.8,6.0)
plt.rcParams['axes.grid']=True
plt.rcParams['axes.linewidth']=0
plt.rcParams['grid.color']='#DDDDDD'
plt.rcParams['axes.facecolor']='white'
plt.rcParams['xtick.major.size']=0
plt.rcParams['ytick.major.size']=0
plt.rcParams['axes.titlesize']=15.0
def display_lr(lr_schedule, nb_epochs):
x = np.arange(nb_epochs)
y = [lr_schedule(i) for i in x]
plt.figure(figsize=(9,5))
plt.plot(x,y)
plt.title("Learning rate schedule\nmax={:.2e}, min={:.2e}".format(np.max(y), np.min(y)),
y=0.85)
plt.show()
def display_loss(history, full_history, nb_epochs):
plt.figure()
plt.plot(np.arange(0, len(full_history['loss']))/steps_per_epoch, full_history['loss'], label='detailed loss')
plt.plot(np.arange(1, nb_epochs+1), history['loss'], color='red', linewidth=3, label='average loss per epoch')
plt.ylim(0,3*max(history['loss'][1:]))
plt.xlabel('EPOCH')
plt.ylabel('LOSS')
plt.xlim(0, nb_epochs+0.5)
plt.legend()
for epoch in range(nb_epochs//2+1):
plt.gca().axvspan(2*epoch, 2*epoch+1, alpha=0.05, color='grey')
plt.show()
def picture_this_7(features):
subplot = 231
for i in range(6):
plt.subplot(subplot)
plt.plot(features[i])
subplot += 1
plt.show()
def picture_this_8(data, prime_data, results, offset, primelen, runlen, rmselen):
disp_data = data[offset:offset+primelen+runlen]
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
plt.subplot(211)
plt.xlim(0, disp_data.shape[0])
plt.text(primelen,2.5,"DATA |", color=colors[1], horizontalalignment="right")
plt.text(primelen,2.5,"| PREDICTED", color=colors[0], horizontalalignment="left")
displayresults = np.ma.array(np.concatenate((np.zeros([primelen]), results)))
displayresults = np.ma.masked_where(displayresults == 0, displayresults)
plt.plot(displayresults)
displaydata = np.ma.array(np.concatenate((prime_data, np.zeros([runlen]))))
displaydata = np.ma.masked_where(displaydata == 0, displaydata)
plt.plot(displaydata)
plt.subplot(212)
plt.xlim(0, disp_data.shape[0])
plt.text(primelen,2.5,"DATA |", color=colors[1], horizontalalignment="right")
plt.text(primelen,2.5,"| +PREDICTED", color=colors[0], horizontalalignment="left")
plt.plot(displayresults)
plt.plot(disp_data)
plt.axvspan(primelen, primelen+rmselen, color='grey', alpha=0.1, ymin=0.05, ymax=0.95)
plt.show()
rmse = math.sqrt(np.mean((data[offset+primelen:offset+primelen+rmselen] - results[:rmselen])**2))
print("RMSE on {} predictions (shaded area): {}".format(rmselen, rmse))
"""
Explanation: A stateful RNN model to generate sequences
RNN models can generate long sequences based on past data. This can be used to predict stock markets, temperatures, traffic or sales data based on past patterns. They can also be adapted to generate text. The quality of the prediction will depend on the training data, the network architecture, the hyperparameters, the distance in time at which you are predicting, and so on. But most importantly, it will depend on whether your training data contains examples of the behaviour patterns you are trying to predict.
End of explanation
"""
WAVEFORM_SELECT = 0 # select 0, 1 or 2
def create_time_series(datalen):
# good waveforms
frequencies = [(0.2, 0.15), (0.35, 0.3), (0.6, 0.55)]
freq1, freq2 = frequencies[WAVEFORM_SELECT]
noise = [np.random.random()*0.1 for i in range(datalen)]
x1 = np.sin(np.arange(0,datalen) * freq1) + noise
x2 = np.sin(np.arange(0,datalen) * freq2) + noise
x = x1 + x2
return x.astype(np.float32)
DATA_LEN = 1024*128+1
data = create_time_series(DATA_LEN)
plt.plot(data[:512])
plt.show()
"""
Explanation: Generate fake dataset [WORK REQUIRED]
Pick a waveform below: 0, 1 or 2. This will be your dataset.
End of explanation
"""
RNN_CELLSIZE = 80 # size of the RNN cells
SEQLEN = 32 # unrolled sequence length
BATCHSIZE = 30 # mini-batch size
DROPOUT = 0.3 # dropout regularization: probability of neurons being dropped. Should be between 0 and 0.5
"""
Explanation: Hyperparameters
End of explanation
"""
# The function dumb_minibatch_sequencer splits the data into batches of sequences sequentially.
for features, labels in dumb_minibatch_sequencer(data, BATCHSIZE, SEQLEN, nb_epochs=1):
break
print("Features shape: " + str(features.shape))
print("Labels shape: " + str(labels.shape))
print("Excerpt from first batch:")
picture_this_7(features)
"""
Explanation: Visualize training sequences
This is what the neural network will see during training.
End of explanation
"""
def keras_model(batchsize, seqlen):
model = tf.keras.Sequential([
#
# YOUR MODEL HERE
# This is a dummy model that always predicts 1
#
tf.keras.layers.Lambda(lambda x: tf.ones([batchsize,seqlen]), input_shape=[seqlen,])
])
# to finalize the model, specify the loss, the optimizer and metrics
model.compile(
loss = 'mean_squared_error',
optimizer = 'adam',
metrics = ['RootMeanSquaredError'])
return model
# Keras model callbacks
# This callback records a per-step loss history instead of the average loss per
# epoch that Keras normally reports. It allows you to see more problems.
class LossHistory(tf.keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.history = {'loss': []}
def on_batch_end(self, batch, logs={}):
self.history['loss'].append(logs.get('loss'))
# This callback resets the RNN state at each epoch
class ResetStateCallback(tf.keras.callbacks.Callback):
def on_epoch_begin(self, batch, logs={}):
self.model.reset_states()
print('reset state')
reset_state = ResetStateCallback()
# learning rate decay callback
def lr_schedule(epoch): return 0.01
#def lr_schedule(epoch): return 0.0001 + 0.01 * math.pow(0.65, epoch)
lr_decay = tf.keras.callbacks.LearningRateScheduler(lr_schedule, verbose=True)
"""
Explanation: The model [WORK REQUIRED]
This time we want to train a "stateful" RNN model, one that runs like a state machine with an internal state updated every time an input is processed. Stateful models are typically trained (unrolled) to predict the next element in a sequence, then used in a loop (without unrolling) to generate a sequence.
This model needs more compute power. Let's use GPU acceleration.<br/>
Go to Runtime > Runtime Type and check that "GPU" is selected.
Locate the inference function keras_prediction_run() below and check that at its core, it runs the model in a loop, piping outputs into inputs and output state into input state:<br/>
for i in range(n):
Yout = model.predict(Yout)
Notice that the output is passed around in the input explicitly. In Keras, the output state is passed around as the next input state automatically if RNN layers are declared with stateful=True
Run the whole notebook as it is, with a dummy model that always predicts 1. Check that everything "works".
Now implement a one layer RNN model:
Use stateful GRU cells tf.keras.layers.GRU(RNN_CELLSIZE, stateful=True, return_sequences=True).
Make sure they all return full sequences with return_sequences=True. The model should output a full sequence of length SEQLEN. The target is the input sequence shifted by one, effectively teaching the RNN to predict the next element of a sequence.
Do not forget to replicate the regression redout layer across all time steps with tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))
In Keras, stateful RNNs must be defined for a fixed batch size (documentation). On the first layer, in addition to input_shape, please specify batch_size=batchsize
Adjust shapes as needed with Reshape layers. Pen, paper and fresh brain cells <font size="+2">🤯</font> are still useful for following the shapes. The shapes of inputs (a.k.a. "features") and targets (a.k.a. "labels") are displayed in the cell above this text.
Add a second RNN layer.
The predictions might be starting to look good but the loss curve is pretty noisy.
If we want to do stateful RNNs "by the book", training data should be arranged in batches in a special way so that RNN states after one batch are the correct input states for the sequences in the next batch (see this illustration). Correct data batching is already implemented: just use the rnn_minibatch_sequencer function in the training loop instead of dumb_minibatch_sequencer.
This should clean up the loss curve and improve predictions.
Finally, add a learning rate schedule. In Keras, this is also done through a callback. Edit lr_schedule below and swap the constant learning rate for a decaying one (just uncomment it).
Now the RNN should be able to continue your curve accurately.
(Optional) To do things really "by the book", shouldn't states also be reset when sequences are no longer continuous between batches, i.e. at every epoch? The reset_state callback defined below does that. Add it to the list of callbacks in model.fit and test.
It actually makes things slightly worse... Looking at the loss should tell you why: a zero state generates a much bigger loss at the start of each epoch than the state from the previous epoch. Both are incorrect but one is much worse.
(Optional) You can also add dropout regularization. Try dropout=DROPOUT on both your RNN layers.
Aaaarg 😫 what happened?
(Optional) In Keras RNN layers, the dropout parameter is an input dropout. In the first RNN layer, you are dropping your input data! That does not make sense. Remove the dropout from your first RNN layer. With dropout, you might need to train for longer. Try 10 epochs.
<div style="text-align: right; font-family: monospace">
X shape [BATCHSIZE, SEQLEN, 1]<br/>
Y shape [BATCHSIZE, SEQLEN, 1]<br/>
H shape [BATCHSIZE, RNN_CELLSIZE*NLAYERS]
</div>
In Keras layers, the batch dimension is implicit! For a shape of [BATCHSIZE, SEQLEN, 1], you write [SEQLEN, 1]. In pure Tensorflow however, this is NOT the case.
End of explanation
"""
# Execute this cell to reset the model
NB_EPOCHS = 8
model = keras_model(BATCHSIZE, SEQLEN)
# this prints a description of the model
model.summary()
display_lr(lr_schedule, NB_EPOCHS)
"""
Explanation: The training loop
End of explanation
"""
# You can re-execute this cell to continue training
steps_per_epoch = (DATA_LEN-1) // SEQLEN // BATCHSIZE
#generator = rnn_minibatch_sequencer(data, BATCHSIZE, SEQLEN, NB_EPOCHS)
generator = dumb_minibatch_sequencer(data, BATCHSIZE, SEQLEN, NB_EPOCHS)
full_history = LossHistory()
history = model.fit_generator(generator,
steps_per_epoch=steps_per_epoch,
epochs=NB_EPOCHS,
callbacks=[full_history, lr_decay])
display_loss(history.history, full_history.history, NB_EPOCHS)
"""
Explanation: You can re-execute this cell to continue training
End of explanation
"""
# Inference from stateful model
def keras_prediction_run(model, prime_data, run_length):
model.reset_states()
data_len = prime_data.shape[0]
#prime_data = np.expand_dims(prime_data, axis=0) # single batch with everything
prime_data = np.expand_dims(prime_data, axis=-1) # each sequence is of size 1
# prime the state from data
for i in range(data_len - 1): # keep last sample to serve as the input sequence for predictions
model.predict(np.expand_dims(prime_data[i], axis=0))
# prediction run
results = []
Yout = prime_data[-1] # start predicting from the last element of the prime_data sequence
for i in range(run_length+1):
Yout = model.predict(Yout)
results.append(Yout[0,0]) # Yout shape is [1,1] i.e one sequence of one element
return np.array(results)
PRIMELEN=256
RUNLEN=512
OFFSET=20
RMSELEN=128
prime_data = data[OFFSET:OFFSET+PRIMELEN]
# For inference, we need a single RNN cell (no unrolling)
# Create a new model that takes a single sequence of a single value (i.e. just one RNN cell)
inference_model = keras_model(1, 1)
# Copy the trained weights into it
inference_model.set_weights(model.get_weights())
results = keras_prediction_run(inference_model, prime_data, RUNLEN)
picture_this_8(data, prime_data, results, OFFSET, PRIMELEN, RUNLEN, RMSELEN)
"""
Explanation: Inference
This is a generative model: run one trained RNN cell in a loop
End of explanation
"""
|
plipp/informatica-pfr-2017
|
nbs/5/1-Marvel-World-SNA-Intro.ipynb
|
mit
|
import networkx as nx
import csv
G = nx.Graph(name="Hero Network")
with open('../../data/hero-network.csv', 'r') as data:
reader = csv.reader(data)
for row in reader:
G.add_edge(*row)
nx.info(G)
G.order() # number of nodes
G.size() # number of edges
"""
Explanation: Social Network Analysis
Analysis of the Marvel Comic Universe
A description of all characters of the Marvel Universe can be found here.
Preparation: Install networkx
bash
conda install networkx
pip install python-louvain
The networkx documentation can be found here
End of explanation
"""
hero = 'MACE' # Jeffrey Mace, aka Captain America
ego=nx.ego_graph(G,hero,radius=1)
nx.info(ego)
import matplotlib.pyplot as plt
%matplotlib inline
import warnings; warnings.simplefilter('ignore')
pos = nx.spring_layout(ego)
nx.draw(ego,pos,node_color='b',node_size=50, with_labels=True)
# ego large and red
nx.draw_networkx_nodes(ego,pos,nodelist=[hero],node_size=300,node_color='r');
"""
Explanation: Graph Visualization
=> Nice, but Hairball-Effect
=> Let's try out Ego-Graphs
Ego Graph of an arbitrary Hero
End of explanation
"""
G.degree() # see also ego-graph above
G.degree('MACE')
# degree_centrality of node 'MACE' == standardized degree
G.degree('MACE')/(G.order()-1)
nx.degree_centrality(G)['MACE']
"""
Explanation: Most important Heroes
What does 'important' mean?
Degree:<br>
Number of connections: Measure of popularity.<br>
It is useful in determining nodes that can quickly spread information.
Betweenness:<br>
Shows which nodes are likely pathways of information, and can
be used to determine where the graph will break apart if the node is removed.
Closeness:<br>
This is a measure of reach, that is, how fast information will spread to all
other nodes from this particular node. Nodes with the most central closeness enjoy
short durations during broadcast communication.
Eigenvector:<br>
Measure of related influence. Who is closest to the most
important people in the graph? This can be used to show the power behind the
scenes, or to show relative influence beyond popularity.
For practical samples please check out these Centrality Exercises.
(Taken from Packt - Practical Datascience Cookbook)
1. Degree Concept
Degrees == number of connections of a node
Degree Centrality == percent of nodes in the graph that a node is connected to
*CD here is the Degree Centrality of the whole Graph.
End of explanation
"""
?nx.betweenness_centrality # SLOW!!!
"""
Explanation: 2. Betweenness Concept
On how many (shortest) paths does a node lie, and thus how much brokerage does it enable?
Calculation
End of explanation
"""
?nx.closeness_centrality # SLOW!!!
"""
Explanation: 3. Closeness Concept
Intuition: One still wants to be in the middle of things, not too far from the center
Calculation: Sum of Reciprocal Shortest Paths
End of explanation
"""
nx.eigenvector_centrality(G)
"""
Explanation: 4. Eigenvector Concept
PageRank => pages that are linked by popular pages have a higher PageRank
finds the most influential nodes
End of explanation
"""
# TODO
# TODO
# Cut off the highest values (>500)
# TODO
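# One possible solution sketch for this exercise (added for reference; your own
# approach may differ):
degrees = dict(G.degree())
top20 = sorted(degrees.items(), key=lambda kv: kv[1], reverse=True)[:20]
print(top20)
plt.hist(list(degrees.values()), bins=50)
plt.show()
# Cut off the highest values (>500) to see the bulk of the distribution
plt.hist([d for d in degrees.values() if d <= 500], bins=50)
plt.show()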
"""
Explanation: 1. Exercise
Determine the 20 most popular Heroes by their degrees
Draw the overall Degree Distribution (plt.hist)
End of explanation
"""
# TODO
"""
Explanation: 2. Exercise
Determine the 20 most influential Heroes by their eigenvector centrality
Also determine their degrees and compare with the degree distribution: are the most influential also the most popular Heroes?
End of explanation
"""
cc = list(nx.connected_components(G))
len(cc)
[len(c) for c in cc]
[list(c)[:10] for c in cc]
cc[3]
"""
Explanation: Communities
Connected Components (Graph Theory)
End of explanation
"""
import community
partition = community.best_partition(G)
partition # hero -> partion-no.
"""
Explanation: 'Real' Communities
Cliques: Every member is connected to every other member
k-Cores: Every member is connected with at least k other members
... n-Cliques, n-Clubs, ... (more restrictive than k-Cores)
End of explanation
"""
# TODO: How many partitions?
# TODO: How many heroes per partition?
# TODO members of the smallest community
# TODO histogram of community sizes
"""
Explanation: 3. Exercise
how many communities/partitions have been found?
how many heroes are in each community/partition?
how small is the smallest community and who are its members?
draw a histogram that shows the sizes of the individual communities.
End of explanation
"""
|
ljvmiranda921/pyswarms
|
docs/examples/tutorials/visualization.ipynb
|
mit
|
# Import modules
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
# Import PySwarms
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
from pyswarms.utils.plotters import (plot_cost_history, plot_contour, plot_surface)
"""
Explanation: Visualization
PySwarms implements tools for visualizing the behavior of your swarm. These are built on top of matplotlib, thus rendering charts that are easy to use and highly-customizable.
In this example, we will demonstrate three plotting methods available on PySwarms:
- plot_cost_history: for plotting the cost history of a swarm given a matrix
- plot_contour: for plotting swarm trajectories of a 2D-swarm in two-dimensional space
- plot_surface: for plotting swarm trajectories of a 2D-swarm in three-dimensional space
End of explanation
"""
options = {'c1':0.5, 'c2':0.3, 'w':0.9}
optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=2, options=options)
cost, pos = optimizer.optimize(fx.sphere, iters=100)
"""
Explanation: The first step is to create an optimizer. Here, we're going to use Global-best PSO to find the minima of a sphere function. As usual, we simply create an instance of its class pyswarms.single.GlobalBestPSO by passing the required parameters that we will use. Then, we'll call the optimize() method for 100 iterations.
End of explanation
"""
plot_cost_history(cost_history=optimizer.cost_history)
plt.show()
"""
Explanation: Plotting the cost history
To plot the cost history, we simply obtain the cost_history from the optimizer class and pass it to the cost_history function. Furthermore, this method also accepts a keyword argument **kwargs similar to matplotlib. This enables us to further customize various artists and elements in the plot. In addition, we can obtain the following histories from the same class:
- mean_neighbor_history: average local best history of all neighbors throughout optimization
- mean_pbest_history: average personal best of the particles throughout optimization
End of explanation
"""
from pyswarms.utils.plotters.formatters import Mesher
# Initialize mesher with sphere function
m = Mesher(func=fx.sphere)
"""
Explanation: Animating swarms
The plotters module offers two methods to perform animation, plot_contour() and plot_surface(). As its name suggests, these methods plot the particles in a 2-D or 3-D space.
Each animation method returns a matplotlib.animation.Animation instance that still needs to be rendered by a Writer class (thus necessitating the installation of a writer module). For the following examples, we will save the animations as GIF files and display them inline, which requires a few extra calls.
Lastly, it would be nice to add meshes in our swarm to plot the sphere function. This enables us to visually recognize where the particles are with respect to our objective function. We can accomplish that using the Mesher class.
End of explanation
"""
%%capture
# Make animation
animation = plot_contour(pos_history=optimizer.pos_history,
mesher=m,
mark=(0,0))
# Enables us to view it in a Jupyter notebook
animation.save('plot0.gif', writer='imagemagick', fps=10)
Image(url='plot0.gif')
"""
Explanation: There are different formatters available in the pyswarms.utils.plotters.formatters module to customize your plots and visualizations. Aside from Mesher, there is a Designer class for customizing font sizes, figure sizes, etc. and an Animator class to set delays and repeats during animation.
Plotting in 2-D space
We can obtain the swarm's position history using the pos_history attribute from the optimizer instance. To plot a 2D-contour, simply pass this together with the Mesher to the plot_contour() function. In addition, we can also mark the global minimum of the sphere function, (0,0), to visualize the swarm's "target".
End of explanation
"""
# Obtain a position-fitness matrix using the Mesher.compute_history_3d()
# method. It requires a cost history obtainable from the optimizer class
pos_history_3d = m.compute_history_3d(optimizer.pos_history)
# Make a designer and set the x,y,z limits to (-1,1), (-1,1) and (-0.1,1) respectively
from pyswarms.utils.plotters.formatters import Designer
d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)], label=['x-axis', 'y-axis', 'z-axis'])
%%capture
# Make animation
animation3d = plot_surface(pos_history=pos_history_3d, # Use the cost_history we computed
mesher=m, designer=d, # Customizations
mark=(0,0,0)) # Mark minima
animation3d.save('plot1.gif', writer='imagemagick', fps=10)
Image(url='plot1.gif')
"""
Explanation: Plotting in 3-D space
To plot in 3D space, we need a position-fitness matrix with shape (iterations, n_particles, 3). The first two columns indicate the x-y position of the particles, while the third column is the fitness of that given position. You need to set this up on your own, but we have provided a helper function to compute this automatically
End of explanation
"""
|
dereneaton/ipyrad
|
newdocs/API-analysis/cookbook-sharing.ipynb
|
gpl-3.0
|
%load_ext autoreload
%autoreload 2
%matplotlib inline
"""
Explanation: <h2><span style="color:gray">ipyrad-analysis toolkit:</span> sharing</h2>
Calculate and plot pairwise locus sharing and pairwise missingness
End of explanation
"""
# conda install -c conda-forge seaborn
import ipyrad
import ipyrad.analysis as ipa
from ipyrad.analysis.sharing import Sharing
# the path to your VCF or HDF5 formatted snps file
data = "/home/isaac/ipyrad/test-data/pedicularis/analysis-ipyrad/ped_outfiles/ped.snps.hdf5"
imap = {
"prz": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
"cys": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"],
"cya": ["30686_cyathophylla_SRR1754730"],
"sup": ["29154_superba_SRR1754715"],
"cup": ["33413_thamno_SRR1754728"],
"tha": ["30556_thamno_SRR1754720"],
"rck": ["35236_rex_SRR1754731"],
"rex": ["35855_rex_SRR1754726", "40578_rex_SRR1754724"],
"lip": ["39618_rex_SRR1754723", "38362_rex_SRR1754725"],
}
# load the snp data into sharing tool with arguments
from ipyrad.analysis.sharing import Sharing
share = Sharing(
data=data,
imap=imap,
)
share.run()
share.sharing_matrix
## Plot shared snps/missingness as proportions scaled to max values
fig, ax = share.draw()
## Plot shared snps/missingness as raw values
fig, ax = share.draw(scaled=False)
"""
Explanation: required software
This analysis tool requires the seaborn module for the heatmap plotting. toyplot also has a matrix function for plotting heatmaps, but I found that it grinds on assemblies with many taxa.
End of explanation
"""
imap = {
"prz": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
"cys": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"],
"cya": ["30686_cyathophylla_SRR1754730"],
"sup": ["29154_superba_SRR1754715"],
"cup": ["33413_thamno_SRR1754728"],
"tha": ["30556_thamno_SRR1754720"],
# "rck": ["35236_rex_SRR1754731"],
# "rex": ["35855_rex_SRR1754726", "40578_rex_SRR1754724"],
# "lip": ["39618_rex_SRR1754723", "38362_rex_SRR1754725"],
}
# load the snp data into sharing tool with arguments
share = Sharing(
data=data,
imap=imap,
)
share.run()
fig, ax = share.draw()
"""
Explanation: Removing samples from the sharing matrix is as simple as removing them from the imap
This can be a convenience for speeding up the pairwise calculations if you have lots of samples and only want to examine a few of them.
End of explanation
"""
imap = {
"prz": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
"cys": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"],
"cya": ["30686_cyathophylla_SRR1754730"],
"sup": ["29154_superba_SRR1754715"],
"cup": ["33413_thamno_SRR1754728"],
"tha": ["30556_thamno_SRR1754720"],
"rck": ["35236_rex_SRR1754731"],
"rex": ["35855_rex_SRR1754726", "40578_rex_SRR1754724"],
"lip": ["39618_rex_SRR1754723", "38362_rex_SRR1754725"],
}
# Hack to get a list of samples in some order
order = sum(imap.values(), [])[::-1]
print(order)
share = Sharing(
data=data,
imap=imap,
mincov=0.83,
)
share.run()
fig, ax = share.draw(keep_order=order)
"""
Explanation: The order of samples in the figure can be reconfigured with the keep_order argument
This allows for flexibly reordering samples in the figure without recalculating the sharing values. This parameter will accept a list or a dictionary, and will only plot the specified samples in list order.
End of explanation
"""
imap2 = {
"prz": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
"cys": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"],
"cya": ["30686_cyathophylla_SRR1754730"],
"sup": ["29154_superba_SRR1754715"],
"cup": ["33413_thamno_SRR1754728"],
"tha": ["30556_thamno_SRR1754720"],
# "rck": ["35236_rex_SRR1754731"],
# "rex": ["35855_rex_SRR1754726", "40578_rex_SRR1754724"],
# "lip": ["39618_rex_SRR1754723", "38362_rex_SRR1754725"],
}
_, _ = share.draw(keep_order=imap2)
"""
Explanation: An example of using keep_order to plot a subset of the imap samples
End of explanation
"""
fig, ax = share.draw(sort="loci")
imap
[item for sublist in imap.values() for item in sublist]
"""
Explanation: The matrices can also be sorted either by shared "loci" or shared "missing"
This will sort the rows/columns by mean locus sharing or mean missingness. Note that there is no need to rerun the pairwise calculations; this is just a manipulation of the existing data. The sort argument is superseded by the keep_order argument.
End of explanation
"""
|
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
|
pandas_intro_example-names.ipynb
|
bsd-2-clause
|
# !curl -O http://www.ssa.gov/oact/babynames/names.zip
# !mkdir -p data/names
# !mv names.zip data/names/
# !cd data/names/ && unzip names.zip
"""
Explanation: Example: Names in the Wild
This example is drawn from Wes McKinney's excellent book on the Pandas library, O'Reilly's Python for Data Analysis.
We'll be taking a look at a freely available dataset: the database of names given to babies in the United States over the last century.
First things first, we need to download the data, which can be found at http://www.ssa.gov/oact/babynames/limits.html.
If you uncomment the following commands, it will do this automatically (note that these are linux shell commands; they will not work on Windows):
End of explanation
"""
!ls data/names
"""
Explanation: Now we should have a data/names directory which contains a number of text files, one for each year of data:
End of explanation
"""
!head data/names/yob1880.txt
"""
Explanation: Let's take a quick look at one of these files:
End of explanation
"""
names1880 = pd.read_csv('data/names/yob1880.txt')
names1880.head()
"""
Explanation: Each file is just a comma-separated list of names, genders, and counts of babies with that name in each year.
We can load these files using pd.read_csv, which is specifically designed for this:
End of explanation
"""
names1880 = pd.read_csv('data/names/yob1880.txt',
names=['name', 'gender', 'births'])
names1880.head()
"""
Explanation: Oops! Something went wrong: read_csv used the first row of data as the column headers.
Let's fix this by specifying the column names manually:
End of explanation
"""
males = names1880[names1880.gender == 'M']
females = names1880[names1880.gender == 'F']
"""
Explanation: That looks better. Now we can start playing with the data a bit.
GroupBy: aggregates on values
First let's think about how we might count the total number of females and males born in the US in 1880.
If you're used to NumPy, you might be tempted to use masking like this:
First, we can get a mask over all females & males, and then use it to select a subset of the data:
End of explanation
"""
males.births.sum(), females.births.sum()
"""
Explanation: Now we can take the sum of the births for each of these:
End of explanation
"""
grouped = names1880.groupby('gender')
grouped
"""
Explanation: But there's an easier way to do this, using one of Pandas' very powerful features: groupby:
End of explanation
"""
grouped.sum()
"""
Explanation: This grouped object is now an abstract representation of the data, where the data is split on the given column.
In order to actually do something with this data, we need to specify an aggregation operation to do across the data.
In this case, what we want is the sum:
End of explanation
"""
grouped.size()
grouped.mean()
"""
Explanation: We can do other aggregations as well:
End of explanation
"""
grouped.describe()
"""
Explanation: Or, if we wish, we can get a description of the grouping:
End of explanation
"""
def load_year(year):
data = pd.read_csv('data/names/yob{0}.txt'.format(year),
names=['name', 'gender', 'births'])
data['year'] = year
return data
"""
Explanation: Concatenating multiple data sources
But here we've just been looking at a single year. Let's try to put together all the data in all the years.
To do this, we'll have to use pandas concat function to concatenate all the data together.
First we'll create a function which loads the data as we did the above data:
End of explanation
"""
names = pd.concat([load_year(year) for year in range(1880, 2014)])
names.head()
"""
Explanation: Now let's load all the data into a list, and call pd.concat on that list:
End of explanation
"""
births = names.groupby('year').births.sum()
births.head()
"""
Explanation: It looks like we've done it!
Let's start with something easy: we'll use groupby again to see the total number of births per year:
End of explanation
"""
births.plot();
"""
Explanation: We can use the plot() method to see a quick plot of these (note that because we used the %matplotlib inline magic at the start of the notebook, the resulting plot will be shown inline within the notebook).
End of explanation
"""
names.groupby('year').births.count().plot();
"""
Explanation: The so-called "baby boom" generation after the second world war is abundantly clear!
We can also use other aggregates: let's see how many names are used each year:
End of explanation
"""
def add_frequency(group):
group['birth_freq'] = group.births / group.births.sum()
return group
names = names.groupby(['year', 'gender']).apply(add_frequency)
names.head()
"""
Explanation: Apparently there's been a huge increase of the diversity of names with time!
groupby can also be used to add columns to the data: think of it as a view of the data that you're modifying. Let's add a column giving the frequency of each name within each year & gender:
End of explanation
"""
men = names[names.gender == 'M']
women = names[names.gender == 'F']
"""
Explanation: Notice that the apply() function iterates over each group, and calls a function which modifies the group.
This result is then re-constructed into a container which looks like the original dataframe.
Pivot Tables
Next we'll discuss Pivot Tables, which are an even more powerful way of (re)organizing your data.
Let's say that we want to plot the men and women separately.
We could do this by using masking, as follows:
End of explanation
"""
births = names.pivot_table('births',
index='year', columns='gender',
aggfunc=sum)
births.head()
"""
Explanation: And then we could proceed as above, using groupby to group on the year.
But we would end up with two different views of the data. A better way to do this is to use a pivot_table, which is essentially a groupby in multiple dimensions at once:
End of explanation
"""
births.plot(title='Total Births');
"""
Explanation: Note that this has grouped the index by the value of year, and grouped the columns by the value of gender.
Let's plot the results now:
End of explanation
"""
names_to_check = ['Allison', 'Alison']
# filter on just the names we're interested in
births = names[names.name.isin(names_to_check)]
# pivot table to get year vs. gender
births = births.pivot_table('births', index='year', columns='gender')
# fill all NaNs with zeros
births = births.fillna(0)
# normalize along columns
births = births.div(births.sum(1), axis=0)
births.plot(title='Fraction of babies named Allison');
"""
Explanation: Name Evolution Over Time
Some names have shifted over time from being mostly boys' names to being mostly girls' names, or vice versa. Let's take a look at some of these:
End of explanation
"""
# rolling() replaces the deprecated pd.rolling_mean in recent pandas versions
births.rolling(window=5).mean().plot(title="Allisons: 5-year moving average");
"""
Explanation: We can see that prior to about 1905, all babies named Allison were male. Over the 20th century, this reversed, until the end of the century nearly all Allisons were female!
There's some noise in this data: we can smooth it out a bit by using a 5-year rolling mean:
End of explanation
"""
|
statsmodels/statsmodels.github.io
|
v0.12.1/examples/notebooks/generated/ordinal_regression.ipynb
|
bsd-3-clause
|
import numpy as np
import pandas as pd
import scipy.stats as stats
from statsmodels.miscmodels.ordinal_model import OrderedModel
"""
Explanation: Ordinal Regression
End of explanation
"""
url = "https://stats.idre.ucla.edu/stat/data/ologit.dta"
data_student = pd.read_stata(url)
data_student.head(5)
data_student.dtypes
data_student['apply'].dtype
"""
Explanation: Loading a Stata data file from the UCLA website. This notebook is inspired by https://stats.idre.ucla.edu/r/dae/ordinal-logistic-regression/, which is an R notebook from UCLA.
End of explanation
"""
mod_prob = OrderedModel(data_student['apply'],
data_student[['pared', 'public', 'gpa']],
distr='probit')
res_prob = mod_prob.fit(method='bfgs')
res_prob.summary()
"""
Explanation: This dataset is about the probability for undergraduate students to apply to graduate school given three exogenous variables:
- their grade point average(gpa), a float between 0 and 4.
- pared, a binary that indicates if at least one parent went to graduate school.
- and public, a binary that indicates if the current undergraduate institution of the student is public or private.
apply, the target variable, is categorical with ordered categories: unlikely < somewhat likely < very likely. It is a pd.Series of categorical type, which is preferred over NumPy arrays.
The model is based on a numerical latent variable $y_{latent}$ that we cannot observe but that we can compute thanks to exogenous variables.
Moreover we can use this $y_{latent}$ to define $y$ that we can observe.
For more details see the documentation of OrderedModel, the UCLA webpage or this book.
Probit ordinal regression:
End of explanation
"""
num_of_thresholds = 2
mod_prob.transform_threshold_params(res_prob.params[-num_of_thresholds:])
"""
Explanation: In our model, we have 3 exogenous variables(the $\beta$s if we keep the documentation's notations) so we have 3 coefficients that need to be estimated.
Those 3 estimations and their standard errors can be retrieved in the summary table.
Since there are 3 categories in the target variable(unlikely, somewhat likely, very likely), we have two thresholds to estimate.
As explained in the doc of the method OrderedModel.transform_threshold_params, the first estimated threshold is the actual value and all the other thresholds are in terms of cumulative exponentiated increments. Actual threshold values can be computed as follows:
End of explanation
"""
mod_log = OrderedModel(data_student['apply'],
data_student[['pared', 'public', 'gpa']],
distr='logit')
res_log = mod_log.fit(method='bfgs', disp=False)
res_log.summary()
predicted = res_log.model.predict(res_log.params, exog=data_student[['pared', 'public', 'gpa']])
predicted
pred_choice = predicted.argmax(1)
print('Fraction of correct choice predictions')
print((np.asarray(data_student['apply'].values.codes) == pred_choice).mean())
"""
Explanation: Logit ordinal regression:
End of explanation
"""
# using a SciPy distribution
res_exp = OrderedModel(data_student['apply'],
data_student[['pared', 'public', 'gpa']],
distr=stats.expon).fit(method='bfgs', disp=False)
res_exp.summary()
# minimal definition of a custom scipy distribution.
class CLogLog(stats.rv_continuous):
def _ppf(self, q):
return np.log(-np.log(1 - q))
def _cdf(self, x):
return 1 - np.exp(-np.exp(x))
cloglog = CLogLog()
# definition of the model and fitting
res_cloglog = OrderedModel(data_student['apply'],
data_student[['pared', 'public', 'gpa']],
distr=cloglog).fit(method='bfgs', disp=False)
res_cloglog.summary()
"""
Explanation: Ordinal regression with a custom cumulative cLogLog distribution:
In addition to logit and probit regression, any continuous distribution from SciPy.stats package can be used for the distr argument. Alternatively, one can define its own distribution simply creating a subclass from rv_continuous and implementing a few methods.
End of explanation
"""
modf_logit = OrderedModel.from_formula("apply ~ 0 + pared + public + gpa", data_student,
distr='logit')
resf_logit = modf_logit.fit(method='bfgs')
resf_logit.summary()
"""
Explanation: Using formulas - treatment of endog
Pandas' ordered categorical and numeric values are supported as dependent variable in formulas. Other types will raise a ValueError.
End of explanation
"""
data_student["apply_codes"] = data_student['apply'].cat.codes * 2 + 5
data_student["apply_codes"].head()
OrderedModel.from_formula("apply_codes ~ 0 + pared + public + gpa", data_student,
distr='logit').fit().summary()
resf_logit.predict(data_student.iloc[:5])
"""
Explanation: Using numerical codes for the dependent variable is supported but loses the names of the category levels. The levels and names correspond to the unique values of the dependent variable sorted in alphanumeric order as in the case without using formulas.
End of explanation
"""
data_student["apply_str"] = np.asarray(data_student["apply"])
data_student["apply_str"].head()
OrderedModel.from_formula("apply_str ~ 0 + pared + public + gpa", data_student,
distr='logit')
"""
Explanation: Using string values directly as dependent variable raises a ValueError.
End of explanation
"""
nobs = len(data_student)
data_student["dummy"] = (np.arange(nobs) < (nobs / 2)).astype(float)
"""
Explanation: Using formulas - no constant in model
The parameterization of OrderedModel requires that there is no constant in the model, neither explicit nor implicit. The constant is equivalent to shifting all thresholds and is therefore not separately identified.
Patsy's formula specification does not allow a design matrix without explicit or implicit constant if there are categorical variables (or maybe splines) among explanatory variables. As a workaround, statsmodels removes an explicit intercept.
Consequently, there are two valid cases to get a design matrix without intercept.
specify a model without explicit and implicit intercept which is possible if there are only numerical variables in the model.
specify a model with an explicit intercept which statsmodels will remove.
Models with an implicit intercept will be overparameterized, the parameter estimates will not be fully identified, cov_params will not be invertible and standard errors might contain nans.
In the following we look at an example with an additional categorical variable.
End of explanation
"""
modfd_logit = OrderedModel.from_formula("apply ~ 1 + pared + public + gpa + C(dummy)", data_student,
distr='logit')
resfd_logit = modfd_logit.fit(method='bfgs')
print(resfd_logit.summary())
modfd_logit.k_vars
modfd_logit.k_constant
"""
Explanation: Explicit intercept, which will be removed:
Note: "1 +" is redundant here because it is patsy's default.
End of explanation
"""
OrderedModel.from_formula("apply ~ 0 + pared + public + gpa + C(dummy)", data_student,
distr='logit')
"""
Explanation: An implicit intercept creates an overparameterized model
Specifying "0 +" in the formula drops the explicit intercept. However, the categorical encoding is now changed to include an implicit intercept. In this example, the created dummy variables C(dummy)[0.0] and C(dummy)[1.0] sum to one.
End of explanation
"""
modfd2_logit = OrderedModel.from_formula("apply ~ 0 + pared + public + gpa + C(dummy)", data_student,
distr='logit', hasconst=False)
resfd2_logit = modfd2_logit.fit(method='bfgs')
print(resfd2_logit.summary())
resfd2_logit.predict(data_student.iloc[:5])
resf_logit.predict()
"""
Explanation: To see what would happen in the overparameterized case, we can avoid the constant check in the model by explicitly specifying whether a constant is present or not. We use hasconst=False, even though the model has an implicit constant.
The parameters of the two dummy variable columns and the first threshold are not separately identified. Estimates for those parameters and availability of standard errors are arbitrary and depend on numerical details that differ across environments.
Some summary measures like the log-likelihood value are not affected by this, within convergence tolerance and numerical precision. Prediction should also be possible. However, inference is not available, or is not valid.
End of explanation
"""
from statsmodels.discrete.discrete_model import Logit
from statsmodels.tools.tools import add_constant
"""
Explanation: Binary Model compared to Logit
If there are only two levels of the dependent ordered categorical variable, then the model can also be estimated by a Logit model.
The models are (theoretically) identical in this case except for the parameterization of the constant. Logit as most other models requires in general an intercept. This corresponds to the threshold parameter in the OrderedModel, however, with opposite sign.
The implementation differs and not all of the same results statistic and post-estimation features are available. Estimated parameters and other results statistic differ mainly based on convergence tolerance of the optimization.
End of explanation
"""
mask_drop = data_student['apply'] == "somewhat likely"
data2 = data_student.loc[~mask_drop, :]
# we need to remove the category also from the Categorical Index
data2['apply'].cat.remove_categories("somewhat likely", inplace=True)
data2.head()
mod_log = OrderedModel(data2['apply'],
data2[['pared', 'public', 'gpa']],
distr='logit')
res_log = mod_log.fit(method='bfgs', disp=False)
res_log.summary()
"""
Explanation: We drop the middle category from the data and keep the two extreme categories.
End of explanation
"""
ex = add_constant(data2[['pared', 'public', 'gpa']], prepend=False)
mod_logit = Logit(data2['apply'].cat.codes, ex)
res_logit = mod_logit.fit(method='bfgs', disp=False)
res_logit.summary()
"""
Explanation: The Logit model does not have a constant by default, we have to add it to our explanatory variables.
The results are essentially identical between Logit and the ordered model, up to numerical precision that mainly results from the convergence tolerance in the estimation.
The only difference is the sign of the constant: Logit and OrderedModel have constants of opposite sign. This is a consequence of the parameterization in terms of cut points in OrderedModel instead of including a constant column in the design matrix.
End of explanation
"""
res_logit_hac = mod_logit.fit(method='bfgs', disp=False, cov_type="hac", cov_kwds={"maxlags": 2})
res_log_hac = mod_log.fit(method='bfgs', disp=False, cov_type="hac", cov_kwds={"maxlags": 2})
res_logit_hac.bse.values - res_log_hac.bse
"""
Explanation: Robust standard errors are also available in OrderedModel in the same way as in discrete.Logit.
As an example, we specify the HAC covariance type even though we have cross-sectional data and autocorrelation is not appropriate.
End of explanation
"""
|
letsgoexploring/economicData
|
cross-country-production/python/cross_country_production_data.ipynb
|
mit
|
# Set the current value of the PWT data file
current_pwt_file = 'pwt100.xlsx'
# Import data from local source or download if not present
if os.path.exists('../xslx/'+current_pwt_file):
info = pd.read_excel('../xslx/'+current_pwt_file,sheet_name='Info',header=None)
legend = pd.read_excel('../xslx/'+current_pwt_file,sheet_name='Legend',index_col=0)
pwt = pd.read_excel('../xslx/'+current_pwt_file,sheet_name='Data',index_col=3,parse_dates=True)
else:
info = pd.read_excel('https://www.rug.nl/ggdc/docs/'+current_pwt_file,sheet_name='Info',header=None)
legend = pd.read_excel('https://www.rug.nl/ggdc/docs/'+current_pwt_file,sheet_name='Legend',index_col=0)
pwt = pd.read_excel('https://www.rug.nl/ggdc/docs/'+current_pwt_file,sheet_name='Data',index_col=3,parse_dates=True)
# Find PWT version
version = info.iloc[0][0].split(' ')[-1]
# Find base year for real variables
base_year = legend.loc['rgdpe']['Variable definition'].split(' ')[-1].split('US')[0]
# Most recent year
final_year = pwt[pwt['countrycode']=='USA'].sort_index().index[-1].year
metadata = pd.Series(dtype=str,name='Values')
metadata['version'] = version
metadata['base_year'] = base_year
metadata['final_year'] = final_year
metadata['gdp_per_capita_units'] = base_year+' dollars per person'
metadata.to_csv(csv_export_path+'/pwt_metadata.csv')
# Replace Côte d'Ivoire with Cote d'Ivoire
pwt['country'] = pwt['country'].str.replace(u"Côte d'Ivoire",u"Cote d'Ivoire")
# Merge country name and code
pwt['country'] = pwt['country']+' - '+pwt['countrycode']
# Create hierarchical index
pwt = pwt.set_index(['country',pwt.index])
# Display new DataFrame
pwt
"""
Explanation: Cross Country Production Data
This program extracts particular series from the Penn World Tables (PWT). Data and documentation for the PWT are available at https://pwt.sas.upenn.edu/. For additional reference see the article "The Next Generation of the Penn World Table" by Feenstra, Inklaar, and Timmer in the October 2015 issue of the American Economic Review (https://www.aeaweb.org/articles?id=10.1257/aer.20130954)
Import data and manage
End of explanation
"""
# Define a function that constructs data sets
def create_data_set(year0,pwtCode,per_capita,per_worker):
year0 = str(year0)
if per_capita:
data = pwt[pwtCode]/pwt['pop']
elif per_worker:
data = pwt[pwtCode]/pwt['emp']
else:
data = pwt[pwtCode]
data = data.unstack(level='country').loc[year0:].dropna(axis=1)
return data
"""
Explanation: Construct data sets
End of explanation
"""
# Create data sets
gdp_pc = create_data_set(year0=1960,pwtCode='rgdpo',per_capita=True,per_worker=False)
consumption_pc = create_data_set(year0=1960,pwtCode='ccon',per_capita=True,per_worker=False)
physical_capital_pc = create_data_set(year0=1960,pwtCode='cn',per_capita=True,per_worker=False)
human_capital_pc = create_data_set(year0=1960,pwtCode='hc',per_capita=False,per_worker=False)
# Find intsection of countries with data from 1960
intersection = gdp_pc.columns.intersection(consumption_pc.columns).intersection(physical_capital_pc.columns).intersection(human_capital_pc.columns)
# Adjust data
gdp_pc = gdp_pc[intersection]
consumption_pc = consumption_pc[intersection]
physical_capital_pc = physical_capital_pc[intersection]
human_capital_pc = human_capital_pc[intersection]
# Export to csv
gdp_pc.to_csv(csv_export_path+'/cross_country_gdp_per_capita.csv')
consumption_pc.to_csv(csv_export_path+'/cross_country_consumption_per_capita.csv')
physical_capital_pc.to_csv(csv_export_path+'/cross_country_physical_capital_per_capita.csv')
human_capital_pc.to_csv(csv_export_path+'/cross_country_human_capital_per_capita.csv')
"""
Explanation: Individual time series
End of explanation
"""
# Restrict data to final year
df = pwt.swaplevel(0, 1).sort_index().loc[(str(final_year),slice(None))].reset_index()
# Select columns: 'countrycode','country','rgdpo','emp','hc','cn'
df = df[['countrycode','country','rgdpo','emp','hc','cn']]
# Rename columns
df.columns = ['country_code','country','gdp','labor','human_capital','physical_capital']
# Remove country codes from country column
df['country'] = df['country'].str.split(' - ',expand=True)[0]
# Drop countries with missing observations
df = df.dropna()
# 3. Export data
df[['country_code','country','gdp','labor','human_capital','physical_capital']].to_csv(csv_export_path+'/cross_country_production.csv',index=False)
"""
Explanation: Multiple series for last year available
End of explanation
"""
# Load data
df = pd.read_csv('../csv/cross_country_gdp_per_capita.csv',index_col='year',parse_dates=True)
income60 = df.iloc[0]/1000
growth = 100*((df.iloc[-1]/df.iloc[0])**(1/(len(df.index)-1))-1)
# Construct plot
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1,1,1)
colors = ['red','blue','magenta','green']
plt.scatter(income60,growth,s=0.0001)
for i, txt in enumerate(df.columns):
ax.annotate(txt[-3:], (income60[i],growth[i]),fontsize=10,color = colors[np.mod(i,4)])
ax.grid()
ax.set_xlabel('GDP per capita in 1960\n (thousands of 2011 $ PPP)')
ax.set_ylabel('Real GDP per capita growth\nfrom '+str(df.index[0].year)+' to '+str(df.index[-1].year)+' (%)')
xlim = ax.get_xlim()
ax.set_xlim([0,xlim[1]])
fig.tight_layout()
# Save image
plt.savefig('../png/fig_GDP_GDP_Growth_site.png',bbox_inches='tight')
# Export notebook to python script
runProcs.exportNb('cross_country_income_data')
"""
Explanation: Plot for website
End of explanation
"""
|
scollins83/deep-learning
|
first-neural-network/Your_first_neural_network.ipynb
|
mit
|
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 + np.exp(-x)) # sigmoid activation for the hidden layer
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
#final_outputs = self.activation_function(final_inputs) # signals from final output layer
final_outputs = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# The output activation is f(x) = x, so its derivative is 1 and the
# output error term is simply the error itself.
output_error_term = error
# The hidden layer's contribution to the error, propagated back through
# the hidden-to-output weights.
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
# Backpropagated error term for the hidden layer (sigmoid derivative).
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# Update the weights: apply the learning rate and average over the records in the batch.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # the output layer is linear (f(x) = x), so no activation is applied here
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure. You'll implement the forward pass and the backpropagation through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
"""
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
"""
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more capacity the model has to fit the data. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
|
datascience-practice/data-quest
|
python_introduction/intermediate/Modules.ipynb
|
mit
|
import math
"""
Explanation: 3: The math module
Instructions
Use the sqrt() function within the math module to assign the square root of 16.0 to a.
Use the ceil() function within the math module to assign the ceiling of 111.3 to b.
Use the floor() function within the math module to assign the floor of 89.9 to c.
End of explanation
"""
a = math.sqrt(16.0)
b = math.ceil(111.3)
c = math.floor(89.9)
print(a, b, c)
"""
Explanation: Answer
End of explanation
"""
import math
print(math.pi)
"""
Explanation: 4: Variables within modules
Instructions
Assign the square root of pi to a.
Assign the ceiling of pi to b.
Assign the floor of pi to c.
End of explanation
"""
PI = math.pi
a = math.sqrt(PI)
b = math.ceil(PI)
c = math.floor(PI)
print(a, b, c)
"""
Explanation: Answer
End of explanation
"""
import csv
f = open("nfl.csv")
csvreader = csv.reader(f)
nfl = list(csvreader)
print(nfl)
"""
Explanation: 5: The csv module
Instructions
Read in all of the data from "nfl.csv" into a list variable named nfl using the csv module.
Answer
End of explanation
"""
import csv
f = open("nfl.csv")
reader = csv.reader(f)
data = list(reader)
patriots_wins = 0
for row in data:
if row[2] == "New England Patriots":
patriots_wins += 1
print(patriots_wins)
"""
Explanation: 6: Counting how many times a team won
Instructions
Fill in the mission code to do the following:
Import and use the csv module to load data from our "nfl.csv" file.
Count how many games the "New England Patriots" won from 2009-2013. To do this, set a counter to 0, and increment by 1 whenever you see a row whose winner column is equal to "New England Patriots".
Assign the count to patriots_wins.
Answer
End of explanation
"""
import csv
f = open("nfl.csv", 'r')
nfl = list(csv.reader(f))
# Define your function here
def nfl_wins(team):
team_wins = 0
for row in nfl:
if row[2] == team:
team_wins += 1
return team_wins
cowboys_wins = nfl_wins("Dallas Cowboys")
falcons_wins = nfl_wins("Atlanta Falcons")
print(cowboys_wins, falcons_wins)
"""
Explanation: 7: Making a function to count wins
Instructions
Write a function called nfl_wins that will take a team name as input.
The function should return the number of wins the team had from 2009-2013.
Use the function to assign the number of wins by the "Dallas Cowboys" to cowboys_wins.
Use the function to assign the number of wins by the "Atlanta Falcons" to falcons_wins.
End of explanation
"""
import csv
f = open("nfl.csv", 'r')
nfl = list(csv.reader(f))
def nfl_wins(team):
count = 0
for row in nfl:
if row[2] == team:
count = count + 1
return count
def nfl_wins_in_a_year(team, year):
count = 0
for row in nfl:
if row[0] == year and row[2] == team:
count = count + 1
return count
browns_2010_wins = nfl_wins_in_a_year("Cleveland Browns", "2010")
eagles_2011_wins = nfl_wins_in_a_year("Philadelphia Eagles", "2011")
print(browns_2010_wins, eagles_2011_wins)
"""
Explanation: 10: Counting wins in a given year
Instructions
Modify the nfl_wins function so that it takes a team name, in the form of a string, and a year, also in the form of a string, as input. Call this new function nfl_wins_in_a_year
Your function should output the number of wins the team had in the given year, as an integer. Use the and operator to combine booleans, checking whether the desired team won and whether the game happened in the correct year for each row in the dataset.
Use your function to assign the number of wins by the "Cleveland Browns" in "2010" to browns_2010_wins.
Use your function to assign the number of wins by the "Philadelphia Eagles" in "2011" to eagles_2011_wins.
End of explanation
"""
|
eds-uga/csci1360e-su17
|
lectures/L6.ipynb
|
mit
|
x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]
"""
Explanation: Lecture 6: Conditionals and Exceptions
CSCI 1360E: Foundations for Informatics and Analytics
Overview and Objectives
In this lecture, we'll go over how to make "decisions" over the course of your code depending on the values certain variables take. We'll also introduce exceptions and how to handle them gracefully. By the end of the lecture, you should be able to
Build arbitrary conditional hierarchies to test a variety of possible circumstances
Construct elementary boolean logic statements
Catch basic errors and present meaningful error messages in lieu of a Python crash
Part 1: Conditionals
Up until now, we've been somewhat hobbled in our coding prowess; we've lacked the tools to make different decisions depending on the values our variables take.
For example: how do you find the maximum value in a list of numbers?
End of explanation
"""
max_val = 0
for element in x:
# ... now what?
pass
"""
Explanation: If we want to figure out the maximum value, we'll obviously need a loop to check each element of the list (which we know how to do), and a variable to store the maximum.
End of explanation
"""
x = 5
if x < 5:
print("How did this happen?!") # Spoiler alert: this won't happen.
if x == 5:
print("Working as intended.")
"""
Explanation: We also know we can check relative values, like max_val < element. If this evaluates to True, we know we've found a number in the list that's bigger than our current candidate for maximum value. But how do we execute code only under this condition, and this condition alone?
Enter if / elif / else statements! (otherwise known as "conditionals")
We can use the keyword if, followed by a statement that evaluates to either True or False, to determine whether or not to execute the code. For a straightforward example:
End of explanation
"""
x = 5
if x < 5:
print("How did this happen?!") # Spoiler alert: this won't happen.
else:
print("Correct.")
"""
Explanation: In conjunction with if, we also have an else clause that we can use to execute whenever the if statement doesn't:
End of explanation
"""
x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]
max_val = 0
for element in x:
if max_val < element:
max_val = element
print("The maximum element is: {}".format(max_val))
"""
Explanation: This is great! We can finally finish computing the maximum element of a list!
End of explanation
"""
student_grades = {
'Jen': 82,
'Shannon': 75,
'Natasha': 94,
'Benjamin': 48,
}
"""
Explanation: Let's pause here and walk through that code.
x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]
- This code defines the list we want to look at.
max_val = 0
- And this is a placeholder for the eventual maximum value.
for element in x:
- A standard for loop header: we're iterating over the list x, one at a time storing its elements in the variable element.
if max_val < element:
- The first line of the loop body is an if statement. This statement asks: is the value in our current max_val placeholder smaller than the element of the list stored in element?
max_val = element
- If the answer to that if statement is True, then this line executes: it sets our placeholder equal to the current list element.
Let's look at slightly more complicated but utterly classic example: assigning letter grades from numerical grades.
End of explanation
"""
letter = ''
for student, grade in student_grades.items():
if grade >= 90:
letter = "A"
elif grade >= 80:
letter = "B"
elif grade >= 70:
letter = "C"
elif grade >= 60:
letter = "D"
else:
letter = "F"
print(student, letter)
"""
Explanation: We know the 90-100 range is an "A", 80-89 is a "B", and so on. How would we build a conditional to assign letter grades?
The third and final component of conditionals is the elif statement (short for "else if").
elif allows us to evaluate as many options as we'd like, all within the same conditional context (this is important). So for our grading example, it might look like this:
End of explanation
"""
x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]
max_val = 81 # We've already found it!
second_largest = 0
"""
Explanation: Ok, that's neat. But there's still one more edge case: what happens if we want to enforce multiple conditions simultaneously?
To illustrate, let's go back to our example of finding the maximum value in a list, and this time, let's try to find the second-largest value in the list. For simplicity, let's say we've already found the largest value.
End of explanation
"""
True and True and True and True and True and True and False
"""
Explanation: Here's the rub: we now have two constraints to enforce--the second largest value needs to be larger than pretty much everything in the list, but also needs to be smaller than the maximum value. Not something we can encode using if / elif / else.
Instead, we'll use two more keywords integral to conditionals: and and or.
You've already seen and: this is used to join multiple boolean statements together in such a way that, if one of the statements is False, the entire statement is False.
End of explanation
"""
True or True or True or True or True or True or False
False or False or False or False or False or False or True
"""
Explanation: One False ruins the whole thing.
However, we haven't encountered or before. How do you think it works?
Here's are two examples:
End of explanation
"""
(True and False) or (True or False)
"""
Explanation: Figured it out?
Whereas and needs every statement it joins to be True in order for the whole statement to be True, only one statement among those joined by or needs to be True for everything to be True.
How about this example?
End of explanation
"""
for element in x:
if second_largest < element and element < max_val:
second_largest = element
print("The second-largest element is: {}".format(second_largest))
"""
Explanation: (Order of operations works the same way!)
Getting back to conditionals, then: we can use this boolean logic to enforce multiple constraints simultaneously.
End of explanation
"""
second_largest = 0
for element in x:
if second_largest < element:
if element < max_val:
second_largest = element
print("The second-largest element is: {}".format(second_largest))
"""
Explanation: Let's step through the code.
for element in x:
if second_largest < element and element < max_val:
second_largest = element
The first condition, second_largest < element, is the same as before: if our current estimate of the second largest element is smaller than the latest element we're looking at, it's definitely a candidate for second-largest.
The second condition, element < max_val, is what ensures we don't just pick the largest value again. This enforces the constraint that the current element we're looking at is also less than the maximum value.
The and keyword glues these two conditions together: it requires that they BOTH be True before the code inside the statement is allowed to execute.
It would be easy to replicate this with "nested" conditionals:
End of explanation
"""
numbers = [1, 2, 5, 6, 7, 9, 10]
for num in numbers:
if num == 2 or num == 4 or num == 6 or num == 8 or num == 10:
print("{} is an even number.".format(num))
"""
Explanation: ...but your code starts getting a little unwieldy with so many indentations.
You can glue as many comparisons as you want together with and; the whole statement will only be True if every single condition evaluates to True. This is what and means: everything must be True.
The other side of this coin is or. Like and, you can use it to glue together multiple constraints. Unlike and, the whole statement will evaluate to True as long as at least ONE condition is True. This is far less stringent than and, where ALL conditions had to be True.
End of explanation
"""
import random
list_of_numbers = [i for i in range(10)] # Generates the numbers 0 through 9.
if 13 not in list_of_numbers:
print("Aw man, my lucky number isn't here!")
"""
Explanation: In this contrived example, I've glued together a bunch of constraints. Obviously, these constraints are mutually exclusive; a number can't be equal to both 2 and 4 at the same time, so num == 2 and num == 4 would never evaluate to True. However, using or, only one of them needs to be True for the statement underneath to execute.
There's a little bit of intuition to it.
"I want this AND this" has the implication of both at once.
"I want this OR this" sounds more like either one would be adequate.
One other important tidbit, concerning not only conditionals, but also lists and booleans: the not keyword.
An often-important task in data science, when you have a list of things, is querying whether or not some new piece of information you just received is already in your list. You could certainly loop through the list, asking "is my new_item == list[item i]?". But, thankfully, there's a better way:
End of explanation
"""
import random
list_of_numbers = [i for i in range(10)] # Generates the numbers 0 through 9.
if 13 in list_of_numbers:
print("Somehow the number 13 is in a list generated by range(10)")
"""
Explanation: Notice a couple things here--
List comprehensions make an appearance! Can you parse it out?
The if statement asks if the number 13 is NOT found in list_of_numbers
When that statement evaluates to True--meaning the number is NOT found--it prints the message.
If you omit the not keyword, then the question becomes: "is this number in the list?"
End of explanation
"""
def divide(x, y):
return x / y
divide(11, 0)
"""
Explanation: Nothing is printed in this case, since our conditional is asking if the number 13 was in the list. Which it's not.
Be careful with this. Typing issues can hit you full force here: if you ask:
if 0 in some_list
and it's a list of floats produced by computation, this check can evaluate to False even when a value is extremely close to 0, because in tests for exact equality.
Similarly, if you ask if "shannon" in name_list, it will look for the precise string "shannon" and return False even if the string "Shannon" is in the list. With great power, etc etc.
Part 2: Error Handling
Yes, errors: plaguing us since Windows 95 (but really, since well before then).
By now, I suspect you've likely seen your fair share of Python crashes.
NotImplementedError from the homework assignments
TypeError from trying to multiply an integer by a string
KeyError from attempting to access a dictionary key that didn't exist
IndexError from referencing a list beyond its actual length
or any number of other error messages. These are the standard way in which Python (and most other programming languages) handles error messages.
The error is known as an Exception. Some other terminology here includes:
An exception is raised when such an error occurs. This is why you see the code snippet raise NotImplementedError in your homeworks. In other languages such as Java, an exception is "thrown" instead of "raised", but the meanings are equivalent.
When you are writing code that could potentially raise an exception, you can also write code to catch the exception and handle it yourself. When an exception is caught, that means it is handled without crashing the program.
Here's a fairly classic example: divide by zero!
Let's say we're designing a simple calculator application that divides two numbers. We'll ask the user for two numbers, divide them, and return the quotient. Seems simple enough, right?
End of explanation
"""
def divide_safe(x, y):
quotient = 0
try:
quotient = x / y
except ZeroDivisionError:
print("You tried to divide by zero. Why would you do that?!")
return quotient
"""
Explanation: D'oh! The user fed us a 0 for the denominator and broke our calculator. Meanie-face.
So we know there's a possibility of the user entering a 0. This could be malicious or simply by accident. Since it's only one value that could crash our app, we could in principle have an if statement that checks if the denominator is 0. That would be simple and perfectly valid.
But for the sake of this lecture, let's assume we want to try and catch the ZeroDivisionError ourselves and handle it gracefully.
To do this, we use something called a try / except block, which is very similar in its structure to if / elif / else blocks.
First, put the code that could potentially crash your program inside a try statement. Under that, have an except statement that defines
A variable for the error you're catching, and
Any code that dictates how you want to handle the error
End of explanation
"""
divide_safe(11, 0)
"""
Explanation: Now if our user tries to be snarky again--
End of explanation
"""
import random # For generating random exceptions.
num = random.randint(0, 1)
try:
# Code that can raise more than one kind of exception, depending on num:
if num == 1:
raise NameError("This happens when you use a variable you haven't defined")
else:
raise ValueError("This happens when you try to multiply a string")
except NameError:
print("Caught a NameError!")
except ValueError:
print("Nope, it was actually a ValueError.")
"""
Explanation: No error, no crash! Just a "helpful" error message.
Like conditionals, you can also create multiple except statements to handle multiple different possible exceptions:
End of explanation
"""
import random # For generating random exceptions.
num = random.randint(0, 1)
try:
if num == 1:
raise NameError("This happens when you use a variable you haven't defined")
else:
raise ValueError("This happens when you try to multiply a string")
except (NameError, ValueError): # MUST have the parentheses!
print("Caught...well, some kinda error, not sure which.")
"""
Explanation: Also like conditionals, you can handle multiple errors simultaneously. If, like in the previous example, your code can raise multiple exceptions, but you want to handle them all the same way, you can stack them all in one except statement:
End of explanation
"""
import random # For generating random exceptions.
num = random.randint(0, 1)
try:
if num == 1:
raise NameError("This happens when you use a variable you haven't defined")
else:
raise ValueError("This happens when you try to multiply a string")
except:
print("I caught something!")
"""
Explanation: If you're like me, and you're writing code that you know could raise one of several errors, but are too lazy to look up specifically what errors are possible, you can create a "catch-all" by just not specifying anything:
End of explanation
"""
import random # For generating random exceptions.
num = random.randint(0, 1)
try:
if num == 1:
raise NameError("This happens when you use a variable you haven't defined")
except:
print("I caught something!")
else:
print("HOORAY! Lucky coin flip!")
"""
Explanation: Finally--and this is really getting into what's known as control flow (quite literally: "controlling the flow" of your program)--you can tack an else statement onto the very end of your exception-handling block to add some final code to the handler.
Why? This is code that is only executed if NO exception occurs. Let's go back to our random number example: instead of raising one of two possible exceptions, we'll raise an exception only if we flip a 1.
End of explanation
"""
|
aje/POT
|
notebooks/plot_gromov_barycenter.ipynb
|
mit
|
# Author: Erwan Vautier <erwan.vautier@gmail.com>
# Nicolas Courty <ncourty@irisa.fr>
#
# License: MIT License
import numpy as np
import scipy as sp
import scipy.ndimage as spi
import matplotlib.pylab as pl
from sklearn import manifold
from sklearn.decomposition import PCA
import ot
"""
Explanation: Gromov-Wasserstein Barycenter example
This example is designed to show how to use the Gromov-Wasserstein distance
computation in POT.
End of explanation
"""
def smacof_mds(C, dim, max_iter=3000, eps=1e-9):
"""
Returns an interpolated point cloud following the dissimilarity matrix C
using SMACOF multidimensional scaling (MDS) in a target space of the specified dimension
Parameters
----------
C : ndarray, shape (ns, ns)
dissimilarity matrix
dim : int
dimension of the targeted space
max_iter : int
Maximum number of iterations of the SMACOF algorithm for a single run
eps : float
relative tolerance w.r.t stress to declare converge
Returns
-------
npos : ndarray, shape (R, dim)
Embedded coordinates of the interpolated point cloud (defined with
one isometry)
"""
rng = np.random.RandomState(seed=3)
mds = manifold.MDS(
dim,
max_iter=max_iter,
eps=1e-9,
dissimilarity='precomputed',
n_init=1)
pos = mds.fit(C).embedding_
nmds = manifold.MDS(
2,
max_iter=max_iter,
eps=1e-9,
dissimilarity="precomputed",
random_state=rng,
n_init=1)
npos = nmds.fit_transform(C, init=pos)
return npos
"""
Explanation: Smacof MDS
This function allows us to find an embedding of points, given a dissimilarity matrix
that will be produced as the output of the algorithm.
End of explanation
"""
def im2mat(I):
"""Converts and image to matrix (one pixel per line)"""
return I.reshape((I.shape[0] * I.shape[1], I.shape[2]))
square = spi.imread('../data/square.png').astype(np.float64)[:, :, 2] / 256
cross = spi.imread('../data/cross.png').astype(np.float64)[:, :, 2] / 256
triangle = spi.imread('../data/triangle.png').astype(np.float64)[:, :, 2] / 256
star = spi.imread('../data/star.png').astype(np.float64)[:, :, 2] / 256
shapes = [square, cross, triangle, star]
S = 4
xs = [[] for i in range(S)]
for nb in range(4):
for i in range(8):
for j in range(8):
if shapes[nb][i, j] < 0.95:
xs[nb].append([j, 8 - i])
xs = np.array([np.array(xs[0]), np.array(xs[1]),
np.array(xs[2]), np.array(xs[3])])
"""
Explanation: Data preparation
The four distributions are constructed from 4 simple images
End of explanation
"""
ns = [len(xs[s]) for s in range(S)]
n_samples = 30
"""Compute all distances matrices for the four shapes"""
Cs = [sp.spatial.distance.cdist(xs[s], xs[s]) for s in range(S)]
Cs = [cs / cs.max() for cs in Cs]
ps = [ot.unif(ns[s]) for s in range(S)]
p = ot.unif(n_samples)
lambdast = [[float(i) / 3, float(3 - i) / 3] for i in [1, 2]]
Ct01 = [0 for i in range(2)]
for i in range(2):
Ct01[i] = ot.gromov.gromov_barycenters(n_samples, [Cs[0], Cs[1]],
[ps[0], ps[1]
], p, lambdast[i], 'square_loss', # 5e-4,
max_iter=100, tol=1e-3)
Ct02 = [0 for i in range(2)]
for i in range(2):
Ct02[i] = ot.gromov.gromov_barycenters(n_samples, [Cs[0], Cs[2]],
[ps[0], ps[2]
], p, lambdast[i], 'square_loss', # 5e-4,
max_iter=100, tol=1e-3)
Ct13 = [0 for i in range(2)]
for i in range(2):
Ct13[i] = ot.gromov.gromov_barycenters(n_samples, [Cs[1], Cs[3]],
[ps[1], ps[3]
], p, lambdast[i], 'square_loss', # 5e-4,
max_iter=100, tol=1e-3)
Ct23 = [0 for i in range(2)]
for i in range(2):
Ct23[i] = ot.gromov.gromov_barycenters(n_samples, [Cs[2], Cs[3]],
[ps[2], ps[3]
], p, lambdast[i], 'square_loss', # 5e-4,
max_iter=100, tol=1e-3)
"""
Explanation: Barycenter computation
End of explanation
"""
clf = PCA(n_components=2)
npos = [0, 0, 0, 0]
npos = [smacof_mds(Cs[s], 2) for s in range(S)]
npost01 = [0, 0]
npost01 = [smacof_mds(Ct01[s], 2) for s in range(2)]
npost01 = [clf.fit_transform(npost01[s]) for s in range(2)]
npost02 = [0, 0]
npost02 = [smacof_mds(Ct02[s], 2) for s in range(2)]
npost02 = [clf.fit_transform(npost02[s]) for s in range(2)]
npost13 = [0, 0]
npost13 = [smacof_mds(Ct13[s], 2) for s in range(2)]
npost13 = [clf.fit_transform(npost13[s]) for s in range(2)]
npost23 = [0, 0]
npost23 = [smacof_mds(Ct23[s], 2) for s in range(2)]
npost23 = [clf.fit_transform(npost23[s]) for s in range(2)]
fig = pl.figure(figsize=(10, 10))
ax1 = pl.subplot2grid((4, 4), (0, 0))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax1.scatter(npos[0][:, 0], npos[0][:, 1], color='r')
ax2 = pl.subplot2grid((4, 4), (0, 1))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax2.scatter(npost01[1][:, 0], npost01[1][:, 1], color='b')
ax3 = pl.subplot2grid((4, 4), (0, 2))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax3.scatter(npost01[0][:, 0], npost01[0][:, 1], color='b')
ax4 = pl.subplot2grid((4, 4), (0, 3))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax4.scatter(npos[1][:, 0], npos[1][:, 1], color='r')
ax5 = pl.subplot2grid((4, 4), (1, 0))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax5.scatter(npost02[1][:, 0], npost02[1][:, 1], color='b')
ax6 = pl.subplot2grid((4, 4), (1, 3))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax6.scatter(npost13[1][:, 0], npost13[1][:, 1], color='b')
ax7 = pl.subplot2grid((4, 4), (2, 0))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax7.scatter(npost02[0][:, 0], npost02[0][:, 1], color='b')
ax8 = pl.subplot2grid((4, 4), (2, 3))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax8.scatter(npost13[0][:, 0], npost13[0][:, 1], color='b')
ax9 = pl.subplot2grid((4, 4), (3, 0))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax9.scatter(npos[2][:, 0], npos[2][:, 1], color='r')
ax10 = pl.subplot2grid((4, 4), (3, 1))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax10.scatter(npost23[1][:, 0], npost23[1][:, 1], color='b')
ax11 = pl.subplot2grid((4, 4), (3, 2))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax11.scatter(npost23[0][:, 0], npost23[0][:, 1], color='b')
ax12 = pl.subplot2grid((4, 4), (3, 3))
pl.xlim((-1, 1))
pl.ylim((-1, 1))
ax12.scatter(npos[3][:, 0], npos[3][:, 1], color='r')
"""
Explanation: Visualization
The PCA helps in getting consistency between the rotations
End of explanation
"""
|
hannorein/rebound
|
ipython_examples/Resonances_of_Jupiters_moons.ipynb
|
gpl-3.0
|
import rebound
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
sim = rebound.Simulation()
sim.units = ('AU', 'days', 'Msun')
# We can add Jupiter and four of its moons by name, since REBOUND is linked to the HORIZONS database.
labels = ["Jupiter", "Io", "Europa","Ganymede","Callisto"]
sim.add(labels)
"""
Explanation: Resonances of Jupiter's moons, Io, Europa, and Ganymede
Example provided by Deborah Lokhorst. In this example, the four Galilean moons of Jupiter are downloaded from HORIZONS and their orbits are integrated forwards in time. This is a well-known example of a 1:2:4 resonance (also called Laplace resonance) in orbiting bodies. We calculate the resonant arguments and see them oscillate with time. We also perform a Fast Fourier Transform (FFT) on the x-position of Io, to look for the period of oscillations caused by the 2:1 resonance between Io and Europa.
Let us first import REBOUND, numpy and matplotlib. We then download the current coordinates for Jupiter and its moons from the NASA HORIZONS database. We work in units of AU, days and solar masses.
End of explanation
"""
os = sim.calculate_orbits()
print("n_i (in rad/days) = %6.3f, %6.3f, %6.3f" % (os[0].n,os[1].n,os[2].n))
print("P_i (in days) = %6.3f, %6.3f, %6.3f" % (os[0].P,os[1].P,os[2].P))
"""
Explanation: Let us now calculate the mean motions and periods of the inner three moons.
End of explanation
"""
sim.move_to_com()
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, periastron=True)
"""
Explanation: We can see that the mean motions of each moon are twice that of the moon inner to it and the periods of each moon are half that of the moon inner to it. This means we are close to a 4:2:1 resonance.
Let's move to the center of mass (COM) frame and plot the orbits of the four moons around Jupiter:
End of explanation
"""
sim.integrator = "whfast"
sim.dt = 0.05 * os[0].P # 5% of Io's period
Nout = 100000 # number of points to display
tmax = 80*365.25 # let the simulation run for 80 years
Nmoons = 4
"""
Explanation: Note that REBOUND automatically plots Jupiter as the central body in this frame, complete with a star symbol (not completely representative of this case, but it'll do).
We can now start integrating the system forward in time. This example uses the symplectic Wisdom-Holman type whfast integrator since no close encounters are expected. The timestep is set to 5% of one of Io's orbits.
End of explanation
"""
x = np.zeros((Nmoons,Nout))
ecc = np.zeros((Nmoons,Nout))
longitude = np.zeros((Nmoons,Nout))
varpi = np.zeros((Nmoons,Nout))
times = np.linspace(0.,tmax,Nout)
ps = sim.particles
for i,time in enumerate(times):
sim.integrate(time)
# note we use integrate() with the default exact_finish_time=1, which changes the timestep near
# the outputs to match the output times we want. This is what we want for a Fourier spectrum,
# but technically breaks WHFast's symplectic nature. Not a big deal here.
os = sim.calculate_orbits()
for j in range(Nmoons):
x[j][i] = ps[j+1].x
ecc[j][i] = os[j].e
longitude[j][i] = os[j].l
varpi[j][i] = os[j].Omega + os[j].omega
"""
Explanation: Similar to as was done in the Fourier analysis & resonances example, we set up several arrays to hold values as the simulation runs. This includes the positions of the moons, eccentricities, mean longitudes, and longitude of pericentres.
End of explanation
"""
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
plt.plot(times,ecc[0],label=labels[1])
plt.plot(times,ecc[1],label=labels[2])
plt.plot(times,ecc[2],label=labels[3])
plt.plot(times,ecc[3],label=labels[4])
ax.set_xlabel("Time (days)")
ax.set_ylabel("Eccentricity")
plt.legend();
"""
Explanation: If we plot the eccentricities as a function of time, we can see that they oscillate significantly for the three inner moons, which are in resonance with each other. Contrasting with these large oscillations is the smaller oscillation of the outer Galilean moon, Callisto, which is shown for comparison. The three inner moons are in resonance, 1:2:4, but Callisto is not quite in resonance with them, though it is expected to migrate into resonance with them eventually.
Also visible is the gradual change in eccentricity as a function of time: Callisto's mean eccentricity is decreasing and Ganymede's mean eccentricity is increasing. This is a secular change due to the interactions with the inner moons.
End of explanation
"""
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
plt.plot(times,x[0],label=labels[1])
plt.plot(times,x[1],label=labels[2])
plt.plot(times,x[2],label=labels[3])
plt.plot(times,x[3],label=labels[4])
ax.set_xlim(0,0.2*365.25)
ax.set_xlabel("Time (days)")
ax.set_ylabel("x locations (AU)")
ax.tick_params()
plt.legend();
"""
Explanation: We can plot their x-locations as a function of time as well, and observe their relative motions around Jupiter.
End of explanation
"""
def zeroTo360(val):
while val < 0:
val += 2*np.pi
while val > 2*np.pi:
val -= 2*np.pi
return (val*180/np.pi)
def min180To180(val):
while val < -np.pi:
val += 2*np.pi
while val > np.pi:
val -= 2*np.pi
return (val*180/np.pi)
# We can calculate theta, the resonant argument of the 1:2 Io-Europa orbital resonance,
# which oscillates about 0 degrees:
theta = [min180To180(2.*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)]
# There is also a secular resonance argument, corresponding to the difference in the longitude of perihelions:
# This angle oscillates around 180 degs, with a longer period component.
theta_sec = [zeroTo360(-varpi[1][i] + varpi[0][i]) for i in range(Nout)]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(times,theta)
ax.plot(times,theta_sec) # secular resonance argument
ax.set_xlim([0,20.*365.25])
ax.set_ylim([-180,360.])
ax.set_xlabel("time (days)")
ax.set_ylabel(r"resonant argument $\theta_{2:1}$")
ax.plot([0,100],[180,180],'k--')
ax.plot([0,100],[0,0],'k--')
"""
Explanation: Resonances are identified by looking at the resonant arguments, which are defined as:
$$ \theta = (p + q)\lambda_{\rm out} - p \lambda_{\rm in} - q \omega_{\rm out/in}$$
where $\lambda_{\rm out}$ and $\lambda_{\rm in}$ are the mean longitudes of the outer and inner bodies, respectively,
and $\omega_{\rm out}$ is the longitude of pericenter of the outer/inner body.
The ratio of periods is defined as : $$P_{\rm in}/P_{\rm out} ~= p / (p + q)$$
If the resonant argument, $\theta$, oscillates but is constrained within some range of angles, then
there is a resonance between the inner and outer bodies. We call this libration of the angle $\theta$.
The trick is to find what the values of q and p are. For our case, we can easily see that
there are two 2:1 resonances between the moons, so their resonant arguments would follow
the function:
$$\theta = 2 \lambda_{\rm out} - \lambda_{\rm in} - \omega_{\rm out}$$
To make the plotting easier, we can borrow this helper function that puts angles into 0 to 360 degrees
from another example (Fourier analysis & resonances), and define a new one that puts angles
into -180 to 180 degrees.
End of explanation
"""
thetaL = [zeroTo360(-longitude[0][i] + 3.*longitude[1][i] - 2.*longitude[2][i]) for i in range(Nout)]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(times,thetaL)
ax.set_ylim([0,360.])
ax.set_xlabel("time (days)")
ax.set_ylabel(r"libration argument $\theta_{2:1}$")
ax.plot([0,200],[180,180],'k--')
"""
Explanation: Io, Europa and Ganymede are in a Laplace 1:2:4 resonance,
which additionally has a longer period libration argument that depends on all three of
their mean longitudes, that appears slightly in the other resonant arguments:
End of explanation
"""
from scipy import signal
Npts = 3000
# look for periodicities with periods logarithmically spaced between 0.001 yrs and 10 yrs
logPmin = np.log10(0.001*365.25)
logPmax = np.log10(10.*365.25)
# set up a logspaced array of periods from 0.001 to 10 yrs
Ps = np.logspace(logPmin,logPmax,Npts)
# calculate an array of corresponding angular frequencies
ws = np.asarray([2*np.pi/P for P in Ps])
# calculate the periodogram (for Io) (using ws as the values for which to compute it)
periodogram = signal.lombscargle(times,x[0],ws)
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
# Since the computed periodogram is unnormalized, taking the value A**2*N/4,
# we renormalize the results by applying these functions inversely to the output:
ax.set_xscale('log')
ax.set_xlim([10**logPmin,10**logPmax])
ax.set_xlabel("Period (days)")
ax.set_ylabel("Power")
ax.plot(Ps,np.sqrt(4*periodogram/Nout))
"""
Explanation: For completeness, let's take a brief look at the Fourier transforms of the x-positions
of Io, and see if it has oscillations related to the MMR.
We are going to use the scipy Lomb-Scargle periodogram function,
which is good for non-uniform time series analysis. Therefore,
if we used the IAS15 integrator, which has adaptive timesteps,
this function would still work.
End of explanation
"""
|
SBRG/ssbio
|
docs/notebooks/Complex - Testing.ipynb
|
mit
|
import ecolime
import ecolime.flat_files
"""
Explanation: README
Notebook to test the Complex class as well as parsing code from cobrame/ecolime
From COBRAme/ECOLIme...
Flat files / ProcessData
End of explanation
"""
# First load the list of complexes which tells you complexes + subunit stoichiometry
# Converts the protein_complexes.txt file into a dictionary for ME model construction
complexes = ecolime.flat_files.get_complex_subunit_stoichiometry('protein_complexes.txt')
# Then load the modifications, which tell you the modifications (i.e. cofactors) that are needed for a complex
# Converts protein_modification.txt
complex_modification_dict = ecolime.flat_files.get_complex_modifications('protein_modification.txt', 'protein_complexes.txt')
complexes
complexes['CPLX0-7']
complexes['CPLX0-1601']
"""
Explanation: Protein complexes - ComplexData and ComplexFormation (the reactions needed to assemble the complexes in ComplexData)
End of explanation
"""
from collections import defaultdict
import pandas
from os.path import dirname, join, abspath
ecoli_files_dir = join('/home/nathan/projects_unsynced/ecolime/ecolime/', 'building_data/')
from ecolime import corrections
def fixpath(filename):
return join(ecoli_files_dir, filename)
# From: ecolime.flat_files.get_reaction_to_complex, modified to just parse the file
def get_reaction_to_complex(modifications=True):
"""anything not in this dict is assumed to be an orphan"""
rxn_to_complex_dict = defaultdict(set)
# Load enzyme reaction association dataframe
df = pandas.read_csv(fixpath('enzyme_reaction_association.txt'),
delimiter='\t', names=['Reaction', 'Complexes'])
# Fix legacy naming
df = df.applymap(lambda x: x.replace('DASH', ''))
df = df.set_index('Reaction')
df = corrections.correct_enzyme_reaction_association_frame(df)
for reaction, complexes in df.itertuples():
for cplx in complexes.split(' OR '):
if modifications:
rxn_to_complex_dict[reaction].add(cplx)
else:
rxn_to_complex_dict[reaction].add(cplx.split('_mod_')[0])
return rxn_to_complex_dict
reaction_to_complex = get_reaction_to_complex()
for reaction,cplxs in reaction_to_complex.items():
for c in cplxs:
if 'NADH-DHI-CPLX' in c:
print(reaction, cplxs)
"""
Explanation: Reaction to complex information
End of explanation
"""
from collections import OrderedDict
biglist = []
for reaction,cplxs in reaction_to_complex.items():
print('Reaction:', reaction)
print('Reaction rule:', cplxs)
print()
for cplx in cplxs:
smalldict = OrderedDict()
smalldict['Reaction'] = reaction
# smalldict['Reaction_rule'] = ';'.join(cplxs)
if cplx not in complex_modification_dict:
subunits = {k.split('protein_')[1]:v for k,v in complexes[cplx].items()}
print('\tComplex ID:', cplx)
print('\tComplex subunits:', subunits)
smalldict['Complex_ID'] = cplx
smalldict['Complex_ID_mod'] = None
smalldict['Complex_subunits'] = [(k, v) for k,v in subunits.items()]
smalldict['Complex_modifications'] = None
else:
subunits = {k.split('protein_')[1]:v for k,v in complexes[complex_modification_dict[cplx]['core_enzyme']].items()}
mods = complex_modification_dict[cplx]['modifications']
print('\tComplex ID (modification):', cplx)
print('\tComplex ID (original):', complex_modification_dict[cplx]['core_enzyme'])
print('\tComplex subunits:', subunits)
print('\tComplex modification:', mods)
smalldict['Complex_ID'] = complex_modification_dict[cplx]['core_enzyme']
smalldict['Complex_ID_mod'] = cplx
# Use lists (not generator expressions) so the values display properly in the DataFrame
smalldict['Complex_subunits'] = [(k, v) for k, v in subunits.items()]
smalldict['Complex_modifications'] = [(k, v) for k, v in mods.items()]
print()
biglist.append(smalldict)
import pandas as pd
pd.DataFrame(biglist)
"""
Explanation: Summary
End of explanation
"""
|
yuvrajsingh86/DeepLearning_Udacity
|
sentiment-network/Sentiment_Classification_Projects.ipynb
|
mit
|
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
"""
len(reviews)
reviews[0]
labels[0]
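# The note below points out that reviews.txt is already all lower case.
# If we were starting from raw text instead, one simple normalization step
# (an illustrative sketch, not part of the original project) could be:
reviews = [review.lower() for review in reviews]  # a no-op here, since the text is already lower case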
"""
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
"""
from collections import Counter
import numpy as np
"""
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
"""
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
"""
Explanation: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
End of explanation
"""
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
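# One possible way to fill in this TODO (an illustrative sketch, not
# necessarily the exact solution shown in the videos):
for i in range(len(reviews)):
    words = reviews[i].split(' ')
    if labels[i] == 'POSITIVE':
        for word in words:
            positive_counts[word] += 1
            total_counts[word] += 1
    else:
        for word in words:
            negative_counts[word] += 1
            total_counts[word] += 1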
"""
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
"""
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
"""
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
"""
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
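# A sketch of this TODO, following the hint given in the explanation below
# (only words used at least 100 times are treated as "common"):
for term, cnt in list(total_counts.most_common()):
    if cnt >= 100:
        pos_neg_ratios[term] = positive_counts[term] / float(negative_counts[term] + 1)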
"""
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
"""
# TODO: Convert ratios to logs
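# A sketch of this TODO: replace every stored ratio with its logarithm, so
# neutral words end up near zero (see the discussion below).
for word, ratio in list(pos_neg_ratios.most_common()):
    pos_neg_ratios[word] = np.log(ratio)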
"""
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. We should therefore center all the values around neutral, so that a word's distance from neutral in its positive-to-negative ratio indicates how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
"""
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
"""
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
"""
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = None
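# A sketch of this TODO: collect every unique word across all the reviews.
vocab = set(word for review in reviews for word in review.split(' '))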
"""
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
End of explanation
"""
vocab_size = len(vocab)
print(vocab_size)
"""
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network_2.png')
"""
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
"""
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = None
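# A sketch of this TODO: a single row with one column per vocabulary word.
layer_0 = np.zeros((1, vocab_size))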
"""
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
End of explanation
"""
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
"""
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
"""
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
"""
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
"""
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
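# A sketch of this TODO (these lines belong inside update_input_layer and
# assume the word2index lookup table created above):
for word in review.split(' '):
    layer_0[0][word2index[word]] += 1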
"""
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
End of explanation
"""
update_input_layer(reviews[0])
layer_0
"""
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
"""
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
# TODO: Your code here
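# A sketch of this TODO (belongs inside get_target_for_label):
if label == 'POSITIVE':
    return 1
else:
    return 0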
"""
Explanation: TODO: Complete the implementation of get_target_for_label. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
"""
labels[0]
get_target_for_label(labels[0])
"""
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
"""
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
"""
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = None
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = None
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
pass
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
pass
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
pass
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
pass
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
pass
"""
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
"""
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
"""
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
"""
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
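# The key change for this project (a sketch, per the instructions below):
# inside the copied class's update_input_layer, record only whether a word
# occurs instead of counting occurrences, e.g. replace
#     self.layer_0[0][self.word2index[word]] += 1
# with
#     self.layer_0[0][self.word2index[word]] = 1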
"""
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
"""
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
"""
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
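# A sketch of the review pre-processing described in the instructions below:
# convert each raw review into the list of word indices it actually uses, e.g.
#     training_reviews = list()
#     for review in training_reviews_raw:
#         indices = set()
#         for word in review.split(' '):
#             if word in self.word2index:
#                 indices.add(self.word2index[word])
#         training_reviews.append(list(indices))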
"""
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
"""
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100, normed=True)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100, normed=True)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
"""
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
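# A simplified sketch of the vocabulary filtering described in the
# instructions below (inside pre_process_data): only keep words that are
# frequent enough and sufficiently polarized, e.g.
#     if total_counts[word] > min_count and \
#             abs(pos_neg_ratios[word]) >= polarity_cutoff:
#         review_vocab.add(word)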
"""
Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
"""
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation
"""
|
gcgruen/homework
|
data-databases-homework/.ipynb_checkpoints/Homework_2_Gruen-checkpoint.ipynb
|
mit
|
import pg8000
conn = pg8000.connect(user="postgres", password="12345", database="homework2")
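# If you have psycopg2 instead of pg8000, an equivalent connection would look
# like this (a sketch; the credentials simply mirror the ones above and may
# differ on your machine):
# import psycopg2
# conn = psycopg2.connect(user="postgres", password="12345", dbname="homework2")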
"""
Explanation: Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate.
You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps:
Make sure you're viewing this notebook in Github.
Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer.
Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb).
Open the notebook on your computer using your locally installed version of IPython Notebook.
When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class).
Setting the scene
These problem sets address SQL, with a focus on joins and aggregates.
I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL.
To import the data, follow these steps:
Launch psql.
At the prompt, type CREATE DATABASE homework2;
Connect to the database you just created by typing \c homework2
Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file.
After you run the \i command, you should see the following output:
CREATE TABLE
CREATE TABLE
CREATE TABLE
COPY 100000
COPY 1682
COPY 943
The table schemas for the data look like this:
Table "public.udata"
Column | Type | Modifiers
-----------+---------+-----------
user_id | integer |
item_id | integer |
rating | integer |
timestamp | integer |
Table "public.uuser"
Column | Type | Modifiers
------------+-----------------------+-----------
user_id | integer |
age | integer |
gender | character varying(1) |
occupation | character varying(80) |
zip_code | character varying(10) |
Table "public.uitem"
Column | Type | Modifiers
--------------------+------------------------+-----------
movie_id | integer | not null
movie_title | character varying(81) | not null
release_date | date |
video_release_date | character varying(32) |
imdb_url | character varying(134) |
unknown | integer | not null
action | integer | not null
adventure | integer | not null
animation | integer | not null
childrens | integer | not null
comedy | integer | not null
crime | integer | not null
documentary | integer | not null
drama | integer | not null
fantasy | integer | not null
film_noir | integer | not null
horror | integer | not null
musical | integer | not null
mystery | integer | not null
romance | integer | not null
scifi | integer | not null
thriller | integer | not null
war | integer | not null
western | integer | not null
Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2.
End of explanation
"""
conn.rollback()
"""
Explanation: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
End of explanation
"""
cursor = conn.cursor()
statement = "select movie_title from uitem where horror = 1 and scifi = 1 order by release_date DESC;"
cursor.execute(statement)
for row in cursor:
print(row[0])
"""
Explanation: Problem set 1: WHERE and ORDER BY
In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query.
Expected output:
Deep Rising (1998)
Alien: Resurrection (1997)
Hellraiser: Bloodline (1996)
Robert A. Heinlein's The Puppet Masters (1994)
Body Snatchers (1993)
Army of Darkness (1993)
Body Snatchers (1993)
Alien 3 (1992)
Heavy Metal (1981)
Alien (1979)
Night of the Living Dead (1968)
Blob, The (1958)
End of explanation
"""
cursor = conn.cursor()
statement = "select count(*) from uitem where musical = 1 or childrens = 1;"
cursor.execute(statement)
for row in cursor:
print(row[0])
"""
Explanation: Problem set 2: Aggregation, GROUP BY and HAVING
In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate.
Expected output: 157
End of explanation
"""
cursor = conn.cursor()
statement = "select occupation, count(occupation) from uuser group by occupation having count(*) > 50;"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
"""
Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output:
administrator 79
programmer 66
librarian 51
student 196
other 105
engineer 67
educator 95
Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.)
End of explanation
"""
cursor = conn.cursor()
statement = "select distinct(movie_title) from uitem join udata on uitem.movie_id = udata.item_id where uitem.documentary = 1 and uitem.release_date < '1992-01-01' and udata.rating = 5 order by movie_title;"
cursor.execute(statement)
for row in cursor:
print(row[0])
"""
Explanation: Problem set 3: Joining tables
In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output:
Madonna: Truth or Dare (1991)
Koyaanisqatsi (1983)
Paris Is Burning (1990)
Thin Blue Line, The (1988)
Hints:
JOIN the udata and uitem tables.
Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once).
The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'.
End of explanation
"""
cursor = conn.cursor()
statement = "select movie_title, avg(rating) from uitem join udata on uitem.movie_id = udata.item_id where horror = 1 group by uitem.movie_title order by avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
"""
Explanation: Problem set 4: Joins and aggregations... together at last
This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go.
In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.)
Expected output:
Amityville 1992: It's About Time (1992) 1.00
Beyond Bedlam (1993) 1.00
Amityville: Dollhouse (1996) 1.00
Amityville: A New Generation (1993) 1.00
Amityville 3-D (1983) 1.17
Castle Freak (1995) 1.25
Amityville Curse, The (1990) 1.25
Children of the Corn: The Gathering (1996) 1.32
Machine, The (1994) 1.50
Body Parts (1991) 1.62
End of explanation
"""
cursor = conn.cursor()
statement = "select movie_title, avg(rating) from uitem join udata on uitem.movie_id = udata.item_id where horror = 1 group by uitem.movie_title having count(udata.rating) > 10 order by avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
"""
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below.
Expected output:
Children of the Corn: The Gathering (1996) 1.32
Body Parts (1991) 1.62
Amityville II: The Possession (1982) 1.64
Jaws 3-D (1983) 1.94
Hellraiser: Bloodline (1996) 2.00
Tales from the Hood (1995) 2.04
Audrey Rose (1977) 2.17
Addiction, The (1995) 2.18
Halloween: The Curse of Michael Myers (1995) 2.20
Phantoms (1998) 2.23
End of explanation
"""
|
sot/aca_stats
|
fit_acq_prob_model-2018-04-poly-spline-warmpix.ipynb
|
bsd-3-clause
|
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table
from astropy.time import Time
import tables
from scipy import stats
import tables3_api
from scipy.interpolate import CubicSpline
%matplotlib inline
"""
Explanation: Fit the poly-spline-warmpix acquisition probability model in 2018-04
THIS NOTEBOOK IS FOR REFERENCE ONLY
This is an attempt to fit the flight acquisition data using the poly-spline-warmpix model.
This uses starting fit values from the accompanying fit_acq_prob_model-2018-04-binned-poly-warmpix.ipynb notebook.
This model is a 15-parameter fit for acquisition probability as a function of magnitude and warm pixel fraction. It is NOT USED in flight because the mag/T_ccd parametrization gives a better fit.
End of explanation
"""
with tables.open_file('/proj/sot/ska/data/acq_stats/acq_stats.h5', 'r') as h5:
cols = h5.root.data.cols
names = {'tstart': 'guide_tstart',
'obsid': 'obsid',
'obc_id': 'acqid',
'halfwidth': 'halfw',
'warm_pix': 'n100_warm_frac',
'mag_aca': 'mag_aca',
'mag_obs': 'mean_trak_mag',
'known_bad': 'known_bad',
'color': 'color1',
'img_func': 'img_func',
'ion_rad': 'ion_rad',
'sat_pix': 'sat_pix',
'agasc_id': 'agasc_id',
't_ccd': 'ccd_temp',
'slot': 'slot'}
acqs = Table([getattr(cols, h5_name)[:] for h5_name in names.values()],
names=list(names.keys()))
year_q0 = 1999.0 + 31. / 365.25 # Jan 31 approximately
acqs['year'] = Time(acqs['tstart'], format='cxcsec').decimalyear.astype('f4')
acqs['quarter'] = (np.trunc((acqs['year'] - year_q0) * 4)).astype('f4')
acqs['color_1p5'] = np.where(acqs['color'] == 1.5, 1, 0)
# Create 'fail' column, rewriting history as if the OBC always
# ignored the MS flag in ID'ing acq stars. Set ms_disabled = False
# to not do this
obc_id = acqs['obc_id']
obc_id_no_ms = (acqs['img_func'] == 'star') & ~acqs['sat_pix'] & ~acqs['ion_rad']
acqs['fail'] = np.where(obc_id | obc_id_no_ms, 0.0, 1.0)
acqs['fail_mask'] = acqs['fail'].astype(bool)
# Define a 'mag' column that is the observed mag if available else the catalog mag
USE_OBSERVED_MAG = False
if USE_OBSERVED_MAG:
acqs['mag'] = np.where(acqs['fail_mask'], acqs['mag_aca'], acqs['mag_obs'])
else:
acqs['mag'] = acqs['mag_aca']
# Filter for year and mag (previously used data through 2007:001)
ok = (acqs['year'] > 2014.0) & (acqs['mag'] > 8.5) & (acqs['mag'] < 10.6)
# Filter known bad obsids
print('Filtering known bad obsids, start len = {}'.format(np.count_nonzero(ok)))
bad_obsids = [
# Venus
2411,2414,6395,7306,7307,7308,7309,7311,7312,7313,7314,7315,7317,7318,7406,583,
7310,9741,9742,9743,9744,9745,9746,9747,9749,9752,9753,9748,7316,15292,16499,
16500,16501,16503,16504,16505,16506,16502,
]
for badid in bad_obsids:
ok = ok & (acqs['obsid'] != badid)
print('Filtering known bad obsids, end len = {}'.format(np.count_nonzero(ok)))
data_all = acqs[ok]
del data_all['img_func']
data_all.sort('year')
# Adjust probability (in probit space) for box size. See:
# https://github.com/sot/skanb/blob/master/pea-test-set/fit_box_size_acq_prob.ipynb
b1 = 0.96
b2 = -0.30
box0 = (data_all['halfwidth'] - 120) / 120 # normalized version of box, equal to 0.0 at nominal default
data_all['box_delta'] = b1 * box0 + b2 * box0**2
data_all = data_all.group_by('quarter')
data_mean = data_all.groups.aggregate(np.mean)
"""
Explanation: Get acq stats data and clean
End of explanation
"""
spline_mags = np.array([8.5, 9.25, 10.0, 10.4, 10.6])
def p_fail(pars, mag,
wp, wp2=None,
box_delta=0):
"""
Acquisition probability model
:param pars: 7 parameters (3 x offset, 3 x scale, p_fail for bright stars)
:param wp: warm fraction
:param box_delta: search box half width (arcsec)
"""
p_bright_fail = 0.03 # For now
p0s, p1s, p2s = pars[0:5], pars[5:10], pars[10:15]
if wp2 is None:
wp2 = wp ** 2
# Make sure box_delta has right dimensions
wp, box_delta = np.broadcast_arrays(wp, box_delta)
p0 = CubicSpline(spline_mags, p0s, bc_type=((1, 0.0), (2, 0.0)))(mag)
p1 = CubicSpline(spline_mags, p1s, bc_type=((1, 0.0), (2, 0.0)))(mag)
p2 = CubicSpline(spline_mags, p2s, bc_type=((1, 0.0), (2, 0.0)))(mag)
probit_p_fail = p0 + p1 * wp + p2 * wp2 + box_delta
p_fail = stats.norm.cdf(probit_p_fail) # transform from probit to linear probability
return p_fail
def p_acq_fail(data=None):
"""
Sherpa fit function wrapper to ensure proper use of data in fitting.
"""
if data is None:
data = data_all
wp = (data['warm_pix'] - 0.13) / 0.1
wp2 = wp ** 2
box_delta = data['box_delta']
mag = data['mag']
def sherpa_func(pars, x=None):
return p_fail(pars, mag, wp, wp2, box_delta)
return sherpa_func
def fit_poly_spline_model(data_mask=None):
from sherpa import ui
data = data_all if data_mask is None else data_all[data_mask]
comp_names = [f'p{i}{j}' for i in range(3) for j in range(5)]
# Approx starting values based on plot of p0, p1, p2 in
# fit_acq_prob_model-2018-04-poly-warmpix
spline_p = {}
spline_p[0] = np.array([-2.6, -2.3, -1.7, -1.0, 0.0])
spline_p[1] = np.array([0.1, 0.1, 0.3, 0.6, 2.4])
spline_p[2] = np.array([0.0, 0.1, 0.5, 0.4, 0.1])
data_id = 1
ui.set_method('simplex')
ui.set_stat('cash')
ui.load_user_model(p_acq_fail(data), 'model')
ui.add_user_pars('model', comp_names)
ui.set_model(data_id, 'model')
ui.load_arrays(data_id, np.array(data['year']), np.array(data['fail'], dtype=np.float))
# Initial fit values from fit of all data
fmod = ui.get_model_component('model')
for i in range(3):
for j in range(5):
comp_name = f'p{i}{j}'
setattr(fmod, comp_name, spline_p[i][j])
comp = getattr(fmod, comp_name)
comp.max = 10
comp.min = -4.0 if i == 0 else 0.0
ui.fit(data_id)
# conf = ui.get_confidence_results()
return ui.get_fit_results()
"""
Explanation: Model definition
End of explanation
"""
def plot_fit_grouped(pars, group_col, group_bin, mask=None, log=False, colors='br', label=None, probit=False):
data = data_all if mask is None else data_all[mask]
data['model'] = p_acq_fail(data)(pars)
group = np.trunc(data[group_col] / group_bin)
data = data.group_by(group)
data_mean = data.groups.aggregate(np.mean)
len_groups = np.diff(data.groups.indices)
data_fail = data_mean['fail']
model_fail = np.array(data_mean['model'])
fail_sigmas = np.sqrt(data_fail * len_groups) / len_groups
# Possibly plot the data and model probabilities in probit space
if probit:
dp = stats.norm.ppf(np.clip(data_fail + fail_sigmas, 1e-6, 1-1e-6))
dm = stats.norm.ppf(np.clip(data_fail - fail_sigmas, 1e-6, 1-1e-6))
data_fail = stats.norm.ppf(data_fail)
model_fail = stats.norm.ppf(model_fail)
fail_sigmas = np.vstack([data_fail - dm, dp - data_fail])
plt.errorbar(data_mean[group_col], data_fail, yerr=fail_sigmas,
fmt='.' + colors[1:], label=label, markersize=8)
plt.plot(data_mean[group_col], model_fail, '-' + colors[0])
if log:
ax = plt.gca()
ax.set_yscale('log')
def mag_filter(mag0, mag1):
ok = (data_all['mag'] > mag0) & (data_all['mag'] < mag1)
return ok
def t_ccd_filter(t_ccd0, t_ccd1):
ok = (data_all['t_ccd'] > t_ccd0) & (data_all['t_ccd'] < t_ccd1)
return ok
def wp_filter(wp0, wp1):
ok = (data_all['warm_pix'] > wp0) & (data_all['warm_pix'] < wp1)
return ok
def plot_fit_all(parvals, mask=None, probit=False):
if mask is None:
mask = np.ones(len(data_all), dtype=bool)
plt.figure()
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.20, 0.25) & mask, log=False,
colors='gk', label='0.20 < WP < 0.25')
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.10, 0.20) & mask, log=False,
colors='cm', label='0.10 < WP < 0.20')
plt.legend(loc='upper left');
plt.ylim(0.001, 1.0);
plt.xlim(9, 11)
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.20, 0.25) & mask, probit=True, colors='gk', label='0.20 < WP < 0.25')
plot_fit_grouped(parvals, 'mag', 0.25, wp_filter(0.10, 0.20) & mask, probit=True, colors='cm', label='0.10 < WP < 0.20')
plt.legend(loc='upper left');
# plt.ylim(0.001, 1.0);
plt.xlim(9, 11)
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'warm_pix', 0.02, mag_filter(10.3, 10.6) & mask, log=False, colors='gk', label='10.3 < mag < 10.6')
plot_fit_grouped(parvals, 'warm_pix', 0.02, mag_filter(10, 10.3) & mask, log=False, colors='cm', label='10 < mag < 10.3')
plot_fit_grouped(parvals, 'warm_pix', 0.02, mag_filter(9, 10) & mask, log=False, colors='br', label='9 < mag < 10')
plt.legend(loc='best')
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10.3, 10.6) & mask, colors='gk', label='10.3 < mag < 10.6')
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10, 10.3) & mask, colors='cm', label='10 < mag < 10.3')
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.5, 10) & mask, colors='br', label='9.5 < mag < 10')
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.0, 9.5) & mask, colors='gk', label='9.0 < mag < 9.5')
plt.legend(loc='best')
plt.grid()
plt.figure()
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10.3, 10.6) & mask, colors='gk', label='10.3 < mag < 10.6', probit=True)
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(10, 10.3) & mask, colors='cm', label='10 < mag < 10.3', probit=True)
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.5, 10) & mask, colors='br', label='9.5 < mag < 10', probit=True)
plot_fit_grouped(parvals, 'year', 0.25, mag_filter(9.0, 9.5) & mask, colors='gk', label='9.0 < mag < 9.5', probit=True)
plt.legend(loc='best')
plt.grid();
def plot_splines(pars):
mag = np.arange(8.5, 10.81, 0.1)
p0 = CubicSpline(spline_mags, pars[0:5], bc_type=((1, 0.0), (2, 0.0)))(mag)
p1 = CubicSpline(spline_mags, pars[5:10], bc_type=((1, 0.0), (2, 0.0)))(mag)
p2 = CubicSpline(spline_mags, pars[10:15], bc_type=((1, 0.0), (2, 0.0)))(mag)
plt.plot(mag, p0, label='p0')
plt.plot(mag, p1, label='p1')
plt.plot(mag, p2, label='p2')
plt.grid()
plt.legend();
"""
Explanation: Plotting and validation
End of explanation
"""
# fit = fit_sota_model(data_all['color'] == 1.5, ms_disabled=True)
mask_no_1p5 = data_all['color'] != 1.5
fit_no_1p5 = fit_poly_spline_model(mask_no_1p5)
plot_splines(fit_no_1p5.parvals)
plot_fit_all(fit_no_1p5.parvals, mask_no_1p5)
plot_fit_grouped(fit_no_1p5.parvals, 'year', 0.10, mag_filter(10.3, 10.6) & mask_no_1p5,
colors='gk', label='10.3 < mag < 10.6')
plt.xlim(2016.0, None)
y0, y1 = plt.ylim()
from Chandra.Time import DateTime  # not imported above; assumed available from the Ska Chandra.Time package
x = DateTime('2017-10-01T00:00:00').frac_year
plt.plot([x, x], [y0, y1], '--r', alpha=0.5)
plt.grid();
plot_fit_grouped(fit_no_1p5.parvals, 'year', 0.10, mag_filter(10.0, 10.3) & mask_no_1p5,
colors='gk', label='10.0 < mag < 10.3')
plt.xlim(2016.0, None)
y0, y1 = plt.ylim()
x = DateTime('2017-10-01T00:00:00').frac_year
plt.plot([x, x], [y0, y1], '--r', alpha=0.5)
plt.grid();
"""
Explanation: Color != 1.5 fit (this is MOST acq stars)
End of explanation
"""
dat = data_all
ok = (dat['year'] > 2017.75) & (mask_no_1p5) & (dat['mag_aca'] > 10.3) & (dat['mag_aca'] < 10.6)
dok = dat[ok]
plt.hist(dok['t_ccd'], bins=np.arange(-15, -9, 0.4));
plt.grid();
dat = data_all
ok = (dat['year'] < 2017.75) & (dat['year'] > 2017.0) & (mask_no_1p5) & (dat['mag_aca'] > 10.3) & (dat['mag_aca'] < 10.6)
dok = dat[ok]
plt.hist(dok['t_ccd'], bins=np.arange(-15, -9, 0.4));
plt.grid()
dat = data_all
ok = (dat['year'] > 2017.75) & (mask_no_1p5) & (dat['mag_aca'] > 10.3) & (dat['mag_aca'] < 10.6)
dok = dat[ok]
plt.hist(dok['warm_pix'], bins=np.linspace(0.15, 0.30, 30));
plt.grid()
dat = data_all
ok = (dat['year'] < 2017.75) & (dat['year'] > 2017.0) & (mask_no_1p5) & (dat['mag_aca'] > 10.3) & (dat['mag_aca'] < 10.6)
dok = dat[ok]
plt.hist(dok['warm_pix'], bins=np.linspace(0.15, 0.30, 30));
plt.grid()
"""
Explanation: Comparing warm_pix vs. T_ccd parametrization
Looking at the next four plots, it seems that T_ccd provides more separation than warm_pix.
End of explanation
"""
np.count_nonzero(ok)
from collections import defaultdict
fails = defaultdict(list)
for row in dok:
fails[row['agasc_id']].append(row['fail'])
fails
np.count_nonzero(dat['fail_mask'][ok])
dok = dat[ok]
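# A quick check (a sketch): how many of these faint stars have repeat
# observations, and how many failures those repeats contribute in total.
n_repeat = sum(1 for obs in fails.values() if len(obs) > 1)
n_fail_repeat = sum(sum(obs) for obs in fails.values() if len(obs) > 1)
print(n_repeat, 'stars with repeat observations,', n_fail_repeat, 'failures among them')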
"""
Explanation: Looking to see if repeat observations of particular stars impact the results
Probably not.
End of explanation
"""
plt.hist(data_all['warm_pix'], bins=100)
plt.grid()
plt.xlabel('Warm pixel fraction');
plt.hist(data_all['mag'], bins=np.arange(6, 11.1, 0.1))
plt.grid()
plt.xlabel('Mag_aca')
ok = ~data_all['fail'].astype(bool)
dok = data_all[ok]
plt.plot(dok['mag_aca'], dok['mag_obs'] - dok['mag_aca'], '.')
plt.plot(dok['mag_aca'], dok['mag_obs'] - dok['mag_aca'], ',', alpha=0.3)
plt.grid()
plt.plot(data_all['year'], data_all['warm_pix'])
plt.ylim(0, None)
plt.grid();
plt.plot(data_all['year'], data_all['t_ccd'])
# plt.ylim(0, None)
plt.xlim(2017.0, None)
plt.grid();
"""
Explanation: Histogram of warm pixel fraction
End of explanation
"""
|
gagneurlab/concise
|
nbs/legacy/01-simulated-data.ipynb
|
mit
|
## Concise extensions of keras:
import concise
import concise.layers as cl
import concise.initializers as ci
import concise.regularizers as cr
import concise.metrics as cm
from concise.preprocessing import encodeDNA, encodeSplines
from concise.data import attract, encode
## layers:
cl.ConvDNA
cl.ConvDNAQuantitySplines
cr.GAMRegularizer
cl.GAMSmooth
cl.GlobalSumPooling1D
cl.InputDNA
cl.InputSplines
## initializers:
ci.PSSMKernelInitializer
ci.PSSMBiasInitializer
## regularizers
cr.GAMRegularizer
## metrics
cm.var_explained
## Preprocessing
encodeDNA
encodeSplines
## Known motifs
attract
encode
attract.get_pwm_list([519])[0].plotPWM()
"""
Explanation: Concise
Concise extends keras (https://keras.io) by providing additional layers, initializers, regularizers, metrics and preprocessors, suited for modelling genomic cis-regulatory elements.
End of explanation
"""
from concise.utils import PWM
PWM
"""
Explanation: It also implements a PWM class, used by PWM*Initializers
End of explanation
"""
# Used additional packages
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
## required keras modules
from keras.models import Model, load_model
import keras.layers as kl
import keras.optimizers as ko
"""
Explanation: Simulated data case study
In this notebook, we will replicate the results from Plositional_effect/Simulation/01_fixed_seq_len.html using concise models. Please have a look at that notebook first.
End of explanation
"""
# Load the data
data_dir = "../data/"
dt = pd.read_csv(data_dir + "/01-fixed_seq_len-1.csv")
motifs = ["TTAATGA"]
dt.head()
x_seq = encodeDNA(dt["seq"])
y = dt.y.as_matrix()
x_seq.shape # (n_samples, seq_length, n_bases)
"""
Explanation: Single motif case
Prepare the data
End of explanation
"""
## Parameters
seq_length = x_seq.shape[1]
## Motifs used to initialize the model
pwm_list = [PWM.from_consensus(motif) for motif in motifs]
motif_width = 7
pwm_list
pwm_list[0].plotPWM()
np.random.seed(42)
# specify the input shape
input_dna = cl.InputDNA(seq_length)
# convolutional layer with filters initialized on a PWM
x = cl.ConvDNA(filters=1,
kernel_size=motif_width, ## motif width
activation="relu",
kernel_initializer=ci.PSSMKernelInitializer(pwm_list),
bias_initializer=ci.PSSMBiasInitializer(pwm_list,kernel_size=motif_width, mean_max_scale=1)
## mean_max_scale of 1 means that only consensus sequence gets score larger than 0
)(input_dna)
## Smoothing layer - positional-dependent effect
# output = input * (1+ pos_effect)
x = cl.GAMSmooth(n_bases=10, l2_smooth=1e-3, l2=0)(x)
x = cl.GlobalSumPooling1D()(x)
x = kl.Dense(units=1,activation="linear")(x)
model = Model(inputs=input_dna, outputs=x)
# compile the model
model.compile(optimizer="adam", loss="mse", metrics=[cm.var_explained])
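# Optional sanity check: inspect the assembled architecture layer by layer.
model.summary()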
"""
Explanation: Build the model
Concise is a thin wrapper around keras. To know more about keras, read the documentation: https://keras.io/.
In this tutorial, I'll be using the functional API of keras: https://keras.io/getting-started/functional-api-guide/. Feel free to use Concise with the sequential models.
End of explanation
"""
## TODO - create a callback
from keras.callbacks import EarlyStopping
model.fit(x=x_seq, y=y, epochs=30, verbose=2,
callbacks=[EarlyStopping(patience=5)],
validation_split=.2
)
"""
Explanation: Train the model
End of explanation
"""
model.save("/tmp/model.h5") ## requires h5py pacakge, pip install h5py
%ls -la /tmp/model*
model2 = load_model("/tmp/model.h5")
model2
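# A quick sanity check (a sketch): the reloaded model should reproduce the
# original model's predictions.
np.allclose(model.predict(x_seq), model2.predict(x_seq))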
"""
Explanation: Save and load the model
Since concise is fully compatible with keras, we can save and load the entire model to the hdf5 file.
End of explanation
"""
var_expl_history = model.history.history['val_var_explained']
plt.plot(var_expl_history)
plt.ylabel('Variance explained')
plt.xlabel('Epoch')
plt.title("Loss history")
y_pred = model.predict(x_seq)
plt.scatter(y_pred, y)
plt.xlabel("Predicted")
plt.ylabel("True")
plt.title("True vs predicted")
"""
Explanation: Interpret the model
Predictions
End of explanation
"""
# layers in the model
model.layers
## Convenience functions in layers
gam_layer = model.layers[2]
gam_layer.plot()
plt.title("Positional effect")
# Compare the curve to the theoretical
positions = gam_layer.positional_effect()["positions"]
pos_effect = gam_layer.positional_effect()["positional_effect"]
from scipy.stats import norm
pef = lambda x: 0.3*norm.pdf(x, 0.2, 0.1) + 0.05*np.sin(15*x) + 0.8
pos_effect_theoretical = pef(positions / positions.max())
# plot
plt.plot(positions, pos_effect, label="inferred")
plt.plot(positions, pos_effect_theoretical, label="theoretical")
plt.ylabel('Positional effect')
plt.xlabel('Position')
plt.title("Positional effect")
plt.legend()
"""
Explanation: The plot is the same as in the original motifp report.
Weights
End of explanation
"""
# plot the filters
model.layers[1].plot_weights(plot_type="motif_raw")
model.layers[1].plot_weights(plot_type="motif_pwm")
model.layers[1].plot_weights(plot_type="motif_pwm_info")
model.layers[1].plot_weights(plot_type="heatmap")
"""
Explanation: Qualitatively, the curves are the same; quantitatively, they differ because the scale is modulated by other parameters in the model. The plot is similar to the one in the original motifp report.
End of explanation
"""
dt = pd.read_csv(data_dir + "/01-fixed_seq_len-2.csv")
motifs = ["TTAATGA", "TATTTAT"]
## Parameters
seq_length = x_seq.shape[1]
## Motifs used to initialize the model
pwm_list = [PWM.from_consensus(motif) for motif in motifs]
motif_width = 7
pwm_list
np.random.seed(1)
input_dna = cl.InputDNA(seq_length)
# convolutional layer with filters initialized on a PWM
x = cl.ConvDNA(filters=2,
kernel_size=motif_width, ## motif width
activation="relu",
kernel_initializer=ci.PWMKernelInitializer(pwm_list, stddev=0.0),
bias_initializer=ci.PWMBiasInitializer(pwm_list,kernel_size=motif_width, mean_max_scale=0.999),
## mean_max_scale of 1 means that only consensus sequence gets score larger than 0
trainable=False,
)(input_dna)
## Smoothing layer - positional-dependent effect
x = cl.GAMSmooth(n_bases=10, l2_smooth=1e-6, l2=0)(x)
x = cl.GlobalSumPooling1D()(x)
x = kl.Dense(units=1,activation="linear")(x)
model = Model(inputs=input_dna, outputs=x)
# compile the model
model.compile(optimizer=ko.Adam(lr=0.01), loss="mse", metrics=[cm.var_explained])
x_seq = encodeDNA(dt["seq"])
y = dt.y.as_matrix()
model.fit(x=x_seq, y=y, epochs=100, verbose = 2,
callbacks=[EarlyStopping(patience=5)],
validation_split=.2
)
y_pred = model.predict(x_seq)
plt.scatter(y_pred, y)
plt.xlabel("Predicted")
plt.ylabel("True")
plt.title("True vs predicted")
## TODO - update to the new syntax
gam_layer = model.layers[2]
positions = gam_layer.positional_effect()["positions"]
pos_effect = gam_layer.positional_effect()["positional_effect"]
## Theoretical plot - from the original simulation data
from scipy.stats import norm
# https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.stats.norm.html#scipy.stats.norm
pef1 = lambda x: 0.3*norm.pdf(x, 0.2, 0.1) + 0.05*np.sin(15*x) + 0.8
pos_effect_theoretical1 = pef1(positions / positions.max())
pef2 = lambda x: 0.3*norm.pdf(x, 0.35, 0.1) + 0.05*np.sin(15*x) + 0.8
pos_effect_theoretical2 = pef2(positions / positions.max())
w_motifs = model.get_weights()[-2]
b = model.get_weights()[-1]
## Create a new plot
pos_effect_calibrated = (pos_effect / np.transpose(w_motifs))/ 0.8
plt.plot(positions, pos_effect_calibrated[:,0], label="inferred " + motifs[0])
plt.plot(positions, pos_effect_calibrated[:,1], label="inferred " + motifs[1])
plt.plot(positions, pos_effect_theoretical1, label="theoretical " + motifs[0])
plt.plot(positions, pos_effect_theoretical2, label="theoretical " + motifs[1])
plt.ylabel('Positional effect')
plt.xlabel('Position')
plt.title("Positional effect")
plt.legend()
"""
Explanation: Model with two motifs
End of explanation
"""
|
hongguangguo/shogun
|
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
|
gpl-3.0
|
%pylab inline
%matplotlib inline
# import all Shogun classes
from modshogun import *
"""
Explanation: Kernel hypothesis testing in Shogun
By Heiko Strathmann - <a href="mailto:heiko.strathmann@gmail.com">heiko.strathmann@gmail.com</a> - <a href="github.com/karlnapf">github.com/karlnapf</a> - <a href="herrstrathmann.de">herrstrathmann.de</a>
This notebook describes Shogun's framework for <a href="http://en.wikipedia.org/wiki/Statistical_hypothesis_testing">statistical hypothesis testing</a>. We begin by giving a brief outline of the problem setting and then describe various implemented algorithms. All the algorithms discussed here are for <a href="http://en.wikipedia.org/wiki/Kernel_embedding_of_distributions#Kernel_two_sample_test">Kernel two-sample testing</a> with Maximum Mean Discrepancy and are based on embedding probability distributions into <a href="http://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space">Reproducing Kernel Hilbert Spaces</a>( RKHS ).
Methods for two-sample testing currently consist of tests based on the Maximum Mean Discrepancy. There are two types of tests available, a quadratic time test and a linear time test. Both come in various flavours.
Independence testing is currently based on the Hilbert Schmidt Independence Criterion.
End of explanation
"""
# use scipy for generating samples
from scipy.stats import norm, laplace
def sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=sqrt(0.5)):
# sample from both distributions
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
return X,Y
mu=0.0
sigma2=1
b=sqrt(0.5)
n=220
X,Y=sample_gaussian_vs_laplace(n, mu, sigma2, b)
# plot both densities and histograms
figure(figsize=(18,5))
suptitle("Gaussian vs. Laplace")
subplot(121)
Xs=linspace(-2, 2, 500)
plot(Xs, norm.pdf(Xs, loc=mu, scale=sigma2))
plot(Xs, laplace.pdf(Xs, loc=mu, scale=b))
title("Densities")
xlabel("$x$")
ylabel("$p(x)$")
_=legend([ 'Gaussian','Laplace'])
subplot(122)
hist(X, alpha=0.5)
xlim([-5,5])
ylim([0,100])
hist(Y,alpha=0.5)
xlim([-5,5])
ylim([0,100])
legend(["Gaussian", "Laplace"])
_=title('Histograms')
"""
Explanation: Some Formal Basics (skip if you just want code examples)
To set the context, we here briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called null-hypothesis $H_0$, which is the complement of an alternative-hypothesis $H_A$.
To distinguish the hypotheses, a test statistic is computed on sample data. Since sample data is finite, this corresponds to sampling the true distribution of the test statistic. There are two different distributions of the test statistic -- one for each hypothesis. The null-distribution corresponds to test statistic samples under the model that $H_0$ holds; the alternative-distribution corresponds to test statistic samples under the model that $H_A$ holds.
In practice, one tries to compute the quantile of the test statistic in the null-distribution. In case the test statistic is in a high quantile, i.e. it is unlikely that the null-distribution has generated the test statistic -- the null-hypothesis $H_0$ is rejected.
There are two different kinds of errors in hypothesis testing:
A type I error is made when $H_0: p=q$ is wrongly rejected. That is, the test says that the samples are from different distributions when they are not.
A type II error is made when $H_A: p\neq q$ is wrongly accepted. That is, the test says that the samples are from the same distribution when they are not.
A so-called consistent test achieves zero type II error for a fixed type I error.
To decide whether to reject $H_0$, one could set a threshold, say at the $95\%$ quantile of the null-distribution, and reject $H_0$ when the test statistic lies above that threshold. This means that the chance of seeing such a large statistic if the samples were generated under $H_0$ is at most $5\%$. We call this number the test power $\alpha$ (in this case $\alpha=0.05$). It is an upper bound on the probability of a type I error. An alternative way is simply to compute the quantile of the test statistic in the null-distribution, the so-called p-value, and to compare the p-value against a desired test power, say $\alpha=0.05$, by hand. The advantage of the second method is that one not only gets a binary answer, but also an upper bound on the type I error.
In order to construct a two-sample test, the null-distribution of the test statistic has to be approximated. One way of doing this for any two-sample test is called bootstrapping, or the permutation test, where samples from both sources are mixed and permuted repeatedly and the test statistic is computed for each of those configurations. While this method works for every statistical hypothesis test, it might be very costly because the test statistic has to be re-computed many times. For many test statistics, there are more sophisticated methods of approximating the null distribution.
Base class for Hypothesis Testing
Shogun implements statistical testing in the abstract class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CHypothesisTest.html">CHypothesisTest</a>. All implemented methods will work with this interface at their most basic level. This class offers methods to
compute the implemented test statistic,
compute p-values for a given value of the test statistic,
compute a test threshold for a given p-value,
sampling the null distribution, i.e. performing the permutation test or bootstrapping of the null-distribution, and
performing a full two-sample test, and either returning a p-value or a binary rejection decision. This method is most useful in practice. Note that, depending on the used test statistic, it might be faster to call this than to compute threshold and test statistic separately with the above methods.
There are special subclasses for testing two distributions against each other (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CTwoSampleTest.html">CTwoSampleTest</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CIndependenceTest.html">CIndependenceTest</a>), kernel two-sample testing (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelTwoSampleTest.html">CKernelTwoSampleTest</a>), and kernel independence testing (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelIndependenceTest.html">CKernelIndependenceTest</a>), which however mostly differ in internals and constructors.
Kernel Two-Sample Testing with the Maximum Mean Discrepancy
$\DeclareMathOperator{\mmd}{MMD}$
An important class of hypothesis tests are the two-sample tests.
In two-sample testing, one tries to find out whether two sets of samples come from different distributions. Given two probability distributions $p,q$ on some arbitrary domains $\mathcal{X}, \mathcal{Y}$ respectively, and i.i.d. samples $X=\{x_i\}_{i=1}^m\subseteq \mathcal{X}\sim p$ and $Y=\{y_i\}_{i=1}^n\subseteq \mathcal{Y}\sim q$, the two-sample test distinguishes the hypotheses
\begin{align}
H_0: p=q\\
H_A: p\neq q
\end{align}
In order to solve this problem, it is desirable to have a criterion that takes a positive unique value if $p\neq q$, and zero if and only if $p=q$. The so-called Maximum Mean Discrepancy (MMD) has this property and allows distinguishing any two probability distributions, if used in a reproducing kernel Hilbert space (RKHS). It is the distance of the mean embeddings $\mu_p, \mu_q$ of the distributions $p,q$ in such an RKHS $\mathcal{F}$ -- which can also be expressed in terms of expectations of kernel functions, i.e.
\begin{align}
\mmd[\mathcal{F},p,q]&=||\mu_p-\mu_q||_\mathcal{F}^2\\
&=\textbf{E}_{x,x'}\left[ k(x,x')\right]-
2\textbf{E}_{x,y}\left[ k(x,y)\right]
+\textbf{E}_{y,y'}\left[ k(y,y')\right]
\end{align}
Note that this formulation does not assume any form of the input data, we just need a kernel function whose feature space is a RKHS, see [2, Section 2] for details. This has the consequence that in Shogun, we can do tests on any type of data (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html">CDenseFeatures</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSparseFeatures.html">CSparseFeatures</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStringFeatures.html">CStringFeatures</a>, etc), as long as we or you provide a positive definite kernel function under the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernel.html">CKernel</a>.
We here only describe how to use the MMD for two-sample testing. Shogun offers two types of test statistic based on the MMD, one with quadratic costs both in time and space, and one with linear time and constant space costs. Both come in different versions and with different methods how to approximate the null-distribution in order to construct a two-sample test.
Running Example Data. Gaussian vs. Laplace
In order to illustrate kernel two-sample testing with Shogun, we use a couple of toy distributions. The first dataset we consider is the 1D Standard Gaussian
$p(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$
with mean $\mu$ and variance $\sigma^2$, which is compared against the 1D Laplace distribution
$p(x)=\frac{1}{2b}\exp\left(-\frac{|x-\mu|}{b}\right)$
with the same mean $\mu$ and variance $2b^2$. In order to increase difficulty, we set $b=\sqrt{\frac{1}{2}}$, which means that $2b^2=\sigma^2=1$.
End of explanation
"""
print "Gaussian vs. Laplace"
print "Sample means: %.2f vs %.2f" % (mean(X), mean(Y))
print "Samples variances: %.2f vs %.2f" % (var(X), var(Y))
"""
Explanation: Now how to compare these two sets of samples? Clearly, a t-test would be a bad idea since it basically compares mean and variance of $X$ and $Y$. But we set that to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statistics of the samples. In fact, kernel two-sample tests look at all (infinitely many) higher order moments.
End of explanation
"""
# turn data into Shogun representation (columns vectors)
feat_p=RealFeatures(X.reshape(1,len(X)))
feat_q=RealFeatures(Y.reshape(1,len(Y)))
# choose kernel for testing. Here: Gaussian
kernel_width=1
kernel=GaussianKernel(10, kernel_width)
# create mmd instance of test-statistic
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
# compute biased and unbiased test statistic (default is unbiased)
mmd.set_statistic_type(BIASED)
biased_statistic=mmd.compute_statistic()
mmd.set_statistic_type(UNBIASED)
unbiased_statistic=mmd.compute_statistic()
print "%d x MMD_b[X,Y]^2=%.2f" % (len(X), biased_statistic)
print "%d x MMD_u[X,Y]^2=%.2f" % (len(X), unbiased_statistic)
"""
Explanation: Quadratic Time MMD
We now describe the quadratic time MMD, as described in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a>, which accepts any type of features in Shogun, and use it on the above toy problem.
An unbiased estimate for the MMD expression above can be obtained by estimating expected values with averaging over independent samples
$$
\mmd_u[\mathcal{F},X,Y]^2=\frac{1}{m(m-1)}\sum_{i=1}^m\sum_{j\neq i}^mk(x_i,x_j) + \frac{1}{n(n-1)}\sum_{i=1}^n\sum_{j\neq i}^nk(y_i,y_j)-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^nk(x_i,y_j)
$$
A biased estimate would be
$$
\mmd_b[\mathcal{F},X,Y]^2=\frac{1}{m^2}\sum_{i=1}^m\sum_{j=1}^mk(x_i,x_j) + \frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^nk(y_i,y_j)-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^nk(x_i,y_j)
.$$
Computing the test statistic using <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a> does exactly this, where it is possible to choose between the two above expressions. Note that some methods for approximating the null-distribution only work with one of the two types. Both statistics' computational costs are quadratic in both time and space. Note that the method returns $m\mmd_b[\mathcal{F},X,Y]^2$ since the null distribution approximations are formulated for $m$ times the statistic. Here is how the test statistic itself is computed.
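As a hedged illustration of what these estimators compute, here is a plain NumPy sketch on 1D arrays; Shogun's CQuadraticTimeMMD does this internally and more efficiently, and the Gaussian kernel convention exp(-(x-y)^2/width) is only an assumption for this sketch:

```python
import numpy as np

def gaussian_kernel(a, b, width=1.0):
    # pairwise Gaussian kernel for 1D samples a, b
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / width)

def biased_mmd2(X, Y, width=1.0):
    Kxx = gaussian_kernel(X, X, width)
    Kyy = gaussian_kernel(Y, Y, width)
    Kxy = gaussian_kernel(X, Y, width)
    # biased estimate: all pairs included, cross term counted twice
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()
```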
End of explanation
"""
# this is not necessary as bootstrapping is the default
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_statistic_type(UNBIASED)
# to reduce runtime; should be larger in practice
mmd.set_num_null_samples(100)
# now show a couple of ways to compute the test
# compute p-value for computed test statistic
p_value=mmd.compute_p_value(unbiased_statistic)
print "P-value of MMD value %.2f is %.2f" % (unbiased_statistic, p_value)
# compute threshold for rejecting H_0 for a given test power
alpha=0.05
threshold=mmd.compute_threshold(alpha)
print "Threshold for rejecting H0 with a test power of %.2f is %.2f" % (alpha, threshold)
# performing the test by hand given the above results, note that those two are equivalent
if unbiased_statistic>threshold:
print "H0 is rejected with confidence %.2f" % alpha
if p_value<alpha:
print "H0 is rejected with confidence %.2f" % alpha
# or, compute the full two-sample test directly
# fixed test power, binary decision
binary_test_result=mmd.perform_test(alpha)
if binary_test_result:
print "H0 is rejected with confidence %.2f" % alpha
significance_test_result=mmd.perform_test()
print "P-value of MMD test is %.2f" % significance_test_result
if significance_test_result<alpha:
print "H0 is rejected with confidence %.2f" % alpha
"""
Explanation: Any sub-class of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CHypothesisTest.html">CHypothesisTest</a> can approximate the null distribution using permutation/bootstrapping. This approach is always guaranteed to produce consistent results; however, it might take a long time since, for each sample of the null distribution, the test statistic has to be re-computed for a different permutation of the data. Note that each of the below calls samples from the null distribution. It is wise to choose one method in practice. Also note that we set the number of samples from the null distribution to a low value to reduce runtime. Choose a larger value in practice; it is in fact a good idea to plot the samples.
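A hedged NumPy sketch of the idea behind permutation-based null sampling (Shogun does this internally and, for kernel tests, re-uses the kernel matrix; the helper below is purely illustrative):

```python
import numpy as np

def permutation_null(statistic, X, Y, num_null_samples=100, seed=None):
    rng = np.random.RandomState(seed)
    Z = np.concatenate([X, Y])
    null_stats = np.empty(num_null_samples)
    for i in range(num_null_samples):
        perm = rng.permutation(len(Z))                 # mix samples from both sources
        null_stats[i] = statistic(Z[perm[:len(X)]],    # pseudo-"X"
                                  Z[perm[len(X):]])    # pseudo-"Y"
    return null_stats   # p-value estimate: np.mean(null_stats >= observed_statistic)
```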
End of explanation
"""
# precompute kernel to be faster for null sampling
p_and_q=mmd.get_p_and_q()
kernel.init(p_and_q, p_and_q);
precomputed_kernel=CustomKernel(kernel);
mmd.set_kernel(precomputed_kernel);
# increase number of iterations since should be faster now
mmd.set_num_null_samples(500);
p_value_boot=mmd.perform_test();
print "P-value of MMD test is %.2f" % p_value_boot
"""
Explanation: Precomputing Kernel Matrices
Bootstrapping re-computes the test statistic for a bunch of permutations of the test data. For kernel two-sample test methods, in particular those of the MMD class, this means that only the joint kernel matrix of $X$ and $Y$ needs to be permuted. Thus, we can precompute the matrix, which gives a significant performance boost. Note that this is only possible if the matrix can be stored in memory. Below, we use Shogun's <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCustomKernel.html">CCustomKernel</a> class, which allows precomputing the kernel matrix of a given kernel (multithreaded) and storing it in memory. Instances of this class can then be used as if they were standard kernels.
End of explanation
"""
num_samples=500
# sample null distribution
mmd.set_num_null_samples(num_samples)
null_samples=mmd.sample_null()
# sample alternative distribution, generate new data for that
alt_samples=zeros(num_samples)
for i in range(num_samples):
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
feat_p=RealFeatures(reshape(X, (1,len(X))))
feat_q=RealFeatures(reshape(Y, (1,len(Y))))
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
alt_samples[i]=mmd.compute_statistic()
"""
Explanation: Now let us visualise the distribution of the MMD statistic under $H_0:p=q$ and $H_A:p\neq q$. Sample both the null and the alternative distribution for that. Use the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CTwoSampleTest.html">CTwoSampleTest</a> to sample from the null distribution (permutations and re-computation of the test statistic are done internally). For the alternative distribution, compute the test statistic for a new sample set of $X$ and $Y$ in a loop. Note that the latter is expensive, as the kernel cannot be precomputed and effectively unlimited data is needed. This is not needed in practice; we do it here only for illustration.
End of explanation
"""
def plot_alt_vs_null(alt_samples, null_samples, alpha):
figure(figsize=(18,5))
subplot(131)
hist(null_samples, 50, color='blue')
title('Null distribution')
subplot(132)
title('Alternative distribution')
hist(alt_samples, 50, color='green')
subplot(133)
hist(null_samples, 50, color='blue')
hist(alt_samples, 50, color='green', alpha=0.5)
title('Null and alternative distribution')
# find (1-alpha) element of null distribution
null_samples_sorted=sort(null_samples)
quantile_idx=int(num_samples*(1-alpha))
quantile=null_samples_sorted[quantile_idx]
axvline(x=quantile, ymin=0, ymax=100, color='red', label=str(int(round((1-alpha)*100))) + '% quantile of null')
_=legend()
plot_alt_vs_null(alt_samples, null_samples, alpha)
"""
Explanation: Null and Alternative Distribution Illustrated
Visualise both distributions, $H_0:p=q$ is rejected if a sample from the alternative distribution is larger than the $(1-\alpha)$-quantil of the null distribution. See [1] for more details on their forms. From the visualisations, we can read off the test's type I and type II error:
type I error is the area of the null distribution being right of the threshold
type II error is the area of the alternative distribution being left from the threshold
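A hedged sketch of reading these two error rates off the sampled statistics, re-using the null_samples, alt_samples, num_samples and alpha variables from this notebook:

```python
# (1-alpha) quantile of the sampled null distribution, as in plot_alt_vs_null
threshold_emp = sort(null_samples)[int(num_samples * (1 - alpha))]
empirical_type_I = mean(null_samples > threshold_emp)    # should be close to alpha
empirical_type_II = mean(alt_samples <= threshold_emp)   # probability of missing the true difference
```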
End of explanation
"""
# optional: plot spectrum of joint kernel matrix
from numpy.linalg import eig
# get joint feature object and compute kernel matrix and its spectrum
feats_p_q=mmd.get_p_and_q()
mmd.get_kernel().init(feats_p_q, feats_p_q)
K=mmd.get_kernel().get_kernel_matrix()
w,_=eig(K)
# visualise K and its spectrum (only up to threshold)
figure(figsize=(18,5))
subplot(121)
imshow(K, interpolation="nearest")
title("Kernel matrix K of joint data $X$ and $Y$")
subplot(122)
thresh=0.1
plot(w[:len(w[w>thresh])])
_=title("Eigenspectrum of K until component %d" % len(w[w>thresh]))
"""
Explanation: Different Ways to Approximate the Null Distribution for the Quadratic Time MMD
As already mentioned, bootstrapping the null distribution is expensive business. There exist a couple of methods that are more sophisticated and either allow very fast approximations without guarantees or reasonably fast approximations that are consistent. We present a selection from [2], which are implemented in Shogun.
The first one is a spectral method that is based around the Eigenspectrum of the kernel matrix of the joint samples. It is faster than bootstrapping while being a consistent test. Effectively, the null-distribution of the biased statistic is sampled, but in a more efficient way than the bootstrapping approach. It converges as
$$
m\mmd^2_b \rightarrow \sum_{l=1}^\infty \lambda_l z_l^2
$$
where $z_l\sim \mathcal{N}(0,2)$ are i.i.d. normal samples and $\lambda_l$ are Eigenvalues of expression 2 in [2], which can be empirically estimated by $\hat\lambda_l=\frac{1}{m}\nu_l$ where $\nu_l$ are the Eigenvalues of the centred kernel matrix of the joint samples $X$ and $Y$. The distribution above can be easily sampled. Shogun's implementation has two parameters:
Number of samples from null-distribution. The more, the more accurate. As a rule of thumb, use 250.
Number of Eigenvalues of the Eigen-decomposition of the kernel matrix to use. The more, the better the results get. However, the Eigen-spectrum of the joint gram matrix usually decreases very fast. Plotting the Spectrum can help. See [2] for details.
If the kernel matrices are diagonally dominant, this method is likely to fail. For that and more details, see the original paper. Computational costs are much lower than for bootstrapping, which is the only consistent alternative. Since the Eigenvalues of the Gram matrix have to be computed, costs are in $\mathcal{O}(m^3)$.
Below, we illustrate how to sample the null distribution and perform two-sample testing with the Spectrum approximation in the class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a>. This method only works with the biased statistic.
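A hedged sketch of how such null samples could be drawn once eigenvalue estimates are available; the eigenvalue scaling below is illustrative only, and Shogun handles the exact normalisation and centring internally (here `w` and `thresh` are the eigenvalues and threshold computed in the cell above):

```python
# draw num_null samples of sum_l lambda_l * z_l^2 with z_l ~ N(0, 2)
num_null = 250
lambdas = w[w > thresh] / len(w)            # rough empirical eigenvalue estimates (assumed scaling)
z = randn(num_null, len(lambdas)) * sqrt(2.0)
null_spectrum_samples = (lambdas * z ** 2).sum(axis=1)
```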
End of explanation
"""
# threshold for eigenspectrum
thresh=0.1
# compute number of eigenvalues to use
num_eigen=len(w[w>thresh])
# finally, do the test, use biased statistic
mmd.set_statistic_type(BIASED)
#tell Shogun to use spectrum approximation
mmd.set_null_approximation_method(MMD2_SPECTRUM)
mmd.set_num_eigenvalues_spectrum(num_eigen)
mmd.set_num_samples_spectrum(num_samples)
# the usual test interface
p_value_spectrum=mmd.perform_test()
print "Spectrum: P-value of MMD test is %.2f" % p_value_spectrum
# compare with ground truth bootstrapping
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_num_null_samples(num_samples)
p_value_boot=mmd.perform_test()
print "Bootstrapping: P-value of MMD test is %.2f" % p_value_spectrum
"""
Explanation: The above plot of the Eigenspectrum shows that the Eigenvalues are decaying extremely fast. We choose the number for the approximation such that all Eigenvalues bigger than some threshold are used. In this case, we will not lose a lot of accuracy while gaining a significant speedup. For more slowly decaying Eigenspectra, this approximation might be more expensive.
End of explanation
"""
# tell Shogun to use gamma approximation
mmd.set_null_approximation_method(MMD2_GAMMA)
# the usual test interface
p_value_gamma=mmd.perform_test()
print "Gamma: P-value of MMD test is %.2f" % p_value_gamma
# compare with ground truth bootstrapping
mmd.set_null_approximation_method(PERMUTATION)
p_value_boot=mmd.perform_test()
print "Bootstrapping: P-value of MMD test is %.2f" % p_value_spectrum
"""
Explanation: The Gamma Moment Matching Approximation and Type I errors
$\DeclareMathOperator{\var}{var}$
Another method for approximating the null-distribution is to match the first two moments of a <a href="http://en.wikipedia.org/wiki/Gamma_distribution">Gamma distribution</a> and then compute the quantiles of that. This does not result in a consistent test, but it usually gives good results while being very fast. However, there are distributions where the method fails. Therefore, the type I error should always be monitored. The method is described in [2]. It uses
$$
m\mmd_b(Z) \sim \frac{x^{\alpha-1}\exp(-\frac{x}{\beta})}{\beta^\alpha \Gamma(\alpha)}
$$
where
$$
\alpha=\frac{(\textbf{E}(\text{MMD}_b(Z)))^2}{\var(\text{MMD}_b(Z))} \qquad \text{and} \qquad
\beta=\frac{m \var(\text{MMD}_b(Z))}{(\textbf{E}(\text{MMD}_b(Z)))^2}
$$
Then, any threshold and p-value can be computed using the gamma distribution in the above expression. Computational costs are in $\mathcal{O}(m^2)$. Note that the test is parameter free. It only works with the biased statistic.
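A hedged sketch of the moment-matching idea applied to samples of $m\mmd_b$; this is for illustration only (Shogun estimates the two moments directly in $\mathcal{O}(m^2)$ without sampling), and `observed_statistic` is a placeholder for the biased statistic computed on the actual data:

```python
from scipy.stats import gamma as gamma_dist

# null_samples: draws of m*MMD_b under H0, e.g. from permutation sampling with the BIASED statistic
shape_hat = mean(null_samples) ** 2 / var(null_samples)   # match first two moments of a gamma
scale_hat = var(null_samples) / mean(null_samples)
p_value_sketch = 1 - gamma_dist.cdf(observed_statistic, a=shape_hat, scale=scale_hat)
```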
End of explanation
"""
# type I error is false alarm, therefore sample data under H0
num_trials=50
rejections_gamma=zeros(num_trials)
rejections_spectrum=zeros(num_trials)
rejections_bootstrap=zeros(num_trials)
num_samples=50
alpha=0.05
for i in range(num_trials):
X=norm.rvs(size=n, loc=mu, scale=sigma2)
Y=laplace.rvs(size=n, loc=mu, scale=b)
# simulate H0 by merging the samples before splitting them again
Z=hstack((X,Y))
X=Z[:len(X)]
Y=Z[len(X):]
feat_p=RealFeatures(reshape(X, (1,len(X))))
feat_q=RealFeatures(reshape(Y, (1,len(Y))))
# gamma
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
mmd.set_null_approximation_method(MMD2_GAMMA)
mmd.set_statistic_type(BIASED)
rejections_gamma[i]=mmd.perform_test(alpha)
# spectrum
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
mmd.set_null_approximation_method(MMD2_SPECTRUM)
mmd.set_num_eigenvalues_spectrum(num_eigen)
mmd.set_num_samples_spectrum(num_samples)
mmd.set_statistic_type(BIASED)
rejections_spectrum[i]=mmd.perform_test(alpha)
# bootstrap (precompute kernel)
mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)
p_and_q=mmd.get_p_and_q()
kernel.init(p_and_q, p_and_q)
precomputed_kernel=CustomKernel(kernel)
mmd.set_kernel(precomputed_kernel)
mmd.set_null_approximation_method(PERMUTATION)
mmd.set_num_null_samples(num_samples)
mmd.set_statistic_type(BIASED)
rejections_bootstrap[i]=mmd.perform_test(alpha)
convergence_gamma=cumsum(rejections_gamma)/(arange(num_trials)+1)
convergence_spectrum=cumsum(rejections_spectrum)/(arange(num_trials)+1)
convergence_bootstrap=cumsum(rejections_bootstrap)/(arange(num_trials)+1)
print "Average rejection rate of H0 for Gamma is %.2f" % mean(convergence_gamma)
print "Average rejection rate of H0 for Spectrum is %.2f" % mean(convergence_spectrum)
print "Average rejection rate of H0 for Bootstrapping is %.2f" % mean(rejections_bootstrap)
"""
Explanation: As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similarly to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test and average the results. Below we compare the type I errors of all methods for approximating the null distribution. This will take a while.
End of explanation
"""
# paramters of dataset
m=20000
distance=10
stretch=5
num_blobs=3
angle=pi/4
# these are streaming features
gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)
# stream some data and plot
num_plot=1000
features=gen_p.get_streamed_features(num_plot)
features=features.create_merged_copy(gen_q.get_streamed_features(num_plot))
data=features.get_feature_matrix()
figure(figsize=(18,5))
subplot(121)
grid(True)
plot(data[0][0:num_plot], data[1][0:num_plot], 'r.', label='$x$')
title('$X\sim p$')
subplot(122)
grid(True)
plot(data[0][num_plot+1:2*num_plot], data[1][num_plot+1:2*num_plot], 'b.', label='$x$', alpha=0.5)
_=title('$Y\sim q$')
"""
Explanation: We see that Gamma basically never rejects, which is in line with the fact that the p-value was massively overestimated above. Note that for the other tests, the type I error is also not exactly at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I errors.
Linear Time MMD on Gaussian Blobs
So far, we basically had to precompute the kernel matrix for reasonable runtimes. This is not possible for more than a few thousand points. The linear time MMD statistic, implemented in <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a>, can help here, as it accepts data under the streaming interface <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStreamingFeatures.html">CStreamingFeatures</a>, which delivers data one-by-one.
It can also do more cool things, for example choose the best single (or combined) kernel for you. But we need a fancier dataset to show its power. We will use one of Shogun's streaming-based data generators, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianBlobsDataGenerator.html">CGaussianBlobsDataGenerator</a>, for that. This dataset consists of two distributions which are a grid of Gaussians where in one of them, the Gaussians are stretched and rotated. This dataset is regarded as challenging for two-sample testing.
End of explanation
"""
block_size=100
# if features are already under the streaming interface, just pass them
mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)
# compute an unbiased estimate in linear time
statistic=mmd.compute_statistic()
print "MMD_l[X,Y]^2=%.2f" % statistic
# note: due to the streaming nature, successive calls of compute_statistic use different data
# and produce different results. Data cannot be stored in memory
for _ in range(5):
print "MMD_l[X,Y]^2=%.2f" % mmd.compute_statistic()
"""
Explanation: We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression which still uses all available data can be obtained by dividing data into two parts and then compute
$$
\mmd_l^2[\mathcal{F},X,Y]=\frac{1}{m_2}\sum_{i=1}^{m_2}\left[ k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-
k(x_{2i+1},y_{2i})\right]
$$
where $m_2=\lfloor\frac{m}{2} \rfloor$. While the above expression assumes that $m$ data points are available from each distribution, the statistic in general works in an online setting where features are obtained one by one. Since only groups of four points are considered at once, the statistic can be computed on data streams. In addition, the computational costs are linear in the number of samples that are considered from each distribution. These two properties make the linear time MMD very applicable for large-scale two-sample tests. In theory, any number of samples can be processed -- time is the only limiting factor.
We begin by illustrating how to pass data to <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a>. In order not to lose performance due to overhead, it is possible to specify a block size for the data stream.
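A hedged in-memory NumPy sketch of this estimator (Shogun's CLinearTimeMMD computes it on streams without ever storing the data; the elementwise kernel `k` below is an assumption of this sketch):

```python
def linear_time_mmd(X, Y, k):
    # k is an elementwise kernel, e.g. k = lambda a, b: exp(-(a - b) ** 2)
    m2 = min(len(X), len(Y)) // 2
    h = (k(X[0:2*m2:2], X[1:2*m2:2]) + k(Y[0:2*m2:2], Y[1:2*m2:2])
         - k(X[0:2*m2:2], Y[1:2*m2:2]) - k(X[1:2*m2:2], Y[0:2*m2:2]))
    return h.mean(), h   # statistic and the per-pair terms (their variance is used further below)
```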
End of explanation
"""
# data source
gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)
gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)
# retrieve some points, store them as non-streaming data in memory
data_p=gen_p.get_streamed_features(100)
data_q=gen_q.get_streamed_features(data_p.get_num_vectors())
print "Number of data is %d" % data_p.get_num_vectors()
# cast data in memory as streaming features again (which now stream from the in-memory data)
streaming_p=StreamingRealFeatures(data_p)
streaming_q=StreamingRealFeatures(data_q)
# it is important to start the internal parser to avoid deadlocks
streaming_p.start_parser()
streaming_q.start_parser()
# example to create mmd (note that m can be maximum the number of data in memory)
mmd=LinearTimeMMD(GaussianKernel(10,1), streaming_p, streaming_q, data_p.get_num_vectors(), 1)
print "Linear time MMD statistic: %.2f" % mmd.compute_statistic()
"""
Explanation: Sometimes, one might want to use <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a> with data that is stored in memory. In that case, it is easy to wrap in-memory data in the form of, for example, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html">CDenseFeatures</a> into the streaming interface via <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStreamingDenseFeatures.html">CStreamingDenseFeatures</a>.
End of explanation
"""
mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)
print "m=%d samples from p and q" % m
print "Binary test result is: " + ("Rejection" if mmd.perform_test(alpha) else "No rejection")
print "P-value test result is %.2f" % mmd.perform_test()
"""
Explanation: The Gaussian Approximation to the Null Distribution
As for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a> operates on streaming features, new data is taken from the stream in every iteration.
Bootstrapping is not really necessary since there exists a fast and consistent estimate of the null-distribution. However, to ensure that any approximation is accurate, it should always be checked against bootstrapping at least once.
Since both the null- and the alternative distribution of the linear time MMD are Gaussian with equal variance (and different mean), it is possible to approximate the null-distribution by using a linear time estimate for this variance. An unbiased, linear time estimator for
$$
\var[\mmd_l^2[\mathcal{F},X,Y]]
$$
can simply be computed by computing the empirical variance of
$$
k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-k(x_{2i+1},y_{2i}) \qquad (1\leq i\leq m_2)
$$
A normal distribution with this variance and zero mean can then be used as an approximation for the null-distribution. This results in a consistent test and is very fast. However, note that it is an approximation and its accuracy depends on the underlying data distributions. It is a good idea to compare to the bootstrapping approach first to determine an appropriate number of samples to use. This number is usually in the tens of thousands.
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a> allows approximating the null distribution in the same pass as computing the statistic itself (in linear time). This should always be used in practice since separate calls for computing the statistic and the p-value would operate on different data from the stream. Below, we compute the test on a large amount of data (it would be impossible to perform the quadratic time MMD here, as the kernel matrices cannot be stored in memory).
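A hedged sketch of the resulting Gaussian null approximation, assuming `h` holds the per-pair terms from the linear-time sketch above:

```python
from scipy.stats import norm   # already imported earlier in this notebook

def gaussian_null_p_value(h):
    m2 = len(h)
    statistic = h.mean()
    null_std = sqrt(h.var() / m2)          # std of the statistic under H0 (zero mean)
    return 1 - norm.cdf(statistic / null_std)
```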
End of explanation
"""
sigmas=[2**x for x in linspace(-5,5, 10)]
print "Choosing kernel width from", ["{0:.2f}".format(sigma) for sigma in sigmas]
combined=CombinedKernel()
for i in range(len(sigmas)):
combined.append_kernel(GaussianKernel(10, sigmas[i]))
# mmd instance using streaming features
block_size=1000
mmd=LinearTimeMMD(combined, gen_p, gen_q, m, block_size)
# optimal kernel choice is possible for the linear time MMD
selection=MMDKernelSelectionOpt(mmd)
# select best kernel
best_kernel=selection.select_kernel()
best_kernel=GaussianKernel.obtain_from_generic(best_kernel)
print "Best single kernel has bandwidth %.2f" % best_kernel.get_width()
"""
Explanation: Kernel Selection for the MMD -- Overview
$\DeclareMathOperator{\argmin}{arg\,min}
\DeclareMathOperator{\argmax}{arg\,max}$
Now which kernel do we actually use for our tests? So far, we just plugged in arbritary ones. However, for kernel two-sample testing, it is possible to do something more clever.
Shogun's kernel selection methods for MMD based two-sample tests are all based around [3, 4]. For the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a>, [3] describes a way of selecting the optimal kernel in the sense that the test's type II error is minimised. For the linear time MMD, this is the method of choice. It is done via maximising the MMD statistic divided by its standard deviation and it is possible for single kernels and also for convex combinations of them. For the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html">CQuadraticTimeMMD</a>, the best method in literature is choosing the kernel that maximises the MMD statistic [4]. For convex combinations of kernels, this can be achieved via an $L2$ norm constraint. A detailed comparison of all methods on numerous datasets can be found in [5].
MMD Kernel selection in Shogun always involves an implementation of the base class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelection.html">CMMDKernelSelection</a>, which defines the interface for kernel selection. If combinations of kernel should be considered, there is a sub-class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionComb.html">CMMDKernelSelectionComb</a>. In addition, it involves setting up a number of baseline kernels $\mathcal{K}$ to choose from/combine in the form of a <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html">CCombinedKernel</a>. All methods compute their results for a fixed set of these baseline kernels. We later give an example how to use these classes after providing a list of available methods.
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMedian.html">CMMDKernelSelectionMedian</a> Selects from a set of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> instances the one whose width parameter is closest to the median of the pairwise distances in the data. The median is computed on a certain number of points from each distribution that can be specified as a parameter. Since the median is a stable statistic, one does not have to compute all pairwise distances but rather just a few thousand. This method is a useful (and fast) heuristic that in many cases gives a good hint on where to start looking for Gaussian kernel widths. It is for example described in [1]. Note that it may fail badly in selecting a good kernel for certain problems.
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMax.html">CMMDKernelSelectionMax</a> Selects from a set of arbitrary baseline kernels a single one that maximises the used MMD statistic -- more specific its estimate.
$$
k^*=\argmax_{k\in\mathcal{K}} \hat \eta_k,
$$
where $\eta_k$ is an empirical MMD estimate for using a kernel $k$.
This was first described in [4] and was empirically shown to perform better than the median heuristic above. However, it remains a heuristic that comes with no guarantees. Since MMD estimates can be computed in linear and quadratic time, this method works for both methods. However, for the linear time statistic, there exists a better method.
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html">CMMDKernelSelectionOpt</a> Selects the optimal single kernel from a set of baseline kernels. This is done via maximising the ratio of the linear MMD statistic and its standard deviation.
$$
k^*=\argmax_{k\in\mathcal{K}} \frac{\hat \eta_k}{\hat\sigma_k+\lambda},
$$
where $\eta_k$ is a linear time MMD estimate for using a kernel $k$ and $\hat\sigma_k$ is a linear time variance estimate of $\eta_k$ to which a small number $\lambda$ is added to prevent division by zero.
These are estimated in linear time with the streaming framework that was described earlier. Therefore, this method is only available for <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html">CLinearTimeMMD</a>. Optimal here means that the resulting test's type II error is minimised for a fixed type I error. Important: For this method to work, the kernel needs to be selected on *different* data than the test is performed on. Otherwise, the method will produce wrong results.
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombMaxL2.html">CMMDKernelSelectionCombMaxL2</a> Selects a convex combination of kernels that maximises the MMD statistic. This is the multiple kernel analogous to <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMax.html">CMMDKernelSelectionMax</a>. This is done via solving the convex program
$$
\boldsymbol{\beta}^*=\min_{\boldsymbol{\beta}} \{\boldsymbol{\beta}^T\boldsymbol{\beta} : \boldsymbol{\beta}^T\boldsymbol{\eta}=\mathbf{1}, \boldsymbol{\beta}\succeq 0\},
$$
where $\boldsymbol{\beta}$ is a vector of the resulting kernel weights and $\boldsymbol{\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. See [3] for details. Note that this method is unable to select a single kernel -- even when this would be optimal.
Again, when using the linear time MMD, there are better methods available.
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombOpt.html">CMMDKernelSelectionCombOpt</a> Selects a convex combination of kernels that maximises the MMD statistic divided by its covariance. This corresponds to *optimal* kernel selection in the same sense as in class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html">CMMDKernelSelectionOpt</a> and is its multiple kernel analogous. The convex program to solve is
$$
\boldsymbol{\beta}^*=\min_{\boldsymbol{\beta}} \{\boldsymbol{\beta}^T(\hat Q+\lambda I)\boldsymbol{\beta} : \boldsymbol{\beta}^T\boldsymbol{\eta}=\mathbf{1}, \boldsymbol{\beta}\succeq 0\},
$$
where again $\boldsymbol{\beta}$ is a vector of the resulting kernel weights and $\boldsymbol{\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. The matrix $\hat Q$ is a linear time estimate of the covariance matrix of the vector $\boldsymbol{\eta}$ to whose diagonal a small number $\lambda$ is added to prevent division by zero. See [3] for details. In contrast to <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombMaxL2.html">CMMDKernelSelectionCombMaxL2</a>, this method is able to select a single kernel when this gives a lower type II error than a combination. In this sense, it contains <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html">CMMDKernelSelectionOpt</a>.
MMD Kernel Selection in Shogun
In order to use one of the above methods for kernel selection, one has to create a new instance of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html">CCombinedKernel</a> and append all desired baseline kernels to it. This combined kernel is then passed to the MMD class. Then, an object of any of the above kernel selection methods is created and the MMD instance is passed to it in the constructor. There are then multiple methods to call
compute_measures to compute a vector of kernel selection criteria if a single kernel selection method is used. It will return a vector of selected kernel weights if a combined kernel selection method is used. For <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMedian.html">CMMDKernelSelectionMedian</a>, the method throws an error.
select_kernel returns the selected kernel of the method. For single kernels this will be one of the baseline kernel instances. For the combined kernel case, this will be the underlying <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html">CCombinedKernel</a> instance where the subkernel weights are set to the weights that were selected by the method.
In order to utilise the selected kernel, it has to be passed to an MMD instance. We now give an example how to select the optimal single and combined kernel for the Gaussian Blobs dataset.
What is the best kernel to use here? This is tricky since the distinguishing characteristics are hidden at a small length-scale. Create some kernels to select the best from
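A hedged sketch of the criterion behind MMDKernelSelectionOpt, expressed with the linear-time per-pair terms from the earlier sketch; `h_for_width` is a hypothetical helper that would return those terms for a candidate Gaussian width, and is not part of Shogun:

```python
lambda_reg = 1e-5   # small constant to avoid division by zero

def opt_criterion(h):
    # ratio of the linear-time MMD estimate and its standard deviation
    return h.mean() / (h.std() + lambda_reg)

# best_width = max(sigmas, key=lambda s: opt_criterion(h_for_width(s)))   # h_for_width: hypothetical
```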
End of explanation
"""
alpha=0.05
mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, m, block_size)
mmd.set_null_approximation_method(MMD1_GAUSSIAN);
p_value_best=mmd.perform_test();
print "Bootstrapping: P-value of MMD test with optimal kernel is %.2f" % p_value_best
"""
Explanation: Now perform two-sample test with that kernel
End of explanation
"""
mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, 5000, block_size)
num_samples=500
# sample null and alternative distribution, implicitly generate new data for that
null_samples=zeros(num_samples)
alt_samples=zeros(num_samples)
for i in range(num_samples):
alt_samples[i]=mmd.compute_statistic()
# tell MMD to merge data internally while streaming
mmd.set_simulate_h0(True)
null_samples[i]=mmd.compute_statistic()
mmd.set_simulate_h0(False)
"""
Explanation: For the linear time MMD, the null and alternative distributions look different from those of the quadratic time MMD plotted above. Let's sample them (this takes longer, so we reduce the number of samples a bit). Note how we can tell the linear time MMD to simulate the null hypothesis, which is necessary since we cannot permute by hand as the samples are not in memory.
End of explanation
"""
plot_alt_vs_null(alt_samples, null_samples, alpha)
"""
Explanation: And visualise again. Note that both the null and the alternative distribution are Gaussian, which allows the fast null distribution approximation and the optimal kernel selection.
End of explanation
"""
|
jdorvi/MonteCarlos_SLC
|
.ipynb_checkpoints/Distribution_Fit_MC-checkpoint.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import gridspec
import scipy
import scipy.stats as stats
import pandas as pd
import numpy as np
from ipywidgets import interact, interact_manual
import os
"""
Explanation: Fitting Distributions to a Dataset
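As a hedged mini-example of the fit-and-test pattern this notebook applies to the met-ocean data (the distribution and synthetic sample below are purely illustrative):

```python
import scipy.stats as stats

sample = stats.weibull_min.rvs(1.5, loc=0, scale=2.0, size=1000)   # synthetic data for illustration
params = stats.weibull_min.fit(sample)                             # returns shape(s), loc, scale
D, p_value = stats.kstest(sample, 'weibull_min', params)           # Kolmogorov-Smirnov goodness of fit
```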
End of explanation
"""
continuous_distributions = "C:\\Users\\jdorvinen\\Documents\\ipynbs\\Hoboken\\continuous_distributions_.csv"
dists_unindexed = pd.read_csv(continuous_distributions)
dist_list = dists_unindexed.distribution.tolist()
dists = dists_unindexed.set_index(dists_unindexed.distribution)
file_name = 'montauk_combined_data.csv'
file_path = 'C:/Users/jdorvinen/Documents/Jared/Projects/East Hampton/met_data' #Path to your data here
#If your data is in an excel spreadsheet save it as a delimited text file (.csv formatted)
filename = os.path.join(file_path,file_name)
df = pd.read_fwf(filename,usecols=['length','inter','swel','hsig','tps','a_hsig','a_tps']).dropna()
df = df[df.hsig>3.1]
title = str(file_name)+"\n"
pd.tools.plotting.scatter_matrix(df.drop(labels=['a_hsig','a_tps'],axis=1))
plt.scatter(df.length,df.tps)
import sklearn as sk
def save_figure(path,name):
plt.savefig(path+name+".png",
dpi=200,
facecolor='none',
edgecolor='none'
)
def find_nearest(array,value):
idx = (np.abs(array-value)).argmin()
return idx
def dist_fit(name,dist_name,bins,parameter):
global df
#Initialize figure and set dimensions
fig = plt.figure(figsize = (18,6))
gs = gridspec.GridSpec(2,2)
ax1 = fig.add_subplot(gs[:,0])
ax3 = fig.add_subplot(gs[:,1])
ax1.set_title(title,fontsize=20)
#Remove the plot frame lines. They are unnecessary chartjunk.
ax1.spines["top"].set_visible(False)
ax1.spines["right"].set_visible(False)
ax3.spines["top"].set_visible(False)
ax3.spines["right"].set_visible(False)
# Ensure that the axis ticks only show up on the bottom and left of the plot.
# Ticks on the right and top of the plot are generally unnecessary chartjunk.
ax1.get_xaxis().tick_bottom()
ax1.get_yaxis().tick_left()
ax3.get_xaxis().tick_bottom()
ax3.get_yaxis().tick_left()
# Make sure your axis ticks are large enough to be easily read.
# You don't want your viewers squinting to read your plot.
ax1.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="on", right="off", labelleft="on",labelsize=14)
ax3.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="on", right="off", labelleft="on",labelsize=14)
# Along the same vein, make sure your axis labels are large
# enough to be easily read as well. Make them slightly larger
# than your axis tick labels so they stand out.
ax1.set_xlabel(parameter, fontsize=16)
ax1.set_ylabel("Frequency of occurence", fontsize=16)
ax3.set_xlabel(parameter, fontsize=16)
ax3.set_ylabel("Exceedance Probability", fontsize=16)
#Set up binning and plotting variables
size = len(df[parameter])
max_val = 1.1*max(df[parameter])
min_val = min(df[parameter])
range_val = max_val-min_val
binsize = range_val/bins
x0 = np.arange(min_val,max_val,range_val*0.0001)
x1 = np.arange(min_val,max_val,binsize)
y1 = df[parameter]
#set x-axis limits
ax1.set_xlim(min_val,max_val)
ax3.set_xlim(min_val,max_val)
ax3.set_ylim(0,1.1)
#Plot histograms
EPDF = ax1.hist(y1, bins=x1, color='w')
ECDF = ax3.hist(y1, bins=x1, color='w', normed=1, cumulative=True)
#Fitting distribution
dist = getattr(scipy.stats, dist_name)
param = dist.fit(y1)
pdf_fitted = dist.pdf(x0, *param[:-2], loc=param[-2], scale=param[-1])*size*binsize
cdf_fitted = dist.cdf(x0, *param[:-2], loc=param[-2], scale=param[-1])
#Checking goodness of fit
#ks_fit = stats.kstest(pdf_fitted,dist_name) # Kolmogorov-Smirnov test: returns [KS stat (D,D+,orD-),pvalue]
#print(ks_fit)
#Finding location of 0.002 and 0.01 exceedence probability events
FiveHundInd = find_nearest(cdf_fitted,0.998)
OneHundInd = find_nearest(cdf_fitted,0.990)
#Plotting pdf and cdf
ax1.plot(x0,pdf_fitted,linewidth=2,label=dist_name)
ax3.plot(x0,cdf_fitted,linewidth=2,label=dist_name)
#update figure spacing
gs.update(wspace=0.1, hspace=0.2)
#adding a text box
ax3.text(min_val+0.1*range_val,1.1,
dist_name.upper()+" distribution\n"
+"\n"
+"0.2% - value: " + str("%.2f" %x0[FiveHundInd])+ " meters\n"
+"1.0% - value: " + str("%.2f" %x0[OneHundInd]) + " meters",
fontsize=14
)
print(dists.loc[dist_name,'description']+"\n")
param_names = (dist.shapes + ', loc, scale').split(', ') if dist.shapes else ['loc', 'scale']
param_str = ', '.join(['{}={:0.2f}'.format(k,v) for k,v in zip(param_names, param)])
dist_str = '{}({})'.format(dist_name, param_str)
print(dist_str)
plt.show()
print(stats.kstest(y1,dist_name,param,alternative='less'))
print(stats.kstest(y1,dist_name,param,alternative='greater'))
print(stats.kstest(y1,dist_name,param,alternative='two-sided'))
return name
interact_manual(dist_fit, name=filename, dist_name=dist_list,bins=[25,100,5],parameter=['length','inter','swel','hsig','tps','a_hsig','a_tps'])
"""
Explanation: Import Data
End of explanation
"""
dist = getattr(scipy.stats, 'genextreme')
#param = dist.fit(y1)
"""
Explanation: ### References
Python <br>
http://stackoverflow.com/questions/6615489/fitting-distributions-goodness-of-fit-p-value-is-it-possible-to-do-this-with/16651524#16651524 <br>
http://stackoverflow.com/questions/6620471/fitting-empirical-distribution-to-theoretical-ones-with-scipy-python <br><br>
Extreme wave statistics <br>
http://drs.nio.org/drs/bitstream/handle/2264/4165/Nat_Hazards_64_223a.pdf;jsessionid=55AAEDE5A2BF3AA06C6CCB5CE3CBEBAD?sequence=1 <br><br>
List of available distributions can be found here <br>
http://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions<br><br>
Goodness of fit tests <br>
http://statsmodels.sourceforge.net/stable/stats.html#goodness-of-fit-tests-and-measures <br>
http://docs.scipy.org/doc/scipy/reference/stats.html#statistical-functions <br>
End of explanation
"""
|
jrg365/gpytorch
|
examples/04_Variational_and_Approximate_GPs/GP_Regression_with_Uncertain_Inputs.ipynb
|
mit
|
import math
import torch
import tqdm
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
"""
Explanation: GP Regression with Uncertain Inputs
Introduction
In this notebook, we're going to demonstrate one way of dealing with uncertainty in our training data. Let's say that we're collecting training data that models the following function.
\begin{align}
y &= \sin(2\pi x) + \epsilon \
\epsilon &\sim \mathcal{N}(0, 0.2)
\end{align}
However, now assume that we're a bit uncertain about our features. In particular, we're going to assume that every x_i value is not a point but a distribution instead. E.g.
$$ x_i \sim \mathcal{N}(\mu_i, \sigma_i). $$
Using stochastic variational inference to deal with uncertain inputs
To deal with this uncertainty, we'll use variational inference (VI) in conjunction with stochastic optimization. At every optimization iteration, we'll draw a sample x_i from the input distribution. The objective function (ELBO) that we compute will be an unbiased estimate of the true ELBO, and so a stochastic optimizer like SGD or Adam should converge to the true ELBO (or at least a local minimum of it).
End of explanation
"""
# Training data is 100 points in [0,1] inclusive regularly spaced
train_x_mean = torch.linspace(0, 1, 20)
# We'll assume the variance shrinks the closer we get to 1
train_x_stdv = torch.linspace(0.03, 0.01, 20)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x_mean * (2 * math.pi)) + torch.randn(train_x_mean.size()) * 0.2
f, ax = plt.subplots(1, 1, figsize=(8, 3))
ax.errorbar(train_x_mean, train_y, xerr=(train_x_stdv * 2), fmt="k*", label="Train Data")
ax.legend()
"""
Explanation: Set up training data
In the next cell, we set up the training data for this example. We'll be using 20 regularly spaced points on [0,1].
We'll represent each of the training points $x_i$ by their mean $\mu_i$ and variance $\sigma_i$.
End of explanation
"""
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import VariationalStrategy
class GPModel(ApproximateGP):
def __init__(self, inducing_points):
variational_distribution = CholeskyVariationalDistribution(inducing_points.size(0))
variational_strategy = VariationalStrategy(self, inducing_points, variational_distribution, learn_inducing_locations=True)
super(GPModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
inducing_points = torch.randn(10, 1)
model = GPModel(inducing_points=inducing_points)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
"""
Explanation: Setting up the model
Since we're performing VI to deal with the feature uncertainty, we'll be using a ~gpytorch.models.ApproximateGP. Similar to the SVGP example, we'll use a VariationalStrategy and a CholeskyVariationalDistribution to define our posterior approximation.
End of explanation
"""
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 400
model.train()
likelihood.train()
# Optimize both the model and likelihood parameters with Adam. (Empirically, plain SGD can also work well for variational regression.)
optimizer = torch.optim.Adam([
{'params': model.parameters()},
{'params': likelihood.parameters()},
], lr=0.01)
# Our loss object. We're using the VariationalELBO
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
iterator = tqdm.notebook.tqdm(range(training_iter))
for i in iterator:
# First thing: draw a sample set of features from our distribution
train_x_sample = torch.distributions.Normal(train_x_mean, train_x_stdv).rsample()
# Now do the rest of the training loop
optimizer.zero_grad()
output = model(train_x_sample)
loss = -mll(output, train_y)
iterator.set_postfix(loss=loss.item())
loss.backward()
optimizer.step()
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(8, 3))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.errorbar(train_x_mean.numpy(), train_y.numpy(), xerr=train_x_stdv, fmt='k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
"""
Explanation: Training the model with uncertain features
The training iteration should look pretty similar to the SVGP example -- where we optimize the variational parameters and model hyperparameters. The key difference is that, at every iteration, we will draw samples from our features distribution (since we don't have point measurements of our features).
```python
# Inside the training iteration...
train_x_sample = torch.distributions.Normal(train_x_mean, train_x_stdv).rsample()
# Rest of training iteration...
```
End of explanation
"""
|
gammapy/PyGamma15
|
tutorials/naima/naima_mcmc.ipynb
|
bsd-3-clause
|
import naima
import numpy as np
from astropy.io import ascii
import astropy.units as u
%matplotlib inline
import matplotlib.pyplot as plt
hess_spectrum = ascii.read('RXJ1713_HESS_2007.dat', format='ipac')
fig = naima.plot_data(hess_spectrum)
"""
Explanation: SED fitting with naima
In this notebook we will carry out a fit of an IC model to the HESS spectrum of RX J1713.7-3946 with the naima wrapper around emcee. This tutorial will follow loosely the tutorial found on the naima documentation.
The first step is to load the data, which we can find in the same directory as this notebook. The data format required by naima for the data files can be found in the documentation.
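As a quick illustration (not part of the original tutorial), a naima-compatible table can also be built directly in memory with astropy. The column names 'energy', 'flux' and 'flux_error' below reflect my reading of the naima data format and should be checked against the documentation:
```python
from astropy.table import Table
import astropy.units as u

# Toy three-point spectrum with the (assumed) required columns and units.
toy_spectrum = Table({
    'energy': [1.0, 3.0, 10.0] * u.TeV,
    'flux': [1e-11, 3e-12, 4e-13] * u.Unit('1 / (cm2 s TeV)'),
    'flux_error': [2e-12, 6e-13, 1e-13] * u.Unit('1 / (cm2 s TeV)'),
})
# naima.plot_data(toy_spectrum) would then display it just like the HESS table.
```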
End of explanation
"""
from naima.models import ExponentialCutoffPowerLaw, InverseCompton
from naima import uniform_prior
ECPL = ExponentialCutoffPowerLaw(1e36/u.eV, 5*u.TeV, 2.7, 50*u.TeV)
IC = InverseCompton(ECPL, seed_photon_fields=['CMB', ['FIR', 30*u.K, 0.4*u.eV/u.cm**3]])
# define labels and initial vector for the parameters
labels = ['log10(norm)', 'index', 'log10(cutoff)']
p0 = np.array((34, 2.7, np.log10(30)))
# define the model function
def model(pars, data):
ECPL.amplitude = (10**pars[0]) / u.eV
ECPL.alpha = pars[1]
ECPL.e_cutoff = (10**pars[2]) * u.TeV
return IC.flux(data['energy'], distance=2.0*u.kpc), IC.compute_We(Eemin=1*u.TeV)
from naima import uniform_prior
def lnprior(pars):
lnprior = uniform_prior(pars[1], -1, 5)
return lnprior
"""
Explanation: Then we define the model to be fit. The model function must take a tuple of free parameters as its first argument and a data table as its second. It must return the model flux at the energies given by data['energy'] as its first element; any extra objects returned will be saved with the MCMC chain.
emcee does not accept astropy Quantities as parameters, so we have to give them units before setting the attributes of the particle distribution function.
Here we define an IC model with an Exponential Cutoff Power-Law with the amplitude, index, and cutoff energy as free parameters. Because the amplitude and cutoff energy may be considered to have a uniform prior in log-space, we sample their decimal logarithms (we could also use a log-uniform prior). We also place a uniform prior on the particle index with limits between -1 and 5.
End of explanation
"""
sampler, pos = naima.run_sampler(data_table=hess_spectrum, model=model, prior=lnprior, p0=p0, labels=labels,
nwalkers=32, nburn=50, nrun=100, prefit=True, threads=4)
# inspect the chains stored in the sampler for the three free parameters
f = naima.plot_chain(sampler, 0)
f = naima.plot_chain(sampler, 1)
f = naima.plot_chain(sampler, 2)
# make a corner plot of the parameters to show covariances
f = naima.plot_corner(sampler)
# Show the fit
f = naima.plot_fit(sampler)
f.axes[0].set_ylim(bottom=1e-13)
# Inspect the metadata blob saved
f = naima.plot_blob(sampler,1, label='$W_e (E_e>1$ TeV)')
# There is also a convenience function that will plot all the above files to pngs or a single pdf
naima.save_diagnostic_plots('RXJ1713_naima_fit', sampler, blob_labels=['Spectrum','$W_e (E_e>1$ TeV)'])
"""
Explanation: We take the data, model, prior, parameter vector, and labels and call the main fitting procedure: naima.run_sampler. This function is a wrapper around emcee, and the details of the MCMC run can be configured through its arguments:
nwalkers: number of emcee walkers.
nburn: number of steps to take for the burn-in period. These steps will be discarded in the final results.
nrun: number of steps to take and save to the sampler chain.
prefit: whether to do a Nelder-Mead fit before starting the MCMC run (reduces the burn-in steps required).
interactive: whether to launch an interactive model fitter before starting the run to set the initial vector. This will only work if matplotlib is using a GUI backend (qt4, qt5, gtkagg, tkagg, etc.). The final parameters when you close the window will be used as the starting point for the run.
threads: How many different threads (CPU cores) to use when computing the likelihood.
End of explanation
"""
suzaku_spectrum = ascii.read('RXJ1713_Suzaku-XIS.dat')
f=naima.plot_data(suzaku_spectrum)
"""
Explanation: Simultaneous fitting of two radiative components: Synchrotron and IC.
Use the Suzaku XIS spectrum of RX J1713 to do a simultaneous fit of the synchrotron and inverse Compton spectra and derive an estimate of the magnetic field strength under the assumption of a leptonic scenario.
End of explanation
"""
f=naima.plot_data([suzaku_spectrum, hess_spectrum], sed=True)
"""
Explanation: Note that in all naima functions (including run_sampler) you can provide a list of spectra, so you can consider both the HESS and Suzaku spectra:
End of explanation
"""
#from naima.models import ExponentialCutoffPowerLaw, InverseCompton
#from naima import uniform_prior
#ECPL = ExponentialCutoffPowerLaw(1e36/u.eV, 10*u.TeV, 2.7, 50*u.TeV)
#IC = InverseCompton(ECPL, seed_photon_fields=['CMB', ['FIR', 30*u.K, 0.4*u.eV/u.cm**3]])
## define labels and initial vector for the parameters
#labels = ['log10(norm)', 'index', 'log10(cutoff)']
#p0 = np.array((34, 2.7, np.log10(30)))
## define the model function
#def model(pars, data):
# ECPL.amplitude = (10**pars[0]) / u.eV
# ECPL.alpha = pars[1]
# ECPL.e_cutoff = (10**pars[2]) * u.TeV
# return IC.flux(data['energy'], distance=2.0*u.kpc), IC.compute_We(Eemin=1*u.TeV)
#from naima import uniform_prior
#def lnprior(pars):
# lnprior = uniform_prior(pars[1], -1, 5)
# return lnprior
"""
Explanation: Below is the model, labels, parameters and prior defined above for the IC-only fit. Modify it as needed and feed it to naima.run_sampler to obtain an estimate of the magnetic field strength.
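One possible way to extend the IC-only template (my sketch, not from the original notebook) is to add a Synchrotron component sharing the same electron distribution, with the magnetic field strength as an extra free parameter. Here naima.models.Synchrotron is assumed to accept the particle distribution and a field strength B, and the extra 'log10(B/uG)' parameter and its starting value are my choices:
```python
from naima.models import ExponentialCutoffPowerLaw, InverseCompton, Synchrotron
from naima import uniform_prior

ECPL = ExponentialCutoffPowerLaw(1e36/u.eV, 10*u.TeV, 2.7, 50*u.TeV)
IC = InverseCompton(ECPL, seed_photon_fields=['CMB', ['FIR', 30*u.K, 0.4*u.eV/u.cm**3]])

labels = ['log10(norm)', 'index', 'log10(cutoff)', 'log10(B/uG)']
p0 = np.array((34, 2.7, np.log10(30), np.log10(10)))

def model(pars, data):
    ECPL.amplitude = (10**pars[0]) / u.eV
    ECPL.alpha = pars[1]
    ECPL.e_cutoff = (10**pars[2]) * u.TeV
    SYN = Synchrotron(ECPL, B=(10**pars[3]) * u.uG)
    # Sum the two radiative components over the combined X-ray + gamma-ray energies.
    return (IC.flux(data['energy'], distance=2.0*u.kpc)
            + SYN.flux(data['energy'], distance=2.0*u.kpc))

def lnprior(pars):
    return uniform_prior(pars[1], -1, 5)
```
Feeding this model, together with the list [suzaku_spectrum, hess_spectrum], to naima.run_sampler should then yield posterior samples for log10(B/uG).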
End of explanation
"""
|
feffenberger/StatisticalMethods
|
lessons/3.PDFCharacterization.ipynb
|
gpl-2.0
|
from straightline_utils import *
%matplotlib inline
from matplotlib import rcParams
rcParams['savefig.dpi'] = 100
(x,y,sigmay) = get_data_no_outliers()
plot_yerr(x, y, sigmay)
"""
Explanation: PHYS366: Statistical Methods in Astrophysics
Lesson 3: Inference in Practice: PDF Characterization
Goals for this session:
Linear problems: General solution + short cuts. When and why are which short cuts OK?
Introduction to Monte Carlo Methods
...
Adapted from straight line notebook by Phil Marshall and Dustin Lang
Related reading:
Ivezic Ch. 8.1, 8.2, 8.8
MacKay Ch. 12
Switch to function generated data set from the start?
The Data Set
End of explanation
"""
def straight_line_log_likelihood(x, y, sigmay, m, b):
'''
Returns the log-likelihood of drawing data values *y* at
known values *x* given Gaussian measurement noise with standard
deviation with known *sigmay*, where the "true" y values are
*y_t = m * x + b*
x: list of x coordinates
y: list of y coordinates
sigmay: list of y uncertainties
m: scalar slope
b: scalar line intercept
Returns: scalar log likelihood
'''
return (np.sum(np.log(1./(np.sqrt(2.*np.pi) * sigmay))) +
np.sum(-0.5 * (y - (m*x + b))**2 / sigmay**2))
def straight_line_log_prior(m, b):
return 0.
def straight_line_log_posterior(x,y,sigmay, m,b):
return (straight_line_log_likelihood(x,y,sigmay, m,b) +
straight_line_log_prior(m, b))
# Evaluate log P(m,b | x,y,sigmay) on a grid.
# Set up grid
mgrid = np.linspace(mlo, mhi, 100)
bgrid = np.linspace(blo, bhi, 101)
log_posterior = np.zeros((len(mgrid),len(bgrid)))
# Evaluate log probability on grid
for im,m in enumerate(mgrid):
for ib,b in enumerate(bgrid):
log_posterior[im,ib] = straight_line_log_posterior(x, y, sigmay, m, b)
# Convert to probability density and plot
posterior = np.exp(log_posterior - log_posterior.max())
plt.imshow(posterior, extent=[blo,bhi, mlo,mhi],cmap='Blues',
interpolation='nearest', origin='lower', aspect=(bhi-blo)/(mhi-mlo),
vmin=0, vmax=1)
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
i = np.argmax(posterior)
i,j = np.unravel_index(i, posterior.shape)
print 'Grid maximum posterior values: b =', bgrid[j], ' m =', mgrid[i]
plt.title('Straight line: posterior PDF for parameters');
#plt.plot(b_ls, m_ls, 'w+', ms=12, mew=4);
plot_mb_setup();
"""
Explanation: Bayesian Solution: Posterior distribution of model parameters
This looks like data points scattered around a straight line,
$y = b + m x$.
If the error distribution in $y$ is Gaussian, the data likelihood for a specific linear model $(m,b)$ is given by
$P(\{x_i,y_i,\sigma_{y_i}\}|(m,b)) = \prod_i\frac{1}{\sqrt{2\pi}\sigma_{y_i}} \exp\left[-\frac{1}{2}\,\frac{(y_i-(b+m x_i))^2}{\sigma_{y_i}^2}\right]$.
Assuming a flat prior PDF for the model parameters, the posterior PDF of model parameters $(m, b)$ is directly proportional to the data likelihood. We can test this model by determining the parameter log-likelihood
$\ln L((m,b)|\{x_i,y_i,\sigma_{y_i}\}) \propto -\frac{1}{2}\sum_i \frac{(y_i-(b+m x_i))^2}{\sigma_{y_i}^2}$
on a parameter grid, which captures the uncertainty in the model parameters given the data. For simple, 2-dimensional parameter spaces like this one, evaluating on a grid is not a bad way to go.
End of explanation
"""
# Linear algebra: weighted least squares
N = len(x)
A = np.zeros((N,2))
A[:,0] = 1. / sigmay
A[:,1] = x / sigmay
b = y / sigmay
theta,nil,nil,nil = np.linalg.lstsq(A, b)
plot_yerr(x, y, sigmay)
b_ls,m_ls = theta
print 'Least Squares (maximum likelihood) estimator:', b_ls,m_ls
plot_line(m_ls, b_ls);
"""
Explanation: Short Cut #1: Linear Least Squares
An industry standard: find the slope $m_{\mathrm{LS}}$ and intercept $b_{\mathrm{LS}}$ that minimize the mean square residual
$S(m,b) = \sum_i[y_i-(b+m x_i)]^2/\sigma_{y_i}^2$:
$\left.\frac{\partial S}{\partial b}\right|_{b_{\mathrm{LS}}} = 0 = -2\sum_i\left[y_i-(b_{\mathrm{LS}}+m_{\mathrm{LS}} x_i)\right]/\sigma_{y_i}^2$,
$\left.\frac{\partial S}{\partial m}\right|_{m_{\mathrm{LS}}} = 0 = -2\sum_i \left[y_i-(b_{\mathrm{LS}}+m_{\mathrm{LS}} x_i)\right]x_i/\sigma_{y_i}^2$.
Since the data depend linearly on the parameters, the least squares solution can be found by a matrix inversion and multiplication, conveniently packed in numpy.linalg:
End of explanation
"""
def straight_line_posterior(x, y, sigmay, m, b):
return np.exp(straight_line_log_posterior(x, y, sigmay, m, b))
# initial m, b
m,b = 2, 0
# step sizes
mstep, bstep = 0.1, 10.
# how many steps?
nsteps = 10000
chain = []
probs = []
naccept = 0
print 'Running MH for', nsteps, 'steps'
# First point:
L_old = straight_line_log_likelihood(x, y, sigmay, m, b)
p_old = straight_line_log_prior(m, b)
prob_old = np.exp(L_old + p_old)
for i in range(nsteps):
# step
mnew = m + np.random.normal() * mstep
bnew = b + np.random.normal() * bstep
# evaluate probabilities
# prob_new = straight_line_posterior(x, y, sigmay, mnew, bnew)
L_new = straight_line_log_likelihood(x, y, sigmay, mnew, bnew)
p_new = straight_line_log_prior(mnew, bnew)
prob_new = np.exp(L_new + p_new)
if (prob_new / prob_old > np.random.uniform()):
# accept
m = mnew
b = bnew
L_old = L_new
p_old = p_new
prob_old = prob_new
naccept += 1
else:
# Stay where we are; m,b stay the same, and we append them
# to the chain below.
pass
chain.append((b,m))
probs.append((L_old,p_old))
print 'Acceptance fraction:', naccept/float(nsteps)
# Pull m and b arrays out of the Markov chain and plot them:
mm = [m for b,m in chain]
bb = [b for b,m in chain]
# Scatterplot of m,b posterior samples
plt.clf()
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
plt.gca().set_aspect((bhi-blo)/(mhi-mlo))
plt.plot(bb, mm, 'b.', alpha=0.1)
plot_mb_setup()
plt.show()
# 1 and 2D marginalised distributions:
import triangle
triangle.corner(chain, labels=['b','m'], range=[(blo,bhi),(mlo,mhi)],quantiles=[0.16,0.5,0.84],
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=True, fill_contours=True, levels=[0.68, 0.95], color='b', bins=40, smooth=1.0);
plt.show()
# Traces, for convergence inspection:
plt.clf()
plt.subplot(2,1,1)
plt.plot(mm, 'k-')
plt.ylim(mlo,mhi)
plt.ylabel('m')
plt.subplot(2,1,2)
plt.plot(bb, 'k-')
plt.ylabel('b')
plt.ylim(blo,bhi)
plt.show()
"""
Explanation: Similarly, one can derive expressions for the uncertainty of the least squares fit parameters, c.f. Ivezic Ch. 8.2. These expressions can be thought of as propagating the data error into parameter errors (using standard error propagation, i.e. the chain rule).
The linear least squares estimator is a maximum likelihood estimator for linear model parameters, based on the assumption of Gaussian distributed data.
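As a concrete check of the error-propagation statement above (my addition, reusing the weighted design matrix A from the least-squares cell): for weighted least squares the parameter covariance is $(A^T A)^{-1}$, and its diagonal gives the variances of $b_{\mathrm{LS}}$ and $m_{\mathrm{LS}}$.
```python
# Propagate the data errors sigmay into uncertainties on (b, m).
covariance = np.linalg.inv(A.T.dot(A))
b_err, m_err = np.sqrt(np.diag(covariance))
print 'sigma_b =', b_err, ' sigma_m =', m_err
```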
Short Cut #2: Laplace Approximation
The idea (MacKay Ch. 12): approximate the posterior PDF by a Gaussian centered on its peak. Expanding the log posterior to second order around the maximum a posteriori (MAP) point, the inverse Hessian of the negative log posterior gives the approximate parameter covariance.
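A minimal sketch of this (my addition, reusing log_posterior, mgrid and bgrid from the grid cell above; it assumes the peak is not on the grid edge):
```python
# Locate the MAP point on the grid and estimate the Hessian of the negative
# log posterior by finite differences; its inverse approximates the parameter
# covariance of the Laplace (Gaussian) approximation.
i, j = np.unravel_index(np.argmax(log_posterior), log_posterior.shape)
m_map, b_map = mgrid[i], bgrid[j]
dm, db = mgrid[1] - mgrid[0], bgrid[1] - bgrid[0]
H = np.zeros((2, 2))
H[0, 0] = -(log_posterior[i+1, j] - 2*log_posterior[i, j] + log_posterior[i-1, j]) / dm**2
H[1, 1] = -(log_posterior[i, j+1] - 2*log_posterior[i, j] + log_posterior[i, j-1]) / db**2
H[0, 1] = H[1, 0] = -(log_posterior[i+1, j+1] - log_posterior[i+1, j-1]
                      - log_posterior[i-1, j+1] + log_posterior[i-1, j-1]) / (4*dm*db)
cov_laplace = np.linalg.inv(H)
print 'Laplace: m =', m_map, '+/-', np.sqrt(cov_laplace[0, 0])
print '         b =', b_map, '+/-', np.sqrt(cov_laplace[1, 1])
```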
Monte Carlo Sampling Methods
In problems with higher-dimensional parameter spaces, we need a more efficient way of approximating the posterior PDF - both when characterizing it in the first place, and then when doing integrals over that PDF (to get the marginalized PDFs for the parameters, or to compress them into single numbers with uncertainties that can be easily reported). In most applications it's sufficient to approximate a PDF with a (relatively) small number of samples drawn from it; MCMC is a procedure for drawing samples from PDFs.
End of explanation
"""
|
kyleabeauchamp/mdtraj
|
examples/rmsd-drift.ipynb
|
lgpl-2.1
|
import mdtraj as md
import mdtraj.testing
crystal_fn = mdtraj.testing.get_fn('native.pdb')
trajectory_fn = mdtraj.testing.get_fn('frame0.xtc')
crystal = md.load(crystal_fn)
trajectory = md.load(trajectory_fn, top=crystal) # load the xtc. the crystal structure defines the topology
trajectory
"""
Explanation: Find two files that are distributed with MDTraj for testing purposes --
we can use them to make our plot
End of explanation
"""
rmsds_to_crystal = md.rmsd(trajectory, crystal, 0)
heavy_atoms = [atom.index for atom in crystal.topology.atoms if atom.element.symbol != 'H']
heavy_rmsds_to_crystal = md.rmsd(trajectory, crystal, 0, atom_indices=heavy_atoms)
from matplotlib.pylab import *
figure()
plot(trajectory.time, rmsds_to_crystal, 'r', label='all atom')
plot(trajectory.time, heavy_rmsds_to_crystal, 'b', label='heavy atom')
legend()
title('RMSDs to crystal')
xlabel('simulation time (ps)')
ylabel('RMSD (nm)')
"""
Explanation: RMSD with exchangeable hydrogen atoms is generally not a good idea
so let's take a look at just the heavy atoms
End of explanation
"""
|
tabakg/potapov_interpolation
|
commensurate_roots.ipynb
|
gpl-3.0
|
import numpy as np
import numpy.linalg as la
import Potapov_Code.Time_Delay_Network as networks
import matplotlib.pyplot as plt
%pylab inline
import sympy as sp
from sympy import init_printing
init_printing()
from fractions import gcd
## To identify commensurate delays, we must use a decimal and NOT a binary representation of the delays,
## as done by standard python floats/longs
from decimal import Decimal
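## Quick illustration (added example, not in the original notebook):
## Decimal(0.1) exposes the binary rounding of the float, while Decimal('0.1')
## keeps the exact decimal value, which is what the gcd computation needs.
print repr(Decimal(0.1)), repr(Decimal('0.1'))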
X = networks.Example3()
X.delays
"""
Explanation: Finding commensurate roots
In many applications the poles of the transfer function can be found by solving a polynomial: if the denominator is a polynomial in $x = \exp(-zT_{\text{gcd}})$ for some delay $T_{\text{gcd}}$, then this can be done. In the more general case, when the delays $T_1,...,T_n$ are not commensurate, a different method is needed, such as the root-finding algorithm implemented in Roots.py. Here we show how, starting from commensurate delays, we can find the roots in a desired frequency range.
End of explanation
"""
def gcd_lst(lst):
l = len(lst)
if l == 0:
return None
elif l == 1:
return lst[0]
elif l == 2:
return gcd(lst[0],lst[1])
else:
return gcd(lst[0],gcd_lst(lst[1:]))
"""
Explanation: get gcd for a list of integers
End of explanation
"""
gcd_lst([1,2,3,4,5,6])
gcd_lst([2,4,8,6])
gcd_lst([3,27,300,6])
"""
Explanation: testing
End of explanation
"""
def find_commensurate(delays):
mult = min([d.as_tuple().exponent for d in delays])
power = 10**-mult
delays = map(lambda x: x*power,delays)
int_gcd = gcd_lst(delays)
return int_gcd/power
"""
Explanation: find the shortest commensurate delay
End of explanation
"""
delays = [Decimal('1.14'),Decimal('532.23423'),Decimal('0.06'),Decimal('0.1')]
gcd_delays = find_commensurate(delays)
gcd_delays
map(lambda z: z / gcd_delays, delays)
## In general, converting floats to Decimal should be avoided because floats are stored in binary form.
## converting to a string first will round the number.
## Good practice would be to specify the delays in Decimals, then convert to floats later.
gcd_delays = find_commensurate(map(lambda x: Decimal(str(x)),X.delays))
gcd_delays
Decimal_delays =map(lambda x: Decimal(str(x)),X.delays)
Decimal_delays
E = sp.Matrix(np.zeros_like(X.M1))
x = sp.symbols('x')
for i,delay in enumerate(Decimal_delays):
E[i,i] = x**int(delay / gcd_delays)
E
M1 = sp.Matrix(X.M1)
M1
E - M1
expr = sp.apart((E - M1).det())
a = sp.Poly(expr, x)
a
poly_coeffs = a.all_coeffs()
poly_coeffs
roots = np.roots(poly_coeffs)
plt.figure(figsize =(6,6))
fig = plt.scatter(roots.real,roots.imag)
plt.title('Roots of the above polynomial',{'fontsize':24})
plt.xlabel('Real part',{'fontsize':24})
plt.ylabel('Imaginary part',{'fontsize':24})
"""
Explanation: testing
End of explanation
"""
zs = np.asarray(map(lambda r: np.log(r) / - float(gcd_delays), roots))
plt.figure(figsize =(6,6))
fig = plt.scatter(zs.real,zs.imag)
plt.title('Roots of the above polynomial',{'fontsize':24})
plt.xlabel('Real part',{'fontsize':24})
plt.ylabel('Imaginary part',{'fontsize':24})
def find_instances_in_range(z,freq_range):
T = 2.*np.pi / float(gcd_delays)  ## period of the root comb in the imaginary direction
if z.imag >= freq_range[0] and z.imag <= freq_range[1]:
lst_in_range = [z]
num_below = int((z.imag - freq_range[0])/T )
num_above = int((freq_range[1] - z.imag)/T )
lst_in_range += [z + 1j * disp for disp in (np.asarray(range(num_above))+1) * T]
lst_in_range += [z - 1j * disp for disp in (np.asarray(range(num_below))+1) * T]
return lst_in_range
elif z.imag > freq_range[1]:
min_dist = (int((z.imag - freq_range[1])/T)+1) * T
max_dist = int((z.imag - freq_range[0]) / T) * T
if min_dist > max_dist:
return []
else:
return find_instances_in_range(z - 1j*min_dist,freq_range)
else:
min_dist = (int((freq_range[0] - z.imag)/T)+1) * T
max_dist = int((freq_range[1] - z.imag)/T) * T
if min_dist > max_dist:
return []
else:
return find_instances_in_range(z + 1j*min_dist,freq_range)
z = zs[40]
z.imag
## test cases...
find_instances_in_range(z,(-3000,-1000))
find_instances_in_range(z,(-3000,1000))
find_instances_in_range(z,(3000,5000))
find_instances_in_range(z,(3000,1000))
def find_roots_in_range(roots,freq_range):
return np.concatenate([find_instances_in_range(r,freq_range) for r in roots])
roots_in_range = find_roots_in_range(zs,(-1000,1000))
plt.figure(figsize =(6,12))
fig = plt.scatter(roots_in_range.real,roots_in_range.imag,label='extended')
plt.title('Roots of the above polynomial',{'fontsize':24})
plt.xlabel('Real part',{'fontsize':24})
plt.ylabel('Imaginary part',{'fontsize':24})
plt.scatter(zs.real,zs.imag,label='original',c='yellow')
plt.legend()
"""
Explanation: Going from $x = exp(-zT_{\text{gcd}})$ to solutions z. (Note the sign convention!)
The roots repeat with a period of $2 \pi / T_{\text{gcd}} \approx 628$.
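A quick check of that periodicity (my addition, using roots and gcd_delays from the cells above):
```python
T_gcd = float(gcd_delays)
z0 = np.log(roots[0]) / -T_gcd
# Shifting z by 2*pi/T_gcd along the imaginary axis maps back to the same x,
# so every polynomial root generates a whole comb of roots in z.
print np.allclose(np.exp(-z0 * T_gcd), np.exp(-(z0 + 2j*np.pi/T_gcd) * T_gcd))
```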
End of explanation
"""
sample_pol_length = [10,50,100,1000,2000,3000]
import time
ts = [time.clock()]
for pol_length in sample_pol_length:
pol = [0]*(pol_length+1)
pol[0] = 1
pol[pol_length] = 5
np.roots(pol)
ts.append(time.clock())
delta_ts = [ts[i+1]-ts[i] for i in range(len(ts)-1)]
plt.plot(sort(sample_pol_length),sort(delta_ts))
plt.loglog()
"""
Explanation: Scaling of finding roots of polynomial using Python
The best we can hope for is $O(n^2)$ because this is a cost of an iteration in the QR algorithm, and ideally the number of iterations would not change much throughout the convergence process.
End of explanation
"""
(np.log(delta_ts[-1])-np.log(delta_ts[0])) / (np.log(sample_pol_length[-1]) - np.log(sample_pol_length[0]))
"""
Explanation: very roughly estimating the slope of the above log-log plot gives the exponent in $O(n^p)$.
End of explanation
"""
## The poles of the transfer function, no multiplicities included
poles = -zs
plt.scatter(poles.real,poles.imag)
"""
Explanation: Extracting Potapov Projectors for Commensurate systems
The slowest part of the Potapov analysis is generating the Potapov projection vectors once the roots of known. The runtime of this procedure is $O(n^2)$ because all of the previously known projectors must be invoked when finding each new projector. This results in slow performance when the number of modes is large (more than 1000 modes or so begins taking a substantial amount of time to process). Is there a way to reduce this overhead when the system is commensurate?
There are two ways to address this issue. The first is to simply note that in the high-Q approximation linear modes don't interact much so that we can just consider a diagonal linear Hamiltonian to begin with. However, this yields no information for the B/C matrices, which still depend on the Potapov vectors. Although we can still get the spatial mode location, it would be nice to actually have an efficient method to compute the Potapov projectors.
When the roots become periodic in the frequency direction of the complex plane, we might anticipate the Potapov vectors corresponding to them would also have the same periodicity (i.e. all poles ${ p_k + i n T | n \in \mathbb{R}}$ have the same Potapov vectors where $T$ is the periodicity for the system and $p_k$ is a pole).
We can test this assertion by using a modified limiting procedure described below.
In our original procedure we evaluated each $v_k$ in the product
\begin{align}
T(z) = \prod_k \left(I - v_k^\dagger v_k + v_k^\dagger v_k \left( \frac{z + \bar p_k}{z - p_k} \right) \right).
\end{align}
This was done by multiplying by $z-p_k$ and taking $z \to p_k$. Now suppose all $v_k$ are the same for some subset of poles (ones related by a period). In this case we can instead multiply by $\prod_n (z_n - p_k - i n T)$ (multiplying by some factor to normalize both sides) and then take the limit $z_n \to p_k^{(n)} \equiv p_k + i n T$ (the limits will commute when $T(z)$ is meromorphic. We can also introduce a notion of taking the limits at the same time for these poles by using the same $\delta-\epsilon$, etc., since they are just translations of each other ):
\begin{align}
\lim_{z_n \to p_k^{(n)}}\left[ T(z) \prod_n (z - p_k - i n T) \right]= \lim_{z_n \to p_k^{(n)}}\prod_n v_k^\dagger v_k (z + \bar p_k - i n T) ( z - p_k - i n T).
\end{align}
If all the $v_k$ vectors are the same on the right hand side (let's also say they are normal), then the right hand side turns into
\begin{align}
v_k^\dagger v_k \lim_{z_n \to p_k^{(n)}}\prod_n (z_n + \bar p_k^{(n)}) ( z - p_k^{(n)})
=
v_k^\dagger v_k \lim_{z \to p_k}\left(\prod_n (z_n + \bar p_k) ( z - p_k)\right)^n.
\end{align}
This suggests we can just use the same $v_k$ vectors as for the particular root and it's periodic copies as long as we arrange the Potapov terms for the root and it's periodic copies adjacent to each other in the expansion.
Another note is that the Potapov expansion can be fast to evaluate because matrix multiplicaiton only needs to be done between different commutative subspaces (i.e. all the periodic copies belong in one commensurate subset).
End of explanation
"""
T = X.T
from Potapov_Code.functions import limit
def Potapov_prod(z,poles,vecs,N):
'''
Original Potapov product.
'''
R = np.asmatrix(np.eye(N))
for pole_i,vec in zip(poles,vecs):
Pi = vec*vec.H
R = R*(np.eye(N) - Pi + Pi * ( z + pole_i.conjugate() )/( z - pole_i) )
return R
def get_Potapov_vecs(T,poles):
'''
Original method to get vectors.
'''
N = T(0).shape[0]
found_vecs = []
for pole in poles:
L = (la.inv(Potapov_prod(pole,poles,found_vecs,N)) *
limit(lambda z: (z-pole)*T(z),pole) ) ## Current bottleneck O(n^2).
[eigvals,eigvecs] = la.eig(L)
index = np.argmax(map(abs,eigvals))
big_vec = np.asmatrix(eigvecs[:,index])
found_vecs.append(big_vec)
return found_vecs
def get_Potapov_vecs_commensurate(T,commensurate_poles):
'''
Variant of get_Potapov_vecs intended for the commensurate poles.
'''
N = T(0).shape[0]
found_vecs = []
for pole in commensurate_poles:
L = (la.inv(Potapov_prod(pole,commensurate_poles,found_vecs,N)) *
limit(lambda z: (z-pole)*T(z),pole) ) ## Current bottleneck O(n^2).
[eigvals,eigvecs] = la.eig(L)
index = np.argmax(map(abs,eigvals))
big_vec = np.asmatrix(eigvecs[:,index])
found_vecs.append(big_vec)
return found_vecs
n_tot = 21
def periodic_poles_prod(z,pole,gcd_delays,n_tot = 21):
'''
Return the normalized product :math:`prod_n (z - pole - i * n * T_{\text{gcd}})`.
'''
n_range = np.asarray(range(n_tot)) - n_tot/2
return np.exp(np.sum([1j*np.angle(z-pole - 1j*n*2*np.pi / gcd_delays) for n in n_range]))
## testing...
periodic_poles_prod(10000j-100,poles[0],float(gcd_delays),n_tot=33)
"""
Explanation: I ended up not using the stuff below. The Potapov vectors are implemented in Potapov_Code.Potapov.py.
End of explanation
"""
|
sueiras/training
|
tensorflow/02-text/20newsgroups_keras_model.ipynb
|
gpl-3.0
|
from __future__ import print_function
import os
import sys
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import Dense, Input, GlobalMaxPooling1D
from keras.layers import Conv1D, MaxPooling1D, Embedding, Flatten
from keras.models import Model
from sklearn.datasets import fetch_20newsgroups
data_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'),
shuffle=True, random_state=42)
data_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'),
shuffle=True, random_state=42)
texts = data_train.data
labels = data_train.target
labels_index = {}
for i,l in enumerate(data_train.target_names):
labels_index[i] = l
labels_index
data_train.data[0]
MAX_SEQUENCE_LENGTH = 1000
MAX_NB_WORDS = 20000
EMBEDDING_DIM = 100
VALIDATION_SPLIT = 0.2
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
labels = to_categorical(np.asarray(labels))
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
# split the data into a training set and a validation set
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
nb_validation_samples = int(VALIDATION_SPLIT * data.shape[0])
x_train = data[:-nb_validation_samples]
y_train = labels[:-nb_validation_samples]
x_val = data[-nb_validation_samples:]
y_val = labels[-nb_validation_samples:]
"""
Explanation: Using pre-trained word embeddings in a Keras model
Based on https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
End of explanation
"""
DATA_DIR = '/home/jorge/data/text'
embeddings_index = {}
f = open(os.path.join(DATA_DIR, 'glove.6B/glove.6B.100d.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
from keras.layers import Embedding
embedding_layer = Embedding(len(word_index) + 1,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
"""
Explanation: Preparing the Embedding layer
End of explanation
"""
from keras.optimizers import SGD
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(35)(x) # global max pooling
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
preds = Dense(len(labels_index), activation='softmax')(x)
model = Model(sequence_input, preds)
model.summary()
sgd_optimizer = SGD(lr=0.01, momentum=0.99, decay=0.001, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=sgd_optimizer,
metrics=['acc'])
# happy learning!
model.fit(x_train, y_train, validation_data=(x_val, y_val),
epochs=50, batch_size=128)
"""
Explanation: Training a 1D convnet
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ja/probability/examples/Factorial_Mixture.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import tensorflow as tf
import numpy as np
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import seaborn as sns
tfd = tfp.distributions
# Use try/except so we can easily re-execute the whole notebook.
try:
tf.enable_eager_execution()
except:
pass
"""
Explanation: Factorial Mixture
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Factorial_Mixture"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Factorial_Mixture.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Factorial_Mixture.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Factorial_Mixture.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
In this notebook we show how to use TensorFlow Probability (TFP) to sample from a factorial mixture of Gaussians defined as $$p(x_1, ..., x_n) = \prod_i p_i(x_i)$$ where: $$\begin{align} p_i &\equiv \frac{1}{K}\sum_{k=1}^K \pi_{ik}\,\text{Normal}\left(\text{loc}=\mu_{ik},\, \text{scale}=\sigma_{ik}\right)\\ 1 &= \sum_{k=1}^K\pi_{ik}, \quad \forall i.\end{align}$$
Each variable $x_i$ is modeled as a mixture of Gaussians, and the joint distribution over all $n$ variables is the product of these densities.
Given a dataset $x^{(1)}, ..., x^{(T)}$, we model each data point $x^{(j)}$ as a factorial mixture of Gaussians: $$p(x^{(j)}) = \prod_i p_i (x_i^{(j)})$$
Factorial mixtures are a simple way to create distributions with a large number of modes using a small number of parameters.
End of explanation
"""
num_vars = 2 # Number of variables (`n` in formula).
var_dim = 1 # Dimensionality of each variable `x[i]`.
num_components = 3 # Number of components for each mixture (`K` in formula).
sigma = 5e-2 # Fixed standard deviation of each component.
# Choose some random (component) modes.
component_mean = tfd.Uniform().sample([num_vars, num_components, var_dim])
factorial_mog = tfd.Independent(
tfd.MixtureSameFamily(
# Assume uniform weight on each component.
mixture_distribution=tfd.Categorical(
logits=tf.zeros([num_vars, num_components])),
components_distribution=tfd.MultivariateNormalDiag(
loc=component_mean, scale_diag=[sigma])),
reinterpreted_batch_ndims=1)
"""
Explanation: Build a factorial mixture of Gaussians with TFP
End of explanation
"""
plt.figure(figsize=(6,5))
# Compute density.
nx = 250 # Number of bins per dimension.
x = np.linspace(-3 * sigma, 1 + 3 * sigma, nx).astype('float32')
vals = tf.reshape(tf.stack(np.meshgrid(x, x), axis=2), (-1, num_vars, var_dim))
probs = factorial_mog.prob(vals).numpy().reshape(nx, nx)
# Display as image.
from matplotlib.colors import ListedColormap
cmap = ListedColormap(sns.color_palette("Blues", 256))
p = plt.pcolor(x, x, probs, cmap=cmap)
ax = plt.axis('tight');
# Plot locations of means.
means_np = component_mean.numpy().squeeze()
for mu_x in means_np[0]:
for mu_y in means_np[1]:
plt.scatter(mu_x, mu_y, s=150, marker='*', c='r', edgecolor='none');
plt.axis(ax);
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.title('Density of factorial mixture of Gaussians');
"""
Explanation: Note the use of tfd.Independent. This "meta-distribution" applies a reduce_sum in log_prob over the rightmost reinterpreted_batch_ndims batch dimensions. In our case, this sums out the variables dimension when computing log_prob, leaving only the batch dimension. Sampling is unaffected.
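A small check of that behavior (my addition, using the factorial_mog defined above):
```python
x = factorial_mog.sample()         # shape [num_vars, var_dim], i.e. one draw of all variables
print(factorial_mog.log_prob(x))   # a scalar: the variables dimension has been summed out
```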
Plot the density
We compute the density on a grid of points, and show the locations of the modes with red stars. Each mode of the factorial mixture corresponds to a pair of modes from the underlying per-variable mixtures of Gaussians. We can see 9 modes in the plot below, but we only needed 6 parameters (3 to specify the locations of the modes in $x_1$, and 3 to specify the locations of the modes in $x_2$). In contrast, a mixture of Gaussians in the 2d space $(x_1, x_2)$ would require 2 * 9 = 18 parameters to specify its 9 modes.
End of explanation
"""
samples = factorial_mog.sample(1000).numpy()
g = sns.jointplot(
x=samples[:, 0, 0],
y=samples[:, 1, 0],
kind="scatter",
marginal_kws=dict(bins=50))
g.set_axis_labels("$x_1$", "$x_2$");
"""
Explanation: Plot samples and estimates of the marginal densities
End of explanation
"""
|
tzk/EDeN_examples
|
classification.ipynb
|
gpl-2.0
|
from eden.util import load_target
y = load_target( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.target' )
"""
Explanation: Classification
Consider a binary classification problem. The data and target files are available online. The domain of the problem is chemoinformatics. Data is about toxicity of 4K small molecules.
The creation of a predictive system happens in 3 steps:
data conversion: transform instances into a suitable graph format. This is done using specialized programs for each (domain, format) pair. In the example we have molecular graphs encoded using the gSpan format and we will therefore use the 'gspan' tool.
data vectorization: transform graphs into sparse vectors. This is done using the EDeN tool. The vectorizer accepts as parameters the (maximal) size of the fragments to be used as features, this is expressed as the pair 'radius' and the 'distance'. See for details: F. Costa, K. De Grave,''Fast Neighborhood Subgraph Pairwise Distance Kernel'', 27th International Conference on Machine Learning (ICML), 2010.
modelling: fit a predicitve system and evaluate its performance. This is done using the tools offered by the scikit library. In the example we will use a Stochastic Gradient Descent linear classifier.
In the following cells there is the code for each step.
Install the library
pip install git+https://github.com/fabriziocosta/EDeN.git --user
1 Conversion
load a target file
End of explanation
"""
from eden.converter.graph.gspan import gspan_to_eden
graphs = gspan_to_eden( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.gspan' )
"""
Explanation: load data and convert it to graphs
End of explanation
"""
from eden.graph import Vectorizer
vectorizer = Vectorizer( r=2,d=5 )
"""
Explanation: 2 Vectorization
setup the vectorizer
End of explanation
"""
%%time
X = vectorizer.transform( graphs )
print 'Instances: %d Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
"""
Explanation: extract features and build data matrix
End of explanation
"""
%%time
#induce a predictive model
from sklearn.linear_model import SGDClassifier
predictor = SGDClassifier(average=True, class_weight='auto', shuffle=True, n_jobs=-1)
from sklearn import cross_validation
scores = cross_validation.cross_val_score(predictor, X, y, cv=10, scoring='roc_auc')
import numpy as np
print('AUC ROC: %.4f +- %.4f' % (np.mean(scores),np.std(scores)))
"""
Explanation: 3 Modelling
Induce a predictor and evaluate its performance
End of explanation
"""
|
ESO-python/ESOPythonTutorials
|
notebooks/nov_2_2015.ipynb
|
bsd-3-clause
|
%%bash
find . -name "*.c" | xargs sed -i bck "/#include<malloc\.h>/d"
%%bash
cat ./isis/abs/allocate.cbck
"""
Explanation: Finding a string in files and removing it
Removing #include<malloc.h> from all C files in a directory tree
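For reference, a rough pure-Python equivalent of the find/xargs/sed command (my addition; it mimics the backup-file behaviour of BSD sed by keeping a copy with a 'bck' suffix):
```python
import os, shutil

for root, dirs, files in os.walk('.'):
    for name in files:
        if not name.endswith('.c'):
            continue
        path = os.path.join(root, name)
        with open(path) as f:
            lines = f.readlines()
        kept = [l for l in lines if '#include<malloc.h>' not in l]
        if len(kept) != len(lines):
            shutil.copy(path, path + 'bck')   # e.g. allocate.c -> allocate.cbck
            with open(path, 'w') as f:
                f.writelines(kept)
```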
End of explanation
"""
from astropy import units as u, constants as const
class SnickersBar(object):
def __init__(self, w, h, l, weight, energy_density=2460 * u.kJ/ (100 * u.g)):
self.w = u.Quantity(w, u.cm)
self.h = u.Quantity(h, u.cm)
self.l = u.Quantity(l, u.cm)
self.weight = u.Quantity(weight, u.g)
self.energy_density = u.Quantity(energy_density, u.kJ / u.g)
def calculate_volume(self):
return self.w * self.h * self.l
@property
def volume(self):
return self.w * self.h * self.l
my_snickers_bar = SnickersBar(0.5, 0.5, 4, 0.01 * u.kg)
my_snickers_bar.w = 1 * u.cm
my_snickers_bar.volume
"""
Explanation: Quick introduction to chocolate bars and classes
End of explanation
"""
%load_ext Cython
import numpy as np
import numexpr as ne
x1, y1 = np.random.random((2, 1000000))
x2, y2 = np.random.random((2, 1000000))
distance = []
def calculate_distances(x1, y1, x2, y2):
distances = []
for i in xrange(len(x1)):
distances.append(np.sqrt((x1[i] - x2[i])**2 + (y1[i] - y2[i])**2))
return distances
def numpy_calculate_distances(x1, y1, x2, y2):
return np.sqrt((x1 - x2)**2 + (y1-y2)**2)
def ne_calculate_distances(x1, y1, x2, y2):
return ne.evaluate('sqrt((x1 - x2)**2 + (y1-y2)**2)')
#%timeit calculate_distances(x1, y1, x2, y2)
%timeit ne_calculate_distances(x1, y1, x2, y2)
%%cython -a
import numpy as np
cimport numpy as np
import cython
cdef extern from "math.h":
cpdef double sqrt(double x)
@cython.boundscheck(False)
def cython_calculate_distances(double [:] x1, double [:] y1, double [:] x2, double [:] y2):
distances = np.empty(len(x1))
cdef double [:] distances_view = distances
cdef int i
cdef int len_x1=len(x1)
for i in xrange(len_x1):
distances_view[i] = sqrt((x1[i] - x2[i])**2 + (y1[i] - y2[i])**2)
return distances
"""
Explanation: Using cython
End of explanation
"""
|
ThunderShiviah/code_guild
|
interactive-coding-challenges/recursion_dynamic/fibonacci/fibonacci_challenge.ipynb
|
mit
|
def fib_recursive(n):
# TODO: Implement me
pass
num_items = 10
cache = [None] * (num_items + 1)
def fib_dynamic(n):
# TODO: Implement me
pass
def fib_iterative(n):
# TODO: Implement me
pass
"""
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement fibonacci recursively, dynamically, and iteratively.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
None
Test Cases
n = 0 -> 0
n = 1 -> 1
n > 1 -> 0, 1, 1, 2, 3, 5, 8, 13, 21, 34...
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
"""
# %load test_fibonacci.py
from nose.tools import assert_equal
class TestFib(object):
def test_fib(self, func):
result = []
for i in range(num_items):
result.append(func(i))
fib_seq = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
assert_equal(result, fib_seq)
print('Success: test_fib')
def main():
test = TestFib()
test.test_fib(fib_recursive)
test.test_fib(fib_dynamic)
test.test_fib(fib_iterative)
if __name__ == '__main__':
main()
"""
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation
"""
|
bgroveben/python3_machine_learning_projects
|
learn_kaggle/machine_learning/cross_validation.ipynb
|
mit
|
import pandas as pd
data = pd.read_csv('input/melbourne_data.csv')
cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt']
X = data[cols_to_use]
y = data.Price
"""
Explanation: Cross-Validation
Machine learning is an iterative process.
You will face choices about predictive variables to use, what types of models to use, what arguments to supply those models, and many other options.
We make these choices in a data-driven way by measuring model quality of various alternatives.
You've already learned to use train_test_split to split the data, so you can measure model quality on the test data.
Cross-validation extends this approach to model scoring (or "model validation.")
Compared to train_test_split, cross-validation gives you a more reliable measure of your model's quality, though it takes longer to run.
The shortcomings of train-test split:
Imagine you have a dataset with 5000 rows.
The train_test_split function has an argument for test_size that you can use to decide how many rows go to the training set and how many go to the test set.
The larger the test set, the more reliable your measures of model quality will be.
At an extreme, you could imagine having only 1 row of data in the test set.
If you compare alternative models, which one makes the best predictions on a single data point will be mostly a matter of luck.
You will typically keep about 20% as a test dataset.
But even with 1000 rows in the test set, there's some random chance in determining model scores.
A model might do well on one set of 1000 rows, even if it would be inaccurate on a different 1000 rows.
The larger the test set, the less randomness (aka "noise") there is in our measure of model quality.
But we can only get a large test set by removing data from our training data, and smaller training datasets mean worse models.
In fact, the ideal modeling decisions on small datasets typically aren't the best modeling decisions on large datasets.
The Cross-Validation Procedure
In cross-validation, we run our modeling process on different subsets of data to get multiple measures of model quality.
For example, we could have 5 folds or experiments.
We divide the data into 5 parts, each being 20% of the full dataset.
The first fold is used as a holdout set, and the remaining parts are used as training data.
This gives us a measure of model quality based on a 20% holdout set, much like what we got from the simple train-test split.
The second experiment (aka fold) uses everything except the second fold for training the model.
This also gives us a second estimate of the model's performance.
The process is repeated, as shown below, using every fold once in turn as the holdout set, so that 100% of the data is used as a holdout at some point.
Returning to our example above from the train-test split, if we have 5000 rows of data, cross validation allows us to measure model quality based on all 5000.
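To make the fold bookkeeping concrete, here is a small illustration (my addition) using scikit-learn's KFold directly on a toy set of row indices:
```python
from sklearn.model_selection import KFold
import numpy as np

rows = np.arange(10)
for fold, (train_idx, holdout_idx) in enumerate(KFold(n_splits=5).split(rows)):
    print('Fold %d: train on rows %s, hold out rows %s' % (fold + 1, train_idx, holdout_idx))
```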
Trade-offs between train-test split and cross-validation:
Cross-validation gives a more accurate measure of model quality, which is especially important if you are making a lot of modeling decisions.
However, it can take more time to run, because it estimates models once for each fold.
So it is doing more total work.
Given these tradeoffs, when should you use each approach?
On small datasets, the extra computational burden of running cross-validation isn't a big deal.
These are also the problems where model quality scores would be least reliable with train-test split.
So, if your dataset is smaller, you should run cross-validation.
For the same reasons, a simple train-test split is sufficient for larger datasets.
It will run faster, and you may have enough data that there's little need to re-use some of it for holdout.
There's no simple threshold for what constitutes a large vs small dataset.
If your model takes a couple of minutes or less to run, it's probably worth switching to cross-validation.
If your model takes much longer to run, cross-validation may slow down your workflow more than it's worth.
Alternatively, you can run cross-validation and see if the scores for each experiment seem close.
If each experiment gives the same results, train-test split is probably sufficient.
The Example, Already!
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Imputer
my_pipeline = make_pipeline(Imputer(), RandomForestRegressor())
my_pipeline
"""
Explanation: This is where pipelines come in handy, because doing cross-validation without them is much more challenging.
End of explanation
"""
dir(my_pipeline)
"""
Explanation: For those curious about the pipeline object's attributes:
End of explanation
"""
from sklearn.model_selection import cross_val_score
scores = cross_val_score(my_pipeline, X, y, scoring='neg_mean_absolute_error')
scores
"""
Explanation: On to the cross-validation scores.
End of explanation
"""
mean_absolute_error = (-1 * scores.mean())
mean_absolute_error
"""
Explanation: What do those numbers above tell you?
You may notice that we specified an argument for scoring.
This specifies what measure of model quality to report.
The docs for scikit-learn show a list of options.
It is a little surprising that we specify negative mean absolute error in this case.
Scikit-learn has a convention where all metrics are defined so a high number is better.
Using negatives here allows them to be consistent with that convention, though negative MAE is almost unheard of elsewhere.
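As a quick aside (my addition), the same call also accepts an explicit number of folds and any other metric from that scoring list, for example:
```python
scores_r2 = cross_val_score(my_pipeline, X, y, cv=5, scoring='r2')
scores_r2
```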
You typically want a single measure of model quality to compare between models.
So we take the average across experiments.
End of explanation
"""
|