| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (string, 15 classes) | content (string, 335-154k chars) |
|---|---|---|---|
atulsingh0/MachineLearning
|
MasteringML_wSkLearn/05_Decision_Trees.ipynb
|
gpl-3.0
|
# import
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
df = pd.read_csv("data/ad.data", header=None)
explanatory_variable_columns = set(df.columns.values)
response_variable_column = df[len(df.columns.values)-1]
# The last column describes the targets
explanatory_variable_columns.remove(len(df.columns.values)-1)
y = [1 if e == 'ad.' else 0 for e in response_variable_column]
X = df[list(explanatory_variable_columns)]
#X.replace(to_replace=' *\?', value=-1, regex=True, inplace=True)
X = X.replace(['?'], [-1])  # assign the result: DataFrame.replace does not modify in place by default
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline = Pipeline([
('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
'clf__max_depth': (150, 155, 160),
'clf__min_samples_split': (1, 2, 3),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
#grid_search.fit(X_train, y_train)
print( 'Best score: %0.3f' % grid_search.best_score_)
print( 'Best parameters set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print( '\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print ('Accuracy:', accuracy_score(y_test, predictions))
print ('Confusion Matrix:', confusion_matrix(y_test, predictions))
print ('Classification Report:', classification_report(y_test, predictions))
"""
Explanation: Nonlinear Classification and Regression with Decision Trees
Decision trees
Decision trees are commonly learned by recursively splitting the set of training
instances into subsets based on the instances' values for the explanatory variables.
In classification tasks, the leaf nodes
of the decision tree represent classes. In regression tasks, the values of the response
variable for the instances contained in a leaf node may be averaged to produce the
estimate for the response variable. After the decision tree has been constructed,
making a prediction for a test instance requires only following the edges until a
leaf node is reached.
Let's create a decision tree using an algorithm called Iterative Dichotomiser 3 (ID3).
Invented by Ross Quinlan, ID3 was one of the first algorithms used to train decision
trees.
But how do we choose the first variable on which to split the data so that the resulting tree is small? We need a way to measure how mixed a set of instances is: entropy.
Measured in bits, entropy quantifies the amount of uncertainty in a variable. Entropy
is given by the following equation, where $n$ is the number of outcomes and $P(x_i)$ is
the probability of the outcome $i$. Common values for $b$ are 2, $e$, and 10. Because the
log of a number less than one is negative, the entire sum is negated to return a
positive value.
$$ H(X) = -\sum_{i=1}^{n} P(x_i)\log_b P(x_i) $$
Information gain
Selecting the test that produces the subsets with the lowest average entropy can produce a suboptimal tree.
Instead, we will measure the reduction in entropy using a metric called information gain.
Calculated with the following equation, information gain is the difference between the entropy of the parent
node, $H(T)$, and the weighted average of the children nodes' entropies, where $T_i$ are the subsets produced by the split:
$$ IG(T) = H(T) - \sum_{i} \frac{|T_i|}{|T|} H(T_i) $$
ID3 is one of the most commonly used algorithms for building decision trees. C4.5 is a modified version of ID3
that can be used with continuous explanatory variables and can accommodate
missing values for features. C4.5 can also prune trees.
Pruning reduces the size of a tree by replacing branches that classify few instances with leaf nodes. CART, the
learning algorithm used by scikit-learn's implementation of decision trees, also
supports pruning.
Gini impurity
Gini impurity measures the proportions of classes in a set. Gini impurity
is given by the following equation, where j is the number of classes, t is the subset
of instances for the node, and P(i|t) is the probability of selecting an element of
class i from the node's subset:
$$ Gini (t) = 1 - \sum_{i=1}^{j} P(i|t)^2 $$
Intuitively, Gini impurity is zero when all of the elements of the set are the same
class, as the probability of selecting an element of that class is equal to one. Like
entropy, Gini impurity is greatest when each class has an equal probability of being
selected. The maximum value of Gini impurity depends on the number of possible
classes, and it is given by the following equation:
$$ Gini_{max} = 1 - \frac{1}{n} $$
End of explanation
"""
pipeline = Pipeline([
('clf', RandomForestClassifier(criterion='entropy'))
])
parameters = {
'clf__n_estimators': (5, 10, 20, 50),
'clf__max_depth': (50, 150, 250),
'clf__min_samples_split': (1, 2, 3),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
#grid_search.fit(X_train, y_train)
"""
Explanation: Tree ensembles (RandomForestClassifier)
Ensemble learning methods combine a set of models to produce an estimator that
has better predictive performance than its individual components. A random forest
is a collection of decision trees that have been trained on randomly selected subsets
of the training instances and explanatory variables. Random forests usually make
predictions by returning the mode or mean of the predictions of their constituent
trees.
Random forests are less prone to overfitting than decision trees because no single
tree can learn from all of the instances and explanatory variables; no single tree can
memorize all of the noise in the representation
End of explanation
"""
|
steinam/teacher
|
jup_notebooks/data-science-ipython-notebooks-master/matplotlib/04.09-Text-and-Annotation.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
plt.style.use('seaborn-whitegrid')
import numpy as np
import pandas as pd
"""
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
No changes were made to the contents of this notebook from the original.
<!--NAVIGATION-->
< Multiple Subplots | Contents | Customizing Ticks >
Text and Annotation
Creating a good visualization involves guiding the reader so that the figure tells a story.
In some cases, this story can be told in an entirely visual manner, without the need for added text, but in others, small textual cues and labels are necessary.
Perhaps the most basic types of annotations you will use are axes labels and titles, but the options go beyond this.
Let's take a look at some data and how we might visualize and annotate it to help convey interesting information. We'll start by setting up the notebook for plotting and importing the functions we will use:
End of explanation
"""
births = pd.read_csv('data/births.csv')
quartiles = np.percentile(births['births'], [25, 50, 75])
mu, sig = quartiles[1], 0.74 * (quartiles[2] - quartiles[0])
births = births.query('(births > @mu - 5 * @sig) & (births < @mu + 5 * @sig)')
births['day'] = births['day'].astype(int)
births.index = pd.to_datetime(10000 * births.year +
100 * births.month +
births.day, format='%Y%m%d')
births_by_date = births.pivot_table('births',
[births.index.month, births.index.day])
births_by_date.index = [pd.datetime(2012, month, day)
for (month, day) in births_by_date.index]
fig, ax = plt.subplots(figsize=(12, 4))
births_by_date.plot(ax=ax);
"""
Explanation: Example: Effect of Holidays on US Births
Let's return to some data we worked with earler, in "Example: Birthrate Data", where we generated a plot of average births over the course of the calendar year; as already mentioned, that this data can be downloaded at https://raw.githubusercontent.com/jakevdp/data-CDCbirths/master/births.csv.
We'll start with the same cleaning procedure we used there, and plot the results:
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 4))
births_by_date.plot(ax=ax)
# Add labels to the plot
style = dict(size=10, color='gray')
ax.text('2012-1-1', 3950, "New Year's Day", **style)
ax.text('2012-7-4', 4250, "Independence Day", ha='center', **style)
ax.text('2012-9-4', 4850, "Labor Day", ha='center', **style)
ax.text('2012-10-31', 4600, "Halloween", ha='right', **style)
ax.text('2012-11-25', 4450, "Thanksgiving", ha='center', **style)
ax.text('2012-12-25', 3850, "Christmas ", ha='right', **style)
# Label the axes
ax.set(title='USA births by day of year (1969-1988)',
ylabel='average daily births')
# Format the x axis with centered month labels
ax.xaxis.set_major_locator(mpl.dates.MonthLocator())
ax.xaxis.set_minor_locator(mpl.dates.MonthLocator(bymonthday=15))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.xaxis.set_minor_formatter(mpl.dates.DateFormatter('%h'));
"""
Explanation: When we're communicating data like this, it is often useful to annotate certain features of the plot to draw the reader's attention.
This can be done manually with the plt.text/ax.text command, which will place text at a particular x/y value:
End of explanation
"""
fig, ax = plt.subplots(facecolor='lightgray')
ax.axis([0, 10, 0, 10])
# transform=ax.transData is the default, but we'll specify it anyway
ax.text(1, 5, ". Data: (1, 5)", transform=ax.transData)
ax.text(0.5, 0.1, ". Axes: (0.5, 0.1)", transform=ax.transAxes)
ax.text(0.2, 0.2, ". Figure: (0.2, 0.2)", transform=fig.transFigure);
"""
Explanation: The ax.text method takes an x position, a y position, a string, and then optional keywords specifying the color, size, style, alignment, and other properties of the text.
Here we used ha='right' and ha='center', where ha is short for horizontal alignment.
See the docstring of plt.text() and of mpl.text.Text() for more information on available options.
Transforms and Text Position
In the previous example, we have anchored our text annotations to data locations. Sometimes it's preferable to anchor the text to a position on the axes or figure, independent of the data. In Matplotlib, this is done by modifying the transform.
Any graphics display framework needs some scheme for translating between coordinate systems.
For example, a data point at $(x, y) = (1, 1)$ needs to somehow be represented at a certain location on the figure, which in turn needs to be represented in pixels on the screen.
Mathematically, such coordinate transformations are relatively straightforward, and Matplotlib has a well-developed set of tools that it uses internally to perform them (these tools can be explored in the matplotlib.transforms submodule).
The average user rarely needs to worry about the details of these transforms, but it is helpful knowledge to have when considering the placement of text on a figure. There are three pre-defined transforms that can be useful in this situation:
ax.transData: Transform associated with data coordinates
ax.transAxes: Transform associated with the axes (in units of axes dimensions)
fig.transFigure: Transform associated with the figure (in units of figure dimensions)
Here let's look at an example of drawing text at various locations using these transforms:
End of explanation
"""
ax.set_xlim(0, 2)
ax.set_ylim(-6, 6)
fig
"""
Explanation: Note that by default, the text is aligned above and to the left of the specified coordinates: here the "." at the beginning of each string will approximately mark the given coordinate location.
The transData coordinates give the usual data coordinates associated with the x- and y-axis labels.
The transAxes coordinates give the location from the bottom-left corner of the axes (here the white box), as a fraction of the axes size.
The transFigure coordinates are similar, but specify the position from the bottom-left of the figure (here the gray box), as a fraction of the figure size.
Notice now that if we change the axes limits, it is only the transData coordinates that will be affected, while the others remain stationary:
End of explanation
"""
%matplotlib inline
fig, ax = plt.subplots()
x = np.linspace(0, 20, 1000)
ax.plot(x, np.cos(x))
ax.axis('equal')
ax.annotate('local maximum', xy=(6.28, 1), xytext=(10, 4),
arrowprops=dict(facecolor='black', shrink=0.05))
ax.annotate('local minimum', xy=(5 * np.pi, -1), xytext=(2, -6),
arrowprops=dict(arrowstyle="->",
connectionstyle="angle3,angleA=0,angleB=-90"));
"""
Explanation: This behavior can be seen more clearly by changing the axes limits interactively: if you are executing this code in a notebook, you can make that happen by changing %matplotlib inline to %matplotlib notebook and using each plot's menu to interact with the plot.
Arrows and Annotation
Along with tick marks and text, another useful annotation mark is the simple arrow.
Drawing arrows in Matplotlib is often much harder than you'd bargain for.
While there is a plt.arrow() function available, I wouldn't suggest using it: the arrows it creates are SVG objects that will be subject to the varying aspect ratio of your plots, and the result is rarely what the user intended.
Instead, I'd suggest using the plt.annotate() function.
This function creates some text and an arrow, and the arrows can be very flexibly specified.
Here we'll use annotate with several of its options:
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 4))
births_by_date.plot(ax=ax)
# Add labels to the plot
ax.annotate("New Year's Day", xy=('2012-1-1', 4100), xycoords='data',
xytext=(50, -30), textcoords='offset points',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3,rad=-0.2"))
ax.annotate("Independence Day", xy=('2012-7-4', 4250), xycoords='data',
bbox=dict(boxstyle="round", fc="none", ec="gray"),
xytext=(10, -40), textcoords='offset points', ha='center',
arrowprops=dict(arrowstyle="->"))
ax.annotate('Labor Day', xy=('2012-9-4', 4850), xycoords='data', ha='center',
xytext=(0, -20), textcoords='offset points')
ax.annotate('', xy=('2012-9-1', 4850), xytext=('2012-9-7', 4850),
xycoords='data', textcoords='data',
arrowprops={'arrowstyle': '|-|,widthA=0.2,widthB=0.2', })
ax.annotate('Halloween', xy=('2012-10-31', 4600), xycoords='data',
xytext=(-80, -40), textcoords='offset points',
arrowprops=dict(arrowstyle="fancy",
fc="0.6", ec="none",
connectionstyle="angle3,angleA=0,angleB=-90"))
ax.annotate('Thanksgiving', xy=('2012-11-25', 4500), xycoords='data',
xytext=(-120, -60), textcoords='offset points',
bbox=dict(boxstyle="round4,pad=.5", fc="0.9"),
arrowprops=dict(arrowstyle="->",
connectionstyle="angle,angleA=0,angleB=80,rad=20"))
ax.annotate('Christmas', xy=('2012-12-25', 3850), xycoords='data',
xytext=(-30, 0), textcoords='offset points',
size=13, ha='right', va="center",
bbox=dict(boxstyle="round", alpha=0.1),
arrowprops=dict(arrowstyle="wedge,tail_width=0.5", alpha=0.1));
# Label the axes
ax.set(title='USA births by day of year (1969-1988)',
ylabel='average daily births')
# Format the x axis with centered month labels
ax.xaxis.set_major_locator(mpl.dates.MonthLocator())
ax.xaxis.set_minor_locator(mpl.dates.MonthLocator(bymonthday=15))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.xaxis.set_minor_formatter(mpl.dates.DateFormatter('%h'));
ax.set_ylim(3600, 5400);
"""
Explanation: The arrow style is controlled through the arrowprops dictionary, which has numerous options available.
These options are fairly well-documented in Matplotlib's online documentation, so rather than repeating them here it is probably more useful to quickly show some of the possibilities.
Let's demonstrate several of the possible options using the birthrate plot from before:
End of explanation
"""
|
peterwittek/ipython-notebooks
|
Comparing_DMRG_ED_and_SDP.ipynb
|
gpl-3.0
|
import pyalps
"""
Explanation: Comparing the ground state energies obtained by density matrix renormalization group, exact diagonalization, and an SDP hierarchy
We would like to compare the ground state energy of the following spinless fermionic system [1]:
$H_{\mathrm{free}}=\sum_{<rs>}\left[c_{r}^{\dagger} c_{s}+c_{s}^{\dagger} c_{r}-\gamma(c_{r}^{\dagger} c_{s}^{\dagger}+c_{s}c_{r} )\right]-2\lambda\sum_{r}c_{r}^{\dagger}c_{r},$
where $<rs>$ goes through nearest neighbour pairs in a two-dimensional lattice. The fermionic operators are subject to the following constraints:
$\{c_{r}, c_{s}^{\dagger}\}=\delta_{rs}I_{r},$
$\{c_r^\dagger, c_s^\dagger\}=0,$
$\{c_{r}, c_{s}\}=0.$
Our primary goal is to benchmark the SDP hierarchy of Reference [2]. The baseline methods are density matrix renormalization group (DMRG) and exact diagonalization (ED), both of which are included in Algorithms and Libraries for Physics Simulations (ALPS, [3]). The range of predefined Hamiltonians is limited, so we simplify the equation by setting $\gamma=0$.
Prerequisites
To run this notebook, ALPS, Sympy, Scipy, and SDPA must be installed. A recent version of Ncpol2sdpa is also necessary.
Calculating the ground state energy with DMRG and ED
DMRG and ED are included in ALPS. To start the calculations, we need to import the Python interface:
End of explanation
"""
lattice_range = [2, 3, 4, 5]
parms = [{
'LATTICE' : "open square lattice", # Set up the lattice
'MODEL' : "spinless fermions", # Select the model
'L' : L, # Lattice dimension
't' : -1 , # This and the following
'mu' : 2, # are parameters to the
'U' : 0 , # Hamiltonian.
'V' : 0,
'Nmax' : 2 , # These parameters are
'SWEEPS' : 20, # specific to the DMRG
'MAXSTATES' : 300, # solver.
'NUMBER_EIGENVALUES' : 1,
'MEASURE_ENERGY' : 1
} for L in lattice_range ]
"""
Explanation: For now, we are only interested in relatively small systems; we will try lattice sizes between $2\times 2$ and $5\times 5$. With this, we set the parameters for DMRG and ED:
End of explanation
"""
def extract_ground_state_energies(data):
E0 = []
for Lsets in data:
allE = []
for q in pyalps.flatten(Lsets):
allE.append(q.y[0])
E0.append(allE[0])
return sorted(E0, reverse=True)
"""
Explanation: We will need a helper function to extract the ground state energy from the solutions:
End of explanation
"""
prefix_sparse = 'comparison_sparse'
input_file_sparse = pyalps.writeInputFiles(prefix_sparse, parms[:-1])
res = pyalps.runApplication('sparsediag', input_file_sparse)
sparsediag_data = pyalps.loadEigenstateMeasurements(
pyalps.getResultFiles(prefix=prefix_sparse))
sparsediag_ground_state_energy = extract_ground_state_energies(sparsediag_data)
sparsediag_ground_state_energy.append(0)
"""
Explanation: We invoke the solvers and extract the ground state energies from the solutions. First we use exact diagonalization, which, unfortunately, does not scale beyond a lattice size of $4\times 4$.
End of explanation
"""
prefix_dmrg = 'comparison_dmrg'
input_file_dmrg = pyalps.writeInputFiles(prefix_dmrg, parms)
res = pyalps.runApplication('dmrg',input_file_dmrg)
dmrg_data = pyalps.loadEigenstateMeasurements(
pyalps.getResultFiles(prefix=prefix_dmrg))
dmrg_ground_state_energy = extract_ground_state_energies(dmrg_data)
"""
Explanation: DMRG scales to all the lattice sizes we want:
End of explanation
"""
from sympy.physics.quantum.dagger import Dagger
from ncpol2sdpa import SdpRelaxation, generate_operators, \
fermionic_constraints, get_neighbors
"""
Explanation: Calculating the ground state energy with SDP
The ground state energy problem can be rephrased as a polynomial optimization problem over noncommuting variables. We use Ncpol2sdpa to translate this optimization problem to a sparse SDP relaxation [4]. The relaxation is solved with SDPA, a high-performance SDP solver that deals with sparse problems efficiently [5]. First we need to import a few more functions:
End of explanation
"""
level = 1
gam, lam = 0, 1
"""
Explanation: We set the additional parameters for this formulation, including the order of the relaxation:
End of explanation
"""
sdp_ground_state_energy = []
for lattice_dimension in lattice_range:
n_vars = lattice_dimension * lattice_dimension
C = generate_operators('C%s' % (lattice_dimension), n_vars)
hamiltonian = 0
for r in range(n_vars):
hamiltonian -= 2*lam*Dagger(C[r])*C[r]
for s in get_neighbors(r, lattice_dimension):
hamiltonian += Dagger(C[r])*C[s] + Dagger(C[s])*C[r]
hamiltonian -= gam*(Dagger(C[r])*Dagger(C[s]) + C[s]*C[r])
substitutions = fermionic_constraints(C)
sdpRelaxation = SdpRelaxation(C)
sdpRelaxation.get_relaxation(level, objective=hamiltonian, substitutions=substitutions)
sdpRelaxation.solve()
sdp_ground_state_energy.append(sdpRelaxation.primal)
"""
Explanation: Then we iterate over the lattice range, defining a new Hamiltonian and new constraints in each step:
End of explanation
"""
data = [dmrg_ground_state_energy,\
sparsediag_ground_state_energy,\
sdp_ground_state_energy]
labels = ["DMRG", "ED", "SDP"]
print("{:>4} {:>9} {:>10} {:>10} {:>10}".format("", *lattice_range))
for label, row in zip(labels, data):
    print("{:>4} {:>7.6f} {:>7.6f} {:>7.6f} {:>7.6f}".format(label, *row))
"""
Explanation: Comparison
The level-one relaxation matches the ground state energy given by DMRG and ED.
End of explanation
"""
|
vzg100/Post-Translational-Modification-Prediction
|
old/Phosphorylation Sequence Tests -MLP -dbptm+ELM-VectorAvr.-phos_stripped.ipynb
|
mit
|
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
"""
Explanation: Template for test
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_s_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "S")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_s_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "S")
del x
"""
Explanation: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
N Phosphorylation is also included; however, no benchmarks are available yet.
Training data is from phospho.elm and benchmarks are from dbptm.
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_Y_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "Y")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_Y_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "Y")
del x
"""
Explanation: Y Phosphorylation
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_t_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("mlp_adam")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "T")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_t_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("mlp_adam")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "T")
del x
"""
Explanation: T Phosphorylation
End of explanation
"""
|
Timmy-Oh/Generating-Visual-Explanation
|
XAI.ipynb
|
mit
|
import tensorflow as tf
from PIL import Image
import numpy as np
from scipy.misc import imread, imresize
from imagenet_classes import class_names
import os
"""
Explanation: Import
End of explanation
"""
#File Path
# filepath_input = "./data/run/" #input csv file path
filepath_ckpt = "./ckpt/model_weight.ckpt" #weight saver check point file path
filepath_pred = "./output/predicted.csv" #predicted value file path
filename_queue_description = tf.train.string_input_producer(['./data/description/raw_data.csv'])
num_record = 50
"""
Explanation: file_path
End of explanation
"""
label_vec_size = 5
input_vec_size = 27
batch_size = 50
state_size_1 = 100
state_size_2 = 4096 + state_size_1
hidden = 15
learning_rate = 0.01
"""
Explanation: LSTM - Hyper Params
End of explanation
"""
class vgg16:
def __init__(self, imgs, weights=None, sess=None):
self.imgs = imgs
self.convlayers()
self.fc_layers()
self.probs = tf.nn.softmax(self.fc3l)
if weights is not None and sess is not None:
self.load_weights(weights, sess)
def convlayers(self):
self.parameters = []
# zero-mean input
with tf.name_scope('preprocess') as scope:
mean = tf.constant([123.68, 116.779, 103.939], dtype=tf.float32, shape=[1, 1, 1, 3], name='img_mean')
images = self.imgs-mean
# conv1_1
with tf.name_scope('conv1_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv1_1 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# conv1_2
with tf.name_scope('conv1_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.conv1_1, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv1_2 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# pool1
self.pool1 = tf.nn.max_pool(self.conv1_2,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME',
name='pool1')
# conv2_1
with tf.name_scope('conv2_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.pool1, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv2_1 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# conv2_2
with tf.name_scope('conv2_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.conv2_1, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv2_2 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# pool2
self.pool2 = tf.nn.max_pool(self.conv2_2,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME',
name='pool2')
# conv3_1
with tf.name_scope('conv3_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.pool2, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv3_1 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# conv3_2
with tf.name_scope('conv3_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.conv3_1, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv3_2 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# conv3_3
with tf.name_scope('conv3_3') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.conv3_2, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv3_3 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# pool3
self.pool3 = tf.nn.max_pool(self.conv3_3,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME',
name='pool3')
# conv4_1
with tf.name_scope('conv4_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.pool3, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv4_1 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# conv4_2
with tf.name_scope('conv4_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.conv4_1, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv4_2 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# conv4_3
with tf.name_scope('conv4_3') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.conv4_2, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv4_3 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# pool4
self.pool4 = tf.nn.max_pool(self.conv4_3,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME',
name='pool4')
# conv5_1
with tf.name_scope('conv5_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.pool4, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv5_1 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# conv5_2
with tf.name_scope('conv5_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.conv5_1, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv5_2 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# conv5_3
with tf.name_scope('conv5_3') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32,
stddev=1e-1), name='weights')
conv = tf.nn.conv2d(self.conv5_2, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32),
trainable=True, name='biases')
out = tf.nn.bias_add(conv, biases)
self.conv5_3 = tf.nn.relu(out, name=scope)
self.parameters += [kernel, biases]
# pool5
self.pool5 = tf.nn.max_pool(self.conv5_3,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME',
name='pool4')
def fc_layers(self):
# fc1
with tf.name_scope('fc1') as scope:
shape = int(np.prod(self.pool5.get_shape()[1:]))
fc1w = tf.Variable(tf.truncated_normal([shape, 4096],
dtype=tf.float32,
stddev=1e-1), name='weights')
fc1b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),
trainable=True, name='biases')
pool5_flat = tf.reshape(self.pool5, [-1, shape])
fc1l = tf.nn.bias_add(tf.matmul(pool5_flat, fc1w), fc1b)
self.fc1 = tf.nn.relu(fc1l)
self.parameters += [fc1w, fc1b]
# fc2
with tf.name_scope('fc2') as scope:
fc2w = tf.Variable(tf.truncated_normal([4096, 4096],
dtype=tf.float32,
stddev=1e-1), name='weights')
fc2b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),
trainable=True, name='biases')
fc2l = tf.nn.bias_add(tf.matmul(self.fc1, fc2w), fc2b)
self.fc2 = tf.nn.relu(fc2l)
self.parameters += [fc2w, fc2b]
# fc3
with tf.name_scope('fc3') as scope:
fc3w = tf.Variable(tf.truncated_normal([4096, 1000],
dtype=tf.float32,
stddev=1e-1), name='weights')
fc3b = tf.Variable(tf.constant(1.0, shape=[1000], dtype=tf.float32),
trainable=True, name='biases')
self.fc3l = tf.nn.bias_add(tf.matmul(self.fc2, fc3w), fc3b)
self.parameters += [fc3w, fc3b]
def load_weights(self, weight_file, sess):
weights = np.load(weight_file)
keys = sorted(weights.keys())
for i, k in enumerate(keys):
print(i, k, np.shape(weights[k]))
sess.run(self.parameters[i].assign(weights[k]))
"""
Explanation: vgg16
End of explanation
"""
with tf.Session() as sess_vgg:
imgs = tf.placeholder(tf.float32, [None, 200, 200, 3])
vgg = vgg16(imgs, 'vgg16_weights.npz', sess_vgg)
img_files = ['./data/img/cropped/' + i for i in os.listdir('./data/img/cropped')]
imgs = [imread(file, mode='RGB') for file in img_files]
temps = [sess_vgg.run(vgg.fc1, feed_dict={vgg.imgs: [imgs[i]]})[0] for i in range(50)]
reimgs= np.reshape(a=temps, newshape=[50,-1])
sess_vgg.close()
"""
Explanation: load_vgg16
End of explanation
"""
reader = tf.TextLineReader()
key,value = reader.read(filename_queue_description)
record_defaults =[[-1], [-1], [-1], [-1], [-1], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2]]
lab1, lab2, lab3, lab4, lab5, w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15 = tf.decode_csv(value, record_defaults)
feature_label = tf.stack([lab1, lab2, lab3, lab4, lab5])
feature_word = tf.stack([w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15])
with tf.Session() as sess_data:
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
img_queue = []
for i in range(num_record):
# image = sess.run(images)
label, raw_word = sess_data.run([feature_label, feature_word])
onehot = tf.one_hot(indices=raw_word, depth=27)
if i == 0:
full_input = onehot
full_label = label
else:
full_input = tf.concat([full_input, onehot], 0)
full_label = tf.concat([full_label, label], 0)
# print(sess.run(tf.shape(image)))
# batch = tf.train.batch([image, label], 1)
# print(sess.run(batch))
coord.request_stop()
coord.join(threads)
sess_data.close()
"""
Explanation: File Info
End of explanation
"""
with tf.name_scope('batch') as scope:
# full_label = tf.reshape(full_label, [batch_size, hidden, label_vec_size])
full_input = tf.reshape(full_input, [batch_size, hidden, input_vec_size])
input_batch, label_batch = tf.train.batch([full_input, full_input], batch_size=1)
"""
Explanation: Text Reader
def input_pipeline(filenames, batch_size, num_epochs=None):
filename_queue = tf.train.string_input_producer(filenames, num_epochs=num_epochs, shuffle=False)
images = tf.image.decode_png(value, channels=3, dtype=tf.uint8)
reader = tf.TextLineReader()
key,value = reader.read(filename_queue_description)
record_defaults =[[-1], [-1], [-1], [-1], [-1], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2], [-2]]
lab1, lab2, lab3, lab4, lab5, w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15 = tf.decode_csv(value, record_defaults)
feature_label = tf.stack([lab1, lab2, lab3, lab4, lab5])
feature_word = tf.stack([w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15])
example_batch, label_batch = tf.train.batch([images, feature_label], batch_size=batch_size)
return example_batch, label_batch
with tf.Session() as sess:
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
input_pipeline(filename_queue_description, num_epochs=1, batch_size=10)
coord.request_stop()
coord.join(threads)
sess.close()
Batching
End of explanation
"""
with tf.name_scope('lstm_layer_1') as scope:
with tf.variable_scope('lstm_layer_1'):
rnn_cell_1 = tf.contrib.rnn.BasicLSTMCell(state_size_1, reuse=None)
output_1, _ = tf.contrib.rnn.static_rnn(rnn_cell_1, tf.unstack(full_input, axis=1), dtype=tf.float32)
# output_w_1 = tf.Variable(tf.truncated_normal([hidden, state_size_1, input_vec_size]))
# output_b_1 = tf.Variable(tf.zeros([input_vec_size]))
# pred_temp = tf.matmul(output_1, output_w_1) + output_b_1
with tf.Session() as sess_temp:
print(sess_temp.run(tf.shape(output_1)))
"""
Explanation: LSTM First Layer
End of explanation
"""
input_2 = [tf.concat([out, reimgs], axis=1) for out in output_1]
"""
Explanation: matrix_concat
End of explanation
"""
with tf.name_scope('lstm_layer_2') as scope:
with tf.variable_scope('lstm_layer_2'):
rnn_cell_2 = tf.contrib.rnn.BasicLSTMCell(state_size_2, reuse=None)
output_2, _ = tf.contrib.rnn.static_rnn(rnn_cell_2, tf.unstack(input_2, axis=0), dtype=tf.float32)
output_w_2 = tf.Variable(tf.truncated_normal([hidden, state_size_2, input_vec_size]))
output_b_2 = tf.Variable(tf.zeros([input_vec_size]))
pred = tf.nn.softmax(tf.matmul(output_2, output_w_2) + output_b_2)
with tf.name_scope('loss') as scope:
loss = tf.constant(0, tf.float32)
for i in range(hidden):
loss += tf.losses.softmax_cross_entropy(tf.unstack(full_input, axis=1)[i], tf.unstack(pred, axis=0)[i])
train = tf.train.AdamOptimizer(learning_rate).minimize(loss)
with tf.Session() as sess_train:
sess_train.run(tf.global_variables_initializer())
saver = tf.train.Saver()
save_path = saver.save(sess_train, filepath_ckpt)
for i in range(31):
sess_train.run(train)
if i % 5 == 0:
print("loss : ", sess_train.run(loss))
# print("pred : ", sess.run(pred))
save_path = saver.save(sess_train, filepath_ckpt)
print("= Weigths are saved in " + filepath_ckpt)
sess_train.close()
"""
Explanation: LSTM Second Layer
End of explanation
"""
with tf.Session() as sess_vgg_test:
imgs = tf.placeholder(tf.float32, [None, 200, 200, 3])
vgg = vgg16(imgs, 'vgg16_weights.npz', sess_vgg_test)
test_img_files = ['./data/img/cropped/001.png']
test_imgs = [imread(file, mode='RGB') for file in test_img_files]
# bilinear_test_imgs = [imresize(arr=img,interp='bilinear') for img in test_imgs]
temps = [sess_vgg_test.run(vgg.fc1, feed_dict={vgg.imgs: [img]})[0] for img in test_imgs]
test_reimgs= np.reshape(a=temps, newshape=[1,-1])
sess_vgg_test.close()
start_input = tf.zeros([1,15,27])
with tf.Session() as sess_init_generator:
input_init = sess_init_generator.run(start_input)
sos = [0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
input_init[0][0] = sos
with tf.name_scope('lstm_layer_1') as scope:
with tf.variable_scope('lstm_layer_1'):
rnn_cell_1 = tf.contrib.rnn.BasicLSTMCell(state_size_1, reuse=True)
output_test_1, _ = tf.contrib.rnn.static_rnn(rnn_cell_1, tf.unstack(input_init, axis=1), dtype=tf.float32)
# output_t_1 = tf.contrib.rnn.static_rnn(rnn_cell, tf.unstack(full_input, axis=1), dtype=tf.float32)
# pred = tf.nn.softmax(tf.matmul(output1, output_w[0]) + output_b[0])
input_2 = [tf.concat([out, test_reimgs], axis=1) for out in output_test_1]
with tf.name_scope('lstm_layer_2') as scope:
with tf.variable_scope('lstm_layer_2'):
rnn_cell_2 = tf.contrib.rnn.BasicLSTMCell(state_size_2, reuse=None)
output_2, _ = tf.contrib.rnn.static_rnn(rnn_cell_2, tf.unstack(input_2, axis=0), dtype=tf.float32)
output_w_2 = tf.Variable(tf.truncated_normal([hidden, state_size_2, input_vec_size]))
output_b_2 = tf.Variable(tf.zeros([input_vec_size]))
pred = tf.nn.softmax(tf.matmul(output_2, output_w_2) + output_b_2)
sess_model = tf.Session()
saver = tf.train.Saver(allow_empty=True)
saver.restore(sess_model, filepath_ckpt)
for i in range(hidden):
result = sess_model.run(pred)
result_temp = result[i]
if i == hidden -1:
pass
else:
input_init[0][i+1] = result_temp
"""
Explanation: Test
End of explanation
"""
print(result.shape)
decoded_result = np.argmax(a=result, axis=2)
print(result)
print(decoded_result)
"""
Explanation: Result Check
End of explanation
"""
|
monicathieu/cu-psych-r-tutorial
|
public/tutorials/python/3_descriptives/challenge_key.ipynb
|
mit
|
# load packages we will be using for this lesson
import pandas as pd
"""
Explanation: Descriptive Statistics Data Challenge
Goals of this challenge
Students will test their ability to:
Group and categorize data in Python
Generate descriptive statistics in Python
Using the same dataset as the lesson, complete the following exercises. Make sure to reload the .csv file from the folder, don't use the version we were working on during the tutorial if you still have it open.
End of explanation
"""
df = pd.read_csv("uncapher_2016_repeated_measures_dataset.csv")
df.head()
"""
Explanation: This dataset examines the relationship between multitasking and working memory. See the original paper by Uncapher et al. 2016: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4733435/pdf/nihms712443.pd
First open the data from csv:
End of explanation
"""
df = df[['subjNum','groupStatus','hitRate','faRate','dprime','bis']]
df = df[df["faRate"] < 0.3]
"""
Explanation: Create a new data frame with only variables for subject number, group status, hitRate, faRate and bis. Also, we don't want any rows where the false alarm rate is above .3.
End of explanation
"""
df["faRate"].describe()
"""
Explanation: What is the mean, range, and standard deviation of the false alarm (FA) rate for high and low multitaskers?
End of explanation
"""
df["bisF"] = pd.cut(df["bis"],bins=2,labels=["Low","High"])
"""
Explanation: Now, we will group participants based on their BIS score. The BIS is a personality questionnaire. First, create a new variable divided into high and low BIS scores based on a median split:
End of explanation
"""
df.groupby(["bisF"])[["faRate","hitRate","dprime"]].mean()
"""
Explanation: Now calculate average values for hitRate, faRate and dprime for the two groups.
End of explanation
"""
|
samuelsinayoko/kaggle-housing-prices
|
research/outlier_detection_statsmodels.ipynb
|
mit
|
import numpy as np
import statsmodels.api as sm # For some reason this import is necessary...
import statsmodels.formula.api as smapi
import statsmodels.graphics as smgraph
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Outlier detection
Detect outliers using a linear regression model and statsmodels. Based on a Stack Overflow question.
End of explanation
"""
x = np.arange(30, dtype=float)
# Make some y data with random noise
y = 10 * x + 5.0*np.random.randn(30)
# Add outlier #
y[10] = 180.
y[20] = 130
plt.plot(x, y, 'o')
"""
Explanation: Test data
Here we're just making some fake data. First of all, make a list of x values from 0 to 29. Next, use the x values to generate some y data that is salted with some randomness. Finally, change a couple of values (indices 10 and 20) to values that are clearly outliers from the rest.
End of explanation
"""
# Make fit #
regression = smapi.ols("data ~ x", data=dict(data=y, x=x)).fit()
regression.summary()
"""
Explanation: Regression
Here we're just doing an ordinary least squares method to fit the data. The "data ~ x" is just saying that 'data' (which is the y values) are directly related to 'x' values. This formalism apparently implies that data = m*x + b.
End of explanation
"""
test = regression.outlier_test()
test
print('Bad data points (bonf(p) < 0.05):')
test[test['bonf(p)'] < 0.05]
outliers = test[test['bonf(p)'] < 0.05].index.values
outliers
"""
Explanation: Test for Outliers
Here we're using our regression results to do a test for outliers. In this case, the default is a Bonferroni outlier test. We're only keeping test results where the Bonferroni-corrected p-value (the 'bonf(p)' column) is less than 0.05.
Find outliers
End of explanation
"""
figure = smgraph.regressionplots.plot_fit(regression, 1)
line = smgraph.regressionplots.abline_plot(model_results=regression, ax=figure.axes[0])
plt.plot(outliers, y[outliers], 'xm', label='outliers', ms=14)
plt.legend(loc=0);
"""
Explanation: Figure
End of explanation
"""
import statsmodels.formula.api as smapi
def get_outliers(features, target):
regression = smapi.ols("target ~ features", data=locals()).fit()
test = regression.outlier_test()
outliers = test[test['bonf(p)'] < 0.05]
return list(outliers.index.values)
def test_outliers():
x = np.arange(30, dtype=float)
# Make some y data with random noise
y = 10 * x + 5.0*np.random.randn(30)
# Add outlier
y[10] = 180.
y[20] = 130
outliers = [10, 20]
prediction = get_outliers(features=x, target=y)
assert outliers == prediction
test_outliers()
"""
Explanation: Create a function and test it
End of explanation
"""
|
bayesimpact/bob-emploi
|
data_analysis/notebooks/datasets/imt/employment_type_api.ipynb
|
gpl-3.0
|
import os
from os import path
import matplotlib
import pandas as pd
import seaborn as _
DATA_FOLDER = os.getenv('DATA_FOLDER')
employment_types = pd.read_csv(path.join(DATA_FOLDER, 'imt/employment_type.csv'), dtype={'AREA_CODE': 'str'})
employment_types.head()
"""
Explanation: Author: Marie Laure, marielaure@bayesimpact.org
IMT Employment Type from API
The IMT dataset provides regional statistics about different jobs. Here we are interested in the distribution of employment types. Employment types are categories of contract types (mostly based on contract duration and permanent vs temporary).
Previously, we retrieved IMT data by scraping the IMT website. As an exploratory step, we are interested in the sanity of the API based data and identifying putative additional information provided only by the API.
The dataset can be obtained with the following command, note that it may take some time to download:
docker-compose run --rm data-analysis-prepare make data/imt/employment_type.csv
Data Sanity
Loading and General View
First let's load the csv file:
End of explanation
"""
len(employment_types)
"""
Explanation: Done! So we have access to different area types, contract types for each area and a number and percentage of offers. Documentation defines the number of offers as the number of offers with this contract type for a specific market (a given area and a particular job). Same idea for percentages. Note that data are updated annually.
How big is this dataset (how many rows)?
End of explanation
"""
employment_types.isnull().describe()
"""
Explanation: Not that bad!
Any missing value?
End of explanation
"""
employment_types.CONTRACT_TYPE_NAME.unique()
"""
Explanation: Nope! Good Job Pรดle Emploi!
How many types of contract do we have?
End of explanation
"""
rome_list = employment_types.ROME_PROFESSION_CARD_CODE.unique()
rome_list.size
'L1510' in rome_list
"""
Explanation: The expected types are here. Short-term, long-term, permanent... and even "others".
How many job groups?
End of explanation
"""
employment_types.AREA_TYPE_NAME.unique()
"""
Explanation: Almost... The job group with ROME code L1510 (latest addition to the job groups) is not yet part of this team. Remember that this dataset is updated annually...
How many area types do we have?
End of explanation
"""
employment_types[employment_types.AREA_TYPE_CODE == 'R'].AREA_CODE.unique().size
"""
Explanation: We have four different area types. Good.
Let's see if every region is represented.
End of explanation
"""
employment_types[employment_types.AREA_TYPE_CODE == 'D'].AREA_CODE.unique().size
"""
Explanation: Yes! In France, as of September 2017, there are 13 metropolitan regions and 5 overseas regions.
Same for the departments! Anyone missing??
End of explanation
"""
employment_types[employment_types.AREA_TYPE_CODE == 'D'].AREA_CODE.sort_values().unique()
"""
Explanation: Hmm. We would expect 101.
Let's see which ones are not expected (but truly welcomed)โฆ
End of explanation
"""
len(employment_types.groupby(['AREA_CODE', 'AREA_TYPE_CODE']))
"""
Explanation: The overseas collectivities of Saint-Barthélemy and Saint-Martin are defined here as departments. So far so good.
Ok we have regions, departments, "bassin"... But how many areas in total are there?
End of explanation
"""
employment_types[['NB_OFFERS', 'OFFERS_PERCENT']].describe()
"""
Explanation: OK! So we would expect 527 * 5 * 531 = 1399185 lines. Thus leaving 981944 (~70%) missing lines.
Why are these lines missing? Maybe uninformative rows (e.g. rows with zero offers) are missing (we've seen that before...). Let's get a brief description of the number and percentage of offers.
End of explanation
"""
employment_types[employment_types.NB_OFFERS == 50854]\
[['AREA_NAME', 'CONTRACT_TYPE_NAME', 'NB_OFFERS', 'OFFERS_PERCENT', 'ROME_PROFESSION_CARD_NAME']]
"""
Explanation: First things first, there are no rows with zero offers. The good news is that no percentage of offers is above 100. I don't know if you feel the same, but the maximum number of offers seems crazy to me!
Let's have a closer look at this one.
End of explanation
"""
def sum_percentages(job_contracts):
    # The offer percentages of a given market (area x job group) should add up to ~100%.
    total = job_contracts.OFFERS_PERCENT.sum()
    if total < 99.9 or total > 100.1:
        print('{} {}'.format(job_contracts.ROME_PROFESSION_CARD_NAME.iloc[0], total))
employment_types.groupby(['AREA_CODE', 'AREA_TYPE_CODE', 'ROME_PROFESSION_CARD_CODE']).apply(sum_percentages);
"""
Explanation: Seems legit. Nothing crazy here... Plenty of permanent position offers in child care at the national level.
Maybe we should check that the sums of the percentages are close to 100%?
End of explanation
"""
employment_types.groupby(['AREA_CODE', 'AREA_TYPE_CODE', 'ROME_PROFESSION_CARD_CODE'])\
.size()\
.value_counts(normalize=True)\
.sort_index()\
.plot(kind='bar');
"""
Explanation: Great!! Everything is as expected.
Conclusion
This dataset is super clean.
The most recent job group 'L1510' is not used here.
To be present in the dataset a job has to have at least one offer with a specific employment type in an area. That leads to a lot of missing rows.
Overview and Comparison with Website Data
The scraped data provide the percentage of offers at the department level for a specific job group. So, the main differences are the availability of other area levels and the raw number of offers. We'll see how consistent this is.
An additional explanation for the high number of missing rows could be the fact that employment types are super specific for a market (job group x area intersection).
End of explanation
"""
employment_types[employment_types.CONTRACT_TYPE_CODE == 99]\
.sort_values('NB_OFFERS', ascending = False)\
[['AREA_NAME', 'AREA_TYPE_NAME', 'NB_OFFERS', 'OFFERS_PERCENT', 'ROME_PROFESSION_CARD_NAME']]\
.head(10)
"""
Explanation: Our hypothesis was not that bad. Only 5% of the markets have observations for all 5 employment types. Hopefully, we won't see too much of the "Other" category, which is not very informative.
Out of curiosity, what are the jobs that need "other" contract types?
End of explanation
"""
employment_types.CONTRACT_TYPE_NAME.value_counts(normalize=True)\
.plot(kind='bar');
"""
Explanation: OK, that makes sense... We can find entrepreneurs here. As internships are not in our list of contract types, we can imagine that we would find them in this category.
Let's have a look at how many of each employment type we have.
End of explanation
"""
employment_types[employment_types.OFFERS_PERCENT == 100]\
.NB_OFFERS.plot(kind='box', showfliers=False)
len(employment_types[employment_types.OFFERS_PERCENT == 100])
"""
Explanation: Great, the not-super-useful "Other" category is in the minority. Longer-duration contracts are observed more often than shorter ones. Good news for future applicants!
One of the main differences is that we now have access to the number of observations, not only the percentages.
So it may be interesting to know what fraction of these statistics is based on very few observations. We focus here on the markets that have only 1 employment type observed. Why those? First, because it is easier (we are lazy). But also because these are the ones most prone to be covered by only a few observations (when you have only 1 observation, it represents 100% of the cases... Quick win!).
End of explanation
"""
employment_types\
.groupby(['AREA_TYPE_CODE', 'CONTRACT_TYPE_NAME']).NB_OFFERS.sum()\
.sort_values(ascending=False)\
.to_frame('total_offers')\
.reset_index()\
.pivot(index='CONTRACT_TYPE_NAME', columns='AREA_TYPE_CODE', values='total_offers')
"""
Explanation: Half of the 48090 markets with 1 contract type observed 100% of the time have only 1 offer observed for this contract. That is a lot. Thus, setting a minimum threshold on the number of offers seems like a good idea.
BTW, before dropping those rows, let's sum all of them to see the global numbers for the whole country. First let's check that we have the same totals for each area type:
End of explanation
"""
total_offers = employment_types[employment_types.AREA_TYPE_CODE == 'F']\
.groupby('CONTRACT_TYPE_NAME').NB_OFFERS.sum()\
.sort_values(ascending=False)
total_offers.plot(kind='pie', figsize=(5, 5)).axis('off')
total_offers.div(total_offers.sum()).to_frame('ratio_offers')
"""
Explanation: Perfect, congrats Pรดle emploi, no offers got lost in the count. So now let's plot the distribution as a ratio.
End of explanation
"""
employment_types[employment_types.AREA_TYPE_CODE == 'B'].NB_OFFERS.describe().to_frame()
"""
Explanation: Wow, that's pretty cool: 38% of job offers are for long-term employment, and a majority (64%) are for more than 3 months. That's good news for jobseekers.
OK, back to more precise area types… What is the distribution of the number of offers at the bassin, department and region levels?
Bassins:
End of explanation
"""
employment_types[employment_types.AREA_TYPE_CODE == 'D'].NB_OFFERS.describe().to_frame()
"""
Explanation: Departments:
End of explanation
"""
employment_types[employment_types.AREA_TYPE_CODE == 'R'].NB_OFFERS.describe().to_frame()
"""
Explanation: Regions:
End of explanation
"""
offers_sum = employment_types.groupby(['AREA_CODE', 'AREA_TYPE_CODE', 'ROME_PROFESSION_CARD_CODE'])\
.NB_OFFERS.sum()\
.to_frame('total_offers')\
.reset_index()
offers_sum[offers_sum.AREA_TYPE_CODE == 'R'].describe()
"""
Explanation: Overall, there aren't that many observations for each contract type x area x job group. At the department and bassin levels, most of the jobs have fewer than 20 offers in the area. Thus let's stay cautious when considering those data.
Let's not focus on a specific contract type and look at the number of offers for each job group.
First the regions:
End of explanation
"""
offers_sum[offers_sum.AREA_TYPE_CODE == 'D'].describe()
"""
Explanation: Departments:
End of explanation
"""
offers_sum[offers_sum.AREA_TYPE_CODE == 'B'].describe()
"""
Explanation: And, Bassins:
End of explanation
"""
department_offers_sum = offers_sum[offers_sum.AREA_TYPE_CODE == 'D']
department_offers_sum.sort_values('total_offers', ascending=False)\
.reset_index(drop=True).reset_index().drop_duplicates('total_offers', keep='last')\
.sort_values('total_offers')\
.set_index('total_offers')['index'].div(len(department_offers_sum))\
.plot(xlim=(0, 70));
"""
Explanation: At the department level, most of the job groups have between 4 and 55 total offers (with information on contract types). This is a little more than twice what we saw when considering each contract type individually. Still, it is not huge.
Let's investigate how much data we would lose by using a threshold on the total number of offers. Let's focus on departments, as that is the granularity level we are actually using in Bob.
End of explanation
"""
job_contracts = employment_types\
.sort_values('CONTRACT_TYPE_NAME')\
.groupby(['AREA_CODE', 'AREA_TYPE_CODE', 'ROME_PROFESSION_CARD_CODE'])\
.CONTRACT_TYPE_NAME.apply(lambda t: ', '.join(t))
job_contracts.value_counts().div(len(job_contracts)).to_frame().head()
"""
Explanation: We'll lose almost 25% of the data with a threshold at 5, but we'll still have 50% of the data for departments with a threshold at 15.
We've seen that, most of the time, employers can propose more than one contract type for a market.
Can we find out whether some contract types are more often used alone or in combination (and which combinations)?
End of explanation
"""
employment_types[(employment_types.AREA_CODE=='06') & (employment_types.ROME_PROFESSION_CARD_CODE == 'F1402')]\
[['AREA_NAME', 'CONTRACT_TYPE_NAME', 'NB_OFFERS', 'OFFERS_PERCENT', 'ROME_PROFESSION_CARD_NAME']]
"""
Explanation: For 17% of the job groups, employers propose only the classical contract types. Offering only a long-term contract (CDI) is also quite common (12%). This is good news for Bob users, as they mostly look for these types of contracts.
However, previous work on how this data could trigger specific recommendations suggests that users would also benefit from looking for long CDDs. As seen on the bar plot above, long CDDs have been proposed almost 28% of the time.
Let's now compare the scraped data with the API-retrieved data. What about 'Extraction solide' in the Alpes-Maritimes department?
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/how_google_does_ml/bigquery_ml/labs/intro_bqml.ipynb
|
apache-2.0
|
import matplotlib.pyplot as plt
"""
Explanation: Introduction to BigQuery ML - Predict Birth Weight
Learning Objectives
Use BigQuery to explore the natality dataset
Create a regression (linear regression) model in BQML
Evaluate the performance of your machine learning model
Make predictions with a trained BQML model
Introduction
In this lab, you will be using the US Centers for Disease Control and Prevention's (CDC) natality data to build a model to predict baby birth weights based on a handful of features known at pregnancy. Because we're predicting a continuous value, this is a regression problem, and for that, we'll use the linear regression model built into BQML.
End of explanation
"""
PROJECT = '<YOUR PROJECT>' #TODO Replace with your GCP PROJECT
"""
Explanation: Set up the notebook environment
VERY IMPORTANT: In the cell below you must replace the text <YOUR PROJECT> with your GCP project id as provided during the setup of your environment. Please leave any surrounding single quotes in place.
End of explanation
"""
%%bigquery
SELECT
*
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 10
"""
Explanation: Exploring the Data
This lab will use natality data and training on features to predict the birth weight.
The CDC's Natality data has details on US births from 1969 to 2008 and is available in BigQuery as a public data set. More details: https://bigquery.cloud.google.com/table/publicdata:samples.natality?tab=details
Start by looking at the data since 2000 with useful values, those greater than 0.
Note: "%%bigquery" is a magic which allows quick access to BigQuery from within a notebook.
End of explanation
"""
%%bigquery
SELECT
weight_pounds, -- this is the label; because it is continuous, we need to use regression
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 10
"""
Explanation: Define Features
Looking over the data set, there are a few columns of interest that could be leveraged into features for a reasonable prediction of approximate birth weight.
Further, some feature engineering may be accomplished with the BigQuery CAST function -- in BQML, all strings are considered categorical features and all numeric types are considered continuous ones.
The hashmonth is added so that we can repeatably split the data without leakage -- the goal is to have all babies that share a birthday end up either in the training set or in the test set, not spread between them (otherwise, there would be information leakage when it comes to triplets, etc.).
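To make the idea concrete, here is a small illustrative sketch in plain Python (using MD5 rather than BigQuery's FARM_FINGERPRINT, so the bucket values differ, but the principle is the same): hashing the year/month string gives every row from the same month the same bucket, so a simple modulo test yields a repeatable split.
```python
import hashlib

def hashmonth_bucket(year, month, num_buckets=4):
    # Same (year, month) always maps to the same bucket, so the split is repeatable
    key = '{}{}'.format(year, month).encode('utf-8')
    return int(hashlib.md5(key).hexdigest(), 16) % num_buckets

# e.g. rows with bucket < 3 -> training (~75%), bucket == 3 -> evaluation
print(hashmonth_bucket(2005, 7), hashmonth_bucket(2005, 7))  # identical values
```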
End of explanation
"""
%%bash
bq --location=US mk -d demo
"""
Explanation: Train Model
With the relevant columns chosen to accomplish predictions, it is then possible to create and train the model in BigQuery. First, a dataset will be needed to store the model.
End of explanation
"""
%%bigquery
CREATE or REPLACE MODEL demo.babyweight_model_asis
OPTIONS
(model_type='linear_reg', labels=['weight_pounds'], optimize_strategy='batch_gradient_descent') AS
WITH natality_data AS (
SELECT
weight_pounds,-- this is the label; because it is continuous, we need to use regression
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
)
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
natality_data
WHERE
ABS(MOD(hashmonth, 4)) < 3 -- select 75% of the data as training
"""
Explanation: With the demo dataset ready, it is possible to create and train the linear regression model.
This will take approximately 5 to 7 minutes to run. Feedback from BigQuery will cease in the output cell, and the notebook will leave the "busy" state when complete.
End of explanation
"""
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL demo.babyweight_model_asis);
"""
Explanation: Training Statistics
For all training runs, statistics are captured in the "TRAINING_INFO" table. This table has basic performance statistics for each iteration.
The query below returns the training details.
End of explanation
"""
%%bigquery history
SELECT * FROM ML.TRAINING_INFO(MODEL demo.babyweight_model_asis)
history
plt.plot('iteration', 'loss', data=history,
marker='o', color='orange', linewidth=2)
plt.plot('iteration', 'eval_loss', data=history,
marker='', color='green', linewidth=2, linestyle='dashed')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.legend();
"""
Explanation: Some of these columns are self-explanatory, but what do the non-obvious, BQML-specific columns mean?
training_run - Will be zero for a newly created model. If the model is re-trained using warm_start, this will increment for each re-training.
iteration - Number of the associated training_run, starting with zero for the first iteration.
duration_ms - Indicates how long the iteration took (in ms).
Next, plot the training and evaluation loss to see whether the model has overfit.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ml.PREDICT(MODEL demo.babyweight_model_asis,
(SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
))
LIMIT 100
"""
Explanation: As you can see, the training loss and evaluation loss are essentially identical. There does not appear to be any overfitting.
Make a Prediction with BQML using the Model
With a trained model, it is now possible to make predictions. The only difference from the second query above is the reference to the model. The data has been limited (LIMIT 100) to reduce the amount of data returned.
When the ML.PREDICT function is used, the output prediction column for the model is named predicted_<label_column_name>.
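If you prefer to work with the predictions in Python rather than in a %%bigquery cell, a sketch along these lines (assuming the google-cloud-bigquery client library is installed and PROJECT was set above) pulls the ML.PREDICT output into a DataFrame; note the predicted_weight_pounds column:
```python
from google.cloud import bigquery

client = bigquery.Client(project=PROJECT)
sql = (
    "SELECT predicted_weight_pounds, weight_pounds "
    "FROM ML.PREDICT(MODEL demo.babyweight_model_asis, "
    "  (SELECT weight_pounds, CAST(is_male AS STRING) AS is_male, mother_age, "
    "          CAST(plurality AS STRING) AS plurality, gestation_weeks "
    "   FROM publicdata.samples.natality "
    "   WHERE year > 2000 AND gestation_weeks > 0 AND mother_age > 0 "
    "     AND plurality > 0 AND weight_pounds > 0 LIMIT 100))"
)
predictions = client.query(sql).to_dataframe()
print(predictions.head())
```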
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/all_combinations_of_a_list_of_objects.ipynb
|
mit
|
# Import combinations with replacements from itertools
from itertools import combinations_with_replacement
"""
Explanation: Title: All Combinations For A List Of Objects
Slug: all_combinations_of_a_list_of_objects
Summary: All Combinations For A List Of Objects
Date: 2016-05-01 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Preliminary
End of explanation
"""
# Create a list of objects to combine
list_of_objects = ['warplanes', 'armor', 'infantry']
"""
Explanation: Create a list of objects
End of explanation
"""
# Create an empty list object to hold the results of the loop
combinations = []
# Loop over every possible combination length, from 1 up to the length of list_of_objects
for i in list(range(len(list_of_objects))):
# Finds every combination (with replacement) for each object in the list
combinations.append(list(combinations_with_replacement(list_of_objects, i+1)))
# View the results
combinations
# Flatten the list of lists into just a list
combinations = [i for row in combinations for i in row]
# View the results
combinations
"""
Explanation: Find all combinations (with replacement) for the list
End of explanation
"""
|
jasonding1354/pyDataScienceToolkits_Base
|
Scikit-learn/.ipynb_checkpoints/(6)classification_metrics-checkpoint.ipynb
|
mit
|
# read the data into a Pandas DataFrame
import pandas as pd
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data'
col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label']
pima = pd.read_csv(url, header=None, names=col_names)
# print the first 5 rows of data
pima.head()
"""
Explanation: Overview
The purpose of model evaluation and the general evaluation workflow
The uses and limitations of classification accuracy
How a confusion matrix describes the performance of a classifier
How the metrics derived from the confusion matrix are computed
Adjusting classifier performance by changing the classification threshold
What the ROC curve is useful for
How the Area Under the Curve (AUC) differs from classification accuracy
1. Review
Model evaluation can be used to choose among different model types, tuning parameters, and feature combinations, so we need an evaluation workflow to estimate how well a trained model generalizes to out-of-sample data, and we also need appropriate evaluation metrics to measure model performance.
For the evaluation workflow, K-fold cross-validation was introduced earlier. For evaluation metrics, regression problems can use the Mean Absolute Error, Mean Squared Error, or Root Mean Squared Error, while classification problems can use classification accuracy and the metrics introduced in this article.
2. Classification accuracy
Here we use the Pima Indians Diabetes dataset, which contains health measurements and diabetes status for 768 patients.
End of explanation
"""
# define X and y
feature_cols = ['pregnant', 'insulin', 'bmi', 'age']
X = pima[feature_cols]
y = pima.label
# split X and y into training and testing sets
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# train a logistic regression model on the training set
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
# make class predictions for the testing set
y_pred_class = logreg.predict(X_test)
# calculate accuracy
from sklearn import metrics
print metrics.accuracy_score(y_test, y_pred_class)
"""
Explanation: In the label column of the table above, 1 means the patient has diabetes and 0 means the patient does not.
End of explanation
"""
# examine the class distribution of the testing set (using a Pandas Series method)
y_test.value_counts()
# calculate the percentage of ones
y_test.mean()
# calculate the percentage of zeros
1 - y_test.mean()
# calculate null accuracy(for binary classification problems coded as 0/1)
max(y_test.mean(), 1-y_test.mean())
"""
Explanation: The classification accuracy score is the percentage of all predictions that are correct.
Null accuracy is the accuracy that could be achieved by always predicting the most frequent class.
End of explanation
"""
# calculate null accuracy (for multi-class classification problems)
y_test.value_counts().head(1) / len(y_test)
"""
Explanation: We see that the null accuracy is 68% while the classification accuracy is 69%, which shows that classification accuracy is not a very good metric for this model. One drawback of classification accuracy is that it reveals nothing about the underlying distribution of the test data.
End of explanation
"""
# print the first 25 true and predicted responses
print "True:", y_test.values[0:25]
print "Pred:", y_pred_class[0:25]
"""
Explanation: Compare the true and predicted class response values:
End of explanation
"""
# IMPORTANT: first argument is true values, second argument is predicted values
print metrics.confusion_matrix(y_test, y_pred_class)
"""
Explanation: Comparing the true and predicted values above, we can see that when the true class is 0, the predicted class is almost always 0; but when the true class is 1, the predicted class is often not 1. In other words, the trained model mostly predicts correctly for the majority class and frequently fails on the minority class, and classification accuracy alone does not reveal this problem.
Classification accuracy is easy to understand as a measure of classifier performance, but it tells you neither the underlying distribution of the response values nor the types of errors the classifier makes. The confusion matrix introduced next can reveal this.
3. Confusion matrix
End of explanation
"""
# save confusion matrix and slice into four pieces
confusion = metrics.confusion_matrix(y_test, y_pred_class)
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
print "TP:", TP
print "TN:", TN
print "FP:", FP
print "FN:", FN
"""
Explanation: True Positive (TP): positive instances correctly classified by the classifier
True Negative (TN): negative instances correctly classified by the classifier
False Positive (FP): negative instances incorrectly labeled as positive
False Negative (FN): positive instances incorrectly labeled as negative
End of explanation
"""
print (TP+TN) / float(TP+TN+FN+FP)
print metrics.accuracy_score(y_test, y_pred_class)
"""
Explanation: 4. Evaluation metrics based on the confusion matrix
Classification Accuracy: the proportion of instances the classifier classifies correctly
End of explanation
"""
print (FP+FN) / float(TP+TN+FN+FP)
print 1-metrics.accuracy_score(y_test, y_pred_class)
"""
Explanation: Classification Error (misclassification rate): the proportion of instances the classifier misclassifies
End of explanation
"""
print TP / float(TP+FN)
recall = metrics.recall_score(y_test, y_pred_class)
print metrics.recall_score(y_test, y_pred_class)
"""
Explanation: Consider the class imbalance problem, where the class of interest is rare: the data distribution shows the negative class in the clear majority and the positive class in the minority. For such problems we need other metrics that evaluate how well the classifier correctly identifies positive instances and how well it correctly identifies negative instances.
Sensitivity, also called the true positive rate or recall: the percentage of actual positive instances that are correctly identified
End of explanation
"""
print TN / float(TN+FP)
"""
Explanation: Specificity, also called the true negative rate: the percentage of actual negative instances that are correctly identified
End of explanation
"""
print FP / float(TN+FP)
specificity = TN / float(TN+FP)
print 1 - specificity
"""
Explanation: False Positive Rate: the percentage of actual negative instances that are predicted incorrectly (as positive)
End of explanation
"""
print TP / float(TP+FP)
precision = metrics.precision_score(y_test, y_pred_class)
print precision
"""
Explanation: Precision: a measure of exactness, i.e., the percentage of instances labeled as positive that are actually positive
End of explanation
"""
print (2*precision*recall) / (precision+recall)
print metrics.f1_score(y_test, y_pred_class)
"""
Explanation: F measure (also called the F1 score or F score): combines precision and recall into a single metric
$$F = \frac{2 \cdot precision \cdot recall}{precision + recall}$$
$$F_{\beta} = \frac{(1+\beta^2) \cdot precision \cdot recall}{\beta^2 \cdot precision + recall}$$
The $F$ measure is the harmonic mean of precision and recall; it gives precision and recall equal weight.
The $F_{\beta}$ measure is a weighted measure of precision and recall; it assigns $\beta$ times as much weight to recall as to precision.
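As a quick sketch (the choice of beta here is just for illustration), the F-beta score can be computed directly from the precision and recall obtained above and cross-checked against scikit-learn:
```python
beta = 2.0
f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
print f_beta
print metrics.fbeta_score(y_test, y_pred_class, beta=beta)
```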
End of explanation
"""
# print the first 10 predicted responses
logreg.predict(X_test)[0:10]
y_test.values[0:10]
# print the first 10 predicted probabilities of class membership
logreg.predict_proba(X_test)[0:10, :]
"""
Explanation: Summary
The confusion matrix gives a more complete picture of a classifier's performance, and the various metrics computed from it help guide model selection.
Which metric to use depends on the specific business requirement:
- Spam filter: prioritize precision or specificity, because the cost of a false positive (a legitimate email sent to the spam folder) is higher than the cost of a false negative (a spam email landing in the inbox)
- Fraudulent transaction detector: prioritize sensitivity, because the cost of a false negative (a fraudulent transaction going undetected) is higher than the cost of a false positive (a normal transaction flagged as fraud)
5. Adjusting the classification threshold
End of explanation
"""
# print the first 10 predicted probabilities for class 1
logreg.predict_proba(X_test)[0:10, 1]
"""
Explanation: In the output above, the first column shows the predicted probability of class 0 and the second column shows the predicted probability of class 1.
End of explanation
"""
# store the predicted probabilities for class 1
y_pred_prob = logreg.predict_proba(X_test)[:, 1]
# allow plots to appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# histogram of predicted probabilities
plt.hist(y_pred_prob, bins=8)
plt.xlim(0, 1)
plt.title('Histogram of predicted probabilities')
plt.xlabel('Predicted probability of diabetes')
plt.ylabel('Frequency')
"""
Explanation: We can see that the predictions for class 1 differ a lot from the actual classes, so using 50% as the classification threshold here is clearly not ideal. Let's visualize the predicted probabilities of class 1 with a histogram, and then try setting a new threshold.
End of explanation
"""
# predict diabetes if the predicted probability is greater than 0.3
from sklearn.preprocessing import binarize
y_pred_class = binarize(y_pred_prob, 0.3)[0]
# print the first 10 predicted probabilities
y_pred_prob[0:10]
# print the first 10 predicted classes with the lower threshold
y_pred_class[0:10]
y_test.values[0:10]
"""
Explanation: We find that as much as 45% of the predicted probabilities fall between 20% and 30%, so with 50% as the classification threshold only a small fraction of observations would be predicted as class 1. We can lower the threshold to adjust the classifier's sensitivity and specificity.
End of explanation
"""
# previous confusion matrix (default threshold of 0.5)
print confusion
# new confusion matrix (threshold of 0.3)
print metrics.confusion_matrix(y_test, y_pred_class)
# sensitivity has increased (used to be 0.24)
print 46 / float(46 + 16)
print metrics.recall_score(y_test, y_pred_class)
# specificity has decreased (used to be 0.91)
print 80 / float(80 + 50)
"""
Explanation: Comparing the two sets of results above, the outcome has indeed improved noticeably.
End of explanation
"""
# IMPORTANT: first argument is true values, second argument is predicted probabilities
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for diabetes classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
"""
Explanation: Summary:
- 0.5 is used as the classification threshold by default
- Tuning the threshold changes sensitivity and specificity
- Sensitivity and specificity are a pair of metrics that work against each other
- Threshold tuning should be the last step in improving classification performance; more attention should go to choosing a classifier or building a better one
6. ROC curve and AUC
The ROC (receiver operating characteristic) curve is a composite indicator reflecting sensitivity and specificity as a continuous variable is varied. It graphically reveals the relationship between sensitivity and specificity: by setting many different thresholds on the continuous variable, a series of sensitivity and specificity values is computed.
The ROC curve is drawn over a series of different binary classification thresholds (cut-off values), with the true positive rate (i.e., sensitivity; TPR) on the y-axis and the false positive rate (1 - specificity; FPR) on the x-axis.
ROC captures the trade-off between the proportion of positive instances the model identifies correctly and the proportion of negative instances the model incorrectly identifies as positive. Increases in TPR come at the cost of increases in FPR. The area under the ROC curve is a measure of model accuracy.
End of explanation
"""
# define a function that accepts a threshold and prints sensitivity and specificity
def evaluate_threshold(threshold):
print 'Sensitivity:', tpr[thresholds > threshold][-1]
print 'Specificity:', 1 - fpr[thresholds > threshold][-1]
evaluate_threshold(0.5)
evaluate_threshold(0.3)
"""
Explanation: Each point on the ROC curve corresponds to a threshold; for a given classifier, each threshold yields one TPR and one FPR.
For example, at the largest threshold, TP = FP = 0, which corresponds to the origin; at the smallest threshold, TN = FN = 0, which corresponds to the upper-right point (1, 1).
As described above, increases in TPR come at the cost of increases in FPR, so the ROC curve can help us choose a threshold that balances sensitivity and specificity. The ROC curve itself does not show which threshold corresponds to each point, so we use the function below to look that up.
End of explanation
"""
# IMPORTANT: first argument is true values, second argument is predicted probabilities
print metrics.roc_auc_score(y_test, y_pred_prob)
# calculate cross-validated AUC
from sklearn.cross_validation import cross_val_score
cross_val_score(logreg, X, y, cv=10, scoring='roc_auc').mean()
"""
Explanation: AUC (Area Under the Curve) is defined as the area under the ROC curve, which can also be viewed as the proportion of the unit square that lies under the curve; clearly this value cannot exceed 1. And since the ROC curve generally lies above the line y = x, AUC typically ranges between 0.5 and 1.
A classifier with a larger AUC performs better, so AUC is a good measure of classifier performance. Unlike classification accuracy, it remains a good measure even when the class proportions are very imbalanced. In fraud detection, for example, fraudulent cases make up only a small fraction of the data, so classification accuracy is no longer a good measure and AUC can be used instead.
End of explanation
"""
|
datamicroscopes/release
|
examples/normal-inverse-chisquare.ipynb
|
bsd-3-clause
|
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_context('talk')
sns.set_style('darkgrid')
lynx = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/datasets/lynx.csv',
index_col=0)
lynx = lynx.set_index('time')
lynx.head()
lynx.plot(legend=False)
plt.xlabel('Year')
plt.title('Annual Canadian Lynx Trappings 1821-1934')
plt.ylabel('Lynx')
"""
Explanation: Univariate Data with the Normal Inverse Chi-Square Distribution
One of the simplest examples of data is univariate data
Let's consider a timeseries example:
The Annual Canadian Lynx Trappings dataset, as described by Campbell and Walker (1977), contains the number of lynx trapped near the MacKenzie River in the Northwest Territories of Canada between 1821 and 1934.
End of explanation
"""
sns.kdeplot(lynx['lynx'])
plt.title('Kernel Density Estimate of Annual Lynx Trapping')
plt.ylabel('Probability')
plt.xlabel('Number of Lynx')
"""
Explanation: Let's plot the kernel density estimate of annual lynx trapping
End of explanation
"""
ti = sns.load_dataset('titanic')
ti.head()
"""
Explanation: Our plot suggests there could be three modes in the Lynx data.
In modeling this timeseries, we could assume that the number of lynx trapped in a given year falls into one of $k$ states, each normally distributed with some unknown mean $\mu_i$ and variance $\sigma^2_i$
In the case of our Lynx data
$$\forall i \in [1,...,k] \hspace{2mm} p(\text{lynx trapped}| \text{state} = i) \sim \mathcal{N}(\mu_i, \sigma^2_i)$$
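To make this assumption concrete, here is a small numpy sketch (the means, standard deviations, and state assignments are made up purely for illustration) that draws one observation per year from such a three-state mixture:
```python
import numpy as np

rng = np.random.RandomState(0)
means = np.array([500., 1500., 4000.])   # hypothetical per-state means
stds = np.array([200., 400., 900.])      # hypothetical per-state standard deviations
states = rng.choice(3, size=114)         # one latent state per year, 1821-1934
samples = rng.normal(means[states], stds[states])
print(samples[:5])
```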
Now let's consider demographics data from the Titanic Dataset
The Titanic Dataset contains information about passengers of the Titanic.
End of explanation
"""
ti[['age','fare']].dropna().corr()
"""
Explanation: Passenger age and fare are both real valued. Are they related? Let's examine the correlation matrix
End of explanation
"""
sns.kdeplot(ti['age'])
plt.title('Kernel Density Estimate of Passenger Age in the Titanic Datset')
sns.kdeplot(ti['fare'])
plt.title('Kernel Density Estimate of Passenger Fare in the Titanic Datset')
"""
Explanation: Since the correlation between the two variables is near zero, we can model these two real-valued columns independently.
Let's plot the kernel density estimate of each variable
End of explanation
"""
ti['logfare'] = np.log(ti['fare'])
ti[['age','logfare']].dropna().corr()
"""
Explanation: Given the long tail in the fare price, we might want to model this variable on a log scale:
End of explanation
"""
sns.kdeplot(ti['logfare'])
plt.title('Kernel Density Estimate of Log Passenger Fare in the Titanic Datset')
"""
Explanation: Again, logfare and age have near zero correlation, so we can again model these two variables independently
Let's see what a kernel density estimate of log fare would look like
End of explanation
"""
from microscopes.models import nich as normal_inverse_chisquared
"""
Explanation: In log space, passenger fare is multimodal, suggesting that we could model this variable with a mixture of normal distributions
If we were to model the passenger list using our Mixture Model, we would have separate likelihoods for logfare and age
$$\forall i \in [1,...,k] \hspace{2mm} p(\text{logfare}|\text{cluster}=i)=\mathcal{N}(\mu_{i,l}, \sigma^2_{i,l})$$
$$\forall i \in [1,...,k] \hspace{2mm} p(\text{age}|\text{cluster}=i)=\mathcal{N}(\mu_{i,a}, \sigma^2_{i,a})$$
Often, real-valued data is assumed to be normally distributed.
To learn the latent variables $\mu_i$ and $\sigma^2_i$, we would use a normal inverse-chi-square likelihood
The normal inverse-chi-square likelihood is the conjugate univariate normal likelihood in datamicroscopes. We also have a multivariate normal likelihood, the normal inverse-Wishart likelihood, optimized for multivariate datasets.
It is important to model univariate normal data with this likelihood, as it achieves superior performance on univariate data.
In both these examples, we found variables that were amenable to being modeled as univariate normal:
Univariate datasets
Datasets containing real valued variables with near zero correlation
To import our univariate normal inverse-chi-squared likelihood, call:
End of explanation
"""
|
LeonardoCastro/Servicio_social
|
Parte 1 - CUDA C/03 - Multiplicacion de vectores.ipynb
|
mit
|
%%writefile Programas/Mul_vectores.cu
#include <stdio.h>
__global__ void multiplicar_vectores(float * device_A, float * device_B, float * device_C, int TAMANIO)
{
    // Fill in the kernel: write the element-wise multiplication of vectors A and B
}
int main( int argc, char * argv[])
{
    const int TAMANIO = 100 ;
    float h_A[TAMANIO] ;
    float h_B[TAMANIO] ;
    float h_C[TAMANIO] ;
    float prueba ;
    for (int i = 0; i < TAMANIO; i++)
    {
        h_A[i] = i ;
        h_B[i] = i + 1 ;
    }
    // Write the memory allocation lines below
    // Write the lines that copy memory from the CPU to the GPU below
    // Fill in the block dimensions
    dim3 dimBlock( ) ;
    dim3 dimGrid( ) ;
    // Fill in the arguments to launch the kernel
    multiplicar_vectores<<< dimGrid, dimBlock >>>( ) ;
    // Copy the memory from the GPU back to the CPU
    // The lines that free the memory are ALREADY WRITTEN below
    cudaFree(d_A) ;
    cudaFree(d_B) ;
    cudaFree(d_C) ;
    // Below is a small piece of code to check whether your result is correct
    prueba = 0. ;
    for (int i = 0; i < TAMANIO; i++)
    {
        prueba += h_C[i] ;
    }
    printf("%f\n", prueba) ;
    return 0;
}
!nvcc -o Programas/Mul_vectores Programas/Mul_vectores.cu
"""
Explanation: Blocks, indices, and a first exercise
We will discuss the indexing normally used in CUDA C code, which will also help us better understand blocks.
Without further ado, here is the index idx which, by the way, will be used in the exercise below.
```C
idx = blockIdx.x * blockDim.x + threadIdx.x ;
```
The components of idx are clear enough. blockIdx.x refers to the index of the block within a grid, while threadIdx.x refers to the index of a thread within the block that contains it.
blockDim.x, on the other hand, refers to the dimension of the block in the x direction. Simple, isn't it?
We are assuming we only have one dimension, but all of this is analogous in two or three dimensions.
To illustrate the magic of this index, suppose we have a one-dimensional vector with 16 entries, divided into 4 blocks of 4 entries each.
Then blockDim.x = 4 and it is a fixed value.
blockIdx.x and threadIdx.x range from 0 to 3.
In the first block (blockIdx.x = 0), idx = 0 * 4 + threadIdx.x goes from 0 to 3.
In the second block (blockIdx.x = 1), idx = 4 + threadIdx.x starts at 4 and ends at 7.
And so on, until finishing at 15, thereby covering all 16 entries of our vector.
Now then, where can we set the dimensions of the blocks and grids?
CUDA C provides variables of type dim3 with which we can very easily set the dimensions of these objects. Their syntax is very simple:
```C
dim3 dimBlock(4, 1, 1) ;
dim3 dimGrid(4, 1, 1) ;
```
The variables dimBlock and dimGrid were written for the previous example. The syntax is, as you can guess, the familiar (x, y, z), so the two variables just written refer to a one-dimensional grid (in the x direction) with 4 blocks. Each of these blocks is also one-dimensional in the x direction and has 4 threads.
Exercise: Vector multiplication
Now we will see how to write our first CUDA C code for a basic exercise: element-wise vector multiplication.
The idea is this: below we provide part of the code, and the reader only has to fill in the missing parts. Practically all the elements needed to complete this first code are present in the previous notebook, so do not hesitate to use it as a reference.
It's that easy.
Note: the parts left to fill in are
+ the kernel
+ memory allocation, copying, and freeing
+ block and grid dimensions
End of explanation
"""
suma = 0.
for i in xrange(100):
suma += i*(i+1)
suma
# These four lines do the same thing as all the code we wrote earlier.
# Don't get discouraged :( the real results will come very soon.
"""
Explanation: The final result should be 333300.0. If your result is correct, congratulations! You have written your first CUDA C code correctly.
If you cannot get the correct result and are tired of trying, remember that you can contact us at our email addresses (listed in the first notebook).
End of explanation
"""
|
GoogleCloudPlatform/bigquery-notebooks
|
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-mlops/kfp_tutorial.ipynb
|
apache-2.0
|
# CHANGE the following settings
BASE_IMAGE = "gcr.io/your-image-name"
MODEL_STORAGE = "gs://your-bucket-name/folder-name" # Must include a folder in the bucket, otherwise, model export will fail
BQ_DATASET_NAME = "hotel_recommendations" # This is the name of the target dataset where you model and predictions will be stored
PROJECT_ID = "your-project-id" # This is your GCP project ID that can be found in the GCP console
KFPHOST = "your-ai-platform-pipeline-url" # Kubeflow Pipelines URL, can be found from settings button in CAIP Pipelines
REGION = "your-project-region" # For example, us-central1, note that Vertex AI endpoint deployment region must match MODEL_STORAGE bucket region
ENDPOINT_NAME = "your-vertex-ai-endpoint-name"
DEPLOY_COMPUTE = "your-endpoint-compute-size" # For example, n1-standard-4
DEPLOY_IMAGE = "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.0-82:latest" # Do not change, BQML XGBoost is currently compatible with 0.82
"""
Explanation: Tutorial Overview
This is a two-part tutorial. Part one walks you through a complete end-to-end machine learning use case on Google Cloud Platform. You will learn how to build a hybrid recommendation model with an embedding technique using Google BigQuery Machine Learning, following the book "BigQuery: The Definitive Guide", a highly recommended book written by BigQuery and ML expert Valliappa Lakshmanan. We will not cover in detail typical machine learning steps such as data exploration and cleaning, feature selection, and feature engineering (other than the embedding technique shown here); we encourage readers to do so and see if they can improve the model quality and performance. Instead, we will mostly focus on showing you how to orchestrate the entire machine learning process with Kubeflow on Google AI Platform Pipelines. In PART TWO, you will learn how to set up a CI/CD pipeline with Google Cloud Source Repositories and Google Cloud Build.
The use case is to predict the propensity of booking for any user/hotel combination. The intuition behind the embedding layer with Matrix Factorization is that if we can find similar hotels that are close in the embedding space, we will achieve higher accuracy when predicting whether a user will book a hotel.
Prerequisites
Download the Expedia Hotel Recommendation Dataset from Kaggle. You will be mostly working with the train.csv dataset for this tutorial
Upload the dataset to BigQuery by following the how-to guide Loading CSV Data
Follow the how-to guide create flex slots, reservation and assignment in BigQuery for training ML models. <strong>Make sure to create Flex slots and not month/year slots so you can delete them after the tutorial.</strong>
Build and push a docker image using this dockerfile as the base image for the Kubeflow pipeline components.
Create an instance of AI Platform Pipelines by following the how-to guide Setting up AI Platform Pipelines.
Create or use a Google Cloud Storage bucket to export the finalized model to.
Change the following cell to reflect your setup
End of explanation
"""
import json
import os
from typing import NamedTuple
def run_bigquery_ddl(
project_id: str, query_string: str, location: str
) -> NamedTuple("DDLOutput", [("created_table", str), ("query", str)]):
"""
Runs BigQuery query and returns a table/model name
"""
print(query_string)
from google.api_core.future import polling
from google.cloud import bigquery
from google.cloud.bigquery import retry as bq_retry
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query_string, retry=bq_retry.DEFAULT_RETRY)
job._retry = polling.DEFAULT_RETRY
while job.running():
from time import sleep
sleep(0.1)
print("Running ...")
tblname = job.ddl_target_table
tblname = "{}.{}".format(tblname.dataset_id, tblname.table_id)
print("{} created in {}".format(tblname, job.ended - job.started))
from collections import namedtuple
result_tuple = namedtuple("DDLOutput", ["created_table", "query"])
return result_tuple(tblname, query_string)
"""
Explanation: Create BigQuery function
Create a generic BigQuery function that runs a BigQuery query and returns the table/model created. This will be re-used to return BigQuery results for all the different segments of the BigQuery process in the Kubeflow Pipeline. You will see later in the tutorial how this function is passed as a parameter (ddlop) to other functions to perform specific BigQuery operations.
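For instance (mirroring what the pipeline definition does later in this notebook, and assuming kfp is installed and BASE_IMAGE points to the image built in the prerequisites), the function is wrapped into a reusable component op, and that op is what gets passed around as ddlop:
```python
import kfp.components as comp

# Wrap the plain Python function into a Kubeflow Pipelines component op
ddlop = comp.func_to_container_op(
    run_bigquery_ddl,
    base_image=BASE_IMAGE,
    packages_to_install=["google-cloud-bigquery"],
)
```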
End of explanation
"""
def train_matrix_factorization_model(ddlop, project_id, dataset):
query = """
CREATE OR REPLACE MODEL `{project_id}.{dataset}.my_implicit_mf_model_quantiles_demo_binary_prod`
OPTIONS
(model_type='matrix_factorization',
feedback_type='implicit',
user_col='user_id',
item_col='hotel_cluster',
rating_col='rating',
l2_reg=30,
num_factors=15) AS
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
""".format(
project_id=project_id, dataset=dataset
)
return ddlop(project_id, query, "US")
def evaluate_matrix_factorization_model(
project_id, mf_model, location="US"
) -> NamedTuple("MFMetrics", [("msqe", float)]):
query = """
SELECT * FROM ML.EVALUATE(MODEL `{project_id}.{mf_model}`)
""".format(
project_id=project_id, mf_model=mf_model
)
print(query)
import json
from google.cloud import bigquery
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple("MFMetrics", ["msqe"])
return result_tuple(metrics_df.loc[0].to_dict()["mean_squared_error"])
"""
Explanation: Creating the model
We will start by training a matrix factorization model that will allow us to understand the latent relationships between users and hotel clusters. The reason we are doing this is that the matrix factorization approach can only find the latent relationship between a user and a hotel. However, there are other intuitively useful predictors (such as is_mobile, location, etc.) that can improve model performance. So, together, we can feed the resulting weights/factors as features along with other features to train the final XGBoost model.
End of explanation
"""
def create_user_features(ddlop, project_id, dataset, mf_model):
# Feature engineering for useres
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.user_features_prod` AS
WITH u as
(
select
user_id,
count(*) as total_visits,
count(distinct user_location_city) as distinct_cities,
sum(distinct site_name) as distinct_sites,
sum(is_mobile) as total_mobile,
sum(is_booking) as total_bookings,
FROM `{project_id}.{dataset}.hotel_train`
GROUP BY 1
)
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM
u JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'user_id' AND feature = CAST(u.user_id AS STRING)
""".format(
project_id=project_id, dataset=dataset, mf_model=mf_model
)
return ddlop(project_id, query, "US")
def create_hotel_features(ddlop, project_id, dataset, mf_model):
# Feature eingineering for hotels
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.hotel_features_prod` AS
WITH h as
(
select
hotel_cluster,
count(*) as total_cluster_searches,
count(distinct hotel_country) as distinct_hotel_countries,
sum(distinct hotel_market) as distinct_hotel_markets,
sum(is_mobile) as total_mobile_searches,
sum(is_booking) as total_cluster_bookings,
FROM `{project_id}.{dataset}.hotel_train`
group by 1
)
SELECT
h.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS hotel_factors
FROM
h JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'hotel_cluster' AND feature = CAST(h.hotel_cluster AS STRING)
""".format(
project_id=project_id, dataset=dataset, mf_model=mf_model
)
return ddlop(project_id, query, "US")
"""
Explanation: Creating embedding features for users and hotels
We will use the matrix factorization model to create the corresponding user factors and hotel factors, and embed them together with additional features such as total visits and distinct cities to create a new training dataset for an XGBoost classifier, which will try to predict the likelihood of booking for any user/hotel combination. Also note that we aggregated and grouped the original dataset by user_id.
End of explanation
"""
def combine_features(
ddlop, project_id, dataset, mf_model, hotel_features, user_features
):
# Combine user and hotel embedding features with the rating associated with each combination
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.total_features_prod` AS
with ratings as(
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
)
select
h.* EXCEPT(hotel_cluster),
u.* EXCEPT(user_id),
IFNULL(rating,0) as rating
from `{hotel_features}` h, `{user_features}` u
LEFT OUTER JOIN ratings r
ON r.user_id = u.user_id AND r.hotel_cluster = h.hotel_cluster
""".format(
project_id=project_id,
dataset=dataset,
mf_model=mf_model,
hotel_features=hotel_features,
user_features=user_features,
)
return ddlop(project_id, query, "US")
"""
Explanation: The function below combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier. Note that the target variable is rating, which is converted into a binary classification label.
End of explanation
"""
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_hotels`(h ARRAY<FLOAT64>)
RETURNS
STRUCT<
h1 FLOAT64,
h2 FLOAT64,
h3 FLOAT64,
h4 FLOAT64,
h5 FLOAT64,
h6 FLOAT64,
h7 FLOAT64,
h8 FLOAT64,
h9 FLOAT64,
h10 FLOAT64,
h11 FLOAT64,
h12 FLOAT64,
h13 FLOAT64,
h14 FLOAT64,
h15 FLOAT64
> AS (STRUCT(
h[OFFSET(0)],
h[OFFSET(1)],
h[OFFSET(2)],
h[OFFSET(3)],
h[OFFSET(4)],
h[OFFSET(5)],
h[OFFSET(6)],
h[OFFSET(7)],
h[OFFSET(8)],
h[OFFSET(9)],
h[OFFSET(10)],
h[OFFSET(11)],
h[OFFSET(12)],
h[OFFSET(13)],
h[OFFSET(14)]
));
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_users`(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)]
));
"""
Explanation: We will create a couple of BigQuery user-defined functions (UDFs) that convert an array into a struct whose fields are the array's elements. <strong>Be sure to change the BigQuery dataset name to your dataset name.</strong>
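To make clear what the UDF does, here is a plain-Python analogy (purely illustrative, not BigQuery code): a 15-element factor array becomes a record with named fields so that each factor can be consumed as its own feature column.
```python
# Illustrative only: mimic arr_to_input_15_users in plain Python
factors = [0.1 * i for i in range(15)]
record = {"u{}".format(i + 1): w for i, w in enumerate(factors)}
print(record["u1"], record["u15"])
```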
End of explanation
"""
def train_xgboost_model(ddlop, project_id, dataset, total_features):
# Combine user and hotel embedding features with the rating associated with each combination
query = """
CREATE OR REPLACE MODEL `{project_id}.{dataset}.recommender_hybrid_xgboost_prod`
OPTIONS(model_type='boosted_tree_classifier', input_label_cols=['rating'], AUTO_CLASS_WEIGHTS=True)
AS
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
""".format(
project_id=project_id, dataset=dataset, total_features=total_features
)
return ddlop(project_id, query, "US")
def evaluate_class(
project_id, dataset, class_model, total_features, location="US"
) -> NamedTuple("ClassMetrics", [("roc_auc", float)]):
query = """
SELECT
*
FROM ML.EVALUATE(MODEL `{class_model}`, (
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
))
""".format(
dataset=dataset, class_model=class_model, total_features=total_features
)
print(query)
from google.cloud import bigquery
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple("ClassMetrics", ["roc_auc"])
return result_tuple(metrics_df.loc[0].to_dict()["roc_auc"])
"""
Explanation: Train XGBoost model and evaluate it
End of explanation
"""
def export_bqml_model(
project_id, model, destination
) -> NamedTuple("ModelExport", [("destination", str)]):
import subprocess
# command='bq extract -destination_format=ML_XGBOOST_BOOSTER -m {}:{} {}'.format(project_id, model, destination)
model_name = "{}:{}".format(project_id, model)
print(model_name)
subprocess.run(
[
"bq",
"extract",
"-destination_format=ML_XGBOOST_BOOSTER",
"-m",
model_name,
destination,
],
check=True,
)
from collections import namedtuple
result_tuple = namedtuple("ModelExport", ["destination"])
return result_tuple(destination)
def deploy_bqml_model_vertexai(
project_id,
region,
model_name,
endpoint_name,
model_dir,
deploy_image,
deploy_compute,
):
from google.cloud import aiplatform
parent = "projects/" + project_id + "/locations/" + region
client_options = {"api_endpoint": "{}-aiplatform.googleapis.com".format(region)}
clients = {}
# upload the model to Vertex AI
clients["model"] = aiplatform.gapic.ModelServiceClient(
client_options=client_options
)
model = {
"display_name": model_name,
"metadata_schema_uri": "",
"artifact_uri": model_dir,
"container_spec": {
"image_uri": deploy_image,
"command": [],
"args": [],
"env": [],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
upload_model_response = clients["model"].upload_model(parent=parent, model=model)
print(
"Long running operation on uploading the model:",
upload_model_response.operation.name,
)
model_info = clients["model"].get_model(
name=upload_model_response.result(timeout=180).model
)
# Create an endpoint on Vertex AI to host the model
clients["endpoint"] = aiplatform.gapic.EndpointServiceClient(
client_options=client_options
)
create_endpoint_response = clients["endpoint"].create_endpoint(
parent=parent, endpoint={"display_name": endpoint_name}
)
print(
"Long running operation on creating endpoint:",
create_endpoint_response.operation.name,
)
endpoint_info = clients["endpoint"].get_endpoint(
name=create_endpoint_response.result(timeout=180).name
)
# Deploy the model to the endpoint
dmodel = {
"model": model_info.name,
"display_name": "deployed_" + model_name,
"dedicated_resources": {
"min_replica_count": 1,
"max_replica_count": 1,
"machine_spec": {
"machine_type": deploy_compute,
"accelerator_count": 0,
},
},
}
traffic = {"0": 100}
deploy_model_response = clients["endpoint"].deploy_model(
endpoint=endpoint_info.name, deployed_model=dmodel, traffic_split=traffic
)
print(
"Long running operation on deploying the model:",
deploy_model_response.operation.name,
)
deploy_model_result = deploy_model_response.result()
"""
Explanation: Export XGBoost model and host it as a model endpoint on Vertex AI
One of the nice features of BigQuery ML is the ability to import and export machine learning models. In the function defined below, we are going to export the trained XGBoost model to a Google Cloud Storage bucket. We will later have Vertex AI host this model as an endpoint for predictions. It is worth mentioning that you can host this model on any platform that supports the Booster format (XGBoost 0.82). Check out the documentation for more information on exporting BigQuery ML models and their formats.
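As an optional sanity check, assuming you have copied the exported files locally from MODEL_STORAGE (for example with gsutil cp) and that the exported booster file is named model.bst (the file name here is an assumption), the export can be loaded directly with the xgboost library:
```python
import xgboost as xgb

booster = xgb.Booster()
booster.load_model("model.bst")  # hypothetical local path to the exported booster file
print("Loaded a booster with {} trees".format(len(booster.get_dump())))
```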
End of explanation
"""
import time
import kfp.components as comp
import kfp.dsl as dsl
@dsl.pipeline(
name="Training pipeline for hotel recommendation prediction",
description="Training pipeline for hotel recommendation prediction",
)
def training_pipeline(project_id=PROJECT_ID):
import json
# Minimum threshold for model metric to determine if model will be deployed for prediction
mf_msqe_threshold = 0.5
class_auc_threshold = 0.8
# Defining function containers
ddlop = comp.func_to_container_op(
run_bigquery_ddl,
base_image=BASE_IMAGE,
packages_to_install=["google-cloud-bigquery"],
)
evaluate_class_op = comp.func_to_container_op(
evaluate_class,
base_image=BASE_IMAGE,
packages_to_install=["google-cloud-bigquery", "pandas"],
)
evaluate_mf_op = comp.func_to_container_op(
evaluate_matrix_factorization_model,
base_image=BASE_IMAGE,
packages_to_install=["google-cloud-bigquery", "pandas"],
)
export_bqml_model_op = comp.func_to_container_op(
export_bqml_model,
base_image=BASE_IMAGE,
packages_to_install=["google-cloud-bigquery"],
)
deploy_bqml_model_op = comp.func_to_container_op(
deploy_bqml_model_vertexai,
base_image=BASE_IMAGE,
packages_to_install=["google-cloud-aiplatform"],
)
#############################
# Defining pipeline execution graph
dataset = BQ_DATASET_NAME
# Train matrix factorization model
mf_model_output = train_matrix_factorization_model(
ddlop, PROJECT_ID, dataset
).set_display_name("train matrix factorization model")
mf_model_output.execution_options.caching_strategy.max_cache_staleness = "P0D"
mf_model = mf_model_output.outputs["created_table"]
# Evaluate matrix factorization model
mf_eval_output = evaluate_mf_op(PROJECT_ID, mf_model).set_display_name(
"evaluate matrix factorization model"
)
mf_eval_output.execution_options.caching_strategy.max_cache_staleness = "P0D"
with dsl.Condition(mf_eval_output.outputs["msqe"] < mf_msqe_threshold):
# Create features for classification model
user_features_output = create_user_features(
ddlop, PROJECT_ID, dataset, mf_model
).set_display_name("create user factors features")
user_features = user_features_output.outputs["created_table"]
user_features_output.execution_options.caching_strategy.max_cache_staleness = (
"P0D"
)
hotel_features_output = create_hotel_features(
ddlop, PROJECT_ID, dataset, mf_model
).set_display_name("create hotel factors features")
hotel_features = hotel_features_output.outputs["created_table"]
hotel_features_output.execution_options.caching_strategy.max_cache_staleness = (
"P0D"
)
total_features_output = combine_features(
ddlop, PROJECT_ID, dataset, mf_model, hotel_features, user_features
).set_display_name("combine all features")
total_features = total_features_output.outputs["created_table"]
total_features_output.execution_options.caching_strategy.max_cache_staleness = (
"P0D"
)
# Train XGBoost model
class_model_output = train_xgboost_model(
ddlop, PROJECT_ID, dataset, total_features
).set_display_name("train XGBoost model")
class_model = class_model_output.outputs["created_table"]
class_model_output.execution_options.caching_strategy.max_cache_staleness = (
"P0D"
)
class_eval_output = evaluate_class_op(
project_id, dataset, class_model, total_features
).set_display_name("evaluate XGBoost model")
class_eval_output.execution_options.caching_strategy.max_cache_staleness = "P0D"
with dsl.Condition(class_eval_output.outputs["roc_auc"] > class_auc_threshold):
# Export model
export_destination_output = export_bqml_model_op(
project_id, class_model, MODEL_STORAGE
).set_display_name("export XGBoost model")
export_destination_output.execution_options.caching_strategy.max_cache_staleness = (
"P0D"
)
export_destination = export_destination_output.outputs["destination"]
deploy_model = deploy_bqml_model_op(
PROJECT_ID,
REGION,
class_model,
ENDPOINT_NAME,
MODEL_STORAGE,
DEPLOY_IMAGE,
DEPLOY_COMPUTE,
).set_display_name("Deploy XGBoost model")
deploy_model.execution_options.caching_strategy.max_cache_staleness = "P0D"
"""
Explanation: Defining the Kubeflow Pipelines (KFP)
Now that we have the necessary functions defined, we are ready to create a workflow using Kubeflow Pipelines. The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL).
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
The pipeline performs the following steps -
* Trains a Matrix Factorization model
* Evaluates the trained Matrix Factorization model; if the Mean Squared Error is below the threshold, the pipeline continues to the next step, otherwise it stops
* Engineers new user factors feature with the Matrix Factorization model
* Engineers new hotel factors feature with the Matrix Factorization model
* Combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier
* Trains a XGBoost classifier
* Evaluates the trained XGBoost model; if the ROC AUC score is above the threshold, the pipeline continues to the next step, otherwise it stops
* Exports the XGBoost model to a Google Cloud Storage bucket
* Deploys the XGBoost model from the Google Cloud Storage bucket to a Vertex AI endpoint for prediction
End of explanation
"""
pipeline_func = training_pipeline
pipeline_filename = pipeline_func.__name__ + ".zip"
import kfp
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
# Specify pipeline argument values
arguments = {}
# Get or create an experiment and submit a pipeline run
client = kfp.Client(KFPHOST)
experiment = client.create_experiment("hotel_recommender_experiment")
# Submit a pipeline run
run_name = pipeline_func.__name__ + " run"
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
"""
Explanation: Submitting pipeline runs
You can trigger pipeline runs using the KFP SDK API or the KFP CLI. Here the compiled pipeline is submitted as a run through the KFP SDK client. Notice how the pipeline's parameters are passed to the pipeline run.
End of explanation
"""
|
BrentDorsey/pipeline
|
gpu.ml/notebooks/01a_Explore_GPU.ipynb
|
apache-2.0
|
%%bash
nvidia-smi
"""
Explanation: Explore GPU
Sanity Check #1:
Run Standard nvidia-smi Tool
End of explanation
"""
%%bash
xla_device_test &> xla_device_test.log
tail -3 xla_device_test.log
"""
Explanation: Sanity Check #2:
Run Accelerated Linear Algebra (XLA) Tests
End of explanation
"""
%%bash
cat /root/src/main/cuda/SumArrays.cu
"""
Explanation: Run Some CUDA Code!
Show CUDA Code
End of explanation
"""
%%bash
sum_arrays
"""
Explanation: Run CUDA Code and Verify Expected Output
```
EXPECTED OUTPUT
...
Awesome! The GPU summed the arrays!!
...
```
Note the execution time.
End of explanation
"""
%%bash
# Don't go above 10!!
for _ in {1..10}
do
sum_arrays > /dev/null 2>&1
done
echo "...Done!"
"""
Explanation: Open a Terminal through Jupyter Notebook
(Menu Bar -> Terminal -> New Terminal)
Run this Command to Watch GPU Every Second:
watch -n 1 nvidia-smi
Run Code In Loop, Watch GPU
Note: Don't go higher than 10!
Otherwise the following may happen:
* this cell will take a long time to finish
* you may kill your instance!!
End of explanation
"""
%%bash
cat /root/src/main/cuda/SumArraysAsyncMemcpy.cu
"""
Explanation: Run Some Advanced CUDA Code!
We lower overall execution time using async, stream-based memcpy
Show Advanced CUDA Code, Find Stream
End of explanation
"""
%%bash
sum_arrays_async_memcpy
%%bash
# Don't go above 10!!
for _ in {1..10}
do
sum_arrays_async_memcpy > /dev/null 2>&1
done
echo "...Done!"
"""
Explanation: Run CUDA Code and Verify Expected Output
```
EXPECTED OUTPUT
...
Awesome! The GPU summed the arrays!!
...
```
Also, note the lower execution time due to async memcpy.
End of explanation
"""
|
machinelearningnanodegree/stanford-cs231
|
solutions/pranay/assignment1/features.ipynb
|
mit
|
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
End of explanation
"""
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
"""
Explanation: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
End of explanation
"""
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
"""
Explanation: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
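As a quick check of what each feature function produces on its own (the exact vector lengths depend on the implementations in cs231n/features.py), you could run something like:
```python
single_hog = hog_feature(X_train[0])
single_hist = color_histogram_hsv(X_train[0], nbin=10)
print single_hog.shape, single_hist.shape
```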
End of explanation
"""
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print test_accuracy
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
"""
Explanation: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
End of explanation
"""
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
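# A sketch of one way to complete the TODO, assuming the TwoLayerNet.train/predict
# interface from earlier in the assignment; the hyperparameter values are only examples.
best_val_acc = -1
for lr in [5e-1, 1e0]:
    for reg in [1e-4, 1e-3]:
        candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
        candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                        num_iters=1500, batch_size=200, learning_rate=lr,
                        learning_rate_decay=0.95, reg=reg, verbose=False)
        val_acc = np.mean(candidate.predict(X_val_feats) == y_val)
        if val_acc > best_val_acc:
            best_val_acc = val_acc
            best_net = candidate
net = best_net  # so the evaluation cell below tests the tuned network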
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
"""
Explanation: Inline question 1:
Describe the misclassification results that you see. Do they make sense?
Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
End of explanation
"""
|
science-of-imagination/nengo-buffer
|
Project/low_pass_training.ipynb
|
gpl-3.0
|
import matplotlib.pyplot as plt
%matplotlib inline
import nengo
import numpy as np
import scipy.ndimage
import matplotlib.animation as animation
from matplotlib import pylab
from PIL import Image
import nengo.spa as spa
import cPickle
import random
from nengo_extras.data import load_mnist
from nengo_extras.vision import Gabor, Mask
dim =28
"""
Explanation: Training Ensemble on MNIST Dataset
On the function points branch of nengo
On the vision branch of nengo_extras
End of explanation
"""
# --- load the data
img_rows, img_cols = 28, 28
(X_train, y_train), (X_test, y_test) = load_mnist()
X_train = 2 * X_train - 1 # normalize to -1 to 1
X_test = 2 * X_test - 1 # normalize to -1 to 1
X_train.shape
noise = np.random.random([dim,dim])
#noise = 2* noise - 1 # normalize to -1 to 1
noise = noise+np.reshape(X_train[1],(28,28))
plt.subplot(121)
plt.imshow(noise,cmap="gray")
low_pass = noise.copy()
low_pass[low_pass<0]=-1
low_pass[low_pass>0]=1
plt.subplot(122)
plt.imshow(low_pass,cmap="gray")
def intense(img):
newImg = img.copy()
newImg[newImg < 0] = -1
newImg[newImg > 0] = 1
return newImg
i=random.randint(1,100)
img = X_train[i]
img_filtered = intense(scipy.ndimage.gaussian_filter(img, sigma=1))
rotated = scipy.ndimage.rotate(img_filtered.reshape(dim,dim),6,reshape=False,cval=-1)
rotated_filtered = intense(scipy.ndimage.gaussian_filter(rotated, sigma=1))
a = np.random.random([dim,dim])
img_filtered = img_filtered+a.ravel()
plt.subplot(141)
plt.title("mnist")
plt.imshow(np.reshape(img,(dim,dim)),cmap="gray")
plt.subplot(142)
plt.title("filtered & noise")
plt.imshow(np.reshape(img_filtered,(dim,dim)),cmap="gray")
plt.subplot(143)
plt.title("rotated")
plt.imshow(np.reshape(rotated,(dim,dim)),cmap="gray")
plt.subplot(144)
plt.title("rotated filtered")
plt.imshow(np.reshape(rotated_filtered,(dim,dim)),cmap="gray")
img = np.random.random([dim,dim])
img = img.ravel()
i=random.randint(1,100)
img = img+X_train[i]
plt.subplot(121)
plt.imshow(np.reshape(img,(dim,dim)),cmap="gray")
plt.subplot(122)
plt.imshow(np.reshape(intense(img),(dim,dim)),cmap="gray")
#Create set of noisy images
noise_train = np.random.random(X_train.shape)
noise_train = 2 * noise_train -1# normalize to -1 to 1
#Training with mnist
#noise_train = noise_train + X_train
#Clean up noisy images with intensifying
clean_train = intense(noise_train)
plt.subplot(121)
plt.imshow(np.reshape(noise_train[1],(dim,dim)),cmap="gray")
plt.subplot(122)
plt.imshow(np.reshape(clean_train[1],(dim,dim)),cmap="gray")
"""
Explanation: Load the MNIST training and testing images
End of explanation
"""
rng = np.random.RandomState(9)
# --- set up network parameters
n_vis = noise_train.shape[1]
n_out = noise_train.shape[1]
#number of neurons/dimensions of semantic pointer
n_hid = 1000 #Try with more neurons for more accuracy
#Want the encoding/decoding done on the training images
ens_params = dict(
eval_points=X_train,
neuron_type=nengo.LIF(), #Why not use LIF? originally used LIFRate()
intercepts=nengo.dists.Choice([-0.5]),
max_rates=nengo.dists.Choice([100]),
)
#Least-squares solver with L2 regularization.
solver = nengo.solvers.LstsqL2(reg=0.01)
#solver = nengo.solvers.LstsqL2(reg=0.0001)
#network that generates the weight matrices between neuron activity and images and the labels
with nengo.Network(seed=3) as model:
a = nengo.Ensemble(n_hid, n_vis, seed=3, **ens_params)
v = nengo.Node(size_in=n_out)
conn = nengo.Connection(
a, v, synapse=None,
eval_points=X_train, function=X_train,#Not used anymore
solver=solver)
# linear filter used for edge detection as encoders, more plausible for human visual system
encoders = Gabor().generate(n_hid, (11, 11), rng=rng)
encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)
#Set the ensembles encoders to this
a.encoders = encoders
#Check the encoders were correctly made
plt.imshow(encoders[0].reshape(28, 28), vmin=encoders[0].min(), vmax=encoders[0].max(), cmap='gray')
"""
Explanation: The Network
The network parameters must be the same here as when the weight matrices are used later on.
The network is made up of an ensemble and a node.
The first connection (a to v) computes the weights from the activities of the noisy images to the cleaned images.
End of explanation
"""
#Get the one hot labels for the images
def get_outs(sim, images):
#The activity of the neurons when an image is given as input
_, acts = nengo.utils.ensemble.tuning_curves(a, sim, inputs=images)
#The activity multiplied by the weight matrix (calculated in the network) to give the one-hot labels
return np.dot(acts, sim.data[conn2].weights.T)
#Check how many of the labels were produced correctly
#def get_error(sim, images, labels):
# return np.argmax(get_outs(sim, images), axis=1) != labels
#Get label of the images
#def get_labels(sim,images):
# return np.argmax(get_outs(sim, images), axis=1)
#Get the neuron activity of an image or group of images (this is the semantic pointer in this case)
def get_activities(sim, images):
_, acts = nengo.utils.ensemble.tuning_curves(a, sim, inputs=images)
return acts
#Get the representation of the image after it has gone through the encoders (Gabor filters) but before it is in the neurons
#This must be computed to create the weight matrix for rotation from neuron activity to this step
# This allows a recurrent connection to be made from the neurons to themselves later
def get_encoder_outputs(sim,images):
#Pass the images through the encoders
outs = np.dot(images,sim.data[a].encoders.T) #before the neurons
return outs
"""
Explanation: Evaluating the network statically
Functions for computing representation of the image at different levels of encoding/decoding
get_outs returns the output of the network
able to evaluate on many images
no need to run the simulator
End of explanation
"""
with nengo.Simulator(model) as sim:
#Neuron activities
noise_acts = get_activities(sim,noise_train)
clean = get_encoder_outputs(sim,clean_train)
#solvers for a learning rule
solver_low_pass = nengo.solvers.LstsqL2(reg=1e-8)
#find weight matrix between neuron activity of the original image and the clean img
#weights returns a tuple including information about learning process, just want the weight matrix
weights,_ = solver_low_pass(noise_acts, clean)
"""
Explanation: Simulator
Calculate the neuron activities of each set of images
Generate the weight matrices between original activities and clean activities
End of explanation
"""
filename = "low_pass_weights_mnist" + str(n_hid) +".p"
cPickle.dump(weights, open( filename, "wb" ) )
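# Sanity check (sketch): reload the pickled weight matrix the same way a later
# notebook would consume it.
loaded_weights = cPickle.load(open(filename, "rb"))
print(loaded_weights.shape)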
"""
Explanation: Saving weight matrices
End of explanation
"""
|
rishuatgithub/MLPy
|
torch/PYTORCH_NOTEBOOKS/02-ANN-Artificial-Neural-Networks/03-Basic-PyTorch-NN.ipynb
|
apache-2.0
|
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
Basic PyTorch Neural Network
Now it's time to put the pieces together. In this section we'll:
* create a multi-layer deep learning model
* load data
* train and validate the model<br>
We'll also introduce a new step:
* save and load a trained model
Our goal is to develop a model capable of classifying an iris plant based on four features. This is a multi-class classification where each sample can belong to ONE of 3 classes (<em>Iris setosa</em>, <em>Iris virginica</em> or <em>Iris versicolor</em>). The network will have 4 input neurons (flower dimensions) and 3 output neurons (scores). Our loss function will compare the target label (ground truth) to the corresponding output score.
<div class="alert alert-info"><strong>NOTE:</strong> Multi-class classifications usually involve converting the target vector to a one_hot encoded matrix. That is, if 5 labels show up as<br>
<pre style='background-color:rgb(217,237,247)'>tensor([0,2,1,0,1])</pre>
then we would encode them as:
<pre style='background-color:rgb(217,237,247)'>tensor([[1, 0, 0],
[0, 0, 1],
[0, 1, 0],
[1, 0, 0],
[0, 1, 0]])</pre>
This is easily accomplished with <a href='https://pytorch.org/docs/stable/nn.html#one-hot'><strong><tt>torch.nn.functional.one_hot()</tt></strong></a>.<br>
However, our loss function <a href='https://pytorch.org/docs/stable/nn.html#crossentropyloss'><strong><tt>torch.nn.CrossEntropyLoss()</tt></strong></a> takes care of this for us.</div>
Perform standard imports
End of explanation
"""
class Model(nn.Module):
def __init__(self, in_features=4, h1=8, h2=9, out_features=3):
super().__init__()
self.fc1 = nn.Linear(in_features,h1) # input layer
self.fc2 = nn.Linear(h1, h2) # hidden layer
self.out = nn.Linear(h2, out_features) # output layer
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.out(x)
return x
# Instantiate the Model class using parameter defaults:
torch.manual_seed(32)
model = Model()
"""
Explanation: Create a model class
For this exercise we're using the Iris dataset. Since a single straight line can't classify three flowers we should include at least one hidden layer in our model.
In the forward section we'll use the <a href='https://en.wikipedia.org/wiki/Rectifier_(neural_networks)'>rectified linear unit</a> (ReLU) function<br>
$\quad f(x)=max(0,x)$<br>
as our activation function. This is available as a full module <a href='https://pytorch.org/docs/stable/nn.html#relu'><strong><tt>torch.nn.ReLU</tt></strong></a> or as just a functional call <a href='https://pytorch.org/docs/stable/nn.html#id27'><strong><tt>torch.nn.functional.relu</tt></strong></a>
End of explanation
"""
df = pd.read_csv('../Data/iris.csv')
df.head()
"""
Explanation: Load the iris dataset
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10,7))
fig.tight_layout()
plots = [(0,1),(2,3),(0,2),(1,3)]
colors = ['b', 'r', 'g']
labels = ['Iris setosa','Iris virginica','Iris versicolor']
for i, ax in enumerate(axes.flat):
for j in range(3):
x = df.columns[plots[i][0]]
y = df.columns[plots[i][1]]
ax.scatter(df[df['target']==j][x], df[df['target']==j][y], color=colors[j])
ax.set(xlabel=x, ylabel=y)
fig.legend(labels=labels, loc=3, bbox_to_anchor=(1.0,0.85))
plt.show()
"""
Explanation: Plot the dataset
The iris dataset has 4 features. To get an idea how they correlate we can plot four different relationships among them.<br>
We'll use the index positions of the columns to grab their names in pairs with <tt>plots = [(0,1),(2,3),(0,2),(1,3)]</tt>.<br>
Here <tt>(0,1)</tt> sets "sepal length (cm)" as <tt>x</tt> and "sepal width (cm)" as <tt>y</tt>
End of explanation
"""
X = df.drop('target',axis=1).values
y = df['target'].values
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=33)
X_train = torch.FloatTensor(X_train)
X_test = torch.FloatTensor(X_test)
# y_train = F.one_hot(torch.LongTensor(y_train)) # not needed with Cross Entropy Loss
# y_test = F.one_hot(torch.LongTensor(y_test))
y_train = torch.LongTensor(y_train)
y_test = torch.LongTensor(y_test)
"""
Explanation: Perform Train/Test/Split
End of explanation
"""
trainloader = DataLoader(X_train, batch_size=60, shuffle=True)
testloader = DataLoader(X_test, batch_size=60, shuffle=False)
"""
Explanation: Prepare DataLoader
For this analysis we don't need to create a Dataset object, but we should take advantage of PyTorch's DataLoader tool. Even though our dataset is small (120 training samples), we'll load it into our model in two batches. This technique becomes very helpful with large datasets.
Note that scikit-learn already shuffled the source dataset before preparing train and test sets. We'll still benefit from the DataLoader shuffle utility for model training if we make multiple passes through the dataset.
End of explanation
"""
# FOR REDO
torch.manual_seed(4)
model = Model()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
"""
Explanation: Define loss equations and optimizations
As before, we'll utilize <a href='https://en.wikipedia.org/wiki/Cross_entropy'>Cross Entropy</a> with <a href='https://pytorch.org/docs/stable/nn.html#crossentropyloss'><strong><tt>torch.nn.CrossEntropyLoss()</tt></strong></a><br>
For the optimizer, we'll use a variation of Stochastic Gradient Descent called <a href='https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam'>Adam</a> (short for Adaptive Moment Estimation), with <a href='https://pytorch.org/docs/stable/optim.html#torch.optim.Adam'><strong><tt>torch.optim.Adam()</tt></strong></a>
End of explanation
"""
epochs = 100
losses = []
for i in range(epochs):
i+=1
y_pred = model.forward(X_train)
loss = criterion(y_pred, y_train)
    losses.append(loss.item())  # store plain floats so the losses list can be plotted below
# a neat trick to save screen space:
if i%10 == 1:
print(f'epoch: {i:2} loss: {loss.item():10.8f}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
"""
Explanation: Train the model
End of explanation
"""
plt.plot(range(epochs), losses)
plt.ylabel('Loss')
plt.xlabel('epoch');
"""
Explanation: Plot the loss function
End of explanation
"""
# TO EVALUATE THE ENTIRE TEST SET
with torch.no_grad():
y_val = model.forward(X_test)
loss = criterion(y_val, y_test)
print(f'{loss:.8f}')
correct = 0
with torch.no_grad():
for i,data in enumerate(X_test):
y_val = model.forward(data)
print(f'{i+1:2}. {str(y_val):38} {y_test[i]}')
if y_val.argmax().item() == y_test[i]:
correct += 1
print(f'\n{correct} out of {len(y_test)} = {100*correct/len(y_test):.2f}% correct')
"""
Explanation: Validate the model
Now we run the test set through the model to see if the loss calculation resembles the training data.
End of explanation
"""
torch.save(model.state_dict(), 'IrisDatasetModel.pt')
"""
Explanation: Here we can see that #17 was misclassified.
Save the trained model to a file
Right now <strong><tt>model</tt></strong> has been trained and validated, and seems to correctly classify an iris 97% of the time. Let's save this to disk.<br>
The tools we'll use are <a href='https://pytorch.org/docs/stable/torch.html#torch.save'><strong><tt>torch.save()</tt></strong></a> and <a href='https://pytorch.org/docs/stable/torch.html#torch.load'><strong><tt>torch.load()</tt></strong></a><br>
There are two basic ways to save a model.<br>
The first saves/loads the state_dict (learned parameters) of the model, but not the model class. The syntax follows:<br>
<tt><strong>Save:</strong> torch.save(model.state_dict(), PATH)<br><br>
<strong>Load:</strong> model = TheModelClass(*args, **kwargs)<br>
model.load_state_dict(torch.load(PATH))<br>
model.eval()</tt>
The second saves the entire model including its class and parameters as a pickle file. Care must be taken if you want to load this into another notebook to make sure all the target data is brought in properly.<br>
<tt><strong>Save:</strong> torch.save(model, PATH)<br><br>
<strong>Load:</strong> model = torch.load(PATH))<br>
model.eval()</tt>
In either method, you must call <tt>model.eval()</tt> to set dropout and batch normalization layers to evaluation mode before running inference. Failing to do this will yield inconsistent inference results.
For more information visit https://pytorch.org/tutorials/beginner/saving_loading_models.html
Save the model
End of explanation
"""
new_model = Model()
new_model.load_state_dict(torch.load('IrisDatasetModel.pt'))
new_model.eval()
with torch.no_grad():
y_val = new_model.forward(X_test)
loss = criterion(y_val, y_test)
print(f'{loss:.8f}')
"""
Explanation: Load a new model
We'll load a new model object and test it as we had before to make sure it worked.
End of explanation
"""
mystery_iris = torch.tensor([5.6,3.7,2.2,0.5])
"""
Explanation: Apply the model to classify new, unseen data
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10,7))
fig.tight_layout()
plots = [(0,1),(2,3),(0,2),(1,3)]
colors = ['b', 'r', 'g']
labels = ['Iris setosa','Iris virginica','Iris versicolor','Mystery iris']
for i, ax in enumerate(axes.flat):
for j in range(3):
x = df.columns[plots[i][0]]
y = df.columns[plots[i][1]]
ax.scatter(df[df['target']==j][x], df[df['target']==j][y], color=colors[j])
ax.set(xlabel=x, ylabel=y)
# Add a plot for our mystery iris:
ax.scatter(mystery_iris[plots[i][0]],mystery_iris[plots[i][1]], color='y')
fig.legend(labels=labels, loc=3, bbox_to_anchor=(1.0,0.85))
plt.show()
"""
Explanation: Let's plot this new iris in yellow to see where it falls in relation to the others:
End of explanation
"""
with torch.no_grad():
print(new_model(mystery_iris))
print()
print(labels[new_model(mystery_iris).argmax()])
"""
Explanation: Now run it through the model:
End of explanation
"""
|
DallasTrinkle/Onsager
|
docs/source/InputOutput.ipynb
|
mit
|
import numpy as np
import sys
sys.path.extend(['.', '..'])
from onsager import crystal
"""
Explanation: Input and output for Onsager transport calculation
The Onsager calculators currently include two computational approaches to determining transport coefficients: an "interstitial" calculation, and a "vacancy-mediated" calculator. Below we describe the
Assumptions used in transport model that are necessary to understand the data to be input, and the limitations of the results;
Crystal class setup needed to initiate a calculation;
Interstitial calculator setup needed for an single mobile species calculation, or,
Vacancy-mediated calculator setup needed for a vacancy-mediated substitutional solute calculation;
the creation of VASP-style input files to be run to generate input data;
proper Formatting of input data to be compatible with the calculators; and
Interpretation of output which includes how to convert output into transport coefficients.
This follows the overall structure of a transport coefficient calculation. Broadly speaking, these are the steps necessary to compute transport coefficients:
Identify the crystal to be considered; this requires mapping whatever defects are to be considered mobile onto appropriate Wyckoff sites in the crystal, even if those exact sites are not occupied by true atoms.
Generate lists of symmetry unrelated "defect states" and "defect state transitions," along with the appropriate "calculator object."
Construct input files for total energy calculations to be run outside of the Onsager codebase; extract appropriate energy and frequency information from those runs.
Input the data in a format that the calculator can understand, and transform those energies and frequencies into rates at a given temperature assuming Arrhenius behavior.
Transform the output into physically relevant quantities (Onsager coefficients, solute diffusivities, mobilities, or drag ratios) with appropriate units.
Assumptions used in Onsager
The Onsager code computes transport of defects on an infinite crystalline lattice. Currently, the code requires that the particular defects can be mapped onto Wyckoff positions in a crystal. This does not require that the defect be an atom occupying various Wyckoff positions (though that obviously is captured), but merely that the defect have the symmetry and transitions that can be equivalently described by an "object" that occupies Wyckoff positions. Simple examples include vacancies, substitutional solutes, simple interstitial atoms, as well as more complex cases such as split vacancy defects (e.g.: a V-O$_\text{i}$-V split double vacancy with oxygen interstitial in a closed-packed crystal; the entire defect complex can be mapped on to the Wyckoff position of the oxygen interstitial). In order to calculate diffusion, a few assumptions are made:
defects are dilute: we never consider more than one defect at a time in an "infinite" periodic crystal; the vacancy-mediated diffuser uses one vacancy and one solute.
defects diffuse via a Markovian process: defect states are well-defined, and the transition time from state-to-state is much longer than the equilibration time in a state, so that the evolution of the system is described by the Master equation with time-independent rates.
defects do not alter the underlying symmetry of the crystal: while the defect itself can have a lower symmetry (according to its Wyckoff position), the presence of a defect does not lead to a global phase transformation to a different crystal; moreover, the crystal maintains translational invariance so that the energy of the system with defect(s) is unchanged under translations.
All of these assumptions are usually good: the dilute limit is valid without strong interactions (such as site blocking), Markovian processes are valid as long as barriers are a few times $k_\text{B}T$, and we are not currently aware of any (simple) defects that induce phase transformations.
Furthermore, relaxation around a defect (or defect cluster) is allowed, but the assumption is that all of the atomic positions can be easily mapped back to "perfect" crystal lattice sites. This is an "off-lattice" model. In some cases, it can be possible to incorporate "new" states, especially metastable states, that are only accessible by a defect.
Finally, the code requires that all diffusion happens on a single sublattice. This sublattice is defined by a single chemical species; it can include multiple Wyckoff positions. But the current algorithms assume that transitions do not result in the creation of antisite defects (where a chemical species is on an "incorrect" sublattice).
Crystal class setup
The assumption of translational invariance of our defects is captured by the use of a Crystal object. Following the standard definition of a crystal, we need to specify (a) three lattice vectors, and (b) at least one basis position, corresponding to at least one site. The crystal needs to contain at least the Wyckoff positions on a single sublattice corresponding to the diffusing defects. It can be useful for it to contain more atoms that act as "spectator" atoms: they do not participate in diffusion, but define both the underlying symmetry of the crystal, and if atomic-scale calculations will be used to compute configuration and transition-state energies, are necessary to define the energy landscape of diffusion.
The lattice vectors of the underlying crystal set the units of length in the transport coefficients. Hence, if the vectors are entered in units of nm, this corresponds to a factor of $10^{-18}\text{ m}^2$ in the transport coefficients. This should also be considered when including factors of volume per atom as well.
The lattice vectors are given by three vectors, $\mathbf{a}_1$, $\mathbf{a}_2$, $\mathbf{a}_3$ in Cartesian coordinates. In python, these are input when creating a Crystal either as a list of three numpy vectors, or as a square numpy matrix. Note: if you enter the three vectors as a matrix, remember that it assumes the vectors are column vectors. That is, if amat is the matrix, then amat[:,0] is $\mathbf{a}_1$, amat[:,1] is $\mathbf{a}_2$, and amat[:,2] is $\mathbf{a}_3$. This may not be what you're expecting. The main recommendation is to enter the lattice vectors as a list (or tuple) of three numpy vectors.
The atomic basis is given by a list of lists of numpy vectors of positions in unit cell coordinates. For a given basis, then basis[0] is a list of all positions for the first chemical element in the crystal, basis[1] is the second chemical element, and so on. If you only have a single chemical element, you may enter a list of numpy vectors.
An optional spin degree of freedom can be included. This is a list of objects, with one for each chemical element. These can be either scalar or vectors, with the assumption that they transform as those objects under group operations. If not included, the spins are all assumed to be equal to 0. Inclusion of these additional degrees of freedom (currently) only impacts the reduction of the unit cell, and the construction of the space group operations.
We also take in, strictly for bookkeeping purposes, a list of names for the chemical elements. This is an optional input, but recommended for readability.
Once initialized, two main internal operations take place:
The unit cell is reduced and optimized. Reduction is a process where we try to find the smallest unit cell representation for the Crystal. This means that the four-atom "simple cubic" unit cell of face-centered cubic can be input, and the code will reduce it to the standard single-atom primitive cell. The reduction algorithm can end up with "unusual" choices of lattice vectors, so we also optimize the lattice vectors so that they are as close to orthogonal as possible, and ordered from smallest to largest. The atomic basis may be shifted uniformly so that if an inversion operation is present, then the inversion center is the origin. Neither choice changes the representation of the crystal; however, the reduction operation can be skipped by including the option noreduce=True.
Full symmetry analysis is performed, including: automated construction of space group generator operators, partitioning of basis sites into symmetry related Wyckoff positions, and determination of point group operations for every basis site. All of these operations are automated, and make no reference to crystallographic tables. The algorithm cannot identify which space group it has generated, nor which Wyckoff positions are present. The algorithm respects both chemistry and spin; this also makes spin a useful manipulation tool to artificially lower symmetry for testing purposes as needed.
Note: Crystals can also be constructed by manipulating existing Crystal objects. A useful case is for the interstitial diffuser: when working "interactively," it is often easier to first make the underlying "spectator" crystal, and then have that Crystal construct the set of Wyckoff positions for a single site in the crystal, and then add that to the basis. Crystal objects are intended to be read-only, so these manipulations result in the creation of a new Crystal object.
A few quick examples:
End of explanation
"""
a0 = 1.
FCCcrys = crystal.Crystal([a0*np.array([0,0.5,0.5]),
a0*np.array([0.5,0,0.5]),
a0*np.array([0.5,0.5,0])],
[np.array([0.,0.,0.])], chemistry=['fcc'])
print(FCCcrys)
"""
Explanation: Face-centered cubic crystal, vacancy-diffusion
Face-centered cubic crystals could be created either by entering the primitive basis:
End of explanation
"""
FCCcrys2 = crystal.Crystal(a0*np.eye(3),
[np.array([0.,0.,0.]), np.array([0,0.5,0.5]),
np.array([0.5,0,0.5]), np.array([0.5,0.5,0])],
chemistry=['fcc'])
print(FCCcrys2)
"""
Explanation: or by entering the simple cubic unit cell with four atoms:
End of explanation
"""
FCCcrys3 = crystal.Crystal(a0*np.eye(3),
[np.array([0.,0.,0.]), np.array([0,0.5,0.5]),
np.array([0.5,0,0.5]), np.array([0.5,0.5,0])],
chemistry=['fcc'], noreduce=True)
print(FCCcrys3)
"""
Explanation: The effect of noreduce can be seen by regenerating the FCC crystal using the simple cubic unit cell:
End of explanation
"""
MgO = crystal.Crystal([a0*np.array([0,0.5,0.5]),
a0*np.array([0.5,0,0.5]),
a0*np.array([0.5,0.5,0])],
[[np.array([0.,0.,0.])], [np.array([0.5,0.5,0.5])]],
chemistry=['Mg', 'O'])
print(MgO)
"""
Explanation: Rocksalt crystal, vacancy-diffusion
Two chemical species, with interpenetrating FCC lattices. In MgO, we would allow for V$_\text{O}$ (oxygen vacancies) to diffuse, with Mg as a "spectator species":
End of explanation
"""
octbasis = FCCcrys.Wyckoffpos(np.array([0.5, 0.5, 0.5]))
tetbasis = FCCcrys.Wyckoffpos(np.array([0.25, 0.25, 0.25]))
FCCcrysint = FCCcrys.addbasis(octbasis + tetbasis, ['int'])
print(octbasis)
print(tetbasis)
print(FCCcrysint)
"""
Explanation: Face-centered cubic crystal, interstitial diffusion
Interstitials in FCC crystals usually diffuse through a network of octahedral and tetrahedral sites. We can use the Wyckoffpos(u) function in a crystal to generate a list of equivalent sites corresponding to the interstitial positions, and the addbasis() function to create a new crystal with these interstitial sites.
End of explanation
"""
from onsager import OnsagerCalc
"""
Explanation: Interstitial calculator setup
The Interstitial calculator is designed for systems where we have a single defect species that diffuses throughout the crystal. This includes single vacancy diffusion, and interstitial solute diffusivity. As for any diffusion calculator, we need to define the configurations that the defect will sample, and the transition states of the defect. In the case of a single defect species,
configurations are simply the Wyckoff positions of the particular sublattice (specified by a chemistry index);
transition states are pairs of configurations with a displacement vector that connects the initial to the final system.
We use the sitelist(chemistry) function to construct a list of lists of indices for a given chemistry; the lists of indices are all symmetrically equivalent crystal basis indices, and each list is symmetrically inequivalent: this is a space group partitioning into equivalent Wyckoff positions.
The transition states are stored as a jumpnetwork, which is a list of lists of tuples of transitions: (initial index, final index, deltax) where the indices are self-explanatory, and deltax is a Cartesian vector corresponding to the translation from the initial state to the final state. The transitions in each list is equivalent by symmetry, and the separate lists are symmetrically inequivalent. Note also that reverse transitions are included: (final index, initial index, -deltax). While the jumpnetwork can be constructed "by hand," it is recommended to use the jumpnetwork() function inside of a crystal to automate the generation, and then remove "spurious" transitions that are identified.
The algorithm in jumpnetwork() is rather simple: a transition is included if
the distance between the initial and final state is less than a cutoff distance, and
the line segment between the initial and final state does not come within a minimum distance of other defect states, and
the line segment between the initial and final state does not come within a minimum distance of any atomic site in the crystal.
The first criterion identifies "close" jumps, while the second criterion eliminates "long" transitions between states when an intermediate configuration may be possible (i.e., $\text{A}\to\text{B}$ when $\text{A}\to\text{C}\to\text{B}$ would be more likely as the state C is "close" to the line connecting A to B), and the final criterion eliminates transitions that take the defect too close to a "spectator" atom in the crystal.
The interstitial diffuser also identifies unique tags for all configurations and transition states. The interstitial tags for configurations are strings with i: followed by unit cell coordinates of site to three decimal digits. The interstitial tags for transition states are strings with i: followed by the unit cell coordinates of the initial state, a ^, and the unit cell coordinates of the final state. When one pretty-prints the interstitial diffuser object, the symmetry unique tags are printed. Note that all of the symmetry equivalent tags are stored in the object, and can be used to identify configurations and transition states, and this is the preferred method for indexing, rather than relying on the particular index into the corresponding lists. The interstitial diffuser calculator contains dictionaries that can be used to convert from tags to indices and vice versa.
Finally, YAML interfaces to output the sitelist and jumpnetwork for an interstitial diffuser are included; combined with the YAML output of the Crystal, this allows for a YAML-serialized representation of the diffusion object.
End of explanation
"""
chem = 0
FCCsitelist = FCCcrys.sitelist(chem)
print(FCCsitelist)
chem = 0
FCCjumpnetwork = FCCcrys.jumpnetwork(chem, cutoff=a0*0.78)
for n, jn in enumerate(FCCjumpnetwork):
print('Jump type {}'.format(n))
for (i,j), dx in jn:
print(' {} -> {} dx= {}'.format(i,j,dx))
chem = 0
FCCvacancydiffuser = OnsagerCalc.Interstitial(FCCcrys, chem, FCCsitelist, FCCjumpnetwork)
print(FCCvacancydiffuser)
"""
Explanation: Face-centered cubic crystal, vacancy-diffusion
We identify the vacancy sites with the crystal sites in the lattice.
End of explanation
"""
chem = 1
MgOsitelist = MgO.sitelist(chem)
print(MgOsitelist)
chem = 1
MgOjumpnetwork = MgO.jumpnetwork(chem, cutoff=a0*0.78)
for n, jn in enumerate(MgOjumpnetwork):
print('Jump type {}'.format(n))
for (i,j), dx in jn:
print(' {} -> {} dx= {}'.format(i,j,dx))
chem = 1
MgOdiffuser = OnsagerCalc.Interstitial(MgO, chem, MgOsitelist, MgOjumpnetwork)
print(MgOdiffuser)
"""
Explanation: Rocksalt crystal, vacancy-diffusion
Two chemical species, with interpenetrating FCC lattices. In MgO, we would allow for V$_\text{O}$ (oxygen vacancies) to diffuse, with Mg as a "spectator species".
End of explanation
"""
chem = 1
FCCintsitelist = FCCcrysint.sitelist(chem)
print(FCCintsitelist)
chem = 1
FCCintjumpnetwork = FCCcrysint.jumpnetwork(chem, cutoff=a0*0.51)
for n, jn in enumerate(FCCintjumpnetwork):
print('Jump type {}'.format(n))
for (i,j), dx in jn:
print(' {} -> {} dx= {}'.format(i,j,dx))
chem = 1
FCCintdiffuser = OnsagerCalc.Interstitial(FCCcrysint, chem,
FCCintsitelist, FCCintjumpnetwork)
print(FCCintdiffuser)
"""
Explanation: Face-centered cubic crystal, interstitial diffusion
Interstitials in FCC crystals usually diffuse through a network of octahedral and tetrahedral sites. Nominally, diffusion should occur through octahedral-tetrahedral jumps, but we can extend the cutoff distance to find additional jumps between tetrahedral sites.
End of explanation
"""
print(FCCintdiffuser.crys.simpleYAML() +
'chem: {}\n'.format(FCCintdiffuser.chem) +
FCCintdiffuser.sitelistYAML(FCCintsitelist) +
FCCintdiffuser.jumpnetworkYAML(FCCintjumpnetwork))
"""
Explanation: The YAML representation is intended to combine both the structural information necessary to construct the (1) crystal, (2) chemistry index of the diffusing defect, (3) sitelist, and (4) jumpnetwork; and the energies, prefactors, and elastic dipoles (derivative of energy with respect to strain) for the symmetry representatives of configurations and jumps. This will become input for the diffuser when computing transport coefficients as a function of temperature, as well as derivatives with respect to strain (elastodiffusion tensor, activation volume tensor).
End of explanation
"""
chem = 0
fivefreqdiffuser = OnsagerCalc.VacancyMediated(FCCcrys, chem,
FCCsitelist, FCCjumpnetwork, 1)
print(fivefreqdiffuser)
"""
Explanation: Vacancy-mediated calculator setup
For the vacancy mediated diffuser, the configurations and transition states are more complicated. First, we have three types of configurations:
Vacancy, sufficiently far away from the solute to have zero interaction energy.
Solute, sufficiently far away from the vacancy to have zero interaction energy.
Vacancy-solute complexes.
The vacancies and solutes are assumed to be able to occupy the same sites in the crystal, and that neither the vacancy or solute lowers the underlying symmetry of the site. This is a rephrasing of our previous assumption that the symmetry of the defect can be mapped onto the symmetry of the crystal Wyckoff position. There are cases where this is not true: that is, some solutes, when substituted into a crystal, will relax in a way that breaks symmetry. While mathematically this can be treated, we do not currently have an implementation that supports this.
The complexes are only considered out to a finite distance; this is called the "thermodynamic range." It is defined in terms of "shells," which is the number of "jumps" from the solute in order to reach the vacancy. We include one more shell out, called the "kinetic range," which are complexes that include transitions to complexes in the thermodynamic range.
When we consider transition states, we have three types of transition states:
Vacancy transitions, sufficiently far away from the solute to have zero interaction energy.
Vacancy-solute complex transitions, where only the vacancy changes position (both between complexes in the thermodynamic range, and between the kinetic and thermodynamic range).
Vacancy-solute complex transitions, where the vacancy and solute exchange place.
These are called, in the "five-frequency framework", omega-0, omega-1, and omega-2 jumps, respectively. The five-frequency model technically identifies omega-1 jumps as only between complexes in the thermodynamic range, while the two "additional" jump types, omega-3 and omega-4, connect complexes in the kinetic range to the thermodynamic range. Operationally, we combine omega-1, -3, and -4 into a single set.
To make a diffuser, we need to
Identify the sitelist of the vacancies (and hence, solutes),
Identify the jumpnetwork of the vacancies
Determine the thermodynamic range
then, the diffuser automatically constructs the complexes out to the thermodynamic range, and the full jumpnetworks.
The vacancy-mediated diffuser also identifies unique tags for all configurations and transition states. The tags for configurations are strings with
v: followed by unit cell coordinates of site to three decimal digits for the vacancy;
s: followed by unit cell coordinates of site to three decimal digits for the solute;
s:...-v:... for a solute-vacancy complex.
The transition states are strings with
omega0: + (initial vacancy configuration) + ^ + (final vacancy configuration);
omega1: + (initial solute-vacancy configuration) + ^ + (final vacancy configuration);
omega2: + (initial solute-vacancy configuration) + ^ + (final solute-vacancy configuration).
When one pretty-prints the vacancy-mediated diffuser object, the symmetry unique tags are printed. Note that all of the symmetry equivalent tags are stored in the object, and can be used to identify configurations and transition states, and this is the preferred method for indexing, rather than relying on the particular index into the corresponding lists. The vacancy-mediated diffuser calculator contains dictionaries that can be used to convert from tags to indices and vice versa.
Face-centered cubic crystal, vacancy mediated-diffusion
We construct the Onsager equivalent of the classic five-frequency model. We can use the sitelist and jumpnetwork that we already constructed for the vacancy by itself. Note that the omega-1 list contains four jumps: one that is the normally identified "omega-1", and three others that correspond to vacancy "escapes" from the first neighbor complex: to the second, third, and fourth neighbors. In the classic five-frequency model, these rates are all forced to be equal.
End of explanation
"""
import h5py
# replace '/dev/null' with your file of choice, and remove backing_store=False
# to read and write to an HDF5 file.
f = h5py.File('/dev/null', 'w', driver='core', backing_store=False)
fivefreqdiffuser.addhdf5(f) # adds the diffuser to the HDF5 file
# how to read in (after opening `f` as an HDF5 file)
fivefreqcopy = OnsagerCalc.VacancyMediated.loadhdf5(f) # creates a new diffuser from HDF5
f.close() # close up the HDF5 file
print(fivefreqcopy)
"""
Explanation: An HDF5 representation of the diffusion calculator can be stored for efficient reconstruction of the object, as well as passing between machines. The HDF5 representation includes everything: the underlying Crystal, the sitelist and jumpnetworks, all of the precalculation and analysis needed for diffusion. This greatly speeds up the construction of the calculator.
End of explanation
"""
from onsager import automator
import tarfile
"""
Explanation: VASP-style input files
At this stage, we have the diffusion "calculator" necessary to compute diffusion, but we need to determine appropriate atomic-scale data to act as input into our calculators. There are two primary steps: (1) constructing appropriate "supercells" containing defect configurations and transition states to be computed, and (2) extracting the appropriate information from those calculations to use in the diffuser. This section deals with the former; the next section will deal with the latter.
The tags are the most straightforward way to identify structures as they are computed, and hence they serve as the mechanism for communicating data into the calculators. To make supercells with defects, we take advantage of the supercell module in Onsager; both calculators contain a makesupercell() function that returns dictionaries of supercells, tags, and appropriate information. Currently, to transform these into usable input files, the automator module can convert such dictionaries into tarballs with an appropriate directory structure, files containing information about appropriate tags for the different configurations, a Makefile that converts CONTCAR output into appropriate POS input for the nudged-elastic band calculation.
Both makesupercell() commands require an input supercell definition, which is a $3\times3$ integer matrix of column vectors; if N is such a matrix, then the supercell vectors are the columns of A = np.dot(a, N), so that $\mathbf A_1$ has components N[:,0] in direct coordinates.
End of explanation
"""
help(FCCintdiffuser.makesupercells)
N = np.array([[-2,2,2],[2,-2,2],[2,2,-2]]) # 32 atom FCC supercell
print(np.dot(FCCcrys.lattice, N))
FCCintsupercells = FCCintdiffuser.makesupercells(N)
help(automator.supercelltar)
with tarfile.open('io-test-int.tar.gz', mode='w:gz') as tar:
automator.supercelltar(tar, FCCintsupercells)
tar = tarfile.open('io-test-int.tar.gz', mode='r:gz')
tar.list()
"""
Explanation: Face-centered cubic crystal, interstitial diffusion
We will need to construct (and relax) appropriate intersitial sites, and the transition states between them.
End of explanation
"""
with tar.extractfile('Makefile') as f:
print(f.read().decode('ascii'))
"""
Explanation: Contents of the Makefile:
End of explanation
"""
with tar.extractfile('tags.json') as f:
print(f.read().decode('ascii'))
"""
Explanation: Contents of the tags.json file:
End of explanation
"""
with tar.extractfile('relax.00/POSCAR') as f:
print(f.read().decode('ascii'))
tar.close()
"""
Explanation: Contents of one POSCAR file for relaxation of a configuration:
End of explanation
"""
help(fivefreqdiffuser.makesupercells)
N = np.array([[-3,3,3],[3,-3,3],[3,3,-3]]) # 108 atom FCC supercell
print(np.dot(FCCcrys.lattice, N))
fivefreqsupercells = fivefreqdiffuser.makesupercells(N)
with tarfile.open('io-test-fivefreq.tar.gz', mode='w:gz') as tar:
automator.supercelltar(tar, fivefreqsupercells)
tar = tarfile.open('io-test-fivefreq.tar.gz', mode='r:gz')
tar.list()
"""
Explanation: Face-centered cubic crystal, vacancy mediated-diffusion
We will need to construct (and relax) appropriate vacancy, solute, and solute-vacancy complexes, and the transition states between them. The commands are nearly identical to the interstitial diffuser; the primary difference is the larger number of configurations and files.
End of explanation
"""
with tar.extractfile('Makefile') as f:
print(f.read().decode('ascii'))
"""
Explanation: Contents of Makefile:
End of explanation
"""
with tar.extractfile('tags.json') as f:
print(f.read().decode('ascii'))
"""
Explanation: Contents of the tags.json file:
End of explanation
"""
with tar.extractfile('relax.01/POSCAR') as f:
print(f.read().decode('ascii'))
tar.close()
"""
Explanation: Contents of one POSCAR file for relaxation of a configuration:
End of explanation
"""
help(FCCintdiffuser.diffusivity)
"""
Explanation: Formatting of input data
Once the atomic-scale data from an appropriate total energy calculation is finished, the data needs to be input into formats that the appropriate diffusion calculator can understand. There are some common definitions between the two, but some differences as well.
In all cases, we work with the assumption that our states are thermally occupied, and our rates are Arrhenius. That means that the (relative) probability of any state can be written as
$$\rho = Z^{-1}\rho^0 \exp(-E/k_\text{B}T)$$
for the partition function $Z$, a site entropic term $\rho^0 = \exp(S/k_\text{B})$, and energy $E$. The transition rate from state A to state B is given by
$$\lambda(\text{A}\to\text{B}) = \frac{\nu^\text{T}_{\text{A}-\text{B}}}{\rho^0_\text{A}} \exp(-(E^\text{T}_{\text{A}-\text{B}} - E_\text{A})/k_\text{B}T)$$
where $E^\text{T}_{\text{A}-\text{B}}$ is the energy of the transition state between A and B, and $\nu^\text{T}_{\text{A}-\text{B}}$ is the prefactor for the transition state.
If we assume harmonic transition state theory, then we can write the site entropic term $\rho^0$ as
$$\rho^0 = \frac{\prod \nu^{\text{perfect-supercell}}}{\prod \nu^{\text{defect-supercell}}}$$
where $\nu$ are the vibrational eigenvalues of the corresponding supercells, and the prefactor for the transition state is
$$\nu^\text{T} = \frac{\prod \nu^{\text{perfect-supercell}}}{\prod_{\nu^2>0} \nu^{\text{transition state}}}$$
where we take the product over the real vibrational frequencies in the transition state (there should be one imaginary mode). From a practical point of view, the perfect-supercell cancels out; we will often set $\rho^0$ to 1 for a single state (so that the other $\rho^0$ are relative probabilities), and then $\nu^\text{T}$ becomes more similar to the attempt frequency for the particular jumps. The definitions above map most simply onto a "hopping atom" approximation for the jump rates: the $3\times3$ force-constant matrix is computed for the atom that is moving in the transition, and its eigenvalues are used to determine the modes $\nu$.
Note the units: $\rho^0$ is unitless, while $\nu^\text{T}$ has units of inverse time; this means that the inverse time unit in the computed transport coefficients will come from $\nu^\text{T}$ values. If they are entered in THz, that contributes $10^{12}\text{ s}^{-1}$.
Because we normalize our probabilities, our energies and transition state energies are relative to each other. In all of our calculations, we will multiply energies by $\beta=(k_\text{B}T)^{-1}$ to get a unitless values as inputs for our diffusion calculators. This means that the diffusers do not have direct information about temperature; explicit temperature factors that appear in the Onsager coefficients must be included by hand from the output transport coefficients. It also means that the calculators do not have a "unit" of energy; rather, $k_\text{B}T$ and the energies must be in the same units.
Face-centered cubic crystal, interstitial diffusion
We need to compute prefactors and energies for our interstitial diffuser. We can also include information about elastic dipoles (derivatives of energy with respect to strain) in order to compute derivatives of diffusivity with respect to strain (elastodiffusion).
End of explanation
"""
FCCintdiffuser.tags
"""
Explanation: The ordering in the lists pre, beteene, preT and betaeneT corresponds to the sitelist and jumpnetwork lists. The tags can be used to determine the proper indices. The most straightforward way to store this in python is a dictionary, where the key is the tag, and the value is a list of [prefactor, energy]. The advantage of this is that it can be easily transformed to and from JSON for simple serialization.
To see a full list of all tags in the dictionary, the tags member of a diffuser gives a dictionary of all tags, ordered to match the structure of sitelist and jumpnetwork.
End of explanation
"""
FCCintdata = {
'i:+0.500,+0.500,+0.500': [1., 0.],
'i:+0.750,+0.750,+0.750': [2., 0.5],
'i:+0.500,+0.500,+0.500^i:+0.750,+0.750,-0.250': [10., 1.0],
'i:+0.750,+0.750,+0.750^i:+1.250,+1.250,+0.250': [50., 2.0]
}
# Conversion from dictionary to lists for a given kBT
# We go through the tags in order, and find one in our data set.
kBT = 0.25 # eV; a rather high temperature
pre = [FCCintdata[t][0] for taglist in FCCintdiffuser.tags['states']
for t in taglist if t in FCCintdata]
betaene = [FCCintdata[t][1]/kBT for taglist in FCCintdiffuser.tags['states']
for t in taglist if t in FCCintdata]
preT = [FCCintdata[t][0] for taglist in FCCintdiffuser.tags['transitions']
for t in taglist if t in FCCintdata]
betaeneT = [FCCintdata[t][1]/kBT for taglist in FCCintdiffuser.tags['transitions']
for t in taglist if t in FCCintdata]
print(pre,betaene,preT,betaeneT,sep='\n')
DFCCint, dDFCCint = FCCintdiffuser.diffusivity(pre, betaene, preT, betaeneT, CalcDeriv=True)
print(DFCCint, dDFCCint, sep='\n')
"""
Explanation: In this example, the energy of the octahedral site is 0, with a base prefactor of 1. The tetrahedral site has an energy of 0.5 (eV) above, with a higher relative vibrational degeneracy of 2. The transition state energy from octahedral to tetrahedral is 1.0 (eV) with a prefactor of 10 (THz); and the transition state energy from tetrahedral to tetrahedral is 2.0 (eV) with a prefactor of 50 (THz).
End of explanation
"""
help(fivefreqdiffuser.Lij)
"""
Explanation: The interpretation of this output will be described below.
Face-centered cubic crystal, vacancy mediated-diffusion
We will need to compute prefactors and energies for our vacancy, solute, and solute-vacancy complexes, and the transition states between them. The difference compared with the interstitial case is that complex prefactors and energies are excess quantities. That means for a complex, its $\rho^0$ is the product of $\rho^0$ for the solute state, the vacancy state, and the excess; the energy $E$ is the sum of the energy of the solute state, the vacancy state, and the excess. However for the transition states, the prefactors and energies are "absolute".
End of explanation
"""
help(fivefreqdiffuser.preene2betafree)
"""
Explanation: The vacancy-mediated diffuser expects combined $\beta F := (E - TS)/k_\text{B}T$, so that our probabilities and rates are proportional to $\exp(-\beta F)$. This is complicated to directly construct, so we have the intermediate function preene2betafree(), which is best used by feeding a dictionary of arrays:
End of explanation
"""
help(fivefreqdiffuser.tags2preene)
"""
Explanation: Even this is a bit complicated; so we use an additional function that maps the tags into the appropriate lists, tags2preene():
End of explanation
"""
fivefreqdata = {
'v:+0.000,+0.000,+0.000': [1., 0.],
's:+0.000,+0.000,+0.000': [1., 0.],
's:+0.000,+0.000,+0.000-v:+0.000,-1.000,+0.000': [1., -0.25],
'omega0:v:+0.000,+0.000,+0.000^v:+0.000,+1.000,+0.000': [10., 1.],
'omega1:s:+0.000,+0.000,+0.000-v:+1.000,+0.000,-1.000^v:+1.000,+1.000,-1.000': [10., 0.5],
'omega1:s:+0.000,+0.000,+0.000-v:+1.000,-1.000,+0.000^v:+1.000,+0.000,+0.000': [20., 0.875],
'omega1:s:+0.000,+0.000,+0.000-v:-1.000,+1.000,+0.000^v:-1.000,+2.000,+0.000': [20., 0.875],
'omega1:s:+0.000,+0.000,+0.000-v:+0.000,+1.000,+0.000^v:+0.000,+2.000,+0.000': [20., 0.875],
'omega2:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+0.000^s:+0.000,+0.000,+0.000-v:+0.000,+1.000,+0.000':
[10., 0.25]
}
# Conversion from dictionary to lists for a given kBT
# note that we can nest the mapping functions.
kBT = 0.25 # eV; a rather high temperature
fivefreqpreene = fivefreqdiffuser.tags2preene(fivefreqdata)
fivefreqbetaF = fivefreqdiffuser.preene2betafree(kBT, **fivefreqpreene)
L0vv, Lss, Lsv, L1vv = fivefreqdiffuser.Lij(*fivefreqbetaF)
print(L0vv, Lss, Lsv, L1vv, sep='\n')
"""
Explanation: In this example, we have a vacancy-solute binding energy of -0.25 (eV), a vacancy jump barrier of 1.0 (eV) with a prefactor of 10 (THz), an "omega-1" activation barrier of 0.75 (eV) which is a transition state energy of 0.75-0.25 = 0.5, an omega-2 activation barrier of 0.5 (eV) which is a transition state energy of 0.5-0.25 = 0.25, and all of the "omega-3/-4" escape jumps with a transition state energy of 1-0.25/2 = 0.875 (eV).
End of explanation
"""
print(np.dot(np.linalg.inv(DFCCint), kBT*dDFCCint))
"""
Explanation: The interpretation of this output will be described below.
Interpretation of output
The final step is to take the output from the diffuser calculator, and convert this into physical quantities: solute diffusivity, elastodiffusivity, Onsager coefficients, drag ratios, and so on.
There are two underlying definitions that we use to define our transport coefficients:
$$\mathbf j = -\underline D \nabla c$$
defines the solute diffusivity as the tensorial transport coefficient that relates defect concentration gradients to defect fluxes, and
$$\mathbf j^\text{s} = -\underline L^\text{ss}\nabla\mu^\text{s} - \underline L^\text{sv}\nabla\mu^\text{v}$$
$$\mathbf j^\text{v} = -\underline L^\text{vv}\nabla\mu^\text{v} - \underline L^\text{sv}\nabla\mu^\text{s}$$
defines the Onsager coefficients as the tensorial transport coefficients that relate solute and vacancy chemical potential gradients to solute and vacancy fluxes. We use these equations to also define the units of our transport coefficients. Fluxes are in units of (number)/area/time, so with concentration in (number)/volume, diffusivity has units of area/time. If the chemical potential is written in units of energy, the Onsager coefficients have units of (number)/length/energy/time. If the chemical potentials instead have units of energy/volume, then the corresponding Onsager coefficients have units of area/energy/time.
Below are more specific details about the different calculators and the output available.
Interstitial diffusivity
The interstitial diffuser outputs a diffusivity tensor that has the units of squared length based on the lengths in the corresponding Crystal, and inverse time units corresponding to the rates that are given as input: the ratio of transition state prefactors to configuration prefactors. In a crystalline system, it is typical to specify the lattice vectors in either nm ($10^{-9}\text{ m}$) or Å ($10^{-10}\text{ m}$), and the prefactors of rates are often THz ($10^{12}\text{ s}^{-1}$), while diffusivity is often reported in either $\text{m}^2/\text{s}$ or $\text{cm}^2/\text{s}$. The conversion factors are
$$1\text{ nm}^2\cdot\text{THz} = 10^{-6}\text{ m}^2/\text{s} = 10^{-2}\text{ cm}^2/\text{s}$$
$$1\text{ Å}^2\cdot\text{THz} = 10^{-8}\text{ m}^2/\text{s} = 10^{-4}\text{ cm}^2/\text{s}$$
It it worth noting that this model of diffusion assumes that the "interstitial" form of the defect is its ground state configuration (or at least one of the configurations used in the derivation of the diffusivity is a ground state configuration). This is generally the case for the diffusion of a vacancy, or light interstitial elements; however, the are materials where a solute has a lower energy as a substitutional defect, but can occupy an interstitial site and diffuse from there. This requires knowledge of the relative occupancy of the two states. Using Kroger-Vink notation, let [B] be the total solute concentration, and $[\text{B}\text{A}]$ and $[\text{B}\text{i}]$ the substitutional and interstitial concentrations, then
$$D_\text{B} = \left\{[\text{B}_\text{i}]D_\text{int} + [\text{B}_\text{A}]D_\text{sub}\right\}/[\text{B}]$$
for interstitial diffusivity $D_\text{int}$ and substitutional diffusivity $D_\text{sub}$. The relative occupancies may be determined by global thermal equilibrium or local thermal equilibrium. The latter is more complex, and relies on knowledge of local defect processes and conditions, and is not discussed further here. For global thermal equilibrium, if we know the energy of the ground state substitutional defect $E_\text{sub}$ and the lowest energy configuration used by the diffuser $E_\text{int}$, then
$$[\text{B}_\text{i}]/[\text{B}] = \left(1 + \exp((E_\text{int}-E_\text{sub})/k_\text{B}T)\right)^{-1} \approx \exp(-(E_\text{int}-E_\text{sub})/k_\text{B}T)$$
and
$$[\text{B}_\text{A}]/[\text{B}] = \left(1 + \exp(-(E_\text{int}-E_\text{sub})/k_\text{B}T)\right)^{-1} \approx 1$$
where the approximations are valid when $E_\text{int}-E_\text{sub}\gg k_\text{B}T$.
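As a minimal illustration of that weighting (a sketch with placeholder names and numbers, not part of the package API):
```python
import numpy as np

def effective_solute_D(D_int, D_sub, E_int, E_sub, kBT):
    # Boltzmann-weighted solute diffusivity, assuming global thermal equilibrium
    x_int = 1.0 / (1.0 + np.exp((E_int - E_sub) / kBT))  # [B_i]/[B]
    x_sub = 1.0 - x_int                                  # [B_A]/[B]
    return x_int * D_int + x_sub * D_sub

# placeholder values: diffusivities in nm^2*THz, energies in eV, kBT ~ 1000 K
print(effective_solute_D(D_int=1e-2, D_sub=1e-6, E_int=0.4, E_sub=0.0, kBT=0.0862))
```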
Derivatives of diffusivity: activation barrier tensor
At any given temperature, the temperature dependence of the diffusivity can be taken as an Arrhenius form,
$$\underline D = \underline D_0 \exp(-\beta \underline E^\text{act})$$
for inverse temperature $\beta = (k_\text{B}T)^{-1}$, and the activation barrier, $\underline E^\text{act}$ can also display anisotropy. Note that in this expression, the exponential is taken on a per-component basis, not as a true tensor exponential.
We can compute $\underline E^\text{act}$ by taking the per-component logarithmic derivative with respect to inverse temperature,
$$\underline E^\text{act} = -\underline D^{-1/2}\frac{d\underline D}{d\beta}\underline D^{-1/2}$$
The diffusivity() function with CalcDeriv=True returns a second tensorial quantity, dD which when multiplied by $k_\text{B}T$, gives $d\underline D/d\beta$. Hence, to compute the activation barrier tensor, we evaluate:
End of explanation
"""
print(np.dot(Lsv, np.linalg.inv(Lss)))
"""
Explanation: In this case, as the matrices are isotropic, we can use $\underline D^{-1}$ rather than $\underline D^{-1/2}$ which must be computed via diagonalization.
This tensor has the same energy units as the variable kBT.
Given the barriers for diffusion, one might have expected that $\underline E^\text{act}$ would be 1, as that is the transition state energy to go from octahedral to tetrahedral. However, the activation barrier is approximately the rate-limiting transition state energy minus the average configuration energy. Since we've chosen a large temperature, the tetrahedral sites have non-negligible occupation, which raises the average energy. As the temperature decreases, the activation energy will approach 1.
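A back-of-the-envelope illustration of that statement (a sketch only: the tetrahedral site energy and the 1:2 site multiplicities below are assumptions chosen for illustration, not values taken from the calculator):
```python
import numpy as np

def effective_barrier(E_TS, site_energies, multiplicities, kBT):
    # rate-limiting transition-state energy minus the Boltzmann-averaged site energy
    E = np.asarray(site_energies, dtype=float)
    g = np.asarray(multiplicities, dtype=float)
    w = g * np.exp(-E / kBT)
    return E_TS - np.sum(w * E) / np.sum(w)

# assumed example: octahedral site at 0 eV, tetrahedral at 0.5 eV, transition state at 1 eV
for kBT in (0.2, 0.1, 0.05, 0.01):
    print(kBT, effective_barrier(1.0, [0.0, 0.5], [1, 2], kBT))
```
As the temperature decreases, the printed barrier approaches 1, consistent with the discussion above.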
Derivatives of diffusivity: elastodiffusion and activation volume tensor
The derivative with respect to strain is the fourth-rank elastodiffusivity tensor $\underline d$, where
$$d_{abcd} = \frac{dD_{ab}}{d\varepsilon_{cd}}$$
This is returned by the elastodiffusion function, which requires the elastic dipole tensors to be included in the function call as well. The elastic dipoles have units of energy, and so are input as $\beta\underline P$, which is unitless. The returned tensor has the same units as the diffusivity.
The activation volume tensor (logarithmic derivative of diffusivity with respect to stress) can be computed from the elastodiffusivity tensor if the compliance tensor $\underline S$ is known; then,
$$V^\text{act}_{abcd} = k_\text{B}T \sum_{ijkl=1}^3 (\underline D^{-1/2})_{ai}\, d_{ijkl}\, (\underline D^{-1/2})_{bj}\, S_{klcd}$$
The units of this quantity are given by the units of $k_\text{B}T$ (energy) multiplied by the units of $\underline S$ (inverse pressure). Typically, $k_\text{B}T$ will be known in eV and $\underline S$ in GPa$^{-1}$, so the conversion factor
$$1\text{ eV}\cdot\text{GPa}^{-1} = 1.6022\times10^{-19}\text{ J}\cdot10^{-9}\text{ m}^3/\text{J} = 0.16022\text{ nm}^3 = 160.22\text{ A}^3$$
can be useful.
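A minimal numpy sketch of that contraction (the names here are placeholders, not part of the package API: `D` a 3×3 diffusivity, `d` the 3×3×3×3 elastodiffusivity, `S` the 3×3×3×3 compliance, and `kBT` the thermal energy):
```python
import numpy as np

def activation_volume(D, d, S, kBT):
    # V^act_{abcd} = kBT * (D^-1/2)_{ai} d_{ijkl} (D^-1/2)_{bj} S_{klcd}
    w, v = np.linalg.eigh(D)  # D^{-1/2} via diagonalization (D symmetric positive definite)
    Dinvsqrt = np.dot(v, np.dot(np.diag(1.0 / np.sqrt(w)), v.T))
    return kBT * np.einsum('ai,ijkl,bj,klcd->abcd', Dinvsqrt, d, Dinvsqrt, S)
```
With $k_\text{B}T$ in eV and $\underline S$ in GPa$^{-1}$, multiplying the result by 160.22 converts it to $\text{A}^3$ (or by 0.16022 to $\text{nm}^3$), per the conversion factor above.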
Vacancy-mediated diffusivity
The vacancy-mediated diffuser outputs transport coefficients whose units, like those of the interstitial diffuser, are set by the lengths in the corresponding Crystal and by the inverse time units of the input rates: the ratio of transition state prefactors to configuration prefactors. In a crystalline system, it is typical to specify the lattice vectors in either nm ($10^{-9}\text{ m}$) or Å ($10^{-10}\text{ m}$), and the prefactors of rates are often THz ($10^{12}\text{ s}^{-1}$). The quantities L0vv, Lss, Lsv, and L1vv output by the Lij function all have the units of area/time, so the conversion factors below are often useful:
$$1\text{ nm}^2\cdot\text{THz} = 10^{-6}\text{ m}^2/\text{s} = 10^{-2}\text{ cm}^2/\text{s}$$
$$1\text{ A}^2\cdot\text{THz} = 10^{-8}\text{ m}^2/\text{s} = 10^{-4}\text{ cm}^2/\text{s}$$
To convert the four quantities into $\underline L^\text{vv}$, $\underline L^\text{ss}$, and $\underline L^\text{sv}$, some additional information is required.
First, in the dilute limit, $\underline L^\text{ss}$ and $\underline L^\text{sv}$ are proportional to $(k_\text{B}T)^{-1}c^\text{v}c^\text{s}$; none of these quantities are known to the diffuser, and the two concentrations are essentially independent variables that must be supplied. The concentrations in these cases are fractional concentrations, not per volume. Finally, if the Onsager coefficients are for chemical potential specified as energies (not energies per volume), the quantities need to be divided by the volume per atom, and the final quantity has the appropriate units. Hence,
$\underline L^\text{ss}$ = Lss*(solute concentration)*(vacancy concentration)/(volume)/kBT
$\underline L^\text{sv}$ = Lsv*(solute concentration)*(vacancy concentration)/(volume)/kBT
where the concentration quantities are fractional.
The vacancy $\underline L^\text{vv}$ is more complicated, as it has a leading order term that is independent of solute, and a first order correction that is linear in the solute concentration. Hence,
$\underline L^\text{vv}$ = (L0vv + L1vv*(solute concentration))*(vacancy concentration)/(volume)/kBT
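As a minimal sketch of those conversions (the function and its arguments — fractional concentrations `cs` and `cv`, atomic `volume`, and `kBT` — are placeholders supplied by the user, not part of the package API):
```python
import numpy as np

def onsager_from_Lij(L0vv, Lss, Lsv, L1vv, cs, cv, volume, kBT):
    # convert the raw Lij() output into physical Onsager coefficients
    pref = cs * cv / (volume * kBT)
    Lss_phys = pref * np.asarray(Lss)
    Lsv_phys = pref * np.asarray(Lsv)
    Lvv_phys = (np.asarray(L0vv) + cs * np.asarray(L1vv)) * cv / (volume * kBT)
    return Lss_phys, Lsv_phys, Lvv_phys
```
The common prefactor cancels in the drag ratio defined below, which is why it can be computed directly from the raw Lss and Lsv.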
Drag ratio
The drag ratio is the unitless (tensorial) quantity $\underline L^\text{sv}(\underline L^\text{ss})^{-1}$. Because of the identical prefactors in front of both terms in the dilute limit, this is given by
End of explanation
"""
|
mitdbg/modeldb
|
client/workflows/demos/registry/tensorflow-mnist-end-to-end.ipynb
|
mit
|
import os
import tensorflow as tf
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
import os
# Ensure credentials are set up, if not, use below
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
# os.environ['VERTA_HOST'] =
from verta import Client
client = Client(os.environ['VERTA_HOST'])
"""
Explanation: Deploying Tensorflow models on Verta
Within Verta, a "Model" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF, etc.); a function (e.g., squaring a number, making a DB call, etc.); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application). See more here.
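For instance, a "model" that has no trained weights at all could be sketched like this (purely illustrative — the class name is made up; it simply reuses the VertaModelBase pattern shown later in this notebook):
```python
from verta.registry import VertaModelBase

class SquaringModel(VertaModelBase):
    # Toy model: nothing to load, just squares each input number.
    def __init__(self, artifacts):
        pass  # no artifacts required

    def predict(self, input_data):
        return [x ** 2 for x in input_data]
```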
This notebook provides an example of how to deploy a Tensorflow model on Verta as a Verta Standard Model either via convenience functions (for Keras) or by extending VertaModelBase.
0. Imports
End of explanation
"""
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
"""
Explanation: 1. Model Training
End of explanation
"""
registered_model = client.get_or_create_registered_model(
name="mnist", labels=["computer-vision", "tensorflow"])
"""
Explanation: 2. Register Model
End of explanation
"""
from verta.environment import Python
model_version_from_obj = registered_model.create_standard_model_from_keras(
model, environment=Python(requirements=["tensorflow"]), name="v1")
"""
Explanation: 2.1 Register from the model object
If you are in the same file where you have the model object handy, use the code below to package the model
End of explanation
"""
model.save("mnist.tf_saved_model")
from verta.registry import VertaModelBase
class MNISTModel(VertaModelBase):
def __init__(self, artifacts):
import tensorflow as tf
self.model = tf.keras.models.load_model(
artifacts["mnist_model"])
def predict(self, input_data):
output = []
for input_data_point in input_data:
reshaped_data = tf.reshape(input_data_point, (1, 28, 28))
output.append(self.model(reshaped_data).numpy().tolist())
return output
# test locally
mnist_model1 = MNISTModel({"mnist_model" : "mnist.tf_saved_model/"})
mnist_model1.predict([x_test[0]])
model_version_from_cls = registered_model.create_standard_model(
MNISTModel,
environment=Python(["tensorflow"]),
name="v2",
artifacts={"mnist_model" : "mnist.tf_saved_model/"}
)
"""
Explanation: 2.2 (OR) Register a serialized version of the model using the VertaModelBase
End of explanation
"""
class MNISTModel2(VertaModelBase):
def __init__(self, artifacts):
import tensorflow as tf
import base64
self.model = tf.keras.models.load_model(artifacts["mnist_model"])
def predict(self, input_data):
# decode base64
import base64
output = []
for input_data_point in input_data:
decoded_data = base64.b64decode(input_data_point["img_bytes"])
decoded_data = tf.io.decode_image(decoded_data)
decoded_data = tf.reshape(decoded_data, (1, 28, 28))
output.append(self.model(decoded_data).numpy().tolist())
return output
# test locally
import base64
mnist_model2 = MNISTModel2({"mnist_model" : "mnist.tf_saved_model/"})
with open("2.png", "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
print(mnist_model2.predict([{"img_bytes" : encoded_string}]))
model_version_from_cls_base64 = registered_model.create_standard_model(
MNISTModel2,
environment=Python(["tensorflow"]),
name="v3",
artifacts={"mnist_model" : "mnist.tf_saved_model/"}
)
"""
Explanation: 2.3 (OR) Register a serialized version of the model using the VertaModelBase (Variation: take in a base64 encoded input vs. a tensor)
End of explanation
"""
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_obj, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_cls, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])
mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_cls_base64, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
with open("2.png", "rb") as image_file:
encoded_string = base64.b64encode(image_file.read())
print(deployed_model.predict([{"img_bytes" : encoded_string}]))
"""
Explanation: 3. Deploy model to endpoint
End of explanation
"""
|
abevieiramota/data-science-cookbook
|
2017/06-linear-regression/Linear_Regression_Tutorial.ipynb
|
mit
|
# Calculate the mean value of a list of numbers
def mean(values):
return sum(values) / float(len(values))
"""
Explanation: Simple Linear Regression
1. Introduction
Linear regression is a prediction method that is more than 200 years old. Simple linear regression is a great first machine learning algorithm to implement, because it requires you to evaluate properties of your training dataset, yet it is simple enough for beginners to understand.
In this tutorial, you will discover how to implement the simple linear regression algorithm from scratch in Python.
After completing this tutorial, you will know:
How to estimate statistical quantities from training data.
How to estimate linear regression coefficients from the data.
How to make predictions using linear regression on new data.
1.1 Dataset - Swedish Auto Insurance
In this tutorial we will use the Swedish Auto Insurance dataset. This dataset involves predicting total claim payments. Download the dataset and save it in your current working directory with the filename insurance.csv.
Note: you may need to convert the European decimal comma (,) to the decimal point (.). You will also need to change the file from whitespace-separated values to CSV format.
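A minimal sketch of that conversion (the input filename below is a placeholder — point it at whatever file you actually downloaded):
```python
# Hedged sketch: turn a whitespace-separated file with decimal commas into insurance.csv.
raw_name = "auto_insurance_raw.txt"  # placeholder name for the downloaded file

with open(raw_name) as src, open("insurance.csv", "w") as dst:
    for line in src:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        fields = [field.replace(",", ".") for field in line.split()]
        dst.write(",".join(fields) + "\n")
```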
1.2 Simple Linear Regression Algorithm
Linear regression assumes a linear, or straight-line, relationship between the input variables (X) and the single output variable (y). More specifically, the output (y) can be calculated from a linear combination of the input variables (X). When there is a single input variable, the method is referred to as simple linear regression.
In simple linear regression we can use statistics on the training data to estimate the coefficients required by the model to make predictions on new data. The straight line for a simple linear regression model can be written as:
Where b0 and b1 are the coefficients we must estimate from the training data. Once the coefficients are known, we can use this equation to estimate output values of y given new input examples of x. This requires that you calculate statistical properties of the data, such as the mean, variance and covariance.
All of the algebra has been given, and we are left with only some arithmetic to implement the estimation of the simple linear regression coefficients. Briefly, we can estimate the coefficients as follows:
Where i refers to the i-th value of the input x or output y. Don't worry if this is not clear right now; these are the functions we will implement in this tutorial.
2. Tutorial Steps
This tutorial is divided into five parts:
Calculate Mean and Variance.
Calculate Covariance.
Estimate Coefficients.
Make Predictions.
Case study on the Swedish auto insurance dataset.
These steps will give you the foundation you need to implement and train simple linear regression models for your own prediction problems.
2.1 Calculate Mean and Variance
The first step is to estimate the mean and variance of the input and output variables from the training data. The mean of a list of numbers can be calculated as:
Below is a function named mean() that implements this behavior for a list of numbers.
End of explanation
"""
# Calculate the variance of a list of numbers
def variance(values, mean):
return sum([(x-mean)**2 for x in values])
"""
Explanation: The variance is the sum of squared differences of each value from the mean value. The variance for a list of numbers can be calculated as:
Below is a function named variance() that calculates the variance of a list of numbers. It requires the mean of the list to be provided as an argument, so we don't have to calculate it more than once.
End of explanation
"""
## PUT YOUR CODE HERE
## PLOT THE DATA HERE
"""
Explanation: Exercise 1
(a) Put the two functions above together and test them on a small given dataset. Use, as an example, the small dataset of x and y values below.
x | y
--| -
1 | 1
2 | 3
4 | 3
3 | 2
5 | 5
(b) Then create a plot showing these points.
End of explanation
"""
# Calculate covariance between x and y
def covariance(x, mean_x, y, mean_y):
covar = 0.0
for i in range(len(x)):
covar += (x[i] - mean_x) * (y[i] - mean_y)
return covar
"""
Explanation: 2.2 Calculate Covariance
The covariance of two groups of numbers describes how those numbers change together. Covariance is a generalization of correlation. Correlation describes the relationship between two groups of numbers, whereas covariance can describe the relationship between two or more groups of numbers. In addition, covariance can be normalized to produce a correlation value. Nevertheless, we can calculate the covariance between two variables as follows:
Below is a function named covariance() that implements this statistic. This function builds on the previous step and takes the lists of x and y values, as well as the means of those values, as arguments.
End of explanation
"""
## PUT YOUR CODE HERE
"""
Explanation: Exercise 2
Test the covariance calculation on the same small dataset presented in the previous section.
End of explanation
"""
# Calculate coefficients
def coefficients(dataset):
x = [row[0] for row in dataset]
y = [row[1] for row in dataset]
x_mean, y_mean = mean(x), mean(y)
b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
b0 = y_mean - b1 * x_mean
return [b0, b1]
"""
Explanation: 2.3 Estimating the Coefficients
Now we must estimate the values of the two coefficients in simple linear regression. The first is B1, which can be estimated as:
We can simplify this formula using the covariance and variance functions presented above, as in the formula below.
Next, we need to estimate a value for B0, also called the intercept, since it controls the starting point of the line where it intersects the y axis.
Once again, we know how to estimate B1 and we have a function to estimate the mean(). We can put all of this together into a function named coefficients() that takes the dataset as an argument and returns the coefficients.
End of explanation
"""
## PUT YOUR CODE HERE
"""
Explanation: Exercise 3
Extend the previous exercise to include the calculation of the coefficients for the synthesized data.
End of explanation
"""
def simple_linear_regression(train, test):
predictions = list()
b0, b1 = coefficients(train)
for row in test:
ypred = b0 + b1 * row[0]
predictions.append(ypred)
return predictions
"""
Explanation: 2.4 Make Predictions
The simple linear regression model is a line defined by the coefficients estimated from the training data. Once the coefficients are estimated, we can use them to make predictions. The equation for making predictions with a simple linear regression model is as follows:
Below is a function named simple_linear_regression() that implements the prediction equation to make predictions on a test dataset. It also ties together the estimation of the coefficients on the training data from the steps above. The coefficients prepared from the training data are used to make predictions on the test data, which are then returned.
End of explanation
"""
from math import sqrt
# Calculate root mean squared error
def rmse_metric(actual, predicted):
sum_error = 0.0
for i in range(len(actual)):
prediction_error = predicted[i] - actual[i]
sum_error += (prediction_error ** 2)
mean_error = sum_error / float(len(actual))
return sqrt(mean_error)
# Evaluate regression algorithm on training dataset
def evaluate_algorithm(dataset, algorithm):
test_set = list()
for row in dataset:
row_copy = list(row)
row_copy[-1] = None
test_set.append(row_copy)
predicted = algorithm(dataset, test_set)
print(predicted)
actual = [row[-1] for row in dataset]
rmse = rmse_metric(actual, predicted)
return rmse
"""
Explanation: Evaluating the model
We will add a function to manage the evaluation of the predictions, named evaluate_algorithm(), and another function to estimate the root mean squared error of the predictions, named rmse_metric(). See the functions below:
End of explanation
"""
## PUT YOUR CODE HERE
"""
Explanation: Exercise 4
Now put together everything that was created to make predictions on our test dataset.
End of explanation
"""
## PUT YOUR CODE HERE
"""
Explanation: Exercise 5
Create a scatter plot to show the predictions as a line and compare it with the original dataset.
End of explanation
"""
|
fedhere/ADSgenderclustering
|
parse_analyze_names.ipynb
|
mit
|
from __future__ import print_function, division
import os,sys
import pickle, pprint,csv
import numpy as np
import scipy as sp
import scipy.stats  # used below for sp.stats.ks_2samp and sp.stats.anderson_ksamp
import pylab as pl
%pylab inline
DEBUG = False
NMC = 1000 #number of montecarlo draws
# doing this only for >=3 author papers,
# and limiting the inference to the first 3 authors
maxauth=3
#read in list of names
# pkl_file = open('name_list/female.pkl', 'rb')
# femalenames = pickle.load(pkl_file)
# pkl_file = open('name_list/male.pkl', 'rb')
# malenames = pickle.load(pkl_file)
femalenames = []
femalecounts = []
# reading in names with clear gender id
filename = 'namedb/female_uniq.csv'
with open(filename, 'rb') as f:
reader = csv.reader(f)
try:
for row in reader:
if row[0].startswith('#'):
continue
femalenames.append(row[0].lower())
femalecounts.append(float(row[1].lower()))
except csv.Error as e:
sys.exit('file %s, line %d: %s' % (filename, reader.line_num, e))
femalenames = np.array(femalenames)
femalecounts = np.array(femalecounts)
malenames = []
malecounts = []
filename = 'namedb/male_uniq.csv'
with open(filename, 'rb') as f:
reader = csv.reader(f)
try:
for row in reader:
if row[0].startswith('#'):
continue
malenames.append(row[0].lower())
malecounts.append(float(row[1].lower()))
except csv.Error as e:
sys.exit('file %s, line %d: %s' % (filename, reader.line_num, e))
malenames = np.array(malenames)
malecounts = np.array(malecounts)
if DEBUG:
print (femalecounts,malecounts)
DEBUG = True
# reads in paper list
pkl_file = open('papers_recent.pkl', 'rb')
papers = pickle.load(pkl_file)
print ("We have a list of %d papers."%len(papers))
print ("\nThe first one looks like:")
print (papers[0])
DEBUG = False
def choosegender(first):
nratio = femalecounts[femalenames == first] / malecounts[malenames == first]
if nratio > 0.75:
return 'f'
if nratio < 0.25:
return 'm'
return 'u'
paperstats={'nauth':[],'ncite':[],'femaleratio':[]}
tot_female = 0
tot_male = 0
tot_unknowns = 0
for ppr in papers:
femalecount = 0
malecount = 0
unknowns = 0
# parse paper info
try:
ncite= ppr['number_of_citations']
except:
ncite=float('NaN')
nauth = len(ppr['authors'])
# skip if less than 3 authors
if nauth < 3 :
continue
# reduct to first 3 authors
authors=ppr['authors'][:maxauth]
for a in authors:
#read first name when possible
try:
first = a.split()[1].replace(',','').strip().lower()
except:
unknowns += 1
continue
if not '.' in first:
if DEBUG:
print ("nauth, ncite:", nauth, ncite,)
print (first)
if first in femalenames and first in malenames:
#print (first)
g = choosegender(first)
if g == 'f':
femalecount += 1
elif g == 'm':
malecount += 1
else:
unknowns += 1
elif first in femalenames :
femalecount += 1
elif first in malenames :
malecount += 1
else:
unknowns += 1
else:
unknowns += 1
if DEBUG:
print ("females: ", femalecount)
print ("males: ", malecount)
print ("unknowns:", unknowns)
if unknowns == 0:
femaleratio = float(femalecount) / float(femalecount + malecount)
# print femaleratio, "maleratio:", float(malecount)/float(maxauth)
tot_female += femalecount
tot_male += malecount
tot_unknowns += unknowns
paperstats['nauth'].append(nauth)
paperstats['ncite'].append(ncite)
paperstats['femaleratio'].append(femaleratio)
# print femaleratio, "maleratio:", float(malecount)/float(maxauth)
if DEBUG:
print ("femaleratio:", femaleratio)
pl.figure()
pl.title("ACTUAL FEMALE RATIO IN THE FIRST 3 AUTHORS")
pl.hist(paperstats['femaleratio'], color='SteelBlue')
tot_male, tot_female, tot_unknowns
pl.figure()
pl.ylabel("female ratio")
pl.xlabel("number of authors")
pl.scatter(paperstats['nauth'], paperstats['femaleratio'], alpha = 0.01)
pl.title("FEMALE CONCENTRATION IN LEAD AUTHORS VS NUMBER OF AUTHORS")
Np = len(paperstats['ncite'])
print (Np, "papers with the gender of all 3 authors identified")
Nc = int(max(paperstats['ncite']) / 5) + 1
ncite = [None] * Nc
for i in range(Nc):
ncite[i] = [paperstats['femaleratio'][ii] \
for ii in range(len(paperstats['femaleratio'])) \
if int(paperstats['ncite'][ii] / 5) == i]
#pl.figure()
#pl.title("FEMALE RATIO IN THE FIRST 3 AUTHORS AGAINST CITATION COUNT")
#for i in range(Nc):
# pl.scatter([i * 5] * len(ncite[i]), ncite[i], alpha = 0.1)
#pl.ylabel("female ratio")
#pl.xlabel("citations")
plt.figure()
plt.ylabel ("female ratio")
plt.xlabel ("number of citations")
plt.scatter(paperstats['ncite'], paperstats['femaleratio'], alpha = 0.1)
plt.show()
"""
Explanation: Clustering in gender authorship
I analyze Astronomical scientific literature to assess whether minorities have a propensity to clustering together in scientific projects.
This is an attempt to quantify, or at least justify, the assumption that having diversity in a department, particularly having diverse mentors, will foster diversity in recruiting more junior scientists.
An important assumption here is that among the first few authors one will be the mentor, grad advisor, or group leader, and that one will be a more junior scientist, grad student or postdoc. This is a common dynamic in authorship in astronomy
I analyze 5000 articles extracted from ADS in January 2015.
Of those I only consider papers with >= 3 authors. Where all three first names can be read (i.e. they are not initials) I cross-check the first names against the list of first names and their usage derived from Social Security Administration records from 1960 to 2012. I only keep a paper in the sample if all three first-author genders can be identified to at least a 75% confidence level (where the ratio of gender usage for that first name is >0.75 for either males or females).
From the original set of 5000 papers the final sample includes 1288 papers.
Then I can check the distribution of ratios of female authors (among the first 3 authors) against a random distribution (given the number of papers and the number of female authors collectively in the sample).
The Null Hypothesis is that women authors are distributed at random among the papers.
The Alternative is that they are not.
I can check this with an MC simulation: given the number of female authors in the sample and the number of papers, I distribute the female authors among the papers 1000 times and check whether the resulting distributions are consistent with the true one using a KS or Anderson-Darling test.
End of explanation
"""
def pickAndDel(indx, picks):
# print, indx, len(picks)
tmp = picks[indx]
del picks[indx]
return tmp
fameleFracRand = np.zeros((NMC, Np))
for i in range(NMC):
picks = range(Np) + range(Np) + range(Np)
#femaleRand = randint(0, Np, tot_female)
femaleRand = np.array([pickAndDel(randint(0, len(picks)), picks) \
for j in range(tot_female)])
fameleFracRand[i] = np.array([(femaleRand == j).sum() / \
3.0 for j in range(Np)])
for i in range(10):
pl.hist(fameleFracRand[i], color='IndianRed', alpha = 0.3)
pl.hist(fameleFracRand[i], color='IndianRed', alpha = 0.3,
label = "Simulated")
pl.hist(paperstats['femaleratio'], color='SteelBlue', alpha = 0.7,
label = "True")
pl.xlabel("Fraction of female authors")
pl.legend()
ratios = np.zeros((NMC, 10))
for i in range(NMC):
ratios[i] = histogram(fameleFracRand[i])[0]
statratios = np.array([np.array([ratios[:,i].mean(),
ratios[:,i].std()]) for i in range(10)])
statratios.T
y = pl.hist(paperstats['femaleratio'], color='SteelBlue', alpha = 0.5, label="True fractions")
x = 0.5 * (histogram(fameleFracRand[0])[1][1:] +
histogram(fameleFracRand[0])[1][:-1])
pl.errorbar(x[::3], statratios[::3, 0],
yerr = (statratios[::3, 1]**2 + y[0][::3])**0.5,
fmt = '.', color = 'IndianRed', label = "MC simulated fractions")
pl.legend(fontsize=10)
pl.xlabel("Female fraction in lead 3 authors")
pl.ylabel("Number of papers")
pl.savefig("ADSgenderclustering.png")
"""
Explanation: MC Simulation
End of explanation
"""
# KS test
sp.stats.ks_2samp(paperstats['femaleratio'], statratios[:, 0])
# AD test
sp.stats.anderson_ksamp([paperstats['femaleratio'], statratios[:, 0]])
"""
Explanation: MAIN FIGURE: The distribution of female authorship ratios is NOT consistent with random. Error bars include stochastic errors, obtained by MC simulations, and count statistics in the true distribution.
Statistical tests (not ideal, because they assume continuous data)
End of explanation
"""
ks = np.zeros(NMC)
ad = np.zeros(NMC)
for i in range(NMC):
ks[i] = sp.stats.ks_2samp(paperstats['femaleratio'], ratios[i])[1]
ad[i] = sp.stats.anderson_ksamp([paperstats['femaleratio'], ratios[i]])[2]
print ("KS mean, std:", ks.mean(), ks.std())
print ("AD mean, std:", ad.mean(), ad.std())
if ks.mean() < 0.003 and ad.mean() < 0.003:
print (r"Null Rejected at > 3 Sigma!")
else:
print ("Null not rejected")
"""
Explanation: The null hypothesis that female authors are distributed randomly among papers is strongly rejected by both the KS and AD tests at p < 0.001!
There is a significant excess of papers with either zero or all three female lead authors, and a deficit of papers with a single female author, compared to a random gender distribution!
End of explanation
"""
|
drakero/Electron_Spectrometer
|
Lanex_Strip_Test.ipynb
|
mit
|
#Imports
from math import *
import numpy as np
import scipy as sp
import scipy.special
import scipy.interpolate as interpolate
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
import seaborn as sns
import sys
import os
#Import custom modules
from physics import *
%matplotlib notebook
"""
Explanation: Lanex Strip Test
Some quick code for analyzing the data acquired during the Lanex strip test. In this test, the lanex strip used in the electron spectrometer design was partially covered with pieces of electrical tape before being exposed to the electron source. This was done to see how much of the signal is due to Lanex luminescence and how much is due to direct excitation in the CCD by electrons.
<img src="lanex_strips.jpg" height="628" width="368" />
End of explanation
"""
B0 = 1710.0/10**4 #Magnetic field strength in Tesla
def KEcalc(z,y):
"""Returns KE in J given z-position in m"""
return me*c**2*(sqrt((q*B0/(me*c))**2*((z**2+y**2)/(2*y))**2+1)-1)
def Radius(KE):
"""#Radius of electron orbit in m given KE in keV"""
return me*c/(q*B0)*sqrt((KE*1000*q/(me*c**2)+1)**2-1)
def zfcalc(KE,y):
"""#Returns z-position at screen in inches given KE in keV"""
R = Radius(KE)
return sqrt(R**2 - (y-R)**2)
def zfcalcGeneral(KE,yM,y):
R = Radius(KE)
zM = zfcalc(KE,yM)
return zM + (y - yM)*(R - yM)/zM
def KEcalcGeneral(zf,yM,yf):
"""Returns KE in J given z-position of electrons, y-position of magnet edge, and y-position of screen, all in m"""
a = (yM+yf)**2
b = -2*yM*(yf*(yM+yf)+zf**2)
d = yM**2*(zf**2+yf**2)
f = (me*c)/(q*B0)
g = (-b+sqrt(b**2-4*a*d))/(2*a)
return me*c**2*(sqrt(g**2+f**2)/f - 1)
def AngleIncidence(KE,yM):
R = Radius(KE)
return asin((R-yM)/R)
"""
Explanation: Espec functions
End of explanation
"""
def getfns(folder,ext=''):
"""Get a list of full path filenames for all files in a folder and subfolders for the given extension"""
fns = []
for file in os.listdir(folder):
if file.endswith(ext):
fns.append(os.path.join(folder,file))
return fns
def readcsv(filename):
"""Read in a csv file and load it as an array"""
return np.loadtxt(open(filename, "rb"), delimiter=",")
def readcsvs(filenames):
"""Read in multiple csv files and load them in to a two-dimensional array"""
template = readcsv(filenames[0])
numfiles = len(filenames)
Data = np.zeros([numfiles,len(template)])
for i in range(numfiles):
spectrum = readcsv(filenames[i])
Data[i,:] = spectrum
return Data
def DataClean(Data):
"""Read in data and clean it by removing rows that have saturated pixel values"""
maxes = np.max(Data[:,500:],1)
includes = maxes<(2**16-1)
rejects = maxes>(2**16-2)
CleanData = Data[includes,:]
return CleanData
def DataAverage(Data):
"""Average input 2d array into a 1d array"""
return np.mean(Data,0)
"""
Explanation: Data read functions
End of explanation
"""
MagnetOutFolderPath = os.curdir + '/Data/2015-08-17_Tiger_stripes_test/even_more_no_magnet'
MagnetInFolderPath = os.curdir + '/Data/2015-08-17_Tiger_stripes_test/magnet_in'
NoLaserFolderPath = os.curdir + '/Data/2015-08-17_Tiger_stripes_test/no_laser'
MagnetOutFiles = getfns(MagnetOutFolderPath,'')
MagnetInFiles = getfns(MagnetInFolderPath,'')
NoLaserFiles = getfns(NoLaserFolderPath,'')
MagnetOutData = readcsvs(MagnetOutFiles)
MagnetInData = readcsvs(MagnetInFiles)
Background = DataAverage(MagnetInData)
MagnetOutDataClean = DataClean(MagnetOutData)-Background
"""
Explanation: Load data and subtract background
End of explanation
"""
sns.set(font_scale=1.5)
fig1 = plt.figure(figsize=(8,6))
ax1 = fig1.add_subplot(111)
ax1.set_xlim(500,3648)
ax1.set_ylim(0,len(MagnetInData))
ax1.set_xlabel('Pixel')
ax1.set_ylabel('Instance')
# mesh1 = ax1.pcolormesh(MagnetInData, cmap='hot',vmin=0, vmax=10000)
mesh1 = ax1.pcolormesh(MagnetOutDataClean, cmap='inferno',vmin=0, vmax=65535)
"""
Explanation: Mesh plot all unsaturated data
End of explanation
"""
MagnetOutDataCleanAvg = DataAverage(MagnetOutDataClean)
MagnetOutDataCleanAvgHat = savitzky_golay(MagnetOutDataCleanAvg,51,3) #Smoothed Data
fig2 = plt.figure(figsize=(12,8))
ax2 = fig2.add_subplot(111)
ax2.set_xlim(0,3648)
ax2.set_ylim(0,65535)
ax2.set_xlabel('Pixel Number')
ax2.set_ylabel('Pixel Value')
#ax2.semilogy()
ax2.plot(MagnetOutDataCleanAvg,linewidth=1)
ax2.plot(MagnetOutDataCleanAvgHat,linewidth=1,color='r')
"""
Explanation: Plot averaged data
End of explanation
"""
#Pixel values were determined by picking a region within each peak and trough
LanexPixelArray = np.hstack((np.arange(240,260),np.arange(670,750),np.arange(1090,1300),np.arange(1600,1700),\
np.arange(1930,2030),np.arange(2400,2450),np.arange(2820,2920),np.arange(3280,3350)))
LanexData = np.hstack((MagnetOutDataCleanAvgHat[240:260],MagnetOutDataCleanAvgHat[670:750],\
MagnetOutDataCleanAvgHat[1090:1300],MagnetOutDataCleanAvgHat[1600:1700],\
MagnetOutDataCleanAvgHat[1930:2030],MagnetOutDataCleanAvgHat[2400:2450],\
MagnetOutDataCleanAvgHat[2820:2920],MagnetOutDataCleanAvgHat[3280:3350]))
VinylPixelArray = np.hstack((np.arange(460,490),np.arange(900,950),np.arange(1400,1500),np.arange(1750,1850),\
np.arange(2150,2250),np.arange(2540,2640)))
VinylData = np.hstack((MagnetOutDataCleanAvgHat[460:490],MagnetOutDataCleanAvgHat[900:950],\
MagnetOutDataCleanAvgHat[1400:1500],MagnetOutDataCleanAvgHat[1750:1850],\
MagnetOutDataCleanAvgHat[2150:2250],MagnetOutDataCleanAvgHat[2540:2640]))
LanexInterpFunc = interpolate.interp1d(LanexPixelArray,LanexData,kind='slinear')
LanexPixelArrayInterp = np.arange(240,3350)
LanexDataInterp = LanexInterpFunc(LanexPixelArrayInterp)
VinylInterpFunc = interpolate.interp1d(VinylPixelArray,VinylData,kind='slinear')
VinylPixelArrayInterp = np.arange(460,2640)
VinylDataInterp = VinylInterpFunc(VinylPixelArrayInterp)
"""
Explanation: Interpolate data between regions with and without vinyl
End of explanation
"""
# sns.set(context='poster',font_scale=1.5)
# sns.set_style("darkgrid")
# sns.set_palette(palette='deep')
# sns.set_color_codes(palette='deep')
plt.figure(figsize=(12,6))
#With Lanex
#plt.plot(LanexPixelArray,LanexData,linestyle='None',marker='.')
plt.plot(LanexPixelArrayInterp,LanexDataInterp, linewidth=2,linestyle='--',label='Without Vinyl',color='b')
#Without Lanex
#plt.plot(VinylPixelArray,VinylData,linestyle='None',marker='.')
plt.plot(VinylPixelArrayInterp,VinylDataInterp,linewidth=2,linestyle='--',label='With Vinyl',color='r')
#All smoothed data
plt.plot(MagnetOutDataCleanAvgHat,linewidth=2,label='All Data',color='g')
plt.xlim(0,2000)
plt.xlabel('Pixel Number')
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.ylabel('Pixel Value')
plt.legend()
plt.subplots_adjust(left=0.12,bottom=0.14) #Adjust spacing to prevent clipping of x and y labels
plt.savefig('strip_test.svg')
"""
Explanation: Plot interpolated data
End of explanation
"""
yM = 0.5 #magnet edge position in inches
CCDpos = 3.02 #CCD y-position relative to magnet edge in mm
yM = yM*.0254 #Convert to base units
CCDpos = CCDpos*10**-3 #Convert to base units
yf = yM + CCDpos #Screen position
KEcalcGeneralVec = np.vectorize(KEcalcGeneral) #Vectorize espec function for KE
ChosenPixels = [479.37419602, 658.07909036, 805.53608568, 929.62974462, 1079.52432829, 1325.661891,\
1454.37508809, 1637.97442378, 1801.34325342, 1984.12923212, 2202.34572023, 2412.63089924,\
2595.3896852]
ChosenPositions = np.multiply(np.add(np.multiply(ChosenPixels,8.0/10**3),9.5),10**-3) #Convert to meters
ChosenEnergies = KEcalcGeneralVec(ChosenPositions,yM,yf) #Energies corresponding to the chosen pixels
SignalRatio = LanexInterpFunc(ChosenPixels)/VinylInterpFunc(ChosenPixels)
print(ChosenEnergies/(1000*q),'\n')
print(SignalRatio)
"""
Explanation: Calculate signal ratios
End of explanation
"""
mass = 2.95680*10**-2 #mass of active layer in g
Nsegments = 101 #number of segments
segmentmass = mass/Nsegments #mass of each segment in g
phosphorthickness = 81.*10**-6 #thickness of active layer in m
KEsim = np.multiply([338, 391, 440, 485],10**-3) #Chosen energies in MeV (only the first 4 were used to save time)
#Create array of angles
CosThetaSim = []
for x in KEsim:
CosThetaSim.append(np.cos(AngleIncidence(x*1000,yM)))
CCDthickness = np.arange(1,21,1) #Depletion region thickness array
MCNP_Directory = '/home/drake/Documents/Physics/Research/Python/MCNP_Code'
#Read files. As far as visible photons are concerned, the depletion region thickness doesn't matter
deptharray= []
Eabsorbed_Segments = []
Eabsorbed_Segments_Error = []
for i in range(len(KEsim)):
deptharray.append([])
Eabsorbed_Segments.append([])
Eabsorbed_Segments_Error.append([])
#Directory of the input files where vinyl was not used in the simulation
if i%2==0: #Energies where vinyl was experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output2/Out_{KE}MeV_{Theta}Degrees_20umCCD_inverted'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)))
else: #Energies where vinyl wasn't experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output1/Out_{KE}MeV_{Theta}Degrees_20umCCD'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)))
for segmentnumber in range(Nsegments-1):
segment = segmentnumber+100 #segment number
deptharray[i].append(segmentnumber*phosphorthickness/Nsegments) #depth for each segment in microns
printflag = False
with open(directory) as searchfile:
for line in searchfile:
left,sep,right = line.partition(' -{segmentlabel} '.format(segmentlabel=str(segment)))
if printflag:
Eabs_per_g = float(line[17:28])
Eabs_per_g_Error = float(line[29:35])
Eabsorbed_Segments[i].append(Eabs_per_g*segmentmass)
Eabsorbed_Segments_Error[i].append(Eabs_per_g_Error*segmentmass)
printflag = False
if sep:
printflag = True
"""
Explanation: Determine signal ratios from MCNP output
<font size="4"><p>MCNP was ran at the above chosen energies both with and without vinyl covering the lanex. The CCD depletion region thickness was also varied from 1 um to 20 um for each energy. The signal ratio at each of these energies can be determined by finding the ratio between the total signal without vinyl and the total signal with vinyl. This can then be compared with the experimentally determined signal ratio in order to find the appropriate depletion region thickness.</p>
<p>First, we need to read in the MCNP data for the "without vinyl" case. The energy deposited into each segment of the phosphor layer can be added to an array (code adapted from "Read MCNP Output" script):</p>
</font>
End of explanation
"""
plt.figure(figsize=(12,6))
plt.plot(np.multiply(deptharray[1],10**6),np.multiply(Eabsorbed_Segments[0],10**3),label='338 keV')
plt.plot(np.multiply(deptharray[1],10**6),np.multiply(Eabsorbed_Segments[1],10**3),label='391 keV')
plt.plot(np.multiply(deptharray[1],10**6),np.multiply(Eabsorbed_Segments[2],10**3),label='440 keV')
plt.plot(np.multiply(deptharray[1],10**6),np.multiply(Eabsorbed_Segments[3],10**3),label='485 keV')
plt.xlabel('Depth (um)')
plt.ylabel('Energy absorbed (keV)')
plt.legend()
"""
Explanation: <font size="4"><p>Plot energy absorbed versus segment depth to make sure everything looks good:</p></font>
End of explanation
"""
scatteringlength = 2.84*10**-6 #photon scattering length in Gd2O2S in m
conversionefficiency = 0.16 #electron energy to light energy conversion efficiency
emissionwavelength = 545*10**-9 #lanex emission wavelength in m
emissionenergy = hbar*2*pi*c/emissionwavelength
photonNumberArray = []
for i in range(len(KEsim)):
photonNumber = 0
for j in range(len(deptharray[i])):
photonNumber += conversionefficiency*Eabsorbed_Segments[i][j]*10**6*q/emissionenergy\
*(j+0.5)/Nsegments # Nabs = Nexc*(Distance from top of lanex)/(Phosphor thickness)
#where Distance from top of lanex = (SegmentNumber+0.5)/(Nsegments)*(Phosphor thickness)
photonNumberArray.append(photonNumber)
QuantumEfficiency = 0.4
PhotonSignal = np.array(np.multiply(photonNumberArray,QuantumEfficiency))
print(PhotonSignal)
"""
Explanation: <font size="4"><p>Next, the number of photons that reach the CCD can be calculated. From the quantum efficiency of the CCD, the contribution to the signal from visible photons can be found.</p>
End of explanation
"""
#With vinyl
ExAbsorbedCCDVinyl = [] #x-rays
ExAbsorbedCCDErrorVinyl = []
EelAbsorbedCCDVinyl = [] #electrons
EelAbsorbedCCDErrorVinyl = []
CCDmass = np.multiply(CCDthickness,1.39740*10**-4) #mass of photoactive layer of CCD
for i in range(len(KEsim)):
ExAbsorbedCCDVinyl.append([])
ExAbsorbedCCDErrorVinyl.append([])
EelAbsorbedCCDVinyl.append([])
EelAbsorbedCCDErrorVinyl.append([])
for j in range(len(CCDthickness)):
#Directory of the input files where vinyl was used in the simulation
if i%2==0: #Energies where vinyl was experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output1/Out_{KE}MeV_{Theta}Degrees_{CCD}umCCD'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)),\
CCD=str(CCDthickness[j]))
else: #Energies where vinyl wasn't experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output2/Out_{KE}MeV_{Theta}Degrees_{CCD}umCCD_inverted'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)),\
CCD=str(CCDthickness[j]))
printflag = False
with open(directory) as searchfile:
firstoccurence = False
for line in searchfile:
left,sep,right = line.partition('cell 7')
if printflag:
Eabs_per_g = float(line[17:28])
Eabs_per_g_Error = float(line[29:35])
if firstoccurence:
ExAbsorbedCCDVinyl[i].append(Eabs_per_g*CCDmass[j])
ExAbsorbedCCDErrorVinyl[i].append(Eabs_per_g_Error*CCDmass[j])
else:
EelAbsorbedCCDVinyl[i].append(Eabs_per_g*CCDmass[j])
EelAbsorbedCCDErrorVinyl[i].append(Eabs_per_g_Error*CCDmass[j])
printflag = False
                if sep: # True iff 'cell 7' in line
printflag = True
firstoccurence = not firstoccurence
#Calculate the overall contribution to the signal
Energy_eh_pair = 3.65 #Energy needed to generate electron-hole pair in Si in eV
ElectronSignalVinyl = []
XraySignalVinyl = []
for i in range(len(KEsim)):
ElectronSignalVinyl.append([])
XraySignalVinyl.append([])
ElectronSignalVinyl[i].append(np.multiply(EelAbsorbedCCDVinyl[i],10**6/Energy_eh_pair))
XraySignalVinyl[i].append(np.multiply(ExAbsorbedCCDVinyl[i],10**6/Energy_eh_pair))
ElectronSignalVinyl = np.array(ElectronSignalVinyl)
XraySignalVinyl = np.array(XraySignalVinyl)
TotalSignalVinyl = np.add(ElectronSignalVinyl,XraySignalVinyl)
#Without vinyl
ExAbsorbedCCDNoVinyl = [] #x-rays
ExAbsorbedCCDErrorNoVinyl = []
EelAbsorbedCCDNoVinyl = [] #electrons
EelAbsorbedCCDErrorNoVinyl = []
CCDmass = np.multiply(CCDthickness,1.39740*10**-4) #mass of photoactive layer of CCD
for i in range(len(KEsim)):
ExAbsorbedCCDNoVinyl.append([])
ExAbsorbedCCDErrorNoVinyl.append([])
EelAbsorbedCCDNoVinyl.append([])
EelAbsorbedCCDErrorNoVinyl.append([])
for j in range(len(CCDthickness)):
#Directory of the input files where vinyl was not used in the simulation
if i%2==0: #Energies where vinyl was experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output2/Out_{KE}MeV_{Theta}Degrees_{CCD}umCCD_inverted'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)),\
CCD=str(CCDthickness[j]))
else: #Energies where vinyl wasn't experimentally used
directory = MCNP_Directory + '/MCNP_Decks_Varying_CCD_Thickness/Old_Output/Output1/Out_{KE}MeV_{Theta}Degrees_{CCD}umCCD'\
.format(KE=str(round(KEsim[i],3)),Theta=str(round(np.arccos(CosThetaSim[i])*360/(2*pi),1)),\
CCD=str(CCDthickness[j]))
printflag = False
with open(directory) as searchfile:
firstoccurence = False
for line in searchfile:
left,sep,right = line.partition('cell 6')
if printflag:
Eabs_per_g = float(line[17:28])
Eabs_per_g_Error = float(line[29:35])
if firstoccurence:
ExAbsorbedCCDNoVinyl[i].append(Eabs_per_g*CCDmass[j])
ExAbsorbedCCDErrorNoVinyl[i].append(Eabs_per_g_Error*CCDmass[j])
else:
EelAbsorbedCCDNoVinyl[i].append(Eabs_per_g*CCDmass[j])
EelAbsorbedCCDErrorNoVinyl[i].append(Eabs_per_g_Error*CCDmass[j])
printflag = False
if sep: # True iff 'cell 6' in line
printflag = True
firstoccurence = not firstoccurence
#Calculate the overall contribution to the signal
Energy_eh_pair = 3.65 #Energy needed to generate electron-hole pair in Si in eV
ElectronSignalNoVinyl = []
XraySignalNoVinyl = []
for i in range(len(KEsim)):
ElectronSignalNoVinyl.append([])
XraySignalNoVinyl.append([])
ElectronSignalNoVinyl[i].append(np.multiply(EelAbsorbedCCDNoVinyl[i],10**6/Energy_eh_pair))
XraySignalNoVinyl[i].append(np.multiply(ExAbsorbedCCDNoVinyl[i],10**6/Energy_eh_pair))
TotalSignalNoVinyl = np.add(ElectronSignalNoVinyl,XraySignalNoVinyl)
for i in range(len(KEsim)):
TotalSignalNoVinyl[i] += PhotonSignal[i]
"""
Explanation: <font size="4"><p>The amount of energy deposited into the CCD depletion region from electrons and x-rays can be read for both the "with vinyl" and "without vinyl" cases:</p>
End of explanation
"""
Ratio = [TotalSignalNoVinyl[0][0][0:19]/TotalSignalVinyl[0][0][0:19],\
TotalSignalNoVinyl[1][0][0:19]/TotalSignalVinyl[1][0][0:19],\
TotalSignalNoVinyl[2][0][0:19]/TotalSignalVinyl[2][0][0:19],\
TotalSignalNoVinyl[3][0][0:19]/TotalSignalVinyl[3][0][0:19]]
MCNPSignalRatio = []
for i in range(len(CCDthickness)-1):
MCNPSignalRatio.append([])
for j in range(len(KEsim)):
MCNPSignalRatio[i].append(Ratio[j][i])
"""
Explanation: <font size="4"><p>Finally, determine the signal ratios</p>
End of explanation
"""
mpl.rcParams.update({'font.size': 24, 'font.family': 'serif'})
sns.set(context='poster',font_scale=1.5)
sns.set_style("darkgrid")
sns.set_palette(palette='deep')
sns.set_color_codes(palette='deep')
plt.figure(figsize=(12,6))
plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[0],linewidth=2,label='1 '+ u'\u03bcm')
plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[1],linewidth=2,label='2 '+ u'\u03bcm')
plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[4],linewidth=2,color='y',label='5 '+ u'\u03bcm')
plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[9],linewidth=2,color='c',label='10 '+ u'\u03bcm')
#plt.plot(np.multiply(KEsim,1.0),MCNPSignalRatio[14],linewidth=2,color='purple',label='MCNP 15 '+ u'\u03bcm photoactive region')
plt.plot(np.divide(ChosenEnergies,10**6*q),SignalRatio,linewidth=2,color='r',marker='o',label='Experiment')
plt.xlabel('Electron Energy (keV)')
plt.ylabel('Signal Ratio (No Vinyl / Vinyl)')
plt.xlim(0.33,0.50)
plt.legend(title='Photoactive Region Thickness')
plt.subplots_adjust(left=0.14,bottom=0.15) #Adjust spacing to prevent clipping of x and y labels
#plt.savefig('Signal_Ratio.svg')
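# --- Hedged addition (not part of the original analysis) ---
# Pick the depletion-region thickness whose simulated ratio best matches the measured one:
# match each simulated energy to the nearest experimental energy, then minimize the
# summed squared difference over the thickness index.
exp_E_MeV = np.divide(ChosenEnergies, 10**6*q)
ExpAtSim = np.array([SignalRatio[np.argmin(np.abs(exp_E_MeV - E))] for E in KEsim])
residuals = [np.sum((np.array(MCNPSignalRatio[i]) - ExpAtSim)**2)
             for i in range(len(MCNPSignalRatio))]
print('Best-fit photoactive thickness (um):', CCDthickness[int(np.argmin(residuals))])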
"""
Explanation: <font size="4"><p>and plot the results:</p>
End of explanation
"""
|
israelzuniga/spark_streaming_class
|
spark_streaming_class/Spark_Streaming.ipynb
|
mit
|
from pyspark import SparkContext
# https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext
from pyspark.streaming import StreamingContext
sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 10)
"""
Explanation: Spark Streaming
Spark Streaming is an extension of the Spark core API that enables scalable, high-throughput, fault-tolerant stream processing of live data feeds. Data can be ingested from different sources such as Kafka, Flume, Kinesis, or TCP sockets, and can be processed with algorithms expressed through high-level functions such as map, reduce, join, and window.
Finally, the processed data can be saved to a file system, databases, and live dashboards. You can also apply Spark's machine learning and graph processing algorithms to the data streams.
Internally, Spark Streaming receives data from live feeds and divides it into batches, which are processed by the Spark engine to generate the final stream of results, also in batches.
Spark Streaming provides a high-level abstraction called a discretized stream, or DStream, which represents a continuous stream of data. DStreams can be created from data sources such as Kafka, Flume, and Kinesis, or by applying high-level operations to other DStreams. Internally, a DStream is represented as a sequence of RDDs.
Example
We import StreamingContext, which is the entry point for all Spark Streaming functionality. We create an instance of the object with two execution threads and a batch interval of 10 seconds.
End of explanation
"""
lines = ssc.socketTextStream("localhost", 9999)
"""
Explanation: Using the previous context, we create a DStream that represents the data stream from a TCP source (socket), specifying a hostname and a port.
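To have something for this DStream to read, a text source must be listening on that port before the streaming context is started — for example, `nc -lk 9999` in a terminal. As a minimal illustrative alternative (not part of the original notebook), a toy TCP server in Python could look like this; run it in a separate process:
```python
import socket
import time

# Toy text server: Spark's socketTextStream connects to it as a client.
def serve_lines(host="localhost", port=9999):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()   # blocks until Spark Streaming connects
    try:
        while True:
            conn.sendall(b"hello spark streaming example\n")
            time.sleep(1)
    finally:
        conn.close()
        server.close()
```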
End of explanation
"""
words = lines.flatMap(lambda line: line.split(" "))
"""
Explanation: The lines DStream represents the stream of data that will be received from another server. Each record in this DStream is a line of text. After receiving a record, we will split it into words using the spaces between them.
End of explanation
"""
pairs = words.map(lambda word: (word, 1))
wordCounts = pairs.reduceByKey(lambda x, y: x + y)
wordCounts.pprint()
"""
Explanation: flatMap is a one-to-many DStream transformation that creates a new DStream by generating multiple records for each record in the source DStream. In this case, each line will be split into multiple words, represented by the words DStream. Next, we will print those words.
End of explanation
"""
ssc.start()
ssc.awaitTermination()
ssc.stop()  # stop the StreamingContext when finished
"""
Explanation: The words DStream is transformed (map, one-to-one) into a DStream of key-value pairs in the format (word, 1), which is then reduced to obtain the frequency of each word in each batch of data. Finally, wordCounts.pprint() prints the counts generated in that interval.
Keep in mind that even though the lines of code above have already been executed, Spark Streaming will only run the computation once it is started. Until then, no real data processing has taken place.
To start the data processing, after the transformations have been set up, we call the functions:
End of explanation
"""
|
xebia-france/luigi-airflow
|
Luigi_airflow_002.ipynb
|
apache-2.0
|
raw_dataset = pd.read_csv(source_path + "Speed_Dating_Data.csv")
"""
Explanation: Import data
End of explanation
"""
raw_dataset.head(3)
raw_dataset_copy = raw_dataset
#merged_datasets = raw_dataset.merge(raw_dataset_copy, left_on="pid", right_on="iid")
#merged_datasets[["iid_x","gender_x","pid_y","gender_y"]].head(5)
#same_gender = merged_datasets[merged_datasets["gender_x"] == merged_datasets["gender_y"]]
#same_gender.head()
columns_by_types = raw_dataset.columns.to_series().groupby(raw_dataset.dtypes).groups
raw_dataset.dtypes.value_counts()
raw_dataset.isnull().sum().head(3)
summary = raw_dataset.describe() #.transpose()
print summary
#raw_dataset.groupby("gender").agg({"iid": pd.Series.nunique})
raw_dataset.groupby('gender').iid.nunique()
raw_dataset.groupby('career').iid.nunique().sort_values(ascending=False).head(5)
raw_dataset.groupby(["gender","match"]).iid.nunique()
"""
Explanation: Data exploration
Shape, types, distribution, modalities and potential missing values
End of explanation
"""
local_path = "/Users/sandrapietrowska/Documents/Trainings/luigi/data_source/"
local_filename = "Speed_Dating_Data.csv"
my_variables_selection = ["iid", "pid", "match","gender","date","go_out","sports","tvsports","exercise","dining",
"museums","art","hiking","gaming","clubbing","reading","tv","theater","movies",
"concerts","music","shopping","yoga"]
class RawSetProcessing(object):
"""
This class aims to load and clean the dataset.
"""
def __init__(self,source_path,filename,features):
self.source_path = source_path
self.filename = filename
self.features = features
# Load data
def load_data(self):
raw_dataset_df = pd.read_csv(self.source_path + self.filename)
return raw_dataset_df
# Select variables to process and include in the model
def subset_features(self, df):
sel_vars_df = df[self.features]
return sel_vars_df
@staticmethod
# Remove ids with missing values
def remove_ids_with_missing_values(df):
sel_vars_filled_df = df.dropna()
return sel_vars_filled_df
@staticmethod
def drop_duplicated_values(df):
df = df.drop_duplicates()
return df
# Combine processing stages
def combiner_pipeline(self):
raw_dataset = self.load_data()
subset_df = self.subset_features(raw_dataset)
subset_no_dup_df = self.drop_duplicated_values(subset_df)
subset_filled_df = self.remove_ids_with_missing_values(subset_no_dup_df)
return subset_filled_df
raw_set = RawSetProcessing(local_path, local_filename, my_variables_selection)
dataset_df = raw_set.combiner_pipeline()
dataset_df.head(3)
# Number of unique participants
dataset_df.iid.nunique()
dataset_df.shape
"""
Explanation: Data processing
End of explanation
"""
def get_partner_features(df):
#print df[df["iid"] == 1]
df_partner = df.copy()
df_partner = df_partner.drop(['pid','match'], 1).drop_duplicates()
#print df_partner.shape
merged_datasets = df.merge(df_partner, how = "inner",left_on="pid", right_on="iid",suffixes=('_me','_partner'))
#print merged_datasets[merged_datasets["iid_me"] == 1]
return merged_datasets
feat_eng_df = get_partner_features(dataset_df)
feat_eng_df.head(3)
"""
Explanation: Feature engineering
End of explanation
"""
import sklearn
print sklearn.__version__
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
import subprocess
"""
Explanation: Modelling
This model aims to answer the questions what is the profile of the persons regarding interests that got the most matches.
Variables:
* gender
* date (In general, how frequently do you go on dates?)
* go out (How often do you go out (not necessarily on dates)?
* sports: Playing sports/ athletics
* tvsports: Watching sports
* excersice: Body building/exercising
* dining: Dining out
* museums: Museums/galleries
* art: Art
* hiking: Hiking/camping
* gaming: Gaming
* clubbing: Dancing/clubbing
* reading: Reading
* tv: Watching TV
* theater: Theater
* movies: Movies
* concerts: Going to concerts
* music: Music
* shopping: Shopping
* yoga: Yoga/meditation
End of explanation
"""
#features = list(["gender","age_o","race_o","goal","samerace","imprace","imprelig","date","go_out","career_c"])
features = list(["gender","date","go_out","sports","tvsports","exercise","dining","museums","art",
"hiking","gaming","clubbing","reading","tv","theater","movies","concerts","music",
"shopping","yoga"])
suffix_me = "_me"
suffix_partner = "_partner"
#add suffix to each element of list
def process_features_names(features, suffix_1, suffix_2):
features_me = [feat + suffix_1 for feat in features]
features_partner = [feat + suffix_2 for feat in features]
features_all = features_me + features_partner
return features_all
features_model = process_features_names(features, suffix_me, suffix_partner)
explanatory = feat_eng_df[features_model]
label = "match"  # target column kept from the left-hand dataframe in get_partner_features
explained = feat_eng_df[label]
"""
Explanation: Variables selection
End of explanation
"""
clf = tree.DecisionTreeClassifier(min_samples_split=20,min_samples_leaf=10,max_depth=4)
clf = clf.fit(explanatory, explained)
# Download http://www.graphviz.org/
with open("data.dot", 'w') as f:
f = tree.export_graphviz(clf, out_file=f, feature_names= features_model, class_names="match")
import subprocess
subprocess.call(['dot', '-Tpdf', 'data.dot', '-o' 'data.pdf'])
"""
Explanation: Decision Tree
End of explanation
"""
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(explanatory, explained, test_size=0.3, random_state=0)
parameters = [
{'criterion': ['gini','entropy'], 'max_depth': [4,6,10,12,14],
'min_samples_split': [10,20,30], 'min_samples_leaf': [10,15,20]
}
]
scores = ['precision', 'recall']
dtc = tree.DecisionTreeClassifier()
clf = GridSearchCV(dtc, parameters,n_jobs=3, cv=5, refit=True)
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print("")
clf = GridSearchCV(dtc, parameters, cv=5,
scoring='%s_macro' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print("")
print(clf.best_params_)
print("")
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print("")
best_param_dtc = tree.DecisionTreeClassifier(criterion="entropy",min_samples_split=10,min_samples_leaf=10,max_depth=14)
best_param_dtc = best_param_dtc.fit(explanatory, explained)
best_param_dtc.feature_importances_
raw_dataset.rename(columns={"age_o":"age_of_partner","race_o":"race_of_partner"},inplace=True)
"""
Explanation: Tuning Parameters
End of explanation
"""
raw_data = {
'subject_id': ['14', '15', '16', '17', '18'],
'first_name': ['Sue', 'Maria', 'Sandra', 'Kate', 'Aurelie'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan'],
'pid': ['4', '5', '6', '7', '8'],}
df_a = pd.DataFrame(raw_data, columns = ['subject_id', 'first_name', 'last_name','pid'])
df_a
raw_data = {
'subject_id': ['4', '5', '6', '7', '8'],
'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan'],
'pid': ['14', '15', '16', '17', '18'],}
df_b = pd.DataFrame(raw_data, columns = ['subject_id', 'first_name', 'last_name','pid'])
df_b
df_a.merge(df_b, left_on='pid', right_on='subject_id', how='outer', suffixes=('_me','_partner'))
"""
Explanation: Check
End of explanation
"""
|
hetland/python4geosciences
|
examples/intro.ipynb
|
mit
|
import os
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import colors, colorbar
import shapely.geometry as geometry
pd.set_option("display.max_rows", 5) # limit number of rows shown in dataframe
# display plots within the notebook
%matplotlib inline
import seaborn as sns # for better style in plots
import fiona
import cartopy.crs as ccrs # if you install yourself using Anaconda, use 'conda install -c scitools cartopy'
import cartopy.feature as feature
import numpy as np
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cmocean.cm as cmo # if you install yourself using Anaconda, use 'conda install -c conda-forge cmocean'
from datetime import datetime
"""
Explanation: NYC Taxis
There is a wealth of information available about taxi rides in New York City. We analyze it here to show the cool and powerful stuff we'll be doing in class!
I'm following some analysis from Drew Levitt, and doing some of my own things.
In Python, we import packages to bring in functionality. At first, this can be a little annoying because you can only use core language functions without importing more packages. However, you'll quickly see that this lets anyone write new packages for Python and integrate their code very easily, and since people post their code online, we can access a huge range of already-written functionality.
We import many packages here to do a wide variety of analysis and presentation in this example.
End of explanation
"""
# # May 2016 - too large! So I have previously limited it.
# url = 'https://s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2016-05.csv'
# loc = '../data/' # relative path location
# os.system('wget --directory-prefix=' + loc + ' ' + url) # this downloads the data
# # also decimated to use only every 10th row of data
# df = pd.read_csv(loc + 'yellow_tripdata_2016-05-01.csv', parse_dates=[1,2], index_col=[1], keep_date_col=True)
# df[::10].to_csv(loc + 'yellow_tripdata_2016-05-01_decimated.csv')
# loc = '../data/' # relative path location
# fname = '5904615?private_link=8df0e7f96aa9cad2539d'
# url = 'https://ndownloader.figshare.com/files/' + fname # on figshare
# We won't re-download data files in the class server but will just share what is already there.
# so this is commented out:
# os.system('wget --directory-prefix=' + loc + ' ' + url)
# os.rename(loc + fname, loc + 'yellow_tripdata_2016-05-01_decimated.csv')
"""
Explanation: Here are some niceties for the fonts in our plots.
Because the data files are quite large for this example, I previously did some work to cut them down in size. I have preserved my steps in the following cell. Then I saved the file to figshare to share it with you.
End of explanation
"""
df = pd.read_csv('../data/yellow_tripdata_2016-05-01_decimated.csv', parse_dates=[0, 2], index_col=[0], keep_date_col=True)
"""
Explanation: 1D analysis
The package pandas is used for time series analysis, or anything that can easily be put into a sort of Excel format. The taxi data lends itself to being read in this way because each taxi ride is a separate row in the csv file. It also allows us to send in special arguments to the call which tell the code how to understand dates and times correctly, which can otherwise be difficult.
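As a small, self-contained sketch (with made-up data rather than the taxi file), here is what parse_dates and index_col buy us: a DatetimeIndex, which is what makes time slicing and .resample() work later on.
```python
import io
import pandas as pd

csv = io.StringIO("pickup_datetime,fare\n2016-05-01 00:01:00,5.5\n2016-05-01 00:02:30,7.0")
small = pd.read_csv(csv, parse_dates=[0], index_col=[0])
print(small.index)  # DatetimeIndex(['2016-05-01 00:01:00', '2016-05-01 00:02:30'], ...)
```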
End of explanation
"""
df
"""
Explanation: We see that there are many columns of data in this file.
End of explanation
"""
df['trip_distance'].plot(figsize=(14,6))
"""
Explanation: Let's examine the data, starting with a time series. Here we have the length of each trip throughout the day.
End of explanation
"""
# resample to every 1 minute, taking the average of the nearby points
df['trip_distance'].resample('1T').mean().plot(figsize=(14,6))
"""
Explanation: The data is too dense to interpret very well, but we can resample very easily.
End of explanation
"""
df['fare_amount'].resample('1T').mean().plot(figsize=(14,6))
"""
Explanation: Now we can see that there tend to be longer trips in the morning -- maybe for commuting in? Or maybe more flights come in at that time of day, in a short period of time?
We see that the fare amount is visually correlated with the trip distance:
End of explanation
"""
g = sns.jointplot("trip_distance", "fare_amount", data=df.resample('1T').mean(), kind="reg",
xlim=(0, 10), ylim=(0, 50), color="r")
"""
Explanation: But how correlated are they? We can do some statistics using pandas, but let's try out seaborn since it is another great package and its specialty is statistics.
With seaborn, which we have already been using to make the defaults in our plots look nice, we can easily look at the visual correlation between these two properties, calculate the Pearson r coefficient and the p value, and look at the distribution of both variables:
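If you prefer to get the numbers directly rather than reading them off the plot, scipy gives the same statistic (a minimal sketch; the resampling mirrors the cell above):
```python
from scipy.stats import pearsonr

resampled = df[['trip_distance', 'fare_amount']].resample('1T').mean().dropna()
r, p = pearsonr(resampled['trip_distance'], resampled['fare_amount'])
print(r, p)
```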
End of explanation
"""
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111)
ax.axis([-74.05, -73.75, 40.6, 40.85]) # NYC region
# ax.axis([-74.02, -73.92, 40.7, 40.85]) # Manhatten
df.plot.scatter(ax=ax, x='pickup_longitude', y='pickup_latitude', s=0.5, c='k', alpha=0.1, grid=False)
ax.get_xaxis().get_major_formatter().set_useOffset(False)
"""
Explanation: The two properties are, indeed, highly correlated.
2D analysis
Now let's look at the data spatially to learn more. Here we plot all of the pickup locations.
End of explanation
"""
# NYC neighborhoods
fname_orig = '06463a12c2104adf86335df0170c25e3pediacitiesnycneighborhoods.geojson'
fname = 'nyc_neighborhoods.geojson'
url = 'http://catalog.civicdashboards.com/dataset/eea7c03e-9917-40b0-bba5-82e8e37d6739/resource/91778048-3c58-449c-a3f9-365ed203e914/download/'+ fname
loc = '../data/' # relative path location
# don't actually download this โ just use the existing copy
# os.system('wget --directory-prefix=' + loc + ' ' + url) # download neighborhoods
# os.rename(loc + fname, loc + fname)
"""
Explanation: We see the grid of Manhattan streets, as well as lighter outlying regions with fewer pickups, and a few distant pickup areas. What are the different regions?
Maybe it would be easier to look at the spatial patterns of taxi rides from a more aggregated perspective using NYC neighborhoods. To do this, we first download a file that contains polygons defining the edge of each neighborhood.
End of explanation
"""
nyc = geometry.MultiPolygon([geometry.shape(pol['geometry']) for pol in fiona.open(loc + fname)])
"""
Explanation: We read in each neighborhood as a separate polygon using the shapely and fiona packages, and store it in the nyc variable.
End of explanation
"""
import cartopy
cartopy.__version__
fig = plt.figure(figsize=(8,6))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([-74.4, -73.5, 40.3, 41.1])
# http://scitools.org.uk/cartopy/docs/latest/matplotlib/geoaxes.html?highlight=coastlines#cartopy.mpl.geoaxes.GeoAxes.coastlines
ax.coastlines(resolution='10m', linewidth=0.5)
ax.set_title('Neighborhoods in NYC', fontsize=16)
# labels and grid lines: http://scitools.org.uk/cartopy/docs/latest/matplotlib/gridliner.html
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=0.1, color='k', alpha=0.5, linestyle='-')
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 14, 'color': 'k'}
gl.ylabel_style = {'size': 14, 'color': 'k'}
for polygon in nyc:
ax.add_geometries([polygon], ccrs.PlateCarree())
"""
Explanation: What do these neighborhoods look like? We use the cartopy package here to show the neighborhoods nicely on a projected map, where we can also include the coastline (and other features if we wanted).
End of explanation
"""
# use every delta data point to save some time
delta = 20
# Define the pickups and dropffs as Points
pickups = geometry.MultiPoint(list(zip(df['pickup_longitude'][::delta], df['pickup_latitude'][::delta])))
dropoffs = geometry.MultiPoint(list(zip(df['dropoff_longitude'][::delta], df['dropoff_latitude'][::delta])))
# Use the Points to calculate pickup and dropff density
pickupdensity = np.zeros(len(nyc))
dropoffdensity = np.zeros(len(nyc))
for i, neighborhood in enumerate(nyc):
pickupdensity[i] = np.asarray([neighborhood.contains(pickup) for pickup in pickups]).sum()/neighborhood.area
dropoffdensity[i] = np.asarray([neighborhood.contains(dropoff) for dropoff in dropoffs]).sum()/neighborhood.area
"""
Explanation: Let's examine pickups and dropoffs by neighborhood. To do this, we need to count how many of each fall inside each neighborhood. The shapely package allows us to do this.
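The key operation is polygon.contains(point); here is a tiny sketch with a made-up square and points, just to show the API:
```python
from shapely.geometry import Point, Polygon

square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
print(square.contains(Point(0.5, 0.5)))  # True
print(square.contains(Point(2.0, 2.0)))  # False
```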
End of explanation
"""
# Show pickups vs. dropoffs by neighborhood per square mile
cmap = cmo.deep
# Find max value ahead of time so colorbars are matching and accurate
vmax = max((dropoffdensity.max(), pickupdensity.max()))
fig = plt.figure(figsize=(16,8))
fig.subplots_adjust(wspace=0.03)
# Pickups
ax = fig.add_subplot(1, 2, 1, projection=ccrs.PlateCarree())
ax.set_extent([-74.1, -73.7, 40.54, 40.92]) # NYC region
ax.coastlines(resolution='10m', linewidth=0.5)
ax.set_title('Pickups', fontsize=16)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=0.1, color='k', alpha=0.5, linestyle='-')
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 14, 'color': 'k'}
gl.ylabel_style = {'size': 14, 'color': 'k'}
for pickup, neighborhood in zip(pickupdensity, nyc):
color = cmap(pickup/pickupdensity.max())
ax.add_geometries([neighborhood], ccrs.PlateCarree(), facecolor=color)
# Make colorbar: http://matplotlib.org/examples/api/colorbar_only.html
cax = fig.add_axes([0.145, 0.8, 0.12, 0.02])
norm = colors.Normalize(vmin=0, vmax=vmax)
cb = colorbar.ColorbarBase(cax, cmap=cmap, norm=norm, orientation='horizontal')
cb.set_label('Pickups per square mile', fontsize=13)
cb.set_ticks(np.arange(0, vmax, vmax/3))
# Dropoffs
ax = fig.add_subplot(1, 2, 2, projection=ccrs.PlateCarree())
ax.set_extent([-74.1, -73.7, 40.54, 40.92]) # NYC region
ax.coastlines(resolution='10m', linewidth=0.5)
ax.set_title('Dropoffs', fontsize=16)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=0.1, color='k', alpha=0.5, linestyle='-')
gl.xlabels_top = False
gl.ylabels_left = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 14, 'color': 'k'}
gl.ylabel_style = {'size': 14, 'color': 'k'}
for dropoff, neighborhood in zip(dropoffdensity, nyc):
color = cmap(dropoff/dropoffdensity.max())
ax.add_geometries([neighborhood], ccrs.PlateCarree(), facecolor=color)
# Make colorbar: http://matplotlib.org/examples/api/colorbar_only.html
cax = fig.add_axes([0.535, 0.8, 0.12, 0.02])
norm = colors.Normalize(vmin=0, vmax=vmax)
cb = colorbar.ColorbarBase(cax, cmap=cmap, norm=norm, orientation='horizontal')
cb.set_label('Dropoffs per square mile', fontsize=13)
cb.set_ticks(np.arange(0, vmax, vmax/3))
"""
Explanation: Now we can plot the results
End of explanation
"""
# Show pickups - dropoffs by neighborhood
cmap = cmo.curl_r # colormap to use
# background of the plot
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent([-74.1, -73.7, 40.54, 40.92]) # NYC region
ax.coastlines(resolution='10m', linewidth=0.5)
ax.set_title('Pickups - dropoffs', fontsize=16)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=0.1, color='k', alpha=0.5, linestyle='-')
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 14, 'color': 'k'}
gl.ylabel_style = {'size': 14, 'color': 'k'}
# save the differences in pickups and dropoffs so the colorbar can be set correctly
diffs = [pickup*neighborhood.area - dropoff*neighborhood.area for pickup, dropoff, neighborhood in zip(pickupdensity, dropoffdensity, nyc)]
vmax = max(abs(np.asarray(diffs)))
# loop through to find the correct color for each amount of passengers
for diff, neighborhood in zip(diffs, nyc):
color = cmap(diff/(2*vmax) + 0.5) # shifts the number of passengers to be between 0 and 1, needed for the colormap
geoms = ax.add_geometries([neighborhood], ccrs.PlateCarree(), facecolor=color) # add the neighborhood to the plot
# Make colorbar: http://matplotlib.org/examples/api/colorbar_only.html
cax = fig.add_axes([0.2, 0.84, 0.25, 0.03])
norm = colors.Normalize(vmin=-vmax, vmax=vmax)
cb = colorbar.ColorbarBase(cax, cmap=cmap, norm=norm, orientation='horizontal')
cax.text(0.55, 1.1, 'More pickups', color='#18636D', transform=cax.transAxes, fontsize=12)
cax.text(0.0, 1.1, 'More dropoffs', color='#50124C', transform=cax.transAxes, fontsize=12)
"""
Explanation: We can see some differences in these maps, but it is a little hard to tell. Certainly there is more going on in the middle of downtown. Let's calculate a difference map instead.
End of explanation
"""
def calc(j, nyc):
'''
Calculate the differences in pickups and dropoffs for each neighborhood in nyc, for the first
5 minutes of the jth hour of the day.
'''
# get data for a chunk of time (5 min)
start = '2016-05-01 ' + str(j).zfill(2) + ':00'
stop = '2016-05-01 ' + str(j).zfill(2) + ':05'
pickups = geometry.MultiPoint(list(zip(df[start:stop]['pickup_longitude'], df[start:stop]['pickup_latitude'])))
dropoffs = geometry.MultiPoint(list(zip(df[start:stop]['dropoff_longitude'], df[start:stop]['dropoff_latitude'])))
pickupdensity = np.zeros(len(nyc))
dropoffdensity = np.zeros(len(nyc))
for i, neighborhood in enumerate(nyc):
pickupdensity[i] = np.asarray([neighborhood.contains(pickup) for pickup in pickups]).sum()/neighborhood.area
dropoffdensity[i] = np.asarray([neighborhood.contains(dropoff) for dropoff in dropoffs]).sum()/neighborhood.area
diffs = [pickup*neighborhood.area - dropoff*neighborhood.area for pickup, dropoff, neighborhood in zip(pickupdensity, dropoffdensity, nyc)]
return diffs
# save the differences in pickups and dropoffs so the colorbar can be set correctly
diffs = []
# Calculate for every other hour of the day
for j in range(1, 24, 2):
diffs.append(calc(j, nyc))
vmax = max(0, abs(np.asarray(diffs)).max())
"""
Explanation: Now we see that there are more pickups (greens) in certain regions, integrated throughout this particular day, and more dropoffs (reds) in other regions. Yet others cancel out to be about equal pickups and dropoffs.
But what is the pattern throughout the day? Maybe it changes? To see this, let's make a movie!
End of explanation
"""
# where to store figures
figloc = 'figures/intro/'
if not os.path.exists(figloc):
os.mkdir(figloc)
cmap = cmo.curl_r # colormap to use
# set up the plot
fig = plt.figure(figsize=(10,8))
icount = 0
for j in range(1, 24, 2):
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent([-74.1, -73.7, 40.54, 40.92]) # NYC region
# plot the background of each frame
ax.coastlines(resolution='10m', linewidth=0.5)
ax.set_title('Pickups - dropoffs', fontsize=16)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=0.1, color='k', alpha=0.5, linestyle='-')
gl.xlabels_top = False
gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 14, 'color': 'k'}
gl.ylabel_style = {'size': 14, 'color': 'k'}
# Make colorbar: http://matplotlib.org/examples/api/colorbar_only.html
cax = fig.add_axes([0.2, 0.84, 0.25, 0.03])
norm = colors.Normalize(vmin=-vmax, vmax=vmax)
cb = colorbar.ColorbarBase(cax, cmap=cmap, norm=norm, orientation='horizontal')
cax.text(0.55, 1.1, 'More pickups', color='#18636D', transform=cax.transAxes, fontsize=12)
cax.text(0.0, 1.1, 'More dropoffs', color='#50124C', transform=cax.transAxes, fontsize=12)
# Write out the time of day
ax.text(0.1, 0.82, datetime(2016, 5, 1, j).strftime('%-I%p'), transform=ax.transAxes)
# loop through to find the correct color for each amount of passengers
for diff, neighborhood in zip(diffs[icount], nyc):
color = cmap(diff/(2*vmax) + 0.5) # shifts the number of passengers to be between 0 and 1, needed for the colormap
ax.add_geometries([neighborhood], ccrs.PlateCarree(), facecolor=color) # add the neighborhood to the plot
icount += 1
# don't save the figure since we can't write to a file
# fig.savefig(figloc + str(j).zfill(2) + '.png', bbox_inches='tight', dpi=72)
fig.clear()
"""
Explanation: Now that we have calculated the differences, we can use similar code as before to plot up the difference in pickups and dropoffs throughout the day.
End of explanation
"""
# Don't actually do this since the figures are pre-saved.
# # First need to make it so the image files have an even number of pixels in each direction
# os.chdir(figloc)
# # This makes the figure have an even number of pixels. Only run this once.
# os.system("find . -iname '*.png' -maxdepth 1 -exec convert -gravity west -chop 1x0 {} {} \;")
# os.chdir('../..')
# Don't actually do this since the movie is already created beforehand
# if os.path.exists(figloc + 'movie.mp4'):
# os.remove(figloc + 'movie.mp4')
# # Then use ffmpeg to make an animation from the frames
# os.system("ffmpeg -r 3 -pattern_type glob -i " + "'" + figloc + "'" "'/*.png' -c:v libx264 -pix_fmt yuv420p -crf 25 " + figloc + "/movie.mp4")
"""
Explanation: Now we can use the Linux program ffmpeg to link the images together as a movie.
End of explanation
"""
from IPython.display import HTML
HTML("""
<video controls>
<source src="figures/intro/movie.mp4" type="video/mp4">
</video>
""")
"""
Explanation: Now we have our animation and we can view it here! Note that this embedded video did not work in Safari, but it did work in Chrome.
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.3/tutorials/pitch_yaw.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: Misalignment (Pitch & Yaw)
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['teffs'])
"""
Explanation: Now let's add a mesh dataset at a few different times so that we can see how the misalignment affects the surfaces of the stars.
End of explanation
"""
print(b['pitch@component'])
print(b['incl@constraint'])
"""
Explanation: Relevant Parameters
The 'pitch' parameter defines the misalignment of a given star in the same direction as the inclination. We can see how it is defined relative to the inclination by accessing the constraint.
Note that, by default, it is the inclination of the component that is constrained with the inclination of the orbit and the pitch as free parameters.
End of explanation
"""
print(b['yaw@component'])
print(b['long_an@constraint'])
"""
Explanation: Similarly, the 'yaw' parameter defines the misalignment in the direction of the longitude of the ascending node.
Note that, by default, it is the long_an of the component that is constrained with the long_an of the orbit and the yaw as free parameters.
End of explanation
"""
print(b['long_an@primary@component'].description)
"""
Explanation: The long_an of a star is a bit of an odd concept, and really is just meant to be analogous to the inclination case. In reality, it is the angle of the "equator" of the star on the sky.
End of explanation
"""
b['syncpar@secondary'] = 5.0
b['pitch@secondary'] = 0
b['yaw@secondary'] = 0
b.run_compute(irrad_method='none')
"""
Explanation: Note also that the system is aligned by default, with the pitch and yaw both set to zero.
Misaligned Systems
To create a misaligned system, we must set the pitch and/or yaw to be non-zero.
But first let's create an aligned system for comparison. In order to easily see the spin-axis, we'll plot the effective temperature and spin-up our star to exaggerate the effect.
End of explanation
"""
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
"""
Explanation: We'll plot the mesh as it would be seen on the plane of the sky.
End of explanation
"""
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: and also with the line-of-sight along the x-axis.
End of explanation
"""
b['pitch@secondary'] = 30
b['yaw@secondary'] = 0
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: If we set the pitch to be non-zero, we'd expect to see a change in the spin axis along the line-of-sight.
End of explanation
"""
b['pitch@secondary@component'] = 0
b['yaw@secondary@component'] = 30
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='us', y='vs', show=True)
afig, mplfig = b.plot(time=0.0, fc='teffs', ec='none', x='ws', y='vs', show=True)
"""
Explanation: And if we set the yaw to be non-zero, we'll see the rotation axis rotate on the plane of the sky.
End of explanation
"""
|
jinntrance/MOOC
|
coursera/deep-neural-network/quiz and assignments/RNN/Dinosaurus+Island+--+Character+level+language+model+final+-+v3.ipynb
|
cc0-1.0
|
import numpy as np
from utils import *
import random
"""
Explanation: Character level language model - Dinosaurus land
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
<table>
<td>
<img src="images/dino.jpg" style="width:250;height:300px;">
</td>
</table>
Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this dataset. (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath!
By completing this assignment you will learn:
How to store text data for processing using an RNN
How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit
How to build a character-level text generation recurrent neural network
Why clipping the gradients is important
We will begin by loading in some functions that we have provided for you in rnn_utils. Specifically, you have access to functions such as rnn_forward and rnn_backward which are equivalent to those you've implemented in the previous assignment.
End of explanation
"""
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
"""
Explanation: 1 - Problem Statement
1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
End of explanation
"""
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
"""
Explanation: The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the <EOS> (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character. This will help you figure out which index corresponds to which character in the probability distribution output of the softmax layer. Below, char_to_ix and ix_to_char are the python dictionaries.
End of explanation
"""
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
    # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWax, dWaa, dWya, db, dby]:
np.clip(gradient, -maxValue, maxValue, out = gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
"""
Explanation: 1.2 - Overview of the model
Your model will have the following structure:
Initialize parameters
Run the optimization loop
Forward propagation to compute the loss function
Backward propagation to compute the gradients with respect to the loss function
Clip the gradients to avoid exploding gradients
Using the gradients, update your parameter with the gradient descent update rule.
Return the learned parameters
<img src="images/rnn.png" style="width:450;height:300px;">
<caption><center> Figure 1: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a RNN - Step by Step". </center></caption>
At each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set, while $Y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is such that at every time-step $t$, we have $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$.
2 - Building blocks of the model
In this part, you will build two important blocks of the overall model:
- Gradient clipping: to avoid exploding gradients
- Sampling: a technique used to generate characters
You will then apply these two functions to build the model.
2.1 - Clipping the gradients in the optimization loop
In this section you will implement the clip function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not "exploding," meaning taking on overly large values.
In the exercise below, you will implement a function clip that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a maxValue (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone.
<img src="images/clip.png" style="width:400;height:150px;">
<caption><center> Figure 2: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight "exploding gradient" problems. </center></caption>
Exercise: Implement the function below to return the clipped gradients of your dictionary gradients. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this hint for examples of how to clip in numpy. You will need to use the argument out = ....
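For example, here is a quick (ungraded) illustration of in-place clipping with numpy:
```python
import numpy as np
a = np.array([-12.5, 3.0, 8.7, 15.2])
np.clip(a, -10, 10, out=a)   # clips a in place
print(a)                     # now [-10., 3., 8.7, 10.]
```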
End of explanation
"""
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
    # Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size, 1))
    # Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a, 1))
    # Create an empty list of indices; this will contain the indices of the characters we generate (≈1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(np.matmul(Wax, x) + np.matmul(Waa, a_prev) + b)
z = np.matmul(Wya, a) + by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = np.random.choice(range(vocab_size), p = y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
"""
Explanation: Expected output:
<table>
<tr>
<td>
**gradients["dWaa"][1][2] **
</td>
<td>
10.0
</td>
</tr>
<tr>
<td>
**gradients["dWax"][3][1]**
</td>
<td>
-10.0
</td>
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td>
0.29713815361
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td>
[ 10.]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>
[ 8.45833407]
</td>
</tr>
</table>
2.2 - Sampling
Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:
<img src="images/dinos3.png" style="width:500;height:300px;">
<caption><center> Figure 3: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption>
Exercise: Implement the sample function below to sample characters. You need to carry out 4 steps:
Step 1: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$
Step 2: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a softmax() function that you can use.
Step 3: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use np.random.choice.
Here is an example of how to use np.random.choice():
python
np.random.seed(0)
p = np.array([0.1, 0.0, 0.7, 0.2])
index = np.random.choice([0, 1, 2, 3], p = p.ravel())
This means that you will pick the index according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
Step 4: The last step to implement in sample() is to overwrite the variable x, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name.
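As a small illustration of Step 4 (the index here is made up), the one-hot update looks like:
```python
import numpy as np
vocab_size = 27
idx = 3                        # pretend this is the index we just sampled
x = np.zeros((vocab_size, 1))  # reset x ...
x[idx] = 1                     # ... and put a single 1 at the sampled index
```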
End of explanation
"""
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
    # Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
    # Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
    # Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
    # Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
"""
Explanation: Expected output:
<table>
<tr>
<td>
**list of sampled indices:**
</td>
<td>
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, <br>
7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]
</td>
</tr><tr>
<td>
**list of sampled characters:**
</td>
<td>
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', <br>
'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', <br>
'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n']
</td>
</tr>
</table>
3 - Building the language model
It is time to build the character-level language model for text generation.
3.1 - Gradient descent
In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:
Forward propagate through the RNN to compute the loss
Backward propagate through time to compute the gradients of the loss with respect to the parameters
Clip the gradients if necessary
Update your parameters using gradient descent
Exercise: Implement this optimization process (one step of stochastic gradient descent).
We provide you with the following functions:
```python
def rnn_forward(X, Y, a_prev, parameters):
""" Performs the forward propagation through the RNN and computes the cross-entropy loss.
It returns the loss' value as well as a "cache" storing values to be used in the backpropagation."""
....
return loss, cache
def rnn_backward(X, Y, parameters, cache):
""" Performs the backward propagation through time to compute the gradients of the loss with respect
to the parameters. It returns also all the hidden states."""
...
return gradients, a
def update_parameters(parameters, gradients, learning_rate):
""" Updates parameters using the Gradient Descent Update Rule."""
...
return parameters
```
End of explanation
"""
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text, size of the vocabulary
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
    # Initialize the hidden state of your RNN
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
        # Use the hint above to define one training example (X,Y) (≈2 lines)
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters)
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
"""
Explanation: Expected output:
<table>
<tr>
<td>
**Loss **
</td>
<td>
126.503975722
</td>
</tr>
<tr>
<td>
**gradients["dWaa"][1][2]**
</td>
<td>
0.194709315347
</td>
<tr>
<td>
**np.argmax(gradients["dWax"])**
</td>
<td> 93
</td>
</tr>
<tr>
<td>
**gradients["dWya"][1][2]**
</td>
<td> -0.007773876032
</td>
</tr>
<tr>
<td>
**gradients["db"][4]**
</td>
<td> [-0.06809825]
</td>
</tr>
<tr>
<td>
**gradients["dby"][1]**
</td>
<td>[ 0.01538192]
</td>
</tr>
<tr>
<td>
**a_last[4]**
</td>
<td> [-1.]
</td>
</tr>
</table>
3.2 - Training the model
Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order.
Exercise: Follow the instructions and implement model(). When examples[index] contains one dinosaur name (string), to create an example (X, Y), you can use this:
```python
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
```
Note that we use index = j % len(examples), where j = 1....num_iterations, to make sure that examples[index] is always a valid index (index is smaller than len(examples)).
The first entry of X being None will be interpreted by rnn_forward() as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that Y is equal to X but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name.
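As a concrete (made-up) illustration: because the sorted vocabulary puts '\n' at index 0 and 'a' at index 1, a three-letter name like 'aba' would give:
```python
# char_to_ix['\n'] = 0, char_to_ix['a'] = 1, char_to_ix['b'] = 2
X = [None, 1, 2, 1]   # [None] + the indices of 'a', 'b', 'a'
Y = [1, 2, 1, 0]      # X shifted one step to the left, with '\n' (index 0) appended
```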
End of explanation
"""
parameters = model(data, ix_to_char, char_to_ix)
"""
Explanation: Run the following cell. You should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
End of explanation
"""
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
"""
Explanation: Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like maconucon, marloralus and macingsersaurus. Your model hopefully also learned that dinosaur names tend to end in saurus, don, aura, tor, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, dromaeosauroides is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!
<img src="images/mangosaurus.jpeg" style="width:250;height:300px;">
4 - Writing like Shakespeare
The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative.
A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text, e.g., where a character appearing somewhere in a sequence can influence what should be a different character much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short.
<img src="images/shakespeare.jpg" style="width:500;height:400px;">
<caption><center> Let's become poets! </center></caption>
We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
End of explanation
"""
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
"""
Explanation: To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called "The Sonnets".
Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run generate_output, which will prompt you for an input (<40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
End of explanation
"""
|
daniestevez/jupyter_notebooks
|
dslwp/DSLWP-B orbital parameter analysis.ipynb
|
gpl-3.0
|
%matplotlib inline
"""
Explanation: DSLWP-B orbital parameter analysis
In this notebook we use GMAT to analyse the Keplerian orbital parameters derived from the DSLWP-B tracking files published by Wei Mingchuan BG2BHC. The ECEF cartesian state is loaded from the first line of each tracking file; the orbit is then propagated, and some Keplerian state parameters are plotted to compare the elliptical orbits from different tracking files.
End of explanation
"""
GMAT_PATH = '/home/daniel/GMAT/R2018a/bin/GMAT-R2018a'
import numpy as np
import matplotlib.pyplot as plt
from astropy.time import Time
import subprocess
# Larger figure size
fig_size = [10, 6]
plt.rcParams['figure.figsize'] = fig_size
"""
Explanation: Set this to the path of your GMAT executable:
End of explanation
"""
gmat_script_template = """
%----------------------------------------
%---------- Spacecraft
%----------------------------------------
Create Spacecraft DSLWP_B;
DSLWP_B.DateFormat = UTCModJulian;
DSLWP_B.Epoch = '{epoch}';
DSLWP_B.CoordinateSystem = EarthFixed;
DSLWP_B.DisplayStateType = Cartesian;
DSLWP_B.X = {x};
DSLWP_B.Y = {y};
DSLWP_B.Z = {z};
DSLWP_B.VX = {vx};
DSLWP_B.VY = {vy};
DSLWP_B.VZ = {vz};
DSLWP_B.DryMass = 45;
DSLWP_B.DragArea = 0.25;
DSLWP_B.SRPArea = 0.25;
%----------------------------------------
%---------- ForceModels
%----------------------------------------
Create ForceModel LunaProp_ForceModel;
LunaProp_ForceModel.CentralBody = Luna;
LunaProp_ForceModel.PrimaryBodies = {{Luna}};
LunaProp_ForceModel.PointMasses = {{Earth, Jupiter, Mars, Neptune, Saturn, Sun, Uranus, Venus}};
LunaProp_ForceModel.Drag = None;
LunaProp_ForceModel.SRP = On;
LunaProp_ForceModel.RelativisticCorrection = On;
LunaProp_ForceModel.ErrorControl = RSSStep;
LunaProp_ForceModel.GravityField.Luna.Degree = 10;
LunaProp_ForceModel.GravityField.Luna.Order = 10;
LunaProp_ForceModel.GravityField.Luna.StmLimit = 100;
LunaProp_ForceModel.GravityField.Luna.PotentialFile = 'LP165P.cof';
LunaProp_ForceModel.GravityField.Luna.TideModel = 'None';
%----------------------------------------
%---------- Propagators
%----------------------------------------
Create Propagator LunaProp;
LunaProp.FM = LunaProp_ForceModel;
LunaProp.Type = PrinceDormand78;
LunaProp.InitialStepSize = 1;
LunaProp.Accuracy = 1e-13;
LunaProp.MinStep = 0;
LunaProp.MaxStep = 600;
%----------------------------------------
%---------- Coordinate Systems
%----------------------------------------
Create CoordinateSystem LunaInertial;
LunaInertial.Origin = Luna;
LunaInertial.Axes = BodyInertial;
%----------------------------------------
%---------- Subscribers
%----------------------------------------
Create OrbitView LunaOrbitView;
GMAT LunaOrbitView.SolverIterations = None;
GMAT LunaOrbitView.UpperLeft = [ 0.1801470588235294 0.04190751445086705 ];
GMAT LunaOrbitView.Size = [ 0.9926470588235294 0.9552023121387283 ];
GMAT LunaOrbitView.RelativeZOrder = 126;
GMAT LunaOrbitView.Maximized = true;
GMAT LunaOrbitView.Add = {{DSLWP_B, Earth, Luna, Sun}};
GMAT LunaOrbitView.CoordinateSystem = LunaInertial;
GMAT LunaOrbitView.DrawObject = [ true true true true ];
GMAT LunaOrbitView.DataCollectFrequency = 1;
GMAT LunaOrbitView.UpdatePlotFrequency = 50;
GMAT LunaOrbitView.NumPointsToRedraw = 0;
GMAT LunaOrbitView.ShowPlot = true;
GMAT LunaOrbitView.MaxPlotPoints = 20000;
GMAT LunaOrbitView.ShowLabels = true;
GMAT LunaOrbitView.ViewPointReference = Luna;
GMAT LunaOrbitView.ViewPointVector = [ 30000 0 0 ];
GMAT LunaOrbitView.ViewDirection = Luna;
GMAT LunaOrbitView.ViewScaleFactor = 1;
GMAT LunaOrbitView.ViewUpCoordinateSystem = LunaInertial;
GMAT LunaOrbitView.ViewUpAxis = Z;
GMAT LunaOrbitView.EclipticPlane = Off;
GMAT LunaOrbitView.XYPlane = On;
GMAT LunaOrbitView.WireFrame = Off;
GMAT LunaOrbitView.Axes = On;
GMAT LunaOrbitView.Grid = Off;
GMAT LunaOrbitView.SunLine = Off;
GMAT LunaOrbitView.UseInitialView = On;
GMAT LunaOrbitView.StarCount = 7000;
GMAT LunaOrbitView.EnableStars = On;
GMAT LunaOrbitView.EnableConstellations = Off;
Create ReportFile OrbitReport;
OrbitReport.Filename = '/home/daniel/jupyter_notebooks/dslwp/OrbitReport_{label}.txt';
OrbitReport.Add = {{DSLWP_B.UTCModJulian, DSLWP_B.Luna.SMA, DSLWP_B.Luna.ECC, DSLWP_B.LunaInertial.INC, DSLWP_B.LunaInertial.RAAN, DSLWP_B.LunaInertial.AOP, DSLWP_B.Luna.MA, DSLWP_B.Luna.TA}};
OrbitReport.WriteHeaders = false;
OrbitReport.WriteReport = true;
%----------------------------------------
%---------- Mission Sequence
%----------------------------------------
BeginMissionSequence;
Toggle OrbitReport Off
If DSLWP_B.UTCModJulian <= {start}
Propagate LunaProp(DSLWP_B) {{DSLWP_B.UTCModJulian = {start}}}
Else
Propagate BackProp LunaProp(DSLWP_B) {{DSLWP_B.UTCModJulian = {start}}}
EndIf
Toggle OrbitReport On
Propagate LunaProp(DSLWP_B) {{DSLWP_B.UTCModJulian = {end}}}
"""
"""
Explanation: The GMAT script template contains fields ready to be filled in using Python's format() function.
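As a minimal sketch of how the template gets filled (the epoch value here is invented), note that doubled braces such as {{Luna}} survive format() as literal braces:
```python
template = "DSLWP_B.Epoch = '{epoch}';\nLunaProp_ForceModel.PrimaryBodies = {{Luna}};"
print(template.format(epoch=28560.0))
# DSLWP_B.Epoch = '28560.0';
# LunaProp_ForceModel.PrimaryBodies = {Luna};
```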
End of explanation
"""
mjd_unixtimestamp_offset = 10587.5
seconds_in_day = 3600 * 24
def mjd2unixtimestamp(m):
return (m - mjd_unixtimestamp_offset) * seconds_in_day
def unixtimestamp2mjd(u):
return u / seconds_in_day + mjd_unixtimestamp_offset
unixtimestamp2mjd(1528607994)
"""
Explanation: Conversion between UNIX timestamp (used by the tracking files) and GMAT Modified Julian Day.
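A quick sanity check that the two helpers are inverses of each other (the timestamp is just the example value used below):
```python
u = 1528607994
m = unixtimestamp2mjd(u)
print(m)                         # the GMAT UTCModJulian epoch for this timestamp
print(mjd2unixtimestamp(m) - u)  # round-trip error, should be ~0
```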
End of explanation
"""
def load_tracking_file(path):
ncols = 7
data = np.fromfile(path, sep=' ', count=ncols)
return data
def load_orbit_file(path):
ncols = 8
data = np.fromfile(path, sep=' ')
return data.reshape((data.size // ncols, ncols))
"""
Explanation: Utility functions to load the first row from a tracking file and to load the Keplerian state report from GMAT.
End of explanation
"""
utc = 0
sma = 1
ecc = 2
inc = 3
raan = 4
aop = 5
ma = 6
ta = 7
"""
Explanation: Keys for each of the columns in the orbit (Keplerian state) report.
End of explanation
"""
def gmat_propagate_tracking(track, start, end, label = '', do_not_close = False):
data = {'label' : label, 'start' : start, 'end' : end}
data['epoch'] = unixtimestamp2mjd(track[0])
data['x'], data['y'], data['z'] = track[1:4]
data['vx'], data['vy'], data['vz'] = track[4:7]
SCRIPT_PATH = '/tmp/gmat.script'
with open(SCRIPT_PATH, 'w') as f:
f.write(gmat_script_template.format(**data))
subprocess.call([GMAT_PATH, '-r', SCRIPT_PATH] + (['-x'] if not do_not_close else []))
"""
Explanation: The function below takes the data from a tracking file, generates a GMAT script and executes it. GMAT is closed automatically after the script has run unless do_not_close is set to True. This can be useful to examine the simulation output in more detail.
End of explanation
"""
#parts = ['20180526', '20180528', '20180529', '20180531', '20180601', '20180602', '20180603', '20180607', '20180609']
#parts = ['20180602', '20180603', '20180607', '20180609', '20180610', '20180615', '20180619', '20180622']
#parts = ['20180610', '20180615', '20180619', '20180622']
parts = ['20180629', '20180714', '20180727a']
parts = ['20180727a']
parts = ['20180622']
parts = ['20180727a', '20180803', '20180812', '20180814', '20180816', '20180818']
parts = ['20180812', '20180912', '20180914', '20180916', '20180930', '20181004', '20181006']
parts = ['20180812', '20180912', '20180914', '20180916', '20180930', '20181004', '20181006']
parts = ['20181006', '20181008', '20181010', '20181013', '20181015', '20181017', '20181019', '20181021']
parts = ['20181019']
parts = ['20190317']
parts = ['20190426']
parts = ['20190520']
parts = ['20190603']
parts = ['20190630']
for part in parts:
tracking = load_tracking_file('tracking_files/program_tracking_dslwp-b_{}.txt'.format(part))
gmat_propagate_tracking(tracking, start = '28560', end = '28570', label = part)
"""
Explanation: Load the cartesian state from each tracking file, propagate the orbit and write a Keplerian state report using GMAT.
End of explanation
"""
fig1 = plt.figure(figsize = [15,8], facecolor='w')
fig2 = plt.figure(figsize = [15,8], facecolor='w')
fig3 = plt.figure(figsize = [15,8], facecolor='w')
fig4 = plt.figure(figsize = [15,8], facecolor='w')
fig5 = plt.figure(figsize = [15,8], facecolor='w')
fig6 = plt.figure(figsize = [15,8], facecolor='w')
sub1 = fig1.add_subplot(111)
sub2 = fig2.add_subplot(111)
sub3 = fig3.add_subplot(111)
sub4 = fig4.add_subplot(111)
sub5 = fig5.add_subplot(111)
sub6 = fig6.add_subplot(111)
for part in parts:
orbit = load_orbit_file('OrbitReport_{}.txt'.format(part))
t = Time(mjd2unixtimestamp(orbit[:,utc]), format='unix')
sub1.plot(t.datetime, orbit[:,sma])
sub2.plot(t.datetime, orbit[:,ma])
sub3.plot(t.datetime, orbit[:,ecc])
sub4.plot(t.datetime, orbit[:,aop])
sub5.plot(t.datetime, orbit[:,inc])
    sub6.plot(t.datetime, orbit[:,raan])
sub1.legend(parts)
sub2.legend(parts)
sub3.legend(parts)
sub4.legend(parts)
sub5.legend(parts)
sub6.legend(parts)
sub1.set_xlabel('UTC time')
sub2.set_xlabel('UTC time')
sub3.set_xlabel('UTC time')
sub4.set_xlabel('UTC time')
sub5.set_xlabel('UTC time')
sub6.set_xlabel('UTC time')
sub1.set_ylabel('SMA (km)')
sub2.set_ylabel('MA (deg)')
sub3.set_ylabel('ECC')
sub4.set_ylabel('AOP (deg)')
sub5.set_ylabel('INC (deg)')
sub6.set_ylabel('RAAN (deg)')
sub1.set_title('Semi-major axis')
sub2.set_title('Mean anomaly')
sub3.set_title('Eccentricity')
sub4.set_title('Argument of periapsis')
sub5.set_title('Inclination')
sub6.set_title('Right ascension of ascending node');
plt.figure(figsize = [15,8], facecolor='w')
for part in parts:
orbit = load_orbit_file('OrbitReport_{}.txt'.format(part))
t = Time(mjd2unixtimestamp(orbit[:,utc]), format='unix')
plt.plot(t[:200].datetime, orbit[:200,ma])
plt.legend(parts)
plt.xlabel('UTC time')
plt.ylabel('MA (deg)')
plt.title('Mean anomaly');
"""
Explanation: Plot the orbital parameters which vary significantly between the different tracking files.
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.2/tutorials/atm_passbands.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: Atmospheres & Passbands
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
"""
Explanation: And we'll add a single light curve dataset to expose all the passband-dependent options.
End of explanation
"""
b['atm']
b['atm@primary']
b['atm@primary'].description
b['atm@primary'].choices
"""
Explanation: Relevant Parameters
An 'atm' parameter exists for each of the components in the system (for each set of compute options) and defines which atmosphere table should be used.
By default, these are set to 'ck2004' (Castelli-Kurucz) but can be set to 'blackbody' as well as 'extern_atmx' and 'extern_planckint' (which are included primarily for direct comparison with PHOEBE legacy).
End of explanation
"""
b['ld_func@primary']
b['atm@primary'] = 'blackbody'
print(b.run_checks())
b['ld_mode@primary'] = 'manual'
b['ld_func@primary'] = 'logarithmic'
print(b.run_checks())
"""
Explanation: Note that if you change the value of 'atm' to anything other than 'ck2004', the corresponding 'ld_func' will need to be changed to something other than 'interp' (warnings and errors will be raised to remind you of this).
End of explanation
"""
b['passband']
"""
Explanation: A 'passband' parameter exists for each passband-dependent dataset (i.e. not meshes or orbits, but light curves and radial velocities). This parameter dictates which passband should be used for the computation of all intensities.
End of explanation
"""
print(b['passband'].choices)
"""
Explanation: The available choices will include both locally installed passbands as well as passbands currently available from the online PHOEBE repository. If you choose an online-passband, it will be downloaded and installed locally as soon as required by b.run_compute.
End of explanation
"""
print(phoebe.list_installed_passbands())
"""
Explanation: To see your current locally-installed passbands, call phoebe.list_installed_passbands().
End of explanation
"""
print(phoebe.list_passband_directories())
"""
Explanation: These installed passbands can be in any of a number of directories, which can be accessed via phoebe.list_passband_directories().
The first entry is the global location - this is where passbands can be stored by a server-admin to be available to all PHOEBE-users on that machine.
The second entry is the local location - this is where individual users can store passbands and where PHOEBE will download and install passbands (by default).
End of explanation
"""
print(phoebe.list_online_passbands())
"""
Explanation: To see the passbands available from the online repository, call phoebe.list_online_passbands().
End of explanation
"""
phoebe.download_passband('Cousins:Rc')
print(phoebe.list_installed_passbands())
"""
Explanation: Lastly, to manually download and install one of these online passbands, you can do so explicitly via phoebe.download_passband or by visiting tables.phoebe-project.org. See also the tutorial on updating passbands.
Note that this isn't necessary unless you want to explicitly download passbands before needed by run_compute (perhaps if you're expecting to have unreliable network connection in the future and want to ensure you have all needed passbands).
End of explanation
"""
|
mdda/deep-learning-workshop
|
notebooks/2-CNN/4-ImageNet/3-inception-v3_theano.ipynb
|
mit
|
import lasagne
from lasagne.layers import InputLayer
from lasagne.layers import Conv2DLayer, Pool2DLayer
from lasagne.layers import DenseLayer
from lasagne.layers import GlobalPoolLayer
from lasagne.layers import ConcatLayer
from lasagne.layers.normalization import batch_norm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def bn_conv(input_layer, **kwargs):
l = Conv2DLayer(input_layer, **kwargs)
l = batch_norm(l, epsilon=0.001)
return l
def inceptionA(input_layer, nfilt):
# Corresponds to a modified version of figure 5 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=1)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2 = bn_conv(l2, num_filters=nfilt[1][1], filter_size=5, pad=2)
l3 = bn_conv(input_layer, num_filters=nfilt[2][0], filter_size=1)
l3 = bn_conv(l3, num_filters=nfilt[2][1], filter_size=3, pad=1)
l3 = bn_conv(l3, num_filters=nfilt[2][2], filter_size=3, pad=1)
l4 = Pool2DLayer(input_layer, pool_size=3, stride=1, pad=1, mode='average_exc_pad')
l4 = bn_conv(l4, num_filters=nfilt[3][0], filter_size=1)
return ConcatLayer([l1, l2, l3, l4])
def inceptionB(input_layer, nfilt):
# Corresponds to a modified version of figure 10 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=3, stride=2)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2 = bn_conv(l2, num_filters=nfilt[1][1], filter_size=3, pad=1)
l2 = bn_conv(l2, num_filters=nfilt[1][2], filter_size=3, stride=2)
l3 = Pool2DLayer(input_layer, pool_size=3, stride=2)
return ConcatLayer([l1, l2, l3])
def inceptionC(input_layer, nfilt):
# Corresponds to figure 6 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=1)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2 = bn_conv(l2, num_filters=nfilt[1][1], filter_size=(1, 7), pad=(0, 3))
l2 = bn_conv(l2, num_filters=nfilt[1][2], filter_size=(7, 1), pad=(3, 0))
l3 = bn_conv(input_layer, num_filters=nfilt[2][0], filter_size=1)
l3 = bn_conv(l3, num_filters=nfilt[2][1], filter_size=(7, 1), pad=(3, 0))
l3 = bn_conv(l3, num_filters=nfilt[2][2], filter_size=(1, 7), pad=(0, 3))
l3 = bn_conv(l3, num_filters=nfilt[2][3], filter_size=(7, 1), pad=(3, 0))
l3 = bn_conv(l3, num_filters=nfilt[2][4], filter_size=(1, 7), pad=(0, 3))
l4 = Pool2DLayer(input_layer, pool_size=3, stride=1, pad=1, mode='average_exc_pad')
l4 = bn_conv(l4, num_filters=nfilt[3][0], filter_size=1)
return ConcatLayer([l1, l2, l3, l4])
def inceptionD(input_layer, nfilt):
# Corresponds to a modified version of figure 10 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=1)
l1 = bn_conv(l1, num_filters=nfilt[0][1], filter_size=3, stride=2)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2 = bn_conv(l2, num_filters=nfilt[1][1], filter_size=(1, 7), pad=(0, 3))
l2 = bn_conv(l2, num_filters=nfilt[1][2], filter_size=(7, 1), pad=(3, 0))
l2 = bn_conv(l2, num_filters=nfilt[1][3], filter_size=3, stride=2)
l3 = Pool2DLayer(input_layer, pool_size=3, stride=2)
return ConcatLayer([l1, l2, l3])
def inceptionE(input_layer, nfilt, pool_mode):
# Corresponds to figure 7 in the paper
l1 = bn_conv(input_layer, num_filters=nfilt[0][0], filter_size=1)
l2 = bn_conv(input_layer, num_filters=nfilt[1][0], filter_size=1)
l2a = bn_conv(l2, num_filters=nfilt[1][1], filter_size=(1, 3), pad=(0, 1))
l2b = bn_conv(l2, num_filters=nfilt[1][2], filter_size=(3, 1), pad=(1, 0))
l3 = bn_conv(input_layer, num_filters=nfilt[2][0], filter_size=1)
l3 = bn_conv(l3, num_filters=nfilt[2][1], filter_size=3, pad=1)
l3a = bn_conv(l3, num_filters=nfilt[2][2], filter_size=(1, 3), pad=(0, 1))
l3b = bn_conv(l3, num_filters=nfilt[2][3], filter_size=(3, 1), pad=(1, 0))
l4 = Pool2DLayer(input_layer, pool_size=3, stride=1, pad=1, mode=pool_mode)
l4 = bn_conv(l4, num_filters=nfilt[3][0], filter_size=1)
return ConcatLayer([l1, l2a, l2b, l3a, l3b, l4])
def build_network():
net = {}
net['input'] = InputLayer((None, 3, 299, 299))
net['conv'] = bn_conv(net['input'], num_filters=32, filter_size=3, stride=2)
net['conv_1'] = bn_conv(net['conv'], num_filters=32, filter_size=3)
net['conv_2'] = bn_conv(net['conv_1'], num_filters=64, filter_size=3, pad=1)
net['pool'] = Pool2DLayer(net['conv_2'], pool_size=3, stride=2, mode='max')
net['conv_3'] = bn_conv(net['pool'], num_filters=80, filter_size=1)
net['conv_4'] = bn_conv(net['conv_3'], num_filters=192, filter_size=3)
net['pool_1'] = Pool2DLayer(net['conv_4'], pool_size=3, stride=2, mode='max')
net['mixed/join'] = inceptionA(
net['pool_1'], nfilt=((64,), (48, 64), (64, 96, 96), (32,)))
net['mixed_1/join'] = inceptionA(
net['mixed/join'], nfilt=((64,), (48, 64), (64, 96, 96), (64,)))
net['mixed_2/join'] = inceptionA(
net['mixed_1/join'], nfilt=((64,), (48, 64), (64, 96, 96), (64,)))
net['mixed_3/join'] = inceptionB(
net['mixed_2/join'], nfilt=((384,), (64, 96, 96)))
net['mixed_4/join'] = inceptionC(
net['mixed_3/join'],
nfilt=((192,), (128, 128, 192), (128, 128, 128, 128, 192), (192,)))
net['mixed_5/join'] = inceptionC(
net['mixed_4/join'],
nfilt=((192,), (160, 160, 192), (160, 160, 160, 160, 192), (192,)))
net['mixed_6/join'] = inceptionC(
net['mixed_5/join'],
nfilt=((192,), (160, 160, 192), (160, 160, 160, 160, 192), (192,)))
net['mixed_7/join'] = inceptionC(
net['mixed_6/join'],
nfilt=((192,), (192, 192, 192), (192, 192, 192, 192, 192), (192,)))
net['mixed_8/join'] = inceptionD(
net['mixed_7/join'],
nfilt=((192, 320), (192, 192, 192, 192)))
net['mixed_9/join'] = inceptionE(
net['mixed_8/join'],
nfilt=((320,), (384, 384, 384), (448, 384, 384, 384), (192,)),
pool_mode='average_exc_pad')
net['mixed_10/join'] = inceptionE(
net['mixed_9/join'],
nfilt=((320,), (384, 384, 384), (448, 384, 384, 384), (192,)),
pool_mode='max')
net['pool3'] = GlobalPoolLayer(net['mixed_10/join'])
net['softmax'] = DenseLayer(net['pool3'], num_units=1008, nonlinearity=lasagne.nonlinearities.softmax)
return net
"""
Explanation: Modern Network :: Pre-Trained for ImageNet
This example demonstrates using a network pretrained on ImageNet for classification. This image recognition task involved recognising 1000 different classes.
The Model 'inception v3'
This model was created by Google, and detailed in "Rethinking the Inception Architecture for Computer Vision", and was state-of-the-art as of Dec-2015.
The model parameter file is licensed Apache 2.0, and has already been downloaded into the ./data/inception_v3 directory. The parameter file is ~80Mb of data. And that's considered small for this type of model.
End of explanation
"""
net = build_network()
output_layer = net['softmax']
print("Defined Inception3 model")
import pickle
params = pickle.load(open('./data/inception3/inception_v3.pkl', 'rb'), encoding='iso-8859-1')
#print("Saved model params.keys = ", params.keys())
#print(" License : "+params['LICENSE']) # Apache 2.0
classes = params['synset words']
lasagne.layers.set_all_param_values(output_layer, params['param values'])
print("Loaded Model")
from model import inception_v3 # This is for image preprocessing functions
"""
Explanation: Load the model parameters and metadata
End of explanation
"""
image_files = [
'./images/grumpy-cat_224x224.jpg',
'./images/sad-owl_224x224.jpg',
'./images/cat-with-tongue_224x224.jpg',
'./images/doge-wiki_224x224.jpg',
]
import time
t0 = time.time()
for i, f in enumerate(image_files):
#print("Image File:%s" % (f,))
im = inception_v3.imagefile_to_np(f)
prob = np.array( lasagne.layers.get_output(output_layer, inception_v3.preprocess(im), deterministic=True).eval() )
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(im.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(350, 50 + n * 25, '{}. {}'.format(n+1, classes[label]), fontsize=14)
print("DONE : %6.2f seconds each" %(float(time.time() - t0)/len(image_files),))
"""
Explanation: Trying it out
On pre-downloaded images
NB: If this is running on a single CPU core (likely in a VM), expect each image to take ~ 15 seconds (!)
NB: So, since there are 4 images, that means expect a full 1 minute delay ...
End of explanation
"""
import requests
index = requests.get('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').text
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
image_urls
"""
Explanation: On some test images from the web
We'll download the ILSVRC2012 validation URLs and pick a few at random
End of explanation
"""
import io
for url in image_urls:
try:
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(requests.get(url).content), ext)
prob = np.array( lasagne.layers.get_output(output_layer, inception_v3.preprocess(im), deterministic=True).eval() )
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(inception_v3.resize_image(im))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(350, 50 + n * 25, '{}. {}'.format(n+1, classes[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
"""
Explanation: Process test images and print top 5 predicted labels
(uses image pre-processing functions from ./model/inception_v3.py)
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/recommendation_systems/solutions/basic_retrieval.ipynb
|
apache-2.0
|
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
!pip install -q scann
"""
Explanation: Recommending movies: retrieval
Learning Objectives
In this notebook, we're going to build and train a two-tower retrieval model using the Movielens dataset.
We're going to:
Get our data and split it into a training and test set.
Implement a retrieval model.
Fit and evaluate it.
Export it for efficient serving by building an approximate nearest neighbours (ANN) index.
Introduction
Real-world recommender systems are often composed of two stages:
The retrieval stage is responsible for selecting an initial set of hundreds of candidates from all possible candidates. The main objective of this model is to efficiently weed out all candidates that the user is not interested in. Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient.
The ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates.
In this notebook, we're going to focus on the first stage, retrieval. If you are interested in the ranking stage, have a look at our ranking tutorial.
Retrieval models are often composed of two sub-models:
A query model computing the query representation (normally a fixed-dimensionality embedding vector) using query features.
A candidate model computing the candidate representation (an equally-sized vector) using the candidate features
The outputs of the two models are then multiplied together to give a query-candidate affinity score, with higher scores expressing a better match between the candidate and the query.
The dataset
The Movielens dataset is a classic dataset from the GroupLens research group at the University of Minnesota. It contains a set of ratings given to movies by a set of users, and is a workhorse of recommender system research.
The data can be treated in two ways:
It can be interpreted as expressing which movies the users watched (and rated), and which they did not. This is a form of implicit feedback, where users' watches tell us which things they prefer to see and which they'd rather not see.
It can also be seen as expressing how much the users liked the movies they did watch. This is a form of explicit feedback: given that a user watched a movie, we can tell roughly how much they liked it by looking at the rating they have given.
In this notebook, we are focusing on a retrieval system: a model that predicts a set of movies from the catalogue that the user is likely to watch. Often, implicit data is more useful here, and so we are going to treat Movielens as an implicit system. This means that every movie a user watched is a positive example, and every movie they have not seen is an implicit negative example.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Imports
Let's first get our imports out of the way.
End of explanation
"""
# Importing necessary modules
import os
import pprint
import tempfile
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
"""
Explanation: Note: Please ignore the incompatibility errors and re-run the above cell before proceeding for the lab.
End of explanation
"""
# TODO 1 - Your code is here.
# Ratings data.
ratings = tfds.load("movielens/100k-ratings", split="train")
# Features of all the available movies.
movies = tfds.load("movielens/100k-movies", split="train")
"""
Explanation: Preparing the dataset
Let's first have a look at the data.
We use the MovieLens dataset from Tensorflow Datasets. Loading movielens/100k_ratings yields a tf.data.Dataset object containing the ratings data and loading movielens/100k_movies yields a tf.data.Dataset object containing only the movies data.
Note that since the MovieLens dataset does not have predefined splits, all data are under train split.
End of explanation
"""
# Printing the user information and movie information
for x in ratings.take(1).as_numpy_iterator():
pprint.pprint(x)
"""
Explanation: The ratings dataset returns a dictionary of movie id, user id, the assigned rating, timestamp, movie information, and user information:
End of explanation
"""
# Printing the data on what genres it belongs to
for x in movies.take(1).as_numpy_iterator():
pprint.pprint(x)
"""
Explanation: The movies dataset contains the movie id, movie title, and data on what genres it belongs to. Note that the genres are encoded with integer labels.
End of explanation
"""
# Here, we are focusing on the ratings data
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
})
movies = movies.map(lambda x: x["movie_title"])
"""
Explanation: In this example, we're going to focus on the ratings data. Other notebooks explore how to use the movie information data as well to improve the model quality.
We keep only the user_id, and movie_title fields in the dataset.
End of explanation
"""
# Here, using tf.random module to shuffle randomly a tensor in its first dimension
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
"""
Explanation: To fit and evaluate the model, we need to split it into a training and evaluation set. In an industrial recommender system, this would most likely be done by time: the data up to time $T$ would be used to predict interactions after $T$.
In this simple example, however, let's use a random split, putting 80% of the ratings in the train set, and 20% in the test set.
End of explanation
"""
# Displaying the corresponding data according to the embedded tables.
movie_titles = movies.batch(1_000)
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])
unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
unique_movie_titles[:10]
"""
Explanation: Let's also figure out unique user ids and movie titles present in the data.
This is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables.
End of explanation
"""
embedding_dimension = 32
"""
Explanation: Implementing a model
Choosing the architecture of our model is a key part of modelling.
Because we are building a two-tower retrieval model, we can build each tower separately and then combine them in the final model.
The query tower
Let's start with the query tower.
The first step is to decide on the dimensionality of the query and candidate representations:
End of explanation
"""
user_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
# We add an additional embedding to account for unknown tokens.
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
"""
Explanation: Higher values will correspond to models that may be more accurate, but will also be slower to fit and more prone to overfitting.
The second is to define the model itself. Here, we're going to use Keras preprocessing layers to first convert user ids to integers, and then convert those to user embeddings via an Embedding layer. Note that we use the list of unique user ids we computed earlier as a vocabulary:
End of explanation
"""
movie_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
"""
Explanation: A simple model like this corresponds exactly to a classic matrix factorization approach. While defining a subclass of tf.keras.Model for this simple model might be overkill, we can easily extend it to an arbitrarily complex model using standard Keras components, as long as we return an embedding_dimension-wide output at the end.
The candidate tower
We can do the same with the candidate tower.
End of explanation
"""
# Here, the tfrs.metrics.FactorizedTopK function computes metrics across the top K candidates surfaced by a retrieval model.
metrics = tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(movie_model)
)
"""
Explanation: Metrics
In our training data we have positive (user, movie) pairs. To figure out how good our model is, we need to compare the affinity score that the model calculates for this pair to the scores of all the other possible candidates: if the score for the positive pair is higher than for all other candidates, our model is highly accurate.
To do this, we can use the tfrs.metrics.FactorizedTopK metric. The metric has one required argument: the dataset of candidates that are used as implicit negatives for evaluation.
In our case, that's the movies dataset, converted into embeddings via our movie model:
End of explanation
"""
# TODO 2 - Your code is here.
# Here, the function bundles together the loss function and metric computation.
task = tfrs.tasks.Retrieval(
metrics=metrics
)
"""
Explanation: Loss
The next component is the loss used to train our model. TFRS has several loss layers and tasks to make this easy.
In this instance, we'll make use of the Retrieval task object: a convenience wrapper that bundles together the loss function and metric computation:
End of explanation
"""
class MovielensModel(tfrs.Model):
def __init__(self, user_model, movie_model):
super().__init__()
self.movie_model: tf.keras.Model = movie_model
self.user_model: tf.keras.Model = user_model
self.task: tf.keras.layers.Layer = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# We pick out the user features and pass them into the user model.
user_embeddings = self.user_model(features["user_id"])
# And pick out the movie features and pass them into the movie model,
# getting embeddings back.
positive_movie_embeddings = self.movie_model(features["movie_title"])
# The task computes the loss and the metrics.
return self.task(user_embeddings, positive_movie_embeddings)
"""
Explanation: The task itself is a Keras layer that takes the query and candidate embeddings as arguments, and returns the computed loss: we'll use that to implement the model's training loop.
The full model
We can now put it all together into a model. TFRS exposes a base model class (tfrs.models.Model) which streamlines bulding models: all we need to do is to set up the components in the __init__ method, and implement the compute_loss method, taking in the raw features and returning a loss value.
The base model will then take care of creating the appropriate training loop to fit our model.
End of explanation
"""
class NoBaseClassMovielensModel(tf.keras.Model):
def __init__(self, user_model, movie_model):
super().__init__()
self.movie_model: tf.keras.Model = movie_model
self.user_model: tf.keras.Model = user_model
self.task: tf.keras.layers.Layer = task
def train_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
# Set up a gradient tape to record gradients.
with tf.GradientTape() as tape:
# Loss computation.
user_embeddings = self.user_model(features["user_id"])
positive_movie_embeddings = self.movie_model(features["movie_title"])
loss = self.task(user_embeddings, positive_movie_embeddings)
# Handle regularization losses as well.
regularization_loss = sum(self.losses)
total_loss = loss + regularization_loss
gradients = tape.gradient(total_loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
metrics = {metric.name: metric.result() for metric in self.metrics}
metrics["loss"] = loss
metrics["regularization_loss"] = regularization_loss
metrics["total_loss"] = total_loss
return metrics
def test_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
# Loss computation.
user_embeddings = self.user_model(features["user_id"])
positive_movie_embeddings = self.movie_model(features["movie_title"])
loss = self.task(user_embeddings, positive_movie_embeddings)
# Handle regularization losses as well.
regularization_loss = sum(self.losses)
total_loss = loss + regularization_loss
metrics = {metric.name: metric.result() for metric in self.metrics}
metrics["loss"] = loss
metrics["regularization_loss"] = regularization_loss
metrics["total_loss"] = total_loss
return metrics
"""
Explanation: The tfrs.Model base class is simply a convenience class: it allows us to compute both training and test losses using the same method.
Under the hood, it's still a plain Keras model. You could achieve the same functionality by inheriting from tf.keras.Model and overriding the train_step and test_step functions (see the guide for details):
End of explanation
"""
# Compiling the model.
model = MovielensModel(user_model, movie_model)
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
"""
Explanation: In these notebooks, however, we stick to using the tfrs.Model base class to keep our focus on modelling and abstract away some of the boilerplate.
Fitting and evaluating
After defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.
Let's first instantiate the model.
End of explanation
"""
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
"""
Explanation: Then shuffle, batch, and cache the training and evaluation data.
End of explanation
"""
# TODO 3a - Your code is here.
# Training the model.
model.fit(cached_train, epochs=3)
"""
Explanation: Then train the model:
End of explanation
"""
# TODO 3b - Your code is here.
# Evaluating the model.
model.evaluate(cached_test, return_dict=True)
"""
Explanation: As the model trains, the loss is falling and a set of top-k retrieval metrics is updated. These tell us whether the true positive is in the top-k retrieved items from the entire candidate set. For example, a top-5 categorical accuracy metric of 0.2 would tell us that, on average, the true positive is in the top 5 retrieved items 20% of the time.
Note that, in this example, we evaluate the metrics during training as well as evaluation. Because this can be quite slow with large candidate sets, it may be prudent to turn metric calculation off in training, and only run it in evaluation.
Finally, we can evaluate our model on the test set:
End of explanation
"""
# Create a model that takes in raw query features, and
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
# recommends movies out of the entire movies dataset.
index.index_from_dataset(
tf.data.Dataset.zip((movies.batch(100), movies.batch(100).map(model.movie_model)))
)
# Get recommendations.
_, titles = index(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")
"""
Explanation: Test set performance is much worse than training performance. This is due to two factors:
Our model is likely to perform better on the data that it has seen, simply because it can memorize it. This overfitting phenomenon is especially strong when models have many parameters. It can be mitigated by model regularization and use of user and movie features that help the model generalize better to unseen data.
The model is re-recommending some of users' already watched movies. These known-positive watches can crowd test movies out of the top K recommendations.
The second phenomenon can be tackled by excluding previously seen movies from test recommendations. This approach is relatively common in the recommender systems literature, but we don't follow it in these notebooks. If not recommending past watches is important, we should expect appropriately specified models to learn this behaviour automatically from past user history and contextual information. Additionally, it is often appropriate to recommend the same item multiple times (say, an evergreen TV series or a regularly purchased item).
Making predictions
Now that we have a model, we would like to be able to make predictions. We can use the tfrs.layers.factorized_top_k.BruteForce layer to do this.
End of explanation
"""
# TODO 4 - Your code is here.
# Export the query model.
with tempfile.TemporaryDirectory() as tmp:
path = os.path.join(tmp, "model")
# Save the index.
tf.saved_model.save(index, path)
# Load it back; can also be done in TensorFlow Serving.
loaded = tf.saved_model.load(path)
# Pass a user id in, get top predicted movie titles back.
scores, titles = loaded(["42"])
print(f"Recommendations: {titles[0][:3]}")
"""
Explanation: Of course, the BruteForce layer is going to be too slow to serve a model with many possible candidates. The following sections shows how to speed this up by using an approximate retrieval index.
Model serving
After the model is trained, we need a way to deploy it.
In a two-tower retrieval model, serving has two components:
a serving query model, taking in features of the query and transforming them into a query embedding, and
a serving candidate model. This most often takes the form of an approximate nearest neighbours (ANN) index which allows fast approximate lookup of candidates in response to a query produced by the query model.
In TFRS, both components can be packaged into a single exportable model, giving us a model that takes the raw user id and returns the titles of top movies for that user. This is done via exporting the model to a SavedModel format, which makes it possible to serve using TensorFlow Serving.
To deploy a model like this, we simply export the BruteForce layer we created above:
End of explanation
"""
scann_index = tfrs.layers.factorized_top_k.ScaNN(model.user_model)
scann_index.index_from_dataset(
tf.data.Dataset.zip((movies.batch(100), movies.batch(100).map(model.movie_model)))
)
"""
Explanation: We can also export an approximate retrieval index to speed up predictions. This will make it possible to efficiently surface recommendations from sets of tens of millions of candidates.
To do so, we can use the scann package. This is an optional dependency of TFRS, and we installed it separately at the beginning of this notebook by calling !pip install -q scann.
Once installed we can use the TFRS ScaNN layer:
End of explanation
"""
# Get recommendations.
_, titles = scann_index(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")
"""
Explanation: This layer will perform approximate lookups: this makes retrieval slightly less accurate, but orders of magnitude faster on large candidate sets.
End of explanation
"""
# Export the query model.
with tempfile.TemporaryDirectory() as tmp:
path = os.path.join(tmp, "model")
# Save the index.
tf.saved_model.save(
        scann_index,
path,
options=tf.saved_model.SaveOptions(namespace_whitelist=["Scann"])
)
# Load it back; can also be done in TensorFlow Serving.
loaded = tf.saved_model.load(path)
# Pass a user id in, get top predicted movie titles back.
scores, titles = loaded(["42"])
print(f"Recommendations: {titles[0][:3]}")
"""
Explanation: Exporting it for serving is as easy as exporting the BruteForce layer:
End of explanation
"""
|
thehyve/transmart-api-training
|
EXERCISE TranSMART REST API V2 (2017).ipynb
|
gpl-3.0
|
import getpass
from transmart import TransmartApi
api = TransmartApi(
host = 'http://transmart-test.thehyve.net',
user = input('Username:'),
password = getpass.getpass('Password:'),
apiversion = 2)
api.access()
"""
Explanation: <img style="float: right;" src="files/thehyve_logo.png">
TranSMART 17.1 REST API demonstration
Copyright (c) 2017 The Hyve B.V. This notebook is licensed under the GNU General Public License, version 3. Authors: Ward Weistra.
We start by importing the tranSMART Python library (https://pypi.python.org/pypi/transmart) and connecting to the tranSMART server.
End of explanation
"""
import pandas as pd
from pandas.io.json import json_normalize
pd.set_option('max_colwidth', 1000)
pd.set_option("display.max_rows",100)
pd.set_option("display.max_columns",100)
"""
Explanation: Next we import and configure Pandas, a Python library to work with data.
End of explanation
"""
studies = api.get_studies()
json_normalize(studies['studies'])
"""
Explanation: Part 1: Plotting blood pressure over time
As a first REST API call it would be nice to see what studies are available in this tranSMART server.
You will see a list of all studies, their name (studyId) and what dimensions are available for this study. Remember that tranSMART previously only supported the dimensions patients, concepts and studies. Now you should see studies with many more dimensions!
End of explanation
"""
study_id = 'TRAINING'
patients = api.get_patients(study = study_id)
json_normalize(patients['patients'])
"""
Explanation: We choose the TRAINING study and ask for all patients in this study. You will get a list with their patient details and patient identifier.
End of explanation
"""
obs = api.get_observations(study = study_id)
obsDataframe = json_normalize(api.format_observations(obs))
obsDataframe
#DO STUFF WITH THE TRAINING STUDY HERE
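# Illustrative sketch only -- the column names below are assumptions about the
# <dimension name>.<field name> headers of obsDataframe, not confirmed values:
# inspect the available columns first, then filter to a blood pressure concept
# and plot its numeric value over time.
print(obsDataframe.columns)
# bp = obsDataframe[obsDataframe['concept.name'].str.contains('Blood Pressure', na=False)]
# bp.plot(x='start time', y='numericValue')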
"""
Explanation: Next we ask for the full list of observations for this study. This list will include one row per observation, with information from all their dimensions. The columns will have headers like <dimension name>.<field name> and numericValue or stringValue for the actual observation value.
End of explanation
"""
patient_set_id = 28733
"""
Explanation: Part 2: Combining Glowing Bear and the Python client
For the second part we will work with the Glowing Bear user interface that was developed at The Hyve, funded by IMI Translocation and BBMRI.
An API is great to extract exactly the data you need and analyze that. But it is harder to get a nice overview of all data that is available and define the exact set to extract. That is where the Glowing Bear was built for.
Please go to http://glowingbear2-head.thehyve.net and create a Patient Set on the Data Selection tab (under Select patients). Once you have saved your patient set, copy the patient set identifier and paste that below.
End of explanation
"""
patients = api.get_patients(patientSet = patient_set_id)
json_normalize(patients['patients'])
"""
Explanation: Now let's return all patients for the patient set we made!
End of explanation
"""
obs = api.get_observations(study = study_id, patientSet = patient_set_id)
obsDataframe = json_normalize(api.format_observations(obs))
obsDataframe
"""
Explanation: And do the same for all observations for this patient set.
End of explanation
"""
|
zephirefaith/AI_Fall15_Assignments
|
A3/probability_notebook.ipynb
|
mit
|
"""Testing pbnt.
Run this before anything else
to get pbnt to work!"""
import sys
# from importlib import reload
if('pbnt/combined' not in sys.path):
sys.path.append('pbnt/combined')
from exampleinference import inferenceExample
# Should output:
# ('The marginal probability of sprinkler=false:', 0.80102921)
#('The marginal probability of wetgrass=false | cloudy=False, rain=True:', 0.055)
inferenceExample()
"""
Explanation: Assignment 3: Probabilistic modeling
In this assignment, you will work with probabilistic models known as Bayesian networks to efficiently calculate the answer to probability questions concerning discrete random variables.
To help, we've provided a package called pbnt that supports the representation of Bayesian networks and automatic inference (!) of marginal probabilities. Note that you need numpy to run pbnt.
This assignment is due on T-Square on October 15th by 9:35 AM.
Warning
Due to compatibility bugs in pbnt, this assignment requires Python 2.7 to run, which in turn requires iPython2. So you should run this notebook using
ipython2 notebook probability_notebook.ipynb
If you don't have iPython2 installed, you can download it here, unzip it, and install it using
python setup.py install
If you don't have iPython2 installed and you don't want to have more than one version installed, you can set it up using a virtual environment (virtualenv).
You can find instructions on how to set up a virtualenv under the following links:
Windows users (TL;DR use PowerShell)
Linux/Mac users (TL;DR use pip if you can)
Once you have virtualenv installed, you should navigate to the directory containing probability_notebook.ipynb and run the command
virtualenv .
which will create a subdirectory called "bin" which contains scripts to run the virtual environment. You'll then call
source bin/activate
to activate the virtualenv. You'll see that your command line now looks something like this:
(assignment_3)my-laptop:probability_assignment user_name$
From here, you can install iPython2 to your local directory by running the command
pip2.7 install "ipython [notebook]"
and then you'll be able to open probability_notebook.ipynb using iPython2.
Whenever you want to quit your virtual environment, just type
deactivate
and you can reactivate the environment later with the same command you used before. If you ever want to get rid of the virtualenv files entirely, just delete the "bin" folder.
End of explanation
"""
from Node import BayesNode
from Graph import BayesNet
def make_power_plant_net():
"""Create a Bayes Net representation of
the above power plant problem."""
T_node = BayesNode(0,2,name='temperature')
G_node = BayesNode(1,2,name='gauge')
A_node = BayesNode(2,2,name='alarm')
F_G_node = BayesNode(3,2,name='faulty gauge')
F_A_node = BayesNode(4,2,name='faulty alarm')
T_node.add_child(G_node)
T_node.add_child(F_G_node)
G_node.add_parent(T_node)
F_G_node.add_parent(T_node)
F_G_node.add_child(G_node)
G_node.add_parent(F_G_node)
G_node.add_child(A_node)
A_node.add_parent(G_node)
F_A_node.add_child(A_node)
A_node.add_parent(F_A_node)
nodes = [T_node, G_node, F_G_node, A_node, F_A_node]
return BayesNet(nodes)
from probability_tests import network_setup_test
power_plant = make_power_plant_net()
network_setup_test(power_plant)
"""
Explanation: Part 1: Bayesian network tutorial
40 points total
To start, design a basic probabilistic model for the following system.
There's a nuclear power plant in which an alarm is supposed to ring when the core temperature, indicated by a gauge, exceeds a fixed threshold. For simplicity, we assume that the temperature is represented as either high or normal. Use the following Boolean variables in your implementation:
A = alarm sounds
F<sub>A</sub> = alarm is faulty
G = gauge reading (high = True, normal = False)
F<sub>G</sub> = gauge is faulty
T = actual temperature (high = True, normal = False)
In addition, assume that the gauge is more likely to fail when the temperature is high.
You will test your implementation at the end of the section.
1a: Casting the net
10 points
Design a Bayesian network for this system, using pbnt to represent the nodes and conditional probability arcs connecting nodes. Don't worry about the probabilities for now. Fill out the function below to create the net.
The following command will create a BayesNode with 2 values, an id of 0 and the name "alarm":
A_node = BayesNode(0,2,name='alarm')
You will use BayesNode.add_parent() and BayesNode.add_child() to connect nodes. For example, to connect the alarm and temperature nodes that you've already made (i.e. assuming that temperature affects the alarm probability):
A_node.add_parent(T_node)
T_node.add_child(A_node)
You can run probability_tests.network_setup_test() to make sure your network is set up correctly.
End of explanation
"""
def is_polytree():
"""Multiple choice question about polytrees."""
choice = 'c'
answers = {
'a' : 'Yes, because it can be decomposed into multiple sub-trees.',
'b' : 'Yes, because its underlying undirected graph is a tree.',
'c' : 'No, because its underlying undirected graph is not a tree.',
'd' : 'No, because it cannot be decomposed into multiple sub-trees.'
}
return answers[choice]
"""
Explanation: 1b: Polytrees
5 points
Is the network for the power plant system a polytree? Why or why not? Choose from the following answers.
End of explanation
"""
from numpy import zeros, float32
import Distribution
from Distribution import DiscreteDistribution, ConditionalDiscreteDistribution
def set_probability(bayes_net):
"""Set probability distribution for each
node in the power plant system."""
A_node = bayes_net.get_node_by_name("alarm")
F_A_node = bayes_net.get_node_by_name("faulty alarm")
G_node = bayes_net.get_node_by_name("gauge")
F_G_node = bayes_net.get_node_by_name("faulty gauge")
T_node = bayes_net.get_node_by_name("temperature")
nodes = [A_node, F_A_node, G_node, F_G_node, T_node]
#for completely independent nodes
T_dist = DiscreteDistribution(T_node)
index = T_dist.generate_index([],[])
T_dist[index] = [0.8,0.2]
T_node.set_dist(T_dist)
F_A_dist = DiscreteDistribution(F_A_node)
index = F_A_dist.generate_index([],[])
F_A_dist[index] = [0.85,0.15]
F_A_node.set_dist(F_A_dist)
#for single parent node
dist = zeros([T_node.size(), F_G_node.size()], dtype=float32)
dist[0,:] = [0.95,0.05]
dist[1,:] = [0.2,0.8]
F_G_dist = ConditionalDiscreteDistribution(nodes = [T_node, F_G_node], table=dist)
F_G_node.set_dist(F_G_dist)
#for double parent node
dist = zeros([F_G_node.size(), T_node.size(), G_node.size()], dtype=float32)
dist[0,0,:] = [0.95,0.05]
dist[0,1,:] = [0.05,0.95]
dist[1,0,:] = [0.2,0.8]
dist[1,1,:] = [0.8,0.2]
G_dist = ConditionalDiscreteDistribution(nodes=[F_G_node,T_node,G_node], table=dist)
G_node.set_dist(G_dist)
dist = zeros([F_A_node.size(),G_node.size(),A_node.size()], dtype=float32)
dist[0,0,:] = [0.9,0.1]
dist[0,1,:] = [0.1,0.9]
dist[1,0,:] = [0.55,0.45]
dist[1,1,:] = [0.45,0.55]
A_dist = ConditionalDiscreteDistribution(nodes=[F_A_node,G_node,A_node], table=dist)
A_node.set_dist(A_dist)
return bayes_net
set_probability(power_plant)
from probability_tests import probability_setup_test
probability_setup_test(power_plant)
"""
Explanation: 1c: Setting the probabilities
15 points
Assume that the following statements about the system are true:
The temperature gauge reads the correct temperature with 95% probability when it is not faulty and 20% probability when it is faulty. For simplicity, say that the gauge's "true" value corresponds with its "hot" reading and "false" with its "normal" reading, so the gauge would have a 95% chance of returning "true" when the temperature is hot and it is not faulty.
The alarm is faulty 15% of the time.
The temperature is hot (call this "true") 20% of the time.
When the temperature is hot, the gauge is faulty 80% of the time. Otherwise, the gauge is faulty 5% of the time.
The alarm responds correctly to the gauge 55% of the time when the alarm is faulty, and it responds correctly to the gauge 90% of the time when the alarm is not faulty. For instance, when it is faulty, the alarm sounds 55% of the time that the gauge is "hot" and remains silent 55% of the time that the gauge is "normal."
Knowing these facts, set the conditional probabilities for the necessary variables on the network you just built.
Using pbnt's Distribution class: if you wanted to set the distribution for P(A) to 70% true, 30% false, you would invoke the following commands.
A_distribution = DiscreteDistribution(A_node)
index = A_distribution.generate_index([],[])
A_distribution[index] = [0.3,0.7]
A_Node.set_dist(A_distribution)
If you wanted to set the distribution for P(A|G) to be
|$G$|$P(A=true| G)$|
|------|:-----:|
|T| 0.75|
|F| 0.85|
you would invoke:
from numpy import zeros, float32
dist = zeros([G_node.size(), A_node.size()], dtype=float32)
dist[0,:] = [0.15, 0.85]
dist[1,:] = [0.25, 0.75]
A_distribution = ConditionalDiscreteDistribution(nodes=[G_node,A_node], table=dist)
A_node.set_dist(A_distribution)
Modeling a three-variable relationship is a bit trickier. If you wanted to set the following distribution for $P(A|G,T)$ to be
|$G$|$T$|$P(A=true| G, T)$|
|--|--|:----:|
|T|T|0.15|
|T|F|0.6|
|F|T|0.2|
|F|F|0.1|
you would invoke:
from numpy import zeros, float32
dist = zeros([G_node.size(), T_node.size(), A_node.size()], dtype=float32)
dist[1,1,:] = [0.85, 0.15]
dist[1,0,:] = [0.4, 0.6]
dist[0,1,:] = [0.8, 0.2]
dist[0,0,:] = [0.9, 0.1]
A_distribution = ConditionalDiscreteDistribution(nodes=[G_node, T_node, A_node], table=dist)
A_node.set_dist(A_distribution)
The key is to remember that 0 represents the index of the false probability, and 1 represents true.
You can check your probability distributions with probability_tests.probability_setup_test().
End of explanation
"""
def get_alarm_prob(bayes_net, alarm_rings):
"""Calculate the marginal
probability of the alarm
ringing (T/F) in the
power plant system."""
A_node = bayes_net.get_node_by_name('alarm')
engine = JunctionTreeEngine(bayes_net)
Q = engine.marginal(A_node)[0]
index = Q.generate_index([alarm_rings],range(Q.nDims))
alarm_prob = Q[index]
return alarm_prob
def get_gauge_prob(bayes_net, gauge_hot):
"""Calculate the marginal
probability of the gauge
showing hot (T/F) in the
power plant system."""
G_node = bayes_net.get_node_by_name('gauge')
engine = JunctionTreeEngine(bayes_net)
Q = engine.marginal(G_node)[0]
index = Q.generate_index([gauge_hot],range(Q.nDims))
gauge_prob = Q[index]
return gauge_prob
from Inference import JunctionTreeEngine
def get_temperature_prob(bayes_net,temp_hot):
"""Calculate theprobability of the
temperature being hot (T/F) in the
power plant system, given that the
alarm sounds and neither the gauge
nor alarm is faulty."""
T_node = bayes_net.get_node_by_name('temperature')
A_node = bayes_net.get_node_by_name('alarm')
F_A_node = bayes_net.get_node_by_name('faulty alarm')
F_G_node = bayes_net.get_node_by_name('faulty gauge')
engine = JunctionTreeEngine(bayes_net)
engine.evidence[A_node] = True
engine.evidence[F_A_node] = False
engine.evidence[F_G_node] = False
Q = engine.marginal(T_node)[0]
index = Q.generate_index([temp_hot],range(Q.nDims))
temp_prob = Q[index]
return temp_prob
print get_alarm_prob(power_plant,True)
print get_gauge_prob(power_plant,True)
print get_temperature_prob(power_plant,True)
"""
Explanation: 1d: Probability calculations
10 points
To finish up, you're going to perform inference on the network to calculate the following probabilities:
the marginal probability that the alarm sounds
the marginal probability that the gauge shows "hot"
the probability that the temperature is actually hot, given that the alarm sounds and the alarm and gauge are both working
You'll fill out the "get_prob" functions to calculate the probabilities.
Here's an example of how to do inference for the marginal probability of the "faulty alarm" node being True (assuming "bayes_net" is your network):
F_A_node = bayes_net.get_node_by_name('faulty alarm')
engine = JunctionTreeEngine(bayes_net)
Q = engine.marginal(F_A_node)[0]
index = Q.generate_index([True],range(Q.nDims))
prob = Q[index]
To compute the conditional probability, set the evidence variables before computing the marginal as seen below (here we're computing $P(A = false | F_A = true, T = False)$):
engine.evidence[F_A_node] = True
engine.evidence[T_node] = False
Q = engine.marginal(A_node)[0]
index = Q.generate_index([False],range(Q.nDims))
prob = Q[index]
If you need to sanity-check to make sure you're doing inference correctly, you can run inference on one of the probabilities that we gave you in 1c. For instance, running inference on $P(T=true)$ should return 0.19999994 (i.e. almost 20%). You can also calculate the answers by hand to double-check.
End of explanation
"""
def get_game_network():
"""Create a Bayes Net representation
of the game problem."""
#create the network
A = BayesNode(0,4,name='Ateam')
B = BayesNode(1,4,name='Bteam')
C = BayesNode(2,4,name='Cteam')
AvB = BayesNode(3,3,name='AvB')
BvC = BayesNode(4,3,name='BvC')
CvA = BayesNode(5,3,name='CvA')
A.add_child(AvB)
A.add_child(CvA)
B.add_child(AvB)
B.add_child(BvC)
C.add_child(BvC)
C.add_child(CvA)
AvB.add_parent(A)
AvB.add_parent(B)
BvC.add_parent(B)
BvC.add_parent(C)
CvA.add_parent(C)
CvA.add_parent(A)
nodes = [A,B,C,AvB,BvC,CvA]
game_net = BayesNet(nodes)
#setting priors for team skills
skillDist = DiscreteDistribution(A)
index = skillDist.generate_index([],[])
skillDist[index] = [0.15,0.45,0.3,0.1]
A.set_dist(skillDist)
skillDist = DiscreteDistribution(B)
index = skillDist.generate_index([],[])
skillDist[index] = [0.15,0.45,0.3,0.1]
B.set_dist(skillDist)
skillDist = DiscreteDistribution(C)
index = skillDist.generate_index([],[])
skillDist[index] = [0.15,0.45,0.3,0.1]
C.set_dist(skillDist)
#setting probability priors for winning
dist = zeros([A.size(),B.size(),AvB.size()], dtype=float32)
dist[0,0,:] = [0.1,0.1,0.8]
dist[1,1,:] = [0.1,0.1,0.8]
dist[2,2,:] = [0.1,0.1,0.8]
dist[3,3,:] = [0.1,0.1,0.8]
    dist[0,1,:] = [0.2,0.6,0.2]
    dist[1,2,:] = [0.2,0.6,0.2]
    dist[2,3,:] = [0.2,0.6,0.2]
    dist[0,2,:] = [0.15,0.75,0.1]
    dist[1,3,:] = [0.15,0.75,0.1]
    dist[0,3,:] = [0.05,0.9,0.05]
    dist[3,0,:] = [0.9,0.05,0.05]
    dist[1,0,:] = [0.6,0.2,0.2]
    dist[2,1,:] = [0.6,0.2,0.2]
    dist[3,2,:] = [0.6,0.2,0.2]
dist[2,0,:] = [0.75,0.15,0.1]
dist[3,1,:] = [0.75,0.15,0.1]
AvB_dist = ConditionalDiscreteDistribution(nodes=[A,B,AvB], table=dist)
AvB.set_dist(AvB_dist)
dist = zeros([B.size(),C.size(),BvC.size()], dtype=float32)
dist[0,0,:] = [0.1,0.1,0.8]
dist[1,1,:] = [0.1,0.1,0.8]
dist[2,2,:] = [0.1,0.1,0.8]
dist[3,3,:] = [0.1,0.1,0.8]
    dist[0,1,:] = [0.2,0.6,0.2]
    dist[1,2,:] = [0.2,0.6,0.2]
    dist[2,3,:] = [0.2,0.6,0.2]
    dist[0,2,:] = [0.15,0.75,0.1]
    dist[1,3,:] = [0.15,0.75,0.1]
    dist[0,3,:] = [0.05,0.9,0.05]
    dist[3,0,:] = [0.9,0.05,0.05]
    dist[1,0,:] = [0.6,0.2,0.2]
    dist[2,1,:] = [0.6,0.2,0.2]
    dist[3,2,:] = [0.6,0.2,0.2]
dist[2,0,:] = [0.75,0.15,0.1]
dist[3,1,:] = [0.75,0.15,0.1]
BvC_dist = ConditionalDiscreteDistribution(nodes=[B,C,BvC], table=dist)
BvC.set_dist(BvC_dist)
dist = zeros([C.size(),A.size(),CvA.size()], dtype=float32)
dist[0,0,:] = [0.1,0.1,0.8]
dist[1,1,:] = [0.1,0.1,0.8]
dist[2,2,:] = [0.1,0.1,0.8]
dist[3,3,:] = [0.1,0.1,0.8]
    dist[0,1,:] = [0.2,0.6,0.2]
    dist[1,2,:] = [0.2,0.6,0.2]
    dist[2,3,:] = [0.2,0.6,0.2]
    dist[0,2,:] = [0.15,0.75,0.1]
    dist[1,3,:] = [0.15,0.75,0.1]
    dist[0,3,:] = [0.05,0.9,0.05]
    dist[3,0,:] = [0.9,0.05,0.05]
    dist[1,0,:] = [0.6,0.2,0.2]
    dist[2,1,:] = [0.6,0.2,0.2]
    dist[3,2,:] = [0.6,0.2,0.2]
dist[2,0,:] = [0.75,0.15,0.1]
dist[3,1,:] = [0.75,0.15,0.1]
CvA_dist = ConditionalDiscreteDistribution(nodes=[C,A,CvA], table=dist)
CvA.set_dist(CvA_dist)
return game_net
game_net = get_game_network()
"""
Explanation: Part 2: Sampling
60 points total
For the main exercise, consider the following scenario.
There are three frisbee teams who play each other: the Airheads, the Buffoons, and the Clods (A, B and C for short). Each match is between two teams, and each team can either win, lose, or draw in a match. Each team has a fixed but unknown skill level, represented as an integer from 0 to 3. Each match's outcome is probabilistically proportional to the difference in skill level between the teams.
We want to predict the outcome of the matches, given prior knowledge of previous matches. Rather than using inference, we will do so by sampling the network using two Markov Chain Monte Carlo models: Metropolis-Hastings (2b) and Gibbs sampling (2c).
2a: Build the network
10 points
Build a Bayes Net to represent the three teams and their influences on the match outcomes. Assume the following variable conventions:
| variable name | description|
|---------|:------:|
|A| A's skill level|
|B | B's skill level|
|C | C's skill level|
|AvB | the outcome of A vs. B <br> (0 = A wins, 1 = B wins, 2 = tie)|
|BvC | the outcome of B vs. C <br> (0 = B wins, 1 = C wins, 2 = tie)|
|CvA | the outcome of C vs. A <br> (0 = C wins, 1 = A wins, 2 = tie)|
Assume that each team has the following prior distribution of skill levels:
|skill level|P(skill level)|
|----|:----:|
|0|0.15|
|1|0.45|
|2|0.30|
|3|0.10|
In addition, assume that the differences in skill levels correspond to the following probabilities of winning:
| skill difference <br> (T2 - T1) | T1 wins | T2 wins| Tie |
|------------|----------|---|:--------:|
|0|0.10|0.10|0.80|
|1|0.20|0.60|0.20|
|2|0.15|0.75|0.10|
|3|0.05|0.90|0.05|
End of explanation
"""
import random
def MH_sampling(bayes_net, initial_value):
"""Complete a single iteration of the
Metropolis-Hastings algorithm given a
Bayesian network and an initial state
value. Returns the state sampled from
the probability distribution."""
    if not initial_value:
        # Default: a state chosen uniformly at random (skills 0-3, match outcomes 0-2).
        initial_value = [random.randint(0, 3) for _ in range(3)] + [random.randint(0, 2) for _ in range(3)]
    # Proposal: pick one variable uniformly at random and propose a new value
    # drawn uniformly from its domain.
    var_id = random.randint(0, 5)
    new_val = random.randint(0, 3) if var_id < 3 else random.randint(0, 2)
    sample = [i for i in initial_value]
    sample[var_id] = new_val
    for node in bayes_net.nodes:
        if node.id == 0:
            A = node
        if node.id == 1:
            B = node
        if node.id == 2:
            C = node
        if node.id == 3:
            AvB = node
        if node.id == 4:
            BvC = node
        if node.id == 5:
            CvA = node
    Adist = A.dist.table
    Bdist = B.dist.table
    Cdist = C.dist.table
    AvBdist = AvB.dist.table
    BvCdist = BvC.dist.table
    CvAdist = CvA.dist.table
    # Joint probability of the current state and of the proposed state.
    p_x = Adist[initial_value[0]]*Bdist[initial_value[1]]*Cdist[initial_value[2]]*AvBdist[initial_value[0],initial_value[1],initial_value[3]]*BvCdist[initial_value[1],initial_value[2],initial_value[4]]*CvAdist[initial_value[2],initial_value[0],initial_value[5]]
    p_x_dash = Adist[sample[0]]*Bdist[sample[1]]*Cdist[sample[2]]*AvBdist[sample[0],sample[1],sample[3]]*BvCdist[sample[1],sample[2],sample[4]]*CvAdist[sample[2],sample[0],sample[5]]
    # Acceptance: always accept a proposal that is at least as likely as the
    # current state, otherwise accept it with probability p(x')/p(x).
    if p_x != 0:
        alpha = p_x_dash / p_x
    else:
        alpha = 1
    if alpha >= 1 or random.random() < alpha:
        return sample
    # Proposal rejected: stay at the current state.
    return [i for i in initial_value]
# arbitrary initial state for the game system
initial_value = [0,0,0,0,2,1]
sample = MH_sampling(game_net, initial_value)
print(sample)
"""
Explanation: 2b: Metropolis-Hastings sampling
15 points
Now you will implement the Metropolis-Hastings algorithm, a method for estimating a probability distribution when it is prohibitively expensive (even for inference!) to compute the distribution exactly. You'll do this in MH_sampling(), which takes a Bayesian network and an initial state as parameters and returns a sample state drawn from the network's distribution. The method should perform just a single iteration of the algorithm. If an initial value is not given, default to a state chosen uniformly at random from the possible states.
The general idea is to build an approximation of a latent probability distribution by repeatedly generating a "candidate" value for each random variable in the system, and then probabilistically accepting or rejecting the candidate value based on an underlying acceptance function. These slides provide a nice intro, and this cheat sheet provides an explanation of the details.
Hint 1: in both Metropolis-Hastings and Gibbs sampling, you'll need access to each node's probability distribution. You can access this distribution by calling
A_node.dist.table
which will return the same numpy array that you provided when constructing the probability distribution.
Hint 2: you'll also want to use the random package (e.g. random.randint()) for the probabilistic choices that sampling makes.
Hint 3: in order to count the sample states later on, you'll want to make sure the sample that you return is hashable. One way to do this is by returning the sample as a tuple.
End of explanation
"""
import random
def Gibbs_sampling(bayes_net, initial_value):
"""Complete a single iteration of the
Gibbs sampling algorithm given a
Bayesian network and an initial state
value. Returns the state sampled from
the probability distribution."""
    if initial_value:
        sample = [i for i in initial_value]  # copy so the caller's state is not mutated
        # TODO: randomly select one variable and resample it from its posterior
        # distribution given the values of all the other variables
    else:
        # default: no initial state given, so draw every variable uniformly at random
        sample = [0, 0, 0, 0, 0, 0]
        for i in range(0, 6):
            upper_bound = 3 if i < 3 else 2  # skills take values 0-3, match outcomes 0-2
            sample[i] = random.randint(0, upper_bound)
    return sample
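# A rough sketch (not part of the assignment solution) of the Gibbs step left as a
# TODO above: resample team A's skill (state index 0) from its conditional distribution
# given every other variable. It assumes the table layouts used when the network was
# built (AvB indexed by [A, B, outcome], CvA by [C, A, outcome]).
def gibbs_resample_A_skill(bayes_net, state):
    for node in bayes_net.nodes:
        if node.id == 0:
            Adist = node.dist.table     # prior over A's skill
        if node.id == 3:
            AvBdist = node.dist.table   # P(AvB | A, B)
        if node.id == 5:
            CvAdist = node.dist.table   # P(CvA | C, A)
    # Unnormalized conditional P(A = a | everything else) for each skill level a
    weights = [Adist[a]
               * AvBdist[a, state[1], state[3]]
               * CvAdist[state[2], a, state[5]]
               for a in range(4)]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw the new skill level from the conditional distribution
    r, cumulative = random.random(), 0.0
    new_state = [v for v in state]
    for a, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            new_state[0] = a
            break
    return new_state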
"""
Explanation: 2c: Gibbs sampling
15 points
Implement the Gibbs sampling algorithm, which is a special case of Metropolis-Hastings. You'll do this in Gibbs_sampling(), which takes a Bayesian network and initial state value as a parameter and returns a sample state drawn from the network's distribution. The method should just consist of a single iteration of the algorithm. If no initial value is provided, default to a uniform distribution over the possible states.
You may find this helpful in understanding the basics of Gibbs sampling over Bayesian networks. Make sure to identify what makes it different from Metropolis-Hastings.
End of explanation
"""
def calculate_posterior(games_net):
"""Calculate the posterior distribution
of the BvC match given that A won against
B and tied C. Return a list of probabilities
corresponding to win, loss and tie likelihood."""
posterior = [0,0,0]
engine = JunctionTreeEngine(games_net)
AvB = games_net.get_node_by_name('AvB')
BvC = games_net.get_node_by_name('BvC')
CvA = games_net.get_node_by_name('CvA')
engine.evidence[AvB] = 0
engine.evidence[CvA] = 2
Q = engine.marginal(BvC)[0]
index = Q.generate_index([0],range(Q.nDims))
posterior[0] = Q[index]
index = Q.generate_index([1],range(Q.nDims))
posterior[1] = Q[index]
index = Q.generate_index([2],range(Q.nDims))
posterior[2] = Q[index]
return posterior
iter_counts = [1e1,1e3,1e5,1e6]
def compare_sampling(bayes_net, posterior):
"""Compare Gibbs and Metropolis-Hastings
sampling by calculating how long it takes
for each method to converge to the
provided posterior."""
    # TODO: finish this function (see the sketch at the end of this cell for one possible approach)
    Gibbs_convergence = 0  # placeholder values so the cell runs before the TODO is completed
    MH_convergence = 0
    return Gibbs_convergence, MH_convergence
def sampling_question():
"""Question about sampling performance."""
# TODO: assign value to choice and factor
choice = 1
options = ['Gibbs','Metropolis-Hastings']
factor = 0
return options[choice], factor
# test your sampling methods here
posterior = calculate_posterior(game_net)
compare_sampling(game_net, posterior)
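# A sketch (not the graded solution) of one way to flesh out compare_sampling above:
# run a sampler, track the estimated distribution over the BvC outcome (state index 4),
# and declare convergence once the estimate changes by less than 0.1% for 10 successive
# iterations. The evidence (AvB = 0, CvA = 2) is assumed to be fixed in the initial state.
def count_iterations_to_convergence(sampler, bayes_net, initial_state, delta=0.001, window=10):
    counts = [0, 0, 0]                 # tallies for BvC = 0, 1, 2
    state = [i for i in initial_state]
    prev_probs = [1.0 / 3] * 3
    stable = 0
    iterations = 0
    while stable < window:
        state = list(sampler(bayes_net, state))
        counts[state[4]] += 1
        iterations += 1
        total = float(sum(counts))
        probs = [c / total for c in counts]
        diff = max(abs(p - q) for p, q in zip(probs, prev_probs))
        stable = stable + 1 if diff < delta else 0
        prev_probs = probs
    return iterations
# Example usage (uncomment once the samplers above are complete):
# print(count_iterations_to_convergence(MH_sampling, game_net, [0, 0, 0, 0, 0, 2]))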
"""
Explanation: 2d: Comparing sampling methods
15 points
Suppose that you know the following outcome of two of the three games: A beats B and A draws with C. Start by calculating the posterior distribution for the outcome of the BvC match in calculate_posterior(), using the inference methods from 1d.
Estimate the likelihood of different outcomes for the third match by running Gibbs sampling until it converges to a stationary distribution. We'll say that the sampler has converged when, for 10 successive iterations, the difference in expected outcome for the third match differs from the previous estimated outcome by less than .1%.
Repeat this experiment for Metropolis-Hastings sampling.
Which algorithm converges more quickly? By approximately what factor? For instance, if Metropolis-Hastings takes twice as many iterations to converge as Gibbs sampling, you'd say that it converged faster by a factor of 2. Fill in sampling_question() to answer both parts.
End of explanation
"""
def complexity_question():
# TODO: write an expression for complexity
complexity = 'O(2^n)'
return complexity
"""
Explanation: 2e: Theoretical follow-up
5 points
Suppose there are now $n$ teams in the competition, and all matches have been played except for the last match. Using inference by enumeration, how does the complexity of predicting the last match vary with $n$?
Fill in complexity_question() to answer, using big-O notation. For example, write 'O(n^2)' for second-degree polynomial runtime.
End of explanation
"""
|
angelmtenor/deep-learning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
Student: Angel Martinez-Tenor <br/>
Deep Learning Nanodegree Foundation - Udacity <br/>
March 2, 2017 <br/>
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 4
sample_id = 10
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
return x/255 # Simple division by scalar 255: From [0-255] to [0-1]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer() # create encoder
lb.fit(range(10)) # assigns one-hot vector to 0-9
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
return lb.transform(x) # Transform the labels into one-hot encoded vectors
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
    Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
tensor_shape = [None] + list(image_shape)
return tf.placeholder(tf.float32, tensor_shape, name = 'x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
tensor_shape = [None, n_classes]
return tf.placeholder(tf.float32, tensor_shape, name = 'y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
height = conv_ksize[0]
width = conv_ksize[1]
input_depth = x_tensor.get_shape().as_list()[3]
output_depth = conv_num_outputs
W = tf.Variable(tf.truncated_normal((height, width, input_depth, output_depth), stddev=0.1)) # conv layer weight
b = tf.Variable(tf.truncated_normal([output_depth], stddev=0.1)) # conv layer bias
x_conv = tf.nn.conv2d(x_tensor, W, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
x_conv = tf.nn.bias_add(x_conv, b)
x_conv = tf.nn.relu(x_conv) # nonlinear activation ReLU
x_conv_pool = tf.nn.max_pool(x_conv, ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return x_conv_pool
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
import numpy as np # (imported again because of the above check point)
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
    x_dim = x_tensor.get_shape().as_list() # list with dimensions of the tensor: [batch_size, ...]
n_input = np.prod(x_dim[1:]) # size of the image (features)
x_flat = tf.reshape(x_tensor, [-1, n_input])
return x_flat
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
x_dim = x_tensor.get_shape().as_list()
n_input = np.prod(x_dim[1:])
W = tf.Variable(tf.truncated_normal([n_input, num_outputs], stddev=0.1))
b = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
fcl = tf.add(tf.matmul(x_tensor, W), b)
fcl = tf.nn.relu(fcl) # nonlinear activation ReLU
return fcl
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
x_dim = x_tensor.get_shape().as_list()
n_input = np.prod(x_dim[1:])
W = tf.Variable(tf.truncated_normal([n_input, num_outputs], stddev=0.1))
b = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
out = tf.add(tf.matmul(x_tensor, W), b)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 32 # >=64 results in memory issues (Frankfurt AWS instance)
conv_ksize = [2,2] # better and more stable results were obtained with [2,2] than using larger masks
conv_strides = [1,1]
pool_ksize = [2,2]
pool_strides = [2,2] # (width and height will be reduced by maxpool)
    # 3 convolutional + maxpool layers with the same parameters except for the output depth:
# conv1: from 32x32x3 to 16x16x32 (maxpool reduces the size)
conv1 = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# conv2: from 16x16x32 to 8x8x128
conv_num_outputs = 128
conv2 = conv2d_maxpool(conv1, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    # conv3: 8x8x128 to 4x4x512 (improves the accuracy by ~2%)
conv_num_outputs = 512
conv3 = conv2d_maxpool(conv2, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_flat = flatten(conv3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc1 = fully_conn(x_flat, 8096) # 2 hidden layers lead to overfitting
fc1 = tf.nn.dropout(fc1, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fc1, 10)
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch,
keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_acc = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
loss,
valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 10 # tested from 5 to 50
batch_size = 256 # tested from 64 to 2048
keep_probability = 0.8 # tested from 0.5 to 0.9
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
kimkipyo/dss_git_kkp
|
ํต๊ณ, ๋จธ์ ๋ฌ๋ ๋ณต์ต/160601์_11์ผ์ฐจ_๋ฐ์ดํฐ ์ ์ฒ๋ฆฌ Data Preprocessing, (๊ฒฐ์ ๋ก ์ )์ ํ ํ๊ท ๋ถ์ Linear Regression Analysis/3.์ ํ ํ๊ท ๋ถ์์ ๊ธฐ์ด.ipynb
|
mit
|
from sklearn.datasets import make_regression
bias = 100
X0, y, coef = make_regression(n_samples=100, n_features=1, bias=bias, noise=10, coef=True, random_state=1)
X = np.hstack([np.ones_like(X0), X0])
X[:5]
"""
Explanation: Basics of Linear Regression Analysis
A deterministic model simply looks for a function; we start with simple functions, and here "simple" means a linear model.
Linear regression analysis is widely used despite its limited accuracy because it tells us the sign, the magnitude, and the relationships of the effects. The drawbacks of nonlinear regression are that it overfits easily and that there are too many methods to choose from.
Cross validation means setting aside part of the x data to test the fitted function, on the same principle as holding back real exam questions; set aside at least three samples.
Regression analysis is the task of quantifying the relationship between the input data (independent variable) $x$ and the corresponding output data (dependent variable) $y$.
Regression models come in two kinds: deterministic models (Deterministic Model) and probabilistic models (Probabilistic Model).
A deterministic model is simply the process of building a function that computes the dependent variable $y$ corresponding to an independent variable $x$.
$$ \hat{y} = f \left( x; \{ x_1, y_1, x_2, y_2, \cdots, x_N, y_N \} \right) = f (x; D) = f(x) $$
Here $\{ x_1, y_1, x_2, y_2, \cdots, x_N, y_N \}$ is the past data used to estimate the model coefficients.
If the function is a linear function, this is called linear regression analysis.
$$ \hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_D x_D $$
Augmentation
In general, regression analysis requires including the constant term among the independent variables, as shown below. This is called feature augmentation.
$$
x_i =
\begin{bmatrix}
x_{i1} \\ x_{i2} \\ \vdots \\ x_{iD}
\end{bmatrix}
\rightarrow
x_{i,a} =
\begin{bmatrix}
1 \\ x_{i1} \\ x_{i2} \\ \vdots \\ x_{iD}
\end{bmatrix}
$$
With augmentation, a column whose elements are all 1 is added to the feature matrix.
$$
X =
\begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1D} \\
x_{21} & x_{22} & \cdots & x_{2D} \\
\vdots & \vdots & \vdots & \vdots \\
x_{N1} & x_{N2} & \cdots & x_{ND} \\
\end{bmatrix}
\rightarrow
X_a =
\begin{bmatrix}
1 & x_{11} & x_{12} & \cdots & x_{1D} \\
1 & x_{21} & x_{22} & \cdots & x_{2D} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
1 & x_{N1} & x_{N2} & \cdots & x_{ND} \\
\end{bmatrix}
$$
With augmentation the weight vector gains one dimension and the whole expression simplifies as follows.
$$ w_0 + w_1 x_1 + w_2 x_2 =
\begin{bmatrix}
1 & x_1 & x_2
\end{bmatrix}
\begin{bmatrix}
w_0 \\ w_1 \\ w_2
\end{bmatrix}
= x_a^T w
$$
End of explanation
"""
y = y.reshape(len(y), 1)
w = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)
print("bias:", bias)
print("coef:", coef)
print("w:\n", w)
w = np.linalg.lstsq(X, y)[0]
w
xx = np.linspace(np.min(X0) - 1, np.max(X0) + 1, 1000)
XX = np.vstack([np.ones(xx.shape[0]), xx.T]).T
yy = np.dot(XX, w)
plt.scatter(X0, y)
plt.plot(xx, yy, 'r-')
plt.show()
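# A numerically more stable way to solve the normal equations than explicitly
# inverting X^T X (a side note, not part of the original lecture code):
w_solve = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
print(w_solve)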
"""
Explanation: OLS (Ordinary Least Squares)
OLS is the most basic deterministic regression method: it finds the weight vector that minimizes the Residual Sum of Squares (RSS) by setting the derivative to zero.
Residual
$$ e_i = {y}_i - x_i^T w $$
Stacking (Vector Form)
$$ e = {y} - Xw $$
Residual Sum of Squares (RSS)
$$\begin{eqnarray}
\text{RSS}
&=& \sum (y_i - \hat{y}_i)^2 \\
&=& \sum e_i^2 = e^Te \\
&=& (y - Xw)^T(y - Xw) \\
&=& y^Ty - 2y^T X w + w^TX^TXw
\end{eqnarray}$$
Minimize using Gradient
$$ \dfrac{\partial \text{RSS}}{\partial w} = -2 X^T y + 2 X^TX w = 0 $$
$$ X^TX w = X^T y $$
$$ w = (X^TX)^{-1} X^T y $$
Here, the following equation, obtained by setting the gradient to zero, is called the normal equation.
$$ X^T y - X^TX w = 0 $$
From the normal equation we can see the following property of the residuals.
$$ X^T (y - X w ) = X^T e = 0 $$
The bias is the constant term, i.e. the y-intercept. scikit-learn adds it internally, whereas statsmodels requires an extra command (add_constant) to include it.
End of explanation
"""
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
dfX_diabetes = pd.DataFrame(diabetes.data, columns=["X%d" % (i+1) for i in range(np.shape(diabetes.data)[1])])
dfy_diabetes = pd.DataFrame(diabetes.target, columns=["target"])
df_diabetes0 = pd.concat([dfX_diabetes, dfy_diabetes], axis=1)
df_diabetes0.tail(3)
from sklearn.linear_model import LinearRegression
model_diabets = LinearRegression().fit(diabetes.data, diabetes.target)
print(model_diabets.coef_)
print(model_diabets.intercept_)
predictions = model_diabets.predict(diabetes.data)
plt.scatter(diabetes.target, predictions)
plt.xlabel("prediction")
plt.ylabel("target")
plt.show()
mean_abs_pct_error = (np.abs(((diabetes.target - predictions) / diabetes.target)*100)).mean()
print("Mean absolute percentage error: %.2f%%" % (mean_abs_pct_error))
sk.metrics.median_absolute_error(diabetes.target, predictions)
sk.metrics.mean_squared_error(diabetes.target, predictions)
"""
Explanation: Linear Regression Analysis with the scikit-learn Package
To run a linear regression with the sklearn package, use the LinearRegression class from the linear_model subpackage.
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
Input arguments
fit_intercept : boolean, optional. Whether to add the constant (intercept) term.
normalize : boolean, optional. Whether to normalize the data before the regression.
Attributes
coef_ : the estimated weight vector
intercept_ : the estimated constant term
Diabetes Regression
End of explanation
"""
from sklearn.datasets import load_boston
boston = load_boston()
dfX_boston = pd.DataFrame(boston.data, columns=boston.feature_names)
dfy_boston = pd.DataFrame(boston.target, columns=["MEDV"])
df_boston0 = pd.concat([dfX_boston, dfy_boston], axis=1)
df_boston0.tail(3)
model_boston = LinearRegression().fit(boston.data, boston.target)
print(model_boston.coef_)
print(model_boston.intercept_)
predictions = model_boston.predict(boston.data)
plt.scatter(predictions, boston.target)
plt.xlabel("prediction")
plt.ylabel("target")
plt.show()
mean_abs_pct_error = (np.abs(((boston.target - predictions) / boston.target)*100)).mean()
print("Mean absolute percentage error: %.2f%%" % (mean_abs_pct_error))
sk.metrics.median_absolute_error(boston.target, predictions)
sk.metrics.mean_squared_error(boston.target, predictions)
"""
Explanation: Boston Housing Price
End of explanation
"""
df_diabetes = sm.add_constant(df_diabetes0)
df_diabetes.tail(3)
model_diabets2 = sm.OLS(df_diabetes.ix[:, -1], df_diabetes.ix[:, :-1])
result_diabetes2 = model_diabets2.fit()
result_diabetes2
"""
Explanation: Linear Regression Analysis with statsmodels
In practice, this is the package usually used for linear regression analysis.
In the statsmodels package, linear regression is carried out with the OLS class.
http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.html
statsmodels.regression.linear_model.OLS(endog, exog=None)
Input arguments
endog : the dependent variable, a 1-dimensional array
exog : the independent variables, a 2-dimensional array
The statsmodels OLS class does not add a constant term automatically, so the user has to add one with the add_constant command.
Once the model object has been created, the fit and predict methods perform estimation and prediction.
The fit results are returned as a RegressionResults object, and its summary method prints a report of the results.
End of explanation
"""
print(result_diabetes2.summary())
df_boston = sm.add_constant(df_boston0)
model_boston2 = sm.OLS(df_boston.ix[:, -1], df_boston.ix[:, :-1])
result_boston2 = model_boston2.fit()
print(result_boston2.summary())
"""
Explanation: A DataFrame is essentially a list of ndarrays
A list of vectors and an ndarray all behave the same way here;
a list of lists is just lists nested inside a list.
Reading the regression summary above:
Dep. Variable is the value we are solving for;
target is the label.
No. Observations is the number of samples.
Df Model is the number of parameters minus 1.
std err is the +/- error on each coef value.
Look at P>|t| first; this is the most important column. Check whether it is close to zero: if it is, the variable stays in the model; if not, the variable is likely to be dropped.
A Prob(Omnibus) of 0.471 means the residuals can simply be treated as normally distributed.
A Cond. No. below 10,000 is fine.
The next thing to look at is coef (coef_, the estimated weight vector); a negative sign means a negative effect.
Storing the results in a separate object is a characteristic of statsmodels; scikit-learn does not do this.
End of explanation
"""
dir(result_boston2)
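# A few commonly used RegressionResults attributes (standard statsmodels API),
# shown as a short illustration:
print(result_boston2.params.head())    # estimated coefficients
print(result_boston2.rsquared)         # R-squared of the fit
print(result_boston2.pvalues.head())   # p-values for each coefficient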
"""
Explanation: The RegressionResults class stores the analysis results in a variety of attributes, which you can pick out and use later.
End of explanation
"""
sm.graphics.plot_fit(result_boston2, "CRIM")
plt.show()
"""
Explanation: statsmodels also provides a variety of plots for regression results.
plot_fit(results, exog_idx) Plot fit against one regressor.
abline_plot([intercept, ...]) Plots a line given an intercept and slope.
influence_plot(results[, ...]) Plot of influence in regression.
plot_leverage_resid2(results) Plots leverage statistics vs.
plot_partregress(endog, ...) Plot partial regression for a single regressor.
plot_ccpr(results, exog_idx) Plot CCPR against one regressor.
plot_regress_exog(results, ...) Plot regression results against one regressor.
End of explanation
"""
|
jljones/portfolio
|
ds/Webscraping_Craigslist.ipynb
|
apache-2.0
|
# Python 3.4
%pylab inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import requests
from bs4 import BeautifulSoup as bs4
"""
Explanation: Webscraping Craigslist for House Prices in the East Bay
Jennifer Jones, PhD
jennifer.jones@cal.berkeley.edu
End of explanation
"""
# Get the data: Houses posted for sale on Craigslist in the Eastbay
url_base = 'http://sfbay.craigslist.org/search/eby/rea?housing_type=6'
data = requests.get(url_base)
print(data.url)
# BeautifulSoup can quickly parse the text, need to tell bs4 that the text is html
html = bs4(data.text, 'html.parser')
# Display the html in a somewhat readable way, to note the structure of housing listings
# then comment it out because it prints out a large amount to the screen
# print(html.prettify())
"""
Explanation: Craigslist houses for sale
Look on the Craigslist website, select relevant search criteria, and then take a look at the web address:
Houses for sale in the East Bay:
http://sfbay.craigslist.org/search/eby/rea?housing_type=6
Houses for sale in selected neighborhoods in the East Bay:
http://sfbay.craigslist.org/search/eby/rea?nh=46&nh=47&nh=48&nh=49&nh=112&nh=54&nh=55&nh=60&nh=62&nh=63&nh=66&housing_type=6
End of explanation
"""
# Looked through above output and saw housing entries contained in <p class="row">
# Get a list of housing data and store the results
houses = html.find_all('p', attrs={'class': 'row'}) # html.findAll(attrs={'class': "row"})
print(len(houses))
# List neighborhoods of the houses in the list
neighborhoods = pd.DataFrame(data = ones(len(houses)), columns = ['Neighborhoods'])
for n in range(len(houses)):  # loop over every listing
    one_neighborhood = houses[n].findAll(attrs={'class': 'pnr'})[0].text
    neighborhoods.iloc[n] = one_neighborhood
#print(neighborhoods)
"""
Explanation: House entries
End of explanation
"""
# There's a consistent structure to each housing listing:
# There is a 'time',
# a <span class="price">,
# a 'housing',
# a <span class="pnr"> neighborhood field
# Look at info for a single house
one_house = houses[11] # 11, 19, 28 is the selected row number for a housing listing
# Print out and view a single house entry, use prettify to make more legible
print(one_house.prettify())
"""
Explanation: Look at a single entry - for one house
To explore the data before working with the whole dataset.
End of explanation
"""
# For one housing entry look at fields of interest: Price, Neighborhood, Size, Date Posted
# Clean up values manually, to figure out how to automate
# Listing
allspecs = one_house.findAll(attrs={'class': 'l2'})[0].text # `findAll` returns a list, and there's only one entry in this html
print('Listing: \n', allspecs, '\n')
# Price
print('Price:')
price = one_house.findAll(attrs={'class': 'price'})[0].text
print(price)
price = float(one_house.find('span', {'class': 'price'}).text.strip('$'))
print(price, '\t', type(price), '\n')
# Neighborhood
print('Neighborhood:')
neighborhood = one_house.findAll(attrs={'class': 'pnr'})[0].text
print(neighborhood)
# Keep the neighborhood, remove leading spaces and parentheses.
# Then split at the closing parentheses and only take the neighborhood part
# example: ' (vallejo / benicia) pic map '
neighborhood = one_house.findAll(attrs={'class': 'pnr'})[0].text.strip(' (').split(')')[0]
print(neighborhood, '\t', type(neighborhood), '\n')
#print(len([rw.findAll(attrs={'class': 'pnr'})[0].text.strip(' (').split(')')[0] for rw in houses]))
# Size
print('Size: bedrooms and sq ft: ')
size = one_house.findAll(attrs={'class': 'housing'})[0].text
print(size)
# Strip text of leading and trailing characters: /, dashes, and spaces
# Split number of bedrooms and square footage into 2 fields in list
size = one_house.findAll(attrs={'class': 'housing'})[0].text.strip('/- ').split(' - ')
print(size)
# Delete suffixes and just keep the numbers
size[0] = float(size[0].replace('br', '')) # number of bedrooms
size[1] = float(size[1].replace('ft2', '')) # square footage
print(size, '\t', type(size[0]), '\n')
# Address/Posting Title
address = one_house.findAll(attrs={'class': 'hdrlnk'})[0].text
print(address, '\n')
#link = 'http://sfbay.craigslist.org/search' + one_house.findAll(attrs={'class': 'hdrlnk'})[0]['href']
#print(link, '\n')
# Date posted
dateposted = one_house.findAll(attrs={'class': 'pl'})[0].time['datetime']
print(dateposted, '\t', type(dateposted))
# Convert to datetime type so can extract date
date = pd.to_datetime(one_house.find('time')['datetime']).date()
print(date, '\t', type(date))
"""
Explanation: A single housing entry looks like this:
<p class="row" data-pid="5434788772" data-repost-of="5355580942">
<a class="i" data-ids="0:00N0N_iJtLy33ZoH8,0:00N0N_iJtLy33ZoH8,0:00G0G_TjCb4MW8vL,0:00K0K_k81VU0LUbQY,0:01313_fVvAnbOJv15,0:00b0b_g31lNUegSPp,0:00606_hZwz7JY1p7Y,0:00909_1Kedy8CYMdC,0:01313_gazwY7Ur82i,0:00202_af2wTsicuhA,0:00K0K_1Jm1oKML6pU,0:00s0s_7BOtkml8sUj,0:00h0h_fjOOi8ydLtF" href="/eby/reb/5434788772.html">
</a>
<span class="txt">
<span class="star">
</span>
<span class="pl">
<time datetime="2016-02-05 08:55" title="Fri 05 Feb 08:55:07 AM">
Feb 5
</time>
<a class="hdrlnk" data-id="5434788772" href="/eby/reb/5434788772.html">
OPEN House Sunday 2-4pm, For SALE Spacious 2 Bedroom, 1 Bathroom Home
</a>
</span>
<span class="l2">
<span class="price">
$440000
</span>
<span class="housing">
/ 2br - 1156ft
<sup>
2
</sup>
-
</span>
<span class="pnr">
<small>
(Oakland)
</small>
<span class="px">
<span class="p">
pic
<span class="maptag" data-pid="5434788772">
map
</span>
</span>
</span>
</span>
</span>
<span class="js-only banish-unbanish">
<span class="banish" title="hide">
<span class="trash">
</span>
</span>
<span class="unbanish" title="restore">
<span class="trash red">
</span>
</span>
</span>
</span>
</p>
<p class="row" data-pid="5433676803" data-repost-of="5414688273">
<a class="i" data-ids="0:00808_dgD6sMXvscr,0:00o0o_g8e6j9elDPU,0:00D0D_1ASSvbr4ji5,0:00y0y_5BDXnFKvcCg,0:00L0L_2tqVoeDNfer,0:00m0m_jsPZJtgSSJF,0:00202_iOyrLKhYx4a,0:00101_5cAhwpbhsBt,0:00U0U_2UkxFRw5Lj1,0:00Z0Z_2uLprMUbHjz,0:00p0p_jfCizNzlCI7,0:00303_45wmy0xh4dG,0:00P0P_i58qE8i45tT,0:00G0G_iaoe8wdRH7H,0:00p0p_jCIjJ2rA6pW" href="/eby/reb/5433676803.html">
</a>
<span class="txt">
<span class="star">
</span>
<span class="pl">
<time datetime="2016-02-04 12:33" title="Thu 04 Feb 12:33:18 PM">
Feb 4
</time>
<a class="hdrlnk" data-id="5433676803" href="/eby/reb/5433676803.html">
OPEN HOUSE SAT 1-4 Sunny Richmond Home
</a>
</span>
<span class="l2">
<span class="housing">
3br - 1330ft
<sup>
2
</sup>
-
</span>
<span class="pnr">
<small>
(richmond / point / annex)
</small>
<span class="px">
<span class="p">
pic
<span class="maptag" data-pid="5433676803">
map
</span>
</span>
</span>
</span>
</span>
<span class="js-only banish-unbanish">
<span class="banish" title="hide">
<span class="trash">
</span>
</span>
<span class="unbanish" title="restore">
<span class="trash red">
</span>
</span>
</span>
</span>
</p>
<p class="row" data-pid="5433612326">
<a class="i" data-ids="0:00Q0Q_htb9rv5xsF6,0:00J0J_8fQcbC1GN0K,0:00C0C_fWSz7oQ64wG,0:00X0X_fFkvYwa1egh,0:01717_5p2bu1Txk3P,0:00U0U_9lTAjtB6OT3,0:00b0b_19p3EtCFxMf,0:00v0v_5Ny1hBHiN69,0:00v0v_1TW0gNDOnnE,0:00l0l_h9cpsiY9FJB,0:00F0F_7c5FQ2LdYGP,0:00W0W_h8naPbNyKg6,0:00d0d_9SFc0l0Q7he,0:00S0S_auVynUzYLdJ" href="/eby/reb/5433612326.html">
</a>
<span class="txt">
<span class="star">
</span>
<span class="pl">
<time datetime="2016-02-04 11:52" title="Thu 04 Feb 11:52:28 AM">
Feb 4
</time>
<a class="hdrlnk" data-id="5433612326" href="/eby/reb/5433612326.html">
Millsmont House For Sale
</a>
</span>
<span class="l2">
<span class="price">
$579950
</span>
<span class="housing">
/ 3br - 1912ft
<sup>
2
</sup>
-
</span>
<span class="pnr">
<small>
(oakland hills / mills)
</small>
<span class="px">
<span class="p">
pic
<span class="maptag" data-pid="5433612326">
map
</span>
</span>
</span>
</span>
</span>
<span class="js-only banish-unbanish">
<span class="banish" title="hide">
<span class="trash">
</span>
</span>
<span class="unbanish" title="restore">
<span class="trash red">
</span>
</span>
</span>
</span>
</p>
<p class="row" data-pid="5432692610" data-repost-of="5137664009">
<a class="i" data-ids="0:00303_h5PfjA9mASD" href="/eby/reb/5432692610.html">
</a>
<span class="txt">
<span class="star">
</span>
<span class="pl">
<time datetime="2016-02-03 19:42" title="Wed 03 Feb 07:42:39 PM">
Feb 3
</time>
<a class="hdrlnk" data-id="5432692610" href="/eby/reb/5432692610.html">
Excellent Home in Berkeley
</a>
</span>
<span class="l2">
<span class="price">
$450000
</span>
<span class="pnr">
<small>
(berkeley)
</small>
<span class="px">
<span class="p">
pic
</span>
</span>
</span>
</span>
<span class="js-only banish-unbanish">
<span class="banish" title="hide">
<span class="trash">
</span>
</span>
<span class="unbanish" title="restore">
<span class="trash red">
</span>
</span>
</span>
</span>
</p>
<p class="row" data-pid="5432698438" data-repost-of="5113860864">
<a class="i" data-ids="0:00U0U_7EFsiQLPVhn" href="/eby/reb/5432698438.html">
</a>
<span class="txt">
<span class="star">
</span>
<span class="pl">
<time datetime="2016-02-03 19:33" title="Wed 03 Feb 07:33:29 PM">
Feb 3
</time>
<a class="hdrlnk" data-id="5432698438" href="/eby/reb/5432698438.html">
Conveniently located in Albany
</a>
</span>
<span class="l2">
<span class="price">
$600000
</span>
<span class="pnr">
<small>
(albany / el cerrito)
</small>
<span class="px">
<span class="p">
pic
</span>
</span>
</span>
</span>
<span class="js-only banish-unbanish">
<span class="banish" title="hide">
<span class="trash">
</span>
</span>
<span class="unbanish" title="restore">
<span class="trash red">
</span>
</span>
</span>
</span>
</p>
End of explanation
"""
# Define 4 functions for the price, neighborhood, sq footage & # bedrooms, and time
# that can deal with missing values (to prevent errors from showing up when running the code)
# Prices
def find_prices(results):
prices = []
for rw in results:
price = rw.find('span', {'class': 'price'})
if price is not None:
price = float(price.text.strip('$'))
else:
price = np.nan
prices.append(price)
return prices
# Neighborhoods
# Example: ' (oakland hills / mills) pic map '
# Define a function for neighborhood in case a field is missing in 'class': 'pnr'
def find_neighborhood(results):
neighborhoods = []
for rw in results:
split = rw.find('span', {'class': 'pnr'}).text.strip(' (').split(')')
#split = rw.find(attrs={'class': 'pnr'}).text.strip(' (').split(')')
        if len(split) == 2:
            neighborhood = split[0]
        else:
            # listing has no neighborhood field (only 'pic'/'map' text)
            neighborhood = np.nan
        neighborhoods.append(neighborhood)
return neighborhoods
# Size
# Make a function to deal with size in case #br or ft2 is missing
def find_size_and_brs(results):
sqft = []
bedrooms = []
for rw in results:
split = rw.find('span', attrs={'class': 'housing'})
# If the field doesn't exist altogether in a housing entry
if split is not None:
#if rw.find('span', {'class': 'housing'}) is not None:
# Removes leading and trailing spaces and dashes, splits br & ft
#split = rw.find('span', attrs={'class': 'housing'}).text.strip('/- ').split(' - ')
split = split.text.strip('/- ').split(' - ')
if len(split) == 2:
n_brs = split[0].replace('br', '')
size = split[1].replace('ft2', '')
elif 'br' in split[0]: # in case 'size' field is missing
n_brs = split[0].replace('br', '')
size = np.nan
elif 'ft2' in split[0]: # in case 'br' field is missing
size = split[0].replace('ft2', '')
n_brs = np.nan
else:
size = np.nan
n_brs = np.nan
sqft.append(float(size))
bedrooms.append(float(n_brs))
return sqft, bedrooms
# Time posted
def find_times(results):
times = []
for rw in results:
time = rw.findAll(attrs={'class': 'pl'})[0].time['datetime']
        if time is None:
            time = np.nan
times.append(time)
return pd.to_datetime(times)
prices = find_prices(houses)
neighborhoods = find_neighborhood(houses)
sqft, bedrooms = find_size_and_brs(houses)
times = find_times(houses)
# Check
print(len(prices))
print(len(neighborhoods))
print(len(sqft))
print(len(bedrooms))
print(len(times))
# Add the data to a dataframe so I can work with it
housesdata = np.array([prices, sqft, bedrooms]).T
#print(housesdata)
# Add the array to the dataframe, then the dates column and the neighborhoods column
housesdf = pd.DataFrame(data = housesdata, columns = ['Price', 'SqFeet', 'nBedrooms'])
housesdf['DatePosted'] = times
housesdf['Neighborhood'] = neighborhoods
print(housesdf.tail(5))
print(housesdf.dtypes)
# Quick plot to look at the data
fig = plt.figure()
fig.set_figheight(6.0)
fig.set_figwidth(10.0)
ax = fig.add_subplot(111) # row column position
ax.plot(housesdf.SqFeet, housesdf.Price, 'bo')
ax.set_xlim(0,5000)
ax.set_ylim(0,3000000)
ax.set_xlabel('$\mathrm{Square \; feet}$',fontsize=18)
ax.set_ylabel('$\mathrm{Price \; (in \; \$)}$',fontsize=18)
len(housesdf.SqFeet)
# Quick plot to look at the data
fig = plt.figure()
fig.set_figheight(6.0)
fig.set_figwidth(10.0)
ax = fig.add_subplot(111) # row column position
ax.plot(housesdf.nBedrooms, housesdf.Price, 'bo')
ax.set_xlim(1.5, 5.5)
ax.set_ylim(0,3000000)
ax.set_xlabel('$\mathrm{Number \; of \; Bedrooms}$',fontsize=18)
ax.set_ylabel('$\mathrm{Price \; (in \; \$)}$',fontsize=18)
len(housesdf.nBedrooms)
# Get houses listed in Berkeley
#housesdf[housesdf['Neighborhood'] == 'berkeley']
housesdf[housesdf['Neighborhood'] == 'berkeley north / hills']
#housesdf[housesdf['Neighborhood'] == 'oakland rockridge / claremont']
#housesdf[housesdf['Neighborhood'] == 'albany / el cerrito']
#housesdf[housesdf['Neighborhood'] == 'richmond / point / annex']
# How many houses for sale are under $700k?
print(housesdf[(housesdf.Price < 700000)].count(), '\n') # nulls aren't counted in count
# In which neighborhoods are these houses located?
print(set(housesdf[(housesdf.Price < 700000)].Neighborhood))
# Return entries for houses under $700k, sorted by price from least expensive to most
housesdf[(housesdf.Price < 700000)].sort_values(['Price'], ascending = [True])
"""
Explanation: All rows, all housing entries
Now that I've figured out how to extract data for 1 house, do for the list of houses
End of explanation
"""
by_neighborhood = housesdf.groupby('Neighborhood')
print(by_neighborhood.count())#.head()) # NOT NULL records within each column
#print('\n')
#print(by_neighborhood.size())#.head()) # total records for each neighborhood
#by_neighborhood.Neighborhood.nunique()
print(len(housesdf.index)) # total #rows
print(len(set(housesdf.Neighborhood))) # #unique neighborhoods
set(housesdf.Neighborhood) # list the #unique neighborhoods
# Group the results by neighborhood, and then take the average home price in each neighborhood
by_neighborhood = housesdf.groupby('Neighborhood').mean().Price # by_neighborhood_mean_price
print(by_neighborhood.head(5), '\n')
print(by_neighborhood['berkeley north / hills'], '\n')
#print(by_neighborhood.index, '\n')
by_neighborhood_sort_price = by_neighborhood.sort_values(ascending = True)
#print(by_neighborhood_sort_price.index) # a list of the neighborhoods sorted by price
print(by_neighborhood_sort_price)
# Plot average home price for each neighborhood in the East Bay
# dropna()
fig = plt.figure() # or fig = plt.figure(figsize=(15,8)) # width, height
fig.set_figheight(8.0)
fig.set_figwidth(13.0)
ax = fig.add_subplot(111) # row column position
fntsz=20
titlefntsz=25
lablsz=20
mrkrsz=8
matplotlib.rc('xtick', labelsize = lablsz); matplotlib.rc('ytick', labelsize = lablsz)
# Choose a baseline, based on proximity to current location
# 'berkeley', 'berkeley north / hills', 'albany / el cerrito'
neighborhood_name = 'berkeley north / hills'
# Plot a bar chart
ax.bar(range(len(by_neighborhood_sort_price.dropna())), by_neighborhood_sort_price.dropna(), align='center')
# Add a horizontal line for Berkeley's (or the baseline's) average home price, corresponds with Berkeley bar
ax.axhline(y=housesdf.groupby('Neighborhood').mean().Price.ix[neighborhood_name], linestyle='--')
# Add a grid
ax.grid(b = True, which='major', axis='y') # which='major','both'; options/kwargs: color='r', linestyle='-', linewidth=2)
# Format x axis
ax.set_xticks(range(1,len(housesdf.groupby('Neighborhood').mean().Price.dropna()))); # 0 if first row is at least 100,000
ax.set_xticklabels(by_neighborhood_sort_price.dropna().index[1:], rotation='vertical', fontsize=fntsz) # remove [1:], 90, 45, 'vertical'
ax.set_xlim(0, len(by_neighborhood_sort_price.dropna().index)) # -1 if first row is at least 100,000
# Format y axis
minor_yticks = np.arange(0, 2000000, 100000)
ax.set_yticks(minor_yticks, minor = True)
ax.tick_params(axis='y', labelsize=fntsz)
ax.set_ylabel('$\mathrm{Price \; (Dollars)}$', fontsize = titlefntsz)
# Set figure title
ax.set_title('$\mathrm{Average \; Home \; Prices \; in \; the \; East \; Bay \; (Source: Craigslist)}$', fontsize = titlefntsz)
# Save figure
#plt.savefig("home_prices.pdf", bbox_inches='tight')
# Home prices in Berkeley (or the baseline)
print('The average home price in %s is: $' %neighborhood_name, '{0:8,.0f}'.format(housesdf.groupby('Neighborhood').mean().Price.ix[neighborhood_name]), '\n')
print('The most expensive home price in %s is: $' %neighborhood_name, '{0:8,.0f}'.format(housesdf.groupby('Neighborhood').max().Price.ix[neighborhood_name]), '\n')
print('The least expensive home price in %s is: $' %neighborhood_name, '{0:9,.0f}'.format(housesdf.groupby('Neighborhood').min().Price.ix[neighborhood_name]), '\n')
"""
Explanation: Group results by neighborhood and plot
End of explanation
"""
|
aam-at/tensorflow
|
tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
import numpy as np
assert float(tf.__version__[:3]) >= 2.3
"""
Explanation: Post-training integer quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_integer_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
Integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inferencing speed, which is valuable for low-power devices such as microcontrollers. This data format is also required by integer-only accelerators such as the Edge TPU.
In this tutorial, you'll train an MNIST model from scratch, convert it into a Tensorflow Lite file, and quantize it using post-training quantization. Finally, you'll check the accuracy of the converted model and compare it to the original float model.
You actually have several options as to how much you want to quantize a model. In this tutorial, you'll perform "full integer quantization," which converts all weights and activation outputs into 8-bit integer data, whereas other strategies may leave some amount of data in floating-point.
To learn more about the various quantization strategies, read about TensorFlow Lite model optimization.
Setup
In order to quantize both the input and output tensors, we need to use APIs added in TensorFlow r2.3:
End of explanation
"""
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define the model architecture
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=5,
validation_data=(test_images, test_labels)
)
"""
Explanation: Generate a TensorFlow Model
We'll build a simple model to classify numbers from the MNIST dataset.
This training won't take long because you're training the model for just 5 epochs, which trains to about 98% accuracy.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
"""
Explanation: Convert to a TensorFlow Lite model
Now you can convert the trained model to TensorFlow Lite format using the TFLiteConverter API, and apply varying degrees of quantization.
Beware that some versions of quantization leave some of the data in float format. So the following sections show each option with increasing amounts of quantization, until we get a model that's entirely int8 or uint8 data. (Notice we duplicate some code in each section so you can see all the quantization steps for each option.)
First, here's a converted model with no quantization:
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
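# A quick sketch (not from the original tutorial): compare the serialized sizes of the
# float model and the dynamic-range quantized model to see the savings.
print("Float model size: %d bytes" % len(tflite_model))
print("Dynamic-range quantized model size: %d bytes" % len(tflite_model_quant))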
"""
Explanation: It's now a TensorFlow Lite model, but it's still using 32-bit float values for all parameter data.
Convert using dynamic range quantization
Now let's enable the default optimizations flag to quantize all fixed parameters (such as weights):
End of explanation
"""
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
"""
Explanation: The model is now a bit smaller with quantized weights, but other variable data is still in float format.
Convert using float fallback quantization
To quantize the variable data (such as model input/output and intermediates between layers), you need to provide a RepresentativeDataset. This is a generator function that provides a set of input data that's large enough to represent typical values. It allows the converter to estimate a dynamic range for all the variable data. (The dataset does not need to be unique compared to the training or evaluation dataset.)
To support multiple inputs, each representative data point is a list and elements in the list are fed to the model according to their indices.
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
"""
Explanation: Now all weights and variable data are quantized, and the model is significantly smaller compared to the original TensorFlow Lite model.
However, to maintain compatibility with applications that traditionally use float model input and output tensors, the TensorFlow Lite Converter leaves the model input and output tensors in float:
End of explanation
"""
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
"""
Explanation: That's usually good for compatibility, but it won't be compatible with devices that perform only integer-based operations, such as the Edge TPU.
Additionally, the above process may leave an operation in float format if TensorFlow Lite doesn't include a quantized implementation for that operation. This strategy allows conversion to complete so you have a smaller and more efficient model, but again, it won't be compatible with integer-only hardware. (All ops in this MNIST model have a quantized implementation.)
So to ensure an end-to-end integer-only model, you need a couple more parameters...
Convert using integer-only quantization
To quantize the input and output tensors, and make the converter throw an error if it encounters an operation it cannot quantize, convert the model again with some additional parameters:
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
"""
Explanation: The internal quantization remains the same as above, but you can see the input and output tensors are now integer format:
End of explanation
"""
import pathlib
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
# Save the unquantized/float model:
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Save the quantized model:
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
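# Sketch: check the on-disk sizes of the files just written (uses only the standard
# library and the paths created above).
import os
print("Float model: %.1f KB" % (os.path.getsize(tflite_model_file) / 1024))
print("Quantized model: %.1f KB" % (os.path.getsize(tflite_model_quant_file) / 1024))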
"""
Explanation: Now you have an integer quantized model that uses integer data for the model's input and output tensors, so it's compatible with integer-only hardware such as the Edge TPU.
Save the models as files
You'll need a .tflite file to deploy your model on other devices. So let's save the converted models to files and then load them when we run inferences below.
End of explanation
"""
# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
global test_images
# Initialize the interpreter
interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
predictions = np.zeros((len(test_image_indices),), dtype=int)
for i, test_image_index in enumerate(test_image_indices):
test_image = test_images[test_image_index]
test_label = test_labels[test_image_index]
# Check if the input type is quantized, then rescale input data to uint8
if input_details['dtype'] == np.uint8:
input_scale, input_zero_point = input_details["quantization"]
test_image = test_image / input_scale + input_zero_point
test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
interpreter.set_tensor(input_details["index"], test_image)
interpreter.invoke()
output = interpreter.get_tensor(output_details["index"])[0]
predictions[i] = output.argmax()
return predictions
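# Illustrative sketch of the affine quantization used in the rescaling step above:
# a real value r maps to q = r / scale + zero_point, and back via r = (q - zero_point) * scale.
# The scale and zero point below are made-up example values, not read from this model.
example_scale, example_zero_point = 1.0 / 255.0, 0
r = 0.5
q = int(round(r / example_scale + example_zero_point))
r_back = (q - example_zero_point) * example_scale
print("quantized:", q, "dequantized:", round(r_back, 4))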
"""
Explanation: Run the TensorFlow Lite models
Now we'll run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions:
End of explanation
"""
import matplotlib.pylab as plt
# Change this to test a different image
test_image_index = 1
## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
global test_labels
predictions = run_tflite_model(tflite_file, [test_image_index])
plt.imshow(test_images[test_image_index])
template = model_type + " Model \n True:{true}, Predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[test_image_index]), predict=str(predictions[0])))
plt.grid(False)
"""
Explanation: Test the models on one image
Now we'll compare the performance of the float model and quantized model:
+ tflite_model_file is the original TensorFlow Lite model with floating-point data.
+ tflite_model_quant_file is the last model we converted using integer-only quantization (it uses uint8 data for input and output).
Let's create another function to print our predictions:
End of explanation
"""
test_model(tflite_model_file, test_image_index, model_type="Float")
"""
Explanation: Now test the float model:
End of explanation
"""
test_model(tflite_model_quant_file, test_image_index, model_type="Quantized")
"""
Explanation: And test the quantized model:
End of explanation
"""
# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
global test_images
global test_labels
test_image_indices = range(test_images.shape[0])
predictions = run_tflite_model(tflite_file, test_image_indices)
accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images)
print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
model_type, accuracy, len(test_images)))
"""
Explanation: Evaluate the models on all images
Now let's run both models using all the test images we loaded at the beginning of this tutorial:
End of explanation
"""
evaluate_model(tflite_model_file, model_type="Float")
"""
Explanation: Evaluate the float model:
End of explanation
"""
evaluate_model(tflite_model_quant_file, model_type="Quantized")
"""
Explanation: Evaluate the quantized model:
End of explanation
"""
|
ssunkara1/bqplot
|
examples/Applications/Wealth of Nations.ipynb
|
apache-2.0
|
import pandas as pd
import numpy as np
import os
from bqplot import (
LogScale, LinearScale, OrdinalColorScale, ColorAxis,
Axis, Scatter, Lines, CATEGORY10, Label, Figure, Tooltip
)
from ipywidgets import HBox, VBox, IntSlider, Play, jslink
initial_year = 1800
"""
Explanation: This is a bqplot recreation of Mike Bostock's Wealth of Nations. This was also done by Gapminder. It is originally based on a TED Talk by Hans Rosling.
End of explanation
"""
data = pd.read_json(os.path.abspath('../data_files/nations.json'))
def clean_data(data):
for column in ['income', 'lifeExpectancy', 'population']:
data = data.drop(data[data[column].apply(len) <= 4].index)
return data
def extrap_interp(data):
data = np.array(data)
x_range = np.arange(1800, 2009, 1.)
y_range = np.interp(x_range, data[:, 0], data[:, 1])
return y_range
def extrap_data(data):
for column in ['income', 'lifeExpectancy', 'population']:
data[column] = data[column].apply(extrap_interp)
return data
data = clean_data(data)
data = extrap_data(data)
income_min, income_max = np.min(data['income'].apply(np.min)), np.max(data['income'].apply(np.max))
life_exp_min, life_exp_max = np.min(data['lifeExpectancy'].apply(np.min)), np.max(data['lifeExpectancy'].apply(np.max))
pop_min, pop_max = np.min(data['population'].apply(np.min)), np.max(data['population'].apply(np.max))
def get_data(year):
year_index = year - 1800
income = data['income'].apply(lambda x: x[year_index])
life_exp = data['lifeExpectancy'].apply(lambda x: x[year_index])
pop = data['population'].apply(lambda x: x[year_index])
return income, life_exp, pop
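# Quick sanity check (sketch): peek at the values returned for the starting year.
income_0, life_exp_0, pop_0 = get_data(initial_year)
print(income_0.head())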
"""
Explanation: Cleaning and Formatting JSON Data
End of explanation
"""
tt = Tooltip(fields=['name', 'x', 'y'], labels=['Country Name', 'Income per Capita', 'Life Expectancy'])
"""
Explanation: Creating the Tooltip to display the required fields
bqplot's native Tooltip allows us to simply display the data fields we require on a mouse-interaction.
End of explanation
"""
year_label = Label(x=[0.75], y=[0.10], default_size=46, font_weight='bolder', colors=['orange'],
text=[str(initial_year)], enable_move=True)
"""
Explanation: Creating the Label to display the year
Staying true to the d3 recreation of the talk, we place a Label widget in the bottom-right of the Figure (it inherits the Figure co-ordinates when no scale is passed to it). With enable_move set to True, the Label can be dragged around.
End of explanation
"""
x_sc = LogScale(min=income_min, max=income_max)
y_sc = LinearScale(min=life_exp_min, max=life_exp_max)
c_sc = OrdinalColorScale(domain=data['region'].unique().tolist(), colors=CATEGORY10[:6])
size_sc = LinearScale(min=pop_min, max=pop_max)
ax_y = Axis(label='Life Expectancy', scale=y_sc, orientation='vertical', side='left', grid_lines='solid')
ax_x = Axis(label='Income per Capita', scale=x_sc, grid_lines='solid')
"""
Explanation: Defining Axes and Scales
The inherent skewness of the income data favors the use of a LogScale. Also, since the color coding by regions does not follow an ordering, we use the OrdinalColorScale.
End of explanation
"""
# Start with the first year's data
cap_income, life_exp, pop = get_data(initial_year)
wealth_scat = Scatter(x=cap_income, y=life_exp, color=data['region'], size=pop,
names=data['name'], display_names=False,
scales={'x': x_sc, 'y': y_sc, 'color': c_sc, 'size': size_sc},
default_size=4112, tooltip=tt, animate=True, stroke='Black',
unhovered_style={'opacity': 0.5})
nation_line = Lines(x=data['income'][0], y=data['lifeExpectancy'][0], colors=['Gray'],
scales={'x': x_sc, 'y': y_sc}, visible=False)
"""
Explanation: Creating the Scatter Mark with the appropriate size and color parameters passed
To generate the appropriate graph, we need to pass the population of the country to the size attribute and its region to the color attribute.
End of explanation
"""
time_interval = 10
fig = Figure(marks=[wealth_scat, year_label, nation_line], axes=[ax_x, ax_y],
title='Health and Wealth of Nations', animation_duration=time_interval)
"""
Explanation: Creating the Figure
End of explanation
"""
year_slider = IntSlider(min=1800, max=2008, step=1, description='Year', value=initial_year)
"""
Explanation: Using a Slider to allow the user to change the year and a button for animation
Here we see how we can seamlessly integrate bqplot into the jupyter widget infrastructure.
End of explanation
"""
def hover_changed(change):
if change.new is not None:
nation_line.x = data[data['name'] == wealth_scat.names[change.new]]['income'].values[0]
nation_line.y = data[data['name'] == wealth_scat.names[change.new]]['lifeExpectancy'].values[0]
nation_line.visible = True
else:
nation_line.visible = False
wealth_scat.observe(hover_changed, 'hovered_point')
"""
Explanation: When the hovered_point of the Scatter plot is changed (i.e. when the user hovers over a different element), the entire path of that country is displayed by making the Lines object visible and setting its x and y attributes.
End of explanation
"""
def year_changed(change):
wealth_scat.x, wealth_scat.y, wealth_scat.size = get_data(year_slider.value)
year_label.text = [str(year_slider.value)]
year_slider.observe(year_changed, 'value')
"""
Explanation: On the slider value callback (a function that is triggered every time the value of the slider is changed) we change the x, y and size co-ordinates of the Scatter. We also update the text of the Label to reflect the current year.
End of explanation
"""
play_button = Play(min=1800, max=2008, interval=time_interval)
jslink((play_button, 'value'), (year_slider, 'value'))
"""
Explanation: Add an animation button
End of explanation
"""
VBox([HBox([play_button, year_slider]), fig])
"""
Explanation: Displaying the GUI
End of explanation
"""
|
samuelshaner/openmc
|
docs/source/pythonapi/examples/mgxs-part-i.ipynb
|
mit
|
from IPython.display import Image
Image(filename='images/mgxs.png', width=350)
"""
Explanation: This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the following features:
General equations for scalar-flux averaged multi-group cross sections
Creation of multi-group cross sections for an infinite homogeneous medium
Use of tally arithmetic to manipulate multi-group cross sections
Introduction to Multi-Group Cross Sections (MGXS)
Many Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use multi-group cross sections defined over discretized energy bins or energy groups. An example of U-235's continuous-energy fission cross section along with a 16-group cross section computed for a light water reactor spectrum is displayed below.
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.mgxs as mgxs
"""
Explanation: A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations for fine-mesh heterogeneous deterministic neutron transport applications.
Before proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.
Introductory Notation
The continuous real-valued microscopic cross section may be denoted $\sigma_{n,x}(\mathbf{r}, E)$ for position vector $\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\Phi(\mathbf{r},E)$ for position $\mathbf{r}$ and energy $E$. Note: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.
Spatial and Energy Discretization
The energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation divides this energy range into one or more discrete energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \in {1, 2, ..., G}$. The energy group indices are defined such that the smaller the group index, the higher the energy, and vice versa. The integration over neutron energies across a discrete energy group is commonly referred to as energy condensation.
Multi-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \in {1, 2, ..., K}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as spatial homogenization.
General Scalar-Flux Weighted MGXS
The multi-group cross sections computed by openmc.mgxs are defined as a scalar flux-weighted average of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section $\sigma_{n,x,k,g}$ as follows:
$$\sigma_{n,x,k,g} = \frac{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,x}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$
This scalar flux-weighted average microscopic cross section is computed by openmc.mgxs for most multi-group cross sections, including total, absorption, and fission reaction types. These double integrals are stochastically computed with OpenMC's tally system - in particular, filters on the energy range and spatial zone (material, cell or universe) define the bounds of integration for both numerator and denominator.
Multi-Group Scattering Matrices
The general multi-group cross section $\sigma_{n,x,k,g}$ is a vector of $G$ values for each energy group $g$. The equation presented above only discretizes the energy of the incoming neutron and neglects the outgoing energy of the neutron (if any). Hence, this formulation must be extended to account for the outgoing energy of neutrons in the discretized scattering matrix cross section used by deterministic neutron transport codes.
We denote the incoming and outgoing neutron energy groups as $g$ and $g'$ for the microscopic scattering matrix cross section $\sigma_{n,s}(\mathbf{r},E)$. As before, spatial homogenization and energy condensation are used to find the multi-group scattering matrix cross section $\sigma_{n,s,k,g \to g'}$ as follows:
$$\sigma_{n,s,k,g\rightarrow g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\sigma_{n,s}(\mathbf{r},E'\rightarrow E'')\Phi(\mathbf{r},E')}{\int_{E_{g}}^{E_{g-1}}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\Phi(\mathbf{r},E')}$$
This scalar flux-weighted multi-group microscopic scattering matrix is computed using OpenMC tallies with both energy in and energy out filters.
Multi-Group Fission Spectrum
The energy spectrum of neutrons emitted from fission is denoted by $\chi_{n}(\mathbf{r},E' \rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. The fission spectrum may be simplified as $\chi_{n}(\mathbf{r},E)$ with outgoing energy $E$.
Unlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\sigma_{n,f}(\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\nu_{n}(\mathbf{r},E)$. The multi-group fission spectrum $\chi_{n,k,g}$ is then the probability of fission neutrons emitted into energy group $g$.
Similar to before, spatial homogenization and energy condensation are used to find the multi-group fission spectrum $\chi_{n,k,g}$ as follows:
$$\chi_{n,k,g'} = \frac{\int_{E_{g'}}^{E_{g'-1}}\mathrm{d}E''\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\chi_{n}(\mathbf{r},E'\rightarrow E'')\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}{\int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r}\nu_{n}(\mathbf{r},E')\sigma_{n,f}(\mathbf{r},E')\Phi(\mathbf{r},E')}$$
The fission production-weighted multi-group fission spectrum is computed using OpenMC tallies with both energy in and energy out filters.
This concludes our brief overview on the methodology to compute multi-group cross sections. The following sections detail more concretely how users may employ the openmc.mgxs module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.
Generate Input Files
End of explanation
"""
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
zr90 = openmc.Nuclide('Zr90')
"""
Explanation: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
End of explanation
"""
# Instantiate a Material and register the Nuclides
inf_medium = openmc.Material(name='moderator')
inf_medium.set_density('g/cc', 5.)
inf_medium.add_nuclide(h1, 0.028999667)
inf_medium.add_nuclide(o16, 0.01450188)
inf_medium.add_nuclide(u235, 0.000114142)
inf_medium.add_nuclide(u238, 0.006886019)
inf_medium.add_nuclide(zr90, 0.002116053)
"""
Explanation: With the nuclides we defined, we will now create a material for the homogeneous medium.
End of explanation
"""
# Instantiate a Materials collection and export to XML
materials_file = openmc.Materials([inf_medium])
materials_file.export_to_xml()
"""
Explanation: With our material, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
"""
# Instantiate boundary Planes
min_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)
max_x = openmc.XPlane(boundary_type='reflective', x0=0.63)
min_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)
max_y = openmc.YPlane(boundary_type='reflective', y0=0.63)
"""
Explanation: Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
End of explanation
"""
# Instantiate a Cell
cell = openmc.Cell(cell_id=1, name='cell')
# Register bounding Surfaces with the Cell
cell.region = +min_x & -max_x & +min_y & -max_y
# Fill the Cell with the Material
cell.fill = inf_medium
"""
Explanation: With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Instantiate Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(cell)
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create a root universe and add our square cell to it.
End of explanation
"""
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
"""
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
End of explanation
"""
# Instantiate a 2-group EnergyGroups object
groups = mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])
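# Sketch (assumption): the same EnergyGroups interface accepts any monotonic set of
# edges, so a finer structure could be defined with, e.g., logarithmically spaced bins.
fine_groups = mgxs.EnergyGroups()
fine_groups.group_edges = np.logspace(-5, np.log10(20.0e6), num=9)  # 9 edges -> 8 groups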
"""
Explanation: Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
"""
# Instantiate a few different sections
total = mgxs.TotalXS(domain=cell, groups=groups)
absorption = mgxs.AbsorptionXS(domain=cell, groups=groups)
scattering = mgxs.ScatterXS(domain=cell, groups=groups)
"""
Explanation: We can now use the EnergyGroups object, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class:
TotalXS
TransportXS
NuTransportXS
AbsorptionXS
CaptureXS
FissionXS
NuFissionXS
KappaFissionXS
ScatterXS
NuScatterXS
ScatterMatrixXS
NuScatterMatrixXS
Chi
ChiPrompt
InverseVelocity
PromptNuFissionXS
These classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. In this case, let's create the multi-group total, absorption and scattering cross sections with our 2-group structure.
End of explanation
"""
absorption.tallies
"""
Explanation: Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Absorption object as follows.
End of explanation
"""
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Add total tallies to the tallies file
tallies_file += total.tallies.values()
# Add absorption tallies to the tallies file
tallies_file += absorption.tallies.values()
# Add scattering tallies to the tallies file
tallies_file += scattering.tallies.values()
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: The Absorption object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each MGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the "tallies.xml" input file for OpenMC.
End of explanation
"""
# Run OpenMC
openmc.run()
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Load the tallies from the statepoint into each MGXS object
total.load_from_statepoint(sp)
absorption.load_from_statepoint(sp)
scattering.load_from_statepoint(sp)
"""
Explanation: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data.
The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
"""
total.print_xs()
"""
Explanation: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Extracting and Storing MGXS Data
Let's first inspect our total cross section by printing it to the screen.
End of explanation
"""
df = scattering.get_pandas_dataframe()
df.head(10)
"""
Explanation: Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a "derived" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.
End of explanation
"""
absorption.export_xs_data(filename='absorption-xs', format='excel')
"""
Explanation: Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
End of explanation
"""
total.build_hdf5_store(filename='mgxs', append=True)
absorption.build_hdf5_store(filename='mgxs', append=True)
scattering.build_hdf5_store(filename='mgxs', append=True)
"""
Explanation: The following code snippet shows how to export all three MGXS to the same HDF5 binary data store.
End of explanation
"""
# Use tally arithmetic to compute the difference between the total, absorption and scattering
difference = total.xs_tally - absorption.xs_tally - scattering.xs_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
difference.get_pandas_dataframe()
"""
Explanation: Comparing MGXS with Tally Arithmetic
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to confirm that the TotalXS is equal to the sum of the AbsorptionXS and ScatterXS objects.
End of explanation
"""
# Use tally arithmetic to compute the absorption-to-total MGXS ratio
absorption_to_total = absorption.xs_tally / total.xs_tally
# The absorption-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
absorption_to_total.get_pandas_dataframe()
# Use tally arithmetic to compute the scattering-to-total MGXS ratio
scattering_to_total = scattering.xs_tally / total.xs_tally
# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection
scattering_to_total.get_pandas_dataframe()
"""
Explanation: Similarly, we can use tally arithmetic to compute the ratio of AbsorptionXS and ScatterXS to the TotalXS.
End of explanation
"""
# Use tally arithmetic to ensure that the absorption- and scattering-to-total MGXS ratios sum to unity
sum_ratio = absorption_to_total + scattering_to_total
# The sum of the two ratios is a derived tally which can generate Pandas DataFrames for inspection
sum_ratio.get_pandas_dataframe()
"""
Explanation: Lastly, we sum the derived scatter-to-total and absorption-to-total ratios to confirm that they sum to unity.
End of explanation
"""
|
wasit7/PythonDay
|
notebook/Somkiat's Basic Python.ipynb
|
bsd-3-clause
|
x=1
print x
type(x)
x.conjugate()
type(1+2j)
z=1+2j
print z
(1,2)
t=(1,2,"text")
t
t
def foo():
return (1,2)
x,y=foo()
print x
print y
def swap(x,y):
return (y,x)
x=1;y=2
print "{0:d} {1:d}".format(x,y)
x,y=swap(x,y)
print "{:f} {:f}".format(x,y)
dir(1)
x=[]
x.append("text")
x
x.append(1)
x.pop()
x.append([1,2,3])
x
x.append(2)
x
print x[0]
print x[-2]
x.pop(-2)
x
%%timeit -n10
x=[]
for i in range(100000):
x.append(2*i+1)
%%timeit -n10
x=[]
for i in xrange(100000):
x.append(2*i+1)
range(10)
y=[2*i+1 for i in xrange(10)]
print y
type({})
x={"key":"value","foo":"bar"}
print x
key="key1"
if key in x:
print x[key]
y={ i:i*i for i in xrange(10)}
y
z=[v for (k,v) in y.iteritems()]
print z
"""
Explanation: Environment setup: Python and Jupyter
Variables: Numbers, String, Tuple, List, Dictionary
Basic operators: Arithmetic and Boolean operators
Control flow: if/else, for, while, pass, break, continue
List: access, update, del, len(), + , in, for, slicing, append(), insert(), pop(), remove()
Dictionary: access, update, del, in
Function: function definition, pass by reference, keyword argument, default argument, lambda
map reduce filter
Module: from, import, reload(), package has __init__.py, __init__ and __str__
I/O: raw_input(), input(), open(), close(), write(), read(), rename(), remove(), mkdir(), chdir(), rmdir()
Pass by value, Pass by reference
Date/time: local time and time zone, pytz module
Variables: Numbers, String, Tuple, List, Dictionary
End of explanation
"""
p=[]
for i in xrange(2,100):
isprime=1
for j in p:
if(i%j==0):
isprime=0
break
if isprime:
p.append(i)
print p
for i in xrange(10):
pass
i=10
while i>0:
i=i-1
print i
x=['text',"str",''' Hello World\\n ''']
print x
"""
Explanation: if/else, for, while, pass, break, continue
End of explanation
"""
x=['a','b','c']
#access
print x[0]
#update
x[0]='d'
print x
print "size of x%d is"%len(x)
y=['x','y','z']
z=x+y
gamma=y+x
print z
print gamma
print 'a' in x
print y
y.remove('y') # remove by value
print y
print y
y.pop(0)# remove by index
print y
y.insert(0,'x')
y.insert(1,'y')
print y
x=[i*i for i in xrange(10)]
print x
x[:3]
x[-3:]
x[-1:]
x[3:-3]
x[1:6]
x[::2]
print x
x.reverse()
print x
print x
print x[::-1]
print x
"""
Explanation: List: access, update, del, len(), + , in, for, slicing, append(), insert(), pop(), remove()
End of explanation
"""
x={}
x={'key':'value'}
x['foo']='bar'
x
x['foo']='Hello'
x
x['m']=123
x['foo','key']  # raises a KeyError: the tuple ('foo', 'key') is looked up as a single key
keys=['foo','key']
[x[k] for k in keys]
print x
del x
print x  # raises a NameError because x was deleted above
"""
Explanation: Dictionary: access, update, del, in
End of explanation
"""
def foo(x):
x=x+1
y=2*x
return y
print foo(3)
x=3
print foo(x)
print x
def bar(x=[]):
x.append(7)
print "in loop: {}".format(x)
x=[1,2,3]
print x
bar(x)
print x
def func(x=0,y=0,z=0):#defualt input argument
return x*100+y*10+z
func(1,2)
func(y=2,z=3,x=1)#keyword input argument
f=func
f(y=2)
distance=[13,500,1370]#meter
def meter2Kilometer(d):
return d/1000.0;
meter2Kilometer(distance)  # raises a TypeError: a list cannot be divided by a float; convert each element instead (below)
[meter2Kilometer(d) for d in distance]
d2 = map(meter2Kilometer,distance)
print d2
d3 = map(lambda x: x/1000.0,distance)
print d3
distance=[13,500,1370]#meter
time=[1,10,100]
d3 = map(lambda s,t: s/float(t)*3.6, distance,time )
print d3
d4=filter(lambda s: s<1000, distance)
print d4
total_distance=reduce(lambda i,j : i+j, distance)
total_distance
import numpy as np
x=np.arange(101)
print x
np.histogram(x,bins=[0,50,60,70,80,100])
print np.sort(x)
"""
Explanation: Function: function definition, pass by reference, keyword argument, default argument, lambda
a built-in immutable type: str, int, long, bool, float, tuple
End of explanation
"""
class Obj:
def __init__(self, _x, _y):
self.x = _x
self.y = _y
def update(self, _x, _y):
self.x += _x
self.y += _y
def __str__(self):
return "x:%d, y:%d"%(self.x,self.y)
a=Obj(5,7)#call __init__
print a#call __str__
a.update(1,2)#call update
print a
import sys
import os
path=os.getcwd()
path=os.path.join(path,'lib')
print path
sys.path.insert(0, path)
from Obj import Obj as ob
b=ob(7,9)
print b
b.update(3,7)
print b
os.getcwd()
from mylib import mymodule as mm
mm=reload(mm)
print mm.Obj2(8,9)
"""
Explanation: Module: class, from, import, reload(), package and init
End of explanation
"""
|
mommermi/Introduction-to-Python-for-Scientists
|
notebooks/Function_Fitting_20161028.ipynb
|
mit
|
# matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# read in signal.csv
data = np.genfromtxt('signal.csv', delimiter=',',
dtype=[('x', float), ('y', float), ('yerr', float)])
"""
Explanation: Example: Function Fitting
This example involves basic plotting with Matplotlib and function fitting with Scipy.Optimize.
(see: http://matplotlib.org/ and https://docs.scipy.org/doc/scipy/reference/optimize.html)
Motivation
Imagine the following situation: you are handed some data (https://raw.githubusercontent.com/mommermi/Introduction-to-Python-for-Scientists/master/notebooks/signal.csv) - some signal $y$ with corresponding uncertainties $\sigma_y$ as a function of $x$ - and you have to find a function that describes the data. What to do?
Of course, let's have a look at the data first and plot them.
Basic Plotting with Matplotlib
End of explanation
"""
f, ax = plt.subplots()
ax.scatter(data['x'], data['y'])
plt.show()
"""
Explanation: Let's plot the data in a very simple scatter plot:
End of explanation
"""
f, ax = plt.subplots()
ax.errorbar(data['x'], data['y'], yerr=data['yerr'], linestyle='', color='red', label='Signal Data')
ax.set_xlabel('x [a.u.]')
ax.set_ylabel('y [a.u.]')
ax.legend(numpoints=1, loc=2)
plt.show()
"""
Explanation: The first line creates two things: a figure (f) and a subplot (ax), which is referred to as an axis (see http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplots). You can create a panel with multiple subplots (see below). The second line uses the subplot ax and creates a scatter plot with the provided $x$ and $y$ data. The scatter function can use different symbols and colors (see http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter).
Let's add some axis labels and a legend, use errorbars, and change the symbol color:
End of explanation
"""
from scipy.optimize import curve_fit
# define the line function
def model(x, a, b): # first argument is always x, rest are parameters
return a*x+b
# fit using least-squares fitting
bestfit, cov = curve_fit(model, data['x'], data['y'], sigma=data['yerr'])
print 'best fit parameters', bestfit
print 'covariance matrix', cov
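# Sketch: 1-sigma uncertainties on the fit parameters can be estimated from the
# diagonal of the covariance matrix (assuming approximately Gaussian errors).
print 'parameter uncertainties', np.sqrt(np.diag(cov))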
"""
Explanation: The function errorbar plots lines with vertical errorbars; more options are available (see http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.errorbar). Calling the legend function adds a legend; in this case, it shows the symbol only once and appears in the top left corner (see http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.legend).
Function Fitting
A very flexible routine that fits any kind of n-dimensional function to data is curve_fit in the scipy.optimize module (see https://docs.scipy.org/doc/scipy/reference/optimize.html). curve_fit uses a non-linear least-squares method (by default a Levenberg-Marquardt algorithm) to find the best-fit parameters to fit the data.
Let's start with a simple example and fit a line, $f(x) = ax + b$, to the data. As part of the fitting, those parameters $a$ and $b$ are found such that the curve $f(x)$ matches the datapoints best:
End of explanation
"""
f, ax = plt.subplots()
ax.errorbar(data['x'], data['y'], yerr=data['yerr'], linestyle='', color='red', label='Signal Data')
ax.plot(np.arange(0, 10, 0.1), model(np.arange(0, 10, 0.1), *bestfit), label='Best Linear Fit', color='blue')
ax.set_xlabel('x [a.u.]')
ax.set_ylabel('y [a.u.]')
ax.legend(numpoints=1, loc=2)
plt.show()
# plt.savefig('filename.pdf') # use this line to save the plot as a pdf file
"""
Explanation: curve_fit requires a function (in this case model), the first argument of which is the variable (or set of variables) over which the function is defined (data['x']). All the remaining arguments of the function are treated as parameters that are varied in order to find the best fit of the data against data['y']. The optional argument sigma allows for passing a list of uncertainties that serve as weights in the fitting process. Note that it is required that len(data['x'])==len(data['y'])==len(data['yerr']).
The function produces two objects as output:
* the first object is a list of the best fit parameters
* the second object is a covariance matrix that allows for calculating the fit uncertainties.
Let's plot the best-fit function over the data points:
End of explanation
"""
import scipy.stats as stat
print 'chi2:', stat.chisquare(data['y'], model(data['x'], *bestfit), ddof=2)
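# Sketch of the reduced chi-square computed by hand, weighting each residual by its
# uncertainty; the 2 subtracted in the denominator accounts for the two fit parameters.
residuals = (data['y'] - model(data['x'], *bestfit)) / data['yerr']
print 'reduced chi2 (by hand):', np.sum(residuals**2) / (len(data['y']) - 2)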
"""
Explanation: The plot function can be used to draw a continuous line (see http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot). Note that a line can simply be plotted over the scatter points.
Although the overall quality of fit is ok, most datapoints do not fall on the line. Let's calculate the goodness of fit parameter, the reduced $\chi^2$ (https://en.wikipedia.org/wiki/Goodness_of_fit):
End of explanation
"""
f, (ax1, ax2) = plt.subplots(2, sharex=True)
# top panel
ax1.errorbar(data['x'], data['y'], yerr=data['yerr'], linestyle='', color='red', label='Signal Data')
x_space = np.arange(min(data['x']), max(data['x']), 0.1)
ax1.plot(x_space, model(x_space, bestfit[0], bestfit[1]), color='blue', label='Best Linear Fit')
ax1.set_xlabel('x [a.u.]')
ax1.set_ylabel('y [a.u.]')
ax1.legend(numpoints=1, loc=2)
# bottom panel
ax2.errorbar(data['x'], data['y']-model(data['x'], bestfit[0], bestfit[1]),
yerr=data['yerr'],
linestyle='', color='red')
ax2.grid()
ax2.set_xlabel('x [a.u.]')
ax2.set_ylabel('y residual')
plt.show()
"""
Explanation: Let's try to improve our model by having a look at the residuals (the difference between the $y$ datapoints and the prediction based on our best-fit function model):
End of explanation
"""
def model(x, a, b, c, d, e): # first argument is always x, rest are parameters
return a*x+b+c*np.sin(x/d-e)
"""
Explanation: Here, we made use of two subplots that are arranged vertically and share a common x range (plt.subplots(2, sharex=True)). The variable x_space simply generates evenly spaced x values over the full range of data['x']. In the bottom plot, we also put a grid to make it easier to estimate how large those residuals are.
There seems to be a wave pattern in the residuals. We improve our model function by adding a sine term to it: $f(x) = ax + b + c\sin \left( x/d - e \right)$
End of explanation
"""
# use least-squares-fitting
bestfit, cov = curve_fit(model, data['x'], data['y'], sigma=data['yerr'],
p0=[bestfit[0], bestfit[1], 0.4, 0.2, 1])
# first guess parameters p0 through manual tweaking
print 'best fit parameters', bestfit
print 'covariance matrix', cov
# determine goodness of fit
print 'chi2:', stat.chisquare(data['y'], model(data['x'], *bestfit), ddof=5)
"""
Explanation: We fit this function to the data one more time. Using curve_fit's p0 option, we provide a first guess for the fitting parameters:
End of explanation
"""
f, (ax1, ax2) = plt.subplots(2, sharex=True)
# top panel
ax1.errorbar(data['x'], data['y'], yerr=data['yerr'], linestyle='', color='red', label='Signal Data')
x_space = np.arange(min(data['x']), max(data['x']), 0.1)
ax1.plot(x_space, model(x_space, *bestfit), color='blue', label='Best Fit')
ax1.set_xlabel('x [a.u.]')
ax1.set_ylabel('y [a.u.]')
ax1.legend(numpoints=1, loc=2)
# bottom panel
ax2.errorbar(data['x'], data['y']-model(data['x'], *bestfit), yerr=data['yerr'],
linestyle='', color='red')
ax2.grid()
ax2.set_xlabel('x [a.u.]')
ax2.set_ylabel('y residual')
plt.show()
"""
Explanation: The goodness of fit is now very good (actually too good...) and the covariance matrix has more elements, since there are more fit parameters. Let's have a look at the fit and the residuals using the same code as above:
End of explanation
"""
|
jrmontag/Data-Science-45min-Intros
|
text-comparison/text_comparison.ipynb
|
unlicense
|
import itertools
import nltk
import operator
import numpy as np
import sklearn
from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer
"""
Explanation: Introduction
There is a perception that Twitter data can be used to surface insights: unexpected features of the data that have business value. In this tutorial, I will explore some of the difficulties and opportunities of turning that perception into reality.
We will focus exclusively on text analysis, and on insights represented by textual differences between documents and corpora. We will start by constructing a small, simple data set that represents a few notions of what insights should be surfaced. We can then examine which techniques uncover which insights.
Next, we will move to real data, where we don't know what we might surface. We will have to address data cleaning and curation, both at the beginning and in an iterative fashion as our insights-generation surfaces artifacts of insufficient data curation. We will finish by developing and evaluating a variety of tools and techniques for comparing text-based data.
Resources
Good further reading, and the source of some of the ideas here:
https://de.dariah.eu/tatom/feature_selection.html
Setup
Requires Python 3.6 or greater
End of explanation
"""
doc0,doc1 = ('bun cat cat dog bird','bun cat dog dog dog')
"""
Explanation: A Synthetic Example
Let's build some intuition by creating two artificial documents, which represent textual differences that we might intend to surface.
End of explanation
"""
def func(doc0,doc1,vectorizer):
"""
print difference in absolute term-frequency difference for each unigram
"""
tf = vectorizer.fit_transform([doc0,doc1])
    # this is a 2-row matrix, where the rows represent doc0 and doc1
tfa = tf.toarray()
# make tuples of the tokens and the difference of their doc0 and doc1 coefficients
# if we use a basic token count vectorizer, this is the term frequency difference
tup = zip(vectorizer.get_feature_names(),tfa[0] - tfa[1])
# print the top-10 tokens ranked by the difference measure
for token,score in list(reversed(sorted(tup,key=operator.itemgetter(1))))[:10]:
print(token,score)
func(doc0,doc1,CountVectorizer())
"""
Explanation: In terms of unigram frequency, here are 3 differences:
* 1 more "cat" in doc0 than in doc1
* 2 more "dog" in doc1 than in doc0
* "bird" only exists in doc0
Let's throw together a function that prints out the differences in term frequencies:
End of explanation
"""
func(doc0,doc1,TfidfVectorizer())
"""
Explanation: Observations:
* positive numbers are more "doc0-like"
* the "dog" score is higher in absolute value than the bird score
* "bird" and "cat" are indistinguishable
Let's try inverse-document frequency.
End of explanation
"""
doc0 = 'cat '*5 + 'dog '*3 + 'bun '*350 + 'bird '
doc1 = 'cat '*4 + 'dog '*3 + 'bun '*310
func(doc0,doc1,CountVectorizer())
func(doc0,doc1,TfidfVectorizer())
"""
Explanation: Observations:
* "bird" now has a larger coefficient that "cat"
* "dog is still most significant that "cat"
How does this scale?
Let's construct:
* doc0 is +1 "cat"
* doc0 is +40 "bun"
* doc0 is +1 "bird"
End of explanation
"""
func(doc0,doc1,TfidfVectorizer(ngram_range=(1,2)))
"""
Explanation: Observations:
* "bird" stands out strongly
* "cat" and "dog" are similar in absolute value
* "bun" is the least significant token
What about including 2-grams?
End of explanation
"""
def func(doc0,doc1,vectorizer):
tf = vectorizer.fit_transform([doc0,doc1])
tfa = tf.toarray()
tup = zip(vectorizer.get_feature_names(),tfa[0] - tfa[1])
# print
max_token_length = 0
output_tuples = list(reversed(sorted(tup,key=operator.itemgetter(1))))[:10]
for token,score in output_tuples:
if max_token_length < len(token):
max_token_length = len(token)
for token,score in output_tuples:
print(f"{token:{max_token_length}s} {score:.3e}")
func(doc0,doc1,TfidfVectorizer(ngram_range=(1,2)))
"""
Explanation: That's impossible to read. Let's build better formatting into our function.
End of explanation
"""
import string
from tweet_parser.tweet import Tweet
from searchtweets import (ResultStream,
collect_results,
gen_rule_payload,
load_credentials)
search_args = load_credentials(filename="~/.twitter_keys.yaml",
account_type="enterprise")
_pats_rule = "#patriots OR @patriots"
_eagles_rule = "#eagles OR @eagles"
from_date="2018-01-28"
to_date="2018-01-29"
max_results = 3000
pats_rule = gen_rule_payload(_pats_rule,
from_date=from_date,
to_date=to_date,
)
eagles_rule = gen_rule_payload(_eagles_rule,
from_date=from_date,
to_date=to_date,
)
eagles_results_list = collect_results(eagles_rule,
max_results=max_results,
result_stream_args=search_args)
pats_results_list = collect_results(pats_rule,
max_results=max_results,
result_stream_args=search_args)
"""
Explanation: Observations:
* grams with "bird" still stand out
* scores are getting hard to interpret
Let's get some real data.
End of explanation
"""
eagles_body_text = [tweet['body'] for tweet in eagles_results_list]
eagles_doc = ' '.join(eagles_body_text)
pats_body_text = [tweet['body'] for tweet in pats_results_list]
pats_doc = ' '.join(pats_body_text)
"""
Explanation: Join all tweet bodies in a corpus into one space-delimited document.
End of explanation
"""
eagles_body_text[:10]
"""
Explanation: Let's have a look at the data (AS YOU ALWAYS SHOULD).
End of explanation
"""
tokenizer = nltk.tokenize.TweetTokenizer()
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(string.punctuation)
vectorizer = TfidfVectorizer(
tokenizer=tokenizer.tokenize,
stop_words=stopwords,
ngram_range=(1,2)
)
"""
Explanation: Whew...this is gonna take some cleaning.
Let's start with a tokenizer and a stopword list.
End of explanation
"""
func(eagles_doc,pats_doc,vectorizer)
"""
Explanation: Here are the top 10 1- and 2-grams for the Eagles corpus/document.
End of explanation
"""
def compare_docs(doc0,doc1,vectorizer,n_to_display=10):
tfm_sparse = vectorizer.fit_transform([doc0,doc1])
tfm = tfm_sparse.toarray()
tup = zip(vectorizer.get_feature_names(),tfm[0] - tfm[1])
# print
max_token_length = 0
output_tuples = list(reversed(sorted(tup,key=operator.itemgetter(1))))[:n_to_display]
for token,score in output_tuples:
if max_token_length < len(token):
max_token_length = len(token)
for token,score in output_tuples:
print(f"{token:{max_token_length}s} {score:.3e}")
compare_docs(eagles_doc,pats_doc,vectorizer,n_to_display=30)
compare_docs(pats_doc,eagles_doc,vectorizer,n_to_display=30)
"""
Explanation: Add the ability to specify n in top-n.
End of explanation
"""
# add token filtering to the TweetTokenizer
def filter_tokens(token):
if len(token) < 2:
return False
if token.startswith('http'):
return False
    if '—' in token:   # garbled character in the original; reconstructed here as an em dash
        return False
    if '…' in token or '...' in token:   # drop truncated/ellipsis tokens
        return False
return True
def custom_tokenizer(doc):
initial_tokens = tokenizer.tokenize(doc)
return [token for token in initial_tokens if filter_tokens(token)]
vectorizer = TfidfVectorizer(
tokenizer=custom_tokenizer,
stop_words=stopwords,
ngram_range=(1,2),
)
compare_docs(eagles_doc,pats_doc,vectorizer,n_to_display=20)
compare_docs(pats_doc,eagles_doc,vectorizer,n_to_display=20)
"""
Explanation: We can't really evaluate more sophisticated text comparison techniques without doing better filtering on the data.
End of explanation
"""
eagles_body_text_noRT = [tweet['body'] for tweet in eagles_results_list if tweet['verb'] == 'post']
eagles_doc_noRT = ' '.join(eagles_body_text_noRT)
pats_body_text_noRT = [tweet['body'] for tweet in pats_results_list if tweet['verb'] == 'post']
pats_doc_noRT = ' '.join(pats_body_text_noRT)
vectorizer = TfidfVectorizer(
tokenizer=custom_tokenizer,
stop_words=stopwords,
ngram_range=(1,2),
)
compare_docs(eagles_doc_noRT,pats_doc_noRT,vectorizer,n_to_display=20)
print("\n")
compare_docs(pats_doc_noRT,eagles_doc_noRT,vectorizer,n_to_display=20)
"""
Explanation: Retweets make a mess of a term-frequency analysis on documents consisting of concatenated tweet bodies. Remove them for now.
End of explanation
"""
_pats_rule = "@patriots"
_eagles_rule = "@eagles"
from_date="2018-01-28"
to_date="2018-01-29"
max_results = 20000
pats_rule = gen_rule_payload(_pats_rule,
from_date=from_date,
to_date=to_date,
)
eagles_rule = gen_rule_payload(_eagles_rule,
from_date=from_date,
to_date=to_date,
)
eagles_results_list = collect_results(eagles_rule,
max_results=max_results,
result_stream_args=search_args)
pats_results_list = collect_results(pats_rule,
max_results=max_results,
result_stream_args=search_args)
eagles_body_text_noRT = [tweet['body'] for tweet in eagles_results_list if tweet['verb'] == 'post']
eagles_doc_noRT = ' '.join(eagles_body_text_noRT)
pats_body_text_noRT = [tweet['body'] for tweet in pats_results_list if tweet['verb'] == 'post']
pats_doc_noRT = ' '.join(pats_body_text_noRT)
vectorizer = TfidfVectorizer(
tokenizer=custom_tokenizer,
stop_words=stopwords,
ngram_range=(1,2),
)
compare_docs(eagles_doc_noRT,pats_doc_noRT,vectorizer,n_to_display=20)
print("\n")
compare_docs(pats_doc_noRT,eagles_doc_noRT,vectorizer,n_to_display=20)
"""
Explanation: Well, now we have clear evidence of the political use of "#patriots" that the hashtag clause in our rule picks up. Let's simplify things by removing the hashtags from the rules.
End of explanation
"""
corpus0 = ["cat","cat dog"]
corpus1 = ["bun","dog","cat"]
# basic unigram vectorizer with Twitter-specific tokenization and stopwords
vectorizer = CountVectorizer(
tokenizer=custom_tokenizer,
stop_words=stopwords,
ngram_range=(1,1)
)
# get the term-frequency matrix
m = vectorizer.fit_transform(corpus0+corpus1)
vocab = np.array(vectorizer.get_feature_names())
print(vocab)
m = m.toarray()
print(m)
# get TF matrices for each corpus
corpus0_indices = range(len(corpus0))
corpus1_indices = range(len(corpus0),len(corpus0)+len(corpus1))
m0 = m[corpus0_indices,:]
m1 = m[corpus1_indices,:]
print(m0)
# calculate the average term frequency within each corpus
c0_means = np.mean(m0,axis=0)
c1_means = np.mean(m1,axis=0)
print(c0_means)
# calculate the indices of the distinct tokens, which only occur in a single corpus
distinct_indices = c0_means * c1_means == 0
print(vocab[distinct_indices])
# now remove the distinct tokens' columns from the term-frequency matrix
print(m[:, np.invert(distinct_indices)])
# recalculate things
m0_non_distinct = m[:, np.invert(distinct_indices)][corpus0_indices,:]
m1_non_distinct = m[:, np.invert(distinct_indices)][corpus1_indices,:]
c0_non_distinct_means = np.mean(m0_non_distinct,axis=0)
c1_non_distinct_means = np.mean(m1_non_distinct,axis=0)
# and take the difference
print(c0_non_distinct_means - c1_non_distinct_means)
"""
Explanation: Things we could do:
* vectorize tweets as documents, and summarize or aggregate the coefficients
* select tokens for which the mean coefficient within a corpus is zero
* look at the difference in mean coefficient
Let's start by going back to simple corpora, and account for individual docs this time.
End of explanation
"""
# build and identify the corpora
docs = eagles_body_text_noRT + pats_body_text_noRT
eagles_indices = range(len(eagles_body_text_noRT))
pats_indices = range(len(eagles_body_text_noRT),len(eagles_body_text_noRT) + len(pats_body_text_noRT))
# use a single vectorizer because we care about the joint vocabulary
vectorizer = CountVectorizer(
tokenizer=custom_tokenizer,
stop_words=stopwords,
ngram_range=(1,1)
)
dtm = vectorizer.fit_transform(docs).toarray()
vocab = np.array(vectorizer.get_feature_names())
eagles_dtm = dtm[eagles_indices, :]
pats_dtm = dtm[pats_indices, :]
"""
Explanation: This difference in averages is sometimes called "keyness".
Now let's do it on real data.
End of explanation
"""
# columns for every token in the vocab; rows for tweets in the corpus
eagles_means = np.mean(eagles_dtm,axis=0)
pats_means = np.mean(pats_dtm,axis=0)
"""
Explanation: Take the average coefficient for each vocab element, for each corpus.
End of explanation
"""
# get indices for any column with zero mean in either corpus
distinct_indices = eagles_means * pats_means == 0
print(str(np.count_nonzero(distinct_indices)) + " distinct tokens out of " + str(len(vocab)))
eagles_ranking = np.argsort(eagles_means[distinct_indices])[::-1]
pats_ranking = np.argsort(pats_means[distinct_indices])[::-1]
total_ranking = np.argsort(eagles_means[distinct_indices] + pats_means[distinct_indices])[::-1]
vocab[distinct_indices][total_ranking]
print("Top distinct Eagles tokens by average term count in Eagles corpus")
for token in vocab[distinct_indices][eagles_ranking][:10]:
print_str = f"{token:30s} {eagles_means[vectorizer.vocabulary_[token]]:.3g}"
print(print_str)
print("Top distinct Patriots tokens by average term count in Patriots corpus")
for token in vocab[distinct_indices][pats_ranking][:10]:
print_str = f"{token:30s} {pats_means[vectorizer.vocabulary_[token]]:.3g}"
print(print_str)
"""
Explanation: Start by looking for distinct tokens, which only exist in one corpus.
End of explanation
"""
def compare_corpora(corpus0,corpus1,vectorizer,n_to_display=10):
corpus0_indices = range(len(corpus0))
corpus1_indices = range(len(corpus0), len(corpus0) + len(corpus1))
m_sparse = vectorizer.fit_transform(corpus0 + corpus1)
m = m_sparse.toarray()
vocab = np.array(vectorizer.get_feature_names())
m_corpus0 = m[corpus0_indices,:]
m_corpus1 = m[corpus1_indices,:]
corpus0_means = np.mean(m_corpus0,axis=0)
corpus1_means = np.mean(m_corpus1,axis=0)
distinct_indices = corpus0_means * corpus1_means == 0
print(str(np.count_nonzero(distinct_indices)) + " distinct tokens out of " + str(len(vocab)) + '\n')
corpus0_ranking = np.argsort(corpus0_means[distinct_indices])[::-1]
corpus1_ranking = np.argsort(corpus1_means[distinct_indices])[::-1]
print("Top distinct tokens from corpus0 by average term count in corpus")
for token in vocab[distinct_indices][corpus0_ranking][:n_to_display]:
print_str = f"{token:30s} {corpus0_means[vectorizer.vocabulary_[token]]:.3g}"
print(print_str)
print()
print("Top distinct tokens from corpus1 by average term count in corpus")
for token in vocab[distinct_indices][corpus1_ranking][:n_to_display]:
print_str = f"{token:30s} {corpus1_means[vectorizer.vocabulary_[token]]:.3g}"
print(print_str)
#vectorizer = TfidfVectorizer(
vectorizer = CountVectorizer(
tokenizer=custom_tokenizer,
stop_words=stopwords,
ngram_range=(1,1)
)
compare_corpora(eagles_body_text_noRT,pats_body_text_noRT,vectorizer)
"""
Explanation: How does this change if we account for inverse document frequency?
Let's build a function and encapsulate this.
End of explanation
"""
def compare_corpora(corpus0,corpus1,vectorizer,n_to_display=10):
# get corpus indices
corpus0_indices = range(len(corpus0))
corpus1_indices = range(len(corpus0), len(corpus0) + len(corpus1))
m_sparse = vectorizer.fit_transform(corpus0 + corpus1)
m = m_sparse.toarray()
# get vocab and TF matrices for each corpus
vocab = np.array(vectorizer.get_feature_names())
m_corpus0 = m[corpus0_indices,:]
m_corpus1 = m[corpus1_indices,:]
corpus0_means = np.mean(m_corpus0,axis=0)
corpus1_means = np.mean(m_corpus1,axis=0)
distinct_indices = corpus0_means * corpus1_means == 0
print(str(np.count_nonzero(distinct_indices)) + " distinct tokens out of " + str(len(vocab)) + '\n')
corpus0_ranking = np.argsort(corpus0_means[distinct_indices])[::-1]
corpus1_ranking = np.argsort(corpus1_means[distinct_indices])[::-1]
print("Top distinct tokens from corpus0 by average term count in corpus")
for token in vocab[distinct_indices][corpus0_ranking][:n_to_display]:
print_str = f"{token:30s} {corpus0_means[vectorizer.vocabulary_[token]]:.3g}"
print(print_str)
print()
print("Top distinct tokens from corpus1 by average term count in corpus")
for token in vocab[distinct_indices][corpus1_ranking][:n_to_display]:
print_str = f"{token:30s} {corpus1_means[vectorizer.vocabulary_[token]]:.3g}"
print(print_str)
# remove distinct tokens
m = m[:, np.invert(distinct_indices)]
vocab = vocab[np.invert(distinct_indices)]
# recalculate stuff
m_corpus0 = m[corpus0_indices,:]
m_corpus1 = m[corpus1_indices,:]
corpus0_means = np.mean(m_corpus0,axis=0)
corpus1_means = np.mean(m_corpus1,axis=0)
# get "keyness"
keyness = corpus0_means - corpus1_means
# order token indices by keyness
ranking = np.argsort(keyness)[::-1]
print()
print("Top tokens by keyness from corpus0 by average term count in corpus")
for rank in ranking[:n_to_display]:
token = vocab[rank]
print_str = f"{token:30s} {keyness[rank]:.3g}"
print(print_str)
print()
print("Top tokens by keyness from corpus1 by average term count in corpus")
for rank in ranking[-n_to_display:]:
token = vocab[rank]
print_str = f"{token:30s} {keyness[rank]:.3g}"
print(print_str)
vectorizer = CountVectorizer(
tokenizer=custom_tokenizer,
stop_words=stopwords,
ngram_range=(1,1)
)
compare_corpora(eagles_body_text_noRT,pats_body_text_noRT,vectorizer)
"""
Explanation: Now let's remove the distinct tokens and look at the maximum difference in means.
End of explanation
"""
|
Olsthoorn/TransientGroundwaterFlow
|
Syllabus_in_notebooks/Sec6_4_4_Theis_Hantush_implementations.ipynb
|
gpl-3.0
|
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1
import matplotlib.pyplot as plt
from timeit import timeit
import pdb
# Handy for object inspection:
attribs = lambda obj: [o for o in dir(obj) if not o.startswith('_')]
def newfig(title='title', xlabel='xlabel', ylabel='ylabel',
xlim=None, ylim=None, xscale='linear', yscale='linear',
size_inches=(12, 8)):
fig, ax = plt.subplots()
fig.set_size_inches(size_inches)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
if xlim: ax.set_xlim(xlim)
if ylim: ax.set_ylim(ylim)
ax.set_xscale(xscale)
ax.set_yscale(yscale)
ax.grid()
return ax
"""
Explanation: Section 6.4.4
Theis and Hantush type curves (timing the functions)
Here we do some testing of Theis and Hantush well function computations using a variety of implementations in Python.
We have implemented four variants of the Theis well function and four variants of the Hantush well function, compared them, and timed them.
While we can directly use scipy.special.exp1 for the Theis well function, there is no function in scipy.special for the equivalent Hantush well function. That's why we have to integrate it ourselves. The most straightforward and in fact best way is to integrate the kernel of the formula numerically, using the scipy.integrate.quad method, as it is fast, stable and accurate to at least 10 digits.
Ed J.M. Veling has warned that accurate computation of the Hantush function is essential for many applications and has shown in a paper how to compute it accurately. In this notebook we explore some implementations of the function and test their performance.
@TO 2020-12-14
Theis and Hantush well functions
Theis well function ($W(u)$)
$$s(r, t) = \frac{Q_0} {4 \pi kD} W_{theis}(u),\,\,\,\,\,u=\frac{r^2 S }{4 kD t}$$
The Theis well function is mathematically known as the exponential integral or $\mbox{expint}(z)$. In Python this function is available in the scipy.special module as exp1. So you import it as
from scipy.special import exp1
or
from scipy.special import exp1 as Wth
This renames exp1 to Wth, should you prefer that.
$$ W_{theis} = \mbox{expint}(u) = \mbox{scipy.special.exp1}(u)$$
There exist two mathematical formulas for the exponential integral. The first is the definite-integral form:
$$W(u) = exp1(u) = \mbox{expint}(u) = \intop_u^\infty \frac{e^{-y}} y dy$$
Then there is the power series form:
$$\mbox{expint}(u) = -\gamma - \ln u + u -\frac{u^2}{2 \times 2!}
+\frac{u^3}{3 \times 3!} - \frac{u^4}{4 \times 4!} + ...,\,\,\,\,\,\gamma=0.577216...$$
Here $\gamma$ is the so-called Euler constant, a basic mathematical constant like $e$, $\pi$, etc.
The second, power-series form of Theis's well function comes in very handy for understanding the function's behavior and for straightforward analysis of pumping tests on wells in confined and unconfined aquifers that show Theis behavior, by which we mean that equilibrium is never reached, because all the extracted water originates from storage in the aquifer only.
Hantush's well function ($\mbox{Wh}(u, \rho)$).
Hantush considers transient drawdown due to a well in a semi-confined aquifer. Hence there is drawdown-induced infiltration from an adjacent layer, in which the head is assumed constant. This means that the extracted water comes not only from storage (which is the case initially) but also from this induced infiltration. After longer times the full extraction originates from this induced infiltration, so the drawdown becomes stationary. At very early times, however, the drawdowns of the Theis and Hantush wells are the same, and the mathematical form of the Hantush solution therefore resembles that of the Theis solution.
$$s(r, t) = \frac {Q_0}{4 \pi kD} Wh(u, \rho),\,\,\,\,\,u=\frac{r^2 S}{4 kD t}, \,\,\,\rho = \frac r \lambda$$
where $\lambda = \sqrt{kD c}$.
As is the case with the Theis well function, the Hantush well function can be written as a definite integral
$$W_h(u,\rho) = \intop_u^\infty \frac{e^{-y -\frac{\left(\frac{\rho}{2}\right)^2}{y} }}{y} dy$$
The Hantush well function may also be computed as a power series:
$$ W_h(u, \rho) = \sum_{n=0}^{\infty}\frac {(-1)^n} {n!} \left( \frac \rho 2 \right)^{2n} u^{-n} E_{n+1}\left(\frac {\rho^2} {4 u} \right) $$
$$ E_{n+1}(u) = \frac 1 n \left[ e^{-u} - u E_n (u) \right] , \,\,(n=1, 2, 3, ...) $$
In which $E_n$ is the $n$th repeated integral of the exponential function and $E_1 = \mbox{expint}$.
Four methods are implemented below, but in conclusion: just stick with the one using quad, as it is fast enough and extremely accurate.
Import required functionality
End of explanation
"""
def W_theis0(u):
"""Return Theis well function using scipy.special function exp1 directly."""
return exp1(u)
def W_theis1(u):
"""Return Theis well function by integrating using scipy functionality.
This turns out to be a very accurate yet fast implementation, about as fast
as the exp1 function from scipy.special.
In fact we define a kernel function and a scalar integrator and wrap them with
np.frompyfunc; everything is neatly packaged inside the overall
W_theis1 function.
"""
def funcTh(y): return np.exp(-y) / y
def Wth2(u): return quad(funcTh, u, np.inf)
Wth = np.frompyfunc(Wth2, 1, 2)
return Wth(u)[0]  # drop the integration error estimate returned by quad
def W_theis2(u, practically_log_inf=20, steps_per_log_cycle=50):
"""Theis well function using smart integration"""
if np.isscalar(u):
u = np.array([u])
# Generate integration points from the first u to practically infinity and mix in
# the given u values, so that they are present in the array of integration points.
lu0 = np.log10(u[0])
n = int((practically_log_inf - lu0) * steps_per_log_cycle)
uu = np.unique(np.hstack((np.logspace(lu0, practically_log_inf, n), u)))
kernel = np.exp(-uu)
dlnu = np.diff(np.log(uu))
Wuu = np.cumsum(np.hstack((0, (kernel[:-1] + kernel[1:]) * dlnu / 2)))
Wuu = Wuu[-1] - Wuu # This holds the integral from each uu to infinity
# So now just look up the Wuu values where uu is u
W = np.zeros_like(u)
for i, ui in enumerate(u):
W[i] = Wuu[np.where(uu==ui)[0][0]]
return W
def W_theis3(u):
"""Return Theis well function using power series."""
tol = 1e-16
gam = 0.577216
if np.isscalar(u):
u = np.array([u])
u1 = u[u <= 15] # for u > 15 the well function is negligibly small and treated as zero
terms0 = u1
W = -gam - np.log(u1) + terms0
for i in range(2, 250):
terms1 = -terms0 * u1 * (i -1) / (i * i)
W += terms1
if np.max(np.abs(terms0 + terms1)) < tol:
break
terms0 = terms1
return np.hstack((W, np.zeros_like(u[u > 15])))
"""
Explanation: Four variant implementations of the Theis well function
W_theis0: exp1 directly from scipy.special
W_theis1: by numerical integration using scipy.integrate.quad
W_theis2: by "smart" trapezoidal integration on a shared logarithmic grid
W_theis3: by summing the power series
End of explanation
"""
def W_hantush0(u, rho, tol=1e-14):
'''Hantush well function implemented as power series
This implementation works but has a limited reach; for very small
values of u (u < 0.001) the solution will deteriorate into nonsense.
'''
tau = (rho/2)**2 / u
f0 = 1
E = exp1(u)
w0= f0 * E
W = w0
for n in range(1, 500):
E = (1/n) * (np.exp(-u) - u * E)
f0 = -f0 / n * tau
w1 = f0 * E
#print(w1)
if np.max(abs(w0 + w1)) < tol: # use w0 + w1 because terms alternate sign
#print('succes')
break
W += w1
w0 = w1 # remember previous value
return W
def W_hantush1(u, rho):
"""Return Hantush well function by straight-forward integration.
A large number of points is required for accuracy, but it still won't
be as accurate as the quad method from scipy.integrate, which is
also at least as fast.
"""
if np.isscalar(u):
u = np.asarray([u])
w = np.zeros_like(u)
for i, uu in enumerate(u):
y = np.logspace(np.log10(uu), 10, 5000)
arg = np.exp(-y - (rho/2) ** 2 / y ) / y
w[i] = np.sum(np.diff(y) * 0.5 * (arg[:-1]+ arg[1:]))
return w
def W_hantush2(u, rho):
"""Return Hantush well function by integration trying to be smarter.
With 5000 points this turns out to be considerably faster than the previous, straightforward integration (see the timings below).
Parameters
----------
u = np.ndarray of floats
an array of u values u = r**2 S / (4 kD t)
rho: float
value of r/lambda with lambda = sqrt(kD c)
"""
if np.isscalar(u):
u = np.asarray([u])
uu = np.unique(np.hstack((np.logspace(np.log10(np.min(u)), 10, 5000), u)))
arg = np.exp(-uu - (rho/2) ** 2 / uu) / uu
duu = np.diff(uu)
S = np.hstack((0, (arg[1:] + arg[:-1])* duu / 2))
Wsum = np.zeros_like(u)
for i, ui in enumerate(u):
Wsum[i] = np.sum(S[uu > ui])
return Wsum
def W_hantush3(u, rho):
"""Return Hantush well function by integration using scipy functinality.
This turns out to be a very accurate yet fast impementation, about as fast
as the exp1 function form scipy.special.
In fact we define three functions and finally compute the desired answer
with the last one. The three functions are nicely packages in the overall
W_theis1 function.
"""
def whkernel(y, rho): return np.exp(-y - (rho/2) ** 2 / y ) / y
def whquad(u, rho): return quad(whkernel, u, np.inf, args=(rho))
Wh = np.frompyfunc(whquad, 2, 2) # 2 inputs and two outputs: the value and the error estimate
return Wh(u, rho)[0] # cut-off err
"""
Explanation: Four variant implementations of the Hantush well function
End of explanation
"""
u = np.logspace(-3, 1, 41)
rho = 0.003
theis_funcs = [W_theis0, W_theis1, W_theis2, W_theis3]
hantush_funcs = [W_hantush0, W_hantush1, W_hantush2, W_hantush3]
for i, f in enumerate(theis_funcs):
print(f'W_theis{i}: ', f(u)[:3])
for i, f in enumerate(hantush_funcs):
print(f'W_hantush{i}: ',f(u, rho)[:3])
print('W_theis0 :')
%timeit W_theis0(u)
print('W_theis1(u) :')
%timeit W_theis1(u)
print('W_theis2(u) :')
%timeit W_theis2(u)
print('W_theis3(u) :')
%timeit W_theis3(u)
print('W_hantush0(u, rho) :')
%timeit W_hantush0(u, rho)
print('W_hantush1(u, rho) :')
%timeit W_hantush1(u, rho)
print('W_hantush2(u, rho) :')
%timeit W_hantush2(u, rho)
print('W_hantush3(u, rho) :')
%timeit W_hantush3(u, rho)
"""
Explanation: Timing the functions
End of explanation
"""
rhos = [0., 0.1, 0.3, 1, 3]
u = np.logspace(-6, 1, 71)
ax = newfig('Hantush type curves', '1/u', 'Wh(u, rho)', xscale='log', yscale='log')
ax.plot(1/u, W_theis0(u), lw=3, label='Theis', zorder=100)
for rho in rhos:
ax.plot(1/u, W_hantush2(u, rho), '.', label='rho={:.1f}'.format(rho))
ax.plot(1/u, W_hantush3(u, rho), label='rho={:.1f}'.format(rho))
ax.legend()
plt.show()
rhos = [0., 0.1, 0.3, 1, 3]
u = np.logspace(-6, 1, 71)
ax = newfig('Hantush type curves', '1/u', 'Wh(u, rho)', xscale='log')
for rho in rhos:
ax.plot(1/u, W_hantush2(u, rho), '.', label='rho={:.1f}'.format(rho))
ax.plot(1/u, W_hantush3(u, rho), label='rho={:.1f}'.format(rho))
ax.legend()
plt.show()
"""
Explanation: Results of the timing
Theis:
W_theis0 :
6.06 ยตs ยฑ 261 ns per loop (mean ยฑ std. dev. of 7 runs, 100000 loops each)
W_theis1(u) :
7.11 ยตs ยฑ 163 ns per loop (mean ยฑ std. dev. of 7 runs, 100000 loops each)
W_theis2(u) :
299 ยตs ยฑ 6.79 ยตs per loop (mean ยฑ std. dev. of 7 runs, 1000 loops each)
W_theis3(u) :
553 ยตs ยฑ 33.7 ยตs per loop (mean ยฑ std. dev. of 7 runs, 1000 loops each)
There is almost no difference in speed between directly using exp1 from scipy and integrating numerically using quad. Both are equally accurate.
The explicit integration is slow, just like the summation.
Hantush:
W_hantush0(u, rho) :
86 ยตs ยฑ 1.69 ยตs per loop (mean ยฑ std. dev. of 7 runs, 10000 loops each)
W_hantush1(u, rho) :
7.53 ms ยฑ 72.9 ยตs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each)
W_hantush2(u, rho) :
882 ยตs ยฑ 26.9 ยตs per loop (mean ยฑ std. dev. of 7 runs, 1000 loops each)
W_hantush3(u, rho) :
8.64 ms ยฑ 75.4 ยตs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each)
Note that my "smart" integration (W_hantush2) is about 9 times faster than the simple integration and the quad solution. So it turns out to be smart enough after all.
The smart and simple integration methods are equally accurate to 5 digits with 5000 points and 1e10 as the upper limit. The quad method has 10 digits of accuracy.
The series method is the slowest of all and much slower than the quad and simple integration methods, although for the Theis function it is as accurate as the quad method.
The Hantush power series (W_hantush0), on the other hand, is not accurate for small u: the number of terms to include would have to be
much larger, which would make it even slower to compute.
End of explanation
"""
|
usantamaria/iwi131
|
ipynb/08-EjerciciosRuteoFuncionesCondicionales/Ejercicios.ipynb
|
cc0-1.0
|
def mi_funcion(x,y,z):
a = x * y * z
b = x/2 + y/4 + z/8
c = a + b
return c
a = 1.0
b = 2.0
a = mi_funcion(a, b, 3.0)
print a
"""
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Programación de Computadores
Sebastián Flores
http://progra.usm.cl/
https://www.github.com/sebastiandres/iwi131
Previous class
Conditionals
Coming up...
Nov 16: Activity 2.
Nov 20: Assignment 1.
Nov 23: Exam 1.
What will we practice today?
Exercises
* Flow diagrams
* Tracing of functions
* Functions and conditionals
Why will we learn this?
Exercises
* Flow diagrams
* Tracing of functions
* Functions and conditionals
Practice to learn and master the material.
Professor, how can I learn?
http://progra.usm.cl/Archivos/certamenes/Libro_prograRB.pdf: Programming textbook.
http://pythonya.appspot.com/: Content and the option to practice online.
Professor, how can I practice?
Install Python and practice with the exercises from class.
http://progra.usm.cl/apunte/ejercicios/: Exercise guide.
http://progra.usm.cl/certamenes_antiguos.html: Past exams.
Exercises
1 - Tracing
2 - Functions with if-else
1.1a Tracing simple code
Trace the following Python code
End of explanation
"""
def mi_funcion(x,y,z):
a = x * y * z
b = x/2 + y/4 + z/8
c = a + b
return c
a = 1
b = 2
a = mi_funcion(a, b, 3)
print a
"""
Explanation: <table border="1">
<tr>
<th align="center" colspan="3" width="150">Globales</th>
<th colspan="6" width="400">Locales mi_funcion</th>
</tr>
<tr>
<td>a</td><td>b</td><td></td>
<td>x</td><td>y</td><td>z</td><td>a</td><td>b</td><td>c</td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
</table>
1.1b Tracing simple code
Trace the following Python code
End of explanation
"""
def f(x, y):
x = int(x)/4 + float(x)/4 + len(y)
return x
def g(a, b):
if a==b:
return a
else:
return a*b
a = "dos"
b = 2
c = f(2.0, g(a,b))
"""
Explanation: <table border="1">
<tr>
<th align="center" colspan="3" width="150">Globales</th>
<th colspan="6" width="400">Locales mi_funcion</th>
</tr>
<tr>
<td>a</td><td>b</td><td></td>
<td>x</td><td>y</td><td>z</td><td>a</td><td>b</td><td>c</td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
<tr>
<td height="30"></td><td></td><td></td>
<td></td><td></td><td></td><td></td><td></td><td></td>
</tr>
</table>
1.2 Tracing more complex code
Trace the following Python code
End of explanation
"""
def f1(a,b):
return a-b
def f2(b,a):
c = f1(a,b)
return c
a = 3
b = 4
f2(a,b)
"""
Explanation: <table border="1">
<tr>
<th align="center" colspan="3" width="300">Globales</th>
<th colspan="2" width="200">Locales f</th>
<th colspan="2" width="200">Locales g</th>
</tr>
<tr>
<td>a</td><td>b</td><td>c</td>
<td>x</td><td>y</td>
<td>a</td><td>b</td>
</tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
</table>
1.3 Tracing confusing code
Trace the following Python code
End of explanation
"""
#
def accion_central(precio, capacidad):
if precio>60:
if capacidad>10:
print "generar"
else:
print "nada"
else:
if capacidad<90:
print "bombear"
else:
print "nada"
p = float(raw_input("Ingrese precio de electricidad en USD:"))
estanque = float(raw_input("Ingrese porcentaje llenado del embalse [0-100]:"))
accion_central(p,estanque)
"""
Explanation: <table border="1">
<tr>
<th align="center" colspan="2" width="200">Globales</th>
<th colspan="2" width="200">Locales f1</th>
<th colspan="3" width="300">Locales f2</th>
</tr>
<tr>
<td>a</td><td>b</td>
<td>a</td><td>b</td>
<td>b</td><td>a</td><td>c</td>
</tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
<tr> <td height="30"></td><td></td><td></td><td></td><td></td><td></td><td></td> </tr>
</table>
2.1 The Pumped-Storage Plant Problem
<img src="images/central-de-bombeo.jpg" width="900" alt="" align="middle"/>
2.1 The Pumped-Storage Hydroelectric Plant Problem
A pumped-storage hydroelectric plant has 2 modes: it can empty the reservoir to generate electricity, or it can pump water and fill the reservoir. The decision to generate electricity or to fill the reservoir depends on the price of electricity and on the reservoir level. For a given reservoir, electricity is generated when the electricity price is above 60 USD/MWh and the reservoir is above 10% of its capacity. Electricity is consumed and water is pumped into the reservoir when the electricity price is at or below 60 USD/MWh and the reservoir is below 90% of its capacity.
Draw a flow diagram and write a program that asks for the current electricity price and the fill percentage of the reservoir, and prints the decision on screen: "GENERAR", "BOMBEAR" or "NADA".
2.1 The Pumped-Storage Hydroelectric Plant Problem
<img src="images/centralbombeo.png" width="600" alt="" align="middle"/>
2.1 The Pumped-Storage Hydroelectric Plant Problem
[...] For a given reservoir, electricity is generated when the electricity price is above 60 USD/MWh and the reservoir is above 10% of its capacity. Electricity is consumed and water is pumped into the reservoir when the electricity price is below 60 USD/MWh and the reservoir is below 90% of its capacity.
Draw a flow diagram that [...] prints the decision on screen: "GENERAR", "BOMBEAR" or "NADA".
End of explanation
"""
# Solution 1
def accion_central(precio, capacidad):
if precio>60:
if capacidad>10:
print "Generar"
else:
print "Nada"
else:
if capacidad>90:
print "Nada"
else:
print "Bombear"
return
p = float(raw_input("Ingrese precio de electricidad en USD: "))
estanque = float(raw_input("Ingrese porcentaje llenado del embalse [0-100]: "))
accion_central(p,estanque)
"""
Explanation: 2.1 Pumped-Storage Hydroelectric Plant, v.1
End of explanation
"""
# Solution 2
def accion_central(precio, capacidad):
if precio>60 and capacidad>10:
print "Generar"
elif precio<=60 and capacidad<=90:
print "Bombear"
else:
print "Nada"
return
p = float(raw_input("Ingrese precio de electricidad en USD: "))
estanque = float(raw_input("Ingrese porcentaje llenado del embalse [0-100]: "))
accion_central(p,estanque)
"""
Explanation: 2.1 Pumped-Storage Hydroelectric Plant, v.2
End of explanation
"""
# Students' solution
def es_bisiesto(anno):
# FIX ME
return False
year = int(raw_input("Ingrese un año: "))
print es_bisiesto(year)
"""
Explanation: 2.3 The Leap Year Problem
A year is a leap year if it is divisible by 4, except if it is divisible by 100 and not by 400.
Write the function es_bisiesto(anno) that receives a year and returns True if the given year is a leap year or False if it is not.
es_bisiesto(1988)
True
es_bisiesto(2011)
False
es_bisiesto(1700)
False
es_bisiesto(2400)
True
2.3 The Leap Year Problem
A year is a leap year if it is divisible by 4, except if it is divisible by 100 and not by 400.
End of explanation
"""
# Solution 1
def es_bisiesto(anno):
if anno % 400 == 0:
bisiesto = True
elif anno % 100 == 0:
bisiesto = False
elif anno % 4 == 0:
bisiesto = True
else:
bisiesto = False
return bisiesto
year = int(raw_input("Ingrese un año: "))
print es_bisiesto(year)
# Solution 2
def es_bisiesto(anno):
if ((anno % 4 == 0 and anno % 100 != 0) or anno % 400 == 0):
bisiesto = True
else:
bisiesto = False
return bisiesto
year = int(raw_input("Ingrese un año: "))
print es_bisiesto(year)
# Solution 3
def es_bisiesto(anno):
return ((anno % 4 == 0 and anno % 100 != 0) or anno % 400 == 0)
year = int(raw_input("Ingrese un año: "))
print es_bisiesto(year)
"""
Explanation: 2.3 The Leap Year Problem
End of explanation
"""
|
franciscodominguezmateos/DeepLearningNanoDegree
|
language-translation/dlnd_language_translation.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, youโre going to take a peek into the realm of neural network machine translation. Youโll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_sentences=source_text.split('\n')
source_id_text=[]
for s in source_sentences:
sentence_text_words=s.split()
sentence_id_text=[source_vocab_to_int[i] for i in sentence_text_words]
source_id_text.append(sentence_id_text)
target_sentences=target_text.split('\n')
target_id_text=[]
for s in target_sentences:
sentence_text_words=s.split()
sentence_id_text=[target_vocab_to_int[i] for i in sentence_text_words]
sentence_id_text.append(target_vocab_to_int['<EOS>'])
target_id_text.append(sentence_id_text)
return source_id_text, target_id_text
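# Tiny illustrative example (hypothetical vocabularies, not the project data):
# note that the target sentence gets the <EOS> id appended.
_src_v2i = {'hello': 0, 'world': 1}
_tgt_v2i = {'<EOS>': 0, 'bonjour': 1, 'monde': 2}
print(text_to_ids('hello world', 'bonjour monde', _src_v2i, _tgt_v2i))
# -> ([[0, 1]], [[1, 2, 0]])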
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
for i in range(10):
print(source_int_text[i])
print(target_int_text[i])
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
# TODO: Implement Function
input_ = tf.placeholder(tf.int32, [None, None], name='input')
targets_ = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_probability = tf.placeholder(tf.float32, name='keep_prob')
return (input_, targets_, learning_rate, keep_probability)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
'''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
"""
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# TODO: Implement Function
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs,dtype=tf.float32)
return final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
# Apply dropout to the training logits
drop = tf.nn.dropout(train_logits, keep_prob)
return drop
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings,start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
# Apply dropout to the inference logits
drop = tf.nn.dropout(inference_logits, keep_prob)
return drop
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
# Decoder RNNs
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
# Decoder train
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
# Training Decoder
training_logit=decoding_layer_train(encoder_state, cell, dec_embed_input, sequence_length, decoding_scope,output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
# Inference Decoder
inference_logits=decoding_layer_infer(encoder_state, cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return training_logit, inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
#Apply embedding to the input data for the encoder.
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
#Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_state=encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
#Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input=process_decoding_input(target_data, target_vocab_to_int, batch_size)
#Apply embedding to the target data for the decoder.
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
#Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
training_logit, inference_logits=decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return training_logit,inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
"""
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
ids=[]
for w in sentence.lower().split():
if(w in vocab_to_int):
ids.append(vocab_to_int[w])
else:
ids.append(vocab_to_int['<UNK>'])
return ids
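# Tiny illustrative example (hypothetical vocabulary, not the project data):
# words that are missing from the vocabulary map to the <UNK> id.
_toy_vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('he saw a shiny truck', _toy_vocab_to_int))
# -> [1, 2, 3, 0, 4]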
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
fastai/fastai
|
nbs/42_tabular.model.ipynb
|
apache-2.0
|
#|export
def emb_sz_rule(
n_cat:int # Cardinality of a category
) -> int:
"Rule of thumb to pick embedding size corresponding to `n_cat`"
return min(600, round(1.6 * n_cat**0.56))
#|export
def _one_emb_sz(classes, n, sz_dict=None):
"Pick an embedding size for `n` depending on `classes` if not given in `sz_dict`."
sz_dict = ifnone(sz_dict, {})
n_cat = len(classes[n])
sz = sz_dict.get(n, int(emb_sz_rule(n_cat))) # rule of thumb
return n_cat,sz
"""
Explanation: Tabular model
A basic model that can be used on tabular data
Embeddings
End of explanation
"""
#|export
def get_emb_sz(
to:(Tabular, TabularPandas),
sz_dict:dict=None # Dictionary of {'class_name' : size, ...} to override default `emb_sz_rule`
) -> list: # List of embedding sizes for each category
"Get embedding size for each cat_name in `Tabular` or `TabularPandas`, or populate embedding size manually using sz_dict"
return [_one_emb_sz(to.classes, n, sz_dict) for n in to.cat_names]
#|export
class TabularModel(Module):
"Basic model for tabular data."
def __init__(self,
emb_szs:list, # Sequence of (num_embeddings, embedding_dim) for each categorical variable
n_cont:int, # Number of continuous variables
out_sz:int, # Number of outputs for final `LinBnDrop` layer
layers:list, # Sequence of ints used to specify the input and output size of each `LinBnDrop` layer
ps:(float, list)=None, # Sequence of dropout probabilities for `LinBnDrop`
embed_p:float=0., # Dropout probability for `Embedding` layer
y_range=None, # Low and high for `SigmoidRange` activation
use_bn:bool=True, # Use `BatchNorm1d` in `LinBnDrop` layers
bn_final:bool=False, # Use `BatchNorm1d` on final layer
bn_cont:bool=True, # Use `BatchNorm1d` on continuous variables
act_cls=nn.ReLU(inplace=True), # Activation type for `LinBnDrop` layers
lin_first:bool=True # Linear layer is first or last in `LinBnDrop` layers
):
ps = ifnone(ps, [0]*len(layers))
if not is_listy(ps): ps = [ps]*len(layers)
self.embeds = nn.ModuleList([Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(embed_p)
self.bn_cont = nn.BatchNorm1d(n_cont) if bn_cont else None
n_emb = sum(e.embedding_dim for e in self.embeds)
self.n_emb,self.n_cont = n_emb,n_cont
sizes = [n_emb + n_cont] + layers + [out_sz]
actns = [act_cls for _ in range(len(sizes)-2)] + [None]
_layers = [LinBnDrop(sizes[i], sizes[i+1], bn=use_bn and (i!=len(actns)-1 or bn_final), p=p, act=a, lin_first=lin_first)
for i,(p,a) in enumerate(zip(ps+[0.],actns))]
if y_range is not None: _layers.append(SigmoidRange(*y_range))
self.layers = nn.Sequential(*_layers)
def forward(self, x_cat, x_cont=None):
if self.n_emb != 0:
x = [e(x_cat[:,i]) for i,e in enumerate(self.embeds)]
x = torch.cat(x, 1)
x = self.emb_drop(x)
if self.n_cont != 0:
if self.bn_cont is not None: x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1) if self.n_emb != 0 else x_cont
return self.layers(x)
"""
Explanation: Found through trial and error, this general rule takes the lower of two values:
* A dimension space of 600
* A dimension space equal to 1.6 times the cardinality of the variable raised to the 0.56 power.
This provides a good starting embedding size for your variables. More advanced users who wish to lean into this practice can tweak these values at their discretion; slight adjustments to this general formula often provide more success. A short illustration of the rule appears after this cell.
End of explanation
"""
emb_szs = [(4,2), (17,8)]
m = TabularModel(emb_szs, n_cont=2, out_sz=2, layers=[200,100]).eval()
x_cat = torch.tensor([[2,12]]).long()
x_cont = torch.tensor([[0.7633, -0.1887]]).float()
out = m(x_cat, x_cont)
#|export
@delegates(TabularModel.__init__)
def tabular_config(**kwargs):
"Convenience function to easily create a config for `TabularModel`"
return kwargs
"""
Explanation: This model expects your cat and cont variables separated. cat is passed through an Embedding layer and optional Dropout, while cont is passed through an optional BatchNorm1d. Afterwards both are concatenated and passed through a series of LinBnDrop layers, before a final Linear layer corresponding to the expected outputs.
End of explanation
"""
config = tabular_config(embed_p=0.6, use_bn=False); config
"""
Explanation: Any direct setup of TabularModel's internals should be passed through here:
End of explanation
"""
#|hide
from nbdev.export import notebook2script
notebook2script()
"""
Explanation: Export -
End of explanation
"""
|
gregmedlock/Medusa
|
docs/medusa_objects.ipynb
|
mit
|
import medusa
from medusa.test.test_ensemble import construct_textbook_ensemble
example_ensemble = construct_textbook_ensemble()
"""
Explanation: Introduction to Medusa
Loading an example ensemble and inspecting its parts
In medusa, ensembles of genome-scale metabolic network reconstructions (GENREs) are represented using the medusa.Ensemble class. To demonstrate the functionality and attributes of this class, we'll load a test ensemble. Here, we use a function that takes the E. coli core metabolism reconstruction from cobrapy and randomly removes components to generate ensemble members.
End of explanation
"""
from IPython.display import Image
Image(filename='medusa_structure.png', width=500)
"""
Explanation: Each Ensemble has three key attributes that specify the structure of the ensemble, which we'll describe below. This schematic also summarizes the structure of Ensemble and how each attribute relates to cobrapy objects:
End of explanation
"""
extracted_base_model = example_ensemble.base_model
extracted_base_model
"""
Explanation: Components of an ensemble: base_model
The first is the base_model, which is a cobra.Model object that represents all the possible states of an individual member within the ensemble. Any reaction, metabolite, or gene that is only present in a subset of ensemble members will be present in the base_model for an Ensemble. You can inspect the base_model and manipulate it just like any other cobra.Model object.
End of explanation
"""
# looks like a list when we print it
example_ensemble.members
# Get the first member with integer indexing
first_member = example_ensemble.members[0]
"""
Explanation: Components of an ensemble: members
The second attribute that each Ensemble has is a structure called members. Ensemble.members maps an identifier for each individual GENRE in the ensemble to a medusa.Member object, which holds information about a single member (where a "single member" is an individual GENRE within an ensemble).
Ensemble.members is represented by a custom class implemented in cobrapy called a DictList, which is essentially a standard dictionary in python that can also be accessed using integer indices like a list (e.g. dictlist[0] returns the first element in the dictlist).
End of explanation
"""
print(first_member.ensemble)
print(first_member.id)
print(first_member.states)
"""
Explanation: Each Member within the Ensemble.members DictList has a handful of attributes as well. You can check the ensemble that the member belongs to, the id of the member, and the network states for that member (we'll discuss states more below).
End of explanation
"""
example_ensemble.features
"""
Explanation: Components of an ensemble: features
The states printed above are directly connected to the third attribute that Ensemble contains, Ensemble.features, which is also a DictList object. Ensemble.features contains medusa.Feature entries, which specify the components of the Ensemble.base_model that vary across the entire ensemble.
End of explanation
"""
first_feature = example_ensemble.features[0]
print(first_feature.ensemble)
print(first_feature.base_component)
print(first_feature.component_attribute)
print(first_feature.id)
"""
Explanation: Here, we see that this Ensemble has 8 features. Each Feature object specifies a network component that has a variable parameter value in at least one member of the ensemble (e.g. at least one ensemble member is missing the reaction).
In this case, there are features for 4 reactions: ACALDt, ACKr, ACONTb, and ACt2r. There are two Feature objects for each reaction, corresponding to the lower and upper bound for that reaction. A feature will be generated for any component of a cobra.Model (e.g. Reaction, Gene) that has an attribute value (e.g. Reaction.lower_bound, Reaction.gene_reaction_rule) that varies across the ensemble. As you can see from this result, a feature is created at the level of the specific attribute that varies, not the model component (e.g. we created a Feature for each bound of each Reaction, not for the Reaction objects themselves).
This information can be inferred from feature ID (medusa.Feature.id), but each Feature also has a set of attributes that encode the information. Some useful attributes, described in the order printed below: getting the Ensemble that the Feature belongs to, the component in the Ensemble.base_model that the Feature describes, the attribute of the component in the Ensemble.base_model whose value the Feature specifies, and the ID of the Feature:
End of explanation
"""
print(first_feature.states)
"""
Explanation: Just as each member has an attribute, states, that returns the value of every feature for that member, each feature has a states dictionary that maps each member.id to the value of the feature in the corresponding member, e.g.:
End of explanation
"""
# Remember, our Ensemble holds a normal cobrapy Model in base_model
extracted_base_model = example_ensemble.base_model
# Accessing object by id is common in cobrapy
rxn = extracted_base_model.reactions.get_by_id('ACALDt')
# We can do the same thing for features:
feat = example_ensemble.features.get_by_id('ACALDt_lower_bound')
print(rxn)
print(feat.base_component)
print(feat.component_attribute)
# And for members:
memb = example_ensemble.members.get_by_id('first_textbook')
print('\nHere are the states for this member:')
print(memb.states)
"""
Explanation: Strategies for getting information about an ensemble and its members
Where possible, we use conventions from cobrapy for accessing information about attributes. In cobrapy, the Model object has multiple containers in the form of DictLists: Model.reactions,Model.metabolites,Model.genes. Equivalently in medusa, each Ensemble has similarly constructed containers: Ensemble.members and Ensemble.features.
As such, information about specific Member and Feature objects can be accessed just like Reaction, Metabolite, and Gene objects in cobrapy:
End of explanation
"""
components = []
for feat in example_ensemble.features:
components.append(feat.base_component)
print(components)
# or, use the one-liner which gives the same result:
components = [feat.base_component for feat in example_ensemble.features]
print(components)
"""
Explanation: These DictList objects are all iterables, meaning that any python operation that acts on an iterable can take them as input. This is often convenient when working with either cobrapy Models or medusa Ensembles. For example, suppose we are interested in getting the list of all components described by features in the Ensemble:
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/ml_ops/stage2/get_started_vertex_training_sklearn.ipynb
|
apache-2.0
|
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform -q
"""
Explanation: E2E ML on GCP: MLOps stage 2 : experimentation: get started with Vertex Training for Scikit-Learn
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage2/get_started_vertex_training_sklearn.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage2/get_started_vertex_training_sklearn.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage2/get_started_vertex_training_sklearn.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. It covers stage 2: experimentation: get started with Vertex AI Training for scikit-learn.
Dataset
The dataset used for this tutorial is the News Aggregator dataset from the UCI Machine Learning Repository. The trained model predicts the news category of a news article.
Objective
In this tutorial, you learn how to use Vertex AI Training for training a Scikit-Learn custom model.
This tutorial uses the following Google Cloud ML services:
Vertex AI Training
Vertex AI Model resource
The steps performed include:
Training using a Python package.
Report accuracy when hyperparameter tuning.
Save the model artifacts to Cloud Storage using GCSFuse.
Create a Vertex AI Model resource.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install the following packages for executing this notebook.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
"""
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_URI
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
"""
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
"""
import os
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
"""
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: TF releases before 2.3 with GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
"""
TRAIN_VERSION = "scikit-learn-cpu.0-23"
DEPLOY_VERSION = "sklearn-cpu.0-23"
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
"""
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
"""
Explanation: Set machine type
Next, set the machine type to use for training.
Set the variable TRAIN_COMPUTE to configure the compute resources for the VMs you will use for training.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'wget',\n\n 'cloudml-hypertune',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: News Aggregation text classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
"""
Explanation: Introduction to scikit-learn training
Once you have trained a scikit-learn model, you will want to save it to a Cloud Storage location so it can subsequently be uploaded to a Vertex AI Model resource. The scikit-learn package does not itself support saving a model to Cloud Storage. Instead, you do the following steps to save it to a Cloud Storage location (a short sketch of this pattern is shown after this section):
Save the in-memory model to the local filesystem in pickle format (e.g., model.pkl).
Create a Cloud Storage client.
Upload the pickle file as a blob to the specified Cloud Storage location using the client.
Note: You can also do hyperparameter tuning with a scikit-learn model.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when referring to it in the worker pool specification, you replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
"""
%%writefile custom/trainer/task.py
import argparse
import logging
import os
import pickle
import zipfile
from typing import List, Tuple
import pandas as pd
import wget
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
import hypertune
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument("--dataset-url", dest="dataset_url",
type=str, help="Download url for the training data.")
parser.add_argument('--alpha', dest='alpha',
default=1.0, type=float,
help='Alpha parameters for MultinomialNB')
args = parser.parse_args()
logging.getLogger().setLevel(logging.INFO)
def get_data(url: str, test_size: float = 0.2) -> Tuple[List, List, List, List]:
logging.info("Downloading training data from: {}".format(args.dataset_url))
zip_filepath = wget.download(url, out=".")
with zipfile.ZipFile(zip_filepath, "r") as zf:
zf.extract(path=".", member="newsCorpora.csv")
COLUMN_NAMES = ["id", "title", "url", "publisher",
"category", "story", "hostname", "timestamp"]
dataframe = pd.read_csv(
"newsCorpora.csv", delimiter=" ", names=COLUMN_NAMES, index_col=0
)
train, test = train_test_split(dataframe, test_size=test_size)
x_train, y_train = train["title"].values, train["category"].values
x_test, y_test = test["title"].values, test["category"].values
return x_train, y_train, x_test, y_test
def get_model():
logging.info("Build model ...")
model = Pipeline([
("vectorizer", CountVectorizer()),
("tfidf", TfidfTransformer()),
("naivebayes", MultinomialNB(alpha=args.alpha)),
])
return model
def train_model(model: Pipeline, X_train: List, y_train: List, X_test: List, y_test: List
) -> Pipeline:
logging.info("Training started ...")
model.fit(X_train, y_train)
logging.info("Training completed")
return model
def evaluate_model(model: Pipeline, X_train: List, y_train: List, X_test: List, y_test: List
) -> float:
score = model.score(X_test, y_test)
logging.info(f"Evaluation completed with model score: {score}")
# report metric for hyperparameter tuning
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=score
)
return score
def export_model_to_gcs(fitted_pipeline: Pipeline, gcs_uri: str) -> str:
"""Exports trained pipeline to GCS
Parameters:
fitted_pipeline (sklearn.pipelines.Pipeline): the Pipeline object
with data already fitted (trained pipeline object).
gcs_uri (str): GCS path to store the trained pipeline
i.e gs://example_bucket/training-job.
Returns:
export_path (str): Model GCS location
"""
# Upload model artifact to Cloud Storage
artifact_filename = 'model.pkl'
storage_path = os.path.join(gcs_uri, artifact_filename)
    # Write the pickled pipeline; with a /gcs/ (GCSFuse) path this persists to Cloud Storage
with open(storage_path, 'wb') as model_file:
pickle.dump(fitted_pipeline, model_file)
def export_evaluation_report_to_gcs(report: str, gcs_uri: str) -> None:
"""
Exports training job report to GCS
Parameters:
report (str): Full report in text to sent to GCS
gcs_uri (str): GCS path to store the report
i.e gs://example_bucket/training-job
"""
    # Upload the evaluation report to Cloud Storage
artifact_filename = 'report.txt'
storage_path = os.path.join(gcs_uri, artifact_filename)
    # Write the report; with a /gcs/ (GCSFuse) path this persists to Cloud Storage
with open(storage_path, 'w') as report_file:
report_file.write(report)
logging.info("Starting custom training job.")
data = get_data(args.dataset_url)
model = get_model()
model = train_model(model, *data)
score = evaluate_model(model, *data)
# export model to gcs using GCSFuse
logging.info("Exporting model artifacts ...")
gs_prefix = 'gs://'
gcsfuse_prefix = '/gcs/'
if args.model_dir.startswith(gs_prefix):
args.model_dir = args.model_dir.replace(gs_prefix, gcsfuse_prefix)
dirpath = os.path.split(args.model_dir)[0]
if not os.path.isdir(dirpath):
os.makedirs(dirpath)
export_model_to_gcs(model, args.model_dir)
export_evaluation_report_to_gcs(str(score), args.model_dir)
logging.info(f"Exported model artifacts to GCS bucket: {args.model_dir}")
"""
Explanation: Create the task script for the Python training package
Next, you create the task.py script for driving the training package. Some notable steps include:
Command-line arguments:
model-dir: The location to save the trained model. When using Vertex AI custom training, the location will be specified in the environment variable: AIP_MODEL_DIR,
dataset_url: The location of the dataset to download.
alpha: Hyperparameter
Data preprocessing (get_data()):
Download the dataset and split into training and test.
Model architecture (get_model()):
Builds the corresponding model architecture.
Training (train_model()):
Trains the model
Evaluation (evaluate_model()):
Evaluates the model.
If hyperparameter tuning, reports the metric for accuracy.
Model artifact saving
Saves the model artifacts and evaluation metrics to the Cloud Storage location specified by model-dir.
Note: GCSFuse (/gcs) is used to do filesystem operations on Cloud Storage buckets.
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_URI/trainer_newsaggr.tar.gz
"""
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
"""
DISPLAY_NAME = "newsaggr_" + TIMESTAMP
job = aip.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_URI}/trainer_newsaggr.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
model_serving_container_image_uri=DEPLOY_IMAGE,
project=PROJECT_ID,
)
"""
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
python_package_gcs_uri: The location of the Python training package as a tarball.
python_module_name: The relative path to the training script in the Python package.
model_serving_container_uri: The container image for deploying the model.
Note: There is no requirements parameter. You specify any requirements in the setup.py script in your Python package.
End of explanation
"""
MODEL_DIR = "{}/{}".format(BUCKET_URI, TIMESTAMP)
DATASET_URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00359/NewsAggregatorDataset.zip"
DIRECT = False
if DIRECT:
CMDARGS = [
"--alpha=" + str(0.9),
"--dataset-url=" + DATASET_URL,
"--model_dir=" + MODEL_DIR,
]
else:
CMDARGS = ["--alpha=" + str(0.9), "--dataset-url=" + DATASET_URL]
"""
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
--dataset-url: The location of the dataset to download.
--alpha: Tunable hyperparameter
End of explanation
"""
if TRAIN_GPU:
model = job.run(
model_display_name="newsaggr_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=False,
)
else:
model = job.run(
model_display_name="newsaggr_" + TIMESTAMP,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=False,
)
model_path_to_deploy = MODEL_DIR
"""
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
model_display_name: The human readable name for the Model resource.
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
"""
_job = job.list(filter=f"display_name={DISPLAY_NAME}")
print(_job)
"""
Explanation: List a custom training job
End of explanation
"""
model.wait()
"""
Explanation: Wait for completion of custom training job
Next, wait for the custom training job to complete. Alternatively, one can set the parameter sync to True in the run() method to block until the custom training job is completed.
End of explanation
"""
job.delete()
"""
Explanation: Delete a custom training job
After a training job is completed, you can delete the training job with the method delete(). Prior to completion, a training job can be canceled with the method cancel().
End of explanation
"""
# Delete the model using the Vertex model object
model.delete()
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Model
Cloud Storage Bucket
End of explanation
"""
|
arcyfelix/Courses
|
In Progress - Deep Learning - The Straight Dope/01 - Crash Course/05 - Problem Set.ipynb
|
apache-2.0
|
import mxnet as mx
mx.random.seed(1)
"""
Explanation: Problem Set
"For the things we have to learn before we can do them, we learn by doing them." - Aristotle
There's nothing quite like working with a new tool to really understand it, so we have put together some exercises throughout this book to give you a chance to put into practice what you learned in the previous lesson(s).
End of explanation
"""
# Problem 1 Work Area
x = mx.nd.empty(shape=[1, 256], ctx=mx.gpu(0), dtype='float32')
x
mx.nd.argmax(x, axis=1)
"""
Explanation: Problems using NDarray (Official Documentation)
Problem 1: Initialize an ndarray of dimension 1x256 on the GPU without overwriting its memory. Then, find the index corresponding to the maximum value in the array (argmax)
End of explanation
"""
# Problem 2 Work Area
random_x = mx.nd.random_uniform(shape=[4, 4], low = 0, high = 1)
random_x
identity_m = mx.nd.one_hot(mx.nd.arange(4), depth=4)
identity_m
output = mx.nd.dot(random_x, identity_m)
output
"""
Explanation: Problems from Linear Algebra
Problem 2: Create a 4x4 matrix of random values (where values are uniformly random on the interval [0,1]). Then create a 4x4 identity matrix (an identity matrix of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere). Multiply the two together and verify that you get the original matrix back.
End of explanation
"""
# Problem 3 Work Area
"""
Explanation: Problem 3: Create a 3x3x20 tensor such that at every x,y coordinate, moving through the z coordinate lists the Fibonacci sequence. So, at a z position of 0, the 3x3 matrix will be all 1s. At z-position 1, the 3x3 matrix will be all 1s. At z-position 2, the 3x3 matrix will be all 2s, at z-position 3, the 3x3 matrix will be all 3s and so forth.
Hint: Create the first 2 matrices by hand and then use element-wise operations in a loop to construct the rest of the tensor.
End of explanation
"""
# Problem 4 Work Area
"""
Explanation: Problem 4: What is the sum of the vector you created? What is the mean?
End of explanation
"""
# Problem 5 Work Area
"""
Explanation: Problem 5: Create a vector [0,1], and another vector [1,0], and use mxnet to calculate the angle between them. Remember that the dot product of two vectors is equal to the cossine of the angle between the vectors, and that the arccos function is the inverse of cosine.
End of explanation
"""
# Problem 6 Work Area
"""
Explanation: Problems from Probability
Problem 6: In the classic game of Risk, the attacker can roll a maximum of three dice, while the defender can roll a maximum of two dice. Simulate the attacking and defending dice using sample_multinomial to try to estimate the odds that an attacker will win against a defender when both are rolling the maximum number of dice.
End of explanation
"""
# Problem 7 Work Area
"""
Explanation: Problems from Automatic differentiation with autograd
Problem 7: The formula for a parabola is y = ax^2 + bx + c. If a = 5 and b = 13, what is the slope of y when x = 0? How about when x = 7? (A hedged sketch of one approach appears after this cell.)
End of explanation
"""
# Problem 8 Work Area
"""
Explanation: Problem 8: Graph the parabola described in Problem 6 and inspect the slope of y when x = 0 and x = 7. Does it match up with your answer from Problem 6?
End of explanation
"""
|
BadWizard/Inflation
|
Disaggregated-Data/weather-like-plot-HICP-by-item.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
from datetime import datetime
import numpy as np
from matplotlib.ticker import FixedLocator, FixedFormatter
#import seaborn as sns
to_colors = lambda x : x/255.
ls
df_ind_items = pd.read_csv('raw_data_items.csv', header=0, index_col=0, parse_dates=True)
df_ind_items.head()
df_ind_items.index
"""
Explanation: Make a plot of HICP inflation by item groups
End of explanation
"""
df_infl_items = df_ind_items.pct_change(periods=12)*100
mask_rows_infl = df_infl_items.index.year >= 2000
df_infl_items = df_infl_items[mask_rows_infl]
df_infl_items.tail()
tt = df_infl_items.copy()
tt['month'] = tt.index.month
tt['year'] = tt.index.year
tt.head()
tt.to_csv('infl_items.csv')
"""
Explanation: Compute annual inflation rates
End of explanation
"""
# Compute the cross-sectional statistics over the original item columns only,
# so that the summary columns added here do not bias the later statistics
item_cols = list(df_infl_items.columns)
df_infl_items['min'] = df_infl_items[item_cols].min(axis=1)
df_infl_items['max'] = df_infl_items[item_cols].max(axis=1)
df_infl_items['mean'] = df_infl_items[item_cols].mean(axis=1)
df_infl_items['mode'] = df_infl_items[item_cols].quantile(q=0.5, axis=1)  # cross-sectional median
df_infl_items['10th'] = df_infl_items[item_cols].quantile(q=0.10, axis=1)
df_infl_items['90th'] = df_infl_items[item_cols].quantile(q=0.90, axis=1)
df_infl_items['25th'] = df_infl_items[item_cols].quantile(q=0.25, axis=1)
df_infl_items['75th'] = df_infl_items[item_cols].quantile(q=0.75, axis=1)
df_infl_items.tail()
"""
Explanation: df_infl_items.rename(columns = dic)
tt = df_infl_items.copy()
tt['month'] = tt.index.month
tt['year'] = tt.index.year
melted_df = pd.melt(tt,id_vars=['month','year'])
melted_df.head()
End of explanation
"""
df_infl_items.head()
print(df_infl_items.describe())
"""
Explanation: df_infl_items['month'] = df_infl_items.index.month
df_infl_items['year'] = df_infl_items.index.year
End of explanation
"""
len(df_infl_items)
df_infl_items.columns
df_infl_items['month_order'] = range(len(df_infl_items))
month_order = df_infl_items['month_order']
max_infl = df_infl_items['max'].values
min_infl = df_infl_items['min'].values
mean_infl = df_infl_items['mean'].values
mode_infl = df_infl_items['mode'].values
p25th = df_infl_items['25th'].values
p75th = df_infl_items['75th'].values
p10th = df_infl_items['10th'].values
p90th = df_infl_items['90th'].values
inflEA = df_infl_items['76451'].values
year_begin_df = df_infl_items[df_infl_items.index.month == 1]
year_begin_df;
year_beginning_indeces = list(year_begin_df['month_order'].values)
year_beginning_indeces
year_beginning_names = list(year_begin_df.index.year)
year_beginning_names
month_order;
blue3 = map(to_colors, (24, 116, 205)) # 1874CD
wheat2 = map(to_colors, (238, 216, 174)) # EED8AE
wheat3 = map(to_colors, (205, 186, 150)) # CDBA96
wheat4 = map(to_colors, (139, 126, 102)) # 8B7E66
firebrick3 = map(to_colors, (205, 38, 38)) # CD2626
gray30 = map(to_colors, (77, 77, 77)) # 4D4D4D
fig, ax1 = plt.subplots(figsize=(15,7))
plt.bar(month_order, p90th - p10th, bottom=p10th,
edgecolor='none', color='#C3BBA4', width=1);
# Create the bars showing average highs and lows
plt.bar(month_order, p75th - p25th, bottom=p25th,
edgecolor='none', color='#9A9180', width=1);
#annotations={month_order[50]:'Dividends'}
plt.plot(month_order, inflEA, color='#5A3B49',linewidth=2 );
plt.plot(month_order, mode_infl, color='wheat',linewidth=2,alpha=.3);
plt.xticks(year_beginning_indeces,
year_beginning_names,
fontsize=10)
#ax2 = ax1.twiny()
plt.xlim(-5,200)
plt.grid(False)
##ax2 = ax1.twiny()
plt.ylim(-5, 10)
#ax3 = ax1.twinx()
plt.yticks(range(-4, 10, 2), [r'{}'.format(x)
for x in range(-4, 10, 2)], fontsize=10);
plt.grid(axis='both', color='wheat', linewidth=1.5, alpha = .5)
plt.title('HICP inflation, annual rate of change, Jan 2000 - March 2016\n\n', fontsize=20);
"""
Explanation: Generate a bunch of histograms of the data to make sure that all of the data
is in an expected range.
with plt.style.context('https://gist.githubusercontent.com/rhiever/d0a7332fe0beebfdc3d5/raw/223d70799b48131d5ce2723cd5784f39d7a3a653/tableau10.mplstyle'):
for column in df_infl_items.columns[:-2]:
#if column in ['date']:
# continue
plt.figure()
plt.hist(df_infl_items[column].values)
plt.title(column)
#plt.savefig('{}.png'.format(column))
End of explanation
"""
|
jo-tez/aima-python
|
search4e.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import random
import heapq
import math
import sys
from collections import defaultdict, deque, Counter
from itertools import combinations
class Problem(object):
"""The abstract class for a formal problem. A new domain subclasses this,
overriding `actions` and `results`, and perhaps other methods.
Subclasses can add other keywords besides initial and goal.
The default heuristic is 0 and the default step cost is 1 for all states."""
def __init__(self, initial=None, goal=None, **kwds):
self.__dict__.update(initial=initial, goal=goal, **kwds)
def actions(self, state): raise NotImplementedError
def result(self, state, action): raise NotImplementedError
def is_goal(self, state): return state == self.goal
def step_cost(self, s, action, s1): return 1
def h(self, node): return 0
def __str__(self):
return '{}({}, {})'.format(type(self).__name__, self.initial, self.goal)
class Node:
"A Node in a search tree."
def __init__(self, state, parent=None, action=None, path_cost=0):
self.__dict__.update(state=state, parent=parent, action=action, path_cost=path_cost)
def __repr__(self): return '<{}>'.format(self.state)
def __len__(self): return 0 if self.parent is None else (1 + len(self.parent))
def __lt__(self, other): return self.path_cost < other.path_cost
failure = Node('failure', path_cost=math.inf) # Indicates an algorithm couldn't find a solution.
cutoff = Node('cutoff', path_cost=math.inf) # Indicates iterative deepening search was cut off.
def expand(problem, node):
"Expand a node, generating the children nodes."
s = node.state
for action in problem.actions(s):
s1 = problem.result(s, action)
cost = node.path_cost + problem.step_cost(s, action, s1)
yield Node(s1, node, action, cost)
def path_actions(node):
"The sequence of actions to get to this node."
return [] if node.parent is None else path_actions(node.parent) + [node.action]
def path_states(node):
"The sequence of states to get to this node."
if node in (cutoff, failure, None): return []
return path_states(node.parent) + [node.state]
"""
Explanation: Search for AIMA 4th edition
Implementation of search algorithms and search problems for AIMA.
Problems and Nodes
We start by defining the abstract class for a Problem; specific problem domains will subclass this. To make it easier for algorithms that use a heuristic evaluation function, Problem has a default h function (uniformly zero), and subclasses can define their own default h function.
We also define a Node in a search tree, and some functions on nodes: expand to generate successors; path_actions and path_states to recover aspects of the path from the node.
End of explanation
"""
FIFOQueue = deque
LIFOQueue = list
class PriorityQueue:
"""A queue in which the item with minimum f(item) is always popped first."""
def __init__(self, items=(), key=lambda x: x):
self.key = key
self.items = [] # a heap of (score, item) pairs
for item in items:
self.add(item)
def add(self, item):
"""Add item to the queuez."""
pair = (self.key(item), item)
heapq.heappush(self.items, pair)
def pop(self):
"""Pop and return the item with min f(item) value."""
return heapq.heappop(self.items)[1]
def top(self): return self.items[0][1]
def __len__(self): return len(self.items)
"""
Explanation: Queues
First-in-first-out and Last-in-first-out queues, and a PriorityQueue, which allows you to keep a collection of items, and continually remove from it the item with minimum f(item) score.
End of explanation
"""
def breadth_first_search(problem):
"Search shallowest nodes in the search tree first."
frontier = FIFOQueue([Node(problem.initial)])
reached = set()
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
for child in expand(problem, node):
s = child.state
if s not in reached:
reached.add(s)
frontier.appendleft(child)
return failure
def depth_limited_search(problem, limit=5):
"Search deepest nodes in the search tree first."
frontier = LIFOQueue([Node(problem.initial)])
solution = failure
while frontier:
node = frontier.pop()
if len(node) > limit:
solution = cutoff
else:
for child in expand(problem, node):
if problem.is_goal(child.state):
return child
frontier.append(child)
return solution
def iterative_deepening_search(problem):
"Do depth-limited search with increasing depth limits."
for limit in range(1, sys.maxsize):
result = depth_limited_search(problem, limit)
if result != cutoff:
return result
# TODO: bidirectional-search, RBFS, and-or-search
"""
Explanation: Search Algorithms
Here are the state-space search algorithms covered in the book:
End of explanation
"""
def best_first_search(problem, f):
"Search nodes with minimum f(node) value first."
frontier = PriorityQueue([Node(problem.initial)], key=f)
reached = {}
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
for child in expand(problem, node):
s = child.state
if s not in reached or child.path_cost < reached[s].path_cost:
reached[s] = child
frontier.add(child)
return failure
def uniform_cost_search(problem):
"Search nodes with minimum path cost first."
return best_first_search(problem, f=lambda node: node.path_cost)
def astar_search(problem, h=None):
"""Search nodes with minimum f(n) = g(n) + h(n)."""
h = h or problem.h
return best_first_search(problem, f=lambda node: node.path_cost + h(node))
def weighted_astar_search(problem, weight=1.4, h=None):
"""Search nodes with minimum f(n) = g(n) + h(n)."""
h = h or problem.h
return best_first_search(problem, f=lambda node: node.path_cost + weight * h(node))
def greedy_bfs(problem, h=None):
"""Search nodes with minimum h(n)."""
h = h or problem.h
return best_first_search(problem, f=h)
def breadth_first_bfs(problem):
"Search shallowest nodes in the search tree first; using best-first."
return best_first_search(problem, f=len)
def depth_first_bfs(problem):
"Search deepest nodes in the search tree first; using best-first."
return best_first_search(problem, f=lambda node: -len(node))
"""
Explanation: Best-First Search Algorithms
Best-first search with various f(n) functions gives us different search algorithms. Note that A*, weighted A* and greedy search can be given a heuristic function, h, but if h is not supplied they use the problem's default h function.
End of explanation
"""
class RouteProblem(Problem):
"""A problem to find a route between locations on a `Map`.
Create a problem with RouteProblem(start, goal, map=Map(...)}).
States are the vertexes in the Map graph; actions are destination states."""
def actions(self, state):
"""The places neighboring `state`."""
return self.map.neighbors[state]
def result(self, state, action):
"""Go to the `action` place, if the map says that is possible."""
return action if action in self.map.neighbors[state] else state
def step_cost(self, s, action, s1):
"""The distance (cost) to go from s to s1."""
return self.map.distances[s, s1]
def h(self, node):
"Straight-line distance between state and the goal."
locs = self.map.locations
return straight_line_distance(locs[node.state], locs[self.goal])
def straight_line_distance(A, B):
"Straight-line distance between two 2D points."
return abs(complex(*A) - complex(*B))
class Map:
"""A map of places in a 2D world: a graph with vertexes and links between them.
In `Map(links, locations)`, `links` can be either [(v1, v2)...] pairs,
or a {(v1, v2): distance...} dict. Optional `locations` can be {v1: (x, y)}
If `directed=False` then for every (v1, v2) link, we add a (v2, v1)."""
def __init__(self, links, locations=None, directed=False):
if not hasattr(links, 'items'): # Distances are 1 by default
links = {link: 1 for link in links}
if not directed:
for (v1, v2) in list(links):
links[v2, v1] = links[v1, v2]
self.distances = links
self.locations = locations or defaultdict(lambda: (0, 0))
self.neighbors = multimap(links)
def multimap(pairs) -> dict:
"Given (key, val) pairs, make a dict of {key: [val,...]}."
result = defaultdict(list)
for key, val in pairs:
result[key].append(val)
return result
romania = Map(
{('O', 'Z'): 71, ('O', 'S'): 151, ('A', 'Z'): 75, ('A', 'S'): 140, ('A', 'T'): 118,
('L', 'T'): 111, ('L', 'M'): 70, ('D', 'M'): 75, ('C', 'D'): 120, ('C', 'R'): 146,
('C', 'P'): 138, ('R', 'S'): 80, ('F', 'S'): 99, ('B', 'F'): 211, ('B', 'P'): 101,
('B', 'G'): 90, ('B', 'U'): 85, ('H', 'U'): 98, ('E', 'H'): 86, ('U', 'V'): 142,
('I', 'V'): 92, ('I', 'N'): 87, ('P', 'R'): 97},
locations=dict(
A=(91, 492), B=(400, 327), C=(253, 288), D=(165, 299), E=(562, 293), F=(305, 449),
G=(375, 270), H=(534, 350), I=(473, 506), L=(165, 379), M=(168, 339), N=(406, 537),
O=(131, 571), P=(320, 368), R=(233, 410), S=(207, 457), T=(94, 410), U=(456, 350),
V=(509, 444), Z=(108, 531)))
"""
Explanation: Problem Domains
Now we turn our attention to defining some problem domains as subclasses of Problem.
Route Finding Problems
In a RouteProblem, the states are names of "cities" (or other locations), like 'A' for Arad. The actions are also city names; 'Z' is the action to move to city 'Z'. The layout of cities is given by a separate data structure, a Map, which is a graph where there are vertexes (cities), links between vertexes, distances (costs) of those links (if not specified, the default is 1 for every link), and optionally the 2D (x, y) location of each city can be specified. A RouteProblem takes this Map as input and allows actions to move between linked cities. The default heuristic is straight-line distance to the goal, or is uniformly zero if locations were not given.
End of explanation
"""
class GridProblem(Problem):
"""Finding a path on a 2D grid with obstacles. Obstacles are (x, y) cells."""
def __init__(self, initial=(15, 30), goal=(130, 30), obstacles=(), **kwds):
Problem.__init__(self, initial=initial, goal=goal,
obstacles=set(obstacles) - {initial, goal}, **kwds)
directions = [(-1, -1), (0, -1), (1, -1),
(-1, 0), (1, 0),
(-1, +1), (0, +1), (1, +1)]
def step_cost(self, s, action, s1): return straight_line_distance(s, s1)
def h(self, node): return straight_line_distance(node.state, self.goal)
def result(self, state, action):
"Both states and actions are represented by (x, y) pairs."
return action if action not in self.obstacles else state
def actions(self, state):
"""You can move one cell in any of `directions` to a non-obstacle cell."""
x, y = state
return [(x + dx, y + dy) for (dx, dy) in self.directions
if (x + dx, y + dy) not in self.obstacles]
# The following can be used to create obstacles:
def random_lines(X=range(150), Y=range(60), N=150, lengths=range(6, 12), dirs=((0, 1), (1, 0))):
"""Yield the cells in N random lines of the given lengths."""
for _ in range(N):
x, y = random.choice(X), random.choice(Y)
dx, dy = random.choice(dirs)
yield from line(x, y, dx, dy, random.choice(lengths))
def line(x, y, dx, dy, length):
"""A line of `length` cells starting at (x, y) and going in (dx, dy) direction."""
return {(x + i * dx, y + i * dy) for i in range(length)}
"""
Explanation: Grid Problems
A GridProblem involves navigating on a 2D grid, with some cells being impassible obstacles. By default you can move to any of the eight neighboring cells that are not obstacles (but in a problem instance you can supply a directions= keyword to change that). Again, the default heuristic is straight-line distance to the goal. States are (x, y) cell locations, such as (4, 2), and actions are (dx, dy) cell movements, such as (0, -1), which means leave the x coordinate alone, and decrement the y coordinate by 1.
End of explanation
"""
class EightPuzzle(Problem):
""" The problem of sliding tiles numbered from 1 to 8 on a 3x3 board,
where one of the squares is a blank, trying to reach a goal configuration.
A board state is represented as a tuple of length 9, where the element at index i
    represents the tile number at index i, or 0 for the empty square, e.g. the goal:
1 2 3
4 5 6 ==> (1, 2, 3, 4, 5, 6, 7, 8, 0)
7 8 _
"""
def __init__(self, initial, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
assert inversions(initial) % 2 == inversions(goal) % 2 # Parity check
self.initial, self.goal = initial, goal
def actions(self, state):
"""The indexes of the squares that the blank can move to."""
moves = ((1, 3), (0, 2, 4), (1, 5),
(0, 4, 6), (1, 3, 5, 7), (2, 4, 8),
(3, 7), (4, 6, 8), (7, 5))
blank = state.index(0)
return moves[blank]
def result(self, state, action):
"""Swap the blank with the square numbered `action`."""
s = list(state)
blank = state.index(0)
s[action], s[blank] = s[blank], s[action]
return tuple(s)
def h(self, node):
"""The Manhattan heuristic."""
X = (0, 1, 2, 0, 1, 2, 0, 1, 2)
Y = (0, 0, 0, 1, 1, 1, 2, 2, 2)
return sum(abs(X[s] - X[g]) + abs(Y[s] - Y[g])
for (s, g) in zip(node.state, self.goal) if s != 0)
def h2(self, node):
"""The misplaced tiles heuristic."""
return sum(s != g for (s, g) in zip(node.state, self.goal) if s != 0)
def inversions(board):
"The number of times a piece is a smaller number than a following piece."
return sum((a > b and a != 0 and b != 0) for (a, b) in combinations(board, 2))
def board8(board, fmt=(3 * '{} {} {}\n')):
"A string representing an 8-puzzle board"
return fmt.format(*board).replace('0', '_')
"""
Explanation: 8 Puzzle Problems
A sliding block puzzle where you can swap the blank with an adjacent piece, trying to reach a goal configuration. The cells are numbered 0 to 8, starting at the top left and going row by row left to right. The pieces are numbered 1 to 8, with 0 representing the blank. An action is the cell index number that is to be swapped with the blank (not the actual number to be swapped but the index into the state). For example, in the state (5, 2, 7, 8, 4, 0, 1, 3, 6) the action 8 means the last cell (the 6 in the bottom right) is swapped with the blank.
There are two disjoint sets of states that cannot be reached from each other. One set has an even number of "inversions"; the other has an odd number. An inversion is when a piece in the state is larger than a piece that follows it.
End of explanation
"""
class PourProblem(Problem):
"""Problem about pouring water between jugs to achieve some water level.
Each state is a tuples of water levels. In the initialization, also provide a tuple of
jug sizes, e.g. PourProblem(initial=(0, 0), goal=4, sizes=(5, 3)),
which means two jugs of sizes 5 and 3, initially both empty, with the goal
of getting a level of 4 in either jug."""
def actions(self, state):
"""The actions executable in this state."""
jugs = range(len(state))
return ([('Fill', i) for i in jugs if state[i] < self.sizes[i]] +
[('Dump', i) for i in jugs if state[i]] +
[('Pour', i, j) for i in jugs if state[i] for j in jugs if i != j])
def result(self, state, action):
"""The state that results from executing this action in this state."""
result = list(state)
act, i, *_ = action
if act == 'Fill': # Fill i to capacity
result[i] = self.sizes[i]
elif act == 'Dump': # Empty i
result[i] = 0
elif act == 'Pour': # Pour from i into j
j = action[2]
amount = min(state[i], self.sizes[j] - state[j])
result[i] -= amount
result[j] += amount
return tuple(result)
def is_goal(self, state):
"""True if the goal level is in any one of the jugs."""
return self.goal in state
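# A quick illustration (not part of the original notebook): the Die Hard jugs.
# From (0, 0) the only executable actions are fills; filling the 5-gallon jug and then
# pouring it into the 3-gallon jug leaves the levels (2, 3).
demo = PourProblem(initial=(0, 0), goal=4, sizes=(5, 3))   # illustrative instance
print(demo.actions((0, 0)))                  # [('Fill', 0), ('Fill', 1)]
print(demo.result((5, 0), ('Pour', 0, 1)))   # (2, 3)
print(demo.is_goal((4, 3)))                  # True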
"""
Explanation: Water Pouring Problems
In a water pouring problem you are given a collection of jugs, each of which has a size (capacity) in, say, litres, and a current level of water (in litres). The goal is to measure out a certain level of water; it can appear in any of the jugs. For example, in the movie Die Hard 3, the heroes were faced with the task of making exactly 4 gallons from jugs of size 5 gallons and 3 gallons. A state is represented by a tuple of current water levels, and the available actions are:
- (Fill, i): fill the ith jug all the way to the top (from a tap with unlimited water).
- (Dump, i): dump all the water out of the ith jug.
- (Pour, i, j): pour water from the ith jug into the jth jug until either the jug i is empty, or jug j is full, whichever comes first.
End of explanation
"""
class GreenPourProblem(PourProblem):
"""A PourProblem in which we count not the steps, but the amount of water used."""
def step_cost(self, s, action, s1):
"The cost is the amount of water used in a fill."
act, i, *_ = action
return self.sizes[i] - s[i] if act == 'Fill' else 0
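# A quick illustration (not part of the original notebook): filling the empty 5-gallon
# jug draws 5 gallons from the tap, while pouring between jugs costs nothing.
green_demo = GreenPourProblem(initial=(0, 0), goal=4, sizes=(5, 3))   # illustrative instance
print(green_demo.step_cost((0, 0), ('Fill', 0), (5, 0)))     # 5
print(green_demo.step_cost((5, 0), ('Pour', 0, 1), (2, 3)))  # 0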
"""
Explanation: In a GreenPourProblem, the states and actions are the same, but the path cost is not the number of steps, but rather the total amount of water that flows from the tap during Fill actions. (There is an issue that non-Fill actions have 0 cost, which in general can lead to indefinitely long solutions, but in this problem there is a finite number of states, so we're ok.)
End of explanation
"""
random.seed('42')
p1 = PourProblem((1, 1, 1), 13, sizes=(2, 16, 32))
p2 = PourProblem((0, 0, 0), 21, sizes=(8, 11, 31))
p3 = PourProblem((0, 0), 8, sizes=(7,9))
p4 = PourProblem((0, 0, 0), 21, sizes=(8, 11, 31))
p5 = PourProblem((0, 0), 4, sizes=(5, 3))
g1 = GreenPourProblem((1, 1, 1), 13, sizes=(2, 16, 32))
g2 = GreenPourProblem((0, 0, 0), 21, sizes=(8, 11, 31))
g3 = GreenPourProblem((0, 0), 8, sizes=(7,9))
g4 = GreenPourProblem((0, 0, 0), 21, sizes=(8, 11, 31))
g5 = GreenPourProblem((0, 0), 4, sizes=(3, 5))
r1 = RouteProblem('A', 'B', map=romania)
r2 = RouteProblem('N', 'L', map=romania)
r3 = RouteProblem('E', 'T', map=romania)
r4 = RouteProblem('O', 'M', map=romania)
cup = line(102, 44, -1, 0, 15) | line(102, 20, -1, 0, 20) | line(102, 44, 0, -1, 24)
barriers = (line(50, 35, 0, -1, 10) | line(60, 37, 0, -1, 17)
| line(70, 31, 0, -1, 19) | line(5, 5, 0, 1, 50))
d1 = GridProblem(obstacles=random_lines(N=100))
d2 = GridProblem(obstacles=random_lines(N=150))
d3 = GridProblem(obstacles=random_lines(N=200))
d4 = GridProblem(obstacles=random_lines(N=250))
d5 = GridProblem(obstacles=random_lines(N=300))
d6 = GridProblem(obstacles=cup)
d7 = GridProblem(obstacles=cup|barriers)
e1 = EightPuzzle((4, 0, 2, 5, 1, 3, 7, 8, 6))
e2 = EightPuzzle((0, 1, 2, 3, 4, 5, 6, 7, 8))
e3 = EightPuzzle((1, 4, 2, 0, 7, 5, 3, 6, 8))
e4 = EightPuzzle((2, 5, 8, 1, 4, 7, 0, 3, 6))
e5 = EightPuzzle((8, 6, 7, 2, 5, 4, 3, 0, 1))
# Solve a Romania route problem to get a node/path; see the cost and states in the path
node = astar_search(r1)
node.path_cost, path_states(node)
# Breadth first search finds a solution with fewer steps, but higher path cost
node = breadth_first_search(r1)
node.path_cost, path_states(node)
# Solve the PourProblem of getting 13 in some jug, and show the actions and states
soln = breadth_first_search(p1)
path_actions(soln), path_states(soln)
# Solve an 8 puzzle problem and print out each state
for s in path_states(astar_search(e1)):
print(board8(s))
"""
Explanation: Specific Problems and Solutions
Now that we have some domains, we can make specific problems in those domains, and solve them:
End of explanation
"""
class CountCalls:
"""Delegate all attribute gets to the object, and count them in ._counts"""
def __init__(self, obj):
self._object = obj
self._counts = Counter()
def __getattr__(self, attr):
"Delegate to the original object, after incrementing a counter."
self._counts[attr] += 1
return getattr(self._object, attr)
def report(searchers, problems):
"Show summary statistics for each searcher on each problem."
for searcher in searchers:
print(searcher.__name__ + ':')
total_counts = Counter()
for p in problems:
prob = CountCalls(p)
soln = searcher(prob)
counts = prob._counts;
counts.update(steps=len(soln), cost=soln.path_cost)
total_counts += counts
report_counts(counts, str(p)[:40])
report_counts(total_counts, 'TOTAL\n')
def report_counts(counts, name):
"Print one line of the counts report."
print('{:9,d} nodes |{:7,d} goal |{:5.0f} cost |{:3d} steps | {}'.format(
counts['result'], counts['is_goal'], counts['cost'], counts['steps'], name))
"""
Explanation: Reporting Summary Statistics on Search Algorithms
Now let's gather some metrics on how well each algorithm does. We'll use CountCalls to wrap a Problem object in such a way that calls to its methods are delegated to the original problem, but each call increments a counter. Once we've solved the problem, we print out summary statistics.
End of explanation
"""
report([uniform_cost_search], [p1, p2, p3, p4, p5])
"""
Explanation: Here's a tiny report for uniform-cost search on the jug pouring problems:
End of explanation
"""
report((uniform_cost_search, breadth_first_search),
(p1, g1, p2, g2, p3, g3, p4, g4, p4, g4))
"""
Explanation: The last line says that, over the five problems, uniform-cost search explored 8,138 nodes (some of which may be redundant paths ending up in duplicate states), and did 934 goal tests. Together, the five solutions had a path cost of 42 and also a total number of steps of 42 (since step cost is 1 in these problems).
Comparing uniform-cost and breadth-first search
Below we compare uniform-cost with breadth-first search, on the pouring problems and their green counterparts. We see that breadth-first finds solutions with the minimal number of steps, and uniform-cost finds optimal solutions with the minimal path cost. Overall they explore a similar number of states.
End of explanation
"""
def astar_misplaced_tiles(problem): return astar_search(problem, h=problem.h2)
report([astar_search, astar_misplaced_tiles, uniform_cost_search],
[e1, e2, e3, e4, e5])
"""
Explanation: Comparing optimal algorithms on 8-puzzle problems
Next, let's look at the eight puzzle problems, and compare three optimal algorithms: A* search with the Manhattan heuristic; A* search with the less informative misplaced tiles heuristic, and uniform-cost search with no heuristic:
End of explanation
"""
report((greedy_bfs, weighted_astar_search, astar_search, uniform_cost_search),
(r1, r2, r3, r4, d1, d2, d3, d4, d5, d6, d7, e1, e2, e3, e4))
"""
Explanation: We see that they all get the optimal solutions with the minimal path cost, but the better the heuristic, the fewer nodes explored.
Comparing different h weights on grid problems
Below we report on grid problems using these four algorithms:
|Algorithm|f|Optimality|
|:---------|---:|:----------:|
|Greedy best-first search | f = h|nonoptimal|
|Weighted A* search | f = g + 1.4 × h|nonoptimal|
|A* search | f = g + h|optimal|
|Uniform-cost search | f = g|optimal|
We will see that greedy best-first search (which ranks nodes solely by the heuristic) explores the fewest number of nodes, but has the highest path costs. Weighted A* search explores twice as many nodes (on this problem set) but gets 10% better path costs. A* is optimal, but explores more nodes, and uniform-cost is also optimal, but explores an order of magnitude more nodes.
End of explanation
"""
report((astar_search, uniform_cost_search, breadth_first_search, breadth_first_bfs,
iterative_deepening_search, depth_limited_search, greedy_bfs, weighted_astar_search),
(p1, g1, r1, r2, r3, r4, e1))
"""
Explanation: We see that greedy search expands the fewest nodes, but has the highest path costs. In contrast, A* gets optimal path costs, but expands 4 or 5 times more nodes. Weighted A* is a good compromise, using half the compute time of A*, and achieving path costs within 1% or 2% of optimal. Uniform-cost is optimal, but is an order of magnitude slower than A*.
Comparing many search algorithms
Finally, we compare a host of algorithms on some of the easier problems:
End of explanation
"""
def best_first_search(problem, f):
"Search nodes with minimum f(node) value first; make `reached` global."
global reached # <<<<<<<<<<< Only change here
frontier = PriorityQueue([Node(problem.initial)], key=f)
reached = {}
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
for child in expand(problem, node):
s = child.state
if s not in reached or child.path_cost < reached[s].path_cost:
reached[s] = child
frontier.add(child)
return failure
def plot_grid_problem(grid, solution, reached=(), title='Search'):
"Use matplotlib to plot the grid, obstacles, solution, and reached."
plt.figure(figsize=(15, 6))
plt.axis('off'); plt.axis('equal')
plt.scatter(*transpose(grid.obstacles), marker='s', color='darkgrey')
plt.scatter(*transpose([grid.initial, grid.goal]), 9**2, marker='D', c='red')
plt.scatter(*transpose(reached), 1**2, marker='.', c='blue')
plt.scatter(*transpose(path_states(solution)), marker='s', c='black')
plt.show()
print('{} {} search: {:.1f} path cost, {:,d} states reached'
.format(' ' * 10, title, solution.path_cost, len(reached)))
def transpose(matrix): return list(zip(*matrix))
plot_grid_problem(d3, astar_search(d3), reached)
"""
Explanation: This confirms some of the things we already knew: A* and uniform-cost search are optimal, but the others are not. A* explores fewer nodes than uniform-cost. And depth-limited search failed to find a solution for some of the problems, because the search was cut off too early.
Visualizing Reached States
I would like to draw a picture of the state space, marking the states that have been reached by the search.
Unfortunately, the reached variable is inaccessible inside best_first_search, so I will define a new version of best_first_search that is identical except that it declares reached to be global. I can then define plot_grid_problem to plot the obstacles of a GridProblem, along with the initial and goal states, the solution path, and the states reached during a search.
End of explanation
"""
def plot3(grid, weight=1.9):
"""Plot the results of 3 search algorithms for this grid."""
solution = astar_search(grid)
plot_grid_problem(grid, solution, reached, '(a) A*')
solution = weighted_astar_search(grid, weight)
plot_grid_problem(grid, solution, reached, '(b) Weighted A*')
solution = greedy_bfs(grid)
plot_grid_problem(grid, solution, reached, '(c) Greedy best-first')
plot3(d3)
plot3(d4)
"""
Explanation: Now let's compare the three heuristic search algorithms on the same grid:
End of explanation
"""
plot3(d6)
"""
Explanation: Now I want to try a much simpler grid problem, d6, with only a few obstacles. We see that A* finds the optimal path, skirting below the obstacles. But weighted A* mistakenly takes the slightly longer path above the obstacles, because that path allowed it to stay closer to the goal in straight-line distance, which it over-weights. And greedy best-first search makes a bad showing, not deviating from its path towards the goal until it is almost inside the cup made by the obstacles.
End of explanation
"""
plot3(d7)
# Some tests
def tests():
assert romania.distances['A', 'Z'] == 75
assert romania.locations['A'] == (91, 492)
assert set(romania.neighbors['A']) == {'Z', 'S', 'T'}
# Inversions for 8 puzzle
assert inversions((1, 2, 3, 4, 5, 6, 7, 8, 0)) == 0
assert inversions((1, 2, 3, 4, 6, 5, 8, 7, 0)) == 2 # 6 > 5, 8 > 7
assert line(0, 0, 1, 1, 5) == {(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)}
return 'pass'
tests()
"""
Explanation: In the next problem, d7, we see the optimal path found by A*, and we see that again weighted A* prefers to explore states closer to the goal, and ends up erroneously going below the first two barriers, and then makes another mistake by reversing direction back towards the goal and passing above the third barrier. Again, greedy best-first makes bad decisions all around.
End of explanation
"""
|
dataventures/workshops
|
2/1 - SVM.ipynb
|
mit
|
from IPython.display import Image
from IPython.core.display import HTML
"""
Explanation: Support Vector Machines
Support vector machines (SVMs) are among the most powerful and commonly used models for supervised classification. Today, we will look at the intuition and mathematics behind SVM classifiers, then use them to solve some classification problems.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = iris.target
"""
Explanation: Decision Boundaries and Margins
Intuition
Let's look at classification in a simple linearly separable two-class case. When faced with the classification problem, we want to find a "decision boundary" in the input space, where points on one side of the boundary are predicted to be in class A and points on the other side predicted to be in class B.
One intuitive way to define a decision boundary is to draw the line (shown on the right) that results in the greatest distance between the points of each class and the decision boundary. The SVM model is based precisely on this approach, and the SVM is named for the support vectors, the inputs of each class that lie closest to the margin.
Mathematics
Given the generic two-class classification setup with data $\{x_n, t_n\}_{n = 1}^N$ where $x_n \in \mathbb{R}^D$ and $t_n \in \{-1, 1\}$, we want to find the optimal weights $w$ and bias $b$ for the model $y(x; w, b) = w^Tx + b$.
To define the concept of the margin mathematically, first notice that $w$ will be orthogonal to the decision boundary, so we can decompose each vector into $x = x_{\perp} + r \frac{w}{||w||_2}$, where the magnitude of $r$ is the margin. Noting that $t_ny(x_n;w, b)$ will always be positive, we can rearrange this decomposition to define the margin for point $x_n$ to be
$$ r_n = \frac{t_n(w^Tx_n + b)}{||w||_2} $$
The overall margin $r$ is determined by the margins of the support vectors, so
$$ r = \min_n \frac{t_n(w^Tx_n + b)}{||w||_2}$$
Margin Maximization and Quadratic Programming
To find the optimal $w$ and $b$ for the SVM model, we want to maximize the expression for $r$ formalized above. This actual optimization of $w$ and $b$ relies on complicated math that isn't crucial to understanding why SVMs work.
If you do want to know how this optimization works, the basic idea is that maximizing $r$ can be expressed as a constrained optimization problem in a form that can be solved with quadratic programming. We can also apply the notion of Lagrangian duality to this expression for $r$ to obtain a form that allows us to perform the kernel trick, which we will briefly explain.
https://en.wikipedia.org/wiki/Quadratic_programming
http://stats.stackexchange.com/questions/19181/why-bother-with-the-dual-problem-when-fitting-svm
Kernel Functions
One of the reasons why SVMs can be so powerful is their ability to utilize the kernel trick.
In the Lagrangian optimization of the margin, the solution depends only on the inputs through the term $\phi(x)^T\phi(z)$. A Kernel is defined simply as a function that takes in two vectors of the input space and outputs a scalar representing some similarity metric.
Specifically, a kernel function $K$ is defined such that $K(x, z) = \phi(x)^T\phi(z)$ for some feature representation $\phi$.
The power of kernels is that certain feature representations result in kernels that can be expressed in a closed form. For example, the RBF kernel, $K(x, z) = \phi_{RBF}(x)^T\phi_{RBF}(z)$ can be simplified as $K(x, z) = \exp(-||x - z||^2)$. $\phi_{RBF}(x)$ is an infinite dimension feature representation that would be impossible to calculate for each input $x$, but the key here is that we don't actually have to calculate this feature representation since we only deal with terms of the form $\phi_{RBF}(x)^T\phi_{RBF}(z)$, which can be computed in a very simple way.
The upshot of all of this is that the kernel trick takes advantage of the fact that SVMs rely only on a dot product over the feature space, rather than on individual feature vectors. This allows us to work with immensely complicated feature representations while keeping the actual computation of the SVM very simple. For a more in-depth look at kernels (and SVMs in general), the Stanford ML notes are a great resource.
Application
Here's Scikit-learn's example of how to use their SVM implementation, classifying flowers by sepal length and width. First, we load a toy dataset.
End of explanation
"""
C = 1.0 # SVM regularization parameter
svc = svm.SVC(kernel='linear', C=C).fit(X, y)
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(X, y)
poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, y)
lin_svc = svm.LinearSVC(C=C).fit(X, y)
"""
Explanation: Now, we can fit different SVM models using different kernels.
End of explanation
"""
h = .02 # step size in the mesh
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# title for the plots
titles = ['SVC with linear kernel',
'LinearSVC (linear kernel)',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel']
for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(2, 2, i + 1)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(titles[i])
plt.show()
"""
Explanation: Then, we can plot the decision boundaries for each of the SVM models that we fitted.
End of explanation
"""
# TODO: design your custom kernel
# TODO: fit an SVM with your custom kernel on the data
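# One possible custom kernel for the TODOs above (a sketch, not the only valid choice):
# a degree-2 polynomial kernel written as a callable, which scikit-learn expects to
# return the Gram matrix for two input arrays. `my_kernel` is an illustrative name.
def my_kernel(U, V):
    # K(u, v) = (u . v)^2 for every pair of rows in U and V
    return np.dot(U, V.T) ** 2
clf = svm.SVC(kernel=my_kernel, C=C).fit(X, y)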
# Plot the decision boundary
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.title('Custom kernel SVM')
plt.axis('tight')
plt.show()
"""
Explanation: Challenge: Custom Kernels
Scikit-learn's SVM implementation allows you to define your own kernels. By Mercer's theorem, any symmetric, positive semi-definite function $f : X \times X \rightarrow \mathbb{R}$ can be represented as a dot product in some feature space, and is therefore a valid kernel. Try your hand at defining your own kernels based on the example given, and plot the resulting decision boundaries as they have. See how your kernel compares to the kernels used above, and try to create one that produces the best decision boundary around the given data!
End of explanation
"""
|
dsacademybr/PythonFundamentos
|
Cap07/DesafioDSA/Missao5/missao5.ipynb
|
gpl-3.0
|
# Versรฃo da Linguagem Python
from platform import python_version
print('Versรฃo da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 7</font>
Download: http://github.com/dsacademybr
End of explanation
"""
# Imports
import pandas as pd
import numpy as np
# Load the file
load_file = "dados_compras.json"
purchase_file = pd.read_json(load_file, orient = "records")
purchase_file.head()
"""
Explanation: Missรฃo: Analisar o Comportamento de Compra de Consumidores.
Nรญvel de Dificuldade: Alto
Vocรช recebeu a tarefa de analisar os dados de compras de um web site! Os dados estรฃo no formato JSON e disponรญveis junto com este notebook.
No site, cada usuรกrio efetua login usando sua conta pessoal e pode adquirir produtos ร medida que navega pela lista de produtos oferecidos. Cada produto possui um valor de venda. Dados de idade e sexo de cada usuรกrio foram coletados e estรฃo fornecidos no arquivo JSON.
Seu trabalho รฉ entregar uma anรกlise de comportamento de compra dos consumidores. Esse รฉ um tipo de atividade comum realizado por Cientistas de Dados e o resultado deste trabalho pode ser usado, por exemplo, para alimentar um modelo de Machine Learning e fazer previsรตes sobre comportamentos futuros.
Mas nesta missรฃo vocรช vai analisar o comportamento de compra dos consumidores usando o pacote Pandas da linguagem Python e seu relatรณrio final deve incluir cada um dos seguintes itens:
Contagem de Consumidores
Nรบmero total de consumidores
Anรกlise Geral de Compras
Nรบmero de itens exclusivos
Preรงo mรฉdio de compra
Nรบmero total de compras
Rendimento total
Informaรงรตes Demogrรกficas Por Gรชnero
Porcentagem e contagem de compradores masculinos
Porcentagem e contagem de compradores do sexo feminino
Porcentagem e contagem de outros / nรฃo divulgados
Anรกlise de Compras Por Gรชnero
Nรบmero de compras
Preรงo mรฉdio de compra
Valor Total de Compra
Compras for faixa etรกria
Identifique os 5 principais compradores pelo valor total de compra e, em seguida, liste (em uma tabela):
Login
Nรบmero de compras
Preรงo mรฉdio de compra
Valor Total de Compra
Itens mais populares
Identifique os 5 itens mais populares por contagem de compras e, em seguida, liste (em uma tabela):
ID do item
Nome do item
Nรบmero de compras
Preรงo do item
Valor Total de Compra
Itens mais lucrativos
Identifique os 5 itens mais lucrativos pelo valor total de compra e, em seguida, liste (em uma tabela):
ID do item
Nome do item
Nรบmero de compras
Preรงo do item
Valor Total de Compra
Como consideraรงรตes finais:
Seu script deve funcionar para o conjunto de dados fornecido.
Vocรช deve usar a Biblioteca Pandas e o Jupyter Notebook.
End of explanation
"""
# Implement your solution here
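# A possible sketch for the consumer count (assumption: the JSON data has a 'Login'
# column identifying each consumer, as suggested by the mission statement; adjust the
# column name to match the actual data).
total_consumers = purchase_file['Login'].nunique()
print('Total number of consumers:', total_consumers)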
"""
Explanation: Informaรงรตes Sobre os Consumidores
End of explanation
"""
# Implement your solution here
"""
Explanation: Overall Purchase Analysis
End of explanation
"""
# Implement your solution here
"""
Explanation: Demographic Analysis
End of explanation
"""
# Implement your solution here
"""
Explanation: Demographic Information by Gender
End of explanation
"""
# Implement your solution here
"""
Explanation: Purchase Analysis by Gender
End of explanation
"""
# Implement your solution here
"""
Explanation: Most Popular Consumers (Top 5)
End of explanation
"""
# Implement your solution here
"""
Explanation: Most Popular Items
End of explanation
"""
# Implement your solution here
"""
Explanation: Most Profitable Items
End of explanation
"""
|
turbomanage/training-data-analyst
|
courses/machine_learning/deepdive2/text_classification/labs/rnn_encoder_decoder.ipynb
|
apache-2.0
|
import os
import pickle
import sys
import nltk
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers import (
Dense,
Embedding,
GRU,
Input,
)
from tensorflow.keras.models import (
load_model,
Model,
)
import utils_preproc
print(tf.__version__)
SEED = 0
MODEL_PATH = 'translate_models/baseline'
DATA_URL = 'http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip'
LOAD_CHECKPOINT = False
tf.random.set_seed(SEED)
"""
Explanation: Simple RNN Encode-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLEU score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using an RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which we will save as two separate models, the encoder and the decoder. Using these two separate pieces we will implement the translation function.
Finally, we'll benchmark our results using the industry-standard BLEU score.
End of explanation
"""
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin=DATA_URL, extract=True)
path_to_file = os.path.join(
os.path.dirname(path_to_zip),
"spa-eng/spa.txt"
)
print("Translation data stored at:", path_to_file)
data = pd.read_csv(
path_to_file, sep='\t', header=None, names=['english', 'spanish'])
data.sample(3)
"""
Explanation: Downloading the Data
We'll use a language dataset provided by http://www.manythings.org/anki/. The dataset contains Spanish-English translation pairs in the format:
May I borrow this book? ยฟPuedo tomar prestado este libro?
The dataset is a curated list of 120K translation pairs from http://tatoeba.org/, a platform for community contributed translations by native speakers.
End of explanation
"""
raw = [
"No estamos comiendo.",
"Estรก llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidiรณ ganar la carrera.",
"Su respuesta es erronea.",
"ยฟQuรฉ tal si damos un paseo despuรฉs del almuerzo?"
]
processed = [utils_preproc.preprocess_sentence(s) for s in raw]
processed
"""
Explanation: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following:
1. Converts sentence to lower case
2. Adds a space between punctuation and words
3. Replaces tokens that aren't a-z or punctuation with space
4. Adds <start> and <end> tokens
For example:
End of explanation
"""
integerized, tokenizer = utils_preproc.tokenize(processed)
integerized
"""
Explanation: Sentence Integerizing
The utils_preproc.tokenize() method does the following:
Splits each sentence into a token list
Maps each token to an integer
Pads to length of longest sentence
It returns an instance of a Keras Tokenizer
containing the token-integer mapping along with the integerized sentences:
End of explanation
"""
tokenizer.sequences_to_texts(integerized)
"""
Explanation: The outputted tokenizer can be used to get back the actual words
from the integers representing them:
End of explanation
"""
def load_and_preprocess(path, num_examples):
    with open(path, 'r') as fp:
lines = fp.read().strip().split('\n')
sentence_pairs = # TODO 1a
return zip(*sentence_pairs)
en, sp = load_and_preprocess(path_to_file, num_examples=10)
print(en[-1])
print(sp[-1])
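# A possible solution sketch for Exercise 1 (the name `load_and_preprocess_sketch` is
# hypothetical, not part of the lab): each line of the file is "english<TAB>spanish",
# so we split on tabs and preprocess both sides with utils_preproc.preprocess_sentence.
def load_and_preprocess_sketch(path, num_examples):
    with open(path, 'r') as fp:
        lines = fp.read().strip().split('\n')
    sentence_pairs = [
        [utils_preproc.preprocess_sentence(sentence) for sentence in line.split('\t')]
        for line in lines[:num_examples]
    ]
    return zip(*sentence_pairs)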
"""
Explanation: Creating the tf.data.Dataset
load_and_preprocess
Exercise 1
Implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the English
preprocessed sentences, while the second component contains the
Spanish ones:
End of explanation
"""
def load_and_integerize(path, num_examples=None):
targ_lang, inp_lang = load_and_preprocess(path, num_examples)
# TODO 1b
input_tensor, inp_lang_tokenizer = # TODO
target_tensor, targ_lang_tokenizer = # TODO
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
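# The two TODOs above could be filled in along these lines (a sketch, not necessarily
# the official solution):
# input_tensor, inp_lang_tokenizer = utils_preproc.tokenize(inp_lang)
# target_tensor, targ_lang_tokenizer = utils_preproc.tokenize(targ_lang)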
"""
Explanation: load_and_integerize
Exercise 2
Using utils_preproc.tokenize, implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple:
python
(input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer)
where
input_tensor is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
target_tensor is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences
inp_lang_tokenizer is the source language tokenizer
targ_lang_tokenizer is the target language tokenizer
End of explanation
"""
TEST_PROP = 0.2
NUM_EXAMPLES = 30000
"""
Explanation: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set variables for that:
End of explanation
"""
input_tensor, target_tensor, inp_lang, targ_lang = load_and_integerize(
path_to_file, NUM_EXAMPLES)
"""
Explanation: Now let's load and integerize the sentence pairs and store the tokenizers for the source and the target language into the inp_lang and targ_lang variables respectively:
End of explanation
"""
max_length_targ = target_tensor.shape[1]
max_length_inp = input_tensor.shape[1]
"""
Explanation: Let us store the maximal sentence length of both languages into two variables:
End of explanation
"""
splits = train_test_split(
input_tensor, target_tensor, test_size=TEST_PROP, random_state=SEED)
input_tensor_train = splits[0]
input_tensor_val = splits[1]
target_tensor_train = splits[2]
target_tensor_val = splits[3]
"""
Explanation: We are now using scikit-learn train_test_split to create our splits:
End of explanation
"""
(len(input_tensor_train), len(target_tensor_train),
len(input_tensor_val), len(target_tensor_val))
"""
Explanation: Let's make sure the number of examples in each split looks good:
End of explanation
"""
print("Input Language; int to word mapping")
print(input_tensor_train[0])
print(utils_preproc.int2word(inp_lang, input_tensor_train[0]), '\n')
print("Target Language; int to word mapping")
print(target_tensor_train[0])
print(utils_preproc.int2word(targ_lang, target_tensor_train[0]))
"""
Explanation: The utils_preproc.int2word function allows you to transform back the integerized sentences into words. Note that the <start> token is always encoded as 1, while the <end> token is always encoded as 0:
End of explanation
"""
def create_dataset(encoder_input, decoder_input):
# shift ahead by 1
target = tf.roll(decoder_input, -1, 1)
# replace last column with 0s
zeros = tf.zeros([target.shape[0], 1], dtype=tf.int32)
target = tf.concat((target[:, :-1], zeros), axis=-1)
dataset = # TODO
return dataset
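# The TODO above could be filled in along these lines (a sketch): build a dataset of
# ((encoder_input, decoder_input), shifted_target) examples.
# dataset = tf.data.Dataset.from_tensor_slices(((encoder_input, decoder_input), target))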
"""
Explanation: Create tf.data dataset for train and eval
Exercise 3
Implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ)containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples for the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_sentence are the integerized versions of the source-target language pairs, and shifted_target_sentence is the same as target_sentence but with the indices shifted by 1.
Remark: In the training code, source_sentence (resp. target_sentence) will be fed as the encoder (resp. decoder) input, while shifted_target will be used to compute the cross-entropy loss by comparing the decoder output with the shifted target sentences.
End of explanation
"""
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
train_dataset = create_dataset(
input_tensor_train, target_tensor_train).shuffle(
BUFFER_SIZE).repeat().batch(BATCH_SIZE, drop_remainder=True)
eval_dataset = create_dataset(
input_tensor_val, target_tensor_val).batch(
BATCH_SIZE, drop_remainder=True)
"""
Explanation: Let's now create the actual train and eval dataset using the function above:
End of explanation
"""
EMBEDDING_DIM = 256
HIDDEN_UNITS = 1024
INPUT_VOCAB_SIZE = len(inp_lang.word_index) + 1
TARGET_VOCAB_SIZE = len(targ_lang.word_index) + 1
"""
Explanation: Training the RNN encoder-decoder model
We use an encoder-decoder architecture; however, we embed our words into a latent space prior to feeding them into the RNN.
End of explanation
"""
encoder_inputs = Input(shape=(None,), name="encoder_input")
encoder_inputs_embedded = # TODO
encoder_rnn = # TODO
encoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded)
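# One possible way to fill in the TODOs above (a sketch, not necessarily the official
# solution): embed the integerized source sentences, then run them through a GRU that
# returns its final state so it can later initialize the decoder.
encoder_inputs_embedded = Embedding(
    input_dim=INPUT_VOCAB_SIZE, output_dim=EMBEDDING_DIM)(encoder_inputs)
encoder_rnn = GRU(HIDDEN_UNITS, return_sequences=True, return_state=True)
encoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded)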
"""
Explanation: Exercise 4
Implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
End of explanation
"""
decoder_inputs = Input(shape=(None,), name="decoder_input")
decoder_inputs_embedded = # TODO
decoder_rnn = # TODO
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=encoder_state)
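# One possible way to fill in the TODOs above (a sketch): the decoder embeds the target
# sentences and runs its own GRU, seeded with the encoder's final state.
decoder_inputs_embedded = Embedding(
    input_dim=TARGET_VOCAB_SIZE, output_dim=EMBEDDING_DIM)(decoder_inputs)
decoder_rnn = GRU(HIDDEN_UNITS, return_sequences=True, return_state=True)
decoder_outputs, decoder_state = decoder_rnn(
    decoder_inputs_embedded, initial_state=encoder_state)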
"""
Explanation: Exercise 5
Implement the decoder network, which is very similar to the encoder network.
It will
start with an Input layer that will consume the target language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important: The main difference with the encoder, is that the recurrent GRU layer will take as input not only the decoder input embeddings, but also the encoder_state as outputted by the encoder above. This is where the two networks are linked!
The output of the decoder will be the decoder_outputs and the decoder_state.
End of explanation
"""
decoder_dense = Dense(TARGET_VOCAB_SIZE, activation='softmax')
predictions = decoder_dense(decoder_outputs)
"""
Explanation: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output:
End of explanation
"""
model = # TODO
model.compile(# TODO)
model.summary()
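# One possible way to fill in the TODOs above (a sketch): the trainable model maps the
# (encoder, decoder) integer sequences to next-word probability vectors, and
# sparse_categorical_crossentropy compares them against the shifted integer targets.
model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=predictions)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])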
"""
Explanation: Exercise 6
To be able to train the encoder-decoder network defined above, create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to the types of inputs and outputs in our train and eval tf.data.Dataset, since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is sparse_categorical_crossentropy, so that we can compare the true word indices for the target language, as outputted by our train tf.data.Dataset, with the next-word prediction vectors outputted by the decoder:
End of explanation
"""
STEPS_PER_EPOCH = len(input_tensor_train)//BATCH_SIZE
EPOCHS = 1
history = model.fit(
train_dataset,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=eval_dataset,
epochs=EPOCHS
)
"""
Explanation: Let's now train the model!
End of explanation
"""
if LOAD_CHECKPOINT:
encoder_model = load_model(os.path.join(MODEL_PATH, 'encoder_model.h5'))
decoder_model = load_model(os.path.join(MODEL_PATH, 'decoder_model.h5'))
else:
encoder_model = # TODO
decoder_state_input = Input(shape=(HIDDEN_UNITS,), name="decoder_state_input")
# Reuses weights from the decoder_rnn layer
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=decoder_state_input)
# Reuses weights from the decoder_dense layer
predictions = decoder_dense(decoder_outputs)
decoder_model = # TODO
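    # The two TODOs above could be filled in along these lines (a sketch):
    # encoder_model = Model(inputs=encoder_inputs, outputs=encoder_state)
    # decoder_model = Model(inputs=[decoder_inputs, decoder_state_input],
    #                       outputs=[predictions, decoder_state])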
"""
Explanation: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models:
an encoder model with signature encoder_inputs -> encoder_state
a decoder model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state]
This way, we will be able to encode the source language sentence into the vector encoder_state using the encoder and feed it to the decoder model along with the <start> token at step 1.
Given that input, the decoder will produce the first word of the translation, by sampling from the predictions vector (for simplicity, our sampling strategy here will be to take the next word to be the one whose index has the maximum probability in the predictions vector) along with a new state vector, the decoder_state.
At this point, we can feed again to the decoder the predicted first word and as well as the new decoder_state to predict the translation second word.
This process can be continued until the decoder produces the token <end>.
This is how we will implement our translation (or decoding) function, but let us first extract a separate encoder and a separate decoder from our trained encoder-decoder model.
Remark: If we have already trained and saved the models (i.e., LOAD_CHECKPOINT is True), we will just load the models; otherwise, we extract them from the trained network above by explicitly creating the encoder and decoder Keras Models with the signatures we want.
Exercise 7
Create the Keras Model encoder_model with signature encoder_inputs -> encoder_state and the Keras Model decoder_model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state].
End of explanation
"""
def decode_sequences(input_seqs, output_tokenizer, max_decode_length=50):
"""
Arguments:
input_seqs: int tensor of shape (BATCH_SIZE, SEQ_LEN)
output_tokenizer: Tokenizer used to conver from int to words
Returns translated sentences
"""
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seqs)
# Populate the first character of target sequence with the start character.
batch_size = input_seqs.shape[0]
target_seq = tf.ones([batch_size, 1])
decoded_sentences = [[] for _ in range(batch_size)]
for i in range(max_decode_length):
output_tokens, decoder_state = decoder_model.predict(
[target_seq, states_value])
# Sample a token
sampled_token_index = # TODO
tokens = # TODO
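        # The two TODOs above could be filled in along these lines (a sketch): take the
        # most probable token index for each batch element, then map the integers back
        # to words with the output tokenizer.
        # sampled_token_index = np.argmax(output_tokens[:, -1, :], axis=-1)
        # tokens = utils_preproc.int2word(output_tokenizer, sampled_token_index)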
for j in range(batch_size):
decoded_sentences[j].append(tokens[j])
# Update the target sequence (of length 1).
target_seq = tf.expand_dims(tf.constant(sampled_token_index), axis=-1)
# Update states
states_value = decoder_state
return decoded_sentences
"""
Explanation: Exercise 8
Now that we have a separate encoder and a separate decoder, implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
output_tokenizer which is the target language tokenizer we will need to extract back words from the predicted word integers
max_decode_length which is the length after which we stop decoding if the <end> token has not been predicted
Note: Now that the encoder and decoder have been turned into Keras models, to feed them their input, we need to use the .predict method.
End of explanation
"""
sentences = [
"No estamos comiendo.",
"Estรก llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidiรณ ganar la carrera.",
"Su respuesta es erronea.",
"ยฟQuรฉ tal si damos un paseo despuรฉs del almuerzo?"
]
reference_translations = [
"We're not eating.",
"Winter is coming.",
"Winter is coming.",
"Tom ate nothing.",
"His bad leg prevented him from winning the race.",
"Your answer is wrong.",
"How about going for a walk after lunch?"
]
machine_translations = decode_sequences(
utils_preproc.preprocess(sentences, inp_lang),
targ_lang,
max_length_targ
)
for i in range(len(sentences)):
print('-')
print('INPUT:')
print(sentences[i])
print('REFERENCE TRANSLATION:')
print(reference_translations[i])
print('MACHINE TRANSLATION:')
print(machine_translations[i])
"""
Explanation: Now we're ready to predict!
End of explanation
"""
if not LOAD_CHECKPOINT:
os.makedirs(MODEL_PATH, exist_ok=True)
# TODO
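    # The TODO above could be filled in along these lines (a sketch):
    # model.save(os.path.join(MODEL_PATH, 'model.h5'))
    # encoder_model.save(os.path.join(MODEL_PATH, 'encoder_model.h5'))
    # decoder_model.save(os.path.join(MODEL_PATH, 'decoder_model.h5'))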
with open(os.path.join(MODEL_PATH, 'encoder_tokenizer.pkl'), 'wb') as fp:
pickle.dump(inp_lang, fp)
with open(os.path.join(MODEL_PATH, 'decoder_tokenizer.pkl'), 'wb') as fp:
pickle.dump(targ_lang, fp)
"""
Explanation: Checkpoint Model
Exercise 9
Save
* model to disk as the file model.h5
* encoder_model to disk as the file encoder_model.h5
* decoder_model to disk as the file decoder_model.h5
End of explanation
"""
def bleu_1(reference, candidate):
reference = list(filter(lambda x: x != '', reference)) # remove padding
candidate = list(filter(lambda x: x != '', candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (1,), smoothing_function)
def bleu_4(reference, candidate):
reference = list(filter(lambda x: x != '', reference)) # remove padding
candidate = list(filter(lambda x: x != '', candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (.25, .25, .25, .25), smoothing_function)
"""
Explanation: Evaluation Metric (BLEU)
Unlike say, image classification, there is no one right answer for a machine translation. However our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1 to 4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4.
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
The NLTK framework has an implementation that we will use.
We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.
For more info: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
End of explanation
"""
%%time
num_examples = len(input_tensor_val)
bleu_1_total = 0
bleu_4_total = 0
for idx in range(num_examples):
reference_sentence = utils_preproc.int2word(
targ_lang, target_tensor_val[idx][1:])
decoded_sentence = decode_sequences(
input_tensor_val[idx:idx+1], targ_lang, max_length_targ)[0]
bleu_1_total += # TODO
bleu_4_total += # TODO
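    # The two TODOs above could be filled in along these lines (a sketch), using the
    # bleu_1 and bleu_4 helpers defined in the previous cell:
    # bleu_1_total += bleu_1(reference_sentence, decoded_sentence)
    # bleu_4_total += bleu_4(reference_sentence, decoded_sentence)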
print('BLEU 1: {}'.format(bleu_1_total/num_examples))
print('BLEU 4: {}'.format(bleu_4_total/num_examples))
"""
Explanation: Exercise 10
Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait until it completes.
End of explanation
"""
|
mkcor/advanced-pandas
|
notebooks/03_multiindex.ipynb
|
cc0-1.0
|
import pandas as pd
mlo = pd.read_csv('../data/co2-mm-mlo.csv', na_values=-99.99, index_col='Date', parse_dates=True)
mlo.head()
s = mlo['Interpolated']
mlo.assign(smooth=s.rolling(12).mean()).tail()
"""
Explanation: The MultiIndex object
View vs copy
End of explanation
"""
mlo.head()
s2 = mlo.loc[:'1958-05', 'Average']
s2
"""
Explanation: A copy is returned.
End of explanation
"""
s2[:] = 313
s2
mlo.head()
"""
Explanation: A view is returned.
End of explanation
"""
mlo['Average']['1958-03']
mlo['Average']['1958-03'] = 312
"""
Explanation: Hands-on exercise
How could you create a series equal to s2 while preserving the original mlo DataFrame? (Hint: Remember the NumPy lesson.)
Chained indexing
End of explanation
"""
mlo.loc['1958-03', 'Average']
"""
Explanation: Generally speaking, chained indexing is not a good practice. To set a new value, use mlo.loc[row_indexer, col_indexer] because mlo.loc is guaranteed to be mlo itself.
End of explanation
"""
h_index = pd.MultiIndex.from_product([['first', 'second'], ['A', 'B']])
h_index
x = pd.Series(range(4), index=h_index)
x
x['first']
x['first']['B']
"""
Explanation: Hierarchical indexing
End of explanation
"""
x.loc[('first', 'B')]
"""
Explanation: In the above, there are two selection operations.
End of explanation
"""
gl = pd.read_csv('../data/co2-mm-gl.csv', na_values=-99.99, index_col='Date', parse_dates=True)
gl = gl[['Average']]
gl.columns = ['Average_gl']
gl.head()
ml = mlo[['Average']]
ml.columns = ['Average_mlo']
ml.head()
ml = ml[ml.index >= '1980-01']
gl = gl.head()
ml = ml.head()
multi = pd.concat([ml, gl], axis=1).stack()
multi
multi.index
multi.index.get_level_values('Date')
multi.loc[multi.index.get_level_values('Date') < '1980-03']
"""
Explanation: In the above, there is a single selection operation.
We can end up with a hierarchical index when stacking records.
End of explanation
"""
pd.concat([ml, gl], axis=1)
multi
"""
Explanation: Hands-on exercise
Select out all values of the multi series for the Average_mlo variable.
Reshaping
The stack() function compresses a level in the DataFrame's columns to produce a Series (as a reminder, multi = pd.concat([ml, gl], axis=1).stack()).
End of explanation
"""
multi.unstack()
"""
Explanation: The inverse function is unstack(); it is designed to work with a hierarchical index.
End of explanation
"""
rec = pd.concat([ml, gl], axis=1).stack().reset_index()
rec.columns = ['date', 'variable', 'value']
rec
"""
Explanation: Hands-on exercises
Unstack x.
What does x.unstack(0) return?
What is another term for 'unstacking' (which you may have heard in the context of spreadsheets)?
Pivoting
End of explanation
"""
rec
rec[rec.variable == 'Average_mlo']
pivot_table = rec.pivot(index='date', columns='variable', values='value')
pivot_table
"""
Explanation: The above data is in 'stacked' or 'record' format.
End of explanation
"""
pivot_table['Average_gl']
pivot_table.index
"""
Explanation: The pivoted data is more suitable for timeseries analysis.
End of explanation
"""
|
Chipe1/aima-python
|
search4e.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import random
import heapq
import math
import sys
from collections import defaultdict, deque, Counter
from itertools import combinations
class Problem(object):
"""The abstract class for a formal problem. A new domain subclasses this,
    overriding `actions` and `result`, and perhaps other methods.
The default heuristic is 0 and the default action cost is 1 for all states.
    When you create an instance of a subclass, specify `initial` and `goal` states
(or give an `is_goal` method) and perhaps other keyword args for the subclass."""
def __init__(self, initial=None, goal=None, **kwds):
self.__dict__.update(initial=initial, goal=goal, **kwds)
def actions(self, state): raise NotImplementedError
def result(self, state, action): raise NotImplementedError
def is_goal(self, state): return state == self.goal
def action_cost(self, s, a, s1): return 1
def h(self, node): return 0
def __str__(self):
return '{}({!r}, {!r})'.format(
type(self).__name__, self.initial, self.goal)
class Node:
"A Node in a search tree."
def __init__(self, state, parent=None, action=None, path_cost=0):
self.__dict__.update(state=state, parent=parent, action=action, path_cost=path_cost)
def __repr__(self): return '<{}>'.format(self.state)
def __len__(self): return 0 if self.parent is None else (1 + len(self.parent))
def __lt__(self, other): return self.path_cost < other.path_cost
failure = Node('failure', path_cost=math.inf) # Indicates an algorithm couldn't find a solution.
cutoff = Node('cutoff', path_cost=math.inf) # Indicates iterative deepening search was cut off.
def expand(problem, node):
"Expand a node, generating the children nodes."
s = node.state
for action in problem.actions(s):
s1 = problem.result(s, action)
cost = node.path_cost + problem.action_cost(s, action, s1)
yield Node(s1, node, action, cost)
def path_actions(node):
"The sequence of actions to get to this node."
if node.parent is None:
return []
return path_actions(node.parent) + [node.action]
def path_states(node):
"The sequence of states to get to this node."
if node in (cutoff, failure, None):
return []
return path_states(node.parent) + [node.state]
"""
Explanation: Search for AIMA 4th edition
Implementation of search algorithms and search problems for AIMA.
Problems and Nodes
We start by defining the abstract class for a Problem; specific problem domains will subclass this. To make it easier for algorithms that use a heuristic evaluation function, Problem has a default h function (uniformly zero), and subclasses can define their own default h function.
We also define a Node in a search tree, and some functions on nodes: expand to generate successors; path_actions and path_states to recover aspects of the path from the node.
End of explanation
"""
FIFOQueue = deque
LIFOQueue = list
class PriorityQueue:
"""A queue in which the item with minimum f(item) is always popped first."""
def __init__(self, items=(), key=lambda x: x):
self.key = key
self.items = [] # a heap of (score, item) pairs
for item in items:
self.add(item)
def add(self, item):
"""Add item to the queuez."""
pair = (self.key(item), item)
heapq.heappush(self.items, pair)
def pop(self):
"""Pop and return the item with min f(item) value."""
return heapq.heappop(self.items)[1]
def top(self): return self.items[0][1]
def __len__(self): return len(self.items)
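# A quick illustration (not part of the original notebook): items come out in order of
# increasing key value, regardless of insertion order.
demo_queue = PriorityQueue([3, 1, 2], key=lambda x: x)
print([demo_queue.pop() for _ in range(len(demo_queue))])   # [1, 2, 3]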
"""
Explanation: Queues
First-in-first-out and Last-in-first-out queues, and a PriorityQueue, which allows you to keep a collection of items, and continually remove from it the item with minimum f(item) score.
End of explanation
"""
def best_first_search(problem, f):
"Search nodes with minimum f(node) value first."
node = Node(problem.initial)
frontier = PriorityQueue([node], key=f)
reached = {problem.initial: node}
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
for child in expand(problem, node):
s = child.state
if s not in reached or child.path_cost < reached[s].path_cost:
reached[s] = child
frontier.add(child)
return failure
def best_first_tree_search(problem, f):
"A version of best_first_search without the `reached` table."
frontier = PriorityQueue([Node(problem.initial)], key=f)
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
for child in expand(problem, node):
if not is_cycle(child):
frontier.add(child)
return failure
def g(n): return n.path_cost
def astar_search(problem, h=None):
"""Search nodes with minimum f(n) = g(n) + h(n)."""
h = h or problem.h
return best_first_search(problem, f=lambda n: g(n) + h(n))
def astar_tree_search(problem, h=None):
"""Search nodes with minimum f(n) = g(n) + h(n), with no `reached` table."""
h = h or problem.h
return best_first_tree_search(problem, f=lambda n: g(n) + h(n))
def weighted_astar_search(problem, h=None, weight=1.4):
"""Search nodes with minimum f(n) = g(n) + weight * h(n)."""
h = h or problem.h
return best_first_search(problem, f=lambda n: g(n) + weight * h(n))
def greedy_bfs(problem, h=None):
"""Search nodes with minimum h(n)."""
h = h or problem.h
return best_first_search(problem, f=h)
def uniform_cost_search(problem):
"Search nodes with minimum path cost first."
return best_first_search(problem, f=g)
def breadth_first_bfs(problem):
"Search shallowest nodes in the search tree first; using best-first."
return best_first_search(problem, f=len)
def depth_first_bfs(problem):
"Search deepest nodes in the search tree first; using best-first."
return best_first_search(problem, f=lambda n: -len(n))
def is_cycle(node, k=30):
"Does this node form a cycle of length k or less?"
def find_cycle(ancestor, k):
return (ancestor is not None and k > 0 and
(ancestor.state == node.state or find_cycle(ancestor.parent, k - 1)))
return find_cycle(node.parent, k)
"""
Explanation: Search Algorithms: Best-First
Best-first search with various f(n) functions gives us different search algorithms. Note that A*, weighted A* and greedy search can be given a heuristic function, h, but if h is not supplied they use the problem's default h function (if the problem does not define one, it is taken as h(n) = 0).
End of explanation
"""
def breadth_first_search(problem):
"Search shallowest nodes in the search tree first."
node = Node(problem.initial)
if problem.is_goal(problem.initial):
return node
frontier = FIFOQueue([node])
reached = {problem.initial}
while frontier:
node = frontier.pop()
for child in expand(problem, node):
s = child.state
if problem.is_goal(s):
return child
if s not in reached:
reached.add(s)
frontier.appendleft(child)
return failure
def iterative_deepening_search(problem):
"Do depth-limited search with increasing depth limits."
for limit in range(1, sys.maxsize):
result = depth_limited_search(problem, limit)
if result != cutoff:
return result
def depth_limited_search(problem, limit=10):
"Search deepest nodes in the search tree first."
frontier = LIFOQueue([Node(problem.initial)])
result = failure
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
elif len(node) >= limit:
result = cutoff
elif not is_cycle(node):
for child in expand(problem, node):
frontier.append(child)
return result
def depth_first_recursive_search(problem, node=None):
if node is None:
node = Node(problem.initial)
if problem.is_goal(node.state):
return node
elif is_cycle(node):
return failure
else:
for child in expand(problem, node):
result = depth_first_recursive_search(problem, child)
if result:
return result
return failure
path_states(depth_first_recursive_search(r2))
"""
Explanation: Other Search Algorithms
Here are the other search algorithms:
End of explanation
"""
def bidirectional_best_first_search(problem_f, f_f, problem_b, f_b, terminated):
node_f = Node(problem_f.initial)
node_b = Node(problem_f.goal)
frontier_f, reached_f = PriorityQueue([node_f], key=f_f), {node_f.state: node_f}
frontier_b, reached_b = PriorityQueue([node_b], key=f_b), {node_b.state: node_b}
solution = failure
while frontier_f and frontier_b and not terminated(solution, frontier_f, frontier_b):
def S1(node, f):
return str(int(f(node))) + ' ' + str(path_states(node))
print('Bi:', S1(frontier_f.top(), f_f), S1(frontier_b.top(), f_b))
if f_f(frontier_f.top()) < f_b(frontier_b.top()):
solution = proceed('f', problem_f, frontier_f, reached_f, reached_b, solution)
else:
solution = proceed('b', problem_b, frontier_b, reached_b, reached_f, solution)
return solution
def inverse_problem(problem):
if isinstance(problem, CountCalls):
return CountCalls(inverse_problem(problem._object))
else:
inv = copy.copy(problem)
inv.initial, inv.goal = inv.goal, inv.initial
return inv
def bidirectional_uniform_cost_search(problem_f):
def terminated(solution, frontier_f, frontier_b):
n_f, n_b = frontier_f.top(), frontier_b.top()
return g(n_f) + g(n_b) > g(solution)
return bidirectional_best_first_search(problem_f, g, inverse_problem(problem_f), g, terminated)
def bidirectional_astar_search(problem_f):
def terminated(solution, frontier_f, frontier_b):
nf, nb = frontier_f.top(), frontier_b.top()
return g(nf) + g(nb) > g(solution)
    problem_b = inverse_problem(problem_f)
return bidirectional_best_first_search(problem_f, lambda n: g(n) + problem_f.h(n),
problem_b, lambda n: g(n) + problem_b.h(n),
terminated)
def proceed(direction, problem, frontier, reached, reached2, solution):
node = frontier.pop()
for child in expand(problem, node):
s = child.state
print('proceed', direction, S(child))
if s not in reached or child.path_cost < reached[s].path_cost:
frontier.add(child)
reached[s] = child
if s in reached2: # Frontiers collide; solution found
solution2 = (join_nodes(child, reached2[s]) if direction == 'f' else
join_nodes(reached2[s], child))
#print('solution', path_states(solution2), solution2.path_cost,
# path_states(child), path_states(reached2[s]))
if solution2.path_cost < solution.path_cost:
solution = solution2
return solution
S = path_states
#A-S-R + B-P-R => A-S-R-P + B-P
def join_nodes(nf, nb):
"""Join the reverse of the backward node nb to the forward node nf."""
#print('join', S(nf), S(nb))
join = nf
while nb.parent is not None:
cost = join.path_cost + nb.path_cost - nb.parent.path_cost
join = Node(nb.parent.state, join, nb.action, cost)
nb = nb.parent
#print(' now join', S(join), 'with nb', S(nb), 'parent', S(nb.parent))
return join
#A , B = uniform_cost_search(r1), uniform_cost_search(r2)
#path_states(A), path_states(B)
#path_states(append_nodes(A, B))
"""
Explanation: Bidirectional Best-First Search
End of explanation
"""
class RouteProblem(Problem):
"""A problem to find a route between locations on a `Map`.
Create a problem with RouteProblem(start, goal, map=Map(...)}).
States are the vertexes in the Map graph; actions are destination states."""
def actions(self, state):
"""The places neighboring `state`."""
return self.map.neighbors[state]
def result(self, state, action):
"""Go to the `action` place, if the map says that is possible."""
return action if action in self.map.neighbors[state] else state
def action_cost(self, s, action, s1):
"""The distance (cost) to go from s to s1."""
return self.map.distances[s, s1]
def h(self, node):
"Straight-line distance between state and the goal."
locs = self.map.locations
return straight_line_distance(locs[node.state], locs[self.goal])
def straight_line_distance(A, B):
"Straight-line distance between two points."
return sum(abs(a - b)**2 for (a, b) in zip(A, B)) ** 0.5
class Map:
"""A map of places in a 2D world: a graph with vertexes and links between them.
In `Map(links, locations)`, `links` can be either [(v1, v2)...] pairs,
or a {(v1, v2): distance...} dict. Optional `locations` can be {v1: (x, y)}
If `directed=False` then for every (v1, v2) link, we add a (v2, v1) link."""
def __init__(self, links, locations=None, directed=False):
if not hasattr(links, 'items'): # Distances are 1 by default
links = {link: 1 for link in links}
if not directed:
for (v1, v2) in list(links):
links[v2, v1] = links[v1, v2]
self.distances = links
self.neighbors = multimap(links)
self.locations = locations or defaultdict(lambda: (0, 0))
def multimap(pairs) -> dict:
"Given (key, val) pairs, make a dict of {key: [val,...]}."
result = defaultdict(list)
for key, val in pairs:
result[key].append(val)
return result
# Some specific RouteProblems
romania = Map(
{('O', 'Z'): 71, ('O', 'S'): 151, ('A', 'Z'): 75, ('A', 'S'): 140, ('A', 'T'): 118,
('L', 'T'): 111, ('L', 'M'): 70, ('D', 'M'): 75, ('C', 'D'): 120, ('C', 'R'): 146,
('C', 'P'): 138, ('R', 'S'): 80, ('F', 'S'): 99, ('B', 'F'): 211, ('B', 'P'): 101,
('B', 'G'): 90, ('B', 'U'): 85, ('H', 'U'): 98, ('E', 'H'): 86, ('U', 'V'): 142,
('I', 'V'): 92, ('I', 'N'): 87, ('P', 'R'): 97},
{'A': ( 76, 497), 'B': (400, 327), 'C': (246, 285), 'D': (160, 296), 'E': (558, 294),
'F': (285, 460), 'G': (368, 257), 'H': (548, 355), 'I': (488, 535), 'L': (162, 379),
'M': (160, 343), 'N': (407, 561), 'O': (117, 580), 'P': (311, 372), 'R': (227, 412),
'S': (187, 463), 'T': ( 83, 414), 'U': (471, 363), 'V': (535, 473), 'Z': (92, 539)})
r0 = RouteProblem('A', 'A', map=romania)
r1 = RouteProblem('A', 'B', map=romania)
r2 = RouteProblem('N', 'L', map=romania)
r3 = RouteProblem('E', 'T', map=romania)
r4 = RouteProblem('O', 'M', map=romania)
path_states(uniform_cost_search(r1)) # Lowest-cost path from Arad to Bucharest
path_states(breadth_first_search(r1)) # Breadth-first: fewer steps, higher path cost
"""
Explanation: TODO: RBFS
Problem Domains
Now we turn our attention to defining some problem domains as subclasses of Problem.
Route Finding Problems
In a RouteProblem, the states are names of "cities" (or other locations), like 'A' for Arad. The actions are also city names; 'Z' is the action to move to city 'Z'. The layout of cities is given by a separate data structure, a Map, which is a graph where there are vertexes (cities), links between vertexes, distances (costs) of those links (if not specified, the default is 1 for every link), and optionally the 2D (x, y) location of each city can be specified. A RouteProblem takes this Map as input and allows actions to move between linked cities. The default heuristic is straight-line distance to the goal, or is uniformly zero if locations were not given.
End of explanation
"""
class GridProblem(Problem):
"""Finding a path on a 2D grid with obstacles. Obstacles are (x, y) cells."""
def __init__(self, initial=(15, 30), goal=(130, 30), obstacles=(), **kwds):
Problem.__init__(self, initial=initial, goal=goal,
obstacles=set(obstacles) - {initial, goal}, **kwds)
directions = [(-1, -1), (0, -1), (1, -1),
(-1, 0), (1, 0),
(-1, +1), (0, +1), (1, +1)]
def action_cost(self, s, action, s1): return straight_line_distance(s, s1)
def h(self, node): return straight_line_distance(node.state, self.goal)
def result(self, state, action):
"Both states and actions are represented by (x, y) pairs."
return action if action not in self.obstacles else state
def actions(self, state):
"""You can move one cell in any of `directions` to a non-obstacle cell."""
x, y = state
return {(x + dx, y + dy) for (dx, dy) in self.directions} - self.obstacles
class ErraticVacuum(Problem):
def actions(self, state):
return ['suck', 'forward', 'backward']
def results(self, state, action): return self.table[action][state]
table = dict(suck= {1:{5,7}, 2:{4,8}, 3:{7}, 4:{2,4}, 5:{1,5}, 6:{8}, 7:{3,7}, 8:{6,8}},
forward= {1:{2}, 2:{2}, 3:{4}, 4:{4}, 5:{6}, 6:{6}, 7:{8}, 8:{8}},
backward={1:{1}, 2:{1}, 3:{3}, 4:{3}, 5:{5}, 6:{5}, 7:{7}, 8:{7}})
# Some grid routing problems
# The following can be used to create obstacles:
def random_lines(X=range(15, 130), Y=range(60), N=150, lengths=range(6, 12)):
"""The set of cells in N random lines of the given lengths."""
result = set()
for _ in range(N):
x, y = random.choice(X), random.choice(Y)
dx, dy = random.choice(((0, 1), (1, 0)))
result |= line(x, y, dx, dy, random.choice(lengths))
return result
def line(x, y, dx, dy, length):
"""A line of `length` cells starting at (x, y) and going in (dx, dy) direction."""
return {(x + i * dx, y + i * dy) for i in range(length)}
random.seed(42) # To make this reproducible
frame = line(-10, 20, 0, 1, 20) | line(150, 20, 0, 1, 20)
cup = line(102, 44, -1, 0, 15) | line(102, 20, -1, 0, 20) | line(102, 44, 0, -1, 24)
d1 = GridProblem(obstacles=random_lines(N=100) | frame)
d2 = GridProblem(obstacles=random_lines(N=150) | frame)
d3 = GridProblem(obstacles=random_lines(N=200) | frame)
d4 = GridProblem(obstacles=random_lines(N=250) | frame)
d5 = GridProblem(obstacles=random_lines(N=300) | frame)
d6 = GridProblem(obstacles=cup | frame)
d7 = GridProblem(obstacles=cup | frame | line(50, 35, 0, -1, 10) | line(60, 37, 0, -1, 17) | line(70, 31, 0, -1, 19))
"""
Explanation: Grid Problems
A GridProblem involves navigating on a 2D grid, with some cells being impassible obstacles. By default you can move to any of the eight neighboring cells that are not obstacles (but in a problem instance you can supply a directions= keyword to change that). Again, the default heuristic is straight-line distance to the goal. States are (x, y) cell locations, such as (4, 2), and actions are (dx, dy) cell movements, such as (0, -1), which means leave the x coordinate alone, and decrement the y coordinate by 1.
End of explanation
"""
class EightPuzzle(Problem):
""" The problem of sliding tiles numbered from 1 to 8 on a 3x3 board,
where one of the squares is a blank, trying to reach a goal configuration.
A board state is represented as a tuple of length 9, where the element at index i
    represents the tile number at index i, or 0 for the empty square, e.g. the board:
1 2 3
4 5 6 ==> (1, 2, 3, 4, 5, 6, 7, 8, 0)
7 8 _
"""
def __init__(self, initial, goal=(0, 1, 2, 3, 4, 5, 6, 7, 8)):
assert inversions(initial) % 2 == inversions(goal) % 2 # Parity check
self.initial, self.goal = initial, goal
def actions(self, state):
"""The indexes of the squares that the blank can move to."""
moves = ((1, 3), (0, 2, 4), (1, 5),
(0, 4, 6), (1, 3, 5, 7), (2, 4, 8),
(3, 7), (4, 6, 8), (7, 5))
blank = state.index(0)
return moves[blank]
def result(self, state, action):
"""Swap the blank with the square numbered `action`."""
s = list(state)
blank = state.index(0)
s[action], s[blank] = s[blank], s[action]
return tuple(s)
def h1(self, node):
"""The misplaced tiles heuristic."""
return hamming_distance(node.state, self.goal)
def h2(self, node):
"""The Manhattan heuristic."""
X = (0, 1, 2, 0, 1, 2, 0, 1, 2)
Y = (0, 0, 0, 1, 1, 1, 2, 2, 2)
return sum(abs(X[s] - X[g]) + abs(Y[s] - Y[g])
for (s, g) in zip(node.state, self.goal) if s != 0)
    def h(self, node): return self.h2(node)
def hamming_distance(A, B):
"Number of positions where vectors A and B are different."
return sum(a != b for a, b in zip(A, B))
def inversions(board):
"The number of times a piece is a smaller number than a following piece."
return sum((a > b and a != 0 and b != 0) for (a, b) in combinations(board, 2))
def board8(board, fmt=(3 * '{} {} {}\n')):
"A string representing an 8-puzzle board"
return fmt.format(*board).replace('0', '_')
class Board(defaultdict):
empty = '.'
off = '#'
def __init__(self, board=None, width=8, height=8, to_move=None, **kwds):
if board is not None:
self.update(board)
self.width, self.height = (board.width, board.height)
else:
self.width, self.height = (width, height)
self.to_move = to_move
def __missing__(self, key):
x, y = key
if x < 0 or x >= self.width or y < 0 or y >= self.height:
return self.off
else:
return self.empty
def __repr__(self):
def row(y): return ' '.join(self[x, y] for x in range(self.width))
return '\n'.join(row(y) for y in range(self.height))
def __hash__(self):
return hash(tuple(sorted(self.items()))) + hash(self.to_move)
# Some specific EightPuzzle problems
e1 = EightPuzzle((1, 4, 2, 0, 7, 5, 3, 6, 8))
e2 = EightPuzzle((1, 2, 3, 4, 5, 6, 7, 8, 0))
e3 = EightPuzzle((4, 0, 2, 5, 1, 3, 7, 8, 6))
e4 = EightPuzzle((7, 2, 4, 5, 0, 6, 8, 3, 1))
e5 = EightPuzzle((8, 6, 7, 2, 5, 4, 3, 0, 1))
# Solve an 8 puzzle problem and print out each state
for s in path_states(astar_search(e1)):
print(board8(s))
"""
Explanation: 8 Puzzle Problems
A sliding tile puzzle where you can swap the blank with an adjacent piece, trying to reach a goal configuration. The cells are numbered 0 to 8, starting at the top left and going row by row left to right. The pieces are numbered 1 to 8, with 0 representing the blank. An action is the cell index number that is to be swapped with the blank (not the actual number to be swapped but the index into the state). So the diagram above left is the state (5, 2, 7, 8, 4, 0, 1, 3, 6), and the action is 8, because the cell number 8 (the 9th or last cell, the 6 in the bottom right) is swapped with the blank.
There are two disjoint sets of states that cannot be reached from each other. One set has an even number of "inversions"; the other has an odd number. An inversion is when a piece in the state is larger than a piece that follows it.
End of explanation
"""
class PourProblem(Problem):
"""Problem about pouring water between jugs to achieve some water level.
    Each state is a tuple of water levels. In the initialization, also provide a tuple of
jug sizes, e.g. PourProblem(initial=(0, 0), goal=4, sizes=(5, 3)),
which means two jugs of sizes 5 and 3, initially both empty, with the goal
of getting a level of 4 in either jug."""
def actions(self, state):
"""The actions executable in this state."""
jugs = range(len(state))
return ([('Fill', i) for i in jugs if state[i] < self.sizes[i]] +
[('Dump', i) for i in jugs if state[i]] +
[('Pour', i, j) for i in jugs if state[i] for j in jugs if i != j])
def result(self, state, action):
"""The state that results from executing this action in this state."""
result = list(state)
act, i, *_ = action
if act == 'Fill': # Fill i to capacity
result[i] = self.sizes[i]
elif act == 'Dump': # Empty i
result[i] = 0
elif act == 'Pour': # Pour from i into j
j = action[2]
amount = min(state[i], self.sizes[j] - state[j])
result[i] -= amount
result[j] += amount
return tuple(result)
def is_goal(self, state):
"""True if the goal level is in any one of the jugs."""
return self.goal in state
"""
Explanation: Water Pouring Problems
In a water pouring problem you are given a collection of jugs, each of which has a size (capacity) in, say, litres, and a current level of water (in litres). The goal is to measure out a certain level of water; it can appear in any of the jugs. For example, in the movie Die Hard 3, the heroes were faced with the task of making exactly 4 gallons from jugs of size 5 gallons and 3 gallons. A state is represented by a tuple of current water levels, and the available actions are:
- (Fill, i): fill the ith jug all the way to the top (from a tap with unlimited water).
- (Dump, i): dump all the water out of the ith jug.
- (Pour, i, j): pour water from the ith jug into the jth jug until either the jug i is empty, or jug j is full, whichever comes first.
End of explanation
"""
class GreenPourProblem(PourProblem):
"""A PourProblem in which the cost is the amount of water used."""
def action_cost(self, s, action, s1):
"The cost is the amount of water used."
act, i, *_ = action
return self.sizes[i] - s[i] if act == 'Fill' else 0
# Some specific PourProblems
p1 = PourProblem((1, 1, 1), 13, sizes=(2, 16, 32))
p2 = PourProblem((0, 0, 0), 21, sizes=(8, 11, 31))
p3 = PourProblem((0, 0), 8, sizes=(7,9))
p4 = PourProblem((0, 0, 0), 21, sizes=(8, 11, 31))
p5 = PourProblem((0, 0), 4, sizes=(3, 5))
g1 = GreenPourProblem((1, 1, 1), 13, sizes=(2, 16, 32))
g2 = GreenPourProblem((0, 0, 0), 21, sizes=(8, 11, 31))
g3 = GreenPourProblem((0, 0), 8, sizes=(7,9))
g4 = GreenPourProblem((0, 0, 0), 21, sizes=(8, 11, 31))
g5 = GreenPourProblem((0, 0), 4, sizes=(3, 5))
# Solve the PourProblem of getting 13 in some jug, and show the actions and states
soln = breadth_first_search(p1)
path_actions(soln), path_states(soln)
"""
Explanation: In a GreenPourProblem, the states and actions are the same, but instead of all actions costing 1, in these problems the cost of an action is the amount of water that flows from the tap. (There is an issue that non-Fill actions have 0 cost, which in general can lead to indefinitely long solutions, but in this problem there is a finite number of states, so we're ok.)
End of explanation
"""
class PancakeProblem(Problem):
"""A PancakeProblem the goal is always `tuple(range(1, n+1))`, where the
initial state is a permutation of `range(1, n+1)`. An act is the index `i`
of the top `i` pancakes that will be flipped."""
def __init__(self, initial):
self.initial, self.goal = tuple(initial), tuple(sorted(initial))
def actions(self, state): return range(2, len(state) + 1)
def result(self, state, i): return state[:i][::-1] + state[i:]
def h(self, node):
"The gap heuristic."
s = node.state
return sum(abs(s[i] - s[i - 1]) > 1 for i in range(1, len(s)))
c0 = PancakeProblem((2, 1, 4, 6, 3, 5))
c1 = PancakeProblem((4, 6, 2, 5, 1, 3))
c2 = PancakeProblem((1, 3, 7, 5, 2, 6, 4))
c3 = PancakeProblem((1, 7, 2, 6, 3, 5, 4))
c4 = PancakeProblem((1, 3, 5, 7, 9, 2, 4, 6, 8))
# Solve a pancake problem
path_states(astar_search(c0))
"""
Explanation: Pancake Sorting Problems
Given a stack of pancakes of various sizes, can you sort them into a stack of decreasing sizes, largest on bottom to smallest on top? You have a spatula with which you can flip the top i pancakes. This is shown below for i = 3; on the top the spatula grabs the first three pancakes; on the bottom we see them flipped:
How many flips will it take to get the whole stack sorted? This is an interesting problem that Bill Gates has written about. A reasonable heuristic for this problem is the gap heuristic: if we look at neighboring pancakes, if, say, the 2nd smallest is next to the 3rd smallest, that's good; they should stay next to each other. But if the 2nd smallest is next to the 4th smallest, that's bad: we will require at least one move to separate them and insert the 3rd smallest between them. The gap heuristic counts the number of neighbors that have a gap like this. In our specification of the problem, pancakes are ranked by size: the smallest is 1, the 2nd smallest 2, and so on, and the representation of a state is a tuple of these rankings, from the top to the bottom pancake. Thus the goal state is always (1, 2, ...,n) and the initial (top) state in the diagram above is (2, 1, 4, 6, 3, 5).
End of explanation
"""
class JumpingPuzzle(Problem):
"""Try to exchange L and R by moving one ahead or hopping two ahead."""
def __init__(self, N=2):
self.initial = N*'L' + '.' + N*'R'
self.goal = self.initial[::-1]
def actions(self, state):
"""Find all possible move or hop moves."""
idxs = range(len(state))
return ({(i, i + 1) for i in idxs if state[i:i+2] == 'L.'} # Slide
|{(i, i + 2) for i in idxs if state[i:i+3] == 'LR.'} # Hop
|{(i + 1, i) for i in idxs if state[i:i+2] == '.R'} # Slide
|{(i + 2, i) for i in idxs if state[i:i+3] == '.LR'}) # Hop
def result(self, state, action):
"""An action (i, j) means swap the pieces at positions i and j."""
i, j = action
result = list(state)
result[i], result[j] = state[j], state[i]
return ''.join(result)
def h(self, node): return hamming_distance(node.state, self.goal)
JumpingPuzzle(N=2).actions('LL.RR')
j3 = JumpingPuzzle(N=3)
j9 = JumpingPuzzle(N=9)
path_states(astar_search(j3))
"""
Explanation: Jumping Frogs Puzzle
In this puzzle (which also can be played as a two-player game), the initial state is a line of squares, with N pieces of one kind on the left, then one empty square, then N pieces of another kind on the right. The diagram below uses 2 blue toads and 2 red frogs; we will represent this as the string 'LL.RR'. The goal is to swap the pieces, arriving at 'RR.LL'. An 'L' piece moves left-to-right, either sliding one space ahead to an empty space, or two spaces ahead if that space is empty and if there is an 'R' in between to hop over. The 'R' pieces move right-to-left analogously. An action will be an (i, j) pair meaning to swap the pieces at those indexes. The set of actions for the N = 2 position below is {(1, 2), (3, 2)}, meaning either the blue toad in position 1 or the red frog in position 3 can swap places with the blank in position 2.
End of explanation
"""
class CountCalls:
"""Delegate all attribute gets to the object, and count them in ._counts"""
def __init__(self, obj):
self._object = obj
self._counts = Counter()
def __getattr__(self, attr):
"Delegate to the original object, after incrementing a counter."
self._counts[attr] += 1
return getattr(self._object, attr)
def report(searchers, problems, verbose=True):
"""Show summary statistics for each searcher (and on each problem unless verbose is false)."""
for searcher in searchers:
print(searcher.__name__ + ':')
total_counts = Counter()
for p in problems:
prob = CountCalls(p)
soln = searcher(prob)
counts = prob._counts;
counts.update(actions=len(soln), cost=soln.path_cost)
total_counts += counts
if verbose: report_counts(counts, str(p)[:40])
report_counts(total_counts, 'TOTAL\n')
def report_counts(counts, name):
"""Print one line of the counts report."""
print('{:9,d} nodes |{:9,d} goal |{:5.0f} cost |{:8,d} actions | {}'.format(
counts['result'], counts['is_goal'], counts['cost'], counts['actions'], name))
"""
Explanation: Reporting Summary Statistics on Search Algorithms
Now let's gather some metrics on how well each algorithm does. We'll use CountCalls to wrap a Problem object in such a way that calls to its methods are delegated to the original problem, but each call increments a counter. Once we've solved the problem, we print out summary statistics.
End of explanation
"""
report([uniform_cost_search], [p1, p2, p3, p4, p5])
report((uniform_cost_search, breadth_first_search),
(p1, g1, p2, g2, p3, g3, p4, g4, p4, g4, c1, c2, c3))
"""
Explanation: Here's a tiny report for uniform-cost search on the jug pouring problems:
End of explanation
"""
def astar_misplaced_tiles(problem): return astar_search(problem, h=problem.h1)
report([breadth_first_search, astar_misplaced_tiles, astar_search],
[e1, e2, e3, e4, e5])
"""
Explanation: Comparing heuristics
First, let's look at the eight puzzle problems and compare three approaches: the Manhattan heuristic, the less informative misplaced-tiles heuristic, and uninformed (i.e. h = 0) breadth-first search:
End of explanation
"""
report([astar_search, uniform_cost_search], [c1, c2, c3, c4])
"""
Explanation: We see that all three algorithms get cost-optimal solutions, but the better the heuristic, the fewer nodes explored.
Compared to the uninformed search, the misplaced tiles heuristic explores about 1/4 the number of nodes, and the Manhattan heuristic needs just 2%.
Next, we can show the value of the gap heuristic for pancake sorting problems:
End of explanation
"""
report([astar_search, astar_tree_search], [e1, e2, e3, e4, r1, r2, r3, r4])
"""
Explanation: We need to explore 300 times more nodes without the heuristic.
Comparing graph search and tree search
Keeping the reached table in best_first_search allows us to do a graph search, where we notice when we reach a state by two different paths, rather than a tree search, where we have duplicated effort. The reached table consumes space and also saves time. How much time? In part it depends on how good the heuristics are at focusing the search. Below we show that on some pancake and eight puzzle problems, the tree search expands roughly twice as many nodes (and thus takes roughly twice as much time):
End of explanation
"""
def extra_weighted_astar_search(problem): return weighted_astar_search(problem, weight=2)
report((greedy_bfs, extra_weighted_astar_search, weighted_astar_search, astar_search, uniform_cost_search),
(r0, r1, r2, r3, r4, e1, d1, d2, j9, e2, d3, d4, d6, d7, e3, e4))
"""
Explanation: Comparing different weighted search values
Below we report on problems using these four algorithms:
|Algorithm|f|Optimality|
|:---------|---:|:----------:|
|Greedy best-first search | f = h|nonoptimal|
|Extra weighted A* search | f = g + 2 × h|nonoptimal|
|Weighted A* search | f = g + 1.4 × h|nonoptimal|
|A* search | f = g + h|optimal|
|Uniform-cost search | f = g |optimal|
We will see that greedy best-first search (which ranks nodes solely by the heuristic) explores the fewest number of nodes, but has the highest path costs. Weighted A* search explores twice as many nodes (on this problem set) but gets 10% better path costs. A* is optimal, but explores more nodes, and uniform-cost is also optimal, but explores an order of magnitude more nodes.
End of explanation
"""
report((astar_search, uniform_cost_search, breadth_first_search, breadth_first_bfs,
iterative_deepening_search, depth_limited_search, greedy_bfs,
weighted_astar_search, extra_weighted_astar_search),
(p1, g1, p2, g2, p3, g3, p4, g4, r0, r1, r2, r3, r4, e1))
"""
Explanation: We see that greedy search expands the fewest nodes, but has the highest path costs. In contrast, A* gets optimal path costs, but expands 4 or 5 times more nodes. Weighted A* is a good compromise, using half the compute time as A*, and achieving path costs within 1% or 2% of optimal. Uniform-cost is optimal, but is an order of magnitude slower than A*.
Comparing many search algorithms
Finally, we compare a host of algorithms (even the slow ones) on some of the easier problems:
End of explanation
"""
def best_first_search(problem, f):
"Search nodes with minimum f(node) value first."
global reached # <<<<<<<<<<< Only change here
node = Node(problem.initial)
frontier = PriorityQueue([node], key=f)
reached = {problem.initial: node}
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
for child in expand(problem, node):
s = child.state
if s not in reached or child.path_cost < reached[s].path_cost:
reached[s] = child
frontier.add(child)
return failure
def plot_grid_problem(grid, solution, reached=(), title='Search', show=True):
"Use matplotlib to plot the grid, obstacles, solution, and reached."
reached = list(reached)
plt.figure(figsize=(16, 10))
plt.axis('off'); plt.axis('equal')
plt.scatter(*transpose(grid.obstacles), marker='s', color='darkgrey')
plt.scatter(*transpose(reached), 1**2, marker='.', c='blue')
plt.scatter(*transpose(path_states(solution)), marker='s', c='blue')
plt.scatter(*transpose([grid.initial]), 9**2, marker='D', c='green')
plt.scatter(*transpose([grid.goal]), 9**2, marker='8', c='red')
if show: plt.show()
print('{} {} search: {:.1f} path cost, {:,d} states reached'
.format(' ' * 10, title, solution.path_cost, len(reached)))
def plots(grid, weights=(1.4, 2)):
"""Plot the results of 4 heuristic search algorithms for this grid."""
solution = astar_search(grid)
plot_grid_problem(grid, solution, reached, 'A* search')
for weight in weights:
solution = weighted_astar_search(grid, weight=weight)
plot_grid_problem(grid, solution, reached, '(b) Weighted ({}) A* search'.format(weight))
solution = greedy_bfs(grid)
plot_grid_problem(grid, solution, reached, 'Greedy best-first search')
def transpose(matrix): return list(zip(*matrix))
plots(d3)
plots(d4)
"""
Explanation: This confirms some of the things we already knew: A* and uniform-cost search are optimal, but the others are not. A* explores fewer nodes than uniform-cost.
Visualizing Reached States
I would like to draw a picture of the state space, marking the states that have been reached by the search.
Unfortunately, the reached variable is inaccessible inside best_first_search, so I will define a new version of best_first_search that is identical except that it declares reached to be global. I can then define plot_grid_problem to plot the obstacles of a GridProblem, along with the initial and goal states, the solution path, and the states reached during a search.
End of explanation
"""
plots(d6)
"""
Explanation: The cost of weighted A* search
Now I want to try a much simpler grid problem, d6, with only a few obstacles. We see that A* finds the optimal path, skirting below the obstacles. Weighted A* with a weight of 1.4 finds the same optimal path while exploring only 1/3 the number of states. But weighted A* with weight 2 takes the slightly longer path above the obstacles, because that path allowed it to stay closer to the goal in straight-line distance, which it over-weights. And greedy best-first search has a bad showing, not deviating from its path towards the goal until it is almost inside the cup made by the obstacles.
End of explanation
"""
plots(d7)
"""
Explanation: In the next problem, d7, we see a similar story: the optimal path is found by A*, and weighted A* with weight 1.4 again does great, while with weight 2 it erroneously goes below the first two barriers and then makes another mistake by reversing direction back towards the goal and passing above the third barrier. Again, greedy best-first search makes bad decisions all around.
End of explanation
"""
def and_or_search(problem):
"Find a plan for a problem that has nondterministic actions."
return or_search(problem, problem.initial, [])
def or_search(problem, state, path):
"Find a sequence of actions to reach goal from state, without repeating states on path."
if problem.is_goal(state): return []
if state in path: return failure # check for loops
for action in problem.actions(state):
plan = and_search(problem, problem.results(state, action), [state] + path)
if plan != failure:
return [action] + plan
return failure
def and_search(problem, states, path):
"Plan for each of the possible states we might end up in."
if len(states) == 1:
return or_search(problem, next(iter(states)), path)
plan = {}
for s in states:
plan[s] = or_search(problem, s, path)
if plan[s] == failure: return failure
return [plan]
class MultiGoalProblem(Problem):
"""A version of `Problem` with a colllection of `goals` instead of one `goal`."""
def __init__(self, initial=None, goals=(), **kwds):
self.__dict__.update(initial=initial, goals=goals, **kwds)
def is_goal(self, state): return state in self.goals
class ErraticVacuum(MultiGoalProblem):
"""In this 2-location vacuum problem, the suck action in a dirty square will either clean up that square,
or clean up both squares. A suck action in a clean square will either do nothing, or
will deposit dirt in that square. Forward and backward actions are deterministic."""
def actions(self, state):
return ['suck', 'forward', 'backward']
def results(self, state, action): return self.table[action][state]
table = {'suck':{1:{5,7}, 2:{4,8}, 3:{7}, 4:{2,4}, 5:{1,5}, 6:{8}, 7:{3,7}, 8:{6,8}},
'forward': {1:{2}, 2:{2}, 3:{4}, 4:{4}, 5:{6}, 6:{6}, 7:{8}, 8:{8}},
'backward': {1:{1}, 2:{1}, 3:{3}, 4:{3}, 5:{5}, 6:{5}, 7:{7}, 8:{7}}}
"""
Explanation: Nondeterministic Actions
To handle problems with nondeterministic actions, we'll replace the result method with results, which returns a collection of possible result states. We'll represent the solution to a problem not with a Node, but with a plan that consists of two types of component: sequences of actions, like ['forward', 'suck'], and conditional plans, like
{5: ['forward', 'suck'], 7: []}, which says that if we end up in state 5, then do ['forward', 'suck'], but if we end up in state 7, then do the empty sequence of actions.
End of explanation
"""
and_or_search(ErraticVacuum(1, {7, 8}))
"""
Explanation: Let's find a plan to get from state 1 to the goal of no dirt (states 7 or 8):
End of explanation
"""
{s: and_or_search(ErraticVacuum(s, {7,8}))
for s in range(1, 9)}
"""
Explanation: This plan says "First suck, and if we end up in state 5, go forward and suck again; if we end up in state 7, do nothing because that is a goal."
Here are the plans to get to a goal state starting from any one of the 8 states:
End of explanation
"""
from functools import lru_cache
def build_table(table, depth, state, problem):
if depth > 0 and state not in table:
problem.initial = state
table[state] = len(astar_search(problem))
for a in problem.actions(state):
build_table(table, depth - 1, problem.result(state, a), problem)
return table
def invert_table(table):
result = defaultdict(list)
for key, val in table.items():
result[val].append(key)
return result
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
table8 = invert_table(build_table({}, 25, goal, EightPuzzle(goal)))
def report8(table8, M, Ds=range(2, 25, 2), searchers=(breadth_first_search, astar_misplaced_tiles, astar_search)):
"Make a table of average nodes generated and effective branching factor"
for d in Ds:
line = [d]
N = min(M, len(table8[d]))
states = random.sample(table8[d], N)
for searcher in searchers:
nodes = 0
for s in states:
problem = CountCalls(EightPuzzle(s))
searcher(problem)
nodes += problem._counts['result']
nodes = int(round(nodes/N))
line.append(nodes)
line.extend([ebf(d, n) for n in line[1:]])
print('{:2} & {:6} & {:5} & {:5} && {:.2f} & {:.2f} & {:.2f}'
.format(*line))
def ebf(d, N, possible_bs=[b/100 for b in range(100, 300)]):
"Effective Branching Factor"
return min(possible_bs, key=lambda b: abs(N - sum(b**i for i in range(1, d+1))))
def edepth_reduction(d, N, b=2.67):
from statistics import mean
def random_state():
x = list(range(9))
random.shuffle(x)
return tuple(x)
meanbf = mean(len(e3.actions(random_state())) for _ in range(10000))
meanbf
{n: len(v) for (n, v) in table30.items()}
%time table30 = invert_table(build_table({}, 30, goal, EightPuzzle(goal)))
%time report8(table30, 20, range(26, 31, 2))
%time report8(table30, 20, range(26, 31, 2))
from itertools import combinations
from statistics import median, mean
# Detour index for Romania
L = romania.locations
sld = straight_line_distance  # shorthand, in case an `sld` alias was not defined earlier
def ratio(a, b): return astar_search(RouteProblem(a, b, map=romania)).path_cost / sld(L[a], L[b])
nums = [ratio(a, b) for a,b in combinations(L, 2) if b in r1.actions(a)]
mean(nums), median(nums) # 1.7, 1.6 # 1.26, 1.2 for adjacent cities
sld
"""
Explanation: Comparing Algorithms on EightPuzzle Problems of Different Lengths
End of explanation
"""
repo_name: wtbarnes/aia_response | path: notebooks/calculating_temperature_response_functions.ipynb | license: mit
import json
import numpy as np
import h5py
import seaborn as sns
from scipy.interpolate import splev,splrep
import matplotlib.pyplot as plt
import astropy.units as u
from sunpy.instr import aia
import ChiantiPy.core as ch
import ChiantiPy.tools.data as ch_data
%matplotlib inline
"""
Explanation: AIA Temperature Response Functions
In this notebook, we'll use the code already on my PR branch to first calculate the wavelength response functions. Then, we'll try to use CHIANTI and ChiantiPy to calculate the temperature response functions for a few ions.
We need to come up with a way to easily constrain how many wavelengths (or more precisely which ions) we need for each channel. We can certainly just search through every ion but this takes time!
End of explanation
"""
response = aia.Response(ssw_path='/Users/willbarnes/Documents/Rice/Research/ssw/',
#channel_list=[131,171,193,211,304]
)
response.calculate_wavelength_response(include_crosstalk=True)
response.peek_wavelength_response()
data = np.loadtxt('../aia_sample_data/aia_wresponse_raw.dat')
channels = sorted(list(response.wavelength_response.keys()))
ssw_results = {}
for i in range(len(channels)):
ssw_results[channels[i]] = {'wavelength':data[:,0],
'response':data[:,i+1]}
fig,axes = plt.subplots(3,3,figsize=(12,12))
for c,ax in zip(channels,axes.flatten()):
#ssw
ax.plot(ssw_results[c]['wavelength'],ssw_results[c]['response'],
#color=response.channel_colors[c],
label='ssw')
#sunpy
ax.plot(response.wavelength_response[c]['wavelength'],response.wavelength_response[c]['response'],
#color=response.channel_colors[c],
#marker='.',ms=6,markevery=5,
label='SunPy',linestyle=':',alpha=0.95,lw=2)
if c!=335 and c!=304:
ax.set_xlim([c-20,c+20])
#if c==335:
# ax.set_xlim([120,140])
#if c==304:
# ax.set_xlim([80,100])
ax.set_title('{} $\mathrm{{\mathring{{A}}}}$'.format(c),fontsize=20)
ax.set_xlabel(r'$\lambda$ ({0:latex})'.format(response.wavelength_response[c]['wavelength'].unit),fontsize=20)
ax.set_ylabel(r'$R_i(\lambda)$ ({0:latex})'.format(response.wavelength_response[c]['response'].unit),fontsize=20)
# contamination plots
#304
#ssw
ax = axes.flatten()[-2]
ax.plot(ssw_results[304]['wavelength'],ssw_results[304]['response'],
#color=response.channel_colors[c],
label='ssw')
#sunpy
ax.plot(response.wavelength_response[304]['wavelength'],response.wavelength_response[304]['response'],
label='SunPy',linestyle='',alpha=0.95,lw=2,
#color=response.channel_colors[c],
marker='.',ms=8,markevery=2,
)
ax.set_xlim([80,100])
ax.set_title('{} $\mathrm{{\mathring{{A}}}}$ contamination from 94'.format(304),fontsize=14)
ax.set_xlabel(r'$\lambda$ ({0:latex})'.format(response.wavelength_response[c]['wavelength'].unit),fontsize=20)
ax.set_ylabel(r'$R_i(\lambda)$ ({0:latex})'.format(response.wavelength_response[c]['response'].unit),fontsize=20)
#335
ax = axes.flatten()[-1]
ax.plot(ssw_results[335]['wavelength'],ssw_results[335]['response'],
#color=response.channel_colors[c],
label='ssw')
#sunpy
ax.plot(response.wavelength_response[335]['wavelength'],response.wavelength_response[335]['response'],
label='SunPy',linestyle='',alpha=0.95,lw=2,
#color=response.channel_colors[c],
marker='.',ms=8,markevery=2,
)
ax.set_xlim([120,140])
ax.set_title('{} $\mathrm{{\mathring{{A}}}}$ contamination from 131'.format(335),fontsize=14)
ax.set_xlabel(r'$\lambda$ ({0:latex})'.format(response.wavelength_response[c]['wavelength'].unit),fontsize=20)
ax.set_ylabel(r'$R_i(\lambda)$ ({0:latex})'.format(response.wavelength_response[c]['response'].unit),fontsize=20)
axes[0,0].legend(loc='best')
plt.tight_layout()
fig,axes = plt.subplots(3,3,figsize=(12,12),sharey=True,sharex=True)
for c,ax in zip(channels,axes.flatten()):
#ssw
ax2 = ax.twinx()
ssw_interp = ssw_results[c]['response']*response.wavelength_response[c]['response'].unit
delta_response = np.fabs(response.wavelength_response[c]['response'] - ssw_interp)/(ssw_interp)
ax.plot(response.wavelength_response[c]['wavelength'],delta_response,
#color=response.channel_colors[c]
)
ax2.plot(response.wavelength_response[c]['wavelength'],response.wavelength_response[c]['response'],
color='k',linestyle='--')
ax.set_title('{} $\mathrm{{\mathring{{A}}}}$'.format(c),fontsize=20)
ax.set_xlabel(r'$\lambda$ ({0:latex})'.format(response.wavelength_response[c]['wavelength'].unit),fontsize=20)
ax.set_ylabel(r'$\frac{|\mathrm{SSW}-\mathrm{SunPy}|}{\mathrm{SSW}}$',fontsize=20)
ax2.set_ylabel(r'$R_i(\lambda)$ ({0:latex})'.format(response.wavelength_response[c]['response'].unit))
ax.set_ylim([-1.1,1.1])
plt.tight_layout()
info_table = aia.response.aia_instr_properties_to_table([94,131,171,193,211,335],
['/Users/willbarnes/Documents/Rice/Research/ssw/sdo/aia/response/aia_V6_all_fullinst.genx'])
"""
Explanation: Wavelength Response
End of explanation
"""
temperature = np.logspace(5,8,50)*u.K
pressure = 1e15*u.K*u.cm**(-3)
density = pressure/temperature
"""
Explanation: Temperature Response
Set a temperature and density range. This is the range of temperatures and densities over which the contribution function for each ion will be calculated. Note that this is not a grid of temperatures and densities, but rather a list of $T$ and $n$ pairs.
According to Boerner et al. (2012), we should use a constant pressure of $10^{15}$ cm$^{-3}$ K.
End of explanation
"""
ion_list = (['fe_{}'.format(i) for i in np.arange(6,26)]
+ ['ca_{}'.format(i) for i in np.arange(10,20)])
"""
Explanation: The main question is: how exactly do we calculate the temperature response functions? According to Boerner et al. (2012), the response function $K$ for channel $i$ is given by,
$$
K_i(T) = \int_0^{\infty}\mathrm{d}\lambda\,G(\lambda,T)R_i(\lambda)
$$
where $K$ has units DN cm$^{-5}$ s$^{-1}$ pix$^{-1}$. So what is the right expression for $G(\lambda,n,T)$, the contribution function?
Let's divide the contribution function into a line emission part and a continuum part such that,
$$
G(\lambda,T) = G_{continuum}(\lambda,T) + \sum_XG_X(\lambda,T)
$$
where $X$ denotes an ion in the CHIANTI database. In this way, the response for each channel becomes,
$$
\begin{align}
K_i(T) &= \int_0^{\infty}\mathrm{d}\lambda\,\big(G_{continuum}(\lambda,T) + \sum_XG_X(\lambda,T)\big)R_i(\lambda) \\
&= \int_0^{\infty}\mathrm{d}\lambda\,G_{continuum}(\lambda,T)R_i(\lambda) + \sum_X\int_0^{\infty}\mathrm{d}\lambda\,G_X(\lambda,T)R_i(\lambda)
\end{align}
$$
So there is a contribution from the continuum plus a contribution from each ion at every wavelength where there is a spectral line. In particular, the continuum includes three different contributions: free-free losses, free-bound losses, and two-photon losses,
$$
G_{continuum}(\lambda,T) = G_{ff}(\lambda,T) + G_{fb}(\lambda,T) + G_{tp}(\lambda,T)
$$
The expressions for these losses can be found in Landi et al. (1999). These are calculated and summed over each ion as well. Thus, the full expression for the temperature response is given by,
$$
K_i(T) = \int_0^{\infty}\mathrm{d}\lambda\,\big(G_{ff}(\lambda,T) + G_{fb}(\lambda,T) + G_{tp}(\lambda,T)\big)R_i(\lambda) + \sum_X\int_0^{\infty}\mathrm{d}\lambda\,G_X(\lambda,T)R_i(\lambda)
$$
Make a list of ions. Exactly what ions should be included is not really clear. Certainly, at least all ions of Fe. Really though, just look at all the ions in CHIANTI. Though, the user should have a chance to select which ions are included so that they can easily calculate the response functions, given only a few lines/ions.
End of explanation
"""
ch_data.Defaults['flux'] = 'photon'
ch_data.Defaults['abundfile'] = 'sun_coronal_1992_feldman'
ch_data.Defaults['ioneqfile'] = 'chianti'
"""
Explanation: Boerner et al. (2012) use the coronal abundances of Feldman and Widing (1993) and the ionization balances of Dere et al. (2009). We also want to make sure we are calculating all of the emissivities in units of photons rather than ergs as this makes it easier when multiplying by the instrument response function.
End of explanation
"""
temperature_responses = {k:np.zeros(len(temperature)) for k in response.wavelength_response}
for ion in ch_data.MasterList:
#if ion.split('_')[0] != 'fe':
# continue
print('{}: Calculating contribution function for {}'.format(ch_data.MasterList.index(ion),ion))
#declare ion object
tmp = ch.ion(ion,temperature=temperature.value,eDensity=density.value,
abundance='sun_coronal_1992_feldman')
#calculate emissivity
tmp.emiss()
em = tmp.Emiss['emiss'][np.argsort(tmp.Emiss['wvl']),:]
wvl = np.sort(tmp.Emiss['wvl'])
#calculate contribution function.
gofnt = tmp.Abundance*em*tmp.IoneqOne/tmp.EDensity
#iterate over channels
for channel in response.wavelength_response:
#print('Adding to channel {}'.format(channel))
#interpolate response function to transitions
rsp = splev(wvl,splrep(response.wavelength_response[channel]['wavelength'].value,
response.wavelength_response[channel]['response'].value))
rsp = np.where(rsp<0,0,rsp)*response._channel_info[channel]['plate_scale'].value
#weighted sum over wavelength
#add to temperature response
temperature_responses[channel] += np.dot(rsp,gofnt)
# ssw responses
precalculated_responses_data = np.loadtxt('../aia_sample_data/aia_tresponse_raw.dat')
precalc_channels = [94,131,171,193,211,304,335]
precalculated_responses = {c: precalculated_responses_data[:,i+1] for i,c in enumerate(precalc_channels)}
precalculated_responses['temperature'] = precalculated_responses_data[:,0]
# ssw responses with chiantifix and evenorm fix
precalculated_responses_data = np.loadtxt('../aia_sample_data/aia_tresponse_fix.dat')
precalculated_responses_fix = {c: precalculated_responses_data[:,i+1] for i,c in enumerate(precalc_channels)}
precalculated_responses_fix['temperature'] = precalculated_responses_data[:,0]
channel_colors = {c: sns.color_palette('Set2',7)[i] for i,c in enumerate(response.wavelength_response)}
fig,axes = plt.subplots(4,2,figsize=(15,30),sharex=True)
for channel,ax in zip(sorted(list(temperature_responses.keys())),axes.flatten()):
ax.plot(temperature,temperature_responses[channel]/(0.83*(1./4./np.pi)),
label=r'lines',
color=channel_colors[channel])
ax.plot(temperature,(temperature_responses[channel]/(0.83*(1./4./np.pi))
+ continuum_contributions[channel]),
linestyle=':',
label='lines + continuum',
color=channel_colors[channel])
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim([1e-30,2e-24])
ax.set_xlim([1e5,1e8])
ax.set_title(r'{} $\AA$'.format(channel))
for i,pc in enumerate(precalc_channels):
axes.flatten()[i].plot(10**precalculated_responses['temperature'],
precalculated_responses[pc],
linestyle='--',
label=r'SSW',
color=channel_colors[pc])
axes.flatten()[i].plot(10**precalculated_responses_fix['temperature'],
precalculated_responses_fix[pc],
linestyle='-.',
label=r'SSW with chiantifix',
color=channel_colors[pc])
axes[0,1].legend(loc='best')
continuum_contributions = {k:np.zeros(len(temperature)) for k in response.wavelength_response}
wvl = response.wavelength_response[94]['wavelength'].value
for ion in ch_data.MasterList:
#if ion.split('_')[0] != 'fe':
# continue
print('{}: Calculating contribution function for {}'.format(ch_data.MasterList.index(ion),ion))
tmp = ch.ion(ion,temperature=temperature.value,eDensity=density.value,abundance='sun_coronal_1992_feldman')
#two photon emiss
tmp.twoPhoton(wvl)
if 'rate' in tmp.TwoPhoton:
two_photon = tmp.TwoPhoton['rate']
else:
two_photon = tmp.TwoPhoton['emiss']
#free-free
tmp_cont = ch.continuum(ion,temperature.value,abundance='sun_coronal_1992_feldman')
if tmp_cont.Ion > 1:
tmp_cont.freeFree(wvl)
if 'rate' in tmp_cont.FreeFree:
free_free = tmp_cont.FreeFree['rate']
else:
free_free = np.zeros((len(temperature),len(wvl)))
else:
free_free = np.zeros((len(temperature),len(wvl)))
#free-bound
if tmp_cont.Ion > 1:
tmp_cont.freeBound(wvl)
if 'rate' in tmp_cont.FreeBound:
free_bound = tmp_cont.FreeBound['rate']
else:
free_bound = np.zeros((len(temperature),len(wvl)))
else:
free_bound = np.zeros((len(temperature),len(wvl)))
#add to channels
for channel in response.wavelength_response:
continuum_contributions[channel] += np.dot((two_photon + free_free + free_bound),
(response.wavelength_response[channel]['response'].value
*response._channel_info[channel]['plate_scale'].value))
plt.figure(figsize=(8,8))
for channel in continuum_contributions:
plt.plot(temperature,
continuum_contributions[channel],
label=channel,color=channel_colors[channel])
plt.xscale('log')
plt.yscale('log')
plt.xlim([1e5,1e8])
plt.ylim([1e-32,1e-27])
plt.legend(loc='best')
"""
Explanation: Now iterate through the ion list, calculating the emission and subsequently the contribution function at each stage and then interpolating the wavelength response function to the appropriate wavelengths.
Note that for the line emission, we are using the expression,
$$
G_X(\lambda,T) = \frac{1}{4\pi}\epsilon_X(\lambda,T)\mathrm{Ab}(X)\frac{N(X^{+m})}{N(X)}\frac{1}{n_e}
$$
which has units of photons cm$^{3}$ s$^{-1}$ sr$^{-1}$. The factor of $1/4\pi$ is included in the expression returned by the emiss() method on the ChiantiPy ion object.
End of explanation
"""
repo_name: simpleblob/ml_algorithms_stepbystep | path: algo_example_NN_multilayer_FNN.ipynb | license: mit
# Imports needed by the cells below (numpy, pandas, matplotlib and seaborn are used throughout)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_digits
digits = load_digits(n_class=10)
print type(digits)
import random
digits_sample = random.sample(range(0,digits.images.shape[0]),10)
print digits_sample
#show sample digits
plt.rcParams['figure.figsize'] = (12, 4)
f, axarr = plt.subplots(2, 5)
for j in range(0,axarr.shape[1]):
for i in range(0,axarr.shape[0]):
axarr[i,j].matshow(digits.images[digits_sample[i+j]])
plt.show()
"""
Explanation: Using the small handwritten digits dataset bundled with sklearn (an 8x8-pixel relative of the MNIST digit dataset).
Source: http://yann.lecun.com/exdb/mnist/
sklearn has nicely packaged it for us.
| Classes | 10 |
|-------------------|---------------|
| Samples per class | ~180 |
| Samples total | 1797 |
| Dimensionality | 64 |
| Features | integers 0-16 |
End of explanation
"""
# Graphing B part's formula
graph_x = np.arange(0,1.05,0.05)
graph_y = graph_x*(1-graph_x)
plt.title('B Part value range')
plt.xlabel('f_out')
plt.ylabel('B value')
plt.plot(graph_x, graph_y)
plt.ylim([0,0.30])
plt.show()
"""
Explanation: Multi-Layer Feed-Forward Neural Network
(in this case, 3 layers [input, 1 hidden, output])
Source: https://en.wikipedia.org/wiki/Feedforward_neural_network
Source2: http://neuralnetworksanddeeplearning.com/chap1.html
The structure is something like the below.
In our case, each image has a total of 8x8 = 64 pixels, so we will be using 64 input neurons instead of 784.
We can try the same number of hidden neurons, 15.
(Important Note: since each node (same layer) is getting exactly the same input, it is of utmost importance to initialize all the weights with different (randomized) values! Else the output will be the same for all eternity)
<img src="http://neuralnetworksanddeeplearning.com/images/tikz12.png" style="width:500px;" />
Let's define some terms.
| symbol | meaning |
|--------------|---------------------------------------------------|
| i | index of input to the neurons |
| k | index of the neurons (per layer) |
| j | index of data input rows (each digit image) |
| w | weight vector for neuron connections |
| $w_{i,k}$ | weight connection from i input to k neuron |
| x or X | vector or array of input data |
| y or Y | vector or array of target data (0 or 1 for each digit) |
| f or f(x,..) | activation function, in this case it's a sigmoid. |
| $\eta$ | learning rate (pronounced "eta") |
The Node's activation function
10-digit multi-classification where probability for each node's output is a sigmoid function
$$ f(w,X) = \dfrac{1}{1+e^{-(w \cdot X)}} $$
(Note: the weight vector "w" includes bias component as $w_0$. the input array ("X") will always have first input as a "1", so bias is always activated.)
Loss function
Loss function is a mean-square error(MSE).
$$ L = \dfrac{1}{2m}\sum_{j=1}^m \bigl\|\;f(w,x_j) - y_j\;\bigr\|^2 $$
Output layer
The first partial derivative is the gradient, in simplified term is:
$$ \Delta w_{i,k,j} = - (y_k-f_{k,j}) \cdot f_{k,j} \cdot (1-f_{k,j}) \cdot x_{i,j}$$
we can sum out all the gradient of each training sample like so:
$$ \Delta w_{i,k} = - \sum_{j=1}^m (y_k-f_{k,j}) \cdot f_{k,j} \cdot (1-f_{k,j}) \cdot x_{i,j}$$
In shorter form, we group the inner part under a new variable "delta":
$$ \Delta w_{i,k} = - \sum_{j=1}^m \delta_{k,j} \cdot x_{i,j}$$
Then we can update the w (with learning rate)
$$ w_{i,k}(t+1) = w_{i,k}(t) - \eta \sum_{j=1}^m \delta_{k,j} \cdot x_{i,j}$$
Specifically, for the weights from hidden to output layer ($w^{out}$). The input "$x$" is actually the resulting output from the lower layer $f^{hid}$.
w hidden-to-output layer update:
$$ w_{i,k}^{out}(t+1) = w_{i,k}^{out}(t) - \eta \sum_{j=1}^m \delta_{k,j}^{out} \cdot f_{i,j}^{hid}$$
Intuition behind the formula
For the update gradient ($\Delta w$) itself excluding the learning rate, it comprises of 3 parts:
$$
\begin{align}
A &= (y_k-f_{k,j}^{out}) \\
B &= f_{k,j}^{out} \cdot (1-f_{k,j}^{out}) \\
C &= f_{i,j}^{hid}
\end{align}
$$
A is the magnitude of errors. Its value is between -1 and +1. The bigger the error, the bigger the correction.
B is the magnitude of "uncertainty". Its value can go between 0 and 0.25 with a dome-like shape (see the B Part plot). This means the magnitude of corrections varies according to the confidence of the previous prediction ($f^{out}$). If $f^{out}$ is 0.5, meaning that the truth could equally be 0 or 1, it warrants a large correction.
C is the magnitude of input. Its value is between 0 and 1. We multiply the gradient with this so that the correction will scale with the input.
End of explanation
"""
#set size of input, features, hidden, target
sample_size = digits.images.shape[0]
feature_size = digits.images.shape[1]*digits.images.shape[2]
target_size = 10
hidden_size = 15
#make a flat 10 output with all zeros
Y = np.zeros((sample_size,10))
for j in range(0,sample_size):
Y[j][digits.target[j]] = 1
#make a row of 64 input features instead of 8x8
X = digits.images[0:sample_size].reshape(sample_size,feature_size)
X = (X-8)/8 #normalized
Xb = np.insert(X,0,1,axis=1) #add bias input, always activated
def sigmoid(w,X):
a = 1.0/(1.0 + np.exp(-w.dot(X.transpose())))
return a.transpose()
def loss_func(Y,y_pred):
return (0.5/sample_size)*np.sum((Y-y_pred)**2) #element-wise operation then aggregate
#initialize the rest of the terms
# for weights --> index = (output node , input node)
w_hid = (np.random.rand(hidden_size,feature_size+1)-0.5) #randomized, and don't forget the bias!
w_out = (np.random.rand(target_size,hidden_size+1)-0.5) #randomized, and don't forget the bias!
#for f --> index = (data row , node)
f_hid = np.random.rand(sample_size,hidden_size)
f_out = np.random.rand(sample_size,target_size)
#for deltas --> index = (data row , node)
delta_hid = np.random.rand(sample_size,hidden_size)
delta_out = np.random.rand(sample_size,target_size)
#verification with dummy data
#checking numbers from https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
#sample_size = 1
#X = np.array([[0.05,0.1]])
#Xb = np.array([[1, 0.05,0.1]])
#Y = np.array([[0.01,0.99]])
#w_hid = np.array([[0.35,0.15,0.20],[0.35,0.25,0.3]])
#w_out = np.array([[0.60,0.40,0.45],[0.60,0.50,0.55]])
#w_out_bef = w_out.copy()
#run configuration
max_epoch = 5000
min_loss_criterion = 10**-4
#doing 1st forward pass to calculate loss
f_hid = sigmoid(w_hid,Xb)
f_hid_b = np.insert(f_hid,0,1,axis=1) #bias activation for next layer
f_out = sigmoid(w_out,f_hid_b)
curr_loss = loss_func(Y,f_out)
loss = []
loss.append(curr_loss)
print 'start_loss = {}'.format(curr_loss*2)
learning_rate = 0.7/sample_size
learning_rate_bias = 0.7/sample_size
for i in range(0,max_epoch):
#update the weights of output layer
delta_out = (f_out - Y)*(f_out)*(1-f_out) #element-wise operation
wgrad_out = np.einsum('ki,kj->ij', delta_out, f_hid) #dot operation already sums it up
w_out_bef = w_out.copy()
w_out[:,1:] = w_out[:,1:] -learning_rate*(wgrad_out)
w_out[:,0] = w_out[:,0] -learning_rate_bias*np.sum(delta_out,axis=0)*1.0
#update the weights of hidden layer
delta_hid = delta_out.dot(w_out_bef[:,1:])*(f_hid)*(1-f_hid) #dot then element-wise operation
wgrad_hid = np.einsum('ki,kj->ij',delta_hid,Xb[:,1:])
w_hid[:,1:] = w_hid[:,1:] -learning_rate*wgrad_hid
w_hid[:,0] = w_hid[:,0] -learning_rate_bias*np.sum(delta_hid,axis=0)*1.0
#re-calculate loss
f_hid = sigmoid(w_hid,Xb)
f_hid_b = np.insert(f_hid,0,1,axis=1) #bias activation for next layer
f_out = sigmoid(w_out,f_hid_b)
curr_loss = loss_func(Y,f_out)
loss.append(curr_loss)
#stopping criterion
if (i>10) and ((loss[-2] - curr_loss) < min_loss_criterion):
print 'stop at {}'.format(i)
break
print 'end_loss = {}'.format(loss[-1])
plt.figure()
plt.xlabel('no. of run')
plt.ylabel('loss function')
sns.tsplot(loss)
#get the prediction to compare with target
y_pred = np.argmax(f_out,axis=1)
from sklearn.metrics import confusion_matrix
cm_mat = confusion_matrix(digits.target[0:sample_size],y_pred)
print cm_mat.T
accuracy = np.trace(cm_mat)*100.0/sample_size
print 'Accuracy = {:.2f}%'.format(accuracy)
print 'Actually this is not true accuracy because we didn\'t verify it with the test dataset.'
df_temp = pd.DataFrame(cm_mat.flatten()[np.newaxis].T,columns = ['values'])
plt.figure(figsize = (6,4),dpi=600)
sns.heatmap(cm_mat.T, cbar=True ,annot=True, fmt=',.0f')
plt.title('Confusion Matrix')
plt.xlabel('Truth')
plt.ylabel('Predicted')
"""
Explanation: Hidden Layer
Moving on to the lower layer -- from input to hidden.
Here comes the backpropagation step --summing back the corrections from output layer:
$$ \Delta w_{i,k}^{hid} = - \sum_{j=1}^m \bigl{ \bigl(\sum_{k_{out}=1}^n \delta_{k_{out},j}^{out} \cdot w_{k_{hid},k_{out}}^{out}(t) \bigr) \cdot f_{k,j}^{hid} \cdot (1-f_{k,j}^{hid}) \cdot x_{i,j} \bigr}$$
Again, we can shorten it to:
$$ \Delta w_{i,k}^{hid} = - \sum_{j=1}^m \delta_{k,j}^{hid} \cdot x_{i,j} $$
And we can update w as such:
w input-to-hidden layer update:
$$ w_{i,k}^{hid}(t+1) = w_{i,k}^{hid}(t) - \eta \sum_{j=1}^m \delta_{k,j}^{hid} \cdot x_{i,j}$$
Intuition behind the formula, part duex
we continue the analysis of the gradient ($\Delta w$), this time for the hidden layer:
$$
\begin{align}
D &= \sum_{k_{out}=1}^n \delta_{k_{out},j}^{out} \cdot w_{k_{hid},k_{out}}^{out}(t) \\
E &= f_{k,j}^{hid} \cdot (1-f_{k,j}^{hid}) \\
F &= x_{i,j}
\end{align}
$$
D is the magnitude of errors. It is a sum of output layer's delta values back to the node $k^{hid}$, scaled by its outgoing weights.
E is the magnitude of "sureness". it works the same way as B above.
F is the magnitude of input. it works the same way as C above. The input of different features should be standardized such that their magnitudes have similar scale.
For someone who hates math equations
All the math notations actually made my head hurt. (And I've written those myself!)
For step-by-step calculation with actual numbers, this website has a really good write-up.
Optimization method
we are going just sum up the deltas from all the samples (all 1,797 of them) and update the weights in one go. Since it's a matrix operation, the speed is pretty fast.
I was considering doing mini-batch and other tricks, but for this Notebook I just want a clear step-by-step example of the actual algorithm itself.
End of explanation
"""
repo_name: wllmtrng/udacity_data_analyst_nanodegree | path: P0 Relationships/Data_Analyst_ND_Project0.ipynb | license: mit
import pandas as pd
# pandas is a software library for data manipulation and analysis
# We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd.
# hit shift + enter to run this cell or block of code
path = r'./chopstick-effectiveness.csv'
# Change the path to the location where the chopstick-effectiveness.csv file is located on your computer.
# If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer.
dataFrame = pd.read_csv(path)
dataFrame
"""
Explanation: Chopsticks!
A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC).
An investigation for determining the optimum length of chopsticks.
Link to Abstract and Paper
the abstract below was adapted from the link
Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost.
For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students.
Download the data set for the adults, then answer the following questions based on the abstract and the data set.
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text.
1. What is the independent variable in the experiment?
You can either double click on this cell to add your answer in this cell, or use the plus sign in the toolbar (Insert cell below) to add your answer in a new cell.
Chopstick length
2. What is the dependent variable in the experiment?
Food Pinching Performance
3. How is the dependent variable operationally defined?
The "Food Pinching Performance" is determined by counting the number of peanuts picked and placed in a cup (PPPC).
4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled.
Think about the participants who generated the data and what they have in common. You don't need to guess any variables or read the full paper to determine these variables. (For example, it seems plausible that the material of the chopsticks was held constant, but this is not stated in the abstract or data description.)
Sex of the college students.
Age range of the college students.
The style from which the chopsticks were held.
Comfortability with using chopsticks.
One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
End of explanation
"""
dataFrame['Food.Pinching.Efficiency'].mean()
"""
Explanation: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
End of explanation
"""
meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index()
meansByChopstickLength
# reset_index() changes Chopstick.Length from an index to column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5.
"""
Explanation: This number is helpful, but it doesn't tell us which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below.
End of explanation
"""
# Causes plots to display within the notebook rather than in a new window
%pylab inline
import matplotlib.pyplot as plt
plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency'])
plt.xlabel("Length in mm")
plt.ylabel("Efficiency in PPPC")
plt.title("Average Food Pinching Efficiency by Chopstick Length")
plt.show()
"""
Explanation: 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
240mm
End of explanation
"""
|
EvenStrangest/tensorflow
|
tensorflow/examples/udacity/5_word2vec.ipynb
|
apache-2.0
|
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
"""
Explanation: Deep Learning
Assignment 5
The goal of this assignment is to train a Word2Vec skip-gram model over Text8 data.
End of explanation
"""
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
"""
Explanation: Download the data from the source website if necessary.
End of explanation
"""
def read_data(filename):
"""Extract the first file enclosed in a zip file as a list of words"""
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(filename)
print('Data size %d' % len(words))
"""
Explanation: Read the data into a string.
End of explanation
"""
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index_into_count = dictionary[word]
else:
index_into_count = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index_into_count)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words # Hint to reduce memory.
"""
Explanation: Build the dictionary and replace rare words with UNK token.
End of explanation
"""
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:8]])
for num_skips, skip_window in [(2, 1), (4, 2)]:
data_index = 0
batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
print(' batch:', [reverse_dictionary[bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
"""
Explanation: Function to generate a training batch for the skip-gram model.
End of explanation
"""
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
# Note: The optimizer will optimize the softmax_weights AND the embeddings.
# This is because the embeddings are defined as a variable quantity and the
# optimizer's `minimize` method will by default modify all variable quantities
# that contribute to the tensor it is passed.
# See docs on `tf.train.Optimizer.minimize()` for more details.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
# why euclidean distance here, and not cosine?
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
"""
Explanation: Train a skip-gram model.
End of explanation
"""
data_index = 0
def generate_batch(batch_size, skip_window):
assert skip_window == 1 # Handling of this value is hard-coded here.
global data_index
batch = np.ndarray(shape=(batch_size, 2), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2*skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size):
target = skip_window # target label at the center of the buffer
batch[i, 0] = buffer[skip_window-1]
batch[i, 1] = buffer[skip_window+1]
labels[i, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:8]])
for skip_window in [1]:
data_index = 0
batch, labels = generate_batch(batch_size=8, skip_window=skip_window)
print('\nwith skip_window = %d:' % skip_window)
print(' batch:', [[reverse_dictionary[m] for m in bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
span = 2*skip_window + 1 # [ skip_window target skip_window ]
train_dataset = tf.placeholder(tf.int32, shape=[batch_size, (span-1)])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
assert skip_window == 1 # Handling of this value is hard-coded here.
embed0 = tf.nn.embedding_lookup(embeddings, train_dataset[:,0])
embed1 = tf.nn.embedding_lookup(embeddings, train_dataset[:,1])
embed = (embed0 + embed1)/(span-1)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
      for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
        for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
"""
Explanation: Problem
An alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset.
End of explanation
"""
|
mercybenzaquen/foundations-homework
|
databases_hw/db06/Homework_6.ipynb
|
mit
|
import requests
data = requests.get('http://localhost:5000/lakes').json()
print(len(data), "lakes")
for item in data[:10]:
print(item['name'], "- elevation:", item['elevation'], "m / area:", item['area'], "km^2 / type:", item['type'])
"""
Explanation: Homework 6: Web Applications
For this homework, you're going to write a web API for the lake data in the MONDIAL database. (Make sure you've imported the data as originally outlined in our week 1 tutorial.)
The API should perform the following tasks:
A request to /lakes should return a JSON list of dictionaries, with the information from the name, elevation, area and type fields from the lake table in MONDIAL.
The API should recognize the query string parameter sort. When left blank or set to name, the results should be sorted by the name of the lake (in alphabetical order). When set to area or elevation, the results should be sorted by the requested field, in descending order.
The API should recognize the query string parameter type. When specified, the results should only include rows that have the specified value in the type field.
You should be able to use both the sort and type parameters in any request.
This notebook contains only test requests to your API. Write the API as a standalone Python program, start the program and then run the code in the cells below to ensure that your API produces the expected output. When you're done, paste the source code in the final cell (so we can check your work, if needed).
Hints when writing your API code:
You'll need to construct the SQL query as a string, piece by piece. This will likely involve a somewhat messy tangle of if statements. Lean into the messy tangle.
Make sure to use parameter placeholders (%s) in the query.
If you're getting SQL errors, print out your SQL statement in the request handler function so you can debug it. (When you use print() in Flask, the results will display in your terminal window.)
When in doubt, return to the test code. Examine it carefully and make sure you know exactly what it's trying to do.
Problem set #1: A list of lakes
Your API should return a JSON list of dictionaries (objects). Use the code below to determine what the keys of the dictionaries should be. (For brevity, this example only prints out the first ten records, but of course your API should return all of them.)
Expected output:
143 lakes
Ammersee - elevation: 533 m / area: 46 km^2 / type: None
Arresoe - elevation: None m / area: 40 km^2 / type: None
Atlin Lake - elevation: 668 m / area: 798 km^2 / type: None
Balaton - elevation: 104 m / area: 594 km^2 / type: None
Barrage de Mbakaou - elevation: None m / area: None km^2 / type: dam
Bodensee - elevation: 395 m / area: 538 km^2 / type: None
Brienzersee - elevation: 564 m / area: 29 km^2 / type: None
Caspian Sea - elevation: -28 m / area: 386400 km^2 / type: salt
Chad Lake - elevation: 250 m / area: 23000 km^2 / type: salt
Chew Bahir - elevation: 520 m / area: 800 km^2 / type: salt
End of explanation
"""
import requests
data = requests.get('http://localhost:5000/lakes?type=salt').json()
avg_area = sum([x['area'] for x in data if x['area'] is not None]) / len(data)
avg_elev = sum([x['elevation'] for x in data if x['elevation'] is not None]) / len(data)
print("average area:", int(avg_area))
print("average elevation:", int(avg_elev))
"""
Explanation: Problem set #2: Lakes of a certain type
The following code fetches all lakes of type salt and finds their average area and elevation.
Expected output:
average area: 18880
average elevation: 970
End of explanation
"""
import requests
data = requests.get('http://localhost:5000/lakes?sort=elevation').json()
for item in [x['name'] for x in data if x['elevation'] is not None][:15]:
print("*", item)
"""
Explanation: Problem set #3: Lakes in order
The following code fetches lakes in reverse order by their elevation and prints out the name of the first fifteen, excluding lakes with an empty elevation field.
Expected output:
* Licancabur Crater Lake
* Nam Co
* Lago Junin
* Lake Titicaca
* Poopo
* Salar de Uyuni
* Koli Sarez
* Lake Irazu
* Qinghai Lake
* Segara Anak
* Lake Tahoe
* Crater Lake
* Lake Tana
* Lake Van
* Issyk-Kul
End of explanation
"""
import requests
data = requests.get('http://localhost:5000/lakes?sort=area&type=caldera').json()
for item in data:
print("*", item['name'])
"""
Explanation: Problem set #4: Order and type
The following code prints the names of the largest caldera lakes, ordered in reverse order by area.
Expected output:
* Lake Nyos
* Lake Toba
* Lago Trasimeno
* Lago di Bolsena
* Lago di Bracciano
* Crater Lake
* Segara Anak
* Laacher Maar
End of explanation
"""
import requests
data = requests.get('http://localhost:5000/lakes', params={'type': "' OR true; --"}).json()
data
"""
Explanation: Problem set #5: Error handling
Your API should work fine even when faced with potential error-causing inputs. For example, the expected output for this statement is an empty list ([]), not every row in the table.
End of explanation
"""
import requests
data = requests.get('http://localhost:5000/lakes', params={'sort': "florb"}).json()
[x['name'] for x in data[:5]]
"""
Explanation: Specifying a field other than name, area or elevation for the sort parameter should fail silently, defaulting to sorting alphabetically. Expected output: ['Ammersee', 'Arresoe', 'Atlin Lake', 'Balaton', 'Barrage de Mbakaou']
End of explanation
"""
from flask import Flask, request, jsonify
import pg8000
import decimal
app= Flask(__name__)
@app.route("/lakes")
def get_lakes():
conn= pg8000.connect(database="mondial", user="mercybenzaquen")
cursor= conn.cursor()
type_lakes = request.args.get('type', '')
sort_lakes= request.args.get('sort', '')
    if (type_lakes != '') and (sort_lakes in ['elevation', 'area']):
        # the sort column is validated against a whitelist before being placed in the query;
        # an unrecognised sort value falls through to the name-ordered branches below
        cursor.execute("select name, elevation, area, type from lake where type = %s order by " + sort_lakes + " DESC", [type_lakes])
elif type_lakes != '':
cursor.execute("select name, elevation, area, type from lake where type = %s order by name", [type_lakes])
elif sort_lakes not in ['name', 'elevation', 'area']:
cursor.execute("select name, elevation, area, type from lake order by name" )
elif sort_lakes != '':
cursor.execute("select name, elevation, area, type from lake order by " + sort_lakes + " DESC")
else:
cursor.execute("select name, elevation, area, type from lake order by name")
def decimal_to_int(x):
if isinstance(x, decimal.Decimal):
return int(x)
else:
return None
everything_dict = []
for item in cursor.fetchall():
everything_dict.append({'name': item[0], 'elevation': decimal_to_int(item[1]), 'area': decimal_to_int(item[2]), 'type': item[3]})
return jsonify(everything_dict)
app.run(debug=True)
"""
Explanation: Paste your code
Please paste the code for your entire Flask application in the cell below, in case we want to take a look when grading or debugging your assignment.
End of explanation
"""
|
SnShine/aima-python
|
planning.ipynb
|
mit
|
from planning import *
"""
Explanation: Planning: planning.py; chapters 10-11
This notebook describes the planning.py module, which covers Chapters 10 (Classical Planning) and 11 (Planning and Acting in the Real World) of Artificial Intelligence: A Modern Approach. See the intro notebook for instructions.
We'll start by looking at PDDL and Action data types for defining problems and actions. Then, we will see how to use them by trying to plan a trip from Sibiu to Bucharest across the familiar map of Romania, from search.ipynb. Finally, we will look at the implementation of the GraphPlan algorithm.
The first step is to load the code:
End of explanation
"""
%psource Action
"""
Explanation: To be able to model a planning problem properly, it is essential to be able to represent an Action. Each action we model requires at least three things:
* preconditions that the action must meet
* the effects of executing the action
* some expression that represents the action
Planning actions have been modelled using the Action class. Let's look at the source to see how the internal details of an action are implemented in Python.
End of explanation
"""
%psource PDDL
"""
Explanation: It is interesting to see the way preconditions and effects are represented here. Instead of just being a list of expressions each, they consist of two lists - precond_pos and precond_neg. This is to work around the fact that PDDL doesn't allow for negations. Thus, for each precondition, we maintain a separate list of those preconditions that must hold true, and those whose negations must hold true. Similarly, instead of having a single list of expressions that are the result of executing an action, we have two. The first (effect_add) contains all the expressions that will evaluate to true if the action is executed, and the second (effect_rem) contains all those expressions that would be false if the action is executed (i.e. their negations would be true).
The constructor parameters, however, combine the two precondition lists into a single precond parameter, and the effect lists into a single effect parameter.
The PDDL class is used to represent planning problems in this module. The following attributes are essential to be able to define a problem:
* a goal test
* an initial state
* a set of viable actions that can be executed in the search space of the problem
View the source to see how the Python code tries to realise these.
End of explanation
"""
from utils import *
# this imports the required expr so we can create our knowledge base
knowledge_base = [
expr("Connected(Bucharest,Pitesti)"),
expr("Connected(Pitesti,Rimnicu)"),
expr("Connected(Rimnicu,Sibiu)"),
expr("Connected(Sibiu,Fagaras)"),
expr("Connected(Fagaras,Bucharest)"),
expr("Connected(Pitesti,Craiova)"),
expr("Connected(Craiova,Rimnicu)")
]
"""
Explanation: The initial_state attribute is a list of Expr expressions that forms the initial knowledge base for the problem. Next, actions contains a list of Action objects that may be executed in the search space of the problem. Lastly, we pass a goal_test function as a parameter - this typically takes a knowledge base as a parameter, and returns whether or not the goal has been reached.
Now let's try to define a planning problem using these tools. Since we already know about the map of Romania, let's see if we can plan a trip across a simplified map of Romania.
Here is our simplified map definition:
End of explanation
"""
knowledge_base.extend([
expr("Connected(x,y) ==> Connected(y,x)"),
expr("Connected(x,y) & Connected(y,z) ==> Connected(x,z)"),
expr("At(Sibiu)")
])
"""
Explanation: Let us add some logic propositions to complete our knowledge about travelling around the map. These are the typical symmetry and transitivity properties of connections on a map. We can now be sure that our knowledge_base understands what it truly means for two locations to be connected in the sense usually meant by humans when we use the term.
Let's also add our starting location, Sibiu, to the map.
End of explanation
"""
knowledge_base
"""
Explanation: We now have a complete knowledge base, which can be seen like this:
End of explanation
"""
#Sibiu to Bucharest
precond_pos = [expr('At(Sibiu)')]
precond_neg = []
effect_add = [expr('At(Bucharest)')]
effect_rem = [expr('At(Sibiu)')]
fly_s_b = Action(expr('Fly(Sibiu, Bucharest)'), [precond_pos, precond_neg], [effect_add, effect_rem])
#Bucharest to Sibiu
precond_pos = [expr('At(Bucharest)')]
precond_neg = []
effect_add = [expr('At(Sibiu)')]
effect_rem = [expr('At(Bucharest)')]
fly_b_s = Action(expr('Fly(Bucharest, Sibiu)'), [precond_pos, precond_neg], [effect_add, effect_rem])
#Sibiu to Craiova
precond_pos = [expr('At(Sibiu)')]
precond_neg = []
effect_add = [expr('At(Craiova)')]
effect_rem = [expr('At(Sibiu)')]
fly_s_c = Action(expr('Fly(Sibiu, Craiova)'), [precond_pos, precond_neg], [effect_add, effect_rem])
#Craiova to Sibiu
precond_pos = [expr('At(Craiova)')]
precond_neg = []
effect_add = [expr('At(Sibiu)')]
effect_rem = [expr('At(Craiova)')]
fly_c_s = Action(expr('Fly(Craiova, Sibiu)'), [precond_pos, precond_neg], [effect_add, effect_rem])
#Bucharest to Craiova
precond_pos = [expr('At(Bucharest)')]
precond_neg = []
effect_add = [expr('At(Craiova)')]
effect_rem = [expr('At(Bucharest)')]
fly_b_c = Action(expr('Fly(Bucharest, Craiova)'), [precond_pos, precond_neg], [effect_add, effect_rem])
#Craiova to Bucharest
precond_pos = [expr('At(Craiova)')]
precond_neg = []
effect_add = [expr('At(Bucharest)')]
effect_rem = [expr('At(Craiova)')]
fly_c_b = Action(expr('Fly(Craiova, Bucharest)'), [precond_pos, precond_neg], [effect_add, effect_rem])
"""
Explanation: We now define the possible actions for our problem. We know that we can drive between any connected places. But, as is evident from this list of Romanian airports, we can also fly directly between Sibiu, Bucharest, and Craiova.
We can define these flight actions like this:
End of explanation
"""
#Drive
precond_pos = [expr('At(x)')]
precond_neg = []
effect_add = [expr('At(y)')]
effect_rem = [expr('At(x)')]
drive = Action(expr('Drive(x, y)'), [precond_pos, precond_neg], [effect_add, effect_rem])
"""
Explanation: And the drive actions like this.
End of explanation
"""
def goal_test(kb):
return kb.ask(expr("At(Bucharest)"))
"""
Explanation: Finally, we can define a function that will tell us when we have reached our destination, Bucharest.
End of explanation
"""
prob = PDDL(knowledge_base, [fly_s_b, fly_b_s, fly_s_c, fly_c_s, fly_b_c, fly_c_b, drive], goal_test)
"""
Explanation: Thus, with all the components in place, we can define the planning problem.
End of explanation
"""
|
cehbrecht/demo-notebooks
|
wps-cfchecker.ipynb
|
apache-2.0
|
from owslib.wps import WebProcessingService
token = 'a890731658ac4f1ba93a62598d2f2645'
headers = {'Access-Token': token}
wps = WebProcessingService("https://bovec.dkrz.de/ows/proxy/hummingbird", verify=False, headers=headers)
"""
Explanation: Init WPS with cfchecker processes
hummingbird caps url: https://bovec.dkrz.de/ows/proxy/hummingbird?version=1.0.0&request=GetCapabilities&service=WPS
using twitcher access tokens: http://twitcher.readthedocs.io/en/latest/tutorial.html
End of explanation
"""
for process in wps.processes:
print process.identifier,":", process.title
"""
Explanation: Show available processes
End of explanation
"""
process = wps.describeprocess(identifier='qa_cfchecker')
for inp in process.dataInputs:
print inp.identifier, ":", inp.title, ":", inp.dataType
"""
Explanation: Show details about qa_cfchecker process
End of explanation
"""
inputs = [('dataset', 'http://bovec.dkrz.de:8090/wpsoutputs/hummingbird/output-b9855b08-42d8-11e6-b10f-abe4891050e3.nc')]
execution = wps.execute(identifier='qa_cfchecker', inputs=inputs, output='output', async=False)
print execution.status
for out in execution.processOutputs:
print out.title, out.reference
"""
Explanation: Check file available on http service
End of explanation
"""
from owslib.wps import ComplexDataInput
import base64
fp = open("/home/pingu/tmp/input2.nc", 'r')
text = fp.read()
fp.close()
encoded = base64.b64encode(text)
content = ComplexDataInput(encoded)
inputs = [ ('dataset', content) ]
execution = wps.execute(identifier='qa_cfchecker', inputs=inputs, output='output', async=False)
print execution.status
for out in execution.processOutputs:
print out.title, out.reference
"""
Explanation: Prepare local file to send to service
To send a local file with the request, the file needs to be base64-encoded.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/sandbox-3/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.22/_downloads/d5dd378a96a427683b4c918f7cdf9064/plot_ssd_spatial_filters.ipynb
|
bsd-3-clause
|
# Author: Denis A. Engemann <denis.engemann@gmail.com>
# Victoria Peterson <victoriapeterson09@gmail.com>
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import Epochs
from mne.datasets.fieldtrip_cmc import data_path
from mne.decoding import SSD
"""
Explanation: Compute Spectro-Spatial Decomposition (SSD) spatial filters
In this example, we will compute spatial filters for retaining
oscillatory brain activity and down-weighting 1/f background signals
as proposed by :footcite:NikulinEtAl2011.
The idea is to learn spatial filters that separate oscillatory dynamics
from surrounding non-oscillatory noise based on the covariance in the
frequency band of interest and the noise covariance based on surrounding
frequencies.
End of explanation
"""
fname = data_path() + '/SubjectCMC.ds'
# Prepare data
raw = mne.io.read_raw_ctf(fname)
raw.crop(50., 110.).load_data() # crop for memory purposes
raw.resample(sfreq=250)
raw.pick_types(meg=True, eeg=False, ref_meg=False)
freqs_sig = 9, 12
freqs_noise = 8, 13
ssd = SSD(info=raw.info,
reg='oas',
sort_by_spectral_ratio=False, # False for purpose of example.
filt_params_signal=dict(l_freq=freqs_sig[0], h_freq=freqs_sig[1],
l_trans_bandwidth=1, h_trans_bandwidth=1),
filt_params_noise=dict(l_freq=freqs_noise[0], h_freq=freqs_noise[1],
l_trans_bandwidth=1, h_trans_bandwidth=1))
ssd.fit(X=raw.get_data())
"""
Explanation: Define parameters
End of explanation
"""
pattern = mne.EvokedArray(data=ssd.patterns_[:4].T,
info=ssd.info)
pattern.plot_topomap(units=dict(mag='A.U.'), time_format='')
# The topographies suggest that we picked up a parietal alpha generator.
# Transform
ssd_sources = ssd.transform(X=raw.get_data())
# Get psd of SSD-filtered signals.
psd, freqs = mne.time_frequency.psd_array_welch(
ssd_sources, sfreq=raw.info['sfreq'], n_fft=4096)
# Get spec_ratio information (already sorted).
# Note that this is not necessary if sort_by_spectral_ratio=True (default).
spec_ratio, sorter = ssd.get_spectral_ratio(ssd_sources)
# Plot spectral ratio (see Eq. 24 in Nikulin 2011).
fig, ax = plt.subplots(1)
ax.plot(spec_ratio, color='black')
ax.plot(spec_ratio[sorter], color='orange', label='sorted eigenvalues')
ax.set_xlabel("Eigenvalue Index")
ax.set_ylabel(r"Spectral Ratio $\frac{P_f}{P_{sf}}$")
ax.legend()
ax.axhline(1, linestyle='--')
# We can see that the initial sorting based on the eigenvalues
# was already quite good. However, when using few components only
# the sorting might make a difference.
"""
Explanation: Let's investigate the spatial filter with the maximal power ratio.
We will first inspect the topographies.
According to Nikulin et al. 2011, this is done either by inverting the filters
(W^{-1}) or by multiplying the noise covariance with the filters, Eq. (22): (C_n W)^t.
We rely on the inversion approach here.
End of explanation
"""
below50 = freqs < 50
# for highlighting the freq. band of interest
bandfilt = (freqs_sig[0] <= freqs) & (freqs <= freqs_sig[1])
fig, ax = plt.subplots(1)
ax.loglog(freqs[below50], psd[0, below50], label='max SNR')
ax.loglog(freqs[below50], psd[-1, below50], label='min SNR')
ax.loglog(freqs[below50], psd[:, below50].mean(axis=0), label='mean')
ax.fill_between(freqs[bandfilt], 0, 10000, color='green', alpha=0.15)
ax.set_xlabel('log(frequency)')
ax.set_ylabel('log(power)')
ax.legend()
# We can clearly see that the selected component enjoys an SNR that is
# way above the average power spectrum.
"""
Explanation: Let's also look at the power spectrum of that source and compare it
to the power spectrum of the source with the lowest SNR.
End of explanation
"""
# Build epochs as sliding windows over the continuous raw file.
events = mne.make_fixed_length_events(raw, id=1, duration=5.0, overlap=0.0)
# Epoch length is 5 seconds.
epochs = Epochs(raw, events, tmin=0., tmax=5,
baseline=None, preload=True)
ssd_epochs = SSD(info=epochs.info,
reg='oas',
filt_params_signal=dict(l_freq=freqs_sig[0],
h_freq=freqs_sig[1],
l_trans_bandwidth=1,
h_trans_bandwidth=1),
filt_params_noise=dict(l_freq=freqs_noise[0],
h_freq=freqs_noise[1],
l_trans_bandwidth=1,
h_trans_bandwidth=1))
ssd_epochs.fit(X=epochs.get_data())
# Plot topographies.
pattern_epochs = mne.EvokedArray(data=ssd_epochs.patterns_[:4].T,
info=ssd_epochs.info)
pattern_epochs.plot_topomap(units=dict(mag='A.U.'), time_format='')
"""
Explanation: Epoched data
Although we suggest using this method before epoching, there might be some
situations in which the data can only be processed in chunks.
End of explanation
"""
|
Naereen/notebooks
|
Une_exploration_visuelle_de_l_algorithme_du_Simplexe_en_3D_avec_Python.ipynb
|
mit
|
from IPython.display import YouTubeVideo
# https://www.youtube.com/watch?v=W_U8ozVsh8s
YouTubeVideo("W_U8ozVsh8s", width=944, height=531)
"""
Explanation: A visual exploration of the Simplex algorithm in 3D with Python
In this notebook (using Python 3), I want to show animations of the Simplex algorithm, a bit like in the following video:
<iframe width="500" height="250" src="https://www.youtube.com/embed/W_U8ozVsh8s" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
I would like to write a small piece of Python code that performs the following steps:
it is given the linear program to solve (possibly in the form accepted by lp_solve);
it solves the problem with scipy.optimize.linprog(method="simplex"), and stops if the simplex method finds no solution;
it then uses the callback of that function to display LaTeX equations representing the evolution of the system and of the basic and non-basic variables;
I would like a step-by-step animation, with a simple "slider" based on the interact widget;
bonus: display a 3D figure, with TikZ?
This document will not be:
a home-made implementation of the simplex algorithm: that would take too long and I do not have the time right now;
an explanation of the simplex algorithm: for that, see the ALGO2 lecture notes and the Wikipedia page on the Simplex algorithm;
probably capable of being exported cleanly to static HTML;
nor capable of being exported cleanly to PDF.
About
Author: Lilian Besson
License: MIT
Date: 09/02/2021
Course: ALGO2 @ ENS Rennes
Explanatory video
Watch this video.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Dรฉpendances
On a sรปrement besoin de Numpy et Matplotlib :
End of explanation
"""
from scipy.optimize import linprog
"""
Explanation: We need the scipy.optimize.linprog(method="simplex") function from the scipy.optimize module:
End of explanation
"""
from IPython.display import Latex, display
"""
Explanation: We also need the IPython.display.Latex function to easily display LaTeX code generated from our Python cells:
End of explanation
"""
def display_cos_power(power=1):
return display(Latex(fr"$$\cos(x)^{power} = 0$$"))
for power in range(1, 5):
display_cos_power(power)
"""
Explanation: For example:
End of explanation
"""
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
interactive(display_cos_power,
power=(1, 10, 1)
)
"""
Explanation: We will need the IPywidgets widgets later on:
End of explanation
"""
%load_ext itikz
"""
Explanation: And finally, the itikz extension
End of explanation
"""
# Objective Function: 50x_1 + 80x_2
# Constraint 1: 5x_1 + 2x_2 <= 20
# Constraint 2: -10x_1 + -12x_2 <= -90
problem1 = {
# Cost function: 50x_1 + 80x_2
"cost": [50, 80],
# Coefficients for inequalities
"A_ub": [[5, 2], [-10, -12]],
# Constraints for inequalities: 20 and -90
"b_ub": [20, -90],
# Bounds on x, 0 <= x_i <= +oo by default
"bounds": (0, None),
}
# Objective Function: maximize x_1 + 6*x_2 + 13*x_3
# => so cost will be opposite
# Constraint 1: x1 <= 200
# Constraint 2: x2 <= 300
# Constraint 3: x1+x2+x3 <= 400
# Constraint 2: x2+3x3 <= 600
problem2 = {
# Cost function: minimize -1*x_1 + -6*x_2 + -13*x_3
"cost": [-1, -6, -13],
# Coefficients for inequalities
"A_ub": [
[1, 0, 0],
[0, 1, 0],
[1, 1, 1],
[0, 1, 3],
],
# Constraints for inequalities:
"b_ub": [200, 300, 400, 600],
# Bounds on x, 0 <= x_i <= +oo by default
"bounds": (0, None),
}
"""
Explanation: Premiรจre expรฉrience
Dรฉjร , je vais รฉcrire le problรจme รฉtudiรฉ comme un dictionnaire, que l'on pourra passer ร scipy.optimize.linprog(method="simplex") :
End of explanation
"""
def linprog_wrapper(problem, **kwargs):
result = linprog(
problem["cost"],
A_ub=problem["A_ub"],
b_ub=problem["b_ub"],
bounds=problem["bounds"],
method="simplex",
**kwargs
)
return result
"""
Explanation: Then a small function that takes this dictionary and hands it to scipy.optimize.linprog(method="simplex"):
End of explanation
"""
linprog_wrapper(problem1)
linprog_wrapper(problem2)
"""
Explanation: On va dรฉjร vรฉrifier que l'on peut rรฉsoudre ces deux exemples de problรจme de programmation linรฉaire :
End of explanation
"""
def round(np_array):
res = np.array(np.round(np_array), dtype=int)
if res.size > 1:
return list(res)
else:
return res
def dummy_callback(r):
print(f"\n- Itรฉration #{r['nit']}, phase {r['phase']} :")
fun = round(r['fun'])
print(f" Valeur objectif = {fun}")
slack = round(r['slack'])
print(f" Variables d'รฉcart = {slack}")
x = round(r['x'])
print(f" Variables objectif = {x}")
# print(r)
linprog_wrapper(problem2, callback=dummy_callback)
"""
Explanation: This is indeed the solution $x^* = [0, 300, 100]$, with an objective value of $+3100$, that was found in the video!
What if we add a callback?
End of explanation
"""
step_by_step_results = []
step_by_step_nitphase = []
def print_and_store_callback(r):
global step_by_step_results, step_by_step_nitphase
nit, phase = r['nit'], r['phase']
print(f"\n- Itรฉration #{nit}, phase {phase} :")
fun = round(r['fun'])
print(f" Valeur objectif = {fun}")
slack = round(r['slack'])
print(f" Variables d'รฉcart = {slack}")
x = round(r['x'])
print(f" Variables objectif = {x}")
if (nit, phase) not in step_by_step_nitphase:
step_by_step_results.append(r)
step_by_step_nitphase.append((nit, phase))
step_by_step_results = []
result_final = linprog_wrapper(problem2, callback=print_and_store_callback)
print(result_final)
step_by_step_results.append(result_final)
"""
Explanation: Afficher un systรจme d'รฉquation en LaTeX
End of explanation
"""
len(step_by_step_results)
"""
Explanation: On a donc rรฉcupรฉrรฉ un certain nombre d'objets rรฉsultat intermรฉdiaire d'optimisation :
End of explanation
"""
def equation_latex_from_step(problem, result):
    # Minimal sketch (this cell was left unfinished in the original notebook):
    # only the cost function is rebuilt from the problem dictionary, because the
    # intermediate `result` objects do not carry the rewritten system itself.
    cout = " + ".join(f"{c} x_{{{i + 1}}}" for i, c in enumerate(problem["cost"]))
    return r"\text{Maximiser } " + cout + r" \\ \begin{cases} \dots \end{cases}"
"""
Explanation: In fact, I realize that the information provided by these successive results is not sufficient to display equations as in the video.
Home-made implementation of the Simplex in dimension 3
Examples
Further experiments
We will write a function that produces LaTeX code representing this optimization system throughout the rewritings it goes through:
End of explanation
"""
def interactive_latex_exploration(problem):
    # Sketch only (the original cell was left unfinished): solve the given problem
    # once while storing the intermediate steps, then browse them with a slider.
    step_by_step_results.clear(); step_by_step_nitphase.clear()
    problem_solved = linprog_wrapper(problem, callback=print_and_store_callback)
    if problem_solved.status != 0:
        print("Error: problem was not solved correctly, stopping this...")
        return None

    def show_step(step=0):
        display(Latex(equation_latex_from_step(problem, step_by_step_results[step])))

    return interact(show_step, step=(0, len(step_by_step_results) - 1))
"""
Explanation: TODO: finish this!
Adding interactivity
End of explanation
"""
interactive_latex_exploration(problem2)  # e.g. on the second example problem
"""
Explanation: Let's give it a try:
End of explanation
"""
%load_ext itikz
"""
Explanation: Adding TikZ figures
With itikz
End of explanation
"""
%%itikz --temp-dir --file-prefix simplex-example-
\documentclass[tikz]{standalone}
\usepackage{amsfonts}
\begin{document}
% from http://people.irisa.fr/Francois.Schwarzentruber/algo2/ notes
\usetikzlibrary{arrows,patterns,topaths,shadows,shapes,positioning}
\begin{tikzpicture}[scale=0.012, opacity=0.7]
\tikzstyle{point} = [fill=red, circle, inner sep=0.8mm];
\draw[->] (0, 0, 0) -- (300, 0, 0) node[right] {a};
\draw[->] (0, 0, 0) -- (0, 350, 0) node[above] {b};
\draw[->] (0, 0, 0) -- (0, 0, 300) node[below] {c};
\coordinate (O) at (0,0,0);
\coordinate (D) at (200,0,0);
\coordinate (E) at (200, 0, 200);
\coordinate (F) at (0, 0, 200);
\coordinate (G) at (0, 300,0);
\coordinate (C) at (200,200,0);
\coordinate (A) at (100,300, 0);
\coordinate (B) at (0,300, 100);
\draw[fill=blue!20] (O) -- (D) -- (E) -- (F) -- (O) -- cycle;
\draw[fill=blue!20] (D) -- (C) -- (E) -- cycle;
\draw[fill=blue!20] (G) -- (B) -- (F) -- (O) -- cycle;
\draw[fill=blue!20] (B) -- (A) -- (C) --(E) -- cycle;
\draw[fill=blue!20] (B) -- (F) -- (E) -- cycle;
\draw[fill=blue!20] (B) -- (A) -- (G) -- cycle;
\node[point] at (0,0,0) {}; % TODO make this argument of function
\end{tikzpicture}
\end{document}
"""
Explanation: For example, we can display a first figure, before trying to make things move:
End of explanation
"""
simplex_example_str = ""
def default_cost(a, b, c):
"""1*{a} + 6*{b} + 13*{c}"""
return 1*a + 6*b + 13*c
def show_tikz_figure_with_point(a=0, b=0, c=0, cost=default_cost):
# TODO generate nice LaTeX equations
if cost:
current_cost = cost(a, b, c)
cost_doc = cost.__doc__.format(a=a, b=b, c=c)
print(f"Coรปt = {cost_doc} = {current_cost}")
equation_latex = f"""\
Cout $f(a,b,c) = {cost_doc} = {current_cost}$.\
"""
display(Latex(equation_latex))
# now tikz
global simplex_example_str
simplex_example_str = r"""
\documentclass[tikz]{standalone}
\begin{document}
% from http://people.irisa.fr/Francois.Schwarzentruber/algo2/ notes
\usetikzlibrary{arrows,patterns,topaths,shadows,shapes,positioning}
\begin{tikzpicture}[scale=0.016, opacity=0.7]
\tikzstyle{point} = [fill=red, circle, inner sep=0.8mm];
\draw[->] (0, 0, 0) -- (300, 0, 0) node[right] {a};
\draw[->] (0, 0, 0) -- (0, 350, 0) node[above] {b};
\draw[->] (0, 0, 0) -- (0, 0, 300) node[below] {c};
\coordinate (O) at (0,0,0);
\coordinate (D) at (200,0,0);
\coordinate (E) at (200, 0, 200);
\coordinate (F) at (0, 0, 200);
\coordinate (G) at (0, 300,0);
\coordinate (C) at (200,200,0);
\coordinate (A) at (100,300, 0);
\coordinate (B) at (0,300, 100);
\draw[fill=blue!20] (O) -- (D) -- (E) -- (F) -- (O) -- cycle;
\draw[fill=blue!20] (D) -- (C) -- (E) -- cycle;
\draw[fill=blue!20] (G) -- (B) -- (F) -- (O) -- cycle;
\draw[fill=blue!20] (B) -- (A) -- (C) --(E) -- cycle;
\draw[fill=blue!20] (B) -- (F) -- (E) -- cycle;
\draw[fill=blue!20] (B) -- (A) -- (G) -- cycle;
\node[point] at (""" + f"{a}, {b}, {c}" + """) {};
\end{tikzpicture}
\end{document}
"""
#print(simplex_example_str)
# TODO: run this from this function?
#%itikz --temp-dir --file-prefix simplex-example- simplex_example_str
return get_ipython().run_line_magic(
"itikz", "--temp-dir --file-prefix simplex-example- simplex_example_str"
)
show_tikz_figure_with_point(0, 0, 0)
"""
Explanation: Now we can try to control the position of the current objective point:
a, b, c will stand for $x_1, x_2, x_3$.
End of explanation
"""
interact(
show_tikz_figure_with_point,
a = (-100, 300, 10),
b = (-100, 300, 10),
c = (-100, 300, 10),
cost = fixed(default_cost)
)
linprog_wrapper(problem2, callback=dummy_callback)
"""
Explanation: And by making this interactive, we can play with it.
<span style="color:red;">WARNING: even if the widgets are present in a static version of this page (in HTML format or on nbviewer.jupyter.org), the figure cannot be modified. If you want to experiment on your own, you have to run the notebook locally from your own Jupyter, or with MyBinder by clicking one of the following buttons:</span>
End of explanation
"""
|
amirziai/learning
|
algorithms/Spanning-Tree-with-Message-Passing.ipynb
|
mit
|
%matplotlib inline
import networkx as nx
"""
Explanation: Spanning Tree with Message Passing
End of explanation
"""
clique = nx.Graph()
clique.add_nodes_from([1, 2, 3])
clique.add_edges_from([(1, 2), (1, 3), (3, 2)])
nx.draw_networkx(clique)
"""
Explanation: A spanning tree of an undirected graph is a tree that includes all of the graph's vertices using the minimum possible number of edges.
One application of spanning trees is the Spanning Tree Protocol (STP), where we want to avoid loops in the topology.
We want to construct an algorithm that relies on messages passed between nodes rather than on knowledge of the overall topology.
Identifying the "root"
The idea is to minimize the distance from each node to the "root". The choice of root does not matter. For simplicity we'll designate the node with the lowest ID as the root.
This suggests that initially each node can think of itself as the root node and then send the state of its knowledge to neighbors.
Avoiding loops
3 node clique
End of explanation
"""
cycle = nx.Graph()
cycle.add_nodes_from([1, 2, 3, 4])
cycle.add_edges_from([(1, 2), (1, 3), (3, 4), (2, 4)])
nx.draw_networkx(cycle)
"""
Explanation: In this case we should remove the link between 2 and 3. In that case 3 and 2 have the minimum distance of 1 to the root node 1.
4 node cycle
End of explanation
"""
|
ueapy/ueapy.github.io
|
content/notebooks/2016-05-06-classes.ipynb
|
mit
|
s = 'hello world'
"""
Explanation: We start with the introduction from Python docs [1]
Compared with other programming languages, Pythonโs class mechanism adds classes with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3. Python classes provide all the standard features of Object Oriented Programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime, and can be modified further after creation.
OOP
Python supports object-oriented programming (OOP). The goals of OOP are [2]:
* to organize the code, and
* to re-use code in similar contexts.
Examples of classes
Using Python, you inevitably run into using classes, even if you don't create one yourself. Every object in Python is defined by its class and has class-specific attributes and methods.
Strings
For example, let's create a string:
End of explanation
"""
type(s)
"""
Explanation: Check its type:
End of explanation
"""
print(dir(s))
"""
Explanation: Then, using dir() function, we can print out all the methods of a str object.
End of explanation
"""
import numpy as np
a = np.zeros((10,10))
print(dir(a))
"""
Explanation: Note: a convenient way to pronounce __add__ is "dunder add", where dunder stands for double underscore
Arrays
Numpy arrays are a specific class as well, with a bunch of array-specific methods and attributes.
End of explanation
"""
class MyAwesomeClass:
pass
"""
Explanation: Defining a class
The simplest way to create a class:
End of explanation
"""
c = MyAwesomeClass()
c
"""
Explanation: Note: According to PEP8, class names should normally use the CapWords convention.
Now, create a variable using the just created class:
End of explanation
"""
class Creature:
def __init__(self, name, the_level):
self.name = name
self.level = the_level
def __repr__(self):
return "Creature: {} of level {}".format(
self.name, self.level
)
tiger = Creature('big evil tiger', 21)
tiger
"""
Explanation: Let's define a slightly more useful, but still very simple, class [3].
End of explanation
"""
class Dog:
tricks = [] # mistaken use of a class variable
def __init__(self, name):
self.name = name
def add_trick(self, trick):
self.tricks.append(trick)
F = Dog('Fido')
B = Dog('Buddy')
F.add_trick('roll over')
B.add_trick('play dead')
F.tricks
"""
Explanation: Note the difference in the output after we added a custom __repr__ method to our class.
Caution about using mutable objects
Shared data can have possibly surprising effects with involving mutable objects such as lists and dictionaries. For example, the tricks list in the following code should not be used as a class variable because just a single list would be shared by all Dog instances [1]:
End of explanation
"""
class Dog:
def __init__(self, name):
self.name = name
self.tricks = [] # creates a new empty list for each dog
def add_trick(self, trick):
self.tricks.append(trick)
F = Dog('Fido')
B = Dog('Buddy')
F.add_trick('roll over')
B.add_trick('play dead')
F.tricks
B.tricks
"""
Explanation: Correct design of the class should use an instance variable instead:
End of explanation
"""
import math  # needed by Vec2D.__abs__

class Vec2D:
def __init__(self, x, y):
self.x = x
self.y = y
def __add__(self, other):
return Vec2D(self.x + other.x, self.y + other.y)
def __sub__(self, other):
return Vec2D(self.x - other.x, self.y - other.y)
def __mul__(self, other):
return self.x*other.x + self.y*other.y
def __abs__(self):
return math.sqrt(self.x**2 + self.y**2)
def __str__(self):
return 'this is vector components: {0:g} {1:g}'.format(self.x, self.y)
    def __eq__(self, other):
        # added so that == compares components (otherwise Python falls back to identity)
        return self.x == other.x and self.y == other.y
    def __ne__(self, other):
        return self.x != other.x or self.y != other.y
"""
Explanation: More examples
The best way to understand the logic and convenience of OOP is by examining many examples of its use.
Class for vectors in the plane [4]
End of explanation
"""
u = Vec2D(0,1)
v = Vec2D(1,0)
w = Vec2D(1,1)
a = u + v
print(a)
a == w
a = u * v
print(a)
u == v
"""
Explanation: Let us play with some Vec2D objects:
End of explanation
"""
class Wind3D(object):
def __init__(self, u, v, w):
"""
Initialize a Wind3D instance
"""
if (u.shape != v.shape) or (u.shape != w.shape):
raise ValueError('u, v and w must be the same shape')
self.u = u.copy()
self.v = v.copy()
self.w = w.copy()
def magnitude(self):
"""
Calculate wind speed (magnitude of wind vector) and store it within the class
"""
self.mag = np.sqrt(self.u**2 + self.v**2 + self.w**2)
def kinetic_energy(self):
"""
Calculate KE and return it
"""
return 0.5*(self.u**2 + self.v**2 + self.w**2)
"""
Explanation: Wind field instance
Let's look at another example that can be useful in Atmospheric and Oceanic sciences. This Wind3D class is pretty simple, having only two specific methods besides __init__.
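As a minimal usage sketch of the Wind3D class defined above (the array values below are made up purely for illustration):
    u = np.ones((10, 10))
    v = 2 * np.ones((10, 10))
    w = np.zeros((10, 10))
    wind = Wind3D(u, v, w)
    wind.magnitude()            # stores the wind speed in wind.mag
    ke = wind.kinetic_energy()  # returns the kinetic energy array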
End of explanation
"""
from IPython.display import HTML  # import assumed missing here; `html` is assumed to be defined earlier in the original notebook
HTML(html)
"""
Explanation: Other applications
Twitter bot example: AtmosSciBot
The bot in action: https://twitter.com/AtmosSciBot
Source code: https://github.com/dennissergeev/atmosscibot
References
[1] Python documentation: https://docs.python.org/3/tutorial/classes.html
[2] Short example from Scientific Python tutorial: http://www.scipy-lectures.org/intro/language/oop.html
[3] Python Jumpstart Course: https://github.com/mikeckennedy/python-jumpstart-course-demos
[4] H.P. Langtangen (2014) "A Primer on Scientific Programming with Python": http://hplgit.github.io/primer.html/doc/pub/half/book.pdf
Suggested reading
Chapters 7 and 8 from "A Hands-On Introduction to Using Python in the Atmospheric and Oceanic Sciences": http://www.johnny-lin.com/pyintro/
End of explanation
"""
|
SebastianBocquet/pygtc
|
Planck-vs-WMAP.ipynb
|
mit
|
%matplotlib inline
%config InlineBackend.figure_format = 'retina' # For mac users with Retina display
import numpy as np
from matplotlib import pyplot as plt
import pygtc
"""
Explanation: Example 2: Making a GTC/triangle plot with Planck and WMAP data!
This example is built from a jupyter notebook hosted on the pyGTC GitHub repository.
Download the data
The full set of chains from the Planck 2015 release is available at http://pla.esac.esa.int/pla/#cosmology. You will want to download COM_CosmoParams_fullGrid_R2.00.tar.gz. Careful, that's a huge file to download (3.6 GB)!
Extract everything into a directory, cd into that directory, and run this notebook.
End of explanation
"""
WMAP, Planck = [],[]
for i in range(1,5):
WMAP.append(np.loadtxt('./base/WMAP/base_WMAP_'+str(i)+'.txt'))
Planck.append(np.loadtxt('./base/plikHM_TT_lowTEB/base_plikHM_TT_lowTEB_'+str(i)+'.txt'))
# Copy all four chains into a single array
WMAPall = np.concatenate((WMAP[0],WMAP[1],WMAP[2],WMAP[3]))
Planckall = np.concatenate((Planck[0],Planck[1],Planck[2],Planck[3]))
"""
Explanation: Read in and format the data
End of explanation
"""
WMAPplot = WMAPall[:,[2,3,4,5,6,7,9,15]]
Planckplot = Planckall[:,[2,3,4,5,6,7,23,29]]
# Labels, pyGTC supports Tex enclosed in $..$
params = ('$\Omega_\mathrm{b}h^2$',
'$\Omega_\mathrm{c}h^2$',
'$100\\theta_\mathrm{MC}$',
'$\\tau$',
'$\ln(10^{10}A_s)$',
'$n_s$','$H_0$',
'$\\sigma_8$')
chainLabels = ('$Planck$ (TT+lowTEB)','WMAP')
"""
Explanation: Select the parameters and make labels
In the chain directories, there are .paramnames files that allow you
to find the parameters you are interested in.
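For instance, a quick way to locate those files in the extracted grid might be (a sketch that assumes the directory layout used in the loading code above):
    import glob
    for fname in glob.glob('./base/*/*.paramnames'):
        print(fname)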
End of explanation
"""
GTC = pygtc.plotGTC(chains=[Planckplot,WMAPplot],
weights=[Planckall[:,0],
WMAPall[:,0]],
paramNames=params,
chainLabels=chainLabels,
colorsOrder=('greens','blues'),
figureSize='APJ_page',
plotName='Planck-vs-WMAP.pdf')
"""
Explanation: Make the GTC!
Produce the plot and save it as Planck-vs-WMAP.pdf.
End of explanation
"""
|
mgalardini/2017_python_course
|
notebooks/[4a]-Exercises-solutions.ipynb
|
gpl-2.0
|
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Data visualization: exercises
End of explanation
"""
plt.figure(figsize=(18, 7))
words = {}
for line in open('../data/aristotle.txt'):
for word in line.rstrip().split():
words[word] = words.get(word, 0)
words[word] += 1
plt.bar(range(len(words)),
sorted(words.values()))
plt.xlabel('word')
plt.ylabel('occurrences');
"""
Explanation: Can you plot a histogram of word frequencies for the data/aristotle.txt file?
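As a side note, the word counting above could also be written with collections.Counter; a minimal equivalent sketch:
    from collections import Counter
    words = Counter()
    for line in open('../data/aristotle.txt'):
        words.update(line.rstrip().split())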
End of explanation
"""
import pandas as pd
ab = pd.read_table('../data/abundance.tsv')
ab.head()
plt.figure(figsize=(12, 18))
plt.subplot(321)
plt.plot(ab['length'],
ab['eff_length'],
'k.')
plt.xlabel('length')
plt.ylabel('eff_length')
plt.subplot(322)
plt.plot(ab['length'],
ab['est_counts'],
'k.')
plt.xlabel('length')
plt.ylabel('est_counts')
plt.subplot(323)
plt.plot(ab['length'],
ab['tpm'],
'k.')
plt.xlabel('length')
plt.ylabel('tpm')
plt.subplot(324)
plt.plot(ab['eff_length'],
ab['est_counts'],
'k.')
plt.xlabel('eff_length')
plt.ylabel('est_counts')
plt.subplot(325)
plt.plot(ab['eff_length'],
ab['tpm'],
'k.')
plt.xlabel('eff_length')
plt.ylabel('tpm')
plt.subplot(326)
plt.plot(ab['est_counts'],
ab['tpm'],
'k.')
plt.xlabel('est_counts')
plt.ylabel('tpm');
import seaborn as sns
# much easier with seaborn
sns.pairplot(ab.set_index('target_id'));
"""
Explanation: Can you investigate the relationships between the variables of the data/abundance.tsv file? Which plot type is best for this task?
Can you investigate the relationships between the variables of the data/abundance.tsv file in a single figure, using subplots?
End of explanation
"""
plt.figure(figsize=(10, 10))
plt.plot(ab['length'],
ab['eff_length'],
'.',
label='eff_length')
plt.plot(ab['length'],
ab['est_counts'],
'.',
         label='est_counts')
plt.plot(ab['length'],
ab['tpm'],
'.',
label='tpm')
plt.legend(loc='best')
plt.xlabel('length')
plt.ylabel('other variable');
"""
Explanation: Can you investigate the relationships between the variables of the data/abundance.tsv file in a single plot? You might want to use different colors...
End of explanation
"""
vowels = set('aeiouy')
dictionary1 = {}
# key: word
# value: length of the word
dictionary2 = {}
# key: word
# value: number of vowels
for line in open('../data/unixdict.txt'):
word = line.rstrip()
dictionary1[word] = len(word)
dictionary2[word] = len(set(word).intersection(vowels))
plt.figure(figsize=(7, 7))
plt.plot(list(dictionary1.values()),
         list(dictionary2.values()),
'ko')
plt.xlabel('word length')
plt.ylabel('number of vowels');
"""
Explanation: Can you plot the relationship between word length and number of vowels in the data/unixdict.txt file?
End of explanation
"""
plt.figure(figsize=(7, 7))
plt.plot(ab['length'],
ab['eff_length'],
'k.')
plt.xlabel('length')
plt.ylabel('eff_length')
ax = plt.twinx()
ax.plot(ab['length'],
ab['tpm'],
'r.')
ax.set_ylabel('tpm');
"""
Explanation: Can you plot three variables (with very different scales) in a single plot? You can google to look for an answer...
End of explanation
"""
plt.figure(figsize=(2, 7))
plt.boxplot(ab['length'])
# restrict the range of the plot
plt.ylim(0, 10000)
"""
Explanation: Can you figure out how to make boxplots out of one of the variables of the data/abundance.tsv file?
End of explanation
"""
|
AlCap23/Thesis
|
Python/FOTD-Design-Simple.ipynb
|
gpl-3.0
|
# Import the needed packages, SymPy
import sympy as sp
from sympy import init_printing
init_printing()
# Define the variables
# Complex variable
s = sp.symbols('s')
# FOTD Coefficients
T1,T2,T3,T4 = sp.symbols('T_11 T_12 T_21 T_22')
K1,K2,K3,K4 = sp.symbols('K_11 K_12 K_21 K_22')
# Time Delay Coefficients
L1,L2,L3,L4 = sp.symbols('L_11 L_12 L_21 L_22')
# Controller variables for the diagonal controller for Q
C1D, C2D = sp.symbols('C_11^* C_22^*')
# Controller variables for the diagonal controller for G
C1, C2 = sp.symbols('C_11 C_22')
# Proportional Gain
kp1,kp2= sp.symbols('k_P1 k_P2')
# Integral Gain
ki1,ki2 = sp.symbols('k_I1 k_I2')
# Vectorize
TV = [T1,T2,T3,T4]
KV = [K1,K2,K3,K4]
LV = [L1,L2,L3,L4]
QV = [[C1D,0],[0,C2D]]
GV = [[C1,0],[0,C2]]
PV = [[kp1,0],[0,kp2]]
IV = [[ki1,0],[0,ki2]]
# Define a FOTD
def FOTD(K,T,L):
return K/(T*s+1) * sp.exp(-L*s)
#Define a Matrix of FOTD with diagonal and antidiagonal part
G = sp.zeros(2)
for i in range(0,4):
G[i]= FOTD(KV[i],TV[i],LV[i])
GD = sp.Matrix([[G[0],0],[0,G[3]]])
GA = sp.Matrix([[0,G[1]],[G[2],0]])
#Define the diagonal controller in Q
KQ = sp.Matrix(QV)
#Define the diagonal controller in G
KG = sp.Matrix(GV)
# Define the Proportional and Integral Controller for later use
KP = sp.Matrix(PV)
KI = sp.Matrix(IV)
PI = KP+KI*(1/s)
# Define the decoupler with diagonal and antidiagonal part
D = G.subs(s,0)
D = sp.simplify(D**-1)
DD = sp.Matrix([[D[0],0],[0,D[3]]])
DA = sp.Matrix([[0,D[1]],[D[2],0]])
"""
Explanation: First Order Time Delay Controller - Simpler Interpretation
In the paper 'Design of Decoupled Controllers for MIMO Systems' by Åström, Johansson and Wang, decoupling for small frequencies is described.
The following calculation is based on the First Order Time Delay (FOTD) identification method, which results in a Two Input Two Output (TITO) system in feedforward representation.
To decouple the system, a Taylor series around the steady state s=0 is used to derive the interaction from one input to another output. Since we always approximate the system with an FOTD model, we can derive the interaction:
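Each entry of the transfer function matrix is modelled as (matching the FOTD helper defined in the code above)
$$ G_{ij}(s) = \frac{K_{ij}}{T_{ij}\,s + 1} \, e^{-L_{ij} s} $$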
End of explanation
"""
# Define the splitter for static decoupling
SS = D*DD.inv()
"""
Explanation: Design the Splitter
The splitter is an equivalent interpretation of the decoupler but holds a much more intuitive interpretation of the problem of decoupling with regards to the diagonal controller designed from the transfer function matrix.
It is given as
$S = D D_D^{-1}$
End of explanation
"""
# Get the Gamma Matrix with the assumption of small frequencies (first Taylor coefficient around s=0)
Gamma = sp.simplify(sp.diff(GD*DA*DD.inv()+GA,s).subs(s,0))*s
# Get the Equations for H
h12, h21 = sp.symbols('h_12 h_21')
# Get the safety factor ( interpreted as maximum sensitivity)
sigma = sp.symbols('\sigma')
HMax = sp.Matrix([[0,h12],[h21,0]])
1/sigma*Gamma.inv()*HMax,Gamma
GA, (GA+DA*GD).inv()
"""
Explanation: Get the Interaction
Since we want to detune with respect to the interaction of the antidiagonal parts, we can directly identify the first coefficients of the Taylor series
$\Gamma_A = \left.\frac{d}{ds} \left[G_D D_A D_D^{-1} + G_A\right]\right|_{s=0}$
and solve for the interaction
$K_I \leq \frac{1}{\sigma} \Gamma_A^{-1} H_{A,Max}$
End of explanation
"""
# Approximate FOTD with Taylor series
def AFOTD(K,T,L):
return K/(T*s+1) * (1-s*L)
#Define a Matrix of FOTD with diagonal and antidiagonal part
G = sp.zeros(2)
for i in range(0,4):
G[i]= AFOTD(KV[i],TV[i],LV[i])
GD = sp.Matrix([[G[0],0],[0,G[3]]])
GA = sp.Matrix([[0,G[1]],[G[2],0]])
# Define a decoupler
D = sp.simplify(G.inv())
DD = sp.Matrix([[D[0],0],[0,D[3]]])
DA = sp.Matrix([[0,D[1]],[D[2],0]])
# Design the splitter
S = D*DD.inv() - sp.eye(2)
S
# Get the Gamma Matrix as proof for decoupling
Gamma = sp.simplify(sp.diff(GD*DA*DD.inv()+GA,s).subs(s,0))
Gamma
"""
Explanation: Design a Dynamic Decoupler
End of explanation
"""
|
tensorflow/lucid
|
notebooks/building-blocks/SemanticDictionary.ipynb
|
apache-2.0
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip install --quiet lucid==0.0.5
!npm install -g svelte-cli@2.2.0
import numpy as np
import tensorflow as tf
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render
from lucid.misc.io import show, load
from lucid.misc.io.showing import _image_url
import lucid.scratch.web.svelte as lucid_svelte
"""
Explanation: Semantic Dictionaries -- Building Blocks of Interpretability
This colab notebook is part of our Building Blocks of Interpretability series exploring how interpretability techniques combine to explain neural networks. If you haven't already, make sure to look at the corresponding paper as well!
This notebook studies semantic dictionaries. The basic idea of semantic dictionaries is to marry neuron activations to visualizations of those neurons, transforming them from abstract vectors to something more meaningful to humans. Semantic dictionaries can also be applied to other bases, such as rotated versions of activations space that try to disentangle neurons.
<br>
<img src="https://storage.googleapis.com/lucid-static/building-blocks/notebook_heroes/semantic-dictionary.jpeg" width="648"></img>
<br>
This tutorial is based on Lucid, a library for visualizing neural networks. Lucid is a kind of spiritual successor to DeepDream, but provides flexible abstractions so that it can be used for a wide range of interpretability research.
Note: The easiest way to use this tutorial is as a colab notebook, which allows you to dive in with no setup. We recommend you enable a free GPU by going:
Runtime → Change runtime type → Hardware Accelerator: GPU
Thanks for trying Lucid!
Install / Import / Load
This code depends on Lucid (our visualization library) and svelte (a web framework). The following cell will install both of them, along with dependencies such as TensorFlow, and then import them as appropriate.
End of explanation
"""
%%html_define_svelte SemanticDict
<div class="figure">
<div class="input_image">
<div class="image" style="background-image: url({{image_url}}); z-index: -10;"></div>
<svg class="pointer_container" viewBox="0 0 {{N[0]}} {{N[1]}}">
{{#each xs as x}}
{{#each ys as y}}
<rect x={{x}} y={{y}} width=1 height=1
class={{(x == pos[0] && y == pos[1])? "selected" : "unselected"}}
on:mouseover="set({pos: [x,y]})"></rect>
{{/each}}
{{/each}}
</svg>
</div>
<div class="dict" >
{{#each present_acts as act, act_ind}}
<div class="entry">
<div class="sprite" style="background-image: url({{spritemap_url}}); width: {{sprite_size}}px; height: {{sprite_size}}px; background-position: -{{sprite_size*(act.n%sprite_n_wrap)}}px -{{sprite_size*Math.floor(act.n/sprite_n_wrap)}}px; --info: {{act.n}};"></div>
<div class="value" style="height: {{sprite_size*act.v/1000.0}}px;"></div>
</div>
{{/each}}
</div>
</div>
<style>
.figure {
padding: 10px;
width: 1024px;
}
.input_image {
display: inline-block;
width: 224px;
height: 224px;
}
.input_image .image, .input_image .pointer_constainer {
position: absolute;
width: 224px;
height: 224px;
border-radius: 8px;
}
.pointer_container rect {
opacity: 0;
}
.pointer_container .selected {
opacity: 1;
fill: none;
stroke: hsl(24, 100%, 50%);
stroke-width: 0.1px;
}
.dict {
height: 128px;
display: inline-block;
vertical-align: bottom;
padding-bottom: 64px;
margin-left: 64px;
}
.entry {
margin-top: 9px;
margin-right: 32px;
display: inline-block;
}
.value {
display: inline-block;
width: 32px;
border-radius: 8px;
background: #777;
}
.sprite {
display: inline-block;
border-radius: 8px;
}
.dict-text {
display: none;
font-size: 24px;
color: #AAA;
margin-bottom: 20px;
}
</style>
<script>
function range(n){
return Array(n).fill().map((_, i) => i);
}
export default {
data () {
return {
spritemap_url: "",
sprite_size: 64,
sprite_n_wrap: 1e8,
image_url: "",
activations: [[[{n: 0, v: 1}]]],
pos: [0,0]
};
},
computed: {
present_acts: (activations, pos) => activations[pos[1]][pos[0]],
N: activations => [activations.length, activations[0].length],
xs: (N) => range(N[0]),
ys: (N) => range(N[1])
},
helpers: {range}
};
</script>
"""
Explanation: Semantic Dictionary Code
Defining the interface
First, we define our "semantic dictionary" interface as a svelte component. This makes it easy to manage state, like which position we're looking at.
End of explanation
"""
layer_spritemap_sizes = {
'mixed3a' : 16,
'mixed3b' : 21,
'mixed4a' : 22,
'mixed4b' : 22,
'mixed4c' : 22,
'mixed4d' : 22,
'mixed4e' : 28,
'mixed5a' : 28,
}
def googlenet_spritemap(layer):
assert layer in layer_spritemap_sizes
size = layer_spritemap_sizes[layer]
url = "https://storage.googleapis.com/lucid-static/building-blocks/googlenet_spritemaps/sprite_%s_channel_alpha.jpeg" % layer
return size, url
"""
Explanation: Spritemaps
In order to use the semantic dictionaries, we need "spritemaps" of channel visualizations.
These visualization spritemaps are large grids of images (such as this one) that visualize every channel in a layer.
We provide spritemaps for GoogLeNet because making them takes a few hours of GPU time, but
you can make your own channel spritemaps to explore other models. Check out other notebooks on how to
make your own neuron visualizations.
It's also worth noting that GoogLeNet has unusually semantically meaningful neurons. We don't know why this is -- although it's an active area of research for us. More sophisticated interfaces, such as neuron groups, may work better for networks where meaningful ideas are more entangled or less aligned with the neuron directions.
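As a quick check of the helper defined above (any layer listed in the size table works):
    spritemap_n, spritemap_url = googlenet_spritemap('mixed4d')
    print(spritemap_n, spritemap_url)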
End of explanation
"""
googlenet = models.InceptionV1()
googlenet.load_graphdef()
def googlenet_semantic_dict(layer, img_url):
img = load(img_url)
# Compute the activations
with tf.Graph().as_default(), tf.Session():
t_input = tf.placeholder(tf.float32, [224, 224, 3])
T = render.import_model(googlenet, t_input, t_input)
acts = T(layer).eval({t_input: img})[0]
# Find the most interesting position for our initial view
max_mag = acts.max(-1)
max_x = np.argmax(max_mag.max(-1))
max_y = np.argmax(max_mag[max_x])
# Find appropriate spritemap
spritemap_n, spritemap_url = googlenet_spritemap(layer)
# Actually construct the semantic dictionary interface
# using our *custom component*
lucid_svelte.SemanticDict({
"spritemap_url": spritemap_url,
"sprite_size": 110,
"sprite_n_wrap": spritemap_n,
"image_url": _image_url(img),
"activations": [[[{"n": n, "v": float(act_vec[n])} for n in np.argsort(-act_vec)[:4]] for act_vec in act_slice] for act_slice in acts],
"pos" : [max_y, max_x]
})
"""
Explanation: User facing constructor
Now we'll create a convenient API for creating semantic dictionary visualizations. It will compute the network activations for an image, grab an appropriate spritemap, and render the interface.
End of explanation
"""
googlenet_semantic_dict("mixed4d", "https://storage.googleapis.com/lucid-static/building-blocks/examples/dog_cat.png")
googlenet_semantic_dict("mixed4d", "https://storage.googleapis.com/lucid-static/building-blocks/examples/flowers.png")
"""
Explanation: Now let's make some semantic dictionaries!
End of explanation
"""
|
massimo-nocentini/simulation-methods
|
notes/matrices-functions/exp-Pascal.ipynb
|
mit
|
from sympy import *
from sympy.abc import n, i, N, x, lamda, phi, z, j, r, k, a, alpha
from commons import *
from matrix_functions import *
from sequences import *
import functions_catalog
init_printing()
"""
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
Massimo Nocentini<br>
<small>
<br>February 4, 2018: exponential P
</small>
</div>
</p>
<br>
<br>
<div align="center">
<b>Abstract</b><br>
Exponential $\mathcal{P}$, according to Paul Barry's book.
</div>
End of explanation
"""
m = 10
eP = Matrix(m, m, lambda n,k: factorial(n)*binomial(n,k)/factorial(k))
eP
inspect(eP)
eP_pm = production_matrix(eP)
eP_epm = production_matrix(eP, exp=True)
eP_pm, eP_epm
F = Matrix(m, m, diagonal_func_matrix(factorial))
F_inv = F**(-1)
F, F_inv
"""
Explanation:
End of explanation
"""
B = F_inv * eP * F
B
U = Matrix(m, m, rows_shift_matrix(by=1))
U
F_inv * U * F
F_inv * U * F * B
B**(-1) * F_inv * U * F * B
F * B**(-1) * F_inv * U * F * B * F_inv
"""
Explanation: In order to factorize eP as F U F^{-1}, for some matrix U
End of explanation
"""
P = Matrix(m, m, binomial)
P_bar = Matrix(m, m, lambda i, j: binomial(i, j) if j < i else 0)
P_bar
production_matrix(P_bar[1:,:-1], exp=False), production_matrix(P_bar[1:,:-1], exp=True)
j=3
(P_bar**j).applyfunc(lambda i: i/factorial(j))
"""
Explanation:
End of explanation
"""
|
drivendata/data-science-is-software
|
notebooks/labs/3.0-refactoring-solution.ipynb
|
mit
|
%matplotlib inline
from __future__ import print_function
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
PROJ_ROOT = os.path.join(os.pardir, os.pardir)
"""
Explanation: <table style="width:100%; border: 0px solid black;">
<tr style="width: 100%; border: 0px solid black;">
<td style="width:75%; border: 0px solid black;">
<a href="http://www.drivendata.org">
<img src="https://s3.amazonaws.com/drivendata.org/kif-example/img/dd.png" />
</a>
</td>
</tr>
</table>
Data Science is Software
Developer #lifehacks for the Jupyter Data Scientist
Section 3: Refactoring for reusability
End of explanation
"""
def load_pumps_data(values_path, labels_path):
# YOUR CODE HERE
pass
values = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_values.csv")
labels = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_labels.csv")
df = load_pumps_data(values, labels)
assert df.shape == (59400, 40)
#SOLUTION
def load_pumps_data(values_path, labels_path):
train = pd.read_csv(values_path, index_col='id', parse_dates=["date_recorded"])
labels = pd.read_csv(labels_path, index_col='id')
return train.join(labels)
values = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_values.csv")
labels = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_labels.csv")
df = load_pumps_data(values, labels)
assert df.shape == (59400, 40)
"""
Explanation: Use debugging tools throughout!
Don't forget all the fun debugging tools we covered while you work on these exercises.
%debug
%pdb
import q;q.d()
And (if necessary) %prun
Exercise 1
You'll notice that our dataset actually has two different files, pumps_train_values.csv and pumps_train_labels.csv. We want to load both of these together in a single DataFrame for our exploratory analysis. Create a function that:
- Reads both of the csvs
- uses the id column as the index
- parses dates of the date_recorded columns
- joins the labels and the training set on the id
- returns the complete dataframe
End of explanation
"""
def clean_raw_data(df):
""" Takes a dataframe and performs four steps:
- Selects columns for modeling
- For numeric variables, replaces 0 values with mean for that region
- Fills invalid construction_year values with the mean construction_year
- Converts strings to categorical variables
:param df: A raw dataframe that has been read into pandas
:returns: A dataframe with the preprocessing performed.
"""
pass
def replace_value_with_grouped_mean(df, value, column, to_groupby):
""" For a given numeric value (e.g., 0) in a particular column, take the
mean of column (excluding value) grouped by to_groupby and return that
column with the value replaced by that mean.
:param df: The dataframe to operate on.
:param value: The value in column that should be replaced.
:param column: The column in which replacements need to be made.
:param to_groupby: Groupby this variable and take the mean of column.
Replace value with the group's mean.
:returns: The data frame with the invalid values replaced
"""
pass
#SOLUTION
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
import os
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(PROJ_ROOT, 'src')
sys.path.append(src_dir)
# import my method from the source code
%aimport features.preprocess_solution
from features.preprocess_solution import clean_raw_data
cleaned_df = clean_raw_data(df)
# verify construction year
assert (cleaned_df.construction_year > 1000).all()
# verify filled in other values
for numeric_col in ["population", "longitude", "latitude"]:
assert (cleaned_df[numeric_col] != 0).all()
# verify the types are in the expected types
assert (cleaned_df.dtypes
.astype(str)
.isin(["int64", "float64", "category"])).all()
# check some actual values
assert cleaned_df.latitude.mean() == -5.970642969008563
assert cleaned_df.longitude.mean() == 35.14119354200863
assert cleaned_df.population.mean() == 277.3070009774711
"""
Explanation: Exercise 2
Now that we've loaded our data, we want to do some pre-processing before we model. From inspection of the data, we've noticed that there are some numeric values that are probably not valid that we want to replace.
Select the relevant columns for modeling. For the purposes of this exercise, we'll select:
useful_columns = ['amount_tsh',
'gps_height',
'longitude',
'latitude',
'region',
'population',
'construction_year',
'extraction_type_class',
'management_group',
'quality_group',
'source_type',
'waterpoint_type',
'status_group']
Replace longitude, and population where it is 0 with mean for that region.
zero_is_bad_value = ['longitude', 'population']
Replace the latitude where it is -2E-8 (a different bad value) with the mean for that region.
other_bad_value = ['latitude']
Replace construction_year less than 1000 with the mean construction year.
Convert object type (i.e., string) variables to categoricals.
Convert the label column into a categorical variable
A skeleton for this work is below where clean_raw_data will call replace_value_with_grouped_mean internally.
Copy and Paste the skeleton below into a Python file called preprocess.py in src/features/. Import and autoload the methods from that file to run tests on your changes in this notebook.
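For reference, a minimal sketch of how replace_value_with_grouped_mean could be implemented (the actual solution lives in src/features/preprocess_solution.py and may differ in details):
    def replace_value_with_grouped_mean(df, value, column, to_groupby):
        # mean of `column` per group, computed while ignoring the bad value
        group_means = (df[df[column] != value]
                       .groupby(to_groupby)[column]
                       .mean())
        # map each row's group to its mean and substitute where needed
        fill_values = df[to_groupby].map(group_means)
        bad_rows = df[column] == value
        df.loc[bad_rows, column] = fill_values[bad_rows]
        return df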
End of explanation
"""
def logistic(df):
""" Trains a multinomial logistic regression model to predict the
status of a water pump given characteristics about the pump.
:param df: The dataframe with the features and the label.
:returns: A trained GridSearchCV classifier
"""
pass
#SOLUTION
#import my method from the source code
%aimport model.train_model_solution
from model.train_model_solution import logistic
%%time
clf = logistic(cleaned_df)
assert clf.best_score_ > 0.5
# Just for fun, let's profile the whole stack and see what's slowest!
%prun logistic(clean_raw_data(load_pumps_data(values, labels)))
"""
Explanation: Exercise 3
Now that we've got a feature matrix, let's train a model! Add a function as defined below to the src/model/train_model.py
The function should use sklearn.linear_model.LogisticRegression to train a logistic regression model. In a dataframe with categorical variables, pd.get_dummies will do the encoding that can be passed to sklearn.
The LogisticRegression class in sklearn handles multiclass models automatically, so no need to use get_dummies on status_group.
Finally, this method should return a GridSearchCV object that has been run with the following parameters for a logistic regression model:
params = {'C': [0.1, 1, 10]}
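A minimal sketch of what such a function could look like (the actual solution imported in the solution cell lives in src/model/train_model_solution.py and may differ):
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    def logistic(df):
        features = pd.get_dummies(df.drop('status_group', axis=1))
        labels = df.status_group
        params = {'C': [0.1, 1, 10]}
        clf = GridSearchCV(LogisticRegression(), params)
        clf.fit(features, labels)
        return clf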
End of explanation
"""
|
astro4dev/OAD-Data-Science-Toolkit
|
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/01. Introduccion.ipynb
|
gpl-3.0
|
print(10)
print("Hola")
print("Hola","como","estas")
print("Hola como estas")
print("Uno mas uno es:",2)
# This is a comment
print("Uno mas uno es:",2) # This one too
"""
Explanation: Hunting Planets with Python
Have you ever wondered how scientists find planets in other solar systems?
Summary
In this course students will write their first lines of Python code, which they will use to form images of disks around stars, the places where exoplanets form. The data we will work with are recent Hubble Space Telescope images of the surroundings of the star known as Fomalhaut, where an exoplanet was detected in a direct image for the first time in 2007. At the end of the course, students will have the basic concepts of programming and image processing, which they can apply to any scientific or technological field they wish to pursue in the future.
<img src="img/fomal1.jpg">
Programming
What is the most important skill in programming?
(a) Thinking like a computer.
(b) Writing code very well.
(c) Being able to solve problems.
(d) Being very good at mathematics.
An algorithm is:
(a) A solution to a problem that can be solved by a computer.
(b) A step-by-step list of instructions that, if followed exactly, solve the problem at hand.
(c) A series of instructions implemented in a programming language.
(d) A special type of notation used by programmers.
Source code is another name for:
(a) The instructions in a program, stored in a file.
(b) The language you are programming in (for example Python).
(c) The environment or tool in which you are programming.
(d) The number or "code" you must give each program to tell the computer what to do.
What is the difference between a high-level and a low-level programming language?
(a) It is high-level if you are standing up and low-level if you are sitting down.
(b) It is high-level if you are programming for a computer and low-level if you are programming for a phone or a tablet.
(c) It is high-level if the language is closer to human language and low-level if it is closer to machine language.
(d) It is high-level if it is easy to program in and the programs are short; it is low-level if it is hard and the programs are long.
A program is:
(a) A sequence of instructions that specifies how to perform a computation.
(b) Something on television, like a soap opera or a game show.
(c) A computation, even a symbolic one.
(d) The same as an algorithm.
Debugging is:
(a) Finding programming errors and fixing them.
(b) Killing all the bugs in the house.
(c) Starting to write a program.
(d) Deworming an animal.
Our first program
The print() function shows us the content of the object we are interested in. Its argument is whatever goes inside the parentheses, and the result is the content of that argument.
End of explanation
"""
print("Hola" # Error de sintaxis
print(1/0) # Error de ejecuciรณn
print("Uno mas uno es:",1*1) # Error semรกntico
"""
Explanation: What are comments for?
(a) To tell the computer what you mean in your program.
(b) So that the people reading your code understand what the program does.
(c) Nothing; they are superfluous information that is not needed.
(d) Nothing in a short program. They are only needed for large programs.
Errors
End of explanation
"""
|
m2dsupsdlclass/lectures-labs
|
labs/06_deep_nlp/NLP_word_vectors_classification_rendered.ipynb
|
mit
|
import numpy as np
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
sample_idx = 1000
print(newsgroups_train["data"][sample_idx])
target_names = newsgroups_train["target_names"]
target_id = newsgroups_train["target"][sample_idx]
print("Class of previous message:", target_names[target_id])
"""
Explanation: Text classification using Neural Networks
The goal of this notebook is to learn to use Neural Networks for text classification.
In this notebook, we will:
- Train a shallow model with learning embeddings
- Download pre-trained embeddings from Glove
- Use these pre-trained embeddings
However keep in mind:
- Deep Learning can be better on text classification than simpler ML techniques, but only on very large datasets and well designed/tuned models.
- We won't be using the most efficient (in terms of computing) techniques, as Keras is good for prototyping but rather inefficient for training small embedding models on text.
- The following projects can replicate similar word embedding models much more efficiently: word2vec and gensim's word2vec (self-supervised learning only), fastText (both supervised and self-supervised learning), Vowpal Wabbit (supervised learning).
- Plain shallow sparse TF-IDF bigrams features without any embedding and Logistic Regression or Multinomial Naive Bayes is often competitive in small to medium datasets.
20 Newsgroups Dataset
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups http://qwone.com/~jason/20Newsgroups/
End of explanation
"""
target_names
"""
Explanation: Here are all the possible classes:
End of explanation
"""
from keras.preprocessing.text import Tokenizer
MAX_NB_WORDS = 20000
# get the raw text data
texts_train = newsgroups_train["data"]
texts_test = newsgroups_test["data"]
# finally, vectorize the text samples into a 2D integer tensor
tokenizer = Tokenizer(nb_words=MAX_NB_WORDS, char_level=False)
tokenizer.fit_on_texts(texts_train)
sequences = tokenizer.texts_to_sequences(texts_train)
sequences_test = tokenizer.texts_to_sequences(texts_test)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
"""
Explanation: Preprocessing text for the (supervised) CBOW model
We will implement a simple classification model in Keras. Raw text requires (sometimes a lot of) preprocessing.
The following cells use Keras to preprocess text:
- using a tokenizer. You may use different tokenizers (from scikit-learn, NLTK, custom Python function etc.). This converts the texts into sequences of indices representing the 20000 most frequent words
- sequences have different lengths, so we pad them with 0s up to a length of 1000 (note that by default Keras pads at the beginning of the sequence)
- we convert the output classes to 1-hot encodings
End of explanation
"""
sequences[0]
"""
Explanation: Tokenized sequences are converted to list of token ids (with an integer code):
End of explanation
"""
type(tokenizer.word_index), len(tokenizer.word_index)
index_to_word = dict((i, w) for w, i in tokenizer.word_index.items())
" ".join([index_to_word[i] for i in sequences[0]])
"""
Explanation: The tokenizer object stores a mapping (vocabulary) from word strings to token ids that can be inverted to reconstruct the original message (without formatting):
End of explanation
"""
seq_lens = [len(s) for s in sequences]
print("average length: %0.1f" % np.mean(seq_lens))
print("max length: %d" % max(seq_lens))
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(seq_lens, bins=50);
"""
Explanation: Let's have a closer look at the tokenized sequences:
End of explanation
"""
plt.hist([l for l in seq_lens if l < 3000], bins=50);
"""
Explanation: Let's zoom on the distribution of regular sized posts. The vast majority of the posts have less than 1000 symbols:
End of explanation
"""
from keras.preprocessing.sequence import pad_sequences
MAX_SEQUENCE_LENGTH = 1000
# pad sequences with 0s
x_train = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
x_test = pad_sequences(sequences_test, maxlen=MAX_SEQUENCE_LENGTH)
print('Shape of data tensor:', x_train.shape)
print('Shape of data test tensor:', x_test.shape)
from keras.utils.np_utils import to_categorical
y_train = newsgroups_train["target"]
y_test = newsgroups_test["target"]
y_train = to_categorical(np.asarray(y_train))
print('Shape of label tensor:', y_train.shape)
"""
Explanation: Let's truncate and pad all the sequences to 1000 symbols to build the training set:
End of explanation
"""
from keras.layers import Dense, Input, Flatten
from keras.layers import GlobalAveragePooling1D, Embedding
from keras.models import Model
EMBEDDING_DIM = 50
N_CLASSES = len(target_names)
# input: a sequence of MAX_SEQUENCE_LENGTH integers
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedding_layer = Embedding(MAX_NB_WORDS, EMBEDDING_DIM,
input_length=MAX_SEQUENCE_LENGTH,
trainable=True)
embedded_sequences = embedding_layer(sequence_input)
average = GlobalAveragePooling1D()(embedded_sequences)
predictions = Dense(N_CLASSES, activation='softmax')(average)
model = Model(sequence_input, predictions)
model.compile(loss='categorical_crossentropy',
              optimizer='adam', metrics=['acc'])
model.fit(x_train, y_train, validation_split=0.1,
nb_epoch=10, batch_size=128, verbose=2)
"""
Explanation: A simple supervised CBOW model in Keras
The following computes a very simple model, as described in fastText:
<img src="images/fasttext.svg" style="width: 600px;" />
Build an embedding layer mapping each word to a vector representation
Compute the vector representation of all words in each sequence and average them
Add a dense layer to output 20 classes (+ softmax)
End of explanation
"""
# %load solutions/accuracy.py
output_test = model.predict(x_test)
test_classes = np.argmax(output_test, axis=-1)
print("test accuracy:", np.mean(test_classes == y_test))
"""
Explanation: Exercice
- compute model accuracy on test set
End of explanation
"""
# %load solutions/lstm.py
from keras.layers import LSTM, Conv1D, MaxPooling1D
# input: a sequence of MAX_SEQUENCE_LENGTH integers
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
# 1D convolution with 64 output channels
x = Conv1D(64, 5)(embedded_sequences)
# MaxPool divides the length of the sequence by 5
x = MaxPooling1D(5)(x)
x = Conv1D(64, 5)(x)
x = MaxPooling1D(5)(x)
# LSTM layer with a hidden size of 64
x = LSTM(64)(x)
predictions = Dense(20, activation='softmax')(x)
model = Model(sequence_input, predictions)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
# You will get large speedups with these models by using a GPU
# The model might take a lot of time to converge, and even more
# if you add dropout (needed to prevent overfitting)
# %load solutions/conv1d.py
from keras.layers import Conv1D, MaxPooling1D, Flatten
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
# A 1D convolution with 128 output channels
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
# MaxPool divides the length of the sequence by 5
x = MaxPooling1D(5)(x)
# A 1D convolution with 64 output channels
x = Conv1D(64, 5, activation='relu')(x)
# MaxPool divides the length of the sequence by 5
x = MaxPooling1D(5)(x)
x = Flatten()(x)
predictions = Dense(20, activation='softmax')(x)
model = Model(sequence_input, predictions)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
model.fit(x_train, y_train, validation_split=0.1,
nb_epoch=10, batch_size=128, verbose=2)
"""
Explanation: Building more complex models
Exercise
- From the previous template, build more complex models using:
- 1d convolution and 1d maxpooling. Note that you will still need a GlobalAveragePooling or Flatten after the convolutions
- Recurrent neural networks through LSTM (you will need to reduce sequence length before)
<img src="images/unrolled_rnn_one_output_2.svg" style="width: 600px;" />
Bonus
- You may try different architectures with:
- more intermediate layers, combining dense, convolutional, and recurrent layers
- different recurrent cells (GRU, simple RNN)
- bidirectional LSTMs (a short sketch follows the note below)
Note: The goal is to build working models rather than getting better test accuracy. To achieve much better results, we'd need more computation time and data quantity. Build your model, and verify that they converge to OK results.
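As an illustration of the bidirectional option from the bonus list, here is a minimal sketch that reuses the embedding_layer, Input, Dense and Model objects defined earlier (depending on the Keras version, Bidirectional may need to be imported from keras.layers.wrappers):
    from keras.layers import Conv1D, MaxPooling1D, LSTM
    from keras.layers.wrappers import Bidirectional

    sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
    embedded_sequences = embedding_layer(sequence_input)
    # reduce the sequence length before the (bidirectional) recurrent layer
    x = Conv1D(64, 5, activation='relu')(embedded_sequences)
    x = MaxPooling1D(5)(x)
    x = Bidirectional(LSTM(64))(x)
    predictions = Dense(20, activation='softmax')(x)
    model = Model(sequence_input, predictions)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])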
End of explanation
"""
embeddings_index = {}
embeddings_vectors = []
f = open('glove100K.100d.txt', 'rb')
word_idx = 0
for line in f:
values = line.decode('utf-8').split()
word = values[0]
vector = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = word_idx
embeddings_vectors.append(vector)
word_idx = word_idx + 1
f.close()
inv_index = {v: k for k, v in embeddings_index.items()}
print("found %d different words in the file" % word_idx)
# Stack all embeddings in a large numpy array
glove_embeddings = np.vstack(embeddings_vectors)
glove_norms = np.linalg.norm(glove_embeddings, axis=-1, keepdims=True)
glove_embeddings_normed = glove_embeddings / glove_norms
print(glove_embeddings.shape)
def get_emb(word):
idx = embeddings_index.get(word)
if idx is None:
return None
else:
return glove_embeddings[idx]
def get_normed_emb(word):
idx = embeddings_index.get(word)
if idx is None:
return None
else:
return glove_embeddings_normed[idx]
get_emb("computer")
"""
Explanation: Loading pre-trained embeddings
The file glove100K.100d.txt is an extract of GloVe vectors that were trained on English Wikipedia 2014 + Gigaword 5 (6B tokens).
We extracted the 100,000 most frequent words. They have a dimension of 100.
End of explanation
"""
# %load solutions/most_similar.py
def most_similar(words, topn=10):
query_emb = 0
# If we have a list of words instead of one word
# (bonus question)
if type(words) == list:
for word in words:
query_emb += get_emb(word)
else:
query_emb = get_emb(words)
query_emb = query_emb / np.linalg.norm(query_emb)
# Large numpy vector with all cosine similarities
# between emb and all other words
cosines = np.dot(glove_embeddings_normed, query_emb)
# topn most similar indexes corresponding to cosines
idxs = np.argsort(cosines)[::-1][:topn]
# pretty return with word and similarity
return [(inv_index[idx], cosines[idx]) for idx in idxs]
most_similar("cpu")
most_similar("pitt")
most_similar("jolie")
"""
Explanation: Finding most similar words
Exercice
Build a function to find most similar words, given a word as query:
- lookup the vector for the query word in the Glove index;
- compute the cosine similarity between a word embedding and all other words;
- display the top 10 most similar words.
Bonus
Change your function so that it takes multiple words as input (by averaging them)
End of explanation
"""
np.dot(get_normed_emb('aniston'), get_normed_emb('pitt'))
np.dot(get_normed_emb('jolie'), get_normed_emb('pitt'))
most_similar("1")
# bonus: yangtze is a chinese river
most_similar(["river", "chinese"])
"""
Explanation: Predict the future better than tarot:
End of explanation
"""
from sklearn.manifold import TSNE
word_emb_tsne = TSNE(perplexity=30).fit_transform(glove_embeddings_normed[:1000])
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(40, 40))
axis = plt.gca()
np.set_printoptions(suppress=True)
plt.scatter(word_emb_tsne[:, 0], word_emb_tsne[:, 1], marker=".", s=1)
for idx in range(1000):
plt.annotate(inv_index[idx],
xy=(word_emb_tsne[idx, 0], word_emb_tsne[idx, 1]),
xytext=(0, 0), textcoords='offset points')
plt.savefig("tsne.png")
plt.show()
"""
Explanation: Displaying vectors with t-SNE
End of explanation
"""
EMBEDDING_DIM = 100
# prepare embedding matrix
nb_words_in_matrix = 0
nb_words = min(MAX_NB_WORDS, len(word_index))
embedding_matrix = np.zeros((nb_words, EMBEDDING_DIM))
for word, i in word_index.items():
if i >= MAX_NB_WORDS:
continue
embedding_vector = get_emb(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
nb_words_in_matrix = nb_words_in_matrix + 1
print("added %d words in the embedding matrix" % nb_words_in_matrix)
"""
Explanation: Using pre-trained embeddings in our model
We want to use these pre-trained embeddings for transfer learning. This process is rather similar to transfer learning in image recognition: the features learnt on words might help us bootstrap the learning process, and increase performance if we don't have enough training data.
- We initialize embedding matrix from the model with Glove embeddings:
- take all words from our 20 Newsgroups vocabulary (MAX_NB_WORDS = 20000), and look up their Glove embedding
- place the Glove embedding at the corresponding index in the matrix
- if the word is not in the Glove vocabulary, we only place zeros in the matrix
- We may fix these embeddings or fine-tune them
End of explanation
"""
pretrained_embedding_layer = Embedding(
MAX_NB_WORDS, EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
)
"""
Explanation: Build a layer with pre-trained embeddings:
End of explanation
"""
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = pretrained_embedding_layer(sequence_input)
average = GlobalAveragePooling1D()(embedded_sequences)
predictions = Dense(N_CLASSES, activation='softmax')(average)
model = Model(sequence_input, predictions)
# We don't want to fine-tune embeddings
model.layers[1].trainable=False
model.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['acc'])
model.fit(x_train, y_train, validation_split=0.1,
nb_epoch=10, batch_size=128, verbose=2)
# Note, on this type of task, this technique will
# degrade results as we train far fewer parameters
# and we average a large number of pre-trained embeddings.
# You will notice much less overfitting then!
# Using convolutions / LSTM will help
# It is also advisable to treat separately pre-trained
# embeddings and words out of vocabulary.
"""
Explanation: A model with pre-trained Embeddings
Averaging word embeddings pre-trained with Glove / Word2Vec usually works surprisingly well. However, when averaging more than 10-15 words, the resulting vector becomes too noisy and classification performance is degraded.
End of explanation
"""
|
analysiscenter/dataset
|
examples/experiments/learning_rate_schedulers/research_learning_rate_schedulers.ipynb
|
apache-2.0
|
import sys
import numpy as np
sys.path.append('../../..')
from batchflow import Pipeline, B, V, C
from batchflow.opensets import Imagenette160
from batchflow.models.torch import ResNet34
from batchflow.models.metrics import ClassificationMetrics
from batchflow.research import Research, Option, Results, KV, RP
from batchflow.utils import show_research, print_results
"""
Explanation: Learning rate schedulers comparison
About experiment
This notebook proposes research on how different learning rate schedulers affect deep learning model performance on an image classification task. The dataset in use is Imagenette160 and the model is ResNet34. The research process itself consists of running several repetitions of model training with a specific learning rate scheduler from a predefined domain of options. Model evaluation is done periodically during each of these runs on the test part of the dataset. Plots and aggregated metrics at the end of the notebook clearly show the difference between experiments with different learning rate schedulers.
End of explanation
"""
NUM_ITERS = 50000 # number of iterations to train each model for
N_REPS = 7 # number of times to repeat each model train
RESEARCH_NAME = 'research_schedulers' # name of Research object
DEVICES = [5, 6, 7] # GPUs to use
WORKERS = len(DEVICES) # number of simultaneously trained models
TEST_FREQUENCY = 150 # how often model evaluation on test data is done during train
BATCH_SIZE = 64
dataset = Imagenette160() # dataset to train models on
N_ITERS_IN_EPOCH = dataset.train.size // BATCH_SIZE
"""
Explanation: Research parameters:
End of explanation
"""
domain = (Option('decay', [
KV(None, 'None'),
KV({'name': 'exp', 'gamma': .96, 'frequency': N_ITERS_IN_EPOCH},
'exponential_high_frequency'),
KV({'name': 'exp', 'gamma': .1, 'frequency': N_ITERS_IN_EPOCH * 38},
'exponential_low_frequency'),
KV([{'name': 'exp', 'gamma': 1.01, 'frequency': 6, 'last_iter': 900},
{'name': 'exp', 'gamma': .9994, 'frequency': 2, 'first_iter': 901, 'last_iter': 22000}],
'warmup_two_stage'),
KV([{'name': 'exp', 'gamma': .1, 'frequency': 1, 'last_iter': 1},
{'name': 'exp', 'gamma': 1.0014, 'frequency': 2, 'last_iter': 4500},
{'name': 'exp', 'gamma': .9995, 'frequency': 5, 'first_iter': 5000}],
'warmup_three_stage'),
KV({'name': 'CyclicLR', 'base_lr': .0002, 'max_lr': .0016,
'step_size_up': 550, 'cycle_momentum': False,
'mode': 'exp_range', 'gamma': .99998, 'frequency': 1},
'cyclic'),
]))
"""
Explanation: Research domain
The research domain consists of various learning rate schedulers. They can be compound, meaning you can compose several different schedulers (as a list of dicts).
A scheduler applies decay to the learning rate at specified iterations during training. It requires different parameters depending on its type.
'name' - name or alias for a chosen learning rate scheduler;
'frequency' - how often the scheduler is applied;
'first_iter' - when the scheduler starts (default is 0);
'last_iter' - when the scheduler stops (default is -1).
Parameter 'name' must be one of:
- a class name from
torch.optim.lr_scheduler (e.g. 'LambdaLR') except 'ReduceLROnPlateau'.
- a short name of scheduler from that mapping:
| Short name | Scheduler |
|:------------|:------------------|
| 'exp' | ExponentialLR |
| 'lambda' | LambdaLR |
| 'step' | StepLR |
| 'multistep' | MultiStepLR |
| 'cos' | CosineAnnealingLR |
a class with _LRScheduler interface.
a callable which takes optimizer and optional args.
All other parameters are passed directly to chosen scheduler.
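For example, the simplest non-trivial entry in the domain above boils down to a single dict (values taken from the 'exponential_high_frequency' option):
    decay = {'name': 'exp', 'gamma': .96, 'frequency': N_ITERS_IN_EPOCH}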
End of explanation
"""
config = {
'inputs/labels/classes': 10,
'head/layout': 'Vf',
'decay': C('decay'),
'device': C('device'),
}
"""
Explanation: A short explanation of how each of the defined schedulers works.
'None' - apply no scheduler;
'exponential_high_frequency' - apply an exponential scheduler on every epoch with the given multiplier;
'exponential_low_frequency' - apply an exponential scheduler on every 38th epoch with the given multiplier;
'warmup_two_stage' - first increase the learning rate every 6th iteration (apply an exponential scheduler with the given multiplier until the 900th iteration), then decrease it every 2nd iteration until the end (with a new multiplier, also via an exponential scheduler);
'warmup_three_stage' - decrease the learning rate once, then increase it every 2nd iteration until the 4500th. Do nothing till the 5000th, then decrease it every 5th iteration to the last. All schedulers here are also exponential.
'cyclic' - apply a cyclic scheduler with the given params on each iteration from the beginning to the end.
Read more about warmup technique in this paper.
Model configuration
Config for ResNet34 with 10 classes. Named expression C allows substituting variable value at model initialization time. Read more about Batchflow model configuration in this tutorial.
End of explanation
"""
train_root = (dataset.train.p
.crop(shape=(160, 160), origin='center')
.to_array(channels='first', dtype=np.float32)
.multiply(multiplier=1/255)
.run_later(BATCH_SIZE, n_epochs=None, drop_last=True,
shuffle=True))
train_pipeline = (Pipeline()
.init_variable('loss')
.init_variable('learning_rate')
.init_model('dynamic', ResNet34, 'my_model', config=config)
.train_model('my_model', B('images'), B('labels'),
fetches=['loss', 'lr'],
save_to=[V('loss'), V('learning_rate')]))
"""
Explanation: Train pipeline
Root and branch pipelines for model training. First, train_root does the data processing common to all branches, and then each train_pipeline saves loss and learning rate values for further comparison.
End of explanation
"""
test_pipeline = (dataset.test.p
.import_model('my_model', C('import_from'))
.init_variable('metrics')
.crop(shape=(160, 160), origin='center')
.to_array(channels='first', dtype=np.float32)
.multiply(multiplier=1/255)
.predict_model('my_model', B('images'), fetches='predictions',
save_to=B('predictions'))
.gather_metrics('class', targets=B('labels'), predictions=B('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='u'))
.run_later(BATCH_SIZE, n_epochs=1, drop_last=False))
"""
Explanation: Test pipeline
Test pipeline for model evaluation. It does the same data preprocessing as train_root and computes a confusion matrix on the whole test subset for further comparison.
End of explanation
"""
research = (Research()
.init_domain(domain, n_reps=N_REPS)
.add_pipeline(root=train_root, branch=train_pipeline, variables=['loss', 'learning_rate'],
name='train_pipeline', logging=True)
.add_pipeline(test_pipeline, name='test_pipeline', import_from=RP('train_pipeline'),
run=True, logging=True, execute=TEST_FREQUENCY)
.get_metrics(pipeline='test_pipeline', metrics_var='metrics', metrics_name='accuracy',
returns='accuracy', execute=TEST_FREQUENCY))
"""
Explanation: Research
Research combines all defined pipelines and runs them in the given order and with the given frequency. It launches multiple experiments in parallel with options from the domain.
End of explanation
"""
research.run(NUM_ITERS, name=RESEARCH_NAME,
devices=DEVICES, workers=WORKERS,
bar=True)
"""
Explanation: Start research!
End of explanation
"""
results = Results(path=RESEARCH_NAME, concat_config=True)
"""
Explanation: Results and conclusion
Load the file with the research results.
End of explanation
"""
show_research(results.df, average_repetitions=True,
layouts=['train_pipeline/learning_rate', 'train_pipeline/loss', 'test_pipeline_metrics/accuracy'],
titles=['Learning rate change during train', 'Loss during train', 'Accuracy on test dataset during train'],
rolling_window=10,
nrows=3, ncols=1, figsize=(20, 28))
"""
Explanation: Plots of the learning rate and loss during train and evaluated accuracy on test dataset. Measurements for each model are averaged over N_REPS executions.
End of explanation
"""
print_results(results.df, 'test_pipeline_metrics/accuracy', sort_by='accuracy (mean)',
average_repetitions=True, n_last=100)
"""
Explanation: The table below shows mean and std of evaluated accuracy for each experiment averaged over its repetitions.
End of explanation
"""
|
southpaw94/MachineLearning
|
TextExamples/3547_13_Code.ipynb
|
gpl-2.0
|
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,matplotlib,theano,keras
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
"""
Explanation: Sebastian Raschka, 2015
Python Machine Learning
Chapter 13 - Parallelizing Neural Network Training with Theano
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
import theano
from theano import tensor as T
# initialize
x1 = T.scalar()
w1 = T.scalar()
w0 = T.scalar()
z1 = w1 * x1 + w0
# compile
net_input = theano.function(inputs=[w1, x1, w0], outputs=z1)
# execute
net_input(2.0, 1.0, 0.5)
"""
Explanation: Sections
Building, compiling, and running expressions with Theano
First steps with Theano
Configuring Theano
Working with array structures
Wrapping things up: A linear regression example
Choosing activation functions for feedforward neural networks
Logistic function recap
Estimating probabilities in multi-class classification via the softmax function
Broadening the output spectrum using a hyperbolic tangent
<br>
<br>
Building, compiling, and running expressions with Theano
[back to top]
Depending on your system setup, it is typically sufficient to install Theano via
pip install Theano
For more help with the installation, please see: http://deeplearning.net/software/theano/install.html
<br>
<br>
First steps with Theano
Introducing the TensorType variables. For a complete list, see http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors
End of explanation
"""
print(theano.config.floatX)
theano.config.floatX = 'float32'
"""
Explanation: <br>
<br>
Configuring Theano
[back to top]
Configuring Theano. For more options, see
- http://deeplearning.net/software/theano/library/config.html
- http://deeplearning.net/software/theano/library/floatX.html
End of explanation
"""
print(theano.config.device)
"""
Explanation: To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see: http://deeplearning.net/software/theano/tutorial/using_gpu.html
Note that float32 is recommended for GPUs; float64 on GPUs is currently still relatively slow.
End of explanation
"""
import numpy as np
# initialize
x = T.fmatrix(name='x')
x_sum = T.sum(x, axis=0)
# compile
calc_sum = theano.function(inputs=[x], outputs=x_sum)
# execute (Python list)
ary = [[1, 2, 3], [1, 2, 3]]
print('Column sum:', calc_sum(ary))
# execute (NumPy array)
ary = np.array([[1, 2, 3], [1, 2, 3]], dtype=theano.config.floatX)
print('Column sum:', calc_sum(ary))
"""
Explanation: You can run a Python script on CPU via:
THEANO_FLAGS=device=cpu,floatX=float64 python your_script.py
or GPU via
THEANO_FLAGS=device=gpu,floatX=float32 python your_script.py
It may also be convenient to create a .theanorc file in your home directory to make those configurations permanent. For example, to always use float32, execute
echo -e "\n[global]\nfloatX=float32\n" >> ~/.theanorc
Or, create a .theanorc file manually with the following contents
[global]
floatX = float32
device = gpu
<br>
<br>
Working with array structures
[back to top]
End of explanation
"""
# initialize
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[x],
updates=update,
outputs=z)
# execute
data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
for i in range(5):
print('z%d:' % i, net_input(data))
"""
Explanation: Updating shared arrays.
More info about memory management in Theano can be found here: http://deeplearning.net/software/theano/tutorial/aliasing.html
End of explanation
"""
# initialize
data = np.array([[1, 2, 3]],
dtype=theano.config.floatX)
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[],
updates=update,
givens={x: data},
outputs=z)
# execute
for i in range(5):
print('z:', net_input())
"""
Explanation: We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables. If we use inputs, a dataset is transferred from the CPU to the GPU multiple times, for example, if we iterate over a dataset multiple times (epochs) during gradient descent. Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
End of explanation
"""
import numpy as np
X_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],
[5.0], [6.0], [7.0], [8.0], [9.0]],
dtype=theano.config.floatX)
y_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0,
6.3, 6.6, 7.4, 8.0, 9.0],
dtype=theano.config.floatX)
"""
Explanation: <br>
<br>
Wrapping things up: A linear regression example
[back to top]
Creating some training data.
End of explanation
"""
import theano
from theano import tensor as T
import numpy as np
def train_linreg(X_train, y_train, eta, epochs):
costs = []
# Initialize arrays
eta0 = T.fscalar('eta0')
y = T.fvector(name='y')
X = T.fmatrix(name='X')
w = theano.shared(np.zeros(
shape=(X_train.shape[1] + 1),
dtype=theano.config.floatX),
name='w')
# calculate cost
net_input = T.dot(X, w[1:]) + w[0]
errors = y - net_input
cost = T.sum(T.pow(errors, 2))
# perform gradient update
gradient = T.grad(cost, wrt=w)
update = [(w, w - eta0 * gradient)]
# compile model
train = theano.function(inputs=[eta0],
outputs=cost,
updates=update,
givens={X: X_train,
y: y_train,})
for _ in range(epochs):
costs.append(train(eta))
return costs, w
"""
Explanation: Implementing the training function.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
costs, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)
plt.plot(range(1, len(costs)+1), costs)
plt.tight_layout()
plt.xlabel('Epoch')
plt.ylabel('Cost')
plt.tight_layout()
# plt.savefig('./figures/cost_convergence.png', dpi=300)
plt.show()
"""
Explanation: Plotting the sum of squared errors cost vs epochs.
End of explanation
"""
def predict_linreg(X, w):
Xt = T.matrix(name='X')
net_input = T.dot(Xt, w[1:]) + w[0]
predict = theano.function(inputs=[Xt], givens={w: w}, outputs=net_input)
return predict(X)
plt.scatter(X_train, y_train, marker='s', s=50)
plt.plot(range(X_train.shape[0]),
predict_linreg(X_train, w),
color='gray',
marker='o',
markersize=4,
linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
# plt.savefig('./figures/linreg.png', dpi=300)
plt.show()
"""
Explanation: Making predictions.
End of explanation
"""
# note that the first element (X[0] = 1) denotes the bias unit
X = np.array([[1, 1.4, 1.5]])
w = np.array([0.0, 0.2, 0.4])
def net_input(X, w):
z = X.dot(w)
return z
def logistic(z):
return 1.0 / (1.0 + np.exp(-z))
def logistic_activation(X, w):
z = net_input(X, w)
return logistic(z)
print('P(y=1|x) = %.3f' % logistic_activation(X, w)[0])
"""
Explanation: <br>
<br>
Choosing activation functions for feedforward neural networks
[back to top]
<br>
<br>
Logistic function recap
[back to top]
The logistic function, often just called the "sigmoid function," is in fact a special case of a sigmoid function.
Net input $z$:
$$z = w_1x_{1} + \dots + w_mx_{m} = \sum_{j=1}^{m} x_{j}w_{j} \ = \mathbf{w}^T\mathbf{x}$$
Logistic activation function:
$$\phi_{logistic}(z) = \frac{1}{1 + e^{-z}}$$
Output range: (0, 1)
End of explanation
"""
# W : array, shape = [n_output_units, n_hidden_units+1]
# Weight matrix for hidden layer -> output layer.
# note that first column (A[:][0] = 1) are the bias units
W = np.array([[1.1, 1.2, 1.3, 0.5],
[0.1, 0.2, 0.4, 0.1],
[0.2, 0.5, 2.1, 1.9]])
# A : array, shape = [n_hidden+1, n_samples]
# Activation of hidden layer.
# note that first element (A[0][0] = 1) is for the bias units
A = np.array([[1.0],
[0.1],
[0.3],
[0.7]])
# Z : array, shape = [n_output_units, n_samples]
# Net input of output layer.
Z = W.dot(A)
y_probas = logistic(Z)
print('Probabilities:\n', y_probas)
y_class = np.argmax(Z, axis=0)
print('predicted class label: %d' % y_class[0])
"""
Explanation: Now, imagine an MLP with 3 hidden units + 1 bias unit in the hidden layer. The output layer consists of 3 output units.
End of explanation
"""
def softmax(z):
return np.exp(z) / np.sum(np.exp(z))
def softmax_activation(X, w):
z = net_input(X, w)
    return softmax(z)
y_probas = softmax(Z)
print('Probabilities:\n', y_probas)
y_probas.sum()
y_class = np.argmax(Z, axis=0)
y_class
"""
Explanation: <br>
<br>
Estimating probabilities in multi-class classification via the softmax function
[back to top]
The softmax function is a generalization of the logistic function and allows us to compute meaningful class probabilities in multi-class settings (multinomial logistic regression).
$$P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$
The input to the function is the result of K distinct linear functions, and the expression above gives the predicted probability for the j-th class given a sample vector x.
Output range: (0, 1)
End of explanation
"""
def tanh(z):
e_p = np.exp(z)
e_m = np.exp(-z)
return (e_p - e_m) / (e_p + e_m)
import matplotlib.pyplot as plt
%matplotlib inline
z = np.arange(-5, 5, 0.005)
log_act = logistic(z)
tanh_act = tanh(z)
# alternatives:
# from scipy.special import expit
# log_act = expit(z)
# tanh_act = np.tanh(z)
plt.ylim([-1.5, 1.5])
plt.xlabel('net input $z$')
plt.ylabel('activation $\phi(z)$')
plt.axhline(1, color='black', linestyle='--')
plt.axhline(0.5, color='black', linestyle='--')
plt.axhline(0, color='black', linestyle='--')
plt.axhline(-1, color='black', linestyle='--')
plt.plot(z, tanh_act,
linewidth=2,
color='black',
label='tanh')
plt.plot(z, log_act,
linewidth=2,
color='lightgreen',
label='logistic')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/activation.png', dpi=300)
plt.show()
"""
Explanation: <br>
<br>
Broadening the output spectrum using a hyperbolic tangent
[back to top]
Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.
$$\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$
Output range: (-1, 1)
End of explanation
"""
import os
import struct
import numpy as np
def load_mnist(path, kind='train'):
"""Load MNIST data from `path`"""
labels_path = os.path.join(path,
'%s-labels-idx1-ubyte'
% kind)
images_path = os.path.join(path,
'%s-images-idx3-ubyte'
% kind)
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II',
lbpath.read(8))
labels = np.fromfile(lbpath,
dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, num, rows, cols = struct.unpack(">IIII",
imgpath.read(16))
images = np.fromfile(imgpath,
dtype=np.uint8).reshape(len(labels), 784)
return images, labels
X_train, y_train = load_mnist('mnist', kind='train')
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist('mnist', kind='t10k')
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
"""
Explanation: <br>
<br>
Keras
[back to top]
Loading MNIST
1) Download the 4 MNIST datasets from http://yann.lecun.com/exdb/mnist/
train-images-idx3-ubyte.gz: training set images (9912422 bytes)
train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
2) Unzip those files
3) Copy the unzipped files to a directory ./mnist
End of explanation
"""
import theano
theano.config.floatX = 'float32'
X_train = X_train.astype(theano.config.floatX)
X_test = X_test.astype(theano.config.floatX)
"""
Explanation: Multi-layer Perceptron in Keras
Once you have Theano installed, Keras can be installed via
pip install Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
End of explanation
"""
from keras.utils import np_utils
print('First 3 labels: ', y_train[:3])
y_train_ohe = np_utils.to_categorical(y_train)
print('\nFirst 3 labels (one-hot):\n', y_train_ohe[:3])
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD
np.random.seed(1)
model = Sequential()
model.add(Dense(input_dim=X_train.shape[1],
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=y_train_ohe.shape[1],
init='uniform',
activation='softmax'))
sgd = SGD(lr=0.001, decay=1e-7, momentum=.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, y_train_ohe,
nb_epoch=50,
batch_size=300,
verbose=1,
validation_split=0.1,
show_accuracy=True)
y_train_pred = model.predict_classes(X_train, verbose=0)
print('First 3 predictions: ', y_train_pred[:3])
train_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))
y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
"""
Explanation: One-hot encoding of the class variable:
End of explanation
"""
|
bradkav/runDM
|
python/runDM-examples.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib
from matplotlib import pyplot as pl
import runDM
"""
Explanation: runDM v1.0 - examples
With runDMC, It's Tricky. With runDM, it's not.
runDM is a tool for calculating the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can be used to calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. The code can also be used to extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons. See the manual and arXiv:1605.04917 for more details.
Initialisation
Let's start by importing the runDM module:
End of explanation
"""
c_high = runDM.setBenchmark("UniversalVector")
print "Vector coupling to all SM fermions:", c_high
c_high = runDM.setBenchmark("QuarksAxial")
print "Axial-vector coupling to all quarks:", c_high
"""
Explanation: First, let's specify the couplings at high energy. This will be a 1-D array with 16 elements. runDM comes with a number of pre-defined benchmarks, which can be accessed using setBenchmark.
End of explanation
"""
c_high = runDM.initCouplings()
c_high[0] = 1.0
c_high[1] = -1.0
c_high[12] = 1.0
print "User-defined couplings:", c_high
"""
Explanation: Alternatively, you can specify each coupling individually. You can use initCouplings() to generate an empty array of couplings and then set the entries by hand. But any array of 16 elements will do.
End of explanation
"""
#Run from 1 TeV to 10 GeV
E1 = 1000; E2 = 10;
c_low = runDM.runCouplings(c_high, E1, E2)
print "Low energy couplings:", c_low
"""
Explanation: runCouplings: running between arbitrary scales
From these high energy couplings (defined at some energy $E_1$), you can obtain the couplings at a different energy scale $E_2$ by using runCouplings(c, $E_1$, $E_2$).
The input coupling vector c should always be the list of high energy couplings to fully gauge-invariant operators above the EW scale (see Eq. 4 of the manual) - even if $E_1$ is below $m_Z$. The output is either a list of coefficients for the same operators - if $E_2$ is above $m_Z$ - or the list of coefficients for the low energy operators below the EW scale (Eq. 6 of the manual) - if $E_2$ is below $m_Z$. Don't worry, runDM takes care of the relative values of $E_1$ and $E_2$.
End of explanation
"""
#Run from 10 TeV to 1 GeV
E1 = 10000;
c_q = runDM.DDCouplingsQuarks(c_high, E1)
couplings_str = ['c_V^u','c_V^d','c_A^u','c_A^d','c_A^s']
for k in range(5):
print couplings_str[k], "=", c_q[k]
"""
Explanation: DDCouplingsQuarks: calculating low-energy DM-quark couplings
If we're only interested in direct detection experiments, we can use the function DDCouplingsQuarks(c, $E_1$) to extract the couplings to light quarks. In this case, the code evolves the couplings from energy $E_1$, down to the nuclear energy scale ~ 1 GeV. The output is an array with 5 elements, the vector and axial-vector couplings to the light quarks: $c_q = \left(c_V^{(u)}, c_V^{(d)}, c_A^{(u)}, c_A^{(d)},c_A^{(s)}\right)$. Let's print them out:
End of explanation
"""
#Set the value of the high energy couplings
c_high = runDM.setBenchmark("QuarksAxial")
#Calculate the low energy couplings
mV = np.logspace(0, 6, 1000)
c_q = runDM.DDCouplingsQuarks(c_high, mV)
#Now let's do some plotting
f, axarr = pl.subplots(3,2 ,figsize=(8,8))
for k in range(5):
if (k < 2): #Vector currents
ax = axarr[k%3, 0]
else: #Axial-vector currents
ax = axarr[(k+1)%3, 1]
ax.semilogx(mV, c_q[:,k])
ax.set_xlabel(r'$m_V$ [GeV]', fontsize=18.0)
ax.set_ylabel(r'$'+couplings_str[k]+'$', fontsize=20.0)
ax.axvline(91.1875, color='k', linestyle='--')
ax.set_xlim(1.0, 10**6)
ax.get_yticklabels()[-1].set_visible(False)
ax.tick_params(axis='both', labelsize=12.0)
axarr[2,0].set_axis_off()
pl.tight_layout()
"""
Explanation: Now, let's take a look at the value of the low-energy light quark couplings (evaluated at $\mu_N \sim 1 \, \mathrm{GeV}$) as a function of the mediator mass $m_V$.
End of explanation
"""
#Set high energy couplings
chigh = runDM.setBenchmark("QuarksAxial")
#Set DM parameters
E1 = 10000; mx = 100; DMcurrent = "vector";
print "NR DM-proton couplings:", \
runDM.DDCouplingsNR(chigh, E1, mx, DMcurrent, "p")
print "NR DM-neutron couplings:", \
runDM.DDCouplingsNR(chigh, E1, mx, DMcurrent, "n")
"""
Explanation: DDCouplingsNR: calculating low energy non-relativistic DM-nucleon couplings
The function DDCouplingsNR(c, E1, mx, DMcurrent, N) calculates the running of the operators to the nuclear energy scale, but it also performs the embedding of the quarks in the nucleon (N = 'p', 'n') and the matching onto the non-relativistic (NR) DM-nucleon operators, defined in arXiv:1203.3542. In order to perform the matching, the user must specify mx (DM mass in GeV) and DMcurrent, the DM interaction structure.
The output is a list of coefficients of the first 12 NR operators, with numbering matching that of arXiv:1203.3542 and arXiv:1307.5955 (but remember that python array indices start at zero, so $O_7^{NR}$ is at index 6):
End of explanation
"""
|
palrogg/foundations-homework
|
07/.ipynb_checkpoints/Homework7-JOINS-checkpoint.ipynb
|
mit
|
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("07-hw-animals.csv")
df.columns
df.head(3)
df.sort_values(by='length', ascending=False).head(3)
df['animal'].value_counts()
dogs = df[df['animal']=='dog']
dogs
df[df['length'] > 40]
df['inches'] = .393701 * df['length']
df
cats = df[df['animal']=='cat']
dogs = df[df['animal']=='dog']
# Display all of the animals that are cats and above 12 inches long.
# First do it using the "cats" variable, then do it using your normal dataframe.
cats[cats['inches'] > 12]
df[(df['animal'] == 'cat') & (df['inches'] > 12)]
cats['length'].describe()[['mean']]
dogs['length'].describe()[['mean']]
animals = df.groupby( [ "animal"] )
animals['length'].mean()
plt.style.use('ggplot')
dogs['length'].hist()
labels = dogs['name']
sizes = dogs['length']
explode = (0.1, 0.2, 0.2) # fun
plt.pie(sizes, explode=explode, labels=labels,
autopct='%1.2f%%', shadow=True, startangle=30)
#cf: recent.head().plot(kind='pie', y='networthusbillion', labels=recent['name'].head(), legend=False)
#Make a horizontal bar graph of the length of the animals, with their name as the label
df.plot(kind='barh', x='name', y='length', legend=False)
#Make a sorted horizontal bar graph of the cats, with the larger cats on top.
cats.sort_values(by='length').plot(kind='barh', x='name', y='length', legend=False)
"""
Explanation: Contents:
1 Cats and dogs
2 Millionaires
3 Train stations
1. Cats and dogs
End of explanation
"""
df2 = pd.read_excel("richpeople.xlsx")
df2.keys()
df2['citizenship'].value_counts().head(10)
# population: data from http://data.worldbank.org/indicator/SP.POP.TOTL
df_pop = pd.read_csv("world_pop.csv", header=2)
df_pop.keys()
#recent_pop = df_pop['2015']
#join: see http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
left = df2.set_index('countrycode')
right = df_pop.set_index('Country Code')
millionaires_and_pop = left.join(right)
result = pd.merge(left, right, left_index=True, right_index=True, how='outer')
millionaires_and_pop
result
#millionaires_and_pop['citizenship'].value_counts().head(10)
"""
Explanation: 2. Millionaires
What country are most billionaires from? For the top ones, how many billionaires per billion people?
End of explanation
"""
print("The average wealth of a billionaire (in billions) is:", df2['networthusbillion'].describe()['mean'])
print("The average wealth of a male billionaire is:", df2[df2['gender'] == 'male']['networthusbillion'].describe()['mean'])
print("The average wealth of a female billionaire is:", df2[df2['gender'] == 'female']['networthusbillion'].describe()['mean'])
"""
Explanation: What's the average wealth of a billionaire? Male? Female?
End of explanation
"""
print('The poorest billionaire is:', df2.get_value(df2.sort_values('networthusbillion', ascending=True).index[0],'name'))
df2.sort_values('networthusbillion', ascending=True).head(10)
"""
Explanation: Who is the poorest billionaire? Who are the top 10 poorest billionaires?
End of explanation
"""
#relationship_values = set
relationship_list = df2['relationshiptocompany'].tolist()
relationship_set = set(relationship_list)
relationship_set = [s.strip() for s in relationship_set if s == s] # to remove a naughty NaN and get rid of dumb whitespaces
print("The relationships are:", str.join(', ', relationship_set))
print('\nThe five most common relationships are:')
df2['relationshiptocompany'].value_counts().head(5)
"""
Explanation: What is 'relationship to company'? And what are the most common relationships?
End of explanation
"""
print("The three most common sources of wealth are:\n" + str(df2['typeofwealth'].value_counts().head(3)))
print("\nFor men, they are:\n" + str(df2[df2['gender'] == 'male']['typeofwealth'].value_counts().head(3)))
print("\nFor women, they are:\n" + str(df2[df2['gender'] == 'female']['typeofwealth'].value_counts().head(3)))
"""
Explanation: Most common source of wealth? Male vs. female?
End of explanation
"""
per_country = df2.groupby(['citizenship'])
#per_country['networthusbillion'].max()
#per_country['networthusbillion'].idxmax() # DataFrame.max(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
# per_country['gdpcurrentus']
df2['percofgdp'] = (100*1000000000*df2['networthusbillion']) / (df2['gdpcurrentus'])
#pd.Series(["{0:.2f}%".format(percofgdp)])
print("NB: most countries don't have their GDP in the 'gdpcurrentus' column.")
df2.loc[per_country['networthusbillion'].idxmax()][['name', 'networthusbillion', 'percofgdp']]
"""
Explanation: Given the richest person in a country, what % of the GDP is their wealth?
End of explanation
"""
df_trains = pd.read_csv("stations.csv", delimiter=';')
df_trains
"""
Explanation: Train stations
End of explanation
"""
|
tata-antares/tagging_LHCb
|
MC/ss_os_training.ipynb
|
apache-2.0
|
%pylab inline
import sys
sys.path.insert(0, "../")
"""
Explanation: About
Training of the BDT to determine whether a track comes from the same side or the opposite side.
Labels:
* 0 (NAN), cannot establish SS or OS
* -1 (OS) - opposite side tracks (in good agreement with genuinely OS tracks)
* 1 (SS) - tracks whose grandmother, great-grandmother, or great-great-grandmother is the same as for the signal B
From tests we conclude that SS and NAN tracks should have an inverted track sign for $K_s$ and $K^*$ decays. Thus we train OS vs SS, NAN.
End of explanation
"""
import pandas
import root_numpy
from folding_group import FoldingGroupClassifier
from decisiontrain import DecisionTrainClassifier
from rep.estimators import SklearnClassifier
"""
Explanation: Import
End of explanation
"""
data = pandas.DataFrame(root_numpy.root2array('../datasets/MC/csv/WG/Bu_JPsiK/2012/Tracks.root'))
from utils import data_tracks_preprocessing
data = data_tracks_preprocessing(data)
for group in range(-1, 2, 1):
print group, 1. * numpy.sum(data.OS_SS.values == group) / len(data)
len(data)
features = ['cos_diff_phi', 'diff_pt', 'partPt', 'partP', 'nnkrec', 'diff_eta', 'EOverP',
'ptB', 'sum_PID_mu_k', 'proj', 'PIDNNe', 'sum_PID_k_e', 'PIDNNk', 'sum_PID_mu_e', 'PIDNNm',
'phi', 'IP', 'IPerr', 'IPs', 'veloch', 'max_PID_k_e', 'ghostProb',
'IPPU', 'eta', 'max_PID_mu_e', 'max_PID_mu_k', 'partlcs']
"""
Explanation: Read $B^\pm \to J/\psi K^\pm$ MC samples
End of explanation
"""
kw = {'bins': 100, 'alpha': 0.4, 'normed': True}
figure(figsize=(20, 35))
for n, f in enumerate(features):
subplot(10, 4, n + 1)
r = (numpy.min(data.loc[data.OS_SS == -1, f].values), numpy.max(data.loc[data.OS_SS == -1, f].values))
hist(data.loc[data.OS_SS == -1, f].values, label='OS', range=r, **kw)
hist(data.loc[data.OS_SS == 0, f].values, label='NAN', range=r, **kw)
hist(data.loc[data.OS_SS == 1, f].values, label='SS', range=r, **kw)
title(f)
legend()
"""
Explanation: Distributions for same side vs opposite side tracks
End of explanation
"""
data_os_ss = data[data.OS_SS != 0]
weight = numpy.ones(len(data_os_ss))
weight[data_os_ss.OS_SS.values >= 0] *= 1. * sum(data_os_ss.OS_SS < 0) / sum(data_os_ss.OS_SS >= 0)
data_os_ss['weight'] = weight
len(data_os_ss)
from hep_ml.losses import LogLossFunction
loss = LogLossFunction(regularization=100)
tt_base = DecisionTrainClassifier(learning_rate=0.1, n_estimators=10000, depth=6, loss=loss,
max_features=15, n_threads=12)
tt_folding = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=432,
train_features=features, group_feature='group_column')
%time tt_folding.fit(data_os_ss, data_os_ss.OS_SS >= 0)
pass
import cPickle
with open('../models/dt_ss_os_only.pkl', 'w') as f:
cPickle.dump(tt_folding, f)
prob = tt_folding.predict_proba(data_os_ss)[:, 1]
from sklearn.metrics import roc_auc_score
roc_auc_score(data_os_ss.OS_SS >= 0, prob, sample_weight=data_os_ss.weight)
from rep.report.metrics import RocAuc
tt_folding.test_on(data_os_ss, data_os_ss.OS_SS >= 0).learning_curve(RocAuc())
tt_folding.estimators[0].clf.estimators = tt_folding.estimators[0].clf.estimators[:7000]
tt_folding.estimators[1].clf.estimators = tt_folding.estimators[1].clf.estimators[:7000]
prob = tt_folding.predict_proba(data_os_ss)[:, 1]
report = tt_folding.test_on(data_os_ss, data_os_ss.OS_SS >= 0)
report.feature_importance()
"""
Explanation: Training OS vs SS
End of explanation
"""
from utils import plot_calibration
"""
Explanation: Calibration of the probability to be SS
End of explanation
"""
plot_calibration(prob, data_os_ss.OS_SS.values >= 0, weight=data_os_ss.weight.values)
"""
Explanation: before calibration
End of explanation
"""
from utils import calibrate_probs
prob_calib, calibrator = calibrate_probs(data_os_ss.OS_SS.values >= 0, data_os_ss.weight.values, prob,
logistic=True)
plot_calibration(prob_calib, data_os_ss.OS_SS.values >= 0, weight=data_os_ss.weight.values)
with open('../models/os_ss_calibrator_only.pkl', 'w') as f:
cPickle.dump(calibrator, f)
probs_nan = tt_folding.predict_proba(data[data.OS_SS == 0])[:, 1]
probs_nan_calib = calibrator.predict_proba(probs_nan)
hist(prob_calib[data_os_ss.OS_SS.values < 0], normed=True, alpha=0.4, label='OS', bins=100);
hist(prob_calib[data_os_ss.OS_SS.values > 0], normed=True, alpha=0.4, label='SS', bins=100);
hist(probs_nan_calib, normed=True, alpha=0.4, label='NAN', bins=100);
legend();
"""
Explanation: after calibration
End of explanation
"""
hist(prob_calib[data_os_ss.OS_SS.values < 0], normed=True, alpha=0.4, label='OS', bins=100);
hist(prob_calib[data_os_ss.OS_SS.values > 0], normed=True, alpha=0.4, label='SS', bins=100);
hist(prob_calib[data_os_ss.OS_SS.values == 0], normed=True, alpha=0.4, label='NAN', bins=100);
legend();
"""
Explanation: OS vs SS and NAN
End of explanation
"""
|
TimothyHelton/k2datascience
|
notebooks/Classification_Exercises.ipynb
|
bsd-3-clause
|
from k2datascience import classification
from k2datascience import plotting
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
"""
Explanation: Classification
Timothy Helton
<br>
<font color="red">
NOTE:
<br>
This notebook uses code found in the
<a href="https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/classification.py">
<strong>k2datascience.classification</strong></a> module.
To execute all the cells do one of the following items:
<ul>
<li>Install the k2datascience package to the active Python interpreter.</li>
<li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li>
<li>Create a link to the classification.py file in the same directory as this notebook.</li>
</font>
Imports
End of explanation
"""
weekly = classification.Weekly()
weekly.data.info()
weekly.data.describe()
weekly.data.head()
plotting.correlation_heatmap_plot(
data=weekly.data, title='Weekly Stockmarket')
plotting.correlation_pair_plot(
weekly.data, title='Weekly Stockmarket')
"""
Explanation: Exercise 1
This question should be answered using the Weekly data set. This data is similar in nature to the Smarket data from earlier, except that it contains 1,089
weekly returns for 21 years, from the beginning of 1990 to the end of
2010.
Produce some numerical and graphical summaries of the Weekly
data. Do there appear to be any patterns?
Use the full data set to perform a logistic regression with
Direction as the response and the five lag variables plus Volume
as predictors. Use the summary function to print the results. Do
any of the predictors appear to be statistically significant? If so,
which ones?
Compute the confusion matrix and overall fraction of correct
predictions. Explain what the confusion matrix is telling you
about the types of mistakes made by logistic regression.
Now fit the logistic regression model using a training data period
from 1990 to 2008, with Lag2 as the only predictor. Compute the
confusion matrix and the overall fraction of correct predictions
for the held out data (that is, the data from 2009 and 2010).
Repeat (4) using LDA.
Repeat (4) using QDA.
Repeat (4) using KNN with K = 1.
Which of these methods appears to provide the best results on
this data?
Experiment with different combinations of predictors, including
possible transformations and interactions, for each of the
methods. Report the variables, method, and associated confusion
matrix that appears to provide the best results on the held
out data. Note that you should also experiment with values for K in the KNN classifier.
1. Produce some numerical and graphical summaries of the Weekly
data. Do there appear to be any patterns?
End of explanation
"""
weekly.logistic_regression(data=weekly.data)
weekly.logistic_model.summary()
"""
Explanation: FINDINGS
There do not appear to be noticeable patterns in the dataset.
All field variables except volume appear to follow a Gaussian distribution.
2. Use the full data set to perform a logistic regression with
Direction as the response and the five lag variables plus Volume
as predictors. Use the summary function to print the results. Do
any of the predictors appear to be statistically significant? If so,
which ones?
End of explanation
"""
weekly.confusion
print(weekly.classification)
"""
Explanation: FINDINGS
The intercept and lag2 features have P-values below the 0.05 threshold and appear statistically significant.
3. Compute the confusion matrix and overall fraction of correct
End of explanation
"""
weekly.logistic_regression(data=weekly.x_train)
weekly.logistic_model.summary()
weekly.confusion
print(weekly.classification)
"""
Explanation: FINDINGS
The model is not well suited to the data.
The Precision measures the accuracy of the Positive predictions.
$$\frac{T_p}{T_p + F_p}$$
The Recall measures the fraction of actual positives that the model correctly identifies.
$$\frac{T_p}{T_p + F_n}$$
The F1-score is the harmonic mean of the precision and recall.
The harmonic mean is used when an average of rates is desired.
$$\frac{2 \times Precision \times Recall}{Precision + Recall}$$
The Support is the total number of instances of each class,
i.e. the sum of each row of the confusion matrix. A short worked example of these quantities follows in the next code cell.
4. Now fit the logistic regression model using a training data period
End of explanation
"""
weekly.categorize(weekly.x_test)
weekly.calc_prediction(weekly.y_test, weekly.prediction_nom)
weekly.confusion
print(weekly.classification)
"""
Explanation: FINDINGS
Using 80% of the data as a training set did not improve the model's accuracy.
End of explanation
"""
weekly.lda()
weekly.confusion
print(weekly.classification)
"""
Explanation: FINDINGS
Testing the model on the remaining 20% of the data yields a result worse than just randomly guessing.
5. Repeat (4) using LDA.
End of explanation
"""
weekly.qda()
weekly.confusion
print(weekly.classification)
"""
Explanation: FINDINGS
This model is extremely accurate.
6. Repeat (4) using QDA.
End of explanation
"""
weekly.knn()
weekly.confusion
print(weekly.classification)
"""
Explanation: FINDINGS
This model is better than the logistic regression, but not as good as the LDA model.
7. Repeat (4) using KNN with K = 1.
End of explanation
"""
auto = classification.Auto()
auto.data.info()
auto.data.describe()
auto.data.head()
"""
Explanation: FINDINGS
This model is better than the logistic regression, but not as good as the QDA.
8. Which of these methods appears to provide the best results on
this data?
The model accuracy in descending order is the following:
Linear Discriminant Analysis
Quadratic Discriminant Analysis
K-Nearest Neighbors
Logistic Regression
9. Experiment with different combinations of predictors, including
possible transformations and interactions, for each of the
methods. Report the variables, method, and associated confusion
matrix that appears to provide the best results on the held
out data. Note that you should also experiment with values for K in the KNN classifier.
Exercise 2
In this problem, you will develop a model to predict whether a given
car gets high or low gas mileage based on the Auto data set.
Create a binary variable, mpg01, that contains a 1 if mpg contains
a value above its median, and a 0 if mpg contains a value below
its median.
Explore the data graphically in order to investigate the association
between mpg01 and the other features. Which of the other
features seem most likely to be useful in predicting mpg01? Scatterplots
and boxplots may be useful tools to answer this question.
Describe your findings.
Split the data into a training set and a test set.
Perform LDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
Perform QDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
Perform logistic regression on the training data in order to predict
mpg01 using the variables that seemed most associated with
mpg01 in (2). What is the test error of the model obtained?
Perform KNN on the training data, with several values of K, in
order to predict mpg01. Use only the variables that seemed most
associated with mpg01 in (2). What test errors do you obtain?
Which value of K seems to perform the best on this data set?
1. Create a binary variable, mpg01, that contains a 1 if mpg contains
a value above its median, and a 0 if mpg contains a value below
its median.
End of explanation
"""
plotting.correlation_heatmap_plot(
data=auto.data, title='Auto')
plotting.correlation_pair_plot(
data=auto.data, title='Auto')
auto.box_plots()
"""
Explanation: 2. Explore the data graphically in order to investigate the association
between mpg01 and the other features. Which of the other
features seem most likely to be useful in predicting mpg01? Scatterplots
and boxplots may be useful tools to answer this question.
Describe your findings.
End of explanation
"""
auto.x_train.info()
auto.y_train.head()
auto.x_test.info()
auto.y_test.head()
"""
Explanation: FINDINGS
The following features appear to have a direct impact on the vehicle's gas mileage.
Displacement
Cylinders are related to Displacement and will not be included.
Horsepower
Weight
Origin
3. Split the data into a training set and a test set.
End of explanation
"""
auto.classify_data(model='LDA')
auto.confusion
print(auto.classification)
"""
Explanation: 4. Perform LDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
End of explanation
"""
auto.classify_data(model='QDA')
auto.confusion
print(auto.classification)
"""
Explanation: 5. Perform QDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
End of explanation
"""
auto.classify_data(model='LR')
auto.confusion
print(auto.classification)
"""
Explanation: 6. Perform logistic regression on the training data in order to predict
mpg01 using the variables that seemed most associated with
mpg01 in (2). What is the test error of the model obtained?
End of explanation
"""
auto.accuracy_vs_k()
auto.classify_data(model='KNN', n=13)
auto.confusion
print(auto.classification)
"""
Explanation: 7. Perform KNN on the training data, with several values of K, in
order to predict mpg01. Use only the variables that seemed most
associated with mpg01 in (2). What test errors do you obtain?
Which value of K seems to perform the best on this data set?
End of explanation
"""
|
eds-uga/csci1360-fa16
|
lectures/L14.ipynb
|
mit
|
file_object = open("alice.txt", "r")
contents = file_object.read()
print(contents[:71])
file_object.close()
"""
Explanation: Lecture 14: Interacting with the filesystem
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
So far, all the data we've worked with have either been manually instantiated as NumPy arrays or lists of strings, or randomly generated. Here we'll finally get to go over reading from and writing to the filesystem. By the end of this lecture, you should be able to:
Implement a basic file reader / writer using built-in Python tools
Use exception handlers to make your interactions with the filesystem robust to failure
Use Python tools to move around the filesystem
Part 1: Interacting with text files
Text files are probably the most common and pervasive format of data. They can contain almost anything: weather data, stock market activity, literary works, and raw web data.
Text files are also convenient for your own work: once some kind of analysis has finished, it's nice to dump the results into a file you can inspect later.
Reading an entire file
So let's jump into it! Let's start with something simple; say...the text version of Lewis Carroll's Alice in Wonderland?
End of explanation
"""
file_object = open("alice.txt", "r")
"""
Explanation: Yep, I went there.
Let's walk through the code, line by line. First, we have a call to a function open() that accepts two arguments:
End of explanation
"""
contents = file_object.read()
"""
Explanation: The first argument is the file path. It's like a URL, except to a file on your computer. It should be noted that, unless you specify a leading forward slash "/", Python will interpret this path to be relative to wherever the Python script is that you're running with this command.
The second argument is the mode. This tells Python whether you're reading from a file, writing to a file, or appending to a file. We'll come to each of these.
These two arguments are part of the function open(), which then returns a file descriptor. You can think of this kind of like the reference / pointer discussion we had in our prior functions lecture: file_object is a reference to the file.
The next line is where the magic happens:
End of explanation
"""
print(contents[:71])
"""
Explanation: In this line, we're calling the method read() on the file reference we got in the previous step. This method goes into the file, pulls out everything in it, and sticks it all in the variable contents. One big string!
End of explanation
"""
file_object.close()
"""
Explanation: ...of which I then print the first 71 characters, which contains the name of the book and the author. Feel free to print the entire string contents; it'll take a few seconds, as you're printing the whole book!
Finally, the last and possibly most important line:
End of explanation
"""
with open("alice.txt", "r") as file_object:
contents = file_object.read()
print(contents[:71])
"""
Explanation: This statement explicitly closes the file reference, effectively shutting the valve to the file.
Do not underestimate the value of this statement. There are weird errors that can crop up when you forget to close file descriptors. It can be difficult to remember to do this, though; in other languages where you have to manually allocate and release any memory you use, it's a bit easier to remember. Since Python handles all that stuff for us, it's not a force of habit to explicitly shut off things we've turned on.
Fortunately, there's an alternative we can use!
End of explanation
"""
with open("alice.txt", "r") as file_object:
num_lines = 0
for line_of_text in file_object:
print(line_of_text)
num_lines += 1
if num_lines == 5: break
"""
Explanation: This code works identically to the code before it. The difference is, by using a with block, Python intrinsically closes the file descriptor at the end of the block. Therefore, no need to remember to do it yourself! Hooray!
Let's say, instead of Alice in Wonderland, we had some behemoth of a piece of literature: something along the lines of War and Peace or even an entire encyclopedia. Essentially, not something we want to read into Python all at once. Fortunately, we have an alternative:
End of explanation
"""
with open("alice.txt", "r") as file_object:
lines_of_text = file_object.readlines()
print(lines_of_text[0])
"""
Explanation: We can use a for loop just as we're used to doing with lists. In this case, at each iteration, Python will hand you exactly 1 line of text from the file to handle it however you'd like.
Of course, if you still want to read in the entire file at once, but really like the idea of splitting up the file line by line, there's a function for that, too:
End of explanation
"""
data_to_save = "This is important data. Definitely worth saving."
with open("outfile.txt", "w") as file_object:
file_object.write(data_to_save)
"""
Explanation: By using readlines() instead of plain old read(), we'll get back a list of strings, where each element of the list is a single line in the text file. In the code snippet above, I've printed the first line of text from the file.
Writing to a file
We've so far seen how to read data from a file. What if we've done some computations and want to save our results to a file?
End of explanation
"""
data_to_save = "This is ALSO important data. BOTH DATA ARE IMPORTANT."
with open("outfile.txt", "a") as file_object:
file_object.write(data_to_save)
"""
Explanation: You'll notice two important changes from before:
Switch the "r" argument in the open() function to "w". You guessed it: we've gone from Reading to Writing.
Call write() on your file descriptor, and pass in the data you want to write to the file (in this case, data_to_save).
If you try this using a new notebook on JupyterHub (or on your local machine), you should see a new text file named "outfile.txt" appear in the same directory as your script. Give it a shot!
Some notes about writing to a file:
If the file you're writing to does NOT currently exist, Python will try to create it for you. In most cases this should be fine (but we'll get to outstanding cases in Part 3 of this lecture).
If the file you're writing to DOES already exist, Python will overwrite everything in the file with the new content. As in, everything that was in the file before will be erased.
That second point seems a bit harsh, doesn't it? Luckily, there is recourse.
Appending to an existing file
If you find yourself in the situation of writing to a file multiple times, and wanting to keep what you wrote to the file previously, then you're in the market for appending to a file.
This works exactly the same as writing to a file, with one small wrinkle:
End of explanation
"""
data_to_save = "This is important data. Definitely worth saving.\n"
with open("outfile.txt", "w") as file_object:
file_object.write(data_to_save)
data_to_save = "This is ALSO important data. BOTH DATA ARE IMPORTANT."
with open("outfile.txt", "a") as file_object:
file_object.write(data_to_save)
with open("outfile.txt", "r") as file_object:
contents = file_object.readlines()
print("LINE 1: {}".format(contents[0]))
print("LINE 2: {}".format(contents[1]))
"""
Explanation: The only change that was made was switching the "w" in the open() method to "a" for, you guessed it, Append. If you look in outfile.txt, you should see both lines of text we've written.
Some notes on appending to files:
If the file does NOT already exist, then using "a" in open() is functionally identical to using "w".
You only need to use append mode if you closed the file descriptor to that file previously. If you have an open file descriptor, you can call write() multiple times; each call will append the text to the previous text. It's only when you close a descriptor, but then want to open up another one to the same file, that you'd need to switch to append mode.
Let's put together what we've seen by writing to a file, appending more to it, and then reading what we wrote.
End of explanation
"""
with open("alicee.txt", "r") as file_object:
contents = file_object.readlines()
print(contents[0])
"""
Explanation: Part 2: Preventing errors
This aspect of programming hasn't been very heavily emphasized--that of error handling--because for the most part, data science is about building models and performing computations so you can make inferences from your data.
...except, of course, from nearly every survey that says your average data scientist spends the vast majority of their time cleaning and organizing their data.
Data is messy and computers are fickle. Just because that file was there yesterday doesn't mean it'll still be there tomorrow. When you're reading from and writing to files, you'll need to put in checks to make sure things are behaving the way you expect, and if they're not, that you're handling things gracefully.
We're going to become good friends with try and except whenever we're dealing with files. For example, let's say I want to read again from that Alice in Wonderland file I had:
End of explanation
"""
filename = "alicee.txt"
try:
with open(filename, "r") as file_object:
contents = file_object.readlines()
print(contents[0])
except FileNotFoundError:
print("Sorry, the file '{}' does not seem to exist.".format(filename))
"""
Explanation: Whoops. In this example, I simply misnamed the file. In practice, maybe the file was moved; maybe it was renamed; maybe you're getting the file from the user and they incorrectly specified the name. Maybe the hard drive failed, or any number of other "acts of God." Whatever the reason, your program should be able to handle missing files.
You could probably code this up yourself:
End of explanation
"""
import os
print(os.getcwd())
"""
Explanation: Pay attention to this: this will most likely show up on future assignments / exams, and you'll be expected to properly handle missing files or incorrect filenames.
Part 3: Moving around the filesystem
Turns out, you can automate a significant chunk of the double-clicking-around that you do on a Windows machine looking for files. Python has an os module that is very powerful.
There are a ton of utilities in this module--I encourage you to check out everything it can do--but I'll highlight a few of my favorites here.
getcwd
This is one of your mainstays: it tells you the full path to where your Python program is currently executing.
"cwd" is shorthand for "current working directory."
End of explanation
"""
print(os.getcwd())
# Go up one directory.
os.chdir("..")
print(os.getcwd())
"""
Explanation: chdir
You know where you are using getcwd, but you actually want to be somewhere else. chdir is the Python equivalent of typing cd on the command line, or quite literally double-clicking a folder.
End of explanation
"""
for item in os.listdir("."): # A dot "." means the current directory
print(item)
"""
Explanation: listdir
Now you've made it into your directory of choice, but you need to know what files exist. You can use listdir to, literally, list the directory contents.
End of explanation
"""
import os.path
if os.path.exists("/Users/squinn"):
print("Path exists!")
else:
print("No such directory.")
if os.path.exists("/something/arbitrary"):
print("Path exists!")
else:
print("No such directory.")
"""
Explanation: Modifying the filesystem
There are a ton of functions at your disposal to actually make changes to the filesystem.
os.mkdir and os.rmdir: create and delete directories, respectively
os.remove and os.unlink: delete files (both are equivalent)
os.rename: renames a file or directory to something else (equivalent to "move", or "mv"); a quick sketch of these calls in action follows below
os.path
The base os module has a lot of high-level, basic tools for interacting with the filesystem. If you find that your needs exceed what this module can provide, it has a submodule for more specific filesystem interactions.
For instance: testing if a file or directory even exists at all?
End of explanation
"""
if os.path.exists("/Users/squinn") and os.path.isdir("/Users/squinn"):
print("It exists, and it's a directory.")
else:
print("Something was false.")
"""
Explanation: Once you know a file or directory exists, you can get even more specific: is it a file, or a directory?
Use os.path.isdir and os.path.isfile to find out.
End of explanation
"""
img_name = "my_cat.png"
username = "squinn"
base_path = "C:\\images"
full_path = base_path + "\\" + username + "\\" + img_name
print(full_path)
"""
Explanation: join
This is a relatively unassuming function that is quite possibly the single most useful one; I certainly find myself using it all the time.
To illustrate: you're running an image hosting site. You store your images on a hard disk, perhaps at C:\\images\\. Within that directory, you stratify by user: each user has their own directory, which has the same name as their username on the site, and all the images that user uploads are stored in their folder.
For example, if I was a user and my username was squinn, my uploaded images would be stored at C:\\images\\squinn\\. A different user, hunter2, would have their images stored at C:\\images\\hunter2\\. And so on.
Let's say I've uploaded a new image, my_cat.png. I need to stitch a full path together to move the image to that path.
One way to do it would be hard-coded (hard-core?):
End of explanation
"""
import os.path
img_name = "my_cat.png"
username = "squinn"
base_path = "C:\\images"
full_path = os.path.join(base_path, username, img_name)
print(full_path)
"""
Explanation: That...works. I mean, it works, but it ain't pretty. Also, this will fail miserably if you take this code verbatim and run it on a *nix machine!
Enter join. This not only takes the hard-coded-ness out of the process, but is also operating system aware: that is, it will add the needed directory separator for your specific OS, without any input on your part.
End of explanation
"""
|
NYUDataBootcamp/Projects
|
UG_S16/Aung-Merrick-NYC311Requests.ipynb
|
mit
|
import pandas as pd
url1='...'
url2='/Aung-Merrick-NYC311Requests/DataBootcamp311Data.csv'
url= url1+url2
data= pd.read_csv(url)
"""
Explanation: Data Bootcamp Project
Lu Maw Aung, Patrick Merrick
An Analysis of NYC 311 Service Requests from 2010/16
May 12, 2016
311 is New York City's main source of government information and non-emergency services. Whether you're a resident, business owner, or visitor, help is just a click, text, or call away.
NYC 311 Website
Forward
This report discusses the findings from an analysis performed on NYC 311 Service Request data, available at NYC Open Data [https://nycopendata.socrata.com/Social-Services/311-Service-Requests-from-2010-to-Present/erm2-nwe9]. The data available online has numerous features that can be easily modified. We chose to work only with features that related to location, time, complaint type, and resolution to make the data size more manageable.
Throughout this report, we will describe trends and patterns that we observe through analyzing each feature. The overall organization of this report is also chronological, as each analysis will be broken down by year. Doing so, we hope to show not only year-specific issues and patterns, but overall progress and change of the NYC 311 service from 2010-2016.
Reading-In the File
First we must read in the data. Please input the file directory where you downloaded our zipped file into the '...' section below.
End of explanation
"""
data.columns
"""
Explanation: Below is an overview of the features and variables we will be working with. As stated above, the features selected are related to the complaint/issue, the resolution status, the location of the service request, and the time.
End of explanation
"""
closed_date = []
for x in data['Closed Date']:
if type(x) == float:
closed_date.append('01/01/1990')
else:
closed_date.append(x.rsplit()[0])
data['Closed Date'] = closed_date
data['Closed Date'] = pd.to_datetime(data['Closed Date'], format = "%m/%d/%Y")
created_date = []
for i in data['Created Date']:
created_date.append(i.rsplit()[0])
data['Created Date'] = created_date
data['Created Date']= pd.to_datetime(data['Created Date'],format = "%m/%d/%Y")
data2016= data[data['Created Date'] >= pd.datetime(2016,1,1)]
data2015 = data[data['Created Date'] >= pd.datetime(2015,1,1)]
data2015 = data2015[data2015['Created Date'] < pd.datetime(2016,1,1)]
data2014 = data[data['Created Date'] >= pd.datetime(2014,1,1)]
data2014 = data2014[data2014['Created Date'] < pd.datetime(2015,1,1)]
data2013 = data[data['Created Date'] >= pd.datetime(2013,1,1)]
data2013 = data2013[data2013['Created Date'] < pd.datetime(2014,1,1)]
data2012 = data[data['Created Date'] >= pd.datetime(2012,1,1)]
data2012 = data2012[data2012['Created Date'] < pd.datetime(2013,1,1)]
data2011 = data[data['Created Date'] >= pd.datetime(2011,1,1)]
data2011 = data2011[data2011['Created Date'] < pd.datetime(2012,1,1)]
data2010 = data[data['Created Date'] >= pd.datetime(2010,1,1)]
data2010 = data2010[data2010['Created Date'] < pd.datetime(2011,1,1)]
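# A more compact, equivalent way to split by year (illustrative only; the
# per-year dataframes defined above are what the rest of this notebook uses):
frames_by_year = {year: frame for year, frame in data.groupby(data['Created Date'].dt.year)}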
"""
Explanation: The next step is to clean and manipulate the data a bit so that it is in a format we can work with easily. As we want to split the data by year, we decided to change the 'Created Date' and 'Closed Date' features into a datetime series. The request Creation Date goes back to 2010. We created a new dataframe for each year based on their creation dates.
Closed Date has some records with float variables but these records are held in place with a Jan 1, 1990, as seen below. This will be explained later in the Closed Date-Created Date Range section, but the main reason for doing this is because that is the closed date of records we will be ignoring in our Closed Date-Created Date range analysis.
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
"""
Explanation: The Requests
First we will examine the nature of these 311 requests each year, in terms of the number of requests, the most common issues, and the agencies that received the most requests. This will allow us to compare these requests by year.
First, we must import matplotlib packages to plot our findings.
End of explanation
"""
numberofrequests = pd.DataFrame()
numberofrequests['Year'] = ['2010','2011','2012','2013','2014','2015','2016']
requests = []
x = [data2010,data2011,data2012,data2013,data2014,data2015,data2016]
for i in x:
requests.append(len(i))
numberofrequests['Number_of_Requests'] = requests
numberofrequests= numberofrequests.set_index('Year')
fig, ax = plt.subplots()
numberofrequests.plot(ax=ax, kind='bar', color='0.8', grid=True)
ax.set_title('Number of 311 Requests by Year', size=14)
ax.set_xlabel('Year')
ax.set_ylabel('Number of Requests')
ax.legend(['Requests'], loc=0)
print(numberofrequests)
"""
Explanation: We analyze 311 requests in terms of total number of requests received per year.
End of explanation
"""
complaintquantity = []
complainttype = [[],[]]
for x in range(2010,2017):
if x == 2010:
i = data2010
if x == 2011:
i = data2011
if x == 2012:
i = data2012
if x == 2013:
i = data2013
if x == 2014:
i = data2014
if x == 2015:
i = data2015
if x == 2016:
i = data2016
for z in range(0,5):
complainttype[0].append(x)
complainttype[1].append(i['Complaint Type'].value_counts().head(5).index[z])
complaintquantity.append(i['Complaint Type'].value_counts().head(5)[z])
complainttype = list(zip(*complainttype))
complainttypeindex = pd.MultiIndex.from_tuples(complainttype, names= ['Year', 'Top 5 Issues'])
complaintsdf = pd.DataFrame(index=complainttypeindex)
complaintsdf['Quantity'] = complaintquantity
"""
Explanation: Number of Requests
In terms of quantity, 2015 was the busiest year with over 2 million 311 requests. 2016 has data only up to April, so the roughly 700,000 requests received so far can be estimated to be about a third of this year's total. Overall, the number of requests appears fairly consistent throughout, although the past 3 years have seen steady growth.
Complaint Type
There are over a hundred complaint types filed each year, from issues as common as heating, to as random as 'Squeegee' and 'Literature Request' (Sorry, this report does not attempt to explore what a Squeegee is). Therefore, we decided to look only at the top 5 most common 311 requests each year. The code below constructs a multi-index dataframe for the top 5 issues each year.
End of explanation
"""
fig, ax = plt.subplots(4, 2, figsize=(40,35))
complaintsdf.xs(2010, level='Year').plot(kind='barh', ax=ax[0, 0], color=['m','y','r','g', 'b'] , title='2010')
complaintsdf.xs(2011, level='Year').plot(kind='barh', ax=ax[0, 1], color=['m','y','g','b', 'r'], title='2011')
complaintsdf.xs(2012, level='Year').plot(kind='barh', ax=ax[1, 0], color=['m','y','r','g', 'black'], title='2012')
complaintsdf.xs(2013, level='Year').plot(kind='barh', ax=ax[1, 1], color=['m','y','r','g', 'black'], title='2013')
complaintsdf.xs(2014, level='Year').plot(kind='barh', ax=ax[2, 0], color=['tan','b','m','r', 'gray'], title='2014')
complaintsdf.xs(2015, level='Year').plot(kind='barh', ax=ax[2, 1], color=['tan','b','gray','r', 'honeydew'], title='2015')
complaintsdf.xs(2016, level='Year').plot(kind='barh', ax=ax[3, 0], color=['tan','b','gray','honeydew', 'r'], title='2016')
for i in range(0,4):
for x in range(0,2):
ax[i, x].set_xlabel('Quantity')
ax[i,x].set_ylabel('Complaint Type')
ax[i,x].legend()
"""
Explanation: Now, we plot this dataframe by year. The graph below shows the top 5 issues by quantity each year. It is also color-coded by the complaint type.
End of explanation
"""
agencyquantity = []
agencytype = [[],[]]
for x in range(2010,2017):
if x == 2010:
i = data2010
if x == 2011:
i = data2011
if x == 2012:
i = data2012
if x == 2013:
i = data2013
if x == 2014:
i = data2014
if x == 2015:
i = data2015
if x == 2016:
i = data2016
for z in range(0,5):
agencytype[0].append(x)
agencytype[1].append(i['Agency'].value_counts().head(5).index[z])
agencyquantity.append(i['Agency'].value_counts().head(5)[z])
agencytype = list(zip(*agencytype))
agencytypeindex = pd.MultiIndex.from_tuples(agencytype, names= ['Year', 'Top 5 Agencies'])
agencydf = pd.DataFrame(index=agencytypeindex)
agencydf['Quantity'] = agencyquantity
fig, axe = plt.subplots(4, 2, figsize=(40,35))
agencydf.xs(2010, level='Year').plot(kind='barh', ax=axe[0, 0], color=['m','y','r','g', 'b'] , title='2010')
agencydf.xs(2011, level='Year').plot(kind='barh', ax=axe[0, 1], color=['m','y','g','r', 'b'], title='2011')
agencydf.xs(2012, level='Year').plot(kind='barh', ax=axe[1, 0], color=['m','y','g','r', 'b'], title='2012')
agencydf.xs(2013, level='Year').plot(kind='barh', ax=axe[1, 1], color=['m','y','g','r', 'b'], title='2013')
agencydf.xs(2014, level='Year').plot(kind='barh', ax=axe[2, 0], color=['m','y','g','r', 'b'], title='2014')
agencydf.xs(2015, level='Year').plot(kind='barh', ax=axe[2, 1], color=['m','g','y','r', 'b'], title='2015')
agencydf.xs(2016, level='Year').plot(kind='barh', ax=axe[3, 0], color=['m','g','y','r', 'b'], title='2016')
for i in range(0,4):
for x in range(0,2):
axe[i, x].set_xlabel('Quantity')
axe[i,x].set_ylabel('Agency')
axe[i,x].legend()
"""
Explanation: As seen in the bar graphs above, there are recurring common issues throughout 2010-2016. Street Light Condition, Street Condition, and Heating are some of the most common issues. These complaint types were among the top 5 complaints by quantity in at least 5 of the 7 years.
By Agency
The agencies that receive the most 311 requests are much more consistent throughout this time horizon. From 2010 to 2016, the same 5 agencies received the most complaints. This is illustrated below. Again, we first prepare the data, then graph it.
End of explanation
"""
agencykey= {'DSNY':'Department of Sanitation New York', 'NYPD': 'New York Police Department',
'DEP':'Department of Environmental Protection', 'DOT': 'Department of Transportation',
            'HPD': 'NYC Housing Preservation and Development'}
for i in agencykey.keys():
print(i + " stands for " + agencykey[i])
"""
Explanation: The agencies are stated as acronyms. The key for this can be generated by running the code below.
End of explanation
"""
closedcomplaints= []
totalcomplaints=[]
dataframes= [data2010,data2011,data2012,data2013,data2014,data2015,data2016]
for i in dataframes:
closedcomplaints.append(i['Status'].value_counts()[0])
totalcomplaints.append(len(i))
closeddf = pd.DataFrame()
closeddf['Closed'] = closedcomplaints
closeddf['Total'] = totalcomplaints
percent = []
for i in range(len(closeddf)):
percent.append(closeddf['Closed'][i]/closeddf['Total'][i])
closeddf['Percent'] = percent
closeddf['Year'] = ['2010', '2011', '2012', '2013', '2014', '2015', '2016']
closeddf = closeddf.set_index('Year')
fig, ax1 = plt.subplots()
closeddf['Percent'].plot(ax=ax1, kind='line', color='g', grid=True)
ax1.set_title('Closed Complaints Percentage by Year', size=14)
ax1.set_xlabel('Year')
ax1.set_ylabel('Percent')
ax1.legend(['Percent'], loc=0)
"""
Explanation: Analyzing Resolution Effectiveness
A key question we are trying to explore in this project is how effectively these 311 requests get resolved. Below, we outline resolution details by year. Our key variable of interest in exploring resolution effectiveness is Status.
In Status, we are specifically looking for the 'Closed' status. This is assigned for a variety of reasons: no violations were found, the complaint has been addressed, the complaint was deferred, etc. Other statuses, like 'Assigned' and 'Open', indicate complaints that are still being addressed or in progress. Overall, Closed gives a good idea of requests that at least get a response, and the other statuses also indicate progress towards resolution.
Issues with Our Target Variable:
There are some issues with our target variable that affect our analysis. The 'Closed' status can be inflated if requests are simply deferred instead of being addressed. When agencies cannot resolve a complaint for reasons like tenants not being home or a building not being reachable, they still close the complaint. This also affects our analysis, as such requests are not really being addressed, only acknowledged. Nevertheless, we believe the Status feature still gives a good picture of the agencies' responsiveness, regardless of what the response is. With the data we are working with, this is the best we can do.
Below, we graph the percentage of complaints that were closed each year.
End of explanation
"""
secondsrange = pd.DataFrame()
secondsrange['Year'] = ['2010','2011','2012','2013','2014','2015','2016']
secondsrange['Average Closed-Created Time'] = [5653, 4995, 6226, 6013,5289,4897, 15394]
secondsrange = secondsrange.set_index('Year')
fig, ax2 = plt.subplots()
secondsrange.plot(ax=ax2, kind='line', color='0.3', grid=True)
ax2.set_title('Time Difference Between Closed-Created Dates', size=14)
ax2.set_xlabel('Year')
ax2.set_ylabel('Seconds')
ax2.legend(['Time Difference'], loc=0)
print(secondsrange)
"""
Explanation: Complaint Closed Rate
2014 was the year with the highest closed-complaints rate, and 2011 the lowest. 2016 is currently lower, but since the year is still in progress, it is unfair to evaluate its performance based on only four months of data.
Closed Date-Created Date Range
As stated earlier, a main issue with using Closed status is the clouded results we get with deferred or unaddressed requests. That is why we thought another metric to judge resolution efficiency could be the spread between the closed date and created date. The code below explores this.
Issue: For some reason, smoking complaints have a closed date of 1900-01-01 in all years. Therefore, we chose to ignore these specific records in calculating our close date-created date time spread, in addition to the other records we fixed earlier at the start of this report.
The code below resets the index for each dataframe, and for each record with a closed status in the dataframe, adds the time difference between the Closed Date and Created Date to a new list. Then a new timedelta variable of 0 days is created. For every time difference value in the new list with a value of greater than 0 days, we add the time delta to the new variable. This gives us the total Closed Date-Created Date time spread per year.
This code is a little time-intensive, so each year is partitioned into its own section. The average Closed Date-Created Date range (total time spread / total number of closed complaints) is also provided right below if you wish to skip this part due to time constraints.
End of explanation
"""
data2010 = data2010.reset_index()
daterange2010 = []
for a in range(len(data2010)):
if data2010['Status'][a] == 'Closed':
daterange2010.append(data2010['Closed Date'][a] - data2010['Created Date'][a])
daysgap2010 = pd.Timedelta(days=0)
for i in range(len(daterange2010)):
if daterange2010[i] < pd.Timedelta(days=0):
daterange2010[i] = pd.Timedelta(days=0)
daysgap2010= daysgap2010 + daterange2010[i]
else:
daysgap2010= daysgap2010 + daterange2010[i]
data2011 = data2011.reset_index()
daterange2011 = []
for a in range(len(data2011)):
if data2011['Status'][a] == 'Closed':
daterange2011.append(data2011['Closed Date'][a] - data2011['Created Date'][a])
daysgap2011 = pd.Timedelta(days=0)
for i in range(len(daterange2011)):
if daterange2011[i] < pd.Timedelta(days=0):
daterange2011[i] = pd.Timedelta(days=0)
daysgap2011= daysgap2011 + daterange2011[i]
else:
daysgap2011= daysgap2011 + daterange2011[i]
data2012 = data2012.reset_index()
daterange2012 = []
for a in range(len(data2012)):
if data2012['Status'][a] == 'Closed':
daterange2012.append(data2012['Closed Date'][a] - data2012['Created Date'][a])
daysgap2012 = pd.Timedelta(days=0)
for i in range(len(daterange2012)):
if daterange2012[i] < pd.Timedelta(days=0):
daterange2012[i] = pd.Timedelta(days=0)
daysgap2012= daysgap2012 + daterange2012[i]
else:
daysgap2012= daysgap2012 + daterange2012[i]
data2013 = data2013.reset_index()
daterange2013 = []
for a in range(len(data2013)):
if data2013['Status'][a] == 'Closed':
daterange2013.append(data2013['Closed Date'][a] - data2013['Created Date'][a])
daysgap2013= pd.Timedelta(days=0)
for i in range(len(daterange2013)):
if daterange2013[i] < pd.Timedelta(days=0):
daterange2013[i] = pd.Timedelta(days=0)
        daysgap2013= daysgap2013 + daterange2013[i]
else:
daysgap2013= daysgap2013 + daterange2013[i]
data2014 = data2014.reset_index()
daterange2014 = []
for a in range(len(data2014)):
if data2014['Status'][a] == 'Closed':
daterange2014.append(data2014['Closed Date'][a] - data2014['Created Date'][a])
daysgap2014= pd.Timedelta(days=0)
for i in range(len(daterange2014)):
if daterange2014[i] < pd.Timedelta(days=0):
daterange2014[i] = pd.Timedelta(days=0)
daysgap2014= daysgap2014 + daterange2014[i]
else:
daysgap2014= daysgap2014 + daterange2014[i]
data2015 = data2015.reset_index()
daterange2015 = []
for a in range(len(data2015)):
if data2015['Status'][a] == 'Closed':
daterange2015.append(data2015['Closed Date'][a] - data2015['Created Date'][a])
daysgap2015= pd.Timedelta(days=0)
for i in range(len(daterange2015)):
if daterange2015[i] < pd.Timedelta(days=0):
daterange2015[i] = pd.Timedelta(days=0)
daysgap2015= daysgap2015 + daterange2015[i]
else:
daysgap2015= daysgap2015 + daterange2015[i]
data2016 = data2016.reset_index()
daterange2016 = []
for a in range(len(data2016)):
if data2016['Status'][a] == 'Closed':
daterange2016.append(data2016['Closed Date'][a] - data2016['Created Date'][a])
daysgap2016= pd.Timedelta(days=0)
for i in range(len(daterange2016)):
if daterange2016[i] < pd.Timedelta(days=0):
daterange2016[i] = pd.Timedelta(days=0)
daysgap2016= daysgap2016 + daterange2016[i]
else:
daysgap2016= daysgap2016 + daterange2016[i]
print(daysgap2010/len(daterange2010))
print(daysgap2011/len(daterange2011))
print(daysgap2012/len(daterange2012))
print(daysgap2013/len(daterange2013))
print(daysgap2014/len(daterange2014))
print(daysgap2015/len(daterange2015))
print(daysgap2016/len(daterange2016))
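# The same per-year averages can be computed more concisely (a sketch, assuming the
# per-year dataframes above with datetime-typed 'Created Date'/'Closed Date' columns):
for year, df in [(2010, data2010), (2011, data2011), (2012, data2012), (2013, data2013),
                 (2014, data2014), (2015, data2015), (2016, data2016)]:
    closed = df[df['Status'] == 'Closed']
    spread = closed['Closed Date'] - closed['Created Date']
    spread[spread < pd.Timedelta(0)] = pd.Timedelta(0)  # clamp negative gaps, as above
    print(year, spread.mean())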
"""
Explanation: As you can see, it takes roughly an hour and a half, on average, for a complaint to be addressed and closed. This is consistent throughout 2010-2015. 2016 currently shows a much higher average time difference of about 4 hours. There are a couple of possible explanations: complaints may get closed more quickly towards the later part of the year, or 2016 may simply be running less efficiently than the previous years.
The proof of work is shown below.
Overall, 311 appears to be a non-emergency service that has seen very similar issues and complaints throughout 2010-2016. The recurring issues, like heating/cooling and street conditions, highlight areas that deserve further attention. The fact that the same five agencies handle the most requests each year can inform the government's resource-allocation decisions. The closed-rate and resolution analysis reveals that 311 is good at responding to and addressing complaints. Moving forward, we believe this report can be improved by further classifying and analyzing the Status and Resolution Description features to better understand the nature of these complaint resolutions.
End of explanation
"""
|
GregDMeyer/dynamite
|
examples/1-BuildingOperators.ipynb
|
mit
|
from dynamite.operators import sigmax, sigmay, sigmaz, index_product
# product of sigmaz along the spin chain up to index k
k = 4
index_product(sigmaz(), size=k)
# with that, we can easily build our operator
def majorana(i):
k = i//2
edge_op = sigmay(k) if (i%2) else sigmax(k)
bulk = index_product(sigmaz(), size=k)
return edge_op*bulk
# let's check it out!
majorana(8)
"""
Explanation: Building operators: the Sachdev-Ye-Kitaev model on Majoranas
dynamite can be used for not just the obvious spin chain problems, but anything that can be mapped onto a set of spins. Here we will build a model of interacting Majoranas.
Defining Majoranas on a spin chain
There are multiple ways to define a Majorana creation/annihilation operator in a spin basis. In particular, we want to satisfy the anticommutation relation
$$\{ \chi_i, \chi_j \} = 2 \delta_{ij}$$
where $\delta_{ij}$ is the Kronecker delta. It turns out we can do so with the following mapping:
$$\chi_i = \sigma_{\lfloor i/2 \rfloor}^{x/y} \prod_{k=0}^{\lfloor i/2 \rfloor - 1} \sigma^z_k$$
where that first Pauli matrix is $\sigma^x$ if $i$ is even, and $\sigma^y$ if $i$ is odd.
This basis can be shown fairly easily to satisfy the anticommutation relation we desired. Now let's implement it in dynamite!
Implementation
We need just a couple tools for this: the Pauli matrices and the product operator.
End of explanation
"""
from dynamite.operators import zero, identity
def anticommutator(a, b):
return a*b + b*a
def check_anticom():
print('i', 'j', 'correct', sep='\t')
print('=======================')
for i in range(3):
for j in range(3):
if i == j:
correct_val = 2*identity()
else:
correct_val = zero()
print(i, j, anticommutator(majorana(i), majorana(j)) == correct_val, sep='\t')
check_anticom()
"""
Explanation: Looks like exactly what we wanted! We can even check that the anticommutation relation holds:
End of explanation
"""
# rename our function, so that we can set majorana to be the dynamite one
my_majorana = majorana
from dynamite.extras import majorana
majorana(8)
majorana(8) == my_majorana(8)
"""
Explanation: It was instructive to build it ourselves, but dynamite actually has a Majorana operator built-in, for ease of use. It is the same as ours:
End of explanation
"""
from dynamite.operators import op_sum, op_product, index_sum
"""
Explanation: Definition of the SYK Hamiltonian
We want to build the model
$$H_{\text{SYK}} = \sum_{i<j<k<l} J_{ijkl} \cdot \chi_i \chi_j \chi_k \chi_l$$
where the $\chi_i$ represent a Majorana creation/annihilation operator for particle index $i$, the $J_{ijkl}$ are random coefficients, and the sum runs over all index combinations with $i<j<k<l$.
First we must import the things we need:
End of explanation
"""
from itertools import combinations
def get_all_indices(n):
'''
Get all combinations of indices i,j,k,l for a system of n Majoranas.
'''
return combinations(range(n), 4)
# does it do what we expect?
for n,idxs in enumerate(get_all_indices(6)):
print(idxs)
if n > 5:
break
print('...')
"""
Explanation: We need to generate all combinations of indices for i,j,k,l, without repeats. Sounds like a task for Python's itertools:
End of explanation
"""
import numpy as np
from numpy.random import seed, normal
# abbreviate
maj = majorana
def syk_hamiltonian(n, random_seed=0):
'''
Build the SYK Hamiltonian for a system of n Majoranas.
'''
# so the norm scales correctly
factor = np.sqrt(6/(n**3))/4
# it's very important to have the same seed on each process if we run in parallel!
# if we don't set the seed, each process will have a different operator!!
seed(random_seed)
return op_sum(factor*normal(-1,1)*maj(i)*maj(j)*maj(k)*maj(l) for i,j,k,l in get_all_indices(n))
"""
Explanation: Looks good! Now let's use that to build the Hamiltonian:
End of explanation
"""
syk_hamiltonian(5)
"""
Explanation: Let's try it for a (very) small system!
End of explanation
"""
H = syk_hamiltonian(16)
"""
Explanation: Neat, looks good! Why don't we build it for a bigger system, say 16 Majoranas? (which lives on 8 spins)
End of explanation
"""
def syk_hamiltonian_fast(n, random_seed=0):
'''
Build the SYK Hamiltonian for a system of n Majoranas.
'''
factor = np.sqrt(6/(n**3))/4
seed(random_seed)
majs = [maj(i) for i in range(n)]
return op_sum(op_product(majs[i] for i in idxs).scale(factor*normal(-1,1)) for idxs in get_all_indices(n))
# make sure they agree
assert(syk_hamiltonian(10) == syk_hamiltonian_fast(10))
# check which one is faster!
from timeit import timeit
orig = timeit('syk_hamiltonian(16)', number=1, globals=globals())
fast = timeit('syk_hamiltonian_fast(16)', number=1, globals=globals())
print('syk_hamiltonian: ', orig, 's')
print('syk_hamiltonian_fast:', fast, 's')
"""
Explanation: Improving operator build performance
Yikes, that was awfully slow for such a small system size. The problem is that the individual Majorana operators are being rebuilt for every term of the sum, and there are a lot of terms. Maybe we can do better by precomputing the Majorana operators. We also use op_product and operator.scale to avoid making unnecessary copies.
End of explanation
"""
m8 = majorana(8)
print('spin chain length:', m8.get_length())
"""
Explanation: That's a huge speedup!
One last thing to note. It may seem odd that we've never actually specified a spin chain length during this whole process. Don't we need to tell dynamite how many spins we need, and thus how big to make our matrices? If the spin chain length is not specified, dynamite just assumes it to extend to the position of the last non-trivial Pauli operator:
End of explanation
"""
print(m8.table())
"""
Explanation: We can use operator.table() to take a look at it:
End of explanation
"""
|
AllenDowney/ProbablyOverthinkingIt
|
multinorm.ipynb
|
mit
|
from __future__ import print_function, division
import numpy as np
import pandas as pd
from scipy.stats import multivariate_normal, wishart
from itertools import product, starmap
import thinkbayes2
import thinkplot
%matplotlib inline
"""
Explanation: Bayesian estimation with multivariate normal distributions
Copyright 2016 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
"""
a = np.array([122.8, 115.5, 102.5, 84.7, 154.2, 83.7,
122.1, 117.6, 98.1, 111.2, 80.3, 110.0,
117.6, 100.3, 107.8, 60.2])
b = np.array([82.6, 99.1, 74.6, 51.9, 62.3, 67.2,
82.4, 97.2, 68.9, 77.9, 81.5, 87.4,
92.4, 80.8, 74.7, 42.1])
n = len(a)
n
"""
Explanation: This notebook contains a solution to a problem posted on Reddit; here's the original statement of the problem:
So, I have two sets of data where the elements correspond to each other. I'm trying to find out the probability that (91.9 <= A <= 158.3) and (56.4 <= B <= 100). I know that P(91.9 <= A <= 158.3) = 0.727098 and that P(56.4 <= B <= 100) = 0.840273, given that A is a normal distribution with mean 105.5 and standard deviation 21.7 and that B is a normal distribution with mean 76.4 and standard deviation 15.4. However, since they are dependent events, P(BA)=P(A)P(B|A)=P(B)P(A|B). Is there any way that I can find out P(A|B) and P(B|A) given the data that I have?
The original poster added this clarification:
I'm going to give you some background on what I'm trying to do here first. I'm doing sports analysis trying to find the best quarterback of the 2015 NFL season using passer rating and quarterback rating, two different measures of how the quarterback performs during a game. The numbers in the sets above are the different ratings for each of the 16 games of the season (A being passer rating, B being quarterback rating, the first element being the first game, the second element being the second, etc.) The better game the quarterback has, the higher each of the two measures will be; I'm expecting that they're correlated and dependent on each other to some degree. I'm assuming that they're normally distributed because most things done by humans tend to be normally distributed.
As a first step, let's look at the data. I'll put the two datasets into NumPy arrays.
End of explanation
"""
thinkplot.Scatter(a, b, alpha=0.7)
"""
Explanation: And make a scatter plot:
End of explanation
"""
X = np.array([a, b])
"""
Explanation: It looks like modeling this data with a bi-variate normal distribution is a reasonable choice.
Let's make a single array out of it:
End of explanation
"""
x̄ = X.mean(axis=1)
print(x̄)
"""
Explanation: And compute the sample mean
End of explanation
"""
std = X.std(axis=1)
print(std)
"""
Explanation: Sample standard deviation
End of explanation
"""
S = np.cov(X)
print(S)
"""
Explanation: Covariance matrix
End of explanation
"""
corrcoef = np.corrcoef(a, b)
print(corrcoef)
"""
Explanation: And correlation coefficient
End of explanation
"""
def make_array(center, stderr, m=11, factor=3):
return np.linspace(center-factor*stderr,
center+factor*stderr, m)
μ_a = x̄[0]
μ_b = x̄[1]
σ_a = std[0]
σ_b = std[1]
ρ = corrcoef[0][1]
μ_a_array = make_array(μ_a, σ_a / np.sqrt(n))
μ_b_array = make_array(μ_b, σ_b / np.sqrt(n))
σ_a_array = make_array(σ_a, σ_a / np.sqrt(2 * (n-1)))
σ_b_array = make_array(σ_b, σ_b / np.sqrt(2 * (n-1)))
#ρ_array = make_array(ρ, np.sqrt((1 - ρ**2) / (n-2)))
ρ_array = make_array(ρ, 0.15)
def min_max(array):
return min(array), max(array)
print(min_max(μ_a_array))
print(min_max(μ_b_array))
print(min_max(σ_a_array))
print(min_max(σ_b_array))
print(min_max(ρ_array))
"""
Explanation: Now, let's start thinking about this as a Bayesian estimation problem.
There are 5 parameters we would like to estimate:
The means of the two variables, μ_a, μ_b
The standard deviations, σ_a, σ_b
The coefficient of correlation, ρ.
As a simple starting place, I'll assume that the prior distributions for these variables are uniform over all possible values.
I'm going to use a mesh algorithm to compute the joint posterior distribution, so I'll "cheat" and construct the mesh using conventional estimates for the parameters.
For each parameter, I'll compute a range of possible values where
The center of the range is the value estimated from the data.
The width of the range is 6 standard errors of the estimate.
The likelihood of any point outside this mesh is so low, it's safe to ignore it.
Here's how I construct the ranges:
End of explanation
"""
class Params:
    def __init__(self, μ, Σ):
        self.μ = μ
        self.Σ = Σ
    def __lt__(self, other):
        # always returns False, which is enough to make Params sortable without comparing numpy arrays
        return (self.μ, self.Σ) < (self.μ, self.Σ)
def pack(μ_a, μ_b, σ_a, σ_b, ρ):
    μ = np.array([μ_a, μ_b])
    cross = ρ * σ_a * σ_b
    Σ = np.array([[σ_a**2, cross],
                  [cross, σ_b**2]])
    return Params(μ, Σ)
"""
Explanation: Although the mesh is constructed in 5 dimensions, for doing the Bayesian update, I want to express the parameters in terms of a vector of means, μ, and a covariance matrix, Σ.
Params is an object that encapsulates these values. pack is a function that takes 5 parameters and returns a Params object.
End of explanation
"""
mesh = product(μ_a_array, μ_b_array,
               σ_a_array, σ_b_array, ρ_array)
"""
Explanation: Now we can make a prior distribution. First, mesh is the Cartesian product of the parameter arrays. Since there are 5 dimensions with 11 points each, the total number of points is 11**5 = 161,051.
End of explanation
"""
mesh = starmap(pack, mesh)
"""
Explanation: The result is an iterator. We can use itertools.starmap to apply pack to each of the points in the mesh:
End of explanation
"""
class MultiNorm(thinkbayes2.Suite):
    def Likelihood(self, data, hypo):
        x̄, S, n = data
        dist_x̄ = multivariate_normal(hypo.μ, hypo.Σ/n)
        dist_S = wishart(n-1, hypo.Σ)
        return dist_x̄.pdf(x̄) * dist_S.pdf((n-1) * S)
"""
Explanation: Now we need an object to encapsulate the mesh and perform the Bayesian update. MultiNorm represents a map from each Params object to its probability.
It inherits Update from thinkbayes2.Suite and provides Likelihood, which computes the probability of the data given a hypothetical set of parameters.
If we know the mean is μ and the covariance matrix is Σ:
The sampling distribution of the mean, x̄, is multivariate normal with parameters μ and Σ/n.
The sampling distribution of (n-1) S is Wishart with parameters n-1 and Σ.
So the likelihood of the observed summary statistics, x̄ and S, is the product of two probability densities:
The pdf of the multivariate normal distribution evaluated at x̄.
The pdf of the Wishart distribution evaluated at (n-1) S.
End of explanation
"""
suite = MultiNorm(mesh)
"""
Explanation: Now we can initialize the suite with the mesh.
End of explanation
"""
%time suite.Update((x̄, S, n))
"""
Explanation: And update it using the data (the return value is the total probability of the data, aka the normalizing constant). This takes a minute or two on my machine (with m=11).
End of explanation
"""
sample = suite.MakeCdf().Sample(300)
"""
Explanation: Now to answer the original question, about the conditional probabilities of A and B, we can either enumerate the parameters in the posterior or draw a sample from the posterior.
Since we don't need a lot of precision, I'll draw a sample.
End of explanation
"""
def generate(μ, Σ, sample_size):
    return np.random.multivariate_normal(μ, Σ, sample_size)
# run an example using sample stats
fake_X = generate(x̄, S, 300)
"""
Explanation: For a given pair of values, μ and Σ, in the sample, we can generate a simulated dataset.
The size of the simulated dataset is arbitrary, but should be large enough to generate a smooth distribution of P(A|B) and P(B|A).
End of explanation
"""
def conditional_probs(sample):
df = pd.DataFrame(sample, columns=['a', 'b'])
pA = df[(91.9 <= df.a) & (df.a <= 158.3)]
pB = df[(56.4 <= df.b) & (df.b <= 100)]
pBoth = pA.index.intersection(pB.index)
pAgivenB = len(pBoth) / len(pB)
pBgivenA = len(pBoth) / len(pA)
return pAgivenB, pBgivenA
conditional_probs(fake_X)
"""
Explanation: The following function takes a sample of $a$ and $b$ and computes the conditional probabilities P(A|B) and P(B|A)
End of explanation
"""
def make_predictive_distributions(sample):
pmf = thinkbayes2.Joint()
for params in sample:
        fake_X = generate(params.μ, params.Σ, 300)
probs = conditional_probs(fake_X)
pmf[probs] += 1
pmf.Normalize()
return pmf
predictive = make_predictive_distributions(sample)
"""
Explanation: Now we can loop through the sample of parameters, generate simulated data for each, and compute the conditional probabilities:
End of explanation
"""
thinkplot.Cdf(predictive.Marginal(0).MakeCdf())
predictive.Marginal(0).Mean()
"""
Explanation: Then pull out the posterior predictive marginal distribution of P(A|B), and print the posterior predictive mean:
End of explanation
"""
thinkplot.Cdf(predictive.Marginal(1).MakeCdf())
predictive.Marginal(1).Mean()
"""
Explanation: And then pull out the posterior predictive marginal distribution of P(B|A), with the posterior predictive mean
End of explanation
"""
def unpack(μ, Σ):
    μ_a = μ[0]
    μ_b = μ[1]
    σ_a = np.sqrt(Σ[0, 0])
    σ_b = np.sqrt(Σ[1, 1])
    ρ = Σ[0, 1] / σ_a / σ_b
    return μ_a, μ_b, σ_a, σ_b, ρ
"""
Explanation: We don't really care about the posterior distributions of the parameters, but it's good to take a look and make sure they are not crazy.
The following function takes ฮผ and ฮฃ and unpacks them into a tuple of 5 parameters:
End of explanation
"""
def make_marginals(suite):
joint = thinkbayes2.Joint()
for params, prob in suite.Items():
        t = unpack(params.μ, params.Σ)
joint[t] = prob
return joint
marginals = make_marginals(suite)
"""
Explanation: So we can iterate through the posterior distribution and make a joint posterior distribution of the parameters:
End of explanation
"""
thinkplot.Cdf(marginals.Marginal(0).MakeCdf())
thinkplot.Cdf(marginals.Marginal(1).MakeCdf());
"""
Explanation: And here are the posterior marginal distributions for μ_a and μ_b
End of explanation
"""
thinkplot.Cdf(marginals.Marginal(2).MakeCdf())
thinkplot.Cdf(marginals.Marginal(3).MakeCdf());
"""
Explanation: And here are the posterior marginal distributions for σ_a and σ_b
End of explanation
"""
thinkplot.Cdf(marginals.Marginal(4).MakeCdf());
"""
Explanation: Finally, the posterior marginal distribution for the correlation coefficient, ρ
End of explanation
"""
raise Exception("YouShallNotPass")
def estimate(X):
return X.mean(axis=1), np.cov(X)
estimate(generate(x̄, S, n).transpose())
def z_prime(r):
return 0.5 * np.log((1+r) / (1-r))
def sampling_distributions(stats, cov, n):
sig1, sig2, _ = std_rho(cov)
array = np.zeros((len(stats), 8))
    for i, (x̄, S) in enumerate(stats):
array[i, 0:2] = xฬ
s1, s2, r = std_rho(S)
array[i, 2] = s1
array[i, 3] = s2
array[i, 4] = r
array[i, 5] = (n-1) * S[0, 0] / cov[0, 0]
array[i, 6] = (n-1) * S[1, 1] / cov[1, 1]
array[i, 7] = z_prime(r)
return array
dists = sampling_distributions(stats, cov, n)
cdf0 = thinkbayes2.Cdf(dists[:, 0])
cdf1 = thinkbayes2.Cdf(dists[:, 1])
thinkplot.Cdfs([cdf0, cdf1])
cdf2 = thinkbayes2.Cdf(dists[:, 2])
cdf3 = thinkbayes2.Cdf(dists[:, 3])
thinkplot.Cdfs([cdf2, cdf3])
cdf4 = thinkbayes2.Cdf(dists[:, 4])
thinkplot.Cdfs([cdf4])
cdf5 = thinkbayes2.Cdf(dists[:, 5])
cdf6 = thinkbayes2.Cdf(dists[:, 6])
thinkplot.Cdfs([cdf5, cdf6])
cdf7 = thinkbayes2.Cdf(dists[:, 7])
thinkplot.Cdfs([cdf7])
def sampling_dist_mean(i, mean, cov, cdf):
sampling_dist = scipy.stats.norm(loc=mean[i], scale=np.sqrt(cov[i, i]/n))
xs = cdf.xs
ys = sampling_dist.cdf(xs)
thinkplot.plot(xs, ys)
thinkplot.Cdf(cdf)
sampling_dist_mean(0, mean, cov, cdf0)
sampling_dist_mean(1, mean, cov, cdf1)
def sampling_dist_std(i, mean, cov, cdf):
sampling_dist = scipy.stats.chi2(df=n)
xs = cdf.xs
ys = sampling_dist.cdf(xs)
thinkplot.plot(xs, ys)
thinkplot.Cdf(cdf)
sampling_dist_std(5, mean, cov, cdf5)
sampling_dist_std(6, mean, cov, cdf6)
def sampling_dist_r(i, mean, cov, cdf):
_, _, rho = std_rho(cov)
sampling_dist = scipy.stats.norm(loc=z_prime(rho), scale=1/np.sqrt(n-3))
xs = cdf.xs
ys = sampling_dist.cdf(xs)
thinkplot.plot(xs, ys)
thinkplot.Cdf(cdf)
sampling_dist_r(7, mean, cov, cdf7)
pdf_X = scipy.stats.multivariate_normal(mean, cov/n)
pdf_X.pdf(mean) - pdf_X.pdf(mean-0.1)
def make_multi_norm_marginal(index, mean, cov, n):
sigmas = std_rho(cov)
width = 6 * sigmas[index] / np.sqrt(n)
xs = np.linspace(mean[index]-width/2, mean[index]+width/2, 101)
array = np.tile(mean, (len(xs), 1))
array[:, index] = xs
pdf_X = scipy.stats.multivariate_normal(mean, cov/n)
ys = pdf_X.pdf(array)
pmf = thinkbayes2.Pmf(dict(zip(xs, ys)))
pmf.Normalize()
return pmf
pmf = make_multi_norm_marginal(0, mean, cov, n)
thinkplot.Pdf(pmf)
pmf = make_multi_norm_marginal(1, mean, cov, n)
thinkplot.Pdf(pmf)
def generate_statistics(mean, cov, n, iters):
return [estimate(generate(mean, cov, n)) for _ in range(iters)]
stats = generate_statistics(mean, cov, n, 1000)
s0 = np.zeros(len(stats))
s1 = np.zeros(len(stats))
for i, (x̄, S) in enumerate(stats):
sigmas = std_rho(S)
s0[i] = sigmas[0]
s1[i] = sigmas[1]
thinkplot.Scatter(s0, s1)
s0 = np.zeros(len(stats))
s1 = np.zeros(len(stats))
for i, (x̄, S) in enumerate(stats):
s0[i] = (n-1) * S[0][0]
s1[i] = (n-1) * S[1][1]
thinkplot.Scatter(s0, s1)
pdf_S = wishart(df=n-1, scale=cov)
stats = pdf_S.rvs(1000)
s0 = np.zeros(len(stats))
s1 = np.zeros(len(stats))
for i, S in enumerate(stats):
s0[i] = S[0][0]
s1[i] = S[1][1]
thinkplot.Scatter(s0, s1)
sigmas = std_rho(cov)
width = 6 * sigmas[0] / np.sqrt(2 * (n-1))
X = np.linspace(sigmas[0]-width/2, sigmas[0]+width/2, 101)
width = 6 * sigmas[1] / np.sqrt(2 * (n-1))
Y = np.linspace(sigmas[1]-width/2, sigmas[1]+width/2, 101)
Z = np.zeros((len(X), len(Y)))
pdf_S = wishart(df=n-1, scale=cov)
for i, x in enumerate(X):
for j, y in enumerate(Y):
S = cov.copy()
S[0, 0] = x**2
S[1, 1] = y**2
try:
density = pdf_S.pdf((n-1) * S)
Z[i, j] = density
except:
Z[i, j] = np.nan
thinkplot.Scatter(s0, s1)
plt.contour(X, Y, Z)
pmf_0 = thinkbayes2.Pmf()
for i, (x̄, S) in enumerate(stats):
sig1, sig2, rho = std_rho(S)
density = pdf_S.pdf((n-1) * S)
pmf_0[sig1] += 1
thinkplot.Cdf(pmf_0.MakeCdf())
pdf_S = wishart(df=n-1, scale=cov)
pdf_S.pdf(cov)
"""
Explanation: You can ignore everything after this, which is my development code and some checks.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.15/_downloads/plot_introduction.ipynb
|
bsd-3-clause
|
import mne
"""
Explanation: Basic MEG and EEG data processing
MNE-Python reimplements most of MNE-C's (the original MNE command line utils)
functionality and offers transparent scripting.
On top of that it extends MNE-C's functionality considerably
(customize events, compute contrasts, group statistics, time-frequency
analysis, EEG-sensor space analyses, etc.) It uses the same files as standard
MNE unix commands: no need to convert your files to a new system or database.
What you can do with MNE Python
Raw data visualization to visualize recordings, can also use
mne_browse_raw for extended functionality (see ch_browse)
Epoching: Define epochs, baseline correction, handle conditions etc.
Averaging to get Evoked data
Compute SSP projectors to remove ECG and EOG artifacts
Compute ICA to remove artifacts or select latent sources.
Maxwell filtering to remove environmental noise.
Boundary Element Modeling: single and three-layer BEM model
creation and solution computation.
Forward modeling: BEM computation and mesh creation
(see ch_forward)
Linear inverse solvers (dSPM, sLORETA, MNE, LCMV, DICS)
Sparse inverse solvers (L1/L2 mixed norm MxNE, Gamma Map,
Time-Frequency MxNE)
Connectivity estimation in sensor and source space
Visualization of sensor and source space data
Time-frequency analysis with Morlet wavelets (induced power,
intertrial coherence, phase lock value) also in the source space
Spectrum estimation using multi-taper method
Mixed Source Models combining cortical and subcortical structures
Dipole Fitting
Decoding multivariate pattern analysis of M/EEG topographies
Compute contrasts between conditions, between sensors, across
subjects etc.
Non-parametric statistics in time, space and frequency
(including cluster-level)
Scripting (batch and parallel computing)
What you're not supposed to do with MNE Python
- **Brain and head surface segmentation** for use with BEM
models -- use Freesurfer.
<div class="alert alert-info"><h4>Note</h4><p>This package is based on the FIF file format from Neuromag. It
can read and convert CTF, BTI/4D, KIT and various EEG formats to
FIF.</p></div>
Installation of the required materials
See install_python_and_mne_python.
<div class="alert alert-info"><h4>Note</h4><p>The expected location for the MNE-sample data is
``~/mne_data``. If you downloaded data and an example asks
you whether to download it again, make sure
the data reside in the examples directory and you run the script from its
current directory.
From IPython e.g. say::
cd examples/preprocessing
%run plot_find_ecg_artifacts.py</p></div>
From raw data to evoked data
Now, launch ipython_ (Advanced Python shell) using the QT backend, which
is best supported across systems::
$ ipython --matplotlib=qt
First, load the mne package:
<div class="alert alert-info"><h4>Note</h4><p>In IPython, you can press **shift-enter** with a given cell
selected to execute it and advance to the next cell:</p></div>
End of explanation
"""
mne.set_log_level('WARNING')
"""
Explanation: If you'd like to turn information status messages off:
End of explanation
"""
mne.set_log_level('INFO')
"""
Explanation: But it's generally a good idea to leave them on:
End of explanation
"""
mne.set_config('MNE_LOGGING_LEVEL', 'WARNING', set_env=True)
"""
Explanation: You can set the default level by setting the environment variable
"MNE_LOGGING_LEVEL", or by having mne-python write preferences to a file:
End of explanation
"""
mne.get_config_path()
"""
Explanation: Note that the location of the mne-python preferences file (for easier manual
editing) can be found using:
End of explanation
"""
from mne.datasets import sample # noqa
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
print(raw_fname)
"""
Explanation: By default logging messages print to the console, but look at
:func:mne.set_log_file to save output to a file.
Access raw data
^^^^^^^^^^^^^^^
End of explanation
"""
raw = mne.io.read_raw_fif(raw_fname)
print(raw)
print(raw.info)
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>The MNE sample dataset should be downloaded automatically but be
patient (approx. 2GB)</p></div>
Read data from file:
End of explanation
"""
print(raw.ch_names)
"""
Explanation: Look at the channels in raw:
End of explanation
"""
start, stop = raw.time_as_index([100, 115]) # 100 s to 115 s data segment
data, times = raw[:, start:stop]
print(data.shape)
print(times.shape)
data, times = raw[2:20:3, start:stop] # access underlying data
raw.plot()
"""
Explanation: Read and plot a segment of raw data
End of explanation
"""
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True,
exclude='bads')
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
"""
Explanation: Save a segment of 150s of raw data (MEG only):
End of explanation
"""
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5])
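# A small manipulation example (sketch): keep only events whose trigger value is 1,
# i.e. left-auditory stimuli in the sample dataset, by masking the last column.
left_auditory_events = events[events[:, 2] == 1]
print(len(left_auditory_events))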
"""
Explanation: Define and read epochs
^^^^^^^^^^^^^^^^^^^^^^
First extract events:
End of explanation
"""
mne.set_config('MNE_STIM_CHANNEL', 'STI101', set_env=True)
"""
Explanation: Note that, by default, we use stim_channel='STI 014'. If you have a different
system (e.g., a newer system that uses channel 'STI101' by default), you can
use the following to set the default stim channel to use for finding events:
End of explanation
"""
event_id = dict(aud_l=1, aud_r=2) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
"""
Explanation: Events are stored as a 2D numpy array where the first column is the time
instant and the last one is the event number. It is therefore easy to
manipulate.
Define epochs parameters:
End of explanation
"""
raw.info['bads'] += ['MEG 2443', 'EEG 053']
"""
Explanation: Exclude some channels (original bads + 2 more):
End of explanation
"""
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, stim=False,
exclude='bads')
"""
Explanation: The variable raw.info['bads'] is just a python list.
Pick the good channels, excluding raw.info['bads']:
End of explanation
"""
mag_picks = mne.pick_types(raw.info, meg='mag', eog=True, exclude='bads')
grad_picks = mne.pick_types(raw.info, meg='grad', eog=True, exclude='bads')
"""
Explanation: Alternatively one can restrict to magnetometers or gradiometers with:
End of explanation
"""
baseline = (None, 0) # means from the first instant to t = 0
"""
Explanation: Define the baseline period:
End of explanation
"""
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
"""
Explanation: Define peak-to-peak rejection parameters for gradiometers, magnetometers
and EOG:
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=False, reject=reject)
print(epochs)
"""
Explanation: Read epochs:
End of explanation
"""
epochs_data = epochs['aud_l'].get_data()
print(epochs_data.shape)
"""
Explanation: Get single epochs for one condition:
End of explanation
"""
from scipy import io # noqa
io.savemat('epochs_data.mat', dict(epochs_data=epochs_data), oned_as='row')
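# And they can be read back just as easily (a sketch):
loaded = io.loadmat('epochs_data.mat')
print(loaded['epochs_data'].shape)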
"""
Explanation: epochs_data is a 3D array of dimension (55 epochs, 365 channels, 106 time
instants).
Scipy supports read and write of matlab files. You can save your single
trials with:
End of explanation
"""
epochs.save('sample-epo.fif')
"""
Explanation: or if you want to keep all the information about the data you can save your
epochs in a fif file:
End of explanation
"""
saved_epochs = mne.read_epochs('sample-epo.fif')
"""
Explanation: and read them later with:
End of explanation
"""
evoked = epochs['aud_l'].average()
print(evoked)
evoked.plot()
"""
Explanation: Compute evoked responses for auditory responses by averaging and plot it:
End of explanation
"""
max_in_each_epoch = [e.max() for e in epochs['aud_l']] # doctest:+ELLIPSIS
print(max_in_each_epoch[:4]) # doctest:+ELLIPSIS
"""
Explanation: .. topic:: Exercise
Extract the max value of each epoch
End of explanation
"""
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked1 = mne.read_evokeds(
evoked_fname, condition='Left Auditory', baseline=(None, 0), proj=True)
"""
Explanation: It is also possible to read evoked data stored in a fif file:
End of explanation
"""
evoked2 = mne.read_evokeds(
evoked_fname, condition='Right Auditory', baseline=(None, 0), proj=True)
"""
Explanation: Or another one stored in the same file:
End of explanation
"""
contrast = mne.combine_evoked([evoked1, evoked2], weights=[0.5, -0.5])
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
"""
Explanation: Two evoked objects can be contrasted using :func:mne.combine_evoked.
This function can use weights='equal', which provides a simple
element-by-element subtraction (and sets the
mne.Evoked.nave attribute properly based on the underlying number
of trials) using either equivalent call:
End of explanation
"""
average = mne.combine_evoked([evoked1, evoked2], weights='nave')
print(average)
"""
Explanation: To do a weighted sum based on the number of averages, which will give
you what you would have gotten from pooling all trials together in
:class:mne.Epochs before creating the :class:mne.Evoked instance,
you can use weights='nave':
End of explanation
"""
epochs_eq = epochs.copy().equalize_event_counts(['aud_l', 'aud_r'])[0]
evoked1, evoked2 = epochs_eq['aud_l'].average(), epochs_eq['aud_r'].average()
print(evoked1)
print(evoked2)
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
"""
Explanation: Instead of dealing with mismatches in the number of averages, we can use
trial-count equalization before computing a contrast, which can have some
benefits in inverse imaging (note that here weights='nave' will
give the same result as weights='equal'):
End of explanation
"""
import numpy as np # noqa
n_cycles = 2 # number of cycles in Morlet wavelet
freqs = np.arange(7, 30, 3) # frequencies of interest
"""
Explanation: Time-Frequency: Induced power and inter trial coherence
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Define parameters:
End of explanation
"""
from mne.time_frequency import tfr_morlet # noqa
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
return_itc=True, decim=3, n_jobs=1)
power.plot([power.ch_names.index('MEG 1332')])
"""
Explanation: Compute induced power and phase-locking values and plot gradiometers:
End of explanation
"""
from mne.minimum_norm import apply_inverse, read_inverse_operator # noqa
"""
Explanation: Inverse modeling: MNE and dSPM on evoked and raw data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Import the required functions:
End of explanation
"""
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inverse_operator = read_inverse_operator(fname_inv)
"""
Explanation: Read the inverse operator:
End of explanation
"""
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM"
"""
Explanation: Define the inverse parameters:
End of explanation
"""
stc = apply_inverse(evoked, inverse_operator, lambda2, method)
"""
Explanation: Compute the inverse solution:
End of explanation
"""
stc.save('mne_dSPM_inverse')
"""
Explanation: Save the source time courses to disk:
End of explanation
"""
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
label = mne.read_label(fname_label)
"""
Explanation: Now, let's compute dSPM on a raw file within a label:
End of explanation
"""
from mne.minimum_norm import apply_inverse_raw # noqa
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
stc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,
start, stop)
"""
Explanation: Compute inverse solution during the first 15s:
End of explanation
"""
stc.save('mne_dSPM_raw_inverse_Aud')
"""
Explanation: Save result in stc files:
End of explanation
"""
print("Done!")
"""
Explanation: What else can you do?
^^^^^^^^^^^^^^^^^^^^^
- detect heart beat QRS component
- detect eye blinks and EOG artifacts
- compute SSP projections to remove ECG or EOG artifacts
- compute Independent Component Analysis (ICA) to remove artifacts or
select latent sources
- estimate noise covariance matrix from Raw and Epochs
- visualize cross-trial response dynamics using epochs images
- compute forward solutions
- estimate power in the source space
- estimate connectivity in sensor and source space
- morph stc from one brain to another for group studies
- compute mass univariate statistics base on custom contrasts
- visualize source estimates
- export raw, epochs, and evoked data to other python data analysis
libraries e.g. pandas
- and many more things ...
Want to know more ?
^^^^^^^^^^^^^^^^^^^
Browse the examples gallery <auto_examples/index.html>_.
End of explanation
"""
|
y2ee201/Deep-Learning-Nanodegree
|
sentiment-rnn/Sentiment RNN.ipynb
|
mit
|
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
# Create your dictionary that maps vocab words to integers here
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split()
labels = np.array([1 if each == 'positive' else 0 for each in labels])
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: If you built labels correctly, you should see the next output.
End of explanation
"""
# Filter out that review with 0 length
reviews_ints = [review for review in reviews_ints if len(review)>0]
print(len(reviews_ints))
print(reviews_ints[1])
# print([review[0:200] if len(review)>200 else 0 for review in reviews_ints])
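# A more defensive variant (a sketch): filter by index so that labels stays aligned
# with the surviving reviews, rather than relying on the empty review being the last one.
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])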
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
seq_len = 200
# features = np.array([review[0:200] if len(review)>200 else np.append(np.zeros((200 - len(review))),review) for review in reviews])
print(len(reviews))
features = np.zeros((len(reviews_ints), seq_len), dtype=int)  # one row per non-empty review
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
print(len(features))
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
features[:10,:100]
"""
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
"""
from sklearn.model_selection import train_test_split
split_frac = 0.8
# train_x, val_x, train_y, val_y = train_test_split(features, labels, train_size=split_frac)
# val_x, test_x, val_y, test_y = train_test_split(val_x, val_y, test_size=0.5)
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
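For example, assuming train_x and train_y are the arrays prepared earlier in the notebook, you could peek at a single batch like this:
x, y = next(get_batches(train_x, train_y, batch_size=100))
print(x.shape, y.shape)   # expect something like (100, sequence_length) and (100,)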
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
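If the directory might not exist yet, one small way (an assumption, not in the original notebook) to create it is:
import os
if not os.path.exists('checkpoints'):
    os.makedirs('checkpoints')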
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
# saver.restore(sess, tf.train.latest_checkpoint('/output/checkpoints'))
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
|
ComputationalModeling/spring-2017-danielak
|
past-semesters/fall_2016/homework/HW3/Homework_3_SOLUTIONS.ipynb
|
agpl-3.0
|
# put your code here
%matplotlib inline
import matplotlib.pyplot as plt
import random
import numpy as np
import numpy.random as rand
# takes nothing, returns an xstep, ystep pair
def step2d():
'''
step2d picks a random direction to step in (+/-x, +/-y).
Takes no arguments, returns an xstep, ystep pair.
'''
a = random.randint(0,3)
if a == 0: # left
return -1,0
elif a == 1: # right
return 1,0
elif a == 2: # up
return 0,1
elif a == 3: # down
return 0,-1
else: # error check!
print("aaah!")
return -9999,-9999
x = y = 0 # initial position (will be modified)
n_steps = 1000 # number of steps
x_traj = []
y_traj = []
# loop over n_steps, calling step2d() and incrementing trajectory
for i in range(0,n_steps):
xstep,ystep=step2d()
x += xstep
y += ystep
# keep track of trajectory to plot it!
x_traj.append(x)
y_traj.append(y)
plt.plot(x_traj,y_traj)
"""
Explanation: Homework #3
This notebook is due on Friday, October 28th, 2016 at 11:59 p.m.. Please make sure to get started early, and come by the instructors' office hours if you have any questions. Office hours and locations can be found in the course syllabus. IMPORTANT: While it's fine if you talk to other people in class about this homework - and in fact we encourage it! - you are responsible for creating the solutions for this homework on your own, and each student must submit their own homework assignment.
Some links that you may find helpful:
Markdown tutorial
The Pandas website
The Pandas tutorial
10-minute Panda Tutorial
All CMSE 201 YouTube videos
FOR THIS HOMEWORK and for all future homework assignments: We will be grading you on:
The quality of your code
The correctness of your code
Whether your code runs.
To that end:
Code quality: make sure that you use functions whenever possible, use descriptive variable names, and use comments to explain what your code does as well as function properties (including what arguments they take, what they do, and what they return).
Whether your code runs: prior to submitting your homework assignment, re-run the entire notebook and test it. Go to the "kernel" menu, select "Restart", and then click "clear all outputs and restart." Then, go to the "Cell" menu and choose "Run all" to ensure that your code produces the correct results. We will take off points for code that does not work correctly.
Your name
Put your name here!
Section 1: 2D random walk
We're going to finish up the "random walk" project that we started in class. In this section, we're going to do it in two dimensions, x and y. You need to write a program that performs a random walk that starts at the origin (x=y=0), picks a random direction (up, down, left, or right), and take one step in that direction. You will then randomly pick a new direction, take a step, and so on, for a total of $N_{step}$ = 1000 steps.
First: Write the code to do this for a single random walk, and keep track of the (x,y) position of your walker for each step in the random walk. Make a plot showing the path for this random walk. Make sure to write a function that decides what direction the walker will go on each step, and returns that information to your program.
End of explanation
"""
# put your code here
distance = []
n_steps = 1000 # number of steps
n_trials = 1000
# as in the previous cell, but now loop over number of trials as
# well as number of steps. Just keep track of distances
for j in range(n_trials):
x=y=0
for i in range(n_steps):
xstep,ystep=step2d()
x += xstep
y += ystep
distance.append( (x**2+y**2)**0.5)
plt.hist(distance,bins=40)
distance = np.array(distance)
print((distance**2).mean())
"""
Explanation: Second: Modify your code to only keep track of the final distance from the origin (magnitude, not x and y components - in other words, $d = (x^2 + y^2)^{1/2}$), and do the experiment $N_{trial} = 1000$ times, keeping track of the final distances for all of the trials. Plot the distribution of distances from the origin in a histogram, and calculate the mean value.
End of explanation
"""
# put your code here
n_steps = 1000 # number of steps
n_traj = 1000
prob_right = 0.55
def step1d_weighted(prob_right=0.5):
'''
Takes into account a probability of taking a step to the right (implicitly assumed
to be between 0.0 and 1.0, inclusive) and returns a +1 for a right step, -1 for a left step.
'''
a = random.random()
if a > prob_right: # step left
return -1
else: # step right
return 1
dist = np.array([])
# loop over some number of trajectories
for trj in range(0,n_traj):
x = 0 # initial position (will be modified)
# for each trajectory, loop over number of steps
for i in range(0,n_steps):
x += step1d_weighted(prob_right)
# keep track of all distances traveled
dist = np.append(dist,x)
plt.hist(dist,bins=20) #,bins=50)
print("average distance is:", dist.mean(), "average abs. distance is:",np.abs(dist).mean())
"""
Explanation: Question: How does the 2D random walk behave similarly to, and differently from, the 1D random walk that you explored in class? Compare them below.
It's basically the same as a 1D random walk - the x and y values behave exactly the same as separate 1D random walks, and the mean distance^2 is basically the same as a 1D random walk with the same number of steps. The only real difference is that it's 2D and not 1D.
Section 2: 1D random walk with weighting
Now we want to see what happens in the 1D random walk when the "coin toss" is biased - in other words, when you're more likely to take a step in one direction than in the other (i.e., the probability of stepping to the right is $p_{step}$, of stepping to the left is $1-p_{step}$, and $p_{step} \neq 0.5$).
Modify the function for the 1D random walk that you wrote in class to take as an argument a probability that you step in one direction (say, to the right) and then to decide what direction to go. Use that to calculate a distribution of distances from the origin for $N_{step} = 1000$ and $N_{trial} = 1000$, as well as the mean distance from the origin. Plot a histogram of the distribution. Answer the following two questions:
How does this distribution look different than the distribution that you observed for the standard 1D random walk?
How does the distribution of distances traveled, as well as the mean distance from the origin , change as $p_{step}$ varies from 0.5 to 1?
Put your answers here
End of explanation
"""
import pandas
# reads in the CSV file and puts it into a Pandas data frame called "all_states"
all_states = pandas.read_csv('https://raw.githubusercontent.com/bwoshea/2016_election_info/master/State_polling_info.csv')
# put your code here!
#all_states.head()
def votes_per_state(row):
'''
Takes in a row in a data frame (one state). Returns the string 'Clinton',
'Trump', or 'Johnson', the number of electoral votes won by that
candidate from this state, and the number of actual votes that each candidate
receives from this state.
'''
# randomly sample from the probability distributions for clinton, trump, and johnson
clinton = rand.normal(row['Clinton'],row['Cl_error'])
trump = rand.normal(row['Trump'], row['Tr_error'])
johnson = rand.normal(row['Johnson'],row['Jo_error'])
# calculate the number of people who voted for each one
clinton_vote = clinton*row['Population']/100.0
trump_vote = trump*row['Population']/100.0
johnson_vote = johnson*row['Population']/100.0
# some logic to figure out who the winner is
if clinton >= trump:
if clinton > johnson:
winner = 'Clinton'
else:
winner = 'Johnson'
else:
if trump > johnson:
winner = 'Trump'
else:
winner = 'Johnson'
# return the winner, the number of electoral votes, and number of actual
# votes from this state.
return winner, row['Electoral votes'], clinton_vote, trump_vote, johnson_vote
def election_result(country_df):
'''
Takes in a dataframe with all of the necessary information for all states + washington d.c.
loops over all of the states, calculating the total number of electoral votes as well as the
popular vote.
Return the number of electoral votes as well as the total popular vote that each candidate
receives for this model election.
'''
# counters for electoral votes
clinton_EC_votes = 0
trump_EC_votes = 0
johnson_EC_votes = 0
# counters for popular votes
clinton_pop_votes = 0
trump_pop_votes = 0
johnson_pop_votes = 0
# loop over the states + DC
for index, row in country_df.iterrows():
# for each state, figure out who wins
winner, number, c_votes, t_votes, j_votes= votes_per_state(row)
# assign electoral votes to that person
if winner == 'Clinton':
clinton_EC_votes += number
elif winner == 'Trump':
trump_EC_votes += number
elif winner == 'Johnson':
johnson_EC_votes += number
else:
print("wtf?")
# keep track of popular votes for everybody
clinton_pop_votes += c_votes
trump_pop_votes += t_votes
johnson_pop_votes += j_votes
# quick error check
if (clinton_EC_votes+trump_EC_votes+johnson_EC_votes) != 538:
print("not 538 votes - something's wrong!")
# return everything
return clinton_EC_votes, trump_EC_votes, johnson_EC_votes, clinton_pop_votes, trump_pop_votes, johnson_pop_votes
# loop over number of elections we want to simulate
N_elections = 10000
# lists to keep track of electoral votes
clinton_EC_votes = []
trump_EC_votes = []
johnson_EC_votes = []
# lists to keep track of popular votes
clinton_pop_votes = []
trump_pop_votes = []
johnson_pop_votes = []
# counters to keep track of the number of elections that clinton and trump win
clinton_winner = 0
trump_winner = 0
# loop over the number of elections we want to simulate
for i in range(N_elections):
# get electoral college votes and popular vote for all of the states
clinton_EC, trump_EC, johnson_EC, clinton_pop, trump_pop, johnson_pop = election_result(all_states)
# figure out who wins and take this into account
# (I am ignoring Gary Johnson)
if clinton_EC > trump_EC:
clinton_winner += 1
elif trump_EC > clinton_EC:
trump_winner += 1
else:
print("tie!")
# keep track of electoral college votes
clinton_EC_votes.append(clinton_EC)
trump_EC_votes.append(trump_EC)
johnson_EC_votes.append(johnson_EC)
# keep track of popular votes
clinton_pop_votes.append(clinton_pop)
trump_pop_votes.append(trump_pop)
johnson_pop_votes.append(johnson_pop)
# just to let us know something is happening!
if i%1000==0:
print("i: ",i)
"""
Explanation: ANSWER:
The distribution looks relatively similar, except it's now skewed away from the origin and the mean distance is now much larger (and gets larger the more the probability differs from 0.5)
P = probability of stepping to the right. (I am only doing P >= 0.5 because the problem is symmetrical.) For 1000 steps, the distribution looks like this:
P | <dist> | <abs_dist>
-----|------------|-----------
0.5 | -1.312 | 25.128
0.55| 98.572 | 98.604
0.6 | 198.94 | 198.48
0.8 | 600.06 | 600.04
1.0 | 1000.0 | 1009.0
Section 3: Modeling the presidential election
The final part of this homework is the creation of a model of the 2016 Presidential Election that's similar to those used at election prediction sites such as FiveThirtyEight. You've been provided with a link to a CSV (comma-separated value) file containing information about each of the 50 states as well as the District of Columbia, and you'll use this to make election predictions.
A quick civics lesson: The United States does not directly elect the President and Vice President. Instead, voters choose "Electors" who are apportioned to each state based on the most recent U.S. census results (equal to the number of members of Congress that the state has). Electors are typically required to vote for the candidate who wins the majority of the popular vote in their state. At the moment there are 538 Electors in the "Electoral College," and to win the presidency a candidate needs to receive half plus one of those votes, or 270 votes.
The data file: Each row of the provided CSV file contains information about one of the 50 states or Washington D.C. This information includes: the state name and abbreviation, the number of electoral votes that state receives in the Electoral College, the state's polling information, the number of people surveyed in the last poll, and then information for each of the three candidates with significant popular support: Hilary Clinton, Donald Trump, and Gary Johnson. For each candidate, the number corresponds to the percentage of the voting population that intends to vote for them according to the polls, as well as the margin of error of the poll. Note that we are only using the most recent poll for each state.
Margin of error: The "margin of error" of the poll represents uncertainty in polling data - typically only hundreds of people are polled at any given time, and pollsters are attempting to extrapolate from that sample of people to all of the likely voters in the state. This error is typically calculated assuming that the uncertainty is a "normal" (or "Gaussian") distribution, and the reported error is the "standard deviation". For example, polls in Michigan indicate that Hilary Clinton is currently favored by 42% of the voting population, with a margin of error of 1.71%. This means that the most likely outcome is for 42% of the population to vote for Clinton, with a 68% likelihood that the real percentage is between 40.29-43.71%, and a 95% likelihood that the real percentage is between 38.58-45.42%.
Project instructions
We are going to attempt to duplicate the FiveThirtyEight election predictions, which predict not just who will win the election but what the range of likely outcomes will be. In particular, we're going to reproduce the expected distribution of Electoral College votes for each candidate. To do this, we need to run large numbers of model elections - say, $N_{elec} = 10,000$ - and keep track of the results for each one. To do so, you need to do the following for each of your model elections:
Write a function that, for each state, calculates the percentage of the popular vote that votes for each candidate, decides who the winner is (i.e., who has the most votes), and then returns the number of Electoral College votes received by that person for that state. Almost all states are a "winner take all" state with regards to Electoral College votes, so whoever wins the popular vote receives all Electoral College votes for that state. Note that you can use the Python random module's random.normalvariate() function, or the NumPy random module's random.normal() function to calculate the possible outcomes from a Gaussian distribution, given the mean and standard deviation.
Write a function that loops through the list of states and keep track of the number of electoral votes awarded to each candidate by each state.
Keep track of the total Electoral College votes, as well as the total popular vote, for each candidate in each model election to decide who won.
You will then be asked to answer several questions, as described below.
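As a quick, hedged illustration of the sampling step described above (using the Michigan numbers quoted earlier; numpy.random is already imported as rand in this notebook):
clinton_pct = rand.normal(42.0, 1.71)   # one simulated polling outcome for Clinton in Michigan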
End of explanation
"""
# put your code and plots here. Add additional cells if necessary.
a,b,c= plt.hist(clinton_EC_votes,color='b')
a,b,c= plt.hist(trump_EC_votes,color='r')
a,b,c= plt.hist(johnson_EC_votes,color='g')
"""
Explanation: Question 1: Plot the histogram of expected Electoral College votes for the three candidates. Who do you expect to win the election?
Put your answer here!
End of explanation
"""
# put your code and plots here. Add additional cells if necessary.
print("clinton win %:", 100.0*clinton_winner / (clinton_winner+trump_winner))
print("trump win %: ", 100.0*trump_winner / (clinton_winner+trump_winner))
"""
Explanation: Question 2: In what percentage of the model elections does Hilary Clinton win the Presidency? How about Donald Trump and Gary Johnson?
Clinton wins in approximately 97.5% of the model elections. Trump wins in about 2.5%, and Gary Johnson wins 0%.
End of explanation
"""
# put your code and plots here. Add additional cells if necessary.
clinton_pop_votes = np.array(clinton_pop_votes)
johnson_pop_votes = np.array(johnson_pop_votes)
trump_pop_votes = np.array(trump_pop_votes)
clinton_frac_votes = clinton_pop_votes/(clinton_pop_votes+johnson_pop_votes+trump_pop_votes)
clinton_frac_votes_nojohn = clinton_pop_votes/(clinton_pop_votes+trump_pop_votes)
clinton_EC_votes = np.array(clinton_EC_votes)
plt.plot(clinton_EC_votes,clinton_frac_votes,'k.')
plt.plot(clinton_EC_votes,clinton_frac_votes_nojohn,'g.')
"""
Explanation: Question 3: Let's look at the difference between popular vote and Electoral College vote for one of the two main-party candidates - let's use Hilary Clinton as our example. Make a scatter plot of the expected Electoral College votes vs. the fraction of popular vote received for all of the elections, and put lines indicating 50% of the popular vote as well as the needed 270 Electoral College votes. Is it possible to win more than half of the Electoral College vote but get less than half of the popular votes, or vice versa? Why might this be true?
In this particular election, given that there's a strong third-party candidate it's extremely unlikely that any candidate will get more than 50% of the votes, even the one that wins way more than 270 Electoral College votes. It might be possible to win the popular vote but lose the Electoral College (thinking Al Gore in 2000...) but it isn't happening for this set of election results.
End of explanation
"""
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/q2zZVDznls9zeqJo2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
"""
Explanation: Question 4: Take a look at the results on the FiveThirtyEight election forecast page, and read their description of how they create these forecasts. How well does the distribution that you calculated in Question 1 agree with FiveThirtyEight's Electoral College forecast? If it is different, why do you think that might be?
ANSWER:
Our distribution is different - the median values for Clinton's number of electoral votes that we predicted are lower than FiveThirtyEight's (~325 instead of ~350), and the distribution we predict is narrower. The same is true for Trump's votes, but we predict higher numbers than 538 (but still a narrower distribution). It's not totally obvious why we get different results, but there are several possibilities based on their description of their methodology:
We are using instantaneous poll results and a single poll per state, and 538 uses a more complex set of data that includes multiple polls for each state, with a weighting scheme that takes into account how long it has been since the poll was done, and also attempts to correct for bias of each pollster (i.e., some tend to swing Republican or Democratic; historical data can be used to normalize this).
538 uses economic data to try to make adjustments to their predictions, and we don't do that.
538 uses a different error model than we do - a t-distribution instead of a Gaussian distribution of outcomes, which gives more weight to the tails of the distribution. They also take into account something other than Poisson noise (and a modified error taking into account other uncertainties) for each poll so that errors are greater.
538 treats the fraction of undecided voters differently than we do (we ignore it; they divide them among the candidates). This also may tie into the error budget that they use.
538 may do something different with Gary Johnson's voters, and assume that some will defect prior to the election and go to another of the candidates.
We are implicitly assuming that the polls are accurate and that all of the population will vote. We don't take into account demographic issues or trying to figure out what likely voters think, which 538 attempts to do at some level.
Section 4: Feedback (required!)
End of explanation
"""
|
nik-hil/fastai
|
deeplearning1/nbs/lesson1.ipynb
|
apache-2.0
|
# step to run on theano. Data is mounted at /data
# floyd run --mode jupyter --gpu --env theano:py2 --data rarce/datasets/dogsvscats/1:data
# if bcolz gives error, uncomment and run
# !pip install bcolz
# if keras gives error on importing l2, uncomment and run following
# !pip uninstall -y keras
# !pip install keras==1.2.2
!ls /data # we mount data at '/data'
%matplotlib inline
"""
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always wants to use this when using jupyter notebook:
End of explanation
"""
# path = "data/"
path = "/data/sample/"
"""
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
"""
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
"""
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
"""
import utils; reload(utils)
from utils import plots
"""
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
"""
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
# Result
"""
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
"""
vgg = Vgg16()
"""
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
"""
batches = vgg.get_batches(path+'train', batch_size=4)
"""
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
"""
imgs,labels = next(batches)
"""
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
"""
plots(imgs, titles=labels)
"""
Explanation: As you can see, the labels for each image are an array containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
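For instance, with the two categories here, the labels for a batch of four images might look like the following (illustrative values only, not actual notebook output):
import numpy as np
labels = np.array([[1., 0.],   # cat
                   [0., 1.],   # dog
                   [0., 1.],   # dog
                   [1., 0.]])  # cat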
End of explanation
"""
vgg.predict(imgs, True)
"""
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
"""
vgg.classes[:4]
"""
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
End of explanation
"""
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
"""
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
"""
vgg.finetune(batches)
"""
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
"""
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
"""
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
"""
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
"""
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
"""
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
"""
classes[:5]
"""
Explanation: Here's a few examples of the categories we just imported:
End of explanation
"""
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
"""
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
"""
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
"""
Explanation: ...and here's the fully-connected definition.
End of explanation
"""
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
return x[:, ::-1] # reverse the channel axis to convert rgb->bgr
"""
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
"""
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
"""
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
"""
model = VGG_16()
"""
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
"""
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
"""
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
"""
batch_size = 4
"""
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
"""
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
"""
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
"""
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
"""
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
"""
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
"""
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
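A tiny illustration of that idea (not part of the original notebook):
import numpy as np
np.argmax([0.1, 0.7, 0.2])   # returns 1, the index of the largest probability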
End of explanation
"""
|
opensanca/trilha-python
|
01-python-intro/aula-04/Aula 04.ipynb
|
mit
|
'{} + {} = {}'.format(10, 10, 20)
"""
Explanation: [Py-Intro] Aula 04
Basic types and control structures
What will you learn in this lesson?
By the end of this lesson you will have learned:
String formatting
Sets: set
Mappings: dictionaries
String formatting
Complementing the previous lesson, we will explain in more detail how the format() and str.format() functions work, starting with the latter.
The str.format() function replaces the fields enclosed in curly braces with the arguments passed to it. If the name or position of the arguments is not specified, the braces are filled with the data in order:
End of explanation
"""
'{0} + {1} = {2}'.format(10, 10, 20) # this is the default
"""
Explanation: However, we can explicitly specify the positions we want to substitute:
End of explanation
"""
'{0} + {0} = {1}'.format(10, 20)
"""
Explanation: We can repeat a single argument:
End of explanation
"""
'{1} + {0} = {2}'.format(30, 20, 10) # avoid doing this, it causes confusion
"""
Explanation: Or specify an arbitrary order:
End of explanation
"""
string = '{cidade} é muito bonito(a) durante o(a) {estação}'
string.format(cidade='Bruxelas', estação='Inverno')
"""
Explanation: It is also possible to give names to these fields:
End of explanation
"""
'O total é R${0}'.format(59.8912313)
'O total é R${0:.2f}'.format(59.8912313)
'A porcentagem é de {0:.2%}'.format(0.8912313)
"""
Explanation: We can also format numbers:
End of explanation
"""
'{0:<10} | {1:<10} | {2:<10}'.format('Qtd.', 'Cor', 'Valor') # left-aligned
'{0:>10} | {1:>10} | {2:>10}'.format('Qtd.', 'Cor', 'Valor') # right-aligned
"""
Explanation: format() also lets us align elements within a 10-character field:
End of explanation
"""
'{0:^6} | {1:^9} | {2:^10}'.format('Qtd.', 'Cor', 'Valor') # centered
"""
Explanation: The field width can be different for each element:
End of explanation
"""
'{0:+^6} | {1:=^9} | {2:-^10}'.format('Qtd.', 'Cor', 'Valor') # centered, with custom fill characters
"""
Explanation: It is also possible to change the fill character from a blank space to some other character:
End of explanation
"""
formato_tabela = '{0:^6} | {1:^9} | {2:^10}'
formato_tabela
produtos = [
(2, 'Amarelo', 18.50),
(5, 'Verde', 48.50),
(2, 'Azul', 78.50),
]
produtos
print(formato_tabela.format('Qtd.', 'Cor', 'Valor R$'))
for qtd, cor, valor in produtos:
print(formato_tabela.format(qtd, cor, valor))
"""
Explanation: We can also store a format string and reapply it to different values:
End of explanation
"""
'{0:e} {0:f} {0:%}'.format(.0000031)
"""
Explanation: It is also possible to specify the type of the data:
End of explanation
"""
import math
format(math.pi, '6.3f')
format('Python', '.<12')
format('Python', '.>12')
format('Python', '.^12')
"""
Explanation: The available format types are:
It is also possible to format values using the built-in format() function:
End of explanation
"""
l = ['spam', 'spam', 'eggs', 'spam']
l
set(l)
"""
Explanation: Unlike str.format(), the built-in format() function does not allow substitutions using curly braces.
To learn more about the subject, check the documentation and this document by Luciano Ramalho, which explains in detail how to use this function.
Basic types - sets and mappings
In the previous lesson we saw other basic types: numbers and sequences.
Now let's talk about sets.
Set
A set is an underused Python tool, so much so that many introductory courses don't even cover the subject.
Sets come from the mathematical theory of sets. A set does not allow duplicate elements inside it, which is why it is often used to remove repetitions:
End of explanation
"""
A = set()
len(A)
"""
Explanation: As we can see, the set syntax - {1}, {1, 2}, etc. - looks exactly like the mathematical notation, except that there is no literal for an empty set. If you need to create an empty set, use set().
End of explanation
"""
A = {5, 4, 3, 3, 2, 10}
A
len(A)
sum(A)
max(A)
min(A)
"""
Explanation: It is worth remembering that sets also behave like sequences, so we can apply the functions we learned earlier to them:
End of explanation
"""
A = {4, 5, 1, 3, 4, 5, 7}
A # different order from the one declared!
"""
Explanation: An important point to note is that a set does not preserve the order of its elements:
End of explanation
"""
A[0]
"""
Explanation: Because of that, it is not possible to access elements by position:
End of explanation
"""
for num in A:
print(num)
"""
Explanation: You can access its elements by iterating over the set:
End of explanation
"""
tuple(A)
tuple(A)[0]
list(A)
list(A)[-1]
"""
Explanation: Or by converting it to a tuple or a list:
End of explanation
"""
{letra for letra in 'abrakadabraalakazam'}
{numero for numero in range(30) if numero % 3 != 0}
"""
Explanation: Just like lists, sets also have a shorthand way of being built: the set comprehension:
End of explanation
"""
A = {-10, 0, 10, 20, 30, 40}
A
5 in A
-10 in A
"""
Explanation: Another important characteristic of sets is that they perform membership checks much more efficiently.
To check whether an element is present in a set we use the same in operator that we saw for lists, tuples and strings:
End of explanation
"""
import timeit
tempo = timeit.timeit('[math.exp(x) for x in range(10)]', setup='import math')
tempo
"""
Explanation: To demonstrate that sets check whether an element belongs to a collection more quickly, we will use the timeit module, which offers a simple way to measure the execution time of Python code.
The timeit module has a function timeit(stmt='pass', setup='pass', number=1000000, ...) that runs a setup and a piece of code (stmt) a given number of times and measures the time taken to run the code (the time spent running the setup is not included). The code and the setup must be passed as strings.
End of explanation
"""
tempo / 1000000
"""
Explanation: The code above first runs the setup import math and then runs the code [math.exp(x) for x in range(10)], which builds a list with the exponentials of 0 through 9, 1000000 times.
To get the average execution time of that code we do:
End of explanation
"""
import timeit
import random
# PS: this code takes a while to run
vezes = 1000
print('tamanho | tempo da lista | tempo do set | list vs set')
tamanhos = (10 ** i for i in range(3, 8))
for tamanho in tamanhos: # the generator above yields the values 10^3, 10^4, ..., 10^7
setup_lista = 'l = list(range({}))'.format(tamanho)
tempo = timeit.timeit('9999999 in l', setup=setup_lista, number=vezes)
media_lista = tempo / vezes
setup_set = 's = set(range({}))'.format(tamanho)
tempo = timeit.timeit('9999999 in s', setup=setup_set, number=vezes)
media_set = tempo / vezes
msg = '{:<9}| {:<15}| {:<13}| set é {:<}x + rápido'
msg = msg.format(tamanho, round(media_lista, 8), round(media_set, 8),
round(media_lista / media_set))
print(msg)
"""
Explanation: Knowing that, we can now measure how long it takes to check for an element in a list versus in a set:
End of explanation
"""
A = {2, 3, 4}
B = {3, 5, 7}
A | B
A.union(B)
"""
Explanation: This code uses some more advanced string formatting features. To better understand what it does, check the official documentation on the subject.
Sets also offer some interesting operations:
<center>Union $${(A \cup B)}$$</center>
The union can be computed with the A.union(B) function or with the bitwise-or operator | :
End of explanation
"""
A = {2, 3, 4}
B = {3, 5, 7}
A & B
A.intersection(B)
"""
Explanation: <center>Intersection $${A \cap B}$$</center>
The intersection can be computed with the A.intersection(B) function or with the bitwise-and operator & :
End of explanation
"""
A = {2, 3, 4}
B = {3, 5, 7}
A - B
A = {2, 3, 4}
B = {3, 5, 7}
A.difference(B)
"""
Explanation: <center>Difference $${A - B}$$</center>
The difference can be computed with the A.difference(B) function or with the - operator:
End of explanation
"""
A = {2, 3, 4}
B = {3, 5, 7}
A ^ B
A.symmetric_difference(B)
"""
Explanation: <center>Symmetric difference $${A \bigtriangleup B}$$</center>
The symmetric difference can be computed with the A.symmetric_difference(B) function or with the ^ operator:
End of explanation
"""
from faker import Factory
factory = Factory.create('pt_BR') # criando fรกbrica de dados falsos portuguรชs brasileiro
nomes = {factory.name() for _ in range(10000)}
nomes
"""
Explanation: Ok, these operations are nice, but I already saw this in high school and never used it in my life - how is this going to be useful to me? (at least that's what I thought when I first saw it)
To put it to the test, let's generate a set of names. We will use the external library faker, which generates fake data.
To use it you need to install it:
$ pip install fake-factory
After installing it in our virtualenv we can use it:
End of explanation
"""
buscas = {'João Silva', 'Ana Ferreira', 'Eduardo Santos', 'Pedro Alves', 'Enzo Correira'}
presentes = [busca for busca in buscas if busca in nomes]
presentes
ausentes = [busca for busca in buscas if busca not in nomes]
ausentes
"""
Explanation: Now suppose we have a list of names and want to check whether they are in the set of names. Normally we would do:
End of explanation
"""
buscas & nomes
"""
Explanation: However, if we use set operations we can do this in a simpler and more efficient way.
To find which of the searched names are in the set of names:
End of explanation
"""
buscas - nomes
"""
Explanation: And the names that are not:
End of explanation
"""
# tamanho | tempo set + for | tempo set + & | for vs &
# 100 | 3.945e-05 | 1.86e-06 | & é 21.25x + rápido
# 1000 | 5.844e-05 | 1.751e-05 | & é 3.34x + rápido
# 10000 | 0.00014848 | 5.991e-05 | & é 2.48x + rápido
# 100000 | 0.00015138 | 8.862e-05 | & é 1.71x + rápido
# 1000000 | 0.00014647 | 8.113e-05 | & é 1.81x + rápido
"""
Explanation: Comparing the execution time of the searches using a for loop against the searches using intersection, we get the following result:
End of explanation
"""
votos = {'joão': 10, 'maria': 25, 'ana': 40, 'pedro': 75}
votos['joão']
votos['joão'] = 11
votos
"""
Explanation: dict
A dict (dictionary) is the standard mapping structure and is indexed by keys made of immutable types (numbers, strings, tuples, etc.).
We have already talked about dictionaries before; in this lesson we will do a quick review and then dig deeper into the subject:
End of explanation
"""
len(votos)
del votos['joão']
votos
"""
Explanation: As you can see here, dictionaries do not preserve the order of their elements. We created the dict with the elements {'joão': 10, 'maria': 25, 'ana': 40, 'pedro': 75}, but its current order is {'ana': 40, 'joão': 11, 'maria': 25, 'pedro': 75}, and in the future this order may change again as we modify the dictionary.
End of explanation
"""
votos['joão']
"""
Explanation: Note that trying to access a non-existent element raises an exception:
End of explanation
"""
print(votos.get('joão'))
"""
Explanation: Sometimes you may need to avoid that kind of behavior. This can be done using the get() function:
End of explanation
"""
votos.get('joão', 0)
"""
Explanation: It is also possible to set a value to be returned when the key is not found:
End of explanation
"""
candidatos = list(votos.keys()) + ['joão', 'muriel', 'marcola']
candidatos
for candidato in candidatos:
print('{} recebeu {} votos.'.format(candidato, votos.get(candidato, 0)))
"""
Explanation: For example, suppose you want to tally the votes of an election in which some candidates received no votes and therefore do not appear in the votos dictionary:
End of explanation
"""
votos
'ana' in votos
'penélope' in votos
len(votos) # number of items in the dict
outros_votos = {'milena': 100, 'mário': 1}
votos.update(outros_votos) # update the votos dictionary with the items from outros_votos
votos
"""
Explanation: We can check whether a key exists in the dictionary with in:
End of explanation
"""
votos.keys()
"""
Explanation: To access only the keys of a dictionary we do:
End of explanation
"""
['maria', 'adelaide'] & votos.keys()
['maria', 'adelaide'] - votos.keys()
"""
Explanation: Notice that the return value is not a list of keys but a dict_keys object. I won't go into detail, but dict_keys - along with dict_values and dict_items, shown further below - behaves like a set, so membership checks are very efficient and some set operations are supported:
For more information, see the official documentation and PEP 3106, where this change was proposed.
End of explanation
"""
votos.values()
"""
Explanation: To access only the values of the dictionary:
End of explanation
"""
votos.items()
"""
Explanation: Since the values are not unique, dict_values cannot behave like a set, which is why it does not provide the set operations.
End of explanation
"""
[('jean', 50), ('maria', 25)] & votos.items()
[('jean', 50), ('maria', 25)] - votos.items()
"""
Explanation: However, dict.items() does implement the set operations, because each (key, value) pair is unique:
End of explanation
"""
for nome in votos.keys():
print(nome)
for qtd_votos in votos.values():
print(qtd_votos)
for nome, qtd_votos in votos.items(): # multiple assignment, remember?
print('{} recebeu {} votos.'.format(nome.capitalize(), qtd_votos))
"""
Explanation: Reviewing dictionary iteration:
End of explanation
"""
# don't change this code; it generates the votes so you can test your program
from faker import Factory
import random
factory = Factory.create('pt_BR')
# use a Gaussian distribution to generate the vote counts
votos = {factory.name(): abs(round(random.gauss(0, .2) * 10000)) for _ in range(333)}
# keep only full names made of exactly two names
votos = {nome: votos for nome, votos in votos.items() if len(nome.split()) == 2}
def media(votos):
...
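# A minimal sketch of one possible solution (under a different name, so you can
# still try the exercise yourself first):
def media_exemplo(votos):
    return sum(votos.values()) / len(votos)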
"""
Explanation: Exercise
Compute the average number of votes per candidate:
End of explanation
"""
from faker import Factory
factory = Factory.create('pt_BR')
cpfs = {factory.name(): factory.cpf() for _ in range(10)}
cpfs
{numero: numero ** 2 for numero in range(10)}
"""
Explanation: Just like lists and sets, dictionaries also have an easy way of being built: the dict comprehension.
End of explanation
"""
telefones = {'joão': '9941', 'ana': '9103', 'maria': '9301', 'pedro': '9203'}
telefones
"""
Explanation: It turns out that dict comprehensions give us a very elegant way to invert the keys and values of a dictionary:
End of explanation
"""
nomes = {}
for nome, telefone in telefones.items():
nomes[telefone] = nome
nomes
"""
Explanation: Normally we would do:
End of explanation
"""
{telefone: nome for nome, telefone in telefones.items()}
"""
Explanation: But with a dict comprehension it is much easier:
End of explanation
"""
sorted(telefones.items())
"""
Explanation: Sorting a dict by its keys:
End of explanation
"""
dict(sorted(telefones.items()))
"""
Explanation: Getting a dict back doesn't quite make sense here, because dicts don't maintain order.
Still, to get an object of type dict you can convert the result above:
End of explanation
"""
def pega_segundo_elemento(tupla):
return tupla[1]
sorted(telefones.items(), key=pega_segundo_elemento)
"""
Explanation: To sort by the values we need to "tell" the sorted() function to use the value as the sort key. This is done by passing a key argument, which receives a function used to pick the value the sort will be based on. In this case we need a function that returns the second element of each (key, value) tuple, i.e. the value.
End of explanation
"""
pega_segundo_elemento = lambda x: x[1]
sorted(telefones.items(), key=pega_segundo_elemento)
"""
Explanation: Side note: in Python, functions are first-class objects. This means that functions, just like integers, strings and lists, can be passed as parameters, assigned to variables, and so on. This subject will be covered in more depth in the lesson on functions.
Another common way to perform this sort is to use anonymous functions:
End of explanation
"""
sorted(telefones.items(), key=lambda x: x[1])
"""
Explanation: Usually anonymous functions are used like this:
End of explanation
"""
from operator import itemgetter
sorted(telefones.items(), key=itemgetter(1))
"""
Explanation: However, the most efficient way to perform this kind of operation is to use the operator library, which implements Python's operators efficiently:
End of explanation
"""
from collections import defaultdict
def conta_palavras(frase):
contagem = {}
...
return contagem
# run the code below to test the correctness of your program
assert conta_palavras("quod dolore dolore dolore modi sapiente quod ullam nostrum ullam") == {'ullam': 2, 'sapiente': 1, 'quod': 2, 'nostrum': 1, 'dolore': 3, 'modi': 1}
assert conta_palavras("soluta Soluta sapiente sapiente nostrum Sapiente dolore nostrum modi ullam") == {'ullam': 1, 'sapiente': 3, 'nostrum': 2, 'dolore': 1, 'soluta': 2, 'modi': 1}
assert conta_palavras("quod dolore dolore soluta sapiente sapiente dolore quod sapiente modi") == {'dolore': 3, 'quod': 2, 'soluta': 1, 'modi': 1, 'sapiente': 3}
assert conta_palavras("dolore Dolore quis quod dolore nostrum quod Nostrum sapiente soluta") == {'sapiente': 1, 'quod': 2, 'nostrum': 2, 'soluta': 1, 'dolore': 3, 'quis': 1}
assert conta_palavras("sapiente sapiente quod soluta quis ullam nostrum soluta ullam ullam") == {'ullam': 3, 'sapiente': 2, 'quod': 1, 'nostrum': 1, 'soluta': 2, 'quis': 1}
assert conta_palavras("modi Sapiente dolore Soluta sapiente quis soluta modi dolore ullam") == {'ullam': 1, 'sapiente': 2, 'quis': 1, 'dolore': 2, 'soluta': 2, 'modi': 2}
assert conta_palavras("quis quis nostrum nostrum sapiente quis nostrum quod quis dolore") == {'sapiente': 1, 'quod': 1, 'quis': 4, 'nostrum': 3, 'dolore': 1}
assert conta_palavras("nostrum sapiente quis ullam ullam quod ullam nostrum ullam soluta") == {'ullam': 4, 'sapiente': 1, 'quod': 1, 'nostrum': 2, 'soluta': 1, 'quis': 1}
assert conta_palavras("sapiente ullam quod quis dolore modi Quis quod dolore nostrum") == {'ullam': 1, 'sapiente': 1, 'quod': 2, 'quis': 2, 'nostrum': 1, 'dolore': 2, 'modi': 1}
assert conta_palavras("modi nostrum ullam Quis Soluta modi quis ullam modi ullam") == {'ullam': 3, 'soluta': 1, 'modi': 3, 'nostrum': 1, 'quis': 2}
"""
Explanation: The itemgetter() function from the operator library does the same as the pega_segundo_elemento() function, but much faster. To learn more about operator, see its documentation.
Examples
Dictionaries can be used to store sparse matrices
Exercises
Given a sentence, count the occurrences of each word in it. Store each word as a key in a dict and its count as the value:
End of explanation
"""
def comprime_chaves_dict(dicionario):
...
assert comprime_chaves_dict({'molestias': 3950, 'tempore': 'possimus', 'rerum': 1200}) == {'tmpr': 'possimus', 'mlsts': 3950, 'rrm': 1200}
assert comprime_chaves_dict({'nam': 5300, 'minus': 3700, 'fugit': 8600}) == {'mns': 3700, 'nm': 5300, 'fgt': 8600}
assert comprime_chaves_dict({'magnam': 2850, 'quam': 2300, 'asperiores': 7750}) == {'qm': 2300, 'sprrs': 7750, 'mgnm': 2850}
assert comprime_chaves_dict({'quos': 'dignissimos', 'qui': 1700, 'repellendus': 'aut'}) == {'rpllnds': 'aut', 'q': 1700, 'qs': 'dignissimos'}
assert comprime_chaves_dict({'quaerat': 9850, 'magni': 8600, 'blanditiis': 'optio'}) == {'mgn': 8600, 'qrt': 9850, 'blndts': 'optio'}
"""
Explanation: Write a function comprime_chaves_dict() that takes a dict and removes the vowels from its keys. For example, the dict {'foo': 10, 'bar': 100} should be compressed to {'f': 10, 'br': 100}.
End of explanation
"""
|
MartyWeissman/Python-for-number-theory
|
PwNT Notebook 6.ipynb
|
gpl-3.0
|
W = "Hello"
print W
for j in range(len(W)): # len(W) is the length of the string W.
print W[j] # Access the jth character of the string.
"""
Explanation: Part 6: Ciphers and Key exchange
In this notebook, we introduce cryptography -- how to communicate securely over insecure channels. We begin with a study of two basic ciphers, the Caesar cipher and its fancier variant, the Vigenère cipher. The Vigenère cipher uses a key to turn plaintext (i.e., the message) into ciphertext (the coded message), and uses the same key to turn the ciphertext back into plaintext. Therefore, two parties can communicate securely if they -- and only they -- possess the key.
If the security of communication rests on possession of a common key, then we're left with a new problem: how do the two parties agree on a common key, especially if they are far apart and communicating over an insecure channel?
A clever solution to this problem was published in 1976 by Whitfield Diffie and Martin Hellman, and so it's called Diffie-Hellman key exchange. It takes advantage of modular arithmetic: the existence of a primitive root (modulo a prime) and the difficulty of solving the discrete logarithm problem.
This part complements Chapter 6 of An Illustrated Theory of Numbers.
Table of Contents
Ciphers
Key exchange
<a id='cipher'></a>
Ciphers
A cipher is a way of transforming a message, called the plaintext into a different form, the ciphertext, which conceals the meaning to all but the intended recipient(s). A cipher is a code, and can take many forms. A substitution cipher might simply change every letter to a different letter in the alphabet. This is the idea behind "Cryptoquip" puzzles. These are not too hard for people to solve, and are easy for computers to solve, using frequency analysis (understanding how often different letters or letter-combinations occur).
ASCII code and the Caesar cipher
Even though substitution ciphers are easy to break, they are a good starting point. To implement substitution ciphers in Python, we need to study the string type in a bit more detail. To declare a string variable, just put your string in quotes. You can use any letters, numbers, spaces, and many symbols inside a string. You can enclose your string by single quotes, like 'Hello' or double-quotes, like "Hello". This flexibility is convenient, if you want to use quotes within your string. For example, the string Prince's favorite prime is 1999 should be described in Python with double-quotes "Prince's favorite prime is 1999" so that the apostrophe doesn't confuse things.
Strings are indexed, and their letters can be retrieved as if the string were a list of letters. Python experts will note that strings are immutable while lists are mutable objects, but we aren't going to worry about that here.
End of explanation
"""
print type(W)
print type(W[0]) # W[0] is a character.
"""
Explanation: Each "letter" of a string again belongs to the string type. A string of length one is called a character.
End of explanation
"""
chr(65)
ord('A')
"""
Explanation: Since computers store data in binary, the designers of early computers (1960s) created a code called ASCII (American Standard Code for Information Interchange) to associate to each character a number between 0 and 127. Every number between 0 and 127 is represented in binary by 7 bits (between 0000000 and 1111111), and so each character is stored with 7 bits of memory. Later, ASCII was extended with another 128 characters, so that codes between 0 and 255 were used, requiring 8 bits. 8 bits of memory is called a byte. One byte of memory suffices to store one (extended ASCII) character.
You might notice that there are 256 ASCII codes available, but there are fewer than 256 characters available on your keyboard, even once you include symbols like # and ;. Some of these "extra" codes are for accented letters, and others are relics of old computers. For example, ASCII code 7 (0000111) stands for the "bell", and older readers might remember making the Apple II computer beep by pressing Control-G on the keyboard ("G" is the 7th letter). You can look up a full ASCII table if you're curious.
Nowadays, the global community of computer users requires far more than 256 "letters" -- there are many alphabets around the world! So instead of ASCII, we can access over 100 thousand unicode characters. Scroll through a unicode table to see what is possible. In Python version 3.x, all strings are considered in Unicode, but in Python 2.7 (which we use), it's a bit trickier to work with Unicode.
Here we stay within ASCII codes, since they will suffice for basic English messages. Python has built-in commands chr and ord for converting from code-number (0--255) to character and back again.
End of explanation
"""
for a in range(32,127):
c = chr(a)
print "ASCII %d is %s"%(a, c)
"""
Explanation: The following code will produce a table of the ASCII characters with codes between 32 and 126. This is a good range which includes all the most common English characters and symbols on a U.S. keyboard. Note that ASCII code 32 corresponds to an empty space (an important character for long messages!)
End of explanation
"""
def inrange(n,range_min, range_max):
'''
The input number n can be any integer.
The output number will be between range_min and range_max (inclusive)
If the input number is already within range, it will not change.
'''
range_len = range_max - range_min + 1
a = n % range_len
if a < range_min:
a = a + range_len
return a
inrange(13,1,10)
inrange(17,5,50)
"""
Explanation: Since we only work with the ASCII range between 32 and 126, it will be useful to "cycle" other numbers into this range. For example, we will interpret 127 as 32, 128 as 33, etc., when we convert out-of-range numbers into characters.
The following function forces a number into a given range, using the mod operator. It's a common trick, to make lists loop around cyclically.
End of explanation
"""
def Caesar_shift(c, shift):
'''
Shifts the character c by shift units
within the ASCII table between 32 and 126.
The shift parameter can be any integer!
'''
ascii = ord(c)
a = ascii + shift # Now we have a number between 32+shift and 126+shift.
a = inrange(a,32,126) # Put the number back in range.
return chr(a)
"""
Explanation: Now we can implement a substitution cipher by converting characters to their ASCII codes, shuffling the codes, and converting back. One of the simplest substitution ciphers is called a Caesar cipher, in which each character is shifted -- by a fixed amount -- down the list. For example, a Caesar cipher of shift 3 would send 'A' to 'D' and 'B' to 'E', etc. Near the end of the list, characters are shifted back to the beginning -- the list is treated cyclically, using our inrange function.
Here is an implementation of the Caesar cipher, using the ASCII range between 32 and 126. We begin with a function to shift a single character.
End of explanation
"""
for a in range(32,127):
c = chr(a)
print "ASCII %d is %s, which shifts to %s"%(a, c, Caesar_shift(c,5)) # Shift by 5.
"""
Explanation: Let's see the effect of the Caesar cipher on our ASCII table.
End of explanation
"""
def Caesar_cipher(plaintext, shift):
ciphertext = ''
for c in plaintext: # Iterate through the characters of a string.
ciphertext = ciphertext + Caesar_shift(c,shift)
return ciphertext
print Caesar_cipher('Hello! Can you read this?', 5) # Shift forward 5 units in ASCII.
"""
Explanation: Now we can use the Caesar cipher to encrypt strings.
End of explanation
"""
print Caesar_cipher('Mjqqt&%%Hfs%~tz%wjfi%ymnxD', -5) # Shift back 5 units in ASCII.
"""
Explanation: As designed, the Caesar cipher turns plaintext into ciphertext by using a shift of the ASCII table. To decipher the ciphertext, one can just use the Caesar cipher again, with the negative shift.
End of explanation
"""
def Vigenere_cipher(plaintext, key):
ciphertext = '' # Start with an empty string
for j in range(len(plaintext)):
c = plaintext[j] # the jth letter of the plaintext
key_index = j % len(key) # Cycle through letters of the key.
shift = ord(key[key_index]) # How much we shift c by.
ciphertext = ciphertext + Caesar_shift(c,shift) # Add new letter to ciphertext
return ciphertext
print Vigenere_cipher('This is very secret', 'Key') # 'Key' is probably a bad key!!
"""
Explanation: The Vigenère cipher
The Caesar cipher is pretty easy to break, by a brute force attack (shift by all possible values) or a frequency analysis (compare the frequency of characters in a message to the frequency of characters in typical English messages, to make a guess).
The Vigenère cipher is a variant of the Caesar cipher which uses an encryption key to vary the shift-parameter throughout the encryption process. For example, to encrypt the message "This is very secret" using the key "Key", you line up the characters of the message above repeated copies of the key.
T | h | i | s | | i | s | | v | e | r | y | | s | e | c | r | e | t
--|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--
K | e | y | K | e | y | K | e | y | K | e | y | K | e | y | K | e | y | K
Then, you turn everything into ASCII (or your preferred numerical system), and use the bottom row to shift the top row.
ASCII message | 84 | 104 | 105 | 115 | 32 | 105 | 115 | 32 | 118 | 101 | 114 | 121 | 32 | 115 | 101 | 99 | 114 | 101 | 116
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Shift | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75
ASCII shifted | 159 | 205 | 226 | 190 | 133 | 226 | 190 | 133 | 239 | 176 | 215 | 242 | 107 | 216 | 222 | 174 | 215 | 222 | 191
ASCII shifted in range | 64 | 110 | 36 | 95 | 38 | 36 | 95 | 38 | 49 | 81 | 120 | 52 | 107 | 121 | 32 | 79 | 120 | 32 | 96
Finally, the shifted ASCII codes are converted back into characters for transmission. In this case, the codes 64,110,36,95, etc., are converted to the ciphertext "@n$_&$_&1Qx4ky Ox \`"
The Vigenère cipher is much harder to crack than the Caesar cipher, if you don't have the key. Indeed, the varying shifts make frequency analysis more difficult. The Vigenère cipher is weak by today's standards (see Wikipedia for a description of 19th century attacks), but illustrates the basic actors in a symmetric key cryptosystem: the plaintext, ciphertext, and a single key. Today, symmetric key cryptosystems like AES and 3DES are used all the time for secure communication.
Below, we implement the Vigenère cipher.
End of explanation
"""
def Vigenere_decipher(ciphertext, key):
plaintext = '' # Start with an empty string
for j in range(len(ciphertext)):
c = ciphertext[j] # the jth letter of the ciphertext
key_index = j % len(key) # Cycle through letters of the key.
shift = - ord(key[key_index]) # Note the negative sign to decipher!
plaintext = plaintext + Caesar_shift(c,shift) # Add new letter to plaintext
return plaintext
Vigenere_decipher('@n$_&$_&1Qx4ky Ox `', 'Key')
# Try a few cipher/deciphers yourself to get used to the Vigenere system.
"""
Explanation: The Vigenère cipher is called a symmetric cryptosystem, because the same key that is used to encrypt the plaintext can be used to decrypt the ciphertext. All we do is subtract the shift at each stage.
End of explanation
"""
def mult_order(a,p):
'''
Determines the (multiplicative) order of an integer
a, modulo p. Here p is prime, and GCD(a,p) = 1.
If bad inputs are used, this might lead to a
never-ending loop!
'''
current_number = a % p
current_exponent = 1
while current_number != 1:
current_number = (current_number * a)%p
current_exponent = current_exponent + 1
return current_exponent
for j in range(1,37):
print "The multiplicative order of %d modulo 37 is %d"%(j,mult_order(j,37))
# These orders should all be divisors of 36.
"""
Explanation: The Vigenère cipher becomes an effective way for two parties to communicate securely, as long as they share a secret key. In the 19th century, this often meant that the parties would require an initial in-person meeting to agree upon a key, or a well-guarded messenger would carry the key from one party to the other.
Today, as we wish to communicate securely over long distances on a regular basis, the process of agreeing on a key is more difficult. It seems like a chicken-and-egg problem, where we need a shared secret to communicate securely, but we can't share a secret without communicating securely in the first place!
Remarkably, this secret-sharing problem can be solved with some modular arithmetic tricks. This is the subject of the next section.
Exercises
A Caesar cipher was used to encode a message, with the resulting ciphertext: 'j!\'1r$v1"$v&&+1t}v(v$2'. Use a loop (brute force attack) to figure out the original message.
Imagine that you encrypt a long message (e.g., 1000 words of standard English) with a Vigenรจre cipher. How might you detect the length of the key, if it is short (e.g. 3 or 4 characters)?
Consider running a plaintext message through a Vigenรจre cipher with a 3-character key, and then running the ciphertext through a Vigenรจre cipher with a 4-character key. Explain how this is equivalent to running the original message through a single cipher with a 12-character key.
<a id='keyexchange'></a>
Key exchange
Now we study Diffie-Hellman key exchange, a remarkable way for two parties to share a secret without ever needing to directly communicate the secret with each other. Their method is based on properties of modular exponentiation and the existence of a primitive root modulo prime numbers.
Primitive roots and Sophie Germain primes
If $p$ is a prime number, and $GCD(a,p) = 1$, then recall Fermat's Little Theorem: $$a^{p-1} \equiv 1 \text{ mod } p.$$
It may be the case that $a^\ell \equiv 1$ mod $p$ for some smaller (positive) value of $\ell$ however. The smallest such positive value of $\ell$ is called the order (multiplicative order, to be precise) of $a$ modulo $p$, and it is always a divisor of $p-1$.
The following code determines the order of a number, mod $p$, with a brute force approach.
End of explanation
"""
def is_primroot_safe(b,p):
'''
Checks whether b is a primitive root modulo p,
when p is a safe prime. If p is not safe,
the results will not be good!
'''
q = (p-1) / 2 # q is the Sophie Germain prime
if b%p == 1: # Is the multiplicative order 1?
return False
if (b*b)%p == 1: # Is the multiplicative order 2?
return False
if pow(b,q,p) == 1: # Is the multiplicative order q?
return False
return True # If not, then b is a primitive root mod p.
"""
Explanation: A theorem of Gauss states that, if $p$ is prime, there exists an integer $b$ whose order is precisely $p-1$ (as big as possible!). Such an integer is called a primitive root modulo $p$. For example, the previous computation found 12 primitive roots modulo $37$: they are 2,5,13,15,17,18,19,20,22,24,32,35. To see these illustrated (mod 37), check out this poster (yes, that is blatant self-promotion!)
For everything that follows, suppose that $p$ is a prime number. Not only do primitive roots exist mod $p$, but they are pretty common. In fact, the number of primitive roots mod $p$ equals $\phi(p-1)$, where $\phi$ denotes Euler's totient. On average, $\phi(n)$ is about $6 / \pi^2$ times $n$ (for positive integers $n$). While numbers of the form $p-1$ are not "average", one still expects that $\phi(p-1)$ is a not-very-small fraction of $p-1$. You should not have to look very far if you want to find a primitive root.
The more difficult part, in practice, is determining whether a number $b$ is or is not a primitive root modulo $p$. When $p$ is very large (like hundreds or thousands of digits), $p-1$ is also very large. It is certainly not practical to cycle all the powers (from $1$ to $p-1$) of $b$ to determine whether $b$ is a primitive root!
The better approach, sometimes, is to use the fact that the multiplicative order of $b$ must be a divisor of $p-1$. If one can find all the divisors of $p-1$, then one can just check whether $b^d \equiv 1$ mod $p$ for each divisor $d$. This makes the problem of determining whether $b$ is a primitive root just about as hard as the problem of factoring $p-1$. This is a hard problem, in general!
But, for the application we're interested in, we will want to have a large prime number $p$ and a primitive root mod $p$. The easiest way to do this is to use a Sophie Germain prime $q$. A Sophie Germain prime is a prime number $q$ such that $2q + 1$ is also prime. When $q$ is a Sophie Germain prime, the resulting prime $p = 2q + 1$ is called a safe prime.
Observe that when $p$ is a safe prime, the prime decomposition of $p-1$ is
$$p-1 = 2 \cdot q.$$
That's it. So the possible multiplicative orders of an element $b$, mod $p$, are the divisors of $2q$, which are
$$1, 2, q, \text{ or } 2q.$$
In order to check whether $b$ is a primitive root, modulo a safe prime $p = 2q + 1$, we must check just three things: is $b \equiv 1$, is $b^2 \equiv 1$, or is $b^q \equiv 1$, mod $p$? If the answer to these three questions is NO, then $b$ is a primitive root mod $p$.
End of explanation
"""
from random import randint # randint chooses random integers.
def Miller_Rabin(p, base):
'''
Tests whether p is prime, using the given base.
The result False implies that p is definitely not prime.
The result True implies that p **might** be prime.
It is not a perfect test!
'''
result = 1
exponent = p-1
modulus = p
bitstring = bin(exponent)[2:] # Chop off the '0b' part of the binary expansion of exponent
for bit in bitstring: # Iterates through the "letters" of the string. Here the letters are '0' or '1'.
sq_result = result*result % modulus # We need to compute this in any case.
if sq_result == 1:
if (result != 1) and (result != exponent): # Note that exponent is congruent to -1, mod p.
return False # a ROO violation occurred, so p is not prime
if bit == '0':
result = sq_result
if bit == '1':
result = (sq_result * base) % modulus
if result != 1:
return False # a FLT violation occurred, so p is not prime.
return True # If we made it this far, no violation occurred and p might be prime.
def is_prime(p, witnesses=50): # witnesses is a parameter with a default value.
'''
Tests whether a positive integer p is prime.
For p < 2^64, the test is deterministic, using known good witnesses.
Good witnesses come from a table at Wikipedia's article on the Miller-Rabin test,
based on research by Pomerance, Selfridge and Wagstaff, Jaeschke, Jiang and Deng.
For larger p, a number (by default, 50) of witnesses are chosen at random.
'''
if (p%2 == 0): # Might as well take care of even numbers at the outset!
if p == 2:
return True
else:
return False
if p > 2**64: # We use the probabilistic test for large p.
trial = 0
while trial < witnesses:
trial = trial + 1
witness = randint(2,p-2) # A good range for possible witnesses
if Miller_Rabin(p,witness) == False:
return False
return True
else: # We use a determinisic test for p <= 2**64.
verdict = Miller_Rabin(p,2)
if p < 2047:
return verdict # The witness 2 suffices.
verdict = verdict and Miller_Rabin(p,3)
if p < 1373653:
return verdict # The witnesses 2 and 3 suffice.
verdict = verdict and Miller_Rabin(p,5)
if p < 25326001:
return verdict # The witnesses 2,3,5 suffice.
verdict = verdict and Miller_Rabin(p,7)
if p < 3215031751:
return verdict # The witnesses 2,3,5,7 suffice.
verdict = verdict and Miller_Rabin(p,11)
if p < 2152302898747:
return verdict # The witnesses 2,3,5,7,11 suffice.
verdict = verdict and Miller_Rabin(p,13)
if p < 3474749660383:
return verdict # The witnesses 2,3,5,7,11,13 suffice.
verdict = verdict and Miller_Rabin(p,17)
if p < 341550071728321:
return verdict # The witnesses 2,3,5,7,11,17 suffice.
verdict = verdict and Miller_Rabin(p,19) and Miller_Rabin(p,23)
if p < 3825123056546413051:
return verdict # The witnesses 2,3,5,7,11,17,19,23 suffice.
verdict = verdict and Miller_Rabin(p,29) and Miller_Rabin(p,31) and Miller_Rabin(p,37)
return verdict # The witnesses 2,3,5,7,11,17,19,23,29,31,37 suffice for testing up to 2^64.
def is_SGprime(p):
'''
Tests whether p is a Sophie Germain prime
'''
if is_prime(p): # A bit faster to check whether p is prime first.
if is_prime(2*p + 1): # and *then* check whether 2p+1 is prime.
return True
"""
Explanation: This would not be very useful if we couldn't find Sophie Germain primes. Fortunately, they are not so rare. The first few are 2, 3, 5, 11, 23, 29, 41, 53, 83, 89. It is expected, but unproven that there are infinitely many Sophie Germain primes. In practice, they occur fairly often. If we consider numbers of magnitude $N$, about $1 / \log(N)$ of them are prime. Among such primes, we expect about $1.3 / \log(N)$ to be Sophie Germain primes. In this way, we can expect to stumble upon Sophie Germain primes if we search for a bit (and if $\log(N)^2$ is not too large).
The code below tests whether a number $p$ is a Sophie Germain prime. We construct it by simply testing whether $p$ and $2p+1$ are both prime. We use the Miller-Rabin test (the code from the previous Python notebook) in order to test whether each is prime.
End of explanation
"""
for j in range(1,100):
if is_SGprime(j):
print j, 2*j+1
"""
Explanation: Let's test this out by finding the Sophie Germain primes up to 100, and their associated safe primes.
End of explanation
"""
test_number = 10**99 # Start looking at the first 100-digit number, which is 10^99.
while not is_SGprime(test_number):
test_number = test_number + 1
print test_number
"""
Explanation: Next, we find the first 100-digit Sophie Germain prime! This might take a minute!
End of explanation
"""
from random import SystemRandom # Import the necessary package.
r = SystemRandom().getrandbits(256)
print "The random integer is ",r
print "with binary expansion",bin(r) # r is an integer constructed from 256 random bits.
print "with bit-length ",len(bin(r)) - 2 # In case you want to check. Remember '0b' is at the beginning.
def getrandSGprime(bitlength):
'''
Creates a random Sophie Germain prime p with about
bitlength bits.
'''
while True:
p = SystemRandom().getrandbits(bitlength) # Choose a really random number.
if is_SGprime(p):
return p
"""
Explanation: In the seconds or minutes your computer was running, it checked the primality of almost 90 thousand numbers, each with 100 digits. Not bad!
The Diffie-Hellman protocol
When we study protocols for secure communication, we must keep track of the communicating parties (often called Alice and Bob), and who has knowledge of what information. We assume at all times that the "wire" between Alice and Bob is tapped -- anything they say to each other is actively monitored, and is therefore public knowledge. We also assume that what happens on Alice's private computer is private to Alice, and what happens on Bob's private computer is private to Bob. Of course, these last two assumptions are big assumptions -- they point towards the danger of computer viruses which infect computers and can violate such privacy!
The goal of the Diffie-Hellman protocol is -- at the end of the process -- for Alice and Bob to share a secret without ever having communicated the secret with each other. The process involves a series of modular arithmetic calculations performed on each of Alice and Bob's computers.
The process begins when Alice or Bob creates and publicizes a large prime number p and a primitive root g modulo p. It is best, for efficiency and security, to choose a safe prime p. Alice and Bob can create their own safe prime, or choose one from a public list online, e.g., from the RFC 3526 memo. Nowadays, it's common to take p with 2048 bits, i.e., a prime which is between $2^{2047}$ and $2^{2048}$ (a number with 617 decimal digits!).
For the purposes of this introduction, we use a smaller safe prime, with about 256 bits. We use the SystemRandom functionality of the random package to create a good random prime. It is not so much of an issue here, but in general one must be very careful in cryptography that one's "random" numbers are really "random"! The SystemRandom function uses chaotic properties of your computer's innards in order to initialize a random number generator, and is considered cryptographically secure.
End of explanation
"""
q = getrandSGprime(256) # A random ~256 bit Sophie Germain prime
p = 2*q + 1 # And its associated safe prime
print "p is ",p # Just to see what we're working with.
print "q is ",q
"""
Explanation: The function above searches and searches among random numbers until it finds a Sophie Germain prime. The (possibly endless!) search is performed with a while True: loop that may look strange. The idea is to stay in the loop until such a prime is found. Then the return p command returns the found prime as output and halts the loop. One must be careful with while True loops, since they are structured to run forever -- if there's not a loop-breaking command like return or break inside the loop, your computer will be spinning for a long time.
End of explanation
"""
def findprimroot_safe(p):
'''
Finds a primitive root,
modulo a safe prime p.
'''
b = 2 # Start trying with 2.
while True: # We just keep on looking.
if is_primroot_safe(b,p):
return b
b = b + 1 # Try the next base. Shouldn't take too long to find one!
g = findprimroot_safe(p)
print g
"""
Explanation: Next we find a primitive root, modulo the safe prime p.
End of explanation
"""
a = SystemRandom().getrandbits(256) # Alice's secret number
b = SystemRandom().getrandbits(256) # Bob's secret number
print "Only Alice should know that a = %d"%(a)
print "Only Bob should know that b = %d"%(b)
print "But everyone can know p = %d and g = %d"%(p,g)
"""
Explanation: The pair of numbers $(g, p)$, the primitive root and the safe prime, chosen by either Alice or Bob, is now made public. They can post their $g$ and $p$ on a public website or shout it in the streets. It doesn't matter. They are just tools for their secret-creation algorithm below.
Alice and Bob's private secrets
Next, Alice and Bob invent private secret numbers $a$ and $b$. They do not tell anyone these numbers. Not each other. Not their family. Nobody. They don't write them on a chalkboard, or leave them on a thumbdrive that they lose. These are really secret.
But they don't use their phone numbers, or social security numbers. It's best for Alice and Bob to use a secure random number generator on their separate private computers to create $a$ and $b$. They are often 256 bit numbers in practice, so that's what we use below.
End of explanation
"""
A = pow(g,a,p) # This would be computed on Alice's computer.
B = pow(g,b,p) # This would be computed on Bob's computer.
"""
Explanation: Now Alice and Bob use their secrets to generate new numbers. Alice computes the number
$$A = g^a \text{ mod } p,$$
and Bob computes the number
$$B = g^b \text{ mod } p.$$
End of explanation
"""
print "Everyone knows A = %d and B = %d."%(A,B)
"""
Explanation: Now Alice and Bob do something that seems very strange at first. Alice sends Bob her new number $A$ and Bob sends Alice his new number $B$. Since they are far apart, and the channel is insecure, we can assume everyone in the world now knows $A$ and $B$.
End of explanation
"""
print pow(B,a,p) # This is what Alice computes.
print pow(A,b,p) # This is what Bob computes.
"""
Explanation: Now Alice, on her private computer, computes $B^a$ mod $p$. She can do that because everyone knows $B$ and $p$, and she knows $a$ too.
Similarly, Bob, on his private computer, computes $A^b$ mod $p$. He can do that because everyone knows $A$ and $p$, and he knows $b$ too.
Alice and Bob do not share the results of their computations!
End of explanation
"""
S = pow(B,a,p) # Or we could have used pow(A,b,p)
print S
"""
Explanation: Woah! What happened? In terms of exponents, it's elementary. For
$$B^a = (g^{b})^a = g^{ba} = g^{ab} = (g^a)^b = A^b.$$
So these two computations yield the same result (mod $p$, the whole way through).
In the end, we find that Alice and Bob share a secret. We call this secret number $S$.
$$S = B^a = A^b.$$
End of explanation
"""
# We use the single-quotes for a long string, that occupies multiple lines.
# The backslash at the end of the line tells Python to ignore the newline character.
# Imagine that Alice has a secret message she wants to send to Bob.
# She writes the plaintext on her computer.
plaintext = '''Did you hear that the American Mathematical Society has an annual textbook sale? \
It's 40 percent off for members and 25 percent off for everyone else.'''
# Now Alice uses the secret S (as a string) to encrypt.
ciphertext = Vigenere_cipher(plaintext, str(S))
print ciphertext
# Alice sends the following ciphertext to Bob, over an insecure channel.
# When Bob receives the ciphertext, he decodes it with the secret S again.
print Vigenere_decipher(ciphertext, str(S))
"""
Explanation: This common secret $S$ can be used as a key for Alice and Bob to communicate hereafter. For example, they might use $S$ (converted to a string, if needed) as the key for a Vigenère cipher, and chat with each other knowing that only they have the secret key to encrypt and decrypt their messages.
End of explanation
"""
|
liganega/Gongsu-DataSci
|
ref_materials/exams/2017/A02/final-a02.ipynb
|
gpl-3.0
|
from __future__ import division, print_function
import numpy as np
import pandas as pd
from datetime import datetime as dt
"""
Explanation: 2017 Semester 2 Final Exam
Name:
Student ID:
Module imports
These are the modules needed to run the code in this exam.
End of explanation
"""
a = np.arange(1, 12, 2)
b = a.reshape(3,2)
b
"""
Explanation: NumPy arrays
Use the reshape function to create an array of the following shape.
$$\left [ \begin{matrix} 1 & 3 \\ 5 & 7 \\ 9 & 11 \end{matrix} \right ]$$
End of explanation
"""
b[2,1] = 3
"""
Explanation: Now run the code below as well.
End of explanation
"""
a = np.arange(6) + np.arange(0, 51, 10)[:, np.newaxis]
a
"""
Explanation: Problem 1
(1) Explain how the values of a have changed.
(2) Explain why this happens.
(3) Explain what you should do to avoid this behavior.
NumPy array indexing/slicing
The following problems use the array created by the code below.
End of explanation
"""
a[2::2, :4:2]
"""
Explanation: Using the array a we can, for example, produce an array of the following shape.
$$\left [ \begin{matrix} 20 & 22 \\ 40 & 42 \end{matrix} \right ]$$
End of explanation
"""
a[(1, 2, 3), (2, 3, 4)]
"""
Explanation: Problem 2
(1) Use indexing and slicing on the array a to produce the following array.
$$\left [ \begin{matrix} 20 & 21 & 22 \\ 40 & 41 & 42 \end{matrix} \right ]$$
(2) Use indexing and slicing on the array a to produce the following array.
$$\left [ \begin{matrix} 5 & 15 & 25 & 35 & 45 & 55 \end{matrix} \right ]$$
```
.
```
(3) Use mask indexing to produce the following result.
array([ 0, 3, 12, 15, 21, 24, 30, 33, 42, 45, 51, 54])
```
.
```
Fancy indexing
Fancy indexing can be used to produce the following result.
array([ 12, 23, 34])
End of explanation
"""
prices_pd = pd.read_csv("data/Weed_Price.csv", parse_dates=[-1])
"""
Explanation: (4) Use fancy indexing to produce an array of the following shape.
$$\left [ \begin{matrix} 30 & 32 & 35 \\ 50 & 52 & 55 \end{matrix} \right ]$$
```
.
```
Data analysis
The data used today is the following.
Wholesale weed (plant) prices and sale dates for the 51 US states: Weed_price.csv
The figure below shows part of the Weed_Price.csv file, which contains the state-by-state weed (plant) sales data for the US, opened in Excel.
The full data set has 22899 rows; the figure below shows only 5 of them.
* Note: line 1 contains the table's list of column names.
* Column names: State, HighQ, HighQN, MedQ, MedQN, LowQ, LowQN, date
<p>
<table cellspacing="20">
<tr>
<td>
<img src="img/weed_price.png" width=600>
</td>
</tr>
</table>
</p>
Problem 3
(1) Explain the code below.
End of explanation
"""
prices_pd.sort_values(['State', 'date'], inplace=True)
"""
Explanation: ```
.
```
(2) Explain the code below.
End of explanation
"""
prices_pd.fillna(method='ffill', inplace=True)
"""
Explanation: ```
.
```
(3) Explain the code below.
End of explanation
"""
california_pd = prices_pd[prices_pd.State == "California"].copy(True)
"""
Explanation: ```
.
```
(4) Explain the code below.
End of explanation
"""
ca_sum = california_pd['HighQ'].sum()
ca_count = california_pd['HighQ'].count()
ca_sum / ca_count
"""
Explanation: ```
.
```
(5) Explain the code below.
End of explanation
"""
def getYear(x):
return x.year
year_col = prices_pd.date.apply(getYear)
prices_pd["year"] = year_col
"""
Explanation: ```
.
```
(6) Explain the code below.
End of explanation
"""
price_ca14 = prices_pd[(prices_pd.State=="California") & \
(prices_pd.year==2014)].head()
"""
Explanation: ```
.
```
(7) Explain the code below.
End of explanation
"""
|
jennybrown8/python-notebook-coding-intro
|
lesson5exercises.ipynb
|
apache-2.0
|
count = 0
while (count < 5):
print "Still going! ", count
count = (count + 1)
print "All done!"
"""
Explanation: Lesson 5: While Loops
Here's a code example of a while loop. You can refer to it for ideas.
End of explanation
"""
text = "Hello, "
text = text + "Jenny. "
text = text + "How are you today? "
print text
"""
Explanation: Sally wants a program that counts backwards from 10 down to 0. Use a while loop to write the program.
Sami wants a program that counts upwards from 0 to 100 by 5's (so it prints 0, 5, 10, 15, 20, and so on). Write the program using a while loop.
Juna wants a program that takes a list of student names, and prints each name after it's entered. The program should keep running until "quit" is entered. Write the program using a while loop.
Here's an example of how to append text to a text variable. This is helpful if you want to print several things on one line, without the computer going to a new line in between.
End of explanation
"""
# Here's how you make the computer come up with a secret number.
# First we need the helper code that makes random numbers (just once).
import random
# Each time we want a new random number, we call this.
secret = random.randrange(1, 20)
# Let's see what it chose.
print secret
"""
Explanation: Tonya wants a program that prints a bar graph. If the user types in 4, then the program should print 4 # marks like this: #### If the user types in 5, it should print 5 hash marks in a row, and so on. Whatever number the user types in, the program prints one line with that many hashes in it, and then ends. The user only enters one number each time. Write the program.
Tonya likes your bar graph program. Now she wants a variant of it where the user keeps entering numbers, and each time they do, the program prints a bar graph that many hashes wide.
You can write this by putting a while loop inside another while loop. Pay close attention to the indentation here. Write the program.
Guess My Number
In this final challenge exercise, you'll write an interactive game. Think carefully about what goes before the loop (set up process), what goes inside the loop (game play), and what goes after the loop (finishing).
The computer will come up with a random number between 1 and 20. The player will try to guess the number. The computer will tell them whether they should guess higher, guess lower, or if they got it.
The player gets as many guesses as they want to try to guess the secret number. Here's a sample run:
Welcome to Guess My Number!
I'm thinking of a secret number from 1 to 20. What is it?
Guess: 10
My secret number is higher.
Guess: 15
My secret number is lower.
Guess: 12
My secret number is higher.
Guess: 13
You got it!
You win.
End of explanation
"""
|
coolharsh55/advent-of-code
|
2016/python3/Day14.ipynb
|
mit
|
import re
three_repeating_characters = re.compile(r'(.)\1{2}')
with open('../inputs/day14.txt', 'r') as f:
salt = f.readline().strip()
# TEST DATA
# salt = 'abc'
print(salt)
"""
Explanation: Day 14: One-Time Pad
author: Harshvardhan Pandit
license: MIT
link to problem statement
In order to communicate securely with Santa while you're on this mission, you've been using a one-time pad that you generate using a pre-agreed algorithm. Unfortunately, you've run out of keys in your one-time pad, and so you need to generate some more.
To generate keys, you first get a stream of random data by taking the MD5 of a pre-arranged [salt](https://en.wikipedia.org/wiki/Salt_(cryptography)) (your puzzle input) and an increasing integer index (starting with 0, and represented in decimal); the resulting MD5 hash should be represented as a string of lowercase hexadecimal digits.
However, not all of these MD5 hashes are keys, and you need 64 new keys for your one-time pad. A hash is a key only if:
It contains three of the same character in a row, like 777. Only consider the first such triplet in a hash.
One of the next 1000 hashes in the stream contains that same character five times in a row, like 77777.
Considering future hashes for five-of-a-kind sequences does not cause those hashes to be skipped; instead, regardless of whether the current hash is a key, always resume testing for keys starting with the very next hash.
For example, if the pre-arranged salt is abc:
The first index which produces a triple is 18, because the MD5 hash of abc18 contains ...cc38887a5.... However, index 18 does not count as a key for your one-time pad, because none of the next thousand hashes (index 19 through index 1018) contain 88888.
The next index which produces a triple is 39; the hash of abc39 contains eee. It is also the first key: one of the next thousand hashes (the one at index 816) contains eeeee.
None of the next six triples are keys, but the one after that, at index 92, is: it contains 999 and index 200 contains 99999.
Eventually, index 22728 meets all of the criteria to generate the 64th key.
So, using our example salt of abc, index 22728 produces the 64th key.
Given the actual salt in your puzzle input, what index produces your 64th one-time pad key?
Solution logic
Our salt is our input; we append increasing integers to it, take the MD5 hash of each, and if a hash contains a character repeated three times in a row, we check whether one of the next 1000 hashes contains that same character five times in a row. If it does, the integer we had is a key. Find 64 such keys, with the index of the 64th key being the answer. Seems simple and straightforward.
End of explanation
"""
import hashlib
hash_index= {}
def get_hash_string(key):
if key in hash_index:
return hash_index[key]
string = '{salt}{key}'.format(salt=salt, key=key)
md5 = hashlib.md5()
md5.update(string.encode('ascii'))
hashstring = md5.hexdigest()
hash_index[key] = hashstring
return hashstring
def run():
keys = []
current_key = 0
while(len(keys) < 64):
for i in range(0, current_key):
hash_index.pop(i, None)
hashstring = get_hash_string(current_key)
repeating_chacter = three_repeating_characters.findall(hashstring)
if not repeating_chacter:
current_key += 1
continue
repeating_chacter = repeating_chacter[0]
repeating_character_five = ''.join(repeating_chacter for i in range(0, 5))
for qualifying_index in range(current_key + 1, current_key + 1001):
hashstring = get_hash_string(qualifying_index)
if repeating_character_five in hashstring:
break
else:
current_key += 1
continue
keys.append(current_key)
print(len(keys), current_key)
current_key += 1
return keys
print('answer', run()[63])
"""
Explanation: Hash index
To avoid recomputing the same hash over and over, we maintain a hash index that stores the hash string for each integer index. We also trim the index, removing entries for keys lower than the current key, since we no longer need those.
End of explanation
"""
hash_index = {}
def get_hash_string(key):
if key in hash_index:
return hash_index[key]
string = '{salt}{key}'.format(salt=salt, key=key)
md5 = hashlib.md5()
md5.update(string.encode('ascii'))
hashstring = md5.hexdigest()
# PART TWO
for i in range(0, 2016):
md5 = hashlib.md5()
md5.update(hashstring.encode('ascii'))
hashstring = md5.hexdigest()
hash_index[key] = hashstring
return hashstring
print('answer', run()[63])
"""
Explanation: Part Two
Of course, in order to make this process even more secure, you've also implemented key stretching.
Key stretching forces attackers to spend more time generating hashes. Unfortunately, it forces everyone else to spend more time, too.
To implement key stretching, whenever you generate a hash, before you use it, you first find the MD5 hash of that hash, then the MD5 hash of that hash, and so on, a total of 2016 additional hashings. Always use lowercase hexadecimal representations of hashes.
For example, to find the stretched hash for index 0 and salt abc:
Find the MD5 hash of abc0: 577571be4de9dcce85a041ba0410f29f.
Then, find the MD5 hash of that hash: eec80a0c92dc8a0777c619d9bb51e910.
Then, find the MD5 hash of that hash: 16062ce768787384c81fe17a7a60c7e3.
...repeat many times...
Then, find the MD5 hash of that hash: a107ff634856bb300138cac6568c0f24.
So, the stretched hash for index 0 in this situation is a107ff.... In the end, you find the original hash (one use of MD5), then find the hash-of-the-previous-hash 2016 times, for a total of 2017 uses of MD5.
The rest of the process remains the same, but now the keys are entirely different. Again for salt abc:
The first triple (222, at index 5) has no matching 22222 in the next thousand hashes.
The second triple (eee, at index 10) has a matching eeeee at index 89, and so it is the first key.
Eventually, index 22551 produces the 64th key (triple fff with matching fffff at index 22859).
Given the actual salt in your puzzle input and using 2016 extra MD5 calls of key stretching, what index now produces your 64th one-time pad key?
Solution logic
We only need to change the definition of get_hash_string to calculate the hash 2016 times more. And then simply run the algorithm again.
To prevent computationally intensive operations from repeating themselves, we maintain an index of hashes so that we can easily lookup indexes without needing to calculate their hashes.
End of explanation
"""
|
pichot/citibike-publicspace
|
notebooks/1.7-kk-process-income.ipynb
|
mit
|
income = pd.read_excel("../data/unique/ACS_14_5YR_B19013.xls")
income = income.loc[8:]
income.head()
income = income.drop(['Unnamed: 1', 'Unnamed: 2', 'Unnamed: 3'], axis=1)
income = income.rename(columns={'B19013: MEDIAN HOUSEHOLD INCOME IN THE PAST 12 MONTHS (IN 2014 INFLATION-ADJUSTED DOLLARS) - Universe: Households': 'Zip_Code', 'Unnamed: 4': 'Median_Househould_Income', '$b': 'b'})
zips = []
for elem in income['Zip_Code']:
zips.append(str(elem))
zips2 = []
for elem in zips:
    zips2.append(elem[6:])  # keep only the part after the 6-character geography label prefix (the ZIP code)
income['Zip_Code'] = zips2
income['Zip_Code'] = pd.to_numeric(income['Zip_Code'])
income.head()
income["Zip"] = income["Zip_Code"].dropna().astype('int')
income.drop('Zip_Code', axis=1, inplace=True)
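# A more pandas-idiomatic alternative to the two loops above (illustrative sketch only;
# it assumes each geography label ends with the 5-digit ZIP, e.g. "ZCTA5 10001"):
example = pd.Series(["ZCTA5 10001", "ZCTA5 11201"])
example.str[-5:].astype(int)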
"""
Explanation: Income dataset
https://factfinder.census.gov/faces/nav/jsf/pages/searchresults.xhtml?refresh=t
The US Census generates a 'Median Household Income in the Past 12 Months (In 2014 Inflation-Adjusted Dollars)' from data collected by the American Community Survey. This report is based on 5-year estimates, from 2010-2014. The report I pulled is the most recent income-related information available by the US Census.
* I'm currently reading the data collection methodology for this project, so will update the group about how the data was collected when done.
End of explanation
"""
stations = pd.read_csv('../data/processed/stations.csv')
# inner join of stations and income on Zip
stations_income = stations.merge(income, how='inner', on='Zip')
stations_income.head()
print(type(stations_income))
print(len(stations_income))
stations_income.to_csv("../data/processed/stations-income.csv")
"""
Explanation: Merged income, zipcode, and station id for final dataframe
End of explanation
"""
|
ComputationalModeling/spring-2017-danielak
|
past-semesters/fall_2016/day-by-day/day23-agent-based-modeling-day1/Day_23_pre_class_notebook-SOLUTIONS.ipynb
|
agpl-3.0
|
# Put your code here!
import numpy as np
A = np.zeros((10,10), dtype='int')
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
A[i,j] = i+j
print(A)
"""
Explanation: Day 23 Pre-class assignment
Goals for today's pre-class assignment
In this pre-class assignment, you will:
Create and slice multi-dimensional numpy arrays
Plot 2D numpy arrays
Make an animation of 2D numpy arrays
Assignment instructions
First, work through the Numpy 2D array tutorial (Numpy_2D_array_tutorial.ipynb).
After that, write code to do the following things:
Task 1: Create a 2D Numpy array, named A, that is a 10x10 array integers, each of which is set to 0. Write a pair of for loops that iterate over A and sets A[i,j] = i + j. Print that array out to verify that it's behaving as expected - it should look like this:
[[ 0 1 2 3 4 5 6 7 8 9]
[ 1 2 3 4 5 6 7 8 9 10]
[ 2 3 4 5 6 7 8 9 10 11]
[ 3 4 5 6 7 8 9 10 11 12]
[ 4 5 6 7 8 9 10 11 12 13]
[ 5 6 7 8 9 10 11 12 13 14]
[ 6 7 8 9 10 11 12 13 14 15]
[ 7 8 9 10 11 12 13 14 15 16]
[ 8 9 10 11 12 13 14 15 16 17]
[ 9 10 11 12 13 14 15 16 17 18]]
End of explanation
"""
B = A[1::2,1::2]
print(B)
"""
Explanation: Task 2: Use Numpy's array slicing capabilities to create a new array, B, which is a subset of the values in array A. Specifically, extract every second element in both dimensions in array A, starting with the second element (i.e., index=1) in each dimension. Store this in B, and print out array B. It should look like this:
[[ 2 4 6 8 10]
[ 4 6 8 10 12]
[ 6 8 10 12 14]
[ 8 10 12 14 16]
[10 12 14 16 18]]
End of explanation
"""
C = B[1:3,:]
print(C)
"""
Explanation: Task 3: Using Numpy's slicing capabilities, extract the second and third rows of array B and store them in a third array, C. Print out C. It should look like this:
[[ 4 6 8 10 12]
[ 6 8 10 12 14]]
End of explanation
"""
def add_neighborhood(arr):
D = np.zeros_like(arr)
for i in range(arr.shape[0]):
for j in range(arr.shape[1]):
D[i,j] = arr[i,j]
if i >= 1:
D[i,j] += arr[i-1,j]
if i < arr.shape[0]-1:
D[i,j] += arr[i+1,j]
if j >= 1:
D[i,j] += arr[i,j-1]
if j < arr.shape[1]-1:
D[i,j] += arr[i,j+1]
return D
new_array = add_neighborhood(A)
print(A)
print(new_array)
new_array = add_neighborhood(B)
print(B)
print(new_array)
"""
Explanation: Task 4: Write a function, called add_neighborhood(), that:
Takes in an array as an argument
Creates an array, D, that is the same shape and data type as the incoming array but full of zeros.
Loops over all of the elements of the incoming array (using the shape() method to adjust for the fact that you don't know what its size is) and sets D[i,j] equal to the values of A[i,j] plus its four neighbors, A[i+1,j], A[i-1,j], A[i,j+1], A[i,j-1]. If you are at the edge or corner of the array (say, at A[0,0]) do not include any values that go over the edge of the array (into negative numbers or beyond the last index in any dimension).
Return the array D and print it out once it has been returned from the function.
Test this out using array A and B. When applied to array A, you should get this output:
[[ 2 5 9 13 17 21 25 29 33 27]
[ 5 10 15 20 25 30 35 40 45 39]
[ 9 15 20 25 30 35 40 45 50 43]
[13 20 25 30 35 40 45 50 55 47]
[17 25 30 35 40 45 50 55 60 51]
[21 30 35 40 45 50 55 60 65 55]
[25 35 40 45 50 55 60 65 70 59]
[29 40 45 50 55 60 65 70 75 63]
[33 45 50 55 60 65 70 75 80 67]
[27 39 43 47 51 55 59 63 67 52]]
and when you apply this function to array B, you should get this output:
[[10 18 26 34 30]
[18 30 40 50 46]
[26 40 50 60 54]
[34 50 60 70 62]
[30 46 54 62 50]]
Note: Make sure that the edges and corners have the right values!
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
myplot = plt.matshow(A, cmap='hot')
myplot.axes.get_xaxis().set_visible(False)
myplot.axes.get_yaxis().set_visible(False)
"""
Explanation: Task 5: Using the pyplot matshow() method, plot array A in a plot that uses a color map of your choice (that is not the default color map!), and where the axes are invisible.
End of explanation
"""
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/VwY5ods4ugnwidnG2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
"""
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation
"""
|
martinjrobins/hobo
|
examples/sampling/nuts-mcmc.ipynb
|
bsd-3-clause
|
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = np.array([0.015, 500])
times = np.linspace(0, 1000, 50)
org_values = model.simulate(real_parameters, times)
# Add noise
np.random.seed(1)
noise = 10
values = org_values + np.random.normal(0, noise, org_values.shape)
plt.plot(times, values, '.')
plt.plot(times, org_values)
plt.xlabel('time')
plt.show()
"""
Explanation: Inference: No-U-Turn (NUTS) MCMC
This example shows you how to perform Bayesian inference on a time-series problem, using No-U-Turn Monte Carlo (NUTS).
NUTS is a variant of the original Hamiltonian MCMC, but adds a number of adaptive features that aim to reduce the number of hyper-parameters to zero.
Dual averaging is used to adapt the step size $\epsilon$ and mass matrix $M$.
More importantly, the NUTS algorithm makes an optimal choice for the number of steps per iteration ($L$), which depends on the local position of the particle in parameter space.
If the particle is in a relatively flat (low) region, then a higher $L$ is optimal to move away from this uninteresting area towards regions of higher density.
If it is in a region with more curvature, choosing a high value for $L$ can be inefficient because the particle may start to re-explore parts of space already visited; it may execute "U-Turns"
NUTS is the default sampler for packages such as Stan, PyMC3 and Pyro, so including this sampler in Pints is useful for comparison with these libraries.
However, it must be noted that the exact implementation will differ slightly between each library, so care must be taken when interpreting results.
Our goal with the Pints implementation was to reproduce the Stan version of NUTS as best we could.
First, we set up a problem using the logistic growth toy problem.
End of explanation
"""
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, noise)
# Create a uniform prior over the parameters
log_prior = pints.UniformLogPrior(
[0.01, 450],
[0.02, 560]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
xs = [
real_parameters * 1.01,
real_parameters * 0.9,
real_parameters * 1.1,
]
# Create mcmc routine
nuts_mcmc = pints.MCMCController(log_posterior, len(xs), xs, method=pints.NoUTurnMCMC)
nuts_mcmc.set_max_iterations(1500)
# Set up modest logging
nuts_mcmc.set_log_to_screen(True)
nuts_mcmc.set_log_interval(100)
# Run!
print('Running...')
nuts_chains = nuts_mcmc.run()
print('Done!')
# Discard warm-up
nuts_chains = nuts_chains[:, 400:]
pints.plot.trace(nuts_chains)
plt.show()
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat(nuts_chains))
"""
Explanation: Now we set up and run a sampling routine using No-U-Turn MCMC. In the initialisation of the No-U-Turn algorithm the sampler tries to find a reasonable value for epsilon, the leapfrog step size, which results in many overflows seen from evaluating the logistic model.
End of explanation
"""
# Create mcmc routine
h_mcmc = pints.MCMCController(log_posterior, len(xs), xs, method=pints.HamiltonianMCMC)
h_mcmc.set_max_iterations(1500)
# Set up modest logging
h_mcmc.set_log_to_screen(True)
h_mcmc.set_log_interval(100)
# Run!
print('Running...')
h_chains = h_mcmc.run()
print('Done!')
# Discard warm up
h_chains = h_chains[:, 400:]
pints.plot.trace(h_chains)
plt.show()
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat(h_chains))
"""
Explanation: We can do the same using Hamiltonian MCMC.
Note that this implementation of HMC does not use dual-averaging for the step size, nor does it use an adaptive mass matrix like the NUTS implementation, so we expect HMC to be much less efficient than the tuned NUTS sampler.
End of explanation
"""
# Check convergence using rhat criterion
print('NUTS Minimum ESS:')
print(np.min([pints.effective_sample_size(nuts_chains[i,:,:]) for i in range(nuts_chains.shape[0])],axis=0))
print('\nHamiltonian Minimum ESS:')
print(np.min([pints.effective_sample_size(h_chains[i,:,:]) for i in range(h_chains.shape[0])],axis=0))
"""
Explanation: Both samplers converge according to the R-hat measure, and both sets of chains are clearly mixing well and sampling the posterior correctly.
However, the adaptive stepping in the NUTS sampler results in an order of magnitude less function evaluations.
We can check the effective sample size (ESS) for both methods as well, which shows that NUTS also gives a slightly more consistent ESS across the two parameters.
End of explanation
"""
|
srnas/barnaba
|
examples/example_09_cluster.ipynb
|
gpl-3.0
|
import glob
import barnaba as bb
import numpy as np
flist = glob.glob("snippet/*.pdb")
if(len(flist)==0):
print("# You need to run the example example8_snippet.ipynb")
exit()
# calculate G-VECTORS for all files
gvecs = []
for f in flist:
gvec,seq = bb.dump_gvec(f)
assert len(seq)==4
gvecs.extend(gvec)
"""
Explanation: Clustering RNA structures
We start by clustering the structures obtained from the previous example "example_08_snippet.ipynb", where we extracted all fragments with sequence GNRA from the PDB of the large ribosomal subunit.
First, we calculate the g-vectors for all PDB files
End of explanation
"""
gvecs = np.array(gvecs)
gvecs = gvecs.reshape(149,-1)
print(gvecs.shape)
"""
Explanation: Then, we reshape the array so that it has dimension $(N, n \ast 4 \ast 4)$, where N is the number of frames and n is the number of nucleotides
End of explanation
"""
import barnaba.cluster as cc
# calculate PCA
v,w = cc.pca(gvecs,nevecs=3)
print("# Cumulative explained variance of component: 1=%5.1f 2:=%5.1f 3=%5.1f" % (v[0]*100,v[1]*100,v[2]*100))
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
plt.scatter(w[:,0],w[:,1])
plt.xlabel("PC1")
plt.ylabel("PC2")
"""
Explanation: C. We project the data using a simple principal component analysis on the g-vectors
End of explanation
"""
new_labels, center_idx = cc.dbscan(gvecs,range(gvecs.shape[0]),eps=0.35,min_samples=8)
"""
Explanation: D. We make use of DBSCAN in sklearn to perform clustering. The function cc.dbscan takes four required arguments and one optional argument:
i. the list of G-vectors gvec
ii. the list of labels for each point
iii. the eps value
iv. min_samples
v. (optional) the weight of the samples, for non-uniform clustering
The function outputs some information on the clustering: the number of clusters, the number of samples assigned to clusters (i.e. not labelled as noise), and the silhouette score.
For each cluster it reports the size, the maximum eRMSD distance between samples in a cluster (IC=intra-cluster), the median intra-cluster eRMSD, the maximum and median distance from the centroid.
End of explanation
"""
cp = sns.color_palette("hls",len(center_idx)+1)
colors = [cp[j-1] if(j!=0) else (0.77,0.77,0.77) for j in new_labels]
size = [40 if(j!=0) else 10 for j in new_labels]
#do scatterplot
plt.scatter(w[:,0],w[:,1],s=size,c=colors)
for i,k in enumerate(center_idx):
plt.text(w[k,0],w[k,1],str(i),ha='center',va='center',fontsize=25)
"""
Explanation: We can now color the PCA projection according to the different clusters and display each centroid as a label:
End of explanation
"""
import py3Dmol
cluster_0 = open(flist[center_idx[0]],'r').read()
cluster_1 = open(flist[center_idx[1]],'r').read()
cluster_2 = open(flist[center_idx[2]],'r').read()
cluster_3 = open(flist[center_idx[3]],'r').read()
p = py3Dmol.view(width=900,height=600,viewergrid=(2,2))
#p = py3Dmol.view(width=900,height=600)
#p.addModel(query_s,'pdb')
p.addModel(cluster_0,'pdb',viewer=(0,0))
p.addModel(cluster_1,'pdb',viewer=(0,1))
p.addModel(cluster_2,'pdb',viewer=(1,0))
p.addModel(cluster_3,'pdb',viewer=(1,1))
#p.addModel(hit_0,'pdb',viewer=(0,1))
p.setStyle({'stick':{}})
p.setBackgroundColor('0xeeeeee')
p.zoomTo()
p.show()
"""
Explanation: E. We finally visualise the 4 centroids:
End of explanation
"""